diff --git a/translated/talk/20150823 How learning data structures and algorithms make you a better developer.md b/published/20150823 How learning data structures and algorithms make you a better developer.md similarity index 51% rename from translated/talk/20150823 How learning data structures and algorithms make you a better developer.md rename to published/20150823 How learning data structures and algorithms make you a better developer.md index 8125229719..27973b2f42 100644 --- a/translated/talk/20150823 How learning data structures and algorithms make you a better developer.md +++ b/published/20150823 How learning data structures and algorithms make you a better developer.md @@ -1,41 +1,41 @@ -学习数据结构与算法分析如何帮助您成为更优秀的开发人员? +学习数据结构与算法分析如何帮助您成为更优秀的开发人员 ================================================================================ -> "相较于其它方式,我一直热衷于推崇围绕数据设计代码,我想这也是Git能够如此成功的一大原因[…]在我看来,区别程序员优劣的一大标准就在于他是否认为自己设计的代码或数据结构更为重要。" +> "相较于其它方式,我一直热衷于推崇围绕数据设计代码,我想这也是Git能够如此成功的一大原因[…]在我看来,区别程序员优劣的一大标准就在于他是否认为自己设计的代码还是数据结构更为重要。" -- Linus Torvalds --- -> "优秀的数据结构与简陋的代码组合远比倒过来的组合方式更好。" +> "优秀的数据结构与简陋的代码组合远比反之的组合更好。" -- Eric S. Raymond, The Cathedral and The Bazaar 学习数据结构与算法分析会让您成为一名出色的程序员。 -**数据结构与算法分析是一种解决问题的思维模式** 在您的个人知识库中,数据结构与算法分析的相关知识储备越多,您将具备应对并解决越多各类繁杂问题的能力。掌握了这种思维模式,您还将有能力针对新问题提出更多以前想不到的漂亮的解决方案。 +**数据结构与算法分析是一种解决问题的思维模式。** 在您的个人知识库中,数据结构与算法分析的相关知识储备越多,您将越多具备应对并解决各类繁杂问题的能力。掌握了这种思维模式,您还将有能力针对新问题提出更多以前想不到的漂亮的解决方案。 -您将***更深入地***了解,计算机如何完成各项操作。无论您是否是直接使用给定的算法,它都影响着您作出的各种技术决定。从计算机操作系统的内存分配到RDBMS的内在工作机制,以及网络堆栈如何实现将数据从地球的一个角落发送至另一个角落这些大大小小的工作的完成,都离不开基础的数据结构与算法,理解并掌握它将会让您更了解计算机的运作机理。 +您将*更深入地*了解,计算机如何完成各项操作。无论您是否是直接使用给定的算法,它都影响着您作出的各种技术决定。从计算机操作系统的内存分配到RDBMS的内在工作机制,以及网络协议如何实现将数据从地球的一个角落发送至另一个角落,这些大大小小的工作的完成,都离不开基础的数据结构与算法,理解并掌握它将会让您更了解计算机的运作机理。 -对算法广泛深入的学习能让为您应对大体系的问题储备解决方案。之前建模困难时遇到的问题如今通常都能融合进经典的数据结构中得到很好地解决。即使是最基础的数据结构,只要对它进行足够深入的钻研,您将会发现在每天的编程任务中都能经常用到这些知识。 +对算法广泛深入的学习能为您储备解决方案来应对大体系的问题。之前建模困难时遇到的问题如今通常都能融合进经典的数据结构中得到很好地解决。即使是最基础的数据结构,只要对它进行足够深入的钻研,您将会发现在每天的编程任务中都能经常用到这些知识。 -有了这种思维模式,在遇到磨棱两可的问题时,您会具备想出新的解决方案的能力。即使最初并没有打算用数据结构与算法解决相应问题的情况,当真正用它们解决这些问题时您会发现它们将非常有用。要意识到这一点,您至少要对数据结构与算法分析的基础知识有深入直观的认识。 +有了这种思维模式,在遇到磨棱两可的问题时,您将能够想出新奇的解决方案。即使最初并没有打算用数据结构与算法解决相应问题的情况,当真正用它们解决这些问题时您会发现它们将非常有用。要意识到这一点,您至少要对数据结构与算法分析的基础知识有深入直观的认识。 理论认识就讲到这里,让我们一起看看下面几个例子。 ###最短路径问题### -我们想要开发一个计算从一个国际机场出发到另一个国际机场的最短距离的软件。假设我们受限于以下路线: +我们想要开发一个软件来计算从一个国际机场出发到另一个国际机场的最短距离。假设我们受限于以下路线: ![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg) -从这张画出机场各自之间的距离以及目的地的图中,我们如何才能找到最短距离,比方说从赫尔辛基到伦敦?**Dijkstra算法**是能让我们在最短的时间得到正确答案的适用算法。 +从这张画出机场各自之间的距离以及目的地的图中,我们如何才能找到最短距离,比方说从赫尔辛基到伦敦?**[Dijkstra算法][3]**是能让我们在最短的时间得到正确答案的适用算法。 -在所有可能的解法中,如果您曾经遇到过这类问题,知道可以用Dijkstra算法求解,您大可不必从零开始实现它,只需***知道***该算法能指向固定的代码库帮助您解决相关的实现问题。 +在所有可能的解法中,如果您曾经遇到过这类问题,知道可以用Dijkstra算法求解,您大可不必从零开始实现它,只需***知道***该算法的代码库能帮助您解决相关的实现问题。 -实现了该算法,您将深入理解一项著名的重要图论算法。您会发现实际上该算法太集成化,因此名为A*的扩展包经常会代替该算法使用。这个算法应用广泛,从机器人指引的功能实现到TCP数据包路由,以及GPS寻径问题都能应用到这个算法。 +如果你深入到该算法的实现中,您将深入理解一项著名的重要图论算法。您会发现实际上该算法比较消耗资源,因此名为[A*][4]的扩展经常用于代替该算法。这个算法应用广泛,从机器人寻路的功能实现到TCP数据包路由,以及GPS寻径问题都能应用到这个算法。 ###先后排序问题### -您想要在开放式在线课程平台上(如Udemy或Khan学院)学习某课程,有些课程之间彼此依赖。例如,用户学习牛顿力学机制课程前必须先修微积分课程,课程之间可以有多种依赖关系。用YAML表述举例如下: +您想要在开放式在线课程(MOOC,Massive Open Online Courses)平台上(如Udemy或Khan学院)学习某课程,有些课程之间彼此依赖。例如,用户学习牛顿力学(Newtonian Mechanics)课程前必须先修微积分(Calculus)课程,课程之间可以有多种依赖关系。用YAML表述举例如下: # Mapping from course name to requirements # @@ -54,16 +54,16 @@ astrophysics: [radioactivity, calculus] 
quantumn_mechanics: [atomic_physics, radioactivity, calculus] -鉴于以上这些依赖关系,作为一名用户,我希望系统能帮我列出必修课列表,让我在之后可以选择任意一门课程学习。如果我选择了`微积分`课程,我希望系统能返回以下列表: +鉴于以上这些依赖关系,作为一名用户,我希望系统能帮我列出必修课列表,让我在之后可以选择任意一门课程学习。如果我选择了微积分(calculus)课程,我希望系统能返回以下列表: arithmetic -> algebra -> trigonometry -> calculus 这里有两个潜在的重要约束条件: - 返回的必修课列表中,每门课都与下一门课存在依赖关系 - - 必修课列表中不能有重复项 + - 我们不希望列表中有任何重复课程 -这是解决数据间依赖关系的例子,解决该问题的排序算法称作拓扑排序算法(tsort)。它适用于解决上述我们用YAML列出的依赖关系图的情况,以下是在图中显示的相关结果(其中箭头代表`需要先修的课程`): +这是解决数据间依赖关系的例子,解决该问题的排序算法称作拓扑排序算法(tsort,topological sort)。它适用于解决上述我们用YAML列出的依赖关系图的情况,以下是在图中显示的相关结果(其中箭头代表`需要先修的课程`): ![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg) @@ -79,16 +79,17 @@ 这符合我们上面描述的需求,用户只需选出`radioactivity`,就能得到在此之前所有必修课程的有序列表。 -在运用该排序算法之前,我们甚至不需要深入了解算法的实现细节。一般来说,选择不同的编程语言在其标准库中都会有相应的算法实现。即使最坏的情况,Unix也会默认安装`tsort`程序,运行`tsort`程序,您就可以实现该算法。 +在运用该排序算法之前,我们甚至不需要深入了解算法的实现细节。一般来说,你可能选择的各种编程语言在其标准库中都会有相应的算法实现。即使最坏的情况,Unix也会默认安装`tsort`程序,运行`man tsort` 来了解该程序。 ###其它拓扑排序适用场合### - - **工具** 使用诸如`make`的工具您可以声明任务之间的依赖关系,这里拓扑排序算法将从底层实现具有依赖关系的任务顺序执行的功能。 - - **有`require`指令的编程语言**,适用于要运行当前文件需先运行另一个文件的情况。这里拓扑排序用于识别文件运行顺序以保证每个文件只加载一次,且满足所有文件间的依赖关系要求。 - - **包含甘特图的项目管理工具**.甘特图能直观列出给定任务的所有依赖关系,在这些依赖关系之上能提供给用户任务完成的预估时间。我不常用到甘特图,但这些绘制甘特图的工具很可能会用到拓扑排序算法。 + - **类似`make`的工具** 可以让您声明任务之间的依赖关系,这里拓扑排序算法将从底层实现具有依赖关系的任务顺序执行的功能。 + - **具有`require`指令的编程语言**适用于要运行当前文件需先运行另一个文件的情况。这里拓扑排序用于识别文件运行顺序以保证每个文件只加载一次,且满足所有文件间的依赖关系要求。 + - **带有甘特图的项目管理工具**。甘特图能直观列出给定任务的所有依赖关系,在这些依赖关系之上能提供给用户任务完成的预估时间。我不常用到甘特图,但这些绘制甘特图的工具很可能会用到拓扑排序算法。 ###霍夫曼编码实现数据压缩### -[霍夫曼编码](http://en.wikipedia.org/wiki/Huffman_coding)是一种用于无损数据压缩的编码算法。它的工作原理是先分析要压缩的数据,再为每个字符创建一个二进制编码。字符出现的越频繁,编码赋值越小。因此在一个数据集中`e`可能会编码为`111`,而`x`会编码为`10010`。创建了这种编码模式,就可以串联无定界符,也能正确地进行解码。 + +[霍夫曼编码][5](Huffman coding)是一种用于无损数据压缩的编码算法。它的工作原理是先分析要压缩的数据,再为每个字符创建一个二进制编码。字符出现的越频繁,编码赋值越小。因此在一个数据集中`e`可能会编码为`111`,而`x`会编码为`10010`。创建了这种编码模式,就可以串联无定界符,也能正确地进行解码。 在gzip中使用的DEFLATE算法就结合了霍夫曼编码与LZ77一同用于实现数据压缩功能。gzip应用领域很广,特别适用于文件压缩(以`.gz`为扩展名的文件)以及用于数据传输中的http请求与应答。 @@ -96,10 +97,11 @@ - 您会理解为什么较大的压缩文件会获得较好的整体压缩效果(如压缩的越多,压缩率也越高)。这也是SPDY协议得以推崇的原因之一:在复杂的HTTP请求/响应过程数据有更好的压缩效果。 - 您会了解数据传输过程中如果想要压缩JavaScript/CSS文件,运行压缩软件是完全没有意义的。PNG文件也是类似,因为它们已经使用DEFLATE算法完成了压缩。 - - 如果您试图强行破译加密的信息,您可能会发现重复数据压缩质量越好,给定的密文单位bit的数据压缩将帮助您确定相关的[分组密码模式](http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation). 
+ - 如果您试图强行破译加密的信息,您可能会发现由于重复数据压缩质量更好,密文给定位的数据压缩率将帮助您确定相关的[分组密码工作模式][6](block cipher mode of operation.)。 ###下一步选择学习什么是困难的### -作为一名程序员应当做好持续学习的准备。为成为一名web开发人员,您需要了解标记语言以及Ruby/Python,正则表达式,SQL,JavaScript等高级编程语言,还需要了解HTTP的工作原理,如何运行UNIX终端以及面向对象的编程艺术。您很难有效地预览到未来的职业全景,因此选择下一步要学习哪些知识是困难的。 + +作为一名程序员应当做好持续学习的准备。为了成为一名web开发人员,您需要了解标记语言以及Ruby/Python、正则表达式、SQL、JavaScript等高级编程语言,还需要了解HTTP的工作原理,如何运行UNIX终端以及面向对象的编程艺术。您很难有效地预览到未来的职业全景,因此选择下一步要学习哪些知识是困难的。 我没有快速学习的能力,因此我不得不在时间花费上非常谨慎。我希望尽可能地学习到有持久生命力的技能,即不会在几年内就过时的技术。这意味着我也会犹豫这周是要学习JavaScript框架还是那些新的编程语言。 @@ -111,13 +113,14 @@ via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithm 作者:[Happy Bear][a] 译者:[icybreaker](https://github.com/icybreaker) -校对:[校对者ID](https://github.com/校对者ID) +校对:[Caroline](https://github.com/carolinewuyan) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.happybearsoftware.com/ [1]:http://en.wikipedia.org/wiki/Huffman_coding [2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation - - - +[3]:http://en.wikipedia.org/wiki/Dijkstra's_algorithm +[4]:http://en.wikipedia.org/wiki/A*_search_algorithm +[5]:http://en.wikipedia.org/wiki/Huffman_coding +[6]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation diff --git a/translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md b/published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md similarity index 87% rename from translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md rename to published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md index 4e9dce31ca..564fb33e1e 100644 --- a/translated/tech/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md +++ b/published/20150914 Display Awesome Linux Logo With Basic Hardware Info Using screenfetch and linux_logo Tools.md @@ -1,12 +1,12 @@ -用 screenfetch 和 linux_logo 工具显示带有酷炫 Linux 标志的基本硬件信息 +用 screenfetch 和 linux_logo 显示带有酷炫 Linux 标志的基本硬件信息 ================================================================================ 想在屏幕上显示出你的 Linux 发行版的酷炫标志和基本硬件信息吗?不用找了,来试试超赞的 screenfetch 和 linux_logo 工具。 -### 来见见 screenfetch 吧 ### +### 来看看 screenfetch 吧 ### screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚本。它可以在 Linux,OS X,FreeBSD 以及其它的许多类Unix系统上使用。来自 man 手册的说明: -> 这个方便的 Bash 脚本可以用来生成那些漂亮的终端主题信息和 ASCII 发行版标志,就像如今你在别人的截屏里看到的那样。它会自动检测你的发行版并显示 ASCII 版的发行版标志,并且在右边显示一些有价值的信息。 +> 这个方便的 Bash 脚本可以用来生成那些漂亮的终端主题信息和用 ASCII 构成的发行版标志,就像如今你在别人的截屏里看到的那样。它会自动检测你的发行版并显示 ASCII 版的发行版标志,并且在右边显示一些有价值的信息。 #### 在 Linux 上安装 screenfetch #### @@ -16,7 +16,7 @@ screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚 ![](http://s0.cyberciti.org/uploads/cms/2015/09/ubuntu-debian-linux-apt-get-install-screenfetch.jpg) -图一:用 apt-get 安装 screenfetch +*图一:用 apt-get 安装 screenfetch* #### 在 Mac OS X 上安装 screenfetch #### @@ -26,7 +26,7 @@ screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚 ![](http://s0.cyberciti.org/uploads/cms/2015/09/apple-mac-osx-install-screenfetch.jpg) -图二:用 brew 命令安装 screenfetch +*图二:用 brew 命令安装 screenfetch* #### 在 FreeBSD 上安装 screenfetch #### @@ -36,7 +36,7 @@ screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚 ![](http://s0.cyberciti.org/uploads/cms/2015/09/freebsd-install-pkg-screenfetch.jpg) -图三:在 FreeBSD 用 pkg 安装 screenfetch +*图三:在 FreeBSD 用 pkg 安装 screenfetch* #### 在 Fedora 上安装 screenfetch #### @@ -46,7 +46,7 @@ screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚 
![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-dnf-install-screenfetch.jpg) -图四:在 Fedora 22 用 dnf 安装 screenfetch +*图四:在 Fedora 22 用 dnf 安装 screenfetch* #### 我该怎么使用 screefetch 工具? #### @@ -56,21 +56,21 @@ screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚 这是不同系统的输出: -![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-screenfetch-300x193.jpg) +![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-screenfetch.jpg) -Fedora 上的 Screenfetch +*Fedora 上的 Screenfetch* -![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-osx-300x213.jpg) +![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-osx.jpg) -OS X 上的 Screenfetch +*OS X 上的 Screenfetch* -![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-freebsd-300x143.jpg) +![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-freebsd.jpg) -FreeBSD 上的 Screenfetch +*FreeBSD 上的 Screenfetch* -![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-ubutnu-screenfetch-outputs-300x279.jpg) +![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-ubutnu-screenfetch-outputs.jpg) -Debian 上的 Screenfetch +*Debian 上的 Screenfetch* #### 获取截屏 #### @@ -134,7 +134,7 @@ linux_logo 程序生成一个彩色的 ANSI 版企鹅图片,还包含一些来 ![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-linux_logo.jpg) -运行 linux_logo +*运行 linux_logo* #### 等等,还有更多! #### @@ -196,7 +196,7 @@ linux_logo 程序生成一个彩色的 ANSI 版企鹅图片,还包含一些来 ![](http://s0.cyberciti.org/uploads/cms/2015/09/linux-logo-fun.gif) -动图1: linux_logo 和 bash 循环,既有趣又能发朋友圈耍酷 +*动图1: linux_logo 和 bash 循环,既有趣又能发朋友圈耍酷* ### 获取帮助 ### @@ -216,7 +216,7 @@ via: http://www.cyberciti.biz/hardware/howto-display-linux-logo-in-bash-terminal 作者:Vivek Gite 译者:[alim0x](https://github.com/alim0x) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md b/published/201510/20150202 How to filter BGP routes in Quagga BGP router.md similarity index 93% rename from translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md rename to published/201510/20150202 How to filter BGP routes in Quagga BGP router.md index 53ce40cac6..17bf6fbbcc 100644 --- a/translated/tech/20150202 How to filter BGP routes in Quagga BGP router.md +++ b/published/201510/20150202 How to filter BGP routes in Quagga BGP router.md @@ -1,6 +1,6 @@ 如何使用 Quagga BGP(边界网关协议)路由器来过滤 BGP 路由 ================================================================================ -在[之前的文章][1]中,我们介绍了如何使用 Quagga 将 CentOS 服务器变成一个 BGP 路由器,也介绍了 BGP 对等体和前缀交换设置。在本教程中,我们将重点放在如何使用**前缀列表**和**路由映射**来分别控制数据注入和数据输出。 +在[之前的文章][1]中,我们介绍了如何使用 Quagga 将 CentOS 服务器变成一个 BGP 路由器,也介绍了 BGP 对等体和前缀交换设置。在本教程中,我们将重点放在如何使用**前缀列表(prefix-list)**和**路由映射(route-map)**来分别控制数据注入和数据输出。 之前的文章已经说过,BGP 的路由判定是基于前缀的收取和前缀的广播。为避免错误的路由,你需要使用一些过滤机制来控制这些前缀的收发。举个例子,如果你的一个 BGP 邻居开始广播一个本不属于它们的前缀,而你也将错就错地接收了这些不正常前缀,并且也将它转发到网络上,这个转发过程会不断进行下去,永不停止(所谓的“黑洞”就这样产生了)。所以确保这样的前缀不会被收到,或者不会转发到任何网络,要达到这个目的,你可以使用前缀列表和路由映射。前者是基于前缀的过滤机制,后者是更为常用的基于前缀的策略,可用于精调过滤机制。 @@ -36,15 +36,15 @@ 上面的命令创建了名为“DEMO-FRFX”的前缀列表,只允许存在 192.168.0.0/23 这个前缀。 -前缀列表的另一个牛X功能是支持子网掩码区间,请看下面的例子: +前缀列表的另一个强大功能是支持子网掩码区间,请看下面的例子: ip prefix-list DEMO-PRFX permit 192.168.0.0/23 le 24 -这个命令创建的前缀列表包含在 192.168.0.0/23 和 /24 之间的前缀,分别是 192.168.0.0/23, 192.168.0.0/24 and 192.168.1.0/24。运算符“le”表示小于等于,你也可以使用“ge”表示大于等于。 +这个命令创建的前缀列表包含在 192.168.0.0/23 和 /24 之间的前缀,分别是 192.168.0.0/23, 192.168.0.0/24 和 192.168.1.0/24。运算符“le”表示小于等于,你也可以使用“ge”表示大于等于。 一个前缀列表语句可以有多个允许或拒绝操作。每个语句都自动或手动地分配有一个序列号。 
-如果存在多个前缀列表语句,则这些语句会按序列号顺序被依次执行。在配置前缀列表的时候,我们需要注意在所有前缀列表语句后面的**隐性拒绝**属性,就是说凡是不被明显允许的,都会被拒绝。 +如果存在多个前缀列表语句,则这些语句会按序列号顺序被依次执行。在配置前缀列表的时候,我们需要注意在所有前缀列表语句之后是**隐性拒绝**语句,就是说凡是不被明显允许的,都会被拒绝。 如果要设置成允许所有前缀,前缀列表语句设置如下: @@ -81,7 +81,7 @@ probability Match portion of routes defined by percentage value tag Match tag of route -如你所见,路由映射可以匹配很多属性,本教程需要匹配一个前缀。 +如你所见,路由映射可以匹配很多属性,在本教程中匹配的是前缀。 route-map DEMO-RMAP permit 10 match ip address prefix-list DEMO-PRFX @@ -163,7 +163,7 @@ 可以看到,router-A 有4条路由前缀到达 router-B,而 router-B 只接收3条。查看一下范围,我们就能知道只有被路由映射允许的前缀才能在 router-B 上显示出来,其他的前缀一概丢弃。 -**小提示**:如果接收前缀内容没有刷新,试试重置下 BGP 会话,使用这个命令:clear ip bgp neighbor-IP。本教程中命令如下: +**小提示**:如果接收前缀内容没有刷新,试试重置下 BGP 会话,使用这个命令:`clear ip bgp neighbor-IP`。本教程中命令如下: clear ip bgp 192.168.1.1 @@ -193,9 +193,9 @@ via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html 作者:[Sarmed Rahman][a] 译者:[bazz2](https://github.com/bazz2) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html +[1]:https://linux.cn/article-4609-1.html diff --git a/published/20150716 Interview--Larry Wall.md b/published/201510/20150716 Interview--Larry Wall.md similarity index 100% rename from published/20150716 Interview--Larry Wall.md rename to published/201510/20150716 Interview--Larry Wall.md diff --git a/published/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md b/published/201510/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md similarity index 100% rename from published/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md rename to published/201510/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md diff --git a/published/20150906 Installing NGINX and NGINX Plus With Ansible.md b/published/201510/20150906 Installing NGINX and NGINX Plus With Ansible.md similarity index 100% rename from published/20150906 Installing NGINX and NGINX Plus With Ansible.md rename to published/201510/20150906 Installing NGINX and NGINX Plus With Ansible.md diff --git a/published/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md b/published/201510/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md similarity index 100% rename from published/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md rename to published/201510/20150908 How to Run ISO Files Directly From the HDD with GRUB2.md diff --git a/published/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md b/published/201510/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md similarity index 100% rename from published/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md rename to published/201510/20150911 5 Useful Commands to Manage File Types and System Time in Linu--Part 3.md diff --git a/published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md b/published/201510/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md similarity index 100% rename from published/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md rename to published/201510/20150914 How to Setup Node JS v4.0.0 on Ubuntu 14.04 or 15.04.md diff --git a/published/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md b/published/201510/20150914 Linux FAQs with Answers--How to check weather forecasts from the 
command line on Linux.md similarity index 100% rename from published/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md rename to published/201510/20150914 Linux FAQs with Answers--How to check weather forecasts from the command line on Linux.md diff --git a/published/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md b/published/201510/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md similarity index 100% rename from published/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md rename to published/201510/20150917 TERMINATOR 0.98 INSTALL IN UBUNTU AND LINUX MINT.md diff --git a/published/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md b/published/201510/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md similarity index 100% rename from published/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md rename to published/201510/20150918 How To Add And Remove Bookmarks In Ubuntu Beginner Tip.md diff --git a/published/20150918 Install Justniffer In Ubuntu 15.04.md b/published/201510/20150918 Install Justniffer In Ubuntu 15.04.md similarity index 100% rename from published/20150918 Install Justniffer In Ubuntu 15.04.md rename to published/201510/20150918 Install Justniffer In Ubuntu 15.04.md diff --git a/published/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md b/published/201510/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md similarity index 100% rename from published/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md rename to published/201510/20150921 How to Setup IonCube Loaders on Ubuntu 14.04 or 15.04.md diff --git a/published/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md b/published/201510/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md similarity index 100% rename from published/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md rename to published/201510/20150921 Meet The New Ubuntu 15.10 Default Wallpaper.md diff --git a/translated/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md b/published/201510/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md similarity index 63% rename from translated/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md rename to published/201510/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md index e87cf21d8c..f09b4a5473 100644 --- a/translated/talk/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md +++ b/published/201510/20150921 Red Hat CEO Optimistic on OpenStack Revenue Opportunity.md @@ -1,28 +1,32 @@ 红帽 CEO 对 OpenStack 收益表示乐观 ================================================================================ -得益于围绕 Linux 和云不断发展的平台和基础设施技术,红帽正在持续快速发展。红帽宣布在九月二十一日完成了 2016 财年第二季度的财务业绩,再次超过预期。 +得益于围绕 Linux 和云不断发展的平台与基础设施技术,红帽正在持续快速发展。红帽宣布在九月二十一日完成了 2016 财年第二季度的财务业绩,再次超过预期。 ![](http://www.serverwatch.com/imagesvr_ce/1212/icon-redhatcloud-r.jpg) -这一季度,红帽的收入为 5 亿 4 百万美元,和去年同比增长 13%。网络输入为 5 千 1 百万美元,超过 2015 年第二季度的 4 千 7 百万美元。展望未来,红帽为下一季度和全年提供了积极的目标。对于第三季度,红帽希望指导收益能在 5亿1千9百万美元和5亿2千3百万美元之间,和去年同期相比增长 15%。 +这一季度,红帽的收入为 5 亿 4 百万美元,和去年同比增长 13%。净收入为 5 千 1 百万美元,超过了 2015 财年第二季度的 4 千 7 百万美元。 + +展望未来,红帽为下一季度和全年提供了积极的目标。对于第三季度,红帽希望指导收益能在 5亿1千9百万美元和5亿2千3百万美元之间,和去年同期相比增长 15%。 对于 2016 财年,红帽的全年指导目标是 20亿4千4百万美元,和去年相比增长 14%。 -红帽 CFO Frank Calderoni 在电话会议上指出,红帽最高的 30 个订单大概甚至超过 1 百万美元。其中有 4 个订单超过 5百万美元,还有一个超过1千万美元。从近几年的经验来看,红帽产品的交叉销售非常成功,全部订单中有超过 65% 的订单包括了一个或多个红帽应用和新兴技术产品组件。 +红帽 CFO Frank 
Calderoni 在电话会议上指出,红帽最高的 30 个订单差不多甚至超过了 1 百万美元。其中有 4 个订单超过 5 百万美元,还有一个超过 1 千万美元。 + +从近几年的经验来看,红帽产品的交叉销售非常成功,全部订单中有超过 65% 的订单包括了一个或多个红帽应用和新兴技术产品组件。 Calderoni 说 “我们希望这些技术,例如中间件、RHEL OpenStack 平台、OpenShift、云管理和存储能持续推动收益增长。” ### OpenStack ### -在电话会议中,红帽 CEO Jim Whitehurst 多次问到 OpenStack 的收入前景。Whitehurst 说得益于安装程序的改进,最近发布的 Red Hat OpenStack Platform 7.0 向前垮了一大步。 +在电话会议中,红帽 CEO Jim Whitehurst 多次问到 OpenStack 的预期收入。Whitehurst 说得益于安装程序的改进,最近发布的 Red Hat OpenStack Platform 7.0 向前垮了一大步。 Whitehurst 提到:“在识别硬件和使用方面它做的很好,当然,这也意味着在硬件识别并正确使用它们方便还有很多工作要做。” Whitehurst 说他已经开始注意到很多的生产应用程序开始迁移到 OpenStack 云上来。他也警告说在产业化方面迁移到 OpenStack 大部分只是尝鲜,还并没有成为主流。 -对于竞争对手, Whitehurst 尤其提到了微软、惠普和 Mirantis。在他看来,很多组织仍然会使用多种操作系统,如果他们其中一部分使用了微软,他们更倾向于开源方案作为替代选项。Whitehurst 说在云方面他还没有看到太多和惠普面对面的竞争,但和 Mirantis 则确实如此。 +对于竞争对手, Whitehurst 尤其提到了微软、惠普和 Mirantis。在他看来,很多组织仍然会使用多种操作系统,如果他们部分使用了微软产品,会更倾向于开源方案作为替代选项。Whitehurst 说在云方面他还没有看到太多和惠普面对面的竞争,但和 Mirantis 则确实如此。 -Whitehurst 说 “我们也有几次胜利,他们从 Mirantis 转到了 RHEL。” +Whitehurst 说 “我们也有几次胜利,客户从 Mirantis 转到了 RHEL。” -------------------------------------------------------------------------------- @@ -30,7 +34,7 @@ via: http://www.serverwatch.com/server-news/red-hat-ceo-optimistic-on-openstack- 作者:[Sean Michael Kerner][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md b/published/201510/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md similarity index 83% rename from translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md rename to published/201510/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md index 921b5d958f..3d27b772d8 100644 --- a/translated/tech/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md +++ b/published/201510/20150923 How To Upgrade From Oracle 11g To Oracle 12c.md @@ -1,12 +1,8 @@ 如何将 Oracle 11g 升级到 Orcale 12c ================================================================================ -大家好。 +大家好。今天我们来学习一下如何将 Oracle 11g 升级到 Oracle 12c。开始吧。 -今天我们来学习一下如何将 Oracle 11g 升级到 Oracle 12c。开始吧。 - -在此,我使用的是 CentOS 7 64 位 Linux 发行版。 - -我假设你已经在你的系统上安装了 Oracle 11g。这里我会展示一下安装 Oracle 11g 时我的操作步骤。 +在此,我使用的是 CentOS 7 64 位 Linux 发行版。我假设你已经在你的系统上安装了 Oracle 11g。这里我会展示一下安装 Oracle 11g 时我的操作步骤。 我在 Oracle 11g 上选择 “Create and configure a database”,如下图所示。 @@ -16,7 +12,7 @@ ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage2.png) -然后你输入安装 Oracle 11g 的所有路径以及密码。下面是我自己的 Oracle 11g 安装配置。确保你正确输入了 Oracle 的密码。 +然后你输入安装 Oracle 11g 的各种路径以及密码。下面是我自己的 Oracle 11g 安装配置。确保你正确输入了 Oracle 的密码。 ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage3.png) @@ -30,7 +26,7 @@ 你需要从该[链接][1]上下载两个 zip 文件。下载并解压两个文件到相同目录。文件名为 **linuxamd64_12c_database_1of2.zip** & **linuxamd64_12c_database_2of2.zip**。提取或解压完后,它会创建一个名为 database 的文件夹。 -注意:升级到 12c 之前,请确保在你的 CentOS 上已经安装了所有必须的软件包并且 path 环境变量也已经正确配置,还有其它前提条件也已经满足。 +注意:升级到 12c 之前,请确保在你的 CentOS 上已经安装了所有必须的软件包,并且所有的路径变量也已经正确配置,还有其它前提条件也已经满足。 下面是必须使用正确版本安装的一些软件包 @@ -47,13 +43,11 @@ 在因特网上搜索正确的 rpm 版本。 -你也可以用一个查询处理多个软件包,然后在输出中查找正确版本。例如: - -在终端中输入下面的命令 +你也可以用一个查询处理多个软件包,然后在输出中查找正确版本。例如,在终端中输入下面的命令: rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc -你的系统中必须安装了以下软件包(版本可能较新会旧) +你的系统中必须安装了以下软件包(版本可能或新或旧) - binutils-2.23.52.0.1-12.el7.x86_64 - compat-libcap1-1.10-3.el7.x86_64 @@ -83,11 +77,7 @@ 你也需要 unixODBC-2.3.1 或更新版本的驱动。 -我希望你安装 Oracle 11g 的时候已经在你的 CentOS 
7 上创建了名为 oracle 的用户。 - -让我们以用户 oracle 登录 CentOS。 - -以用户 oracle 登录到 CentOS 之后,在你的 CentOS上打开一个终端。 +我希望你安装 Oracle 11g 的时候已经在你的 CentOS 7 上创建了名为 oracle 的用户。让我们以用户 oracle 登录 CentOS。以用户 oracle 登录到 CentOS 之后,在你的 CentOS上打开一个终端。 使用终端更改工作目录并导航到你解压两个 zip 文件的目录。在终端中输入以下命令开始安装 12c。 @@ -119,15 +109,15 @@ ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage11.png) -第七步,像下面这样使用默认的选择继续下一步。 +对于第七步,像下面这样使用默认的选择继续下一步。 ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage12.png) -在第九步,你会看到一个类似下面这样的总结报告。 +在第九步中,你会看到一个类似下面这样的总结报告。 ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage13.png) -如果一切正常,你可以点击步骤九中的 install 开始安装,进入步骤十。 +如果一切正常,你可以点击第九步中的 install 开始安装,进入第十步。 ![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage14.png) @@ -135,7 +125,7 @@ 要有耐心,一步一步走下来最后它会告诉你成功了。否则,在谷歌上搜索做必要的操作解决问题。再一次说明,由于你可能会遇到的错误有很多,我无法在这里提供所有详细介绍。 -现在,只需要按照下面屏幕指令配置监听器 +现在,只需要按照下面屏幕指令配置监听器。 配置完监听器之后,它会启动数据库升级助手(Database Upgrade Assistant)。选择 Upgrade Oracle Database。 @@ -157,7 +147,7 @@ via: http://www.unixmen.com/upgrade-from-oracle-11g-to-oracle-12c/ 作者:[Mohammad Forhad Iftekher][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md b/published/201510/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md similarity index 100% rename from published/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md rename to published/201510/20150923 Xenlism WildFire--Minimal Icon Theme For Linux Desktop.md diff --git a/published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md b/published/201510/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md similarity index 100% rename from published/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md rename to published/201510/20150925 HTTP 2 Now Fully Supported in NGINX Plus.md diff --git a/published/20150930 Debian dropping the Linux Standard Base.md b/published/201510/20150930 Debian dropping the Linux Standard Base.md similarity index 100% rename from published/20150930 Debian dropping the Linux Standard Base.md rename to published/201510/20150930 Debian dropping the Linux Standard Base.md diff --git a/published/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md b/published/201510/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md similarity index 100% rename from published/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md rename to published/201510/20150930 Install and use Ansible (Automation Tool) in CentOS 7.md diff --git a/published/20151005 pyinfo() A good looking phpinfo-like python script.md b/published/201510/20151005 pyinfo() A good looking phpinfo-like python script.md similarity index 100% rename from published/20151005 pyinfo() A good looking phpinfo-like python script.md rename to published/201510/20151005 pyinfo() A good looking phpinfo-like python script.md diff --git a/translated/tech/20151007 How To Download Videos Using youtube-dl In Linux.md b/published/201510/20151007 How To Download Videos Using youtube-dl In Linux.md similarity index 85% rename from translated/tech/20151007 How To Download Videos Using youtube-dl In Linux.md rename to published/201510/20151007 How To Download Videos Using youtube-dl In Linux.md index 99855c9d22..4d268e4c23 100644 --- a/translated/tech/20151007 How To Download Videos Using 
youtube-dl In Linux.md +++ b/published/201510/20151007 How To Download Videos Using youtube-dl In Linux.md @@ -4,11 +4,11 @@ 我知道你已经看过[如何下载 YouTube 视频][1]。但那些工具大部分都采用图形用户界面的方式。我会向你展示如何通过终端使用 youtube-dl 下载 YouTube 视频。 -### [youtube-dl][2] ### +### youtube-dl ### -youtube-dl 是基于 Python 的命令行小工具,允许你从 YouTube.com、Dailymotion、Google Video、Photobucket、Facebook、Yahoo、Metacafe、Depositfiles 以及其它一些类似网站中下载视频。它是用 pygtk 编写的,需要 Python 解析器来运行,对平台要求并不严格。它能够在 Unix、Windows 或者 Mac OS X 系统上运行。 +[youtube-dl][2] 是基于 Python 的命令行小工具,允许你从 YouTube.com、Dailymotion、Google Video、Photobucket、Facebook、Yahoo、Metacafe、Depositfiles 以及其它一些类似网站中下载视频。它是用 pygtk 编写的,需要 Python 解析器来运行,对平台要求并不严格。它能够在 Unix、Windows 或者 Mac OS X 系统上运行。 -youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死了(例如通过 Ctrl-C 或者丢失网络连接),你只需要使用相同的 YouTube 视频 URL 再次运行它。只要当前目录中有下载的部分文件,它就会自动恢复没有完成的下载,也就是说,你不需要[下载][3]管理器来恢复下载。 +youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死了(例如通过 Ctrl-C 或者丢失网络连接),你只需要使用相同的 YouTube 视频 URL 再次运行它。只要当前目录中有下载的部分文件,它就会自动恢复没有完成的下载,也就是说,你不需要[下载管理器][3]来恢复下载。 #### 安装 youtube-dl #### @@ -16,7 +16,7 @@ youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死 sudo apt-get install youtube-dl -对于任何 Linux 发行版,你都可以通过下面的命令行接口在你的系统上快速安装 youtube-dl: +对于任何 Linux 发行版,你都可以通过下面的命令行在你的系统上快速安装 youtube-dl: sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O/usr/local/bin/youtube-dl @@ -83,11 +83,11 @@ via: http://itsfoss.com/download-youtube-linux/ 作者:[alimiracle][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://itsfoss.com/author/ali/ [1]:http://itsfoss.com/download-youtube-videos-ubuntu/ [2]:https://rg3.github.io/youtube-dl/ -[3]:http://itsfoss.com/xtreme-download-manager-install/ \ No newline at end of file +[3]:https://linux.cn/article-6209-1.html \ No newline at end of file diff --git a/published/20151007 Open Source Media Player MPlayer 1.2 Released.md b/published/201510/20151007 Open Source Media Player MPlayer 1.2 Released.md similarity index 100% rename from published/20151007 Open Source Media Player MPlayer 1.2 Released.md rename to published/201510/20151007 Open Source Media Player MPlayer 1.2 Released.md diff --git a/published/20151007 Productivity Tools And Tips For Linux.md b/published/201510/20151007 Productivity Tools And Tips For Linux.md similarity index 100% rename from published/20151007 Productivity Tools And Tips For Linux.md rename to published/201510/20151007 Productivity Tools And Tips For Linux.md diff --git a/translated/tech/20151012 10 Useful Utilities For Linux Users.md b/published/201510/20151012 10 Useful Utilities For Linux Users.md similarity index 76% rename from translated/tech/20151012 10 Useful Utilities For Linux Users.md rename to published/201510/20151012 10 Useful Utilities For Linux Users.md index fbb4bfa8b0..99bfe6869a 100644 --- a/translated/tech/20151012 10 Useful Utilities For Linux Users.md +++ b/published/201510/20151012 10 Useful Utilities For Linux Users.md @@ -1,10 +1,10 @@ -对 Linux 用户10个有用的工具 + 10 个给 Linux 用户的有用工具 ================================================================================ ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/09/linux-656x445.png) ### 引言 ### -在本教程中,我已经收集了对 Linux 用户10个有用的工具,其中包括各种网络监控,系统审计和一些其它实用的命令,它可以帮助用户提高工作效率。我希望你会喜欢他们。 +在本教程中,我已经收集了10个给 Linux 用户的有用工具,其中包括各种网络监控,系统审计和一些其它实用的命令,它可以帮助用户提高工作效率。我希望你会喜欢他们。 #### 1. 
w #### @@ -14,19 +14,18 @@ ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_023.png) -显示帮助信息 +不显示头部信息(LCTT译注:原文此处有误) $w -h -(LCTT译注:-h为不显示头部信息) - -显示当前用户信息 +显示指定用户的信息 $w ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_024.png) #### 2. nmon #### + Nmon(nigel’s monitor 的简写)是一个显示系统性能信息的工具。 $ sudo apt-get install nmon @@ -37,7 +36,7 @@ Nmon(nigel’s monitor 的简写)是一个显示系统性能信息的工具 ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_001.png) -nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。 +nmon 可以显示与 netwrok,cpu, memory 和磁盘使用情况的信息。 **nmon 显示 cpu 信息 (按 c)** @@ -53,7 +52,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。 #### 3. ncdu #### -是一个基于‘du’的光标版本的命令行程序,这个命令是用来分析各种目录占用的磁盘空间。 +是一个支持光标的`du`程序,这个命令是用来分析各种目录占用的磁盘空间。 $apt-get install ncdu @@ -71,7 +70,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。 #### 4. slurm #### -一个基于网络接口的带宽监控命令行程序,它会基于图形来显示 ascii 文件。 +一个基于网络接口的带宽监控命令行程序,它会用字符来显示文本图形。 $ apt-get install slurm @@ -94,7 +93,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。 #### 5.findmnt #### -Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备,当需要时也可以挂载或卸载设备,它也是 util-linux 的一部分。 +Findmnt 命令用于查找挂载的文件系统。它用来列出安装设备,当需要时也可以挂载或卸载设备,它是 util-linux 软件包的一部分。 例子: @@ -122,7 +121,7 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备 #### 6. dstat #### -一种组合和灵活的工具,它可用于监控内存,进程,网络和磁盘的性能,它可以用来取代 ifstat, iostat, dmstat 等。 +一种灵活的组合工具,它可用于监控内存,进程,网络和磁盘性能,它可以用来取代 ifstat, iostat, dmstat 等。 $apt-get install dstat @@ -134,27 +133,27 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备 ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0141.png) -- **-c** cpu +**-c** cpu $ dstat -c ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0151.png) -显示 cpu 的详细信息。 - - $ dstat -cdl -D sda1 - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_017.png) - -- **-d** 磁盘 +**-d** 磁盘 $ dstat -d ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0161.png) +显示 cpu、磁盘等的详细信息。 + + $ dstat -cdl -D sda1 + +![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_017.png) + #### 7. saidar #### -另一种基于 CLI 的系统统计数据监控工具,提供了有关磁盘使用,网络,内存,交换等信息。 +另一种基于命令行的系统统计数据监控工具,提供了有关磁盘使用,网络,内存,交换分区等信息。 $ sudo apt-get install saidar @@ -172,7 +171,7 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备 #### 8. ss #### -ss(socket statistics)是一个很好的选择来替代 netstat,它从内核空间收集信息,比 netstat 的性能更好。 +ss(socket statistics)是一个很好的替代 netstat 的选择,它从内核空间收集信息,比 netstat 的性能更好。 例如: @@ -196,7 +195,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内 #### 9. ccze #### -一个自定义日志格式的工具 :). +一个美化日志显示的工具 :). $ apt-get install ccze @@ -222,7 +221,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内 一种基于 Python 的终端工具,它可以用来以图形方式显示系统活动状态。详细信息以一个丰富多彩的柱状图来展示。 -安装 python: +安装 python(LCTT 译注:一般来说,你应该已经有了 python,不需要此步): $ sudo apt-add-repository ppa:fkrull/deadsnakes @@ -234,7 +233,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内 $ sudo apt-get install python3.2 -- [下载 ranwhen.py][1] +[点此下载 ranwhen.py][1] $ unzip ranwhen-master.zip && cd ranwhen-master @@ -246,7 +245,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内 ### 结论 ### -这都是些冷门但重要的 Linux 管理工具。他们可以在日常生活中帮助用户。在我们即将发表的文章中,我们会尽量多带来些管理员/用户工具。 +这都是些不常见但重要的 Linux 管理工具。他们可以在日常生活中帮助用户。在我们即将发表的文章中,我们会尽量多带来些管理员/用户工具。 玩得愉快! 
@@ -256,7 +255,7 @@ via: http://www.unixmen.com/10-useful-utilities-linux-users/ 作者:[Rajneesh Upadhyay][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md b/published/201510/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md similarity index 100% rename from published/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md rename to published/201510/20151012 Linux FAQs with Answers--How to change USB device permission permanently on Linux.md diff --git a/translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md b/published/201510/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md similarity index 86% rename from translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md rename to published/201510/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md index 509d0b6e45..22c298c047 100644 --- a/translated/tech/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md +++ b/published/201510/20151012 Linux FAQs with Answers--How to force password change at the next login on Linux.md @@ -1,12 +1,11 @@ -Linux有问必答--如何强制在下次登录Linux时更换密码 +Linux有问必答:如何强制在下次登录Linux时更换密码 ================================================================================ > **提问**:我管理着一台多人共享的Linux服务器。我刚使用默认密码创建了一个新用户,但是我想用户在第一次登录时更换密码。有没有什么方法可以让他/她在下次登录时修改密码呢? -在多用户Linux环境中,标准实践是使用一个默认的随机密码创建一个用户账户。成功登录后,新用户自己改变默认密码。出于安全里有,经常建议“强制”用户在第一次登录时修改密码来确保这个一次性使用的密码不会再被使用。 +在多用户Linux环境中,标准实践是使用一个默认的随机密码创建一个用户账户。成功登录后,新用户自己改变默认密码。出于安全考虑,经常建议“强制”用户在第一次登录时修改密码来确保这个一次性使用的密码不会再被使用。 下面是**如何强制用户在下次登录时修改他/她的密码**。 -changes, and when to expire the current password, etc. 每个Linux用户都关联这不同的密码相关配置和信息。比如,记录着上次密码更改的日期、最小/最大的修改密码的天数、密码何时过期等等。 一个叫chage的命令行工具可以访问并调整密码过期相关配置。你可以使用这个工具来强制用户在下次登录修改密码、 @@ -23,7 +22,7 @@ changes, and when to expire the current password, etc. $ sudo chage -d0 -原本“-d ”参数是用来设置密码的“年龄”(也就是上次修改密码起到1970 1.1起的天数)。因此“-d0”的意思是上次密码修改的时间是1970 1.1,这就让当前的密码过期了,也就强制让他在下次登录的时候修改密码了。 +原本“-d ”参数是用来设置密码的“年龄”(也就是上次修改密码起到1970/1/1起的天数)。因此“-d0”的意思是上次密码修改的时间是1970/1/1,这就让当前的密码过期了,也就强制让他在下次登录的时候修改密码了。 另外一个过期当前密码的方式是用passwd命令。 @@ -46,8 +45,8 @@ changes, and when to expire the current password, etc. 
via: http://ask.xmodulo.com/force-password-change-next-login-linux.html 作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md b/published/201510/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md similarity index 100% rename from published/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md rename to published/201510/20151013 Mytodo--A ToDo List Manager For DIY Lovers.md diff --git a/published/201510/20151015 New Collaborative Group to Speed Real-Time Linux.md b/published/201510/20151015 New Collaborative Group to Speed Real-Time Linux.md new file mode 100644 index 0000000000..34d14e2253 --- /dev/null +++ b/published/201510/20151015 New Collaborative Group to Speed Real-Time Linux.md @@ -0,0 +1,78 @@ +新的 RTL 协作组将加速实时 Linux 的发展 +================================================================================ +![](http://www.linux.com/images/stories/66866/Tux-150.png) + +在本周的 Linux 大会活动(LinuxCon)上 Linux 基金会(Linux Foundation)[宣称][1],实时Linux操作系统项目(RTL,Real-Time Linux)得到了新的资金支持,并预期这将促进该项目,使其自成立15年来第一次有机会在实时操作性上和其他的实时操作系统(RTOS,Real Time Operation System)一较高下。Linux 基金会将 RTL 组重组为一个新的项目,并命名为RTL协作组(Real-Time Linux Collaborative Project),该项目将获得更有力的资金支持,更多的开发人员将投入其中,并更加紧密地集成到 Linux 内核主线开发中。 + +根据 Linux 基金会的说法,RTL 项目并入 Linux基金会旗下后,“在研发方面将为业界节省数百万美元的费用。”同时此举也将“通过强有力的上游内核测试体系而改善本项目的代码质量”。 + +在过去的十几年中,RTL 项目的开发管理和经费资助主要由[开源自动化开发实验室] [2](OSADL,Open Source Automation Development Lab)承担,OSADL 将继续作为新合作项目的金牌成员之一,但其原来承担的资金资助工作将会在一月份移交给 Linux 基金会。RTL 项目和 [OSADL][3] 长久以来一直负责维护[内核的实时抢占(RT-Preempt 或 Preempt-RT)][4]补丁,并定期将其更新到 Linux 内核的主线上。 + +据长期以来一直担任 OSADL 总经理的 Carsten Emde 博士介绍,支持内核实时特性的工作已经完成了将近 90%。 “这就像盖房子,”他解释说。 “主要的部件,如墙壁,窗户和门都已经安装到位,就实时内核来说,类似的主要部件包括:高精度定时器(high-resolution timers),中断线程化机制(interrupt threads)和优先级可继承的互斥量(priority-inheritance mutexes)等。然后所剩下的就是需要一些边边角角的工作,就如同装修房子过程中还剩下铺设如地毯和墙纸等来完成最终的工程。” + +以 Emde 观点来看,从技术的角度来说,实时 Linux 的性能已经可以媲美绝大多数其他的实时操作系统 - 但前提是你要不厌其烦地把所有的补丁都打上。 Emde 的原话如下:“该项目(LCTT 译注,指RTL)的唯一目标就是提供一个满足实时性要求的 Linux 系统,使其无论运行状况如何恶劣都可以保证在确定的、可以预先定义的时间期限内对外界处理做出响应。这个目标已经实现,但需要你手动地将 RTL 提供的补丁添加到 Linux 内核主线的版本代码上,但将来的不用打补丁的实时 Linux 内核也能实现这个目标。唯一的,当然也是最重要的区别就是相应的维护工作将少得多,因为我们再也不用一次又一次移植那些独立于内核主线的补丁代码了。” + +新的 RTL 协作组将继续在 Thomas Gleixner 的指导下工作,Thomas Gleixner 在过去的十多年里一直是 RTL 的核心维护人员。本周,Gleixner 被任命为 Linux 基金会成员,并加入了一个特别的小组,小组成员包括 Linux 稳定内核维护者Greg Kroah-Hartman,Yocto 项目维护者 Richard Purdie 和 Linus Torvalds 本人。 + +据 Emde 介绍,RTL 的第二维护人 Steven Rostedt 来自 Red Hat 公司,他负责“维护旧的,但尚保持维护的内核版本”,他将和同样来自 Red Hat 的 Ingo Molnàr 继续参与该项目,Ingo 是 RTL 的关键开发人员,但近年来更多地从事咨询方面的工作。有些令人惊讶的是,Red Hat 竟然不是 RTL 协作组的成员之一。相反,谷歌作为唯一的白金会员占据了头把交椅,其他黄金会员包括国家仪器公司(NI,National Instruments),OSADL 和德州仪器(TI)。银卡会员包括Altera 公司,ARM,Intel 和 IBM。 + +###走向实时内核的漫长道路### + +当15年前 Linux 第一次出现在嵌入式设备上的时候,它所面临的嵌入式计算市场已经被其他的实时操作系统,譬如风河公司(WindRiver)的 VxWorks,所牢牢占据。VxWorks 从那时起到现在,一直在为众多的工控设备、航空电子设备以及交通运输应用提供着工业级别的高确定性的,硬实时的内核。微软后来也提供了一个支持实时性的操作系统版本- Windows CE,当时的 Linux 所面临的是来自潜在工业客户的公开嘲讽和层层阻力。他们认为那些从桌面系统改进来的 Linux 发行版本顶多适合要求不高的轻量级消费类电子产品,而不适合那些对硬实时要求更高的设备。 + +对于嵌入式 Linux 的先行者如 [MontaVista 公司][6]来说,其[早期的目标][5]很明确就是要改进 Linux 的实时能力。多年以来,对 Linux 的实时性能开发发展迅速,得到各种组织的支持,如[成立于2006年][7]的 OSADL,以及实时 Linux 基金会(RTLF,Real-Time Linux Foundation)。在2009年 [OSADL 与 RTLF 合并][8],OSADL 及其 RTL 组承担了所有的抢占式实时内核(Preempt-RT)补丁的维护工作和将补丁提交到上游内核主线的工作。除此之外 OSADL 还负责监管其他自动化相关的项目,例如[高可靠性 
Linux][9](Safety Critical Linux)(译者注:指研究如何在关键系统上可靠安全地运行Linux)。 + +OSADL 对 RTL 的支持经历了三个阶段:拥护和推广,测试和质量评估,以及最后的资金支持。Emde 表示,在早期,OSADL 的角色仅限于写写推广的文章,制作专题报告,组织相关培训,以及“宣传” RTL 的优点。他说:“要让一个相当保守的工控行业接受象 Linux 之类的新技术及其基于社区的那种开发模式,首先就需要建立其对新事物的信任。从使用专有的实时操作系统转向改用 Linux 对公司意味着必须引入新的战略和流程,才能与社区进行互动。” + +后来,OSADL 改而提供技术性能数据,建立[质量评估和测试中心][10],并在和开源相关的法律事务问题和安全认证方面向行业成员提供帮助。 + +当 RTL 在实时性上变得愈加成熟的同时,相反地 Windows CE 却是江河日下,[其市场份额正在快速地被 RTL 所蚕食][11],一些与 RTL 竞争的实时 Linux 项目,主要是 [Xenomai][12] 也已开始集成 RTL。 + +“伴随 RTL 补丁的成功,以及明确的预期其最终会被完整集成到 Linux 内核主线代码中,导致 Xenomai 关注的重心发生了变化,”Emde 说。 “Xenomai 3.0 可与 RT 补丁结合起来使用,并提供了所谓的‘皮肤’,(LCTT 译注:一个封装层),使我们可以复用为其他系统编写的代码。不过,它们还没有完全统一起来,因为 Xenomai 使用了双内核方法,而RT 补丁只适用于单一的 Linux 内核。“ + +近些年来,RTL 组的资助来源越来越少,所以最终 OSADL 接过了这个重任。Emde 说:“当最近开发工作因缺少资金而陷入停滞时,OSADL 对 RTL 的支持进入到第三个重大阶段:开始直接资助 Thomas Gleixner 的工作。” + +正如 Emde 在其[10月5日的一篇博文][13]中所描述的那样,实时 Linux 的应用领域正在日益扩大,由其原来主要服务的工业控制扩大到了汽车行业和电信业等领域,这表明资助的来源也应该得到拓宽。Emde 原文写道:“仅仅靠来自工控行业的资金来支撑全部的工作是不合理的,因为电信等其他行业也在享用实时 Linux 内核。” + +当 Linux 基金会表明有兴趣提供资金支持时,OSADL 认为“单一的资助和控制渠道要有效得多”(LCTT 译注:指最终由Linux 基金会全盘接手了 RTL 项目),Emde 如是说。不过,他补充说,作为黄金级成员,OSADL 仍参与监管项目的工作,会继续从事其宣传和质量保证方面的活动。 + +###汽车行业期待 RTL 的崛起### + +Emde 表示,RTL 会继续在工业应用领域飞速发展并逐渐取代其他实时操作系统。而且,他补充说,RTL 在汽车行业发展也很迅猛,以后会扩大并应用到铁路和航空电子设备上。 + +的确,Linux 在汽车行业将扮演越来越重要的角色,这也是 Linux 基金对 RTL 所寄予厚望的原因之所在。RTL 工作组可能会与 Linux 基金会旗下的[车载Linux][14](AGL,Automotive Grade Linux)工作组展开合作。Emde 猜测,Google 高调参与的主要动因可能也是希望将 RTL 用于汽车控制。此外,德州仪器(TI)也非常期望将其 Jacinto 处理器应用于汽车行业。 + +面向车载 Linux 的项目(比如AGL)的目标是要扩大 Linux 在车载设备上的应用范围,其应用不是仅限于车载信息娱乐(IVI,In-Vehicle Infotainment),而是要进入到譬如集群控制和车载通讯领域,而这些领域目前主要使用的是 QNX 之类的实时操作系统。无人驾驶汽车在实时性上对操作系统也有很高的要求。 + +Emde 特别指出,OSADL 的 [SIL2LinuxMP][15] 项目可能会在将 RTL 引入到汽车工业领域上扮演重要的角色。SIL2LinuxMP 并不是专门针对汽车工业的项目,但随着 BMW 公司参与其中,汽车行业成为其很重要的应用领域之一。该项目的目标在于验证 RTL 在采用单核或多核 CPU 的标准化商用(COTS,Commercial Off-The-Shelf)板卡上运行所需的基本组件。它定义了引导程序、根文件系统、Linux 内核以及对应支持 RTL 的 C 库。 + +无人机和机器人使用实时 Linux 的时机也已成熟,Xenomai 系统早已用在许多机器人以及一些无人机中。不过,在更广泛的嵌入式 Linux 世界,包括了消费电子产品和物联网应用中,RTL 可以扮演的角色很有限。主要的障碍在于,无线通信和互联网本身会带来延迟。 + +Emde 说:“目前实时 Linux 主要还是应用于系统内部控制以及系统与周边外设之间的控制,在远程控制机器上作用不大。企图通过互联网实现实时控制恐怕不是一件可行的事情。” + +-------------------------------------------------------------------------------- + +via: http://www.linux.com/news/software/applications/858828-new-collaborative-group-to-speed-real-time-linux + +作者:[Eric Brown][a] +译者:[unicornx](https://github.com/unicornx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linux.com/community/forums/person/42808 +[1]:http://www.linuxfoundation.org/news-media/announcements/2015/10/linux-foundation-announces-project-advance-real-time-linux +[2]:http://archive.linuxgizmos.com/celebrating-the-open-source-automation-development-labs-first-birthday/ +[3]:https://www.osadl.org/ +[4]:http://linuxgizmos.com/adding-real-time-to-linux-with-preempt-rt/ +[5]:http://archive.linuxgizmos.com/real-time-linux-what-is-it-why-do-you-want-it-how-do-you-do-it-a/ +[6]:http://www.linux.com/news/embedded-mobile/mobile-linux/841651-embedded-linux-pioneer-montavista-spins-iot-linux-distribution +[7]:http://archive.linuxgizmos.com/industry-group-aims-linux-at-automation-apps/ +[8]:http://archive.linuxgizmos.com/industrial-linux-groups-merge/ +[9]:https://www.osadl.org/Safety-Critical-Linux.safety-critical-linux.0.html +[10]:http://www.osadl.org/QA-Farm-Realtime.qa-farm-about.0.html +[11]:http://www.linux.com/news/embedded-mobile/mobile-linux/818011-embedded-linux-keeps-growing-amid-iot-disruption-says-study 
+[12]:http://xenomai.org/ +[13]:https://www.osadl.org/Single-View.111+M5dee6946dab.0.html +[14]:http://www.linux.com/news/embedded-mobile/mobile-linux/833358-first-open-automotive-grade-linux-spec-released +[15]:http://www.osadl.org/SIL2LinuxMP.sil2-linux-project.0.html \ No newline at end of file diff --git a/translated/tech/20151019 10 passwd command examples in Linux.md b/published/201510/20151019 10 passwd command examples in Linux.md similarity index 83% rename from translated/tech/20151019 10 passwd command examples in Linux.md rename to published/201510/20151019 10 passwd command examples in Linux.md index 0f9f1f3827..6e19bef6b2 100644 --- a/translated/tech/20151019 10 passwd command examples in Linux.md +++ b/published/201510/20151019 10 passwd command examples in Linux.md @@ -1,10 +1,9 @@ - -在 Linux 中 passwd 命令的10个示例 +10 个 Linux 中的 passwd 命令示例 ================================================================================ -正如 **passwd** 命令的名称所示,其用于改变系统用户的密码。如果 passwd 命令由非 root 用户执行,那么它会询问当前用户的密码,然后设置调用命令用户的新密码。当此命令由超级用户 root 执行的话,就可以重新设置任何用户的密码,包括不知道当前密码的用户。 +正如 **passwd** 命令的名称所示,其用于改变系统用户的密码。如果 passwd 命令由非 root 用户执行,那么它会询问当前用户的密码,然后设置调用该命令的用户的新密码。当此命令由超级用户 root 执行的话,就可以重新设置任何用户的密码,包括不知道当前密码的用户。 -在这篇文章中,我们将讨论 passwd 命令实际的例子。 +在这篇文章中,我们将用实例来介绍 passwd 命令。 #### 语法 : #### @@ -16,7 +15,7 @@ ### 例1:更改系统用户的密码 ### -当你使用非 root 用户登录时,像我使用 ‘linuxtechi’ 登录的情况下,运行 passwd 命令它会重置当前登录用户的密码。 +当你使用非 root 用户登录时,比如我使用 ‘linuxtechi’ 登录的情况下,运行 passwd 命令它会重置当前登录用户的密码。 [linuxtechi@linuxworld ~]$ passwd Changing password for user linuxtechi. @@ -44,7 +43,7 @@ linuxtechi PS 2015-09-20 0 99999 7 -1 (Password set, SHA512 crypt.) [root@linuxworld ~]# -在上面的输出中,第一个字段显示的用户名,第二个字段显示密码状态(**PS = 密码设置,LK = 密码锁定,NP = 无密码**),第三个字段显示了当密码被改变,后面的字段分别显示了密码能更改的最小期限和最大期限,超过更改期能使用的最大期限,最后的为过期禁用天数。 +在上面的输出中,第一个字段显示的用户名,第二个字段显示密码状态(**PS = 密码设置,LK = 密码锁定,NP = 无密码**),第三个字段显示了上次修改密码的时间,后面四个字段分别显示了密码能更改的最小期限和最大期限,警告期限和没有使用该口令的时长。 ### 例3:显示所有账号的密码状态信息 ### @@ -54,11 +53,11 @@ ![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-sa.jpg) -(LCTT译注:CentOS6.6 没有测试成功,但 Ubuntu 可以。) +(LCTT译注:不同发行版/passwd 的行为不同。CentOS6.6 没有测试成功,但 Ubuntu 可以。) ### 例4:使用 -d 选项删除用户的密码 ### -就我而言,我删除 ‘**linuxtechi**‘ 用户的密码。 +用我做例子,删除 ‘**linuxtechi**‘ 用户的密码。 [root@linuxworld ~]# passwd -d linuxtechi Removing password for user linuxtechi. @@ -68,7 +67,7 @@ linuxtechi NP 2015-09-20 0 99999 7 -1 (Empty password.) [root@linuxworld ~]# -“**-d**” 选项将使用户的密码为空,并禁用用户登录。 +“**-d**” 选项将清空用户密码,并禁用用户登录。 ### 例5:设置密码立即过期 ### @@ -81,7 +80,7 @@ linuxtechi PS 1970-01-01 0 99999 7 -1 (Password set, SHA512 crypt.) 
[root@linuxworld ~]# -现在尝试用 linuxtechi 用户 SSH 到主机。 +现在尝试用 linuxtechi 用户 SSH 连接到主机。 ![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-expiry.jpg) @@ -143,7 +142,7 @@ via: http://www.linuxtechi.com/10-passwd-command-examples-in-linux/ 作者:[Pradeep Kumar][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151019 11 df command examples in Linux.md b/published/201510/20151019 11 df command examples in Linux.md similarity index 85% rename from translated/tech/20151019 11 df command examples in Linux.md rename to published/201510/20151019 11 df command examples in Linux.md index 8b2035acee..a5e3c827ba 100644 --- a/translated/tech/20151019 11 df command examples in Linux.md +++ b/published/201510/20151019 11 df command examples in Linux.md @@ -1,9 +1,9 @@ Linux 中 df 命令的11个例子 ================================================================================ -df(可用磁盘)命令用于显示文件系统的磁盘使用情况。默认情况下 df 命令将以 1KB 为单位进行显示所有当前已挂载的文件系统,如果你想以人类易读的格式显示 df 命令的输出,像这样“df -h”使用 -h 选项。 +df 即“可用磁盘”(disk free),用于显示文件系统的磁盘使用情况。默认情况下 df 命令将以每块 1K 的单位进行显示所有当前已挂载的文件系统,如果你想以人类易读的格式显示 df 命令的输出,像这样“df -h”使用 -h 选项。 -在这篇文章中,我们将讨论 ‘**df**‘ 命令在 Linux 下11种不同的实例 +在这篇文章中,我们将讨论 `df` 命令在 Linux 下11种不同的实例。 在 Linux 下 df 命令的基本格式为: @@ -13,7 +13,7 @@ df(可用磁盘)命令用于显示文件系统的磁盘使用情况。默认 ![](http://www.linuxtechi.com/wp-content/uploads/2015/10/df-command-options.jpg) -df 的原样输出 : +df 的样例输出 : [root@linux-world ~]# df Filesystem 1K-blocks Used Available Use% Mounted on @@ -28,9 +28,9 @@ df 的原样输出 : /dev/mapper/vg00-sap 14987656 37636 14165636 1% /sap [root@linux-world ~]# -### 例1:使用 ‘-a’ 选项列出所有文件系统的磁盘使用量 ### +### 例1:使用 -a 选项列出所有文件系统的磁盘使用量 ### -当我们在 df 命令中使用 ‘-a’ 选项时,它会显示所有文件系统的磁盘使用情况。 +当我们在 df 命令中使用 `-a` 选项时,它会显示所有文件系统的磁盘使用情况。 [root@linux-world ~]# df -a Filesystem 1K-blocks Used Available Use% Mounted on @@ -69,7 +69,7 @@ df 的原样输出 : ### 例2:以人类易读的格式显示 df 命令的输出 ### -在 df 命令中使用‘-h’选项,输出以人易读的格式输出(例如,5K,500M & 5G) +在 df 命令中使用`-h`选项,以人类易读的格式输出(例如,5K,500M 及 5G) [root@linux-world ~]# df -h Filesystem Size Used Avail Use% Mounted on @@ -95,7 +95,7 @@ df 的原样输出 : ### 例4:输出所有已挂载文件系统的类型 ### -‘**-T**’ 选项用在 df 命令中用来显示文件系统的类型。 +`-T` 选项用在 df 命令中用来显示文件系统的类型。 [root@linux-world ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on @@ -110,7 +110,7 @@ df 的原样输出 : /dev/mapper/vg00-sap ext3 14987656 37636 14165636 1% /sap [root@linux-world ~]# -### 例5:输出文件系统磁盘使用的块大小 ### +### 例5:按块大小输出文件系统磁盘使用情况 ### [root@linux-world ~]# df -k Filesystem 1K-blocks Used Available Use% Mounted on @@ -127,7 +127,7 @@ df 的原样输出 : ### 例6:输出文件系统的 inode 信息 ### -‘**-i**’ 选项用在 df 命令用于显示文件系统的 inode 信息。 +`-i` 选项用在 df 命令用于显示文件系统的 inode 信息。 所有文件系统的 inode 信息: @@ -151,9 +151,9 @@ df 的原样输出 : /dev/mapper/vg00-sap 960992 11 960981 1% /sap [root@linux-world ~]# -### 例7:输出所有文件系统总的使用情况 ### +### 例7:输出所有文件系统使用情况汇总 ### -‘–total‘ 选项在 df 命令中用于显示所有文件系统的磁盘使用情况。 +`-total` 选项在 df 命令中用于显示所有文件系统的磁盘使用情况汇总。 [root@linux-world ~]# df -h --total Filesystem Size Used Avail Use% Mounted on @@ -171,7 +171,7 @@ df 的原样输出 : ### 例8:只打印本地文件系统磁盘的使用情况 ### -假设网络文件系统也挂载在 Linux 上,但我们只想显示本地文件系统的信息,这可以通过使用 df 命令的 ‘-l‘ 选项来实现。 +假设网络文件系统也挂载在 Linux 上,但我们只想显示本地文件系统的信息,这可以通过使用 df 命令的 `-l` 选项来实现。 ![](http://www.linuxtechi.com/wp-content/uploads/2015/10/nfs4-fs-mount.jpg) @@ -192,7 +192,7 @@ df 的原样输出 : ### 例9:打印特定文件系统类型的磁盘使用情况 ### -‘**-t**’ 选项在 df 命令中用来打印特定文件系统类型的信息,‘-t’ 指定文件系统的类型,如下所示: +`-t` 选项在 df 命令中用来打印特定文件系统类型的信息,用 `-t` 
指定文件系统的类型,如下所示: 对于 ext4 : @@ -209,11 +209,11 @@ df 的原样输出 : 192.168.1.5:/opensuse 301545472 266833920 19371008 94% /data [root@linux-world ~]# -### 例10:使用 ‘-x’ 选项排除特定的文件系统类型 ### +### 例10:使用 -x 选项排除特定的文件系统类型 ### -“**-x** 或 **–exclude-type**” 在 df 命令中用来在输出中排出某些文件系统类型。 +`-x` 或 `–exclude-type` 在 df 命令中用来在输出中排出某些文件系统类型。 -假设我们想打印排出 ext3 外所有的文件系统。 +假设我们想打印除 ext3 外所有的文件系统。 [root@linux-world ~]# df -x ext3 Filesystem 1K-blocks Used Available Use% Mounted on @@ -228,9 +228,9 @@ df 的原样输出 : ### 例11:在 df 命令的输出中只打印特定的字段 ### -‘**–output={field_name1,field_name2….}**‘ 选项用于显示 df 命令某些字段的输出。 +`-output={field_name1,field_name2...}` 选项用于显示 df 命令某些字段的输出。 -可用的字段名有: ‘source’, ‘fstype’, ‘itotal’, ‘iused’, ‘iavail’, ‘ipcent’, ‘size’, ‘used’, ‘avail’, ‘pcent’ 和 ‘target’ +可用的字段名有: `source`, `fstype`, `itotal`, `iused`, `iavail`, `ipcent`, `size`, `used`, `avail`, `pcent` 和 `target` [root@linux-world ~]# df --output=fstype,size,iused Type 1K-blocks IUsed @@ -252,7 +252,7 @@ via: http://www.linuxtechi.com/11-df-command-examples-in-linux/ 作者:[Pradeep Kumar][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md b/published/201510/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md similarity index 62% rename from translated/tech/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md rename to published/201510/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md index 56c38176fa..06e28304c3 100644 --- a/translated/tech/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md +++ b/published/201510/20151019 How-To--Compile the Latest Wine 32-bit on 64-bit Ubuntu (15.10).md @@ -1,28 +1,28 @@ -如何在64位Ubuntu 15.10中编译最新版32位Wine +如何在 64 位 Ubuntu 15.10 中编译最新版 32 位 Wine ================================================================================ -Wine发布了最新的1.7.53版本。此版本带来的大量性能提升,包括**XAudio**,**Direct3D**代码清理,改善**OLE对象嵌入**技术,更好的**Web服务dll**的实现,还有其他大量更新。 +Wine 发布了最新的1.7.53版本。此版本带来的大量性能提升,包括**XAudio**,**Direct3D**代码清理,改善**OLE对象嵌入**技术,更好的** Web Services dll**的实现,还有其他大量更新。 ![](http://www.tuxarena.com/wp-content/uploads/2015/10/wine1753a.jpg) -虽然官方PPA支持[Wine][1],但目前只提供1.7.44版本,所以安装最新版本可以从源码编译安装。 +虽然有一个官方 [Wine][1] PPA,但目前只提供1.7.44版本,所以安装最新版本可以从源码编译安装。 -[下载源码包][2]([直接下载][3])并解压(**tar -xf wine-1.7.53**).然后,安装依赖。 +[下载源码包][2]([直接下载地址在此][3])并解压 `tar -xf wine-1.7.53`。然后,安装如下依赖。 sudo apt-get install build-essential gcc-multilib libx11-dev:i386 libfreetype6-dev:i386 libxcursor-dev:i386 libxi-dev:i386 libxshmfence-dev:i386 libxxf86vm-dev:i386 libxrandr-dev:i386 libxinerama-dev:i386 libxcomposite-dev:i386 libglu1-mesa-dev:i386 libosmesa6-dev:i386 libpcap0.8-dev:i386 libdbus-1-dev:i386 libncurses5-dev:i386 libsane-dev:i386 libv4l-dev:i386 libgphoto2-dev:i386 liblcms2-dev:i386 gstreamer0.10-plugins-base:i386 libcapi20-dev:i386 libcups2-dev:i386 libfontconfig1-dev:i386 libgsm1-dev:i386 libtiff5-dev:i386 libmpg123-dev:i386 libopenal-dev:i386 libldap2-dev:i386 libgnutls-dev:i386 libjpeg-dev:i386 -现在切换到wine-1.7.53解压后的文件夹,并输入: +现在切换到 wine-1.7.53 解压后的文件夹,并输入: ./configure make sudo make install -同样地,你也可以指定prefix配置脚本,以当前用户安装wine: +同样地,你也可以给配置脚本指定 prefix 参数。以普通用户安装 wine: ./configure --prefix=$HOME/usr/bin make make install -这种情况下,Wine将会安装在**$HOME/usr/bin/wine**,所以请检查$HOME/usr/bin在你的PATH变量中。 
+这种情况下,Wine 将会安装在`$HOME/usr/bin/wine`,所以请检查`$HOME/usr/bin`在你的`PATH`变量中。 -------------------------------------------------------------------------------- @@ -30,7 +30,7 @@ via: http://www.tuxarena.com/2015/10/how-to-compile-latest-wine-32-bit-on-64-bit 作者:Craciun Dan 译者:[VicYu/Vic020](http://vicyu.net) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/talk/20151020 Five Years of LibreOffice Evolution (2010-2015).md b/published/201510/20151020 Five Years of LibreOffice Evolution (2010-2015).md similarity index 62% rename from sources/talk/20151020 Five Years of LibreOffice Evolution (2010-2015).md rename to published/201510/20151020 Five Years of LibreOffice Evolution (2010-2015).md index 998b725295..e21ebc06e9 100644 --- a/sources/talk/20151020 Five Years of LibreOffice Evolution (2010-2015).md +++ b/published/201510/20151020 Five Years of LibreOffice Evolution (2010-2015).md @@ -1,67 +1,67 @@ -Five Years of LibreOffice Evolution (2010-2015) +LibreOffice 这五年(2010-2015) ================================================================================ 注:youtube 视频 -[LibreOffice][1] – amazing free and open source office suite from The Document Foundation. LO was forked from [OpenOffice.org][2] in September 28, 2010 and OOo is an open-source version of the earlier [StarOffice][3]. The LibreOffice support word processing, the creation and editing of spreadsheets, slideshows, diagrams and drawings, databases, mathematical formulae. -### Core applications: ### +[LibreOffice][1],来自文档基金会(The Document Foundation)一个自由开源的令人惊叹的办公套件。LO (LibreOffice)在2010年9月28日由 [OpenOffice.org][2] 分支出来;而 OOo (OpenOffice.org)则是早期的 [StarOffice][3] 开源版本。LibreOffice 支持文字处理,创建与编辑电子表格,幻灯片,图表和图形,数据库,数学公式的创建和编辑等。 -- **Writer** – word processor -- **Calc** – spreadsheet app, similar to Excel -- **Impress** – application for presentations, support Microsoft PowerPoint’s format -- **Draw** – vector graphics editor -- **Math** – special application for writing and editing mathematical formulae -- **Base** – database management +### 核心应用: ### + +- **Writer** – 文字处理器 +- **Calc** – 电子表格应用程序,类似于 Excel +- **Impress** – 应用演示,支持 Microsoft PowerPoint 的格式 +- **Draw** – 矢量图形编辑器 +- **Math** – 用于编写和​​编辑数学公式的特殊应用 +- **Base** – 数据库管理 ![LibreOffice 3.3, 2011](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.3/Help-License-Info.png) -LibreOffice 3.3, 2011 +*LibreOffice 3.3, 2011* -First version of LibreOffice – fork of OpenOffice.org +这是LibreOffice 的第一个版本 - 分支自 OpenOffice.org ![LibreOffice 3.4](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.4/1cc80d1cada204a061402785b2048f7clibreoffice-3.4.3.png) -LibreOffice 3.4 +*LibreOffice 3.4* ![LibreOffice 3.5](https://raw.githubusercontent.com/paulcarroty/Articles/master/LO_History/3.5/libreoffice35-large_001.jpg) -LibreOffice 3.5 +*LibreOffice 3.5* ![LibreOffice 3.6](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.6/libreoffice-3.6.0.png) -LibreOffice 3.6 +*LibreOffice 3.6* ![Libre Office 4.0](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.0/libreoffice-writer.png) -LibreOffice 4.0 +*LibreOffice 4.0* ![Libre Office 4.1](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.1/Writer1.png) -LibreOffice 4.1 +*LibreOffice 4.1* ![Libre Office 4.2](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.2/libreoffice-4.2.png) -Libre Office 4.2 +*Libre Office 4.2* ![LibreOffice 
4.3](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.3/libreoffice.jpg) -LibreOffice 4.3 +*LibreOffice 4.3* ![LibreOffice 4.4](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.4/LibreOffice_Writer_4_4_2.png) -LibreOffice 4.4 +*LibreOffice 4.4* ![Libre Office 5.0](https://github.com/paulcarroty/Articles/raw/master/LO_History/5.0/LibreOffice_Writer_5.0.png) -LibreOffice 5.0 +*LibreOffice 5.0* -### History of Libre Office from Wikipedia ### +### Libre Office 的发展,出自 Wikipedia ### ![StarOffice major derivatives](https://commons.wikimedia.org/wiki/File%3AStarOffice_major_derivatives.svg) - -### LibreOffice 5.0 Review ### +### LibreOffice 5.0 预览 ### 注:youtube 视频 @@ -72,12 +72,12 @@ LibreOffice 5.0 via: https://tlhp.cf/libreoffice-5years-evolution/ 作者:[Pavlo Rudyi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://tlhp.cf/author/paul/ [1]:http://www.libreoffice.org/ [2]:https://www.openoffice.org/ -[3]:http://www.staroffice.org/ \ No newline at end of file +[3]:http://www.staroffice.org/ diff --git a/translated/talk/20151020 Linux History--24 Years Step by Step.md b/published/201510/20151020 Linux History--24 Years Step by Step.md similarity index 59% rename from translated/talk/20151020 Linux History--24 Years Step by Step.md rename to published/201510/20151020 Linux History--24 Years Step by Step.md index ce9f6f95d2..71a82faf8c 100644 --- a/translated/talk/20151020 Linux History--24 Years Step by Step.md +++ b/published/201510/20151020 Linux History--24 Years Step by Step.md @@ -6,19 +6,17 @@ Linux 的历史:24 年,一步一个脚印 ### 史前 ### -没有 [C 编程语言][1] and [GNU 项目][2] - Linux 环境,也就不可能有 Linux 的成功。 +没有 [C 编程语言][1] 和 [GNU 项目][2] 构成 Linux 环境,也就不可能有 Linux 的成功。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/00-1.jpg) +*Ken Thompson 和 Dennis Ritchie* -Ken Thompson 和 Dennis Ritchie - -[Ken Thompson][1] 和 [Dennis Ritchie][2] 在 1969-1970 创造了 Unix 操作系统。之后发布了新的 [C 编程语言][3] - 高级可移植编程语言。 Linux 内核用 C 和一些汇编代码写成。 - +[Ken Thompson][1] 和 [Dennis Ritchie][2] 在 1969-1970 创造了 Unix 操作系统。之后发布了新的 [C 编程语言][3],它是一种高级的、可移植的编程语言。 Linux 内核用 C 和一些汇编代码写成。 ![Richard Matthew Stallman](https://github.com/paulcarroty/Articles/raw/master/Linux_24/00-2.jpg) -Richard Matthew Stallman +*Richard Matthew Stallman* [Richard Matthew Stallman][4] 在 1984 年启动了 [GNU 项目][5]。最大的一个目标 - 完全自由的类-Unix 操作系统。 @@ -26,9 +24,9 @@ Richard Matthew Stallman ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1991-1.jpg) -Linus Torvalds, 1991 +*Linus Torvalds, 1991* -[Linus Torvalds][5] 在芬兰赫尔辛基开始了 Linux 内核开发 - 为他的硬件 - Intel 30386 CPU 编写程序。他也使用 Minix 和 GNU C 编译器。下面是 Linus Torvalds 给 Minix 新闻组的历史消息: +[Linus Torvalds][5] 在芬兰赫尔辛基开始了 Linux 内核开发,他是为他的硬件 - Intel 30386 CPU 编写的程序。他也使用 Minix 和 GNU C 编译器。下面是 Linus Torvalds 给 Minix 新闻组的历史消息: > From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds) > Newsgroups: comp.os.minix @@ -55,7 +53,7 @@ Linus Torvalds, 1991 > > Linus (torvalds@kruuna.helsinki.fi) -从此之后,Linux 开始得到了世界范围志愿者和专业专家的支持。Linus 的合作者 Ari Lemmke 把它命名为 “Linux” - 大学服务器项目上的目录名称。 +从此之后,Linux 开始得到了世界范围志愿者和专业专家的支持。Linus 的同事 Ari Lemmke 把它命名为 “Linux” - 这其实是他们的大学 ftp 服务器上的项目目录名称。 ### 1992 ### @@ -67,13 +65,13 @@ Linus Torvalds, 1991 ![Slackware 1.0 ](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1993-1.png) -第一次发布 Slackware(译者注:Slackware Linux 是一个高度技术性的,干净的发行版,只有少量非常有限的个人设置) – 相同主导者 
Patrick Volkerding 最老的 Linux 发行版。Linux 内核有 100 多个开发者。 +Slackware 首次发布(LCTT 译注:Slackware Linux 是一个高度技术性的、干净的发行版,只有少量非常有限的个人设置) – 最早的 Linux 发行版,其领导者 Patrick Volkerding 也是最早的。其时,Linux 内核有 100 多个开发者。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1993-2.png) -Debian +*Debian* -Debian – 1991 年创立了最大的 Linux 社区之一。 +Debian – 最大的 Linux 社区之一也创立于 1991 年。 ### 1994 ### @@ -81,201 +79,201 @@ Linux 1.0 发布了,多亏了 XFree 86 项目,第一次有了 GUI。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1994-1.png) -Red Hat Linux +*Red Hat Linux* -发布 Red Hat Linux 1.0 +发布了 Red Hat Linux 1.0 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1994-2.png) -S.u.S.E Linux +*S.u.S.E Linux* -和 [S.u.S.E. Linux][6] 1.0. +和 [S.u.S.E. Linux][6] 1.0。 ### 1995 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1995-1.png) -Red Hat Inc. +*Red Hat Inc.* -Bob Yound 和 Marc Ewing 合并他们的本地业务为 [Red Hat Software][7]。Linux 移植到了很多硬件平台。 +Bob Young 和 Marc Ewing 合并他们的本地业务为 [Red Hat Software][7]。Linux 移植到了很多硬件平台。 ### 1996 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1996-1.png) -### Tux ### +*Tux* -企鹅 Tux - Linux 官方吉祥物。Linus Torvalds 参观了堪培拉国家动物园和水族馆之后有了这个想法。发布了 Linux 2.0,支持对称多处理器。开始开发 KDE。 +企鹅 Tux 是 Linux 官方吉祥物,Linus Torvalds 参观了堪培拉国家动物园和水族馆之后有了这个想法。发布了 Linux 2.0,支持对称多处理器。开始开发 KDE。 ### 1997 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1997-1.jpg) -Miguel de Icaza +*Miguel de Icaza* -Miguel de Icaza 和 Federico Mena 开始开发 GNOME - 自由桌面环境和应用程序。Linus Torvalds 赢得了 Linux 商标冲突,Linux 成为了 Linus 的注册商标。 +Miguel de Icaza 和 Federico Mena 开始开发 GNOME - 自由桌面环境和应用程序。Linus Torvalds 赢得了 Linux 商标冲突官司,Linux 成为了 Linus Torvalds 的注册商标。 ### 1998 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1998-1.jpg) -大教堂和集市 +*大教堂和集市* -Eric S. Raymond 出版了文章 [The Cathedral and the Bazaar][8](大教堂和集市) - 非常推荐阅读。Linux 得到了大公司的支持: IBM、Oracle、康柏。 +Eric S. 
Raymond 出版了文章 [The Cathedral and the Bazaar(大教堂和集市)][8] - 高度推荐阅读。Linux 得到了大公司的支持: IBM、Oracle、康柏。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1998-2.png) -Mandrake Linux +*Mandrake Linux* -首次发布 Mandrake Linux - 基于红帽 Linux 带 K 桌面环境的发行版。 +Mandrake Linux 首次发布 - 基于红帽 Linux 的发行版,带有 KDE 桌面环境。 ### 1999 ### ![](https://upload.wikimedia.org/wikipedia/commons/4/4f/KDE_1.1.jpg) -第一个主要的 KDE 发行版。 +第一个主要的 KDE 版本。 ### 2000 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2000-1.jpg) -Dell 支持 Linux - 第一个大的硬件供应商。 +Dell 支持 Linux - 这是第一个支持的大硬件供应商。 ### 2001 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2001-1.jpg) -Revolution OS +*Revolution OS* -纪录片 “Revolution OS”(译者注:操作系统革命) - GNU、Linux、开源、自由软件的 20 年历史,以及 Linux 和开源界最好骇客的采访。 +纪录片 “Revolution OS(操作系统革命)” - GNU、Linux、开源、自由软件的 20 年历史,以及对 Linux 和开源界顶级黑客的采访。 ### 2002 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2002-1.jpg) -BitKeeper +*BitKeeper* -Linux 开始使用 BitKeeper - 分布式版本控制专用软件。 +Linux 开始使用 BitKeeper,这是一种商业版的分布式版本控制软件。 ### 2003 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2003-1.png) -SUSE +*SUSE* -Novell 用 210 美元购买了 SUSE Linux AG。2003 年也开始了 SCO 集团,IBM、以及 Linux 社区关于 Unix 版权的史诗般战役。 +Novell 用 2.1 亿美元购买了 SUSE Linux AG。同年 SCO 集团 也开始了同 IBM 以及 Linux 社区关于 Unix 版权的艰难的法律诉讼。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2003-2.png) -Fedora +*Fedora* -红帽和 Linux 社区第一次发布了 Fedora Linux。 +红帽和 Linux 社区首次发布了 Fedora Linux。 ### 2004 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2004-1.png) -X.ORG 基金会 +*X.ORG 基金会* XFree86 解散了并加入到 [X.Org 基金会][9], X 的开发更快了。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2004-2.jpg) -Ubuntu 4.10 – 第一次发布 +Ubuntu 4.10 – Ubuntu 首次发布 ### 2005 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2005-1.png) -openSUSE +*openSUSE* -开始了 [openSUSE][10] - 企业版 Novell’s OS 的免费版本。OpenOffice.org 开始支持 OpenDocument 标准。 +[openSUSE][10] 开始了,这是企业版 Novell’s OS 的免费版本。OpenOffice.org 开始支持 OpenDocument 标准。 ### 2006 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2006-1.png) -新的 Linux 发行版 - 基于红帽企业版 Linux 的 Oracle Linux。微软和 Novell 开始在 IT 和专利保护方面进行合作。 +一个新的 Linux 发行版,基于红帽企业版 Linux 的 Oracle Linux。微软和 Novell 开始在 IT 和专利保护方面进行合作。 ### 2007 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2007-1.jpg) -Dell Linux 笔记本 +*Dell Linux 笔记本* -Dell 发布了预安装 Linux 的笔记本。 +Dell 发布了第一个预装 Linux 的笔记本。 ### 2008 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2008-1.jpg) -KDE 4.0 +*KDE 4.0* -在不稳定的情况下发布了 KDE 4,很多用户开始迁移到 GNOME。 +KDE 4 发布了,但是不稳定,很多用户开始迁移到 GNOME。 ### 2009 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2009-1.jpg) -Red Hat +*Red Hat* -红帽 Linux 的成功 - 市值 2亿6千2百万美元。 +红帽 Linux 取得了成功 - 市值达 26亿2千万美元。 -2009 年微软第一次在 GPLv2 协议下向 Linux 内核提交了补丁。 +2009 年微软在 GPLv2 协议下向 Linux 内核提交了第一个补丁。 ### 2010 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2010-1.png) -Novell -> Attachmate +*Novell -> Attachmate* -Novell 已 2亿2千万美元卖给了 Attachmate Group, Inc。在新公司 SUSE 和 Novell 成为了两款独立的产品。 +Novell 已 22亿美元卖给了 Attachmate Group, Inc。SUSE 和 Novell 成为了新公司的两款独立的产品。 -第一次发布了 [systemd][11],开始了 Linux 系统的革命。 +[systemd][11] 首次发布,开始了 Linux 系统的革命。 ### 2011 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2011-1.png) -Unity Desktop in 2011 +*Unity 桌面,2011* -发布了 Ubuntu Unity - 遭到很多用户的批评。 +Ubuntu Unity 发布,遭到很多用户的批评。 ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2011-2.png) -GNOME 3.0, 2011 +*GNOME 3.0,2011* -发布了 GNOME 3.0 - 
Linus Torvalds 评论为 “unholy mess” 以及很多负面评论。发布了 Linux 内核 3.0。 +GNOME 3.0 发布, Linus Torvalds 评论为 “unholy mess” ,有很多负面评论。Linux 内核 3.0 发布。 ### 2012 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2012-1.png) -1500 万行代码 +*1500 万行代码* -Linux 内核有 1500 万行代码。微软成为主要共享者之一。 +Linux 内核达到 1500 万行代码。微软成为主要贡献者之一。 ### 2013 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2013-1.png) -发布了 Kali Linux 1.0 - 用户渗透测试和数字取证的基于 Debian 的 Linux 发行版。2014 年 CentOS 代码开发者加入到了红帽公司。 +Kali Linux 1.0 发布, 用于渗透测试和数字取证,基于 Debian 的 Linux 发行版。2014 年 CentOS 及其代码开发者加入到了红帽公司。 ### 2014 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2014-1.jpg) -Lennart Poettering 和 Kay Sievers +*Lennart Poettering 和 Kay Sievers* -systemd - Ubuntu 和所有主流 Linux 发行版的默认初始化程序。Ubuntu 有 2200 万用户。安卓的大进步 - 占了所有移动设备的 75%。 +systemd 成为 Ubuntu 和所有主流 Linux 发行版的默认初始化程序。Ubuntu 有 2200 万用户。安卓的大进步 - 占了所有移动设备的 75% 份额。 ### 2015 ### ![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2015-1.jpg) -发布了 Linux 4.0。没有了 Mandriva(译者注:Mandriva 是目前全球最优秀的 Linux 发行版之一,稳居于 linux 排行榜第一梯队。2005年之前稳居 linux 排行榜 NO.1。它是目前最易用的 linux 发行版,也是众多国际级 linux 发行版中唯一一个默认即支持中文环境的 linux) - 但还有很多分支 - 其中最流行的一个是 Mageia。 +发布了 Linux 4.0。Mandriva 公司清算,但还有很多分支,其中最流行的一个是 Mageia。 -写于对 Linux 的热爱。 +带着对 Linux 的热爱而执笔。 -------------------------------------------------------------------------------- @@ -283,7 +281,7 @@ via: https://tlhp.cf/linux-history/ 作者:[Pavlo Rudyi][a] 译者:[ictlyh](http://mutouxiaogui.cn/blog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151012 How To Use iPhone In Antergos Linux.md b/published/20151012 How To Use iPhone In Antergos Linux.md new file mode 100644 index 0000000000..e9bbca215a --- /dev/null +++ b/published/20151012 How To Use iPhone In Antergos Linux.md @@ -0,0 +1,81 @@ +如何在 Antergos Linux 中使用 iPhone +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-Antergos-Arch-Linux.jpg) + +在Arch Linux中使用iPhone遇到麻烦了么?iPhone和Linux从来都没有很好地集成。本教程中,我会向你展示如何在Antergos Linux中使用iPhone,对于同样基于Arch的的Linux发行版如Manjaro也应该同样管用。 + +我最近购买了一台全新的iPhone 6S,当我连接到Antergos Linux中要拷贝一些照片时,它完全没有检测到它。我看见iPhone正在被充电并且我已经允许了iPhone“信任这台电脑”,但是还是完全没有检测到。我尝试运行`dmseg`但是没有关于iPhone或者Apple的信息。有趣的是我当我安装好了[libimobiledevice][1],这个就可以解决[iPhone在Ubuntu中的挂载问题][2]。 + +我会向你展示如何在Antergos中使用运行iOS 9的iPhone 6S。这会有更多的命令行,但是我假设你用的是ArchLinux,并不惧怕使用终端(也不应该惧怕)。 + +### 在Arch Linux中挂载iPhone ### + +**第一步**:如果已经插入,请拔下你的iPhone。 + +**第二步**:现在,打开终端输入下面的命令来安装必要的包。如果它们已经安装过了也没有关系。 + + sudo pacman -Sy ifuse usbmuxd libplist libimobiledevice + +**第三步**: 这些库和程序安装完成后,重启系统。 + + sudo reboot + +**第四步**:创建一个iPhone的挂载目录,我建议在家目录中创建一个iPhone目录。 + + mkdir ~/iPhone + +**第五步**:解锁你的手机并插入,如果询问是否信任该计算机,请允许信任。 + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-2.jpeg) + +**第六步**: 看看这时iPhone是否已经被机器识别了。 + + dmesg | grep -i iphone + +这时就该显示iPhone和Apple的结果了。就像这样: + + [ 31.003392] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + [ 40.950883] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected + [ 47.471897] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + [ 82.967116] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected + [ 106.735932] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached + +这意味着这时iPhone已经被Antergos/Arch成功地识别了。 + +**第七步**: 设置完成后是时候挂载iPhone了,使用下面的命令: + + ifuse 
~/iPhone + +由于我们在家目录中创建了挂载目录,你不需要root权限就可以在家目录中看见。如果命令成功了,你就不会看见任何输出。 + +回到Files看下iPhone是否已经识别。对于我而言,在Antergos中看上去这样: + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux.jpeg) + +你可以在这个目录中访问文件。从这里复制文件或者复制到里面。 + +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-1.jpeg) + +**第八步**: 当你想要卸载的时候,使用这个命令: + + sudo umount ~/iPhone + +### 对你有用么? ### + +我知道这并不是非常方便和理想,iPhone应该像其他USB设备那样工作,但是事情并不总是像人们想的那样。好的是一点小的DIY就能解决这个问题带来了一点成就感(至少对我而言)。我必须要说的是Antergos应该修复这个问题让iPhone可以默认挂载。 + +这个技巧对你有用么?如果你有任何问题或者建议,欢迎留下评论。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/iphone-antergos-linux/ + +作者:[Abhishek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://www.libimobiledevice.org/ +[2]:http://itsfoss.com/mount-iphone-ipad-ios-7-ubuntu-13-10/ diff --git a/translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md b/published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md similarity index 53% rename from translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md rename to published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md index 3b37ef3c91..ada766b696 100644 --- a/translated/tech/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md +++ b/published/20151012 Linux FAQs with Answers--How to find information about built-in kernel modules on Linux.md @@ -1,12 +1,12 @@ -Linux有问必答--如何找出Linux中内置模块的信息 +Linux有问必答:如何找出Linux中内置模块的信息 ================================================================================ -> **提问**:我想要知道Linux系统中内核内置的模块,以及每个模块的参数。有什么方法可以得到内置模块和设备驱动的列表,以及它们的详细信息呢? +> **提问**:我想要知道Linux系统中内核内置的模块,以及每个模块有哪些参数。有什么方法可以得到内置模块和设备驱动的列表,以及它们的详细信息呢? -现代Linux内核正在随着时间迅速地增长来支持大量的硬件、文件系统和网络功能。在此期间,“可加载模块”的引入防止内核变得越来越臃肿,以及在不同的环境中灵活地扩展功能及硬件支持,而不必重新构建内核。 +现代Linux内核正在随着时间变化而迅速增长,以支持大量的硬件、文件系统和网络功能。在此期间,“可加载模块(loadable kernel modules,[LKM])”的引入防止内核变得越来越臃肿,以及在不同的环境中灵活地扩展功能及硬件支持,而不必重新构建内核。 -最新的Linux发型版的内核只带了相对较小的“内置模块”,其余的特定硬件驱动或者自定义功能作为“可加载模块”来让你选择地加载或卸载。 +最新的Linux发行版的内核只带了相对较小的“内置模块(built-in modules)”,其余的特定硬件驱动或者自定义功能作为“可加载模块”来让你选择地加载或卸载。 -内置模块被静态地编译进了内核。不像可加载内核模块可以动态地使用modprobe、insmod、rmmod、modinfo或者lsmod等命令地加载、卸载、查询模块,内置的模块总是在启动是就加载进了内核,不会被这些命令管理。 +内置模块被静态地编译进了内核。不像可加载内核模块可以动态地使用`modprobe`、`insmod`、`rmmod`、`modinfo`或者`lsmod`等命令地加载、卸载、查询模块,内置的模块总是在启动时就加载进了内核,不会被这些命令管理。 ### 找出内置模块列表 ### @@ -22,13 +22,13 @@ Linux有问必答--如何找出Linux中内置模块的信息 ### 找出内置模块参数 ### -每个内核模块无论是内置的还是可加载的都有一系列的参数。对于可加载模块,modinfo命令显示它们的参数信息。然而这个命令不对内置模块管用。你会得到下面的错误。 +每个内核模块无论是内置的还是可加载的都有一系列的参数。对于可加载模块,`modinfo`命令可以显示它们的参数信息。然而这个命令对内置模块没有用。你会得到下面的错误。 modinfo: ERROR: Module XXXXXX not found. 
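遇到上面这个错误,通常就说明你查询的是内置模块(或者当前内核中根本没有这个模块)。如果拿不准,可以先在前面提到的 modules.builtin 列表里确认一下,再决定去哪里查参数。下面是一个简单的示意(以文中稍后会用到的 tcp_cubic 模块为例,模块名可以换成你关心的任何模块):

    # 在内置模块列表中查找,能找到即说明该模块被编译进了内核
    grep -w tcp_cubic /lib/modules/$(uname -r)/modules.builtin

    # 如果上面没有任何输出,说明它不是内置模块(或不存在),
    # 此时可以用 modinfo 查看可加载模块的参数
    modinfo -p tcp_cubic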
-如果你想要查看内置模块的参数,以及它们的值,你可以在**/sys/module** 下检查它们的内容。 +如果你想要查看内置模块的参数,以及它们的值,你可以在 **/sys/module** 下检查它们的内容。 -在 /sys/module目录下,你可以找到内核模块(包含内置和可加载的)命名的子目录。结合则进入每个模块目录,这里有个“parameters”目录,列出了这个模块所有的参数。 +在 /sys/module目录下,你可以找到内核模块(包含内置和可加载的)命名的子目录。进入每个模块目录,这里有个“parameters”目录,列出了这个模块所有的参数。 比如你要找出tcp_cubic(内核默认的TCP实现)模块的参数。你可以这么做: @@ -46,7 +46,7 @@ via: http://ask.xmodulo.com/find-information-builtin-kernel-modules-linux.html 作者:[Dan Nanni][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/share/20151012 What is a good IDE for R on Linux.md b/published/20151012 What is a good IDE for R on Linux.md similarity index 70% rename from translated/share/20151012 What is a good IDE for R on Linux.md rename to published/20151012 What is a good IDE for R on Linux.md index 2e5d72a880..36cd1ecda9 100644 --- a/translated/share/20151012 What is a good IDE for R on Linux.md +++ b/published/20151012 What is a good IDE for R on Linux.md @@ -1,8 +1,9 @@ -Linux 上针对 R 语言的好用 IDE +Linux 上好用的 R 语言 IDE ================================================================================ + 前一段时间,我已经介绍过 [Linux 上针对 C/C++ 语言的最好 IDE][1]。很显然 C 或 C++ 并不是现存的唯一的编程语言,是时间讨论某些更加特别的语言了。 -假如你做过一些统计工作,很可能你已经见识过 [R 语言][2] 了。假如你还没有,我真的非常推荐这门专为统计和数据挖掘而生的开源编程语言。若你拥有编程背景,它的语法可能会使你感到有些不适应,但希望它的向量化操作所带来的快速能够吸引到你。简而言之,请尝试使用一下这门语言。而要做到这一点,使用一个好的 IDE 来入门或许更好吧。R 作为一门跨平台的语言,有着一大把好用的 IDE,它们使得用 R 语言进行数据分析变得更惬意。假如你非常钟意一个特定的编辑器,这里也有一些好用的插件来将它转变为一个成熟的 R IDE。 +假如你做过一些统计工作,很可能你已经见识过 [R 语言][2] 了。假如你还没有,我真的非常推荐这门专为统计和数据挖掘而生的开源编程语言。若你拥有编程背景,它的语法可能会使你感到有些不适应,但希望它的向量化操作所带来的快速能够吸引到你。简而言之,请尝试使用一下这门语言。而要做到这一点,使用一个好的 IDE 来入门或许会更好。R 作为一门跨平台的语言,有着一大把好用的 IDE,它们使得用 R 语言进行数据分析变得更惬意。假如你非常钟意一个特定的编辑器,这里也有一些好用的插件来将它转变为一个成熟的 R 语言的 IDE。 下面就让我们见识一下 Linux 环境下 5 个针对 R 语言的好用 IDE吧。 @@ -10,15 +11,15 @@ Linux 上针对 R 语言的好用 IDE ![](https://c1.staticflickr.com/1/603/22093054381_431383ab60_c.jpg) -就让我们以或许是最为人们喜爱的 R IDE: [RStudio][3] 来开始我们的介绍吧。除了一般 IDE 所提供的诸如语法高亮,代码补全等功能,RStudio 还因其集成了 R 语言帮助文档,强大的调试器,多视图系统而突出。如果你准备入门 R 语言,我只建议你将 RStudio 作为你的 R 语言控制台,一方面用它来实时测试代码是很完美的,另外对象浏览器将帮助你理解你正在处理的是哪类数据。最后,真正征服我的是它集成了图形显示器,使得你能够更轻松地以图片方式探索你的图形。至于它不好的方面, RStudio 缺乏快捷键和高级设置来使得它成为一个完美的 IDE。然而,它有一个以 AGPL 协议发布的免费版本, Linux 用户没有借口不去试试这个 IDE。 +就让我们以或许是最为人们喜爱的 R IDE —— [RStudio][3] 来开始我们的介绍吧。除了一般 IDE 所提供的诸如语法高亮、代码补全等功能,RStudio 还因其集成了 R 语言帮助文档、强大的调试器、多视图系统而突出。如果你准备入门 R 语言,我只建议你将 RStudio 作为你的 R 语言控制台,一方面用它来实时测试代码是很完美的,另外对象浏览器可以帮助你理解你正在处理的是哪类数据。最后,真正征服我的是它集成了图形显示器,使得你能够更轻松地将图形输出为图片文件。至于它不好的方面, RStudio 缺乏快捷键和高级设置来使得它成为一个完美的 IDE。然而,它有一个以 AGPL 协议发布的免费版本, Linux 用户没有借口不去试试这个 IDE。 ### 2. 
带有 ESS 插件的 Emacs ### ![](https://c2.staticflickr.com/6/5824/22056857776_a14a4e7e1b_c.jpg) -在我的前一个有关 IDE 的文章中,很多朋友对我所给出的清单中没有 Emacs 而感到失望。对于这个,我的主要理由是 Emacs 可以说是 IDE 里面的“通配符”:你可以将它放到任意语言的 IDE 清单中。但对于 [带有 ESS 插件的 R][4] 来说,事情就变得有些不同了。Emacs Speaks Statistics (ESS) 是一个令人惊异的插件,它将完全改变你使用 Emacs 编辑器的方式,真的非常适合 R 编程者的需求。与 RStudio 类似,带有 ESS 的 Emacs 拥有多视图,它有两个面板:一个显示代码,另一个则是一个 R 控制台,使得实时地测试代码和探索数据对象变得更加容易。但 ESS 真正的长处是可以和你已安装的其他 Emacs 插件无缝集成以及它的高级配置选项。简而言之,如果你喜欢你的 Emacs 快捷键,你将能够在 R 语言开发环境下使用它们。然而,当你在 ESS 中处理大量数据时,我已经听闻并经历了一些效率低下的问题。尽管这个问题不是很重大,但足以让我更偏好 RStudio。 +在我的前一个有关 IDE 的文章中,很多朋友对我所给出的清单中没有 Emacs 而感到失望。对于这个,我的主要理由是 Emacs 可以说是 IDE 里面的“通配符”:你可以将它放到任意语言的 IDE 清单中。但对于 [带有 ESS 插件的 R][4] 来说,事情就变得有些不同了。Emacs Speaks Statistics (ESS) 是一个令人惊异的插件,它将完全改变你使用 Emacs 编辑器的方式,真的非常适合 R 编程者的需求。与 RStudio 类似,带有 ESS 的 Emacs 拥有多视图,它有两个面板:一个显示代码,另一个则是一个 R 控制台,使得实时地测试代码和探索数据对象变得更加容易。但 ESS 真正的长处是可以和你已安装的其他 Emacs 插件无缝集成,以及它的高级配置选项。简而言之,如果你喜欢你的 Emacs 快捷键,你将能够在 R 语言开发环境下使用它们。然而,当你在 ESS 中处理大量数据时,我已经听闻并经历了一些效率低下的问题。尽管这个问题不是很重大,但足以让我更偏好 RStudio。 -### 3. Vim with Vim-R-plugin ### +### 3. Vim 及 Vim-R-plugin ### ![](https://c1.staticflickr.com/1/680/22056923916_abe3531bb4_b.jpg) @@ -28,7 +29,7 @@ Linux 上针对 R 语言的好用 IDE ![](https://c1.staticflickr.com/1/761/22056923956_1413f60b42_c.jpg) -若 Emacs 和 Vim 都不是你的菜,而你恰好喜欢默认的 Gnome 编辑器,则 [RGedit][7] 就专门为你而生:它是 Gedit 的一个专门编辑 R 代码的插件。Gedit 比它看起来的样子更强大,配上大量的插件,就有可能用它来做许许多多的事情。而 RGedit 恰好就是你编辑 R 代码所需要的那款插件。它支持传统的语法高亮并在屏幕下方集成了 R 控制台,但它还有一大类独特的功能,例如多文件编辑,代码折叠,文件查看器,甚至还有一个 GUI 的向导用来从 snippets 产生代码。尽管我对 Gedit 并不感冒,但我必须承认这些功能比一般插件的功能更好并且在你花费很长时间去分析数据时它会有很大的帮助。唯一的污点是它的最后一次更新是 2013 年。我真的希望这个项目能够被重新焕发新生。 +若 Emacs 和 Vim 都不是你的菜,而你恰好喜欢默认的 Gnome 编辑器,则 [RGedit][7] 就是专门为你而生的:它是 Gedit 的一个专门编辑 R 代码的插件。Gedit 比你以为的更强大,配上大量的插件,就有可能用它来做许许多多的事情。而 RGedit 恰好就是你编辑 R 代码所需要的那款插件。它支持传统的语法高亮并在屏幕下方集成了 R 控制台,但它还有一大类独特的功能,例如多文件编辑、代码折叠、文件查看器,甚至还有一个 GUI 的向导用来从 snippets 产生代码。尽管我对 Gedit 并不感冒,但我必须承认这些功能比一般插件的功能更好,并且在你花费很长时间去分析数据时它会有很大的帮助。唯一的不足是它的最后一次更新是 2013 年。我真的希望这个项目能够被重新焕发新生。 ### 5. RKWard ### @@ -45,8 +46,8 @@ Linux 上针对 R 语言的好用 IDE via: http://xmodulo.com/good-ide-for-r-on-linux.html 作者:[Adrien Brochard][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20151027 How To Show Desktop In GNOME 3.md b/published/20151027 How To Show Desktop In GNOME 3.md new file mode 100644 index 0000000000..08933c0cb7 --- /dev/null +++ b/published/20151027 How To Show Desktop In GNOME 3.md @@ -0,0 +1,64 @@ +如何在 GNOME 3 中显示桌面 +================================================================================ +![How to show desktop in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-in-GNOME-3.jpg) + +你**如何在 GNOME 3 中显示桌面**?GNOME是一个很棒的桌面环境但是它更加专注于在程序间切换。如果你想关闭所有运行中的窗口,仅仅显示桌面呢? 
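本文接下来会一步步演示如何通过图形界面的系统设置来绑定这个快捷键;如果你更习惯命令行,也可以直接用 gsettings 完成同样的事情。下面是一个简单的示意(其中的 `<Super>d` 组合键只是示例,可以换成你喜欢的按键):

    # 将“隐藏所有普通窗口”(即显示桌面)绑定到 Super+D
    gsettings set org.gnome.desktop.wm.keybindings show-desktop "['<Super>d']"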
+ +在Windows中,你可以按下Windows+D。在Ubuntu Unity中,可以用Ctrl+Super+D快捷键。不过由于一些原因,GNOME禁用了显示桌面的快捷键。 + +当你按下Super+D或者Ctrl+Super+D,什么都不会发生。如果你想要看到桌面,你得一个个最小化窗口。如果你有好几个打开的窗口那么这会非常不方便。 + +在本教程中,我将会向你展示在[GNOME 3][1]中添加显示桌面的快捷键。 + +### 在GNOME 3 中添加显示桌面的快捷键 ### + +我在本教程的使用的是带有GNOME 3.18的[Antergos Linux][2],但是这些步骤对于任何GNOME 3版本的Linux发行版都适用。同时,Antergos也使用了[Numix主题][3]作为默认主题。因此你也许不会看到平常的GNOME图标。但是我相信步骤是一目了然的,很容易就能理解。 + +#### 第一步 #### + +进入系统设置。点击右上角,在下拉列表中,点击系统设置图标。 + +![System Settings in GNOME Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-1.png) + +#### 第二步 #### + +当你在系统设置中时,寻找Keyboard设置。 + +![Keyboard settings in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-2.png) + +#### 第三步 #### + +在这里,选择**Shortcuts**标签并在左边拦选择**Navigation**。向下滚动一点查找**Hide all normal windows**。你会看见它已经被禁用了。 + +![Shortcut keys in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-3.jpeg) + +#### 第四步 #### + +在“Hide all normla windows”上面点击一下。你会看到它变成了**New accelerator**。现在无论你按下哪个键,它都会被指定为显示桌面的快捷键。 + +如果你不小心按下了错误的组合键,只要按下退格它就会被禁用。再次点击并使用需要的组合键。 + +![Shortcut key edit in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-4.jpeg) + +#### 第五步 #### + +一旦设置了组合键,只要关闭系统设置。不用保存设置因为更改是立即生效的。在本例中,我使用Ctrl+Super+D来与我在Ubuntu Unity中的使用习惯保持一致。 + +![Keyboard shortcut edit in GNOME](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-5.jpeg) + +就是这样。享受GNOME 3中的显示桌面快捷键吧。我希望这篇教程对你们有用。有任何问题、建议或者留言都欢迎:) + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/show-desktop-gnome-3/ + +作者:[Abhishek][a] +译者:[geekpi](https://github.com/geekpi) +校对:[Caroline](https://github.com/carolinewuyan) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:https://www.gnome.org/gnome-3/ +[2]:http://itsfoss.com/tag/antergos/ +[3]:https://linux.cn/article-3281-1.html diff --git a/translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/published/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md similarity index 69% rename from translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md rename to published/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md index 7a373cd76b..b60fdfe39d 100644 --- a/translated/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md +++ b/published/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md @@ -1,32 +1,32 @@ -RHCE 第三部分 - 如何使用 Linux 工具集产生和发送系统活动报告 +RHCE 系列(三):如何使用 Linux 工具集生成和发送系统活动报告 ================================================================================ -作为一个系统工程师,你经常需要生成一些显示系统资源利用率的报告,以便确保:1)正最佳利用它们,2)防止出现瓶颈,3)确保可扩展性,以及其它原因。 +作为一个系统工程师,你经常需要生成一些显示系统资源利用率的报告,以便确保:1)正在合理利用系统,2)防止出现瓶颈,3)确保可扩展性,以及其它原因。 ![监视 Linux 性能活动报告](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg) -RHCE 第三部分:监视 Linux 性能活动报告 +*RHCE 第三部分:监视 Linux 性能活动报告* -除了著名的用于检测磁盘、内存和 CPU 使用率的原生 Linux 工具 - 可以给出很多例子,红帽企业版 Linux 7 还提供了两个额外的工具集用于为你的报告增加可以收集的数据:sysstat 和 dstat。 +除了著名的用于检测磁盘、内存和 CPU 使用率的原生 Linux 工具 - 可以给出很多例子,红帽企业版 Linux 7 还提供了另外两个可以为你的报告更多数据的工具套装:sysstat 和 dstat。 在这篇文章中,我们会介绍两者,但首先让我们来回顾一下传统工具的使用。 ### 原生 Linux 工具 ### -使用 
df,你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。 +使用 df,你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视这两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。 # df -h [以人类可读形式显示输出] # df -h --total [生成总计] ![检查 Linux 总的磁盘使用](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png) -检查 Linux 总的磁盘使用 +*检查 Linux 总的磁盘使用* # df -i [显示文件系统的 inode 数目] # df -i --total [生成总计] ![检查 Linux 总的 inode 数目](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png) -检查 Linux 总的 inode 数目 +*检查 Linux 总的 inode 数目* 用 du,你可以估计文件、目录或文件系统的文件空间使用。 @@ -37,7 +37,7 @@ RHCE 第三部分:监视 Linux 性能活动报告 ![检查 Linux 目录磁盘大小](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png) -检查 Linux 目录磁盘大小 +*检查 Linux 目录磁盘大小* 别错过了: @@ -56,7 +56,7 @@ RHCE 第三部分:监视 Linux 性能活动报告 ![检查 Linux 系统性能](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png) -检查 Linux 系统性能 +*检查 Linux 系统性能* 正如你从上面图片看到的,vmstat 的输出分为很多列:proc(process)、memory、swap、io、system、和 CPU。每个字段的意义可以在 vmstat man 手册的 FIELD DESCRIPTION 部分找到。 @@ -66,20 +66,20 @@ RHCE 第三部分:监视 Linux 性能活动报告 ![Vmstat Linux 性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png) -Vmstat Linux 性能监视 +*Vmstat Linux 性能监视* 请注意当磁盘上的文件被更改时,活跃内存的数量增加,写到磁盘的块数目(bo)和属于用户进程的 CPU 时间(us)也是这样。 -或者一个保存大文件到磁盘时(dsync 引发): +或者直接保存一个大文件到磁盘时(由 dsync 标志引发): # vmstat -a 1 5 # dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync ![Vmstat Linux 磁盘性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png) -Vmstat Linux 磁盘性能监视 +*Vmstat Linux 磁盘性能监视* -在这个例子中,我们可以看到很大数目的块被写入到磁盘(bo),这正如预期的那样,同时 CPU 处理任务之前等待 IO 操作完成的时间(wa)也增加了。 +在这个例子中,我们可以看到大量的块被写入到磁盘(bo),这正如预期的那样,同时 CPU 处理任务之前等待 IO 操作完成的时间(wa)也增加了。 **别错过**: [Vmstat – Linux 性能监视][3] @@ -90,22 +90,22 @@ Vmstat Linux 磁盘性能监视 sysstat 软件包包含以下工具: - sar (收集、报告、或者保存系统活动信息)。 -- sadf (以多种方式显式 sar 收集的数据)。 +- sadf (以多种方式显示 sar 收集的数据)。 - mpstat (报告处理器相关的统计信息)。 - iostat (报告 CPU 统计信息和设备以及分区的 IO统计信息)。 - pidstat (报告 Linux 任务统计信息)。 - nfsiostat (报告 NFS 的输出/输出统计信息)。 - cifsiostat (报告 CIFS 统计信息) -- sa1 (收集并保存系统活动日常文件的二进制数据)。 -- sa2 (在 /var/log/sa 目录写每日报告)。 +- sa1 (收集并保存二进制数据到系统活动每日数据文件中)。 +- sa2 (在 /var/log/sa 目录写入每日报告)。 -dstat 为这些工具提供的功能添加了一些额外的特性,以及更多的计数器和更大的灵活性。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。 +dstat 比这些工具所提供的功能更多一些,并且提供了更多的计数器和更大的灵活性。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。 安装两个软件包: # yum update && yum install sysstat dstat -sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件中找到下面的参数: +sysstat 主要的配置文件是 `/etc/sysconfig/sysstat`。你可以在该文件中找到下面的参数: # How long to keep log files (in days). # If value is greater than 28, then log files are kept in @@ -119,17 +119,17 @@ sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件 # Compression program to use. 
ZIP="bzip2" -sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron 作业。第一个作业每 10 分钟运行系统活动计数工具并在 /var/log/sa/saXX 中保存报告,其中 XX 是该月的一天。 +sysstat 安装完成后,`/etc/cron.d/sysstat` 中会添加和启用两个 cron 任务。第一个任务每 10 分钟运行系统活动计数工具,并在 `/var/log/sa/saXX` 中保存报告,其中 XX 是该月的一天。 -因此,/var/log/sa/sa05 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值: +因此,`/var/log/sa/sa05` 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值: */10 * * * * root /usr/lib64/sa/sa1 1 1 -第二个作业在每天夜间 11:53 生成每日进程计数总结并把它保存到 /var/log/sa/sarXX 文件,其中 XX 和之前例子中的含义相同: +第二个任务在每天夜间 11:53 生成每日进程计数总结并把它保存到 `/var/log/sa/sarXX` 文件,其中 XX 和之前例子中的含义相同: 53 23 * * * root /usr/lib64/sa/sa2 -A -例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 5:30 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(它也允许你创建表格和图片): +例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 5:30 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(这样就可以让你创建表格和图片了): # sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv @@ -137,7 +137,7 @@ sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron ![Linux 系统统计信息](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png) -Linux 系统统计信息 +*Linux 系统统计信息* 最后,让我们看看 dstat 提供什么功能。请注意如果不带参数运行,dstat 默认使用 -cdngy(表示 CPU、磁盘、网络、内存页、和系统统计信息),并每秒添加一行(可以在任何时候用 Ctrl + C 中断执行): @@ -145,15 +145,15 @@ Linux 系统统计信息 ![Linux 磁盘统计检测](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png) -Linux 磁盘统计检测 +*Linux 磁盘统计检测* 要输出统计信息到 .csv 文件,可以用 -output 标记后面跟一个文件名称。让我们来看看在 LibreOffice Calc 中该文件看起来是怎样的: ![检测 Linux 统计信息输出](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png) -检测 Linux 统计信息输出 +*检测 Linux 统计信息输出* -我强烈建议你查看 dstat 的 man 手册,为了方便你的阅读用 PDF 格式包括本文以及 sysstat 的 man 手册。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。 +为了更多的阅读体验,我强烈建议你查看 [dstat][5] 和 [sysstat][6] 的 pdf 格式 man 手册。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。 **别错过**: [Sysstat – Linux 的使用活动检测工具][4] @@ -161,7 +161,7 @@ Linux 磁盘统计检测 在该指南中我们解释了如何使用 Linux 原生工具以及 RHEL 7 提供的特定工具来生成系统使用报告。在某种情况下,你可能像依赖最好的朋友那样依赖这些报告。 -你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表格和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。 +你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表单和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。 我们期待你的回复。 @@ -171,12 +171,14 @@ via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statist 作者:[Gabriel Cánepa][a] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/ +[1]:https://linux.cn/article-6466-1.html [2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/ -[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/ -[4]:http://www.tecmint.com/install-sysstat-in-linux/ \ No newline at end of file +[3]:https://linux.cn/article-4024-1.html +[4]:https://linux.cn/article-4028-1.html +[5]:http://www.tecmint.com/wp-content/pdf/dstat.pdf +[6]:http://www.tecmint.com/wp-content/pdf/sysstat.pdf \ No newline at end of file diff --git a/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/published/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md similarity index 81% rename from translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md rename to published/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md index 37a3dbe11c..fdf0d29c96 100644 
--- a/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md +++ b/published/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md @@ -1,20 +1,20 @@ -第四部分 - 使用 Shell 脚本自动化 Linux 系统维护任务 +RHCE 系列(四): 使用 Shell 脚本自动化 Linux 系统维护任务 ================================================================================ -之前我听说高效系统管理员/工程师的其中一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因: +之前我听说高效的系统管理员的一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因: ![自动化 Linux 系统维护任务](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png) -RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 +*RHCE 系列:第四部分 - 自动化 Linux 系统维护任务* -如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得尽量花费少的时间去做重复的工作,以及通过使用该系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。 +如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得其尽量花费少的时间去做重复的工作,以及通过使用本系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具来预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。 ### 什么是 shell 脚本? ### -简单的说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和端用户之间提供接口的另一个程序。 +简单的说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和最终用户之间提供接口的另一个程序。 -默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看 [维基页面][2]。 +默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看这个[维基页面][2]。 -关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 ([Bash 命令][3])下载 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。 +关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 ([Bash 命令][3])处下载 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。 ### 写一个脚本显示系统信息 ### @@ -27,7 +27,7 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 #!/bin/bash - # RHCE 系列第四部分事例脚本 + # RHCE 系列第四部分示例脚本 # 该脚本会返回以下这些系统信息: # -主机名称: echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m" @@ -67,9 +67,9 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 ![服务器监视 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png) -服务器监视 Shell 脚本 +*服务器监视 Shell 脚本* -该功能用以下命令提供: +颜色功能是由以下命令提供的: echo -e "\e[COLOR1;COLOR2m\e[0m" @@ -79,13 +79,13 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 你想使其自动化的任务可能因情况而不同。因此,我们不可能在一篇文章中覆盖所有可能的场景,但是我们会介绍使用 shell 脚本可以使其自动化的三种典型任务: -**1)** 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阀值时发出警告。 +1) 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阀值时发出警告。 让我们在脚本目录中新建一个名为 `auto_tasks.sh` 的文件并添加以下内容: #!/bin/bash - # 自动化任务事例脚本: + # 自动化任务示例脚本: # -更新本地文件数据库: echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m" updatedb @@ -123,16 +123,16 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 ![查找 777 权限文件的 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png) -查找 777 权限文件的 Shell 脚本 +*查找 777 权限文件的 Shell 脚本* ### 使用 Cron ### -想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发动给预定义的接收者或者将它们保存到使用 web 浏览器可以查看的文件中。 +想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发动给预先指定的接收者,或者将它们保存到使用 web 浏览器可以查看的文件中。 下面的脚本(filesystem_usage.sh)会运行有名的 **df -h** 命令,格式化输出到 HTML 表格并保存到 **report.html** 文件中: #!/bin/bash - # Sample script to demonstrate the creation of an HTML report using shell scripting + # 演示使用 shell 脚本创建 HTML 报告的示例脚本 # Web directory WEB_DIR=/var/www/html # A little CSS and table layout to make the report look a little nicer @@ -177,7 +177,7 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 ![服务器监视报告](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png) -服务器监视报告 +*服务器监视报告* 
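在把这个脚本交给 cron 定时执行之前,不妨先手动运行一次,确认 HTML 报告能够正常生成。下面是一个简单的检查方法(假设脚本按下文 crontab 条目中的路径保存为 /root/scripts/filesystem_usage.sh,Web 目录沿用脚本中的 /var/www/html):

    # 赋予执行权限并手动运行一次
    chmod +x /root/scripts/filesystem_usage.sh
    /root/scripts/filesystem_usage.sh

    # 确认报告已经生成,然后再到浏览器里查看效果
    ls -l /var/www/html/report.html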
你可以添加任何你想要的信息到那个报告中。添加下面的 crontab 条目在每天下午的 1:30 运行该脚本: @@ -193,12 +193,12 @@ via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintena 作者:[Gabriel Cánepa][a] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ +[1]:https://linux.cn/article-6512-1.html [2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 [3]:http://www.tecmint.com/wp-content/pdf/bash.pdf [4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/ diff --git a/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/published/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md similarity index 73% rename from translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md rename to published/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md index a37c9610fd..ab4ddd5a32 100644 --- a/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md +++ b/published/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md @@ -1,26 +1,24 @@ -第五部分 - 如何在 RHEL 7 中管理系统日志(配置、旋转以及导入到数据库) +RHCE 系列(五):如何在 RHEL 7 中管理系统日志(配置、轮换以及导入到数据库) ================================================================================ -为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件监控系统中发生的所有活动。这样,你就可以检测任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。 +为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件来监控系统中发生的所有活动。这样,你就可以检测到任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。 -![Linux 中使用 Rsyslog 和 Logrotate 旋转日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg) +![Linux 中使用 Rsyslog 和 Logrotate 轮换日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg) -(译者注:[日志旋转][9]是系统管理中归档每天产生的日志文件的自动化过程) - -RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志 +*RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志* 在 RHEL 7 中,[rsyslogd][1] 守护进程负责系统日志,它从 /etc/rsyslog.conf(该文件指定所有系统日志的默认路径)和 /etc/rsyslog.d 中的所有文件(如果有的话)读取配置信息。 ### Rsyslogd 配置 ### -快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslong 按照模块化设计),全局指令(用于设置 rsyslogd 守护进程的全局属性),以及规则。正如你可能猜想的,最后一个部分指示获取,显示以及在哪里保存什么的日志(也称为选择子),这也是这篇博文关注的重点。 +快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslong 按照模块化设计),全局指令(用于设置 rsyslogd 守护进程的全局属性),以及规则。正如你可能猜想的,最后一个部分指示记录或显示什么以及在哪里保存(也称为选择子(selector)),这也是这篇文章关注的重点。 rsyslog.conf 中典型的一行如下所示: ![Rsyslogd 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png) -Rsyslogd 配置 +*Rsyslogd 配置* -在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的设备:优先级(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3] 查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一: +在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的“设备:优先级”(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3],查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一: - debug - info @@ -31,7 +29,7 @@ Rsyslogd 配置 - alert - emerg -尽管自身并不是一个优先级,关键字 none 意味着指定设备没有任何优先级。 +尽管 none 并不是一个优先级,不过它意味着指定设备没有任何优先级。 **注意**:给定一个优先级表示该优先级以及之上的消息都应该记录到日志中。因此,上面例子中的行指示 rsyslogd 守护进程记录所有优先级为 info 以及以上(不管是什么设备)的除了属于 mail、authpriv、以及 cron 服务(不考虑来自这些设备的消息)的消息到 /var/log/messages。 @@ -47,7 +45,7 @@ Rsyslogd 
配置 #### 创建自定义日志文件 #### -要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(易于管理)添加下面一行: +要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(这样易于管理)添加下面一行: daemon.* /var/log/tecmint.log @@ -55,19 +53,19 @@ Rsyslogd 配置 # systemctl restart rsyslog -在随机重启两个守护进程之前和之后查看自定义日志的内容: +在随便重启两个守护进程之前和之后查看下自定义日志的内容: ![Linux 创建自定义日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png) -创建自定义日志文件 +*创建自定义日志文件* 作为一个自学练习,我建议你重点关注设备和优先级,添加额外的消息到已有的日志文件或者像上面那样创建一个新的日志文件。 -### 使用 Logrotate 旋转日志 ### +### 使用 Logrotate 轮换日志 ### -为了防止日志文件无限制增长,logrotate 工具用于旋转、压缩、移除或者通过电子邮件发送日志,从而减轻管理会产生大量日志文件系统的困难。 +为了防止日志文件无限制增长,logrotate 工具用于轮换、压缩、移除或者通过电子邮件发送日志,从而减轻管理会产生大量日志文件系统的困难。(译者注:[日志轮换][9](rotate)是系统管理中归档每天产生的日志文件的自动化过程) -Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。 +Logrotate 作为一个 cron 任务(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。 对于 rsyslog,即使你可以在主文件中为指定服务包含设置,为每个服务创建单独的配置文件能帮助你更好地组织设置。 @@ -75,27 +73,27 @@ Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行, ![Logrotate 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png) -Logrotate 配置 +*Logrotate 配置* -在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月旋转一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如旋转指令指定的: +在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月轮换一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如轮换指令指定的: ![每月 Logrotate 日志](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png) -每月 Logrotate 日志 +*每月 Logrotate 日志* 让我们再来看看 /etc/logrotate.d/httpd 中的另一个例子: -![旋转 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png) +![轮换 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png) -旋转 Apache 日志文件 +*轮换 Apache 日志文件* 你可以在 logrotate 的 man 手册([man logrotate][4] 和 [man logrotate.conf][5])中阅读更多有关它的设置。为了方便你的阅读,本文还提供了两篇文章的 PDF 格式。 -作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,取决于你是否有一个单独的分区/逻辑卷给 /var。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能被迫要保留多个日志。 +作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,这取决于你是否有一个单独的分区/逻辑卷给 `/var`。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能必须要保留多个日志。 #### 保存日志到数据库 #### -当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(OTB 支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle)。 +当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(OTB 支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle 等)。 指南的这部分假设你已经在要管理日志的 RHEL 7 上安装了 MariaDB 服务器和客户端: @@ -104,10 +102,9 @@ Logrotate 配置 然后使用 `mysql_secure_installation` 工具为 root 用户设置密码以及其它安全考量: - ![保证 MySQL 数据库安全](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png) -保证 MySQL 数据库安全 +*保证 MySQL 数据库安全* 注意:如果你不想用 MariaDB root 用户插入日志消息到数据库,你也可以配置用另一个用户账户。如何实现的介绍已经超出了本文的范围,但在 [MariaDB 知识][6] 中有详细解析。为了简单在这篇指南中我们会使用 root 账户。 @@ -117,7 +114,7 @@ Logrotate 配置 ![保存服务器日志到数据库](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png) -保存服务器日志到数据库 +*保存服务器日志到数据库* 最后,添加下面的行到 /etc/rsyslog.conf: @@ -132,18 +129,18 @@ Logrotate 配置 #### 使用 SQL 语法查询日志 #### -现在执行一些会改变日志的操作(例如停止和启动服务),然后登陆到你的 DB 服务器并使用标准的 SQL 命令显示和查询日志: +现在执行一些会改变日志的操作(例如停止和启动服务),然后登录到你的数据库服务器并使用标准的 SQL 命令显示和查询日志: USE Syslog; SELECT ReceivedAt, Message FROM SystemEvents; ![在数据库中查询日志](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png) -在数据库中查询日志 +*在数据库中查询日志* ### 总结 ### 
-在这篇文章中我们介绍了如何设置系统日志,如果旋转日志以及为了简化查询如何重定向消息到数据库。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。 +在这篇文章中我们介绍了如何设置系统日志,如果轮换日志以及为了简化查询如何重定向消息到数据库。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。 正如往常,非常欢迎你的反馈。用下面的表单和我们联系吧。 @@ -153,7 +150,7 @@ via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotat 作者:[Gabriel Cánepa][a] 译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -165,5 +162,5 @@ via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotat [5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf [6]:https://mariadb.com/kb/en/mariadb/create-user/ [7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql -[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ +[8]:https://linux.cn/article-6451-1.html [9]:https://en.wikipedia.org/wiki/Log_rotation \ No newline at end of file diff --git a/translated/talk/The history of Android/06 - The history of Android.md b/published/The history of Android/06 - The history of Android.md similarity index 61% rename from translated/talk/The history of Android/06 - The history of Android.md rename to published/The history of Android/06 - The history of Android.md index 030bb83ca8..363fd85a84 100644 --- a/translated/talk/The history of Android/06 - The history of Android.md +++ b/published/The history of Android/06 - The history of Android.md @@ -1,48 +1,48 @@ -The history of Android +安卓编年史(6) ================================================================================ ![T-Mobile G1](http://cdn.arstechnica.net/wp-content/uploads/2014/04/t-mobile_g1.jpg) -T-Mobile G1 -T-Mobile供图 + +*T-Mobile G1 [T-Mobile供图]* ### 安卓1.0——谷歌系app和实体硬件的引入 ### -到了2008年10月,安卓1.0已经准备好发布,这个系统在[T-Mobile G1][1](又以HTC Dream为人周知)上初次登台。G1进入了被iPhone 3G和[Nokia 1680 classic][2]所主宰的市场。(这些手机并列获得了2008年[销量最佳手机][3]称号,各自卖出了350万台。)G1的销量数字已难以获得,但T-Mobile宣称截至2009年4月该设备的销量突破了100万台。无论从哪方面来说这在竞争中都处于落后地位。 +到了2008年10月,安卓1.0已经准备好发布,这个系统在[T-Mobile G1][1](又以HTC Dream为人周知)上初次登台。G1进入了被iPhone 3G和[Nokia 1680 classic][2]所主宰的市场。(这些手机并列获得了2008年[销量最佳手机][3]称号,各自卖出了350万台。)G1的具体销量数字已难以获得,但T-Mobile宣称截至2009年4月该设备的销量突破了100万台。无论从哪方面来说这在竞争中都处于落后地位。 -G1拥有单核528Mhz的ARM 11处理器,一个Adreno 130的GPU,192MB内存,以及多达256MB的存储空间供给系统以及应用使用。它有一块3.2英寸,320x480分辨率的显示屏,被布置在一个含有实体全键盘的滑动结构之上。所以尽管安卓软件的确走过了很长的一段路,硬件也是的。时至今日,我们可以在厂商的一个手表中得到比这更好的参数:最新的[三星智能手表][4]拥有512MB内存以及1GHz的双核处理器。 +G1拥有单核528Mhz的ARM 11处理器,一个Adreno 130的GPU,192MB内存,以及多达256MB的存储空间提供给系统以及应用使用。它有一块3.2英寸、320x480分辨率的显示屏,被布置在一个含有实体全键盘的滑动结构之上。所以尽管安卓软件的确走过了很长的一段路,硬件也是的。时至今日,我们可以在一个厂商提供手表中得到比这更好的参数:最新的[三星智能手表][4]拥有512MB内存以及1GHz的双核处理器。 -当iPhone有着最少数量的按键的时候,G1确实完全相反的,按键几乎支持每个硬件控制。它有拨通和挂断按钮,home键,后退,以及菜单键,一个相机快门键,音量控制键,一个轨迹球,当然,还有50个键盘按钮。未来安卓设备将会慢慢离开按键多多的界面设计,几乎每部新旗舰都在减少按键的数量。 +当iPhone有着最少数量的按键的时候,G1确实完全相反的,按键几乎支持每个硬件控制。它有拨通和挂断按钮,home键,后退,以及菜单键,一个相机快门键,音量控制键,一个轨迹球,当然,还有50个键盘按键。未来安卓设备将会慢慢离开按键多多的界面设计,几乎每部新旗舰都在减少按键的数量。 但是这是第一次,人们见到了运行在实机上的安卓,而不是跑在一个令人沮丧的慢吞吞的模拟器上。安卓1.0没有iPhone那样顺滑流畅,闪亮耀眼,或拥有那么多的新闻报道。它也不像Windows Mobile 6.5那样才华横溢。但这仍然是个好的开始。 ![安卓1.0和0.9的默认应用列表。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/apps.png) -安卓1.0和0.9的默认应用列表。 -Ron Amadeo供图 -安卓1.0的核心与两个月前发布的beta版本相比看起来并没有什么引人注目的不同,但消费者产品带来了不少应用,包括一套完整的谷歌系应用。日历,电子邮件,Gmail,即时通讯,市场,设置,语音拨号,以及YouTube都是全新登场。那时候,音乐是智能手机上占据主宰地位的媒体类型,其王者是iTunes音乐商店。谷歌没有自家的音乐服务,所以它选择了亚马逊并绑定了亚马逊MP3商店。 +*安卓1.0和0.9的默认应用列表。[Ron Amadeo供图]* 
-安卓最重要的新增是谷歌商店的首次登场,叫做“安卓市场Beta”。与此同时大部分公司满足于将它们的软件目录称作一些不同的“应用商店”——意思是一个出售应用的商店,并且只出售应用——谷歌明显有着更大的野心。它搭配了一个更为通用的名字,“安卓市场”。这个名字的想法是安卓市场不仅仅拥有应用,还拥有一切你的安卓设备所需要的东西。 +安卓1.0的核心与两个月前发布的beta版本相比看起来并没有什么引人注目的不同,但这个消费产品带来了不少应用,包括一套完整的谷歌系应用。日历,电子邮件,Gmail,即时通讯,市场,设置,语音拨号,以及YouTube都是全新登场。那时候,音乐是智能手机上占据主宰地位的媒体类型,其王者是iTunes音乐商店。谷歌没有自家的音乐服务,所以它选择了亚马逊并绑定了亚马逊MP3商店。 + +安卓最重要的新增内容是首次登场的谷歌商店,叫做“安卓市场Beta”。与此同时大部分公司满足于将它们的软件目录称作各种“应用商店”——意思是一个出售应用的商店,并且只出售应用——谷歌明显有着更大的野心。它搭配了一个更为通用的名字,“安卓市场”。这个名字的想法是安卓市场不仅仅拥有应用,还拥有一切你的安卓设备所需要的东西。 ![第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/market.png) -第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。 -[Google][5]供图 -那时候,安卓市场只提供应用和游戏,开发者们甚至还不能为它们收费。苹果的App Store相对与安卓市场有4个月的先发优势,但是谷歌的主要差异化在于安卓的商店几乎是完全开放的。在iPhone上,应用受制于苹果的审查,必须遵循设计和技术指南。潜在的新应用不允许在功能上复制已有应用。在安卓市场,开发者可以自由地做任何想做的,包括开发替代已有的应用。控制的缺失会转变成祝福同时也是诅咒。它允许开发者革新已有的功能,但同时意味着甚至是毫无价值的垃圾应用也被允许进入市场。 +*第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。[[Google][5]供图]* -现在,这个客户端是又一个不再能够和谷歌服务器通讯的应用。幸运的是,它也是在因特网上被[真正记录][6]的为数不多的早期安卓应用之一。主页提供了通向一般区域的连接,像应用,游戏,搜索,以及下载,顶部有横向滚动显示的特色应用图标。搜索结果和“我的下载”页面以滚动列表的方式显示应用,显示应用名,开发者,费用(在那时都是免费的),以及评分。单独的应用页面展示了一个简短的描述,安装数,用户评论和评分,以及最重要的安装按钮。早期的安卓市场不支持图片,开发者唯一能使用的区域是应用描述,还有着500字的限制。这使得类似维护一个更新日志变的十分困难,因为只有描述的位置可以供其使用。 +那时候,安卓市场只提供应用和游戏,开发者们甚至还不能为它们收费。苹果的App Store相对与安卓市场有4个月的先发优势,但是谷歌的主要差异化在于安卓的商店几乎是完全开放的。在iPhone上,应用受制于苹果的审查,必须遵循设计和技术指南。潜在的新应用不允许在功能上复制已有应用。在安卓市场,开发者可以自由地做任何想做的,包括开发替代已有的应用。控制的缺失导致福祸相依。它允许开发者革新已有的功能,但同时意味着甚至是毫无价值的垃圾应用也被允许进入市场。 + +时至今日,这个安卓市场的客户端是又一个不再能够和谷歌服务器通讯的应用。幸运的是,它也是在因特网上被[真正记录][6]的为数不多的早期安卓应用之一。主页提供了通向一般区域的连接,像应用,游戏,搜索,以及下载,顶部有横向滚动显示的特色应用图标。搜索结果和“我的下载”页面以滚动列表的方式显示应用,显示应用名,开发者,费用(在那时都是免费的),以及评分。单独的应用页面展示了一个简短的描述,安装数,用户评论和评分,以及最重要的安装按钮。早期的安卓市场不支持图片,开发者唯一能使用的区域是应用描述,还有着500字的限制。这使得类似维护一个更新日志变的十分困难,因为只有描述的位置可以供其使用。 就在安装之前,安卓市场显示了应用所需要的权限。这是苹果直至2012年之前都避免做的,那年一个iOS应用被发现在用户不知情的情况下[将完整的通讯录上传][7]到云端。权限显示给出了一个完整的应用用到的权限列表,尽管这个版本强迫用户同意应用权限。界面有个“OK”按钮,但是除了后退按钮没有办法取消。 ![Gmail展示收件箱,打开菜单的收件箱。 ](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gmail1.01.png) -Gmail展示收件箱,打开菜单的收件箱。 -Ron Amadeo供图 -下一个重要的应用也许就是Gmail。大多数基本的功能此时已经准备好了。未读邮件以加粗显示,标签是个有颜色的标记。在收件箱中每封独立邮件显示着主题,发件人,以及一个会话中的回复数。Gmail加星标志也在这里——快速点击即可给邮件加星或取消。一如往常,对于早期版本的安卓,菜单里有收件箱视图应有的所有按钮。但是,一旦打开了一封邮件,界面看起来就更加的现代了,“回复”和“转发”按钮永久固定在了屏幕底部。各个独立回复可以点击它们来展开和收缩。 +*Gmail展示收件箱,打开菜单的收件箱。[Ron Amadeo供图]* + +下一个重要的应用也许就是Gmail。大多数基本的功能此时已经准备好了。未读邮件以加粗显示,标签是个有颜色的标记。在收件箱中每封独立邮件显示着主题,发件人,以及一个会话中的回复数。Gmail加星标志也在这里——快速点击即可给邮件加星或取消。一如往常,对于早期版本的安卓,菜单里有收件箱视图应有的所有按钮。但是,一旦打开了一封邮件,界面看起来就更加的现代了,“回复”和“转发”按钮永久固定在了屏幕底部。单独回复可以点击它们来展开和收缩。 圆角,阴影,以及气泡图标给了整个应用“卡通”的外表,但是这是个好的开始。安卓的功能第一哲学真正从此开始:Gmail支持标签,邮件会话,搜索,以及邮件推送。 ![Gmail在安卓1.0的标签视图,写邮件界面,以及设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gmail3.png) -Gmail在安卓1.0的标签视图,写邮件界面,以及设置。 -Ron Amadeo供图 + +*Gmail在安卓1.0的标签视图,写邮件界面,以及设置。[Ron Amadeo供图]* 但是如果你认为Gmail很丑,电子邮件应用又拉低了下限。它没有分离的收件箱或文件夹视图——所有东西都糊在一个界面。应用呈现给你一个文件夹列表,点击一个文件夹会以内嵌的方式展开内容。未读邮件左侧有条绿色的线指示,这就是电子邮件应用的界面。这个应用支持IMAP和POP3,但是没有Exchange。 @@ -58,7 +58,7 @@ Ron Amadeo供图 via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/6/ -译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID) +译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/translated/talk/The history of Android/07 - The history of Android.md b/published/The history of Android/07 - The history of Android.md similarity index 51% rename from translated/talk/The history 
of Android/07 - The history of Android.md rename to published/The history of Android/07 - The history of Android.md index 583e847d6e..9777963d63 100644 --- a/translated/talk/The history of Android/07 - The history of Android.md +++ b/published/The history of Android/07 - The history of Android.md @@ -1,90 +1,90 @@ -安卓编年史 +安卓编年史(7) ================================================================================ ![电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png) -电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。 -Ron Amadeo供图 -邮件视图是——令人惊讶的!——白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。 +*电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。 [Ron Amadeo供图]* + +邮件视图是——令人惊讶的!居然是白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。 ![即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png) -即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。 -Ron Amadeo供图 -在Google Hangouts之前,甚至是Google Talk之前,就有“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗? +*即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。[Ron Amadeo供图]* -朋友列表是聊天中带有白色聊天气泡的黑色背景界面。状态用一个带颜色的圆形来指示,右侧的小安卓机器人指示出某人正在使用移动设备。IM应用相比Google Hangouts远比它有沟通性,这真是十分神奇的。绿色代表着某人正在使用设备并且已经登录,黄色代表着他们登录了但处于空闲状态,红色代表他们手动设置状态为忙,不想被打扰,灰色表示离线。现在Hangouts只显示用户是否打开了应用。 +在Google Hangouts之前,甚至是Google Talk之前,就有了“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗? + +朋友列表是黑色背景界面,如果在聊天中则带有白色聊天气泡。状态用一个带颜色的圆形来指示,右侧的小安卓机器人指示出某人正在使用移动设备。IM应用相比Google Hangouts远比它有沟通性,这真是十分神奇的。绿色代表着某人正在使用设备并且已经登录,黄色代表着他们登录了但处于空闲状态,红色代表他们手动设置状态为忙,不想被打扰,灰色表示离线。现在Hangouts只显示用户是否打开了应用。 聊天对话界面明显基于信息应用,聊天的背景从白色和蓝色被换成了白色和绿色。但是没人更改信息输入框的颜色,所以加上橙色的高亮效果,界面共使用了白色,绿色,蓝色和橙色。 ![安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png) -安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。 -Ron Amadeo供图 -YouTube仅仅以G1的320p屏幕和3G网络速度可能不会有今天这样的移动意识,但谷歌的视频服务在安卓1.0上就被置入发布了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别? +*安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。[Ron Amadeo供图]* -一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。 +以G1的320p屏幕和3G网络速度,YouTube可能不会有今天这样的手机上的表现,但谷歌的视频服务在安卓1.0上就被置入发布了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别? 
-菜单含有搜索,喜爱,分类,设置。设置(没有图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。 +这是一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。 + +菜单含有搜索,喜爱,分类,设置。设置(没有该图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。 最后一张截图展示了视频播放界面,只支持横屏模式。尽管自动隐藏的播放控制有个进度条,但它还是很奇怪地包含了后退和前进按钮。 ![YouTube的视频菜单,描述页面,评论。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png) -YouTube的视频菜单,描述页面,评论。 -Ron Amadeo供图 -每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为喜爱,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。 +*YouTube的视频菜单,描述页面,评论。[Ron Amadeo供图]* + +每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为“喜爱”,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。 然而“共享”不会打开一个对话框,它只是向Gmail邮件中加入了视频的链接。想要把链接通过短信或即时消息发送给别人是不可能的。你可以阅读评论,但是没办法评价他们或发表自己的评论。你同样无法给视频评分或赞。 ![相机应用的拍照界面,菜单,照片浏览模式。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png) -相机应用的拍照界面,菜单,照片浏览模式。 -Ron Amadeo供图 -在实体机上跑上真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。 +*相机应用的拍照界面,菜单,照片浏览模式。[Ron Amadeo供图]* + +在实体机上跑真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。 菜单按钮显示两个选项:跳转到相册应用和带有两个选项的设置界面。第一个设置选项是是否给照片加上地理标记,第二个是在每次拍摄后显示提示菜单,你可以在上面右边看到截图。同样的,你目前还只能拍照——还不支持视频拍摄。 ![日历的月视图,打开菜单的周视图,日视图,以及日程。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png) -日历的月视图,打开菜单的周视图,日视图,以及日程。 -Ron Amadeo供图 + +*日历的月视图,打开菜单的周视图,日视图,以及日程。[Ron Amadeo供图]* 就像这个时期的大多数应用一样,日历的主命令界面是菜单。菜单用来切换视图,添加新事件,导航至当天,选择要显示的日程,以及打开设置。菜单扮演着每个单独按钮的入口的作用。 月视图不能显示约会事件的文字。每个日期旁边有个侧边,约会会显示为侧边上的绿色部分,通过位置来表示约会是在一天中的什么时候。周视图同样不能显示预约文字——G1的320×480的显示屏像素还不够密——所以你会在日历中看到一个带有颜色指示条的白块。唯一一个显示文字的是日程和日视图。你可以用滑动来切换日期——左右滑动切换周和日,上下滑动切换月份和日程。 ![设置主界面,无线设置,关于页面的底部。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png) -设置主界面,无线设置,关于页面的底部。 -Ron Amadeo供图 -安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。 +*设置主界面,无线设置,关于页面的底部。[Ron Amadeo供图]* -任何带有开/关状态的选项都使用了卡通风的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。 +安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边上的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。 + +任何带有开/关状态的选项都使用了卡通风格的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。 设置界面意味着我们终于可以打开安全设置并更改锁屏。安卓1.0只有两种风格,安卓0.9那样的灰色方形锁屏,以及需要你在9个点组成的网格中画出图案的图形解锁。像这样的滑动图案相比PIN码更加容易记忆和输入,尽管它没有增加多少安全性。 ![语音拨号,图形锁屏,电池低电量警告,时间设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png) -语音拨号,图形锁屏,电池低电量警告,时间设置。 -Ron Amadeo供图 -语音功能和语音拨号一同来到了1.0。这个特性以各种功能实现在AOSP徘徊了一段时间,然而它是一个简单的拨打号码和联系人的语音命令应用。语音拨号是个和谷歌未来的语音产品完全无关的应用,但是,它的工作方式和非智能机上的语音拨号一样。 +*语音拨号,图形锁屏,电池低电量警告,时间设置。[Ron Amadeo供图]* + +语音功能和语音拨号一同来到了1.0。这个特性以各种功能实现在AOSP徘徊了一段时间,然而它是一个简单的拨打号码和联系人的语音命令应用。语音拨号是个和谷歌未来的语音产品完全无关的应用,它的工作方式和非智能机上的语音拨号一样。 关于最后一个值得注意的,当电池电量低于百分之十五的时候会触发低电量弹窗。这是个有趣的图案,它把电源线错误的一端插入手机。谷歌,那可不是(现在依然不是)手机应该有的充电方式。 -安卓1.0是个伟大的开头,但是功能上仍然有许多缺失。实体键盘和大量硬件按钮被强制要求配备,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。 +安卓1.0是个伟大的开端,但是功能上仍然有许多缺失。强制配备了实体键盘和大量硬件按钮,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。 ### 安卓1.1——第一个真正的增量更新 ### 
![安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png) -安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。 -Ron Amadeo供图 -安卓1.0发布四个半月后,2009年2月,安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化,谷歌向1.1中添加新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。 +*安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。[Ron Amadeo供图]* -安卓市场添加了对付费应用的支持,但是就像beta客户端中一样,这个版本的安卓市场不再能够连接Google Play服务器。我们最多能够看到分类界面,你可以在免费应用,付费应用和全部应用中选择。 +安卓1.0发布四个半月后,2009年2月,安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化,谷歌向1.1中添加的新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。 + +安卓市场添加了对付费应用的支持,但是就像beta客户端中一样,这个版本的安卓市场已经不能连接Google Play服务器。我们最多能够看到分类界面,你可以在免费应用、付费应用和全部应用中选择。 地图添加了[谷歌纵横][3],一个向朋友分享自己位置的方法。纵横在几个月前为了支持Google+而被关闭并且不再能够工作。地图菜单里有个纵横的选项,但点击它现在只会打开一个带载入中圆圈的画面,并永远停留在这里。 -安卓世界的系统更新来得更加迅速——或者至少是一条在运营商和OEM推送之前获得更新的途径——谷歌向“关于手机”界面添加了检查系统更新按钮。 +安卓世界的系统更新来得更加迅速——或者至少是一条在运营商和OEM推送之前获得更新的途径——谷歌也在“关于手机”界面添加了检查系统更新按钮。 ---------- @@ -98,7 +98,7 @@ Ron Amadeo供图 via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/ -译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID) +译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/sources/news/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md b/sources/news/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md new file mode 100644 index 0000000000..0e689ede9e --- /dev/null +++ b/sources/news/20151104 Ubuntu Software Centre To Be Replaced in 16.04 LTS.md @@ -0,0 +1,56 @@ +Ubuntu Software Centre To Be Replaced in 16.04 LTS +================================================================================ +![The USC Will Be Replaced](http://www.omgubuntu.co.uk/wp-content/uploads/2011/09/usc1.jpg) + +The USC Will Be Replaced + +**The Ubuntu Software Centre is to be replaced in Ubuntu 16.04 LTS.** + +Users of the Xenial Xerus desktop will find that the familiar (and somewhat cumbersome) Ubuntu Software Centre is no longer available. + +GNOME’s [Software application][1] will – according to current plans – take its place as the default and package management utility on the Unity 7-based desktop. + +![GNOME Software](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/gnome-software.jpg) + +GNOME Software + +New plugins will be created to support the Software Centre’s ratings, reviews and paid app features as a result of the switch. + +The decisions were taken at a recent desktop Sprint held at Canonical HQ in London. + +“We are more confident in our ability to add support for Snaps to GNOME Software Centre (sic) than we are to Ubuntu Software Centre. And so, right now, it looks like we will be replacing [the USC] with GNOME Software Centre”, explains Ubuntu desktop manager Will Cooke at the Ubuntu Online Summit. + +GNOME 3.18 stack will also be included in Ubuntu 16.04, with select app updates to GNOME 3.20 apps taken ‘as and when it makes sense’, adds Will Cooke. + +We recently ran a poll on Twitter asking how you install software on Ubuntu. The results suggest that few of you will mourn the passing of the incumbent Software Centre… + +注:投票项目 +Which of these do you use to install software on #Ubuntu? 
+ +- Software Centre +- Terminal + +### Other Apps Being Dropped in Ubuntu 16.04 ### + +The Ubuntu Software Centre is not the only app set to be given the heave-ho in Xenial Xerus. + +Disc burning utility Brasero and instant messaging app **Empathy** are also to be removed from the default install image. + +Neither app is considered to be under active development, and with the march of laptops lacking optical drives and web and mobile-based chat services, they may also be seen as increasingly obsolete. + +If you do have use for them don’t panic: both Brasero and Empathy will **still be available to install on Ubuntu from the archives**. + +It’s not all removals and replacements as one new desktop app is set be included by default: GNOME Calendar. + +-------------------------------------------------------------------------------- + +via: http://www.omgubuntu.co.uk/2015/11/the-ubuntu-software-centre-is-being-replace-in-16-04-lts + +作者:[Sam Tran][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/111008502832304483939?rel=author +[1]:https://wiki.gnome.org/Apps/Software \ No newline at end of file diff --git a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md index 4696862569..c4746bc482 100644 --- a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md +++ b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md @@ -1,4 +1,3 @@ -cygmris is translating... Great Open Source Collaborative Editing Tools ================================================================================ In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is one of the most transparent. That's when I need to take colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore. diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md index 5df980d1db..c14fecc697 100644 --- a/sources/share/20150901 5 best open source board games to play online.md +++ b/sources/share/20150901 5 best open source board games to play online.md @@ -1,4 +1,3 @@ -Translating by H-mudcup 5 best open source board games to play online ================================================================================ I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. 
Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons. diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md b/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md new file mode 100644 index 0000000000..10da3e7cdc --- /dev/null +++ b/sources/share/20151028 Bossie Awards 2015--The best open source application development tools.md @@ -0,0 +1,336 @@ +Bossie Awards 2015: The best open source application development tools +================================================================================ +InfoWorld's top picks among platforms, frameworks, databases, and all the other tools that programmers use + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-app-dev-100613767-orig.jpg) + +### The best open source development tools ### + +There must be a better way, right? The developers are the ones who find it. This year's winning projects in the application development category include client-side frameworks, server-side frameworks, mobile frameworks, databases, languages, libraries, editors, and yeah, Docker. These are our top picks among all of the tools that make it faster and easier to build better applications. + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613773-orig.jpg) + +### Docker ### + +The darling of container fans almost everywhere, [Docker][2] provides a low-overhead way to isolate an application or service’s environment, which serves its stated goal of being an open platform for building, shipping, and running distributed applications. Docker has been widely supported, even among those seeking to replace the Docker container format with an alternative, more secure runtime and format, specifically Rkt and AppC. Heck, Microsoft Visual Studio now supports deploying into a Docker container too. + +Docker’s biggest impact has been on virtual machine environments. Since Docker containers run inside the operating system, many more Docker containers than virtual machines can run in a given amount of RAM. This is important because RAM is usually the scarcest and most expensive resource in a virtualized environment. + +There are hundreds of thousands of runnable public images on Docker Hub, of which a few hundred are official, and the rest are from the community. You describe Docker images with a Dockerfile and build images locally from the Docker command line. You can add both public and private image repositories to Docker Hub. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613778-orig.jpg) + +### Node.js and io.js ### + +[Node.js][2] -- and its recently reunited fork [io.js][3] -- is a platform built on [Google Chrome's V8 JavaScript runtime][4] for building fast, scalable, network applications. Node uses an event-driven, nonblocking I/O model without threads. In general, Node tends to take less memory and CPU resources than other runtime engines, such as Java and the .Net Framework. For example, a typical Node.js Web server can run well in a 512MB instance on Cloud Foundry or a 512MB Docker container. + +The Node repository on GitHub has more than 35,000 stars and more than 8,000 forks. The project, sponsored primarily by Joyent, has more than 600 contributors. 
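+
+To get a feel for the event-driven, nonblocking model described above, here is a minimal sketch of a Node Web server, written in TypeScript for illustration (it assumes only Node's built-in `http` module, plus a Node declaration file from DefinitelyTyped if you want full type checking; plain JavaScript would look almost identical):
+
+    import { createServer } from "http";
+
+    // The callback runs once per request; nothing in it blocks the event loop,
+    // so a single process can service many concurrent connections.
+    const server = createServer((req, res) => {
+        res.writeHead(200, { "Content-Type": "application/json" });
+        res.end(JSON.stringify({ path: req.url, time: Date.now() }));
+    });
+
+    server.listen(8080, () => console.log("Listening on port 8080"));
+
+Database calls, file reads, and other slow work are started the same way, as callbacks or promises, which is what keeps the per-connection memory and CPU cost so low.
+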
Some of the more famous Node applications are 37Signals, [Ancestry.com][5], Chomp, the Wall Street Journal online, FeedHenry, [GE.com][6], Mockingbird, [Pearson.com][7], Shutterstock, and Uber. The popular IoT back-end Node-RED is built on Node, as are many client apps, such as Brackets and Nuclide. + +-- Martin Heller + +![](rticle/2015/09/bossies-2015-angularjs-100613766-orig.jpg) + +### AngularJS ### + +[AngularJS][8] (or simply Angular, among friends) is a Model-View-Whatever (MVW) JavaScript AJAX framework that extends HTML with markup for dynamic views and data binding. Angular is especially good for developing single-page Web applications and linking HTML forms to models and JavaScript controllers. + +The weird sounding Model-View-Whatever pattern is an attempt to include the Model-View-Controller, Model-View-ViewModel, and Model-View-Presenter patterns under one moniker. The differences among these three closely related patterns are the sorts of topics that programmers love to argue about fiercely; the Angular developers decided to opt out of the discussion. + +Basically, Angular automatically synchronizes data from your UI (view) with your JavaScript objects (model) through two-way data binding. To help you structure your application better and make it easy to test, AngularJS teaches the browser how to do dependency injection and inversion of control. + +Angular was created by Google and open-sourced under the MIT license; there are currently more than 1,200 contributors to the project on GitHub, and the repository has more than 40,000 stars and 18,000 forks. The Angular site lists [210 “neat things” built with Angular][9]. + +-- Martin Heller + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-react-100613782-orig.jpg) + +### React ### + +[React][10] is a JavaScript library for building a UI or view, typically for single-page applications. Note that React does not implement anything having to do with a model or controller. React pages can render on the server or the client; rendering on the server (with Node.js) is typically much faster. People often combine React with AngularJS to create complete applications. + +React combines JavaScript and HTML in a single file, optionally a JSX component. React fans like the way JSX components combine views and their related functionality in one file, though that flies in the face of the last decade of Web development trends, which were all about separating the markup and the code. React fans also claim that you can’t understand it until you’ve tried it. Perhaps you should; the React repository on GitHub has 26,000 stars. + +[React Native][11] implements React with native iOS controls; the React Native command line uses Node and Xcode. [ReactJS.Net][12] integrates React with [ASP.Net][13] and C#. React is available under a BSD license with a patent license grant from Facebook. + +-- Martin Heller + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-atom-100613768-orig.jpg) + +### Atom ### + +[Atom][14] is an open source, hackable desktop editor from GitHub, based on Web technologies. It’s a full-featured tool with a fuzzy finder; fast projectwide search and replace; multiple cursors and selections; multiple panes, snippets, code folding; and the ability to import TextMate grammars and themes. Out of the box, Atom displayed proper syntax highlighting for every programming language on which I tried it, except for F# and C#; I fixed that easily by loading those packages from within Atom. 
Not surprising, Atom has tight integration with GitHub. + +The skeleton of Atom has been separated from the guts and called the Electron shell, providing an open source way to build cross-platform desktop apps with Web technologies. Visual Studio Code is built on the Electron shell, as are a number of proprietary and open source apps, including Slack and Kitematic. Facebook Nuclide adds significant functionality to Atom, including remote development and support for Flow, Hack, and Mercurial. + +On the downside, updating Atom packages can become painful, especially if you have many of them installed. The Nuclide packages seem to be the worst offenders -- they not only take a long time to update, they run CPU-intensive Node processes to do so. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-brackets-100613769-orig.jpg) + +### Brackets ### + +[Brackets][15] is a lightweight editor for Web design that Adobe developed and open-sourced, drawing heavily on other open source projects. The idea is to build better tooling for JavaScript, HTML, CSS, and related open Web technologies. Brackets itself is written in JavaScript, HTML, and CSS, and the developers use Brackets to build Brackets. The editor portion is based on another open source project, CodeMirror, and the Brackets native shell is based on Google’s Chromium Embedded Framework. + +Brackets features a clean UI, with the ability to open a quick inline editor that displays all of the related CSS for some HTML, or all of the related JavaScript for some scripting, and a live preview for Web pages that you are editing. New in Brackets 1.4 is instant search in files, easier preferences editing, the ability to enable and disable extensions individually, improved text rendering on Macs, and Greek and Cyrillic character support. Last November, Adobe started shipping a preview version of Extract for Brackets, which can pull out design information from Photoshop files, as part of the default download for Brackets. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-typescript-100613786-orig.jpg) + +### TypeScript ### + +[TypeScript][16] is a portable, duck-typed superset of JavaScript that compiles to plain JavaScript. The goal of the project is to make JavaScript usable for large applications. In pursuit of that goal, TypeScript adds optional types, classes, and modules to JavaScript, and it supports tools for large-scale JavaScript applications. Typing gets rid of some of the nonsensical and potentially buggy default behavior in JavaScript, for example: + + > 1 + "1" + '11' + +“Duck” typing means that the type checking focuses on the shape of the data values; TypeScript describes basic types, interfaces, and classes. While the current version of JavaScript does not support traditional, class-based, object-oriented programming, the ECMAScript 6 specification does. TypeScript compiles ES6 classes into plain, compatible JavaScript, with prototype-based objects, unless you enable ES6 output using the `--target` compiler option. + +Visual Studio includes TypeScript in the box, starting with Visual Studio 2013 Update 2. You can also edit TypeScript in Visual Studio Code, WebStorm, Atom, Sublime Text, and Eclipse. + +When using an external JavaScript library, or new host API, you'll need to use a declaration file (.d.ts) to describe the shape of the library. 
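+
+As a rough illustration (the library name below is invented), a declaration file describes only types and signatures, never implementation, and the compiler then checks ordinary code against those shapes:
+
+    // my-geo.d.ts -- hand-written declarations for a plain JavaScript library
+    declare module "my-geo" {
+        export interface Point {
+            x: number;
+            y: number;
+            label?: string;   // optional property
+        }
+        export function distance(a: Point, b: Point): number;
+    }
+
+    // app.ts -- duck typing in action: the object literals are accepted
+    // simply because their shape matches Point
+    import { distance } from "my-geo";
+    const d = distance({ x: 0, y: 0 }, { x: 3, y: 4 });
+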
You can often find declaration files in the [DefinitelyTyped][17] repository, either by browsing, using the [TSD definition manager][18], or using NuGet. + +TypeScript’s GitHub repository has more than 6,000 stars. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-swagger-100613785-orig.jpg) + +### Swagger ### + +[Swagger][19] is a language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. It’s one of several recent attempts to codify the description of RESTful APIs, in the spirit of WSDL for XML Web Services (2000) and CORBA for distributed object interfaces (1991). + +The tooling makes Swagger especially interesting. [Swagger-UI][20] automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The [Swagger codegen][21] project allows generation of client libraries automatically from a Swagger-compliant server. + +[Swagger Editor][22] lets you edit Swagger API specifications in YAML inside your browser and preview documentations in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling. + +The [Swagger JS][23] library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala. + +The [Amazon API Gateway][24] is a managed service for API management at scale. It can import Swagger specifications using an open source [Swagger Importer][25] tool. + +Swagger and friends use the Apache 2.0 license. + +-- Martin Heller + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-polymer-100613781-orig.jpg) + +### Polymer ### + +The [Polymer][26] library is a lightweight, “sugaring” layer on top of the Web components APIs to help in building your own Web components. It adds several features for greater ease in building complex elements, such as creating custom element registration, adding markup to your element, configuring properties on your element, setting the properties with attributes, data binding with mustache syntax, and internal styling of elements. + +Polymer also includes libraries of prebuilt elements. The Iron library includes elements for working with layout, user input, selection, and scaffolding apps. The Paper elements implement Google's Material Design. The Gold library includes elements for credit card input fields for e-commerce, the Neon elements implement animations, the Platinum library implements push messages and offline caching, and the Google Web Components library is exactly what it says; it includes wrappers for YouTube, Firebase, Google Docs, Hangouts, Google Maps, and Google Charts. + +Polymer Molecules are elements that wrap other JavaScript libraries. The only Molecule currently implemented is for marked, a Markdown library. The Polymer repository on GitHub currently has 12,000 stars. The software is distributed under a BSD-style license. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-ionic-100613775-orig.jpg) + +### Ionic ### + +The [Ionic][27] framework is a front-end SDK for building hybrid mobile apps, using Angular.js and Cordova, PhoneGap, or Trigger.io. Ionic was designed to be similar in spirit to the Android and iOS SDKs, and to do a minimum of DOM manipulation and use hardware-accelerated transitions to keep the rendering speed high. 
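+
+Because an Ionic 1.x app is at heart an AngularJS app, the script side can stay small. The sketch below is illustrative only: the module and controller names are invented, it is written in TypeScript purely for illustration, and the global `angular` object is assumed to be supplied by the Ionic bundle:
+
+    declare var angular: any;   // provided at runtime by the Ionic/AngularJS bundle
+
+    class PlaylistsCtrl {
+        // data an ion-list template could iterate over via controller-as syntax
+        playlists = [
+            { title: "Reggae" },
+            { title: "Chill" },
+        ];
+    }
+
+    angular
+        .module("starter", ["ionic"])   // "ionic" pulls in Ionic's directives and services
+        .controller("PlaylistsCtrl", PlaylistsCtrl);
+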
Ionic is focused mainly on the look and feel and UI interaction of your app. + +In addition to the framework, Ionic encompasses an ecosystem of mobile development tools and resources. These include Chrome-based tools, Angular extensions for Cordova capabilities, back-end services, a development server, and a shell View App to enable testers to use your Ionic code on their devices without the need for you to distribute beta apps through the App Store or Google Play. + +Appery.io integrated Ionic into its low-code builder in July 2015. Ionic’s GitHub repository has more than 18,000 stars and more than 3,000 forks. Ionic is distributed under an MIT license and currently runs in UIWebView for iOS 7 and later, and in Android 4.1 and up. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-cordova-100613771-orig.jpg) + +### Cordova ### + +[Apache Cordova][28] is the open source project spun off when Adobe acquired PhoneGap from Nitobi. Cordova is a set of device APIs, plus some tooling, that allows a mobile app developer to access native device functionality like the camera and accelerometer from JavaScript. When combined with a UI framework like Angular, it allows a smartphone app to be developed with only HTML, CSS, and JavaScript. By using Cordova plug-ins for multiple devices, you can generate hybrid apps that share a large portion of their code but also have access to a wide range of platform capabilities. The HTML5 markup and code runs in a WebView hosted by the Cordova shell. + +Cordova is one of the cross-platform mobile app options supported by Visual Studio 2015. Several companies offer online builders for Cordova apps, similar to the Adobe PhoneGap Build service. Online builders save you from having to install and maintain most of the device SDKs on which Cordova relies. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-famous-100613774-orig.jpg) + +### Famous Engine ### + +The high-performance Famo.us JavaScript framework introduced last year has become the [Famous Engine][29] and [Famous Framework][30]. The Famous Engine runs in a mixed mode, with the DOM and WebGL under a single coordinate system. As before, Famous structures applications in a scene graph hierarchy, but now it produces very little garbage (reducing the garbage collector overhead) and sustains 60FPS animations. + +The Famous Physics engine has been refactored to its own, fine-grained module so that you can load only the features you need. Other improvements since last year include streamlined eventing, improved sizing, decoupling the scene graph from the rendering pipeline by using a draw command buffer, and switching to a fully open MIT license. + +The new Famous Framework is an alpha-stage developer preview built on the Famous Engine; its goal is creating reusable, composable, and interchangeable UI widgets and applications. Eventually, Famous hopes to replace the jQuery UI widgets with Famous Framework widgets, but while it's promising, the Famous Framework is nowhere near production-ready. + +-- Martin Heller + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-mongodb-rev-100614248-orig.jpg) + +### MongoDB ### + +[MongoDB][31] is no stranger to the Bossies or to the ever-growing and ever-competitive NoSQL market. 
If you still aren't familiar with this very popular technology, here's a brief overview: MongoDB is a cross-platform document-oriented database, favoring JSON-like documents with dynamic schemas that make data integration easier and faster. + +MongoDB has attractive features, including but not limited to ad hoc queries, flexible indexing, replication, high availability, automatic sharding, load balancing, and aggregation. + +The big, bold move with [version 3.0 this year][32] was the new WiredTiger storage engine. We can now have document-level locking. This makes “normal” applications a whole lot more scalable and makes MongoDB available to more use cases. + +MongoDB has a growing open source ecosystem with such offerings as the [TokuMX engine][33], from the famous MySQL bad boys Percona. The long list of MongoDB customers includes heavy hitters such as Craigslist, eBay, Facebook, Foursquare, Viacom, and the New York Times. + +-- Andrew Oliver + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-couchbase-100614851-orig.jpg) + +### Couchbase ### + +[Couchbase][34] is another distributed, document-oriented database that has been making waves in the NoSQL world for quite some time now. Couchbase and MongoDB often compete, but they each have their sweet spots. Couchbase tends to outperform MongoDB when doing more in memory is possible. + +Additionally, Couchbase’s mobile features allow you to disconnect and ship a database in compact format. This allows you to scale down as well as up. This is useful not just for mobile devices but also for specialized applications, like shipping medical records across radio waves in Africa. + +This year Couchbase added N1QL, a SQL-based query language that did away with Couchbase’s biggest obstacle, requiring static views. The new release also introduced multidimensional scaling. This allows individual scaling of services such as querying, indexing, and data storage to improve performance, instead of adding an entire, duplicate node. + +-- Andrew C. Oliver + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-cassandra-100614852-orig.jpg) + +### Cassandra ### + +[Cassandra][35] is the other white meat of column family databases. HBase might be included with your favorite Hadoop distribution, but Cassandra is the one people deliberately deploy for specialized applications. There are good reasons for this. + +Cassandra was designed for high workloads of both writes and reads where millisecond consistency isn't as important as throughput. HBase is optimized for reads and greater write consistency. To a large degree, Cassandra tends to be used for operational systems and HBase more for data warehouse and batch-system-type use cases. + +While Cassandra has not received as much attention as other NoSQL databases and slipped into a quiet period a couple years back, it is widely used and deployed, and it's a great fit for time series, product catalog, recommendations, and other applications. If you want to keep a cluster up “no matter what” with multiple masters and multiple data centers, and you need to scale with lots of reads and lots of writes, Cassandra might just be your Huckleberry. + +-- Andrew C. 
Oliver + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orientdb-100613780-orig.jpg) + +### OrientDB ### + +[OrientDB][36] is an interesting hybrid in the NoSQL world, combining features from a document database, where individual documents can have multiple fields without necessarily defining a schema, and a graph database, which consists of a set of nodes and edges. At a basic level, OrientDB considers the document as a vertex, and relationships between fields as graph edges. Because the relationships between elements are part of the record, no costly joins are required when querying data. + +Like most databases today, OrientDB offers linear scalability via a distributed architecture. Adding capacity is a matter of simply adding more nodes to the cluster. Queries are written in a variant of SQL that is extended to support graph concepts. It's not exactly SQL, but data analysts shouldn't have too much trouble adapting. Language bindings are available for most commonly used languages, such as R, Scala, .Net, and C, and those integrating OrientDB into their applications will find an active user community to get help from. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-rethinkdb-100613783-orig.jpg) + +### RethinkDB ### + +[RethinkDB][37] is a scalable, real-time JSON database with the ability to continuously push updated query results to applications that subscribe to changes. There are official RethinkDB drivers for Ruby, Python, and JavaScript/Node.js, and community-supported drivers for more than a dozen other languages, including C#, Go, and PHP. + +It’s temping to confuse RethinkDB with real-time sync APIs, such as Firebase and PubNub. RethinkDB can be run as a cloud service like Firebase and PubNub, but you can also install it on your own hardware or Docker containers. RethinkDB does more than synchronize: You can run arbitrary RethinkDB queries, including table joins, subqueries, geospatial queries, and aggregation. Finally, RethinkDB is designed to be accessed from an application server, not a browser. + +Where MongoDB requires you to poll the database to see changes, RethinkDB lets you subscribe to a stream of changes to a query result. You can shard and scale RethinkDB easily, unlike MongoDB. Also unlike relational databases, RethinkDB does not give you full ACID support or strong schema enforcement, although it can perform joins. + +The RethinkDB repository has 10,000 stars on GitHub, a remarkably high number for a database. It is licensed with the Affero GPL 3.0; the drivers are licensed with Apache 2.0. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rust-100613784-orig.jpg) + +### Rust ### + +[Rust][38] is a syntactically C-like systems programming language from Mozilla Research that guarantees memory safety and offers painless concurrency (that is, no data races). It does not have a garbage collector and has minimal runtime overhead. Rust is strongly typed with type inference. This is all promising. + +Rust was designed for performance. It doesn’t yet demonstrate great performance, however, so now the mantra seems to be that it runs as fast as C++ code that implements all the safety checks built into Rust. I’m not sure whether I believe that, as in many cases the strictest safety checks for C/C++ code are done by static and dynamic analysis and testing, which don’t add any runtime overhead. Perhaps Rust performance will come with time. 
+ +So far, the only tools for Rust are the Cargo package manager and the rustdoc documentation generator, plus a couple of simple Rust plug-ins for programming editors. As far as we have heard, there is no shipping software that was actually built with Rust. Now that Rust has reached the 1.0 milestone, we might expect that to change. + +Rust is distributed with a dual Apache 2.0 and MIT license. With 13,000 stars on its GitHub repository, Rust is certainly attracting attention, but when and how it will deliver real benefits remains to be seen. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opencv-100613779-orig.jpg) + +### OpenCV ### + +[OpenCV][39] (Open Source Computer Vision Library) is a computer vision and machine learning library that contains about 500 algorithms, such as face detection, moving object tracking, image stitching, red-eye removal, machine learning, and eye movement tracking. It runs on Windows, Mac OS X, Linux, Android, and iOS. + +OpenCV has official C++, C, Python, Java, and MATLAB interfaces, and wrappers in other languages such as C#, Perl, and Ruby. CUDA and OpenCL interfaces are under active development. OpenCV was originally (1999) an Intel Research project in Russia; from there it moved to the robotics research lab Willow Garage (2008) and finally to [OpenCV.org][39] (2012) with a core team at Itseez, current source on GitHub, and stable snapshots on SourceForge. + +Users of OpenCV include Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota. There are currently more than 6,000 stars and 5,000 forks on the GitHub repository. The project uses a BSD license. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-llvm-100613777-orig.jpg) + +### LLVM ### + +The [LLVM Project][40] is a collection of modular and reusable compiler and tool chain technologies, which originated at the University of Illinois. LLVM has grown to include a number of subprojects, several of which are interesting in their own right. LLVM is distributed with Debian, Ubuntu, and Apple Xcode, among others, and it’s used in commercial products from the likes of Adobe (including After Effects), Apple (including Objective-C and Swift), Cray, Intel, NVIDIA, and Siemens. A few of the open source projects that depend on LLVM are PyPy, Mono, Rubinius, Pure, Emscripten, Rust, and Julia. Microsoft has recently contributed LLILC, a new LLVM-based compiler for .Net, to the .Net Foundation. + +The main LLVM subprojects are the core libraries, which provide optimization and code generation; Clang, a C/C++/Objective-C compiler that’s about three times faster than GCC; LLDB, a much faster debugger than GDB; libc++, an implementation of the C++ 11 Standard Library; and OpenMP, for parallel programming. + +-- Martin Heller + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613823-orig.jpg) + +### Read about more open source winners ### + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. 
Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][41] + +[Bossie Awards 2015: The best open source application development tools][42] + +[Bossie Awards 2015: The best open source big data tools][43] + +[Bossie Awards 2015: The best open source data center and cloud software][44] + +[Bossie Awards 2015: The best open source desktop and mobile software][45] + +[Bossie Awards 2015: The best open source networking and security software][46] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982920/open-source-tools/bossie-awards-2015-the-best-open-source-application-development-tools.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:https://www.docker.com/ +[2]:https://nodejs.org/en/ +[3]:https://iojs.org/en/ +[4]:https://developers.google.com/v8/?hl=en +[5]:http://www.ancestry.com/ +[6]:http://www.ge.com/ +[7]:https://www.pearson.com/ +[8]:https://angularjs.org/ +[9]:https://builtwith.angularjs.org/ +[10]:https://facebook.github.io/react/ +[11]:https://facebook.github.io/react-native/ +[12]:http://reactjs.net/ +[13]:http://asp.net/ +[14]:https://atom.io/ +[15]:http://brackets.io/ +[16]:http://www.typescriptlang.org/ +[17]:http://definitelytyped.org/ +[18]:http://definitelytyped.org/tsd/ +[19]:http://swagger.io/ +[20]:https://github.com/swagger-api/swagger-ui +[21]:https://github.com/swagger-api/swagger-codegen +[22]:https://github.com/swagger-api/swagger-editor +[23]:https://github.com/swagger-api/swagger-js +[24]:http://aws.amazon.com/cn/api-gateway/ +[25]:https://github.com/awslabs/aws-apigateway-importer +[26]:https://www.polymer-project.org/ +[27]:http://ionicframework.com/ +[28]:https://cordova.apache.org/ +[29]:http://famous.org/ +[30]:http://famous.org/framework/ +[31]:https://www.mongodb.org/ +[32]:http://www.infoworld.com/article/2878738/nosql/first-look-mongodb-30-for-mature-audiences.html +[33]:http://www.infoworld.com/article/2929772/nosql/mongodb-crossroads-growth-or-openness.html +[34]:http://www.couchbase.com/nosql-databases/couchbase-server +[35]:https://cassandra.apache.org/ +[36]:http://orientdb.com/ +[37]:http://rethinkdb.com/ +[38]:https://www.rust-lang.org/ +[39]:http://opencv.org/ +[40]:http://llvm.org/ +[41]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[42]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[43]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[44]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[45]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[46]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source applications.md b/sources/share/20151028 Bossie Awards 2015--The best open source applications.md new file mode 100644 index 0000000000..29fced5cc9 --- /dev/null +++ b/sources/share/20151028 Bossie Awards 2015--The best open source 
applications.md @@ -0,0 +1,238 @@ +Bossie Awards 2015: The best open source applications +================================================================================ +InfoWorld's top picks in open source business applications, enterprise integration, and middleware + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-applications-100614669-orig.jpg) + +### The best open source applications ### + +Applications -- ERP, CRM, HRM, CMS, BPM -- are not only fertile ground for three-letter acronyms, they're the engines behind every modern business. Our top picks in the category include back- and front-office solutions, marketing automation, lightweight middleware, heavyweight middleware, and other tools for moving data around, mixing it together, and magically transforming it into smarter business decisions. + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-xtuple-100614684-orig.jpg) + +### xTuple ### + +Small and midsize companies with light manufacturing or distribution needs have a friend in [xTuple][1]. This modular ERP/CRM combo bundles operations and financial control, product and inventory management, and CRM and sales support. Its relatively simple install lets you deploy all of the modules or only what you need today -- helping trim support costs without sacrificing customization later. + +This summer’s release brought usability improvements to the UI and a generous number of bug fixes. Recent updates also yielded barcode scanning and label printing for mobile warehouse workers, an enhanced workflow module (built with Plv8, a wrapper around Google’s V8 JavaScript engine that lets you write stored procedures for PostgreSQL in JavaScript), and quality management tools that are sure to get mileage on shop floors. + +The xTuple codebase is JavaScript from stem to stern. The server components can all be installed locally, in xTuple’s cloud, or deployed as an appliance. A mobile Web client, and mobile CRM features, augment a good native desktop client. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-odoo-100614678-orig.jpg) + +### Odoo ### + +[Odoo][2] used to be known as OpenERP. Last year the company raised private capital and broadened its scope. Today Odoo is a one-stop shop for back office and customer-facing applications -- replete with content management, business intelligence, and e-commerce modules. + +Odoo 8 fronts accounting, invoicing, project management, resource planning, and customer relationship management tools with a flexible Web interface that can be tailored to your company’s workflow. Add-on modules for warehouse management and HR, as well as for live chat and analytics, round out the solution. + +This year saw Odoo focused primarily on usability updates. A recently released sales planner helps sales groups track KPIs, and a new tips feature lends in-context help. Odoo 9 is right around the corner with alpha builds showing customer portals, Web form creation tools, mobile and VoIP services, and integration hooks to eBay and Amazon. + +Available for Windows and Linux, and as a SaaS offering, Odoo gives small and midsized companies an accessible set of tools to manage virtually every aspect of their business. + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-idempiere-100614673-orig.jpg) + +### iDempiere ### + +Small and midsize companies have great choices in Odoo and xTuple. Larger manufacturing and distribution companies will need something more. 
For them, there’s [iDempiere][3] -- a well maintained offshoot of ADempiere with OSGi modularity. + +iDempiere implements a fully loaded ERP, supply chain, and CRM suite right out of the box. Built with Java, iDempiere supports both PostgreSQL and Oracle Database, and it can be customized extensively through modules built to the OSGi specification. iDempiere is perfectly suited to managing complex business scenarios involving multiple partners, requiring dynamic reporting, or employing point-of-sale and warehouse services. + +Being enterprise-ready comes with a price. iDempiere’s feature-rich tools and complexity impose a steep learning curve and require a commitment to integration support. Of course, those costs are offset by savings from the software’s free GPL2 licensing. iDempiere’s easy install script, small resource footprint, and clean interface also help alleviate some of the startup pains. There’s even a virtual appliance available on Sourceforge to get you started. + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-suitecrm-100614680-orig.jpg) + +### SuiteCRM ### + +SugarCRM held the sweet spot in open source CRM since, well, forever. Then last year Sugar announced it would no longer contribute to the open source Community Edition. Into the ensuing vacuum rushed [SuiteCRM][4] – a fork of the final Sugar code. + +SuiteCRM 7.2 creates an experience on a par with SugarCRM Professional’s marketing, sales, and service tools. With add-on modules for workflow, reporting, and security, as well as new innovations like Lucene-driven search, taps for social media, and a beta reveal of new desktop notifications, SuiteCRM is on solid footing. + +The Advanced Open Sales module provides a familiar migration path from Sugar, while commercial support is available from the likes of [SalesAgility][5], the company that forked SuiteCRM in the first place. In little more than a year, SuiteCRM rescued the code, rallied an inspired community, and emerged as a new leader in open source CRM. Who needs Sugar? + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-civicrm-100614671-orig.jpg) + +### CiviCRM ### + +We typically focus attention on CRM vis-à-vis small and midsize business requirements. But nonprofit and advocacy groups need to engage with their “customers” too. Enter [CiviCRM][6]. + +CiviCRM addresses the needs of nonprofits with tools for fundraising and donation processing, membership management, email tracking, and event planning. Granular access control and security bring role-based permissions to views, keeping paid staff and volunteers partitioned and productive. This year CiviCRM continued to develop with new features like simple A/B testing and monitoring for email campaigns. + +CiviCRM deploys as a plug-in to your WordPress, Drupal, or Joomla content management system -- a dead-simple install if you already have one of these systems in place. If you don’t, CiviCRM is an excellent reason to deploy the CMS. It’s a niche-filling solution that allows nonprofits to start using smarter, tailored tools for managing constituencies, without steep hurdles and training costs. + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mautic-100614677-orig.jpg) + +### Mautic ### + +For marketers, the Internet -- Web, email, social, all of it -- is the stuff dreams are made on. 
[Mautic][7] allows you to create Web and email campaigns that track and nurture customer engagement, then roll all of the data into detailed reports to gain insight into customer needs and wants and how to meet them. + +Open source options in marketing automation are few, but Mautic’s extensibility stands out even against closed solutions like IBM’s Silverpop. Mautic even integrates with popular third-party email marketing solutions (MailChimp, Constant Contact) and social media platforms (Facebook, Twitter, Google+, Instagram) with quick-connect widgets. + +The developers of Mautic could stand to broaden the features for list segmentation and improve the navigability of their UI. Usability is also hindered by sparse documentation. But if you’re willing to rough it out long enough to learn your way, you’ll find a gem -- and possibly even gold -- in Mautic. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-orangehrm-100614679-orig.jpg) + +### OrangeHRM ### + +The commercial software market in the human resource management space is rather fragmented, with Talent, HR, and Workforce Management startups all vying for a slice of the pie. It’s little wonder the open source world hasn’t found much direction either, with the most ambitious HRM solutions often locked inside larger ERP distributions. [OrangeHRM][8] is a standout. + +OrangeHRM tackles employee administration from recruitment and applicant tracking to performance reviews, with good audit trails throughout. An employee portal provides self-serve access to personal employment information, time cards, leave requests, and personnel documents, helping reduce demands on HR staff. + +OrangeHRM doesn’t yet address niche aspects like talent management (social media, collaboration, knowledge banks), but it’s remarkably full-featured. Professional and Enterprise options offer more advanced functionality (in areas such as recruitment, training, on/off-boarding, document management, and mobile device access), while community modules are available for the likes of Active Directory/LDAP integration, advanced reporting, and even insurance benefit management. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-libreoffice-100614675-orig.jpg) + +### LibreOffice ### + +[LibreOffice][9] is the easy choice for best open source office productivity suite. Originally forked from OpenOffice, Libre has been moving at a faster clip than OpenOffice ever since, drawing more developers and producing more new features than its rival. + +LibreOffice 5.0, released only last month, offers UX improvements that truly enhance usability (like visual previews to style changes in the sidebar), brings document editing to Android devices (previously a view-only prospect), and finally delivers on a 64-bit Windows codebase. + +LibreOffice still lacks a built-in email client and a personal information manager, not to mention the real-time collaborative document editing available in Microsoft Office. But Libre can run off of a USB flash disk for portability, natively supports a greater number of graphic and file formats, and creates hybrid PDFs with embedded ODF files for full-on editing. Libre even imports Apple Pages documents, in addition to opening and saving all Microsoft Office formats. + +LibreOffice has done a solid job of tightening its codebase and delivering enhancements at a regular clip. With a new cloud version under development, LibreOffice will soon be more liberating than ever. 
+ +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-bonita-100614672-orig.jpg) + +### Bonita BPM ### + +Open source BPM has become a mature, cost-effective alternative to the top proprietary solutions. Having led the charge since 2009, Bonitasoft continues to raise the bar. The new [Bonita BPM 7][10] release impresses with innovative features that simplify code generation and shorten development cycles for BPM app creation. + +Most important to the new version, though, is better abstraction of underlying core business logic from UI and data components, allowing UIs and processes to be developed independently. This new MVC approach reduces downtime for live upgrades (no more recompilation!) and eases application maintenance. + +Bonita contains a winning set of connectors to a broad range of enterprise systems (ERP, CRM, databases) as well as to Web services. Complementing its process weaving tools, a new form designer (built on AngularJS/Bootstrap) goes a long way toward improving UI creation for the Web-centric and mobile workforce. + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-camunda-100614670-orig.jpg) + +### Camunda BPM ### + +Many open source solutions, like Bonita BPM, offer solid, drop-in functionality. Dig into the code base, though, and you may find it’s not the cleanest to build upon. Enterprise Java developers who hang out under the hood should check out [Camunda BPM][11]. + +Forked from Alfresco Activiti (a creation of former Red Hat jBPM developers), Camunda BPM delivers a tight, Java-based BPMN 2.0 engine in support of human workflow activities, case management, and systems process automation that can be embedded in your Java apps or run as a container service in Tomcat. Camunda’s ecosystem offers an Eclipse plug-in for process modeling and the Cockpit dashboard brings real-time monitoring and management over running processes. + +The Enterprise version adds WebSphere and WebLogic Server support. Additional incentives for the Enterprise upgrade include Saxon-driven XSLT templating (sidestepping the scripting engine) and add-ons to improve process management and exception handling. + +Camunda is a solid BPM engine ready for build-out and one of the first open source process managers to introduce DMN (Decision Model and Notation) support, which helps to simplify complex rules-based modeling alongside BPMN. DMN support is currently at the alpha stage. + +-- James R. Borck + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-talend-100614681-orig.jpg) + +### Talend Open Studio ### + +No open source ETL or EAI solution comes close to [Talend Open Studio][12] in functionality, performance, or support of modern integration trends. This year Talend unleashed Open Studio 6, a new version with a streamlined UI and smarter tooling that brings it more in line with Talend’s cloud-based offering. + +Using Open Studio you can visually design, test, and debug orchestrations that connect, transform, and synchronize data across a broad range of real-time applications and data resources. Talend’s wealth of connectors provides support for most any endpoint -- from flat files to Hadoop to Amazon S3. Packaged editions focus on specific scenarios such as big data integration, ESB, and data integrity monitoring. + +New support for Java 8 brings a speed boost. 
The addition of support for MariaDB and for in-memory processing with MemSQL, as well as updates to the ESB engine, keep Talend in step with the community’s needs. Version 6 was a long time coming, but no less welcome for that. Talend Open Studio is still first in managing complex data integration -- in-house, in the cloud, or increasingly, a combination of the two. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-warewolf-100614683-orig.jpg) + +### Warewolf ESB ### + +Complex integration patterns may demand the strengths of a Talend to get the job done. But for many lightweight microservices, the overhead of a full-fledged enterprise integration solution is extreme overkill. + +[Warewolf ESB][13] combines a streamlined .Net-based process engine with visual development tools to provide for dead simple messaging and application payload routing in a native Windows environment. The Warewolf ESB is an “easy service bus,” not an enterprise service bus. + +Drag-and-drop tooling in the design studio makes quick work of configuring connections and logic flows. Built-in wizardry handles Web services definitions and database calls, and it can even tap Windows DLLs and the command line directly. Using the visual debugger, you can inspect execution streams (if not yet actually step through them), then package everything for remote deployment. + +Warewolf is still a .40.5 release and undergoing major code changes. It also lacks native connectors, easy transforms, and any means of scalability management. Be aware that the precompiled install demands collection of some usage statistics (I wish they would stop that). But Warewolf ESB is fast, free, and extensible. It’s a quirky, upstart project that offers definite benefits to Windows integration architects. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-knime-100614674-orig.jpg) + +### KNIME ### + +[KNIME][14] takes a code-free approach to predictive analytics. Using a graphical workbench, you wire together workflows from an abundant library of processing nodes, which handle data access, transformation, analysis, and visualization. With KNIME, you can pull data from databases and big data platforms, run ETL transformations, perform data mining with R, and produce custom reports in the end. + +The company was busy this year rolling out the KNIME 2.12 update. The new release introduces MongoDB support, XPath nodes with autoquery creation, and a new view controller (based on the D3 JavaScript library) that creates interactive data visualizations on the fly. It also includes additional statistical nodes and a REST interface (KNIME Server edition) that provides services-based access to workflows. + +KNIME’s core analytics engine is free open source. The company offers several fee-based extensions for clustering and collaboration. (A portion of your licensing fee actually funds the open source project.) KNIME Server (on-premise or cloud) ups the ante with security, collaboration, and workflow repositories -- all serving to inject analytics more productively throughout your business lines. + +-- James R. Borck + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-teiid-100614682-orig.jpg) + +### Teiid ### + +[Teiid][15] is a data virtualization system that allows applications to use data from multiple, heterogeneous data stores. 
Currently a JBoss project, Teiid is backed by years of development from MetaMatrix and a long history of addressing the data access needs of the largest enterprise environments. I even see [uses for Teiid in Hadoop and big data environments][16]. + +In essence, Teiid allows you to connect all of your data sources into a “virtual” mega data source. You can define caching semantics, transforms, and other “configuration not code” transforms to load from multiple data sources using plain old SQL, XQuery, or procedural queries. + +Teiid is primarily accessible through JBDC and has built-in support for Web services. Red Hat sells Teiid as [JBoss Data Virtualization][17]. + +-- Andrew C. Oliver + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614676-orig.jpg) + +### Read about more open source winners ### + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][18] + +[Bossie Awards 2015: The best open source application development tools][19] + +[Bossie Awards 2015: The best open source big data tools][20] + +[Bossie Awards 2015: The best open source data center and cloud software][21] + +[Bossie Awards 2015: The best open source desktop and mobile software][22] + +[Bossie Awards 2015: The best open source networking and security software][23] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982622/open-source-tools/bossie-awards-2015-the-best-open-source-applications.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:http://xtuple.org/ +[2]:http://odoo.com/ +[3]:http://idempiere.org/ +[4]:http://suitecrm.com/ +[5]:http://salesagility.com/ +[6]:http://civicrm.org/ +[7]:https://www.mautic.org/ +[8]:http://www.orangehrm.com/ +[9]:http://libreoffice.org/ +[10]:http://www.bonitasoft.com/ +[11]:http://camunda.com/ +[12]:http://talend.com/ +[13]:http://warewolf.io/ +[14]:http://www.knime.org/ +[15]:http://teiid.jboss.org/ +[16]:http://www.infoworld.com/article/2922180/application-development/database-virtualization-or-i-dont-want-to-do-etl-anymore.html +[17]:http://www.jboss.org/products/datavirt/overview/ +[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source big data tools.md b/sources/share/20151028 Bossie Awards 2015--The best open source big data tools.md new file mode 100644 index 0000000000..0cf65ea3a8 --- /dev/null +++ b/sources/share/20151028 Bossie Awards 
2015--The best open source big data tools.md @@ -0,0 +1,287 @@ +Bossie Awards 2015: The best open source big data tools +================================================================================ +InfoWorld's top picks in distributed data processing, streaming analytics, machine learning, and other corners of large-scale data analytics + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-big-data-100613944-orig.jpg) + +### The best open source big data tools ### + +How many Apache projects can sit on a pile of big data? Fire up your Hadoop cluster, and you might be able to count them. Among this year's Bossies in big data, you'll find the fastest, widest, and deepest newfangled solutions for large-scale SQL, stream processing, sort-of stream processing, and in-memory analytics, not to mention our favorite maturing members of the Hadoop ecosystem. It seems everyone has a nail to drive into MapReduce's coffin. + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-spark-100613962-orig.jpg) + +### Spark ### + +With hundreds of contributors, [Spark][1] is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up. + +The sweet spot for Spark continues to be machine learning. Highlights since last year include the replacement of the SchemaRDD with a Dataframes API, similar to those found in R and Pandas, making data access much simpler than with the raw RDD interface. Also new are ML pipelines for building repeatable machine learning workflows, expanded and optimized support for various storage formats, simpler interfaces to machine learning algorithms, improvements in the display of cluster resources usage, and task tracking. + +On by default in Spark 1.5 is the off-heap memory manager, Tungsten, which offers much faster processing by fine-tuning data structure layout in memory. Finally, the new website, [spark-packages.org][2], with more than 100 third-party libraries, adds many useful features from the community. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-storm-100614149-orig.jpg) + +### Storm ### + +[Apache Storm][3] is a Clojure-based distributed computation framework primarily for streaming real-time analytics. Storm is based on the [disruptor pattern][4] for low-latency complex event processing created LMAX. Unlike Spark, Storm can do single events as opposed to “micro-batches,” and it has a lower memory footprint. In my experience, it scales better for streaming, especially when you’re mainly streaming to ingest data into other data sources. + +Storm’s profile has been eclipsed by Spark, but Spark is inappropriate for many streaming applications. Storm is frequently used with Apache Kafka. + +-- Andrew C. Oliver + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-h2o-100613950-orig.jpg) + +### H2O ### + +[H2O][5] is a distributed, in-memory processing engine for machine learning that boasts an impressive array of algorithms. Previously only available for R users, version 3.0 adds Python and Java language bindings, as well as a Spark execution engine for the back end. The best way to view H20 is as a very large memory extension of your R environment. Instead of working directly on large data sets, the R extensions communicate via a REST API with the H2O cluster, where H2O does the heavy lifting. 
+ +Several useful R packages such as ddply have been wrapped, allowing you to use them on data sets larger than the amount of RAM on the local machine. You can run H2O on EC2, on a Hadoop/YARN cluster, and on Docker containers. With Sparkling Water (Spark plus H2O) you can access Spark RDDs on the cluster side by side to, for example, process a data frame with Spark before passing it to an H2O machine learning algorithm. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-apex-100613943-orig.jpg) + +### Apex ### + +[Apex][6] is an enterprise-grade, big data-in-motion platform that unifies stream processing as well as batch processing. A native YARN application, Apex processes streaming data in a scalable, fault-tolerant manner and provides all the common stream operators out of the box. One of the best things about Apex is that it natively supports the common event processing guarantees (exactly once, at least once, at most once). Formerly a commercial product by DataTorrent, Apex's roots show in the quality of the documentation, examples, code, and design. Devops and application development are cleanly separated, and user code generally doesn't have to be aware that it is running in a streaming cluster. + +A related project, [Malhar][7], offers more than 300 commonly used operators and application templates that implement common business logic. The Malhar libraries significantly reduce the time it takes to develop an Apex application, and there are connectors (operators) for storage, file systems, messaging systems, databases, and nearly anything else you might want to connect to from an application. The operators can all be extended or customized to meet individual business's requirements. All Malhar components are available under the Apache license. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-druid-100613947-orig.jpg) + +### Druid ### + +[Druid][8], which moved to a commercially friendly Apache license in February of this year, is best described as a hybrid, “event streams meet OLAP” solution. Originally developed to analyze online events for ad markets, Druid allows users to do arbitrary and interactive exploration of time series data. Some of the key features include low-latency ingest of events, fast aggregations, and approximate and exact calculations. + +At the heart of Druid is a custom data store that uses specialized nodes to handle each part of the problem. Real-time ingest is managed by real-time nodes (JVMs) that eventually flush data to historical nodes that are responsible for data that has aged. Broker nodes direct queries in a scatter-gather fashion to both real-time and historical nodes to give the user a complete picture of events. Benchmarked at a sustained 500K events per second and 1 million events per second peak, Druid is ideal as a real-time dashboard for ad-tech, network traffic, and other activity streams. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-flink-100613949-orig.jpg) + +### Flink ### + +At its core, [Flink][9] is a data flow engine for event streams. Although superficially similar to Spark, Flink takes a different approach to in-memory processing. First, Flink was designed from the start as a stream processor. Batch is simply a special case of a stream with a beginning and an end, and Flink offers APIs for dealing with each case, the DataSet API (batch) and the DataStream API. 
Developers coming from the MapReduce world should feel right at home working with the DataSet API, and porting applications to Flink should be straightforward. In many ways Flink mirrors the simplicity and consistency that helped make Spark so popular. Like Spark, Flink is written in Scala. + +The developers of Flink clearly thought out usage and operations too: Flink works natively with YARN and Tez, and it uses an off-heap memory management scheme to work around some of the JVM limitations. A peek at the Flink JIRA site shows a healthy pace of development, and you’ll find an active community on the mailing lists and on StackOverflow as well. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-elastic-100613948-orig.jpg) + +### Elasticsearch ### + +[Elasticsearch][10] is a distributed document search server based on [Apache Lucene][11]. At its heart, Elasticsearch builds indices on JSON-formatted documents in nearly real time, enabling fast, full-text, schema-free queries. Combined with the open source Kibana dashboard, you can create impressive visualizations of your real-time data in a simple point-and-click fashion. + +Elasticsearch is easy to set up and easy to scale, automatically making use of new hardware by rebalancing shards as required. The query syntax isn't at all SQL-like, but it is intuitive enough for anyone familiar with JSON. Most users won't be interacting at that level anyway. Developers can use the native JSON-over-HTTP interface or one of the several language bindings available, including Ruby, Python, PHP, Perl, .Net, Java, and JavaScript. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-slamdata-100613961-orig.jpg) + +### SlamData ### + +If you are seeking a user-friendly tool to visualize and understand your newfangled NoSQL data, take a look at [SlamData][12]. SlamData allows you to query nested JSON data using familiar SQL syntax, without relocation or transformation. + +One of the technology’s main features is its connectors. From MongoDB to HBase, Cassandra, and Apache Spark, SlamData taps external data sources with the industry's most advanced “pushdown” processing technology, performing transformations and analytics close to the data. + +While you might ask, “Wouldn’t I be better off building a data lake or data warehouse?” consider the companies that were born in NoSQL. Skipping the ETL and simply connecting a visualization tool to a replica offers distinct advantages -- not only in terms of how up-to-date the data is, but in how many moving parts you have to maintain. + +-- Andrew C. Oliver + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-drill-100613946-orig.jpg) + +### Drill ### + +[Drill][13] is a distributed system for interactive analysis of large-scale data sets, inspired by [Google's Dremel][14]. Designed for low-latency analysis of nested data, Drill has a stated design goal of scaling to 10,000 servers and querying petabytes of data and trillions of records. + +Nested data can be obtained from a variety of data sources (such as HDFS, HBase, Amazon S3, and Azure Blobs) and in multiple formats (including JSON, Avro, and protocol buffers), and you don't need to specify a schema up front (“schema on read”). + +Drill uses ANSI SQL:2003 for its query language, so there's no learning curve for data engineers to overcome, and it allows you to join data across multiple data sources (for example, joining a table in HBase with logs in HDFS). 
Finally, Drill offers ODBC and JDBC interfaces to connect your favorite BI tools. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-hbase-100613951-orig.jpg) + +### HBase ### + +[HBase][15] reached the 1.x milestone this year and continues to improve. Like other nonrelational distributed datastores, HBase excels at returning search results very quickly and for this reason is often used to back search engines, such as the ones at eBay, Bloomberg, and Yahoo. As a stable and mature software offering, HBase does not get fresh features as frequently as newer projects, but that's often good for enterprises. + +Recent improvements include the addition of high-availability region servers, support for rolling upgrades, and YARN compatibility. Features in the works include scanner updates that promise to improve performance and the ability to use HBase as a persistent store for streaming applications like Storm and Spark. HBase can also be queried SQL style via the [Phoenix][16] project, now out of incubation, whose SQL compatibility is steadily improving. Phoenix recently added a Spark connector and the ability to add custom user-defined functions. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-hive-100613952-orig.jpg) + +### Hive ### + +Although stable and mature for several years, [Hive][17] reached the 1.0 version milestone this year and continues to be the best solution when really heavy SQL lifting (many petabytes) is required. The community continues to focus on improving the speed, scale, and SQL compliance of Hive. Currently at version 1.2, significant improvements since its last Bossie include full ACID semantics, cross-data center replication, and a cost-based optimizer. + +Hive 1.2 also brought improved SQL compliance, making it easier for organizations to use it to off-load ETL jobs from their existing data warehouses. In the pipeline are speed improvements with an in-memory cache called LLAP (which, from the looks of the JIRAs, is about ready for release), the integration of Spark machine learning libraries, and improved SQL constructs like nonequi joins, interval types, and subqueries. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kylin-100613955-orig.jpg) + +### Kylin ### + +[Kylin][18] is an application developed at eBay for processing very large OLAP cubes via ANSI SQL, a task familiar to most data analysts. If you think about how many items are on sale now and in the past at eBay, and all the ways eBay might want to slice and dice data related to those items, you will begin to understand the types of queries Kylin was designed for. + +Like most other analysis applications, Kylin supports multiple access methods, including JDBC, ODBC, and a REST API for programmatic access. Although Kylin is still in incubation at Apache, and the community nascent, the project is well documented and the developers are responsive and eager to understand customer use cases. Getting up and running with a starter cube was a snap. If you have a need for analysis of extremely large cubes, you should take a look at Kylin. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-cdap-100613945-orig.jpg) + +### CDAP ### + +[CDAP][19] (Cask Data Access Platform) is a framework running on top of Hadoop that abstracts away the complexity of building and running big data applications. CDAP is organized around two core abstractions: data and applications. 
CDAP Datasets are logical representations of data that behave uniformly regardless of the underlying storage layer; CDAP Streams provide similar support for real-time data. + +Applications use CDAP services for things such as distributed transactions and service discovery to shield developers from the low-level details of Hadoop. CDAP comes with a data ingestion framework and a few prebuilt applications and “packs” for common tasks like ETL and website analytics, along with support for testing, debugging, and security. Like most formerly commercial (closed source) projects, CDAP benefits from good documentation, tutorials, and examples. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-ranger-100613960-orig.jpg) + +### Ranger ### + +Security has long been a sore spot with Hadoop. It isn’t (as is frequently reported) that Hadoop is “insecure” or “has no security.” Rather, the truth was more that Hadoop had too much security, though not in a good way. I mean that every component had its own authentication and authorization implementation that wasn’t integrated with the rest of platform. + +Hortonworks acquired XA/Secure in May, and [a few renames later][20] we have [Ranger][21]. Ranger pulls many of the key components of Hadoop together under one security umbrella, allowing you to set a “policy” that ties your Hadoop security to your existing ACL-based Active Directory authentication and authorization. Ranger gives you one place to manage Hadoop access control, one place to audit, one place to manage the encryption, and a pretty Web page to do it from. + +-- Andrew C. Oliver + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mesos-100613957-orig.jpg) + +### Mesos ### + +[Mesos][22], developed at the [AMPLab][23] at U.C. Berkeley that also brought us Spark, takes a different approach to managing cluster computing resources. The best way to describe Mesos is as a distributed microkernel for the data center. Mesos provides a minimal set of operating system mechanisms like inter-process communications, disk access, and memory to higher-level applications, called “frameworks” in Mesos-speak, that run in what is analogous to user space. Popular frameworks for Mesos include [Chronos][24] and [Aurora][25] for building ETL pipelines and job scheduling, and a few big data processing applications including Hadoop, Storm, and Spark, which have been ported to run as Mesos frameworks. + +Mesos applications (frameworks) negotiate for cluster resources using a two-level scheduling mechanism, so writing a Mesos application is unlikely to feel like a familiar experience to most developers. Although Mesos is a young project, momentum is growing, and with Spark being an exceptionally good fit for Mesos, we're likely to see more from Mesos in the coming years. + +-- Steven Nunez + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-nifi-100613958-orig.jpg) + +### NiFi ### + +[NiFi][26] is an incubating Apache project to automate the flow of data between systems. It doesn't operate in the traditional space that Kafka and Storm do, but rather in the space between external devices and the data center. NiFi was originally developed by the NSA and donated to the open source community in 2014. It has a strong community of developers and users within various government agencies. + +NiFi isn't like anything else in the current big data ecosystem. 
It is much closer to a tradition EAI (enterprise application integration) tool than a data processing platform, although simple transformations are possible. One interesting feature is the ability to debug and change data flows in real time. Although not quite a REPL (read, eval, print loop), this kind of paradigm dramatically shortens the development cycle by not requiring a compile-deploy-test-debug workflow. Other interesting features include a strong “chain of custody,” where each piece of data can be tracked from beginning to end, along with any changes made along the way. You can also prioritize data flows so that time-sensitive information can be received as quickly as possible, bypassing less time-critical events. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kafka-100613954-orig.jpg) + +### Kafka ### + +[Kafka][27] has emerged as the de-facto standard for distributed publish-subscribe messaging in the big data space. Its design allows brokers to support thousands of clients at high rates of sustained message throughput, while maintaining durability through a distributed commit log. Kafka does this by maintaining what is essentially a single log file in HDFS. Since HDFS is a distributed storage system that keeps redundant copies, Kafka is protected. + +When consumers want to read messages, Kafka looks up their offset in the central log and sends them. Because messages are not deleted immediately, adding consumers or replaying historical messages does not impose additional costs. Kafka has been benchmarked at 2 million writes per second by its developers at LinkedIn. Despite Kafka’s sub-1.0 version number, Kafka is a mature and stable product, in use in some of the largest clusters in the world. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opentsdb-100613959-orig.jpg) + +### OpenTSDB ### + +[OpenTSDB][28] is a time series database built on HBase. It was designed specifically for analyzing data collected from applications, mobile devices, networking equipment, and other hardware devices. The custom HBase schema used to store the time series data has been designed for fast aggregations and minimal storage requirements. + +By using HBase as the underlying storage layer, OpenTSDB gains the distributed and reliable characteristics of that system. Users don't interact with HBase directly; instead events are written to the system via the time series daemon (TSD), which can be scaled out as required to handle high-throughput situations. There are a number of prebuilt connectors to publish data to OpenTSDB, and clients to read data from Ruby, Python, and other languages. OpenTSDB isn't strong on creating interactive graphics, but several third-party tools fill that gap. If you are already using HBase and want a simple way to store event data, OpenTSDB might be just the thing. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-jupyter-100613953-orig.jpg) + +### Jupyter ### + +Everybody's favorite notebook application went generic. [Jupyter][29] is “the language-agnostic parts of IPython” spun out into an independent package. Although Jupyter itself is written in Python, the system is modular. Now you can have an IPython-like interface, along with notebooks for sharing code, documentation, and data visualizations, for nearly any language you like. + +At least [50 language][30] kernels are already supported, including LISP, R, Ruby, F#, Perl, and Scala. 
In fact, even IPython itself is simply a Python module for Jupyter. Communication with the language kernel is via a REPL (read, eval, print loop) protocol, similar to [nREPL][31] or [Slime][32]. It is nice to see such a useful piece of software receiving significant [nonprofit funding][33] to further its development, such as parallel execution and multi-user notebooks. Behold, open source at its best. + +-- Steven Nunez + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zeppelin-100613963-orig.jpg) + +### Zeppelin ### + +While still in incubation, [Apache Zeppelin][34] is nevertheless stirring the data analytics and visualization pot. The Web-based notebook enables users to ingest, discover, analyze, and visualize their data. The notebook also allows you to collaborate with others to make data-driven, interactive documents incorporating a growing number of programming languages. + +This technology also boasts an integration with Spark and an interpreter concept allowing any language or data processing back end to be plugged into Zeppelin. Currently Zeppelin supports interpreters such as Scala, Python, SparkSQL, Hive, Markdown, and Shell. + +Zeppelin is still immature. I wanted to put a demo up but couldn’t find an easy way to disable “shell” as an execution option (among other things). However, it already looks better visually than IPython Notebook, which is the popular incumbent in this space. If you don’t want to spring for DataBricks Cloud or need something open source and extensible, this is the most promising distributed computing notebook around -- especially if you’re a Sparky type. + +-- Andrew C. Oliver + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613956-orig.jpg) + +### Read about more open source winners ### + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. 
Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][35] + +[Bossie Awards 2015: The best open source application development tools][36] + +[Bossie Awards 2015: The best open source big data tools][37] + +[Bossie Awards 2015: The best open source data center and cloud software][38] + +[Bossie Awards 2015: The best open source desktop and mobile software][39] + +[Bossie Awards 2015: The best open source networking and security software][40] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982429/open-source-tools/bossie-awards-2015-the-best-open-source-big-data-tools.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:https://spark.apache.org/ +[2]:http://spark-packages.org/ +[3]:https://storm.apache.org/ +[4]:https://lmax-exchange.github.io/disruptor/ +[5]:http://h2o.ai/product/ +[6]:https://www.datatorrent.com/apex/ +[7]:https://github.com/DataTorrent/Malhar +[8]:https://druid.io/ +[9]:https://flink.apache.org/ +[10]:https://www.elastic.co/products/elasticsearch +[11]:http://lucene.apache.org/ +[12]:http://teiid.jboss.org/ +[13]:https://drill.apache.org/ +[14]:http://research.google.com/pubs/pub36632.html +[15]:http://hbase.apache.org/ +[16]:http://phoenix.apache.org/ +[17]:https://hive.apache.org/ +[18]:https://kylin.incubator.apache.org/ +[19]:http://cdap.io/ +[20]:http://www.infoworld.com/article/2973381/application-development/apache-ranger-chuck-norris-hadoop-security.html +[21]:https://ranger.incubator.apache.org/ +[22]:http://mesos.apache.org/ +[23]:https://amplab.cs.berkeley.edu/ +[24]:http://nerds.airbnb.com/introducing-chronos/ +[25]:http://aurora.apache.org/ +[26]:http://nifi.apache.org/ +[27]:https://kafka.apache.org/ +[28]:http://opentsdb.net/ +[29]:http://jupyter.org/ +[30]:http://https//github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages +[31]:https://github.com/clojure/tools.nrepl +[32]:https://github.com/slime/slime +[33]:http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/ +[34]:https://zeppelin.incubator.apache.org/ +[35]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[36]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[37]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[38]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[39]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[40]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md b/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md new file mode 100644 index 0000000000..5640c75137 --- /dev/null +++ b/sources/share/20151028 Bossie Awards 2015--The best open source data center and cloud software.md @@ -0,0 +1,261 @@ +Bossie Awards 2015: The best open source data center and cloud software 
+================================================================================ +InfoWorld's top picks of the year in open source platforms, infrastructure, management, and orchestration software + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-data-center-cloud-100613986-orig.jpg) + +### The best open source data center and cloud software ### + +You might have heard about this new thing called Docker containers. Developers love them because you can build them with a script, add services in layers, and push them right from your MacBook Pro to a server for testing. It works because they're superlightweight, unlike those now-archaic virtual machines. Containers -- and other lightweight approaches to deliver services -- are changing the shape of operating systems, applications, and the tools to manage them. Our Bossie winners in data center and cloud are leading the charge. + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613987-orig.jpg) + +### Docker Machine, Compose, and Swarm ### + +Docker’s open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Allowing developers and operations teams to separate applications from infrastructure, Docker is a powerful data center automation tool. + +However, containers are only part of the Docker story. Docker also provides a series of tools that allow you to use the Docker API to automate the entire container lifecycle, as well as handling application design and orchestration. + +[Machine][1] allows you to automate the provisioning of Docker Containers. Starting with a command line, you can use a single line of code to target one or more hosts, deploy the Docker engine, and even join it to a Swarm cluster. There’s support for most hypervisors and cloud platforms – all you need are your access credentials. + +[Swarm][2] handles clustering and scheduling, and it can be integrated with Mesos for more advanced scheduling capabilities. You can use Swarm to build a pool of container hosts, allowing your apps to scale out as demand increases. Applications and all of their dependencies can be defined with [Compose][3], which lets you link containers together into a distributed application and launch them as a group. Compose descriptions work across platforms, so you can take a developer configuration and quickly deploy in production. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-coreos-rkt-100613985-orig.jpg) + +### CoreOS and Rkt ### + +A thin, lightweight server OS, [CoreOS][4] is based on Google’s Chromium OS. Instead of using a package manager to install functions, it’s designed to be used with Linux containers. By using containers to extend a thin core, CoreOS allows you to quickly deploy applications, working well on cloud infrastructures. + +CoreOS’s container management tooling, fleet, is designed to treat a cluster of CoreOS servers as a single unit, with tools for managing high availability and for deploying containers to the cluster based on resource availability. A cross-cluster key/value store, etcd, handles device management and supports service discovery. If a node fails, etcd can quickly restore state on a new replica, giving you a distributed configuration management platform that’s linked to CoreOS’s automated update service. 
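+
+Because etcd exposes that key/value store over plain HTTP, the same service-discovery data that fleet relies on is easy to read and write from your own tooling. A rough sketch against the v2 API (the key layout, address, and TTL are illustrative, not anything CoreOS prescribes):
+
+    # Sketch: registering a service endpoint in etcd and discovering it from elsewhere.
+    import requests
+
+    ETCD = "http://127.0.0.1:2379"   # default client port on current etcd releases
+
+    # Publish where this web instance lives; the TTL lets the entry expire if the node dies.
+    requests.put(ETCD + "/v2/keys/services/web/10.0.0.5",
+                 data={"value": "10.0.0.5:8080", "ttl": 60})
+
+    # Any machine in the cluster can now look the service up.
+    resp = requests.get(ETCD + "/v2/keys/services/web", params={"recursive": "true"})
+    for node in resp.json()["node"].get("nodes", []):
+        print(node["key"], "->", node["value"])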
+ +While CoreOS is perhaps best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image. Also compatible with Docker containers, rkt has a modular architecture that allows different containerization systems (even hardware virtualization, in a proof of concept from Intel) to be plugged in. However, rkt is still in the early stages of development, so isn’t quite production ready. + +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rancheros-100613997-orig.jpg) + +### RancherOS ### + +As we abstract more and more services away from the underlying operating system using containers, we can start thinking about what tomorrow’s operating system will look like. Similar to our applications, it’s going to be a modular set of services running on a thin kernel, self-configuring to offer only the services our applications need. + +[RancherOS][5] is a glimpse of what that OS might look like. Blending the Linux kernel with Docker, RancherOS is a minimal OS suitable for hosting container-based applications in cloud infrastructures. Instead of using standard Linux packaging techniques, RancherOS leverages Docker to host Linux user-space services and applications in separate container layers. A low-level Docker instance is first to boot, hosting system services in their own containers. Users' applications run in a higher-level Docker instance, separate from the system containers. If one of your containers crashes, the host keeps running. + +RancherOS is only 20MB in size, so it's easy to replicate across a data center. It’s also designed to be managed using automation tools, not manually, with API-level access that works with Docker’s management tools as well as with Rancher Labs’ own cloud infrastructure and management tools. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-kubernetes-100613991-orig.jpg) + +### Kubernetes ### + +Google’s [Kubernetes][6] container orchestration system is designed to manage and run applications built in Docker and Rocket containers. Focused on managing microservice applications, Kubernetes lets you distribute your containers across a cluster of hosts, while handling scaling and ensuring managed services run reliably. + +With containers providing an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development paradigms, with a focus on user intent. That means you launch applications, and Kubernetes will manage the containers to run within the parameters you set, using the Kubernetes scheduler to make sure it gets the resources it needs. Containers are grouped into pods and managed by a replication engine that can recover failed containers or add more pods as applications scale. + +Kubernetes powers Google’s own Container Engine, and it runs on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be either loosely or tightly coupled, so applications not designed for cloud PaaS operations can be migrated to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you an endpoint for a continuous delivery process. 
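+
+All of this is ultimately driven through the Kubernetes REST API: launching an application comes down to handing the API server a small JSON manifest and letting the scheduler and replication machinery take over. A bare-bones sketch (the API server address is a placeholder, and a real cluster would normally be reached through kubectl or a client library with proper authentication):
+
+    # Sketch: asking the API server to run one nginx pod, then listing pods in a namespace.
+    import requests
+
+    API = "http://127.0.0.1:8080/api/v1"   # placeholder: an unsecured local API server
+
+    pod = {
+        "apiVersion": "v1",
+        "kind": "Pod",
+        "metadata": {"name": "web", "labels": {"app": "web"}},
+        "spec": {"containers": [{"name": "web", "image": "nginx",
+                                 "ports": [{"containerPort": 80}]}]},
+    }
+
+    requests.post(API + "/namespaces/default/pods", json=pod)
+
+    for item in requests.get(API + "/namespaces/default/pods").json()["items"]:
+        print(item["metadata"]["name"], item["status"].get("phase"))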
+ +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-mesos-100613993-orig.jpg) + +### Mesos ### + +Turning a data center into a private or public cloud requires more than a hypervisor. It requires a new operating layer that can manage the data center resources as if they were a single computer, handling resources and scheduling. Described as a “distributed systems kernel,” [Apache Mesos][7] allows you to manage thousands of servers, using containers to host applications and APIs to support parallel application development. + +At the heart of Mesos is a set of daemons that expose resources to a central scheduler. Tasks are distributed across nodes, taking advantage of available CPU and memory. One key approach is the ability for applications to reject offered resources if they don’t meet requirements. It’s an approach that works well for big data applications, and you can use Mesos to run Hadoop and Cassandra distributed databases, as well as Apache’s own Spark data processing engine. There’s also support for the Jenkins continuous integration server, allowing you to run build and test workers in parallel on a cluster of servers, dynamically adjusting the tasks depending on workload. + +Designed to run on Linux and Mac OS X, Mesos has also recently been ported to Windows to support the development of scalable parallel applications on Azure. + +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-smartos-100614849-orig.jpg) + +### SmartOS and SmartDataCenter ### + +Joyent’s [SmartDataCenter][8] is the software that runs its public cloud, adding a management platform on top of its [SmartOS][9] thin server OS. A descendent of OpenSolaris that combines Zones containers and the KVM hypervisor, SmartOS is an in-memory operating system, quick to boot from a USB stick and run on bare-metal servers. + +Using SmartOS, you can quickly deploy a set of lightweight servers that can be programmatically managed via a set of JSON APIs, with functionality delivered via virtual machines, downloaded by built-in image management tools. Through the use of VMs, all userland operations are isolated from the underlying OS, reducing the security exposure of both the host and guests. + +SmartDataCenter runs on SmartOS servers, with one server running as a dedicated management node, and the rest of a cluster operating as compute nodes. You can get started with a Cloud On A Laptop build (available as a VMware virtual appliance) that lets you experiment with the management server. In a live data center, you’ll deploy SmartOS on your servers, using ZFS to handle storage – which includes your local image library. Services are deployed as images, with components stored in an object repository. + +The combination of SmartDataCenter and SmartOS builds on the experience of Joyent’s public cloud, giving you a tried and tested set of tools that can help you bootstrap your own cloud data center. It’s an infrastructure focused on virtual machines today, but laying the groundwork for tomorrow. A related Joyent project, [sdc-docker][10], exposes an entire SmartDataCenter cluster as a single Docker host, driven by native Docker commands. 
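+
+In practice that means anything that already speaks the Docker remote API can treat the whole cluster as one engine. A hedged sketch using plain HTTP (the endpoint is a placeholder and TLS is omitted for brevity):
+
+    # Sketch: the same Docker remote API calls work whether the endpoint is a single
+    # Docker engine or an entire data center exposed through sdc-docker.
+    import requests
+
+    DOCKER = "http://docker.example.com:2375"   # placeholder endpoint
+
+    print(requests.get(DOCKER + "/version").json())
+
+    # List running containers, just as `docker ps` would.
+    for c in requests.get(DOCKER + "/containers/json").json():
+        print(c["Id"][:12], c["Image"], c["Status"])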
+ +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-sensu-100614850-orig.jpg) + +### Sensu ### + +Managing large-scale data centers isn’t about working with server GUIs, it’s about automating scripts based on information from monitoring tools and services, routing information from sensors and logs, and then delivering actions to applications. One tool that’s beginning to offer this functionality is [Sensu][11], often described as a “monitoring router.” + +Scripts running across your data center deliver information to Sensu, which then routes it to the appropriate handler, using a publish-and-subscribe architecture based on RabbitMQ. Servers can be distributed, delivering published check results to handler code. You might see results in email, or in a Slack room, or in Sensu’s own dashboards. Message formats are defined in JSON files, or mutators used to format data on the fly, and messages can be filtered to one or more event handlers. + +Sensu is still a relatively young tool, but it’s one that shows a lot of promise. If you’re going to automate your data center, you’re going to need a tool like this not only to show you what’s happening, but to deliver that information where it’s most needed. A commercial option adds support for integration with third-party applications, but much of what you need to manage a data center is in the open source release. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-prometheus-100613996-orig.jpg) + +### Prometheus ### + +Managing a modern data center is a complex task. Racks of servers need to be treated like cattle rather than pets, and you need a monitoring system designed to handle hundreds and thousands of nodes. Monitoring applications presents special challenges, and that’s where [Prometheus][12] comes in to play. A service monitoring system designed to deliver alerts to operators, Prometheus can run on everything from a single laptop to a highly available cluster of monitoring servers. + +Time series data is captured and stored, then compared against patterns to identify faults and problems. You’ll need to expose data on HTTP endpoints, using a YAML file to configure the server. A browser-based reporting tool handles displaying data, with an expression console where you can experiment with queries. Dashboards can be created with a GUI builder, or written using a series of templates, letting you deliver application consoles that can be managed using version control systems such as Git. + +Captured data can be managed using expressions, which make it easy to aggregate data from several sources -- for example, letting you bring performance data from a series of Web endpoints into one store. An experimental alert manager module delivers alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries for common languages like Go and Java mean it’s easy to add Prometheus support to your applications and services, while third-party options extend Prometheus to Node.js and .Net. + +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-elk-100613988-orig.jpg) + +### Elasticsearch, Logstash, and Kibana ### + +Running a modern data center generates a lot of data, and it requires tools to get information out of that data. That’s where the combination of Elasticsearch, Logstash, and Kibana, often referred to as the ELK stack, comes into play. 
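+
+The Elasticsearch piece of that stack is the part your own code is most likely to touch directly: documents go in, and search results come back, as JSON over HTTP. A minimal sketch with an invented index name and log document:
+
+    # Sketch: indexing one log event into Elasticsearch and searching for it over HTTP.
+    import requests
+
+    ES = "http://localhost:9200"   # default Elasticsearch HTTP port
+
+    doc = {"host": "web-01", "level": "error", "message": "disk nearly full",
+           "@timestamp": "2015-09-20T12:00:00Z"}
+
+    # No schema declared up front; refresh=true just makes the document searchable at once.
+    requests.post(ES + "/logs-2015.09.20/event", params={"refresh": "true"}, json=doc)
+
+    query = {"query": {"match": {"message": "disk"}}}
+    result = requests.post(ES + "/logs-2015.09.20/_search", json=query).json()
+    for hit in result["hits"]["hits"]:
+        print(hit["_source"]["host"], hit["_source"]["message"])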
+ +Designed to handle scalable search across a mix of content types, including structured and unstructured documents, [Elasticsearch][13] builds on Apache’s Lucene information retrieval tools, with a RESTful JSON API. It’s used to provide search for sites like Wikipedia and GitHub, using a distributed index with automated load balancing and routing. + +Under the fabric of a modern cloud is a physical array of servers, running as VM hosts. Monitoring many thousands of servers needs centralized logs. [Logstash][14] harvests and filters the logs generated by those servers (and by the applications running on them), using a forwarder on each physical and virtual machine. Logstash-formatted data is then delivered to Elasticsearch, giving you a search index that can be quickly scaled as you add more servers. + +At a higher level, [Kibana][15] adds a visualization layer to Elasticsearch, providing a Web dashboard for exploring and analyzing the data. Dashboards can be created around custom searches and shared with your team, providing a quick, easy-to-digest devops information feed. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-ansible-100613984-orig.jpg) + +### Ansible ### + +Managing server configuration is a key element of any devops approach to managing a modern data center or a cloud infrastructure. Configuration management tooling that takes a desired state approach to simplifies systems management at cloud scale, using server and application descriptions to handle server and application deployment. + +[Ansible][16] offers a minimal management service, using SSH to manage Unix nodes and PowerShell to work with Windows servers, with no need to deploy agents. An Ansible Playbook describes the state of a server or service in YAML, deploying Ansible modules to servers that handle configuration and removing them once the service is running. You can use Playbooks to orchestrate tasks -- for example, deploying several Web endpoints with a single script. + +It’s possible to make module creation and Playbook delivery part of a continuous delivery process, using build tools to deliver configurations and automate deployment. Ansible can pull in information from cloud service providers, simplifying management of virtual machines and networks. Monitoring tools in Ansible are able to trigger additional deployments automatically, helping manage and control cloud services, as well as working to manage resources used by large-scale data platforms like Hadoop. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-jenkins-100613990-orig.jpg) + +### Jenkins ### + +Getting continuous delivery right requires more than a structured way of handling development; it also requires tools for managing test and build. That’s where the [Jenkins][17] continuous integration server comes in. Jenkins works with your choice of source control, your test harnesses, and your build server. It’s a flexible tool, initially designed for working with Java but now extended to support Web and mobile development and even to build Windows applications. + +Jenkins is perhaps best thought of as a switching network, shunting files through a test and build process, and responding to signals from the various tools you’re using – thanks to a library of more than 1,000 plug-ins. These include tools for integrating Jenkins with both local Git instances and GitHub so that it's possible to extend a continuous development model into your build and delivery processes. 
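+
+That switching network can also be driven from the outside: Jenkins exposes a remote HTTP API, so your own scripts can queue builds and read results. A hedged sketch (the server, job name, and credentials are placeholders, and a locked-down server may also require a CSRF crumb):
+
+    # Sketch: triggering a Jenkins job over its remote API and reporting the last result.
+    import time
+    import requests
+
+    JENKINS = "http://jenkins.example.com:8080"   # placeholder server
+    AUTH = ("ci-bot", "api-token")                # placeholder user and API token
+    JOB = "website-build"                         # placeholder job name
+
+    # Queue a new build of the job.
+    requests.post(JENKINS + "/job/" + JOB + "/build", auth=AUTH)
+
+    # Poll the most recent build until it finishes, then print the outcome.
+    while True:
+        build = requests.get(JENKINS + "/job/" + JOB + "/lastBuild/api/json", auth=AUTH).json()
+        if not build["building"]:
+            print("%s #%d: %s" % (JOB, build["number"], build["result"]))
+            break
+        time.sleep(10)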
+ +Using an automation tool like Jenkins is as much about adopting a philosophy as it is about implementing a build process. Once you commit to continuous integration as part of a continuous delivery model, you’ll be running test and build cycles as soon as code is delivered to your source control release branch – and delivering it to users as soon as it’s in the main branch. + +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613995-orig.jpg) + +### Node.js and io.js ### + +Modern cloud applications are built using different design patterns from the familiar n-tier enterprise and Web apps. They’re distributed, event-driven collections of services that can be quickly scaled and can support many thousands of simultaneous users. One key technology in this new paradigm is [Node.js][18], used by many major cloud platforms and easy to install as part of a thin server or container on cloud infrastructure. + +Key to the success of Node.js is the Npm package format, which allows you to quickly install extensions to the core Node.js service. These include frameworks like Express and Seneca, which help build scalable applications. A central registry handles package distribution, and dependencies are automatically installed. + +While the [io.js][19] fork exposed issues with project governance, it also allowed a group of developers to push forward adding ECMAScript 6 support to an Npm-compatible engine. After reconciliation between the two teams, the Node.js and io.js codebases have been merged, with new releases now coming from the io.js code repository. + +Other forks, like Microsoft’s io.js fork to add support for its 64-bit Chakra JavaScript engine alongside Google’s V8, are likely to be merged back into the main branch over the next year, keeping the Node.js platform evolving and cementing its role as the preferred host for cloud-scale microservices. + +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-seneca-100613998-orig.jpg) + +### Seneca ### + +The developers of the [Seneca][20] microservice framework have a motto: “Build it now, scale it later!” It’s an apt maxim for anyone thinking about developing microservices, as it allows you to start small, then add functionality as your service grows. + +Seneca is at heart an implementation of the [actor/message design pattern][21], focused on using Node.js as a switching engine that takes in messages, processes their contents, and sends an appropriate response, either to the message originator or to another service. By focusing on the message patterns that map to business use cases, it’s relatively easy to take Seneca and quickly build a minimum viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to your services. + +You can easily add new patterns to your codebase or break existing patterns into separate services as the needs of your application grow or change. One pattern can also call another, allowing quick code reuse. It’s also easy to add Seneca to a message bus, so you can use it as a framework for working with data from Internet of things devices, as all you need to do is define a listening port where JSON data is delivered. + +Services may not be persistent, and Seneca gives you the option of using a built-in object relational mapping layer to handle data abstraction, with plug-ins for common databases. 
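+
+Seneca itself is JavaScript, but the core idea of dispatching on the shape of a message rather than on a class or an endpoint is easy to see in a few lines of any language. A toy Python rendering of that pattern (this is not Seneca's API, only an illustration of the concept):
+
+    # Toy illustration (not Seneca) of pattern-matched message dispatch.
+    handlers = []
+
+    def add(pattern, fn):
+        """Register a handler for messages containing every key/value pair in `pattern`."""
+        handlers.append((pattern, fn))
+
+    def act(msg):
+        """Route a message to the most specific matching handler and return its reply."""
+        matches = [(p, f) for p, f in handlers
+                   if all(msg.get(k) == v for k, v in p.items())]
+        if not matches:
+            raise LookupError("no handler for %r" % msg)
+        _, fn = max(matches, key=lambda pf: len(pf[0]))  # prefer the most specific pattern
+        return fn(msg)
+
+    # One business use case per pattern; patterns can be added or split into services later.
+    add({"role": "cart", "cmd": "total"},
+        lambda m: {"total": sum(item["price"] for item in m["items"])})
+
+    print(act({"role": "cart", "cmd": "total",
+               "items": [{"price": 5.0}, {"price": 7.5}]}))  # {'total': 12.5}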
+ +-- Simon Bisson + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-netcore-aspnet-100613994-orig.jpg) + +### .Net Core and ASP.Net vNext ### + +Microsoft’s [open-sourcing of .Net][22] is bringing much of the company’s Web platform into the open. The new [.Net Core][23] release runs on Windows, on OS X, and on Linux. Currently migrating from Microsoft’s Codeplex repository to GitHub, .Net Core offers a more modular approach to .Net, allowing you to install the functions you need as you need them. + +Currently under development is [ASP.Net 5][24], an open source version of the Web platform, which runs on .Net Core. You can work with it as the basis of Web apps using Microsoft’s MVC 6 framework. There’s also support for the new SignalR libraries, which add support for WebSockets and other real-time communications protocols. + +If you’re planning on using Microsoft’s new Nano server, you’ll be writing code against .Net Core, as it’s designed for thin environments. The new DNX, the .Net Execution environment, simplifies deployment of ASP.Net applications on a wide range of platforms, with tools for packaging code and for booting a runtime on a host. Features are added using the NuGet package manager, letting you use only the libraries you want. + +Microsoft’s open source .Net is still very young, but there’s a commitment in Redmond to ensure it’s successful. Support in Microsoft’s own next-generation server operating systems means it has a place in both the data center and the cloud. + +-- Simon Bisson + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-glusterfs-100613989-orig.jpg) + +### GlusterFS ### + +[GlusterFS][25] is a distributed file system. Gluster aggregates various storage servers into one large parallel network file system. You can [even use it in place of HDFS in a Hadoop cluster][26] or in place of an expensive SAN system -- or both. While HDFS is great for Hadoop, having a general-purpose distributed file system that doesn’t require you to transfer data to another location to analyze it is a key advantage. + +In an era of commoditized hardware, commoditized computing, and increased performance and latency requirements, buying a big, fat expensive EMC SAN and hoping it fits all of your needs (it won’t) is no longer your sole viable option. GlusterFS was acquired by Red Hat in 2011. + +-- Andrew C. Oliver + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100613992-orig.jpg) + +### Read about more open source winners ### + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. 
Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][27] + +[Bossie Awards 2015: The best open source application development tools][28] + +[Bossie Awards 2015: The best open source big data tools][29] + +[Bossie Awards 2015: The best open source data center and cloud software][30] + +[Bossie Awards 2015: The best open source desktop and mobile software][31] + +[Bossie Awards 2015: The best open source networking and security software][32] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982923/open-source-tools/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:https://www.docker.com/docker-machine +[2]:https://www.docker.com/docker-swarm +[3]:https://www.docker.com/docker-compose +[4]:https://coreos.com/ +[5]:http://rancher.com/rancher-os/ +[6]:http://kubernetes.io/ +[7]:https://mesos.apache.org/ +[8]:https://github.com/joyent/sdc +[9]:https://smartos.org/ +[10]:https://github.com/joyent/sdc-docker +[11]:https://sensuapp.org/ +[12]:http://prometheus.io/ +[13]:https://www.elastic.co/products/elasticsearch +[14]:https://www.elastic.co/products/logstash +[15]:https://www.elastic.co/products/kibana +[16]:http://www.ansible.com/home +[17]:https://jenkins-ci.org/ +[18]:https://nodejs.org/en/ +[19]:https://iojs.org/en/ +[20]:http://senecajs.org/ +[21]:http://www.infoworld.com/article/2976422/application-development/how-to-use-actors-in-distributed-applications.html +[22]:http://www.infoworld.com/article/2846450/microsoft-net/microsoft-open-sources-server-side-net-launches-visual-studio-2015-preview.html +[23]:https://dotnet.github.io/core/ +[24]:http://www.asp.net/vnext +[25]:http://www.gluster.org/ +[26]:http://www.gluster.org/community/documentation/index.php/Hadoop +[27]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[28]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[29]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[30]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[31]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[32]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md b/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md new file mode 100644 index 0000000000..83b2b24a2e --- /dev/null +++ b/sources/share/20151028 Bossie Awards 2015--The best open source desktop and mobile software.md @@ -0,0 +1,223 @@ +Bossie Awards 2015: The best open source desktop and mobile software +================================================================================ +InfoWorld's top picks in open source productivity tools, desktop utilities, and mobile apps + 
+![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-desktop-mobile-100614439-orig.jpg) + +### The best open source desktop and mobile software ### + +Open source on the desktop has a long and distinguished history, and many of our Bossie winners in this category go back many years. Packed with features and still improving, some of these tools offer compelling alternatives to pricey commercial software. Others are utilities that we lean on daily for one reason or another -- the can openers and potato peelers of desktop productivity. One or two of them either plug holes in Windows, or they go the distance where Windows falls short. + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-libreoffice-100614436-orig.jpg) + +### LibreOffice ### + +With the major release of version 5 in August, the Document Foundation’s [LibreOffice][1] offers a completely redesigned user interface, better compatibility with Microsoft Office (including good-but-not-great DOCX, XLSX, and PPTX file format support), and significant improvements to Calc, the spreadsheet application. + +Set against a turbulent background, the LibreOffice effort split from OpenOffice.org in 2010. In 2011, Oracle announced it would no longer support OpenOffice.org, and handed the trademark to the Apache Software Foundation. Since then, it has become [increasingly clear][2] that LibreOffice is winning the race for developers, features, and users. + +-- Woody Leonhard + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-firefox-100614426-orig.jpg) + +### Firefox ### + +In the battle of the big browsers, [Firefox][3] gets our vote over its longtime open source rival Chromium for two important reasons: + +• **Memory use**. Chromium, like its commercial cousin Chrome, has a nasty propensity to glom onto massive amounts of memory. + +• **Privacy**. Witness the [recent controversy][4] over Chromium automatically downloading a microphone snooping program to respond to “OK, Google.” + +Firefox may not have the most features or the down-to-the-millisecond fastest rendering engine. But it’s solid, stingy with resources, highly extensible, and most of all, it comes with no strings attached. There’s no ulterior data-gathering motive. + +-- Woody Leonhard + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-thunderbird-100614433-orig.jpg) + +### Thunderbird ### + +A longtime favorite email client, Mozilla’s [Thunderbird][5], may be getting a bit long in the tooth, but it’s still supported and showing signs of life. The latest version, 38.2, arrived in August, and there are plans for more development. + +Mozilla officially pulled its people off the project back in July 2012, but a hardcore group of volunteers, led by Kent James and the all-volunteer Thunderbird Council, continues to toil away. While you won’t find the latest email innovations in Thunderbird, you will find a solid core of basic functions based on local storage. If having mail in the cloud spooks you, it’s a good, private alternative. And if James goes ahead with his idea of encrypting Thunderbird mail end-to-end, there may be significant new life in the old bird. + +-- Woody Leonhard + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-notepad-100614432-orig.jpg) + +### Notepad++ ### + +If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. 
For Windows users who yearn for a little bit more in a text editor, there’s Don Ho’s [Notepad++][6], which is the editor I turn to, over and over again. + +With tabbed views, drag-and-drop, color-coded hints for completing HTML commands, bookmarks, macro recording, shortcut keys, and every text encoding format you’re likely to encounter, Notepad++ takes text to a new level. We get frequent updates, too, with the latest in August. + +-- Woody Leonhard + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-vlc-100614435-orig.jpg) + +### VLC ### + +The stalwart [VLC][7] (formerly known as VideoLan Client) runs almost any kind of media file on almost any platform. Yes, it even works as a remote control on Apple Watch. + +The tiled Universal app version for Windows 10, in the Windows Store, draws some criticism for instability and lack of control, but in most cases VLC works, and it works well -- without external codecs. It even supports Blu-ray formats with two new libraries. + +The desktop version is a must-have for Windows 10, unless you’re ready to run the advertising gauntlets that are the Universal Groove Music and Movies & TV apps from Microsoft. VLC received a major [feature update][8] in February and a comprehensive bug fix in April. + +-- Woody Leonhard + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-7-zip-100614429-orig.jpg) + +### 7-Zip ### + +Long recognized as the preeminent open source ZIP archive manager for Windows, [7-Zip][9] works like a champ, even on the Windows 10 desktop. Full coverage for RAR files, which can be problematic in Windows, combine with password-protected file creation and support for self-extracting ZIPs. It’s one of those programs that just works. + +Yes, it would be nice to get a more modern file picker. Yes, it would be interesting to see a tiled Universal app version. But even without the fancy bells and whistles, 7-Zip deserves a place on every Windows desktop. + +-- Woody Leonhard + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-handbrake-100614427-orig.jpg) + +### Handbrake ### + +If you want to convert your DVDs (or video files in any commonly used format) into a file in some other format, or simply scrape them off a silver coaster, [Handbrake][10] is the way to do it. If you’re a Windows user, Handbrake is almost indispensible, since Microsoft doesn’t believe in ripping DVDs. + +Handbrake presents a number of handy presets for optimizing conversions for your target device (iPod, iPad, Android Tablet, and so on) It’s simple, and it’s fast. With the latest round of bug fixes released in June, Handbrake’s keeping up on maintenance -- and it works fine on the Windows 10 desktop. + +-- Woody Leonhard + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-keepass-100614430-orig.jpg) + +### KeePass ### + +I’ll confess that I almost gave up on [KeePass][11] because the primary download site goes to Sourceforge. That means you have to be extremely careful which boxes are checked and what you click on (and when) as you attempt to download and install the software. While KeePass itself is 100 percent clean open source (GNU GPL), Sourceforge doesn’t feel so constrained, and its [installers reek of crapware][12]. + +One of many local-file password storage programs, KeePass distinguishes itself with broad scope, as well as its ability to run on all sorts of platforms, no installation required. 
KeePass will save not only passwords, but also credit card information and freely structured information. It provides a strong random password generator, and the database itself is locked with AES and Twofish, so nobody’s going to crack it. And it’s kept up to date, with a new stable release last month. + +-- Woody Leonhard + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-virtualbox-100614434-orig.jpg) + +### VirtualBox ### + +With a major release published in July, Oracle’s open source [VirtualBox][13] -- available for Windows, OS X, Linux, even Solaris --continues to give commercial counterparts VMware Workstation, VMware Fusion, Parallels Desktop, and Microsoft’s Hyper-V a hard run for their money. The Oracle team is still getting the final Windows 10 bugs ironed out, but come to think of it, so is Microsoft. + +VirtualBox doesn’t quite match the performance or polish of the VMware and Parallels products, but it’s getting closer. Version 5 brought long-awaited drag-and-drop support, making it easier to move files between VMs and host. + +I prefer VirtualBox over Hyper-V because it’s easy to control external devices. In Hyper-V, for example, getting sound to work is a pain in the neck, but in VirtualBox it only takes a click in setup. The shared clipboard between VM and host works wonders. Running speed on both is roughly the same, with a slight advantage to Hyper-V. But managing VirtualBox machines is much easier. + +-- Woody Leonhard + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-inkscape-100614428-orig.jpg) + +### Inkscape ### + +If you stand in awe of the designs created with Adobe Illustrator (or even CorelDraw), take a close look at [Inkscape][14]. Scalable vector images never looked so good. + +Version 0.91, released in January, uses a new internal graphics rendering engine called Cairo, sponsored by Google, to make the app run faster and allow for more accurate rendering. Inkscape will read and write SVG, PNG, PDF, even EPS, and many other formats. It can export Flash XML Graphics, HTML5 Canvas, and XAML, among others. + +There’s a strong community around Inkscape, and it’s built for easy extensibility. It’s available for Windows, OS X, and Linux. + +-- Woody Leonhard + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-keepassdroid-100614431-orig.jpg) + +### KeePassDroid ### + +Trying to remember all of the passwords we need today is impossible, and creating new ones to meet stringent password policy requirements can be agonizing. A port of KeePass for Android, [KeePassDroid][15] brings sanity preserving password management to mobile devices. + +Like KeyPass, KeyPassDroid makes creating and accessing passwords easy, requiring you to recall only a single master password. It supports both DES and Twofish algorithms for encrypting all passwords, and it goes a step further by encrypting the entire password database, not only the password fields. Notes and other password pertinent information are encrypted too. + +While KeePassDroid's interface is minimal -- dated, some would say -- it gets the job done with bare-bones efficiency. Need to generate passwords that have certain character sets and lengths? KeePassDroid can do that with ease. With more than a million downloads on the Google Play Store, you could say this app definitely fills a need. + +-- Victor R. 
Garza + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-prey-100615300-orig.jpg) + +### Prey ### + +Loss or theft of mobile devices is all too common these days. While there are many tools in the enterprise to manage and erase data either misplaced or stolen from an organization, [Prey][16] facilitates the recovery of the phone, laptop, or tablet, and not just the wiping of potentially sensitive information from the device. + +Prey is a Web service that works with an open source installed agent for Linux, OS X, Windows, Android, and iOS devices. Prey tracks your lost or stolen device by using either the device's GPS, the native geolocation provided by newer operating systems, or an associated Wi-Fi hotspot to home in on the location. + +If your smartphone is lost or stolen, send a text message to the device to activate Prey. For stolen tablets or laptops, use the Prey Project's cloud-based control panel to select the device as missing. The Prey agent on any device can then take a screenshot of the active applications, turn on the camera to catch a thief's image, reset the device to the factory settings, or fully lock down the device. + +Should you want to retrieve your lost items, the Prey Project strongly suggests you contact your local police to have them assist you. + +-- Victor R. Garza + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orbot-100615299-orig.jpg) + +### Orbot ### + +The premiere proxy application for Android, [Orbot][17] leverages the volunteer-operated network of virtual tunnels called Tor (The Onion Router) to keep all communications private. Orbot works with companion applications [Orweb][18] for secure Web browsing and [ChatSecure][19] for secure chat. In fact, any Android app that allows its proxy settings to be changed can be secured with Orbot. + +One thing to remember about the Tor network is that it's designed for secure, lightweight communications, not for pulling down torrents or watching YouTube videos. Surfing media-rich sites like Facebook can be painfully slow. Your Orbot communications won't be blazing fast, but they will stay private and confidential. + +-- Victor R. Garza + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-tails-100615301-orig.jpg) + +### Tails ### + +[Tails][20], or The Amnesic Incognito Live System, is a Linux Live OS that can be booted from a USB stick, DVD, or SD card. It’s often used covertly in the Deep Web to secure traffic when purchasing illicit substances, but it can also be used to avoid tracking, support freedom of speech, circumvent censorship, and promote liberty. + +Leveraging Tor (The Onion Router), Tails keeps all communications secure and private and promises to leave no trace on any computer after it’s used. It performs disk encryption with LUKS, protects instant messages with OTR, encrypts Web traffic with the Tor Browser and HTTPS Everywhere, and securely deletes files via Nautilus Wipe. Tails even has an office suite, image editor, and the like. + +Now, it's always possible to be traced while using any system if you're not careful, so be vigilant when using Tails and follow good privacy practices, like turning off JavaScript while using Tor. And be aware that Tails isn't necessarily going to be speedy, even while using a fiber connect, but that's what you pay for anonymity. + +-- Victor R. 
Garza + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100614438-orig.jpg) + +### Read about more open source winners ### + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][21] + +[Bossie Awards 2015: The best open source application development tools][22] + +[Bossie Awards 2015: The best open source big data tools][23] + +[Bossie Awards 2015: The best open source data center and cloud software][24] + +[Bossie Awards 2015: The best open source desktop and mobile software][25] + +[Bossie Awards 2015: The best open source networking and security software][26] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982630/open-source-tools/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:https://www.libreoffice.org/download/libreoffice-fresh/ +[2]:http://lwn.net/Articles/637735/ +[3]:https://www.mozilla.org/en-US/firefox/new/ +[4]:https://nakedsecurity.sophos.com/2015/06/24/not-ok-google-privacy-advocates-take-on-the-chromium-team-and-win/ +[5]:https://www.mozilla.org/en-US/thunderbird/ +[6]:https://notepad-plus-plus.org/ +[7]:http://www.videolan.org/vlc/index.html +[8]:http://www.videolan.org/press/vlc-2.2.0.html +[9]:http://www.7-zip.org/ +[10]:https://handbrake.fr/ +[11]:http://keepass.info/ +[12]:http://www.infoworld.com/article/2931753/open-source-software/sourceforge-the-end-cant-come-too-soon.html +[13]:https://www.virtualbox.org/ +[14]:https://inkscape.org/en/download/windows/ +[15]:http://www.keepassdroid.com/ +[16]:http://preyproject.com/ +[17]:https://www.torproject.org/docs/android.html.en +[18]:https://guardianproject.info/apps/orweb/ +[19]:https://guardianproject.info/apps/chatsecure/ +[20]:https://tails.boum.org/ +[21]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[22]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[23]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[24]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[25]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[26]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md b/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md new file mode 100644 index 0000000000..129ce3eff4 --- /dev/null +++ b/sources/share/20151028 Bossie Awards 2015--The best open source networking and security software.md @@ -0,0 +1,162 @@ +Bossie Awards 2015: The best open source networking and security software +================================================================================ +InfoWorld's top picks of the year 
among open source tools for building, operating, and securing networks + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-net-sec-100614459-orig.jpg) + +### The best open source networking and security software ### + +BIND, Sendmail, OpenSSH, Cacti, Nagios, Snort -- open source software seems to have been invented for networks, and many of the oldies and goodies are still going strong. Among our top picks in the category this year, you'll find a mix of stalwarts, mainstays, newcomers, and upstarts perfecting the arts of network management, security monitoring, vulnerability assessment, rootkit detection, and much more. + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-icinga-100614482-orig.jpg) + +### Icinga 2 ### + +Icinga began life as a fork of system monitoring application Nagios. [Icinga 2][1] was completely rewritten to give users a modern interface, support for multiple databases, and an API to integrate numerous extensions. With out-of-the-box load balancing, notifications, and configuration, Icinga 2 shortens the time to installation for complex environments. Icinga 2 supports Graphite natively, giving administrators real-time performance graphing without any fuss. But what puts Icinga back on the radar this year is its release of Icinga Web 2, a graphical front end with drag-and-drop customizable dashboards and streamlined monitoring tools. + +Administrators can view, filter, and prioritize problems, while keeping track of which actions have already been taken. A new matrix view lets administrators view hosts and services on one page. You can view events over a particular time period or filter incidents to understand which ones need immediate attention. Icinga Web 2 may boast a new interface and zippier performance, but all the usual commands from Icinga Classic and Icinga Web are still available. That means there is no downtime trying to learn a new version of the tool. + +-- Fahmida Rashid + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zenoss-100614465-orig.jpg) + +### Zenoss Core ### + +Another open source stalwart, [Zenoss Core][2] gives network administrators a complete, one-stop solution for tracking and managing all of the applications, servers, storage, networking components, virtualization tools, and other elements of an enterprise infrastructure. Administrators can make sure the hardware is running efficiently and take advantage of the modular design to plug in ZenPacks for extended functionality. + +Zenoss Core 5, released in February of this year, takes the already powerful tool and improves it further, with an enhanced user interface and expanded dashboard. The Web-based console and dashboards were already highly customizable and dynamic, and the new version now lets administrators mash up multiple component charts onto a single chart. Think of it as the tool for better root cause and cause/effect analysis. + +Portlets give additional insights for network mapping, device issues, daemon processes, production states, watch lists, and event views, to name a few. And new HTML5 charts can be exported outside the tool. The Zenoss Control Center allows out-of-band management and monitoring of all Zenoss components. Zenoss Core has new tools for online backup and restore, snapshots and rollbacks, and multihost deployment. Even more important, deployments are faster with full Docker support. 
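+
+The Icinga 2 entry above mentions an API for integrating extensions. As a rough, hedged sketch of what that looks like in practice, the call below lists monitored hosts and their current state; the port and URL path are Icinga 2 defaults, while the credentials are placeholders for an API user you would create yourself:
+
+    # Query the Icinga 2 REST API for every host object, returning its name and current state.
+    # Assumes the api feature is enabled and an ApiUser named "apiuser" exists.
+    curl -k -s -u apiuser:secret \
+        'https://localhost:5665/v1/objects/hosts?attrs=name&attrs=state'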
+
+-- Fahmida Rashid
+
+![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opennms-100614461-orig.jpg)
+
+### OpenNMS ###
+
+An extremely flexible network management solution, [OpenNMS][3] can handle any network management task, whether it's device management, application performance monitoring, inventory control, or events management. With IPv6 support, a robust alerts system, and the ability to record user scripts to test Web applications, OpenNMS has everything network administrators and testers need. A mobile dashboard, called OpenNMS Compass, lets networking pros keep an eye on their network even when they're out and about.
+
+The iOS version of the app, which is available on the [iTunes App Store][4], displays outages, nodes, and alarms. The next version will offer additional event details, resource graphs, and information about IP and SNMP interfaces. The Android version, available on [Google Play][5], displays network availability, outages, and alarms on the dashboard, as well as the ability to acknowledge, escalate, or clear alarms. The mobile clients are compatible with OpenNMS Horizon 1.12 or greater and OpenNMS Meridian 2015.1.0 or greater.
+
+-- Fahmida Rashid
+
+![](http://images.techhive.com/images/article/2015/09/bossies-2015-onion-100614460-orig.jpg)
+
+### Security Onion ###
+
+Like an onion, network security monitoring is made of many layers. No single tool will give you visibility into every attack or show you every reconnaissance or foot-printing session on your company network. [Security Onion][6] bundles scores of proven tools into one handy Ubuntu distro that will allow you to see who's inside your network and help keep the bad guys out.
+
+Whether you're taking a proactive approach to network security monitoring or following up on a potential attack, Security Onion can assist. Consisting of sensor, server, and display layers, the Onion combines full network packet capture with network-based and host-based intrusion detection, and it serves up all of the various logs for inspection and analysis.
+
+The star-studded network security toolchain includes Netsniff-NG for packet capture, Snort and Suricata for rules-based network intrusion detection, Bro for analysis-based network monitoring, OSSEC for host intrusion detection, and Sguil, Squert, Snorby, and ELSA (Enterprise Log Search and Archive) for display, analysis, and log management. It’s a carefully vetted collection of tools, all wrapped in a wizard-driven installer and backed by thorough documentation, that can help you get from zero to monitoring as fast as possible.
+
+-- Victor R. Garza
+
+![](http://images.techhive.com/images/article/2015/09/bossies-2015-kali-100614458-orig.jpg)
+
+### Kali Linux ###
+
+The team behind [Kali Linux][7] revamped the popular security Linux distribution this year to make it faster and even more versatile. Kali sports a new 4.0 kernel, improved hardware and wireless driver support, and a snappier interface. The most popular tools are easily accessible from a dock on the side of the screen. The biggest change? Kali Linux is now a rolling distribution, with a continuous stream of software updates. Kali's core system is based on Debian Jessie, and the team will pull packages continuously from Debian Testing, while continuing to add new Kali-flavored features on top.
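+
+Because Kali is now a rolling distribution, keeping a system current is just the familiar Debian-style update cycle. A minimal sketch, assuming the stock Kali package sources:
+
+    # Refresh the package lists, then roll everything forward to the latest Kali packages
+    sudo apt-get update
+    sudo apt-get dist-upgrade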
+ +The distribution still comes jam-packed with tools for penetration testing, vulnerability analysis, security forensics, Web application analysis, wireless networking and assessment, reverse engineering, and exploitation tools. Now the distribution has an upstream version checking system that will automatically notify users when updates are available for the individual tools. The distribution also features ARM images for a range of devices, including Raspberry Pi, Chromebook, and Odroids, as well as updates to the NetHunter penetration testing platform that runs on Android devices. There are other changes too: Metasploit Community/Pro is no longer included, because Kali 2.0 is not yet [officially supported by Rapid7][8]. + +-- Fahmida Rashid + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-openvas-100614462-orig.jpg) + +### OpenVAS ### + +[OpenVAS][9], the Open Vulnerability Assessment System, is a framework that combines multiple services and tools to offer vulnerability scanning and vulnerability management. The scanner is coupled with a weekly feed of network vulnerability tests, or you can use a feed from a commercial service. The framework includes a command-line interface (so it can be scripted) and an SSL-secured, browser-based interface via the [Greenbone Security Assistant][10]. OpenVAS accommodates various plug-ins for additional functionality. Scans can be scheduled or run on-demand. + +Multiple OpenVAS installations can be controlled through a single master, which makes this a scalable vulnerability assessment tool for enterprises. The project is as compatible with standards as can be: Scan results and configurations are stored in a SQL database, where they can be accessed easily by external reporting tools. Client tools access the OpenVAS Manager via the XML-based stateless OpenVAS Management Protocol, so security administrators can extend the functionality of the framework. The software can be installed from packages or source code to run on Windows or Linux, or downloaded as a virtual appliance. + +-- Matt Sarrel + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-owasp-100614463-orig.jpg) + +### OWASP ### + +[OWASP][11], the Open Web Application Security Project, is a nonprofit organization with worldwide chapters focused on improving software security. The community-driven organization provides test tools, documentation, training, and almost anything you could imagine that’s related to assessing software security and best practices for developing secure software. Several OWASP projects have become valuable components of many a security practitioner's toolkit: + +[ZAP][12], the Zed Attack Proxy Project, is a penetration test tool for finding vulnerabilities in Web applications. One of the design goals of ZAP was to make it easy to use so that developers and functional testers who aren't security experts can benefit from using it. ZAP provides automated scanners and a set of manual test tools. + +The [Xenotix XSS Exploit Framework][13] is an advanced cross-site scripting vulnerability detection and exploitation framework that runs scans within browser engines to get real-world results. The Xenotix Scanner Module uses three intelligent fuzzers, and it can run through nearly 5,000 distinct XSS payloads. An API lets security administrators extend and customize the exploit toolkit. 
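+
+As a quick, hedged illustration of the ZAP project described above, the commands below start ZAP headless so scripts and other tools can drive it over its JSON API; the port and API key are arbitrary placeholders, and zap.sh is assumed to be on your path:
+
+    # Launch ZAP as a daemon that other tools can control
+    zap.sh -daemon -port 8090 -config api.key=changeme
+    # Ask the running daemon for its version over the JSON API to confirm it is up
+    curl 'http://localhost:8090/JSON/core/view/version/?apikey=changeme'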
+ +[O-Saft][14], or the OWASP SSL advanced forensic tool, is an SSL auditing tool that shows detailed information about SSL certificates and tests SSL connections. This command-line tool can run online or offline to assess SSL security such as ciphers and configurations. O-Saft provides built-in checks for common vulnerabilities, and you can easily extend these through scripting. In May 2015 a simple GUI was added as an optional download. + +[OWTF][15], the Offensive Web Testing Framework, is an automated test tool that follows OWASP testing guidelines and the NIST and PTES standards. The framework uses both a Web UI and a CLI, and it probes Web and application servers for common vulnerabilities such as improper configuration and unpatched software. + +-- Matt Sarrel + +![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-beef-100614456-orig.jpg) + +### BeEF ### + +The Web browser has become the most common vector for attacks against clients. [BeEF][15], the Browser Exploitation Framework Project, is a widely used penetration tool to assess Web browser security. BeEF helps you expose the security weaknesses of client systems using client-side attacks launched through the browser. BeEF sets up a malicious website, which security administrators visit from the browser they want to test. BeEF then sends commands to attack the Web browser and use it to plant software on the client machine. Administrators can then launch attacks on the client machine as if they were zombies. + +BeEF comes with commonly used modules like a key logger, a port scanner, and a Web proxy, plus you can write your own modules or send commands directly to the zombified test machine. BeEF comes with a handful of demo Web pages to help you get started and makes it very easy to write additional Web pages and attack modules so you can customize testing to your environment. BeEF is a valuable test tool for assessing browser and endpoint security and for learning how browser-based attacks are launched. Use it to put together a demo to show your users how malware typically infects client devices. + +-- Matt Sarrel + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-unhide-100614464-orig.jpg) + +### Unhide ### + +[Unhide][16] is a forensic tool that locates open TCP/UDP ports and hidden process on UNIX, Linux, and Windows. Hidden ports and processes can be the result of rootkit or LKM (loadable kernel module) activity. Rootkits can be difficult to find and remove because they are designed to be stealthy, hiding themselves from the OS and user. A rootkit can use LKMs to hide its processes or impersonate other processes, allowing it to run on machines undiscovered for a long time. Unhide can provide the assurance that administrators need to know their systems are clean. + +Unhide is really two separate scripts: one for processes and one for ports. The tool interrogates running processes, threads, and open ports and compares this info to what's registered with the system as active, reporting discrepancies. Unhide and WinUnhide are extremely lightweight scripts that run from the command line to produce text output. They're not pretty, but they are extremely useful. Unhide is also included in the [Rootkit Hunter][17] project. 
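+
+Running the two Unhide checks described above is simple; a short sketch for the Linux version, with root privileges assumed:
+
+    # Cross-check /proc, ps output, and syscalls for processes hidden from userland tools
+    sudo unhide proc sys brute
+    # Look for TCP/UDP ports that are in use but not reported by the system
+    sudo unhide-tcp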
+ +-- Matt Sarrel + +![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614457-orig.jpg) + +Read about more open source winners + +InfoWorld's Best of Open Source Awards for 2014 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners: + +[Bossie Awards 2015: The best open source applications][18] + +[Bossie Awards 2015: The best open source application development tools][19] + +[Bossie Awards 2015: The best open source big data tools][20] + +[Bossie Awards 2015: The best open source data center and cloud software][21] + +[Bossie Awards 2015: The best open source desktop and mobile software][22] + +[Bossie Awards 2015: The best open source networking and security software][23] + +-------------------------------------------------------------------------------- + +via: http://www.infoworld.com/article/2982962/open-source-tools/bossie-awards-2015-the-best-open-source-networking-and-security-software.html + +作者:[InfoWorld staff][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.infoworld.com/author/InfoWorld-staff/ +[1]:https://www.icinga.org/icinga/icinga-2/ +[2]:http://www.zenoss.com/ +[3]:http://www.opennms.org/ +[4]:https://itunes.apple.com/us/app/opennms-compass/id968875097?mt=8 +[5]:https://play.google.com/store/apps/details?id=com.opennms.compass&hl=en +[6]:http://blog.securityonion.net/p/securityonion.html +[7]:https://www.kali.org/ +[8]:https://community.rapid7.com/community/metasploit/blog/2015/08/12/metasploit-on-kali-linux-20 +[9]:http://www.openvas.org/ +[10]:http://www.greenbone.net/ +[11]:https://www.owasp.org/index.php/Main_Page +[12]:https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project +[13]:https://www.owasp.org/index.php/O-Saft +[14]:https://www.owasp.org/index.php/OWASP_OWTF +[15]:http://www.beefproject.com/ +[16]:http://www.unhide-forensics.info/ +[17]:http://www.rootkit.nl/projects/rootkit_hunter.html +[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html +[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html +[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html +[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html +[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html +[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html \ No newline at end of file diff --git a/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md b/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md new file mode 100644 index 0000000000..f9384d4635 --- /dev/null +++ b/sources/share/20151030 80 Linux Monitoring Tools for SysAdmins.md @@ -0,0 +1,605 @@ + +translation by strugglingyouth +80 Linux Monitoring Tools for SysAdmins +================================================================================ +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg) + +The industry is hotting up at the moment, and there are more tools than you can shake a stick at. 
Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to your machines. Within this article we outline: + +- Command line tools +- Network related +- System related monitoring +- Log monitoring tools +- Infrastructure monitoring tools + +It’s hard work monitoring and debugging performance problems, but it’s easier with the right tools at the right time. Here are some tools you’ve probably heard of, some you probably haven’t – and when to use them: + +### Top 10 System Monitoring Tools ### + +#### 1. Top #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg) + +This is a small tool which is pre-installed in many unix systems. When you want an overview of all the processes or threads running in the system: top is a good tool. You can order these processes on different criteria and the default criteria is CPU. + +#### 2. [htop][1] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg) + +Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand and has built in commands for common things you would like to do. Plus it’s fully interactive. + +#### 3. [atop][2] #### + +Atop monitors all processes much like top and htop, unlike top and htop however it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load. + +#### 4. [apachetop][3] #### + +Apachetop monitors the overall performance of your apache webserver. It’s largely based on mytop. It displays current number of reads, writes and the overall number of requests processed. + +#### 5. [ftptop][4] #### + +ftptop gives you basic information of all the current ftp connections to your server such as the total amount of sessions, how many are uploading and downloading and who the client is. + +#### 6. [mytop][5] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg) + +mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries it’s processing in real time. + +#### 7. [powertop][6] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg) + +powertop helps you diagnose issues that has to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key. + +#### 8. [iotop][7] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg) + +iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O. + +### Network related monitoring ### + +#### 9. [ntopng][8] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg) + +ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do stuff such as: geolocate hosts, get network traffic and show ip traffic distribution and analyze it. + +#### 10. 
[iftop][9] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg) + +iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”. + +#### 11. [jnettop][10] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg) + +jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis. + +12. [bandwidthd][11] + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg) + +BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports. + +#### 13. [EtherApe][12] #### + +EtherApe displays network traffic graphically, the more talkative the bigger the node. It either captures live traffic or can read it from a tcpdump. The displayed can also be refined using a network filter with pcap syntax. + +#### 14. [ethtool][13] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg) + +ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices. + +#### 15. [NetHogs][14] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg) + +NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it. + +#### 16. [iptraf][15] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg) + +iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts. + +#### 17. [ngrep][16] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg) + +ngrep is grep but for the network layer. It’s pcap aware and will allow to specify extended regular or hexadecimal expressions to match against packets of . + +#### 18. [MRTG][17] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg) + +MRTG was orginally developed to monitor router traffic, but now it’s able to monitor other network related things as well. It typically collects every five minutes and then generates a html page. It also has the capability of sending warning emails. + +#### 19. [bmon][18] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg) + +Bmon monitors and helps you debug networks. It captures network related statistics and presents it in human friendly way. You can also interact with bmon through curses or through scripting. + +#### 20. traceroute #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg) + +Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network. + +#### 21. 
[IPTState][19] #### + +IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table. + +#### 22. [darkstat][20] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg) + +Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and gives you a nice graphical user interface of the graphs. + +#### 23. [vnStat][21] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg) + +vnStat is a network traffic monitor that uses statistics provided by the kernel which ensures light use of system resources. The gathered statistics persists through system reboots. It has color options for the artistic sysadmins. + +#### 24. netstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg) + +Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interfaces. It’s used to find problems in the network. + +#### 25. ss #### + +Instead of using netstat, it’s however preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want a summary statistics you can use the command `ss -s`. + +#### 26. [nmap][22] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg) + +Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use this for SQL injection vulnerabilities, network discovery and other means related to penetration testing. + +#### 27. [MTR][23] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg) + +MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it will limit the number hops individual packets has to travel while also listening to their expiry. It then repeats this every second. + +#### 28. [Tcpdump][24] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg) + +Tcpdump will output a description of the contents of the packet it just captured which matches the expression that you provided in the command. You can also save the this data for further analysis. + +#### 29. [Justniffer][25] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg) + +Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in customizable way. You could for instance mimic the access log that apache has. + +### System related monitoring ### + +#### 30. [nmon][26] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg) + +nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems, top processes. The data can also be added to a RRD database for further analysis. + +#### 31. [conky][27] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg) + +Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua. 
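+
+As a small illustration of how several of the command-line tools above fit together during network triage (the interface, host, and capture filter are placeholders to adapt):
+
+    # Socket summary, as noted in the ss entry
+    ss -s
+    # Per-hop latency report toward a host of interest
+    mtr --report example.com
+    # Capture HTTP traffic on eth0 without name resolution and save it for offline analysis
+    sudo tcpdump -i eth0 -nn 'tcp port 80' -w web-traffic.pcap
+    # Re-read the saved capture later
+    tcpdump -nn -r web-traffic.pcap | head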
+ +#### 32. [Glances][28] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg) + +Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface. + +#### 33. [saidar][29] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg) + +Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible. + +#### 34. [RRDtool][30] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg) + +RRDtool is a tool developed to handle round-robin databases or RRD. RRD aims to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format. + +#### 35. [monit][31] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg) + +Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes. + +#### 36. [Linux process explorer][32] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg) + +Linux process explorer is akin to the activity monitor for OSX or the windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory usage or CPU it uses. + +#### 37. df #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg) + +df is an abbreviation for disk free and is pre-installed program in all unix systems used to display the amount of available disk space for filesystems which the user have access to. + +#### 38. [discus][33] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg) + +Discus is similar to df however it aims to improve df by making it prettier using fancy features as colors, graphs and smart formatting of numbers. + +#### 39. [xosview][34] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg) + +xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the including IRQ. + +#### 40. [Dstat][35] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg) + +Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind. + +#### 41. [Net-SNMP][36] #### + +SNMP is the protocol ‘simple network management protocol’ and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol. + +#### 42. [incron][37] #### + +Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’ that’s exactly what incron does. + +#### 43. [monitorix][38] #### + +Monitorix is lightweight system monitoring tool. 
It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism of all metrics. + +#### 44. vmstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg) + +vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine. + +#### 45. uptime #### + +This small command that quickly gives you information about how long the machine has been running, how many users currently are logged on and the system load average for the past 1, 5 and 15 minutes. + +#### 46. mpstat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg) + +mpstat is a built-in tool that monitors cpu usage. The most common command is using `mpstat -P ALL` which gives you the usage of all the cores. You can also get an interval update of the CPU usage. + +#### 47. pmap #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg) + +pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks. + +#### 48. ps #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg) + +The ps command will give you an overview of all the current processes. You can easily select all processes using the command `ps -A` + +#### 49. [sar][39] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg) + +sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things. + +#### 50. [collectl][40] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg) + +Similar to sar collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference to sar is collectl is able to deal with times below 1 second, it can be fed into a plotting tool directly and collectl monitors processes more extensively. + +#### 51. [iostat][41] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg) + +iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine. + +#### 52. free #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg) + +This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment. + +#### 53. /Proc file system #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg) + +The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42] + +#### 54. [GKrellM][43] #### + +GKrellm is a gui application that monitor the status of your hardware such CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice. + +#### 55. 
[Gnome system monitor][44] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg) + +Gnome system monitor is a basic system monitoring tool that has features looking at process dependencies from a tree view, kill or renice processes and graphs of all server metrics. + +### Log monitoring tools ### + +#### 56. [GoAccess][45] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg) + +GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things. + +#### 57. [Logwatch][46] #### + +Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine. + +#### 58. [Swatch][47] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg) + +Much like Logwatch Swatch also monitors your logs, but instead of giving reports it watches for regular expression and notifies you via mail or the console when there is a match. It could be used for intruder detection for example. + +#### 59. [MultiTail][48] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg) + +MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading with the help of regular expressions. + +#### System tools #### + +#### 60. [acct or psacct][49] #### + +acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a users executes inside the system including CPU and memory time. Once installed you get that summary with the command ‘sa’. + +#### 61. [whowatch][50] #### + +Similar to acct this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes and so you can see exactly what’s happening. + +#### 62. [strace][51] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg) + +strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is making strace print a list of system calls made by the program which is useful if the program does not behave as expected. + +#### 63. [DTrace][52] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg) + +DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the weak of heart as there is a 1200 book written on the topic. + +#### 64. [webmin][53] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg) + +Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it. + +#### 65. 
stat #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg) + +Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed. + +#### 66. ifconfig #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg) + +ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes network monitor tools use ifconfig to set it into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`. + +#### 67. [ulimit][54] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg) + +ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources don’t go overboard. For instance making a fork bomb where a properly configured ulimit is in place would be totally fine. + +#### 68. [cpulimit][55] #### + +CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful to make batch jobs not eat up too many CPU cycles. + +#### 69. lshw #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg) + +lshw is a small built-in tool extract detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration. + +#### 70. w #### + +W is a built-in command that displays information about the users currently using the machine and their processes. + +#### 71. lsof #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg) + +lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user or perhaps kill all processes that belongs to a specific user. + +### Infrastructure monitoring tools ### + +#### 72. Server Density #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png) + +Our [server monitoring tool][56]! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or api. The service already supports Nagios plugins. + +#### 73. [OpenNMS][57] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg) + +OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring and data collection. It’s designed to be customizable to work in a variety of network environments. + +#### 74. [SysUsage][58] #### + +![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg) + +SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alarm you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats. + +#### 75. 
[brainypdm][59] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg)
+
+brainypdm is a data management and monitoring tool that has the capability to gather data from Nagios or another generic source to make graphs. It’s cross-platform, has custom graphs and is web based.
+
+#### 76. [PCP][60] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg)
+
+PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems.
+
+#### 77. [KDE system guard][61] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg)
+
+This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet, and if a process needs to be killed or started, it can be done within KDE system guard.
+
+#### 78. [Munin][62] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg)
+
+Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display these graphs. Its emphasis is on plug and play capabilities with a number of plugins available.
+
+#### 79. [Nagios][63] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg)
+
+Nagios is a system and network monitoring tool that helps you monitor your many servers. It has support for alerting when things go wrong. It also has many plugins written for the platform.
+
+#### 80. [Zenoss][64] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg)
+
+Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins.
+
+#### 81. [Cacti][65] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg)
+
+(And one for luck!) Cacti is a network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the result. Cacti can be extended to monitor a source of your choice through shell scripts.
+
+#### 82. [Zabbix][66] ####
+
+![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png)
+
+Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The core is written in C and has a frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you.
+
+### Bonus section: ###
+
+Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back through and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you:
+
+#### 83. [collectd][67] ####
+
+Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible.
+
+#### 84. 
[Observium][68] #### + +Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network. + +#### 85. Nload #### + +It’s a command line tool that monitors network throughput. It’s neat because it visualizes the in and and outgoing traffic using two graphs and some additional useful data like total amount of transferred data. You can install it with + + yum install nload + +or + + sudo apt-get install nload + +#### 84. [SmokePing][69] #### + +SmokePing keeps track of the network latencies of your network and it visualises them too. There are a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you it’s there is an ongoing development to make that happen. + +#### 85. [MobaXterm][70] #### + +If you’re working in windows environment day in and day out. You may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux. Which will help you tremendously in your monitoring needs! + +#### 86. [Shinken monitoring][71] #### + +Shinken is a monitoring framework which is a total rewrite of Nagios in python. It aims to enhance flexibility and managing a large environment. While still keeping all your nagios configuration and plugins. + +-------------------------------------------------------------------------------- + +via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/ + +作者:[Jonathan Sundqvist][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:https://www.serverdensity.com/ +[1]:http://hisham.hm/htop/ +[2]:http://www.atoptool.nl/ +[3]:https://github.com/JeremyJones/Apachetop +[4]:http://www.proftpd.org/docs/howto/Scoreboard.html +[5]:http://jeremy.zawodny.com/mysql/mytop/ +[6]:https://01.org/powertop +[7]:http://guichaz.free.fr/iotop/ +[8]:http://www.ntop.org/products/ntop/ +[9]:http://www.ex-parrot.com/pdw/iftop/ +[10]:http://jnettop.kubs.info/wiki/ +[11]:http://bandwidthd.sourceforge.net/ +[12]:http://etherape.sourceforge.net/ +[13]:https://www.kernel.org/pub/software/network/ethtool/ +[14]:http://nethogs.sourceforge.net/ +[15]:http://iptraf.seul.org/ +[16]:http://ngrep.sourceforge.net/ +[17]:http://oss.oetiker.ch/mrtg/ +[18]:https://github.com/tgraf/bmon/ +[19]:http://www.phildev.net/iptstate/index.shtml +[20]:https://unix4lyfe.org/darkstat/ +[21]:http://humdi.net/vnstat/ +[22]:http://nmap.org/ +[23]:http://www.bitwizard.nl/mtr/ +[24]:http://www.tcpdump.org/ +[25]:http://justniffer.sourceforge.net/ +[26]:http://nmon.sourceforge.net/pmwiki.php +[27]:http://conky.sourceforge.net/ +[28]:https://github.com/nicolargo/glances +[29]:https://packages.debian.org/sid/utils/saidar +[30]:http://oss.oetiker.ch/rrdtool/ +[31]:http://mmonit.com/monit +[32]:http://sourceforge.net/projects/procexp/ +[33]:http://packages.ubuntu.com/lucid/utils/discus +[34]:http://www.pogo.org.uk/~mark/xosview/ +[35]:http://dag.wiee.rs/home-made/dstat/ +[36]:http://www.net-snmp.org/ +[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en +[38]:http://www.monitorix.org/ +[39]:http://sebastien.godard.pagesperso-orange.fr/ +[40]:http://collectl.sourceforge.net/ +[41]:http://sebastien.godard.pagesperso-orange.fr/ 
+[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html +[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html +[44]:http://freecode.com/projects/gnome-system-monitor +[45]:http://goaccess.io/ +[46]:http://sourceforge.net/projects/logwatch/ +[47]:http://sourceforge.net/projects/swatch/ +[48]:http://www.vanheusden.com/multitail/ +[49]:http://www.gnu.org/software/acct/ +[50]:http://whowatch.sourceforge.net/ +[51]:http://sourceforge.net/projects/strace/ +[52]:http://dtrace.org/blogs/about/ +[53]:http://www.webmin.com/ +[54]:http://ss64.com/bash/ulimit.html +[55]:https://github.com/opsengine/cpulimit +[56]:https://www.serverdensity.com/server-monitoring/ +[57]:http://www.opennms.org/ +[58]:http://sysusage.darold.net/ +[59]:http://sourceforge.net/projects/brainypdm/ +[60]:http://www.pcp.io/ +[61]:https://userbase.kde.org/KSysGuard +[62]:http://munin-monitoring.org/ +[63]:http://www.nagios.org/ +[64]:http://www.zenoss.com/ +[65]:http://www.cacti.net/ +[66]:http://www.zabbix.com/ +[67]:https://collectd.org/ +[68]:http://www.observium.org/ +[69]:http://oss.oetiker.ch/smokeping/ +[70]:http://mobaxterm.mobatek.net/ +[71]:http://www.shinken-monitoring.org/ diff --git a/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md b/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md new file mode 100644 index 0000000000..aaf8a7292d --- /dev/null +++ b/sources/share/20151104 Optimize Web Delivery with these Open Source Tools.md @@ -0,0 +1,195 @@ +Optimize Web Delivery with these Open Source Tools +================================================================================ +Web proxy software forwards HTTP requests without modifying traffic in any way. They can be configured as a transparent proxy with no client-side configuration required. They can also be used as a reverse proxy front-end to websites; here the cache serves an unlimited number of clients for one or some web servers. + +Web proxies are versatile tools. They have a wide variety of uses, from caching web, DNS and other lookups, to speeding up the delivery of a web server / reducing bandwidth consumption. Web proxy software can also harden security by filtering traffic and anonymizing connections, and offer media-range limitations. This software is used by high-profile, high-traffic websites such as The New York Times, The Guardian, and social media and content sites such as Twitter, Facebook, and Wikipedia. + +Web caches have become a vital mechanism for optimising the amount of data that is delivered in a given period of time. Good web caches also help to minimise latency, serving pages as quickly as possible. This helps to prevent the end user from becoming impatient having to wait for content to be delivered. Web caches optimise the data flow between client and server. They also help to converse bandwidth by caching frequently-delivered content. If you need to reduce server load and improve delivery speed of your content, it is definitely worth exploring the benefits offered by web cache software. + +To provide an insight into the quality of software available for Linux, I feature below 5 excellent open source web proxy tools. Some of the them are full-featured; a couple of them have very modest resource needs. + +### Squid ### + +Squid is a high-performance open source proxy caching server and web cache daemon. It supports FTP, Internet Gopher, HTTPS, TLS, and SSL. It handles all requests in a single, non-blocking, I/O-driven process over IPv4 or IPv6. 
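+
+To make the caching-proxy idea concrete, here is a minimal, hedged squid.conf sketch. The directives are standard Squid options, but the cache sizes, paths, and client network are placeholders to adjust for your environment:
+
+    # Write a tiny forward-proxy configuration to a scratch file
+    cat <<'EOF' | sudo tee /etc/squid/squid.conf.minimal
+    http_port 3128
+    cache_mem 256 MB
+    cache_dir ufs /var/spool/squid 1024 16 256
+    acl localnet src 192.168.0.0/16
+    http_access allow localnet
+    http_access deny all
+    EOF
+    # Validate the syntax without touching the running proxy
+    sudo squid -k parse -f /etc/squid/squid.conf.minimal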
+
+Squid consists of a main server program squid, a Domain Name System lookup program dnsserver, some optional programs for rewriting requests and performing authentication, together with some management and client tools.
+
+Squid offers a rich access control, authorization and logging environment to develop web proxy and content serving applications.
+
+Features include:
+
+- Web proxy:
+ - Caching to reduce access time and bandwidth use
+ - Keeps metadata and especially hot objects cached in RAM
+ - Caches DNS lookups
+ - Supports non-blocking DNS lookups
+ - Implements negative caching of failed requests
+- Squid caches can be arranged in a hierarchy or mesh for additional bandwidth savings
+- Enforce site-usage policies with extensive access controls
+- Anonymize connections, such as disabling or changing specific header fields in a client's HTTP request
+- Reverse proxy
+- Media-range limitations
+- Supports SSL
+- Support for IPv6
+- Error Page Localization - error pages presented by Squid may now be localized per-request to match the visitor's preferred language
+- Connection Pinning (for NTLM Auth Passthrough) - a workaround which permits Web servers to use Microsoft NTLM Authentication instead of HTTP standard authentication through a web proxy
+- Quality of Service (QoS) Flow support
+ - Select a TOS/Diffserv value to mark local hits
+ - Select a TOS/Diffserv value to mark peer hits
+ - Selectively mark only sibling or parent requests
+ - Allows any HTTP response towards clients to have the TOS value of the response coming from the remote server preserved
+ - Mask certain bits in the TOS received from the remote server, before copying the value to the TOS sent towards clients
+- SSL Bump (for HTTPS Filtering and Adaptation) - Squid-in-the-middle decryption and encryption of CONNECT tunneled SSL traffic, using configurable client- and server-side certificates
+- eCAP Adaptation Module support
+- ICAP Bypass and Retry enhancements - ICAP is now extended with full bypass and dynamic chain routing to handle multiple adaptation services.
+- ICY streaming protocol support - commonly known as SHOUTcast multimedia streams
+- Dynamic SSL Certificate Generation
+- Support for the Internet Content Adaptation Protocol (ICAP)
+- Full request logging
+- Anonymize connections
+
+- Website: [www.squid-cache.org][1]
+- Developer: National Laboratory for Applied Networking Research (NLANR) and Internet volunteers
+- License: GNU GPL v2
+- Version Number: 4.0.1
+
+### Privoxy ###
+
+Privoxy (Privacy Enhancing Proxy) is a non-caching Web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It supports both stand-alone systems and multi-user networks.
+
+Privoxy uses the concept of actions in order to manipulate the data stream between the browser and remote sites.
+
+Features include:
+
+- Highly configurable - completely personalize your installation
+- Ad blocking
+- Cookie management
+- Supports "Connection: keep-alive". Outgoing connections can be kept alive independently from the client
+- Supports IPv6
+- Tagging, which allows the behaviour to be changed based on client and server headers
+- Run as an "intercepting" proxy
+- Sophisticated actions and filters for manipulating both server and client headers
+- Can be chained with other proxies
+- Integrated browser-based configuration and control utility. Browser-based tracing of rule and filter effects. Remote toggling
+- Web page filtering (text replacements, removes banners based on size, invisible "web-bugs" and HTML annoyances, etc)
+- Modularized configuration that allows for standard settings and user settings to reside in separate files, so that installing updated actions files won't overwrite individual user settings
+- Support for Perl Compatible Regular Expressions in the configuration files, and a more sophisticated and flexible configuration syntax
+- GIF de-animation
+- Bypass many click-tracking scripts (avoids script redirection)
+- User-customizable HTML templates for most proxy-generated pages (e.g. "blocked" page)
+- Auto-detection and re-reading of config file changes
+- Most features are controllable on a per-site or per-location basis
+
+- Website: [www.privoxy.org][2]
+- Developer: Fabian Keil (lead developer), David Schmidt, and many other contributors
+- License: GNU GPL v2
+- Version Number: 3.4.2
+
+### Varnish Cache ###
+
+Varnish Cache is a web accelerator written with performance and flexibility in mind. Its modern architecture offers significantly better performance. It typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture. Varnish stores web pages in memory so the web servers do not have to create the same web page repeatedly. The web server only recreates a page when it is changed. When content is served from memory this happens a lot faster than anything else.
+
+Additionally, Varnish can serve web pages much faster than any application server is capable of - giving the website a significant speed enhancement.
+
+For a cost-effective configuration, Varnish Cache uses between 1-16GB of RAM and an SSD disk.
+
+Features include:
+
+- Modern design
+- VCL - a very flexible configuration language. The VCL configuration is translated to C, compiled, loaded and executed, giving flexibility and speed
+- Load balancing using both a round-robin and a random director, both with a per-backend weighting
+- DNS, Random, Hashing and Client IP based Directors
+- Load balance between multiple backends
+- Support for Edge Side Includes including stitching together compressed ESI fragments
+- Heavily threaded
+- URL rewriting
+- Cache multiple vhosts with a single Varnish
+- Log data is stored in shared memory
+- Basic health-checking of backends
+- Graceful handling of "dead" backends
+- Administered by a command line interface
+- Use In-line C to extend Varnish
+- Can be used on the same system as Apache
+- Run multiple Varnish on the same system
+- Support for HAProxy's PROXY protocol. This protocol adds a small header on each incoming TCP connection that describes who the real client is, added by (for example) an SSL terminating process
+- Warm and cold VCL states
+- Plugin support with Varnish Modules, called VMODs
+- Backends defined through VMODs
+- Gzip Compression and Decompression
+- HTTP Streaming Pass & Fetch
+- Saint and Grace mode. Saint Mode allows for unhealthy backends to be blacklisted for a period of time, preventing them from serving traffic when using Varnish as a load balancer. Grace mode allows Varnish to serve an expired version of a page or other asset in cases where Varnish is unable to retrieve a healthy response from the backend
+- Experimental support for Persistent Storage, without LRU eviction
+
+- Website: [www.varnish-cache.org][3]
+- Developer: Varnish Software
+- License: FreeBSD
+- Version Number: 4.1.0
+
+### Polipo ###
+
+Polipo is an open source caching HTTP proxy which has modest resource needs.
+
+It listens to requests for web pages from your browser and forwards them to web servers, and forwards the servers’ replies to your browser. In the process, it optimises and cleans up the network traffic. It is similar in spirit to WWWOFFLE, but the implementation techniques are more like the ones used by Squid.
+
+Polipo aims at being a compliant HTTP/1.1 proxy. It should work with any web site that complies with either HTTP/1.1 or the older HTTP/1.0.
+
+Features include:
+
+- HTTP 1.1, IPv4 & IPv6, traffic filtering and privacy-enhancement
+- Uses HTTP/1.1 pipelining if it believes that the remote server supports it, whether the incoming requests are pipelined or come in simultaneously on multiple connections
+- Cache the initial segment of an instance if the download has been interrupted, and, if necessary, complete it later using Range requests
+- Upgrade client requests to HTTP/1.1 even if they come in as HTTP/1.0, and up- or downgrade server replies to the client's capabilities
+- Complete support for IPv6 (except for scoped (link-local) addresses)
+- Use as a bridge between the IPv4 and IPv6 Internets
+- Content-filtering
+- Can use a technique known as Poor Man's Multiplexing to reduce latency
+- SOCKS 4 and SOCKS 5 protocol support
+- HTTPS proxying
+- Behaves as a transparent proxy
+- Run Polipo together with Privoxy or Tor
+
+- Website: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4]
+- Developer: Juliusz Chroboczek, Christopher Davis
+- License: MIT License
+- Version Number: 1.1.1
+
+### Tinyproxy ###
+
+Tinyproxy is a lightweight open source web proxy daemon. It is designed to be fast and yet small. It is useful for cases such as embedded deployments where a full-featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.
+
+Tinyproxy is very useful in a small network setting, where a larger proxy would either be too resource intensive, or a security risk. One of the key features of Tinyproxy is the buffering connection concept. In effect, Tinyproxy will buffer a high speed response from a server, and then relay it to a client at the highest speed the client will accept. This feature greatly reduces the problems with sluggishness on the net.
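+
+As a rough illustration of how little setup Tinyproxy needs, below is a minimal sketch of a `tinyproxy.conf`. The directive names follow Tinyproxy's configuration format, but the port, subnet and client limit are illustrative values only:
+
+    # Port on which Tinyproxy listens for client requests
+    Port 8888
+
+    # Only accept connections from these addresses (illustrative subnet)
+    Allow 127.0.0.1
+    Allow 192.168.0.0/16
+
+    # Keep the number of simultaneous clients modest on small hardware
+    MaxClients 50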
+ +Features: + +- Easy to modify +- Anonymous mode - allows specification of individual HTTP headers that should be allowed through, and which should be blocked +- HTTPS support - Tinyproxy allows forwarding of HTTPS connections without modifying traffic in any way through the CONNECT method +- Remote monitoring - access proxy statistics from afar, letting you know exactly how busy the proxy is +- Load average monitoring - configure software to refuse connections after the server load reaches a certain point +- Access control - configure to only allow connections from certain subnets or IP addresses +- Secure - run without any special privileges, thus minimizing the chance of system compromise +- URL based filtering - allows domain and URL-based black- and whitelisting +- Transparent proxying - configure as a transparent proxy, so that a proxy can be used without any client-side configuration +- Proxy chaining - use an upstream proxy server for outbound connections, instead of direct connections to the target server, creating a so-called proxy chain +- Privacy features - restrict both what data comes to your web browser from the HTTP server (e.g., cookies), and to restrict what data is allowed through from your web browser to the HTTP server (e.g., version information) +- Small footprint - the memory footprint is about 2MB with glibc, and the CPU load increases linearly with the number of simultaneous connections (depending on the speed of the connection). Tinyproxy can be run on an old machine without affecting performance + +- Website: [banu.com/tinyproxy][5] +- Developer: Robert James Kaes and contributors +- License: GNU GPL v2 +- Version Number: 1.8.3 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html + +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://www.squid-cache.org/ +[2]:http://www.privoxy.org/ +[3]:https://www.varnish-cache.org/ +[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/ +[5]:https://banu.com/tinyproxy/ \ No newline at end of file diff --git a/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md b/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md deleted file mode 100644 index c045233630..0000000000 --- a/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md +++ /dev/null @@ -1,46 +0,0 @@ -LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software -================================================================================ -> In a broad-ranging question and answer session, Linus Torvalds, Linux's founder, shared his thoughts on the current state of open source and Linux. - -**SEATTLE** -- [LinuxCon][1] attendees got an early Christmas present when the Wednesday morning "surprise" keynote speaker turned out to be Linux's founder, Linus Torvalds. - -![zemlin-and-torvalds-08192015-1.jpg](http://zdnet2.cbsistatic.com/hub/i/2015/08/19/9951f05a-fedf-4bf4-a4a1-3b4a15458de6/c19c89ded58025eccd090787ba40e803/zemlin-and-torvalds-08192015-1.jpg) - -Jim Zemlin and Linus Torvalds shooting the breeze at LinuxCon in Seattle. 
-- sjvn - -Jim Zemlin, the Linux Foundation's executive director, opened the question and answer session by quoting from a recent article about Linus, "[Torvalds may be the most influential individual economic force][2] of the past 20 years. ... Torvalds has, in effect, been as instrumental in retooling the production lines of the modern economy as Henry Ford was 100 years earlier." - -Torvalds replied, "I don't think I'm all that powerful, but I'm glad to get all the credit for open source." For someone who's arguably been more influential on technology than Bill Gates, Steve Jobs, or Larry Ellison, Torvalds remains amusingly modest. That's probably one reason [Torvalds, who doesn't suffer fools gladly][3], remains the unchallenged leader of Linux. - -It also helps that he doesn't take himself seriously, except when it comes to code quality. Zemlin reminded him that he was also described in the same article as being "5-feet, ho-hum tall with a paunch, ... his body type and gait resemble that of Tux, the penguin mascot of Linux." Torvald's reply was to grin and say "What is this? A roast?" He added that 5'8" was a perfectly good height. - -More seriously, Zemlin asked Torvalds what he thought about the current excitement over containers. Indeed, at times LinuxCon has felt like DockerCon. Torvalds replied, "I'm glad that the kernel is far removed from containers and other buzzwords. We only care about just the kernel. I'm so focused on the kernel I really don't care. I don't get involved in the politics above the kernel and I'm really happy that I don't know." - -Moving on, Zemlin asked Torvalds what he thought about the demand from the Internet of Things (IoT) for an even smaller Linux kernel. "Everyone has always wished for a smaller kernel," Torvalds said. "But, with all the modules it's still tens of MegaBytes in size. It's shocking that it used to fit into a MB. We'd like it to be mean lean, mean IT machine again." - -But, "Torvalds continued, "It's hard to get rid of unnecessary fat. Things tend to grow. Realistically I don't think we can get down to the sizes we were 20 years ago." - -As for security, the next topic, Torvalds said, "I'm at odds with the security community. They tend to see technology as black and white. If it's not security they don't care at all about it." The truth is "security is bugs. Most of the security issues we've had in the kernel hasn't been that big. Most of them have been really stupid and then some clever person takes advantage of it." - -The bottom line is, "We'll never get rid of bugs so security will never be perfect. We do try to be really careful about code. With user space we have to be very strict." But, "Bugs happen and all you can do is mitigate them. Open source is doing fairly well, but anyone who thinks we'll ever be completely secure is foolish." - -Zemlin concluded by asking Torvalds where he saw Linux ten years from now. Torvalds replied that he doesn't look at it this way. "I'm plodding, pedestrian, I look ahead six months, I don't plan 10 years ahead. I think that's insane." - -Sure, "companies plan ten years, and their plans use open source. Their whole process is very forward thinking. But I'm not worried about 10 years ahead. I look to the next release and the release beyond that." - -For Torvalds, who works at home where "the FedEx guy is no longer surprised to find me in my bathrobe at 2 in the afternoon," looking ahead a few months works just fine. 
And so do all the businesses -- both technology-based Amazon, Google, Facebook and more mainstream, WalMart, the New York Stock Exchange, and McDonalds -- that live on Linux every day. - --------------------------------------------------------------------------------- - -via: http://www.zdnet.com/article/linus-torvalds-muses-about-open-source-software/ - -作者:[Steven J. Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ -[1]:http://events.linuxfoundation.org/events/linuxcon-north-america -[2]:http://www.bloomberg.com/news/articles/2015-06-16/the-creator-of-linux-on-the-future-without-him -[3]:http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/ \ No newline at end of file diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md index 3ddf90c560..5fb6a8d4fe 100644 --- a/sources/talk/20150820 Why did you start using Linux.md +++ b/sources/talk/20150820 Why did you start using Linux.md @@ -1,4 +1,3 @@ -KevinSJ translating Why did you start using Linux? ================================================================================ > In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux diff --git a/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md b/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md deleted file mode 100644 index 2c45b6064b..0000000000 --- a/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md +++ /dev/null @@ -1,92 +0,0 @@ -LinuxCon exclusive: Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project -================================================================================ -![](http://images.techhive.com/images/article/2015/08/mark-100608730-primary.idge.jpg) - -Mark Shuttleworth at LinuxCon Credit: Swapnil Bhartiya - -> Mark Shuttleworth, founder of Canonical and Ubuntu, made a surprise visit at LinuxCon. I sat down with him for a video interview and talked about Ubuntu on IBM’s new LinuxONE systems, Canonical’s plans for containers, open source in the enterprise space and much more. - -### You made a surprise entry during the keynote. What brought you to LinuxCon? ### - -**Mark Shuttleworth**: I am here at LinuxCon to support IBM and Canonical in their announcement of Ubuntu on their new Linux-only super-high-end mainframe LinuxONE. These are the biggest machines in the world, purpose-built to run only Linux. And we will be bringing Ubuntu to them, which is a real privilege for us and is going to be incredible for developers. - -![mark selfie](http://images.techhive.com/images/article/2015/08/mark-selfie-100608731-large.idge.jpg) - -Swapnil Bhartiya - -Mark Shuttleworth and Swapnil Bhartiya, mandatory selfie at LinuxCon - -### Only Red Hat and SUSE were supported on it. Why was Ubuntu missing from the mainframe scene? ### - -**Mark**: Ubuntu has always been about developers. 
It has been about enabling the free software platform from where it is collaboratively built to be available at no cost to developers in the world, so they are limited only by their imagination—not by money, not by geography. - -There was an incredible story told today about a 12-year-old kid who started out with Ubuntu; there are incredible stories about people building giant businesses with Ubuntu. And for me, being able to empower people, whether they come from one part of the world or another to express their ideas on free software, is what Ubuntu is all about. It's been a journey for us essentially, going to the platforms those developers care about, and just in the last year, we suddenly saw a flood of requests from companies who run mainframes, who are using Ubuntu for their infrastructure—70% of OpenStack deployments are on Ubuntu. Those same people said, “Look, there is the mainframe, and we like to unleash it and think of it as a region in the cloud.” So when IBM started talking to us, saying that they have this project in the works, it felt like a very natural fit: You are going to be able to take your Ubuntu laptop, build code there and ship it straight to every cloud, every virtualization environment, every bare metal in every architecture including the mainframe, and that's going to be beautiful. - -### Will Canonical be offering support for these systems? ### - -**Mark**: Yes. Ubuntu on z Systems is going to be completely supported. We will make long-term commitments to that. The idea is to bring together scale-out-fast cloud-like workloads, which is really born on Ubuntu; 70% of workloads on Amazon and other public clouds run on Ubuntu. Now you can think of running that on a mainframe if that makes sense to you. - -We are going to provide exactly the same platform that we do on the cloud, and we are going to provide that on the mainframe as well. We are also going to expose it to the OpenStack API so you can consume it on a mainframe with exactly the same tools and exactly the same processes that you would consume on a laptop, or OpenStack or public cloud resources. So all of the things that Ubuntu builds to make your life easy as a developer are going to be available across that full range of platforms and systems, and all of that is commercially supported. - -### Canonical is doing a lot of things: It is into enterprise, and it’s in the consumer space with mobile and desktop. So what is the core focus of Canonical now? ### - -**Mark**: The trick for us is to enable the reuse of specifically the same parts [of our technology] in as many useful ways as possible. So if you look at the work that we do at z Systems, it's absolutely defined by the work that we do on the cloud. We want to deliver exactly the same libraries on exactly the same date for the mainframe as we do for public clouds and for x86, ARM and Power servers today. - -We don't allow Ubuntu or our focus to fragment very dramatically because we don't allow different products managers to find Ubuntu in different ways in different environments. We just want to bring that standard experience that developers love to this new environment. - -Similarly if you look at the work we are doing on IoT [Internet of Things], Snappy Ubuntu is the heart of the phone. It’s the phone without the GUI. So the definitions, the tools, the kernels, the mechanisms are shared across those projects. So we are able to multiply the impact of the work. 
We have an incredible community, and we try to enable the community to do things that they want to do that we can’t do. So that's why we have so many buntus, and it's kind of incredible for me to see what they do with that. - -We also see the community climbing in. We see hundreds of developers working with Snappy for IoT, and we see developers working with Snappy on mobile, for personal computing as convergence becomes real. And, of course, there is the cloud server story: 70% of the world is Ubuntu, so there is a huge audience. We don't have to do all the work that we do; we just have to be open and willing to, kind of, do the core infrastructure and then reuse it as efficiently as possible. - -### Is Snappy a response to Atomic or CoreOS? ### - -**Mark**: Snappy as a project was born four years ago when we started working on the phone, which was long before the CoreOS, long before Atomic. I think the principles of atomicity, transactionality are beautiful, but remember: We needed to build the same things for the phone. And with Snappy, we have the ability to deliver transactional updates to any of these systems—phones, servers and cloud devices. - -Of course, it feels a little different because in order to provide those guarantees, we have to shape the system in such a way that we can guarantee the guarantees. And that's why Snappy is snappy; it's a new thing. It's not based on an old packaging system. Though we will keep both of them: All Snaps for us that Canonical makes, the core snaps that define the OS, are all built from Debian packages. They are two different faces of the same coin for us, and developers will use them as tools. We use the right tools for the job. - -There are couple of key advantages for Snappy over CoreOS and Atomic, and the main one is this: We took the view that we wanted the base idea to be extensible. So with Snappy, the core operating system is tiny. You make all the choices, and you take all the decisions about things you want to bolt on that: you want to bolt on Docker; you want to bolt on Kubernete; you want to bolt on Mesos; you want to bolt on Lattice from Pivotal; you want to bolt on OpenStack. Those are the things you choose to add with Snappy. Whereas with Atomic and CoreOS, it's one blob and you have to do it exactly the way they want you to do it. You have to live with the versions of software and the choices they make. - -Whereas with Snappy, we really preserve this idea of the choices you have got in Ubuntu are now transactionally available on Snappy systems. That makes the core much smaller, and it gives you the choice of different container systems, different container management systems, different cloud infrastructure systems or different apps of every description. I think that's the winning idea. In fullness of time, people will realize that they wanted to make those choices themselves; they just want Canonical to do the work of providing the updates in a really efficient manner. - -### There is so much competition in the container space with Docker, Rocket and many other players. Where will Canonical stand amid this competition? ### - -**Mark**: Canonical is focused on platform tools, and we see things like the Rocket and Docker as things super-useful for developers; we just make sure that those work best on Ubuntu. Docker, for years, ran only Ubuntu because we work very closely with them, and we are glad now that it's available everywhere else. But if you look at the numbers, the vast majority of Docker containers are on Ubuntu. 
Because we work really hard, as developers, you get the best experience with all of these tools on Ubuntu. We don't want to try and control everything, and it’s great for us to have those guys competing. - -I think in the end people will see that there is really two kinds of containers. 1) There are cases where a container is just like a VM machine. It feels like a whole machine, it runs all processes, all the logs and cron jobs are there. It's like a VM, just that it's much cheaper, much lighter, much faster, and that's LXD. 2) And then there would be process containers, which are like Docker or Rocket; they are there to run a specific application very fast. I think we lead the world in general machine container story, which is our hypervisor LXD, and I think Docker leads the story when it comes to applications containers, process containers. And those two work together really beautifully. - -### Microsoft and Canonical are working together on LXD? Can you tell us about this engagement? ### - -Mark: LXD is two things. First, it's an implementation on top of Canonical's work on the kernel so that you can start to create full machine containers on any host. But it's also a REST API. That’s the transitions from LXC to LXD. We got a daemon there so you can talk to the daemon over the network, if it's listening on the network, and says tell me about the containers on that machine, tell me about the file systems on that machine, the networks on that machine, start or stop the container. - -So LXD becomes a distributed hypervisor effectively. Very interestingly, last week Microsoft announced that they like REST API. It is very clean, very simple, very well engineered, and they are going to implement the same API for Windows machines. It's completely cross-platform, which means you will be able to talk to any machine—Linux or Windows. So it gives you very clean and simple APIs to talk about containers on any host on the network. - -Of course, we have led the work in [OpenStack to bind LXD to Nova][1], which is the control system to compute in OpenStack, so that's how we create a whole cloud with OpenStack API with the individual VMs being actually containers, so much denser, much faster, much lighter, much cheaper. - -### Open Source is becoming a norm in the enterprise segment. What do you think is driving the adoption of open source in the enterprise? ### - -**Mark**: The reason why open source has become so popular in the enterprise is because it enables them to go faster. We are all competing at some level, and if you can't make progress because you have to call up some vendor, you can't dig in and help yourself go faster, then you feel frustrated. And given the choice between frustration and at least the ability to dig into a problem, enterprises over time will always choose to give themselves the ability to dig in and help themselves. So that is why open source is phenomenal. - -I think it goes a bit deeper than that. I think people have started to realize as much as we compete, 99% of what we need to do is shared, and there is something meaningful about contributing to something that is shared. As I have seen Ubuntu go from something that developers love, to something that CIOs love that developers love Ubuntu. As that happens, it's not a one-way ticket. They often want to say how can we help contribute to make this whole thing go faster. 
- -We have always seen a curve of complexity, and open source has traditionally been higher up on the curve of complexity and therefore considered threatening or difficult or too uncertain for people who are not comfortable with the complexity. What's wonderful to me is that many open source projects have identified that as a blocker for their own future. So in Ubuntu we have made user experience, design and “making it easy” a first-class goal. We have done the same for OpenStack. With Ubuntu tools for OpenStack anybody can build an OpenStack cloud in an hour, and if you want, that cloud can run itself, scale itself, manage itself, can deal with failures. It becomes something you can just fire up and forget, which also makes it really cheap. It also makes it something that's not a distraction, and so by making open source easier and easier, we are broadening its appeal to consumers and into the enterprise and potentially into the government. - -### How open are governments to open source? Can you tell us about the utilization of open source by governments, especially in the U.S.? ### - -**Mark**: I don't track the usage in government, but part of government utilization in the modern era is the realization that how untrustworthy other governments might be. There is a desire for people to be able to say, “Look, I want to review or check and potentially self-build all the things that I depend on.” That's a really important mission. At the end of the day, some people see this as a game where maybe they can get something out of the other guy. I see it as a game where we can make a level playing field, where everybody gets to compete. I have a very strong interest in making sure that Ubuntu is trustworthy, which means the way we build it, the way we run it, the governance around it is such that people can have confidence in it as an independent thing. - -### You are quite vocal about freedom, privacy and other social issues on Google+. How do you see yourself, your company and Ubuntu playing a role in making the world a better place? ### - -**Mark**: The most important thing for us to do is to build confidence in trusted platforms, platforms that are freely available but also trustworthy. At any given time, there will always be people who can make arguments about why they should have access to something. But we know from history that at the end of the day, due process of law, justice, doesn't depend on the abuse of privacy, abuse of infrastructure, the abuse of data. So I am very strongly of the view that in the fullness of time, all of the different major actors will come to the view that their primary interest is in having something that is conceptually trustworthy. This isn't about what America can steal from Germany or what China can learn in Russia. This is about saying we’re all going to be able to trust our infrastructure; that's a generational journey. But I believe Ubuntu can be right at the center of people's thinking about that. 
- --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2973116/linux/linuxcon-exclusive-mark-shuttleworth-says-snappy-was-born-long-before-coreos-and-the-atomic-project.html - -作者:[Swapnil Bhartiya][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ -[1]:https://wiki.openstack.org/wiki/HypervisorSupportMatrix \ No newline at end of file diff --git a/sources/talk/20150910 The Free Software Foundation--30 years in.md b/sources/talk/20150910 The Free Software Foundation--30 years in.md deleted file mode 100644 index f782b2e876..0000000000 --- a/sources/talk/20150910 The Free Software Foundation--30 years in.md +++ /dev/null @@ -1,149 +0,0 @@ -The Free Software Foundation: 30 years in -================================================================================ -![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_general_openfield.png?itok=tcXpYeHi) - -Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to [my inbox][1]. - -Now, I have to be honest with you all, this column went a little differently than I expected. - -A few weeks ago when thinking what to write, I mused over the notion of a piece about the [Free Software Foundation][2] celebrating its 30 year anniversary and how relevant and important its work is in today's computing climate. - -To add some meat I figured I would interview [John Sullivan][3], executive director of the FSF. My plan was typical of many of my pieces: thread together an interesting narrative and quote pieces of the interview to give it color. - -Well, that all went out the window when John sent me a tremendously detailed, thoughtful, and descriptive interview. I decided therefore to present it in full as the main event, and to add some commentary throughout. Thus, this is quite a long column, but I think it paints a fascinating picture of a fascinating organization. I recommend you grab a cup of something delicious and settle in for a solid read. - -### The sands of change ### - -The Free Software Foundation was founded in 1985. To paint a picture of what computing was like back then, the [Amiga 1000][4] was released, C++ was becoming a dominant language, [Aldus PageMaker][5] was announced, and networking was just starting to grow. Oh, and that year [Careless Whisper][6] by Wham! was a major hit. - -Things have changed a lot in 30 years. Back in 1985 the FSF was primarily focused on building free pieces of software that were primarily useful to nerdy computer people. These days we have software, services, social networks, and more to consider. - -I first wanted to get a sense of what John feels are most prominent risks to software freedom today. - -"I think there's widespread agreement on the biggest risks for computer user freedom today, but maybe not on the names for them." - -"The first is what we might as well just call 'tiny computers everywhere.' The free software movement has succeeded to the point where laptops, desktops, and servers can run fully free operating systems doing anything users of proprietary systems can do. There are still a few holes, but they'll be closed. 
The challenge that remains in this area is to cut through the billion dollar marketing budgets and legal regimes working against us to actually get the systems into users hands." - -"However, we have a serious problem on the set of computers whose primary common trait is that they are very small. Even though a car is not especially small, the computers in it are, so I include that form factor in this category, along with phones, tablets, glasses, watches, and so on. While these computers often have a basis in free software—for example, using the kernel Linux along with other free software like Android or GNU—their primary uses are to run proprietary applications and be shims for services that replace local computing with computing done on a server over which the user has no control. Since these devices serve vital functions, with some being primary means of communication for huge populations, some sitting very close to our bodies and our actual vital functions, some bearing responsibility for our physical safety, it is imperative that they run fully free systems under their users' control. Right now, they don't." - -John feels the risk here is not just the platforms and form factors, but the services integrates into them. - -"The services many of these devices talk to are the second major threat we face. It does us little good booting into a free system if we do our actual work and entertainment on companies' servers running software we have no access to at all. The point of free software is that we can see, modify, and share code. The existence of those freedoms even for nontechnical users provides a shield that prevents companies from controlling us. None of these freedoms exist for users of Facebook or Salesforce or Google Docs. Even more worrisome, we see a trend where people are accepting proprietary restrictions imposed on their local machines in order to have access to certain services. Browsers—including Firefox—are now automatically installing a DRM plugin in order to appease Netflix and other video giants. We need to work harder at developing free software decentralized replacements for media distribution that can actually empower users, artists, and user-artists, and for other services as well. For Facebook we have GNU social, pump.io, Diaspora, Movim, and others. For Salesforce, we have CiviCRM. For Google Docs, we have Etherpad. For media, we have GNU MediaGoblin. But all of these projects need more help, and many services don't have any replacement contenders yet." - -It is interesting that John mentions finding free software equivalents for common applications and services today. The FSF maintains a list of "High Priority Projects" that are designed to fill this gap. Unfortunately the capabilities of these projects varies tremendously and in an age where social media is so prominent, the software is only part of the problem: the real challenge is getting people to use it. - -This all begs the question of where the FSF fit in today's modern computing world. I am a fan of the FSF. I think the work they do is valuable and I contribute financially to support it too. They are an important organization for building an open computing culture, but all organizations need to grow, adjust, and adapt, particularly ones in the technology space. - -I wanted to get a better sense of what the FSF is doing today that it wasn't doing at it's inception. - -"We're speaking to a much larger audience than we were 30 years ago, and to a much broader audience. 
It's no longer just hackers and developers and researchers that need to know about free software. Everyone using a computer does, and it's quickly becoming the case that everyone uses a computer." - -John went on to provide some examples of these efforts. - -"We're doing coordinated public advocacy campaigns on issues of concern to the free software movement. Earlier in our history, we expressed opinions on these things, and took action on a handful, but in the last ten years we've put more emphasis on formulating and carrying out coherent campaigns. We've made especially significant noise in the area of Digital Restrictions Management (DRM) with Defective by Design, which I believe played a role in getting iTunes music off DRM (now of course, Apple is bringing DRM back with Apple Music). We've made attractive and useful introductory materials for people new to free software, like our [User Liberation animated video][7] and our [Email Self-Defense Guide][8]. - -We're also endorsing hardware that [respects users' freedoms][9]. Hardware distributors whose devices have been certified by the FSF to contain and require only free software can display a logo saying so. Expanding the base of free software users and the free software movement has two parts: convincing people to care, and then making it possible for them to act on that. Through this initiative, we encourage manufacturers and distributors to do the right thing, and we make it easy for users who have started to care about free software to buy what they need without suffering through hours and hours of research. We've certified a home WiFi router, 3D printers, laptops, and USB WiFi adapters, with more on the way. - -We're collecting all of the free software we can find in our [Free Software Directory][10]. We still have a long way to go on this—we're at only about 15,500 packages right now, and we can imagine many improvements to the design and function of the site—but I think this resource has great potential for helping users find the free software they need, especially users who aren't yet using a full GNU/Linux system. With the dangers inherent in downloading random programs off the Internet, there is a definite need for a curated collection like this. It also happens to provide a wealth of machine-readable data of use to researchers. - -We're acting as the fiscal sponsor for several specific free software projects, enabling them to raise funds for development. Most of these projects are part of GNU (which we continue to provide many kinds of infrastructure for), but we also sponsor [Replicant][11], a fully free fork of Android designed to give users the free-est mobile devices currently possible. - -We're helping developers use free software licenses properly, and we're following up on complaints about companies that aren't following the terms of the GPL. We help them fix their mistakes and distribute properly. RMS was in fact doing similar work with the precursors of the GPL very early on, but it's now an ongoing part of our work. - -Most of the specific things the FSF does now it wasn't doing 30 years ago, but the vision is little changed from the original paperwork—we aim to create a world where everything users want to do on any computer can be done using free software; a world where users control their computers and not the other way around." - -### A cult of personality ### - -There is little doubt in anyone's minds about the value the FSF brings. 
As John just highlighted, its efforts span not just the creation and licensing of free software, but also recognizing, certifying, and advocating a culture of freedom in technology. - -The head of the FSF is the inimitable Richard M. Stallman, commonly referred to as RMS. - -RMS is a curious character. He has demonstrated an unbelievable level of commitment to his ideas, philosophy, and ethical devotion to freedom in software. - -While he is sometimes mocked online for his social awkwardness, be it things said in his speeches, his bizarre travel requirements, or other sometimes cringeworthy moments, RMS's perspectives on software and freedom are generally rock-solid. He takes a remarkably consistent approach to his perspectives and he is clearly a careful thinker about not just his own thoughts but the wider movement he is leading. My only criticism is that I think from time to time he somewhat over-eggs the pudding with the voracity of his words. But hey, given his importance in our world, I would rather take an extra egg than no pudding for anyone. O.K., I get that the whole pudding thing here was strained... - -So RMS is a key part of the FSF, but the organization is also much more than that. There are employees, a board, and many contributors. I was curious to see how much of a role RMS plays these days in the FSF. John shared this with me. - -"RMS is the FSF's President, and does that work without receiving a salary from the FSF. He continues his grueling global speaking schedule, advocating for free software and computer user freedom in dozens of countries each year. In the course of that, he meets with government officials as well as local activists connected with all varieties of social movements. He also raises funds for the FSF and inspires many people to volunteer." - -"In between engagements, he does deep thinking on issues facing the free software movement, and anticipates new challenges. Often this leads to new articles—he wrote a 3-part series for Wired earlier this year about free software and free hardware designs—or new ideas communicated to the FSF's staff as the basis for future projects." - -As we delved into the cult of personality, I wanted to tap John's perspectives on how wide the free software movement has grown. - -I remember being at the [Open Source Think Tank][12] (an event that brings together execs from various open source organizations) and there was a case study where attendees were asked to recommend license choice for a particular project. The vast majority of break-out groups recommended the Apache Software License (APL) over the GNU Public License (GPL). - -This stuck in my mind as since then I have noticed that many companies seem to have opted for open licenses other than the GPL. I was curious to see if John had noticed a trend towards the APL as opposed to the GPL. - -"Has there been? I'm not so sure. I gave a presentation at FOSDEM a few years ago called 'Is Copyleft Being Framed?' that showed some of the problems with the supposed data behind claims of shifts in license adoption. I'll be publishing an article soon on this, but here's some of the major problems: - - -- Free software license choices do not exist in a vacuum. The number of people choosing proprietary software licenses also needs to be considered in order to draw the kinds of conclusions that people want to draw. 
I find it much more likely that lax permissive license choices (such as the Apache License or 3-clause BSD) are trading off with proprietary license choices, rather than with the GPL. -- License counters often, ironically, don't publish the software they use to collect that data as free software. That means we can't inspect their methods or reproduce their results. Some people are now publishing the code they use, but certainly any that don't should be completely disregarded. Science has rules. -- What counts as a thing with a license? Are we really counting an app under the APL that makes funny noises as 1:1 with GNU Emacs under GPLv3? If not, how do we decide which things to treat as equals? Are we only looking at software that actually works? Are we making sure not to double- and triple- count programs that exist on multiple hosting sites, and what about ports for different OSes? - -The question is interesting to ponder, but every conclusion I've seen so far has been extremely premature in light of the actual data. I'd much rather see a survey of developers asking about why they chose particular licenses for their projects than any more of these attempts to programmatically ascertain the license of programs and then ascribe human intentions on to patterns in that data. - -Copyleft is as vital as it ever was. Permissively licensed software is still free software and on-face a good thing, but it is contingent and needs an accompanying strong social commitment to not incorporate it in proprietary software. If free software's major long-term impact is enabling businesses to more efficiently make products that restrict us, then we have achieved nothing for computer user freedom." - -### Rising to new challenges ### - -30 years is an impressive time for any organization to be around, and particularly one with such important goals that span so many different industries, professions, governments, and cultures. - -As I started to wrap up the interview I wanted to get a better sense of what the FSF's primary function is today, 30 years after the mission started. - -"I think the FSF is in a very interesting position of both being a steady rock and actively pushing the envelope." - -"We have core documents like the [Free Software Definition][13], the [GNU General Public License][14], and the [list we maintain of free and nonfree software licenses][15], which have been keystones in the construction of the world of free software we have today. People place a great deal of trust in us to stay true to the principles outlined in those documents, and to apply them correctly and wisely in our assessments of new products or practices in computing. In this role, we hold the ladder for others to climb. As a 501(c)(3) charity held legally accountable to the public interest, and about 85% funded by individuals, we have the right structure for this." - -"But we also push the envelope. We take on challenges that others say are too hard. I guess that means we also build ladders? Or maybe I should stop with the metaphors." - -While John may not be great with metaphors (like I am one to talk), the FSF is great at setting a mission and demonstrating a devout commitment to it. This mission starts with a belief that free software should be everywhere. - -"We are not satisfied with the idea that you can get a laptop that works with free software except for a few components. 
We're not satisfied that you can have a tablet that runs a lot of free software, and just uses proprietary software to communicate with networks and to accelerate video and to take pictures and to check in on your flight and to call an Über and to.. Well, we are happy about some such developments for sure, but we are also unhappy about the suggestion that we should be fully content with them. Any proprietary software on a system is both an injustice to the user and inherently a threat to users' security. These almost-free things can be stepping stones on the way to a free world, but only if we keep our feet moving." - -In the early years of the FSF, we actually had to get a free operating system written. This has now been done by GNU and Linux and many collaborators, although there is always more software to write and bugs to fix. So while the FSF does still sponsor free software development in specific areas, there are thankfully many other organizations also doing this." - -A key part of the challenge John is referring to is getting the right hardware into the hands of the right people. - -"What we have been focusing on now are the challenges I highlighted in the first question. We are in desperate need of hardware in several different areas that fully supports free software. We have been talking a lot at the FSF about what we can do to address this, and I expect us to be making some significant moves to both increase our support for some of the projects already out there—as we having been doing to some extent through our Respects Your Freedom certification program—and possibly to launch some projects of our own. The same goes for the network service problem. I think we need to tackle them together, because having full control over the mobile components has great potential for changing how we relate to services, and decentralizing more and more services will in turn shape the mobile components." - -I hope folks will support the FSF as we work to grow and tackle these challenges. Hardware is expensive and difficult, as is making usable, decentralized, federated replacements for network services. We're going to need the resources and creativity of a lot of people. But, 30 years ago, a community rallied around RMS and the concept of copyleft to write an entire operating system. I've spent my last 12 years at the FSF because I believe we can rise to the new challenges in the same way." - -### Final thoughts ### - -In reading John's thoughtful responses to my questions, and in knowing various FSF members, the one sense that resonates for me is the sheer level of passion that is alive and kicking in the FSF. This is not an organization that has got bored or disillusioned with its mission. Its passion and commitment is as voracious as it has ever been. - -While I don't always agree with the FSF and I sometimes think its approach is a little one-dimensional at times, I have been and will continue to be a huge fan and supporter of its work. The FSF represent the ethical heartbeat of much of the free software and open source work that happens across the world. It represents a world view that is pretty hard to the left, but I believe its passion and conviction helps to bring people further to the right a little closer to the left too. - -Sure, RMS can be odd, somewhat hardline, and a little sensational, but he is precisely the kind of leader that is valuable in a movement that encapsulates a mixture of technology, ethics, and culture. 
We need an RMS in much the same way we need a Torvalds, a Shuttleworth, a Whitehurst, and a Zemlin. These different people bring together mixture of perspectives that ultimately maps to technology that can be adaptable to almost any set of use cases, ethics, and ambitions. - -So, in closing, I want to thank the FSF for its tremendous efforts, and I wish the FSF and its fearless leaders, one Richard M. Stallman and one John Sullivan, another 30 years of fighting the good fight. Go get 'em! - -> This article is part of Jono Bacon's Six Degrees column, where he shares his thoughts and perspectives on culture, communities, and trends in open source. - --------------------------------------------------------------------------------- - -via: http://opensource.com/business/15/9/free-software-foundation-30-years - -作者:[Jono Bacon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://opensource.com/users/jonobacon -[1]:Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to my inbox. -[2]:http://www.fsf.org/ -[3]:http://twitter.com/johns_fsf/ -[4]:https://en.wikipedia.org/wiki/Amiga_1000 -[5]:https://en.wikipedia.org/wiki/Adobe_PageMaker -[6]:https://www.youtube.com/watch?v=izGwDsrQ1eQ -[7]:http://fsf.org/ -[8]:http://emailselfdefense.fsf.org/ -[9]:http://fsf.org/ryf -[10]:http://directory.fsf.org/ -[11]:http://www.replicant.us/ -[12]:http://www.osthinktank.com/ -[13]:http://www.fsf.org/about/what-is-free-software -[14]:http://www.gnu.org/licenses/gpl-3.0.en.html -[15]:http://www.gnu.org/licenses/licenses.en.html \ No newline at end of file diff --git a/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md b/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md deleted file mode 100644 index f47352ed26..0000000000 --- a/sources/talk/20150916 Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice.md +++ /dev/null @@ -1,30 +0,0 @@ -Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice -================================================================================ ->**LibreItalia's Italo Vignoli [reports][1] that the Italian Ministry of Defense is about to migrate to the LibreOffice open-source software for productivity and adopt the Open Document Format (ODF), while moving away from proprietary software products.** - -The movement comes in the form of a [collaboration][1] between Italy's Ministry of Defense and the LibreItalia Association. Sonia Montegiove, President of the LibreItalia Association, and Ruggiero Di Biase, Rear Admiral and General Executive Manager of Automated Information Systems of the Ministry of Defense in Italy signed an agreement for a collaboration to adopt the LibreOffice office suite in all of the Ministry's offices. - -While the LibreItalia non-profit organization promises to help the Italian Ministry of Defense with trainers for their offices across the country, the Ministry will start the implementation of the LibreOffice software on October 2015 with online training courses for their staff. The entire transition process is expected to be completed by the end of year 2016\. An Italian law lets officials find open source software alternatives to well-known commercial software. 
- -"Under the agreement, the Italian Ministry of Defense will develop educational content for a series of online training courses on LibreOffice, which will be released to the community under Creative Commons, while the partners, LibreItalia, will manage voluntarily the communication and training of trainers in the Ministry," says Italo Vignoli, Honorary President of LibreItalia. - -### The Ministry of Defense will adopt the Open Document Format (ODF) - -The initiative will allow the Italian Ministry of Defense to be independent from proprietary software applications, which are aimed at individual productivity, and adopt open source document format standards like Open Document Format (ODF), which is used by default in the LibreOffice office suite. The project follows similar movements already made by governments of other European countries, including United Kingdom, France, Spain, Germany, and Holland. - -It would appear that numerous other public institutions all over Italy are using open source alternatives, including the Italian Region Emilia Romagna, Galliera Hospital in Genoa, Macerata, Cremona, Trento and Bolzano, Perugia, the municipalities of Bologna, ASL 5 of Veneto, Piacenza and Reggio Emilia, and many others. AGID (Agency for Digital Italy) welcomes this project and hopes that other public institutions will do the same. - - --------------------------------------------------------------------------------- - -via: http://news.softpedia.com/news/italy-s-ministry-of-defense-to-drop-microsoft-office-in-favor-of-libreoffice-491850.shtml - -作者:[Marius Nestor][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://news.softpedia.com/editors/browse/marius-nestor -[1]:http://www.libreitalia.it/accordo-di-collaborazione-tra-associazione-libreitalia-onlus-e-difesa-per-ladozione-del-prodotto-libreoffice-quale-pacchetto-di-produttivita-open-source-per-loffice-automation/ -[2]:http://www.libreitalia.it/chi-siamo/ diff --git a/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md b/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md deleted file mode 100644 index 2c147fb3e3..0000000000 --- a/sources/talk/20150929 A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch.md +++ /dev/null @@ -1,49 +0,0 @@ -A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch -================================================================================ -> Canonical aims to 'seduce and reassure' those unfamiliar with the OS by making a good first impression - -**The Ubuntu installer is set to undergo a dramatic makeover.** - -Ubuntu will modernise its out-of-the-box experience (OOBE) to be easier and quicker to complete, look more ‘seductive’ to new users, and better present the Ubuntu brand through its design. - -Ubiquity, the current Ubuntu installer, has largely remained unchanged since its [introduction back in 2010][1]. - -### First Impressions Are Everything ### - -Since the first thing most users see when trying Ubuntu for the first time is an installer (or set-up wizard, depending on device) the design team feel it’s “one of the most important categories of software usability”. 
- -“It essentially says how easy your software is to use, as well as introducing the user into your brand through visual design and tone of voice, which can convey familiarity and trust within your product.” - -Canonical’s new OOBE designs show a striking departure from the current look of the Ubiquity installer used by the Ubuntu desktop, and presents a refined approach to the way mobile users ‘set up’ a new Ubuntu Phone. - -![Old design (left) and the new proposed design](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/desktop-2.jpg) - -Old design (left) and the new proposed design - -Detailing the designs in [new blog post][2], the Canonical Design team say the aim of the revamp is to create a consistent out-of-the-box experience across Ubuntu devices. - -To do this it groups together “common first experiences found on the mobile, tablet and desktop” and unifies the steps and screens between each, something they say moves the OS closer to “achieving a seamless convergent platform.” - -![New Ubuntu installer on desktop/tablet (left) and phone](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/Convergence.jpg) - -New Ubuntu installer on desktop/tablet (left) and phone - -Implementation of the new ‘OOBE’ has already begun’ according to Canonical, though as of writing there’s no firm word on when a revamped installer may land on either desktop or phone images. - -With the march to ‘desktop’ convergence now in full swing, and a(nother) stack of design changes set to hit the mobile build in lieu of the first Ubuntu Phone that ‘transforms’ in to a PC, chances are you won’t have to wait too long to try it out. - -**What do you think of the designs? How would you go about improving the Ubuntu set-up experience? Let us know in the comments below.** - --------------------------------------------------------------------------------- - -via: http://www.omgubuntu.co.uk/2015/09/new-look-ubuntu-installer-coming-soon - -作者:[Joey-Elijah Sneddon][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/117485690627814051450/?rel=author -[1]:http://www.omgubuntu.co.uk/2010/09/ubuntu-10-10s-installer-slideshow-oozes-class -[2]:http://design.canonical.com/wp-content/uploads/Convergence.jpg \ No newline at end of file diff --git a/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md b/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md index f45f901b3d..be5c7b9b2e 100644 --- a/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md +++ b/sources/talk/20151012 The Brief History Of Aix HP-UX Solaris BSD And LINUX.md @@ -1,3 +1,4 @@ +zpl1025 translating The Brief History Of Aix, HP-UX, Solaris, BSD, And LINUX ================================================================================ ![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png) @@ -98,4 +99,4 @@ via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/ [a]:http://www.unixmen.com/author/pirat9/ [1]:http://www.unixmen.com/ken-thompson-unix-systems-father/ -[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ \ No newline at end of file +[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/ diff --git a/sources/talk/20151019 Gaming On Linux--All You Need To Know.md b/sources/talk/20151019 Gaming On Linux--All You Need To Know.md index 
9b0df50160..525d08838b 100644 --- a/sources/talk/20151019 Gaming On Linux--All You Need To Know.md +++ b/sources/talk/20151019 Gaming On Linux--All You Need To Know.md @@ -1,3 +1,5 @@ +213edu Translating + Gaming On Linux: All You Need To Know ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Gaming-on-Linux.jpeg) @@ -200,4 +202,4 @@ via: http://itsfoss.com/linux-gaming-guide/ [24]:http://freegamer.blogspot.fr/ [25]:http://linuxgamenews.com/ [26]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ -[27]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ \ No newline at end of file +[27]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ diff --git a/sources/talk/20151023 Ubuntu 15.10 Codenamed Wily Werewolf Review.md b/sources/talk/20151023 Ubuntu 15.10 Codenamed Wily Werewolf Review.md deleted file mode 100644 index e232beb30a..0000000000 --- a/sources/talk/20151023 Ubuntu 15.10 Codenamed Wily Werewolf Review.md +++ /dev/null @@ -1,68 +0,0 @@ -Ubuntu 15.10, Codenamed Wily Werewolf, Review -================================================================================ -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Ubuntu-15.10-791x445.png) - -The problem we have with reviewing Ubuntu on any occasion, is readers consistently expect to read of a revolutionary new release, every 6 months. If you’re expecting Ubuntu 15.10 to be just that, then you may want to click out of this review right now. It’s important to clarify that this is nothing negative towards 15.10 as a release, but it is a maintenance release and not a release which purports to introduce a great deal of new software. - -With that opening disclaimer out of the way, let’s take a look at what 15.10 does offer. - -### Linux kernel 4.2 ### - -The biggest change you will find with Ubuntu 15.10 is the kernel branch has been upgraded to **Linux 4.2**. - -This is long overdue for Ubuntu. It feels like it has been lagging behind other distributions by sticking with the 3.x branch of Linux for the entirety of the 15.04 cycle. - -If you’re going to be installing Ubuntu 15.10 on new hardware, then you will benefit greatly from the Linux kernel upgrade to 4.x branch as there is loads of updates which directly improve performance on new hardware. Support for AMDGPU kernel DRM is included, which is a boon for owners of recent Radeon graphics cards. The latest iteration of the driver will reside alongside the current Radeon DRM drivers, which was already in the kernel in addition to the usual open-source driver offerings. - -Support for Intel Broxton is also included in Linux 4.2, albeit Ubuntu 15.10 users are probably going to get nothing out of this update, yet it’s still worthy of a mention we think. There are also some erroneous updates for Skylake CPU’s. Finally, there is a host of code updates and fixes for Ext4 filesystems. - -That pretty much rounds out the Linux kernel 4.2 updates. So what else is new? Let’s take a closer look at the software that you may be more familiar with and get more excited about. - -### Software ### - -LibreOffice has been upgraded to 5.0.1.2, a major update for LibreOffice users. Firefox on the version that we tested is sitting at 41.0.2. By the time you read this, it will most-likely be updated again and you may see a newer version be pushed out through the Ubuntu Repositories. 
- -On the desktop front, a vanilla Ubuntu installation will see you running Unity 7.3.2 while GNOME sits at 3.8. On the KDE end, a Plasma 5 desktop will see you running version 5.4.2. For the alternative desktop-environments, XFCE has been upgraded to the latest revision, 4.12 while the version of MATE includes 1.10. - -### User Experience/Screenshots ### - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/1.png) - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/2.png) - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/3.png) - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/4.png) - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/5.png) - -![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/6.png) - -### Conclusion ### - -Ubuntu 15.10 as a operating system for Review is pretty lackluster. There’s nothing new as such and there’s nothing we can really say that is going to change your opinion from its predecessor, 15.04. Therefore, we recommend you to upgrade either out of habit and according to your regular upgrade schedule rather than out of a specific necessity for a specific feature of this release. Because there is really nothing that could possibly differentiate it from the older, yet still very stable 15.04 release. But if you’re going to stick with 15.04 for a little longer, we do recommend that you look at [upgrading the kernel to the latest 4.2 branch][2]. It is worth it. - -If you really want a reason to upgrade? Linux kernel 4.2 would be our sole reason for taking Ubuntu 15.10 into consideration. - -### Looking Ahead ### - -What we really look forward to is the release of Ubuntu 16.04. We have been promised over and over again for several releases that Mir will be the default display server included in Ubuntu. We still see releases pushed out that rely on X.org. It has resulted in us adopting a “yeah right” attitude as we have become accustomed to the usual delay announcements. - -We are hopeful that Mir Developers can push out a working version in time for the release of 16.04 next year. As precaution though, we urge you to not get too excited because it may very well not happen. - -It remains much the same with Unity 8. It’s most certainly a possibility, but we can’t guarantee that it will be included in 16.04, yet we remain hopeful. - -As we’ve mentioned for this release, there’s nothing really ground-breaking with this release. In fact, it has been much the same story for the last couple of releases of Ubuntu Linux. It is in dire need of a distribution-wide reboot. Developers and Ubuntu users alike are positive that Mir and Unity 8 will be the two primary packages that may just provide the popular, yet ailing, distribution the reboot that it needs. 
- --------------------------------------------------------------------------------- - -via: http://www.unixmen.com/ubuntu-15-10-codenamed-wily-werewolf-review/ - -作者:[Chris Jones][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.unixmen.com/author/chris/ -[1]:http://www.unixmen.com/how-to-install-linux-kernel-4-2-3/ \ No newline at end of file diff --git a/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md b/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md new file mode 100644 index 0000000000..1e37549646 --- /dev/null +++ b/sources/talk/20151105 Linus Torvalds Lambasts Open Source Programmers over Insecure Code.md @@ -0,0 +1,35 @@ +Linus Torvalds Lambasts Open Source Programmers over Insecure Code +================================================================================ +![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg) + +Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers—as well the importance of security for Linux kernel code. + +Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile. + +On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. "This is just sh*t." + +He went on to call the coders "just incompetent and out to lunch." + +What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand. + +Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way. + +Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality. + +But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones. + +Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image. 
+ +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html +[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate \ No newline at end of file diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index 501cb4a8dc..ae8df117ef 100644 --- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -1,4 +1,3 @@ -Translating by ZTinoZ Installation Guide for Puppet on Ubuntu 15.04 ================================================================================ Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes. diff --git a/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md b/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md deleted file mode 100644 index b4014bb009..0000000000 --- a/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md +++ /dev/null @@ -1,233 +0,0 @@ -How to Setup Zephyr Test Management Tool on CentOS 7.x -================================================================================ -Test Management encompasses anything and everything that you need to do as testers. Test management tools are used to store information on how testing is to be done, plan testing activities and report the status of quality assurance activities. 
So in this article we will illustrate you about the setup of Zephyr test management tool that includes everything needed to manage the test process can save testers hassle of installing separate applications that are necessary for the testing process. Once you have done with its setup you will be able to track bugs, defects and allows the project tasks for collaboration with your team as you can easily share and access the data across multiple project teams for communication and collaboration throughout the testing process. - -### Requirements for Zephyr ### - -We are going to install and run Zephyr under the following set of its minimum resources. Resources can be enhanced as per your infrastructure requirements. We will be installing Zephyr on the CentOS-7 64-bit while its binary distributions are available for almost all Linux operating systems. - -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Zephyr test management tool | | |
|---|---|---|
| Linux OS | CentOS Linux 7 (Core), 64-bit | |
| Packages | JDK 7 or above, Oracle JDK 6 update | No prior Tomcat or MySQL installed |
| RAM | 4 GB | Preferred 8 GB |
| CPU | 2.0 GHz or higher | |
| Hard Disk | 30 GB, at least 5 GB must be free | |
- -You must have super user (root) access to perform the installation process for Zephyr and make sure that you have properly configured yout network with static IP address and its default set of ports must be available and allowed in the firewall where as the Port 80/443, 8005, 8009, 8010 will used by tomcat and Port 443 or 2099 will used within Zephyr by flex for the RTMP protocol. - -### Install Java JDK 7 ### - -Java JDK 7 is the basic requirement for the installation of Zephyr, if its not already installed in your operating system then do the following to install Java and setup its JAVA_HOME environment variables to be properly configured. - -Let’s issue the below commands to install Java JDK 7. - - [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1 - ----------- - - [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64 - -Once your java is installed including its required dependencies, run the following commands to set its JAVA_HOME environment variables. - - [root@centos-007 ~]# export JAVA_HOME=/usr/java/default - [root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH - -Now check the version of java to verify its installation with following command. - - [root@centos-007 ~]# java –version - ----------- - - java version "1.7.0_79" - OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14) - OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode) - -The output shows that we we have successfully installed OpenJDK Java verion 1.7.0_79. - -### Install MySQL 5.6.X ### - -If you have other MySQLs on the machine then it is recommended to remove them and -install this version on top of them or upgrade their schemas to what is specified. As this specific major/minor (5.6.X) version of MySQL is required with the root username as a prerequisite of Zephyr. - -To install MySQL 5.6 on CentOS-7.1 lets do the following steps: - -Download the rpm package, which will create a yum repo file for MySQL Server installation. - - [root@centos-007 ~]# yum install wget - [root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm - -Now Install this downloaded rpm package by using rpm command. - - [root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm - -After the installation of this package you will get two new yum repo related to MySQL. Then by using yum command, now we will install MySQL Server 5.6 and all dependencies will be installed itself. - - [root@centos-007 ~]# yum install mysql-server - -Once the installation process completes, run the following commands to start mysqld services and check its status whether its active or not. - - [root@centos-007 ~]# service mysqld start - [root@centos-007 ~]# service mysqld status - -On fresh installation of MySQL Server. The MySQL root user password is blank. -For good security practice, we should reset the password MySQL root user. - -Connect to MySQL using the auto-generated empty password and change the -root password. - - [root@centos-007 ~]# mysql - mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password'); - mysql> flush privileges; - mysql> quit; - -Now we need to configure the required database parameters in the default configuration file of MySQL. Let's open its file located in "/etc/" folder and update it as follow. 
- - [root@centos-007 ~]# vi /etc/my.cnf - ----------- - - [mysqld] - datadir=/var/lib/mysql - socket=/var/lib/mysql/mysql.sock - symbolic-links=0 - - sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES - max_allowed_packet=150M - max_connections=600 - default-storage-engine=INNODB - character-set-server=utf8 - collation-server=utf8_unicode_ci - - [mysqld_safe] - log-error=/var/log/mysqld.log - pid-file=/var/run/mysqld/mysqld.pid - default-storage-engine=INNODB - character-set-server=utf8 - collation-server=utf8_unicode_ci - - [mysql] - max_allowed_packet = 150M - [mysqldump] - quick - -Save the changes made in the configuration file and restart mysql services. - - [root@centos-007 ~]# service mysqld restart - -### Download Zephyr Installation Package ### - -We done with installation of required packages necessary to install Zephyr. Now we need to get the binary distributed package of Zephyr and its license key. Go to official download link of Zephyr that is http://download.yourzephyr.com/linux/download.php give your email ID and click to download. - -![Zephyr Download](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png) - -Then and confirm your mentioned Email Address and you will get the Zephyr Download link and its License Key link. So click on the provided links and choose the appropriate version of your Operating system to download the binary installation package and its license file to the server. - -We have placed it in the home directory and modify its permissions to make it executable. - -![Zephyr Binary](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png) - -### Start Zephyr Installation and Configuration ### - -Now we are ready to start the installation of Zephyr by executing its binary installation script as below. - - [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh –c - -Once you run the above command, it will check for the Java environment variables to be properly setup and configured. If there's some mis-configuration you might the error like. - - testing JVM in /usr ... - Starting Installer ... - Error : Either JDK is not found at expected locations or JDK version is mismatched. - Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher. - -Once you have properly configured your Java, then it will start installation of Zephyr and asks to press "o" to proceed and "c" to cancel the setup. Let's type "o" and press "Enter" key to start installation. - -![install zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png) - -The next option is to review all the requirements for the Zephyr setup and Press "Enter" to move forward to next option. - -![zephyr requirements](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png) - -To accept the license agreement type "1" and Press Enter. - - I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter] - -Here we need to choose the appropriate destination location where we want to install the zephyr and choose the default ports, if you want to choose other than default ports, you are free to mention here. - -![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png) - -Then customize the mysql database parameters and give the right paths to the configurations file. You might the an error at this point as shown below. - - Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M). 
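If this message comes up, it is worth confirming what limits the running server is actually using before changing anything else; a quick check, assuming the root password set earlier:

    mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
    mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

Both values should match what was placed in /etc/my.cnf above (note that max_allowed_packet is reported in bytes), and mysqld must be restarted after any change to that file.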
- -To overcome this error make sure that you have configure the "max_connection" and "max_allowed_packet" limits properly in the mysql configuration file. So confirm these settings, connect to mysql server and run the commands as shown. - -![mysql connections](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png) - -Once you have configured your mysql database properly, it will extract the configuration files to complete the setup. - -![mysql customization](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png) - -The installation process completes with successful installation of Zephyr 4.7 on your computer. To Launch Zephyr Desktop type "y" to finish Zephyr installation. - -![launch zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png) - -### Launch Zephyr Desktop ### - -Open your web browser to launch Zephyr Desktop with your localhost IP adress and you will be direted to the Zephyr Desktop. - - http://your_server_IP/zephyr/desktop/ - -![Zephyr Desktop](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png) - -From your Zephyr Dashboard click on the "Test Manager" and login with the dault user name and password that is "test.manager". - -![Test Manage Login](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png) - -Once you are loged in you will be able to configure your administrative settings as shown. So choose the settings you wish to put according to your environment. - -![Test Manage Administration](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png) - -Save the settings after you have done with your administrative settings, similarly do the settings of resources management and project setup and start using Zephyr as a complete set of your testing management tool. You check and edit the status of your administrative settings from the Department Dashboard Management as shown. - -![zephyr dashboard](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard.png) - -### Conclusion ### - -Cheers! we have done with the complete setup of Zephyr installation setup on Centos 7.1. We hope you are now much aware of Zephyr Test management tool which offer the prospect of streamlining the testing process and allow quick access to data analysis, collaborative tools and easy communication across multiple project teams. Feel free to comment us if you find any difficulty while you are doing it in your environment. 
- --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/ - -作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file diff --git a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md index bc7ebee015..101e86ecd0 100644 --- a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md +++ b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md @@ -1,5 +1,3 @@ -Translating by Ping - How to switch from NetworkManager to systemd-networkd on Linux ================================================================================ In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on. As of today, most major Linux distributions have adopted systemd as a default init system. diff --git a/sources/tech/20151012 How To Use iPhone In Antergos Linux.md b/sources/tech/20151012 How To Use iPhone In Antergos Linux.md deleted file mode 100644 index 0186a214d4..0000000000 --- a/sources/tech/20151012 How To Use iPhone In Antergos Linux.md +++ /dev/null @@ -1,81 +0,0 @@ -How To Use iPhone In Antergos Linux -================================================================================ -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-Antergos-Arch-Linux.jpg) - -Troubles with iPhone and Arch Linux? iPhone and Linux never really go along very well. In this tutorial, I am going to show you how can you use iPhone in Antergos Linux. Since Antergos is based on Arch Linux, the same steps should be applicable to other Arch based Linux distros such as Manjaro Linux. - -So, recently I bought me a brand new iPhone 6S and when I connected it to Antergos Linux to copy some pictures, it was not detected at all. I could see that iPhone was being charged and I had allowed iPhone to ‘trust the computer’ but there was nothing at all detected. I tried to run dmseg but there was no trace of iPhone or Apple there. What is funny that [libimobiledevice][1] was installed as well, which always fixes [iPhone mount issue in Ubuntu][2]. - -I am going to show you how I am using iPhone 6S, running on iOS 9 in Antergos. It goes more in command line way, but I presume since you are in Arch Linux zone, you are not scared of terminal (and you should not be as well). - -### Mount iPhone in Arch Linux ### - -**Step 1**: Unplug your iPhone, if it is already plugged in. - -**Step 2**: Now, open a terminal and use the following command to install some necessary packages. Don’t worry if they are already installed. - - sudo pacman -Sy ifuse usbmuxd libplist libimobiledevice - -**Step 3**: Once these programs and libraries are installed, reboot your system. - - sudo reboot - -**Step 4**: Make a directory where you want the iPhone to be mounted. I would suggest making a directory named iPhone in your home directory. - - mkdir ~/iPhone - -**Step 5**: Unlock your phone and plug it in. If asked to trust the computer, allow it. 
- -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-2.jpeg) - -**Step 6**: Verify that iPhone is recognized by the system this time. - - dmesg | grep -i iphone - -This should show you some result with iPhone and Apple in it. Something like this: - - [ 31.003392] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached - [ 40.950883] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected - [ 47.471897] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached - [ 82.967116] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected - [ 106.735932] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached - -This means that iPhone has been successfully recognized by Antergos/Arch Linux. - -**Step 7**: When everything is set, it’s time to mount the iPhone. Use the command below: - - ifuse ~/iPhone - -Since we created the mount directory in home, it won’t need root access and you should also be able to see it easily in your home directory. If the command is successful, you won’t see any output. - -Go back to Files and see if the iPhone is recognized or not. For me, it looks like this in Antergos: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux.jpeg) - -You can access the files in this directory. Copy files from it or to it. - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-1.jpeg) - -**Step 8**: When you want to unmount it, you should use this command: - - sudo umount ~/iPhone - -### Worked for you? ### - -I know that it is not very convenient and ideally, iPhone should be recognized as any other USB storage device but things don’t always behave as they are expected to. Good thing is that a little DIY hack can always fix the issue and it gives a sense of achievement (at least to me). That being said, I must say Antergos should work to fix this issue so that iPhone can be mounted by default. - -Did this trick work for you? If you have questions or suggestions, feel free to drop a comment. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/iphone-antergos-linux/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://www.libimobiledevice.org/ -[2]:http://itsfoss.com/mount-iphone-ipad-ios-7-ubuntu-13-10/ \ No newline at end of file diff --git a/sources/tech/20151027 How To Install Retro Terminal In Linux.md b/sources/tech/20151027 How To Install Retro Terminal In Linux.md new file mode 100644 index 0000000000..4de3e6d002 --- /dev/null +++ b/sources/tech/20151027 How To Install Retro Terminal In Linux.md @@ -0,0 +1,76 @@ +FSSlc translating + +How To Install Retro Terminal In Linux +================================================================================ +![Retro Terminal in Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Retro-Terminal-Linux.jpeg) + +Nostalgic about the past? Get a slice of the past by **installing retro terminal app** [cool-retro-term][1] which, as the name suggests, is both cool and retro at the same. + +Do you remember the time when there were CRT monitors everywhere and the terminal screen used to flicker? You don’t need to be old to have witnessed it. 
If you watch movies set in early 90’s, you’ll see plenty of CRT monitors with green/B&W command prompt. It has a geeky aura which makes it cooler. + +If you are tired of terminal appearance and you need something cool and ‘new’, cool-retro-term will give you a vintage terminal appearance to relive the past. You also can change its color, animation kind, and add some effect to it. + +Here are few screenshots of the different looks of cool-retro-term: + +![Retro Terminal](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Retro-Terminal-Linux-1.jpeg) + +![Retro Terminal Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Retro-Terminal-Linux-2.jpeg) + +![Vintage Terminal](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Retro-Terminal-Linux-3.jpeg) + +### Install cool-retro-term in Ubuntu based Linux distributions ### + +To install cool-retro-term in Ubuntu based Linux distributions, such as Linux Mint, elementary OS, Linux Lite etc, use the PPA below: + + sudo add-apt-repository ppa:noobslab/apps + sudo apt-get update + sudo apt-get install cool-retro-term + +### Install cool-retro-term in Arch based Linux distributions ### + +Installing cool-retro-term in Arch based Linux distributions such as [Antergos][2] and [Manjaro][3], use the following command: + + sudo pacman -S cool-retro-term + +### Install cool-retro-term from source code ### + +For installing this application from source code, you need to install a number of dependencies first. Some of the know dependencies in Ubuntu based distributions are: + + sudo apt-get install git build-essential qmlscene qt5-qmake qt5-default qtdeclarative5-dev qtdeclarative5-controls-plugin qtdeclarative5-qtquick2-plugin libqt5qml-graphicaleffects qtdeclarative5-dialogs-plugin qtdeclarative5-localstorage-plugin qtdeclarative5-window-plugin + +Known dependencies for other distributions can be found on the [github of cool-retro-term][4]. + +Now use commands below to compile the program: + + git clone https://github.com/Swordfish90/cool-retro-term.git + cd cool-retro-term + qmake && make + +Once the program is compiled, you can run it with this command: + + ./cool-retro-term + +If you like to have this app in program menu for quick access so that you won’t have to run it manually each time with the commands, you can use the command below: + + sudo cp cool-retro-term.desktop /usr/share/applications + +You can learn some more terminal tricks here. 
Enjoy the vintage terminal in Linux :) + +With inputs from: [Abhishek Prakash][5] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/cool-retro-term/ + +作者:[Hossein Heydari][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/hossein/ +[1]:https://github.com/Swordfish90/cool-retro-term +[2]:http://itsfoss.com/tag/antergos/ +[3]:https://manjaro.github.io/ +[4]:https://github.com/Swordfish90/cool-retro-term +[5]:http://itsfoss.com/author/abhishek/ diff --git a/sources/tech/20151028 10 Tips for 10x Application Performance.md b/sources/tech/20151028 10 Tips for 10x Application Performance.md new file mode 100644 index 0000000000..a899284450 --- /dev/null +++ b/sources/tech/20151028 10 Tips for 10x Application Performance.md @@ -0,0 +1,277 @@ +10 Tips for 10x Application Performance +================================================================================ +Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors. + +For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted the fact that that more than half of site owners surveyed said they lost revenue or customers due to poor application performance. + +How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow. + +Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way. + +### Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server ### + +If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.) + +Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. 
The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O. + +Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network. + +Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks. + +Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced. + +Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as: + +- **Load balancing** (see [Tip #2][2]) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all. +- **Caching static files** (see [Tip #3][3]) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster. +- **Securing your site** – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected. + +NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support. + +![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png) + +### Tip #2: Add a Load Balancer ### + +Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes. + +A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. 
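As a rough illustration of what Tips 1 and 2 look like together in an NGINX configuration (the upstream name and server addresses below are placeholders, not values from this article), a reverse proxy that balances requests across two application servers with the least-connections method might be declared like this:

    http {
        # Two application servers behind the proxy; least_conn picks the one
        # with the fewest active connections (omit it for default round robin)
        upstream app_servers {
            least_conn;
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }

        server {
            listen 80;

            location / {
                # Forward client requests over the fast internal network
                proxy_pass http://app_servers;
            }
        }
    }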
NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence. + +Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use. + +Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging. + +The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1/x and HTTP/2 use by clients, and caching for static files. + +NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol. + +### Tip #3: Cache Static and Dynamic Content ### + +Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. + +There are two different types of caching to consider: + +- **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk. +- **Caching of dynamic content**. Many Web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements. + +If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content. + +There are three main techniques for caching content generated by web applications: + +- **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time. +- **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval. +- **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded. + +Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. 
And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times. + +Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally. + +As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this may strongly improves your site or application’s uptime. + +NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring. + +For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide. + +**Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, )such as completed transactions or sales. + +### Tip #4: Compress Data ### + +Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more. + +Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections. + +That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, Javascript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time. + +If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data. + +Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your services, you can serve the compressed .gz version directly using the gzip_static directive. 
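Before moving on, here is a tentative sketch that pulls together the caching and compression directives named in Tips 3 and 4. The cache path, zone name, sizes, and the one-second validity are illustrative assumptions rather than recommendations from this post, and gzip_static assumes the static gzip module is compiled in:

    http {
        # Tip 4: compress text responses, and serve pre-built .gz files when they exist
        gzip on;
        gzip_types text/css application/javascript application/json;
        gzip_static on;

        # Tip 3: a small cache for briefly holding generated pages
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
                         max_size=1g inactive=10m;

        server {
            listen 80;

            location / {
                proxy_cache microcache;
                # Hold successful responses for one second ("micro-caching")
                proxy_cache_valid 200 1s;
                # Serve stale content if the backend is busy or down
                proxy_cache_use_stale error timeout updating;
                # Upstream group from the earlier load balancing sketch
                proxy_pass http://app_servers;
            }
        }
    }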
+ +### Tip #5: Optimize SSL/TLS ### + +The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings. + +Despite rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons: + +1. The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit. +1. Ongoing overhead from encrypting data on the server and decrypting it on the client. + +To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the [next section][27]) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS. + +The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses [OpenSSL][28], running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX [SSL performance][29] is well-documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption. + +In addition, see [this blog post][30] for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are: + +- **Session caching**. Uses the [ssl_session_cache][31] directive to cache the parameters used when securing each new connection with SSL/TLS. +- **Session tickets or IDs**. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking. +- **OCSP stapling**. Cuts handshaking time by caching SSL/TLS certificate information. + +NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decyption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections. + +### Tip #6: Implement HTTP/2 or SPDY ### + +For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY makes a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view. + +Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2. + +The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time. + +By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. 
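To make Tips 5 and 6 a little more concrete, a server block that terminates SSL/TLS, enables HTTP/2, and applies the session cache and OCSP stapling optimizations might look roughly like the sketch below; the certificate paths, host name, and cache size are placeholder assumptions:

    server {
        # One listener handles HTTPS and HTTP/2 (NGINX 1.9.5 or later)
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Reuse SSL/TLS session parameters to cut handshake cost
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;

        # Cache certificate status responses (OCSP stapling)
        ssl_stapling        on;
        ssl_stapling_verify on;

        location / {
            proxy_pass http://app_servers;
        }
    }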
The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection. + +The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x. + +When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34]. + +![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png) + +As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015. + +Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better. + +### Tip #7: Update Software Versions ### + +One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware. + +Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep on top of tuning optimizations, bug fixes, and security alerts when you stay on top of software updates. + +Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015. + +NGINX users can start by moving to the [[latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can. + +### Tip #8: Tune Linux for Performance ### + +Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance. + +Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux: + +- **Backlog queue**. 
If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop. +- **File descriptors**. NGINX uses up to two file descriptors for each connection. If your system is serving a lot of connections, you might need to increase sys.fs.file_max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load. +- **Ephemeral ports**. When used as a proxy, NGINX creates temporary (“ephemeral”) ports for each upstream server. You can increase the range of port values, set by net.ipv4.ip_local_port_range, to increase the number of ports available. You can also reduce the timeout before an inactive port gets reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover. + +For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat! + +### Tip #9: Tune Your Web Server for Performance ### + +Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include: + +- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up. If you add the **flush=time** parameter, the buffer contents are also be written to disk after the specified amount of time. +- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it. +- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests. +- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand new connections. For more information about keepalives, refer to this [blog post][41]. +- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX,the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help prevent against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. 
For connections to upstream servers, use the max_conns parameter to the server directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached. +- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system. +- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the reuseport parameter on the listen directive. +- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop. In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44]. + +![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png) + +**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back. + +See this [blog post][45] for more details on tuning NGINX. + +### Tip #10: Monitor Live Activity to Resolve Issues and Bottlenecks ### + +The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure. + +Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them. + +Monitoring can catch several different kinds of issues. They include: + +- A server is down. +- A server is limping, dropping connections. +- A server is suffering from a high proportion of cache misses. +- A server is not sending correct content. + +A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic. 
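+
+As a small, hedged illustration of what delivery-side monitoring can look like in configuration terms, the sketch below exposes basic connection and request counters from NGINX itself. It assumes the open source stub_status module is compiled in and that port 8080 and the /basic_status path are free for this purpose on your server; adjust both to fit your environment.
+
+    server {
+        listen 8080;
+
+        location /basic_status {
+            stub_status on;      # active connections, accepts, handled, requests
+            allow 127.0.0.1;     # restrict the counters to local monitoring agents
+            deny all;
+        }
+    }
+
+These counters only cover the delivery side; pairing them with an application performance monitoring tool gives you the end-to-end view described above.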
+ +To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching. + +![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png) + +### Conclusion: Seeing 10x Performance Improvement ### + +The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve 10x performance improvement for your own applications? + +To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary: + +- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial achievements available for overall performance. +- **Caching dynamic and static content**. If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well. +- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two. +- **Optimizing SSL/TLS**. Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements. +- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance. +- **Tuning Linux and web server software (such as NGINX)**. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49]. 
+ +We hope you try out these techniques for yourself. We want to hear the kind of application performance improvements you’re able to achieve. Share your results in the comments below, or tweet your story with the hash tags #NGINX and #webperf! + +### Resources for Internet Statistics ### + +[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50] + +[Load Impact – How Bad Performance Impacts Ecommerce Sales][51] + +[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52] + +[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53] + +-------------------------------------------------------------------------------- + +via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io + +作者:[Floyd Smith][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.nginx.com/blog/author/floyd/ +[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server +[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2 +[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3 +[4]:https://www.nginx.com/products/application-health-checks/ +[5]:https://www.nginx.com/solutions/load-balancing/ +[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1 +[7]:https://www.nginx.com/resources/admin-guide/load-balancer/ +[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/ +[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx +[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/ +[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/ +[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/ +[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/ +[14]:https://www.nginx.com/resources/admin-guide/load-balancer/ +[15]:https://www.nginx.com/products/ +[16]:https://www.nginx.com/blog/nginx-caching-guide/ +[17]:https://www.nginx.com/products/content-caching-nginx-plus/ +[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge +[19]:https://www.nginx.com/products/live-activity-monitoring/ +[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache +[21]:https://www.nginx.com/resources/admin-guide/content-caching +[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/ +[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 +[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/ +[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html +[26]:https://www.digicert.com/ssl.htm +[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6 +[28]:http://openssl.org/ +[29]:https://www.nginx.com/blog/nginx-ssl-performance/ +[30]:https://www.nginx.com/blog/improve-seo-https-nginx/ 
+[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
+[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
+[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
+[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
+[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
+[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
+[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
+[38]:http://nginx.org/en/download.html
+[39]:https://www.nginx.com/products/
+[40]:https://www.nginx.com/blog/tuning-nginx/
+[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
+[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
+[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
+[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
+[45]:https://www.nginx.com/blog/tuning-nginx/
+[46]:https://www.nginx.com/products/application-health-checks/
+[47]:https://www.nginx.com/products/session-persistence/#session-draining
+[48]:https://www.nginx.com/products/live-activity-monitoring/
+[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
+[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
+[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
+[52]:https://blog.kissmetrics.com/loading-time/?wide=1
+[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/
\ No newline at end of file
diff --git a/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md b/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md
new file mode 100644
index 0000000000..3d898340d8
--- /dev/null
+++ b/sources/tech/20151104 How to Install Pure-FTPd with TLS on FreeBSD 10.2.md
@@ -0,0 +1,154 @@
+How to Install Pure-FTPd with TLS on FreeBSD 10.2
+================================================================================
+FTP, or File Transfer Protocol, is a standard application-layer network protocol used to transfer files between a client and a server over a TCP network such as the internet, after the user logs in to the FTP server. FTP has been around for a long time, much longer than P2P programs or the World Wide Web, and to this day it remains a primary and very popular way of sharing files over the internet. Plain FTP transmits data in clear text, but combined with SSL/TLS it can protect the username and password and encrypt the content.
+
+Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. Pure-FTPd was created by Frank Denis in 2001, based on Troll-FTPd, and is still actively developed by a team led by Denis.
+
+In this tutorial we will cover the installation and configuration of "**Pure-FTPd**" on the Unix-like operating system FreeBSD 10.2.
+
+### Step 1 - Update system ###
+
+The first thing you must do is update the FreeBSD base system. Connect to your server with SSH and then run the commands below as sudo/root:
+
+    freebsd-update fetch
+    freebsd-update install
+
+### Step 2 - Install Pure-FTPd ###
+
+You can install Pure-FTPd from ports, but in this tutorial we will install it from the FreeBSD package repository with the "**pkg**" command. So, now let's install it:
+
+    pkg install pure-ftpd
+
+Once the installation is finished, enable pure-ftpd at boot time with the sysrc command below:
+
+    sysrc pureftpd_enable=yes
+
+### Step 3 - Configure Pure-FTPd ###
+
+The configuration file for Pure-FTPd is located in the directory "/usr/local/etc/". Go to that directory and copy the sample configuration to "**pure-ftpd.conf**".
+
+    cd /usr/local/etc/
+    cp pure-ftpd.conf.sample pure-ftpd.conf
+
+Now edit the configuration file with the nano editor:
+
+    nano -c pure-ftpd.conf
+
+Note: the -c option shows line numbers in nano.
+
+Go to line 59 and change the value of "VerboseLog" to "**yes**". This option allows you, as administrator, to log every command used by the users.
+
+    VerboseLog yes
+
+Now look at line 126, "PureDB", for the virtual-users configuration. Virtual users are a simple mechanism for storing a list of users, with their password, name, uid, directory, etc. It works just like /etc/passwd, but it is a different file used only for FTP. In this tutorial we will store the list of users in the files "**/usr/local/etc/pureftpd.passwd**" and "**/usr/local/etc/pureftpd.pdb**". Please uncomment that line and change the path to "/usr/local/etc/pureftpd.pdb".
+
+    PureDB /usr/local/etc/pureftpd.pdb
+
+Next, uncomment line 336, "**CreateHomeDir**". This option makes it easy to add virtual users by automatically creating their home directories if they are missing.
+
+    CreateHomeDir yes
+
+Save and exit.
+
+Next, start pure-ftpd with the service command:
+
+    service pure-ftpd start
+
+### Step 4 - Adding New Users ###
+
+At this point the FTP server starts without errors, but you cannot log in to it yet, because anonymous users are disabled in the default pure-ftpd configuration. We need to create new users with a home directory and a password for login.
+
+One thing you must do before you add a new pure-ftpd virtual user is to create a system user for it. Let's create a new system user "**vftp**" whose default group is the same as the username, with the home directory "**/home/vftp/**".
+
+    pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
+        -c "Virtual User Pure-FTPd" -m
+
+Now you can add a new user for the FTP server with the "**pure-pw**" command. As an example, we will create a new user named "**akari**"; see the command below:
+
+    pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
+    Password: TYPE YOUR PASSWORD
+
+That command creates the user "**akari**" and stores the data in the file "**/usr/local/etc/pureftpd.passwd**", not in the /etc/passwd file, which means you can easily create FTP-only accounts without messing up your system accounts.
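+
+If you want to double-check what was just stored for the virtual user, pure-pw can print the entry back. This is only a quick sanity check (the exact fields shown may vary between Pure-FTPd versions):
+
+    pure-pw show akari
+
+You can also run "pure-pw list" to see every virtual user defined so far.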
+
+Next, you must generate the PureDB user database with this command:
+
+    pure-pw mkdb
+
+Now restart the pure-ftpd service and try to connect with the user "akari":
+
+    service pure-ftpd restart
+
+Try to connect with the user akari:
+
+    ftp SERVERIP
+
+![FTP Connect user akari](http://blog.linoxide.com/wp-content/uploads/2015/10/FTP-Connect-user-akari.png)
+
+**NOTE:**
+
+If you want to add another user, you can use the "**pure-pw**" command again. And if you want to delete a user, you can use this:
+
+    pure-pw userdel useryouwanttodelete
+    pure-pw mkdb
+
+### Step 5 - Add SSL/TLS to Pure-FTPd ###
+
+Pure-FTPd supports encryption using TLS security mechanisms. To enable TLS/SSL, make sure the OpenSSL library is already installed on your FreeBSD system.
+
+Now you must generate a new "**self-signed certificate**" in the directory "**/etc/ssl/private**". Before you generate the certificate, create a new directory there called "private".
+
+    cd /etc/ssl/
+    mkdir private
+    cd private/
+
+Now generate the "self-signed certificate" with the openssl command below:
+
+    openssl req -x509 -nodes -newkey rsa:2048 -sha256 -keyout \
+        /etc/ssl/private/pure-ftpd.pem \
+        -out /etc/ssl/private/pure-ftpd.pem
+
+FILL ALL WITH YOUR PERSONAL INFO.
+
+![Generate Certificate pem](http://blog.linoxide.com/wp-content/uploads/2015/10/Generate-Certificate-pem.png)
+
+Next, change the certificate permissions:
+
+    chmod 600 /etc/ssl/private/*.pem
+
+Once the certificate is generated, edit the pure-ftpd configuration file:
+
+    nano -c /usr/local/etc/pure-ftpd.conf
+
+Uncomment line **423** to enable TLS:
+
+    TLS 1
+
+And uncomment line **439** to set the certificate file path:
+
+    CertFile /etc/ssl/private/pure-ftpd.pem
+
+Save and exit, then restart the pure-ftpd service:
+
+    service pure-ftpd restart
+
+Now let's test that Pure-FTPd works with TLS/SSL. Here I use "**FileZilla**" to connect to the FTP server with the user "**akari**" that we created.
+
+![Pure-FTPd with TLS SUpport](http://blog.linoxide.com/wp-content/uploads/2015/10/Pure-FTPd-with-TLS-SUpport.png)
+
+Pure-FTPd with TLS is running successfully on FreeBSD 10.2.
+
+### Conclusion ###
+
+FTP, or File Transfer Protocol, is a standard protocol used to transfer files between users and a server. One of the best lightweight and secure FTP server packages is Pure-FTPd. It is secure and supports TLS/SSL encryption. Pure-FTPd is easy to install and configure, you can manage users with its virtual user support, and that makes it easy for a sysadmin to manage users on an FTP server with many accounts.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/install-pure-ftpd-tls-freebsd-10-2/
+
+作者:[Arul][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arulm/
\ No newline at end of file
diff --git a/sources/tech/20151104 How to Install Redis Server on CentOS 7.md b/sources/tech/20151104 How to Install Redis Server on CentOS 7.md
new file mode 100644
index 0000000000..6cb66e4f3e
--- /dev/null
+++ b/sources/tech/20151104 How to Install Redis Server on CentOS 7.md
@@ -0,0 +1,236 @@
+How to Install Redis Server on CentOS 7
+================================================================================
+Hi everyone, today Redis is the subject of our article: we are going to install it on CentOS 7. We will build the source files, install the binaries, and create the configuration files. After installing its components, we will set its configuration as well as some operating system parameters to make it more reliable and faster.
+
+![Running Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)
+
+Redis server
+
+Redis is an open source, multi-platform data store written in ANSI C that serves datasets directly from memory, achieving extremely high performance. It supports various programming languages, including Lua, C, Java, Python, Perl, PHP and many others. It is based on simplicity, about 30k lines of code that do "few" things, but do them well. Although it works in memory, persistence is available, and it has fairly reasonable support for high availability and clustering, which helps keep your data safe.
+
+### Building Redis ###
+
+There is no official RPM package available, so we need to build it from source; to do this you will need to install Make and GCC.
+
+Install the GNU Compiler Collection and Make with yum if they are not already installed:
+
+    yum install gcc make
+
+Download the tarball from the [Redis download page][1].
+
+    curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz
+
+Extract the tarball contents:
+
+    tar zxvf redis-3.0.4.tar.gz
+
+Enter the Redis directory we have just extracted:
+
+    cd redis-3.0.4
+
+Use Make to build the source files:
+
+    make
+
+### Install ###
+
+Enter the src directory:
+
+    cd src
+
+Copy the Redis server and client to /usr/local/bin:
+
+    cp redis-server redis-cli /usr/local/bin
+
+It is also a good idea to copy the sentinel, benchmark and check tools:
+
+    cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin
+
+Create the Redis config directory:
+
+    mkdir /etc/redis
+
+Create a working and data directory under /var/lib/redis:
+
+    mkdir -p /var/lib/redis/6379
+
+#### System parameters ####
+
+For Redis to work correctly you need to set some kernel options.
+
+Set vm.overcommit_memory to 1, which means always; this avoids the risk of data being truncated. Take a look [here][2] for more.
+
+    sysctl -w vm.overcommit_memory=1
+
+Change the maximum number of backlog connections to a value higher than the tcp-backlog option in redis.conf, which defaults to 511. You can find more on sysctl-based IP networking tuning on the [kernel.org][3] website.
+
+    sysctl -w net.core.somaxconn=512
+
+Disable transparent huge pages support, which is known to cause latency and memory access issues with Redis.
+
+    echo never > /sys/kernel/mm/transparent_hugepage/enabled
+
+### redis.conf ###
+
+redis.conf is the Redis configuration file; however, here we name the file 6379.conf, where the number matches the network port Redis listens on. This naming scheme is recommended if you are going to run more than one Redis instance.
+
+Copy the sample redis.conf to **/etc/redis/6379.conf**.
+
+    cp redis.conf /etc/redis/6379.conf
+
+Now edit the file and set some of its parameters.
+
+    vi /etc/redis/6379.conf
+
+#### daemonize ####
+
+Set daemonize to no; systemd needs Redis to run in the foreground, otherwise Redis will suddenly die.
+
+    daemonize no
+
+#### pidfile ####
+
+Set the pidfile to redis_6379.pid under /var/run.
+
+    pidfile /var/run/redis_6379.pid
+
+#### port ####
+
+Change the network port if you are not going to use the default:
+
+    port 6379
+
+#### loglevel ####
+
+Set your loglevel:
+
+    loglevel notice
+
+#### logfile ####
+
+Set the logfile to /var/log/redis_6379.log
+
+    logfile /var/log/redis_6379.log
+
+#### dir ####
+
+Set the directory to /var/lib/redis/6379
+
+    dir /var/lib/redis/6379
+
+### Security ###
+
+Here are some actions you can take to strengthen security.
+
+#### Unix sockets ####
+
+In many cases, the client application resides on the same machine as the server, so there is no need to listen on network sockets. If this is the case, you may want to use unix sockets instead; for this you need to set the **port** option to 0, and then enable unix sockets with the following options.
+
+Set the path to the socket file:
+
+    unixsocket /tmp/redis.sock
+
+Set restricted permissions on the socket file:
+
+    unixsocketperm 700
+
+Now, to get access with redis-cli, use the -s flag pointing to the socket file:
+
+    redis-cli -s /tmp/redis.sock
+
+#### requirepass ####
+
+You may need remote access; if so, you should set a password, which will be required before any operation.
+
+    requirepass "bTFBx1NYYWRMTUEyNHhsCg"
+
+#### rename-command ####
+
+Imagine the output of the next command. Yes, it will dump the configuration of the server, so you should deny access to this kind of information whenever possible.
+
+    CONFIG GET *
+
+You can restrict, or even disable, this and other commands by using **rename-command**. You must provide a command name and a replacement. To disable a command, set the replacement string to "" (blank); this is more secure as it will prevent someone from guessing the command name.
+
+    rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
+    rename-command FLUSHALL ""
+    rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"
+
+![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)
+
+Access through unix sockets with password and command changes
+
+#### Snapshots ####
+
+By default Redis periodically dumps its dataset to **dump.rdb** in the data directory we set. You can configure how often the rdb file is updated with the save directives: the first parameter is a timeframe in seconds and the second is the number of changes performed on the data.
+
+Every 15 minutes if at least 1 key changed:
+
+    save 900 1
+
+Every 5 minutes if at least 10 keys changed:
+
+    save 300 10
+
+Every minute if at least 10000 keys changed:
+
+    save 60 10000
+
+The **/var/lib/redis/6379/dump.rdb** file contains a dump of the in-memory dataset as of the last save. Since Redis creates a temporary file and then replaces the original file, there is no risk of corruption and you can always copy it directly without fear.
+
+### Starting at boot ###
+
+You can use systemd to add Redis to the system startup.
+
+Copy the sample init script to /etc/init.d; note the port number in the script name:
+
+    cp utils/redis_init_script /etc/init.d/redis_6379
+
+We are going to use systemd, so create a unit file named redis_6379.service under **/etc/systemd/system**:
+
+    vi /etc/systemd/system/redis_6379.service
+
+Put in this content; try man systemd.service for details:
+
+    [Unit]
+    Description=Redis on port 6379
+
+    [Service]
+    Type=forking
+    ExecStart=/etc/init.d/redis_6379 start
+    ExecStop=/etc/init.d/redis_6379 stop
+
+    [Install]
+    WantedBy=multi-user.target
+
+Now add the memory overcommit and maximum backlog options we have set before to the **/etc/sysctl.conf** file.
+
+    vm.overcommit_memory = 1
+
+    net.core.somaxconn=512
+
+For transparent huge pages support there is no sysctl directive, so you can put the command at the end of /etc/rc.local:
+
+    echo never > /sys/kernel/mm/transparent_hugepage/enabled
+
+### Conclusion ###
+
+That's enough to get started: with these settings you will be able to deploy a Redis server for many simpler scenarios, but there are many more options in redis.conf for more complex environments. In some cases, you may use [replication][4] and [Sentinel][5] to provide high availability, or [split the data][6] across servers and create a cluster. Thanks for reading!
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/storage/install-redis-server-centos-7/
+
+作者:[Carlos Alberto][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/carlosal/
+[1]:http://redis.io/download
+[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
+[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
+[4]:http://redis.io/topics/replication
+[5]:http://redis.io/topics/sentinel
+[6]:http://redis.io/topics/partitioning
\ No newline at end of file
diff --git a/sources/tech/20151104 How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04.md b/sources/tech/20151104 How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04.md
new file mode 100644
index 0000000000..83fc1c3f30
--- /dev/null
+++ b/sources/tech/20151104 How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04.md
@@ -0,0 +1,122 @@
+How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04
+================================================================================
+Hello and welcome to today's article on SQLite, the most widely deployed SQL database engine in the world. It comes with zero configuration, which means no setup or administration is needed. SQLite is a public-domain software package that provides a relational database management system, or RDBMS, used to store user-defined records in large tables. In addition to data storage and management, the database engine processes complex query commands that combine data from multiple tables to generate reports and data summaries.
+
+SQLite is very small and lightweight and does not require a separate server process or system to operate. It is available on UNIX, Linux, Mac OS X, Android, iOS and Windows, and is used in various software applications such as Opera, Ruby on Rails, Adobe applications, Mozilla Firefox, Google Chrome and Skype.
+
+### 1) Basic Requirements: ###
+
+There are no complex requirements for the installation of SQLite, as it supports all major platforms.
+
+So, log in to your Ubuntu server with sudo or root credentials using your CLI or Secure Shell. Then update your system so that your operating system is up to date with the latest packages.
+
+In Ubuntu, the below command is used for a system update.
+
+    # apt-get update
+
+If you are deploying SQLite on a fresh Ubuntu system, then make sure that you have installed some basic system management utilities like wget, make, unzip and gcc.
+
+To install the wget, make and gcc packages on Ubuntu, use the below command, then press "Y" to allow and proceed with the installation of these packages.
+
+    # apt-get install wget make gcc
+
+### 2) Download SQLite ###
+
+To download the latest package of SQLite, you can refer to their official [SQLite Download Page][1] as shown below.
+
+![SQLite download](http://blog.linoxide.com/wp-content/uploads/2015/10/Selection_014.png)
+
+You can copy the link of its source package and download it on the Ubuntu server using the wget utility command.
+
+    # wget https://www.sqlite.org/2015/sqlite-autoconf-3090100.tar.gz
+
+![wget SQLite](http://blog.linoxide.com/wp-content/uploads/2015/10/23.png)
+
+After the download is complete, extract the package and change your current directory to the extracted SQLite folder using the below command as shown.
+
+    # tar -zxvf sqlite-autoconf-3090100.tar.gz
+
+### 3) Installing SQLite ###
+
+Now we are going to install and configure the SQLite package that we downloaded. To compile and install SQLite on Ubuntu, run the configure script within the same directory where you extracted the SQLite package, as shown below.
+
+    root@ubuntu-15:~/sqlite-autoconf-3090100# ./configure --prefix=/usr/local
+
+![SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/35.png)
+
+Once the package configuration is done under the mentioned prefix, run the make command below to compile the package.
+
+    root@ubuntu-15:~/sqlite-autoconf-3090100# make
+    source='sqlite3.c' object='sqlite3.lo' libtool=yes \
+    DEPDIR=.deps depmode=none /bin/bash ./depcomp \
+    /bin/bash ./libtool --tag=CC --mode=compile gcc -DPACKAGE_NAME=\"sqlite\" -DPACKAGE_TARNAME=\"sqlite\" -DPACKAGE_VERSION=\"3.9.1\" -DPACKAGE_STRING=\"sqlite\ 3.9.1\" -DPACKAGE_BUGREPORT=\"http://www.sqlite.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"sqlite\" -DVERSION=\"3.9.1\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_FDATASYNC=1 -DHAVE_USLEEP=1 -DHAVE_LOCALTIME_R=1 -DHAVE_GMTIME_R=1 -DHAVE_DECL_STRERROR_R=1 -DHAVE_STRERROR_R=1 -DHAVE_POSIX_FALLOCATE=1 -I. -D_REENTRANT=1 -DSQLITE_THREADSAFE=1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -g -O2 -c -o sqlite3.lo sqlite3.c
+
+After running the make command, complete the installation of SQLite on Ubuntu by running the 'make install' command as shown below.
+
+    # make install
+
+![SQLite Make Install](http://blog.linoxide.com/wp-content/uploads/2015/10/44.png)
+
+### 4) Testing SQLite Installation ###
+
+To confirm the successful installation of SQLite 3.9, run the below command in your command line interface.
+
+    # sqlite3
+
+You will see the SQLite version after running the above command, as shown.
+
+![Testing SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/53.png)
+
+### 5) Using SQLite ###
+
+SQLite is very handy to use. To get detailed information about its usage, simply run the below command in the SQLite console.
+
+    sqlite> .help
+
+Here is the list of all its available commands, with descriptions that will help you start using SQLite.
+
+![SQLite Help](http://blog.linoxide.com/wp-content/uploads/2015/10/62.png)
+
+Now in this last section, we use a few SQLite commands to create a new database using the SQLite3 command line interface.
+
+To create a new database, run the below command.
+
+    # sqlite3 test.db
+
+To create a table within the new database, run the below command.
+
+    sqlite> create table memos(text, priority INTEGER);
+
+After creating the table, insert some data using the following commands.
+
+    sqlite> insert into memos values('deliver project description', 15);
+    sqlite> insert into memos values('writing new articles', 100);
+
+To view the inserted data from the table, run the below command.
+
+    sqlite> select * from memos;
+    deliver project description|15
+    writing new articles|100
+
+To exit from sqlite3, type the below command.
+
+    sqlite> .exit
+
+![Using SQLite3](http://blog.linoxide.com/wp-content/uploads/2015/10/73.png)
+
+### Conclusion ###
+
+In this article you learned how to install the latest version of SQLite, 3.9.1, which brings the JSON1 support recently introduced in the 3.9.0 release. It is an amazing library that gets embedded inside the applications that use it, keeping resource usage efficient and light. We hope you find this article helpful; feel free to get back to us if you run into any difficulty.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-sqlite-json-ubuntu-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
+[1]:https://www.sqlite.org/download.html
\ No newline at end of file
diff --git a/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md b/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md
new file mode 100644
index 0000000000..821937390a
--- /dev/null
+++ b/sources/tech/20151104 How to Setup Pfsense Firewall and Basic Configuration.md
@@ -0,0 +1,266 @@
+How to Setup Pfsense Firewall and Basic Configuration
+================================================================================
+In this article our focus is the Pfsense setup and basic configuration, with an overview of the features available in this security-focused FreeBSD distribution. In this tutorial we will run the setup wizard for basic firewall settings and give a detailed overview of the services. After the [installation process][1], the following snapshot shows the IP addresses of the WAN/LAN interfaces and the different options for managing the Pfsense firewall.
+
+![options](http://blog.linoxide.com/wp-content/uploads/2015/08/options.png)
+
+After setup, the following window appears, showing the URL for configuring Pfsense.
+
+![URL for gui](http://blog.linoxide.com/wp-content/uploads/2015/08/login_pfsense.png)
+
+Open the above URL in the browser and log in with username **admin** and password **pfsense**.
+
+![login_username_password](http://blog.linoxide.com/wp-content/uploads/2015/08/login_username_password.png)
+
+After a successful login, the following wizard appears for the basic setup of the Pfsense firewall. However, the setup wizard can be bypassed and run later from the **System** menu of the web interface.
+
+Click on the **Next** button to start the basic configuration process on the Pfsense firewall.
+
+![wizard_start](http://blog.linoxide.com/wp-content/uploads/2015/08/wizard_start.png)
+
+Setting the hostname, domain and DNS addresses is shown in the following figure.
+
+![basic_setting_wizard](http://blog.linoxide.com/wp-content/uploads/2015/08/basic_setting_wizard.png)
+
+Setting the time zone is shown in the below snapshot.
+
+![time_setting](http://blog.linoxide.com/wp-content/uploads/2015/08/time_setting.png)
+
+The next window shows the settings for the WAN interface. By default the Pfsense firewall blocks bogon and private networks.
+
+![wan setting](http://blog.linoxide.com/wp-content/uploads/2015/08/wan-setting.png)
+
+Next, set the LAN IP address, which is used to access the Pfsense web interface for further configuration.
+
+![lan setting](http://blog.linoxide.com/wp-content/uploads/2015/08/lan-setting.png)
+
+The default password for the web interface is "pfsense". Enter a new password for the admin user in the following window to access the web interface for further configuration.
+
+![password](http://blog.linoxide.com/wp-content/uploads/2015/08/password.png)
+
+Click on the "Reload" button shown below. It applies the settings and redirects the firewall user to the main dashboard of Pfsense.
+
+![reload](http://blog.linoxide.com/wp-content/uploads/2015/08/reload.png)
+
+As shown in the following snapshot, the Pfsense dashboard shows system information (such as CPU details, OS version, DNS detail, memory consumption) and the status of ethernet/wireless interfaces, etc.
+
+![dashboard](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard1.png)
+
+### Menu detail ###
+
+Pfsense consists of the System, Interfaces, Firewall, Services, VPN, Status, Diagnostics and Help menus.
+
+![all menu](http://blog.linoxide.com/wp-content/uploads/2015/10/all-menu.png)
+
+### System Menu ###
+
+The sub menus of **System** are given below.
+
+![system menu](http://blog.linoxide.com/wp-content/uploads/2015/08/system-menu.png)
+
+In the **Advanced** sub menu the user can perform the following operations.
+
+1. Configuration of web interface
+1. Firewall/Nat setting
+1. Networking setting
+1. System tuneables setting
+1. Notification setting
+
+![advanced-systemmenu](http://blog.linoxide.com/wp-content/uploads/2015/10/advanced-systemmenu.png)
+
+In the **Cert Manager** sub menu, the firewall administrator generates certificates for the CA and users.
+
+![cert-manager-systemmenu](http://blog.linoxide.com/wp-content/uploads/2015/10/cert-manager-systemmenu.png)
+
+In the **Firmware** sub menu, the user can update the Pfsense firmware manually or automatically, and can take a full backup of the Pfsense configuration.
+
+![firmware-systemmenu](http://blog.linoxide.com/wp-content/uploads/2015/10/firmware-systemmenu.png)
+
+In the **General Setup** sub menu, the user can change basic settings such as the hostname and domain.
+
+![general setup-systemmenu](http://blog.linoxide.com/wp-content/uploads/2015/10/general-setup-systemmenu.png)
+
+As the menu title indicates, the user can enable/disable the high availability feature from this sub menu.
+
+![highavail-systemmenu](http://blog.linoxide.com/wp-content/uploads/2015/10/highavail-systemmenu.png)
+
+The Packages sub menu provides a package manager facility in the Pfsense web interface.
+
+![packages-system menu](http://blog.linoxide.com/wp-content/uploads/2015/10/packages-systemmenu.png)
+
+The user can perform gateway and route management using the **Routing** sub menu.
+
+![routing-system menu](http://blog.linoxide.com/wp-content/uploads/2015/10/routing-systemmenu.png)
+
+The **Setup Wizard** sub menu opens the following window, which starts the basic configuration of Pfsense.
+
+![wizard_start](http://blog.linoxide.com/wp-content/uploads/2015/10/wizard_start.png)
+
+User management can be done from the **User Manager** sub menu.
+
+![usermanager-system](http://blog.linoxide.com/wp-content/uploads/2015/10/usermanager-system.png)
+
+### Interfaces Menu ###
+
+This menu is used for the assignment of interfaces (LAN/WAN), VLAN settings, and wireless and GRE configuration, etc.
+
+![Interfaces setting](http://blog.linoxide.com/wp-content/uploads/2015/10/interfaces-setting.png)
+
+### Firewall Menu ###
+
+The firewall is the core part of the Pfsense distribution, and it provides the following features.
+
+![firewall-menu](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-systemmenu.png)
+
+**Aliases**
+
+Aliases are defined for real hosts, networks or ports, and they can be used to minimize the number of configuration changes needed.
+
+![firewall-aliases](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-aliases.png)
+
+**NAT (Network Address Translation)**
+
+NAT binds a specific internal address to a specific external address. Incoming traffic from the Internet to the specified IP will be directed toward the associated internal IP.
+
+![firewall-nat](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-nat.png)
+
+**Firewall Rules**
+
+Firewall rules control what traffic is allowed to enter an interface on the firewall. After traffic is passed on the interface, an entry is created in the state table.
+
+![firewall-rules](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-rules.png)
+
+**Schedules**
+
+Firewall rules can be scheduled so that they are only active at certain times of day or on certain specific days or days of the week.
+
+![firewall-schedules](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-schedules.png)
+
+**Traffic Shaper**
+
+Traffic shaping is the control of computer network traffic in order to optimize performance and lower latency.
+
+![firewall-traffic shapper](http://blog.linoxide.com/wp-content/uploads/2015/10/firewall-traffic-shapper.png)
+
+**Virtual IPs**
+
+Virtual IPs add knowledge of additional IP addresses to the firewall that are different from the firewall's real interface addresses.
+
+![firewall-virtualipaddresses](http://blog.linoxide.com/wp-content/uploads/2015/10/services-menu.png)
+
+### Services Menu ###
+
+The Services menu shows the services which are provided by the Pfsense distribution alongside the firewall.
+
+![services-menu](http://blog.linoxide.com/wp-content/uploads/2015/10/services-menu.png)
+
+New software installed for a specific service, such as snort, is also shown in this menu. By default the following services are listed in the Services menu.
+
+**Captive portal**
+
+The captive portal functionality in Pfsense allows securing a network by requiring a username and password to be entered on a portal page.
+
+![services-captive portal](http://blog.linoxide.com/wp-content/uploads/2015/10/services-captive-portal.png)
+
+**DHCP Relay**
+
+The DHCP Relay daemon will relay DHCP requests between broadcast domains for IPv4 DHCP.
+
+![services-dhcp relay](http://blog.linoxide.com/wp-content/uploads/2015/10/services-dhcp-relay.png)
+
+**DHCP Server**
+
+The user can run a DHCP service on the firewall for the network devices.
+
+![services-dhcp server](http://blog.linoxide.com/wp-content/uploads/2015/10/services-dhcp-server.png)
+
+**DNS Forwarder/Resolver/Dynamic DNS**
+
+Different DNS services can be configured on the Pfsense firewall.
+
+![services-dynamic dns client](http://blog.linoxide.com/wp-content/uploads/2015/10/services-dynamic-dns-client.png)
+
+![services-dns resolver](http://blog.linoxide.com/wp-content/uploads/2015/10/services-dns-resolver.png)
+
+![services-dns forwarder](http://blog.linoxide.com/wp-content/uploads/2015/10/services-dns-forwarder.png)
+
+**IGMP Proxy**
+
+The user can configure the IGMP proxy on the Pfsense firewall from the Services menu.
+
+![services igmp](http://blog.linoxide.com/wp-content/uploads/2015/10/services-igmp.png)
+
+**Load Balancer**
+
+Load balancing is one of the important features that the Pfsense firewall supports.
+
+![services load balancer](http://blog.linoxide.com/wp-content/uploads/2015/10/services-load-balancer.png)
+
+**SNMP (Simple Network Management Protocol)**
+
+Pfsense supports all versions of SNMP for remote management of the firewall.
+
+![services snmp](http://blog.linoxide.com/wp-content/uploads/2015/10/services-snmp.png)
+
+**Wake on Lan**
+
+Using this feature, a packet is sent to a workstation on a locally connected network which powers on that workstation.
+
+![services-wake on lan](http://blog.linoxide.com/wp-content/uploads/2015/10/services-wake-on-lan.png)
+
+### VPN Menu ###
+
+This is one of the most important features of Pfsense. It supports the following types of VPN configuration.
+
+**VPN IPsec**
+
+IPsec is a standard for providing security to IP protocols via encryption and/or authentication.
+
+![vpn-ipsec](http://blog.linoxide.com/wp-content/uploads/2015/10/vpn-ipsec.png)
+
+**L2TP IPsec**
+
+L2TP/IPsec is a common VPN type that wraps L2TP, an insecure tunneling protocol, inside a secure channel built using transport mode IPsec.
+
+![vpn- l2tp](http://blog.linoxide.com/wp-content/uploads/2015/10/vpn-l2tp.png)
+
+**OpenVPN**
+
+OpenVPN is an Open Source VPN server and client that is supported on pfSense.
+
+![vpn openvpn](http://blog.linoxide.com/wp-content/uploads/2015/10/vpn-openvpn.png)
+
+**Status Menu**
+
+It shows the status of the services provided by Pfsense, such as the DHCP server, IPsec and the load balancer.
+
+![status-menu](http://blog.linoxide.com/wp-content/uploads/2015/10/status-menu.png)
+
+**Diagnostic Menu**
+
+This menu helps the administrator/user troubleshoot and rectify Pfsense issues or problems.
+
+![diagnosics menu](http://blog.linoxide.com/wp-content/uploads/2015/10/diagnosics-menu.png)
+
+**Help Menu**
+
+This menu provides links to different useful resources such as the FreeBSD handbook, the developer wiki, paid support and the pfSense book.
+
+![help menu](http://blog.linoxide.com/wp-content/uploads/2015/10/help-menu.png)
+
+### Conclusion ###
+
+In this article our focus was on the basic configuration and feature set of the Pfsense distribution. It is based on FreeBSD and widely used due to its security and stability features. In our future articles on Pfsense, our focus will be on basic firewall rule settings, snort (IDS/IPS) and IPsec VPN configuration.
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/ + +作者:[nido][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/naveeda/ +[1]:http://linoxide.com/firewall/install-pfsense-firewall/ \ No newline at end of file diff --git a/sources/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md b/sources/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md new file mode 100644 index 0000000000..8977e0a420 --- /dev/null +++ b/sources/tech/20151105 How to Manage Your To-Do Lists in Ubuntu Using Go For It Application.md @@ -0,0 +1,84 @@ +How to Manage Your To-Do Lists in Ubuntu Using Go For It Application +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-featured1.jpg) + +Task management is arguably one of the most important and challenging part of professional as well as personal life. Professionally, as you assume more and more responsibility, your performance is directly related to or affected with your ability to manage the tasks you’re assigned. + +If your job involves working on a computer, then you’ll be happy to know that there are various applications available that claim to make task management easy for you. While most of them cater to Windows users, there are many options available on Linux, too. In this article we will discuss one such application: Go For It. + +### Go For It ### + +[Go For It][1] (GFI) is developed by Manuel Kehl, who describes it as a “a simple and stylish productivity app, featuring a to-do list, merged with a timer that keeps your focus on the current task.” The timer feature, specifically, is interesting, as it also makes sure that you take a break from your current task and relax for sometime before proceeding further. + +### Download and Installation ### + +Users of Debian-based systems, like Ubuntu, can easily install the app by running the following commands in terminal: + + sudo add-apt-repository ppa:mank319/go-for-it + sudo apt-get update + sudo apt-get install go-for-it + +Once done, you can execute the application by running the following command: + + go-for-it + +### Usage and Configuration ### + +Here is how the GFI interface looks when you run the app for the very first time: + +![gfi-first-run](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-run1.png) + +As you can see, the interface consists of three tabs: To-Do, Timer, and Done. While the To-Do tab contains a list of tasks (the 4 tasks shown in the image above are there by default – you can delete them by clicking on the rectangular box in front of them), the Timer tab contains task timer, while Done contains a list of tasks that you’ve finished successfully. Right at the bottom is a text box where you can enter the task text and click “+” to add it to the list above. + +For example, I added a task named “MTE-research-work” to the list and selected it by clicking on it in the list – see the screenshot below: + +![gfi-task-added](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-added1.png) + +Then I selected the Timer tab. 
Here I could see a 25-minute timer for the active task which was “MTE-reaserch-work.” + +![gfi-active-task-timer](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-active-task-timer.png) + +Of course, you can change the timer value and set to any time you want. I, however, didn’t change the value and clicked the Start button present below to start the task timer. Once 60 seconds were left, GFI issued a notification indicating the same. + +![gfi-first-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-first-notification-new.jpg) + +And once the time was up, I was asked to take a break of five minutes. + +![gfi-time-up-notification-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-time-up-notification-new.jpg) + +Once those five minutes were over, I could again start the task timer for my task. + +![gfi-break-time-up-new](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-break-time-up-new.jpg) + +When you’re done with your task, you can click the Done button in the Timer tab. The task is then removed from the To-Do tab and listed in the Done tab. + +![gfi-task-done](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-task-done1.png) + +GFI also allows you to tweak some of its settings. For example, the settings window shown below contains options to tweak the default task duration, break duration, and reminder time. + +![gfi-settings](https://www.maketecheasier.com/assets/uploads/2015/10/gfi-settings1.png) + +It’s worth mentioning that GFI stores the to-do lists in the Todo.txt format which simplifies synchronization with mobile devices and makes it possible for you to edit tasks using other frontends – read more about it [here][2]. + +You can also see the GFI app in action in the video below. + +注:youtube 视频 + + +### Conclusion ### + +As you have observed, GFI is an easy to understand and simple to use task management application. Although it doesn’t offer a plethora of features, it does what it claims – the timer integration is especially useful. If you’re looking for a basic, open-source task management tool for Linux, Go For It is worth trying. + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/to-do-lists-ubuntu-go-for-it/ + +作者:[Himanshu Arora][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/himanshu/ +[1]:http://manuel-kehl.de/projects/go-for-it/ +[2]:http://todotxt.com/ \ No newline at end of file diff --git a/sources/tech/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md b/sources/tech/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md new file mode 100644 index 0000000000..45eb3b4834 --- /dev/null +++ b/sources/tech/20151105 Linux FAQs with Answers--How to change default Java version on Linux.md @@ -0,0 +1,52 @@ +Linux FAQs with Answers--How to change default Java version on Linux +================================================================================ +> **Question**: When I am trying to run a Java program on Linux, I am getting the following error. Looks like the Java program is compiled for a different Java version than the default Java program installed on my Linux. How can I switch the default Java version on Linux? 
+> +> Exception in thread "main" java.lang.UnsupportedClassVersionError: com/xmodulo/hmon/gui/NetConf : Unsupported major.minor version 51.0 + +When a Java program is compiled, the build environment sets a "target" which is the oldest JRE version the program can support. If you run the Java program on a Linux system which does not meet the lowest JRE version requirement, you will encounter the following error while starting the program. + + Exception in thread "main" java.lang.UnsupportedClassVersionError: com/xmodulo/hmon/gui/NetConf : Unsupported major.minor version 51.0 + +For example, in this case the program is compiled for Java JRE 1.7 but the system only has Java JRE 1.6. + +To solve this problem, you need to change the default Java version you are using to Java JRE 1.7 or higher (assuming that such JRE is already installed). + +First, **check available Java versions** on your Linux system by using update-alternatives command: + + $ sudo update-alternatives --display java + +![](https://c2.staticflickr.com/6/5663/22661333316_81fe1ab7da_c.jpg) + +In this example, there are four different Java versions that are installed: OpenJDK JRE 1.6, Oracle Java JRE 1.6, OpenJDK JRE 1.7 and Oracle Java JRE 1.7. The default Java version is currently set to OpenJDK JRE 1.6. + +If the necessary Java JRE is not installed, you can always install it using [these instructions][1]. + +Now that there are suitable candidates to change to, you can **switch the default Java version** among available Java JREs by running the following command: + + $ sudo update-alternatives --config java + +When prompted, select the Java version you would like to use. In this example, we choose Oracle Java JRE 1.7. + +![](https://c2.staticflickr.com/6/5651/22066181083_b9c4c5b676_c.jpg) + +Now you can verify the default Java version as follows. + + $ java -version + +![](https://c1.staticflickr.com/1/634/22499411280_1d702a4101_c.jpg) + +Finally, if you defined JAVA_HOME environment variable somewhere, update the variable according to the newly set default Java version. + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/change-default-java-version-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://ask.xmodulo.com/install-java-runtime-linux.html \ No newline at end of file diff --git a/sources/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md b/sources/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md new file mode 100644 index 0000000000..b93eb6db06 --- /dev/null +++ b/sources/tech/20151105 Linux FAQs with Answers--How to find which shell I am using on Linux.md @@ -0,0 +1,93 @@ + +translation by strugglingyouth +Linux FAQs with Answers--How to find which shell I am using on Linux +================================================================================ +> **Question**: I often change between different shells at the command line. Is there a quick and easy way to find out which shell I am currently in? Also how can I find out the version of the shell? + +### Find out Which Shell You are In ### + +There are different ways to tell what shell you are currently in. The easiest way to find that out is by using special shell parameters. 
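+
+For instance, here is a quick one-liner sketch that uses the special parameters discussed below (assuming the procps `ps` found on typical Linux systems) to print only the name of the current shell:
+
+    $ ps -p $$ -o comm=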
+ +For one, [a special parameter named "$$"][1] denotes the PID of the current instance of the shell you are running. This parameter is read-only and cannot be modified. So the following command will also show you the name of the shell you are running: + + $ ps -p $$ + +---------- + + PID TTY TIME CMD + 21666 pts/4 00:00:00 bash + +The above command works across all available shells. + +If you are not using csh, another way to find out the current shell is to use an special shell parameter called "$$", which denotes the name of the shell or shell script that is currently running. This is one of the Bash special parameters, but available in other shells as well, such as sh, zsh, tcsh or dash. Using echo command to print out its value will tell you the name of the shell you are currently in. + + $ echo $0 + +---------- + + bash + +Don't be confused with a separate environment variable called $SHELL, which is set to the full path to your default shell. As such, this variable is not necessarily point to the current shell you are using. For example, $SHELL remains the same even if you invoke a different shell within a terminal. + + $ echo $SHELL + +---------- + + /bin/shell + +![](https://c2.staticflickr.com/6/5688/22544087680_4a9c180485_c.jpg) + +Thus to find out the current shell, you should use either $$ or $0, but not $SHELL. + +### Find out the Version of the Shell You are Using ### + +Once you know which shell you are in, you may want to find out what version of the shell it is. For that, type the name of your shell followed by "--version" at the command line. For example: + +**For** bash **shell**: + + $ bash --version + +---------- + + GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu) + Copyright (C) 2013 Free Software Foundation, Inc. + License GPLv3+: GNU GPL version 3 or later + + This is free software; you are free to change and redistribute it. + There is NO WARRANTY, to the extent permitted by law. + +**For** zsh **shell**: + + $ zsh --version + +---------- + + zsh 5.0.7 (x86_64-pc-linux-gnu) + +**For** tcsh **shell**: + $ tcsh --version + +---------- + + tcsh 6.18.01 (Astron) 2012-02-14 (x86_64-unknown-linux) options wide,nls,dl,al,kan,rh,nd,color,filec + +For some shells, you can also use shell-specific variables (e.g., $BASH_VERSION or $ZSH_VERSION). + + $ echo $BASH_VERSION + +---------- + + 4.3.8(1)-release + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/which-shell-am-i-using.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://ask.xmodulo.com/process-id-pid-shell-script.html diff --git a/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md b/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md new file mode 100644 index 0000000000..c5cd0b5420 --- /dev/null +++ b/sources/tech/20151105 Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy.md @@ -0,0 +1,61 @@ +Linux FAQs with Answers--How to install Ubuntu desktop behind a proxy +================================================================================ +> **Question**: My computer is connected to a corporate network sitting behind an HTTP proxy. 
When I try to install Ubuntu desktop on the computer from a CD-ROM drive, the installation hangs and never finishes while trying to retrieve files, which is presumably due to the proxy. However, the problem is that Ubuntu installer never asks me to configure proxy during installation procedure. Then how can I install Ubuntu desktop behind a proxy? + +Unlike Ubuntu server, installation of Ubuntu desktop is pretty much auto-pilot, not leaving much room for customization, such as custom disk partitioning, manual network settings, package selection, etc. While such simple, one-shot installation is considered user-friendly, it leaves much to be desired for those users looking for "advanced installation mode" to customize their Ubuntu desktop installation. + +In addition, one big problem of the default Ubuntu desktop installer is the absense of proxy settings. If your computer is connected behind a proxy, you will notice that Ubuntu installation gets stuck while preparing to download files. + +![](https://c2.staticflickr.com/6/5683/22195372232_cea81a5e45_c.jpg) + +This post describes how to get around the limitation of Ubuntu **installer and install Ubuntu desktop when you are behind a proxy**. + +The basic idea is as follows. Instead of starting with Ubuntu installer directly, boot into live Ubuntu desktop first, configure proxy settings, and finally launch Ubuntu installer manually from live desktop. The following is the step by step procedure. + +After booting from Ubuntu desktop CD/DVD or USB, click on "Try Ubuntu" on the first welcome screen. + +![](https://c1.staticflickr.com/1/586/22195371892_3816ba09c3_c.jpg) + +Once you boot into live Ubuntu desktop, click on Settings icon in the left. + +![](https://c1.staticflickr.com/1/723/22020327738_058610c19d_c.jpg) + +Go to Network menu. + +![](https://c2.staticflickr.com/6/5675/22021212239_ba3901c8bf_c.jpg) + +Configure proxy settings manually. + +![](https://c1.staticflickr.com/1/735/22020025040_59415e0b9a_c.jpg) + +Next, open a terminal. + +![](https://c2.staticflickr.com/6/5642/21587084823_357b5c48cb_c.jpg) + +Enter a root session by typing the following: + + $ sudo su + +Finally, type the following command as the root. + + # ubiquity gtk_ui + +This will launch GUI-based Ubuntu installer as follows. + +![](https://c1.staticflickr.com/1/723/22020025090_cc64848b6c_c.jpg) + +Proceed with the rest of installation. 
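+
+One additional, hedged note: if package downloads still stall during installation, the proxy configured in the desktop settings may not be visible to the root shell. In that case you can quit the installer, export the proxy variables in the same root session, and launch it again (the proxy address below is only a placeholder; replace it with your own):
+
+    # export http_proxy=http://proxy.example.com:8080
+    # export https_proxy=http://proxy.example.com:8080
+    # ubiquity gtk_ui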
+ +![](https://c1.staticflickr.com/1/628/21585344214_447020e9d6_c.jpg) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/install-ubuntu-desktop-behind-proxy.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md index 5dd1782a98..3ffb1dc54f 100644 --- a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md +++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. diff --git a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md index 1d069e08ea..7fe8073a77 100644 --- a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md +++ b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor ================================================================================ A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams. 
diff --git a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md index 77fe5cf040..82cc54a5a6 100644 --- a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md +++ b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux ================================================================================ Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams. diff --git a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md index 93e4b2966b..ada637fabb 100644 --- a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md +++ b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition ================================================================================ Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other support teams. 
diff --git a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md index 4316e32c16..1544a378bc 100644 --- a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md +++ b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. diff --git a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md index 901fb7b4f1..fd23db110f 100644 --- a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md +++ b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups ================================================================================ Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams. diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md index 4b7cdf9fe2..abf09ee523 100644 --- a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md +++ b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) ================================================================================ A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams. 
diff --git a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md index 50f39ee2d9..2cec4de4ae 100644 --- a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md +++ b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts ================================================================================ Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams. diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md index a363a50c09..6d0f65223f 100644 --- a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md +++ b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -1,5 +1,3 @@ -Translating by Xuanwo - Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper ================================================================================ Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. 
diff --git a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md b/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md deleted file mode 100644 index f4625c6c13..0000000000 --- a/sources/tech/Learn with Linux/Learn with Linux--Master Your Math with These Linux Apps.md +++ /dev/null @@ -1,126 +0,0 @@ -Translated by KnightJoker - -用Linux学习:使用这些Linux应用来征服你的数学 -================================================================================ -![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png) - -这篇文章是[用Linux学习][1]系列的一部分: - -- [用Linux学习: 学习类型][2] -- [用Linux学习: 物理模拟][3] -- [用Linux学习: 学习音乐][4] -- [用Linux学习: 两个地理应用程序][5] -- [用Linux学习: 用这些Linux应用来征服你的数学][6] - - -Linux提供了大量的教育软件和许多优秀的工具来帮助所有年龄段的学生学习和练习各种各样的话题,常常以交互的方式。与Linux一起学习这一系列的文章则为这些各种各样的教育软件和应用提供了一个介绍。 - -数学是计算机的核心。如果有人用精益求精和纪律来预期一个伟大的操作系统,比如GNU/ Linux,那么这将是数学。如果你在寻求一些数学应用程序,那么你将不会感到失望。Linux提供了很多优秀的工具使得数学看起来和你曾经做过的一样令人畏惧,但实际上他们会简化你使用它的方式。 -### Gnuplot ### - -Gnuplot 是一个适用于不同平台的命令行脚本化和多功能的图形工具。尽管它的名字,并不是GNU操作系统的一部分。也没有免费授权,但它是免费软件(这意味着它受版权保护,但免费使用)。 - -要在Ubuntu系统(或者衍生系统)上安装 `gnuplot`,输入: - sudo apt-get install gnuplot gnuplot-x11 - -进入一个终端窗口。启动该程序,输入: - - gnuplot - -你会看到一个简单的命令行界面: - -![learnmath-gnuplot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot.png) - -在其中您可以直接开始输入函数。绘图命令将绘制一个曲线图。 - -输入内容,例如, - - plot sin(x)/x - -随着`gnuplot的`提示,将会打开一个新的窗口,图像便会在里面呈现。 - -![learnmath-gnuplot-plot1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot1.png) - -你也可以在线这个图设置不同的属性,比如像这样指定“title” - - plot sin(x) title 'Sine Function', tan(x) title 'Tangent' - -![learnmath-gnuplot-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot2.png) - -使用`splot`命令,你可以给的东西更深入一点并且绘制3D图形 - - splot sin(x*y/20) - -![learnmath-gnuplot-plot3](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot3.png) - -这个窗口有几个基本的配置选项, - -![learnmath-gnuplot-options](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-options.png) - -但是`gnuplot`的真正力量在于在它的命令行和脚本功能,`gnuplot`广泛完整的文档可在这里找到,并在[Duke大学网站][8]上面看见这个了不起的教程[7]的原始版本。 - -### Maxima ### - -[Maxima][9]是从Macsyma原始资料开发的一个计算机代数系统,根据它的 SourceForge 页面, - -> “Maxima是符号和数值的表达,包括微分,积分,泰勒级数,拉普拉斯变换,常微分方程,线性方程组,多项式,集合,列表,向量,矩阵和张量系统的操纵系统。Maxima通过精确的分数,任意精度的整数和可变精度浮点数产生高精度的计算结果。Maxima可以二维和三维中绘制函数和数据。“ - -你将会获得二进制包用于大多数Ubuntu衍生系统的Maxima以及它的图形界面中,插入所有包,输入: - - sudo apt-get install maxima xmaxima wxmaxima - -在终端窗口中,Maxima是一个没有太多UI的命令行工具,但如果你开始wxmaxima,你会进入一个简单但功能强大的图形用户界面。 - -![learnmath-maxima](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima.png) - -你可以开始输入这个来简单的一个开始。(提示:如果你想计算一个表达式,使用“Shift + Enter”回车后会增加更多的方法) - -Maxima可以用于一些简单的问题,因此也可以作为一个计算器, - -![learnmath-maxima-1and1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-1and1.png) - -以及一些更复杂的问题, - -![learnmath-maxima-functions](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-functions.png) - -它使用`gnuplot`使得绘制简单, - -![learnmath-maxima-plot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot.png) - -或者绘制一些复杂的图形. - -![learnmath-maxima-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot2.png) - -(它需要gnuplot-X11的包,来显示它们。) - -除了美化一些图形,Maxima也尽可能用latex格式导出它们,或者通过右键是捷菜单进行一些突出的操作. 
- -![learnmath-maxima-menu](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-menu.png) - -然而其主菜单还是提供了大量压倒性的功能,当然Maxima的功能远不止如此,这里也有一个广泛使用的在线文档。 - -### 总结 ### - -数学不是一个简单的学科,这些在Linux上的优秀软件也没有使得数学更加简单,但是这些应用使得使用数学变得更加的简单和工程化。以上两种应用都只是介绍一下Linux的所提供的。如果你是认真从事数学和需要更多的功能与丰富的文档,那你更应该看看这些Mathbuntu项目。 --------------------------------------------------------------------------------- - -via: https://www.maketecheasier.com/learn-linux-maths/ - -作者:[Attila Orosz][a] -译者:[KnightJoker](https://github.com/KnightJoker/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.maketecheasier.com/author/attilaorosz/ -[1]:https://www.maketecheasier.com/series/learn-with-linux/ -[2]:https://www.maketecheasier.com/learn-to-type-in-linux/ -[3]:https://www.maketecheasier.com/linux-physics-simulation/ -[4]:https://www.maketecheasier.com/linux-learning-music/ -[5]:https://www.maketecheasier.com/linux-geography-apps/ -[6]:https://www.maketecheasier.com/learn-linux-maths/ -[7]:http://www.gnuplot.info/documentation.html -[8]:http://people.duke.edu/~hpgavin/gnuplot.html -[9]:http://maxima.sourceforge.net/ -[10]:http://maxima.sourceforge.net/documentation.html -[11]:http://www.mathbuntu.org/ \ No newline at end of file diff --git a/translated/talk/20151015 New Collaborative Group to Speed Real-Time Linux.md b/translated/talk/20151015 New Collaborative Group to Speed Real-Time Linux.md deleted file mode 100644 index e3650558c6..0000000000 --- a/translated/talk/20151015 New Collaborative Group to Speed Real-Time Linux.md +++ /dev/null @@ -1,79 +0,0 @@ -新的合作组将加速实时Linux的发展 -================================================================================ -![](http://www.linux.com/images/stories/66866/Tux-150.png) - -在本周的Linux大会活动(LinuxCon)上Linux基金会(Linux Foundation)[宣称][1],“Real-Time - Linux”项目(译者注:实时Linux操作系统项目,简称RTL)得到了新的资金支持,并预期这将促进该项目,使其自成立15年来第一次有机会在实时操作性上和其他的实时操作系统(RTOS,译者注:Real Time Operation System)一较高下。Linux基金会将RTL项目重组为一个新的项目,并命名为“Real-Time Linux Collaborative Project”(译者注,下文将称其为“RTL协作组”),该项目将获得更有力的资金支持,更多的开发人员将投入其中,并在开发和集成上和Linux内核主线保持更紧密的联系。 - -根据Linux基金会的说法,RTL项目并入Linux基金会后“将为业界节省数百万美元的重复研发费用。”同时此举也将“通过基于稳定的主线内核版本开发而改善本项目的代码质量”。 - -在过去的十几年中,RTL项目的开发管理和经费资助主要由[开源自动化开发实验室(译者注:Open Source Automation Development Lab,以下简称OSADL)] [2]承担,OSDL将继续作为新合作项目的金牌成员之一,但其原来承担的资金资助工作将会在一月份移交给Linux基金。RTL项目和[OSADL][3]长久以来一直负责维护[内核的实时抢占(RT-Preempt)补丁][4],并定期将其更新到Linux内核的主线上。 - -据长期以来一直担任OSADL总经理的Carsten Emde博士介绍,支持内核实时特性的工作已经完成了将近90%。 “这就像盖房子,”他解释说。 “主要的部件,如墙壁,窗户和门都已经安装到位,就实时内核来说,类似的主要部件包括:高精度定时器(high-resolution timers),中断线程化机制(interrupt threads)和基于优先级可继承的互斥量(priority-inheritance mutexes)等。你所剩下的就是还需要一些边边角角的工作,就如同装修房子过程中还剩下铺设如地毯和墙纸等来完成最终的工程。” - -以Emde观点来看,从技术的角度来说,实时Linux的性能已经可以媲美绝大多数其他的实时操作系统 - 但前提是你要不厌其烦地把所有的补丁都打上。 Emde的原话如下:“该项目(译者注,指RTL)的唯一目标就是提供一个满足实时性要求Linux系统,使其无论运行状况如何恶劣都可以保证在确定的可以预先定义的时间期限内对外界处理做出响应。这个目标已经实现,但需要你手动地将RTL提供的补丁添加到Linux内核主线的版本代码上,但新项目将在Linux内核主线版本上直接支持同样的的目标。唯一的,当然也是最重要的区别就是相应的维护工作将少得多,因为我们再也不用一次又一次移植那些独立于内核主线的补丁代码了。” - -新的RTL协作组将继续在Thomas Gleixner的指导下工作,Thomas Gleixner在过去的十多年里一直是RTL的核心维护人员。本周,Gleixner被任命为Linux基金会成员,并加入了一个特别的小组,小组成员包括Linux稳定内核维护者Greg Kroah-Hartman,Yocto项目维护者Richard Purdie和Linus Torvalds本人。 - -据Emde介绍,RTL的第二维护人Steven Rostedt来自Red Hat公司,他负责维护旧的内核版本,他将和同样来自Red Hat的Ingo Molnàr继续参与该项目,Ingo是RTL的关键开发人员,但近年来更多地从事咨询方面的工作。有些令人惊讶的是,Red Hat竟然不是RTL协作组的成员之一。相反,谷歌作为唯一的白金会员占据了头把交椅,其他黄金会员包括国家仪器公司(National Instruments,简称NI),OSADL和德州仪器(TI)。银卡会员包括Altera公司,ARM,Intel和IBM。 - -###走向实时内核的漫长道路### - 
-当15年前Linux第一次出现在嵌入式设备上的时候,它所面临的嵌入式计算市场已经被其他的实时操作系统,譬如风河公司(WindRiver)的VxWorks,所牢牢占据。VxWorks从那时起到现在,一直在为众多的工控设备,航空电子设备以及交通运输应用提供着工业级别的高确定性的,硬实时的内核。微软后来也提供了一个支持实时性的操作系统版本-Windows CE,当时的Linux所面临的是来自潜在工业客户的公开嘲讽和层层阻力。他们认为那些从桌面系统改进来的Linux发行版本顶多适合要求不高的轻量级消费类电子产品,而不适合那些对硬实时要求更高的设备。 - -对于嵌入式Linux的先行者如[MontaVista公司][6]来说,其[早期的目标][5]很明确就是要改进Linux的实时能力。多年以来,对Linux的实时性能开发发展迅速,得到各种组织的支持,如OSADL[成立于2006年][7],以及实时Linux基金会(Real-Time Linux Foundation,简称RTLF)。在2009年[OSADL与RTLF合并][8],OSADL及其RTL组承担了所有的抢占式实时内核(PREEMPT-RT)补丁的维护工作并始终保存跟踪最新的Linux内核主线版本。除此之外OSADL还负责监管其他自动化相关的项目,例如[高可靠性Linux][9](Safety Critical Linux,译者注:指研究如何在关键系统上可靠安全地运行Linux)。 - -OSADL对RTL的支持经历了三个阶段:拥护和推广,测试和质量评估,以及最后的资金支持。Emde表示,在早期,OSADL的角色仅限于写写推广的文章,制作专题报告,组织相关培训,以及“宣传”RTL的优点。他说:“要让一个相当保守的工控行业接受象Linux之类的新技术及其基于社区的那种开发模式,首先就需要建立其对新事物的信心。从使用专有的实时操作系统转向改用Linux对公司意味着必须引入新的战略和流程,才能与社区进行互动。” - -后来,OSADL改而提供技术性能数据,建立[质量评估和测试中心][10],并在和开源相关的法律事务问题和安全认证方面向行业成员提供帮助。 - -当RTL在实时性上变得愈加成熟的同时,相反地Windows CE却是江河日下,[其市场份额正在快速地被RTL所蚕食][11],一些与RTL竞争的实时Linux项目,主要是[Xenomai][12]也已开始集成RTL。 - -“伴随RTL补丁的成功发展,以及明确的预期其最终会被集成到Linux内核主线代码中,导致Xenomai关注的重心发生了变化,”Emde说。 “Xenomai 3.0可与RT补丁结合起来使用,并提供了所谓的‘皮肤’,(译者注:一个封装层),使我们可以复用为其他系统编写的代码。不过,它们还没有完全统一起来,因为Xenomai使用了双内核方法,而RT补丁只适用于单一的Linux内核。“ - -近些年来,RTL项目的资助来源越来越少,所以最终OSADL接过了这个重任。Emde说:“当最近开发工作因缺少资金而陷入停滞时,OSADL对RTL的支持进入到第三个重大阶段:开始直接资助Thomas Gleixner的工作。” - -正如Emde在其[10月5日的一篇博文][13]中所描述的那样,实时Linux的应用领域正在日益扩大,由其原来主要服务的工业控制扩大到了汽车行业和电信业等领域,这表明资助的来源也应该得到拓宽。Emde原文写道:“仅仅靠来自工控行业的资金来支撑全部的工作是不合理的,因为电信等其他行业也在享用实时Linux内核。” - -当Linux基金会表明有兴趣提供资金支持时,OSADL认为“单一的资助和控制渠道要有效得多”(译者注:指最终由Linux基金会全盘接手了RTL项目),Emde如是说。不过,他补充说,作为黄金级成员,OSADL仍参与监管项目的工作,会继续从事其宣传和质量保证方面的活动。 - -###汽车行业期待RTL的崛起### - -Emde表示,RTL会继续在工业应用领域飞速发展并逐渐取代其他实时操作系统。而且,他补充说,RTL在汽车行业发展也很迅猛,以后会扩大并应用到铁路和航空电子设备上。 - -的确,Linux在汽车行业将扮演越来越重要的角色,这也是Linux基金对RTL所寄予厚望的原因之所在。RTL工作组可能会与Linux基金会旗下的[车载Linux(Automotive Grade Linux,简称AGL)][14]工作组展开合作。Emde猜测,Google高调参与的主要动因可能也是希望将RTL用于汽车控制。此外,德州仪器(TI)也非常期望将其Jacinto处理器应用于汽车行业。 - -面向车载Linux的项目(比如AGL)的目标是要扩大Linux在车载设备上的应用范围,其应用不是仅限于车载信息娱乐(In-Vehicle Infotainment,简称IVI),而是要进入到譬如集群控制和车载通讯领域,而这些领域目前主要使用的是QNX之类的实时操作系统。无人驾驶汽车在实时性上对操作系统也有很高的要求。 - -Emde特别指出,OSADL的[SIL2LinuxMP][15]项目可能会在将RTL引入到汽车工业领域上扮演重要的角色。SIL2LinuxMP并不是专门针对汽车工业的项目,但随着BMW公司参与其中,汽车行业成为其很重要的应用领域之一。该项目的目标在于验证RTL在采用单核或多核CPU的标准化商用(Commercial Off-The-Shelf,简称COTS)板卡上运行所需的基本组件。它定义了引导程序、根文件系统、Linux内核以及对应支持RTL的C库。 - -无人机和机器人使用实时Linux的时机也已成熟,Xenomai系统早已用在许多机器人以及一些无人机中。不过,在更广泛的嵌入式Linux世界,包括了消费电子产品和物联网应用中,RTL可以扮演的角色很有限。主要的障碍在于,无线通信和互联网本身会带来延迟。 - -Emde说:“目前实时Linux主要还是应用于系统内部控制以及系统与周边外设之间的控制,在远程控制机器上作用不大。企图通过互联网实现实时控制恐怕不是一件可行的事情。” - --------------------------------------------------------------------------------- - -via: http://www.linux.com/news/software/applications/858828-new-collaborative-group-to-speed-real-time-linux - -作者:[Eric Brown][a] -译者:[unicornx](https://github.com/unicornx) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linux.com/community/forums/person/42808 -[1]:http://www.linuxfoundation.org/news-media/announcements/2015/10/linux-foundation-announces-project-advance-real-time-linux -[2]:http://archive.linuxgizmos.com/celebrating-the-open-source-automation-development-labs-first-birthday/ -[3]:https://www.osadl.org/ -[4]:http://linuxgizmos.com/adding-real-time-to-linux-with-preempt-rt/ -[5]:http://archive.linuxgizmos.com/real-time-linux-what-is-it-why-do-you-want-it-how-do-you-do-it-a/ 
-[6]:http://www.linux.com/news/embedded-mobile/mobile-linux/841651-embedded-linux-pioneer-montavista-spins-iot-linux-distribution -[7]:http://archive.linuxgizmos.com/industry-group-aims-linux-at-automation-apps/ -[8]:http://archive.linuxgizmos.com/industrial-linux-groups-merge/ -[9]:https://www.osadl.org/Safety-Critical-Linux.safety-critical-linux.0.html -[10]:http://www.osadl.org/QA-Farm-Realtime.qa-farm-about.0.html -[11]:http://www.linux.com/news/embedded-mobile/mobile-linux/818011-embedded-linux-keeps-growing-amid-iot-disruption-says-study -[12]:http://xenomai.org/ -[13]:https://www.osadl.org/Single-View.111+M5dee6946dab.0.html -[14]:http://www.linux.com/news/embedded-mobile/mobile-linux/833358-first-open-automotive-grade-linux-spec-released -[15]:http://www.osadl.org/SIL2LinuxMP.sil2-linux-project.0.html \ No newline at end of file diff --git a/translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md b/translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md new file mode 100644 index 0000000000..7de8349b9c --- /dev/null +++ b/translated/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md @@ -0,0 +1,231 @@ +如何在 CentOS 7.x 上安装 Zephyr 测试管理工具 +================================================================================ +测试管理工具包括作为测试人员需要的任何东西。测试管理工具用来记录测试执行的结果、计划测试活动以及报告质量保证活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug、缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。 + +### Zephyr 要求 ### + +安装和运行 Zephyr 要求满足以下最低条件。可以根据你的基础设施提高资源。我们会在 64 位 CentOS-7 系统上安装 Zephyr,几乎在所有的 Linux 操作系统中都有可用的 Zephyr 二进制发行版。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Zephyr test management tool | Minimum requirement | Recommended / notes |
+| --------------------------- | ------------------- | ------------------- |
+| Linux OS                    | CentOS Linux 7 (Core), 64-bit | |
+| Packages                    | JDK 7 or above, Oracle JDK 6 update | No prior Tomcat or MySQL installed |
+| RAM                         | 4 GB                | Preferred 8 GB |
+| CPU                         | 2.0 GHz or higher   | |
+| Hard Disk                   | 30 GB               | At least 5 GB must be free |
+ +安装 Zephyr 要求你有超级用户(root)权限,并确保你已经正确配置了网络的静态 IP ,默认端口必须可用并允许通过防火墙。其中 tomcat 会使用 80/443、 8005、 8009、 8010 端口, Zephyr 内部使用 RTMP 协议的 flex 会使用 443 和 2099 号端口。 + +### 安装 Java JDK 7 ### + +安装 Zephyr 首先需要安装 Java JDK 7,如果你的系统上还没有安装,可以按照下面的方法安装 Java 并设置 JAVA_HOME 环境变量。 + +输入以下的命令安装 Java JDK 7。 + + [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1 + +---------- + + [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64 + +安装完 java 和它的所有依赖后,运行下面的命令设置 JAVA_HOME 环境变量。 + + [root@centos-007 ~]# export JAVA_HOME=/usr/java/default + [root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH + +用下面的命令检查 java 版本以验证安装。 + + [root@centos-007 ~]# java –version + +---------- + + java version "1.7.0_79" + OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14) + OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode) + +输出显示我们已经正确安装了 1.7.0_79 版本的 OpenJDK Java。 + +### 安装 MySQL 5.6.x ### + +如果的机器上有其它的 MySQL,建议你先卸载它们并安装这个版本,或者升级它们的模式到指定的版本。因为 Zephyr 前提要求这个指定的主要/最小 MySQL (5.6.x)版本要有 root 用户名。 + +可以按照下面的步骤在 CentOS-7.1 上安装 MySQL 5.6 : + +下载 rpm 软件包,它会为安装 MySQL 服务器创建一个 yum 库文件。 + + [root@centos-007 ~]# yum install wget + [root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm + +然后用 rpm 命令安装下载下来的 rpm 软件包。 + + [root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm + +安装完这个软件包后你会有两个和 MySQL 相关的新的 yum 库。然后使用 yum 命令安装 MySQL Server 5.6,它会自动安装所有需要的依赖。 + + [root@centos-007 ~]# yum install mysql-server + +安装过程完成之后,运行下面的命令启动 mysqld 服务并检查它的状态是否激活。 + + [root@centos-007 ~]# service mysqld start + [root@centos-007 ~]# service mysqld status + +对于全新安装的 MySQL 服务器,MySQL root 用户的密码为空。 +为了安全起见,我们应该重置 MySQL root 用户的密码。 + +用自动生成的空密码连接到 MySQL 并更改 root 用户密码。 + + [root@centos-007 ~]# mysql + mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password'); + mysql> flush privileges; + mysql> quit; + +现在我们需要在 MySQL 默认的配置文件中配置所需的数据库参数。打开 "/etc/" 目录中的文件并按照下面那样更新。 + + [root@centos-007 ~]# vi /etc/my.cnf + +---------- + + [mysqld] + datadir=/var/lib/mysql + socket=/var/lib/mysql/mysql.sock + symbolic-links=0 + + sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES + max_allowed_packet=150M + max_connections=600 + default-storage-engine=INNODB + character-set-server=utf8 + collation-server=utf8_unicode_ci + + [mysqld_safe] + log-error=/var/log/mysqld.log + pid-file=/var/run/mysqld/mysqld.pid + default-storage-engine=INNODB + character-set-server=utf8 + collation-server=utf8_unicode_ci + + [mysql] + max_allowed_packet = 150M + [mysqldump] + quick + +保存配置文件中的更新并重启 mysql 服务。 + + [root@centos-007 ~]# service mysqld restart + +### 下载 Zephyr 安装包 ### + +我们已经安装完了安装 Zephyr 所需要的软件包。现在我们需要获取 Zephyr 二进制发布包和它的许可证密钥。到 Zephyr 官方下载链接 [http://download.yourzephyr.com/linux/download.php](http://download.yourzephyr.com/linux/download.php) 输入你的电子邮件 ID 并点击下载。 + +![下载 Zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png) + +然后确认你的电子邮件地址,你会获得 Zephyr 下载链接和它的许可证密钥链接。点击提供的链接从服务器中选择和你操作系统合适的版本下载二进制安装包以及许可证文件。 + +我们把它下载到 home 目录并更改它的权限为可执行。 + +![Zephyr 二进制包](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png) + +### 开始安装和配置 Zephyr ### + +现在我们通过执行它的二进制安装脚本开始安装 Zephyr。 + + [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh –c + +一旦你运行了上面的命令,它会检查是否正确配置了 Java 环境变量。如果配置不正确,你可能会看到类似下面的错误。 + + testing JVM in /usr ... + Starting Installer ... + Error : Either JDK is not found at expected locations or JDK version is mismatched. + Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher. 
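+
+下面给出一个简单的修复示例(假设 JDK 如前文一样安装在 /usr/java/default 下):在当前 shell 中重新导出 JAVA_HOME 和 PATH,然后再次运行安装脚本:
+
+    [root@centos-007 ~]# export JAVA_HOME=/usr/java/default
+    [root@centos-007 ~]# export PATH=$JAVA_HOME/bin:$PATH
+    [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh -c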
+ +如果你正确配置了 Java,它会开始安装 Zephyr 并要求你输入 “o” 继续或者输入 “c” 取消安装。让我们敲击 “o” 并输入回车键开始安装。 + +![安装 zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png) + +下一个选项是检查安装 Zephyr 所有的要求,输入回车进入下一个选项。 + +![zephyr 要求](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png) + +输入 “1” 并回车同意许可证协议。 + + I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter] + +我们需要选择安装 Zephyr 合适的目标位置以及默认端口,如果你想用默认端口之外的其它端口,也可以在这里配置。 + +![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png) + +然后自定义 mysql 数据库参数并给出配置文件的正确路径。在这一步你可能看到类似下面的错误。 + + Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M). + +要消除这个错误,你要确保在 mysql 配置文件中正确配置了 "max\_connection" 和 "max\_allowed\_packet" 参数。运行所示的命令连接到数据库确认这些设置。 + +![连接 mysql](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png) + +当你正确配置了 mysql 数据库,它会提取配置文件并完成安装。 + +![配置 mysql](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png) + +安装过程在你的计算机上成功的安装了 Zephyr 4.7。要启动 Zephyr 桌面,输入 “y” 完成 Zephyr 安装。 + +![启动 zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png) + +### 启动 Zephyr 桌面 ### + +打开你的 web 浏览器并用你的本机 IP 地址启动 Zephyr 桌面,你会被导向 Zephyr 桌面。 + + http://your_server_IP/zephyr/desktop/ + +![Zephyr 桌面](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png) + +从 Zephyr 仪表盘点击 "Test Manager" 并用默认的用户名和密码 "test.manager" 登录。 + +![Test Manage 登录](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png) + +你登录进去后你就可以配置你的管理设置了。根据你的环境选择你想要的设置。 + +![Test Manage 管理](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png) + +完成管理设置后保存设置,资源管理和项目配置也类似,然后开始使用 Zephyr 作为你的测试管理工具吧。如图所示在 Department Dashboard Management 中检查和编辑管理设置状态。 + +![zephyr 仪表盘](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard.png) + +### 总结 ### + +好了! 
我们已经在 CentOS 7.1 上安装完了 Zephyr。我们希望你能更加深入了解 Zephyr 测试管理工具,它提供简化测试流程、允许快速访问数据分析、协作工具以及多个项目成员之间交流。如果在你的环境中遇到任何问题,欢迎和我们联系。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/ + +作者:[Kashif Siddique][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ \ No newline at end of file diff --git a/translated/tech/20151027 How to Install Ghost with Nginx on FreeBSD 10.2.md b/translated/tech/20151027 How to Install Ghost with Nginx on FreeBSD 10.2.md new file mode 100644 index 0000000000..f8d78c88f9 --- /dev/null +++ b/translated/tech/20151027 How to Install Ghost with Nginx on FreeBSD 10.2.md @@ -0,0 +1,296 @@ +如何在 FreeBSD 10.2 上安装使用 Nginx 的 Ghost +================================================================================ +Node.js 是用于开发服务器端应用程序的开源运行时环境。Node.js 应用使用 JavaScript 编写,能在任何有 Node.js 运行时的服务器上运行。它跨平台支持 Linux、Windows、OSX、IBM AIX,也包括 FreeBSD。Node.js 是 Ryan Dahl 以及在 Joyent 工作的其他开发者于 2009 年创建的。它的设计目标就是构建可扩展的网络应用程序。 + +Ghost 是使用 Node.js 编写的博客平台。它不仅开源,而且有很漂亮的界面设计、对用户友好并且免费。它允许你快速地在网络上发布内容,或者创建你的混合网站。 + +在这篇指南中我们会在 FreeBSD 上安装使用 Nginx 作为 web 服务器的 Ghost。我们会在 FreeBSD 10.2 上安装 Node.js、Npm、nginx 和 sqlite3。 + +### 第一步 - 安装 Node.js npm 和 Sqlite3 ### + +如果你想在你的服务器上运行 ghost,你必须安装 node.js。在这一部分,我们会从 freebsd 移植软件库中安装 node.js,请进入库目录 "/usr/ports/www/node" 并通过运行命令 "**make**" 安装。 + + cd /usr/ports/www/node + make install clean + +如果你已经安装了 node.js,那就进入到 npm 目录并安装它。**npm** 是用于安装、发布和管理 node 程序的软件包管理器。 + + cd /usr/ports/www/npm/ + make install clean + +下一步,安装 sqlite3。默认情况下 ghost 使用 sqlite3 作为数据库系统,但它也支持 mysql/mariadb 和 postgresql。我们会使用 sqlite3 作为默认数据库。 + + cd /usr/ports/databases/sqlite3/ + make install clean + +如果安装完了所有软件,还有检查 node.js 和 npm 的版本: + + node --version + v0.12.6 + + npm --version + 2.11.3 + + sqlite3 --version + 3.8.10.2 + +![node 和 npm 版本](http://blog.linoxide.com/wp-content/uploads/2015/10/node-and-npm-version.png) + +### 第二步 - 添加 Ghost 用户 ### + +我们会以普通用户 "**ghost**" 身份安装和运行 ghost。用 "adduser" 命令添加新用户: + + adduser ghost + FILL With Your INFO + +![添加用户 Ghost](http://blog.linoxide.com/wp-content/uploads/2015/10/Add-user-Ghost.png) + +### 第三步 - 安装 Ghost ### + +我们会把 ghost 安装到 "**/var/www/**" 目录,首先新建目录然后进入到安装目录: + + mkdir -p /var/www/ + cd /var/www/ + +用 wget 命令下载最新版本的 ghost: + + wget --no-check-certificate https://ghost.org/zip/ghost-latest.zip + +把它解压到 "**ghost**" 目录: + + unzip -d ghost ghost-latest.zip + +下一步,更改属主为 "**ghost**",我们会以这个用户安装和运行它。 + + chown -R ghost:ghost ghost/ + +都做完了的话,通过输入以下命令切换到 "**ghost**" 用户: + + su - ghost + +然后进入到安装目录"/var/www/ghost/": + + cd /var/www/ghost/ + +在安装 ghost 之前,我们需要为 node.js 安装 sqlite3 模块,用 npm 命令安装: + + setenv CXX c++ ; npm install sqlite3 --sqlite=/usr/local + +**注意: 以 “ghost” 用户运行,而不是 root 用户。** + +现在,我们准备好安装 ghost 了,用 npm 命令安装: + + npm install --production + +下一步,复制配置文件 "config.example.js" 为 "**config.js**",用 nano 编辑器编辑: + + cp config.example.js config.js + nano -c config.js + +更改 server 模块的第 25 行: + + host: '0.0.0.0', + +保存并退出。 + +现在用下面的命令运行 ghost: + + npm start --production + +通过访问服务器 ip 和 2368 号端口验证。 + +![Ghost 安装完成](http://blog.linoxide.com/wp-content/uploads/2015/10/Ghost-Installed.png) + +以 “ghost” 用户在 "/var/www/ghost" 目录安装了 ghost。 + +### 第四步 - 作为 FreeBSD 服务运行 Ghost ### + +要在 freebsd 上以服务形式运行应用,你需要在 rc.d 目录添加脚本。我们会在 "**/usr/local/etc/rc.d/**" 目录为 ghost 创建新的服务脚本。 + +在创建服务脚本之前,为了以服务形式运行 
ghost,我们需要安装一个 node.js 模块,用 npm 命令以 **sudo/root** 权限安装 forever 模块: + + npm install forever -g + +现在进入到 rc.d 目录并创建名为 ghost 的新文件: + + cd /usr/local/etc/rc.d/ + nano -c ghost + +粘贴下面的服务脚本: + + #!/bin/sh + + # PROVIDE: ghost + # KEYWORD: shutdown + PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin" + + . /etc/rc.subr + + name="ghost" + rcvar="ghost_enable" + extra_commands="status" + + load_rc_config ghost + : ${ghost_enable:="NO"} + + status_cmd="ghost_status" + start_cmd="ghost_start" + stop_cmd="ghost_stop" + restart_cmd="ghost_restart" + + ghost="/var/www/ghost" + log="/var/log/ghost/ghost.log" + ghost_start() { + sudo -u ghost sh -c "cd $ghost && NODE_ENV=production forever start -al $log index.js" + } + + ghost_stop() { + sudo -u ghost sh -c "cd $ghost && NODE_ENV=production forever stop index.js" + } + + ghost_status() { + sudo -u ghost sh -c "NODE_ENV=production forever list" + } + + ghost_restart() { + ghost_stop; + ghost_start; + } + + run_rc_command "$1" + +保存并退出。 + +下一步,给 ghost 服务脚本添加可执行权限: + + chmod +x ghost + +为 ghost 日志创建新的目录和文件,并把属主修改为 ghost 用户: + + mkdir -p /var/www/ghost/ + touch /var/www/ghost/ghost.log + chown -R /var/www/ghost/ + +最后,如果你想运行 ghost 服务,你需要用 sysrc 命令添加 ghost 服务到开机启动应用程序: + + sysrc ghost_enable=yes + +用以下命令启动 ghost: + + service ghost start + +其它命令: + + service ghost stop + service ghost status + service ghost restart + +![Ghost 服务命令](http://blog.linoxide.com/wp-content/uploads/2015/10/Ghost-service-command.png) + +### 第五步 - 为 Ghost 安装和配置 Nginx ### + +默认情况下,ghost 会以单机模式运行,你可以不用 Nginx、apache 或 IIS web 服务器直接运行它。但在这篇指南中我们会安装和配置 nginx 和 ghost 一起使用。 + +用 pkg 命令从 freebsd 库中安装 nginx: + + pkg install nginx + +下一步,进入 nginx 配置目录并为 virtualhost 配置创建新的目录。 + + cd /usr/local/etc/nginx/ + mkdir virtualhost/ + +进入 virtualhost 目录,用 nano 编辑器创建名为 ghost.conf 的新文件: + + cd virtualhost/ + nano -c ghost.conf + +粘贴下面的 virtualhost 配置: + + server { + listen 80; + + #Your Domain + server_name ghost.me; + + location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)$ { + access_log off; + expires 30d; + add_header Pragma public; + add_header Cache-Control "public, mustrevalidate, proxy-revalidate"; + proxy_pass http://127.0.0.1:2368; + } + + location / { + add_header X-XSS-Protection "1; mode=block"; + add_header Cache-Control "public, max-age=0"; + add_header Content-Security-Policy "script-src 'self' ; font-src 'self' ; connect-src 'self' ; block-all-mixed-content; reflected-xss block; referrer no-referrer"; + add_header X-Content-Type-Options nosniff; + add_header X-Frame-Options DENY; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_pass http://127.0.0.1:2368; + } + + location = /robots.txt { access_log off; log_not_found off; } + location = /favicon.ico { access_log off; log_not_found off; } + + location ~ /\.ht { + deny all; + } + + } + +保存并退出。 + +要启用 virtualhost 配置,你需要把那个文件添加到 **nginx.conf**。进入 nginx 配置目录并编辑 nginx.conf 文件: + + cd /usr/local/etc/nginx/ + nano -c nginx.conf + +在最后一行的前面,包含 virtualhost 配置目录: + + [......] 
+ + include virtualhost/*.conf; + + } + +保存并退出。 + +用命令 "**nginx -t**" 测试 nginx 配置,如果没有错误,用 sysrc 添加 nginx 到开机启动: + + sysrc nginx_enable=yes + +并启动 nginx: + + service nginx start + +现在测试所有 nginx 和 virtualhost 配置。请打开你的浏览器并输入: ghost.me + +![ghost.me 成功运行](http://blog.linoxide.com/wp-content/uploads/2015/10/ghost.me-successfully.png) + +Ghost.me 正在成功运行。 + +如果你想要检查 nginx 服务器,可以使用 "**curl**" 命令。 + +![测试 ghost 和 nginx](http://blog.linoxide.com/wp-content/uploads/2015/10/ghost-and-nginx-test.png) + +Ghost 正在 nginx 上运行。 + +### 总结 ### + +Node.js 是 Ryan Dahl 为创建和开发可扩展服务器端应用程序创建的运行时环境。Ghost 是使用 node.js 编写的开源博客平台,它有漂亮的外观设计并且易于使用。默认情况下,ghost 是可以单独运行的 web 应用程序,并不需要类似 apache、nginx 或 IIS 之类的 web 服务器,但我们也可以和 web 服务器集成(在这篇指南中使用 Nginx)。Sqlite 是 ghost 默认使用的数据库,它还支持 msql/mariadb 和 postgresql。Ghost 能快速部署并且易于使用和配置。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/install-ghost-nginx-freebsd-10-2/ + +作者:[Arul][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arulm/ \ No newline at end of file diff --git a/translated/tech/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md b/translated/tech/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md new file mode 100644 index 0000000000..9d25041535 --- /dev/null +++ b/translated/tech/20151027 How to Use SSHfs to Mount a Remote Filesystem on Linux.md @@ -0,0 +1,72 @@ +如何在 Linux 上使用 SSHfs 挂载一个远程文件系统 +================================================================================ +你有想通过安全 shell 挂载一个远程文件系统到本地的经历吗?如果有的话,SSHfs 也许就是你所需要的。它通过使用 SSH 和 Fuse(LCTT 译注:Filesystem in Userspace,用户态文件系统,是 Linux 中用于挂载某些网络空间,如 SSH,到本地文件系统的模块) 允许你挂载远程计算机(或者服务器)到本地。 + +**注意**: 这篇文章假设你明白[SSH 如何工作并在你的系统中配置 SSH][1]。 + +### 准备 ### + +在使用 SSHfs 挂载之前,需要进行一些设置 - 在你的系统上安装 SSHfs 以及 fuse 软件包。你还需要为 fuse 创建一个组,添加用户到组,并创建远程文件系统将会驻留的目录。 + +要在 Ubuntu Linux 上安装两个软件包,只需要在终端窗口输入以下命令: + + sudo apt-get install sshfs fuse + +![ubuntu 安装 sshfs-fuse](https://www.maketecheasier.com/assets/uploads/2015/10/sshfs-install-fuse-ubuntu.jpg) + +如果你使用的不是 Ubuntu,那就在你的发行版软件包管理器中搜索软件包名称。最好搜索和 fuse 或 SSHfs 相关的关键字,因为取决于你运行的系统,软件包名称可能稍微有些不同。 + +在你的系统上安装完软件包之后,就该创建 fuse 组了。在你安装 fuse 的时候,应该会在你的系统上创建一个组。如果没有的话,在终端窗口中输入以下命令以便在你的 Linux 系统中创建组: + + sudo groupadd fuse + +添加了组之后,把你的用户添加到这个组。 + + sudo gpasswd -a "$USER" fuse + +![sshfs 添加用户到组 fuse](https://www.maketecheasier.com/assets/uploads/2015/10/sshfs-add-user-to-fuse-group.png) + +别担心上面命令的 `$USER`。shell 会自动用你自己的用户名替换。处理了和组相关的事之后,就是时候创建要挂载远程文件的目录了。 + + mkdir ~/remote_folder + +在你的系统上创建了本地目录之后,就可以通过 SSHfs 挂载远程文件系统了。 + +### 挂载远程文件系统 ### + +要在你的机器上挂载远程文件系统,你需要在终端窗口中输入一段较长的命令。 + + sshfs -o idmap=user username@ip.address:/remote/file/system/ ~/remote + +![sshfs 挂载文件系统到本地目录1](https://www.maketecheasier.com/assets/uploads/2015/10/sshfs-mount-file-system-to-local-folder.png) + +**注意**: 也可以通过 SSH 密钥文件挂载 SSHfs 文件系统。只需要在上面的命中用 `sshfs -o IdentityFile=~/.ssh/keyfile`, 替换 `sshfs -o idmap=user` 部分。 + +输入这个命令之后,会提示你输入远程用户的密码。如果登录成功了,你的远程文件系统就会被挂载到之前创建的 `~/remote_folder` 目录。 + +![sshfs挂载文件系统到本地目录2](https://www.maketecheasier.com/assets/uploads/2015/10/sshfs-mount-file-system-to-local-folder-2.jpg) + +使用完了你的远程文件系统,想要卸载它?容易吗?只需要在终端输入下面的命令: + + sudo umount ~/remote_folder + +这个简单的命令会断开远程连接同时清空 remote_folder 目录。 + +### 总结 ### + +在 Linux 上有很多工具可以用于访问远程文件并挂载到本地。如之前所说,如果有的话,也只有很少的工具能充分利用 SSH 的强大功能。我希望在这篇指南的帮助下,也能认识到 SSHfs 是一个多么强大的工具。 + +你觉得 
SSHfs 怎么样呢?在线的评论框里告诉我们吧! + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/sshfs-mount-remote-filesystem-linux/ + +作者:[Derrik Diener][a] +译者:[ictlyh](http://mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/derrikdiener/ +[1]:https://www.maketecheasier.com/setup-ssh-ubuntu/ \ No newline at end of file diff --git a/translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md b/translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md new file mode 100644 index 0000000000..941d3a2b72 --- /dev/null +++ b/translated/tech/20151030 How To Install FreeBSD on Raspberry Pi 2 Model B.md @@ -0,0 +1,98 @@ + +如何在树莓派2 B型上安装 FreeBSD +================================================================================ + +在树莓派2 B型上如何安装 FreeBSD 10 或 FreeBSD 11(current)?怎么在 Linux,OS X,FreeBSD 或类 Unix 操作系统上烧录 SD 卡? + +在树莓派2 B型上安装 FreeBSD 10或 FreeBSD 11(current)很容易。使用 FreeBSD 操作系统可以打造一个非常易用的 Unix 服务器。FreeBSD-CURRENT 自2012年十一月以来一直支持树莓派,2015年三月份后也开始支持树莓派2了。在这个快速教程中我将介绍如何在 RPI2 上安装 FreeBSD 11 current arm 版。 + +### 1. 下载 FreeBSD-current 的 arm 镜像 ### + +你可以 [访问这个页面来下载][1] 树莓派2的镜像。使用 wget 或 curl 命令来下载镜像: + + + $ wget ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv6/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz + +或 + + $ curl -O ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv6/ISO-IMAGES/11.0/FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz + +### 2. 解压 FreeBSD-current 镜像 ### + +执行以下命令中的任何一个: + + $ unxz FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz + +或 + + $ xz --decompress FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img.xz + +### 3. 设置 SD ### + +你可以在 OS X,Linux,FreeBSD,MS-Windows 和类 Unix 系统来烧录 SD 卡。 + +### 在 Mac OS X 下烧录 FreeBSD-current ### + +使用下面的 dd 命令: + + $ diskutil list + $ diskutil unmountDisk /dev/diskN + $ sudo dd if=FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img of=/dev/disk2 bs=64k + +示例输出: + + 1024+0 records in + 1024+0 records out + 1073741824 bytes transferred in 661.669584 secs (1622776 bytes/sec) + +#### 使用 Linux/FreeBSD 或者 类 Unix 系统来烧录 FreeBSD-current #### + +语法是这样: + + $ dd if=FreeBSD-11.0-CURRENT-arm-armv6-RPI2-20151016-r289420.img of=/dev/sdb bs=1M + +确保使用实际 SD 卡的设备名称来替换 /dev/sdb 。 + +### 4. 引导 FreeBSD ### + +在树莓派2 B型上插入 SD 卡。你需要连接键盘,鼠标和显示器。我使用的是 USB 转串口线来连接显示器的: + +![Fig.01 RPi USB based serial connection](http://s0.cyberciti.org/uploads/faq/2015/10/Raspberry-Pi-2-Model-B.pin-out.jpg) + + +图01 RPI 基于 USB 的串行连接 + +在下面的例子中,我使用 screen 命令来连接我的 RPI: + + ## Linux version ## + screen /dev/tty.USB0 115200 + + ## OS X version ## + screen /dev/cu.usbserial 115200 + + ## Windows user use Putty.exe ## + +FreeBSD RPI 启动输出样例: + +![Gif 01: Booting FreeBSD-current on RPi 2](http://s0.cyberciti.org/uploads/faq/2015/10/freebsd-current-rpi.gif) + +图01: 在 RPi 2上引导 FreeBSD-current + +### 5. 
FreeBSD 在 RPi 2上的用户名和密码 ### + +默认的密码是 freebsd/freebsd 和 root/root。 + +到此为止, FreeBSD-current 已经安装并运行在 RPi 2上。 + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/how-to-install-freebsd-on-raspberry-pi-2-model-b/ + +作者:[Vivek Gite][a] +译者:[译者ID](https://github.com/译者ID) +校对:[strugglingyouth](https://github.com/strugglingyouth) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.cyberciti.biz/tips/about-us +[1]:ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv6/ISO-IMAGES/11.0 diff --git a/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md b/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md new file mode 100644 index 0000000000..2948e8de61 --- /dev/null +++ b/translated/tech/20151104 How to Create New File Systems or Partitions in the Terminal on Linux.md @@ -0,0 +1,91 @@ + +如何在 Linux 终端下创建新的文件系统/分区 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-feature-image.png) + +在 Linux 中创建分区或新的文件系统通常意味着一件事:安装 Gnome Parted 分区编辑器(GParted)。对于大多数 Linux 用户而言,这是唯一的办法。不过,你是否考虑过在终端创建这些分区和文件系统?当然可以!以下就是方法! + +### 使用 CFdisk 创建一个基本的 Linux 分区 ### + +以下是如何在命令行中创建一个基本的 Linux 分区的正确方案。要做的第一件事就是先打开你的终端。若你已打开,你需要找到你想要创建分区的磁盘。这可以使用一个简单的命令来找到。 + + lsblk + +![cfdisk-lsblk](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-lsblk.png) + + +一旦你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。 + +在终端输入这个命令。它会显示一个功能强大的基于终端的分区编辑程序。 + + sudo cfdisk /dev/sdb + +![cfdisk-empty-layout](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-empty-layout.png) + +**注意**: 使用在 `lsblk` 命令输出的你想要使用的磁盘来替换 `sdb`。 + +当输入此命令后,你将进入分区编辑器中,然后访问你想改变的磁盘。 + +Since hard drive partitions are different, depending on a user’s needs, this part of the guide will go over **how to set up a split Linux home/root system layout**. 
+ +由于磁盘分区的不同,这取决于用户的需求,这部分的指南将在 **如何建立一个分布的 Linux home/root 文件分区**。 + +首先,需要创建根分区。这需要根据磁盘的字节数来进行分割。我测试的磁盘是 32 GB。 + +在 CFdisk 中使用键盘上的方向键选择需要分配的空间。你找到后,请使用箭头键选择 [ NEW ],然后按 Enter 键。 + +![cfdisk-create-root-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-root-partition.png) + +该程序会要求你输入分区大小。一旦你指定好大小后,按 Enter 键。这将被称为根分区(或 /dev/sdb1)。 + +接下来该创建用户分区(/dev/sdb2)了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你用户分区的大小,然后按 Enter 键来创建它。 + +![cfdisk-create-home-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-home-partition.png) + +最后,需要创建交换分区。像前两次一样,先找一些空闲分区,并使用箭头选择 [ NEW ] 选项。之后,算下你 Linux 想使用多大的交换分区。 + +**注意**: 交换分区通常和计算机的内存差不多大。 + +![cfdisk-specify-partition-type-swap](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-specify-partition-type-swap.png) + +现在,交换分区被创建了,该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。 + +![cfdisk-write-partition-table](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-write-partition-table.jpg) + +所有分区创建后。然后就是将其写入到磁盘。使用右箭头键,选择 [ WRITE ] 选项,然后按 Enter 键。这将直接将新创建的分布写入到磁盘中。 + +### 使用 mkfs 创建文件系统 ### + +有时候,你并不需要一个完整的分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。 + +![cfdisk-mkfs-list-partitions-lsblk](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-list-partitions-lsblk.png) + +首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想制作文件系统的分区或盘符。 + +在这个例子中,我将使用 `/dev/sdb1` 的第一个分区。只对 `/dev/sdb` 使用 mkfs(将会使用整个分区)。 + +![cfdisk-mkfs-make-file-system-ext4](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-make-file-system-ext4.png) + +要在一个特定的分区上创建新文件系统,只需输入 + + sudo mkfs.ext4 /dev/sdb1 + +在终端。应当指出的是,`mkfs.ext4` 可以将你指定的任何文件系统改变。 + +### 结论 ### + +虽然使用图形工具编辑文件系统和分区更容易,但终端可以说是更有效的。终端的加载速度更快,点击几个按钮即可。GParted 和其它工具一样,它也是一个完整的工具。我希望在本教程的帮助下,你会明白如何在终端中高效的编辑文件系统。 + +你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?为什么或为什么不?在下面告诉我们! + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/create-file-systems-partitions-terminal-linux/ + +作者:[Derrik Diener][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/derrikdiener/