mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-25 23:11:02 +08:00
commit
6f1bca3d49
学习数据结构与算法分析如何帮助您成为更优秀的开发人员
================================================================================

> "相较于其它方式,我一直热衷于推崇围绕数据设计代码,我想这也是Git能够如此成功的一大原因[…]在我看来,区别程序员优劣的一大标准就在于他是否认为自己设计的代码还是数据结构更为重要。"

-- Linus Torvalds

---

> "优秀的数据结构与简陋的代码组合远比反之的组合更好。"

-- Eric S. Raymond, The Cathedral and The Bazaar

学习数据结构与算法分析会让您成为一名出色的程序员。

**数据结构与算法分析是一种解决问题的思维模式。** 在您的个人知识库中,数据结构与算法分析的相关知识储备越多,您就越有能力应对并解决各类繁杂的问题。掌握了这种思维模式,您还将有能力针对新问题提出更多以前想不到的漂亮的解决方案。

您将*更深入地*了解,计算机如何完成各项操作。无论您是否直接使用给定的算法,它都影响着您作出的各种技术决定。从计算机操作系统的内存分配到RDBMS的内在工作机制,以及网络协议如何实现将数据从地球的一个角落发送至另一个角落,这些大大小小的工作的完成,都离不开基础的数据结构与算法。理解并掌握它们,将会让您更了解计算机的运作机理。

对算法广泛深入的学习,能为您储备应对一大类问题的解决方案。之前建模困难的问题,如今通常都能纳入经典的数据结构而得到很好的解决。即使是最基础的数据结构,只要对它进行足够深入的钻研,您就会发现在每天的编程任务中都能经常用到这些知识。

有了这种思维模式,在遇到模棱两可的问题时,您将能够想出新奇的解决方案。即使最初并没有打算用数据结构与算法来解决相应的问题,当真正用它们解决这些问题时,您会发现它们非常有用。要意识到这一点,您至少要对数据结构与算法分析的基础知识有深入直观的认识。

理论认识就讲到这里,让我们一起看看下面几个例子。

### 最短路径问题 ###

我们想要开发一个软件来计算从一个国际机场出发到另一个国际机场的最短距离。假设我们受限于以下路线:

![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg)

从这张画出机场各自之间的距离以及目的地的图中,我们如何才能找到最短距离,比方说从赫尔辛基到伦敦?**[Dijkstra算法][3]**是能让我们在最短的时间得到正确答案的适用算法。

在所有可能的解法中,如果您曾经遇到过这类问题,知道可以用Dijkstra算法求解,您大可不必从零开始实现它,只需***知道***该算法,就能借助现成的代码库解决相关的实现问题。

如果您深入到该算法的实现中,您将深入理解一项著名的重要图论算法。您会发现实际上该算法比较消耗资源,因此名为[A*][4]的扩展经常用于代替该算法。这个算法应用广泛,从机器人寻路的功能实现到TCP数据包路由,以及GPS寻径问题都能应用到这个算法。
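
为了更直观地说明,下面用 Python 给出一个基于优先队列的 Dijkstra 算法最小示意实现。注意:其中的机场与距离数据是为演示而假设的,并非原文图中的数值:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {节点: {邻居: 距离}};返回 (最短距离, 途经节点列表)。"""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]          # 优先队列,按当前已知最短距离出队
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == goal:         # 到达目标,沿 prev 回溯出路径
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []      # 目标不可达

# 机场间距离(假设的示意数据)
routes = {
    "赫尔辛基": {"斯德哥尔摩": 400, "柏林": 1100},
    "斯德哥尔摩": {"哥本哈根": 520},
    "柏林": {"伦敦": 930},
    "哥本哈根": {"伦敦": 960},
}
print(dijkstra(routes, "赫尔辛基", "伦敦"))
```

实际项目中通常不必自己动手实现,多数语言都有现成的图算法库可用。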

### 先后排序问题 ###

您想要在开放式在线课程(MOOC,Massive Open Online Courses)平台上(如Udemy或Khan学院)学习某课程,有些课程之间彼此依赖。例如,用户学习牛顿力学(Newtonian Mechanics)课程前必须先修微积分(Calculus)课程,课程之间可以有多种依赖关系。用YAML表述举例如下:

    # Mapping from course name to requirements
    #
    astrophysics: [radioactivity, calculus]
    quantumn_mechanics: [atomic_physics, radioactivity, calculus]

鉴于以上这些依赖关系,作为一名用户,我希望系统能帮我列出必修课列表,让我在之后可以选择任意一门课程学习。如果我选择了微积分(calculus)课程,我希望系统能返回以下列表:

    arithmetic -> algebra -> trigonometry -> calculus

这里有两个潜在的重要约束条件:

- 返回的必修课列表中,每门课都与下一门课存在依赖关系
- 我们不希望列表中有任何重复课程

这是解决数据间依赖关系的例子,解决该问题的排序算法称作拓扑排序算法(tsort,topological sort)。它适用于解决上述我们用YAML列出的依赖关系图的情况,以下是在图中显示的相关结果(其中箭头代表`需要先修的课程`):

![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg)

这符合我们上面描述的需求,用户只需选出`radioactivity`,就能得到在此之前所有必修课程的有序列表。

在运用该排序算法之前,我们甚至不需要深入了解算法的实现细节。一般来说,你可能选择的各种编程语言在其标准库中都会有相应的算法实现。即使最坏的情况,Unix也会默认安装`tsort`程序,运行`man tsort`来了解该程序。
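
上面的课程依赖问题本身也可以用几行 Python 勾勒出来。下面是一个深度优先拓扑排序的示意实现,假设依赖图无环,且 `deps` 中只列出了原文提到的那条依赖链:

```python
def prerequisites(deps, course):
    """deps: {课程: [先修课程]};返回包含 course 在内的有序必修课列表。"""
    order, seen = [], set()

    def visit(c):
        if c in seen:            # 每门课只访问一次,避免重复项
            return
        seen.add(c)
        for pre in deps.get(c, []):
            visit(pre)           # 先输出所有先修课
        order.append(c)

    visit(course)
    return order

deps = {
    "algebra": ["arithmetic"],
    "trigonometry": ["algebra"],
    "calculus": ["trigonometry"],
}
print(" -> ".join(prerequisites(deps, "calculus")))
# 输出: arithmetic -> algebra -> trigonometry -> calculus
```

对 `calculus` 调用它,得到的正是原文期望的必修课顺序;生产环境中还需要检测环形依赖,这里为保持简短而省略了。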

### 其它拓扑排序适用场合 ###

- **类似`make`的工具**,可以让您声明任务之间的依赖关系,这里拓扑排序算法将从底层实现具有依赖关系的任务顺序执行的功能。
- **具有`require`指令的编程语言**,适用于要运行当前文件需先运行另一个文件的情况。这里拓扑排序用于识别文件运行顺序,以保证每个文件只加载一次,且满足所有文件间的依赖关系要求。
- **带有甘特图的项目管理工具**。甘特图能直观列出给定任务的所有依赖关系,在这些依赖关系之上能提供给用户任务完成的预估时间。我不常用到甘特图,但这些绘制甘特图的工具很可能会用到拓扑排序算法。

### 霍夫曼编码实现数据压缩 ###

[霍夫曼编码][5](Huffman coding)是一种用于无损数据压缩的编码算法。它的工作原理是先分析要压缩的数据,再为每个字符创建一个二进制编码。字符出现得越频繁,为它赋予的编码就越短。因此在一个数据集中`e`可能会编码为`111`,而`x`会编码为`10010`。创建了这种编码模式之后,即使将编码不加定界符地串联起来,也能正确地进行解码。

在gzip中使用的DEFLATE算法就结合了霍夫曼编码与LZ77,一同用于实现数据压缩功能。gzip应用领域很广,特别适用于文件压缩(以`.gz`为扩展名的文件)以及用于数据传输中的http请求与应答。
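
下面是霍夫曼编码构建过程的一个最小 Python 示意:反复合并频率最低的两棵子树,为左右分支分别加上 0/1 前缀。它只生成编码表,不含序列化与解码;示例文本是随意选的:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """为 text 中的每个字符生成霍夫曼编码:频率越高,编码越短。"""
    freq = Counter(text)
    # 堆元素: (频率, 序号, {字符: 目前的编码});序号用于避免比较字典
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # 频率最低的两棵子树
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("beekeepers keep bees")
print(codes["e"], codes["r"])   # 'e' 出现最频繁,编码最短
```

由于频率相同的子树合并顺序不唯一,具体编码可能不同,但任何一个编码都不会是另一个编码的前缀,这正是不加定界符也能解码的原因。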

- 您会理解为什么较大的压缩文件会获得较好的整体压缩效果(如压缩的越多,压缩率也越高)。这也是SPDY协议得以推崇的原因之一:在复杂的HTTP请求/响应过程中,数据有更好的压缩效果。
- 您会了解数据传输过程中如果想要压缩JavaScript/CSS文件,运行压缩软件是完全没有意义的。PNG文件也是类似,因为它们已经使用DEFLATE算法完成了压缩。
- 如果您试图强行破译加密的信息,您可能会发现,由于重复数据压缩质量更好,密文给定位的数据压缩率将帮助您确定相关的[分组密码工作模式][6](block cipher mode of operation)。

### 下一步选择学习什么是困难的 ###

作为一名程序员应当做好持续学习的准备。为了成为一名web开发人员,您需要了解标记语言以及Ruby/Python、正则表达式、SQL、JavaScript等高级编程语言,还需要了解HTTP的工作原理、如何运行UNIX终端以及面向对象的编程艺术。您很难有效地预览到未来的职业全景,因此选择下一步要学习哪些知识是困难的。

我没有快速学习的能力,因此我不得不在时间花费上非常谨慎。我希望尽可能地学习到有持久生命力的技能,即不会在几年内就过时的技术。这意味着我也会犹豫这周是要学习JavaScript框架还是那些新的编程语言。

--------------------------------------------------------------------------------

via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithm

作者:[Happy Bear][a]
译者:[icybreaker](https://github.com/icybreaker)
校对:[Caroline](https://github.com/carolinewuyan)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.happybearsoftware.com/
[3]:http://en.wikipedia.org/wiki/Dijkstra's_algorithm
[4]:http://en.wikipedia.org/wiki/A*_search_algorithm
[5]:http://en.wikipedia.org/wiki/Huffman_coding
[6]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation

那些奇特的 Linux 发行版本
================================================================================

从大多数消费者所关注的诸如 Ubuntu、Fedora、Mint 或 elementary OS,到更加复杂、轻量级和企业级的诸如 Slackware、Arch Linux 或 RHEL,这些发行版本我都已经见识过了。除了这些,难道没有其他别的了吗?其实 Linux 的生态系统是非常多样化的,对每个人来说,总有一款适合你。下面就让我们讨论一些稀奇古怪的小众 Linux 发行版本吧,它们代表着开源平台真正的多样性。

### Puppy Linux

![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)

它是一个仅有普通 DVD 光盘容量十分之一大小的操作系统,这就是 Puppy Linux。整个操作系统仅有 100MB 大小!并且它还可以从内存中运行,这使得它运行极快,即便是在老式的 PC 机上。在操作系统启动后,你甚至可以移除启动介质!还有什么比这个更好的吗?系统所需的资源极小,大多数的硬件都会被自动检测到,并且它预装了能够满足你基本需求的软件。[在这里体验 Puppy Linux 吧][1]。

### Suicide Linux(自杀 Linux)

![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)

这个名字吓到你了吗?我想应该是。“任何时候 - 注意是任何时候 - 一旦你输入不正确的命令,解释器都会创造性地将它重定向为 `rm -rf /` 命令,然后擦除你的硬盘”。它就是这么简单。我真的很想知道谁自信到将 [Suicide Linux][2] 安装到生产机上。**警告:千万不要在生产机上尝试这个!** 假如你感兴趣的话,现在可以通过一个简洁的 [DEB 包][3]来获取到它。

### PapyrOS

![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)

它的“奇怪”是好的方面。PapyrOS 正尝试着将 Android 的 Material Design 设计语言引入到新的 Linux 发行版本上。尽管这个项目还处于早期阶段,看起来它已经很有前景。该项目的网页上说该系统已经完成了 80%,随后人们可以期待它的第一个 Alpha 发行版本。在该项目被宣告提出时,我们做了 [PapyrOS][4] 的小幅报道,从它的外观上看,它甚至可能会引领潮流。假如你感兴趣的话,可在 [Google+][5] 上关注该项目,并可通过 [BountySource][6] 来贡献出你的力量。

### Qubes OS

![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)

Qubes 是一个开源的操作系统,其设计通过使用[安全分级(Security by Compartmentalization)][14]的方法来提供强安全性。其前提假设是不存在完美的、没有 bug 的桌面环境。通过实现一种“安全隔离(Security by Isolation)”的方法,[Qubes Linux][7] 试图去解决这些问题。Qubes 基于 Xen、X 视窗系统和 Linux,并可运行大多数的 Linux 应用,支持大多数的 Linux 驱动。Qubes 入选了 Access Innovation Prize 2014 for Endpoint Security Solution 决赛名单。

### Ubuntu Satanic Edition

![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)

Ubuntu SE 是一个基于 Ubuntu 的发行版本。通过一个含有主题、壁纸甚至来源于某些天才新晋艺术家的重金属音乐的综合软件包,“它同时带来了最好的自由软件和免费的金属音乐”。尽管这个项目看起来不再积极开发了,Ubuntu Satanic Edition 甚至在其名字上都显得奇异。[Ubuntu SE (Slightly NSFW)][8]。

### Tiny Core Linux

![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)

Puppy Linux 还不够小?试试这个吧。Tiny Core Linux 是一个 12MB 大小的图形化 Linux 桌面!是的,你没有看错。一个主要的补充说明:它不是一个完整的桌面,也并不完全支持所有的硬件。它只含有能够启动进入一个非常小巧的 X 桌面、支持有线网络连接的核心部件。它甚至还有一个名为 Micro Core Linux 的没有 GUI 的版本,仅有 9MB 大小。[Tiny Core Linux][9]。

### NixOS

![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)

它是一个资深用户所关注的 Linux 发行版本,有着独特的打包和配置管理方式。在其他的发行版本中,诸如升级的操作可能是非常危险的。升级一个软件包可能会引起其他包无法使用,而升级整个系统感觉还不如重新安装一个。对于那些你无法安全地测试一个配置更改会带来什么结果的情形,通常没有“重来”这个选项。在 NixOS 中,整个系统由 Nix 包管理器按照一个纯函数式的构建语言的描述来构建。这意味着构建一个新的配置并不会重写先前的配置。大多数其他的特色功能也遵循着这个模式。Nix 相互隔离地存储所有的软件包。有关 NixOS 的更多内容请看[这里][10]。

### GoboLinux

![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)

这是另一个非常奇特的 Linux 发行版本。它与其他系统如此不同的原因是它有着独特的重新整理的文件系统。它有着自己独特的子目录树,其中存储着所有的文件和程序。GoboLinux 没有专门的包数据库,因为其文件系统就是它的数据库。在某些方面,这类重整有些类似于 OS X 上所看到的功能。[GoboLinux][11]。

### Hannah Montana Linux

![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)

它是一个基于 Kubuntu 的 Linux 发行版本,它有着汉娜·蒙塔娜(Hannah Montana)主题的开机启动界面、KDM(KDE Display Manager)、图标集、ksplash、plasma、颜色主题和壁纸(真是抱歉)。[这是它的链接][12]。这个项目现在不再活跃了。

### RLSD Linux

它是一个极其精简、小巧、轻量和安全可靠的、基于 Linux 的文本界面操作系统。开发者称“它是一个独特的发行版本,提供一系列的控制台应用和自带的安全特性,对黑客或许有吸引力。”[RLSD Linux][13]。

我们还错过了某些更加奇特的发行版本吗?请让我们知晓吧。

--------------------------------------------------------------------------------

via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html

作者:Manuel Jose
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
[2]:http://qntm.org/suicide
[3]:http://sourceforge.net/projects/suicide-linux/files/
[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
[6]:https://www.bountysource.com/teams/papyros
[7]:https://www.qubes-os.org/
[8]:http://ubuntusatanic.org/
[9]:http://tinycorelinux.net/
[10]:https://nixos.org/
[11]:http://www.gobolinux.org/
[12]:http://hannahmontana.sourceforge.net/
[13]:http://rlsd2.dimakrasner.com/
[14]:https://en.wikipedia.org/wiki/Compartmentalization_(information_security)

用 screenfetch 和 linux_logo 显示带有酷炫 Linux 标志的基本硬件信息
================================================================================

想在屏幕上显示出你的 Linux 发行版的酷炫标志和基本硬件信息吗?不用找了,来试试超赞的 screenfetch 和 linux_logo 工具。

### 来看看 screenfetch 吧 ###

screenFetch 是一个能够在截屏中显示系统/主题信息的命令行脚本。它可以在 Linux、OS X、FreeBSD 以及其它的许多类 Unix 系统上使用。来自 man 手册的说明:

> 这个方便的 Bash 脚本可以用来生成那些漂亮的终端主题信息和用 ASCII 构成的发行版标志,就像如今你在别人的截屏里看到的那样。它会自动检测你的发行版并显示 ASCII 版的发行版标志,并且在右边显示一些有价值的信息。

#### 在 Linux 上安装 screenfetch ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/ubuntu-debian-linux-apt-get-install-screenfetch.jpg)

*图一:用 apt-get 安装 screenfetch*

#### 在 Mac OS X 上安装 screenfetch ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/apple-mac-osx-install-screenfetch.jpg)

*图二:用 brew 命令安装 screenfetch*

#### 在 FreeBSD 上安装 screenfetch ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/freebsd-install-pkg-screenfetch.jpg)

*图三:在 FreeBSD 用 pkg 安装 screenfetch*

#### 在 Fedora 上安装 screenfetch ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-dnf-install-screenfetch.jpg)

*图四:在 Fedora 22 用 dnf 安装 screenfetch*

#### 我该怎么使用 screenfetch 工具? ####

这是不同系统的输出:

![](http://s0.cyberciti.org/uploads/cms/2015/09/fedora-screenfetch.jpg)

*Fedora 上的 Screenfetch*

![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-osx.jpg)

*OS X 上的 Screenfetch*

![](http://s0.cyberciti.org/uploads/cms/2015/09/screenfetch-freebsd.jpg)

*FreeBSD 上的 Screenfetch*

![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-ubutnu-screenfetch-outputs.jpg)

*Debian 上的 Screenfetch*

#### 获取截屏 ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/debian-linux_logo.jpg)

*运行 linux_logo*

#### 等等,还有更多! ####

![](http://s0.cyberciti.org/uploads/cms/2015/09/linux-logo-fun.gif)

*动图1: linux_logo 和 bash 循环,既有趣又能发朋友圈耍酷*

### 获取帮助 ###

--------------------------------------------------------------------------------

via: http://www.cyberciti.biz/hardware/howto-display-linux-logo-in-bash-terminal

作者:Vivek Gite
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,9 +1,9 @@
|
||||
|
||||
在 Ubuntu 14.04 中配置 PXE 服务器
|
||||
在 Ubuntu 14.04 中配置 PXE 服务器
|
||||
================================================================================
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/09/pxe-featured.jpg)
|
||||
|
||||
PXE(Preboot Execution Environment--预启动执行环境)服务器允许用户从网络中启动 Linux 发行版并且可以同时在数百台 PC 中安装而不需要 Linux ISO 镜像。如果你客户端的计算机没有 CD/DVD 或USB 引导盘,或者如果你想在大型企业中同时安装多台计算机,那么 PXE 服务器可以帮你节省时间和金钱。
|
||||
PXE(Preboot Execution Environment--预启动执行环境)服务器允许用户从网络中启动 Linux 发行版并且可以不需要 Linux ISO 镜像就能同时在数百台 PC 中安装。如果你客户端的计算机没有 CD/DVD 或USB 引导盘,或者如果你想在大型企业中同时安装多台计算机,那么 PXE 服务器可以帮你节省时间和金钱。
|
||||
|
||||
在这篇文章中,我们将告诉你如何在 Ubuntu 14.04 配置 PXE 服务器。
|
||||
|
||||
@ -11,11 +11,11 @@ PXE(Preboot Execution Environment--预启动执行环境)服务器允许用
|
||||
|
||||
开始前,你需要先设置 PXE 服务器使用静态 IP。在你的系统中要使用静态 IP 地址,需要编辑 “/etc/network/interfaces” 文件。
|
||||
|
||||
1. 打开 “/etc/network/interfaces” 文件.
|
||||
打开 “/etc/network/interfaces” 文件.
|
||||
|
||||
sudo nano /etc/network/interfaces
|
||||
|
||||
作如下修改:
|
||||
作如下修改:
|
||||
|
||||
# 回环网络接口
|
||||
auto lo
|
||||
@ -43,23 +43,23 @@ DHCP,TFTP 和 NFS 是 PXE 服务器的重要组成部分。首先,需要更
|
||||
|
||||
### 配置 DHCP 服务: ###
|
||||
|
||||
DHCP 代表动态主机配置协议(Dynamic Host Configuration Protocol),并且它主要用于动态分配网络配置参数,如用于接口和服务的 IP 地址。在 PXE 环境中,DHCP 服务器允许客户端请求并自动获得一个 IP 地址来访问网络。
|
||||
DHCP 代表动态主机配置协议(Dynamic Host Configuration Protocol),它主要用于动态分配网络配置参数,如用于接口和服务的 IP 地址。在 PXE 环境中,DHCP 服务器允许客户端请求并自动获得一个 IP 地址来访问网络。
|
||||
|
||||
1. 编辑 “/etc/default/dhcp3-server” 文件.
|
||||
1、编辑 “/etc/default/dhcp3-server” 文件.
|
||||
|
||||
sudo nano /etc/default/dhcp3-server
|
||||
|
||||
作如下修改:
|
||||
作如下修改:
|
||||
|
||||
INTERFACES="eth0"
|
||||
|
||||
保存 (Ctrl + o) 并退出 (Ctrl + x) 文件.
|
||||
|
||||
2. 编辑 “/etc/dhcp3/dhcpd.conf” 文件:
|
||||
2、编辑 “/etc/dhcp3/dhcpd.conf” 文件:
|
||||
|
||||
sudo nano /etc/dhcp/dhcpd.conf
|
||||
|
||||
作如下修改:
|
||||
作如下修改:
|
||||
|
||||
default-lease-time 600;
|
||||
max-lease-time 7200;
|
||||
@ -74,29 +74,29 @@ DHCP 代表动态主机配置协议(Dynamic Host Configuration Protocol),
|
||||
|
||||
保存文件并退出。
|
||||
|
||||
3. 启动 DHCP 服务.
|
||||
3、启动 DHCP 服务.
|
||||
|
||||
sudo /etc/init.d/isc-dhcp-server start
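
原文中 dhcpd.conf 的其余配置在此被截断了。作为参考,下面给出一个常见的 PXE 场景下 dhcpd.conf 的配置示意:其中网段、地址池和服务器 IP 均为假设值,需按你自己的网络环境调整,不代表原文的具体配置:

```
default-lease-time 600;
max-lease-time 7200;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;   # 分配给客户端的地址池(假设值)
    option routers 192.168.1.1;          # 默认网关(假设值)
    next-server 192.168.1.10;            # TFTP 服务器(即本 PXE 服务器)的 IP(假设值)
    filename "pxelinux.0";               # PXE 引导文件
}
```

其中 `next-server` 和 `filename` 两项是 PXE 引导的关键:客户端获得 IP 后会按它们去 TFTP 服务器取引导文件。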

### 配置 TFTP 服务器: ###

TFTP 是一种文件传输协议,类似于 FTP,但它不用进行用户认证也不能列出目录。TFTP 服务器总是监听网络上的 PXE 客户端的请求。当它检测到网络中有 PXE 客户端请求 PXE 服务时,它将提供包含引导菜单的网络数据包。

1、配置 TFTP 时,需要编辑 “/etc/inetd.conf” 文件:

    sudo nano /etc/inetd.conf

作如下修改:

    tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot

保存文件并退出。

2、编辑 “/etc/default/tftpd-hpa” 文件:

    sudo nano /etc/default/tftpd-hpa

作如下修改:

    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/var/lib/tftpboot"
    RUN_DAEMON="yes"
    OPTIONS="-l -s /var/lib/tftpboot"

保存文件并退出。

3、使用 `xinetd` 让 boot 服务在每次系统开机时自动启动,并启动 tftpd 服务:

    sudo update-inetd --enable BOOT
    sudo service tftpd-hpa start

4、检查状态:

    sudo netstat -lu

### 配置 PXE 启动文件 ###

现在,你需要将 PXE 引导文件 “pxelinux.0” 放在 TFTP 根目录下。为 TFTP 创建目录结构,并从 “/usr/lib/syslinux/” 复制 syslinux 提供的所有引导程序文件到 “/var/lib/tftpboot/” 下,操作如下:

    sudo mkdir /var/lib/tftpboot
    sudo mkdir /var/lib/tftpboot/pxelinux.cfg

PXE 配置文件定义了 PXE 客户端启动时显示的菜单,它能引导并与 TFTP 服务器关联。默认情况下,当一个 PXE 客户端启动时,它会使用自己的 MAC 地址指定要读取的配置文件,所以我们需要创建一个包含可引导内核列表的默认文件。

编辑 PXE 服务器配置文件,使用有效的安装选项。

编辑 “/var/lib/tftpboot/pxelinux.cfg/default”:

    sudo nano /var/lib/tftpboot/pxelinux.cfg/default

作如下修改:

    DEFAULT vesamenu.c32
    TIMEOUT 100

### 为 PXE 服务器添加 Ubuntu 14.04 桌面启动镜像 ###

对于这一步需要 Ubuntu 内核和 initrd 文件。要获得这些文件,你需要 Ubuntu 14.04 桌面 ISO 镜像。你可以通过以下命令下载 Ubuntu 14.04 ISO 镜像到 /mnt 目录:

    cd /mnt
    sudo wget http://releases.ubuntu.com/14.04/ubuntu-14.04.3-desktop-amd64.iso

**注意**: 下载用的 URL 可能会改变,因为 ISO 镜像会进行更新。如果上面的网址无法访问,看看[这个网站][4],了解最新的下载链接。

挂载 ISO 文件,使用以下命令将所有文件复制到 TFTP 文件夹中:

### 将导出的 ISO 目录配置到 NFS 服务器上 ###

现在,你需要通过 NFS 协议来设置“安装源镜像(Installation Source Mirrors)”。你还可以使用 HTTP 和 FTP 来安装源镜像。在这里,我已经使用 NFS 输出 ISO 内容。

要配置 NFS 服务器,你需要编辑 “/etc/exports” 文件:

    sudo nano /etc/exports

作如下修改:

    /var/lib/tftpboot/Ubuntu/14.04/amd64 *(ro,async,no_root_squash,no_subtree_check)

保存文件并退出。为使更改生效,输出并启动 NFS 服务:

    sudo exportfs -a
    sudo /etc/init.d/nfs-kernel-server start

### 配置网络引导 PXE 客户端 ###

PXE 客户端可以是任何支持 PXE 网络引导的计算机系统。现在,你的客户端只需要在系统的 BIOS 中设置 “从网络引导(Boot From Network)” 选项就可以启动并安装 Ubuntu 14.04 桌面。

现在准备出发吧 - 用网络引导启动你的 PXE 客户端计算机,你现在应该看到一个子菜单,显示了我们创建的 Ubuntu 14.04 桌面的菜单项。

![pxe](https://www.maketecheasier.com/assets/uploads/2015/09/pxe.png)

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/configure-pxe-server-ubuntu/

作者:[Hitesh Jethva][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://en.wikipedia.org/wiki/Preboot_Execution_Environment
[2]:https://help.ubuntu.com/community/PXEInstallServer
[3]:https://www.flickr.com/photos/jhcalderon/3681926417/
[4]:http://releases.ubuntu.com/14.04/

如何使用 Quagga BGP(边界网关协议)路由器来过滤 BGP 路由
================================================================================

在[之前的文章][1]中,我们介绍了如何使用 Quagga 将 CentOS 服务器变成一个 BGP 路由器,也介绍了 BGP 对等体和前缀交换设置。在本教程中,我们将重点放在如何使用**前缀列表(prefix-list)**和**路由映射(route-map)**来分别控制数据注入和数据输出。

之前的文章已经说过,BGP 的路由判定是基于前缀的收取和前缀的广播。为避免错误的路由,你需要使用一些过滤机制来控制这些前缀的收发。举个例子,如果你的一个 BGP 邻居开始广播一个本不属于它们的前缀,而你也将错就错地接收了这些不正常前缀,并且也将它转发到网络上,这个转发过程会不断进行下去,永不停止(所谓的“黑洞”就这样产生了)。所以要确保这样的前缀不会被收到,或者不会转发到任何网络。要达到这个目的,你可以使用前缀列表和路由映射:前者是基于前缀的过滤机制,后者是更为常用的基于前缀的策略,可用于精调过滤机制。

上面的命令创建了名为“DEMO-PRFX”的前缀列表,只允许存在 192.168.0.0/23 这个前缀。

前缀列表的另一个强大功能是支持子网掩码区间,请看下面的例子:

    ip prefix-list DEMO-PRFX permit 192.168.0.0/23 le 24

这个命令创建的前缀列表包含在 192.168.0.0/23 和 /24 之间的前缀,分别是 192.168.0.0/23、192.168.0.0/24 和 192.168.1.0/24。运算符“le”表示小于等于,你也可以使用“ge”表示大于等于。

一个前缀列表语句可以有多个允许或拒绝操作。每个语句都自动或手动地分配有一个序列号。

如果存在多个前缀列表语句,则这些语句会按序列号顺序被依次执行。在配置前缀列表的时候,我们需要注意在所有前缀列表语句之后是**隐性拒绝**语句,就是说凡是不被明确允许的,都会被拒绝。

如果要设置成允许所有前缀,前缀列表语句设置如下:

    probability   Match portion of routes defined by percentage value
    tag           Match tag of route

如你所见,路由映射可以匹配很多属性,在本教程中匹配的是前缀。

    route-map DEMO-RMAP permit 10
     match ip address prefix-list DEMO-PRFX

可以看到,router-A 有 4 条路由前缀到达 router-B,而 router-B 只接收 3 条。查看一下范围,我们就能知道只有被路由映射允许的前缀才能在 router-B 上显示出来,其他的前缀一概丢弃。

**小提示**:如果接收前缀内容没有刷新,试试重置下 BGP 会话,使用这个命令:`clear ip bgp neighbor-IP`。本教程中命令如下:

    clear ip bgp 192.168.1.1

--------------------------------------------------------------------------------

via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html

作者:[Sarmed Rahman][a]
译者:[bazz2](https://github.com/bazz2)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/sarmed
[1]:https://linux.cn/article-4609-1.html
132
published/201510/20150716 Interview--Larry Wall.md
Normal file
132
published/201510/20150716 Interview--Larry Wall.md
Normal file
@ -0,0 +1,132 @@
|
||||
Larry Wall 专访——语言学、Perl 6 的设计和发布
|
||||
================================================================================
|
||||
|
||||
> 经历了15年的打造,Perl 6 终将在年底与大家见面。我们预先采访了它的作者了解一下新特性。
|
||||
|
||||
Larry Wall 是个相当有趣的人。他是编程语言 Perl 的创造者,这种语言被广泛的誉为将互联网粘在一起的胶水,也由于大量地在各种地方使用非字母的符号被嘲笑为‘只写’语言——以难以阅读著称。Larry 本人具有语言学背景,以其介绍 Perl 未来发展的演讲“[洋葱的状态][1](State of the Onion)”而闻名。(LCTT 译注:“洋葱的状态”是 Larry Wall 的年度演讲的主题,洋葱也是 Perl 基金会的标志。)
|
||||
|
||||
在2015年布鲁塞尔的 FOSDEM 上,我们赶上了 Larry,问了问他为什么 Perl 6 花了如此长的时间(Perl 5 的发布时间是1994年),了解当项目中的每个人都各执己见时是多么的难以管理,以及他的语言学背景自始至终究竟给 Perl 带来了怎样的影响。做好准备,让我们来领略其中的奥妙……
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall1.jpg)
|
||||
|
||||
**Linux Voice:你曾经有过计划去寻找世界上某个地方的某种不见经传的语言,然后为它创造书写的文字,但你从未有机会去实现它。如果你能回到过去,你会去做么?**
|
||||
|
||||
Larry Wall:你首先得是个年轻人才能搞得定!做这些事需要投入很大的努力和人力,以至于已经不适合那些上了年纪的人了。健康、活力是其中的一部分,同样也因为人们在年轻的时候更容易学习一门新的语言,只有在你学会了语言之后你才能写呀。
|
||||
|
||||
我自学了日语十年,由于我的音系学和语音学的训练我能说的比较流利——但要理解别人的意思对我来说还十分困难。所以到了日本我会问路,但我听不懂他们的回答!
|
||||
|
||||
通常需要一门语言学习得足够好才能开发一个文字体系,并可以使用这种语言进行少量的交流。在你能够实际推广它和用本土人自己的文化教育他们前,那还需要一些年。最后才可以教授本土人如何以他们的文明书写。
|
||||
|
||||
当然如果在语言方面你有帮手 —— 经过别人的提醒我们不再使用“语言线人”来称呼他们了,那样显得我们像是在 CIA 工作的一样!—— 你可以通过他们的帮助来学习外语。他们不是老师,但他们会以另一种方式来启发你学习 —— 当然他们也能教你如何说。他们会拿着一根棍子,指着它说“这是一根棍子”,然后丢掉同时说“棒子掉下去了”。然后,你就可以记下一些东西并将其系统化。
|
||||
|
||||
大多数让人们有这样做的动力是翻译圣经。但是这只是其中的一方面;另一方面也是为了文化保护。传教士在这方面臭名昭著,因为人类学家认为人们应该基于自己的文明来做这件事。但有些人注定会改变他们的文化——他们可能是军队、或是商业侵入,如可口可乐或者缝纫机,或传教士。在这三者之间,传教士相对来讲伤害最小的了,如果他们恪守本职的话。
|
||||
|
||||
**LV:许多文字系统有本可依,相较而言你的发明就像是格林兰语…**
|
||||
|
||||
印第安人照搬字母就发明了他们自己的语言,而且没有在这些字母上施加太多我们给这些字母赋予的涵义,这种做法相当随性。它们只要能够表达出人们的所思所想,使交流顺畅就行。经常是有些声调语言(Tonal language)使用的是西方文字拼写,并尽可能的使用拉丁文的字符变化,然后用重音符或数字标注出音调。
|
||||
|
||||
在你开始学习如何使用语音和语调表示之后,你也开始变得迷糊——或者你的书写就不如从前准确。或者你对话的时候像在讲英文,但发音开始无法匹配拼写。
|
||||
|
||||
**LV:当你在开发 Perl 的时候,你的语言学背景会不会使你认为:“这对程序设计语言真的非常重要”?**
|
||||
|
||||
LW:我在人们是如何使用语言上想了很多。在现实的语言中,你有一套名词、动词和形容词的体系,并且你知道这些单词的词性。在现实的自然语言中,你时常将一个单词放到不同的位置。我所学的语言学理论也被称为法位学(phoenetic),它解释了这些在自然语言中工作的原理 —— 也就是有些你当做名词的东西,有时候你可以将它用作动词,并且人们总是这样做。
|
||||
|
||||
你能很好的将任何单词放在任何位置而进行沟通。我比较喜欢的例子是将一个整句用作为一个形容词。这句话会是这样的:“我不喜欢你的[我可以用任何东西来取代这个形容词的]态度”!
|
||||
|
||||
所以自然语言非常灵活,因为聆听者非常聪明 —— 至少,相对于电脑而言 —— 你相信他们会理解你最想表达的意思,即使存在歧义。当然对电脑而言,你必须保证歧义不大。
|
||||
|
||||
> “在 Perl 6 中,我们试图让电脑更准确的了解我们。”
|
||||
|
||||
可以说在 Perl 1到5上,我们针对歧义方面处理做得还不够。有时电脑会在不应该的时候迷惑。在 Perl 6上,我们找了许多方法,使得电脑对你所说的话能更准确的理解,就算用户并不清楚这底是字符串还是数字,电脑也能准确的知道它的类型。我们找到了内部以强类型存储,而仍然可以无视类型的“以此即彼”的方法。
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall2.jpg)
|
||||
|
||||
**LV:Perl 被视作互联网上的“胶水(glue)”语言已久,能将点点滴滴组合在一起。在你看来 Perl 6 的发布是否符合当前用户的需要,或者旨在招揽更多新用户,能使它重获新生吗?**
|
||||
|
||||
LW:最初的设想是为 Perl 程序员带来更好的 Perl。但在看到了 Perl 5 上的不足后,很明显改掉这些不足会使 Perl 6更易用,就像我在讨论中提到过 —— 类似于 [托尔金(J. R. R. Tolkien) 在《指环王》前言中谈到的适用性一样][2]。
|
||||
|
||||
重点是“简单的东西应该简单,而困难的东西应该可以实现”。让我们回顾一下,在 Perl 2和3之间的那段时间。在 Perl 2上我们不能处理二进制数据或嵌入的 null 值 —— 只支持 C 语言风格的字符串。我曾说过“Perl 只是文本处理语言 —— 在文本处理语言里你并不需要这些功能”。
|
||||
|
||||
但当时发生了一大堆的问题,因为大多数的文本中会包含少量的二进制数据 —— 如网络地址(network addresses)及类似的东西。你使用二进制数据打开套接字,然后处理文本。所以通过支持二进制数据,语言的适用性(applicability)翻了一倍。
|
||||
|
||||
这让我们开始探讨在语言中什么应该简单。现在的 Perl 中有一条原则,是我们偷师了哈夫曼编码(Huffman coding)的做法,它在位编码系统中为字符采取了不同的尺寸,常用的字符占用的位数较少,不常用的字符占用的位数更多。
|
||||
|
||||
我们偷师了这种想法并将它作为 Perl 的一般原则,针对常用的或者说常输入的 —— 这些常用的东西必须简单或简洁。不过,另一方面,也显得更加的不规则(irregular)。在自然语言中也是这样的,最常用的动词实际上往往是最不规则的。
|
||||
|
||||
所以在这样的情况下需要更多的差异存在。我很喜欢一本书是 Umberto Eco 写的的《探寻完美的语言(The Search for the Perfect Language)》,说的并不是计算机语言;而是哲学语言,大体的意思是古代的语言也许是完美的,我们应该将它们带回来。
|
||||
|
||||
所有的这类语言错误的认为类似的事物其编码也应该总是类似的。但这并不是我们沟通的方式。如果你的农场中有许多动物,他们都有相近的名字,当你想杀一只鸡的时候说“走,去把 Blerfoo 宰了”,你的真实想法是宰了 Blerfee,但有可能最后死的是一头牛(LCTT 译注:这是杀鸡用牛刀的意思吗?哈哈)。
|
||||
|
||||
所以在这种时候我们其实更需要好好的将单词区分开,使沟通信道的冗余增加。常用的单词应该有更多的差异。为了达到更有效的通讯,还有一种自足(LCTT 译注:self-clocking ,自同步,[概念][3]来自电信和电子行业,此处译为“自足”更能体现涵义)编码。如果你在一个货物上看到过 UPC 标签(条形码),它就是一个自足编码,每对“条”和“空”总是以七个列宽为单位,据此你就知道“条”的宽度加起来总是这么宽。这就是自足。
|
||||
|
||||
在电子产品中还有另一种自足编码。在老式的串行传输协议中有停止和启动位,来保持同步。自然语言中也会包含这些。比如说,在写日语时,不用使用空格。由于书写方式的原因,他们会在每个词组的开头使用中文中的汉字字符,然后用音节表(syllabary)中的字符来结尾。
|
||||
|
||||
**LV:是平假名,对吗?**
|
||||
|
||||
LW: 是的,平假名。所以在这一系统,每个词组的开头就自然就很重要了。同样的,在古希腊,大多数的动词都是搭配好的(declined 或 conjugated),所以它们的标准结尾是一种自足机制。在他们的书写体系中空格也是可有可无的 —— 引入空格是更近代的发明。
|
||||
|
||||
所以在计算机语言上也要如此,有的值也可以自足编码。在 Perl 上我们重度依赖这种方法,而且在 Perl 6 上相较于前几代这种依赖更重。当你使用表达式时,你要么得到的是一个词,要么得到的是插值(infix)操作符。当你想要得到一个词,你有可能得到的是一个前缀操作符,它也在相同的位置;同样当你想要得到一个插值操作符,你也可能得到的是前一个词的后缀。
|
||||
|
||||
但是反过来。如果编译器准确的知道它想要什么,你可以稍微重载(overload)它们,其它的让 Perl 来完成。所以在斜线“/”后面是单词时它会当成正则表达式,而斜线“/”在字串中时视作除法。而我们并不会重载所有东西,因为那只会使你失去自足冗余。
|
||||
|
||||
多数情况下我们提示的比较好的语法错误消息,是出于发现了一行中出现了两个关键词,然后我们尝试找出为什么一行会出现两个关键字 —— “哦,你一定漏掉了上一行的分号”,所以我们相较于很多其他的按步照班的解析器可以生成更好的错误消息。
|
||||
|
||||
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall3.jpg)
|
||||
|
||||
**LV:为什么 Perl 6 花了15年?当每个人对事物有不同看法时一定十分难于管理,而且正确和错误并不是绝对的。**
|
||||
|
||||
LW:这必须要非常小心地平衡。刚开始会有许多的好的想法 —— 好吧,我并不是说那些全是好的想法。也有很多令人烦恼的地方,就像有361条 RFC [功能建议文件],而我也许只想要20条。我们需要坐下来,将它们全部看完,并忽略其中的解决方案,因为它们通常流于表象、视野狭隘。几乎每一条只针对一样事物,如若我们将它们全部拼凑起来,那简直是一堆垃圾。
|
||||
|
||||
> “掌握平衡时需要格外小心。毕竟在刚开始的时候总会有许多的好主意。”
|
||||
|
||||
所以我们必须基于人们在使用 Perl 5 时的真实感受重新整理,寻找统一、深层的解决方案。这些 RFC 文档许多都提到了一个事实,就是类型系统的不足。通过引入更条理分明的类型系统,我们可以解决很多问题并且即聪明又紧凑。
|
||||
|
||||
同时我们开始关注其他方面:如何统一特征集并开始重用不同领域的想法,这并不需要它们在下层相同。我们有一种标准的书写配对(pair)的方式——好吧,在 Perl 里面有两种!但使用冒号书写配对的方法同样可以用于基数计数法或是任何进制的文本编号。同样也可以用于其他形式的引用(quoting)。在 Perl 里我们称它为“奇妙的一致”。
|
||||
|
||||
> “做了 Perl 6 的早期实现的朋友们,握着我的手说:“我们真的很需要一位语言的设计者。””
|
||||
|
||||
同样的想法涌现出来,你说“我已经熟悉了语法如何运作,但是我看见它也被用在别处”,所以说视角相同才能找出这种一致。那些提出各种想法和做了 Perl 6 的早期实现的人们回来看我,握着我的手说:“我们真的需要一位语言的设计者。您能作为我们的[仁慈独裁者][4](benevolent dictator)吗?”(LCTT 译注:Benevolent Dictator For Life,或 BDFL,指开源领袖,通常指对社区争议拥有最终裁决权的领袖,典故来自 Python 创始人 Guido van Rossum, 具体参考维基条目[解释][4])
|
||||
|
||||
所以我是语言的设计者,但总是听到:“不要管具体实现(implementation)!我们目睹了你对 Perl 5 做的那些,我们不想历史重演!”真是让我忍俊不禁,因为他们起步所基于的核心与原先 Perl 5 的内部结构几乎别无二致,也许这就是为什么一些早期的实现做得并不好的原因。
|
||||
|
||||
因为我们当时仍然在摸索整个设计,其实现对 VM(虚拟机)该做什么和不该做什么做了许多假设,所以最终这个东西就像面向对象的汇编语言一样。类似的问题在伊始阶段无处不在。然后 Pugs 这家伙走过来说:“用用看 Haskell 吧,它能让你们清醒地认识自己正在干什么,让我们用它来弄清楚下层的语义模型(semantic model)。”
|
||||
|
||||
因此,我们明确了其中的一些语义模型,但更重要的是,我们开始建立符合那些语义模型的测试套件。在这之后,Parrot VM 继续进行开发,并且出现了另一个实现 Niecza ,它基于 .Net,是由一个年轻的家伙搞出来的。他很聪明,实现了 Perl 6 的一个很大的子集。不过他还是一个人干,并没有找到什么好方法让别人介入他的项目。
|
||||
|
||||
同时 Parrot 项目变得过于庞大,以至于任何人都不能真正的深入掌控它,并且很难重构。同时,开发 Rakudo 的人们觉得我们可能需要在更多平台上运行它,而不只是在 Parrot VM 上。 于是他们发明了所谓的可移植层 NQP ,即 “Not Quite Perl”。他们一开始将它移植到 JVM(Java虚拟机)上运行,与此同时,他们还秘密的开发了一个叫做 MoarVM 的 VM ,它去年才刚刚为人知晓。
|
||||
|
||||
无论 MoarVM 还是 JVM 在回归测试(regression test)中表现得十分接近 —— 在许多方面 Parrot 算是尾随其后。这样不挑剔 VM 真的很棒,我们也能开始考虑将 NQP 发扬光大。谷歌夏季编码大赛(Google Summer of Code project)的目标就是针对 JavaScript 的 NQP,这应该靠谱,因为 MoarVM 也同样使用 Node.js 作为日常处理。
|
||||
|
||||
我们可能要将今年余下的时光投在 MoarVM 上,直到 6.0 发布,方可休息片刻。
|
||||
|
||||
**LV:去年英国政府开展了编程年活动(Year of Code),来激发年轻人对编程的兴趣。针对活动的建议五花八门 —— 比如为了让人们准确地认识内存的使用,你是应该从低阶语言开始讲授,还是从一门高阶语言开始。你对此作何看法?**
|
||||
|
||||
LW:到现在为止,Python 社区在低阶方面的教学工作做得比我们要好。我们也很想在这一方面做点什么,这也是我们有蝴蝶 logo 的部分原因,以此来吸引七岁大小的女孩子!
|
||||
|
||||
![Perl 6 : Camelia](https://upload.wikimedia.org/wikipedia/commons/thumb/8/85/Camelia.svg/640px-Camelia.svg.png)
|
||||
|
||||
> “到现在为止,Python 社区在低阶方面的教学工作做得比我们要好。”
|
||||
|
||||
我们认为将 Perl 6 作为第一门语言来学习是可行的。有一大堆人将 Perl 5 作为第一门语言来学习,这让我们很吃惊。你知道,在 Perl 5 中有许多相当大的概念,如闭包、词法作用域,和一些你通常在函数式编程中见到的特性。甚至在 Perl 6 中更是如此。
|
||||
|
||||
Perl 6 花了这么长时间的部分原因是我们尝试去坚持将近 50 种互不相同的原则,在设计语言的最后对于“哪点是最重要的规则”这个问题还是悬而未决。有太多的问题需要讨论。有时我们做出了决定,并已经工作了一段时间,才发现这个决定并不很正确。
|
||||
|
||||
之前我们并未针对并发程序设计规定或指定很多东西,直到 Jonathan Worthington 的出现,他非常巧妙地权衡了各个方面。他结合了诸如 Go 和 C# 等其他语言的一些想法,将并发原语实现得非常好。可组合性(Composability)是语言至关重要的一部分。
|
||||
|
||||
有很多程序设计系统的并发和并行做得并不好 —— 比如线程和锁,不良的操作方式有很多。所以在我看来,额外花点时间看一下 Go 或者 C# 这种高阶原语(这个说法本身有点自相矛盾)的开发是很值得的 —— 它们做得相当棒。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxvoice.com/interview-larry-wall/
|
||||
|
||||
作者:[Mike Saunders][a]
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxvoice.com/author/mike/
|
||||
[1]:https://en.wikipedia.org/wiki/Perl#State_of_the_Onion
|
||||
[2]:http://tinyurl.com/nhpr8g2
|
||||
[3]:http://en.wikipedia.org/wiki/Self-clocking_signal
|
||||
[4]:https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
|
@ -0,0 +1,28 @@
|
||||
Linux 4.3 内核增加了 MOST 驱动子系统
|
||||
================================================================================
|
||||
当 4.2 内核还没有正式发布的时候,Greg Kroah-Hartman 就为他维护的各种子系统模块打开了4.3 的合并窗口。
|
||||
|
||||
之前 Greg KH 发起的拉取请求(pull request)里包含了 Linux 4.3 合并窗口的更新,内容涉及驱动核心、TTY/串口、USB 驱动、字符/杂项以及暂存区内容。这些拉取请求没有带来任何震撼性的改变,大部分都是改进、新增和 bug 修复。暂存区的内容又是大量的修正和清理,不过其中还有一个新的驱动子系统。
|
||||
|
||||
Greg 提到了[4.3 的暂存区改变][2],“这里的很多东西,几乎全部都是细小的修改和改变。通常的 IIO 更新和新驱动,以及我们已经添加了的 MOST 驱动子系统,已经在源码树里整理了。ozwpan 驱动最终还是被删掉,因为它很明显被废弃了而且也没有人关心它。”
|
||||
|
||||
MOST 驱动子系统是面向媒体的系统传输(Media Oriented Systems Transport)的简称。在 Linux 4.3 新增的文档里面解释道,“MOST 驱动支持 Linux 应用程序访问 MOST 网络:汽车信息骨干网(Automotive Information Backbone),高速汽车多媒体网络的事实标准。MOST 定义了必要的协议、硬件和软件层,提供高效且低消耗的传输控制、实时的数据包传输,而只需要使用一种媒介(物理层)。目前使用的媒介是光纤、非屏蔽双绞线(UTP)和同轴电缆。MOST 也支持多种传输速度,最高支持 150Mbps。”如文档所解释的,MOST 主要是关于 Linux 在汽车上的应用。
|
||||
|
||||
虽然 Greg KH 发出了他为 Linux 4.3 多个子系统所做的更新,但他还没有打算提交 [KDBUS][5] 的内核代码。他之前已经放出了 [Linux 4.3 的 KDBUS][3] 的开发计划,所以我们还需要等待官方的 4.3 合并窗口打开,看看会发生什么。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.3-Staging-Pull
|
||||
|
||||
作者:[Michael Larabel][a]
|
||||
译者:[oska874](https://github.com/oska874)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.michaellarabel.com/
|
||||
[1]:http://www.phoronix.com/scan.php?page=search&q=Linux+4.2
|
||||
[2]:http://lkml.iu.edu/hypermail/linux/kernel/1508.2/02604.html
|
||||
[3]:http://www.phoronix.com/scan.php?page=news_item&px=KDBUS-Not-In-Linux-4.2
|
||||
[4]:http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-rc7-Released
|
||||
[5]:http://www.phoronix.com/scan.php?page=search&q=KDBUS
|
@ -1,27 +1,28 @@
|
||||
在 Ubuntu 和 Linux Mint 上安装 Terminator 0.98
|
||||
================================================================================
|
||||
[Terminator][1],在一个窗口中有多个终端。该项目的目标之一是为管理终端提供一个有用的工具。它的灵感来自于类似 gnome-multi-term,quankonsole 等程序,这些程序关注于在窗格中管理终端。 Terminator 0.98 带来了更完美的标签功能,更好的布局保存/恢复,改进了偏好用户界面和多出 bug 修复。
|
||||
[Terminator][1],它可以在一个窗口内打开多个终端。该项目的目标之一是为摆放终端提供一个有用的工具。它的灵感来自于类似 gnome-multi-term,quankonsole 等程序,这些程序关注于按网格摆放终端。 Terminator 0.98 带来了更完美的标签功能,更好的布局保存/恢复,改进了偏好用户界面和多处 bug 修复。
|
||||
|
||||
![](http://www.ewikitech.com/wp-content/uploads/2015/09/Screenshot-from-2015-09-17-094828.png)
|
||||
|
||||
###TERMINATOR 0.98 的更改和新特性
|
||||
|
||||
- 添加了一个布局启动器,允许在不同布局之间简单切换(用 Alt + L 打开一个新的布局切换器);
|
||||
- 添加了一个新的手册(使用 F1 打开);
|
||||
- 保存的时候,布局现在会记住:
|
||||
- * 最大化和全屏状态
|
||||
- * 窗口标题
|
||||
- * 激活的标签
|
||||
- * 激活的终端
|
||||
- * 每个终端的工作目录
|
||||
- 添加选项用于启用/停用非同质标签和滚动箭头;
|
||||
- 最大化和全屏状态
|
||||
- 窗口标题
|
||||
- 激活的标签
|
||||
- 激活的终端
|
||||
- 每个终端的工作目录
|
||||
- 添加选项用于启用/停用非同类(non-homogenous)标签和滚动箭头;
|
||||
- 添加快捷键用于按行/半页/一页向上/下滚动;
|
||||
- 添加使用 Ctrl+鼠标滚轮放大/缩小,Shift+鼠标滚轮向上/下滚动页面;
|
||||
- 为下一个/上一个 profile 添加快捷键
|
||||
- 添加使用 Ctrl+鼠标滚轮来放大/缩小,Shift+鼠标滚轮向上/下滚动页面;
|
||||
- 为下一个/上一个配置文件(profile)添加快捷键
|
||||
- 改进自定义命令菜单的一致性
|
||||
- 新增快捷方式/代码来切换所有/标签分组;
|
||||
- 改进监视插件
|
||||
- 增加搜索栏切换;
|
||||
- 清理和重新组织窗口偏好,包括一个完整的全局便签更新
|
||||
- 清理和重新组织偏好(preferences)窗口,包括一个完整的全局便签更新
|
||||
- 添加选项用于设置 ActivityWatcher 插件静默时间
|
||||
- 其它一些改进和 bug 修复
|
||||
- [点击此处查看完整更新日志][2]
|
||||
@ -37,10 +38,6 @@ Terminator 0.98 有可用的 PPA,首先我们需要在 Ubuntu/Linux Mint 上
|
||||
如果你想要移除 Terminator,只需要在终端中运行下面的命令(可选)
|
||||
|
||||
$ sudo apt-get remove terminator
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -48,7 +45,7 @@ via: http://www.ewikitech.com/articles/linux/terminator-install-ubuntu-linux-min
|
||||
|
||||
作者:[admin][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,10 +1,10 @@
|
||||
如何在Ubuntu 14.04 / 15.04中设置IonCube Loaders
|
||||
================================================================================
|
||||
IonCube Loaders是PHP中用于辅助加速页面的加解密工具。它保护你的PHP代码不会被在未授权的计算机上查看。使用ionCube编码并加密PHP需要一个叫ionCube Loader的文件安装在web服务器上并提供给需要大量访问的PHP用。它在运行时处理并执行编码。PHP只需在‘php.ini’中添加一行就可以使用这个loader。
|
||||
IonCube Loaders 是一个用于 PHP 加解密的工具,并带有加速页面运行的功能。它也可以保护你的 PHP 代码不被未授权的计算机查看和运行。要使用经 ionCube 编码、加密的 PHP 文件,需要在 web 服务器上安装一个叫做 ionCube Loader 的文件,并需要让 PHP 可以访问到,很多 PHP 应用都在用它。它可以在运行时读取并执行编码过后的代码。PHP 只需在“php.ini”中添加一行就可以使用这个 loader。
|
||||
|
||||
### 前提条件 ###
|
||||
|
||||
在这篇文章中,我们将在Ubuntu14.04/15.04安装Ioncube Loader ,以便它可以在所有PHP模式中使用。本教程的唯一要求就是你系统安装了LEMP,并有“的php.ini”文件。
|
||||
在这篇文章中,我们将在Ubuntu14.04/15.04安装Ioncube Loader ,以便它可以在所有PHP模式中使用。本教程的唯一要求就是你系统安装了LEMP,并有“php.ini”文件。
|
||||
|
||||
### 下载 IonCube Loader ###
|
||||
|
||||
@ -14,15 +14,15 @@ IonCube Loaders是PHP中用于辅助加速页面的加解密工具。它保护
|
||||
|
||||
![download ioncube](http://blog.linoxide.com/wp-content/uploads/2015/09/download1.png)
|
||||
|
||||
下载完成后用下面的命令解压到"/usr/local/src/"。
|
||||
下载完成后用下面的命令解压到“/usr/local/src/”。
|
||||
|
||||
# tar -zxvf ioncube_loaders_lin_x86-64.tar.gz -C /usr/local/src/
|
||||
|
||||
![extracting archive](http://blog.linoxide.com/wp-content/uploads/2015/09/2-extract.png)
|
||||
|
||||
解压完成后我们就可以看到所有的存在的模块。但是我们只需要我们安装的PHP版本的相关模块。
|
||||
解压完成后我们就可以看到所有提供的模块。但是我们只需要我们所安装的PHP版本的对应模块。
|
||||
|
||||
要检查PHP版本,你可以运行下面的命令来找出相关的模块。
|
||||
要检查PHP版本,你可以运行下面的命令来找出相应的模块。
|
||||
|
||||
# php -v
|
||||
|
||||
@ -30,14 +30,14 @@ IonCube Loaders是PHP中用于辅助加速页面的加解密工具。它保护
|
||||
|
||||
根据上面的命令我们知道我们安装的是PHP 5.6.4,因此我们需要拷贝合适的模块到PHP模块目录下。
|
||||
|
||||
首先我们在“/usr/local/”创建一个叫“ioncube”的目录并复制需要的ioncube loader到这里。
|
||||
首先我们在“/usr/local/”创建一个叫“ioncube”的目录并复制所需的ioncube loader到这里。
|
||||
|
||||
root@ubuntu-15:/usr/local/src/ioncube# mkdir /usr/local/ioncube
|
||||
root@ubuntu-15:/usr/local/src/ioncube# cp ioncube_loader_lin_5.6.so ioncube_loader_lin_5.6_ts.so /usr/local/ioncube/
|
||||
|
||||
### PHP 配置 ###
|
||||
|
||||
我们要在位于"/etc/php5/cli/"文件夹下的"php.ini"中加入下面的配置行并重启web服务和php模块。
|
||||
我们要在位于“/etc/php5/cli/”文件夹下的“php.ini”中加入如下的配置行,并重启 web 服务和 php 模块。
|
||||
|
||||
# vim /etc/php5/cli/php.ini
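顺带给出一个示意:要加入的配置行通常形如下面这样(扩展文件的名称与路径取决于你的 PHP 版本和安装位置,此处沿用前文的 /usr/local/ioncube 目录和 PHP 5.6,仅作参考):

```ini
; 示例:在 php.ini 中启用 ionCube Loader 的配置行
; 路径与 PHP 版本号按你的实际安装调整
zend_extension = /usr/local/ioncube/ioncube_loader_lin_5.6.so
```

保存后重启 web 服务,`php -v` 的输出中就应该能看到 ionCube PHP Loader 的字样。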
|
||||
|
||||
@ -54,7 +54,6 @@ IonCube Loaders是PHP中用于辅助加速页面的加解密工具。它保护
|
||||
|
||||
要为我们的网站测试ioncube loader。用下面的内容创建一个"info.php"文件并放在网站的web目录下。
|
||||
|
||||
|
||||
# vim /usr/share/nginx/html/info.php
|
||||
|
||||
加入phpinfo的脚本后重启web服务后用域名或者ip地址访问“info.php”。
|
||||
@ -63,7 +62,6 @@ IonCube Loaders是PHP中用于辅助加速页面的加解密工具。它保护
|
||||
|
||||
![php info](http://blog.linoxide.com/wp-content/uploads/2015/09/php-info.png)
|
||||
|
||||
From the terminal issue the following command to verify the php version that shows the ionCube PHP Loader is Enabled.
|
||||
在终端中运行下面的命令来验证php版本并显示PHP Loader已经启用了。
|
||||
|
||||
# php -v
|
||||
@ -74,7 +72,7 @@ From the terminal issue the following command to verify the php version that sho
|
||||
|
||||
### 总结 ###
|
||||
|
||||
教程的最后你已经了解了在安装有nginx的Ubuntu中安装和配置ionCube Loader,如果你正在使用其他的web服务,这与其他服务没有明显的差别。因此做完这些安装Loader是很简单的,并且在大多数服务器上的安装都不会有问题。然而并没有一个所谓的“标准PHP安装”,服务可以通过许多方式安装,并启用或者禁用功能。
|
||||
教程的最后你已经了解了如何在安装有nginx的Ubuntu中安装和配置ionCube Loader,如果你正在使用其他的web服务,这与其他服务没有明显的差别。因此安装Loader是很简单的,并且在大多数服务器上的安装都不会有问题。然而并没有一个所谓的“标准PHP安装”,服务可以通过许多方式安装,并启用或者禁用功能。
|
||||
|
||||
如果你是在共享服务器上,那么确保运行了ioncube-loader-helper.php脚本,并点击链接来测试运行时安装。如果安装时你仍然遇到了问题,欢迎联系我们及给我们留下评论。
|
||||
|
||||
@ -84,7 +82,7 @@ via: http://linoxide.com/ubuntu-how-to/setup-ioncube-loaders-ubuntu-14-04-15-04/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,41 @@
|
||||
红帽 CEO 对 OpenStack 收益表示乐观
|
||||
================================================================================
|
||||
得益于围绕 Linux 和云不断发展的平台与基础设施技术,红帽正在持续快速发展。红帽于九月二十一日公布了 2016 财年第二季度的财务业绩,再次超过预期。
|
||||
|
||||
![](http://www.serverwatch.com/imagesvr_ce/1212/icon-redhatcloud-r.jpg)
|
||||
|
||||
这一季度,红帽的收入为 5 亿 4 百万美元,和去年同比增长 13%。净收入为 5 千 1 百万美元,超过了 2015 财年第二季度的 4 千 7 百万美元。
|
||||
|
||||
展望未来,红帽为下一季度和全年提供了积极的目标。对于第三季度,红帽给出的营收指导目标是在 5亿1千9百万美元和5亿2千3百万美元之间,和去年同期相比增长 15%。
|
||||
|
||||
对于 2016 财年,红帽的全年指导目标是 20亿4千4百万美元,和去年相比增长 14%。
|
||||
|
||||
红帽 CFO Frank Calderoni 在电话会议上指出,红帽最大的 30 个订单都接近甚至超过了 1 百万美元。其中有 4 个订单超过 5 百万美元,还有一个超过 1 千万美元。
|
||||
|
||||
从近几年的经验来看,红帽产品的交叉销售非常成功,全部订单中有超过 65% 的订单包括了一个或多个红帽应用和新兴技术产品组件。
|
||||
|
||||
Calderoni 说 “我们希望这些技术,例如中间件、RHEL OpenStack 平台、OpenShift、云管理和存储能持续推动收益增长。”
|
||||
|
||||
### OpenStack ###
|
||||
|
||||
在电话会议中,红帽 CEO Jim Whitehurst 被多次问到 OpenStack 的预期收入。Whitehurst 说,得益于安装程序的改进,最近发布的 Red Hat OpenStack Platform 7.0 向前跨了一大步。
|
||||
|
||||
Whitehurst 提到:“它在识别和使用硬件方面做得很好,当然,这也意味着在识别硬件并正确使用它们方面还有很多工作要做。”
|
||||
|
||||
Whitehurst 说他已经开始注意到很多的生产应用程序开始迁移到 OpenStack 云上来。他也警告说在产业化方面迁移到 OpenStack 大部分只是尝鲜,还并没有成为主流。
|
||||
|
||||
对于竞争对手, Whitehurst 尤其提到了微软、惠普和 Mirantis。在他看来,很多组织仍然会使用多种操作系统,如果他们部分使用了微软产品,会更倾向于开源方案作为替代选项。Whitehurst 说在云方面他还没有看到太多和惠普面对面的竞争,但和 Mirantis 则确实如此。
|
||||
|
||||
Whitehurst 说 “我们也有几次胜利,客户从 Mirantis 转到了 RHEL。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.serverwatch.com/server-news/red-hat-ceo-optimistic-on-openstack-revenue-opportunity.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm
|
@ -1,12 +1,8 @@
|
||||
如何将 Oracle 11g 升级到 Orcale 12c
|
||||
================================================================================
|
||||
大家好。
|
||||
大家好。今天我们来学习一下如何将 Oracle 11g 升级到 Oracle 12c。开始吧。
|
||||
|
||||
今天我们来学习一下如何将 Oracle 11g 升级到 Oracle 12c。开始吧。
|
||||
|
||||
在此,我使用的是 CentOS 7 64 位 Linux 发行版。
|
||||
|
||||
我假设你已经在你的系统上安装了 Oracle 11g。这里我会展示一下安装 Oracle 11g 时我的操作步骤。
|
||||
在此,我使用的是 CentOS 7 64 位 Linux 发行版。我假设你已经在你的系统上安装了 Oracle 11g。这里我会展示一下安装 Oracle 11g 时我的操作步骤。
|
||||
|
||||
我在 Oracle 11g 上选择 “Create and configure a database”,如下图所示。
|
||||
|
||||
@ -16,7 +12,7 @@
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage2.png)
|
||||
|
||||
然后你输入安装 Oracle 11g 的所有路径以及密码。下面是我自己的 Oracle 11g 安装配置。确保你正确输入了 Oracle 的密码。
|
||||
然后你输入安装 Oracle 11g 的各种路径以及密码。下面是我自己的 Oracle 11g 安装配置。确保你正确输入了 Oracle 的密码。
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage3.png)
|
||||
|
||||
@ -30,7 +26,7 @@
|
||||
|
||||
你需要从该[链接][1]上下载两个 zip 文件。下载并解压两个文件到相同目录。文件名为 **linuxamd64_12c_database_1of2.zip** & **linuxamd64_12c_database_2of2.zip**。提取或解压完后,它会创建一个名为 database 的文件夹。
|
||||
|
||||
注意:升级到 12c 之前,请确保在你的 CentOS 上已经安装了所有必须的软件包并且 path 环境变量也已经正确配置,还有其它前提条件也已经满足。
|
||||
注意:升级到 12c 之前,请确保在你的 CentOS 上已经安装了所有必须的软件包,并且所有的路径变量也已经正确配置,还有其它前提条件也已经满足。
|
||||
|
||||
下面是必须使用正确版本安装的一些软件包
|
||||
|
||||
@ -47,13 +43,11 @@
|
||||
|
||||
在因特网上搜索正确的 rpm 版本。
|
||||
|
||||
你也可以用一个查询处理多个软件包,然后在输出中查找正确版本。例如:
|
||||
|
||||
在终端中输入下面的命令
|
||||
你也可以用一个查询处理多个软件包,然后在输出中查找正确版本。例如,在终端中输入下面的命令:
|
||||
|
||||
rpm -q binutils compat-libstdc++ gcc glibc libaio libgcc libstdc++ make sysstat unixodbc
|
||||
|
||||
你的系统中必须安装了以下软件包(版本可能较新会旧)
|
||||
你的系统中必须安装了以下软件包(版本可能或新或旧)
|
||||
|
||||
- binutils-2.23.52.0.1-12.el7.x86_64
|
||||
- compat-libcap1-1.10-3.el7.x86_64
|
||||
@ -83,11 +77,7 @@
|
||||
|
||||
你也需要 unixODBC-2.3.1 或更新版本的驱动。
|
||||
|
||||
我希望你安装 Oracle 11g 的时候已经在你的 CentOS 7 上创建了名为 oracle 的用户。
|
||||
|
||||
让我们以用户 oracle 登录 CentOS。
|
||||
|
||||
以用户 oracle 登录到 CentOS 之后,在你的 CentOS上打开一个终端。
|
||||
我希望你安装 Oracle 11g 的时候已经在你的 CentOS 7 上创建了名为 oracle 的用户。让我们以用户 oracle 登录 CentOS。以用户 oracle 登录到 CentOS 之后,在你的 CentOS上打开一个终端。
|
||||
|
||||
使用终端更改工作目录并导航到你解压两个 zip 文件的目录。在终端中输入以下命令开始安装 12c。
|
||||
|
||||
@ -119,15 +109,15 @@
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage11.png)
|
||||
|
||||
第七步,像下面这样使用默认的选择继续下一步。
|
||||
对于第七步,像下面这样使用默认的选择继续下一步。
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage12.png)
|
||||
|
||||
在第九步,你会看到一个类似下面这样的总结报告。
|
||||
在第九步中,你会看到一个类似下面这样的总结报告。
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage13.png)
|
||||
|
||||
如果一切正常,你可以点击步骤九中的 install 开始安装,进入步骤十。
|
||||
如果一切正常,你可以点击第九步中的 install 开始安装,进入第十步。
|
||||
|
||||
![](http://www.unixmen.com/wp-content/uploads/2015/09/11g212cimage14.png)
|
||||
|
||||
@ -135,7 +125,7 @@
|
||||
|
||||
要有耐心,一步一步走下来,最后它会告诉你成功了。否则,就在谷歌上搜索并做必要的操作来解决问题。再一次说明,由于你可能会遇到的错误有很多,我无法在这里提供所有的详细介绍。
|
||||
|
||||
现在,只需要按照下面屏幕指令配置监听器
|
||||
现在,只需要按照下面屏幕指令配置监听器。
|
||||
|
||||
配置完监听器之后,它会启动数据库升级助手(Database Upgrade Assistant)。选择 Upgrade Oracle Database。
|
||||
|
||||
@ -157,7 +147,7 @@ via: http://www.unixmen.com/upgrade-from-oracle-11g-to-oracle-12c/
|
||||
|
||||
作者:[Mohammad Forhad Iftekher][a]
|
||||
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -4,11 +4,11 @@
|
||||
|
||||
我知道你已经看过[如何下载 YouTube 视频][1]。但那些工具大部分都采用图形用户界面的方式。我会向你展示如何通过终端使用 youtube-dl 下载 YouTube 视频。
|
||||
|
||||
### [youtube-dl][2] ###
|
||||
### youtube-dl ###
|
||||
|
||||
youtube-dl 是基于 Python 的命令行小工具,允许你从 YouTube.com、Dailymotion、Google Video、Photobucket、Facebook、Yahoo、Metacafe、Depositfiles 以及其它一些类似网站中下载视频。它是用 pygtk 编写的,需要 Python 解析器来运行,对平台要求并不严格。它能够在 Unix、Windows 或者 Mac OS X 系统上运行。
|
||||
[youtube-dl][2] 是基于 Python 的命令行小工具,允许你从 YouTube.com、Dailymotion、Google Video、Photobucket、Facebook、Yahoo、Metacafe、Depositfiles 以及其它一些类似网站中下载视频。它是用 pygtk 编写的,需要 Python 解析器来运行,对平台要求并不严格。它能够在 Unix、Windows 或者 Mac OS X 系统上运行。
|
||||
|
||||
youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死了(例如通过 Ctrl-C 或者丢失网络连接),你只需要使用相同的 YouTube 视频 URL 再次运行它。只要当前目录中有下载的部分文件,它就会自动恢复没有完成的下载,也就是说,你不需要[下载][3]管理器来恢复下载。
|
||||
youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死了(例如通过 Ctrl-C 或者丢失网络连接),你只需要使用相同的 YouTube 视频 URL 再次运行它。只要当前目录中有下载的部分文件,它就会自动恢复没有完成的下载,也就是说,你不需要[下载管理器][3]来恢复下载。
|
||||
|
||||
#### 安装 youtube-dl ####
|
||||
|
||||
@ -16,7 +16,7 @@ youtube-dl 支持断点续传。如果在下载的过程中 youtube-dl 被杀死
|
||||
|
||||
sudo apt-get install youtube-dl
|
||||
|
||||
对于任何 Linux 发行版,你都可以通过下面的命令行接口在你的系统上快速安装 youtube-dl:
|
||||
对于任何 Linux 发行版,你都可以通过下面的命令行在你的系统上快速安装 youtube-dl:
|
||||
|
||||
sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O/usr/local/bin/youtube-dl
|
||||
|
||||
@ -83,11 +83,11 @@ via: http://itsfoss.com/download-youtube-linux/
|
||||
|
||||
作者:[alimiracle][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/ali/
|
||||
[1]:http://itsfoss.com/download-youtube-videos-ubuntu/
|
||||
[2]:https://rg3.github.io/youtube-dl/
|
||||
[3]:http://itsfoss.com/xtreme-download-manager-install/
|
||||
[3]:https://linux.cn/article-6209-1.html
|
@ -1,10 +1,10 @@
|
||||
对 Linux 用户10个有用的工具
|
||||
10 个给 Linux 用户的有用工具
|
||||
================================================================================
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2014/09/linux-656x445.png)
|
||||
|
||||
### 引言 ###
|
||||
|
||||
在本教程中,我已经收集了对 Linux 用户10个有用的工具,其中包括各种网络监控,系统审计和一些其它实用的命令,它可以帮助用户提高工作效率。我希望你会喜欢他们。
|
||||
在本教程中,我收集了 10 个给 Linux 用户的有用工具,其中包括各种网络监控、系统审计和一些其它实用的命令,它们可以帮助用户提高工作效率。我希望你会喜欢它们。
|
||||
|
||||
#### 1. w ####
|
||||
|
||||
@ -14,19 +14,18 @@
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_023.png)
|
||||
|
||||
显示帮助信息
|
||||
不显示头部信息(LCTT译注:原文此处有误)
|
||||
|
||||
$w -h
|
||||
|
||||
(LCTT译注:-h为不显示头部信息)
|
||||
|
||||
显示当前用户信息
|
||||
显示指定用户的信息
|
||||
|
||||
$w <username>
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_024.png)
|
||||
|
||||
#### 2. nmon ####
|
||||
|
||||
Nmon(nigel’s monitor 的简写)是一个显示系统性能信息的工具。
|
||||
|
||||
$ sudo apt-get install nmon
|
||||
@ -37,7 +36,7 @@ Nmon(nigel’s monitor 的简写)是一个显示系统性能信息的工具
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_001.png)
|
||||
|
||||
nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。
|
||||
nmon 可以显示 network、cpu、memory 和磁盘使用情况的信息。
|
||||
|
||||
**nmon 显示 cpu 信息 (按 c)**
|
||||
|
||||
@ -53,7 +52,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。
|
||||
|
||||
#### 3. ncdu ####
|
||||
|
||||
是一个基于‘du’的光标版本的命令行程序,这个命令是用来分析各种目录占用的磁盘空间。
|
||||
ncdu 是一个支持光标操作的 `du` 程序,这个命令是用来分析各种目录占用的磁盘空间的。
|
||||
|
||||
$apt-get install ncdu
|
||||
|
||||
@ -71,7 +70,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。
|
||||
|
||||
#### 4. slurm ####
|
||||
|
||||
一个基于网络接口的带宽监控命令行程序,它会基于图形来显示 ascii 文件。
|
||||
一个基于网络接口的带宽监控命令行程序,它会用字符来显示文本图形。
|
||||
|
||||
$ apt-get install slurm
|
||||
|
||||
@ -94,7 +93,7 @@ nmon 可以转储与 netwrok,cpu, memory 和磁盘使用情况的信息。
|
||||
|
||||
#### 5.findmnt ####
|
||||
|
||||
Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备,当需要时也可以挂载或卸载设备,它也是 util-linux 的一部分。
|
||||
Findmnt 命令用于查找挂载的文件系统。它用来列出安装设备,当需要时也可以挂载或卸载设备,它是 util-linux 软件包的一部分。
|
||||
|
||||
例子:
|
||||
|
||||
@ -122,7 +121,7 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备
|
||||
|
||||
#### 6. dstat ####
|
||||
|
||||
一种组合和灵活的工具,它可用于监控内存,进程,网络和磁盘的性能,它可以用来取代 ifstat, iostat, dmstat 等。
|
||||
一种灵活的组合工具,它可用于监控内存,进程,网络和磁盘性能,它可以用来取代 ifstat, iostat, dmstat 等。
|
||||
|
||||
$apt-get install dstat
|
||||
|
||||
@ -134,27 +133,27 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0141.png)
|
||||
|
||||
- **-c** cpu
|
||||
**-c** cpu
|
||||
|
||||
$ dstat -c
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0151.png)
|
||||
|
||||
显示 cpu 的详细信息。
|
||||
|
||||
$ dstat -cdl -D sda1
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_017.png)
|
||||
|
||||
- **-d** 磁盘
|
||||
**-d** 磁盘
|
||||
|
||||
$ dstat -d
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_0161.png)
|
||||
|
||||
显示 cpu、磁盘等的详细信息。
|
||||
|
||||
$ dstat -cdl -D sda1
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Selection_017.png)
|
||||
|
||||
#### 7. saidar ####
|
||||
|
||||
另一种基于 CLI 的系统统计数据监控工具,提供了有关磁盘使用,网络,内存,交换等信息。
|
||||
另一种基于命令行的系统统计数据监控工具,提供了有关磁盘使用,网络,内存,交换分区等信息。
|
||||
|
||||
$ sudo apt-get install saidar
|
||||
|
||||
@ -172,7 +171,7 @@ Findmnt 命令用于查找挂载的文件系统。它是用来列出安装设备
|
||||
|
||||
#### 8. ss ####
|
||||
|
||||
ss(socket statistics)是一个很好的选择来替代 netstat,它从内核空间收集信息,比 netstat 的性能更好。
|
||||
ss(socket statistics)是一个很好的替代 netstat 的选择,它从内核空间收集信息,比 netstat 的性能更好。
|
||||
|
||||
例如:
|
||||
|
||||
@ -196,7 +195,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内
|
||||
|
||||
#### 9. ccze ####
|
||||
|
||||
一个自定义日志格式的工具 :).
|
||||
一个美化日志显示的工具 :).
|
||||
|
||||
$ apt-get install ccze
|
||||
|
||||
@ -222,7 +221,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内
|
||||
|
||||
一种基于 Python 的终端工具,它可以用来以图形方式显示系统活动状态。详细信息以一个丰富多彩的柱状图来展示。
|
||||
|
||||
安装 python:
|
||||
安装 python(LCTT 译注:一般来说,你应该已经有了 python,不需要此步):
|
||||
|
||||
$ sudo apt-add-repository ppa:fkrull/deadsnakes
|
||||
|
||||
@ -234,7 +233,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内
|
||||
|
||||
$ sudo apt-get install python3.2
|
||||
|
||||
- [下载 ranwhen.py][1]
|
||||
[点此下载 ranwhen.py][1]
|
||||
|
||||
$ unzip ranwhen-master.zip && cd ranwhen-master
|
||||
|
||||
@ -246,7 +245,7 @@ ss(socket statistics)是一个很好的选择来替代 netstat,它从内
|
||||
|
||||
### 结论 ###
|
||||
|
||||
这都是些冷门但重要的 Linux 管理工具。他们可以在日常生活中帮助用户。在我们即将发表的文章中,我们会尽量多带来些管理员/用户工具。
|
||||
这都是些不常见但重要的 Linux 管理工具。它们可以在日常生活中帮助用户。在我们即将发表的文章中,我们会尽量多带来些管理员/用户工具。
|
||||
|
||||
玩得愉快!
|
||||
|
||||
@ -256,7 +255,7 @@ via: http://www.unixmen.com/10-useful-utilities-linux-users/
|
||||
|
||||
作者:[Rajneesh Upadhyay][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,26 +1,26 @@
|
||||
Linux有问必答 -- 如何在LInux中永久修改USB设备权限
|
||||
Linux 有问必答:如何在 Linux 中永久修改 USB 设备权限
|
||||
================================================================================
|
||||
> **提问**:当我尝试在Linux中运行USB GPS接收器时我遇到了下面来自gpsd的错误。
|
||||
> **提问**:当我尝试在 Linux 中运行 USB GPS 接收器时我遇到了下面来自 gpsd 的错误。
|
||||
>
|
||||
> gpsd[377]: gpsd:ERROR: read-only device open failed: Permission denied
|
||||
> gpsd[377]: gpsd:ERROR: /dev/ttyUSB0: device activation failed.
|
||||
> gpsd[377]: gpsd:ERROR: device open failed: Permission denied - retrying read-only
|
||||
>
|
||||
> 看上去gpsd没有权限访问USB设备(/dev/ttyUSB0)。我该如何永久修改它在Linux上的权限?
|
||||
> 看上去 gpsd 没有权限访问 USB 设备(/dev/ttyUSB0)。我该如何永久修改它在Linux上的权限?
|
||||
|
||||
当你在运行一个会读取或者写入USB设备的进程时,进程的用户/组必须有权限这么做。当然你可以手动用chmod命令改变USB设备的权限,但是手动的权限改变只是暂时的。USB设备会在下次重启时恢复它的默认权限。
|
||||
当你在运行一个会读取或者写入USB设备的进程时,进程的用户/组必须有权限这么做才行。当然你可以手动用`chmod`命令改变 USB 设备的权限,但是手动的权限改变只是暂时的。USB 设备会在下次重启时恢复它的默认权限。
|
||||
|
||||
![](https://farm6.staticflickr.com/5741/20848677843_202ff53303_c.jpg)
|
||||
|
||||
作为一个永久的方式,你可以创建一个基于udev的USB权限规则,它可以根据你的选择分配任何权限模式。下面是该如何做。
|
||||
作为一个永久的方式,你可以创建一个基于 udev 的 USB 权限规则,它可以根据你的选择分配任何权限模式。下面是该如何做。
|
||||
|
||||
首先,你需要找出USB设备的vendorID和productID。使用lsusb命令。
|
||||
首先,你需要找出 USB 设备的 vendorID 和 productID。使用`lsusb`命令。
|
||||
|
||||
$ lsusb -vvv
|
||||
|
||||
![](https://farm1.staticflickr.com/731/20848677743_39f76eb403_c.jpg)
|
||||
|
||||
上面lsusb的输出中,找出你的USB设备,并找出"idVendor"和"idProduct"字段。本例中,我们的结果是idVendor (0x067b)和 idProduct (0x2303)
|
||||
上面`lsusb`的输出中,找出你的 USB 设备,并找出"idVendor"和"idProduct"字段。本例中,我们的结果是`idVendor (0x067b)`和 `idProduct (0x2303)`
|
||||
|
||||
下面创建一个新的udev规则。
|
||||
|
||||
@ -32,12 +32,11 @@ Linux有问必答 -- 如何在LInux中永久修改USB设备权限
|
||||
|
||||
用你自己的"idVendor"和"idProduct"来替换。**MODE="0666"**表示USB设备的权限。
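以本文的例子(idVendor 0x067b、idProduct 0x2303)来说,规则文件的内容大致如下。这只是一个示意,文件名和其中的 ID 都请换成你自己的:

```
# /etc/udev/rules.d/50-myusb.rules(文件名仅为示例)
# 将匹配到的 USB 设备的权限设为所有用户可读写
SUBSYSTEMS=="usb", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", MODE="0666"
```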
|
||||
|
||||
现在重启电脑并重新加载udev规则:
|
||||
现在重启电脑并重新加载 udev 规则:
|
||||
|
||||
$ sudo udevadm control --reload
|
||||
|
||||
Then verify the permission of the USB device.
|
||||
接着验证USB设备的权限。
|
||||
接着验证下 USB 设备的权限。
|
||||
|
||||
![](https://farm1.staticflickr.com/744/21282872179_9a4a05d768_b.jpg)
|
||||
|
||||
@ -47,7 +46,7 @@ via: http://ask.xmodulo.com/change-usb-device-permission-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,11 @@
|
||||
Linux有问必答--如何强制在下次登录Linux时更换密码
|
||||
Linux 有问必答:如何强制在下次登录 Linux 时更换密码
|
||||
================================================================================
|
||||
> **提问**:我管理着一台多人共享的Linux服务器。我刚使用默认密码创建了一个新用户,但是我想用户在第一次登录时更换密码。有没有什么方法可以让他/她在下次登录时修改密码呢?
|
||||
|
||||
在多用户Linux环境中,标准实践是使用一个默认的随机密码创建一个用户账户。成功登录后,新用户自己改变默认密码。出于安全里有,经常建议“强制”用户在第一次登录时修改密码来确保这个一次性使用的密码不会再被使用。
|
||||
在多用户Linux环境中,标准实践是使用一个默认的随机密码创建一个用户账户。成功登录后,新用户自己改变默认密码。出于安全考虑,经常建议“强制”用户在第一次登录时修改密码来确保这个一次性使用的密码不会再被使用。
|
||||
|
||||
下面是**如何强制用户在下次登录时修改他/她的密码**。
|
||||
|
||||
changes, and when to expire the current password, etc.
|
||||
每个 Linux 用户都关联着不同的密码相关配置和信息。比如,记录着上次密码更改的日期、最小/最大的修改密码的天数、密码何时过期等等。
|
||||
|
||||
一个叫 chage 的命令行工具可以访问并调整密码过期相关配置。你可以使用这个工具来强制用户在下次登录时修改密码。
|
||||
@ -23,7 +22,7 @@ changes, and when to expire the current password, etc.
|
||||
|
||||
$ sudo chage -d0 <user-name>
|
||||
|
||||
原本“-d <N>”参数是用来设置密码的“年龄”(也就是上次修改密码起到1970 1.1起的天数)。因此“-d0”的意思是上次密码修改的时间是1970 1.1,这就让当前的密码过期了,也就强制让他在下次登录的时候修改密码了。
|
||||
原本“-d <N>”参数是用来设置密码的“年龄”(也就是从 1970/1/1 起到上次修改密码的天数)。因此“-d0”的意思是上次密码修改的时间是 1970/1/1,这就让当前的密码过期了,也就强制让他在下次登录的时候修改密码了。
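顺带一提,“自 1970/1/1 起的天数”这个数值可以像下面这样算出来(示意脚本,其中的用户名 alice 仅为假设,且实际执行 chage 需要 root 权限):

```shell
# 计算今天对应的“自 1970/1/1 起的天数”,即 chage -d 所使用的数值
days=$(( $(date +%s) / 86400 ))
echo "今天是第 ${days} 天"
# sudo chage -d "$days" alice  # 将 alice 的上次改密日期记为今天(示例用户)
# sudo chage -d 0 alice        # 记为 1970/1/1,使密码立即过期,强制下次登录时修改
```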
|
||||
|
||||
另外一个过期当前密码的方式是用passwd命令。
|
||||
|
||||
@ -46,8 +45,8 @@ changes, and when to expire the current password, etc.
|
||||
via: http://ask.xmodulo.com/force-password-change-next-login-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
Mytodo: 为 DIY 爱好者准备的待办事项管理软件
|
||||
Mytodo:为 DIY 爱好者准备的待办事项管理软件
|
||||
================================================================================
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-Linux.jpg)
|
||||
|
||||
通常我关注的软件都是那些不用折腾并且易用的(对图形界面而言)。这就是我把 [Go For It][1] 待办事项程序归到 [Linux 生产力工具][2] 列表的原因。今天,我要向你们展示另一款待办事项列表应用,和其它的待办事项软件有点不一样。
|
||||
通常我关注的软件都是那些不用折腾并且易用的(对图形界面而言)。这就是我把 [Go For It][1] 待办事项程序归到 [Linux 产能工具][2] 列表的原因。今天,我要向你们展示另一款待办事项列表应用,和其它的待办事项软件有点不一样。
|
||||
|
||||
[Mytodo][3] 是个开源的待办事项列表程序,让你能够掌管一切。与其它类似的程序不同的是,Mytodo 更加面向 DIY 爱好者,因为它允许你配置服务器(如果你想在多台电脑上使用的话),除了主要的功能外还提供一个命令行界面。
|
||||
|
||||
@ -19,15 +19,15 @@ Mytodo 的一些主要特性:
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list.jpeg)
|
||||
|
||||
图形界面
|
||||
*图形界面*
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list-cli.jpeg)
|
||||
|
||||
命令行
|
||||
*命令行*
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Mytodo-list-conky.jpeg)
|
||||
|
||||
Conky 显示着待办事项
|
||||
*Conky 显示着待办事项*
|
||||
|
||||
你可以在下面的 Github 链接里找到源码和配置介绍:
|
||||
|
||||
@ -43,13 +43,13 @@ via: http://itsfoss.com/mytodo-list-manager/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[alim0x](https://github.com/alim0x)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/go-for-it-to-do-app-in-linux/
|
||||
[2]:http://itsfoss.com/productivity-tips-ubuntu/
|
||||
[2]:https://linux.cn/article-6425-1.html
|
||||
[3]:https://github.com/mohamed-aziz/mytodo
|
||||
[4]:http://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[5]:https://github.com/mohamed-aziz/mytodo
|
@ -0,0 +1,78 @@
|
||||
新的 RTL 协作组将加速实时 Linux 的发展
|
||||
================================================================================
|
||||
![](http://www.linux.com/images/stories/66866/Tux-150.png)
|
||||
|
||||
在本周的 Linux 大会活动(LinuxCon)上 Linux 基金会(Linux Foundation)[宣称][1],实时Linux操作系统项目(RTL,Real-Time Linux)得到了新的资金支持,并预期这将促进该项目,使其自成立15年来第一次有机会在实时操作性上和其他的实时操作系统(RTOS,Real Time Operation System)一较高下。Linux 基金会将 RTL 组重组为一个新的项目,并命名为RTL协作组(Real-Time Linux Collaborative Project),该项目将获得更有力的资金支持,更多的开发人员将投入其中,并更加紧密地集成到 Linux 内核主线开发中。
|
||||
|
||||
根据 Linux 基金会的说法,RTL 项目并入 Linux基金会旗下后,“在研发方面将为业界节省数百万美元的费用。”同时此举也将“通过强有力的上游内核测试体系而改善本项目的代码质量”。
|
||||
|
||||
在过去的十几年中,RTL 项目的开发管理和经费资助主要由[开源自动化开发实验室] [2](OSADL,Open Source Automation Development Lab)承担,OSADL 将继续作为新合作项目的金牌成员之一,但其原来承担的资金资助工作将会在一月份移交给 Linux 基金会。RTL 项目和 [OSADL][3] 长久以来一直负责维护[内核的实时抢占(RT-Preempt 或 Preempt-RT)][4]补丁,并定期将其更新到 Linux 内核的主线上。
|
||||
|
||||
据长期以来一直担任 OSADL 总经理的 Carsten Emde 博士介绍,支持内核实时特性的工作已经完成了将近 90%。 “这就像盖房子,”他解释说。 “主要的部件,如墙壁,窗户和门都已经安装到位,就实时内核来说,类似的主要部件包括:高精度定时器(high-resolution timers),中断线程化机制(interrupt threads)和优先级可继承的互斥量(priority-inheritance mutexes)等。然后所剩下的就是需要一些边边角角的工作,就如同装修房子过程中还剩下铺设如地毯和墙纸等来完成最终的工程。”
|
||||
|
||||
以 Emde 观点来看,从技术的角度来说,实时 Linux 的性能已经可以媲美绝大多数其他的实时操作系统 - 但前提是你要不厌其烦地把所有的补丁都打上。 Emde 的原话如下:“该项目(LCTT 译注,指RTL)的唯一目标就是提供一个满足实时性要求的 Linux 系统,使其无论运行状况如何恶劣都可以保证在确定的、可以预先定义的时间期限内对外界处理做出响应。这个目标已经实现,但需要你手动地将 RTL 提供的补丁添加到 Linux 内核主线的版本代码上,但将来的不用打补丁的实时 Linux 内核也能实现这个目标。唯一的,当然也是最重要的区别就是相应的维护工作将少得多,因为我们再也不用一次又一次移植那些独立于内核主线的补丁代码了。”
|
||||
|
||||
新的 RTL 协作组将继续在 Thomas Gleixner 的指导下工作,Thomas Gleixner 在过去的十多年里一直是 RTL 的核心维护人员。本周,Gleixner 被任命为 Linux 基金会成员,并加入了一个特别的小组,小组成员包括 Linux 稳定内核维护者Greg Kroah-Hartman,Yocto 项目维护者 Richard Purdie 和 Linus Torvalds 本人。
|
||||
|
||||
据 Emde 介绍,RTL 的第二维护人 Steven Rostedt 来自 Red Hat 公司,他负责“维护旧的,但尚保持维护的内核版本”,他将和同样来自 Red Hat 的 Ingo Molnàr 继续参与该项目,Ingo 是 RTL 的关键开发人员,但近年来更多地从事咨询方面的工作。有些令人惊讶的是,Red Hat 竟然不是 RTL 协作组的成员之一。相反,谷歌作为唯一的白金会员占据了头把交椅,其他黄金会员包括国家仪器公司(NI,National Instruments),OSADL 和德州仪器(TI)。银卡会员包括Altera 公司,ARM,Intel 和 IBM。
|
||||
|
||||
###走向实时内核的漫长道路###
|
||||
|
||||
当15年前 Linux 第一次出现在嵌入式设备上的时候,它所面临的嵌入式计算市场已经被其他的实时操作系统,譬如风河公司(WindRiver)的 VxWorks,所牢牢占据。VxWorks 从那时起到现在,一直在为众多的工控设备、航空电子设备以及交通运输应用提供着工业级别的高确定性的,硬实时的内核。微软后来也提供了一个支持实时性的操作系统版本- Windows CE,当时的 Linux 所面临的是来自潜在工业客户的公开嘲讽和层层阻力。他们认为那些从桌面系统改进来的 Linux 发行版本顶多适合要求不高的轻量级消费类电子产品,而不适合那些对硬实时要求更高的设备。
|
||||
|
||||
对于嵌入式 Linux 的先行者如 [MontaVista 公司][6]来说,其[早期的目标][5]很明确就是要改进 Linux 的实时能力。多年以来,对 Linux 的实时性能开发发展迅速,得到各种组织的支持,如[成立于2006年][7]的 OSADL,以及实时 Linux 基金会(RTLF,Real-Time Linux Foundation)。在2009年 [OSADL 与 RTLF 合并][8],OSADL 及其 RTL 组承担了所有的抢占式实时内核(Preempt-RT)补丁的维护工作和将补丁提交到上游内核主线的工作。除此之外 OSADL 还负责监管其他自动化相关的项目,例如[高可靠性 Linux][9](Safety Critical Linux)(译者注:指研究如何在关键系统上可靠安全地运行Linux)。
OSADL 对 RTL 的支持经历了三个阶段:拥护和推广,测试和质量评估,以及最后的资金支持。Emde 表示,在早期,OSADL 的角色仅限于写写推广的文章,制作专题报告,组织相关培训,以及“宣传” RTL 的优点。他说:“要让一个相当保守的工控行业接受象 Linux 之类的新技术及其基于社区的那种开发模式,首先就需要建立其对新事物的信任。从使用专有的实时操作系统转向改用 Linux 对公司意味着必须引入新的战略和流程,才能与社区进行互动。”
后来,OSADL 改而提供技术性能数据,建立[质量评估和测试中心][10],并在和开源相关的法律事务问题和安全认证方面向行业成员提供帮助。
当 RTL 在实时性上变得愈加成熟的同时,相反地 Windows CE 却是江河日下,[其市场份额正在快速地被 RTL 所蚕食][11],一些与 RTL 竞争的实时 Linux 项目,主要是 [Xenomai][12] 也已开始集成 RTL。
“伴随 RTL 补丁的成功,以及明确的预期其最终会被完整集成到 Linux 内核主线代码中,导致 Xenomai 关注的重心发生了变化,”Emde 说。 “Xenomai 3.0 可与 RT 补丁结合起来使用,并提供了所谓的‘皮肤’,(LCTT 译注:一个封装层),使我们可以复用为其他系统编写的代码。不过,它们还没有完全统一起来,因为 Xenomai 使用了双内核方法,而RT 补丁只适用于单一的 Linux 内核。“
近些年来,RTL 组的资助来源越来越少,所以最终 OSADL 接过了这个重任。Emde 说:“当最近开发工作因缺少资金而陷入停滞时,OSADL 对 RTL 的支持进入到第三个重大阶段:开始直接资助 Thomas Gleixner 的工作。”
正如 Emde 在其[10月5日的一篇博文][13]中所描述的那样,实时 Linux 的应用领域正在日益扩大,由其原来主要服务的工业控制扩大到了汽车行业和电信业等领域,这表明资助的来源也应该得到拓宽。Emde 原文写道:“仅仅靠来自工控行业的资金来支撑全部的工作是不合理的,因为电信等其他行业也在享用实时 Linux 内核。”
当 Linux 基金会表明有兴趣提供资金支持时,OSADL 认为“单一的资助和控制渠道要有效得多”(LCTT 译注:指最终由Linux 基金会全盘接手了 RTL 项目),Emde 如是说。不过,他补充说,作为黄金级成员,OSADL 仍参与监管项目的工作,会继续从事其宣传和质量保证方面的活动。
### 汽车行业期待 RTL 的崛起 ###
Emde 表示,RTL 会继续在工业应用领域飞速发展并逐渐取代其他实时操作系统。而且,他补充说,RTL 在汽车行业发展也很迅猛,以后会扩大并应用到铁路和航空电子设备上。
的确,Linux 在汽车行业将扮演越来越重要的角色,这也是 Linux 基金对 RTL 所寄予厚望的原因之所在。RTL 工作组可能会与 Linux 基金会旗下的[车载Linux][14](AGL,Automotive Grade Linux)工作组展开合作。Emde 猜测,Google 高调参与的主要动因可能也是希望将 RTL 用于汽车控制。此外,德州仪器(TI)也非常期望将其 Jacinto 处理器应用于汽车行业。
面向车载 Linux 的项目(比如AGL)的目标是要扩大 Linux 在车载设备上的应用范围,其应用不是仅限于车载信息娱乐(IVI,In-Vehicle Infotainment),而是要进入到譬如集群控制和车载通讯领域,而这些领域目前主要使用的是 QNX 之类的实时操作系统。无人驾驶汽车在实时性上对操作系统也有很高的要求。
Emde 特别指出,OSADL 的 [SIL2LinuxMP][15] 项目可能会在将 RTL 引入到汽车工业领域上扮演重要的角色。SIL2LinuxMP 并不是专门针对汽车工业的项目,但随着 BMW 公司参与其中,汽车行业成为其很重要的应用领域之一。该项目的目标在于验证 RTL 在采用单核或多核 CPU 的标准化商用(COTS,Commercial Off-The-Shelf)板卡上运行所需的基本组件。它定义了引导程序、根文件系统、Linux 内核以及对应支持 RTL 的 C 库。
无人机和机器人使用实时 Linux 的时机也已成熟,Xenomai 系统早已用在许多机器人以及一些无人机中。不过,在更广泛的嵌入式 Linux 世界,包括了消费电子产品和物联网应用中,RTL 可以扮演的角色很有限。主要的障碍在于,无线通信和互联网本身会带来延迟。
Emde 说:“目前实时 Linux 主要还是应用于系统内部控制以及系统与周边外设之间的控制,在远程控制机器上作用不大。企图通过互联网实现实时控制恐怕不是一件可行的事情。”
--------------------------------------------------------------------------------
via: http://www.linux.com/news/software/applications/858828-new-collaborative-group-to-speed-real-time-linux
作者:[Eric Brown][a]
译者:[unicornx](https://github.com/unicornx)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/42808
[1]:http://www.linuxfoundation.org/news-media/announcements/2015/10/linux-foundation-announces-project-advance-real-time-linux
[2]:http://archive.linuxgizmos.com/celebrating-the-open-source-automation-development-labs-first-birthday/
[3]:https://www.osadl.org/
[4]:http://linuxgizmos.com/adding-real-time-to-linux-with-preempt-rt/
[5]:http://archive.linuxgizmos.com/real-time-linux-what-is-it-why-do-you-want-it-how-do-you-do-it-a/
[6]:http://www.linux.com/news/embedded-mobile/mobile-linux/841651-embedded-linux-pioneer-montavista-spins-iot-linux-distribution
[7]:http://archive.linuxgizmos.com/industry-group-aims-linux-at-automation-apps/
[8]:http://archive.linuxgizmos.com/industrial-linux-groups-merge/
[9]:https://www.osadl.org/Safety-Critical-Linux.safety-critical-linux.0.html
[10]:http://www.osadl.org/QA-Farm-Realtime.qa-farm-about.0.html
[11]:http://www.linux.com/news/embedded-mobile/mobile-linux/818011-embedded-linux-keeps-growing-amid-iot-disruption-says-study
[12]:http://xenomai.org/
[13]:https://www.osadl.org/Single-View.111+M5dee6946dab.0.html
[14]:http://www.linux.com/news/embedded-mobile/mobile-linux/833358-first-open-automotive-grade-linux-spec-released
[15]:http://www.osadl.org/SIL2LinuxMP.sil2-linux-project.0.html
10 个 Linux 中的 passwd 命令示例
================================================================================
|
||||
|
||||
正如 **passwd** 命令的名称所示,其用于改变系统用户的密码。如果 passwd 命令由非 root 用户执行,那么它会询问当前用户的密码,然后设置调用该命令的用户的新密码。当此命令由超级用户 root 执行的话,就可以重新设置任何用户的密码,包括不知道当前密码的用户。
|
||||
|
||||
在这篇文章中,我们将用实例来介绍 passwd 命令。
|
||||
|
||||
#### 语法 : ####
|
||||
|
||||
# passwd {options} {user_name}
|
||||
|
||||
可以在 passwd 命令使用不同的选项,列表如下:
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-command-options.jpg)
|
||||
|
||||
### 例1:更改系统用户的密码 ###
|
||||
|
||||
当你使用非 root 用户登录时,比如我使用 ‘linuxtechi’ 登录的情况下,运行 passwd 命令它会重置当前登录用户的密码。
|
||||
|
||||
[linuxtechi@linuxworld ~]$ passwd
|
||||
Changing password for user linuxtechi.
|
||||
Changing password for linuxtechi.
|
||||
(current) UNIX password:
|
||||
New password:
|
||||
Retype new password:
|
||||
passwd: all authentication tokens updated successfully.
|
||||
[linuxtechi@linuxworld ~]$
|
||||
|
||||
当你作为 root 用户登录后并运行 **passwd** 命令时,它默认情况下会重新设置 root 的密码,如果你在 passwd 命令后指定了用户名,它会重置该用户的密码。
|
||||
|
||||
[root@linuxworld ~]# passwd
|
||||
[root@linuxworld ~]# passwd linuxtechi
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-command.jpg)
|
||||
|
||||
**注意** : 系统用户的密码以加密的形式保存在 /etc/shadow 文件中。
|
||||
|
||||
### 例2:显示密码状态信息 ###
|
||||
|
||||
要显示用户密码的状态信息,请在 passwd 命令后使用 **-S** 选项。
|
||||
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi PS 2015-09-20 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
在上面的输出中,第一个字段显示的用户名,第二个字段显示密码状态(**PS = 密码设置,LK = 密码锁定,NP = 无密码**),第三个字段显示了上次修改密码的时间,后面四个字段分别显示了密码能更改的最小期限和最大期限,警告期限和没有使用该口令的时长。
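上面这些字段也可以用一行 awk 脚本直接加上标签,便于阅读。下面是一个示意,以文中的示例输出作为输入;实际使用时可将变量替换为 `passwd -S 用户名` 的真实输出:

```shell
# 将 passwd -S 的一行输出按字段加上标签(示例数据来自上文,并非真实系统查询)
status='linuxtechi PS 2015-09-20 0 99999 7 -1'
echo "$status" | awk '{printf "用户=%s 状态=%s 上次修改=%s 最短期限=%s 最长期限=%s 警告期=%s 非活动期=%s\n", $1, $2, $3, $4, $5, $6, $7}'
```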
|
||||
|
||||
### 例3:显示所有账号的密码状态信息 ###
|
||||
|
||||
要显示所有用户的密码状态信息,需要在 passwd 命令中使用 **-Sa** 选项,示例如下所示:
|
||||
|
||||
root@localhost:~# passwd -Sa
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-sa.jpg)
|
||||
|
||||
(LCTT译注:不同发行版/passwd 的行为不同。CentOS6.6 没有测试成功,但 Ubuntu 可以。)
|
||||
|
||||
### 例4:使用 -d 选项删除用户的密码 ###
|
||||
|
||||
用我做例子,删除 ‘**linuxtechi**‘ 用户的密码。
|
||||
|
||||
[root@linuxworld ~]# passwd -d linuxtechi
|
||||
Removing password for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]#
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi NP 2015-09-20 0 99999 7 -1 (Empty password.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
“**-d**” 选项将清空用户密码,并禁用用户登录。
|
||||
|
||||
### 例5:设置密码立即过期 ###
|
||||
|
||||
在 passwd 命令中使用 '-e' 选项会立即使用户的密码过期,这将强制用户在下次登录时更改密码。
|
||||
|
||||
[root@linuxworld ~]# passwd -e linuxtechi
|
||||
Expiring password for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi PS 1970-01-01 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
现在尝试用 linuxtechi 用户 SSH 连接到主机。
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/09/passwd-expiry.jpg)
|
||||
|
||||
### 例6:锁定系统用户的密码 ###
|
||||
|
||||
在 passwd 命令中使用 ‘**-l**‘ 选项能锁定用户的密码,它会在密码的起始位置加上“!”。当他/她的密码被锁定时,用户将不能更改它的密码。
|
||||
|
||||
[root@linuxworld ~]# passwd -l linuxtechi
|
||||
Locking password for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi LK 2015-09-20 0 99999 7 -1 (Password locked.)
|
||||
[root@linuxworld ~]#
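锁定的效果可以通过检查口令字段的前缀来验证。下面的小脚本演示了这种 “!” 前缀的判断逻辑;为避免改动真实系统,这里用一行编造的示例数据代替 /etc/shadow 中的真实条目:

```shell
# entry 是一条仿照 /etc/shadow 格式编造的示例记录(哈希部分为占位符)
entry='linuxtechi:!$6$examplehash:16698:0:99999:7:::'
hash=$(echo "$entry" | cut -d: -f2)   # 取出第二个字段,即口令哈希
case "$hash" in
  '!'*) echo "password locked" ;;     # 以 "!" 开头说明密码已被锁定
  *)    echo "password usable" ;;
esac
```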
|
||||
|
||||
### 例7:使用 -u 选项解锁用户密码 ###
|
||||
|
||||
[root@linuxworld ~]# passwd -u linuxtechi
|
||||
Unlocking password for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]#
|
||||
|
||||
### 例8:使用 -i 选项设置非活动时间 ###
|
||||
|
||||
在 passwd 命令中使用 -i 选项可以设置系统用户密码的非活动期限。当用户(我使用的是 linuxtechi 用户)的密码过期后,若再经过 ‘**n**’ 天(在我的例子中是 10 天)仍没有更改密码,该用户将不能登录。
|
||||
|
||||
[root@linuxworld ~]# passwd -i 10 linuxtechi
|
||||
Adjusting aging data for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]#
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi PS 2015-09-20 0 99999 7 10 (Password set, SHA512 crypt.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
### 例9:使用 -n 选项设置密码更改的最短时间 ###
|
||||
|
||||
‘**-n**’ 选项用于设置两次修改密码之间的最短间隔。在下面的例子中,linuxtechi 用户在修改密码后的 90 天内不能再次更改密码;0 表示用户可以在任何时候更改密码。
|
||||
|
||||
[root@linuxworld ~]# passwd -n 90 linuxtechi
|
||||
Adjusting aging data for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi PS 2015-09-20 90 99999 7 10 (Password set, SHA512 crypt.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
### 例10:使用 -w 选项设置密码过期前的警告期限 ###
|
||||
|
||||
‘**-w**’ 选项在 passwd 命令中用于设置用户的密码过期警告期限。这意味着在密码过期前的 n 天里,用户登录时会收到密码即将过期的警告。
|
||||
|
||||
[root@linuxworld ~]# passwd -w 12 linuxtechi
|
||||
Adjusting aging data for user linuxtechi.
|
||||
passwd: Success
|
||||
[root@linuxworld ~]# passwd -S linuxtechi
|
||||
linuxtechi PS 2015-09-20 90 99999 12 10 (Password set, SHA512 crypt.)
|
||||
[root@linuxworld ~]#
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxtechi.com/10-passwd-command-examples-in-linux/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxtechi.com/author/pradeep/
Linux 中 df 命令的11个例子
|
||||
================================================================================
|
||||
|
||||
df 即“磁盘可用空间”(disk free),用于显示文件系统的磁盘使用情况。默认情况下,df 命令以 1K 块为单位显示所有当前已挂载的文件系统;如果你想以人类易读的格式显示输出,可以使用 -h 选项,即 “df -h”。
|
||||
|
||||
在这篇文章中,我们将讨论 `df` 命令在 Linux 下11种不同的实例。
|
||||
|
||||
在 Linux 下 df 命令的基本格式为:
|
||||
|
||||
# df {options} {mount_point_of_filesystem}
|
||||
|
||||
在 df 命令中可用的选项有:
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/10/df-command-options.jpg)
|
||||
|
||||
df 的样例输出 :
|
||||
|
||||
[root@linux-world ~]# df
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/vg00-root 17003304 804668 15311852 5% /
|
||||
devtmpfs 771876 0 771876 0% /dev
|
||||
tmpfs 777928 0 777928 0% /dev/shm
|
||||
tmpfs 777928 8532 769396 2% /run
|
||||
tmpfs 777928 0 777928 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 14987616 41000 14162232 1% /home
|
||||
/dev/sda1 487652 62593 395363 14% /boot
|
||||
/dev/mapper/vg00-var 9948012 48692 9370936 1% /var
|
||||
/dev/mapper/vg00-sap 14987656 37636 14165636 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例1:使用 -a 选项列出所有文件系统的磁盘使用量 ###
|
||||
|
||||
当我们在 df 命令中使用 `-a` 选项时,它会显示所有文件系统的磁盘使用情况。
|
||||
|
||||
[root@linux-world ~]# df -a
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
rootfs 17003304 804668 15311852 5% /
|
||||
proc 0 0 0 - /proc
|
||||
sysfs 0 0 0 - /sys
|
||||
devtmpfs 771876 0 771876 0% /dev
|
||||
securityfs 0 0 0 - /sys/kernel/security
|
||||
tmpfs 777928 0 777928 0% /dev/shm
|
||||
devpts 0 0 0 - /dev/pts
|
||||
tmpfs 777928 8532 769396 2% /run
|
||||
tmpfs 777928 0 777928 0% /sys/fs/cgroup
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/systemd
|
||||
pstore 0 0 0 - /sys/fs/pstore
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/cpuset
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacct
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/memory
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/devices
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/freezer
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/net_cls
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/blkio
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/perf_event
|
||||
cgroup 0 0 0 - /sys/fs/cgroup/hugetlb
|
||||
configfs 0 0 0 - /sys/kernel/config
|
||||
/dev/mapper/vg00-root 17003304 804668 15311852 5% /
|
||||
selinuxfs 0 0 0 - /sys/fs/selinux
|
||||
systemd-1 0 0 0 - /proc/sys/fs/binfmt_misc
|
||||
debugfs 0 0 0 - /sys/kernel/debug
|
||||
hugetlbfs 0 0 0 - /dev/hugepages
|
||||
mqueue 0 0 0 - /dev/mqueue
|
||||
/dev/mapper/vg00-home 14987616 41000 14162232 1% /home
|
||||
/dev/sda1 487652 62593 395363 14% /boot
|
||||
/dev/mapper/vg00-var 9948012 48692 9370936 1% /var
|
||||
/dev/mapper/vg00-sap 14987656 37636 14165636 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例2:以人类易读的格式显示 df 命令的输出 ###
|
||||
|
||||
在 df 命令中使用`-h`选项,以人类易读的格式输出(例如,5K,500M 及 5G)
|
||||
|
||||
[root@linux-world ~]# df -h
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
/dev/mapper/vg00-root 17G 786M 15G 5% /
|
||||
devtmpfs 754M 0 754M 0% /dev
|
||||
tmpfs 760M 0 760M 0% /dev/shm
|
||||
tmpfs 760M 8.4M 752M 2% /run
|
||||
tmpfs 760M 0 760M 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 15G 41M 14G 1% /home
|
||||
/dev/sda1 477M 62M 387M 14% /boot
|
||||
/dev/mapper/vg00-var 9.5G 48M 9.0G 1% /var
|
||||
/dev/mapper/vg00-sap 15G 37M 14G 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例3:显示特定文件系统已使用的空间 ###
|
||||
|
||||
假如我们想显示 /sap 文件系统空间的使用情况。
|
||||
|
||||
[root@linux-world ~]# df -h /sap/
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
/dev/mapper/vg00-sap 15G 37M 14G 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例4:输出所有已挂载文件系统的类型 ###
|
||||
|
||||
在 df 命令中使用 `-T` 选项可以显示文件系统的类型。
|
||||
|
||||
[root@linux-world ~]# df -T
|
||||
Filesystem Type 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/vg00-root ext4 17003304 804668 15311852 5% /
|
||||
devtmpfs devtmpfs 771876 0 771876 0% /dev
|
||||
tmpfs tmpfs 777928 0 777928 0% /dev/shm
|
||||
tmpfs tmpfs 777928 8532 769396 2% /run
|
||||
tmpfs tmpfs 777928 0 777928 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home ext4 14987616 41000 14162232 1% /home
|
||||
/dev/sda1 ext3 487652 62593 395363 14% /boot
|
||||
/dev/mapper/vg00-var ext3 9948012 48696 9370932 1% /var
|
||||
/dev/mapper/vg00-sap ext3 14987656 37636 14165636 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例5:按块大小输出文件系统磁盘使用情况 ###
|
||||
|
||||
[root@linux-world ~]# df -k
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/vg00-root 17003304 804668 15311852 5% /
|
||||
devtmpfs 771876 0 771876 0% /dev
|
||||
tmpfs 777928 0 777928 0% /dev/shm
|
||||
tmpfs 777928 8532 769396 2% /run
|
||||
tmpfs 777928 0 777928 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 14987616 41000 14162232 1% /home
|
||||
/dev/sda1 487652 62593 395363 14% /boot
|
||||
/dev/mapper/vg00-var 9948012 48696 9370932 1% /var
|
||||
/dev/mapper/vg00-sap 14987656 37636 14165636 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例6:输出文件系统的 inode 信息 ###
|
||||
|
||||
在 df 命令中使用 `-i` 选项可以显示文件系统的 inode 信息。
|
||||
|
||||
所有文件系统的 inode 信息:
|
||||
|
||||
[root@linux-world ~]# df -i
|
||||
Filesystem Inodes IUsed IFree IUse% Mounted on
|
||||
/dev/mapper/vg00-root 1089536 22031 1067505 3% /
|
||||
devtmpfs 192969 357 192612 1% /dev
|
||||
tmpfs 194482 1 194481 1% /dev/shm
|
||||
tmpfs 194482 420 194062 1% /run
|
||||
tmpfs 194482 13 194469 1% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 960992 15 960977 1% /home
|
||||
/dev/sda1 128016 337 127679 1% /boot
|
||||
/dev/mapper/vg00-var 640848 1235 639613 1% /var
|
||||
/dev/mapper/vg00-sap 960992 11 960981 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
特定文件系统的 inode 信息:
|
||||
|
||||
[root@linux-world ~]# df -i /sap/
|
||||
Filesystem Inodes IUsed IFree IUse% Mounted on
|
||||
/dev/mapper/vg00-sap 960992 11 960981 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例7:输出所有文件系统使用情况汇总 ###
|
||||
|
||||
`--total` 选项在 df 命令中用于显示所有文件系统磁盘使用情况的汇总。
|
||||
|
||||
[root@linux-world ~]# df -h --total
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
/dev/mapper/vg00-root 17G 786M 15G 5% /
|
||||
devtmpfs 754M 0 754M 0% /dev
|
||||
tmpfs 760M 0 760M 0% /dev/shm
|
||||
tmpfs 760M 8.4M 752M 2% /run
|
||||
tmpfs 760M 0 760M 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 15G 41M 14G 1% /home
|
||||
/dev/sda1 477M 62M 387M 14% /boot
|
||||
/dev/mapper/vg00-var 9.5G 48M 9.0G 1% /var
|
||||
/dev/mapper/vg00-sap 15G 37M 14G 1% /sap
|
||||
total 58G 980M 54G 2% -
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例8:只打印本地文件系统磁盘的使用情况 ###
|
||||
|
||||
假设网络文件系统也挂载在 Linux 上,但我们只想显示本地文件系统的信息,这可以通过使用 df 命令的 `-l` 选项来实现。
|
||||
|
||||
![](http://www.linuxtechi.com/wp-content/uploads/2015/10/nfs4-fs-mount.jpg)
|
||||
|
||||
只打印本地文件系统:
|
||||
|
||||
[root@linux-world ~]# df -Thl
|
||||
Filesystem Type Size Used Avail Use% Mounted on
|
||||
/dev/mapper/vg00-root ext4 17G 791M 15G 6% /
|
||||
devtmpfs devtmpfs 754M 0 754M 0% /dev
|
||||
tmpfs tmpfs 760M 0 760M 0% /dev/shm
|
||||
tmpfs tmpfs 760M 8.4M 752M 2% /run
|
||||
tmpfs tmpfs 760M 0 760M 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home ext4 15G 41M 14G 1% /home
|
||||
/dev/sda1 ext3 477M 62M 387M 14% /boot
|
||||
/dev/mapper/vg00-var ext3 9.5G 105M 8.9G 2% /var
|
||||
/dev/mapper/vg00-sap ext3 15G 37M 14G 1% /sap
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例9:打印特定文件系统类型的磁盘使用情况 ###
|
||||
|
||||
`-t` 选项在 df 命令中用来打印特定文件系统类型的信息,用 `-t` 指定文件系统的类型,如下所示:
|
||||
|
||||
对于 ext4 :
|
||||
|
||||
[root@linux-world ~]# df -t ext4
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/vg00-root 17003304 809492 15307028 6% /
|
||||
/dev/mapper/vg00-home 14987616 41000 14162232 1% /home
|
||||
[root@linux-world ~]#
|
||||
|
||||
对于 nfs4 :
|
||||
|
||||
[root@linux-world ~]# df -t nfs4
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
192.168.1.5:/opensuse 301545472 266833920 19371008 94% /data
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例10:使用 -x 选项排除特定的文件系统类型 ###
|
||||
|
||||
`-x` 或 `--exclude-type` 选项在 df 命令中用来在输出中排除某些文件系统类型。
|
||||
|
||||
假设我们想打印除 ext3 外所有的文件系统。
|
||||
|
||||
[root@linux-world ~]# df -x ext3
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/vg00-root 17003304 809492 15307028 6% /
|
||||
devtmpfs 771876 0 771876 0% /dev
|
||||
tmpfs 777928 0 777928 0% /dev/shm
|
||||
tmpfs 777928 8540 769388 2% /run
|
||||
tmpfs 777928 0 777928 0% /sys/fs/cgroup
|
||||
/dev/mapper/vg00-home 14987616 41000 14162232 1% /home
|
||||
192.168.1.5:/opensuse 301545472 266834944 19369984 94% /data
|
||||
[root@linux-world ~]#
|
||||
|
||||
### 例11:在 df 命令的输出中只打印特定的字段 ###
|
||||
|
||||
`--output={field_name1,field_name2...}` 选项用于只显示 df 命令输出中的某些字段。
|
||||
|
||||
可用的字段名有: `source`, `fstype`, `itotal`, `iused`, `iavail`, `ipcent`, `size`, `used`, `avail`, `pcent` 和 `target`
|
||||
|
||||
[root@linux-world ~]# df --output=fstype,size,iused
|
||||
Type 1K-blocks IUsed
|
||||
ext4 17003304 22275
|
||||
devtmpfs 771876 357
|
||||
tmpfs 777928 1
|
||||
tmpfs 777928 423
|
||||
tmpfs 777928 13
|
||||
ext4 14987616 15
|
||||
ext3 487652 337
|
||||
ext3 9948012 1373
|
||||
ext3 14987656 11
|
||||
nfs4 301545472 451099
|
||||
[root@linux-world ~]#
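借助 `--output`,还可以很方便地写一个简单的磁盘空间告警脚本。下面是一个示意,其中阈值 80% 是假设值,可按需调整;`--output` 选项需要较新版本的 GNU coreutils 支持:

```shell
# 列出使用率超过 80% 的挂载点
df --output=pcent,target | tail -n +2 | while read pcent target; do
  usage=${pcent%\%}                  # 去掉百分号,例如 "94%" -> "94"
  if [ "${usage:-0}" -gt 80 ]; then
    echo "WARNING: $target 使用率已达 $pcent"
  fi
done
```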
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxtechi.com/11-df-command-examples-in-linux/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxtechi.com/author/pradeep/
如何在 64 位 Ubuntu 15.10 中编译最新版 32 位 Wine
|
||||
================================================================================
|
||||
Wine 发布了最新的 1.7.53 版本。此版本带来了大量改进,包括 **XAudio** 支持、**Direct3D** 代码清理、改进的 **OLE 对象嵌入**技术、更完善的 **Web Services dll** 实现,还有其他大量更新。
|
||||
|
||||
![](http://www.tuxarena.com/wp-content/uploads/2015/10/wine1753a.jpg)
|
||||
|
||||
虽然有一个官方的 [Wine][1] PPA,但它目前只提供 1.7.44 版本,所以要安装最新版本,可以从源码编译。
|
||||
|
||||
[下载源码包][2]([直接下载地址在此][3])并解压 `tar -xf wine-1.7.53`。然后,安装如下依赖。
|
||||
|
||||
sudo apt-get install build-essential gcc-multilib libx11-dev:i386 libfreetype6-dev:i386 libxcursor-dev:i386 libxi-dev:i386 libxshmfence-dev:i386 libxxf86vm-dev:i386 libxrandr-dev:i386 libxinerama-dev:i386 libxcomposite-dev:i386 libglu1-mesa-dev:i386 libosmesa6-dev:i386 libpcap0.8-dev:i386 libdbus-1-dev:i386 libncurses5-dev:i386 libsane-dev:i386 libv4l-dev:i386 libgphoto2-dev:i386 liblcms2-dev:i386 gstreamer0.10-plugins-base:i386 libcapi20-dev:i386 libcups2-dev:i386 libfontconfig1-dev:i386 libgsm1-dev:i386 libtiff5-dev:i386 libmpg123-dev:i386 libopenal-dev:i386 libldap2-dev:i386 libgnutls-dev:i386 libjpeg-dev:i386
|
||||
|
||||
现在切换到 wine-1.7.53 解压后的文件夹,并输入:
|
||||
|
||||
./configure
|
||||
make
|
||||
sudo make install
|
||||
|
||||
同样地,你也可以给配置脚本指定 prefix 参数。以普通用户安装 wine:
|
||||
|
||||
./configure --prefix=$HOME/usr/bin
|
||||
make
|
||||
make install
|
||||
|
||||
这种情况下,Wine 将会安装在`$HOME/usr/bin/wine`,所以请检查`$HOME/usr/bin`在你的`PATH`变量中。
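可以先检查一下 `PATH` 中是否已包含该目录,若没有再把它追加到 shell 配置文件中。下面是一个示意,路径沿用本文示例的 prefix(即 `$HOME/usr/bin`):

```shell
# 检查 $HOME/usr/bin 是否已在 PATH 中;若没有,则追加到 ~/.bashrc
case ":$PATH:" in
  *":$HOME/usr/bin:"*)
    echo "already in PATH" ;;
  *)
    echo 'export PATH="$HOME/usr/bin:$PATH"' >> ~/.bashrc
    echo "added to ~/.bashrc, 重新登录后生效" ;;
esac
```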
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tuxarena.com/2015/10/how-to-compile-latest-wine-32-bit-on-64-bit-ubuntu-15-10/
|
||||
|
||||
作者:Craciun Dan
|
||||
译者:[VicYu/Vic020](http://vicyu.net)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://launchpad.net/~ubuntu-wine/+archive/ubuntu/ppa
|
||||
[2]:https://www.winehq.org/announce/1.7.53
|
||||
[3]:http://prdownloads.sourceforge.net/wine/wine-1.7.53.tar.bz2
LibreOffice 这五年(2010-2015)
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/plo8kP_ts-8?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
|
||||
[LibreOffice][1],来自文档基金会(The Document Foundation)一个自由开源的令人惊叹的办公套件。LO (LibreOffice)在2010年9月28日由 [OpenOffice.org][2] 分支出来;而 OOo (OpenOffice.org)则是早期的 [StarOffice][3] 开源版本。LibreOffice 支持文字处理,创建与编辑电子表格,幻灯片,图表和图形,数据库,数学公式的创建和编辑等。
|
||||
|
||||
### 核心应用: ###
|
||||
|
||||
- **Writer** – 文字处理器
|
||||
- **Calc** – 电子表格应用程序,类似于 Excel
|
||||
- **Impress** – 演示文稿应用,支持 Microsoft PowerPoint 的格式
|
||||
- **Draw** – 矢量图形编辑器
|
||||
- **Math** – 用于编写和编辑数学公式的特殊应用
|
||||
- **Base** – 数据库管理
|
||||
|
||||
![LibreOffice 3.3, 2011](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.3/Help-License-Info.png)
|
||||
|
||||
*LibreOffice 3.3, 2011*
|
||||
|
||||
这是LibreOffice 的第一个版本 - 分支自 OpenOffice.org
|
||||
|
||||
![LibreOffice 3.4](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.4/1cc80d1cada204a061402785b2048f7clibreoffice-3.4.3.png)
|
||||
|
||||
*LibreOffice 3.4*
|
||||
|
||||
![LibreOffice 3.5](https://raw.githubusercontent.com/paulcarroty/Articles/master/LO_History/3.5/libreoffice35-large_001.jpg)
|
||||
|
||||
*LibreOffice 3.5*
|
||||
|
||||
![LibreOffice 3.6](https://github.com/paulcarroty/Articles/raw/master/LO_History/3.6/libreoffice-3.6.0.png)
|
||||
|
||||
*LibreOffice 3.6*
|
||||
|
||||
![Libre Office 4.0](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.0/libreoffice-writer.png)
|
||||
|
||||
*LibreOffice 4.0*
|
||||
|
||||
![Libre Office 4.1](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.1/Writer1.png)
|
||||
|
||||
*LibreOffice 4.1*
|
||||
|
||||
![Libre Office 4.2](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.2/libreoffice-4.2.png)
|
||||
|
||||
*Libre Office 4.2*
|
||||
|
||||
![LibreOffice 4.3](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.3/libreoffice.jpg)
|
||||
|
||||
*LibreOffice 4.3*
|
||||
|
||||
![LibreOffice 4.4](https://github.com/paulcarroty/Articles/raw/master/LO_History/4.4/LibreOffice_Writer_4_4_2.png)
|
||||
|
||||
*LibreOffice 4.4*
|
||||
|
||||
![Libre Office 5.0](https://github.com/paulcarroty/Articles/raw/master/LO_History/5.0/LibreOffice_Writer_5.0.png)
|
||||
|
||||
*LibreOffice 5.0*
|
||||
|
||||
### Libre Office 的发展,出自 Wikipedia ###
|
||||
|
||||
![StarOffice major derivatives](https://commons.wikimedia.org/wiki/File%3AStarOffice_major_derivatives.svg)
|
||||
|
||||
### LibreOffice 5.0 预览 ###
|
||||
|
||||
注:youtube 视频
|
||||
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/BVdofVqarAc?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/libreoffice-5years-evolution/
|
||||
|
||||
作者:[Pavlo Rudyi][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:http://www.libreoffice.org/
|
||||
[2]:https://www.openoffice.org/
|
||||
[3]:http://www.staroffice.org/
Linux 的历史:24 年,一步一个脚印
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/84cHeoEebJM?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
### 史前 ###
|
||||
|
||||
没有 [C 编程语言][1] 和 [GNU 项目][2] 构成 Linux 环境,也就不可能有 Linux 的成功。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/00-1.jpg)
|
||||
|
||||
*Ken Thompson 和 Dennis Ritchie*
|
||||
|
||||
[Ken Thompson][1] 和 [Dennis Ritchie][2] 在 1969-1970 创造了 Unix 操作系统。之后发布了新的 [C 编程语言][3],它是一种高级的、可移植的编程语言。 Linux 内核用 C 和一些汇编代码写成。
|
||||
|
||||
![Richard Matthew Stallman](https://github.com/paulcarroty/Articles/raw/master/Linux_24/00-2.jpg)
|
||||
|
||||
*Richard Matthew Stallman*
|
||||
|
||||
[Richard Matthew Stallman][4] 在 1984 年启动了 [GNU 项目][5],其最大的目标是构建一个完全自由的类 Unix 操作系统。
|
||||
|
||||
### 1991 – 元年 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1991-1.jpg)
|
||||
|
||||
*Linus Torvalds, 1991*
|
||||
|
||||
[Linus Torvalds][5] 在芬兰赫尔辛基开始了 Linux 内核的开发,最初是为他自己的硬件 - Intel 80386 CPU - 编写的。他还使用了 Minix 和 GNU C 编译器。下面是 Linus Torvalds 发到 Minix 新闻组的那封历史性消息:
|
||||
|
||||
> From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
|
||||
> Newsgroups: comp.os.minix
|
||||
> Subject: What would you like to see most in minix?
|
||||
> Summary: small poll for my new operating system
|
||||
> Message-ID:
|
||||
> Date: 25 Aug 91 20:57:08 GMT
|
||||
> Organization: University of Helsinki
|
||||
>
|
||||
>
|
||||
> Hello everybody out there using minix -
|
||||
>
|
||||
> I'm doing a (free) operating system (just a hobby, won't be big and
|
||||
> professional like gnu) for 386(486) AT clones. This has been brewing
|
||||
> since april, and is starting to get ready. I'd like any feedback on
|
||||
> things people like/dislike in minix, as my OS resembles it somewhat
|
||||
> (same physical layout of the file-system (due to practical reasons)
|
||||
> among other things).
|
||||
>
|
||||
> I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
|
||||
> This implies that I'll get something practical within a few months, and
|
||||
> I'd like to know what features most people would want. Any suggestions
|
||||
> are welcome, but I won't promise I'll implement them :-)
|
||||
>
|
||||
> Linus (torvalds@kruuna.helsinki.fi)
|
||||
|
||||
从此之后,Linux 开始得到了世界范围志愿者和专业专家的支持。Linus 的同事 Ari Lemmke 把它命名为 “Linux” - 这其实是他们的大学 ftp 服务器上的项目目录名称。
|
||||
|
||||
### 1992 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1992-1.jpg)
|
||||
|
||||
在 GPLv2 协议下发布了 0.12 版 Linux 内核。
|
||||
|
||||
### 1993 ###
|
||||
|
||||
![Slackware 1.0 ](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1993-1.png)
|
||||
|
||||
Slackware 首次发布(LCTT 译注:Slackware Linux 是一个高度技术性的、干净的发行版,只有少量非常有限的个人设置) – 最早的 Linux 发行版,其领导者 Patrick Volkerding 也是最早的。其时,Linux 内核有 100 多个开发者。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1993-2.png)
|
||||
|
||||
*Debian*
|
||||
|
||||
Debian – 最大的 Linux 社区之一,也创立于 1993 年。
|
||||
|
||||
### 1994 ###
|
||||
|
||||
Linux 1.0 发布了,并且多亏了 XFree86 项目,第一次有了 GUI。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1994-1.png)
|
||||
|
||||
*Red Hat Linux*
|
||||
|
||||
发布了 Red Hat Linux 1.0
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1994-2.png)
|
||||
|
||||
*S.u.S.E Linux*
|
||||
|
||||
和 [S.u.S.E. Linux][6] 1.0。
|
||||
|
||||
### 1995 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1995-1.png)
|
||||
|
||||
*Red Hat Inc.*
|
||||
|
||||
Bob Young 和 Marc Ewing 合并他们的本地业务为 [Red Hat Software][7]。Linux 移植到了很多硬件平台。
|
||||
|
||||
### 1996 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1996-1.png)
|
||||
|
||||
*Tux*
|
||||
|
||||
企鹅 Tux 是 Linux 官方吉祥物,Linus Torvalds 参观了堪培拉国家动物园和水族馆之后有了这个想法。发布了 Linux 2.0,支持对称多处理器。开始开发 KDE。
|
||||
|
||||
### 1997 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1997-1.jpg)
|
||||
|
||||
*Miguel de Icaza*
|
||||
|
||||
Miguel de Icaza 和 Federico Mena 开始开发 GNOME - 自由桌面环境和应用程序。Linus Torvalds 赢得了 Linux 商标冲突官司,Linux 成为了 Linus Torvalds 的注册商标。
|
||||
|
||||
### 1998 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1998-1.jpg)
|
||||
|
||||
*大教堂和集市*
|
||||
|
||||
Eric S. Raymond 出版了文章 [The Cathedral and the Bazaar(大教堂和集市)][8] - 高度推荐阅读。Linux 得到了大公司的支持: IBM、Oracle、康柏。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/1998-2.png)
|
||||
|
||||
*Mandrake Linux*
|
||||
|
||||
Mandrake Linux 首次发布 - 基于红帽 Linux 的发行版,带有 KDE 桌面环境。
|
||||
|
||||
### 1999 ###
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/4/4f/KDE_1.1.jpg)
|
||||
|
||||
第一个主要的 KDE 版本。
|
||||
|
||||
### 2000 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2000-1.jpg)
|
||||
|
||||
Dell 支持 Linux - 这是第一个支持的大硬件供应商。
|
||||
|
||||
### 2001 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2001-1.jpg)
|
||||
|
||||
*Revolution OS*
|
||||
|
||||
纪录片 “Revolution OS(操作系统革命)” - GNU、Linux、开源、自由软件的 20 年历史,以及对 Linux 和开源界顶级黑客的采访。
|
||||
|
||||
### 2002 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2002-1.jpg)
|
||||
|
||||
*BitKeeper*
|
||||
|
||||
Linux 开始使用 BitKeeper,这是一种商业版的分布式版本控制软件。
|
||||
|
||||
### 2003 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2003-1.png)
|
||||
|
||||
*SUSE*
|
||||
|
||||
Novell 用 2.1 亿美元购买了 SUSE Linux AG。同年 SCO 集团 也开始了同 IBM 以及 Linux 社区关于 Unix 版权的艰难的法律诉讼。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2003-2.png)
|
||||
|
||||
*Fedora*
|
||||
|
||||
红帽和 Linux 社区首次发布了 Fedora Linux。
|
||||
|
||||
### 2004 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2004-1.png)
|
||||
|
||||
*X.ORG 基金会*
|
||||
|
||||
XFree86 解散了并加入到 [X.Org 基金会][9], X 的开发更快了。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2004-2.jpg)
|
||||
|
||||
Ubuntu 4.10 – Ubuntu 首次发布
|
||||
|
||||
### 2005 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2005-1.png)
|
||||
|
||||
*openSUSE*
|
||||
|
||||
[openSUSE][10] 项目启动了,它是 Novell 企业版操作系统的免费社区版本。OpenOffice.org 开始支持 OpenDocument 标准。
|
||||
|
||||
### 2006 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2006-1.png)
|
||||
|
||||
Oracle 发布了基于红帽企业版 Linux 的新发行版 Oracle Linux。微软和 Novell 开始在 IT 和专利保护方面进行合作。
|
||||
|
||||
### 2007 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2007-1.jpg)
|
||||
|
||||
*Dell Linux 笔记本*
|
||||
|
||||
Dell 发布了第一个预装 Linux 的笔记本。
|
||||
|
||||
### 2008 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2008-1.jpg)
|
||||
|
||||
*KDE 4.0*
|
||||
|
||||
KDE 4 发布了,但是不稳定,很多用户开始迁移到 GNOME。
|
||||
|
||||
### 2009 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2009-1.jpg)
|
||||
|
||||
*Red Hat*
|
||||
|
||||
红帽 Linux 取得了成功 - 市值达 26亿2千万美元。
|
||||
|
||||
2009 年微软在 GPLv2 协议下向 Linux 内核提交了第一个补丁。
|
||||
|
||||
### 2010 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2010-1.png)
|
||||
|
||||
*Novell -> Attachmate*
|
||||
|
||||
Novell 已 22亿美元卖给了 Attachmate Group, Inc。SUSE 和 Novell 成为了新公司的两款独立的产品。
|
||||
|
||||
[systemd][11] 首次发布,开始了 Linux 系统的革命。
|
||||
|
||||
### 2011 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2011-1.png)
|
||||
|
||||
*Unity 桌面,2011*
|
||||
|
||||
Ubuntu Unity 发布,遭到很多用户的批评。
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2011-2.png)
|
||||
|
||||
*GNOME 3.0,2011*
|
||||
|
||||
GNOME 3.0 发布, Linus Torvalds 评论为 “unholy mess” ,有很多负面评论。Linux 内核 3.0 发布。
|
||||
|
||||
### 2012 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2012-1.png)
|
||||
|
||||
*1500 万行代码*
|
||||
|
||||
Linux 内核达到 1500 万行代码。微软成为主要贡献者之一。
|
||||
|
||||
### 2013 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2013-1.png)
|
||||
|
||||
Kali Linux 1.0 发布, 用于渗透测试和数字取证,基于 Debian 的 Linux 发行版。2014 年 CentOS 及其代码开发者加入到了红帽公司。
|
||||
|
||||
### 2014 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2014-1.jpg)
|
||||
|
||||
*Lennart Poettering 和 Kay Sievers*
|
||||
|
||||
systemd 成为 Ubuntu 和所有主流 Linux 发行版的默认初始化程序。Ubuntu 有 2200 万用户。安卓的大进步 - 占了所有移动设备的 75% 份额。
|
||||
|
||||
### 2015 ###
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/Linux_24/2015-1.jpg)
|
||||
|
||||
发布了 Linux 4.0。Mandriva 公司清算,但还有很多分支,其中最流行的一个是 Mageia。
|
||||
|
||||
带着对 Linux 的热爱而执笔。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/linux-history/
|
||||
|
||||
作者:[Pavlo Rudyi][a]
|
||||
译者:[ictlyh](http://mutouxiaogui.cn/blog)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:https://en.wikipedia.org/wiki/C_(programming_language)
|
||||
[2]:https://en.wikipedia.org/wiki/GNU_Project
|
||||
[3]:https://en.wikipedia.org/wiki/Ken_Thompson
|
||||
[4]:https://en.wikipedia.org/wiki/Dennis_Ritchie
|
||||
[5]:https://en.wikipedia.org/wiki/Linus_Torvalds
|
||||
[6]:https://en.wikipedia.org/wiki/SUSE_Linux_distributions
|
||||
[7]:https://en.wikipedia.org/wiki/Red_Hat
|
||||
[8]:https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
|
||||
[9]:http://www.x.org/
|
||||
[10]:https://en.opensuse.org/Main_Page
|
||||
[11]:https://en.wikipedia.org/wiki/Systemd
|
published/20151012 How To Use iPhone In Antergos Linux.md

如何在 Antergos Linux 中使用 iPhone
================================================================================

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-Antergos-Arch-Linux.jpg)

在 Arch Linux 中使用 iPhone 遇到麻烦了么?iPhone 和 Linux 从来都没有很好地集成。本教程中,我会向你展示如何在 Antergos Linux 中使用 iPhone,对于同样基于 Arch 的 Linux 发行版如 Manjaro 也应该同样管用。

我最近购买了一台全新的 iPhone 6S,当我连接到 Antergos Linux 中要拷贝一些照片时,它完全没有被检测到。我看见 iPhone 正在充电,并且我已经允许了 iPhone “信任这台电脑”,但是还是完全没有检测到。我尝试运行 `dmesg`,但是没有任何关于 iPhone 或者 Apple 的信息。有趣的是,当我安装好 [libimobiledevice][1] 之后,问题就解决了,它同样可以解决 [iPhone 在 Ubuntu 中的挂载问题][2]。

我会向你展示如何在 Antergos 中使用运行 iOS 9 的 iPhone 6S。过程会更多地用到命令行,但是我假设你用的是 Arch Linux,并不惧怕使用终端(也不应该惧怕)。

### 在 Arch Linux 中挂载 iPhone ###

**第一步**:如果已经插入,请拔下你的 iPhone。

**第二步**:现在,打开终端,输入下面的命令来安装必要的包。如果它们已经安装过了也没有关系。

    sudo pacman -Sy ifuse usbmuxd libplist libimobiledevice

**第三步**:这些库和程序安装完成后,重启系统。

    sudo reboot

**第四步**:创建一个 iPhone 的挂载目录,我建议在家目录中创建一个 iPhone 目录。

    mkdir ~/iPhone

**第五步**:解锁你的手机并插入,如果询问是否信任该计算机,请选择信任。

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-2.jpeg)

**第六步**:看看这时 iPhone 是否已经被机器识别了。

    dmesg | grep -i iphone

这时就该显示 iPhone 和 Apple 的结果了。就像这样:

    [   31.003392] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached
    [   40.950883] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected
    [   47.471897] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached
    [   82.967116] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected
    [  106.735932] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached

这意味着 iPhone 已经被 Antergos/Arch 成功地识别了。

**第七步**:设置完成后,是时候挂载 iPhone 了,使用下面的命令:

    ifuse ~/iPhone

由于我们在家目录中创建了挂载目录,你不需要 root 权限就可以在家目录中看见它。如果命令成功了,你就不会看见任何输出。

回到文件管理器(Files)看下 iPhone 是否已经被识别。对于我而言,在 Antergos 中看上去是这样:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux.jpeg)

你可以在这个目录中访问文件,从这里把文件复制出来或者复制进去。

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-1.jpeg)

**第八步**:当你想要卸载的时候,使用这个命令:

    sudo umount ~/iPhone
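上面的第四步到第八步可以整理成一个小脚本。下面是一个示意脚本(`DRY_RUN` 变量和函数名都是本文之外的假设,仅供参考,并非官方工具):

```shell
#!/bin/sh
# 示意脚本(假设性示例):把上文第四、七、八步整理成两个函数。
# DRY_RUN=1 时只打印将要执行的命令,方便在没有插入 iPhone 时检查流程。

MOUNT_POINT="${MOUNT_POINT:-$HOME/iPhone}"
DRY_RUN="${DRY_RUN:-1}"   # 默认为演示模式;真正使用时设为 0

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

iphone_mount() {
    mkdir -p "$MOUNT_POINT"            # 第四步:创建挂载目录
    run ifuse "$MOUNT_POINT"           # 第七步:挂载 iPhone
}

iphone_umount() {
    run fusermount -u "$MOUNT_POINT"   # 第八步:卸载(文中用 sudo umount,效果相同)
}

# 演示(DRY_RUN=1 时仅打印命令):
iphone_mount
iphone_umount
```

真正挂载时,在安装好上述软件包并信任电脑之后,以 `DRY_RUN=0` 运行即可。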

### 对你有用么? ###

我知道这并不是非常方便和理想,iPhone 本应该像其他 USB 设备那样即插即用,但是事情并不总是像人们想的那样。好的一面是,一点小小的 DIY 就能解决这个问题,还能带来一点成就感(至少对我而言)。我必须要说的是,Antergos 应该修复这个问题,让 iPhone 可以默认挂载。

这个技巧对你有用么?如果你有任何问题或者建议,欢迎留下评论。

--------------------------------------------------------------------------------

via: http://itsfoss.com/iphone-antergos-linux/

作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.libimobiledevice.org/
[2]:http://itsfoss.com/mount-iphone-ipad-ios-7-ubuntu-13-10/

@@ -1,12 +1,12 @@

Linux有问必答:如何找出Linux中内置模块的信息
================================================================================

> **提问**:我想要知道Linux系统中内核内置的模块,以及每个模块有哪些参数。有什么方法可以得到内置模块和设备驱动的列表,以及它们的详细信息呢?

现代Linux内核正在随着时间变化而迅速增长,以支持大量的硬件、文件系统和网络功能。在此期间,“可加载模块(loadable kernel modules,LKM)”的引入防止了内核变得越来越臃肿,并且可以在不同的环境中灵活地扩展功能及硬件支持,而不必重新构建内核。

最新的Linux发行版的内核只带了相对较小的“内置模块(built-in modules)”,其余的特定硬件驱动或者自定义功能作为“可加载模块”来让你选择性地加载或卸载。

内置模块被静态地编译进了内核。不像可加载内核模块可以动态地使用`modprobe`、`insmod`、`rmmod`、`modinfo`或者`lsmod`等命令来加载、卸载、查询,内置的模块总是在启动时就加载进了内核,不会被这些命令管理。

### 找出内置模块列表 ###

@@ -22,13 +22,13 @@

### 找出内置模块参数 ###

每个内核模块无论是内置的还是可加载的都有一系列的参数。对于可加载模块,`modinfo`命令可以显示它们的参数信息。然而这个命令对内置模块没有用,你会得到下面的错误。

    modinfo: ERROR: Module XXXXXX not found.

如果你想要查看内置模块的参数,以及它们的值,你可以在 **/sys/module** 下检查它们的内容。

在 /sys/module 目录下,你可以找到以内核模块(包含内置和可加载的)命名的子目录。进入每个模块目录,这里有个“parameters”目录,列出了这个模块所有的参数。

比如你要找出 tcp_cubic(内核默认的TCP实现)模块的参数,你可以这么做:
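diff 在这里截断了示例命令。作为参考,下面给出一个示意脚本(脚本和函数名均为本文之外的假设),它基于上文描述的 /sys/module 布局列出某个模块的全部参数及其值;为了便于演示,基础目录可以通过第二个参数覆盖:

```shell
#!/bin/sh
# 示意脚本:列出一个内核模块的参数及其值(基于上文描述的 /sys/module 布局)。
# 用法: show_params <模块名> [基础目录,默认 /sys/module]

show_params() {
    mod="$1"
    base="${2:-/sys/module}"
    dir="$base/$mod/parameters"
    [ -d "$dir" ] || { echo "no parameters for $mod" >&2; return 1; }
    for p in "$dir"/*; do
        [ -e "$p" ] || continue
        # 个别参数文件可能没有读取权限,用 2>/dev/null 跳过读取失败
        printf '%s = %s\n' "$(basename "$p")" "$(cat "$p" 2>/dev/null)"
    done
}

# 演示:查看 tcp_cubic(内核默认的 TCP 实现)的参数;
# 如果当前内核没有这个模块目录,则静默跳过
show_params tcp_cubic || true
```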

@@ -46,7 +46,7 @@

via: http://ask.xmodulo.com/find-information-builtin-kernel-modules-linux.html

作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
published/20151012 What is a good IDE for R on Linux.md

Linux 上好用的 R 语言 IDE
================================================================================

前一段时间,我已经介绍过 [Linux 上针对 C/C++ 语言的最好 IDE][1]。很显然 C 或 C++ 并不是现存的唯一的编程语言,是时候讨论一些更加特别的语言了。

假如你做过一些统计工作,很可能你已经见识过 [R 语言][2] 了。假如你还没有,我真的非常推荐这门专为统计和数据挖掘而生的开源编程语言。若你拥有编程背景,它的语法可能会使你感到有些不适应,但希望它的向量化操作所带来的高速运算能够吸引到你。简而言之,请尝试使用一下这门语言。而要做到这一点,使用一个好的 IDE 来入门或许会更好。R 作为一门跨平台的语言,有着一大批好用的 IDE,它们使得用 R 语言进行数据分析变得更惬意。假如你非常钟意一个特定的编辑器,这里也有一些好用的插件来将它转变为一个成熟的 R 语言 IDE。

下面就让我们见识一下 Linux 环境下 5 个针对 R 语言的好用 IDE 吧。

### 1. RStudio ###

![](https://c1.staticflickr.com/1/603/22093054381_431383ab60_c.jpg)

就让我们以或许是最为人们喜爱的 R IDE —— [RStudio][3] 来开始我们的介绍吧。除了一般 IDE 所提供的诸如语法高亮、代码补全等功能,RStudio 还因其集成了 R 语言帮助文档、强大的调试器、多视图系统而突出。如果你准备入门 R 语言,我只建议你将 RStudio 作为你的 R 语言控制台,一方面用它来实时测试代码是很完美的,另外对象浏览器可以帮助你理解你正在处理的是哪类数据。最后,真正征服我的是它集成的图形显示器,使得你能够更轻松地将图形输出为图片文件。至于它不好的方面,RStudio 缺乏快捷键和高级设置,因而还算不上一个完美的 IDE。然而,它有一个以 AGPL 协议发布的免费版本,Linux 用户没有借口不去试试这个 IDE。

### 2. 带有 ESS 插件的 Emacs ###

![](https://c2.staticflickr.com/6/5824/22056857776_a14a4e7e1b_c.jpg)

在我前一篇有关 IDE 的文章中,很多朋友对我所给出的清单中没有 Emacs 而感到失望。对于这个,我的主要理由是 Emacs 可以说是 IDE 里面的“通配符”:你可以将它放到任意语言的 IDE 清单中。但对于[带有 ESS 插件的 R][4] 来说,事情就变得有些不同了。Emacs Speaks Statistics(ESS)是一个令人惊异的插件,它将完全改变你使用 Emacs 编辑器的方式,真的非常适合 R 编程者的需求。与 RStudio 类似,带有 ESS 的 Emacs 拥有多视图,它有两个面板:一个显示代码,另一个则是一个 R 控制台,使得实时地测试代码和探索数据对象变得更加容易。但 ESS 真正的长处是可以和你已安装的其他 Emacs 插件无缝集成,以及它的高级配置选项。简而言之,如果你喜欢你的 Emacs 快捷键,你将能够在 R 语言开发环境下继续使用它们。然而,当你在 ESS 中处理大量数据时,我曾听闻并亲身经历过一些效率低下的问题。尽管这个问题不是很严重,但足以让我更偏好 RStudio。

### 3. Vim 及 Vim-R-plugin ###

![](https://c1.staticflickr.com/1/680/22056923916_abe3531bb4_b.jpg)

在谈论完 Emacs 后,因为我不想挑起 Emacs 和 Vim 的优劣之争,所以我尽力给予 Vim 同样的待遇,下面介绍 [Vim R 插件][5]。配合名为 tmux 的终端复用工具,它使得在书写 R 代码的同时开启一个 R 控制台成为可能。但最为重要的是,它还为 Vim 带来了 R 语言的语法高亮和自动补全。你还可以轻易地获取 R 帮助文档和浏览数据对象。但再次强调,这些强大的功能来源于它大量的自定义选项和 Vim 的速度。假如你被这些功能所诱惑,我希望你能够通读介绍如何安装这个插件并设置相关环境的[文档][6]。

### 4. 带有 RGedit 的 Gedit ###

![](https://c1.staticflickr.com/1/761/22056923956_1413f60b42_c.jpg)

若 Emacs 和 Vim 都不是你的菜,而你恰好喜欢默认的 Gnome 编辑器,则 [RGedit][7] 就是专门为你而生的:它是 Gedit 的一个专门编辑 R 代码的插件。Gedit 比你以为的更强大,配上大量的插件,就有可能用它来做许许多多的事情。而 RGedit 恰好就是你编辑 R 代码所需要的那款插件。它支持传统的语法高亮并在屏幕下方集成了 R 控制台,但它还有一大类独特的功能,例如多文件编辑、代码折叠、文件查看器,甚至还有一个 GUI 向导用来从 snippets 产生代码。尽管我对 Gedit 并不感冒,但我必须承认这些功能比一般插件的功能更多,并且在你花费很长时间分析数据时它会有很大的帮助。唯一的不足是它的最后一次更新是在 2013 年,我真的希望这个项目能够重新焕发新生。

### 5. RKWard ###

![](https://c2.staticflickr.com/6/5643/21896132829_2ea8f3a320_c.jpg)

压轴登场的并非无足轻重,作为这个清单的最后一项,[RKWard][8] 是一个 KDE 环境下的 R 语言 IDE。我最喜爱它的一点是它的名称。但说老实话,它的包管理系统和类似电子表格的数据编辑器排在我最喜欢它的理由的第二位。除了这些,它还包含一个简单的用来画图和导入数据的系统,另外它还可以使用插件来扩展功能。假如你不是一个 KDE 迷,或许你会有点不喜欢它,但若你是,我真的建议你考虑使用它。

总的来说,无论你是否刚入门 R 语言,这些 IDE 对你或许都有些帮助。假如你更偏好某个软件自身所提供的东西,或者偏好针对你喜爱的编辑器的插件,这些都没有什么问题,我确信你将感激这些软件所提供的某些功能。同时我还确信我遗漏了很多好用的针对 R 语言的 IDE,或许它们也值得罗列在这个清单上。鉴于你们在上一篇针对 C/C++ 的最好 IDE 的文章中留下了很多非常有用的评论,我也邀请你们在这里做出同样精彩的评论并分享出你的知识。

关于 Linux 下针对 R 语言的好用编辑器,你有什么看法呢?请在下面的评论中让我们知晓。

--------------------------------------------------------------------------------

via: http://xmodulo.com/good-ide-for-r-on-linux.html

作者:[Adrien Brochard][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/adrien
[1]:http://xmodulo.com/good-ide-for-c-cpp-linux.html
[2]:https://www.r-project.org/
[3]:https://www.rstudio.com/
[4]:http://ess.r-project.org/
[5]:http://www.vim.org/scripts/script.php?script_id=2628
[6]:http://www.lepem.ufc.br/jaa/r-plugin.html
[7]:http://rgedit.sourceforge.net/
[8]:https://rkward.kde.org/

Linux有问必答:当使用代理服务器连接互联网时如何安装 Ubuntu 桌面版
================================================================================

> **提问:** 我的电脑连接到的公司网络是使用HTTP代理连上互联网的。当我想使用CD-ROM安装Ubuntu时,安装程序在尝试获取文件时停滞了,可能是由于代理的原因。然而问题是Ubuntu的安装程序从来没有在安装过程中提示我配置代理。我该怎样通过代理服务器安装Ubuntu桌面版?

不像Ubuntu服务器版,Ubuntu桌面版的安装非常自动化,没有留下太多的自定义空间,例如自定义磁盘分区、手动网络设置、软件包选择等等。虽然这种简单的一键安装被认为对用户友好,但对于那些寻找“高级安装模式”来定制自己的Ubuntu桌面安装的用户来说却并不理想。

除此之外,默认的Ubuntu桌面版安装器的一个大问题是缺少代理设置。如果你的电脑在代理后面,你会看到Ubuntu在准备下载文件的时候停滞了。

![](https://c2.staticflickr.com/6/5683/22195372232_cea81a5e45_c.jpg)

这篇文章描述了如何绕过Ubuntu安装器的这个限制,以及**如何通过代理服务器安装Ubuntu桌面版**。

基本的思路是这样的:首先启动到live Ubuntu桌面中而不是直接启动Ubuntu安装器,配置好代理设置,然后手动在live Ubuntu中启动Ubuntu安装器。下面是步骤。

从Ubuntu桌面版CD/DVD或者USB启动后,在欢迎页面点击“Try Ubuntu”。

![](https://c1.staticflickr.com/1/586/22195371892_3816ba09c3_c.jpg)

当你进入live Ubuntu后,点击左边的设置图标。

![](https://c1.staticflickr.com/1/723/22020327738_058610c19d_c.jpg)

进入网络菜单。

![](https://c2.staticflickr.com/6/5675/22021212239_ba3901c8bf_c.jpg)

手动配置代理。

![](https://c1.staticflickr.com/1/735/22020025040_59415e0b9a_c.jpg)

接下来,打开终端。

![](https://c2.staticflickr.com/6/5642/21587084823_357b5c48cb_c.jpg)

输入下面的命令进入root会话。

    $ sudo su

最后以root权限输入下面的命令。

    # ubiquity gtk_ui

它会启动基于GUI的Ubuntu安装器。
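除了在图形界面里配置代理之外,也可以在同一个 root 会话中先导出代理环境变量再启动安装器。下面是一个示意(代理地址 proxy.example.com:8080 是假设值;能否被安装器沿用取决于具体版本,正文的图形化配置方法更为可靠):

```shell
#!/bin/sh
# 示意:在启动 ubiquity 前导出代理环境变量(代理地址为假设值)。
proxy_url="http://proxy.example.com:8080/"

export http_proxy="$proxy_url"
export https_proxy="$proxy_url"
export ftp_proxy="$proxy_url"

echo "using proxy: $http_proxy"
# 然后在同一个会话中启动安装器:
# ubiquity gtk_ui
```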

![](https://c1.staticflickr.com/1/723/22020025090_cc64848b6c_c.jpg)

接着完成剩余的安装。

![](https://c1.staticflickr.com/1/628/21585344214_447020e9d6_c.jpg)

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/install-ubuntu-desktop-behind-proxy.html

作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
published/20151027 How To Show Desktop In GNOME 3.md

如何在 GNOME 3 中显示桌面
================================================================================

![How to show desktop in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-in-GNOME-3.jpg)

你**如何在 GNOME 3 中显示桌面**?GNOME 是一个很棒的桌面环境,但是它更加专注于在程序间切换。如果你想关闭所有运行中的窗口,仅仅显示桌面呢?

在 Windows 中,你可以按下 Windows+D。在 Ubuntu Unity 中,可以用 Ctrl+Super+D 快捷键。不过由于某些原因,GNOME 禁用了显示桌面的快捷键。

当你按下 Super+D 或者 Ctrl+Super+D 时,什么都不会发生。如果你想要看到桌面,你得一个个最小化窗口。如果你打开了好几个窗口,那么这会非常不方便。

在本教程中,我将会向你展示如何在 [GNOME 3][1] 中添加显示桌面的快捷键。

### 在 GNOME 3 中添加显示桌面的快捷键 ###

我在本教程中使用的是带有 GNOME 3.18 的 [Antergos Linux][2],但是这些步骤对于任何使用 GNOME 3 的 Linux 发行版都适用。同时,Antergos 使用了 [Numix 主题][3]作为默认主题,因此你也许不会看到平常的 GNOME 图标。但是我相信步骤是一目了然的,很容易就能理解。

#### 第一步 ####

进入系统设置。点击右上角,在下拉列表中,点击系统设置图标。

![System Settings in GNOME Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-1.png)

#### 第二步 ####

进入系统设置后,寻找 Keyboard 设置。

![Keyboard settings in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-2.png)

#### 第三步 ####

在这里,选择 **Shortcuts** 标签并在左边栏选择 **Navigation**。向下滚动一点,查找 **Hide all normal windows**。你会看见它已经被禁用了。

![Shortcut keys in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-3.jpeg)

#### 第四步 ####

在 “Hide all normal windows” 上面点击一下。你会看到它变成了 **New accelerator**。现在无论你按下哪个组合键,它都会被指定为显示桌面的快捷键。

如果你不小心按下了错误的组合键,只要按下退格键它就会恢复为禁用状态。再次点击并按下需要的组合键即可。

![Shortcut key edit in GNOME 3](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-4.jpeg)

#### 第五步 ####

一旦设置了组合键,只需关闭系统设置即可。不用保存设置,因为更改是立即生效的。在本例中,我使用 Ctrl+Super+D,与我在 Ubuntu Unity 中的使用习惯保持一致。

![Keyboard shortcut edit in GNOME](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Show-Desktop-GNOME-5.jpeg)

就是这样。享受 GNOME 3 中的显示桌面快捷键吧。我希望这篇教程对你们有用。有任何问题、建议或者留言都欢迎 :)
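如果你更喜欢命令行,GNOME 也通过 gsettings 暴露了这个按键绑定(org.gnome.desktop.wm.keybindings 模式下的 show-desktop 键)。下面的示意脚本先拼出完整命令;在没有图形会话的环境里保持默认的 DRY_RUN=1 只打印命令,在 GNOME 会话中设为 0 才真正执行:

```shell
#!/bin/sh
# 示意:用 gsettings 设置“显示桌面”快捷键,效果等价于上文的图形化步骤。
binding="['<Primary><Super>d']"   # 即 Ctrl+Super+D
cmd="gsettings set org.gnome.desktop.wm.keybindings show-desktop \"$binding\""

if [ "${DRY_RUN:-1}" = "1" ]; then
    # 默认只打印命令,便于确认
    echo "$cmd"
else
    gsettings set org.gnome.desktop.wm.keybindings show-desktop "$binding"
fi
```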

--------------------------------------------------------------------------------

via: http://itsfoss.com/show-desktop-gnome-3/

作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Caroline](https://github.com/carolinewuyan)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:https://www.gnome.org/gnome-3/
[2]:http://itsfoss.com/tag/antergos/
[3]:https://linux.cn/article-3281-1.html

Ubuntu 软件中心将在 16.04 LTS 中被替换
================================================================================

![The USC Will Be Replaced](http://www.omgubuntu.co.uk/wp-content/uploads/2011/09/usc1.jpg)

*Ubuntu 软件中心将在 Ubuntu 16.04 LTS 中被替换。*

Ubuntu Xenial Xerus 桌面用户将会发现,这个熟悉的(并有些繁琐的)Ubuntu 软件中心将不再可用。

按照目前的计划,GNOME 的[软件应用(Software application)][1]将作为基于 Unity 7 的桌面的默认包管理工具。

![GNOME Software](http://www.omgubuntu.co.uk/wp-content/uploads/2013/09/gnome-software.jpg)

*GNOME 软件应用*

这次变化带来的一个结果是,将开发新的插件来支持软件中心的评级、评论和应用程序付费功能。

该决定是在伦敦的 Canonical 总部最近举行的一次桌面峰会上作出的。

“相对于 Ubuntu 软件中心,我们认为我们在 GNOME 软件中心(sic)添加 Snaps 支持上能做得更好。所以,现在看起来我们将使用 GNOME 软件中心来取代 [Ubuntu 软件中心]”,Ubuntu 桌面经理 Will Cooke 在 Ubuntu 在线峰会上解释说。

GNOME 3.18 架构也将出现在 Ubuntu 16.04 中,其中一些应用程序将更新到 GNOME 3.20,“这么做也是有道理的”,Will Cooke 补充说。

我们最近在 Twitter 上做了一项民意调查,询问大家如何在 Ubuntu 上安装软件。结果表明,只有少数人怀念现在的软件中心……

你使用什么方式在 Ubuntu 上安装软件?

- 软件中心
- 终端

### 在 Ubuntu 16.04 中其他应用程序也将会减少 ###

Ubuntu 软件中心并不是唯一一个在 Xenial Xerus 中被丢弃的。

光盘刻录工具 **Brasero** 和即时通讯工具 **Empathy** 也将从默认镜像中删除。

虽然这些应用程序还在不断开发,但随着笔记本越来越少配备光驱,以及基于移动网络的聊天服务的兴起,它们看起来越来越过时了。

如果你还在使用它们请不要惊慌:Brasero 和 Empathy **仍然可以通过软件仓库在 Ubuntu 上安装**。

也并不全是丢弃和替换,默认还包括了一个新的桌面应用程序:GNOME 日历。

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/11/the-ubuntu-software-centre-is-being-replace-in-16-04-lts

作者:[Sam Tran][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/111008502832304483939?rel=author
[1]:https://wiki.gnome.org/Apps/Software
@ -0,0 +1,126 @@
|
||||
与 Linux 一起学习:使用这些 Linux 应用来征服你的数学学习
|
||||
================================================================================
|
||||
![](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-featured.png)
|
||||
|
||||
这篇文章是[与 Linux 一起学习][1]系列的一部分:
|
||||
|
||||
- [与 Linux 一起学习: 学习类型][2]
|
||||
- [与 Linux 一起学习: 物理模拟][3]
|
||||
- [与 Linux 一起学习: 学习音乐][4]
|
||||
- [与 Linux 一起学习: 两个地理应用程序][5]
|
||||
- [与 Linux 一起学习: 使用这些 Linux 应用来征服你的数学学习][6]
|
||||
|
||||
Linux 提供了大量的教育软件和许多优秀的工具来帮助各种年龄段和年级的学生学习和练习各种各样的习题,这通常是以交互的方式进行。“与 Linux 一起学习”这一系列的文章则为这些各种各样的教育软件和应用提供了一个介绍。
|
||||
|
||||
数学是计算机的核心。如果有人预期一个类如 GNU/ Linux 这样的伟大的操作系统精确而严格,那么这就是数学所起到的作用。如果你在寻求一些数学应用程序,那么你将不会感到失望。Linux 提供了很多优秀的工具使得数学看起来和你曾经做过的一样令人畏惧,但实际上他们会简化你使用它的方式。
|
||||
|
||||
### Gnuplot ###
|
||||
|
||||
Gnuplot 是一个适用于不同平台的命令行脚本化和多功能的图形工具。尽管它的名字中带有“GNU”,但是它并不是 GNU 操作系统的一部分。虽然不是自由授权,但它是免费软件(这意味着它受版权保护,但免费使用)。
|
||||
|
||||
要在 Ubuntu 系统(或者衍生系统)上安装 `gnuplot`,输入:
|
||||
|
||||
sudo apt-get install gnuplot gnuplot-x11
|
||||
|
||||
进入一个终端窗口。启动该程序,输入:
|
||||
|
||||
gnuplot
|
||||
|
||||
你会看到一个简单的命令行界面:
|
||||
|
||||
![learnmath-gnuplot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot.png)
|
||||
|
||||
在其中您可以直接输入函数开始。绘图命令将绘制一个曲线图。
|
||||
|
||||
输入内容,例如,
|
||||
|
||||
plot sin(x)/x
|
||||
|
||||
随着`gnuplot的`提示,将会打开一个新的窗口,图像便会在里面呈现。
|
||||
|
||||
![learnmath-gnuplot-plot1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot1.png)
|
||||
|
||||
你也可以即时设置设置这个图的不同属性,比如像这样指定“title”
|
||||
|
||||
plot sin(x) title 'Sine Function', tan(x) title 'Tangent'
|
||||
|
||||
![learnmath-gnuplot-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot2.png)
|
||||
|
||||
你可以做的更深入一点,使用`splot`命令绘制3D图形:
|
||||
|
||||
splot sin(x*y/20)
|
||||
|
||||
![learnmath-gnuplot-plot3](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-plot3.png)
|
||||
|
||||
这个图形窗口有几个基本的配置选项,
|
||||
|
||||
![learnmath-gnuplot-options](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-gnuplot-options.png)
|
||||
|
||||
但是`gnuplot`的真正力量在于在它的命令行和脚本功能,`gnuplot`更完整的文档在[Duke大学网站][8]上面[找到][7],带有这个了不起的教程的原始版本。
|

### Maxima ###

[Maxima][9] 是一个源于 Macsyma 的计算机代数系统。根据它的 SourceForge 页面所述:

> “Maxima 是一个操作符号和数值表达式的系统,包括微分、积分、泰勒级数、拉普拉斯变换、常微分方程、线性方程组、多项式、集合、列表、向量、矩阵和张量等。Maxima 通过精确的分数、任意精度的整数和可变精度浮点数产生高精度的计算结果。Maxima 可以以二维和三维的方式绘制函数和数据。”

大多数 Ubuntu 衍生系统都有 Maxima 二进制包以及它的图形界面,要安装这些软件包,输入:

    sudo apt-get install maxima xmaxima wxmaxima

在终端窗口中,Maxima 是一个没有什么 UI 的命令行工具,但如果你启动 wxmaxima,你会进入一个简单但功能强大的图形用户界面。

![learnmath-maxima](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima.png)

你可以通过简单的输入来开始。(提示:回车会增加更多的行,如果你想计算一个表达式,使用 “Shift + Enter”。)

Maxima 可以用于一些简单的问题,因此也可以作为一个计算器:

![learnmath-maxima-1and1](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-1and1.png)

以及一些更复杂的问题:

![learnmath-maxima-functions](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-functions.png)

它使用 `gnuplot` 使得绘制简单的图形:

![learnmath-maxima-plot](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot.png)

或者复杂的图形都变得容易。

![learnmath-maxima-plot2](https://www.maketecheasier.com/assets/uploads/2015/07/learnmath-maxima-plot2.png)

(显示它们需要 gnuplot-x11 软件包。)

除了将表达式表示为图形,Maxima 也可以用 LaTeX 格式导出它们,或者通过右键快捷菜单进行一些常用操作。

不过其主菜单还是提供了大量重磅功能,当然 Maxima 的功能远不止于此,这里还有一份详尽的[在线文档][10]。

### 总结 ###

数学不是一门容易的学科,Linux 上这些优秀的软件也没有使数学本身变得更容易,但是它们使你使用数学变得更加简单和方便。以上两个应用只是 Linux 所能提供的工具的一角。如果你是认真从事数学研究并需要更多的功能与丰富的文档,那你还应该看看 [Mathbuntu][11] 项目。

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/learn-linux-maths/

作者:[Attila Orosz][a]
译者:[KnightJoker](https://github.com/KnightJoker/KnightJoker)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/attilaorosz/
[1]:https://www.maketecheasier.com/series/learn-with-linux/
[2]:https://www.maketecheasier.com/learn-to-type-in-linux/
[3]:https://www.maketecheasier.com/linux-physics-simulation/
[4]:https://www.maketecheasier.com/linux-learning-music/
[5]:https://www.maketecheasier.com/linux-geography-apps/
[6]:https://www.maketecheasier.com/learn-linux-maths/
[7]:http://www.gnuplot.info/documentation.html
[8]:http://people.duke.edu/~hpgavin/gnuplot.html
[9]:http://maxima.sourceforge.net/
[10]:http://maxima.sourceforge.net/documentation.html
[11]:http://www.mathbuntu.org/
published/LetsEncrypt.md

# SSL/TLS 加密新纪元 - Let's Encrypt

根据 Let's Encrypt 官方博客消息,Let's Encrypt 服务将在下周(11 月 16 日)正式对外开放。

Let's Encrypt 项目是由互联网安全研究小组(ISRG,Internet Security Research Group)主导并开发的一个新型数字证书认证机构(CA,Certificate Authority)。该项目旨在开发一个自由且开放的自动化 CA 套件,并向公众提供相关的证书免费签发服务,以降低安全通讯的财务、技术和教育成本。在过去的一年中,互联网安全研究小组拟定了 [ACME 协议草案][1],并首次实现了使用该协议的应用套件:服务端 [Boulder][2] 和客户端 [letsencrypt][3]。

至于为什么 Let's Encrypt 让我们如此激动,以及 HTTPS 协议如何保护我们的通讯,请参考[浅谈 HTTPS 和 SSL/TLS 协议的背景与基础][4]。

## ACME 协议

Let's Encrypt 的诞生离不开 ACME(Automated Certificate Management Environment,自动证书管理环境)协议的拟定。

说到 ACME 协议,我们不得不提一下传统 CA 的认证方式。Let's Encrypt 服务所签发的证书为域名认证证书(DV,Domain-validated Certificate),签发这类证书需要域名所有者完成以下至少一种挑战(Challenge)以证明自己对域名的所有权:

* 验证申请人对域名的 Whois 信息中邮箱的控制权;
* 验证申请人对域名的常见管理员邮箱(如以 `admin@`、`postmaster@` 开头的邮箱等)的控制权;
* 在 DNS 的 TXT 记录中发布一条 CA 提供的字符串;
* 在包含域名的网址中特定路径发布一条 CA 提供的字符串。

不难发现,其中最容易实现自动化的一种操作必然是最后一条,ACME 协议中的 [Simple HTTP][5] 认证即是用一种类似的方法对从未签发过任何证书的域名进行认证。该协议要求在访问 `http://域名/.well-known/acme-challenge/指定字符串` 时返回特定的字符串。

然而实现该协议的客户端 [letsencrypt][3] 做了更多——它不仅可以通过 ACME 协议配合服务端 [Boulder][2] 对域名进行独立(standalone)的认证工作,同时还可以自动配置常见的服务器软件(目前支持 Nginx 和 Apache)以完成认证。
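为了更直观地理解 Simple HTTP 认证,下面用 shell 给出一个示意(webroot 路径和 token 字符串都是假设的示例值,并非真实的协议交互):在网站根目录的指定路径下发布 CA 提供的字符串,CA 随后通过 HTTP 访问该路径进行校验:

```shell
#!/bin/sh
# 示意:模拟 Simple HTTP 认证中“在指定路径发布字符串”这一步。
# webroot 与 token 均为假设值,实际值由 ACME 服务端提供。
webroot="${TMPDIR:-/tmp}/acme-demo"
token="example-token"
key_auth="example-token.account-thumbprint"   # CA 要求返回的字符串(示例值)

challenge_dir="$webroot/.well-known/acme-challenge"
mkdir -p "$challenge_dir"
printf '%s' "$key_auth" > "$challenge_dir/$token"

# CA 随后会请求 http://域名/.well-known/acme-challenge/$token 并比对返回内容;
# 这里用 cat 模拟这次读取:
cat "$challenge_dir/$token"
```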

## Let's Encrypt 免费证书签发服务

对于大多数网站管理员来讲,想要对自己的 Web 服务器进行加密,需要一笔不小的支出进行证书签发,并且证书难以正确配置。根据早些年 SSL Labs 公布的 [2010 年互联网 SSL 调查报告(PDF)][6],超过半数的 Web 服务器没能正确使用 Web 服务器证书,主要的问题有证书不被浏览器信任、证书和域名不匹配、证书过期、证书信任链没有正确配置、使用已知有缺陷的协议和算法等。而且证书过期后的续签和泄漏后的吊销仍需进行繁琐的人工操作。

幸运的是 Let's Encrypt 免费证书签发服务在经历了漫长的开发和测试之后终于来临。在 Let's Encrypt 官方 CA 被广泛信任之前,IdenTrust 的根证书对 Let's Encrypt 的二级 CA 进行了交叉签名,使得大部分浏览器已经信任 Let's Encrypt 签发的证书。

## 使用 letsencrypt

由于当前 Let's Encrypt 官方的证书签发服务还未公开,你只能尝试开发版本。这个版本会签发一个 CA 标识为 `happy hacker fake CA` 的测试证书,注意这个证书不受信任。

要获取开发版本请直接 `$ git clone https://github.com/letsencrypt/letsencrypt`。

以下的[使用方法][7]摘自 Let's Encrypt 官方网站。

### 签发证书

`letsencrypt` 工具可以协助你处理证书请求和验证工作。

#### 自动配置 Web 服务器

下面的操作将会自动帮你将新证书配置到 Nginx 和 Apache 中。

```
$ letsencrypt run
```

#### 独立签发证书

下面的操作将会将新证书置于当前目录下。

```
$ letsencrypt -d example.com auth
```

### 续签证书

默认情况下 `letsencrypt` 工具将协助你跟踪当前证书的有效期限,并在需要时自动帮你续签。如果需要手动续签,执行下面的操作。

```
$ letsencrypt renew --cert-path example-cert.pem
```

### 吊销证书

列出当前托管的证书菜单以吊销。

```
$ letsencrypt revoke
```

你也可以吊销某一个证书或者属于某个私钥的所有证书。

```
$ letsencrypt revoke --cert-path example-cert.pem
```

```
$ letsencrypt revoke --key-path example-key.pem
```

## Docker 化 letsencrypt

如果你不想让 letsencrypt 自动配置你的 Web 服务器的话,使用 Docker 跑一份独立的版本将是一个不错的选择。你所要做的只是在装有 Docker 的系统中执行:

```
$ sudo docker run -it --rm -p 443:443 -p 80:80 --name letsencrypt \
    -v "/etc/letsencrypt:/etc/letsencrypt" \
    -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
    quay.io/letsencrypt/letsencrypt:latest auth
```

这样你就可以快速地为自己的 Web 服务器签发一个免费而且受信任的 DV 证书啦!

## Let's Encrypt 的注意事项

* Let's Encrypt 当前发行的 DV 证书仅能验证域名的所有权,并不能验证其所有者身份;
* Let's Encrypt 不像其他 CA 那样对安全事故有保险赔付;
* Let's Encrypt 目前不提供 Wildcard 证书;
* Let's Encrypt 签发的证书有效期仅为 90 天,逾期需要续签(可自动续签)。

对于 Let's Encrypt 的介绍就到这里,让我们一起目睹这场互联网的安全革命吧。

[1]: https://github.com/letsencrypt/acme-spec
[2]: https://github.com/letsencrypt/boulder
[3]: https://github.com/letsencrypt/letsencrypt
[4]: https://linux.cn/article-5175-1.html
[5]: https://letsencrypt.github.io/acme-spec/#simple-http
[6]: https://community.qualys.com/servlet/JiveServlet/download/38-1636/Qualys_SSL_Labs-State_of_SSL_2010-v1.6.pdf
[7]: https://letsencrypt.org/howitworks/
在 Linux 下使用 RAID(八):当软件 RAID 故障时如何恢复和重建数据
================================================================================

在阅读过 [RAID 系列][1]前面的文章后,你已经对 RAID 比较熟悉了。回顾前面几个软件 RAID 的配置,我们对每一个都做了详细的解释,使用哪一个取决于你的具体情况。

![Recover Rebuild Failed Software RAID's](http://www.tecmint.com/wp-content/uploads/2015/10/Recover-Rebuild-Failed-Software-RAID.png)

*恢复并重建故障的软件 RAID - 第8部分*

在本文中,我们将讨论当一个磁盘发生故障时如何重建软件 RAID 阵列并且不丢失数据。为方便起见,我们仅考虑 RAID 1 的配置 - 但其方法和概念适用于所有情况。

#### RAID 测试方案 ####

在进一步讨论之前,请确保你已经配置好了 RAID 1 阵列,可以按照本系列第3部分提供的方法:[在 Linux 中如何创建 RAID 1(镜像)][2]。

在目前的情况下,仅有的变化是:

1. 使用不同版本的 CentOS(v7),而不是前面文章中的(v6.5)。
2. 磁盘容量发生改变,使用 /dev/sdb 和 /dev/sdc(各8GB)。

此外,如果 SELinux 设置为 enforcing 模式,你需要将相应的标签添加到挂载 RAID 设备的目录中。否则,当你试图挂载时,你会碰到这样的警告信息:

![SELinux RAID Mount Error](http://www.tecmint.com/wp-content/uploads/2015/10/SELinux-RAID-Mount-Error.png)

*启用 SELinux 时 RAID 挂载错误*

通过以下命令来解决:

    # restorecon -R /mnt/raid1

### 配置 RAID 监控 ###

存储设备损坏的原因很多(尽管固态硬盘大大减少了这种情况发生的可能性),但不管是什么原因,可以肯定问题随时可能发生,你需要准备好替换发生故障的部分,并确保数据的可用性和完整性。

第一条建议是:虽然你可以查看 `/proc/mdstat` 来检查 RAID 的状态,但有一个更好的、节省时间的方法,那就是使用监控 + 扫描模式运行 mdadm,它将警报通过电子邮件发送到一个预定义的收件人。

要这样设置,在 `/etc/mdadm.conf` 添加以下行:

    MAILADDR user@<domain or localhost>

我自己的设置如下:

    MAILADDR gacanepa@localhost

![RAID Monitoring Email Alerts](http://www.tecmint.com/wp-content/uploads/2015/10/RAID-Monitoring-Email-Alerts.png)

*监控 RAID 并使用电子邮件进行报警*

要让 mdadm 运行在监控 + 扫描模式中,以 root 用户添加以下 crontab 条目:

    @reboot /sbin/mdadm --monitor --scan --oneshot

默认情况下,mdadm 每隔 60 秒会检查 RAID 阵列,如果发现问题将发出警报。你可以在上面的 crontab 条目中添加 `--delay` 选项,后面跟上秒数,来修改默认行为(例如,`--delay 1800` 意味着每 30 分钟检查一次)。

最后,确保你已经安装了一个邮件用户代理(MUA),如 [mutt 或 mailx][3]。否则,你将不会收到任何警报。

在一分钟内,我们就会看到 mdadm 发送的警报。

### 模拟和更换发生故障的 RAID 存储设备 ###

为了给 RAID 阵列中的存储设备模拟一个故障,我们将使用 `--manage` 和 `--set-faulty` 选项,如下所示:

    # mdadm --manage --set-faulty /dev/md0 /dev/sdc1

这将导致 /dev/sdc1 被标记为 faulty,我们可以在 /proc/mdstat 中看到:

![Stimulate Issue with RAID Storage](http://www.tecmint.com/wp-content/uploads/2015/10/Stimulate-Issue-with-RAID-Storage.png)

*在 RAID 存储设备上模拟问题*
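在 /proc/mdstat 的输出中,故障设备会带有 “(F)” 标记。下面是一个示意脚本(函数名为本文之外的假设),从 mdstat 格式的文本中提取故障设备,可以配合上文的邮件警报在自动化脚本中使用:

```shell
#!/bin/sh
# 示意:从 /proc/mdstat 格式的文本中找出带 (F) 标记的故障设备。
# 用法: list_faulty [文件,默认 /proc/mdstat]
list_faulty() {
    file="${1:-/proc/mdstat}"
    # 设备形如 sdc1[1](F);取出 "(F)" 前的设备名
    grep -o '[a-z][a-z0-9]*\[[0-9]*\](F)' "$file" | sed 's/\[.*//'
}

# 演示:用一段示例 mdstat 输出测试解析逻辑
sample="${TMPDIR:-/tmp}/mdstat.sample"
cat > "$sample" <<'EOF'
md0 : active raid1 sdc1[1](F) sdb1[0]
      8387584 blocks super 1.2 [2/1] [U_]
EOF
list_faulty "$sample"   # 输出: sdc1
```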
|
||||
|
||||
更重要的是,让我们看看是不是收到了同样的警报邮件:
|
||||
|
||||
![Email Alert on Failed RAID Device](http://www.tecmint.com/wp-content/uploads/2015/10/Email-Alert-on-Failed-RAID-Device.png)
|
||||
|
||||
*RAID 设备故障时发送邮件警报*
|
||||
|
||||
在这种情况下,你需要从软件 RAID 阵列中删除该设备:
|
||||
|
||||
# mdadm /dev/md0 --remove /dev/sdc1
|
||||
|
||||
然后,你可以直接从机器中取出,并将其使用备用设备来取代(/dev/sdd 中类型为 fd 的分区是以前创建的):
|
||||
|
||||
# mdadm --manage /dev/md0 --add /dev/sdd1
|
||||
|
||||
幸运的是,该系统会使用我们刚才添加的磁盘自动重建阵列。我们可以通过标记 /dev/sdb1 为 faulty 来进行测试,从阵列中取出后,并确认 tecmint.txt 文件仍然在 /mnt/raid1 是可访问的:
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
# mount | grep raid1
|
||||
# ls -l /mnt/raid1 | grep tecmint
|
||||
# cat /mnt/raid1/tecmint.txt
|
||||
|
||||
![Confirm Rebuilding RAID Array](http://www.tecmint.com/wp-content/uploads/2015/10/Rebuilding-RAID-Array.png)
|
||||
|
||||
*确认 RAID 重建*
|
||||
|
||||
上面图片清楚的显示,添加 /dev/sdd1 到阵列中来替代 /dev/sdc1,数据的重建是系统自动完成的,不需要干预。
|
||||
|
||||
虽然要求不是很严格,有一个备用设备是个好主意,这样更换故障的设备就可以在瞬间完成了。要做到这一点,先让我们重新添加 /dev/sdb1 和 /dev/sdc1:
|
||||
|
||||
# mdadm --manage /dev/md0 --add /dev/sdb1
|
||||
# mdadm --manage /dev/md0 --add /dev/sdc1
|
||||
|
||||
![Replace Failed Raid Device](http://www.tecmint.com/wp-content/uploads/2015/10/Replace-Failed-Raid-Device.png)
|
||||
|
||||
*取代故障的 Raid 设备*
|
||||
|
||||
### 从冗余丢失中恢复数据 ###
|
||||
|
||||
如前所述,当一个磁盘发生故障时,mdadm 将自动重建数据。但是,如果阵列中的 2 个磁盘都发生故障会怎样?让我们来模拟这种情况:将 /dev/sdb1 和 /dev/sdd1 标记为 faulty:
|
||||
|
||||
# umount /mnt/raid1
|
||||
# mdadm --manage --set-faulty /dev/md0 /dev/sdb1
|
||||
# mdadm --stop /dev/md0
|
||||
# mdadm --manage --set-faulty /dev/md0 /dev/sdd1
|
||||
|
||||
此时,尝试以同样的方式重新创建阵列(或使用 `--assume-clean` 选项)可能会导致数据丢失,因此不到万不得已不要这样做。
|
||||
|
||||
让我们试着用 `ddrescue` 把数据从 /dev/sdb1 恢复到一个类似的磁盘分区(/dev/sde1,注意这需要你在执行前在 /dev/sde 上创建一个 fd 类型的分区):
|
||||
|
||||
# ddrescue -r 2 /dev/sdb1 /dev/sde1
|
||||
|
||||
![Recovering Raid Array](http://www.tecmint.com/wp-content/uploads/2015/10/Recovering-Raid-Array.png)
|
||||
|
||||
*恢复 Raid 阵列*
|
||||
|
||||
请注意,到现在为止,我们还没有触及 /dev/sdb 和 /dev/sdd,它们上面的分区曾是 RAID 阵列的一部分。
|
||||
|
||||
现在,让我们使用 /dev/sde1 和 /dev/sdf1 来重建阵列:
|
||||
|
||||
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[e-f]1
|
||||
|
||||
请注意,在真实的情况下,你需要使用与原来的阵列中相同的设备名称,即设备失效后替换的磁盘的名称应该是 /dev/sdb1 和 /dev/sdc1。
|
||||
|
||||
在本文中,我选择了使用额外的设备来重新创建全新的磁盘阵列,是为了避免与原来的故障磁盘混淆。
|
||||
|
||||
当被问及是否继续写入阵列时,键入 Y,然后按 Enter。阵列被启动,你也可以查看它的进展:
|
||||
|
||||
# watch -n 1 cat /proc/mdstat
|
||||
|
||||
当这个过程完成后,你就应该能够访问 RAID 的数据:
|
||||
|
||||
![Confirm Raid Content](http://www.tecmint.com/wp-content/uploads/2015/10/Raid-Content.png)
|
||||
|
||||
*确认 Raid 数据*
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在本文中,我们回顾了从 RAID 故障和冗余丢失中恢复数据。但是,你要记住,这种技术是一种存储解决方案,不能取代备份。
|
||||
|
||||
本文中介绍的方法适用于所有 RAID 级别,其中的概念我将在本系列的最后一篇(RAID 管理)中介绍。
|
||||
|
||||
如果你对本文有任何疑问,随时给我们以评论的形式说明。我们期待倾听阁下的心声!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/recover-data-and-rebuild-failed-software-raid/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://linux.cn/article-6085-1.html
|
||||
[2]:https://linux.cn/article-6093-1.html
|
||||
[3]:http://www.tecmint.com/send-mail-from-command-line-using-mutt-command/
|
@ -0,0 +1,162 @@
|
||||
在 Linux 下使用 RAID(九):如何使用 ‘Mdadm’ 工具管理软件 RAID
|
||||
================================================================================
|
||||
|
||||
无论你以前有没有使用 RAID 阵列的经验,以及是否完成了 [此 RAID 系列][1] 的所有教程,一旦你在 Linux 中熟悉了 `mdadm --manage` 命令的使用,管理软件 RAID 将不是很复杂的任务。
|
||||
|
||||
![在 Linux 中使用 mdadm 管理 RAID 设备 - 第9部分](http://www.tecmint.com/wp-content/uploads/2015/10/Manage-Raid-with-Mdadm-Tool-in-Linux.jpg)
|
||||
|
||||
*在 Linux 中使用 mdadm 管理 RAID 设备 - 第9部分*
|
||||
|
||||
在本教程中,我们会再次介绍此工具提供的功能,这样当你需要时,就可以派上用场。
|
||||
|
||||
#### RAID 测试方案 ####
|
||||
|
||||
在本系列的最后一篇文章中,我们将使用一个简单的 RAID 1(镜像)阵列,它由两个 8GB 的磁盘(/dev/sdb 和 /dev/sdc)和一个备用设备(/dev/sdd)来演示,但在此使用的方法也适用于其他类型的配置。也就是说,放心去用吧,把这个页面添加到浏览器的书签,然后让我们开始吧。
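如果手头没有多余的物理磁盘用来练习,可以用稀疏文件配合回环设备搭建一个 mdadm 练习环境。下面是一个假设性的示意脚本:创建镜像文件的部分可以直接运行,而 losetup 和 mdadm 的部分需要 root 权限,因此仅以注释形式给出(/dev/loop1 等设备名只是举例):

```shell
#!/bin/sh
# 假设性示例:用稀疏文件模拟磁盘,练习 mdadm 而不动真实磁盘
for name in disk1 disk2 spare; do
    truncate -s 100M "$name.img"    # 稀疏文件,几乎不占实际空间
done
ls -l disk1.img disk2.img spare.img

# 以下步骤需要 root 权限(此处仅演示,不实际执行):
# losetup /dev/loop1 disk1.img
# losetup /dev/loop2 disk2.img
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/loop1 /dev/loop2
```

练习结束后用 `mdadm --stop` 停止阵列、`losetup -d` 释放回环设备,再删除镜像文件即可。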
|
||||
|
||||
### 了解 mdadm 的选项和使用方法 ###
|
||||
|
||||
幸运的是,mdadm 有一个内建的 `--help` 参数来对每个主要的选项提供说明文档。
|
||||
|
||||
因此,让我们开始输入:
|
||||
|
||||
# mdadm --manage --help
|
||||
|
||||
就会看到 `mdadm --manage` 能够执行哪些任务:
|
||||
|
||||
![Manage RAID with mdadm Tool](http://www.tecmint.com/wp-content/uploads/2015/10/mdadm-Usage-in-Linux.png)
|
||||
|
||||
*使用 mdadm 工具来管理 RAID*
|
||||
|
||||
正如我们在上面的图片看到,管理一个 RAID 阵列可以在任意时间执行以下任务:
|
||||
|
||||
- (重新)将设备添加到阵列中
|
||||
- 把设备标记为故障
|
||||
- 从阵列中删除故障设备
|
||||
- 使用备用设备更换故障设备
|
||||
- 先创建部分阵列
|
||||
- 停止阵列
|
||||
- 标记阵列为 ro(只读)或 rw(读写)
|
||||
|
||||
### 使用 mdadm 工具管理 RAID 设备 ###
|
||||
|
||||
需要注意的是,如果用户忽略 `--manage` 选项,mdadm 默认使用管理模式。请记住这一点,以避免出现最坏的情况。
|
||||
|
||||
上图中的高亮文本显示了管理 RAID 的基本语法:
|
||||
|
||||
# mdadm --manage RAID options devices
|
||||
|
||||
让我们来演示几个例子。
|
||||
|
||||
#### 例1:为 RAID 阵列添加设备 ####
|
||||
|
||||
你通常会添加新设备来更换故障的设备,或者使用空闲的分区以便在出现故障时能及时替换:
|
||||
|
||||
# mdadm --manage /dev/md0 --add /dev/sdd1
|
||||
|
||||
![Add Device to Raid Array](http://www.tecmint.com/wp-content/uploads/2015/10/Add-Device-to-Raid-Array.png)
|
||||
|
||||
*添加设备到 Raid 阵列*
|
||||
|
||||
#### 例2:把一个 RAID 设备标记为故障并从阵列中移除 ####
|
||||
|
||||
在从逻辑阵列中删除该设备前,这是强制性的步骤,然后才能从机器中取出它 - 注意顺序(如果弄错了这些步骤,最终可能会造成实际设备的损害):
|
||||
|
||||
# mdadm --manage /dev/md0 --fail /dev/sdb1
|
||||
|
||||
请注意,在前面的例子中我们已经知道如何添加备用设备来自动更换出现故障的磁盘。在此之后,[恢复和重建 raid 数据][2] 就开始了:
|
||||
|
||||
![Recover and Rebuild Raid Data](http://www.tecmint.com/wp-content/uploads/2015/10/Recover-and-Rebuild-Raid-Data.png)
|
||||
|
||||
*恢复和重建 raid 数据*
|
||||
|
||||
一旦设备已被手动标记为故障,你就可以安全地从阵列中删除它:
|
||||
|
||||
# mdadm --manage /dev/md0 --remove /dev/sdb1
|
||||
|
||||
#### 例3:重新添加设备,来替代阵列中已经移除的设备 ####
|
||||
|
||||
到现在为止,我们有一个工作的 RAID 1 阵列,它包含了2个活动的设备:/dev/sdc1 和 /dev/sdd1。现在让我们试试重新添加 /dev/sdb1 到 /dev/md0:
|
||||
|
||||
# mdadm --manage /dev/md0 --re-add /dev/sdb1
|
||||
|
||||
我们会碰到一个错误:
|
||||
|
||||
# mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible
|
||||
|
||||
因为阵列中的磁盘已经达到了最大的数量。因此,我们有两个选择:a)将 /dev/sdb1 添加为备用的,如例1;或 b)从阵列中删除 /dev/sdd1 然后重新添加 /dev/sdb1。
|
||||
|
||||
我们选择选项 b),先停止阵列然后重新启动:
|
||||
|
||||
# mdadm --stop /dev/md0
|
||||
# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
|
||||
|
||||
如果上面的命令不能成功添加 /dev/sdb1 到阵列中,使用例1中的命令来完成。
|
||||
|
||||
mdadm 能检测到新添加的设备,并将其作为备用设备;添加完成后它会开始重建数据,此时该设备就成为 RAID 中的活动设备:
|
||||
|
||||
![Raid Rebuild Status](http://www.tecmint.com/wp-content/uploads/2015/10/Raid-Rebuild-Status.png)
|
||||
|
||||
*重建 Raid 的状态*
|
||||
|
||||
#### 例4:使用特定磁盘更换 RAID 设备 ####
|
||||
|
||||
在阵列中使用备用磁盘更换磁盘很简单:
|
||||
|
||||
# mdadm --manage /dev/md0 --replace /dev/sdb1 --with /dev/sdd1
|
||||
|
||||
![Replace Raid Device](http://www.tecmint.com/wp-content/uploads/2015/10/Replace-Raid-device.png)
|
||||
|
||||
*更换 Raid 设备*
|
||||
|
||||
这会导致 `--replace` 指定的设备被标记为故障,而 `--with`指定的设备添加到 RAID 中来替代它:
|
||||
|
||||
![Check Raid Rebuild Status](http://www.tecmint.com/wp-content/uploads/2015/10/Check-Raid-Rebuild-Status.png)
|
||||
|
||||
*检查 Raid 重建状态*
|
||||
|
||||
#### 例5:标记 RAID 阵列为 ro 或 rw ####
|
||||
|
||||
创建阵列后,你必须在它上面创建一个文件系统并将其挂载到一个目录下才能使用它。你可能不知道,RAID 也可以被设置为 ro,使其只读;或者设置为 rw,使其可以读写。
|
||||
|
||||
要标记该设备为 ro,首先需要将其卸载:
|
||||
|
||||
# umount /mnt/raid1
|
||||
# mdadm --manage /dev/md0 --readonly
|
||||
# mount /mnt/raid1
|
||||
# touch /mnt/raid1/test1
|
||||
|
||||
![Set Permissions on Raid Array](http://www.tecmint.com/wp-content/uploads/2015/10/Set-Permissions-on-Raid-Array.png)
|
||||
|
||||
*在 RAID 阵列上设置权限*
|
||||
|
||||
要配置阵列允许写入操作需要使用 `--readwrite` 选项。请注意,在设置 rw 标志前,你需要先卸载设备并停止它:
|
||||
|
||||
# umount /mnt/raid1
|
||||
# mdadm --manage /dev/md0 --stop
|
||||
# mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
|
||||
# mdadm --manage /dev/md0 --readwrite
|
||||
# touch /mnt/raid1/test2
|
||||
|
||||
![Allow Read Write Permission on Raid](http://www.tecmint.com/wp-content/uploads/2015/10/Allow-Write-Permission-on-Raid.png)
|
||||
|
||||
*配置 Raid 允许读写操作*
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在本系列中,我们已经解释了如何建立一个在企业环境中使用的软件 RAID 阵列。如果你按照这些文章所提供的例子进行配置,在 Linux 中你会充分领会到软件 RAID 的价值。
|
||||
|
||||
如果你碰巧有任何问题或建议,请随时使用下面的方式与我们联系。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/manage-software-raid-devices-in-linux-with-mdadm/
|
||||
|
||||
作者:[GABRIEL CÁNEPA][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://linux.cn/article-6085-1.html
|
||||
[2]:https://linux.cn/article-6448-1.html
|
@ -1,27 +1,28 @@
|
||||
RHCE 系列第一部分:如何设置和测试静态网络路由
|
||||
RHCE 系列(一):如何设置和测试静态网络路由
|
||||
================================================================================
|
||||
RHCE(Red Hat Certified Engineer,红帽认证工程师)是红帽公司的一个认证,红帽向企业社区贡献开源操作系统和软件,同时它还给公司提供训练、支持和咨询服务。
|
||||
|
||||
![RHCE 考试准备指南](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg)
|
||||
|
||||
RHCE 考试准备指南
|
||||
*RHCE 考试准备指南*
|
||||
|
||||
这个 RHCE 是基于性能的考试(代号 EX300),面向那些拥有更多的技能、知识和能力的红帽企业版 Linux(RHEL)系统高级系统管理员。
|
||||
这个 RHCE 是一个基于实际操作的考试(代号 EX300),面向那些拥有更多的技能、知识和能力的红帽企业版 Linux(RHEL)系统高级系统管理员。
|
||||
|
||||
**重要**: RHCE 认证要求先有 [红帽认证系统管理员][1](Red Hat Certified System Administrator,RHCSA)认证。
|
||||
|
||||
以下是基于红帽企业版 Linux 7 考试的考试目标,我们会在该 RHCE 系列中分别介绍:
|
||||
|
||||
- 第一部分:如何在 RHEL 7 中设置和测试静态路由
|
||||
- 第二部分:如果进行包过滤、网络地址转换和设置内核运行时参数
|
||||
- 第三部分:如果使用 Linux 工具集产生和发送系统活动报告
|
||||
- 第二部分:如何进行包过滤、网络地址转换和设置内核运行时参数
|
||||
- 第三部分:如何使用 Linux 工具集产生和发送系统活动报告
|
||||
- 第四部分:使用 Shell 脚本进行自动化系统维护
|
||||
- 第五部分:如果配置本地和远程系统日志
|
||||
- 第六部分:如果配置一个 Samba 服务器或 NFS 服务器(译者注:Samba 是在 Linux 和 UNI X系统上实现 SMB 协议的一个免费软件,由服务器及客户端程序构成。SMB,Server Messages Block,信息服务块,是一种在局域网上共享文件和打印机的一种通信协议,它为局域网内的不同计算机之间提供文件及打印机等资源的共享服务。)
|
||||
- 第七部分:为收发邮件配置完整的 SMTP 服务器
|
||||
- 第八部分:在 RHEL 7 上设置 HTTPS 和 TLS
|
||||
- 第九部分:设置网络时间协议
|
||||
- 第十部分:如何配置一个 Cache-Only DNS 服务器
|
||||
- 第五部分:如何在 RHEL 7 中管理系统日志(配置、轮换和导入到数据库)
|
||||
- 第六部分:设置 Samba 服务器并配置 FirewallD 和 SELinux 支持客户端文件共享
|
||||
- 第七部分:设置 NFS 服务器及基于 Kerberos 认证的客户端
|
||||
- 第八部分:在 Apache 上使用网络安全服务(NSS)通过 TLS 提供 HTTPS 服务
|
||||
- 第九部分:如何使用无客户端配置来设置 Postfix 邮件服务器(SMTP)
|
||||
- 第十部分:在 RHEL/CentOS 7 中设置网络时间协议(NTP)服务器
|
||||
- 第十一部分:如何配置一个只缓存的 DNS 服务器
|
||||
|
||||
在你的国家查看考试费用和注册考试,可以到 [RHCE 认证][2] 网页。
|
||||
|
||||
@ -29,31 +30,31 @@ RHCE 考试准备指南
|
||||
|
||||
![在 RHEL 中设置静态网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg)
|
||||
|
||||
RHCE 系列第一部分:设置和测试网络静态路由
|
||||
*RHCE 系列第一部分:设置和测试网络静态路由*
|
||||
|
||||
请注意我们不会作深入的介绍,但以这种方式组织内容能帮助你开始第一步并继续后面的内容。
|
||||
|
||||
### 红帽企业版 Linux 7 中的静态路由 ###
|
||||
|
||||
现代网络的一个奇迹就是有很多可用的设备能将一组计算机连接起来,不管是在一个房间里少量的机器还是在一栋建筑物、城市、国家或者大洲之间的多台机器。
|
||||
现代网络的一个奇迹就是有很多可用设备能将一组计算机连接起来,不管是在一个房间里少量的机器还是在一栋建筑物、城市、国家或者大洲之间的多台机器。
|
||||
|
||||
然而,为了能在任意情形下有效的实现这些,需要对网络包进行路由,或者换句话说,它们从源到目的地的路径需要按照某种规则。
|
||||
|
||||
静态路由是为网络包指定一个路由的过程,而不是使用网络设备提供的默认网关。除非另有指定,否则通过路由,网络包会被导向默认网关;基于预定义的标准,例如数据包目的地,使用静态路由可以定义其它路径。
|
||||
静态路由是为网络包指定一个路由的过程,而不是使用网络设备提供的默认网关。除非另有指定静态路由,网络包会被导向默认网关;而静态路由则基于预定义标准所定义的其它路径,例如数据包目的地。
|
||||
|
||||
我们在该篇指南中会考虑以下场景。我们有一台红帽企业版 Linux 7,连接到路由器 1号 [192.168.0.1] 以访问因特网以及 192.168.0.0/24 中的其它机器。
|
||||
我们在该篇指南中会考虑以下场景。我们有一台 RHEL 7,连接到 1号路由器 [192.168.0.1] 以访问因特网以及 192.168.0.0/24 中的其它机器。
|
||||
|
||||
第二个路由器(路由器 2号)有两个网卡:enp0s3 同样通过网络连接到路由器 1号,以便连接RHEL 7 以及相同网络中的其它机器,另外一个网卡(enp0s8)用于授权访问内部服务所在的 10.0.0.0/24 网络,例如 web 或数据库服务器。
|
||||
第二个路由器(2号路由器)有两个网卡:enp0s3 同样连接到路由器1号以访问互联网,及与 RHEL 7 和同一网络中的其它机器通讯,另外一个网卡(enp0s8)用于授权访问内部服务所在的 10.0.0.0/24 网络,例如 web 或数据库服务器。
|
||||
|
||||
该场景可以用下面的示意图表示:
|
||||
|
||||
![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
|
||||
|
||||
静态路由网络示意图
|
||||
*静态路由网络示意图*
|
||||
|
||||
在这篇文章中我们会集中介绍在 RHEL 7 中设置路由表,确保它能通过路由器 1号访问因特网以及通过路由器 2号访问内部网络。
|
||||
在这篇文章中我们会集中介绍在 RHEL 7 中设置路由表,确保它能通过1号路由器访问因特网以及通过2号路由器访问内部网络。
|
||||
|
||||
在 RHEL 7 中,你会通过命令行用 [命令 ip][3] 配置和显示设备和路由。这些更改能在运行的系统中及时生效,但由于重启后不会保存,我们会使用 /etc/sysconfig/network-scripts 目录下的 ifcfg-enp0sX 和 route-enp0sX 文件永久保存我们的配置。
|
||||
在 RHEL 7 中,你可以通过命令行用 [ip 命令][3] 配置和显示设备和路由。这些更改能在运行的系统中及时生效,但由于重启后不会保存,我们会使用 `/etc/sysconfig/network-scripts` 目录下的 `ifcfg-enp0sX` 和 `route-enp0sX` 文件永久保存我们的配置。
|
||||
|
||||
首先,让我们打印出当前的路由表:
|
||||
|
||||
@ -61,15 +62,15 @@ RHCE 系列第一部分:设置和测试网络静态路由
|
||||
|
||||
![在 Linux 中检查路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png)
|
||||
|
||||
检查当前路由表
|
||||
*检查当前路由表*
|
||||
|
||||
从上面的输出中,我们可以得出以下结论:
|
||||
|
||||
- 默认网关的 IP 是 192.168.0.1,可以通过网卡 enp0s3 访问。
|
||||
- 系统启动的时候,它启用了到 169.254.0.0/16 的 zeroconf 路由(只是在本例中)。也就是说,如果机器设置为通过 DHCP 获取一个 IP 地址,但是由于某些原因失败了,它就会在该网络中自动分配到一个地址。这一行的意思是,该路由会允许我们通过 enp0s3 和其它没有从 DHCP 服务器中成功获得 IP 地址的机器机器连接。
|
||||
- 最后,但同样重要的是,我们也可以通过 IP 地址是 192.168.0.18 的 enp0s3 和 192.168.0.0/24 网络中的其它机器连接。
|
||||
- 系统启动的时候,它启用了到 169.254.0.0/16 的 zeroconf 路由(只是在本例中)。也就是说,如果机器设置通过 DHCP 获取 IP 地址,但是由于某些原因失败了,它就会在上述网段中自动分配到一个地址。这一行的意思是,该路由会允许我们通过 enp0s3 和其它没有从 DHCP 服务器中成功获得 IP 地址的机器相连接。
|
||||
- 最后,但同样重要的是,我们也可以通过 IP 地址是 192.168.0.18 的 enp0s3 与 192.168.0.0/24 网络中的其它机器连接。
|
||||
|
||||
下面是这样的配置中你需要做的一些典型任务。除非另有说明,下面的任务都在路由器 2号上进行。
|
||||
下面是这样的配置中你需要做的一些典型任务。除非另有说明,下面的任务都在2号路由器上进行。
|
||||
|
||||
确保正确安装了所有网卡:
|
||||
|
||||
@ -88,7 +89,7 @@ RHCE 系列第一部分:设置和测试网络静态路由
|
||||
# ip addr del 10.0.0.17 dev enp0s8
|
||||
# ip addr add 10.0.0.18 dev enp0s8
|
||||
|
||||
现在,请注意你只能添加一个通过已经能访问的网关到目标网络的路由。因为这个原因,我们需要在 192.168.0.0/24 范围中给 enp0s3 分配一个 IP 地址,这样我们的 RHEL 7 才能连接到它:
|
||||
现在,请注意你只能添加一个通过网关到目标网络的路由,网关需要可以访问到。因为这个原因,我们需要在 192.168.0.0/24 范围中给 enp0s3 分配一个 IP 地址,这样我们的 RHEL 7 才能连接到它:
|
||||
|
||||
# ip addr add 192.168.0.19 dev enp0s3
|
||||
|
||||
@ -101,7 +102,7 @@ RHCE 系列第一部分:设置和测试网络静态路由
|
||||
# systemctl stop firewalld
|
||||
# systemctl disable firewalld
|
||||
|
||||
回到我们的 RHEL 7(192.168.0.18),让我们配置一个通过 192.168.0.19(路由器 2号的 enp0s3)到 10.0.0.0/24 的路由:
|
||||
回到我们的 RHEL 7(192.168.0.18),让我们配置一个通过 192.168.0.19(2号路由器的 enp0s3)到 10.0.0.0/24 的路由:
|
||||
|
||||
# ip route add 10.0.0.0/24 via 192.168.0.19
|
||||
|
||||
@ -111,7 +112,7 @@ RHCE 系列第一部分:设置和测试网络静态路由
|
||||
|
||||
![显示网络路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)
|
||||
|
||||
确认网络路由表
|
||||
*确认网络路由表*
|
||||
|
||||
同样,在你尝试连接的 10.0.0.0/24 网络的机器中添加对应的路由:
|
||||
|
||||
@ -131,13 +132,13 @@ RHCE 系列第一部分:设置和测试网络静态路由
|
||||
|
||||
192.168.0.18 也就是我们的 RHEL 7 机器的 IP 地址。
|
||||
|
||||
另外,我们还可以使用 [tcpdump][4](需要通过 yum install tcpdump 安装)来检查我们 RHEL 7 和 10.0.0.20 中 web 服务器之间的 TCP 双向通信。
|
||||
另外,我们还可以使用 [tcpdump][4](需要通过 `yum install tcpdump` 安装)来检查我们 RHEL 7 和 10.0.0.20 中 web 服务器之间的 TCP 双向通信。
|
||||
|
||||
首先在第一台机器中启用日志:
|
||||
|
||||
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
|
||||
|
||||
在同一个系统上的另一个终端,让我们通过 telnet 连接到 web 服务器的 80 号端口(假设 Apache 正在监听该端口;否则在下面命令中使用正确的端口):
|
||||
在同一个系统上的另一个终端,让我们通过 telnet 连接到 web 服务器的 80 号端口(假设 Apache 正在监听该端口;否则应在下面命令中使用正确的监听端口):
|
||||
|
||||
# telnet 10.0.0.20 80
|
||||
|
||||
@ -145,7 +146,7 @@ tcpdump 日志看起来像下面这样:
|
||||
|
||||
![检查服务器之间的网络连接](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png)
|
||||
|
||||
检查服务器之间的网络连接
|
||||
*检查服务器之间的网络连接*
|
||||
|
||||
通过查看我们 RHEL 7(192.168.0.18)和 web 服务器(10.0.0.20)之间的双向通信,可以看出已经正确地初始化了连接。
|
||||
|
||||
@ -162,7 +163,7 @@ tcpdump 日志看起来像下面这样:
|
||||
# Device used to connect to default gateway. Replace X with the appropriate number.
|
||||
GATEWAYDEV=enp0sX
|
||||
|
||||
当需要为每个网卡设置特定的变量和值时(正如我们在路由器 2号上面做的),你需要编辑 /etc/sysconfig/network-scripts/ifcfg-enp0s3 和 /etc/sysconfig/network-scripts/ifcfg-enp0s8 文件。
|
||||
当需要为每个网卡设置特定的变量和值时(正如我们在2号路由器上面做的),你需要编辑 `/etc/sysconfig/network-scripts/ifcfg-enp0s3` 和 `/etc/sysconfig/network-scripts/ifcfg-enp0s8` 文件。
|
||||
|
||||
下面是我们的例子,
|
||||
|
||||
@ -184,23 +185,23 @@ tcpdump 日志看起来像下面这样:
|
||||
NAME=enp0s8
|
||||
ONBOOT=yes
|
||||
|
||||
分别对应 enp0s3 和 enp0s8。
|
||||
其分别对应 enp0s3 和 enp0s8。
|
||||
|
||||
由于要为我们的客户端机器(192.168.0.18)进行路由,我们需要编辑 /etc/sysconfig/network-scripts/route-enp0s3:
|
||||
由于要为我们的客户端机器(192.168.0.18)进行路由,我们需要编辑 `/etc/sysconfig/network-scripts/route-enp0s3`:
|
||||
|
||||
10.0.0.0/24 via 192.168.0.19 dev enp0s3
|
||||
|
||||
现在重启系统你可以在路由表中看到该路由规则。
|
||||
现在 `reboot` 你的系统,就可以在路由表中看到该路由规则。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇文章中我们介绍了红帽企业版 Linux 7 的静态路由。尽管场景可能不同,这里介绍的例子说明了所需的原理以及进行该任务的步骤。结束之前,我还建议你看一下 Linux 文档项目中 [第四章 4][5] 保护和优化 Linux 部分,以了解这里介绍主题的更详细内容。
|
||||
在这篇文章中我们介绍了红帽企业版 Linux 7 的静态路由。尽管场景可能不同,这里介绍的例子说明了所需的原理以及进行该任务的步骤。结束之前,我还建议你看一下 Linux 文档项目(The Linux Documentation Project)网站上的《安全加固和优化 Linux(Securing and Optimizing Linux)》的[第四章][5],以了解这里介绍主题的更详细内容。
|
||||
|
||||
免费电子书 Securing & Optimizing Linux: The Hacking Solution (v.3.0) - 这本 800 多页的电子书全面收集了 Linux 安全的小技巧以及如果安全和简便的使用它们去配置基于 Linux 的应用和服务。
|
||||
免费电子书《Securing and Optimizing Linux: The Hacking Solution (v.3.0)》:这本 800 多页的电子书全面收集了 Linux 安全的小技巧,以及如何安全而简便地使用它们去配置基于 Linux 的应用和服务。
|
||||
|
||||
![Linux 安全和优化](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif)
|
||||
|
||||
Linux 安全和优化
|
||||
*Linux 安全和优化*
|
||||
|
||||
[马上下载][6]
|
||||
|
||||
@ -214,12 +215,12 @@ via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
|
||||
[1]:https://linux.cn/article-6133-1.html
|
||||
[2]:https://www.redhat.com/en/services/certification/rhce
|
||||
[3]:http://www.tecmint.com/ip-command-examples/
|
||||
[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
|
@ -1,16 +1,17 @@
|
||||
RHCE 第二部分 - 如何进行包过滤、网络地址转换和设置内核运行时参数
|
||||
RHCE 系列(二):如何进行包过滤、网络地址转换和设置内核运行时参数
|
||||
================================================================================
|
||||
正如第一部分(“[设置静态网络路由][1]”)承诺的,在这篇文章(RHCE 系列第二部分),我们首先介绍红帽企业版 Linux 7中包过滤和网络地址转换原理,然后再介绍某些条件发送变化或者需要激活时设置运行时内核参数以改变运行时内核行为。
|
||||
|
||||
正如第一部分(“[设置静态网络路由][1]”)提到的,在这篇文章(RHCE 系列第二部分),我们首先介绍红帽企业版 Linux 7(RHEL)中包过滤和网络地址转换(NAT)的原理,然后再介绍在某些条件发生变化或者需要变动时设置运行时内核参数以改变运行时内核行为。
|
||||
|
||||
![RHEL 中的网络包过滤](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg)
|
||||
|
||||
RHCE 第二部分:网络包过滤
|
||||
*RHCE 第二部分:网络包过滤*
|
||||
|
||||
### RHEL 7 中的网络包过滤 ###
|
||||
|
||||
当我们讨论数据包过滤的时候,我们指防火墙读取每个尝试通过它的数据包的包头所进行的处理。然后,根据系统管理员之前定义的规则,通过采取所要求的动作过滤数据包。
|
||||
当我们讨论数据包过滤的时候,我们指防火墙读取每个试图通过它的数据包的包头所进行的处理。然后,根据系统管理员之前定义的规则,通过采取所要求的动作过滤数据包。
|
||||
|
||||
正如你可能知道的,从 RHEL 7 开始,管理防火墙的默认服务是 [firewalld][2]。类似 iptables,它和 Linux 内核的 netfilter 模块交互以便检查和操作网络数据包。不像 iptables,Firewalld 的更新可以立即生效,而不用中断活跃的连接 - 你甚至不需要重启服务。
|
||||
正如你可能知道的,从 RHEL 7 开始,管理防火墙的默认服务是 [firewalld][2]。类似 iptables,它和 Linux 内核的 netfilter 模块交互以便检查和操作网络数据包。但不像 iptables,Firewalld 的更新可以立即生效,而不用中断活跃的连接 - 你甚至不需要重启服务。
|
||||
|
||||
Firewalld 的另一个优势是它允许我们定义基于预配置服务名称的规则(之后会详细介绍)。
|
||||
|
||||
@ -18,27 +19,27 @@ Firewalld 的另一个优势是它允许我们定义基于预配置服务名称
|
||||
|
||||
![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
|
||||
|
||||
静态路由网络示意图
|
||||
*静态路由网络示意图*
|
||||
|
||||
然而,你应该记得,由于还没有介绍包过滤,为了简化例子,我们停用了路由器 2号 的防火墙。现在让我们来看看如何可以使接收的数据包发送到目的地的特定服务或端口。
|
||||
然而,你应该记得,由于还没有介绍包过滤,为了简化例子,我们停用了2号路由器的防火墙。现在让我们来看看如何使接收的数据包发送到目的地的特定服务或端口。
|
||||
|
||||
首先,让我们添加一条永久规则允许从 enp0s3 (192.168.0.19) 到 enp0s8 (10.0.0.18) 的绑定流量:
|
||||
首先,让我们添加一条永久规则允许从 enp0s3 (192.168.0.19) 到 enp0s8 (10.0.0.18) 的入站流量:
|
||||
|
||||
# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
|
||||
|
||||
上面的命令会把规则保存到 /etc/firewalld/direct.xml:
|
||||
上面的命令会把规则保存到 `/etc/firewalld/direct.xml` 中:
|
||||
|
||||
# cat /etc/firewalld/direct.xml
|
||||
|
||||
![在 CentOS 7 中检查 Firewalld 保存的规则](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png)
|
||||
|
||||
检查 Firewalld 保存的规则
|
||||
*检查 Firewalld 保存的规则*
|
||||
|
||||
然后启用规则使其立即生效:
|
||||
|
||||
# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
|
||||
|
||||
现在你可以从 RHEL 7 中通过 telnet 登录到 web 服务器并再次运行 [tcpdump][3] 监视两台机器之间的 TCP 流量,这次路由器 2号已经启用了防火墙。
|
||||
现在你可以从 RHEL 7 中通过 telnet 连接到 web 服务器,并再次运行 [tcpdump][3] 监视两台机器之间的 TCP 流量,这次2号路由器已经启用了防火墙。
|
||||
|
||||
# telnet 10.0.0.20 80
|
||||
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
|
||||
@ -61,19 +62,19 @@ Firewalld 的另一个优势是它允许我们定义基于预配置服务名称
|
||||
|
||||
我强烈建议你看看 Fedora Project Wiki 中的 [Firewalld Rich Language][4] 文档更详细地了解关于富规则的内容。
|
||||
|
||||
### RHEL 7 中的网络地址转换 ###
|
||||
### RHEL 7 中的网络地址转换(NAT) ###
|
||||
|
||||
网络地址转换(NAT)是为专用网络中的一组计算机(也可能是其中的一台)分配一个独立的公共 IP 地址的过程。结果,在内部网络中仍然可以用它们自己的私有 IP 地址区别,但外部“看来”它们是一样的。
|
||||
网络地址转换(NAT)是为专用网络中的一组计算机(也可能是其中的一台)分配一个独立的公共 IP 地址的过程。这样,在内部网络中仍然可以用它们自己的私有 IP 地址来区别,但外部“看来”它们是一样的。
|
||||
|
||||
另外,网络地址转换使得内部网络中的计算机发送请求到外部资源(例如因特网)然后只有源系统能接收到对应的响应成为可能。
|
||||
另外,网络地址转换使得内部网络中的计算机发送请求到外部资源(例如因特网),然后只有源系统能接收到对应的响应成为可能。
|
||||
|
||||
现在让我们考虑下面的场景:
|
||||
|
||||
![RHEL 中的网络地址转换](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png)
|
||||
|
||||
网络地址转换
|
||||
*网络地址转换*
|
||||
|
||||
在路由器 2 中,我们会把 enp0s3 接口移动到外部区域,enp0s8 到内部区域,伪装或者说 NAT 默认是启用的:
|
||||
在2号路由器中,我们会把 enp0s3 接口移动到外部区域(external),enp0s8 到内部区域(internal),伪装(masquerading)或者说 NAT 默认是启用的:
|
||||
|
||||
# firewall-cmd --list-all --zone=external
|
||||
# firewall-cmd --change-interface=enp0s3 --zone=external
|
||||
@ -81,7 +82,7 @@ Firewalld 的另一个优势是它允许我们定义基于预配置服务名称
|
||||
# firewall-cmd --change-interface=enp0s8 --zone=internal
|
||||
# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent
|
||||
|
||||
对于我们当前的设置,内部区域 - 以及和它一起启用的任何东西都是默认区域:
|
||||
对于我们当前的设置,内部区域(internal) - 以及和它一起启用的任何东西都是默认区域:
|
||||
|
||||
# firewall-cmd --set-default-zone=internal
|
||||
|
||||
@ -89,44 +90,44 @@ Firewalld 的另一个优势是它允许我们定义基于预配置服务名称
|
||||
|
||||
# firewall-cmd --reload
|
||||
|
||||
最后,在 web 服务器中添加路由器 2 为默认网关:
|
||||
最后,在 web 服务器中添加2号路由器为默认网关:
|
||||
|
||||
# ip route add default via 10.0.0.18
|
||||
|
||||
现在你会发现在 web 服务器中你可以 ping 路由器 1 和外部网站(例如 tecmint.com):
|
||||
现在你会发现在 web 服务器中你可以 ping 1号路由器和外部网站(例如 tecmint.com):
|
||||
|
||||
# ping -c 2 192.168.0.1
|
||||
# ping -c 2 tecmint.com
|
||||
|
||||
![验证网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png)
|
||||
|
||||
验证网络路由
|
||||
*验证网络路由*
|
||||
|
||||
### 在 RHEL 7 中设置内核运行时参数 ###
|
||||
|
||||
在 Linux 中,允许你更改、启用以及停用内核运行时参数,RHEL 也不例外。/proc/sys 接口允许你当操作条件发生变化时实时设置运行时参数以改变系统行为而不需太多麻烦。
|
||||
在 Linux 中,允许你更改、启用以及停用内核运行时参数,RHEL 也不例外。当操作条件发生变化时,`/proc/sys` 接口(sysctl)允许你实时设置运行时参数改变系统行为,而不需太多麻烦。
|
||||
|
||||
为了实现这个目的,会用内建的 echo shell 写 /proc/sys/<category\> 中的文件,其中 <category\> 很可能是以下目录中的一个:
|
||||
为了实现这个目的,会用 shell 内建的 echo 写 `/proc/sys/<category>` 中的文件,其中 `<category>` 一般是以下目录中的一个:
|
||||
|
||||
- dev: 连接到机器中的特定设备的参数。
|
||||
- fs: 文件系统配置(例如 quotas 和 inodes)。
|
||||
- kernel: 内核配置。
|
||||
- net: 网络配置。
|
||||
- vm: 内核虚拟内存的使用。
|
||||
- vm: 内核的虚拟内存的使用。
|
||||
|
||||
要显示所有当前可用值的列表,运行
|
||||
|
||||
# sysctl -a | less
|
||||
|
||||
在第一部分中,我们通过以下命令改变了 net.ipv4.ip_forward 参数的值以允许 Linux 机器作为一个路由器。
|
||||
在第一部分中,我们通过以下命令改变了 `net.ipv4.ip_forward` 参数的值以允许 Linux 机器作为一个路由器。
|
||||
|
||||
# echo 1 > /proc/sys/net/ipv4/ip_forward
|
||||
|
||||
另一个你可能想要设置的运行时参数是 kernel.sysrq,它会启用你键盘上的 Sysrq 键,以使系统更好的运行一些底层函数,例如如果由于某些原因冻结了后重启系统:
|
||||
另一个你可能想要设置的运行时参数是 `kernel.sysrq`,它会启用你键盘上的 `Sysrq` 键,以使系统更好的运行一些底层功能,例如如果由于某些原因冻结了后重启系统:
|
||||
|
||||
# echo 1 > /proc/sys/kernel/sysrq
|
||||
|
||||
要显示特定参数的值,可以按照下面方式使用 sysctl:
|
||||
要显示特定参数的值,可以按照下面方式使用 `sysctl`:
|
||||
|
||||
# sysctl <parameter.name>
|
||||
|
||||
@ -135,28 +136,29 @@ Firewalld 的另一个优势是它允许我们定义基于预配置服务名称
|
||||
# sysctl net.ipv4.ip_forward
|
||||
# sysctl kernel.sysrq
|
||||
|
||||
一些参数,例如上面提到的一个,只需要一个值,而其它一些(例如 fs.inode-state)要求多个值:
|
||||
有些参数,例如上面提到的某个,只需要一个值,而其它一些(例如 `fs.inode-state`)要求多个值:
|
||||
|
||||
![在 Linux 中查看内核参数](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png)
|
||||
|
||||
查看内核参数
|
||||
*查看内核参数*
|
||||
|
||||
不管什么情况下,做任何更改之前你都需要阅读内核文档。
|
||||
|
||||
请注意系统重启后这些设置会丢失。要使这些更改永久生效,我们需要添加内容到 /etc/sysctl.d 目录的 .conf 文件,像下面这样:
|
||||
请注意系统重启后这些设置会丢失。要使这些更改永久生效,我们需要添加内容到 `/etc/sysctl.d` 目录的 .conf 文件,像下面这样:
|
||||
|
||||
# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf
|
||||
|
||||
(其中数字 10 表示相对同一个目录中其它文件的处理顺序)。
|
||||
|
||||
并用下面命令启用更改
|
||||
并用下面命令启用更改:
|
||||
|
||||
# sysctl -p /etc/sysctl.d/10-forward.conf
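sysctl.d 片段的格式很简单:每行一个 `键 = 值`,支持 `#` 注释。下面这个假设性的小脚本解析这样的片段,并打印出等价的 `sysctl -w` 命令(只做演练输出,不实际修改内核参数,因此无需 root 即可验证片段内容):

```shell
#!/bin/sh
# 假设性示例:把 sysctl.d 片段转换成等价的 sysctl -w 命令(演练,不执行)
gen_sysctl_cmds() {
    # 跳过注释行和空行;以 " = " 为分隔取键和值
    awk -F' *= *' '!/^[[:space:]]*(#|$)/ {
        printf "sysctl -w %s=%s\n", $1, $2
    }' "$1"
}

cat > 10-forward.conf <<'EOF'
# 启用 IP 转发和 SysRq 键
net.ipv4.ip_forward = 1
kernel.sysrq = 1
EOF

gen_sysctl_cmds 10-forward.conf
```

确认输出无误后,在真实系统上把片段放到 /etc/sysctl.d/ 并执行 `sysctl -p <文件>` 即可生效。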
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇指南中我们解释了基本的包过滤、网络地址变换和在运行的系统中设置内核运行时参数并使重启后能持久化。我希望这些信息能对你有用,如往常一样,我们期望收到你的回复!
|
||||
别犹豫,在下面的表格中和我们分享你的疑问、评论和建议吧。
|
||||
|
||||
别犹豫,在下面的表单中和我们分享你的疑问、评论和建议吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -164,12 +166,12 @@ via: http://www.tecmint.com/perform-packet-filtering-network-address-translation
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
|
||||
[1]:https://linux.cn/article-6451-1.html
|
||||
[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/
|
||||
[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
|
||||
[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage
|
@ -1,32 +1,32 @@
|
||||
RHCE 第三部分 - 如何使用 Linux 工具集产生和发送系统活动报告
|
||||
RHCE 系列(三):如何使用 Linux 工具集生成和发送系统活动报告
|
||||
================================================================================
|
||||
作为一个系统工程师,你经常需要生成一些显示系统资源利用率的报告,以便确保:1)正最佳利用它们,2)防止出现瓶颈,3)确保可扩展性,以及其它原因。
|
||||
作为一个系统工程师,你经常需要生成一些显示系统资源利用率的报告,以便确保:1)正在合理利用系统,2)防止出现瓶颈,3)确保可扩展性,以及其它原因。
|
||||
|
||||
![监视 Linux 性能活动报告](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg)
|
||||
|
||||
RHCE 第三部分:监视 Linux 性能活动报告
|
||||
*RHCE 第三部分:监视 Linux 性能活动报告*
|
||||
|
||||
除了著名的用于检测磁盘、内存和 CPU 使用率的原生 Linux 工具 - 可以给出很多例子,红帽企业版 Linux 7 还提供了两个额外的工具集用于为你的报告增加可以收集的数据:sysstat 和 dstat。
|
||||
除了著名的用于检测磁盘、内存和 CPU 使用率的原生 Linux 工具 - 可以给出很多例子,红帽企业版 Linux 7 还提供了另外两个可以为你的报告更多数据的工具套装:sysstat 和 dstat。
|
||||
|
||||
在这篇文章中,我们会介绍两者,但首先让我们来回顾一下传统工具的使用。
|
||||
|
||||
### 原生 Linux 工具 ###
|
||||
|
||||
使用 df,你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。
|
||||
使用 df,你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视这两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。
|
||||
|
||||
# df -h [以人类可读形式显示输出]
|
||||
# df -h --total [生成总计]
|
||||
|
||||
![检查 Linux 总的磁盘使用](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png)
|
||||
|
||||
检查 Linux 总的磁盘使用
|
||||
*检查 Linux 总的磁盘使用*
|
||||
|
||||
# df -i [显示文件系统的 inode 数目]
|
||||
# df -i --total [生成总计]
|
||||
|
||||
![检查 Linux 总的 inode 数目](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png)
|
||||
|
||||
检查 Linux 总的 inode 数目
|
||||
*检查 Linux 总的 inode 数目*
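基于 df 的输出,很容易写一个当使用率超过阈值时发出警告的小脚本。下面是一个假设性示例;为便于演示,这里用一段示例输出代替真实的 `df -h` 结果(阈值和列位置都以示例数据为准):

```shell
#!/bin/sh
# 假设性示例:报告使用率超过阈值的文件系统
check_usage() {   # 用法: check_usage df输出文件 阈值(百分数)
    awk -v t="$2" 'NR > 1 {
        use = $5; sub(/%/, "", use)    # 第 5 列形如 "83%"
        if (use + 0 > t) printf "警告: %s 已使用 %s%%\n", $6, use
    }' "$1"
}

# 示例数据;真实使用时把 df 的输出保存到文件再传入
cat > df.sample <<'EOF'
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 10G 8.3G 1.7G 83% /
/dev/md0 8G 1G 7G 13% /mnt/raid1
EOF

check_usage df.sample 80    # 输出:警告: / 已使用 83%
```

把同样的逻辑套在 `df -i` 的输出上,就可以顺便监控 inode 的使用率。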
|
||||
|
||||
用 du,你可以估计文件、目录或文件系统的文件空间使用。
|
||||
|
||||
@ -37,7 +37,7 @@ RHCE 第三部分:监视 Linux 性能活动报告
|
||||
|
||||
![检查 Linux 目录磁盘大小](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png)
|
||||
|
||||
检查 Linux 目录磁盘大小
|
||||
*检查 Linux 目录磁盘大小*
|
||||
|
||||
别错过了:
|
||||
|
||||
@ -56,7 +56,7 @@ RHCE 第三部分:监视 Linux 性能活动报告
|
||||
|
||||
![检查 Linux 系统性能](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png)
|
||||
|
||||
检查 Linux 系统性能
|
||||
*检查 Linux 系统性能*
|
||||
|
||||
正如你从上面图片看到的,vmstat 的输出分为很多列:proc(process)、memory、swap、io、system、和 CPU。每个字段的意义可以在 vmstat man 手册的 FIELD DESCRIPTION 部分找到。
|
||||
|
||||
@ -66,20 +66,20 @@ RHCE 第三部分:监视 Linux 性能活动报告
|
||||
|
||||
![Vmstat Linux 性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png)
|
||||
|
||||
Vmstat Linux 性能监视
|
||||
*Vmstat Linux 性能监视*
|
||||
|
||||
请注意当磁盘上的文件被更改时,活跃内存的数量增加,写到磁盘的块数目(bo)和属于用户进程的 CPU 时间(us)也是这样。
|
||||
|
||||
或者一个保存大文件到磁盘时(dsync 引发):
|
||||
或者直接保存一个大文件到磁盘时(由 dsync 标志引发):
|
||||
|
||||
# vmstat -a 1 5
|
||||
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync
|
||||
|
||||
![Vmstat Linux 磁盘性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png)
|
||||
|
||||
Vmstat Linux 磁盘性能监视
|
||||
*Vmstat Linux 磁盘性能监视*
|
||||
|
||||
在这个例子中,我们可以看到很大数目的块被写入到磁盘(bo),这正如预期的那样,同时 CPU 处理任务之前等待 IO 操作完成的时间(wa)也增加了。
|
||||
在这个例子中,我们可以看到大量的块被写入到磁盘(bo),这正如预期的那样,同时 CPU 处理任务之前等待 IO 操作完成的时间(wa)也增加了。
|
||||
|
||||
**别错过**: [Vmstat – Linux 性能监视][3]
|
||||
|
||||
@ -90,22 +90,22 @@ Vmstat Linux 磁盘性能监视
|
||||
sysstat 软件包包含以下工具:
|
||||
|
||||
- sar (收集、报告、或者保存系统活动信息)。
|
||||
- sadf (以多种方式显式 sar 收集的数据)。
|
||||
- sadf (以多种方式显示 sar 收集的数据)。
|
||||
- mpstat (报告处理器相关的统计信息)。
|
||||
- iostat (报告 CPU 统计信息和设备以及分区的 IO统计信息)。
|
||||
- pidstat (报告 Linux 任务统计信息)。
|
||||
- nfsiostat (报告 NFS 的输出/输出统计信息)。
|
||||
- cifsiostat (报告 CIFS 统计信息)
|
||||
- sa1 (收集并保存系统活动日常文件的二进制数据)。
|
||||
- sa2 (在 /var/log/sa 目录写每日报告)。
|
||||
- sa1 (收集并保存二进制数据到系统活动每日数据文件中)。
|
||||
- sa2 (在 /var/log/sa 目录写入每日报告)。
|
||||
|
||||
dstat 为这些工具提供的功能添加了一些额外的特性,以及更多的计数器和更大的灵活性。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。
|
||||
dstat 比这些工具提供的功能更多,有更多的计数器,也更灵活。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。
|
||||
|
||||
安装两个软件包:
|
||||
|
||||
# yum update && yum install sysstat dstat
|
||||
|
||||
sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件中找到下面的参数:
|
||||
sysstat 主要的配置文件是 `/etc/sysconfig/sysstat`。你可以在该文件中找到下面的参数:
|
||||
|
||||
# How long to keep log files (in days).
|
||||
# If value is greater than 28, then log files are kept in
|
||||
@ -119,17 +119,17 @@ sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件
|
||||
# Compression program to use.
|
||||
ZIP="bzip2"
|
||||
|
||||
sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron 作业。第一个作业每 10 分钟运行系统活动计数工具并在 /var/log/sa/saXX 中保存报告,其中 XX 是该月的一天。
|
||||
sysstat 安装完成后,`/etc/cron.d/sysstat` 中会添加和启用两个 cron 任务。第一个任务每 10 分钟运行系统活动计数工具,并在 `/var/log/sa/saXX` 中保存报告,其中 XX 是该月的一天。
|
||||
|
||||
因此,/var/log/sa/sa05 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值:
|
||||
因此,`/var/log/sa/sa05` 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值:
|
||||
|
||||
*/10 * * * * root /usr/lib64/sa/sa1 1 1
|
||||
|
||||
第二个作业在每天夜间 11:53 生成每日进程计数总结并把它保存到 /var/log/sa/sarXX 文件,其中 XX 和之前例子中的含义相同:
|
||||
第二个任务在每天夜间 11:53 生成每日进程计数总结并把它保存到 `/var/log/sa/sarXX` 文件,其中 XX 和之前例子中的含义相同:
|
||||
|
||||
53 23 * * * root /usr/lib64/sa/sa2 -A
|
||||
|
||||
例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 5:30 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(它也允许你创建表格和图片):
|
||||
例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 5:30 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(这样就可以让你创建表格和图片了):
|
||||
|
||||
# sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv
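sadf 的 `-d` 输出以分号分隔,除了导入电子表格,也可以直接用 awk 做汇总。下面的假设性脚本在一段示例输出上演示如何计算 %user 列的平均值(字段位置以示例表头为准,真实输出中的列序可能不同):

```shell
#!/bin/sh
# 假设性示例:对 sadf -d 风格的分号分隔输出计算 %user 平均值
# 示例数据;真实数据来自类似:sadf -s 09:30:00 -e 17:30:00 -d /var/log/sa/sa06 --
cat > sa.sample <<'EOF'
hostname;interval;timestamp;CPU;%user;%nice;%system;%iowait;%steal;%idle
rhel7;600;2015-08-06 09:40:01 UTC;-1;2.00;0.00;1.00;0.50;0.00;96.50
rhel7;600;2015-08-06 09:50:01 UTC;-1;4.00;0.00;1.50;0.25;0.00;94.25
EOF

# 跳过表头;本示例中 %user 是第 5 列
awk -F';' 'NR > 1 { sum += $5; n++ }
           END { printf "avg %%user = %.2f\n", sum / n }' sa.sample
```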
|
||||
|
||||
@ -137,7 +137,7 @@ sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron
|
||||
|
||||
![Linux 系统统计信息](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png)
|
||||
|
||||
Linux 系统统计信息
|
||||
*Linux 系统统计信息*
|
||||
|
||||
最后,让我们看看 dstat 提供什么功能。请注意如果不带参数运行,dstat 默认使用 -cdngy(表示 CPU、磁盘、网络、内存页、和系统统计信息),并每秒添加一行(可以在任何时候用 Ctrl + C 中断执行):
|
||||
|
||||
@ -145,15 +145,15 @@ Linux 系统统计信息
|
||||
|
||||
![Linux 磁盘统计检测](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png)
|
||||
|
||||
Linux 磁盘统计检测
|
||||
*Linux 磁盘统计检测*
|
||||
|
||||
要输出统计信息到 .csv 文件,可以用 `--output` 选项,后面跟一个文件名称。让我们来看看在 LibreOffice Calc 中该文件看起来是怎样的:
|
||||
|
||||
![检测 Linux 统计信息输出](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png)
|
||||
|
||||
检测 Linux 统计信息输出
|
||||
*检测 Linux 统计信息输出*
|
||||
|
||||
我强烈建议你查看 dstat 的 man 手册,为了方便你的阅读用 PDF 格式包括本文以及 sysstat 的 man 手册。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。
|
||||
为了获得更好的阅读体验,我强烈建议你查看 [dstat][5] 和 [sysstat][6] 的 pdf 格式 man 手册。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。
|
||||
|
||||
**别错过**: [Sysstat – Linux 的使用活动检测工具][4]
|
||||
|
||||
@ -161,7 +161,7 @@ Linux 磁盘统计检测
|
||||
|
||||
在该指南中我们解释了如何使用 Linux 原生工具以及 RHEL 7 提供的特定工具来生成系统使用报告。在某种情况下,你可能像依赖最好的朋友那样依赖这些报告。
|
||||
|
||||
你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表格和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。
|
||||
你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表单和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。
|
||||
|
||||
我们期待你的回复。
|
||||
|
||||
@ -171,12 +171,14 @@ via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statist
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
|
||||
[1]:https://linux.cn/article-6466-1.html
|
||||
[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/
|
||||
[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
|
||||
[4]:http://www.tecmint.com/install-sysstat-in-linux/
|
||||
[3]:https://linux.cn/article-4024-1.html
|
||||
[4]:https://linux.cn/article-4028-1.html
|
||||
[5]:http://www.tecmint.com/wp-content/pdf/dstat.pdf
|
||||
[6]:http://www.tecmint.com/wp-content/pdf/sysstat.pdf
|
@ -1,20 +1,20 @@
|
||||
第四部分 - 使用 Shell 脚本自动化 Linux 系统维护任务
|
||||
RHCE 系列(四): 使用 Shell 脚本自动化 Linux 系统维护任务
|
||||
================================================================================
|
||||
之前我听说高效系统管理员/工程师的其中一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因:
|
||||
之前我听说高效的系统管理员的一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因:
|
||||
|
||||
![自动化 Linux 系统维护任务](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png)
|
||||
|
||||
RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
*RHCE 系列:第四部分 - 自动化 Linux 系统维护任务*
|
||||
|
||||
如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得尽量花费少的时间去做重复的工作,以及通过使用该系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。
|
||||
如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得其尽量花费少的时间去做重复的工作,以及通过使用本系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具来预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。
|
||||
|
||||
### 什么是 shell 脚本? ###
|
||||
|
||||
简单的说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和端用户之间提供接口的另一个程序。
|
||||
简单的说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和最终用户之间提供接口的另一个程序。
|
||||
|
||||
默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看 [维基页面][2]。
|
||||
默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看这个[维基页面][2]。
|
||||
|
||||
关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 ([Bash 命令][3])下载 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。
|
||||
关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 [Bash 命令][3] 处下载其 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。
|
||||
|
||||
### 写一个脚本显示系统信息 ###
|
||||
|
||||
@ -27,7 +27,7 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
# RHCE 系列第四部分事例脚本
|
||||
# RHCE 系列第四部分示例脚本
|
||||
# 该脚本会返回以下这些系统信息:
|
||||
# -主机名称:
|
||||
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
|
||||
@ -67,9 +67,9 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
![服务器监视 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png)
|
||||
|
||||
服务器监视 Shell 脚本
|
||||
*服务器监视 Shell 脚本*
|
||||
|
||||
该功能用以下命令提供:
|
||||
颜色功能是由以下命令提供的:
|
||||
|
||||
echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"
|
||||
|
||||
@ -79,13 +79,13 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
你想使其自动化的任务可能因情况而不同。因此,我们不可能在一篇文章中覆盖所有可能的场景,但是我们会介绍使用 shell 脚本可以使其自动化的三种典型任务:
|
||||
|
||||
**1)** 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阀值时发出警告。
|
||||
1) 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阈值时发出警告。
|
||||
|
||||
让我们在脚本目录中新建一个名为 `auto_tasks.sh` 的文件并添加以下内容:
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
# 自动化任务事例脚本:
|
||||
# 自动化任务示例脚本:
|
||||
# -更新本地文件数据库:
|
||||
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
|
||||
updatedb
|
||||
@ -123,16 +123,16 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
![查找 777 权限文件的 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png)
|
||||
|
||||
查找 777 权限文件的 Shell 脚本
|
||||
*查找 777 权限文件的 Shell 脚本*
|
||||
|
||||
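顺带补充上面第三个任务的一种可能写法:下面是一个基于 `df -P` 和 awk 的示意脚本,当文件系统使用率超过给定阈值时打印警告(阈值 90%、函数名和示例数据均为假设,并非文中脚本的原文):

```shell
#!/bin/bash
# 示意:检查 df -P 格式的输出,对超过阈值的文件系统打印警告
check_usage() {
    limit=$1
    awk -v limit="$limit" 'NR > 1 {
        gsub(/%/, "", $5)                       # 去掉使用率里的百分号
        if ($5 + 0 > limit)
            printf "WARNING: %s is %s%% full\n", $6, $5
    }'
}

# 实际使用:df -P | check_usage 90
# 这里用两行假设的 df 输出演示效果:
printf 'Filesystem 1024-blocks Used Available Capacity Mounted\n/dev/sda1 100 95 5 95%% /\n/dev/sdb1 100 10 90 10%% /data\n' \
    | check_usage 90 > usage_warnings.txt
cat usage_warnings.txt
```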
### 使用 Cron ###
|
||||
|
||||
想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发动给预定义的接收者或者将它们保存到使用 web 浏览器可以查看的文件中。
|
||||
想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发送给预先指定的接收者,或者将它们保存到使用 web 浏览器可以查看的文件中。
|
||||
|
||||
下面的脚本(filesystem_usage.sh)会运行有名的 **df -h** 命令,格式化输出到 HTML 表格并保存到 **report.html** 文件中:
|
||||
|
||||
#!/bin/bash
|
||||
# Sample script to demonstrate the creation of an HTML report using shell scripting
|
||||
# 演示使用 shell 脚本创建 HTML 报告的示例脚本
|
||||
# Web directory
|
||||
WEB_DIR=/var/www/html
|
||||
# A little CSS and table layout to make the report look a little nicer
|
||||
@ -177,7 +177,7 @@ RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
![服务器监视报告](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png)
|
||||
|
||||
服务器监视报告
|
||||
*服务器监视报告*
|
||||
|
||||
你可以添加任何你想要的信息到那个报告中。添加下面的 crontab 条目在每天下午的 1:30 运行该脚本:
|
||||
|
||||
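按照文中的描述(每天下午 1:30 运行),这个 crontab 条目大致如下(脚本路径为假设值;如果写入 /etc/crontab,还需要在命令前加上执行用户):

```
30 13 * * * /root/scripts/filesystem_usage.sh
```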
@ -193,12 +193,12 @@ via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintena
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
|
||||
[1]:https://linux.cn/article-6512-1.html
|
||||
[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
|
||||
[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf
|
||||
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
|
@ -1,26 +1,24 @@
|
||||
第五部分 - 如何在 RHEL 7 中管理系统日志(配置、旋转以及导入到数据库)
|
||||
RHCE 系列(五):如何在 RHEL 7 中管理系统日志(配置、轮换以及导入到数据库)
|
||||
================================================================================
|
||||
为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件监控系统中发生的所有活动。这样,你就可以检测任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。
|
||||
为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件来监控系统中发生的所有活动。这样,你就可以检测到任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。
|
||||
|
||||
![Linux 中使用 Rsyslog 和 Logrotate 旋转日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg)
|
||||
![Linux 中使用 Rsyslog 和 Logrotate 轮换日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg)
|
||||
|
||||
(译者注:[日志旋转][9]是系统管理中归档每天产生的日志文件的自动化过程)
|
||||
|
||||
RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志
|
||||
*RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志*
|
||||
|
||||
在 RHEL 7 中,[rsyslogd][1] 守护进程负责系统日志,它从 /etc/rsyslog.conf(该文件指定所有系统日志的默认路径)和 /etc/rsyslog.d 中的所有文件(如果有的话)读取配置信息。
|
||||
|
||||
### Rsyslogd 配置 ###
|
||||
|
||||
快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslong 按照模块化设计),全局指令(用于设置 rsyslogd 守护进程的全局属性),以及规则。正如你可能猜想的,最后一个部分指示获取,显示以及在哪里保存什么的日志(也称为选择子),这也是这篇博文关注的重点。
|
||||
快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslog 按照模块化设计),全局指令(用于设置 rsyslogd 守护进程的全局属性),以及规则。正如你可能猜想的,最后一个部分指示记录或显示什么以及在哪里保存(也称为选择子(selector)),这也是这篇文章关注的重点。
|
||||
|
||||
rsyslog.conf 中典型的一行如下所示:
|
||||
|
||||
![Rsyslogd 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png)
|
||||
|
||||
Rsyslogd 配置
|
||||
*Rsyslogd 配置*
|
||||
|
||||
在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的设备:优先级(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3] 查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一:
|
||||
在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的“设备:优先级”(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3],查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一:
|
||||
|
||||
- debug
|
||||
- info
|
||||
@ -31,7 +29,7 @@ Rsyslogd 配置
|
||||
- alert
|
||||
- emerg
|
||||
|
||||
尽管自身并不是一个优先级,关键字 none 意味着指定设备没有任何优先级。
|
||||
尽管 none 并不是一个优先级,不过它意味着指定设备没有任何优先级。
|
||||
|
||||
**注意**:给定一个优先级表示该优先级以及之上的消息都应该记录到日志中。因此,上面例子中的行指示 rsyslogd 守护进程记录所有优先级为 info 以及以上(不管是什么设备)的除了属于 mail、authpriv、以及 cron 服务(不考虑来自这些设备的消息)的消息到 /var/log/messages。
|
||||
|
||||
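以上面注意事项里描述的那条规则为例,它在 rsyslog.conf 中大致写作如下(这也是 RHEL 7 默认配置中的写法,仅作示意):

```
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
```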
@ -47,7 +45,7 @@ Rsyslogd 配置
|
||||
|
||||
#### 创建自定义日志文件 ####
|
||||
|
||||
要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(易于管理)添加下面一行:
|
||||
要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(这样易于管理)添加下面一行:
|
||||
|
||||
daemon.* /var/log/tecmint.log
|
||||
|
||||
@ -55,19 +53,19 @@ Rsyslogd 配置
|
||||
|
||||
# systemctl restart rsyslog
|
||||
|
||||
在随机重启两个守护进程之前和之后查看自定义日志的内容:
|
||||
随意重启两个守护进程,并在重启前后分别查看一下自定义日志的内容:
|
||||
|
||||
![Linux 创建自定义日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png)
|
||||
|
||||
创建自定义日志文件
|
||||
*创建自定义日志文件*
|
||||
|
||||
作为一个自学练习,我建议你重点关注设备和优先级,添加额外的消息到已有的日志文件或者像上面那样创建一个新的日志文件。
|
||||
|
||||
### 使用 Logrotate 旋转日志 ###
|
||||
### 使用 Logrotate 轮换日志 ###
|
||||
|
||||
为了防止日志文件无限制增长,logrotate 工具用于旋转、压缩、移除或者通过电子邮件发送日志,从而减轻管理会产生大量日志文件系统的困难。
|
||||
为了防止日志文件无限制增长,logrotate 工具用于轮换、压缩、移除或者通过电子邮件发送日志,从而减轻管理会产生大量日志文件的系统的负担。(译者注:[日志轮换][9](rotate)是系统管理中归档每天产生的日志文件的自动化过程)
|
||||
|
||||
Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。
|
||||
Logrotate 作为一个 cron 任务(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。
|
||||
|
||||
对于 rsyslog,即使你可以在主文件中为指定服务包含设置,为每个服务创建单独的配置文件能帮助你更好地组织设置。
|
||||
|
||||
@ -75,27 +73,27 @@ Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行,
|
||||
|
||||
![Logrotate 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png)
|
||||
|
||||
Logrotate 配置
|
||||
*Logrotate 配置*
|
||||
|
||||
在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月旋转一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如旋转指令指定的:
|
||||
在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月轮换一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如轮换指令指定的:
|
||||
|
||||
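截图中描述的这段配置,用文本写出来大致如下(与 RHEL 7 默认 /etc/logrotate.conf 中的 wtmp 段一致,仅作示意):

```
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}
```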
![每月 Logrotate 日志](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png)
|
||||
|
||||
每月 Logrotate 日志
|
||||
*每月 Logrotate 日志*
|
||||
|
||||
让我们再来看看 /etc/logrotate.d/httpd 中的另一个例子:
|
||||
|
||||
![旋转 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png)
|
||||
![轮换 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png)
|
||||
|
||||
旋转 Apache 日志文件
|
||||
*轮换 Apache 日志文件*
|
||||
|
||||
你可以在 logrotate 的 man 手册([man logrotate][4] 和 [man logrotate.conf][5])中阅读更多有关它的设置。为了方便你的阅读,本文还提供了两篇文章的 PDF 格式。
|
||||
|
||||
作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,取决于你是否有一个单独的分区/逻辑卷给 /var。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能被迫要保留多个日志。
|
||||
作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,这取决于你是否有一个单独的分区/逻辑卷给 `/var`。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能必须要保留多个日志。
|
||||
|
||||
#### 保存日志到数据库 ####
|
||||
|
||||
当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(OTB 支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle)。
|
||||
当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(开箱即用支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle 等)。
|
||||
|
||||
指南的这部分假设你已经在要管理日志的 RHEL 7 上安装了 MariaDB 服务器和客户端:
|
||||
|
||||
@ -104,10 +102,9 @@ Logrotate 配置
|
||||
|
||||
然后使用 `mysql_secure_installation` 工具为 root 用户设置密码以及其它安全考量:
|
||||
|
||||
|
||||
![保证 MySQL 数据库安全](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png)
|
||||
|
||||
保证 MySQL 数据库安全
|
||||
*保证 MySQL 数据库安全*
|
||||
|
||||
注意:如果你不想用 MariaDB root 用户插入日志消息到数据库,你也可以配置用另一个用户账户。如何实现的介绍已经超出了本文的范围,但在 [MariaDB 知识库][6] 中有详细解析。为了简单起见,在这篇指南中我们会使用 root 账户。
|
||||
|
||||
@ -117,7 +114,7 @@ Logrotate 配置
|
||||
|
||||
![保存服务器日志到数据库](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png)
|
||||
|
||||
保存服务器日志到数据库
|
||||
*保存服务器日志到数据库*
|
||||
|
||||
最后,添加下面的行到 /etc/rsyslog.conf:
|
||||
|
||||
@ -132,18 +129,18 @@ Logrotate 配置
|
||||
|
||||
#### 使用 SQL 语法查询日志 ####
|
||||
|
||||
现在执行一些会改变日志的操作(例如停止和启动服务),然后登陆到你的 DB 服务器并使用标准的 SQL 命令显示和查询日志:
|
||||
现在执行一些会改变日志的操作(例如停止和启动服务),然后登录到你的数据库服务器并使用标准的 SQL 命令显示和查询日志:
|
||||
|
||||
USE Syslog;
|
||||
SELECT ReceivedAt, Message FROM SystemEvents;
|
||||
|
||||
![在数据库中查询日志](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png)
|
||||
|
||||
在数据库中查询日志
|
||||
*在数据库中查询日志*
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇文章中我们介绍了如何设置系统日志,如果旋转日志以及为了简化查询如何重定向消息到数据库。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。
|
||||
在这篇文章中我们介绍了如何设置系统日志,如何轮换日志,以及如何重定向消息到数据库以简化查询。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。
|
||||
|
||||
正如往常,非常欢迎你的反馈。用下面的表单和我们联系吧。
|
||||
|
||||
@ -153,7 +150,7 @@ via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotat
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -165,5 +162,5 @@ via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotat
|
||||
[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf
|
||||
[6]:https://mariadb.com/kb/en/mariadb/create-user/
|
||||
[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql
|
||||
[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
|
||||
[8]:https://linux.cn/article-6451-1.html
|
||||
[9]:https://en.wikipedia.org/wiki/Log_rotation
|
@ -1,4 +1,4 @@
|
||||
安装 Samba 并配置 Firewalld 和 SELinux 使得能在 Linux 和 Windows 之间共享文件 - 第六部分
|
||||
RHCE 系列(六):安装 Samba 并配置 Firewalld 和 SELinux 让 Linux 和 Windows 共享文件
|
||||
================================================================================
|
||||
由于计算机很少作为一个独立的系统工作,作为一个系统管理员或工程师,就应该知道如何在有多种类型的服务器之间搭设和维护网络。
|
||||
|
||||
@ -6,9 +6,9 @@
|
||||
|
||||
![在 Linux 中配置 Samba 进行文件共享](http://www.tecmint.com/wp-content/uploads/2015/09/setup-samba-file-sharing-on-linux-windows-clients.png)
|
||||
|
||||
RHCE 系列第六部分 - 设置 Samba 文件共享
|
||||
*RHCE 系列第六部分 - 设置 Samba 文件共享*
|
||||
|
||||
如果有人叫你设置文件服务器用于协作或者配置很可能有多种不同类型操作系统和设备的企业环境,这篇文章就能派上用场。
|
||||
如果有人让你设置文件服务器用于协作或者配置很可能有多种不同类型操作系统和设备的企业环境,这篇文章就能派上用场。
|
||||
|
||||
由于你可以在网上找到很多关于 Samba 和 NFS 背景和技术方面的介绍,在这篇文章以及后续文章中我们就省略了这些部分直接进入到我们的主题。
|
||||
|
||||
@ -22,7 +22,7 @@ RHCE 系列第六部分 - 设置 Samba 文件共享
|
||||
|
||||
![测试安装 Samba](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Setup-for-Samba.png)
|
||||
|
||||
测试安装 Samba
|
||||
*测试安装 Samba*
|
||||
|
||||
在 box1 中安装以下软件包:
|
||||
|
||||
@ -36,7 +36,7 @@ RHCE 系列第六部分 - 设置 Samba 文件共享
|
||||
|
||||
### 步骤二: 设置通过 Samba 进行文件共享 ###
|
||||
|
||||
Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB 是微软和英特尔制定的一种通信协议,CIFS 是其中一个版本,更详细的介绍可以参考[Wiki][6])提供了文件和打印设备,这使得客户端看起来服务器就是一个 Windows 系统(我必须承认写这篇文章的时候我有一点激动,因为这是我多年前作为一个新手 Linux 系统管理员的第一次设置)。
|
||||
Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(LCTT 译注:SMB 是微软和英特尔制定的一种通信协议,CIFS 是其中一个版本,更详细的介绍可以参考 [Wiki][6])提供了文件和打印设备,这使得服务器在客户端看起来就是一个 Windows 系统(我必须承认写这篇文章的时候我有一点激动,因为这是我多年前作为一个新手 Linux 系统管理员的第一次设置)。
|
||||
|
||||
**添加系统用户并设置权限和属性**
|
||||
|
||||
@ -91,9 +91,9 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
![测试 Samba 配置](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Samba-Configuration.png)
|
||||
|
||||
测试 Samba 配置
|
||||
*测试 Samba 配置*
|
||||
|
||||
如果你要添加另一个公开的共享目录(意味着没有任何验证),在 /etc/samba/smb.conf 中创建另一章节,在共享目录名称下面复制上面的章节,只需要把 public=no 更改为 public=yes 并去掉有效用户和写列表命令。
|
||||
如果你要添加另一个公开的共享目录(意味着不需要任何验证),在 /etc/samba/smb.conf 中创建另一章节,在共享目录名称下面复制上面的章节,只需要把 public=no 更改为 public=yes 并去掉有效用户(valid users)和写列表(write list)命令。
|
||||
|
||||
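按照这个描述,一个公开共享目录的配置节大致如下(共享名 [public_share] 和路径均为假设值):

```
[public_share]
        comment = Public directory
        path = /samba/public
        public = yes
```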
### 步骤五: 添加 Samba 用户 ###
|
||||
|
||||
@ -102,7 +102,7 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
# smbpasswd -a user1
|
||||
# smbpasswd -a user2
|
||||
|
||||
最后,重启 Samda,启用系统启动时自动启动服务,并确保共享目录对网络客户端可用:
|
||||
最后,重启 Samba,并让系统启动时自动启动该服务,确保共享目录对网络客户端可用:
|
||||
|
||||
# systemctl start smb
|
||||
# systemctl enable smb
|
||||
@ -112,7 +112,7 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
![验证 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Verify-Samba-Share.png)
|
||||
|
||||
验证 Samba 共享
|
||||
*验证 Samba 共享*
|
||||
|
||||
到这里,已经正确安装和配置了 Samba 文件服务器。现在让我们在 RHEL 7 和 Windows 8 客户端中测试该配置。
|
||||
|
||||
@ -120,12 +120,11 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
首先,确保客户端可以访问 Samba 共享:
|
||||
|
||||
# smbclient –L 192.168.0.18 -U user2
|
||||
|
||||
# smbclient -L 192.168.0.18 -U user2
|
||||
|
||||
![在 Linux 上挂载 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-on-Linux.png)
|
||||
|
||||
在 Linux 上挂载 Samba 共享
|
||||
*在 Linux 上挂载 Samba 共享*
|
||||
|
||||
(为 user1 重复上面的命令)
|
||||
|
||||
@ -135,11 +134,11 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
![挂载 Samba 网络共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Network-Share.png)
|
||||
|
||||
挂载 Samba 网络共享
|
||||
*挂载 Samba 网络共享*
|
||||
|
||||
(其中 /media/samba 是一个已有的目录)
|
||||
|
||||
或者在 /etc/fstab 文件中添加下面的条目自动挂载:
|
||||
或者在 /etc/fstab 文件中添加下面的条目以自动挂载:
|
||||
|
||||
**fstab**
|
||||
|
||||
@ -147,7 +146,7 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
//192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0
|
||||
|
||||
其中隐藏文件 /media/samba/.smbcredentials(它的权限被设置为 600 和 root:root)有两行,指示允许使用共享的账户的用户名和密码:
|
||||
其中隐藏文件 /media/samba/.smbcredentials(它的权限被设置为 600 和 root:root)有两行内容,指示允许使用共享的账户的用户名和密码:
|
||||
|
||||
**.smbcredentials**
|
||||
|
||||
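这个凭证文件的格式如下(这里的用户名和密码均为示意值,请替换成实际允许访问该共享的 Samba 账户):

```
username=user1
password=YourSambaPasswordHere
```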
@ -162,17 +161,17 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
![在 Samba 共享中创建文件](http://www.tecmint.com/wp-content/uploads/2015/09/Create-File-in-Samba-Share.png)
|
||||
|
||||
在 Samba 共享中创建文件
|
||||
*在 Samba 共享中创建文件*
|
||||
|
||||
正如你看到的,用权限 0770 和属主 user1:finance 创建了文件。
|
||||
|
||||
### 步骤七: 在 Windows 上挂载 Samba 共享 ###
|
||||
|
||||
要在 Windows 上挂载 Samba 共享,进入 ‘我的计算机’ 并选择 ‘计算机’,‘网络驱动映射’。下一步,为要映射的驱动分配一个字母并用不同的认证检查连接(下面的截图使用我的母语西班牙语):
|
||||
要在 Windows 上挂载 Samba 共享,进入 ‘我的计算机’ 并选择 ‘计算机’,‘网络驱动映射’。下一步,为要映射的驱动分配一个驱动器盘符并用不同的认证身份检查是否可以连接(下面的截图使用我的母语西班牙语):
|
||||
|
||||
![在 Windows 中挂载 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/09/Mount-Samba-Share-in-Windows.png)
|
||||
|
||||
在 Windows 中挂载 Samba 共享
|
||||
*在 Windows 中挂载 Samba 共享*
|
||||
|
||||
最后,让我们新建一个文件并检查权限和属性:
|
||||
|
||||
@ -188,7 +187,7 @@ Samba 这么重要的原因之一是它为 SMB/CIFS 客户端(译者注:SMB
|
||||
|
||||
在这篇文章中我们不仅介绍了如何使用不同操作系统设置 Samba 服务器和两个客户端,也介绍了[如何配置 Firewalld][3] 和 [服务器中的 SELinux][4] 以获取所需的组协作功能。
|
||||
|
||||
最后,同样重要的是,我推荐阅读网上的 [smb.conf man 手册][5] 查看其它可能针对你的情况比本文中介绍的场景更加合适的配置命令。
|
||||
最后,同样重要的是,我推荐阅读网上的 [smb.conf man 手册][5],查看其它可能比本文介绍的场景更适合你情况的配置指令。
|
||||
|
||||
正如往常,欢迎在下面的评论框中留下你的评论或建议。
|
||||
|
||||
@ -198,7 +197,7 @@ via: http://www.tecmint.com/setup-samba-file-sharing-for-linux-windows-clients/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,48 +1,48 @@
|
||||
The history of Android
|
||||
安卓编年史(6)
|
||||
================================================================================
|
||||
![T-Mobile G1](http://cdn.arstechnica.net/wp-content/uploads/2014/04/t-mobile_g1.jpg)
|
||||
T-Mobile G1
|
||||
T-Mobile供图
|
||||
|
||||
*T-Mobile G1 [T-Mobile供图]*
|
||||
|
||||
### 安卓1.0——谷歌系app和实体硬件的引入 ###
|
||||
|
||||
到了2008年10月,安卓1.0已经准备好发布,这个系统在[T-Mobile G1][1](又以HTC Dream为人周知)上初次登台。G1进入了被iPhone 3G和[Nokia 1680 classic][2]所主宰的市场。(这些手机并列获得了2008年[销量最佳手机][3]称号,各自卖出了350万台。)G1的销量数字已难以获得,但T-Mobile宣称截至2009年4月该设备的销量突破了100万台。无论从哪方面来说这在竞争中都处于落后地位。
|
||||
到了2008年10月,安卓1.0已经准备好发布,这个系统在[T-Mobile G1][1](又以HTC Dream为人周知)上初次登台。G1进入了被iPhone 3G和[Nokia 1680 classic][2]所主宰的市场。(这些手机并列获得了2008年[销量最佳手机][3]称号,各自卖出了350万台。)G1的具体销量数字已难以获得,但T-Mobile宣称截至2009年4月该设备的销量突破了100万台。无论从哪方面来说这在竞争中都处于落后地位。
|
||||
|
||||
G1拥有单核528Mhz的ARM 11处理器,一个Adreno 130的GPU,192MB内存,以及多达256MB的存储空间供给系统以及应用使用。它有一块3.2英寸,320x480分辨率的显示屏,被布置在一个含有实体全键盘的滑动结构之上。所以尽管安卓软件的确走过了很长的一段路,硬件也是的。时至今日,我们可以在厂商的一个手表中得到比这更好的参数:最新的[三星智能手表][4]拥有512MB内存以及1GHz的双核处理器。
|
||||
G1拥有单核528MHz的ARM 11处理器,一个Adreno 130的GPU,192MB内存,以及多达256MB的存储空间提供给系统以及应用使用。它有一块3.2英寸、320x480分辨率的显示屏,被布置在一个含有实体全键盘的滑动结构之上。所以尽管安卓软件的确走过了很长的一段路,硬件也是如此。时至今日,我们可以在厂商提供的一块智能手表里得到比这更好的参数:最新的[三星智能手表][4]拥有512MB内存以及1GHz的双核处理器。
|
||||
|
||||
当iPhone有着最少数量的按键的时候,G1确实完全相反的,按键几乎支持每个硬件控制。它有拨通和挂断按钮,home键,后退,以及菜单键,一个相机快门键,音量控制键,一个轨迹球,当然,还有50个键盘按钮。未来安卓设备将会慢慢离开按键多多的界面设计,几乎每部新旗舰都在减少按键的数量。
|
||||
当iPhone只保留最少数量按键的时候,G1则完全相反,几乎每项硬件控制都有对应的按键。它有拨通和挂断按钮,home键,后退,以及菜单键,一个相机快门键,音量控制键,一个轨迹球,当然,还有50个键盘按键。未来安卓设备将会慢慢离开按键多多的界面设计,几乎每部新旗舰都在减少按键的数量。
|
||||
|
||||
但是这是第一次,人们见到了运行在实机上的安卓,而不是跑在一个令人沮丧的慢吞吞的模拟器上。安卓1.0没有iPhone那样顺滑流畅,闪亮耀眼,或拥有那么多的新闻报道。它也不像Windows Mobile 6.5那样才华横溢。但这仍然是个好的开始。
|
||||
|
||||
![安卓1.0和0.9的默认应用列表。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/apps.png)
|
||||
安卓1.0和0.9的默认应用列表。
|
||||
Ron Amadeo供图
|
||||
|
||||
安卓1.0的核心与两个月前发布的beta版本相比看起来并没有什么引人注目的不同,但消费者产品带来了不少应用,包括一套完整的谷歌系应用。日历,电子邮件,Gmail,即时通讯,市场,设置,语音拨号,以及YouTube都是全新登场。那时候,音乐是智能手机上占据主宰地位的媒体类型,其王者是iTunes音乐商店。谷歌没有自家的音乐服务,所以它选择了亚马逊并绑定了亚马逊MP3商店。
|
||||
*安卓1.0和0.9的默认应用列表。[Ron Amadeo供图]*
|
||||
|
||||
安卓最重要的新增是谷歌商店的首次登场,叫做“安卓市场Beta”。与此同时大部分公司满足于将它们的软件目录称作一些不同的“应用商店”——意思是一个出售应用的商店,并且只出售应用——谷歌明显有着更大的野心。它搭配了一个更为通用的名字,“安卓市场”。这个名字的想法是安卓市场不仅仅拥有应用,还拥有一切你的安卓设备所需要的东西。
|
||||
安卓1.0的核心与两个月前发布的beta版本相比看起来并没有什么引人注目的不同,但这个消费产品带来了不少应用,包括一套完整的谷歌系应用。日历,电子邮件,Gmail,即时通讯,市场,设置,语音拨号,以及YouTube都是全新登场。那时候,音乐是智能手机上占据主宰地位的媒体类型,其王者是iTunes音乐商店。谷歌没有自家的音乐服务,所以它选择了亚马逊并绑定了亚马逊MP3商店。
|
||||
|
||||
安卓最重要的新增内容是首次登场的谷歌商店,叫做“安卓市场Beta”。与此同时大部分公司满足于将它们的软件目录称作各种“应用商店”——意思是一个出售应用的商店,并且只出售应用——谷歌明显有着更大的野心。它搭配了一个更为通用的名字,“安卓市场”。这个名字的想法是安卓市场不仅仅拥有应用,还拥有一切你的安卓设备所需要的东西。
|
||||
|
||||
![第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/market.png)
|
||||
第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。
|
||||
[Google][5]供图
|
||||
|
||||
那时候,安卓市场只提供应用和游戏,开发者们甚至还不能为它们收费。苹果的App Store相对与安卓市场有4个月的先发优势,但是谷歌的主要差异化在于安卓的商店几乎是完全开放的。在iPhone上,应用受制于苹果的审查,必须遵循设计和技术指南。潜在的新应用不允许在功能上复制已有应用。在安卓市场,开发者可以自由地做任何想做的,包括开发替代已有的应用。控制的缺失会转变成祝福同时也是诅咒。它允许开发者革新已有的功能,但同时意味着甚至是毫无价值的垃圾应用也被允许进入市场。
|
||||
*第一个安卓市场客户端。截图展示了主页,“我的下载”,一个应用页面,以及一个应用权限页面。[[Google][5]供图]*
|
||||
|
||||
现在,这个客户端是又一个不再能够和谷歌服务器通讯的应用。幸运的是,它也是在因特网上被[真正记录][6]的为数不多的早期安卓应用之一。主页提供了通向一般区域的连接,像应用,游戏,搜索,以及下载,顶部有横向滚动显示的特色应用图标。搜索结果和“我的下载”页面以滚动列表的方式显示应用,显示应用名,开发者,费用(在那时都是免费的),以及评分。单独的应用页面展示了一个简短的描述,安装数,用户评论和评分,以及最重要的安装按钮。早期的安卓市场不支持图片,开发者唯一能使用的区域是应用描述,还有着500字的限制。这使得类似维护一个更新日志变的十分困难,因为只有描述的位置可以供其使用。
|
||||
那时候,安卓市场只提供应用和游戏,开发者们甚至还不能为它们收费。苹果的App Store相对与安卓市场有4个月的先发优势,但是谷歌的主要差异化在于安卓的商店几乎是完全开放的。在iPhone上,应用受制于苹果的审查,必须遵循设计和技术指南。潜在的新应用不允许在功能上复制已有应用。在安卓市场,开发者可以自由地做任何想做的,包括开发替代已有的应用。控制的缺失导致福祸相依。它允许开发者革新已有的功能,但同时意味着甚至是毫无价值的垃圾应用也被允许进入市场。
|
||||
|
||||
时至今日,这个安卓市场的客户端是又一个不再能够和谷歌服务器通讯的应用。幸运的是,它也是在因特网上被[真正记录][6]的为数不多的早期安卓应用之一。主页提供了通向一般区域的连接,像应用,游戏,搜索,以及下载,顶部有横向滚动显示的特色应用图标。搜索结果和“我的下载”页面以滚动列表的方式显示应用,显示应用名,开发者,费用(在那时都是免费的),以及评分。单独的应用页面展示了一个简短的描述,安装数,用户评论和评分,以及最重要的安装按钮。早期的安卓市场不支持图片,开发者唯一能使用的区域是应用描述,还有着500字的限制。这使得类似维护一个更新日志变的十分困难,因为只有描述的位置可以供其使用。
|
||||
|
||||
就在安装之前,安卓市场显示了应用所需要的权限。这是苹果直至2012年之前都避免做的,那年一个iOS应用被发现在用户不知情的情况下[将完整的通讯录上传][7]到云端。权限显示给出了一个完整的应用用到的权限列表,尽管这个版本强迫用户同意应用权限。界面有个“OK”按钮,但是除了后退按钮没有办法取消。
|
||||
|
||||
![Gmail展示收件箱,打开菜单的收件箱。 ](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gmail1.01.png)
|
||||
Gmail展示收件箱,打开菜单的收件箱。
|
||||
Ron Amadeo供图
|
||||
|
||||
下一个重要的应用也许就是Gmail。大多数基本的功能此时已经准备好了。未读邮件以加粗显示,标签是个有颜色的标记。在收件箱中每封独立邮件显示着主题,发件人,以及一个会话中的回复数。Gmail加星标志也在这里——快速点击即可给邮件加星或取消。一如往常,对于早期版本的安卓,菜单里有收件箱视图应有的所有按钮。但是,一旦打开了一封邮件,界面看起来就更加的现代了,“回复”和“转发”按钮永久固定在了屏幕底部。各个独立回复可以点击它们来展开和收缩。
|
||||
*Gmail展示收件箱,打开菜单的收件箱。[Ron Amadeo供图]*
|
||||
|
||||
下一个重要的应用也许就是Gmail。大多数基本的功能此时已经准备好了。未读邮件以加粗显示,标签是个有颜色的标记。在收件箱中每封独立邮件显示着主题,发件人,以及一个会话中的回复数。Gmail加星标志也在这里——快速点击即可给邮件加星或取消。一如往常,对于早期版本的安卓,菜单里有收件箱视图应有的所有按钮。但是,一旦打开了一封邮件,界面看起来就更加的现代了,“回复”和“转发”按钮永久固定在了屏幕底部。单独回复可以点击它们来展开和收缩。
|
||||
|
||||
圆角,阴影,以及气泡图标给了整个应用“卡通”的外表,但是这是个好的开始。安卓的功能第一哲学真正从此开始:Gmail支持标签,邮件会话,搜索,以及邮件推送。
|
||||
|
||||
![Gmail在安卓1.0的标签视图,写邮件界面,以及设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/gmail3.png)
|
||||
Gmail在安卓1.0的标签视图,写邮件界面,以及设置。
|
||||
Ron Amadeo供图
|
||||
|
||||
*Gmail在安卓1.0的标签视图,写邮件界面,以及设置。[Ron Amadeo供图]*
|
||||
|
||||
但是如果你认为Gmail很丑,电子邮件应用又拉低了下限。它没有分离的收件箱或文件夹视图——所有东西都糊在一个界面。应用呈现给你一个文件夹列表,点击一个文件夹会以内嵌的方式展开内容。未读邮件左侧有条绿色的线指示,这就是电子邮件应用的界面。这个应用支持IMAP和POP3,但是没有Exchange。
|
||||
|
||||
@ -58,7 +58,7 @@ Ron Amadeo供图
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/6/
|
||||
|
||||
译者:[alim0x](https://github.com/alim0x) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,90 +1,90 @@
|
||||
安卓编年史
|
||||
安卓编年史(7)
|
||||
================================================================================
|
||||
![电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。](http://cdn.arstechnica.net/wp-content/uploads/2014/01/email2lol.png)
|
||||
电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。
|
||||
Ron Amadeo供图
|
||||
|
||||
邮件视图是——令人惊讶的!——白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。
|
||||
*电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。 [Ron Amadeo供图]*
|
||||
|
||||
邮件视图是——令人惊讶的!居然是白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。
|
||||
|
||||
![即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/IM2.png)
|
||||
即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。
|
||||
Ron Amadeo供图
|
||||
|
||||
在Google Hangouts之前,甚至是Google Talk之前,就有“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗?
|
||||
*即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。[Ron Amadeo供图]*
|
||||
|
||||
朋友列表是聊天中带有白色聊天气泡的黑色背景界面。状态用一个带颜色的圆形来指示,右侧的小安卓机器人指示出某人正在使用移动设备。IM应用相比Google Hangouts远比它有沟通性,这真是十分神奇的。绿色代表着某人正在使用设备并且已经登录,黄色代表着他们登录了但处于空闲状态,红色代表他们手动设置状态为忙,不想被打扰,灰色表示离线。现在Hangouts只显示用户是否打开了应用。
|
||||
在Google Hangouts之前,甚至是Google Talk之前,就有了“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗?
|
||||
|
||||
朋友列表是黑色背景界面,如果在聊天中则带有白色聊天气泡。状态用一个带颜色的圆形来指示,右侧的小安卓机器人指示出某人正在使用移动设备。与Google Hangouts相比,IM应用的状态显示反而丰富得多,这真是十分神奇。绿色代表着某人正在使用设备并且已经登录,黄色代表着他们登录了但处于空闲状态,红色代表他们手动设置状态为忙,不想被打扰,灰色表示离线。现在Hangouts只显示用户是否打开了应用。
|
||||
|
||||
聊天对话界面明显基于信息应用,聊天的背景从白色和蓝色被换成了白色和绿色。但是没人更改信息输入框的颜色,所以加上橙色的高亮效果,界面共使用了白色,绿色,蓝色和橙色。
|
||||
|
||||
![安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt5000.png)
|
||||
安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。
|
||||
Ron Amadeo供图
|
||||
|
||||
YouTube仅仅以G1的320p屏幕和3G网络速度可能不会有今天这样的移动意识,但谷歌的视频服务在安卓1.0上就被置入发布了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别?
|
||||
*安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。[Ron Amadeo供图]*
|
||||
|
||||
一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。
|
||||
以G1的320p屏幕和3G网络的速度,YouTube在移动端可能还没有今天这样的影响力,但谷歌的视频服务在安卓1.0发布时就被内置其中了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择部分,下面有垂直滚动分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别?
|
||||
|
||||
菜单含有搜索,喜爱,分类,设置。设置(没有图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。
|
||||
这是一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。
|
||||
|
||||
菜单含有搜索,喜爱,分类,设置。设置(没有该图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。
|
||||
|
||||
最后一张截图展示了视频播放界面,只支持横屏模式。尽管自动隐藏的播放控制有个进度条,但它还是很奇怪地包含了后退和前进按钮。
|
||||
|
||||
![YouTube的视频菜单,描述页面,评论。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/yt3.png)
|
||||
YouTube的视频菜单,描述页面,评论。
|
||||
Ron Amadeo供图
|
||||
|
||||
每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为喜爱,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。
|
||||
*YouTube的视频菜单,描述页面,评论。[Ron Amadeo供图]*
|
||||
|
||||
每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为“喜爱”,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。
|
||||
|
||||
然而“共享”不会打开一个对话框,它只是向Gmail邮件中加入了视频的链接。想要把链接通过短信或即时消息发送给别人是不可能的。你可以阅读评论,但是没办法评价他们或发表自己的评论。你同样无法给视频评分或赞。
|
||||
|
||||
![相机应用的拍照界面,菜单,照片浏览模式。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/camera.png)
|
||||
相机应用的拍照界面,菜单,照片浏览模式。
|
||||
Ron Amadeo供图
|
||||
|
||||
在实体机上跑上真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。
|
||||
*相机应用的拍照界面,菜单,照片浏览模式。[Ron Amadeo供图]*
|
||||
|
||||
在实体机上跑真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。
|
||||
|
||||
菜单按钮显示两个选项:跳转到相册应用和带有两个选项的设置界面。第一个设置选项是是否给照片加上地理标记,第二个是在每次拍摄后显示提示菜单,你可以在上面右边看到截图。同样的,你目前还只能拍照——还不支持视频拍摄。
|
||||
|
||||
![日历的月视图,打开菜单的周视图,日视图,以及日程。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/calviews.png)
|
||||
日历的月视图,打开菜单的周视图,日视图,以及日程。
|
||||
Ron Amadeo供图
|
||||
|
||||
*日历的月视图,打开菜单的周视图,日视图,以及日程。[Ron Amadeo供图]*
|
||||
|
||||
就像这个时期的大多数应用一样,日历的主命令界面是菜单。菜单用来切换视图,添加新事件,导航至当天,选择要显示的日程,以及打开设置。菜单扮演着每个单独按钮的入口的作用。
|
||||
|
||||
月视图不能显示约会事件的文字。每个日期旁边有个侧边,约会会显示为侧边上的绿色部分,通过位置来表示约会是在一天中的什么时候。周视图同样不能显示预约文字——G1的320×480的显示屏像素还不够密——所以你会在日历中看到一个带有颜色指示条的白块。唯一一个显示文字的是日程和日视图。你可以用滑动来切换日期——左右滑动切换周和日,上下滑动切换月份和日程。
|
||||
|
||||
![设置主界面,无线设置,关于页面的底部。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/settings.png)
|
||||
设置主界面,无线设置,关于页面的底部。
|
||||
Ron Amadeo供图
|
||||
|
||||
安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。
|
||||
*设置主界面,无线设置,关于页面的底部。[Ron Amadeo供图]*
|
||||
|
||||
任何带有开/关状态的选项都使用了卡通风的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。
|
||||
安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边上的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。
|
||||
|
||||
任何带有开/关状态的选项都使用了卡通风格的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。
|
||||
|
||||
设置界面意味着我们终于可以打开安全设置并更改锁屏。安卓1.0只有两种风格,安卓0.9那样的灰色方形锁屏,以及需要你在9个点组成的网格中画出图案的图形解锁。像这样的滑动图案相比PIN码更加容易记忆和输入,尽管它没有增加多少安全性。
|
||||
|
||||
![语音拨号,图形锁屏,电池低电量警告,时间设置。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/grabbag.png)
|
||||
语音拨号,图形锁屏,电池低电量警告,时间设置。
|
||||
Ron Amadeo供图
|
||||
|
||||
语音功能和语音拨号一同来到了1.0。这个特性以各种功能实现在AOSP徘徊了一段时间,然而它是一个简单的拨打号码和联系人的语音命令应用。语音拨号是个和谷歌未来的语音产品完全无关的应用,但是,它的工作方式和非智能机上的语音拨号一样。
|
||||
*语音拨号,图形锁屏,电池低电量警告,时间设置。[Ron Amadeo供图]*
|
||||
|
||||
语音功能和语音拨号一同来到了1.0。这个特性以各种不同的完成度在AOSP中存在了一段时间,然而它只是一个通过语音命令拨打号码和联系人的简单应用。语音拨号是个和谷歌未来的语音产品完全无关的应用,它的工作方式和非智能机上的语音拨号一样。
|
||||
|
||||
最后一个值得注意的地方是,当电池电量低于百分之十五的时候会触发低电量弹窗。弹窗上是一幅有趣的插图,它把电源线错误的一端插向了手机。谷歌,那可不是(现在依然不是)手机应该有的充电方式。
|
||||
|
||||
安卓1.0是个伟大的开头,但是功能上仍然有许多缺失。实体键盘和大量硬件按钮被强制要求配备,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。
|
||||
安卓1.0是个伟大的开端,但是功能上仍然有许多缺失。强制配备了实体键盘和大量硬件按钮,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。
|
||||
|
||||
### 安卓1.1——第一个真正的增量更新 ###
|
||||
|
||||
![安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。](http://cdn.arstechnica.net/wp-content/uploads/2013/12/11.png)
|
||||
安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。
|
||||
Ron Amadeo供图
|
||||
|
||||
安卓1.0发布四个半月后,2009年2月,安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化,谷歌向1.1中添加新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。
|
||||
*安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。[Ron Amadeo供图]*
|
||||
|
||||
安卓市场添加了对付费应用的支持,但是就像beta客户端中一样,这个版本的安卓市场不再能够连接Google Play服务器。我们最多能够看到分类界面,你可以在免费应用,付费应用和全部应用中选择。
|
||||
Four and a half months after Android 1.0 launched, in February 2009, Android received its first public update in Android 1.1. Not much changed in the OS itself, and everything new that Google added in 1.1 has since been shut down. Google Voice Search was Android's first foray into cloud-powered voice search, and it had its own icon in the app drawer. While the app can no longer communicate with Google's servers, you can see how it used to work [on an iPhone][2]. It did not offer voice actions yet, but you could speak a query and the results would show up in a plain Google search.

The Android Market added support for paid apps, but just as with the beta client, this version of the Market can no longer connect to the Google Play servers. The most we could get to was the category screen, where you could choose among free apps, paid apps, and all apps.

Maps added [Google Latitude][3], a way to share your location with friends. Latitude was shut down a few months ago in favor of Google+ and no longer works. There is still an option for it in the Maps menu, but tapping it now just brings up a loading spinner that sits there forever.

System updates were coming to the Android world much more quickly now (or at least there was a way to grab an update before the carriers and OEMs pushed it out), so Google also added a "check for system updates" button to the "About phone" screen.

----------
@ -98,7 +98,7 @@ Ron Amadeo供图
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
Translator: [alim0x](https://github.com/alim0x), Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

@ -1,4 +1,3 @@
cygmris is translating...

Great Open Source Collaborative Editing Tools
================================================================================

In a nutshell, collaborative writing is writing done by more than one person. Collaborative working has both benefits and risks. The benefits include a more integrated, coordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is also one of the simplest: easily gathering colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays, and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data, and files, and use comments to share thoughts in real time or asynchronously. Working together on documents, images, video, presentations, and tasks becomes less of a chore.

@ -1,4 +1,3 @@
Translating by H-mudcup

5 best open source board games to play online
================================================================================

I have always had a fascination with board games, in part because they are a vehicle for social interaction, they challenge the mind, and, most importantly, they are great fun to play. In my misspent youth, a group of friends and I would gather to escape the horrors of the classroom and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy and how to make and break alliances, bring families and friends together, and impart valuable lessons.

@ -1,3 +1,4 @@
sevenot translating

Curious about Linux? Try Linux Desktop on the Cloud
================================================================================

Linux maintains a very small market share as a desktop operating system. Current surveys estimate its share to be a mere 2%; contrast that with the various strains (no pun intended) of Windows which total nearly 90% of the desktop market. For Linux to challenge Microsoft's monopoly on the desktop, there needs to be a simple way of learning about this different operating system. And it would be naive to believe a typical Windows user is going to buy a second machine, tinker with partitioning a hard disk to set up a multi-boot system, or just jump ship to Linux without an easy way back.
@ -41,4 +42,4 @@ via: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[1]:https://www.labxnow.org/labxweb/
@ -1,63 +0,0 @@
FSSlc translating

What is a good IDE for R on Linux
================================================================================

Some time ago, I covered some of the [best IDEs for C/C++][1] on Linux. Obviously C and C++ are not the only programming languages out there, and it is time to turn to something a bit more specific.
If you have ever done some statistics, it is possible that you have encountered the [language R][2]. If you have not, I really recommend this open source programming language, which is tailored for statistics and data mining. Coming from a coding background, you might be thrown off a bit by the syntax, but hopefully you will be seduced by the speed of its vector operations. In short, try it. And what better way to start than with an IDE? R being a cross-platform language, there are a bunch of good IDEs that make data analysis in R far more pleasurable. If you are very attached to a particular editor, there are also some very good plugins to turn that editor into a fully fledged R IDE.

Here is a list of five good IDEs for the R language in a Linux environment.

### 1. RStudio ###

![](https://c1.staticflickr.com/1/603/22093054381_431383ab60_c.jpg)

Let's start with perhaps the most popular R IDE out there: [RStudio][3]. In addition to common IDE features like syntax highlighting and code completion, RStudio stands out for its integration of R documentation, its powerful debugger, and its multiple-view system. If you are starting with R, I can only recommend RStudio, as the R console on the side is perfect for testing your code in real time, and the object explorer will help you understand what kind of data you are dealing with. Finally, what really won me over was the integration of the plot visualizer, making it easy to export your graphs as images. On the downside, RStudio lacks the shortcuts and advanced settings that would make it a perfect IDE. Still, with a free version under the AGPL license, Linux users have no excuse not to give this IDE a try.

### 2. Emacs with ESS ###

![](https://c2.staticflickr.com/6/5824/22056857776_a14a4e7e1b_c.jpg)

In my last post about IDEs, some people were disappointed by the absence of Emacs from my list. My main reason is that Emacs is the wild card of IDEs: you could place it on any list for any language. But things are different for [R with the ESS plugin][4]. Emacs Speaks Statistics (ESS) is an amazing plugin which completely changes the way you use the Emacs editor and really fits the needs of R coders. A bit like RStudio with its multiple views, Emacs with ESS presents two panels: one with the code and one with an R console, making it easy to test your code in real time and explore the objects. But ESS's real strength is its seamless integration with the other Emacs plugins you might have installed and its advanced configuration options. In short, if you like your Emacs shortcuts, you will like being able to use them in an environment that makes sense for R development. For full disclosure, however, I have heard of and experienced some efficiency issues when dealing with a lot of data in ESS. Nothing major enough to be a real problem, but just enough to have me prefer RStudio.

### 3. Vim with Vim-R-plugin ###

![](https://c1.staticflickr.com/1/680/22056923916_abe3531bb4_b.jpg)

Because I do not want to discriminate after talking about Emacs, I also tried the equivalent for Vim: the [Vim-R-plugin][5]. Using the terminal tool called tmux, this plugin makes it possible to have an R console open and code at the same time. But most importantly, it brings syntax highlighting and omni-completion for R objects to Vim. You can also easily access R documentation and browse objects. But once again, the strength comes from its extensive customization capacities and the speed of Vim. If you are tempted by this option, I direct you to the extremely thorough [documentation][6] on installing and setting up your environment.
### 4. Gedit with RGedit ###

![](https://c1.staticflickr.com/1/761/22056923956_1413f60b42_c.jpg)

If neither Emacs nor Vim is your cup of tea, and what you like is your default GNOME editor, then [RGedit][7] is made for you: a plugin for coding in R from Gedit. Gedit is known to be more powerful than it looks. With a very large library of plugins, it is possible to do a lot with it. And RGedit is precisely the plugin you need to code in R from Gedit. It comes with the classic syntax highlighting and integration of the R console at the bottom of the screen, but also a bunch of unique features like multiple profiles, code folding, a file explorer, and even a GUI wizard to generate code from snippets. Despite my indifference towards Gedit, I have to admit that these features go beyond basic plugin functionality and really make a difference when you spend a lot of time analyzing data. The only shadow is that the last update dates from 2013. I really hope that this project can pick up again.

### 5. RKWard ###

![](https://c2.staticflickr.com/6/5643/21896132829_2ea8f3a320_c.jpg)

Finally, last but not least, [RKWard][8] is an R IDE made for KDE environments. What I love most about it is its name. But honestly, its package management system and spreadsheet-like data editor come in a close second. In addition, it includes an easy system for plotting and importing data, and it can be extended by plugins. If you are not a fan of the KDE feel, you might be a bit uncomfortable, but if you are, I would really recommend checking it out.

To conclude, whether you are new to R or not, these IDEs might be useful to you. It does not matter if you prefer something that stands on its own or a plugin for your favorite editor; I am sure you will appreciate one of the features these programs provide. I am also sure I missed a lot of good IDEs for R that deserve to be on this list. Since you wrote a lot of very good comments on the post about IDEs for C/C++, I invite you to do the same here and share your knowledge.

What do you feel is a good IDE for R on Linux? Please let us know in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-ide-for-r-on-linux.html
Author: [Adrien Brochard][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://xmodulo.com/author/adrien
[1]:http://xmodulo.com/good-ide-for-c-cpp-linux.html
[2]:https://www.r-project.org/
[3]:https://www.rstudio.com/
[4]:http://ess.r-project.org/
[5]:http://www.vim.org/scripts/script.php?script_id=2628
[6]:http://www.lepem.ufc.br/jaa/r-plugin.html
[7]:http://rgedit.sourceforge.net/
[8]:https://rkward.kde.org/

@ -0,0 +1,336 @@

Bossie Awards 2015: The best open source application development tools
================================================================================

InfoWorld's top picks among platforms, frameworks, databases, and all the other tools that programmers use

![](http://images.techhive.com/images/article/2015/09/bossies-2015-app-dev-100613767-orig.jpg)

### The best open source development tools ###

There must be a better way, right? The developers are the ones who find it. This year's winning projects in the application development category include client-side frameworks, server-side frameworks, mobile frameworks, databases, languages, libraries, editors, and yeah, Docker. These are our top picks among all of the tools that make it faster and easier to build better applications.

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613773-orig.jpg)

### Docker ###

The darling of container fans almost everywhere, [Docker][2] provides a low-overhead way to isolate an application or service’s environment, which serves its stated goal of being an open platform for building, shipping, and running distributed applications. Docker has been widely supported, even among those seeking to replace the Docker container format with an alternative, more secure runtime and format, specifically Rkt and AppC. Heck, Microsoft Visual Studio now supports deploying into a Docker container too.
Docker’s biggest impact has been on virtual machine environments. Since Docker containers run inside the operating system, many more Docker containers than virtual machines can run in a given amount of RAM. This is important because RAM is usually the scarcest and most expensive resource in a virtualized environment.
There are hundreds of thousands of runnable public images on Docker Hub, of which a few hundred are official, and the rest are from the community. You describe Docker images with a Dockerfile and build images locally from the Docker command line. You can add both public and private image repositories to Docker Hub.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613778-orig.jpg)

### Node.js and io.js ###

[Node.js][2] -- and its recently reunited fork [io.js][3] -- is a platform built on [Google Chrome's V8 JavaScript runtime][4] for building fast, scalable, network applications. Node uses an event-driven, nonblocking I/O model without threads. In general, Node tends to take less memory and CPU resources than other runtime engines, such as Java and the .Net Framework. For example, a typical Node.js Web server can run well in a 512MB instance on Cloud Foundry or a 512MB Docker container.
The Node repository on GitHub has more than 35,000 stars and more than 8,000 forks. The project, sponsored primarily by Joyent, has more than 600 contributors. Some of the more famous Node applications are 37Signals, [Ancestry.com][5], Chomp, the Wall Street Journal online, FeedHenry, [GE.com][6], Mockingbird, [Pearson.com][7], Shutterstock, and Uber. The popular IoT back-end Node-RED is built on Node, as are many client apps, such as Brackets and Nuclide.

-- Martin Heller

![](rticle/2015/09/bossies-2015-angularjs-100613766-orig.jpg)

### AngularJS ###

[AngularJS][8] (or simply Angular, among friends) is a Model-View-Whatever (MVW) JavaScript AJAX framework that extends HTML with markup for dynamic views and data binding. Angular is especially good for developing single-page Web applications and linking HTML forms to models and JavaScript controllers.
The weird sounding Model-View-Whatever pattern is an attempt to include the Model-View-Controller, Model-View-ViewModel, and Model-View-Presenter patterns under one moniker. The differences among these three closely related patterns are the sorts of topics that programmers love to argue about fiercely; the Angular developers decided to opt out of the discussion.
Basically, Angular automatically synchronizes data from your UI (view) with your JavaScript objects (model) through two-way data binding. To help you structure your application better and make it easy to test, AngularJS teaches the browser how to do dependency injection and inversion of control.
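
As a concept sketch (not Angular's actual API), two-way binding amounts to propagating writes in both directions between a model property and a view value. Here `bind()` is a hypothetical helper and `fakeInput` stands in for a DOM `<input>`:

```javascript
// Concept sketch of two-way data binding, not Angular code.
function bind(model, key, view) {
  let value = model[key];
  Object.defineProperty(model, key, {
    get: () => value,
    set: (v) => { value = v; view.value = v; }, // model -> view
  });
  view.oninput = () => { value = view.value; }; // view -> model
  view.value = value;                           // initial render
}

const model = { name: 'Ada' };
const fakeInput = { value: '', oninput: null };
bind(model, 'name', fakeInput);

model.name = 'Grace';      // writing the model updates the "view"
fakeInput.value = 'Linus'; // simulate the user typing...
fakeInput.oninput();       // ...and the input event firing
```

Angular layers change detection and templates on top of this idea, but the two-directional sync is the core of what the framework automates for you.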
Angular was created by Google and open-sourced under the MIT license; there are currently more than 1,200 contributors to the project on GitHub, and the repository has more than 40,000 stars and 18,000 forks. The Angular site lists [210 “neat things” built with Angular][9].

-- Martin Heller

![](http://images.techhive.com/images/article/2015/09/bossies-2015-react-100613782-orig.jpg)

### React ###

[React][10] is a JavaScript library for building a UI or view, typically for single-page applications. Note that React does not implement anything having to do with a model or controller. React pages can render on the server or the client; rendering on the server (with Node.js) is typically much faster. People often combine React with AngularJS to create complete applications.
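
The core idea, the view as a pure function of state that can be rendered to markup on server or client, can be sketched without React itself (the `TodoList` component and its output format are illustrative only):

```javascript
// Concept sketch: a React-style component is a pure function from state
// to markup. Server-side rendering is then just producing a string.
function TodoList(state) {
  const items = state.todos.map((t) => `<li>${t}</li>`).join('');
  return `<ul>${items}</ul>`;
}

// When state changes, the view is simply recomputed; React would diff
// the two results and patch only what changed in the DOM.
const before = TodoList({ todos: ['write talk'] });
const after = TodoList({ todos: ['write talk', 'give talk'] });
```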
React combines JavaScript and HTML in a single file, optionally a JSX component. React fans like the way JSX components combine views and their related functionality in one file, though that flies in the face of the last decade of Web development trends, which were all about separating the markup and the code. React fans also claim that you can’t understand it until you’ve tried it. Perhaps you should; the React repository on GitHub has 26,000 stars.
[React Native][11] implements React with native iOS controls; the React Native command line uses Node and Xcode. [ReactJS.Net][12] integrates React with [ASP.Net][13] and C#. React is available under a BSD license with a patent license grant from Facebook.

-- Martin Heller

![](http://images.techhive.com/images/article/2015/09/bossies-2015-atom-100613768-orig.jpg)

### Atom ###

[Atom][14] is an open source, hackable desktop editor from GitHub, based on Web technologies. It's a full-featured tool with a fuzzy finder; fast projectwide search and replace; multiple cursors and selections; multiple panes; snippets; code folding; and the ability to import TextMate grammars and themes. Out of the box, Atom displayed proper syntax highlighting for every programming language on which I tried it, except for F# and C#; I fixed that easily by loading those packages from within Atom. Not surprisingly, Atom has tight integration with GitHub.
The skeleton of Atom has been separated from the guts and called the Electron shell, providing an open source way to build cross-platform desktop apps with Web technologies. Visual Studio Code is built on the Electron shell, as are a number of proprietary and open source apps, including Slack and Kitematic. Facebook Nuclide adds significant functionality to Atom, including remote development and support for Flow, Hack, and Mercurial.
On the downside, updating Atom packages can become painful, especially if you have many of them installed. The Nuclide packages seem to be the worst offenders -- they not only take a long time to update, they run CPU-intensive Node processes to do so.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-brackets-100613769-orig.jpg)

### Brackets ###

[Brackets][15] is a lightweight editor for Web design that Adobe developed and open-sourced, drawing heavily on other open source projects. The idea is to build better tooling for JavaScript, HTML, CSS, and related open Web technologies. Brackets itself is written in JavaScript, HTML, and CSS, and the developers use Brackets to build Brackets. The editor portion is based on another open source project, CodeMirror, and the Brackets native shell is based on Google’s Chromium Embedded Framework.
Brackets features a clean UI, with the ability to open a quick inline editor that displays all of the related CSS for some HTML, or all of the related JavaScript for some scripting, and a live preview for Web pages that you are editing. New in Brackets 1.4 is instant search in files, easier preferences editing, the ability to enable and disable extensions individually, improved text rendering on Macs, and Greek and Cyrillic character support. Last November, Adobe started shipping a preview version of Extract for Brackets, which can pull out design information from Photoshop files, as part of the default download for Brackets.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-typescript-100613786-orig.jpg)

### TypeScript ###

[TypeScript][16] is a portable, duck-typed superset of JavaScript that compiles to plain JavaScript. The goal of the project is to make JavaScript usable for large applications. In pursuit of that goal, TypeScript adds optional types, classes, and modules to JavaScript, and it supports tools for large-scale JavaScript applications. Typing gets rid of some of the nonsensical and potentially buggy default behavior in JavaScript, for example:

    > 1 + "1"
    '11'

“Duck” typing means that the type checking focuses on the shape of the data values; TypeScript describes basic types, interfaces, and classes. While the current version of JavaScript does not support traditional, class-based, object-oriented programming, the ECMAScript 6 specification does. TypeScript compiles ES6 classes into plain, compatible JavaScript, with prototype-based objects, unless you enable ES6 output using the `--target` compiler option.
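
A few more of JavaScript's silent coercions illustrate what optional static types rule out; each line below is legal JavaScript, and each applies a different implicit conversion rule:

```javascript
// Implicit coercions that plain JavaScript accepts without complaint --
// exactly the category of bug a type checker flags at compile time.
const concat = 1 + '1';   // + with a string operand concatenates: '11'
const minus = '1' - 1;    // - always converts to numbers: 0
const compare = '10' > 9; // comparison converts too: numerically true
```

With TypeScript's annotations, each of these would be rejected (or forced into an explicit conversion) before the code ever runs.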
Visual Studio includes TypeScript in the box, starting with Visual Studio 2013 Update 2. You can also edit TypeScript in Visual Studio Code, WebStorm, Atom, Sublime Text, and Eclipse.
When using an external JavaScript library, or new host API, you'll need to use a declaration file (.d.ts) to describe the shape of the library. You can often find declaration files in the [DefinitelyTyped][17] repository, either by browsing, using the [TSD definition manager][18], or using NuGet.
TypeScript’s GitHub repository has more than 6,000 stars.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-swagger-100613785-orig.jpg)

### Swagger ###

[Swagger][19] is a language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. It’s one of several recent attempts to codify the description of RESTful APIs, in the spirit of WSDL for XML Web Services (2000) and CORBA for distributed object interfaces (1991).
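
A Swagger description is just structured data. The fragment below sketches a minimal Swagger 2.0 document as a JavaScript object for illustration; specs are normally authored in YAML or JSON, and the pet-store path is a made-up example:

```javascript
// Minimal Swagger 2.0 shape: version, metadata, and one operation.
// A document like this is what Swagger-UI renders into docs and a live
// sandbox, and what swagger-codegen turns into a client library.
const spec = {
  swagger: '2.0',
  info: { title: 'Pet Store', version: '1.0.0' },
  paths: {
    '/pets/{id}': {
      get: {
        parameters: [
          { name: 'id', in: 'path', required: true, type: 'string' },
        ],
        responses: { 200: { description: 'A single pet' } },
      },
    },
  },
};
```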
The tooling makes Swagger especially interesting. [Swagger-UI][20] automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The [Swagger codegen][21] project allows generation of client libraries automatically from a Swagger-compliant server.
[Swagger Editor][22] lets you edit Swagger API specifications in YAML inside your browser and preview documentations in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling.
The [Swagger JS][23] library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala.
The [Amazon API Gateway][24] is a managed service for API management at scale. It can import Swagger specifications using an open source [Swagger Importer][25] tool.
Swagger and friends use the Apache 2.0 license.

-- Martin Heller

![](http://images.techhive.com/images/article/2015/09/bossies-2015-polymer-100613781-orig.jpg)

### Polymer ###

The [Polymer][26] library is a lightweight, “sugaring” layer on top of the Web components APIs to help in building your own Web components. It adds several features for greater ease in building complex elements, such as creating custom element registration, adding markup to your element, configuring properties on your element, setting the properties with attributes, data binding with mustache syntax, and internal styling of elements.
Polymer also includes libraries of prebuilt elements. The Iron library includes elements for working with layout, user input, selection, and scaffolding apps. The Paper elements implement Google's Material Design. The Gold library includes elements for credit card input fields for e-commerce, the Neon elements implement animations, the Platinum library implements push messages and offline caching, and the Google Web Components library is exactly what it says; it includes wrappers for YouTube, Firebase, Google Docs, Hangouts, Google Maps, and Google Charts.
Polymer Molecules are elements that wrap other JavaScript libraries. The only Molecule currently implemented is for marked, a Markdown library. The Polymer repository on GitHub currently has 12,000 stars. The software is distributed under a BSD-style license.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-ionic-100613775-orig.jpg)

### Ionic ###

The [Ionic][27] framework is a front-end SDK for building hybrid mobile apps, using Angular.js and Cordova, PhoneGap, or Trigger.io. Ionic was designed to be similar in spirit to the Android and iOS SDKs, and to do a minimum of DOM manipulation and use hardware-accelerated transitions to keep the rendering speed high. Ionic is focused mainly on the look and feel and UI interaction of your app.
In addition to the framework, Ionic encompasses an ecosystem of mobile development tools and resources. These include Chrome-based tools, Angular extensions for Cordova capabilities, back-end services, a development server, and a shell View App to enable testers to use your Ionic code on their devices without the need for you to distribute beta apps through the App Store or Google Play.
Appery.io integrated Ionic into its low-code builder in July 2015. Ionic’s GitHub repository has more than 18,000 stars and more than 3,000 forks. Ionic is distributed under an MIT license and currently runs in UIWebView for iOS 7 and later, and in Android 4.1 and up.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-cordova-100613771-orig.jpg)

### Cordova ###

[Apache Cordova][28] is the open source project spun off when Adobe acquired PhoneGap from Nitobi. Cordova is a set of device APIs, plus some tooling, that allows a mobile app developer to access native device functionality like the camera and accelerometer from JavaScript. When combined with a UI framework like Angular, it allows a smartphone app to be developed with only HTML, CSS, and JavaScript. By using Cordova plug-ins for multiple devices, you can generate hybrid apps that share a large portion of their code but also have access to a wide range of platform capabilities. The HTML5 markup and code runs in a WebView hosted by the Cordova shell.
Cordova is one of the cross-platform mobile app options supported by Visual Studio 2015. Several companies offer online builders for Cordova apps, similar to the Adobe PhoneGap Build service. Online builders save you from having to install and maintain most of the device SDKs on which Cordova relies.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-famous-100613774-orig.jpg)

### Famous Engine ###

The high-performance Famo.us JavaScript framework introduced last year has become the [Famous Engine][29] and [Famous Framework][30]. The Famous Engine runs in a mixed mode, with the DOM and WebGL under a single coordinate system. As before, Famous structures applications in a scene graph hierarchy, but now it produces very little garbage (reducing the garbage collector overhead) and sustains 60FPS animations.
The Famous Physics engine has been refactored to its own, fine-grained module so that you can load only the features you need. Other improvements since last year include streamlined eventing, improved sizing, decoupling the scene graph from the rendering pipeline by using a draw command buffer, and switching to a fully open MIT license.
The new Famous Framework is an alpha-stage developer preview built on the Famous Engine; its goal is creating reusable, composable, and interchangeable UI widgets and applications. Eventually, Famous hopes to replace the jQuery UI widgets with Famous Framework widgets, but while it's promising, the Famous Framework is nowhere near production-ready.

-- Martin Heller

![](http://images.techhive.com/images/article/2015/09/bossies-2015-mongodb-rev-100614248-orig.jpg)

### MongoDB ###

[MongoDB][31] is no stranger to the Bossies or to the ever-growing and ever-competitive NoSQL market. If you still aren't familiar with this very popular technology, here's a brief overview: MongoDB is a cross-platform document-oriented database, favoring JSON-like documents with dynamic schemas that make data integration easier and faster.
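
What "JSON-like documents with dynamic schemas" buys you can be shown with plain objects; the filter below mirrors how a MongoDB query document behaves, though no driver or server is involved and the sample data is invented:

```javascript
// Two documents in the same collection need not share fields: the
// second adds a field without any schema migration.
const people = [
  { name: 'Ada', languages: ['Analytical Engine notes'] },
  { name: 'Grace', rank: 'Rear Admiral', languages: ['COBOL'] },
];

// An ad hoc query like { languages: 'COBOL' }: matching an array field
// against a scalar succeeds if the array contains the value, which is
// how MongoDB's matcher treats arrays.
const matches = people.filter((doc) => (doc.languages || []).includes('COBOL'));
```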
MongoDB has attractive features, including but not limited to ad hoc queries, flexible indexing, replication, high availability, automatic sharding, load balancing, and aggregation.
The big, bold move with [version 3.0 this year][32] was the new WiredTiger storage engine. We can now have document-level locking. This makes “normal” applications a whole lot more scalable and makes MongoDB available to more use cases.
MongoDB has a growing open source ecosystem with such offerings as the [TokuMX engine][33], from the famous MySQL bad boys Percona. The long list of MongoDB customers includes heavy hitters such as Craigslist, eBay, Facebook, Foursquare, Viacom, and the New York Times.

-- Andrew Oliver

![](http://images.techhive.com/images/article/2015/09/bossies-2015-couchbase-100614851-orig.jpg)

### Couchbase ###

[Couchbase][34] is another distributed, document-oriented database that has been making waves in the NoSQL world for quite some time now. Couchbase and MongoDB often compete, but they each have their sweet spots. Couchbase tends to outperform MongoDB when doing more in memory is possible.
Additionally, Couchbase’s mobile features allow you to disconnect and ship a database in compact format. This allows you to scale down as well as up. This is useful not just for mobile devices but also for specialized applications, like shipping medical records across radio waves in Africa.
This year Couchbase added N1QL, a SQL-based query language that did away with Couchbase's biggest obstacle: the requirement for static views. The new release also introduced multidimensional scaling, which allows individual scaling of services such as querying, indexing, and data storage to improve performance, instead of adding an entire, duplicate node.
-- Andrew C. Oliver

![](http://images.techhive.com/images/article/2015/09/bossies-2015-cassandra-100614852-orig.jpg)

### Cassandra ###

[Cassandra][35] is the other white meat of column family databases. HBase might be included with your favorite Hadoop distribution, but Cassandra is the one people deliberately deploy for specialized applications. There are good reasons for this.
Cassandra was designed for high workloads of both writes and reads where millisecond consistency isn't as important as throughput. HBase is optimized for reads and greater write consistency. To a large degree, Cassandra tends to be used for operational systems and HBase more for data warehouse and batch-system-type use cases.
While Cassandra has not received as much attention as other NoSQL databases and slipped into a quiet period a couple years back, it is widely used and deployed, and it's a great fit for time series, product catalog, recommendations, and other applications. If you want to keep a cluster up “no matter what” with multiple masters and multiple data centers, and you need to scale with lots of reads and lots of writes, Cassandra might just be your Huckleberry.

-- Andrew C. Oliver

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orientdb-100613780-orig.jpg)

### OrientDB ###

[OrientDB][36] is an interesting hybrid in the NoSQL world, combining features from a document database, where individual documents can have multiple fields without necessarily defining a schema, and a graph database, which consists of a set of nodes and edges. At a basic level, OrientDB considers the document as a vertex, and relationships between fields as graph edges. Because the relationships between elements are part of the record, no costly joins are required when querying data.
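
The document-as-vertex idea can be made concrete with a toy sketch. This is plain Python, not OrientDB's actual API or query language: it only illustrates how a link-typed field acts as a graph edge, so following a relationship is direct traversal rather than a join on foreign keys.

```python
# Toy model (not OrientDB's API): schema-less documents are vertices, and a
# field holding references to other documents is an edge. Traversal follows
# the reference directly -- no join over key columns is performed.

class Doc:
    def __init__(self, **fields):
        self.fields = fields            # any fields, no schema required

    def out(self, edge):
        """Follow a link field (graph edge) to the referenced documents."""
        targets = self.fields.get(edge, [])
        return targets if isinstance(targets, list) else [targets]

alice = Doc(name="Alice")
post = Doc(title="Hello", author=alice)   # 'author' field doubles as an edge
alice.fields["wrote"] = [post]

# Edge traversal in both directions, with no join and no index lookup:
print(post.out("author")[0].fields["name"])    # Alice
print(alice.out("wrote")[0].fields["title"])   # Hello
```

Because the relationship is stored in the record itself, the cost of the lookup does not grow with the size of any other collection, which is the point the paragraph above makes about join-free querying.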

Like most databases today, OrientDB offers linear scalability via a distributed architecture. Adding capacity is a matter of simply adding more nodes to the cluster. Queries are written in a variant of SQL that is extended to support graph concepts. It's not exactly SQL, but data analysts shouldn't have too much trouble adapting. Language bindings are available for most commonly used languages, such as R, Scala, .Net, and C, and those integrating OrientDB into their applications will find an active user community to get help from.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-rethinkdb-100613783-orig.jpg)

### RethinkDB ###

[RethinkDB][37] is a scalable, real-time JSON database with the ability to continuously push updated query results to applications that subscribe to changes. There are official RethinkDB drivers for Ruby, Python, and JavaScript/Node.js, and community-supported drivers for more than a dozen other languages, including C#, Go, and PHP.

It’s tempting to confuse RethinkDB with real-time sync APIs, such as Firebase and PubNub. RethinkDB can be run as a cloud service like Firebase and PubNub, but you can also install it on your own hardware or Docker containers. RethinkDB does more than synchronize: You can run arbitrary RethinkDB queries, including table joins, subqueries, geospatial queries, and aggregation. Finally, RethinkDB is designed to be accessed from an application server, not a browser.

Where MongoDB requires you to poll the database to see changes, RethinkDB lets you subscribe to a stream of changes to a query result. You can shard and scale RethinkDB easily, unlike MongoDB. Also unlike relational databases, RethinkDB does not give you full ACID support or strong schema enforcement, although it can perform joins.
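
The poll-versus-subscribe contrast can be sketched in a few lines. This is a conceptual illustration in plain Python, not the RethinkDB driver (which exposes changefeeds via a `changes()` command on queries): the table pushes each change to subscribers as it happens, instead of clients re-running a query to detect changes.

```python
# Conceptual changefeed sketch (not RethinkDB's driver API): subscribers
# register callbacks, and every write pushes an old_val/new_val change
# record to them -- no polling loop on the client side.

class Table:
    def __init__(self):
        self.rows = {}
        self.subscribers = []

    def changes(self, callback):
        """Subscribe a callback to the stream of changes on this table."""
        self.subscribers.append(callback)

    def insert(self, key, value):
        old = self.rows.get(key)
        self.rows[key] = value
        change = {"old_val": old, "new_val": value}
        for cb in self.subscribers:     # push, don't wait to be polled
            cb(change)

seen = []
scores = Table()
scores.changes(seen.append)
scores.insert("alice", 7)
scores.insert("alice", 9)
print(seen)
# [{'old_val': None, 'new_val': 7}, {'old_val': 7, 'new_val': 9}]
```

In the real database the same shape applies to the result set of an arbitrary query, not just raw table writes, which is what distinguishes changefeeds from simple triggers.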

The RethinkDB repository has 10,000 stars on GitHub, a remarkably high number for a database. It is licensed with the Affero GPL 3.0; the drivers are licensed with Apache 2.0.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rust-100613784-orig.jpg)

### Rust ###

[Rust][38] is a syntactically C-like systems programming language from Mozilla Research that guarantees memory safety and offers painless concurrency (that is, no data races). It does not have a garbage collector and has minimal runtime overhead. Rust is strongly typed with type inference. This is all promising.

Rust was designed for performance. It doesn’t yet demonstrate great performance, however, so now the mantra seems to be that it runs as fast as C++ code that implements all the safety checks built into Rust. I’m not sure whether I believe that, as in many cases the strictest safety checks for C/C++ code are done by static and dynamic analysis and testing, which don’t add any runtime overhead. Perhaps Rust performance will come with time.

So far, the only tools for Rust are the Cargo package manager and the rustdoc documentation generator, plus a couple of simple Rust plug-ins for programming editors. As far as we have heard, there is no shipping software that was actually built with Rust. Now that Rust has reached the 1.0 milestone, we might expect that to change.

Rust is distributed with a dual Apache 2.0 and MIT license. With 13,000 stars on its GitHub repository, Rust is certainly attracting attention, but when and how it will deliver real benefits remains to be seen.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opencv-100613779-orig.jpg)

### OpenCV ###

[OpenCV][39] (Open Source Computer Vision Library) is a computer vision and machine learning library that contains about 500 algorithms, such as face detection, moving object tracking, image stitching, red-eye removal, machine learning, and eye movement tracking. It runs on Windows, Mac OS X, Linux, Android, and iOS.

OpenCV has official C++, C, Python, Java, and MATLAB interfaces, and wrappers in other languages such as C#, Perl, and Ruby. CUDA and OpenCL interfaces are under active development. OpenCV was originally (1999) an Intel Research project in Russia; from there it moved to the robotics research lab Willow Garage (2008) and finally to [OpenCV.org][39] (2012) with a core team at Itseez, current source on GitHub, and stable snapshots on SourceForge.

Users of OpenCV include Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota. There are currently more than 6,000 stars and 5,000 forks on the GitHub repository. The project uses a BSD license.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-llvm-100613777-orig.jpg)

### LLVM ###

The [LLVM Project][40] is a collection of modular and reusable compiler and tool chain technologies, which originated at the University of Illinois. LLVM has grown to include a number of subprojects, several of which are interesting in their own right. LLVM is distributed with Debian, Ubuntu, and Apple Xcode, among others, and it’s used in commercial products from the likes of Adobe (including After Effects), Apple (including Objective-C and Swift), Cray, Intel, NVIDIA, and Siemens. A few of the open source projects that depend on LLVM are PyPy, Mono, Rubinius, Pure, Emscripten, Rust, and Julia. Microsoft has recently contributed LLILC, a new LLVM-based compiler for .Net, to the .Net Foundation.

The main LLVM subprojects are the core libraries, which provide optimization and code generation; Clang, a C/C++/Objective-C compiler that’s about three times faster than GCC; LLDB, a much faster debugger than GDB; libc++, an implementation of the C++ 11 Standard Library; and OpenMP, for parallel programming.

-- Martin Heller

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613823-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][41]

[Bossie Awards 2015: The best open source application development tools][42]

[Bossie Awards 2015: The best open source big data tools][43]

[Bossie Awards 2015: The best open source data center and cloud software][44]

[Bossie Awards 2015: The best open source desktop and mobile software][45]

[Bossie Awards 2015: The best open source networking and security software][46]

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2982920/open-source-tools/bossie-awards-2015-the-best-open-source-application-development-tools.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/
[2]:https://nodejs.org/en/
[3]:https://iojs.org/en/
[4]:https://developers.google.com/v8/?hl=en
[5]:http://www.ancestry.com/
[6]:http://www.ge.com/
[7]:https://www.pearson.com/
[8]:https://angularjs.org/
[9]:https://builtwith.angularjs.org/
[10]:https://facebook.github.io/react/
[11]:https://facebook.github.io/react-native/
[12]:http://reactjs.net/
[13]:http://asp.net/
[14]:https://atom.io/
[15]:http://brackets.io/
[16]:http://www.typescriptlang.org/
[17]:http://definitelytyped.org/
[18]:http://definitelytyped.org/tsd/
[19]:http://swagger.io/
[20]:https://github.com/swagger-api/swagger-ui
[21]:https://github.com/swagger-api/swagger-codegen
[22]:https://github.com/swagger-api/swagger-editor
[23]:https://github.com/swagger-api/swagger-js
[24]:http://aws.amazon.com/cn/api-gateway/
[25]:https://github.com/awslabs/aws-apigateway-importer
[26]:https://www.polymer-project.org/
[27]:http://ionicframework.com/
[28]:https://cordova.apache.org/
[29]:http://famous.org/
[30]:http://famous.org/framework/
[31]:https://www.mongodb.org/
[32]:http://www.infoworld.com/article/2878738/nosql/first-look-mongodb-30-for-mature-audiences.html
[33]:http://www.infoworld.com/article/2929772/nosql/mongodb-crossroads-growth-or-openness.html
[34]:http://www.couchbase.com/nosql-databases/couchbase-server
[35]:https://cassandra.apache.org/
[36]:http://orientdb.com/
[37]:http://rethinkdb.com/
[38]:https://www.rust-lang.org/
[39]:http://opencv.org/
[40]:http://llvm.org/
[41]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[42]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[43]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[44]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[45]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[46]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

Bossie Awards 2015: The best open source applications
================================================================================

InfoWorld's top picks in open source business applications, enterprise integration, and middleware

![](http://images.techhive.com/images/article/2015/09/bossies-2015-applications-100614669-orig.jpg)

### The best open source applications ###

Applications -- ERP, CRM, HRM, CMS, BPM -- are not only fertile ground for three-letter acronyms, they're the engines behind every modern business. Our top picks in the category include back- and front-office solutions, marketing automation, lightweight middleware, heavyweight middleware, and other tools for moving data around, mixing it together, and magically transforming it into smarter business decisions.

![](http://images.techhive.com/images/article/2015/09/bossies-2015-xtuple-100614684-orig.jpg)

### xTuple ###

Small and midsize companies with light manufacturing or distribution needs have a friend in [xTuple][1]. This modular ERP/CRM combo bundles operations and financial control, product and inventory management, and CRM and sales support. Its relatively simple install lets you deploy all of the modules or only what you need today -- helping trim support costs without sacrificing customization later.

This summer’s release brought usability improvements to the UI and a generous number of bug fixes. Recent updates also yielded barcode scanning and label printing for mobile warehouse workers, an enhanced workflow module (built with Plv8, a wrapper around Google’s V8 JavaScript engine that lets you write stored procedures for PostgreSQL in JavaScript), and quality management tools that are sure to get mileage on shop floors.

The xTuple codebase is JavaScript from stem to stern. The server components can all be installed locally, in xTuple’s cloud, or deployed as an appliance. A mobile Web client, and mobile CRM features, augment a good native desktop client.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-odoo-100614678-orig.jpg)

### Odoo ###

[Odoo][2] used to be known as OpenERP. Last year the company raised private capital and broadened its scope. Today Odoo is a one-stop shop for back office and customer-facing applications -- replete with content management, business intelligence, and e-commerce modules.

Odoo 8 fronts accounting, invoicing, project management, resource planning, and customer relationship management tools with a flexible Web interface that can be tailored to your company’s workflow. Add-on modules for warehouse management and HR, as well as for live chat and analytics, round out the solution.

This year saw Odoo focused primarily on usability updates. A recently released sales planner helps sales groups track KPIs, and a new tips feature lends in-context help. Odoo 9 is right around the corner with alpha builds showing customer portals, Web form creation tools, mobile and VoIP services, and integration hooks to eBay and Amazon.

Available for Windows and Linux, and as a SaaS offering, Odoo gives small and midsized companies an accessible set of tools to manage virtually every aspect of their business.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-idempiere-100614673-orig.jpg)

### iDempiere ###

Small and midsize companies have great choices in Odoo and xTuple. Larger manufacturing and distribution companies will need something more. For them, there’s [iDempiere][3] -- a well maintained offshoot of ADempiere with OSGi modularity.

iDempiere implements a fully loaded ERP, supply chain, and CRM suite right out of the box. Built with Java, iDempiere supports both PostgreSQL and Oracle Database, and it can be customized extensively through modules built to the OSGi specification. iDempiere is perfectly suited to managing complex business scenarios involving multiple partners, requiring dynamic reporting, or employing point-of-sale and warehouse services.

Being enterprise-ready comes with a price. iDempiere’s feature-rich tools and complexity impose a steep learning curve and require a commitment to integration support. Of course, those costs are offset by savings from the software’s free GPL2 licensing. iDempiere’s easy install script, small resource footprint, and clean interface also help alleviate some of the startup pains. There’s even a virtual appliance available on Sourceforge to get you started.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-suitecrm-100614680-orig.jpg)

### SuiteCRM ###

SugarCRM held the sweet spot in open source CRM since, well, forever. Then last year Sugar announced it would no longer contribute to the open source Community Edition. Into the ensuing vacuum rushed [SuiteCRM][4] – a fork of the final Sugar code.

SuiteCRM 7.2 creates an experience on a par with SugarCRM Professional’s marketing, sales, and service tools. With add-on modules for workflow, reporting, and security, as well as new innovations like Lucene-driven search, taps for social media, and a beta reveal of new desktop notifications, SuiteCRM is on solid footing.

The Advanced Open Sales module provides a familiar migration path from Sugar, while commercial support is available from the likes of [SalesAgility][5], the company that forked SuiteCRM in the first place. In little more than a year, SuiteCRM rescued the code, rallied an inspired community, and emerged as a new leader in open source CRM. Who needs Sugar?

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-civicrm-100614671-orig.jpg)

### CiviCRM ###

We typically focus attention on CRM vis-à-vis small and midsize business requirements. But nonprofit and advocacy groups need to engage with their “customers” too. Enter [CiviCRM][6].

CiviCRM addresses the needs of nonprofits with tools for fundraising and donation processing, membership management, email tracking, and event planning. Granular access control and security bring role-based permissions to views, keeping paid staff and volunteers partitioned and productive. This year CiviCRM continued to develop with new features like simple A/B testing and monitoring for email campaigns.

CiviCRM deploys as a plug-in to your WordPress, Drupal, or Joomla content management system -- a dead-simple install if you already have one of these systems in place. If you don’t, CiviCRM is an excellent reason to deploy the CMS. It’s a niche-filling solution that allows nonprofits to start using smarter, tailored tools for managing constituencies, without steep hurdles and training costs.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mautic-100614677-orig.jpg)

### Mautic ###

For marketers, the Internet -- Web, email, social, all of it -- is the stuff dreams are made on. [Mautic][7] allows you to create Web and email campaigns that track and nurture customer engagement, then roll all of the data into detailed reports to gain insight into customer needs and wants and how to meet them.

Open source options in marketing automation are few, but Mautic’s extensibility stands out even against closed solutions like IBM’s Silverpop. Mautic even integrates with popular third-party email marketing solutions (MailChimp, Constant Contact) and social media platforms (Facebook, Twitter, Google+, Instagram) with quick-connect widgets.

The developers of Mautic could stand to broaden the features for list segmentation and improve the navigability of their UI. Usability is also hindered by sparse documentation. But if you’re willing to rough it out long enough to learn your way, you’ll find a gem -- and possibly even gold -- in Mautic.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-orangehrm-100614679-orig.jpg)

### OrangeHRM ###

The commercial software market in the human resource management space is rather fragmented, with Talent, HR, and Workforce Management startups all vying for a slice of the pie. It’s little wonder the open source world hasn’t found much direction either, with the most ambitious HRM solutions often locked inside larger ERP distributions. [OrangeHRM][8] is a standout.

OrangeHRM tackles employee administration from recruitment and applicant tracking to performance reviews, with good audit trails throughout. An employee portal provides self-serve access to personal employment information, time cards, leave requests, and personnel documents, helping reduce demands on HR staff.

OrangeHRM doesn’t yet address niche aspects like talent management (social media, collaboration, knowledge banks), but it’s remarkably full-featured. Professional and Enterprise options offer more advanced functionality (in areas such as recruitment, training, on/off-boarding, document management, and mobile device access), while community modules are available for the likes of Active Directory/LDAP integration, advanced reporting, and even insurance benefit management.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-libreoffice-100614675-orig.jpg)

### LibreOffice ###

[LibreOffice][9] is the easy choice for best open source office productivity suite. Originally forked from OpenOffice, Libre has been moving at a faster clip than OpenOffice ever since, drawing more developers and producing more new features than its rival.

LibreOffice 5.0, released only last month, offers UX improvements that truly enhance usability (like visual previews of style changes in the sidebar), brings document editing to Android devices (previously a view-only prospect), and finally delivers on a 64-bit Windows codebase.

LibreOffice still lacks a built-in email client and a personal information manager, not to mention the real-time collaborative document editing available in Microsoft Office. But Libre can run off of a USB flash disk for portability, natively supports a greater number of graphic and file formats, and creates hybrid PDFs with embedded ODF files for full-on editing. Libre even imports Apple Pages documents, in addition to opening and saving all Microsoft Office formats.

LibreOffice has done a solid job of tightening its codebase and delivering enhancements at a regular clip. With a new cloud version under development, LibreOffice will soon be more liberating than ever.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-bonita-100614672-orig.jpg)

### Bonita BPM ###

Open source BPM has become a mature, cost-effective alternative to the top proprietary solutions. Having led the charge since 2009, Bonitasoft continues to raise the bar. The new [Bonita BPM 7][10] release impresses with innovative features that simplify code generation and shorten development cycles for BPM app creation.

Most important to the new version, though, is better abstraction of underlying core business logic from UI and data components, allowing UIs and processes to be developed independently. This new MVC approach reduces downtime for live upgrades (no more recompilation!) and eases application maintenance.

Bonita contains a winning set of connectors to a broad range of enterprise systems (ERP, CRM, databases) as well as to Web services. Complementing its process weaving tools, a new form designer (built on AngularJS/Bootstrap) goes a long way toward improving UI creation for the Web-centric and mobile workforce.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-camunda-100614670-orig.jpg)

### Camunda BPM ###

Many open source solutions, like Bonita BPM, offer solid, drop-in functionality. Dig into the code base, though, and you may find it’s not the cleanest to build upon. Enterprise Java developers who hang out under the hood should check out [Camunda BPM][11].

Forked from Alfresco Activiti (a creation of former Red Hat jBPM developers), Camunda BPM delivers a tight, Java-based BPMN 2.0 engine in support of human workflow activities, case management, and systems process automation that can be embedded in your Java apps or run as a container service in Tomcat. Camunda’s ecosystem offers an Eclipse plug-in for process modeling and the Cockpit dashboard brings real-time monitoring and management over running processes.

The Enterprise version adds WebSphere and WebLogic Server support. Additional incentives for the Enterprise upgrade include Saxon-driven XSLT templating (sidestepping the scripting engine) and add-ons to improve process management and exception handling.

Camunda is a solid BPM engine ready for build-out and one of the first open source process managers to introduce DMN (Decision Model and Notation) support, which helps to simplify complex rules-based modeling alongside BPMN. DMN support is currently at the alpha stage.

-- James R. Borck

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-talend-100614681-orig.jpg)

### Talend Open Studio ###

No open source ETL or EAI solution comes close to [Talend Open Studio][12] in functionality, performance, or support of modern integration trends. This year Talend unleashed Open Studio 6, a new version with a streamlined UI and smarter tooling that brings it more in line with Talend’s cloud-based offering.

Using Open Studio you can visually design, test, and debug orchestrations that connect, transform, and synchronize data across a broad range of real-time applications and data resources. Talend’s wealth of connectors provides support for most any endpoint -- from flat files to Hadoop to Amazon S3. Packaged editions focus on specific scenarios such as big data integration, ESB, and data integrity monitoring.

New support for Java 8 brings a speed boost. The addition of support for MariaDB and for in-memory processing with MemSQL, as well as updates to the ESB engine, keep Talend in step with the community’s needs. Version 6 was a long time coming, but no less welcome for that. Talend Open Studio is still first in managing complex data integration -- in-house, in the cloud, or increasingly, a combination of the two.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-warewolf-100614683-orig.jpg)

### Warewolf ESB ###

Complex integration patterns may demand the strengths of a Talend to get the job done. But for many lightweight microservices, the overhead of a full-fledged enterprise integration solution is extreme overkill.

[Warewolf ESB][13] combines a streamlined .Net-based process engine with visual development tools to provide for dead simple messaging and application payload routing in a native Windows environment. The Warewolf ESB is an “easy service bus,” not an enterprise service bus.

Drag-and-drop tooling in the design studio makes quick work of configuring connections and logic flows. Built-in wizardry handles Web services definitions and database calls, and it can even tap Windows DLLs and the command line directly. Using the visual debugger, you can inspect execution streams (if not yet actually step through them), then package everything for remote deployment.

Warewolf is still a .40.5 release and undergoing major code changes. It also lacks native connectors, easy transforms, and any means of scalability management. Be aware that the precompiled install demands collection of some usage statistics (I wish they would stop that). But Warewolf ESB is fast, free, and extensible. It’s a quirky, upstart project that offers definite benefits to Windows integration architects.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-knime-100614674-orig.jpg)

### KNIME ###

[KNIME][14] takes a code-free approach to predictive analytics. Using a graphical workbench, you wire together workflows from an abundant library of processing nodes, which handle data access, transformation, analysis, and visualization. With KNIME, you can pull data from databases and big data platforms, run ETL transformations, perform data mining with R, and produce custom reports in the end.
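
KNIME itself is graphical, but the wiring model it uses, where each node consumes the output of the previous one, maps cleanly onto function composition. The following is a toy Python sketch of that idea, not KNIME's API: three hypothetical nodes form a read-transform-analyze pipeline.

```python
# Toy pipeline (not KNIME's API): each "node" is a function that takes the
# upstream node's output, mirroring how KNIME wires processing nodes
# for data access, transformation, and analysis.

def read_node(_):
    # data-access node: in KNIME this would be a database or file reader
    return [{"city": "Oslo", "temp": 3}, {"city": "Rome", "temp": 21}]

def filter_node(rows):
    # transformation node: keep only warm cities
    return [r for r in rows if r["temp"] > 10]

def stats_node(rows):
    # analysis node: summarize the filtered table
    return {"count": len(rows),
            "mean_temp": sum(r["temp"] for r in rows) / len(rows)}

def run_workflow(nodes):
    data = None
    for node in nodes:          # each node feeds the next, like wired ports
        data = node(data)
    return data

report = run_workflow([read_node, filter_node, stats_node])
print(report)   # {'count': 1, 'mean_temp': 21.0}
```

The graphical workbench adds configuration dialogs, branching, and inspection of intermediate tables on top of this basic dataflow shape.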

The company was busy this year rolling out the KNIME 2.12 update. The new release introduces MongoDB support, XPath nodes with autoquery creation, and a new view controller (based on the D3 JavaScript library) that creates interactive data visualizations on the fly. It also includes additional statistical nodes and a REST interface (KNIME Server edition) that provides services-based access to workflows.

KNIME’s core analytics engine is free open source. The company offers several fee-based extensions for clustering and collaboration. (A portion of your licensing fee actually funds the open source project.) KNIME Server (on-premise or cloud) ups the ante with security, collaboration, and workflow repositories -- all serving to inject analytics more productively throughout your business lines.

-- James R. Borck

![](http://images.techhive.com/images/article/2015/09/bossies-2015-teiid-100614682-orig.jpg)

### Teiid ###

[Teiid][15] is a data virtualization system that allows applications to use data from multiple, heterogeneous data stores. Currently a JBoss project, Teiid is backed by years of development from MetaMatrix and a long history of addressing the data access needs of the largest enterprise environments. I even see [uses for Teiid in Hadoop and big data environments][16].

In essence, Teiid allows you to connect all of your data sources into a “virtual” mega data source. You can define caching semantics, transforms, and other “configuration not code” transforms to load from multiple data sources using plain old SQL, XQuery, or procedural queries.
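
The "one virtual source queried with plain SQL" idea can be demonstrated in miniature with SQLite's `ATTACH`, which joins separate database files into a single queryable namespace. This is only a loose analogy, not Teiid itself (which federates heterogeneous stores, not just SQLite files), and the `crm`/`erp` databases here are invented for illustration.

```python
# Data virtualization in miniature: two physically separate databases are
# fronted by one connection and queried with ordinary SQL, loosely echoing
# how Teiid presents many stores as a single "virtual" data source.
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
crm = os.path.join(tmp, "crm.db")   # hypothetical CRM store
erp = os.path.join(tmp, "erp.db")   # hypothetical ERP store

with sqlite3.connect(crm) as c:
    c.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    c.execute("INSERT INTO customers VALUES (1, 'Acme')")
with sqlite3.connect(erp) as c:
    c.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
    c.execute("INSERT INTO orders VALUES (1, 99.5)")

# One connection spanning both physical sources:
conn = sqlite3.connect(crm)
conn.execute(f"ATTACH DATABASE '{erp}' AS erp")
row = conn.execute(
    "SELECT name, total FROM customers JOIN erp.orders "
    "ON customers.id = erp.orders.customer_id"
).fetchone()
print(row)   # ('Acme', 99.5)
```

Teiid layers caching, transforms, and non-relational endpoints on top of the same basic contract: the application writes one query and never sees where the rows physically live.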

Teiid is primarily accessible through JDBC and has built-in support for Web services. Red Hat sells Teiid as [JBoss Data Virtualization][17].

-- Andrew C. Oliver

![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614676-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][18]

[Bossie Awards 2015: The best open source application development tools][19]

[Bossie Awards 2015: The best open source big data tools][20]

[Bossie Awards 2015: The best open source data center and cloud software][21]

[Bossie Awards 2015: The best open source desktop and mobile software][22]

[Bossie Awards 2015: The best open source networking and security software][23]

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2982622/open-source-tools/bossie-awards-2015-the-best-open-source-applications.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:http://xtuple.org/
|
||||
[2]:http://odoo.com/
|
||||
[3]:http://idempiere.org/
|
||||
[4]:http://suitecrm.com/
|
||||
[5]:http://salesagility.com/
|
||||
[6]:http://civicrm.org/
|
||||
[7]:https://www.mautic.org/
|
||||
[8]:http://www.orangehrm.com/
|
||||
[9]:http://libreoffice.org/
|
||||
[10]:http://www.bonitasoft.com/
|
||||
[11]:http://camunda.com/
|
||||
[12]:http://talend.com/
|
||||
[13]:http://warewolf.io/
|
||||
[14]:http://www.knime.org/
|
||||
[15]:http://teiid.jboss.org/
|
||||
[16]:http://www.infoworld.com/article/2922180/application-development/database-virtualization-or-i-dont-want-to-do-etl-anymore.html
|
||||
[17]:http://www.jboss.org/products/datavirt/overview/
|
||||
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
|
||||
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
|
||||
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
|
||||
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
|
||||
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
|
||||
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
|
@ -0,0 +1,287 @@
Bossie Awards 2015: The best open source big data tools
================================================================================

InfoWorld's top picks in distributed data processing, streaming analytics, machine learning, and other corners of large-scale data analytics

![](http://images.techhive.com/images/article/2015/09/bossies-2015-big-data-100613944-orig.jpg)

### The best open source big data tools ###

How many Apache projects can sit on a pile of big data? Fire up your Hadoop cluster, and you might be able to count them. Among this year's Bossies in big data, you'll find the fastest, widest, and deepest newfangled solutions for large-scale SQL, stream processing, sort-of stream processing, and in-memory analytics, not to mention our favorite maturing members of the Hadoop ecosystem. It seems everyone has a nail to drive into MapReduce's coffin.

![](http://images.techhive.com/images/article/2015/09/bossies-2015-spark-100613962-orig.jpg)

### Spark ###

With hundreds of contributors, [Spark][1] is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up.

The sweet spot for Spark continues to be machine learning. Highlights since last year include the replacement of the SchemaRDD with a Dataframes API, similar to those found in R and Pandas, making data access much simpler than with the raw RDD interface. Also new are ML pipelines for building repeatable machine learning workflows, expanded and optimized support for various storage formats, simpler interfaces to machine learning algorithms, and improvements in the display of cluster resource usage and task tracking.

On by default in Spark 1.5 is the off-heap memory manager, Tungsten, which offers much faster processing by fine-tuning data structure layout in memory. Finally, the new website, [spark-packages.org][2], with more than 100 third-party libraries, adds many useful features from the community.
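The "repeatable workflow" idea behind ML pipelines can be sketched in a few lines of plain Python (no Spark required). The `Scaler` and `Pipeline` classes below are hypothetical stand-ins for the concept, not Spark's actual API:

```python
# Toy illustration of the pipeline idea: chain named stages so the same
# sequence of fit/transform steps can be replayed on new data.
class Scaler:
    """A stage that learns a parameter in fit() and applies it in transform()."""
    def fit(self, data):
        self.max = max(data)
        return self

    def transform(self, data):
        return [x / self.max for x in data]

class Pipeline:
    """Runs each stage's fit() then transform(), feeding one into the next."""
    def __init__(self, stages):
        self.stages = stages

    def fit_transform(self, data):
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data

pipe = Pipeline([Scaler()])
print(pipe.fit_transform([1.0, 2.0, 4.0]))  # → [0.25, 0.5, 1.0]
```

In Spark itself, `pyspark.ml.Pipeline` plays this role, with stages such as feature transformers and estimators assembled in the same chained fashion.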
-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-storm-100614149-orig.jpg)

### Storm ###

[Apache Storm][3] is a Clojure-based distributed computation framework primarily for streaming real-time analytics. Storm is based on the [disruptor pattern][4] for low-latency complex event processing created at LMAX. Unlike Spark, Storm can process single events as opposed to “micro-batches,” and it has a lower memory footprint. In my experience, it scales better for streaming, especially when you’re mainly streaming to ingest data into other data sources.

Storm’s profile has been eclipsed by Spark, but Spark is inappropriate for many streaming applications. Storm is frequently used with Apache Kafka.

-- Andrew C. Oliver

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-h2o-100613950-orig.jpg)

### H2O ###

[H2O][5] is a distributed, in-memory processing engine for machine learning that boasts an impressive array of algorithms. Previously only available to R users, version 3.0 adds Python and Java language bindings, as well as a Spark execution engine for the back end. The best way to view H2O is as a very large memory extension of your R environment. Instead of working directly on large data sets, the R extensions communicate via a REST API with the H2O cluster, where H2O does the heavy lifting.
Several useful R packages such as ddply have been wrapped, allowing you to use them on data sets larger than the amount of RAM on the local machine. You can run H2O on EC2, on a Hadoop/YARN cluster, and on Docker containers. With Sparkling Water (Spark plus H2O) you can access Spark RDDs on the cluster side by side, letting you, for example, process a data frame with Spark before passing it to an H2O machine learning algorithm.

-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-apex-100613943-orig.jpg)

### Apex ###

[Apex][6] is an enterprise-grade, big-data-in-motion platform that unifies stream processing as well as batch processing. A native YARN application, Apex processes streaming data in a scalable, fault-tolerant manner and provides all the common stream operators out of the box. One of the best things about Apex is that it natively supports the common event processing guarantees (exactly once, at least once, at most once). Formerly a commercial product from DataTorrent, Apex shows its roots in the quality of the documentation, examples, code, and design. Devops and application development are cleanly separated, and user code generally doesn't have to be aware that it is running in a streaming cluster.

A related project, [Malhar][7], offers more than 300 commonly used operators and application templates that implement common business logic. The Malhar libraries significantly reduce the time it takes to develop an Apex application, and there are connectors (operators) for storage, file systems, messaging systems, databases, and nearly anything else you might want to connect to from an application. The operators can all be extended or customized to meet individual businesses' requirements. All Malhar components are available under the Apache license.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-druid-100613947-orig.jpg)

### Druid ###

[Druid][8], which moved to a commercially friendly Apache license in February of this year, is best described as a hybrid, “event streams meet OLAP” solution. Originally developed to analyze online events for ad markets, Druid allows users to do arbitrary and interactive exploration of time series data. Some of the key features include low-latency ingest of events, fast aggregations, and approximate and exact calculations.

At the heart of Druid is a custom data store that uses specialized nodes to handle each part of the problem. Real-time ingest is managed by real-time nodes (JVMs) that eventually flush data to historical nodes that are responsible for data that has aged. Broker nodes direct queries in a scatter-gather fashion to both real-time and historical nodes to give the user a complete picture of events. Benchmarked at a sustained 500K events per second and 1 million events per second peak, Druid is ideal as a real-time dashboard for ad-tech, network traffic, and other activity streams.

-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-flink-100613949-orig.jpg)

### Flink ###

At its core, [Flink][9] is a data flow engine for event streams. Although superficially similar to Spark, Flink takes a different approach to in-memory processing. First, Flink was designed from the start as a stream processor. Batch is simply a special case of a stream with a beginning and an end, and Flink offers APIs for dealing with each case, the DataSet API (batch) and the DataStream API. Developers coming from the MapReduce world should feel right at home working with the DataSet API, and porting applications to Flink should be straightforward. In many ways Flink mirrors the simplicity and consistency that helped make Spark so popular. Like Spark, Flink is written in Scala.

The developers of Flink clearly thought out usage and operations too: Flink works natively with YARN and Tez, and it uses an off-heap memory management scheme to work around some of the JVM limitations. A peek at the Flink JIRA site shows a healthy pace of development, and you’ll find an active community on the mailing lists and on StackOverflow as well.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-elastic-100613948-orig.jpg)

### Elasticsearch ###

[Elasticsearch][10] is a distributed document search server based on [Apache Lucene][11]. At its heart, Elasticsearch builds indices on JSON-formatted documents in nearly real time, enabling fast, full-text, schema-free queries. Combined with the open source Kibana dashboard, you can create impressive visualizations of your real-time data in a simple point-and-click fashion.

Elasticsearch is easy to set up and easy to scale, automatically making use of new hardware by rebalancing shards as required. The query syntax isn't at all SQL-like, but it is intuitive enough for anyone familiar with JSON. Most users won't be interacting at that level anyway. Developers can use the native JSON-over-HTTP interface or one of the several language bindings available, including Ruby, Python, PHP, Perl, .Net, Java, and JavaScript.
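As a sketch of that JSON-over-HTTP interface, here is how a simple full-text query body might be assembled with nothing but the Python standard library. The index and field names are invented for illustration; in a real deployment you would POST this body to an endpoint like `http://localhost:9200/articles/_search`:

```python
import json

# A hypothetical full-text "match" query against a "title" field,
# in the shape Elasticsearch's _search endpoint expects.
query = {
    "query": {
        "match": {"title": "open source"}
    },
    "size": 10,  # return at most 10 hits
}

body = json.dumps(query)
print(body)
```

The same dictionary-shaped request works through any of the language bindings mentioned above, which mostly just wrap this JSON construction for you.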
-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-slamdata-100613961-orig.jpg)

### SlamData ###

If you are seeking a user-friendly tool to visualize and understand your newfangled NoSQL data, take a look at [SlamData][12]. SlamData allows you to query nested JSON data using familiar SQL syntax, without relocation or transformation.

One of the technology’s main features is its connectors. From MongoDB to HBase, Cassandra, and Apache Spark, SlamData taps external data sources with the industry's most advanced “pushdown” processing technology, performing transformations and analytics close to the data.

While you might ask, “Wouldn’t I be better off building a data lake or data warehouse?” consider the companies that were born in NoSQL. Skipping the ETL and simply connecting a visualization tool to a replica offers distinct advantages -- not only in terms of how up-to-date the data is, but in how many moving parts you have to maintain.

-- Andrew C. Oliver

![](http://images.techhive.com/images/article/2015/09/bossies-2015-drill-100613946-orig.jpg)

### Drill ###

[Drill][13] is a distributed system for interactive analysis of large-scale data sets, inspired by [Google's Dremel][14]. Designed for low-latency analysis of nested data, Drill has a stated design goal of scaling to 10,000 servers and querying petabytes of data and trillions of records.

Nested data can be obtained from a variety of data sources (such as HDFS, HBase, Amazon S3, and Azure Blobs) and in multiple formats (including JSON, Avro, and protocol buffers), and you don't need to specify a schema up front (“schema on read”).

Drill uses ANSI SQL:2003 for its query language, so there's no learning curve for data engineers to overcome, and it allows you to join data across multiple data sources (for example, joining a table in HBase with logs in HDFS). Finally, Drill offers ODBC and JDBC interfaces to connect your favorite BI tools.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-hbase-100613951-orig.jpg)

### HBase ###

[HBase][15] reached the 1.x milestone this year and continues to improve. Like other nonrelational distributed datastores, HBase excels at returning search results very quickly and for this reason is often used to back search engines, such as the ones at eBay, Bloomberg, and Yahoo. As a stable and mature software offering, HBase does not get fresh features as frequently as newer projects, but that's often good for enterprises.

Recent improvements include the addition of high-availability region servers, support for rolling upgrades, and YARN compatibility. Features in the works include scanner updates that promise to improve performance and the ability to use HBase as a persistent store for streaming applications like Storm and Spark. HBase can also be queried SQL style via the [Phoenix][16] project, now out of incubation, whose SQL compatibility is steadily improving. Phoenix recently added a Spark connector and the ability to add custom user-defined functions.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-hive-100613952-orig.jpg)

### Hive ###

Although stable and mature for several years, [Hive][17] reached the 1.0 version milestone this year and continues to be the best solution when really heavy SQL lifting (many petabytes) is required. The community continues to focus on improving the speed, scale, and SQL compliance of Hive. Now at version 1.2, Hive's significant improvements since its last Bossie include full ACID semantics, cross-data center replication, and a cost-based optimizer.

Hive 1.2 also brought improved SQL compliance, making it easier for organizations to use it to off-load ETL jobs from their existing data warehouses. In the pipeline are speed improvements with an in-memory cache called LLAP (which, from the looks of the JIRAs, is about ready for release), the integration of Spark machine learning libraries, and improved SQL constructs like nonequi joins, interval types, and subqueries.

-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kylin-100613955-orig.jpg)

### Kylin ###

[Kylin][18] is an application developed at eBay for processing very large OLAP cubes via ANSI SQL, a task familiar to most data analysts. If you think about how many items are on sale now and in the past at eBay, and all the ways eBay might want to slice and dice data related to those items, you will begin to understand the types of queries Kylin was designed for.

Like most other analysis applications, Kylin supports multiple access methods, including JDBC, ODBC, and a REST API for programmatic access. Although Kylin is still in incubation at Apache, and the community nascent, the project is well documented and the developers are responsive and eager to understand customer use cases. Getting up and running with a starter cube was a snap. If you have a need for analysis of extremely large cubes, you should take a look at Kylin.

-- Steven Nunez
![](http://images.techhive.com/images/article/2015/09/bossies-2015-cdap-100613945-orig.jpg)

### CDAP ###

[CDAP][19] (Cask Data Access Platform) is a framework running on top of Hadoop that abstracts away the complexity of building and running big data applications. CDAP is organized around two core abstractions: data and applications. CDAP Datasets are logical representations of data that behave uniformly regardless of the underlying storage layer; CDAP Streams provide similar support for real-time data.

Applications use CDAP services for things such as distributed transactions and service discovery to shield developers from the low-level details of Hadoop. CDAP comes with a data ingestion framework and a few prebuilt applications and “packs” for common tasks like ETL and website analytics, along with support for testing, debugging, and security. Like most formerly commercial (closed source) projects, CDAP benefits from good documentation, tutorials, and examples.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-ranger-100613960-orig.jpg)

### Ranger ###

Security has long been a sore spot with Hadoop. It isn’t (as is frequently reported) that Hadoop is “insecure” or “has no security.” Rather, the truth was more that Hadoop had too much security, though not in a good way. I mean that every component had its own authentication and authorization implementation that wasn’t integrated with the rest of the platform.

Hortonworks acquired XA/Secure in May, and [a few renames later][20] we have [Ranger][21]. Ranger pulls many of the key components of Hadoop together under one security umbrella, allowing you to set a “policy” that ties your Hadoop security to your existing ACL-based Active Directory authentication and authorization. Ranger gives you one place to manage Hadoop access control, one place to audit, one place to manage the encryption, and a pretty Web page to do it from.

-- Andrew C. Oliver

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-mesos-100613957-orig.jpg)

### Mesos ###

[Mesos][22], developed at U.C. Berkeley's [AMPLab][23], which also brought us Spark, takes a different approach to managing cluster computing resources. The best way to describe Mesos is as a distributed microkernel for the data center. Mesos provides a minimal set of operating system mechanisms like inter-process communications, disk access, and memory to higher-level applications, called “frameworks” in Mesos-speak, that run in what is analogous to user space. Popular frameworks for Mesos include [Chronos][24] and [Aurora][25] for building ETL pipelines and job scheduling, and a few big data processing applications including Hadoop, Storm, and Spark, which have been ported to run as Mesos frameworks.

Mesos applications (frameworks) negotiate for cluster resources using a two-level scheduling mechanism, so writing a Mesos application is unlikely to feel like a familiar experience to most developers. Although Mesos is a young project, momentum is growing, and with Spark being an exceptionally good fit for Mesos, we're likely to see more from Mesos in the coming years.

-- Steven Nunez

![](http://images.techhive.com/images/article/2015/09/bossies-2015-nifi-100613958-orig.jpg)

### NiFi ###

[NiFi][26] is an incubating Apache project to automate the flow of data between systems. It doesn't operate in the traditional space that Kafka and Storm do, but rather in the space between external devices and the data center. NiFi was originally developed by the NSA and donated to the open source community in 2014. It has a strong community of developers and users within various government agencies.

NiFi isn't like anything else in the current big data ecosystem. It is much closer to a traditional EAI (enterprise application integration) tool than a data processing platform, although simple transformations are possible. One interesting feature is the ability to debug and change data flows in real time. Although not quite a REPL (read, eval, print loop), this kind of paradigm dramatically shortens the development cycle by not requiring a compile-deploy-test-debug workflow. Other interesting features include a strong “chain of custody,” where each piece of data can be tracked from beginning to end, along with any changes made along the way. You can also prioritize data flows so that time-sensitive information can be received as quickly as possible, bypassing less time-critical events.
-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-kafka-100613954-orig.jpg)

### Kafka ###

[Kafka][27] has emerged as the de facto standard for distributed publish-subscribe messaging in the big data space. Its design allows brokers to support thousands of clients at high rates of sustained message throughput, while maintaining durability through a distributed commit log. Kafka does this by maintaining what is essentially a single append-only log file, partitioned and replicated across the brokers' local disks. Because each partition is stored redundantly on multiple brokers, Kafka is protected against the loss of any one node.

When consumers want to read messages, Kafka looks up their offset in the central log and sends them. Because messages are not deleted immediately, adding consumers or replaying historical messages does not impose additional costs. Kafka has been benchmarked at 2 million writes per second by its developers at LinkedIn. Despite Kafka’s sub-1.0 version number, Kafka is a mature and stable product, in use in some of the largest clusters in the world.
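That cheap-replay property falls straight out of the offset model, which can be sketched with a plain Python list standing in for a partition's log (no Kafka involved; `produce` and `consume` are made-up names for illustration):

```python
# A partition is an append-only log; each consumer just tracks an
# integer offset into it. Nothing is deleted on read.
log = []

def produce(msg):
    log.append(msg)

def consume(offset, max_msgs=10):
    """Read a batch starting at a position, returning the batch and the
    next offset. Adding a consumer or replaying history costs nothing
    extra, because reading never mutates the log."""
    batch = log[offset:offset + max_msgs]
    return batch, offset + len(batch)

for i in range(5):
    produce(f"event-{i}")

batch, next_offset = consume(0, 3)  # first consumer reads 3 messages
print(batch, next_offset)           # → ['event-0', 'event-1', 'event-2'] 3

replay, _ = consume(0, 3)           # a new consumer replays the same range
print(replay == batch)              # → True
```

Real Kafka adds partitioning, replication, and persistence on top, but the offset-into-a-log abstraction is the same.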
-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opentsdb-100613959-orig.jpg)

### OpenTSDB ###

[OpenTSDB][28] is a time series database built on HBase. It was designed specifically for analyzing data collected from applications, mobile devices, networking equipment, and other hardware devices. The custom HBase schema used to store the time series data has been designed for fast aggregations and minimal storage requirements.

By using HBase as the underlying storage layer, OpenTSDB gains the distributed and reliable characteristics of that system. Users don't interact with HBase directly; instead events are written to the system via the time series daemon (TSD), which can be scaled out as required to handle high-throughput situations. There are a number of prebuilt connectors to publish data to OpenTSDB, and clients to read data from Ruby, Python, and other languages. OpenTSDB isn't strong on creating interactive graphics, but several third-party tools fill that gap. If you are already using HBase and want a simple way to store event data, OpenTSDB might be just the thing.
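A datapoint sent to a TSD over HTTP is just a small JSON object naming a metric, a timestamp, a value, and a set of tags. The metric and tag names below are invented for illustration; in a real setup you would POST this to the TSD's `/api/put` endpoint:

```python
import json
import time

# One time-series datapoint as it might be sent to a TSD.
# Metric name and tags here are hypothetical examples.
datapoint = {
    "metric": "sys.cpu.user",
    "timestamp": int(time.time()),  # seconds since the epoch
    "value": 42.5,
    "tags": {"host": "web01", "dc": "lga"},
}

print(json.dumps(datapoint))
```

The tags are what make aggregation fast: OpenTSDB's schema lets you roll up, say, all `host` values for one metric without scanning unrelated rows.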
-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-jupyter-100613953-orig.jpg)

### Jupyter ###

Everybody's favorite notebook application went generic. [Jupyter][29] is “the language-agnostic parts of IPython” spun out into an independent package. Although Jupyter itself is written in Python, the system is modular. Now you can have an IPython-like interface, along with notebooks for sharing code, documentation, and data visualizations, for nearly any language you like.

At least [50 language][30] kernels are already supported, including LISP, R, Ruby, F#, Perl, and Scala. In fact, even IPython itself is simply a Python module for Jupyter. Communication with the language kernel is via a REPL (read, eval, print loop) protocol, similar to [nREPL][31] or [Slime][32]. It is nice to see such a useful piece of software receiving significant [nonprofit funding][33] to further its development in areas such as parallel execution and multiuser notebooks. Behold, open source at its best.

-- Steven Nunez

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zeppelin-100613963-orig.jpg)

### Zeppelin ###

While still in incubation, [Apache Zeppelin][34] is nevertheless stirring the data analytics and visualization pot. The Web-based notebook enables users to ingest, discover, analyze, and visualize their data. The notebook also allows you to collaborate with others to make data-driven, interactive documents incorporating a growing number of programming languages.

This technology also boasts an integration with Spark and an interpreter concept allowing any language or data processing back end to be plugged into Zeppelin. Currently Zeppelin supports interpreters such as Scala, Python, SparkSQL, Hive, Markdown, and Shell.

Zeppelin is still immature. I wanted to put a demo up but couldn’t find an easy way to disable “shell” as an execution option (among other things). However, it already looks better visually than IPython Notebook, which is the popular incumbent in this space. If you don’t want to spring for Databricks Cloud or need something open source and extensible, this is the most promising distributed computing notebook around -- especially if you’re a Sparky type.

-- Andrew C. Oliver
![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100613956-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][35]

[Bossie Awards 2015: The best open source application development tools][36]

[Bossie Awards 2015: The best open source big data tools][37]

[Bossie Awards 2015: The best open source data center and cloud software][38]

[Bossie Awards 2015: The best open source desktop and mobile software][39]

[Bossie Awards 2015: The best open source networking and security software][40]

--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2982429/open-source-tools/bossie-awards-2015-the-best-open-source-big-data-tools.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://spark.apache.org/
[2]:http://spark-packages.org/
[3]:https://storm.apache.org/
[4]:https://lmax-exchange.github.io/disruptor/
[5]:http://h2o.ai/product/
[6]:https://www.datatorrent.com/apex/
[7]:https://github.com/DataTorrent/Malhar
[8]:https://druid.io/
[9]:https://flink.apache.org/
[10]:https://www.elastic.co/products/elasticsearch
[11]:http://lucene.apache.org/
[12]:http://teiid.jboss.org/
[13]:https://drill.apache.org/
[14]:http://research.google.com/pubs/pub36632.html
[15]:http://hbase.apache.org/
[16]:http://phoenix.apache.org/
[17]:https://hive.apache.org/
[18]:https://kylin.incubator.apache.org/
[19]:http://cdap.io/
[20]:http://www.infoworld.com/article/2973381/application-development/apache-ranger-chuck-norris-hadoop-security.html
[21]:https://ranger.incubator.apache.org/
[22]:http://mesos.apache.org/
[23]:https://amplab.cs.berkeley.edu/
[24]:http://nerds.airbnb.com/introducing-chronos/
[25]:http://aurora.apache.org/
[26]:http://nifi.apache.org/
[27]:https://kafka.apache.org/
[28]:http://opentsdb.net/
[29]:http://jupyter.org/
[30]:https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
[31]:https://github.com/clojure/tools.nrepl
[32]:https://github.com/slime/slime
[33]:http://blog.jupyter.org/2015/07/07/jupyter-funding-2015/
[34]:https://zeppelin.incubator.apache.org/
[35]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[36]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[37]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[38]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[39]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[40]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
@ -0,0 +1,261 @@
Bossie Awards 2015: The best open source data center and cloud software
================================================================================

InfoWorld's top picks of the year in open source platforms, infrastructure, management, and orchestration software

![](http://images.techhive.com/images/article/2015/09/bossies-2015-data-center-cloud-100613986-orig.jpg)

### The best open source data center and cloud software ###

You might have heard about this new thing called Docker containers. Developers love them because you can build them with a script, add services in layers, and push them right from your MacBook Pro to a server for testing. It works because they're superlightweight, unlike those now-archaic virtual machines. Containers -- and other lightweight approaches to deliver services -- are changing the shape of operating systems, applications, and the tools to manage them. Our Bossie winners in data center and cloud are leading the charge.

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-docker-100613987-orig.jpg)

### Docker Machine, Compose, and Swarm ###

Docker’s open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Allowing developers and operations teams to separate applications from infrastructure, Docker is a powerful data center automation tool.

However, containers are only part of the Docker story. Docker also provides a series of tools that allow you to use the Docker API to automate the entire container lifecycle, as well as handling application design and orchestration.

[Machine][1] allows you to automate the provisioning of Docker containers. Starting with a command line, you can use a single line of code to target one or more hosts, deploy the Docker engine, and even join it to a Swarm cluster. There’s support for most hypervisors and cloud platforms – all you need are your access credentials.

[Swarm][2] handles clustering and scheduling, and it can be integrated with Mesos for more advanced scheduling capabilities. You can use Swarm to build a pool of container hosts, allowing your apps to scale out as demand increases. Applications and all of their dependencies can be defined with [Compose][3], which lets you link containers together into a distributed application and launch them as a group. Compose descriptions work across platforms, so you can take a developer configuration and quickly deploy in production.
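A Compose description of the kind mentioned above is a short YAML file. The two services below, a web application linked to a Redis instance, are a hypothetical sketch (not from any real project) in the style of the Compose file format of the time:

```yaml
# docker-compose.yml -- hypothetical two-service application:
# a web container built from the local Dockerfile, linked to Redis.
web:
  build: .
  ports:
    - "8000:8000"   # host:container port mapping
  links:
    - redis         # makes "redis" resolvable from the web container
redis:
  image: redis:latest
```

Running `docker-compose up` against a file like this launches both containers as a group, which is exactly the "define once, deploy anywhere" workflow the article describes.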

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-coreos-rkt-100613985-orig.jpg)

### CoreOS and Rkt ###

A thin, lightweight server OS, [CoreOS][4] is based on Google’s Chromium OS. Instead of using a package manager to install functions, it’s designed to be used with Linux containers. By using containers to extend a thin core, CoreOS allows you to quickly deploy applications, working well on cloud infrastructures.

CoreOS’s container management tooling, fleet, is designed to treat a cluster of CoreOS servers as a single unit, with tools for managing high availability and for deploying containers to the cluster based on resource availability. A cross-cluster key/value store, etcd, handles device management and supports service discovery. If a node fails, etcd can quickly restore state on a new replica, giving you a distributed configuration management platform that’s linked to CoreOS’s automated update service.

While CoreOS is perhaps best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image. Also compatible with Docker containers, rkt has a modular architecture that allows different containerization systems (even hardware virtualization, in a proof of concept from Intel) to be plugged in. However, rkt is still in the early stages of development, so isn’t quite production ready.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-rancheros-100613997-orig.jpg)

### RancherOS ###

As we abstract more and more services away from the underlying operating system using containers, we can start thinking about what tomorrow’s operating system will look like. Similar to our applications, it’s going to be a modular set of services running on a thin kernel, self-configuring to offer only the services our applications need.

[RancherOS][5] is a glimpse of what that OS might look like. Blending the Linux kernel with Docker, RancherOS is a minimal OS suitable for hosting container-based applications in cloud infrastructures. Instead of using standard Linux packaging techniques, RancherOS leverages Docker to host Linux user-space services and applications in separate container layers. A low-level Docker instance is first to boot, hosting system services in their own containers. Users' applications run in a higher-level Docker instance, separate from the system containers. If one of your containers crashes, the host keeps running.

RancherOS is only 20MB in size, so it's easy to replicate across a data center. It’s also designed to be managed using automation tools, not manually, with API-level access that works with Docker’s management tools as well as with Rancher Labs’ own cloud infrastructure and management tools.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-kubernetes-100613991-orig.jpg)

### Kubernetes ###

Google’s [Kubernetes][6] container orchestration system is designed to manage and run applications built in Docker and Rocket containers. Focused on managing microservice applications, Kubernetes lets you distribute your containers across a cluster of hosts, while handling scaling and ensuring managed services run reliably.

With containers providing an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development paradigms, with a focus on user intent. That means you launch applications, and Kubernetes will manage the containers to run within the parameters you set, using the Kubernetes scheduler to make sure it gets the resources it needs. Containers are grouped into pods and managed by a replication engine that can recover failed containers or add more pods as applications scale.
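
The pod-and-replication model described above can be sketched in a manifest. This is an illustrative example, not taken from the article; the controller name, label, and image are hypothetical, and the field names follow the v1 API of the period.

```yaml
# Hypothetical ReplicationController: keep three replicas of an nginx pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3            # the replication engine recreates pods that fail
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```

You declare the intent (three copies of this pod) and the scheduler decides where they run, restarting or rescheduling containers to keep the count true.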

Kubernetes powers Google’s own Container Engine, and it runs on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be either loosely or tightly coupled, so applications not designed for cloud PaaS operations can be migrated to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you an endpoint for a continuous delivery process.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-mesos-100613993-orig.jpg)

### Mesos ###

Turning a data center into a private or public cloud requires more than a hypervisor. It requires a new operating layer that can manage the data center resources as if they were a single computer, handling resources and scheduling. Described as a “distributed systems kernel,” [Apache Mesos][7] allows you to manage thousands of servers, using containers to host applications and APIs to support parallel application development.

At the heart of Mesos is a set of daemons that expose resources to a central scheduler. Tasks are distributed across nodes, taking advantage of available CPU and memory. One key approach is the ability for applications to reject offered resources if they don’t meet requirements. It’s an approach that works well for big data applications, and you can use Mesos to run Hadoop and Cassandra distributed databases, as well as Apache’s own Spark data processing engine. There’s also support for the Jenkins continuous integration server, allowing you to run build and test workers in parallel on a cluster of servers, dynamically adjusting the tasks depending on workload.

Designed to run on Linux and Mac OS X, Mesos has also recently been ported to Windows to support the development of scalable parallel applications on Azure.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-smartos-100614849-orig.jpg)

### SmartOS and SmartDataCenter ###

Joyent’s [SmartDataCenter][8] is the software that runs its public cloud, adding a management platform on top of its [SmartOS][9] thin server OS. A descendent of OpenSolaris that combines Zones containers and the KVM hypervisor, SmartOS is an in-memory operating system, quick to boot from a USB stick and run on bare-metal servers.

Using SmartOS, you can quickly deploy a set of lightweight servers that can be programmatically managed via a set of JSON APIs, with functionality delivered via virtual machines, downloaded by built-in image management tools. Through the use of VMs, all userland operations are isolated from the underlying OS, reducing the security exposure of both the host and guests.

SmartDataCenter runs on SmartOS servers, with one server running as a dedicated management node, and the rest of a cluster operating as compute nodes. You can get started with a Cloud On A Laptop build (available as a VMware virtual appliance) that lets you experiment with the management server. In a live data center, you’ll deploy SmartOS on your servers, using ZFS to handle storage – which includes your local image library. Services are deployed as images, with components stored in an object repository.

The combination of SmartDataCenter and SmartOS builds on the experience of Joyent’s public cloud, giving you a tried and tested set of tools that can help you bootstrap your own cloud data center. It’s an infrastructure focused on virtual machines today, but laying the groundwork for tomorrow. A related Joyent project, [sdc-docker][10], exposes an entire SmartDataCenter cluster as a single Docker host, driven by native Docker commands.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-sensu-100614850-orig.jpg)

### Sensu ###

Managing large-scale data centers isn’t about working with server GUIs, it’s about automating scripts based on information from monitoring tools and services, routing information from sensors and logs, and then delivering actions to applications. One tool that’s beginning to offer this functionality is [Sensu][11], often described as a “monitoring router.”

Scripts running across your data center deliver information to Sensu, which then routes it to the appropriate handler, using a publish-and-subscribe architecture based on RabbitMQ. Servers can be distributed, delivering published check results to handler code. You might see results in email, or in a Slack room, or in Sensu’s own dashboards. Message formats are defined in JSON files, or mutators used to format data on the fly, and messages can be filtered to one or more event handlers.
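
A Sensu check definition gives a feel for this routing model. The sketch below is hypothetical: the script name, subscribers, and handlers are illustrative rather than taken from the article, though the JSON shape follows Sensu's check configuration format.

```json
{
  "checks": {
    "check_disk": {
      "command": "check-disk-usage.rb -w 80 -c 90",
      "subscribers": ["webservers"],
      "interval": 60,
      "handlers": ["email", "slack"]
    }
  }
}
```

Clients subscribed to "webservers" would run the command every 60 seconds and publish the result, which Sensu then routes to the named handlers.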

Sensu is still a relatively young tool, but it’s one that shows a lot of promise. If you’re going to automate your data center, you’re going to need a tool like this not only to show you what’s happening, but to deliver that information where it’s most needed. A commercial option adds support for integration with third-party applications, but much of what you need to manage a data center is in the open source release.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-prometheus-100613996-orig.jpg)

### Prometheus ###

Managing a modern data center is a complex task. Racks of servers need to be treated like cattle rather than pets, and you need a monitoring system designed to handle hundreds and thousands of nodes. Monitoring applications presents special challenges, and that’s where [Prometheus][12] comes in to play. A service monitoring system designed to deliver alerts to operators, Prometheus can run on everything from a single laptop to a highly available cluster of monitoring servers.
Time series data is captured and stored, then compared against patterns to identify faults and problems. You’ll need to expose data on HTTP endpoints, using a YAML file to configure the server. A browser-based reporting tool handles displaying data, with an expression console where you can experiment with queries. Dashboards can be created with a GUI builder, or written using a series of templates, letting you deliver application consoles that can be managed using version control systems such as Git.
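
A minimal server configuration shows the YAML-plus-HTTP-endpoint model. This sketch is illustrative, not taken from the article; the job name and target addresses are hypothetical, and directive names can vary between Prometheus versions.

```yaml
# Hypothetical scrape configuration: poll two exporter endpoints every 15 seconds.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'web'
    static_configs:
      - targets: ['app1:9100', 'app2:9100']
```

Each target exposes its metrics over HTTP; Prometheus pulls and stores them as time series, ready for querying and alerting.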

Captured data can be managed using expressions, which make it easy to aggregate data from several sources -- for example, letting you bring performance data from a series of Web endpoints into one store. An experimental alert manager module delivers alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries for common languages like Go and Java mean it’s easy to add Prometheus support to your applications and services, while third-party options extend Prometheus to Node.js and .Net.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-elk-100613988-orig.jpg)

### Elasticsearch, Logstash, and Kibana ###

Running a modern data center generates a lot of data, and it requires tools to get information out of that data. That’s where the combination of Elasticsearch, Logstash, and Kibana, often referred to as the ELK stack, comes into play.

Designed to handle scalable search across a mix of content types, including structured and unstructured documents, [Elasticsearch][13] builds on Apache’s Lucene information retrieval tools, with a RESTful JSON API. It’s used to provide search for sites like Wikipedia and GitHub, using a distributed index with automated load balancing and routing.

Under the fabric of a modern cloud is a physical array of servers, running as VM hosts. Monitoring many thousands of servers needs centralized logs. [Logstash][14] harvests and filters the logs generated by those servers (and by the applications running on them), using a forwarder on each physical and virtual machine. Logstash-formatted data is then delivered to Elasticsearch, giving you a search index that can be quickly scaled as you add more servers.

At a higher level, [Kibana][15] adds a visualization layer to Elasticsearch, providing a Web dashboard for exploring and analyzing the data. Dashboards can be created around custom searches and shared with your team, providing a quick, easy-to-digest devops information feed.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-ansible-100613984-orig.jpg)

### Ansible ###

Managing server configuration is a key element of any devops approach to managing a modern data center or a cloud infrastructure. Configuration management tooling that takes a desired-state approach simplifies systems management at cloud scale, using server and application descriptions to handle server and application deployment.

[Ansible][16] offers a minimal management service, using SSH to manage Unix nodes and PowerShell to work with Windows servers, with no need to deploy agents. An Ansible Playbook describes the state of a server or service in YAML, deploying Ansible modules to servers that handle configuration and removing them once the service is running. You can use Playbooks to orchestrate tasks -- for example, deploying several Web endpoints with a single script.
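
A short Playbook illustrates this desired-state, YAML-driven approach. The sketch below is hypothetical: the host group, package, and module arguments are illustrative, not taken from the article.

```yaml
# Hypothetical Playbook: ensure nginx is installed and running on the "web" group.
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt: name=nginx state=present
    - name: Ensure nginx is running
      service: name=nginx state=started
```

Ansible connects over SSH, pushes the modules each task needs, brings the node to the described state, and cleans up afterward -- no resident agent required.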

It’s possible to make module creation and Playbook delivery part of a continuous delivery process, using build tools to deliver configurations and automate deployment. Ansible can pull in information from cloud service providers, simplifying management of virtual machines and networks. Monitoring tools in Ansible are able to trigger additional deployments automatically, helping manage and control cloud services, as well as working to manage resources used by large-scale data platforms like Hadoop.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-jenkins-100613990-orig.jpg)

### Jenkins ###

Getting continuous delivery right requires more than a structured way of handling development; it also requires tools for managing test and build. That’s where the [Jenkins][17] continuous integration server comes in. Jenkins works with your choice of source control, your test harnesses, and your build server. It’s a flexible tool, initially designed for working with Java but now extended to support Web and mobile development and even to build Windows applications.

Jenkins is perhaps best thought of as a switching network, shunting files through a test and build process, and responding to signals from the various tools you’re using – thanks to a library of more than 1,000 plug-ins. These include tools for integrating Jenkins with both local Git instances and GitHub so that it's possible to extend a continuous development model into your build and delivery processes.

Using an automation tool like Jenkins is as much about adopting a philosophy as it is about implementing a build process. Once you commit to continuous integration as part of a continuous delivery model, you’ll be running test and build cycles as soon as code is delivered to your source control release branch – and delivering it to users as soon as it’s in the main branch.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-nodejs-iojs-100613995-orig.jpg)

### Node.js and io.js ###

Modern cloud applications are built using different design patterns from the familiar n-tier enterprise and Web apps. They’re distributed, event-driven collections of services that can be quickly scaled and can support many thousands of simultaneous users. One key technology in this new paradigm is [Node.js][18], used by many major cloud platforms and easy to install as part of a thin server or container on cloud infrastructure.

Key to the success of Node.js is the Npm package format, which allows you to quickly install extensions to the core Node.js service. These include frameworks like Express and Seneca, which help build scalable applications. A central registry handles package distribution, and dependencies are automatically installed.

While the [io.js][19] fork exposed issues with project governance, it also allowed a group of developers to push forward adding ECMAScript 6 support to an Npm-compatible engine. After reconciliation between the two teams, the Node.js and io.js codebases have been merged, with new releases now coming from the io.js code repository.

Other forks, like Microsoft’s io.js fork to add support for its 64-bit Chakra JavaScript engine alongside Google’s V8, are likely to be merged back into the main branch over the next year, keeping the Node.js platform evolving and cementing its role as the preferred host for cloud-scale microservices.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-seneca-100613998-orig.jpg)

### Seneca ###

The developers of the [Seneca][20] microservice framework have a motto: “Build it now, scale it later!” It’s an apt maxim for anyone thinking about developing microservices, as it allows you to start small, then add functionality as your service grows.

Seneca is at heart an implementation of the [actor/message design pattern][21], focused on using Node.js as a switching engine that takes in messages, processes their contents, and sends an appropriate response, either to the message originator or to another service. By focusing on the message patterns that map to business use cases, it’s relatively easy to take Seneca and quickly build a minimum viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to your services.

You can easily add new patterns to your codebase or break existing patterns into separate services as the needs of your application grow or change. One pattern can also call another, allowing quick code reuse. It’s also easy to add Seneca to a message bus, so you can use it as a framework for working with data from Internet of things devices, as all you need to do is define a listening port where JSON data is delivered.

Services may not be persistent, and Seneca gives you the option of using a built-in object relational mapping layer to handle data abstraction, with plug-ins for common databases.

-- Simon Bisson

![](http://images.techhive.com/images/article/2015/09/bossies-2015-netcore-aspnet-100613994-orig.jpg)

### .Net Core and ASP.Net vNext ###

Microsoft’s [open-sourcing of .Net][22] is bringing much of the company’s Web platform into the open. The new [.Net Core][23] release runs on Windows, on OS X, and on Linux. Currently migrating from Microsoft’s Codeplex repository to GitHub, .Net Core offers a more modular approach to .Net, allowing you to install the functions you need as you need them.

Currently under development is [ASP.Net 5][24], an open source version of the Web platform, which runs on .Net Core. You can work with it as the basis of Web apps using Microsoft’s MVC 6 framework. There’s also support for the new SignalR libraries, which add support for WebSockets and other real-time communications protocols.

If you’re planning on using Microsoft’s new Nano server, you’ll be writing code against .Net Core, as it’s designed for thin environments. The new DNX, the .Net Execution environment, simplifies deployment of ASP.Net applications on a wide range of platforms, with tools for packaging code and for booting a runtime on a host. Features are added using the NuGet package manager, letting you use only the libraries you want.

Microsoft’s open source .Net is still very young, but there’s a commitment in Redmond to ensure it’s successful. Support in Microsoft’s own next-generation server operating systems means it has a place in both the data center and the cloud.

-- Simon Bisson

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-glusterfs-100613989-orig.jpg)

### GlusterFS ###

[GlusterFS][25] is a distributed file system. Gluster aggregates various storage servers into one large parallel network file system. You can [even use it in place of HDFS in a Hadoop cluster][26] or in place of an expensive SAN system -- or both. While HDFS is great for Hadoop, having a general-purpose distributed file system that doesn’t require you to transfer data to another location to analyze it is a key advantage.

In an era of commoditized hardware, commoditized computing, and increased performance and latency requirements, buying a big, fat expensive EMC SAN and hoping it fits all of your needs (it won’t) is no longer your sole viable option. GlusterFS was acquired by Red Hat in 2011.

-- Andrew C. Oliver

![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100613992-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][27]

[Bossie Awards 2015: The best open source application development tools][28]

[Bossie Awards 2015: The best open source big data tools][29]

[Bossie Awards 2015: The best open source data center and cloud software][30]

[Bossie Awards 2015: The best open source desktop and mobile software][31]

[Bossie Awards 2015: The best open source networking and security software][32]

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2982923/open-source-tools/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.docker.com/docker-machine
[2]:https://www.docker.com/docker-swarm
[3]:https://www.docker.com/docker-compose
[4]:https://coreos.com/
[5]:http://rancher.com/rancher-os/
[6]:http://kubernetes.io/
[7]:https://mesos.apache.org/
[8]:https://github.com/joyent/sdc
[9]:https://smartos.org/
[10]:https://github.com/joyent/sdc-docker
[11]:https://sensuapp.org/
[12]:http://prometheus.io/
[13]:https://www.elastic.co/products/elasticsearch
[14]:https://www.elastic.co/products/logstash
[15]:https://www.elastic.co/products/kibana
[16]:http://www.ansible.com/home
[17]:https://jenkins-ci.org/
[18]:https://nodejs.org/en/
[19]:https://iojs.org/en/
[20]:http://senecajs.org/
[21]:http://www.infoworld.com/article/2976422/application-development/how-to-use-actors-in-distributed-applications.html
[22]:http://www.infoworld.com/article/2846450/microsoft-net/microsoft-open-sources-server-side-net-launches-visual-studio-2015-preview.html
[23]:https://dotnet.github.io/core/
[24]:http://www.asp.net/vnext
[25]:http://www.gluster.org/
[26]:http://www.gluster.org/community/documentation/index.php/Hadoop
[27]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[28]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[29]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[30]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[31]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[32]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html
Bossie Awards 2015: The best open source desktop and mobile software
================================================================================

InfoWorld's top picks in open source productivity tools, desktop utilities, and mobile apps

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-desktop-mobile-100614439-orig.jpg)

### The best open source desktop and mobile software ###

Open source on the desktop has a long and distinguished history, and many of our Bossie winners in this category go back many years. Packed with features and still improving, some of these tools offer compelling alternatives to pricey commercial software. Others are utilities that we lean on daily for one reason or another -- the can openers and potato peelers of desktop productivity. One or two of them either plug holes in Windows, or they go the distance where Windows falls short.

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-libreoffice-100614436-orig.jpg)

### LibreOffice ###

With the major release of version 5 in August, the Document Foundation’s [LibreOffice][1] offers a completely redesigned user interface, better compatibility with Microsoft Office (including good-but-not-great DOCX, XLSX, and PPTX file format support), and significant improvements to Calc, the spreadsheet application.

Set against a turbulent background, the LibreOffice effort split from OpenOffice.org in 2010. In 2011, Oracle announced it would no longer support OpenOffice.org, and handed the trademark to the Apache Software Foundation. Since then, it has become [increasingly clear][2] that LibreOffice is winning the race for developers, features, and users.

-- Woody Leonhard

![](http://images.techhive.com/images/article/2015/09/bossies-2015-firefox-100614426-orig.jpg)

### Firefox ###

In the battle of the big browsers, [Firefox][3] gets our vote over its longtime open source rival Chromium for two important reasons:

• **Memory use**. Chromium, like its commercial cousin Chrome, has a nasty propensity to glom onto massive amounts of memory.

• **Privacy**. Witness the [recent controversy][4] over Chromium automatically downloading a microphone snooping program to respond to “OK, Google.”

Firefox may not have the most features or the down-to-the-millisecond fastest rendering engine. But it’s solid, stingy with resources, highly extensible, and most of all, it comes with no strings attached. There’s no ulterior data-gathering motive.

-- Woody Leonhard

![](http://images.techhive.com/images/article/2015/09/bossies-2015-thunderbird-100614433-orig.jpg)

### Thunderbird ###

A longtime favorite email client, Mozilla’s [Thunderbird][5], may be getting a bit long in the tooth, but it’s still supported and showing signs of life. The latest version, 38.2, arrived in August, and there are plans for more development.

Mozilla officially pulled its people off the project back in July 2012, but a hardcore group of volunteers, led by Kent James and the all-volunteer Thunderbird Council, continues to toil away. While you won’t find the latest email innovations in Thunderbird, you will find a solid core of basic functions based on local storage. If having mail in the cloud spooks you, it’s a good, private alternative. And if James goes ahead with his idea of encrypting Thunderbird mail end-to-end, there may be significant new life in the old bird.

-- Woody Leonhard

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-notepad-100614432-orig.jpg)

### Notepad++ ###

If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. For Windows users who yearn for a little bit more in a text editor, there’s Don Ho’s [Notepad++][6], which is the editor I turn to, over and over again.

If Windows Notepad handles all of your text editing (and source code editing and HTML editing) needs, more power to ya. For Windows users who yearn for a little bit more in a text editor, there’s Don Ho’s [Notepad++][6], which is the editor I turn to, over and over again.

With tabbed views, drag-and-drop, color-coded hints for completing HTML commands, bookmarks, macro recording, shortcut keys, and every text encoding format you’re likely to encounter, Notepad++ takes text to a new level. We get frequent updates, too, with the latest in August.

-- Woody Leonhard

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-vlc-100614435-orig.jpg)

### VLC ###

The stalwart [VLC][7] (formerly known as VideoLAN Client) runs almost any kind of media file on almost any platform. Yes, it even works as a remote control on Apple Watch.

The tiled Universal app version for Windows 10, in the Windows Store, draws some criticism for instability and lack of control, but in most cases VLC works, and it works well -- without external codecs. It even supports Blu-ray formats with two new libraries.

The desktop version is a must-have for Windows 10, unless you’re ready to run the advertising gauntlets that are the Universal Groove Music and Movies & TV apps from Microsoft. VLC received a major [feature update][8] in February and a comprehensive bug fix in April.

-- Woody Leonhard

![](http://images.techhive.com/images/article/2015/09/bossies-2015-7-zip-100614429-orig.jpg)

### 7-Zip ###

Long recognized as the preeminent open source ZIP archive manager for Windows, [7-Zip][9] works like a champ, even on the Windows 10 desktop. Full coverage for RAR files, which can be problematic in Windows, combines with password-protected file creation and support for self-extracting ZIPs. It’s one of those programs that just works.

Yes, it would be nice to get a more modern file picker. Yes, it would be interesting to see a tiled Universal app version. But even without the fancy bells and whistles, 7-Zip deserves a place on every Windows desktop.

-- Woody Leonhard

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-handbrake-100614427-orig.jpg)

### Handbrake ###

If you want to convert your DVDs (or video files in any commonly used format) into a file in some other format, or simply scrape them off a silver coaster, [Handbrake][10] is the way to do it. If you’re a Windows user, Handbrake is almost indispensable, since Microsoft doesn’t believe in ripping DVDs.
Handbrake presents a number of handy presets for optimizing conversions for your target device (iPod, iPad, Android tablet, and so on). It’s simple, and it’s fast. With the latest round of bug fixes released in June, Handbrake’s keeping up on maintenance -- and it works fine on the Windows 10 desktop.

-- Woody Leonhard

![](http://images.techhive.com/images/article/2015/09/bossies-2015-keepass-100614430-orig.jpg)

### KeePass ###

I’ll confess that I almost gave up on [KeePass][11] because the primary download site goes to Sourceforge. That means you have to be extremely careful which boxes are checked and what you click on (and when) as you attempt to download and install the software. While KeePass itself is 100 percent clean open source (GNU GPL), Sourceforge doesn’t feel so constrained, and its [installers reek of crapware][12].

One of many local-file password storage programs, KeePass distinguishes itself with broad scope, as well as its ability to run on all sorts of platforms, no installation required. KeePass will save not only passwords, but also credit card information and freely structured information. It provides a strong random password generator, and the database itself is locked with AES and Twofish, so nobody’s going to crack it. And it’s kept up to date, with a new stable release last month.

-- Woody Leonhard

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-virtualbox-100614434-orig.jpg)

### VirtualBox ###

With a major release published in July, Oracle’s open source [VirtualBox][13] -- available for Windows, OS X, Linux, even Solaris -- continues to give commercial counterparts VMware Workstation, VMware Fusion, Parallels Desktop, and Microsoft’s Hyper-V a hard run for their money. The Oracle team is still getting the final Windows 10 bugs ironed out, but come to think of it, so is Microsoft.
|
||||
|
||||
VirtualBox doesn’t quite match the performance or polish of the VMware and Parallels products, but it’s getting closer. Version 5 brought long-awaited drag-and-drop support, making it easier to move files between VMs and host.
|
||||
|
||||
I prefer VirtualBox over Hyper-V because it’s easy to control external devices. In Hyper-V, for example, getting sound to work is a pain in the neck, but in VirtualBox it only takes a click in setup. The shared clipboard between VM and host works wonders. Running speed on both is roughly the same, with a slight advantage to Hyper-V. But managing VirtualBox machines is much easier.
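
Much of that management can also be scripted through VirtualBox's bundled `VBoxManage` command-line tool. A short sketch -- the VM name "devbox" is a placeholder for one of your registered machines:

```shell
# List registered VMs, then start one without opening a GUI window.
VBoxManage list vms
VBoxManage startvm "devbox" --type headless

# Check its runtime state, then power it off again.
VBoxManage showvminfo "devbox" | grep -i "^State"
VBoxManage controlvm "devbox" poweroff
```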

-- Woody Leonhard

![](http://images.techhive.com/images/article/2015/09/bossies-2015-inkscape-100614428-orig.jpg)

### Inkscape ###

If you stand in awe of the designs created with Adobe Illustrator (or even CorelDraw), take a close look at [Inkscape][14]. Scalable vector images never looked so good.

Version 0.91, released in January, uses a new internal graphics rendering engine called Cairo, sponsored by Google, to make the app run faster and allow for more accurate rendering. Inkscape will read and write SVG, PNG, PDF, even EPS, and many other formats. It can export Flash XML Graphics, HTML5 Canvas, and XAML, among others.

There’s a strong community around Inkscape, and it’s built for easy extensibility. It’s available for Windows, OS X, and Linux.
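
Those format conversions can also be driven from the command line, which is useful for batch exports. A sketch using the 0.91-era flags (Inkscape 1.x later replaced these with `--export-filename`); "drawing.svg" is a placeholder file name:

```shell
# Render an SVG to a 300-DPI PNG (flags as in Inkscape 0.91).
inkscape drawing.svg --export-png=drawing.png --export-dpi=300

# Export the same drawing to PDF instead.
inkscape drawing.svg --export-pdf=drawing.pdf
```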

-- Woody Leonhard

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-keepassdroid-100614431-orig.jpg)

### KeePassDroid ###

Trying to remember all of the passwords we need today is impossible, and creating new ones to meet stringent password policy requirements can be agonizing. A port of KeePass for Android, [KeePassDroid][15] brings sanity-preserving password management to mobile devices.

Like KeePass, KeePassDroid makes creating and accessing passwords easy, requiring you to recall only a single master password. It supports both AES and Twofish algorithms for encrypting all passwords, and it goes a step further by encrypting the entire password database, not only the password fields. Notes and other password-pertinent information are encrypted too.

While KeePassDroid's interface is minimal -- dated, some would say -- it gets the job done with bare-bones efficiency. Need to generate passwords that have certain character sets and lengths? KeePassDroid can do that with ease. With more than a million downloads on the Google Play Store, you could say this app definitely fills a need.

-- Victor R. Garza

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-prey-100615300-orig.jpg)

### Prey ###

Loss or theft of mobile devices is all too common these days. While there are many tools in the enterprise to manage and erase data either misplaced or stolen from an organization, [Prey][16] facilitates the recovery of the phone, laptop, or tablet, and not just the wiping of potentially sensitive information from the device.

Prey is a Web service that works with an open source installed agent for Linux, OS X, Windows, Android, and iOS devices. Prey tracks your lost or stolen device by using either the device's GPS, the native geolocation provided by newer operating systems, or an associated Wi-Fi hotspot to home in on the location.

If your smartphone is lost or stolen, send a text message to the device to activate Prey. For stolen tablets or laptops, use the Prey Project's cloud-based control panel to select the device as missing. The Prey agent on any device can then take a screenshot of the active applications, turn on the camera to catch a thief's image, reset the device to the factory settings, or fully lock down the device.

Should you want to retrieve your lost items, the Prey Project strongly suggests you contact your local police to have them assist you.

-- Victor R. Garza

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-orbot-100615299-orig.jpg)

### Orbot ###

The premier proxy application for Android, [Orbot][17] leverages the volunteer-operated network of virtual tunnels called Tor (The Onion Router) to keep all communications private. Orbot works with companion applications [Orweb][18] for secure Web browsing and [ChatSecure][19] for secure chat. In fact, any Android app that allows its proxy settings to be changed can be secured with Orbot.

One thing to remember about the Tor network is that it's designed for secure, lightweight communications, not for pulling down torrents or watching YouTube videos. Surfing media-rich sites like Facebook can be painfully slow. Your Orbot communications won't be blazing fast, but they will stay private and confidential.

-- Victor R. Garza

![](http://images.techhive.com/images/article/2015/09/bossies-2015-tails-100615301-orig.jpg)

### Tails ###

[Tails][20], or The Amnesic Incognito Live System, is a Linux Live OS that can be booted from a USB stick, DVD, or SD card. It’s often used covertly in the Deep Web to secure traffic when purchasing illicit substances, but it can also be used to avoid tracking, support freedom of speech, circumvent censorship, and promote liberty.

Leveraging Tor (The Onion Router), Tails keeps all communications secure and private and promises to leave no trace on any computer after it’s used. It performs disk encryption with LUKS, protects instant messages with OTR, encrypts Web traffic with the Tor Browser and HTTPS Everywhere, and securely deletes files via Nautilus Wipe. Tails even has an office suite, image editor, and the like.

Now, it's always possible to be traced while using any system if you're not careful, so be vigilant when using Tails and follow good privacy practices, like turning off JavaScript while using Tor. And be aware that Tails isn't necessarily going to be speedy, even while using a fiber connection, but that's the price you pay for anonymity.

-- Victor R. Garza

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-main-100614438-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][21]

[Bossie Awards 2015: The best open source application development tools][22]

[Bossie Awards 2015: The best open source big data tools][23]

[Bossie Awards 2015: The best open source data center and cloud software][24]

[Bossie Awards 2015: The best open source desktop and mobile software][25]

[Bossie Awards 2015: The best open source networking and security software][26]

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2982630/open-source-tools/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.libreoffice.org/download/libreoffice-fresh/
[2]:http://lwn.net/Articles/637735/
[3]:https://www.mozilla.org/en-US/firefox/new/
[4]:https://nakedsecurity.sophos.com/2015/06/24/not-ok-google-privacy-advocates-take-on-the-chromium-team-and-win/
[5]:https://www.mozilla.org/en-US/thunderbird/
[6]:https://notepad-plus-plus.org/
[7]:http://www.videolan.org/vlc/index.html
[8]:http://www.videolan.org/press/vlc-2.2.0.html
[9]:http://www.7-zip.org/
[10]:https://handbrake.fr/
[11]:http://keepass.info/
[12]:http://www.infoworld.com/article/2931753/open-source-software/sourceforge-the-end-cant-come-too-soon.html
[13]:https://www.virtualbox.org/
[14]:https://inkscape.org/en/download/windows/
[15]:http://www.keepassdroid.com/
[16]:http://preyproject.com/
[17]:https://www.torproject.org/docs/android.html.en
[18]:https://guardianproject.info/apps/orweb/
[19]:https://guardianproject.info/apps/chatsecure/
[20]:https://tails.boum.org/
[21]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[22]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[23]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[24]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[25]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[26]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

Bossie Awards 2015: The best open source networking and security software
================================================================================

InfoWorld's top picks of the year among open source tools for building, operating, and securing networks

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-net-sec-100614459-orig.jpg)

### The best open source networking and security software ###

BIND, Sendmail, OpenSSH, Cacti, Nagios, Snort -- open source software seems to have been invented for networks, and many of the oldies and goodies are still going strong. Among our top picks in the category this year, you'll find a mix of stalwarts, mainstays, newcomers, and upstarts perfecting the arts of network management, security monitoring, vulnerability assessment, rootkit detection, and much more.

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-icinga-100614482-orig.jpg)

### Icinga 2 ###

Icinga began life as a fork of system monitoring application Nagios. [Icinga 2][1] was completely rewritten to give users a modern interface, support for multiple databases, and an API to integrate numerous extensions. With out-of-the-box load balancing, notifications, and configuration, Icinga 2 shortens the time to installation for complex environments. Icinga 2 supports Graphite natively, giving administrators real-time performance graphing without any fuss. But what puts Icinga back on the radar this year is its release of Icinga Web 2, a graphical front end with drag-and-drop customizable dashboards and streamlined monitoring tools.

Administrators can view, filter, and prioritize problems, while keeping track of which actions have already been taken. A new matrix view lets administrators view hosts and services on one page. You can view events over a particular time period or filter incidents to understand which ones need immediate attention. Icinga Web 2 may boast a new interface and zippier performance, but all the usual commands from Icinga Classic and Icinga Web are still available. That means there is no downtime trying to learn a new version of the tool.

-- Fahmida Rashid

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-zenoss-100614465-orig.jpg)

### Zenoss Core ###

Another open source stalwart, [Zenoss Core][2] gives network administrators a complete, one-stop solution for tracking and managing all of the applications, servers, storage, networking components, virtualization tools, and other elements of an enterprise infrastructure. Administrators can make sure the hardware is running efficiently and take advantage of the modular design to plug in ZenPacks for extended functionality.

Zenoss Core 5, released in February of this year, takes the already powerful tool and improves it further, with an enhanced user interface and expanded dashboard. The Web-based console and dashboards were already highly customizable and dynamic, and the new version now lets administrators mash up multiple component charts onto a single chart. Think of it as the tool for better root cause and cause/effect analysis.

Portlets give additional insights for network mapping, device issues, daemon processes, production states, watch lists, and event views, to name a few. And new HTML5 charts can be exported outside the tool. The Zenoss Control Center allows out-of-band management and monitoring of all Zenoss components. Zenoss Core has new tools for online backup and restore, snapshots and rollbacks, and multihost deployment. Even more important, deployments are faster with full Docker support.

-- Fahmida Rashid

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-opennms-100614461-orig.jpg)

### OpenNMS ###

An extremely flexible network management solution, [OpenNMS][3] can handle any network management task, whether it's device management, application performance monitoring, inventory control, or events management. With IPv6 support, a robust alerts system, and the ability to record user scripts to test Web applications, OpenNMS has everything network administrators and testers need. A new mobile dashboard, called OpenNMS Compass, lets networking pros keep an eye on their network even when they're out and about.

The iOS version of the app, which is available on the [iTunes App Store][4], displays outages, nodes, and alarms. The next version will offer additional event details, resource graphs, and information about IP and SNMP interfaces. The Android version, available on [Google Play][5], displays network availability, outages, and alarms on the dashboard, as well as the ability to acknowledge, escalate, or clear alarms. The mobile clients are compatible with OpenNMS Horizon 1.12 or greater and OpenNMS Meridian 2015.1.0 or greater.

-- Fahmida Rashid

![](http://images.techhive.com/images/article/2015/09/bossies-2015-onion-100614460-orig.jpg)

### Security Onion ###

Like an onion, network security monitoring is made of many layers. No single tool will give you visibility into every attack or show you every reconnaissance or foot-printing session on your company network. [Security Onion][6] bundles scores of proven tools into one handy Ubuntu distro that will allow you to see who's inside your network and help keep the bad guys out.

Whether you're taking a proactive approach to network security monitoring or following up on a potential attack, Security Onion can assist. Consisting of sensor, server, and display layers, the Onion combines full network packet capture with network-based and host-based intrusion detection, and it serves up all of the various logs for inspection and analysis.

The star-studded network security toolchain includes Netsniff-NG for packet capture, Snort and Suricata for rules-based network intrusion detection, Bro for analysis-based network monitoring, OSSEC for host intrusion detection, and Sguil, Squert, Snorby, and ELSA (Enterprise Log Search and Archive) for display, analysis, and log management. It’s a carefully vetted collection of tools, all wrapped in a wizard-driven installer and backed by thorough documentation, that can help you get from zero to monitoring as fast as possible.

-- Victor R. Garza

![](http://images.techhive.com/images/article/2015/09/bossies-2015-kali-100614458-orig.jpg)

### Kali Linux ###

The team behind [Kali Linux][7] revamped the popular security Linux distribution this year to make it faster and even more versatile. Kali sports a new 4.0 kernel, improved hardware and wireless driver support, and a snappier interface. The most popular tools are easily accessible from a dock on the side of the screen. The biggest change? Kali Linux is now a rolling distribution, with a continuous stream of software updates. Kali's core system is based on Debian Jessie, and the team will pull packages continuously from Debian Testing, while continuing to add new Kali-flavored features on top.

The distribution still comes jam-packed with tools for penetration testing, vulnerability analysis, security forensics, Web application analysis, wireless networking and assessment, reverse engineering, and exploitation. Now the distribution has an upstream version checking system that will automatically notify users when updates are available for the individual tools. The distribution also features ARM images for a range of devices, including Raspberry Pi, Chromebook, and Odroids, as well as updates to the NetHunter penetration testing platform that runs on Android devices. There are other changes too: Metasploit Community/Pro is no longer included, because Kali 2.0 is not yet [officially supported by Rapid7][8].

-- Fahmida Rashid

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-openvas-100614462-orig.jpg)

### OpenVAS ###

[OpenVAS][9], the Open Vulnerability Assessment System, is a framework that combines multiple services and tools to offer vulnerability scanning and vulnerability management. The scanner is coupled with a weekly feed of network vulnerability tests, or you can use a feed from a commercial service. The framework includes a command-line interface (so it can be scripted) and an SSL-secured, browser-based interface via the [Greenbone Security Assistant][10]. OpenVAS accommodates various plug-ins for additional functionality. Scans can be scheduled or run on-demand.

Multiple OpenVAS installations can be controlled through a single master, which makes this a scalable vulnerability assessment tool for enterprises. The project is as compatible with standards as can be: Scan results and configurations are stored in a SQL database, where they can be accessed easily by external reporting tools. Client tools access the OpenVAS Manager via the XML-based stateless OpenVAS Management Protocol, so security administrators can extend the functionality of the framework. The software can be installed from packages or source code to run on Windows or Linux, or downloaded as a virtual appliance.

-- Matt Sarrel

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-owasp-100614463-orig.jpg)

### OWASP ###

[OWASP][11], the Open Web Application Security Project, is a nonprofit organization with worldwide chapters focused on improving software security. The community-driven organization provides test tools, documentation, training, and almost anything you could imagine that’s related to assessing software security and best practices for developing secure software. Several OWASP projects have become valuable components of many a security practitioner's toolkit:

[ZAP][12], the Zed Attack Proxy Project, is a penetration test tool for finding vulnerabilities in Web applications. One of the design goals of ZAP was to make it easy to use so that developers and functional testers who aren't security experts can benefit from using it. ZAP provides automated scanners and a set of manual test tools.

The Xenotix XSS Exploit Framework is an advanced cross-site scripting vulnerability detection and exploitation framework that runs scans within browser engines to get real-world results. The Xenotix Scanner Module uses three intelligent fuzzers, and it can run through nearly 5,000 distinct XSS payloads. An API lets security administrators extend and customize the exploit toolkit.

[O-Saft][13], or the OWASP SSL advanced forensic tool, is an SSL auditing tool that shows detailed information about SSL certificates and tests SSL connections. This command-line tool can run online or offline to assess SSL security such as ciphers and configurations. O-Saft provides built-in checks for common vulnerabilities, and you can easily extend these through scripting. In May 2015 a simple GUI was added as an optional download.

[OWTF][14], the Offensive Web Testing Framework, is an automated test tool that follows OWASP testing guidelines and the NIST and PTES standards. The framework uses both a Web UI and a CLI, and it probes Web and application servers for common vulnerabilities such as improper configuration and unpatched software.

-- Matt Sarrel

![](http://core0.staticworld.net/images/article/2015/09/bossies-2015-beef-100614456-orig.jpg)

### BeEF ###

The Web browser has become the most common vector for attacks against clients. [BeEF][15], the Browser Exploitation Framework Project, is a widely used penetration tool to assess Web browser security. BeEF helps you expose the security weaknesses of client systems using client-side attacks launched through the browser. BeEF sets up a malicious website, which security administrators visit from the browser they want to test. BeEF then sends commands to attack the Web browser and use it to plant software on the client machine. Administrators can then launch attacks on the client machine as if it were a zombie.

BeEF comes with commonly used modules like a key logger, a port scanner, and a Web proxy, plus you can write your own modules or send commands directly to the zombified test machine. BeEF comes with a handful of demo Web pages to help you get started and makes it very easy to write additional Web pages and attack modules so you can customize testing to your environment. BeEF is a valuable test tool for assessing browser and endpoint security and for learning how browser-based attacks are launched. Use it to put together a demo to show your users how malware typically infects client devices.

-- Matt Sarrel

![](http://images.techhive.com/images/article/2015/09/bossies-2015-unhide-100614464-orig.jpg)

### Unhide ###

[Unhide][16] is a forensic tool that locates open TCP/UDP ports and hidden processes on UNIX, Linux, and Windows. Hidden ports and processes can be the result of rootkit or LKM (loadable kernel module) activity. Rootkits can be difficult to find and remove because they are designed to be stealthy, hiding themselves from the OS and user. A rootkit can use LKMs to hide its processes or impersonate other processes, allowing it to run on machines undiscovered for a long time. Unhide can provide the assurance that administrators need to know their systems are clean.

Unhide is really two separate scripts: one for processes and one for ports. The tool interrogates running processes, threads, and open ports and compares this info to what's registered with the system as active, reporting discrepancies. Unhide and WinUnhide are extremely lightweight scripts that run from the command line to produce text output. They're not pretty, but they are extremely useful. Unhide is also included in the [Rootkit Hunter][17] project.
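
A quick sketch of running both scripts from a root shell; `proc`, `sys`, and `brute` are the standard test names Unhide accepts (the brute-force PID test can take a while):

```shell
# Compare /proc and system-call results against ps output, looking
# for processes hidden from the process table (run as root).
sudo unhide proc sys

# Look for TCP/UDP ports that are listening but hidden from netstat.
sudo unhide-tcp
```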

-- Matt Sarrel

![](http://images.techhive.com/images/article/2015/09/bossies-2015-main-100614457-orig.jpg)

### Read about more open source winners ###

InfoWorld's Best of Open Source Awards for 2015 celebrate more than 100 open source projects, from the bottom of the stack to the top. Follow these links to more open source winners:

[Bossie Awards 2015: The best open source applications][18]

[Bossie Awards 2015: The best open source application development tools][19]

[Bossie Awards 2015: The best open source big data tools][20]

[Bossie Awards 2015: The best open source data center and cloud software][21]

[Bossie Awards 2015: The best open source desktop and mobile software][22]

[Bossie Awards 2015: The best open source networking and security software][23]

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2982962/open-source-tools/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

作者:[InfoWorld staff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/InfoWorld-staff/
[1]:https://www.icinga.org/icinga/icinga-2/
[2]:http://www.zenoss.com/
[3]:http://www.opennms.org/
[4]:https://itunes.apple.com/us/app/opennms-compass/id968875097?mt=8
[5]:https://play.google.com/store/apps/details?id=com.opennms.compass&hl=en
[6]:http://blog.securityonion.net/p/securityonion.html
[7]:https://www.kali.org/
[8]:https://community.rapid7.com/community/metasploit/blog/2015/08/12/metasploit-on-kali-linux-20
[9]:http://www.openvas.org/
[10]:http://www.greenbone.net/
[11]:https://www.owasp.org/index.php/Main_Page
[12]:https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
[13]:https://www.owasp.org/index.php/O-Saft
[14]:https://www.owasp.org/index.php/OWASP_OWTF
[15]:http://www.beefproject.com/
[16]:http://www.unhide-forensics.info/
[17]:http://www.rootkit.nl/projects/rootkit_hunter.html
[18]:http://www.infoworld.com/article/2982622/bossie-awards-2015-the-best-open-source-applications.html
[19]:http://www.infoworld.com/article/2982920/bossie-awards-2015-the-best-open-source-application-development-tools.html
[20]:http://www.infoworld.com/article/2982429/bossie-awards-2015-the-best-open-source-big-data-tools.html
[21]:http://www.infoworld.com/article/2982923/bossie-awards-2015-the-best-open-source-data-center-and-cloud-software.html
[22]:http://www.infoworld.com/article/2982630/bossie-awards-2015-the-best-open-source-desktop-and-mobile-software.html
[23]:http://www.infoworld.com/article/2982962/bossie-awards-2015-the-best-open-source-networking-and-security-software.html

translation by strugglingyouth

80 Linux Monitoring Tools for SysAdmins
================================================================================

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg)

The industry is hotting up at the moment, and there are more tools than you can shake a stick at. Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to monitor your machines. Within this article we outline:

- Command line tools
- Network related
- System related monitoring
- Log monitoring tools
- Infrastructure monitoring tools

It’s hard work monitoring and debugging performance problems, but it’s easier with the right tools at the right time. Here are some tools you’ve probably heard of, some you probably haven’t – and when to use them:

### Top 10 System Monitoring Tools ###

#### 1. Top ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg)

This is a small tool which is pre-installed on many Unix systems. When you want an overview of all the processes or threads running in the system, top is a good tool. You can order these processes by different criteria; the default criterion is CPU usage.
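
Beyond the interactive view, top's batch mode is handy for scripts and logging. A minimal sketch (the `-o` sort flag is available in procps-ng top, but not every older build):

```shell
# One-shot, non-interactive snapshot of the busiest processes,
# sorted by CPU (the default) -- useful in scripts and cron jobs.
top -b -n 1 | head -n 15

# Sort by memory instead of CPU (procps-ng top).
top -b -n 1 -o %MEM | head -n 15
```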

#### 2. [htop][1] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg)

Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand, and it has built-in commands for common things you would like to do. Plus it’s fully interactive.

#### 3. [atop][2] ####

Atop monitors all processes much like top and htop; unlike top and htop, however, it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load.

#### 4. [apachetop][3] ####

Apachetop monitors the overall performance of your Apache webserver. It’s largely based on mytop. It displays the current number of reads and writes and the overall number of requests processed.

#### 5. [ftptop][4] ####

ftptop gives you basic information about all the current FTP connections to your server, such as the total number of sessions, how many are uploading and downloading, and who the client is.

#### 6. [mytop][5] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg)

mytop is a neat tool for monitoring threads and performance of MySQL. It gives you a live look into the database and what queries it’s processing in real time.

#### 7. [powertop][6] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg)

powertop helps you diagnose issues that have to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key.

#### 8. [iotop][7] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg)

iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O.
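
Like top, iotop also has a batch mode for non-interactive use. A short sketch (iotop generally needs root to read per-process I/O accounting):

```shell
# Print one I/O snapshot and exit.
sudo iotop -b -n 1

# Only show processes or threads actually doing I/O right now.
sudo iotop -b -n 1 -o
```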

### Network related monitoring ###

#### 9. [ntopng][8] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg)

ntopng is the next generation of ntop; the tool provides a graphical user interface via the browser for network monitoring. It can do things such as geolocate hosts, capture network traffic, and show and analyze IP traffic distribution.

#### 10. [iftop][9] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg)

iftop is similar to top, but instead of mainly checking for CPU usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”.
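
A quick sketch of typical invocations; iftop needs root to capture packets, and "eth0" is a placeholder interface name:

```shell
# Watch bandwidth on a specific interface, without DNS lookups.
sudo iftop -i eth0 -n

# Text mode: print summary reports for a few seconds and exit,
# which works over plain SSH sessions and in logs.
sudo iftop -i eth0 -t -s 4
```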
|
||||
|
||||
#### 11. [jnettop][10] ####
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg)
|
||||
|
||||
jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis.
|
||||
|
||||
12. [bandwidthd][11]
|
||||
|
||||
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg)
|
||||
|
||||
BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports.
|
||||
|
||||
#### 13. [EtherApe][12] ####

EtherApe displays network traffic graphically; the more talkative a node is, the bigger it appears. It either captures live traffic or can read it from a tcpdump. The display can also be refined using a network filter with pcap syntax.

#### 14. [ethtool][13] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg)

ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices.

#### 15. [NetHogs][14] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg)

NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it.

#### 16. [iptraf][15] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg)

iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts.

#### 17. [ngrep][16] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg)

ngrep is grep, but for the network layer. It’s pcap aware and allows you to specify extended regular or hexadecimal expressions to match against the data payloads of packets.

#### 18. [MRTG][17] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg)

MRTG was originally developed to monitor router traffic, but now it’s able to monitor other network related things as well. It typically collects data every five minutes and then generates an HTML page. It also has the capability of sending warning emails.

#### 19. [bmon][18] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg)

Bmon monitors and helps you debug networks. It captures network related statistics and presents them in a human friendly way. You can also interact with bmon through curses or through scripting.

#### 20. traceroute ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg)

Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network.

#### 21. [IPTState][19] ####

IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table.

#### 22. [darkstat][20] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg)

Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and give you a nice graphical user interface for the graphs.

#### 23. [vnStat][21] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg)

vnStat is a network traffic monitor that uses statistics provided by the kernel, which ensures light use of system resources. The gathered statistics persist through system reboots. It has color options for the artistic sysadmins.

#### 24. netstat ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg)

Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interface statistics. It’s used to find problems in the network.

#### 25. ss ####

Instead of using netstat, it’s preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want summary statistics you can use the command `ss -s`.
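
As a quick sketch of how this looks in practice (assuming the iproute2 package is installed, as it is on most modern distributions):

```shell
# Per-protocol socket counts (TCP, UDP, RAW, etc.)
ss -s

# Listening TCP sockets with numeric addresses and ports
ss -tln
```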

#### 26. [nmap][22] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg)

Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use this for SQL injection vulnerabilities, network discovery and other means related to penetration testing.

#### 27. [MTR][23] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg)

MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it limits the number of hops individual packets have to travel while also listening to the responses as they expire. It then repeats this every second.

#### 28. [Tcpdump][24] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg)

Tcpdump outputs a description of the contents of each captured packet that matches the expression you provided in the command. You can also save this data for further analysis.

#### 29. [Justniffer][25] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg)

Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in a customizable way. You could for instance mimic the access log that apache has.

### System related monitoring ###

#### 30. [nmon][26] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg)

nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems and top processes. The data can also be added to a RRD database for further analysis.

#### 31. [conky][27] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg)

Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua.

#### 32. [Glances][28] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg)

Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface.

#### 33. [saidar][29] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg)

Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible.

#### 34. [RRDtool][30] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg)

RRDtool is a tool developed to handle round-robin databases, or RRDs. RRDs aim to handle time-series data like CPU load, temperatures and so on. This tool provides a way to extract RRD data in a graphical format.

#### 35. [monit][31] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg)

Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes.

#### 36. [Linux process explorer][32] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg)

Linux process explorer is akin to the activity monitor for OSX or the Windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory or CPU it uses.

#### 37. df ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg)

df is an abbreviation for disk free and is a pre-installed program on all unix systems, used to display the amount of available disk space for filesystems which the user has access to.
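
A minimal example of how it is typically invoked:

```shell
# Human-readable sizes for all mounted filesystems
df -h

# Limit the report to the filesystem holding the root directory
df -h /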

#### 38. [discus][33] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg)

Discus is similar to df, however it aims to improve on df by making it prettier, using fancy features such as colors, graphs and smart formatting of numbers.

#### 39. [xosview][34] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg)

xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the system, including IRQ.

#### 40. [Dstat][35] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg)

Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind.

#### 41. [Net-SNMP][36] ####

SNMP stands for ‘simple network management protocol’ and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol.

#### 42. [incron][37] ####

Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’ that’s exactly what incron does.

#### 43. [monitorix][38] ####

Monitorix is a lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism of all metrics.

#### 44. vmstat ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg)

vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine.

#### 45. uptime ####

This small command quickly gives you information about how long the machine has been running, how many users currently are logged on and the system load average for the past 1, 5 and 15 minutes.
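
It takes no arguments for its basic use:

```shell
# Prints uptime, logged-in user count and the three load averages
uptime
```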

#### 46. mpstat ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg)

mpstat is a built-in tool that monitors cpu usage. The most common command is using `mpstat -P ALL` which gives you the usage of all the cores. You can also get an interval update of the CPU usage.

#### 47. pmap ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg)

pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks.

#### 48. ps ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg)

The ps command gives you an overview of all the current processes. You can easily select all processes using the command `ps -A`.
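
Two common invocations, as a quick sketch:

```shell
# Every process on the system in the default short format
ps -A

# BSD-style listing with owner, CPU and memory columns
ps aux
```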

#### 49. [sar][39] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg)

sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things.

#### 50. [collectl][40] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg)

Similar to sar, collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference from sar is that collectl can sample at intervals below one second, it can be fed into a plotting tool directly, and it monitors processes more extensively.

#### 51. [iostat][41] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg)

iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine.

#### 52. free ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg)

This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment.
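
For example (the `-m` flag, reporting in megabytes, is standard in the procps version of free found on Linux):

```shell
# Total, used and free memory plus buffers/cache, in megabytes
free -m
```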

#### 53. /proc file system ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg)

The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42].
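
Since /proc entries are plain files, ordinary tools read them directly; for instance:

```shell
# The three load averages plus run-queue and last-PID fields
cat /proc/loadavg

# Seconds since boot and aggregate idle time
cat /proc/uptime
```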

#### 54. [GKrellM][43] ####

GKrellM is a gui application that monitors the status of your hardware, such as the CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice.

#### 55. [Gnome system monitor][44] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg)

Gnome system monitor is a basic system monitoring tool that lets you view process dependencies in a tree view, kill or renice processes, and graph all server metrics.

### Log monitoring tools ###

#### 56. [GoAccess][45] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg)

GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things.

#### 57. [Logwatch][46] ####

Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine.

#### 58. [Swatch][47] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg)

Much like Logwatch, Swatch also monitors your logs, but instead of giving reports it watches for regular expression matches and notifies you via mail or the console when there is a match. It could be used for intruder detection, for example.

#### 59. [MultiTail][48] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg)

MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading, with the help of regular expressions.

### System tools ###

#### 60. [acct or psacct][49] ####

acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a user executes inside the system, including CPU and memory time. Once installed you get that summary with the command ‘sa’.

#### 61. [whowatch][50] ####

Similar to acct, this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes so you can see exactly what’s happening.

#### 62. [strace][51] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg)

strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is making strace print a list of system calls made by the program, which is useful if the program does not behave as expected.

#### 63. [DTrace][52] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg)

DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the faint of heart, as there is a 1,200-page book written on the topic.

#### 64. [webmin][53] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg)

Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it.

#### 65. stat ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg)

Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed.
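
For instance, pointing it at a file that exists on virtually every system:

```shell
# Size, permissions, ownership and access/modify/change timestamps
stat /etc/hosts
```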

#### 66. ifconfig ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg)

ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes network monitor tools use ifconfig to set it into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`.

#### 67. [ulimit][54] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg)

ulimit is a built-in tool that monitors system resources and imposes limits so that none of the monitored resources go overboard. For instance, a fork bomb on a system with a properly configured ulimit would be contained without any trouble.
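
A short sketch of inspecting and setting limits from the shell (the value in the commented line is purely illustrative):

```shell
# Show all current resource limits for this shell session
ulimit -a

# Example: cap the number of user processes at 512 for this shell
# ulimit -u 512
```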

#### 68. [cpulimit][55] ####

CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful to make batch jobs not eat up too many CPU cycles.

#### 69. lshw ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg)

lshw is a small built-in tool to extract detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration.

#### 70. w ####

W is a built-in command that displays information about the users currently using the machine and their processes.

#### 71. lsof ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg)

lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user, or perhaps kill all processes that belong to a specific user.

### Infrastructure monitoring tools ###

#### 72. Server Density ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png)

Our [server monitoring tool][56]! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites, whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or API. The service already supports Nagios plugins.

#### 73. [OpenNMS][57] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg)

OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring; and data collection. It’s designed to be customizable to work in a variety of network environments.

#### 74. [SysUsage][58] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg)

SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alarm you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats.

#### 75. [brainypdm][59] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg)

brainypdm is a data management and monitoring tool that has the capability to gather data from nagios or another generic source to make graphs. It’s cross-platform, has custom graphs and is web based.

#### 76. [PCP][60] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg)

PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems.

#### 77. [KDE system guard][61] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg)

This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet and if a process needs to be killed or if you need to start a process it can be done within KDE system guard.

#### 78. [Munin][62] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg)

Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display these graphs. Its emphasis is on plug and play capabilities with a number of plugins available.

#### 79. [Nagios][63] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg)

Nagios is a system and network monitoring tool that helps you monitor your many servers. It has support for alerting when things go wrong. It also has many plugins written for the platform.

#### 80. [Zenoss][64] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg)

Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins.

#### 81. [Cacti][65] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg)

(And one for luck!) Cacti is a network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the result. Cacti can be extended to monitor a source of your choice through shell scripts.

#### 82. [Zabbix][66] ####

![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png)

Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The core is written in C and it has a frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you.

### Bonus section ###

Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back through and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you:

#### 83. [collectd][67] ####

Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible.

#### 84. [Observium][68] ####

Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network.

#### 85. Nload ####

It’s a command line tool that monitors network throughput. It’s neat because it visualizes the incoming and outgoing traffic using two graphs and some additional useful data like the total amount of transferred data. You can install it with

    yum install nload

or

    sudo apt-get install nload

#### 86. [SmokePing][69] ####

SmokePing keeps track of the network latencies of your network and visualises them too. There is a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you, there is ongoing development to make that happen.

#### 87. [MobaXterm][70] ####

If you’re working in a Windows environment day in and day out, you may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux, which will help you tremendously with your monitoring needs!

#### 88. [Shinken monitoring][71] ####

Shinken is a monitoring framework which is a total rewrite of Nagios in python. It aims to enhance flexibility and the management of large environments, while still keeping all your nagios configuration and plugins.

--------------------------------------------------------------------------------

via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/

作者:[Jonathan Sundqvist][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.serverdensity.com/
[1]:http://hisham.hm/htop/
[2]:http://www.atoptool.nl/
[3]:https://github.com/JeremyJones/Apachetop
[4]:http://www.proftpd.org/docs/howto/Scoreboard.html
[5]:http://jeremy.zawodny.com/mysql/mytop/
[6]:https://01.org/powertop
[7]:http://guichaz.free.fr/iotop/
[8]:http://www.ntop.org/products/ntop/
[9]:http://www.ex-parrot.com/pdw/iftop/
[10]:http://jnettop.kubs.info/wiki/
[11]:http://bandwidthd.sourceforge.net/
[12]:http://etherape.sourceforge.net/
[13]:https://www.kernel.org/pub/software/network/ethtool/
[14]:http://nethogs.sourceforge.net/
[15]:http://iptraf.seul.org/
[16]:http://ngrep.sourceforge.net/
[17]:http://oss.oetiker.ch/mrtg/
[18]:https://github.com/tgraf/bmon/
[19]:http://www.phildev.net/iptstate/index.shtml
[20]:https://unix4lyfe.org/darkstat/
[21]:http://humdi.net/vnstat/
[22]:http://nmap.org/
[23]:http://www.bitwizard.nl/mtr/
[24]:http://www.tcpdump.org/
[25]:http://justniffer.sourceforge.net/
[26]:http://nmon.sourceforge.net/pmwiki.php
[27]:http://conky.sourceforge.net/
[28]:https://github.com/nicolargo/glances
[29]:https://packages.debian.org/sid/utils/saidar
[30]:http://oss.oetiker.ch/rrdtool/
[31]:http://mmonit.com/monit
[32]:http://sourceforge.net/projects/procexp/
[33]:http://packages.ubuntu.com/lucid/utils/discus
[34]:http://www.pogo.org.uk/~mark/xosview/
[35]:http://dag.wiee.rs/home-made/dstat/
[36]:http://www.net-snmp.org/
[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en
[38]:http://www.monitorix.org/
[39]:http://sebastien.godard.pagesperso-orange.fr/
[40]:http://collectl.sourceforge.net/
[41]:http://sebastien.godard.pagesperso-orange.fr/
[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html
[44]:http://freecode.com/projects/gnome-system-monitor
[45]:http://goaccess.io/
[46]:http://sourceforge.net/projects/logwatch/
[47]:http://sourceforge.net/projects/swatch/
[48]:http://www.vanheusden.com/multitail/
[49]:http://www.gnu.org/software/acct/
[50]:http://whowatch.sourceforge.net/
[51]:http://sourceforge.net/projects/strace/
[52]:http://dtrace.org/blogs/about/
[53]:http://www.webmin.com/
[54]:http://ss64.com/bash/ulimit.html
[55]:https://github.com/opsengine/cpulimit
[56]:https://www.serverdensity.com/server-monitoring/
[57]:http://www.opennms.org/
[58]:http://sysusage.darold.net/
[59]:http://sourceforge.net/projects/brainypdm/
[60]:http://www.pcp.io/
[61]:https://userbase.kde.org/KSysGuard
[62]:http://munin-monitoring.org/
[63]:http://www.nagios.org/
[64]:http://www.zenoss.com/
[65]:http://www.cacti.net/
[66]:http://www.zabbix.com/
[67]:https://collectd.org/
[68]:http://www.observium.org/
[69]:http://oss.oetiker.ch/smokeping/
[70]:http://mobaxterm.mobatek.net/
[71]:http://www.shinken-monitoring.org/

Optimize Web Delivery with these Open Source Tools
|
||||
================================================================================
|
||||
Web proxy software forwards HTTP requests without modifying traffic in any way. They can be configured as a transparent proxy with no client-side configuration required. They can also be used as a reverse proxy front-end to websites; here the cache serves an unlimited number of clients for one or some web servers.
|
||||
|
||||
Web proxies are versatile tools. They have a wide variety of uses, from caching web, DNS and other lookups, to speeding up the delivery of a web server / reducing bandwidth consumption. Web proxy software can also harden security by filtering traffic and anonymizing connections, and offer media-range limitations. This software is used by high-profile, high-traffic websites such as The New York Times, The Guardian, and social media and content sites such as Twitter, Facebook, and Wikipedia.
|
||||
|
||||
Web caches have become a vital mechanism for optimising the amount of data that is delivered in a given period of time. Good web caches also help to minimise latency, serving pages as quickly as possible. This helps to prevent the end user from becoming impatient having to wait for content to be delivered. Web caches optimise the data flow between client and server. They also help to converse bandwidth by caching frequently-delivered content. If you need to reduce server load and improve delivery speed of your content, it is definitely worth exploring the benefits offered by web cache software.
|
||||
|
||||
To provide an insight into the quality of software available for Linux, I feature below 5 excellent open source web proxy tools. Some of them are full-featured; a couple of them have very modest resource needs.
|
||||
|
||||
### Squid ###
|
||||
|
||||
Squid is a high-performance open source proxy caching server and web cache daemon. It supports FTP, Internet Gopher, HTTPS, TLS, and SSL. It handles all requests in a single, non-blocking, I/O-driven process over IPv4 or IPv6.
|
||||
|
||||
Squid consists of a main server program squid, a Domain Name System lookup program dnsserver, some optional programs for rewriting requests and performing authentication, together with some management and client tools.
|
||||
|
||||
Squid offers a rich access control, authorization and logging environment to develop web proxy and content serving applications.
|
||||
|
||||
Features include:
|
||||
|
||||
- Web proxy:
|
||||
- Caching to reduce access time and bandwidth use
|
||||
- Keeps meta data and especially hot objects cached in RAM
|
||||
- Caches DNS lookups
|
||||
- Supports non-blocking DNS lookups
|
||||
- Implements negative caching of failed requests
|
||||
- Squid caches can be arranged in a hierarchy or mesh for additional bandwidth savings
|
||||
- Enforce site-usage policies with extensive access controls
|
||||
- Anonymize connections, such as disabling or changing specific header fields in a client's HTTP request
|
||||
- Reverse proxy
|
||||
- Media-range limitations
|
||||
- Supports SSL
|
||||
- Support for IPv6
|
||||
- Error Page Localization - error pages presented by Squid may now be localized per-request to match the visitor's local preferred language
|
||||
- Connection Pinning (for NTLM Auth Passthrough) - a workaround which permits Web servers to use Microsoft NTLM Authentication instead of HTTP standard authentication through a web proxy
|
||||
- Quality of Service (QoS) Flow support
|
||||
- Select a TOS/Diffserv value to mark local hits
|
||||
- Select a TOS/Diffserv value to mark peer hits
|
||||
- Selectively mark only sibling or parent requests
|
||||
- Allows any HTTP response towards clients to have the TOS value of the response coming from the remote server preserved
|
||||
- Mask certain bits in the TOS received from the remote server, before copying the value to the TOS send towards clients
|
||||
- SSL Bump (for HTTPS Filtering and Adaptation) - Squid-in-the-middle decryption and encryption of CONNECT tunneled SSL traffic, using configurable client- and server-side certificates
|
||||
- eCAP Adaptation Module support
|
||||
- ICAP Bypass and Retry enhancements - ICAP is now extended with full bypass and dynamic chain routing to handle multiple adaptation services.
|
||||
- ICY streaming protocol support - commonly known as SHOUTcast multimedia streams
|
||||
- Dynamic SSL Certificate Generation
|
||||
- Support for the Internet Content Adaptation Protocol (ICAP)
|
||||
- Full request logging
|
||||
- Anonymize connections
|
||||
|
||||
- Website: [www.squid-cache.org][1]
|
||||
- Developer: National Laboratory for Applied Networking Research (NLANR) and Internet volunteers
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 4.0.1
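The caching and access-control behaviour described above is driven by `squid.conf`. A minimal sketch is shown below; the cache sizes, paths and the `localnet` subnet are illustrative assumptions, not shipped defaults:

```conf
# /etc/squid/squid.conf -- minimal caching-proxy sketch (illustrative values)
http_port 3128                              # port clients connect to
cache_mem 256 MB                            # keep hot objects cached in RAM
cache_dir ufs /var/spool/squid 1024 16 256  # 1 GB on-disk cache

# Enforce a site-usage policy with ACLs: allow only the local subnet
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```

Point a browser at port 3128 (or run `squid -k parse` to syntax-check the file) to try the configuration.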
|
||||
|
||||
### Privoxy ###
|
||||
|
||||
Privoxy (Privacy Enhancing Proxy) is a non-caching Web proxy with advanced filtering capabilities for enhancing privacy, modifying web page data and HTTP headers, controlling access, and removing ads and other obnoxious Internet junk. Privoxy has a flexible configuration and can be customized to suit individual needs and tastes. It supports both stand-alone systems and multi-user networks.
|
||||
|
||||
Privoxy uses the concept of actions in order to manipulate the data stream between the browser and remote sites.
|
||||
|
||||
Features include:
|
||||
|
||||
- Highly configurable - completely personalize your installation
|
||||
- Ad blocking
|
||||
- Cookie management
|
||||
- Supports "Connection: keep-alive". Outgoing connections can be kept alive independently from the client
|
||||
- Supports IPv6
|
||||
- Tagging, which allows changing the behaviour based on client and server headers
|
||||
- Run as an "intercepting" proxy
|
||||
- Sophisticated actions and filters for manipulating both server and client headers
|
||||
- Can be chained with other proxies
|
||||
- Integrated browser-based configuration and control utility. Browser-based tracing of rule and filter effects. Remote toggling
|
||||
- Web page filtering (text replacements, removes banners based on size, invisible "web-bugs" and HTML annoyances, etc)
|
||||
- Modularized configuration that allows for standard settings and user settings to reside in separate files, so that installing updated actions files won't overwrite individual user settings
|
||||
- Support for Perl Compatible Regular Expressions in the configuration files, and a more sophisticated and flexible configuration syntax
|
||||
- GIF de-animation
|
||||
- Bypass many click-tracking scripts (avoids script redirection)
|
||||
- User-customizable HTML templates for most proxy-generated pages (e.g. "blocked" page)
|
||||
- Auto-detection and re-reading of config file changes
|
||||
- Most features are controllable on a per-site or per-location basis
|
||||
|
||||
- Website: [www.privoxy.org][2]
|
||||
- Developer: Fabian Keil (lead developer), David Schmidt, and many other contributors
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 3.4.2
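Privoxy's "actions" are configured in actions files such as `user.action`, where a block of actions applies to the URL patterns that follow it. A small sketch, with hypothetical domain names, assuming the stock actions-file syntax:

```conf
# user.action -- illustrative per-site actions (sketch)

# Block requests to these ad servers
{ +block{Ad server.} }
.doubleclick.net
ads.example.com

# Disable filtering and blocking entirely for a trusted site
{ -filter -block }
.example.org
```

Changes to the actions files are picked up automatically thanks to Privoxy's auto-detection and re-reading of configuration changes.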
|
||||
|
||||
### Varnish Cache ###
|
||||
|
||||
Varnish Cache is a web accelerator written with performance and flexibility in mind. Its modern architecture offers significantly better performance. It typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture. Varnish stores web pages in memory so the web servers do not have to create the same web page repeatedly. The web server only recreates a page when it is changed. Serving content from memory is a lot faster than regenerating it.
|
||||
|
||||
Additionally, Varnish can serve web pages much faster than any application server can, giving the website a significant speed enhancement.
|
||||
|
||||
For a cost-effective configuration, Varnish Cache uses between 1-16GB of RAM and an SSD disk.
|
||||
|
||||
Features include:
|
||||
|
||||
- Modern design
|
||||
- VCL - a very flexible configuration language. The VCL configuration is translated to C, compiled, loaded and executed giving flexibility and speed
|
||||
- Load balancing using both a round-robin and a random director, both with a per-backend weighting
|
||||
- DNS, Random, Hashing and Client IP based Directors
|
||||
- Load balance between multiple backends
|
||||
- Support for Edge Side Includes including stitching together compressed ESI fragments
|
||||
- Heavily threaded
|
||||
- URL rewriting
|
||||
- Cache multiple vhosts with a single Varnish
|
||||
- Log data is stored in shared memory
|
||||
- Basic health-checking of backends
|
||||
- Graceful handling of "dead" backends
|
||||
- Administered by a command line interface
|
||||
- Use In-line C to extend Varnish
|
||||
- Can be used on the same system as Apache
|
||||
- Run multiple Varnish on the same system
|
||||
- Support for HAProxy's PROXY protocol. This protocol adds a small header to each incoming TCP connection that describes who the real client is, added by (for example) an SSL terminating process
|
||||
- Warm and cold VCL states
|
||||
- Plugin support with Varnish Modules, called VMODs
|
||||
- Backends defined through VMODs
|
||||
- Gzip Compression and Decompression
|
||||
- HTTP Streaming Pass & Fetch
|
||||
- Saint and Grace mode. Saint Mode allows for unhealthy backends to be blacklisted for a period of time, preventing them from serving traffic when using Varnish as a load balancer. Grace mode allows Varnish to serve an expired version of a page or other asset in cases where Varnish is unable to retrieve a healthy response from the backend
|
||||
- Experimental support for Persistent Storage, without LRU eviction
|
||||
|
||||
- Website: [www.varnish-cache.org][3]
|
||||
- Developer: Varnish Software
|
||||
- License: FreeBSD
|
||||
- Version Number: 4.1.0
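The VCL language mentioned above is how Varnish is configured: the VCL file is translated to C and compiled. A minimal sketch for VCL 4.0 follows; the backend address and the 5-minute TTL are assumptions for illustration:

```vcl
# default.vcl -- minimal VCL 4.0 sketch (backend address is an assumption)
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # origin web server that Varnish accelerates
    .port = "8080";
}

sub vcl_backend_response {
    set beresp.ttl = 5m;   # keep fetched objects in the cache for five minutes
}
```

Because VCL is compiled rather than interpreted, even complex routing and caching policies add very little per-request overhead.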
|
||||
|
||||
### Polipo ###
|
||||
|
||||
Polipo is an open source caching HTTP proxy which has modest resource needs.
|
||||
|
||||
It listens to requests for web pages from your browser and forwards them to web servers, and forwards the servers’ replies to your browser. In the process, it optimises and cleans up the network traffic. It is similar in spirit to WWWOFFLE, but the implementation techniques are more like the ones used by Squid.
|
||||
|
||||
Polipo aims at being a compliant HTTP/1.1 proxy. It should work with any web site that complies with either HTTP/1.1 or the older HTTP/1.0.
|
||||
|
||||
Features include:
|
||||
|
||||
- HTTP 1.1, IPv4 & IPv6, traffic filtering and privacy-enhancement
|
||||
- Uses HTTP/1.1 pipelining if it believes that the remote server supports it, whether the incoming requests are pipelined or come in simultaneously on multiple connections
|
||||
- Cache the initial segment of an instance if the download has been interrupted, and, if necessary, complete it later using Range requests
|
||||
- Upgrade client requests to HTTP/1.1 even if they come in as HTTP/1.0, and up- or downgrade server replies to the client's capabilities
|
||||
- Complete support for IPv6 (except for scoped (link-local) addresses)
|
||||
- Use as a bridge between the IPv4 and IPv6 Internets
|
||||
- Content-filtering
|
||||
- Can use a technique known as Poor Man's Multiplexing to reduce latency
|
||||
- SOCKS 4 and SOCKS 5 protocol support
|
||||
- HTTPS proxying
|
||||
- Behaves as a transparent proxy
|
||||
- Run Polipo together with Privoxy or tor
|
||||
|
||||
- Website: [www.pps.univ-paris-diderot.fr/~jch/software/polipo/][4]
|
||||
- Developer: Juliusz Chroboczek, Christopher Davis
|
||||
- License: MIT License
|
||||
- Version Number: 1.1.1
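Polipo is configured through a small key-value file (`~/.polipo` or `/etc/polipo/config`). The sketch below shows a loopback-only setup, with an optional line chaining outbound traffic through Tor's SOCKS port, as the feature list suggests; the port numbers are conventional, not mandatory:

```conf
# ~/.polipo -- illustrative sketch
proxyAddress = "127.0.0.1"            # listen only on the loopback interface
proxyPort = 8123                      # Polipo's conventional listen port
cacheIsShared = false                 # single-user cache

# Optional: chain through a local Tor instance via SOCKS 5
socksParentProxy = "localhost:9050"
socksProxyType = socks5
```

With this in place, pointing a browser's HTTP proxy setting at `127.0.0.1:8123` routes and caches its traffic through Polipo.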
|
||||
|
||||
### Tinyproxy ###
|
||||
|
||||
Tinyproxy is a lightweight open source web proxy daemon. It is designed to be fast and yet small. It is useful for cases such as embedded deployments where a full featured HTTP proxy is required, but the system resources for a larger proxy are unavailable.
|
||||
|
||||
Tinyproxy is very useful in a small network setting, where a larger proxy would either be too resource intensive, or a security risk. One of the key features of Tinyproxy is the buffering connection concept. In effect, Tinyproxy will buffer a high speed response from a server, and then relay it to a client at the highest speed the client will accept. This feature greatly reduces the problems with sluggishness on the net.
|
||||
|
||||
Features:
|
||||
|
||||
- Easy to modify
|
||||
- Anonymous mode - allows specification of individual HTTP headers that should be allowed through, and which should be blocked
|
||||
- HTTPS support - Tinyproxy allows forwarding of HTTPS connections without modifying traffic in any way through the CONNECT method
|
||||
- Remote monitoring - access proxy statistics from afar, letting you know exactly how busy the proxy is
|
||||
- Load average monitoring - configure software to refuse connections after the server load reaches a certain point
|
||||
- Access control - configure to only allow connections from certain subnets or IP addresses
|
||||
- Secure - run without any special privileges, thus minimizing the chance of system compromise
|
||||
- URL based filtering - allows domain and URL-based black- and whitelisting
|
||||
- Transparent proxying - configure as a transparent proxy, so that a proxy can be used without any client-side configuration
|
||||
- Proxy chaining - use an upstream proxy server for outbound connections, instead of direct connections to the target server, creating a so-called proxy chain
|
||||
- Privacy features - restrict both what data comes to your web browser from the HTTP server (e.g., cookies), and to restrict what data is allowed through from your web browser to the HTTP server (e.g., version information)
|
||||
- Small footprint - the memory footprint is about 2MB with glibc, and the CPU load increases linearly with the number of simultaneous connections (depending on the speed of the connection). Tinyproxy can be run on an old machine without affecting performance
|
||||
|
||||
- Website: [banu.com/tinyproxy][5]
|
||||
- Developer: Robert James Kaes and contributors
|
||||
- License: GNU GPL v2
|
||||
- Version Number: 1.8.3
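Several of the features above map directly to directives in `tinyproxy.conf`. A minimal sketch, with illustrative subnet and host values (the upstream-proxy syntax varies between Tinyproxy versions, so it is shown commented out):

```conf
# tinyproxy.conf -- minimal sketch (values are illustrative)
Port 8888                   # Tinyproxy's default listen port
Allow 192.168.0.0/16        # access control: only this subnet may connect
MaxClients 50               # refuse connections beyond this load

# Proxy chaining (optional; check the syntax for your version):
# Upstream proxy.example.com:3128
```

This keeps the footprint small while still enforcing the subnet-based access control described above.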
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20151101020309690/WebDelivery.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.squid-cache.org/
|
||||
[2]:http://www.privoxy.org/
|
||||
[3]:https://www.varnish-cache.org/
|
||||
[4]:http://www.pps.univ-paris-diderot.fr/%7Ejch/software/polipo/
|
||||
[5]:https://banu.com/tinyproxy/
|
220
sources/talk/20101020 19 Years of KDE History--Step by Step.md
Normal file
@ -0,0 +1,220 @@
|
||||
19 Years of KDE History: Step by Step
|
||||
================================================================================
|
||||
注:youtube 视频
|
||||
<iframe width="660" height="371" src="https://www.youtube.com/embed/1UG4lQOMBC4?feature=oembed" frameborder="0" allowfullscreen></iframe>
|
||||
|
||||
### Introduction ###
|
||||
|
||||
KDE is one of the most functional desktop environments ever. It is open source and free to use. 19 years ago, on 14 October 1996, German programmer Matthias Ettrich started development of this beautiful environment. KDE provides a shell and many applications for everyday use. Today, hundreds of thousands of people all over the world use KDE on Unix and Windows operating systems. 19 years is a serious age for a software project. Time to go back and see how it began.
|
||||
|
||||
K Desktop Environment brought some new aspects: a new design, good look & feel, consistency, ease of use, and powerful applications for typical desktop work and special use cases. The name "KDE" is an easy word play on "Common Desktop Environment", with "K" for "Cool". The first KDE version used Trolltech's proprietary Qt framework with dual licensing: the open source QPL (Q Public License) and a proprietary commercial license. In 2000 Trolltech released some Qt libraries under the GPL; Qt 4.5 was released under LGPL 2.1. Since 2009 KDE has comprised three products: Plasma Workspaces (the shell), KDE Applications, and KDE Platform, together known as the KDE Software Compilation.
|
||||
|
||||
### Releases ###
|
||||
|
||||
#### Pre-Release – 14 October 1996 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/0b3.png)
|
||||
|
||||
Kool Desktop Environment. The word "Kool" would be dropped later. In the beginning, all components were released to the developer community separately, without any coordinated timeframe for the overall project. KDE's first communication channel was a mailing list called kde@fiwi02.wiwi.uni-Tubingen.de.
|
||||
|
||||
#### KDE 1.0 – July 12, 1998 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/10.png)
|
||||
|
||||
This version received mixed reception. Many criticized the use of the Qt software framework – back then under the FreeQt license which was claimed to not be compatible with free software – and advised the use of Motif or LessTif instead. Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions.
|
||||
|
||||
![28 January 1999](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/11.png)
|
||||
|
||||
28 January 1999
|
||||
|
||||
An update, **K Desktop Environment 1.1**, was faster, more stable and included many small improvements. It also included a new set of icons, backgrounds and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn consisting of the letter K in front of a gear which is used in revised form to this day.
|
||||
|
||||
#### KDE 2.0 – October 23, 2000 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/20.png)
|
||||
|
||||
Major updates:

- DCOP (Desktop COmmunication Protocol), a client-to-client communications protocol
- KIO, an application I/O library
- KParts, a component object model
- KHTML, an HTML 4.0 compliant rendering and drawing engine
|
||||
|
||||
![26 February 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/21.png)
|
||||
|
||||
26 February 2001
|
||||
|
||||
**K Desktop Environment 2.1** release inaugurated the media player noatun, which used a modular, plugin design. For development, K Desktop Environment 2.1 was bundled with KDevelop.
|
||||
|
||||
![15 August 2001](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/22.png)
|
||||
|
||||
15 August 2001
|
||||
|
||||
The **KDE 2.2** release featured up to a 50% improvement in application startup time on GNU/Linux systems and increased stability and capabilities for HTML rendering and JavaScript; some new features in KMail.
|
||||
|
||||
#### KDE 3.0 – April 3, 2002 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/30.png)
|
||||
|
||||
K Desktop Environment 3.0 introduced better support for restricted usage, a feature demanded by certain environments such as kiosks, Internet cafes and enterprise deployments, which disallows the user from having full access to all capabilities of a piece of software.
|
||||
|
||||
![28 January 2003](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/31.png)
|
||||
|
||||
28 January 2003
|
||||
|
||||
**K Desktop Environment 3.1** introduced new default window (Keramik) and icon (Crystal) styles as well as several feature enhancements.
|
||||
|
||||
![3 February 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/32.png)
|
||||
|
||||
3 February 2004
|
||||
|
||||
**K Desktop Environment 3.2** included new features, such as inline spell checking for web forms and emails, improved e-mail and calendaring support, tabs in Konqueror and support for Microsoft Windows desktop sharing protocol (RDP).
|
||||
|
||||
![19 August 2004](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/33.png)
|
||||
|
||||
19 August 2004
|
||||
|
||||
**K Desktop Environment 3.3** focused on integrating different desktop components. Kontact was integrated with Kolab, a groupware application, and Kpilot. Konqueror was given better support for instant messaging contacts, with the capability to send files to IM contacts and support for IM protocols (e.g., IRC).
|
||||
|
||||
![16 March 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/34.png)
|
||||
|
||||
16 March 2005
|
||||
|
||||
**K Desktop Environment 3.4** focused on improving accessibility. The update added a text-to-speech system with support for Konqueror, Kate, KPDF, the standalone application KSayIt and text-to-speech synthesis on the desktop.
|
||||
|
||||
![29 November 2005](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/35.png)
|
||||
|
||||
29 November 2005
|
||||
|
||||
**The K Desktop Environment 3.5** release added SuperKaramba, which provides integrated and simple-to-install widgets to the desktop. Konqueror was given an ad-block feature and became the second web browser to pass the Acid2 CSS test.
|
||||
|
||||
#### KDE SC 4.0 – January 11, 2008 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/400.png)
|
||||
|
||||
The majority of development went into implementing most of the new technologies and frameworks of KDE 4. Plasma and the Oxygen style were two of the biggest user-facing changes. Dolphin replaced Konqueror as the file manager, and Okular became the default document viewer.
|
||||
|
||||
![29 July 2008](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/401.png)
|
||||
|
||||
29 July 2008
|
||||
|
||||
**KDE 4.1** includes a shared emoticon theming system which is used in PIM and Kopete, and DXS, a service that lets applications download and install data from the Internet with one click. Also introduced are GStreamer, QuickTime 7, and DirectShow 9 Phonon backends. New applications:

- Dragon Player
- Kontact
- Skanlite – software for scanners
- Step – physics simulator
- New games: Kdiamond, Kollision, KBreakout and others
|
||||
|
||||
![27 January 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/402.png)
|
||||
|
||||
27 January 2009
|
||||
|
||||
**KDE 4.2** is considered a significant improvement beyond KDE 4.1 in nearly all aspects, and a suitable replacement for KDE 3.5 for most users.
|
||||
|
||||
![4 August 2009](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/403.png)
|
||||
|
||||
4 August 2009
|
||||
|
||||
**KDE 4.3** fixed over 10,000 bugs and implemented almost 2,000 feature requests. Integration with other technologies, such as PolicyKit, NetworkManager & Geolocation services, was another focus of this release.
|
||||
|
||||
![9 February 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/404.png)
|
||||
|
||||
9 February 2010
|
||||
|
||||
**KDE SC 4.4** is based on version 4.6 of the Qt 4 toolkit. New application – KAddressBook, first release of Kopete.
|
||||
|
||||
![10 August 2010](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/405.png)
|
||||
|
||||
10 August 2010
|
||||
|
||||
**KDE SC 4.5** has some new features: integration of the WebKit library, an open-source web browser engine, which is used in major browsers such as Apple Safari and Google Chrome. KPackageKit replaced Kpackage.
|
||||
|
||||
![26 January 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/406.png)
|
||||
|
||||
26 January 2011
|
||||
|
||||
**KDE SC 4.6** has better OpenGL compositing along with the usual myriad of fixes and features.
|
||||
|
||||
![27 July 2011](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/407.png)
|
||||
|
||||
27 July 2011
|
||||
|
||||
**KDE SC 4.7**: KWin updated with OpenGL ES 2.0 compatibility, Qt Quick, Plasma Desktop with many enhancements, and a lot of new functions in general applications. 12k bugs were fixed.
|
||||
|
||||
![25 January 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/408.png)
|
||||
|
||||
25 January 2012
|
||||
|
||||
**KDE SC 4.8**: better KWin performance and Wayland support, new design of Dolphin.
|
||||
|
||||
![1 August 2012](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/409.png)
|
||||
|
||||
1 August 2012
|
||||
|
||||
**KDE SC 4.9**: several improvements to the Dolphin file manager, including the reintroduction of in-line file renaming, back and forward mouse buttons, improvement of the places panel and better usage of file metadata.
|
||||
|
||||
![6 February 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/410.png)
|
||||
|
||||
6 February 2013
|
||||
|
||||
**KDE SC 4.10**: many of the default Plasma widgets were rewritten in QML, and Nepomuk, Kontact and Okular received significant speed improvements.
|
||||
|
||||
![14 August 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/411.png)
|
||||
|
||||
14 August 2013
|
||||
|
||||
**KDE SC 4.11**: Kontact and Nepomuk received many optimizations. The first generation Plasma Workspaces entered maintenance-only development mode.
|
||||
|
||||
![18 December 2013](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/412.png)
|
||||
|
||||
18 December 2013
|
||||
|
||||
**KDE SC 4.12**: Kontact received substantial improvements, along with many small improvements elsewhere.
|
||||
|
||||
![16 April 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/413.png)
|
||||
|
||||
16 April 2014
|
||||
|
||||
**KDE SC 4.13**: Nepomuk semantic desktop search was replaced with KDE’s in-house Baloo. KDE SC 4.13 was released in 53 different translations.
|
||||
|
||||
![20 August 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/414.png)
|
||||
|
||||
20 August 2014
|
||||
|
||||
**KDE SC 4.14**: The release primarily focused on stability, with numerous bugs fixed and few new features added. This was the final KDE SC 4 release.
|
||||
|
||||
#### KDE Plasma 5.0 – July 15, 2014 ####
|
||||
|
||||
![](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/500.png)
|
||||
|
||||
KDE Plasma 5 is the 5th generation of KDE. Massive improvements in design and system, a new default theme (Breeze), complete migration to QML, better performance with OpenGL, and better HiDPI display support.
|
||||
|
||||
![11 November 2014](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/501.png)
|
||||
|
||||
11 November 2014
|
||||
|
||||
**KDE Plasma 5.1**: Ported missing features from Plasma 4.
|
||||
|
||||
![27 January 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/502.png)
|
||||
|
||||
27 January 2015
|
||||
|
||||
**KDE Plasma 5.2**: New components: BlueDevil, KSSHAskPass, Muon, SDDM theme configuration, KScreen, GTK+ style configuration and KDecoration.
|
||||
|
||||
![28 April 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/503.png)
|
||||
|
||||
28 April 2015
|
||||
|
||||
**KDE Plasma 5.3**: Tech preview of Plasma Media Center. New Bluetooth and touchpad applets. Enhanced power management.
|
||||
|
||||
![25 August 2015](https://github.com/paulcarroty/Articles/raw/master/KDE_History/im/504.png)
|
||||
|
||||
25 August 2015
|
||||
|
||||
**KDE Plasma 5.4**: Initial Wayland session, new QML-based audio volume applet, and alternative full-screen application launcher.
|
||||
|
||||
Big thanks to the [KDE][1] developers and community, Wikipedia for the [descriptions][2], and all my readers. Be free and use open source software like KDE.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://tlhp.cf/kde-history/
|
||||
|
||||
作者:[Pavlo Rudyi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://tlhp.cf/author/paul/
|
||||
[1]:https://www.kde.org/
|
||||
[2]:https://en.wikipedia.org/wiki/KDE_Plasma_5
|
@ -1,46 +0,0 @@
|
||||
LinuxCon's surprise keynote speaker Linus Torvalds muses about open-source software
|
||||
================================================================================
|
||||
> In a broad-ranging question and answer session, Linus Torvalds, Linux's founder, shared his thoughts on the current state of open source and Linux.
|
||||
|
||||
**SEATTLE** -- [LinuxCon][1] attendees got an early Christmas present when the Wednesday morning "surprise" keynote speaker turned out to be Linux's founder, Linus Torvalds.
|
||||
|
||||
![zemlin-and-torvalds-08192015-1.jpg](http://zdnet2.cbsistatic.com/hub/i/2015/08/19/9951f05a-fedf-4bf4-a4a1-3b4a15458de6/c19c89ded58025eccd090787ba40e803/zemlin-and-torvalds-08192015-1.jpg)
|
||||
|
||||
Jim Zemlin and Linus Torvalds shooting the breeze at LinuxCon in Seattle. -- sjvn
|
||||
|
||||
Jim Zemlin, the Linux Foundation's executive director, opened the question and answer session by quoting from a recent article about Linus, "[Torvalds may be the most influential individual economic force][2] of the past 20 years. ... Torvalds has, in effect, been as instrumental in retooling the production lines of the modern economy as Henry Ford was 100 years earlier."
|
||||
|
||||
Torvalds replied, "I don't think I'm all that powerful, but I'm glad to get all the credit for open source." For someone who's arguably been more influential on technology than Bill Gates, Steve Jobs, or Larry Ellison, Torvalds remains amusingly modest. That's probably one reason [Torvalds, who doesn't suffer fools gladly][3], remains the unchallenged leader of Linux.
|
||||
|
||||
It also helps that he doesn't take himself seriously, except when it comes to code quality. Zemlin reminded him that he was also described in the same article as being "5-feet, ho-hum tall with a paunch, ... his body type and gait resemble that of Tux, the penguin mascot of Linux." Torvalds' reply was to grin and say "What is this? A roast?" He added that 5'8" was a perfectly good height.
|
||||
|
||||
More seriously, Zemlin asked Torvalds what he thought about the current excitement over containers. Indeed, at times LinuxCon has felt like DockerCon. Torvalds replied, "I'm glad that the kernel is far removed from containers and other buzzwords. We only care about just the kernel. I'm so focused on the kernel I really don't care. I don't get involved in the politics above the kernel and I'm really happy that I don't know."
|
||||
|
||||
Moving on, Zemlin asked Torvalds what he thought about the demand from the Internet of Things (IoT) for an even smaller Linux kernel. "Everyone has always wished for a smaller kernel," Torvalds said. "But, with all the modules it's still tens of MegaBytes in size. It's shocking that it used to fit into a MB. We'd like it to be mean lean, mean IT machine again."
|
||||
|
||||
But, Torvalds continued, "It's hard to get rid of unnecessary fat. Things tend to grow. Realistically I don't think we can get down to the sizes we were 20 years ago."
|
||||
|
||||
As for security, the next topic, Torvalds said, "I'm at odds with the security community. They tend to see technology as black and white. If it's not security they don't care at all about it." The truth is "security is bugs. Most of the security issues we've had in the kernel hasn't been that big. Most of them have been really stupid and then some clever person takes advantage of it."
|
||||
|
||||
The bottom line is, "We'll never get rid of bugs so security will never be perfect. We do try to be really careful about code. With user space we have to be very strict." But, "Bugs happen and all you can do is mitigate them. Open source is doing fairly well, but anyone who thinks we'll ever be completely secure is foolish."
|
||||
|
||||
Zemlin concluded by asking Torvalds where he saw Linux ten years from now. Torvalds replied that he doesn't look at it this way. "I'm plodding, pedestrian, I look ahead six months, I don't plan 10 years ahead. I think that's insane."
|
||||
|
||||
Sure, "companies plan ten years, and their plans use open source. Their whole process is very forward thinking. But I'm not worried about 10 years ahead. I look to the next release and the release beyond that."
|
||||
|
||||
For Torvalds, who works at home where "the FedEx guy is no longer surprised to find me in my bathrobe at 2 in the afternoon," looking ahead a few months works just fine. And so do all the businesses -- both technology-based Amazon, Google, Facebook and more mainstream, WalMart, the New York Stock Exchange, and McDonalds -- that live on Linux every day.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/linus-torvalds-muses-about-open-source-software/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:http://events.linuxfoundation.org/events/linuxcon-north-america
[2]:http://www.bloomberg.com/news/articles/2015-06-16/the-creator-of-linux-on-the-future-without-him
[3]:http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/

@ -1,4 +1,3 @@
KevinSJ translating

Why did you start using Linux?
================================================================================

> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux

@ -1,92 +0,0 @@

LinuxCon exclusive: Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project
================================================================================

![](http://images.techhive.com/images/article/2015/08/mark-100608730-primary.idge.jpg)

Mark Shuttleworth at LinuxCon Credit: Swapnil Bhartiya

> Mark Shuttleworth, founder of Canonical and Ubuntu, made a surprise visit at LinuxCon. I sat down with him for a video interview and talked about Ubuntu on IBM’s new LinuxONE systems, Canonical’s plans for containers, open source in the enterprise space and much more.
### You made a surprise entry during the keynote. What brought you to LinuxCon? ###
**Mark Shuttleworth**: I am here at LinuxCon to support IBM and Canonical in their announcement of Ubuntu on their new Linux-only super-high-end mainframe LinuxONE. These are the biggest machines in the world, purpose-built to run only Linux. And we will be bringing Ubuntu to them, which is a real privilege for us and is going to be incredible for developers.
![mark selfie](http://images.techhive.com/images/article/2015/08/mark-selfie-100608731-large.idge.jpg)

Swapnil Bhartiya

Mark Shuttleworth and Swapnil Bhartiya, mandatory selfie at LinuxCon

### Only Red Hat and SUSE were supported on it. Why was Ubuntu missing from the mainframe scene? ###
**Mark**: Ubuntu has always been about developers. It has been about enabling the free software platform from where it is collaboratively built to be available at no cost to developers in the world, so they are limited only by their imagination—not by money, not by geography.
There was an incredible story told today about a 12-year-old kid who started out with Ubuntu; there are incredible stories about people building giant businesses with Ubuntu. And for me, being able to empower people, whether they come from one part of the world or another to express their ideas on free software, is what Ubuntu is all about. It's been a journey for us essentially, going to the platforms those developers care about, and just in the last year, we suddenly saw a flood of requests from companies who run mainframes, who are using Ubuntu for their infrastructure—70% of OpenStack deployments are on Ubuntu. Those same people said, “Look, there is the mainframe, and we like to unleash it and think of it as a region in the cloud.” So when IBM started talking to us, saying that they have this project in the works, it felt like a very natural fit: You are going to be able to take your Ubuntu laptop, build code there and ship it straight to every cloud, every virtualization environment, every bare metal in every architecture including the mainframe, and that's going to be beautiful.
### Will Canonical be offering support for these systems? ###
**Mark**: Yes. Ubuntu on z Systems is going to be completely supported. We will make long-term commitments to that. The idea is to bring together scale-out-fast cloud-like workloads, which is really born on Ubuntu; 70% of workloads on Amazon and other public clouds run on Ubuntu. Now you can think of running that on a mainframe if that makes sense to you.
We are going to provide exactly the same platform that we do on the cloud, and we are going to provide that on the mainframe as well. We are also going to expose it to the OpenStack API so you can consume it on a mainframe with exactly the same tools and exactly the same processes that you would consume on a laptop, or OpenStack or public cloud resources. So all of the things that Ubuntu builds to make your life easy as a developer are going to be available across that full range of platforms and systems, and all of that is commercially supported.
### Canonical is doing a lot of things: It is into enterprise, and it’s in the consumer space with mobile and desktop. So what is the core focus of Canonical now? ###
**Mark**: The trick for us is to enable the reuse of specifically the same parts [of our technology] in as many useful ways as possible. So if you look at the work that we do at z Systems, it's absolutely defined by the work that we do on the cloud. We want to deliver exactly the same libraries on exactly the same date for the mainframe as we do for public clouds and for x86, ARM and Power servers today.
We don't allow Ubuntu or our focus to fragment very dramatically, because we don't allow different product managers to define Ubuntu in different ways in different environments. We just want to bring that standard experience that developers love to this new environment.
Similarly if you look at the work we are doing on IoT [Internet of Things], Snappy Ubuntu is the heart of the phone. It’s the phone without the GUI. So the definitions, the tools, the kernels, the mechanisms are shared across those projects. So we are able to multiply the impact of the work. We have an incredible community, and we try to enable the community to do things that they want to do that we can’t do. So that's why we have so many buntus, and it's kind of incredible for me to see what they do with that.
We also see the community climbing in. We see hundreds of developers working with Snappy for IoT, and we see developers working with Snappy on mobile, for personal computing as convergence becomes real. And, of course, there is the cloud server story: 70% of the world is Ubuntu, so there is a huge audience. We don't have to do all the work that we do; we just have to be open and willing to, kind of, do the core infrastructure and then reuse it as efficiently as possible.
### Is Snappy a response to Atomic or CoreOS? ###
**Mark**: Snappy as a project was born four years ago when we started working on the phone, which was long before the CoreOS, long before Atomic. I think the principles of atomicity, transactionality are beautiful, but remember: We needed to build the same things for the phone. And with Snappy, we have the ability to deliver transactional updates to any of these systems—phones, servers and cloud devices.
Of course, it feels a little different because in order to provide those guarantees, we have to shape the system in such a way that we can guarantee the guarantees. And that's why Snappy is snappy; it's a new thing. It's not based on an old packaging system. Though we will keep both of them: All Snaps for us that Canonical makes, the core snaps that define the OS, are all built from Debian packages. They are two different faces of the same coin for us, and developers will use them as tools. We use the right tools for the job.
There are a couple of key advantages for Snappy over CoreOS and Atomic, and the main one is this: We took the view that we wanted the base idea to be extensible. So with Snappy, the core operating system is tiny. You make all the choices, and you take all the decisions about things you want to bolt onto that: you want to bolt on Docker; you want to bolt on Kubernetes; you want to bolt on Mesos; you want to bolt on Lattice from Pivotal; you want to bolt on OpenStack. Those are the things you choose to add with Snappy. Whereas with Atomic and CoreOS, it's one blob and you have to do it exactly the way they want you to do it. You have to live with the versions of software and the choices they make.
Whereas with Snappy, we really preserve this idea of the choices you have got in Ubuntu are now transactionally available on Snappy systems. That makes the core much smaller, and it gives you the choice of different container systems, different container management systems, different cloud infrastructure systems or different apps of every description. I think that's the winning idea. In fullness of time, people will realize that they wanted to make those choices themselves; they just want Canonical to do the work of providing the updates in a really efficient manner.
### There is so much competition in the container space with Docker, Rocket and many other players. Where will Canonical stand amid this competition? ###
**Mark**: Canonical is focused on platform tools, and we see things like the Rocket and Docker as things super-useful for developers; we just make sure that those work best on Ubuntu. Docker, for years, ran only Ubuntu because we work very closely with them, and we are glad now that it's available everywhere else. But if you look at the numbers, the vast majority of Docker containers are on Ubuntu. Because we work really hard, as developers, you get the best experience with all of these tools on Ubuntu. We don't want to try and control everything, and it’s great for us to have those guys competing.
I think in the end people will see that there are really two kinds of containers. 1) There are cases where a container is just like a virtual machine. It feels like a whole machine, it runs all processes, all the logs and cron jobs are there. It's like a VM, just that it's much cheaper, much lighter, much faster, and that's LXD. 2) And then there are process containers, which are like Docker or Rocket; they are there to run a specific application very fast. I think we lead the world in the general machine container story, which is our hypervisor LXD, and I think Docker leads the story when it comes to application containers, process containers. And those two work together really beautifully.
### Microsoft and Canonical are working together on LXD? Can you tell us about this engagement? ###
**Mark**: LXD is two things. First, it's an implementation on top of Canonical's work on the kernel so that you can start to create full machine containers on any host. But it's also a REST API. That’s the transition from LXC to LXD. We got a daemon there so you can talk to the daemon over the network, if it's listening on the network, and say: tell me about the containers on that machine, tell me about the file systems on that machine, the networks on that machine, start or stop the container.
So LXD becomes a distributed hypervisor effectively. Very interestingly, last week Microsoft announced that they like REST API. It is very clean, very simple, very well engineered, and they are going to implement the same API for Windows machines. It's completely cross-platform, which means you will be able to talk to any machine—Linux or Windows. So it gives you very clean and simple APIs to talk about containers on any host on the network.
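To make the "clean and simple API" concrete, here is a rough sketch of the kinds of requests Shuttleworth describes. The helper names are ours, and the `/1.0/...` endpoint paths follow the LXD 1.0 REST API of the era, so treat this as an illustration rather than a reference:

```python
import json

# Illustrative sketch only: helper names are invented here; the
# /1.0/... paths follow the LXD 1.0 REST API as documented at the time.
API_ROOT = "/1.0"

def list_containers():
    """'Tell me about the containers on that machine.'"""
    return ("GET", API_ROOT + "/containers", None)

def change_state(name, action, timeout=30):
    """'Start or stop the container' via a PUT on its state resource."""
    body = json.dumps({"action": action, "timeout": timeout})
    return ("PUT", "%s/containers/%s/state" % (API_ROOT, name), body)

# Over the network, each (method, path, body) tuple would be sent to the
# daemon over TLS, e.g. with http.client.HTTPSConnection(host, 8443).
```

Because the surface is just HTTP verbs on resources, any client that can speak HTTPS -- on Linux or, as announced, Windows -- can drive containers on any host, which is what makes LXD work as a distributed hypervisor.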
Of course, we have led the work in [OpenStack to bind LXD to Nova][1], which is the control system to compute in OpenStack, so that's how we create a whole cloud with OpenStack API with the individual VMs being actually containers, so much denser, much faster, much lighter, much cheaper.
### Open Source is becoming a norm in the enterprise segment. What do you think is driving the adoption of open source in the enterprise? ###
**Mark**: The reason why open source has become so popular in the enterprise is because it enables them to go faster. We are all competing at some level, and if you can't make progress because you have to call up some vendor, you can't dig in and help yourself go faster, then you feel frustrated. And given the choice between frustration and at least the ability to dig into a problem, enterprises over time will always choose to give themselves the ability to dig in and help themselves. So that is why open source is phenomenal.
I think it goes a bit deeper than that. I think people have started to realize that, as much as we compete, 99% of what we need to do is shared, and there is something meaningful about contributing to something that is shared. I have seen Ubuntu go from something that developers love to something that CIOs love, because developers love Ubuntu. As that happens, it's not a one-way ticket. They often want to say: how can we help contribute to make this whole thing go faster?
We have always seen a curve of complexity, and open source has traditionally been higher up on the curve of complexity and therefore considered threatening or difficult or too uncertain for people who are not comfortable with the complexity. What's wonderful to me is that many open source projects have identified that as a blocker for their own future. So in Ubuntu we have made user experience, design and “making it easy” a first-class goal. We have done the same for OpenStack. With Ubuntu tools for OpenStack anybody can build an OpenStack cloud in an hour, and if you want, that cloud can run itself, scale itself, manage itself, can deal with failures. It becomes something you can just fire up and forget, which also makes it really cheap. It also makes it something that's not a distraction, and so by making open source easier and easier, we are broadening its appeal to consumers and into the enterprise and potentially into the government.
### How open are governments to open source? Can you tell us about the utilization of open source by governments, especially in the U.S.? ###
**Mark**: I don't track the usage in government, but part of government utilization in the modern era is the realization of how untrustworthy other governments might be. There is a desire for people to be able to say, “Look, I want to review or check and potentially self-build all the things that I depend on.” That's a really important mission. At the end of the day, some people see this as a game where maybe they can get something out of the other guy. I see it as a game where we can make a level playing field, where everybody gets to compete. I have a very strong interest in making sure that Ubuntu is trustworthy, which means the way we build it, the way we run it, the governance around it is such that people can have confidence in it as an independent thing.
### You are quite vocal about freedom, privacy and other social issues on Google+. How do you see yourself, your company and Ubuntu playing a role in making the world a better place? ###
**Mark**: The most important thing for us to do is to build confidence in trusted platforms, platforms that are freely available but also trustworthy. At any given time, there will always be people who can make arguments about why they should have access to something. But we know from history that at the end of the day, due process of law, justice, doesn't depend on the abuse of privacy, abuse of infrastructure, the abuse of data. So I am very strongly of the view that in the fullness of time, all of the different major actors will come to the view that their primary interest is in having something that is conceptually trustworthy. This isn't about what America can steal from Germany or what China can learn in Russia. This is about saying we’re all going to be able to trust our infrastructure; that's a generational journey. But I believe Ubuntu can be right at the center of people's thinking about that.
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2973116/linux/linuxcon-exclusive-mark-shuttleworth-says-snappy-was-born-long-before-coreos-and-the-atomic-project.html
作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://wiki.openstack.org/wiki/HypervisorSupportMatrix
@ -1,67 +0,0 @@

The Strangest, Most Unique Linux Distros
================================================================================

From the most consumer focused distros like Ubuntu, Fedora, Mint or elementary OS to the more obscure, minimal and enterprise focused ones such as Slackware, Arch Linux or RHEL, I thought I'd seen them all. I couldn't have been further from the truth. The Linux ecosystem is very diverse. There's one for everyone. Let's discuss the weird and wacky world of niche Linux distros that represent the true diversity of open platforms.
![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)
**Puppy Linux**: An operating system which is about 1/10th the size of an average DVD quality movie rip, that's Puppy Linux for you. The OS is just 100 MB in size! And it can run from RAM, making it unusually fast even on older PCs. You can even remove the boot medium after the operating system has started! Can it get any better than that? System requirements are bare minimum, most hardware is automatically detected, and it comes loaded with software catering to your basic needs. [Experience Puppy Linux][1].
![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)
**Suicide Linux**: Did the name scare you? Well it should. 'Any time - any time - you type any remotely incorrect command, the interpreter creatively resolves it into rm -rf / and wipes your hard drive'. Simple as that. I really want to know the ones who are confident enough to risk their production machines with [Suicide Linux][2]. **Warning: DO NOT try this on production machines!** The whole thing is available in a neat [DEB package][3] if you're interested.
![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)
**PapyrOS**: "Strange" in a good way. PapyrOS is trying to adapt the material design language of Android into their brand new Linux distribution. Though the project is in early stages, it already looks very promising. The project page says the OS is 80% complete and one can expect the first Alpha release anytime soon. We did a small write up on [PapyrOS][4] when it was announced and by the looks of it, PapyrOS might even become a trend-setter of sorts. Follow the project on [Google+][5] and contribute via [BountySource][6] if you're interested.
![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)
**Qubes OS**: Qubes is an open-source operating system designed to provide strong security using a Security by Compartmentalization approach. The assumption is that there can be no perfect, bug-free desktop environment. And by implementing a 'Security by Isolation' approach, [Qubes Linux][7] intends to remedy that. Qubes is based on Xen, the X Window System, and Linux, and can run most Linux applications and supports most Linux drivers. Qubes was selected as a finalist of Access Innovation Prize 2014 for Endpoint Security Solution.
![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)
**Ubuntu Satanic Edition**: Ubuntu SE is a Linux distribution based on Ubuntu. "It brings together the best of free software and free metal music" in one comprehensive package consisting of themes, wallpapers, and even some heavy-metal music sourced from talented new artists. Though the project doesn't look actively developed anymore, Ubuntu Satanic Edition is strange in every sense of that word. [Ubuntu SE (Slightly NSFW)][8].
![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)
**Tiny Core Linux**: Puppy Linux not small enough? Try this. Tiny Core Linux is a 12 MB graphical Linux desktop! Yep, you read it right. One major caveat: It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop typically with wired internet access. There is even a version without the GUI called Micro Core Linux which is just 9MB in size. [Tiny Core Linux][9] folks.
![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)
**NixOS**: A very experienced-user focused Linux distribution with a unique approach to package and configuration management. In other distributions, actions such as upgrades can be dangerous. Upgrading a package can cause other packages to break, and upgrading an entire system is much less reliable than reinstalling from scratch. And on top of all that, you can't safely test what the results of a configuration change will be; there's no "Undo", so to speak. In NixOS, the entire operating system is built by the Nix package manager from a description in a purely functional build language. This means that building a new configuration cannot overwrite previous configurations. Most of the other features follow this pattern. Nix stores all packages in isolation from each other. [More about NixOS][10].
![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)
**GoboLinux**: This is another very unique Linux distro. What makes GoboLinux so different from the rest is its unique re-arrangement of the filesystem. It has its own subdirectory tree, where all of its files and programs are stored. GoboLinux does not have a package database because the filesystem is its database. In some ways, this sort of arrangement is similar to that seen in OS X. [Get GoboLinux][11].
![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)
**Hannah Montana Linux**: Here is a Linux distro based on Kubuntu with a Hannah Montana themed boot screen, KDM, icon set, ksplash, plasma, color scheme, and wallpapers (I'm so sorry). [Link][12]. Project not active anymore.
**RLSD Linux**: An extremely minimalistic, small, lightweight and security-hardened, text-based operating system built on Linux. "It's a unique distribution that provides a selection of console applications and home-grown security features which might appeal to hackers," developers claim. [RLSD Linux][13].
Did we miss anything even stranger? Let us know.
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html
作者:Manuel Jose
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
[2]:http://qntm.org/suicide
[3]:http://sourceforge.net/projects/suicide-linux/files/
[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
[6]:https://www.bountysource.com/teams/papyros
[7]:https://www.qubes-os.org/
[8]:http://ubuntusatanic.org/
[9]:http://tinycorelinux.net/
[10]:https://nixos.org/
[11]:http://www.gobolinux.org/
[12]:http://hannahmontana.sourceforge.net/
[13]:http://rlsd2.dimakrasner.com/
@ -1,3 +1,5 @@
martin translating...

Superclass: 15 of the world’s best living programmers
================================================================================

When developers discuss who the world’s top programmer is, these names tend to come up a lot.
@ -386,4 +388,4 @@ via: http://www.itworld.com/article/2823547/enterprise-software/158256-superclas
[131]:http://community.topcoder.com/tc?module=AlgoRank
[132]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi
[133]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4720779
[134]:http://www.quora.com/Computer-Programming/Who-is-the-best-programmer-in-the-world-right-now/answer/Prateek-Joshi/comment/4880549

@ -1,149 +0,0 @@

The Free Software Foundation: 30 years in
================================================================================

![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_general_openfield.png?itok=tcXpYeHi)

Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to [my inbox][1].
Now, I have to be honest with you all, this column went a little differently than I expected.
A few weeks ago when thinking what to write, I mused over the notion of a piece about the [Free Software Foundation][2] celebrating its 30 year anniversary and how relevant and important its work is in today's computing climate.
To add some meat I figured I would interview [John Sullivan][3], executive director of the FSF. My plan was typical of many of my pieces: thread together an interesting narrative and quote pieces of the interview to give it color.
Well, that all went out the window when John sent me a tremendously detailed, thoughtful, and descriptive interview. I decided therefore to present it in full as the main event, and to add some commentary throughout. Thus, this is quite a long column, but I think it paints a fascinating picture of a fascinating organization. I recommend you grab a cup of something delicious and settle in for a solid read.
### The sands of change ###
The Free Software Foundation was founded in 1985. To paint a picture of what computing was like back then, the [Amiga 1000][4] was released, C++ was becoming a dominant language, [Aldus PageMaker][5] was announced, and networking was just starting to grow. Oh, and that year [Careless Whisper][6] by Wham! was a major hit.
Things have changed a lot in 30 years. Back in 1985 the FSF was primarily focused on building free pieces of software that were primarily useful to nerdy computer people. These days we have software, services, social networks, and more to consider.
I first wanted to get a sense of what John feels are the most prominent risks to software freedom today.
"I think there's widespread agreement on the biggest risks for computer user freedom today, but maybe not on the names for them."
"The first is what we might as well just call 'tiny computers everywhere.' The free software movement has succeeded to the point where laptops, desktops, and servers can run fully free operating systems doing anything users of proprietary systems can do. There are still a few holes, but they'll be closed. The challenge that remains in this area is to cut through the billion dollar marketing budgets and legal regimes working against us to actually get the systems into users hands."
"However, we have a serious problem on the set of computers whose primary common trait is that they are very small. Even though a car is not especially small, the computers in it are, so I include that form factor in this category, along with phones, tablets, glasses, watches, and so on. While these computers often have a basis in free software—for example, using the kernel Linux along with other free software like Android or GNU—their primary uses are to run proprietary applications and be shims for services that replace local computing with computing done on a server over which the user has no control. Since these devices serve vital functions, with some being primary means of communication for huge populations, some sitting very close to our bodies and our actual vital functions, some bearing responsibility for our physical safety, it is imperative that they run fully free systems under their users' control. Right now, they don't."
John feels the risk here is not just the platforms and form factors, but the services integrated into them.
"The services many of these devices talk to are the second major threat we face. It does us little good booting into a free system if we do our actual work and entertainment on companies' servers running software we have no access to at all. The point of free software is that we can see, modify, and share code. The existence of those freedoms even for nontechnical users provides a shield that prevents companies from controlling us. None of these freedoms exist for users of Facebook or Salesforce or Google Docs. Even more worrisome, we see a trend where people are accepting proprietary restrictions imposed on their local machines in order to have access to certain services. Browsers—including Firefox—are now automatically installing a DRM plugin in order to appease Netflix and other video giants. We need to work harder at developing free software decentralized replacements for media distribution that can actually empower users, artists, and user-artists, and for other services as well. For Facebook we have GNU social, pump.io, Diaspora, Movim, and others. For Salesforce, we have CiviCRM. For Google Docs, we have Etherpad. For media, we have GNU MediaGoblin. But all of these projects need more help, and many services don't have any replacement contenders yet."
It is interesting that John mentions finding free software equivalents for common applications and services today. The FSF maintains a list of "High Priority Projects" that are designed to fill this gap. Unfortunately the capabilities of these projects vary tremendously, and in an age where social media is so prominent, the software is only part of the problem: the real challenge is getting people to use it.
This all raises the question of where the FSF fits in today's modern computing world. I am a fan of the FSF. I think the work they do is valuable, and I contribute financially to support it too. They are an important organization for building an open computing culture, but all organizations need to grow, adjust, and adapt, particularly ones in the technology space.
I wanted to get a better sense of what the FSF is doing today that it wasn't doing at its inception.
"We're speaking to a much larger audience than we were 30 years ago, and to a much broader audience. It's no longer just hackers and developers and researchers that need to know about free software. Everyone using a computer does, and it's quickly becoming the case that everyone uses a computer."
John went on to provide some examples of these efforts.
"We're doing coordinated public advocacy campaigns on issues of concern to the free software movement. Earlier in our history, we expressed opinions on these things, and took action on a handful, but in the last ten years we've put more emphasis on formulating and carrying out coherent campaigns. We've made especially significant noise in the area of Digital Restrictions Management (DRM) with Defective by Design, which I believe played a role in getting iTunes music off DRM (now of course, Apple is bringing DRM back with Apple Music). We've made attractive and useful introductory materials for people new to free software, like our [User Liberation animated video][7] and our [Email Self-Defense Guide][8].
We're also endorsing hardware that [respects users' freedoms][9]. Hardware distributors whose devices have been certified by the FSF to contain and require only free software can display a logo saying so. Expanding the base of free software users and the free software movement has two parts: convincing people to care, and then making it possible for them to act on that. Through this initiative, we encourage manufacturers and distributors to do the right thing, and we make it easy for users who have started to care about free software to buy what they need without suffering through hours and hours of research. We've certified a home WiFi router, 3D printers, laptops, and USB WiFi adapters, with more on the way.
We're collecting all of the free software we can find in our [Free Software Directory][10]. We still have a long way to go on this—we're at only about 15,500 packages right now, and we can imagine many improvements to the design and function of the site—but I think this resource has great potential for helping users find the free software they need, especially users who aren't yet using a full GNU/Linux system. With the dangers inherent in downloading random programs off the Internet, there is a definite need for a curated collection like this. It also happens to provide a wealth of machine-readable data of use to researchers.
We're acting as the fiscal sponsor for several specific free software projects, enabling them to raise funds for development. Most of these projects are part of GNU (which we continue to provide many kinds of infrastructure for), but we also sponsor [Replicant][11], a fully free fork of Android designed to give users the freest mobile devices currently possible.
We're helping developers use free software licenses properly, and we're following up on complaints about companies that aren't following the terms of the GPL. We help them fix their mistakes and distribute properly. RMS was in fact doing similar work with the precursors of the GPL very early on, but it's now an ongoing part of our work.
Most of the specific things the FSF does now it wasn't doing 30 years ago, but the vision is little changed from the original paperwork—we aim to create a world where everything users want to do on any computer can be done using free software; a world where users control their computers and not the other way around."
### A cult of personality ###
There is little doubt in anyone's minds about the value the FSF brings. As John just highlighted, its efforts span not just the creation and licensing of free software, but also recognizing, certifying, and advocating a culture of freedom in technology.
The head of the FSF is the inimitable Richard M. Stallman, commonly referred to as RMS.
RMS is a curious character. He has demonstrated an unbelievable level of commitment to his ideas, philosophy, and ethical devotion to freedom in software.
While he is sometimes mocked online for his social awkwardness, be it things said in his speeches, his bizarre travel requirements, or other sometimes cringeworthy moments, RMS's perspectives on software and freedom are generally rock-solid. He takes a remarkably consistent approach to his perspectives, and he is clearly a careful thinker about not just his own thoughts but the wider movement he is leading. My only criticism is that I think from time to time he somewhat over-eggs the pudding with the ferocity of his words. But hey, given his importance in our world, I would rather take an extra egg than no pudding for anyone. O.K., I get that the whole pudding thing here was strained...
So RMS is a key part of the FSF, but the organization is also much more than that. There are employees, a board, and many contributors. I was curious to see how much of a role RMS plays these days in the FSF. John shared this with me.
"RMS is the FSF's President, and does that work without receiving a salary from the FSF. He continues his grueling global speaking schedule, advocating for free software and computer user freedom in dozens of countries each year. In the course of that, he meets with government officials as well as local activists connected with all varieties of social movements. He also raises funds for the FSF and inspires many people to volunteer."
"In between engagements, he does deep thinking on issues facing the free software movement, and anticipates new challenges. Often this leads to new articles—he wrote a 3-part series for Wired earlier this year about free software and free hardware designs—or new ideas communicated to the FSF's staff as the basis for future projects."
As we delved into the cult of personality, I wanted to tap John's perspectives on how wide the free software movement has grown.
I remember being at the [Open Source Think Tank][12] (an event that brings together execs from various open source organizations) and there was a case study in which attendees were asked to recommend a license choice for a particular project. The vast majority of break-out groups recommended the Apache Software License (APL) over the GNU General Public License (GPL).
This stuck in my mind as since then I have noticed that many companies seem to have opted for open licenses other than the GPL. I was curious to see if John had noticed a trend towards the APL as opposed to the GPL.
"Has there been? I'm not so sure. I gave a presentation at FOSDEM a few years ago called 'Is Copyleft Being Framed?' that showed some of the problems with the supposed data behind claims of shifts in license adoption. I'll be publishing an article soon on this, but here are some of the major problems:
- Free software license choices do not exist in a vacuum. The number of people choosing proprietary software licenses also needs to be considered in order to draw the kinds of conclusions that people want to draw. I find it much more likely that lax permissive license choices (such as the Apache License or 3-clause BSD) are trading off with proprietary license choices, rather than with the GPL.
- License counters often, ironically, don't publish the software they use to collect that data as free software. That means we can't inspect their methods or reproduce their results. Some people are now publishing the code they use, but certainly any that don't should be completely disregarded. Science has rules.
- What counts as a thing with a license? Are we really counting an app under the APL that makes funny noises as 1:1 with GNU Emacs under GPLv3? If not, how do we decide which things to treat as equals? Are we only looking at software that actually works? Are we making sure not to double- and triple-count programs that exist on multiple hosting sites, and what about ports for different OSes?
The question is interesting to ponder, but every conclusion I've seen so far has been extremely premature in light of the actual data. I'd much rather see a survey of developers asking why they chose particular licenses for their projects than any more of these attempts to programmatically ascertain the license of programs and then ascribe human intentions onto patterns in that data.
Copyleft is as vital as it ever was. Permissively licensed software is still free software and on its face a good thing, but it is contingent and needs an accompanying strong social commitment to not incorporate it in proprietary software. If free software's major long-term impact is enabling businesses to more efficiently make products that restrict us, then we have achieved nothing for computer user freedom."
### Rising to new challenges ###
30 years is an impressive time for any organization to be around, and particularly one with such important goals that span so many different industries, professions, governments, and cultures.
As I started to wrap up the interview I wanted to get a better sense of what the FSF's primary function is today, 30 years after the mission started.
"I think the FSF is in a very interesting position of both being a steady rock and actively pushing the envelope."
"We have core documents like the [Free Software Definition][13], the [GNU General Public License][14], and the [list we maintain of free and nonfree software licenses][15], which have been keystones in the construction of the world of free software we have today. People place a great deal of trust in us to stay true to the principles outlined in those documents, and to apply them correctly and wisely in our assessments of new products or practices in computing. In this role, we hold the ladder for others to climb. As a 501(c)(3) charity held legally accountable to the public interest, and about 85% funded by individuals, we have the right structure for this."
"But we also push the envelope. We take on challenges that others say are too hard. I guess that means we also build ladders? Or maybe I should stop with the metaphors."
While John may not be great with metaphors (like I am one to talk), the FSF is great at setting a mission and demonstrating a devout commitment to it. This mission starts with a belief that free software should be everywhere.
"We are not satisfied with the idea that you can get a laptop that works with free software except for a few components. We're not satisfied that you can have a tablet that runs a lot of free software, and just uses proprietary software to communicate with networks and to accelerate video and to take pictures and to check in on your flight and to call an Über and to.. Well, we are happy about some such developments for sure, but we are also unhappy about the suggestion that we should be fully content with them. Any proprietary software on a system is both an injustice to the user and inherently a threat to users' security. These almost-free things can be stepping stones on the way to a free world, but only if we keep our feet moving."
"In the early years of the FSF, we actually had to get a free operating system written. This has now been done by GNU and Linux and many collaborators, although there is always more software to write and bugs to fix. So while the FSF does still sponsor free software development in specific areas, there are thankfully many other organizations also doing this."
A key part of the challenge John is referring to is getting the right hardware into the hands of the right people.
"What we have been focusing on now are the challenges I highlighted in the first question. We are in desperate need of hardware in several different areas that fully supports free software. We have been talking a lot at the FSF about what we can do to address this, and I expect us to be making some significant moves to both increase our support for some of the projects already out there—as we have been doing to some extent through our Respects Your Freedom certification program—and possibly to launch some projects of our own. The same goes for the network service problem. I think we need to tackle them together, because having full control over the mobile components has great potential for changing how we relate to services, and decentralizing more and more services will in turn shape the mobile components."
"I hope folks will support the FSF as we work to grow and tackle these challenges. Hardware is expensive and difficult, as is making usable, decentralized, federated replacements for network services. We're going to need the resources and creativity of a lot of people. But, 30 years ago, a community rallied around RMS and the concept of copyleft to write an entire operating system. I've spent my last 12 years at the FSF because I believe we can rise to the new challenges in the same way."
### Final thoughts ###
In reading John's thoughtful responses to my questions, and in knowing various FSF members, the one sense that resonates for me is the sheer level of passion that is alive and kicking in the FSF. This is not an organization that has grown bored or disillusioned with its mission. Its passion and commitment are as fierce as they have ever been.
While I don't always agree with the FSF and I sometimes think its approach is a little one-dimensional, I have been and will continue to be a huge fan and supporter of its work. The FSF represents the ethical heartbeat of much of the free software and open source work that happens across the world. It represents a world view that is pretty hard to the left, but I believe its passion and conviction helps to bring people further to the right a little closer to the left too.
Sure, RMS can be odd, somewhat hardline, and a little sensational, but he is precisely the kind of leader that is valuable in a movement that encapsulates a mixture of technology, ethics, and culture. We need an RMS in much the same way we need a Torvalds, a Shuttleworth, a Whitehurst, and a Zemlin. These different people bring together a mixture of perspectives that ultimately maps to technology that can be adaptable to almost any set of use cases, ethics, and ambitions.
So, in closing, I want to thank the FSF for its tremendous efforts, and I wish the FSF and its fearless leaders, one Richard M. Stallman and one John Sullivan, another 30 years of fighting the good fight. Go get 'em!
> This article is part of Jono Bacon's Six Degrees column, where he shares his thoughts and perspectives on culture, communities, and trends in open source.
--------------------------------------------------------------------------------
via: http://opensource.com/business/15/9/free-software-foundation-30-years
作者:[Jono Bacon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensource.com/users/jonobacon
[1]:Welcome back, folks, to a new Six Degrees column. As usual, please send your thoughts on this piece to the comment box and your suggestions for future columns to my inbox.
[2]:http://www.fsf.org/
[3]:http://twitter.com/johns_fsf/
[4]:https://en.wikipedia.org/wiki/Amiga_1000
[5]:https://en.wikipedia.org/wiki/Adobe_PageMaker
[6]:https://www.youtube.com/watch?v=izGwDsrQ1eQ
[7]:http://fsf.org/
[8]:http://emailselfdefense.fsf.org/
[9]:http://fsf.org/ryf
[10]:http://directory.fsf.org/
[11]:http://www.replicant.us/
[12]:http://www.osthinktank.com/
[13]:http://www.fsf.org/about/what-is-free-software
[14]:http://www.gnu.org/licenses/gpl-3.0.en.html
[15]:http://www.gnu.org/licenses/licenses.en.html
@ -1,30 +0,0 @@
Italy's Ministry of Defense to Drop Microsoft Office in Favor of LibreOffice
================================================================================
>**LibreItalia's Italo Vignoli [reports][1] that the Italian Ministry of Defense is about to migrate to the LibreOffice open-source software for productivity and adopt the Open Document Format (ODF), while moving away from proprietary software products.**
The movement comes in the form of a [collaboration][1] between Italy's Ministry of Defense and the LibreItalia Association. Sonia Montegiove, President of the LibreItalia Association, and Ruggiero Di Biase, Rear Admiral and General Executive Manager of Automated Information Systems at the Ministry of Defense in Italy, signed an agreement for a collaboration to adopt the LibreOffice office suite in all of the Ministry's offices.
While the LibreItalia non-profit organization promises to help the Italian Ministry of Defense with trainers for their offices across the country, the Ministry will start the implementation of the LibreOffice software in October 2015 with online training courses for their staff. The entire transition process is expected to be completed by the end of 2016. An Italian law lets officials seek open source software alternatives to well-known commercial software.
"Under the agreement, the Italian Ministry of Defense will develop educational content for a series of online training courses on LibreOffice, which will be released to the community under Creative Commons, while the partners, LibreItalia, will manage voluntarily the communication and training of trainers in the Ministry," says Italo Vignoli, Honorary President of LibreItalia.
### The Ministry of Defense will adopt the Open Document Format (ODF)
The initiative will allow the Italian Ministry of Defense to be independent from proprietary software applications, which are aimed at individual productivity, and adopt open source document format standards like Open Document Format (ODF), which is used by default in the LibreOffice office suite. The project follows similar moves already made by the governments of other European countries, including the United Kingdom, France, Spain, Germany, and the Netherlands.
It would appear that numerous other public institutions all over Italy are using open source alternatives, including the Italian Region Emilia Romagna, Galliera Hospital in Genoa, Macerata, Cremona, Trento and Bolzano, Perugia, the municipalities of Bologna, ASL 5 of Veneto, Piacenza and Reggio Emilia, and many others. AGID (Agency for Digital Italy) welcomes this project and hopes that other public institutions will do the same.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/italy-s-ministry-of-defense-to-drop-microsoft-office-in-favor-of-libreoffice-491850.shtml
作者:[Marius Nestor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://www.libreitalia.it/accordo-di-collaborazione-tra-associazione-libreitalia-onlus-e-difesa-per-ladozione-del-prodotto-libreoffice-quale-pacchetto-di-produttivita-open-source-per-loffice-automation/
[2]:http://www.libreitalia.it/chi-siamo/
@ -1,3 +1,4 @@
icybreaker translating...
14 tips for teaching open source development
================================================================================
Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford.
@ -69,4 +70,4 @@ via: http://opensource.com/education/15/9/teaching-open-source-development-under
[a]:http://opensource.com/users/mariamkiran
[1]:https://basecamp.com/
[2]:https://www.mantisbt.org/
@ -1,37 +0,0 @@
Red Hat CEO Optimistic on OpenStack Revenue Opportunity
================================================================================
Red Hat continues to accelerate its growth thanks to an evolving mix of platform and infrastructure technology revolving around Linux and the cloud. Red Hat announced its second quarter fiscal 2016 financial results on September 21, once again exceeding expectations.
![](http://www.serverwatch.com/imagesvr_ce/1212/icon-redhatcloud-r.jpg)
For the quarter, Red Hat reported revenue of $504 million for a 13 percent year-over-year gain. Net Income was reported at $51 million, up from $47 million in the second quarter of fiscal 2015. Looking forward, Red Hat provided some aggressive guidance for the coming quarter and the full year. For the third quarter, Red Hat provided guidance for revenue to be in the range of $519 million to $523 million, which is a 15 percent year-over-year gain.
On a full year basis, Red Hat's full year guidance is for fiscal 2016 revenue of $2.044 billion, for a 14 percent year-over-year gain.
Red Hat CFO Frank Calderoni commented during the earnings call that all of Red Hat's top 30 largest deals were approximately $1 million or more. He noted that Red Hat had four deals that were in excess of $5 million and one deal that was well over $10 million. As has been the case in recent years, cross selling across Red Hat products is strong with 65 percent of all deals including one or more components from Red Hat's group of application development and emerging technologies offerings.
"We expect the growing adoption of these technologies, like Middleware, the RHEL OpenStack platform, OpenShift, cloud management and storage, to continue to drive revenue growth," Calderoni said.
### OpenStack ###
During the earnings call, Red Hat CEO Jim Whitehurst was repeatedly asked about the revenue prospects for OpenStack. Whitehurst said that the recently released Red Hat OpenStack Platform 7.0 is a big jump forward thanks to the improved installer.
"It does a really good job of kind of identifying hardware and lighting it up," Whitehurst said. "Of course, that means there's a lot of work to do around certifying that hardware, making sure it lights up appropriately."
Whitehurst said that he's starting to see a lot more production applications starting to move to the OpenStack cloud. He cautioned, however, that it's still largely the early adopters moving to OpenStack in production, and it isn't quite mainstream yet.
From a competitive perspective, Whitehurst talked specifically about Microsoft, HP, and Mirantis. In Whitehurst's view, many organizations will continue to use multiple operating systems, and if they choose Microsoft for one part, they are more likely to choose an open-source option as the alternative. Whitehurst said he doesn't see a lot of head-to-head competition against HP in cloud, but he does see Mirantis.
"We've had several wins of people who were moving away from Mirantis to RHEL," Whitehurst said.
--------------------------------------------------------------------------------
via: http://www.serverwatch.com/server-news/red-hat-ceo-optimistic-on-openstack-revenue-opportunity.html
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm
@ -1,49 +0,0 @@
A Slick New Set-Up Wizard Is Coming To Ubuntu and Ubuntu Touch
================================================================================
> Canonical aims to 'seduce and reassure' those unfamiliar with the OS by making a good first impression
**The Ubuntu installer is set to undergo a dramatic makeover.**
Ubuntu will modernise its out-of-the-box experience (OOBE) to be easier and quicker to complete, look more ‘seductive’ to new users, and better present the Ubuntu brand through its design.
Ubiquity, the current Ubuntu installer, has largely remained unchanged since its [introduction back in 2010][1].
### First Impressions Are Everything ###
Since the first thing most users see when trying Ubuntu for the first time is an installer (or set-up wizard, depending on device) the design team feel it’s “one of the most important categories of software usability”.
“It essentially says how easy your software is to use, as well as introducing the user into your brand through visual design and tone of voice, which can convey familiarity and trust within your product.”
Canonical’s new OOBE designs show a striking departure from the current look of the Ubiquity installer used by the Ubuntu desktop, and presents a refined approach to the way mobile users ‘set up’ a new Ubuntu Phone.
![Old design (left) and the new proposed design](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/desktop-2.jpg)
Old design (left) and the new proposed design
Detailing the designs in a [new blog post][2], the Canonical Design team say the aim of the revamp is to create a consistent out-of-the-box experience across Ubuntu devices.
To do this it groups together “common first experiences found on the mobile, tablet and desktop” and unifies the steps and screens between each, something they say moves the OS closer to “achieving a seamless convergent platform.”
![New Ubuntu installer on desktop/tablet (left) and phone](http://www.omgubuntu.co.uk/wp-content/uploads/2015/09/Convergence.jpg)
New Ubuntu installer on desktop/tablet (left) and phone
Implementation of the new ‘OOBE’ has already begun, according to Canonical, though as of writing there’s no firm word on when a revamped installer may land on either desktop or phone images.
With the march to ‘desktop’ convergence now in full swing, and a(nother) stack of design changes set to hit the mobile build ahead of the first Ubuntu Phone that ‘transforms’ into a PC, chances are you won’t have to wait too long to try it out.
**What do you think of the designs? How would you go about improving the Ubuntu set-up experience? Let us know in the comments below.**
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/09/new-look-ubuntu-installer-coming-soon
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/09/ubuntu-10-10s-installer-slideshow-oozes-class
[2]:http://design.canonical.com/wp-content/uploads/Convergence.jpg
@ -1,101 +0,0 @@
The Brief History Of Aix, HP-UX, Solaris, BSD, And LINUX
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
Always remember that when doors close on you, other doors open. [Ken Thompson][1] and [Dennis Ritchie][2] are a great example of that saying. They were two of the best information technology specialists of the **20th** century, and they created the **UNIX** system, which is considered one of the most influential and inspirational pieces of software ever written.
### The UNIX systems beginning at Bell Labs ###
**UNIX**, which was originally called **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice), has a great family tree and was never born by itself. The grandfather of UNIX was **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem), and the father was the **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice) project, which supported interactive timesharing for mainframe computers used by huge communities of users.
UNIX was born at **Bell Labs** in **1969**, created by **Ken Thompson** and later **Dennis Ritchie**. These two great researchers and scientists had worked on a collaborative project with **General Electric** and the **Massachusetts Institute of Technology** to create an interactive timesharing system called Multics.
Multics was created to combine timesharing with other technological advances, allowing the users to phone the computer from remote terminals, then edit documents, read e-mail, run calculations, and so on.
Over the next five years, AT&T invested millions of dollars in the Multics project. It purchased a mainframe computer called the GE-645 and dedicated to the effort top researchers at Bell Labs such as Ken Thompson, Stuart Feldman, Dennis Ritchie, M. Douglas McIlroy, Joseph F. Ossanna, and Robert Morris. The project was too ambitious and fell troublingly behind schedule. In the end, AT&T's leaders decided to abandon it.
Bell Labs managers decided to stop any further work on operating systems, which left many researchers frustrated and upset. But thanks to Thompson, Ritchie, and a few researchers who ignored their bosses' instructions and kept working on what they loved in their labs, UNIX was created as one of the greatest operating systems of all time.
UNIX started its life on a PDP-7 minicomputer, which was a testing machine for Thompson's ideas about operating system design and a platform for Thompson and Ritchie's game simulation, called Space Travel.
> “What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication,” Dennis Ritchie said.
UNIX was close to being the first system under which a programmer could sit down directly at a machine and start composing programs on the fly, exploring possibilities and testing while composing. Throughout its lifetime, UNIX has kept growing in capabilities by attracting skilled volunteer effort from programmers impatient with the limitations of other operating systems.
UNIX received its first funding, for a PDP-11/20, in 1970; the operating system was then officially named UNIX and could run on the PDP-11/20. UNIX's first real job came in 1971: supporting word processing for the patent department at Bell Labs.
### The C revolution on UNIX systems ###
Dennis Ritchie invented a higher-level programming language called "**C**" in **1972**; later, he and Ken Thompson decided to rewrite UNIX in "C" to give the system more portability options. They wrote and debugged almost 100,000 lines of code that year. The migration to the "C" language resulted in highly portable software that required only a relatively small amount of machine-dependent code to be replaced when porting UNIX to another computing platform.
UNIX was first formally presented to the outside world in 1973 at the Symposium on Operating Systems Principles, where Dennis Ritchie and Ken Thompson delivered a paper. AT&T then released Version 5 of the UNIX system and licensed it to educational institutions, and in 1975 it licensed Version 6 of UNIX to companies for the first time, at a cost of **$20,000**. The most widely used version of UNIX was Version 7, in 1980; anybody could purchase a license, but its terms were very restrictive. The license included the source code and the machine-dependent kernel, which was written in PDP-11 assembly language. In general, versions of UNIX systems were identified by the editions of their user manuals.
|
||||
|
||||
### The AIX System ###
|
||||
|
||||
In **1983**, **Microsoft** planned to make **Xenix** the multiuser successor to MS-DOS, and that year the Xenix-based Altos 586, with **512 KB** of RAM and a **10 MB** hard drive, was offered for $8,000. By 1984, there were 100,000 UNIX installations around the world running System V Release 2. In 1986, 4.3BSD was released, including an Internet name server, and the **AIX system** was announced by **IBM**, with an installation base of over 250,000. AIX is based on UNIX System V with BSD roots, a hybrid of both.
|
||||
|
||||
AIX was the first operating system to introduce a **journaled file system (JFS)** and an integrated Logical Volume Manager (LVM). IBM ported AIX to its RS/6000 platform in 1989. Version 5L, a breakthrough release introduced in 2001, provided Linux affinity and logical partitioning on POWER4 servers.
|
||||
|
||||
AIX introduced virtualization in 2004 with AIX 5.3 and Advanced Power Virtualization (APV), which offered symmetric multithreading, micro-partitioning, and shared processor pools.
|
||||
|
||||
In 2007, IBM started to enhance its virtualization product, coinciding with the release of AIX 6.1 and the POWER6 architecture. It also rebranded Advanced Power Virtualization as PowerVM.
|
||||
|
||||
The enhancements included a form of workload partitioning called WPARs, similar to Solaris Zones/Containers but with more functionality.
|
||||
|
||||
### The HP-UX System ###
|
||||
|
||||
**Hewlett-Packard’s UNIX (HP-UX)** was originally based on System V Release 3. The system initially ran exclusively on the PA-RISC HP 9000 platform. Version 1 of HP-UX was released in 1984.
|
||||
|
||||
Version 9 introduced SAM, a character-based user interface from which one can administer the system. Version 10, introduced in 1995, brought changes in the layout of the system file and directory structure, making it similar to AT&T SVR4.
|
||||
|
||||
Version 11 was introduced in 1997 and was HP’s first release to support 64-bit addressing. In 2000, this release was rebranded 11i, as HP introduced operating environments: bundled groups of layered applications for specific information technology purposes.
|
||||
|
||||
In 2001, Version 11.20 was introduced with support for Itanium systems. HP-UX was the first UNIX to use Access Control Lists (ACLs) for file permissions, and it was also one of the first to introduce built-in support for a Logical Volume Manager.
|
||||
|
||||
Nowadays, HP-UX uses Veritas as its primary file system, owing to a partnership between Veritas and HP.
|
||||
|
||||
HP-UX is currently at release 11i v3, update 4.
|
||||
|
||||
### The Solaris System ###
|
||||
|
||||
Sun’s UNIX version, **Solaris**, was the successor to **SunOS** and was introduced in 1992. SunOS was originally based on the BSD (Berkeley Software Distribution) flavor of UNIX, but SunOS versions 5.0 and later were based on UNIX System V Release 4 and were rebranded as Solaris.
|
||||
|
||||
SunOS version 1.0 was introduced with support for Sun-1 and Sun-2 systems in 1983, and version 2.0 followed in 1985. In 1987, Sun and AT&T announced that they would collaborate on a project to merge System V and BSD into a single release based on SVR4.
|
||||
|
||||
Solaris 2.4 was Sun’s first SPARC/x86 release. The last release of SunOS was version 4.1.4, announced in November 1994. Solaris 7 was the first 64-bit UltraSPARC release, and it added native support for file system metadata logging.
|
||||
|
||||
Solaris 9 was introduced in 2002, with support for Linux capabilities and the Solaris Volume Manager. Solaris 10, introduced in 2005, brought a number of innovations, such as Solaris Containers, the new ZFS file system, and Logical Domains.
|
||||
|
||||
The Solaris system is presently at version 10; the latest update was released in 2008.
|
||||
|
||||
### Linux ###
|
||||
|
||||
By 1991 there was a growing demand for a free alternative to commercial UNIX, so **Linus Torvalds** set out to create a new free operating system kernel that eventually became **Linux**. Linux started as a small number of “C” files under a license that prohibited commercial distribution. Linux is a UNIX-like system, but it is not UNIX.
|
||||
|
||||
Version 3.18 of the kernel was introduced in 2015 under the GNU General Public License. IBM said that more than 18 million lines of code are open source and available to developers.
|
||||
|
||||
The GNU General Public License (GPL) has become the most widely used free software license. In accordance with open source principles, this license permits individuals and organizations the freedom to distribute, run, copy, share, study, and modify the code of the software.
|
||||
|
||||
### UNIX vs. Linux: Technical Overview ###
|
||||
|
||||
- Linux encourages more diversity: Linux developers come from a wider range of backgrounds, with different experiences and opinions.
|
||||
- Linux can run on a wider range of platforms and architectures than UNIX.
|
||||
- Developers of UNIX commercial editions have a specific target platform and audience in mind for their operating system.
|
||||
- **Linux is more secure than UNIX**, as it is less affected by virus threats and malware attacks. Linux has had about 60–100 viruses to date, none of which are currently spreading. UNIX, on the other hand, has had 85–120 viruses, some of which are still spreading.
|
||||
- UNIX commands, tools, and elements rarely change; some interfaces and command-line arguments remain the same in later versions of UNIX.
|
||||
- Some Linux development projects get funded on a voluntary basis such as Debian. The other projects maintain a community version of commercial Linux distributions such as SUSE with openSUSE and Red Hat with Fedora.
|
||||
- Traditional UNIX is about scaling up; Linux, on the other hand, is about scaling out.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
|
||||
|
||||
作者:[M.el Khamlichi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/pirat9/
|
||||
[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/
|
||||
[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/
|
@ -1,79 +0,0 @@
|
||||
unicornx 翻译中... ...
|
||||
New Collaborative Group to Speed Real-Time Linux
|
||||
================================================================================
|
||||
![](http://www.linux.com/images/stories/66866/Tux-150.png)
|
||||
|
||||
The Linux Foundation’s [announcement][1] at LinuxCon this week that it was assuming funding control over the Real-Time Linux project gave renewed hope that embedded Linux will complete its 15-year campaign to achieve equivalence with RTOSes in real-time operations. The RTL group is being reinvigorated as a Real-Time Linux Collaborative Project, with better funding, more developers, and closer integration with mainline kernel development.
|
||||
|
||||
According to the Linux Foundation, moving RTL under its umbrella “will save the industry millions of dollars in research and development.” The move will also “improve quality of the code through robust upstream kernel test infrastructure,” says the Foundation.
|
||||
|
||||
Over the past decade, the RTL project has been overseen, and more recently, funded, by the [Open Source Automation Development Lab][2], which is continuing on as a Gold member of the new collaborative project, but will hand funding duties over to the Linux Foundation in January. The RTL project and [OSADL][3] have been responsible for maintaining the RT-Preempt (or [Preempt-RT][4]) patches, and periodically updating them to mainline Linux.
|
||||
|
||||
The task is about 90 percent complete, according to Dr. Carsten Emde, longtime General Manager of OSADL. “It’s like building a house,” he explains. “The main components such as the walls, windows, and doors are already in place, or in our case, things like high-resolution timers, interrupt threads, and priority-inheritance mutexes. But then you need all these little bits and pieces such as carpets and wallpaper to finish the job.”
|
||||
|
||||
According to Emde, real-time Linux is already technologically equivalent to most real-time operating systems – assuming you’re willing to hassle with all the patches. “The goal of the project was to provide a Linux system with a predefined deterministic worst-case latency and nothing else,” says Emde. “This goal is reached today when a kernel is patched, and the same goal will be reached when a future unpatched mainline RT kernel will be used. The only – of course important – difference is that the maintenance work will be much less when we do no longer need to continually adapt off-tree components to mainline.”
|
||||
|
||||
The RTL Collaborative Group will continue under the guidance of Thomas Gleixner, the key maintainer over the past decade. This week, Gleixner was appointed a Linux Foundation Fellow, joining a select group that includes Linux kernel stable maintainer Greg Kroah-Hartman, Yocto Project maintainer Richard Purdie, and Linus Torvalds.
|
||||
|
||||
According to Emde, RTL’s secondary maintainer Steven Rostedt of Red Hat, who “maintains older but still maintained kernel versions,” will continue to participate in the project along with Red Hat’s Ingo Molnár, who was a key developer of RTL but has had more of an advisory role in recent years. Somewhat surprisingly, however, Red Hat is not one of the RTL Collaborative Group’s members. Instead, Google takes the top spot as the lone Platinum member, while Gold members include National Instruments (NI), OSADL, and Texas Instruments (TI). Silver members include Altera, ARM, Intel, and IBM.
|
||||
|
||||
### The Long Road to Real Time ###
|
||||
|
||||
When Linux first appeared in embedded devices more than 15 years ago, it faced an embedded computing market dominated by RTOSes such as Wind River’s VxWorks, which continue to offer the highly deterministic, hardened kernels required by many industrial, avionics, and transportation applications. Like Microsoft’s already then established – and more real-time – Windows CE, Linux faced resistance and outright mockery from potential industrial clients. These desktop-derived distributions might be okay for lightweight consumer electronics, it was argued, but they lacked the hardened kernels that made RTOSes the choice for devices requiring deterministic task scheduling for split-second reliability.
|
||||
|
||||
Improving Linux’s real-time capabilities was an [early goal][5] of embedded Linux pioneers such as [MontaVista][6]. Over the years, RTL development was accelerated and formalized in various groups such as OSADL, which [was founded in 2006][7], as well as the Real-Time Linux Foundation (RTLF). When RTLF [merged with OSADL][8] in 2009, OSADL and its RTL group took full ownership over the PREEMPT-RT patch maintenance and upstreaming process. OSADL also oversees other automation-related projects such as [Safety Critical Linux][9].
|
||||
|
||||
OSADL’s stewardship over RTL progressed in three stages: advocacy and outreach, testing and quality assessment, and finally, funding. Early on, OSADL’s role was to write articles, make presentations, organize training, and “spread the word” about the advantages of RTL, says Emde. “To introduce a new technology such as Linux and its community-based development model into the rather conservative automation industry required first of all to build confidence,” he says. “Switching from a proprietary RTOS to Linux means that companies must introduce new strategies and processes in order to interact with a community.”
|
||||
|
||||
Later, OSADL moved on to providing technical performance data, establishing [a quality assessment and testing center][10], and providing assistance to its industrial members in open source legal compliance and safety certifications.
|
||||
|
||||
As RTL grew more mature, pulling even with the fading Windows CE in real-time capabilities and increasingly [cutting into RTOS market share][11], rival real-time Linux projects – principally [Xenomai][12] – have begun to integrate with it.
|
||||
|
||||
“The success of the RT patches, and the clear prospective that they would eventually be merged completely, has led to a change of focus at Xenomai,” says Emde. “Xenomai 3.0 can be used in combination with the RT patches and provide so-called ‘skins’ that allow you to recycle real-time source code that was written for other systems. They haven’t been completely unified, however, since Xenomai uses a dual kernel approach whereas the RT patches apply only to a single Linux kernel.”
|
||||
|
||||
In more recent years, the RTL group’s various funding sources have dropped off, and OSADL took on that role, too. “When the development recently slowed down a bit because of a lack of funding, OSADL started its third milestone by directly funding Thomas Gleixner's work,” says Emde.
|
||||
|
||||
As Emde wrote in an [Oct. 5 blog entry][13], the growing expansion of Real-Time Linux beyond its core industrial base to areas like automotive and telecom suggested that the funding should be expanded as well. “It would not be entirely fair to let the automation industry fund the complete remaining work on its own, since other industries such as telecommunication also rely on the availability of a deterministic Linux kernel,” wrote Emde.
|
||||
|
||||
When the Linux Foundation showed interest in expanding its funding role, OSADL decided it would be “much more efficient to have a single funding and control channel,” says Emde. He adds, however, that as a Gold member, OSADL is still participating in the oversight of the project, and will continue its advocacy and quality assurance activities.
|
||||
|
||||
### Automotive Looks for Real-Time Boost ###
|
||||
|
||||
RTL will continue to see its greatest growth in industrial applications where it will gradually replace RTOS applications, says Emde. Yet, it is also growing quickly in automotive, and will later spread to railway and avionics, he adds.
|
||||
|
||||
Indeed, the growing role of Linux in automotive appears to be key to the Linux Foundation’s goals for RTL, with potential collaborations with its [Automotive Grade Linux][14] (AGL) workgroup. Automotive may also be the chief motivator for Google’s high-profile participation, speculates Emde. In addition, TI is deeply involved with automotive with its Jacinto processors.
|
||||
|
||||
Linux-oriented automotive projects like AGL aim to move Linux beyond in-vehicle infotainment (IVI) into cluster controls and telematics where RTOSes like QNX dominate. Autonomous vehicles are even in greater need of real-time performance.
|
||||
|
||||
Emde notes that OSADL's [SIL2LinuxMP][15] project may play an important role in extending RTL into automotive. SIL2LinuxMP is not an automotive-specific project, but BMW is participating, and automotive is one of the key applications. The project aims to certify base components required for RTL to run on a single- or multi-core COTS board. It defines bootloader, root filesystem, Linux kernel, and C library bindings to access RTL.
|
||||
|
||||
Autonomous drones and robots are also ripe for real-time, and Xenomai is already used in many robots, as well as some drones. Yet, RTL’s role will be limited in the wider embedded Linux world of consumer electronics and Internet of Things applications. The main barrier is the latency of wireless communications and the Internet itself.
|
||||
|
||||
“Real-time Linux will have a role within machine control and between machines and peripheral devices, but less between remote machines,” says Emde. “Real-time via Internet will probably never be possible.”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linux.com/news/software/applications/858828-new-collaborative-group-to-speed-real-time-linux
|
||||
|
||||
作者:[Eric Brown][a]
|
||||
译者:[unicornx](https://github.com/unicornx)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linux.com/community/forums/person/42808
|
||||
[1]:http://www.linuxfoundation.org/news-media/announcements/2015/10/linux-foundation-announces-project-advance-real-time-linux
|
||||
[2]:http://archive.linuxgizmos.com/celebrating-the-open-source-automation-development-labs-first-birthday/
|
||||
[3]:https://www.osadl.org/
|
||||
[4]:http://linuxgizmos.com/adding-real-time-to-linux-with-preempt-rt/
|
||||
[5]:http://archive.linuxgizmos.com/real-time-linux-what-is-it-why-do-you-want-it-how-do-you-do-it-a/
|
||||
[6]:http://www.linux.com/news/embedded-mobile/mobile-linux/841651-embedded-linux-pioneer-montavista-spins-iot-linux-distribution
|
||||
[7]:http://archive.linuxgizmos.com/industry-group-aims-linux-at-automation-apps/
|
||||
[8]:http://archive.linuxgizmos.com/industrial-linux-groups-merge/
|
||||
[9]:https://www.osadl.org/Safety-Critical-Linux.safety-critical-linux.0.html
|
||||
[10]:http://www.osadl.org/QA-Farm-Realtime.qa-farm-about.0.html
|
||||
[11]:http://www.linux.com/news/embedded-mobile/mobile-linux/818011-embedded-linux-keeps-growing-amid-iot-disruption-says-study
|
||||
[12]:http://xenomai.org/
|
||||
[13]:https://www.osadl.org/Single-View.111+M5dee6946dab.0.html
|
||||
[14]:http://www.linux.com/news/embedded-mobile/mobile-linux/833358-first-open-automotive-grade-linux-spec-released
|
||||
[15]:http://www.osadl.org/SIL2LinuxMP.sil2-linux-project.0.html
|
205
sources/talk/20151019 Gaming On Linux--All You Need To Know.md
Normal file
@ -0,0 +1,205 @@
|
||||
213edu Translating
|
||||
|
||||
Gaming On Linux: All You Need To Know
|
||||
================================================================================
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Gaming-on-Linux.jpeg)
|
||||
|
||||
**Can I play games on Linux?**
|
||||
|
||||
This is one of the most frequently asked questions by people who are thinking about [switching to Linux][1]. After all, gaming on Linux is often termed a distant possibility. In fact, some people even wonder if they can listen to music or watch movies on Linux. Considering that, questions about native Linux games seem genuine.
|
||||
|
||||
In this article, I am going to answer most of the Linux gaming questions a Linux beginner may have. For example: is it possible to play games on Linux; if yes, what Linux games are available; where can you **download Linux games** from; and how do you get more information about gaming on Linux.
|
||||
|
||||
But before I do that, let me make a confession. I am not a PC gamer, or rather I should say, I am not a desktop Linux gamer. I prefer to play games on my PS4 and I don’t care about PC games or even mobile games (no Candy Crush requests sent to anyone in my friend list). This is the reason you see only a few articles in the [Linux games][2] section of It’s FOSS.
|
||||
|
||||
So why am I covering this topic then?
|
||||
|
||||
Because I have been asked questions about playing games on Linux several times, and I wanted to come up with a Linux gaming guide that could answer all those questions. And remember, it’s not just gaming on Ubuntu I am talking about here. I am talking about Linux in general.
|
||||
|
||||
### Can you play games on Linux? ###
|
||||
|
||||
Yes and no!
|
||||
|
||||
Yes, you can play games on Linux and no, you cannot play ‘all the games’ in Linux.
|
||||
|
||||
Confused? Don’t be. What I mean is that you can get plenty of popular games on Linux, such as [Counter Strike and Metro: Last Light][3], but you might not get all the latest and most popular Windows games on Linux, e.g., [PES 2015][4].
|
||||
|
||||
The reason, in my opinion, is that Linux has less than 2% of desktop market share and these numbers are demotivating enough for most game developers to avoid working on the Linux version of their games.
|
||||
|
||||
This means there is a strong possibility that the most talked-about games of the year may not be playable on Linux. Don’t despair: there are ‘other means’ to get these games on Linux, and we shall see them in the coming sections. But before that, let’s talk about what kinds of games are available for Linux.
|
||||
|
||||
If I have to categorize, I’ll divide them in four categories:
|
||||
|
||||
1. Native Linux Games
|
||||
1. Windows games in Linux
|
||||
1. Browser Games
|
||||
1. Terminal Games
|
||||
|
||||
Let’s start with the most important one, native Linux games, first.
|
||||
|
||||
----------
|
||||
|
||||
### 1. Where to find native Linux games? ###
|
||||
|
||||
Native Linux games are those games which are officially supported on Linux. These games have a native Linux client and can be installed like most other applications on Linux, without requiring any additional effort (we’ll see about those in the next section).
|
||||
|
||||
So, as you can see, there are games developed for Linux. The next question is where you can find these Linux games and how you can play them. Here are some of the resources where you can get Linux games.
|
||||
|
||||
#### Steam ####
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/09/Install-Steam-Ubuntu-11.jpeg)
|
||||
|
||||
“[Steam][5] is a digital distribution platform for video games. As the Amazon Kindle is a digital distribution platform for e-books, and iTunes for music, Steam is one for games. It provides you the option to buy and install games, play multiplayer titles, and stay in touch with other gamers via social networking on its platform. The games are protected with [DRM][6].”
|
||||
|
||||
A couple of years ago, when the gaming platform Steam announced support for Linux, it was big news. It was an indication that gaming on Linux was being taken seriously. Though Steam’s decision was influenced more by its own Linux-based gaming console and a separate [Linux distribution called SteamOS][7], it was still a reassuring move that has brought a number of games to Linux.
|
||||
|
||||
I have written a detailed article about installing and using Steam. If you are getting started with Steam, do read it.
|
||||
|
||||
- [Install and use Steam for gaming on Linux][8]
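On most desktop distributions the Steam client can also be pulled straight from the package repositories. A minimal sketch, assuming a Debian/Ubuntu system (the exact package name and repository section are assumptions; see the full guide linked above for details):

```shell
# Assumption: the "steam" package is available in your configured
# repositories (on some Ubuntu releases it lives in "multiverse").
sudo apt-get update
sudo apt-get install steam
```

After installation, the client appears in the application menu and handles game downloads itself.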
|
||||
|
||||
#### GOG.com ####
|
||||
|
||||
[GOG.com][9] is another platform similar to Steam. Like Steam, you can browse and find hundreds of native Linux games on GOG.com, purchase the games and install them. If the games support several platforms, you can download and use them across various operating systems. Your purchased games are available for you all the time in your account. You can download them anytime you wish.
|
||||
|
||||
One main difference between the two is that GOG.com offers only DRM-free games and movies. Also, GOG.com is entirely web based, so you don’t need to install a client like Steam. You can simply download the games from your browser and install them on your system.
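Since GOG.com hands you a plain installer file, installing a game is just a matter of running it. A sketch, assuming you have already downloaded a GOG Linux installer (the file name here is a placeholder, not a real game):

```shell
# "game_installer.sh" is a placeholder for whatever installer you
# downloaded from your GOG.com library page.
chmod +x game_installer.sh   # make the downloaded installer executable
./game_installer.sh          # run it; the game unpacks into your home directory
```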
|
||||
|
||||
#### Portable Linux Games ####
|
||||
|
||||
[Portable Linux Games][10] is a website that has a collection of a number of Linux games. The unique and best thing about Portable Linux Games is that you can download and store the games for offline installation.
|
||||
|
||||
The downloaded files have all the dependencies bundled (at times Wine and a Perl installation), and they are also platform independent. All you need to do is download the files and double-click to install them. You can store the downloaded files on an external hard disk and use them later. Highly recommended if you don’t have continuous access to a high-speed internet connection.
|
||||
|
||||
#### Game Drift Game Store ####
|
||||
|
||||
[Game Drift][11] is actually a Linux distribution based on Ubuntu with sole focus on gaming. While you might not want to start using this Linux distribution for the sole purpose of gaming, you can always visit its game store online and see what games are available for Linux and install them.
|
||||
|
||||
#### Linux Game Database ####
|
||||
|
||||
As the name suggests, the [Linux Game Database][12] is a website with a huge collection of Linux games. You can browse through various categories of games and download/install them from the game developers’ websites. As a member of the Linux Game Database, you can even rate the games. LGDB aims, in a way, to be the IGN or IMDB for Linux games.
|
||||
|
||||
#### Penguspy ####
|
||||
|
||||
Created by a gamer who refused to use Windows for playing games, [Penguspy][13] showcases a collection of some of the best Linux games. You can browse games based on category and if you like the game, you’ll have to go to the respective game developer’s website.
|
||||
|
||||
#### Software Repositories ####
|
||||
|
||||
Look into the software repositories of your own Linux distribution. There always will be some games in it. If you are using Ubuntu, Ubuntu Software Center itself has an entire section for games. Same is true for other Linux distributions such as Linux Mint etc.
|
||||
|
||||
----------
|
||||
|
||||
### 2. How to play Windows games in Linux? ###
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Wine-Linux.png)
|
||||
|
||||
So far we have talked about native Linux games. But there are not many of them, or to be more precise, most popular games are not available for Linux, though they are available for Windows PCs. So the question arises: how do you play Windows games on Linux?
|
||||
|
||||
The good thing is that with the help of tools like Wine, PlayOnLinux and CrossOver, you can play a number of popular Windows games on Linux.
|
||||
|
||||
#### Wine ####
|
||||
|
||||
Wine is a compatibility layer which is capable of running Windows applications on systems like Linux, BSD and OS X. With the help of Wine, you can install and use a number of Windows applications on Linux.
|
||||
|
||||
[Installing Wine in Ubuntu][14] or any other Linux distribution is easy, as it is available in most distributions’ repositories. There is a huge [database of applications and games supported by Wine][15] that you can browse.
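The typical workflow looks roughly like this; a sketch assuming a Debian/Ubuntu system and a hypothetical installer name (check the Wine application database linked above for your specific game first):

```shell
# Assumptions: the package is named "wine" in your repositories, and
# "game-setup.exe" is a placeholder for a downloaded Windows installer.
sudo apt-get install wine    # install the compatibility layer
wine --version               # confirm wine is on the PATH
wine game-setup.exe          # run the Windows installer under Wine
```

Once installed, the game usually ends up under `~/.wine/drive_c/` and can be launched with `wine` as well.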
|
||||
|
||||
#### CrossOver ####
|
||||
|
||||
[CrossOver][16] is an improved version of Wine that adds professional and technical support. But unlike Wine, CrossOver is not free: you’ll have to purchase a yearly license for it. The good thing about CrossOver is that every purchase funds the Wine developers, which boosts the development of Wine to support more Windows games and applications. If you can afford $48 a year, you should buy CrossOver for the support it provides.
|
||||
|
||||
#### PlayOnLinux ####
|
||||
|
||||
PlayOnLinux is also based on Wine, but implemented differently. It has a different interface and is slightly easier to use than Wine. Like Wine, PlayOnLinux is free to use. You can browse the [applications and games supported by PlayOnLinux in its database][17].
|
||||
|
||||
----------
|
||||
|
||||
### 3. Browser Games ###
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/Chrome-Web-Store.jpeg)
|
||||
|
||||
Needless to say, there are tons of browser-based games available to play on any operating system, be it Windows, Linux, or Mac OS X. Most of the addictive mobile games, such as [GoodGame Empire][18], also have web browser counterparts.
|
||||
|
||||
Apart from that, thanks to [Google Chrome Web Store][19], you can play some more games in Linux. These Chrome games are installed like a standalone app and they can be accessed from the application menu of your Linux OS. Some of these Chrome games are playable offline as well.
|
||||
|
||||
----------
|
||||
|
||||
### 4. Terminal Games ###
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/03/nSnake_Linux_terminal_game.jpeg)
|
||||
|
||||
An added advantage of using Linux is that you can use the command-line terminal to play games. I know it’s not the best way to play games, but at times it’s fun to play games like [Snake][20] or [2048][21] in the terminal. There is a good collection of Linux terminal games at [this blog][22]. You can browse through it and play the ones you want.
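Terminal games are also a nice excuse to write your own. As a toy illustration (not one of the games linked above, just a self-contained sketch in plain POSIX shell), here is a tiny number-guessing game driven by a list of guesses:

```shell
#!/bin/sh
# Toy terminal game: guess_game SECRET GUESS...
# Prints "higher"/"lower" hints for each wrong guess and stops on a hit.
guess_game() {
    secret=$1; shift
    tries=0
    for g in "$@"; do
        tries=$((tries + 1))
        if [ "$g" -eq "$secret" ]; then
            echo "Guessed $secret in $tries tries"
            return 0
        elif [ "$g" -lt "$secret" ]; then
            echo "higher"
        else
            echo "lower"
        fi
    done
    echo "Out of guesses"
    return 1
}

guess_game 7 5 9 7
```

An interactive version would read guesses with `read` in a loop, but the list-driven form above keeps the sketch easy to test.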
|
||||
|
||||
----------
|
||||
|
||||
### How to stay updated about Linux games? ###
|
||||
|
||||
Now that you have learned a lot about what kinds of games are available on Linux and how you can use them, the next question is how to stay updated about new games on Linux. For that, I advise you to follow these blogs, which cover the latest happenings of the Linux gaming world:
|
||||
|
||||
- [Gaming on Linux][23]: I won’t be wrong if I call it the best Linux gaming news portal. You get all the latest rumblings and news about Linux games. Frequently updated, Gaming on Linux has a dedicated fan following, which makes it a nice community of Linux game lovers.
|
||||
- [Free Gamer][24]: A blog focusing on free and open source games.
|
||||
- [Linux Game News][25]: A Tumblr blog that provides updates on various Linux games.
|
||||
|
||||
#### What else? ####
|
||||
|
||||
I think that’s pretty much what you need to know to get started with gaming on Linux. If you are still not convinced, I would advise you to [dual boot Linux with Windows][26]: use Linux as your main desktop, and if you want to play games, boot into Windows. This could be a compromise solution.
|
||||
|
||||
|
||||
|
||||
It’s time for you to add your input. Do you play games on your Linux desktop? What are your favorites? What blogs do you follow to stay updated on the latest Linux games?
|
||||
|
||||
|
||||
投票项目:
|
||||
How do you play games on Linux?
|
||||
|
||||
- I use Wine and PlayOnLinux along with native Linux Games
|
||||
- I am happy with Browser Games
|
||||
- I prefer the Terminal Games
|
||||
- I use native Linux games only
|
||||
- I play it on Steam
|
||||
- I dual boot and go in to Windows to play games
|
||||
- I don't play games at all
|
||||
|
||||
注:投票代码
|
||||
<div class="PDS_Poll" id="PDI_container9132962" style="display:inline-block;"></div>
|
||||
<div id="PD_superContainer"></div>
|
||||
<script type="text/javascript" charset="UTF-8" src="http://static.polldaddy.com/p/9132962.js"></script>
|
||||
<noscript><a href="http://polldaddy.com/poll/9132962">Take Our Poll</a></noscript>
|
||||
|
||||
注,发布时根据情况看怎么处理
|
||||
|
||||
--------------------------------------------------------------------------------

via: http://itsfoss.com/linux-gaming-guide/

Author: [Abhishek][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/reasons-switch-linux-windows-xp/
[2]:http://itsfoss.com/category/games/
[3]:http://blog.counter-strike.net/
[4]:https://pes.konami.com/tag/pes-2015/
[5]:http://store.steampowered.com/
[6]:https://en.wikipedia.org/wiki/Digital_rights_management
[7]:http://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/
[8]:http://itsfoss.com/install-steam-ubuntu-linux/
[9]:http://www.gog.com/
[10]:http://www.portablelinuxgames.org/
[11]:http://gamedrift.org/GameStore.html
[12]:http://www.lgdb.org/
[13]:http://www.penguspy.com/
[14]:http://itsfoss.com/wine-1-5-11-released-ppa-available-to-download/
[15]:https://appdb.winehq.org/
[16]:https://www.codeweavers.com/products/
[17]:https://www.playonlinux.com/en/supported_apps.html
[18]:http://empire.goodgamestudios.com/
[19]:https://chrome.google.com/webstore/category/apps
[20]:http://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/
[21]:http://itsfoss.com/play-2048-linux-terminal/
[22]:https://ttygames.wordpress.com/
[23]:https://www.gamingonlinux.com/
[24]:http://freegamer.blogspot.fr/
[25]:http://linuxgamenews.com/
[26]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[27]:http://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/

18 Years of GNOME Design and Software Evolution: Step by Step
================================================================================

Note: YouTube video

<iframe width="660" height="371" src="https://www.youtube.com/embed/MtmcO5vRNFQ?feature=oembed" frameborder="0" allowfullscreen></iframe>

[GNOME][1] (GNU Network Object Model Environment) was started on August 15th, 1997 by two Mexican programmers, Miguel de Icaza and Federico Mena. GNOME is a free software project that develops a desktop environment and applications through volunteers and paid full-time developers. The entire GNOME Desktop Environment is open source software and supports Linux, FreeBSD, OpenBSD and others.

Now let's look back at the first versions of GNOME:

### GNOME 1 ###

![GNOME 1.0 - First major GNOME release](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.0/gnome.png)

**GNOME 1.0** (1999) – First major GNOME release

![GNOME 1.2 Bongo](https://raw.githubusercontent.com/paulcarroty/Articles/master/GNOME_History/1.2/1361441938.or.86429.png)

**GNOME 1.2** “Bongo”, 2000

![GNOME 1.4 Tranquility](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/1.4/1.png)

**GNOME 1.4** “Tranquility”, 2001

### GNOME 2 ###

![GNOME 2.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.0/1.png)

**GNOME 2.0**, 2002

Major upgrade based on GTK+2. Introduction of the Human Interface Guidelines.

![GNOME 2.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.2/GNOME_2.2_catala.png)

**GNOME 2.2**, 2003

Multimedia and file manager improvements.

![GNOME 2.4 Temujin](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.4/gnome-desktop.png)

**GNOME 2.4** “Temujin”, 2003

First release of Epiphany Browser, accessibility support.

![GNOME 2.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.6/Adam_Hooper.png)

**GNOME 2.6**, 2004

Nautilus changes to a spatial file manager, and a new GTK+ file dialog is introduced. A short-lived fork of GNOME, GoneME, is created as a response to the changes in this version.

![GNOME 2.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.8/3.png)

**GNOME 2.8**, 2004

Improved removable device support, adds Evolution

![GNOME 2.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.10/GNOME-Screenshot-2.10-FC4.png)

**GNOME 2.10**, 2005

Lower memory requirements and performance improvements. Adds: new panel applets (modem control, drive mounter and trashcan); and the Totem and Sound Juicer applications.

![GNOME 2.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.12/gnome-livecd.jpg)

**GNOME 2.12**, 2005

Nautilus improvements; improvements in cut/paste between applications and freedesktop.org integration. Adds: Evince PDF viewer; New default theme: Clearlooks; menu editor; keyring manager and admin tools. Based on GTK+ 2.8 with cairo support

![GNOME 2.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.14/debian4-stable.jpg)

**GNOME 2.14**, 2006

Performance improvements (over 100% in some cases); usability improvements in user preferences; GStreamer 0.10 multimedia framework. Adds: Ekiga video conferencing application; Deskbar search tool; Pessulus lockdown editor; Fast user switching; Sabayon system administration tool.

![GNOME 2.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.16/Gnome-2.16-screenshot.png)

**GNOME 2.16**, 2006

Performance improvements. Adds: Tomboy notetaking application; Baobab disk usage analyser; Orca screen reader; GNOME Power Manager (improving laptop battery life); improvements to Totem, Nautilus; compositing support for Metacity; new icon theme. Based on GTK+ 2.10 with new print dialog

![GNOME 2.18](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.18/Gnome-2.18.1.png)

**GNOME 2.18**, 2007

Performance improvements. Adds: Seahorse GPG security application, allowing encryption of emails and local files; Baobab disk usage analyser improved to support ring chart view; Orca screen reader; improvements to Evince, Epiphany and GNOME Power Manager, Volume control; two new games, GNOME Sudoku and glChess. MP3 and AAC audio encoding.

![GNOME 2.20](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.20/rnintroduction-screenshot.png)

**GNOME 2.20**, 2007

Tenth anniversary release. Evolution backup functionality; improvements in Epiphany, EOG, GNOME Power Manager; password keyring management in Seahorse. Adds: PDF forms editing in Evince; integrated search in the file manager dialogs; automatic multimedia codec installer.

![GNOME 2.22, 2008](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.22/GNOME-2-22-2-Released-2.png)

**GNOME 2.22**, 2008

Addition of Cheese, a tool for taking photos from webcams and Remote Desktop Viewer; basic window compositing support in Metacity; introduction of GVFS; improved playback support for DVDs and YouTube, MythTV support in Totem; internationalised clock applet; Google Calendar support and message tagging in Evolution; improvements in Evince, Tomboy, Sound Juicer and Calculator.

![GNOME 2.24](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.24/gnome-224.jpg)

**GNOME 2.24**, 2008

Addition of the Empathy instant messenger client, Ekiga 3.0, tabbed browsing in Nautilus, better multiple screens support and improved digital TV support.

![GNOME 2.26](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.26/gnome226-large_001.jpg)

**GNOME 2.26**, 2009

New optical disc recording application Brasero, simpler file sharing, media player improvements, support for multiple monitors and fingerprint reader support.

![GNOME 2.28](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.28/1.png)

**GNOME 2.28**, 2009

Addition of GNOME Bluetooth module. Improvements to Epiphany web browser, Empathy instant messenger client, Time Tracker, and accessibility. Upgrade to GTK+ version 2.18.

![GNOME 2.30](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.30/GNOME2.30.png)

**GNOME 2.30**, 2010

Improvements to Nautilus file manager, Empathy instant messenger client, Tomboy, Evince, Time Tracker, Epiphany, and Vinagre. iPod and iPod Touch devices are now partially supported via GVFS through libimobiledevice. Uses GTK+ 2.20.

![GNOME 2.32](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/2.32/gnome-2-32.png.en_GB.png)

**GNOME 2.32**, 2010

Addition of Rygel and GNOME Color Manager. Improvements to Empathy instant messenger client, Evince, Nautilus file manager and others. 3.0 was intended to be released in September 2010, so a large part of the development effort since 2.30 went towards 3.0.

### GNOME 3 ###

![GNOME 3.0](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.0/chat-3-0.png)

**GNOME 3.0**, 2011

Introduction of GNOME Shell. A redesigned settings framework with fewer, more focused options. Topic-oriented help based on the Mallard markup language. Side-by-side window tiling. A new visual theme and default font. Adoption of GTK+ 3.0 with its improved language bindings, themes, touch, and multiplatform support. Removal of long-deprecated development APIs.

![GNOME 3.2](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.2/gdm.png)

**GNOME 3.2**, 2011

Online accounts support; Web applications support; contacts manager; documents and files manager; quick preview of files in the File Manager; greater integration; better documentation; enhanced looks and various performance improvements.

![GNOME 3.4](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.4/application-view.png)

**GNOME 3.4**, 2012

New Look for GNOME 3 Applications: Documents, Epiphany (now called Web), and GNOME Contacts. Search for documents from the Activities overview. Application menus support. Refreshed interface components: New color picker, redesigned scrollbars, easier to use spin buttons, and hideable title bars. Smooth scrolling support. New animated backgrounds. Improved system settings with new Wacom panel. Easier extensions management. Better hardware support. Topic-oriented documentation. Video calling and Live Messenger support in Empathy. Better accessibility: Improved Orca integration, better high contrast mode, and new zoom settings. Plus many other application enhancements and smaller details.

![GNOME 3.6](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.6/gnome-3-6.png)

**GNOME 3.6**, 2012

Refreshed Core components: New applications button and improved layout in the Activities Overview. A new login and lock screen. Redesigned Message Tray. Notifications are now smarter, more noticeable, easier to dismiss. Improved interface and settings for System Settings. The user menu now shows Power Off by default. Integrated Input Methods. Accessibility is always on. New applications: Boxes, that was introduced as a preview version in GNOME 3.4, and Clocks, an application to handle world times. Updated looks for Disk Usage Analyzer, Empathy and Font Viewer. Improved braille support in Orca. In Web, the previously blank start page was replaced by a grid that holds your most visited pages, plus better full screen mode and a beta of WebKit2. Evolution renders email using WebKit. Major improvements to Disks. Revamped Files application (also known as Nautilus), with new features like Recent files and search.

![GNOME 3.8](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.8/applications-view.png)

**GNOME 3.8**, 2013

Refreshed Core components: A new applications view with frequently used and all apps. An overhauled window layout. New input methods OSD switcher. The Notifications & Messaging tray now react to the force with which the pointer is pressed against the screen edge. Added Classic mode for those who prefer a more traditional desktop experience. The GNOME Settings application features an updated toolbar design. New Initial Setup assistant. GNOME Online Accounts integrates with more services. Web has been upgraded to use the WebKit2 engine. Web has a new private browsing mode. Documents has gained a new dual page mode & Google Documents integration. Improved user interface of Contacts. GNOME Files, GNOME Boxes and GNOME Disks have received a number of improvements. Integration of ownCloud. New GNOME Core Applications: GNOME Clocks and GNOME Weather.

![GNOME 3.10](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.10/GNOME-3-10-Release-Schedule-2.png)

**GNOME 3.10**, 2013

A reworked system status area, which gives a more focused overview of the system. A collection of new applications, including GNOME Maps, GNOME Notes, GNOME Music and GNOME Photos. New geolocation features, such as automatic time zones and world clocks. HiDPI support and smart card support. D-Bus activation made possible with GLib 2.38.

![GNOME 3.12](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.12/app-folders.png)

**GNOME 3.12**, 2014

Improved keyboard navigation and window selection in the Overview. Revamped first set-up utility based on usability tests. Wired networking re-added to the system status area. Customizable application folders in the Applications view. Introduction of new GTK+ widgets such as popovers in many applications. New tab style in GTK+. GNOME Videos GNOME Terminal and gedit were given a fresh look, more consistent with the HIG. A search provider for the terminal emulator is included in GNOME Shell. Improvements to GNOME Software and high-density display support. A new sound recorder application. New desktop notifications API. Progress in the Wayland port has reached a usable state that can be optionally previewed.

![GNOME 3.14](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.14/Top-Features-of-GNOME-3-14-Gallery-459893-2.jpg)

**GNOME 3.14**, 2014

Improved desktop environment animations. Improved touchscreen support. GNOME Software supports managing installed add-ons. GNOME Photos adds support for Google. Redesigned UI for Evince, Sudoku, Mines and Weather. Hitori is added as part of GNOME Games.

![GNOME 3.16](https://github.com/paulcarroty/Articles/raw/master/GNOME_History/3.16/preview-apps.png)

**GNOME 3.16**, 2015

33,000 changes. Major changes include: the UI color scheme goes from black to charcoal; overlay scroll bars are added; improvements to notifications, including integration with the Calendar applet; tweaks to various apps including Files, Image Viewer, and Maps; access to Preview Apps; continued porting from X11 to Wayland.

Thanks to [Wikipedia][2] for the short changelog reviews, and another big thanks to the GNOME Project! Stay tuned!

--------------------------------------------------------------------------------

via: https://tlhp.cf/18-years-of-gnome-evolution/

Author: [Pavlo Rudyi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://tlhp.cf/author/paul/
[1]:https://www.gnome.org/
[2]:https://en.wikipedia.org/wiki/GNOME

30 Years of Free Software Foundation: Best Quotes of Richard Stallman
================================================================================

Note: YouTube video

<iframe width="660" height="495" src="https://www.youtube.com/embed/aIL594DTzH4?feature=oembed" frameborder="0" allowfullscreen></iframe>

**Richard Matthew Stallman** (rms) is one of the biggest figures in information technology. He is a computer programmer and architect (GNU Compiler Collection (GCC), GNU Debugger, Emacs), a software freedom evangelist, and the founder of the [GNU Project][1] and the [FSF][2].

**GNU** is a recursive acronym for “GNU’s Not Unix!”. GNU is a collection of free computer software for Unix-like operating systems and can be used with the GNU/Hurd and Linux kernels. It was announced on September 27, 1983. General components:

- GNU Compiler Collection (GCC)
- GNU C library (glibc)
- GNU Core Utilities (coreutils)
- GNU Debugger (GDB)
- GNU Binary Utilities (binutils)
- GNU Bash shell
- GNOME desktop environment
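
To make the recursive acronym concrete, here is a toy sketch (our own illustration, not part of any GNU software) that unrolls the definition a few steps; each step rewrites the leading "GNU" as the phrase it stands for:

```python
def expand_gnu(depth: int) -> str:
    """Expand the recursive acronym 'GNU' the given number of times.

    Each step replaces the leading 'GNU' with "GNU's Not Unix!",
    which itself still begins with 'GNU', so the recursion never
    bottoms out.
    """
    text = "GNU"
    for _ in range(depth):
        # Replace only the first (leading) occurrence of 'GNU'.
        text = text.replace("GNU", "GNU's Not Unix!", 1)
    return text

print(expand_gnu(0))  # GNU
print(expand_gnu(1))  # GNU's Not Unix!
print(expand_gnu(2))  # GNU's Not Unix!'s Not Unix!
```

However many times you expand it, the name still begins with "GNU", which is the joke.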

Note: video

<video src="//static.fsf.org/nosvn/FSF30-video/FSF_30_720p.webm" controls="controls" width="640" height="390"></video>

The **Free Software Foundation** (FSF) is a non-profit organization that promotes free software and computer user freedom and defends users’ rights. Read more information here. It was founded on October 4, 1985.

- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

These are the Four Freedoms of free software.

Here are quotes from Richard Stallman about freedom, software, society, philosophy and other things.

**About Facebook:**

> Facebook is not your friend, it is a surveillance engine.

**About Android:**

> Android is very different from the GNU/Linux operating system because it contains very little of GNU. Indeed, just about the only component in common between Android and GNU/Linux is Linux, the kernel.

**About computer industry:**

> The computer industry is the only industry that is more fashion-driven than women's fashion.

**About cloud computing:**

> The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do.

**About ethics:**

> Whether gods exist or not, there is no way to get absolute certainty about ethics. Without absolute certainty, what do we do? We do the best we can.

**About freedom:**

> Free software is software that respects your freedom and the social solidarity of your community. So it's free as in freedom.

**About goal and idealism:**

> If you want to accomplish something in the world, idealism is not enough - you need to choose a method that works to achieve the goal.

**About sharing:**

> Sharing is good, and with digital technology, sharing is easy.

**About Facebook (extended version):**

> Facebook mistreats its users. Facebook is not your friend; it is a surveillance engine. For instance, if you browse the Web and you see a 'like' button in some page or some other site that has been displayed from Facebook. Therefore, Facebook knows that your machine visited that page.

**About web applications:**

> One reason you should not use web applications to do your computing is that you lose control.
>
> If you use a proprietary program or somebody else's web server, you're defenceless. You're putty in the hands of whoever developed that software.

**About books:**

> With paper printed books, you have certain freedoms. You can acquire the book anonymously by paying cash, which is the way I always buy books. I never use a credit card. I don't identify to any database when I buy books. Amazon takes away that freedom.

**About MPAA:**

> Officially, MPAA stands for Motion Picture Association of America, but I suggest that MPAA stands for Malicious Power Attacking All.

**About money and career:**

> I could have made money this way, and perhaps amused myself writing code. But I knew that at the end of my career, I would look back on years of building walls to divide people, and feel I had spent my life making the world a worse place.

**About proprietary software:**

> Proprietary software keeps users divided and helpless. Divided because each user is forbidden to redistribute it to others, and helpless because the users can't change it since they don't have the source code. They can't study what it really does. So the proprietary program is a system of unjust power.

**About smartphones:**

> A smartphone is a computer - it's not built using a computer - the job it does is the job of being a computer. So, everything we say about computers, that the software you run should be free - you should insist on that - applies to smart phones just the same. And likewise to those tablets.

**About CD and digital content:**

> CD stores have the disadvantage of an expensive inventory, but digital bookshops would need no such thing: they could write copies at the time of sale on to memory sticks, and sell you one if you forgot your own.

**About the paradigm of competition:**

> The paradigm of competition is a race: by rewarding the winner, we encourage everyone to run faster. When capitalism really works this way, it does a good job; but its defenders are wrong in assuming it always works this way.

**About vi and emacs:**

> People sometimes ask me if it is a sin in the Church of Emacs to use vi. Using a free version of vi is not a sin; it is a penance. So happy hacking.

**About freedom and history:**

> Value your freedom or you will lose it, teaches history. 'Don't bother us with politics', respond those who don't want to learn.

**About patents:**

> Fighting patents one by one will never eliminate the danger of software patents, any more than swatting mosquitoes will eliminate malaria.
>
> Software patents are dangerous to software developers because they impose monopolies on software ideas.

**About copyrights:**

> In practice, the copyright system does a bad job of supporting authors, aside from the most popular ones. Other authors' principal interest is to be better known, so sharing their work benefits them as well as readers.

**About pay for work:**

> There is nothing wrong with wanting pay for work, or seeking to maximize one's income, as long as one does not use means that are destructive.

**About Chrome OS:**

> In essence, Chrome OS is the GNU/Linux operating system. However, it is delivered without the usual applications, and rigged up to impede and discourage installing applications.

**About Linux users:**

> Many users of the GNU/Linux system will not have heard the ideas of free software. They will not be aware that we have ideas, that a system exists because of ethical ideals, which were omitted from ideas associated with the term 'open source.'

**About privacy on Facebook:**

> If there is a Like button in a page, Facebook knows who visited that page. And it can get IP address of the computer visiting the page even if the person is not a Facebook user.

**About programming:**

> Programming is not a science. Programming is a craft.
>
> My favorite programming languages are Lisp and C. However, since around 1992 I have worked mainly on free software activism, which means I am too busy to do much programming. Around 2008 I stopped doing programming projects.
>
> C++ is a badly designed and ugly language. It would be a shame to use it in Emacs.

**About hacking and learning programming:**

> People could no longer learn hacking the way I did, by starting to work on a real operating system, making real improvements. In fact, in the 1980s I often came across newly graduated computer science majors who had never seen a real program in their lives. They had only seen toy exercises, school exercises, because every real program was a trade secret. They never had the experience of writing features for users to really use, and fixing the bugs that real users came across. The things you need to know to do real work.
>
> It is hard to write a simple definition of something as varied as hacking, but I think what these activities have in common is playfulness, cleverness, and exploration. Thus, hacking means exploring the limits of what is possible, in a spirit of playful cleverness. Activities that display playful cleverness have "hack value".

**About web browsing:**

> For personal reasons, I do not browse the web from my computer. (I also have no net connection much of the time.) To look at page I send mail to a daemon which runs wget and mails the page back to me. It is very efficient use of my time, but it is slow in real time.
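
The workflow Stallman describes, mailing a URL to a daemon that runs wget and mails the page back, can be sketched roughly like this. This is a hypothetical simulation of the idea, not his actual setup; the `fetch` and `send_mail` functions are stand-ins we inject ourselves:

```python
def serve_page_request(message_body, fetch, send_mail):
    """Handle one incoming mail: treat the body as a URL,
    fetch the page, and mail its contents back to the requester.

    `fetch` and `send_mail` are injected so the sketch stays
    self-contained; in the described setup, `fetch` would shell
    out to wget and `send_mail` would hand off to a local MTA.
    """
    url = message_body.strip()
    page = fetch(url)                  # e.g. wget -O - <url>
    send_mail(subject=url, body=page)  # mail the page back

# A tiny demonstration with stand-in functions.
outbox = []
serve_page_request(
    " https://example.org/ \n",
    fetch=lambda url: f"<html>contents of {url}</html>",
    send_mail=lambda subject, body: outbox.append((subject, body)),
)
print(outbox[0][0])  # https://example.org/
```

The page arrives asynchronously by mail rather than in a browser, which is exactly the trade-off he notes: efficient use of his time, but slow in real time.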

**About music sharing:**

> Friends share music with each other, they don't allow themselves to be divided by a system that says that nobody is supposed to have copies.

--------------------------------------------------------------------------------

via: https://tlhp.cf/fsf-richard-stallman/

Author: [Pavlo Rudyi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://tlhp.cf/author/paul/
[1]:http://www.gnu.org/
[2]:http://www.fsf.org/

Mark Shuttleworth – The Man Behind Ubuntu Operating System
================================================================================

![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth-652x445.jpg)

**Mark Richard Shuttleworth** is the founder of **Ubuntu**, or "the man behind Ubuntu" as they call him. He was born in 1973 in Welkom, South Africa. He is an entrepreneur and a space tourist who later became the **first citizen of an independent African country to travel to space**.

Mark also founded **Thawte**, an Internet commerce security company, in 1995, while he was studying finance and IT at the University of Cape Town.

In 2000, Mark founded HBD, an investment company, and also created the Shuttleworth Foundation in order to fund innovative leaders in society with a combination of fellowships and investments.

> “The mobile world is crucial to the future of the PC. This month, for example, it became clear that the traditional PC is shrinking in favor of tablets. So if we want to be relevant on the PC, we have to figure out how to be relevant in the mobile world first. Mobile is also interesting because there’s no pirated Windows market. So if you win a device to your OS, it stays on your OS. In the PC world, we are constantly competing with “free Windows”, which presents somewhat unique challenges. So our focus now is to establish a great story around Ubuntu and mobile form factors – the tablet and the phone – on which we can build deeper relationships with everyday consumers.”
>
> — Mark Shuttleworth

In 2002, he flew to the International Space Station as a member of the crew of Soyuz mission TM-34, after a year of training in Star City, Russia. After running a campaign to promote science, code, and mathematics to aspiring astronauts and other ambitious types at schools in South Africa, Mark founded **Canonical Ltd.**, and in 2013 he provided leadership for the Ubuntu operating system for software development purposes.

Today, Shuttleworth holds dual citizenship of the United Kingdom and South Africa. He currently lives in the lovely Mallards botanical garden on the Isle of Man, with 18 precocious ducks, his equally lovely girlfriend Claire, two black female dogs and the occasional itinerant sheep.

> “Computer is not a device anymore. It is an extension of your mind and your gateway to other people.”
>
> — Mark Shuttleworth

### Mark Shuttleworth’s Early life ###

As mentioned above, Mark was born in Welkom, in South Africa’s Orange Free State, as the son of a surgeon and a nursery-school teacher. Mark attended Western Province Preparatory School, where he eventually became Head Boy in 1986, followed by one term at Rondebosch Boys’ High School, and later Bishops/Diocesan College, where he was again Head Boy in 1991.

Mark obtained a Bachelor of Business Science degree in Finance and Information Systems at the University of Cape Town, where he lived in Smuts Hall. As a student, he became involved in the installation of the first residential Internet connections at his university.

> “There are many examples of companies and countries that have improved their competitiveness and efficiency by adopting open source strategies. The creation of skills through all levels is of fundamental importance to both companies and countries.”
>
> — Mark Shuttleworth

### Mark Shuttleworth’s Career ###

Mark founded Thawte, which specialized in digital certificates and Internet security, in 1995, and sold it to VeriSign in 1999, earning about $575 million at the time.

In 2000, Mark formed HBD Venture Capital (Here Be Dragons), a business incubator and venture capital provider. In 2004, he formed Canonical Ltd. for the promotion and commercial support of free software development projects, especially the Ubuntu operating system. In 2009, Mark stepped down as CEO of Canonical Ltd.

> “In the early days of the DCC I preferred to let the proponents do their thing and then see how it all worked out in the end. Now we are pretty close to the end.”
>
> — Mark Shuttleworth

### Linux and FOSS with Mark Shuttleworth ###
|
||||
|
||||
In the late 1990s, Mark participated as one of developers of Debian operating system.
|
||||
|
||||
In 2001, Mark formed the Shuttleworth Foundation, It is non-profit organization dedicated to the social innovation that also funds free, educational, and open source software projects in South Africa, including Freedom Toaster.
|
||||
|
||||
In 2004, Mark returned to free software world by funding software development of Ubuntu, as it was Linux distribution based on Debian, throughout his company Canonical Ltd.
|
||||
|
||||
In 2005, Mark founded Ubuntu Foundation and made initial investment of 10 million dollars. In Ubuntu project, Mark is often referred to with tongue-in-cheek title “**SABDFL (Self-Appointed Benevolent Dictator for Life)**”. To come up with list of names of people in order to hire for the entire project, Mark took about six months of Debian mailing list archives with him during his travelling to Antarctica aboard icebreaker Kapitan Khlebnikov in 2004. In 2005, Mark purchased 65% stake of Impi Linux.
|
||||
|
||||
> “I urge telecommunications regulators to develop a commercial strategy for delivering effective access to the continent.”
|
||||
>
|
||||
> — Mark Shuttleworth
|
||||
|
||||
In 2006, it was announced that Shuttleworth became **first patron of KDE**, which was highest level of sponsorship available at the time. This patronship ended in 2012, with financial support together for Kubuntu, which was Ubuntu variant with KDE as a main desktop.
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/shuttleworth-kde.jpg)
|
||||
|
||||
In 2009, Shuttleworth announced that he would step down as CEO of Canonical in order to focus more energy on partnerships, product design, and customers. Jane Silber, who had been COO at Canonical since 2004, took over as CEO.
|
||||
|
||||
In 2010, Mark received an honorary degree from the Open University for this work.
|
||||
|
||||
In 2012, Mark and Kenneth Rogoff took part in a debate opposite Peter Thiel and Garry Kasparov at the Oxford Union; the debate was entitled "**The Innovation Enigma**".
|
||||
|
||||
In 2013, Mark and Ubuntu were awarded the **Austrian anti-privacy Big Brother Award** for sending local Ubuntu Unity Dash searches to Canonical's servers by default. A year earlier, in 2012, Mark had defended the anonymization method that was used.
|
||||
|
||||
> “All the major PC companies now ship PC’s with Ubuntu pre-installed. So we have a very solid set of working engagements in the industry. But those PC companies are nervous to promote something new to PC buyers. If we can get PC buyers familiar with Ubuntu as a phone and tablet experience, then they may be more willing buy it on the PC too. Because no OS ever succeeded by emulating another OS. Android is great, but if we want to succeed we need to bring something new and better to market. We are all at risk of stagnating if we don’t pursue the future, vigorously. But if you pursue the future, you have to accept that not everybody will agree with your vision.”
|
||||
>
|
||||
> — Mark Shuttleworth
|
||||
|
||||
### Mark Shuttleworth’s Spaceflight ###
|
||||
|
||||
Mark gained worldwide fame in 2002 as the second self-funded space tourist and the first South African to travel to space. Flying through Space Adventures, Mark launched aboard the Russian Soyuz TM-34 mission as a spaceflight participant, paying approximately $20 million for the voyage. Two days later, the Soyuz spacecraft arrived at the International Space Station, where Mark spent 8 days participating in experiments related to AIDS and genome research. Later in 2002, Mark returned to Earth on Soyuz TM-33. To participate in the flight, Mark had to undergo one year of preparation and training, including 7 months spent in Star City, Russia.
|
||||
|
||||
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/10/Mark-Shuttleworth1.jpg)
|
||||
|
||||
While in space, Mark had a radio conversation with Nelson Mandela and a 14-year-old South African girl, Michelle Foster, who asked Mark to marry her. Mark politely dodged the question, stating that he was much honored by it, before cunningly changing the subject. The terminally ill Foster was given the opportunity to talk with Mark and Nelson Mandela by the Reach for a Dream foundation.
|
||||
|
||||
Upon returning, Mark traveled widely and spoke about the spaceflight to schoolchildren around the world.
|
||||
|
||||
> “The raw numbers suggest that Ubuntu continues to grow in terms of actual users. And our partnerships – Dell, HP, Lenovo on the hardware front, and gaming companies like EA, Valve joining up on the software front – make me feel like we continue to lead where it matters.”
|
||||
>
|
||||
> — Mark Shuttleworth
|
||||
|
||||
### Mark Shuttleworth’s Transport ###
|
||||
|
||||
Mark has a private jet, a Bombardier Global Express, that is often referred to as Canonical One, though it is in fact owned through the HBD Venture Capital company. The dragon depicted on the side of the plane is Norman, HBD Venture Capital's mascot.
|
||||
|
||||
### The Legal Clash with South African Reserve Bank ###
|
||||
|
||||
When Mark moved R2.5 billion in capital from South Africa to the Isle of Man, the South African Reserve Bank imposed a R250 million levy to release his assets. Mark appealed, and after a lengthy legal battle the Reserve Bank was ordered to repay the R250 million, plus interest. Mark announced that he would donate the entire amount to a trust to be established to help others take cases to the Constitutional Court.
|
||||
|
||||
> “The exit charge was not inconsistent with the Constitution. The dominant purpose of the exit charge was not to raise revenue but rather to regulate conduct by discouraging the export of capital to protect the domestic economy.”
|
||||
>
|
||||
> — Judge Dikgang Moseneke
|
||||
|
||||
In 2015, however, the Constitutional Court of South Africa reversed and set aside the findings of the lower courts, ruling that the dominant purpose of the exit charge was to regulate conduct rather than to raise revenue.
|
||||
|
||||
### Mark Shuttleworth’s likes ###
|
||||
|
||||
Cesária Évora, mp3s, Spring, Chelsea, finally seeing something obvious for the first time, coming home, Sinatra, daydreaming, sundowners, flirting, d’Urberville, string theory, Linux, particle physics, Python, reincarnation, MiG-29s, snow, travel, Mozilla, lime marmalade, body shots, the African bush, leopards, Rajasthan, Russian saunas, snowboarding, weightlessness, Iain M. Banks, broadband, Alastair Reynolds, fancy dress, skinny-dipping, flashes of insight, post-adrenaline euphoria, the inexplicable, convertibles, Clifton, country roads, the International Space Station, machine learning, artificial intelligence, Wikipedia, Slashdot, kitesurfing, and Manx lanes.
|
||||
|
||||
### Shuttleworth’s dislikes ###
|
||||
|
||||
Admin, salary negotiations, legalese, and public speaking.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/mark-shuttleworth-man-behind-ubuntu-operating-system/
|
||||
|
||||
作者:[M.el Khamlichi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/pirat9/
|
@ -0,0 +1,35 @@
|
||||
Linus Torvalds Lambasts Open Source Programmers over Insecure Code
|
||||
================================================================================
|
||||
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg)
|
||||
|
||||
Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers, as well as the importance of security in Linux kernel code.
|
||||
|
||||
Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile.
|
||||
|
||||
On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. "This is just sh*t."
|
||||
|
||||
He went on to call the coders "just incompetent and out to lunch."
|
||||
|
||||
What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand.
|
||||
|
||||
Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way.
|
||||
|
||||
Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality.
|
||||
|
||||
But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones.
|
||||
|
||||
Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse
|
||||
|
||||
作者:[Christopher Tozzi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://thevarguy.com/author/christopher-tozzi
|
||||
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
|
||||
[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate
|
@ -1,4 +1,3 @@
|
||||
Translating by ZTinoZ
|
||||
Installation Guide for Puppet on Ubuntu 15.04
|
||||
================================================================================
|
||||
Hi everyone, today in this article we'll learn how to install Puppet to manage your server infrastructure running Ubuntu 15.04. Puppet is an open source software configuration management tool, developed and maintained by Puppet Labs, that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines, Puppet automates tasks that system administrators often do manually, freeing up time and mental space so sysadmins can work on improving other aspects of the overall setup. It ensures consistency, reliability and stability of the automated jobs, and it facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two editions: **Puppet Open Source and Puppet Enterprise**. Puppet Open Source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Puppet Enterprise is a proven commercial solution for diverse enterprise IT environments, which provides all the benefits of open source Puppet, plus Puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.
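To make "configuration management" concrete before the installation steps, here is a minimal sketch of the kind of Puppet manifest such a setup lets you apply. The `ntp` package/service names and the temp-file path are illustrative assumptions, not something this guide specifies; on a real node you would put the resources in a `.pp` file and run `puppet apply`.

```shell
# Write a tiny illustrative Puppet manifest to a temp file and count its
# resources. The 'ntp' resources are hypothetical examples of the DSL.
manifest="$(mktemp)"
cat > "$manifest" <<'EOF'
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
EOF
# Count the declared resources (the package and the service).
grep -c '^[a-z]* {' "$manifest"
```

On a Puppet-managed node this manifest declares the desired state ("ntp installed and running"), and Puppet converges the system to it on every run.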
|
||||
|
@ -1,233 +0,0 @@
|
||||
How to Setup Zephyr Test Management Tool on CentOS 7.x
|
||||
================================================================================
|
||||
Test management encompasses anything and everything that you need to do as a tester. Test management tools are used to store information on how testing is to be done, to plan testing activities, and to report the status of quality assurance activities. So in this article we will walk you through the setup of the Zephyr test management tool, which includes everything needed to manage the test process and can save testers the hassle of installing separate applications for the testing process. Once you are done with its setup, you will be able to track bugs and defects, and collaborate on project tasks with your team, as you can easily share and access data across multiple project teams for communication and collaboration throughout the testing process.
|
||||
|
||||
### Requirements for Zephyr ###
|
||||
|
||||
We are going to install and run Zephyr with the following minimum set of resources; they can be increased as per your infrastructure requirements. We will be installing Zephyr on CentOS 7 64-bit, while binary distributions are available for almost all Linux operating systems.
|
||||
|
||||
注:表格
|
||||
<table width="669" style="height: 231px;">
|
||||
<tbody>
|
||||
<tr>
|
||||
<td width="660" colspan="3"><strong>Zephyr test management tool</strong></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>Linux OS</strong></td>
|
||||
<td width="312">CentOS Linux 7 (Core), 64-bit</td>
|
||||
<td width="209"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>Packages</strong></td>
|
||||
<td width="312">JDK 7 or above, Oracle JDK 6 update</td>
|
||||
<td width="209">No Prior Tomcat, MySQL installed</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>RAM</strong></td>
|
||||
<td width="312">4 GB</td>
|
||||
<td width="209">Preferred 8 GB</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>CPU</strong></td>
|
||||
<td width="521" colspan="2">2.0 GHZ or Higher</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="140"><strong>Hard Disk</strong></td>
|
||||
<td width="521" colspan="2">30 GB, at least 5 GB must be free</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
You must have super user (root) access to perform the Zephyr installation, and make sure that you have properly configured your network with a static IP address. Zephyr's default set of ports must be available and allowed through the firewall: ports 80/443, 8005, 8009 and 8010 will be used by Tomcat, and port 443 or 2099 will be used within Zephyr by Flex for the RTMP protocol.
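On a stock CentOS 7 host, the ports listed above would typically be opened through firewalld. The sketch below only prints the required commands rather than running them (executing them needs root and a live firewalld, and firewalld itself is an assumption about your setup); review the output and run it as root on the real server.

```shell
# Generate the firewalld invocations for the ports Zephyr needs.
# Pipe this script's output to `sudo sh` on the real server, or run the
# printed lines by hand.
for port in 80 443 8005 8009 8010 2099; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```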
|
||||
|
||||
### Install Java JDK 7 ###
|
||||
|
||||
Java JDK 7 is the basic requirement for the installation of Zephyr. If it is not already installed on your operating system, do the following to install Java and configure its JAVA_HOME environment variable properly.
|
||||
|
||||
Let’s issue the below commands to install Java JDK 7.
|
||||
|
||||
[root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1
|
||||
|
||||
----------
|
||||
|
||||
[root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
|
||||
|
||||
Once Java is installed along with its required dependencies, run the following commands to set the JAVA_HOME environment variable.
|
||||
|
||||
[root@centos-007 ~]# export JAVA_HOME=/usr/java/default
|
||||
[root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH
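Note that `export` lines like the ones above last only for the current shell session. A common way to persist them is a profile script; the sketch below writes to a temp file so it is self-contained, with the usual `/etc/profile.d/java.sh` target being an assumption on my part rather than something this guide specifies.

```shell
# Persist JAVA_HOME via a profile script. A temp file stands in for
# /etc/profile.d/java.sh here, purely for illustration.
profile="$(mktemp)"
cat > "$profile" <<'EOF'
export JAVA_HOME=/usr/java/default
export PATH=/usr/java/default/bin:$PATH
EOF
# Source it and confirm JAVA_HOME is set.
. "$profile"
echo "$JAVA_HOME"
```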
|
||||
|
||||
Now check the Java version to verify the installation with the following command.
|
||||
|
||||
    [root@centos-007 ~]# java -version
|
||||
|
||||
----------
|
||||
|
||||
java version "1.7.0_79"
|
||||
OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
|
||||
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
|
||||
|
||||
The output shows that we have successfully installed OpenJDK Java version 1.7.0_79.
|
||||
|
||||
### Install MySQL 5.6.X ###
|
||||
|
||||
If other MySQL versions are already installed on the machine, it is recommended to remove them and install this version, or upgrade their schemas to what is specified, as this specific major/minor version of MySQL (5.6.X), with the root username, is a prerequisite of Zephyr.
|
||||
|
||||
To install MySQL 5.6 on CentOS 7.1, follow these steps:
|
||||
|
||||
Download the rpm package, which will create a yum repo file for MySQL Server installation.
|
||||
|
||||
[root@centos-007 ~]# yum install wget
|
||||
[root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
|
||||
|
||||
Now install the downloaded rpm package using the rpm command.
|
||||
|
||||
[root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm
|
||||
|
||||
After the installation of this package you will get two new yum repos related to MySQL. Then, using the yum command, we will install MySQL Server 5.6; all dependencies will be installed along with it.
|
||||
|
||||
[root@centos-007 ~]# yum install mysql-server
|
||||
|
||||
Once the installation process completes, run the following commands to start the mysqld service and check whether it is active or not.
|
||||
|
||||
[root@centos-007 ~]# service mysqld start
|
||||
[root@centos-007 ~]# service mysqld status
|
||||
|
||||
On a fresh installation of MySQL Server, the MySQL root user's password is blank. As good security practice, we should reset the MySQL root user's password: connect to MySQL using the empty password and change the root password.
|
||||
|
||||
[root@centos-007 ~]# mysql
|
||||
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
|
||||
mysql> flush privileges;
|
||||
mysql> quit;
|
||||
|
||||
Now we need to configure the required database parameters in MySQL's default configuration file. Open the file located in the "/etc/" folder and update it as follows.
|
||||
|
||||
[root@centos-007 ~]# vi /etc/my.cnf
|
||||
|
||||
----------
|
||||
|
||||
[mysqld]
|
||||
datadir=/var/lib/mysql
|
||||
socket=/var/lib/mysql/mysql.sock
|
||||
symbolic-links=0
|
||||
|
||||
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
|
||||
max_allowed_packet=150M
|
||||
max_connections=600
|
||||
default-storage-engine=INNODB
|
||||
character-set-server=utf8
|
||||
collation-server=utf8_unicode_ci
|
||||
|
||||
[mysqld_safe]
|
||||
log-error=/var/log/mysqld.log
|
||||
pid-file=/var/run/mysqld/mysqld.pid
|
||||
default-storage-engine=INNODB
|
||||
character-set-server=utf8
|
||||
collation-server=utf8_unicode_ci
|
||||
|
||||
[mysql]
|
||||
max_allowed_packet = 150M
|
||||
[mysqldump]
|
||||
quick
|
||||
|
||||
Save the changes made to the configuration file and restart the mysql service.
|
||||
|
||||
[root@centos-007 ~]# service mysqld restart
|
||||
|
||||
### Download Zephyr Installation Package ###
|
||||
|
||||
We are done with the installation of the packages required to install Zephyr. Now we need to get the binary distribution package of Zephyr and its license key. Go to the official Zephyr download link, http://download.yourzephyr.com/linux/download.php, enter your email ID and click to download.
|
||||
|
||||
![Zephyr Download](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png)
|
||||
|
||||
Then confirm the email address you entered, and you will get the Zephyr download link and its license key link. Click on the provided links and choose the appropriate version for your operating system to download the binary installation package and its license file to the server.
|
||||
|
||||
We have placed it in the home directory and modified its permissions to make it executable.
|
||||
|
||||
![Zephyr Binary](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png)
|
||||
|
||||
### Start Zephyr Installation and Configuration ###
|
||||
|
||||
Now we are ready to start the installation of Zephyr by executing its binary installation script as below.
|
||||
|
||||
    [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh -c
|
||||
|
||||
Once you run the above command, it will check that the Java environment variables are properly set up and configured. If there is some misconfiguration, you might see an error like:
|
||||
|
||||
testing JVM in /usr ...
|
||||
Starting Installer ...
|
||||
Error : Either JDK is not found at expected locations or JDK version is mismatched.
|
||||
Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher.
|
||||
|
||||
Once you have properly configured Java, the Zephyr installation will start and ask you to press "o" to proceed or "c" to cancel the setup. Type "o" and press the "Enter" key to start the installation.
|
||||
|
||||
![install zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png)
|
||||
|
||||
The next step is to review all the requirements for the Zephyr setup; press "Enter" to move forward to the next option.
|
||||
|
||||
![zephyr requirements](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png)
|
||||
|
||||
To accept the license agreement type "1" and Press Enter.
|
||||
|
||||
I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter]
|
||||
|
||||
Here we need to choose the destination location where we want to install Zephyr and choose the ports. If you want to use ports other than the defaults, you are free to specify them here.
|
||||
|
||||
![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png)
|
||||
|
||||
Then customize the MySQL database parameters and give the right paths to the configuration file. You might see an error at this point, as shown below.
|
||||
|
||||
Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M).
|
||||
|
||||
To overcome this error, make sure that you have configured the "max_connection" and "max_allowed_packet" limits properly in the MySQL configuration file. To confirm these settings, connect to the MySQL server and run the commands shown.
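A quick way to confirm the limits before re-running the installer is to read them straight out of the configuration file. The sketch below generates a small sample file so it is self-contained; on the real server you would point `cnf` at `/etc/my.cnf` instead.

```shell
# Read max_connections back out of a my.cnf-style file. The sample file
# here mirrors the values configured earlier in this guide.
cnf="$(mktemp)"
printf '%s\n' '[mysqld]' 'max_allowed_packet=150M' 'max_connections=600' > "$cnf"
awk -F= '$1 == "max_connections" { print $2 }' "$cnf"
```

Zephyr requires at least 500 connections and a 50M packet limit, so the 600/150M values from the earlier configuration pass the check.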
|
||||
|
||||
![mysql connections](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png)
|
||||
|
||||
Once you have configured your mysql database properly, it will extract the configuration files to complete the setup.
|
||||
|
||||
![mysql customization](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png)
|
||||
|
||||
The installation process completes with a successful installation of Zephyr 4.7 on your computer. To launch the Zephyr Desktop, type "y" to finish the Zephyr installation.
|
||||
|
||||
![launch zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png)
|
||||
|
||||
### Launch Zephyr Desktop ###
|
||||
|
||||
Open your web browser and launch the Zephyr Desktop using your server's IP address; you will be directed to the Zephyr Desktop.
|
||||
|
||||
http://your_server_IP/zephyr/desktop/
|
||||
|
||||
![Zephyr Desktop](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png)
|
||||
|
||||
From your Zephyr dashboard, click on "Test Manager" and log in with the default username and password, which is "test.manager".
|
||||
|
||||
![Test Manage Login](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png)
|
||||
|
||||
Once you are logged in, you will be able to configure your administrative settings as shown. Choose the settings appropriate to your environment.
|
||||
|
||||
![Test Manage Administration](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png)
|
||||
|
||||
Save the settings when you are done with your administrative settings; similarly, configure resource management and project setup, and start using Zephyr as a complete test management tool. You can check and edit the status of your administrative settings from the Department Dashboard Management, as shown.
|
||||
|
||||
![zephyr dashboard](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard.png)
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Cheers! We are done with the complete setup of Zephyr on CentOS 7.1. We hope you are now much more aware of the Zephyr test management tool, which offers the prospect of streamlining the testing process and allows quick access to data analysis, collaborative tools and easy communication across multiple project teams. Feel free to comment if you run into any difficulty while setting it up in your environment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|
@ -1,81 +0,0 @@
|
||||
How To Use iPhone In Antergos Linux
|
||||
================================================================================
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-Antergos-Arch-Linux.jpg)
|
||||
|
||||
Troubles with an iPhone and Arch Linux? iPhone and Linux never really go along very well. In this tutorial, I am going to show you how you can use an iPhone in Antergos Linux. Since Antergos is based on Arch Linux, the same steps should be applicable to other Arch-based Linux distros such as Manjaro Linux.
|
||||
|
||||
So, recently I bought myself a brand new iPhone 6S, and when I connected it to Antergos Linux to copy some pictures, it was not detected at all. I could see that the iPhone was being charged and I had allowed the iPhone to ‘trust the computer’, but nothing was detected at all. I tried to run dmesg, but there was no trace of iPhone or Apple there. What is funny is that [libimobiledevice][1] was installed as well, which always fixes the [iPhone mount issue in Ubuntu][2].
|
||||
|
||||
I am going to show you how I am using an iPhone 6S, running iOS 9, in Antergos. The process leans on the command line, but I presume since you are in Arch Linux territory, you are not scared of the terminal (and you should not be).
|
||||
|
||||
### Mount iPhone in Arch Linux ###
|
||||
|
||||
**Step 1**: Unplug your iPhone, if it is already plugged in.
|
||||
|
||||
**Step 2**: Now, open a terminal and use the following command to install some necessary packages. Don’t worry if they are already installed.
|
||||
|
||||
sudo pacman -Sy ifuse usbmuxd libplist libimobiledevice
|
||||
|
||||
**Step 3**: Once these programs and libraries are installed, reboot your system.
|
||||
|
||||
sudo reboot
|
||||
|
||||
**Step 4**: Make a directory where you want the iPhone to be mounted. I would suggest making a directory named iPhone in your home directory.
|
||||
|
||||
mkdir ~/iPhone
|
||||
|
||||
**Step 5**: Unlock your phone and plug it in. If asked to trust the computer, allow it.
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-2.jpeg)
|
||||
|
||||
**Step 6**: Verify that iPhone is recognized by the system this time.
|
||||
|
||||
dmesg | grep -i iphone
|
||||
|
||||
This should show you some result with iPhone and Apple in it. Something like this:
|
||||
|
||||
[ 31.003392] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached
|
||||
[ 40.950883] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected
|
||||
[ 47.471897] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached
|
||||
[ 82.967116] ipheth 2-1:4.2: Apple iPhone USB Ethernet now disconnected
|
||||
[ 106.735932] ipheth 2-1:4.2: Apple iPhone USB Ethernet device attached
|
||||
|
||||
This means that iPhone has been successfully recognized by Antergos/Arch Linux.
|
||||
|
||||
**Step 7**: When everything is set, it’s time to mount the iPhone. Use the command below:
|
||||
|
||||
ifuse ~/iPhone
|
||||
|
||||
Since we created the mount directory in home, it won’t need root access and you should also be able to see it easily in your home directory. If the command is successful, you won’t see any output.
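Because a successful `ifuse` mount is silent, it can help to confirm it through the mount table. The sketch below runs the check against a hard-coded sample line so it is self-contained; on a real system you would grep `/proc/mounts` (or run `mount | grep ifuse`) instead of using the sample variable.

```shell
# Check for an ifuse entry in a mount-table line. The sample line below is
# illustrative; on a real system substitute: grep fuse.ifuse /proc/mounts
mounts_sample='ifuse /home/user/iPhone fuse.ifuse rw,nosuid,nodev 0 0'
case "$mounts_sample" in
    *fuse.ifuse*) echo "iPhone mounted" ;;
    *)            echo "not mounted"    ;;
esac
```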
|
||||
|
||||
Go back to Files and see if the iPhone is recognized or not. For me, it looks like this in Antergos:
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux.jpeg)
|
||||
|
||||
You can access the files in this directory. Copy files from it or to it.
|
||||
|
||||
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/10/iPhone-mount-Antergos-Linux-1.jpeg)
|
||||
|
||||
**Step 8**: When you want to unmount it, you should use this command:
|
||||
|
||||
sudo umount ~/iPhone
|
||||
|
||||
### Worked for you? ###
|
||||
|
||||
I know that it is not very convenient and that, ideally, the iPhone should be recognized like any other USB storage device, but things don’t always behave as they are expected to. The good thing is that a little DIY hack can always fix the issue, and it gives a sense of achievement (at least to me). That being said, I must say Antergos should work to fix this issue so that the iPhone can be mounted by default.
|
||||
|
||||
Did this trick work for you? If you have questions or suggestions, feel free to drop a comment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/iphone-antergos-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://www.libimobiledevice.org/
|
||||
[2]:http://itsfoss.com/mount-iphone-ipad-ios-7-ubuntu-13-10/
|
@ -1,114 +0,0 @@
|
||||
translating by Ezio
|
||||
|
||||
|
||||
How to Setup DockerUI - a Web Interface for Docker
|
||||
================================================================================
|
||||
Docker is gaining more popularity day by day. The idea of running a complete operating system environment inside a container rather than inside a virtual machine is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share and run any application as a lightweight container, without caring which operating system the host is running. It has no boundaries of language support, frameworks or packaging systems, and can be run anywhere, anytime, from small home computers to high-end servers. Running docker containers and managing them can be a bit difficult and time consuming, so there is a web-based application named DockerUI that makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not familiar with the Linux command line and want to run containerized applications. DockerUI is an open source web-based application best known for its beautiful design and simple interface for running and managing docker containers.
|
||||
|
||||
Here are some easy steps for setting up Docker Engine with DockerUI on a Linux machine.
|
||||
|
||||
### 1. Installing Docker Engine ###
|
||||
|
||||
First of all, we'll install the docker engine on our Linux machine. Thanks to its developers, docker is very easy to install in any major Linux distribution. To install the docker engine, we'll need to run the command corresponding to the distribution we are running.
|
||||
|
||||
#### On Ubuntu/Fedora/CentOS/RHEL/Debian ####
|
||||
|
||||
Docker maintainers have written an awesome script that can be used to install the docker engine in the Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x distributions of Linux. This script recognizes the Linux distribution installed on our machine, adds the required repository to the filesystem, updates the local repository index and finally installs the docker engine and its required dependencies. To install the docker engine using that script, we'll need to run the following command under root or sudo mode.
|
||||
|
||||
# curl -sSL https://get.docker.com/ | sh
|
||||
|
||||
#### On OpenSuse/SUSE Linux Enterprise ####
|
||||
|
||||
To install the docker engine on a machine running OpenSuse 13.1/13.2 or SUSE Linux Enterprise Server 12, we simply need to execute the zypper command, as the latest docker engine is available in the official repository. To do so, we'll run the following command under root/sudo mode.
|
||||
|
||||
# zypper in docker
|
||||
|
||||
#### On ArchLinux ####
|
||||
|
||||
Docker is available in the official repository of Arch Linux as well as in the AUR packages maintained by the community, so we have two options to install docker on Arch Linux. To install docker using the official Arch repository, we'll need to run the following pacman command.
|
||||
|
||||
# pacman -S docker
|
||||
|
||||
But if we want to install docker from the Archlinux User Repository ie AUR, then we'll need to execute the following command.
|
||||
|
||||
# yaourt -S docker-git

### 2. Starting Docker Daemon ###

After docker is installed, we'll start the docker daemon so that we can run and manage docker containers. We'll run the following command, depending on our init system, to start the docker daemon.

#### On SysVinit ####

    # service docker start

#### On Systemd ####

    # systemctl start docker

### 3. Installing DockerUI ###

Installing DockerUI is much easier than installing docker engine. We just need to pull the dockerui image from the Docker Registry Hub and run it inside a container. To do so, we'll simply need to run the following command.

    # docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui

![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)

In the above command, since the default port of the dockerui web application server is 9000, we simply map it to a host port with the -p flag. With the -v flag, we specify the docker socket. The --privileged flag is required for hosts using SELinux.

After executing the above command, we'll check whether the dockerui container is running by running the following command.

    # docker ps

![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
### 4. Pulling an Image ###

Currently, we cannot pull an image directly from DockerUI, so we'll need to pull a docker image from the linux console/terminal. To do so, we'll need to run the following command.

    # docker pull ubuntu

![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)

The above command will pull an image tagged as ubuntu from the official [Docker Hub][1]. Similarly, we can pull any other images we require that are available in the hub.
### 5. Managing with DockerUI ###

After we have started the dockerui container, we can now start, pause, stop and remove containers, and perform the many other activities DockerUI offers for docker containers and images. First of all, we'll need to open the web application in our web browser. To do so, we'll point our browser to http://ip-address:9000 or http://mydomain.com:9000, according to the configuration of our system. By default, no login authentication is needed for user access, but we can configure our web server to add authentication. To start a container, we'll first need to have the image of the application we want to run.
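
As a sketch of the authentication mentioned above, we could put nginx in front of DockerUI with HTTP basic auth. The domain name, htpasswd path and port are assumptions for illustration, not part of DockerUI itself:

```nginx
# Hypothetical /etc/nginx/conf.d/dockerui.conf -- adjust names and paths to your setup
server {
    listen 80;
    server_name dockerui.example.com;               # assumed domain

    location / {
        auth_basic           "DockerUI";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created e.g. with htpasswd -c
        proxy_pass           http://127.0.0.1:9000; # DockerUI's mapped port
    }
}
```

With a setup like this, the browser prompts for a username and password before the DockerUI interface is served.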

#### Create a Container ####

To create a container, we'll need to go to the section named Images, then click on the id of the image we want to create a container from. After clicking on the required image id, we'll click on the Create button and be asked to enter the required properties for our container. Once everything is set, we'll click on the Create button again to finally create the container.

![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)

#### Stop a Container ####

To stop a container, we'll need to move to the Containers page and select the container we want to stop. Then we'll click on the Stop option, which we can see under the Actions drop-down menu.

![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)

#### Pause and Resume ####

To pause a container, we simply select the required container by putting a check mark on it and then click the Pause option under Actions. This will pause the running container; we can then resume the container by selecting the Unpause option from the Actions drop-down menu.

#### Kill and Remove ####

As with the tasks above, it's pretty easy to kill and remove a container or an image. We just need to check/select the required container or image and then click the Kill or Remove button, according to our need.

### Conclusion ###

DockerUI is a beautiful utilization of the Docker Remote API to develop an awesome web interface for managing docker containers. The developers have designed and developed this application in pure HTML and JavaScript. It is currently incomplete and under heavy development, so we don't recommend using it in production at this time. It makes it pretty easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands to do small jobs. If we want to contribute to DockerUI, we can simply visit its [Github Repository][2]. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/

Author: [Arun Pyasi][a]

Translator: [oska874](https://github.com/oska874)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/
sources/tech/20151028 10 Tips for 10x Application Performance.md
10 Tips for 10x Application Performance
================================================================================

Improving web application performance is more critical than ever. The share of economic activity that’s online is growing; more than 5% of the developed world’s economy is now on the Internet (see Resources below for statistics). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors.

For example, a study done by Amazon almost 10 years ago proved that, even then, a 100-millisecond decrease in page-loading time translated to a 1% increase in its revenue. Another recent study highlighted that more than half of site owners surveyed said they lost revenue or customers due to poor application performance.

How fast does a website need to be? For each second a page takes to load, about 4% of users abandon it. Top e-commerce sites offer a time to first interaction ranging from one to three seconds, which offers the highest conversion rate. It’s clear that the stakes for web application performance are high and likely to grow.

Wanting to improve performance is easy, but actually seeing results is difficult. To help you on your journey, this blog post offers you ten tips to help you increase your website performance by as much as 10x. It’s the first in a series detailing how you can increase your application performance with the help of some well-tested optimization techniques, and with a little support from NGINX. This series also outlines potential improvements in security that you can gain along the way.

### Tip #1: Accelerate and Secure Applications with a Reverse Proxy Server ###

If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.)

Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O.

Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A [reverse proxy server][1] sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network.

Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks.

Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced.

Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:

- **Load balancing** (see [Tip #2][2]) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all.
- **Caching static files** (see [Tip #3][3]) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster.
- **Securing your site** – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected.

NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application [health checks][4], specialized request routing, advanced caching, and support.
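
As a minimal sketch of this idea in NGINX (the server name and internal address are assumptions for illustration), the core of a reverse proxy is a single proxy_pass directive:

```nginx
# Hypothetical minimal reverse proxy: NGINX faces the Internet,
# the application server is reached over a fast internal network.
server {
    listen 80;
    server_name example.com;                # assumed public name

    location / {
        proxy_pass http://10.0.0.10:8080;   # assumed internal app server
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The proxy_set_header lines pass the original host and client address through to the application, which otherwise only sees the proxy.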

![NGINX Worker Process helps increase application performance](https://www.nginx.com/wp-content/uploads/2015/10/Graph-11.png)

### Tip #2: Add a Load Balancer ###

Adding a [load balancer][5] is a relatively easy change which can create a dramatic improvement in the performance and security of your site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes.

A load balancer is, first, a reverse proxy server (see [Tip #1][6]) – it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using [a choice of algorithms][7] to split requests between servers. The simplest load balancing approach is round robin, with each new request sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus has [capabilities][8] for continuing a given user session on the same server, which is called session persistence.
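
A hedged sketch of such a setup in NGINX (the three backend addresses are assumptions): an upstream block names the pool, and the algorithm is chosen with a single directive:

```nginx
# Hypothetical load-balancing pool of three assumed app servers.
# The default algorithm is round robin; least_conn instead sends each
# request to the server with the fewest active connections.
upstream app_backend {
    least_conn;
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # requests are split across the pool
    }
}
```

Adding capacity is then just another server line in the upstream block, with no application changes.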

Load balancers can lead to strong improvements in performance because they prevent one server from being overloaded while other servers wait for traffic. They also make it easy to expand your web server capacity, as you can add relatively low-cost servers and be sure they’ll be put to full use.

Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, [FastCGI][9], SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which you use and where performance is lagging.

The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 use by clients, and caching for static files.

NGINX is often used for load balancing; to learn more, please see our [overview blog post][10], [configuration blog post][11], [ebook][12] and associated [webinar][13], and [documentation][14]. Our commercial version, [NGINX Plus][15], supports more specialized load balancing features such as load routing based on server response time and the ability to load balance on Microsoft’s NTLM protocol.

### Tip #3: Cache Static and Dynamic Content ###

Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination.

There are two different types of caching to consider:

- **Caching of static content**. Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
- **Caching of dynamic content**. Many web applications generate fresh HTML for each page request. By caching one copy of the generated HTML for a brief period of time, you can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet your requirements.

If a page gets ten views per second, for instance, and you cache it for one second, 90% of requests for the page will come from the cache. If you separately cache static content, even the freshly generated versions of the page might be made up largely of cached content.

There are three main techniques for caching content generated by web applications:

- **Moving content closer to users**. Keeping a copy of content closer to the user reduces its transmission time.
- **Moving content to faster machines**. Content can be kept on a faster machine for faster retrieval.
- **Moving content off of overused machines**. Machines sometimes operate much slower than their benchmark performance on a particular task because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for non-cached resources, because the host machine is less overloaded.

Caching for web applications can be implemented from the inside – the web application server – out. First, caching is used for dynamic content, to reduce the load on application servers. Then, caching is used for static content (including temporary copies of what would otherwise be dynamic content), further off-loading application servers. And caching is then moved off of application servers and onto machines that are faster and/or closer to the user, unburdening the application servers, and reducing retrieval and transmission times.

Improved caching can speed up applications tremendously. For many web pages, static data, such as large image files, makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only fractions of a second if the data is cached locally.

As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to [set up caching][16]: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user’s perspective, this can strongly improve your site or application’s uptime.
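
Putting those three directives together, a sketch might look like the following (the cache path, zone name, backend address and one-second validity are illustrative assumptions, not recommendations):

```nginx
# Hypothetical cache configuration using the directives named above.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache           app_cache;
        proxy_cache_valid     200 1s;        # cache OK responses briefly
        proxy_cache_use_stale error timeout updating;  # serve stale if the backend is busy or down
        proxy_pass            http://10.0.0.10:8080;   # assumed app server
    }
}
```

Even the one-second validity shown here would absorb 90% of requests in the ten-views-per-second example above.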

NGINX Plus has [advanced caching features][17], including support for [cache purging][18] and visualization of cache status on a [dashboard][19] for live activity monitoring.

For more information on caching with NGINX, see the [reference documentation][20] and [NGINX Content Caching][21] in the NGINX Plus Admin Guide.

**Note**: Caching crosses organizational lines between people who develop applications, people who make capital investment decisions, and people who run networks in real time. Sophisticated caching strategies, like those alluded to here, are a good example of the value of a [DevOps perspective][22], in which application developer, architectural, and operations perspectives are merged to help meet goals for site functionality, response time, security, and business results, such as completed transactions or sales.

### Tip #4: Compress Data ###

Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more.

Text data – including HTML (which includes plain text and HTML tags), CSS, and code such as JavaScript – is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections.

That’s because text data is often sufficient for a user to interact with a page, where multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS and other text-based content, typically by 30% or more, with a corresponding reduction in load time.

If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time it takes to compress the data.

Methods for compressing text data vary. For example, see the [section on HTTP/2][23] for a novel text compression scheme, adapted specifically for header data. As another example of text compression you can [turn on][24] GZIP compression in NGINX. After you [pre-compress text data][25] on your servers, you can serve the compressed .gz version directly using the gzip_static directive.
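
A sketch of both approaches in NGINX (the MIME types and size threshold are illustrative choices):

```nginx
# Enable on-the-fly GZIP compression for text-based content.
gzip            on;
gzip_types      text/css application/javascript application/json text/plain;
gzip_min_length 1000;   # skip very small responses where compression gains little

# If files have been pre-compressed on disk (e.g. style.css.gz),
# serve the .gz version directly instead of compressing per request:
gzip_static     on;
```

Pre-compression trades disk space for CPU time, since the compression cost is paid once rather than on every request.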

### Tip #5: Optimize SSL/TLS ###

The Secure Sockets Layer ([SSL][26]) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users to help improve site security. Part of what may be influencing this trend is that Google now uses the presence of SSL/TLS as a positive influence on search engine rankings.

Despite rising popularity, the performance hit involved in SSL/TLS is a sticking point for many sites. SSL/TLS slows website performance for two reasons:

1. The initial handshake required to establish encryption keys whenever a new connection is opened. The way that browsers using HTTP/1.x establish multiple connections per server multiplies that hit.
1. Ongoing overhead from encrypting data on the server and decrypting it on the client.

To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the [next section][27]) designed these protocols so that browsers need just one connection per browser session. This greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.

The mechanism for optimizing SSL/TLS varies by web server. As an example, NGINX uses [OpenSSL][28], running on standard commodity hardware, to provide performance similar to dedicated hardware solutions. NGINX [SSL performance][29] is well-documented and minimizes the time and CPU penalty from performing SSL/TLS encryption and decryption.

In addition, see [this blog post][30] for details on ways to increase SSL/TLS performance. To summarize briefly, the techniques are:

- **Session caching**. Uses the [ssl_session_cache][31] directive to cache the parameters used when securing each new connection with SSL/TLS.
- **Session tickets or IDs**. These store information about specific SSL/TLS sessions in a ticket or ID so a connection can be reused smoothly, without new handshaking.
- **OCSP stapling**. Cuts handshaking time by caching SSL/TLS certificate information.
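
The techniques above can be sketched in an NGINX server block roughly as follows (certificate paths and cache sizes are assumptions; OCSP stapling also needs a resolver and a certificate chain in a real deployment):

```nginx
# Hedged sketch of the SSL/TLS optimizations listed above.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # assumed paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Session caching: reuse negotiated parameters across connections
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP stapling: cache certificate status to cut handshake time
    ssl_stapling        on;
    ssl_stapling_verify on;
}
```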

NGINX and NGINX Plus can be used for SSL/TLS termination – handling encryption and decryption for client traffic, while communicating with other servers in clear text. Use [these steps][32] to set up NGINX or NGINX Plus to handle SSL/TLS termination. Also, here are [specific steps][33] for NGINX Plus when used with servers that accept TCP connections.

### Tip #6: Implement HTTP/2 or SPDY ###

For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires just one handshake. For sites that don’t yet use SSL/TLS, HTTP/2 and SPDY make a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.

Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but is soon to be deprecated, replaced by HTTP/2.

The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.

By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, as required by the way browsers implement HTTP/1.x. The use of a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection.

The SPDY protocol required the use of SSL/TLS; HTTP/2 does not officially require it, but all browsers so far that support HTTP/2 use it only if SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.

When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes that HTTP/2 is bringing about, read our [white paper][34].

![NGINX Supports SPDY and HTTP/2 for increased web application performance](https://www.nginx.com/wp-content/uploads/2015/10/http2-27.png)

As an example of support for these protocols, NGINX has supported SPDY from early on, and [most sites][35] that use SPDY today run on NGINX. NGINX is also [pioneering support][36] for HTTP/2, with [support][37] for HTTP/2 in NGINX open source and NGINX Plus as of September 2015.

Over time, we at NGINX expect most sites to fully enable SSL and to move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better.

### Tip #7: Update Software Versions ###

One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of software. New releases receive more attention from developers and the user community. Newer builds also take advantage of new compiler optimizations, including tuning for new hardware.

Stable new releases are typically more compatible and higher-performing than older releases. It’s also easier to keep on top of tuning optimizations, bug fixes, and security alerts when you stay on top of software updates.

Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.

NGINX users can start by moving to the [latest version of the NGINX open source software][38] or [NGINX Plus][39]; they include new capabilities such as socket sharding and thread pools (see below), and both are constantly being tuned for performance. Then look at the software deeper in your stack and move to the most recent version wherever you can.

### Tip #8: Tune Linux for Performance ###

Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure, Linux represents a significant opportunity to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some degree of tuning for maximum performance.

Linux optimizations are web server-specific. Using NGINX as an example, here are a few highlights of changes you can consider to speed up Linux:

- **Backlog queue**. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small, and you can gradually increase this parameter until the error messages stop.
- **File descriptors**. NGINX uses up to two file descriptors for each connection. If your system is serving a lot of connections, you might need to increase fs.file-max, the system-wide limit for file descriptors, and nofile, the user file descriptor limit, to support the increased load.
- **Ephemeral ports**. When used as a proxy, NGINX creates temporary (“ephemeral”) ports for each upstream server. You can increase the range of port values, set by net.ipv4.ip_local_port_range, to increase the number of ports available. You can also reduce the timeout before an inactive port gets reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover.
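
The kernel parameters above are typically set in /etc/sysctl.conf (and applied with `sysctl -p`). The values below are illustrative starting points only, not recommendations; tune and test against your own workload:

```
# Hypothetical /etc/sysctl.conf additions -- illustrative values
net.core.somaxconn = 4096                    # backlog queue
fs.file-max = 100000                         # system-wide file descriptor limit
net.ipv4.ip_local_port_range = 1024 65000    # widen the ephemeral port range
net.ipv4.tcp_fin_timeout = 15                # faster reuse of inactive ports
```

The per-user nofile limit is set separately, typically in /etc/security/limits.conf.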

For NGINX, check out the [NGINX performance tuning guides][40] to learn how to optimize your Linux system so that it can cope with large volumes of network traffic without breaking a sweat!

### Tip #9: Tune Your Web Server for Performance ###

Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX. Key optimizations include:

- **Access logging**. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, add the *buffer=size* parameter to the *access_log* directive to write log entries to disk when the memory buffer fills up. If you add the *flush=time* parameter, the buffer contents are also written to disk after the specified amount of time.
- **Buffering**. Buffering holds part of a response in memory until the buffer fills, which can make communications with the client more efficient. Responses that don’t fit in memory are written to disk, which can slow performance. When NGINX buffering is [on][42], you use the *proxy_buffer_size* and *proxy_buffers* directives to manage it.
- **Client keepalives**. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase the maximum number of *keepalive_requests* a client can make over a given connection from the default of 100, and you can increase the *keepalive_timeout* to allow the keepalive connection to stay open longer, resulting in faster subsequent requests.
- **Upstream keepalives**. Upstream connections – connections to application servers, database servers, and so on – benefit from keepalive connections as well. For upstream connections, you can increase *keepalive*, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting down on the need to open brand new connections. For more information about keepalives, refer to this [blog post][41].
- **Limits**. Limiting the resources that clients use can improve performance and security. For NGINX, the *limit_conn* and *limit_conn_zone* directives restrict the number of connections from a given source, while *limit_rate* constrains bandwidth. These settings can stop a legitimate user from “hogging” resources and also help protect against attacks. The *limit_req* and *limit_req_zone* directives limit client requests. For connections to upstream servers, use the *max_conns* parameter to the *server* directive in an upstream configuration block. This limits connections to an upstream server, preventing overloading. The associated *queue* directive creates a queue that holds a specified number of requests for a specified length of time after the *max_conns* limit is reached.
- **Worker processes**. Worker processes are responsible for the processing of requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of *worker_processes* to one per CPU. The maximum number of *worker_connections* (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
- **Socket sharding**. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to socket listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable [socket sharding][43], include the *reuseport* parameter on the *listen* directive.
- **Thread pools**. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back into the main processing loop. In NGINX, two operations – the read() system call and sendfile() – are offloaded to [thread pools][44].
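
Several of the tunings above can be sketched in one configuration fragment (backend address and all values are illustrative assumptions, not recommendations):

```nginx
# Hedged sketch combining access-log buffering, keepalives and socket sharding.
worker_processes auto;              # one worker per CPU

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_requests 1000;        # raised from the default of 100
    keepalive_timeout  75s;

    upstream app_backend {
        server 10.0.0.10:8080;      # assumed app server
        keepalive 32;               # idle upstream keepalives per worker
    }

    server {
        listen 80 reuseport;        # socket sharding
        location / {
            proxy_pass http://app_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # needed so upstream keepalives work
        }
    }
}
```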
|
||||
|
||||
![Thread pools help increase application performance by assigning a slow operation to a separate set of tasks](https://www.nginx.com/wp-content/uploads/2015/10/Graph-17.png)
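
Taken together, the tuning directives discussed above might be combined into an `nginx.conf` fragment like the following sketch. The zone names, addresses, and numeric values are illustrative assumptions, not recommendations:

```nginx
# Sketch only -- zone names, addresses, and limits are illustrative.
worker_processes auto;           # roughly one worker per CPU

events {
    worker_connections 1024;     # raised from the 512 default
}

http {
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;
    limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;

    upstream backend {
        server 10.0.0.10:8080 max_conns=200;
        queue 100 timeout=30s;   # NGINX Plus: hold requests over the max_conns limit
    }

    server {
        listen 80 reuseport;     # socket sharding
        limit_conn per_ip 20;
        limit_req  zone=req_per_ip burst=20;
        limit_rate 500k;

        location / {
            proxy_pass http://backend;
            aio threads;         # offload read()/sendfile() to thread pools
        }
    }
}
```

As the tip below advises, change one of these settings at a time and measure before keeping it.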
|
||||
|
||||
**Tip**. When changing settings for any operating system or supporting service, change a single setting at a time, then test performance. If the change causes problems, or if it doesn’t make your site run faster, change it back.
|
||||
|
||||
See this [blog post][45] for more details on tuning NGINX.
|
||||
|
||||
### Tip #10: Monitor Live Activity to Resolve Issues and Bottlenecks ###
|
||||
|
||||
The key to a high-performance approach to application development and delivery is watching your application’s real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure.
|
||||
|
||||
Monitoring site activity is mostly passive – it tells you what’s going on, and leaves it to you to spot problems and fix them.
|
||||
|
||||
Monitoring can catch several different kinds of issues. They include:
|
||||
|
||||
- A server is down.
|
||||
- A server is limping, dropping connections.
|
||||
- A server is suffering from a high proportion of cache misses.
|
||||
- A server is not sending correct content.
|
||||
|
||||
A global application performance monitoring tool like New Relic or Dynatrace helps you monitor page load time from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic.
|
||||
|
||||
To help identify and resolve issues quickly, NGINX Plus adds [application-aware health checks][46] – synthetic transactions that are repeated regularly and are used to alert you to problems. NGINX Plus also has [session draining][47], which stops new connections while existing tasks complete, and a slow start capability, allowing a recovered server to come up to speed within a load-balanced group. When used effectively, health checks allow you to identify issues before they significantly impact the user experience, while session draining and slow start allow you to replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus [live activity monitoring][48] dashboard for a web infrastructure with servers, TCP connections, and caching.
|
||||
|
||||
![Use real-time application performance monitoring tools to identify and resolve issues quickly](https://www.nginx.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-05-at-4.16.32-PM.png)
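
As a sketch, the application-aware health checks and slow start described above can be configured in NGINX Plus roughly like this (the upstream name, addresses, timings, and the `/healthz` URI are illustrative assumptions):

```nginx
# Sketch only -- NGINX Plus features; names and values are illustrative.
upstream app_servers {
    zone app_servers 64k;                    # shared memory zone required for health checks
    server 10.0.0.11:8080 slow_start=30s;    # ramp a recovered server up gradually
    server 10.0.0.12:8080 slow_start=30s;
}

server {
    location / {
        proxy_pass http://app_servers;
        health_check interval=5s fails=3 passes=2 uri=/healthz;
    }
}
```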
|
||||
|
||||
### Conclusion: Seeing 10x Performance Improvement ###
|
||||
|
||||
The performance improvements that are available for any one web application vary tremendously, and actual gains depend on your budget, the time you can invest, and gaps in your existing implementation. So, how might you achieve 10x performance improvement for your own applications?
|
||||
|
||||
To help guide you on the potential impact of each optimization, here are pointers to the improvement that may be possible with each tip detailed above, though your mileage will almost certainly vary:
|
||||
|
||||
- **Reverse proxy server and load balancing**. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server, such as NGINX, can prevent web applications from thrashing between memory and disk. Load balancing can move processing from overburdened servers to available ones and make scaling easy. These changes can result in dramatic performance improvement, with a 10x improvement easily achieved compared to the worst moments for your current implementation, and lesser but substantial achievements available for overall performance.
|
||||
- **Caching dynamic and static content**. If you have an overburdened web server that’s doubling as your application server, 10x improvements in peak-time performance can be achieved by caching dynamic content alone. Caching for static files can improve performance by single-digit multiples as well.
|
||||
- **Compressing data**. Using media file compression such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music files can greatly improve performance. Once these are all in use, then compressing text data (code and HTML) can improve initial page load times by a factor of two.
|
||||
- **Optimizing SSL/TLS**. Secure handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for text-heavy sites. Optimizing media file transmission under SSL/TLS is likely to yield only small performance improvements.
|
||||
- **Implementing HTTP/2 and SPDY**. When used with SSL/TLS, these protocols are likely to result in incremental improvements for overall site performance.
|
||||
- **Tuning Linux and web server software (such as NGINX)**. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed disk-intensive tasks by [nearly an order of magnitude][49].
|
||||
|
||||
We hope you try out these techniques for yourself. We want to hear the kind of application performance improvements you’re able to achieve. Share your results in the comments below, or tweet your story with the hash tags #NGINX and #webperf!
|
||||
|
||||
### Resources for Internet Statistics ###
|
||||
|
||||
[Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016][50]
|
||||
|
||||
[Load Impact – How Bad Performance Impacts Ecommerce Sales][51]
|
||||
|
||||
[Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)][52]
|
||||
|
||||
[Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate][53]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io
|
||||
|
||||
作者:[Floyd Smith][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.nginx.com/blog/author/floyd/
|
||||
[1]:https://www.nginx.com/resources/glossary/reverse-proxy-server
|
||||
[2]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip2
|
||||
[3]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip3
|
||||
[4]:https://www.nginx.com/products/application-health-checks/
|
||||
[5]:https://www.nginx.com/solutions/load-balancing/
|
||||
[6]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip1
|
||||
[7]:https://www.nginx.com/resources/admin-guide/load-balancer/
|
||||
[8]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[9]:https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx
|
||||
[10]:https://www.nginx.com/blog/five-reasons-use-software-load-balancer/
|
||||
[11]:https://www.nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[12]:https://www.nginx.com/resources/ebook/five-reasons-choose-software-load-balancer/
|
||||
[13]:https://www.nginx.com/resources/webinars/choose-software-based-load-balancer-45-min/
|
||||
[14]:https://www.nginx.com/resources/admin-guide/load-balancer/
|
||||
[15]:https://www.nginx.com/products/
|
||||
[16]:https://www.nginx.com/blog/nginx-caching-guide/
|
||||
[17]:https://www.nginx.com/products/content-caching-nginx-plus/
|
||||
[18]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.95342300.1348073562.1438712874#proxy_cache_purge
|
||||
[19]:https://www.nginx.com/products/live-activity-monitoring/
|
||||
[20]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&&&_ga=1.61156076.1348073562.1438712874#proxy_cache
|
||||
[21]:https://www.nginx.com/resources/admin-guide/content-caching
|
||||
[22]:https://www.nginx.com/blog/network-vs-devops-how-to-manage-your-control-issues/
|
||||
[23]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
|
||||
[24]:https://www.nginx.com/resources/admin-guide/compression-and-decompression/
|
||||
[25]:http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
|
||||
[26]:https://www.digicert.com/ssl.htm
|
||||
[27]:https://www.nginx.com/blog/10-tips-for-10x-application-performance/?hmsr=toutiao.io&utm_medium=toutiao.io&utm_source=toutiao.io#tip6
|
||||
[28]:http://openssl.org/
|
||||
[29]:https://www.nginx.com/blog/nginx-ssl-performance/
|
||||
[30]:https://www.nginx.com/blog/improve-seo-https-nginx/
|
||||
[31]:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
|
||||
[32]:https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/
|
||||
[33]:https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/
|
||||
[34]:https://www.nginx.com/resources/datasheet/datasheet-nginx-http2-whitepaper/
|
||||
[35]:http://w3techs.com/blog/entry/25_percent_of_the_web_runs_nginx_including_46_6_percent_of_the_top_10000_sites
|
||||
[36]:https://www.nginx.com/blog/how-nginx-plans-to-support-http2/
|
||||
[37]:https://www.nginx.com/blog/nginx-plus-r7-released/
|
||||
[38]:http://nginx.org/en/download.html
|
||||
[39]:https://www.nginx.com/products/
|
||||
[40]:https://www.nginx.com/blog/tuning-nginx/
|
||||
[41]:https://www.nginx.com/blog/http-keepalives-and-web-performance/
|
||||
[42]:http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
|
||||
[43]:https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
||||
[44]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
|
||||
[45]:https://www.nginx.com/blog/tuning-nginx/
|
||||
[46]:https://www.nginx.com/products/application-health-checks/
|
||||
[47]:https://www.nginx.com/products/session-persistence/#session-draining
|
||||
[48]:https://www.nginx.com/products/live-activity-monitoring/
|
||||
[49]:https://www.nginx.com/blog/thread-pools-boost-performance-9x/
|
||||
[50]:http://www.statista.com/statistics/250703/forecast-of-internet-economy-as-percentage-of-gdp-in-g-20-countries/
|
||||
[51]:http://blog.loadimpact.com/blog/how-bad-performance-impacts-ecommerce-sales-part-i/
|
||||
[52]:https://blog.kissmetrics.com/loading-time/?wide=1
|
||||
[53]:https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate/
|
@ -0,0 +1,154 @@
|
||||
How to Install Pure-FTPd with TLS on FreeBSD 10.2
|
||||
================================================================================
|
||||
FTP (File Transfer Protocol) is a standard application-layer network protocol used to transfer files between a client and a server over a TCP network such as the internet, after the user logs in to the FTP server. FTP has been around for a long time, much longer than P2P programs or even the World Wide Web, and to this day it remains a very popular method for sharing files with others over the internet. Combined with SSL/TLS, FTP can provide secure transmission that protects the username and password and encrypts the content.
|
||||
|
||||
Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight, and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. Pure-FTPd was created by Frank Denis in 2001, based on Troll-FTPd, and is still actively developed by a team led by Denis.
|
||||
|
||||
In this tutorial we will cover the installation and configuration of "**Pure-FTPd**" on the Unix-like operating system FreeBSD 10.2.
|
||||
|
||||
### Step 1 - Update system ###
|
||||
|
||||
The first thing you must do is update the FreeBSD system. Please connect to your server with SSH and then run the commands below as sudo/root:
|
||||
|
||||
freebsd-update fetch
|
||||
freebsd-update install
|
||||
|
||||
### Step 2 - Install Pure-FTPd ###
|
||||
|
||||
You can install Pure-FTPd using the ports method, but in this tutorial we will install it from the FreeBSD repository with the "**pkg**" command. So, let's install it:
|
||||
|
||||
pkg install pure-ftpd
|
||||
|
||||
Once the installation is finished, add pure-ftpd to the system startup with the sysrc command below:
|
||||
|
||||
sysrc pureftpd_enable=yes
|
||||
|
||||
### Step 3 - Configure Pure-FTPd ###
|
||||
|
||||
The configuration file for Pure-FTPd is located in the directory "/usr/local/etc/". Go to that directory and copy the sample configuration for pure-ftpd to "**pure-ftpd.conf**".
|
||||
|
||||
cd /usr/local/etc/
|
||||
cp pure-ftpd.conf.sample pure-ftpd.conf
|
||||
|
||||
Now edit the configuration file with the nano editor:
|
||||
|
||||
nano -c pure-ftpd.conf
|
||||
|
||||
Note: the -c option shows line numbers in nano.
|
||||
|
||||
Go to line 59 and change the value of "VerboseLog" to "**yes**". This option allows you, as administrator, to log every command used by the users.
|
||||
|
||||
VerboseLog yes
|
||||
|
||||
Now look at line 126, "PureDB", for the virtual-users configuration. Virtual users are a simple mechanism to store a list of users, with their password, name, uid, directory, etc. It's just like /etc/passwd, but it's not /etc/passwd: it's a different file, used only for FTP. In this tutorial we will store the list of users in the files "**/usr/local/etc/pureftpd.passwd**" and "**/usr/local/etc/pureftpd.pdb**". Please uncomment that line and change the path to "/usr/local/etc/pureftpd.pdb".
|
||||
|
||||
PureDB /usr/local/etc/pureftpd.pdb
|
||||
|
||||
Next, uncomment line 336, "**CreateHomeDir**". This option makes it easier to add virtual users, by automatically creating home directories if they are missing.
|
||||
|
||||
CreateHomeDir yes
|
||||
|
||||
Save and exit.
|
||||
|
||||
Next, start pure-ftpd with the service command:
|
||||
|
||||
service pure-ftpd start
|
||||
|
||||
### Step 4 - Adding New Users ###
|
||||
|
||||
At this point the FTP server starts without error, but you cannot log in to it yet, because the default configuration of pure-ftpd disables anonymous users. We need to create new users with home directories and give them passwords for login.
|
||||
|
||||
One thing you must do before adding a new Pure-FTPd virtual user is to create a system user for it. Let's create a new system user "**vftp**" whose default group is the same as the username, with home directory "**/home/vftp/**".
|
||||
|
||||
pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
|
||||
-c "Virtual User Pure-FTPd" -m
|
||||
|
||||
Now you can add a new user for the FTP server with the "**pure-pw**" command. As an example, we will create a new user named "**akari**"; see the command below:
|
||||
|
||||
pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
|
||||
Password: TYPE YOUR PASSWORD
|
||||
|
||||
That command creates the user "**akari**" and stores the data in the file "**/usr/local/etc/pureftpd.passwd**", not in the /etc/passwd file. This means you can easily create FTP-only accounts without messing up your system accounts.
|
||||
|
||||
Next, you must generate the PureDB user database with this command:
|
||||
|
||||
pure-pw mkdb
|
||||
|
||||
Now restart the pure-ftpd service and try to connect with the user "akari":
|
||||
|
||||
service pure-ftpd restart
|
||||
|
||||
Try connecting with the user akari:
|
||||
|
||||
ftp SERVERIP
|
||||
|
||||
![FTP Connect user akari](http://blog.linoxide.com/wp-content/uploads/2015/10/FTP-Connect-user-akari.png)
|
||||
|
||||
**NOTE :**
|
||||
|
||||
If you want to add another user, you can use the "**pure-pw**" command again. And if you want to delete a user, you can use this:
|
||||
|
||||
pure-pw userdel useryouwanttodelete
|
||||
pure-pw mkdb
|
||||
|
||||
### Step 5 - Add SSL/TLS to Pure-FTPd ###
|
||||
|
||||
Pure-FTPd supports encryption using the TLS security mechanism. To add support for TLS/SSL, make sure the OpenSSL library is already installed on your FreeBSD system.
|
||||
|
||||
Now you must generate a new "**self-signed certificate**" in the directory "**/etc/ssl/private**". Before you generate the certificate, create the directory "private" there:
|
||||
|
||||
cd /etc/ssl/
|
||||
mkdir private
|
||||
cd private/
|
||||
|
||||
Now generate the self-signed certificate with the openssl command below:
|
||||
|
||||
openssl req -x509 -nodes -newkey rsa:2048 -sha256 -keyout \
|
||||
/etc/ssl/private/pure-ftpd.pem \
|
||||
-out /etc/ssl/private/pure-ftpd.pem
|
||||
|
||||
Fill in all the fields with your personal information.
|
||||
|
||||
![Generate Certificate pem](http://blog.linoxide.com/wp-content/uploads/2015/10/Generate-Certificate-pem.png)
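
Before wiring the certificate into Pure-FTPd, it is worth sanity-checking it with openssl. A minimal sketch, using throwaway /tmp paths and an illustrative CN (substitute /etc/ssl/private/pure-ftpd.pem for your real file):

```shell
# Generate a throwaway certificate non-interactively, then inspect it.
# The /tmp paths and the CN are illustrative assumptions.
openssl req -x509 -nodes -newkey rsa:2048 -sha256 \
    -subj "/CN=ftp.example.com" \
    -keyout /tmp/demo-ftpd-key.pem \
    -out /tmp/demo-ftpd-cert.pem 2>/dev/null

# Show the subject and validity window of the certificate.
openssl x509 -in /tmp/demo-ftpd-cert.pem -noout -subject -dates
```

If the subject and the notBefore/notAfter dates print as expected, the pem file is usable.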
|
||||
|
||||
Next, change the certificate permissions:
|
||||
|
||||
chmod 600 /etc/ssl/private/*.pem
|
||||
|
||||
Once the certificate is generated, edit the pure-ftpd configuration file:
|
||||
|
||||
nano -c /usr/local/etc/pure-ftpd.conf
|
||||
|
||||
Uncomment line **423** to enable TLS:
|
||||
|
||||
TLS 1
|
||||
|
||||
And uncomment line **439** for the certificate file path:
|
||||
|
||||
CertFile /etc/ssl/private/pure-ftpd.pem
|
||||
|
||||
Save and exit, then restart the pure-ftpd service:
|
||||
|
||||
service pure-ftpd restart
|
||||
|
||||
Now let's test that Pure-FTPd works with TLS/SSL. Here I use "**FileZilla**" to connect to the FTP server with the user "**akari**" that we created.
|
||||
|
||||
![Pure-FTPd with TLS SUpport](http://blog.linoxide.com/wp-content/uploads/2015/10/Pure-FTPd-with-TLS-SUpport.png)
|
||||
|
||||
Pure-FTPd with TLS is working successfully on FreeBSD 10.2.
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
FTP (File Transfer Protocol) is a standard protocol used to transfer files between users and a server. One of the best lightweight and secure FTP server applications is Pure-FTPd. It is secure and supports the TLS/SSL encryption mechanism. Pure-FTPd is easy to install and configure; you can manage users with virtual user support, which keeps administration easy even on a server with many FTP users.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-pure-ftpd-tls-freebsd-10-2/
|
||||
|
||||
作者:[Arul][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arulm/
|
236
sources/tech/20151104 How to Install Redis Server on CentOS 7.md
Normal file
@ -0,0 +1,236 @@
|
||||
How to Install Redis Server on CentOS 7
|
||||
================================================================================
|
||||
Hi everyone, today Redis is the subject of our article: we are going to install it on CentOS 7, building the source files, installing the binaries, and creating and installing the files. After installing its components, we will set its configuration as well as some operating system parameters to make it more reliable and faster.
|
||||
|
||||
![Running Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)
|
||||
|
||||
Redis server
|
||||
|
||||
Redis is an open source, multi-platform data store written in ANSI C that serves datasets directly from memory, achieving extremely high performance. It supports various programming languages, including Lua, C, Java, Python, Perl, PHP and many others. It is based on simplicity, with about 30k lines of code that do "few" things, but do them well. Although it works in memory, persistence is available, and it has fairly reasonable support for high availability and clustering, which helps keep your data safe.
|
||||
|
||||
### Building Redis ###
|
||||
|
||||
There is no official RPM package available, so we need to build it from source; to do this you will need to install Make and GCC.
|
||||
|
||||
Install the GNU Compiler Collection and Make with yum if they are not already installed:
|
||||
|
||||
yum install gcc make
|
||||
|
||||
Download the tarball from [redis download page][1].
|
||||
|
||||
curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz
|
||||
|
||||
Extract the tarball contents
|
||||
|
||||
tar zxvf redis-3.0.4.tar.gz
|
||||
|
||||
Enter the Redis directory we have extracted:
|
||||
|
||||
cd redis-3.0.4
|
||||
|
||||
Use Make to build the source files
|
||||
|
||||
make
|
||||
|
||||
### Install ###
|
||||
|
||||
Enter the src directory:
|
||||
|
||||
cd src
|
||||
|
||||
Copy Redis server and client to /usr/local/bin
|
||||
|
||||
cp redis-server redis-cli /usr/local/bin
|
||||
|
||||
It is good to copy sentinel, benchmark, and the check tools as well:
|
||||
|
||||
cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin
|
||||
|
||||
Make Redis config directory
|
||||
|
||||
mkdir /etc/redis
|
||||
|
||||
Create a working and data directory under /var/lib/redis
|
||||
|
||||
mkdir -p /var/lib/redis/6379
|
||||
|
||||
#### System parameters ####
|
||||
|
||||
In order for Redis to work correctly, you need to set some kernel options.
|
||||
|
||||
Set vm.overcommit_memory to 1, which means always; this avoids data being truncated. Take a look [here][2] for more.
|
||||
|
||||
sysctl -w vm.overcommit_memory=1
|
||||
|
||||
Change the maximum number of backlog connections to some value higher than the tcp-backlog option in redis.conf, which defaults to 511. You can find more about sysctl-based IP networking tuning on the [kernel.org][3] website.
|
||||
|
||||
    sysctl -w net.core.somaxconn=512
|
||||
|
||||
Disable transparent huge pages support, which is known to cause latency and memory-access issues with Redis.
|
||||
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
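
To confirm the kernel parameters took effect, you can read the values back; a quick sketch (Linux /proc and /sys paths):

```shell
# Read back the values set above.
cat /proc/sys/vm/overcommit_memory       # 1 after the sysctl above
cat /proc/sys/net/core/somaxconn         # 512 after the sysctl above
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || true
```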
|
||||
|
||||
### redis.conf ###
|
||||
|
||||
redis.conf is the Redis configuration file; however, we will name the file 6379.conf here, where the number matches the network port it listens on. This naming scheme is recommended if you are going to run more than one Redis instance.
|
||||
|
||||
Copy sample redis.conf to **/etc/redis/6379.conf**.
|
||||
|
||||
cp redis.conf /etc/redis/6379.conf
|
||||
|
||||
Now edit the file and set some of its parameters.
|
||||
|
||||
vi /etc/redis/6379.conf
|
||||
|
||||
#### daemonize ####
|
||||
|
||||
Set daemonize to no; systemd needs the process to stay in the foreground, otherwise Redis will suddenly die.
|
||||
|
||||
daemonize no
|
||||
|
||||
#### pidfile ####
|
||||
|
||||
Set the pidfile to redis_6379.pid under /var/run.
|
||||
|
||||
pidfile /var/run/redis_6379.pid
|
||||
|
||||
#### port ####
|
||||
|
||||
Change the network port if you are not going to use the default
|
||||
|
||||
port 6379
|
||||
|
||||
#### loglevel ####
|
||||
|
||||
Set your loglevel.
|
||||
|
||||
loglevel notice
|
||||
|
||||
#### logfile ####
|
||||
|
||||
Set the logfile to /var/log/redis_6379.log
|
||||
|
||||
logfile /var/log/redis_6379.log
|
||||
|
||||
#### dir ####
|
||||
|
||||
Set the directory to /var/lib/redis/6379
|
||||
|
||||
dir /var/lib/redis/6379
|
||||
|
||||
### Security ###
|
||||
|
||||
Here are some actions that you can take to enforce security.
|
||||
|
||||
#### Unix sockets ####
|
||||
|
||||
In many cases, the client application resides on the same machine as the server, so there is no need to listen on network sockets. If this is the case, you may want to use unix sockets instead; for this you need to set the **port** option to 0, and then enable unix sockets with the following options.
|
||||
|
||||
Set the path to the socket file
|
||||
|
||||
unixsocket /tmp/redis.sock
|
||||
|
||||
Set restricted permission to the socket file
|
||||
|
||||
unixsocketperm 700
|
||||
|
||||
Now, to access it with redis-cli, use the -s flag pointing to the socket file:
|
||||
|
||||
redis-cli -s /tmp/redis.sock
|
||||
|
||||
#### requirepass ####
|
||||
|
||||
You may need remote access; if so, you should set a password, which will be required before any operation.
|
||||
|
||||
requirepass "bTFBx1NYYWRMTUEyNHhsCg"
|
||||
|
||||
#### rename-command ####
|
||||
|
||||
Imagine the output of the next command. Yes, it will dump the configuration of the server, so you should deny access to this kind of information whenever possible.
|
||||
|
||||
CONFIG GET *
|
||||
|
||||
You can restrict, or even disable, this and other commands by using **rename-command**. You must provide a command name and a replacement. To disable a command, set the replacement string to "" (blank). Renaming to an unguessable string is more secure than the default, as it prevents someone from guessing the command name.
|
||||
|
||||
rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
|
||||
rename-command FLUSHALL ""
|
||||
rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"
|
||||
|
||||
![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)
|
||||
|
||||
Access through unix sockets with password and command changes
|
||||
|
||||
#### Snapshots ####
|
||||
|
||||
By default Redis will periodically dump its dataset to **dump.rdb** in the data directory we set. You can configure how often the rdb file is updated with the save directives: the first parameter is a timeframe in seconds and the second is the number of changes performed on the data set.
|
||||
|
||||
Every 15 minutes, if at least 1 key changed:
|
||||
|
||||
save 900 1
|
||||
|
||||
Every 5 minutes, if at least 10 keys changed:
|
||||
|
||||
save 300 10
|
||||
|
||||
Every minute, if at least 10000 keys changed:
|
||||
|
||||
save 60 10000
|
||||
|
||||
The **/var/lib/redis/6379/dump.rdb** file contains a dump of the dataset in memory since the last save. Since Redis creates a temporary file and then replaces the original file, there is no risk of corruption, and you can always copy it directly without fear.
|
||||
|
||||
### Starting at boot ###
|
||||
|
||||
You may use systemd to add Redis to the system startup
|
||||
|
||||
Copy the sample init script to /etc/init.d; note the port number in the script name:
|
||||
|
||||
cp utils/redis_init_script /etc/init.d/redis_6379
|
||||
|
||||
We are going to use systemd, so create a unit file named redis_6379.service under **/etc/systemd/system**:
|
||||
|
||||
vi /etc/systemd/system/redis_6379.service
|
||||
|
||||
Put in this content; see man systemd.service for details:
|
||||
|
||||
[Unit]
|
||||
Description=Redis on port 6379
|
||||
|
||||
[Service]
|
||||
Type=forking
|
||||
ExecStart=/etc/init.d/redis_6379 start
|
||||
ExecStop=/etc/init.d/redis_6379 stop
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
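
As a sketch, the same unit file can also be written non-interactively with a heredoc instead of vi. Shown here against a /tmp path for illustration; the real target is /etc/systemd/system/redis_6379.service:

```shell
# Write the unit file via a heredoc (illustrative /tmp path).
cat > /tmp/redis_6379.service <<'EOF'
[Unit]
Description=Redis on port 6379

[Service]
Type=forking
ExecStart=/etc/init.d/redis_6379 start
ExecStop=/etc/init.d/redis_6379 stop

[Install]
WantedBy=multi-user.target
EOF

# Count the unit sections as a quick sanity check.
grep -c '^\[' /tmp/redis_6379.service    # prints: 3
```

After writing the real file, run `systemctl daemon-reload` so systemd picks it up.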
|
||||
|
||||
Now add the memory overcommit and maximum backlog options we have set before to the **/etc/sysctl.conf** file.
|
||||
|
||||
vm.overcommit_memory = 1
|
||||
|
||||
net.core.somaxconn=512
|
||||
|
||||
Transparent huge pages support has no sysctl directive, so put the command at the end of /etc/rc.local:
|
||||
|
||||
echo never > /sys/kernel/mm/transparent_hugepage/enabled
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
That's enough to start: with these settings you will be able to deploy a Redis server for many simple scenarios, though there are many more options in redis.conf for complex environments. In some cases, you may use [replication][4] and [Sentinel][5] to provide high availability, or [split the data][6] across servers to create a cluster. Thanks for reading!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/storage/install-redis-server-centos-7/
|
||||
|
||||
作者:[Carlos Alberto][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/carlosal/
|
||||
[1]:http://redis.io/download
|
||||
[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
|
||||
[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
|
||||
[4]:http://redis.io/topics/replication
|
||||
[5]:http://redis.io/topics/sentinel
|
||||
[6]:http://redis.io/topics/partitioning
|
@ -0,0 +1,124 @@
|
||||
translating by ezio
|
||||
|
||||
How to Install SQLite 3.9.1 with JSON Support on Ubuntu 15.04
|
||||
================================================================================
|
||||
Hello and welcome to today's article on SQLite, the most widely deployed SQL database engine in the world. SQLite comes with zero configuration, meaning no setup or administration is needed. SQLite is a public-domain software package that provides a relational database management system (RDBMS) used to store user-defined records in large tables. In addition to data storage and management, the database engine processes complex query commands that combine data from multiple tables to generate reports and data summaries.
|
||||
|
||||
SQLite is very small and lightweight and does not require a separate server process or system to operate. It is available on UNIX, Linux, Mac OS X, Android, iOS and Windows, and is used in various software applications like Opera, Ruby on Rails, Adobe Systems, Mozilla Firefox, Google Chrome and Skype.
|
||||
|
||||
### 1) Basic Requirements: ###
|
||||
|
||||
There are no complex requirements for the installation of SQLite, as it supports all major platforms.
|
||||
|
||||
So, let's log in to your Ubuntu server with sudo or root credentials using your CLI or Secure Shell. Then update your system so that your operating system is up to date with the latest packages.
|
||||
|
||||
On Ubuntu, use the command below to update the system:
|
||||
|
||||
# apt-get update
|
||||
|
||||
If you are deploying SQLite on a fresh Ubuntu installation, make sure that you have installed some basic system management utilities like wget, make, unzip, and gcc.
|
||||
|
||||
To install the wget, make and gcc packages on Ubuntu, use the command below, then press "Y" to allow and proceed with the installation of these packages.
|
||||
|
||||
# apt-get install wget make gcc
|
||||
|
||||
### 2) Download SQLite ###
|
||||
|
||||
To download the latest package of SQLite, you can refer to the official [SQLite Download Page][1] as shown below.
|
||||
|
||||
![SQLite download](http://blog.linoxide.com/wp-content/uploads/2015/10/Selection_014.png)
|
||||
|
||||
Copy the link to the source package and download it on your Ubuntu server using the wget utility:
|
||||
|
||||
# wget https://www.sqlite.org/2015/sqlite-autoconf-3090100.tar.gz
|
||||
|
||||
![wget SQLite](http://blog.linoxide.com/wp-content/uploads/2015/10/23.png)
|
||||
|
||||
After the download is complete, extract the package using the command below, then change your current directory to the extracted SQLite folder.
|
||||
|
||||
# tar -zxvf sqlite-autoconf-3090100.tar.gz
|
||||
|
||||
### 3) Installing SQLite ###
|
||||
|
||||
Now we are going to install and configure the SQLite package that we downloaded. To compile and install SQLite on Ubuntu, run the configure script from the directory where you extracted the SQLite package, as shown below.
root@ubuntu-15:~/sqlite-autoconf-3090100# ./configure --prefix=/usr/local
![SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/35.png)
Once the package configuration is done with the mentioned prefix, run the make command below to compile the package.
root@ubuntu-15:~/sqlite-autoconf-3090100# make
source='sqlite3.c' object='sqlite3.lo' libtool=yes \
DEPDIR=.deps depmode=none /bin/bash ./depcomp \
/bin/bash ./libtool --tag=CC --mode=compile gcc -DPACKAGE_NAME=\"sqlite\" -DPACKAGE_TARNAME=\"sqlite\" -DPACKAGE_VERSION=\"3.9.1\" -DPACKAGE_STRING=\"sqlite\ 3.9.1\" -DPACKAGE_BUGREPORT=\"http://www.sqlite.org\" -DPACKAGE_URL=\"\" -DPACKAGE=\"sqlite\" -DVERSION=\"3.9.1\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_FDATASYNC=1 -DHAVE_USLEEP=1 -DHAVE_LOCALTIME_R=1 -DHAVE_GMTIME_R=1 -DHAVE_DECL_STRERROR_R=1 -DHAVE_STRERROR_R=1 -DHAVE_POSIX_FALLOCATE=1 -I. -D_REENTRANT=1 -DSQLITE_THREADSAFE=1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -g -O2 -c -o sqlite3.lo sqlite3.c
After running the make command, complete the installation of SQLite on Ubuntu by running the 'make install' command as shown below.
# make install
![SQLite Make Install](http://blog.linoxide.com/wp-content/uploads/2015/10/44.png)
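The configure, make and make install steps above can be collected into one small script. The sketch below only echoes each command (a dry run) so the sequence can be inspected safely; replace `run` with direct execution once you are inside the extracted source directory:

```shell
# Dry-run sketch of the full build sequence; 'run' prints each step
# instead of executing it, so the order can be reviewed before running.
run() { echo "+ $*"; }

run ./configure --prefix=/usr/local
run make
run make install
```

Dropping the `run` wrapper executes the real build; adding `set -e` at the top makes the script stop at the first failing step.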
### 4) Testing SQLite Installation ###
To confirm the successful installation of SQLite 3.9, run the command below in your command line interface.
# sqlite3
You will see the SQLite version after running the above command, as shown.
![Testing SQLite Installation](http://blog.linoxide.com/wp-content/uploads/2015/10/53.png)
### 5) Using SQLite ###
SQLite is very handy to use. To get detailed information about its usage, simply run the command below in the SQLite console.
sqlite> .help
Here is the list of all its available commands with their descriptions, which will help you start using SQLite.
![SQLite Help](http://blog.linoxide.com/wp-content/uploads/2015/10/62.png)
Now, in this last section, we make use of a few SQLite commands to create a new database using the sqlite3 command line interface.
To create a new database, run the command below.
# sqlite3 test.db
To create a table within the new database, run the command below.
sqlite> create table memos(text, priority INTEGER);
After creating the table, insert some data using the following commands.
sqlite> insert into memos values('deliver project description', 15);
sqlite> insert into memos values('writing new articles', 100);
To view the inserted data from the table, run the command below.
sqlite> select * from memos;
deliver project description|15
writing new articles|100
To exit from sqlite3, type the command below.
sqlite> .exit
![Using SQLite3](http://blog.linoxide.com/wp-content/uploads/2015/10/73.png)
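The same session can also be run non-interactively by piping SQL into sqlite3, which is handy for scripting. A sketch, assuming the sqlite3 binary built above is on the PATH and using a throwaway database file:

```shell
# Run the whole memo example in one shot against a scratch database.
db=/tmp/memos-test.$$.db
sqlite3 "$db" <<'SQL'
create table if not exists memos(text, priority INTEGER);
insert into memos values('deliver project description', 15);
insert into memos values('writing new articles', 100);
select * from memos;
SQL
rm -f "$db"   # clean up the scratch file
```

The `select` prints the same two pipe-separated rows shown in the interactive session above.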
### Conclusion ###
In this article you learned how to install the latest version of SQLite, 3.9.1, which includes the JSON1 support that was recently added in version 3.9.0. It is an amazing library that gets embedded inside the application that makes use of it, keeping the resource footprint efficient and light. We hope you found this article helpful; feel free to get back to us if you run into any difficulty.
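As a quick check of the JSON1 support mentioned above, the extension's functions can be called straight from the shell. This assumes the sqlite3 binary in use is version 3.9.0 or later and was compiled with JSON1 enabled:

```shell
# json_extract comes from the JSON1 extension; it pulls a value out of
# a JSON document by path and prints it as plain text.
sqlite3 :memory: "select json_extract('{\"name\": \"sqlite\", \"version\": \"3.9.1\"}', '$.name');"
```

If the build lacks JSON1, sqlite3 reports `no such function: json_extract` instead.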
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-sqlite-json-ubuntu-15-04/
作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/
[1]:https://www.sqlite.org/download.html