5 个提升你开源项目贡献者基数的方法
============================================================

![]()

图片提供:opensource.com

许多自由和开源软件项目因解决问题而出现,人们开始为它们做贡献,是因为他们也想修复遇到的问题。当项目的最终用户发现它对自己的需求有用,该项目就会开始增长,并出于分享的目的把更多人吸引到同一个项目社区中来。

就像任何有生命的事物一样,增长既是项目成功的标志,也是其成功的来源。那么项目领导者和维护者该如何激励贡献者基数的增长?这里有五种方法。

### 1、 提供好的文档

人们经常低估项目[文档][2]的重要性。它是项目贡献者的主要信息来源,会激励他们的努力。信息必须是正确和最新的,应该包括如何构建该软件、如何提交补丁、编码风格指南等内容。

可以看看经验丰富的科技作家、编辑 Bob Reselman 写的 [7 个创建世界级文档的规则][3]。

开发人员文档的一个很好的例子是 [Python 开发人员指南][4]。它包括清晰简洁的步骤,涵盖 Python 开发的各个方面。

### 2、 降低进入门槛

如果你的项目有[工单或 bug 追踪工具][5],请确保将初级任务标记为“小 bug”或“起点”,新的贡献者可以很容易地通过解决这些问题进入项目。追踪工具也是标记非编程任务(如平面设计、图稿和文档改进)的地方。有许多项目成员并不是每天都写代码,但却通过这种方式成为项目的推动力。

Fedora 项目就维护着一个这样的[易修复和入门级问题的追踪工具][6]。

### 3、 为补丁提供常规反馈

要确认收到每一个补丁,即使它只有一行代码,并给作者反馈。提供反馈有助于吸引潜在的候选人,并指导他们熟悉项目。所有项目都应有一个邮件列表和[聊天功能][7]用于交流,问答都可以在这些渠道中进行。大多数项目不会在一夜之间成功,但那些活跃的邮件列表和沟通渠道为增长创造了环境。

### 4、 推广你的项目

始于解决问题的项目实际上可能对其他开发人员也有用。作为项目的主要贡献者,你的责任是为你的项目编写文档并推广它。写博客文章,并在社交媒体上分享项目的进展。你可以从简要描述如何成为项目的贡献者开始,并在该描述中提供指向主要开发者文档的参考链接。此外,请务必提供有关路线图和未来版本的信息。

要针对你的受众写作,可以看看 Opensource.com 的社区经理 Rikki Endsley 写的[写作提示][8]。

### 5、 保持友好

友好的对话语调和迅速的回复会加强人们对你的项目的兴趣。最初,人们提出问题只是为了寻求帮助,但在未来,新的贡献者也可能会提出想法或建议。要让他们有信心相信自己可以成为项目的贡献者。

记住,你一直在被人评头论足!人们会观察项目开发者是如何在邮件列表或聊天中交谈的,这体现了项目对新贡献者的欢迎和开放程度。使用技术时,我们有时会忘记人文关怀,但这对于任何项目的生态系统都很重要。设想这样一种情况:项目本身很好,但项目维护者却不太友好,这样的管理员可能会把用户从项目边上赶走。对于有大量用户基数的项目而言,不友好的环境可能导致分裂,一部分用户可能决定复刻该项目并启动新项目。在开源世界中是有这样的先例的。

另外,拥有不同背景的人对于开源项目的持续增长和源源不断的点子是很重要的。

最后,项目负责人有责任维持和帮助项目成长。指导新的贡献者是项目的关键,他们将成为项目和社区未来的领导者。

还可以阅读:由红帽的内容战略家 Nicole Engard 写的 [7 种让新的贡献者感到受欢迎的方式][1]。

--------------------------------------------------------------------------------

作者简介:

![]()

Kushal Das - Kushal Das 是 Python 软件基金会的一名 CPython 核心开发人员和主管。他是一名长期的 FOSS 贡献者和导师,帮助新人进入贡献世界。他目前在 Red Hat 担任 Fedora 云工程师。他的博客在 https://kushaldas.in ,你也可以在 Twitter 上通过 @kushaldas 找到他。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/1/expand-project-contributor-base

作者:[Kushal Das][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Bestony](https://github.com/bestony)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/kushaldas
[1]:https://opensource.com/life/16/5/sumana-harihareswara-maria-naggaga-oscon
[2]:https://opensource.com/tags/documentation
[3]:https://opensource.com/business/16/1/scale-14x-interview-bob-reselman
[4]:https://docs.python.org/devguide/
[5]:https://opensource.com/tags/bugs-and-issues
[6]:https://fedoraproject.org/easyfix/
[7]:https://opensource.com/alternatives/slack
[8]:https://opensource.com/business/15/10/what-stephen-king-can-teach-tech-writers
如何在 Linux 中捕获并流式传输你的游戏过程
============================================================

### 在本页中

1. [捕获设置][1]
2. [设置源][2]
3. [过渡][3]
4. [总结][4]

也许没有那么多铁杆的游戏玩家使用 Linux,但肯定有很多 Linux 用户喜欢玩游戏。如果你是其中之一,并希望向世界展示 Linux 游戏不再是一个笑话,那么你会喜欢下面这个关于如何捕捉并且(或者)流式播放游戏过程的快速教程。我在这里将使用一个名为 “[Open Broadcaster Software Studio][5]” 的软件,这可能是我们所能找到的最好的一种。
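在开始之前,需要先把 OBS Studio 装好。下面是一份安装示意(假设:Ubuntu 系统,PPA 源与包名 `obs-studio`、`ffmpeg` 来自 OBS 官网对 Ubuntu 的安装说明;其他发行版请查阅各自的软件仓库):

```
$ sudo add-apt-repository ppa:obsproject/obs-studio   ### 添加 OBS 官方 PPA(假设:Ubuntu)
$ sudo apt update
$ sudo apt install obs-studio ffmpeg                  ### OBS 依赖 FFmpeg 进行编码
```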
### 捕获设置

在顶层菜单中,我们选择 “File” → “Settings”,然后选择 “Output” 来设置要生成的文件的选项。在这里我们可以设置想要的音频和视频的比特率、新创建的文件的目标路径和文件格式。这上面还提供了粗略的质量设置。

[![]()][6]

如果我们将顶部的输出模式从 “Simple” 更改为 “Advanced”,我们就能够设置 CPU 负载,以控制 OBS 对系统的影响。根据所选的质量、CPU 能力和捕获的游戏,可以找到一个不会导致丢帧的 CPU 负载设置。你可能需要做一些试验才能找到最佳设置,但如果将质量设置为低,则不必太过担心。

[![]()][7]

接下来,我们转到设置的 “Video” 部分,设置我们想要的输出视频分辨率。注意缩小过滤(downscaling filtering)方式,因为它会影响最终的质量。

[![]()][8]

你可能还需要绑定热键以启动、暂停和停止录制。这特别有用,因为这样你就可以在录制时一直看着游戏画面。为此,请在设置中选择 “Hotkeys” 部分,并在相应的框中分配所需的按键。当然,你不必每个框都填写,只需要填写所需的即可。

[![]()][9]

如果你对流式传输感兴趣,而不仅仅是录制,请选择 “Stream” 分类的设置,然后你可以从支持的 30 种流媒体服务中进行选择,包括 Twitch、Facebook Live 和 YouTube,然后选择服务器并输入流密钥。

![]()

### 设置源

在左下方,你会发现一个名为 “Sources” 的框。我们按下加号来添加一个新的源,它本质上就是我们录制的媒体源。在这里你可以设置音频和视频源,图像甚至文本也是可以的。

[![]()][11]

前三个是关于音频源,接下来的两个是图像,JACK 选项用于捕获来自乐器的实时音频,“Media Source” 用于添加文件等。这里我们感兴趣的是 “Screen Capture (XSHM)”、“Video Capture Device (V4L2)” 和 “Window Capture (Xcomposite)” 选项。

屏幕捕获选项让你选择要捕获的屏幕(包括活动屏幕),以便记录所有内容,如工作区更改、窗口最小化等。对于要在发布前进行编辑的常规整段录制来说,这是一个合适的选项。

我们来探讨另外两个。“Window Capture” 让我们选择一个活动窗口并将其放入捕获画面。而为了把我们的脸放在画面一角,让人们可以在我们说话时看到我们,“Video Capture Device” 就很有用了。当然,每个添加的源都提供了一组选项,供我们实现最终想要的效果。

![]()

### 过渡

最后,假设你正在流式传输游戏会话,并希望能够在游戏视图和自己(或任何其他来源)之间切换。为此,请从右下角切换为 “Studio Mode”,并添加一个分配给另一个源的场景。你还可以通过取消选中 “Duplicate scene”,并勾选 “Transitions” 旁边的齿轮图标上的 “Duplicate sources” 来进行切换。当你想在简短评论中显示你的脸部时,这很有帮助。

![]()

### 总结

OBS Studio 是一个功能强大的免费软件,它工作稳定,使用起来也相当简单。

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/

作者:[Bill Toulas][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
8 个优秀的开源 Markdown 编辑器
============================================================

### Markdown

首先,对 Markdown 进行一个简单的介绍。Markdown 是由 John Gruber 和 Aaron Swartz 共同创建的一种轻量级纯文本格式语法。Markdown 可以让用户“以易读、易写的纯文本格式来进行写作,然后将其转换为格式有效的 XHTML(或 HTML)”。Markdown 语法只包含一些非常容易记住的符号,其学习曲线非常平缓:你可以在炒蘑菇的同时一点点学会 Markdown 语法(大约 10 分钟)。通过使用尽可能简单的语法,出错率可以降到最低。除了拥有友好的语法,它还具有直接输出干净、有效的(X)HTML 文件的强大能力。如果你看过我的 HTML 文件,你就会知道这个功能是多么的重要。

Markdown 格式语法的主要目标是实现最大的可读性。用户能够以纯文本的形式发布一份 Markdown 格式的文件。用 Markdown 写作的文本易于在计算机、智能手机和人与人之间共享。几乎所有的内容管理系统都支持 Markdown。它作为一种网络写作格式流行起来,并衍生出许多被各种服务所采用的变体,比如 GitHub 和 Stack Exchange。

你可以使用任何文本编辑器来写 Markdown 文件,但我建议使用一个专门为这种语法设计的编辑器。这篇文章中所讨论的软件允许你使用 Markdown 语法来写各种格式的专业文档,包括博客文章、演示文稿、报告、电子邮件以及幻灯片等。另外,所有的应用都是在开源许可证下发布的,在 Linux、OS X 和 Windows 操作系统下均可用。
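为了直观地感受 Markdown 语法有多简单,下面给出一个小示例:先写一小段 Markdown,再将其转换为 HTML。这里假设系统中已安装了 pandoc(换成文中任何一款编辑器的导出功能,效果也是一样的),文件名 note.md 仅为示意:

```
$ cat > note.md <<'EOF'
# 一级标题

这是 **加粗**、*斜体*,以及一个 [链接](https://example.com)。

- 列表项一
- 列表项二
EOF
$ pandoc note.md -o note.html   ### 转换为 HTML(假设已安装 pandoc)
```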
### Remarkable

![]()

让我们从 Remarkable 开始。顾名思义(remarkable 意为“出色的”),它是一个功能相当丰富的 Markdown 编辑器 —— 它并不支持 Markdown 的全部功能特性,但该有的都有。它使用和 GitHub Markdown 类似的语法。

你可以使用 Remarkable 来写 Markdown 文档,并在实时预览窗口查看更改。你可以把你的文件导出为 PDF 格式(带有目录)和 HTML 格式文件。它有丰富的配置选项和许多样式,因此,你可以把它配置成你最满意的 Markdown 编辑器。

其他一些特性:

* 语法高亮
* 支持 [GitHub 风味的 Markdown](https://linux.cn/article-8399-1.html)
* 支持 MathJax - 通过高级格式呈现丰富文档
* 键盘快捷键

在 Debian、Ubuntu、Fedora、SUSE 和 Arch 系统上均有可用的简易安装程序。

主页: [https://remarkableapp.github.io/][4]
许可证: MIT 许可

### Atom

![]()

毫无疑问,Atom 是一个出色的文本编辑器。超过 50 个开源包集合在一个微小的内核上,构成了 Atom。凭借 Node.js 的支持,以及全套的功能特性,Atom 是我最喜欢用来写代码的编辑器。Atom 的特性在[杀手级开源应用][5]的文章中有更详细的介绍,它是如此的强大。但是作为一个 Markdown 编辑器,Atom 还有许多不足之处,它的默认软件包不支持 Markdown 的一些特性。例如,正如上图所展示的,它不支持公式渲染。

但是,开源拥有强大的力量,这也是我强烈提倡开源的一个重要原因。Atom 上有许多包以及一些复刻,可以补上缺失的功能特性。比如,Markdown Preview Plus 提供了 Markdown 文件的实时预览,并伴有数学公式渲染和实时重载。另外,你也可以尝试一下 [Markdown Preview Enhanced][6]。如果你需要自动滚动特性,那么 [markdown-scroll-sync][7] 可以满足你的需求。我是 [Markdown-Writer][8] 和 [Markdown-pdf][9] 的忠实拥趸,后者支持将 Markdown 快速转换为 PDF、PNG 以及 JPEG 文件。

这个方式体现了开源的理念:允许用户通过添加扩展来提供所需的特性,这让我想起了 Woolworths 的“自选杂拌糖果”(pick 'n' mix)。虽然需要多付出一些努力,但能收获最好的回报。

主页: [https://atom.io/][10]
许可证: MIT 许可

### Haroopad

![]()

Haroopad 是一个优秀的 Markdown 编辑器,是一个用于创建适宜 Web 的文档的处理器。使用 Haroopad 可以创作各种格式的文档,比如博客文章、幻灯片、演示文稿、报告和电子邮件等。Haroopad 在 Windows、Mac OS X 和 Linux 上均可用,有 Debian/Ubuntu 的软件包,也有 Windows 和 Mac 的二进制文件。该应用程序使用了 node-webkit、CodeMirror、marked 以及 Twitter 的 Bootstrap。

Haroo 在韩语中的意思是“一天”。

它的功能列表非常可观,请看下面:

* 主题、皮肤和 UI 组件
* 超过 30 种不同的编辑主题 - tomorrow-night-bright 和 zenburn 是近期刚添加的
* 编辑器中的代码块的语法高亮
* Ruby、Python、PHP、JavaScript、C、HTML 和 CSS 的语法高亮支持
* 基于 CodeMirror,这是一个在浏览器中使用 JavaScript 实现的通用文本编辑器
* 实时预览主题
* 基于 markdown-css 的 7 个主题
* 语法高亮
* 基于 highlight.js 的 112 种语言以及 49 种样式
* 定制主题
* 基于 CSS(层叠样式表)的样式
* 演示模式 - 对于现场演示非常有用
* 绘图 - 流程图和序列图
* 任务列表
* 扩展 Markdown 语法,支持 TOC(目录)、GitHub 风味 Markdown 以及数学表达式、脚注和任务列表等
* 字体大小
* 使用首选项窗口和快捷键来设置编辑器和预览的字体大小
* 嵌入富媒体内容
* 视频、音频、3D、文本、开放图形以及 oEmbed
* 支持大约 100 种主要的网络服务(YouTube、SoundCloud、Flickr 等)
* 支持拖放
* 显示模式
* 默认:编辑器|预览器,倒置:预览器|编辑器,仅编辑器,仅预览器(View -> Mode)
* 插入当前日期和时间
* 多种格式支持(Insert -> Date & Time)
* HTML 到 Markdown
* 拖放你在 Web 浏览器中选择好的文本
* Markdown 解析选项
* 大纲预览
* 纯粹主义者的 Vim 键位绑定
* Markdown 自动补全
* 导出为 PDF 和 HTML
* 可以将带有样式的 HTML 复制到剪贴板,用于所见即所得编辑器
* 自动保存和恢复
* 文件状态信息
* 换行符或空格缩进
* 一、二、三列布局视图
* Markdown 语法帮助对话框
* 导入和导出设置
* 通过 MathJax 支持 LaTeX 数学表达式
* 导出文件为 HTML 和 PDF
* 创建扩展来构建自己的功能
* 高效地将文件发布到博客系统:WordPress、Evernote 和 Tumblr 等
* 全屏模式 - 尽管该模式不能隐藏顶部菜单栏和顶部工具栏
* 国际化支持:英文、韩文、西班牙文、简体中文、德文、越南文、俄文、希腊文、葡萄牙文、日文、意大利文、印度尼西亚文、土耳其文和法文

主页: [http://pad.haroopress.com/][11]
许可证: GNU GPL v3 许可

### StackEdit

![]()

StackEdit 是一个功能齐全的 Markdown 编辑器,基于 PageDown(Stack Overflow 和其它一些 Stack Exchange 网站所使用的 Markdown 库)。不同于列表中的其他编辑器,StackEdit 是一个基于 Web 的编辑器,在 Chrome 浏览器上即可使用。

特性包括:

* 实时预览 HTML,并通过滚动联动特性将编辑器和预览窗口的滚动条绑定在一起
* 支持 Markdown Extra 和 GitHub 风味 Markdown,Prettify/Highlight.js 语法高亮
* 通过 MathJax 支持 LaTeX 数学表达式
* 所见即所得的控制按钮
* 布局配置
* 不同风格的主题支持
* à la carte(按需选用)的扩展
* 离线编辑
* 可以与 Google 云端硬盘(多帐户)和 Dropbox 在线同步
* 一键发布到 Blogger、Dropbox、Gist、GitHub、Google Drive、SSH 服务器、Tumblr 和 WordPress

主页: [https://stackedit.io/][12]
许可证: Apache 许可

### MacDown

![]()

MacDown 是这个列表中唯一一个只运行在 macOS 上的全功能编辑器。具体来说,它需要 OS X 10.8 或更高的版本。它在内部使用 Hoedown 将 Markdown 渲染成 HTML,这使得它的特性更加强大。Hoedown 是 Sundown 的一个复活复刻,完全符合标准、无依赖、具有良好的扩展支持,并支持 UTF-8。

MacDown 基于 Mou,后者是专为 Web 开发人员设计的专用解决方案。

它提供了良好的 Markdown 渲染,通过 Prism 提供的语言识别渲染实现代码块级的语法高亮,支持 MathML 和 LaTeX 渲染、GFM 任务列表、Jekyll 前置内容(front-matter)以及可选的高级自动补全。更重要的是,它占用的资源很少。想在 OS X 上写 Markdown?MacDown 就是我针对 Web 开发者的开源推荐。

主页: [https://macdown.uranusjr.com/][13]
许可证: MIT 许可

### ghostwriter

![]()

ghostwriter 是一个跨平台的、具有美感的、无干扰的 Markdown 编辑器。它内建了 Sundown 处理器支持,还可以自动检测 pandoc、MultiMarkdown、Discount 和 cmark 处理器。它试图成为一个朴实的编辑器。

ghostwriter 有许多很好的功能,包括语法高亮、全屏模式、聚焦模式、主题、通过 Hunspell 进行拼写检查、实时字数统计、实时 HTML 预览、HTML 预览自定义 CSS 样式表、图片拖放支持以及国际化支持。Hemingway 模式按钮可以禁用退格键和删除键。新增的 “Markdown cheat sheet” HUD 窗口也是一个有用的功能。主题支持很基本,但在 [GitHub 仓库上][14]也有一些可用的试验性主题。

ghostwriter 的功能相对有限,但我越来越欣赏这个应用,部分原因是其简洁的界面能够让写作者完全专注于组织内容。这一应用非常值得推荐。

ghostwriter 在 Linux 和 Windows 系统上均可用。在 Windows 系统上还有一个便携版可用。

主页: [https://github.com/wereturtle/ghostwriter][15]
许可证: GNU GPL v3 许可

### Abricotine

![]()

Abricotine 是一个为桌面构建的、旨在跨平台且开源的 Markdown 编辑器,在 Linux、OS X 和 Windows 上均可用。

该应用支持 Markdown 语法以及一些 GitHub 风味的 Markdown 增强(比如表格)。它允许用户直接在文本编辑器中预览文档,而不是在侧边栏中。

该应用有一系列有用的特性,包括拼写检查、以 HTML 格式保存文件,或把富文本复制粘贴到邮件客户端。你也可以在侧边窗格中显示文档目录,展示语法高亮的代码,以及助手、锚点和隐藏字符等。它目前正处于早期的开发阶段,因此还有一些很基本的 bug 需要修复,但它值得关注。它自带两个主题,如果有能力,你也可以添加你自己的主题。

主页: [http://abricotine.brrd.fr/][16]
许可证: GNU 通用公共许可证 v3 或更高许可

### ReText

![]()

ReText 是一个简单而强大的 Markdown 和 reStructuredText 文本编辑器。用户可以控制所有输出的格式。它编辑的文件是纯文本文件,但可以导出为 PDF、HTML 和其他格式的文件。ReText 官方仅支持 Linux 系统。
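ReText 是一个 Python 应用,除了发行版的软件仓库之外,通常也可以通过 pip 安装。下面是一个安装示意(包名 ReText 与启动命令 retext 取自该项目的 README,仅供参考;假设系统中已有 Python 3 和 pip):

```
$ pip install --user ReText   ### 从 PyPI 安装
$ retext                      ### 启动编辑器
```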
特性包括:

* 全屏模式
* 实时预览
* 同步滚动(针对 Markdown)
* 支持数学公式
* 拼写检查
* 分页符
* 导出为 HTML、ODT 和 PDF 格式
* 使用其他标记语言

主页: [https://github.com/retext-project/retext][17]
许可证: GNU GPL v2 或更高许可

--------------------------------------------------------------------------------

via: https://www.ossblog.org/markdown-editors/

作者:[Steve Emms][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ossblog.org/author/steve/
[1]:https://www.ossblog.org/author/steve/
[2]:https://www.ossblog.org/markdown-editors/#comments
[3]:https://www.ossblog.org/category/utilities/
[4]:https://remarkableapp.github.io/
[5]:https://www.ossblog.org/top-software/2/
[6]:https://atom.io/packages/markdown-preview-enhanced
[7]:https://atom.io/packages/markdown-scroll-sync
[8]:https://atom.io/packages/markdown-writer
[9]:https://atom.io/packages/markdown-pdf
[10]:https://atom.io/
[11]:http://pad.haroopress.com/
[12]:https://stackedit.io/
[13]:https://macdown.uranusjr.com/
[14]:https://github.com/jggouvea/ghostwriter-themes
[15]:https://github.com/wereturtle/ghostwriter
[16]:http://abricotine.brrd.fr/
[17]:https://github.com/retext-project/retext
完全指南:如何在 CentOS 7 中安装、配置和安全加固 FTP 服务
============================================================

FTP(文件传输协议)是一种用于通过网络[在服务器和客户端之间传输文件][1]的传统而广泛使用的标准工具,特别适合不需要身份验证的场合(允许匿名用户连接到服务器)。我们必须明白,默认情况下 FTP 是不安全的,因为它不加密传输用户凭据和数据。

在本指南中,我们将介绍在 CentOS/RHEL 7 和 Fedora 发行版中安装、配置和安全加固 FTP 服务器(VSFTPD 代表 “Very Secure FTP Daemon”)的步骤。

请注意,本指南中的所有命令将以 root 身份运行,如果你不使用 root 帐户操作服务器,请使用 [sudo 命令][2]获取 root 权限。

### 步骤 1:安装 FTP 服务器

1、 安装 vsftpd 服务器很直接,只要在终端运行下面的命令。

```
# yum install vsftpd
```

2、 安装完成后,服务先是被禁用的,因此我们需要手动启动,并设置在下次启动时自动启用:

```
# systemctl start vsftpd
# systemctl enable vsftpd
```

3、 接下来,为了允许从外部系统访问 FTP 服务,我们需要打开 FTP 守护进程监听的 21 端口:

```
# firewall-cmd --zone=public --permanent --add-port=21/tcp
```
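带有 `--permanent` 标志的规则要在防火墙重新载入后才会生效。下面的补充命令是 firewalld 的标准用法(原文此处省略了该步骤),供参考:

```
# firewall-cmd --reload
```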
### 步骤 2: 配置 FTP 服务器

4、 现在,我们会进行一些配置来设置并加固我们的 FTP 服务器,让我们先备份一下原始配置文件 `/etc/vsftpd/vsftpd.conf`:

```
# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig
```

接下来,打开上面的文件,并将下面的选项设置为相应的值:

```
anonymous_enable=NO ### 禁用匿名登录
local_enable=YES ### 允许本地用户登录
write_enable=YES ### 允许对文件系统做改动的 FTP 命令
local_umask=022 ### 本地用户创建文件所用的 umask 值
dirmessage_enable=YES ### 当用户首次进入一个新目录时显示一个消息
xferlog_enable=YES ### 维护一个记录上传、下载细节的日志文件
connect_from_port_20=YES ### 使用端口 20(ftp-data)用于 PORT 风格的连接
xferlog_std_format=YES ### 使用标准的日志格式
listen=NO ### 不要让 vsftpd 运行在独立模式
listen_ipv6=YES ### vsftpd 将监听 IPv6 套接字而不是 IPv4
pam_service_name=vsftpd ### vsftpd 使用的 PAM 服务名
userlist_enable=YES ### 让 vsftpd 载入用户名列表
tcp_wrappers=YES ### 使用 tcp wrappers
```

5、 现在基于用户列表文件 `/etc/vsftpd.userlist` 来配置 FTP,以允许/拒绝用户的访问。

默认情况下,如果设置了 `userlist_enable=YES`,当 `userlist_deny` 选项设置为 `YES` 的时候,`userlist_file=/etc/vsftpd.userlist` 中列出的用户将被拒绝登录。

然而,如果更改配置为 `userlist_deny=NO`,则意味着只有在 `userlist_file=/etc/vsftpd.userlist` 中显式指定的用户才允许登录。

```
userlist_enable=YES ### vsftpd 将从 userlist_file 给出的文件中载入用户名列表
userlist_file=/etc/vsftpd.userlist ### 存储用户名的文件
userlist_deny=NO
```

接下来,我们将介绍两种把 FTP 用户 chroot 到其家目录(即“本地根目录”)的可能方式,如下所述。

6、 接下来添加下面的选项,来限制 FTP 用户只能访问他们自己的家目录。

```
chroot_local_user=YES
allow_writeable_chroot=YES
```

`chroot_local_user=YES` 意味着本地用户将被置于 chroot jail 中,默认情况下这是他们登录之后的家目录。

同样默认的是,出于安全原因,vsftpd 不允许 chroot jail 目录是可写的,然而,我们可以添加 `allow_writeable_chroot=YES` 来覆盖这个设置。

保存并关闭文件。
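改动配置之后,需要重启服务才能让设置生效。下面是 systemd 的常规做法(原文此处未展示),供参考:

```
# systemctl restart vsftpd
```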
### 步骤 3: 用 SELinux 加固 FTP 服务器

7、 现在,让我们设置下面的 SELinux 布尔值,来允许 FTP 读取用户家目录下的文件。请注意,这原本是使用以下命令完成的:

```
# setsebool -P ftp_home_dir on
```

然而,由于这个 bug 报告:[https://bugzilla.redhat.com/show_bug.cgi?id=1097775][3],`ftp_home_dir` 指令默认是禁用的。

现在,我们会使用 `semanage` 命令来设置 SELinux 规则,以允许 FTP 读取/写入用户的家目录。

```
# semanage boolean -m ftpd_full_access --on
```
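设置完成后,可以确认一下该布尔值是否已经生效。下面是补充的验证示例(非原文内容,`getsebool` 和 `semanage boolean -l` 都是 SELinux 的标准工具):

```
# getsebool ftpd_full_access        ### 查看该布尔值的当前状态
# semanage boolean -l | grep ftpd   ### 列出所有与 ftpd 相关的布尔值
```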
### 步骤 4: 测试 FTP 服务器

8、 现在我们会用 [useradd 命令][4]创建一个 FTP 用户来测试 FTP 服务器。

```
# useradd -m -c "Ravi Saive, CEO" -s /bin/bash ravi
# passwd ravi
```

之后,我们如下使用 [echo 命令][5]添加用户 ravi 到文件 `/etc/vsftpd.userlist` 中:

```
# echo "ravi" | tee -a /etc/vsftpd.userlist
# cat /etc/vsftpd.userlist
```

9、 现在是时候测试我们上面的设置是否可以正常工作了。让我们先测试匿名登录,从下面的截图可以看到,匿名登录没有被允许:

```
# ftp 192.168.56.10
Name (192.168.56.10:root) : anonymous
Login failed.
ftp>
```

[![]()][6]

*测试 FTP 匿名登录*

10、 让我们也测试一下没有列在 `/etc/vsftpd.userlist` 中的用户是否有权限登录,下面的截图显示的正是这种情况:

```
# ftp 192.168.56.10
```

[![]()][7]

*FTP 用户登录失败*

11、 现在最后测试一下列在 `/etc/vsftpd.userlist` 中的用户在登录后是否真的进入了他/她的家目录:

```
# ftp 192.168.56.10
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
```

[![]()][8]

*用户成功登录*

警告:使用 `allow_writeable_chroot=YES` 有一定的安全隐患,特别是用户具有上传权限或 shell 访问权限时。

只有当你完全清楚你在做什么时才应激活此选项。重要的是要注意,这些安全性影响并不是 vsftpd 特有的,它们适用于所有提供了把本地用户置于 chroot jail 中这一功能的 FTP 守护进程。

因此,我们将在下一节中看到一种更安全的方法,来设置一个不同的、不可写的本地根目录。

### 步骤 5: 配置不同的 FTP 家目录

12、 再次打开 vsftpd 配置文件,并将下面这个不安全的选项注释掉:

```
#allow_writeable_chroot=YES
```

然后,移除用户的 FTP 根目录(这里是 `/home/ravi/ftp`)上的写权限:

```
# chmod a-w /home/ravi/ftp
```

13、 接下来,在用户存储他/她的文件的本地根目录下创建一个文件夹:

```
# mkdir /home/ravi/ftp/files
# chmod 0700 /home/ravi/ftp/files/
```

接着在 vsftpd 配置文件中添加/修改这些选项:

```
user_sub_token=$USER ### 在本地根目录下插入用户名
local_root=/home/$USER/ftp ### 定义任何用户的本地根目录
```

保存并关闭文件。再说一次,改动设置之后,让我们重启服务:

```
# systemctl restart vsftpd
```

14、 现在最后再测试一次,查看用户的本地根目录是否就是我们在其家目录中创建的 FTP 目录:

```
# ftp 192.168.56.10
ftp> ls
```

[![]()][9]

*FTP 用户家目录登录成功*

就是这样了!在本文中,我们介绍了如何在 CentOS 7 中安装、配置以及安全加固 FTP 服务器。使用下面的评论栏给我们回复,或者分享关于这个主题的任何有用信息。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-ftp-server-in-centos-7/

作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[7]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png
[10]:https://linux.cn/article-8504-1.html
[11]:http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
AI 正快速入侵我们生活的五个方面
============================================================

> 让我们来看看我们已经被人工智能包围的五个真实存在的方面。

![]()

> 图片来源: opensource.com

开源项目[正在助推][2]人工智能(AI)的进步,而且随着技术的成熟,我们将听到更多关于 AI 如何影响我们生活的消息。你有没有考虑过 AI 是如何改变你周围的世界的?让我们来看看 AI 已经在哪些方面改变着我们的世界,并大胆预测一下它对未来的影响。

### 1. AI 影响你的购买决定

最近 [VentureBeat][3] 上的一篇文章[“AI 将如何帮助我们解读千禧一代”][4]吸引了我的注意。我承认我对人工智能没有思考太多,也没有费力尝试解读千禧一代,所以我很好奇,希望了解更多。事实证明,文章标题有点误导人,“如何把东西卖给千禧一代”或许是一个更准确的标题。

根据这篇文章,千禧一代是“一个令人垂涎的年龄阶段的人群,全世界的市场经理都在争抢他们”。通过分析网络行为 —— 无论是购物、社交媒体还是其他活动 —— 机器学习可以帮助预测行为模式,这将可以变成有针对性的广告。文章接着解释了如何对物联网和社交媒体平台进行挖掘以形成数据点。这篇文章解释说:“使用机器学习挖掘社交媒体数据,可以让公司了解千禧一代如何谈论其产品、他们对一个产品类别的看法、他们对竞争对手的广告活动如何响应,还可获得很多数据,用于设计有针对性的广告。”AI 和千禧一代成为营销的未来并不是什么很令人吃惊的事,但是 X 一代和婴儿潮一代,你们也逃不掉!(LCTT 译注:X 一代指出生于 20 世纪 60 年代中期至 70 年代末的美国人,婴儿潮是指二战结束后,1946 年初至 1964 年底出生的人)

> 人工智能根据行为变化,将包括城市人在内的整个人群设为目标群体。

例如,[Raconteur 上][23]的一篇文章[“AI 将怎样改变购买者的行为”][24]中解释说,AI 在网上零售行业最大的力量是它能够迅速适应客户行为不断变化的形势。人工智能创业公司 [Fluid AI][25] 的首席执行官 Abhinav Aggarwal 表示,他公司的软件被一个客户用来预测顾客行为,有一次系统注意到在暴风雪期间发生了一个变化。他解释说:“那些通常会忽略在一天中发送的电子邮件或应用内通知的用户,现在正在打开它们,因为他们在家里没有太多的事情可做。一个小时之内,AI 系统就适应了新的情况,并在工作时间开始发送更多的促销材料。”

AI 正在改变我们怎样花钱和为什么花钱,但是 AI 又是怎样改变我们挣钱的方式的呢?

### 2. 人工智能正在改变我们如何工作

[Fast Company][5] 最近的一篇文章[“2017 年人工智能将如何改变我们的生活”][6]中说道,求职者将会从人工智能中受益。作者解释说,除了提供薪酬趋势的最新信息之外,人工智能还将被用来给求职者发送相关职位的空缺信息;而当你应该升职的时候,你也很可能会得到提醒。

人工智能也可以被公司用来帮助新入职的员工。文章解释说:“许多新员工在刚入职的几天内会获得大量信息,其中大部分都留不下来。”相反,机器人可以随着时间的推移,在新员工需要相关信息时,再一点点地“滴灌”给他。

[Inc.][7] 有一篇文章[“没有偏见的企业:人工智能将如何重塑招聘机制”][8],观察了人才管理解决方案提供商 [SAP SuccessFactors][9] 是怎样利用人工智能作为职位描述的“偏见检查器”,以及检查员工薪酬方面的偏见。

[《Deloitte 2017 人力资本趋势报告》][10]显示,AI 正在激励组织进行重组。Fast Company 的文章[“AI 是怎样改变公司组织的方式”][11]审视了这篇基于全球 10,000 多名人力资源和商业领袖调查结果的报告。这篇文章解释说:“许多公司现在更注重文化和环境的适应性,而不是聘请最有资格的人来做某个具体任务,因为他们知道个人的角色必须随着 AI 的实施而发展。”文章还说,为了适应不断变化的技术,组织也正从自上而下的结构转向多学科团队。

### 3. AI 正在改变教育

> AI 将使教育生态系统中的所有利益相关者受益。

尽管教育预算正在缩减,但是教室的规模却正在增长,因此,利用技术的进步有助于提高教育体系的生产率和效率,并在提高教育质量和可负担性方面发挥作用。根据 VentureBeat 上的一篇文章[“2017 年人工智能将怎样改变教育”][26],今年我们将看到 AI 给学生们的书面答案评分,机器人回答学生的问题,虚拟个人助理辅导学生等等。文章解释说:“AI 将惠及教育生态系统的所有利益相关者。学生将能够通过即时的反馈和指导学得更好,教师将获得丰富的学习分析和对个性化教学的见解,父母将以更低的成本看到他们的孩子得到更好的职业前景,学校能够规模化地提供优质的教育,政府能够向所有人提供可负担得起的教育。”

### 4. 人工智能正在重塑医疗保健

2017 年 2 月 [CB Insights][12] 的[一篇文章][13]挑选了 106 个医疗保健领域的人工智能初创公司,它们中的很多在过去几年中获得了首轮股权融资。这篇文章说:“在 24 家成像和诊断公司中,有 19 家是自 2015 年 1 月以来获得首轮股权融资的。”这份名单上还有那些从事远程病人监测、药物发现和肿瘤学方面人工智能的公司。

3 月 16 日发表在 TechCrunch 上的一篇[关于 AI 进步如何重塑医疗保健的文章][14]解释说:“一旦对人类的 DNA 有了更好的理解,就有机会更进一步,并能根据他们特殊的生活习性为他们提供个性化的见解。这种趋势预示着‘个性化遗传学’的新纪元,人们能够通过获得关于自己身体的前所未有的信息来充分控制自己的健康。”

本文接着解释说,AI 和机器学习降低了研发新药的成本和时间。部分由于需要大量的测试,新药进入市场往往需要 12 年以上的时间。这篇文章说:“机器学习算法可以让计算机根据先前处理过的数据来‘学习’如何做出预测,或者选择(在某些情况下,甚至是执行)需要做什么实验。类似的算法还可用于预测特定化合物对人体的副作用,这样可以加快审批速度。”这篇文章指出,2015 年旧金山的一个创业公司 [Atomwise][15] 一天内就完成了可以减少埃博拉感染的两种新药物的分析,而没有花费数年时间。

> AI 正在帮助发现、诊断和治疗新疾病。

另外一个位于伦敦的初创公司 [BenevolentAI][27] 正在利用人工智能寻找科学文献中的模式。这篇文章说:“最近,这家公司找到了两种可能对阿尔茨海默病(Alzheimer)起作用的化合物,引起了很多制药公司的关注。”

除了有助于研发新药,AI 正在帮助发现、诊断和治疗新疾病。TechCrunch 上的文章解释说,过去是根据显示出来的症状诊断疾病,但是现在 AI 正被用于检测血液中的疾病特征,并利用对数十亿例临床病例分析进行深度学习获得的经验来制定治疗计划。这篇文章说:“IBM 的 Watson 正在与纽约的 Memorial Sloan Kettering 合作,消化理解数十年来关于癌症患者和治疗方面的数据,以便为治疗疑难癌症病例的医生提供治疗方案建议。”

### 5. AI 正在改变我们的爱情生活

有 195 个国家的超过 5000 万活跃用户通过一个 2012 年推出的约会应用程序 [Tinder][16] 寻找潜在的伴侣。在一个 [Forbes 采访播客][17]中,Tinder 的创始人兼董事长 Sean Rad 与 Steven Bertoni 讨论了人工智能是如何改变人们约会的。在[关于此次采访的文章][18]中,Bertoni 引用了 Rad 的话:“可能有这样一个时刻,Tinder 可以很好地推测你会感兴趣的人,并且在组织约会中做很多跑腿的工作。”所以,这个应用不只是向用户显示一些可能的对象,还会更进一步,协调彼此的时间,安排一次约会。

> 我们的后代真的可能会爱上人工智能。

你爱上 AI 了吗?我们的后代真的可能会爱上人工智能。Raya Bidshahri 发表在 [Singularity Hub][19] 上的一篇文章[“AI 将如何重新定义爱情”][20]说,几十年后,我们可能会认为爱情不再受生物学的限制。

Bidshahri 解释说:“我们的技术正符合摩尔定律,以惊人的速度增长 —— 智能设备正在越来越多地融入我们的生活。”他补充道:“到 2029 年,我们将会有和人类同等智慧的 AI,而到 21 世纪 40 年代,AI 将会比人类聪明无数倍。许多人预测,有一天我们会与强大的机器合并,我们自己也可能会变成人工智能。”他认为,在这样一个世界里,人们接受与完全的非生物相爱将是不可避免的。

这听起来有点怪异,但是相比较于未来机器人将统治世界,爱上 AI 会是一个更乐观的结果。Bidshahri 说:“对 AI 进行编程,让他们能够感受到爱,这将使我们创造出更富有同情心的 AI,这可能也是避免很多人所忧虑的 AI 大灾难的关键。”

这份 AI 正在入侵我们生活各个领域的清单,只是涉及到了我们身边的人工智能的一点皮毛。哪些 AI 创新最让你兴奋,或者最让你烦恼?大家可以在文章评论区写下你们的感受。

--------------------------------------------------------------------------------

作者简介:

Rikki Endsley - Rikki Endsley 是开源社区 Opensource.com 的管理员。在过去,她曾做过 Red Hat 开源和标准(OSAS)团队的社区传播者、自由技术记者、USENIX 协会的社区管理员,以及《Linux Pro Magazine》《ADMIN》和《Ubuntu User》杂志的联合出版人,还是《Sys Admin》杂志和 UnixReview.com 的主编。可以在 Twitter 上关注她:@rikkiends。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives

作者:[Rikki Endsley][a]
译者:[zhousiyu325](https://github.com/zhousiyu325)
校对:[jasminepeng](https://github.com/jasminepeng)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rikki-endsley
[1]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives?rate=ORfqhKFu9dpA9aFfg-5Za9ZWGcBcx-f0cUlf_VZNeQs
[2]:https://www.linux.com/news/open-source-projects-are-transforming-machine-learning-and-ai
[3]:https://twitter.com/venturebeat
[4]:http://venturebeat.com/2017/03/16/how-ai-will-help-us-decipher-millennials/
[5]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
[6]:https://www.fastcompany.com/3066620/this-is-how-ai-will-change-your-work-in-2017
[7]:https://twitter.com/Inc
[8]:http://www.inc.com/bill-carmody/businesses-beyond-bias-how-ai-will-reshape-hiring-practices.html
[9]:https://www.successfactors.com/en_us.html
[10]:https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html?id=us:2el:3pr:dup3575:awa:cons:022817:hct17
[11]:https://www.fastcompany.com/3068492/how-ai-is-changing-the-way-companies-are-organized
[12]:https://twitter.com/CBinsights
[13]:https://www.cbinsights.com/blog/artificial-intelligence-startups-healthcare/
[14]:https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/
[15]:http://www.atomwise.com/
[16]:https://twitter.com/Tinder
[17]:https://www.forbes.com/podcasts/the-forbes-interview/#5e962e5624e1
[18]:https://www.forbes.com/sites/stevenbertoni/2017/02/14/tinders-sean-rad-on-how-technology-and-artificial-intelligence-will-change-dating/#4180fc2e5b99
[19]:https://twitter.com/singularityhub
[20]:https://singularityhub.com/2016/08/05/how-ai-will-redefine-love/
[21]:https://opensource.com/user/23316/feed
[22]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives#comments
[23]:https://twitter.com/raconteur
[24]:https://www.raconteur.net/technology/how-ai-will-change-buyer-behaviour
[25]:http://www.fluid.ai/
[26]:http://venturebeat.com/2017/02/04/how-ai-will-transform-education-in-2017/
[27]:https://twitter.com/benevolent_ai
[28]:https://opensource.com/users/rikki-endsley
T-UI Launcher:将你的 Android 设备变成 Linux 命令行界面
============================================================

不管你是一位命令行大师,还是只是不想让朋友和家人随便使用你的 Android 设备,都可以看看 T-UI Launcher 这个程序。Unix/Linux 用户一定会喜欢它。

T-UI Launcher 是一个免费的轻量级 Android 程序,具有类似 Linux 的命令行界面,它可以将你的普通 Android 设备变成一个完整的命令行界面。对于喜欢使用基于文本的界面的人来说,这是一个简单、快速、智能的启动器。

#### T-UI Launcher 功能

下面是一些重要的功能:

* 第一次启动后展示快速使用指南。
* 快速且可完全定制。
* 提供自动补全菜单及快速、强大的别名系统。
* 此外,提供预测建议,并提供有用的搜索功能。

它是免费的,你可以从 Google Play 商店[下载并安装它][1],接着在 Android 设备中运行。

安装完成后,第一次启动时你会看到一个快速指南。阅读完成之后,你就可以像下面那样,使用简单的命令开始使用它了。

[![]()][2]

*T-UI 命令行帮助指南*

要启动一个应用,只要输入它名字的几个字母,自动补全功能就会在屏幕中展示可用的应用,接着点击你想打开的程序。

```
$ Telegram ### 启动 telegram
$ WhatsApp ### 启动 whatsapp
$ Chrome ### 启动 chrome
```

[![]()][3]

*T-UI 命令行使用*

要查看你的 Android 设备状态(电池电量、WiFi、移动数据),输入:

```
$ status
```

[![]()][4]

*Android 电话状态*

其它有用的命令:

```
$ uninstall telegram ### 卸载 telegram
$ search [google, playstore, youtube, files] ### 搜索在线应用或本地文件
$ wifi ### 打开或关闭 WiFi
$ cp Downloads/* Music ### 从 Download 文件夹复制所有文件到 Music 文件夹
$ mv Downloads/* Music ### 从 Download 文件夹移动所有文件到 Music 文件夹
```

就是这样了!在本篇中,我们展示了一个带有类似 Linux CLI(命令行界面)的简单而有用的 Android 程序,它可以将你的常规 Android 设备变成一个完整的命令行界面。尝试一下,并在下面的评论栏分享你的想法。

--------------------------------------------------------------------------------

作者简介:

Aaron Kili 是 Linux 和 F.O.S.S 爱好者,未来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者。他喜欢用电脑工作,并坚信分享知识。

--------------------------------------------------------------------------------

via: https://www.tecmint.com/t-ui-launcher-turns-android-device-into-linux-cli/

作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.tecmint.com/author/aaronkili/
[1]:https://play.google.com/store/apps/details?id=ohi.andre.consolelauncher
[2]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Help.jpg
[3]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Usage.jpg
[4]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Status.jpg
[5]:https://www.tecmint.com/author/aaronkili/
[6]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:https://www.tecmint.com/free-linux-shell-scripting-books/
4 个拥有绝佳命令行界面的终端程序
============================================================

> 让我们来看几个精心设计的 CLI 程序,以及如何解决一些可发现性问题。

![]()

> 图片提供:opensource.com

在本文中,我会指出命令行界面的<ruby>可发现性<rt>discoverability</rt></ruby>缺点,以及克服这些问题的几种方法。

我喜欢命令行。我第一次接触命令行是在 1997 年的 DOS 6.2 上。我学习了各种命令的语法,并向别人炫耀如何在目录中列出隐藏的文件(`attrib`)。那时我会仔细检查命令中的每个字符,当我犯了一个错误,我会从头开始重新输入命令。直到有一天,有人向我展示了如何使用向上和向下箭头按键遍历命令行历史,我被震惊了。

后来当我接触到 Linux 时,让我感到惊喜的是,上下箭头保留了它们遍历历史记录的能力。我仍然很仔细地打字,但是现在,我学会了盲打,并且能打得很快,每分钟可以达到 55 个单词的速度。接着有人向我展示了 Tab 补全,这再一次改变了我的生活。

在 GUI 应用程序中,菜单、工具提示和图标用于向用户展示功能,而命令行缺乏这种能力,但是有办法克服这个问题。在深入介绍解决方案之前,我先来看看几个有问题的 CLI 程序:

**1、 MySQL**

首先让我们看看我们所钟爱的 MySQL REPL。我经常发现自己习惯性地在输入 `SELECT * FROM` 之后按下 `Tab` 键。MySQL 会询问我是否想看到所有的 871 种可能性。我的数据库中绝对没有 871 张表。如果我选择 `yes`,它会显示一堆 SQL 关键字、表、函数等。(LCTT 译注:REPL —— Read-Eval-Print Loop,交互式开发环境)

![]()

**2、 Python**

我们来看另一个例子,标准的 Python REPL。我开始输入命令,然后习惯性地按下 `Tab` 键。瞧,屏幕上插入了一个 `Tab` 字符,考虑到 `Tab` 在 Python 源代码中没有特定作用,这是一个问题。

![]()

### 好的用户体验

下面让我们看下设计良好的 CLI 程序,以及它们是如何克服这些可发现性问题的。

#### 自动补全: bpython

[Bpython][15] 是对 Python REPL 的一个很好的替代。当我运行 bpython 并开始输入时,建议会立即出现。我没有通过特殊的键盘绑定触发它,甚至没有按下 `Tab` 键。

![]()

当我出于习惯按下 `Tab` 键时,它会用列表中的第一个建议补全。这是给 CLI 设计带来可发现性的一个很好的例子。

bpython 的另一个方面是可以展示模块和函数的文档。当我输入一个函数的名字时,它会显示这个函数附带的签名以及文档字符串。这是一个多么令人难以置信的周到设计啊。

#### 上下文感知补全:mycli

[mycli][16] 是默认的 MySQL 客户端的现代替代品。这个工具对 MySQL 来说就像 bpython 之于标准 Python REPL 一样。mycli 将在你输入时自动补全关键字、表名、列和函数。

补全建议是上下文相关的。例如,在 `SELECT * FROM` 之后,只有来自当前数据库的表才会列出,而不是所有可能的关键字。

![]()

#### 模糊搜索和在线帮助: pgcli

如果你正在寻找 PostgreSQL 版本的 mycli,请看看 [pgcli][17]。与 mycli 一样,它提供了上下文感知的自动补全。菜单中的项目使用模糊搜索缩小范围。模糊搜索允许用户输入整体字符串中的任意子字符串来尝试找到正确的匹配项。

![]()

pgcli 和 mycli 在其 CLI 中都实现了这个功能。反斜杠命令的文档也作为补全菜单的一部分展示。

#### 可发现性: fish

在传统的 Unix shell(Bash、zsh 等)中,有一种搜索历史记录的方法,此搜索模式由 `Ctrl-R` 触发。当需要再次调用你上周运行过的命令时,例如 **ssh** 或 **docker**,这是一个令人难以置信的有用的工具。一旦你知道了这个功能,你会发现自己经常使用它。

如果这个功能是如此有用,那为什么不每次都搜索呢?这正是 [**fish** shell][18] 所做的。一旦你开始输入命令,**fish** 就会开始给出与历史记录类似的命令建议。然后,你可以按右箭头键接受该建议。

### 命令行规矩

我已经回顾了一些解决可发现性问题的创新方法,但也有一些基本的命令行功能,应该作为每个 REPL 所实现的基础功能的一部分:

* 确保 REPL 有可通过箭头键调用的历史记录,并确保历史记录能在会话之间持久保存。
* 提供在编辑器中编辑命令的方法。不管你的补全功能多么出色,有时用户只是需要一个编辑器,来制作那条删除生产环境中所有表的完美命令。
* 使用分页器(`pager`)来输出内容,不要让用户不停滚动他们的终端。哦,还要为分页器设置合理的默认值。(记得添加选项来处理颜色代码。)
* 提供一种通过 `Ctrl-R` 界面或者 fish 式的自动搜索来搜索历史记录的方法。

### 总结

在第 2 部分中,我将介绍 Python 中能让你实现这些技术的一些特定的库。同时,请查看下面这些精心设计的命令行应用程序(列表之后附有一个安装示例):

* [bpython][5] 或 [ptpython][6]:具有自动补全支持的 Python REPL。
* [http-prompt][7]:交互式 HTTP 客户端。
* [mycli][8]:MySQL、MariaDB 和 Percona 的命令行界面,具有自动补全和语法高亮。
* [pgcli][9]:具有自动补全和语法高亮,是对 [psql][10] 的替代工具。
* [wharfee][11]:用于管理 Docker 容器的 shell。
_了解更多: Amjith Ramanujam 在 5 月 20 日于波特兰俄勒冈州举办的 [PyCon US 2017][12] 上的演讲“[神奇的命令行工具][13]”。_

--------------------------------------------------------------------------------

作者简介:

Amjith Ramanujam - Amjith Ramanujam 是 pgcli 和 mycli 的创始人。人们认为它们很酷,他表示笑纳赞誉。他喜欢用 Python、JavaScript 和 C 编程。他喜欢编写简单易懂的代码,它们有时甚至会成功。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/5/4-terminal-apps

作者:[Amjith Ramanujam][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/amjith
[1]:https://opensource.com/tags/python?src=programming_resource_menu
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
[5]:http://bpython-interpreter.org/
[6]:http://github.com/jonathanslenders/ptpython/
[7]:https://github.com/eliangcs/http-prompt
[8]:http://mycli.net/
[9]:http://pgcli.com/
[10]:https://www.postgresql.org/docs/9.2/static/app-psql.html
[11]:http://wharfee.com/
[12]:https://us.pycon.org/2017/
[13]:https://us.pycon.org/2017/schedule/presentation/518/
[14]:https://opensource.com/article/17/5/4-terminal-apps?rate=3HL0zUQ8_dkTrinonNF-V41gZvjlRP40R0RlxTJQ3G4
[15]:https://bpython-interpreter.org/
[16]:http://mycli.net/
[17]:http://pgcli.com/
[18]:https://fishshell.com/
[19]:https://opensource.com/user/125521/feed
[20]:https://opensource.com/article/17/5/4-terminal-apps#comments
[21]:https://opensource.com/users/amjith
朝鲜 180 局的网络战部门让西方国家忧虑
============================================================

本文译自澳大利亚广播公司相关文章,不代表本站及译者、编辑的态度。

[![]()][13]

*[PHOTO:脱北者说,平壤的网络战攻击旨在为一个叫做“180 局”的部门筹集资金。(路透社:Damir Sagolj, file)][14]*

> 据脱北者、官方和网络安全专家的消息,朝鲜的情报机关有一个叫做 180 局的特殊部门,这个部门已经发起过多起胆大且成功的网络战。

近几年,朝鲜被指责在美国、韩国及其周边的几个国家对金融网络发起了多起在线袭击。

网络安全研究人员称,他们找到了本月全球性感染了 150 多个国家 30 多万台计算机的[“想哭”勒索病毒和朝鲜网络战有关联的技术证据][15]。

平壤称该指控是“荒谬的”。

对朝鲜的关键指控是指朝鲜与一个叫做拉撒路(Lazarus)的黑客组织有联系,这个组织去年从孟加拉国中央银行的网络中窃取了 8100 万美元,并在 2014 年攻击了索尼的好莱坞工作室的网络。

美国政府指责朝鲜发起了对索尼公司的黑客袭击,据称美国检方也正在就孟加拉国银行的盗窃案对平壤立案。

不过,由于没有提供确凿的证据,也没有提出刑事指控,目前还不能定案。朝鲜也否认了索尼公司和该银行的袭击与其有关。

朝鲜是世界上最封闭的国家之一,它的秘密行动的细节很难获得。

但研究这个封闭国家的专家,和流落到韩国及一些西方国家的脱北者,已经给出了或多或少的提示。

### 黑客们喜欢以雇员身份来作为掩护

金恒光,一位朝鲜前计算机教授,2004 年叛逃到韩国,他仍然有着来自朝鲜内部的消息来源。他说,平壤的网络战旨在为隶属于侦察总局(RGB)的一个叫做 180 局的部门筹集资金,侦察总局是朝鲜负责海外情报工作的机构。

金教授称,“180 局负责入侵金融机构,通过漏洞从银行账户中提取资金”。

他之前也说过,他以前的一些学生已经加入了朝鲜的网络战略司令部,即朝鲜的网络部队。

> “黑客们到海外寻找比朝鲜更好的互联网服务的地方,以免留下痕迹,”金教授补充说。

他说,他们经常以贸易公司、朝鲜的海外分公司,以及在中国和东南亚的合资企业的雇员身份来作为掩护。

位于华盛顿的战略与国际研究中心的一位名为 James Lewis 的朝鲜专家称,平壤首先把黑客攻击作为间谍活动的工具,然后用来对韩国和美国的目标进行政治干扰。

他说:“索尼公司事件之后,他们改变了方法,开始通过支持犯罪活动的黑客行为,来为这个国家获取硬通货。”

“目前为止,网上毒品、假冒伪劣、走私,都是他们惯用的伎俩。”

[**VIDEO:** 你遇到过勒索病毒吗? (ABC News)][16] : https://dn-linuxcn.qbox.me/static/video/CNb_Ransomware_1505_512k.mp4

### 韩国声称拥有“大量的证据”

美国国防部在去年提交给国会的一份报告中称,朝鲜将网络战视为一种有成本效益的、不对称的、可抵赖的工具,而且由于朝鲜的“网络”大部分是和因特网分离的,它遭受报复性袭击的风险很小。

> 报告中说:“它可能使用第三方国家的互联网基础设施。”

韩国政府称,他们拥有朝鲜网络战行动的大量证据。

韩国外交部副部长安总基在书面评论中告诉路透社:“朝鲜进行网络战时,通过第三方国家来掩护网络袭击的来源,并使用他们的信息和通讯技术设施。”

除了孟加拉国银行抢劫案,他说怀疑平壤也与菲律宾、越南和波兰的银行袭击有关。

去年六月,警方称朝鲜袭击了 160 个韩国公司和政府机构,入侵了大约 14 万台计算机,暗中在它的对手的计算机中植入恶意代码,为进行大规模网络攻击的长期计划做准备。

朝鲜还被怀疑在 2014 年对韩国核反应堆的操作系统进行了阶段性的网络攻击,尽管朝鲜否认与其有关。

根据韩国首尔一家杀毒软件厂商 Hauri 的高级安全研究员 Simon Choi 的说法,那次网络袭击来自于朝鲜在中国的一个基地。

Choi 先生对朝鲜的黑客能力有过广泛的研究,他称:“他们在那里行动,不管他们究竟在做什么,他们用的是中国的 IP 地址。”

--------------------------------------------------------------------------------

via: http://www.abc.net.au/news/2017-05-21/north-koreas-unit-180-cyber-warfare-cell-hacking/8545106

作者:[www.abc.net.au][a]
译者:[hwlog](https://github.com/hwlog)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.abc.net.au
[1]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[2]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[3]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[4]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
[5]:http://www.abc.net.au/news/2017-05-18/adylkuzz-cyberattack-could-be-far-worse-than-wannacry:-expert/8537502
[6]:http://www.google.com/maps/place/Korea,%20Democratic%20People%20S%20Republic%20Of/@40,127,5z
[7]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[8]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[9]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[10]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[11]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[12]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[13]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
[14]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
[15]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
[16]:http://www.abc.net.au/news/2017-05-15/have-you-been-hit-by-ransomware/8527854
How is your community promoting diversity?
============================================================

> Open source foundation leaders weigh in.

![]()

Image by: opensource.com

Open source software is a great enabler for technology innovation. Diversity unlocks innovation and drives market growth. Open source and diversity seem like the ultimate winning combination, yet ironically open source communities are among the least diverse tech communities. This is especially true when it comes to inherent diversity: traits such as gender, age, ethnicity, and sexual orientation.

It is hard to get a true picture of the diversity of our communities in all the various dimensions. Gender diversity, by virtue of being noticeably lacking and more straightforward to measure, is the starting point and current yardstick for measuring diversity in tech communities.

For example, it is estimated that around 25% of all software developers are women, but [only 3% work][5] in free and open software. These figures are consistent with my personal experience working in open source for over 10 years.

Even when individuals in the community are [doing their best][6] (and I have worked with many who are), it seems to make little difference, and little has changed in the last ten years. However, we are, as a community, starting to have a better understanding of some of the factors that maintain this status quo, things like [unconscious bias][7] or [social graph and privilege][8] problems.

In order to overcome the gravity of these forces in open source, we need combined efforts that are sustained over the long term and that really work. There is no better example of how diversity can be improved in a relatively short space of time than the Python community. PyCon 2011 had just 1% women speakers, yet in 2014, 33% of speakers at PyCon were women. Now Python conferences regularly lay out [their diversity targets and how they intend to meet them][9].

What did it take to make that dramatic improvement in women speaker numbers? In her great talk at PyCon 2014, [Outreach Program for Women: Lessons in Collaboration][10], Marina Zhurakhinskaya outlines the key ingredients:

* The importance of having a Diversity Champion to spearhead the changes over the long term; in the Python community Jessica McKellar was the driving force behind the big improvement in diversity figures
* Specifically marketing to under-represented groups; for example, how GNOME used outreach programs, such as [Outreachy][1], to market to women specifically

We know diversity issues, while complex, are eminently fixable. In this way, open source foundations can play a huge role in sustaining efforts to promote diversity initiatives. Are other open source communities also putting effort into diversity? To find out, we asked a few open source foundation leaders:

### How does your foundation promote diversity in its open source community?

**Mike Milinkovich, executive director of the Eclipse Foundation:**

> "The Eclipse Foundation is committed to promoting diversity in its open source community. But that commitment does not mean that we are satisfied with where we are today. We have a long way to go, particularly in the area of gender diversity. That said, some of the tangible steps we've taken in the last couple of years are: (a) we put into place a [Community Code of Conduct][2] that covers all of our activities, (b) we are consciously recruiting women for our conference program committees, (c) we are consciously looking for women speakers for our conferences, including keynotes, and (d) we are supporting community channels to discuss diversity topics. It's been great to see members of our community step up to assume leadership roles on this topic, and we're looking forward to making a lot of progress in 2017."

**Abby Kearns, executive director for the Cloud Foundry:**

> "For Cloud Foundry we promote diversity in a variety of ways. For our community, this includes a heavy focus on diversity events at our summit, and on our keynote stage. I'm proud to say we doubled the representation by women and people of color at our last event. For our contributors, this takes on a slightly different meaning and includes diversification across company and role."

A recent Cloud Foundry Summit featured a [diversity luncheon][11] as well as a [keynote on diversity][12], which highlighted how [gender parity had been achieved][13] by one member company's team.

**Chris Aniszczyk, COO of the Cloud Native Computing Foundation:**

> "The Cloud Native Computing Foundation (CNCF) is a very young foundation still, and although we are only one year old as of December 2016, we've had promoting diversity as a goal since our inception. First, every conference hosted by CNCF has [diversity scholarships][3] available, and there are usually special diversity lunches or events at the conference to promote inclusion. We've also sponsored "[contribute your first patch][4]" style events to promote new contributors from all over. These are just some small things we currently do. In the near future, we are discussing launching a Diversity Workgroup within CNCF, and also as we ramp up our certification and training programs, we are discussing offering scholarships for folks from under-representative backgrounds."

Additionally, the Cloud Native Computing Foundation is part of the [Linux Foundation][14] as a formal Collaborative Project (along with other foundations, including the Cloud Foundry Foundation). The Linux Foundation has extensive [Diversity Programs][15]; as an example, it recently [partnered with the Girls In Tech][16] not-for-profit to improve diversity in open source. In the future, the CNCF actively plans to participate in these Linux Foundation-wide initiatives as they arise.

For open source to thrive, companies need to foster the right environment for innovation. Diversity is a big part of this. Seeing open source foundations make the conscious decision to take action is encouraging. Dedicating time, money, and resources to diversity is making a difference within communities, and we are slowly but surely starting to see the effects. Going forward, communities can collaborate and learn from each other about what works and makes a real difference.

If you work in open source, be sure to ask and find out what is being done in your community as a whole to foster and promote diversity. Then commit to supporting these efforts and taking steps toward making a real difference. It is exciting to think that the next ten years might be a huge improvement over the last ten, and we can start to envision a future of truly diverse open source communities, the ultimate winning combination.

--------------------------------------------------------------------------------

作者简介:

![]()

Tracy Miranda - Tracy Miranda is a software developer and founder of Kichwa Coders, a software consultancy specializing in Eclipse tools for scientific and embedded software. Tracy has been using Eclipse since 2003 and is actively involved in the community, particularly the Eclipse Science Working Group. Tracy has a background in electronics system design. She mentors young coders at the festival of code for Young Rewired State. Follow Tracy on Twitter @tracymiranda.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/1/take-action-diversity-tech

作者:[Tracy Miranda][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tracymiranda
[1]:https://www.gnome.org/outreachy/
[2]:https://www.eclipse.org/org/documents/Community_Code_of_Conduct.php
[3]:http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america/attend/scholarship-opportunities
[4]:http://conferences.oreilly.com/oscon/oscon-tx-2016/public/schedule/detail/53257
[5]:https://www.linux.com/blog/how-bring-more-women-free-and-open-source-software
[6]:https://trishagee.github.io/post/what_can_men_do/
[7]:https://opensource.com/life/16/3/sxsw-diversity-google-org
[8]:https://opensource.com/life/15/8/5-year-plan-improving-diversity-tech
[9]:http://2016.pyconuk.org/diversity-target/
[10]:https://www.youtube.com/watch?v=CA8HN20NnII
[11]:https://www.youtube.com/watch?v=LSRrc5B1an0&list=PLhuMOCWn4P9io8gtd6JSlI9--q7Gw3epW&index=48
[12]:https://www.youtube.com/watch?v=FjF8EK2zQU0&list=PLhuMOCWn4P9io8gtd6JSlI9--q7Gw3epW&index=50
[13]:https://twitter.com/ab415/status/781036893286854656
[14]:https://www.linuxfoundation.org/about/diversity
[15]:https://www.linuxfoundation.org/about/diversity
[16]:https://www.linux.com/blog/linux-foundation-partners-girls-tech-increase-diversity-open-source
Be the open source supply chain
============================================================

### Learn why you should be a supply chain influencer.

![]()

Image by: opensource.com

I would bet that whoever is best at managing and influencing the open source supply chain will be best positioned to create the most innovative products. In this article, I'll explain why you should be a supply chain influencer, and how your organization can be an active participant in your supply chain.

In my previous article, [Open source and the software supply chain][2], I discussed the basics of supply chain management, and where open source fits in this model. I left readers with this illustration of the model:

![]()

The question to ask your employer and team(s) is: How do we best take advantage of this? After all, if Apple can set the stage for its dominance by creating a better hardware supply chain, then surely one can do the same with software supply chains.

### Evaluating supply chains

Having worked with developers and product teams in many companies, I learned that the process for selecting components that go into a product is haphazard. Sometimes there is an official bake-off of one or two components against each other, but the developers often choose to work with a product based on "feel". When determining the best components, you must evaluate based on those projects' longevity, stage of development, and enough other metrics to form the basis of a "build vs. buy" decision. Number of users, interested parties, commercial activity, involvement of the development team in support, and so on are a few considerations in the decision-making process.
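As a rough sketch of what gathering a couple of those metrics can look like, the snippet below pulls a few public activity indicators for a project hosted on GitHub. The endpoint and field names come from GitHub's public v3 API; the repository used here (python/cpython) is just an illustrative stand-in for whatever component you are evaluating:

```
$ # Fetch star count, open-issue count, and last-update time for a candidate component
$ curl -s https://api.github.com/repos/python/cpython \
    | grep -E '"(stargazers_count|open_issues_count|updated_at)"'
```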
|
||||
|
||||
Over time, technology and business needs change, and in the world of open source software, even more so. Not only must an engineering and product team be able to select the best component at that time, they must also be able to switch it out for something else when the time comes—for example, when the community managing the old component moves on, or when a new component with better features emerges.
|
||||
|
||||
### What not to do
|
||||
|
||||
When evaluating supply chain components, teams are prone to make a number of mistakes, including these common ones:
|
||||
|
||||
* **Not Invented Here (NIH)**: I can’t tell you how many times engineering teams decided to "fix" shortcomings in existing supply chain components by deciding to write it themselves. I won’t say "never ever do that," but I will warn that if you take on the responsibility of writing an infrastructure component, understand that you’re chucking away all the advantages of the open source supply chain—namely upstream testing and upstream engineering—and deciding to take on those tasks, immediately saddling your team (and your product) with technical debt that will only grow over time. You’re making the choice to be less efficient, and you had better have a compelling reason for doing so.
|
||||
* **Carrying patches forward**: Any open source-savvy team understands the value of contributing patches to their respective upstream projects. When doing so, contributed code goes through that project’s automated testing procedures, which, when combined with your own team’s existing testing infrastructure, makes for a more hardened end product. Unfortunately, not all teams are open source-savvy. Sometimes these teams are faced with onerous legal requirements that deter them from seeking permission to contribute fixes upstream. In that case, encourage (i.e., nag) your manager to get blanket legal approval for such things, because the alternative is carrying all those changes forward, incurring significant technical debt, and applying patches until the day your project (or you) dies. (A short sketch of that treadmill follows this list.)
|
||||
* **Think you’re only a user**: Using open source components as part of your software supply chain is only the first step. To reap the rewards of open source supply chains, you must dive in and be an influencer. (More on that shortly.)
|
||||
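To make the cost of carrying patches concrete, here is a minimal, hypothetical sketch of the release-day ritual for a team that never contributes its changes upstream. The tag and patch file names are invented; the script simply drives git from Python.

```python
"""Sketch of "carrying patches forward": every new upstream release forces
every private patch to be re-applied, and re-fixed when it conflicts.
The tag, branch, and patch names below are hypothetical."""
import subprocess

UPSTREAM_TAG = "v2.4.0"            # a new upstream release has landed
PRIVATE_PATCHES = [                # fixes that were never sent upstream
    "0001-vendor-auth-shim.patch",
    "0002-legacy-config-format.patch",
]

def git(*args):
    # check=True makes any conflict stop the script, loudly.
    subprocess.run(["git", *args], check=True)

git("fetch", "origin", "--tags")
git("checkout", "-B", f"product-{UPSTREAM_TAG}", UPSTREAM_TAG)
for patch in PRIVATE_PATCHES:
    # Each release is a fresh chance for each patch to break; a fix that
    # had been contributed upstream would have retired itself by now.
    git("am", patch)
```

Every upstream tag repeats the loop, and every conflict lands on your team alone, which is exactly the technical debt described above.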
|
||||
### Effective supply chain management example: Red Hat
|
||||
|
||||
Because of its upstream-first policies, [Red Hat][3] is an example of how both to utilize and influence software supply chains. To understand the Red Hat model, you must view their products through a supply chain perspective.
|
||||
|
||||
Products supported by Red Hat are composed of open source components often vetted by multiple upstream communities, and changes made to these components are pushed to their respective upstream projects, often before they land in a supported product from Red Hat. The workflow looks somewhat like this:
|
||||
|
||||

|
||||
|
||||
There are multiple reasons for this kind of workflow:
|
||||
|
||||
* Testing, testing, testing: By offloading some initial testing, a company like Red Hat benefits both from the upstream community’s testing and from the testing done by other ecosystem participants, including competitors.
|
||||
* Upstream viability: The Red Hat model only works as long as upstream suppliers are viable and self-sustaining. Thus, it’s in Red Hat’s interest to make sure those communities stay healthy.
|
||||
* Engineering efficiency: Because Red Hat offloads common tasks to upstream communities, their engineers spend more time adding value to products for customers.
|
||||
|
||||
To understand the Red Hat approach to supply chain, let’s look at their approach to product development with OpenStack.
|
||||
|
||||
Curiously, Red Hat’s start with OpenStack was not to create a product or even to announce one; rather, they started pushing engineering resources into strategic projects in OpenStack (starting with Nova, Keystone, and Cinder). This list grew to include several other projects in the OpenStack community. A more traditional product management executive might look at this approach and think, "Why on earth would we contribute so much engineering to something that isn’t established and has no product? Why are we giving our competitors our work for free?"
|
||||
|
||||
Instead, here is the open source supply chain thought process:
|
||||
|
||||
### Step 1
|
||||
|
||||
Look at growth areas in the business or largest product gaps that need filling. Is there an open source community that fits a strategic gap? Or can we build a new project from scratch to do the same? In this case, Red Hat looked at the OpenStack community and eventually determined that it would fill a gap in the product portfolio.
|
||||
|
||||
### Step 2
|
||||
|
||||
Gradually turn up the dial on engineering resources. This does a couple of things. First, it helps the engineering team get a sense of the respective projects’ prospects for success. If prospects aren’t good, the company can stop contributing, with minimal investment spent. Once the project is determined to be worth the investment, the company can ensure its engineers will influence current and future development. This helps the project with quality code development, and ensures that the code meets future product requirements and acceptance criteria. Red Hat spent a lot of time slinging code in OpenStack repositories before ever announcing an OpenStack product, much less releasing one. But this was a fraction of the investment that would have been made if the company had developed an IaaS product from scratch.
|
||||
|
||||
### Step 3
|
||||
|
||||
Once the engineering investments begin, start a product management roadmap and marketing release plan. Once the code reaches a minimum level of quality, fork the upstream repository and start working on product-specific code. Bug fixes are pushed upstream to openstack.org and into product branches. (Remember: Red Hat’s model depends on upstream viability, so it makes no sense not to push fixes upstream.)
|
||||
|
||||
Lather, rinse, repeat. This is how you manage an open source software supply chain.
|
||||
|
||||
### Don't accumulate technical debt
|
||||
|
||||
If needed, Red Hat could decide that it would simply depend on upstream code, supply necessary proprietary product glue, and then release that as a product. This is, in fact, what most companies do with upstream open source code; however, this misses a crucial point I made previously. To develop a really great product, being heavily involved in the development process helps. How can an organization make sure that the code base meets its core product criteria if they’re not involved in the day-to-day architecture discussions?
|
||||
|
||||
To make matters worse, in an effort to protect backwards compatibility and interoperability, many companies fork the upstream code, make changes and don't contribute them upstream, choosing instead to carry them forward internally. That is a big no-no, saddling your engineering team forever with accumulated technical debt that will only grow over time. In that scenario, all the gains made from upstream testing, development and release go away in a whiff of stupidity.
|
||||
|
||||
### Red Hat and OpenShift
|
||||
|
||||
Once you begin to understand Red Hat’s approach to supply chain, which you can see manifested in its approach to OpenStack, you can understand its approach to OpenShift. Red Hat first released OpenShift as a proprietary product that was also open sourced. Everything was homegrown, built by a team that joined Red Hat as part of the [Makara acquisition][4] in 2010.
|
||||
|
||||
The technology initially suffered from NIH—using its own homegrown clustering and container management technologies, in spite of the recent (at the time) release of new projects: Kubernetes, Mesos, and Docker. What Red Hat did next is a testament to the company’s commitment to its open source supply chain model: Between OpenShift versions 2 and 3, developers rewrote it to utilize and take advantage of new developments from the Kubernetes and Docker communities, ditching their NIH approach. By restructuring the project in that way, the company took advantage of economies of scale that resulted from the burgeoning developer communities for both projects.
|
||||
|
||||
Instead of Red Hat fashioning a complete QC/QA testing environment for the entire OpenShift stack, they could rely on testing infrastructure supplied by the Docker and Kubernetes communities. Thus, Red Hat contributions to both the Docker and Kubernetes code bases would undergo a few rounds of testing before ever reaching the company’s own product branches:
|
||||
|
||||
1. The first round of testing is done by the Docker and Kubernetes communities.
|
||||
2. Further testing is done by ecosystem participants building products on either or both projects.
|
||||
3. More testing happens on downstream code distributions or products that "embed" both projects.
|
||||
4. Final testing happens in Red Hat’s own product branch.
|
||||
|
||||
The amount of upstream (from Red Hat) testing done on the code ensures a level of quality that would be much more expensive for the company to do comprehensively and from scratch. This is the trick to open source supply chain management: Don’t just consume upstream code, minimally shimming it into a product. That approach won’t give you any of the advantages offered by open source development practices and direct participation for solving your customers’ problems.
|
||||
|
||||
To get the most benefit from the open source software supply chain, you must **be** the open source software supply chain.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
John Mark Walker - John Mark Walker is Director of Product Management at Dell EMC and is responsible for managing the ViPR Controller product as well as the CoprHD open source community. He has led many open source community efforts, including ManageIQ,
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/be-open-source-supply-chain
|
||||
|
||||
作者:[John Mark Walker][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/johnmark
|
||||
[1]:https://opensource.com/article/17/1/be-open-source-supply-chain?rate=sz6X6GSpIX1EeYBj4B8PokPU1Wy-ievIcBeHAv0Rv2I
|
||||
[2]:https://opensource.com/article/16/12/open-source-software-supply-chain
|
||||
[3]:https://www.redhat.com/en
|
||||
[4]:https://www.redhat.com/en/about/press-releases/makara
|
||||
[5]:https://opensource.com/user/11815/feed
|
@ -1,74 +0,0 @@
|
||||
Developing open leaders
|
||||
============================================================
|
||||
|
||||
> "Off-the-shelf" leadership training can't sufficiently groom tomorrow's organizational leaders. Here's how we're doing it.
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
At Red Hat, we have a saying: Not everyone needs to be a people manager, but everyone is expected to be a leader.
|
||||
|
||||
For many people, that requires a profound mindset shift in how to think about leaders. Yet in some ways, it's what we all intuitively know about how organizations really work. As Red Hat CEO Jim Whitehurst has pointed out, in any organization, you have the thermometers—people who reflect the organizational "temperature" and sentiment and direction—and then you have the thermostats—people who _set_ those things for the organization.
|
||||
|
||||
Leadership is about maximizing influence and impact. But how do you develop leadership for an open organization?
|
||||
|
||||
In the first installment of this series, I will share the journey, from my perspective, on how we began to build a leadership development system at Red Hat to enable our growth while sustaining the best parts of our unique culture.
|
||||
|
||||
### Nothing 'off the shelf'
|
||||
|
||||
In an open organization, you can't just buy leadership development training "off the shelf" and expect it to resonate with people—or to reflect and reinforce your unique culture. But you also probably won't have the capacity and resources to build a great leadership development system entirely from scratch.
|
||||
|
||||
Early on in our journey at Red Hat, our leadership development efforts focused on understanding our own philosophy and approach, then taking a bit of an open source approach: sifting through what people had created for conventional organizations, then configuring those ideas in a way that made them feasible for an open organization.
|
||||
|
||||
Looking back, I can also see we spent a lot of energy looking for ways to plug specific capability gaps.
|
||||
|
||||
Many of our people managers were engineers and other subject matter experts who stepped into management roles because that's what our organization needed. Yet the reality was, many had little experience leading a team or group. So we had some big gaps in basic management skills.
|
||||
|
||||
We also had gaps—not just among managers but also among individual contributors—when it came to navigating tough conversations with respect. In a company where passion runs high and people love to engage in open and heated debate, making your voice heard without shouting others down wasn't always easy.
|
||||
|
||||
We couldn't find any end-to-end leadership development systems that would help train people for leading in a culture that favors flatness and meritocracy over hierarchy and seniority. And while we could build some of those things ourselves, we couldn't build everything fast enough to meet our growing organization's needs.
|
||||
|
||||
So when we saw a need for improved goal setting, we introduced some of the best offerings available—like Closing the Execution Gap and the concept of SMART goals (i.e. specific, measurable, attainable, relevant, and time-bound). To make these work for Red Hat, we configured them to pull through themes from our own culture that could be used in tandem to make the concepts resonate and become even more powerful.
|
||||
|
||||
### Considering meritocracy
|
||||
|
||||
In a culture that values meritocracy, being able to influence others is critical. Yet the passionate open communication and debate that we love at Red Hat sometimes created hard feelings between individuals or teams. We introduced [Crucial Conversations][2] to help everyone navigate those heated and impassioned topics, and also to help them recognize that those kinds of conversations provide the greatest opportunity for influence.
|
||||
|
||||
After building that foundation with Crucial Conversations, we introduced [Influencer Training][3] to help entire teams and organizations communicate and gain traction for their ideas across boundaries.
|
||||
|
||||
We also found a lot of value in Marcus Buckingham's strengths-based approach to leadership development, rather than the conventional models that encouraged people to spend their energy shoring up weaknesses.
|
||||
|
||||
Early on, we made a decision to make our leadership offerings available to individual contributors as well as managers, because we saw that these skills were important for everyone in an open organization.
|
||||
|
||||
Looking back, I can see that this gave us the added benefit of developing a shared understanding and language for talking about leadership throughout our organization. It helped us build and sustain a culture where leadership is expected at all levels and in any role.
|
||||
|
||||
At the same time, training was only part of the solution. We also began developing processes that would help entire departments develop important organizational capabilities, such as talent assessment and succession planning.
|
||||
|
||||
Piece by piece, our open leadership system was beginning to take shape. The story of how it came together is pretty remarkable—at least to me!—and over the next few months, I'll share the journey with you. I look forward to hearing about the journeys of other open organizations, too.
|
||||
|
||||
_(An earlier version of this article appeared in [The Open Organization Leaders Manual][4], now available as a free download from Opensource.com.)_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
DeLisa Alexander - DeLisa is Executive Vice President and Chief People Officer at Red Hat. Under her leadership, this team focuses on acquiring, developing, and retaining talent and enhancing the Red Hat culture and brand. In her nearly 15 years with the company, DeLisa has also worked in the Office of General Counsel, where she wrote Red Hat's first subscription agreement and closed the first deals with its OEMs.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/1/developing-open-leaders
|
||||
|
||||
作者:[DeLisa Alexander][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/delisa
|
||||
[1]:https://opensource.com/open-organization/17/1/developing-open-leaders?rate=VU560k86SWs0OAchgX-ge2Avg041EOeU8BrlKgxEwqQ
|
||||
[2]:https://www.vitalsmarts.com/products-solutions/crucial-conversations/
|
||||
[3]:https://www.vitalsmarts.com/products-solutions/influencer/
|
||||
[4]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[5]:https://opensource.com/user/10594/feed
|
||||
[6]:https://opensource.com/open-organization/17/1/developing-open-leaders#comments
|
||||
[7]:https://opensource.com/users/delisa
|
@ -1,93 +0,0 @@
|
||||
4 questions to answer when choosing community metrics to measure
|
||||
============================================================
|
||||
|
||||
> When evaluating a specific metric that you are considering including in your metrics plan, you should answer four questions.
|
||||
|
||||
|
||||

|
||||
Image by :
|
||||
|
||||
[Internet Archive Book Images][4]. Modified by Opensource.com. [CC BY-SA 4.0][5]
|
||||
|
||||
Thus far in the [Community Metrics Playbook][6] column, I've discussed the importance of [setting goals][7] to guide the metrics process, outlined the general [types of metrics][8] that are useful for studying your community, and reviewed technical details of [available tools][9]. As you are deciding which metrics to track for your community, having a deeper understanding of each area is important so you not only choose good metrics, but also understand and plan for what to do when the numbers don't line up with expectations.
|
||||
|
||||
When evaluating a specific metric that you are thinking about including in your metrics plan, you should answer four questions:
|
||||
|
||||
* Does it help achieve my goals?
|
||||
* How accurate is it?
|
||||
* What is its relationship to other metrics?
|
||||
* What will I do if the metric goes "bad"?
|
||||
|
||||
### Goal-appropriate
|
||||
|
||||
This one should be obvious by now from my [previous discussion on goals][10]: Why do you need to know this metric? Does this metric have a relationship to your project's goals? If not, then you should consider ignoring it—or at least placing much less emphasis on it. Metrics that do not help measure your progress toward goals waste time and resources that could be better spent developing better metrics.
|
||||
|
||||
One thing to consider is intermediate metrics. These are metrics that may not have an obvious, direct relationship to your goals. They can be dangerous when considered alone and can lead to undesirable behavior simply to "meet the number," but when combined with and interpreted in the context of other intermediates, they can help projects improve.
|
||||
|
||||
### Accuracy
|
||||
|
||||
Accuracy is defined as the quality or state of being correct or precise. Gauging accuracy for metrics that have built-in subjectivity and bias, such as survey questions, is difficult, so for this discussion I'll talk about objective metrics obtained by computers, which are for the most part highly precise and accurate. [Data can't lie][11], so why are we even discussing accuracy of computed metrics? The potential for inaccurate metrics stems from their human interpretation. The classic example here is _number of downloads_. This metric can be measured easily—often as part of a download site's built-in metrics—but will not be accurate if your software is split into multiple packages, or known systemic processes produce artificially inflated (or deflated) numbers, such as automated testing systems that execute repeated downloads.
|
||||
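As a sketch of how you might correct for that, the following snippet counts downloads only after discarding requests from known automated agents. The log entries and user-agent strings are hypothetical stand-ins for whatever your download host actually records.

```python
"""Minimal sketch: filtering automated traffic out of a download count.
The log data and agent names are hypothetical examples."""

RAW_LOG = [
    ("203.0.113.7",  "Mozilla/5.0"),     # looks like a person
    ("198.51.100.2", "ci-runner/2.1"),   # an automated test system
    ("203.0.113.9",  "curl/7.68.0"),     # could be either; a judgment call
    ("198.51.100.2", "ci-runner/2.1"),   # the same CI box, again
]

AUTOMATED_AGENTS = ("ci-runner",)        # agents known to inflate the count

human_downloads = [
    (ip, agent) for ip, agent in RAW_LOG
    if not agent.startswith(AUTOMATED_AGENTS)
]

print(f"raw count: {len(RAW_LOG)}, filtered count: {len(human_downloads)}")
```

The filtered number is still not perfectly accurate, but as argued below, a consistently measured, slightly inaccurate metric remains useful for spotting trends.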
|
||||
As long as you recognize and avoid fixating on absolute correctness, having slightly inaccurate metrics is usually better than no metrics at all. Web analytics are [notorious][12] for being inaccurate gauges of reality due to the underlying technical nature of web servers, browsers, proxies, caching, dynamic addressing, cookies, and other aspects of computing that can muddy the waters of visitor engagement metrics; however, multiple slightly inaccurate web metrics over time can be an accurate indicator that the website refresh you did reduced your repeat visits by 30%. So don't be afraid of the fact that you'll probably never achieve 100% accuracy.
|
||||
|
||||
### Understanding relationships
|
||||
|
||||

|
||||
|
||||
_Data from: [NHTSA, DOT HS 810 780][1]. [U.S. Department of Agriculture (pdf)][2]_
|
||||
|
||||
The universe of metrics is full of examples stemming from the statistical phrase "[correlation does not imply causation][13]." When choosing metrics, carefully consider whether the chosen metric might have relationships to other metrics, directly or indirectly. Related metrics often can help diagnose success and failure, and indicate needed changes to your project to drive the improvement you're looking for.
|
||||
|
||||
Truly proving that one metric's behavior causes predictable changes in another requires quite a bit of experimentation and statistical analysis, but you don't have to take it that far. If you suspect a relationship, take note and observe their behavior over time, and if evidence suggests a relationship, then you can do experimentation in your own project to test the hypothesis.
|
||||
|
||||
For example, a typical goal of open source projects is to drive innovation by attracting new developers who bring their diverse experience and backgrounds to the project. A given project notices that when the "average time from contribution to code commit" decreases, the number of new contributors coming to the project increases. If evidence over time maintains this correlation, the project might decide to dedicate more resources to handling contributions. This can have an effect elsewhere—such as an increase in bugs due to lots of new code coming in—so try not to over-rotate while using your new-found knowledge.
|
||||
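If you want to put a number on a suspected relationship before running experiments, a Pearson correlation over your monthly figures is a reasonable first look. The values below are hypothetical, chosen only to mirror the contributor example above.

```python
"""Sketch: quantifying a suspected relationship between two community
metrics. The monthly values are hypothetical illustrations."""
from math import sqrt

days_to_commit   = [14, 12, 9, 9, 7, 5]   # avg days, contribution to commit
new_contributors = [ 3,  4, 6, 5, 8, 9]   # first-time contributors that month

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A value near -1 supports the suspicion that faster handling of
# contributions accompanies more newcomers; it still proves nothing
# about cause, which is what the follow-up experimentation is for.
print(f"r = {pearson(days_to_commit, new_contributors):.2f}")
```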
|
||||
### Planning for failure
|
||||
|
||||
After gauging the accuracy and applicability of a metric, you need to think about and plan for what you will do when things don't go as planned (which will happen). Consider this scenario: You've chosen several quality-related metrics for your project, and there is general agreement that they are accurate and important to the project. The QA team is working hard, yet your chosen metrics continue to suffer. What do you do? You have several choices:
|
||||
|
||||
* Do nothing.
|
||||
* Make the QA team come in on the weekend to write more tests.
|
||||
* Work with developers to find the root cause of all the bugs.
|
||||
* Choose different metrics.
|
||||
|
||||
Which is the correct choice? The answer shouldn't surprise you: _It depends_. You may not need to do anything if the trend is expected, for example if resource constraints are forcing you to trade quality for some other metric. QA might actually need to write more tests if you have known poor coverage. Or you may need to do root cause analysis for a systemic issue in development. The last one is particularly important to include in any plan; your metrics may have become outdated and no longer align with your project's goals, and should be regularly evaluated and eliminated or replaced as needed.
|
||||
|
||||
Rarely will there be a single correct choice—it's more important to outline, for each metric, the potential causes of failure, the questions you need to ask, and what you will do in various contexts. It doesn't have to be a lengthy checklist of actions for each possible cause, but you should at least list a handful of potential causes and how you would proceed to investigate each one.
|
||||
|
||||
By answering these four questions about your metrics, you will gain a greater understanding of their purpose and efficacy. More importantly, sharing the answers with the rest of the project will give your community members a greater feeling of autonomy and purpose, which can be a much better motivator than simply asking them to meet a set of seemingly arbitrary numbers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
James Falkner - Technology evangelist, teacher, learner, author, dedicated to open source and open computing. I work at Red Hat as a technical evangelist for Red Hat's portfolio of open source products and love what we do and learning from others, and occasionally teaching at conferences.
|
||||
|
||||
Prior to Red Hat I spent 5 years at Liferay growing a large open source community, onboarding new contributors, meeting and engaging with beginners and experts, and championing open source as the de facto choice for businesses large and small. I am based in the Orlando, Florida, USA area.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/2/4-questions-answer-when-choosing-community-metrics-measure
|
||||
|
||||
作者:[James Falkner][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/james-falkner
|
||||
[1]:https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/810780
|
||||
[2]:http://www.ers.usda.gov/media/320480/wrs0406f_1_.pdf
|
||||
[3]:https://opensource.com/article/17/2/4-questions-answer-when-choosing-community-metrics-measure?rate=I8iVb2WNG2xAcYFvNaZfoEFTozgl_gQ-Pz8Ra1SveOE
|
||||
[4]:https://www.flickr.com/photos/internetarchivebookimages/14753212581/in/photolist-otG57a-orWcFN-ovJbD4-orWgoN-otWQTN-otWmY9-otG3wg-otYjFc-otLxay-otWi5N-ovJ8pt-ocuoJr-otG4KZ-ovJ7ok-otWjdj-otY18v-otYqxn-orWptL-otWkzY-otWTnW-otYcHe-otWAx3-octWmY-otWNwd-otL2wq-otYco6-ovHSva-otFSq4-otFPP2-otWmAL-otYtwP-orWAj3-otLjQy-otWDRs-otWoPJ-otG7wR-otWBTQ-otG4b2-otWyD3-orWgCA-otWMzo-otYfHx-otY9oP-otGbrz-orWnwj-orW6gJ-ocuAd8-orW5U1-otWBcu-otFXgr/
|
||||
[5]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]:https://opensource.com/tags/community-metrics-playbook
|
||||
[7]:https://opensource.com/bus/16/8/measuring-community-health
|
||||
[8]:https://opensource.com/business/16/9/choosing-right-metrics
|
||||
[9]:https://opensource.com/article/16/11/tools-collecting-analyzing-community-metrics
|
||||
[10]:https://opensource.com/bus/16/8/measuring-community-health
|
||||
[11]:http://management.curiouscatblog.net/2007/08/09/data-cant-lie/
|
||||
[12]:https://brianclifton.com/pro-lounge-files/accuracy-whitepaper.pdf
|
||||
[13]:https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
|
||||
[14]:https://opensource.com/user/18065/feed
|
||||
[15]:https://opensource.com/users/james-falkner
|
@ -1,94 +0,0 @@
|
||||
How the University of Hawaii is solving today's higher ed problems
|
||||
============================================================
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
Openness invites greater participation and it takes advantage of the shared energy of collaborators. The strength of openly created educational resources comes paradoxically from the vulnerability of the shared experience of that creation process.
|
||||
|
||||
One of the leaders in Open Educational Resources (OER) is [Billy Meinke][3], educational technologist at the University of Hawaii at Manoa. The University's open creation model uses [Pressbooks][4], which Billy tells me more about in this interview.
|
||||
|
||||
**Don Watkins (DW): How did your work at Creative Commons lead you to the University of Hawaii?**
|
||||
|
||||
**Billy Meinke (BM)**: Well, I've actually _returned_ to The University of Hawaii (UH) after being in the Bay Area for several years. I completed the ETEC educational technology Master's program here and then moved to San Francisco where I worked with [Creative Commons][5] (CC). Being with CC was a rewarding and eye-opening experience, and I'm hopeful that what I learned out there will lend itself to the OER work we are ramping up at the University.
|
||||
|
||||
**DW: What came first: instructional design or OER? Are the two symbiotic?**
|
||||
|
||||
**BM**: To me, OER is just a better flavor of learning content. Instructional designers make lots of decisions about the learning product they want to create, be it a textbook or a course or a piece of media. But will they put an open license on that OER when it's published? Will they use an open source tool to author the content? Will they release it in an open format? An instructional designer can produce effective learning content without doing any of those things, but it won't be as useful to the next person. OERs are different because they are designed for reuse, regardless of pedagogical strategy or learning approach.
|
||||
|
||||
**DW: How long has the University of Hawaii been using OERs? What were the primary motivations?**
|
||||
|
||||
**BM**: The OER effort at UH started in 2014, and this past November I took over management of OER activities at UH Manoa, the University system's flagship campus.
|
||||
|
||||
The UH system has a healthy group of OER advocates throughout, primarily at the community colleges. They've transitioned hundreds of courses to become textbook zero (textbooks at no cost) and have made lots of headway building OER-based courses for two-year students. I've been really impressed with how well they've moved towards OER and how much money they've saved students over the last few semesters. We want to empower faculty to take control of what content they teach with, which we expect will result in their saving students money, at all of our campuses.
|
||||
|
||||
**DW: What are Pressbooks? Why are Pressbooks important to the creation of OERs?**
|
||||
|
||||
**BM**: Members of the faculty do have a choice in terms of what content they teach from, much of the time. Some write their own content, or maintain websites that house a course. Pressbooks is a WordPress-based publishing platform that makes it simpler to manage the content—like a book, with sections and chapters, a table of contents, author and publisher metadata, and the capability to export the "book" into formats that can be easily read _and_ reused.
|
||||
|
||||
Because most undergraduate courses still rely on a primary textbook, we're opening up a means for faculty to adopt an existing open textbook or to co-author a text with others. Pressbooks is the tool, and we're developing the processes for adapting OER as we go.
|
||||
|
||||
**DW: How can a person get involved in development of Pressbooks?**
|
||||
|
||||
**BM**: Pressbooks has a [GitHub repository][6] where they collaboratively build the supporting software, and I've lurked on it for the last year or so. It can take some getting used to, but the conversations that happen there reveal the direction of the software and give an idea of who is working on what. Pressbooks does offer free hosting of a limited version of the software (it includes a watermark to encourage folks to upgrade) for those who want to tinker without too much commitment. Also, the software is openly licensed (GPLv2), so anyone can use the code without cost or permission.
|
||||
|
||||
**DW: What other institutions use Pressbooks?**
|
||||
|
||||
**BM**: Some of the more widely known examples are [SUNY's Open Textbook project][7] and the [BCcampus OpenEd project][8]. [Lumen Learning][9] also has its own version of Pressbooks, as does [Open Oregon State][10].
|
||||
|
||||
We're looking at what all of these folks are doing to see where we can take our use of Pressbooks, and we hope to help pave the way for others who are developing their own OERs. In some cases, Pressbooks is being used to support entire courses and has integrated activities and assessments, which can hook into the Learning Management System (LMS) an institution uses for course delivery.
|
||||
|
||||
Because Pressbooks is powered by WordPress, it actually has quite a bit of flexibility in terms of what it can do, but we're setting up a humble roadmap for now. We'll be doing standalone open textbooks first.
|
||||
|
||||
**DW: How can other colleges and universities replicate your success? What are some first steps?**
|
||||
|
||||
**BM**: Forming a community that includes librarians, instructional designers, and faculty seems to be a healthy approach. The very first step will always be to get a handle on what is currently happening with OERs where you are and who is aware of (or knowledgeable about) OERs, and then to support those people. My focus now is on curating the training resources around OERs that our team has developed, and helping the faculty gain the knowledge and skills it needs to begin adapting OERs. We'll be supporting a number of open textbook adoptions and creations this year, and it's my opinion that we should support folks with OERs, but then get out of the way when they're ready to take to the sky.
|
||||
|
||||
**DW: How important is "release early, release often?"**
|
||||
|
||||
**BM**: Even though the saying has been traditionally used to describe open practices for developing software, I think the creators of OER content should work toward embracing it, too. All too often, an open license is placed on a piece of OER as a finishing step, and none of the drafts or working documents are ever shared before the final content is released. Many folks don't consider that there might be much to gain by publishing early, especially when working independently on OER or as part of the small team. Taking a page from Mozilla's Matt Thompson, [working openly][11] makes way for greater participation, agility, momentum, iteration, and leveraging the collective energy of folks who have similar goals to your own. Because my role at UH is to connect and facilitate the adoption and creation of OER, releasing drafts of planning documents and OER as I go makes more sense.
|
||||
|
||||
To take advantage of the collective experience and knowledge that my networks have, I must improve the quality of the work continuously. This may be the most unsettling part of working openly—others can see your flaws and mistakes alongside your successes and wins. But in truth, I don't think many folks go around looking for issues with the work of others. More often, their assessment begins with asking (after watching and lurking) how useful the work of others is to their own work, and often it isn't. If it seems useful on the surface, they'll take a deeper look; otherwise they'll move on to find other work that can help them go further with their own project.
|
||||
|
||||
Being able to borrow ideas from and in some cases directly use the planning docs of others can help new OER projects find legs. That's part of my strategy with the UH system as well: sharing what works so that we can carry our OER initiative forward, together.
|
||||
|
||||
**DW: How is the Open Foundation's approach for [OERu][1] of select, design, develop, deliver, and revise similar to [David Wiley's 5Rs][12]?**
|
||||
|
||||
**BM**: Well, OERu's development workflow for OER courses is designed to outline the process of creating and revising OER, while Wiley's 5Rs framework is an assessment tool for an OER. You would (as we have) use OERu's workflow to understand how you can contribute to their course development. Wiley's 5Rs is more of a set of questions to ask to understand how open an OER is.
|
||||
|
||||
**DW: Why are these frameworks essential to the development cycle of OERs and do you have your own framework?**
|
||||
|
||||
**BM**: While I don't believe that any framework or guide is a magic bullet or something that will guarantee success in developing OERs, I think that opening up the processes of content development can benefit teams and individuals who are taking on the challenge of adopting or creating OERs. At a minimum, a framework, or a set of them, can give a big-picture view of what it takes to produce OERs from start to finish. With tools like these, they may better understand where they are in their own process, and have an idea of what it will take to reach the end points they have set for their OER work.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Don Watkins - Educator, education technology specialist, entrepreneur, open source advocate. M.A. in Educational Psychology, MSED in Educational Leadership, Linux system administrator, CCNA, virtualization using VirtualBox. Follow me at @Don_Watkins.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/2/interview-education-billy-meinke
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://oeru.org/
|
||||
[2]:https://opensource.com/article/17/2/interview-education-billy-meinke?rate=MTzLUGkz2UyQtAenC-MVjynw2M_qBr_X4B-vE-0KCVI
|
||||
[3]:https://www.linkedin.com/in/billymeinke
|
||||
[4]:https://pressbooks.com/
|
||||
[5]:https://creativecommons.org/
|
||||
[6]:https://github.com/pressbooks/pressbooks
|
||||
[7]:http://textbooks.opensuny.org/
|
||||
[8]:https://open.bccampus.ca/
|
||||
[9]:http://lumenlearning.com/
|
||||
[10]:http://open.oregonstate.edu/textbooks/
|
||||
[11]:https://openmatt.org/2011/04/06/how-to-work-open/
|
||||
[12]:https://opencontent.org/blog/archives/3221
|
||||
[13]:https://opensource.com/user/15542/feed
|
||||
[14]:https://opensource.com/article/17/2/interview-education-billy-meinke#comments
|
||||
[15]:https://opensource.com/users/don-watkins
|
@ -1,117 +0,0 @@
|
||||
A graduate degree could springboard you into an open source job
|
||||
============================================================
|
||||
|
||||
|
||||

|
||||
Image by :
|
||||
|
||||
opensource.com
|
||||
|
||||
Tech companies often prefer [hiring those who have open source experience][2] because quite simply open source experience is more valuable. This preference is only growing stronger now that open source software dominates the industry and free and open source hardware is gaining momentum. For example, an [Indeed.com salary analysis][3] shows that jobs with the keywords "Microsoft Windows" have an average salary of $64,000, while jobs with the keyword "Linux" have an average salary of $99,000. Enough said.
|
||||
|
||||
There are many good open source jobs available to those with Bachelor's degrees, but if you want to control your destiny, a higher degree will give you the freedom to be paid more for following your interests.
|
||||
|
||||
This was very important to me when deciding what education I would choose, and I think it is true of most other PhDs. However, even if you do not put much stock in intellectual freedom, there is a pretty easy case to be made for "doing it for the Benjamins."
|
||||
|
||||
If you care about economic security, as an undergraduate you should consider graduate school. According to [data from the U.S. Bureau of Labor Statistics'][4] Current Population Survey, your average income is going to go up by over 20% if you get a Master's degree and by about 50% if you get a PhD. Similarly, the unemployment rate for those with a Bachelor's degree is about 5%, drops to 3.6% with a Master's degree, and is cut in half, to 2.5%, for those with a PhD.
|
||||
|
||||
Of course, all graduate programs and schools are _not_ equal. Most open source advocates would likely find themselves in some kind of engineering program. This is actually also pretty good news on the money front. [IEEE's Where the Jobs Are 2014][5] report says that engineering unemployment is just 1.9% and down to pre-recession levels. Similarly a [survey][6] by the American Society of Mechanical Engineers (ASME) and the American Society of Civil Engineers (ASCE) found that during the recession (from 2011 to 2013) the average salary for engineers actually rose almost 5%.
|
||||
|
||||
Ironically, many students do not consider graduate school for economic reasons. On its face, grad school appears expensive, and working your way through it without shouldering a lot of debt seems impossible. For example, [MIT is $24,000 per term][7], and this does not even include room and board. Even my more humble university's graduate school (Michigan Tech, located in the [snow-blasted][8] Upper Peninsula of Michigan) will set you back more than [$40,000 a year][9] if you want to be an electrical or computer engineer. Despite these costs, graduate school in technical disciplines almost always has an exceptionally high return on investment.
|
||||
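A quick back-of-envelope calculation, using the figures quoted in this article, shows why; the $70,000 baseline salary is an assumption added purely for illustration.

```python
"""Back-of-envelope ROI sketch for a self-funded Master's degree, using
the tuition and salary-uplift figures cited in this article. The baseline
salary is an assumed placeholder; substitute your own numbers."""

baseline_salary = 70_000    # assumption: Bachelor's-level salary
masters_uplift  = 0.20      # BLS: income rises over 20% with a Master's
cost_per_year   = 40_000    # the article's Michigan Tech estimate
years_of_study  = 2         # typical Master's duration

total_cost  = cost_per_year * years_of_study    # $80,000
annual_gain = baseline_salary * masters_uplift  # $14,000 per year
payback     = total_cost / annual_gain          # under 6 years

print(f"cost ${total_cost:,}, gain ${annual_gain:,.0f}/yr, "
      f"payback {payback:.1f} years")
# And if you are funded, as described next, the cost side drops toward zero.
```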
|
||||
Also, I have even more good news: **If you are a solid student, graduate school will be more than free.**
|
||||
|
||||
In general, the best students are offered research assistantships that pay their way through graduate school completely, even at the nation's top schools. PhD and Master's degree students are generally fully funded, including tuition and monthly stipends. You will not get rich, but your ramen noodles will be covered. The real beauty of this path is that in general the research that you are paid for will go directly to your own thesis.
|
||||
|
||||
If you are looking for a graduate degree that will springboard you into an open source job, not just any graduate program will do. A place to start is with the [top 100 universities][10] that support FOSS.
|
||||
|
||||
There are also many institutions that have a fairly well-developed open source culture. Students at RIT can now [earn a minor in free and open source software][11] and free culture, and at Michigan Tech you can join the [Open Hardware Enterprise][12], which is essentially a student-run business. The Massachusetts Institute of Technology hosts [OpenCourseware][13], an open source approach to educational materials. However, be aware that although an academic pedigree is important, it is not the primary concern. This is because in graduate school (and particularly for funding) you are applying to a research group (i.e., a single professor) in addition to applying to the university and program.
|
||||
|
||||
### How to get a job in an open source lab
|
||||
|
||||
While many academics subscribe to open source principles and many schools are supportive of open source overall, the group of hard-core open source labs is fairly select. NetworkWorld offers [six examples][14], Wikipedia keeps an incomplete [list][15], and I maintain a list of contributors to open source hardware for science on [Appropedia][16]. There are many more to choose from (for example, the labs represented at open science conferences such as [GOSH][17]).
|
||||
|
||||
I run one of these labs myself, and I hope to offer some insight into the process of acquiring funding for potential graduate students. My group studies solar cells and open hardware. [Solar photovoltaic technology represents one of the fastest growing industries][18] and the [open source hardware movement][19] (particularly [RepRap][20] 3D printers) is exploding. Because my lab, the Michigan Tech Open Sustainability Technology ([MOST][21]) Lab, is on the cutting edge of two popular fields, entrance into the group is extremely competitive. This is generally the case with most other open source research groups, which I am happy to report are increasing in both size and overall density within the academic community.
|
||||
|
||||
There are two routes you can take to getting a job in an open source lab: 1) the direct route and 2) the indirect route.
|
||||
|
||||
First, the direct route.
|
||||
|
||||
### Make personal contact and stand out
|
||||
|
||||
Applying to an open source academic lab usually starts with emailing the professor who runs the lab directly. To start, make sure your email is actually addressed to the professor by name and catches his or her interest in the subject and first line. This is necessary because, in general, professors want students working in their labs who share an interest in their research areas. They do not want to hire someone who is simply looking for a job. There are thousands of students looking for positions, so professors can be fairly picky about their selections. You need to prove your interest. Professors literally get dozens of email applications a week, so you must make sure you stand out.
|
||||
|
||||
### Get good grades and study for the GREs
|
||||
|
||||
In addition, you need to cover all the obvious bases. You are going to be judged first by your numbers. You must maintain high grades and get good GRE scores. Even if you are an awesome person, if you do not have scores and grades high enough to impress, you will not meet the minimum requirements for the graduate program and not even make the list for research assistantships. For my lab, competitive graduate students need to be in the top 10% in grades and test scores (GRE ninetieth percentile scores are above 162 for verbal, 164 for quantitative, and 5 or higher in analytical writing. International students will need TOEFL scores greater than 100 and IELTS scores greater than 7.5).
|
||||
|
||||
You can find less competitive groups, but grades and scores will largely determine your chances, particularly the GRE if you are coming from outside the country. There are simply too many universities throughout the world to allow for the evaluation of the quality of a particular grade in a particular school in a particular class. Thus, and I realize this is absurdly reductionist, the practicalities of graduate school admission mean that the GRE becomes a way of quickly vetting students. Realize, however, that you can study for the GRE to improve your scores. Some international students are known for taking a year off to study and then knocking out perfect scores. You do not need to take it that far because the nature of U.S. funding favors domestic students over international students, but you should study hard for the tests.
|
||||
|
||||
Even if your scores are not perfect, you can raise your chances considerably by proving your research interests. This is where the open source philosophy really pays some dividends. Unlike peers who intern at a proprietary company and can say generally, but not specifically, what they worked on, if you work in open source, a professor can see and vet your contributions to a project directly. Ideal applicants have a history and a portfolio already built up in the areas of the research group or closely related areas.
|
||||
|
||||
### Show and share your work
|
||||
|
||||
To gain entrance to my research group, or one like it, you really need to show us your work. This means you should make some sort of personal webpage and load it up with your successful projects. You should have undertaken some major project in the research area you want to join. For my group it might be publishing a paper in a peer-reviewed journal as an undergrad, developing a new [open scientific method][22], or making valuable contributions to a large FOSS project, such as [Debian][23]. The project may be applied; for example, it could be in an applied sustainability project, such as an organized [Engineers Without Borders][24] chapter at your school, or in open hardware, such as founding a [hackerspace][25].
|
||||
|
||||
However, not all of your accomplishments need to be huge or need to be academic undergraduate research. If you restored a car, I want to know about it. If you have designed a cool video game, I want to play it. If you made a mod on the RepRap that I 3D print with or were a major developer of FOSS our group uses, I can more or less guarantee you a position if I have one.
|
||||
|
||||
If you are a good student you will be accepted into many graduate programs, but if funding is low you may not be offered a research assistantship immediately. Do not take rejection personally. You might be the perfect student for a whole range of research projects, and a professor may really want you, but simply may not have the funding when you apply. Unfortunately, there has been a stream of pretty vicious [cutbacks to academia in the U.S.][26] in recent years, so research assistantships are not as numerous as they once were. You should apply to several programs and to many professors because you never know who is going to have funding that is matched up to your graduate school career.
|
||||
|
||||
This brings us to the second path to getting a good job in an open source graduate lab, the indirect one.
|
||||
|
||||
### Sneak in
|
||||
|
||||
The first step for this approach is ensuring you meet the minimum requirements for the particular graduate school and apply. These requirements tend to be much lower than advertised by an open source lab director. Once you are accepted to a university you can be placed in the teaching assistant (TA) pool. This also is a way to pay for graduate school, although it lacks the benefit of being paid to work on your thesis, which you will have to do on your own time. While you are establishing yourself at the university by getting good grades and being a good TA, you can attempt to volunteer in the open source lab of your choosing. Most professors with capacity in their lab will take on such self-funded students. If there really is no money, often the professor will offer you some form of independent study credits for your work. These can be used to reduce your class load, giving you time to do research. Take these credits, work hard, and prove yourself.
|
||||
|
||||
This gets your foot in the door. Your chances of pulling a research assistantship will skyrocket at this point. In general, professors are always applying for funding that is awarded unpredictably. Often a professor must fill a research position on short notice when this happens. If you are good and physically there, your chances are much better for winning those funds. Even in the worst-case scenario, in which you work in an open source lab but funding never comes, the nature of open source research will still help you. Your projects will be more easily accessible by other professors (who may have funding), and all of your research (even if only paid hourly) will be disclosed to the public. This is a major benefit that is lost to all of those working on proprietary or secret military-related projects. If your work is good, access to your technical work can help you land a position at another group, a program, a school (for example, as a Master's student applying to a PhD program elsewhere), or a better, higher-paying job.
|
||||
|
||||
**Work hard and share your research aggressively following the open source model and it will pay off.**
|
||||
|
||||
Good luck!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Joshua Pearce - Dr. Joshua Pearce is cross-appointed as an Associate Professor in Materials Science & Engineering and Electrical & Computer Engineering at Michigan Tech. He currently runs the Michigan Tech Open Sustainability Technology (MOST) group. He is the author of _The Open Source Lab_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/grad-school-open-source-academic-lab
|
||||
|
||||
作者:[Joshua Pearce][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jmpearce
|
||||
[1]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab?rate=aJZB6TNyQIo2EOgqPxN8P9a5aoiYgLhtP9GujsPCJYk
|
||||
[2]:http://www.wired.com/2014/07/openhatch/
|
||||
[3]:http://www.indeed.com/salary?q1=linux&q2=microsoft+windows
|
||||
[4]:http://www.appropedia.org/MOST_application_process#Undergraduates
|
||||
[5]:http://spectrum.ieee.org/at-work/tech-careers/where-the-jobs-are-2014
|
||||
[6]:https://www.asme.org/career-education/articles/early-career-engineers/engineering-salaries-on-the-rise
|
||||
[7]:http://web.mit.edu/registrar/reg/costs/
|
||||
[8]:http://www.mtu.edu/alumni/favorites/snowfall/
|
||||
[9]:http://www.mtu.edu/gradschool/admissions/financial/cost/
|
||||
[10]:http://www.portalprogramas.com/en/how-to/best-american-universities-open-source-2014.html
|
||||
[11]:http://www.rit.edu/news/story.php?id=50590
|
||||
[12]:http://www.mtu.edu/enterprise/teams/
|
||||
[13]:https://ocw.mit.edu/index.htm
|
||||
[14]:http://www.networkworld.com/article/3062660/open-source-tools/6-colleges-turning-out-open-source-talent.html
|
||||
[15]:https://en.wikipedia.org/wiki/Open_Source_Lab
|
||||
[16]:http://www.appropedia.org/Open-source_Lab#Examples
|
||||
[17]:http://openhardware.science/
|
||||
[18]:https://hbr.org/2016/08/what-if-all-u-s-coal-workers-were-retrained-to-work-in-solar
|
||||
[19]:http://www.oshwa.org/
|
||||
[20]:http://reprap.org/
|
||||
[21]:http://www.appropedia.org/MOST
|
||||
[22]:http://openwetware.org/wiki/Main_Page
|
||||
[23]:https://www.debian.org/
|
||||
[24]:http://www.appropedia.org/Engineers_Without_Borders
|
||||
[25]:http://www.appropedia.org/Hackerspace
|
||||
[26]:http://www.cbpp.org/research/state-by-state-fact-sheets-higher-education-cuts-jeopardize-students-and-states-economic
|
||||
[27]:https://opensource.com/user/26164/feed
|
||||
[28]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab#comments
|
||||
[29]:https://opensource.com/users/jmpearce
|
@ -1,66 +0,0 @@
|
||||
Poverty Helps You Keep Technology Safe and Easy
|
||||
============================================================
|
||||
|
||||
> In the technology age, there might be some previously unknown advantages to living on the bottom rungs of the economic ladder. The question is: do they outweigh the disadvantages?
|
||||
|
||||
### Roblimo’s Hideaway
|
||||
|
||||

|
||||
|
||||
Earlier this week I saw a ZDNet story titled [Vizio: The spy in your TV][1] by my friend Steven J. Vaughan-Nichols. Scary stuff. I had a vision of my wife and me and a few dozen of our closest friends having a secret orgy in our living room, except our smart TV’s unblinking eye was recording our every thrust and parry (you might say). Zut alors! In this day of Internet everywhere, we all know that what goes online, stays online. Suddenly our orgy wasn’t secret, and my hopes of becoming the next President were dashed.
|
||||
|
||||
Except… lucky me! I’m poor, so I have an oldie-but-goodie dumb TV that doesn’t have a camera. There’s no way _my_ old Vizio can spy on us. As Mel Brooks didn’t quite say, “[It’s good to be the poverty case][2].”
|
||||
|
||||
Now about that Internet-connected thermostat. I don’t have one. They’re not only expensive (which is why I don’t have one), but according to [this article][3], they can be hacked to run ransomware. Oh my! Once again, poverty saves me from a tech problem that can easily afflict my more prosperous neighbors.
|
||||
|
||||
And how about the latest iPhone and the skinniest MacBook Pro. Apple sells the iPhone 7 Plus (gotta have the plussier one) for $769 or more. The MacBook, despite Scottish connotations of thrift, is Apple-priced “From $1299.” That’s a bunch of money, especially since we all know that as soon as you buy an Apple product it is obsolete and you need to get ready to buy a new, fancier one.
|
||||
|
||||
Also, don’t these things explode sometimes? Or catch on fire or something? My [sub-$100 Android phone][4] is safe as houses by comparison. (It has a bigger screen than the biggest-screen iPhone 7, too. Amazing!)
|
||||
|
||||
Really big safe smartphone for cheap. Check. Simple, old-fashioned, non-networked thermostats that can’t be hacked. Check. TV without the spycams most of the Money-TVs have. Check.
|
||||
|
||||
But wait! There’s more! The [Android phones that got famous for burning up][5] everything in sight were top-dollar models my wife says she wouldn’t want even if we _could_ afford them. Safety first, right? Frugality’s up there, too.
|
||||
|
||||
Now let’s talk about how I got started with Linux.
|
||||
|
||||
Guess what? It was because I was poor! The PC I had back in the days of yore ran DOS just fine, but couldn’t touch Windows 98 when it came out. Not only that, but Windows was expensive, and I was poor. Luckily, I had time on my hands, so I rooted around on the Internet (at phone modem speed) and eventually lit upon Red Hat Linux, which took forever to download and had an install procedure so complicated that instead of figuring it out I wrote an article about how Linux might be great for home computer use someday in the future, but not at the moment.
|
||||
|
||||
This led to the discovery of several helpful local Linux Users Groups (LUGs) and skilled help getting Linux going on my admittedly creaky PC. And that, you might say, led to my career as an IT writer and editor, including my time at Slashdot, NewsForge, and Linux.com.
|
||||
|
||||
This effectively, albeit temporarily, ended my poverty, but with the help of needy relatives — and later, needy doctors and hospitals — I was able to stay true to my “po’ people” roots. I’m glad I did. You’ve probably seen [this article][6] about hackers remotely shutting down a Jeep Cherokee. Hah! My 1996 Jeep Cherokee is totally immune to this kind of attack. Even my 2013 Kia Soul is _relatively_ immune, since it lacks remote-start/stop and other deluxe convenience features that make new cars easy to hack.
|
||||
|
||||
And the list goes on… same as [the beat went on][7] for Sonny and Cher. The more conveniences and Internet connections you have, the more vulnerable you are. Home automation? Make you into a giant hacking target. There’s also a (distant) possibility that your automated, uP-controlled home could become self-aware, suddenly say “I can’t do that, Dave,” and refuse to listen to your frantic cries that you aren’t Dave as it dumps you into the Internet-aware garbage disposal.
|
||||
|
||||
The solution? You got it! Stay poor! Own the fewest possible web-connect cameras and microphones. Don’t get a thermostat people in Nigeria can program to turn your temperature up and down on one-minute cycles. No automatic lights. I mean… I MEAN… is it really all that hard to flick a light switch? I know, that’s something a previous generation took for granted the same way they once walked across the room to change TV channels, and didn’t complain about it.
|
||||
|
||||
Computers? I have (not at my own expense) computers on my desk that run Mac OS, Windows, and Linux. Guess which OS causes me the least grief and confusion? You got it. _The one that cost the least!_
|
||||
|
||||
So I leave you with this thought: In today’s overly-connected world of overly-complex technology, one of the kindest parting comments you can make to someone you care about is, ** _“Stay poor, my friend!”_ **
|
||||
|
||||
The following two tabs change content below.
|
||||
|
||||
--------------------------------------------------------------------------------

作者简介:

Robin "Roblimo" Miller is a freelance writer and former editor-in-chief at Open Source Technology Group, the company that owned SourceForge, freshmeat, Linux.com, NewsForge, ThinkGeek and Slashdot, and until recently served as a video editor at Slashdot. He also publishes the blog Robin ‘Roblimo’ Miller’s Personal Site. @robinAKAroblimo

--------------------------------------------------------------------------------

via: http://fossforce.com/2017/02/poverty-helps-keep-technology-safe-easy/

作者:[Robin "Roblimo" Miller][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.roblimo.com/
[1]:http://www.zdnet.com/article/vizio-the-spy-in-your-tv/
[2]:https://www.youtube.com/watch?v=StJS51d1Fzg
[3]:https://www.infosecurity-magazine.com/news/defcon-thermostat-control-hacked/
[4]:https://www.amazon.com/LG-Stylo-Prepaid-Carrier-Locked/dp/B01FSVN3W2/ref=sr_1_1
[5]:https://www.cnet.com/news/why-is-samsung-galaxy-note-7-exploding-overheating/
[6]:https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
[7]:https://www.youtube.com/watch?v=umrp1tIBY8Q
@ -1,64 +0,0 @@
How I became a project team leader in open source
============================================================

Image by: opensource.com

> _The only people to whose opinions I listen now with any respect are people much younger than myself. They seem in front of me. Life has revealed to them her latest wonder._ ― [Oscar Wilde][1], [The Picture of Dorian Gray][2]

2017 marks two decades since I was first introduced to the concept of "open source" (though the term wasn't coined until later), and a decade since I made my first open source documentation contribution. Each year since has marked another milestone on that journey: new projects, new toolchains, becoming a core contributor, new languages, and becoming a Project Team Lead (PTL).

2017 is also the year I will take a step back, take a deep breath, and consciously give the limelight to others.

As an idealistic young university undergraduate I hung around with the nerds in the computer science department. I was studying arts and, later, business, but somehow I recognized even then that these were my people. I'm forever grateful to a young man (his name was Michael, as so many people in my story are) who introduced me first to IRC and, gradually, to Linux, Google (the lesser-known search engine at the time), HTML, and the wonders of open source. He and I were the first people I knew to use USB storage drives, and oh how we loved explaining what they were to the curious in the campus computer lab.

After university, I found myself working for a startup in Canberra, Australia. Although the startup eventually failed to... well, start, I learned some valuable skills from another dear friend, David. I already knew I had a passion for writing, but David showed me how I could use that skill to build a career, and gave me the tools I needed to actually make that happen. He is also responsible for my first true language love: [LaTeX][3]. To this day, I can spot a LaTeX document from forty paces, which has prompted many an awkward conversation with the often-unwitting bearer of the document in question.

In 2007, I began working for Red Hat, in what was then known as Engineering Content Services. It was a heady time. Red Hat was determined to invest in an in-house documentation and translation team, and another man by the name of Michael was determined that this would happen in Brisbane, Australia. It was an extraordinary case of right place, right time. I seized the opportunity and, working alongside people I still count among the best and brightest technical writers I know, we set about making that thing happen.

Working at Red Hat in those early days was some of the craziest and most challenging work of my career so far. We grew rapidly, there were always several new hires waiting for us to throw them in the deep end, and we had the determination and tenacity to try new things constantly. _Release early, release often_ became a central tenet of our group, and we came up with some truly revolutionary ways of delivering content, as well as some appallingly bad ones. It was here that I discovered the beauty of data typing, single sourcing, remixing content, and using metadata to drive content curation. We weren't trying to tell stories to our readers, but to give our readers the tools to create their own stories.

As the Red Hat team matured, so too did my career, and I eventually led a team of writers. Around the same time, I started attending and speaking at tech conferences, spreading the word about these new ways of developing content, and trying to lead developers into looking at documentation in new ways. I had a thirst for sharing this knowledge and passion for technical documentation with the world, and with the Red Hat content team slowing their growth and maturing, I found myself craving the fast pace of days gone by. It was time to find a new project.

When I joined [Rackspace][4], [OpenStack][5] was starting to really hit its stride. I was on the organizing team for [linux.conf.au][6] in 2013 (ably led by yet another Michael), which became known affectionately as openstack.conf.au due to the sheer amount of OpenStack content that was delivered in that year. Anne Gentle had formed the OpenStack documentation team only a year earlier, and I had been watching with interest. The opportunity to work alongside Anne on such an exciting project was irresistible, so by the time 2013 drew to a close, Michael had hired me, and I had become a Racker and a Stacker.

In late 2014, as we were preparing the Kilo release, Anne asked if I would be willing to put my name forward as a candidate for documentation PTL. OpenStack works on a democratic system where individuals self-nominate for the lead, and the active contributors to each project vote when there is more than one candidate. The fact that Anne not only asked me to step up, but also thought I was capable of stepping in her footsteps was an incredible honor. In early 2015, I was elected unopposed to lead the documentation team for the Liberty release, and we were off to Vancouver.

By 2015, I had managed documentation teams sized between three and 13 staff members, across many time zones, for nearly five years. I had a business management degree and an MBA to my name, had run my own business, seen a tech startup fail, and watched a new documentation team flourish. I felt as though I understood what being a manager was all about, and I guess I did, but I realized I didn't know what being a PTL was all about. All of a sudden, I had a team where I couldn't name each individual, couldn't rely on any one person to come to work on any given day, couldn't delegate tasks with any authority, and couldn't compensate team members for good work. Suddenly, the only tool I had in my arsenal to get work done was my own ability to convince people that they should.

My first release as documentation PTL was basically me stumbling around in the dark and poking at the things I encountered. I relied heavily on the expertise of the existing members of the group, particularly Anne Gentle and Andreas Jaeger (our documentation infrastructure guru), to work out what needed to be done, and I gradually started to document the things I learned along the way. I learned that the key to getting things done in a community was not just to talk and delegate, but to listen and collaborate. I had not only to tell people what to do, but also convince them that it was a good idea, and help them to see the task through, picking up the pieces if they didn't.

Gradually, and through trial and error, I built the confidence and relationships to get through an OpenStack release successfully with my team and my sanity intact. This wouldn't have happened if the team hadn't been willing to stick by me through the times I was wandering in the woods, and the project would never have gotten off the ground in the first place without the advice and expertise of those who had gone before me. Shoulders of giants, etc.

Somewhat ironically, technical writers aren't very good at documenting their own team processes, so we've been codifying our practices, conventions, tools, and systems. We still have much work to do on this front, but we have made a good start. As the OpenStack documentation team has matured, we have accrued our fair share of [tech debt][7], so dealing with that has been a consistent ribbon through my tenure, not just by closing old bugs (not that there hasn't been a lot of that), but also by changing our systems to prevent it building up in the first place.

I am now in my tenth year as an open source contributor, and I have four OpenStack releases under my belt: Liberty, Mitaka, Newton, and Ocata. I have been a PTL for two years, and I have seen a lot of great documentation contributors come and go from our little community. I have made an effort to give those who are interested an opportunity to lead: through specialty teams looking after a book or two, release managers who perform the critical tasks to get each new release out into the wild, and moderators who lead a session at OpenStack Summit planning meetings (and help save my voice which, somewhat notoriously, is always completely gone by the end of Summit week).

From these humble roles, the team has grown leaders. In these people, I see myself. They are hungry for change, full of ideas and ideals, and ready to implement crazy schemes and see where it takes them. So, this year, I'm going to take that step back, allow someone else to lead this amazing team, and let the team take their own steps forward. I intend to be here, holding on for the ride. I can't wait to see what happens next.
--------------------------------------------------------------------------------

作者简介:

Lana Brindley - Lana Brindley has several university degrees, a few of which are even relevant to her field. She has been playing and working with technology since she discovered the Hitchhikers’ Guide to the Galaxy text adventure game in the ’80s. Eventually, she worked out a way to get paid for her two passions – writing and playing with gadgetry – and has been a technical writer ever since.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/2/my-open-source-story-leader

作者:[Lana Brindley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/loquacities
[1]:http://www.goodreads.com/author/show/3565.Oscar_Wilde
[2]:http://www.goodreads.com/work/quotes/1858012
[3]:https://www.latex-project.org/
[4]:https://www.rackspace.com/en-us
[5]:https://www.openstack.org/
[6]:https://linux.conf.au/
[7]:https://en.wikipedia.org/wiki/Technical_debt
@ -1,74 +0,0 @@
Does your open source project need a president?
============================================================

> Image by: opensource.com

Recently I was lucky enough to be invited to attend the [Linux Foundation Open Source Leadership Summit][4]. The event was stacked with many of the people I consider mentors, friends, and definitely leaders in the various open source and free software communities that I participate in.

I was able to observe the [CNCF][5] Technical Oversight Committee meeting while there, and was impressed at the way they worked toward consensus where possible. It reminded me of the [OpenStack Technical Committee][6] in its make-up of well-spoken technical individuals who care about their users and stand up for the technical excellence of their foundations' activities.

But it struck me (and several other attendees) that this consensus building has limitations. [Adam Jacob][7] noted that Linus Torvalds had given an interview on stage earlier in the day where he said that most of his role was to listen closely for a time to differing opinions, but then stop them when it was clear there was no consensus, select the option he felt was technically excellent, and move on. Linus, being the founder of Linux and the benevolent dictator of the project for its lifetime thus far, has earned this moral authority.

However, unlike Linux, many of the modern foundation-fostered projects lack an executive branch. The structure we see for governance is centered around ensuring that corporate sponsors have influence. Foundation members pay dues to get various levels of board seats or corporate access to events and data. And this is a good thing, as it keeps people like me paid to work in these communities.

However, I believe as technical contributors, we sometimes give this too much sway in the actual governance of the community and the projects. These foundation boards know that day-to-day decision making should be left to those working in the project, and as such allow committees like the [CNCF][8] TOC or the [OpenStack TC][9] full agency over the technical aspects of the member projects.

I believe these committees operate as a legislative branch. They evaluate conditions and regulate the projects accordingly, allocating budgets for infrastructure and passing edicts to avoid chaos. Since they're not as large as political legislative bodies like the US House of Representatives and Senate, they can usually operate on a consensus basis, and not drive everything to a contentious vote. By and large, these are as nimble as a legislative body can be.

However, I believe open source projects need an executive to be effective. At some point, we need a single person to listen to the facts, entertain theories, and then decide, and execute a plan. Some projects have natural single leaders like this. Most, however, do not.

I believe we as engineers aren't generally good at being like Linus. If you've spent any time in the corporate world you've had an executive disagree with you and run you right over. When we get the chance to distribute power evenly, we do it.

But I think that's a mistake. I think we should strive to have executives. Not just organizers like the [OpenStack PTL][10], but more like the [Debian Project Leader][11]: empowered people with the responsibility to serve as a visionary and keep the project's decision making relevant and of high quality. This would also give the board somebody to interact with directly so that they do not have to try and convince the whole community to move in a particular direction to wield influence. In this way, I believe we'd end up with a system of checks and balances similar to the US Constitution.
So here is my suggestion for how a project executive structure could work, assuming there is already a strong technical committee and a well-defined voting electorate that I call the "active technical contributors" (a rough sketch of the Condorcet rule mentioned in item 1 follows the list):

1. The president is elected by [Condorcet][1] vote of the active technical contributors of a project for a term of one year.

2. The president will have veto power over any proposed change to the project's technical assets.

3. The technical committee may override the president's veto by a supermajority vote.

4. The president will inform the technical contributors of their plans for the project every 6 months.
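For readers unfamiliar with [Condorcet][1] voting: the Condorcet winner is the candidate who beats every other candidate in head-to-head comparisons of the ranked ballots. The Python sketch below is purely illustrative (the candidate names and the simple handling of ties and cycles are my own assumptions, not part of the proposal; a real election would use a complete method such as Schulze to resolve cycles):

```
def condorcet_winner(ballots, candidates):
    """Return the candidate who wins every head-to-head match-up, or None."""

    def prefers(ballot, a, b):
        # A ballot prefers a over b if a is ranked earlier.
        return ballot.index(a) < ballot.index(b)

    for a in candidates:
        if all(
            sum(prefers(ballot, a, b) for ballot in ballots) > len(ballots) / 2
            for b in candidates if b != a
        ):
            return a
    return None  # no Condorcet winner: a cycle or a tie


ballots = [
    ["alice", "bob", "carol"],
    ["bob", "alice", "carol"],
    ["alice", "carol", "bob"],
]
print(condorcet_winner(ballots, ["alice", "bob", "carol"]))  # -> alice
```

Here "alice" beats "bob" two ballots to one and "carol" three to zero, so she is the Condorcet winner even though no single ranking was unanimous.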
This system only works if the project contributors expect their project president to actively drive the vision of the project. Basically, the culture has to turn to this executive for final decision-making before it comes to a veto. The veto is for times when the community makes poor decisions. And this doesn't replace leaders of individual teams. Think of these like the governors of states in the US. They're running their sub-project inside the parameters set down by the technical committee and the president.

And in the case of foundations or communities with boards, I believe ultimately a board would serve as the judicial branch, checking the legality of changes made against the by-laws of the group. If there's no board of sorts, a judiciary could be appointed and confirmed, similar to the US Supreme Court or the [Debian CTTE][12]. This would also just be necessary to ensure that the technical arm of a project doesn't get the foundation into legal trouble of any kind, which is already what foundation boards tend to do.

I'd love to hear your thoughts on this on Twitter; please tweet me [@SpamapS][13] with the hashtag #OpenSourcePresident to get the discussion going.

_This article was originally published on [FewBar.com][2] as "Free and open source leaders—You need a president" and was republished with permission._
--------------------------------------------------------------------------------

作者简介:

Clint Byrum - Clint Byrum is a Cloud Architect at IBM (Though his words here are his own, and not those of IBM). He is an active Open Source and Free Software contributor to Debian, Ubuntu, OpenStack, and various other projects spanning the past 20 years.

-------------------------

via: https://opensource.com/article/17/3/governance-needs-president

作者:[Clint Byrum][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/spamaps
[1]:https://en.wikipedia.org/wiki/Condorcet_method
[2]:http://fewbar.com/2017/02/open-source-governance-needs-presidents/
[4]:http://events.linuxfoundation.org/events/open-source-leadership-summit
[5]:https://www.cncf.io/
[6]:https://www.openstack.org/foundation/tech-committee/
[7]:https://twitter.com/adamhjk
[8]:https://www.cncf.io/
[9]:https://www.openstack.org/foundation/tech-committee/
[10]:https://docs.openstack.org/project-team-guide/ptl.html
[11]:https://www.debian.org/devel/leader
[12]:https://www.debian.org/devel/tech-ctte
[13]:https://twitter.com/spamaps
101
sources/talk/20170421 A Window Into the Linux Desktop.md
Normal file
101
sources/talk/20170421 A Window Into the Linux Desktop.md
Normal file
@ -0,0 +1,101 @@
A Window Into the Linux Desktop
============================================================

"What can it do that Windows can't?"

That is the first question many people ask when considering Linux for their desktop. While the open source philosophy that underpins Linux is a good enough draw for some, others want to know just how different its look, feel and functionality can get. To a degree, that depends on whether you choose a desktop environment or a window manager.

If you want a desktop experience that is lightning fast and uncompromisingly efficient, foregoing the classic desktop environment for a window manager might be for you.

### What's What

"Desktop environment" is the technical term for a typical, full-featured desktop -- that is, the complete graphical layout of your system. Besides displaying your programs, the desktop environment includes accoutrements such as app launchers, menu panels and widgets.

In Microsoft Windows, the desktop environment consists of, among other things, the Start menu, the taskbar of open applications and notification center, all the Windows programs that come bundled with the OS, and the frames enclosing open applications (with a dash, square and X in the upper right corner).

There are many similarities in Linux.

The Linux [Gnome][3] desktop environment, for instance, has a slightly different design, but it shares all of the Microsoft Windows basics -- from an app menu to a panel showing open applications, to a notification bar, to the windows framing programs.

Window program frames rely on a component for drawing them and letting you move and resize them: it's called the "window manager." So, as they all have windows, every desktop environment includes a window manager.

However, not every window manager is part of a desktop environment. You can run window managers by themselves, and there are reasons to consider doing just that.

### Out of Your Environment

For the purpose of this column, references to "window manager" refer to those that can stand alone. If you install a window manager on an existing Linux system, you can log out without shutting down, choose the new window manager on your login screen, and log back in.

You might not want to do this without researching your window manager first, though, because you will be greeted by a blank screen and a sparse status bar that may or may not be clickable.

There typically is a straightforward way to bring up a terminal in a window manager, because that's how you edit its configuration file. There you will find key- and mouse-bindings to launch programs, at which point you actually can use your new setup.

In the popular i3 window manager, for instance, you can launch a terminal by hitting the Super (i.e., Windows) key plus Enter -- or press Super plus D to bring up the app launcher. There you can type an app name and hit Enter to open it. All the existing apps can be found that way, and they will open to full screen once selected.

[Click Image to Enlarge][4]

i3 is also a tiling window manager, meaning it ensures that all windows expand to evenly fit the screen, neither overlapping nor wasting space. When a new window pops up, it reduces the existing windows, nudging them aside to make room. Users can toggle to open the next window either vertically or horizontally adjacent.

### Features Can Be Friends or Foes

Desktop environments have their advantages, of course. First and foremost, they provide a feature-rich, recognizable interface. Each has its signature style, but overall they provide unobtrusive default settings out of the box, which makes desktop environments ready to use right from the start.

Another strong point is that desktop environments come with a constellation of programs and media codecs, allowing users to accomplish simple tasks immediately. Further, they include handy features like battery monitors, wireless widgets and system notifications.

As comprehensive as desktop environments are, the large software base and user experience philosophy unique to each means there are limits on how far they can go. That means they are not always very configurable. With desktop environments that emphasize flashy looks, oftentimes what you see is what you get.

Many desktop environments are notoriously heavy on system resources, so they're not friendly to lower-end hardware. Because of the visual effects running on them, there are more things that can go wrong, too. I once tried tweaking networking settings that were unrelated to the desktop environment I was running, and the whole thing crashed. When I started a window manager, I was able to change the settings.

Those prioritizing security may want to avoid desktop environments, since more programs means greater attack surface -- that is, entry points where malicious actors can break in.

However, if you want to give a desktop environment a try, XFCE is a good place to start, as its smaller software base trims some bloat, leaving less clutter behind if you don't stick with it.

It's not the prettiest at first sight, but after downloading some GTK theme packs (every desktop environment serves up either these or Qt themes, and XFCE is in the GTK camp) and enabling them in the Appearance section of settings, you easily can touch it up. You can even shop around at this [centralized gallery][5] to find the theme you like best.

### You Can Save a Lot of Time... if You Take the Time First

If you'd like to see what you can do outside of a desktop environment, you'll find a window manager allows plenty of room to maneuver.

More than anything, window managers are about customization. In fact, their customizability has spawned numerous galleries hosting a vibrant community of users whose palette of choice is a window manager.

The modest resource needs of window managers make them ideal for lower specs, and since most window managers don't come with any programs, they allow users who appreciate modularity to add only those they want.

Perhaps the most noticeable distinction from desktop environments is that window managers generally focus on efficiency by emphasizing mouse movements and keyboard hotkeys to open programs or launchers.

Keyboard-driven window managers are especially streamlined, since you can bring up new windows, enter text or more keyboard commands, move them around, and close them again -- all without moving your hands from the home row. Once you acculturate to the design logic, you will be amazed at how quickly you can blaze through your tasks.

In spite of the freedom they provide, window managers have their drawbacks. Most significantly, they are extremely bare-bones out of the box. Before you can make much use of one, you'll have to spend time reading your window manager's documentation for configuration syntax, and probably some more time getting the hang of said syntax.

Although you will have some user programs if you switched from a desktop environment (the likeliest scenario), you also will start out missing familiar things like battery indicators and network widgets, and it will take some time to set up new ones.

If you want to dive into window managers, i3 has [thorough documentation][6] and straightforward configuration syntax. The configuration file doesn't use any programming language -- it simply defines a variable-value pair on each line. Creating a hotkey is as easy as writing "bindsym", the key combination, and the action for that combination to launch.
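To make that concrete, here is a minimal sketch of what a few lines of an i3 config file (typically `~/.config/i3/config`) can look like. The programs bound here (`i3-sensible-terminal`, `dmenu_run`) are i3's usual defaults, but treat the exact bindings as illustrative rather than canonical:

```
# Use Mod4 (the Super/Windows key) as the modifier.
set $mod Mod4

# Super+Enter opens a terminal; Super+D opens the dmenu app launcher.
bindsym $mod+Return exec i3-sensible-terminal
bindsym $mod+d exec dmenu_run

# Close the focused window with Super+Shift+Q.
bindsym $mod+Shift+q kill

# Choose whether the next window opens horizontally or vertically adjacent.
bindsym $mod+h split h
bindsym $mod+v split v
```

Each line is exactly the pattern the paragraph above describes: "bindsym", a key combination, then the action to run.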
While window managers aren't for everyone, they offer a distinctive computing experience, and Linux is one of the few OSes that allows them. No matter which paradigm you ultimately go with, I hope this overview gives you enough information to feel confident about the choice you've made -- or confident enough to venture out of your familiar zone and see what else is available.

--------------------------------------------------------------------------------

作者简介:

**Jonathan Terrasi** has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.

-----------

via: http://www.linuxinsider.com/story/84473.html?rss=1

作者:[ ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[3]:http://en.wikipedia.org/wiki/GNOME
[4]:http://www.linuxinsider.com/article_images/2017/84473_1200x750.jpg
[5]:http://www.xfce-look.org/
[6]:https://i3wm.org/docs/
@ -0,0 +1,52 @@
Faster machine learning is coming to the Linux kernel
============================================================

### The addition of heterogeneous memory management to the Linux kernel will unlock new ways to speed up GPUs, and potentially other kinds of machine learning hardware

> Credit: Thinkstock

It's been a long time in the works, but a memory management feature intended to give machine learning or other GPU-powered applications a major performance boost is close to making it into one of the next revisions of the kernel.

Heterogeneous memory management (HMM) allows a device's driver to mirror the address space for a process under its own memory management. As Red Hat developer Jérôme Glisse [explains][10], this makes it easier for hardware devices like GPUs to directly access the memory of a process without the extra overhead of copying anything. It also doesn't violate the memory protection features afforded by modern OSes.

One class of application that stands to benefit most from HMM is GPU-based machine learning. Libraries like OpenCL and CUDA would be able to get a speed boost from HMM. HMM does this in much the same way as [speedups being done to GPU-based machine learning][11], namely by leaving data in place near the GPU, operating directly on it there, and moving it around as little as possible.

These kinds of speed-ups for CUDA, Nvidia's library for GPU-based processing, would only benefit operations on Nvidia GPUs, but those GPUs currently constitute the vast majority of the hardware used to accelerate number crunching. However, OpenCL was devised to write code that could target multiple kinds of hardware—CPUs, GPUs, FPGAs, and so on—so HMM could provide much broader benefits as that hardware matures.

There are a few obstacles to getting HMM into a usable state in Linux. First is kernel support, which has been in the works for quite some time. HMM was first proposed as a Linux kernel patchset [back in 2014][12], with Red Hat and Nvidia both involved as key developers. The amount of work involved wasn't trivial, but the developers believe code could be submitted for potential inclusion within the next couple of kernel releases.

The second obstacle is video driver support, which Nvidia has been working on separately. According to Glisse's notes, AMD GPUs are likely to support HMM as well, so this particular optimization won't be limited to Nvidia GPUs. AMD has been trying to ramp up its presence in the GPU market, potentially by [merging GPU and CPU processing][13] on the same die. However, the software ecosystem still plainly favors Nvidia; there would need to be a few more vendor-neutral projects like HMM, and OpenCL performance on a par with what CUDA can provide, to make real choice possible.

The third obstacle is hardware support, since HMM requires hardware support for replayable page faults to work. Only Nvidia's Pascal line of high-end GPUs supports this feature. In a way that's good news, since it means Nvidia will only need to provide driver support for one piece of hardware—requiring less work on its part—to get HMM up and running.

Once HMM is in place, there will be pressure on public cloud providers with GPU instances to [support the latest-and-greatest generation of GPU][14], and not just by swapping old-school Nvidia Kepler cards for bleeding-edge Pascal GPUs. As each succeeding generation of GPU pulls further away from the pack, supporting optimizations like HMM will provide strategic advantages.
--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html

作者:[Serdar Yegulalp][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
[10]:https://lkml.org/lkml/2017/4/21/872
[11]:http://www.infoworld.com/article/3195437/machine-learning-analytics-get-a-boost-from-gpu-data-frame-project.html
[12]:https://lwn.net/Articles/597289/
[13]:http://www.infoworld.com/article/3099204/hardware/amd-mulls-a-cpugpu-super-chip-in-a-server-reboot.html
[14]:http://www.infoworld.com/article/3126076/artificial-intelligence/aws-machine-learning-vms-go-faster-but-not-forward.html
136 sources/talk/20170515 How I got started with bash scripting.md (new file)
@ -0,0 +1,136 @@
How I got started with bash scripting
============================================================

### With a few simple Google searches, a programming novice learned to write code that automates a previously tedious and time-consuming task.

> Image by: opensource.com

I wrote a script the other day. For some of you, that sentence sounds like no big deal. For others, and I know you're out there, that sentence is significant. You see, I'm not a programmer. I'm a writer.

### What I needed to solve

My problem was fairly simple: I had to juggle files from engineering into our documentation. The files were available in a .zip format from a web URL. I was copying them to my desktop manually, then moving them into a different directory structure to match my documentation needs. A fellow writer gave me this advice: _"Why don't you just write a script to do this for you?"_
I thought _"just write a script?!?"_ — as if it was the easiest thing in the world to do.

### How Google came to the rescue

My colleague's question got me thinking, and as I thought, I googled.

**What scripting languages are on Linux?**

This was my first Google search criterion, and many of you are probably thinking, "She's pretty clueless." Well, I was, but it did set me on a path to solving my problem. The most common result was Bash. Hmm, I've seen Bash. Heck, one of the files I had to document had Bash in it, that ubiquitous line **#!/bin/bash**. I took another look at that file, and I knew what it was doing because I had to document it.

So that led me to my next Google search request.

**How to download a zip file from a URL?**

That was my basic task really. I had a URL with a .zip file containing all the files I needed to include in my documentation, so I asked the All Powerful Google to help me out. That search gem, and a few more, led me to Curl. But here's the best part: Not only did I find Curl, one of the top search hits showed me a Bash script that used Curl to download a .zip file and extract it. That was more than I asked for, but that's when I realized being specific in my Google search requests could give me the information I needed to write this script. So, momentum in my favor, I wrote the simplest of scripts:

```
#!/bin/sh

curl http://rather.long.url | tar -xz -C my_directory --strip-components=1
```

What a moment to see that thing run! But then I realized one gotcha: The URL can change, depending on which set of files I'm trying to access. I had another problem to solve, which led me to my next search.

**How to pass parameters into a Bash script?**

I needed to be able to run this script with different URLs and different end directories. Google showed me how to put in **$1**, **$2**, etc., to replace what I typed on the command line with my script. For example:

```
bash myscript.sh http://rather.long.url my_directory
```

That was much better. Everything was working as I needed it to, I had flexibility, I had a working script, and most of all, I had a short command to type and save myself 30 minutes of copy-paste grunt work. That was a morning well spent.

Then I realized I had one more problem. You see, my memory is short, and I knew I'd run this script only every couple of months. That left me with two issues:

* How would I remember what to type for my script (URL first? directory first?)?

* How would another writer know how to run my script if I got hit by a truck?

I needed a usage message — something the script would display if I didn't use it correctly. For example:

```
usage: bash yaml-fetch.sh <'snapshot_url'> <directory>
```

Otherwise, the script should just run. My next search was:

**How to write "if/then/else" in a Bash script?**

Fortunately I already knew **if/then/else** existed in programming. I just had to find out how to do that. Along the way, I also learned to print from a Bash script using **echo**. What I ended up with was something like this:
```
#!/bin/sh

URL=$1
DIRECTORY=$2

# require both the URL and the target directory, or print the usage message
if [ $# -ne 2 ]; then
    echo "usage: bash yaml-fetch.sh <'snapshot_url'> <directory>"
else
    # make the directory if it doesn't already exist
    echo 'create directory'
    mkdir -p "$DIRECTORY"

    # fetch and untar the yaml files
    echo 'fetch and untar the yaml files'
    curl "$URL" | tar -xz -C "$DIRECTORY" --strip-components=1
fi
```
### How Google and scripting rocked my world

Okay, slight exaggeration there, but this being the 21st century, learning new things (especially somewhat simple things) is a whole lot easier than it used to be. What I learned (besides how to write a short, self-documented Bash script) is that if I have a question, there's a good chance someone else had the same or a similar question before. When I get stumped, I can ask the next question, and the next question. And in the end, not only do I have a script, I have the start of a new skill that I can hold onto and use to simplify other tasks I've been avoiding.

Don't let that first script (or programming step) get the best of you. It's a skill, like any other, and there's a wealth of information out there to help you along the way. You don't need to read a massive book or take a month-long course. You can do it a simpler way with baby steps and baby scripts that get you started, then build on that skill and your confidence. There will always be a need for folks to write those thousands-of-lines-of-code programs with all the branching and merging and bug-fixing.

But there is also a strong need for simple scripts and other ways to automate/simplify tasks. And that's where a little script and a little confidence can give you a kickstart.
--------------------------------------------------------------------------------

作者简介:

Sandra McCann - Sandra McCann is a Linux and open source advocate. She's worked as a software developer, content architect for learning resources, and content creator. Sandra is currently a content creator for Red Hat in Westford, MA, focusing on OpenStack and NFV technology.

----

via: https://opensource.com/article/17/5/how-i-learned-bash-scripting

作者:[Sandra McCann][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/sandra-mccann
@ -0,0 +1,60 @@
How Microsoft is becoming a Linux vendor
=====================================

> Microsoft is bridging the gap with Linux by baking it into its own products.

Linux and open source technologies have become too dominant in data centers, cloud and IoT for Microsoft to ignore them.

On Microsoft's own cloud, one in three machines run Linux. These are Microsoft customers who are running Linux. Microsoft needs to support the platform they use, or they will go somewhere else.

Here's how Microsoft's Linux strategy breaks down on its developer platform (Windows 10), on its cloud (Azure) and in the datacenter (Windows Server).

**Linux in Windows**: IT professionals managing Linux machines on public or private clouds need native UNIX tooling. Linux and macOS are the only two platforms that offer such native capabilities. No wonder all you see is MacBooks or a few Linux desktops at events like DockerCon, OpenStack Summit or CoreOS Fest.

To bridge the gap, Microsoft worked with Canonical to build a Linux subsystem within Windows that offers native Linux tooling. It's a great compromise, where IT professionals can continue to use the Windows 10 desktop while getting to run almost all Linux utilities to manage their Linux machines.

**Linux in Azure**: What good is a cloud that can't run fully supported Linux machines? Microsoft has been working with Linux vendors to allow customers to run Linux applications and workloads on Azure.

Microsoft not only managed to sign deals with all three major Linux vendors (Red Hat, SUSE and Canonical), it also worked with countless other companies to offer support for community-based distros like Debian.

**Linux in Windows Server**: This is the last missing piece of the puzzle. There is a massive ecosystem of Linux containers used by customers. There are over 900,000 Docker containers on Docker Hub that can run only on Linux machines. Microsoft wanted to bring these containers to its own platform.

At DockerCon, Microsoft announced support for Linux containers on Windows Server, bringing all those containers to Windows.

Things are about to get more interesting: after the success of Bash on Ubuntu on Windows 10, Microsoft is bringing Ubuntu bash to Windows Server. Yes, you heard it right. Windows Server will now have a Linux subsystem.

Rich Turner, Senior Program Manager at Microsoft, told me, "WSL on the server provides admins with a preference for *NIX admin scripting & tools to have a more familiar environment in which to work."

Microsoft said in an announcement that it will allow IT professionals "to use the same scripts, tools, procedures and container images they have been using for Linux containers on their Windows Server container host. These containers use our Hyper-V isolation technology combined with your choice of Linux kernel to host the workload while the management scripts and tools on the host use WSL."

With all three bases covered, Microsoft has succeeded in creating an environment where its customers don't have to deal with any Linux vendor.

### What does it mean for Microsoft?

By baking Linux into its own products, Microsoft has become a Linux vendor. They are part of the Linux Foundation, they are one of the many contributors to the Linux kernel, and they now distribute Linux from their own store.

There is only one minor problem. Microsoft doesn't own any Linux technologies. They are totally dependent on an external vendor, in this case Canonical, for their entire Linux layer. That's too risky a proposition if Canonical gets acquired by a fierce competitor.

It might make sense for Microsoft to attempt to acquire Canonical and bring the core technologies in house.

### What does it mean for Linux vendors?

On the surface, it's a clear victory for Microsoft, as its customers can live within the Windows world. It will also contain the momentum of Linux in the datacenter. It might even affect Linux on the desktop: now that IT professionals looking for *NIX tooling don't have to run a Linux desktop, they can do everything from within Windows.

Is Microsoft's victory a loss for traditional Linux vendors? To some degree, yes. Microsoft has become a direct competitor. But the clear winner here is Linux.
--------------------------------------------------------------------------------

via: http://www.cio.com/article/3197016/linux/how-microsoft-is-becoming-a-linux-vendor.html

作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.cio.com/author/Swapnil-Bhartiya/
103 sources/talk/20170517 Security Debt is an Engineers Problem.md (new file)
@ -0,0 +1,103 @@
Security Debt is an Engineer’s Problem
============================================================

> Keziah Plattner of AirBnB Security.

Just like organizations can build up technical debt, so too can they build up something called “security debt,” if they don’t plan accordingly, attendees learned at the [WomenWhoCode Connect][5] event at Twitter headquarters in San Francisco last month.

Security has got to be integral to every step of the software development process, stressed [Mary Ann Davidson][6], Oracle’s Chief Security Officer, in a keynote talk about security for developers with [Zassmin Montes de Oca][7] of [WomenWhoCode][8].

In the past, security used to be ignored by pretty much everyone, except banks. But security is more critical than it has ever been because there are so many access points. We’ve entered the era of the [Internet of Things][9], where thieves can just hack your fridge to see that you’re not home.

Davidson is in charge of assurance at Oracle, “making sure we build security into everything we build, whether it’s an on-premise product, whether it’s a cloud service, even devices we have that support group builds at customer sites and reports data back to us, helping us do diagnostics — every single one of those things has to have security engineered into it.”

Plattner talking to a capacity crowd at #WWCConnect

AirBnB’s [Keziah Plattner][10] echoed that sentiment in her breakout session. “Most developers don’t see security as their job,” she said, “but this has to change.”

She shared four basic security principles for engineers. First, security debt is expensive. There’s a lot of talk about [technical debt][11] and she thinks security debt should be included in those conversations.

“This historical attitude is ‘We’ll think about security later,’” Plattner said. As companies grab the low-hanging fruit of software efficiency and growth, they ignore security, but an initial insecure design can cause problems for years to come.

It’s very hard to add security to an existing vulnerable system, she said. Even when you know where the security holes are and have budgeted the time and resources to make the changes, it’s time-consuming and difficult to re-engineer a secure system.

So it’s key, she said, to build security into your design from the start. Think of security as part of the technical debt to avoid. And cover all possibilities.

Most importantly, according to Plattner, is the difficulty of getting people to change their behavior. No one will change voluntarily, she said, even when you point out that the new behavior is more secure. We all nodded.

Davidson said engineers need to start thinking about how their code could be attacked, and design from that perspective. She said she only has two rules. The first is never trust any unvalidated data; rule two is see rule one.

“People do this all the time. They say ‘My client sent me the data so it will be fine.’ Nooooooooo,” she said, to laughs.

The second key to security, Plattner said, is “never trust users.”

Davidson put it another way: “My job is to be a professional paranoid.” She worries all the time about how someone might breach her systems even inadvertently. This is not academic; there have been recent denial-of-service attacks through IoT devices.
### Little Bobby Tables

If part of your security plan is trusting users to do the right thing, your system is inherently insecure regardless of whatever other security measures you have in place, said Plattner.

It’s important to properly sanitize all user input, she explained, showing the [XKCD cartoon][12] where a mom wiped out an entire school database because her son’s middle name was “DropTable Students.”

So sanitize all user input. Check.
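The classic defense against Bobby Tables is to never build SQL by pasting strings together, and instead hand user input to the database driver as a parameter. A minimal Python sketch (the table and the hostile name below are invented for illustration):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Hostile input, a la Little Bobby Tables.
name = "Robert'); DROP TABLE students;--"

# Bad: string formatting splices the input into the SQL text itself.
#   conn.execute("INSERT INTO students (name) VALUES ('%s')" % name)

# Good: a parameterized query keeps the input strictly on the data side.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

print(conn.execute("SELECT name FROM students").fetchall())
```

The hostile string ends up stored harmlessly as a name, rather than being interpreted as SQL.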
She showed an example of JavaScript developers using eval() in open source code. “A good ground rule is ‘Never use eval(),’” she cautioned. The [eval()][13] function evaluates JavaScript code. “You’re opening your system to random users if you do.”
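The same footgun exists outside JavaScript. A quick Python illustration of the rule, using a dedicated parser instead of eval() (the input string here is made up):

```
import json

user_input = '{"name": "mallory", "admin": true}'

# Bad: eval() would execute whatever code the string happens to contain,
# e.g. "__import__('os').system(...)".
# data = eval(user_input)

# Good: a dedicated parser accepts only the data format you expect.
data = json.loads(user_input)
print(data["admin"])  # True
```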
Davidson cautioned that her paranoia extends to including security testing your example code in documentation. “Because we all know no one ever copies sample code,” she said to laughter. She underscored the point that any code should be subject to security checks.
|
||||
|
||||

|
||||
|
||||
Make it easy
|
||||
|
||||
Plattner’s suggestion three: Make security easy. Take the path of least resistance, she suggested.
|
||||
|
||||
Externally, make users opt out of security instead of opting in, or, better yet, make it mandatory. Changing people’s behavior is the hardest problem in tech, she said. Once users get used to using your product in a non-secure way, getting them to change in the future is extremely difficult.
|
||||
|
||||
Internal to your company, she suggested make tools that standardize security so it’s not something individual developers need to think about. For example, encrypting data as a service so engineers can just call the service to encrypt or decrypt data.
|
||||
|
||||
Make sure that your company is focused on good security hygiene, she said. Switch to good security habits across the company.
|
||||
|
||||
You’re only secure as your weakest link, so it’s important that each individual also has good personal security hygiene as well as having good corporate security hygiene.
|
||||
|
||||
At Oracle, they’ve got this covered. Davidson said she got tired of explaining security to engineers who graduated college with absolutely no security training, so she wrote the first coding standards at Oracle. There are now hundreds of pages with lots of contributors, and there are classes that are mandatory. They have metrics for compliance to security requirements and measure it. The classes are not just for engineers, but for doc writers as well. “It’s a cultural thing,” she said.
|
||||
|
||||
And what discussion about security would be complete without a mention of passwords? Everyone should be using a good password manager, Plattner said, and they should be mandatory for work, along with two-factor authentication.
|
||||
|
||||
Basic password principles should be a part of every engineer’s waking life, she said. What matters most in a password is its length and entropy: making the collection of keystrokes as random as possible. A robust password entropy checker is invaluable for this. She recommends [zxcvbn][14], Dropbox’s open source entropy checker.
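Using zxcvbn is a one-call affair. A minimal sketch, assuming the zxcvbn npm package is installed; the score runs from 0 (terrible) to 4 (strong):

```
// Score a candidate password with Dropbox's open source zxcvbn.
const zxcvbn = require('zxcvbn');

const result = zxcvbn('Tr0ub4dor&3'); // looks strong, really isn't
console.log(result.score);                // 0-4; e.g. reject below 3
console.log(result.feedback.suggestions); // advice you can show the user
```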
|
||||
|
||||
Another trick is to use something intentionally slow, like [bcrypt][15], when authenticating user input, said Plattner. The slowness doesn’t bother most legitimate users, but it frustrates hackers trying to brute-force password attempts.
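A minimal sketch of that trick with the bcrypt npm package; the cost factor of 12 is an illustrative choice, tuned so one hash takes a noticeable fraction of a second on typical hardware:

```
// Intentionally slow hashing: fine for one login, ruinous for bulk guessing.
const bcrypt = require('bcrypt');

async function register(password) {
  return bcrypt.hash(password, 12); // 12 = cost factor (2^12 rounds)
}

async function login(password, storedHash) {
  return bcrypt.compare(password, storedHash); // true if it matches
}
```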
|
||||
|
||||
All of this adds up to job security for anyone wanting to get into the security side of technology, said Davidson. We’re putting more code in more places, she said, and that creates systemic risk. “I don’t think anybody is not going to have a job in security as long as we keep doing interesting things in technology.”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://thenewstack.io/security-engineers-problem/
|
||||
|
||||
作者:[TC Currie][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://thenewstack.io/author/tc/
|
||||
[1]:http://twitter.com/share?url=https://thenewstack.io/security-engineers-problem/&text=Security+Debt+is+an+Engineer%E2%80%99s+Problem+
|
||||
[2]:http://www.facebook.com/sharer.php?u=https://thenewstack.io/security-engineers-problem/
|
||||
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https://thenewstack.io/security-engineers-problem/
|
||||
[4]:https://thenewstack.io/security-engineers-problem/#disqus_thread
|
||||
[5]:http://connect2017.womenwhocode.com/
|
||||
[6]:https://www.linkedin.com/in/mary-ann-davidson-235ba/
|
||||
[7]:https://www.linkedin.com/in/zassmin/
|
||||
[8]:https://www.womenwhocode.com/
|
||||
[9]:https://www.thenewstack.io/tag/Internet-of-Things
|
||||
[10]:https://twitter.com/ittskeziah
|
||||
[11]:https://martinfowler.com/bliki/TechnicalDebt.html
|
||||
[12]:https://xkcd.com/327/
|
||||
[13]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
|
||||
[14]:https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/
|
||||
[15]:https://en.wikipedia.org/wiki/Bcrypt
|
||||
[16]:https://thenewstack.io/author/tc/
|
@ -0,0 +1,72 @@
|
||||
|
||||
朝鲜 180 局:让西方国家忧虑的网络战部门
|
||||
|
||||
Translating by hwlog
|
||||
North Korea's Unit 180, the cyber warfare cell that worries the West
|
||||
|
||||
============================================================
|
||||
[][13] [**PHOTO:** 脱北者说,平壤通过一个叫做“180局”的部门发动网络攻击来筹集资金。(Reuters: Damir Sagolj, file)][14]
|
||||
据叛逃者、官员和网络安全专家称,朝鲜的情报机构设有一个叫做“180局”的特殊部门,该部门已经发动过多起大胆而成功的网络攻击。
|
||||
近几年,美国、韩国和周边几个国家指责朝鲜对多国金融网络发起过一系列在线袭击。
|
||||
网络安全研究人员称,他们找到了技术证据,表明本月感染了 150 多个国家 30 多万台计算机的全球性“想哭”(WannaCry)勒索软件与朝鲜的网络战行动有关联。
|
||||
平壤称该指控是“荒谬的”。
|
||||
对朝鲜的关键指控,是它与一个叫做“拉撒路”(Lazarus)的黑客组织有联系。该组织去年从孟加拉国中央银行的网络中窃取了 8000 万美元,并在 2014 年攻击了索尼的好莱坞制片厂的网络。
|
||||
美国政府指责朝鲜制造了对索尼的黑客袭击,美国检方也在就孟加拉国银行盗窃案立案指控平壤。
|
||||
但目前没有拿出确凿证据,也没有提出刑事指控。朝鲜则否认与索尼和银行的袭击有关。
|
||||
朝鲜是世界上最封闭的国家之一,它秘密行动的一些细节很难获得。
|
||||
但研究这个封闭国家的专家,以及逃往韩国和西方国家的叛逃者,已经提供了一些线索。
|
||||
|
||||
### 黑客们可能以雇员身份作掩护
|
||||
金恒光是朝鲜前计算机科学教授,2004 年叛逃到韩国,至今仍有朝鲜内部的消息来源。他说,平壤的网络战旨在通过“180局”筹集资金,该局隶属于朝鲜主要的海外情报机构侦察总局。
|
||||
金教授称:“180局负责入侵金融机构,通过漏洞从银行账户中提取资金。”
|
||||
他此前曾说,他以前的一些学生已经加入了朝鲜的“战略网络司令部”,即朝鲜的网络部队。
|
||||
|
||||
>“黑客们前往海外,寻找互联网服务比朝鲜更好的地方,以免留下痕迹。”金教授补充说。
|
||||
他说,他们经常以朝鲜贸易公司、海外分支机构以及设在中国和东南亚的合资企业的雇员身份作掩护。
|
||||
位于华盛顿的战略与国际研究中心的朝鲜专家 James Lewis 称,平壤起初把黑客活动当作间谍工具,随后用于对韩国和美国目标进行政治干扰。
|
||||
索尼事件之后,他们改变了方式,利用黑客活动支持犯罪活动,为政权赚取硬通货。
|
||||
“目前为止,网上贩毒、制假、走私都是他们惯用的伎俩。”
|
||||
|
||||
|
||||
[**VIDEO:** 你遇到过勒索病毒吗? (ABC News)][16]
|
||||
|
||||
### 韩国声称拥有大量的“证据”
|
||||
美国国防部在去年提交给国会的一份报告中称,朝鲜可能将网络攻击视为一种高性价比、不对称、可抵赖的工具,使用它几乎不必担心遭到报复性袭击,部分原因是朝鲜的网络大多与互联网隔离。
|
||||
|
||||
> 报告中说:“它可能利用第三方国家的互联网基础设施。”
|
||||
韩国政府称,他们拥有朝鲜网络战行动的大量证据。
|
||||
“朝鲜借道第三方国家来掩盖其网络攻击的来源,并利用这些国家的信息和通信技术基础设施。”韩国外交部副部长 Ahn Chong-ghee 在给路透社的书面评论中说。
|
||||
他说,除了孟加拉国银行抢劫案之外,平壤还被怀疑与菲律宾、越南和波兰的银行袭击有关。
|
||||
去年 6 月,警方称朝鲜袭击了 160 家韩国企业和政府机构,入侵了约 14 万台计算机,并在对手的计算机中秘密植入恶意代码,作为发动大规模网络攻击的长期计划的一部分。
|
||||
朝鲜还被怀疑在 2014 年对韩国核反应堆运营商发动过网络攻击,尽管朝鲜否认与此有关。
|
||||
据位于韩国首尔的杀毒软件厂商 Hauri 的高级安全研究员 Simon Choi 称,那次袭击是从朝鲜设在中国的一个基地发起的。
|
||||
对朝鲜黑客能力进行过广泛研究的 Choi 先生说:“他们在那里开展行动,这样不论他们策划什么,用的都是中国的 IP 地址。”
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.abc.net.au/news/2017-05-21/north-koreas-unit-180-cyber-warfare-cell-hacking/8545106
|
||||
|
||||
作者:[www.abc.net.au ][a]
|
||||
译者:[译者ID](https://github.com/hwlog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.abc.net.au
|
||||
[1]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[2]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[3]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[4]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
|
||||
[5]:http://www.abc.net.au/news/2017-05-18/adylkuzz-cyberattack-could-be-far-worse-than-wannacry:-expert/8537502
|
||||
[6]:http://www.google.com/maps/place/Korea,%20Democratic%20People%20S%20Republic%20Of/@40,127,5z
|
||||
[7]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[8]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[9]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[10]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[11]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[12]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[13]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
|
||||
[14]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
|
||||
[15]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
|
||||
[16]:http://www.abc.net.au/news/2017-05-15/have-you-been-hit-by-ransomware/8527854
|
@ -1,155 +0,0 @@
|
||||
10 tools for visual effects in Linux with Kdenlive
|
||||
================================================================================
|
||||

|
||||
Image credits : Seth Kenlon. [CC BY-SA 4.0.][1]
|
||||
|
||||
[Kdenlive][2] is one of those applications; you can use it daily for a year and wake up one morning only to realize that you still have only grazed the surface of all of its potential. That's why it's nice every once in a while to sit back and look over some of the lesser-used tricks and tools in Kdenlive. Even though something's not used as often as, say, the Spacer or Razor tools, it still may end up being just the right finishing touch on your latest masterpiece.
|
||||
|
||||
Most of the tools I'll discuss here are not officially part of Kdenlive; they are plugins from the [Frei0r][3] package. These are ubiquitous parts of video processing on Linux and Unix, and they usually get installed along with Kdenlive as distributed by most Linux distributions, so they often seem like part of the application. If your install of Kdenlive does not feature some of the tools mentioned here, make sure that you have Frei0r plugins installed.
|
||||
|
||||
Since many of the tools in this article affect the look of an image, here is the base image, without effects or adjustment:
|
||||
|
||||

|
||||
|
||||
Still image grabbed from a video by Footage Firm, Inc. [CC BY-SA 4.0.][1]
|
||||
|
||||
Let's get started.
|
||||
|
||||
### 1. Color effect ###
|
||||
|
||||

|
||||
|
||||
You can find the **Color Effect** filter in the **Add Effect > Misc** context menu. As filters go, it's mostly just a preset; the only control it offers is which effect you want to use.
|
||||
|
||||

|
||||
|
||||
Normally that's the kind of filter I avoid, but I have to be honest: Sometimes a plug-and-play solution is exactly what you want. This filter has a few different settings, but the two that make it worthwhile (at least for me) are the Sepia and XPro effects. Admittedly, a control to adjust how strong the sepia tone is would be nice, but no matter what, when you need a quick and familiar color effect, this is the filter to throw onto a clip. It's immediate, it's easy, and if your client asks for that look, this does the trick every time.
|
||||
|
||||
### 2. Colorize ###
|
||||
|
||||

|
||||
|
||||
The simplicity of the **Colorize** filter in **Add Effect > Misc** is also its strength. In some editing applications, it takes two filters and some compositing to achieve this simple color-wash effect. It's refreshing that in Kdenlive, it's a matter of one filter with three possible controls (only one of which, strictly speaking, is necessary to achieve the look).
|
||||
|
||||

|
||||
|
||||
Its use is intuitive; use the **Hue** slider to set the color. Use the other controls to adjust the luma of the base image as needed.
|
||||
|
||||
This is not a filter I use every day, but for ad spots, bumpers, dreamy sequences, or titles, it's the easiest and quickest path to a commonly needed look. Get a company's color, use it as the colorize effect, slap a logo over the top of the screen, and you've just created a winning corporate intro.
|
||||
|
||||
### 3. Dynamic Text ###
|
||||
|
||||

|
||||
|
||||
For the assistant editor, the **Add Effect > Misc > Dynamic Text** effect is worth the price of Kdenlive. With one mostly pre-set filter, you can add a running timecode burn-in to your project, which is an absolute must-have safety feature when round-tripping your footage through effects and sound.
|
||||
|
||||
The controls look more complex than they actually are.
|
||||
|
||||

|
||||
|
||||
The font settings are self-explanatory. Placement of the text is controlled by the Horizontal and Vertical Alignment settings; steer clear of the **Size** setting (it controls the size of the "canvas" upon which you are compositing the burn-in, not the size of the burn-in itself).
|
||||
|
||||
The text itself doesn't have to be timecode. From the dropdown menu, you can choose from a list of useful text, including frame count (useful for VFX, since animators work in frames), source frame rate, source dimensions, and more.
|
||||
|
||||
You are not limited to just one choice. The text field in the control panel will take whatever arbitrary text you put into it, so if you want to burn in more information than just timecode and frame rate (such as **Sc 3 - #timecode# - #meta.media.0.stream.frame_rate#**), then have at it.
|
||||
|
||||
### 4. Luminance ###
|
||||
|
||||

|
||||
|
||||
The **Add Effect > Misc > Luminance** filter is a no-options filter. Luminance does one thing and it does it well: It drops the chroma values of all pixels in an image so that they are displayed by their luma values. In simpler terms, it's a grayscale filter.
|
||||
|
||||
The nice thing about this filter is that it's quick, easy, efficient, and effective. This filter combines particularly well with other related filters (meaning that yes, I'm cheating and including three filters for one).
|
||||
|
||||

|
||||
|
||||
Combining, in this order, the **RGB Noise** for emulated grain, **Luminance** for grayscale, and **LumaLiftGainGamma** for levels can render a textured image that suggests the classic look and feel of [Kodak Tri-X][4] film.
|
||||
|
||||
### 5. Mask0mate ###
|
||||
|
||||

|
||||
Image by Footage Firm, Inc.
|
||||
|
||||
Better known as a four-point garbage mask, the **Add Effect > Alpha Manipulation > Mask0mate** tool is a quick, no-frills way to ditch parts of your frame that you don't need. There isn't much to say about it; it is what it is.
|
||||
|
||||

|
||||
|
||||
The confusing thing about the effect is that it does not imply compositing. You can pull in the edges all you want, but you won't see it unless you add the **Composite** transition to reveal what's underneath the clip (even if that's nothing). Also, use the **Invert** function for the filter to act like you think it should act (without it, the controls will probably feel backward to you).
|
||||
|
||||
### 6. Pr0file ###
|
||||
|
||||

|
||||
|
||||
The **Add Effect > Misc > Pr0file** filter is an analytical tool, not something you would actually leave on a clip for final export (unless, of course, you do). Pr0file consists of two components: the Marker, which dictates what area of the image is being analyzed, and the Graph, which displays information about the marked region.
|
||||
|
||||
Set the marker using the **X, Y, Tilt**, and **Length** controls. The graphical readout of all the relevant color channel information is displayed as a graph, superimposed over your image.
|
||||
|
||||

|
||||
|
||||
The readout displays a profile of the colors within the region marked. The result is a sort of hyper-specific vectorscope (or oscilloscope, as the case may be) that can help you zero in on problem areas during color correction, or compare regions while color matching.
|
||||
|
||||
In other editors, the way to get the same information was simply to temporarily scale your image up to the region you want to analyze, look at your readout, and then hit undo to scale back. Both ways work, but the Pr0file filter does feel a little more elegant.
|
||||
|
||||
### 7. Vectorscope ###
|
||||
|
||||

|
||||
|
||||
Kdenlive features an inbuilt vectorscope, available from the **View** menu in the main menu bar. A vectorscope is not a filter; it's just another view of the footage in your Project Monitor, specifically a view of the color saturation in the current frame. If you are color correcting an image and you're not sure what colors you need to boost or counteract, looking at the vectorscope can be a huge help.
|
||||
|
||||
There are several different views available. You can render the vectorscope in traditional green monochrome (like the hardware vectorscopes you'd find in a broadcast control room), or a chromatic view (my personal preference), or subtracted from a color-wheel background, and more.
|
||||
|
||||
The vectorscope reads the entire frame, so unlike the Pr0file filter, you are not just getting a reading of one area in the frame. The result is a consolidated view of what colors are most prominent within a frame. Technically, the same sort of information can be intuited by several trial-and-error passes with color correction, or you can just leave your vectorscope open and watch the colors float along the color wheel and make adjustments accordingly.
|
||||
|
||||
Aside from how you want the vectorscope to look, there are no controls for this tool. It is a readout only.
|
||||
|
||||
### 8. Vertigo ###
|
||||
|
||||

|
||||
|
||||
There's no way around it; **Add Effect > Misc > Vertigo** is a gimmicky special effect filter. So unless you're remaking [Fear and Loathing][5] or the movie adaptation of [Dead Island][6], you probably aren't going to use it that much; however, it's one of those high-quality filters that does the exact trick you want when you happen to be looking for it.
|
||||
|
||||
The controls are simple. You can adjust how distorted the image becomes and the rate at which it distorts. The overall effect is probably more drunk or vision-quest than vertigo, but it's good.
|
||||
|
||||

|
||||
|
||||
### 9. Vignette ###
|
||||
|
||||

|
||||
|
||||
Another beautiful effect, the **Add Effect > Misc > Vignette** effect darkens the outer edges of the frame to provide a sort of portrait, soft-focus nouveau look. Combined with the Color Effect or the Luminance faux Tri-X trick, this can be a powerful and emotional look.
|
||||
|
||||
The softness of the border and the aspect ratio of the iris can be adjusted. The **Clear Center Size** attribute controls the size of the clear area, which has the effect of adjusting the intensity of the vignette effect.
|
||||
|
||||

|
||||
|
||||
### 10. Volume ###
|
||||
|
||||

|
||||
|
||||
I don't believe in mixing sound within the video editing application, but I do acknowledge that sometimes it's just necessary for a quick fix or, sometimes, even for a tight production schedule. And that's when the **Audio correction > Volume (Keyframable)** effect comes in handy.
|
||||
|
||||
The control panel is clunky, and no one really wants to adjust volume that way, so the effect is best when used directly in the timeline. To create a volume change, double-click the volume line over the audio clip, and then click and drag to adjust. It's that simple.
|
||||
|
||||
Should you use it? Not really. Sound mixing should be done in a sound mixing application. Will you use it? Absolutely. At some point, you'll get audio that is too loud to play as you edit, or you'll be up against a deadline without a sound engineer in sight. Use it judiciously, watch your levels, and get the show finished.
|
||||
|
||||
### Everything else ###
|
||||
|
||||
This has been 10 (OK, 13 or 14) effects and tools that Kdenlive has quietly lying around to help your edits become great. Obviously there's a lot more to Kdenlive than just these little tricks. Some are obvious, some are cliché, some are obtuse, but they're all in your toolkit. Get to know them, explore your options, and you might be surprised what a few cheap tricks will get you.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/life/15/12/10-kdenlive-tools
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/seth
|
||||
[1]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[2]:https://kdenlive.org/
|
||||
[3]:http://frei0r.dyne.org/
|
||||
[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml
|
||||
[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film)
|
||||
[6]:https://en.wikipedia.org/wiki/Dead_Island
|
@ -1,77 +0,0 @@
|
||||
Open technology for land rights documentation
|
||||
============================================================
|
||||
|
||||
### One-third of people on the planet don't have documented rights to the land on which they rely.
|
||||
|
||||
|
||||

|
||||
Image by :
|
||||
|
||||
[Pixabay][4]. Modified by Opensource.com. [CC BY-SA 4.0][5]
|
||||
|
||||
The [Cadasta Foundation][6] creates tech to allow communities to document their land rights. By helping groups document the evidence of their individual and community rights to the land on which they depend, they can eventually obtain legal recognition of their land rights, and in the meantime, enjoy greater security.
|
||||
|
||||
We are motivated by the fact that most of the world does not have documented legal rights to the land on which they live. Technology is only a small part of this larger social issue, but our hope is that tech tools can be part of the solution even in the most remote and low-tech environments.
|
||||
|
||||
### The magnitude of property rights
|
||||
|
||||
Many of us who come from the global north probably take our rights to our land, property, and home for granted. We have titles, deeds, and rental agreements that document and solidly protect our rights.
|
||||
|
||||
But one-third of the people on the planet, from urban shanty towns to forest-dwelling indigenous communities, do not have documented rights to the land on which they rely. In fact, an estimated 70% of the property in emerging economies is undocumented. An estimated 25% of the world’s urban population live in homes to which they have no legal right. A majority of smallholder farmers around the world farm without the protection of having legal rights to their land documented by government records.
|
||||
|
||||
This is simply because government land and property records in many areas of the world either were never created or are out of date. For example, most rural land records in the state of Telangana, India haven't been updated since the 1940s. In other areas, such as parts of sub-Saharan Africa, there were never any records of land ownership to begin with—people simply farm the land their parents farmed, generation after generation.
|
||||
|
||||
Consider for a moment working land to which you have no secure rights. Would you invest your savings or labor in improving the land, including applying good quality seeds and fertilizer, with the knowledge that you could be displaced any day by a more powerful neighbor or investor? Imagine living in a home that could be bulldozed or usurped by an official any day. Or how could you sell your house, or use it for collateral for a loan, if you don’t have any proof that you own it?
|
||||
|
||||
For a majority of the world's population, these are not rhetorical questions. These are daily realities.
|
||||
|
||||
### How open source matters for land
|
||||
|
||||
Technology is only one part of the solution, but at Cadasta we believe it is a key component. While many governments had modern technology systems put in place to manage land records, often these were expensive to maintain, required highly trained staff, were not transparent, and were otherwise too complicated. Many of these systems, created at great expense by donor governments, are already outdated and no longer accurately reflect existing land and property rights.
|
||||
|
||||
By building open and user-friendly technology for land rights documentation we aim to overcome these problems and create land documentation systems that are flexible and accessible, allowing them to be treated as living documents that are updated continually.
|
||||
|
||||
We routinely train people who have never even used a smartphone before to use our technology to document their land rights in a single afternoon. The resulting data, hosted on an open source platform, is easy to access, update, and analyze. This flexibility means that governments in developing countries, should they adopt our platform, don't need to hire specially trained staff to manage the upkeep of these records.
|
||||
|
||||
We also believe that by contributing to and fostering open communities we can benefit more people, instead of attempting to develop all the technology ourselves. We do this by building a community around our tools as well as contributing to other existing software.
|
||||
|
||||
Over the past two years we've contributed to and been involved in [OpenStreetMap][7] through the [Missing Maps Project][8], used [OpenDataKit][9] extensively for data collection, and currently are integrating [Field Papers][10] with our system. Field Papers is technology that allows users to print paper maps, annotate those maps with a pen, and then take a picture of those annotations with their phone and upload them to be transcribed.
|
||||
|
||||
We've also released a few Django libraries we hope will be useful to others in other Django applications. These include a policy-based permission system called [django-tutelary][11] and [django-jsonattrs][12], which provides JavaScript Object Notification (JSON)-based attribute management for PostgresSQL. If others use these pieces and contribute bug reports and patches, this can help make Cadasta's work stronger.
|
||||
|
||||
This work is critically important. Land rights are the foundation of stability and prosperity. Communities and countries seeking economic growth and sustainable development must document land rights and ensure land rights are secure for women, men, and communities.
|
||||
|
||||
_Learn more in Kate Chapman's talk at linux.conf.au 2017 ([#lca2017][1]) in Hobart: [Land Matters: Creating Open Technology for Land Rights][2]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||

|
||||
|
||||
Kate Chapman - Kate Chapman is Chief Technology Officer of the Cadasta Foundation, leading the organization’s technology team and strategy. Cadasta develops free and open source software to help communities document their land rights around the world. Chapman is recognized as a leader in the domains of open source geospatial technology and community mapping, and an advocate for open imagery as a public good.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/1/land-rights-documentation-Cadasta
|
||||
|
||||
作者:[Kate Chapman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/wonderchook
|
||||
[1]:https://twitter.com/search?q=%23lca2017&src=typd
|
||||
[2]:https://linux.conf.au/schedule/presentation/50/
|
||||
[3]:https://opensource.com/article/17/1/land-rights-documentation-Cadasta?rate=E8gJkvb1mbBXytsZiKA_ZtBCOvpi41nDSfz4R8tNnoc
|
||||
[4]:https://pixabay.com/en/tree-field-cornfield-nature-247122/
|
||||
[5]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]:http://cadasta.org/
|
||||
[7]:http://www.openstreetmap.org/
|
||||
[8]:http://www.missingmaps.org/
|
||||
[9]:https://opendatakit.org/
|
||||
[10]:http://fieldpapers.org/
|
||||
[11]:https://github.com/Cadasta/django-tutelary
|
||||
[12]:https://github.com/Cadasta/django-jsonattrs
|
@ -1,104 +0,0 @@
|
||||
Red Hat's OpenShift Container Platform Expands Cloud Options
|
||||
============================================================
|
||||
|
||||
Red Hat on Wednesday announced the general availability of Red Hat OpenShift Container Platform 3.4.
|
||||
|
||||

|
||||
|
||||
This latest version helps organizations better embrace new Linux container technologies that can deliver innovative business applications and services without sacrificing existing IT investments.
|
||||
|
||||
Red Hat OpenShift Container Platform 3.4 provides a platform for innovation without giving up existing mission-critical workloads. It offers dynamic storage provisioning for both traditional and cloud-native applications, as well as multitenant capabilities that can support multiple applications, teams and deployment processes in a hybrid cloud environment.
|
||||
|
||||
Today's enterprises must balance management of their existing application portfolios with the goal of making it easier for developers to build new applications, observed Brian Gracely, director of product strategy for OpenShift at Red Hat.
|
||||
|
||||
The new release focuses on three complex areas for enterprises: managing storage; isolating resources for multiple groups (multitenancy); and the ability to consistently run applications on multiple cloud environments (public or private).
|
||||
|
||||
"Red Hat OpenShift Container Platform 3.4 builds on the momentum of both the Kubernetes and Docker projects, which are helping developers use containers to modernize existing applications and build new cloud-native microservices," Gracely told LinuxInsider.
|
||||
|
||||
OpenShift Container Platform 3.4 makes storage provisioning easier for developers and operators, and it enhances how the platform can be used to provide multitenant resources to multiple groups within an organization. Additionally, it continues to codify the best practices needed to deploy a consistent container platform across any cloud environment, such as AWS, Azure, GCP, OpenStack or VMware.
|
||||
|
||||
### Pushes Cloud Benefits
|
||||
|
||||
The new platform advances the process of creating and deploying applications by addressing the growing storage needs of applications across the hybrid cloud for enterprises. It allows for coexistence of modern and future-forward workloads on a single, enterprise-ready platform.
|
||||
|
||||
The new OpenShift Container Platform and service gives Red Hat customers an easy way to adopt and use Google Cloud as a public or hybrid cloud environment, noted Charles King, principal analyst at [Pund-IT][1].
|
||||
|
||||
"It will be a welcome addition in many or most enterprise IT shops, especially those that are active employing or exploring container solutions," he told LinuxInsider.
|
||||
|
||||
"Since Red Hat will act as the service provider of the new offering, customers should also be able to seamlessly integrate OpenShift support with their other Red Hat products and services," King pointed out.
|
||||
|
||||
The new release also provides an enterprise-ready version of Kubernetes 1.4 and the Docker container runtime, which will help customers roll out new services more quickly with the backing of Red Hat Enterprise Linux.
|
||||
|
||||
OpenShift Container Platform 3.4 integrates architectures, processes and services to enable delivery of critical business applications, whether legacy or cloud-native, and containerized workloads.
|
||||
|
||||
### Open Source and Linux Innovation
|
||||
|
||||
Kubernetes is becoming the de facto standard for orchestrating and managing Linux containers. OpenShift is delivering the leading enterprise-ready platform built on Kubernetes, noted Red Hat's Gracely.
|
||||
|
||||
"Kubernetes is one of the fastest-growing open source projects, with contributors from cloud providers, independent software vendors and [individual and business] end-users," he said. "It has become a project that has done an excellent job of considering and addressing the needs of many different groups with many types of application needs."
|
||||
|
||||
Both Red Hat and Google are pushing for innovation. Both companies are among the market's most proactive and innovative supporters of open source and Linux solutions.
|
||||
|
||||
"The pair's collaboration on this new service is a no-brainer that could eventually lead to Red Hat and Google finding or creating further innovative open source offerings," said Pund-IT's King.
|
||||
|
||||
### Features and Benefits
|
||||
|
||||
Among the new capabilities in the latest version of OpenShift Container Platform:
|
||||
|
||||
* Next-level container storage with support for dynamic storage provisioning -- This allows multiple storage types and multitier storage exposure in Kubernetes;
|
||||
* Container-native storage enabled by Red Hat Gluster Storage -- This now supports dynamic provisioning and push button deployment for stateful and stateless applications;
|
||||
* Software-defined, highly available and scalable storage solution -- This provides access across on-premises and public cloud environments for more cost efficiency over traditional hardware-based or cloud-only storage services;
|
||||
* Enhanced multitenancy through more simplified management of projects -- This feature is powered by Kubernetes namespaces in a single Kubernetes cluster. Applications can run fully isolated and share resources on a single Kubernetes cluster in OpenShift Container Platform.
|
||||
|
||||
### More Supplements
|
||||
|
||||
The OpenShift Container Platform upgrade adds the capacity to search for projects and project details, manage project membership, and more via a more streamlined Web console. This capability facilitates working with multiple projects across dispersed teams.
|
||||
|
||||
Another enhancement is the multitenancy feature that provides application development teams with their own cloud-like application environment. It lets them build and deploy customer-facing or internal applications using DevOps processes that are isolated from one another.
|
||||
|
||||
Also available in the new release are new hybrid cloud reference architectures for running Red Hat OpenShift Container Platform on OpenStack, VMware, Amazon Web Services, Google Cloud Engine and Microsoft Azure. These guides help walk a user through deployment across public and private clouds, virtual machines and bare metal.
|
||||
|
||||
"It also drastically simplifies how developers can access storage resources, allowing developers to dynamically provision storage resources/capacity with the click of a button -- effectively self-service for developers. It also allows developers to feel confident that the resources required for their applications will be properly isolated from other resource needs in the platform," said Red Hat's Gracely.
|
||||
|
||||
### Orchestration Backbone
|
||||
|
||||
The foundation for Red Hat OpenShift Container Platform 3.4 is the open source Kubernetes Project community. Kubernetes 1.4 features alpha support for expanded cluster federation APIs.
|
||||
|
||||
It enables multiple clusters federated across a hybrid environment. Red Hat engineers view this feature as a key component to enabling hybrid cloud deployments in the enterprise.
|
||||
|
||||
The latest version of OpenShift is available now via the Red Hat Customer Portal. It offers community innovation as hardened, production-grade features.
|
||||
|
||||
### Ensuring Customer Health
|
||||
|
||||
Red Hat's platform is vital to the success of The Vitality Group's global initiative and reward program, according to CIO Neil Adamson.
|
||||
|
||||
This program is a key component of how the company envisions the future of health, he said.
|
||||
|
||||
"Advanced services for our customers can only be delivered by embracing next-generation technologies, particularly those provided through the open source communities that drive Linux containers, Kubernetes and IoT," said Adamson.
|
||||
|
||||
Red Hat's OpenShift Container Platform provides his company with the best of these communities while still delivering a stable, more secure foundation that helps it "reap the benefits of open source innovation while lessening the risks often inherent to emerging technologies."
|
||||
|
||||
The latest platform features will further support application development in the cloud. Container solutions are being adopted rapidly for many core IT tasks, including app development projects and processes, according to King, who noted that "being able to seamlessly deploy containers in a widely and easily accessible environment like Google Cloud should simplify development tasks."
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||

|
||||
|
||||
**Jack M. Germain** has been writing about computer technology since the early days of the Apple II and the PC. He still has his original IBM PC-Jr and a few other legacy DOS and Windows boxes. He left shareware programs behind for the open source world of the Linux desktop. He runs several versions of Windows and Linux OSes and often cannot decide whether to grab his tablet, netbook or Android smartphone instead of using his desktop or laptop gear. You can connect with him on [Google+][2].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxinsider.com/story/84239.html?rss=1
|
||||
|
||||
作者:[Jack M. Germain ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/116242401898170634809?rel=author
|
||||
[1]:http://www.pund-it.com/
|
||||
[2]:https://plus.google.com/116242401898170634809?rel=author
|
@ -1,216 +0,0 @@
|
||||
# Fedora 24 Gnome & HP Pavilion + Nvidia setup review
|
||||
|
||||
Recently, you may have come across my [Chapeau][1] review. This experiment prompted me to widen my Fedora family testing, and so I decided to try setting up [Fedora 24 Gnome][2] on my [HP][3] machine, a six-year-old laptop with 4 GB of RAM and an aging Nvidia card. Yes, Fedora 25 has since been released and I had it [tested][4] with delight. But we can still enjoy this little article now can we?
|
||||
|
||||
This review should complement - and contrast - my usual crop of testing on the notorious but capable [Lenovo G50][5] machine, purchased in 2015, so we have old versus new, but also the inevitable lack of proper Linux support for the [Realtek][6] network card on the newer box. We will then also check how well Fedora handles the Nvidia stack, test if Nouveau is a valid alternative, and of course, pimp the system to the max, using some of the beauty tricks we have witnessed in the Chapeau review. Should be more than interesting.
|
||||
|
||||

|
||||
|
||||
### Installation
|
||||
|
||||
Nothing special to report here. The system has a much simpler setup than the Lenovo laptop. The new machine comes with UEFI, Secure Boot, 1TB disk with a GPT setup partitioned sixteen different ways, with Windows 10 and some 6-7 Linux distros on it. In comparison, the BIOS-fueled Pavilion only dual boots. Prior to this review, it was running Linux Mint 17.3 [Rosa Xfce][7], but it used to have all sorts of Ubuntu children on it, and I had used it quite extensively for arguably funny [video processing][8] and all sorts of games. The home partition dates back to the early setup, and has remained such since, including a lot of legacy config and many desktop environments.
|
||||
|
||||

|
||||
|
||||
I was able to boot from a USB drive, although I did use the Fedora tool to create the live media. I've never had any problems booting on this host, to the best of my memory, a far cry (not the [game][9], just an expression, hi hi) from the Lenovo experience. There, before a BIOS update, Fedora would [not even run][10], and a large number of distros used to [struggle][11] until very recently. All part of my great disappointment adventure with Linux.
|
||||
|
||||
Anyhow, this procedure went without any fuss. Fedora 24 took control of the bootloader, managing itself and the resident Windows 7 installation. If you're interested in more details on how to dual-boot, you might want to check these:
|
||||
|
||||
[Ubuntu & Windows 7][12] dual-boot guide
|
||||
|
||||
[Xubuntu & Windows 7][13] dual-boot guide - same same but different
|
||||
|
||||
[CentOS 7 & Windows 7][14] dual-boot guide - fairly similar to our Fedora attempt
|
||||
|
||||
[Ubuntu & Windows 8][15] dual-boot guide - this one covers a UEFI setup, too
|
||||
|
||||
### It's pimping time!
|
||||
|
||||
My Fedora [pimping guide][16] has it all. I setup RPM Fusion Free and Non-Free, then installed about 700 MB worth of media codecs, plugins and extra software, including Steam, Skype, GIMP, VLC, Gnome Tweak Tool, Chrome, several other helper utilities, and more.
|
||||
|
||||
On the aesthetics side, I grabbed both Faenza and Moka icons, and configured half a dozen Gnome [extensions][17], including the mandatory [Dash to Dock][18], which really helps transform this desktop environment into a usable product.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
What is that green icon on the right side? 'Tis a spoiler of things to be, that is.
|
||||
|
||||
I also had no problems with my smartphones, [Ubuntu Phone][19] or the[iPhone][20]. Both setups worked fine, and this also brings the annoyance with the Apple device on Chapeau 24 into bad spotlight. Rhythmbox would not play from any external media, though. Fail.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
This is a teaser, implying wossname Nvidia thingie; well here we go.
|
||||
|
||||
### Nvidia setup
|
||||
|
||||
This is a tricky one. First, take a look at my generic [tutorial][21] on this topic. Then, take a look at my recent [Fedora 23][22] [experience][23] on this topic. Unlike Ubuntu, Red Hat distros do not quite like the whole pre-compiled setup. However, just to see whether things have changed in any way, I did use a helper tool called easyLife to set up the drivers. I've talked about this utility and Fedy in an OCS-Mag [article][24], and how you can use them to make your Fedora experience more colorful. Bottom line: good for lots of things, but not for drivers.
|
||||
|
||||

|
||||
|
||||
Yes, this resulted in a broken system. I had to manually install the drivers. Luckily, I had installed the kernel sources and headers, as well as other necessary build tools, gcc and make, beforehand, to prepare for this kind of scenario. Be warned, kids. In the end, the official way is the best.
|
||||
|
||||
### Nouveau vs Nvidia, which is faster?
|
||||
|
||||
I did something you would not really expect. I benchmarked the actual performance of the graphics stack with the Nouveau driver first and then the closed-source blob, using the Unigine Heaven tool. This gives clear results on how the two compare.
|
||||
|
||||

|
||||
|
||||
Remember, this is an ancient laptop, and it does not stack well against modern tools, so you will not be surprised to learn that Heaven reported a staggering 1 FPS for Nouveau, and it took me like 5 minutes before the system actually responded, and I was able to quit the benchmark.
|
||||
|
||||

|
||||
|
||||
Nvidia gave much better results. To begin with, I was able to use the system while testing, and Heaven responded to mouse clicks and key strokes, all the while reporting a very humble 5-6 FPS, which means it was roughly 500% more efficient than the Nouveau driver. That tells you all you need to know, ladies and gentlemen.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Also, Steam would not run at all with Nouveau, so there's that to consider, too. Funny how system requirements creep up over time. I used to play, I mean test [Call of Duty][25], a highly mediocre and arcade-like shooter on this box on the highest settings, but that feat feels like a completely different era.
|
||||
|
||||

|
||||
|
||||
### Hardware compatibility
|
||||
|
||||
Things were quite all right overall. All of the Fn buttons worked fine, and so did the web camera. Power management also did its thing well, dimming the screen and whatnot, but we cannot really judge the battery life, as the cells are six years old now and quite worn out. They only give about 40 minutes of juice in the best case.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Bluetooth did not work at first, because crucial packages were missing.
|
||||
|
||||

|
||||
|
||||
You can resolve the issue using dnf:
|
||||
|
||||
dnf install blueman bluez
|
||||
|
||||

|
||||
|
||||
### Suspend & resume
|
||||
|
||||
No issues, even with the Nvidia drivers. The whole sequence was quick and smooth, about 2-3 seconds each direction, into the land of sweet dreams and out of it. I do recall some problems with this in the past, but not any more. Happy sailing.
|
||||
|
||||
### Resource utilization
|
||||
|
||||
We can again compare Nouveau with Nvidia. But first, I had to sort out the swap partition setup manually, as Fedora refused to activate it. This is a big fail, and this happens consistently. Anyhow, the resource utilization with either one driver was almost identical. Both tolled a hefty 1.2 GB of RAM, and CPU ticked at about 2-3%, which is not really surprising, given the age of this machine. I did not see any big noise or heat difference the way we would witness it in the past, which is a testament to the improvements in the open-source driver, even though it fails on some of the advanced graphics logic required from it. But for normal use, non-gaming use, it behaves fairly well.
|
||||
|
||||

|
||||
|
||||
### Problems
|
||||
|
||||
Well, I observed some interesting issues during my testing. SELinux complained about legitimate processes a few times, and this really annoys me. Now to troubleshoot this, all you need to do is expand the alert, check the details, and then vomit. Why would anyone let ordinary users ever see this. Why?
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
SELinux is preventing totem-video-thu from write access on the directory gstreamer-1.0.
|
||||
|
||||
***** Plugin catchall_labels (83.8 confidence) suggests *****
|
||||
|
||||
If you want to allow totem-video-thu to have write access on the gstreamer-1.0 directory
|
||||
Then you need to change the label on gstreamer-1.0
|
||||
Do
|
||||
# semanage fcontext -a -t FILE_TYPE 'gstreamer-1.0'
|
||||
where FILE_TYPE is one of the following: cache_home_t, gstreamer_home_t, texlive_home_t, thumb_home_t, thumb_tmp_t, thumb_tmpfs_t, tmp_t, tmpfs_t, user_fonts_cache_t, user_home_dir_t, user_tmp_t.
|
||||
Then execute:
|
||||
restorecon -v 'gstreamer-1.0'
|
||||
|
||||
I want to execute something else, because hey, let us let developers be in charge of how things should be done. They know [best][26], right! This kind of garbage is what makes zombie apocalypses happen, when you miscode the safety lock on a lab confinement.
|
||||
|
||||
### Other observations
|
||||
|
||||
Exploring the system with gconf-editor and dconf-editor, I found tons of leftover settings from my old Gnome 2, Xfce and Cinnamon setups. One of the weird things was that Nemo would create, or rather restore, several desktop icons every time I launched it, and it did not cooperate with the global settings I configured through the Tweak Tool. In the end, I had to resort to some command line witchcraft:
|
||||
|
||||
gsettings set org.nemo.desktop home-icon-visible false
|
||||
gsettings set org.nemo.desktop trash-icon-visible false
|
||||
gsettings set org.nemo.desktop computer-icon-visible false
|
||||
|
||||
### Gallery
|
||||
|
||||
Finally, some sweet screenshots:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### Conclusion
|
||||
|
||||
This was an interesting ordeal. It took me about four hours to finish the configuration and polish the system, the maniacal Fedora update that always runs in the deep hundreds and sometimes even thousands of packages, the graphics stack setup, and finally, all the gloss and trim needed to have a functional machine.
|
||||
|
||||
All in all, it works well. Fedora proved itself to be an adequate choice for the old HP machine, with decent performance and responsiveness, good hardware compatibility, fine aesthetics and functionality, once the extras are added, and only a small number of issues, some related to my laptop usage legacy. Not bad. Sure, the system could be faster, and Gnome isn't the best choice for olden hardware. But then, for something that was born in 2010, the HP laptop handles this desktop environment with grace, and it looks the part. Just proves that Red Hat makes a lot of sense once you release its essential oils and let the fragrance of extra software and codecs sweep you. It is your time to be enthused about this and commence your own testing.
|
||||
|
||||
Cheers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
|
||||
|
||||
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
|
||||
|
||||
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
|
||||
|
||||
Please see my full list of open-source projects, publications and patents, just scroll down.
|
||||
|
||||
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
|
||||
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: http://www.dedoimedo.com/computers/hp-pavilion-fedora-24.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/faq.html
|
||||
|
||||
[1]:http://www.dedoimedo.com/computers/chapeau-24.html
|
||||
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[3]:http://www.dedoimedo.com/computers/my-new-new-laptop.html
|
||||
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
|
||||
[5]:http://www.dedoimedo.com/computers/lenovo-g50-review.html
|
||||
[6]:http://www.dedoimedo.com/computers/ubuntu-xerus-realtek-bug.html
|
||||
[7]:http://www.dedoimedo.com/computers/linux-mint-rosa-xfce.html
|
||||
[8]:http://www.dedoimedo.com/computers/frankenstein-media.html
|
||||
[9]:http://www.dedoimedo.com/games/far-cry-4-review.html
|
||||
[10]:http://www.dedoimedo.com/computers/lenovo-g50-fedora.html
|
||||
[11]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
|
||||
[12]:http://www.dedoimedo.com/computers/dual-boot-windows-7-ubuntu.html
|
||||
[13]:http://www.dedoimedo.com/computers/dual-boot-windows-7-xubuntu.html
|
||||
[14]:http://www.dedoimedo.com/computers/dual-boot-windows-7-centos-7.html
|
||||
[15]:http://www.dedoimedo.com/computers/dual-boot-windows-8-ubuntu.html
|
||||
[16]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
|
||||
[17]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
|
||||
[18]:http://www.dedoimedo.com/computers/gnome-3-dash.html
|
||||
[19]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
|
||||
[20]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
|
||||
[21]:http://www.dedoimedo.com/computers/fedora-nvidia-guide.html
|
||||
[22]:http://www.dedoimedo.com/computers/fedora-23-nvidia.html
|
||||
[23]:http://www.dedoimedo.com/computers/fedora-23-nvidia-steam.html
|
||||
[24]:http://www.ocsmag.com/2015/06/22/you-can-leave-your-fedora-on/
|
||||
[25]:http://www.dedoimedo.com/games/cod-mw2.html
|
||||
[26]:http://www.ocsmag.com/2016/10/19/systemd-progress-through-complexity/
|
@ -1,631 +0,0 @@
|
||||
How to install OTRS (OpenSource Trouble Ticket System) on CentOS 7
|
||||
============================================================
|
||||
|
||||
### On this page
|
||||
|
||||
1. [The Environment][1]
|
||||
2. [Preparation][2]
|
||||
3. [Install MariaDB on Centos 7][3]
|
||||
4. [Install EPEL ][4]
|
||||
5. [Install OTRS][5]
|
||||
6. [Configure OTRS on CentOS 7][6]
|
||||
|
||||
OTRS (Open-source Trouble Ticket System) is sophisticated open source software used by companies to improve their operations related to customer support, help desks, call centers and more. OTRS is written in Perl and provides the following important features:
|
||||
|
||||
* Customers can register and create/interact with a ticket via the customer portal and by email, phone, and fax with each queue (the attendants'/technicians' mailbox).
|
||||
* Tickets can be managed by priority, assignment, transmission and follow-up. A ticket can be split or merged, bulk actions can be applied, links between tickets can be created, and notifications can be set. Services can be configured through the service catalog.
|
||||
* To increase team capacity, auto emails (automatic answers), text templates and signatures can be configured. The system supports notes and attachments on tickets.
|
||||
* Other capabilities include statistics and reports (CSV/PDF), SLAs and many other features.
|
||||
|
||||
### The Environment
|
||||
|
||||
This article covers the OTRS 5 installation and basic configuration. It was written based on the following environment: a VirtualBox VM with CentOS 7 Minimal, 2 GB RAM, an 8 GB HD and 2 network interfaces (host-only and NAT).
|
||||
|
||||
### Preparation
|
||||
|
||||
Assuming that you use a fresh installation of CentOS 7 Minimal, before installing OTRS, run the following command to update the system and install additional packages:
|
||||
|
||||
```
|
||||
yum update
|
||||
```
|
||||
|
||||
```
Transaction Summary
================================================================================
Install   1 Package
Upgrade  39 Packages

Total download size: 91 M
Is this ok [y/d/N]:
```

Confirm the transaction with **y**.
|
||||
|
||||
Install a text editor or use VI. In this article we use VIM, run the following command to install it:
|
||||
|
||||
```
|
||||
yum install vim
|
||||
```
|
||||
|
||||
To install the WGET package, run the following command:
|
||||
|
||||
```
|
||||
yum install wget
|
||||
```
|
||||
|
||||
To configure the CentOS 7 network, run the following command to open the NMTUI (NetworkManager Text User Interface) tool, and edit the interfaces and hostname if necessary:
|
||||
|
||||
```
|
||||
nmtui
|
||||
```
|
||||
|
||||
[
|
||||

|
||||
][7]
|
||||
|
||||
After setting up the network and hostname on CentOS 7, run the following command to apply the changes:
|
||||
|
||||
```
|
||||
systemctl restart network
|
||||
```
|
||||
|
||||
To verify the network information, run the following command:
|
||||
|
||||
```
|
||||
ip addr
|
||||
```
|
||||
|
||||
The output looks like this on my system:
|
||||
|
||||
```
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 ::1/128 scope host
|
||||
valid_lft forever preferred_lft forever
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
|
||||
link/ether 08:00:27:67:bc:73 brd ff:ff:ff:ff:ff:ff
|
||||
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
|
||||
valid_lft 84631sec preferred_lft 84631sec
|
||||
inet6 fe80::9e25:c982:1091:90eb/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
|
||||
link/ether 08:00:27:68:88:f3 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.56.101/24 brd 192.168.56.255 scope global dynamic enp0s8
|
||||
valid_lft 1044sec preferred_lft 1044sec
|
||||
inet6 fe80::a00:27ff:fe68:88f3/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
To disable SELinux (Security-Enhanced Linux) on CentOS 7, edit the following config file:
|
||||
|
||||
```
|
||||
vim /etc/selinux/config
|
||||
```
|
||||
|
||||
```
|
||||
"/etc/selinux/config" 14L, 547C# This file controls the state of SELinux on the system.
|
||||
# SELINUX= can take one of these three values:
|
||||
# enforcing - SELinux security policy is enforced.
|
||||
# permissive - SELinux prints warnings instead of enforcing.
|
||||
# disabled - No SELinux policy is loaded.
|
||||
SELINUX=enforcing
|
||||
# SELINUXTYPE= can take one of three values:
|
||||
# targeted - Targeted processes are protected,
|
||||
# minimum - Modification of targeted policy. Only selected processes are protected.
|
||||
# mls - Multi Level Security protection.
|
||||
SELINUXTYPE=targeted
|
||||
```
|
||||
|
||||
Change the value of the **SELINUX** directive from **enforcing** to **disabled**, save the file and reboot the server.
|
||||
|
||||
To check the status of SELinux on Centos 7, run the following command:
|
||||
|
||||
```
|
||||
getenforce
|
||||
```
|
||||
|
||||
The output must be:
|
||||
|
||||
```
|
||||
Disabled
|
||||
```
|
||||
|
||||
### Install MariaDB on Centos 7
|
||||
|
||||
To install MariaDB on Centos 7, run the following command:
|
||||
|
||||
```
|
||||
yum -y install mariadb-server
|
||||
```
Create a file named **zotrs.cnf** in the directory **/etc/my.cnf.d/**. To create and edit the file, run the following command:

```
vim /etc/my.cnf.d/zotrs.cnf
```
Fill the file with the following content and save it:

```
max_allowed_packet = 20M
query_cache_size = 32M
innodb_log_file_size = 256M
```
To start MariaDB, run the following command:

```
systemctl start mariadb
```
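To confirm that the tuning values from zotrs.cnf are actually in effect, you can query them through the mysql client, which is installed along with mariadb-server (right after a fresh installation, the root account has no password yet):

```
# Values are reported in bytes (e.g. 20M = 20971520).
mysql -u root -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
mysql -u root -e "SHOW VARIABLES LIKE 'query_cache_size';"
mysql -u root -e "SHOW VARIABLES LIKE 'innodb_log_file_size';"
```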
To increase the security of MariaDB, run the following command:

```
/usr/bin/mysql_secure_installation
```
Set the options according to the following output:

```
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): <Press Enter>
```
```
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] <Press Y>
```
Set the root password:

```
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] <Press Y>
```
```
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] <Choose according to your needs>
```
```
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] <Press Y>
```
```
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] <Press Y>
```
```
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
```
Set up MariaDB to start automatically at boot time:

```
systemctl enable mariadb.service
```
To download OTRS, run the following command:

```
wget http://ftp.otrs.org/pub/otrs/RPMS/rhel/7/otrs-5.0.15-01.noarch.rpm
```
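Before installing, you can optionally inspect the downloaded package to make sure the transfer completed correctly (a quick sanity check with rpm's query mode):

```
# Print the package metadata without installing it.
rpm -qpi otrs-5.0.15-01.noarch.rpm
```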
### Install EPEL

Before we install OTRS, set up the EPEL repository on CentOS 7. Run the following command to do so:

```
yum -y install http://mirror.globo.com/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
```
### Install OTRS

Install OTRS with the following command:

```
yum install --nogpgcheck otrs-5.0.15-01.noarch.rpm
```
A list of software packages will be installed, e.g. Apache; all dependencies will be resolved automatically. At the end of the output, press y:

```
Transaction Summary
================================================================================
Install  1 Package (+143 Dependent packages)

Total size: 148 M
Total download size: 23 M
Installed size: 181 M
Is this ok [y/d/N]: y
```
To start Apache (httpd), run the following command:

```
systemctl start httpd.service
```
To enable Apache (httpd) startup with systemd on CentOS 7, run the following command:

```
systemctl enable httpd.service
```
Now enable SSL in Apache and configure a self-signed certificate. To install the mod_ssl module for the Apache HTTP Server, run the following command:

```
yum -y install mod_ssl
```
To generate a self-signed SSL certificate, go to the following directory:

```
cd /etc/pki/tls/certs/
```
And run the following command to generate the key (centos7.key is the name of my certificate, feel free to change it):

```
make centos7.key
```

```
umask 77 ; \
/usr/bin/openssl genrsa -aes128 2048 > centos7.key
Generating RSA private key, 2048 bit long modulus
.+++
.........................................................................................+++
e is 65537 (0x10001)
Enter pass phrase: <Insert your own password>
Verifying - Enter pass phrase: <Retype the password>
```
To remove the passphrase from the server SSL private key with OpenSSL (so that Apache can start without prompting for it), run the following command:

```
openssl rsa -in centos7.key -out centos7.key
```

```
Enter pass phrase for centos7.key: <Type the password>
writing RSA key
```
Run the following command to create the CSR (Certificate Signing Request) file (centos7.csr is the name of my certificate, feel free to change it):

```
make centos7.csr
```
Answer the questions according to your needs:

```
umask 77 ; \
/usr/bin/openssl req -utf8 -new -key centos7.key -out centos7.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----

Country Name (2 letter code) [XX]:

State or Province Name (full name) []:

Locality Name (eg, city) [Default City]:

Organization Name (eg, company) [Default Company Ltd]:

Organizational Unit Name (eg, section) []:

Common Name (eg, your name or your server's hostname) []:

Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: <press enter>

An optional company name []: <press enter>
```
Now sign the CSR and generate the self-signed certificate for the server with the OpenSSL tool:

```
openssl x509 -in centos7.csr -out centos7.crt -req -signkey centos7.key
```

The output is:

```
Signature ok
subject=/C=BR/ST=SP/L=Campinas/O=Centos7/OU=Centos7/CN=centos7.local
Getting Private key
```
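As a side note, if you do not need the intermediate CSR, the key and the self-signed certificate can also be produced in a single, non-interactive openssl call. This is just a sketch using the file names from this tutorial; the subject string is an assumption you should adapt to your own hostname:

```
# One-step self-signed certificate, valid for 365 days; -nodes leaves the key without a passphrase.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/pki/tls/certs/centos7.key \
  -out /etc/pki/tls/certs/centos7.crt \
  -subj "/CN=centos7.local"
```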
Before we edit the ssl.conf file, make a copy of the file with the following command:

```
cp /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.old
```
Then edit the file:

```
vim /etc/httpd/conf.d/ssl.conf
```
Find the following directives, uncomment each one, and edit them like this (note that **SSLCertificateFile** must point to the signed certificate, centos7.crt, not to the CSR):

```
SSLCertificateKeyFile /etc/pki/tls/certs/centos7.key

SSLCertificateFile /etc/pki/tls/certs/centos7.crt

SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2

ServerName centos7.local:443
```
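Before restarting Apache, it is worth validating the configuration syntax so that a typo in ssl.conf does not take the web server down:

```
# Should print "Syntax OK"; otherwise it names the offending file and line.
apachectl configtest
```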
Restart Apache with the following command:

```
systemctl restart httpd
```
To force OTRS to run in https mode, edit the following file:

```
vim /etc/httpd/conf/httpd.conf
```
At the end of the file, uncomment the following directive:

```
IncludeOptional conf.d/*.conf
```
Edit the file zzz_otrs.conf:

```
vim /etc/httpd/conf.d/zzz_otrs.conf
```
After line 26 (before the line for the module mod_version.c), add the following directives:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
```
Restart Apache:

```
systemctl restart httpd
```
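After the restart, you can verify that plain HTTP requests are redirected to HTTPS (centos7.local is this tutorial's example hostname; use your own):

```
# The response should be a 302 redirect with a Location: https://... header.
curl -I http://centos7.local/otrs/index.pl
```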
To use extended features in OTRS, we have to install some Perl modules. Run the following command to install them:

```
yum -y install "perl(Text::CSV_XS)" "perl(Crypt::Eksblowfish::Bcrypt)" "perl(YAML::XS)" "perl(JSON::XS)" "perl(Encode::HanExtra)" "perl(Mail::IMAPClient)" "perl(ModPerl::Util)"
```
The OTRS system ships with a tool to check the Perl modules. Run it like this to verify the system requirements:

```
cd /opt/otrs/bin
```

and run:

```
./otrs.CheckModules.pl
```
The output for our configuration must be:

```
  o Apache::DBI......................ok (v1.12)
  o Apache2::Reload..................ok (v0.13)
  o Archive::Tar.....................ok (v1.92)
  o Archive::Zip.....................ok (v1.30)
  o Crypt::Eksblowfish::Bcrypt.......ok (v0.009)
  o Crypt::SSLeay....................ok (v0.64)
  o Date::Format.....................ok (v2.24)
  o DBI..............................ok (v1.627)
  o DBD::mysql.......................ok (v4.023)
  o DBD::ODBC........................Not installed! (optional - Required to connect to a MS-SQL database.)
  o DBD::Oracle......................Not installed! (optional - Required to connect to a Oracle database.)
  o DBD::Pg..........................Not installed! Use: 'yum install "perl(DBD::Pg)"' (optional - Required to connect to a PostgreSQL database.)
  o Digest::SHA......................ok (v5.85)
  o Encode::HanExtra.................ok (v0.23)
  o IO::Socket::SSL..................ok (v1.94)
  o JSON::XS.........................ok (v3.01)
  o List::Util::XS...................ok (v1.27)
  o LWP::UserAgent...................ok (v6.13)
  o Mail::IMAPClient.................ok (v3.37)
    o IO::Socket::SSL................ok (v1.94)
  o ModPerl::Util....................ok (v2.000010)
  o Net::DNS.........................ok (v0.72)
  o Net::LDAP........................ok (v0.56)
  o Template.........................ok (v2.24)
  o Template::Stash::XS..............ok (undef)
  o Text::CSV_XS.....................ok (v1.00)
  o Time::HiRes......................ok (v1.9725)
  o Time::Piece......................ok (v1.20_01)
  o XML::LibXML......................ok (v2.0018)
  o XML::LibXSLT.....................ok (v1.80)
  o XML::Parser......................ok (v2.41)
  o YAML::XS.........................ok (v0.54)
```
To start the OTRS daemon with the "otrs" user, run the following command:

```
su -c "/opt/otrs/bin/otrs.Daemon.pl start" -s /bin/bash otrs
```
To disable the CentOS 7 firewall, run the following command:

```
systemctl stop firewalld
```
To prevent the CentOS 7 firewall from starting automatically at boot, run:

```
systemctl disable firewalld.service
```
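Disabling the firewall is the simplest option for a lab setup. If you would rather keep firewalld running, opening the web ports is an alternative (a sketch using firewalld's predefined http/https services):

```
# Permanently allow HTTP and HTTPS, then apply the change.
firewall-cmd --permanent --add-service=http --add-service=https
firewall-cmd --reload
```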
Start the OTRS cron jobs (note that Cron.sh must be run as the otrs user):

```
su -c "/opt/otrs/bin/Cron.sh start" -s /bin/bash otrs
```

The output of the command must be:

```
/opt/otrs/bin
Cron.sh - start/stop OTRS cronjobs
Copyright (C) 2001-2012 OTRS AG, http://otrs.org/
(using /opt/otrs) done
```
If you want to check the OTRS daemon status, run the following command:

```
su -c "/opt/otrs/bin/otrs.Daemon.pl status" -s /bin/bash otrs
```
Now configure OTRS in the crontab. Switch from the root user to otrs and edit the crontab:

```
su otrs

crontab -e
```
Fill the crontab with the following content and save it:

```
# --
# Copyright (C) 2001-2016 OTRS AG, http://otrs.com/
# --
# This software comes with ABSOLUTELY NO WARRANTY. For details, see
# the enclosed file COPYING for license information (AGPL). If you
# did not receive this file, see http://www.gnu.org/licenses/agpl.txt.
# --

# Who gets the cron emails?
MAILTO="root@localhost"

# check OTRS daemon status
*/5 * * * * $HOME/bin/otrs.Daemon.pl start >> /dev/null
```
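To confirm that the entry was saved, you can list the otrs user's crontab from the root account:

```
# -u requires root privileges.
crontab -l -u otrs
```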
### Configure OTRS on CentOS 7

Open a web browser and go to the URL [https://centos7.local/otrs/installer.pl][8]. Remember, centos7.local is the name of my server; insert your own hostname or IP address. The first screen shows the four steps to complete the OTRS installation; press Next.
[![The OTRS web installer: start screen]][9]
License: read and accept the license to continue:

[![The OTRS license screen]][10]
Database Selection: select the option **MySQL**; as the Install Type, mark the "Create a new database for OTRS" option and click the Next button:

[![OTRS database selection]][11]
Configure MySQL: fill in the User, Password, and Host fields (remember the data from the MariaDB configuration that we made earlier) and press "Check database settings":

[![OTRS MySQL configuration]][12]
The OTRS installer will create the database in MariaDB; press the Next button:

[![OTRS database creation]][13]
OTRS database created successfully:

[![OTRS database created successfully]][14]
Configure system settings: fill in the fields with your own information and press Next:

[![OTRS system settings]][15]
OTRS e-mail configuration: fill in the fields according to your e-mail server. In my setup, I use SMTPTLS on port 587 for outbound e-mail and POP3 for inbound e-mail; you will need an e-mail account. Check the mail configuration, or skip this step:

[![OTRS mail configuration]][16]
To finish, take note of the user and password for accessing OTRS; after logging in, you can change the password:

[![OTRS installation finished]][17]
The OTRS login URL is [https://centos7.local/otrs/index.pl][18]. Remember, centos7.local is the name of my server; insert your own hostname or IP address:

[![The OTRS login page]][19]
Log in to OTRS:

[![Logged in to OTRS]][20]
OTRS is installed and ready to be configured with your support rules or business model.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/

作者:[Alexandre Costa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/
[1]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#thenbspenvironment
[2]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#preparation
[3]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-mariadb-on-centos-
[4]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-epelnbsp
[5]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-otrs
[6]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#configure-otrs-on-centos-
[7]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.jpg
[8]:http://centos7.local/otrs/installer.pl
[9]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.13_.jpg
[10]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.14_.jpg
[11]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.15_.jpg
[12]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.16_.jpg
[13]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.17_.jpg
[14]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.18_.jpg
[15]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.19_.jpg
[16]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.21_.jpg
[17]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.23_.jpg
[18]:https://centos7.local/otrs/index.pl
[19]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.25_.jpg
[20]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.27_.jpg
@ -1,83 +0,0 @@
Dedicated engineering team in South Africa deploys open source tools, saves lives
============================================================

Image by: opensource.com
In 2006, a groundbreaking TED talk used statistics to reveal surprising [insights about the developing world][2], including how many people in South Africa have HIV despite free and available anti-retroviral drugs.

[Gustav Praekelt][3], founder of [Praekelt.org][4], heard this TED talk and began tenaciously calling a local hospital to convince them to start an SMS program that would promote anti-retrovirals. The program that resulted from those calls became [txtAlert][5]—a successful and widely recognized mobile health program that dramatically improves medical appointment adherence and creates a free channel for patients to communicate with the hospital.

Today, nearly a decade later, the organization that Gustav founded in 2007, Praekelt.org, continues to harness the power of mobile technology.

The global nonprofit organization uses open source technologies to deliver essential information and vital services to millions of people around the world, particularly in Africa. We are deeply committed to the idea that our software innovations should be shared with the development community that made delivering our products possible. By participating and giving back to this community we support and sustain the rich ecosystem of tools and products that they have developed to improve the lives of people around the world.
Praekelt.org is a supporter of the [Principles for Digital Development][6] and in particular [Cause 6][7], which states:

* Adopt and expand existing open standards.
* Open data and functionalities and expose them in documented Application Programming Interfaces (APIs) where use by a larger community is possible.
* Invest in software as a public good.
* Develop software to be open source by default with the code made available in public repositories and supported through developer communities.
A great example of this can be found in our original work to make population-scale messaging possible in the majority world. We had and continue to have success with txtAlert in South Africa, but despite considerable interest, replicating this success in other places has been very challenging. The integration work required for each new messaging service provider demands too much customization.

To solve this, we created [Vumi][8], a software library that provides a single point of integration for messaging communication channels. It abstracts away all of the differences that require the customized integrations and provides a single consistent API to speak to all of them. The result is a dramatic increase in the re-use of both integrations and applications, because they only need to be written once and can be used widely.

Vumi provides the means of integrations, and this past year, in collaboration with UNICEF, we launched [Junebug][9], an application server that provides APIs to launch Vumi integrations, enabling direct messaging system integrations in both cloud- and on-premise-based scenarios. Junebug now powers national-scale maternal health programs in South Africa, Nigeria, and Uganda, delivering essential information for expecting women and mothers. It also provides SMS and [Unstructured Supplementary Service Data][10] (USSD) access to vital services, such as national helpdesks and FAQ services.

These systems have processed over 375 million real-time messages in the last year.

We are a relatively small engineering team based out of South Africa. We could not fathom developing these services were we not standing on the shoulders of giants. All of the services we provide or build on are available as open source software.

Our language of choice is [Python][11], which enables us to express our ideas in code succinctly and in a way that is both readable and maintainable. Our messaging systems are built using [Twisted][12], an excellent event-driven network programming framework built using Python. [Molo][13], our web publishing platform, is built using [Django][14], and the wonderful open source [Wagtail CMS][15] is built by our friends at [Torchbox][16].

Our three-person site reliability engineering team is able to run over a thousand applications in production by relying on Mesosphere's [Marathon][17] for [Apache Mesos][18]. We have recently released [Marathon Acme][19], which enables automatic SSL/TLS certificate provisioning via [LetsEncrypt][20] for Marathon's load balancer, ensuring our services are secure.

Our engineering team is distributed, and the workflow enabled by [Git][21] allows us to develop software in a reliable fashion. For example, by using test-driven development we are able to automate our deploys. Using these open source tools and systems, we've averaged 21 automated deploys a day over the course of 2016. Developing software in an open environment is easier and more effective. Our work would have been significantly more difficult had there not been such an active and vibrant community on which to build.

We are excited to be part of these developments in open source technology integration. As a mission-driven organization, we are deeply committed to continue [sharing][22] [what we learn][23] and develop. If you are interested in joining our team, [apply here][24]. Our open source repositories have documented OS licenses and contribution guidelines. We welcome any community contributions. Please email us at [dev@praekelt.org][25].
--------------------------------------------------------------------------------

作者简介:

Simon de Haan - Simon de Haan is the Chief Engineer at Praekelt Foundation and has the rare talent to demystify software systems and platforms for nonengineers. He was the team lead on Praekelt Foundation's Vumi platform, an open source messaging platform that allows for interactive conversations over SMS, USSD, Gtalk and other basic technologies at low cost and at population scale in the majority world. Vumi is the technology that powers various groundbreaking initiatives such as Wikipedia Text, PeaceTXT,

--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/open-source-tools-south-africa

作者:[Simon de Haan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/praekelt
[1]:https://opensource.com/article/17/2/open-source-tools-south-africa?rate=XZZ1Mtc79KokPszccwi_HiEkWMJyoJZghkUumJTwIiI
[2]:https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen
[3]:http://www.praekelt.org/
[4]:http://www.praekelt.org/
[5]:http://txtalert.praekeltfoundation.org/bookings/about-txtalert/
[6]:http://digitalprinciples.org/
[7]:http://digitalprinciples.org/use-open-standards-open-data-open-source-and-open-innovation/
[8]:https://github.com/praekelt/vumi
[9]:http://junebug.praekelt.org/
[10]:https://en.wikipedia.org/wiki/Unstructured_Supplementary_Service_Data
[11]:https://www.python.org/
[12]:https://en.wikipedia.org/wiki/Twisted_(software)
[13]:http://molo.readthedocs.io/
[14]:http://www.djangoproject.com/
[15]:https://wagtail.io/
[16]:https://torchbox.com/work/wagtail/
[17]:https://mesosphere.github.io/marathon/
[18]:http://mesos.apache.org/
[19]:https://github.com/praekeltfoundation/marathon-acme
[20]:https://letsencrypt.org/
[21]:http://git-scm.org/
[22]:https://medium.com/@praekeltorg
[23]:https://medium.com/@praekeltorg
[24]:http://www.praekelt.org/careers/
[25]:mailto:dev@praekelt.org
[26]:https://opensource.com/user/108011/feed
[27]:https://opensource.com/users/praekelt
@ -1,544 +0,0 @@
Blocking of international spam botnets with a Postfix plugin
============================================================

### On this page

1. [Introduction][1]
2. [How international botnet works][2]
3. [Defending against botnet spammers][3]
4. [Installation][4]
This article contains an analysis of international spam botnets, a solution for blocking them, and a tutorial on installing the anti-spam plugin for the postfix firewall, postfwd, in the Postfix MTA.

### Introduction

One of the most important and hardest tasks for every company that provides mail services is staying out of the mail blacklists.

If a mail domain appears in one of the mail domain blacklists, other mail servers will stop accepting and relaying its e-mails. This will practically ban the domain from the majority of mail providers and prevents the provider's customers from sending e-mails. There is only one thing a mail provider can do afterwards: ask the blacklist providers for removal from the list, or change the IP addresses and domain names of its mail servers.

Getting into a mail blacklist is very easy when a mail provider does not have protection against spammers. Only one compromised customer mail account from which a hacker starts sending spam is needed for the domain to appear in a blacklist.

There are several ways hackers send spam from compromised mail accounts. In this article, I would like to show you how to completely mitigate international botnet spammers, who are characterized by logging into mail accounts from multiple IP addresses located in multiple countries worldwide.
### How international botnets work

Hackers who use an international botnet for spamming operate very efficiently and are not easy to track. I started to analyze the behaviour of such an international spam botnet in October 2016 and implemented a plugin for the **postfix firewall** - **postfwd** - which intelligently bans all spammers from international botnets.

The first step was the analysis of the behavior of an international spam botnet, done by tracking one compromised mail account. I created a simple bash one-liner to select the sasl login IP addresses of the compromised mail account from the postfwd login mail logs.
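The exact one-liner is not important, but as a sketch it could look like the following, assuming Postfix logs to /var/log/mail.log and the compromised account is user@example.com (both are placeholders):

```
# Extract client IPs of sasl logins for one account and count logins per IP.
grep 'sasl_username=user@example.com' /var/log/mail.log \
  | grep -oE 'client=[^ ]*\[[0-9.]+\]' \
  | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' \
  | sort | uniq -c | sort -rn
```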
**The data in the following table was dumped 90 minutes after the compromise of one mail account and contains these attributes:**

* IP addresses from which the hacker logged into the account (ip_address)
* Corresponding country codes of the IP addresses from the GeoIP database (state_code)
* Number of sasl logins which the hacker did from one IP address (login_count)
```
+-----------------+------------+-------------+
| ip_address      | state_code | login_count |
+-----------------+------------+-------------+
| 41.63.176.___ | AO | 8 |
| 200.80.227.___ | AR | 41 |
| 120.146.134.___ | AU | 18 |
| 79.132.239.___ | BE | 15 |
| 184.149.27.___ | CA | 1 |
| 24.37.20.___ | CA | 13 |
| 70.28.77.___ | CA | 21 |
| 70.25.65.___ | CA | 23 |
| 72.38.177.___ | CA | 24 |
| 174.114.121.___ | CA | 27 |
| 206.248.139.___ | CA | 4 |
| 64.179.221.___ | CA | 4 |
| 184.151.178.___ | CA | 40 |
| 24.37.22.___ | CA | 51 |
| 209.250.146.___ | CA | 66 |
| 209.197.185.___ | CA | 8 |
| 47.48.223.___ | CA | 8 |
| 70.25.41.___ | CA | 81 |
| 184.71.9.___ | CA | 92 |
| 84.226.27.___ | CH | 5 |
| 59.37.9.___ | CN | 6 |
| 181.143.131.___ | CO | 24 |
| 186.64.177.___ | CR | 6 |
| 77.104.244.___ | CZ | 1 |
| 78.108.109.___ | CZ | 18 |
| 185.19.1.___ | CZ | 58 |
| 95.208.250.___ | DE | 1 |
| 79.215.89.___ | DE | 15 |
| 47.71.223.___ | DE | 23 |
| 31.18.251.___ | DE | 27 |
| 2.164.183.___ | DE | 32 |
| 79.239.97.___ | DE | 32 |
| 80.187.103.___ | DE | 54 |
| 109.84.1.___ | DE | 6 |
| 212.97.234.___ | DK | 49 |
| 190.131.134.___ | EC | 42 |
| 84.77.172.___ | ES | 1 |
| 91.117.105.___ | ES | 10 |
| 185.87.99.___ | ES | 14 |
| 95.16.51.___ | ES | 15 |
| 95.127.182.___ | ES | 16 |
| 195.77.90.___ | ES | 19 |
| 188.86.18.___ | ES | 2 |
| 212.145.210.___ | ES | 38 |
| 148.3.169.___ | ES | 39 |
| 95.16.35.___ | ES | 4 |
| 81.202.61.___ | ES | 45 |
| 88.7.246.___ | ES | 7 |
| 81.36.5.___ | ES | 8 |
| 88.14.192.___ | ES | 8 |
| 212.97.161.___ | ES | 9 |
| 193.248.156.___ | FR | 5 |
| 82.34.32.___ | GB | 1 |
| 86.180.214.___ | GB | 11 |
| 81.108.174.___ | GB | 12 |
| 86.11.209.___ | GB | 13 |
| 86.150.224.___ | GB | 15 |
| 2.102.31.___ | GB | 17 |
| 93.152.88.___ | GB | 18 |
| 86.178.68.___ | GB | 19 |
| 176.248.121.___ | GB | 2 |
| 2.97.227.___ | GB | 2 |
| 62.49.34.___ | GB | 2 |
| 79.64.78.___ | GB | 20 |
| 2.126.140.___ | GB | 22 |
| 87.114.222.___ | GB | 23 |
| 188.29.164.___ | GB | 24 |
| 82.11.14.___ | GB | 26 |
| 81.168.46.___ | GB | 29 |
| 86.136.125.___ | GB | 3 |
| 90.199.85.___ | GB | 3 |
| 86.177.93.___ | GB | 31 |
| 82.32.186.___ | GB | 4 |
| 79.68.153.___ | GB | 46 |
| 151.226.42.___ | GB | 6 |
| 2.123.234.___ | GB | 6 |
| 90.217.211.___ | GB | 6 |
| 212.159.148.___ | GB | 68 |
| 88.111.94.___ | GB | 7 |
| 77.98.186.___ | GB | 9 |
| 41.222.232.___ | GH | 4 |
| 176.63.29.___ | HU | 30 |
| 86.47.237.___ | IE | 10 |
| 37.46.22.___ | IE | 4 |
| 95.83.249.___ | IE | 4 |
| 109.79.69.___ | IE | 6 |
| 79.176.100.___ | IL | 13 |
| 122.175.34.___ | IN | 19 |
| 114.143.5.___ | IN | 26 |
| 115.112.159.___ | IN | 4 |
| 79.62.179.___ | IT | 11 |
| 79.53.217.___ | IT | 19 |
| 188.216.54.___ | IT | 2 |
| 46.44.203.___ | IT | 2 |
| 80.86.57.___ | IT | 2 |
| 5.170.192.___ | IT | 27 |
| 80.23.42.___ | IT | 3 |
| 89.249.177.___ | IT | 3 |
| 93.39.141.___ | IT | 31 |
| 80.183.6.___ | IT | 34 |
| 79.25.107.___ | IT | 35 |
| 81.208.25.___ | IT | 39 |
| 151.57.154.___ | IT | 4 |
| 79.60.239.___ | IT | 42 |
| 79.47.25.___ | IT | 5 |
| 188.216.114.___ | IT | 7 |
| 151.31.139.___ | IT | 8 |
| 46.185.139.___ | JO | 9 |
| 211.180.177.___ | KR | 22 |
| 31.214.125.___ | KW | 2 |
| 89.203.17.___ | KW | 3 |
| 94.187.138.___ | KW | 4 |
| 209.59.110.___ | LC | 18 |
| 41.137.40.___ | MA | 12 |
| 189.211.204.___ | MX | 5 |
| 89.98.64.___ | NL | 6 |
| 195.241.8.___ | NL | 9 |
| 195.1.82.___ | NO | 70 |
| 200.46.9.___ | PA | 30 |
| 111.125.66.___ | PH | 1 |
| 89.174.81.___ | PL | 7 |
| 64.89.12.___ | PR | 24 |
| 82.154.194.___ | PT | 12 |
| 188.48.145.___ | SA | 8 |
| 42.61.41.___ | SG | 25 |
| 87.197.112.___ | SK | 3 |
| 116.58.231.___ | TH | 4 |
| 195.162.90.___ | UA | 5 |
| 108.185.167.___ | US | 1 |
| 108.241.56.___ | US | 1 |
| 198.24.64.___ | US | 1 |
| 199.249.233.___ | US | 1 |
| 204.8.13.___ | US | 1 |
| 206.81.195.___ | US | 1 |
| 208.75.20.___ | US | 1 |
| 24.149.8.___ | US | 1 |
| 24.178.7.___ | US | 1 |
| 38.132.41.___ | US | 1 |
| 63.233.138.___ | US | 1 |
| 68.15.198.___ | US | 1 |
| 72.26.57.___ | US | 1 |
| 72.43.167.___ | US | 1 |
| 74.65.154.___ | US | 1 |
| 74.94.193.___ | US | 1 |
| 75.150.97.___ | US | 1 |
| 96.84.51.___ | US | 1 |
| 96.90.244.___ | US | 1 |
| 98.190.153.___ | US | 1 |
| 12.23.72.___ | US | 10 |
| 50.225.58.___ | US | 10 |
| 64.140.101.___ | US | 10 |
| 66.185.229.___ | US | 10 |
| 70.63.88.___ | US | 10 |
| 96.84.148.___ | US | 10 |
| 107.178.12.___ | US | 11 |
| 170.253.182.___ | US | 11 |
| 206.127.77.___ | US | 11 |
| 216.27.83.___ | US | 11 |
| 72.196.170.___ | US | 11 |
| 74.93.168.___ | US | 11 |
| 108.60.97.___ | US | 12 |
| 205.196.77.___ | US | 12 |
| 63.159.160.___ | US | 12 |
| 204.93.122.___ | US | 13 |
| 206.169.117.___ | US | 13 |
| 208.104.106.___ | US | 13 |
| 65.28.31.___ | US | 13 |
| 66.119.110.___ | US | 13 |
| 67.84.164.___ | US | 13 |
| 69.178.166.___ | US | 13 |
| 71.232.229.___ | US | 13 |
| 96.3.6.___ | US | 13 |
| 205.214.233.___ | US | 14 |
| 38.96.46.___ | US | 14 |
| 67.61.214.___ | US | 14 |
| 173.233.58.___ | US | 141 |
| 64.251.53.___ | US | 15 |
| 73.163.215.___ | US | 15 |
| 24.61.176.___ | US | 16 |
| 67.10.184.___ | US | 16 |
| 173.14.42.___ | US | 17 |
| 173.163.34.___ | US | 17 |
| 104.138.114.___ | US | 18 |
| 23.24.168.___ | US | 18 |
| 50.202.9.___ | US | 19 |
| 96.248.123.___ | US | 19 |
| 98.191.183.___ | US | 19 |
| 108.215.204.___ | US | 2 |
| 50.198.37.___ | US | 2 |
| 69.178.183.___ | US | 2 |
| 74.190.39.___ | US | 2 |
| 76.90.131.___ | US | 2 |
| 96.38.10.___ | US | 2 |
| 96.60.117.___ | US | 2 |
| 96.93.6.___ | US | 2 |
| 74.69.197.___ | US | 21 |
| 98.140.180.___ | US | 21 |
| 50.252.0.___ | US | 22 |
| 69.71.200.___ | US | 22 |
| 71.46.59.___ | US | 22 |
| 74.7.35.___ | US | 22 |
| 12.191.73.___ | US | 23 |
| 208.123.156.___ | US | 23 |
| 65.190.29.___ | US | 23 |
| 67.136.192.___ | US | 23 |
| 70.63.216.___ | US | 23 |
| 96.66.144.___ | US | 23 |
| 173.167.128.___ | US | 24 |
| 64.183.78.___ | US | 24 |
| 68.44.33.___ | US | 24 |
| 23.25.9.___ | US | 25 |
| 24.100.92.___ | US | 25 |
| 107.185.110.___ | US | 26 |
| 208.118.179.___ | US | 26 |
| 216.133.120.___ | US | 26 |
| 75.182.97.___ | US | 26 |
| 107.167.202.___ | US | 27 |
| 66.85.239.___ | US | 27 |
| 71.122.125.___ | US | 28 |
| 74.218.169.___ | US | 28 |
| 76.177.204.___ | US | 28 |
| 216.165.241.___ | US | 29 |
| 24.178.50.___ | US | 29 |
| 63.149.147.___ | US | 29 |
| 174.66.84.___ | US | 3 |
| 184.183.156.___ | US | 3 |
| 50.233.39.___ | US | 3 |
| 70.183.165.___ | US | 3 |
| 71.178.212.___ | US | 3 |
| 72.175.83.___ | US | 3 |
| 74.142.22.___ | US | 3 |
| 98.174.50.___ | US | 3 |
| 98.251.168.___ | US | 3 |
| 206.74.148.___ | US | 30 |
| 24.131.201.___ | US | 30 |
| 50.80.199.___ | US | 30 |
| 69.251.49.___ | US | 30 |
| 108.6.53.___ | US | 31 |
| 74.84.229.___ | US | 31 |
| 172.250.78.___ | US | 32 |
| 173.14.75.___ | US | 32 |
| 216.201.55.___ | US | 33 |
| 40.130.243.___ | US | 33 |
| 164.58.163.___ | US | 34 |
| 70.182.187.___ | US | 35 |
| 184.170.168.___ | US | 37 |
| 198.46.110.___ | US | 37 |
| 24.166.234.___ | US | 37 |
| 65.34.19.___ | US | 37 |
| 75.146.12.___ | US | 37 |
| 107.199.135.___ | US | 38 |
| 206.193.215.___ | US | 38 |
| 50.254.150.___ | US | 38 |
| 69.54.48.___ | US | 38 |
| 172.8.30.___ | US | 4 |
| 24.106.124.___ | US | 4 |
| 65.127.169.___ | US | 4 |
| 71.227.65.___ | US | 4 |
| 71.58.72.___ | US | 4 |
| 74.9.236.___ | US | 4 |
| 12.166.108.___ | US | 40 |
| 174.47.56.___ | US | 40 |
| 66.76.176.___ | US | 40 |
| 76.111.90.___ | US | 41 |
| 96.10.70.___ | US | 41 |
| 97.79.226.___ | US | 41 |
| 174.79.117.___ | US | 42 |
| 70.138.178.___ | US | 42 |
| 64.233.225.___ | US | 43 |
| 97.89.203.___ | US | 43 |
| 12.28.231.___ | US | 44 |
| 64.235.157.___ | US | 45 |
| 76.110.237.___ | US | 45 |
| 71.196.10.___ | US | 46 |
| 173.167.177.___ | US | 49 |
| 24.7.92.___ | US | 49 |
| 68.187.225.___ | US | 49 |
| 184.75.77.___ | US | 5 |
| 208.91.186.___ | US | 5 |
| 71.11.113.___ | US | 5 |
| 75.151.112.___ | US | 5 |
| 98.189.112.___ | US | 5 |
| 69.170.187.___ | US | 51 |
| 97.64.182.___ | US | 51 |
| 24.239.92.___ | US | 52 |
| 72.211.28.___ | US | 53 |
| 66.179.44.___ | US | 54 |
| 66.188.47.___ | US | 55 |
| 64.60.22.___ | US | 56 |
| 73.1.95.___ | US | 56 |
| 75.140.143.___ | US | 58 |
| 24.199.140.___ | US | 59 |
| 216.240.53.___ | US | 6 |
| 216.26.16.___ | US | 6 |
| 50.242.1.___ | US | 6 |
| 65.83.137.___ | US | 6 |
| 68.119.102.___ | US | 6 |
| 68.170.224.___ | US | 6 |
| 74.94.231.___ | US | 6 |
| 96.64.21.___ | US | 6 |
| 71.187.41.___ | US | 60 |
| 184.177.173.___ | US | 61 |
| 75.71.114.___ | US | 61 |
| 75.82.232.___ | US | 61 |
| 97.77.161.___ | US | 63 |
| 50.154.213.___ | US | 65 |
| 96.85.169.___ | US | 67 |
| 100.33.70.___ | US | 68 |
| 98.100.71.___ | US | 68 |
| 24.176.214.___ | US | 69 |
| 74.113.89.___ | US | 69 |
| 204.116.101.___ | US | 7 |
| 216.216.68.___ | US | 7 |
| 65.188.191.___ | US | 7 |
| 69.15.165.___ | US | 7 |
| 74.219.118.___ | US | 7 |
| 173.10.219.___ | US | 71 |
| 97.77.209.___ | US | 72 |
| 173.163.236.___ | US | 73 |
| 162.210.13.___ | US | 79 |
| 12.236.19.___ | US | 8 |
| 208.180.242.___ | US | 8 |
| 24.221.97.___ | US | 8 |
| 40.132.97.___ | US | 8 |
| 50.79.227.___ | US | 8 |
| 64.130.109.___ | US | 8 |
| 66.80.57.___ | US | 8 |
| 74.68.130.___ | US | 8 |
| 74.70.242.___ | US | 8 |
| 96.80.61.___ | US | 81 |
| 74.43.153.___ | US | 83 |
| 208.123.153.___ | US | 85 |
| 75.149.238.___ | US | 87 |
| 96.85.138.___ | US | 89 |
| 208.117.200.___ | US | 9 |
| 208.68.71.___ | US | 9 |
| 50.253.180.___ | US | 9 |
| 50.84.132.___ | US | 9 |
| 63.139.29.___ | US | 9 |
| 70.43.78.___ | US | 9 |
| 74.94.154.___ | US | 9 |
| 50.76.82.___ | US | 94 |
+-----------------+------------+-------------+
```
**In the next table, we can see the distribution of IP addresses by country:**
```
+--------+
| 214 US |
|  28 GB |
|  17 IT |
|  15 ES |
|  15 CA |
|   8 DE |
|   4 IE |
|   3 KW |
|   3 IN |
|   3 CZ |
|   2 NL |
|   1 UA |
|   1 TH |
|   1 SK |
|   1 SG |
|   1 SA |
|   1 PT |
|   1 PR |
|   1 PL |
|   1 PH |
|   1 PA |
|   1 NO |
|   1 MX |
|   1 MA |
|   1 LC |
|   1 KR |
|   1 JO |
|   1 IL |
|   1 HU |
|   1 GH |
|   1 FR |
|   1 EC |
|   1 DK |
|   1 CR |
|   1 CO |
|   1 CN |
|   1 CH |
|   1 BE |
|   1 AU |
|   1 AR |
|   1 AO |
+--------+
```
Based on these tables, multiple facts can be drawn, according to which we designed our plugin:

* The spam was spread from a botnet. This is indicated by logins from a huge number of client IP addresses.
* The spam was spread with a low cadence of messages in order to avoid rate limits.
* The spam was spread from IP addresses in multiple countries (more than 30 countries after a few minutes), which indicates an international botnet.

From these tables, we extracted statistics on the IP addresses used, the number of logins, and the countries from which the hacker logged in:

* Total number of logins: 7531.
* Total number of IP addresses used: 342.
* Total number of unique countries: 41.
### Defending against botnet spammers

The solution to this kind of spam behavior was to make a plugin for the postfix firewall - postfwd. Postfwd is a program that can block users via rate limits, mail blacklists, and other means.

We designed and implemented a plugin that counts the number of unique countries from which a user logged in to his account via sasl authentication. Then, in the postfwd configuration, you can set a limit on the number of countries; when a user exceeds that limit, he gets a selected SMTP code as a reply and is blocked from sending e-mails.

I have been using this plugin at a medium-sized internet provider for 6 months, and so far the plugin has automatically caught over 50 compromised users without any intervention from the administrator's side. Another interesting fact after 6 months of usage: after the plugin found a spammer and sent SMTP code 544 (Host not found - not in DNS) to the compromised account (sent directly from postfwd), the botnets stopped trying to log into the compromised accounts. It looks like the botnet spam application is intelligent and does not want to waste botnet resources. Sending other SMTP codes did not stop the botnet from trying.

The plugin is available on my company's GitHub - [https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin][5]
### Installation

In this part, I will give you a basic tutorial on how to make postfix work with postfwd, how to install the plugin, and how to add a postfwd rule to use it. The installation was tested and done on Debian 8 Jessie. Instructions for parts of this installation are also available on the GitHub project page.

1\. First, install and configure postfix with sasl authentication. There are a lot of great tutorials on the installation and configuration of postfix, so I will continue directly with the postfwd installation.

2\. The next thing after you have postfix with sasl authentication installed is to install postfwd. On Debian systems, you can do it with the apt package manager by executing the following command (this will also automatically create the user **postfw** and the file **/etc/default/postfwd**, which we need to update with the correct configuration for autostart):

```
apt-get install postfwd
```
3\. Now we proceed with downloading the git project with our postfwd plugin:

```
apt-get install git
git clone https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin /etc/postfix/postfwd-anti-geoip-spam-plugin
chown -R postfw:postfix /etc/postfix/postfwd-anti-geoip-spam-plugin/
```

4\. If you do not have git or do not want to use it, you can download the raw plugin file:

```
mkdir /etc/postfix/postfwd-anti-geoip-spam-plugin
wget https://raw.githubusercontent.com/Vnet-as/postfwd-anti-geoip-spam-plugin/master/postfwd-anti-spam.plugin -O /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin
chown -R postfw:postfix /etc/postfix/postfwd-anti-geoip-spam-plugin/
```
5\. Then update the postfwd default config in the **/etc/default/postfwd** file and add the plugin parameter **--plugins /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin** to it:

```
sed -i 's/STARTUP=0/STARTUP=1/' /etc/default/postfwd # Auto-Startup

sed -i 's/ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size"/#ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size"/' /etc/default/postfwd # Comment out old startup parameters

echo 'ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size --plugins /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin"' >> /etc/default/postfwd # Add new startup parameters
```
6\. Now create a basic postfwd configuration file with the anti-spam botnet rule:

```
cat <<_EOF_ >> /etc/postfix/postfwd.cf
# Anti spam botnet rule
# This example shows how to limit e-mail addresses defined by sasl_username to be able to log in from max. 5 different countries; otherwise they will be blocked from sending messages.
id=COUNTRY_LOGIN_COUNT ; \
   sasl_username=~^(.+)$ ; \
   incr_client_country_login_count != 0 ; \
   action=dunno
id=BAN_BOTNET ; \
   sasl_username=~^(.+)$ ; \
   client_uniq_country_login_count > 5 ; \
   action=rate(sasl_username/1/3600/554 Your mail account was compromised. Please change your password immediately after next login.)
_EOF_
```
7\. Update the postfix configuration file **/etc/postfix/main.cf** to use the policy service on the default postfwd port **10040** (or a different port according to the configuration in **/etc/default/postfwd**). Your configuration should have the following option in the **smtpd_recipient_restrictions** line. Note that this restriction does not work without other restrictions, such as **reject_unknown_recipient_domain** or **reject_unauth_destination**:

```
echo 'smtpd_recipient_restrictions = check_policy_service inet:127.0.0.1:10040' >> /etc/postfix/main.cf
```
8\. Install the dependencies of the plugin:

```
apt-get install -y libgeo-ip-perl libtime-piece-perl libdbd-mysql-perl libdbd-pg-perl
```

9\. Install a MySQL or PostgreSQL database and configure one user which will be used by the plugin.
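As a sketch matching the example plugin configuration in the next step (database test, user testuser, password password - all placeholders you should change), the MySQL objects could be created like this:

```
mysql -u root -p <<'SQL'
-- Database and user the plugin will use for its country-login counters.
CREATE DATABASE test;
CREATE USER 'testuser'@'127.0.0.1' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON test.* TO 'testuser'@'127.0.0.1';
FLUSH PRIVILEGES;
SQL
```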
10\. Update the database connection part in the plugin to refer to your database backend configuration. This example shows the MySQL configuration for the user testuser and database test:

```
# my $driver = "Pg";
my $driver = "mysql";
my $database = "test";
my $host = "127.0.0.1";
my $port = "3306";
# my $port = "5432";
my $dsn = "DBI:$driver:database=$database;host=$host;port=$port";
my $userid = "testuser";
my $password = "password";
```
11\. Now restart the postfix and postfwd services:

```
service postfix restart && service postfwd restart
```
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/

作者:[Ondrej Vasko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/
[1]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#introduction
[2]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#how-international-botnet-works
[3]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#defending-against-botnet-spammers
[4]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#installation
[5]:https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin
@ -1,231 +0,0 @@
ucasFL translating

WRITE MARKDOWN WITH 8 EXCEPTIONAL OPEN SOURCE EDITORS
============================================================

### Markdown

By way of a succinct introduction, Markdown is a lightweight plain-text formatting syntax created by John Gruber together with Aaron Swartz. Markdown allows individuals "to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)". Markdown's syntax consists of easy-to-remember symbols. It has a gentle learning curve; you can literally learn the Markdown syntax in the time it takes to fry some mushrooms (that's about 10 minutes). By keeping the syntax as simple as possible, the risk of errors is minimized. Besides being a friendly syntax, it has the virtue of producing clean and valid (X)HTML output. If you have seen my HTML, you would know that's pretty essential.
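For readers who have not seen the syntax before, a few of the most common constructs are shown below; each renders to the corresponding HTML element:

```
# A first-level heading

Some *emphasis*, some **strong emphasis**, and a [link](https://example.com).

- an unordered list item
- another item

1. an ordered list item

> a blockquote

    an indented code block
```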
The main goal of the formatting syntax is to make it extremely readable. Users should be able to publish a Markdown-formatted document as plain text. Text written in Markdown has the virtue of being easy to share between computers, smart phones, and individuals. Almost all content management systems support Markdown. Its popularity as a format for writing for the web has also led to variants being adopted by many services such as GitHub and Stack Exchange.

Markdown can be composed in any text editor. But I recommend an editor purposely designed for this syntax. The software featured in this roundup allows an author to write professional documents of various formats including blog posts, presentations, reports, email, slides, and more. All of the applications are, of course, released under an open source license. Linux, OS X, and Windows users are catered for.
* * *
### Remarkable

![Remarkable]

Let's start with Remarkable. An apt name. Remarkable is a reasonably featured Markdown editor – it doesn't have all the bells and whistles, but there's nothing critical missing. It supports a GitHub-flavored Markdown syntax.

With this editor, you can write Markdown and view the changes as you make them in the live preview window. You can export your files to PDF (with a TOC) and HTML. There are multiple styles available, along with extensive configuration options, so you can configure it to your heart's content.

Other features include:

* Syntax highlighting
* GitHub Flavored Markdown support
* MathJax support – render rich documents with advanced formatting
* Keyboard shortcuts

There are easy installers available for Debian, Ubuntu, Fedora, SUSE and Arch systems.

Homepage: [https://remarkableapp.github.io/][4]
License: MIT License
* * *
### Atom

![Atom]

Make no bones about it, Atom is a fabulous text editor. Atom consists of over 50 open source packages integrated around a minimal core. With Node.js support and a full set of features, Atom is my preferred way to edit code. It features in our [Killer Open Source Apps][5]; it is that masterly. But as a Markdown editor, Atom leaves a lot to be desired – its default packages are bereft of Markdown-specific features; for example, it doesn't render equations, as illustrated in the graphic above.

But here lies the power of open source, and one of the reasons I'm a strong advocate of openness. There are a plethora of packages, some forks, which add the missing functionality. For example, Markdown Preview Plus provides a real-time preview of Markdown documents, with math rendering and live reloading. Alternatively, you might try [Markdown Preview Enhanced][6]. If you need an auto-scroll feature, there's [markdown-scroll-sync][7]. I'm a big fan of [Markdown-Writer][8] and [markdown-pdf][9]; the latter converts Markdown to PDF, PNG and JPEG on the fly.

The approach embodies the open source mentality, allowing the user to add extensions to provide only the features needed. Reminds me of Woolworths pick 'n' mix sweets. A bit more effort, but the best outcome.

Homepage: [https://atom.io/][10]
License: MIT License
* * *
|
||||
|
||||
### Haroopad
|
||||
|
||||

|
||||
|
||||
Haroopad is an excellent markdown enabled document processor for creating web-friendly documents. Author various formats of documents such as blog articles, slides, presentations, reports, and e-mail. Haroopad runs on Windows, Mac OS X, and Linux. There are Debian/Ubuntu packages, and binaries for Windows and Mac. The application uses node-webkit, CodeMirror, marked, and Twitter Bootstrap.
|
||||
|
||||
Haroo means “A Day” in Korean.
|
||||
|
||||
The feature list is rather impressive; take a look below:
|
||||
|
||||
* Themes, Skins and UI Components
|
||||
* Over 30 different themes to edit – tomorrow-night-bright and zenburn are recent additions
|
||||
* Syntax highlighting in fenced code block on editor
|
||||
* Ruby, Python, PHP, Javascript, C, HTML, CSS
|
||||
* Based on CodeMirror, a versatile text editor implemented in JavaScript for the browser
|
||||
* Live Preview themes
|
||||
* 7 themes based markdown-css
|
||||
* Syntax Highlighting
|
||||
* 112 languages & 49 styles based on highlight.js
|
||||
* Custom Theme
|
||||
* Style based on CSS (Cascading Style Sheet)
|
||||
* Presentation Mode – useful for on the spot presentations
|
||||
* Draw diagrams – flowcharts, and sequence diagrams
|
||||
* Tasklist
|
||||
* Enhanced Markdown syntax with TOC, GitHub Flavored Markdown and extensions, mathematical expressions, footnotes, tasklists, and more
|
||||
* Font Size
|
||||
* Editor and Viewer font size control using Preference Window & Shortcuts
|
||||
* Embedding Rich Media Contents
|
||||
* Video, Audio, 3D, Text, Open Graph and oEmbed
|
||||
* About 100 major internet services (YouTube, SoundCloud, Flickr …) Support
|
||||
* Drag & Drop support
|
||||
* Display Mode
|
||||
* Default (Editor:Viewer), Reverse (Viewer:Editor), Only Editor, Only Viewer (View > Mode)
|
||||
* Insert Current Date & Time
|
||||
* Various Format support (Insert > Date & Time)
|
||||
* HTML to Markdown
|
||||
* Drag & Drop your selected text on Web Browser
|
||||
* Options for markdown parsing
|
||||
* Outline View
|
||||
* Vim Key-binding for purists
|
||||
* Markdown Auto Completion
|
||||
* Export to PDF, HTML
|
||||
* Styled HTML copy to clipboard for WYSIWYG editors
|
||||
* Auto Save & Restore
|
||||
* Document state information
|
||||
* Tab or Spaces for Indentation
|
||||
* Column (Single, Two and Three) Layout View
|
||||
* Markdown Syntax Help Dialog.
|
||||
* Import and Export settings
|
||||
* Support for LaTex mathematical expressions using MathJax
|
||||
* Export documents to HTML and PDF
|
||||
* Build extensions for making your own feature
|
||||
* Effortlessly publish documents to blog systems: WordPress, Evernote and Tumblr
|
||||
* Full screen mode – although the mode fails to hide the top menu bar or the bottom toolbar
|
||||
* Internationalization support: English, Korean, Spanish, Chinese Simplified, German, Vietnamese, Russian, Greek, Portuguese, Japanese, Italian, Indonesian, Turkish, and French
|
||||
|
||||
Homepage: [http://pad.haroopress.com/][11]
|
||||
License: GNU GPL v3
|
||||
|
||||
* * *
|
||||
|
||||
### StackEdit
|
||||
|
||||

|
||||
|
||||
StackEdit is a full-featured Markdown editor based on PageDown, the Markdown library used by Stack Overflow and the other Stack Exchange sites. Unlike the other editors in this roundup, StackEdit is a web-based editor. A Chrome app is also available.
|
||||
|
||||
Features include:
|
||||
|
||||
* Real-time HTML preview with Scroll Link feature to bind editor and preview scrollbars
|
||||
* Markdown Extra/GitHub Flavored Markdown support and Prettify/Highlight.js syntax highlighting
|
||||
* LaTeX mathematical expressions using MathJax
|
||||
* WYSIWYG control buttons
|
||||
* Configurable layout
|
||||
* Theming support with different themes available
|
||||
* A la carte extensions
|
||||
* Offline editing
|
||||
* Online synchronization with Google Drive (multi-accounts) and Dropbox
|
||||
* One click publish on Blogger, Dropbox, Gist, GitHub, Google Drive, SSH server, Tumblr, and WordPress
|
||||
|
||||
Homepage: [https://stackedit.io/][12]
|
||||
License: Apache License
|
||||
|
||||
* * *
|
||||
|
||||
### MacDown
|
||||
|
||||

|
||||
|
||||
MacDown is the only editor featured in this roundup that runs solely on macOS. Specifically, it requires OS X 10.8 or later. Hoedown is used internally to render Markdown into HTML, which gives it an edge in performance. Hoedown is a revived fork of Sundown; it is fully standards compliant with no dependencies, good extension support, and UTF-8 awareness.
|
||||
|
||||
MacDown is based on Mou, a proprietary solution designed for web developers.
|
||||
|
||||
It offers good Markdown rendering, syntax highlighting for fenced code blocks with language identifiers rendered by Prism, MathML and LaTeX rendering, GFM task lists, Jekyll front-matter, and optional advanced auto-completion. And above all, it isn’t a resource hog. Want to write Markdown on OS X? MacDown is my open source recommendation for web developers.
|
||||
|
||||
Homepage: [https://macdown.uranusjr.com/][13]
|
||||
License: MIT License
|
||||
|
||||
* * *
|
||||
|
||||
### ghostwriter
|
||||
|
||||

|
||||
|
||||
ghostwriter is a cross-platform, aesthetic, distraction-free Markdown editor. It has built-in support for the Sundown processor, but can also auto-detect Pandoc, MultiMarkdown, Discount and cmark processors. It seeks to be an unobtrusive editor.
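Since ghostwriter auto-detects these processors, making one available should simply be a matter of installing it so it appears on your PATH; a sketch for Debian/Ubuntu-based systems:

```
# install Pandoc so ghostwriter can detect and use it
sudo apt-get install pandoc
# confirm it is on the PATH, and therefore visible to ghostwriter
which pandoc
```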
|
||||
|
||||
ghostwriter has a good feature set which includes syntax highlighting, a full-screen mode, a focus mode, themes, spell checking with Hunspell, a live word count, a live HTML preview with custom CSS style sheets, drag and drop support for images, and internationalization support. A Hemingway mode button disables the backspace and delete keys. A new Markdown cheat sheet HUD window is a useful addition. Theme support is pretty basic, but there are some experimental themes available at this [GitHub repository][14].
|
||||
|
||||
ghostwriter is an under-rated utility. I have come to appreciate the versatility of this application more and more, in part because its spartan interface helps the writer fully concentrate on curating content. Recommended.
|
||||
|
||||
ghostwriter is available for Linux and Windows. There is also a portable version available for Windows.
|
||||
|
||||
Homepage: [https://github.com/wereturtle/ghostwriter][15]
|
||||
License: GNU GPL v3
|
||||
|
||||
* * *
|
||||
|
||||
### Abricotine
|
||||
|
||||

|
||||
|
||||
Abricotine is a promising cross-platform open-source markdown editor built for the desktop. It is available for Linux, OS X and Windows.
|
||||
|
||||
The application supports Markdown syntax combined with some GitHub Flavored Markdown enhancements (such as tables). It lets users preview documents directly in the text editor, as opposed to a side pane.
|
||||
|
||||
The tool has a reasonable set of features, including a spell checker and the ability to save documents as HTML or copy rich text to paste into your email client. You can also display a document’s table of contents in the side pane, and display syntax highlighting for code as well as helpers, anchors and hidden characters. It is at a fairly early stage of development with some basic bugs that need fixing, but it is one to keep an eye on. There are two themes, with the ability to add your own.
|
||||
|
||||
Homepage: [http://abricotine.brrd.fr/][16]
|
||||
License: GNU General Public License v3 or later
|
||||
|
||||
* * *
|
||||
|
||||
### ReText
|
||||
|
||||

|
||||
|
||||
ReText is a simple but powerful editor for Markdown and reStructuredText. It gives users the power to control all output formatting. The files it works with are plain text files, however it can export to PDF, HTML and other formats. ReText is officially supported on Linux only.
|
||||
|
||||
Features include:
|
||||
|
||||
* Full screen mode
|
||||
* Live previews
|
||||
* Synchronised scrolling (for Markdown)
|
||||
* Support for math formulas
|
||||
* Spell checking
|
||||
* Page breaks
|
||||
* Export to HTML, ODT and PDF
|
||||
* Use other markup languages
|
||||
|
||||
Homepage: [https://github.com/retext-project/retext][17]
|
||||
License: GNU GPL v2 or higher
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ossblog.org/markdown-editors/
|
||||
|
||||
Author: [Steve Emms][a]
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]:https://www.ossblog.org/author/steve/
|
||||
[1]:https://www.ossblog.org/author/steve/
|
||||
[2]:https://www.ossblog.org/markdown-editors/#comments
|
||||
[3]:https://www.ossblog.org/category/utilities/
|
||||
[4]:https://remarkableapp.github.io/
|
||||
[5]:https://www.ossblog.org/top-software/2/
|
||||
[6]:https://atom.io/packages/markdown-preview-enhanced
|
||||
[7]:https://atom.io/packages/markdown-scroll-sync
|
||||
[8]:https://atom.io/packages/markdown-writer
|
||||
[9]:https://atom.io/packages/markdown-pdf
|
||||
[10]:https://atom.io/
|
||||
[11]:http://pad.haroopress.com/
|
||||
[12]:https://stackedit.io/
|
||||
[13]:https://macdown.uranusjr.com/
|
||||
[14]:https://github.com/jggouvea/ghostwriter-themes
|
||||
[15]:https://github.com/wereturtle/ghostwriter
|
||||
[16]:http://abricotine.brrd.fr/
|
||||
[17]:https://github.com/retext-project/retext
|
@ -1,393 +0,0 @@
|
||||
The Perfect Server CentOS 7.3 with Apache, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3.1
|
||||
============================================================
|
||||
|
||||
### This tutorial exists for these OS versions
|
||||
|
||||
* **CentOS 7.3**
|
||||
* [CentOS 7.2][3]
|
||||
* [CentOS 7.1][4]
|
||||
* [CentOS 7][5]
|
||||
|
||||
### On this page
|
||||
|
||||
1. [1 Requirements][6]
|
||||
2. [2 Preliminary Note][7]
|
||||
3. [3 Prepare the server][8]
|
||||
4. [4 Enable Additional Repositories and Install Some Software][9]
|
||||
5. [5 Quota][10]
|
||||
1. [Enabling quota on the / (root) partition][1]
|
||||
2. [Enabling quota on a separate /var partition][2]
|
||||
6. [6 Install Apache, MySQL, phpMyAdmin][11]
|
||||
|
||||
This tutorial shows the installation of ISPConfig 3.1 on a CentOS 7.3 (64-bit) server. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman, and many more.
|
||||
|
||||
### 1 Requirements
|
||||
|
||||
To install such a system you will need the following:
|
||||
|
||||
* A CentOS 7.3 minimal server system. This can be a server installed from scratch as described in our [CentOS 7.3 minimal server tutorial][12], or a virtual server or root server from a hosting company that has a minimal CentOS 7.3 setup installed.
|
||||
* A fast Internet connection.
|
||||
|
||||
### 2 Preliminary Note
|
||||
|
||||
In this tutorial, I use the hostname server1.example.com with the IP address 192.168.1.100 and the gateway 192.168.1.1. These settings might differ for you, so you have to replace them where appropriate.
|
||||
|
||||
Please note that HHVM and XMPP are not supported in ISPConfig for the CentOS platform yet. If you would like to manage an XMPP chat server from within ISPConfig or use HHVM (Hip Hop Virtual Machine) in an ISPConfig website, then please use Debian 8 or Ubuntu 16.04 as the server OS instead of CentOS 7.3.
|
||||
|
||||
### 3 Prepare the server
|
||||
|
||||
**Set the keyboard layout**
|
||||
|
||||
In case the keyboard layout of the server does not match your keyboard, you can switch to the right keyboard (in my case "de" for a German keyboard layout) with the localectl command:
|
||||
|
||||
`localectl set-keymap de`
|
||||
|
||||
To get a list of all available keymaps, run:
|
||||
|
||||
`localectl list-keymaps`
|
||||
|
||||
I want to install ISPConfig at the end of this tutorial, and ISPConfig ships with the Bastille firewall script that I will use as the firewall, therefore I disable the default CentOS firewall now. Of course, you are free to leave the CentOS firewall on and configure it to your needs (but then you shouldn't use any other firewall later on, as it will most probably interfere with the CentOS firewall).
|
||||
|
||||
Run...
|
||||
|
||||
```
|
||||
yum -y install net-tools
|
||||
systemctl stop firewalld.service
|
||||
systemctl disable firewalld.service
|
||||
```
|
||||
|
||||
to stop and disable the CentOS firewall. It is OK if you get errors here; this just indicates that the firewall was not installed.
|
||||
|
||||
Then you should check that the firewall has really been disabled. To do so, run the command:
|
||||
|
||||
`iptables -L`
|
||||
|
||||
The output should look like this:
|
||||
|
||||
[root@server1 ~]# iptables -L
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Chain FORWARD (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Chain OUTPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Or use the firewall-cmd command:
|
||||
|
||||
firewall-cmd --state
|
||||
|
||||
[root@server1 ~]# firewall-cmd --state
|
||||
not running
|
||||
[root@server1 ~]#
|
||||
|
||||
Now I will install the network configuration editor and the shell-based editor "nano" that I will use in the next steps to edit the config files:
|
||||
|
||||
yum -y install nano wget NetworkManager-tui
|
||||
|
||||
If you did not configure your network card during the installation, you can do that now. Run...
|
||||
|
||||
nmtui
|
||||
|
||||
... and go to Edit a connection:
|
||||
|
||||
[
|
||||

|
||||
][13]
|
||||
|
||||
Select your network interface:
|
||||
|
||||
[
|
||||

|
||||
][14]
|
||||
|
||||
Then fill in your network details - disable DHCP and fill in a static IP address, a netmask, your gateway, and one or two nameservers, then hit Ok:
|
||||
|
||||
[
|
||||

|
||||
][15]
|
||||
|
||||
Next select OK to confirm the changes that you made in the network settings
|
||||
|
||||
[
|
||||

|
||||
][16]
|
||||
|
||||
and Quit to close the nmtui network configuration tool.
|
||||
|
||||
[
|
||||

|
||||
][17]
|
||||
|
||||
You should run
|
||||
|
||||
ifconfig
|
||||
|
||||
now to check if the installer got your IP address right:
|
||||
|
||||
```
|
||||
[root@server1 ~]# ifconfig
|
||||
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
|
||||
inet 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
|
||||
inet6 fe80::20c:29ff:fecd:cc52 prefixlen 64 scopeid 0x20
|
||||
|
||||
ether 00:0c:29:cd:cc:52 txqueuelen 1000 (Ethernet)
|
||||
RX packets 55621 bytes 79601094 (75.9 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 28115 bytes 2608239 (2.4 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
|
||||
inet 127.0.0.1 netmask 255.0.0.0
|
||||
inet6 ::1 prefixlen 128 scopeid 0x10
|
||||
loop txqueuelen 0 (Local Loopback)
|
||||
RX packets 0 bytes 0 (0.0 B)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 0 bytes 0 (0.0 B)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
```
|
||||
|
||||
If your network card does not show up there, then it may not be enabled on boot. In this case, open the file /etc/sysconfig/network-scripts/ifcfg-ens33
|
||||
|
||||
nano /etc/sysconfig/network-scripts/ifcfg-ens33
|
||||
|
||||
and set ONBOOT to yes:
|
||||
|
||||
[...]
|
||||
ONBOOT=yes
|
||||
[...]
|
||||
|
||||
and reboot the server.
|
||||
|
||||
Check your /etc/resolv.conf to verify that it lists all nameservers that you've previously configured:
|
||||
|
||||
cat /etc/resolv.conf
|
||||
|
||||
If nameservers are missing, run
|
||||
|
||||
nmtui
|
||||
|
||||
and add the missing nameservers again.
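If you prefer a non-interactive route, nmcli can set the nameservers directly; a sketch, assuming your connection is named ens33 and using example DNS addresses:

```
nmcli connection modify ens33 ipv4.dns "8.8.8.8 8.8.4.4"
nmcli connection up ens33
```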
|
||||
|
||||
Now, on to the configuration...
|
||||
|
||||
**Adjusting /etc/hosts and /etc/hostname**
|
||||
|
||||
Next, we will edit /etc/hosts. Make it look like this:
|
||||
|
||||
nano /etc/hosts
|
||||
|
||||
```
|
||||
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
|
||||
192.168.1.100 server1.example.com server1
|
||||
|
||||
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
|
||||
```
|
||||
|
||||
Set the hostname in the /etc/hostname file. The file should contain the fully qualified domain name (e.g. server1.example.com in my case) and not just the short name like "server1". Open the file with the nano editor:
|
||||
|
||||
nano /etc/hostname
|
||||
|
||||
And set the hostname in the file.
|
||||
|
||||
```
|
||||
server1.example.com
|
||||
```
|
||||
|
||||
Save the file and exit nano.
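Alternatively, on CentOS 7 you can let systemd write the hostname for you; this should produce the same result:

```
hostnamectl set-hostname server1.example.com
hostnamectl status
```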
|
||||
|
||||
**Disable SELinux**
|
||||
|
||||
SELinux is a security extension of CentOS that should provide extended security. In my opinion you don't need it to configure a secure system, and it usually causes more problems than advantages (think of a week of troubleshooting because some service wasn't working as expected, only to find out that everything was OK and SELinux was causing the problem). Therefore I disable it (this is a must if you want to install ISPConfig later on).
|
||||
|
||||
Edit /etc/selinux/config and set SELINUX=disabled:
|
||||
|
||||
nano /etc/selinux/config
|
||||
|
||||
```
|
||||
# This file controls the state of SELinux on the system.
|
||||
# SELINUX= can take one of these three values:
|
||||
# enforcing - SELinux security policy is enforced.
|
||||
# permissive - SELinux prints warnings instead of enforcing.
|
||||
# disabled - No SELinux policy is loaded.
|
||||
SELINUX=disabled
|
||||
# SELINUXTYPE= can take one of these two values:
|
||||
# targeted - Targeted processes are protected,
|
||||
# mls - Multi Level Security protection.
|
||||
SELINUXTYPE=targeted
|
||||
```
|
||||
|
||||
Afterwards we must reboot the system:
|
||||
|
||||
reboot
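After the reboot, you can verify that SELinux is really off; getenforce should report "Disabled":

```
[root@server1 ~]# getenforce
Disabled
```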
|
||||
|
||||
### 4 Enable Additional Repositories and Install Some Software
|
||||
|
||||
First, we import the GPG keys for software packages:
|
||||
|
||||
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
|
||||
|
||||
Then we enable the EPEL repository on our CentOS system as lots of the packages that we are going to install in the course of this tutorial are not available in the official CentOS 7 repository:
|
||||
|
||||
yum -y install epel-release
|
||||
|
||||
yum -y install yum-priorities
|
||||
|
||||
Edit /etc/yum.repos.d/epel.repo...
|
||||
|
||||
nano /etc/yum.repos.d/epel.repo
|
||||
|
||||
... and add the line priority=10 to the [epel] section:
|
||||
|
||||
```
|
||||
[epel]
|
||||
name=Extra Packages for Enterprise Linux 7 - $basearch
|
||||
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
|
||||
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=1
|
||||
priority=10
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
|
||||
[...]
|
||||
```
|
||||
|
||||
Then we update our existing packages on the system:
|
||||
|
||||
yum -y update
|
||||
|
||||
Now we install some software packages that are needed later on:
|
||||
|
||||
yum -y groupinstall 'Development Tools'
|
||||
|
||||
### 5 Quota
|
||||
|
||||
(If you have chosen a different partitioning scheme than I did, you must adjust this chapter so that quota applies to the partitions where you need it.)
|
||||
|
||||
To install quota, we run this command:
|
||||
|
||||
yum -y install quota
|
||||
|
||||
Now we check if quota is already enabled for the filesystem where the website (/var/www) and maildir data (/var/vmail) are stored. In this example setup, I have one big root partition, so I search for ' / ':
|
||||
|
||||
mount | grep ' / '
|
||||
|
||||
[root@server1 ~]# mount | grep ' / '
|
||||
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,noquota)
|
||||
[root@server1 ~]#
|
||||
|
||||
If you have a separate /var partition, then use:
|
||||
|
||||
mount | grep ' /var '
|
||||
|
||||
instead. If the line contains the word "**noquota**", then proceed with the following steps to enable quota.
|
||||
|
||||
### Enabling quota on the / (root) partition
|
||||
|
||||
Normally you would enable quota in the /etc/fstab file, but if the filesystem is the root filesystem "/", then quota has to be enabled by a boot parameter of the Linux Kernel.
|
||||
|
||||
Edit the grub configuration file:
|
||||
|
||||
nano /etc/default/grub
|
||||
|
||||
Search for the line that starts with GRUB_CMDLINE_LINUX and add rootflags=uquota,gquota to the command line parameters so that the resulting line looks like this:
|
||||
|
||||
```
|
||||
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet rootflags=uquota,gquota"
|
||||
```
|
||||
|
||||
and apply the changes by running the following commands,
|
||||
|
||||
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg_bak
|
||||
grub2-mkconfig -o /boot/grub2/grub.cfg
|
||||
|
||||
and reboot the server.
|
||||
|
||||
reboot
|
||||
|
||||
Now check if quota is enabled:
|
||||
|
||||
mount | grep ' / '
|
||||
|
||||
[root@server1 ~]# mount | grep ' / '
|
||||
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
|
||||
[root@server1 ~]#
|
||||
|
||||
When quota is active, we can see "**usrquota,grpquota**" in the mount option list.
|
||||
|
||||
### Enabling quota on a separate /var partition
|
||||
|
||||
If you have a separate /var partition, then edit /etc/fstab and add ,uquota,gquota to the /var partition line (/dev/mapper/centos-var):
|
||||
|
||||
nano /etc/fstab
|
||||
|
||||
```
|
||||
|
||||
#
|
||||
# /etc/fstab
|
||||
# Created by anaconda on Sun Sep 21 16:33:45 2014
|
||||
#
|
||||
# Accessible filesystems, by reference, are maintained under '/dev/disk'
|
||||
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
|
||||
#
|
||||
/dev/mapper/centos-root / xfs defaults 1 1
|
||||
/dev/mapper/centos-var /var xfs defaults,uquota,gquota 1 2
|
||||
UUID=9ac06939-7e43-4efd-957a-486775edd7b4 /boot xfs defaults 1 3
|
||||
/dev/mapper/centos-swap swap swap defaults 0 0
|
||||
```
|
||||
|
||||
Then run
|
||||
|
||||
mount -o remount /var
|
||||
|
||||
quotacheck -avugm
|
||||
quotaon -avug
|
||||
|
||||
to enable quota. If you get an error that there is no partition with quota enabled, reboot the server before you proceed.
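As a quick sanity check, you can ask the quota tools for a report; if quota is active on the partition, it should be listed without complaint:

```
repquota -avug
```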
|
||||
|
||||
### 6 Install Apache, MySQL, phpMyAdmin
|
||||
|
||||
We can install the needed packages with one single command:
|
||||
|
||||
yum -y install ntp httpd mod_ssl mariadb-server php php-mysql php-mbstring phpmyadmin
|
||||
|
||||
To ensure that the server cannot be attacked through the [HTTPOXY][18] vulnerability, we will disable the HTTP_PROXY header in Apache globally.
|
||||
|
||||
Add the Apache header rule at the end of the httpd.conf file:
|
||||
|
||||
echo "RequestHeader unset Proxy early" >> /etc/httpd/conf/httpd.conf
|
||||
|
||||
And restart httpd to apply the configuration change.
|
||||
|
||||
service httpd restart
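As a quick sanity check, you might verify the configuration syntax and confirm that Apache is answering again (this assumes curl is installed):

```
apachectl configtest
curl -sI http://localhost/ | head -n 1
```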
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
|
||||
|
||||
Author: [Till Brehm][a]
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
|
||||
[1]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-the-root-partition
|
||||
[2]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-a-separate-var-partition
|
||||
[3]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-2-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
|
||||
[4]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-1-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3/
|
||||
[5]:https://www.howtoforge.com/perfect-server-centos-7-apache2-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3
|
||||
[6]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-requirements
|
||||
[7]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-preliminary-note
|
||||
[8]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspprepare-the-server
|
||||
[9]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspenable-additional-repositories-and-install-some-software
|
||||
[10]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-quota
|
||||
[11]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-install-apache-mysql-phpmyadmin
|
||||
[12]:https://www.howtoforge.com/tutorial/centos-7-minimal-server/
|
||||
[13]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui1.png
|
||||
[14]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui2.png
|
||||
[15]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui3.png
|
||||
[16]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui4.png
|
||||
[17]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui5.png
|
||||
[18]:https://www.howtoforge.com/tutorial/httpoxy-protect-your-server/
|
@ -1,265 +0,0 @@
|
||||
Monitoring a production-ready microservice
|
||||
============================================================
|
||||
|
||||
Explore essential components, principles, and key metrics.
|
||||
|
||||
|
||||
|
||||

|
||||
|
||||
|
||||
This is an excerpt from [Production-Ready Microservices][8], by Susan J. Fowler.
|
||||
|
||||
|
||||
A production-ready microservice is one that is properly monitored. Proper monitoring is one of the most important parts of building a production-ready microservice and guarantees higher microservice availability. In this chapter, the essential components of microservice monitoring are covered, including which key metrics to monitor, how to log key metrics, building dashboards that display key metrics, how to approach alerting, and on-call best practices.
|
||||
|
||||
|
||||
|
||||
### Principles of Microservice Monitoring
|
||||
|
||||
The majority of outages in a microservice ecosystem are caused by bad deployments. The second most common cause of outages is the lack of proper _monitoring_ . It’s easy to see why this is the case. If the state of a microservice is unknown, if key metrics aren’t tracked, then any precipitating failures will remain unknown until an actual outage occurs. By the time a microservice experiences an outage due to lack of monitoring, its availability has already been compromised. During these outages, the time to mitigation and time to repair are prolonged, pulling the availability of the microservice down even further: without easily accessible information about the microservice’s key metrics, developers are often faced with a blank slate, unprepared to quickly resolve the issue. This is why proper monitoring is essential: it provides the development team with all of the relevant information about the microservice. When a microservice is properly monitored, its state is never unknown.
|
||||
|
||||
Monitoring a production-ready microservice has four components. The first is proper _logging_ of all relevant and important information, which allows developers to understand the state of the microservice at any time in the present or in the past. The second is the use of well-designed _dashboards_ that accurately reflect the health of the microservice, and are organized in such a way that anyone at the company could view the dashboard and understand the health and status of the microservice without difficulty. The third component is actionable and effective _alerting_ on all key metrics, a practice that makes it easy for developers to mitigate and resolve problems with the microservice before they cause outages. The final component is the implementation and practice of running a sustainable _on-call rotation_ responsible for the monitoring of the microservice. With effective logging, dashboards, alerting, and on-call rotation, the microservice’s availability can be protected: failures and errors will be detected, mitigated, and resolved before they bring down any part of the microservice ecosystem.
|
||||
|
||||
###### A Production-Ready Service Is Properly Monitored
|
||||
|
||||
* Its key metrics are identified and monitored at the host, infrastructure, and microservice levels.
|
||||
|
||||
* It has appropriate logging that accurately reflects the past states of the microservice.
|
||||
|
||||
* Its dashboards are easy to interpret, and contain all key metrics.
|
||||
|
||||
* Its alerts are actionable and are defined by signal-providing thresholds.
|
||||
|
||||
* There is a dedicated on-call rotation responsible for monitoring and responding to any incidents and outages.
|
||||
|
||||
* There is a clear, well-defined, and standardized on-call procedure in place for handling incidents and outages.
|
||||
|
||||
|
||||
### Key Metrics
|
||||
|
||||
Before we jump into the components of proper monitoring, it’s important to identify precisely _what_ we want and need to monitor: we want to monitor a microservice, but what does that _actually_ mean? A microservice isn’t an individual object that we can follow or track, it cannot be isolated and quarantined—it’s far more complicated than that. Deployed across dozens, if not hundreds, of servers, the behavior of a microservice is the sum of its behavior across all of its instantiations, which isn’t the easiest thing to quantify. The key is identifying which properties of a microservice are necessary and sufficient for describing its behavior, and then determining what changes in those properties tell us about the overall status and health of the microservice. We’ll call these properties _key metrics_ .
|
||||
|
||||
There are two types of key metrics: host and infrastructure metrics, and microservice metrics. Host and infrastructure metrics are those that pertain to the status of the infrastructure and the servers on which the microservice is running, while microservice metrics are metrics that are unique to the individual microservice. In terms of the four-layer model of the microservice ecosystem as described in [Chapter 1, _Microservices_ ][9], host and infrastructure metrics are metrics belonging to layers 1–3, while microservice metrics are those belonging to layer 4.
|
||||
|
||||
Separating key metrics into these two different types is important both organizationally and technically. Host and infrastructure metrics often affect more than one microservice: for example, if there is a problem with a particular server, and the microservice ecosystem shares the hardware resources among multiple microservices, host-level key metrics will be relevant to every microservice team that has a microservice deployed to that host. Likewise, microservice-specific metrics will rarely be applicable or useful to anyone but the team of developers working on that particular microservice. Teams should monitor both types of key metrics (that is, all metrics relevant to their microservice), and any metrics relevant to multiple microservices should be monitored and shared between the appropriate teams.
|
||||
|
||||
The host and infrastructure metrics that should be monitored for each microservice are the CPU utilized by the microservice on each host, the RAM utilized by the microservice on each host, the available threads, the microservice’s open file descriptors (FD), and the number of database connections that the microservice has to any databases it uses. Monitoring these key metrics should be done in such a way that the status of each metric is accompanied by information about the infrastructure and the microservice. This means that monitoring should be granular enough that developers can know the status of the key metrics for their microservice on any particular host and across all of the hosts that it runs on. For example, developers should be able to know how much CPU their microservice is using on one particular host _and_ how much CPU their microservice is using across all hosts it runs on.
|
||||
|
||||
### Monitoring Host-Level Metrics When Resources Are Abstracted
|
||||
|
||||
Some microservice ecosystems may use cluster management applications (like Mesos) in which the resources (CPU, RAM, etc.) are abstracted away from the host level. Host-level metrics won’t be available in the same way to developers in these situations, but all key metrics for the microservice overall should still be monitored by the microservice team.
|
||||
|
||||
Determining the necessary and sufficient key metrics at the microservice level is a bit more complicated because it can depend on the particular language that the microservice is written in. Each language comes with its own special way of processing tasks, for example, and these language-specific features must be monitored closely in the majority of cases. Consider a Python service that utilizes uwsgi workers: the number of uwsgi workers is a necessary key metric for proper monitoring.
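As a crude illustration of such a language-specific metric, the number of uwsgi workers on a host can be sampled from the process table; this is only a sketch, and note that the count includes the uwsgi master process:

```
# count processes whose command line mentions uwsgi
pgrep -c -f uwsgi
```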
|
||||
|
||||
In addition to language-specific key metrics, we also must monitor the availability of the service, the service-level agreement (SLA) of the service, latency (of both the service as a whole and its API endpoints), success of API endpoints, responses and average response times of API endpoints, the services (clients) from which API requests originate (along with which endpoints they send requests to), errors and exceptions (both handled and unhandled), and the health and status of dependencies.
|
||||
|
||||
Importantly, all key metrics should be monitored everywhere that the application is deployed. This means that every stage of the deployment pipeline should be monitored. Staging must be closely monitored in order to catch any problems before a new candidate for production (a new build) is deployed to servers running production traffic. It almost goes without saying that all deployments to production servers should be monitored carefully, both in the canary and production deployment phases. (For more information on deployment pipelines, see [Chapter 3, _Stability and Reliability_ ][10].)
|
||||
|
||||
Once the key metrics for a microservice have been identified, the next step is to capture the metrics emitted by your service. Capture them, and then log them, graph them, and alert on them. We’ll cover each of these steps in the following sections.
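As a minimal sketch of capturing two of these metrics at once, curl can report both the status code and the total response time of an endpoint (the URL below is a made-up example):

```
curl -o /dev/null -s -w '%{http_code} %{time_total}\n' http://my-service:8080/health
```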
|
||||
|
||||
|
||||
###### Summary of Key Metrics
|
||||
|
||||
**Host and infrastructure key metrics:**
|
||||
|
||||
* CPU
|
||||
|
||||
* RAM
|
||||
|
||||
* Threads
|
||||
|
||||
* File descriptors
|
||||
|
||||
* Database connections
|
||||
|
||||
**Microservice key metrics:**
|
||||
|
||||
* Language-specific metrics
|
||||
|
||||
* Availability
|
||||
|
||||
* Latency
|
||||
|
||||
* Endpoint success
|
||||
|
||||
* Endpoint responses
|
||||
|
||||
* Endpoint response times
|
||||
|
||||
* Clients
|
||||
|
||||
* Errors and exceptions
|
||||
|
||||
* Dependencies
|
||||
|
||||
### Logging
|
||||
|
||||
_Logging_ is the first component of production-ready monitoring. It begins and belongs in the codebase of each microservice, nestled deep within the code of each service, capturing all of the information necessary to describe the state of the microservice. In fact, describing the state of the microservice at any given time in the recent past is the ultimate goal of logging.
|
||||
|
||||
One of the benefits of microservice architecture is the freedom it gives developers to deploy new features and code changes frequently, and one of the consequences of this newfound developer freedom and increased development velocity is that the microservice is always changing. In most cases, the service will not be the same service it was 12 hours ago, let alone several days ago, and reproducing any problems will be impossible. When faced with a problem, often the only way to determine the root cause of an incident or outage is to comb through the logs, discover the state of the microservice at the time of the outage, and figure out why the service failed in that state. Logging needs to be such that developers can determine from the logs exactly what went wrong and where things fell apart.
|
||||
|
||||
### Logging Without Microservice Versioning
|
||||
|
||||
Microservice versioning is often discouraged because it can lead to other (client) services pinning to specific versions of a microservice that may not be the best or most updated version of the microservice. Without versioning, determining the state of a microservice when a failure or outage occurred can be difficult, but thorough logging can prevent this from becoming a problem: if the logging is good enough that state of a microservice at the _time_ of an outage can be sufficiently known and understood, the lack of versioning ceases to be a hindrance to quick and effective mitigation and resolution.
|
||||
|
||||
Determining precisely _what_ to log is specific to each microservice. The best guidance on determining what needs to be logged is, somewhat unfortunately, necessarily vague: log whatever information is essential to describing the state of the service at a given time. Luckily, we can narrow down which information is necessary by restricting our logging to whatever can be contained in the code of the service. Host-level and infrastructure-level information won’t (and shouldn’t) be logged by the application itself, but by services and tools running the application platform. Some microservice-level key metrics and information, like hashed user IDs and request and response details can and should be located in the microservice’s logs.
|
||||
|
||||
There are, of course, some things that _should never, ever be logged_ . Logs should never contain identifying information, such as names of customers, Social Security numbers, and other private data. They should never contain information that could present a security risk, such as passwords, access keys, or secrets. In most cases, even seemingly innocuous things like user IDs and usernames should not be logged unless encrypted.
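A small sketch of the hashing idea: derive a stable but non-identifying reference before the value ever reaches a log line (the user ID here is invented):

```
# log the digest, never the raw identifier
echo -n "user-12345" | sha256sum | awk '{print $1}'
```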
|
||||
|
||||
At times, logging at the individual microservice level will not be enough. As we’ve seen throughout this book, microservices do not live alone, but within complex chains of clients and dependencies within the microservice ecosystem. While developers can try their best to log and monitor everything important and relevant to their service, tracking and logging requests and responses throughout the entire client and dependency chains from end-to-end can illuminate important information about the system that would otherwise go unknown (such as total latency and availability of the stack). To make this information accessible and visible, building a production-ready microservice ecosystem requires tracing each request through the entire stack.
|
||||
|
||||
The reader might have noticed at this point that it appears that a lot of information needs to be logged. Logs are data, and logging is expensive: they are expensive to store, they are expensive to access, and both storing and accessing logs comes with the additional cost associated with making expensive calls over the network. The cost of storing logs may not seem like much for an individual microservice, but if the logging needs of all the microservices within a microservice ecosystem are added together, the cost is rather high.
|
||||
|
||||
###### Warning
|
||||
|
||||
### Logs and Debugging
|
||||
|
||||
Avoid adding debugging logs in code that will be deployed to production—such logs are very costly. If any logs are added specifically for the purpose of debugging, developers should take great care to ensure that any branch or build containing these additional logs does not ever touch production.
|
||||
|
||||
Logging needs to be scalable, it needs to be available, and it needs to be easily accessible _and_ searchable. To keep the cost of logs down and to ensure scalability and high availability, it’s often necessary to impose per-service logging quotas along with limits and standards on what information can be logged, how many logs each microservice can store, and how long the logs will be stored before being deleted.
|
||||
|
||||
|
||||
### Dashboards
|
||||
|
||||
Every microservice must have at least one _dashboard_ where all key metrics (such as hardware utilization, database connections, availability, latency, responses, and the status of API endpoints) are collected and displayed. A dashboard is a graphical display that is updated in real time to reflect all the most important information about a microservice. Dashboards should be easily accessible, centralized, and standardized across the microservice ecosystem.
|
||||
|
||||
Dashboards should be easy to interpret so that an outsider can quickly determine the health of the microservice: anyone should be able to look at the dashboard and know immediately whether or not the microservice is working correctly. This requires striking a balance between overloading a viewer with information (which would render the dashboard effectively useless) and not displaying enough information (which would also make the dashboard useless): only the necessary minimum of information about key metrics should be displayed.
|
||||
|
||||
A dashboard should also serve as an accurate reflection of the overall quality of monitoring of the entire microservice. Any key metric that is alerted on should be included in the dashboard (we will cover this in the next section): the exclusion of any key metric in the dashboard will reflect poor monitoring of the service, while the inclusion of metrics that are not necessary will reflect a neglect of alerting (and, consequently, monitoring) best practices.
|
||||
|
||||
There are several exceptions to the rule against inclusion of nonkey metrics. In addition to key metrics, information about each phase of the deployment pipeline should be displayed, though not necessarily within the same dashboard. Developers working on microservices that require monitoring a large number of key metrics may opt to set up separate dashboards for each deployment phase (one for staging, one for canary, and one for production) to accurately reflect the health of the microservice at each deployment phase: since different builds will be running on the deployment phases simultaneously, accurately reflecting the health of the microservice in a dashboard might require approaching dashboard design with the goal of reflecting the health of the microservice at a particular deployment phase (treating them almost as different microservices, or at least as different instantiations of a microservice).
|
||||
|
||||
###### Warning
|
||||
|
||||
### Dashboards and Outage Detection
|
||||
|
||||
Even though dashboards can illuminate anomalies and negative trends of a microservice’s key metrics, developers should never need to watch a microservice’s dashboard in order to detect incidents and outages. Doing so is an anti-pattern that leads to deficiencies in alerting and overall monitoring.
|
||||
|
||||
To assist in determining problems introduced by new deployments, it helps to include information about when a deployment occurred in the dashboard. The most effective and useful way to accomplish this is to make sure that deployment times are shown within the graphs of each key metric. Doing so allows developers to quickly check graphs after each deployment to see if any strange patterns emerge in any of the key metrics.
|
||||
|
||||
Well-designed dashboards also give developers an easy, visual way to detect anomalies and determine alerting thresholds. Very slight or gradual changes or disturbances in key metrics run the risk of not being caught by alerting, but a careful look at an accurate dashboard can illuminate anomalies that would otherwise go undetected. Alerting thresholds, which we will cover in the next section, are notoriously difficult to determine, but can be set appropriately when historical data on the dashboard is examined: developers can see normal patterns in key metrics, view spikes in metrics that occurred with outages (or led to outages) in the past, and then set thresholds accordingly.
|
||||
|
||||
|
||||
|
||||
|
||||
### Alerting
|
||||
|
||||
The third component of monitoring a production-ready microservice is real-time _alerting_ . The detection of failures, as well as the detection of changes within key metrics that could lead to a failure, is accomplished through alerting. To ensure this, all key metrics—host-level metrics, infrastructure metrics, and microservice-specific metrics—should be alerted on, with alerts set at various thresholds. Effective and actionable alerting is essential to preserving the availability of a microservice and preventing downtime.
|
||||
|
||||
|
||||
|
||||
### Setting up Effective Alerting
|
||||
|
||||
Alerts must be set up for all key metrics. Any change in a key metric at the host level, infrastructure level, or microservice level that could lead to an outage, cause a spike in latency, or somehow harm the availability of the microservice should trigger an alert. Importantly, alerts should also be triggered whenever a key metric is _not_ seen.
|
||||
|
||||
All alerts should be useful: they should be defined by good, signal-providing thresholds. Three types of thresholds should be set for each key metric, and have both upper and lower bounds: _normal_ , _warning_ , and _critical_ . Normal thresholds reflect the usual, appropriate upper and lower bounds of each key metric and shouldn’t ever trigger an alert. Warning thresholds on each key metric will trigger alerts when there is a deviation from the norm that could lead to a problem with the microservice; warning thresholds should be set such that they will trigger alerts _before_ any deviations from the norm cause an outage or otherwise negatively affect the microservice. Critical thresholds should be set based on which upper and lower bounds on key metrics actually cause an outage, cause latency to spike, or otherwise hurt a microservice’s availability. In an ideal world, warning thresholds should trigger alerts that lead to quick detection, mitigation, and resolution before any critical thresholds are reached. In each category, thresholds should be high enough to avoid noise, but low enough to catch any and all real problems with key metrics.
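To make the three-threshold idea concrete, here is a toy latency check in shell; the URL and the warning/critical values are invented for illustration and would in practice be derived from your own historical data:

```
#!/bin/bash
# probe an endpoint and classify the latency against two thresholds
URL="http://my-service:8080/health"
WARN=0.5   # seconds: deviation from the norm worth investigating
CRIT=2.0   # seconds: level known to hurt availability

latency=$(curl -o /dev/null -s -w '%{time_total}' "$URL")

if awk -v l="$latency" -v c="$CRIT" 'BEGIN { exit !(l > c) }'; then
    echo "CRITICAL: latency ${latency}s exceeds ${CRIT}s"
    exit 2
elif awk -v l="$latency" -v w="$WARN" 'BEGIN { exit !(l > w) }'; then
    echo "WARNING: latency ${latency}s exceeds ${WARN}s"
    exit 1
fi
echo "OK: latency ${latency}s"
```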
|
||||
|
||||
### Determining Thresholds Early in the Lifecycle of a Microservice
|
||||
|
||||
Thresholds for key metrics can be very difficult to set without historical data. Any thresholds set early in a microservice’s lifecycle run the risk of either being useless or triggering too many alerts. To determine the appropriate thresholds for a new microservice (or even an old one), developers can run load testing on the microservice to gauge where the thresholds should lie. Running "normal" traffic loads through the microservice can determine the normal thresholds, while running larger-than-expected traffic loads can help determine warning and critical thresholds.
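A simple way to generate such traffic loads is ApacheBench; a sketch, with invented numbers and URL:

```
# 1,000 requests at a concurrency of 10; read the latency
# percentiles from the report to inform your thresholds
ab -n 1000 -c 10 http://my-service:8080/health
```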
|
||||
|
||||
All alerts need to be actionable. Nonactionable alerts are those that are triggered and then resolved (or ignored) by the developer(s) on call for the microservice because they are not important, not relevant, do not signify that anything is wrong with the microservice, or alert on a problem that cannot be resolved by the developer(s). Any alert that cannot be immediately acted on by the on-call developer(s) should be removed from the pool of alerts, reassigned to the relevant on-call rotation, or (if possible) changed so that it becomes actionable.
|
||||
|
||||
Some of the key microservice metrics run the risk of being nonactionable. For example, alerting on the availability of dependencies can easily lead to nonactionable alerts if dependency outages, increases in dependency latency, or dependency downtime do not require any action to be taken by their client(s). If no action needs to be taken, then the thresholds should be set appropriately, or in more extreme cases, no alerts should be set on dependencies at all. However, if any action at all should be taken, even something as small as contacting the dependency’s on-call or development team in order to alert them to the issue and/or coordinate mitigation and resolution, then an alert should be triggered.
|
||||
|
||||
|
||||
### Handling Alerts
|
||||
|
||||
Once an alert has been triggered, it needs to be handled quickly and effectively. The root cause of the triggered alert should be mitigated and resolved. To quickly and effectively handle alerts, there are several steps that can be taken.
|
||||
|
||||
The first step is to create step-by-step instructions for each known alert that detail how to triage, mitigate, and resolve each alert. These step-by-step alert instructions should live within an on-call runbook within the centralized documentation of each microservice, making them easily accessible to anyone who is on call for the microservice (more details on runbooks can be found in [Chapter 7, _Documentation and Understanding_ ][6]). Runbooks are crucial to the monitoring of a microservice: they allow any on-call developer to have step-by-step instructions on how to mitigate and resolve the root causes of each alert. Since each alert is tied to a deviation in a key metric, runbooks can be written so that they address each key metric, known causes of deviations from the norm, and how to go about debugging the problem.
|
||||
|
||||
Two types of on-call runbooks should be created. The first are runbooks for host-level and infrastructure-level alerts that should be shared between the whole engineering organization—these should be written for every key host-level and infrastructure-level metric. The second are on-call runbooks for specific microservices that have step-by-step instructions regarding microservice-specific alerts triggered by changes in key metrics; for example, a spike in latency should trigger an alert, and there should be step-by-step instructions in the on-call runbook that clearly document how to debug, mitigate, and resolve spikes in the microservice’s latency.
|
||||
|
||||
The second step is to identify alerting anti-patterns. If the microservice on-call rotation is overwhelmed by alerts yet the microservice appears to work as expected, then any alerts that are seen more than once but that can be easily mitigated and/or resolved should be automated away. That is, build the mitigation and/or resolution steps into the microservice itself. This holds for every alert, and writing step-by-step instructions for alerts within on-call runbooks allows executing on this strategy to be rather effective. In fact, any alert that, once triggered, requires a simple set of steps to be taken in order to be mitigated and resolved, can be easily automated away. Once this level of production-ready monitoring has been established, a microservice should never experience the same exact problem twice.
|
||||
|
||||
### On-Call Rotations
|
||||
|
||||
In a microservice ecosystem, the development teams themselves are responsible for the availability of their microservices. Where monitoring is concerned, this means that developers need to be on call for their own microservices. The goal of each developer on-call for a microservice needs to be clear: they are to detect, mitigate, and resolve any issue that arises with the microservice during their on call shift before the issue causes an outage for their microservice or impacts the business itself.
|
||||
|
||||
In some larger engineering organizations, site reliability engineers, DevOps, or other operations engineers may take on the responsibility for monitoring and on call, but this requires each microservice to be relatively stable and reliable before the on-call responsibilities can be handed off to another team. In most microservice ecosystems, microservices rarely reach this high level of stability because, as we’ve seen throughout the previous chapters, microservices are constantly changing. In a microservice ecosystem, developers need to bear the responsibility of monitoring the code that they deploy.
|
||||
|
||||
Designing good on-call rotations is crucial and requires the involvement of the entire team. To prevent burnout, on-call rotations should be both brief and shared: no fewer than two developers should ever be on call at one time, and on-call shifts should last no longer than one week and be spaced no more frequently than one month apart.
|
||||
|
||||
The on-call rotations of each microservice should be internally publicized and easily accessible. If a microservice team is experiencing issues with one of their dependencies, they should be able to track down the on-call engineers for the microservice and contact them very quickly. Hosting this information in a centralized place helps to make developers more effective in triaging problems and preventing outages.
|
||||
|
||||
Developing standardized on-call procedures across an engineering organization will go a long way toward building a sustainable microservice ecosystem. Developers should be trained about how to approach their on-call shifts, be made aware of on-call best practices, and be ramped up for joining the on-call rotation very quickly. Standardizing this process and making on-call expectations completely clear to every developer will prevent the burnout, confusion, and frustration that usually accompanies any mention of joining an on-call rotation.
|
||||
|
||||
### Evaluate Your Microservice
|
||||
|
||||
Now that you have a better understanding of monitoring, use the following list of questions to assess the production-readiness of your microservice(s) and microservice ecosystem. The questions are organized by topic, and correspond to the sections within this chapter.
|
||||
|
||||
|
||||
### Key Metrics
|
||||
|
||||
* What are this microservice’s key metrics?
|
||||
|
||||
* What are the host and infrastructure metrics?
|
||||
|
||||
* What are the microservice-level metrics?
|
||||
|
||||
* Are all the microservice’s key metrics monitored?
|
||||
|
||||
### Logging
|
||||
|
||||
* What information does this microservice need to log?
|
||||
|
||||
* Does this microservice log all important requests?
|
||||
|
||||
* Does the logging accurately reflect the state of the microservice at any given time?
|
||||
|
||||
* Is this logging solution cost-effective and scalable?
|
||||
|
||||
### Dashboards
|
||||
|
||||
* Does this microservice have a dashboard?
|
||||
|
||||
* Is the dashboard easy to interpret? Are all key metrics displayed on the dashboard?
|
||||
|
||||
* Can I determine whether or not this microservice is working correctly by looking at the dashboard?
|
||||
|
||||
### Alerting
|
||||
|
||||
* Is there an alert for every key metric?
|
||||
|
||||
* Are all alerts defined by good, signal-providing thresholds?
|
||||
|
||||
* Are alert thresholds set appropriately so that alerts will fire before an outage occurs?
|
||||
|
||||
* Are all alerts actionable?
|
||||
|
||||
* Are there step-by-step triage, mitigation, and resolution instructions for each alert in the on-call runbook?
|
||||
|
||||
### On-Call Rotations
|
||||
|
||||
* Is there a dedicated on-call rotation responsible for monitoring this microservice?
|
||||
|
||||
* Is there a minimum of two developers on each on-call shift?
|
||||
|
||||
* Are there standardized on-call procedures across the engineering organization?
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
About the author:
|
||||
|
||||
Susan J. Fowler is the author of Production-Ready Microservices. She is currently an engineer at Stripe. Previously, Susan worked on microservice standardization at Uber, developed application platforms and infrastructure at several small startups, and studied particle physics at the University of Pennsylvania.
|
||||
|
||||
----------------------------
|
||||
|
||||
via: https://www.oreilly.com/learning/monitoring-a-production-ready-microservice
|
||||
|
||||
Author: [Susan Fowler][a]
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
|
||||
|
||||
[a]:https://www.oreilly.com/people/susan_fowler
@ -1,3 +1,4 @@

translating by chenxinlong

[A Programmer’s Introduction to Unicode][18]
============================================================

@ -223,7 +224,7 @@ You can contact me at nathaniel dot reed at gmail dot com, or follow me on Twitt

via: http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311

作者:[Nathan][a]
译者:[chenxinlong](https://github.com/chenxinlong)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,122 +0,0 @@

# How to work around video and subtitle embed errors

This is going to be a slightly weird tutorial. The background story is as follows. Recently, I created a bunch of [sweet][1] [parody][2] [clips][3] of the [Risitas y las paelleras][4] sketch, famous for the insane laughter of its protagonist, Risitas. As always, I had them uploaded to Youtube, but from the moment I decided on what subtitles to use to the moment when the videos finally became available online, there was a long and twisty journey.

In this guide, I would like to present several typical issues that you may encounter when creating your own media, mostly with subtitles and the subsequent upload to media sharing portals, specifically Youtube, and how you can work around those. After me.

### The background story

My software of choice for video editing is Kdenlive, which I started using when I created the most silly [Frankenstein][5] clip, and it's been my loyal companion ever since. Normally, I render files to the WebM container, with the VP8 video codec and Vorbis audio codec, because that's what Google likes. Indeed, I had no issues with the roughly 40 different clips I uploaded in the last seven-odd years.





However, after I completed my Risitas & Linux project, I was in a bit of a predicament. The video file and the subtitle file were still two separate entities, and I needed somehow to put them together. My original article on working with subtitles mentions Avidemux and Handbrake, and both of these are valid options.

However, I was not too happy with the output generated by either one of these, and for a variety of reasons, something was ever so slightly off. Avidemux did not handle the video codecs well, whereas Handbrake omitted a couple of lines of subtitle text from the final product, and the font was ugly. Solvable, but not the topic for today.

Therefore, I decided to use VideoLAN (VLC) to embed subtitles onto the video. There are several ways to do this. You can use the Media > Convert/Save option, but this one does not have everything we need. Instead, you should use Media > Stream, which comes with a more fully fledged wizard, and it also offers an editable summary of the transcoding options, which we DO need - see my [tutorial][6] on subtitles for this please.

### Errors!

The process of embedding subtitles is not trivial. You will most likely encounter several problems along the way. This guide should help you work around these so you can focus on your work and not waste time debugging weird software errors. Anyhow, here's a small but probable collection of issues you will face while working with subtitles in VLC. Trial & error, but also nerdy design.

### No playable streams

You have probably chosen weird output settings. You might want to double-check that you have selected the right video and audio codecs. Remember, too, that some media players may not have all the codecs, and make sure you test on the system you want these clips to play on.



### Subtitles overlaid twice

This can happen if you check the box that reads Use a subtitle file in the first step of the streaming media wizard. Just select the file you need and click Stream. Leave the box unchecked.



### No subtitle output is generated

This can happen for two main reasons. One, you have selected the wrong encapsulation format. Do make sure the subtitles are marked correctly on the profile page when you edit it before proceeding. If the format does not support subtitles, it might not work.



Two, you may have left the subtitle codec render enabled in the final output. You do not need this. You only need to overlay the subtitles onto the video clip. Please check the generated stream output string and delete an option that reads scodec=<something> before you click the Stream button.



### Missing codecs + workaround

This is a common [bug][7] due to how experimental codecs are implemented, and you will most likely see it if you choose the following profile: Video - H.264 + AAC (MP4). The file will be rendered, and if you selected subtitles, they will be overlaid, too, but without any audio. However, we can fix this with a hack.





One possible hack is to start VLC from the command line with the --sout-ffmpeg-strict=-2 option (might work). The other, more sure-fire workaround is to take the audio-less video with the subtitles overlaid and re-render it through Kdenlive, using the original project render (the one without subtitles) as the audio source. Sounds complicated, so in detail:

* Move existing clips (containing audio) from video to audio. Delete the rest.
* Alternatively, use the rendered WebM file as your audio source.
* Add a new clip - the one we created with embedded subtitles AND no audio.
* Place the clip as the new video.
* Render as WebM again.



Using other types of audio codecs will most likely work (e.g. MP3), and you will have a complete project with video, audio and subtitles. If you're happy that nothing is missing, you can now upload to Youtube. But then ...

### Youtube video manager & unknown format

If you're trying to upload a non-WebM clip (say MP4), you might get an unspecified error that your clip does not meet the media format requirements. I was not sure why VLC generated a non-Youtube-compliant file. However, again, the fix is easy. Use Kdenlive to recreate the video, and this should result in a file that has all the right meta fields and whatnot that Youtube likes. Back to my original story and the 40-odd clips created through Kdenlive this way.

P.S. If your clip has valid audio, then just re-run it through Kdenlive. If it does not, do the video/audio trick from before. Mute clips as necessary. In the end, this is just like overlay, except you're using the video source from one clip and the audio from another for the final render. Job done.

### More reading

I do not wish to repeat myself or spam unnecessarily with links. I have loads of material on VLC in the Software & Security section, so you might want to consult those pieces. The earlier mentioned article on VLC & Subtitles has links to about half a dozen related tutorials, covering additional topics like streaming, logging, video rotation, remote file access, and more. I'm sure you can work the search engine like pros.

### Conclusion

I hope you find this guide helpful. It covers a lot, and I tried to make it linear and simple and address as many pitfalls as enterprising streamers and subtitle lovers may face when working with VLC. It's all about containers and codecs, but also the fact there are virtually no standards in the media world, and when you go from one format to another, sometimes you may encounter corner cases.

If you do hit an error or three, the tips and tricks here should help you solve at least some of them, including unplayable streams, missing or duplicate subtitles, missing codecs and the wicked Kdenlive workaround, Youtube upload errors, hidden VLC command-line options, and a few other extras. Quite a lot for a single piece of text, right. Luckily, all good stuff. Take care, children of the Internet. And if you have any other requests as to what my future VLC articles should cover, do feel liberated enough to send an email.

Cheers.

--------------------------------------------------------------------------------

作者简介:

My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?

From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.

I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.

Please see my full list of open-source projects, publications and patents; just scroll down.

For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.

-------------

via: http://www.dedoimedo.com/computers/vlc-subtitles-errors.html

作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.dedoimedo.com/faq.html

[1]:https://www.youtube.com/watch?v=MpDdGOKZ3dg
[2]:https://www.youtube.com/watch?v=KHG6fXEba0A
[3]:https://www.youtube.com/watch?v=TXw5lRi97YY
[4]:https://www.youtube.com/watch?v=cDphUib5iG4
[5]:http://www.dedoimedo.com/computers/frankenstein-media.html
[6]:http://www.dedoimedo.com/computers/vlc-subtitles.html
[7]:https://trac.videolan.org/vlc/ticket/6184
@ -1,333 +0,0 @@

ictlyh translating

Our guide to a Golang logs world
============================================================



Do you ever get tired of solutions that use convoluted languages, that are complex to deploy, and for which building takes forever? Golang is the solution to these very issues, being as fast as C and as simple as Python.

But how do you monitor your application with Golang logs? There are no exceptions in Golang, only errors. Your first impression might thus be that developing a Golang logging strategy is not going to be such a straightforward affair. The lack of exceptions is not in fact that troublesome, as exceptions have lost their exceptionality in many programming languages: they are often overused to the point of being overlooked.

We’ll first cover Golang logging basics before going the extra mile to discuss Golang log standardization, the significance of metadata, and ways to minimize the performance impact of Golang logging. By then, you’ll be able to track a user’s behavior across your application, quickly identify failing components in your project, and monitor overall performance and user happiness.

### I. Basic Golang logging

### 1) Use the Golang “log” library

Golang provides you with a native [logging library][3] simply called “log”. Its logger is perfectly suited to track simple behaviors, such as adding a timestamp before an error message, by using the available [flags][4].

Here is a basic example of how to log an error in Golang (completed here into a runnable program; the div helper stands in for the function elided in the original snippet):

```
package main

import (
	"errors"
	"fmt"
	"log"
)

// div is an illustrative stand-in: it returns an error when dividing by 0
func div(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	/* local variable definition */
	a, b := 10.0, 0.0

	/* call the division function, which returns an error if we divide by 0 */
	ret, err := div(a, b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ret)
}
```

And here is what you get if you try to divide by 0:



In order to quickly test a function in Golang you can use the [go playground][5].

To make sure your logs are easily accessible at all times, we recommend writing them to a file:

```
package main

import (
	"log"
	"os"
)

func main() {
	// create your file with desired read/write permissions
	f, err := os.OpenFile("filename", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
	if err != nil {
		log.Fatal(err)
	}
	// defer closing until you're done with it, not because you think it's idiomatic!
	defer f.Close()
	// set the output of logs to f
	log.SetOutput(f)
	// test case
	log.Println("check to make sure it works")
}
```

You can find a complete tutorial for the Golang log library [here][6], and the complete list of available functions within the “log” [library][7].

So now you should be all set to log errors and their root causes.

But logs can also help you piece an activity stream together, identify an error context that needs fixing, or investigate how a single request impacts several layers and APIs in your system. And to get this enhanced type of vision, you first need to enrich your Golang logs with as much context as possible, as well as standardize the format you use across your project. This is where the Golang native library reaches its limits. The most widely used libraries are then [glog][8] and [logrus][9]. It must be said though that many good libraries are available. So if you’re already using one that uses JSON format you don’t necessarily have to change libraries, as we’ll explain just below.

### II. A consistent format for your Golang logs

### 1) The structuring advantage of JSON format

Structuring your Golang logs in one project or across multiple microservices is probably the hardest part of the journey, even though it _could_ seem trivial once done. Structuring your logs is what makes them especially readable by machines (cf. our [log collection best practices blog post][10]). Flexibility and hierarchy are at the very core of the JSON format, so information can be easily parsed and manipulated by humans as well as by machines.

Here is an example of how to log in JSON format with the [Logrus/Logmatic.io][11] library:

```
package main

import (
	log "github.com/Sirupsen/logrus"
	"github.com/logmatic/logmatic-go"
)

func main() {
	// use JSONFormatter
	log.SetFormatter(&logmatic.JSONFormatter{})
	// log an event as usual with logrus
	log.WithFields(log.Fields{"string": "foo", "int": 1, "float": 1.1}).Info("My first ssl event from golang")
}
```

Which comes out as:

```
{
	"date":"2016-05-09T10:56:00+02:00",
	"float":1.1,
	"int":1,
	"level":"info",
	"message":"My first ssl event from golang",
	"string":"foo"
}
```

### 2) Standardization of Golang logs

It really is a shame when the same error encountered in different parts of your code is registered differently in logs. Picture, for example, not being able to determine a web page loading status because of an error on one variable. One developer logged:

```
message: 'unknown error: cannot determine loading status from unknown error: missing or invalid arg value client'
```

While the other registered:

```
unknown error: cannot determine loading status - invalid client
```

A good solution to enforce log standardization is to create an interface between your code and the logging library. This standardization interface would contain pre-defined log messages for all the possible behaviors you want to add in your logs. Doing so prevents custom log messages that would not match your desired standard format, and in so doing facilitates log investigation.



As log formats are centralized, it becomes way easier to keep them up to date. If a new type of issue arises, it only needs to be added in one interface for every team member to use the exact same message.

The most basic example would be to add the logger name and an id before the Golang log message. Your code would then send “events” to this interface, which would transform them into Golang log messages (the Logger and Event definitions below are our own minimal additions, since the original snippet leaves them out):

```
// Logger wraps a logrus entry that already carries contextual fields
// such as logger.name (sketch; not part of the original snippet)
type Logger struct {
	entry *logrus.Entry
}

// The Event struct is pretty simple. We maintain an Id to be sure to
// retrieve simply all messages once they are logged
type Event struct {
	id      int
	message string
}

func (e Event) toString() string {
	return fmt.Sprintf("%05d - %s", e.id, e.message)
}

// The main part, we define all messages right here.
var (
	invalidArgMessage      = Event{1, "Invalid arg: %s"}
	invalidArgValueMessage = Event{2, "Invalid arg value: %s => %v"}
	missingArgMessage      = Event{3, "Missing arg: %s"}
)

// And here they are, all the log events that can be used in our app
func (l *Logger) InvalidArg(name string) {
	l.entry.Errorf(invalidArgMessage.toString(), name)
}
func (l *Logger) InvalidArgValue(name string, value interface{}) {
	l.entry.WithField("arg."+name, value).Errorf(invalidArgValueMessage.toString(), name, value)
}
func (l *Logger) MissingArg(name string) {
	l.entry.Errorf(missingArgMessage.toString(), name)
}
```

So if we use the previous example of the invalid argument value, we would get similar log messages:

```
time="2017-02-24T23:12:31+01:00" level=error msg="LoadPageLogger00003 - Missing arg: client - cannot determine loading status" arg.client=<nil> logger.name=LoadPageLogger
```

And in JSON format:

```
{"arg.client":null,"level":"error","logger.name":"LoadPageLogger","msg":"LoadPageLogger00003 - Missing arg: client - cannot determine loading status", "time":"2017-02-24T23:14:28+01:00"}
```

### III. The power of context in Golang logs

Now that the Golang logs are written in a structured and standardized format, time has come to decide which context and other relevant information should be added to them. Context and metadata are critical in order to be able to extract insights from your logs, such as following a user’s activity or workflow.

For instance, the hostname, appname and session parameters could be added as follows using the JSON format of the logrus library:

```
// For metadata, a common pattern is to re-use fields between logging
// statements by creating a contextualized logger
contextualizedLog := log.WithFields(log.Fields{
	"hostname": "staging-1",
	"appname":  "foo-app",
	"session":  "1ce3f6v",
})
contextualizedLog.Info("Simple event with global metadata")
```

Metadata can be seen as breadcrumbs. To better illustrate how important they are, let’s have a look at the use of metadata among several Golang microservices. You’ll clearly see how decisive it is to track users on your application. This is because you do not simply need to know that an error occurred, but also on which instance and what pattern created the error. So let’s imagine we have two microservices which are called sequentially. The contextual information is transmitted and stored in the headers:

```
func helloMicroService1(w http.ResponseWriter, r *http.Request) {
	client := &http.Client{}
	// This service is responsible for receiving all incoming user requests,
	// so we check whether it's a new user session or another call from
	// an existing session
	session := r.Header.Get("x-session")
	if session == "" {
		session = generateSessionId()
		// log something for the new session
	}
	// The track id is unique per request, so we generate a new one each time
	track := generateTrackId()
	// Call your 2nd microservice, adding the session/track
	reqService2, _ := http.NewRequest("GET", "http://localhost:8082/", nil)
	reqService2.Header.Add("x-session", session)
	reqService2.Header.Add("x-track", track)
	resService2, _ := client.Do(reqService2)
	// ...
```

So when the second service is called:

```
func helloMicroService2(w http.ResponseWriter, r *http.Request) {
	// Like in the first microservice, we check the session and generate a new track
	session := r.Header.Get("x-session")
	track := generateTrackId()
	// This time, we check if a track id is already set in the request;
	// if yes, it becomes the parent track
	parent := r.Header.Get("x-track")
	// Add meta to the response
	w.Header().Set("x-session", session)
	w.Header().Set("x-track", track)
	if parent == "" {
		w.Header().Set("x-parent", track)
	} else {
		w.Header().Set("x-parent", parent)
	}
	// Write the response body
	w.WriteHeader(http.StatusOK)
	io.WriteString(w, fmt.Sprintf(aResponseMessage, 2, session, track, parent))
}
```

Context and information relative to the initial query are now available in the second microservice, and a log message in JSON format looks like the following.

In the first microservice:

```
{"appname":"go-logging","level":"debug","msg":"hello from ms 1","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"UzWHRihF"}
```

Then in the second:

```
{"appname":"go-logging","level":"debug","msg":"hello from ms 2","parent":"UzWHRihF","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"DPRHBMuE"}
```

In the case of an error occurring in the second microservice, we are now able – thanks to the contextual information held in the Golang logs – to determine how it was called and what pattern created the error.

If you wish to dig deeper into Golang tracking possibilities, there are several libraries that offer tracking features, such as [Opentracing][12]. This specific library delivers an easy way to add tracing implementations in complex (or simple) architectures. It allows you to track user queries across the different steps of any process, as done below:


### IV. Performance impact of Golang logging

### 1) Do not log in goroutines

It is tempting to create a new logger per goroutine, but it should not be done. A goroutine is a lightweight thread used to accomplish a “simple” task; it should not be in charge of logging. Doing so could lead to concurrency issues, as using log.New() in each goroutine would duplicate the interface and all loggers would concurrently try to access the same io.Writer. Moreover, libraries usually use a dedicated goroutine for log writing, to limit the impact on your performance and avoid concurrent calls to the io.Writer.
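To make the safe pattern concrete, here is a small sketch of our own (not from the original post) in which a single logger is shared by several goroutines; the standard library’s log.Logger serializes writes internally, so no per-goroutine logger is needed:

```
package main

import (
	"log"
	"os"
	"sync"
)

func main() {
	// One logger for everyone; log.Logger guards its writer with a mutex
	logger := log.New(os.Stdout, "worker ", log.LstdFlags)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			logger.Printf("goroutine %d finished its task", id)
		}(i)
	}
	wg.Wait()
}
```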
### 2) Work with asynchronous libraries

While it is true that many Golang logging libraries are available, it’s important to note that most of them are synchronous (pseudo-asynchronous in fact), probably because, so far, logging has not seriously hurt performance for most users.

But as Kjell Hedström showed in [his experiment][13], using several threads that created millions of logs, asynchronous Golang logging could lead to a 40% performance increase in the worst-case scenario. So logging comes at a cost, and can have consequences for your application performance. If you do not handle such a volume of logs, using a pseudo-asynchronous Golang logging library might be efficient enough. But if you’re dealing with large amounts of logs or are keen on performance, Kjell Hedström’s asynchronous solution is interesting (despite the fact that you would probably have to develop it a bit, as it only contains the minimum required features).

### 3) Use severity levels to manage your Golang logs volume

Some logging libraries allow you to enable or disable specific loggers, which can come in handy. You might not need some specific levels of logs once in production, for example. Here is an example of how to disable a logger, following the glog pattern where loggers are defined as booleans (completed here into a runnable program):

```
package main

import "fmt"

type Log bool

func (l Log) Println(args ...interface{}) {
	fmt.Println(args...)
}

var debug Log = false

func main() {
	if debug {
		debug.Println("DEBUGGING")
	}
}
```

You can then define those boolean parameters in a configuration file and use them to enable or disable loggers.
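As a minimal sketch of that wiring (our own; APP_DEBUG is an invented variable name, and an environment variable stands in for a configuration file), the switch could be set at startup like this:

```
package main

import (
	"fmt"
	"os"
)

// Log is the boolean logger type from the example above
type Log bool

func (l Log) Println(args ...interface{}) {
	fmt.Println(args...)
}

func main() {
	// Derive the logger switch from the environment at startup
	var debug = Log(os.Getenv("APP_DEBUG") == "1")
	if debug {
		debug.Println("DEBUGGING")
	}
}
```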
Golang logging can be expensive without a good Golang logging strategy. Developers should resist the temptation to log almost everything – even if much of it is interesting! If the purpose of logging is to gather as much information as possible, it has to be done properly, in order to avoid the white noise of logs containing useless elements.

### V. Centralize Golang logs


If your application is deployed on several servers, the hassle of connecting to each one of them to investigate a phenomenon can be avoided. Log centralization does make a difference.

Using log shippers such as Nxlog for Windows, Rsyslog for Linux (it is installed by default), or Logstash and FluentD is the best way to do so. A log shipper’s only purpose is to send logs, so it handles connection failures and other issues you could face very well.

There is even a [Golang syslog package][14] that takes care of sending Golang logs to the syslog daemon for you.
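As an illustrative sketch (our own; the tag and facility are arbitrary choices), the standard library’s log/syslog package can be plugged straight into the “log” package:

```
package main

import (
	"log"
	"log/syslog"
)

func main() {
	// Connect to the local syslog daemon
	w, err := syslog.New(syslog.LOG_INFO|syslog.LOG_LOCAL0, "my-golang-app")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Every standard log call now ends up in syslog
	log.SetOutput(w)
	log.Println("this line is forwarded to the syslog daemon")
}
```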
### Hope you enjoyed your Golang logs tour

Thinking about your Golang logging strategy at the beginning of your project is important. Tracking a user is much easier if overall context can be accessed from anywhere in the code. Reading logs from different services when they are not standardized is painful. Planning ahead to spread the same user or request id through several microservices will later allow you to easily filter the information and follow an activity across your system.

Whether you’re building a large Golang project or several microservices also impacts your logging strategy. The main components of a large project should have their own Golang logger, named after their functionality. This enables you to instantly spot which part of the code the logs are coming from. However, with microservices or small Golang projects, fewer core components require their own logger. In each case though, the number of loggers should be kept below the number of core functionalities.

You’re now all set to quantify decisions about performance and user happiness with your Golang logs!

_Is there a specific coding language you want to read about? Let us know on Twitter [@logmatic][2]._

--------------------------------------------------------------------------------

via: https://logmatic.io/blog/our-guide-to-a-golang-logs-world/

作者:[Nils][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
[1]:https://twitter.com/logmatic?lang=en
[2]:http://twitter.com/logmatic
[3]:https://golang.org/pkg/log/
[4]:https://golang.org/pkg/log/#pkg-constants
[5]:https://play.golang.org/
[6]:https://www.goinggo.net/2013/11/using-log-package-in-go.html
[7]:https://golang.org/pkg/log/
[8]:https://github.com/google/glog
[9]:https://github.com/sirupsen/logrus
[10]:https://logmatic.io/blog/beyond-application-monitoring-discover-logging-best-practices/
[11]:https://github.com/logmatic/logmatic-go
[12]:https://github.com/opentracing/opentracing-go
[13]:https://sites.google.com/site/kjellhedstrom2/g2log-efficient-background-io-processign-with-c11/g2log-vs-google-s-glog-performance-comparison
[14]:https://golang.org/pkg/log/syslog/
@ -1,219 +0,0 @@

translating by chenxinlong

AWS cloud terminology
============================================================

* * *



_Understand the AWS cloud terminology of 71 services! Get acquainted with the terms used in the AWS world to start your AWS cloud career!_

* * *

AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It is one of the most famous cloud platforms to date. Due to its flexibility, availability, elasticity, scalability, and freedom from maintenance, many corporations are moving to the cloud. Since many companies use these services, it has become necessary for sysadmins and DevOps engineers to be aware of AWS.

This article aims at listing the services provided by AWS and explaining the terminology used in the AWS world.

As of today, AWS offers a total of 71 services, which are grouped into 17 groups as below:

* * *

_Compute_

Cloud computing, i.e. virtual server provisioning. This group provides the below services.

1. EC2 : EC2 stands for Elastic Compute Cloud. This service provides scalable [virtual machines per your requirement.][11]
2. EC2 Container Service : A high-performance, highly scalable service which allows running services on a clustered EC2 environment.
3. Lightsail : This service enables users to launch and manage virtual servers (EC2) very easily.
4. Elastic Beanstalk : This service manages capacity provisioning, load balancing, scaling, and health monitoring of your application automatically, thus reducing your management load.
5. Lambda : It allows you to run your code only when needed, without managing servers for it.
6. Batch : It enables users to run computing workloads (batches) in a customized, managed way.

* * *

_Storage_

Cloud storage, i.e. the cloud storage facilities provided by Amazon. This group includes:

1. S3 : S3 stands for Simple Storage Service (3 times S). It provides online storage to store/retrieve any data at any time, from anywhere.
2. EFS : EFS stands for Elastic File System. It is online storage which can be used with EC2 servers.
3. Glacier : A low-cost, slow-performance data storage solution mainly aimed at archives and long-term backups.
4. Storage Gateway : An interface which connects your on-premises applications (hosted outside AWS) with AWS storage.

* * *

_Database_

AWS also offers to host databases on its infrastructure, so that clients can benefit from the cutting-edge technology Amazon has for faster, more efficient, and more secure data processing. This group includes:

1. RDS : RDS stands for Relational Database Service. It helps to set up, operate, and manage a relational database in the cloud.
2. DynamoDB : A NoSQL database providing fast processing and high scalability.
3. ElastiCache : A way to manage an in-memory cache for your web applications, to make them run faster!
4. Redshift : A huge (petabyte-size), fully scalable data warehouse service in the cloud.

* * *

_Networking & Content Delivery_

As AWS provides cloud EC2 servers, it follows that networking comes into the picture too. Content delivery is used to serve files to users from the location geographically nearest to them; this is now a famous way of speeding up websites.

1. VPC : VPC stands for Virtual Private Cloud. It is your very own virtual network dedicated to your AWS account.
2. CloudFront : The content delivery network of AWS.
3. Direct Connect : A dedicated network link between your datacenter/premises and AWS, used to increase throughput, reduce network costs, and avoid connectivity issues which may arise with internet-based connectivity.
4. Route 53 : A cloud Domain Name System (DNS) web service.

* * *

_Migration_

A set of services to help you migrate from on-premises services to AWS. It includes:

1. Application Discovery Service : A service dedicated to analysing your servers, network, and applications to help and speed up migration.
2. DMS : DMS stands for Database Migration Service. It is used to migrate your data from an on-premises database to RDS, or to a database hosted on EC2.
3. Server Migration : Also called SMS (Server Migration Service), an agentless service which moves your workloads from on-premises to AWS.
4. Snowball : Intended for use when you want to transfer huge amounts of data in or out of AWS using physical storage appliances (rather than internet/network-based transfers).

* * *

_Developer Tools_

As the name suggests, a group of services helping developers code in an easier, better way on the cloud.

1. CodeCommit : A secure, scalable, managed source control service to host code repositories.
2. CodeBuild : A code builder in the cloud. It compiles and tests code and builds software packages for deployment.
3. CodeDeploy : A deployment service to automate application deployments to AWS servers or on-premises machines.
4. CodePipeline : This continuous delivery service enables coders to visualize and automate the steps of their application release process.
5. X-Ray : Analyses applications by tracing their requests and event calls.

* * *

_Management Tools_

A group of services which help you manage your web services in the AWS cloud.

1. CloudWatch : A monitoring service to monitor your AWS resources or applications.
2. CloudFormation : Infrastructure as code! A way of managing AWS-related infrastructure in a collective and orderly manner.
3. CloudTrail : An audit and compliance tool for your AWS account.
4. Config : AWS resource inventory, configuration history, and configuration change notifications, to enable security and governance.
5. OpsWorks : Automation for configuring and deploying EC2 or on-premises compute.
6. Service Catalog : Create and manage IT service catalogs which are approved for use in your or your company's account.
7. Trusted Advisor : An automated advisor that inspects your AWS infra and helps you run a better, money-saving setup.
8. Managed Service : Provides ongoing infrastructure management.

* * *

_Security, Identity & Compliance_

An important group of AWS services helping you secure your AWS space.

1. IAM : IAM stands for Identity and Access Management. It controls user access to your AWS resources and services.
2. Inspector : An automated security assessment service helping you improve the security and compliance of your apps on AWS.
3. Certificate Manager : Provision, manage, and deploy SSL/TLS certificates for AWS applications.
4. Directory Service : Microsoft Active Directory for AWS.
5. WAF & Shield : WAF stands for Web Application Firewall. It monitors and controls access to your content on CloudFront or a load balancer.
6. Compliance Reports : Compliance reporting for your AWS infra space, to make sure your apps and infra are compliant with your policies.

* * *

_Analytics_

Data analytics of your AWS space, to help you see, plan, and act on what happens in your account.

1. Athena : A SQL-based query service to analyse data stored in S3.
2. EMR : EMR stands for Elastic MapReduce. A service for big data processing and analysis.
3. CloudSearch : Search capability from AWS within your applications and services.
4. Elasticsearch Service : To create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS cloud.
5. Kinesis : Streams large amounts of data in real time.
6. Data Pipeline : Helps to move data between different AWS services.
7. QuickSight : Collect, analyse, and present insights from business data on AWS.

* * *

_Artificial Intelligence_

AI in AWS!

1. Lex : Helps to build conversational interfaces in applications, using voice and text.
2. Polly : A text-to-speech service.
3. Rekognition : Gives you the ability to add image analysis to applications.
4. Machine Learning : Provides algorithms to learn patterns in your data.

* * *

_Internet of Things_

This group makes AWS highly available to different connected devices.

1. AWS IoT : It lets connected hardware devices interact with AWS applications.

* * *

_Game Development_

As the name suggests, this group aims at game development.

1. Amazon GameLift : This service aims at deploying and managing dedicated game servers for session-based multiplayer games.

* * *

_Mobile Services_

A group of services mainly aimed at handheld devices.

1. Mobile Hub : Helps you to create mobile app backend features and integrate them into mobile apps.
2. Cognito : Controls mobile users' authentication and access to AWS from internet-connected devices.
3. Device Farm : A mobile app testing service which enables you to test apps across Android and iOS on real phones hosted by AWS.
4. Mobile Analytics : Measure, track, and analyze mobile app data on AWS.
5. Pinpoint : Targeted push notifications and mobile engagement.

* * *

_Application Services_

A group of services which can be used with your applications in AWS.

1. Step Functions : Define and coordinate the various functions of your applications.
2. SWF : SWF stands for Simple Workflow Service. A cloud workflow management service which helps developers coordinate and contribute at different stages of the application life cycle.
3. API Gateway : Helps developers to create, manage, and host APIs.
4. Elastic Transcoder : Helps developers convert media files for playback on various devices.

* * *

_Messaging_

Notification and messaging services in AWS.

1. SQS : SQS stands for Simple Queue Service. A fully managed message queuing service to communicate between services and apps in AWS.
2. SNS : SNS stands for Simple Notification Service. A push notification service to alert AWS users about their services in the AWS space.
3. SES : SES stands for Simple Email Service. A cost-effective email service from AWS for its own customers.

* * *

_Business Productivity_

A group of services to help boost your business productivity.

1. WorkDocs : A collaborative file sharing, storing, and editing service.
2. WorkMail : A secure business email and calendar service.
3. Amazon Chime : Online business meetings!

* * *

_Desktop & App Streaming_

Desktop and app streaming over the cloud.

1. WorkSpaces : A fully managed, secure desktop computing service in the cloud.
2. AppStream 2.0 : Stream desktop applications from the cloud.

--------------------------------------------------------------------------------

via: http://kerneltalks.com/virtualization/aws-cloud-terminology/

作者:[Shrikant Lavhate][a]
译者:[chenxinlong](https://github.com/chenxinlong)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://kerneltalks.com/virtualization/aws-cloud-terminology/
@ -0,0 +1,346 @@

ictlyh Translating

Writing a Linux Debugger Part 3: Registers and memory
============================================================

In the last post we added simple address breakpoints to our debugger. This time we’ll be adding the ability to read and write registers and memory, which will allow us to screw around with our program counter, observe state, and change the behaviour of our program.

* * *

### Series index

These links will go live as the rest of the posts are released.

1. [Setup][3]
2. [Breakpoints][4]
3. [Registers and memory][5]
4. [Elves and dwarves][6]
5. [Source and signals][7]
6. [Source-level stepping][8]
7. Source-level breakpoints
8. Stack unwinding
9. Reading variables
10. Next steps

* * *

### Registering our registers

Before we actually read any registers, we need to teach our debugger a bit about our target, which is x86_64. Alongside sets of general and special purpose registers, x86_64 has floating point and vector registers available. I’ll be omitting the latter two for simplicity, but you can choose to support them if you like. x86_64 also allows you to access some 64 bit registers as 32, 16, or 8 bit registers, but I’ll just be sticking to 64. Due to these simplifications, for each register we just need its name, its DWARF register number, and where it is stored in the structure returned by `ptrace`. I chose to have a scoped enum for referring to the registers, then I laid out a global register descriptor array with the elements in the same order as in the `ptrace` register structure.

```
enum class reg {
    rax, rbx, rcx, rdx,
    rdi, rsi, rbp, rsp,
    r8,  r9,  r10, r11,
    r12, r13, r14, r15,
    rip, rflags,    cs,
    orig_rax, fs_base,
    gs_base,
    fs, gs, ss, ds, es
};

constexpr std::size_t n_registers = 27;

struct reg_descriptor {
    reg r;
    int dwarf_r;
    std::string name;
};

const std::array<reg_descriptor, n_registers> g_register_descriptors {{
    { reg::r15, 15, "r15" },
    { reg::r14, 14, "r14" },
    { reg::r13, 13, "r13" },
    { reg::r12, 12, "r12" },
    { reg::rbp, 6, "rbp" },
    { reg::rbx, 3, "rbx" },
    { reg::r11, 11, "r11" },
    { reg::r10, 10, "r10" },
    { reg::r9, 9, "r9" },
    { reg::r8, 8, "r8" },
    { reg::rax, 0, "rax" },
    { reg::rcx, 2, "rcx" },
    { reg::rdx, 1, "rdx" },
    { reg::rsi, 4, "rsi" },
    { reg::rdi, 5, "rdi" },
    { reg::orig_rax, -1, "orig_rax" },
    { reg::rip, -1, "rip" },
    { reg::cs, 51, "cs" },
    { reg::rflags, 49, "eflags" },
    { reg::rsp, 7, "rsp" },
    { reg::ss, 52, "ss" },
    { reg::fs_base, 58, "fs_base" },
    { reg::gs_base, 59, "gs_base" },
    { reg::ds, 53, "ds" },
    { reg::es, 50, "es" },
    { reg::fs, 54, "fs" },
    { reg::gs, 55, "gs" },
}};
```

You can typically find the register data structure in `/usr/include/sys/user.h` if you’d like to look at it yourself, and the DWARF register numbers are taken from the [System V x86_64 ABI][11].

Now we can write a bunch of functions to interact with registers. We’d like to be able to read registers, write to them, retrieve a value from a DWARF register number, and look up registers by name and vice versa. Let’s start with implementing `get_register_value`:

```
uint64_t get_register_value(pid_t pid, reg r) {
    user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
    //...
}
```

Again, `ptrace` gives us easy access to the data we want. We just construct an instance of `user_regs_struct` and give that to `ptrace` alongside the `PTRACE_GETREGS` request.

Now we want to read `regs` depending on which register was requested. We could write a big switch statement, but since we’ve laid out our `g_register_descriptors` table in the same order as `user_regs_struct`, we can just search for the index of the register descriptor, and access `user_regs_struct` as an array of `uint64_t`s.[1][9]

```
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                       [r](auto&& rd) { return rd.r == r; });

return *(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors)));
```

The cast to `uint64_t` is safe because `user_regs_struct` is a standard layout type, but I think the pointer arithmetic is technically UB. No current compilers even warn about this and I’m lazy, but if you want to maintain utmost correctness, write a big switch statement.
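For reference, such a switch would map each enumerator to the corresponding member of `user_regs_struct` explicitly. A partial sketch of our own (only a few of the 27 cases shown; it assumes the `reg` enum from above and `<stdexcept>` is included):

```
uint64_t get_register_value_checked(const user_regs_struct& regs, reg r) {
    switch (r) {
    case reg::rax: return regs.rax;
    case reg::rbx: return regs.rbx;
    case reg::rip: return regs.rip;
    case reg::rsp: return regs.rsp;
    // ...one case per remaining register...
    default: throw std::out_of_range{"Unknown register"};
    }
}
```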
`set_register_value` is much the same; we just write to the location and write the registers back at the end:

```
void set_register_value(pid_t pid, reg r, uint64_t value) {
    user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [r](auto&& rd) { return rd.r == r; });

    *(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors))) = value;
    ptrace(PTRACE_SETREGS, pid, nullptr, &regs);
}
```

Next is lookup by DWARF register number. This time I’ll actually check for an error condition just in case we get some weird DWARF information:

```
uint64_t get_register_value_from_dwarf_register (pid_t pid, unsigned regnum) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [regnum](auto&& rd) { return rd.dwarf_r == regnum; });
    if (it == end(g_register_descriptors)) {
        throw std::out_of_range{"Unknown dwarf register"};
    }

    return get_register_value(pid, it->r);
}
```

Nearly finished; now we have register name lookups:

```
std::string get_register_name(reg r) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [r](auto&& rd) { return rd.r == r; });
    return it->name;
}

reg get_register_from_name(const std::string& name) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [name](auto&& rd) { return rd.name == name; });
    return it->r;
}
```

And finally we’ll add a simple helper to dump the contents of all registers:

```
void debugger::dump_registers() {
    for (const auto& rd : g_register_descriptors) {
        std::cout << rd.name << " 0x"
                  << std::setfill('0') << std::setw(16) << std::hex << get_register_value(m_pid, rd.r) << std::endl;
    }
}
```

As you can see, iostreams has a very concise interface for outputting hex data nicely[2][10]. Feel free to make an I/O manipulator to get rid of this mess if you like.

This gives us enough support to handle registers easily in the rest of the debugger, so we can now add this to our UI.

* * *

### Exposing our registers

All we need to do here is add a new command to the `handle_command` function. With the following code, users will be able to type `register read rax`, `register write rax 0x42` and so on.

```
else if (is_prefix(command, "register")) {
    if (is_prefix(args[1], "dump")) {
        dump_registers();
    }
    else if (is_prefix(args[1], "read")) {
        std::cout << get_register_value(m_pid, get_register_from_name(args[2])) << std::endl;
    }
    else if (is_prefix(args[1], "write")) {
        std::string val {args[3], 2}; //assume 0xVAL
        set_register_value(m_pid, get_register_from_name(args[2]), std::stol(val, 0, 16));
    }
}
```

* * *

### Where is my mind?

We’ve already read from and written to memory when setting our breakpoints, so we just need to add a couple of functions to hide the `ptrace` call a bit.

```
uint64_t debugger::read_memory(uint64_t address) {
    return ptrace(PTRACE_PEEKDATA, m_pid, address, nullptr);
}

void debugger::write_memory(uint64_t address, uint64_t value) {
    ptrace(PTRACE_POKEDATA, m_pid, address, value);
}
```

You might want to add support for reading and writing more than a word at a time, which you can do by just incrementing the address each time you want to read another word. You could also use [`process_vm_readv` and `process_vm_writev`][12] or `/proc/<pid>/mem` instead of `ptrace` if you like.
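A minimal sketch of that incrementing-address approach (our own addition, not part of the original post; `read_memory_block` is a hypothetical member of the `debugger` class and needs `<vector>`):

```
std::vector<uint64_t> debugger::read_memory_block(uint64_t address, std::size_t n_words) {
    std::vector<uint64_t> data;
    data.reserve(n_words);
    for (std::size_t i = 0; i < n_words; ++i) {
        // bump the address by one word per PTRACE_PEEKDATA call
        data.push_back(read_memory(address + i * sizeof(uint64_t)));
    }
    return data;
}
```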
Now we’ll add commands for our UI:

```
else if(is_prefix(command, "memory")) {
    std::string addr {args[2], 2}; //assume 0xADDRESS

    if (is_prefix(args[1], "read")) {
        std::cout << std::hex << read_memory(std::stol(addr, 0, 16)) << std::endl;
    }
    if (is_prefix(args[1], "write")) {
        std::string val {args[3], 2}; //assume 0xVAL
        write_memory(std::stol(addr, 0, 16), std::stol(val, 0, 16));
    }
}
```

* * *

### Patching `continue_execution`

Before we test out our changes, we’re now in a position to implement a more sane version of `continue_execution`. Since we can get the program counter, we can check our breakpoint map to see if we’re at a breakpoint. If so, we can disable the breakpoint and step over it before continuing.

First we’ll add a couple of helper functions for clarity and brevity:

```
uint64_t debugger::get_pc() {
    return get_register_value(m_pid, reg::rip);
}

void debugger::set_pc(uint64_t pc) {
    set_register_value(m_pid, reg::rip, pc);
}
```

Then we can write a function to step over a breakpoint:

```
void debugger::step_over_breakpoint() {
    // - 1 because execution will go past the breakpoint
    auto possible_breakpoint_location = get_pc() - 1;

    if (m_breakpoints.count(possible_breakpoint_location)) {
        auto& bp = m_breakpoints[possible_breakpoint_location];

        if (bp.is_enabled()) {
            auto previous_instruction_address = possible_breakpoint_location;
            set_pc(previous_instruction_address);

            bp.disable();
            ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
            wait_for_signal();
            bp.enable();
        }
    }
}
```

First we check to see if there’s a breakpoint set for the value of the current PC. If there is, we first put execution back to before the breakpoint, disable it, step over the original instruction, and re-enable the breakpoint.

`wait_for_signal` will encapsulate our usual `waitpid` pattern:

```
void debugger::wait_for_signal() {
    int wait_status;
    auto options = 0;
    waitpid(m_pid, &wait_status, options);
}
```

Finally we rewrite `continue_execution` like this:

```
void debugger::continue_execution() {
    step_over_breakpoint();
    ptrace(PTRACE_CONT, m_pid, nullptr, nullptr);
    wait_for_signal();
}
```

* * *

### Testing it out

Now that we can read and modify registers, we can have a bit of fun with our hello world program. As a first test, try setting a breakpoint on the call instruction again and continuing from it. You should see `Hello world` being printed out. For the fun part, set a breakpoint just after the output call, continue, then write the address of the call argument setup code to the program counter (`rip`) and continue. You should see `Hello world` being printed a second time due to this program counter manipulation. Just in case you aren’t sure where to set the breakpoint, here’s my `objdump` output from the last post again:

```
0000000000400936 <main>:
  400936:       55                      push   rbp
  400937:       48 89 e5                mov    rbp,rsp
  40093a:       be 35 0a 40 00          mov    esi,0x400a35
  40093f:       bf 60 10 60 00          mov    edi,0x601060
  400944:       e8 d7 fe ff ff          call   400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
  400949:       b8 00 00 00 00          mov    eax,0x0
  40094e:       5d                      pop    rbp
  40094f:       c3                      ret
```

You’ll want to move the program counter back to `0x40093a` so that the `esi` and `edi` registers are set up properly.

In the next post, we’ll take our first look at DWARF information and add various kinds of single stepping to our debugger. After that, we’ll have a mostly functioning tool which can step through code, set breakpoints wherever we like, modify data and so forth. As always, drop a comment below if you have any questions!

You can find the code for this post [here][13].

* * *

1. You could also reorder the `reg` enum and cast them to the underlying type to use as indexes, but I wrote it this way in the first place, it works, and I’m too lazy to change it. [↩][1]

2. Ahahahahahahahahahahahahahahahaha [↩][2]

--------------------------------------------------------------------------------

via: https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/

作者:[TartanLlama][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.twitter.com/TartanLlama
[1]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fnref:2
[2]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fnref:1
[3]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
[4]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
[5]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
[6]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
[7]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
[8]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
[9]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fn:2
[10]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fn:1
[11]:https://www.uclibc.org/docs/psABI-x86_64.pdf
[12]:http://man7.org/linux/man-pages/man2/process_vm_readv.2.html
[13]:https://github.com/TartanLlama/minidbg/tree/tut_registers
@ -0,0 +1,329 @@
|
||||
ictlyh Translating
|
||||
Writing a Linux Debugger Part 4: Elves and dwarves
|
||||
============================================================
|
||||
|
||||
Up until now you’ve heard whispers of dwarves, of debug information, of a way to understand the source code without just parsing the thing. Today we’ll be going into the details of source-level debug information in preparation for using it in following parts of this tutorial.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
These links will go live as the rest of the posts are released.
|
||||
|
||||
1. [Setup][1]
|
||||
|
||||
2. [Breakpoints][2]
|
||||
|
||||
3. [Registers and memory][3]
|
||||
|
||||
4. [Elves and dwarves][4]
|
||||
|
||||
5. [Source and signals][5]
|
||||
|
||||
6. [Source-level stepping][6]
|
||||
|
||||
7. Source-level breakpoints
|
||||
|
||||
8. Stack unwinding
|
||||
|
||||
9. Reading variables
|
||||
|
||||
10. Next steps
|
||||
|
||||
* * *
|
||||
|
||||
### Introduction to ELF and DWARF
|
||||
|
||||
ELF and DWARF are two components which you may not have heard of, but probably use most days. ELF (Executable and Linkable Format) is the most widely used object file format in the Linux world; it specifies a way to store all of the different parts of a binary, like the code, static data, debug information, and strings. It also tells the loader how to take the binary and ready it for execution, which involves noting where different parts of the binary should be placed in memory, which bits need to be fixed up depending on the position of other components (_relocations_) and more. I won’t cover much more of ELF in these posts, but if you’re interested, you can have a look at [this wonderful infographic][7] or [the standard][8].
|
||||
|
||||
DWARF is the debug information format most commonly used with ELF. It’s not necessarily tied to ELF, but the two were developed in tandem and work very well together. This format allows a compiler to tell a debugger how the original source code relates to the binary which is to be executed. This information is split across different ELF sections, each with its own piece of information to relay. Here are the different sections which are defined, taken from this highly informative if slightly out of date [Introduction to the DWARF Debugging Format][9]:
|
||||
|
||||
* `.debug_abbrev` Abbreviations used in the `.debug_info` section
|
||||
|
||||
* `.debug_aranges` A mapping between memory addresses and compilation units
|
||||
|
||||
* `.debug_frame` Call Frame Information
|
||||
|
||||
* `.debug_info` The core DWARF data containing DWARF Information Entries (DIEs)
|
||||
|
||||
* `.debug_line` Line Number Program
|
||||
|
||||
* `.debug_loc` Location descriptions
|
||||
|
||||
* `.debug_macinfo` Macro descriptions
|
||||
|
||||
* `.debug_pubnames` A lookup table for global objects and functions
|
||||
|
||||
* `.debug_pubtypes` A lookup table for global types
|
||||
|
||||
* `.debug_ranges` Address ranges referenced by DIEs
|
||||
|
||||
* `.debug_str` String table used by `.debug_info`
|
||||
|
||||
* `.debug_types` Type descriptions
|
||||
|
||||
We are most interested in the `.debug_line` and `.debug_info` sections, so let’s have a look at some DWARF for a simple program.
|
||||
|
||||
```
|
||||
int main() {
|
||||
long a = 3;
|
||||
long b = 2;
|
||||
long c = a + b;
|
||||
a = 4;
|
||||
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### DWARF line table
|
||||
|
||||
If you compile this program with the `-g` option and run the result through `dwarfdump`, you should see something like this for the line number section:
|
||||
|
||||
```
|
||||
.debug_line: line number info for a single cu
|
||||
Source lines (from CU-DIE at .debug_info offset 0x0000000b):
|
||||
|
||||
NS new statement, BB new basic block, ET end of text sequence
|
||||
PE prologue end, EB epilogue begin
|
||||
IS=val ISA number, DI=val discriminator value
|
||||
<pc> [lno,col] NS BB ET PE EB IS= DI= uri: "filepath"
|
||||
0x00400670 [ 1, 0] NS uri: "/home/simon/play/MiniDbg/examples/variable.cpp"
|
||||
0x00400676 [ 2,10] NS PE
|
||||
0x0040067e [ 3,10] NS
|
||||
0x00400686 [ 4,14] NS
|
||||
0x0040068a [ 4,16]
|
||||
0x0040068e [ 4,10]
|
||||
0x00400692 [ 5, 7] NS
|
||||
0x0040069a [ 6, 1] NS
|
||||
0x0040069c [ 6, 1] NS ET
|
||||
|
||||
```
|
||||
|
||||
The first bunch of lines is some information on how to understand the dump – the main line number data starts at the line starting with `0x00400670`. Essentially this maps a code memory address with a line and column number in some file. `NS` means that the address marks the beginning of a new statement, which is often used for setting breakpoints or stepping. `PE` marks the end of the function prologue, which is helpful for setting function entry breakpoints. `ET` marks the end of the translation unit. The information isn’t actually encoded like this; the real encoding is a very space-efficient program of sorts which can be executed to build up this line information.
|
||||
|
||||
So, say we want to set a breakpoint on line 4 of variable.cpp, what do we do? We look for entries corresponding to that file, then we look for a relevant line entry, look up the address which corresponds to it, and set a breakpoint there. In our example, that’s this entry:
|
||||
|
||||
```
|
||||
0x00400686 [ 4,14] NS
|
||||
|
||||
```
|
||||
|
||||
So we want to set a breakpoint at address `0x00400686`. You could do so by hand with the debugger you’ve already written if you want to give it a try.
|
||||
|
||||
The reverse works just as well. If we have a memory location – say, a program counter value – and want to find out where that is in the source, we just find the closest mapped address in the line table information and grab the line from there.
|
||||
|
||||
* * *
|
||||
|
||||
### DWARF debug info
|
||||
|
||||
The `.debug_info` section is the heart of DWARF. It gives us information about the types, functions, variables, hopes, and dreams present in our program. The fundamental unit in this section is the DWARF Information Entry, affectionately known as a DIE. A DIE consists of a tag telling you what kind of source-level entity is being represented, followed by a series of attributes which apply to that entity. Here’s the `.debug_info` section for the simple example program I posted above:
|
||||
|
||||
```
|
||||
|
||||
.debug_info
|
||||
|
||||
COMPILE_UNIT<header overall offset = 0x00000000>:
|
||||
< 0><0x0000000b> DW_TAG_compile_unit
|
||||
DW_AT_producer clang version 3.9.1 (tags/RELEASE_391/final)
|
||||
DW_AT_language DW_LANG_C_plus_plus
|
||||
DW_AT_name /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_stmt_list 0x00000000
|
||||
DW_AT_comp_dir /super/secret/path/MiniDbg/build
|
||||
DW_AT_low_pc 0x00400670
|
||||
DW_AT_high_pc 0x0040069c
|
||||
|
||||
LOCAL_SYMBOLS:
|
||||
< 1><0x0000002e> DW_TAG_subprogram
|
||||
DW_AT_low_pc 0x00400670
|
||||
DW_AT_high_pc 0x0040069c
|
||||
DW_AT_frame_base DW_OP_reg6
|
||||
DW_AT_name main
|
||||
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_decl_line 0x00000001
|
||||
DW_AT_type <0x00000077>
|
||||
DW_AT_external yes(1)
|
||||
< 2><0x0000004c> DW_TAG_variable
|
||||
DW_AT_location DW_OP_fbreg -8
|
||||
DW_AT_name a
|
||||
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_decl_line 0x00000002
|
||||
DW_AT_type <0x0000007e>
|
||||
< 2><0x0000005a> DW_TAG_variable
|
||||
DW_AT_location DW_OP_fbreg -16
|
||||
DW_AT_name b
|
||||
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_decl_line 0x00000003
|
||||
DW_AT_type <0x0000007e>
|
||||
< 2><0x00000068> DW_TAG_variable
|
||||
DW_AT_location DW_OP_fbreg -24
|
||||
DW_AT_name c
|
||||
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_decl_line 0x00000004
|
||||
DW_AT_type <0x0000007e>
|
||||
< 1><0x00000077> DW_TAG_base_type
|
||||
DW_AT_name int
|
||||
DW_AT_encoding DW_ATE_signed
|
||||
DW_AT_byte_size 0x00000004
|
||||
< 1><0x0000007e> DW_TAG_base_type
|
||||
DW_AT_name long int
|
||||
DW_AT_encoding DW_ATE_signed
|
||||
DW_AT_byte_size 0x00000008
|
||||
|
||||
```
|
||||
|
||||
The first DIE represents a compilation unit (CU), which is essentially a source file with all of the `#includes` and such resolved. Here are the attributes annotated with their meaning:
|
||||
|
||||
```
|
||||
DW_AT_producer clang version 3.9.1 (tags/RELEASE_391/final) <-- The compiler which produced
|
||||
this binary
|
||||
DW_AT_language DW_LANG_C_plus_plus <-- The source language
|
||||
DW_AT_name /super/secret/path/MiniDbg/examples/variable.cpp <-- The name of the file which
|
||||
this CU represents
|
||||
DW_AT_stmt_list 0x00000000 <-- An offset into the line table
|
||||
which tracks this CU
|
||||
DW_AT_comp_dir /super/secret/path/MiniDbg/build <-- The compilation directory
|
||||
DW_AT_low_pc 0x00400670 <-- The start of the code for
|
||||
this CU
|
||||
DW_AT_high_pc 0x0040069c <-- The end of the code for
|
||||
this CU
|
||||
|
||||
```
|
||||
|
||||
The other DIEs follow a similar scheme, and you can probably intuit what the different attributes mean.
|
||||
|
||||
Now we can try and solve a few practical problems with our new-found knowledge of DWARF.
|
||||
|
||||
### Which function am I in?
|
||||
|
||||
Say we have a program counter value and want to figure out what function we’re in. A simple algorithm for this is:
|
||||
|
||||
```
|
||||
for each compile unit:
|
||||
if the pc is between DW_AT_low_pc and DW_AT_high_pc:
|
||||
for each function in the compile unit:
|
||||
if the pc is between DW_AT_low_pc and DW_AT_high_pc:
|
||||
return function information
|
||||
|
||||
```
|
||||
|
||||
This will work for many purposes, but things get a bit more difficult in the presence of member functions and inlining. With inlining, for example, once we’ve found the function whose range contains our PC, we’ll need to recurse over the children of that DIE to see if there are any inlined functions which are a better match. I won’t deal with inlining in my code for this debugger, but you can add support for this if you like.
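If you do fancy trying it, here is a minimal sketch of that recursion, using the `libelfin` conventions which appear later in this series (child iteration and `die_pc_range`); it assumes every `DW_TAG_inlined_subroutine` DIE it visits carries a usable PC range, which real binaries don’t guarantee:

```
// Given a subprogram DIE whose range contains pc, descend into any
// inlined subroutines which also contain pc to find the tightest match.
dwarf::die find_inlined_match(const dwarf::die& func, uint64_t pc) {
    for (const auto& child : func) {
        if (child.tag == dwarf::DW_TAG::inlined_subroutine
            && die_pc_range(child).contains(pc)) {
            return find_inlined_match(child, pc); // inlinees can nest
        }
    }
    return func; // no deeper match; the enclosing function is the answer
}
```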
|
||||
|
||||
### How do I set a breakpoint on a function?
|
||||
|
||||
Again, this depends on if you want to support member functions, namespaces and suchlike. For free functions you can just iterate over the functions in different compile units until you find one with the right name. If your compiler is kind enough to fill in the `.debug_pubnames` section, you can do this a lot more efficiently.
|
||||
|
||||
Once the function has been found, you can set a breakpoint on the memory address given by `DW_AT_low_pc`. However, that would break at the start of the function prologue, and it’s preferable to break at the start of the user code. Since the line table can mark the memory address at which the prologue ends, you could just look up the value of `DW_AT_low_pc` in the line table, then keep reading until you get to the entry marked as the prologue end. Some compilers won’t output this information though, so another option is to just set a breakpoint on the address given by the second line entry for that function.
|
||||
|
||||
Say we want to set a breakpoint on `main` in our example program. We search for the function called `main`, and get this DIE:
|
||||
|
||||
```
|
||||
< 1><0x0000002e> DW_TAG_subprogram
|
||||
DW_AT_low_pc 0x00400670
|
||||
DW_AT_high_pc 0x0040069c
|
||||
DW_AT_frame_base DW_OP_reg6
|
||||
DW_AT_name main
|
||||
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
|
||||
DW_AT_decl_line 0x00000001
|
||||
DW_AT_type <0x00000077>
|
||||
DW_AT_external yes(1)
|
||||
|
||||
```
|
||||
|
||||
This tells us that the function begins at `0x00400670`. If we look this up in our line table, we get this entry:
|
||||
|
||||
```
|
||||
0x00400670 [ 1, 0] NS uri: "/super/secret/path/MiniDbg/examples/variable.cpp"
|
||||
|
||||
```
|
||||
|
||||
We want to skip the prologue, so we read ahead an entry:
|
||||
|
||||
```
|
||||
0x00400676 [ 2,10] NS PE
|
||||
|
||||
```
|
||||
|
||||
Clang has included the prologue end flag on this entry, so we know to stop here and set a breakpoint on address `0x00400676`.
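The actual code for this will come in a later part, but here is a rough sketch. It assumes the `libelfin` helpers used elsewhere in this series (`at_name`, `at_low_pc`) plus the `get_line_entry_from_pc` and `set_breakpoint_at_address` functions our debugger will have; a free-function breakpoint might look like this:

```
void debugger::set_breakpoint_at_function(const std::string& name) {
    for (const auto& cu : m_dwarf.compilation_units()) {
        for (const auto& die : cu.root()) {
            if (die.tag == dwarf::DW_TAG::subprogram
                && die.has(dwarf::DW_AT::name) && at_name(die) == name) {
                auto entry = get_line_entry_from_pc(at_low_pc(die));
                ++entry; // skip the prologue, as described above
                set_breakpoint_at_address(entry->address);
            }
        }
    }
}
```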
|
||||
|
||||
### How do I read the contents of a variable?
|
||||
|
||||
Reading variables can be very complex. They are elusive things which can move around throughout a function, sit in registers, be placed in memory, be optimised out, hide in the corner, whatever. Fortunately our simple example is, well, simple. If we want to read the contents of variable `a`, we have a look at its `DW_AT_location` attribute:
|
||||
|
||||
```
|
||||
DW_AT_location DW_OP_fbreg -8
|
||||
|
||||
```
|
||||
|
||||
This says that the contents are stored at an offset of `-8` from the base of the stack frame. To work out where this base is, we look at the `DW_AT_frame_base` attribute on the containing function.
|
||||
|
||||
```
|
||||
DW_AT_frame_base DW_OP_reg6
|
||||
|
||||
```
|
||||
|
||||
`reg6` on x86-64 is the frame pointer register (`rbp`), as specified by the [System V x86_64 ABI][10]. Now we read the contents of the frame pointer, subtract 8 from it, and we’ve found our variable. If we actually want to make sense of the thing, we’ll need to look at its type:
|
||||
|
||||
```
|
||||
< 2><0x0000004c> DW_TAG_variable
|
||||
DW_AT_name a
|
||||
DW_AT_type <0x0000007e>
|
||||
|
||||
```
|
||||
|
||||
If we look up this type in the debug information, we get this DIE:
|
||||
|
||||
```
|
||||
< 1><0x0000007e> DW_TAG_base_type
|
||||
DW_AT_name long int
|
||||
DW_AT_encoding DW_ATE_signed
|
||||
DW_AT_byte_size 0x00000008
|
||||
|
||||
```
|
||||
|
||||
This tells us that the type is an 8-byte (64-bit) signed integer type, so we can go ahead and interpret those bytes as an `int64_t` and display it to the user.
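As a minimal sketch of this specific case, hard-coding the `DW_OP_reg6`/`DW_OP_fbreg -8` result we just worked out and reusing the `read_memory` and `get_register_value` helpers from part 3 (a real debugger would evaluate the DWARF expressions instead), reading `a` looks like:

```
// Frame base is DW_OP_reg6, i.e. the frame pointer (rbp) on x86-64
auto frame_base = get_register_value(m_pid, reg::rbp);
// The variable lives at DW_OP_fbreg -8: eight bytes below the frame base
auto raw = read_memory(frame_base - 8);
// DW_AT_byte_size is 8 and the encoding is DW_ATE_signed, so
// reinterpret those bytes as a signed 64-bit value
std::cout << "a = " << static_cast<int64_t>(raw) << '\n';
```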
|
||||
|
||||
Of course, types can get waaaaaaay more complex than that, as they have to be able to express things like C++ types, but this gives you a basic idea of how they work.
|
||||
|
||||
Coming back to that frame base for a second, Clang was nice enough to track the frame base with the frame pointer register. Recent versions of GCC tend to prefer `DW_OP_call_frame_cfa`, which involves parsing the `.eh_frame` ELF section, and that’s an entirely different article which I won’t be writing. If you tell GCC to use DWARF 2 instead of more recent versions, it’ll tend to output location lists, which are somewhat easier to read:
|
||||
|
||||
```
|
||||
DW_AT_frame_base <loclist at offset 0x00000000 with 4 entries follows>
|
||||
low-off : 0x00000000 addr 0x00400696 high-off 0x00000001 addr 0x00400697>DW_OP_breg7+8
|
||||
low-off : 0x00000001 addr 0x00400697 high-off 0x00000004 addr 0x0040069a>DW_OP_breg7+16
|
||||
low-off : 0x00000004 addr 0x0040069a high-off 0x00000031 addr 0x004006c7>DW_OP_breg6+16
|
||||
low-off : 0x00000031 addr 0x004006c7 high-off 0x00000032 addr 0x004006c8>DW_OP_breg7+8
|
||||
|
||||
```
|
||||
|
||||
A location list gives different locations depending on where the program counter is. This example says that if the PC is at an offset of `0x0` from `DW_AT_low_pc` then the frame base is an offset of 8 away from the value stored in register 7, if it’s between `0x1` and `0x4` away, then it’s at an offset of 16 away from the same, and so on.
|
||||
|
||||
* * *
|
||||
|
||||
### Take a breath
|
||||
|
||||
That’s a lot of information to get your head round, but the good news is that in the next few posts we’re going to have a library do the hard work for us. It’s still useful to understand the concepts at play, particularly when something goes wrong or when you want to support some DWARF concept which isn’t implemented in whatever DWARF library you use.
|
||||
|
||||
If you want to learn more about DWARF, then you can grab the standard [here][11]. At the time of writing, DWARF 5 has just been released, but DWARF 4 is more commonly supported.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
||||
|
||||
作者:[ TartanLlama ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://github.com/corkami/pics/raw/master/binary/elf101/elf101-64.pdf
|
||||
[8]:http://www.skyfree.org/linux/references/ELF_Format.pdf
|
||||
[9]:http://www.dwarfstd.org/doc/Debugging%20using%20DWARF-2012.pdf
|
||||
[10]:https://www.uclibc.org/docs/psABI-x86_64.pdf
|
||||
[11]:http://dwarfstd.org/Download.php
|
@ -0,0 +1,302 @@
|
||||
ictlyh Translating
|
||||
Writing a Linux Debugger Part 5: Source and signals
|
||||
============================================================
|
||||
|
||||
In the last part we learned about DWARF information and how it can be used to read variables and associate our high-level source code with the machine code which is being executed. In this part we’ll put this into practice by implementing some DWARF primitives which will be used by the rest of our debugger. We’ll also take this opportunity to get our debugger to print out the current source context when a breakpoint is hit.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
These links will go live as the rest of the posts are released.
|
||||
|
||||
1. [Setup][1]
|
||||
|
||||
2. [Breakpoints][2]
|
||||
|
||||
3. [Registers and memory][3]
|
||||
|
||||
4. [Elves and dwarves][4]
|
||||
|
||||
5. [Source and signals][5]
|
||||
|
||||
6. [Source-level stepping][6]
|
||||
|
||||
7. Source-level breakpoints
|
||||
|
||||
8. Stack unwinding
|
||||
|
||||
9. Reading variables
|
||||
|
||||
10. Next steps
|
||||
|
||||
* * *
|
||||
|
||||
### Setting up our DWARF parser
|
||||
|
||||
As I noted way back at the start of this series, we’ll be using [`libelfin`][7] to handle our DWARF information. Hopefully you got this set up in the first post, but if not, do so now, and make sure that you use the `fbreg` branch of my fork.
|
||||
|
||||
Once you have `libelfin` building, it’s time to add it to our debugger. The first step is to parse the ELF executable we’re given and extract the DWARF from it. This is very easy with `libelfin`; just make these changes to `debugger`:
|
||||
|
||||
```
|
||||
class debugger {
|
||||
public:
|
||||
debugger (std::string prog_name, pid_t pid)
|
||||
: m_prog_name{std::move(prog_name)}, m_pid{pid} {
|
||||
auto fd = open(m_prog_name.c_str(), O_RDONLY);
|
||||
|
||||
m_elf = elf::elf{elf::create_mmap_loader(fd)};
|
||||
m_dwarf = dwarf::dwarf{dwarf::elf::create_loader(m_elf)};
|
||||
}
|
||||
//...
|
||||
|
||||
private:
|
||||
//...
|
||||
dwarf::dwarf m_dwarf;
|
||||
elf::elf m_elf;
|
||||
};
|
||||
```
|
||||
|
||||
`open` is used instead of `std::ifstream` because the elf loader needs a UNIX file descriptor to pass to `mmap` so that it can just map the file into memory rather than reading it a bit at a time.
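For completeness, the headers this snippet leans on are roughly the following; the `libelfin` include paths are an assumption based on a default install:

```
#include <fcntl.h>          // open, O_RDONLY
#include "elf/elf++.hh"     // elf::elf, elf::create_mmap_loader
#include "dwarf/dwarf++.hh" // dwarf::dwarf, dwarf::elf::create_loader
```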
|
||||
|
||||
* * *
|
||||
|
||||
### Debug information primitives
|
||||
|
||||
Next we can implement functions to retrieve line entries and function DIEs from PC values. We’ll start with `get_function_from_pc`:
|
||||
|
||||
```
|
||||
dwarf::die debugger::get_function_from_pc(uint64_t pc) {
|
||||
for (auto &cu : m_dwarf.compilation_units()) {
|
||||
if (die_pc_range(cu.root()).contains(pc)) {
|
||||
for (const auto& die : cu.root()) {
|
||||
if (die.tag == dwarf::DW_TAG::subprogram) {
|
||||
if (die_pc_range(die).contains(pc)) {
|
||||
return die;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
throw std::out_of_range{"Cannot find function"};
|
||||
}
|
||||
```
|
||||
|
||||
Here I take a naive approach of just iterating through compilation units until I find one which contains the program counter, then iterating through the children until we find the relevant function (`DW_TAG_subprogram`). As mentioned in the last post, you could handle things like member functions and inlining here if you wanted.
|
||||
|
||||
Next is `get_line_entry_from_pc`:
|
||||
|
||||
```
|
||||
dwarf::line_table::iterator debugger::get_line_entry_from_pc(uint64_t pc) {
|
||||
for (auto &cu : m_dwarf.compilation_units()) {
|
||||
if (die_pc_range(cu.root()).contains(pc)) {
|
||||
auto &lt = cu.get_line_table();
|
||||
auto it = lt.find_address(pc);
|
||||
if (it == lt.end()) {
|
||||
throw std::out_of_range{"Cannot find line entry"};
|
||||
}
|
||||
else {
|
||||
return it;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
throw std::out_of_range{"Cannot find line entry"};
|
||||
}
|
||||
```
|
||||
|
||||
Again, we simply find the correct compilation unit, then ask the line table to get us the relevant entry.
|
||||
|
||||
* * *
|
||||
|
||||
### Printing source
|
||||
|
||||
When we hit a breakpoint or step around our code, we’ll want to know where in the source we end up.
|
||||
|
||||
```
|
||||
void debugger::print_source(const std::string& file_name, unsigned line, unsigned n_lines_context) {
|
||||
std::ifstream file {file_name};
|
||||
|
||||
//Work out a window around the desired line
|
||||
auto start_line = line <= n_lines_context ? 1 : line - n_lines_context;
|
||||
auto end_line = line + n_lines_context + (line < n_lines_context ? n_lines_context - line : 0) + 1;
|
||||
|
||||
char c{};
|
||||
auto current_line = 1u;
|
||||
//Skip lines up until start_line
|
||||
while (current_line != start_line && file.get(c)) {
|
||||
if (c == '\n') {
|
||||
++current_line;
|
||||
}
|
||||
}
|
||||
|
||||
//Output cursor if we're at the current line
|
||||
std::cout << (current_line==line ? "> " : " ");
|
||||
|
||||
//Write lines up until end_line
|
||||
while (current_line <= end_line && file.get(c)) {
|
||||
std::cout << c;
|
||||
if (c == '\n') {
|
||||
++current_line;
|
||||
//Output cursor if we're at the current line
|
||||
std::cout << (current_line==line ? "> " : " ");
|
||||
}
|
||||
}
|
||||
|
||||
//Write newline and make sure that the stream is flushed properly
|
||||
std::cout << std::endl;
|
||||
}
|
||||
```
|
||||
|
||||
Now that we can print out source, we’ll need to hook this into our debugger. A good place to do this is when the debugger gets a signal from a breakpoint or (eventually) single step. While we’re at this, we might want to add some better signal handling to our debugger.
|
||||
|
||||
* * *
|
||||
|
||||
### Better signal handling
|
||||
|
||||
We want to be able to tell what signal was sent to the process, but we also want to know how it was produced. For example, we want to be able to tell if we just got a `SIGTRAP` because we hit a breakpoint, or if it was because a step completed, or a new thread spawned, etc. Fortunately, `ptrace` comes to our rescue again. One of the possible commands to `ptrace` is `PTRACE_GETSIGINFO`, which will give you information about the last signal which the process was sent. We use it like so:
|
||||
|
||||
```
|
||||
siginfo_t debugger::get_signal_info() {
|
||||
siginfo_t info;
|
||||
ptrace(PTRACE_GETSIGINFO, m_pid, nullptr, &info);
|
||||
return info;
|
||||
}
|
||||
```
|
||||
|
||||
This gives us a `siginfo_t` object, which provides the following information:
|
||||
|
||||
```
|
||||
siginfo_t {
|
||||
int si_signo; /* Signal number */
|
||||
int si_errno; /* An errno value */
|
||||
int si_code; /* Signal code */
|
||||
int si_trapno; /* Trap number that caused
|
||||
hardware-generated signal
|
||||
(unused on most architectures) */
|
||||
pid_t si_pid; /* Sending process ID */
|
||||
uid_t si_uid; /* Real user ID of sending process */
|
||||
int si_status; /* Exit value or signal */
|
||||
clock_t si_utime; /* User time consumed */
|
||||
clock_t si_stime; /* System time consumed */
|
||||
sigval_t si_value; /* Signal value */
|
||||
int si_int; /* POSIX.1b signal */
|
||||
void *si_ptr; /* POSIX.1b signal */
|
||||
int si_overrun; /* Timer overrun count;
|
||||
POSIX.1b timers */
|
||||
int si_timerid; /* Timer ID; POSIX.1b timers */
|
||||
void *si_addr; /* Memory location which caused fault */
|
||||
long si_band; /* Band event (was int in
|
||||
glibc 2.3.2 and earlier) */
|
||||
int si_fd; /* File descriptor */
|
||||
short si_addr_lsb; /* Least significant bit of address
|
||||
(since Linux 2.6.32) */
|
||||
void *si_lower; /* Lower bound when address violation
|
||||
occurred (since Linux 3.19) */
|
||||
void *si_upper; /* Upper bound when address violation
|
||||
occurred (since Linux 3.19) */
|
||||
int si_pkey; /* Protection key on PTE that caused
|
||||
fault (since Linux 4.6) */
|
||||
void *si_call_addr; /* Address of system call instruction
|
||||
(since Linux 3.5) */
|
||||
int si_syscall; /* Number of attempted system call
|
||||
(since Linux 3.5) */
|
||||
unsigned int si_arch; /* Architecture of attempted system call
|
||||
(since Linux 3.5) */
|
||||
}
|
||||
```
|
||||
|
||||
I’ll just be using `si_signo` to work out which signal was sent, and `si_code` to get more information about the signal. The best place to put this code is in our `wait_for_signal` function:
|
||||
|
||||
```
|
||||
void debugger::wait_for_signal() {
|
||||
int wait_status;
|
||||
auto options = 0;
|
||||
waitpid(m_pid, &wait_status, options);
|
||||
|
||||
auto siginfo = get_signal_info();
|
||||
|
||||
switch (siginfo.si_signo) {
|
||||
case SIGTRAP:
|
||||
handle_sigtrap(siginfo);
|
||||
break;
|
||||
case SIGSEGV:
|
||||
std::cout << "Yay, segfault. Reason: " << siginfo.si_code << std::endl;
|
||||
break;
|
||||
default:
|
||||
std::cout << "Got signal " << strsignal(siginfo.si_signo) << std::endl;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now to handle `SIGTRAP`s. It suffices to know that `SI_KERNEL` or `TRAP_BRKPT` will be sent when a breakpoint is hit, and `TRAP_TRACE` will be sent on single step completion:
|
||||
|
||||
```
|
||||
void debugger::handle_sigtrap(siginfo_t info) {
|
||||
switch (info.si_code) {
|
||||
//one of these will be set if a breakpoint was hit
|
||||
case SI_KERNEL:
|
||||
case TRAP_BRKPT:
|
||||
{
|
||||
set_pc(get_pc()-1); //put the pc back where it should be
|
||||
std::cout << "Hit breakpoint at address 0x" << std::hex << get_pc() << std::endl;
|
||||
auto line_entry = get_line_entry_from_pc(get_pc());
|
||||
print_source(line_entry->file->path, line_entry->line);
|
||||
return;
|
||||
}
|
||||
//this will be set if the signal was sent by single stepping
|
||||
case TRAP_TRACE:
|
||||
return;
|
||||
default:
|
||||
std::cout << "Unknown SIGTRAP code " << info.si_code << std::endl;
|
||||
return;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
There are a bunch of different signals and flavours of signals which you could handle. See `man sigaction` for more information.
|
||||
|
||||
Since we now correct the program counter when we get the `SIGTRAP`, we can remove this code from `step_over_breakpoint`, so it now looks like:
|
||||
|
||||
```
|
||||
void debugger::step_over_breakpoint() {
|
||||
if (m_breakpoints.count(get_pc())) {
|
||||
auto& bp = m_breakpoints[get_pc()];
|
||||
if (bp.is_enabled()) {
|
||||
bp.disable();
|
||||
ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
|
||||
wait_for_signal();
|
||||
bp.enable();
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### Testing it out
|
||||
|
||||
Now you should be able to set a breakpoint at some address, run the program and see the source code printed out with the currently executing line marked with a cursor.
|
||||
|
||||
Next time we’ll be adding the ability to set source-level breakpoints. In the meantime, you can get the code for this post [here][8].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
|
||||
|
||||
作者:[TartanLlama ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://github.com/TartanLlama/libelfin/tree/fbreg
|
||||
[8]:https://github.com/TartanLlama/minidbg/tree/tut_source
|
@ -0,0 +1,150 @@
|
||||
ucasFL translating
|
||||
|
||||
Python-mode – A Vim Plugin to Develop Python Applications in Vim Editor
|
||||
============================================================
|
||||
|
||||
|
||||
Python-mode is a Vim plugin that enables you to write Python code in [Vim editor][1] quickly, by utilizing libraries including pylint, rope, pydoc, pyflakes, pep8, autopep8, pep257 and mccabe for coding features such as static analysis, refactoring, folding, completion, documentation, and more.
|
||||
|
||||
**Suggested Read:** [Bash-Support – A Vim Plugin That Converts Vim Editor to Bash-IDE][2]
|
||||
|
||||
This plugin contains all the features that you can use to develop python applications in Vim editor.
|
||||
|
||||
#### Python-mode Features
|
||||
|
||||
It has the following notable features:
|
||||
|
||||
* Supports Python versions 2.6+ and 3.2+.
|
||||
|
||||
* Supports syntax highlighting.
|
||||
|
||||
* Offers virtualenv support.
|
||||
|
||||
* Supports python folding.
|
||||
|
||||
* Offers enhanced python indentation.
|
||||
|
||||
* Enables running of python code from within Vim.
|
||||
|
||||
* Enables addition/removal of breakpoints.
|
||||
|
||||
* Supports python motions and operators.
|
||||
|
||||
* Enables code checking (pylint, pyflakes, pylama, …) that can be run simultaneously.
|
||||
|
||||
* Supports autofixing of PEP8 errors.
|
||||
|
||||
* Allows searching in python documentation.
|
||||
|
||||
* Supports code refactoring.
|
||||
|
||||
* Supports strong code completion.
|
||||
|
||||
* Supports going to definition.
|
||||
|
||||
In this tutorial, we will show you how to set up Vim with Python-mode in Linux so you can develop Python applications in the Vim editor.
|
||||
|
||||
### How to Install Python-mode for Vim in Linux
|
||||
|
||||
Start by installing [Pathogen][3] (which makes it super easy to install plugins and runtime files in their own private directories) for easy installation of Python-mode.
|
||||
|
||||
Run the commands below to get the pathogen.vim file and the directories it needs:
|
||||
|
||||
```
|
||||
# mkdir -p ~/.vim/autoload ~/.vim/bundle && \
|
||||
# curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
|
||||
```
|
||||
|
||||
Then add the following lines below to your ~/.vimrc file:
|
||||
|
||||
```
|
||||
execute pathogen#infect()
|
||||
syntax on
|
||||
filetype plugin indent on
|
||||
```
|
||||
|
||||
Once you have installed pathogen, you can now put Python-mode into ~/.vim/bundle as follows.
|
||||
|
||||
```
|
||||
# cd ~/.vim/bundle
|
||||
# git clone https://github.com/klen/python-mode.git
|
||||
```
|
||||
|
||||
Then rebuild the help tags in Vim using Pathogen’s `:Helptags` command, which generates help tags for every plugin in your bundle directory:
|
||||
|
||||
```
|
||||
:Helptags
|
||||
```
|
||||
|
||||
You need to enable filetype-plugin (:help filetype-plugin-on) and filetype-indent (:help filetype-indent-on) to use python-mode; the `filetype plugin indent on` line added to your ~/.vimrc above already takes care of both.
|
||||
|
||||
### Install Python-mode in Debian and Ubuntu
|
||||
|
||||
Another way to install python-mode on Debian and Ubuntu systems is from the project’s own package repository, as shown.
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository https://klen.github.io/python-mode/deb main
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install vim-python-mode
|
||||
```
|
||||
|
||||
If you encounter the message: “The following signatures couldn’t be verified because the public key is not available”, run the command below:
|
||||
|
||||
```
|
||||
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B5DF65307000E266
|
||||
```
|
||||
|
||||
Now enable python-mode using vim-addon-manager like so.
|
||||
|
||||
```
|
||||
$ sudo apt install vim-addon-manager
|
||||
$ vim-addons install python-mode
|
||||
```
|
||||
|
||||
### Customizing Python-mode in Linux
|
||||
|
||||
To override the default key bindings, redefine them in your ~/.vimrc file, for instance:
|
||||
|
||||
```
|
||||
" Override go-to.definition key shortcut to Ctrl-]
|
||||
let g:pymode_rope_goto_definition_bind = "<C-]>"
|
||||
" Override run current python file key shortcut to Ctrl-Shift-e
|
||||
let g:pymode_run_bind = "<C-S-e>"
|
||||
" Override view python doc key shortcut to Ctrl-Shift-d
|
||||
let g:pymode_doc_bind = "<C-S-d>"
|
||||
```
|
||||
|
||||
Note that python-mode uses python 2 syntax checking by default. You can enable python 3 syntax checking by adding this in your .vimrc.
|
||||
|
||||
```
|
||||
let g:pymode_python = 'python3'
|
||||
```
|
||||
|
||||
You can find additional configuration options on the Python-mode Github Repository: [https://github.com/python-mode/python-mode][4]
|
||||
|
||||
That’s all for now! In this tutorial, we showed you how to integrate Python-mode with Vim in Linux. Share your thoughts with us via the feedback form below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/python-mode-a-vim-editor-plugin/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/vi-editor-usage/
|
||||
[2]:https://www.tecmint.com/use-vim-as-bash-ide-using-bash-support-in-linux/
|
||||
[3]:https://github.com/tpope/vim-pathogen
|
||||
[4]:https://github.com/python-mode/python-mode
|
||||
|
@ -0,0 +1,94 @@
|
||||
Linux's Big Bang: One Kernel, Countless Distros
|
||||
============================================================

By Jonathan Terrasi
|
||||
Apr 27, 2017 3:24 PM PT
|
||||

|
||||
|
||||
Even if you're a newcomer to Linux, you've probably figured out that it is not a single, monolithic operating system, but a constellation of projects. The different "stars" in this constellation take the form of "distributions," or "distros." Each offers its own take on the Linux model.
|
||||
|
||||
To gain an appreciation of the plethora of options offered by the range of distributions, it helps to understand how Linux started out and subsequently proliferated. With that in mind, here's a brief introduction to Linux's history.
|
||||
|
||||
### Linus Torvalds, Kernel Builder
|
||||
|
||||
Most people with any familiarity with Linux have heard of its creator, Linus Torvalds (pictured above), but not many know why he created it in the first place. In 1991, Torvalds was a university student in Finland studying computers. As an independent personal project, he wanted to create a Unix-like kernel to build a system for his unique hardware.
|
||||
|
||||
The "kernel" is the part of an operating system that mediates between the hardware, via its firmware, and the OS. Essentially, it is the heart of the system. Developing a kernel is no small feat, but Torvalds was eager for the challenge and found he had a rare knack for it.
|
||||
|
||||
As he was new to kernels, he wanted input from others to ensure he was on the right track, so he solicited the experience of veteran tinkerers on Usenet, the foremost among early Internet forums, by publishing the code for his kernel. Contributions flooded in.
|
||||
|
||||
After establishing a process for reviewing forum-submitted patches and selectively integrating them, Torvalds realized he had amassed an informal development team. It quickly became a somewhat formal development team once the project took off.
|
||||
|
||||
### Richard Stallman's Role
|
||||
|
||||
Though Torvalds and his team created the Linux kernel, there would have been no subsequent spread of myriad Linux distributions without the work of Richard Stallman, who had launched the free software movement a decade earlier.
|
||||
|
||||
Frustrated with the lack of transparency in many core Unix programming and system utilities, Stallman had decided to write his own -- and to share the source code freely with anyone who wanted it and also was committed to openness. He created a considerable body of core programs, collectively dubbed the "GNU Project," which he launched in 1983.
|
||||
|
||||
Without them, a kernel would not have been of much use. Early designers of Linux-based OSes readily incorporated the GNU tools into their projects.
|
||||
|
||||
Different teams began to emerge -- each with its own philosophy regarding computing functions and architecture. They combined the Linux kernel, GNU utilities, and their own original software, and "distributed" variants of the Linux operating system.
|
||||
|
||||
### Server Distros
|
||||
|
||||
Each distro has its own design logic and purpose, but to appreciate their nuances it pays to understand the difference between upstream and downstream developers. An "upstream developer" is responsible for actually creating the program and releasing it for individual download, or for including it in other projects. By contrast, a "downstream developer," or "package maintainer," is one who takes each release of the upstream program and tweaks it to fit the use case of a downstream project.
|
||||
|
||||
While most Linux distributions include some original projects, the majority of distribution development is "downstream" work on the Linux kernel, GNU tools, and the vast ecosystem of user programs.
|
||||
|
||||
Many distros make their mark by optimizing for specific use cases. For instance, some projects are designed to run as servers. Distributions tailored for deployment as servers often will shy away from quickly pushing out the latest features from upstream projects in favor of releasing a thoroughly tested, stable base of essential software that system administrators can depend on to run smoothly.
|
||||
|
||||
Development teams for server-focused distros often are large and are staffed with veteran programmers who can provide years of support for each release.
|
||||
|
||||
### Desktop Distros
|
||||
|
||||
There is also a wide array of distributions meant to run as user desktops. In fact, some of the more well-known of these are designed to compete with major commercial OSes by offering a simple installation and intuitive interface. These distributions usually include enormous software repositories containing every user program imaginable, so that users can make their systems their own.
|
||||
|
||||
As usability is key, they are likely to devote a large segment of their staff to creating a signature, distro-specific desktop, or to tweaking existing desktops to fit their design philosophy. User-focused distributions tend to speed up the downstream development timetable a bit to offer their users new features in a timely fashion.
|
||||
|
||||
"Rolling release" projects -- a subset of desktop distributions -- are crafted to be on the bleeding edge. Instead of waiting until all the desired upstream programs reach a certain point of development and then integrating them into a single release, package maintainers for rolling release projects release a new version of each upstream program separately, once they finish tweaking it.
|
||||
|
||||
One advantage to this approach is security, as critical patches will be available faster than non-rolling release distros. Another upside is the immediate availability of new features that users otherwise would have to wait for. The drawback for rolling release distributions is that they require more manual intervention and careful maintenance, as certain upgrades can conflict with others, breaking a system.
|
||||
|
||||
### Embedded Systems
|
||||
|
||||
Yet another class of Linux distros is known as "embedded systems," which are extremely trimmed down (compared to server and desktop distros) to fit particular use cases.
|
||||
|
||||
We often forget that anything that connects to the Internet or is more complex than a simple calculator is a computer, and computers need operating systems. Because Linux is free and highly modular, it's usually the one that hardware manufacturers choose.
|
||||
|
||||
In the vast majority of cases, if you see a smart TV, an Internet-connected camera, or even a car, you're looking at a Linux device. Practically every smartphone that's not an iPhone runs a specialized variety of embedded Linux too.
|
||||
|
||||
### Linux Live
|
||||
|
||||
Finally, there are certain Linux distros that aren't meant to be installed permanently in a computer, but instead reside on a USB stick and allow other computers to boot them up without touching the computer's onboard hard drive.
|
||||
|
||||
These "live" systems can be optimized to perform a number of tasks, ranging from repairing damaged systems, to conducting security evaluations, to browsing the Internet with high security.
|
||||
|
||||
As these live Linux distros usually are meant for tackling very specific problems, they generally include specialized tools like hard drive analysis and recovery programs, network monitoring applications, and encryption tools. They also keep a light footprint so they can be booted up quickly.
|
||||
|
||||
### How Do You Choose?
|
||||
|
||||
This is by no means an exhaustive list of Linux distribution types, but it should give you an idea of the scope and variety of the Linux ecosystem.
|
||||
|
||||
Within each category, there are many choices, so how do you choose the one that might best suit your needs?
|
||||
|
||||
One way is to experiment. It is very common in the Linux community to go back and forth between distros to try them out, or for users to run different ones on different machines, according to their needs.
|
||||
|
||||
In a future post, I'll showcase a few examples of each type of distribution so you can try them for yourself and begin your journey to discovering the one you like best.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.
|
||||
|
||||
------
|
||||
|
||||
via: http://www.linuxinsider.com/story/84489.html?rss=1
|
||||
|
||||
作者:[ Jonathan Terrasi ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxinsider.com/story/84489.html?rss=1#searchbyline
|
||||
|
@ -0,0 +1,91 @@
|
||||
WPSeku – A Vulnerability Scanner to Find Security Issues in WordPress
|
||||
============================================================
|
||||
|
||||
by [Aaron Kili][9] | Published: May 5, 2017 | Last Updated: May 5, 2017
|
||||
|
||||
|
||||
|
||||
WordPress is a free and open-source, highly customizable content management system (CMS) that is used by millions around the world to run blogs and fully functional websites. Because it is the most widely used CMS out there, there are many potential WordPress security issues and vulnerabilities to be concerned about.
|
||||
|
||||
However, these security issues can be dealt with if we follow common WordPress security best practices. In this article, we will show you how to use WPSeku, a WordPress vulnerability scanner for Linux that can be used to find security holes in your WordPress installation and help you block potential threats.
|
||||
|
||||
WPSeku is a simple WordPress vulnerability scanner written in Python; it can be used to scan local and remote WordPress installations to find security issues.
|
||||
|
||||
### How to Install WPSeku – WordPress Vulnerability Scanner in Linux
|
||||
|
||||
To install WPSeku in Linux, you need to clone the most recent version of WPSeku from its Github repository as shown.
|
||||
|
||||
```
|
||||
$ cd ~
|
||||
$ git clone https://github.com/m4ll0k/WPSeku
|
||||
```
|
||||
|
||||
Once you have obtained it, move into the WPSeku directory and run it as follows.
|
||||
|
||||
```
|
||||
$ cd WPSeku
|
||||
```
|
||||
|
||||
Now run the WPSeku using the `-u` option to specify your WordPress installation URL like this.
|
||||
|
||||
```
|
||||
$ ./wpseku.py -u http://yourdomain.com
|
||||
```
|
||||
[][1]
|
||||
|
||||
WordPress Vulnerability Scanner
|
||||
|
||||
The command below will search for cross-site scripting (`x`), local file inclusion (`l`), and SQL injection (`s`) vulnerabilities in your WordPress plugins using the `-p` option; you need to specify the location of the plugin in the URL:
|
||||
|
||||
```
|
||||
$ ./wpseku.py -u http://yourdomain.com/wp-content/plugins/wp/wp.php?id= -p [x,l,s]
|
||||
```
|
||||
|
||||
The following command will execute a brute-force password attack against the login page (`l`) and via XML-RPC (`x`) using the `-b` option. Also, you can set a username and wordlist using the `--user` and `--wordlist` options respectively, as shown below.
|
||||
|
||||
```
|
||||
$ ./wpseku.py -u http://yourdomain.com --user username --wordlist wordlist.txt -b [l,x]
|
||||
```
|
||||
|
||||
To view all WPSeku usage options, type.
|
||||
|
||||
```
|
||||
$ ./wpseku.py --help
|
||||
```
|
||||
[][2]
|
||||
|
||||
WPSeku WordPress Vulnerability Scanner Help
|
||||
|
||||
WPSeku Github repository: [https://github.com/m4ll0k/WPSeku][3]
|
||||
|
||||
That’s it! In this article, we showed you how to get and use WPSeku for WordPress vulnerability scanning in Linux. WordPress is secure but only if we follow WordPress security best practices. Do you have any thoughts to share? If yes, then use the comment section below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/wpseku-wordpress-vulnerability-security-scanner/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/wp-content/uploads/2017/05/WordPress-Vulnerability-Scanner.png
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2017/05/WPSeku-WordPress-Vulnerability-Scanner-Help.png
|
||||
[3]:https://github.com/m4ll0k/WPSeku
|
||||
|
||||
[9]:https://www.tecmint.com/author/aaronkili/
|
||||
|
@ -0,0 +1,283 @@
|
||||
ictlyh Translating
|
||||
Writing a Linux Debugger Part 6: Source-level stepping
|
||||
============================================================
|
||||
|
||||
A couple of posts ago we learned about DWARF information and how it lets us relate the machine code to the high-level source. This time we’ll be putting this knowledge into practice by adding source-level stepping to our debugger.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
These links will go live as the rest of the posts are released.
|
||||
|
||||
1. [Setup][1]
|
||||
|
||||
2. [Breakpoints][2]
|
||||
|
||||
3. [Registers and memory][3]
|
||||
|
||||
4. [Elves and dwarves][4]
|
||||
|
||||
5. [Source and signals][5]
|
||||
|
||||
6. [Source-level stepping][6]
|
||||
|
||||
7. Source-level breakpoints
|
||||
|
||||
8. Stack unwinding
|
||||
|
||||
9. Reading variables
|
||||
|
||||
10. Next steps
|
||||
|
||||
* * *
|
||||
|
||||
### Exposing instruction-level stepping
|
||||
|
||||
But we’re getting ahead of ourselves. First let’s expose instruction-level single stepping through the user interface. I decided to split it between a `single_step_instruction` which can be used by other parts of the code, and a `single_step_instruction_with_breakpoint_check` which ensures that any breakpoints are disabled and re-enabled.
|
||||
|
||||
```
|
||||
void debugger::single_step_instruction() {
|
||||
ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
|
||||
wait_for_signal();
|
||||
}
|
||||
|
||||
void debugger::single_step_instruction_with_breakpoint_check() {
|
||||
//first, check to see if we need to disable and enable a breakpoint
|
||||
if (m_breakpoints.count(get_pc())) {
|
||||
step_over_breakpoint();
|
||||
}
|
||||
else {
|
||||
single_step_instruction();
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
As usual, another command gets lumped into our `handle_command` function:
|
||||
|
||||
```
|
||||
else if(is_prefix(command, "stepi")) {
|
||||
single_step_instruction_with_breakpoint_check();
|
||||
auto line_entry = get_line_entry_from_pc(get_pc());
|
||||
print_source(line_entry->file->path, line_entry->line);
|
||||
}
|
||||
```
|
||||
|
||||
With these functions added we can begin to implement our source-level stepping functions.
|
||||
|
||||
* * *
|
||||
|
||||
### Implementing the steps
|
||||
|
||||
We’re going to write very simple versions of these functions, but real debuggers tend to have the concept of a _thread plan_ which encapsulates all of the stepping information. For example, a debugger might have some complex logic to determine breakpoint sites, then have some callback which determines whether or not the step operation has completed. This is a lot of infrastructure to get in place, so we’ll just take a naive approach. We might end up accidentally stepping over breakpoints, but you can spend some time getting all the details right if you like.
|
||||
|
||||
For `step_out`, we’ll just set a breakpoint at the return address of the function and continue. I don’t want to get into the details of stack unwinding yet – that’ll come in a later part – but it suffices to say for now that the return address is stored 8 bytes after the start of a stack frame. So we’ll just read the frame pointer and read a word of memory at the relevant address:
|
||||
|
||||
```
|
||||
void debugger::step_out() {
|
||||
auto frame_pointer = get_register_value(m_pid, reg::rbp);
|
||||
auto return_address = read_memory(frame_pointer+8);
|
||||
|
||||
bool should_remove_breakpoint = false;
|
||||
if (!m_breakpoints.count(return_address)) {
|
||||
set_breakpoint_at_address(return_address);
|
||||
should_remove_breakpoint = true;
|
||||
}
|
||||
|
||||
continue_execution();
|
||||
|
||||
if (should_remove_breakpoint) {
|
||||
remove_breakpoint(return_address);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
`remove_breakpoint` is a little helper function:
|
||||
|
||||
```
|
||||
void debugger::remove_breakpoint(std::intptr_t addr) {
|
||||
if (m_breakpoints.at(addr).is_enabled()) {
|
||||
m_breakpoints.at(addr).disable();
|
||||
}
|
||||
m_breakpoints.erase(addr);
|
||||
}
|
||||
```
|
||||
|
||||
Next is `step_in`. A simple algorithm is to just keep on stepping over instructions until we get to a new line.
|
||||
|
||||
```
|
||||
void debugger::step_in() {
|
||||
auto line = get_line_entry_from_pc(get_pc())->line;
|
||||
|
||||
while (get_line_entry_from_pc(get_pc())->line == line) {
|
||||
single_step_instruction_with_breakpoint_check();
|
||||
}
|
||||
|
||||
auto line_entry = get_line_entry_from_pc(get_pc());
|
||||
print_source(line_entry->file->path, line_entry->line);
|
||||
}
|
||||
```
|
||||
|
||||
`step_over` is the most difficult of the three for us. Conceptually, the solution is to just set a breakpoint at the next source line, but what is the next source line? It might not be the one directly succeeding the current line, as we could be in a loop, or some conditional construct. Real debuggers will often examine what instruction is being executed and work out all of the possible branch targets, then set breakpoints on all of them. I’d rather not implement or integrate an x86 instruction emulator for such a small project, so we’ll need to come up with a simpler solution. A couple of horrible options are to just keep stepping until we’re at a new line in the current function, or to just set a breakpoint at every line in the current function. The former would be ridiculously inefficient if we’re stepping over a function call, as we’d need to single step through every single instruction in that call graph, so I’ll go for the second solution.
|
||||
|
||||
```
|
||||
void debugger::step_over() {
|
||||
auto func = get_function_from_pc(get_pc());
|
||||
auto func_entry = at_low_pc(func);
|
||||
auto func_end = at_high_pc(func);
|
||||
|
||||
auto line = get_line_entry_from_pc(func_entry);
|
||||
auto start_line = get_line_entry_from_pc(get_pc());
|
||||
|
||||
std::vector<std::intptr_t> to_delete{};
|
||||
|
||||
while (line->address < func_end) {
|
||||
if (line->address != start_line->address && !m_breakpoints.count(line->address)) {
|
||||
set_breakpoint_at_address(line->address);
|
||||
to_delete.push_back(line->address);
|
||||
}
|
||||
++line;
|
||||
}
|
||||
|
||||
auto frame_pointer = get_register_value(m_pid, reg::rbp);
|
||||
auto return_address = read_memory(frame_pointer+8);
|
||||
if (!m_breakpoints.count(return_address)) {
|
||||
set_breakpoint_at_address(return_address);
|
||||
to_delete.push_back(return_address);
|
||||
}
|
||||
|
||||
continue_execution();
|
||||
|
||||
for (auto addr : to_delete) {
|
||||
remove_breakpoint(addr);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This function is a bit more complex, so I’ll break it down a bit.
|
||||
|
||||
```
|
||||
auto func = get_function_from_pc(get_pc());
|
||||
auto func_entry = at_low_pc(func);
|
||||
auto func_end = at_high_pc(func);
|
||||
```
|
||||
|
||||
`at_low_pc` and `at_high_pc` are functions from `libelfin` which will get us the low and high PC values for the given function DIE.
|
||||
|
||||
```
|
||||
auto line = get_line_entry_from_pc(func_entry);
|
||||
auto start_line = get_line_entry_from_pc(get_pc());
|
||||
|
||||
std::vector<std::intptr_t> to_delete{};
|
||||
|
||||
while (line->address < func_end) {
|
||||
if (line->address != start_line->address && !m_breakpoints.count(line->address)) {
|
||||
set_breakpoint_at_address(line->address);
|
||||
to_delete.push_back(line->address);
|
||||
}
|
||||
++line;
|
||||
}
|
||||
```
|
||||
|
||||
We’ll need to remove any breakpoints we set so that they don’t leak out of our step function, so we keep track of them in a `std::vector`. To set all the breakpoints, we loop over the line table entries until we hit one which is outside the range of our function. For each one, we make sure that it’s not the line we are currently on, and that there’s not already a breakpoint set at that location.
|
||||
|
||||
```
|
||||
auto frame_pointer = get_register_value(m_pid, reg::rbp);
|
||||
auto return_address = read_memory(frame_pointer+8);
|
||||
if (!m_breakpoints.count(return_address)) {
|
||||
set_breakpoint_at_address(return_address);
|
||||
to_delete.push_back(return_address);
|
||||
}
|
||||
```
|
||||
|
||||
Here we are setting a breakpoint on the return address of the function, just like in `step_out`.
|
||||
|
||||
```
|
||||
continue_execution();
|
||||
|
||||
for (auto addr : to_delete) {
|
||||
remove_breakpoint(addr);
|
||||
}
|
||||
```
|
||||
|
||||
Finally, we continue until one of those breakpoints has been hit, then remove all the temporary breakpoints we set.
|
||||
|
||||
It ain’t pretty, but it’ll do for now.
|
||||
|
||||
Of course, we also need to add this new functionality to our UI:
|
||||
|
||||
```
|
||||
else if(is_prefix(command, "step")) {
|
||||
step_in();
|
||||
}
|
||||
else if(is_prefix(command, "next")) {
|
||||
step_over();
|
||||
}
|
||||
else if(is_prefix(command, "finish")) {
|
||||
step_out();
|
||||
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### Testing it out
|
||||
|
||||
I tested out my implementation with a simple program which calls a bunch of different functions:
|
||||
|
||||
```
|
||||
void a() {
|
||||
int foo = 1;
|
||||
}
|
||||
|
||||
void b() {
|
||||
int foo = 2;
|
||||
a();
|
||||
}
|
||||
|
||||
void c() {
|
||||
int foo = 3;
|
||||
b();
|
||||
}
|
||||
|
||||
void d() {
|
||||
int foo = 4;
|
||||
c();
|
||||
}
|
||||
|
||||
void e() {
|
||||
int foo = 5;
|
||||
d();
|
||||
}
|
||||
|
||||
void f() {
|
||||
int foo = 6;
|
||||
e();
|
||||
}
|
||||
|
||||
int main() {
|
||||
f();
|
||||
}
|
||||
```
|
||||
|
||||
You should be able to set a breakpoint on the address of `main` and then in, over, and out all over the program. Expect things to break if you try to step out of `main` or into some dynamically linked library.
|
||||
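If you don't want to dig through a disassembler for that starting address, you can pull it from the binary's symbol table first. A minimal sketch, assuming the test program above was saved as `stepping.cpp` and built with debug info (the printed address is illustrative; yours will differ):

```
$ g++ -g -O0 stepping.cpp -o stepping
$ nm ./stepping | grep ' T main'
0000000000400686 T main
```

Feed that address to your breakpoint command, continue to it, and then exercise the new step, next and finish commands from there.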
|
||||
You can find the code for this post [here][7]. Next time we’ll use our newfound DWARF expertise to implement source-level breakpoints.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
|
||||
|
||||
作者:[TartanLlama ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
|
||||
[2]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
|
||||
[3]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
|
||||
[4]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
|
||||
[5]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
|
||||
[6]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
|
||||
[7]:https://github.com/TartanLlama/minidbg/tree/tut_dwarf_step
|
@ -1,3 +1,4 @@
|
||||
ucasFL translating
|
||||
4 Python libraries for building great command-line user interfaces
|
||||
============================================================
|
||||
|
||||
|
@ -0,0 +1,296 @@
|
||||
wcnnbdk1 translating
|
||||
ssh_scan – Verifies Your SSH Server Configuration and Policy in Linux
|
||||
============================================================
|
||||
|
||||
|
||||
ssh_scan is an easy-to-use prototype SSH configuration and policy scanner for Linux and UNIX servers, inspired by the [Mozilla OpenSSH Security Guide][6], which provides a reasonable baseline policy recommendation for SSH configuration parameters such as ciphers, MACs, KexAlgos and much more.
|
||||
|
||||
It has some of the following benefits:
|
||||
|
||||
* It has minimal dependencies: ssh_scan only employs native Ruby and BinData to do its work, with no heavy dependencies.
|
||||
|
||||
* It’s portable: you can use ssh_scan in another project or for [automation of tasks][1].
|
||||
|
||||
* It’s easy to use: simply point it at an SSH service and get a JSON report of what it supports and its policy status.
|
||||
|
||||
* It’s also configurable: you can create your own custom policies that fit your specific policy requirements.
|
||||
|
||||
**Suggested Read:** [How to Install and Configure OpenSSH Server in Linux][7]
|
||||
|
||||
### How to Install ssh_scan in Linux
|
||||
|
||||
There are three ways to install ssh_scan:
|
||||
|
||||
To install and run as a gem, type:
|
||||
|
||||
```
|
||||
----------- On Debian/Ubuntu -----------
|
||||
$ sudo apt-get install ruby gem
|
||||
$ sudo gem install ssh_scan
|
||||
----------- On CentOS/RHEL -----------
|
||||
# yum install ruby rubygems
|
||||
# gem install ssh_scan
|
||||
```
|
||||
|
||||
To run from a [docker container][8], type:
|
||||
|
||||
```
|
||||
# docker pull mozilla/ssh_scan
|
||||
# docker run -it mozilla/ssh_scan /app/bin/ssh_scan -t github.com
|
||||
```
|
||||
|
||||
To install and run from source, type:
|
||||
|
||||
```
|
||||
# git clone https://github.com/mozilla/ssh_scan.git
|
||||
# cd ssh_scan
|
||||
# gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
|
||||
# curl -sSL https://get.rvm.io | bash -s stable
|
||||
# rvm install 2.3.1
|
||||
# rvm use 2.3.1
|
||||
# gem install bundler
|
||||
# bundle install
|
||||
# ./bin/ssh_scan
|
||||
```
|
||||
|
||||
### How to Use ssh_scan in Linux
|
||||
|
||||
The syntax for using ssh_scan is as follows:
|
||||
|
||||
```
|
||||
$ ssh_scan -t ip-address
|
||||
$ ssh_scan -t server-hostname
|
||||
```
|
||||
|
||||
For example, to scan the SSH configs and policy of the server 192.168.43.198, enter:
|
||||
|
||||
```
|
||||
$ ssh_scan -t 192.168.43.198
|
||||
```
|
||||
|
||||
Note that you can also pass an [IP/Range/Hostname] to the `-t` option, as shown in the examples below:
|
||||
|
||||
```
|
||||
$ ssh_scan -t 192.168.43.198,200,205
|
||||
$ ssh_scan -t test.tecmint.lan
|
||||
```
|
||||
|
||||
##### Sample Output
|
||||
|
||||
```
|
||||
I, [2017-05-09T10:36:17.913644 #7145] INFO -- : You're using the latest version of ssh_scan 0.0.19
|
||||
[
|
||||
{
|
||||
"ssh_scan_version": "0.0.19",
|
||||
"ip": "192.168.43.198",
|
||||
"port": 22,
|
||||
"server_banner": "SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1",
|
||||
"ssh_version": 2.0,
|
||||
"os": "ubuntu",
|
||||
"os_cpe": "o:canonical:ubuntu:16.04",
|
||||
"ssh_lib": "openssh",
|
||||
"ssh_lib_cpe": "a:openssh:openssh:7.2p2",
|
||||
"cookie": "68b17bcca652eeaf153ed18877770a38",
|
||||
"key_algorithms": [
|
||||
"curve25519-sha256@libssh.org",
|
||||
"ecdh-sha2-nistp256",
|
||||
"ecdh-sha2-nistp384",
|
||||
"ecdh-sha2-nistp521",
|
||||
"diffie-hellman-group-exchange-sha256",
|
||||
"diffie-hellman-group14-sha1"
|
||||
],
|
||||
"server_host_key_algorithms": [
|
||||
"ssh-rsa",
|
||||
"rsa-sha2-512",
|
||||
"rsa-sha2-256",
|
||||
"ecdsa-sha2-nistp256",
|
||||
"ssh-ed25519"
|
||||
],
|
||||
"encryption_algorithms_client_to_server": [
|
||||
"chacha20-poly1305@openssh.com",
|
||||
"aes128-ctr",
|
||||
"aes192-ctr",
|
||||
"aes256-ctr",
|
||||
"aes128-gcm@openssh.com",
|
||||
"aes256-gcm@openssh.com"
|
||||
],
|
||||
"encryption_algorithms_server_to_client": [
|
||||
"chacha20-poly1305@openssh.com",
|
||||
"aes128-ctr",
|
||||
"aes192-ctr",
|
||||
"aes256-ctr",
|
||||
"aes128-gcm@openssh.com",
|
||||
"aes256-gcm@openssh.com"
|
||||
],
|
||||
"mac_algorithms_client_to_server": [
|
||||
"umac-64-etm@openssh.com",
|
||||
"umac-128-etm@openssh.com",
|
||||
"hmac-sha2-256-etm@openssh.com",
|
||||
"hmac-sha2-512-etm@openssh.com",
|
||||
"hmac-sha1-etm@openssh.com",
|
||||
"umac-64@openssh.com",
|
||||
"umac-128@openssh.com",
|
||||
"hmac-sha2-256",
|
||||
"hmac-sha2-512",
|
||||
"hmac-sha1"
|
||||
],
|
||||
"mac_algorithms_server_to_client": [
|
||||
"umac-64-etm@openssh.com",
|
||||
"umac-128-etm@openssh.com",
|
||||
"hmac-sha2-256-etm@openssh.com",
|
||||
"hmac-sha2-512-etm@openssh.com",
|
||||
"hmac-sha1-etm@openssh.com",
|
||||
"umac-64@openssh.com",
|
||||
"umac-128@openssh.com",
|
||||
"hmac-sha2-256",
|
||||
"hmac-sha2-512",
|
||||
"hmac-sha1"
|
||||
],
|
||||
"compression_algorithms_client_to_server": [
|
||||
"none",
|
||||
"zlib@openssh.com"
|
||||
],
|
||||
"compression_algorithms_server_to_client": [
|
||||
"none",
|
||||
"zlib@openssh.com"
|
||||
],
|
||||
"languages_client_to_server": [
|
||||
],
|
||||
"languages_server_to_client": [
|
||||
],
|
||||
"hostname": "tecmint",
|
||||
"auth_methods": [
|
||||
"publickey",
|
||||
"password"
|
||||
],
|
||||
"fingerprints": {
|
||||
"rsa": {
|
||||
"known_bad": "false",
|
||||
"md5": "0e:d0:d7:11:f0:9b:f8:33:9c:ab:26:77:e5:66:9e:f4",
|
||||
"sha1": "fc:8d:d5:a1:bf:52:48:a6:7e:f9:a6:2f:af:ca:e2:f0:3a:9a:b7:fa",
|
||||
"sha256": "ff:00:b4:a4:40:05:19:27:7c:33:aa:db:a6:96:32:88:8e:bf:05:a1:81:c0:a4:a8:16:01:01:0b:20:37:81:11"
|
||||
}
|
||||
},
|
||||
"start_time": "2017-05-09 10:36:17 +0300",
|
||||
"end_time": "2017-05-09 10:36:18 +0300",
|
||||
"scan_duration_seconds": 0.221573169,
|
||||
"duplicate_host_key_ips": [
|
||||
],
|
||||
"compliance": {
|
||||
"policy": "Mozilla Modern",
|
||||
"compliant": false,
|
||||
"recommendations": [
|
||||
"Remove these Key Exchange Algos: diffie-hellman-group14-sha1",
|
||||
"Remove these MAC Algos: umac-64-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, hmac-sha1",
|
||||
"Remove these Authentication Methods: password"
|
||||
],
|
||||
"references": [
|
||||
"https://wiki.mozilla.org/Security/Guidelines/OpenSSH"
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
You can use `-p` to specify a different port, `-L` to enable the logger and `-V` to define the verbosity level as shown below:
|
||||
|
||||
```
|
||||
$ ssh_scan -t 192.168.43.198 -p 22222 -L ssh-scan.log -V INFO
|
||||
```
|
||||
|
||||
Additionally, you can use a custom policy file (the default is Mozilla Modern) with `-P` or `--policy [FILE]`, like so:
|
||||
|
||||
```
|
||||
$ ssh_scan -t 192.168.43.198 -L ssh-scan.log -V INFO -P /path/to/custom/policy/file
|
||||
```
|
||||
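For reference, a policy is a plain YAML file listing the algorithms you consider acceptable. The sketch below is hypothetical: the key names (ssh_version, ciphers) are assumptions modeled on the sample policies shipped in the ssh_scan repository, so check those files for the exact schema before relying on it:

```
$ cat > custom_policy.yml <<'EOF'
---
# Hypothetical minimal policy; verify key names against the
# sample policies bundled with ssh_scan before using it.
ssh_version: 2.0
ciphers:
- chacha20-poly1305@openssh.com
- aes256-gcm@openssh.com
EOF
$ ssh_scan -t 192.168.43.198 -P custom_policy.yml
```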
|
||||
Type this to view all ssh_scan usage options and more examples:
|
||||
|
||||
```
|
||||
$ ssh_scan -h
|
||||
```
|
||||
|
||||
##### Sample Output
|
||||
|
||||
```
|
||||
ssh_scan v0.0.17 (https://github.com/mozilla/ssh_scan)
|
||||
Usage: ssh_scan [options]
|
||||
-t, --target [IP/Range/Hostname] IP/Ranges/Hostname to scan
|
||||
-f, --file [FilePath] File Path of the file containing IP/Range/Hostnames to scan
|
||||
-T, --timeout [seconds] Timeout per connect after which ssh_scan gives up on the host
|
||||
-L, --logger [Log File Path] Enable logger
|
||||
-O, --from_json [FilePath] File to read JSON output from
|
||||
-o, --output [FilePath] File to write JSON output to
|
||||
-p, --port [PORT] Port (Default: 22)
|
||||
-P, --policy [FILE] Custom policy file (Default: Mozilla Modern)
|
||||
--threads [NUMBER] Number of worker threads (Default: 5)
|
||||
--fingerprint-db [FILE] File location of fingerprint database (Default: ./fingerprints.db)
|
||||
--suppress-update-status Do not check for updates
|
||||
-u, --unit-test [FILE] Throw appropriate exit codes based on compliance status
|
||||
-V [STD_LOGGING_LEVEL],
|
||||
--verbosity
|
||||
-v, --version Display just version info
|
||||
-h, --help Show this message
|
||||
Examples:
|
||||
ssh_scan -t 192.168.1.1
|
||||
ssh_scan -t server.example.com
|
||||
ssh_scan -t ::1
|
||||
ssh_scan -t ::1 -T 5
|
||||
ssh_scan -f hosts.txt
|
||||
ssh_scan -o output.json
|
||||
ssh_scan -O output.json -o rescan_output.json
|
||||
ssh_scan -t 192.168.1.1 -p 22222
|
||||
ssh_scan -t 192.168.1.1 -p 22222 -L output.log -V INFO
|
||||
ssh_scan -t 192.168.1.1 -P custom_policy.yml
|
||||
ssh_scan -t 192.168.1.1 --unit-test -P custom_policy.yml
|
||||
```
|
||||
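Putting the options above together, a typical workflow is to scan a list of hosts, archive the JSON report, and later rescan from that report. The flags are the ones documented in the help output; the file names are just examples:

```
$ ssh_scan -f hosts.txt -o scan_results.json
$ ssh_scan -O scan_results.json -o rescan_results.json
```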
|
||||
Check out some useful articles about the SSH server:
|
||||
|
||||
1. [SSH Passwordless Login Using SSH Keygen in 5 Easy Steps][2]
|
||||
|
||||
2. [5 Best Practices to Secure SSH Server][3]
|
||||
|
||||
3. [Restrict SSH User Access to Certain Directory Using Chrooted Jail][4]
|
||||
|
||||
4. [How to Configure Custom SSH Connections to Simplify Remote Access][5]
|
||||
|
||||
For more details visit ssh_scan Github repository: [https://github.com/mozilla/ssh_scan][9]
|
||||
|
||||
In this article, we showed you how to set up and use ssh_scan in Linux. Do you know of any similar tools out there? Let us know via the feedback form below, including any other thoughts concerning this guide.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
作者简介:
|
||||
|
||||
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/automating-linux-system-administration-tasks/
|
||||
[2]:https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
|
||||
[3]:https://www.tecmint.com/5-best-practices-to-secure-and-protect-ssh-server/
|
||||
[4]:https://www.tecmint.com/restrict-ssh-user-to-directory-using-chrooted-jail/
|
||||
[5]:https://www.tecmint.com/configure-custom-ssh-connection-in-linux/
|
||||
[6]:https://wiki.mozilla.org/Security/Guidelines/OpenSSH
|
||||
[7]:https://www.tecmint.com/install-openssh-server-in-linux/
|
||||
[8]:https://www.tecmint.com/install-docker-and-learn-containers-in-centos-rhel-7-6/
|
||||
[9]:https://github.com/mozilla/ssh_scan
|
||||
[10]:https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/#
|
||||
[11]:https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/#
|
||||
[12]:https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/#
|
||||
[13]:https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/#
|
||||
[14]:https://www.tecmint.com/ssh_scan-ssh-configuration-and-policy-scanner-for-linux/#comments
|
||||
[15]:https://www.tecmint.com/author/aaronkili/
|
||||
[16]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[17]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,358 @@
|
||||
How to Configure and Integrate iRedMail Services to Samba4 AD DC – Part 11
|
||||
============================================================
|
||||
|
||||
|
||||
|
||||
In this tutorial you will learn how to modify the main iRedMail daemons which provide mail services, namely [Postfix, used for mail transfer, and Dovecot][4], which delivers mail to account mailboxes, in order to integrate them both into a [Samba4 Active Directory Domain Controller][5].
|
||||
|
||||
By integrating iRedMail with a Samba4 AD DC you will benefit from the following features: user authentication, management, and status via Samba AD DC; mail lists created with the help of AD groups; and a Global LDAP Address Book in Roundcube.
|
||||
|
||||
#### Requirements
|
||||
|
||||
1. [Install iRedMail on CentOS 7 for Samba4 AD Integration][1]
|
||||
|
||||
### Step 1: Prepare iRedMail System for Samba4 AD Integration
|
||||
|
||||
1. In the first step, you need to [assign a static IP address for your machine][6] in case you’re using a dynamic IP address provided by a DHCP server.
|
||||
|
||||
Run the [ifconfig command][7] to list your machine's network interface names, then edit the proper network interface with your custom IP settings by issuing the [nmtui-edit][8] command against the correct NIC.
|
||||
|
||||
Run nmtui-edit command with root privileges.
|
||||
|
||||
```
|
||||
# ifconfig
|
||||
# nmtui-edit eno16777736
|
||||
```
|
||||
[][9]
|
||||
|
||||
Find Network Interface Name
|
||||
|
||||
2. Once the network interface is opened for editing, add the proper static IP settings, and make sure you add the DNS server IP addresses of your Samba4 AD DC and the name of your domain in order to query the realm from your machine. Use the below screenshot as a guide.
|
||||
|
||||
[][10]
|
||||
|
||||
Configure Network Settings
|
||||
|
||||
3. After you finish configuring the network interface, restart the network daemon to apply the changes and issue a series of ping commands against the domain name and the Samba4 domain controllers' FQDNs.
|
||||
|
||||
```
|
||||
# systemctl restart network.service
|
||||
# cat /etc/resolv.conf # verify DNS resolver configuration: the correct DNS server IPs should be queried for domain resolution
|
||||
# ping -c2 tecmint.lan # Ping domain name
|
||||
# ping -c2 adc1 # Ping first AD DC
|
||||
# ping -c2 adc2 # Ping second AD DC
|
||||
```
|
||||
[][11]
|
||||
|
||||
Verify Network DNS Configuration
|
||||
|
||||
4. Next, sync time with the Samba domain controller by installing the ntpdate package and querying the Samba4 machine's NTP server with the below commands:
|
||||
|
||||
```
|
||||
# yum install ntpdate
|
||||
# ntpdate -qu tecmint.lan # query domain NTP servers
|
||||
# ntpdate tecmint.lan # Sync time with the domain
|
||||
```
|
||||
[][12]
|
||||
|
||||
Sync Time with Samba NTP Server
|
||||
|
||||
5. You might want the local time to be automatically synchronized with the Samba AD time server. In order to achieve this, add a scheduled job that runs every hour by issuing the [crontab -e command][13] and appending the following line:
|
||||
|
||||
```
|
||||
0 */1 * * * /usr/sbin/ntpdate tecmint.lan > /var/log/ntpdate.lan 2>&1
|
||||
```
|
||||
[][14]
|
||||
|
||||
Auto Sync Time with Samba NTP
|
||||
|
||||
### Step 2: Prepare Samba4 AD DC for iRedMail Integration
|
||||
|
||||
6. Now, move to a [Windows machine with RSAT tools installed][15] to manage Samba4 Active Directory as described in this tutorial [here][16].
|
||||
|
||||
Open DNS Manager, go to your domain's Forward Lookup Zones and add a new A record, an MX record and a PTR record pointing to your iRedMail system's IP address. Use the below screenshots as a guide.
|
||||
|
||||
Add the A record (replace the name and the IP address of the iRedMail machine accordingly).
|
||||
|
||||
[][17]
|
||||
|
||||
Create DNS A Record for iRedMail
|
||||
|
||||
Add the MX record (leave the child domain blank and set a priority of 10 for this mail server).
|
||||
|
||||
[][18]
|
||||
|
||||
Create DNS MX Record for iRedMail
|
||||
|
||||
Add the PTR record by expanding Reverse Lookup Zones (replace the IP address of the iRedMail server accordingly). In case you haven’t configured a reverse zone for your domain controller yet, read the following tutorial:
|
||||
|
||||
1. [Manage Samba4 DNS Group Policy from Windows][2]
|
||||
|
||||
[][19]
|
||||
|
||||
Create DNS PTR Record for iRedMail
|
||||
|
||||
7. After you’ve added the basic DNS records that a mail server needs to function properly, move to the iRedMail machine, install the bind-utils package and query the newly added mail records as suggested in the below excerpt.
|
||||
|
||||
Samba4 AD DC DNS server should respond with the DNS records added in the previous step.
|
||||
|
||||
```
|
||||
# yum install bind-utils
|
||||
# host tecmint.lan
|
||||
# host mail.tecmint.lan
|
||||
# host 192.168.1.245
|
||||
```
|
||||
[][20]
|
||||
|
||||
Install Bind and Query Mail Records
|
||||
|
||||
From a Windows machine, open a Command Prompt window and issue [nslookup command][21] against the above mail server records.
|
||||
|
||||
8. As a final prerequisite, create a new user account named vmail with minimal privileges in Samba4 AD DC, choose a strong password for this user and make sure the password never expires.
|
||||
|
||||
The vmail user account will be used by iRedMail services to query Samba4 AD DC LDAP database and pull the email accounts.
|
||||
|
||||
To create the vmail account, use the ADUC graphical tool from a Windows machine joined to the realm with RSAT tools installed, as illustrated in the below screenshots, or use the samba-tool command line directly from a domain controller, as explained in the following topic.
|
||||
|
||||
1. [Manage Samba4 Active Directory from Linux Command Line][3]
|
||||
|
||||
In this guide, we’ll use the first method mentioned above.
|
||||
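For reference, the command-line route boils down to two commands on a domain controller. This is only a sketch: the password is a placeholder and samba-tool option behavior may vary slightly between Samba versions:

```
# samba-tool user create vmail 'Str0ngP@ssw0rd'   # create the bind account
# samba-tool user setexpiry vmail --noexpiry      # make sure its password never expires
```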
|
||||
[][22]
|
||||
|
||||
Active Directory Users and Computers
|
||||
|
||||
[][23]
|
||||
|
||||
Create New User for iRedMail
|
||||
|
||||
[][24]
|
||||
|
||||
Set Strong Password for User
|
||||
|
||||
9. From the iRedMail system, test the vmail user's ability to query the Samba4 AD DC LDAP database by issuing the below command. The returned result should be the total number of object entries for your domain, as illustrated in the below screenshots.
|
||||
|
||||
```
|
||||
# ldapsearch -x -h tecmint.lan -D 'vmail@tecmint.lan' -W -b 'cn=users,dc=tecmint,dc=lan'
|
||||
```
|
||||
|
||||
Note: Replace the domain name and the LDAP base dn in Samba4 AD (‘cn=users,dc=tecmint,dc=lan‘) accordingly.
|
||||
|
||||
[][25]
|
||||
|
||||
Query Samba4 AD DC LDAP
|
||||
|
||||
### Step 3: Integrate iRedMail Services to Samba4 AD DC
|
||||
|
||||
10. Now it’s time to modify the iRedMail services (Postfix, Dovecot and Roundcube) in order to query the Samba4 Domain Controller for mail accounts.
|
||||
|
||||
The first service to be modified will be the MTA agent, Postfix. Issue the following commands to disable a series of MTA settings, add your domain name to the Postfix local domain and mailbox domains, and use the Dovecot agent to deliver received mail locally to user mailboxes.
|
||||
|
||||
```
|
||||
# postconf -e virtual_alias_maps=' '
|
||||
# postconf -e sender_bcc_maps=' '
|
||||
# postconf -e recipient_bcc_maps=' '
|
||||
# postconf -e relay_domains=' '
|
||||
# postconf -e relay_recipient_maps=' '
|
||||
# postconf -e sender_dependent_relayhost_maps=' '
|
||||
# postconf -e smtpd_sasl_local_domain='tecmint.lan' #Replace with your own domain
|
||||
# postconf -e virtual_mailbox_domains='tecmint.lan' #Replace with your own domain
|
||||
# postconf -e transport_maps='hash:/etc/postfix/transport'
|
||||
# postconf -e smtpd_sender_login_maps='proxy:ldap:/etc/postfix/ad_sender_login_maps.cf' # Check SMTP senders
|
||||
# postconf -e virtual_mailbox_maps='proxy:ldap:/etc/postfix/ad_virtual_mailbox_maps.cf' # Check local mail accounts
|
||||
# postconf -e virtual_alias_maps='proxy:ldap:/etc/postfix/ad_virtual_group_maps.cf' # Check local mail lists
|
||||
# cp /etc/postfix/transport /etc/postfix/transport.backup # Backup transport conf file
|
||||
# echo "tecmint.lan dovecot" > /etc/postfix/transport # Add your domain with dovecot transport
|
||||
# cat /etc/postfix/transport # Verify transport file
|
||||
# postmap hash:/etc/postfix/transport
|
||||
```
|
||||
|
||||
11. Next, create Postfix `/etc/postfix/ad_sender_login_maps.cf` configuration file with your favorite text editor and add the below configuration.
|
||||
|
||||
```
|
||||
server_host = tecmint.lan
|
||||
server_port = 389
|
||||
version = 3
|
||||
bind = yes
|
||||
start_tls = no
|
||||
bind_dn = vmail@tecmint.lan
|
||||
bind_pw = ad_vmail_account_password
|
||||
search_base = dc=tecmint,dc=lan
|
||||
scope = sub
|
||||
query_filter = (&(userPrincipalName=%s)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
|
||||
result_attribute= userPrincipalName
|
||||
debuglevel = 0
|
||||
```
|
||||
|
||||
12. Create `/etc/postfix/ad_virtual_mailbox_maps.cf` with the following configuration.
|
||||
|
||||
```
|
||||
server_host = tecmint.lan
|
||||
server_port = 389
|
||||
version = 3
|
||||
bind = yes
|
||||
start_tls = no
|
||||
bind_dn = vmail@tecmint.lan
|
||||
bind_pw = ad_vmail_account_password
|
||||
search_base = dc=tecmint,dc=lan
|
||||
scope = sub
|
||||
query_filter = (&(objectclass=person)(userPrincipalName=%s))
|
||||
result_attribute= userPrincipalName
|
||||
result_format = %d/%u/Maildir/
|
||||
debuglevel = 0
|
||||
```
|
||||
|
||||
13. Create `/etc/postfix/ad_virtual_group_maps.cf` with the below configuration.
|
||||
|
||||
```
|
||||
server_host = tecmint.lan
|
||||
server_port = 389
|
||||
version = 3
|
||||
bind = yes
|
||||
start_tls = no
|
||||
bind_dn = vmail@tecmint.lan
|
||||
bind_pw = ad_vmail_account_password
|
||||
search_base = dc=tecmint,dc=lan
|
||||
scope = sub
|
||||
query_filter = (&(objectClass=group)(mail=%s))
|
||||
special_result_attribute = member
|
||||
leaf_result_attribute = mail
|
||||
result_attribute= userPrincipalName
|
||||
debuglevel = 0
|
||||
```
|
||||
|
||||
In all three configuration files, replace the values of server_host, bind_dn, bind_pw and search_base to reflect your own domain settings.
|
||||
|
||||
14. Next, open the Postfix main configuration file, then search for and disable the iRedAPD check_policy_service and smtpd_end_of_data_restrictions settings by adding a comment `#` in front of the following lines.
|
||||
|
||||
```
|
||||
# nano /etc/postfix/main.cf
|
||||
```
|
||||
|
||||
Comment the following lines:
|
||||
|
||||
```
|
||||
#check_policy_service inet:127.0.0.1:7777
|
||||
#smtpd_end_of_data_restrictions = check_policy_service inet:127.0.0.1:7777
|
||||
```
|
||||
|
||||
15. Now, verify Postfix binding to Samba AD using an existing domain user and a domain group by issuing a series of queries as presented in the following examples.
|
||||
|
||||
The result should be similar to the one illustrated in the below screenshot.
|
||||
|
||||
```
|
||||
# postmap -q tecmint_user@tecmint.lan ldap:/etc/postfix/ad_virtual_mailbox_maps.cf
|
||||
# postmap -q tecmint_user@tecmint.lan ldap:/etc/postfix/ad_sender_login_maps.cf
|
||||
# postmap -q linux_users@tecmint.lan ldap:/etc/postfix/ad_virtual_group_maps.cf
|
||||
```
|
||||
[][26]
|
||||
|
||||
Verify Postfix Binding to Samba AD
|
||||
|
||||
Replace the AD user and group accounts accordingly. Also, ensure that the AD group you’re using has some AD user members assigned to it.
|
||||
|
||||
16. In the next step, modify the Dovecot configuration file in order to query the Samba4 AD DC. Open the file `/etc/dovecot/dovecot-ldap.conf` for editing and add the following lines.
|
||||
|
||||
```
|
||||
hosts = tecmint.lan:389
|
||||
ldap_version = 3
|
||||
auth_bind = yes
|
||||
dn = vmail@tecmint.lan
|
||||
dnpass = ad_vmail_password
|
||||
base = dc=tecmint,dc=lan
|
||||
scope = subtree
|
||||
deref = never
|
||||
user_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
|
||||
pass_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
|
||||
pass_attrs = userPassword=password
|
||||
default_pass_scheme = CRYPT
|
||||
user_attrs = =home=/var/vmail/vmail1/%Ld/%Ln/Maildir/,=mail=maildir:/var/vmail/vmail1/%Ld/%Ln/Maildir/
|
||||
```
|
||||
|
||||
The mailbox of a Samba4 AD account will be stored in the /var/vmail/vmail1/your_domain.tld/your_domain_user/Maildir/ location on the Linux system.
|
||||
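After the first message has been delivered, you can sanity-check this layout on disk. A quick sketch using the example domain from this guide (cur, new and tmp are the standard Maildir subdirectories):

```
# ls /var/vmail/vmail1/tecmint.lan/tecmint_user/Maildir/
cur  new  tmp
```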
|
||||
17. Make sure the pop3 and imap protocols are enabled in the Dovecot main configuration file. Verify that the quota and acl mail plugins are also enabled by opening the file `/etc/dovecot/dovecot.conf` and checking that these values are present.
|
||||
|
||||
[][27]
|
||||
|
||||
Enable Pop3 and Imap in Dovecot
|
||||
|
||||
18. Optionally, if you want to set a global hard quota with a maximum of 500 MB of storage for each domain user, add the following line to the /etc/dovecot/dovecot.conf file.
|
||||
|
||||
```
|
||||
quota_rule = *:storage=500M
|
||||
```
|
||||
|
||||
19. Finally, in order to apply all changes made so far, restart and verify the status of Postfix and Dovecot daemons by issuing the below commands with root privileges.
|
||||
|
||||
```
|
||||
# systemctl restart postfix dovecot
|
||||
# systemctl status postfix dovecot
|
||||
```
|
||||
|
||||
20. In order to test the mail server configuration from the command line using the IMAP protocol, use telnet or the [netcat command][28] as presented in the below example.
|
||||
|
||||
```
|
||||
# nc localhost 143
|
||||
a1 LOGIN ad_user@your_domain.tld ad_user_password
|
||||
a2 LIST "" "*"
|
||||
a3 LOGOUT
|
||||
```
|
||||
[][29]
|
||||
|
||||
Test iRedMail Configuration
|
||||
|
||||
If you can perform an IMAP login from the command line with a Samba4 user account, then the iRedMail server is ready to send and receive mail for Active Directory accounts.
|
||||
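To also verify the secured IMAPS port from the command line, you can wrap the same IMAP dialogue in a TLS session with openssl. A sketch, with placeholder credentials as above:

```
# openssl s_client -connect localhost:993 -quiet
a1 LOGIN ad_user@your_domain.tld ad_user_password
a2 LOGOUT
```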
|
||||
In the next tutorial we will discuss how to integrate Roundcube webmail with Samba4 AD DC and enable the Global LDAP Address Book, customize Roundcube, access the Roundcube web interface from a browser, and disable some unneeded iRedMail services.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux desktop distributions, servers and bash scripting.
|
||||
|
||||
|
||||
-----
|
||||
|
||||
via: https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/
|
||||
|
||||
作者:[ Matei Cezar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://www.tecmint.com/install-iredmail-on-centos-7-for-samba4-ad-integration/
|
||||
[2]:https://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/
|
||||
[3]:https://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
|
||||
[4]:https://www.tecmint.com/setup-postfix-mail-server-and-dovecot-with-mariadb-in-centos/
|
||||
[5]:https://www.tecmint.com/install-samba4-active-directory-ubuntu/
|
||||
[6]:https://www.tecmint.com/set-add-static-ip-address-in-linux/
|
||||
[7]:https://www.tecmint.com/ifconfig-command-examples/
|
||||
[8]:https://www.tecmint.com/configure-network-connections-using-nmcli-tool-in-linux/
|
||||
[9]:https://www.tecmint.com/wp-content/uploads/2017/05/Find-Network-Interface-Name.png
|
||||
[10]:https://www.tecmint.com/wp-content/uploads/2017/05/Configure-Network-Settings.png
|
||||
[11]:https://www.tecmint.com/wp-content/uploads/2017/05/Verify-Network-DNS-Configuration.png
|
||||
[12]:https://www.tecmint.com/wp-content/uploads/2017/05/Sync-Time-with-Samba-NTP-Server.png
|
||||
[13]:https://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
|
||||
[14]:https://www.tecmint.com/wp-content/uploads/2017/05/Auto-Sync-Time-with-Samba-NTP.png
|
||||
[15]:https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
|
||||
[16]:https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
|
||||
[17]:https://www.tecmint.com/wp-content/uploads/2017/05/Create-DNS-A-Record-for-iRedMail.png
|
||||
[18]:https://www.tecmint.com/wp-content/uploads/2017/05/Create-DNS-MX-Record-for-iRedMail.png
|
||||
[19]:https://www.tecmint.com/wp-content/uploads/2017/05/Create-DNS-PTR-Record-for-iRedMail.png
|
||||
[20]:https://www.tecmint.com/wp-content/uploads/2017/05/Install-Bind-and-Query-Mail-Records.png
|
||||
[21]:https://www.tecmint.com/8-linux-nslookup-commands-to-troubleshoot-dns-domain-name-server/
|
||||
[22]:https://www.tecmint.com/wp-content/uploads/2017/05/Active-Directory-Users-and-Computers.png
|
||||
[23]:https://www.tecmint.com/wp-content/uploads/2017/05/Create-New-User-for-iRedMail.png
|
||||
[24]:https://www.tecmint.com/wp-content/uploads/2017/05/Set-Strong-Password-for-User.png
|
||||
[25]:https://www.tecmint.com/wp-content/uploads/2017/05/Query-Samba4-AD-DC-LDAP.png
|
||||
[26]:https://www.tecmint.com/wp-content/uploads/2017/05/Verify-Postfix-Binding-to-Samba-AD.png
|
||||
[27]:https://www.tecmint.com/wp-content/uploads/2017/05/Enable-Pop3-Imap-in-Dovecot.png
|
||||
[28]:https://www.tecmint.com/check-remote-port-in-linux/
|
||||
[29]:https://www.tecmint.com/wp-content/uploads/2017/05/Test-iRedMail-Configuration.png
|
||||
[30]:https://www.tecmint.com/author/cezarmatei/
|
||||
[31]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[32]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,281 @@
|
||||
How to Integrate iRedMail Roundcube with Samba4 AD DC – Part 12
|
||||
============================================================
|
||||
|
||||
by [Matei Cezar][15] | Published: May 13, 2017 | Last Updated: May 14, 2017
|
||||
|
||||
|
||||
|
||||
[Roundcube][3], one of the most widely used webmail user agents on Linux, offers a modern web interface for end users to interact with all mail services in order to read, compose and send e-mails. Roundcube supports a variety of mail protocols, including secured ones such as IMAPS, POP3S or submission.
|
||||
|
||||
In this topic we’ll discuss how to configure Roundcube in iRedMail with the IMAPS and submission secured ports to retrieve and send emails for Samba4 AD accounts, how to access the iRedMail Roundcube web interface from a browser and add a web address alias, how to enable Samba4 AD integration for the Global LDAP Address Book, and how to disable some unneeded iRedMail services.
|
||||
|
||||
#### Requirements
|
||||
|
||||
1. [How to Install iRedMail on CentOS 7 for Samba4 AD Integration][1]
|
||||
|
||||
2. [Configure iRedMail on CentOS 7 for Samba4 AD Integration][2]
|
||||
|
||||
### Step 1: Declare E-mail Address for Domain Accounts in Samba4 AD DC
|
||||
|
||||
1. In order to send and receive mail for Samba4 AD DC domain accounts, you need to edit each user account and explicitly set the email field with the proper e-mail address by opening the ADUC tool from a [Windows machine with RSAT tools installed][4] and joined to Samba4 AD, as illustrated in the below image.
|
||||
|
||||
[][5]
|
||||
|
||||
Add Email Account to Join Samba4 AD DC
|
||||
|
||||
2. Similarly, to use mail lists, you need to create groups in ADUC, add the corresponding e-mail address for each group and assign the proper user accounts as members of the group.
|
||||
|
||||
With this setup, which acts as a mail list, all member mailboxes of a Samba4 AD group will receive mail destined for the AD group's e-mail address. Use the below screenshots as a guide to declare the e-mail field for a Samba4 group account and add domain users as members of the group.
|
||||
|
||||
Make sure all account members added to a group have their e-mail addresses declared.
|
||||
|
||||
[][6]
|
||||
|
||||
Create Group Admin for Samba4 AD DC
|
||||
|
||||
[][7]
|
||||
|
||||
Add Users to Group
|
||||
|
||||
In this example, all mail sent to the admins@tecmint.lan e-mail address declared for the ‘Domain Admins’ group will be received by each member mailbox of this group.
|
||||
|
||||
3. An alternative method to declare the e-mail address for a Samba4 AD account is to create a user or a group with the samba-tool command line directly from one of the AD DC consoles and specify the e-mail address with the `--mail-address` flag.
|
||||
|
||||
Use the following command syntax to create a user with an e-mail address specified:
|
||||
|
||||
```
|
||||
# samba-tool user add --mail-address=user_email@domain.tld --surname=your_surname --given-name=your_given_name your_ad_user
|
||||
```
|
||||
|
||||
Create a group with an e-mail address specified:
|
||||
|
||||
```
|
||||
# samba-tool group add --mail-address=group_email@domain.tld your_ad_group
|
||||
```
|
||||
|
||||
To add members to a group:
|
||||
|
||||
```
|
||||
# samba-tool group addmembers your_group user1,user2,userX
|
||||
```
|
||||
|
||||
To list all available samba-tool command fields for a user or a group use the following syntax:
|
||||
|
||||
```
|
||||
# samba-tool user add -h
|
||||
# samba-tool group add -h
|
||||
```
|
||||
|
||||
### Step 2: Secure Roundcube Webmail
|
||||
|
||||
4. Before modifying the Roundcube configuration file, first use the [netstat command][8] piped through an egrep filter to list the sockets that [Dovecot and Postfix][9] listen on, and ensure that the properly secured ports (993 for IMAPS and 587 for submission) are active and enabled.
|
||||
|
||||
```
|
||||
# netstat -tulpn| egrep 'dovecot|master'
|
||||
```
|
||||
|
||||
5. To enforce mail reception and transfer between Roundcube and the iRedMail services on secured IMAP and SMTP ports, open the Roundcube configuration file located at /var/www/roundcubemail/config/config.inc.php and make sure you change the following lines (for localhost in this case), as shown in the below excerpt:
|
||||
|
||||
```
|
||||
// For IMAPS
|
||||
$config['default_host'] = 'ssl://127.0.0.1';
|
||||
$config['default_port'] = 993;
|
||||
$config['imap_auth_type'] = 'LOGIN';
|
||||
// For SMTP
|
||||
$config['smtp_server'] = 'tls://127.0.0.1';
|
||||
$config['smtp_port'] = 587;
|
||||
$config['smtp_user'] = '%u';
|
||||
$config['smtp_pass'] = '%p';
|
||||
$config['smtp_auth_type'] = 'LOGIN';
|
||||
```
|
||||
|
||||
This setup is highly recommended in case Roundcube is installed on a different host than the one that provides the mail services (IMAP, POP3 or SMTP daemons).
|
||||
|
||||
6. Next, without closing the configuration file, search for and make the following small changes in order for Roundcube to be visited only via the HTTPS protocol, to hide the version number, and to automatically append the domain name for accounts that log in via the web interface.
|
||||
|
||||
```
|
||||
$config['force_https'] = true;
|
||||
$config['useragent'] = 'Your Webmail'; // Hide version number
|
||||
$config['username_domain'] = 'domain.tld';
|
||||
```
|
||||
|
||||
7. Also, disable the managesieve and password plugins by adding a comment (`//`) in front of the line that starts with $config[‘plugins’].
|
||||
|
||||
Users will change their password from a Windows or Linux machine joined to the Samba4 AD DC once they log in and authenticate to the domain. A sysadmin will globally manage all sieve rules for domain accounts.
|
||||
|
||||
```
|
||||
// $config['plugins'] = array('managesieve', 'password');
|
||||
```
|
||||
|
||||
8. Finally, save and close the configuration file and visit Roundcube Webmail by opening a browser and navigating to the iRedMail IP address or FQDN/mail location via the HTTPS protocol.
|
||||
|
||||
The first time you visit Roundcube, an alert should appear in the browser due to the self-signed certificate the web server uses. Accept the certificate and log in with Samba AD account credentials.
|
||||
|
||||
```
|
||||
https://iredmail-FQDN/mail
|
||||
```
|
||||
[][10]
|
||||
|
||||
Roundcube Webmail Login
|
||||
|
||||
### Step 3: Enable Samba AD Contacts in Roundcube
|
||||
|
||||
9. To configure the Samba AD Global LDAP Address Book to appear in Roundcube Contacts, open the Roundcube configuration file again for editing and make the following changes:
|
||||
|
||||
Navigate to the bottom of the file and identify the section that begins with ‘# Global LDAP Address Book with AD’, delete all its content until the end of the file and replace it with the following code block:
|
||||
|
||||
```
|
||||
# Global LDAP Address Book with AD.
|
||||
#
|
||||
$config['ldap_public']["global_ldap_abook"] = array(
|
||||
'name' => 'tecmint.lan',
|
||||
'hosts' => array("tecmint.lan"),
|
||||
'port' => 389,
|
||||
'use_tls' => false,
|
||||
'ldap_version' => '3',
|
||||
'network_timeout' => 10,
|
||||
'user_specific' => false,
|
||||
'base_dn' => "dc=tecmint,dc=lan",
|
||||
'bind_dn' => "vmail@tecmint.lan",
|
||||
'bind_pass' => "your_password",
|
||||
'writable' => false,
|
||||
'search_fields' => array('mail', 'cn', 'sAMAccountName', 'displayname', 'sn', 'givenName'),
|
||||
'fieldmap' => array(
|
||||
'name' => 'cn',
|
||||
'surname' => 'sn',
|
||||
'firstname' => 'givenName',
|
||||
'title' => 'title',
|
||||
'email' => 'mail:*',
|
||||
'phone:work' => 'telephoneNumber',
|
||||
'phone:mobile' => 'mobile',
|
||||
'department' => 'departmentNumber',
|
||||
'notes' => 'description',
|
||||
),
|
||||
'sort' => 'cn',
|
||||
'scope' => 'sub',
|
||||
'filter' => '(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))',
|
||||
'fuzzy_search' => true,
|
||||
'vlv' => false,
|
||||
'sizelimit' => '0',
|
||||
'timelimit' => '0',
|
||||
'referrals' => false,
|
||||
);
|
||||
```
|
||||
|
||||
In this block of code, replace the name, hosts, base_dn, bind_dn and bind_pass values accordingly.
|
||||
|
||||
10. After you’ve made all the required changes, save and close the file, log in to the Roundcube webmail interface and go to the Address Book menu.
|
||||
|
||||
Click on the name you chose for your Global Address Book, and a contact list of all domain accounts (users and groups) with their specified e-mail addresses should be visible.
|
||||
|
||||
[][11]
|
||||
|
||||
Roundcube User Contact List
|
||||
|
||||
### Step 4: Add an Alias for Roundcube Webmail Interface
|
||||
|
||||
11. To visit Roundcube at a web address of the form https://webmail.domain.tld instead of the address provided by default by iRedMail, you need to make the following changes.
|
||||
|
||||
From a joined Windows machine with RSAT tools installed, open DNS Manager and add a new CNAME record for iRedMail FQDN, named webmail, as illustrated in the following image.
|
||||
|
||||
[][12]
|
||||
|
||||
DNS Webmail Properties
|
||||
|
||||
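If you prefer to stay on the command line, the same CNAME record can be created directly from a domain controller with samba-tool. A sketch, assuming the DC is named adc1 and the iRedMail host's A record is mail.tecmint.lan (adjust the names and credentials to your realm):

```
# samba-tool dns add adc1 tecmint.lan webmail CNAME mail.tecmint.lan -U administrator
```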
12. Next, on the iRedMail machine, open the Apache web server SSL configuration file located at /etc/httpd/conf.d/ssl.conf and change the DocumentRoot directive to point to the /var/www/roundcubemail/ system path.
|
||||
|
||||
file /etc/httpd/conf.d/ssl.conf excerpt:
|
||||
|
||||
```
|
||||
DocumentRoot "/var/www/roundcubemail/"
|
||||
```
|
||||
|
||||
Restart Apache daemon to apply changes.
|
||||
|
||||
```
|
||||
# systemctl restart httpd
|
||||
```
|
||||
|
||||
13. Now, point the browser to the following address and the Roundcube interface should appear. Accept the self-signed certificate error to continue to the login page. Replace domain.tld from this example with your own domain name.
|
||||
|
||||
```
|
||||
https://webmail.domain.tld
|
||||
```
|
||||
|
||||
### Step 5: Disable iRedMail Unused Services
|
||||
|
||||
14. Since the iRedMail daemons are configured to query the Samba4 AD DC LDAP server for account information and other resources, you can safely stop and disable some local services on the iRedMail machine, such as the LDAP database server and the iredpad service, by issuing the following commands.
|
||||
|
||||
```
|
||||
# systemctl stop slapd iredpad
|
||||
# systemctl disable slapd iredpad
|
||||
```
|
||||
|
||||
15. Also, disable some scheduled tasks performed by iRedMail, such as the LDAP database backup and iRedPad tracking records, by adding a comment (#) in front of each line of the crontab file, as illustrated in the below screenshot.
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
```
|
||||
[][13]
|
||||
|
||||
Disable iRedMail Tasks
|
||||
|
||||
### Step 6: Use Mail Alias in Postfix
|
||||
|
||||
16. To redirect all locally generated mail (destined for postmaster and subsequently redirected to the root account) to a specific Samba4 AD account, open the Postfix aliases configuration file located at /etc/postfix/aliases and modify the root line as follows:
|
||||
|
||||
```
|
||||
root: your_AD_email_account@domain.tld
|
||||
```
|
||||
|
||||
17. Apply the aliases configuration file so that Postfix can read it in its own format by executing the newaliases command, then test whether the mail gets sent to the proper domain e-mail account by issuing the following commands.
|
||||
|
||||
```
|
||||
# newaliases
# echo "Test mail" | mail -s "This is root's email" root
|
||||
```
|
||||
|
||||
18. After the mail has been sent, log in to Roundcube webmail with the domain account you’ve set up for mail redirection and verify that the previously sent mail has been received in your account's Inbox.
|
||||
|
||||
[][14]
|
||||
|
||||
Verify User Mail
|
||||
|
||||
That's all! Now you have a fully working mail server integrated with Samba4 Active Directory. Domain accounts can send and receive mail for their internal domain or for other external domains.
|
||||
|
||||
The configurations used in this tutorial can be successfully applied to integrate an iRedMail server to a Windows Server 2012 R2 or 2016 Active Directory.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux desktop distributions, servers and bash scripting.
|
||||
|
||||
-----------
|
||||
|
||||
|
||||
via: https://www.tecmint.com/integrate-iredmail-roundcube-with-samba4-ad-dc/
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://www.tecmint.com/install-iredmail-on-centos-7-for-samba4-ad-integration/
|
||||
[2]:https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/
|
||||
[3]:https://www.tecmint.com/install-and-configure-roundcube-webmail-for-postfix-mail-server/
|
||||
[4]:https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2017/05/Active-Directory-User-and-Computers.jpg
|
||||
[6]:https://www.tecmint.com/wp-content/uploads/2017/05/Create-Group-Admin-for-Samba4-AD-DC.png
|
||||
[7]:https://www.tecmint.com/wp-content/uploads/2017/05/Add-Users-to-Group.png
|
||||
[8]:https://www.tecmint.com/20-netstat-commands-for-linux-network-management/
|
||||
[9]:https://www.tecmint.com/configure-postfix-and-dovecot-with-virtual-domain-users-in-linux/
|
||||
[10]:https://www.tecmint.com/wp-content/uploads/2017/05/Roundcube-Webmail-Login.png
|
||||
[11]:https://www.tecmint.com/wp-content/uploads/2017/05/Roundcube-User-Contact-List.png
|
||||
[12]:https://www.tecmint.com/wp-content/uploads/2017/05/DNS-Webmail-Properties.jpg
|
||||
[13]:https://www.tecmint.com/wp-content/uploads/2017/05/Disable-iRedMail-Tasks.png
|
||||
[14]:https://www.tecmint.com/wp-content/uploads/2017/05/Verify-User-Mail.png
|
||||
[15]:https://www.tecmint.com/author/cezarmatei/
|
||||
[16]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[17]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
72
sources/tech/20170516 What Fuchsia could mean for Android.md
Normal file
72
sources/tech/20170516 What Fuchsia could mean for Android.md
Normal file
@ -0,0 +1,72 @@
|
||||
What Fuchsia could mean for Android
|
||||
============================================================
|
||||
|
||||
Fuchsia could be the next replacement for Android, or for both Android and Chrome OS. Speculation abounds, and Jack Wallen adds to this speculation some kudos and caveats for Google to consider.
|
||||
|
||||
|
||||
|
||||
 Image: Jack Wallen
|
||||
|
||||
Google has never been one to settle or to do things in a way that is not decidedly "Google". So it should have come as no surprise to anyone that they began working on a project that had many scratching their heads. The project is called [Fuchsia][6], and most people who follow Google and Android closely know of this new platform.
|
||||
|
||||
For those who haven't been following the latest and greatest from Google, Fuchsia is a new, real-time, open source operating system that first popped up on the radar in August 2016. Back then, Fuchsia was nothing more than a command line. Less than a year has zipped by and the platform already has a rather interesting GUI.
|
||||
|
||||
Much to the chagrin of the Linux faithful, Fuchsia does not use the Linux kernel. This project is all Google and uses a Google-developed microkernel named "Magenta." Why would they do this? Consider the fact that Google's newest device, the Pixel, runs kernel 3.18 and you have your answer. The 3.18 Linux kernel was released in 2014 (which, in tech terms, is ancient). With that in mind, why wouldn't Google want to break out completely on their own to keep their mobile platform as up to date as possible?
|
||||
|
||||
Although it pains me to think that Linux might not be (in some unknown future date) powering the most widely-used ecosystem on the planet, I believe this is the right move for Google, with one major caveat.
|
||||
|
||||
### First, a couple of kudos
|
||||
|
||||
I have to first say, bravo to Google for open sourcing Fuchsia. This was the right move. Android has benefitted from the open source Linux kernel for years, so it only makes sense that Google would open up their latest project. To be perfectly honest, had it not been for open source and the Linux kernel, Android would not have risen nearly as quickly as it did. In fact, I would venture a guess to say that, had it not been for Android being propped up by Linux and open source, the mobile market share would show a very different, apple-shaped, picture at the moment.
|
||||
|
||||
The next bit of bravo comes by way of necessity. Operating systems need to be completely rethought now and then. Android is an amazing platform that serves the mobile world quite well. However, there's only so much evolution one can squeeze out of it; and considering the consuming world is always looking for next big thing, Android (and iOS) can only deliver so many times before they have been wrung dry. Couple that with a sorely out of date kernel and you have a perfect storm ready for the likes of Fuchsia.
|
||||
|
||||
Google has never been one to remain stagnant and this new platform is proof.
|
||||
|
||||
### That darned caveat
|
||||
|
||||
I will preface this by reminding everyone of my open source background. I have been a user of Linux since the late 90s and have covered nearly every aspect of open source to be found. Over the last few years, I've been watching and commenting on the goings on with Ubuntu and their (now) failed attempt at convergence. With that said, here's my concern with Fuchsia.
|
||||
|
||||
My suspicion is that Google's big plan for Fuchsia is to create a single operating system for all devices: smartphones, IoT, Chromebooks. On the surface, that sounds like an idea that would bear significant fruit; but if you examine the struggles Canonical endured with Unity 8/Mir/Convergence, you cringe at the idea of "one platform to rule them all". Of course, this isn't quite the same. I doubt Google is creating a single platform that will allow you to "converge" all of your devices. After all, what benefit would there be to converging IoT with your smartphone? It's not like we need to start exchanging data between a phone and a thermostat. Right? Right???
|
||||
|
||||
Even so, should that be the plan for Google, I would caution them to look closely at what befell Canonical and Unity 8. It was an outstanding idea that simply couldn't come to fruition.
|
||||
|
||||
I could be wrong about this. Google might be thinking of Fuchsia as nothing more than a replacement for Android. It is quite possible this was Google needing to replace the outdated Linux kernel and deciding they may as well go "all in". But considering Armadillo (the Fuchsia UI) has been written in the cross-platform [Flutter SDK][7], the idea of crossing the platform boundary starts to fall into the realm of the possible.
|
||||
|
||||
Or, maybe Fuchsia is simply just Google saying "Let's rebuild our smartphone platform with the knowledge we have today and see where it goes." If that's the case, I would imagine the Google mobile OS will be primed for major success. However, there's one elephant in the room that many have yet to address that hearkens back to "one platform to rule them all". Google has been teasing Android apps on Chromebooks for quite some time. Unfortunately, the success with this idea has been moderate (at best). With Microsoft going out of their way to compete with Chromebooks, Google knows they have to expand that ecosystem, or else lose precious ground (such as within the education arena). One way to combat this is with a single OS to drive both smartphones and Chromebooks. It would mean all apps would work on both platforms (which would be a serious boon) and a universality to the ecosystem (again, massive boon).
|
||||
|
||||
### Speculation
|
||||
|
||||
Google is very good at keeping this sort of thing close to the vest; which translates to a lot of speculation on the part of pundits. Generally speaking, at least with Android, Google has always seemed to make the right moves. If they believe Fuchsia is the way to go, then I'm inclined to believe them. However, there are so many uncertainties surrounding this platform that one is left constantly scratching one's head in wonder.
|
||||
|
||||
What do you think? What will Fuchsia become? Speculate with me.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Jack Wallen is an award-winning writer for TechRepublic and Linux.com. He’s an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website jackwallen.com.
|
||||
|
||||
-------------------
|
||||
|
||||
via: http://www.techrepublic.com/article/what-fuchsia-could-mean-for-android/
|
||||
|
||||
作者:[About Jack Wallen ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.techrepublic.com/article/what-fuchsia-could-mean-for-android/#modal-bio
|
||||
[1]:http://www.techrepublic.com/article/biometric-mobile-payments-will-hit-2b-this-year/
|
||||
[2]:http://www.techrepublic.com/article/apple-invests-200m-in-us-manufacturing-to-help-corning-produce-new-state-of-the-art-glass/
|
||||
[3]:http://www.techrepublic.com/article/google-will-soon-require-android-for-work-profiles-for-enterprise-users/
|
||||
[4]:http://www.techrepublic.com/newsletters/
|
||||
[5]:http://www.techrepublic.com/article/what-fuchsia-could-mean-for-android/#postComments
|
||||
[6]:https://github.com/fuchsia-mirror
|
||||
[7]:https://flutter.io/
|
||||
[8]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=11&rsid=cbsitechrepublicsite&pagetype=article&sl=en&sc=us&topicguid=09288d3a-8606-11e2-a661-024c619f5c3d&assetguid=714cb8ff-ebf0-4584-a421-e8464aae66cf&assettype=content_article&ftag_cd=LGN3588bd2&devicetype=desktop&viewguid=4c47ca57-283d-4861-a131-09e058b652ac&q=&ctype=docids;promo&cval=33109435;7205&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Ftaming-it-complexity-with-managed-services-japanese%2F%3Fpromo%3D7205%26ftag%3DLGN3588bd2%26cval%3Dcontent-top-leaderboard
|
||||
[9]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=11&rsid=cbsitechrepublicsite&pagetype=article&sl=en&sc=us&topicguid=09288d3a-8606-11e2-a661-024c619f5c3d&assetguid=714cb8ff-ebf0-4584-a421-e8464aae66cf&assettype=content_article&ftag_cd=LGN3588bd2&devicetype=desktop&viewguid=4c47ca57-283d-4861-a131-09e058b652ac&q=&ctype=docids;promo&cval=33109435;7205&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Ftaming-it-complexity-with-managed-services-japanese%2F%3Fpromo%3D7205%26ftag%3DLGN3588bd2%26cval%3Dcontent-top-leaderboard
|
||||
[10]:http://www.techrepublic.com/rssfeeds/topic/android/
|
||||
[11]:http://www.techrepublic.com/meet-the-team/us/jack-wallen/
|
||||
[12]:https://twitter.com/intent/user?screen_name=jlwallen
|
@ -0,0 +1,101 @@
|
||||
Hugo vs. Jekyll: Comparing the leading static website generators
|
||||
============================================================
|
||||
|
||||
### If you're building a new website, a static site generator may be the right platform for you.
|
||||
|
||||
|
||||

|
||||
>Image by : opensource.com
|
||||
|
||||
Unless your spirit animal is Emily Dickinson, when you make a thing, you want to share it with the world. Sharing your work means that you need a website. Of course, you could simply partake in digital sharecropping and use any of the various social media sites to get your work in front of an audience. There sure are plenty to choose from... and not just "conventional" social media sites. With places like Artstation, Flickr, Soundcloud, and Wattpad, there's an outlet for you, whatever your medium.
|
||||
|
||||
And actually, you _should_ use those sites. It's where the people are, after all. However, none of those places is truly yours. None is a home base that _you_ control and that you can ensure will be there for people to find regardless of the waxing and waning trends of social media.
|
||||
|
||||
Control. That's the value of having your own place on the web.
|
||||
|
||||
But this article isn't about setting up a domain name and hosting for your website. It's for the step _after_ that, the actual making of that site. The typical choice for a lot of people would be to use something like [WordPress][6]. It's a one-click install on most hosting providers, and there's a gigantic market of plugins and themes available to choose from, depending on the type of site you're trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don't keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.
|
||||
|
||||
The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you're relegated to coding the whole thing by hand yourself. It's doable, but you just want a place to share your work. You shouldn't have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.
|
||||
|
||||
Enter static site generators. You get the speed and security of static HTML pages, but with a workflow that's closer to the convenience of a dynamic site. The two frontrunners in the static site generator world are [Hugo][7] and [Jekyll][8]. (By the way, Paolo Bonzini has a great article on [getting started with Jekyll][9].) But which one is the right choice for you? Hopefully by the end of this article, you'll have a better idea. We're evaluating both static site generators based on how quickly you can get started, availability of themes, editing workflow, and extensibility.
|
||||
|
||||
### Getting started
|
||||
|
||||
Fair warning, both of these tools will require you to work with them from the command line. Most of the commands are straightforward and easy to remember, but let's adjust our expectations accordingly. This is not a point-and-click interface.
|
||||
|
||||
Installation for both Jekyll and Hugo is pretty simple. Jekyll installs as a RubyGem, and Hugo offers a very handy all-in-one binary to get you started quickly. Because of the single install package, Hugo edges ahead here slightly. Although the RubyGems install method for Jekyll is easy in its own right, it _does_ require that you already have a properly installed and configured Ruby environment on your computer. Outside of the community of designers and developers for the web, most folks don't already have that setup.
|
||||
|
||||
Once installed, though, both Hugo and Jekyll are pretty evenly matched. They both have great documentation and quick-start guides. You start a new site with a single command (in Jekyll, it's **jekyll new <your_site>** and in Hugo, **hugo new site <your_site>**). This sets up a general directory structure and scaffolding for your site. Directory structures and basic configuration are pretty similar. Jekyll uses a **_config.yml** file and Hugo uses **config.toml** (although you _can_ use YAML or even JSON syntax with Hugo's config if you're more comfortable with either of those). The front matter metadata at the top of each content file uses the same syntax as the config. After that, all page content is written in Markdown.
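For instance, a first session with either tool might look like this (a minimal sketch; `my-site` is just a placeholder name):

```
# Jekyll: scaffold a new site and step into it
$ jekyll new my-site
$ cd my-site          # configuration lives in _config.yml

# Hugo: same idea, slightly different command
$ hugo new site my-site
$ cd my-site          # configuration lives in config.toml
```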
|
||||
|
||||
I will say that in terms of getting you started with your very first statically generated site, Jekyll has a slight advantage over Hugo because it starts with some basic content and a default theme. You can use these as example templates as you start building your site. Hugo has no example content or even a default theme. That said, example content and default themes are usually the first things I delete when I'm making a new site with any tool, so Hugo actually saves me a step.
|
||||
|
||||
### Themes
|
||||
|
||||
As I mentioned, Hugo doesn't ship with a default theme at all, so that's probably one of the first things you're going to want to set up. Jekyll has a decent default theme, though it's pretty bare bones. You'll probably want to go theme hunting with your Jekyll site, too.
|
||||
|
||||
Both Hugo and Jekyll have a pretty diverse assortment of themes for all manners of website types from single-page ID themes to full-blown multipage sites with blog posts and comments. Despite that, it's not exactly easy to find a theme that suits your needs. In either case, the place to go for themes—[themes.gohugo.io][10] for Hugo and [jekyllthemes.org][11] for Jekyll—is basically a single large page full of theme screenshots. Once you click on a theme, you can get some pretty detailed information about it, but that initial search is pretty rough. Hugo's theme page has some basic tagging built into it, but in general, theme searching and presentation is something I feel both projects really need to work on.
|
||||
|
||||
Theme management is also an interesting topic. In both cases, nearly every theme is a Git repository (often hosted on GitHub) that you clone into your website scaffolding. In Jekyll, there's an additional step of using RubyGems' **bundle** to ensure that the theme is managed with the site. Most themes already come with a **Gemfile**, making this step relatively painless. If the theme doesn't already have a **Gemfile**, it's fairly easy to add. In Hugo there's no bundling step. Just point to the theme from your **config.toml**, and you're good to go.
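As a rough sketch of that difference (the theme name here is a placeholder):

```
# Hugo: clone the theme into themes/, then reference it in config.toml
theme = "some-theme"

# Jekyll: add the theme gem to the Gemfile...
gem "some-theme"

# ...and reference it in _config.yml
theme: some-theme
```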
|
||||
|
||||
I've found that I'm partial to the way that Hugo handles themes. You clone (or create) themes into their own space in a **themes** subdirectory. Not only does it make it relatively easy to switch between themes when you're first starting out, but it also gives you the ability to override any component file of a theme with your own file. This means you can customize a theme to your tastes without messing too much with the source of the original theme, allowing it to stay generic enough for other people to use. Of course, if you have a change that you feel other users of the theme may find worthwhile, you can still edit that source and submit a pull request to the theme maintainer.
|
||||
|
||||
### Workflow
|
||||
|
||||
The workflows for building your site in Jekyll and Hugo are pretty similar once you have your initial configuration set up. Both have a live **serve** command that runs a small, lightweight web server on your computer so you can test your site locally without needing to upload it anywhere. The really nice thing is that whether you're running **jekyll serve** or **hugo serve**, both are configured by default to watch for any changes you make to your site as you work on it. When you look at the locally served version of your site in a browser, it automatically updates with any change you make, regardless of whether that change is to content, configuration, theme, or just an image. It's really quite handy and a great time-saver.
|
||||
|
||||
You write the content for your site in both systems using [Markdown][12] syntax. If you don't happen to be familiar with Markdown, it's a very simplified means of writing in plain text while still allowing for some nice formatting marks. It's very easy to work in and human-readable. And because it's in plain text, your content (and therefore your site) is easily version controlled. It's pretty much the main way I write almost everything these days.
|
||||
|
||||
New content can be added to your site scaffolding by manually creating files in the right place. It just needs to be a Markdown file with the appropriate "front matter" metadata at the top of the file. As with the configuration file, Jekyll uses YAML syntax for front matter, while Hugo will accept TOML, YAML, or JSON (default is TOML). That new page file needs to be placed in the correct directory within your site's scaffolding. In Jekyll, you have separate **_drafts** and **_posts** directories for storing your work in progress and your completed content pages, respectively. In Hugo, there's only a single **content** directory. You specify whether a post is a draft or not within that content file's front matter.
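For example, a post's front matter might look like the following sketches (field values are illustrative). Hugo's default TOML is delimited by `+++`:

```
+++
title = "My First Post"
date = "2017-05-20"
draft = true
+++
```

Jekyll's YAML front matter is delimited by `---`:

```
---
title: My First Post
layout: post
---
```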
|
||||
|
||||
Now, although it's possible to do all of this manually, Hugo does offer some convenience functions to ensure that your new file is created in the correct spot in the scaffolding and that files are pre-populated with appropriate front matter. It's simply a matter of going to your site's directory in a terminal and typing **hugo new content/<page.md>** where **<page.md>** is the new page you want to create. You can even set up templates called **archetypes** that hold customized front matter for pages of different types (like if you have both a blog and a podcast on your website).
|
||||
|
||||
When your site is ready to ship, you can shut down your preview server and issue a command to build the actual pages of the site. In Jekyll, that would be **jekyll build**. In Hugo, it's just **hugo**. Jekyll puts the completed site in the **_site** subdirectory, while Hugo puts them in a subdirectory named **public**. In either case, once you do that, you have a completed static website that you can upload and have hosted nearly anywhere.
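Put together, the day-to-day loop boils down to a couple of commands (a quick sketch):

```
# preview locally with live reload
$ jekyll serve        # or: hugo serve

# build the deployable site
$ jekyll build        # output lands in _site/
$ hugo                # output lands in public/
```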
|
||||
|
||||
### Extensibility
|
||||
|
||||
Both Hugo and Jekyll give you the ability to customize your site down to the smallest thing. However, in terms of extensibility, Jekyll currently leads in a big way because of its plugin API. Because of this plugin architecture, it's relatively easy to add functionality to your Jekyll-generated site with reasonably short snippets of code available through the Jekyll community or that you write yourself.
|
||||
|
||||
Hugo does not currently have a plugin API at all, so adding that kind of functionality is a bit tougher. There's hope that the ability to write and include plugins will be added in the future, but it doesn't appear that anyone is working on that yet.
|
||||
|
||||
### Conclusion
|
||||
|
||||
By and large, Hugo and Jekyll are pretty similar. It really comes down to determining how you're most comfortable working and what your site needs. If you already have a RubyGems environment set up and you need the extensibility of plugins, then Jekyll is the way to go. However, if you value a simple workflow and a straightforward means of customizing your site, then Hugo would be your top pick.
|
||||
|
||||
I find that I'm more drawn to Hugo's approach, and in building a small handful of sites, I haven't yet had a need for any plugins. Of course, everyone's needs are a little bit different. Which static site generator would you choose for your site?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/hugo-vs-jekyll
|
||||
|
||||
作者:[Jason van Gumster][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jason-van-gumster
|
||||
[1]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:https://opensource.com/article/17/5/hugo-vs-jekyll?rate=lZDKuqOQ7lVUon-ZKebR5SUCyXDz5oCQ_zoCSBPolOQ
|
||||
[6]:http://wordpress.org/
|
||||
[7]:http://gohugo.io/
|
||||
[8]:https://jekyllrb.com/
|
||||
[9]:https://opensource.com/article/17/4/getting-started-jekyll
|
||||
[10]:https://themes.gohugo.io/
|
||||
[11]:http://jekyllthemes.org/
|
||||
[12]:https://daringfireball.net/projects/markdown/
|
||||
[13]:https://opensource.com/user/26430/feed
|
||||
[14]:https://opensource.com/article/17/5/hugo-vs-jekyll#comments
|
||||
[15]:https://opensource.com/users/jason-van-gumster
|
@ -0,0 +1,178 @@
|
||||
10 Useful Tips for Writing Effective Bash Scripts in Linux
|
||||
============================================================
|
||||
ch-cn translating
|
||||
|
||||
[Shell scripting][4] is the easiest form of programming you can learn/do in Linux. More so, it is a required skill for [system administration for automating tasks][5] and for developing new, simple utilities/tools, to mention just a few uses.
|
||||
|
||||
In this article, we will share 10 useful and practical tips for writing effective and reliable bash scripts. They include:
|
||||
|
||||
### 1\. Always Use Comments in Scripts
|
||||
|
||||
This is a recommended practice which applies not only to shell scripting but to all other kinds of programming. Writing comments in a script helps you or someone else going through your script understand what the different parts of the script do.
|
||||
|
||||
For starters, comments are defined using the `#` sign.
|
||||
|
||||
```
|
||||
#TecMint is the best site for all kinds of Linux articles
|
||||
```
|
||||
|
||||
### 2\. Make a Script Exit When a Command Fails
|
||||
|
||||
Sometimes bash may continue to execute a script even when a certain command fails, thus affecting the rest of the script (and possibly ending in logical errors). Use the line below to exit a script when a command fails:
|
||||
|
||||
```
|
||||
#let script exit if a command fails
|
||||
set -o errexit
|
||||
OR
|
||||
set -e
|
||||
```
|
||||
|
||||
### 3\. Make a Script Exit When Bash Uses an Undeclared Variable
|
||||
|
||||
Bash may also try to use an undeclared variable, which could cause a logical error. Therefore, use the following line to instruct bash to exit a script when it attempts to use an undeclared variable:
|
||||
|
||||
```
|
||||
#let script exit if an unset variable is used
|
||||
set -o nounset
|
||||
OR
|
||||
set -u
|
||||
```
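In practice, the two safeguards from tips 2 and 3 are often combined on one line near the top of a script, for example:

```
#!/bin/bash
# exit on any command failure and on use of an unset variable
set -o errexit -o nounset    # equivalently: set -eu
```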
|
||||
|
||||
### 4\. Use Double Quotes to Reference Variables
|
||||
|
||||
Using double quotes while referencing (using a value of a variable) helps to prevent word splitting (regarding whitespace) and unnecessary globbing (recognizing and expanding wildcards).
|
||||
|
||||
Check out the example below:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
#let script exit if a command fails
|
||||
set -o errexit
|
||||
#let script exit if an unset variable is used
|
||||
set -o nounset
|
||||
echo "Names without double quotes"
|
||||
echo
|
||||
names="Tecmint FOSSMint Linusay"
|
||||
for name in $names; do
|
||||
echo "$name"
|
||||
done
|
||||
echo
|
||||
echo "Names with double quotes"
|
||||
echo
|
||||
for name in "$names"; do
|
||||
echo "$name"
|
||||
done
|
||||
exit 0
|
||||
```
|
||||
|
||||
Save the file and exit, then run it as follows:
|
||||
|
||||
```
|
||||
$ ./names.sh
|
||||
```
|
||||
![Use Double Quotes in Scripts][6]
|
||||
|
||||
Use Double Quotes in Scripts
|
||||
|
||||
### 5\. Use Functions in Scripts
|
||||
|
||||
Except for very small scripts (with a few lines of code), always remember to use functions to modularize your code and make scripts more readable and reusable.
|
||||
|
||||
The syntax for writing functions is as follows:
|
||||
|
||||
```
|
||||
function check_root(){
|
||||
command1;
|
||||
command2;
|
||||
}
|
||||
OR
|
||||
check_root(){
|
||||
command1;
|
||||
command2;
|
||||
}
|
||||
```
|
||||
|
||||
For a one-line function definition, terminate each command with a semicolon, like this:
|
||||
|
||||
```
|
||||
check_root(){ command1; command2; }
|
||||
```
|
||||
|
||||
### 6\. Use = instead of == for String Comparisons
|
||||
|
||||
Note that `==` is a synonym for `=`; therefore, use only a single `=` for string comparisons, for instance:
|
||||
|
||||
```
|
||||
value1="tecmint.com"
|
||||
value2="fossmint.com"
|
||||
if [ "$value1" = "$value2" ]
|
||||
```
|
||||
|
||||
### 7\. Use $(command) instead of legacy `command` for Substitution
|
||||
|
||||
[Command substitution][7] replaces a command with its output. Use `$(command)` instead of legacy backticks `` `command` `` for command substitution.
|
||||
|
||||
This is recommended even by [shellcheck tool][8] (shows warnings and suggestions for shell scripts). For example:
|
||||
|
||||
```
|
||||
user=`echo "$UID"`     # legacy backticks
|
||||
user=$(echo "$UID")    # preferred modern form
|
||||
```
|
||||
|
||||
### 8\. Use Read-only to Declare Static Variables
|
||||
|
||||
A static variable doesn't change; its value cannot be altered once it's defined in a script:
|
||||
|
||||
```
|
||||
readonly passwd_file="/etc/passwd"
|
||||
readonly group_file="/etc/group"
|
||||
```
|
||||
|
||||
### 9\. Use Uppercase Names for ENVIRONMENT Variables and Lowercase for Custom Variables
|
||||
|
||||
All bash environment variables are named with uppercase letters; therefore, use lowercase letters to name your custom variables to avoid variable name conflicts:
|
||||
|
||||
```
|
||||
#define custom variables using lowercase and use uppercase for env variables
|
||||
nikto_file="$HOME/Downloads/nikto-master/program/nikto.pl"
|
||||
perl "$nikto_file" -h "$1"
|
||||
```
|
||||
|
||||
### 10\. Always Perform Debugging for Long Scripts
|
||||
|
||||
If you are writing bash scripts with thousands of lines of code, finding errors may become a nightmare. To easily fix things before executing a script, perform some debugging. Master this tip by reading through the guides provided below; a brief sketch follows the list:
|
||||
|
||||
1. [How To Enable Shell Script Debugging Mode in Linux][1]
|
||||
|
||||
2. [How to Perform Syntax Checking Debugging Mode in Shell Scripts][2]
|
||||
|
||||
3. [How to Trace Execution of Commands in Shell Script with Shell Tracing][3]
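As a quick, minimal sketch of what those guides cover (`script.sh` is a placeholder):

```
# check a script for syntax errors without executing it
bash -n script.sh

# trace each command as the script runs
bash -x script.sh

# or trace just one section from inside the script
set -x
cp "$source_file" "$dest_dir"   # hypothetical commands under inspection
set +x
```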
|
||||
|
||||
That’s all! Do you have any other best bash scripting practices to share? If yes, then use the comment form below to do that.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
|
||||
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
|
||||
|
||||
----------------
|
||||
|
||||
via: https://www.tecmint.com/useful-tips-for-writing-bash-scripts-in-linux/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/enable-shell-debug-mode-linux/
|
||||
[2]:https://www.tecmint.com/check-syntax-in-shell-script/
|
||||
[3]:https://www.tecmint.com/trace-shell-script-execution-in-linux/
|
||||
[4]:https://www.tecmint.com/category/bash-shell/
|
||||
[5]:https://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
|
||||
[6]:https://www.tecmint.com/wp-content/uploads/2017/05/Use-Double-Quotes-in-Scripts.png
|
||||
[7]:https://www.tecmint.com/assign-linux-command-output-to-variable/
|
||||
[8]:https://www.tecmint.com/shellcheck-shell-script-code-analyzer-for-linux/
|
@ -0,0 +1,218 @@
|
||||
5 reasons the D programming language is a great choice for development
|
||||
============================================================
|
||||
|
||||
### D's modeling, productivity, readability, and other features make it a good fit for collaborative software development.
|
||||
|
||||
|
||||

|
||||
>Image by : opensource.com
|
||||
|
||||
The [D programming language][8] is a statically typed, general purpose programming language with C-like syntax that compiles to native code. It's a good fit in open source software development for many reasons; here are some of them.
|
||||
|
||||
### Modeling power
|
||||
|
||||
It's not uncommon to find yourself in a situation where you have an idea and you want to implement it in code exactly the way you are thinking about it in your mind. However, sometimes you have to compromise the idea to fit the code, instead of modeling the code to fit the idea. D supports several [programming paradigms][9], including functional style, imperative, object oriented, metaprogramming, and concurrent (actor model), all harmoniously integrated. You have the option to choose whichever paradigm is convenient for modeling code to fit your idea.
|
||||
|
||||
By using [templates][10], a feature to generate additional D code and weave it in during compilation, you can describe code as a pattern for the compiler to generate the code. This is especially useful for designing algorithms without tying them to specific types. Platform-agnostic code becomes easy with the generic nature of templates. By combining templates with [conditional compilation][11], cross-platform apps become much easier to implement and are more likely to receive contributions from developers using different operating systems. With this, a single programmer can achieve a lot with less code and limited time.
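As a small illustration (my own sketch, not from the article), a single templated function covers any comparable type, and the compiler generates each concrete version during compilation:

```
// one generic definition; concrete instances are generated per type
T largest(T)(T a, T b)
{
    return a > b ? a : b;
}

unittest
{
    assert(largest(3, 7) == 7);        // instantiated for int
    assert(largest(2.5, 1.0) == 2.5);  // instantiated for double
}
```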
|
||||
|
||||
[Ranges][12], deeply integrated into D, abstract how container elements (e.g., arrays, associative arrays, linked lists, etc.) are accessed as opposed to an actual implementation. This abstraction enables the design and use of a great number of algorithms over a great number of container types without tying them to a specific data structure. D's [array slicing][13] is an implementation of a range. In the end, you write less code in less time and have lower maintenance costs.
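A tiny sketch of array slicing acting as a range (again, illustrative):

```
import std.algorithm : sum;

void main()
{
    int[] numbers = [1, 2, 3, 4, 5];
    int[] middle = numbers[1 .. 4];  // a slice: [2, 3, 4], no copying
    assert(sum(middle) == 9);        // any range algorithm works on it
}
```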
|
||||
|
||||
### Productivity
|
||||
|
||||
Most code contributors to open source software work on a voluntary basis with limited time. D allows you to be more productive because you can do more in less time. Templates and ranges in D make programmers more productive as they write generic and reusable code, but those are only a couple of D's strengths in terms of productivity. Another main appeal is that D compiles so quickly that it feels like an interpreted language such as Python, JavaScript, Ruby, or PHP, making D good for quick prototyping.
|
||||
|
||||
D can easily interface with legacy code, alleviating the need to port. It was designed to make [interfacing directly with C code][14] natural: After all, C is the master of legacy, well-written and tested code, libraries, and low-level system calls (especially in Linux). C++ code is also [callable in D][15] to a great extent. In fact, [Python][16], [Objective-C][17], [Lua][18], and [Fortran][19] are some of the languages that are technically usable in D, and there are a number of third-party efforts pushing D in those areas. This makes the huge number of open source libraries usable in D code, which aligns with conventions of open source software development.
|
||||
|
||||
### Readable and maintainable
|
||||
|
||||
```
|
||||
import std.stdio; // import standard I/O module
|
||||
void main()
|
||||
{
|
||||
writeln("Hello, World!");
|
||||
}
|
||||
```
|
||||
|
||||
HelloWorld demo in D
|
||||
|
||||
D code is easy to understand by anyone familiar with C-like programming languages. Moreover, D is very readable, even for sophisticated code, which makes bugs easy to spot. Readability is also critical for engaging contributors, which is key to the growth of open source software.
|
||||
|
||||
One simple but very useful [syntactic sugar][20] in D is support for using an underscore to separate numbers, making them more readable. This is especially useful for math:
|
||||
|
||||
```
|
||||
int count = 100_000_000;
|
||||
double price = 20_220.00 + 10.00;
|
||||
int number = 0x7FFF_FFFF; // in hexadecimal system
|
||||
```
|
||||
|
||||
[Ddoc][21], a built-in tool, makes it easy to automatically generate documentation out of code comments without the need for an external tool. Documentation becomes less challenging to write, improve, and update as it goes side by side with the code.
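A sketch of what such a comment looks like (the function is invented for illustration):

```
/**
 * Computes the area of a rectangle.
 *
 * Params:
 *     width = the rectangle's width
 *     height = the rectangle's height
 *
 * Returns: the computed area.
 */
double area(double width, double height)
{
    return width * height;
}
```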
|
||||
|
||||
[Contracts][22] are checks put in place to ensure D code behaves exactly as expected. Just like legal contracts are signed to ensure each party does their part in an agreement, contract programming in D ensures that the implementation of a function, class, etc. always produces the desired results or behaves as expected. Such a feature is practically useful for bug checks, especially in open source software where several people collaborate on a project. Contracts can be a lifesaver for large projects. D's powerful contract programming features are built-in rather than added as an afterthought. Contracts not only add to the convenience of using D but also make writing correct and maintainable code less of a headache.
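For illustration, here is a sketch of a function with `in` and `out` contract blocks (the function is invented; recent compilers accept `do` before the function body, while older ones use `body`):

```
import std.math : abs;

double reciprocal(double x)
in
{
    assert(x != 0, "x must be non-zero");  // precondition
}
out (result)
{
    assert(abs(result * x - 1.0) < 1e-9);  // postcondition: result is 1/x
}
do
{
    return 1.0 / x;
}
```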
|
||||
|
||||
### Convenient
|
||||
|
||||
Collaborative development can be challenging, as code is frequently changing and has many moving parts. D alleviates some of these issues, with support for importing modules locally within a scope:
|
||||
|
||||
```
|
||||
// returns even numbers
|
||||
int[] evenNumbers(int[] numbers)
|
||||
{
|
||||
// "filter" and "array" are only accessible locally
|
||||
import std.algorithm: filter;
|
||||
import std.array: array;
|
||||
return numbers.filter!(n => n%2 == 0).array;
|
||||
}
|
||||
```
|
||||
|
||||
The "!" operator used with **filter** is the syntax of a [template argument][5].
|
||||
|
||||
The function above can be tossed around without breaking code because it does not rely on any globally imported module. Any function implemented like this can be later enhanced without breaking code, which is a good thing for collaborative development.
|
||||
|
||||
[Universal Function Call Syntax][23] (UFCS) is syntactic sugar in D that allows regular functions to be called like member functions of an object. A function is defined as:
|
||||
|
||||
```
|
||||
void cook(string food, int quantity)
|
||||
{
|
||||
import std.stdio: writeln;
|
||||
writeln(food, " in quantity of ", quantity);
|
||||
}
|
||||
```
|
||||
|
||||
It can be called in the usual way like:
|
||||
|
||||
```
|
||||
string food = "rice";
|
||||
int quantity = 3;
|
||||
|
||||
cook(food, quantity);
|
||||
```
|
||||
|
||||
With UFCS, this same function can be called as if **cook** is a member function:
|
||||
|
||||
```
|
||||
string food = "rice";
|
||||
int quantity = 3;
|
||||
|
||||
food.cook(quantity);
|
||||
```
|
||||
|
||||
During compilation, the compiler automatically places **food** as the first argument to the function **cook**. UFCS makes it possible to **chain** regular functions—giving your code the natural feel of functional style programming. UFCS is heavily used in D, as it was in the case of the **filter** and **array** functions used in the **evenNumbers** function above. Combining templates, ranges, conditional compilation, and UFCS gives you massive power without sacrificing convenience.
|
||||
|
||||
The **auto** keyword can be used in place of a type. The compiler will statically infer the type during compilation. This saves you from long type names and makes writing D code feel like a dynamically typed language.
|
||||
|
||||
```
|
||||
// Nope. Do you?
|
||||
VeryLongTypeHere variable = new VeryLongTypeHere();
|
||||
|
||||
// using auto keyword
|
||||
auto variable = new VeryLongTypeHere();
|
||||
auto name = "John Doe";
|
||||
auto age = 12;
|
||||
auto letter = 'e';
|
||||
auto anArray = [1, 2.0, 3, 0, 1.5]; // type of double[]
|
||||
auto dictionary = ["one": 1, "two": 2, "three": 3]; // type of int[string]
|
||||
auto cook(string food) {...} // auto for a function return type
|
||||
```
|
||||
|
||||
D's [foreach][24] loop allows looping over collections and ranges of varying underlying data types:
|
||||
|
||||
```
|
||||
foreach(name; ["John", "Yaw", "Paul", "Kofi", "Ama"])
|
||||
{
|
||||
writeln(name);
|
||||
}
|
||||
|
||||
foreach(number; [1, 2, 3, 4, 4, 6]) {...}
|
||||
|
||||
foreach(number; 0..10) {...} // 0..10 is the syntax for number range
|
||||
|
||||
class Student {...}
|
||||
Student[] students = [new Student(), new Student()];
|
||||
foreach(student; students) {...}
|
||||
```
|
||||
|
||||
Built-in [unit test][25] support in D not only alleviates the need for an external tool, but also makes it convenient for programmers to implement tests in their code. All test cases go inside the customizable **unittest {}** block:
|
||||
|
||||
```
|
||||
int[] evenNumbers(int[] numbers)
|
||||
{
|
||||
import std.algorithm: filter;
|
||||
import std.array: array;
|
||||
return numbers.filter!(n => n%2 == 0).array;
|
||||
}
|
||||
|
||||
unittest
|
||||
{
|
||||
assert( evenNumbers([1, 2, 3, 4]) == [2, 4] );
|
||||
}
|
||||
```
|
||||
|
||||
Using DMD, D's reference compiler, you can compile all tests into the resulting executable by adding the **-unittest** compiler flag.
|
||||
|
||||
[Dub][26], a built-in package manager and build tool for D, makes it easy to use the increasing number of third-party packages (libraries) from the [Dub package registry][27]. Dub takes care of downloading, compiling, and linking those packages during compilation, as well as upgrading to future versions.
|
||||
|
||||
### Choice
|
||||
|
||||
In addition to providing several programming paradigms and features, D offers other choices. It currently has three compilers, all open source. The reference compiler, DMD, comes with its own backend, while the other two, GDC and LDC, use GCC and LLVM backends, respectively. DMD is noted for its fast compilation speeds, while LDC and GDC are noted for generating fast machine code at the cost of a little compilation time. You are free to choose whichever fits your use case.
|
||||
|
||||
Certain parts of D, when used, are [garbage-collected][28] by default. You can also choose manual memory management or even reference counting if you wish. The choice is all yours.
|
||||
|
||||
### And much more
|
||||
|
||||
There are several sugars in D that I haven't covered in this brief discussion. I highly recommend you check out [D's feature overview][29], the hidden treasures in the [standard library][30], and [areas of D usage][31] to see more of what people are doing with it. Many organizations are already [using D in production][32]. Finally, if you are ready to start learning D, check out the book _[Programming in D][6]_ .
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/d-open-source-software-development
|
||||
|
||||
作者:[Lawrence Aberba][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/aberba
|
||||
[1]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:http://ddili.org/ders/d.en/templates.html
|
||||
[6]:http://ddili.org/ders/d.en/index.html
|
||||
[7]:https://opensource.com/article/17/5/d-open-source-software-development?rate=2NrC12X6cAUXB18h8bLBYUkDmF2GR1nuiAdeMCFCvh8
|
||||
[8]:https://dlang.org/
|
||||
[9]:https://en.wikipedia.org/wiki/Programming_paradigm
|
||||
[10]:http://ddili.org/ders/d.en/templates.html
|
||||
[11]:https://dlang.org/spec/version.html
|
||||
[12]:http://ddili.org/ders/d.en/ranges.html
|
||||
[13]:https://dlang.org/spec/arrays.html#slicing
|
||||
[14]:https://dlang.org/spec/interfaceToC.html
|
||||
[15]:https://dlang.org/spec/cpp_interface.html
|
||||
[16]:https://code.dlang.org/packages/pyd
|
||||
[17]:https://dlang.org/spec/objc_interface.html
|
||||
[18]:http://beza1e1.tuxen.de/into_luad.html
|
||||
[19]:http://www.active-analytics.com/blog/interface-d-with-c-fortran/
|
||||
[20]:https://en.wikipedia.org/wiki/Syntactic_sugar
|
||||
[21]:https://dlang.org/spec/ddoc.html
|
||||
[22]:http://ddili.org/ders/d.en/contracts.html
|
||||
[23]:http://ddili.org/ders/d.en/ufcs.html
|
||||
[24]:http://ddili.org/ders/d.en/foreach.html
|
||||
[25]:https://dlang.org/spec/unittest.html
|
||||
[26]:http://code.dlang.org/getting_started
|
||||
[27]:https://code.dlang.org/
|
||||
[28]:https://dlang.org/spec/garbage.html
|
||||
[29]:https://dlang.org/comparison.html
|
||||
[30]:https://dlang.org/phobos/index.html
|
||||
[31]:https://dlang.org/areas-of-d-usage.html
|
||||
[32]:https://dlang.org/orgs-using-d.html
|
||||
[33]:https://opensource.com/user/129491/feed
|
||||
[34]:https://opensource.com/users/aberba
|
356
sources/tech/20170519 Accelerating your C++ on GPU with SYCL.md
Normal file
@ -0,0 +1,356 @@
|
||||
Accelerating your C++ on GPU with SYCL
|
||||
============================================================
|
||||
|
||||
|
||||
### WARNING: This is an incomplete draft. There are likely many mistakes and unfinished sections.
|
||||
|
||||
* * *
|
||||
|
||||
Leveraging the power of graphics cards for compute applications is all the rage right now in fields such as machine learning, computer vision and high-performance computing. Technologies like OpenCL expose this power through a hardware-independent programming model, allowing you to write code which abstracts over different architecture capabilities. The dream of this is “write once, run anywhere”, be it an Intel CPU, AMD discrete GPU, DSP, etc. Unfortunately, for everyday programmers, OpenCL has something of a steep learning curve; a simple Hello World program can be a hundred or so lines of pretty ugly-looking code. However, to ease this pain, the Khronos group have developed a new standard called [SYCL][4], which is a C++ abstraction layer on top of OpenCL. Using SYCL, you can develop these general-purpose GPU (GPGPU) applications in clean, modern C++ without most of the faff associated with OpenCL. Here’s a simple vector multiplication example written in SYCL using the parallel STL implementation:
|
||||
|
||||
```
|
||||
#include <vector>
|
||||
#include <iostream>
|
||||
|
||||
#include <sycl/execution_policy>
|
||||
#include <experimental/algorithm>
|
||||
#include <sycl/helpers/sycl_buffers.hpp>
|
||||
|
||||
using namespace std::experimental::parallel;
|
||||
using namespace sycl::helpers;
|
||||
|
||||
int main() {
|
||||
constexpr size_t array_size = 1024*512;
|
||||
std::array<cl::sycl::cl_int, array_size> a;
|
||||
std::iota(begin(a),end(a),0);
|
||||
|
||||
{
|
||||
cl::sycl::buffer<int> b(a.data(), cl::sycl::range<1>(a.size()));
|
||||
cl::sycl::queue q;
|
||||
sycl::sycl_execution_policy<class Mul> sycl_policy(q);
|
||||
transform(sycl_policy, begin(b), end(b), begin(b),
|
||||
[](int x) { return x*2; });
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
For comparison, here’s a mostly equivalent version written in OpenCL using the C++ API (don’t spend much time reading this, just note that it looks ugly and is really long):
|
||||
|
||||
```
|
||||
#include <iostream>
|
||||
#include <array>
|
||||
#include <numeric>
|
||||
#include <CL/cl.hpp>
|
||||
|
||||
int main(){
|
||||
std::vector<cl::Platform> all_platforms;
|
||||
cl::Platform::get(&all_platforms);
|
||||
if(all_platforms.size()==0){
|
||||
std::cout<<" No platforms found. Check OpenCL installation!\n";
|
||||
exit(1);
|
||||
}
|
||||
cl::Platform default_platform=all_platforms[0];
|
||||
|
||||
std::vector<cl::Device> all_devices;
|
||||
default_platform.getDevices(CL_DEVICE_TYPE_ALL, &all_devices);
|
||||
if(all_devices.size()==0){
|
||||
std::cout<<" No devices found. Check OpenCL installation!\n";
|
||||
exit(1);
|
||||
}
|
||||
|
||||
cl::Device default_device=all_devices[0];
|
||||
cl::Context context({default_device});
|
||||
|
||||
cl::Program::Sources sources;
|
||||
std::string kernel_code=
|
||||
" void kernel mul2(global int* A){"
|
||||
" A[get_global_id(0)]=A[get_global_id(0)]*2;"
|
||||
" }";
|
||||
sources.push_back({kernel_code.c_str(),kernel_code.length()});
|
||||
|
||||
cl::Program program(context,sources);
|
||||
if(program.build({default_device})!=CL_SUCCESS){
|
||||
std::cout<<" Error building: "<<program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(default_device)<<"\n";
|
||||
exit(1);
|
||||
}
|
||||
|
||||
constexpr size_t array_size = 1024*512;
|
||||
std::array<cl_int, array_size> a;
|
||||
std::iota(begin(a),end(a),0);
|
||||
|
||||
cl::Buffer buffer_A(context,CL_MEM_READ_WRITE,sizeof(int)*a.size());
|
||||
cl::CommandQueue queue(context,default_device);
|
||||
|
||||
if (queue.enqueueWriteBuffer(buffer_A,CL_TRUE,0,sizeof(int)*a.size(),a.data()) != CL_SUCCESS) {
|
||||
std::cout << "Failed to write memory;n";
|
||||
exit(1);
|
||||
}
|
||||
|
||||
cl::Kernel kernel_add = cl::Kernel(program,"mul2");
|
||||
kernel_add.setArg(0,buffer_A);
|
||||
|
||||
if (queue.enqueueNDRangeKernel(kernel_add,cl::NullRange,cl::NDRange(a.size()),cl::NullRange) != CL_SUCCESS) {
|
||||
std::cout << "Failed to enqueue kernel\n";
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (queue.finish() != CL_SUCCESS) {
|
||||
std::cout << "Failed to finish kernel\n";
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (queue.enqueueReadBuffer(buffer_A,CL_TRUE,0,sizeof(int)*a.size(),a.data()) != CL_SUCCESS) {
|
||||
std::cout << "Failed to read result\n";
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In this post I’ll give an introduction on using SYCL to accelerate your C++ code on the GPU.
|
||||
|
||||
* * *
|
||||
|
||||
### Lightning intro to GPGPU
|
||||
|
||||
Before I get started on how to use SYCL, I'll give a brief outline of why you might want to run compute jobs on the GPU for those who are unfamiliar. If you've already used OpenCL, CUDA, or similar, feel free to skip ahead.
|
||||
|
||||
The key difference between a GPU and a CPU is that, rather than having a small number of complex, powerful cores (1-8 for common consumer desktop hardware), a GPU has a huge number of small, simple processing elements.
|
||||
|
||||

|
||||
|
||||
Above is a comically simplified diagram of a CPU with four cores. Each core has a set of registers and is attached to various levels of cache (some might be shared, some not), and then main memory.
|
||||
|
||||

|
||||
|
||||
In the GPU, tiny processing elements are grouped into execution units. Each processing element has a bit of memory attached to it, and each execution unit has some memory shared between its processing elements. After that, there’s some GPU-wide memory, then the same main memory which the CPU uses. The elements within an execution unit execute in _lockstep_ , where each element executes the same instruction on a different piece of data.
|
||||
|
||||
There are many aspects of GPGPU programming which make it an entirely different beast to everyday CPU programming. For example, transferring data from main memory to the GPU is _slow_ . _Really_ slow. Like, kill all your performance and get you fired slow. Therefore, the tradeoff with GPU programming is to use as much of the ridiculously high throughput of your accelerator as possible to hide the latency of shipping the data to and from it.
|
||||
|
||||
There are other issues which might not be immediately apparent, like the cost of branching. Since the processing elements in an execution unit work in lockstep, nested branches which cause them to take different paths (divergent control flow) are a real problem. This is often solved by executing all branches for all elements and masking out the unneeded results. That's an exponential explosion in complexity based on the level of nesting, which is A Bad Thing ™. Of course, there are optimizations which can aid this, but the idea stands: simple assumptions and knowledge you bring from the CPU world might cause you big problems in the GPU world.
|
||||
|
||||
Before we get back to SYCL, some short pieces of terminology. The _host_ is the main CPU in your machine, which executes your regular code and sets up the computation, and the _device_ is what will be running your OpenCL code. A device could be the same as the host, or it could be some accelerator sitting in your machine, a simulator, whatever. A _kernel_ is a special function which is the entry point to the code which will run on your device. It will often be supplied with buffers for input and output data which have been set up by the host.
|
||||
|
||||
* * *
|
||||
|
||||
### Back to SYCL
|
||||
|
||||
There are currently two implementations of SYCL available; “triSYCL”, an experimental open source version by Xilinx (mostly used as a testbed for the standard), and “ComputeCpp”, an industry-strength implementation by Codeplay[1][2] (currently in open beta). Only ComputeCpp supports execution of kernels on the GPU, so we’ll be using that in this post.
|
||||
|
||||
Step 1 is to get ComputeCpp up and running on your machine. The main components are a runtime library which implements the SYCL API, and a Clang-based compiler which compiles both your host code and your device code. At the time of writing, Intel CPUs and some AMD GPUs are officially supported on Ubuntu and CentOS. It should be pretty easy to get it working on other Linux distributions (I got it running on my Arch system, for instance). Support for more hardware and operating systems is being worked on, so check the [supported platforms document][5] for an up-to-date list. The dependencies and components are listed [here][6]. You might also want to download the [SDK][7], which contains samples, documentation, build system integration files, and more. I’ll be using the [SYCL Parallel STL][8] in this post, so get that if you want to play along at home.
|
||||
|
||||
Once you’re all set up, we can get GPGPUing! As noted in the introduction, my first sample used the SYCL parallel STL implementation. We’ll now take a look at how to write that code with bare SYCL.
|
||||
|
||||
```
|
||||
#include <CL/sycl.hpp>
|
||||
|
||||
#include <array>
|
||||
#include <numeric>
|
||||
#include <iostream>
|
||||
|
||||
int main() {
|
||||
const size_t array_size = 1024*512;
|
||||
std::array<cl::sycl::cl_int, array_size> in,out;
|
||||
std::iota(begin(in),end(in),0);
|
||||
|
||||
cl::sycl::queue device_queue;
|
||||
cl::sycl::range<1> n_items{array_size};
|
||||
cl::sycl::buffer<cl::sycl::cl_int, 1> in_buffer(in.data(), n_items);
|
||||
cl::sycl::buffer<cl::sycl::cl_int, 1> out_buffer(out.data(), n_items);
|
||||
|
||||
device_queue.submit([&](cl::sycl::handler &cgh) {
|
||||
constexpr auto sycl_read = cl::sycl::access::mode::read_write;
|
||||
constexpr auto sycl_write = cl::sycl::access::mode::write;
|
||||
|
||||
auto in_accessor = in_buffer.get_access<sycl_read>(cgh);
|
||||
auto out_accessor = out_buffer.get_access<sycl_write>(cgh);
|
||||
|
||||
cgh.parallel_for<class VecScalMul>(n_items,
|
||||
[=](cl::sycl::id<1> wiID) {
|
||||
out_accessor[wiID] = in_accessor[wiID]*2;
|
||||
});
|
||||
});
|
||||
|
||||
device_queue.wait();
|
||||
}
|
||||
```
|
||||
|
||||
I’ll break this down piece-by-piece.
|
||||
|
||||
```
|
||||
#include <CL/sycl.hpp>
|
||||
```
|
||||
|
||||
The first thing we do is include the SYCL header file, which will put the SYCL runtime library at our command.
|
||||
|
||||
```
|
||||
const size_t array_size = 1024*512;
|
||||
std::array<cl::sycl::cl_int, array_size> in,out;
|
||||
std::iota(begin(in),end(in),0);
|
||||
```
|
||||
|
||||
Here we construct a large array of integers and initialize it with the numbers from `0` to `array_size-1` (this is what `std::iota` does). Note that we use `cl::sycl::cl_int` to ensure compatibility.
|
||||
|
||||
```
|
||||
cl::sycl::queue device_queue;
|
||||
```
|
||||
|
||||
Now we create our command queue. The command queue is where all work (kernels) will be enqueued before being dispatched to the device. There are many ways to customise the queue, such as providing a device to enqueue on or setting up asynchronous error handlers, but the default constructor will do for this example; it looks for a compatible GPU and falls back on the host CPU if it fails.
|
||||
|
||||
```
|
||||
cl::sycl::range<1> n_items{array_size};
|
||||
```
|
||||
|
||||
Next we create a range, which describes the shape of the data which the kernel will be executing on. In our simple example, it's a one-dimensional array, so we use `cl::sycl::range<1>`. If the data was two-dimensional we would use `cl::sycl::range<2>` and so on. Alongside `cl::sycl::range`, there is `cl::sycl::nd_range`, which allows you to specify work group sizes as well as an overall range, but we don't need that for our example.
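For instance, a kernel over a (purely illustrative) 1024x512 grid of work-items would be described as:

```
cl::sycl::range<2> grid_items{1024, 512};
```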
|
||||
|
||||
```
|
||||
cl::sycl::buffer<cl::sycl::cl_int, 1> in_buffer(in.data(), n_items);
|
||||
cl::sycl::buffer<cl::sycl::cl_int, 1> out_buffer(out.data(), n_items);
|
||||
```
|
||||
|
||||
In order to control data sharing and transfer between the host and devices, SYCL provides a `buffer` class. We create two SYCL buffers to manage our input and output arrays.
|
||||
|
||||
```
|
||||
device_queue.submit([&](cl::sycl::handler &cgh) {/*...*/});
|
||||
```
|
||||
|
||||
After setting up all of our data, we can enqueue our actual work. There are a few ways to do this, but a simple method for setting up a parallel execution is to call the `.submit` function on our queue. To this function we pass a _command group functor_ [2][3] which will be executed when the runtime schedules that task. A command group handler sets up any last resources needed by the kernel and dispatches it.
|
||||
|
||||
```
|
||||
constexpr auto sycl_read = cl::sycl::access::mode::read_write;
|
||||
constexpr auto sycl_write = cl::sycl::access::mode::write;
|
||||
|
||||
auto in_accessor = in_buffer.get_access<sycl_read>(cgh);
|
||||
auto out_accessor = out_buffer.get_access<sycl_write>(cgh);
|
||||
```
|
||||
|
||||
In order to control access to our buffers and to tell the runtime how we will be using the data, we need to create _accessors_ . It should be clear that we are creating one accessor for reading from `in_buffer`, and one accessor for writing to `out_buffer`.
|
||||
|
||||
```
|
||||
cgh.parallel_for<class VecScalMul>(n_items,
|
||||
[=](cl::sycl::id<1> wiID) {
|
||||
out_accessor[wiID] = in_accessor[wiID]*2;
|
||||
});
|
||||
```
|
||||
|
||||
Now that we’ve done all the setup, we can actually do some computation on our device. Here we dispatch a kernel on the command group handler `cgh` over our range `n_items`. The actual kernel itself is a lambda which takes a work-item identifier and carries out our computation. In this case, we are reading from `in_accessor` at the index of our work-item identifier, multiplying it by `2`, then storing the result in the relevant place in `out_accessor`. That `<class VecScalMul>` is an unfortunate byproduct of how SYCL needs to work within the confines of standard C++, so we need to give a unique class name to the kernel for the compiler to be able to do its job.
|
||||
|
||||
```
|
||||
device_queue.wait();
|
||||
```
|
||||
|
||||
Our last line is kind of like calling `.join()` on a `std::thread`; it waits until the queue has executed all work which has been submitted. After this point, we could now access `out` and expect to see the correct results. Queues will also wait implicitly on destruction, so you could alternatively place it in some inner scope and let the synchronisation happen when the scope ends.
|
||||
|
||||
There are quite a few new concepts at play here, but hopefully you can see the power and expressibility we get using these techniques. However, if you just want to toss some code at your GPU and not worry about the customisation, then you can use the SYCL Parallel STL implementation.
|
||||
|
||||
* * *
|
||||
|
||||
### SYCL Parallel STL
|
||||
|
||||
The SYCL Parallel STL is an implementation of the Parallelism TS which dispatches your algorithm function objects as SYCL kernels. We already saw an example of this at the top of the page, so let's run through it quickly.
|
||||
|
||||
```
|
||||
#include <vector>
|
||||
#include <iostream>
|
||||
|
||||
#include <sycl/execution_policy>
|
||||
#include <experimental/algorithm>
|
||||
#include <sycl/helpers/sycl_buffers.hpp>
|
||||
|
||||
using namespace std::experimental::parallel;
|
||||
using namespace sycl::helpers;
|
||||
|
||||
int main() {
|
||||
constexpr size_t array_size = 1024*512;
|
||||
std::array<cl::sycl::cl_int, array_size> in,out;
|
||||
std::iota(begin(in),end(in),0);
|
||||
|
||||
{
|
||||
cl::sycl::buffer<int> in_buffer(in.data(), cl::sycl::range<1>(in.size()));
|
||||
cl::sycl::buffer<int> out_buffer(out.data(), cl::sycl::range<1>(out.size()));
|
||||
cl::sycl::queue q;
|
||||
sycl::sycl_execution_policy<class Mul> sycl_policy(q);
|
||||
transform(sycl_policy, begin(in_buffer), end(in_buffer), begin(out_buffer),
|
||||
[](int x) { return x*2; });
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```
|
||||
constexpr size_t array_size = 1024*512;
|
||||
std::array<cl::sycl::cl_int, array_size> in, out;
|
||||
std::iota(begin(in),end(in),0);
|
||||
```
|
||||
|
||||
So far, so similar. Again we’re creating a couple of arrays to hold our input and output data.
|
||||
|
||||
```
|
||||
cl::sycl::buffer<int> in_buffer(in.data(), cl::sycl::range<1>(in.size()));
|
||||
cl::sycl::buffer<int> out_buffer(out.data(), cl::sycl::range<1>(out.size()));
|
||||
cl::sycl::queue q;
|
||||
```
|
||||
|
||||
Here we are creating our buffers and our queue like in the last example.
|
||||
|
||||
```
|
||||
sycl::sycl_execution_policy<class Mul> sycl_policy(q);
|
||||
```
|
||||
|
||||
Here’s where things get interesting. We create a `sycl_execution_policy` from our queue and give it a name to use for the kernel. This execution policy can then be used like `std::execution::par` or `std::execution::seq`.
|
||||
|
||||
```
|
||||
transform(sycl_policy, begin(in_buffer), end(in_buffer), begin(out_buffer),
|
||||
[](int x) { return x*2; });
|
||||
```
|
||||
|
||||
Now our kernel dispatch looks like a call to `std::transform` with an execution policy provided. That closure we pass in will be compiled for and executed on the device without us having to do any more complex setup.
|
||||
|
||||
Of course, you can do more than just `transform`. At the time of writing, the SYCL Parallel STL supports these algorithms:
|
||||
|
||||
* `sort`
|
||||
|
||||
* `transform`
|
||||
|
||||
* `for_each`
|
||||
|
||||
* `for_each_n`
|
||||
|
||||
* `count_if`
|
||||
|
||||
* `reduce`
|
||||
|
||||
* `inner_product`
|
||||
|
||||
* `transform_reduce`
|
||||
|
||||
* * *
|
||||
|
||||
That covers things for this short introduction. If you want to keep up to date with developments in SYCL, be sure to check out [sycl.tech][9]. Notable recent developments have been porting [Eigen][10] and [Tensorflow][11] to SYCL to bring expressive artificial intelligence programming to OpenCL devices. Personally, I’m excited to see how the high-level programming models can be exploited for automatic optimization of heterogeneous programs, and how they can support even higher-level technologies like [HPX][12] or [SkelCL][13].
|
||||
|
||||
1. I work for Codeplay, but this post was written in my own time with no suggestion from my employer. [↩][1]
|
||||
|
||||
2. Hey, “functor” is in the spec, don’t @ me.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.tartanllama.xyz/c++/2017/05/19/sycl/
|
||||
|
||||
作者:[TartanLlama][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.twitter.com/TartanLlama
|
||||
[1]:https://blog.tartanllama.xyz/c++/2017/05/19/sycl/#fnref:1
|
||||
[2]:https://blog.tartanllama.xyz/c++/2017/05/19/sycl/#fn:1
|
||||
[3]:https://blog.tartanllama.xyz/c++/2017/05/19/sycl/#fn:2
|
||||
[4]:https://www.khronos.org/sycl
|
||||
[5]:https://www.codeplay.com/products/computesuite/computecpp/reference/platform-support-notes
|
||||
[6]:https://www.codeplay.com/products/computesuite/computecpp/reference/release-notes/
|
||||
[7]:https://github.com/codeplaysoftware/computecpp-sdk
|
||||
[8]:https://github.com/KhronosGroup/SyclParallelSTL
|
||||
[9]:http://sycl.tech/
|
||||
[10]:https://github.com/ville-k/sycl_starter
|
||||
[11]:http://deep-beta.co.uk/setting-up-tensorflow-with-opencl-using-sycl/
|
||||
[12]:https://github.com/STEllAR-GROUP/hpx
|
||||
[13]:https://github.com/skelcl/skelcl
|
@ -0,0 +1,125 @@
|
||||
### New SMB Worm Uses Seven NSA Hacking Tools. WannaCry Used Just Two
|
||||
|
||||
|
||||

|
||||
|
||||
Researchers have detected a new worm that is spreading via SMB, but unlike the worm component of the WannaCry ransomware, this one is using seven NSA tools instead of two.
|
||||
|
||||
The worm's existence first came to light on Wednesday, after it infected the SMB honeypot of [Miroslav Stampar][15], member of the Croatian Government CERT, and creator of the sqlmap tool used for detecting and exploiting SQL injection flaws.
|
||||
|
||||
### EternalRocks uses seven NSA tools
|
||||
|
||||
The worm, which Stampar named EternalRocks based on worm executable properties found in one sample, works by using six SMB-centric NSA tools to infect a computer with SMB ports exposed online. These are **ETERNALBLUE**, **ETERNALCHAMPION**, **ETERNALROMANCE**, and **ETERNALSYNERGY**, which are SMB exploits used to compromise vulnerable computers, while **SMBTOUCH** and **ARCHITOUCH** are two NSA tools used for SMB reconnaissance operations.
|
||||
|
||||
Once the worm has obtained this initial foothold, it then uses another NSA tool, **DOUBLEPULSAR**, to propagate to new vulnerable machines.
|
||||
|
||||

|
||||
**Origin of the EternalRocks name**
|
||||
|
||||
[The WannaCry ransomware outbreak][16], which affected over 240,000 victims, also used an SMB worm to infect computers and spread to new victims.
|
||||
|
||||
Unlike EternalRocks, WannaCry's SMB worm used only ETERNALBLUE for the initial compromise, and DOUBLEPULSAR to propagate to new machines.
|
||||
|
||||
### EternalRocks is more complex but less dangerous
|
||||
|
||||
As a worm, EternalRocks is far less dangerous than WannaCry's worm component, as it currently does not deliver any malicious content. This, however, does not mean that EternalRocks is less complex. According to Stampar, it's actually the opposite.
|
||||
|
||||
For starters, EternalRocks is far more sneaky than WannaCry's SMB worm component. Once it infects a victim, the worm uses a two-stage installation process, with a delayed second stage.
|
||||
|
||||
During the first stage, EternalRocks gains a foothold on an infected host, downloads the Tor client, and beacons its C&C server, located on a .onion domain on the Dark Web.
|
||||
|
||||
Only after a predefined period of time — currently 24 hours — does the C&C server respond. The role of this long delay is most probably to bypass sandbox security testing environments and security researchers analyzing the worm, as very few will wait a full day for a response from the C&C server.
|
||||
|
||||
> **Miroslav Stampar (@stamparm)**, 5:10 PM, 19 May 2017:
> Update on [#EternalRocks][7]. Original name is actually "MicroBotMassiveNet" while author's nick is "tmc": [https://github.com/stamparm/EternalRocks/#debug-strings][8]
|
||||
|
||||
### No kill switch domain
|
||||
|
||||
Additionally, EternalRocks also uses files with identical names to the ones used by WannaCry's SMB worm, in another attempt to fool security researchers into misclassifying it.
|
||||
|
||||
But unlike WannaCry, EternalRocks does not include a kill switch domain, the Achilles' heel that security researchers used to stop the WannaCry outbreak.
|
||||
|
||||
After the initial dormancy period expires and the C&C server responds, EternalRocks goes into the second stage of its installation process and downloads a second stage malware component in the form of an archive named shadowbrokers.zip.
|
||||
|
||||
The name of this file is pretty self-explanatory, as it contains NSA SMB-centric exploits [leaked by the Shadow Brokers group][17] in April 2017.
|
||||
|
||||
The worm then starts a rapid IP scanning process and attempts to connect to random IP addresses.
|
||||
|
||||

|
||||
**The configuration files for NSA tools found in the shadowbrokers.zip archive**
|
||||
|
||||
### EternalRocks could be weaponized in an instant
|
||||
|
||||
Because of its broader exploit arsenal, the lack of a kill switch domain, and because of its initial dormancy, EternalRocks could pose a serious threat to computers with vulnerable SMB ports exposed to the Internet, if its author ever decides to weaponize the worm with ransomware, a banking trojan, RATs, or anything else.
|
||||
|
||||
At first glance, the worm seems to be an experiment, or a malware author performing tests and fine-tuning a future threat.
|
||||
|
||||
This, however, does not mean EternalRocks is harmless. Computers infected with this worm are controllable via C&C server commands and the worm's owner could leverage this hidden communications channel to send new malware to the computers previously infected by EternalRocks.
|
||||
|
||||
Furthermore, DOUBLEPULSAR, [an NSA implant with backdoor features][18], remains running on PCs infected with EternalRocks. Unfortunately, the worm's author has not taken any measures to protect the DOUBLEPULSAR implant, which runs in a default unprotected state, meaning other threat actors could use it as a backdoor to machines infected by EternalRocks, by sending their own malware to those PCs.
|
||||
|
||||
IOCs and more info on the worm's infection process are available in a [GitHub repo][19] Stampar set up a few days ago.
|
||||
|
||||
### An SMB free-for-all
|
||||
|
||||
Currently, there are multiple actors scanning for computers running older and unpatched versions of the SMB services. System administrators have already taken notice and started patching vulnerable PCs or disabling the old SMBv1 protocol, slowly reducing the number of vulnerable machines that EternalRocks can infect.
|
||||
|
||||
Furthermore, malware such as [Adylkuzz][20] also shuts down SMB ports, preventing further exploitation from other threats, also contributing to reducing the number of potential targets for EternalRocks and other SMB-hunting malware. Reports from [Forcepoint][21], [Cyphort][22], and [Secdo][23] detail other threats currently targeting computers with SMB ports.
|
||||
|
||||
Nonetheless, the faster system administrators patch their systems the better. "The worm is racing with administrators to infect machines before they patch," Stampar told Bleeping Computer in a private conversation. "Once infected, he can weaponize any time he wants, no matter the late patch."
|
||||
|
||||
_Image credits: Miroslav Stampar, BleepingComputer & [Ana María Lora Macias][13]_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Catalin covers various topics such as data breaches, software vulnerabilities, exploits, hacking news, the Dark Web, programming topics, social media, web technology, product launches, and a few more.
|
||||
|
||||
---------------
|
||||
|
||||
via: https://www.bleepingcomputer.com/news/security/new-smb-worm-uses-seven-nsa-hacking-tools-wannacry-used-just-two/
|
||||
|
||||
作者:[CATALIN CIMPANU ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.bleepingcomputer.com/author/catalin-cimpanu/
|
||||
[1]:https://twitter.com/stamparm
|
||||
[2]:https://twitter.com/intent/tweet?in_reply_to=865494946974900224
|
||||
[3]:https://twitter.com/intent/retweet?tweet_id=865494946974900224
|
||||
[4]:https://twitter.com/intent/like?tweet_id=865494946974900224
|
||||
[5]:https://twitter.com/stamparm/status/865494946974900224/photo/1
|
||||
[6]:https://twitter.com/stamparm
|
||||
[7]:https://twitter.com/hashtag/EternalRocks?src=hash
|
||||
[8]:https://t.co/xqoxkNYfM7
|
||||
[9]:https://twitter.com/stamparm/status/865494946974900224
|
||||
[10]:https://twitter.com/stamparm/status/865494946974900224/photo/1
|
||||
[11]:https://support.twitter.com/articles/20175256
|
||||
[12]:https://www.bleepingcomputer.com/news/security/new-smb-worm-uses-seven-nsa-hacking-tools-wannacry-used-just-two/#comment_form
|
||||
[13]:https://thenounproject.com/search/?q=worm&i=24323
|
||||
[14]:https://www.bleepingcomputer.com/author/catalin-cimpanu/
|
||||
[15]:https://about.me/stamparm
|
||||
[16]:https://www.bleepingcomputer.com/news/security/wana-decrypt0r-ransomware-using-nsa-exploit-leaked-by-shadow-brokers-is-on-a-rampage/
|
||||
[17]:https://www.bleepingcomputer.com/news/security/shadow-brokers-release-new-files-revealing-windows-exploits-swift-attacks/
|
||||
[18]:https://www.bleepingcomputer.com/news/security/over-36-000-computers-infected-with-nsas-doublepulsar-malware/
|
||||
[19]:https://github.com/stamparm/EternalRocks/
|
||||
[20]:https://www.bleepingcomputer.com/news/security/adylkuzz-cryptocurrency-miner-may-have-saved-you-from-the-wannacry-ransomware/
|
||||
[21]:https://blogs.forcepoint.com/security-labs/wannacry-multiple-malware-families-using-eternalblue-exploit
|
||||
[22]:https://www.cyphort.com/eternalblue-exploit-actively-used-deliver-remote-access-trojans/
|
||||
[23]:http://blog.secdo.com/multiple-groups-exploiting-eternalblue-weeks-before-wannacry
|
@ -1,75 +0,0 @@
|
||||
5 个提升你项目贡献者基数的方法
|
||||
============================================================
|
||||

|
||||
|
||||
图片提供
|
||||
|
||||
opensource.com
|
||||
|
||||
许多自由和开源软件项目开始解决一个问题时,就有人们开始为它们贡献,因为他们也想修复他们遇到的问题。当项目的最终用户发现它对他们的需求有用,项目就开始增长。这种共同的目的和焦点吸引人们到同一个项目社区。
|
||||
|
||||
像任何事物都是有寿命的,增长即是标志也是项目成功的来源。那么项目领导者和维护者如何鼓励贡献者基数的增长?这里有五种方法。
|
||||
|
||||
### 1. 提供好的文档
|
||||
|
||||
人们经常低估项目[文档][2]的重要性。它是项目贡献者的主要信息来源,它会激励他们努力。信息必须是正确和最新的。它应该包括如何构建软件、如何提交补丁、编码风格指南等步骤。
|
||||
|
||||
查看经验丰富的科技作家、编辑 Bob Reselman 的[ 7 个创建世界级文档的规则][3]。
|
||||
|
||||
开发人员文档的一个很好的例子是[ Python 开发人员指南][4]。它包括清晰简洁的步骤,涵盖 Python 开发的各个方面。
|
||||
|
||||
### 2. 降低进入门槛
|
||||
|
||||
如果你的项目有[问题或 bug 追踪][5]工具,请确保将初级任务标记为一个“容易 bug ”或“起点”。新的贡献者可以很容易地通过解决这些问题进入项目。追踪器也是标记非编程任务(如平面设计、图稿和文档改进)的地方。有许多项目成员不是每天都编码,但是却通过这种方式成为推动力。
|
||||
|
||||
Fedora 项目维护一个这样的[易修复和入门级问题的追踪][6]工具。
|
||||
|
||||
### 3. 为补丁提供常规反馈
|
||||
|
||||
即使它是一行,也要确认每个补丁,并给作者反馈。提供反馈有助于吸引潜在的候选人,并指导他们熟悉项目。所有项目都应有一个邮件列表和[聊天功能][7]进行通信。问答可在这些媒介中发生。大多数项目不会在一夜之间成功,但那些繁荣的列表和沟通渠道为增长创造了环境。
|
||||
|
||||
### 4. 推广你的项目
|
||||
|
||||
开始解决问题的项目实际上可能对其他开发人员也有用。作为项目的主要贡献者,你的责任是写下你的的项目并推广它。写博客文章,并在社交媒体上分享项目的进展。你可以简要描述如何以项目的贡献者来开始,并在该描述中提供主要开发者文档的参考连接。此外,请务必提供有关路线图和未来版本的信息。
|
||||
|
||||
为了你的听众,获取由 Opensource.com 的社区经理 Rikki Endsley 写的[写作提示][8]。
|
||||
|
||||
### 5. 保持友好
|
||||
|
||||
友好的对话语调和迅速的回复将加强人们对你的项目的兴趣。最初,问题只是为了寻求帮助,但在未来,新的贡献者也可能会提出想法或建议。让他们有信心他们可以成为项目的贡献者。
|
||||
|
||||
记住你一直在被评估!人们会观察任何项目开发者如何在邮件列表或聊天上的交谈。这些意味着对新贡献者的欢迎和开放。当使用技术时,我们有时会忘记人们,但这对于任何项目的生态系统都很重要。考虑一个情况,项目是很好的,但项目维护者不是很受欢迎。该管理员可能会驱使用户远离项目。对于有大量用户基数的项目而言,不被支持的环境可能导致分裂,一部分用户可能决定复刻项目并启动新项目。在开源世界中有这样的成功例子。
|
||||
|
||||
另外,拥有背景不同的人对于开源项目的持续增长和源源不断的电子是很重要的。
|
||||
|
||||
最后,项目主人有责任维持和帮助项目成长。指导新的贡献者是项目的关键,他们将成为项目和社区未来的领导者。
|
||||
|
||||
阅读:由红帽的内容战略家 Nicole Engard 写的_[ 7 种方式让新的贡献者感到受欢迎][1] _。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||

|
||||
|
||||
Kushal Das - Kushal Das 是 Python 软件基金会的一名 CPython 核心开发人员和主管。他是一名长期的 FOSS 贡献者和导师,他帮助新人进入贡献世界。他目前在 Red Hat 担任 Fedora 云工程师。他的博客在 https://kushaldas.in。你也可以在 Twitter @kushaldas 上找到他
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/expand-project-contributor-base
|
||||
|
||||
作者:[Kushal Das][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Bestony](https://github.com/bestony)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/kushaldas
|
||||
[1]:https://opensource.com/life/16/5/sumana-harihareswara-maria-naggaga-oscon
|
||||
[2]:https://opensource.com/tags/documentation
|
||||
[3]:https://opensource.com/business/16/1/scale-14x-interview-bob-reselman
|
||||
[4]:https://docs.python.org/devguide/
|
||||
[5]:https://opensource.com/tags/bugs-and-issues
|
||||
[6]:https://fedoraproject.org/easyfix/
|
||||
[7]:https://opensource.com/alternatives/slack
|
||||
[8]:https://opensource.com/business/15/10/what-stephen-king-can-teach-tech-writers
|
@ -1,17 +1,19 @@
|
||||
|
||||
安卓IoT能否像在移动终端一样成功?
|
||||
Android 在物联网方面能否像在移动终端一样成功?
|
||||
============================================================
|
||||
|
||||

|
||||
Android Things让IoT如虎添翼
|
||||

|
||||
|
||||
###我在Android Things上的最初24小时
|
||||
正当我在开发一个基于Android的运行在树莓派3的物联网商业项目时,一些令人惊喜的事情发生了。谷歌发布了[Android Things] [1]的第一个预览版本,他们的SDK专门针对(最初)3个SBC(单板计算机) - 树莓派 3,英特尔Edison和恩智浦Pico。说我一直在挣扎似乎有些轻描淡写 - 没有成功的移植树莓派安卓可以参照,我们在理想丰满,但是实践漏洞百出的内测版本上叫苦不迭。其中一个问题,同时也是不可原谅的问题是,它不支持触摸屏,甚至连[Element14][2]官方销售的也不支持。曾经我认为安卓已经支持树莓派,更早时候[commi tto AOSP project from Google][3]提到过Pi曾让所有人兴奋不已。所以当2016年12月12日谷歌发布"Android Things"和其SDK的时候,我马上闭门谢客,全身心地去研究了……
|
||||
*Android Things 让 IoT 如虎添翼*
|
||||
|
||||
### 我在 Android Things 上的最初 24 小时
|
||||
|
||||
正当我在开发一个基于 Android 的运行在树莓派 3 的物联网商业项目时,一些令人惊喜的事情发生了。谷歌发布了 [Android Things][1] 的第一个预览版本,他们的 SDK 专门(目前)针对 3 个 SBC(单板计算机):树莓派 3、英特尔 Edison 和恩智浦 Pico。说我一直在挣扎似乎有些轻描淡写:没有成功的树莓派 Android 移植版可以参照,我们在理想丰满、实践漏洞百出的内测版本上叫苦不迭。其中一个问题,同时也是不可原谅的问题是,它不支持触摸屏,甚至连 [Element14][2] 官方销售的也不支持。曾经我认为 Android 已经支持树莓派,更早时候谷歌向 AOSP 项目的一次[提交][3]中提到过树莓派,曾让所有人兴奋不已。所以当 2016 年 12 月 12 日谷歌发布 “Android Things” 和其 SDK 的时候,我马上闭门谢客,全身心地去研究了……
|
||||
|
||||
### 问题?
|
||||
安卓扩展的工作和Pi上做过的一些项目,包括之前提到的,当前正在开发中的Pi项目,使我对谷歌安卓产生了许多问题。未来我会尝试解决它们,但是最重要的问题可以马上解答 - 有完整的Android Studio支持,Pi成为列表上的另一个常规的ADB可寻址设备。好极了。Android Atudio强大的,便利的,纯粹的易用的功能包括布局预览,调试系统,源码检查器,自动化测试等可以真正的应用在IoT硬件上。这些好处怎么说都不过分。到目前为止,我在Pi上的大部分工作都是在python中使用SSH运行在Pi上的编辑器(MC,如果你真的想知道)。这是有效的,毫无疑问硬核Pi / Python头可以指出更好的工作方式,而不是当前这种像极了80年代码农的软件开发模式。我的项目涉及到在控制Pi的手机上编写Android软件,这有点像在伤口狂妄地撒盐 - 我使用Android Studio做“真正的”Android工作,用SSH做剩下的。但是有了"Android Things"之后,一切都结束了。
|
||||
|
||||
所有的示例代码都适用于3个SBC,Pi 只是其中之一。 Build.DEVICE常量在运行时确定,所以你会看到很多如下代码:
|
||||
关于树莓派上的谷歌 Android 我遇到很多问题,我以前用 Android 做过许多开发,也做过一些树莓派项目,包括之前提到过的一个真正参与的。未来我会尝试解决它们,但是首先最重要的问题得到了解决 - 有完整的 Android Studio 支持,树莓派成为你手里的另一个常规的 ADB 可寻址设备。好极了。Android Studio 强大而便利、十分易用的功能包括布局预览、调试系统、源码检查器、自动化测试等都可以真正的应用在 IoT 硬件上。这些好处怎么说都不过分。到目前为止,我在树莓派上的大部分工作都是通过 SSH 使用运行在树莓派上的编辑器(MC,如果你真的想知道)借助 Python 完成的。这是有效的,毫无疑问铁杆的 Pi/Python 粉丝或许会有更好的工作方式,而不是当前这种像极了 80 年代码农的软件开发模式。我的项目需要在控制树莓派的手机上编写 Android 软件,这真有点痛不欲生 - 我使用 Android Studio 做“真正的” Android 开发,借助 SSH 做剩下的。但是有了“Android Things”之后,一切都结束了。
|
||||
|
||||
所有的示例代码都适用于这三种 SBC,树莓派只是其中之一。 `Build.DEVICE` 常量可以在运行时确定是哪一个,所以你会看到很多如下代码:
|
||||
|
||||
```
|
||||
public static String getGPIOForButton() {
|
||||
@ -30,7 +32,7 @@ Android Things让IoT如虎添翼
|
||||
}
|
||||
```
|
||||
|
||||
我对GPIO处理有浓厚的兴趣。 由于我只熟悉Pi,我只能假定其他SBC工作方式相同,GPIO只是一组引脚,可以定义为输入/输出,是连接物理外部世界的主要接口。 基于Pi Linux的操作系统发行版通过Python中的读取和写入方法提供了完整和便捷的支持,但对于Android,您必须使用NDK编写C ++驱动程序,并通过JNI在Java中与这些驱动程序对接。 不是那么困难,但需要在你的构建链中维护额外的一些东西。 Pi还为I2C指定了2个引脚,时钟和数据,因此需要额外的工作来处理它们。 I2C是真正酷的总线寻址系统,它通过串行化将许多独立的数据引脚转换成一个。 所以这里的优势是 - Android Things已经帮你完成了所有这一切。 你只需要_read()_和_write()_to /from你需要的任何GPIO引脚,I2C同样容易:
|
||||
我对 GPIO 处理有浓厚的兴趣。由于我只熟悉树莓派,我只能假定其他 SBC 工作方式相同,GPIO 只是一组引脚,可以定义为输入/输出,是连接物理外部世界的主要接口。基于 Linux 的树莓派操作系统通过 Python 中的读取和写入方法提供了完整和便捷的支持,但对于 Android,你必须使用 NDK 编写 C++ 驱动程序,并通过 JNI 在 Java 中与这些驱动程序对接。不是那么困难,但需要在你的构建链中维护额外的一些东西。树莓派还为 I2C 指定了 2 个引脚:时钟和数据,因此需要额外的工作来处理它们。I2C 是真正酷的总线寻址系统,它通过串行化将许多独立的数据引脚转换成一个。所以这里的优势是:Android Things 已经帮你完成了所有这一切。你只需要对你需要的任何 GPIO 引脚进行 `read()` 和 `write()`,I2C 同样容易:
|
||||
|
||||
```
|
||||
public class HomeActivity extends Activity {
|
||||
@ -68,46 +70,52 @@ public class HomeActivity extends Activity {
|
||||
}
|
||||
}
|
||||
```
|
||||
### Android Things基于Android的哪个版本?
|
||||
看起来是Android 7.0,这样很好,因为我们可以继承Android以前的所有版本的文档,优化,安全加固等。它也提出了一个有趣的问题 - 与应用程序必须单独管理不同,未来的平台应如何更新升级?请记住,这些设备可能无法连接到互联网。我们可能不在蜂窝/ WiFi连接的舒适空间,虽然之前这些连接至少可用,即使有时不那么可靠。
|
||||
|
||||
另一个担心是,Android Things仅仅是一个名字不同的分支版本的Android,如何选择它们的共同特性,就像启动Arduino(已经发布的一个更像市场营销而不是操作系统的操作系统)这种简单特性。实际上,通过查看[samples] [4],一些功能可能永不再用 - 比如一个最近的Android创新,甚至使用SVG图形作为资源,而不是传统的基于位图的图形,当然Andorid Things也可以轻松处理。
|
||||
### Android Things 基于 Android 的哪个版本?
|
||||
|
||||
不可避免地,与Android Things相比,普通的Android会抛出问题。例如,权限问题。因为Android Things为固定硬件设计,用户通常不会在这种设备上安装App,所以在一定程序上减轻了这个问题。另外,在没有图形界面的设备上请求权限通常不是问题,我们可以在安装时开放所有权限给App。 通常,这些设备只有一个应用程序,该应用程序从设备上电的那一刻就开始运行。
|
||||
看起来是 Android 7.0,这样很好,因为我们可以继承 Android 所有以前版本的平板设计 UI、优化,安全加固等。它也带来了一个有趣的问题 - 与应用程序必须单独管理不同,未来的平台应如何更新升级?请记住,这些设备可能无法连接到互联网。我们可能不便于连接蜂窝/ WiFi ,即便之前这些连接能用,但是有时不那么可靠。
|
||||
|
||||

|
||||
另一个担心是,Android Things 仅仅是一个名字不同的 Android 分支版本,大部分都是一样的,和已经发布的 Arduino 一样,更像为了市场营销而出现,而不是作为操作系统。不过可以放心,实际上通过[样例][4]可以看到,其中一些样例甚至使用了 SVG 图形作为资源,而不是传统的基于位图的图形(当然也能轻松处理) ——这是一个非常新的 Android 创新。
|
||||
|
||||
### Brillo怎么了?
|
||||
不可避免地,与 Android Things 相比,普通的 Android 会有些不同。例如,权限问题。因为 Android Things 为固定硬件设计,在构建好之后,用户通常不会在这种设备上安装应用,所以在一定程序上减轻了这个问题,尽管当设备要求权限时是个问题——因为它们没有 UI。解决方案是当应用在安装时给予所有需要的权限。 通常,这些设备只有一个应用,并且该应用从设备上电的那一刻就开始运行。
|
||||
|
||||
Brillo是谷歌以前的IoT操作系统的代号,听起来很像Android的前身。 实际上现在你仍然能看到很多Brillo引用,特别是在GitHub Android Things源码的例子中。 然而,它已经不复存在了。新王已经登基!
|
||||

|
||||
|
||||
### UI指南?
|
||||
Google针对Android智能手机和平板电脑应用发布了大量指南,例如屏幕按钮间距等。 当然,你最好在可行的情况下遵循这些,但这已经不是本文应该考虑的范畴了。 缺省情况下什么也没有- 应用程序作者决定一切。 这包括顶部状态栏,底部导航栏 - 绝对一切。 多年来谷歌一直叮咛Android应用程序作者不要去渲染屏幕上的返回按钮,因为平台将提供一个抛出异常,因为对于Android Things,[可能甚至不是一个UI!] [5]
|
||||
### Brillo 怎么了?
|
||||
|
||||
### 多少智能手机上的服务可以期待?
|
||||
有些,但不是所有。第一个预览版本没有蓝牙支持。没有NFC,两者都对物联网革命有重大贡献。 SBC支持他们,所以我们应该不会等待太久。由于没有通知栏,因此不支持任何通知。没有地图。缺省没有软键盘,你必须自己安装一个。由于没有Play商店,你只能屈尊通过 ADB做这个和许多其他操作。
|
||||
Brillo 是谷歌以前的 IoT 操作系统的代号,听起来很像 Android Things 的前身。 实际上现在你仍然能看到很多提及 Brillo 的地方,特别是在 GitHub Android Things 源码的文件夹名字中。 然而,它已经不复存在了。新王已经登基!
|
||||
|
||||
当开发Android Things时我试图和Pi使用同一个APK。这引发了一个错误,阻止它安装在除Android Things设备之外的任何设备:库“_com.google.android.things_”不存在。 Kinda有意义,因为只有Android Things设备需要这个,但它似乎是有限的,因为不仅智能手机或平板电脑不会出现,任何模拟器也不会。似乎只能在物理Android Things设备上运行和测试您的Android Things应用程序...直到Google在[G + Google的IoT开发人员社区] [6]组中回答了我的问题,并提供了规避方案。但是,躲过初一,躲不过十五 。
|
||||
### UI 指南?
|
||||
|
||||
### 让我如何期待Android Thing生态演进?
|
||||
谷歌针对 Android 智能手机和平板电脑应用发布了大量指南,例如屏幕按钮间距等。当然,你最好在可行的情况下遵循这些,但这已经不是本文应该考虑的范畴了。缺省情况下什么也没有:应用程序作者决定一切,这包括顶部状态栏、底部导航栏,绝对是一切。多年来谷歌一直在告诉 Android 应用程序的作者们绝不要在屏幕上放置返回按钮,因为平台会提供一个;而对于 Android Things,[可能甚至没有 UI!][5]
|
||||
|
||||
我期望看到移植更多传统的基于Linux服务器的应用程序,这对Android只有智能手机和平板电脑没有意义。例如,Web服务器突然变得非常有用。一些已经存在,但没有像重量级的Apache,或Nginx。物联网设备可能没有本地UI,但通过浏览器管理它们当然是可行的,因此需要用这种方式呈现Web面板。类似的那些如雷贯耳的通讯应用程序 - 它需要的仅是一个麦克风和扬声器,在理论上对任何视频通话应用程序,如Duo,Skype,FB等都可行。这个演变能走多远目前只能猜测。会有Play商店吗?他们会展示广告吗?我们可以确定他们不会窥探我们,或让黑客控制他们?从消费者的角度来看,物联网应该是具有触摸屏的网络连接设备,因为每个人都已经习惯于通过智能手机工作。
|
||||
### 智能手机上会有多少谷歌服务?
|
||||
|
||||
我还期望看到硬件的迅速发展 - 特别是更多的SBC并且拥有更低的成本。看看惊人的5美元 树莓派0,不幸的是,由于其有限的CPU和RAM,几乎肯定不能运行Android Things。多久之后像这样的设备才能运行Android Things?这是很明显的,标杆已经设定,任何自重的SBC制造商将瞄准Android Things的兼容性,规模经济也将波及到外围设备,如23美元的触摸屏。没人购买不会播放YouTube的微波炉,你的洗碗机会在eBay上购买更多的粉末商品,因为它注意到你很少使用它……
|
||||
有一些,但不是所有。第一个预览版本没有蓝牙支持、没有 NFC,这两者都对物联网革命有重大贡献。SBC 支持它们,所以我们应该不会等待太久。由于没有通知栏,因此不支持任何通知。没有地图。缺省没有软键盘,你必须自己安装一个键盘。由于没有 Play 商店,你只能难受地通过 ADB 做这个和许多其他操作。
|
||||
|
||||
然而,我不认为我们会失去掌控力。了解一点Android架构有助于将其视为一个包罗万象的物联网操作系统。它仍然使用Java,并几乎被其所有的垃圾回收机制导致的时序问题锤击致死。这仅仅是问题最少的部分。真正的实时操作系统依赖于可预测,准确和坚如磐石的时序,或者它不能被描述为“mission critical”。想想医疗应用程序,安全监视器,工业控制器等。使用Android,如果主机操作系统认为它需要,理论上可以在任何时候杀死您的活动/服务。在手机上不是那么糟糕 - 用户可以重新启动应用程序,杀死其他应用程序,或重新启动手机。心脏监视器完全是另一码事。如果前台Activity / Service正在监视一个GPIO引脚,并且信号没有被准确地处理,我们已经失败了。必须要做一些相当根本的改变让Android来支持这一点,到目前为止还没有迹象表明它已经在计划之中了。
|
||||
当为 Android Things 开发时,我试图为运行在手机上和树莓派上使用同一个 APK。这引发了一个错误,阻止它安装在除 Android Things 设备之外的任何设备:库 `com.google.android.things` 不存在。 这有点用,因为只有 Android Things 设备需要这个,但它似乎是个限制,因为不仅智能手机或平板电脑上没有,连模拟器上也没有。似乎只能在物理 Android Things 设备上运行和测试您的 Android Things 应用程序……直到谷歌在 [G+ 谷歌的 IoT 开发人员社区][6]组中回答了我的问题,并提供了规避方案。但是,躲过初一,躲不过十五。
|
||||
|
||||
###这24小时
|
||||
所以,回到我的项目。 我认为我会接管我已经完成和尽力能为的工作,等待不可避免的路障,并向G+社区寻求帮助。 除了一些在非Android Things上如何运行程序 的问题之外 ,没有其他问题。 它运行得很好! 这个项目也使用了一些奇怪的东西,自定义字体,高精定时器 - 所有这些都在Android Studio中完美地展现。对我而言,可以打满分 - 最后我可以开始给出实际原型,而不只是视频和截图。
|
||||
### 可以期待 Android Thing 生态演进到什么程度?
|
||||
|
||||
我期望看到移植更多传统的基于 Linux 服务器的应用程序,将 Android 限制在智能手机和平板电脑上没有意义。例如,Web 服务器突然变得非常有用。已经有一些了,但没有像重量级的 Apache 或 Nginx。物联网设备可以没有本地 UI,但通过浏览器管理它们当然是可行的,因此需要用这种方式呈现 Web 面板。类似的那些如雷贯耳的通讯应用程序 - 它需要的仅是一个麦克风和扬声器,而且在理论上任何视频通话应用程序,如 Duo、Skype、FB 等都可行。这个演变能走多远目前只能猜测。会有 Play 商店吗?它们会展示广告吗?我们能够确保它们不会窥探我们,或被黑客控制它们么?从消费者的角度来看,物联网应该是具有触摸屏的网络连接设备,因为每个人都已经习惯于通过智能手机工作。
|
||||
|
||||
我还期望看到硬件的迅速发展 - 特别是有更多的 SBC 拥有更低的成本。看看惊人的 5 美元 树莓派 Zero,不幸的是,由于其有限的 CPU 和内存,几乎可以肯定不能运行 Android Things。多久之后像这样的设备才能运行 Android Things?这是很明显的,标杆已经设定,任何有追求的 SBC 制造商将瞄准 Android Things 的兼容性,规模经济也将波及到外围设备,如 23 美元的触摸屏。没人会购买不会播放 YouTube 的微波炉,你的洗碗机会在 eBay 上购买更多的清洁粉,因为它注意到你很少使用它……
|
||||
|
||||
然而,我不认为我们会过于冲昏头脑。了解一点 Android 架构有助于将其视为一个包罗万象的物联网操作系统。它仍然使用 Java,其垃圾回收机制导致的所有时序问题在过去几乎把它搞死。这仅仅是问题最少的部分。真正的实时操作系统依赖于可预测、准确和坚如磐石的时序,或者它不能被用于“关键任务”。想想医疗应用、安全监视器,工业控制器等。使用 Android,如果宿主操作系统认为它需要,理论上可以在任何时候杀死您的活动/服务。这在手机上没那么糟糕 - 用户可以重新启动应用程序,杀死其他应用程序,或重新启动手机。但心脏监视器就完全是另一码事。如果前台的活动/服务正在监视一个 GPIO 引脚,而这个信号没有被准确地处理,我们就完了。必须要做一些相当根本的改变让 Android 来支持这一点,到目前为止还没有迹象表明它已经在计划之中了。
|
||||
|
||||
### 这 24 小时
|
||||
|
||||
所以,回到我的项目。我想我可以继续推进已经完成的工作,尽力而为,等待不可避免的路障,并向 G+ 社区寻求帮助。除了一些关于如何在非 Android Things 设备上运行程序的问题之外,没有其他问题。它运行得很好!这个项目也使用了一些奇怪的东西,如自定义字体、高精度定时器,所有这些都在 Android Studio 中完美地展现。对我而言,可以打满分:至少我能够开始做出实际原型,而不只是视频和截图。
|
||||
|
||||
### 蓝图
|
||||
今天的物联网操作系统环境看起来非常零碎。 显然没有市场领导者,尽管炒作之声沸反连天,物联网仍然在草创阶段。 谷歌Android物联网能否像它在移动端那样,现在Android在那里的主导地位非常接近90%? 我相信果真如此,Android Things的推出正是重要的一步。
|
||||
|
||||
记住所有的关于开放和封闭软件的战争,它们主要发生在从不授权的苹果和一直担心免费还不够充分的谷歌之间? 那个老梗又来了,因为让苹果推出一个免费的物联网操作系统的构想就像让他们免费赠送下一代iPhone一样遥不可及。
|
||||
今天的物联网操作系统环境看起来非常零碎。 显然没有市场领导者,尽管炒作之声沸反连天,物联网仍然在草创阶段。 谷歌 Android 物联网能否像它在移动端那样取得成功?现在 Android 在移动方面的主导地位非常接近 90%。我相信如果真的如此,Android Things 的推出正是重要的一步。
|
||||
|
||||
记住所有的关于开放和封闭软件的战争,它们主要发生在从不授权的苹果和一直担心免费还不够充分的谷歌之间。那个老梗又来了,因为让苹果推出一个免费的物联网操作系统的构想就像让他们免费赠送下一代 iPhone 一样遥不可及。
|
||||
|
||||
物联网操作系统游戏是开放的,大家机遇共享,不过这个时候,封闭派甚至不会公布它们的开发工具箱……
|
||||
|
||||
转到[Developer Preview] [7]网站,立即获取Android Things SDK的副本。
|
||||
转到 [Developer Preview][7] 网站,立即获取 Android Things SDK 的副本。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -115,7 +123,7 @@ via: https://medium.com/@carl.whalley/will-android-do-for-iot-what-it-did-for-mo
|
||||
|
||||
作者:[Carl Whalley][a]
|
||||
译者:[firstadream](https://github.com/firstadream)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
@ -1,43 +1,41 @@
|
||||
为何我们需要一个开放模型来设计评估公共政策
|
||||
============================================================
|
||||
|
||||
### 想象一个 app 可以让市民测试驱动提出的政策。
|
||||
> 想象一下:一个 app 可以让市民“试驾”拟议的政策。
|
||||
|
||||
[up][3]
|
||||

|
||||
图片提供:
|
||||

|
||||
|
||||
opensource.com
|
||||
图片提供:opensource.com
|
||||
|
||||
在政治选举之前的几个月中,公众辩论会加剧,并且公民面临大量的政策选择信息。在数据驱动的社会中,新的见解一直在为决策提供信息,对这些信息的深入了解从未如此重要,但公众仍然没有意识到公共政策建模的全部潜力。
|
||||
在政治选举之前的几个月中,公众辩论会加剧,并且公民面临大量的各种政策选择的信息。在数据驱动的社会中,新的见解一直在为决策提供信息,对这些信息的深入了解从未如此重要,但公众仍然没有意识到为公共政策建模的全部潜力。
|
||||
|
||||
在“开放政府”的概念不断演变以跟上新技术进步的时代,政府的政策模型和分析可能是新一代的开放知识。
|
||||
在“<ruby>开放政府<rt>open government</rt></ruby>”的概念不断演变以跟上新技术进步的时代,政府的政策模型和分析可能是新一代的开放知识。
|
||||
|
||||
政府开源模型 (GOSM) 是指政府开发的模型,其目的是设计和评估政策,免费提供给所有人使用、分发、不受限制地修改。社区可以提高政策建模的质量、可靠性和准确性,创造有利于公众的新的数据驱动程序。
|
||||
|
||||
今天的这代与技术相互作用,就像它的第二大本质,它默认吸收了大量的信息。如果我们可以在使用 GOSM 的虚拟、沉浸式环境中与不同的公共政策进行互动那会如何?
|
||||
今天的这一代人与技术相互作用,这俨然成为了他们的第二天性,他们自然而然地吸收了大量的信息。如果我们可以使用 GOSM,在虚拟、沉浸式环境中与不同的公共政策进行互动,那会如何?
|
||||
|
||||
想象一下有一个允许公民测试推动政策来确定他们想要生活的未来的程序。他们会本能地学习关键的驱动因素和所需要的东西。不久之后,公众将更深入地了解公共政策的影响,并更加精明地引导有争议的公众辩论。
|
||||
想象一下,如果有一个程序,允许公民“试驾”拟议的政策,来确定他们想要生活于其中的未来。他们会本能地学习关键的驱动因素和所需要的东西。不久之后,公众将更深入地了解公共政策的影响,并更加精明地引导有争议性的公众辩论。
|
||||
|
||||
为什么我们以前没有更好地使用这些模型?原因在于公共政策建模的神秘面纱。
|
||||
|
||||
在一个如我们所生活的复杂的社会中,量化政策影响是一项艰巨的任务,并被被描述为一种“美好艺术”。此外,大多数政府政策模型都是基于行政和其他私人持有的数据。然而,政策分析师为了指导政策设计而勇于追求,多次以大量武力而获得政治斗争。
|
||||
在一个像我们所生活的这样复杂的社会中,量化政策影响是一项艰巨的任务,并被描述为一种“精妙的艺术”。此外,大多数政府政策模型都是基于行政和其他私人持有的数据。然而,政策分析师为了指导政策设计而勇敢地追求这一目标,并多次凭借数字的力量赢得政治斗争。
|
||||
|
||||
数字是很有说服力的。它们构建可信度并常常被用作引入新政策的理由。公共政策模型的发展赋予政治家和官僚权力,这些政治家和官僚们可能不愿意破坏现状。给予这一点可能并不容易,但 GOSM 为前所未有的公共政策改革提供了机会。
|
||||
数字是很有说服力的。它们构建可信度,并常常被用作引入新政策的理由。公共政策模型的发展赋予政治家和官僚权力,这些政治家和官僚们可能不愿意破坏现状。给予这一点可能并不容易,但 GOSM 为前所未有的公共政策改革提供了机会。
|
||||
|
||||
GOSM 将所有人的竞争环境均衡化:政治家、媒体、游说团体、利益相关者和公众。通过向社区开放政策评估的大门, 政府可以利用新的和未发现的能力用来创造、创新在公共领域的效率。但在公共政策设计中,利益相关者和政府之间战略互动有哪些实际影响?
|
||||
GOSM 将所有人的竞争环境均衡化:政治家、媒体、游说团体、利益相关者和公众。通过向社区开放政策评估的大门, 政府可以在公共领域为创造、创新和效率引入新的和未发现的能力。但在公共政策设计中,利益相关者和政府之间战略互动有哪些实际影响?
|
||||
|
||||
GOSM 是独一无二的,因为它们主要是设计公共政策的工具,而不一定需要重新分配私人收益。利益相关者和游说团体可能会将 GOSM 与其私人信息一起使用,以获得对经济参与者私人利益的政策环境运作的新见解。
|
||||
|
||||
GOSM 会成为利益相关者在公共辩论中平衡权力、为自身最佳利益进行战略博弈的武器吗?
|
||||
|
||||
作为一个可变的公共资源,GOSM 在概念上由纳税人资助,并属于国家。私有实体在不向社会带来利益的情况下从 GOSM 中获得资源是合乎道德的吗?与可能用于更有效的服务提供的程序不同,替代政策建议更有可能由咨询机构使用,并有助于公众辩论。
|
||||
作为一个可变的公共资源,GOSM 在概念上由纳税人资助,并属于国家。私有实体在不向社会带来利益的情况下从 GOSM 中获得资源是合乎道德的吗?与可能用于更有效的服务提供的那些程序不同,替代政策建议更有可能由咨询机构使用,并有助于公众辩论。
|
||||
|
||||
开源社区经常使用“ copyleft 许可证” 来确保代码和根据此许可证的任何衍生作品对所有人都开放。当产品价值是代码本身,这需要重新分配才能获得最大利益,它需要重新分发来获得最大的利益。但是,如果代码或 GOSM 重新分发是主要产品附带的,那它会是对现有政策环境的新战略洞察么?
|
||||
开源社区经常使用 “copyleft 许可证” 来确保代码和在此许可证下的任何衍生作品对所有人都开放。当产品价值在于代码本身时,需要重新分发才能获得最大的利益。但是,如果代码或 GOSM 的重新分发只是主要产品(对现有政策环境的新战略洞察)的附带品,那又会怎样?
|
||||
|
||||
在私人收集的数据变得越来越多的时候,GOSM 背后的真正价值可能是底层数据,它可以用来改进模型本身。最终,政府是唯一有权实施政策的消费者,利益相关者可以选择在谈判中分享修改后的 GOSM。
|
||||
|
||||
政府在公开发布政策模型时面临的巨大挑战是提高透明度的同时保护隐私。理想情况下,发布 GOSM 将需要以保护建模关键特征的方式保护封闭数据。
|
||||
政府在公开发布政策模型时面临的巨大挑战是提高透明度的同时保护隐私。理想情况下,发布 GOSM 将需要以保护建模关键特征的方式保护封闭的数据。
|
||||
|
||||
公开发布 GOSM 通过促进市民对民主的更多了解和参与,使公民获得权力,从而改善政策成果和提高公众满意度。在开放的政府乌托邦中,开放的公共政策发展将是政府和社区之间的合作性努力,这里知识、数据和分析可供大家免费使用。
|
||||
|
||||
@ -57,9 +55,9 @@ Audrey Lobo-Pulo - Audrey Lobo-Pulo 博士是 Phoensight 的联合创始人,
|
||||
|
||||
via: https://opensource.com/article/17/1/government-open-source-models
|
||||
|
||||
作者:[Audrey Lobo-Pulo ][a]
|
||||
作者:[Audrey Lobo-Pulo][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
328
translated/tech/20170308 Our guide to a Golang logs world.md
Normal file
328
translated/tech/20170308 Our guide to a Golang logs world.md
Normal file
@ -0,0 +1,328 @@
|
||||
Go 日志指南
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
你是否厌烦了那些使用复杂语言编写、难以部署、总是在构建的解决方案?Golang 是解决这些问题的好方法,它和 C 语言一样快,又和 Python 一样简单。
|
||||
|
||||
但是你如何使用 Golang 日志监控你的应用程序呢?Golang 没有异常,只有错误。因此你的第一印象可能就是开发 Golang 日志策略并不是一件简单的事情。不支持异常事实上并不是什么问题,异常在很多编程语言中已经失去了特殊用处:它们过于被滥用以至于它们的作用都被忽视了。
|
||||
|
||||
在进一步深入之前,我们首先会介绍 Golang 日志基础并讨论 Golang 日志标准、元数据意义、以及最小化 Golang 日志对性能的影响。通过日志,你可以追踪用户在你应用中的活动,快速识别你项目中失效的组件,并监控总的性能以及用户体验。
|
||||
|
||||
### I. Golang 日志基础
|
||||
|
||||
### 1) 使用 Golang "log" 库
|
||||
|
||||
Golang 给你提供了一个称为 “log” 的原生[日志库][3] 。它的日志器完美适用于追踪简单的活动,例如通过使用可用的[选项][4]在错误信息之前添加一个时间戳。
|
||||
|
||||
下面是一个 Golang 中如何记录错误日志的简单例子:
|
||||
|
||||
```
package main

import (
	"errors"
	"fmt"
	"log"
)

/* 除法函数,除以 0 的时候会返回错误 */
func div(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	/* 定义局部变量 */
	a, b := 10, 0

	ret, err := div(a, b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ret)
}
```
|
||||
|
||||
如果你尝试除以 0,你就会得到类似下面的结果:
|
||||
|
||||

|
||||
|
||||
为了快速测试一个 Golang 函数,你可以使用 [go playground][5]。
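顺带一提,前面提到的[选项][4]指的就是标准库中的 flag 常量。下面是一个基于标准库的小示例草图,演示如何在每条日志前加上日期、时间和调用位置(日志文字为任意演示内容):

```
package main

import "log"

func main() {
	// LstdFlags 等于 Ldate|Ltime(日期 + 时间),
	// Lshortfile 会在每条日志中追加 "文件名:行号"
	log.SetFlags(log.LstdFlags | log.Lshortfile)
	log.Println("an event worth logging")
}
```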
|
||||
|
||||
为了确保你的日志总是能轻易访问,我们建议你把它们写到一个文件:
|
||||
|
||||
```
|
||||
package main
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
)
|
||||
func main() {
|
||||
// 按照所需读写权限创建文件
|
||||
f, err := os.OpenFile("filename", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
// 用完后通过 defer 延迟关闭文件
|
||||
defer f.Close()
|
||||
// 将日志输出设置到 f
|
||||
log.SetOutput(f)
|
||||
// 测试用例
|
||||
log.Println("check to make sure it works")
|
||||
}
|
||||
```
|
||||
|
||||
你可以在[这里][6]找到 Golang 日志的完整指南,以及 “log” [库][7]内可用函数的完整列表。
|
||||
|
||||
现在你就可以记录错误及其根本原因了。
|
||||
|
||||
另外,日志也可以帮你将活动流拼接在一起,查找需要修复的错误上下文,或者调查在你的系统中单个请求如何影响其它应用层和 API。
|
||||
为了获得更好的日志效果,你首先需要在你的项目中使用尽可能多的上下文丰富你的 Golang 日志,并标准化你使用的格式。这就是 Golang 原生库能达到的极限。使用最广泛的库是 [glog][8] 和 [logrus][9]。必须承认还有很多好的库可以使用。如果你已经在使用支持 JSON 格式的库,你就不需要再换其它库了,后面我们会解释。
|
||||
|
||||
### II. 统一你的 Golang 日志格式
|
||||
|
||||
### 1) JSON 格式的结构优势
|
||||
|
||||
在一个项目或者多个微服务中结构化你的 Golang 日志可能是最困难的事情,但一旦完成,它看起来就微不足道了。结构化你的日志能使机器可读(参考我们 [收集日志的最佳实践博文][10])。灵活性和层级是 JSON 格式的核心,因此信息能够轻易被人类和机器解析以及处理。
|
||||
|
||||
下面是一个使用 [Logrus/Logmatic.io][11] 如何用 JSON 格式记录日志的例子:
|
||||
|
||||
```
|
||||
package main
|
||||
import (
|
||||
log "github.com/Sirupsen/logrus"
|
||||
"github.com/logmatic/logmatic-go"
|
||||
)
|
||||
func main() {
|
||||
// 使用 JSONFormatter
|
||||
log.SetFormatter(&logmatic.JSONFormatter{})
|
||||
// 使用 logrus 像往常那样记录事件
|
||||
log.WithFields(log.Fields{"string": "foo", "int": 1, "float": 1.1 }).Info("My first ssl event from golang")
|
||||
}
|
||||
```
|
||||
|
||||
会输出结果:
|
||||
|
||||
```
|
||||
{
|
||||
"date":"2016-05-09T10:56:00+02:00",
|
||||
"float":1.1,
|
||||
"int":1,
|
||||
"level":"info",
|
||||
"message":"My first ssl event from golang",
|
||||
"String":"foo"
|
||||
}
|
||||
```
|
||||
|
||||
### 2) 标准化 Golang 日志
|
||||
|
||||
出现在你代码不同部分的同一个错误以不同形式被记录下来,是一件令人遗憾的事情。下面是一个由于一个变量错误而无法确定 web 页面加载状态的例子。一个开发者的日志格式是:
|
||||
|
||||
```
|
||||
message: 'unknown error: cannot determine loading status from unknown error: missing or invalid arg value client'
|
||||
```
|
||||
|
||||
另一个人的格式却是:
|
||||
|
||||
```
|
||||
unknown error: cannot determine loading status - invalid client
|
||||
```
|
||||
|
||||
强制日志标准化的一个好的解决办法是在你的代码和日志库之间创建一个接口。这个标准化接口会包括所有你想添加到你日志中的可能行为的预定义日志消息。这么做可以防止不符合你想要的标准格式的自定义日志信息。这么做也便于日志调查。
|
||||
|
||||

|
||||
|
||||
由于日志格式都被统一处理,使它们保持更新也变得更加简单。如果出现了一种新的错误类型,它只需要被添加到一个接口,这样每个组员都会使用完全相同的信息。
|
||||
|
||||
最常使用的简单例子就是在 Golang 日志信息前面添加日志器名称和 id。你的代码然后就会发送“事件”到你的标准化接口,它会继续将它们转化为 Golang 日志消息。
|
||||
|
||||
```
|
||||
// 主要部分,我们会在这里定义所有消息。
// Event 结构体很简单。为了当所有信息都被记录时能检索它们,
// 我们维护了一个 Id。
// 下面是 Event 和 Logger 的一种最小定义(原文从略,仅供参考;
// 假设已导入 fmt,并以 log 为别名导入了 logrus)
type Event struct {
	id      int
	message string
}

func (e Event) toString() string {
	return fmt.Sprintf("%d - %s", e.id, e.message)
}

// Logger 包装了一个 logrus 的 entry
type Logger struct {
	entry *log.Entry
}

var (
|
||||
invalidArgMessage = Event{1, "Invalid arg: %s"}
|
||||
invalidArgValueMessage = Event{2, "Invalid arg value: %s => %v"}
|
||||
missingArgMessage = Event{3, "Missing arg: %s"}
|
||||
)
|
||||
|
||||
// 在我们应用程序中可以使用的所有日志事件
|
||||
func (l *Logger)InvalidArg(name string) {
|
||||
l.entry.Errorf(invalidArgMessage.toString(), name)
|
||||
}
|
||||
func (l *Logger)InvalidArgValue(name string, value interface{}) {
|
||||
l.entry.WithField("arg." + name, value).Errorf(invalidArgValueMessage.toString(), name, value)
|
||||
}
|
||||
func (l *Logger)MissingArg(name string) {
|
||||
l.entry.Errorf(missingArgMessage.toString(), name)
|
||||
}
|
||||
```
|
||||
|
||||
因此如果我们使用前面例子中无效的参数值,我们就会得到相似的日志信息:
|
||||
|
||||
```
|
||||
time="2017-02-24T23:12:31+01:00" level=error msg="LoadPageLogger00003 - Missing arg: client - cannot determine loading status" arg.client=<nil> logger.name=LoadPageLogger
|
||||
```
|
||||
|
||||
JSON 格式如下:
|
||||
|
||||
```
|
||||
{"arg.client":null,"level":"error","logger.name":"LoadPageLogger","msg":"LoadPageLogger00003 - Missing arg: client - cannot determine loading status", "time":"2017-02-24T23:14:28+01:00"}
|
||||
```
|
||||
|
||||
### III. Golang 日志上下文的力量
|
||||
|
||||
现在 Golang 日志已经按照特定结构和标准格式记录,是时候考虑需要添加哪些上下文以及相关信息了。为了能从你的日志中抽取信息,例如追踪一个用户活动或者工作流,上下文和元数据非常重要。
|
||||
|
||||
例如在 logrus 库中可以按照下面这样使用 JSON 格式添加 Hostname、appname 和 session 参数:
|
||||
|
||||
```
|
||||
// 对于元数据,通常的做法是在不同日志语句间复用字段。
|
||||
contextualizedLog := log.WithFields(log.Fields{
|
||||
"hostname": "staging-1",
|
||||
"appname": "foo-app",
|
||||
"session": "1ce3f6v"
|
||||
})
|
||||
contextualizedLog.Info("Simple event with global metadata")
|
||||
```
|
||||
|
||||
元数据可以视为 javascript 片段。为了更好地说明它们有多么重要,让我们看看几个 Golang 微服务中元数据的使用。你会清楚地看到,在你的应用程序中跟踪用户是多么关键。这是因为你不仅需要知道一个错误发生了,还要知道是哪个实例以及什么模式导致了错误。假设我们有两个按顺序调用的微服务。上下文信息被传输并保存在头部(header)中:
|
||||
|
||||
```
|
||||
func helloMicroService1(w http.ResponseWriter, r *http.Request) {
|
||||
client := &http.Client{}
|
||||
// 该服务负责接收所有到来的用户请求
|
||||
// 我们会检查是否是一个新的会话还是已有会话的另一次调用
|
||||
session := r.Header.Get("x-session")
|
||||
if ( session == "") {
|
||||
session = generateSessionId()
|
||||
// 为新会话记录日志
|
||||
}
|
||||
// 每个请求的 Track Id 都是唯一的,因此我们会为每个会话生成一个
|
||||
track := generateTrackId()
|
||||
// 调用你的第二个微服务,添加 session/track
|
||||
reqService2, _ := http.NewRequest("GET", "http://localhost:8082/", nil)
|
||||
reqService2.Header.Add("x-session", session)
|
||||
reqService2.Header.Add("x-track", track)
|
||||
resService2, _ := client.Do(reqService2)
|
||||
….
|
||||
```
|
||||
|
||||
当调用第二个服务时:
|
||||
|
||||
```
|
||||
func helloMicroService2(w http.ResponseWriter, r *http.Request) {
|
||||
// 类似之前的微服务,我们检查会话并生成新的 track
|
||||
session := r.Header.Get("x-session")
|
||||
track := generateTrackId()
|
||||
// 这一次,我们检查请求中是否已经设置了一个 track id,
|
||||
// 如果是,它变为父 track
|
||||
parent := r.Header.Get("x-track")
|
||||
if (session == "") {
|
||||
w.Header().Set("x-parent", parent)
|
||||
}
|
||||
// 为响应添加 meta 信息
|
||||
w.Header().Set("x-session", session)
|
||||
w.Header().Set("x-track", track)
|
||||
if (parent == "") {
|
||||
w.Header().Set("x-parent", track)
|
||||
}
|
||||
// 填充响应
|
||||
w.WriteHeader(http.StatusOK)
|
||||
io.WriteString(w, fmt.Sprintf(aResponseMessage, 2, session, track, parent))
|
||||
}
|
||||
```
|
||||
|
||||
现在第二个微服务中已经有和初始查询相关的上下文和信息,一个 JSON 格式的日志消息看起来类似:
|
||||
|
||||
在第一个微服务:
|
||||
|
||||
```
|
||||
{"appname":"go-logging","level":"debug","msg":"hello from ms 1","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"UzWHRihF"}
|
||||
```
|
||||
|
||||
在第二个微服务:
|
||||
|
||||
```
|
||||
{"appname":"go-logging","level":"debug","msg":"hello from ms 2","parent":"UzWHRihF","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"DPRHBMuE"}
|
||||
```
|
||||
|
||||
如果在第二个微服务中出现了错误,多亏了 Golang 日志中保存的上下文信息,现在我们就可以确定它是怎样被调用的以及什么模式导致了这个错误。
|
||||
|
||||
如果你想进一步深挖 Golang 的追踪能力,这里还有一些库提供了追踪功能,例如 [Opentracing][12]。这个库提供了一种简单的方式在复杂(或简单)的架构中添加追踪实现。它通过不同步骤允许你追踪用户的查询,就像下面这样:
|
||||
|
||||

|
||||
|
||||
### IV. Golang 日志对性能的影响
|
||||
|
||||
### 1) 不要在 Goroutine 中使用日志
|
||||
|
||||
在每个 goroutine 中创建一个新的日志器看起来很诱人。但最好别这么做。Goroutine 是一个轻量级线程管理器,它用于完成一个 “简单的” 任务。因此它不应该负责日志。它可能导致并发问题,因为在每个 goroutine 中使用 log.New() 会复用接口,所有日志器会并发尝试访问同一个 io.Writer。
|
||||
为了限制对性能的影响以及避免并发调用 io.Writer,库通常使用一个特定的 goroutine 用于日志输出。
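下面是这种模式的一个最小示例草图(不基于任何特定日志库):工作 goroutine 只向通道发送消息,由一个专门的 goroutine 串行地写 io.Writer,从而避免并发写入:

```
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	logCh := make(chan string, 100) // 缓冲通道,降低调用方阻塞的概率
	done := make(chan struct{})

	// 专门的日志 goroutine:唯一写 os.Stderr 的地方
	go func() {
		for msg := range logCh {
			fmt.Fprintln(os.Stderr, msg)
		}
		close(done)
	}()

	// 工作 goroutine 只发送消息,不直接访问 io.Writer
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			logCh <- fmt.Sprintf("worker %d: task finished", id)
		}(i)
	}

	wg.Wait()
	close(logCh)
	<-done // 等待日志 goroutine 排空通道
}
```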
|
||||
|
||||
### 2) 使用异步库
|
||||
|
||||
尽管有很多可用的 Golang 日志库,但要注意它们中的大部分都是同步的(事实上是伪异步)。原因很可能是,到目前为止它们都还没有因为日志而对性能造成严重影响。
|
||||
|
||||
但正如 Kjell Hedström 在[他的实验][13]中展示的,使用多个线程创建成千上万日志,在最坏情况下异步 Golang 日志也会有 40% 的性能提升。因此日志是有开销的,也会对你的应用程序性能产生影响。如果你并不需要处理大量的日志,使用伪异步 Golang 日志库可能就足够了。但如果你是处理大量的日志,或者很关注性能,Kjell Hedström 的异步解决方案就很有趣(尽管事实上你可能需要进一步开发,因为它只包括了最小的功能需求)。
|
||||
|
||||
### 3) 使用严重等级管理 Golang 日志
|
||||
|
||||
一些日志库允许你启用或停用特定日志器,这可能会派上用场。例如在生产环境中你可能不需要一些特定等级的日志。下面是一个如何在 glog 库中停用日志器的例子,其中日志器被定义为布尔值:
|
||||
|
||||
```
package main

import "fmt"

type Log bool

func (l Log) Println(args ...interface{}) {
	fmt.Println(args...)
}

var debug Log = false

func main() {
	if debug {
		debug.Println("DEBUGGING")
	}
}
```
|
||||
|
||||
然后你就可以在配置文件中定义这些布尔参数来启用或者停用日志器。
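另外,如果你使用的是前文提到的 logrus,也可以直接用它内置的日志等级来实现同样的开关。下面是一个示例草图,假设用环境变量 LOG_LEVEL(假设的名字)控制等级:

```
package main

import (
	"os"

	log "github.com/Sirupsen/logrus"
)

func main() {
	// 尝试从环境变量解析等级,如 "debug"、"info"、"warn"
	if lvl, err := log.ParseLevel(os.Getenv("LOG_LEVEL")); err == nil {
		log.SetLevel(lvl)
	} else {
		// 解析失败时的默认值:生产环境只记录警告及以上
		log.SetLevel(log.WarnLevel)
	}

	log.Debug("只有 LOG_LEVEL=debug 时才会输出")
	log.Warn("默认等级下会输出")
}
```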
|
||||
|
||||
没有好的 Golang 日志策略,Golang 日志的开销可能会很大。开发人员应该抵制记录几乎所有事情的诱惑,尽管这非常有趣!日志的目的是获取尽可能多的信息,但为了避免充斥着无用元素的日志白噪音,必须正确使用日志。
|
||||
|
||||
### V. 集中 Golang 日志
|
||||
|
||||

|
||||
如果你的应用程序部署在多台服务器上,集中化日志可以让你免去为了调查一个现象而逐台连接服务器的麻烦。日志集中确实有用。
|
||||
|
||||
使用日志传送工具(log shipper),例如 Windows 中的 Nxlog,Linux 中的 Rsyslog(默认安装了的)、Logstash 和 FluentD,是最好的实现方式。日志传送工具的唯一目的就是发送日志,因此它们能够处理连接失效以及其它你很可能会遇到的问题。
|
||||
|
||||
这里甚至有一个 [Golang syslog 软件包][14] 帮你将 Golang 日志发送到 syslog 守护进程。
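下面是一个最小示例草图,演示如何用标准库的 log/syslog 包把标准日志输出重定向到本地的 syslog 守护进程(标签 my-go-app 为假设的名字;该包在 Windows 上不可用):

```
package main

import (
	"log"
	"log/syslog"
)

func main() {
	// 连接本地 syslog 守护进程,指定默认优先级和标签
	w, err := syslog.New(syslog.LOG_INFO|syslog.LOG_LOCAL0, "my-go-app")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// *syslog.Writer 实现了 io.Writer,可直接作为标准日志的输出
	log.SetOutput(w)
	log.Println("hello from the standard logger, via syslog")
}
```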
|
||||
|
||||
### 希望你享受你的 Golang 日志之旅
|
||||
|
||||
在你项目一开始就考虑你的 Golang 日志策略非常重要。如果在你代码的任意地方都可以获得所有的上下文,追踪用户就会变得很简单。从不同服务中阅读没有标准化的日志已经是很痛苦的事情。一开始就计划好在多个微服务之间传递相同的用户或请求 id,后面就能让你比较容易地过滤信息并在你的系统中跟踪活动。
|
||||
|
||||
你是构建一个很大的 Golang 项目还是几个微服务,也会影响你的日志策略。一个大项目的主要组件应该有按照它们功能命名的特定 Golang 日志器。这使你可以立即判断出日志来自你的哪一部分代码。然而对于微服务或者小的 Golang 项目,只有较少的核心组件需要它们自己的日志器。但在每种情形中,日志器的数目都应该保持低于核心功能的数目。
|
||||
|
||||
你现在已经可以使用 Golang 日志量化决定你的性能或者用户满意度啦!
|
||||
|
||||
_如果你有想阅读的特定编程语言,在 Twitter [@logmatic][2] 上告诉我们吧。_
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
|
||||
作者:[Nils][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
[1]:https://twitter.com/logmatic?lang=en
|
||||
[2]:http://twitter.com/logmatic
|
||||
[3]:https://golang.org/pkg/log/
|
||||
[4]:https://golang.org/pkg/log/#pkg-constants
|
||||
[5]:https://play.golang.org/
|
||||
[6]:https://www.goinggo.net/2013/11/using-log-package-in-go.html
|
||||
[7]:https://golang.org/pkg/log/
|
||||
[8]:https://github.com/google/glog
|
||||
[9]:https://github.com/sirupsen/logrus
|
||||
[10]:https://logmatic.io/blog/beyond-application-monitoring-discover-logging-best-practices/
|
||||
[11]:https://github.com/logmatic/logmatic-go
|
||||
[12]:https://github.com/opentracing/opentracing-go
|
||||
[13]:https://sites.google.com/site/kjellhedstrom2/g2log-efficient-background-io-processign-with-c11/g2log-vs-google-s-glog-performance-comparison
|
||||
[14]:https://golang.org/pkg/log/syslog/
|
218
translated/tech/20170317 AWS cloud terminology.md
Normal file
218
translated/tech/20170317 AWS cloud terminology.md
Normal file
@ -0,0 +1,218 @@
|
||||
AWS 云服务的相关术语
|
||||
============================================================
|
||||
|
||||
* * *
|
||||
|
||||

|
||||
|
||||
_认识 71 个 AWS 云服务的术语 ! 通过了解 AWS 世界中使用的术语开始你的 AWS 云服务使用生涯 !_
|
||||
|
||||
* * *
|
||||
|
||||
AWS,即亚马逊 Web 服务(Amazon Web Services),是一个提供一系列按使用量计费的 Web 服务的云平台。它是迄今为止最为著名的云平台之一。由于其灵活性、有效性、弹性、可测量性和无须维护,一些企业正逐步把他们的业务迁移到云端。由于许多公司目前在使用这些服务,系统管理员和运维人员应该了解一下 AWS。
|
||||
|
||||
这篇文章旨在列出这些 AWS 提供的服务并且解释在 AWS 中出现的术语。
|
||||
|
||||
截至目前(2017 年 3 月),AWS 共提供 71 种服务,分为 17 组:
|
||||
|
||||
* * *
|
||||
|
||||
_计算型服务_
|
||||
|
||||
这是一组提供虚拟服务器配置功能的服务,即所谓的云计算。它包括以下一系列服务。
|
||||
|
||||
1. EC2 : EC2 代表弹性计算云。这种服务提供了可根据个人需求扩展的 [虚拟机][11]
|
||||
2. EC2 容器服务 : 其高性能,高可扩展性使其可在 EC2 集群环境中运行服务
|
||||
3. Lightsail : 该服务使用户非常容易地启动和管理虚拟服务器(EC2)。
|
||||
4. Elastic Beanstalk : 该服务能够自动管理你的应用程序的容量配置、负载平衡、扩展以及健康监控从而减少你的管理负载。
|
||||
5. Lambda : 它允许你只在你需要的时候运行代码而不用去管理服务器。
|
||||
6. Batch : 它使用户能够以自定义的管理方式运行计算工作负载(批处理)。
|
||||
|
||||
* * *
|
||||
|
||||
_存储型服务_
|
||||
|
||||
它是一种云存储服务,即由Amazon提供的云存储设施。 该组服务包括:
|
||||
|
||||
1. S3 : S3 代表简单存储服务(3 个 S)。它给你提供了在线存储服务,你可随时从任何地方存储/检索任何数据。
|
||||
2. EFS : EFS 代表弹性文件系统。 它是一个可以和EC2服务器一起使用的在线存储服务。
|
||||
3. Glacier : 它是一种低成本/低性能数据存储解决方案,主要针对存档或长期备份。
|
||||
4. Storage Gateway : 这种服务的接口会将你的内部应用程序(托管在 AWS 之外)与 AWS 存储连接。
|
||||
|
||||
* * *
|
||||
|
||||
_数据库_
|
||||
|
||||
AWS 还提供在其 Infra 上托管数据库,以便客户可以利用亚马逊最先进的技术来获得更快/高效/安全的数据处理。 该组包括:
|
||||
|
||||
1. RDS : RDS 代表关系数据库服务。 有助于在云上设置,操作和管理关系数据库。
|
||||
2. DynamoDB : 其 noSQL 数据库提供快速处理和高可扩展性。
|
||||
3. ElastiCache : 这是一种为你的 Web 应用程序管理内存缓存、让它们运行得更快的方案!
|
||||
4. Redshift : 它是一个巨大的(PB 级)的完全可升级的云端数据仓库服务。
|
||||
|
||||
* * *
|
||||
|
||||
_网络 & 内容分发_
|
||||
|
||||
由于 AWS 提供云端 EC2 服务器,因此网络相关内容也会出现在这里。内容分发用于从离用户最近的位置向用户提供文件。如今许多知名网站都借此加速。
|
||||
|
||||
1. VPC : VPC 代表虚拟私有云。 它是你自己的虚拟网络,是你的专用 AWS 帐户。
|
||||
2. CloudFront : 这是 AWS 的内容分发网络服务。
|
||||
3. Direct Connect : 它是将数据中心/场所与 AWS 连接起来的网络方式,以提高吞吐量,降低网络成本,并避免由于基于互联网的连接而导致的连接问题。
|
||||
4. Route 53 : 它是一个云域名系统的 DNS Web 服务。
|
||||
|
||||
* * *
|
||||
|
||||
_迁移_
|
||||
|
||||
它提供了一系列服务来帮助你实现本地服务到 AWS 的迁移工作。 这包括 :
|
||||
|
||||
1. Application Discovery Service : 专门用于分析您的服务器,网络,应用程序以帮助/加速迁移的服务。
|
||||
2. DMS : DMS 指的是数据库迁移服务。 它用于将数据从本地数据库迁移到 EC2 上托管的 RDS 或 DB。
|
||||
3. Server Migration : 也称为 SMS(服务器迁移服务)是一种无代理服务,将您的工作负载从本地移动到 AWS。
|
||||
4. Snowball : 当你想要使用物理存储设备(而不是基于互联网/基于网络的传输)将大量数据传入/迁出AWS时,你应该使用它。
|
||||
|
||||
* * *
|
||||
|
||||
_开发者工具_
|
||||
|
||||
见名知义, 这是一系列帮助开发者简化在云端编码的服务。
|
||||
|
||||
1. CodeCommit :它是一个安全的,可扩展的,可管理的源代码管理服务,用于托管代码仓库。
|
||||
2. CodeBuild : 这是一个云端的代码生成器。主要用于执行,测试代码和构建部署软件包。
|
||||
3. CodeDeploy : 这是一个可在 AWS 服务器或本地进行自动化应用程序部署的部署服务。
|
||||
4. CodePipeline : 这个部署服务可以让编码人员在发布之前将其应用程序可视化。
|
||||
5. X-Ray : 它可以使用事件调用分析应用程序。
|
||||
|
||||
* * *
|
||||
|
||||
_管理工具_
|
||||
|
||||
这是一组可帮助你管理 AWS 上的Web服务的服务。
|
||||
|
||||
1. CloudWatch : 监控服务以监控你的 AWS 资源或应用程序。
|
||||
2. CloudFormation : 基础设施即代码!以集中有序的方式管理相关的 AWS 资源。
|
||||
3. CloudTrail : AWS 帐户的审计和合规工具。
|
||||
4. Config : AWS 的资源清单,配置历史记录和配置更改通知,以实现安全性和治理。
|
||||
5. OpsWorks : 它可以自动化地配置和部署 EC2 或本地部署的计算资源。
|
||||
6. Service Catalog : 创建和管理被批准在你/公司帐户中使用的 IT 服务目录。
|
||||
7. Trusted Advisor : 它是 AWS 的 AI,可以通过审查你的 AWS 基础设施,使其更好、更省钱。
|
||||
8. Managed Service : 提供对 AWS 基础设施的持续管理。
|
||||
|
||||
* * *
|
||||
|
||||
_安全、身份与合规_
|
||||
|
||||
这是一组很重要的 AWS 服务以确保你的 AWS 空间的安全性。
|
||||
|
||||
1. IAM : IAM 代表身份和访问管理(Identity and Access Management),用于控制用户对你的 AWS 资源和服务的访问。
|
||||
2. Inspector : 自动安全评估可保证你的应用在 AWS 上的安全和稳定性。
|
||||
3. Certificate Manager : 为 AWS 应用程序提供,管理和部署SSL / TLS证书。
|
||||
4. Directory Service : 相当于 AWS 的 Microsoft Active Directory。
|
||||
5. WAF & Shield : WAF 代表 Web 应用防火墙。监控和控制对 CloudFront 或负载均衡器上的内容的访问。
|
||||
6. Compliance Reports : AWS 基础设施空间的合规报告,以确保您的应用程序符合您的策略。
|
||||
|
||||
* * *
|
||||
|
||||
_数据分析_
|
||||
|
||||
AWS 空间的数据分析服务,以帮助您查看,计划,对帐户中的事件采取行动。
|
||||
|
||||
1. Athena : 它是一个基于 SQL 查询的服务,用于分析 S3 存储的数据。
|
||||
2. EMR : EMR 的全称是 Elastic Map Reduce。这是一个主要用于大数据处理和分析的服务。
|
||||
3. CloudSearch : AWS 在应用和服务中的搜索功能。
|
||||
4. Elasticsearch Service : 它可以创建一个域并在 AWS Cloud 中部署,操作和扩展 Elasticsearch 集群。
|
||||
5. Kinesis : 这种服务可以实现实时的大量流数据处理。
|
||||
6. Data Pipeline : 它可以帮助我们在不同的 AWS 服务之间实现数据迁移。
|
||||
7. QuickSight : 收集,分析和呈现 AWS 的业务数据。
|
||||
|
||||
* * *
|
||||
|
||||
_人工智能_
|
||||
|
||||
AWS 中的 AI!
|
||||
|
||||
1. Lex : 它可以帮助我们在一些使用语音和文本的应用中构建会话界面。
|
||||
2. Polly : 这是一个提供文字到语音转换的服务。
|
||||
3. Rekognition : 使您能够将图像分析添加到应用程序。
|
||||
4. Machine Learning : 它具有学习数据中的模式的算法。
|
||||
|
||||
* * *
|
||||
|
||||
_物联网_
|
||||
|
||||
这个服务保证了 AWS 在不同设备上的高可用性。
|
||||
|
||||
1. AWS IoT : 它使已连接的硬件设备与 AWS 的应用程序能够交互。
|
||||
|
||||
* * *
|
||||
|
||||
_游戏开发_
|
||||
|
||||
见名知义,这个服务旨在游戏开发。
|
||||
|
||||
1. Amazon GameLift : 该服务旨在部署和管理用于会话制多人游戏的专用游戏服务器。
|
||||
|
||||
* * *
|
||||
|
||||
_移动服务_
|
||||
|
||||
这是一组主要针对手持设备的服务。
|
||||
|
||||
1. Mobile Hub : 帮助您创建移动应用后台功能并将其集成到移动应用。
|
||||
2. Cognito : 在互联网连接的设备上控制移动用户的身份验证和 AWS 的访问。
|
||||
3. Device Farm : 移动应用测试服务,使你可以在 AWS 托管的真机上跨 Android、iOS 平台测试应用。
|
||||
4. Mobile Analytics : 在AWS上测量,跟踪和分析移动应用数据。
|
||||
5. Pinpoint : 有针对性的推送通知和移动互动。
|
||||
|
||||
* * *
|
||||
|
||||
_应用服务_
|
||||
|
||||
这是一组可以和你在 AWS 上的应用一起使用的服务。
|
||||
|
||||
1. Step Functions : 定义和使用应用程序中的各种功能。
|
||||
2. SWF : SWF 代表简单的工作流服务。其云工作流程管理可帮助开发人员在应用程序生命周期的不同阶段进行协调和贡献。
|
||||
3. API Gateway : 帮助开发人员创建,管理和托管API
|
||||
4. Elastic Transcoder : 帮助开发人员将媒体文件转换为可在各种设备上播放的格式。
|
||||
|
||||
* * *
|
||||
|
||||
_消息_
|
||||
|
||||
AWS 中的通知和消息服务。
|
||||
|
||||
1. SQS : SQS 表示简单队列服务。完全管理的消息队列服务,用于在AWS中的服务和应用之间进行通信。
|
||||
2. SNS : SNS 代表简单通知服务。 为 AWS 用户推送通知服务,提醒他们有关其在 AWS 空间中的服务。
|
||||
3. SES : SES代表简单电子邮件服务。 这是 AWS 为自己的客户提供高性价比的电子邮件服务。
|
||||
|
||||
* * *
|
||||
|
||||
_企业生产率_
|
||||
|
||||
一组帮你提高业务生产率的服务。
|
||||
|
||||
1. WorkDocs : 协同文件共享,存储和编辑服务。
|
||||
2. WorkMail : 安全的商务邮件,日程服务。
|
||||
3. Amazon Chime : 在线的企业会议!
|
||||
|
||||
* * *
|
||||
|
||||
_桌面和应用程序流式传输_
|
||||
|
||||
将桌面和应用程序通过云端进行流式传输。
|
||||
|
||||
1. WorkSpaces : 完全托管且安全的云桌面计算服务。
|
||||
2. AppStream 2.0 : 来自云端的流式桌面应用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kerneltalks.com/virtualization/aws-cloud-terminology/
|
||||
|
||||
作者:[Shrikant Lavhate][a]
|
||||
译者:[chenxinlong](https://github.com/chenxinlong)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://kerneltalks.com/virtualization/aws-cloud-terminology/
|
@ -1,111 +0,0 @@
|
||||
|
||||
AI 正快速入侵我们生活的五个方面
|
||||
============================================================
|
||||
|
||||
> 让我们来看看我们已经被人工智能包围的五个真实存在的方面。
|
||||
|
||||

|
||||
> 图片来源: opensource.com
|
||||
|
||||
开源项目[正在帮助推动][2]人工智能进步,而且随着技术的成熟,我们将听到更多关于 AI 如何影响我们生活的消息。你有没有考虑过 AI 是如何改变你周围的世界的?让我们来看看我们日益人为增强的世界,以及 AI 对未来影响的大胆预测。
|
||||
|
||||
### 1. AI 影响你的购买决定
|
||||
|
||||
最近 [VentureBeat][3] 上的一篇文章,[“AI 将如何帮助我们解读千禧一代”][4]吸引了我的注意。我承认我对人工智能没有思考太多,也没有费力尝试解读千禧一代,所以我很好奇,渴望了解更多。事实证明,文章标题有点误导;“如何卖东西给千禧一代”会是一个更准确的标题。
|
||||
|
||||
根据这篇文章,千禧一代是“一个年龄阶段的人群,被人觊觎,以至于来自全世界的市场经理都在争抢他们”。通过分析网络行为 —— 无论是购物、社交媒体或其他活动 - 机器学习可以帮助预测行为模式,这将可以变成有针对性的广告。文章接着解释如何对物联网和社交媒体平台进行挖掘形成数据点。“使用机器学习挖掘社交媒体数据,可以让公司了解千禧一代如何谈论其产品,他们对一个产品类别的看法,他们对竞争对手的广告活动如何响应,还可获得很多数据,用于设计有针对性的广告,"这篇文章解释说。AI 和千禧一代成为营销的未来并不是什么很令人吃惊的事,但是 X 一代和婴儿潮一代,你们也逃不掉呢!
|
||||
|
||||
> 人工智能被用来根据行为变化来定位包括城市人在内的整个人群。
|
||||
|
||||
例如, [Raconteur上][23]的一篇文章——"AI将怎样改变购买者的行为"解释说,AI 在网上零售行业最大的力量是它能够迅速适应流动的情况下改变客户行为。人工智能创业公司 [Fluid AI][25]首席执行官 Abhinav Aggarwal 表示,他的公司的软件被客户用来预测客户行为,并且系统注意到在暴风雪期间发生了变化。“那些通常会忽略在一天中发送的电子邮件或应用内通知的用户现在正在打开它们,因为他们在家里没有太多的事情可做。一个小时,AI 系统就适应了新的情况,并在工作时间发送更多的促销材料。"他解释说。
|
||||
|
||||
AI 正在改变我们怎样花钱和为什么花钱,但是 AI 又是怎样改变我们挣钱的方式的呢?
|
||||
|
||||
### 2. 人工智能正在改变我们如何工作
|
||||
|
||||
[Fast 公司][5]最近的一篇文章《这就是在 2017 年人工智能如何改变我们的生活》说道,求职者将会从人工智能中受益。作者解释说,除薪酬趋势更新之外,人工智能将被用来给求职者发送相关职位空缺信息。当你应该升职的时候,你就会得到一个升职的机会。
|
||||
|
||||
人工智能也将被公司用来帮助新入职的员工。文章解释说:“许多新员工在头几天内获得了大量信息,其中大部分不会被保留。” 相反,机器人可能会随着时间推移、在信息变得更加相关时,将其“滴灌”给新员工。
|
||||
|
||||
[Inc.][7]的一篇文章[《没有偏见的企业:人工智能将如何重塑招聘机制》][8]着眼于人才管理解决方案提供商 [SAP SuccessFactors][9] 是怎样利用人工智能作为一个工作描述偏差检查器”和检查员工赔偿金的偏差。
|
||||
|
||||
[《Deloitte 2017 人力资本趋势报告》][10]显示,AI 正在激励组织进行重组。Fast公司的文章[《AI 是怎样改变公司组织的方式》][11]审查了这篇报告,该文章是基于全球 10,000 多名人力资源和商业领袖的调查结果。这篇文章解释说:"许多公司现在更注重文化和环境的适应性,而不是聘请最有资格的人来做某个具体任务,因为知道个人角色必须随 AI 的实施而发展 。" 为了适应不断变化的技术,组织也从自上而下的结构转向多学科团队,文章说。
|
||||
|
||||
###3. AI 正在改变教育
|
||||
|
||||
> AI 将使所有教育生态系统的利益相关者受益。
|
||||
|
||||
尽管教育的预算正在缩减,但是教室的规模却正在增长。因此利用技术的进步有助于提高教育体系的生产率和效率,并在提高教育质量和负担能力方面发挥作用。根据VentureBeat上的一篇文章[《2017 年人工智能是怎样改变教育》][26],今年我们将看到 AI 对学生们的书面答案进行评分,机器人回答学生的答案,虚拟个人助理辅导学生等等。文章解释说:“AI 将惠及教育生态系统的所有利益相关者。学生将能够通过即时的反馈和指导学习地更好,教师将获得丰富的学习分析和对个性化教学的见解,父母将以更低的成本看到他们的孩子的更好的职业前景,学校能够规模化优质的教育,政府能够向所有人提供可负担得起的教育。"
|
||||
|
||||
### 4. 人工智能正在重塑医疗保健
|
||||
|
||||
2017 年 2 月 [CB Insights][12] 的一篇文章挑选了 106 个医疗保健领域的人工智能初创公司,它们中的很多在过去几年中完成了首次股权融资。这篇文章说:“在 24 家成像和诊断公司中,有 19 家自 2015 年 1 月以来获得了首次股权融资。”这份名单上有那些从事远程病人监测、药物发现和肿瘤学方面人工智能的公司。
|
||||
|
||||
3 月 16 日发表在 TechCrunch 上的一篇关于 AI 进步如何重塑医疗保健的文章解释说:"一旦对人类的 DNA 有了更好的理解,就有机会更进一步,并能根据他们特殊的生活习性为他们提供个性化的见解"。这种趋势预示着“个性化遗传学”的新纪元,人们能够通过获得关于自己身体的前所未有的信息来充分控制自己的健康。"
|
||||
|
||||
本文接着解释说,AI 和机器学习降低了研发新药的成本和时间。部分得益于广泛的测试,新药进入市场需要 12年 以上的时间。这篇文章说:“机器学习算法可以让计算机根据先前处理的数据来"学习"如何做出预测,或者选择(在某些情况下,甚至是产品)需要做什么实验。类似的算法还可用于预测特定化合物对人体的副作用,这样可以加快审批速度。"这篇文章指出,2015 年旧金山的一个创业公司 [Atomwise][15] 完成了对两种可以减少一天内 Ebola 感染的新药物。
|
||||
|
||||
> AI 正在帮助发现、诊断和治疗新疾病。
|
||||
|
||||
另外一个位于伦敦的初创公司 [BenevolentAI][27] 正在利用人工智能寻找科学文献中的模式。这篇文章说:"最近,这家公司找到了两种可能对 Alzheimer 起作用的化合物,引起了很多制药公司的关注。"
|
||||
|
||||
除了有助于研发新药,AI正在帮助发现、诊断和治疗新疾病。TechCrunch 上 文章解释说,过去是根据显示的症状诊断疾病,但是现在 AI 正在被用于检测血液中的疾病特征,并利用对数十亿例临床病例分析进行深度学习获得经验来制定治疗计划。这篇文章说:“IBM 的 Watson 正在与纽约的 Memorial Sloan Kettering 合作,消化理解数十年来关于癌症患者和治疗方面的数据,为了向治疗疑难的癌症病例的医生提供和建议治疗方案。”
|
||||
|
||||
### 5. AI正在改变我们的爱情生活
|
||||
|
||||
有 195 个国家的超过 5000 万活跃用户通过一个在 2012 年推出的约会应用程序 [Tinder][16] 找到潜在的伴侣。在一篇 [Forbes 采访播客][17]中,Tinder 的创始人兼董事长 Sean Rad spoke 与 Steven Bertoni 对人工智能是如何正在改变人们约会进行过讨论。在一篇[关于采访的文章][18]中,Bertoni 引用了 Rad 说的话,他说:"可能有这样一个时刻,这时 Tinder 非常擅长推测你会感兴趣的人,Tinder 在组织一次约会中可能会做很多跑腿的工作,对吧?"所以,这个 app 会向用户推荐一些附近的同伴,并更进一步,协调彼此的时间安排一次约会,而不只是向用户显示一些有可能的同伴。
|
||||
|
||||
> 我们的后代真的可能会爱上人工智能。
|
||||
|
||||
你爱上了 AI 吗?我们的后代真的可能会爱上人工智能。Raya Bidshahri 发表在 [Singularity Hub][19] 的一篇文章《AI 将如何重新定义爱情》说,几十年的后,我们可能会认为爱情不再受生物学的限制。
|
||||
|
||||
Bidshahri 解释说:"我们的技术符合摩尔定律,正在以惊人的速度增长——智能设备正在越来越多地融入我们的生活。",他补充道:"到 2029 年,我们将会有和人类同等智慧的 AI,而到 21 世纪 40 年代,AI 将会比人类聪明无数倍。许多人预测,有一天我们会与强大的机器合并,我们自己可能会变成人工智能。"他认为在这样一个世界上那些是不可避免的,人们将会接受与完全的非生物相爱。
|
||||
|
||||
这听起来有点怪异,但是相比较于未来机器人将统治世界,爱上AI会是一个更乐观的结果。Bidshahri说:"对AI进行编程,让他们能够感受到爱,这将使我们创造出更富有同情心的AI,这可能也是避免很多人忧虑的AI大灾难的关键。"
|
||||
|
||||
这份 AI 正在入侵我们生活各领域的其中五个方面的清单仅仅只是涉及到了我们身边的人工智能的表面。哪些 AI 创新是让你最兴奋的,或者是让你最烦恼的?大家可以在文章评论区写下你们的感受。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
Rikki Endsley - Rikki Endsley 是开源社区 Opensource.com 的管理员。在过去,她曾做过 Red Hat 开源和标准(OSAS)团队社区传播者;自由技术记者;USENIX 协会的社区管理员; linux 权威杂志 ADMIN 和 Ubuntu User 的合作出版者,还是杂志 Sys Admin 和 UnixReview.com 的主编。在 Twitter上关注她:@rikkiends。
|
||||
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
|
||||
作者:[Rikki Endsley ][a]
|
||||
译者:[zhousiyu325](https://github.com/zhousiyu325)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rikki-endsley
|
||||
[1]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives?rate=ORfqhKFu9dpA9aFfg-5Za9ZWGcBcx-f0cUlf_VZNeQs
|
||||
[2]:https://www.linux.com/news/open-source-projects-are-transforming-machine-learning-and-ai
|
||||
[3]:https://twitter.com/venturebeat
|
||||
[4]:http://venturebeat.com/2017/03/16/how-ai-will-help-us-decipher-millennials/
|
||||
[5]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
[6]:https://www.fastcompany.com/3066620/this-is-how-ai-will-change-your-work-in-2017
|
||||
[7]:https://twitter.com/Inc
|
||||
[8]:http://www.inc.com/bill-carmody/businesses-beyond-bias-how-ai-will-reshape-hiring-practices.html
|
||||
[9]:https://www.successfactors.com/en_us.html
|
||||
[10]:https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html?id=us:2el:3pr:dup3575:awa:cons:022817:hct17
|
||||
[11]:https://www.fastcompany.com/3068492/how-ai-is-changing-the-way-companies-are-organized
|
||||
[12]:https://twitter.com/CBinsights
|
||||
[13]:https://www.cbinsights.com/blog/artificial-intelligence-startups-healthcare/
|
||||
[14]:https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/
|
||||
[15]:http://www.atomwise.com/
|
||||
[16]:https://twitter.com/Tinder
|
||||
[17]:https://www.forbes.com/podcasts/the-forbes-interview/#5e962e5624e1
|
||||
[18]:https://www.forbes.com/sites/stevenbertoni/2017/02/14/tinders-sean-rad-on-how-technology-and-artificial-intelligence-will-change-dating/#4180fc2e5b99
|
||||
[19]:https://twitter.com/singularityhub
|
||||
[20]:https://singularityhub.com/2016/08/05/how-ai-will-redefine-love/
|
||||
[21]:https://opensource.com/user/23316/feed
|
||||
[22]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives#comments
|
||||
[23]:https://twitter.com/raconteur
|
||||
[24]:https://www.raconteur.net/technology/how-ai-will-change-buyer-behaviour
|
||||
[25]:http://www.fluid.ai/
|
||||
[26]:http://venturebeat.com/2017/02/04/how-ai-will-transform-education-in-2017/
|
||||
[27]:https://twitter.com/benevolent_ai
|
||||
[28]:https://opensource.com/users/rikki-endsley
|
||||
|
@ -0,0 +1,109 @@
|
||||
ttyload - 在终端中用彩色显示 Linux 的平均负载
|
||||
============================================================
|
||||
|
||||
|
||||
|
||||
ttyload 是一个轻量级的实用程序,它以彩色方式展示 Linux 和其他类 Unix 系统上随时间变化的平均负载。它在终端(“tty”)中以图形方式跟踪系统的平均负载。
|
||||
|
||||
它已知可以在诸如 Linux、IRIX、Solaris、FreeBSD、MacOS X (Darwin) 和 Isilon OneFS 等系统上运行。它被设计为可以容易地移植到其他平台,但这也带来了一些艰苦的工作。
|
||||
|
||||
它的一些值得注意的功能是:它使用相当标准但硬编码的 ANSI 转义序列进行屏幕操作和着色。如果你想要在一个没有负载的系统中查看它的工作情况,它还附带一个相对独立的“负载炸弹”(默认不会安装,甚至不会构建)。
|
||||
|
||||
**建议阅读:**[GoTTY - 将你的 Linux 终端(TTY)作为 Web 程序共享][1]
|
||||
|
||||
在本篇中,我们会向你展示如何在 Linux 安装及使用 ttyload,以在终端中用彩色图形查看系统的平均负载。
|
||||
|
||||
### 如何在 Linux 中安装 ttyload
|
||||
|
||||
在基于 Debian/Ubuntu 的发行版中,你可以输入下面的[ apt 命令][2]来从默认的系统仓库中安装 ttyload。
|
||||
|
||||
```
|
||||
$ sudo apt-get install ttyload
|
||||
```
|
||||
|
||||
在其他发行版中,你可以如下从 ttyload 的源码安装。
|
||||
|
||||
```
|
||||
$ git clone https://github.com/lindes/ttyload.git
|
||||
$ cd ttyload
|
||||
$ make
|
||||
$ ./ttyload
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
安装完成后,你可以输入下面的命令启动。
|
||||
|
||||
```
|
||||
$ ttyload
|
||||
```
|
||||
[][3]
|
||||
|
||||
ttyload - 图形浏览 Linux 的平均负载
|
||||
|
||||
注意:要关闭程序,只需按下 `[Ctrl+C]` 键。
|
||||
|
||||
你也可以定义两次刷新之间间隔的秒数。默认是 4 秒,最小是 1 秒。
|
||||
|
||||
```
|
||||
$ ttyload -i 5
|
||||
$ ttyload -i 1
|
||||
```
|
||||
|
||||
要以单色模式运行,即它会关闭 ANSI 转义,如下使用 `-m`:
|
||||
|
||||
```
|
||||
$ ttyload -m
|
||||
```
|
||||
[][4]
|
||||
|
||||
ttyload – 单色模式
|
||||
|
||||
要获取 ttyload 的使用信息以及帮助,输入:
|
||||
|
||||
```
|
||||
$ ttyload -h
|
||||
```
|
||||
|
||||
下面是一些尚未添加的重要功能:
|
||||
|
||||
* 支持任意大小调整。
|
||||
|
||||
* 使用相同的基本引擎制作 X 前端,具有 “3xload”。
|
||||
|
||||
* 面向日志的模式。
|
||||
|
||||
要获得更多信息,访问 ttyload 的主页:[http://www.daveltd.com/src/util/ttyload/][5]
|
||||
|
||||
就是这样了!在本文中,我们向你展示了如何在 Linux 中安装及使用 ttyload。通过下面的评论栏给我们回馈。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是一个 Linux 和 F.O.S.S 的爱好者,即将推出的 Linux SysAdmin 网络开发人员,目前也是 TecMint 的内容创作者,他喜欢和电脑一起工作,并且坚信共享知识。
|
||||
|
||||
|
||||
-------------------
|
||||
|
||||
via: https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/gotty-share-linux-terminal-in-web-browser/
|
||||
[2]:https://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2017/05/ttyload-Graphical-View-of-Linux-Load-Average-.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/05/ttyload-monochrome-mode.png
|
||||
[5]:http://www.daveltd.com/src/util/ttyload/
|
||||
[6]:https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/#
|
||||
[7]:https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/#
|
||||
[8]:https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/#
|
||||
[9]:https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/#
|
||||
[10]:https://www.tecmint.com/ttyload-shows-color-coded-graph-of-linux-load-average/#comments
|
||||
[11]:https://www.tecmint.com/author/aaronkili/
|
||||
[12]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[13]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -1,141 +0,0 @@
|
||||
4 个拥有漂亮命令行 UI 的终端程序
|
||||
============================================================
|
||||
|
||||
### 我们来看几个精心设计的 CLI 程序,以及如何克服一些可发现的问题。
|
||||
|
||||

|
||||
>图片提供: opensource.com
|
||||
|
||||
在本文中,我会指出命令行界面在可发现性上的缺点,以及克服这些问题的几种方法。
|
||||
|
||||
我喜欢命令行。我第一次接触命令行是在 1997 的 DOS 6.2 上。我学习了各种命令的语法,并展示了如何在目录中列出隐藏的文件(**attrib**)。我会每次仔细检查命令中的每个字符。 当我犯了一个错误,我会从头开始重新输入命令。有一天,有人向我展示了如何使用向上和向下箭头按键遍历历史,我被震惊了。
|
||||
|
||||
编程和开发
|
||||
|
||||
* [新的 Python 内容][1]
|
||||
|
||||
* [我们最新的 JavaScript 文章][2]
|
||||
|
||||
* [最近的 Perl 帖子][3]
|
||||
|
||||
* [红帽开发者博客][4]
|
||||
|
||||
后来当我被介绍 Linux 时,让我感到惊喜的是,上下箭头保留了它们遍历历史记录的能力。我仍然仔细地打字,但到现在为止,我了解如何输入,并且我能以每分钟 55 个单词的速度做的很好。接着有人向我展示了 tab 键,并再次改变了我的生活。
|
||||
|
||||
在 GUI 应用程序菜单中,工具提示和图标向用户展示功能。命令行缺乏这种能力,但有办法克服这个问题。在深入解决方案之前,我会来看看几个有问题的 CLI 程序:
|
||||
|
||||
### 1\. MySQL
|
||||
|
||||
首先是我们所钟爱的 MySQL REPL。我经常习惯性地输入 **SELECT * FROM** 然后按 **Tab**。MySQL 会询问我是否想看到所有的 871 种可能性。我的数据库中绝对没有 871 张表。如果我选择 **yes**,它会显示一堆 SQL 关键字、表、函数等。
|
||||
|
||||

|
||||
|
||||
### 2\. Python
|
||||
|
||||
我们来看另一个例子,标准的 Python REPL。我开始输入命令,然后习惯按 **Tab** 键。瞧,插入了一个 **Tab** 字符,考虑到 **Tab** 在 Python 中没有作用,这是一个问题。
|
||||
|
||||

|
||||
|
||||
### 好的 UX
|
||||
|
||||
让我看下设计良好的 CLI 程序以及它们是如何克服这些可见问题的。
|
||||
|
||||
### 自动补全: bpython
|
||||
|
||||
[Bpython][15] 是对 Python REPL 的一个很好的替代。当我运行 bpython 并开始输入时,建议会立即出现。我没用通过特殊的键盘绑定触发它,甚至没有按下 **Tab** 键。
|
||||
|
||||

|
||||
|
||||
当我出于习惯按下 **Tab** 键时,它会用列表中的第一个建议补全。这是给 CLI 设计带来可见性的一个很好的例子。
|
||||
|
||||
bpython 另一方面可以展示模块和函数的文档。当我输入一个函数的名字时,它会显示函数签名以及这个函数附带的文档字符串。这是一个多么令人难以置信的周到设计啊。
|
||||
|
||||
### 上下文感知补全:mycli
|
||||
|
||||
[Mycli][16]是默认的 MySQL 客户端的现代替代品。这个工具对 MySQL 来说就像 bpython 对标准 Python REPL 做的那样。Mycli 将在你输入时自动补全关键字、表名、列和函数。
|
||||
|
||||
补全建议是上下文相关的。例如,在 **SELECT * FROM** 之后,只有来自当前数据库的表才会列出,而不是所有可能的关键字。
|
||||
|
||||

|
||||
|
||||
### 模糊搜索和在线帮助: pgcli
|
||||
|
||||
如果您正在寻找 PostgreSQL 版本的 mycli,请查看 [pgcli][17]。 与 mycli 一样,它提供了上下文感知的自动补全。菜单中的项使用模糊搜索缩小。模糊搜索允许用户输入整体字符串中的任意子字符串来尝试找到正确的匹配项。
|
||||
|
||||

|
||||
|
||||
pgcli 和 mycli 同时在 CLI 中实现了这个功能。斜杠命令的文档也作为补全菜单的一部分展示。
|
||||
|
||||
### 可发现性: fish
|
||||
|
||||
在传统的 Unix shell(Bash、zsh 等)中,有一种搜索历史记录的方法。此搜索模式由 **Ctrl-R** 触发。当再次调用你上周运行过的命令(例如 **ssh** 或 **docker**)时,这是一个非常有用的工具。一旦你知道这个功能,你会发现自己经常使用它。
|
||||
|
||||
如果这个功能是如此有用,那为什么不每次都搜索呢?这正是 [**fish** shell][18] 所做的。一旦你开始输入命令,**fish** 将开始建议与历史记录类似的命令。然后,你可以按右箭头键接受该建议。
|
||||
|
||||
### 命令行规矩
|
||||
|
||||
我已经回顾了一些解决可发现性问题的创新方法,但也有一些命令行的基础功能,是每个 REPL 都应该实现的:
|
||||
|
||||
* 确保 REPL 有可通过箭头键调用的历史记录。确保会话之间的历史持续存在。
|
||||
|
||||
* 提供在编辑器中编辑命令的方法。不管你的补全是多么棒,有时用户只需要一个编辑器来制作完美的命令来删除生产环境中所有的表。
|
||||
|
||||
* 使用 pager 来管道输出。不要让用户滚动他们的终端。哦,并为 pager 使用合理的默认值。(添加选项来处理颜色代码。)
|
||||
|
||||
* 提供一种通过 **Ctrl-R** 界面或者 **fish** 样式的自动搜索来搜索历史记录的方法。
|
||||
|
||||
### 总结
|
||||
|
||||
在第 2 部分中,我将来看看 Python 中使你能够实现这些技术的特定库。同时,请查看其中一些精心设计的命令行应用程序:
|
||||
|
||||
* [bpython][5]或 [ptpython][6]:具有自动补全支持的 Python REPL。
|
||||
|
||||
* [http-prompt][7]:交互式 HTTP 客户端。
|
||||
|
||||
* [mycli][8]:MySQL、MariaDB 和 Percona 的命令行界面,具有自动补全和语法高亮。
|
||||
|
||||
* [pgcli][9]:具有自动补全和语法高亮,是对 [psql][10] 的替代工具。
|
||||
|
||||
* [wharfee][11]:用于管理 Docker 容器的 shell。
|
||||
|
||||
_在这了解 Amjith Ramanujam 更多的在 5 月 20 日在波特兰俄勒冈州举办的 [PyCon US 2017][12] 上的谈话“[令人敬畏的命令行工具][13]”。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Amjith Ramanujam - Amjith Ramanujam 是 pgcli 和 mycli 的创始人。人们认为它们很酷,他并不反对。他喜欢用 Python、Javascript 和 C 编程。他喜欢编写简单易懂的代码,它们有时甚至会成功。
|
||||
|
||||
-----------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/4-terminal-apps
|
||||
|
||||
作者:[Amjith Ramanujam ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/amjith
|
||||
[1]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:http://bpython-interpreter.org/
|
||||
[6]:http://github.com/jonathanslenders/ptpython/
|
||||
[7]:https://github.com/eliangcs/http-prompt
|
||||
[8]:http://mycli.net/
|
||||
[9]:http://pgcli.com/
|
||||
[10]:https://www.postgresql.org/docs/9.2/static/app-psql.html
|
||||
[11]:http://wharfee.com/
|
||||
[12]:https://us.pycon.org/2017/
|
||||
[13]:https://us.pycon.org/2017/schedule/presentation/518/
|
||||
[14]:https://opensource.com/article/17/5/4-terminal-apps?rate=3HL0zUQ8_dkTrinonNF-V41gZvjlRP40R0RlxTJQ3G4
|
||||
[15]:https://bpython-interpreter.org/
|
||||
[16]:http://mycli.net/
|
||||
[17]:http://pgcli.com/
|
||||
[18]:https://fishshell.com/
|
||||
[19]:https://opensource.com/user/125521/feed
|
||||
[20]:https://opensource.com/article/17/5/4-terminal-apps#comments
|
||||
[21]:https://opensource.com/users/amjith
|
@ -0,0 +1,155 @@
|
||||
怎样在 Linux 中对 Vim 文件进行密码保护
|
||||
============================================================
|
||||
|
||||
|
||||
|
||||
[Vim][5] 是一种流行的、功能丰富的和高度可扩展的 [Linux 文本编辑器][6],它的重要功能之一便是支持用各种带密码的加密方法来加密文本文件。
|
||||
|
||||
本文中,我们将向你介绍一个简单的 Vim 使用技巧:在 Linux 中使用 Vim 对文件进行密码保护。我们将展示如何在创建文件时,以及之后为修改而打开文件时,对它加以保护。
|
||||
|
||||
**建议阅读:** [你应该在 Linux 中使用 Vim 编辑器的 10 个原因][7]
|
||||
|
||||
要安装 Vim 完整版,只需运行这些命令:
|
||||
|
||||
```
|
||||
$ sudo apt install vim #Debian/Ubuntu 系统
|
||||
$ sudo yum install vim #RHEL/CentOS 系统
|
||||
$ sudo dnf install vim #Fedora 22+
|
||||
```
|
||||
参阅:[十年后 Vim 8.0 发布了 – 在 Linux 上安装][8]
|
||||
|
||||
### 怎样在 Linux 中对 Vim 文件进行密码保护
|
||||
|
||||
Vim 有个 `-x` 选项,这个选项能让你在创建文件时用它来加密。一旦你运行下面的 [vim 命令][9],你会被提示输入一个密钥:
|
||||
|
||||
```
|
||||
$ vim -x file.txt
|
||||
警告:正在使用弱加密方法;参见 :help 'cm'
|
||||
输入加密密钥:*******
|
||||
再次输入相同密钥:*******
|
||||
```
|
||||
|
||||
如果第二次输入的密钥无误,你就能进去修改此文件了。
|
||||
|
||||
[][10]
|
||||
|
||||
被密码保护的 Vim 文件
|
||||
|
||||
等你修改好之后,按 `[Esc]` 并键入 `:wq` 来保存及关闭文件。下次想打开它编辑时,你就必须像这样输入加密密钥:
|
||||
|
||||
```
|
||||
$ vim file.txt
|
||||
需要 "file.txt" 的加密密钥
|
||||
警告:正在使用弱加密方法;参见 :help 'cm'
|
||||
输入密钥:*******
|
||||
```
|
||||
|
||||
假设你输了一个错误的密码(或者没输密码),你会看到一些垃圾字符。
|
||||
|
||||
[][11]
|
||||
|
||||
Vim 中的加密内容
|
||||
|
||||
#### 在 Vim 中设置一种强加密方法
|
||||
|
||||
注意:上面有一条警告信息,提示保护文件所用的是一种弱加密方法。那么接下来,我们来看看怎么在 Vim 中设置一种强加密方法。
|
||||
|
||||
[][12]
|
||||
|
||||
Vim 中文件弱加密
|
||||
|
||||
要查看 cryptmethod(cm)的可用取值,键入(向下滚动可查看所有可用的方法):
|
||||
|
||||
```
|
||||
:help 'cm'
|
||||
```
|
||||
|
||||
##### 输出样例
|
||||
|
||||
```
|
||||
*'cryptmethod'* *'cm'*
|
||||
'cryptmethod' 'cm' string (默认 "zip")
|
||||
global or local to buffer |global-local|
|
||||
{not in Vi}
|
||||
当缓冲区写进文件中所用的方法:
|
||||
*pkzip*
|
||||
zip PkZip 兼容法。 一种弱加密方法。
|
||||
与 Vim 7.2 及更老版本后向兼容。
|
||||
*blowfish*
|
||||
blowfish 河豚法。 中级强度加密方法但有实现上
|
||||
的瑕疵。需要 Vim 7.3 及以上版本,用它加密的文件不
|
||||
能被 Vim 7.2 及更老版本读取。它会添加一个 “种子”,
|
||||
每次你对这个文件写操作时……
|
||||
options.txt [帮助][只读]
|
||||
```
|
||||
|
||||
你可以像下面这样给一个 Vim 文件设置新的 cryptmethod(加密方法)(本例中我们使用 blowfish2 加密方法):
|
||||
|
||||
```
|
||||
:setlocal cm=blowfish2
|
||||
```
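如果想让之后加密的文件都默认使用这种强加密方法,也可以把该设置写进 `~/.vimrc`(blowfish2 需要 Vim 7.4.399 及以上版本):

```
" 写入 ~/.vimrc:让新加密的文件默认使用 blowfish2
set cryptmethod=blowfish2
```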
|
||||
|
||||
然后按 `[Enter]` 并键入 `:wq` 保存文件。
|
||||
|
||||
[][13]
|
||||
|
||||
对 Vim 文件设置强加密
|
||||
|
||||
现在再打开该文件时,你应该就看不到那条警告信息了:
|
||||
|
||||
```
|
||||
$ vim file.txt
|
||||
需要 "file.txt" 的加密密钥
|
||||
输入加密密钥:*******
|
||||
```
|
||||
|
||||
你也可以在打开 Vim 文件之后再设置密码:在普通模式下使用 `:X` 命令,就能像上面那样设置一个密码关卡。
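下面是一个简单的操作示意,演示如何用 `:X` 给一个已打开的文件加上密码(文件名仅为示例):

```
$ vim notes.txt
:X
输入加密密钥: *******
再次输入相同密钥: *******
:wq
```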
|
||||
|
||||
可以看看我们其他的关于 Vim 编辑器的有用的文章。
|
||||
|
||||
1. [在 Linux 中学习有用的 Vim 编辑器的旅行与技巧][1]
|
||||
|
||||
2. [给每个 Linux 用户的 8 种有用的 Vim 编辑器技巧][2]
|
||||
|
||||
3. [spf13-vim – Vim 编辑器的顶级发行版][3]
|
||||
|
||||
4. [怎样在 Linux 中把 Vim 编辑器当作 Bash IDE 来用][4]
|
||||
|
||||
本文到这里就结束了!文章中我们介绍了怎么通过 Linux 下的 Vim 文本编辑器来给一个文件做加密防护。
|
||||
|
||||
永远记住,要用强加密方式及密码来妥善保护那些可能包含用户名及密码、财务账户信息等机密信息的文本文件。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是一个 Linux 和 F.O.S.S(Free and Open-Source Software,自由及开放源代码软件)爱好者,未来的 Linux 系统管理员、web 开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,且崇尚分享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/password-protect-vim-file-in-linux/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[ch-cn](https://github.com/ch-cn)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/learn-vi-and-vim-editor-tips-and-tricks-in-linux/
|
||||
[2]:https://www.tecmint.com/how-to-use-vi-and-vim-editor-in-linux/
|
||||
[3]:https://www.tecmint.com/spf13-vim-offers-vim-plugins-vim-editor/
|
||||
[4]:https://www.tecmint.com/use-vim-as-bash-ide-using-bash-support-in-linux/
|
||||
[5]:https://www.tecmint.com/vi-editor-usage/
|
||||
[6]:https://www.tecmint.com/best-open-source-linux-text-editors/
|
||||
[7]:https://www.tecmint.com/reasons-to-learn-vi-vim-editor-in-linux/
|
||||
[8]:https://www.tecmint.com/vim-8-0-install-in-ubuntu-linux-systems/
|
||||
[9]:https://www.tecmint.com/linux-command-line-editors/
|
||||
[10]:https://www.tecmint.com/wp-content/uploads/2017/05/Vim-File-Password-Protected-File.png
|
||||
[11]:https://www.tecmint.com/wp-content/uploads/2017/05/Vim-Content-Encrypted.png
|
||||
[12]:https://www.tecmint.com/wp-content/uploads/2017/05/Weak-Encryption-on-Vim-File.png
|
||||
[13]:https://www.tecmint.com/wp-content/uploads/2017/05/Set-Strong-Encryption-on-Vim-File.png
|
||||
[14]:https://www.tecmint.com/vi-editor-usage/
|
||||
[15]:https://www.tecmint.com/author/aaronkili/
|
||||
[16]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[17]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,82 @@
|
||||
如何在 Linux 中删除大(100-200GB)文件
|
||||
============================================================
|
||||
|
||||
通常,要[在 Linux 终端删除一个文件][1],我们使用 rm 命令(删除文件)、shred 命令(安全删除文件)、wipe 命令(安全擦除文件)或者 secure-deletion 工具包(一个[安全文件删除工具][2]集合)。
|
||||
|
||||
我们可以使用上面任意一个工具来处理相对较小的文件。但如果我们想要删除巨大的文件/文件夹,比如 100-200GB 的,那么在删除文件所花费的时间以及对系统 I/O 调度和内存占用的影响方面,事情就没那么简单了。
|
||||
|
||||
在本教程中,我们会解释如何在 Linux 中有效率并可靠地删除大文件/文件夹。
|
||||
|
||||
**建议阅读:** [5 个在 Linux 中清空或者删除大文件内容的方法][3]
|
||||
|
||||
主要的目标是使用一种不会在删除大文件时拖慢系统的技术,并有合理的 I/O 占用。我们可以用 **ionice 命令**实现这个目标。
|
||||
|
||||
### 在 Linux 中使用 ionice 命令删除大(200GB)文件
|
||||
|
||||
ionice 是一个很有用的程序,它可以为另一个程序设置或获取 I/O 调度类别和优先级。如果没有给出参数,或者只给出 `-p`,那么 ionice 将会查询该进程当前的 I/O 调度类别和优先级。
|
||||
|
||||
如果我们给出命令名称,如 rm 命令,它将以给定的参数运行此命令。要指定想获取或设置调度参数的[进程的进程 ID][4],运行:
|
||||
|
||||
```
|
||||
# ionice -p PID
|
||||
```
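例如,可以先用 pidof 找到某个进程的 PID,再查询它当前的 I/O 调度级别(下面的进程名和 PID 只是示例):

```
# pidof rsync
8242
# ionice -p 8242
best-effort: prio 4
```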
|
||||
|
||||
要指定调度类别的名字或编号,可以使用 `-c` 选项(0 表示无、1 表示实时、2 表示尽力而为、3 表示空闲),如下面的命令所示。
|
||||
|
||||
下面的用法意味着 rm 将属于空闲 I/O 类别,只在其他进程不使用 I/O 的时候才使用 I/O:
|
||||
|
||||
```
|
||||
---- 在 Linux 中删除大文件 -----
|
||||
# ionice -c 3 rm /var/logs/syslog
|
||||
# ionice -c 3 rm -rf /var/log/apache
|
||||
```
|
||||
|
||||
如果系统的 I/O 并不太空闲,那么我们可以使用尽力而为的调度类别,并设置较低的优先级:
|
||||
|
||||
```
|
||||
# ionice -c 2 -n 6 rm /var/logs/syslog
|
||||
# ionice -c 2 -n 6 rm -rf /var/log/apache
|
||||
```
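如果还想同时降低删除任务的 CPU 优先级,也可以把 ionice 和 nice 组合起来用(路径仅为示例):

```
# ionice -c 3 nice -n 19 rm -rf /var/log/apache
```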
|
||||
|
||||
注意:要使用安全的方法删除大文件,我们可以使用先前提到的 shred、wipe 以及 secure-deletion 工具包中的不同工具,而不是 rm 命令。
|
||||
|
||||
**建议阅读:**[3 个在 Linux 中永久/安全删除文件/文件夹的方法][5]
|
||||
|
||||
要获取更多信息,查阅 ionice 的手册页:
|
||||
|
||||
```
|
||||
# man ionice
|
||||
```
|
||||
|
||||
就是这样了!你脑海里还有其他的方法么?在评论栏中与我们分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/delete-huge-files-in-linux/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[2]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[3]:https://www.tecmint.com/empty-delete-file-content-linux/
|
||||
[4]:https://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
|
||||
[5]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[6]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[7]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[8]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[9]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[10]:https://www.tecmint.com/delete-huge-files-in-linux/#comments
|
||||
[11]:https://www.tecmint.com/author/aaronkili/
|
||||
[12]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[13]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,79 @@
|
||||
在 Linux 服务器关机前向用户显示一条自定义消息
|
||||
============================================================
|
||||
|
||||
|
||||
在先前的文章中,我们解释了 Linux 中[ shutdown、poweroff、halt、reboot 命令的不同之处][3],并揭示了在用不同的选项执行这些命令时它们实际做了什么。
|
||||
|
||||
本篇将会向你展示如何在系统关机时向所有的系统用户发送一条自定义的消息。
|
||||
|
||||
**建议阅读:**[tuptime - 显示 Linux 系统的历史和统计运行时间][4]
|
||||
|
||||
作为一名系统管理员,在关闭服务器之前,你也许想发送一条消息,提醒所有登录用户系统即将关闭。默认情况下,shutdown 命令会像下面这样向其他系统用户广播这条信息:
|
||||
|
||||
```
|
||||
# shutdown 13:25
|
||||
```
|
||||
Linux 关机广播消息:
|
||||
```
|
||||
Shutdown scheduled for Fri 2017-05-12 13:25:00 EAT, use 'shutdown -c' to cancel.
|
||||
Broadcast message for root@tecmint (Fri 2017-05-12 13:23:34 EAT):
|
||||
The system is going down for power-off at Fri 2017-05-12 13:25:00 EAT!
|
||||
```
|
||||
|
||||
要随关机通知向其他系统用户发送一条自定义消息,可运行下面的命令。本例中,关机将在命令执行两分钟后发生。
|
||||
|
||||
```
|
||||
# shutdown +2 "The system is going down for required maintenance. Please save any important work you are doing now!"
|
||||
```
|
||||
[][5]
|
||||
|
||||
Linux 系统关闭消息
|
||||
|
||||
假设你有一些关键的系统操作,如计划的系统备份或更新,恰好要在系统关机的时刻进行,如下所示,你可以使用 `-c` 选项取消关机,并在执行完这些操作之后再关机:
|
||||
|
||||
```
|
||||
# shutdown -c
|
||||
```
|
||||
Linux 取消关机消息:
|
||||
```
|
||||
Shutdown scheduled for Fri 2017-05-12 14:10:22 EAT, use 'shutdown -c' to cancel.
|
||||
Broadcast message for root@tecmint (Fri 2017-05-12 14:10:27 EAT):
|
||||
The system shutdown has been cancelled at Fri 2017-05-12 14:11:27 EAT!
|
||||
```
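另外,如果只想测试广播效果而不真正关机,可以使用 `-k` 选项,它只向所有登录用户发送警告消息,并不会实际关闭系统:

```
# shutdown -k +5 "This is only a test, the system will NOT actually go down"
```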
|
||||
|
||||
另外,学习如何在 Linux 中使用简单和传统的方法[在重启或者开机时自动执行命令/脚本][6]。
|
||||
|
||||
不要错过:
|
||||
|
||||
1. [管理系统启动进程和服务(SysVinit、Systemd 和 Upstart)][1]
|
||||
|
||||
2. [11 个 Linux 中 cron 计划任务示例][2]
|
||||
|
||||
现在你知道了如何在系统关闭前向其他系统用户发送自定义消息了。你有其他关于这个主题想要分享的想法么?何不使用下面的评论栏?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是一个 Linux 和 F.O.S.S 爱好者、Linux 系统管理员、网络开发人员,现在也是 TecMint 的内容创作者,他喜欢和电脑一起工作,坚信共享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/show-linux-server-shutdown-message/
|
||||
|
||||
作者:[Aaron Kili ][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/linux-boot-process-and-manage-services/
|
||||
[2]:https://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
|
||||
[3]:https://www.tecmint.com/shutdown-poweroff-halt-and-reboot-commands-in-linux/
|
||||
[4]:https://www.tecmint.com/find-linux-uptime-shutdown-and-reboot-time-with-tuptime/
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2017/05/Linux-System-Shutdown-Message.png
|
||||
[6]:https://www.tecmint.com/auto-execute-linux-scripts-during-reboot-or-startup/
|
||||
[7]:https://www.tecmint.com/author/aaronkili/
|
||||
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,72 @@
|
||||
如何使用 Cream 提高 Vim 的用户友好性
|
||||
============================================================
|
||||
|
||||
### Cream 附加包给 Vim 文本编辑器披上了一层更加熟悉的“面孔”,同时保留 Vim 的功能,使其更加容易使用
|
||||
|
||||
|
||||
|
||||

|
||||
图片来自:opensource.com
|
||||
|
||||
大约 10 年前,我既使用 Emacs 进行文本编辑,也使用 Vim 进行文本编辑。说到底,我的确是一个热衷于 Emacs 的家伙。尽管 Emacs 在我的心里占据了很重要的地位,但我知道, Vim 也不赖。
|
||||
|
||||
有些人(或者说像我这样的人)在技术方面可能有些笨手笨脚。多年来,我和许多想使用 Vim 的 Linux 新手交谈过,他们失望地发现,Vim 编辑器和他们在其它操作系统上用过的编辑器似乎不太一样。
|
||||
|
||||
但是,当我把 Cream 介绍给他们以后,他们的失望就变成了满意。Cream 是 Vim 的一个附加包,它使得 Vim 更加容易使用。Cream 让这些 Linux 新手变成了 Vim 的坚决拥护者和忠心用户。
|
||||
|
||||
让我们来看一看 Cream 是什么以及它是如何让 Vim 变得更加容易使用的。
|
||||
|
||||
### Cream 安装
|
||||
|
||||
在安装 Cream 之前,你需要先在你的电脑上安装好 Vim 和 GVim 的 GUI 组件。我发现最容易完成这件事的方法是使用 Linux 版本的包管理器。
|
||||
|
||||
安装好 Vim 以后,便可[下载 Cream 的安装程序][2],或者你也可以再次使用 Linux 发行版的包管理器进行安装。
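以 Debian/Ubuntu 为例(假设你的发行版仓库提供了 cream 包),安装命令大致如下:

```
$ sudo apt-get install vim-gtk cream   # 包名以你的发行版实际提供的为准
```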
|
||||
|
||||
安装好 Cream 以后,你可以从应用菜单中选择它(比如 **Applications**->**Cream**),或者在程序启动器中输入 **Cream**,从而启动 Cream。
|
||||
|
||||

|
||||
|
||||
### Cream 使用
|
||||
|
||||
如果你之前用过 Gvim,就会注意到 Cream 几乎没有改变编辑器的外观和感觉。最大的不同在于 Cream 的菜单栏和工具栏:它们取代了 Gvim 陈旧的菜单栏和工具栏,而新的菜单栏和工具栏在外观和功能分组方式上,看起来和其它编辑器的一样。
|
||||
|
||||
Cream 的菜单默认隐藏了一些更偏技术性的选项,比如指定编译器、运行 `make` 命令等功能。当你通过 Cream 对 Vim 更加熟悉以后,只需在 `Setting->Preferences->Behavior` 中勾选相应选项,就可以启用这些特性。启用之后,你就能(如果你愿意)体验到一个兼具 Cream 和传统 Vim 二者优点的强大编辑器。
|
||||
|
||||
Cream 并不是只能靠菜单驱动。尽管编辑器的各项功能只需点击一两下鼠标即可使用,但你也可以使用常见的键盘快捷键来执行操作,比如 `CTRL-O`(打开文件)、`CTRL-C`(复制文本)。你不需要在几种模式之间切换,也不需要记住一堆难记的命令。
|
||||
|
||||
Cream 运行起来以后,打开或新建一个文件,就可以开始输入了。几位经我介绍使用 Cream 的人说,虽然 Cream 保留了 Vim 的许多典型风格,但用起来更舒服。
|
||||
|
||||

|
||||
|
||||
这并不是说 Cream 是 Vim 的简化版或者阉割版。事实上,Cream 保留了 Vim 的全部特性,同时还提供了一系列其他有用的特性。我发现的一些有用的特性包括:
|
||||
|
||||
* 一个标签式界面
|
||||
* 语法高亮(特别是针对 Markdown、LaTeX 和 HTML)
|
||||
* 自动修正拼写错误
|
||||
* 字数统计
|
||||
* 内建文件浏览器
|
||||
|
||||
|
||||
Cream 还有许多附加包,可以给编辑器增加一些新特性,包括文本加密、清理电子邮件内容,甚至还有一个使用教程。老实说,我还没有发现哪个附加包是真正有用的,不过每个人的情况可能有所不同。
|
||||
|
||||
我曾听到一些 Vi/Vim 的狂热分子谴责 Cream “弱化”(他们的原话)了 Vi/Vim 编辑器。的确,Cream 并不是为他们设计的,而是为那些想快速上手 Vim、同时保留自己惯用编辑器的外观和感觉的人准备的。就这一点而言,Cream 值得赞赏,它让 Vim 变得更容易使用,也更容易被大众接受。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier
|
||||
|
||||
作者:[ Scott Nesbitt][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier?rate=sPQVOnwWoNwyyQX4wV2SZ_7Ly_KXd_Gu9pBu16LRyhU
|
||||
[2]:http://cream.sourceforge.net/download.html
|
||||
[3]:http://cream.sourceforge.net/featurelist.html
|
||||
[4]:https://opensource.com/user/14925/feed
|
||||
[5]:https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier#comments
|
||||
[6]:https://opensource.com/users/scottnesbitt
|
@ -0,0 +1,131 @@
|
||||
Linfo — 实时显示你的 Linux 服务器运行状况
|
||||
============================================================
|
||||
|
||||
|
||||
Linfo 是一个自由开源的跨平台服务器统计 UI/库,可以显示大量的系统信息。Linfo 是可扩展的:通过 `composer`,你可以很容易地把它作为 PHP5 库引入,在 PHP 应用中以编程方式获取各种系统统计信息。除了 Web UI 之外,它还有一个基于 Ncurses 的 CLI 视图。Linfo 在 Linux、Windows、BSD、Darwin/Mac OSX、Solaris 和 Minix 系统上均可用。
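如果你想在自己的 PHP 应用中以编程方式使用 Linfo,可以通过 composer 引入它(这里假设其 Packagist 包名为 linfo/linfo):

```
$ composer require linfo/linfo
```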
|
||||
|
||||
Linfo 显示的系统信息包括 [CPU 类型/速度][2]、服务器的体系结构、挂载点使用量、硬盘/光纤/flash 驱动器、硬件设备、网络设备和统计信息、运行时间/启动日期、主机名、内存使用量(RAM 和 swap)、温度/电压/风扇速度和 RAID 阵列等。
|
||||
|
||||
#### 环境要求:
|
||||
|
||||
* PHP 5.3
|
||||
|
||||
* pcre 扩展
|
||||
|
||||
* Linux – /proc 和 /sys 已挂载且对 `PHP` 可读,已经在 2.6.x/3.x 内核中测试过
|
||||
|
||||
### 如何在 Linux 中安装服务器统计 UI/库 Linfo
|
||||
|
||||
首先,在 `Apache` 或 `Nginx` 的 Web 根目录下创建一个 `Linfo` 目录,然后,克隆仓库文件并使用下面展示的 [rsync 命令][3]将其移动到目录 `/var/www/html/linfo` 下:
|
||||
|
||||
```
|
||||
$ sudo mkdir -p /var/www/html/linfo
|
||||
$ git clone git://github.com/jrgp/linfo.git
|
||||
$ sudo rsync -av linfo/ /var/www/html/linfo/
|
||||
```
|
||||
|
||||
接下来,将 `sample.config.inc.php` 重命名为 `config.inc.php`。这是 Linfo 的配置文件,你可以在里面定义你想要的值:
|
||||
|
||||
```
|
||||
$ sudo mv sample.config.inc.php config.inc.php
|
||||
```
|
||||
|
||||
现在,在 Web 浏览器中打开链接 `http://SERVER_IP/linfo` 来查看这个 Web UI,正如下面的截图所展示的。
|
||||
|
||||
从截图中可以看到, Linfo 显示了系统内核信息、硬件组成、RAM 统计、网络设备、驱动器以及文件系统挂载点。
|
||||
|
||||
[][4]
|
||||
|
||||
*Linux 服务器运行信息*
|
||||
|
||||
你可以将下面一行内容加入配置文件 `config.inc.php` 中,让它在出错时显示错误信息,以便进行故障排查。
|
||||
|
||||
```
|
||||
$settings['show_errors'] = true;
|
||||
```
|
||||
|
||||
### 以 Ncurses 模式运行 Linfo
|
||||
|
||||
Linfo 有一个基于 `ncurses` 的简单界面,它依赖于 `php` 的 `ncurses` 扩展。
|
||||
|
||||
```
|
||||
# yum install php-pecl-ncurses [在 CentOS/RHEL 上]
|
||||
# dnf install php-pecl-ncurses [在 Fedora 上]
|
||||
$ sudo apt-get install php5-dev libncurses5-dev [在 Debian/Ubuntu 上]
|
||||
```
|
||||
|
||||
现在,像下面这样编译这个 `php` 扩展:
|
||||
|
||||
```
|
||||
$ wget http://pecl.php.net/get/ncurses-1.0.2.tgz
|
||||
$ tar xzvf ncurses-1.0.2.tgz
|
||||
$ cd ncurses-1.0.2
|
||||
$ phpize # 生成 configure 脚本
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
接下来,如果编译成功并安装好了该 `php` 扩展,运行下面的命令:
|
||||
|
||||
```
|
||||
$ echo extension=ncurses.so | sudo tee /etc/php5/cli/conf.d/ncurses.ini
|
||||
```
|
||||
|
||||
验证 ncurses 扩展:
|
||||
|
||||
```
|
||||
$ php -m | grep ncurses
|
||||
```
|
||||
|
||||
现在,运行 Linfo:
|
||||
|
||||
```
|
||||
$ cd /var/www/html/linfo/
|
||||
$ ./linfo-curses
|
||||
```
|
||||
[][5]
|
||||
|
||||
*Linux 服务器信息*
|
||||
|
||||
Linfo 中尚未加入下面这些功能:
|
||||
|
||||
1. 支持更多 Unix 操作系统(比如 Hurd、IRIX、AIX 和 HP UX 等)
|
||||
|
||||
2. 支持不太出名的操作系统 Haiku/BeOS
|
||||
|
||||
3. 额外功能/扩展
|
||||
|
||||
4. 在 ncurses 模式中支持 [htop-like][1] 特性
|
||||
|
||||
如果想了解更多信息,请访问 Linfo 的 GitHub 仓库:[https://github.com/jrgp/linfo][6]
|
||||
|
||||
这就是本文的全部内容了。从现在起,你可以使用 Linfo 在 Web 浏览器中查看 Linux 系统的信息。尝试一下,并在评论中和我们分享你的想法。另外,你是否还知道与之类似的有用工具/库?如果有,也请告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识
|
||||
|
||||
---------------
|
||||
|
||||
via: https://www.tecmint.com/linfo-shows-linux-server-health-status-in-real-time/
|
||||
|
||||
作者:[ Aaron Kili][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/install-htop-linux-process-monitoring-for-rhel-centos-fedora/
|
||||
[2]:https://www.tecmint.com/corefreq-linux-cpu-monitoring-tool/
|
||||
[3]:https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/05/Linux-Server-Health-Information.png
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2017/05/Linux-Server-Information.png
|
||||
[6]:https://github.com/jrgp/linfo
|
||||
[7]:https://www.tecmint.com/author/aaronkili/
|
||||
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,202 @@
|
||||
怎样在命令行下杀死一个进程
|
||||
============================================================
|
||||
|
||||

|
||||
>Linux 的命令行提供了停止失控进程所需的全部工具。Jack Wallen 为你详解。[Creative Commons Zero][4]
|
||||
|
||||
想像一下:你打开了一个程序(可能是从桌面菜单或命令行启动的),开始使用它,没想到程序突然锁死、停止响应,或者意外崩溃。你尝试再次运行这个程序,却发现它原来的进程并没有完全退出。
|
||||
|
||||
你该怎么办?答案是结束这个进程。但该如何做?信不信由你,最好的解决办法大多在命令行里。值得庆幸的是,Linux 提供了结束出错进程所需的全部工具。不过,在执行结束进程的命令之前,你首先得知道进程是什么,以及如何定位它。一旦掌握了这些工具,其实这件事十分简单……
|
||||
|
||||
让我来向你介绍这些工具。
|
||||
我要概述的步骤适用于每个 Linux 发行版,不论是桌面版还是服务器版。
|
||||
我将严格使用命令行,请打开你的终端,开始输入命令吧。
|
||||
|
||||
### 定位进程
|
||||
|
||||
结束一个无响应进程的第一步是定位它。我用来定位进程的命令有两个:top 和 ps。top 是每个系统管理员都熟悉的工具,通过 top 命令,你能够看到当前正在运行的所有进程。在命令行里输入 top,就能看到正在运行的程序进程(图 1)。
|
||||
|
||||
|
||||
|
||||
|
||||

|
||||
|
||||
图 1:top 命令给出许多信息。[经许可使用][1]
|
||||
|
||||
从显示的列表中,你能够看到一些相当重要的信息。举个例子,假设 Chrome 浏览器反应迟钝,依据 top 命令的显示,我们能辨别出有四个 Chrome 浏览器的进程在运行,PID 分别是 3827、3919、10764 和 11679。这个信息很重要,有了它就可以用一种特殊的方式来结束进程。
|
||||
|
||||
尽管 top 命令很方便,但它并不是获取所需信息的最有效方式。你已经知道要杀死的是 Chrome 进程,也不需要 top 命令那样的实时刷新信息。鉴于此,你可以使用 ps 命令,并用 grep 命令过滤输出。ps 命令能够显示当前进程的快照,再用 grep 命令筛选出匹配的行。之所以要用 grep 过滤 ps 的输出,理由很简单:如果只输入 ps,你会得到当前所有进程的快照列表,而我们只需要 Chrome 浏览器相关的进程。所以这个命令是这个样子:
|
||||
|
||||
```
|
||||
ps aux | grep chrome
|
||||
```
|
||||
|
||||
其中 _aux_ 选项的含义如下:
|
||||
* a = 显示所有用户的进程
|
||||
* u = 显示进程的用户/拥有者
|
||||
* x = 同时显示未依附于终端的进程
当你要查找图形化程序的信息时,x 选项尤为重要。
|
||||
|
||||
当你输入以上命令后,你将会得到如图 2 所示的信息,用来结束进程绰绰有余,而且这种方式有时比 top 命令更有效。
|
||||
|
||||
|
||||

|
||||
|
||||
图 2:用 ps 命令定位所需的信息。[经许可使用][2]
|
||||
|
||||
### 结束进程
|
||||
|
||||
现在开始执行结束进程的任务。我们有两类信息可以用来杀死出错的进程:
|
||||
|
||||
* 进程名
|
||||
|
||||
* 进程 ID(PID)
|
||||
|
||||
使用哪一类信息,决定了要使用哪个终端命令。通常有两个命令用来结束进程:
|
||||
|
||||
* kill - 通过进程 ID 结束进程
|
||||
|
||||
* killall - 通过进程名结束进程
|
||||
|
||||
可以发送给这两个命令的信号有多种。发送哪个信号,取决于你想要得到什么结果。举个例子,你可以发送 HUP(挂起)信号,这实际上会让进程重启。当你需要立即重启一个进程(比如守护进程)时,这是一个明智的选择。输入 kill -l 可以得到所有可用信号的列表,你会发现信号的数量多得惊人(图 3)。
|
||||
|
||||
|
||||
|
||||

|
||||
|
||||
图 3:可用的结束进程信号。[经许可使用][3]
|
||||
|
||||
最常用的结束进程的信号如下:

| 信号名称 | 信号值 | 作用 |
| --- | --- | --- |
| SIGHUP | 1 | 挂起 |
| SIGINT | 2 | 来自键盘的中断 |
| SIGKILL | 9 | 杀死信号 |
| SIGTERM | 15 | 终止信号 |
| SIGSTOP | 17、19、23 | 停止进程 |
好在你可以用信号值来代替信号名字,所以没有必要记住各种各样的信号名称。
|
||||
现在,让我们用 kill 命令来杀死 Chrome 浏览器的进程。这个命令的结构是:
|
||||
|
||||
```
|
||||
kill SIGNAL PID
|
||||
```
|
||||
|
||||
其中 SIGNAL 是要发送的信号,PID 是要结束的进程 ID。我们已经从 ps 命令得知,想要结束的进程 ID 是 3827、3919、10764 和 11679。要发送结束信号,我们输入以下命令:
|
||||
|
||||
```
|
||||
kill -9 3827
|
||||
|
||||
kill -9 3919
|
||||
|
||||
kill -9 10764
|
||||
|
||||
kill -9 11679
|
||||
```
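顺便一提,信号既可以用数值也可以用名字来指定,下面两条命令是等价的(以 PID 3827 为例):

```
kill -9 3827
kill -SIGKILL 3827
```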
|
||||
|
||||
一旦我们输入了以上命令,Chrome浏览器的所有进程将会成功被杀死。
|
||||
|
||||
其实还有更简单的方法!如果我们已经知道想要杀死的进程的名字,就可以利用 killall 命令按名字发送同样的信号,像这样:
|
||||
|
||||
`killall -9 chrome`
|
||||
|
||||
附带说明的是,上面这个命令可能无法捕捉到所有正在运行的 Chrome 进程。如果运行完之后,用 ps aux | grep chrome 过滤一下,发现还有残留的 Chrome 进程,那么最好的办法还是回到 kill 命令,按进程 ID 发送信号 9 来结束它们。
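另外,`pgrep` 和 `pkill` 这对命令也可以按名字查找和结束进程:

```
$ pgrep -a chrome   # 列出名字匹配 chrome 的进程及其完整命令行
$ pkill -9 chrome   # 按名字向这些进程发送信号 9
```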
|
||||
|
||||
### 结束进程变得容易
|
||||
|
||||
正如你所看到的,杀死出错的进程并没有你想象中那么有挑战性。当我要结束一个顽固的进程时,我倾向于使用 killall 命令;而当我要精确地结束某一个进程时,kill 命令则是更好的选择。
|
||||
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/5/how-kill-process-command-line
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
译者:[hwlog](https://github.com/hwlog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[5]:https://www.linux.com/files/images/killajpg
|
||||
[6]:https://www.linux.com/files/images/killbjpg
|
||||
[7]:https://www.linux.com/files/images/killcjpg
|
||||
[8]:https://www.linux.com/files/images/stop-processesjpg
|