Merge pull request #1 from LCTT/master

update
This commit is contained in:
Bestony 2016-09-17 23:18:45 +08:00 committed by GitHub
commit 60ff52db07
47 changed files with 5656 additions and 1229 deletions

View File

@ -0,0 +1,110 @@
Ubuntu 的 Snap、Red Hat 的 Flatpak 这种通吃所有发行版的打包方式真的有用吗?
=================================================================================
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)
**对新一代的打包格式开始渗透到 Linux 生态系统中的深入观察**
最近我们听到越来越多的有关 Ubuntu 的 Snap 包和由 Red Hat 员工 Alexander Larsson 创造的 Flatpak(曾经叫做 xdg-app)的消息。
这两种下一代打包方式在本质上拥有相同的目标和特点:即不依赖于第三方系统功能库的独立软件包。
这种 Linux 新技术方向似乎自然会让人脑海中浮现这样的问题:独立包的优点/缺点是什么?这是否让我们拥有更好的 Linux 系统?其背后的动机是什么?
为了回答这些问题,让我们先深入了解一下 Snap 和 Flatpak。
### 动机
根据 [Flatpak][1] 和 [Snap][2] 的声明,背后的主要动机是使同一版本的应用程序能够运行在多个 Linux 发行版。
> “从一开始它的主要目标是允许相同的应用程序运行在各种 Linux 发行版和操作系统上。” —— Flatpak
> “……snap 是一种通用的 Linux 包格式,它能让简单的二进制包完美而安全地运行在任何 Linux 桌面、服务器、云和设备上。” —— Snap
说得更具体一点,站在 Snap 和 Flatpak(以下称之为 S&F)背后的人认为,Linux 平台存在碎片化的问题。
这个问题导致了开发者们需要做许多不必要的工作来使他们的软件能够运行在各种不同的发行版上,这阻碍了整个平台的前进。
所以,作为领先的 Linux 发行版(Ubuntu 和 Red Hat),他们希望消除这个障碍,推动整个平台向前发展。
但是,是否是更多的个人收益刺激了 S&F 的开发?
#### 个人收益?
虽然没有任何官方声明,但是试想一下,如果能够创造这种可能会被大多数发行版(即便不是全部)所采用的打包方式,那么这个项目的领导者将可能成为一个能够决定 Linux 大船航向的重要人物。
### 优势
这种独立包的好处多多,并且取决于不同的因素。
这些因素基本上可以归为两类:
#### 用户角度
**+** 从 Linux 用户的观点来看,Snap 和 Flatpak 带来了将任何软件包(软件或应用)安装在用户使用的任何发行版上的可能性。
例如你在使用一个不是很流行的发行版,由于开发力量的缺乏,它的软件仓库只有很稀少的包。现在,通过 S&F 你就可以显著地增加可用包的数量,这是一件多么美好的事情。
**+** 同样,对于使用流行的发行版的用户,即使该发行版的软件仓库里已经有很多的包,他也可以在不改变现有功能库的同时安装一个新的包。
比方说,一个 Debian 的用户想要安装一个“测试分支”的包,但是他又不想将整个系统变成测试版(来让该包运行在更新的功能库上)。现在,他可以简单地想安装哪个版本就安装哪个版本,而不需要考虑库的问题。
后一点对于那些直接从源代码编译软件包的用户来说,其实早已可以做到。然而,除非你使用的是类似 Gentoo 这样基于源代码的发行版,否则大多数用户会把从头编译视为一件麻烦到令人生厌的事情。
**+** 高级用户,或者称之为“拥有安全意识的用户”,可能会更容易接受这种类型的包,只要它们来自可靠来源:这种包倾向于提供另一层隔离,因为它们通常是与系统包相隔离的。
* 不论是 Snap 还是 Flatpak 都在不断努力增强它们的安全性,通常它们都使用“沙盒化”来隔离,以防止它们可能携带的病毒感染整个系统,就像微软 Windows 系统中的 .exe 程序一样。(关于微软和 S&F 后面还会谈到)
#### 开发者角度
与普通用户相比,对于开发者来说,开发 S&F 包的优点可能更加清楚。这一点已经在上一节有所提示。
尽管如此,这些优点有:
**+** S&F 通过统一开发的过程,将多发行版的开发变得简单了起来。对于需要将他的应用运行在多个发行版的开发者来说,这大大的减少了他们的工作量。
**++** 因此,开发者能够更容易的使他的应用运行在更多的发行版上。
**+** S&F 允许开发者私自发布他的包,不需要依靠发行版维护者在每一个/每一次发行版中发布他的包。
**++** 通过上述方法,开发者可以不依赖发行版而直接获取到用户安装和卸载其软件的统计数据。
**++** 同样是通过上述方法,开发者可以更好的直接与用户互动,而不需要通过中间媒介,比如发行版这种中间媒介。
### 缺点
**-** 膨胀。就是这么简单。Flatpak 和 Snap 并不能凭空让依赖关系消失。相反,它们把依赖关系预先构建在包里,以此代替使用系统中已有的依赖。
就像谚语说的:“山不来就我,我就去就山”。
**-** 之前提到,安全意识强的用户会喜欢 S&F 提供的额外一层隔离(只要该应用来自一个受信任的来源)。但是从另外一个角度看,对这方面了解较少的用户,可能会从一个不靠谱的地方弄来一个包含恶意软件的包,从而带来危害。
上面这点即使对于今天流行的方法(像 PPA、overlay 等)来说也同样成立,因为它们也可能来自不受信任的来源。
但是,S&F 包会加剧这个风险,因为恶意软件开发者只需要制作一个版本就可以感染各种发行版。相反,如果没有 S&F,恶意软件的开发者就需要创建不同的版本以适应不同的发行版。
### 原来微软一直是正确的吗?
考虑到上面提到的,很显然,在大多数情况下,使用 S&F 包的优点超过缺点。
至少对于二进制发行版的用户,或者说不以轻量化为重点的发行版的用户来说是这样的。
这促使我问出这个问题:难道微软一直是正确的吗?如果是,那么当 S&F 成为 Linux 的标准后,你还会认为 Linux 是一种类 Unix 的变体吗?
很显然,时间会是这个问题的最好答案。
不过,我认为,即使微软并非完全正确,它的观点也确实有可取之处;而且在我看来,Linux 上能够开箱即用地提供所有这些方式,确实是一个加分项。
--------------------------------------------------------------------------------
via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/
作者:[Editorials][a]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.iwillfolo.com/category/editorials/
[1]: http://flatpak.org/press/2016-06-21-flatpak-released.html
[2]: https://insights.ubuntu.com/2016/06/14/universal-snap-packages-launch-on-multiple-linux-distros

View File

@ -7,7 +7,7 @@ LXDE、Xfce 及 MATE 桌面环境下的又一系统监视器应用Multiload-n
Multiload-ng 的特点有:
- 支持图形化:CPU、内存、网络、交换空间、平均负载、磁盘以及温度;
- 支持以下资源的图形块:CPU、内存、网络、交换空间、平均负载、磁盘以及温度;
- 高度可定制;
- 支持配色方案;
- 自动适应容器(面板或窗口)的改变;
@ -15,7 +15,7 @@ Multiload-ng 的特点有:
- 提供基本或详细的提示信息;
- 可自定义双击触发的动作。
相比于原来的 Multiload 应用,Multiload-ng 含有一个额外的图形块(温度),更多独立的图形自定义选项,例如独立的边框颜色,支持配色方案,可根据自定义的动作对鼠标的点击做出反应,图形块的方向可以被设定为与面板的方向无关。
相比于原来的 Multiload 应用,Multiload-ng 含有一个额外的图形块(温度),以及更多独立的图形自定义选项,例如独立的边框颜色,支持配色方案,可根据自定义的动作对鼠标的点击做出反应,图形块的方向可以被设定为与面板的方向无关。
它也可以运行在一个独立的窗口中,而不需要面板:
@ -29,15 +29,15 @@ Multiload-ng 的特点有:
![](https://1.bp.blogspot.com/-WAD5MdDObD8/V7GixgVU0DI/AAAAAAAAYS8/uMhHJri1GJccRWvmf_tZkYeenVdxiENQwCLcB/s400/multiload-ng-xfce-vertical.png)
这个应用的偏好设置窗口虽然不是非常好看,但有很多方式去改进它:
这个应用的偏好设置窗口虽然不是非常好看,但是有计划去改进它:
![](https://2.bp.blogspot.com/-P-ophDpc-gI/V7Gi_54b7JI/AAAAAAAAYTA/AHQck_JF_RcwZ1KbgHbaO2JRt24ZZdO3gCLcB/s320/multiload-ng-preferences.png)
Multiload-ng 当前使用的是 GTK2,所以它不能在基于 GTK3 构建的 Xfce 或 MATE 桌面环境(面板)下工作。
对于 Ubuntu 系统而言,只有 Ubuntu MATE 16.10 使用 GTK3。但是鉴于 MATE 的系统监视器应用也是 Multiload GNOME 的一个分支,所以它们共享大多数的特点(除了 Multiload-ng 提供的额外自定义选项和温度图形块)。
对于 Ubuntu 系统而言,只有 Ubuntu MATE 16.10 使用 GTK3。但是鉴于 MATE 的系统监视器应用也是 Multiload GNOME 的一个分支,所以它们大多数的功能相同(除了 Multiload-ng 提供的额外自定义选项和温度图形块)。
该应用的 [愿望清单][2] 中提及到了计划支持 GTK3 的集成以及各种各样的改进,例如温度块资料的更多来源,能够显示十进制(KB, MB, GB...)或二进制(KiB, MiB, GiB...)单位等等。
该应用的[愿望清单][2] 中提及到了计划支持 GTK3 的集成以及各种各样的改进,例如温度块资料的更多来源,能够显示十进制KB、MB、GB……或二进制KiB、MiB、GiB……单位等等。
### 安装 Multiload-ng
@ -76,7 +76,7 @@ sudo apt install mate-multiload-ng-applet
sudo apt install multiload-ng-standalone
```
一旦安装完毕,便可以像其他应用那样添加到桌面面板中了。需要注意的是在 LXDE 中Multiload-ng 不能马上出现在面板清单中,除非面板被重新启动。你可以通过重启会话(登出后再登录)或者使用下面的命令来重启面板:
一旦安装完毕,便可以像其他应用那样添加到桌面面板中了。需要注意的是在 LXDE 中Multiload-ng 不能马上出现在面板清单中,除非重新启动面板。你可以通过重启会话(登出后再登录)或者使用下面的命令来重启面板:
```
lxpanelctl restart
@ -85,13 +85,14 @@ lxpanelctl restart
独立的 Multiload-ng 应用可以像其他正常应用那样从菜单中启动。
如果要下载源码或报告 bug 等,请看 Multiload-ng 的 [GitHub page][3]。
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/08/alternative-system-monitor-applet-for.html
作者:[Andrew][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,111 @@
使用 HTTP/2 服务端推送技术加速 Node.js 应用
=========================================================
四月份,我们宣布了对 [HTTP/2 服务端推送技术][3]的支持,我们是通过 HTTP 的 [Link 头部](https://www.w3.org/wiki/LinkHeader)来实现这项支持的。我的同事 John 曾经通过一个例子演示了[在 PHP 里支持服务端推送功能][4]是多么的简单。
![](https://blog.cloudflare.com/content/images/2016/08/489477622_594bf9e3d9_z.jpg)
我们想让现今使用 Node.js 构建的网站能够更加轻松的获得性能提升。为此,我们开发了 [netjet][1] 中间件,它可以解析应用生成的 HTML 并自动添加 Link 头部。当在一个示例的 Express 应用中使用这个中间件时,我们可以看到应用程序的输出多了如下 HTTP 头:
![](https://blog.cloudflare.com/content/images/2016/08/2016-08-11_13-32-45.png)
[本博客][5]是使用 [Ghost](https://ghost.org/)(LCTT 译注:一个博客发布平台)进行发布的,因此如果你的浏览器支持 HTTP/2,你已经在不知不觉中享受了服务端推送技术带来的好处了。接下来我们将进行更详细的说明。
netjet 使用了带有定制插件的 [PostHTML](https://github.com/posthtml/posthtml) 来解析 HTML。目前netjet 用它来查找图片、脚本和外部 CSS 样式表。你也可以用其它的技术来实现这个。
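为了说明这个思路,下面给出一个大大简化的示意(并非 netjet 的实际实现,函数名和正则表达式都是为演示而假设的):从 HTML 中找出脚本和样式表,并拼出对应的 Link 预加载头部。
```javascript
// 仅作示意:netjet 实际使用 PostHTML 插件来解析 HTML,这里用正则简化演示
function buildLinkHeader(html) {
  var links = [];
  var scriptRe = /<script[^>]+src="([^"]+)"/g;
  var styleRe = /<link[^>]+href="([^"]+\.css)"/g;
  var match;

  while ((match = scriptRe.exec(html)) !== null) {
    links.push('<' + match[1] + '>; rel=preload; as=script');
  }
  while ((match = styleRe.exec(html)) !== null) {
    links.push('<' + match[1] + '>; rel=preload; as=style');
  }
  return links.join(', ');
}

// 用法示意:把拼出来的值写进响应的 Link 头部,
// 支持 HTTP/2 服务端推送的前端(比如 CloudFlare)就可以据此推送资源
// res.setHeader('Link', buildLinkHeader(html));
```
当然,基于 PostHTML 的完整解析要比这种正则方式可靠得多,上面的片段只是为了展示“解析 HTML、再写出 Link 头部”这个流程。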
在响应过程中增加 HTML 解析器有个明显的缺点:这将增加页面加载的延时(即加载第一个字节所花的时间)。大多数情况下,所新增的延时会被应用里的其他耗时(比如数据库访问)掩盖掉。为了解决这个问题,netjet 包含了一个可调节的 LRU 缓存,该缓存以 HTTP 的 ETag 头部作为索引,这使得 netjet 可以非常快地为已经解析过的页面插入 Link 头部。
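作为参考,下面是一个以 ETag 为键的极简 LRU 缓存示意(用 JavaScript 的 Map 实现,并非 netjet 的实际代码),用来说明“缓存命中时直接复用之前解析出的 Link 头部、跳过 HTML 解析”这个思路:
```javascript
// 极简 LRU 缓存示意:键是响应的 ETag,值是已经解析好的 Link 头部字符串
function createLruCache(max) {
  var map = new Map();
  return {
    get: function (etag) {
      if (!map.has(etag)) return undefined;
      var value = map.get(etag);
      map.delete(etag);   // 删除后重新插入,使其成为“最近使用”的条目
      map.set(etag, value);
      return value;
    },
    set: function (etag, value) {
      if (map.has(etag)) map.delete(etag);
      map.set(etag, value);
      if (map.size > max) {
        // Map 保持插入顺序,第一个键就是最久未使用的条目,直接淘汰
        map.delete(map.keys().next().value);
      }
    }
  };
}
var linkHeaderCache = createLruCache(100);
```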
不过,如果我们现在从头设计一款全新的应用,我们就应该考虑把页面内容和页面中的元数据分开存放,从而整体地减少 HTML 解析和其它可能增加的延时了。
任意的 Node.js HTML 框架,只要它支持类似 Express 这样的中间件netjet 都是能够兼容的。只要把 netjet 像下面这样加到中间件加载链里就可以了。
```javascript
var express = require('express');
var netjet = require('netjet');
var root = '/path/to/static/folder';
express()
  .use(netjet({
    cache: {
      max: 100
    }
  }))
  .use(express.static(root))
  .listen(1337);
```
稍微加点代码,netjet 也可以摆脱 HTML 框架,独立工作:
```javascript
var http = require('http');
var netjet = require('netjet');
var port = 1337;
var hostname = 'localhost';
var preload = netjet({
  cache: {
    max: 100
  }
});
var server = http.createServer(function (req, res) {
  preload(req, res, function () {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<!doctype html><h1>Hello World</h1>');
  });
});

server.listen(port, hostname, function () {
  console.log('Server running at http://' + hostname + ':' + port + '/');
});
```
[netjet 文档里][1]有更多选项的信息。
### 查看推送了什么数据
![](https://blog.cloudflare.com/content/images/2016/08/2016-08-02_10-49-33.png)
访问[本文][5]时,通过 Chrome 的开发者工具,我们可以轻松地验证网站是否正在使用服务器推送技术(LCTT 译注:Chrome 版本至少为 53)。在“Network”选项卡中,我们可以看到有些资源的“Initiator”这一列中包含了 `Push` 字样,这些资源就是服务器端推送的。
不过,目前 Firefox 的开发者工具还不能直观地展示被推送的资源,但我们可以通过页面响应头部里的 `cf-h2-pushed` 头部看到一个列表,这个列表包含了本页面主动推送给浏览器的资源。
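如果想在命令行里确认这一点,也可以用类似下面的 Node.js 小脚本读取响应头。这只是一个简单的示意,其中的 URL 仅作示例:
```javascript
var https = require('https');
// 请求页面并打印 CloudFlare 返回的 cf-h2-pushed 头部(如果存在的话)
https.get('https://blog.cloudflare.com/', function (res) {
  console.log('cf-h2-pushed:', res.headers['cf-h2-pushed'] || '(无此头部)');
  res.resume(); // 丢弃响应体,我们只关心头部
});
```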
希望大家能够踊跃为 netjet 添砖加瓦,我也乐于看到有人正在使用 netjet。
### Ghost 和服务端推送技术
Ghost 真是包罗万象。在 Ghost 团队的帮助下,我把 netjet 也集成到里面了,而且作为测试版内容可以在 Ghost 的 0.8.0 版本中用上它。
如果你正在使用 Ghost,你可以通过修改 config.js,并在 `production` 配置块中增加 `preloadHeaders` 选项来启用服务端推送。
```javascript
production: {
  url: 'https://my-ghost-blog.com',
  preloadHeaders: 100,
  // ...
}
```
Ghost 已经为其用户整理了[一篇支持文档][2]。
### 总结
使用 netjet,你的 Node.js 应用也可以使用浏览器预加载技术,并且 [CloudFlare][5] 已经在用它提供 HTTP/2 服务端推送了。
--------------------------------------------------------------------------------
via: https://blog.cloudflare.com/accelerating-node-js-applications-with-http-2-server-push/
作者:[Terin Stock][a]
译者:[echoma](https://github.com/echoma)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.cloudflare.com/author/terin-stock/
[1]: https://www.npmjs.com/package/netjet
[2]: http://support.ghost.org/preload-headers/
[3]: https://www.cloudflare.com/http2/server-push/
[4]: https://blog.cloudflare.com/using-http-2-server-push-with-php/
[5]: https://blog.cloudflare.com/accelerating-node-js-applications-with-http-2-server-push/

View File

@ -0,0 +1,48 @@
百度运用 FPGA 方法大规模加速 SQL 查询
===================================================================
尽管我们今年对百度工作重点的关注集中在这家中国搜索巨头在深度学习方面的举措上,但许多其他关键的、虽然不那么前沿的应用同样体现了大数据带来的挑战。
正如百度的欧阳剑在本周 Hot Chips 大会上谈论的,百度坐拥超过 1 EB 的数据,每天处理大约 100 PB 的数据,每天更新 100 亿的网页,每 24 小时处理超过 1 PB 的日志更新,这些数字正如人们所想象的,和 Google 不分上下。百度也采用了类似 Google 的方法去大规模地解决潜在的瓶颈。
正如我们刚刚谈到的,Google 在寻找一切可能的方法去打败摩尔定律,百度也在进行同样的探索。机器学习方面的工作固然令人着迷,但对业务核心关键任务的加速同样重要,因为不得不如此。欧阳提到,公司基于自身数据提供高端服务的需求与 CPU 所能承载的能力之间的差距将会越来越大。
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA1.png)
对于百度的百亿亿级问题,所有这些数据的接收端是一系列用于数据分析的框架和平台,从该公司的海量知识图谱、多媒体工具、自然语言处理框架、推荐引擎到点击流分析都是这样。简而言之,大数据的首要问题就是这样的:各种各样的应用,以及与之相伴的具有压倒性规模的数据。
当谈到加速百度的大数据分析所面临的几个挑战时,欧阳谈到,抽象化运算核心以寻找一个普适的方法是困难的。“大数据应用的多样性和多变的计算类型使得这成为一个挑战。把所有这些整合成为一个分布式系统是困难的,因为平台和编程模型多种多样(MapReduce、Spark、streaming、user defined 等等)。将来还会有更多的数据类型和存储格式。”
尽管存在这些障碍,欧阳讲到他们团队找到了(它们之间的)共同线索。如他所指出的那样,那些把他们的许多数据密集型的任务联系在一起的就是传统的 SQL。“我们的数据分析任务大约有 40% 是用 SQL 写的,而其他的用 SQL 重写也是可以做到的。”更进一步,他讲到他们可以享受到现有 SQL 系统的好处,并可以和已有的框架相匹配,比如 Hive、Spark SQL 和 Impala。下一步要做的事情就是 SQL 查询加速,百度发现 FPGA 是最好的硬件。
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA2.png)
这些主板被称为处理单元(即下图中的 PE),当执行 SQL 时会自动地处理关键的 SQL 功能。这里所说的都来自演讲内容,我们无法为其准确性背书。确切地说,这里提到的 FPGA 有点神秘,或许是故意如此。如果百度在基准测试中得到了如下图中的提升,那这可是一个有竞争力的消息。后面我们还会继续介绍这里所描述的东西。简单来说,FPGA 运行在数据库中,当其收到 SQL 查询的时候,该团队设计的软件就会与之紧密结合起来。
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA3.png)
欧阳提到了一件事,他们的加速器受限于 FPGA 的带宽,不然性能表现本可以更高。在下面的评测中,百度安装了 2 块 12 核心、主频 2.0 GHz 的 Intel E26230 CPU,运行在 128G 内存上。SDA 具有 5 个处理单元(上图中的 300MHz FPGA 主板),每个分别处理不同的核心功能:筛选(filter)、排序(sort)、聚合(aggregate)、联合(join)和分组(group by)。
为了实现 SQL 查询加速,百度针对 TPC-DS 基准测试进行了研究,并且创建了称做处理单元(PE)的特殊引擎,用于在基准测试中加速 5 个关键功能,这包括筛选(filter)、排序(sort)、聚合(aggregate)、联合(join)和分组(group by)(我们并没有把这些单词都像 SQL 那样大写)。SDA 设备使用卸载模型,由带有多个不同种类处理单元的加速卡在 FPGA 中组成逻辑,SQL 功能的类型和每张卡的数量由特定的工作负载决定。由于这些查询在百度的系统中执行,用来查询的数据被以列格式推送到加速卡中(这会使得查询非常快速),而且通过一个统一的 SDA API 和驱动程序,SQL 查询工作被分发到正确的处理单元,SQL 操作得到加速。
SDA 架构采用一种数据流模型,加速单元不支持的操作被退回到数据库系统,然后在那里本地运行。与其他任何因素相比,百度开发的 SQL 加速卡的性能更多地受限于 FPGA 卡的内存带宽。顺便提一下,加速卡可以跨整个集群的机器工作,但是数据和 SQL 操作究竟如何分发到多台机器上,百度并没有披露确切的原理。
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA4.png)
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA5.png)
我们受限于百度所愿意披露的细节,但是这些基准测试结果十分令人鼓舞,尤其是在 Terasort 方面。我们将在 Hot Chips 大会之后继续跟进,看看是否能得到更多关于这一切如何衔接起来、以及如何解决内存带宽瓶颈的细节。
--------------------------------------------------------------------------------
via: http://www.nextplatform.com/2016/08/24/baidu-takes-fpga-approach-accelerating-big-sql/
作者:[Nicole Hemsoth][a]
译者:[LinuxBars](https://github.com/LinuxBars)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.nextplatform.com/author/nicole/
[1]: http://www.nextplatform.com/?s=baidu+deep+learning
[2]: http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-12-day2-epub/HC26.12-5-FPGAs-epub/HC26.12.545-Soft-Def-Acc-Ouyang-baidu-v3--baidu-v4.pdf

View File

@ -0,0 +1,77 @@
QOwnNotes:一款记录笔记和待办事项的应用,可以集成 ownCloud 云服务
===============
[QOwnNotes][1] 是一款自由而开源的笔记记录和待办事项的应用,可以运行在 Linux、Windows 和 Mac 上。
这款程序将你的笔记保存为纯文本文件,它支持 Markdown,并与 ownCloud 云服务紧密集成。
![](https://2.bp.blogspot.com/-a2vsrOG0zFk/V81gyHWlaaI/AAAAAAAAYZs/uzY16JtNcT8bnje1rTKJx1135WueY6V9gCLcB/s400/qownnotes.png)
QOwnNotes 的亮点就是它集成了 ownCloud 云服务(当然是可选的)。在 ownCloud 上配合这款应用,你就可以在网络上记录和搜索你的笔记,也可以在移动设备上使用(比如像 CloudNotes 这样的软件[2])。
不久以后,用你的 ownCloud 账户连接上 QOwnNotes,你就可以从 ownCloud 服务器上分享笔记,以及查看或恢复之前版本的笔记(或者丢到垃圾箱的笔记)。
同样,QOwnNotes 也可以与 ownCloud 任务或者 Tasks Plus 应用程序相集成。
如果你不熟悉 [ownCloud][3] 的话,这是一款替代 Dropbox、Google Drive 和其他类似商业性的网络服务的自由软件,它可以安装在你自己的服务器上。它有一个网络界面,提供了文件管理、日历、照片、音乐、文档浏览等等功能。开发者同样提供桌面同步客户端以及移动 APP。
因为笔记被保存为纯文本,它们可以在不同的设备之间通过云存储服务进行同步,比如 Dropbox、Google Drive 等等,但是这些服务并不能完全替代 ownCloud 集成的作用。
我提到的上述特点,比如恢复之前的笔记,只有在使用 ownCloud 时才可用(尽管 Dropbox 和其他类似服务也提供恢复以前版本文件的功能,但是你不能在 QOwnNotes 中直接访问到)。
说到 QOwnNotes 的特性:它支持 Markdown 语言(内置了 Markdown 预览模式),可以给笔记加标记,对标记和笔记进行搜索,在笔记中加入超链接,也可以插入图片:
![](https://4.bp.blogspot.com/-SuBhC43gzkY/V81oV7-zLBI/AAAAAAAAYZ8/l6nLQQSUv34Y7op_Xrma8XYm6EdWrhbIACLcB/s400/qownnotes_2.png)
标记嵌套和笔记文件夹同样支持。
待办事项管理功能比较基本,还可以做一些改进。它现在是在一个单独的窗口里打开的,用的也不是和笔记一样的编辑器,也不允许添加图片或者使用 Markdown 语言。
![](https://3.bp.blogspot.com/-AUeyZS3s_ck/V81opialKtI/AAAAAAAAYaA/xukIiZZUdNYBVZ92xgKEsEFew7q961CDwCLcB/s400/qownnotes-tasks.png)
它可以让你搜索待办事项,设置事项优先级,添加提醒和显示已完成的事项。此外,待办事项可以加入笔记中。
这款软件的界面是可定制的,允许你放大或缩小字体,切换窗格等等,也支持无干扰模式。
![](https://4.bp.blogspot.com/-Pnzw1wZde50/V81rrE6mTWI/AAAAAAAAYaM/0UZnH9ktbAgClkuAk1g6fgXK87kB_Bh0wCLcB/s400/qownnotes-distraction-free.png)
从程序的设置里,你可以开启黑夜模式(这里有个 bug:在 Ubuntu 16.04 里有些工具条图标消失了),改变状态条大小、字体和颜色方案(白天和黑夜):
![](https://1.bp.blogspot.com/-K1MGlXA8sxs/V81rv3fwL6I/AAAAAAAAYaQ/YDhhhnbJ9gY38B6Vz1Na_pHLCjLHhPWiwCLcB/s400/qownnotes-settings.png)
其他的特点有:支持加密(笔记只能在 QOwnNotes 中加密)、自定义键盘快捷键、将笔记输出为 PDF 或者 Markdown、自定义笔记自动保存间隔等等。
访问 [QOwnNotes][11] 主页查看完整的特性。
### 下载 QOwnNotes
如何安装,请查看[安装页][4](支持 Debian、Ubuntu、Linux Mint、openSUSE、Fedora、Arch Linux、KaOS、Gentoo、Slackware、CentOS 以及 Mac OSX 和 Windows)。
QOwnNotes 的 [snap][5] 包也是可用的,在 Ubuntu 16.04 或更新版本中,你可以通过 Ubuntu 的软件管理器直接安装它。
为了集成 QOwnNotes 到 ownCloud,你需要有 [ownCloud 服务器][6],同样也需要 [Notes][7]、[QOwnNotesAPI][8]、[Tasks][9]、[Tasks Plus][10] 等 ownCloud 应用。这些可以从 ownCloud 的 Web 界面上安装,不需要手动下载。
请注意 QOwnNotesAPI 和 Notes 这两个 ownCloud 应用是实验性的,你需要“启用实验性应用”才能找到并安装它们,这可以从 ownCloud 的 Web 界面上进行设置:在 Apps 菜单下,点击左下角的设置按钮。
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/09/qownnotes-is-note-taking-and-todo-list.html
作者:[Andrew][a]
译者:[jiajia9linuxer](https://github.com/jiajia9linuxer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.webupd8.org/p/about.html
[1]: http://www.qownnotes.org/
[2]: http://peterandlinda.com/cloudnotes/
[3]: https://owncloud.org/
[4]: http://www.qownnotes.org/installation
[5]: https://uappexplorer.com/app/qownnotes.pbek
[6]: https://download.owncloud.org/download/repositories/stable/owncloud/
[7]: https://github.com/owncloud/notes
[8]: https://github.com/pbek/qownnotesapi
[9]: https://apps.owncloud.com/content/show.php/Tasks?content=164356
[10]: https://apps.owncloud.com/content/show.php/Tasks+Plus?content=170561
[11]: http://www.qownnotes.org/

View File

@ -1,82 +0,0 @@
Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech
================================================================================
![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1]
Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet.
![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)
At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college.
In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology.
![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
### What made you interested in studying computer science and engineering? ###
My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year.
The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. Teaching XV—more on that later—my junior and senior year, also, made engineering and computer science really fun for me.
### What do you plan to study? And do you already know what you want to do after college? ###
I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company.
### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ###
My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years.
### What did your schools do right technology-wise? And how could they have been better? ###
My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class.
However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on.
### Growing up, how did you use technology at home? ###
Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school.
### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ###
My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code.
Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of an intersectional feminism and has done so much to educate me. I currently run the Portland branch.
Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does.
### What advice would you give parents who want to teach their children how to program? ###
Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question.
I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free reign to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters are interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests.
Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit.
### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ###
I'm actually not active in particular open source communities. I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7].
I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards.
I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better.
--------------------------------------------------------------------------------
via: http://opensource.com/life/15/8/patricia-torvalds-interview
作者:[Rikki Endsley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensource.com/users/rikki-endsley
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://puppetlabs.com/
[3]:https://www.aspirations.org/
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
[7]:https://www.facebook.com/groups/LadiesStormHackathons/

View File

@ -1,112 +0,0 @@
Translating by Chao-zhi
Ubuntu's Snap, Red Hat's Flatpak And Is One Fits All Linux Packages Useful?
=================================================================================
![](http://www.iwillfolo.com/wordpress/wp-content/uploads/2016/06/Flatpak-and-Snap-Packages.jpg)
An in-depth look into the new generation of packages starting to permeate the Linux ecosystem.
Lately we've been hearing more and more about Ubuntu's Snap packages and Flatpak (formerly referred to as xdg-app) created by Red Hat's employee Alexander Larsson.
These 2 types of next generation packages are in essence having the same goal and characteristics which are: being standalone packages that don't rely on 3rd-party system libraries in order to function.
This new technology direction which Linux seems to be headed is automatically giving rise to questions such as, what are the advantages / disadvantages of standalone packages? does this lead us to a better Linux overall? what are the motives behind it?
To answer these questions and more, let us explore the things we know about Snap and Flatpak so far.
### The Motive
According to both [Flatpak][1] and [Snap][2] statements, the main motive behind them is to be able to bring one and the same version of application to run across multiple Linux distributions.
>“From the very start its primary goal has been to allow the same application to run across a myriad of Linux distributions and operating systems.” Flatpak
>“… snap universal Linux package format, enabling a single binary package to work perfectly and securely on any Linux desktop, server, cloud or device.” Snap
To be more specific, the guys behind Snap and Flatpak (S&F) believe that there's a barrier of fragmentation on the Linux platform.
A barrier which holds back the platform advancement by burdening developers with more, perhaps unnecessary, work to get their software run on the many distributions out there.
Therefore, as leading Linux distributions (Ubuntu & Red Hat), they wish to eliminate the barrier and strengthen the platform in general.
But what are the more personal gains which motivate the development of S&F?
#### Personal Gains?
Although not officially stated anywhere, it may be assumed that by leading the efforts of creating a unified package that could potentially be adopted by the vast majority of Linux distros (if not all of them), the captains of these projects could assume a key position in determining where the Linux ship sails.
### The Advantages
The benefits of standalone packages are diverse and can depend on different factors.
Basically however, these factors can be categorized under 2 distinct criteria:
#### User Perspective
+ From a Linux user point of view, Snap and Flatpak both bring the possibility of installing any package (software / app) on any distribution the user is using.
That is, for instance, if you're using a not so popular distribution which has only a scarce supply of packages available in their repo, due to workforce limitations probably, you'll now be able to easily and significantly increase the amount of packages available to you which is a great thing.
+ Also, users of popular distributions that do have many packages available in their repos, will enjoy the ability of installing packages that might not have behaved with their current set of installed libraries.
For example, a Debian user who wants to install a package from testing branch will not have to convert his entire system into testing (in order for the package to run against newer libraries), rather, that user will simply be able to install only the package he wants from whichever branch he likes and on whatever branch he's on.
The latter point, was already basically possible for users who were compiling their packages straight from source, however, unless using a source based distribution such as Gentoo, most users will see this as just an unworthily hassle.
+ The advanced user, or perhaps better put, the security aware user might feel more comfortable with this type of packages as long as they come from a reliable source as they tend to provide another layer of isolation since they are generally isolated from system packages.
* Both S&F are being developed with enhanced security in mind, which generally makes use of “sandboxing” i.e isolation in order to prevent cases where they carry a virus which can infect the entire system, similar to the way .exe files on MS Windows may. (More on MS and S&F later)
#### Developer Perspective
For developers, the advantages of developing S&F packages will probably be a lot clearer than they are to the average user, some of these were already hinted in a previous section of this post.
Nonetheless, here they are:
+ S&F will make it easier on devs who want to develop for more than one Linux distribution by unifying the process of development, therefore minimizing the amount of work a developer needs to do in order to get his app running on multiple distributions.
++ Developers could therefore gain easier access to a wider range of distributions.
+ S&F allow devs to privately distribute their packages without being dependent on distribution maintainers to stabilize their package for each and every distro.
++ Through the above, devs may gain access to direct statistics of user adoption / engagement for their software.
++ Also through the above, devs could get more directly involved with users, rather than having to do so through a middleman, in this case, the distribution.
### The Downsides
Bloat. Simple as that. Flatpak and Snap aren't just magic making dependencies evaporate into thin air. Rather, instead of relying on the target system to provide the required dependencies, S&F comes with the dependencies prebuilt into them.
As the saying goes “if the mountain won't come to Muhammad, Muhammad must go to the mountain…”
Just as the security-aware user might enjoy S&F packages' extra layer of isolation, as long they come from a trusted source. The less knowledgeable user on the other hand, might be prone to the other side of the coin hazard which is using a package from an unknown source which may contain malicious software.
The above point can be said to be valid even with today's popular methods, as PPAs, overlays, etc might also be maintained by untrusted sources.
However, with S&F packages the risk increases since malicious software developers need to create only one version of their program in order to infect a large number of distributions, whereas, without it they'd have needed to create multiple versions in order to adjust their malware to other distributions.
### Was Microsoft Right All Along?
With all that's mentioned above in mind, it's pretty clear that for the most part, the advantages of using S&F packages outweigh the drawbacks.
At the least for users of binary-based distributions, or, non lightweight focused distros.
Which eventually led me to asking the above question: could it be that Microsoft was right all along? If so, and S&F becomes the Linux standard, would you still consider Linux a Unix-like variant?
Well apparently, the best one to answer those questions is probably time.
Nevertheless, I'd argue that even if not entirely right, MS certainly has a good point to their credit, and having all these methods available here on Linux out of the box is certainly a plus in my book.
--------------------------------------------------------------------------------
via: http://www.iwillfolo.com/ubuntus-snap-red-hats-flatpack-and-is-one-fits-all-linux-packages-useful/
作者:[Editorials][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.iwillfolo.com/category/editorials/

View File

@ -1,101 +0,0 @@
Tips for managing your project's issue tracker
==============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_opennature_3.png?itok=30fRGfpv)
Issue-tracking systems are important for many open source projects, and there are many open source tools that provide this functionality but many projects opt to use GitHub's built-in issue tracker.
Its simple structure makes it easy for others to weigh in, but issues are really only as good as you make them.
Without a process, your repository can become unwieldy, overflowing with duplicate issues, vague feature requests, or confusing bug reports. Project maintainers can become burdened by the organizational load, and it can become difficult for new contributors to understand where priorities lie.
In this article, I'll discuss how to take your GitHub issues from good to great.
### The issue as user story
My team spoke with open source expert [Jono Bacon][1]—author of [The Art of Community][2], a strategy consultant, and former Director of Community at GitHub—who said that high-quality issues are at the core of helping a project succeed. He says that while some see issues as merely a big list of problems you have to tend to, well-managed, triaged, and labeled issues can provide incredible insight into your code, your community, and where the problem spots are.
"At the point of submission of an issue, the user likely has little patience or interest in providing expansive detail. As such, you should make it as easy as possible to get the most useful information from them in the shortest time possible," Jono Bacon said.
A consistent structure can take a lot of burden off project maintainers, particularly for open source projects. We've found that encouraging a user story approach helps make clarity a constant. The common structure for a user story addresses the "who, what, and why" of a feature: As a [user type], I want to [task] so that [goal].
Here's what that looks like in practice:
>As a customer, I want to create an account so that I can make purchases.
We suggest sticking that user story in the issue's title. You can also set up [issue templates][3] to keep things consistent.
![](https://opensource.com/sites/default/files/resize/issuetemplate-new-520x293.png)
> Issue templates bring consistency to feature requests.
The point is to make the issue well-defined for everyone involved: it identifies the audience (or user), the action (or task), and the outcome (or goal) as simply as possible. There's no need to obsess over this structure, though; as long as the what and why of a story are easy to spot, you're good.
### Qualities of a good issue
Not all issues are created equal—as any OSS contributor or maintainer can attest. A well-formed issue meets these qualities outlined in [The Agile Samurai][4].
Ask yourself if it is...
- something of value to customers
- avoids jargon or mumbo jumbo; a non-expert should be able to understand it
- "slices the cake," which means it goes end-to-end to deliver something of value
- independent from other issues if possible; dependent issues reduce flexibility of scope
- negotiable, meaning there are usually several ways to get to the stated goal
- small and easily estimable in terms of time and resources required
- measurable; you can test for results
### What about everything else? Working with constraints
If an issue is difficult to measure or doesn't seem feasible to complete within a short time period, you can still work with it. Some people call these "constraints."
For example, "the product needs to be fast" doesn't fit the story template, but it is non-negotiable. But how fast is fast? Vague requirements don't meet the criteria of a "good issue", but if you further define these concepts—for example, "the product needs to be fast" can be "each page needs to load within 0.5 seconds"—you can work with it more easily. Constraints can be seen as internal metrics of success, or a landmark to shoot for. Your team should test for them periodically.
### What's inside your issue?
In agile, user stories typically include acceptance criteria or requirements. In GitHub, I suggest using markdown checklists to outline any tasks that make up an issue. Issues should get more detail as they move up in priority.
Say you're creating an issue around a new homepage for a website. The sub-tasks for that task might look something like this.
![](https://opensource.com/sites/default/files/resize/markdownchecklist-520x255.png)
>Use markdown checklists to split a complicated issue into several parts.
If necessary, link to other issues to further define a task. (GitHub makes this really easy.)
Defining features as granularly as possible makes it easier to track progress, test for success, and ultimately ship valuable code more frequently.
Once you've gathered some data points in the form of issues, you can use APIs to glean deeper insight into the health of your project.
"The GitHub API can be hugely helpful here in identifying patterns and trends in your issues," Bacon said. "With some creative data science, you can identify problem spots in your code, active members of your community, and other useful insights."
Some issue management tools provide APIs that add additional context, like time estimates or historical progress.
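As a rough sketch of that idea (the repository path below is a placeholder, and this is just one simple way to slice the data), a few lines of Node.js against the public GitHub REST API can pull open issues and tally them by label:
```javascript
// Rough sketch: tally open issues by label via the public GitHub REST API.
// "owner/repo" is a placeholder; unauthenticated requests are rate-limited.
var https = require('https');

var options = {
  hostname: 'api.github.com',
  path: '/repos/owner/repo/issues?state=open&per_page=100',
  headers: { 'User-Agent': 'issue-report-sketch' }
};

https.get(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var issues = JSON.parse(body);
    if (!Array.isArray(issues)) {
      console.error(issues); // API error (bad repo name, rate limit, ...)
      return;
    }
    var counts = {};
    issues.forEach(function (issue) {
      (issue.labels || []).forEach(function (label) {
        counts[label.name] = (counts[label.name] || 0) + 1;
      });
    });
    console.log(counts); // e.g. { bug: 12, enhancement: 7 }
  });
});
```
From there it is a short step to, say, charting label counts over time or flagging issues that have gone quiet.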
### Getting others on board
Once your team decides on an issue structure, how do you get others to buy in? Think of your repo's ReadMe.md file as your project's "how-to." It should clearly define what your project does (ideally using searchable language) and explain how others can contribute (by submitting requests, bug reports, suggestions, or by contributing code itself.)
![](https://opensource.com/sites/default/files/resize/readme-520x184.png)
>Edit your ReadMe file with clear instructions for new collaborators.
This is the perfect spot to share your GitHub issue guidelines. If you want feature requests to follow the user story format, share that here. If you use a tracking tool to organize your product backlog, share the badge so others can gain visibility.
"Issue templates, sensible labels, documentation for how to file issues, and ensuring your issues get triaged and responded to quickly are all important" for your open source project, Bacon said.
Remember: It's not about adding process for the process' sake. It's about setting up a structure that makes it easy for others to discover, understand, and feel confident contributing to your community.
"Focus your community growth efforts not just on growing the number of programmers, but also [on] people interested in helping issues be accurate, up to date, and a source of active conversation and productive problem solving," Bacon said.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/7/how-take-your-projects-github-issues-good-great
作者:[Matt Butler][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mattzenhub
[1]: http://www.jonobacon.org/
[2]: http://www.artofcommunityonline.org/
[3]: https://help.github.com/articles/creating-an-issue-template-for-your-repository/
[4]: https://www.amazon.ca/Agile-Samurai-Masters-Deliver-Software/dp/1934356581

View File

@ -0,0 +1,50 @@
Adobe's new CIO shares leadership advice for starting a new role
====
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Leadership_3.png?itok=QWUGMw-V)
I'm currently a few months into a new CIO role at a highly-admired, cloud-based technology company. One of my first tasks was to get to know the organization's people, culture, and priorities.
As part of that goal, I am visiting all the major IT sites. While in India, less than two months into the job, I was asked directly: “What are you going to do? What is your plan?” My response, which will not surprise seasoned CIOs, was that I was still in discovery mode, and I was there to listen and learn.
I've never gone into an organization with a set blueprint for what I'll do. I know some CIOs have a playbook for how they will operate. They'll come in and blow the whole organization up and put their set plan in motion.
Yes, there may be situations where things are massively broken and not working, so that course of action makes sense. Once I'm inside a company, however, my strategy is to go through a discovery process. I don't want to have any preconceived notions about the way things should be or what's working versus what's not.
Here are my guiding principles as a newly-appointed leader:
### Get to know your people
This means building relationships, and it includes your IT staff as well as your business users and your top salespeople. What are the top things on their lists? What do they want you to focus on? What's working well? What's not? How is the customer experience? Knowing how you can help everyone be more successful will help you shape the way you deliver services to them.
If your department is spread across several floors, as mine is, consider meet-and-greet lunches or mini-tech fairs so people can introduce themselves, discuss what they're working on, and share stories about their family, if they feel comfortable doing that. If you have an open-door office policy, make sure they know that as well. If your staff spreads across countries or continents, get out there and visit as soon as you reasonably can.
### Get to know your products and company culture
One of the things that surprised me coming into Adobe was how broad our product portfolio is. We have a platform of solutions and services across three clouds (Adobe Creative Cloud, Document Cloud, and Marketing Cloud) and a vast portfolio of products within each. You'll never know how much opportunity your new company presents until you get to know your products and learn how to support all of them. At Adobe we use many of our digital media and digital marketing solutions as Customer Zero, so we have first-hand experiences to share with our customers.
### Get to know customers
Very early on, I started getting requests to meet with customers. Meeting with customers is a great way to jump-start your thinking into the future of the IT organization, which includes the different types of technologies, customers, and consumers we could have going forward.
### Plan for the future
As a new leader, I have a fresh perspective and can think about the future of the organization without getting distracted by challenges or obstacles.
What CIOs need to do is jump-start IT into its next generation. When I meet my staff, I'm asking them what we want to be three to five years out so we can start positioning ourselves for that future. That means discussing the initiatives and priorities.
After that, it makes sense to bring the leadership team together so you can work to co-create the next generation of the organization: its mission, vision, modes of alignment, and operating norms. If you start changing IT from the inside out, it will percolate into business and everything else you do.
Through this whole process, I've been very open with people that this is not going to be a top-down directive. I have ideas on priorities and what we need to focus on, but we have to be in lockstep, working as a team and figuring out what we want to do jointly.
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2016/9/adobes-new-cio-shares-leadership-advice-starting-new-role
作者:[Cynthia Stoddard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://enterprisersproject.com/user/cynthia-stoddard

View File

@ -0,0 +1,62 @@
Linus Torvalds reveals his favorite programming laptop
====
>It's the Dell XPS 13 Developer Edition. Here's why.
I recently talked with some Linux developers about what the best laptop is for serious programmers. As a result I checked out several laptops from a programmer's viewpoint. The winner in my book? The 2016 Dell XPS 13 Developer Edition. I'm in good company. Linus Torvalds, Linux's creator, agrees. The Dell XPS 13 Developer Edition, for him, is the best laptop around.
![](http://zdnet3.cbsistatic.com/hub/i/r/2016/07/18/702609c3-db38-4603-9f5f-4dcc3d71b140/resize/770xauto/50a8ba1c2acb1f0994aec2115d2e55ce/2016-dell-xps-13.jpg)
Torvalds' requirements may not be yours though.
On Google+, Torvalds explained, "First off: [I don't use my laptop as a desktop replacement][1], and I only travel for a small handful of events each year. So for me, the laptop is a fairly specialized thing that doesn't get daily (or even weekly) use, so the main criteria are not some kind of "average daily use", but very much "travel use".
Therefore, for Torvalds, "I end up caring a lot about it being fairly small and light, because I may end up carrying it around all day at a conference. I also want it to have a good screen, because by now I'm just used to it at my main desktop, and I want my text to be legible but small."
The Dell's display is powered by Intel's Iris 540 GPU. In my experience it works really well.
The Iris powers a 13.3 inch display with a 3,200×1,800 touchscreen. That's 280 pixels per inch, 40 more than my beloved [2015 Chromebook Pixel][2] and 60 more than a [MacBook Pro with Retina][3].
However, getting that hardware to work and play well with the [Gnome][4] desktop isn't easy. As Torvalds explained in another post, it "has the [same resolution as my desktop][5], but apparently because the laptop screen is smaller, Gnome seems to decide on its own that I need an automatic scaling factor of 2, which blows up all the stupid things (window decorations, icons etc) to a ridiculous degree".
The solution? You can forget about looking to the user interface. You need to go to the shell and run: `gsettings set org.gnome.desktop.interface scaling-factor 1`.
Torvalds may use Gnome, but he's [never liked the Gnome 3.x family much][6]. I can't argue with him. That's why I use [Cinnamon][7] instead.
He also wants "a reasonably powerful CPU, because when I'm traveling I still build the kernel a lot. I don't do my normal full 'make allmodconfig' build between each pull request like I do at home, but I'd like to do it more often than I did with my previous laptop, which is actually (along with the screen) the main reason I wanted to upgrade."
Linus doesn't describe the features of his XPS 13, but my review unit was a high-end model. It came with dual-core, 2.2GHz 6th Generation Intel Core i7-6560U Skylake processor and 16GBs of DDR3 RAM with a half a terabyte, PCIe solid state drive (SSD). I'm sure Torvalds' system is at least that well-equipped.
Some features you may care about aren't on Torvalds' list.
>"What I don't tend to care about is touch-screens, because my fingers are big and clumsy compared to the text I'm looking at (I also can't handle the smudges: maybe I just have particularly oily fingers, but I really don't want to touch that screen).
>I also don't care deeply about some 'all day battery life', because quite frankly, I can't recall the last time I didn't have access to power. I might not want to bother to plug it in for some quick check, but it's just not a big overwhelming issue. By the time battery life is in 'more than a couple of hours', I just don't care very much any more."
Dell claims the XPS 13, with its 56wHR, 4-Cell Battery, has about a 12-hour battery life. It has well over 10 in my experience. I haven't tried to run it down to the dregs.
Torvalds also didn't have any trouble with the Intel Wi-Fi set. The non-Developer Edition uses a Broadcom chipset, and that has proven troublesome for both Windows and Linux users. Dell technical support was extremely helpful to me in getting this problem under control.
Some people have trouble with the XPS 13 touchpad. Neither I nor Torvalds have any worries. Torvalds wrote, the "XPS13 touchpad works very well for me. That may be a personal preference thing, but it seems to be both smooth and responsive."
Still, while Torvalds likes the XPS 13, he's also fond of the latest Lenovo X1 Carbon, HP Spectre 13 x360, and last year's Lenovo Yoga 900. Me? I like the XPS 13 Developer Edition. The price tag, which for the model I reviewed was $1949.99, may keep you from reaching for your credit card.
Still, if you want to develop like one of the world's top programmers, the Dell XPS 13 Developer Edition is worth the money.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/linus-torvalds-reveals-his-favorite-programming-laptop/
作者:[Steven J. Vaughan-Nichols ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]: https://plus.google.com/+LinusTorvalds/posts/VZj8vxXdtfe
[2]: http://www.zdnet.com/article/the-best-chromebook-ever-the-chromebook-pixel-2015/
[3]: http://www.zdnet.com/product/apple-15-inch-macbook-pro-with-retina-display-mid-2015/
[4]: https://www.gnome.org/
[5]: https://plus.google.com/+LinusTorvalds/posts/d7nfnWSXjfD
[6]: http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/
[7]: http://www.zdnet.com/article/how-to-customise-your-linux-desktop-cinnamon/

View File

@ -0,0 +1,54 @@
Should Smartphones Do Away with the Headphone Jack? Here Are Our Thoughts
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-Featured.jpg)
Even though Apple removing the headphone jack from the iPhone 7 has been long-rumored, after the official announcement last week confirming the news, it has still become a hot topic.
For those not in the know on this latest news, Apple has removed the headphone jack from the phone, and the headphones will now plug into the lightning port. Those that want to still use their existing headphones may, as there is an adapter that ships with the phone along with the lightning headphones. They are also selling a new product: AirPods. These are wireless and are inserted into your ear. The biggest advantage is that by eliminating the jack they were able to make the phone dust and water-resistant.
Being it's such a big story right now, we asked our writers, “What are your thoughts on Smartphones doing away with the headphone jack?”
### Our Opinion
Derrik believes that “Apple's way of doing it is a play to push more expensive peripherals that do not comply to an open standard.” He also doesn't want to have to charge something every five hours, meaning the AirPods. While he understands that the 3.5mm jack is aging, as an “audiophile” he would love a new, open standard, but “proprietary pushes” worry him about device freedom.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/headphone-jacks.jpg)
Damien doesn't really even use the headphone jack these days as he has Bluetooth headphones. He hates that wire anyway, so feels “this is a good move.” Yet he also understands Derrik's point about the wireless headphones running out of battery, leaving him with “nothing to fall back on.”
Trevor is very upfront in saying he thought it was “dumb” until he heard you couldn't charge the phone and use the headphones at the same time and realized it was “dumb X 2.” He uses the headphones/headphone jack all the time in a work van without Bluetooth and listens to audio or podcasts. He uses the plug-in style as Bluetooth drains his battery.
Simon is not a big fan. He hasn't seen much reasoning past it leaving more room within the device. He figures “it will then come down to whether or not consumers favor wireless headphones, an adapter, and water-resistance over not being locked into AirPods, lightning, or an adapter”. He fears it might be “too early to jump into removing ports” and likes a “one pair fits all” standard.
James believes that wireless technology is progressive, so he sees it as a good move “especially for Apple in terms of hardware sales.” He happens to use expensive headphones, so personally he's “yet to be convinced,” noting his Xperia is waterproof and has a jack.
Jeffry points out that “almost every transition attempt in the tech world always starts with strong opposition from those who won't benefit from the changes.” He remembers the flak Apple received when they removed the floppy disk drive and decided not to support Flash, and now both are industry standards. He believes everything is evolving for the better, removing the audio jack is “just the first step toward the future,” and Apple is just the one who is “brave enough to lead the way (and make a few bucks in doing so).”
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-Headset.jpg)
Vamsi doesn't mind the removal of the headphone jack as long as there is a “better solution applicable to all the users that use different headphones and other gadgets.” He doesn't feel using headphones via a lightning port is a good solution as it renders nearly all other headphones obsolete. Regarding Bluetooth headphones, he just doesn't want to deal with another gadget. Additionally, he doesn't get the argument of it being good for water resistance since there are existing devices with headphone jacks that are water resistant.
Mahesh prefers a phone with a jack because many times he is charging his phone and listening to music simultaneously. He believes we'll get to see how it affects the public in the next few months.
Derrik chimed back in to say that by “removing open standard ports and using a proprietary connection too,” you can be technical and say there are adapters, but Thunderbolt is also closed, and Apple can stop selling those adapters at any time. He also notes that the AirPods won't be Bluetooth.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/Writers-Opinion-Headphone-AirPods.jpg)
As for me, I'm always up for two things: New technology and anything Apple. I've been using iPhones since a few weeks past the very first model being introduced, yet I haven't updated since 2012 and the iPhone 5, so I was overdue. I'll be among the first to get my hands on the iPhone 7. I hate that stupid white wire being in my face, so I just might be opting for AirPods at some point. I am very appreciative of the phone becoming water-resistant. As for charging vs. listening, the charge on new iPhones lasts so long that I don't expect it to be much of a problem. Even my old iPhone 5 usually lasts about twenty hours on a good day and twelve hours on a bad day. So I don't expect that to be a problem.
### Your Opinion
Our writers have given you a lot to think about. What are your thoughts on Smartphones doing away with the headphone jack? Will you miss it? Is it a deal breaker for you? Or do you relish the upgrade in technology? Will you be trying the iPhone 7 or the AirPods? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/should-smartphones-do-away-with-the-headphone-jack/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Laura Tucker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/lauratucker/

View File

@ -0,0 +1,57 @@
What the rise of permissive open source licenses means
====
Why restrictive licenses such as the GNU GPL are steadily falling out of favor.
"If you use any open source software, you have to make the rest of your software open source." That's what former Microsoft CEO Steve Ballmer said back in 2001, and while his statement was never true, it must have spread some FUD (fear, uncertainty and doubt) about free software. Probably that was the intention.
This FUD about open source software is mainly about open source licensing. There are many different licenses, some more restrictive (some people use the term "protective") than others. Restrictive licenses such as the GNU General Public License (GPL) use the concept of copyleft, which grants people the right to freely distribute copies and modified versions of a piece of software as long as the same rights are preserved in derivative works. The GPL (v3) is used by open source projects such as bash and GIMP. There's also the Affero GPL, which provides copyleft to software that is offered over a network (for example as a web service.)
What this means is that if you take code that is licensed in this way and you modify it by adding some of your own proprietary code, then in some circumstances the whole new body of code, including your code, becomes subject to the restrictive open source license. It was this type of license that Ballmer was probably referring to when he made his statement.
But permissive licenses are a different animal. The MIT License, for example, lets anyone take open source code and do what they want with it — including modifying and selling it — as long as they provide attribution and don't hold the developer liable. Another popular permissive open source license, the Apache License 2.0, also provides an express grant of patent rights from contributors to users. JQuery, the .NET Core and Rails are licensed using the MIT license, while the Apache 2.0 license is used by software including Android, Apache and Swift.
Ultimately both license types are intended to make software more useful. Restrictive licenses aim to foster the open source ideals of participation and sharing so everyone gets the maximum benefit from software. And permissive licenses aim to ensure that people can get the maximum benefit from software by allowing them to do what they want with it — even if that means they take the code, modify it and keep it for themselves or even sell the resulting work as proprietary software without contributing anything back.
Figures compiled by open source license management company Black Duck Software show that the restrictive GPL 2.0 was the most commonly used open source license last year with about 25 percent of the market. The permissive MIT and Apache 2.0 licenses were next with about 18 percent and 16 percent respectively, followed by the GPL 3.0 with about 10 percent. That's almost evenly split at 35 percent restrictive and 34 percent permissive.
But this snapshot misses the trend. Black Duck's data shows that in the six years from 2009 to 2015 the MIT license's share of the market has gone up 15.7 percent and Apache's share has gone up 12.4 percent. GPL v2 and v3's share during the same period has dropped by a staggering 21.4 percent. In other words there was a significant move away from restrictive licenses and towards permissive ones during that period.
And the trend is continuing. Black Duck's [latest figures][1] show that MIT is now at 26 percent, GPL v2 21 percent, Apache 2 16 percent, and GPL v3 9 percent. That's 30 percent restrictive, 42 percent permissive — a huge swing from last year's 35 percent restrictive and 34 percent permissive. Separate [research][2] of the licenses used on GitHub appears to confirm this shift. It shows that MIT is overwhelmingly the most popular license with a 45 percent share, compared to GPL v2 with just 13 percent and Apache with 11 percent.
![](http://images.techhive.com/images/article/2016/09/open-source-licenses.jpg-100682571-large.idge.jpeg)
### Driving the trend
What's behind this mass move from restrictive to permissive licenses? Do companies fear that if they let restrictive software into the house they will lose control of their proprietary software, as Ballmer warned? In fact, that may well be the case. Google, for example, has [banned Affero GPL software][3] from its operations.
Jim Farmer, chairman of [Instructional Media + Magic][4], a developer of open source technology for education, believes that many companies avoid restrictive licenses to avoid legal difficulties. "The problem is really about complexity. The more complexity in a license, the more chance there is that someone has a cause of action to bring you to court. Complexity makes litigation more likely," he says.
He adds that fear of restrictive licenses is being driven by lawyers, many of whom recommend that clients use software that is licensed with the MIT or Apache 2.0 licenses, and who specifically warn against the Affero license.
This has a knock-on effect with software developers, he says, because if companies avoid software with restrictive licenses then developers have more incentive to license their new software with permissive ones if they want it to get used.
But Greg Soper, CEO of SalesAgility, the company behind the open source SuiteCRM, believes that the move towards permissive licenses is also being driven by some developers. "Look at an application like Rocket.Chat. The developers could have licensed that with GPL 2.0 or Affero but they chose a permissive license," he says. "That gives the app the widest possible opportunity, because a proprietary vendor can take it and not harm their product or expose it to an open source license. So if a developer wants an application to be used inside a third-party application it makes sense to use a permissive license."
Soper points out that restrictive licenses are designed to help an open source project succeed by stopping developers from taking other people's code, working on it, and then not sharing the results back with the community. "The Affero license is critical to the health of our product because if people could make a fork that was better than ours and not give the code back that would kill our product," he says. "For Rocket.Chat it's different because if it used Affero then it would pollute companies' IP and so it wouldn't get used. Different licenses have different use cases."
Michael Meeks, an open source developer who has worked on Gnome, OpenOffice and now LibreOffice, agrees with Jim Farmer that many companies do choose to use software with permissive licenses for fear of legal action. "There are risks with copyleft licenses, but there are also huge benefits. Unfortunately people listen to lawyers, and lawyers talk about risk but they never tell you that something is safe."
Fifteen years after Ballmer made his inaccurate statement, it seems that the FUD it generated is still having an effect — even if the move from restrictive licenses to permissive ones is not quite the effect he intended.
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3120235/open-source-tools/what-the-rise-of-permissive-open-source-licenses-means.html
作者:[Paul Rubens ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.cio.com/author/Paul-Rubens/
[1]: https://www.blackducksoftware.com/top-open-source-licenses
[2]: https://github.com/blog/1964-open-source-license-usage-on-github-com
[3]: http://www.theregister.co.uk/2011/03/31/google_on_open_source_licenses/
[4]: http://immagic.com/

View File

@ -0,0 +1,84 @@
Setup honeypot in Kali Linux
====
Pentbox is a security kit containing various tools that streamline the work of conducting a penetration test. It is programmed in Ruby and oriented to GNU/Linux, with support for Windows, macOS and any other system where Ruby is installed. In this short article we will explain how to set up a honeypot in Kali Linux. If you don't know what a honeypot is: "a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems."
### Download Pentbox:
Simply type in the following command in your terminal to download pentbox-1.8.
```
root@kali:~# wget http://downloads.sourceforge.net/project/pentbox18realised/pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-1.jpg)
### Uncompress pentbox files
Decompressing the file with the following command:
```
root@kali:~# tar -zxvf pentbox-1.8.tar.gz
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-2.jpg)
### Run pentbox ruby script
Change directory into pentbox folder
```
root@kali:~# cd pentbox-1.8/
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-3.jpg)
Run pentbox using the following command
```
root@kali:~# ./pentbox.rb
```
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-4.jpg)
### Setup a honeypot
Use option 2 (Network Tools) and then option 3 (Honeypot).
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-5.jpg)
Finally for first test, choose option 1 (Fast Auto Configuration)
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-6.jpg)
This opens up a honeypot on port 80. Simply open a browser and browse to http://192.168.160.128 (where 192.168.160.128 is your IP address). You should see an Access denied error.
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-7.jpg)
and in the terminal you should see “HONEYPOT ACTIVATED ON PORT 80” followed by “INTRUSION ATTEMPT DETECTED”.
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-8.jpg)
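If you prefer to test from the command line instead of a browser (a quick check; 192.168.160.128 is just the example address used above, so substitute your own IP), curl shows the same denied response and should likewise trigger the intrusion alert:

```
root@kali:~# curl -i http://192.168.160.128/
```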
Now, if you do the same steps but this time select Option 2 (Manual Configuration), you should see additional options
![](https://www.blackmoreops.com/wp-content/uploads/2016/05/Set-up-a-honeypot-in-Kali-Linux-blackMORE-Ops-9.jpg)
Do the same steps but select port 22 this time (the SSH port). Then set up port forwarding on your home router so that external port 22 is forwarded to this machine's port 22. Alternatively, set it up on a VPS with your cloud provider.
You'd be amazed how many bots out there scan the SSH port continuously. You know what you do then? You try to hack them back for the lulz!
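Before exposing the SSH honeypot to the Internet, you can sanity-check that it is listening from another machine on the network (a minimal probe, assuming netcat is installed and 192.168.160.128 is the honeypot's address); the connection attempt should also show up in the Pentbox terminal:

```
nc -vz 192.168.160.128 22
```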
Here's a video of setting up the honeypot if video is your thing:
<https://youtu.be/NufOMiktplA>
--------------------------------------------------------------------------------
via: https://www.blackmoreops.com/2016/05/06/setup-honeypot-in-kali-linux/
作者:[blackmoreops.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: blackmoreops.com

View File

@ -1,408 +0,0 @@
[jiajia9linuxer translating]
Backup Photos While Traveling With an Ipad Pro and a Raspberry Pi
===================================================================
![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
>Backup Photos While Traveling - Gear.
### Introduction
I've been on a quest to find the ideal travel photo backup solution for a long time. Relying on just tossing your SD cards in your camera bag after they are full during a trip is a risky move that leaves you too exposed: SD cards can be lost or stolen, data can get corrupted or cards can get damaged in transit. Backing up to another medium - even if it's just another SD card - and leaving that in a safe(r) place while traveling is the best practice. Ideally, backing up to a remote location would be the way to go, but that may not be practical depending on where you are traveling to and Internet availability in the region.
My requirements for the ideal backup procedure are:
1. Use an iPad to manage the process instead of a laptop. I like to travel light and since most of my trips are business related (i.e. non-photography related), I'd hate to bring my personal laptop along with my business laptop. My iPad, however, is always with me, so using it as a tool just makes sense.
2. Use as few hardware devices as practically possible.
3. Connection between devices should be secure. Ill be using this setup in hotels and airports, so closed and encrypted connection between devices is ideal.
4. The whole process should be sturdy and reliable. Ive tried other options using router/combo devices and [it didnt end up well][1].
### The Setup
I came up with a setup that meets the above criteria and is also flexible enough to expand on it in the future. It involves the use of the following gear:
1. [iPad Pro 9.7][2] inches. It's the most powerful, small and lightweight iOS device at the time of writing. The Apple Pencil is not really needed, but it's part of my gear as I do some editing on the iPad Pro while on the road. All the heavy lifting will be done by the Raspberry Pi, so any other device capable of connecting through SSH would fit the bill.
2. [Raspberry Pi 3][3] with Raspbian installed.
3. [Micro SD card][4] for Raspberry Pi and a Raspberry Pi [box/case][5].
5. [128 GB Pen Drive][6]. You can go bigger, but 128 GB is enough for my use case. You can also get a portable external hard drive like [this one][7], but the Raspberry Pi may not provide enough power through its USB port, which means you would have to get a [powered USB hub][8], along with the needed cables, defeating the purpose of having a lightweight and minimalistic setup.
6. [SD card reader][9]
7. [SD Cards][10]. I use several as I dont wait for one to fill up before using a different one. That allows me to spread photos I take on a single trip amongst several cards.
The following diagram shows how these devices will be interacting with each other.
![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
>Backup Photos While Traveling - Process Diagram.
The Raspberry Pi will be configured to act as a secured hotspot. It will create its own WPA2-encrypted WiFi network to which the iPad Pro will connect. Although there are many online tutorials to create an Ad Hoc (i.e. computer-to-computer) connection with the Raspberry Pi, which is easier to set up, that connection is not encrypted and it's relatively easy for other devices near you to connect to it. Therefore, I decided to go with the WiFi option.
The camera's SD card will be connected to one of the Raspberry Pi's USB ports through an SD card reader. Additionally, a high-capacity pen drive (128 GB in my case) will be permanently inserted in one of the USB ports on the Raspberry Pi. I picked the [Sandisk Ultra Fit][11] because of its tiny size. The main idea is to have the Raspberry Pi back up the photos from the SD card to the pen drive with the help of a Python script. The backup process will be incremental, meaning that only changes (i.e. new photos taken) will be added to the backup folder each time the script runs, making the process really fast. This is a huge advantage if you take a lot of photos or if you shoot in RAW format. The iPad will be used to trigger the Python script and to browse the SD card and pen drive as needed.
As an added benefit, if the Raspberry Pi is connected to Internet through a wired connection (i.e. through the Ethernet port), it will be able to share the Internet connection with the devices connected to its WiFi network.
### 1. Raspberry Pi Configuration
This is the part where we roll up our sleeves and get busy, as we'll be using Raspbian's command-line interface (CLI). I'll try to be as descriptive as possible so it's easy to go through the process.
#### Install and Configure Raspbian
Connect a keyboard, mouse and an LCD monitor to the Raspberry Pi. Insert the Micro SD in the Raspberry Pis slot and proceed to install Raspbian per the instructions in the [official site][12].
After the installation is done, go to the CLI (Terminal in Raspbian) and type:
```
sudo apt-get update
sudo apt-get upgrade
```
This will upgrade all software on the machine. I configured the Raspberry Pi to connect to the local network and changed the default password as a safety measure.
By default SSH is enabled on Raspbian, so all sections below can be done from a remote machine. I also configured RSA authentication, but thats optional. More info about it [here][13].
This is a screenshot of the SSH connection to the Raspberry Pi from [iTerm][14] on Mac:
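If you prefer typing the connection yourself, it is just a standard SSH session (a minimal example; the user and hostname depend on your setup, `pi` and `raspberrypi.local` being the Raspbian defaults):

```
ssh pi@raspberrypi.local
```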
##### Creating Encrypted (WPA2) Access Point
The installation was based on [this][15] article and optimized for my use case.
##### 1. Install Packages
We need to type the following to install the required packages:
```
sudo apt-get install hostapd
sudo apt-get install dnsmasq
```
hostapd allows us to use the built-in WiFi as an access point. dnsmasq is a combined DHCP and DNS server that's easy to configure.
##### 2. Edit dhcpcd.conf
Connect to the Raspberry Pi through Ethernet. Interface configuration on the Raspberry Pi is handled by dhcpcd, so first we tell it to ignore wlan0, as it will be configured with a static IP address.
Open up the dhcpcd configuration file with sudo nano `/etc/dhcpcd.conf` and add the following line to the bottom of the file:
```
denyinterfaces wlan0
```
Note: This must be above any interface lines that may have been added.
##### 3. Edit interfaces
Now we need to configure our static IP. To do this, open up the interface configuration file with sudo nano `/etc/network/interfaces` and edit the wlan0 section so that it looks like this:
```
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
Also, the wlan1 section was edited to be:
```
#allow-hotplug wlan1
#iface wlan1 inet manual
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
Important: Restart dhcpcd with `sudo service dhcpcd restart` and then reload the configuration for wlan0 with `sudo ifdown wlan0; sudo ifup wlan0`.
##### 4. Configure Hostapd
Next, we need to configure hostapd. Create a new configuration file with `sudo nano /etc/hostapd/hostapd.conf` with the following contents:
```
interface=wlan0
# Use the nl80211 driver with the brcmfmac driver
driver=nl80211
# This is the name of the network
ssid=YOUR_NETWORK_NAME_HERE
# Use the 2.4GHz band
hw_mode=g
# Use channel 6
channel=6
# Enable 802.11n
ieee80211n=1
# Enable QoS Support
wmm_enabled=1
# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
# Accept all MAC addresses
macaddr_acl=0
# Use WPA authentication
auth_algs=1
# Require clients to know the network name
ignore_broadcast_ssid=0
# Use WPA2
wpa=2
# Use a pre-shared key
wpa_key_mgmt=WPA-PSK
# The network passphrase
wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
# Use AES, instead of TKIP
rsn_pairwise=CCMP
```
Now, we also need to tell hostapd where to look for the config file when it starts up on boot. Open up the default configuration file with `sudo nano /etc/default/hostapd` and find the line `#DAEMON_CONF=""` and replace it with `DAEMON_CONF="/etc/hostapd/hostapd.conf"`.
##### 5. Configure Dnsmasq
The shipped dnsmasq config file contains tons of information on how to use it, but we wont be using all the options. Id recommend moving it (rather than deleting it), and creating a new one with
```
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf
```
Paste the following into the new file:
```
interface=wlan0 # Use interface wlan0
listen-address=192.168.1.1 # Explicitly specify the address to listen on
bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
bogus-priv # Never forward addresses in the non-routed address spaces.
dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
```
##### 6. Set up IPV4 forwarding
One of the last things that we need to do is to enable packet forwarding. To do this, open up the sysctl.conf file with `sudo nano /etc/sysctl.conf`, and remove the # from the beginning of the line containing `net.ipv4.ip_forward=1`. This will enable it on the next reboot.
We also need to share our Raspberry Pi's internet connection with the devices connected over WiFi by configuring a NAT between our wlan0 interface and our eth0 interface. We can do this by writing a script with the following lines.
```
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
```
I named the script hotspot-boot.sh and made it executable with:
```
sudo chmod 755 hotspot-boot.sh
```
The script should be executed when the Raspberry Pi boots. There are many ways to accomplish this, and this is the way I went with:
1. Put the file in `/home/pi/scripts`.
2. Edit the rc.local file by typing `sudo nano /etc/rc.local` and place the call to the script before the line that reads exit 0 (more information [here][16]).
This is what the rc.local file looks like after editing it.
```
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sudo /home/pi/scripts/hotspot-boot.sh &
exit 0
```
#### Installing Samba and NTFS Compatibility.
We also need to install the following packages to enable the Samba protocol and allow the File Browser app to see the devices connected to the Raspberry Pi as shared folders. Also, ntfs-3g provides NTFS compatibility in case we decide to connect a portable hard drive to the Raspberry Pi.
```
sudo apt-get install ntfs-3g
sudo apt-get install samba samba-common-bin
```
You can follow [this][17] article for details on how to configure Samba.
Important Note: The referenced article also goes through the process of mounting external hard drives on the Raspberry Pi. We wont be doing that because, at the time of writing, the current version of Raspbian (Jessie) auto-mounts both the SD Card and the Pendrive to `/media/pi/` when the device is turned on. The article also goes over some redundancy features that we wont be using.
### 2. Python Script
Now that the Raspberry Pi has been configured, we need to work on the script that will actually back up/copy our photos. Note that this script just provides a certain degree of automation to the backup process. If you have a basic knowledge of the Linux/Raspbian CLI, you can just SSH into the Raspberry Pi and copy all the photos yourself from one device to the other by creating the needed folders and using either the cp or the rsync command. We'll be using the rsync method in the script as it's very reliable and allows for incremental backups.
This process relies on two files: the script itself and the configuration file `backup_photos.conf`. The latter just has a couple of lines indicating where the destination drive (the pen drive) is mounted and the name of the folder it is mounted as. This is what it looks like:
```
mount folder=/media/pi/
destination folder=PDRIVE128GB
```
Important: Do not add any additional spaces between the `=` symbol and the words on either side of it, as the script will break (definitely an opportunity for improvement).
Below is the Python script, which I named `backup_photos.py` and placed in `/home/pi/scripts/`. I included comments in between the lines of code to make it easier to follow.
```
#!/usr/bin/python3
import os
import sys
from sh import rsync
'''
The script copies an SD Card mounted on /media/pi/ to a folder with the same name
created in the destination drive. The destination drive's name is defined in
the .conf file.
Argument: label/name of the mounted SD Card.
'''
CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
ORIGIN_DEV = sys.argv[1]
def create_folder(path):
    print ('attempting to create destination folder: ',path)
    if not os.path.exists(path):
        try:
            os.mkdir(path)
            print ('Folder created.')
        except:
            print ('Folder could not be created. Stopping.')
            return
    else:
        print ('Folder already in path. Using that instead.')

confFile = open(CONFIG_FILE,'rU')
#IMPORTANT: rU Opens the file with Universal Newline Support,
#so \n and/or \r is recognized as a new line.
confList = confFile.readlines()
confFile.close()

for line in confList:
    line = line.strip('\n')
    try:
        name , value = line.split('=')
        if name == 'mount folder':
            mountFolder = value
        elif name == 'destination folder':
            destDevice = value
    except ValueError:
        print ('Incorrect line format. Passing.')
        pass

destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
create_folder(destFolder)

print ('Copying files...')
# Uncomment to delete files that are not in the origin:
# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
print ('Done.')
```
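As noted above, the configuration parser breaks if there are spaces around the `=` sign. A small, hypothetical tweak (not part of the original script) that tolerates such spaces could look like this:

```
# Hypothetical helper: parse one line of backup_photos.conf,
# tolerating spaces around the '=' separator.
def parse_conf_line(line):
    name, value = line.split('=', 1)
    return name.strip(), value.strip()

print(parse_conf_line('mount folder = /media/pi/'))
# ('mount folder', '/media/pi/')
```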
### 3. iPad Pro Configuration
Since all the heavy lifting will be done on the Raspberry Pi and no files will be transferred through the iPad Pro, which was a huge disadvantage in [one of the workflows I tried before][18], we just need to install [Prompt 2][19] on the iPad to access the Raspberry Pi through SSH. Once connected, you can either run the Python script or copy the files manually.
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg)
>SSH Connection to Raspberry Pi From iPad Using Prompt.
Since we installed Samba, we can access USB devices connected to the Raspberry Pi in a more graphical way. You can stream videos, copy and move files between devices. [File Browser][20] is perfect for that.
### 4. Putting it All Together
Let's suppose that `SD32GB-03` is the label of an SD card connected to one of the USB ports on the Raspberry Pi. Also, let's suppose that `PDRIVE128GB` is the label of the pen drive, also connected to the device and defined in the `.conf` file as indicated above. If we wanted to back up the photos on the SD card, we would need to go through the following steps:
1. Turn on Raspberry Pi so that drives are mounted automatically.
2. Connect to the WiFi network generated by the Raspberry Pi.
3. Connect to the Raspberry Pi through SSH using the [Prompt][21] App.
4. Type the following once you are connected:
```
python3 backup_photos.py SD32GB-03
```
The first backup may take some minutes depending on how much of the card is used. That means you need to keep the connection to the Raspberry Pi alive from the iPad. You can get around this by using the [nohup][22] command before running the script.
```
nohup python3 backup_photos.py SD32GB-03 &
```
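Since nohup redirects the script's output to a file called nohup.out in the directory where you launched it (assuming the default behavior), you can reconnect later and check on the progress with:

```
tail -f nohup.out
```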
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png)
>iTerm Screenshot After Running Python Script.
### Further Customization
I installed a VNC server to access Raspbian's graphical interface from another computer or the iPad through [Remoter App][23]. I'm looking into installing [BitTorrent Sync][24] for backing up photos to a remote location while on the road, which would be the ideal setup. I'll expand this post once I have a workable solution.
Feel free to either include your comments/questions below or reach out to me. My contact info is at the footer of this page.
--------------------------------------------------------------------------------
via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
作者:[Editor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
[1]: http://bit.ly/1MVVtZi
[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
[6]: http://amzn.to/293kPqX
[7]: http://amzn.to/290syFY
[8]: http://amzn.to/290syFY
[9]: http://amzn.to/290syFY
[10]: http://amzn.to/290syFY
[11]: http://amzn.to/293kPqX
[12]: https://www.raspberrypi.org/downloads/noobs/
[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
[14]: https://www.iterm2.com/
[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
[18]: http://bit.ly/1MVVtZi
[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[22]: https://en.m.wikipedia.org/wiki/Nohup
[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
[24]: https://getsync.com/

View File

@ -1,62 +0,0 @@
LinuxBars Translating
LinuxBars 翻译中
Baidu Takes FPGA Approach to Accelerating SQL at Scale
===================
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGAFeatured-200x114.png)
While much of the work we have focused on at Baidu this year has centered on the Chinese search giant's [deep learning initiatives][1], many other critical, albeit less bleeding-edge, applications present true big data challenges.
As Baidu's Jian Ouyang detailed this week at the Hot Chips conference, Baidu sits on over an exabyte of data, processes around 100 petabytes per day, updates 10 billion webpages daily, and handles over a petabyte of log updates every 24 hours. These numbers are on par with Google and as one might imagine, it takes a Google-like approach to problem solving at scale to get around potential bottlenecks.
Just as we have described Google looking for any way possible to beat Moore's Law, Baidu is on the same quest. While the exciting, sexy machine learning work is fascinating, acceleration of the core mission-critical elements of the business is as well—because it has to be. As Ouyang notes, there is a widening gap between the company's need to deliver top-end services based on their data and what CPUs are capable of delivering.
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA1.png)
As for Baidus exascale problems, on the receiving end of all of this data are a range of frameworks and platforms for data analysis; from the companys massive knowledge graph, multimedia tools, natural language processing frameworks, recommendation engines, and click stream analytics. In short, the big problem of big data is neatly represented here—a diverse array of applications matched with overwhelming data volumes.
When it comes to acceleration for large-scale data analytics at Baidu, there are several challenges. Ouyang says it is difficult to abstract the computing kernels to find a comprehensive approach. “The diversity of big data applications and variable computing types makes this a challenge. It is also difficult to integrate all of this into a distributed system because there are also variable platforms and program models (MapReduce, Spark, streaming, user defined, and so on). Further there is more variance in data types and storage formats.”
Despite these barriers, Ouyang says teams looked for the common thread. And as it turns out, the string that ties together many of their data-intensive jobs is good old SQL. "Around 40% of our data analysis jobs are already written in SQL and rewriting others to match it can be done." Further, he says they have the benefit of using existing SQL systems that mesh with existing frameworks like Hive, Spark SQL, and Impala. The natural thing to do was to look for SQL acceleration—and Baidu found no better hardware than an FPGA.
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA2.png)
These boards, called processing elements (PE on coming slides), automatically handle key SQL functions as they come in. With that said, a disclaimer note here about what we were able to glean from the presentation. Exactly what the FPGA is talking to is a bit of a mystery, and that is by design. If Baidu is getting the kinds of speedups shown below in their benchmarks, this is competitive information. Still, we will share what was described. At its simplest, the FPGAs are running in the database, and when it sees SQL queries coming in, the software the team designed ([and presented at Hot Chips two years ago][2] related to DNNs) kicks into gear.
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA3.png)
One thing Ouyang did note about the performance of their accelerator is that it could have been higher, but they were bandwidth limited with the FPGA. In the evaluation below, Baidu used a 12-core 2.0 GHz Intel E26230 X2 sporting 128 GB of memory. The SDA had five processing elements (the 300 MHz FPGA boards seen above), each of which handles core functions (filter, sort, aggregate, join and group by).
To make the SQL accelerator, Baidu picked apart the TPC-DS benchmark and created special engines, called processing elements, that accelerate the five key functions in that benchmark test. These include filter, sort, aggregate, join, and group by SQL functions. (And no, we are not going to put these in all caps to shout as SQL really does.) The SDA setup employs an offload model, with the accelerator card having multiple processing elements of varying kinds shaped into the FPGA logic, with the type of SQL function and the number per card shaped by the specific workload. As these queries are being performed on Baidus systems, the data for the queries is pushed to the accelerator card in columnar format (which is blazingly fast for queries) and through a unified SDA API and driver, the SQL work is pushed to the right processing elements and the SQL operations are accelerated.
The SDA architecture uses a data flow model, and functions not supported by the processing elements are pushed back to the database systems and run natively there. More than any other factor, the performance of the SQL accelerator card developed by Baidu is limited by the memory bandwidth of the FPGA card. The accelerator works across clusters of machines, by the way, but the precise mechanism of how data and SQL operations are parsed out to multiple machines was not disclosed by Baidu.
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA4.png)
![](http://www.nextplatform.com/wp-content/uploads/2016/08/BaiduFPGA5.png)
Were limited in some of the details Baidu was willing to share but these benchmark results are quite impressive, particularly for Terasort. We will follow up with Baidu after Hot Chips to see if we can get more detail about how this is hooked together and how to get around the memory bandwidth bottlenecks.
--------------------------------------------------------------------------------
via: http://www.nextplatform.com/2016/08/24/baidu-takes-fpga-approach-accelerating-big-sql/?utm_source=dbweekly&utm_medium=email
作者:[Nicole Hemsoth][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.nextplatform.com/author/nicole/
[1]: http://www.nextplatform.com/?s=baidu+deep+learning
[2]: http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-12-day2-epub/HC26.12-5-FPGAs-epub/HC26.12.545-Soft-Def-Acc-Ouyang-baidu-v3--baidu-v4.pdf

View File

@ -0,0 +1,219 @@
Understanding Different Classifications of Shell Commands and Their Usage in Linux
====
When it comes to gaining absolute control over your Linux system, then nothing comes close to the command line interface (CLI). In order to become a Linux power user, one must understand the [different types of shell commands][1] and the appropriate ways of using them from the terminal.
In Linux, there are several types of commands, and for a new Linux user, knowing the meaning of the different commands enables efficient and precise usage. Therefore, in this article, we shall walk through the various classifications of shell commands in Linux.
One important thing to note is that the command line interface is different from the shell; it only provides a means for you to access the shell. The shell, which is also programmable, then makes it possible to communicate with the kernel using commands.
Linux commands fall under the following classifications:
### 1. Program Executables (File System Commands)
When you run a command, Linux searches through the directories stored in the $PATH environment variable from left to right for the executable of that specific command.
You can view the directories in the $PATH as follows:
```
$ echo $PATH
/home/aaronkilik/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
```
In the above order, the directory /home/aaronkilik/bin will be searched first, followed by /usr/local/sbin and so on; the order is significant in the search process.
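As a quick check of this search order (a minimal example; cp is just an arbitrary command and the output will vary from system to system), you can ask the shell which executable it would actually run:
```
$ type -a cp
cp is /bin/cp
$ command -v cp
/bin/cp
```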
Examples of file system commands in the /bin directory:
```
$ ll /bin/
```
Sample Output
```
total 16284
drwxr-xr-x 2 root root 4096 Jul 31 16:30 ./
drwxr-xr-x 23 root root 4096 Jul 31 16:29 ../
-rwxr-xr-x 1 root root 6456 Apr 14 18:53 archdetect*
-rwxr-xr-x 1 root root 1037440 May 17 16:15 bash*
-rwxr-xr-x 1 root root 520992 Jan 20 2016 btrfs*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-calc-size*
lrwxrwxrwx 1 root root 5 Jul 31 16:19 btrfsck -> btrfs*
-rwxr-xr-x 1 root root 278376 Jan 20 2016 btrfs-convert*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-debug-tree*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-find-root*
-rwxr-xr-x 1 root root 270136 Jan 20 2016 btrfs-image*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfs-map-logical*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-select-super*
-rwxr-xr-x 1 root root 253816 Jan 20 2016 btrfs-show-super*
-rwxr-xr-x 1 root root 249464 Jan 20 2016 btrfstune*
-rwxr-xr-x 1 root root 245368 Jan 20 2016 btrfs-zero-log*
-rwxr-xr-x 1 root root 31288 May 20 2015 bunzip2*
-rwxr-xr-x 1 root root 1964536 Aug 19 2015 busybox*
-rwxr-xr-x 1 root root 31288 May 20 2015 bzcat*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzcmp -> bzdiff*
-rwxr-xr-x 1 root root 2140 May 20 2015 bzdiff*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzegrep -> bzgrep*
-rwxr-xr-x 1 root root 4877 May 20 2015 bzexe*
lrwxrwxrwx 1 root root 6 Jul 31 16:19 bzfgrep -> bzgrep*
-rwxr-xr-x 1 root root 3642 May 20 2015 bzgrep*
```
### 2. Linux Aliases
These are user-defined commands; they are created using the alias shell built-in command, and contain other shell commands with some options and arguments. The idea is basically to use new and short names for lengthy commands.
The syntax for creating an alias is as follows:
```
$ alias newcommand='command -options'
```
To list all aliases on your system, issue the command below:
```
$ alias -p
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
```
To create a new alias in Linux, go through the examples below.
```
$ alias update='sudo apt update'
$ alias upgrade='sudo apt dist-upgrade'
$ alias -p | grep 'up'
```
![](http://www.tecmint.com/wp-content/uploads/2016/08/Create-Aliase-in-Linux.png)
However, the aliases we have created above only work temporarily; once the system is restarted, they will be gone. You can set permanent aliases in your `.bashrc` file as shown below.
![](http://www.tecmint.com/wp-content/uploads/2016/08/Set-Linux-Aliases-Permanent.png)
After adding them, run the command below to activate them.
```
$ source ~/.bashrc
```
### 3. Linux Shell Reserved Words
In shell programming, words such as if, then, fi, for, while, case, esac, else, until and many others are shell reserved words. As the description implies, they have specialized meaning to the shell.
You can list all Linux shell keywords using the type command as shown:
```
$ type if then fi for while case esac else until
if is a shell keyword
then is a shell keyword
fi is a shell keyword
for is a shell keyword
while is a shell keyword
case is a shell keyword
esac is a shell keyword
else is a shell keyword
until is a shell keyword
```
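As a tiny illustration (just an example one-liner), these keywords can be used directly on the command line, not only in scripts:
```
$ if true; then echo "keywords work on the command line too"; fi
keywords work on the command line too
```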
Suggested Read: 10 Useful Linux Chaining Operators with Practical Examples
### 4. Linux Shell Functions
A shell function is a group of commands that are executed collectively within the current shell. Functions help to carry out a specific task in a shell script. The conventional form of writing shell functions in a script is:
```
function_name() {
command1
command2
…….
}
```
Alternatively,
```
function function_name {
command1
command2
…….
}
```
Lets take a look at how to write shell functions in a script named shell_functions.sh.
```
#!/bin/bash
#write a shell function to update and upgrade installed packages
upgrade_system(){
sudo apt update;
sudo apt dist-upgrade;
}
#execute function
upgrade_system
```
Instead of executing the two commands sudo apt update and sudo apt dist-upgrade from the command line, we have written a simple shell function to execute them as a single command, upgrade_system, within a script.
Save the file and then make the script executable. Finally, run it as below:
```
$ chmod +x shell_functions.sh
$ ./shell_functions.sh
```
![](http://www.tecmint.com/wp-content/uploads/2016/08/Linux-Shell-Functions-Script.png)
### 5. Linux Shell Built-in Commands
These are Linux commands that are built into the shell, thus you cannot find them within the file system. They include pwd, cd, bg, alias, history, type, source, read, exit and many others.
You can list or check Linux built-in commands using the type command as shown:
```
$ type pwd
pwd is a shell builtin
$ type cd
cd is a shell builtin
$ type bg
bg is a shell builtin
$ type alias
alias is a shell builtin
$ type history
history is a shell builtin
```
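Because built-ins live inside the shell itself rather than as separate binaries, bash documents them through its own help built-in (a quick example; the output may vary slightly between bash versions):
```
$ help pwd
pwd: pwd [-LP]
    Print the name of the current working directory.
```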
Learn about some Linux built-in Commands usage:
- [15 pwd Command Examples in Linux][2]
- [15 cd Command Examples in Linux][3]
- [Learn The Power of Linux history Command][4]
### Conclusion
As a Linux user, it is always important to know the type of command you are running. I believe that, with the precise and simple-to-understand explanation above, including a few relevant illustrations, you now have a good understanding of the [various categories of Linux commands][5].
You can also get in touch through the comment section below with any questions or supplementary ideas that you would like to offer us.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/different-types-of-linux-shells/
[2]: http://www.tecmint.com/pwd-command-examples/
[3]: http://www.tecmint.com/cd-command-in-linux/
[4]: http://www.tecmint.com/history-command-examples/
[5]: http://www.tecmint.com/category/linux-commands/

View File

@ -0,0 +1,78 @@
translating by ucasFL
4 Best Linux Boot Loaders
====
When you turn on your machine, immediately after POST (Power On Self Test) is completed successfully, the BIOS locates the configured bootable media and reads some instructions from the master boot record (MBR), which occupies the first 512 bytes of the bootable media, or from the GUID partition table (GPT). The MBR contains two important sets of information: the boot loader and the partition table.
### What is a Boot Loader?
A boot loader is a small program stored in the MBR or GUID partition table that helps to load an operating system into memory. Without a boot loader, your operating system can not be loaded into memory.
There are several boot loaders we can install together with Linux on our systems and in this article, we shall briefly talk about a handful of the best Linux boot loaders to work with.
### 1. GNU GRUB
GNU GRUB is a popular and probably the most used multiboot Linux boot loader available, based on the original GRUB (GRand Unified Bootloader), which was created by Erich Boleyn. It comes with several improvements, new features and bug fixes as enhancements of the original GRUB program.
Importantly, GRUB 2 has now replaced the original GRUB, which was renamed GRUB Legacy and is no longer actively developed; however, it can still be used for booting older systems since bug fixes are still ongoing. A quick usage example follows the feature list below.
GRUB has the following prominent features:
- Supports multiboot
- Supports multiple hardware architectures and operating systems such as Linux and Windows
- Offers a Bash-like interactive command line interface for users to run GRUB commands as well as interact with configuration files
- Enables access to GRUB editor
- Supports setting of passwords with encryption for security
- Supports booting from a network combined with several other minor features
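As a quick taste of working with GRUB 2 (a hedged example; the command name and config path vary by distribution, and Debian/Ubuntu also ship an update-grub wrapper that does the same thing), regenerating the boot menu after changing its configuration typically looks like this:
```
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
```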
Visit Homepage: <https://www.gnu.org/software/grub/>
### 2. LILO (Linux Loader)
LILO is a simple yet powerful and stable Linux boot loader. With the growing popularity and use of GRUB, which has come with numerous improvements and powerful features, LILO has become less popular among Linux users.
While it loads, the word “LILO” is displayed on the screen and each letter appears before or after a particular event has occurred. However, development of LILO was stopped in December 2015; it has a number of characteristics and limitations, as listed below:
- Does not offer an interactive command line interface
- Supports several error codes
- Offers no support for booting from a network
- All its files are stored in the first 1024 cylinders of a drive
- Faces limitations with BTRFS, GPT and RAID, plus many more.
Visit Homepage: <http://lilo.alioth.debian.org/>
### 3. BURG New Boot Loader
Based on GRUB, BURG is a relatively new Linux boot loader. Because it is derived from GRUB, it ships with some of the primary GRUB features; nonetheless, it also offers remarkable features such as a new object format to support multiple platforms including Linux, Windows, Mac OS, FreeBSD and beyond.
Additionally, it supports a highly configurable text and graphical mode boot menu, stream plus planned future improvements for it to work with various input/output devices.
Visit Homepage: <https://launchpad.net/burg>
### 4. Syslinux
Syslinux is an assortment of lightweight boot loaders that enable booting from CD-ROMs, from a network and so on. It supports filesystems such as FAT for MS-DOS, and ext2, ext3, ext4 for Linux. It also supports uncompressed single-device Btrfs.
Note that Syslinux only accesses files in its own partition, therefore, it does not offer multi-filesystem boot capabilities.
Visit Homepage: <http://www.syslinux.org/wiki/index.php?title=The_Syslinux_Project>
### Conclusion
A boot loader allows you to manage multiple operating systems on your machine and select which one to use at a particular time, without it, your machine can not load the kernel and the rest of the operating system files.
Have we missed any tip-top Linux boot loader here? If so, let us know using the comment form below and suggest any commendable boot loaders that support the Linux operating system.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/best-linux-boot-loaders/

View File

@ -1,263 +0,0 @@
Ohm: JavaScript Parser that Creates a Language in 200 Lines of Code
===========
Parsers are incredibly useful software libraries. While conceptually simple, they can be challenging to implement and are often considered a dark art in computer science. In this blog series, I'll show you why you don't need to be Harry Potter to master parsers. But bring your wand just in case.
We'll explore a new open source JavaScript library called Ohm that makes parsers easier to build and to reuse. In this series, we use Ohm to recognize numbers, build a calculator, and more. By the end of this series you will have created a complete programming language in under 200 lines of code. This powerful tool will let you do things that you might have thought impossible otherwise.
### Why Parsers are Hard
Parsers are useful. There are lots of times you might need a parser. A new file format might come along that you need to process and no one else has written a library for it yet. Or maybe you find files in an old file format and the existing parsers arent built for the platform you need. Ive seen this happen over and over. Code will come and go but data is forever.
Fundamentally parsers are simple: just transform one data structure into another. So why does it feel like you need to be Dumbledore to figure them out?
The challenge is that parsers have historically been surprisingly difficult to write, and most of the existing tools are old and assume a fair amount of arcane computer science knowledge. If you took a compilers class in college the textbook may well have techniques from the 1970s. Fortunately, parser technology has improved a great deal since then.
Typically a parser is created by defining what you want to parse using a special syntax called a formal grammar. Then you feed this into several tools like Bison and Yacc, which generate a bunch of C code that you then need to modify or link into whatever programming language you are actually writing in. The other option is to manually write a parser in your preferred language, which is slow and error prone. That's a lot of extra work before you get to actually use the parser.
Imagine if your description of the thing you wanted to parse, the grammar, was also the parser? What if you could just run the grammar directly, then add hooks only where you want? Thats what Ohm does.
### Introducing Ohm
[Ohm][1] is a new kind of parsing system. While it resembles the grammars you may have seen in text books its a lot more powerful and easier to use. With Ohm you write your format definition in a very flexible syntax in a .ohm file, then attach semantic meaning to it using your host language. For this blog we will use JavaScript as the host language.
Ohm is based on years of research into making parsers easier and more flexible. VPRIs [STEPS program][2] (pdf) created many custom languages for specific tasks (like a fully parallelizable graphics renderer in 400 lines of code!) using Ohms precursor [OMeta][3].
Ohm has many interesting features and notations, but rather than explain them all I think we should just dive in and build something.
### Parsing Integers
Let's parse some numbers. It seems like this would be easy: just look for adjacent digits in a string of text. But let's try to handle all forms of numbers: integers and floating point. Hex and octal. Scientific notation. A leading negative digit. Parsing numbers is easy. Doing it right is hard.
To build this code by hand would be difficult and buggy, with lots of special cases which sometimes conflict with each other. A regular expression could probably do it, but would be ugly and hard to maintain. Lets do it with Ohm instead.
Every parser in Ohm involves three parts: the grammar, the semantics, and the tests. I usually pick part of the problem and write tests for it, then build enough of the grammar and semantics to make the tests pass. Then I pick another part of the problem, add more tests, update the grammar and semantics, while making sure all of the tests continue to pass. Even with our new powerful tool, writing parsers is still conceptually complicated. Tests are the only way to build parsers in a reasonable manner. Now lets dig in.
We'll start with an integer number. An integer is composed of a sequence of digits next to each other. Let's put this into a file called grammar.ohm:
```
CoolNums {
// just a basic integer
Number = digit+
}
```
This creates a single rule called Number which matches one or more digits. The + means one or more, just like in a regular expression. This rule will match if there is one digit or more than one digit. It won't match if there are zero digits or something other than a digit. A digit is defined as the characters for the numbers 0 to 9. digit is also a rule like Number is, but it's one of Ohm's built-in rules so we don't have to define it ourselves. We could override it if we wanted to, but that wouldn't make sense in this case. After all, we don't plan to invent a new form of number (yet!)
Now we can read in this grammar and process it with the Ohm library.
Put this into test1.js
```
var ohm = require('ohm-js');
var fs = require('fs');
var assert = require('assert');
var grammar = ohm.grammar(fs.readFileSync('grammar.ohm').toString());
```
The ohm.grammar call will read in the file and parse it into a grammar object. Now we can add semantics. Add this to your Javascript file:
```
var sem = grammar.createSemantics().addOperation('toJS', {
Number: function(a) {
return parseInt(this.sourceString,10);
}
});
```
This creates a set of semantics called sem with the operation toJS. The semantics are essentially a bunch of functions matching up to each rule in the grammar. Each function will be called when the corresponding rule in the grammar is parsed. The Number function above will be called when the Number rule in the grammar is parsed. The grammar defines what chunks are in the language. The semantics define what to do with chunks once theyve been parsed.
Our semantics functions can do anything we want, such as print debugging information, create objects, or recursively call toJS on any sub-nodes. In this case we just want to convert the matched text into a real Javascript integer.
All semantic functions have an implicit this object with some useful properties. The source property represents the part of the input text that matches this node. this.sourceString is the matched input as a string. Calling the built in JavaScript function parseInt turns this string to a number. The 10 argument to parseInt tells JavaScript that we are giving it a number in base ten. If we leave it out then JS will assume its base 10 anyway, but Ive included it because later on we will support base 16 (hex) numbers, so its good to be explicit.
Now that we have some semantics, lets actually parse something to see if our parser works. How do we know our parser works? By testing it. Lots and lots of testing. Every possible edge case needs a test.
With the standard assert API, here is a test function which matches some input then applies our semantics to it to turn it into a number, then compares the number with the expected input.
```
function test(input, answer) {
var match = grammar.match(input);
if(match.failed()) return console.log("input failed to match " + input + match.message);
var result = sem(match).toJS();
assert.deepEqual(result,answer);
console.log('success = ', result, answer);
}
```
Thats it. Now we can write a bunch of tests for different numbers. If the match fails then our script will throw an exception. If not it will print success. Lets try it out. Add this to the script
```
test("123",123);
test("999",999);
test("abc",999);
```
Then run the script with node test1.js
Your output should look like this:
```
success = 123 123
success = 999 999
input failed to match abcLine 1, col 1:
> 1 | abc
^
Expected a digit
```
Cool. The first two succeed and the third one fails, as it should. Even better, Ohm automatically gave us a nice error message pointing to the match failure.
### Floating Point
Our parser works, but it doesn't do anything very interesting. Let's extend it to parse both integers and floating point numbers. Change the grammar.ohm file to look like this:
```
CoolNums {
// just a basic integer
Number = float | int
int = digit+
float = digit+ "." digit+
}
```
This changes the Number rule to point to either a float or an int. The | means or. We read this as “a Number is composed of a float or an int.” Then int is defined as digit+ and float is defined as digit+ followed by a period followed by another digit+. This means there must be at least one digit before the period and at least one after. If there is not a period then it won't be a float at all, so int will match instead.
Now let's go look at our semantic actions again. Since we now have new rules we need new action functions: one for int and one for float.
```
var sem = grammar.createSemantics().addOperation('toJS', {
Number: function(a) {
return a.toJS();
},
int: function(a) {
console.log("doing int", this.sourceString);
return parseInt(this.sourceString,10);
},
float: function(a,b,c) {
console.log("doing float", this.sourceString);
return parseFloat(this.sourceString);
}
});
```
There are two things to note here. First, int, float and Number all have matching grammar rules and functions. However, the action for Number no longer does anything interesting. It receives the child node a and returns the result of toJS on the child. In other words the Number rule simply returns whatever its child rule matched. Since this is the default behavior of any rule in Ohm we can actually just leave the Number action out. Ohm will do it for us.
Second, int has one argument a while float has three: a, b, and c. This is because of the rule's arity. Arity means how many arguments a rule has. If we look back at the grammar, the rule for float is
```
float = digit+ "." digit+
```
The float rule is defined by three parts: the first digit+, the "." and the second digit+. All three of those parts will be passed as parameters to the action function for float. Thus float must have three arguments or else the Ohm library will give us an error. In this case we don't care about the arguments because we will just grab the input string directly, but we still need the arguments listed to avoid compiler errors. Later on we will actually use some of these parameters.
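To make that concrete, here is a hedged sketch of an alternative float action that reads its child nodes instead of this.sourceString (a drop-in replacement for the float action above, with the same behavior):
```
float: function(whole, dot, frac) {
    // whole and frac are the two digit+ nodes, dot is the literal "."
    return parseFloat(whole.sourceString + "." + frac.sourceString);
}
```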
Now we can add a few more tests for our new floating point number support.
```
test("123",123);
test("999",999);
//test("abc",999);
test('123.456',123.456);
test('0.123',0.123);
test('.123',0.123);
```
Note that the last test will fail. A floating point number must begin with a digit, even if it's just zero: .123 is not valid in our grammar (JSON, for example, has the same rule).
### Hexadecimal
So now we have integers and floats, but there are a few other number syntaxes that might be good to support: hexadecimal and scientific notation. Hex numbers are integers in base sixteen. The digits can be from 0 to 9 and A to F. Hex is often used in computer science when working with binary data because you can represent 0 through 255 exactly with only two digits.
In most C derived programming languages (including JavaScript) hex numbers are preceded by `0x` to indicate to the compiler that what follows is a hexadecimal number. To support hex numbers in our parser we just need to add another rule.
```
Number = hex | float | int
int = digit+
float = digit+ "." digit+
hex = "0x" hexDigit+
hexDigit := "0".."9" | "a".."f" | "A".."F"
```
I've actually added two rules. `hex` says that a hex number is the string `0x` followed by one or more `hexDigits`. A `hexDigit` is any character from 0 to 9, a to f, or A to F (covering both upper and lower case). I also modified Number to recognize hex as another possible option. Now we just need another action rule for hex.
```
hex: function(a,b) {
return parseInt(this.sourceString,16);
}
```
Notice that in this case we are passing `16` as the radix to `parseInt` because we want JavaScript to know that this is a hexadecimal number.
I skipped over something important to notice. The rule for `hexDigit` looks like this.
```
hexDigit := "0".."9" | "a".."f" | "A".."F"
```
Notice that I used `:=` instead of `=`. In Ohm, the `:=` is used when you are overriding a rule. It turns out Ohm already has a default rule for `hexDigit`, just as it does for `digit`, `space` and a bunch of others. If I had used = then Ohm would have reported an error. This is a check so I can't override a rule unintentionally. Since our new hexDigit rule is actually the same as Ohm's built-in rule, we can just comment it out and let Ohm do it. I left the rule in just so we can see what's really going on.
Now we can add some more tests and see that our hex digits really work:
```
test('0x456',0x456);
test('0xFF',255);
```
### Scientific Notation
Finally, let's support scientific notation. This is for very large or small numbers like 1.8 x 10^3. In most programming languages, numbers in scientific notation would be written as 1.8e3 for 1800 or 1.8e-3 for 0.0018. Let's add another couple of rules to support this exponent notation.
```
float = digit+ "." digit+ exp?
exp = "e" "-"? digit+
```
This adds the exp rule to the end of the float rule with a question mark. The ? means zero or one, so exp is optional but there can't be more than one. Adding the exp rule also changes the arity of the float rule, so we need to add another argument to the float action, even if we don't use it.
```
float: function(a,b,c,d) {
console.log("doing float", this.sourceString);
return parseFloat(this.sourceString);
},
```
And now our new tests can pass:
```
test('4.8e10',4.8e10);
test('4.8e-10',4.8e-10);
```
### Conclusion
Ohm is a great tool for building parsers because it's easy to get started and you can incrementally add to it. It also has other great features that I didn't cover today, like a debugging visualizer and sub-classing.
So far we have used Ohm to translate character strings into JavaScript numbers, and often Ohm is used for this very purpose: converting one representation to another. However, Ohm can be used for a lot more. By putting in a different set of semantic actions you can use Ohm to actually process and calculate things. This is one of Ohm's magic features. A single grammar can be used with many different semantics.
In the next article of this series I'll show you how to not just parse numbers but actually evaluate math expressions like `(4.8+5 * (238-68)/2)`, just like a real calculator.
Bonus challenge: Can you extend the grammar with support for octal numbers? These are numbers in base 8 and can be represented with only the digits 0 to 7, preceded by a zero and the letter o. See if you are right with these test cases. Next time I'll show you the answer.
```
test('0o77',7*8+7);
test('0o23',0o23);
```
--------------------------------------------------------------------------------
via: https://www.pubnub.com/blog/2016-08-30-javascript-parser-ohm-makes-creating-a-programming-language-easy/?utm_source=javascriptweekly&utm_medium=email
作者:[Josh Marinacci][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.pubnub.com/blog/author/josh/
[1]: https://github.com/cdglabs/ohm
[2]: http://www.vpri.org/pdf/tr2012001_steps.pdf
[3]: http://tinlizzie.org/ometa/

View File

@ -1,81 +0,0 @@
jiajia9linuxer
QOWNNOTES IS A NOTE TAKING AND TODO LIST APP THAT INTEGRATES WITH OWNCLOUD
===============
[QOwnNotes][1] is a free, open source note taking and todo list application available for Linux, Windows, and Mac.
The application saves your notes as plain-text files, and it features Markdown support and tight ownCloud integration.
![](https://2.bp.blogspot.com/-a2vsrOG0zFk/V81gyHWlaaI/AAAAAAAAYZs/uzY16JtNcT8bnje1rTKJx1135WueY6V9gCLcB/s400/qownnotes.png)
What makes QOwnNotes stand out is its ownCloud integration (which is optional). Using the ownCloud Notes app, you are able to edit and search notes from the web, or from mobile devices (by using an app like [CloudNotes][2]).
Furthermore, connecting QOwnNotes with your ownCloud account allows you to share notes and access / restore previous versions (or trashed files) of your notes from the ownCloud server.
In the same way, QOwnNotes can also integrate with the ownCloud tasks or Tasks Plus apps.
In case you're not familiar with [ownCloud][3], this is a free software alternative to proprietary web services such as Dropbox, Google Drive, and others, which can be installed on your own server. It comes with a web interface that provides access to file management, calendar, image gallery, music player, document viewer, and much more. The developers also provide desktop sync clients, as well as mobile apps.
Since the notes are saved as plain text, they can be synchronized across devices using other cloud storage services, like Dropbox, Google Drive, and so on, but this is not done directly from within the application.
As a result, the features I mentioned above, like restoring previous note versions, are only available with ownCloud (although Dropbox, and others, do provide access to previous file revisions, but you won't be able to access this directly from QOwnNotes).
As for the QOwnNotes note taking features, the app supports Markdown (with a built-in Markdown preview mode), tagging notes, searching in tags and notes, adding links to notes, and inserting images:
![](https://4.bp.blogspot.com/-SuBhC43gzkY/V81oV7-zLBI/AAAAAAAAYZ8/l6nLQQSUv34Y7op_Xrma8XYm6EdWrhbIACLcB/s400/qownnotes_2.png)
Hierarchical note tagging and note subfolders are also supported.
The todo manager feature is pretty basic and could use some improvements, as it currently opens in a separate window, and it doesn't use the same editor as the notes, not allowing you to insert images, or use Markdown.
![](https://3.bp.blogspot.com/-AUeyZS3s_ck/V81opialKtI/AAAAAAAAYaA/xukIiZZUdNYBVZ92xgKEsEFew7q961CDwCLcB/s400/qownnotes-tasks.png)
It does allow you to search your todo items, set item priority, add reminders, and show completed items. Also, todo items can be inserted into notes.
The application user interface is customizable, allowing you to increase or decrease the font size, toggle panes (Markdown preview, note edit and tag panes), and more. A distraction-free mode is also available:
![](https://4.bp.blogspot.com/-Pnzw1wZde50/V81rrE6mTWI/AAAAAAAAYaM/0UZnH9ktbAgClkuAk1g6fgXK87kB_Bh0wCLcB/s400/qownnotes-distraction-free.png)
From the application settings, you can enable the dark mode (this was buggy in my test under Ubuntu 16.04 - some toolbar icons were missing), change the toolbar icon size, fonts, and color scheme (light or dark):
![](https://1.bp.blogspot.com/-K1MGlXA8sxs/V81rv3fwL6I/AAAAAAAAYaQ/YDhhhnbJ9gY38B6Vz1Na_pHLCjLHhPWiwCLcB/s400/qownnotes-settings.png)
Other QOwnNotes features include encryption support (notes can only be decrypted in QOwnNotes), customizable keyboard shortcuts, export notes to PDF or Markdown, customizable note saving interval, and more.
Check out the QOwnNotes [homepage][11] for a complete list of features.
### Download QOwnNotes
For how to install QownNotes, see its [installation][4] page (packages / repositories available for Debian, Ubuntu, Linux Mint, openSUSE, Fedora, Arch Linux, KaOS, Gentoo, Slakware, CentOS, as well as Mac OSX and Windows).
A QOwnNotes [snap][5] package is also available (in Ubuntu 16.04 and newer, you should be able to install it directly from Ubuntu Software).
To integrate QOwnNotes with ownCloud you'll need [ownCloud server][6], as well as [Notes][7], [QOwnNotesAPI][8], and [Tasks][9] or [Tasks Plus][10] ownCloud apps. These can be installed from the ownCloud web interface, without having to download anything manually.
Note that the QOwnNotesAPI and Notes ownCloud apps are listed as experimental, so you'll need to enable experimental apps to be able to find and install them. This can be done from the ownCloud web interface, under Apps, by clicking on the settings icon in the lower left-hand side corner.
--------------------------------------------------------------------------------
via: http://www.webupd8.org/2016/09/qownnotes-is-note-taking-and-todo-list.html
作者:[Andrew][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.webupd8.org/p/about.html
[1]: http://www.qownnotes.org/
[2]: http://peterandlinda.com/cloudnotes/
[3]: https://owncloud.org/
[11]: http://www.qownnotes.org/
[4]: http://www.qownnotes.org/installation
[5]: https://uappexplorer.com/app/qownnotes.pbek
[6]: https://download.owncloud.org/download/repositories/stable/owncloud/
[7]: https://github.com/owncloud/notes
[8]: https://github.com/pbek/qownnotesapi
[9]: https://apps.owncloud.com/content/show.php/Tasks?content=164356
[10]: https://apps.owncloud.com/content/show.php/Tasks+Plus?content=170561

View File

@ -0,0 +1,308 @@
17 tar command practical examples in Linux
=====
Tar (tape archive) is the most widely used command on Unix-like operating systems for archiving multiple files and folders into a single archive file, and that archive file can be further compressed with gzip or bzip2. In other words, the tar command is used to take backups by archiving multiple files and directories into a single tar (archive) file; later on, those files and directories can be extracted from the compressed tar file.
In this article we will discuss 17 practical examples of tar command in Linux.
Syntax of tar command:
```
# tar <options> <files>
```
Some of the commonly used options of the tar command are listed below:
![](http://www.linuxtechi.com/wp-content/uploads/2016/09/tar-command-options.jpg)
Note: the hyphen (-) in tar command options is optional.
### Example:1 Create a tar archive file
Let's create a tar file of the /etc directory and the /root/anaconda-ks.cfg file.
```
[root@linuxtechi ~]# tar -cvf myarchive.tar /etc /root/anaconda-ks.cfg
```
The above command will create a tar file named “myarchive.tar” in the current folder. The tar file contains all the files and directories of the /etc folder and the anaconda-ks.cfg file.
In the tar command, the -c option specifies that a tar file should be created, -v is used for verbose output, and the -f option is used to specify the archive file name.
```
[root@linuxtechi ~]# ls -l myarchive.tar
-rw-r--r--. 1 root root 22947840 Sep 7 00:24 myarchive.tar
[root@linuxtechi ~]#
```
### Example:2 List the contents of tar archive file.
Using the -t option in the tar command, we can view the contents of a tar file without extracting it.
```
[root@linuxtechi ~]# tar -tvf myarchive.tar
```
We can also list a specific file or directory from the tar file. In the example below I am checking whether the anaconda-ks.cfg file is present in the tar file or not.
```
[root@linuxtechi ~]# tar -tvf myarchive.tar root/anaconda-ks.cfg
-rw------- root/root 953 2016-08-24 01:33 root/anaconda-ks.cfg
[root@linuxtechi ~]#
```
### Example:3 Append or add files to end of archive or tar file.
The -r option in the tar command is used to append or add a file to an existing tar file. Let's add the /etc/fstab file to data.tar:
```
[root@linuxtechi ~]# tar -rvf data.tar /etc/fstab
```
Note: We can't append files or directories to a compressed tar file.
### Example:4 Extracting files and directories from tar file.
The -x option is used to extract files and directories from a tar file. Let's extract the contents of the tar file created above:
```
[root@linuxtechi ~]# tar -xvf myarchive.tar
```
This command will extract all the files and directories of the myarchive tar file into the current working directory.
### Example:5 Extracting tar file to a particular folder.
In case you want to extract a tar file to a particular folder or directory, use the -C option followed by the path of the folder.
```
[root@linuxtechi ~]# tar -xvf myarchive.tar -C /tmp/
```
### Example:6 Extracting particular file or directory from tar file.
Let's assume you want to extract only the anaconda-ks.cfg file from the tar file, into the /tmp folder.
Syntax:
```
# tar xvf {tar-file } {file-to-be-extracted } -C {path-where-to-extract}
[root@linuxtechi tmp]# tar -xvf /root/myarchive.tar root/anaconda-ks.cfg -C /tmp/
root/anaconda-ks.cfg
[root@linuxtechi tmp]# ls -l /tmp/root/anaconda-ks.cfg
-rw-------. 1 root root 953 Aug 24 01:33 /tmp/root/anaconda-ks.cfg
[root@linuxtechi tmp]#
```
### Example:7 Creating and compressing tar file (tar.gz or .tgz)
Let's assume that we want to create a tar file of the /etc and /opt folders and also compress it using the gzip tool. This can be achieved using the -z option in the tar command. Extensions of such tar files will be either .tar.gz or .tgz.
```
[root@linuxtechi ~]# tar -zcpvf myarchive.tar.gz /etc/ /opt/
```
Or
```
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/
```
### Example:8 Creating and compressing tar file (tar.bz2 or .tbz2)
Let's assume that we want to create a bzip2-compressed tar file of the /etc and /opt folders. This can be achieved by using the -j option in the tar command. Extensions of such tar files will be either .tar.bz2 or .tbz2.
```
[root@linuxtechi ~]# tar -jcpvf myarchive.tar.bz2 /etc/ /opt/
```
Or
```
[root@linuxtechi ~]# tar -jcpvf myarchive.tbz2 /etc/ /opt/
```
### Example:9 Excluding particular files or file types while creating tar file.
Using the “--exclude” option in the tar command, we can exclude particular files or file types while creating a tar file. Let's assume we want to exclude HTML files while creating the compressed tar file.
```
[root@linuxtechi ~]# tar -zcpvf myarchive.tgz /etc/ /opt/ --exclude=*.html
```
### Example:10 Listing the contents of tar.gz or .tgz file
The contents of a tar file with the extension .tar.gz or .tgz are viewed using the -t option. An example is shown below:
```
[root@linuxtechi ~]# tar -tvf myarchive.tgz | more
.............................................
drwxr-xr-x root/root 0 2016-09-07 08:41 etc/
-rw-r--r-- root/root 541 2016-08-24 01:23 etc/fstab
-rw------- root/root 0 2016-08-24 01:23 etc/crypttab
lrwxrwxrwx root/root 0 2016-08-24 01:23 etc/mtab -> /proc/self/mounts
-rw-r--r-- root/root 149 2016-09-07 08:41 etc/resolv.conf
drwxr-xr-x root/root 0 2016-09-06 03:55 etc/pki/
drwxr-xr-x root/root 0 2016-09-06 03:15 etc/pki/rpm-gpg/
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-rw-r--r-- root/root 1004 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-7
-rw-r--r-- root/root 1690 2015-12-09 04:59 etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Testing-7
-rw-r--r-- root/root 3140 2015-09-15 06:53 etc/pki/rpm-gpg/RPM-GPG-KEY-foreman
..........................................................
```
### Example:11 Listing the contents of tar.bz2 or .tbz2 file.
The contents of a tar file with the extension .tar.bz2 or .tbz2 are viewed using the -t option. An example is shown below:
```
[root@linuxtechi ~]# tar -tvf myarchive.tbz2 | more
........................................................
rwxr-xr-x root/root 0 2016-08-24 01:25 etc/pki/java/
lrwxrwxrwx root/root 0 2016-08-24 01:25 etc/pki/java/cacerts -> /etc/pki/ca-trust/extracted/java/cacerts
drwxr-xr-x root/root 0 2016-09-06 02:54 etc/pki/nssdb/
-rw-r--r-- root/root 65536 2010-01-12 15:09 etc/pki/nssdb/cert8.db
-rw-r--r-- root/root 9216 2016-09-06 02:54 etc/pki/nssdb/cert9.db
-rw-r--r-- root/root 16384 2010-01-12 16:21 etc/pki/nssdb/key3.db
-rw-r--r-- root/root 11264 2016-09-06 02:54 etc/pki/nssdb/key4.db
-rw-r--r-- root/root 451 2015-10-21 09:42 etc/pki/nssdb/pkcs11.txt
-rw-r--r-- root/root 16384 2010-01-12 15:45 etc/pki/nssdb/secmod.db
drwxr-xr-x root/root 0 2016-08-24 01:26 etc/pki/CA/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/certs/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/crl/
drwxr-xr-x root/root 0 2015-06-29 08:48 etc/pki/CA/newcerts/
drwx------ root/root 0 2015-06-29 08:48 etc/pki/CA/private/
drwx------ root/root 0 2015-11-20 06:34 etc/pki/rsyslog/
drwxr-xr-x root/root 0 2016-09-06 03:44 etc/pki/pulp/
..............................................................
```
### Example:12 Extracting or unzipping tar.gz or .tgz files.
Tar files with the extension .tar.gz or .tgz are extracted or unzipped with the -x and -z options. An example is shown below:
```
[root@linuxtechi ~]# tar -zxpvf myarchive.tgz -C /tmp/
```
The above command will extract the tar file under the /tmp folder.
Note: Nowadays the tar command detects the compression type automatically while extracting, which means it is optional for us to specify the compression type in the tar command. An example is shown below:
```
[root@linuxtechi ~]# tar -xpvf myarchive.tgz -C /tmp/
```
### Example:13 Extracting or unzipping tar.bz2 or .tbz2 files
Tar files with the extension .tar.bz2 or .tbz2 are extracted with the -j and -x options. An example is shown below:
```
[root@linuxtechi ~]# tar -jxpvf myarchive.tbz2 -C /tmp/
```
Or
```
[root@linuxtechi ~]# tar xpvf myarchive.tbz2 -C /tmp/
```
### Example:14 Scheduling backup with tar command
There are real-world scenarios where we have to create tar files of particular files and directories for backup purposes on a daily basis. Let's suppose we have to take a backup of the whole /opt folder every day; this can be achieved by creating a cron job for the tar command. An example is shown below:
```
[root@linuxtechi ~]# tar -zcvf optbackup-$(date +%Y-%m-%d).tgz /opt/
```
Create a cron job for the above command, as sketched below.
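A hedged sketch of such a cron entry, added via `crontab -e` (the schedule, the /backup destination and the tar path are assumptions; adjust them to your environment):
```
# Run the /opt backup every day at 02:30 (% must be escaped as \% inside a crontab)
30 2 * * * /bin/tar -zcvf /backup/optbackup-$(date +\%Y-\%m-\%d).tgz /opt/
```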
### Example:15 Creating compressed archive or tar file with the -T and -X options.
There are real-world scenarios where we want the tar command to take its input from a file that lists the paths of the files and directories to be archived and compressed; there might also be some files that we would like to exclude from the archive, listed in another input file.
In the tar command, the input file is specified after the -T option, and the file containing the exclude list is specified after the -X option.
Let's suppose we want to archive and compress the directories /etc, /opt and /home, and want to exclude the files /etc/sysconfig/kdump and /etc/sysconfig/foreman. Create the text files /root/tar-include and /root/tar-exclude and put the following contents in the respective files.
```
[root@linuxtechi ~]# cat /root/tar-include
/etc
/opt
/home
[root@linuxtechi ~]#
[root@linuxtechi ~]# cat /root/tar-exclude
/etc/sysconfig/kdump
/etc/sysconfig/foreman
[root@linuxtechi ~]#
```
Now run the command below to create and compress the archive file.
```
[root@linuxtechi ~]# tar zcpvf mybackup-$(date +%Y-%m-%d).tgz -T /root/tar-include -X /root/tar-exclude
```
### Example:16 View the size of .tar, .tgz and .tbz2 files.
Use the commands below to view the size of tar and compressed tar files.
```
[root@linuxtechi ~]# tar -czf - data.tar | wc -c
427
[root@linuxtechi ~]# tar -czf - mybackup-2016-09-09.tgz | wc -c
37956009
[root@linuxtechi ~]# tar -czf - myarchive.tbz2 | wc -c
30835317
[root@linuxtechi ~]#
```
### Example:17 Splitting big tar file into smaller files.
On Unix-like operating systems, a big file is divided or split into smaller files using the split command. A big tar file can also be divided into smaller parts using the split command.
Let's assume we want to split the mybackup-2016-09-09.tgz file into smaller parts of 6 MB each.
```
Syntax : split -b <Size-in-MB> <tar-file-name>.<extension> “prefix-name”
```
```
[root@linuxtechi ~]# split -b 6M mybackup-2016-09-09.tgz mybackup-parts
```
The above command will split the mybackup compressed tar file into smaller files, each 6 MB in size, in the current working directory, and the split file names will run from mybackup-partsaa through mybackup-partsag. If you want numeric suffixes in place of letters, use the -d option in the above split command (see the sketch after the listing below).
```
[root@linuxtechi ~]# ls -l mybackup-parts*
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaa
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsab
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsac
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsad
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsae
-rw-r--r--. 1 root root 6291456 Sep 10 03:05 mybackup-partsaf
-rw-r--r--. 1 root root 637219 Sep 10 03:05 mybackup-partsag
[root@linuxtechi ~]#
```
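For example, a hedged variation of the same command using GNU split's -d option for numeric suffixes would be:
```
[root@linuxtechi ~]# split -b 6M -d mybackup-2016-09-09.tgz mybackup-parts
```
This would produce parts named mybackup-parts00, mybackup-parts01 and so on.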
Now we can move these files to another server over the network and then merge all of them back into a single compressed tar file using the command mentioned below:
```
[root@linuxtechi ~]# cat mybackup-partsa* > mybackup-2016-09-09.tgz
[root@linuxtechi ~]#
```
That's all. I hope you liked these different examples of the tar command. Please share your feedback and comments.
--------------------------------------------------------------------------------
via: http://www.linuxtechi.com/17-tar-command-examples-in-linux/
作者:[Pradeep Kumar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.linuxtechi.com/author/pradeep/

View File

@ -0,0 +1,125 @@
15 Top Open Source Artificial Intelligence Tools
====
Artificial Intelligence (AI) is one of the hottest areas of technology research. Companies like IBM, Google, Microsoft, Facebook and Amazon are investing heavily in their own R&D, as well as buying up startups that have made progress in areas like machine learning, neural networks, natural language and image processing. Given the level of interest, it should come as no surprise that a recent [artificial intelligence report][1] from experts at Stanford University concluded that "increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030."
In a recent [article][2], we provided an overview of 45 AI projects that seem particularly promising or interesting. In this slideshow, we're focusing in on open source artificial intelligence tools, with a closer look at fifteen of the best-known open source AI projects.
![](http://www.datamation.com/imagesvr_ce/5668/00AI.jpg)
Open Source Artificial Intelligence
These open source AI applications are on the cutting edge of artificial intelligence research.
![](http://www.datamation.com/imagesvr_ce/8922/01Caffe.JPG)
### 1. Caffe
The brainchild of a UC Berkeley PhD candidate, Caffe is a deep learning framework based on expressive architecture and extensible code. Its claim to fame is its speed, which makes it popular with both researchers and enterprise users. According to its website, it can process more than 60 million images in a single day using just one NVIDIA K40 GPU. It is managed by the Berkeley Vision and Learning Center (BVLC), and companies like NVIDIA and Amazon have made grants to support its development.
![](http://www.datamation.com/imagesvr_ce/1232/02CNTK.JPG)
### 2. CNTK
Short for Computational Network Toolkit, CNTK is one of Microsoft's open source artificial intelligence tools. It boasts outstanding performance whether it is running on a system with only CPUs, a single GPU, multiple GPUs or multiple machines with multiple GPUs. Microsoft has primarily utilized it for research into speech recognition, but it is also useful for applications like machine translation, image recognition, image captioning, text processing, language understanding and language modeling.
![](http://www.datamation.com/imagesvr_ce/2901/03Deeplearning4j.JPG)
### 3. Deeplearning4j
Deeplearning4j is an open source deep learning library for the Java Virtual Machine (JVM). It runs in distributed environments and integrates with both Hadoop and Apache Spark. It makes it possible to configure deep neural networks, and it's compatible with Java, Scala and other JVM languages.
The project is managed by a commercial company called Skymind, which offers paid support, training and an enterprise distribution of Deeplearning4j.
![](http://www.datamation.com/imagesvr_ce/7269/04DMLT.JPG)
### 4. Distributed Machine Learning Toolkit
Like CNTK, the Distributed Machine Learning Toolkit (DMTK) is one of Microsoft's open source artificial intelligence tools. Designed for use in big data applications, it aims to make it faster to train AI systems. It consists of three key components: the DMTK framework, the LightLDA topic model algorithm, and the Distributed (Multisense) Word Embedding algorithm. As proof of DMTK's speed, Microsoft says that on an eight-cluster machine, it can "train a topic model with 1 million topics and a 10-million-word vocabulary (for a total of 10 trillion parameters), on a document collection with over 100-billion tokens," a feat that is unparalleled by other tools.
![](http://www.datamation.com/imagesvr_ce/2890/05H2O.JPG)
### 5. H2O
Focused more on enterprise uses for AI than on research, H2O has large companies like Capital One, Cisco, Nielsen Catalina, PayPal and Transamerica among its users. It claims to make it possible for anyone to use the power of machine learning and predictive analytics to solve business problems. It can be used for predictive modeling, risk and fraud analysis, insurance analytics, advertising technology, healthcare and customer intelligence.
It comes in two open source versions: standard H2O and Sparkling Water, which is integrated with Apache Spark. Paid enterprise support is also available.
![](http://www.datamation.com/imagesvr_ce/1127/06Mahout.JPG)
### 6. Mahout
An Apache Foundation project, Mahout is an open source machine learning framework. According to its website, it offers three major features: a programming environment for building scalable algorithms, premade algorithms for tools like Spark and H2O, and a vector-math experimentation environment called Samsara. Companies using Mahout include Adobe, Accenture, Foursquare, Intel, LinkedIn, Twitter, Yahoo and many others. Professional support is available through third parties listed on the website.
![](http://www.datamation.com/imagesvr_ce/4038/07MLlib.JPG)
### 7. MLlib
Known for its speed, Apache Spark has become one of the most popular tools for big data processing. MLlib is Spark's scalable machine learning library. It integrates with Hadoop and interoperates with both NumPy and R. It includes a host of machine learning algorithms for classification, regression, decision trees, recommendation, clustering, topic modeling, feature transformations, model evaluation, ML pipeline construction, ML persistence, survival analysis, frequent itemset and sequential pattern mining, distributed linear algebra and statistics.
![](http://www.datamation.com/imagesvr_ce/839/08NuPIC.JPG)
### 8. NuPIC
Managed by a company called Numenta, NuPIC is an open source artificial intelligence project based on a theory called Hierarchical Temporal Memory, or HTM. Essentially, HTM is an attempt to create a computer system modeled after the human neocortex. The goal is to create machines that "approach or exceed human level performance for many cognitive tasks."
In addition to the open source license, Numenta also offers NuPic under a commercial license, and it also offers licenses on the patents that underlie the technology.
![](http://www.datamation.com/imagesvr_ce/99/09OpenNN.JPG)
### 9. OpenNN
Designed for researchers and developers with an advanced understanding of artificial intelligence, OpenNN is a C++ programming library for implementing neural networks. Its key features include deep architectures and fast performance. Extensive documentation is available on the website, including an introductory tutorial that explains the basics of neural networks. Paid support for OpenNN is available through Artelnics, a Spain-based firm that specializes in predictive analytics.
![](http://www.datamation.com/imagesvr_ce/4168/10OpenCyc.JPG)
### 10. OpenCyc
Developed by a company called Cycorp, OpenCyc provides access to the Cyc knowledge base and commonsense reasoning engine. It includes more than 239,000 terms, about 2,093,000 triples, and about 69,000 owl:sameAs links to external semantic data namespaces. It is useful for rich domain modeling, semantic data integration, text understanding, domain-specific expert systems and game AIs. The company also offers two other versions of Cyc: one for researchers that is free but not open source and one for enterprise use that requires a fee.
![](http://www.datamation.com/imagesvr_ce/9761/11Oryx2.JPG)
### 11. Oryx 2
Built on top of Apache Spark and Kafka, Oryx 2 is a specialized application development framework for large-scale machine learning. It utilizes a unique lambda architecture with three tiers. Developers can use Oryx 2 to create new applications, and it also includes some pre-built applications for common big data tasks like collaborative filtering, classification, regression and clustering. The big data tool vendor Cloudera created the original Oryx 1 project and has been heavily involved in continuing development.
![](http://www.datamation.com/imagesvr_ce/7423/12.%20PredictionIO.JPG)
### 12. PredictionIO
In February this year, Salesforce bought PredictionIO, and then in July, it contributed the platform and its trademark to the Apache Foundation, which accepted it as an incubator project. So while Salesforce is using PredictionIO technology to advance its own machine learning capabilities, work will also continue on the open source version. It helps users create predictive engines with machine learning capabilities that can be used to deploy Web services that respond to dynamic queries in real time.
![](http://www.datamation.com/imagesvr_ce/6886/13SystemML.JPG)
### 13. SystemML
First developed by IBM, SystemML is now an Apache big data project. It offers a highly-scalable platform that can implement high-level math and algorithms written in R or a Python-like syntax. Enterprises are already using it to track customer service on auto repairs, to direct airport traffic and to link social media data with banking customers. It can run on top of Spark or Hadoop.
![](http://www.datamation.com/imagesvr_ce/5742/14TensorFlow.JPG)
### 14. TensorFlow
TensorFlow is one of Google's open source artificial intelligence tools. It offers a library for numerical computation using data flow graphs. It can run on a wide variety of different systems with single- or multi-CPUs and GPUs and even runs on mobile devices. It boasts deep flexibility, true portability, automatic differential capabilities and support for Python and C++. The website includes a very extensive list of tutorials and how-tos for developers or researchers interested in using or extending its capabilities.
![](http://www.datamation.com/imagesvr_ce/9018/15Torch.JPG)
### 15. Torch
Torch describes itself as "a scientific computing framework with wide support for machine learning algorithms that puts GPUs first." The emphasis here is on flexibility and speed. In addition, it's fairly easy to use with packages for machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking. It relies on a scripting language called LuaJIT that is based on Lua.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/slideshows/15-top-open-source-artificial-intelligence-tools.html
作者:[Cynthia Harvey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.datamation.com/author/Cynthia-Harvey-6460.html
[1]: https://ai100.stanford.edu/sites/default/files/ai_100_report_0906fnlc_single.pdf
[2]: http://www.datamation.com/applications/artificial-intelligence-software-45-ai-projects-to-watch-1.html

View File

@ -0,0 +1,74 @@
8 best practices for building containerized applications
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/containers_2015-2-osdc-lead.png?itok=0yid3gFY)
Containers are a major trend in deploying applications in both public and private clouds. But what exactly are containers, why have they become a popular deployment mechanism, and how will you need to modify your application to optimize it for a containerized environment?
### What are containers?
The technology behind containers has a long history beginning with SELinux in 2000 and Solaris zones in 2005. Today, containers are a combination of several kernel features including SELinux, Linux namespaces, and control groups, providing isolation of end user processes, networking, and filesystem space.
### Why are they so popular?
The recent widespread adoption of containers is largely due to the development of standards aimed at making them easier to use, such as the Docker image format and distribution model. This standard calls for immutable images, which are the launching point for a container runtime. Immutable images guarantee that the same image the development team releases is what gets tested and deployed into the production environment.
The lightweight isolation that containers provide creates a better abstraction for an application component. Components running in containers won't interfere with each other the way they might running directly on a virtual machine. They can be prevented from starving each other of system resources, and unless they are sharing a persistent volume won't block attempting to write to the same files. Containers have helped to standardize practices like logging and metric collection, and they allow for increased multi-tenant density on physical and virtual machines, all of which leads to lower deployment costs.
### How do you build a container-ready application?
Changing your application to run inside of a container isn't necessarily a requirement. The major Linux distributions have base images that can run anything that runs on a virtual machine. But the general trend in containerized applications is following a few best practices:
- 1. Instances are disposable
Any given instance of your application shouldn't need to be carefully kept running. If one system running a bunch of containers goes down, you want to be able to spin up new containers spread out across other available systems.
- 2. Retry instead of crashing
When one service in your application depends on another service, it should not crash when the other service is unreachable. For example, your API service is starting up and detects the database is unreachable. Instead of failing and refusing to start, you design it to retry the connection. While the database connection is down the API can respond with a 503 status code, telling the clients that the service is currently unavailable. This practice should already be followed by applications, but if you are working in a containerized environment where instances are disposable, then the need for it becomes more obvious.
- 3. Persistent data is special
Containers are launched based on shared images using a copy-on-write (COW) filesystem. If the processes the container is running choose to write out to files, then those writes will only exist as long as the container exists. When the container is deleted, that layer in the COW filesystem is deleted. Giving a container a mounted filesystem path that will persist beyond the life of the container requires extra configuration, and extra cost for the physical storage. Clearly defining the abstraction for what storage is persisted promotes the idea that instances are disposable. Having the abstraction layer also allows a container orchestration engine to handle the intricacies of mounting and unmounting persistent volumes to the containers that need them.
- 4. Use stdout not log files
You may now be thinking, if persistent data is special, then what do I do with log files? The approach the container runtime and orchestration projects have taken is that processes should instead [write to stdout/stderr][1], and have infrastructure for archiving and maintaining [container logs][2].
- 5. Secrets (and other configurations) are special too
You should never hard-code secret data like passwords, keys, and certificates into your images. Secrets are typically not the same when your application is talking to a development service, a test service, or a production service. Most developers do not have access to production secrets, so if secrets are baked into the image then a new image layer will have to be created to override the development secrets. At this point, you are no longer using the same image that was created by your development team and tested by quality engineering (QE), and have lost the benefit of immutable images. Instead, these values should be abstracted away into environment variables or files that are injected at container startup.
- 6. Don't assume co-location of services
In an orchestrated container environment you want to allow the orchestrator to send your containers to whatever node is currently the best fit. Best fit could mean a number of things: it could be based on whichever node has the most space right now, the quality of service the container is requesting, whether the container requires persistent volumes, etc. This could easily mean your frontend, API, and database containers all end up on different nodes. While it is possible to force an API container to each node (see [DaemonSets][3] in Kubernetes), this should be reserved for containers that perform tasks like monitoring the nodes themselves.
- 7. Plan for redundancy / high availability
Even if you don't have enough load to require an HA setup, you shouldn't write your service in a way that prevents you from running multiple copies of it. This will allow you to use rolling deployments, which make it easy to move load off one node and onto another, or to upgrade from one version of a service to the next without taking any downtime.
- 8. Implement readiness and liveness checks
It is common for applications to have startup time before they are able to respond to requests, for example, an API server that needs to populate in-memory data caches. Container orchestration engines need a way to check that your container is ready to serve requests. Providing a readiness check for new containers allows a rolling deployment to keep an old container running until it is no longer needed, preventing downtime. Similarly, a liveness check is a way for the orchestration engine to continue to check that the container is in a healthy state. It is up to the application creator to decide what it means for their container to be healthy, or "live". A container that is no longer live will be killed, and a new container created in its place.
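As a rough illustration only (assuming a Kubernetes pod spec; the image name, endpoint paths and port below are made up), readiness and liveness checks might be declared like this:
```
# Hypothetical snippet of a Kubernetes pod/deployment spec -- names, paths and port are assumptions
containers:
- name: api
  image: example/api:1.0
  readinessProbe:          # traffic is only routed to the container once this succeeds
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
  livenessProbe:           # the container is restarted if this keeps failing
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
```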
### Want to find out more?
I'll be at the Grace Hopper Celebration of Women in Computing in October, come check out my talk: [Containerization of Applications: What, Why, and How][4]. Not headed to GHC this year? Then read on about containers, orchestration, and applications on the [OpenShift][5] and [Kubernetes][6] project sites.
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/9/8-best-practices-building-containerized-applications
作者:[Jessica Forrester ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jwforres
[1]: https://docs.docker.com/engine/reference/commandline/logs/
[2]: http://kubernetes.io/docs/getting-started-guides/logging/
[3]: http://kubernetes.io/docs/admin/daemons/
[4]: https://www.eiseverywhere.com/ehome/index.php?eventid=153076&tabid=351462&cid=1350690&sessionid=11443135&sessionchoice=1&
[5]: https://www.openshift.org/
[6]: http://kubernetes.io/

View File

@ -0,0 +1,257 @@
Content Security Policy, Your Future Best Friend
=====
A long time ago, my personal website was attacked. I do not know how it happened, but it happened. Fortunately, the damage from the attack was quite minor: A piece of JavaScript was inserted at the bottom of some pages. I updated the FTP and other credentials, cleaned up some files, and that was that.
One point made me mad: At the time, there was no simple solution that could have informed me there was a problem and — more importantly — that could have protected the website's visitors from this annoying piece of code.
A solution exists now, and it is a technology that succeeds in both roles. Its name is content security policy (CSP).
### What Is A CSP?
The idea is quite simple: By sending a CSP header from a website, you are telling the browser what it is authorized to execute and what it is authorized to block.
Here is an example with PHP:
```
<?php
header("Content-Security-Policy: <your directives>");
?>
```
#### SOME DIRECTIVES
You may define global rules or define rules related to a type of asset:
```
default-src 'self' ;
# self = same port, same domain name, same protocol => OK
```
The base argument is default-src: If no directive is defined for a type of asset, then the browser will use this value.
```
script-src 'self' www.google-analytics.com ;
# JS files on these domains => OK
```
In this example, we've authorized the domain name www.google-analytics.com as a source of JavaScript files to use on our website. We've added the keyword 'self'; if we redefined the directive script-src with another rule, it would override default-src rules.
If no scheme or port is specified, then it enforces the same scheme or port from the current page. This prevents mixed content. If the page is https://example.com, then you wouldn't be able to load http://www.google-analytics.com/file.js because it would be blocked (the scheme wouldn't match). However, there is an exception to allow a scheme upgrade. If http://example.com tries to load https://www.google-analytics.com/file.js, then the scheme or port would be allowed to change to facilitate the scheme upgrade.
```
style-src 'self' data: ;
# Data-Uri in a CSS => OK
```
In this example, the keyword data: authorizes embedded content in CSS files.
Under the CSP level 1 specification, you may also define rules for the following:
- `img-src`
valid sources of images
- `connect-src`
applies to XMLHttpRequest (AJAX), WebSocket or EventSource
- `font-src`
valid sources of fonts
- `object-src`
valid sources of plugins (for example, `<object>, <embed>, <applet>`)
- `media-src`
valid sources of `<audio> and <video>`
CSP level 2 rules include the following:
- `child-src`
valid sources of web workers and elements such as `<frame>` and `<iframe>` (this replaces the deprecated frame-src from CSP level 1)
- `form-action`
valid sources that can be used as an HTML `<form>` action
- `frame-ancestors`
valid sources for embedding the resource using `<frame>, <iframe>, <object>, <embed> or <applet>`.
- `upgrade-insecure-requests`
instructs user agents to rewrite URL schemes, changing HTTP to HTTPS (for websites with a lot of old URLs that need to be rewritten).
For better backwards-compatibility with deprecated properties, you may simply copy the contents of the actual directive and duplicate them in the deprecated one. For example, you may copy the contents of child-src and duplicate them in frame-src.
CSP 2 allows you to whitelist paths (CSP 1 allows only domains to be whitelisted). So, rather than whitelisting all of www.foo.com, you could whitelist www.foo.com/some/folder to restrict it further. This does require CSP 2 support in the browser, but it is obviously more secure.
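As a short illustrative directive (reusing the example domain from the sentence above):
```
script-src 'self' https://www.foo.com/some/folder/ ;
# scripts restricted to this folder => requires CSP level 2 support in the browser
```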
#### AN EXAMPLE
I made a simple example for the Paris Web 2015 conference, where I presented a talk entitled “[CSP in Action][1].”
Without CSP, the page would look like this:
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing1b-500.jpg)
Not very nice. What if we enabled the following CSP directives?
```
<?php
header("Content-Security-Policy:
default-src 'self' ;
script-src 'self' www.google-analytics.com stats.g.doubleclick.net ;
style-src 'self' data: ;
img-src 'self' www.google-analytics.com stats.g.doubleclick.net data: ;
frame-src 'self' ;");
?>
```
What would the browser do? It would (very strictly) apply these directives under the primary rule of CSP, which is that anything not authorized in a CSP directive will be blocked (“blocked” meaning not executed, not displayed and not used by the website).
By default in CSP, inline scripts and styles are not authorized, which means that every `<script>`, onclick or style attribute will be blocked. You could authorize inline CSS with style-src 'unsafe-inline' ;.
In a modern browser with CSP support, the example would look like this:
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing5-500.jpg)
What happened? The browser applied the directives and rejected anything that was not authorized. It sent these notifications to the console:
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing2-500.jpg)
If you're still not convinced of the value of CSP, have a look at Aaron Gustafson's article "[More Proof We Don't Control Our Web Pages][2]."
Of course, you may use stricter directives than the ones in the example provided above:
- set default-src to 'none',
- specify what you need for each rule,
- specify the exact paths of required files,
- etc.
### More Information On CSP
#### SUPPORT
CSP is not a nightly feature requiring three flags to be activated in order for it to work. CSP levels 1 and 2 are candidate recommendations! [Browser support for CSP level 1][3] is excellent.
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing3-500.jpg)
The [level 2 specification][4] is more recent, so it is a bit less supported.
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing4-500.jpg)
CSP level 3 is an early draft now, so it is not yet supported, but you can already do great things with levels 1 and 2.
#### OTHER CONSIDERATIONS
CSP has been designed to reduce cross-site scripting (XSS) risks, which is why enabling inline scripts in script-src directives is not recommended. Firefox illustrates this issue very nicely: In the browser, hit Shift + F2 and type security csp, and it will show you directives and advice. For example, here it is used on Twitter's website:
![](https://www.smashingmagazine.com/wp-content/uploads/2016/09/csp_smashing6b-500.jpg)
Another possibility for inline scripts or inline styles, if you really have to use them, is to create a hash value. For example, suppose you need to have this inline script:
```
<script>alert('Hello, world.');</script>
```
You might add 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng=' as a valid source in your script-src directives. The hash generated is the result of this in PHP:
```
<?php
echo base64_encode(hash('sha256', "alert('Hello, world.');", true));
?>
```
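Putting it together, the directive might then read like this (a sketch using the hash value shown above):
```
script-src 'self' 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng=' ;
# this specific inline script is allowed, everything else inline stays blocked
```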
I said earlier that CSP is designed to reduce XSS risks — I could have added, “… and reduce the risks of unsolicited content.” With CSP, you have to know where your sources of content are and what they are doing on your front end (inline styles, etc.). CSP can also help you force contributors, developers and others to respect your rules about sources of content!
Now your question is, “OK, this is great, but how do we use it in a production environment?”
### How To Use It In The Real World
The easiest way to get discouraged with using CSP the first time is to test it in a live environment, thinking, "This will be easy. My code is bad ass and perfectly clean." Don't do this. I did it. It's stupid, trust me.
As I explained, CSP directives are activated with a CSP header — there is no middle ground. You are the weak link here. You might forget to authorize something or forget a piece of code on your website. CSP will not forgive your oversight. However, two features of CSP greatly simplify this problem.
#### REPORT-URI
Remember the notifications that CSP sends to the console? The directive report-uri can be used to tell the browser to send them to the specified address. Reports are sent in JSON format.
```
report-uri /csp-parser.php ;
```
So, in the csp-parser.php file, we can process the data sent by the browser. Here is the most basic example in PHP:
```
$data = file_get_contents('php://input');
if ($data = json_decode($data, true)) {
$data = json_encode(
$data,
JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES
);
mail(EMAIL, SUBJECT, $data);
}
```
This notification will be transformed into an email. During development, you might not need anything more complex than this.
For a production environment (or a more visited development environment), you'd better use a way other than email to collect information, because there is no auth or rate limiting on the endpoint, and CSP can be very noisy. Just imagine a page that generates 100 CSP notifications (for example, a script that displays images from an unauthorized source) and that is viewed 100 times a day — you could get 10,000 notifications a day!
A service such as report-uri.io can be used to simplify the management of reporting. You can see other simple examples for report-uri (with a database, with some optimizations, etc.) on GitHub.
#### REPORT-ONLY
As we have seen, the biggest issue is that there is no middle ground between CSP being enabled and disabled. However, a feature named report-only sends a slightly different header:
```
<?php
header("Content-Security-Policy-Report-Only: <your directives>");
?>
```
Basically, this tells the browser, “Act as if these CSP directives were being applied, but do not block anything. Just send me the notifications.” It is a great way to test directives without the risk of blocking any required assets.
With report-only and report-uri, you can test CSP directives with no risk, and you can monitor in real time everything CSP-related on a website. These two features are really powerful for deploying and maintaining CSP!
### Conclusion
#### WHY CSP IS COOL
CSP is most important for your users: They don't have to suffer any unsolicited scripts or content or XSS vulnerabilities on your website.
The most important advantage of CSP for website maintainers is awareness. If you've set strict rules for image sources, and a script kiddie attempts to insert an image on your website from an unauthorized source, that image will be blocked, and you will be notified instantly.
Developers, meanwhile, need to know exactly what their front-end code does, and CSP helps them master that. They will be prompted to refactor parts of their code (avoiding inline functions and styles, etc.) and to follow best practices.
#### HOW CSP COULD BE EVEN COOLER
Ironically, CSP is too efficient in some browsers — it creates bugs with bookmarklets. So, do not update your CSP directives to allow bookmarklets. We can't blame any one browser in particular; all of them have issues:
- Firefox
- Chrome (Blink)
- WebKit
Most of the time, the bugs are false positives in blocked notifications. All browser vendors are working on these issues, so we can expect fixes soon. Anyway, this should not stop you from using CSP.
--------------------------------------------------------------------------------
via: https://www.smashingmagazine.com/2016/09/content-security-policy-your-future-best-friend/?utm_source=webopsweekly&utm_medium=email
作者:[Nicolas Hoffmann][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.smashingmagazine.com/author/nicolashoffmann/
[1]: https://rocssti.net/en/example-csp-paris-web2015
[2]: https://www.aaron-gustafson.com/notebook/more-proof-we-dont-control-our-web-pages/
[3]: http://caniuse.com/#feat=contentsecuritypolicy
[4]: http://caniuse.com/#feat=contentsecuritypolicy2

View File

@ -0,0 +1,44 @@
Five Linux Server Distros Worth Checking Out
====
>Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Here are those that stand out above the rest.
![](http://windowsitpro.com/site-files/windowsitpro.com/files/imagecache/large_img/uploads/2016/09/cloudservers.jpg)
Pretty much any of the nearly 300 Linux distributions you'll find listed on Distrowatch can be made to work as servers. Since Linux's earliest days, users have been provisioning "all purpose" distributions such as Slackware, Debian and Gentoo to do heavy lifting as servers for home and business. That may be fine for the hobbyist, but it's a lot of unnecessary work for the professional.
From the beginning, however, there have been distributions with no other purpose but to serve files and applications, help workstations share common peripherals, serve-up web pages and all the other things we ask servers to do, whether in the cloud, in a data center or on a shelf in a utility closet.
Here's a look at four of the most used Linux server distros, as well as one distro that might fit the bill for smaller businesses.
**Red Hat Enterprise Linux**: Perhaps the best known of Linux server distros, RHEL has a reputation for being a solid distribution ready for the most demanding mission critical tasks -- like running the New York Stock Exchange for instance. It's also backed by Red Hat's best-of-breed support.
The downside? While Red Hat is known for offering customer service and support that's second to none, its support subscriptions aren't cheap. Some might point out, however, that you get what you pay for. Cheaper third party support for RHEL is available, but you might want to do some research before going that route.
**CentOS**: Anyone who likes RHEL but would like to avoid shoveling money to Red Hat for support should take a look at CentOS, which is basically an RHEL fork. Although it's been around since 2004, in 2014 it became officially sponsored by Red Hat, which now employs most of the project's developers. This means that security patches and bug fixes are made available to CentOS soon after they're pushed to Red Hat.
If you're going to deploy CentOS, you'll need people with Linux skills on staff, because as far as technical support goes, you're mainly on your own. The good news is that the CentOS community offers excellent resources, such as mailing lists, web forums, and chat rooms, so help is available to those who search.
**Ubuntu Server**: When Canonical announced many years back that it was coming out with a server edition of Ubuntu, you could hear the snickers. Laughter turned into amazement rather quickly, however, as Ubuntu Server rapidly took hold. This was partly due to the DNA it shares as a derivative of Debian, which has long been a favorite base for Linux servers. Ubuntu filled a gap by adding affordable technical support, superior hardware support, developer tools and lots of polish.
How popular is Ubuntu Server? Recent figures show it being the most deployed operating system both on OpenStack and on the Amazon Elastic Compute Cloud, where it outpaces the second-place Amazon Linux AMI (Amazon Machine Image) by a mile and leaves third-place Windows in the virtual dust. Another study shows it as the most used Linux web server.
**SUSE Linux Enterprise Server**: This German distro has a large base of users in Europe, and was a top server distro on this side of the Atlantic until PR issues arose after it was bought by Novell in the early part of the century. With those days long behind it, SUSE has been gaining ground in the US, and its use will probably accelerate now that HPE is naming it as its preferred Linux partner.
SUSE Linux Enterprise Server, or SLES, is stable and easy to maintain, which you'd expect for a distro that's been around for nearly as long as Linux itself. Affordable 24/7 "rapid-response" technical support is available, making it suitable for mission critical deployments.
**ClearOS**: Based on RHEL, ClearOS is included here because it's simple enough for anyone, even most non-techies, to configure. Targeted at small to medium sized businesses, it can also be used as an entertainment server by home users. Using a web-based administration interface for ease-of-use, it's built with the premise in mind that "building your IT infrastructure should be as simple as downloading apps on your smart phone."
The latest release, version 7.2, includes capabilities that might not be expected from a "lightweight" offering, such as VM support (including Microsoft Hyper-V), support for the XFS and BTRFS file systems, and support for LVM caching and IPv6. It is available in a free version or in an inexpensive "professional" version that comes with a variety of support options.
--------------------------------------------------------------------------------
via: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out
作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://windowsitpro.com/industry/five-linux-server-distros-worth-checking-out

View File

@ -0,0 +1,50 @@
4 big ways companies benefit from having open source program offices
====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_creativity.png?itok=x2HTRKVW)
In the first article in my series on open source program offices, I took a deep dive into [what an open source program office is and why your company might need one][1]. Next I looked at [how Google created a new kind of open source program office][2]. In this article, I'll explain a few benefits of having an open source program office.
At first glance, one big reason why a company not in the business of software development might more enthusiastically embrace an open source program office is because they have less to lose. After all, they're not gambling with software products that are directly tied to revenue. Facebook, for example, can easily unleash a distributed key-value datastore as an open source project because they don't sell a product called "enterprise key-value datastore." That answers the question of risk, but it still doesn't answer the question of what they gain from contributing to the open source ecosystem. Let's look at a few potential reasons and then tackle each. You'll notice a lot of overlap with vendor open source program offices, but some of the motivations are slightly different.
### Recruiting
Recruiting is perhaps the easiest way to sell an open source program office to upper management. Show them the costs associated with recruiting, as well as the return on investment, and then explain how developing relationships with talented engineers results in a pipeline of talented developers who are actually familiar with your technology and excited to help work on it. We don't really need to go in more depth here—it's self-explanatory, right?
### Technology influence
Once upon a time, companies that didn't specialize in selling software were powerless to influence development cycles of their software vendors directly, especially if they were not a large customer. Open source completely changed that dynamic and brought users onto a more level playing field with vendors. With the rise of open source development, anyone could push technology into a chosen direction, assuming they were willing to invest the time and resources. But these companies learned that simply investing in developer time, although fruitful, could be even more useful if tied to an overarching strategic effort. Think bug fixes vs. software architects—lots of companies push bug fixes to upstream open source projects, but some of these same companies began to learn that coordinating a sustained effort with a deeper commitment paid off with faster feature development, which could be good for business. With the open source program office model, companies have staffers who can sniff out strategically important open source communities in which they then invest developer resources.
With rapidly growing companies such as Google and Facebook, providing leadership in existing open source projects still proved insufficient for their expanding businesses. Facing the challenges of intense growth and building out hyperscale systems, many of the largest companies had built highly customized stacks of software for internal use only. What if they could convince others to collaborate on some of these infrastructure projects? Thus, while they maintained investments in areas such as the Linux kernel, Apache, and other existing projects, they also began to release their own large projects. Facebook released Cassandra, Twitter created Mesos, and eventually Google created the Kubernetes project. These projects have become major platforms for industry innovation, proving to be spectacular successes for the companies involved. (Note that Facebook stopped using Cassandra internally after it needed to create a new software project to solve the problem at a larger scale. However, by that time Cassandra had already become popular and DataStax had formed to take on development). Each of these projects has spawned entire ecosystems of developers, related projects, and end users that serve to accelerate growth and development.
This would not have been possible without coordination between an open source program office and strategic company initiatives. Without that effort, each of the companies mentioned would still be trying to solve these problems individually—and more slowly. Not only have these projects helped solve business problems internally, they also helped establish the companies that created them as industry heavyweights. Sure, Google has been an industry titan for a few years now, but the growth of Kubernetes ensures both better software, and a direct say in the future direction of container technologies, even more than it already had. These companies are still known for their hyperscale infrastructure and for simply being large Silicon Valley stalwarts. Lesser known, but possibly even more important, is their new relevance as technology producers. Open source program offices guide these efforts and maximize their impact, through technology recommendations and relationships with influential developers, not to mention deep expertise in community governance and people management.
### Marketing power
Going hand-in-hand with technology influence is how each company talks about its open source efforts. By honing the messages around these projects and communities, an open source program office is able to deliver maximum impact through targeted marketing campaigns. Marketing has long been a dirty word in open source circles, because everyone has had a bad experience with corporate marketing. In the case of open source communities, marketing takes on a vastly different form from a traditional approach and involves amplifying what is already happening in the communities of strategic importance. Thus, an open source program office probably won't create whiz-bang slides about a project that hasn't even released any code yet, but they'll talk about the software they've created and other initiatives they've participated in. Basically, no vaporware here.
Think of the first efforts made by Google's open source program office. They didn't simply contribute code to the Linux kernel and other projects—they talked about it a lot, often in keynotes at open source conferences. They didn't just give money to students who write open source code—they created a global program, the Google Summer of Code, that became a cultural touchstone of open source development. This marketing effort cemented Google's status as a major open source developer long before Kubernetes was even developed. As a result, Google wielded major influence during the creation of the GPLv3 license, and company speakers and open source program office representatives became staples at tech events. The open source program office is the entity best situated to coordinate these efforts and deliver real value for the parent company.
### Improve internal processes
Improving internal processes may not sound like a big benefit, but overcoming chaotic internal processes is a challenge for every open source program office, whether software vendor or company-driven. Whereas a software vendor must make sure that their processes don't step on products they release (for example, unintentionally open sourcing proprietary software), a user is more concerned with infringement of intellectual property (IP) law: patents, copyrights, and trademarks. No one wants to get sued simply for releasing software. Without an active open source program office to manage and coordinate licensing and other legal questions, large companies face great difficulty in arriving at a consensus around open source processes and governance. Why is this important? If different groups release software under incompatible licenses, not only will this prove to be an embarrassment, it also will provide a significant obstacle to achieving one of the most basic goals—improved collaboration.
Combined with the fact that many of these companies are still growing incredibly quickly, an inability to establish basic rules around process will prove unwieldy sooner than expected. I've seen large spreadsheets with a matrix of approved and unapproved licenses as well as guidelines for how to (and how not to) create open source communities while complying with legal restrictions. The key is to have something that developers can refer to when they need to make decisions, without incurring the legal overhead of a massive, work-slowing IP review every time a developer wants to contribute to an open source community.
Having an active open source program office that maintains rules over license compliance and source contribution, as well as establishing training programs for engineers, helps to avoid potential legal pitfalls and costly lawsuits. After all, what good is better collaboration on open source projects if the company loses real money because someone didn't read the license? The good news is that companies have less to worry about with respect to proprietary IP when compared to software vendors. The bad news is that their legal questions are no less complex, especially when they run directly into the legal headwinds of a software vendor.
How has your organization benefited from having an open source program office? Let me know about it in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/business/16/9/4-big-ways-companies-benefit-having-open-source-program-offices
作者:[John Mark Walker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/johnmark
[1]: https://opensource.com/business/16/5/whats-open-source-program-office
[2]: https://opensource.com/business/16/8/google-open-source-program-office

View File

@ -0,0 +1,58 @@
How to Use Markdown in WordPress to Improve Workflow
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/markdown-wordpress-featured-2.jpg)
Markdown is a simple markup language that helps you format your plain text documents with minimal effort. You may be used to formatting your articles using HTML or the Visual Editor in WordPress, but using markdown makes formatting a lot easier, and you can always export it to several formats including (but not limited to) HTML.
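If you have never seen markdown before, here is a tiny, generic sample of the syntax (not tied to any particular plugin; the link target is just a placeholder):
```
## A heading

This paragraph has **bold** text, *italic* text, and [a link](https://example.com).

- First list item
- Second list item
```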
WordPress does not come with native markdown support, but there are plugins that can add this functionality to your website if you so desire.
In this tutorial I will demonstrate how to use the popular WP-Markdown plugin to add markdown support to a WordPress website.
### Installation
You can install this plugin directly by navigating to “Plugins -> Add New” and entering “[wp-markdown][1]” in the search box provided. The plugin should appear as the first option on the list. Click “Install Now” to install it.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/markdown-wordpress-install-plugin-1.png)
### Configuration
Once you have installed the plugin and activated it, navigate to “Settings -> Writing” in the menu and scroll down until you get to the markdown section.
You can enable markdown support in posts, pages and comments. You can also enable a help bar for your post editor or comments which could be handy if you're just learning the markdown syntax.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/markdown-wordpress-configuration.png)
If you include code snippets in your blog posts, enabling the “Prettify syntax highlighter” option will automatically provide syntax highlighting for your code snippets.
Once you are satisfied with your selections, click “Save Changes” to save your settings.
### Write your posts with Markdown
Once you have enabled markdown support on your website, you can start using it right away.
Create a new post by going to “Posts -> Add New.” You will notice that the default Visual and Plain Text editors have been replaced by the markdown editor.
If you did not enable the markdown help bar in the configuration options, you will not see a live preview of your formatted markdown. Nonetheless, as long as your syntax is correct, your markdown will be converted to valid HTML when you save or publish the post.
However, if you're a beginner to markdown and the live preview feature is important to you, simply go back to the settings to enable the help bar option, and you will get a nice live preview area at the bottom of your posts. In addition, you also get some buttons on top that will help you quickly insert markdown syntax into your posts.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/markdown-wordpress-create-post.png)
### Wrap up
As you can see, adding markdown support to a WordPress website is really easy, and it will only take a few minutes of your time. If you are completely new to markdown, you might also check out our [markdown cheatsheet][2] which provides a comprehensive reference to the markdown syntax.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/use-markdown-in-wordpress/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Ayo Isaiah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/ayoisaiah/
[1]: https://wordpress.org/plugins/wp-markdown/
[2]: https://www.maketecheasier.com/productive-with-markdown-cheatsheet/

View File

@ -0,0 +1,207 @@
Monitoring Docker Containers with Elasticsearch and cAdvisor
=======
If you're running a Swarm Mode cluster or even a single Docker engine, you'll end up asking this question:
>How do I keep track of all that's happening?
The answer is “not easily.”
You need a few things to have a complete overview of stuff like:
1. Number and status of containers
2. If, where, and when a container has been moved to another node
3. Number of containers on a given node
4. Traffic peaks at a given time
5. Orphan volumes and networks
6. Free disk space, free inodes
7. Number of containers against number of veths attached to the docker0 and docker_gwbridge bridges
8. Up and down Swarm nodes
9. Centralize logs
The goal of this post is to demonstrate the use of [Elasticsearch][1] + [Kibana][2] + [cAdvisor][3] as tools to analyze and gather metrics and visualize dashboards for Docker containers.
Later on in this post, you can find a dashboard trying to address a few points from the previous list. There are also points that can't be addressed by simply using cAdvisor, like the status of Swarm Mode nodes.
Also, if you have specific needs that aren't covered by cAdvisor or another tool, I encourage you to write your own data collector and data shipper (e.g., [Beats][4]). Note that I won't be showing you how to centralize Docker container logs on Elasticsearch.
>[“How do you keep track of all that's happening in a Swarm Mode cluster? Not easily.” via @fntlnz][5]
### Why Do We Need to Monitor Containers?
Imagine yourself in the classic situation of managing a virtual machine, either just one or several. You are a tmux hero, so you have your sessions preconfigured to do basically everything, monitoring included. There's a problem in production? You just do a top, htop, iotop, jnettop, whatevertop on all your machines, and you're ready for troubleshooting!
Now imagine that you have the same three nodes but split into 50 containers. You need some history displayed nicely in a single place where you can perform queries to know what happened instead of just risking your life in front of those ncurses tools.
### What Is the Elastic Stack?
The Elastic Stack is a set of tools composed of:
- Elasticsearch
- Kibana
- Logstash
- Beats
We're going to use a few open-source tools from the Elastic Stack, such as Elasticsearch for the JSON-based analytics engine and Kibana to visualize data and create dashboards.
Another important piece of the Elastic Stack is Beats, but in this post, we're focused on containers. There's no official Beat for Docker, so we'll just use cAdvisor, which can natively talk with Elasticsearch.
cAdvisor is a tool that collects, aggregates, and exports metrics about running containers. In our case, those metrics are being exported to an Elasticsearch storage.
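To make that concrete, cAdvisor is pointed at Elasticsearch through its storage-driver flags. The following single-node sketch is only an illustration; the Elasticsearch URL is an assumption, and the exact options used for the real deployment may differ:
```
# Run cAdvisor on one host and ship its metrics to an Elasticsearch node (illustrative values)
docker run -d --name=cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  google/cadvisor:latest \
  -storage_driver=elasticsearch \
  -storage_driver_es_host="http://elasticsearch:9200"
```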
Two cool facts about cAdvisor are:
- It's not limited to Docker containers.
- It has its own webserver with a simple dashboard to visualize gathered metrics for the current node.
### Set Up a Test Cluster or BYOI
As I did in my previous posts, my habit is to provide a small script to allow the reader to set up a test environment on which to try out my project's steps in no time. So you can use the following not-for-production-use script to set up a little Swarm Mode cluster with Elasticsearch running as a container.
>If you have enough time/experience, you can BYOI (Bring Your Own Infrastructure).
To follow this post, you'll just need:
- One or more nodes running the Docker daemon >= 1.12
- At least a stand-alone Elasticsearch node 2.4.X
Again, note that this post is not about setting up a production-ready Elasticsearch cluster. A single node cluster is not recommended for production. So if you're planning a production installation, please refer to [Elastic guidelines][6].
### A friendly note for early adopters
I'm usually an early adopter (and I'm already using the latest alpha version in production, of course). But for this post, I chose not to use the latest Elasticsearch 5.0.0 alpha. Their roadmap is not perfectly clear to me, and I don't want to be the root cause of your problems!
So the Elasticsearch reference version for this post is the latest stable version, 2.4.0 at the moment of writing.
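If you just need a quick, stand-alone node of that version to follow along with, a throwaway container is enough. This is only a sketch for test use, not the production-grade setup the Elastic guidelines describe:
```
# Start a single, stand-alone Elasticsearch 2.4 node (testing only)
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 elasticsearch:2.4.0
```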
### Test cluster setup script
As said, I wanted to provide this script for everyone who would like to follow the blog without having to figure out how to create a Swarm cluster and install an Elasticsearch. Of course, you can skip this if you choose to use your own Swarm Mode engines and your own Elasticsearch nodes.
To execute the setup script, you'll need:
- [Docker Machine][7] latest version: to provision Docker engines on DigitalOcean
- [DigitalOcean API Token][8]: to allow docker-machine to start nodes on your behalf
### Create Cluster Script
Now that you have everything we need, you can copy the following script in a file named create-cluster.sh:
```
#!/usr/bin/env bash
#
# Create a Swarm Mode cluster with a single master and a configurable number of workers

workers=${WORKERS:-"worker1 worker2"}

#######################################
# Creates a machine on Digital Ocean
# Globals:
#   DO_ACCESS_TOKEN The token needed to access DigitalOcean's API
# Arguments:
#   $1 the actual name to give to the machine
#######################################
create_machine() {
  docker-machine create \
    -d digitalocean \
    --digitalocean-access-token=$DO_ACCESS_TOKEN \
    --digitalocean-size 2gb \
    $1
}

#######################################
# Executes a command on the specified machine
# Arguments:
#   $1     The machine on which to run the command
#   $2..$n The command to execute on that machine
#######################################
machine_do() {
  docker-machine ssh $@
}

main() {
  if [ -z "$DO_ACCESS_TOKEN" ]; then
    echo "Please export a DigitalOcean Access token: https://cloud.digitalocean.com/settings/api/tokens/new"
    echo "export DO_ACCESS_TOKEN=<yourtokenhere>"
    exit 1
  fi

  if [ -z "$WORKERS" ]; then
    echo "You haven't provided your workers by setting the \$WORKERS environment variable, using the default ones: $workers"
  fi

  # Create the first and only master
  echo "Creating the master"
  create_machine master1
  master_ip=$(docker-machine ip master1)

  # Initialize the swarm mode on it
  echo "Initializing the swarm mode"
  machine_do master1 docker swarm init --advertise-addr $master_ip

  # Obtain the token to allow workers to join
  worker_tkn=$(machine_do master1 docker swarm join-token -q worker)
  echo "Worker token: ${worker_tkn}"

  # Create and join the workers
  for worker in $workers; do
    echo "Creating worker ${worker}"
    create_machine $worker
    machine_do $worker docker swarm join --token $worker_tkn $master_ip:2377
  done
}

main $@
```
And make it executable:
```
chmod +x create-cluster.sh
```
### Create the cluster
As the name suggests, we'll use the script to create the cluster. By default, the script will create a cluster with a single master and two workers. If you want to configure the number of workers, you can do that by setting the WORKERS environment variable.
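For example, to get three workers instead of the default two (the worker names are arbitrary), you could invoke it like this:
```
# Override the default worker list for this run
WORKERS="worker1 worker2 worker3" ./create-cluster.sh
```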
Now, let's create that cluster!
```
./create-cluster.sh
```
Ok, now you can go out for a coffee. This will take a while.
Finally the cluster is ready!
--------------------------------------------------------------------------------
via: https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
作者:[Lorenzo Fontana][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.codeship.com/author/lorenzofontana/
[1]: https://github.com/elastic/elasticsearch
[2]: https://github.com/elastic/kibana
[3]: https://github.com/google/cadvisor
[4]: https://github.com/elastic/beats
[5]: https://twitter.com/share?text=%22How+do+you+keep+track+of+all+that%27s+happening+in+a+Swarm+Mode+cluster%3F+Not+easily.%22+via+%40fntlnz&url=https://blog.codeship.com/monitoring-docker-containers-with-elasticsearch-and-cadvisor/
[6]: https://www.elastic.co/guide/en/elasticsearch/guide/2.x/deploy.html
[7]: https://docs.docker.com/machine/install-machine/
[8]: https://cloud.digitalocean.com/settings/api/tokens/new

View File

@ -0,0 +1,62 @@
Ryver: Why You Should Be Using It instead of Slack
=====
It seems like everyone has heard of Slack, a team communication tool that can be used across multiple platforms to stay in the loop. It has revolutionised the way users discuss and plan projects, and it's a clear upgrade to emails.
I work in small writing teams, and I've never had a problem with communicating with others on my phone or computer while using it. If you want to keep up to date with a team of any size, it's a great way to stay in the loop.
So, why are we here? Ryver is supposed to be the next big thing, offering an upgraded service in comparison to Slack. It's completely free, and they're pushing for a larger share of the market.
Is it good enough to be a Slack-Killer? What are the differences between the two similar-sounding services?
Read on to find out more.
### Why Ryver?
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/04/Ryver.jpg)
Why mess with something that works? The developers at Ryver are well aware of Slack, and they're hoping their improved service will be enough to make you switch over. They promise a completely free team-communication service with no hidden charges along the way.
Thankfully, they deliver on their main aim with a high quality product.
Extra content is the name of the game, and they promise to remove some of the limits you'll find on a free account with Slack. Unlimited data storage is a major plus point, and it's also more open in a number of ways. If storage limits are an issue for you, you have to check out Ryver.
It's a simple system to use, as it was built so that all functions are always one click away. It's a mantra used to great success by Apple, and there aren't many growing pains when you first get started.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/09/ryver-web-interface.png)
Conversations are split between personal chats and public posts, and it means there's a clear line between team platforms and personal use. It should help to avoid broadcasting any embarrassing announcements to your colleagues, and I've seen a few during my time as a Slack user.
Integration with a number of existing apps is supported, and there are native applications for most platforms.
You can add guests when needed at no additional cost, and it's useful if you deal with external clients regularly. Guests can add more guests, so there's an element of fluidity that isn't seen with the more popular option.
Think of Ryver as a completely different service that will cater to different needs. If you need to deal with numerous clients on the same account, it's worth trying out.
The question is: how is it free? The quick answer is that premium users will be paying your way. Like Spotify and other services, there's a minority paying for the rest of us. Here's a direct link to their download page if you're interested in giving it a go.
### Should You Switch to Ryver?
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/04/Slack-homepage.jpg)
Slack is great as long as you stick to smaller teams like I do, but Ryver has a lot to offer. The idea of a completely free team messaging program is noble, and it works perfectly.
There's nothing wrong with using both, so make sure to try out the competition if you're not willing to pay for a premium Slack account. You might find that both are better in different situations, depending on what you need.
Above all, Ryver is a great free alternative, and it's more than just a Slack clone. They have a clear idea of what they're trying to achieve, and they have a decent product that offers something different in a crowded marketplace.
However, there's a chance that it will disappear if there's a sustained lack of funding in the future. It could leave your teams and discussions in disarray. Everything is fine for now, but be careful if you plan to export a larger business over to the new upstart.
If you're tired of Slack's limitations on a free account, you'll be impressed by what Ryver has to offer. To learn more, check out their website for information about the service.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/why-use-ryver-instead-of-slack/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[James Milin-Ashmore][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/james-ashmore/

View File

@ -0,0 +1,52 @@
Taskwarrior: A Brilliant Command-Line TODO App For Linux
====
Taskwarrior is a simple, straight-forward command-line based TODO app for Ubuntu/Linux. This open-source app has to be one of the easiest of all [CLI based apps][4] I've ever used. Taskwarrior helps you better organize yourself, and without installing bulky new apps which sometimes defeats the whole purpose of TODO apps.
![](https://2.bp.blogspot.com/-pQnRlOUNIxk/V9cuc3ytsBI/AAAAAAAAKHs/yYxyiAk4PwMIE0HTxlrm6arWOAPcBRRywCLcB/s1600/taskwarrior-todo-app.png)
### Taskwarrior: A Simple CLI Based TODO App That Gets The Job Done!
Taskwarrior is an open-source and cross-platform, command-line based TODO app, which lets you manage your to-do lists right from the Terminal. The app lets you add tasks, shows you the list, and removes tasks from that list with much ease. And what's more, it's available within your default repositories, no need to fiddle with PPAs. In Ubuntu 16.04 LTS and similar, do the following in Terminal to install Taskwarrior.
```
sudo apt-get install task
```
A simple use case can be as follows:
```
$ task add Read a book
Created task 1.
$ task add priority:H Pay the bills
Created task 2.
```
This is the same example I used in the screenshot above. Yes, you can set priority levels (H, L or M) as shown. And then you can use 'task' or 'task next' commands to see your newly-created todo list. For example:
```
$ task next
ID Age P Description                      Urg
-- --- - -------------------------------- ----
 2 10s H Pay the bills                        6
 1 20s   Read a book                          0
```
And once a task is completed, you can use 'task 1 done' or 'task 2 done' commands to clear it from the list. A more comprehensive list of commands and use-cases [can be found here][1]. Also, Taskwarrior is cross-platform, which means you'll find a version that [fits your needs][2] no matter what. There's even an [Android version][3] if you want one. Enjoy!
--------------------------------------------------------------------------------
via: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
作者:[Manuel Jose ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techdrivein.com/2016/09/taskwarrior-command-line-todo-app-linux.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+techdrivein+%28Tech+Drive-in%29
[1]: https://taskwarrior.org/docs/
[2]: https://taskwarrior.org/download/
[3]: https://taskwarrior.org/news/news.20160225.html
[4]: http://www.techdrivein.com/search/label/Terminal

View File

@ -0,0 +1,95 @@
The Five Principles of Monitoring Microservices
====
![](http://thenewstack.io/wp-content/uploads/2016/09/toppicsysdig.jpg)
The need for microservices can be summed up in just one word: speed. The need to deliver more functionality and reliability faster has revolutionized the way developers create software. Not surprisingly, this change has caused ripple effects within software management, including monitoring systems. In this post, we'll focus on the radical changes required to monitor your microservices in production efficiently. We'll lay out five guiding principles for adapting your monitoring approach for this new software architecture.
Monitoring is a critical piece of the control systems of microservices, as the more complex your software gets, the harder it is to understand its performance and troubleshoot problems. Given the dramatic changes to software delivery, however, monitoring needs an overhaul to perform well in a microservice environment. The rest of this article presents the five principles of monitoring microservices, as follows:
1. Monitor containers and whats inside them.
2. Alert on service performance, not container performance.
3. Monitor services that are elastic and multi-location.
4. Monitor APIs.
5. Map your monitoring to your organizational structure.
Leveraging these five principles will allow you to establish more effective monitoring as you make your way towards microservices. These principles will allow you to address both the technological changes associated with microservices and the organizational changes related to them.
### The Principles of Microservice Monitoring
#### 1. Monitor Containers and Whats Running Inside Them
Containers gained prominence as the building blocks of microservices. The speed, portability, and isolation of containers made it easy for developers to embrace a microservice model. There's been a lot written on the benefits of containers so we won't recount it all here.
Containers are black boxes to most systems that live around them. That's incredibly useful for development, enabling a high level of portability from development through production, from developer laptop to cloud. But when it comes to operating, monitoring and troubleshooting a service, black boxes make common activities harder, leading us to wonder: what's running in the container? How is the application/code performing? Is it spitting out important custom metrics? From the DevOps perspective, you need deep visibility inside containers rather than just knowing that some containers exist.
![](http://thenewstack.io/wp-content/uploads/2016/09/greatfordev.jpg)
The typical process for instrumentation in a non-containerized environment — an agent that lives in the user space of a host or VM — doesn't work particularly well for containers. That's because containers benefit from being small, isolated processes with as few dependencies as possible.
And, at scale, running thousands of monitoring agents for even a modestly-sized deployment is an expensive use of resources and an orchestration nightmare. Two potential solutions arise for containers: 1) ask your developers to instrument their code directly, or 2) leverage a universal kernel-level instrumentation approach to see all application and container activity on your hosts. We won't go into depth here, but each method has pros and cons.
#### 2. Leverage Orchestration Systems to Alert on Service Performance
Making sense of operational data in a containerized environment is a new challenge. The metrics of a single container have a much lower marginal value than the aggregate information from all the containers that make up a function or a service.
This particularly applies to application-level information, like which queries have the slowest response times or which URLs are seeing the most errors, but also applies to infrastructure-level monitoring, like which services containers are using the most resources beyond their allocated CPU shares.
Increasingly, software deployment requires an orchestration system to “translate” a logical application blueprint into physical containers. Common orchestration systems include Kubernetes, Mesosphere DC/OS and Docker Swarm. Teams use an orchestration system to (1) define your microservices and (2) understand the current state of each service in deployment. You could argue that the orchestration system is even more important than the containers. The actual containers are ephemeral — they matter only for the short time that they exist — while your services matter for the life of their usefulness.
DevOps teams should redefine alerts to focus on characteristics that get as close to monitoring the experience of the service as possible. These alerts are the first line of defense in assessing if something is impacting the application. But getting to these alerts is challenging, if not impossible unless your monitoring system is container-native.
Container-native solutions leverage orchestration metadata to dynamically aggregate container and application data and calculate monitoring metrics on a per-service basis. Depending on your orchestration tool, you might have different layers of a hierarchy that you'd like to drill into. For example, in Kubernetes, you typically have a Namespace, ReplicaSets, Pods and some containers. Aggregating at these various layers is essential for logical troubleshooting, regardless of the physical deployment of the containers that make up the service.
![](http://thenewstack.io/wp-content/uploads/2016/09/servicemonitoring.jpg)
#### 3. Be Prepared for Services that are Elastic and Multi-Location
Elastic services are certainly not a new concept, but the velocity of change is much faster in container-native environments than virtualized environments. Rapidly changing environments can wreak havoc on brittle monitoring systems.
Frequently monitoring legacy systems required manual tuning of metrics and checks based on individual deployments of software. This tuning can be as specific as defining the individual metrics to be captured, or configuring collection based on what application is operating in a particular container. While that may be acceptable on a small scale (think tens of containers), it would be unbearable in anything larger. Microservice focused monitoring must be able to comfortably grow and shrink in step with elastic services, without human intervention.
For example, if the DevOps team must manually define what service a container is included in for monitoring purposes, they no doubt drop the ball as Kubernetes or Mesos spins up new containers regularly throughout the day. Similarly, if Ops were required to install a custom stats endpoint when new code is built and pushed into production, challenges may arise as developers pull base images from a Docker registry.
In production, build monitoring toward a sophisticated deployment that spans multiple data centers or multiple clouds. Leveraging, for example, AWS CloudWatch will only get you so far if your services span your private data center as well as AWS. That leads back to implementing a monitoring system that can span these different locations as well as operate in dynamic, container-native environments.
#### 4. Monitor APIs
In microservice environments, APIs are the lingua franca. They are essentially the only elements of a service that are exposed to other teams. In fact, response and consistency of the API may be the “internal SLA” even if there isn't a formal SLA defined.
As a result, API monitoring is essential. API monitoring can take many forms but clearly must go beyond binary up/down checks. For instance, it's valuable to understand the most frequently used endpoints as a function of time. This allows teams to see if anything noticeable has changed in the usage of services, whether it be due to a design change or a user change.
You can also consider the slowest endpoints of your service, as these can reveal significant problems, or, at the very least, point to areas that need the most optimization in your system.
Finally, the ability to trace service calls through your system represents another critical capability. While typically used by developers, this type of profiling will help you understand the overall user experience while breaking information down into infrastructure and application-based views of your environment.
#### 5. Map Monitoring to Your Organizational Structure
While most of this post has been focused on the technological shift in microservices and monitoring, like any technology story, this is as much about people as it is about software bits.
For those of you familiar with Conway's law, he reminds us that the design of systems is defined by the organizational structure of the teams building them. The allure of creating faster, more agile software has pushed teams to think about restructuring their development organization and the rules that govern it.
![](http://thenewstack.io/wp-content/uploads/2016/09/mapmonitoring.jpg)
So if an organization wants to benefit from this new software architecture approach, their teams must, therefore, mirror microservices themselves. That means smaller teams, loosely coupled; that can choose their direction as long as it still meets the needs of the whole. Within each team, there is more control than ever over languages used, how bugs are handled, or even operational responsibilities.
DevOps teams can enable a monitoring platform that does exactly this: allows each microservice team to isolate their alerts, metrics, and dashboards, while still giving operations a view into the global system.
### Conclusion
There's one clear trigger event that precipitated the move to microservices: speed. Organizations wanted to deliver more capabilities to their customers in less time. Once this happened, technology stepped in: the architectural move to microservices and the underlying shift to containers made speed happen. Anything that gets in the way of this progress train is going to get run over on the tracks.
As a result, the fundamental principles of monitoring need to adapt to the underlying technology and organizational changes that accompany microservices. Operations teams that recognize this shift can adapt to microservices earlier and easier.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Apurva Dave][a] [Loris Degioanni][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://thenewstack.io/author/apurvadave/
[b]: http://thenewstack.io/author/lorisdegioanni/

View File

@ -0,0 +1,85 @@
Down and dirty with Windows Nano Server 2016
====
![](http://images.techhive.com/images/article/2016/04/pokes-fun-at-1164459_1280-100654917-primary.idge.jpg)
>Nano Server is a very fast, powerful tool for remotely administering Windows servers, but you need to know what you're doing
There's been a good deal of talk around the [upcoming Nano version of Windows Server 2016][1], the remote-administered, command-line version designed with private clouds and datacenters in mind. But there's also a big difference between talking about it and getting your hands into it. Let's get into the guts.
Nano has no local login, is 64-bit all the way (applications, tools, and agents), and is fast to set up, update, and restart (for the rare times it needs to restart). It's perfect for compute hosts in or out of a cluster, a storage host, a DNS server, an IIS web server, and any server-hosting applications running in a container or virtual-machine guest operating system.
A Nano Server isn't all that fun to play with: You have to know what you want to accomplish. Otherwise, you'll be looking at a remote PowerShell connection and wondering what you're supposed to do next. But if you know what you want, it's very fast and powerful.
Microsoft has provided a [quick-start guide][2] to setting up Nano Server. Here, I take the boots-on-the-ground approach to show you what it's like in the real world.
First, you have to create a .vhd virtual hard drive file. As you can see in Figure 1, I had a few issues with files not being in the right place. PowerShell errors often indicate a mistyped line, but in this case, I had to keep double-checking where I put the files so that it could use the ISO information (which has to be copied and pasted to the server you want to create the .vhd file on). Once you have everything in place, you should see it go through the process of creating the .vhd file.
![](http://images.techhive.com/images/article/2016/09/nano-server-1-100682371-large.idge.jpg)
>Figure 1: One of the many file path errors I got when trying to run the New-NanoServerImage script. Once I worked out the file-location issues, it went through and created the .vhd file (as shown here).
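For orientation, the basic shape of that cmdlet is roughly the following; parameter names shifted between the technical previews and the final Windows Server 2016 release, and the media path, output paths, edition and computer name below are placeholders rather than the exact values used in this walkthrough:
```
# Build a Nano Server .vhd for use as a Hyper-V guest (placeholder paths and names)
Import-Module .\NanoServerImageGenerator -Verbose
New-NanoServerImage -MediaPath D:\ -BasePath .\Base -TargetPath .\Nano1\Nano1.vhd `
    -DeploymentType Guest -Edition Standard -ComputerName Nano1
```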
Next, when you create the VM in Hyper-V using the VM wizard, you need to point to an existing virtual hard disk and point to the new .vhd file you created (Figure 2).
![](http://images.techhive.com/images/article/2016/09/nano-server-2-100682368-large.idge.jpg)
>Figure 2: Connecting to a virtual hard disk (the one you created at the start).
When you start up the Nano server, you may get a memory error depending on how much memory you allocated and how much memory the Hyper-V server has left if you have other VMs running. I had to shut off a few VMs and increase the RAM until it finally started up. That was unexpected -- [Microsoft's Nano system][3] requirements say you can run it with 512MB, although it recommends you give it at least 800MB. (I ended up allocating 8GB after 1GB didn't work; I was impatient, so I didn't try increments in between.)
I finally came to the login screen, then signed in to get the Nano Server Recovery Console (Figure 3), which is essentially Nano server's terminal screen.
![](http://images.techhive.com/images/article/2016/09/nano-server-3-100682369-large.idge.jpg)
>Figure 3: The Nano Server Recovery Console.
Once I was in, I thought I was golden. But in trying to figure out a few details (how to join a domain, how to inject drivers I might not have, how to add roles), I realized that some configuration pieces would have been easier to add when I ran the New-NanoServerImage cmdlet by popping in a few more parameters.
However, once you have the server up and running, there are ways to configure it live. It all starts with a Remote PowerShell connection, as Figure 4 shows.
![](http://images.techhive.com/images/article/2016/09/nano-server-4-100682370-large.idge.jpg)
>Figure 4: Getting information from the Nano Server Recovery Console that you can use to perform a PowerShell Remote connection.
Microsoft provides direction on how to make the connection happen, but after trying four different sites, I found MSDN has the clearest (working) direction on the subject. Figure 5 shows the result.
![](http://images.techhive.com/images/article/2016/09/nano-server-5-100682372-large.idge.jpg)
>Figure 5: Making the remote PowerShell connection to your Nano Server.
Note: Once you've done the remote connection the long way, you can connect more quickly using a single line:
```
Enter-PSSession -ComputerName "192.168.0.100" -Credential ~\Administrator
```
If you knew ahead of time that this server was going to be a DNS server or be part of a compute cluster and so on, you would have added those roles or feature packages when you were creating the .vhd image in the first place. If you're looking to do so after the fact, you'll need to make the remote PowerShell connection, then install the NanoServerPackage and import it. Then you can see which packages you want to deploy using Find-NanoServerPackage (shown in Figure 6).
![](http://images.techhive.com/images/article/2016/09/nano-server-6-100682373-large.idge.jpg)
>Figure 6: Once you have installed and imported the NanoServerPackage, you can find the one you need to get your Nano Server up and running with the roles and features you require.
I tested this out by running the DNS package with the following command: `Install-NanoServerPackage -Name Microsoft-NanoServer-DNS-Package`. Once it was installed, I had to enable it with the following command: `Enable-WindowsOptionalFeature -Online -FeatureName DNS-Server-Full-Role`.
Obviously I didn't know these commands ahead of time. I have never run them before in my life, nor had I ever enabled a DNS role this way, but with a little research I had a DNS (Nano) Server up and running.
The next part of the process involves using PowerShell to configure the DNS server. That's a completely different topic and one best researched online. But it doesn't appear to be mind-blowingly difficult once you've learned the cmdlets to use: Add a zone? Use the Add-DNSServerPrimaryZone cmdlet. Add a record in that zone? Use the Add-DNSServerResourceRecordA. And so on.
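As a rough illustration of those two cmdlets (the zone name, host name and address below are made-up example values, not part of this walkthrough):
```
# Create a file-backed primary zone, then add an A record to it (example values only)
Add-DNSServerPrimaryZone -Name "corp.example.com" -ZoneFile "corp.example.com.dns"
Add-DNSServerResourceRecordA -ZoneName "corp.example.com" -Name "web01" -IPv4Address "192.168.0.50"
```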
After doing all this command-line work, you'll likely want proof that any of this is working. You should be able to do a quick review of PowerShell commands and note the many DNS ones that now present themselves (using Get-Command).
But if you need a GUI-based confirmation, you can open Server Manager on a GUI-based server and add the IP address of the Nano Server. Then right-click that server and choose Manage As to provide your credentials (~\Administrator and password). Once you have connected, right-click the server in Server Manager and choose Add Roles and Features; it should show that you have DNS installed as a role, as Figure 7 shows.
![](http://images.techhive.com/images/article/2016/09/nano-server-7-100682374-large.idge.jpg)
>Figure 7: Proving through the GUI that DNS was installed.
Don't bother trying to remote-desktop into the server. There is only so much you can do through the Server Manager tool, and that isn't one of them. And just because you can confirm the DNS role doesn't mean you have the ability to add new roles and features through the GUI. It's all locked down. Nano Server is how you'll make any needed adjustments.
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/3119770/windows-server/down-and-dirty-with-windows-nano-server-2016.html?utm_source=webopsweekly&utm_medium=email
作者:[J. Peter Bruzzese ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.infoworld.com/author/J.-Peter-Bruzzese/
[1]: http://www.infoworld.com/article/3049191/windows-server/nano-server-a-slimmer-slicker-windows-server-core.html
[2]: https://technet.microsoft.com/en-us/windows-server-docs/compute/nano-server/getting-started-with-nano-server
[3]: https://technet.microsoft.com/en-us/windows-server-docs/get-started/system-requirements--and-installation

View File

@ -0,0 +1,90 @@
How to Speed Up LibreOffice with 4 Simple Steps
====
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-featured-2.jpg)
For many fans and supporters of Open Source software, LibreOffice is the best alternative to Microsoft Office, and it has definitely seen huge improvements over the last few releases. However, the initial startup experience still leaves a lot to be desired. There are ways to improve launch time and overall performance of LibreOffice.
I will go over some practical steps that you can take to improve the load time and responsiveness of LibreOffice in the paragraphs below.
### 1. Increase Memory Per Object and Image Cache
This will help the program load faster by allocating more memory resources to the image cache and objects.
1. Launch LibreOffice Writer (or Calc)
2. Navigate to “Tools -> Options” in the menubar or use the keyboard shortcut “Alt + F12.”
3. Click “Memory” under LibreOffice and increase “Use for LibreOffice” to 128MB.
4. Also increase “Memory per object” to 20MB.
5. Click “Ok” to save your changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-step-1.png)
Note: You can set the numbers higher or lower than the suggested values depending on how powerful your machine is. It is best to experiment and see which value gives you the optimum performance.
### 2. Enable LibreOffice QuickStarter
If you have a generous amount of RAM on your machine, say 4GB and above, you can enable the “Systray Quickstarter” option to keep part of LibreOffice in memory for quicker response with opening new documents.
You will definitely see improved performance in opening new documents after enabling this option.
1. Open the options dialog by navigating to “Tools -> Options.”
2. In the sidebar under “LibreOffice”, select “Memory.”
3. Tick the “Enable Systray Quickstarter” checkbox.
4. Click “OK” to save the changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-2.png)
Once this option is enabled, you will see the LibreOffice icon in your system tray with options to open any type of document.
### 3. Disable Java Runtime
Another easy way to speed up the launch time and responsiveness of LibreOffice is to disable Java.
1. Open the Options dialog using “Alt + F12.”
2. In the sidebar, select “LibreOffice,” then “Advanced.”
3. Uncheck the “Use Java runtime environment” option.
4. Click “OK” to close the dialog.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-3.png)
If all you use is Writer and Calc, disabling Java will not stop you from working with your files as normal. But to use LibreOffice Base and some other special features, you may need to re-enable it. In that case, you will get a popup asking if you wish to turn it back on.
### 4. Reduce Number of Undo Steps
By default, LibreOffice allows you to undo up to 100 changes to a document. Most users do not need anywhere near that, so holding that many steps in memory is largely a waste of resources.
I recommend that you reduce this number to 20 to free up memory for other things, but feel free to customise this part to suit your needs.
1. Open the options dialog by navigating to “Tools -> Options.”
2. In the sidebar under “LibreOffice,” select “Memory.”
3. Under “Undo,” change the number of steps to your preferred value.
4. Click “OK” to save the changes.
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2016/08/speed-up-libreoffice-5.png)
If the tips provided helped you speed up the launch time of your LibreOffice Suite, let us know in the comments. Also, please share any other tips you may know for others to benefit as well.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/speed-up-libreoffice/?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+maketecheasier
作者:[Ayo Isaiah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/ayoisaiah/

View File

@ -0,0 +1,66 @@
Translating by bianjp
It's time to make LibreOffice and OpenOffice one again
==========
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/2e91089b-7ebd-4579-bf8f-74c34d1a94ce/e7e9c8dd481d8e068f2934c644788928/openofficedeathhero.jpg)
Let's talk about OpenOffice. More than likely you've already read, countless times, that Apache OpenOffice is near the end. The last stable iteration was 4.1.2 (released October, 2015) and a recent major security flaw took a month to patch. A lack of coders has brought development to a creeping crawl. And then, the worst possible news hit the ether; the project suggested users switch to MS Office (or LibreOffice).
For whom the bell tolls? The bell tolls for thee, OpenOffice.
I'm going to say something that might ruffle a few feathers. Are you ready for it?
The end of OpenOffice will be a good thing for open source and for users.
Let me explain.
### One fork to rule them all
When LibreOffice was forked from OpenOffice we saw yet another instance of the fork not only improving on the original, but vastly surpassing it. LibreOffice was an instant success. Every Linux distribution that once shipped with OpenOffice migrated to the new kid on the block. LibreOffice burst out of the starting gate and immediately hit its stride. Updates came at an almost breakneck speed and the improvements were plenty and important.
After a while, OpenOffice became an afterthought for the open source community. This, of course, was exacerbated when Oracle decided to discontinue the project in 2011 and donated the code to the Apache Project. By this point OpenOffice was struggling to move forward and that brings us to now. A burgeoning LibreOffice and a suffering, stuttering OpenOffice.
But I say there is a light at the end of this rather dim tunnel.
### Unfork them
This may sound crazy, but I think it's time LibreOffice and OpenOffice became one again. Yes, I know there are probably political issues and egos at stake, but I believe the two would be better served as one. The benefits of this merger would be many. Off the top of my head:
- Bring the MS Office filters together: OpenOffice has a strong track record of better importing certain files from MS Office (whereas LibreOffice has been known to be improving, but spotty).
- More developers for LibreOffice: Although OpenOffice wouldn't bring with it a battalion of developers, it would certainly add to the mix.
- End the confusion: Many users assume OpenOffice and LibreOffice are the same thing. Some don't even know that LibreOffice exists. This would end that confusion.
- Combine their numbers: Separate, OpenOffice and LibreOffice have impressive usage numbers. Together, they would be a force.
### A golden opportunity
The possible loss of OpenOffice could actually wind up being a golden opportunity for open source office suites in general. Why? I would like to suggest something that I believe has been necessary for a while now. If OpenOffice and LibreOffice were to gather their forces, diff their code, and merge, they could then do some much-needed retooling of not just the internal works of the whole, but also of the interface.
Let's face it, the LibreOffice and (by extension) OpenOffice UIs are both way out of date. When I install LibreOffice 5.2.1.2 the tool bar is an absolute disaster (Figure A).
### Figure A
![](http://tr2.cbsistatic.com/hub/i/2016/09/14/cc5250df-48cd-40e3-a083-34250511ffab/c5ac8eb1e2cb12224690a6a3525999f0/openofficea.jpg)
#### The LibreOffice default toolbar setup.
As much as I support and respect (and use daily) LibreOffice, it has become all too clear the interface needs a complete overhaul. What we're dealing with now is a throwback to the late 90s/early 2000s and it has to go. When a new user opens up LibreOffice for the first time, they are inundated with buttons, icons, and toolbars. Ubuntu Unity helped this out with the Head up Display (HUD), but that did nothing for other desktops and distributions. Sure, the enlightened user has no problem knowing what to look for and where it is (or to even customize the toolbars to reflect their specific needs), but for a new or average user, that interface is a nightmare. Now would be the perfect time for this change. Bring in the last vestiges of the OpenOffice developers and have them join the fight for an improved interface. With the combination of the additional import filters from OpenOffice and a modern interface, LibreOffice could finally make some serious noise on both the home and business desktops.
### Will this actually happen?
This needs to happen. Will it? I have no idea. But even if the powers that be decide the UI isn't in need of retooling (which would be a mistake), bringing OpenOffice into the fold would still be a big step forward. The merging of the two efforts would bring about a stronger focus on development, easier marketing, and far less confusion by the public at large.
I realize this might seem a bit antithetical to the very heart and spirit of open source, but merging LibreOffice and OpenOffice would combine the strengths of the two constituent pieces and possibly jettison the weaknesses.
From my perspective, that's a win-win.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/its-time-to-make-libreoffice-and-openoffice-one-again/
作者:[Jack Wallen ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.techrepublic.com/search/?a=jack%2Bwallen

View File

@ -0,0 +1,98 @@
NitroShare Easily Share Files Between Multiple Operating Systems on Local Network
====
One of the most important uses of a network is for file sharing purposes. There are multiple ways for Linux, Windows and Mac OS X users on a network to share files with each other, and in this post we shall cover NitroShare, a cross-platform, open-source and easy-to-use application for sharing files across a local network.
Nitroshare tremendously simplifies file sharing on a local network, once installed, it integrates with the operating system seamlessly. On Ubuntu, simply open it from the applications indicator, and on Windows, check it in the system tray.
Additionally, it automatically detects every other device on a network that has Nitroshare installed thereby enabling a user to easily transfer files from one machine to another by selecting which device to transfer to.
The following are the illustrious features of Nitroshare:
- Cross-platform, runs on Linux, Windows and Mac OS X
- Easy to setup, no configurations required
- It's simple to use
- Supports automatic discovery of devices running Nitroshare on local network
- Supports optional TLS encryption for security
- Works at high speeds on fast networks
- Supports transfer of files and directories (folders on Windows)
- Supports desktop notifications about sent files, connected devices and more
The latest version of Nitroshare was developed using Qt 5, it comes with some great improvements such as:
- Polished user interfaces
- Simplified device discovery process
- Removal of file size limitation from other versions
- Configuration wizard has also been removed to make it easy to use
### How To Install Nitroshare on Linux Systems
NitroShare is developed to run on a wide variety of modern Linux distributions and desktop environments.
#### On Debian Sid and Ubuntu 16.04+
NitroShare is included in the Debian and Ubuntu software repositories and can be easily installed with the following command.
```
$ sudo apt-get install nitroshare
```
But the available version might be out of date, however, to install the latest version of Nitroshare, issue the command below to add the PPA for the latest packages:
```
$ sudo apt-add-repository ppa:george-edison55/nitroshare
$ sudo apt-get update
$ sudo apt-get install nitroshare
```
#### On Fedora 24-23
Recently, NitroShare has been included to Fedora repositories and can be installed with the following command:
```
$ sudo dnf install nitroshare
```
#### On Arch Linux
For Arch Linux, NitroShare packages are available from the AUR and can be built/installed with the following commands:
```
$ wget https://aur.archlinux.org/cgit/aur.git/snapshot/nitroshare.tar.gz
$ tar xf nitroshare.tar.gz
$ cd nitroshare
$ makepkg -sri    # makepkg must be run as a regular user, not as root
```
### How to Use NitroShare on Linux
Note: As I had already mentioned earlier on, all other machines that you wish to share files with on the local network must have Nitroshare installed and running.
After successfully installing it, search for Nitroshare in the system dash or system menu and launch it.
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-Send-Files.png)
After selecting the files, click on “Open” to proceed to choosing the destination device, as in the image below. Then select the device and click “Ok”; the device list will only show machines on the local network that are running NitroShare.
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-Local-Devices.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/NitroShare-File-Transfer-Progress.png)
From the NitroShare settings General tab, you can add the device name, set default downloads location and in Advance settings you can set port, buffer, timeout, etc. only if you needed.
Homepage: <https://nitroshare.net/index.html>
That's it for now. If you have any issues regarding NitroShare, you can share them with us using our comment section below. You can also make suggestions and let us know of any wonderful, cross-platform file sharing applications out there that we probably have no idea about, and always remember to stay connected to Tecmint.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,364 @@
Server Monitoring with Shinken on Ubuntu 16.04
=====
Shinken is an open source computer and network monitoring framework written in Python and compatible with Nagios. Shinken can be used on all operating systems that can run Python applications, like Linux, Unix, and Windows. Shinken was written by Jean Gabes as a proof of concept for a new Nagios architecture, but it was turned down by the Nagios author and became an independent network and system monitoring tool that stays compatible with Nagios.
In this tutorial, I will show you how to install Shinken from source and add a Linux host to the monitoring system. I will use Ubuntu 16.04 Xenial Xerus as the operating system for the Shinken server and monitored host.
### Step 1 - Install Shinken Server
Shinken is a Python framework; we can install it with pip or install it from source. In this step, we will install Shinken from source.
There are some tasks that have to be completed before we start installing Shinken.
Install some new Python packages and create a Linux user with the name "shinken":
```
sudo apt-get install python-setuptools python-pip python-pycurl
sudo useradd -m -s /bin/bash shinken
```
Download the Shinken source from GitHub repository:
```
git clone https://github.com/naparuba/shinken.git
cd shinken/
```
Then install Shinken with the command below:
```
git checkout 2.4.3
python setup.py install
```
Next, for better results, we need to install 'python-cherrypy3' from the ubuntu repository:
```
sudo apt-get install python-cherrypy3
```
Now Shinken is installed, next we add Shinken to start at boot time and start it:
```
update-rc.d shinken defaults
systemctl start shinken
```
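To confirm that the daemons actually came up, a quick sanity check (output will vary with your install) could look like this:

```
systemctl status shinken
ps aux | grep shinken- | grep -v grep
```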
### Step 2 - Install Shinken Webui2
Webui2 is the Shinken web interface available from shinken.io. The easiest way to install Shinken webui2 is by using the shinken CLI command (which has to be executed as the shinken user).
Login to the shinken user:
```
su - shinken
```
Initialize the shinken configuration file - The command will create a new configuration .shinken.ini:
```
shinken --init
```
And install webui2 with this shinken CLI command:
```
shinken install webui2
```
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/6.png)
Webui2 is installed, but we need to install MongoDB and another python package with pip. Run command below as root:
```
sudo apt-get install mongodb
pip install 'pymongo>=3.0.3' requests arrow bottle==0.12.8
```
Next, go to the shinken directory and add the new webui2 module by editing the 'broker-master.cfg' file:
```
cd /etc/shinken/brokers/
vim broker-master.cfg
```
Add webui2 to the 'modules' option (around line 40):
```
modules webui2
```
Save the file and exit the editor.
Now go to the contacts directory and edit the file 'admin.cfg' for the admin configuration.
```
cd /etc/shinken/contacts/
vim admin.cfg
```
Change the values shown below:
```
contact_name admin # Username 'admin'
password yourpass # Replace 'yourpass' with your own password
```
Save and exit.
### Step 3 - Install Nagios-plugins and Shinken Packages
In this step, we will install Nagios-plugins and some Perl modules. Then we will install additional shinken packages from shinken.io to perform the monitoring.
Install Nagios-plugins and cpanminus which is required for building and installing the Perl modules:
```
sudo apt-get install nagios-plugins* cpanminus
```
Install these Perl modules with the cpanm command:
```
cpanm Net::SNMP
cpanm Time::HiRes
cpanm DBI
```
Now create a new link for the utils.pm file in the shinken libexec directory and create the log directory and file used by the log checks:
```
chmod u+s /usr/lib/nagios/plugins/check_icmp
ln -s /usr/lib/nagios/plugins/utils.pm /var/lib/shinken/libexec/
mkdir -p /var/log/rhosts/
touch /var/log/rhosts/remote-hosts.log
```
Next, install the shinken packages ssh and linux-snmp for monitoring SSH and SNMP sources from shinken.io:
```
su - shinken
shinken install ssh
shinken install linux-snmp
```
### Step 4 - Add a New Linux Host/host-one
We will add a new Linux host to be monitored: an Ubuntu 16.04 server with IP address 192.168.1.121 and hostname 'host-one'.
Connect to the Linux host-one:
```
ssh host1@192.168.1.121
```
Install the snmp and snmpd packages from the Ubuntu repository:
```
sudo apt-get install snmp snmpd
```
Next, edit the configuration file 'snmpd.conf' with vim:
```
vim /etc/snmp/snmpd.conf
```
Comment line 15 and uncomment line 17:
```
#agentAddress udp:127.0.0.1:161
agentAddress udp:161,udp6:[::1]:161
```
Comment out lines 51 and 53, then add the new configuration line below them:
```
#rocommunity mypass default -V systemonly
#rocommunity6 mypass default -V systemonly
rocommunity mypass
```
Save and exit.
Now start the snmpd service with the systemctl command:
```
systemctl start snmpd
```
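Before leaving host-one, it is worth checking that snmpd answers with the community string you just configured. A minimal test with the net-snmp client tools (installed above with the snmp package) might look like this; 1.3.6.1.2.1.1 is the standard 'system' subtree:

```
snmpwalk -v2c -c mypass 127.0.0.1 1.3.6.1.2.1.1
```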
Go to the shinken server and define the new host by creating a new file in the 'hosts' directory.
```
cd /etc/shinken/hosts/
vim host-one.cfg
```
Paste configuration below:
```
define host{
use generic-host,linux-snmp,ssh
contact_groups admins
host_name host-one
address 192.168.1.121
_SNMPCOMMUNITY mypass # SNMP Pass Config on snmpd.conf
}
```
Save and exit.
Edit the SNMP configuration on the Shinken server:
```
vim /etc/shinken/resource.d/snmp.cfg
```
Change 'public' to 'mypass' - must be the same password that you used in the snmpd configuration file on the client host-one.
```
$SNMPCOMMUNITYREAD$=mypass
```
Save and exit.
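Optionally, before rebooting you can ask the Shinken arbiter to validate the configuration. The exact flags can vary between Shinken versions, so treat this as a sketch rather than a guaranteed command:

```
shinken-arbiter -v -c /etc/shinken/shinken.cfg   # verify the configuration and exit
```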
Now reboot both servers - Shinken server and the monitored Linux host:
```
reboot
```
The new Linux host has been added successfully to the Shinken server.
### Step 5 - Access Shinken Webui2
Visit the Shinken webui2 on port 7767 (replace the IP in the URL with your IP):
```
http://192.168.1.120:7767
```
Log in with user admin and your password (the one that you have set in the admin.cfg configuration file).
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/1.png)
Shinken Dashboard in Webui2.
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/2.png)
Our 2 servers are monitored with Shinken.
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/3.png)
List all services that are monitored with linux-snmp.
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/4.png)
Status of all hosts and services.
![](https://www.howtoforge.com/images/server-monitoring-with-shinken-on-ubuntu-16-04/5.png)
### Step 6 - Common Problems with Shinken
- Problems with the NTP server
When you get errors like these from the NTP check:
```
TimeSync - CRITICAL ( NTP CRITICAL: No response from the NTP server)
TimeSync - CRITICAL ( NTP CRITICAL: Offset unknown )
```
To solve this problem, install ntp on all Linux hosts.
```
sudo apt-get install ntp ntpdate
```
Edit the ntp configuration:
```
vim /etc/ntp.conf
```
Comment all the pools and replace it with:
```
#pool 0.ubuntu.pool.ntp.org iburst
#pool 1.ubuntu.pool.ntp.org iburst
#pool 2.ubuntu.pool.ntp.org iburst
#pool 3.ubuntu.pool.ntp.org iburst
pool 0.id.pool.ntp.org
pool 1.asia.pool.ntp.org
pool 0.asia.pool.ntp.org
```
Next, add a new line inside restrict:
```
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict 192.168.1.120   # NOTE: 192.168.1.120 is the Shinken server IP address.
restrict ::1
```
Save and exit.
Start ntp and check the Shinken dashboard:
```
ntpd
```
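To confirm that the host has actually synchronised, you can query the peers of the local NTP daemon; once the offsets settle, the TimeSync check should recover:

```
ntpq -p
```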
- Problem check_netint.pl Not Found
Download the source from the github repository to the shinken lib directory:
```
cd /var/lib/shinken/libexec/
wget https://raw.githubusercontent.com/Sysnove/shinken-plugins/master/check_netint.pl
chmod +x check_netint.pl
chown shinken:shinken check_netint.pl
```
- Problem with NetworkUsage
There is an error message:
```
ERROR : Unknown interface eth\d+
```
Check your network interface and edit the linux-snmp template.
On my Ubuntu server, the network interface is 'enp0s8', not eth0, so I got this error.
Edit the linux-snmp template packs with vim:
```
vim /etc/shinken/packs/linux-snmp/templates.cfg
```
Add the network interface to line 24:
```
_NET_IFACES eth\d+|em\d+|enp0s8
```
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
作者:[Muhammad Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/

View File

@ -0,0 +1,129 @@
GOOGLER: NOW YOU CAN GOOGLE FROM LINUX TERMINAL!
====
![](https://itsfoss.com/wp-content/uploads/2016/09/google-from-linux-terminal.jpg)
A quick question: What do you do every day? Of course, a lot of things. But I can tell one thing, you search on Google almost every day (if not every day). Am I right?
Now, if you are a Linux user (which I'm guessing you are) here's another question: wouldn't it be nice if you could Google without even leaving the terminal? Without even firing up a browser window?
If you are a *nix enthusiast and also one of those people who just love the view of the terminal, I know your answer is Yes. And I think the rest of you will also like the nifty little tool I'm going to introduce today. It's called Googler!
### GOOGLER: GOOGLE IN YOUR LINUX TERMINAL
Googler is a straightforward command-line utility for Google-ing right from your terminal window. Googler mainly supports three types of Google Searches:
- Google Search: Simple Google searching, equivalent to searching on Google homepage.
- Google News Search: Google searching for News, equivalent to searching on Google News.
- Google Site Search: Google searching for results from a specific site.
Googler shows the search results with the title, URL and page excerpt. The search results can be opened directly in the browser with only a couple of keystrokes.
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-1.png)
### INSTALLATION ON UBUNTU
Let's go through the installation process first.
At first make sure you have python version 3.3 or later using this command:
```
python3 --version
```
If not, upgrade it. Googler requires python 3.3+ for running.
Though Googler is yet not available through package repository on Ubuntu, we can easily install it from the GitHub repository. All we have to do is run the following commands:
```
cd /tmp
git clone https://github.com/jarun/googler.git
cd googler
sudo make install
cd auto-completion/bash/
sudo cp googler-completion.bash /etc/bash_completion.d/
```
And thats it. Googler is installed along with command autocompletion feature.
### FEATURES & BASIC USAGE
If we go through all its features, Googler is actually quite a powerful tool. Some of the main features are:
Interactive Interface: Run the following command in terminal:
```
googler
```
The interactive interface will be opened. The developer of Googler, Arun [Prakash Jana][1] calls it the omniprompt. You can enter ? for available commands on omniprompt.
![](https://itsfoss.com/wp-content/uploads/2016/09/googler-2.png)
From the omniprompt, enter any search phrases to initiate the search. You can then enter n or p to navigate next or previous page of search results.
To open any search result in a browser window, just enter the index number of that result. Or you can open the search page itself by entering o .
- News Search: If you want to search News, start googler with the N optional argument:
```
googler -N
```
The subsequent omniprompt will fetch results from Google News.
- Site Search: If you want to search pages from a specific site, run googler with w {domain} argument:
```
googler -w itsfoss.com
```
The subsequent omniprompt will fetch results only from the It's FOSS blog!
- Manual Page: Run the following command for Googler manual page equipped with various examples:
```
man googler
```
- Google country/domain specific search:
```
googler -c in "hello world"
```
The above example command will open search results from Google's Indian domain (in for India).
- Filter search results by duration and language preference.
- Google search keywords support, such as: site:example.com or filetype:pdf etc.
- HTTPS proxy support.
- Shell commands autocomplete.
- Disable automatic spelling correction.
There is much more. You can tweak Googler to suit your needs.
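For example, the flags shown earlier can be combined; the following command (the query and site are only illustrative) restricts a site search to Google's Indian domain:

```
googler -c in -w itsfoss.com "ubuntu 16.04"
```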
Googler can also be integrated with a text-based browser (like [elinks][2], [links][3], [lynx][4], w3m etc.), so that you wouldn't even need to leave the terminal for browsing web pages. The instructions can be found on the [GitHub project page of Googler][5].
If you want a graphical demonstration of Googler's various features, feel free to check the terminal recording attached to the GitHub project page: [jarun/googler v2.7 quick demo][6].
### THOUGHTS ON GOOGLER?
Though Googler might not feel necessary or desirable to everybody, for someone who doesn't want to open the browser just to search on Google, or who simply wants to spend as much time as possible in the terminal window, it is a great tool indeed. What do you think?
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Munif Tanjim][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/munif/
[1]: https://github.com/jarun
[2]: http://elinks.or.cz/
[3]: http://links.twibright.com/
[4]: http://lynx.browser.org/
[5]: https://github.com/jarun/googler#faq
[6]: https://asciinema.org/a/85019

View File

@ -0,0 +1,51 @@
Insomnia 3.0 Is a Slick Desktop REST Client for Linux
=====
![](http://www.omgubuntu.co.uk/wp-content/uploads/2016/09/insomnia-app-screenshot.png)
Looking for a free REST client for the Linux desktop? Don't lose sleep: get [Insomnia][1].
The app is cross-platform and works on Linux, macOS and Windows. Its developer, Gregory Schier, told us that he created the app “to help developers communicate with [REST APIs][2].”
He also told us that Insomnia already has around 10,000 active users — 9% of which are on Linux.
“So far, the feedback from Linux users has been very positive because similar applications (not nice ones anyway) aren't usually available for Linux.”
Insomnia aims to speed up your API testing workflow, by letting you organise, run and debug HTTP requests through a cleanly designed interface.
The app also includes advanced features like cookie management, global environments, SSL validation, and code snippet generation.
As I am not a developer I can't evaluate this app first-hand, nor tell you why it rocks or highlight any major feature deficiencies.
But I thought I'd bring the app to your attention and let you decide for yourself. If you've been hunting for a slickly designed GUI alternative to command-line tools like [HTTPie][3], it might be well worth giving it a whirl.
### Download Insomnia 3.0 for Linux
Insomnia 3.0 (not to be confused with Insomnia v2.0 which is only available on Chrome) is available to download for Windows, macOS and Linux.
[Download Insomnia 3.0][4]
An installer is available for Ubuntu 14.04 LTS and up, as is a cross-distro AppImage:
[Download Insomnia 3.0 (.AppImage)][5]
If you want to keep pace with development of the app you can follow [Insomnia on Twitter][6].
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2016/09/insomnia-3-is-free-rest-client-for-linux?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+d0od+%28OMG%21+Ubuntu%21%29
作者:[JOEY-ELIJAH SNEDDON ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://plus.google.com/117485690627814051450/?rel=author
[1]: http://insomnia.rest/
[2]: https://en.wikipedia.org/wiki/Representational_state_transfer
[3]: https://github.com/jkbrzt/httpie
[4]: https://insomnia.rest/download/
[5]: https://builds.insomnia.rest/downloads/linux/latest
[6]: https://twitter.com/GetInsomnia

View File

@ -0,0 +1,159 @@
Using Ansible to Provision Vagrant Boxes
====
![](https://i1.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-plus-ansible.jpg?w=1352&ssl=1)
Ansible is a great tool for system administrators who want to automate system administration tasks. From configuration management to provisioning and managing containers for application deployments, Ansible [makes it easy][1]. The lightweight module based architecture is ideal for system administration. One advantage is that when the node is not being managed by Ansible, no resources are used.
This article covers how to use Ansible to provision Vagrant boxes. A [Vagrant box][2] in simple terms is a virtual machine prepackaged with tools required to run the development environment. You can use these boxes to distribute the development environment used by other team members for project work. Using Ansible, you can automate provisioning the Vagrant boxes with your development packages.
This tutorial uses Fedora 24 as the host system and [CentOS][3] 7 as the Vagrant box.
### Setting up prerequisites
To configure Vagrant boxes using Ansible, youll need a few things setup. This tutorial requires you to install Ansible and Vagrant on the host machine. On your host machine, execute the following command to install the tools:
```
sudo dnf install ansible vagrant vagrant-libvirt
```
The above command installs both Ansible and Vagrant on your host system, along with Vagrant's libvirt provider. Vagrant doesn't provide functionality to host your virtual machine guests (VMs). Rather, it depends on third party providers such as libvirt, VirtualBox, VMware, etc. to host the VMs. This provider works directly with libvirt and KVM on your Fedora system.
Next, make sure your user is in the wheel group. This special group allows you to run system administration commands. If you created your user as an administrator, such as during installation, you'll have this group membership. Run the following command:
```
id | grep wheel
```
If you see output, your user is in the group, and you can move on to the next section. If not, run the following command. You'll need to provide the password for the root account. Substitute your user name for the text <username>:
```
su -c 'usermod -a -G wheel <username>'
```
Then you will need to logout, and log back in, to inherit the group membership properly.
Now it's time to create your first Vagrant box, which you'll then configure using Ansible.
### Setting up the Vagrant box
Before you use Ansible to provision a box, you must create the box. To start, create a new directory which will store files related to the Vagrant box. To create this directory and make it the current working directory, issue the following command:
```
mkdir -p ~/lampbox && cd ~/lampbox
```
Before you create the box, you should understand the goal. This box is a simple example that runs CentOS 7 as its base system, along with the Apache web server, MariaDB (the popular open source database server from the original developers of MySQL) and PHP.
To initialize the Vagrant box, use the vagrant init command:
```
vagrant init centos/7
```
This command initializes the Vagrant box and creates a file named Vagrantfile, with some pre-configured variables. Open this file so you can modify it. The following line lists the base box used by this configuration.
```
config.vm.box = "centos/7"
```
Now setup port forwarding, so after you finish setup and the Vagrant box is running, you can test the server. To setup port forwarding, add the following line just before the end statement in Vagrantfile:
```
config.vm.network "forwarded_port", guest: 80, host: 8080
```
This option maps port 80 of the Vagrant Box to port 8080 of the host machine.
The next step is to set Ansible as our provisioning provider for the Vagrant Box. Add the following lines before the end statement in your Vagrantfile to set Ansible as the provisioning provider:
```
config.vm.provision :ansible do |ansible|
  ansible.playbook = "lamp.yml"
end
```
(You must add all three lines before the final end statement.) Notice the statement ansible.playbook = “lamp.yml”. This statement defines the name of the playbook used to provision the box.
### Creating the Ansible playbook
In Ansible, playbooks describe a policy to be enforced on your remote nodes. Put another way, playbooks manage configurations and deployments on remote nodes. Technically speaking, a playbook is a YAML file in which you write tasks to perform on remote nodes. In this tutorial, you'll create a playbook named lamp.yml to provision the box.
To make the playbook, create a file named lamp.yml in the same directory where your Vagrantfile is located and add the following lines to it:
```
---
- hosts: all
  become: yes
  become_user: root
  tasks:
    - name: Install Apache
      yum: name=httpd state=latest
    - name: Install MariaDB
      yum: name=mariadb-server state=latest
    - name: Install PHP5
      yum: name=php state=latest
    - name: Start the Apache server
      service: name=httpd state=started
    - name: Install firewalld
      yum: name=firewalld state=latest
    - name: Start firewalld
      service: name=firewalld state=started
    - name: Open firewall
      command: firewall-cmd --add-service=http --permanent
```
An explanation of each line of lamp.yml follows.
- hosts: all specifies the playbook should run over every host defined in the Ansible configuration. Since no hosts are configured yet, the playbook will run on localhost.
- become: yes and become_user: root state that the tasks should be performed with root privileges.
- tasks: specifies the tasks to perform when the playbook runs. Under the tasks section:
- - name: … provides a descriptive name to the task
- - yum: … specifies the task should be executed by the yum module. The options name and state are key=value pairs for use by the yum module.
When this playbook executes, it installs the latest versions of the Apache (httpd) web server, MariaDB, and PHP. Then it installs and starts firewalld, and opens a port for the Apache server. You're now done writing the playbook for the box. Now it's time to provision it.
### Provisioning the box
A few final steps remain before using the Vagrant Box provisioned using Ansible. To run this provisioning, execute the following command:
```
vagrant up --provider libvirt
```
The above command starts the Vagrant box, downloads the base box image to the host system if not already present, and then runs the playbook lamp.yml to provision.
If everything works fine, the output looks somewhat similar to this example:
![](https://i1.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-ansible-playbook-run.png?w=574&ssl=1)
This output shows that the box has been provisioned. Now check whether the server is accessible. To confirm, open your web browser on the host machine and point it to the address http://localhost:8080. Remember, port 8080 of the local host is forwarded to port 80 of the Vagrant box. You should be greeted with the Apache welcome page like the one shown below:
![](https://i0.wp.com/cdn.fedoramagazine.org/wp-content/uploads/2016/08/vagrant-ansible-apache-up.png?w=1004&ssl=1)
To make changes to your Vagrant box, first edit the Ansible playbook lamp.yml. You can find plentiful documentation on Ansible at [its official website][4]. Then run the following command to re-provision the box:
```
vagrant provision
```
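If you find yourself editing lamp.yml frequently, it can save a provisioning cycle to check the playbook's syntax first. This is just an optional sanity check using the ansible-playbook tool installed earlier; it only parses the file and does not change the box:

```
ansible-playbook lamp.yml --syntax-check
```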
### Conclusion
You've now seen how to use Ansible to provision Vagrant boxes. This was a basic example, but you can use these tools for many other use cases. For example, you can deploy complete applications along with up-to-date versions of required tools. Be creative as you use Ansible to provision your remote nodes or containers.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/using-ansible-provision-vagrant-boxes/
作者:[Saurabh Badhwar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://h4xr.id.fedoraproject.org/
[1]: https://ansible.com/
[2]: https://www.vagrantup.com/
[3]: https://centos.org/
[4]: http://docs.ansible.com/ansible/index.html

View File

@ -0,0 +1,82 @@
translating by ucasFL
How to Install Latest XFCE Desktop in Ubuntu 16.04 and Fedora 22-24
====
Xfce is a modern, open source and lightweight desktop environment for Linux systems. It also works well on many other Unix-like systems such as Mac OS X, Solaris, *BSD plus several others. It is fast and also user friendly with a simple and elegant user interface.
Installing a desktop environment on servers can sometimes prove helpful, as certain applications may require a desktop interface for efficient and reliable administration. One of the remarkable properties of Xfce is its low system resource utilization, such as low RAM consumption, which makes it a recommended desktop environment for servers when one is needed.
### XFCE Desktop Features
Additionally, some of its noteworthy components and features are listed below:
- Xfwm windows manager
- Thunar file manager
- User session manager to deal with logins, power management and beyond
- Desktop manager for setting background image, desktop icons and many more
- An application manager
- It is highly pluggable, plus it has several other minor features
The latest stable release of this desktop is Xfce 4.12, all its features and changes from previous versions are listed here.
#### Install Xfce Desktop on Ubuntu 16.04
Linux distributions such as Xubuntu, Manjaro, OpenSUSE, Fedora Xfce Spin, Zenwalk and many others provide their own Xfce desktop packages, however, you can install the latest version as follows.
```
$ sudo apt update
$ sudo apt install xfce4
```
Wait for the installation process to complete, then log out of your current session, or you can restart your system instead. At the login screen, choose the Xfce desktop and log in, as in the screenshot below:
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/XFCE-Desktop.png)
#### Install Xfce Desktop in Fedora 22-24
If you have an existing Fedora installation and want to install the Xfce desktop, you can use yum or dnf to install it as shown.
```
-------------------- On Fedora 22 --------------------
# yum install @xfce
-------------------- On Fedora 23-24 --------------------
# dnf install @xfce-desktop-environment
```
After installing Xfce, you can choose the xfce login from the Session menu or reboot the system.
![](http://www.tecmint.com/wp-content/uploads/2016/09/Select-Xfce-Desktop-at-Fedora-Login.png)
![](http://www.tecmint.com/wp-content/uploads/2016/09/Install-Xfce-Desktop-in-Fedora.png)
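If you want to confirm that you are indeed running an Xfce session after logging in, a quick check from a terminal (assuming the session manager exports this variable, which recent Xfce releases normally do) is:

```
echo $XDG_CURRENT_DESKTOP
```

It should normally print XFCE.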
If you don't want the Xfce desktop on your system anymore, use the commands below to uninstall it:
```
-------------------- On Ubuntu 16.04 --------------------
$ sudo apt purge xfce4
$ sudo apt autoremove
-------------------- On Fedora 22 --------------------
# yum remove @xfce
-------------------- On Fedora 23-24 --------------------
# dnf remove @xfce-desktop-environment
```
In this simple how-to guide, we walked through the steps for installing the latest version of the Xfce desktop, which I believe were easy to follow. If all went well, you can enjoy using Xfce, as one of the [best desktop environments for Linux systems][1].
However, to get back to us, you can use the feedback section below and remember to always stay connected to Tecmint.
--------------------------------------------------------------------------------
via: http://linoxide.com/firewall/pfsense-setup-basic-configuration/
作者:[Aaron Kili ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/
[1]: http://www.tecmint.com/best-linux-desktop-environments/

View File

@ -0,0 +1,160 @@
Part 13 - How to Write Scripts Using Awk Programming Language
====
All along from the beginning of the Awk series up to Part 12, we have been writing small Awk commands and programs on the command line and in shell scripts respectively.
However, Awk, just as Shell, is also an interpreted language, therefore, with all that we have walked through from the start of this series, you can now write Awk executable scripts.
Similar to how we write a shell script, Awk scripts start with the line:
```
#! /path/to/awk/utility -f
```
For example on my system, the Awk utility is located in /usr/bin/awk, therefore, I would start an Awk script as follows:
```
#! /usr/bin/awk -f
```
Explaining the line above:
```
#! referred to as Shebang, which specifies an interpreter for the instructions in a script
/usr/bin/awk is the interpreter
-f interpreter option, used to read a program file
```
That said, let us now dive into looking at some examples of Awk executable scripts, we can start with the simple script below. Use your favorite editor to open a new file as follows:
```
$ vi script.awk
```
And paste the code below in the file:
```
#!/usr/bin/awk -f
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
Save the file and exit, then make the script executable by issuing the command below:
```
$ chmod +x script.awk
```
Thereafter, run it:
```
$ ./script.awk
```
Sample Output
```
Writing my first Awk executable script!
```
A critical programmer out there must be asking, “where are the comments?”, yes, you can also include comments in your Awk script. Writing comments in your code is always a good programming practice.
It helps other programmers looking through your code to understand what you are trying to achieve in each section of a script or program file.
Therefore, you can include comments in the script above as follows.
```
#!/usr/bin/awk -f
#This is how to write a comment in Awk
#using the BEGIN special pattern to print a sentence
BEGIN { printf "%s\n","Writing my first Awk executable script!" }
```
Next, we shall look at an example where we read input from a file. We want to search for a system user named aaronkilik in the account file, /etc/passwd, then print the username, user ID and user GID as follows:
Below is the content of our script called second.awk.
```
#! /usr/bin/awk -f
#use the BEGIN special pattern to set the FS built-in variable
BEGIN { FS=":" }
#search for username: aaronkilik and print account details
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }
```
Save the file and exit, make the script executable and execute it as below:
```
$ chmod +x second.awk
$ ./second.awk /etc/passwd
```
Sample Output
```
Username : aaronkilik User ID : 1000 User GID : 1000
```
In the last example below, we shall use do while statement to print out numbers from 0-10:
Below is the content of our script called do.awk.
```
#! /usr/bin/awk -f
#printing from 0-10 using a do while statement
#do while statement
BEGIN {
#initialize a counter
x=0
do {
print x;
x+=1;
}
while(x<=10)
}
```
After saving the file, make the script executable as we have done before. Afterwards, run it:
```
$ chmod +x do.awk
$ ./do.awk
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### Summary
We have come to the end of this interesting Awk series, I hope you have learned a lot from all the 13 parts, as an introduction to Awk programming language.
As I mentioned from the beginning, Awk is a complete text processing language, and for that reason you can learn many other aspects of the Awk programming language, such as environment variables, arrays, functions (built-in and user-defined) and beyond.
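For instance, a user-defined function follows a simple pattern; here is a quick sketch (the function name and values are purely illustrative) that you can run directly from the shell:

```
$ awk 'function square(x) { return x*x } BEGIN { print "5 squared is", square(5) }'
```

The same function syntax works inside executable Awk scripts like the ones above.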
There are yet more parts of Awk programming to learn and master, and the many online resources and books on Awk programming can help you expand your skills further.
For any thoughts you wish to share or questions, use the comment form below. Remember to always stay connected to Tecmint for more exciting series.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/write-shell-scripts-in-awk-programming/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,246 @@
How to Use Flow Control Statements in Awk - part12
====
When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is, one after the other. But in certain situations, we may want to run some text filtering operations based on certain conditions, and that is where flow control statements come in.
![](http://www.tecmint.com/wp-content/uploads/2016/08/Use-Flow-Control-Statements-in-Awk.png)
There are various flow control statements in Awk programming and these include:
- if-else statement
- for statement
- while statement
- do-while statement
- break statement
- continue statement
- next statement
- nextfile statement
- exit statement
However, for the scope of this series, we shall expound on: if-else, for, while and do while statements. Remember that we already walked through how to use next statement in Part 6 of this Awk series.
### 1. The if-else Statement
The expected syntax of the if statement is similar to that of the shell if statement:
```
if (condition1) {
actions1
}
else {
actions2
}
```
In the above syntax, condition1 is an Awk expression, while actions1 and actions2 are the Awk commands executed depending on whether the condition is satisfied.
When condition1 is satisfied, meaning it's true, then actions1 is executed and the if statement exits; otherwise actions2 is executed.
The if statement can also be expanded to a if-else_if-else statement as below:
```
if (condition1){
actions1
}
else if (conditions2){
actions2
}
else{
actions3
}
```
For the form above, if condition1 is true, then actions1 is executed and the if statement exits, otherwise condition2 is evaluated and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is false then, actions3 is executed and the if statement exits.
Here is a case in point of using if statements, we have a list of users and their ages stored in the file, users.txt.
We want to print a statement indicating a user's name and whether the user's age is less than or more than 25 years old.
```
aaronkilik@tecMint ~ $ cat users.txt
Sarah L 35 F
Aaron Kili 40 M
John Doo 20 M
Kili Seth 49 M
```
We can write a short shell script to carry out our job above, here is the content of the script:
```
#!/bin/bash
awk ' {
if ( $3 <= 25 ){
print "User",$1,$2,"is less than 25 years old." ;
}
else {
print "User",$1,$2,"is more than 25 years old" ;
}
}' ~/users.txt
```
Then save the file and exit, make the script executable and run it as follows:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
User Sarah L is more than 25 years old
User Aaron Kili is more than 25 years old
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old
```
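To see the expanded if-else_if-else form in action, here is a quick variant of the same command (still reading the users.txt file above) that buckets the ages into three groups:

```
$ awk '{ if ($3 < 25) print $1,$2,"is under 25"; else if ($3 < 40) print $1,$2,"is between 25 and 39"; else print $1,$2,"is 40 or older" }' ~/users.txt
```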
### 2. The for Statement
In case you want to execute some Awk commands in a loop, then the for statement offers you a suitable way to do that, with the syntax below:
Here, the approach is simply defined by the use of a counter to control the loop execution: first you initialize the counter, then test it against a condition; if the test is true, the actions are executed and the counter is incremented. The loop terminates when the counter no longer satisfies the condition.
```
for ( counter-initialization; test-condition; counter-increment ){
actions
}
```
The following Awk command shows how the for statement works, where we want to print the numbers 0-10:
```
$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
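The same construct is also handy for looping over the fields of each input line; for example, the command below prints every field of users.txt on its own line (NF is the built-in field count):

```
$ awk '{ for(i=1; i<=NF; i++) print $i }' ~/users.txt
```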
### 3. The while Statement
The conventional syntax of the while statement is as follows:
```
while ( condition ) {
actions
}
```
The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true.
Below is a script to illustrate the use of while statement to print the numbers 0-10:
```
#!/bin/bash
awk 'BEGIN{ counter=0 ;
    while(counter<=10){
        print counter;
        counter+=1 ;
    }
}'
```
Save the file and make the script executable, then run it:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### 4. The do while Statement
It is a modification of the while statement above, with the following underlying syntax:
```
do {
actions
}
while (condition)
```
The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated. Using the very example under while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows:
```
#!/bin/bash
awk ' BEGIN{ counter=0 ;
do{
print counter;
counter+=1 ;
}
while (counter<=10)
}
'
```
After modifying the script, save the file and exit. Then make the script executable and execute it as follows:
```
$ chmod +x test.sh
$ ./test.sh
```
Sample Output
```
0
1
2
3
4
5
6
7
8
9
10
```
### Conclusion
This is not a comprehensive guide to Awk flow control statements; as I mentioned earlier, there are several other flow control statements in Awk.
Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions.
You can as well expound more on the rest of the flow control statements to gain more understanding on the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/use-flow-control-statements-with-awk-command/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.tecmint.com/author/aaronkili/

View File

@ -0,0 +1,101 @@
几个小窍门帮你管理项目的问题追踪器
==============================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_opennature_3.png?itok=30fRGfpv)
对于大多数开源项目来讲问题追踪系统是至关重要的。虽然市面上有非常多的开源工具提供了这样的功能但是大量项目还是选择了GitHub自带的问题追踪器Issue Tracker
它结构简单,因此其他人可以非常轻松地参与进来,但这才刚刚开始。
如果没有适当的处理你的代码仓库会挤满重复的问题单、模糊不明的特性需求单、混淆不清的bug报告单。项目维护者会被大量的组织内协调工作压得喘不过气来新的贡献者也搞不清楚项目当前的重点工作是什么。
接下来我们一起研究下如何玩转GitHub的问题单。
### 问题单就是用户的故事
我的团队曾经和开源专家[Jono Bacon][1]做过一次对话,他是[The Art of Community][2]的作者、GitHub的战略顾问和前社区总监。他告诉我们高质量的问题单是项目成功的关键。尽管有些人把问题单仅仅看作是一堆难题的列表但是他认为这个难题列表是我们必须要时刻关注、完善管理并进行分类的。他还认为给问题单打上标签的做法会令人意想不到的提升我们对代码和社区的了解程度也让我们更清楚问题的关键点在哪里。
“在提交问题单时用户不太会有耐心或者有兴趣把问题的细节描述清楚。在这种情况下你应当努力花最短的时间尽量多的获取有用的信息。”Jono Bacon说。
统一的问题单模板可以大大减轻项目维护者的负担,尤其是开源项目的维护者。我们发现,让用户讲故事的方法总是可以把问题描述的非常清楚。用户讲故事时需要说明“是谁,做了什么,为什么而做”,也就是:我是【何种用户】,为了【达到何种目的】,我要【做何种操作】。
实际操作起来,大概是这样的:
>我是一名顾客,我想付钱,所以我想创建个账户。
我们建议,问题单的标题始终使用这样的用户故事形式。你可以设置[问题单模板][3]来保证这点。
![](https://opensource.com/sites/default/files/resize/issuetemplate-new-520x293.png)
> 问题单模板让特性需求单保持统一的形式
这个做法的核心点在于,问题单要被清晰地呈现给它涉及的每一个人:它要尽量简单地指明受众(或者说用户)、操作(或者说任务)和收益(或者说目标)。你不需要拘泥于这个具体的模板,只要能把“是谁、做什么、为什么”讲清楚,就达到目的了。
### 高质量的问题单
问题单的质量是参差不齐的,这一点任何一个开源软件的贡献者或维护者都能证实。具有良好格式的问题单所应具备的素质在[The Agile Samurai][4]有过概述。
问题单需要满足如下条件:
- 客户价值所在
- 避免使用术语或晦涩的文字,就算不是专家也能看懂
- 可以切分,也就是说我们可以一小步一小步的对最终价值进行交付
- 尽量跟其他问题单没有瓜葛,这会降低我们在问题范围上的灵活性
- 可以协商,也就说我们有好几种办法达到目标
- 问题足够小,可以非常容易的评估出所需时间和资源
- 可衡量,我们可以对结果进行测试
### 那其他的呢? 要有约束
如果一个问题单很难衡量,或者很难在短时间内完成,你也一样有办法搞定它。有些人把这种办法叫做“约束”。
例如“这个软件要快”这种问题单是不符合我们的故事模板的而且是没办法协商的。多快才是快呢这种模糊的需求没有达到“好问题单”的标准但是如果你把一些概念进一步定义一下例如“每个页面都需要在0.5秒内加载完”,那我们就能更轻松地解决它了。我们可以把“约束”看作是成功的标尺,或者是里程碑。每个团队都应该定期地对“约束”进行测试。
### 问题单里面有什么?
敏捷方法中用户的故事里通常要包含验收指标或者标准。如果是在GitHub里建议大家使用markdown的清单来概述完成这个问题单需要完成的任务。优先级越高的问题单应当包含更多的细节。
比如说,你打算提交一个问题单,关于网站的新版主页的。那这个问题单的子任务列表可能就是这样的:
![](https://opensource.com/sites/default/files/resize/markdownchecklist-520x255.png)
>使用markdown的清单把复杂问题拆分成多个部分
在必要的情况下你还可以链接到其他问题单那些问题单每个都是一个要完成的任务。GitHub里做这个挺方便的
将特性进行细粒度的拆分,这样更轻松的跟踪整体的进度和测试,要能更高频的发布有价值的代码。
一旦以问题单的形式收到数据我们还可以用API更深入的了解软件的健康度。
“在统计问题单的类型和趋势时GitHub 的 API 可以发挥巨大作用”Bacon 告诉我们,“如果再做些数据挖掘工作,你就能发现代码里的问题点、谁是社区的活跃成员,或者其他有用的信息。”
有些问题单管理工具提供了API通过API可以增加额外的信息比如预估时间或者历史进度。
### 让大伙都上车
一旦你的团队决定使用某种问题单模板,你要想办法让所有人都按照模板来。代码仓库里的 ReadMe.md 其实也可以是项目的“How-to”文档。这个文档会描述清楚这个项目是做什么的最好是用可以搜索的语言并且解释其他贡献者应当如何参与进来比如提交需求单、bug 报告、建议,或者直接贡献代码)。
![](https://opensource.com/sites/default/files/resize/readme-520x184.png)
>为新来的合作者在ReadMe文件里增加清晰的说明
ReadMe 文件是提供“问题单指引”的完美场所。如果你希望特性需求单遵循“用户讲故事”的格式,那就把格式写在 ReadMe 里。如果你使用某种跟踪工具来管理待办事项,那就标记在 ReadMe 里,这样别人也能看到。
“问题单模板、合理的标签、如何提交问题单的文档、确保问题单被分类、所有的问题单都及时做出回复这些对于开源项目都至关重要”Bacon 说。
记住一点:这不是为了完成工作而做的工作。这是为了让其他人更轻松地发现、了解、融入你的社区而设立的规则。
"关注社区的成长,不仅要关注参与开发者的的数量增长,也要关注那些在问题单上帮助我们的人,他们让问题单更加明确、保持更新,这是活跃沟通和高效解决问题的力量源泉"Bacon说。
--------------------------------------------------------------------------------
via: https://opensource.com/life/16/7/how-take-your-projects-github-issues-good-great
作者:[Matt Butler][a]
译者:[echoma](https://github.com/echoma)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mattzenhub
[1]: http://www.jonobacon.org/
[2]: http://www.artofcommunityonline.org/
[3]: https://help.github.com/articles/creating-an-issue-template-for-your-repository/
[4]: https://www.amazon.ca/Agile-Samurai-Masters-Deliver-Software/dp/1934356581

View File

@ -0,0 +1,50 @@
Torvalds2.0: Patricia Torvalds 谈“计算”,大学,女权主义和科技界的多元化
![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
图片来源照片来自Becky Svartström, 修改自Opensource.com.
Patricia Torvalds 并不是那个在 Linux 和开源领域非常有名同样叫做 Torvalds 的人。
![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)
18岁的时候Patricia 已经是一个有多项科技成就并且拥有开源产业经验的女权主义者。在进入杜克大学工程学院开始第一学年的学习之前,她以实习生的身份在位于美国俄勒冈州波特兰市的 Puppet 实验室工作;不久后,她前往北卡罗来纳州的达勒姆,开始秋季学期的大学学习。
在这次独家采访中Patricia 表示,使她对计算机科学和工程学感兴趣的(剧透警告:不是她的父亲)原因,包括高中时候对科技的偏爱,女权主义在她的生活中扮演了重要角色以及对科技多元化缺乏的思考。
![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
###什么东西使你钟情于学习计算机科学和工程学?###
我在科技方面的兴趣的确产生于高中时代。我曾一度想投身于生物学,直到大约高中二年级的时候。高二结束以后,我在波特兰 VA 当网页设计实习生。与此同时我参加了一个叫做“风险勘探”的工程学课程在我高二学年的后期我们把一个水下机器人送到了太平洋。但是转折点大概是在青年时代的中期当我因被授予“NCWIT Aspiration in Computing”奖而被评为地区和全国级别的获奖者的时候出现的。
这个奖项的获得,我感觉确立了自己的兴趣。当然,我认为最重要的部分是我加入到一个所有获奖者都在里面的 Facebook 群,里面那些已经获奖的女孩们相互支持的程度简直令人难以置信。由于在 XV 和 VA 的工作,我在获奖前就已经确定了致力于计算机科学,但是和这些女孩的交谈更加坚定了这份兴趣,使之更加强烈。再后来,在高中最后两年教授 XV 课程也使我体会到工程学和计算机科学对我来说的确很有趣。
###你打算学习什么?毕业以后你已经知道自己想干什么了吗?###
我希望主修力学或电气科学以及计算机工程学和计算机科学还有女性学。毕业以后,我希望在一个支持或者创造科技为社会造福的公司工作,或者自己开公司。
###我的女儿在高中有一门 Visual Basic的编程课。她是整个班上唯一的一个女生并且以疲倦和痛苦的经历结束这门课程。你的经历是什么样的呢###
我的高中在高年级的时候开设计算机科学的课程,我也学习了 Visual Basic这门课不是很糟糕但我的确是20多个人的班级里唯一的三四个女生之一。其他的计算机课程似乎也有相似的性别比例差异。然而我所在的高中极其小并且老师对科技非常支持和包容所以我并没有感到厌倦。希望在未来的一些年里计算机方面的课程会变得更加多样化。
###你的学校做了哪些促进科技的智举?它们如何能够变得更好?###
我的高中学校给了我们长时间的机会接触到计算机,老师们会突然在不相关的课程上安排科技基础任务,比如有好多次任务,我们必须建一个供社会学习课程使用的网站,我认为这很棒因为它使我们每一个人都能接触到科技。机器人俱乐部也很活跃并且资金充足,但是非常小,我不是其中的成员。学校的科技/工程学项目中一个非常强大的组成部分是一门叫做”风险勘测“的学生教学工程学课程,这是一门需要亲自动手的课程,并且每年处理一个工程学或者计算机科学难题。我和我的一个同学在这儿教授了两年,在课程结束以后,有学生上来告诉我他们对从事工程学或者计算机科学感兴趣。
然而,我的高中没有特别的关注于让年轻女性加入到这些项目中来,并且在人种上也没有呈现多样化。计算机课程和俱乐部大量的主要成员都是男性白人。这的确应该能够有所改善。
###在成长过程中,你如何在家使用科技?###
老实说小的时候我使用电脑我的父亲设了一个跟踪装置当我们上网一个小时就会断线玩尼奥宠物和或者相似的游戏。我想我本可以毁坏跟踪装置或者在不连接网络的情况下玩游戏但我没有这样做。我有时候也会和我的父亲做一些小的科学项目我还记得我和他在电脑终端上打印出”Hello world"无数次。但是大多数时候,我都是和我的妹妹一起玩网络游戏,直到高中的时候才开始学习“计算”。
###你在高中学校的女权俱乐部很积极,从这份经历中你学到了什么?现在对你来说什么女权问题是最重要的?###
在高中二年级的后期,我和我的朋友一起建立了女权俱乐部。刚开始,我们受到了很多反对和抵抗,并且这从来就没有完全消失过。到我们毕业的时候,女权主义理想已经彻底成为了学校文化的一个部分。我们在学校做的女权主义工作通常是以一些比较直接的方式并集中于像着装要求这样一些问题。
就我个人来说我更集中于交叉地带的女权主义把女权主义运用到缓解其他方面的压迫比如种族歧视和阶级歧视。Facebook 网页《Gurrilla Feminism》是交叉地带女权主义一个非常好的例子并且我从中学到了很多。我目前管理波特兰分支。
在科技多样性方面女权主义对我也非常重要尽管作为一名和科技世界有很强联系的上流社会女性女权主义问题对我产生的影响相比其他人来说非常少我所涉及的交叉地带女权主义也是同样的。出版集团比如《Model View Culture》非常鼓舞我并且我很感激 Shanley Kane 所做的一切。
###你会给想教他们的孩子学习编程的父母什么样的建议?###
老实说,从没有人把我推向计算机科学或者工程学。正如我前面说的,在很长一段时间里,我想成为一名遗传学家。大二结束的那个夏天,我在 VA 当了一个夏季的网页设计实习生,这彻底改变了我之前的想法。所以我不知道我是否能够完整的回答这个问题。
我的确认为真实的兴趣很重要。如果在我12岁的时候我的父亲让我坐在一台电脑前教我安装网站服务器我认为我不会对计算机科学感兴趣。相反我的父母给了我很多可以支配的自由让我去做自己想做的事情绝大多数时候是为我的尼奥宠物游戏编糟糕的HTML网站。比我小的妹妹们没有一个对工程学或计算机科学感兴趣我的父母也不在乎。我感到很幸运的是我的父母给了我和我的妹妹们鼓励和资源去探索自己的兴趣。
仍然,在我成长过程中我也常说未来要和我的父亲做同样的职业,尽管我还不知道我父亲是干什么的,只知道他有一个很酷的工作。另外,中学的时候有一次,我告诉我的父亲这件事,然后他没有发表什么看法只是告诉我高中的时候不要想这事。所以我猜想这从一定程度上刺激了我。
###对于开源社区的领导者们,你有什么建议给他们来吸引和维持更加多元化的贡献者?###
我实际上在开源社区不是特别积极和活跃。和女性讨论“计算”我感觉更舒服。我是“NCWIT Aspirations in Computing”网站的一名成员这是我对科技保持持久兴趣的一个重要方面同样也包括 Facebook 群“Ladies Storm Hackathons”。
我认为对于吸引和维持多种多样有天赋的贡献者,安全空间很重要。我过去看到在一些开源社区有人发表关于女性歧视和种族主义的评论,人们指出这一问题随后该人被解职。我认为要维持一个专业的社区必须就骚扰事件和不正当行为有一个很高的标准。当然,人们已经有或者将有很多的选择关于在开源社区或其他任何社区能够表达什么。然而,如果社区领导人真的想吸引和维持多元化有天赋的人员,他们必须创造一个安全的空间并且把社区成员维持在很高的标准上。我也认为一些社区领导者不明白多元化的价值。很容易说明科技象征着精英社会,并且很多人被科技忽视的原因是他们不感兴趣,这一问题在很早的准备过程中就提出了。他们争论如果一个人在自己的工作上做得很好,那么他的性别或者民族还有性取向这些情况都变得不重要了。这很容易反驳,但我不想为错误找理由。我认为多元化缺失是一个错误,我们应该为之负责并尽力去改善这件事。
--------------------------------------------------------------------------------
via: http://opensource.com/life/15/8/patricia-torvalds-interview
作者:[Rikki Endsley][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensource.com/users/rikki-endsley
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://puppetlabs.com/
[3]:https://www.aspirations.org/
[4]:https://www.facebook.com/guerrillafeminism
[5]:https://modelviewculture.com/
[6]:https://www.aspirations.org/
[7]:https://www.facebook.com/groups/LadiesStormHackathons/

View File

@ -0,0 +1,405 @@
旅行时通过树莓派和iPad Pro备份图片
===================================================================
![](http://www.movingelectrons.net/images/bkup_photos_main.jpg)
>旅行中备份图片 - Gear.
### 介绍
我在很长的时间内一直在寻找一个旅行中备份图片的理想方法,把SD卡放进你的相机包是比较危险和暴露的SD卡可能丢失或者被盗数据可能损坏或者在传输过程中失败。比较好的一个选择是复制到另外一个设备即使它也是个SD卡并且将它放到一个比较安全的地方去备份到远端也是一个可行的办法但是如果去了一个没有网络的地方就不太可行了。
我理想的备份步骤需要下面的工具:
1. 用一台 iPad Pro而不是一台笔记本。我喜欢轻便地旅行而且我的旅行大部分是商务行程而不是以摄影为目的的休闲旅行这就是为什么我选择了 iPad Pro。
2. 用尽可能少的设备
3. 设备之间的连接需要很安全。我需要在旅馆和机场使用,所以设备之间的连接需要是封闭的、加密的。
4. 整个过程应该是稳定的安全的,我还用过其他的移动设备,但是效果不太理想[1].
### 设置
我制定了一个满足上面条件并且在未来可以扩充的设定,它包含下面这些部件的使用:
1. [2] 9.7 寸的 iPad Pro写作本文时最棒的又小又轻便的 iOS 设备。苹果笔不是必须的但是当我在路上进行一些编辑的时候依然用得上。所有的重活都由树莓派来做其他设备只是通过 ssh 连接到它。
2. [3] 树莓派3包含Raspbian系统
3. [4]Mini SD卡 [box/case][5].
5. [6]128G的优盘对于我是够用了你可以买个更大的你也可以买个移动硬盘但是树莓派没办法给移动硬盘供电你需要额外准备一个供电的hub当然优质的线缆能提供可靠便捷的安装和连接。
6. [9]SD读卡器
7. [10]另外的sd卡SD卡我在用满之前就会立即换一个这样就会让我的照片分布在不同的sd卡上
下图展示了这些设备之间如何相互连接.
![](http://www.movingelectrons.net/images/bkup_photos_diag.jpg)
>旅行中备份照片的流程图。
树莓派会充当无线接入点,建立一个 WiFi 网络。当然也可以建立 Ad Hoc 网络,那样更简单一些,但它不会加密设备之间的连接,因此我选择创建 WiFi 网络。
SD 卡插在 SD 读卡器里,接到树莓派的一个 USB 端口上128GB 的大容量 U 盘(我选了一款体积很小的闪迪)则一直插在树莓派的另一个 USB 端口上。基本思路是用一个 Python 脚本把 SD 卡上的照片备份到 U 盘上。备份是增量式的,所以每次备份都特别快;当然,如果你照片很多,或者拍了很多未压缩格式的照片,第一次备份的工作量还是不小的。iPad 用来运行 Python 脚本,也用来浏览 SD 卡和 U 盘上的文件。
如果给树莓派插上一根能上网的网线,那么连接到树莓派 WiFi 的设备就都可以上网了。
### 1. 树莓派的设置
这部分要用到命令行模式,我会尽可能详细的介绍,方便大家进行下去。
#### 安装和配置Raspbian
给树莓派接上鼠标、键盘和显示器,把 SD 卡插到树莓派上,然后按照[官网的步骤][12]安装 Raspbian。
安装完后执行下面的命令:
```
sudo apt-get update
sudo apt-get upgrade
```
这会把机器上的所有软件升级到最新。之后我把树莓派连接到本地网络,并且出于安全考虑修改了默认密码。
Raspbian 默认开启了 SSH这样所有的设置工作都可以在远程设备上完成。我还设置了 RSA 密钥验证,这是可选的,更多信息请查看[这里][13]。
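如果你也想启用基于密钥的 SSH 登录,下面是一个大致的操作示意(在你自己的电脑上执行;其中的 IP 地址只是假设的示例,请换成你的树莓派的实际地址):
```
# 在本地电脑上生成 RSA 密钥对(如果还没有的话)
ssh-keygen -t rsa -b 4096
# 把公钥复制到树莓派上,之后即可免密码登录
ssh-copy-id pi@192.168.0.10
```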
下面是在 Mac 上(使用 [iTerm2][14])通过 SSH 连接到树莓派的截图:
#### 建立 WPA2 验证的 WiFi
这部分的安装过程基于[这篇文章][15],并针对我的实际情况做了调整。
##### 1. 安装软件包
我们需要安装下面的软件包:
```
sudo apt-get install hostapd
sudo apt-get install dnsmasq
```
hostapd 用来创建 WiFi 热点dnsmasq 则同时提供 DHCP 和 DNS 服务,而且很容易配置。
##### 2. 编辑dhcpcd.conf
树莓派通过 dhcpcd 管理网络接口的配置。我们要给 wlan0 分配静态 IP所以先让 dhcpcd 忽略这个接口。
用 `sudo nano /etc/dhcpcd.conf` 命令打开配置文件,在文件末尾添加如下内容:
```
denyinterfaces wlan0
```
注意:这一行必须放在所有已添加的接口配置行之前(如果有的话)。
##### 3. 编辑接口配置
现在来设置静态 IP用 `sudo nano /etc/network/interfaces` 命令打开接口配置文件,按照如下内容编辑 wlan0 部分:
```
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
同样, 然后添加wlan1信息:
```
#allow-hotplug wlan1
#iface wlan1 inet manual
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
```
重要:用 `sudo service dhcpcd restart` 命令重启 dhcpcd 服务,然后用 `sudo ifdown eth0; sudo ifup wlan0` 命令重新加载接口配置,使上面的修改生效。
##### 4. 配置Hostapd
接下来我们配置 hostapd。用 `sudo nano /etc/hostapd/hostapd.conf` 命令新建配置文件,并填入如下内容:
```
interface=wlan0
# Use the nl80211 driver with the brcmfmac driver
driver=nl80211
# This is the name of the network
ssid=YOUR_NETWORK_NAME_HERE
# Use the 2.4GHz band
hw_mode=g
# Use channel 6
channel=6
# Enable 802.11n
ieee80211n=1
# Enable QoS Support
wmm_enabled=1
# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
# Accept all MAC addresses
macaddr_acl=0
# Use WPA authentication
auth_algs=1
# Require clients to know the network name
ignore_broadcast_ssid=0
# Use WPA2
wpa=2
# Use a pre-shared key
wpa_key_mgmt=WPA-PSK
# The network passphrase
wpa_passphrase=YOUR_NEW_WIFI_PASSWORD_HERE
# Use AES, instead of TKIP
rsn_pairwise=CCMP
```
配置完成后,我们需要运行 `sudo nano /etc/default/hostapd` 命令打开默认配置文件,找到 `#DAEMON_CONF=""` 这一行,把它改成 `DAEMON_CONF="/etc/hostapd/hostapd.conf"`,以便 hostapd 服务能够找到对应的配置文件。
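如果不想手动编辑,也可以用类似下面的命令完成这处修改(仅作示意,修改前最好先备份原文件):
```
# 将 DAEMON_CONF 指向我们刚创建的配置文件
sudo sed -i 's|#DAEMON_CONF=""|DAEMON_CONF="/etc/hostapd/hostapd.conf"|' /etc/default/hostapd
```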
##### 5. 配置Dnsmasq
dnsmasq 自带的配置文件里包含大量说明信息,方便你按需取用,但我们用不到那么多选项。我建议用下面两条命令把原文件移到别处(不要删除),然后自己新建一个配置文件:
```
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf
```
粘贴下面的信息到新文件中:
```
interface=wlan0 # Use interface wlan0
listen-address=192.168.1.1 # Explicitly specify the address to listen on
bind-interfaces # Bind to the interface to make sure we aren't sending things elsewhere
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
bogus-priv # Never forward addresses in the non-routed address spaces.
dhcp-range=192.168.1.50,192.168.1.100,12h # Assign IP addresses in that range with a 12 hour lease time
```
##### 6. 设置IPv4转发
最后我们需要配置数据包转发。用 `sudo nano /etc/sysctl.conf` 命令打开 sysctl.conf 文件,把包含 `net.ipv4.ip_forward=1` 的那一行前面的 # 号删掉,然后重启系统使其生效。
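下面是这一步的命令行示意(假设该行在你的 sysctl.conf 中原本是被 # 注释掉的;用 `sysctl -p` 可以让设置立即生效,不必等到重启):
```
# 取消注释以开启 IPv4 转发
sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# 立即应用设置
sudo sysctl -p
```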
我们还需要在 wlan0 和 eth0 之间配置 NAT把 eth0 上的网络连接通过 WiFi 分享给连接到树莓派的设备。可以参照下面的命令来实现:
```
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
```
我把上面这些命令保存为一个名为 hotspot-boot.sh 的脚本,并赋予它可执行权限:
```
sudo chmod 755 hotspot-boot.sh
```
这个脚本需要在树莓派启动时运行。实现方法有很多,下面是我的做法:
1. 把脚本放到 `/home/pi/scripts` 目录下。
2. 输入 `sudo nano /etc/rc.local` 命令编辑 rc.local 文件,把运行脚本的命令放到 `exit 0` 之前(更多信息参见[这里][16])。
下面是编辑后的 rc.local 文件示例:
```
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sudo /home/pi/scripts/hotspot-boot.sh &
exit 0
```
#### 安装 Samba 服务和 NTFS 兼容驱动
我们需要安装以下软件包来共享树莓派上的文件夹,其中 ntfs-3g 让我们能够访问 NTFS 文件系统中的文件:
```
sudo apt-get install ntfs-3g
sudo apt-get install samba samba-common-bin
```
你可以参照[这篇文档][17]来配置 Samba。
重要提示推荐的文档要先挂在外置硬盘我们不这样做因为在这篇文章写作的时候树莓派在启动时的auto-mounts功能同时将sd卡和优盘挂载到`/media/pi/`上,这篇文章有一些多余的功能我们也不会采用。
### 2. Python脚本
树莓派配置好之后,就该准备真正负责拷贝和备份照片的脚本了。这个脚本只是提供了一定程度的自动化;如果你具备基本的命令行操作技能,完全可以直接 SSH 进树莓派,用 cp 或 rsync 命令手动把照片从一个设备拷贝到另一个设备。脚本里我们使用 rsync 命令,它非常可靠,而且支持增量备份。
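举个例子,手动备份时在树莓派上执行的命令大致是下面这样(仅作示意,卷标请换成设备实际挂载的名称):
```
# 把 SD 卡上的内容增量同步到优盘上的同名目录
rsync -av /media/pi/SD32GB-03/ /media/pi/PDRIVE128GB/SD32GB-03/
```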
整个过程依赖两个文件:脚本文件本身和配置文件 `backup_photos.conf`。后者只有两行,分别指明设备的挂载目录和作为备份目的地的驱动器名称,内容如下:
```
mount folder=/media/pi/
destination folder=PDRIVE128GB
```
重要提示:不要在 `=` 符号前后添加多余的空格,否则脚本会失效。
下面就是这个 Python 脚本,我把它命名为 `backup_photos.py`,放在 `/home/pi/scripts/` 目录下,并在代码里加了注释,方便查看各部分的功能:
```
#!/usr/bin/python3
import os
import sys
from sh import rsync
'''
本脚本将挂载在 /media/pi 下的 SD 卡中的内容,复制到目标驱动器上与 SD 卡同名的目录中;目标驱动器的名称在 .conf 文件里定义。
Argument: label/name of the mounted SD Card.
'''
CONFIG_FILE = '/home/pi/scripts/backup_photos.conf'
ORIGIN_DEV = sys.argv[1]
def create_folder(path):
    print ('attempting to create destination folder: ',path)
    if not os.path.exists(path):
        try:
            os.mkdir(path)
            print ('Folder created.')
        except:
            print ('Folder could not be created. Stopping.')
            return
    else:
        print ('Folder already in path. Using that instead.')

confFile = open(CONFIG_FILE,'rU')
#IMPORTANT: rU Opens the file with Universal Newline Support,
#so \n and/or \r is recognized as a new line.
confList = confFile.readlines()
confFile.close()

for line in confList:
    line = line.strip('\n')
    try:
        name , value = line.split('=')
        if name == 'mount folder':
            mountFolder = value
        elif name == 'destination folder':
            destDevice = value
    except ValueError:
        print ('Incorrect line format. Passing.')
        pass

destFolder = mountFolder+destDevice+'/'+ORIGIN_DEV
create_folder(destFolder)

print ('Copying files...')
# Comment out to delete files that are not in the origin:
# rsync("-av", "--delete", mountFolder+ORIGIN_DEV, destFolder)
rsync("-av", mountFolder+ORIGIN_DEV+'/', destFolder)
print ('Done.')
```
### 3. iPad Pro 的配置
繁重的工作都由树莓派完成iPad Pro 完全不参与文件传输[18]。我们只需要在 iPad 上安装 [Prompt 2][19] 这个 SSH 客户端来连接树莓派,这样既可以运行 Python 脚本,也可以手动复制文件。
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_prompt.jpg)
>在 iPad 上用 Prompt 通过 SSH 连接树莓派。
由于我们安装了 Samba还可以用更图形化的方式访问连接在树莓派上的 USB 存储设备,比如观看视频、在不同设备之间复制和移动文件,这需要用到 [FileBrowser][20] 这样的文件浏览器应用。
### 4. 将它们都放到一起
假设 `SD32GB-03` 是接到树莓派上的 SD 卡的名称,`PDRIVE128GB` 是 U 盘的名称(已在前面的配置文件中定义好)。如果想备份 SD 卡上的照片,我们需要这么做:
1. 打开树莓派电源,等它正常启动并自动挂载好各个设备。
2. 把 iPad 连接到树莓派建立的 WiFi 网络。
3. 用 [Prompt][21] 这个 app 通过 SSH 连接到树莓派。
4. 连接好后输入下面的命令:
```
python3 backup_photos.py SD32GB-03
```
首次备份可能需要一些时间,具体取决于 SD 卡的使用量。备份期间,你需要保证 iPad 和树莓派之间的连接不中断;为了避免连接断开导致脚本中止,可以改用下面这条带 [nohup][22] 的命令:
```
nohup python3 backup_photos.py SD32GB-03 &
```
![](http://www.movingelectrons.net/images/bkup_photos_ipad&rpi_finished.png)
>脚本运行完成后的界面如图所示。
### 未来的定制
我在树莓派上安装了 VNC 服务,这样就可以用 iPad 连接树莓派的图形界面[23];我还打算安装 BitTorrent Sync用来在旅途中把照片备份到远端当然需要先配置好[24]。等这些都完成后,我会再写文章介绍。
欢迎在下面留言提问或发表评论,我会在本页下方回复。
--------------------------------------------------------------------------------
via: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
作者:[Editor][a]
译者:[jiajia9linuxer](https://github.com/jiajia9linuxer)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.movingelectrons.net/blog/2016/06/26/backup-photos-while-traveling-with-a-raspberry-pi.html
[1]: http://bit.ly/1MVVtZi
[2]: http://www.amazon.com/dp/B01D3NZIMA/?tag=movinelect0e-20
[3]: http://www.amazon.com/dp/B01CD5VC92/?tag=movinelect0e-20
[4]: http://www.amazon.com/dp/B010Q57T02/?tag=movinelect0e-20
[5]: http://www.amazon.com/dp/B01F1PSFY6/?tag=movinelect0e-20
[6]: http://amzn.to/293kPqX
[7]: http://amzn.to/290syFY
[8]: http://amzn.to/290syFY
[9]: http://amzn.to/290syFY
[10]: http://amzn.to/290syFY
[11]: http://amzn.to/293kPqX
[12]: https://www.raspberrypi.org/downloads/noobs/
[13]: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
[14]: https://www.iterm2.com/
[15]: https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
[16]: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
[17]: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
[18]: http://bit.ly/1MVVtZi
[19]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[20]: https://itunes.apple.com/us/app/filebrowser-access-files-on/id364738545?mt=8&uo=4&at=11lqkH
[21]: https://itunes.apple.com/us/app/prompt-2/id917437289?mt=8&uo=4&at=11lqkH
[22]: https://en.m.wikipedia.org/wiki/Nohup
[23]: https://itunes.apple.com/us/app/remoter-pro-vnc-ssh-rdp/id519768191?mt=8&uo=4&at=11lqkH
[24]: https://getsync.com/

View File

@ -1,113 +0,0 @@
echoma 翻译中
使用HTTP/2服务端推送技术加速Node.js应用
=========================================================
四月份,我们宣布了对 [HTTP/2 服务端推送技术][3]的支持,它是通过 HTTP 的 [Link 头](https://www.w3.org/wiki/LinkHeader)来实现的。我的同事 John 曾经通过一个例子演示了[在 PHP 里启用服务端推送功能][4]是多么简单。
![](https://blog.cloudflare.com/content/images/2016/08/489477622_594bf9e3d9_z.jpg)
我们想让使用 Node.js 构建的网站能够更轻松地获得这些性能提升,为此我们开发了 netjet 中间件,它可以解析应用生成的 HTML 并自动添加 Link 头。在 Express 框架中使用这个中间件时,可以看到应用的输出里多了如下 HTTP 头:
![](https://blog.cloudflare.com/content/images/2016/08/2016-08-11_13-32-45.png)
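截图里新增的 Link 头大致是下面这种形式(这里的文件名只是示意,实际内容取决于页面引用了哪些资源):
```
Link: </css/styles.css>; rel=preload; as=style, </js/app.js>; rel=preload; as=script
```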
[本博客][5]是使用 [Ghost](https://ghost.org/)(译者注:一个博客发布平台)进行发布的, 因此如果你的浏览器支持HTTP/2你已经在不知不觉中享受了服务端推送技术带来的好处了。接下来我们将进行更详细的说明。
netjet使用了带有自制插件的[PostHTML](https://github.com/posthtml/posthtml)来解析HTML。目前netjet用它来查找图片、脚本和外部样式。
在响应过程中增加 HTML 解析步骤有一个明显的缺点:会增加页面加载的时延(即首字节时间)。大多数情况下,这点延时会被应用中其他更耗时的操作(比如数据库访问)掩盖。为了解决这个问题netjet 包含了一个大小可调的 LRU 缓存,它以 HTTP 的 ETag 头作为索引,这使得 netjet 能够非常快速地为已经解析过的页面插入 Link 头。
在这种情况下如果我们现在从头设计一款全新的应用我们就需要全面的考量如何减少HTML解析和页面加载延时了。把页面内容和页面中的元数据分开存放是一种值得考虑的方法。
任意的Node.js HTML框架只要它支持类似Express这样的中间件netjet都是能够兼容的。只要把netjet像下面这样加到中间件加载链里就可以了。
```javascript
var express = require('express');
var netjet = require('netjet');
var root = '/path/to/static/folder';
express()
.use(netjet({
cache: {
max: 100
}
}))
.use(express.static(root))
.listen(1337);
```
再稍微加点代码netjet 也可以脱离 Web 框架独立工作:
```javascript
var http = require('http');
var netjet = require('netjet');
var port = 1337;
var hostname = 'localhost';
var preload = netjet({
cache: {
max: 100
}
});
var server = http.createServer(function (req, res) {
preload(req, res, function () {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/html');
res.end('<!doctype html><h1>Hello World</h1>');
});
});
server.listen(port, hostname, function () {
console.log('Server running at http://' + hostname + ':' + port+ '/');
});
```
[netjet文档里][1]有更多选项的信息。
### 查看推送了什么数据
![](https://blog.cloudflare.com/content/images/2016/08/2016-08-02_10-49-33.png)
访问[本文][5]时通过Chrome的开发者工具我们可以轻松的验证网站是否正在使用服务器推送技术译者注Chrome版本至少为53。在"Network"选项卡中,我们可以看到有些图片的"Initiator"这一列中包含了`Push`字样,这些图片就是服务器端推送的。
目前Firefox 的开发者工具还不能直观地展示被推送的资源。不过我们可以通过响应头里的 `cf-h2-pushed` 头看到一个列表,它列出了本页面主动推送给浏览器的资源。
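如果手边没有 Chrome也可以用 curl 粗略地查看响应头(示意命令:需要 curl 编译了 HTTP/2 支持,而且该头是否出现取决于这次请求是否真的触发了推送):
```
# 打印响应头,查看其中是否包含 cf-h2-pushed
curl -s -o /dev/null -D - --http2 https://blog.cloudflare.com/ | grep -i cf-h2-pushed
```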
希望大家能够踊跃为netjet添砖加瓦我也乐于看到有人正在使用netjet。
### Ghost和服务端推送技术
Ghost 真是包罗万象。在 Ghost 团队的帮助下,我把 netjet 集成了进去,作为测试功能,你可以在 Ghost 0.8.0 版本中用上它。
如果你正在使用Ghost你可以通过修改config.js、并在`production`配置块中增加preloadHeaders选项来启用服务端推送。
```javascript
production: {
url: 'https://my-ghost-blog.com',
preloadHeaders: 100,
// ...
}
```
Ghost已经为其用户整理了[一篇支持文档][2].
### 结论
使用 netjet你的 Node.js 应用也能用上浏览器预加载技术,而且[本站][5]已经在用它提供 HTTP/2 服务端推送了。
--------------------------------------------------------------------------------
via: https://blog.cloudflare.com/accelerating-node-js-applications-with-http-2-server-push/
作者:[Terin Stock][a]
译者:[译者ID](https://github.com/echoma)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://blog.cloudflare.com/author/terin-stock/
[1]: https://www.npmjs.com/package/netjet
[2]: http://support.ghost.org/preload-headers/
[3]: https://www.cloudflare.com/http2/server-push/
[4]: https://blog.cloudflare.com/using-http-2-server-push-with-php/
[5]: https://blog.cloudflare.com/accelerating-node-js-applications-with-http-2-server-push/

View File

@ -0,0 +1,192 @@
Ohm: 一种可以用两百行代码创造一种语言的 JavaScript 解析器
解析器是一种非常有用的软件库。它们在概念上很简单,实现起来却很有挑战性,在计算机科学中常常被视为一门暗黑艺术。在这个系列的博文中,我会向你展示为什么你不需要成为哈利·波特也能驾驭解析器。不过为了以防万一,还是带上你的魔杖吧。
我们将探索一个叫做 Ohm 的新开源库,它让解析器的搭建变得简单,写出的解析器也更容易复用。在这个系列里,我们会用 Ohm 来识别数字、构建计算器等等。到系列结束时,你将用不到 200 行代码发明出一种完整的编程语言。这个强大的工具会让你做到一些过去可能认为不可能的事情。
###为什么解析器很困难?
解析器非常有用,很多时候你都可能需要一个解析器。可能出现了一种新的文件格式,你需要处理它,但还没有人为它写出程序库;也可能你发现了一种旧格式的文件,而现有的解析器没法在你需要的平台上构建。这样的事我已经见过无数次了。代码来了又去,数据却是永恒的。
基础的解析器其实很简单:无非是把一个数据结构转化成另一个。那么,为什么会让人感觉必须成为邓布利多那样的魔法师才能把它们写出来呢?
解析器之所以一直被认为很难写,是因为绝大多数工具都很古老,并且假定你掌握大量晦涩难懂的计算机科学知识。如果你在大学里上过编译器课程,课本里讲的可能还是 20 世纪 70 年代的技术。幸运的是,解析器技术从那时起已经进步了很多。
按照传统做法,你要用一种叫作形式文法的特殊语法来定义你想解析的东西,然后把它交给 Bison、Yacc 之类的工具,这些工具会生成一堆你还得自己修改的 C 代码,或者需要链接到你实际使用的编程语言。另一种选择是用你喜欢的语言亲手写一个解析器,这样做既慢又容易出错,而且在真正能用之前还有一大堆额外的工作。
想象一下,如果你对所要解析内容的语法描述本身就是解析器,会怎么样?如果你可以直接运行这份语法,只在需要的地方加上钩子,又会怎么样?这正是 Ohm 所做的事情。
###解析器简介
[Ohm][1] 是一种新的解析系统。它和你可能在课本里见过的文法类似,但更强大、使用起来也更简单。通过 Ohm你可以用一种灵活的语法在 .ohm 文件里写下格式定义,然后用你的宿主语言为它加上语义。在这篇博文里,我们将用 JavaScript 作为宿主语言。
Ohm 建立在多年来对如何让解析器更简单、更灵活的研究基础之上。VPRI 的 [STEPS 项目][2]PDF曾使用 Ohm 的前身 [Ometa][3],为许多特殊用途创造了专门的语言(比如一个只有 400 行代码的可并行化图形渲染器)。
Ohm 有许多有趣的特性和记法,但与其一一解释,我认为不如直接动手构建一些东西。
###解析整数
让我们来解析一些数字。这看起来很简单,只需在文本串中寻找毗邻的数字,但让我们尝试处理所有形式的数字:整数和浮点数、十六进制数和八进制数、科学计数法,还有负数。解析数字很容易,解析正确却很难。
亲手写代码来做这件事会很麻烦,到处是坑,充满各种特殊情况,其中一些甚至相互冲突。
用 Ohm 构建的解析器包含三个部分:语法、语义和测试。我通常会先挑出问题的一部分,为它写测试,然后编写足够的语法和语义让测试通过;接着再挑问题的另一部分,增加更多的测试,更新语法和语义,同时确保之前的测试仍然全部通过。即使有了新的强大工具,写解析器在概念上依然不简单,而测试是以合理的方式构建解析器的唯一方法。现在,让我们开始干活吧。
我们将从整数开始。一个整数由一系列相互毗邻的数字组成。让我们把下面的内容放入一个叫做 grammar.ohm 的文件中:
```
CoolNums {
// just a basic integer
Number = digit+
}
```
这定义了一条名为 Number 的规则,它匹配一个或多个数字。+ 表示“一个或多个”,就像在正则表达式里那样。当出现一个或多个数字时,这条规则就会匹配它们;如果没有数字,或者出现了不是数字的东西,就不会匹配。digit数字定义为 0 到 9 之间的一个字符。digit 也是一条和 Number 类似的规则,不过它是 Ohm 的内建规则之一,因此我们不需要自己定义。如果愿意,我们可以覆盖它,但现在这样做没有意义,毕竟我们并不打算发明一种新的数字。
现在,我们可以读入这个语法并用 Ohm 库来运行它。
把它放入 test1.js
```
var ohm = require('ohm-js');
var fs = require('fs');
var assert = require('assert');
var grammar = ohm.grammar(fs.readFileSync('src/blog_numbers/syntax1.ohm').toString());
```
ohm.grammar 调用会读入文件并把它解析成一个语法对象。现在我们可以添加语义了。把下面的内容加入你的 JavaScript 文件:
```
var sem = grammar.createSemantics().addOperation('toJS', {
Number: function(a) {
return parseInt(this.sourceString,10);
}
});
```
这创建了一组名为 sem 的语义,并为它添加了一个叫 toJS 的操作。语义本质上是一组函数,每个函数对应语法中的一条规则:当某条语法规则被解析到时,对应的函数就会被调用。上面的 Number 函数会在语法中的 Number 规则被解析时调用。语法定义了语言中有哪些“块”,语义则定义了解析到这些“块”之后要做什么。
语义函数可以做任何我们想做的事,比如打印信息、创建对象,或者对子节点递归调用 toJS。此处我们只想把匹配到的文本转换成真正的 JavaScript 整数。
所有语义函数都带有一个隐含的 this 对象,其中有一些有用的属性。source 属性代表与当前节点相匹配的那部分输入文本this.sourceString 就是匹配到的输入字符串。调用 JavaScript 内建的 parseInt 函数可以把这个字符串转换成数字。传给 parseInt 的参数 10 告诉 JavaScript 这是一个十进制数。如果省略这个参数JavaScript 也会默认按十进制处理,但我们还是明确写上,因为后面我们要支持十六进制数,写清楚会更好。
既然已经有了一部分语法,就让我们实际解析点东西,看看解析器能不能工作。怎么知道它能不能工作?测试它,写很多很多的测试,每一种边缘情况都需要一个测试。
下面这个测试函数使用标准的 assert API它先匹配一段输入再用我们的语义把它转换成数字然后把结果和期望值进行比较。
```
function test(input, answer) {
var match = grammar.match(input);
if(match.failed()) return console.log("input failed to match " + input + match.message);
var result = sem(match).toJS();
assert.deepEqual(result,answer);
console.log('success = ', result, answer);
}
```
这就是那个测试函数。现在我们可以针对不同的数字写一堆测试了。如果匹配失败,脚本会抛出一个异常;否则就会打印 success。让我们试一下把下面这些内容加到脚本里
```
test("123",123);
test("999",999);
test("abc",999);
```
然后用 `node test1.js` 运行脚本。
你的输出应该是这样:
```
success = 123 123
success = 999 999
input failed to match abcLine 1, col 1:
> 1 | abc
^
Expected a digit
```
很酷。正如预期,前两个测试通过了,第三个失败了。更棒的是Ohm 自动给出了一条很有用的错误信息,指出了匹配失败的位置。
###浮点数
我们的解析器可以工作了,但它还做不了什么有趣的事。让我们把它扩展成既能解析整数又能解析浮点数。修改 grammar.ohm 文件,使它看起来像下面这样:
```
CoolNums {
// just a basic integer
Number = float | int
int = digit+
float = digit+ "." digit+
}
```
这把 Number 规则改成了指向一个浮点数或一个整数,| 读作“或”,也就是说“一个 Number 要么是 float要么是 int”。int 定义为 digit+float 定义为 digit+ 后面跟一个句点、再跟另一个 digit+,这意味着小数点前后都必须至少有一个数字。如果一个数里没有小数点,它就不是浮点数,而是整数。
现在再来看看语义动作。由于增加了新的规则,我们需要两个新的动作函数:一个处理整数,一个处理浮点数。
```
var sem = grammar.createSemantics().addOperation('toJS', {
Number: function(a) {
return a.toJS();
},
int: function(a) {
console.log("doing int", this.sourceString);
return parseInt(this.sourceString,10);
},
float: function(a,b,c) {
console.log("doing float", this.sourceString);
return parseFloat(this.sourceString);
}
});
```
这里有两点需要注意。首先int、float 和 Number 都有与之对应的语法规则和动作函数。不过 Number 的动作并没有做什么有意义的事:它接收子节点 a然后返回对子节点调用 toJS 的结果。换句话说Number 规则只是简单地返回它所匹配到的子规则的结果。由于这是 Ohm 中所有规则的默认行为,我们其实可以不写 Number 的动作Ohm 会替我们处理。
其次int 的动作函数只有一个参数 a而 float 的有三个a、b 和 c。这是由规则的元数arity决定的元数指的是一条规则由多少个部分组成。回头看一下语法float 的规则是:
```
float = digit+ "." digit+
```
float 规则由三个部分组成:第一个 digit+、'.',以及第二个 digit+。这三个部分都会作为参数传给 float 的动作函数,因此这个函数必须有三个参数,否则 Ohm 库会报错。在这里我们并不关心各个参数,因为我们直接取用了整个输入字符串,但仍然需要把参数列出来以避免报错。后面我们会真正用到其中的一些参数。
现在我们可以为新的浮点数支持添加更多的测试。
```
test("123",123);
test("999",999);
//test("abc",999);
test('123.456',123.456);
test('0.123',0.123);
test('.123',0.123);
```
注意,最后一个测试会失败。浮点数必须以一个数字开头,哪怕只是 0因此 .123 是无效的。实际上JavaScript 语言本身也有同样的规则。
###十六进制数
现在我们已经支持整数和浮点数,但还有几种其他的数字写法值得支持:十六进制数和科学计数法。十六进制数是以 16 为基数的数,它的数字包括 0 到 9 和 A 到 F。在计算机科学中处理二进制数据时经常用到十六进制因为两个十六进制数字正好可以表示 0 到 255 之间的数。
在绝大多数源自 C 的编程语言(包括 JavaScript十六进制数以 '0x' 开头,用来向编译器表明后面是一个十六进制数。要让解析器支持十六进制数,我们只需要再添加一条规则。
```
Number = hex | float | int
int = digit+
float = digit+ "." digit+
hex = "0x" hexDigit+
hexDigit := "0".."9" | "a".."f" | "A".."F"
```
实际上我增加了两条规则。'hex' 表示十六进制数是 '0x' 后面跟着一个或多个 'hexDigit'十六进制数字组成的串。一个 'hexDigit' 是 0 到 9、a 到 f 或 A 到 F大小写均可中的一个字符。我还修改了 Number 规则,把十六进制数作为另一种可选形式。现在我们只需要再为 hex 写一条语义动作。
```
hex: function(a,b) {
return parseInt(this.sourceString,16);
}
```
注意到,在这种情况下,我们把 '16' 作为基数传递给 'parseInt' 因为我们希望 JavaScript 知道这是一个十六进制数。
有一件重要的事情我刚才略过了。'hexDigit' 的规则是这样的:
```
hexDigit := "0".."9" | "a".."f" | "A".."F"
```
注意我用的是 ':=' 而不是 '='。在 Ohm 中,':=' 用于覆盖一条已有的规则。事实上 Ohm 已经内建了 'hexDigit' 的默认规则,就像 'digit'、'space' 等一堆其他规则一样。如果我用的是 '='Ohm 会报错,这个检查可以防止你在无意中覆盖已有规则。由于新的 hexDigit 规则和 Ohm 的内建规则完全一样,我们其实可以把它注释掉,让 Ohm 用自己的实现。我把它留在这里,只是为了让大家看清楚实际发生了什么。
现在我们可以再添加一些测试,看看十六进制数是不是真的可以工作:
```
test('0x456',0x456);
test('0xFF',255);
```
###科学计数法
最后,让我们来支持科学计数法。科学计数法用来表示非常大或非常小的数,比如 1.8×10^3。在大多数编程语言中科学计数法的数会写成 1.8e3 表示 1800或 1.8e-3 表示 0.0018。让我们再增加一对规则来支持这种指数表示
```
float = digit+ "." digit+ exp?
exp = "e" "-"? digit+
```
上面增加了一条 exp指数规则并通过在 float 规则末尾加上 'exp?' 来引用它。'?' 表示出现 0 次或 1 次,也就是说指数部分是可选的,但最多只能有一个。增加 exp 部分也改变了 float 规则的元数,所以我们需要给 float 的动作函数再增加一个参数,即使我们并不使用它。
```
float: function(a,b,c,d) {
console.log("doing float", this.sourceString);
return parseFloat(this.sourceString);
},
```
现在我们的测试可以通过了:
```
test('4.8e10',4.8e10);
test('4.8e-10',4.8e-10);
```
###结论
Ohm 是构建解析器的一个很棒的工具,因为它很容易上手,而且你可以逐步增加规则。Ohm 还有其他一些我今天没有介绍的很棒的特性,比如调试可视化工具和子类化。
到目前为止,我们用 Ohm 把字符串翻译成了 JavaScript 数字Ohm 也经常用于这类把一种表示转换成另一种表示的场景。不过 Ohm 能做的远不止这些:通过加入一组不同的语义动作,你可以用 Ohm 真正地处理和计算内容。同一份语法可以被许多不同的语义复用,这是 Ohm 最神奇的特性之一。
在这个系列的下一篇文章中,我将向你们展示如何计算像 (4.85 + 5 * (238 - 68)/2) 这样的数学表达式,而不仅仅是解析数字。
额外的挑战:你能扩展语法来支持八进制数吗?八进制数以 8 为基数,只使用 0 到 7 这几个数字,前面加上数字 0 或者字母 o。试试看能否让下面这些测试通过答案我下次揭晓。
```
test('0o77',7*8+7);
test('0o23',0o23);
```
--------------------------------------------------------------------------------
via: https://www.pubnub.com/blog/2016-08-30-javascript-parser-ohm-makes-creating-a-programming-language-easy/?utm_source=javascriptweekly&utm_medium=email
作者:[Josh Marinacci][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.pubnub.com/blog/author/josh/
[1]: https://github.com/cdglabs/ohm
[2]: http://www.vpri.org/pdf/tr2012001_steps.pdf
[3]: http://tinlizzie.org/ometa/

File diff suppressed because it is too large Load Diff