mirror of https://github.com/LCTT/TranslateProject.git

Merge remote-tracking branch 'LCTT/master'
commit c6f6581f84
@ -64,6 +64,7 @@ LCTT 的组成

* 2017/11/19 wxy 在上海交大举办的 2017 中国开源年会上做了演讲:《[如何以翻译贡献参与开源社区](https://linux.cn/article-9084-1.html)》。
* 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
* 2018/02/20 遭遇 DMCA 仓库被封。
* 2018/05/15 提升 MjSeven 为核心成员。

核心成员
-------------------------------
@ -92,6 +93,7 @@ LCTT 的组成

- 核心成员 @rusking,
- 核心成员 @qhwdw,
- 核心成员 @lujun9972,
- 核心成员 @MjSeven,
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,
- 前任校对 @PurlingNayuki,
@ -1,65 +1,67 @@

基于命令行的任务管理器 Taskwarrior
=====

Taskwarrior 是一个灵活的[命令行任务管理程序][1],用他们[自己的话说][2]:

> Taskwarrior 在命令行里管理你的 TODO 列表。它灵活、快速、高效、不显眼,它默默做自己的事情,让你不用自己操心。

Taskwarrior 是高度可定制的,但也可以“开箱即用”。在本文中,我们将向你展示添加和完成任务的基本命令,然后介绍几个更高级的命令。最后,我们将展示一些基本的配置设置,帮助你开始自定义自己的设置。

### 安装 Taskwarrior

Taskwarrior 在 Fedora 仓库中是可用的,所以安装它很容易:
```
sudo dnf install task
```

一旦完成安装,运行 `task` 命令。第一次运行将会创建一个 `~/.taskrc` 文件。

```
$ task
A configuration file could not be found in ~

Would you like a sample /home/link/.taskrc created, so Taskwarrior can proceed? (yes/no) yes
[task next]
No matches.
```
### 添加任务

添加任务快速而不显眼。

```
$ task add Plant the wheat
Created task 1.
```

运行 `task` 或者 `task list` 来显示即将来临的任务。

```
$ task list

ID Age Description Urg
1 8s Plant the wheat 0

1 task
```

让我们添加一些任务来完成这个示例。
```
$ task add Tend the wheat
Created task 2.
$ task add Cut the wheat
Created task 3.
$ task add Take the wheat to the mill to be ground into flour
Created task 4.
$ task add Bake a cake
Created task 5.
```

再次运行 `task` 来查看列表。

```
[task next]
@ -71,84 +73,83 @@ ID Age Description Urg
5 2s Bake a cake 0

5 tasks
```

### 完成任务

要将一个任务标记为完成,查找其 ID 并运行:

```
$ task 1 done
Completed task 1 'Plant the wheat'.
Completed 1 task.
```
你也可以用它的描述来标记一个任务已完成。

```
$ task 'Tend the wheat' done
Completed task 1 'Tend the wheat'.
Completed 1 task.
```

通过使用 `add`、`list` 和 `done`,你可以说已经入门了。
### 设定截止日期

很多任务并不需要截止日期:

```
task add Finish the article on Taskwarrior
```

但是有时候,设定一个截止日期正是你提高效率所需要的动力。在添加任务时使用 `due` 修饰符来设置特定的截止日期。

```
task add Finish the article on Taskwarrior due:tomorrow
```

`due` 非常灵活。它接受特定日期(`2017-02-02`)或 ISO-8601 格式(`2017-02-02T20:53:00Z`),甚至相对时间(`8hrs`)。所有示例可以查看 [Date & Time][3] 文档。
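下面用几条命令示意这三种写法(任务描述为虚构的示例):

```
# 特定日期
task add Pay the rent due:2017-02-02
# ISO-8601 时间戳
task add Join the meeting due:2017-02-02T20:53:00Z
# 相对时间:8 小时后
task add Submit the draft due:8hrs
```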
日期的用处还不只是截止日期,Taskwarrior 还有 `scheduled`、`wait` 和 `until` 选项。

```
task add Proof the article on Taskwarrior scheduled:thurs
```

一旦日期(本例中是星期四)到了,该任务就会被打上 `READY` 虚拟标签,显示在 `ready` 报告中。

```
$ task ready

ID Age S Description Urg
1 2s 1d Proof the article on Taskwarrior 5
```

要移除一个日期,使用空白值来 `modify` 任务:

```
$ task 1 modify scheduled:
```
### 查找任务

如果没有使用正则表达式搜索的能力,任务列表是不完整的,对吧?

```
$ task '/.* the wheat/' list

ID Age Project Description Urg
2 42min Take the wheat to the mill to be ground into flour 0
1 42min Home Cut the wheat 1

2 tasks
```
### 自定义 Taskwarrior

记得我们在开头创建的 `~/.taskrc` 文件吗?让我们来看看默认设置:

```
# [Created by task 2.5.1 2/9/2017 16:39:14]
# Taskwarrior program configuration file.
@ -180,41 +181,40 @@ data.location=~/.task
#include /usr/share/task/solarized-dark-256.theme
#include /usr/share/task/solarized-light-256.theme
#include /usr/share/task/no-color.theme
```
现在唯一生效的选项是 `data.location=~/.task`。要查看活动配置设置(包括内置的默认设置),运行 `show`。

```
task show
```

要改变设置,使用 `config`。

```
$ task config displayweeknumber no
Are you sure you want to add 'displayweeknumber' with a value of 'no'? (yes/no) yes
Config file /home/link/.taskrc modified.
```
### 示例

这些只是你可以用 Taskwarrior 做的一部分事情。

将你的任务分配到一个项目:

```
task 'Fix leak in the roof' modify project:Home
```

使用 `start` 来标记你正在做的事情,这可以帮助你回想起周末过后自己在做什么:

```
task 'Fix bug #141291' start
```

使用相关的标签:

```
task add 'Clean gutters' +weekend +house
```
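项目和标签的另一个好处是可以组合起来过滤报告。下面是一个小示意(沿用上文创建的任务,假设使用默认配置):

```
# 列出 Home 项目中带有 weekend 标签的待办任务
task project:Home +weekend list
```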
@ -229,7 +229,7 @@ via: https://fedoramagazine.org/getting-started-taskwarrior/

作者:[Link Dupont][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,122 @@

两款 Linux 桌面端可用的科学计算器
======

> 如果你想找个高级的桌面计算器的话,你可以看看开源软件,以及一些其它有趣的工具。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)

每个 Linux 桌面环境都至少带有一个功能简单的桌面计算器,但大多数计算器只能进行一些简单的计算。

幸运的是,还是有例外的:有些计算器不仅能做比开平方根和三角函数更多的运算,而且用起来还很简单。这里将介绍两款这样的强大计算器,外加一大堆额外的选择。
### SpeedCrunch

[SpeedCrunch][1] 是一款高精度科学计算器,有着简明的 Qt5 图形界面,并且强烈依赖键盘。

![SpeedCrunch graphical interface][3]

*工作中的 SpeedCrunch*

它支持单位,并且单位可以用在所有函数中。

例如,输入:

```
2 * 10^6 newton / (meter^2)
```

你可以得到:

```
= 2000000 pascal
```

SpeedCrunch 默认会将结果转换为国际单位制(SI),但还是可以用 `in` 命令来转换单位:
例如:

```
3*10^8 meter / second in kilo meter / hour
```

结果是:

```
= 1080000000 kilo meter / hour
```

`F5` 键可以将所有结果转为科学计数法(`1.08e9 kilo meter / hour`),`F2` 键则只把很大或很小的数字转为科学计数法。更多选项可以在配置页面找到。

可用的函数列表非常可观。它可用于 Linux、Windows 和 macOS,许可证是 GPLv2,你可以在 [Bitbucket][4] 上得到它的源码。
### Qalculate!

[Qalculate!][5](带感叹号)有着一段漫长而复杂的历史。

这个项目提供了一个强大的库,可以被其它程序使用(Plasma 桌面中的 krunner 就用它来计算),以及一个基于 GTK3 的图形界面。它允许你转换单位、处理物理常量、绘制图像、使用复数、矩阵以及向量、选择任意精度,等等。

![Qalculate! Interface][7]

*在 Qalculate! 中查找物理常量*

在单位的使用方面,Qalculate! 比 SpeedCrunch 更加直观,而且能识别常用前缀。你听说过 exapascal(艾帕)这个压强单位吗?反正我没有(太阳的中心压强大概是 `~26 PPa`),但 Qalculate! 能准确理解 `1 EPa` 的含义。同时,Qalculate! 在语法错误的处理上也更加灵活,所以你不需要太担心括号:如果没有歧义,Qalculate! 会直接给出正确答案。
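作为示意,可以在 Qalculate! 的输入栏中尝试类似下面的换算(`to` 是其常用的转换写法,具体语法请以其文档为准;此处不展示输出):

```
1 EPa
26 PPa to atm
```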
这个项目曾一度看上去被遗弃了,但在 2016 年它又焕发了活力,在一年里发布了 10 个版本。它的许可证是 GPLv2(源码在 [GitHub][8] 上),提供 Linux、Windows 和 macOS 的版本。

### 更多计算器

#### ConvertAll

好吧,这不算是“计算器”,但这个程序非常好用。

大部分单位转换器只是一个大的基本单位列表以及一堆基本组合,但 [ConvertAll][9] 与它们不一样。有试过把光年转换为英尺每秒吗?不管这样的组合说不说得通,只要你想转换任何种类的单位,ConvertAll 就是你要的工具。

只需要在相应的输入框内输入转换前和转换后的单位:如果单位相容,你会直接得到答案。

主程序是用 PyQt5 搭建的,但也有 [JavaScript 的在线版本][10]。
#### 带有单位包的 (wx)Maxima

有时候(好吧,是很多时候)一款桌面计算器是不够你用的,你需要更强的原力。

[Maxima][11] 是一款计算机代数系统(LCTT 译注:进行符号运算的软件。这种系统的要件是数学表达式的符号运算),你可以用它计算导数、积分、方程、特征值和特征向量、泰勒级数、拉普拉斯变换与傅立叶变换,以及任意精度的数字计算、二维或三维绘图……列出这些都够我们写几页纸的了。

[wxMaxima][12] 是一个设计精良的 Maxima 图形前端,它简化了许多 Maxima 选项的使用,但并不妨碍你使用其它功能。在 Maxima 的基础上,wxMaxima 还允许你创建“笔记本”,你可以在上面写笔记、保存图像等。(wx)Maxima 最惊艳的功能之一是它可以处理量纲单位。

在提示符下只需要输入:

```
load("unit")
```

按 `Shift+Enter`,等几秒钟,然后你就可以开始了。

默认情况下,单位包使用基本的 MKS 单位;但如果你更喜欢,例如,用 `N`(牛顿)而不是 `kg*m/s^2` 来表示结果,只需要输入 `setunits(N)`。
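下面是一段示意的 wxMaxima 会话(输出的具体格式因版本而异,此处为假设的显示效果):

```
(%i1) load("unit")$
(%i2) setunits(N)$
(%i3) 700 * kg * m / s^2;
(%o3)                            700 N
```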
Maxima 的帮助(也可以在 wxMaxima 的帮助菜单中找到)会给你更多信息。

你使用这些程序吗?你知道还有其它好的科学、工程用途的桌面计算器或者其它相关的计算器吗?在评论区里告诉我们吧!
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/scientific-calculators-linux

作者:[Ricardo Berlasso][a]
译者:[zyk2290](https://github.com/zyk2290)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rgb-es
[1]:http://speedcrunch.org/index.html
[2]:/file/382511
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png "SpeedCrunch graphical interface"
[4]:https://bitbucket.org/heldercorreia/speedcrunch
[5]:https://qalculate.github.io/
[6]:/file/382506
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png "Qalculate! Interface"
[8]:https://github.com/Qalculate
[9]:http://convertall.bellz.org/
[10]:http://convertall.bellz.org/js/
[11]:http://maxima.sourceforge.net/
[12]:https://andrejv.github.io/wxmaxima/
@ -1,23 +1,27 @@

Linux 命令行下的数学运算
======

> 有几个有趣的命令可以在 Linux 系统下做数学运算:`expr`、`factor`、`jot` 和 `bc` 命令。

![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg)

可以在 Linux 命令行下做数学运算吗?当然可以!事实上,有不少命令可以轻松完成这些操作,其中一些甚至会让你大吃一惊。让我们来学习这些有用的数学运算命令或命令语法吧。

### expr

首先,对于在命令行使用命令进行数学运算,可能最容易想到、最常用的命令就是 `expr`(<ruby>表达式<rt>expression</rt></ruby>)。它可以完成四则运算,也可以用于比较大小。下面是几个例子:
#### 变量递增

```
$ count=0
$ count=`expr $count + 1`
$ echo $count
1
```
#### 完成简单运算

```
$ expr 11 + 123
134
@ -31,11 +35,12 @@ $ expr 11 \* 123
1353
$ expr 20 % 3
2
```

注意,你需要在 `*` 运算符之前增加 `\` 符号,避免语法错误。`%` 运算符用于取余运算。
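除法同样支持,但结果只保留整数部分:

```
$ expr 123 / 11
11
```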
下面是一个稍微复杂的例子:

```
participants=11
total=156
@ -45,47 +50,49 @@ echo $share
14
echo $remaining
2
```

假设某个活动中有 11 位参与者,需要颁发的奖项总数为 156,那么平均每个参与者获得 14 项奖项,额外剩余 2 个奖项。
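上面的代码块中间折叠省略了实际的计算语句;根据输出倒推,完整的写法大致如下(示意):

```
participants=11
total=156
share=`expr $total / $participants`                # 整数除法:156 / 11 = 14
remaining=`expr $total - $participants \* $share`  # 余数:156 - 11 * 14 = 2
echo $share
echo $remaining
```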
#### 比较

下面让我们看一下比较大小的操作。从第一印象来看,语句看似有些怪异;这里并不是**设置**数值,而是进行数字大小比较。在本例中,`expr` 判断表达式是否为真:如果结果是 1,那么表达式为真;反之,表达式为假。

```
$ expr 11 = 11
1
$ expr 11 = 12
0
```

请读作“11 是否等于 11?”及“11 是否等于 12?”,你很快就会习惯这种写法。当然,我们不会在命令行上执行上述比较,可能的比较是 `$age` 是否等于 `11`。
```
$ age=11
$ expr $age = 11
1
```

如果将数字放到引号中间,那么你将进行字符串比较,而不是数值比较。

```
$ expr "11" = "11"
|
||||
1
|
||||
$ expr "eleven" = "11"
|
||||
0
|
||||
|
||||
```
|
||||
|
||||
在本例中,我们判断 10 是否大于 5,以及是否 大于 99。
|
||||
在本例中,我们判断 10 是否大于 5,以及是否大于 99。
|
||||
|
||||
```
$ expr 10 \> 5
1
$ expr 10 \> 99
0
```

的确,返回 1 和 0 分别代表比较的结果为真和假,这也是我们一般预期在 Linux 上得到的结果。但在下面的例子中,按照上述逻辑使用 `expr` 并不正确,因为 `if` 的工作原理刚好相反,即 0 代表真。
```
#!/bin/bash

@ -99,19 +106,19 @@ if [ `expr $price \> $cost` ]; then
else
echo "Don't sell it"
fi
```

下面,我们运行这个脚本:

```
$ ./checkPrice
Cost to us> 11.50
Price we're asking> 6
We make money
```

这显然与我们的预期不符!我们稍微修改一下,使其按预期工作:
```
#!/bin/bash

@ -125,12 +132,12 @@ if [ `expr $price \> $cost` == 1 ]; then
else
echo "Don't sell it"
fi
```
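两段脚本中读取变量的部分都被折叠省略了;根据运行输出推测,修正后的完整脚本大致如下(示意,变量的读取方式为推测):

```
#!/bin/bash

# 读取成本与售价(提示文字按前文的运行示例推测)
echo -n "Cost to us> "
read cost
echo -n "Price we're asking> "
read price

# expr 比较为真时输出 1;无论输出 0 还是 1,`[ 字符串 ]` 都为真,
# 所以必须像这样显式地与 1 比较
if [ `expr $price \> $cost` == 1 ]; then
  echo "We make money"
else
  echo "Don't sell it"
fi
```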
### factor

`factor` 命令的功能基本与你预期相符。你给出一个数字,该命令会给出对应数字的因子。

```
$ factor 111
111: 3 37
@ -143,11 +150,12 @@ $ factor 1987

```

注:`factor` 命令对于最后一个数字没有返回更多因子,这是因为 1987 是一个**质数**。
### jot

`jot` 命令可以创建一系列数字。给定数字总数及起始数字即可。

```
$ jot 8 10
10
@ -158,10 +166,10 @@ $ jot 8 10
15
16
17
```

你也可以用如下方式使用 `jot`,这里我们要求递减至数字 2。

```
$ jot 8 10 2
10
@ -172,10 +180,10 @@ $ jot 8 10 2
4
3
2
```

`jot` 可以帮你构造一系列数字组成的列表,该列表可以用于其它任务。
```
$ for i in `jot 7 17`; do echo April $i; done
April 17
@ -185,28 +193,28 @@ April 20
April 21
April 22
April 23
```

### bc
`bc` 基本上是命令行数学运算最佳工具之一。输入你想执行的运算,使用管道发送至该命令即可:

```
$ echo "123.4+5/6-(7.89*1.234)" | bc
113.664
```

可见 `bc` 并没有忽略精度,而且输入的字符串也相当直截了当。它还可以进行大小比较、处理布尔值、计算平方根、正弦、余弦和正切等。

```
$ echo "sqrt(256)" | bc
16
$ echo "s(90)" | bc -l
.89399666360055789051
```

事实上,`bc` 甚至可以计算 pi。你需要指定需要的精度。
```
$ echo "scale=5; 4*a(1)" | bc -l
3.14156
@ -216,10 +224,10 @@ $ echo "scale=20; 4*a(1)" | bc -l
3.14159265358979323844
$ echo "scale=40; 4*a(1)" | bc -l
3.1415926535897932384626433832795028841968
```

除了通过管道接收数据并返回结果,`bc` 还可以交互式运行,输入你想执行的运算即可。本例中提到的 `scale` 设置可以指定有效数字的个数。
```
$ bc
bc 1.06.95
@ -232,10 +240,10 @@ scale=2
2/3
.66
quit
```

你还可以使用 `bc` 完成数字进制转换。`obase` 用于设置输出的数字进制。
```
$ bc
bc 1.06.95
@ -248,23 +256,23 @@ obase=16
256 <=== entered
100 <=== response
quit
```

按如下方式使用 `bc` 也是完成十六进制与十进制转换的最简单方式之一:
```
$ echo "ibase=16; F2" | bc
242
$ echo "obase=16; 242" | bc
F2
```

在上面第一个例子中,我们将输入进制(`ibase`)设置为十六进制(`hex`),完成十六进制到十进制的转换。在第二个例子中,我们执行相反的操作,即将输出进制(`obase`)设置为十六进制。
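如果经常做这类进制转换,也可以把它们包装成两个小的 shell 函数(示意,函数名为虚构):

```
# 十进制转十六进制:d2h 255 → FF
d2h() { echo "obase=16; $1" | bc; }
# 十六进制转十进制:h2d FF → 255
h2d() { echo "ibase=16; $1" | bc; }
```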
### 简单的 bash 数学运算

通过使用双括号,我们可以在 bash 中完成简单的数学运算。在下面的例子中,我们创建一个变量,为变量赋值,然后依次执行加法、自减和平方。

```
$ ((e=11))
$ (( e = e + 7 ))
@ -278,19 +286,19 @@ $ echo $e
$ ((e=e**2))
$ echo $e
289
```

允许使用的运算符包括:

```
+ -   加法及减法
++ -- 自增与自减
* / % 乘法、除法及求余数
^     指数运算
```
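下面的脚本综合演示了这些运算符(注意:在 bash 的双括号里,幂运算实际写作 `**`;`^` 在 `(( ))` 中是按位异或,所以后文的乘方例子改用 `bc` 来计算):

```
#!/bin/bash
# bash 双括号算术运算示例
a=17; b=5

echo $(( a + b ))    # 22:加法
echo $(( a - b ))    # 12:减法
echo $(( a * b ))    # 85:乘法,这里不需要转义 *
echo $(( a / b ))    # 3:整数除法
echo $(( a % b ))    # 2:取余
(( a++ ))            # 自增
echo $a              # 18
echo $(( 2 ** 3 ))   # 8:幂运算用 **
```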
你还可以使用逻辑运算符和布尔运算符:

```
$ ((x=11)); ((y=7))
$ if (( x > y )); then
@ -303,14 +311,13 @@ $ if (( x > y )) >> (( y > z )); then
> echo "letters roll downhill"
> fi
letters roll downhill
```
或者如下方式:

```
$ if [ x > y ] && [ y > z ]; then echo "letters roll downhill"; fi
letters roll downhill
```
下面计算 2 的 3 次幂:

@ -319,23 +326,20 @@ $ echo "2 ^ 3"
```
2 ^ 3
$ echo "2 ^ 3" | bc
8
```

### 总结

在 Linux 系统中,有很多不同的命令行工具可以完成数字运算。希望你在读完本文之后,能掌握一两个新工具。

使用 [Facebook][1] 或 [LinkedIn][2] 加入 Network World 社区,点评你最喜爱的主题。
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,95 +0,0 @@

Translating by FelixYFZ

IT automation: How to make the case
======
At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road.

Path #1 might seem to offer the shortest route from A to B: Simply force-feed the project to everyone by executive mandate, essentially saying, “You’re going to do this – or else.”

Path #2 might appear less direct, because on this journey you take the time to explain the strategy and the reasons behind it. In fact, you’re going to be making pit stops along this route, rather than marathoning from start to finish: “Here’s what we’re doing – and why we’re doing it.”

Guess which path bears better results?

If you said #2, you’ve traveled both paths before – and experienced the results first-hand. Getting people on board with major changes beforehand is almost always the smarter choice.

IT leaders know as well as anyone that with significant change often comes [significant fear][1], skepticism, and other challenges. It may be especially true with IT automation. The term alone sounds scary to some people, and it is often tied to misconceptions. Helping people understand the what, why, and how of your company’s automation strategy is a necessary step to achieving your goals associated with that strategy.

[ **Read our related article,** [**IT automation best practices: 7 keys to long-term success**][2]. ]

With that in mind, we asked a variety of IT leaders for their advice on making the case for automation in your organization:

## 1. Show people what’s in it for them

Let’s face it: Self-interest and self-preservation are natural instincts. Tapping into that human tendency is a good way to get people on board: Show people how your automation strategy will benefit them and their jobs. Will automating a particular process in the software pipeline mean fewer middle-of-the-night calls for team members? Will it enable some people to dump low-skill, manual tasks in favor of more strategic, higher-order work – the sort that helps them take the next step in their career?

“Convey what’s in it for them, and how it will benefit clients and the whole company,” advises Vipul Nagrath, global CIO at [ADP][3]. “Compare the current state to a brighter future state, where the company enjoys greater stability, agility, efficiency, and security.”

The same approach holds true when making the case outside of IT; just lighten up on the jargon when explaining the benefits to non-technical stakeholders, Nagrath says.

Setting up a before-and-after picture is a good storytelling device for helping people see the upside.

“You want to paint a picture of the current state that people can relate to,” Nagrath says. “Present what’s working, but also highlight what’s causing teams to be less than agile.” Then explain how automating certain processes will improve that current state.

## 2. Connect automation to specific business goals

Part of making a strong case entails making sure people understand that you’re not just trend-chasing. If you’re automating simply for the sake of automating, people will sniff that out and become more resistant – perhaps especially within IT.

“The case for automation needs to be driven by a business demand signal, such as revenue or operating expense,” says David Emerson, VP and deputy CISO at [Cyxtera][4]. “No automation endeavor is self-justifying, and no technical feat, generally, should be a means unto itself, unless it’s a core competency of the company.”

Like Nagrath, Emerson recommends promoting the incentives associated with achieving the business goals of automation, and working toward these goals (and corresponding incentives) in an iterative, step-by-step fashion.

## 3. Break the automation plan into manageable pieces

Even if your automation strategy is literally “automate everything,” that’s a tough sell (and probably unrealistic) for most organizations. You’ll make a stronger case with a plan that approaches automation manageable piece by manageable piece, and that enables greater flexibility to adapt along the way.

“When making a case for automation, I recommend clearly illustrating the incentive to move to an automated process, and allowing iteration toward that goal to introduce and prove the benefits at lower risk,” Emerson says.

Sergey Zuev, founder at [GA Connector][5], shares an in-the-trenches account of why automating incrementally is crucial – and how it will help you build a stronger, longer-lasting argument for your strategy. Zuev should know: His company’s tool automates the import of data from CRM applications into Google Analytics. But it was actually the company’s internal experience automating its own customer onboarding process that led to a lightbulb moment.

“At first, we tried to build the whole onboarding funnel at once, and as a result, the project dragged [on] for months,” Zuev says. “After realizing that it [was] going nowhere, we decided to select small chunks that would have the biggest immediate effect, and start with that. As a result, we managed to implement one of the email sequences in just a week, and are already reaping the benefits of the decreased manual effort.”

## 4. Sell the big-picture benefits too

A step-by-step approach does not preclude painting a bigger picture. Just as it’s a good idea to make the case at the individual or team level, it’s also a good idea to help people understand the company-wide benefits.

“If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”

Eric Kaplan, CTO at [AHEAD][6], agrees that using small wins to show automation’s value is a smart strategy for winning people over. But the value those so-called “small” wins reveal can actually help you sharpen the big picture for people. Kaplan points to the value of individual and organizational time as an area everyone can connect with easily.

“The best place to do this is where you can show savings in terms of time,” Kaplan says. “If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”

Time and scalability are powerful benefits business and IT colleagues, both charged with growing the business, can grasp.

“The result of automation is scalability – less effort per person to maintain and grow your IT environment,” as [Red Hat][7] VP of Global Services John Allessio recently [noted][8]. “If adding manpower is the only way to grow your business, then scalability is a pipe dream. Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution.” (See his full article, [What DevOps teams really need from a CIO][8].)

## 5. Promote the heck out of your results

At the outset of your automation strategy, you’ll likely be making the case based on goals and the anticipated benefits of achieving those goals. But as your automation strategy evolves, there’s no case quite as convincing as one grounded in real-world results.

“Seeing is believing,” says Nagrath, ADP’s CIO. “Nothing quiets skeptics like a track record of delivery.”

That means, of course, not only achieving your goals, but also doing so on time – another good reason for the iterative, step-by-step approach.

While quantitative results such as percentage improvements or cost savings can speak loudly, Nagrath advises his fellow IT leaders not to stop there when telling your automation story.

“Making a case for automation is also a qualitative discussion, where we can promote the issues prevented, overall business continuity, reductions in failures/errors, and associates taking on [greater] responsibility as they tackle more value-added tasks.”

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation

作者:[Kevin Casey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
[3]:https://www.adp.com/
[4]:https://www.cyxtera.com/
[5]:http://gaconnector.com/
[6]:https://www.thinkahead.com/
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
@ -0,0 +1,38 @@

How a university network assistant used Linux in the 90s
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/moneyrecycle_520x292.png?itok=SAaIziNr)

In the mid-1990s, I was enrolled in computer science classes. My university’s computer science department provided a SunOS server—a multi-user, multitasking Unix system—for its students. We logged into it and wrote source code for the programming languages we were learning, such as C, C++, and Ada. In those days, well before social networks and instant messaging, we also used the system to communicate with each other, sending emails and using utilities such as `write` and `talk`. We were each also allowed to host a personal website. I enjoyed being able to complete my assignments and contact other users.

It was my first experience with this type of operating environment, but I soon learned about another operating system that could do the same thing: Linux.

While I was a student, I also worked part-time at the university. My first position was as a network installer in the Department of Housing and Residence (H&R). This involved connecting student dormitories to the campus network. As this was the university's first dormitory network service, only two buildings and about 75 students had been connected.

In my second year, the network expanded to cover an additional two buildings. H&R decided to let the university’s Office of Information Technology (OIT) manage this growing operation. I transferred to OIT and started the position of Student Assistant to the OIT Network Manager. That is how I discovered Linux. One of my new responsibilities was to manage the firewall systems that provided network and internet access to the dormitories.

Each student was registered with their hardware MAC address. Registered students could connect to the dorm network and receive an IP address and a route to the internet. Unlike the other expensive SunOS and VMS servers used by the university, these firewalls used low-cost computers running the free and open source Linux operating system. By the end of the year, the system had registered nearly 500 students.

![Red hat Linux install disks][1]

The OIT network staff members were using Linux for HTTP, FTP, and other services. They also used Linux on their personal desktops. That's when I realized I had my hands on a computer system that looked and acted just like the expensive SunOS box in the CS department but without the high cost. Linux could run on commodity x86 hardware, such as a Dell Latitude with 8 MB of RAM and a 133 MHz Intel Pentium CPU. That was the selling point for me! I installed Red Hat Linux 5.2 on a box scavenged from the surplus warehouse and gave my friends login accounts.

While I used my new Linux server to host my website and provide accounts to my friends, it also offered graphics capabilities beyond the CS department server. Using the X Window System, I could browse the web with Netscape Navigator, play music with [XMMS][2], and try out different window managers. I could also download and compile other open source software and write my own code.

I learned that Linux offered some pretty advanced features, many of which were more convenient than or superior to more mainstream operating systems. For example, many operating systems did not yet offer simple ways to apply updates. In Linux, this was easy, thanks to [AutoRPM][3], an update manager written by Kirk Bauer, which sent the root user a daily email with available updates. It had an intuitive interface for reviewing and selecting software updates to install—pretty amazing for the mid-'90s.

Linux may not have been well-known back then, and it was often received with skepticism, but I was convinced it would survive. And survive it did!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/my-linux-story-student

作者:[Alan Formy-Duval][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/alanfdoss
[1]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/red_hat_linux_install_disks.png?itok=VSw6Cke9 (Red hat Linux install disks)
[2]:http://www.xmms.org/
[3]:http://www.ccp14.ac.uk/solution/linux/autorpm_redhat7_3.html
@ -1,3 +1,4 @@

Translating by qhwdw

3 ways robotics affects the CIO role
======
![配图](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai.png?itok=toMIgELj)
@ -1,3 +1,4 @@

Translating by qhwdw

Testing IPv6 Networking in KVM: Part 2
======
@ -1,79 +0,0 @@

Translating by FelixYFZ

10 easy steps from proprietary to open source
======

"But surely open source software is less secure, because everybody can see it, and they can just recompile it and replace it with bad stuff they've written." Hands up: who's heard this?1

When I talk to customers--yes, they let me talk to customers sometimes--and to folks in the field2 this comes up quite frequently. In a previous article, "[Review by many eyes does not always prevent buggy code][1]", I talked about how open source software--particularly security software--isn't magically more secure than proprietary software, but I'd still go with open source over proprietary every time. But the way I've heard the particular question--about open source software being less secure--suggests that sometimes it's not enough to just explain that open source needs work, but we must also actively engage in [apologetics][2]3.

So here goes. I don't expect it to be up to Newton's or Wittgenstein's levels of logic, but I'll do what I can, and I'll summarise at the bottom so you have a quick list of the points if you want it.

### The arguments

First, we should accept that no software is perfect6. Not proprietary software, not open source software. Second, we should accept that good proprietary software exists, and third, there is also some bad open source software out there. Fourth, there are extremely intelligent, gifted, and dedicated architects, designers, and software engineers who create proprietary software.

But here's the rub: fifth, there is a limited pool of people who will work on or otherwise look at proprietary software. And you can never hire all the best people. Even in government and public sector organisations--who often have a larger talent pool available to them, particularly for cough security-related cough applications--the pool is limited.

Sixth, the pool of people available to look at, test, improve, break, re-improve, and roll out open source software is almost unlimited and does include the best people. Seventh (and I love this one), the pool also includes many of the people writing the proprietary software. Eighth, many of the applications being written by public sector and government organisations are open sourced anyway.

Ninth, if you're worried about running open source software that is unsupported or comes from dodgy, un-provenanced sources, then good news: There are a bunch of organisations7 who will check the provenance of that code, support, maintain, and patch it. They'll do it along the same type of business lines that you'd expect from a proprietary software provider. You can also ensure that the software you get from them is the right software: Their standard technique is to sign bundles of software so you can verify that what you're installing isn't from some random bad person who's taken that code and done Bad Things™ with it.

Tenth (and here's the point of this article), when you run open source software, when you test it, when you provide feedback on issues, when you discover errors and report them, you are tapping into--and adding to--the commonwealth of knowledge and expertise and experience that is open source, which is made only greater by your doing so. If you do this yourself, or through one of the businesses that support open source software8, you are part of this commonwealth. Things get better with open source software, and you can see them getting better. Nothing is hidden--it's, well, open. Can things get worse? Yes, they can, but we can see when that happens and fix it.

This commonwealth does not apply to proprietary software: what stays hidden does not enlighten or enrich the world.

I know that I need to be careful about the use of the "commonwealth" as a Briton; it has connotations of (faded…) empires, which I don't intend in this case. It's probably not what Cromwell9 had in mind when he talked about the "Commonwealth," either, and anyway, he's a somewhat controversial historical figure. What I'm talking about is a concept in which I think the words deserve concatenation--"common" and "wealth"--to show that we're talking about something more than just money, but shared wealth available to all of humanity.

I really believe in this. If you want to take away a religious message from this article, it should be this10: the commonwealth is our heritage, our experience, our knowledge, our responsibility. The commonwealth is available to all of humanity. We have it in common, and it is an almost inestimable wealth.

### A handy crib sheet

1. (Almost) no software is perfect.
2. There is good proprietary software.
3. There is bad open source software.
4. There are clever, talented, and devoted people who create proprietary software.
5. The pool of people available to write and improve proprietary software is limited, even within the public sector and government realm.
6. The corresponding pool of people for open source is virtually unlimited…
7. …and includes a goodly number of the talent pool of people writing proprietary software.
8. Public sector and government organisations often open source their software anyway.
9. There are businesses that will support open source software for you.
10. Contribution--even usage--adds to the commonwealth.

1 OK--you can put your hands down now.

2 Should this be capitalized? Is there a particular field, or how does it work? I'm not sure.

3 I have a degree in English literature and theology--this probably won't surprise regular readers of my articles.4

4 Not, I hope, because I spout too much theology,5 but because it's often full of long-winded, irrelevant humanities (U.S. English: "liberal arts") references.

5 Emacs. Every time.

6 Not even Emacs. And yes, I know that there are techniques to prove the correctness of some software. (I suspect that Emacs doesn't pass many of them…)

7 Hand up here: I'm employed by one of them, [Red Hat][3]. Go have a look--it's a fun place to work, and [we're usually hiring][4].

8 Assuming that they fully abide by the rules of the open source licence(s) they're using, that is.

9 Erstwhile "Lord Protector of England, Scotland, and Ireland"--that Cromwell.

10 Oh, and choose Emacs over Vi variants, obviously.

This article originally appeared on [Alice, Eve, and Bob - a security blog][5] and is republished with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/11/commonwealth-open-source

作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/article/17/10/many-eyes
[2]:https://en.wikipedia.org/wiki/Apologetics
[3]:https://www.redhat.com/
[4]:https://www.redhat.com/en/jobs
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/
@ -1,3 +1,5 @@

translating by cizezsy

How To Kill The Largest Process In An Unresponsive Linux System
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Kill-The-Largest-Process-720x340.png)
@ -1,143 +0,0 @@

translating---geekpi

HeRM's - A Commandline Food Recipes Manager
======
![配图](https://www.ostechnix.com/wp-content/uploads/2017/12/herms-720x340.jpg)

Cooking is love made visible, isn't it? Indeed! Whether cooking is your passion, hobby, or profession, I am sure you maintain a cooking journal. Keeping a cooking journal is one way to improve your cooking practice. There are many ways to take notes about recipes. You could maintain a small diary/notebook, store the recipes' notes on your smartphone, or save them in a word document on your computer. There are a multitude of options. Today, I introduce **HeRM's**, a Haskell-based commandline food recipes manager for taking notes about your delicious food recipes. Using HeRM's, you can add, view, edit, and delete food recipes and even make your shopping lists. All from your Terminal! It is a free and open source utility written in the Haskell programming language. The source code is freely available on GitHub, so you can fork it, add more features, or improve it.
### HeRM's - A Commandline Food Recipes Manager

#### Installing HeRM's

Since it is written in Haskell, we need to install Cabal first. Cabal is a command-line program for downloading and building software written in the Haskell programming language. Cabal is available in the core repositories of most Linux distributions, so you can install it using your distribution's default package manager.

For instance, you can install cabal on Arch Linux and its variants, such as Antergos and Manjaro Linux, using the command:

```
sudo pacman -S cabal-install
```

On Debian or Ubuntu:

```
sudo apt-get install cabal-install
```

After installing Cabal, make sure you have added it to your PATH. To do so, edit your **~/.bashrc** file:

```
vi ~/.bashrc
```

Add the following line:

```
PATH=$PATH:~/.cabal/bin
```

Press **:wq** to save and quit the file. Then, run the following command to apply the changes.

```
source ~/.bashrc
```

Once cabal is installed, run the following command to install herms:
```
cabal install herms
```

Have a cup of coffee! This will take a while. After a couple of minutes, you will see output something like below.

```
[...]
Linking dist/build/herms/herms ...
Installing executable(s) in /home/sk/.cabal/bin
Installed herms-1.8.1.2
```

Congratulations! Herms is installed.

#### Adding recipes

Let us add a food recipe, for example, **Dosa**. For those wondering, Dosa is a popular south Indian food served hot with **sambar** and **chutney**. It is a healthy and, arguably, most delicious food. It contains no added sugars or saturated fats. It is also easy to make. There are a couple of different types of Dosas; the most common served in our home is Plain Dosa.
To add a recipe, type:

```
herms add
```

You will see a screen something like below. Start entering the recipe's details.

[![][1]][2]

To navigate through the fields, use the following keyboard shortcuts:

* **Tab / Shift+Tab** - Next / Previous field
* **Ctrl + <Arrow keys>** - Navigate fields
* **[Meta or Alt] + <h-j-k-l>** - Navigate fields
* **Esc** - Save or Cancel.

Once you have added the recipe's details, press the Esc key and hit Y to save it. Similarly, you can add as many recipes as you want.

To list the added recipes, type:

```
herms list
```

[![][1]][3]

To view the details of any recipe listed above, just use the respective number, like below.

```
herms view 1
```

[![][1]][4]

To edit any recipe, use:

```
herms edit 1
```

Once you have made the changes, press the Esc key. You'll be asked whether you want to save or not. Just choose the appropriate option.

[![][1]][5]

To delete a recipe, the command would be:

```
herms remove 1
```

To generate a shopping list for a given recipe(s), run:

```
herms shopping 1
```

[![][1]][6]
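Putting the commands together, a quick session might look like this (a sketch; the recipe index assumes you have already added at least two recipes):

```
$ herms add          # enter the recipe details interactively, then Esc + Y to save
$ herms list         # confirm the recipe was added and note its index
$ herms view 2       # read the full recipe
$ herms shopping 2   # build a shopping list from it
```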
For help, run:

```
herms -h
```

The next time you overhear a conversation about a good recipe from a colleague, a friend, or somewhere else, just open Herms, quickly take a note, and share it with your spouse. They would be delighted!

And, that's all. More good stuff to come. Stay tuned!

Cheers!!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/herms-commandline-food-recipes-manager/

作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/Make-Dosa-1.png ()
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-1-1.png ()
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-2.png ()
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-3.png ()
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-4.png ()
@ -1,3 +1,6 @@

Translating by MjSeven

How to create mobile-friendly documentation
======
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
@ -1,3 +1,4 @@

Translating by qhwdw

5 open source software tools for supply chain management
======
@ -1,312 +0,0 @@

Translating by qhwdw

How to apply Machine Learning to IoT using Android Things and TensorFlow
============================================================

This project explores how to apply Machine Learning to IoT. In more detail: as the IoT platform, we will use **Android Things**, and as the Machine Learning engine, we will use **Google TensorFlow**.

![Machine Learning with Android Things](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/machine_learning_android_things.png)

Nowadays, Machine Learning is, together with the Internet of Things, one of the most interesting technological topics. To give a simple definition of Machine Learning, it is possible to use the [Wikipedia definition][13]: Machine learning is a field of computer science that gives computer systems the ability to “learn” (i.e. progressively improve performance on a specific task) with data, without being explicitly programmed.

In other words, after a training step, a system can predict outcomes even if it is not specifically programmed for them. On the other hand, we all know IoT and the concept of connected devices. One of the most promising topics is how to apply Machine Learning to IoT, building expert systems so that it is possible to develop a system that is able to “learn”. Moreover, it uses this knowledge to control and manage physical objects.

There are several fields where applying Machine Learning and IoT produces important value; just to mention a few interesting fields, there are:

* Industrial IoT (IIoT), in predictive maintenance

* Consumer IoT, where Machine Learning can make the device intelligent so that it can adapt to our habits

In this tutorial, we want to explore how to apply Machine Learning to IoT using Android Things and TensorFlow. The basic idea that stands behind this Android Things IoT project is exploring how to build a _robot car that is able to recognize some basic shapes (like arrows) and control in this way the robot car directions_. We have already covered [how to build a robot car using Android Things][5], so I suggest you read that tutorial before starting this project.

This Machine Learning and IoT project covers these main topics:

* How to set up the TensorFlow environment using Docker

* How to train the TensorFlow system

* How to integrate TensorFlow with Android Things

* How to control the robot car using the TensorFlow result

This project is derived from the [Android Things TensorFlow image classifier][6].

Let us start!
### How to use TensorFlow image recognition

Before starting, it is necessary to install and configure the TensorFlow environment. I'm not a Machine Learning expert, so I needed to find something fast and ready to use to build the TensorFlow image classifier. For this reason, we can use Docker to run an image of TensorFlow. Follow these steps:

1. Clone the TensorFlow repository:

```
git clone https://github.com/tensorflow/tensorflow.git
cd /tensorflow
git checkout v1.5.0
```

2. Create a directory (`/tf-data`) that will hold all the files that we will use during the project.

3. Run Docker:

```
docker run -it \
--volume /tf-data:/tf-data \
--volume /tensorflow:/tensorflow \
--workdir /tensorflow tensorflow/tensorflow:1.5.0 bash
```

Using this command, we run an interactive TensorFlow environment and mount some directories that we will use during the project.
### How to train TensorFlow to recognize images

Before the Android Things system is able to recognize images, it is necessary to train the TensorFlow engine so that it can build its model. For this purpose, it is necessary to gather several images. As said before, we want to use arrows to control the Android Things robot car, so we have to collect at least four arrow types:

* up arrow

* down arrow

* left arrow

* right arrow

To train the system, it is necessary to create a “knowledge base” with these four different image categories. Create in `/tf-data` a directory called `images` and under it four sub-directories named:

* up-arrow

* down-arrow

* left-arrow

* right-arrow

Now it is time to look for the images. I have used Google Image search, but you can use other approaches too. To simplify the image download process, you should install a Chrome plugin that downloads all the images with only one click. Do not forget: the more images you download, the better the training process will be, even if the time to create the model could increase.

**You may also like**
[How to integrate Android Things using API][2]
[How to use Android Things with Firebase][3]

Open your browser and start looking for the four image categories:

![TensorFlow image classifier](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/TensorFlow-image-classifier.png)

I have downloaded 80 images for each category. Do not care about the image extension.

Once all the categories have their images, follow these steps (in the Docker interface):

```
python /tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=4000 \
--output_graph=/tf-data/retrained_graph.pb \
--output_labels=/tf-data/retrained_labels.txt \
--image_dir=/tf-data/images
```

It could take some time, so be patient. At the end, you should have two files in the `/tf-data` folder:

1. retrained_graph.pb

2. retrained_labels.txt

The first file contains our model as the result of the TensorFlow training process, while the second file contains the labels related to our four image categories.
### How to test the TensorFlow model

If you want to test the model to check that everything is working, you can use this command:

```
python -m scripts.label_image \
--graph=/tf-data/retrained_graph.pb \
--image=/tf-data/images/[category]/[image_name.jpg]
```

### Optimizing the model

Before we can use this TensorFlow model in the Android Things project, it is necessary to optimize it:

```
python /tensorflow/python/tools/optimize_for_inference.py \
--input=/tf-data/retrained_graph.pb \
--output=/tf-data/opt_graph.pb \
--input_names="Mul" \
--output_names="final_result"
```

That's all: we have our model. We will use this model to apply Machine Learning to IoT, or in more detail, to integrate Android Things with TensorFlow. The goal is applying to the Android Things app the intelligence to recognize arrow images and react consequently, controlling the robot car directions.

If you want more details about TensorFlow and how to generate the model, look at the official documentation and at this [tutorial][8].
### How to apply Machine Learning to IoT using Android Things and TensorFlow

Once the TensorFlow data model is ready, we can move to the next step: how to integrate Android Things with TensorFlow. To this purpose, we can split this task into two steps:

1. The hardware part, where we connect motors and other peripherals to the Android Things board

2. Implementing the app

### Android Things Schematics

Before digging into the details about how to connect peripherals, this is the list of components used in this Android Things project:

1. Android Things board (Raspberry Pi 3)

2. Raspberry Pi Camera

3. One LED

4. LN298N Dual H Bridge (to control the motors)

5. A robot car chassis with two wheels

I do not cover again [how to control motors using Android Things][9], because we have already covered it in a previous post.

Below are the schematics:

![Integrating Android Things with IoT](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/tensor_bb.png)

In the picture above, the camera is not shown. The final result is:

![Integrating Android Things with TensorFlow](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/android_things_with_tensorflow-min.jpg)

### Implementing the Android Things app with TensorFlow

The last step is implementing the Android Things app. To this purpose, we can re-use the example available on Github named [sample TensorFlow image classifier][12]. Before starting, clone the Github repository so that you can modify the source code.

This Android Things app is different from the original app because:

1. It does not use the button to start the camera to capture the image

2. It uses a different model

3. It uses a blinking LED to notify that the camera will take the picture after the LED stops blinking

4. It controls the motors when TensorFlow detects an image (arrows). Moreover, it turns on the motors for 5 seconds before starting the loop from step 3

To handle a blinking LED, use the following code:
```
private Handler blinkingHandler = new Handler();
private Runnable blinkingLED = new Runnable() {
  @Override
  public void run() {
    try {
      // If the motor is running the app does not start the cam
      if (mc.getStatus())
        return ;

      Log.d(TAG, "Blinking..");
      // Toggle the LED and schedule the next blink until the limit is reached
      mReadyLED.setValue(!mReadyLED.getValue());
      if (currentValue <= NUM_OF_TIMES) {
        currentValue++;
        blinkingHandler.postDelayed(blinkingLED,
          BLINKING_INTERVAL_MS);
      }
      else {
        // Blinking finished: turn the LED off and trigger the capture
        mReadyLED.setValue(false);
        currentValue = 0;
        mBackgroundHandler.post(mBackgroundClickHandler);
      }
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
};
```

When the LED stops blinking, the app captures the image.
Now it is necessary to focus on how to control the motors according to the detected image. Modify the `onImageAvailable` method as follows:
|
||||
|
||||
```
@Override
public void onImageAvailable(ImageReader reader) {
  final Bitmap bitmap;
  try (Image image = reader.acquireNextImage()) {
    bitmap = mImagePreprocessor.preprocessImage(image);
  }

  final List<Classifier.Recognition> results =
      mTensorFlowClassifier.doRecognize(bitmap);

  Log.d(TAG, "Got the following results from Tensorflow: " + results);

  // No label detected: blink again and retry
  if (results == null || results.size() == 0) {
    Log.d(TAG, "No command..");
    blinkingHandler.post(blinkingLED);
    return;
  }

  Classifier.Recognition rec = results.get(0);
  Float confidence = rec.getConfidence();
  Log.d(TAG, "Confidence " + confidence.floatValue());

  // Ignore weak matches and retry
  if (confidence.floatValue() < 0.55) {
    Log.d(TAG, "Confidence too low..");
    blinkingHandler.post(blinkingLED);
    return;
  }

  String command = rec.getTitle();
  Log.d(TAG, "Command: " + rec.getTitle());

  // Map the recognized arrow label to a motor command
  if (command.indexOf("down") != -1)
    mc.backward();
  else if (command.indexOf("up") != -1)
    mc.forward();
  else if (command.indexOf("left") != -1)
    mc.turnLeft();
  else if (command.indexOf("right") != -1)
    mc.turnRight();
}
```
|
||||
|
||||
In this method, after TensorFlow returns the possible labels matching the captured image, the app compares the result with the possible directions and controls the motors accordingly.

Finally, it is time to use the model created at the beginning. Copy `opt_graph.pb` and `retrained_labels.txt` into the _assets_ folder, replacing the existing files.

Open `Helper.java` and modify the following lines:
|
||||
|
||||
```
public static final int IMAGE_SIZE = 299;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128;
private static final String LABELS_FILE = "retrained_labels.txt";
public static final String MODEL_FILE = "file:///android_asset/opt_graph.pb";
public static final String INPUT_NAME = "Mul";
public static final String OUTPUT_OPERATION = "output";
public static final String OUTPUT_NAME = "final_result";
```
|
||||
|
||||
The `IMAGE_SIZE` of 299 and the mean/std of 128 match the Inception v3 input (each pixel is normalized to the [-1, 1] range), while `Mul` and `final_result` are the input and output tensor names of the retrained graph.

Run the app and have fun showing arrows to the camera and checking the result. The robot car should move according to the arrow shown.

### Summary

At the end of this tutorial, we have discovered how to apply Machine Learning to IoT using Android Things and TensorFlow. We can control the robot car with images and make it move according to the image shown.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html
|
||||
|
||||
作者:[Francesco Azzola][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[1]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[2]:https://www.survivingwithandroid.com/2017/11/building-a-restful-api-interface-using-android-things.html
|
||||
[3]:https://www.survivingwithandroid.com/2017/10/synchronize-android-things-with-firebase-real-time-control-firebase-iot.html
|
||||
[4]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Machine%20Learning%20with%20Android%20Things
|
||||
[5]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[6]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[7]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=TensorFlow%20image%20classifier
|
||||
[8]:https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
|
||||
[9]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[10]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20IoT
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20TensorFlow
|
||||
[12]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[13]:https://en.wikipedia.org/wiki/Machine_learning
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Dynamic Linux Routing with Quagga
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN)
|
||||
======
|
||||
Most of us know about the RHEL subscription, but only a few know about the Oracle subscription and its details.
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Top 9 open source ERP systems to consider | Opensource.com
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Some Common Concurrent Programming Mistakes
|
||||
============================================================
|
||||
|
||||
|
@ -1,119 +0,0 @@
|
||||
pinewall translating
|
||||
|
||||
Getting started with Anaconda Python for data science
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
Like many others, I've been trying to get involved in the rapidly expanding field of data science. When I took Udemy courses on the [R][1] and [Python][2] programming languages, I downloaded and installed the applications independently. As I was trying to work through the challenges of installing data science packages like [NumPy][3] and [Matplotlib][4] and solving the various dependencies, I learned about the [Anaconda Python distribution][5].
|
||||
|
||||
Anaconda is a complete, [open source][6] data science package with a community of over 6 million users. It is easy to [download][7] and install, and it is supported on Linux, MacOS, and Windows.
|
||||
|
||||
I appreciate that Anaconda eases the frustration of getting started for new users. The distribution comes with more than 1,000 data packages as well as the [Conda][8] package and virtual environment manager, so it eliminates the need to learn to install each library independently. As Anaconda's website says, "The Python and R conda packages in the Anaconda Repository are curated and compiled in our secure environment so you get optimized binaries that 'just work' on your system."
|
||||
|
||||
I recommend using [Anaconda Navigator][9], a desktop graphical user interface (GUI) system that includes links to all the applications included with the distribution including [RStudio][10], [iPython][11], [Jupyter Notebook][12], [JupyterLab][13], [Spyder][14], [Glue][15], and [Orange][16]. The default environment is Python 3.6, but you can also easily install Python 3.5, Python 2.7, or R. The [documentation][9] is incredibly detailed and there is an excellent community of users for additional support.
|
||||
|
||||
### Installing Anaconda
|
||||
|
||||
To install Anaconda on my Linux laptop (an i3 with 4GB of RAM), I downloaded the Anaconda 5.1 Linux installer and ran `md5sum` to verify the file:
|
||||
```
|
||||
$ md5sum Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
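If you prefer to do the verification programmatically, a small Python sketch using the standard library's hashlib produces the same checksum as `md5sum`; compare the printed value against the hash published on the Anaconda download page:

```
import hashlib

# Compute the installer's MD5 checksum, equivalent to `md5sum <file>`
with open('Anaconda3-5.1.0-Linux-x86_64.sh', 'rb') as f:
    print(hashlib.md5(f.read()).hexdigest())
```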
Then I followed the directions in the [documentation][17], which instructed me to issue the following Bash command whether I was in the Bash shell or not:
|
||||
```
|
||||
$ bash Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
I followed the installation directions exactly, and the well-scripted install took about five minutes to complete. When the installation prompted: "Do you wish the installer to prepend the Anaconda install location to PATH in your `/home/<user>/.bashrc`?" I allowed it and restarted the shell, which I found was necessary for the `.bashrc` environment changes to take effect.
|
||||
|
||||
After completing the install, I launched Anaconda Navigator by entering the following at the command prompt in the shell:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
Every time Anaconda Navigator launches, it checks to see if new software is available and prompts you to update if necessary.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-update.png?itok=wMk78pGQ)
|
||||
|
||||
Anaconda updated successfully without needing to return to the command line. Anaconda's initial launch was a little slow; that plus the update meant it took a few additional minutes to get started.
|
||||
|
||||
You can also update manually by entering the following:
|
||||
```
|
||||
$ conda update anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
### Exploring and installing applications
|
||||
|
||||
Once Navigator launched, I was free to explore the range of applications included with Anaconda Distribution. According to the documentation, the 64-bit Python 3.6 version of Anaconda [supports 499 packages][18]. The first application I explored was [Jupyter QtConsole][19]. The easy-to-use GUI supports inline figures and syntax highlighting.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyterqtconsole.png?itok=fQQoErIO)
|
||||
|
||||
Jupyter Notebook is included with the distribution, so (unlike other Python environments I have used) there is no need for a separate install.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyternotebook.png?itok=VqvbyOcI)
|
||||
|
||||
I was already familiar with RStudio. It's not installed by default, but it's easy to add with the click of a mouse. Other applications, including JupyterLab, Orange, Glue, and Spyder, can be launched or installed with just a mouse click.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-otherapps.png?itok=9QmSUdel)
|
||||
|
||||
One of the Anaconda distribution's strengths is the ability to create multiple environments. For example, if I wanted to create a Python 2.7 environment instead of the default Python 3.6, I would enter the following in the shell:
|
||||
```
|
||||
$ conda create -n py27 python=2.7 anaconda
|
||||
|
||||
```
|
||||
|
||||
Conda takes care of the entire install; to launch it, just open the shell and enter:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
Select the **py27** environment from the "Applications on" drop-down in the Anaconda GUI.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-navigator.png?itok=2i5qYAyG)
|
||||
|
||||
### Learn more
|
||||
|
||||
There's a wealth of information available about Anaconda if you'd like to know more. You can start by searching the [Anaconda Community][20] and its [mailing list][21].
|
||||
|
||||
Are you using Anaconda Distribution and Navigator? Let us know your impressions in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://www.r-project.org/
|
||||
[2]:https://www.python.org/
|
||||
[3]:http://www.numpy.org/
|
||||
[4]:https://matplotlib.org/
|
||||
[5]:https://www.anaconda.com/distribution/
|
||||
[6]:https://docs.anaconda.com/anaconda/eula
|
||||
[7]:https://www.anaconda.com/download/#linux
|
||||
[8]:https://conda.io/
|
||||
[9]:https://docs.anaconda.com/anaconda/navigator/
|
||||
[10]:https://www.rstudio.com/
|
||||
[11]:https://ipython.org/
|
||||
[12]:http://jupyter.org/
|
||||
[13]:https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906
|
||||
[14]:https://spyder-ide.github.io/
|
||||
[15]:http://glueviz.org/
|
||||
[16]:https://orange.biolab.si/
|
||||
[17]:https://docs.anaconda.com/anaconda/install/linux
|
||||
[18]:https://docs.anaconda.com/anaconda/packages/py3.6_linux-64
|
||||
[19]:http://qtconsole.readthedocs.io/en/stable/
|
||||
[20]:https://www.anaconda.com/community/
|
||||
[21]:https://groups.google.com/a/continuum.io/forum/#!forum/anaconda
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Passwordless Auth: Server
|
||||
============================================================
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
An introduction to Python bytecode
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
|
||||
|
@ -0,0 +1,193 @@
|
||||
11 Methods To Find System/Server Uptime In Linux
|
||||
======
|
||||
Do you want to know how long your Linux system has been running without downtime, and when it was booted?

There are multiple commands available in Linux to check server/system uptime, and most users prefer the standard and very famous command called `uptime` to get these details.

Server uptime is not important for everyone, but it is very important for server administrators when the server runs mission-critical applications such as online shopping portals, net-banking portals, etc.

Such servers must have zero downtime, because an outage badly impacts millions of users.

As mentioned, many commands are available to check server uptime in Linux. In this tutorial we are going to teach you how to check it using the 11 methods below.

Uptime means how long the server has been up since its last shutdown or reboot.

The uptime command fetches the details from `/proc` files and prints the server uptime; the `/proc` files are not meant to be read directly by humans.

The commands below will print how long the system has been up and running. They also show some additional information.
|
||||
|
||||
### Method-1 : Using uptime Command
|
||||
|
||||
The uptime command tells how long the system has been running. It gives a one-line display of the following information: the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
|
||||
```
|
||||
# uptime
|
||||
|
||||
08:34:29 up 21 days, 5:46, 1 user, load average: 0.06, 0.04, 0.00
|
||||
|
||||
```
|
||||
|
||||
### Method-2 : Using w Command
|
||||
|
||||
The w command provides a quick summary of every user logged into the computer, what each user is currently doing, and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a.

```
# w

08:35:14 up 21 days, 5:47, 1 user, load average: 0.26, 0.09, 0.02
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/1 103.5.134.167 08:34 0.00s 0.01s 0.00s w
```
|
||||
|
||||
### Method-3 : Using top Command
|
||||
|
||||
The top command is one of the basic commands to monitor real-time system processes in Linux. It displays system information and running-process information such as uptime, average load, running tasks, number of logged-in users, number of CPUs, CPU utilization, and memory and swap information. Run the top command, then hit `E` to show the memory utilization in MB.
|
||||
|
||||
**Suggested Read :** [TOP Command Examples to Monitor Server Performance][1]
|
||||
```
|
||||
# top -c
|
||||
|
||||
top - 08:36:01 up 21 days, 5:48, 1 user, load average: 0.12, 0.08, 0.02
|
||||
Tasks: 98 total, 1 running, 97 sleeping, 0 stopped, 0 zombie
|
||||
Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
|
||||
Mem: 1872888k total, 1454644k used, 418244k free, 175804k buffers
|
||||
Swap: 2097148k total, 0k used, 2097148k free, 1098140k cached
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init
|
||||
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd]
|
||||
3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0]
|
||||
4 root 20 0 0 0 0 S 0.0 0.0 0:34.32 [ksoftirqd/0]
|
||||
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [stopper/0]
|
||||
|
||||
```
|
||||
|
||||
### Method-4 : Using who Command
|
||||
|
||||
The who command displays a list of users who are currently logged into the computer. It is related to the w command, which provides the same information but also displays additional data and statistics. With the `-b` option, who prints the time of the last system boot:
|
||||
```
|
||||
# who -b
|
||||
|
||||
system boot 2018-04-12 02:48
|
||||
|
||||
```
|
||||
|
||||
### Method-5 : Using last Command
|
||||
|
||||
The last command displays a list of recently logged-in users. It searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created. Here we use `last reboot` with the `-F` option (full login and logout times) to print the last boot time:
|
||||
```
|
||||
# last reboot -F | head -1 | awk '{print $5,$6,$7,$8,$9}'
|
||||
|
||||
Thu Apr 12 02:48:04 2018
|
||||
|
||||
```
|
||||
|
||||
### Method-6 : Using /proc/uptime File
|
||||
|
||||
This file contains information detailing how long the system has been on since its last restart. The output of `/proc/uptime` is quite minimal.
|
||||
|
||||
The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds.
|
||||
```
|
||||
# cat /proc/uptime
|
||||
|
||||
1835457.68 1809207.16
|
||||
|
||||
```
|
||||
|
||||
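To turn those raw seconds into something human-readable, here is a minimal Python sketch (standard library only, Python 3.6+) that parses `/proc/uptime` and formats the result roughly the way the uptime command does:

```
# Read the uptime in seconds from /proc/uptime and format it
with open('/proc/uptime') as f:
    up_seconds, _idle_seconds = (float(x) for x in f.read().split())

days, rem = divmod(int(up_seconds), 86400)
hours, minutes = divmod(rem // 60, 60)
print(f"up {days} days, {hours}:{minutes:02d}")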
You can also feed the first number to the `date` command to print the exact boot-up date:

```
# date -d "$(cut -f1 -d. /proc/uptime) seconds ago"
```

### Method-7 : Using tuptime Command
|
||||
|
||||
Tuptime is a tool for reporting the historical and statistical running time of the system, keeping track of it between restarts. It is like the uptime command, but with more interesting output.
|
||||
```
|
||||
$ tuptime
|
||||
|
||||
```
|
||||
|
||||
### Method-8 : Using htop Command
|
||||
|
||||
htop is an interactive process viewer for Linux, developed by Hisham using the ncurses library. htop has many more features and options compared to the top command.
|
||||
|
||||
**Suggested Read :** [Monitor system resources using Htop command][2]
|
||||
```
|
||||
# htop
|
||||
|
||||
CPU[| 0.5%] Tasks: 48, 5 thr; 1 running
|
||||
Mem[||||||||||||||||||||||||||||||||||||||||||||||||||| 165/1828MB] Load average: 0.10 0.05 0.01
|
||||
Swp[ 0/2047MB] Uptime: 21 days, 05:52:35
|
||||
|
||||
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
|
||||
29166 root 20 0 110M 2484 1240 R 0.0 0.1 0:00.03 htop
|
||||
29580 root 20 0 11464 3500 1032 S 0.0 0.2 55:15.97 /bin/sh ./OSWatcher.sh 10 1
|
||||
1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init
|
||||
486 root 16 -4 10780 900 348 S 0.0 0.0 0:00.07 /sbin/udevd -d
|
||||
748 root 18 -2 10780 932 360 S 0.0 0.0 0:00.00 /sbin/udevd -d
|
||||
|
||||
```
|
||||
|
||||
### Method-9 : Using glances Command
|
||||
|
||||
Glances is a cross-platform, curses-based system monitoring tool written in Python. It shows, so to speak, everything in one place: a maximum of information in a minimum of space. It uses the psutil library to get information from your system.

Glances can monitor CPU, memory, load, process list, network interfaces, disk I/O, RAID, sensors, filesystems (and folders), Docker, alerts, system info, uptime, Quicklook (CPU, MEM, LOAD), and more.
|
||||
|
||||
**Suggested Read :** [Glances (All in one Place)– An Advanced Real Time System Performance Monitoring Tool for Linux][3]
|
||||
```
|
||||
glances
|
||||
|
||||
ubuntu (Ubuntu 17.10 64bit / Linux 4.13.0-37-generic) - IP 192.168.1.6/24 Uptime: 21 days, 05:55:15
|
||||
|
||||
CPU [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 90.6%] CPU - 90.6% nice: 0.0% ctx_sw: 4K MEM \ 78.4% active: 942M SWAP - 5.9% LOAD 2-core
|
||||
MEM [||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 78.0%] user: 55.1% irq: 0.0% inter: 1797 total: 1.95G inactive: 562M total: 12.4G 1 min: 4.35
|
||||
SWAP [|||| 5.9%] system: 32.4% iowait: 1.8% sw_int: 897 used: 1.53G buffers: 14.8M used: 749M 5 min: 4.38
|
||||
idle: 7.6% steal: 0.0% free: 431M cached: 273M free: 11.7G 15 min: 3.38
|
||||
|
||||
NETWORK Rx/s Tx/s TASKS 211 (735 thr), 4 run, 207 slp, 0 oth sorted automatically by memory_percent, flat view
|
||||
docker0 0b 232b
|
||||
enp0s3 12Kb 4Kb Systemd 7 Services loaded: 197 active: 196 failed: 1
|
||||
lo 616b 616b
|
||||
_h478e48e 0b 232b CPU% MEM% VIRT RES PID USER NI S TIME+ R/s W/s Command
|
||||
63.8 18.9 2.33G 377M 2536 daygeek 0 R 5:57.78 0 0 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
DefaultGateway 83ms 78.5 10.9 3.46G 217M 2039 daygeek 0 S 21:07.46 0 0 /usr/bin/gnome-shell
|
||||
8.5 10.1 2.32G 201M 2464 daygeek 0 S 8:45.69 0 0 /usr/lib/firefox/firefox -new-window
|
||||
DISK I/O R/s W/s 1.1 8.5 2.19G 170M 2653 daygeek 0 S 2:56.29 0 0 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
dm-0 0 0 1.7 7.2 2.15G 143M 2880 daygeek 0 S 7:10.46 0 0 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51
|
||||
|
||||
```
|
||||
|
||||
### Method-10 : Using stat Command
|
||||
|
||||
The stat command displays the detailed status of a particular file or filesystem. Since /var/log/dmesg is written at boot time, its last modification time closely matches the system boot time:
|
||||
```
|
||||
# stat /var/log/dmesg | grep Modify
|
||||
|
||||
Modify: 2018-04-12 02:48:04.027999943 -0400
|
||||
|
||||
```
|
||||
|
||||
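The same check can be scripted; this minimal Python sketch reads the file's modification time directly, mirroring what the stat command shows in its Modify field:

```
import datetime
import os

# /var/log/dmesg is written at boot, so its mtime approximates boot time
mtime = os.stat('/var/log/dmesg').st_mtime
print(datetime.datetime.fromtimestamp(mtime))
```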
### Method-11 : Using procinfo Command
|
||||
|
||||
The procinfo command gathers some system data from the /proc directory and prints it, nicely formatted, on the standard output device. The Bootup field shows when the system booted:
|
||||
```
|
||||
# procinfo | grep Bootup
|
||||
|
||||
Bootup: Fri Apr 20 19:40:14 2018 Load average: 0.16 0.05 0.06 1/138 16615
|
||||
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/
|
||||
[2]:https://www.2daygeek.com/htop-command-examples-to-monitor-system-resources/
|
||||
[3]:https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/
|
@ -0,0 +1,155 @@
|
||||
How the four components of a distributed tracing system work together
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
|
||||
Ten years ago, essentially the only people thinking hard about distributed tracing were academics and a handful of large internet companies. Today, it’s turned into table stakes for any organization adopting microservices. The rationale is well-established: microservices fail in surprising and often spectacular ways, and distributed tracing is the best way to describe and diagnose those failures.
|
||||
|
||||
That said, if you set out to integrate distributed tracing into your own application, you’ll quickly realize that the term “Distributed Tracing” means different things to different people. Furthermore, the tracing ecosystem is crowded with partially-overlapping projects with similar charters. This article describes the four (potentially) independent components in distributed tracing, and how they fit together.
|
||||
|
||||
### Distributed tracing: A mental model
|
||||
|
||||
Most mental models for tracing descend from [Google’s Dapper paper][1]. [OpenTracing][2] uses similar nouns and verbs, so we will borrow the terms from that project:
|
||||
|
||||
![Tracing][3]
|
||||
|
||||
* **Trace:** The description of a transaction as it moves through a distributed system.
|
||||
* **Span:** A named, timed operation representing a piece of the workflow. Spans accept key:value tags as well as fine-grained, timestamped, structured logs attached to the particular span instance.
|
||||
* **Span context:** Trace information that accompanies the distributed transaction, including when it passes from service to service over the network or through a message bus. The span context contains the trace identifier, span identifier, and any other data that the tracing system needs to propagate to the downstream service.
|
||||
|
||||
|
||||
|
||||
If you would like to dig into a detailed description of this mental model, please check out the [OpenTracing specification][4].
|
||||
|
||||
### The four big pieces
|
||||
|
||||
From the perspective of an application-layer distributed tracing system, a modern software system looks like the following diagram:
|
||||
|
||||
![Tracing][5]
|
||||
|
||||
The components in a modern software system can be broken down into three categories:
|
||||
|
||||
* **Application and business logic:** Your code.
|
||||
* **Widely shared libraries:** Other people's code.
|
||||
* **Widely shared services:** Other people’s infrastructure.
|
||||
|
||||
|
||||
|
||||
These three components have different requirements and drive the design of the distributed tracing system that is tasked with monitoring the application. The resulting design yields four important pieces:
|
||||
|
||||
* **A tracing instrumentation API:** What decorates application code.
|
||||
* **Wire protocol:** What gets sent alongside application data in RPC requests.
|
||||
* **Data protocol:** What gets sent asynchronously (out-of-band) to your analysis system.
|
||||
* **Analysis system:** A database and interactive UI for working with the trace data.
|
||||
|
||||
|
||||
|
||||
To explain this further, we’ll dig into the details which drive this design. If you just want my suggestions, please skip to the four big solutions at the bottom.
|
||||
|
||||
### Requirements, details, and explanations
|
||||
|
||||
Application code, shared libraries, and shared services have notable operational differences, which heavily influence the requirements for instrumenting them.
|
||||
|
||||
#### Instrumenting application code and business logic
|
||||
|
||||
In any particular microservice, the bulk of the code written by the microservice developer is the application or business logic. This is the code that defines domain-specific operations; typically, it contains whatever special, unique logic justified the creation of a new microservice in the first place. Almost by definition, **this code is usually not shared or otherwise present in more than one service.**
|
||||
|
||||
That said, you still need to understand it, and that means it needs to be instrumented somehow. Some monitoring and tracing analysis systems auto-instrument code using black-box agents, and others expect explicit "white-box" instrumentation. For the latter, abstract tracing APIs offer many practical advantages for microservice-specific application code:
|
||||
|
||||
* An abstract API allows you to swap in new monitoring tools without re-writing instrumentation code. You may want to change cloud providers, vendors, and monitoring technologies, and a huge pile of non-portable instrumentation code would add meaningful overhead and friction to that procedure.
|
||||
* It turns out there are other interesting uses for instrumentation, beyond production monitoring. There are existing projects that use this same tracing instrumentation to power testing tools, distributed debuggers, “chaos engineering” fault injectors, and other meta-applications.
|
||||
* But most importantly, what if you wanted to extract an application component into a shared library? That leads us to:
|
||||
|
||||
|
||||
|
||||
#### Instrumenting shared libraries
|
||||
|
||||
The utility code present in most applications—code that handles network requests, database calls, disk writes, threading, queueing, concurrency management, and so on—is often generic and not specific to any particular application. This code is packaged up into libraries and frameworks which are then installed in many microservices, and deployed into many different environments.
|
||||
|
||||
This is the real difference: with shared code, someone else is the user. Most users have different dependencies and operational styles. If you attempt to instrument this shared code, you will note a couple of common issues:
|
||||
|
||||
* You need an API to write instrumentation. However, your library does not know what analysis system is being used. There are many choices, and all the libraries running in the same application cannot make incompatible choices.
|
||||
* The task of injecting and extracting span contexts from request headers often falls on RPC libraries, since those packages encapsulate all network-handling code. However, a shared library cannot know which tracing protocol is being used by each application.
|
||||
* Finally, you don’t want to force conflicting dependencies on your user. Even if they use gRPC, will it be the same version of gRPC you are binding to? So any monitoring API your library brings in for tracing must be free of dependencies.
|
||||
|
||||
|
||||
|
||||
**So, an abstract API which (a) has no dependencies, (b) is wire protocol agnostic, and (c) works with popular vendors and analysis systems should be a requirement for instrumenting shared library code.**
|
||||
|
||||
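To illustrate the injection and extraction mentioned above, here is a rough sketch using the Python `opentracing` package (assuming opentracing-python 2.x; the operation names and the plain dict carrier are illustrative):

```
import opentracing
from opentracing.propagation import Format

tracer = opentracing.tracer  # a no-op tracer unless a real one is registered

# Client side: inject the active span's context into outgoing HTTP headers
headers = {}
with tracer.start_active_span('client_request') as scope:
    tracer.inject(scope.span.context, Format.HTTP_HEADERS, headers)
    # ... send the request with `headers` attached ...

# Server side: extract the context and continue the same trace
ctx = tracer.extract(Format.HTTP_HEADERS, headers)
with tracer.start_active_span('server_handler', child_of=ctx):
    pass  # ... handle the request ...
```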
#### Instrumenting shared services
|
||||
|
||||
Finally, sometimes entire services—or sets of microservices—are general-purpose enough that they are used by many independent applications. These shared services are often hosted and managed by third parties. Examples might be cache servers, message queues, and databases.
|
||||
|
||||
It’s important to understand that **shared services are essentially "black boxes" from the perspective of application developers.** It is not possible to inject your application’s monitoring solution into a shared service. Instead, the hosted service often runs its own monitoring solution.
|
||||
|
||||
### The four big solutions
|
||||
|
||||
So, an abstracted tracing API would help libraries emit data and inject/extract Span Context. A standard wire protocol would help black-box services interconnect, and a standard data format would help separate analysis systems consolidate their data. Let's have a look at some promising options for solving these problems.
|
||||
|
||||
#### Tracing API: The OpenTracing project
|
||||
|
||||
As shown above, in order to instrument application code, a tracing API is required. And in order to extend that instrumentation to shared libraries, where most of the Span Context injection and extraction occurs, the API must be abstracted in certain critical ways.
|
||||
|
||||
The [OpenTracing][2] project aims to solve this problem for library developers. OpenTracing is a vendor-neutral tracing API which comes with no dependencies, and is quickly gaining support from a large number of monitoring systems. This means that, increasingly, if libraries ship with native OpenTracing instrumentation baked in, tracing will automatically be enabled when a monitoring system connects at application startup.
|
||||
|
||||
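As a concrete illustration, vendor-neutral instrumentation with the Python `opentracing` package looks roughly like this (a sketch assuming opentracing-python 2.x; the function and tag names are made up for the example):

```
import opentracing

# opentracing.tracer is a no-op by default; a real tracer from your
# monitoring vendor is registered once, at application startup.

def handle_checkout(order_id):
    # Time this unit of work as a span; the scope closes automatically
    with opentracing.tracer.start_active_span('handle_checkout') as scope:
        scope.span.set_tag('order.id', order_id)
        # ... business logic goes here ...
```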
Personally, as someone who has been writing, shipping, and operating open source software for over a decade, it is profoundly satisfying to work on the OpenTracing project and finally scratch this observability itch.
|
||||
|
||||
In addition to the API, the OpenTracing project maintains a growing list of contributed instrumentation, some of which can be found [here][6]. If you would like to get involved, either by contributing an instrumentation plugin, natively instrumenting your own OSS libraries, or just want to ask a question, please find us on [Gitter][7] and say hi.
|
||||
|
||||
#### Wire Protocol: The trace-context HTTP headers
|
||||
|
||||
In order for monitoring systems to interoperate, and to mitigate migration issues when changing from one monitoring system to another, a standard wire protocol is needed for propagating Span Context.
|
||||
|
||||
The [w3c Distributed Trace Context Community Group][8] is hard at work defining this standard. Currently, the focus is on defining a set of standard HTTP headers. The latest draft of the specification can be found [here][9]. If you have questions for this group, the [mailing list][10] and [Gitter chatroom][11] are great places to go for answers.
|
||||
|
||||
#### Data protocol (Doesn't exist yet!!)
|
||||
|
||||
For black-box services, where it is not possible to install a tracer or otherwise interact with the program, a data protocol is needed to export data from the system.
|
||||
|
||||
Work on this data format and protocol is currently at an early stage, and mostly happening within the context of the w3c Distributed Trace Context Working Group. There is particular interest in defining higher-level concepts, such as RPC calls, database statements, etc., in a standard data schema. This would allow tracing systems to make assumptions about what kind of data would be available. The OpenTracing project is also working on this issue, by starting to define a [standard set of tags][12]. The plan is for these two efforts to dovetail with each other.
|
||||
|
||||
Note that there is a middle ground available at the moment. For “network appliances” that the application developer operates, but does not want to compile or otherwise perform code modifications to, dynamic linking can help. The primary examples of this are service meshes and proxies, such as Envoy or NGINX. For this situation, an OpenTracing-compliant tracer can be compiled as a shared object, and then dynamically linked into the executable at runtime. This option is currently provided by the [C++ OpenTracing API][13]. For Java, an OpenTracing [Tracer Resolver][14] is also under development.
|
||||
|
||||
These solutions work well for services that support dynamic linking, and are deployed by the application developer. But in the long run, a standard data protocol may solve this problem more broadly.
|
||||
|
||||
#### Analysis system: A service for extracting insights from trace data
|
||||
|
||||
Last but not least, there is now a cornucopia of tracing and monitoring solutions. A list of monitoring systems known to be compatible with OpenTracing can be found [here][15], but there are many more options out there. I would encourage you to research your options, and I hope you find the framework provided in this article to be useful when comparing options. In addition to rating monitoring systems based on their operational characteristics (not to mention whether you like the UI and features), make sure you think about the three big pieces above, their relative importance to you, and how the tracing system you are interested in provides a solution to them.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In the end, how important each piece is depends heavily on who you are and what kind of system you are building. For example, open source library authors are very interested in the OpenTracing API, while service developers tend to be more interested in the Trace-Context specification. When someone says one piece is more important than the other, they usually mean “one piece is more important to me than the other."
|
||||
|
||||
However, the reality is this: Distributed Tracing has become a necessity for monitoring modern systems. In designing the building blocks for these systems, the age-old approach—"decouple where you can"—still holds true. Cleanly decoupled components are the best way to maintain flexibility and forwards-compatibility when building a system as cross-cutting as a distributed monitoring system.
|
||||
|
||||
Thanks for reading! Hopefully, now when you're ready to implement tracing in your own application, you have a guide to understanding which pieces they are talking about, and how they fit together.
|
||||
|
||||
Want to learn more? Sign up to attend [KubeCon EU][16] in May or [KubeCon North America][17] in December.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/distributed-tracing
|
||||
|
||||
作者:[Ted Young][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tedsuo
|
||||
[1]:https://research.google.com/pubs/pub36356.html
|
||||
[2]:http://opentracing.io/
|
||||
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
|
||||
[4]:https://github.com/opentracing/specification/blob/master/specification.md
|
||||
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
|
||||
[6]:https://github.com/opentracing-contrib/
|
||||
[7]:https://gitter.im/opentracing/public
|
||||
[8]:https://www.w3.org/community/trace-context/
|
||||
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
|
||||
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
|
||||
[11]:https://gitter.im/TraceContext/Lobby
|
||||
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
|
||||
[13]:https://github.com/opentracing/opentracing-cpp
|
||||
[14]:https://github.com/opentracing-contrib/java-tracerresolver
|
||||
[15]:http://opentracing.io/documentation/pages/supported-tracers
|
||||
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
|
||||
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
How To Improve Application Startup Time In Linux
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,93 @@
|
||||
translating---geekpi
|
||||
|
||||
Orbital Apps – A New Generation Of Linux applications
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
|
||||
|
||||
Today, we are going to learn about **Orbital Apps** or **ORB** (**O**pen **R**unnable **B**undle) **apps**, a collection of free, cross-platform, open source applications. All ORB apps are portable. You can either install them on your Linux system or put them on your USB drive, so that you can use the same app on any system. There is no need for root privileges, and there are no dependencies; all required dependencies are included in the apps. Just copy the ORB apps to your USB drive, plug it into any Linux system, and start using them in no time. All settings, configurations, and data of the apps are stored on the USB drive. Since there is no need to install the apps on the local drive, we can run the apps on computers with or without Internet access. That means we don't need the Internet to download any dependencies.

ORB apps are compressed to be up to 60% smaller, so we can store and use them even from small-sized USB drives. All ORB apps are signed with PGP/RSA and distributed via TLS 1.2. All applications are packaged without any modifications; they are not even re-compiled. Here is the list of currently available portable ORB applications:
|
||||
|
||||
* abiword
|
||||
* audacious
|
||||
* audacity
|
||||
* darktable
|
||||
* deluge
|
||||
* filezilla
|
||||
* firefox
|
||||
* gimp
|
||||
* gnome-mplayer
|
||||
* hexchat
|
||||
* inkscape
|
||||
* isomaster
|
||||
* kodi
|
||||
* libreoffice
|
||||
* qbittorrent
|
||||
* sound-juicer
|
||||
* thunderbird
|
||||
* tomahawk
|
||||
* uget
|
||||
* vlc
|
||||
* And more yet to come.
|
||||
|
||||
|
||||
|
||||
Orb is open source, so if you're a developer, feel free to collaborate and add more applications.
|
||||
|
||||
### Download and use portable ORB apps
|
||||
|
||||
As I mentioned already, we don't need to install portable ORB apps. However, the ORB team strongly recommends using the **ORB launcher** for a better experience. The ORB launcher is a small installer file (less than 5MB) that helps you launch ORB apps with a better and smoother experience.

Let us install the ORB launcher first. To do so, [**download the ORB launcher**][1]. You can manually download the ORB launcher ISO and mount it in your file manager, or run either of the following commands in the Terminal to install it:
|
||||
```
|
||||
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
|
||||
|
||||
```
|
||||
|
||||
If you don’t have wget, run:
|
||||
```
|
||||
$ curl https://www.orbital-apps.com/orb.sh | bash
|
||||
|
||||
```
|
||||
|
||||
Enter the root user password when asked.

That's it. The ORB launcher is installed and ready to use.

Now, go to the [**ORB portable apps download page**][2], and download the apps of your choice. For the purpose of this tutorial, I am going to download the Firefox application.

Once you have downloaded the package, go to the download location and double-click the ORB app to launch it. Click Yes to confirm.
|
||||
|
||||
![][4]
|
||||
|
||||
Firefox ORB application in action!
|
||||
|
||||
![][5]
|
||||
|
||||
Similarly, you can download and run any application instantly.

If you don't want to use the ORB launcher, make the downloaded .orb installer file executable and double-click it to install. However, the ORB launcher is recommended, as it gives you an easier and smoother experience while using ORB apps.
|
||||
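For reference, those manual steps can also be scripted; a small Python sketch (with a hypothetical file name and path) that marks the installer as executable and launches it might look like this:

```
import os
import stat
import subprocess

orb = os.path.expanduser('~/Downloads/firefox.orb')  # hypothetical path

# Equivalent of `chmod u+x` on the downloaded installer
mode = os.stat(orb).st_mode
os.chmod(orb, mode | stat.S_IXUSR)

subprocess.run([orb])  # launch the installer
```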
|
||||
As far as I have tested them, ORB apps worked just fine out of the box. Hope this helps. And, that's all for now. Have a good day!
|
||||
|
||||
Cheers!!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
|
||||
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
|
||||
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png
|
@ -0,0 +1,419 @@
|
||||
UKTools - Easy Way To Install Latest Linux Kernel
|
||||
======
|
||||
There are multiple utilities available for Ubuntu to upgrade the Linux kernel to the latest stable version. We have already written about some of those utilities in the past, such as Linux Kernel Utilities (LKU), Ubuntu Kernel Upgrade Utility (UKUU) and Ubunsys.

A few more utilities are available, and we plan to include them in future articles, such as ubuntu-mainline-kernel.sh and the manual method using the mainline kernel.

Today we are going to teach you about a similar utility called UKTools. You can try any one of these utilities to bring your Linux kernel to the latest release.

The latest kernel release comes with security bug fixes and some improvements, so it is better to keep the latest one for reliability, security and better hardware performance.

Sometimes the latest kernel version might be buggy and can crash your system, so use it at your own risk. I would advise you not to install it in a production environment.
|
||||
|
||||
**Suggested Read :**
|
||||
**(#)** [Linux Kernel Utilities (LKU) – A Set Of Shell Scripts To Compile, Install & Update Latest Kernel In Ubuntu/LinuxMint][1]
|
||||
**(#)** [Ukuu – An Easy Way To Install/Upgrade Linux Kernel In Ubuntu based Systems][2]
|
||||
**(#)** [6 Methods To Check The Running Linux Kernel Version On System][3]
|
||||
|
||||
### What Is UKTools
|
||||
|
||||
[UKTools][4] stands for Ubuntu Kernel Tools, and it contains two shell scripts, `ukupgrade` and `ukpurge`.

ukupgrade stands for "Ubuntu Kernel Upgrade", which allows the user to upgrade the Linux kernel to the latest stable version on Ubuntu/Mint and derivatives, based on [kernel.ubuntu.com][5].

ukpurge stands for "Ubuntu Kernel Purge", which allows the user to remove old Linux kernel images/headers on the machine, for Ubuntu/Mint and derivatives. It keeps only three kernel versions.

There is no GUI for this utility; however, it is very simple and straightforward, so a newbie can perform the upgrade without any issues.

I'm running Ubuntu 17.10, and the current kernel version is shown below:
|
||||
```
|
||||
$ uname -a
|
||||
Linux ubuntu 4.13.0-39-generic #44-Ubuntu SMP Thu Apr 5 14:25:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
|
||||
|
||||
```
|
||||
|
||||
Run the following command to get the list of installed kernels on your system (Ubuntu and derivatives). Currently I'm holding `seven` kernels:
|
||||
```
|
||||
$ dpkg --list | grep linux-image
|
||||
ii linux-image-4.13.0-16-generic 4.13.0-16.19 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-17-generic 4.13.0-17.20 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-32-generic 4.13.0-32.35 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-36-generic 4.13.0-36.40 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-37-generic 4.13.0-37.42 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-16-generic 4.13.0-16.19 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-17-generic 4.13.0-17.20 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-32-generic 4.13.0-32.35 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-36-generic 4.13.0-36.40 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-37-generic 4.13.0-37.42 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-extra-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
|
||||
ii linux-image-generic 4.13.0.39.42 amd64 Generic Linux kernel image
|
||||
|
||||
```
|
||||
|
||||
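If you want to count or script around the installed kernels, a small Python sketch (standard library only, Python 3.7+) that mirrors the `dpkg --list | grep linux-image` pipeline could look like this:

```
import subprocess

# List installed package names via dpkg-query and keep the kernel images
out = subprocess.run(['dpkg-query', '-W', '-f', '${Package}\n'],
                     capture_output=True, text=True, check=True).stdout
kernels = [pkg for pkg in out.splitlines() if pkg.startswith('linux-image-')]

for pkg in kernels:
    print(pkg)
print(f"{len(kernels)} kernel image packages installed")
```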
### How To Install UKTools
|
||||
|
||||
Just run the below commands to install UKTools on Ubuntu and derivatives.

Run the below command to clone the UKTools repository on your system:
|
||||
```
|
||||
$ git clone https://github.com/usbkey9/uktools
|
||||
|
||||
```
|
||||
|
||||
Navigate to the uktools directory:
|
||||
```
|
||||
$ cd uktools
|
||||
|
||||
```
|
||||
|
||||
Run make to generate the necessary files; this will also automatically install the latest available kernel. Just reboot the system in order to use the latest kernel:
|
||||
```
|
||||
$ sudo make
|
||||
[sudo] password for daygeek:
|
||||
Creating the directories if neccessary
|
||||
Linking profile.d file for reboot message
|
||||
Linking files to global sbin directory
|
||||
Ubuntu Kernel Upgrade - by Mustafa Hasturk
|
||||
------------------------------------------
|
||||
This script is based on the work of Mustafa Hasturk and was reworked by
|
||||
Caio Oliveira and modified and fixed by Christoph Kepler
|
||||
|
||||
Current Development and Maintenance by Christoph Kepler
|
||||
|
||||
Do you want the Stable Release (if not sure, press y)? (y/n): y
|
||||
Do you want the Generic kernel? (y/n): y
|
||||
Do you want to autoremove old kernel? (y/n): y
|
||||
no crontab for root
|
||||
Do you want to update the kernel automatically? (y/n): y
|
||||
Setup complete. Update the kernel right now? (y/n): y
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages were automatically installed and are no longer required:
|
||||
linux-headers-4.13.0-16 linux-headers-4.13.0-16-generic linux-headers-4.13.0-17 linux-headers-4.13.0-17-generic linux-headers-4.13.0-32 linux-headers-4.13.0-32-generic linux-headers-4.13.0-36
|
||||
linux-headers-4.13.0-36-generic linux-headers-4.13.0-37 linux-headers-4.13.0-37-generic linux-image-4.13.0-16-generic linux-image-4.13.0-17-generic linux-image-4.13.0-32-generic linux-image-4.13.0-36-generic
|
||||
linux-image-4.13.0-37-generic linux-image-extra-4.13.0-16-generic linux-image-extra-4.13.0-17-generic linux-image-extra-4.13.0-32-generic linux-image-extra-4.13.0-36-generic
|
||||
linux-image-extra-4.13.0-37-generic
|
||||
Use 'sudo apt autoremove' to remove them.
|
||||
The following additional packages will be installed:
|
||||
lynx-common
|
||||
The following NEW packages will be installed:
|
||||
lynx lynx-common
|
||||
0 upgraded, 2 newly installed, 0 to remove and 71 not upgraded.
|
||||
Need to get 1,498 kB of archives.
|
||||
After this operation, 5,418 kB of additional disk space will be used.
|
||||
Get:1 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 lynx-common all 2.8.9dev16-1 [873 kB]
|
||||
Get:2 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 lynx amd64 2.8.9dev16-1 [625 kB]
|
||||
Fetched 1,498 kB in 12s (120 kB/s)
|
||||
Selecting previously unselected package lynx-common.
|
||||
(Reading database ... 441037 files and directories currently installed.)
|
||||
Preparing to unpack .../lynx-common_2.8.9dev16-1_all.deb ...
|
||||
Unpacking lynx-common (2.8.9dev16-1) ...
|
||||
Selecting previously unselected package lynx.
|
||||
Preparing to unpack .../lynx_2.8.9dev16-1_amd64.deb ...
|
||||
Unpacking lynx (2.8.9dev16-1) ...
|
||||
Processing triggers for mime-support (3.60ubuntu1) ...
|
||||
Processing triggers for doc-base (0.10.7) ...
|
||||
Processing 1 added doc-base file...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up lynx-common (2.8.9dev16-1) ...
Setting up lynx (2.8.9dev16-1) ...
update-alternatives: using /usr/bin/lynx to provide /usr/bin/www-browser (www-browser) in auto mode

Cleaning old downloads in /tmp

Downloading the kernel's components...
Checksum for linux-headers-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed
Checksum for linux-image-unsigned-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed
Checksum for linux-modules-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb succeed

Downloading the shared kernel header...
Checksum for linux-headers-4.16.7-041607_4.16.7-041607.201805021131_all.deb succeed

Installing Kernel and Headers...
Selecting previously unselected package linux-headers-4.16.7-041607.
(Reading database ... 441141 files and directories currently installed.)
Preparing to unpack .../linux-headers-4.16.7-041607_4.16.7-041607.201805021131_all.deb ...
Unpacking linux-headers-4.16.7-041607 (4.16.7-041607.201805021131) ...
Selecting previously unselected package linux-headers-4.16.7-041607-generic.
Preparing to unpack .../linux-headers-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
Unpacking linux-headers-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
Selecting previously unselected package linux-image-unsigned-4.16.7-041607-generic.
Preparing to unpack .../linux-image-unsigned-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
Unpacking linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
Selecting previously unselected package linux-modules-4.16.7-041607-generic.
Preparing to unpack .../linux-modules-4.16.7-041607-generic_4.16.7-041607.201805021131_amd64.deb ...
Unpacking linux-modules-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
Setting up linux-headers-4.16.7-041607 (4.16.7-041607.201805021131) ...
dpkg: dependency problems prevent configuration of linux-headers-4.16.7-041607-generic:
 linux-headers-4.16.7-041607-generic depends on libssl1.1 (>= 1.1.0); however:
  Package libssl1.1 is not installed.

Setting up linux-modules-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
Setting up linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
I: /vmlinuz.old is now a symlink to boot/vmlinuz-4.13.0-39-generic
I: /initrd.img.old is now a symlink to boot/initrd.img-4.13.0-39-generic
I: /vmlinuz is now a symlink to boot/vmlinuz-4.16.7-041607-generic
I: /initrd.img is now a symlink to boot/initrd.img-4.16.7-041607-generic
Processing triggers for linux-image-unsigned-4.16.7-041607-generic (4.16.7-041607.201805021131) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.16.7-041607-generic
/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
Found linux image: /boot/vmlinuz-4.13.0-39-generic
Found initrd image: /boot/initrd.img-4.13.0-39-generic
Found linux image: /boot/vmlinuz-4.13.0-38-generic
Found initrd image: /boot/initrd.img-4.13.0-38-generic
Found linux image: /boot/vmlinuz-4.13.0-37-generic
Found initrd image: /boot/initrd.img-4.13.0-37-generic
Found linux image: /boot/vmlinuz-4.13.0-36-generic
Found initrd image: /boot/initrd.img-4.13.0-36-generic
Found linux image: /boot/vmlinuz-4.13.0-32-generic
Found initrd image: /boot/initrd.img-4.13.0-32-generic
Found linux image: /boot/vmlinuz-4.13.0-17-generic
Found initrd image: /boot/initrd.img-4.13.0-17-generic
Found linux image: /boot/vmlinuz-4.13.0-16-generic
Found initrd image: /boot/initrd.img-4.13.0-16-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done

Thanks for using this script! Hope it helped.
Give it a star: https://github.com/MarauderXtreme/uktools
```

Restart the system to activate the latest kernel.
```
$ sudo shutdown -r now
```

Once the system is back up, re-check the kernel version.
```
$ uname -a
Linux ubuntu 4.16.7-041607-generic #201805021131 SMP Wed May 2 15:34:55 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```

The `make` command also drops the following two utilities into the `/usr/local/bin` directory.
```
do-kernel-upgrade
do-kernel-purge
```

To remove old kernels, run the following command.
```
$ do-kernel-purge

Ubuntu Kernel Purge - by Caio Oliveira

This script will only keep three versions: the first and the last two, others will be purge

---Current version:
Linux Kernel 4.16.7-041607 Generic (linux-image-4.16.7-041607-generic)

---Versions to remove:
4.13.0-16
4.13.0-17
4.13.0-32
4.13.0-36
4.13.0-37

---Do you want to remove the old kernels/headers versions? (Y/n): y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.13.0-17 linux-headers-4.13.0-17-generic linux-headers-4.13.0-32 linux-headers-4.13.0-32-generic linux-headers-4.13.0-36 linux-headers-4.13.0-36-generic linux-headers-4.13.0-37
linux-headers-4.13.0-37-generic linux-image-4.13.0-17-generic linux-image-4.13.0-32-generic linux-image-4.13.0-36-generic linux-image-4.13.0-37-generic linux-image-extra-4.13.0-17-generic
linux-image-extra-4.13.0-32-generic linux-image-extra-4.13.0-36-generic linux-image-extra-4.13.0-37-generic
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
linux-headers-4.13.0-16* linux-headers-4.13.0-16-generic* linux-image-4.13.0-16-generic* linux-image-extra-4.13.0-16-generic*
0 upgraded, 0 newly installed, 4 to remove and 71 not upgraded.
After this operation, 318 MB disk space will be freed.
(Reading database ... 465582 files and directories currently installed.)
Removing linux-headers-4.13.0-16-generic (4.13.0-16.19) ...
Removing linux-headers-4.13.0-16 (4.13.0-16.19) ...
Removing linux-image-extra-4.13.0-16-generic (4.13.0-16.19) ...
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
update-initramfs: Generating /boot/initrd.img-4.13.0-16-generic
run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
run-parts: executing /etc/kernel/postinst.d/update-notifier 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
Found linux image: /boot/vmlinuz-4.13.0-39-generic
Found initrd image: /boot/initrd.img-4.13.0-39-generic
Found linux image: /boot/vmlinuz-4.13.0-38-generic
Found initrd image: /boot/initrd.img-4.13.0-38-generic
Found linux image: /boot/vmlinuz-4.13.0-37-generic
Found initrd image: /boot/initrd.img-4.13.0-37-generic
Found linux image: /boot/vmlinuz-4.13.0-36-generic
Found initrd image: /boot/initrd.img-4.13.0-36-generic
Found linux image: /boot/vmlinuz-4.13.0-32-generic
Found initrd image: /boot/initrd.img-4.13.0-32-generic
Found linux image: /boot/vmlinuz-4.13.0-17-generic
Found initrd image: /boot/initrd.img-4.13.0-17-generic
Found linux image: /boot/vmlinuz-4.13.0-16-generic
Found initrd image: /boot/initrd.img-4.13.0-16-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Removing linux-image-4.13.0-16-generic (4.13.0-16.19) ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
update-initramfs: Deleting /boot/initrd.img-4.13.0-16-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
Found linux image: /boot/vmlinuz-4.13.0-39-generic
Found initrd image: /boot/initrd.img-4.13.0-39-generic
Found linux image: /boot/vmlinuz-4.13.0-38-generic
Found initrd image: /boot/initrd.img-4.13.0-38-generic
Found linux image: /boot/vmlinuz-4.13.0-37-generic
Found initrd image: /boot/initrd.img-4.13.0-37-generic
Found linux image: /boot/vmlinuz-4.13.0-36-generic
Found initrd image: /boot/initrd.img-4.13.0-36-generic
Found linux image: /boot/vmlinuz-4.13.0-32-generic
Found initrd image: /boot/initrd.img-4.13.0-32-generic
Found linux image: /boot/vmlinuz-4.13.0-17-generic
Found initrd image: /boot/initrd.img-4.13.0-17-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
(Reading database ... 430635 files and directories currently installed.)
Purging configuration files for linux-image-extra-4.13.0-16-generic (4.13.0-16.19) ...
Purging configuration files for linux-image-4.13.0-16-generic (4.13.0-16.19) ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-16-generic /boot/vmlinuz-4.13.0-16-generic
Reading package lists... Done
Building dependency tree
Reading state information... Done
.
.
.
.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
linux-headers-4.13.0-37* linux-headers-4.13.0-37-generic* linux-image-4.13.0-37-generic* linux-image-extra-4.13.0-37-generic*
0 upgraded, 0 newly installed, 4 to remove and 71 not upgraded.
After this operation, 321 MB disk space will be freed.
(Reading database ... 325772 files and directories currently installed.)
Removing linux-headers-4.13.0-37-generic (4.13.0-37.42) ...
Removing linux-headers-4.13.0-37 (4.13.0-37.42) ...
Removing linux-image-extra-4.13.0-37-generic (4.13.0-37.42) ...
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
update-initramfs: Generating /boot/initrd.img-4.13.0-37-generic
run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
run-parts: executing /etc/kernel/postinst.d/update-notifier 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
Found linux image: /boot/vmlinuz-4.13.0-39-generic
Found initrd image: /boot/initrd.img-4.13.0-39-generic
Found linux image: /boot/vmlinuz-4.13.0-38-generic
Found initrd image: /boot/initrd.img-4.13.0-38-generic
Found linux image: /boot/vmlinuz-4.13.0-37-generic
Found initrd image: /boot/initrd.img-4.13.0-37-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Removing linux-image-4.13.0-37-generic (4.13.0-37.42) ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
update-initramfs: Deleting /boot/initrd.img-4.13.0-37-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.16.7-041607-generic
Found initrd image: /boot/initrd.img-4.16.7-041607-generic
Found linux image: /boot/vmlinuz-4.13.0-39-generic
Found initrd image: /boot/initrd.img-4.13.0-39-generic
Found linux image: /boot/vmlinuz-4.13.0-38-generic
Found initrd image: /boot/initrd.img-4.13.0-38-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
(Reading database ... 290810 files and directories currently installed.)
Purging configuration files for linux-image-extra-4.13.0-37-generic (4.13.0-37.42) ...
Purging configuration files for linux-image-4.13.0-37-generic (4.13.0-37.42) ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.13.0-37-generic /boot/vmlinuz-4.13.0-37-generic

Thanks for using this script!!!
```

Re-check the list of installed kernels using the command below; only the last three kernels should remain.
```
$ dpkg --list | grep linux-image
ii linux-image-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
ii linux-image-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel image for version 4.13.0 on 64 bit x86 SMP
ii linux-image-extra-4.13.0-38-generic 4.13.0-38.43 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
ii linux-image-extra-4.13.0-39-generic 4.13.0-39.44 amd64 Linux kernel extra modules for version 4.13.0 on 64 bit x86 SMP
ii linux-image-generic 4.13.0.39.42 amd64 Generic Linux kernel image
ii linux-image-unsigned-4.16.7-041607-generic 4.16.7-041607.201805021131 amd64 Linux kernel image for version 4.16.7 on 64 bit x86 SMP
```
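
If you want to see at a glance which of the remaining kernels is the one currently booted, a short shell loop can annotate the package list. This is a minimal sketch, not part of uktools; it assumes Debian/Ubuntu-style `linux-image-*` package names and uses only `dpkg`, `awk`, and `uname`.
```
# Mark the currently running kernel in the installed-package list.
# Sketch only; assumes Debian/Ubuntu package naming.
running=$(uname -r)
dpkg --list | awk '/^ii[[:space:]]+linux-image/ {print $2}' | while read -r pkg; do
    case "$pkg" in
        *"$running"*) echo "$pkg  <-- running" ;;
        *)            echo "$pkg" ;;
    esac
done
```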

Next time, you can run the `do-kernel-upgrade` utility to install a new kernel. If a new kernel is available, it will be installed; if not, the script reports that no kernel update is available at the moment.
```
$ do-kernel-upgrade
Kernel up to date. Finishing
```

Run the `do-kernel-purge` command once again to confirm this. If more than three kernels are found, they are removed; otherwise the script reports that there is nothing to remove.
```
$ do-kernel-purge

Ubuntu Kernel Purge - by Caio Oliveira

This script will only keep three versions: the first and the last two, others will be purge

---Current version:
Linux Kernel 4.16.7-041607 Generic (linux-image-4.16.7-041607-generic)
Nothing to remove!

Thanks for using this script!!!
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/uktools-easy-way-to-install-latest-stable-linux-kernel-on-ubuntu-mint-and-derivatives/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/lku-linux-kernel-utilities-compile-install-update-latest-kernel-in-linux-mint-ubuntu/
[2]:https://www.2daygeek.com/ukuu-install-upgrade-linux-kernel-in-linux-mint-ubuntu-debian-elementary-os/
[3]:https://www.2daygeek.com/check-find-determine-running-installed-linux-kernel-version/
[4]:https://github.com/usbkey9/uktools
[5]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
@ -0,0 +1,98 @@

Creating small containers with Buildah
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)

I recently joined Red Hat after many years working for another tech company. In my previous job, I developed a number of different software products that were successful but proprietary. Not only were we legally compelled to not share the software outside of the company, we often didn’t even share it within the company. At the time, that made complete sense to me: The company spent time, energy, and budget developing the software, so they should protect and claim the rewards it garnered.

Fast-forward to a year ago, when I joined Red Hat and developed a completely different mindset. One of the first things I jumped into was the [Buildah project][1]. It facilitates building Open Container Initiative (OCI) images, and it is especially good at allowing you to tailor the size of the image that is created. At that time Buildah was in its very early stages, and there were some warts here and there that weren’t quite production-ready.

Being new to the project, I made a few minor changes, then asked where the company’s internal git repository was so that I could push my changes. The answer: Nothing internal, just push your changes to GitHub. I was baffled—sending my changes out to GitHub would mean anyone could look at that code and use it for their own projects. Plus, the code still had a few warts, so that just seemed so counterintuitive. But being the new guy, I shook my head in wonder and pushed the changes out.

A year later, I’m now convinced of the power and value of open source software. I’m still working on Buildah, and we recently had an issue that illustrates that power and value. The issue, titled [Buildah images not so small?][2], was raised by Tim Dudgeon (@tdudgeon). To summarize, he noted that images created by Buildah were bigger than those created by Docker, even though the Buildah images didn’t contain the extra "fluff" he saw in the Docker images.

For comparison he first did:
```
$ docker pull centos:7
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/centos 7 2d194b392dd1 2 weeks ago 195 MB
```

He noted that the size of the Docker image was 195MB. Tim then created a minimal (scratch) image using Buildah, with only the `coreutils` and `bash` packages added to the image, using the following script:
```
$ cat ./buildah-base.sh
#!/bin/bash

set -x

# build a minimal image
newcontainer=$(buildah from scratch)
scratchmnt=$(buildah mount $newcontainer)

# install the packages
yum install --installroot $scratchmnt bash coreutils --releasever 7 --setopt install_weak_deps=false -y
yum clean all -y --installroot $scratchmnt --releasever 7

sudo buildah config --cmd /bin/bash $newcontainer

# set some config info
buildah config --label name=centos-base $newcontainer

# commit the image
buildah unmount $newcontainer
buildah commit $newcontainer centos-base

$ sudo ./buildah-base.sh

$ sudo buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
8379315d3e3e docker.io/library/centos-base:latest Mar 25, 2018 17:08 212.1 MB
```

Tim wondered why the image was 17MB larger, because `python` and `yum` were not installed in the Buildah image, whereas they were installed in the Docker image. This set off quite the discussion in the GitHub issue, as it was not at all an expected result.

What was great about the discussion was that not only were Red Hat folks involved, but several others from outside as well. In particular, a lot of great discussion and investigation was led by GitHub user @pixdrift, who noted that the documentation and locale-archive were chewing up a little more than 100MB of space in the Buildah image. Pixdrift suggested forcing locale in the yum installer and provided this updated `buildah-base.sh` script with those changes:
```
#!/bin/bash

set -x

# build a minimal image
newcontainer=$(buildah from scratch)
scratchmnt=$(buildah mount $newcontainer)

# install the packages
yum install --installroot $scratchmnt bash coreutils --releasever 7 --setopt=install_weak_deps=false --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y
yum clean all -y --installroot $scratchmnt --releasever 7

sudo buildah config --cmd /bin/bash $newcontainer

# set some config info
buildah config --label name=centos-base $newcontainer

# commit the image
buildah unmount $newcontainer
buildah commit $newcontainer centos-base
```
When Tim ran this new script, the image size shrank to 92MB, shedding 120MB from the original Buildah image size and getting closer to the expected size; however, engineers being engineers, a size savings of 56% wasn’t enough. The discussion went further, involving how to remove individual locale packages to save even more space. To see more details of the discussion, click the [Buildah images not so small?][2] link. Who knows—maybe you’ll have a helpful tip, or better yet, become a contributor for Buildah. On a side note, this solution illustrates how the Buildah software can be used to quickly and easily create a minimally sized container that's loaded only with the software that you need to do your job efficiently. As a bonus, it doesn’t require a daemon to be running.
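
If you want to see where the remaining space goes before committing an image, you can inspect the mounted container root directly. A minimal sketch, assuming the container is still mounted at `$scratchmnt` as in the scripts above; the paths checked are the usual suspects rather than an exhaustive list.
```
# Measure common space hogs inside the still-mounted container root.
# Sketch only; $scratchmnt comes from 'buildah mount' as in the scripts above.
sudo du -sh "$scratchmnt/usr/share/doc" \
            "$scratchmnt/usr/share/man" \
            "$scratchmnt/usr/lib/locale" 2>/dev/null
```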

This image-sizing issue drove home the power of open source software for me. A number of people from different companies all collaborated to solve a problem through open discussion in a little over a day. Although no code changes were created to address this particular issue, there have been many code contributions to Buildah from contributors outside of Red Hat, and this has helped to make the project even better. These contributions have served to get a wider variety of talented people to look at the code than ever would have if it were a proprietary piece of software stuck in a private git repository. It’s taken only a year to convert me to the [open source way][3], and I don’t think I could ever go back.

This article was originally posted at [Project Atomic][4]. Reposted with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/containers-buildah

作者:[Tom Sweeney][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tomsweeneyredhat
[1]:https://github.com/projectatomic/buildah
[2]:https://github.com/projectatomic/buildah/issues/532
[3]:https://twitter.com/opensourceway
[4]:http://www.projectatomic.io/blog/2018/04/open-source-what-a-concept/
@ -0,0 +1,260 @@

Get more done at the Linux command line with GNU Parallel
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)

Do you ever get the funny feeling that your computer isn't quite as fast as it should be? I used to feel that way, and then I found GNU Parallel.

GNU Parallel is a shell utility for executing jobs in parallel. It can parse multiple inputs, thereby running your script or command against sets of data at the same time. You can use all your CPU at last!

If you've ever used `xargs`, you already know how to use Parallel. If you don't, then this article teaches you, along with many other use cases.

### Installing GNU Parallel

GNU Parallel may not come pre-installed on your Linux or BSD computer. Install it from your repository or ports collection. For example, on Fedora:
```
$ sudo dnf install parallel
```

Or on NetBSD:
```
# pkg_add parallel
```

If all else fails, refer to the [project homepage][1].

### From serial to parallel

As its name suggests, Parallel's strength is that it runs jobs in parallel rather than, as many of us still do, sequentially.

When you run one command against many objects, you're inherently creating a queue. Some number of objects can be processed by the command, and all the other objects just stand around and wait their turn. It's inefficient. Given enough data, there's always going to be a queue, but instead of having just one queue, why not have lots of small queues?

Imagine you have a folder full of images you want to convert from JPEG to PNG. There are many ways to do this. There's the manual way of opening each image in GIMP and exporting it to the new format. That's usually the worst possible way. It's not only time-intensive, it's labor-intensive.

A pretty neat variation on this theme is the shell-based solution:
```
$ convert 001.jpeg 001.png
$ convert 002.jpeg 002.png
$ convert 003.jpeg 003.png
... and so on ...
```

It's a great trick when you first learn it, and at first it's a vast improvement. No need for a GUI and constant clicking. But it's still labor-intensive.

Better still:
```
$ for i in *jpeg; do convert $i $i.png ; done
```

This, at least, sets the job(s) in motion and frees you up to do more productive things. The problem is, it's still a serial process. One image gets converted, and then the next one in the queue steps up for conversion, and so on until the queue has been emptied.

With Parallel:
```
$ find . -name "*jpeg" | parallel -I% --max-args 1 convert % %.png
```

This is a combination of two commands: the `find` command, which gathers the objects you want to operate on, and the `parallel` command, which sorts through the objects and makes sure everything gets processed as required.

* `find . -name "*jpeg"` finds all files in the current directory that end in `jpeg`.
* `parallel` invokes GNU Parallel.
* `-I%` creates a placeholder, called `%`, to stand in for whatever `find` hands over to Parallel. You use this because otherwise you'd have to manually write a new command for each result of `find`, and that's exactly what you're trying to avoid.
* `--max-args 1` limits the rate at which Parallel requests a new object from the queue. Since the command Parallel is running requires only one file, you limit the rate to 1. Were you doing a more complex command that required two files (such as `cat 001.txt 002.txt > new.txt`), you would limit the rate to 2.
* `convert % %.png` is the command you want to run in Parallel.

The result of this command is that `find` gathers all relevant files and hands them over to `parallel`, which launches a job and immediately requests the next in line. Parallel continues to do this for as long as it is safe to launch new jobs without crippling your computer. As old jobs are completed, it replaces them with new ones, until all the data provided to it has been processed. What took 10 minutes before might take only 5 or 3 with Parallel.
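
A quick way to convince yourself of the speedup is to time the serial loop against the parallel run on the same directory of images. This is an illustrative sketch rather than a benchmark; the file names follow the earlier examples, and `parallel` defaults to one job per CPU core.
```
# Illustrative timing comparison; file names follow the earlier examples.
time (for i in *jpeg; do convert "$i" "$i.png"; done)                     # serial
time (find . -name "*jpeg" | parallel -I% --max-args 1 convert % %.png)  # parallel
```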

### Multiple inputs

The `find` command is an excellent gateway to Parallel as long as you're familiar with `find` and `xargs` (collectively called GNU Find Utilities, or `findutils`). It provides a flexible interface that many Linux users are already comfortable with and is pretty easy to learn if you're a newcomer.

The `find` command is fairly straightforward: you provide `find` with a path to a directory you want to search and some portion of the file name you want to search for. Use wildcard characters to cast your net wider; in this example, the asterisk indicates anything, so `find` locates all files that end with the string `searchterm`:
```
$ find /path/to/directory -name "*searchterm"
```

By default, `find` returns the results of its search one item at a time, with one item per line:
```
$ find ~/graphics -name "*jpg"
/home/seth/graphics/001.jpg
/home/seth/graphics/cat.jpg
/home/seth/graphics/penguin.jpg
/home/seth/graphics/IMG_0135.jpg
```

When you pipe the results of `find` to `parallel`, each item on each line is treated as one argument to the command that `parallel` is arbitrating. If, on the other hand, you need to process more than one argument in one command, you can split up the way the data in the queue is handed over to `parallel`.

Here's a simple, unrealistic example, which I'll later turn into something more useful. You can follow along with this example, as long as you have GNU Parallel installed.

Assume you have four files. List them, one per line, to see exactly what you have:
```
$ echo ada > ada ; echo lovelace > lovelace
$ echo richard > richard ; echo stallman > stallman
$ ls -1
ada
lovelace
richard
stallman
```

You want to combine two files into a third that contains the contents of both files. This requires that Parallel has access to two files, so the `-I%` variable won't work in this case.

Parallel's default behavior is basically invisible:
```
$ ls -1 | parallel echo
ada
lovelace
richard
stallman
```

Now tell Parallel you want to get two objects per job:
```
$ ls -1 | parallel --max-args=2 echo
ada lovelace
richard stallman
```

Now the lines have been combined. Specifically, two results from `ls -1` are passed to Parallel all at once. That's the right number of arguments for this task, but they're effectively one argument right now: "ada lovelace" and "richard stallman." What you actually want is two distinct arguments per job.

Luckily, that technicality is parsed by Parallel itself. If you set `--max-args` to `2`, you get two variables, `{1}` and `{2}`, representing the first and second parts of the argument:
```
$ ls -1 | parallel --max-args=2 cat {1} {2} ">" {1}_{2}.person
```

In this command, the variable `{1}` is `ada` or `richard` (depending on which job you look at) and `{2}` is either `lovelace` or `stallman`. The contents of the files are redirected with a redirect symbol in quotes (the quotes grab the redirect symbol from Bash so Parallel can use it) and placed into new files called `ada_lovelace.person` and `richard_stallman.person`.
```
$ ls -1
ada
ada_lovelace.person
lovelace
richard
richard_stallman.person
stallman

$ cat ada_*person
ada lovelace

$ cat ri*person
richard stallman
```

If you spend all day parsing log files that are hundreds of megabytes in size, you might see how parallelized text parsing could be useful to you; otherwise, this is mostly a demonstrative exercise.
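
As a taste of that log-parsing use case, here is a minimal sketch that counts occurrences of a search string in many large logs at once; the path and the search string are placeholders.
```
# Count occurrences of a string in each log, one grep per file, in parallel.
# /var/log and "ERROR" are placeholders; adjust to your own logs.
# --tag prefixes each output line with the file it came from.
find /var/log -name "*.log" | parallel --tag grep -c "ERROR" {}
```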

However, this kind of processing is invaluable for more than just text parsing. Here's a real-life example from the film world. Consider a directory of video files and audio files that need to be joined together.
```
$ ls -1
12_LS_establishing-manor.avi
12_wildsound.flac
14_butler-dialogue-mixed.flac
14_MS_butler.avi
...and so on...
```

Using the same principles, a simple command can be created so that the files are combined in parallel:
```
$ ls -1 | parallel --max-args=2 ffmpeg -i {1} -i {2} -vcodec copy -acodec copy {1}.mkv
```
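
If you would rather be explicit about pairing the nth video with the nth audio file, GNU Parallel's `--link` option walks two input sources in lockstep. A sketch under the assumption that the sorted `*.avi` and `*.flac` globs correspond one to one:
```
# Pair the nth video with the nth audio file explicitly via --link.
# Assumes the sorted *.avi and *.flac lists line up one to one.
parallel --link ffmpeg -i {1} -i {2} -vcodec copy -acodec copy {1}.mkv ::: *.avi ::: *.flac
```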

### Brute. Force.

All this fancy input and output parsing isn't to everyone's taste. If you prefer a more direct approach, you can throw commands at Parallel and walk away.

First, create a text file with one command on each line:
```
$ cat jobs2run
bzip2 oldstuff.tar
oggenc music.flac
opusenc ambiance.wav
convert bigfile.tiff small.jpeg
ffmpeg -i foo.avi -b:v 12000k foo.mp4
xsltproc --output build/tmp.fo style/dm.xsl src/tmp.xml
bzip2 archive.tar
```

Then hand the file over to Parallel:
```
$ parallel --jobs 6 < jobs2run
```

And now all jobs in your file are run in parallel. If more jobs exist than jobs allowed, a queue is formed and maintained by Parallel until all jobs have run.

### Much, much more

GNU Parallel is a powerful and flexible tool, with far more use cases than can fit into this article. Its man page provides examples of really cool things you can do with it, from remote execution over SSH to incorporating Bash functions into your Parallel commands. There's even an extensive demonstration series on [YouTube][2], so you can learn from the GNU Parallel team directly. The GNU Parallel lead maintainer has also just released the command's official guide, available from [Lulu.com][3].

GNU Parallel has the power to change the way you compute, and if it doesn't do that, it will at the very least change the time your computer spends computing. Try it today!

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/gnu-parallel

作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[1]:https://www.gnu.org/software/parallel
[2]:https://www.youtube.com/watch?v=OpaiGYxkSuQ&list=PL284C9FF2488BC6D1
[3]:http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
@ -1,3 +1,4 @@
Translating KevinSJ -- 05142018
How To Display Images In The Terminal
======

@ -0,0 +1,84 @@

3 useful things you can do with the IP tool in Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)

It has been more than a decade since the `ifconfig` command was deprecated on Linux in favor of the `iproute2` project, which contains the magical tool `ip`. Many online tutorial resources still refer to old command-line tools like `ifconfig`, `route`, and `netstat`. The goal of this tutorial is to share some of the simple networking-related things you can do easily using the `ip` tool instead.

### Find your IP address
```
[dneary@host]$ ip addr show

[snip]

44: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 5c:e0:c5:c7:f0:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.16.196.113/23 brd 10.16.197.255 scope global dynamic wlp4s0
       valid_lft 74830sec preferred_lft 74830sec
    inet6 fe80::5ee0:c5ff:fec7:f0f1/64 scope link
       valid_lft forever preferred_lft forever
```

`ip addr show` will show you a lot of information about all of your network link devices. In this case, my wireless Ethernet card (wlp4s0) has the IPv4 address (the `inet` field) `10.16.196.113/23`. The `/23` means that the first 23 of the 32 bits in the IP address are shared by all of the IP addresses in this subnet. IP addresses in the subnet will range from `10.16.196.0` to `10.16.197.254`. The broadcast address for the subnet (the `brd` field after the IP address), `10.16.197.255`, is reserved for broadcast traffic to all hosts on the subnet.

We can show only the information about a single device using `ip addr show dev wlp4s0`, for example.

### Display your routing table
```
[dneary@host]$ ip route list
default via 10.16.197.254 dev wlp4s0 proto static metric 600
10.16.196.0/23 dev wlp4s0 proto kernel scope link src 10.16.196.113 metric 601
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
```

The routing table is the local host's way of helping network traffic figure out where to go. It contains a set of signposts, sending traffic to a specific interface, and a specific next waypoint on its journey.

If you run any virtual machines or containers, these will get their own IP addresses and subnets, which can make these routing tables quite complicated, but in a single host, there are typically two instructions. For local traffic, send it out onto the local Ethernet, and the network switches will figure out (using a protocol called ARP) which host owns the destination IP address, and thus where the traffic should be sent. For traffic to the internet, send it to the local gateway node, which will have a better idea how to get to the destination.

In the situation above, the first line represents the external gateway for external traffic, the second line is for local traffic, and the third is reserved for a virtual bridge for VMs running on the host, but this link is not currently active.
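
If you are ever unsure which of these signposts applies to a given destination, you can ask the kernel directly instead of reading the table yourself; `ip route get` prints the route that would be chosen (the address below is just an example):
```
# Ask the kernel which route (and source address) it would use for a destination.
ip route get 8.8.8.8
```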

### Monitor your network configuration
```
[dneary@host]$ ip monitor all
[dneary@host]$ ip -s link list wlp4s0
```

The `ip monitor` command can be used to monitor changes in routing tables, network addressing on network interfaces, or changes in ARP tables on the local host. This command can be particularly useful in debugging network issues related to containers and networking, when two VMs should be able to communicate with each other but cannot.

When used with `all`, `ip monitor` will report all changes, prefixed with one of `[LINK]` (network interface changes), `[ROUTE]` (changes to a routing table), `[ADDR]` (IP address changes), or `[NEIGH]` (nothing to do with horses—changes related to ARP addresses of neighbors).

You can also monitor changes on specific objects (for example, a specific routing table or an IP address).
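
For example, to watch a single class of events rather than everything, name the object after `ip monitor`; the object names are the same ones the `ip` subcommands use:
```
# Watch only routing-table changes, or only address changes.
ip monitor route
ip monitor address
```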

Another useful option that works with many commands is `ip -s`, which gives some statistics. Adding a second `-s` option adds even more statistics. `ip -s link list wlp4s0` above will give lots of information about packets received and transmitted, with the number of packets dropped, errors detected, and so on.

### Handy tip: Shorten your commands

In general, for the `ip` tool, you need to include only enough letters to uniquely identify what you want to do. Instead of `ip monitor`, you can use `ip mon`. Instead of `ip addr list`, you can use `ip a l`, and you can use `ip r` in place of `ip route`. `ip link list` can be shortened to `ip l ls`. To read about the many options you can use to change the behavior of a command, visit the [ip manpage][1].
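
To see that the abbreviations really are equivalent, compare a few side by side; each pair below is an ordinary `ip` invocation and produces the same output:
```
ip addr list
ip a l

ip route
ip r
```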

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/useful-things-you-can-do-with-IP-tool-Linux

作者:[Dave Neary][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dneary
[1]:https://www.systutorials.com/docs/linux/man/8-ip-route/
sources/tech/20180511 Looking at the Lispy side of Perl.md
@ -0,0 +1,357 @@

Looking at the Lispy side of Perl
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
Some programming languages (e.g., C) have named functions only, whereas others (e.g., Lisp, Java, and Perl) have both named and unnamed functions. A lambda is an unnamed function, with Lisp as the language that popularized the term. Lambdas have various uses, but they are particularly well-suited for data-rich applications. Consider this depiction of a data pipeline, with two processing stages shown:

![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/data_source.png?itok=OON2cC2R)

### Lambdas and higher-order functions

The filter and transform stages can be implemented as higher-order functions—that is, functions that can take a function as an argument. Suppose that the depicted pipeline is part of an accounts-receivable application. The filter stage could consist of a function named `filter_data`, whose single argument is another function—for example, a `high_buyers` function that filters out amounts that fall below a threshold. The transform stage might convert amounts in U.S. dollars to equivalent amounts in euros or some other currency, depending on the function plugged in as the argument to the higher-order `transform_data` function. Changing the filter or the transform behavior requires only plugging in a different function argument to the higher-order `filter_data` or `transform_data` functions.

Lambdas serve nicely as arguments to higher-order functions for two reasons. First, lambdas can be crafted on the fly, and even written in place as arguments. Second, lambdas encourage the coding of pure functions, which are functions whose behavior depends solely on the argument(s) passed in; such functions have no side effects and thereby promote safe concurrent programs.

Perl has a straightforward syntax and semantics for lambdas and higher-order functions, as shown in the following example:

### A first look at lambdas in Perl

```
#!/usr/bin/perl

use strict;
use warnings;

## References to lambdas that increment, decrement, and do nothing.
## $_[0] is the argument passed to each lambda.
my $inc = sub { $_[0] + 1 }; ## could use 'return $_[0] + 1' for clarity
my $dec = sub { $_[0] - 1 }; ## ditto
my $nop = sub { $_[0] }; ## ditto

sub trace {
    my ($val, $func, @rest) = @_;
    print $val, " ", $func, " ", @rest, "\nHit RETURN to continue...\n";
    <STDIN>;
}

## Apply an operation to a value. The base case occurs when there are
## no further operations in the list named @rest.
sub apply {
    my ($val, $first, @rest) = @_;
    trace($val, $first, @rest) if 1; ## 0 to stop tracing

    return ($val, apply($first->($val), @rest)) if @rest; ## recursive case
    return ($val, $first->($val)); ## base case
}

my $init_val = 0;
my @ops = ( ## list of lambda references
    $inc, $dec, $dec, $inc,
    $inc, $inc, $inc, $dec,
    $nop, $dec, $dec, $nop,
    $nop, $inc, $inc, $nop
);

## Execute.
print join(' ', apply($init_val, @ops)), "\n";
## Final line of output: 0 1 0 -1 0 1 2 3 2 2 1 0 0 0 1 2 2
```

The lispy program shown above highlights the basics of Perl lambdas and higher-order functions. Named functions in Perl start with the keyword `sub` followed by a name:
```
sub increment { ... } # named function
```

An unnamed or anonymous function omits the name:
```
sub {...} # lambda, or unnamed function
```

In the lispy example, there are three lambdas, and each has a reference to it for convenience. Here, for review, is the `$inc` reference and the lambda referred to:
```
my $inc = sub { $_[0] + 1 };
```

The lambda itself, the code block to the right of the assignment operator `=`, increments its argument `$_[0]` by 1. The lambda’s body is written in Lisp style; that is, without either an explicit `return` or a semicolon after the incrementing expression. In Perl, as in Lisp, the value of the last expression in a function’s body becomes the returned value if there is no explicit `return` statement. In this example, each lambda has only one expression in its body—a simplification that befits the spirit of lambda programming.

The `trace` function in the lispy program helps to clarify how the program works (as I'll illustrate below). The higher-order function `apply`, a nod to a Lisp function of the same name, takes a numeric value as its first argument and a list of lambda references as its second argument. The `apply` function is called initially, at the bottom of the program, with zero as the first argument and the list named `@ops` as the second argument. This list consists of 16 lambda references from among `$inc` (increment a value), `$dec` (decrement a value), and `$nop` (do nothing). The list could contain the lambdas themselves, but the code is easier to write and to understand with the more concise lambda references.

The logic of the higher-order `apply` function can be clarified as follows:

1. The argument list passed to `apply` in typical Perl fashion is separated into three pieces:
```
my ($val, $first, @rest) = @_; ## break the argument list into three elements
```

The first element `$val` is a numeric value, initially `0`. The second element `$first` is a lambda reference, one of `$inc`, `$dec`, or `$nop`. The third element `@rest` is a list of any remaining lambda references after the first such reference is extracted as `$first`.

2. If the list `@rest` is not empty after its first element is removed, then `apply` is called recursively. The two arguments to the recursively invoked `apply` are:

   * The value generated by applying lambda operation `$first` to numeric value `$val`. For example, if `$first` is the incrementing lambda to which `$inc` refers, and `$val` is 2, then the new first argument to `apply` would be 3.
   * The list of remaining lambda references. Eventually, this list becomes empty because each call to `apply` shortens the list by extracting its first element.

Here is some output from a sample run of the lispy program, with `%` as the command-line prompt:
```
% ./lispy.pl

0 CODE(0x8f6820) CODE(0x8f68c8)CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
Hit RETURN to continue...

1 CODE(0x8f68c8) CODE(0x8f68c8)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)CODE(0x8f6820)...
Hit RETURN to continue
```

The first output line can be clarified as follows:

* The `0` is the numeric value passed as an argument in the initial (and thus non-recursive) call to function `apply`. The argument name is `$val` in `apply`.
* The `CODE(0x8f6820)` is a reference to one of the lambdas, in this case the lambda to which `$inc` refers. The second argument is thus the address of some lambda code. The argument name is `$first` in `apply`.
* The third piece, the series of `CODE` references, is the list of lambda references beyond the first. The argument name is `@rest` in `apply`.

The second line of output shown above also deserves a look. The numeric value is now `1`, the result of incrementing `0`: the initial lambda is `$inc` and the initial value is `0`. The extracted reference `CODE(0x8f68c8)` is now `$first`, as this reference is the first element in the `@rest` list after `$inc` has been extracted earlier.

Eventually, the `@rest` list becomes empty, which ends the recursive calls to `apply`. In this case, the function `apply` simply returns a list with two elements:

1. The numeric value taken in as an argument (in the sample run, 2).
2. This argument transformed by the lambda (also 2 because the last lambda reference happens to be `$nop` for do nothing).

The lispy example underscores that Perl supports lambdas without any special fussy syntax: A lambda is just an unnamed code block, perhaps with a reference to it for convenience. Lambdas themselves, or references to them, can be passed straightforwardly as arguments to higher-order functions such as `apply` in the lispy example. Invoking a lambda through a reference is likewise straightforward. In the `apply` function, the call is:
```
$first->($val) ## $first is a lambda reference, $val a numeric argument passed to the lambda
```

### A richer code example

The next code example puts a lambda and a higher-order function to practical use. The example implements Conway’s Game of Life, a cellular automaton that can be represented as a matrix of cells. Such a matrix goes through various transformations, each yielding a new generation of cells. The Game of Life is fascinating because even relatively simple initial configurations can lead to quite complex behavior. A quick look at the rules governing cell birth, survival, and death is in order.

Consider this 5x5 matrix, with a star representing a live cell and a dash representing a dead one:
```
----- ## initial configuration
--*--
--*--
--*--
-----
```

The next generation becomes:
```
----- ## next generation
-----
-***-
-----
-----
```

As life continues, the generations oscillate between these two configurations.

Here are the rules determining birth, death, and survival for a cell. A given cell has between three neighbors (a corner cell) and eight neighbors (an interior cell):

* A dead cell with exactly three live neighbors comes to life.
* A live cell with more than three live neighbors dies from over-crowding.
* A live cell with two or three live neighbors survives; hence, a live cell with fewer than two live neighbors dies from loneliness.

In the initial configuration shown above, the top and bottom live cells die because neither has two or three live neighbors. By contrast, the middle live cell in the initial configuration gains two live neighbors, one on either side, in the next generation.

## Conway’s Game of Life
```
#!/usr/bin/perl

### A simple implementation of Conway's game of life.
# Usage: ./gol.pl [input file] ;; If no file name given, DefaultInfile is used.

use constant Dead => "-";
use constant Alive => "*";
use constant DefaultInfile => 'conway.in';

use strict;
use warnings;

my $dimension = undef;
my @matrix = ();
my $generation = 1;

sub read_data {
    my $datafile = DefaultInfile;
    $datafile = shift @ARGV if @ARGV;
    die "File $datafile does not exist.\n" if !-f $datafile;
    open(INFILE, "<$datafile");

    ## Check 1st line for dimension;
    $dimension = <INFILE>;
    die "1st line of input file $datafile not an integer.\n" if $dimension !~ /\d+/;

    my $record_count = 0;
    while (<INFILE>) {
        chomp($_);
        last if $record_count++ == $dimension;
        die "$_: bad input record -- incorrect length\n" if length($_) != $dimension;
        my @cells = split(//, $_);
        push @matrix, @cells;
    }
    close(INFILE);
    draw_matrix();
}

sub draw_matrix {
    my $n = $dimension * $dimension;
    print "\n\tGeneration $generation\n";
    for (my $i = 0; $i < $n; $i++) {
        print "\n\t" if ($i % $dimension) == 0;
        print $matrix[$i];
    }
    print "\n\n";
    $generation++;
}

sub has_left_neighbor {
    my ($ind) = @_;
    return ($ind % $dimension) != 0;
}

sub has_right_neighbor {
    my ($ind) = @_;
    return (($ind + 1) % $dimension) != 0;
}

sub has_up_neighbor {
    my ($ind) = @_;
    return (int($ind / $dimension)) != 0;
}

sub has_down_neighbor {
    my ($ind) = @_;
    return (int($ind / $dimension) + 1) != $dimension;
}

sub has_left_up_neighbor {
    my ($ind) = @_;
    return has_left_neighbor($ind) && has_up_neighbor($ind);
}

sub has_right_up_neighbor {
    my ($ind) = @_;
    return has_right_neighbor($ind) && has_up_neighbor($ind);
}

sub has_left_down_neighbor {
    my ($ind) = @_;
    return has_left_neighbor($ind) && has_down_neighbor($ind);
}

sub has_right_down_neighbor {
    my ($ind) = @_;
    return has_right_neighbor($ind) && has_down_neighbor($ind);
}

sub compute_cell {
    my ($ind) = @_;
    my @neighbors;

    # 8 possible neighbors
    push(@neighbors, $ind - 1) if has_left_neighbor($ind);
    push(@neighbors, $ind + 1) if has_right_neighbor($ind);
    push(@neighbors, $ind - $dimension) if has_up_neighbor($ind);
    push(@neighbors, $ind + $dimension) if has_down_neighbor($ind);
    push(@neighbors, $ind - $dimension - 1) if has_left_up_neighbor($ind);
    push(@neighbors, $ind - $dimension + 1) if has_right_up_neighbor($ind);
    push(@neighbors, $ind + $dimension - 1) if has_left_down_neighbor($ind);
    push(@neighbors, $ind + $dimension + 1) if has_right_down_neighbor($ind);

    my $count = 0;
    foreach my $n (@neighbors) {
        $count++ if $matrix[$n] eq Alive;
    }

    return Alive if ($matrix[$ind] eq Alive) && (($count == 2) || ($count == 3)); ## survival
    return Alive if ($matrix[$ind] eq Dead) && ($count == 3); ## birth
    return Dead; ## death
}

sub again_or_quit {
    print "RETURN to continue, 'q' to quit.\n";
    my $flag = <STDIN>;
    chomp($flag);
    return ($flag eq 'q') ? 1 : 0;
}

sub animate {
    my @new_matrix;
    my $n = $dimension * $dimension - 1;

    while (1) { ## loop until user signals stop
        @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix

        splice @matrix; ## empty current matrix
        push @matrix, @new_matrix; ## repopulate matrix
        draw_matrix(); ## display the current matrix

        last if again_or_quit(); ## continue?
        splice @new_matrix; ## empty temp matrix
    }
}

### Execute
read_data(); ## read initial configuration from input file
animate(); ## display and recompute the matrix until user tires
```

The gol program (see [Conway’s Game of Life][1]) has almost 140 lines of code, but most of these involve reading the input file, displaying the matrix, and bookkeeping tasks such as determining the number of live neighbors for a given cell. Input files should be configured as follows:
```
5
-----
--*--
--*--
--*--
-----
```

The first record gives the matrix side, in this case 5 for a 5x5 matrix. The remaining rows are the contents, with stars for live cells and dashes for dead ones.
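
To try it, save the matrix above to a file and point the script at it. Per the usage comment in the listing, the file name argument is optional and defaults to `conway.in`:
```
# Run the Game of Life program against the sample input above.
chmod +x gol.pl
./gol.pl conway.in    # or just ./gol.pl, since conway.in is the default
```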

The code of primary interest resides in two functions, `animate` and `compute_cell`. The `animate` function constructs the next generation, and this function needs to call `compute_cell` on every cell in order to determine the cell’s new status as either alive or dead. How should the `animate` function be structured?

The `animate` function has a `while` loop that iterates until the user decides to terminate the program. Within this `while` loop the high-level logic is straightforward:

1. Create the next generation by iterating over the matrix cells, calling function `compute_cell` on each cell to determine its new status. At issue is how best to do the iteration. A loop nested inside the `while` loop would do, of course, but nested loops can be clunky. Another way is to use a higher-order function, as clarified shortly.
2. Replace the current matrix with the new one.
3. Display the next generation.
4. Check if the user wants to continue: if so, continue; otherwise, terminate.

Here, for review, is the call to Perl’s higher-order `map` function, with the function’s name again a nod to Lisp. This call occurs as the first statement within the `while` loop in `animate`:
```
while (1) {
    @new_matrix = map {compute_cell($_)} (0..$n); ## generate next matrix
```

The `map` function takes two arguments: an unnamed code block (a lambda!), and a list of values passed to this code block one at a time. In this example, the code block calls the `compute_cell` function with one of the matrix indexes, 0 through the matrix size - 1. Although the matrix is displayed as two-dimensional, it is implemented as a one-dimensional list.

Higher-order functions such as `map` encourage the code brevity for which Perl is famous. My view is that such functions also make code easier to write and to understand, as they dispense with the required but messy details of loops. In any case, lambdas and higher-order functions make up the Lispy side of Perl.

If you're interested in more detail, I recommend Mark Jason Dominus's book, [Higher-Order Perl][2].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/looking-lispy-side-perl

作者:[Marty Kalin][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mkalindepauledu
[1]:https://trello-attachments.s3.amazonaws.com/575088ec94ca6ac38b49b30e/5ad4daf12f6b6a3ac2318d28/c0700c7379983ddf61f5ab5ab4891f0c/lispyPerl.html#gol (Conway’s Game of Life)
[2]:https://www.elsevier.com/books/higher-order-perl/dominus/978-1-55860-701-9
@ -0,0 +1,309 @@
|
||||
How To Check Laptop Battery Status In Terminal In Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2016/12/Check-Laptop-Battery-Status-In-Terminal-In-Linux-720x340.png)
|
||||
Finding your laptop battery status in GUI mode is easy: you can tell the battery level just by hovering the mouse pointer over the battery indicator icon in the task bar. But how about from the command line? Not everyone knows this. The other day, a friend of mine asked how to check his laptop battery level from the Terminal on his Ubuntu desktop – hence this post. Here I have included three simple methods that will help you check the laptop battery status in the Terminal on any Linux distribution.
|
||||
|
||||
### Check Laptop Battery Status In Terminal In Linux
|
||||
|
||||
We can find the laptop battery status from the command line in three ways.
|
||||
|
||||
##### Method 1 – Using the “upower” command
|
||||
|
||||
The **Upower** command comes preinstalled with most Linux distributions. To display the battery status using Upower, open up the Terminal and run:
|
||||
```
|
||||
$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
native-path: BAT0
|
||||
vendor: Samsung SDI
|
||||
model: DELL 7XFJJA2
|
||||
serial: 4448
|
||||
power supply: yes
|
||||
updated: Sat 12 May 2018 06:48:48 PM IST (41 seconds ago)
|
||||
has history: yes
|
||||
has statistics: yes
|
||||
battery
|
||||
present: yes
|
||||
rechargeable: yes
|
||||
state: charging
|
||||
warning-level: none
|
||||
energy: 43.3011 Wh
|
||||
energy-empty: 0 Wh
|
||||
energy-full: 44.5443 Wh
|
||||
energy-full-design: 48.84 Wh
|
||||
energy-rate: 9.8679 W
|
||||
voltage: 12.548 V
|
||||
time to full: 7.6 minutes
|
||||
percentage: 97%
|
||||
capacity: 91.2045%
|
||||
technology: lithium-ion
|
||||
icon-name: 'battery-full-charging-symbolic'
|
||||
History (charge):
|
||||
1526131128 97.000 charging
|
||||
History (rate):
|
||||
1526131128 9.868 charging
|
||||
|
||||
```
|
||||
|
||||
As you see above, my battery is in charging mode now and the battery level is 97%.
|
||||
|
||||
If the above command doesn’t work for any reason, try the following command instead:
|
||||
```
|
||||
$ upower -i `upower -e | grep 'BAT'`
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
native-path: BAT0
|
||||
vendor: Samsung SDI
|
||||
model: DELL 7XFJJA2
|
||||
serial: 4448
|
||||
power supply: yes
|
||||
updated: Sat 12 May 2018 06:50:49 PM IST (22 seconds ago)
|
||||
has history: yes
|
||||
has statistics: yes
|
||||
battery
|
||||
present: yes
|
||||
rechargeable: yes
|
||||
state: charging
|
||||
warning-level: none
|
||||
energy: 43.6119 Wh
|
||||
energy-empty: 0 Wh
|
||||
energy-full: 44.5443 Wh
|
||||
energy-full-design: 48.84 Wh
|
||||
energy-rate: 8.88 W
|
||||
voltage: 12.552 V
|
||||
time to full: 6.3 minutes
|
||||
percentage: 97%
|
||||
capacity: 91.2045%
|
||||
technology: lithium-ion
|
||||
icon-name: 'battery-full-charging-symbolic'
|
||||
History (rate):
|
||||
1526131249 8.880 charging
|
||||
|
||||
```
|
||||
|
||||
upower displays not just the battery status, but also complete details of the installed battery, such as the model, vendor name, serial number, state, voltage, and so on.
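As an aside, the `time to full` estimate follows directly from the energy fields; here is a quick sketch of the arithmetic in Python, using the sample values above:

```
# Reproduce upower's "time to full" estimate from the sample output above.
energy      = 43.3011   # Wh, current charge (the "energy" field)
energy_full = 44.5443   # Wh, charge when full ("energy-full")
rate        = 9.8679    # W, current charging rate ("energy-rate")

hours = (energy_full - energy) / rate
print(round(hours * 60, 1), 'minutes')   # ~7.6 minutes, matching upower
```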
|
||||
|
||||
However, you can display only the status of the battery by combining the upower and [**grep**][1] commands, as shown below.
|
||||
```
|
||||
$ upower -i $(upower -e | grep BAT) | grep --color=never -E "state|to\ full|to\ empty|percentage"
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
state: fully-charged
|
||||
percentage: 100%
|
||||
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
As you see in the above output, my Laptop battery has been fully charged.
|
||||
|
||||
For more details, refer to the man pages.
|
||||
```
|
||||
$ man upower
|
||||
|
||||
```
|
||||
|
||||
##### Method 2 – Using the “acpi” command
|
||||
|
||||
The **acpi** command shows battery status and other ACPI information in your Linux distribution.
|
||||
|
||||
You might need to install the **acpi** command in some Linux distributions.
|
||||
|
||||
To install acpi on Debian, Ubuntu and its derivatives:
|
||||
```
|
||||
$ sudo apt-get install acpi
|
||||
|
||||
```
|
||||
|
||||
On RHEL, CentOS, Fedora:
|
||||
```
|
||||
$ sudo yum install acpi
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
$ sudo dnf install acpi
|
||||
|
||||
```
|
||||
|
||||
On Arch Linux and its derivatives:
|
||||
```
|
||||
$ sudo pacman -S acpi
|
||||
|
||||
```
|
||||
|
||||
Once acpi is installed, run the following command:
|
||||
```
|
||||
$ acpi -V
|
||||
|
||||
```
|
||||
|
||||
**Note:** Here, “V” is a capital letter.
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Battery 0: Charging, 99%, 00:02:09 until charged
|
||||
Battery 0: design capacity 4400 mAh, last full capacity 4013 mAh = 91%
|
||||
Battery 1: Discharging, 0%, rate information unavailable
|
||||
Adapter 0: on-line
|
||||
Thermal 0: ok, 77.5 degrees C
|
||||
Thermal 0: trip point 0 switches to mode critical at temperature 84.0 degrees C
|
||||
Cooling 0: Processor 0 of 3
|
||||
Cooling 1: Processor 0 of 3
|
||||
Cooling 2: LCD 0 of 15
|
||||
Cooling 3: Processor 0 of 3
|
||||
Cooling 4: Processor 0 of 3
|
||||
Cooling 5: intel_powerclamp no state information available
|
||||
Cooling 6: x86_pkg_temp no state information available
|
||||
|
||||
```
|
||||
|
||||
Let us check only the battery's charge state. To do so, run:
|
||||
```
|
||||
$ acpi
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Battery 0: Charging, 99%, 00:01:41 until charged
|
||||
Battery 1: Discharging, 0%, rate information unavailable
|
||||
|
||||
```
|
||||
|
||||
Let us check the battery temperature:
|
||||
```
|
||||
$ acpi -t
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Thermal 0: ok, 63.5 degrees C
|
||||
|
||||
```
|
||||
|
||||
Let us view the above output in Fahrenheit:
|
||||
```
|
||||
$ acpi -t -f
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Thermal 0: ok, 144.5 degrees F
|
||||
|
||||
```
|
||||
|
||||
Want to know whether the AC power is connected or not? Run:
|
||||
```
|
||||
$ acpi -a
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Adapter 0: on-line
|
||||
|
||||
```
|
||||
|
||||
If the AC power is not available, you would see the following instead:
|
||||
```
|
||||
Adapter 0: off-line
|
||||
|
||||
```
|
||||
|
||||
For more details, check the man pages.
|
||||
```
|
||||
$ man acpi
|
||||
|
||||
```
|
||||
|
||||
##### Method 3 – Using the “batstat” program
|
||||
|
||||
**batstat** is a small ncurses-based CLI utility that displays your laptop battery status on Unix-like systems. It will display the following details:
|
||||
|
||||
* Current battery level
|
||||
* Current Energy
|
||||
* Full charge energy
|
||||
* Time elapsed from the start of the program, without tracking the sleep time of the machine.
|
||||
* Battery level history
|
||||
|
||||
|
||||
|
||||
Installing batstat is a piece of cake. Git clone the latest version using the command:
|
||||
```
|
||||
$ git clone https://github.com/Juve45/batstat.git
|
||||
|
||||
```
|
||||
|
||||
The above command will pull the latest batstat version and save its contents in a folder named “batstat”.
|
||||
|
||||
cd into the batstat/bin/ directory:
|
||||
```
|
||||
$ cd batstat/bin/
|
||||
|
||||
```
|
||||
|
||||
Copy the “batstat” binary to a directory in your PATH, for example /usr/local/bin/:
|
||||
```
|
||||
$ sudo cp batstat /usr/local/bin/
|
||||
|
||||
```
|
||||
|
||||
Make it executable using the command:
|
||||
```
|
||||
$ sudo chmod +x /usr/local/bin/batstat
|
||||
|
||||
```
|
||||
|
||||
Finally, run the following command to view your battery status.
|
||||
```
|
||||
$ batstat
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
![][4]
|
||||
|
||||
As you see in the above screenshot, my battery is in charging mode.
|
||||
|
||||
This utility has some limitations, though. As of writing this guide, batstat supports only one battery. And it gathers information only from the **“/sys/class/power_supply/”** directory. If your machine exposes the battery information in a different location, this program will not work.
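If you would rather script it yourself, here is a minimal Python sketch that reads the same `/sys/class/power_supply/` interface batstat uses; the `BAT0` name is an assumption and may differ on your machine:

```
#!/usr/bin/env python
# Minimal sketch: read battery status straight from sysfs.
# Assumes the battery is exposed as BAT0; adjust if yours differs.
import os

BAT = '/sys/class/power_supply/BAT0'

def read(name):
    with open(os.path.join(BAT, name)) as f:
        return f.read().strip()

print('Status:  ', read('status'))            # e.g. Charging / Discharging / Full
print('Capacity:', read('capacity') + '%')    # charge level in percent
```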
|
||||
|
||||
For more details, check the batstat GitHub page.
|
||||
|
||||
And that's all for today, folks. There might be many other commands and programs out there to check the laptop battery status in the Terminal in Linux. As far as I know, the methods given above have worked just fine as expected. If you know some other commands to find out the battery status, let me know in the comment section below. I will add them to the article if they work.
|
||||
|
||||
And that's all for now. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-check-laptop-battery-status-in-terminal-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
|
||||
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2016/12/sk@sk_006-1.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/12/batstat-1.png
|
@ -0,0 +1,47 @@
|
||||
LikeCoin, a cryptocurrency for creators of openly licensed content
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
|
||||
|
||||
Conventional wisdom indicates that writers, photographers, artists, and other creators who share their content for free, under Creative Commons and other open licenses, won't get paid. That means most independent creators don't make any money by publishing their work on the internet. Enter [LikeCoin][1]: a new, open source project that intends to make this convention, where artists often have to compromise or sacrifice in order to contribute, a thing of the past.
|
||||
|
||||
The LikeCoin protocol is designed to monetize creative content so creators can focus on creating great material rather than selling it.
|
||||
|
||||
The protocol is also based on decentralized technologies that track when content is used and reward its creators with LikeCoin, an [Ethereum ERC-20][2] cryptocurrency token. It operates through a "Proof of Creativity" algorithm which assigns LikeCoins based partially on how many "likes" a piece of content receives and how many derivative works are produced from it. Because openly licensed content has more opportunity to be reused and earn LikeCoin tokens, the system encourages content creators to publish under Creative Commons licenses.
|
||||
|
||||
### How it works
|
||||
|
||||
When a creative piece is uploaded via the LikeCoin protocol, the content creator includes the work's metadata, including author information and its InterPlanetary Linked Data ([IPLD][3]). This data forms a family graph of derivative works; we call the relationships between a work and its derivatives the "content footprint." This structure allows a content's inheritance tree to be easily traced all the way back to the original work.
|
||||
|
||||
LikeCoin tokens will be distributed to creators using information about a work's derivation history. Since all creative works contain the metadata of the author's wallet, the corresponding LikeCoin shares can be calculated through the algorithm and distributed accordingly.
|
||||
|
||||
LikeCoins are awarded in two ways: either directly by individuals who want to show their appreciation by paying a content creator, or through the Creators Pool, which collects viewers' "Likes" and distributes LikeCoin according to a content's LikeRank. Based on the content-footprint tracing in the LikeCoin protocol, LikeRank measures the importance (or creativity, as we define it in this context) of a piece of creative content. In general, the more derivative works a piece of content generates, the more creative it is considered, and thus the higher its LikeRank. LikeRank is the quantifier of a content's creativity.
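The article does not spell out the algorithm, so as a purely illustrative sketch — not the actual LikeRank implementation, and with a made-up derivation graph — a toy version of "more derivatives means a higher rank" could simply count each work's descendants:

```
# Toy illustration only: rank works by how many derivative works descend
# from them. This is NOT the real LikeRank algorithm; the graph is made up.
derivatives = {
    'photo':   ['remix1', 'remix2'],   # hypothetical original work
    'remix1':  ['collage'],
    'remix2':  [],
    'collage': [],
}

def toy_rank(work):
    children = derivatives.get(work, [])
    # score = direct derivatives plus all of their descendants
    return len(children) + sum(toy_rank(c) for c in children)

for work in derivatives:
    print(work, toy_rank(work))   # 'photo' scores highest, with 3 descendants
```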
|
||||
|
||||
### Want to get involved?
|
||||
|
||||
LikeCoin is still very new, and we expect to launch our first decentralized application later in 2018 to reward Creative Commons content and connect seamlessly with a much larger and established community.
|
||||
|
||||
Most of LikeCoin's code can be accessed in the [LikeCoin GitHub][4] repository under a [GPL 3.0 license][5]. Since it's still under active development, some of the experimental code is not yet open to the public, but we will make it so as soon as possible.
|
||||
|
||||
We welcome feature requests, pull requests, forks, and stars. Please join our development on GitHub and our general discussions on [Telegram][6]. We also release updates about our progress on [Medium][7], [Facebook][8], [Twitter][9], and our website, [like.co][1].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/likecoin
|
||||
|
||||
作者:[Kin Ko][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ckxpress
|
||||
[1]:https://like.co/
|
||||
[2]:https://en.wikipedia.org/wiki/ERC20
|
||||
[3]:https://ipld.io/
|
||||
[4]:https://github.com/likecoin
|
||||
[5]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[6]:https://t.me/likecoin
|
||||
[7]:http://medium.com/likecoin
|
||||
[8]:http://fb.com/likecoin.foundation
|
||||
[9]:https://twitter.com/likecoin_fdn
|
@ -0,0 +1,96 @@
|
||||
IT自动化:如何去实现
|
||||
======
|
||||
|
||||
在任何重要的项目或变更刚开始的时候,IT 管理者在前进的道路上都面临着一个常见的抉择。
|
||||
|
||||
第一条路径看上去提供了一条从 A 到 B 的最短路径:简单地把项目强制分配给每个人去执行,本质上就是“要么按照要求去做,要么就别做了”。
|
||||
|
||||
第二条路径可能看上去不那么直接,因为你要花时间去解释这个项目背后的策略和原因。你会沿着这条路线设置一个个停靠点,而不是跑一场从起点直达终点的马拉松:“这就是我们正在做的事情,以及我们为什么要这么做。”
|
||||
|
||||
猜一猜哪条路径会带来更好的结果?
|
||||
|
||||
如果你选的是第二条路径,那说明你以前两条路都经历过,而且尝过第一条路的苦头。让人们参与到重大变革中,总会是更明智的选择。
|
||||
|
||||
IT 领导者也知道,重大的变革总会带来严重的恐慌、怀疑和其他挑战,而 IT 自动化无疑正是这样的变革。这个术语对某些人来说很可怕,而且容易被曲解。帮助人们理解你的公司为什么需要 IT 自动化、以及如何去实现它,是达成目标和策略的重要一步。
|
||||
|
||||
[ **阅读我们的相关文章:**[**IT 自动化最佳实践:持久成功的 7 个关键点**][2]。]
|
||||
|
||||
考虑到这一点,我们咨询了许多IT管理者关于如何在你的组织中实现IT自动化。
|
||||
|
||||
## 1. 向人们展示它的优点
|
||||
|
||||
我们要面对的一个事实是:自利和自保是人的本能。顺应人们的这种本能是吸引他们的好方法:向他们展示自动化策略将如何让他们本人和他们的工作受益。自动化软件管道中的某个特定过程,是否意味着半夜把团队同事叫起来排除故障的次数会减少?自动化是否能让人们摆脱低技术含量的手工作业,转而从事更有策略性、更高效的工作,从而推动他们的职业生涯更进一步?
|
||||
|
||||
“向他们传达他们能得到什么好处,以及自动化将如何让他们的客户和公司受益,”ADP 全球首席技术官 Vipul Nagrath 建议道。“将当前的状态和光明的未来进行对比,展现公司将变得如何稳定、敏捷、高效和安全。”
|
||||
|
||||
这样的方法同样适用于 IT 领域之外;在向非技术领域的利益相关者解读这些好处时,只需把术语解释得通俗一些即可,Nagrath 说道。
|
||||
|
||||
先铺垫好“之前”与“之后”的情景对比,是帮助人们更透彻理解的一种很好的叙事手段。
|
||||
|
||||
“你要描绘一幅人们能够联想到的当前状态的画面,”Nagrath 说。“描述目前的工作方式,但也要重点指出是什么拖慢了团队的脚步。”然后再阐释自动化将如何改善现状。
|
||||
|
||||
## 2.将自动化和特定的商业目标绑定在一起
|
||||
|
||||
构建强有力方案的一部分工作,是确保人们理解你不只是在追逐潮流。如果只是为了自动化而自动化,人们会很快察觉,进而更加抵制,在 IT 界也许更是如此。
|
||||
|
||||
“自动化需要由商业需求来驱动,例如收入和运营开销,”Cyxtera 的副总裁兼首席信息安全官 David Emerson 说道。“任何自动化的努力都不是不证自明的,任何技术专长本身都不应成为目的,除非它是公司的一项核心能力。”
|
||||
|
||||
像 Nagrath 一样,Emerson 建议将自动化与具体的商业目标挂钩,并以迭代、循序渐进的方式推进这些目标和相关的激励措施。
|
||||
|
||||
## 3. 将自动化计划分解为可管理的条目
|
||||
|
||||
即使你的自动化策略从字面上看就是“一切皆自动化”,对大多数组织来说这也很艰难,而且可能不切实际。要制定强有力的方案,你需要一个能把自动化愿景分解为可管理目标的计划,这也能为之后漫长的道路留出充分的灵活性。
|
||||
|
||||
“在制定自动化方案时,我建议详细阐明推进自动化的总体目标,并允许以迭代的方式循序渐进,在低风险的前提下逐步展示和证明其收益,”Emerson 说道。
|
||||
|
||||
GA Connector 的创始人 Sergey Zuev 分享了一个快速见效的第一手案例,说明了为什么自动化如此重要,以及它如何帮你为自动化策略建立一个强有力且持久的论据。Zuev 很有发言权:他公司的工具能把 CRM 应用的数据自动导入 Google Analytics,而真正让他深有体会的,是公司内部将客户培训流程自动化的经验。
|
||||
|
||||
“起初,我们曾尝试一次性搭建整个培训机制,结果这个项目搁浅了好几个月,”Zuev 说道。“意识到这样下去行不通之后,我们决定挑选其中一个见效最快的环节立即启动。结果我们只用了一周就实现了电子邮件序列的自动化,并且立刻从省下的人工劳动中获益。”
|
||||
|
||||
## 4. 也要宣传全局性的好处
|
||||
|
||||
循序渐进的方法并不妨碍你构建宏伟的蓝图。就像从个人或团队的层面来阐述方案是个好主意一样,帮助人们理解自动化对全公司的好处也同样重要。
|
||||
|
||||
“如果我们能够加速达到商业需求所需的时间,那么一切质疑将会平息。”
|
||||
|
||||
AHEAD 的首席技术官 Eric Kaplan 也认为,通过小范围的胜利来展示自动化的价值,是赢得人心的聪明策略。而这些所谓“小”的价值展示,同样能帮助人们看到更大的图景。Kaplan 指出,时间价值是个人和组织都容易产生共鸣的领域。
|
||||
|
||||
“最能展现价值的地方,就是你能节约多少时间,”Kaplan 说。“如果我们能够缩短满足商业需求所需的时间,那么一切质疑都将消失。”
|
||||
|
||||
时间和可扩展性是业务和 IT 同事都能切身体会的有力卖点,尤其是在业务不断增长、规模必须随之扩大的情况下。
|
||||
|
||||
“自动化的结果是可扩展的:每个人只需较少的努力就能维护和改善你的 IT 环境,”红帽全球服务副总裁 John 最近提到。“如果增加人力是推动业务增长的唯一途径,那么可扩展性就是白日梦。自动化减少了对人力的需求,并提供了 IT 演进所需的灵活性和韧性。”(详细内容请参考他的文章:[DevOps 团队对 CIO 的真正需求是什么][8]。)
|
||||
|
||||
## 5. 推广你的成果
|
||||
|
||||
在自动化策略刚启动时,你可能只能基于目标和预期收益来制定方案。但随着自动化策略不断推进,没有什么比现实中的实际成果更有说服力。
|
||||
|
||||
“眼见为实,”ADP 的首席技术官 Nagrath 说。“没有什么比实实在在的过往记录更能平息质疑。”
|
||||
|
||||
那意味着,不仅要达成你的目标,还要按时完成,这也是采用迭代式、循序渐进方法的另一个好处。
|
||||
|
||||
量化的结果,例如效率的提升或成本的节省,固然可以大声宣扬,但 Nagrath 建议 IT 领导者同行们在讲述自动化故事时不要仅止步于此。
|
||||
|
||||
为自动化构建方案也是一个定性的讨论:通过自动化,我们可以促进问题的预防、保障业务连续性、减少失误或错误,并让员工腾出精力去承担更有价值的任务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
|
||||
|
||||
作者:[Kevin Casey][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://enterprisersproject.com/user/kevin-casey
|
||||
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
|
||||
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
|
||||
[3]:https://www.adp.com/
|
||||
[4]:https://www.cyxtera.com/
|
||||
[5]:http://gaconnector.com/
|
||||
[6]:https://www.thinkahead.com/
|
||||
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
|
||||
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
|
||||
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
|
@ -0,0 +1,81 @@
|
||||
从专有到开源的十个简单步骤
|
||||
======
|
||||
|
||||
"开源软件的确不是很安全,因为每个人都能使用它,而且他们能够随意的进行编译并且用他们自己写的不好的东西进行替换。"举手示意:谁之前听说过这个?1
|
||||
|
||||
当我和客户交谈的时候(是的,他们有时候确实会让我去和客户交谈),这是我经常听到的一种说法。在前一篇文章《[许多人的评论并不一定能防止错误代码][1]》中,我谈到过开源软件(尤其是安全软件)并没有像某些人宣称的那样天然就比专有软件更安全,但如果两者选其一,我每次还是更青睐开源软件。而“开源软件不安全”这类说法表明,有时候仅仅解释开源是如何运作的还不够,我们还需要积极地为它辩护²。
|
||||
|
||||
我并不指望达到牛顿或维特根斯坦的逻辑水平,但我会尽我所能。我还会在结尾给出一个便利贴式的总结,如果你想快速浏览,可以直接跳到那里。
|
||||
|
||||
### 关键因素
|
||||
|
||||
首先,我们必须明白,没有任何一款软件是绝对安全的,无论是专有软件还是开源软件。第二,我们应该承认,确实存在一些很不错的专有软件。第三,也存在一些糟糕的专有软件。第四,有很多优秀、有天赋、专注的架构师、设计师和软件工程师在设计开发专有软件。
|
||||
|
||||
但接下来就有些不同了:第五点,能够接触某一款专有软件的人员是有限的,而且你不可能总是雇到最好的人才。即使是政府部门或公共组织,虽然拥有庞大的人才库,他们在安全应用领域能投入的人才也是有限的。
|
||||
|
||||
第六点,能够开发、测试、改进开源软件的人(相对来说)是无限的,而且其中包括最优秀的人才。第七点(也是我最喜欢的一点),这群人中还包括许多编写专有软件的人才。第八点,许多政府或公共组织也逐渐把他们开发的软件开源了。
|
||||
|
||||
第九点,如果你担心所运行的软件缺乏支持或者来源不明,好消息是:有一批组织会检查软件代码的来源,并提供支持、补丁和更新。它们会以类似专有软件的模式来运营开源软件,其中的惯常做法是对代码进行签名认证,以便你可以验证自己正在运行的开源软件不是来源不明的或恶意的软件。
|
||||
|
||||
第十点(也是这篇文章的重点),当你运行、测试、反馈问题、发现并报告缺陷的时候,你就是在为这笔共同财富贡献知识、专业技能和经验,这就是开源,它正因为你所做的这些而变得更好。无论你是以个人身份参与,还是通过提供支持的商业组织参与,你都已经成为这个共同体的一部分了。开源让软件变得越来越好,而且你可以亲眼看到这些变化:没有什么是隐藏封闭的,一切都是完全开放的。事情会变坏吗?会的,但是我们能够及时发现问题并且修复它。
|
||||
|
||||
这种共同财富(commonwealth)并不适用于专有软件:被藏起来的东西是无法照亮或丰富这个世界的。
|
||||
|
||||
我知道,作为一个英国人,在使用 commonwealth(联邦/共同财富)这个词的时候要小心谨慎;它带有帝国的联想,但这并不是我想表达的意思。它也不是克伦威尔使用这个词时的含义,无论如何,他都是一个有争议的历史人物。我想表达的是“共同(common)”与“财富(wealth)”的结合,这里的财富不是指金钱,而是指全人类都能共享的财富。
|
||||
|
||||
我是真心相信这一点的。如果你想从这篇文章中带走一条核心信息,那应该就是第十条:共同财富是我们的遗产、我们的经验、我们的知识和我们的责任。共同财富是全人类都能拥有的。我们共同拥有它,它是一笔无法估量的财富。
|
||||
|
||||
### 便利贴
|
||||
|
||||
1. (几乎)没有一款软件是完美无缺的。
|
||||
2. 有很好的专有软件。
|
||||
3. 有不好的专有软件。
|
||||
4. 有聪明、有才能、专注的人在开发专有软件。
|
||||
5. 从事开发完善专有软件的人是有限的,即使在政府或者公共组织也是如此。
|
||||
6. 相对来说从事开源软件的人是无限的。
|
||||
7. …而且包括很多从事专有软件的人才。
|
||||
8. 政府和公共组织经常把它们的软件开源。
|
||||
9. 有商业组织会为你使用的开源软件提供支持。
|
||||
10. 做出贡献,哪怕只是使用,也是在为开源软件做贡献。
|
||||
|
||||
|
||||
|
||||
1 OK--you can put your hands down now.
|
||||
|
||||
2 Should this be capitalized? Is there a particular field, or how does it work? I'm not sure.
|
||||
|
||||
3 I have a degree in English literature and theology--this probably won't surprise regular readers of my articles.4
|
||||
|
||||
4 Not, I hope, because I spout too much theology,5 but because it's often full of long-winded, irrelevant humanities (U.S. English: "liberal arts") references.
|
||||
|
||||
5 Emacs. Every time.
|
||||
|
||||
6 Not even Emacs. And yes, I know that there are techniques to prove the correctness of some software. (I suspect that Emacs doesn't pass many of them…)
|
||||
|
||||
7 Hand up here: I'm employed by one of them, [Red Hat][3]. Go have a look--it's a fun place to work, and [we're usually hiring][4].
|
||||
|
||||
8 Assuming that they fully abide by the rules of the open source licence(s) they're using, that is.
|
||||
|
||||
9 Erstwhile "Lord Protector of England, Scotland, and Ireland"--that Cromwell.
|
||||
|
||||
10 Oh, and choose Emacs over Vi variants, obviously.
|
||||
|
||||
This article originally appeared on [Alice, Eve, and Bob - a security blog][5] and is republished with permission.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/commonwealth-open-source
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
译者:[FelixYFZ](https://github.com/FelixYFZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://opensource.com/article/17/10/many-eyes
|
||||
[2]:https://en.wikipedia.org/wiki/Apologetics
|
||||
[3]:https://www.redhat.com/
|
||||
[4]:https://www.redhat.com/en/jobs
|
||||
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/
|
translated/tech/20171221 A Commandline Food Recipes Manager.md
@ -0,0 +1,141 @@
|
||||
HeRM’s - 一个命令行食谱管理器
|
||||
======
|
||||
![配图](https://www.ostechnix.com/wp-content/uploads/2017/12/herms-720x340.jpg)
|
||||
|
||||
烹饪让爱变得可见,不是吗?确实!烹饪也许是你的热情、爱好或者职业,我相信你会维护一份烹饪日记。坚持写烹饪日记是改善烹饪技艺的一种方法。记录食谱的方法有很多:你可以用小日记本做笔记,把食谱存在智能手机里,或者保存在计算机的文档中。今天,我要介绍的是 **HeRM's**,一个基于 Haskell 的命令行食谱管理器,可以为你的美食食谱做笔记。使用 HeRM's,你可以添加、查看、编辑和删除食谱,甚至可以制作购物清单,这些全部在你的终端里完成!它是一个使用 Haskell 语言编写的自由开源程序,源代码托管在 GitHub 上,因此你可以 fork 它、添加更多功能或改进它。
|
||||
|
||||
### HeRM's - 一个命令食谱管理器
|
||||
|
||||
#### **安装 HeRM's**
|
||||
|
||||
由于它是使用 Haskell 编写的,因此我们需要首先安装 Cabal。 Cabal 是一个用于下载和编译用 Haskell 语言编写的软件的命令行程序。Cabal 存在于大多数 Linux 发行版的核心软件库中,因此你可以使用发行版的默认软件包管理器来安装它。
|
||||
|
||||
例如,你可以使用以下命令在 Arch Linux 及其变体(如 Antergos、Manjaro Linux)中安装 cabal:
|
||||
```
|
||||
sudo pacman -S cabal-install
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu 上:
|
||||
```
|
||||
sudo apt-get install cabal-install
|
||||
```
|
||||
|
||||
安装 Cabal 后,确保你已经添加了 PATH。为此,请编辑你的 **~/.bashrc** :
|
||||
```
|
||||
vi ~/.bashrc
|
||||
```
|
||||
|
||||
添加下面这行:
|
||||
```
|
||||
PATH=$PATH:~/.cabal/bin
|
||||
```
|
||||
|
||||
按 **:wq** 保存并退出文件。然后,运行以下命令更新所做的更改。
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
安装 cabal 后,运行以下命令安装 herms:
|
||||
```
|
||||
cabal install herms
|
||||
```
|
||||
|
||||
喝一杯咖啡!这将需要一段时间。几分钟后,你会看到一个输出,如下所示。
|
||||
```
|
||||
[...]
|
||||
Linking dist/build/herms/herms ...
|
||||
Installing executable(s) in /home/sk/.cabal/bin
|
||||
Installed herms-1.8.1.2
|
||||
```
|
||||
|
||||
恭喜! Herms 已经安装完成。
|
||||
|
||||
#### **添加食谱**
|
||||
|
||||
让我们添加一个食谱,例如 **Dosa**。对于那些想知道的,Dosa 是一种受欢迎的南印度食物,配以 **sambar** 和**酸辣酱**。这是一种健康的,可以说是最美味的食物。它不含添加的糖或饱和脂肪。制作一个也很容易。有几种不同的 Dosas,在我们家中最常见的是 Plain Dosa。
|
||||
|
||||
要添加食谱,请输入:
|
||||
```
|
||||
herms add
|
||||
```
|
||||
|
||||
你会看到一个如下所示的屏幕。开始输入食谱的详细信息。
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
要变换字段,请使用以下键盘快捷键:
|
||||
|
||||
* **Tab / Shift+Tab** - 下一个/前一个字段
|
||||
* **Ctrl + <箭头键>** - 导航字段
|
||||
* **[Meta 或者 Alt] + <h-j-k-l>** - 导航字段
|
||||
* **Esc** - 保存或取消。
|
||||
|
||||
|
||||
|
||||
添加完配方的详细信息后,按下 ESC 键并点击 Y 保存。同样,你可以根据需要添加尽可能多的食谱。
|
||||
|
||||
要列出添加的食谱,输入:
|
||||
```
|
||||
herms list
|
||||
```
|
||||
|
||||
[![][1]][3]
|
||||
|
||||
要查看上面列出的任何食谱的详细信息,请使用下面的相应编号。
|
||||
```
|
||||
herms view 1
|
||||
```
|
||||
|
||||
[![][1]][4]
|
||||
|
||||
要编辑任何食谱,使用:
|
||||
```
|
||||
herms edit 1
|
||||
```
|
||||
|
||||
完成更改后,按下 ESC 键。系统会询问你是否要保存。你只需选择适当的选项。
|
||||
|
||||
[![][1]][5]
|
||||
|
||||
要删除食谱,命令是:
|
||||
```
|
||||
herms remove 1
|
||||
```
|
||||
|
||||
要为指定食谱生成购物清单,运行:
|
||||
```
|
||||
herms shopping 1
|
||||
```
|
||||
|
||||
[![][1]][6]
|
||||
|
||||
要获得帮助,运行:
|
||||
```
|
||||
herms -h
|
||||
```
|
||||
|
||||
当你下次听到你的同事、朋友或其他地方谈到好的食谱时,只需打开 Herms,并快速记下,并将它们分享给你的配偶。她会很高兴!
|
||||
|
||||
今天就是这些。还有更好的东西。敬请关注!
|
||||
|
||||
干杯!!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/herms-commandline-food-recipes-manager/
|
||||
|
||||
作者:[][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/Make-Dosa-1.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-1-1.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-2.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-3.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-4.png ()
|
@ -1,118 +0,0 @@
|
||||
两款 Linux 桌面端可用的科学计算器
|
||||
======
|
||||
|
||||
|
||||
|
||||
Translating by zyk2290
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
每个Linux 桌面环境都至少带有一个功能简单的桌面计算器,但大多数计算器只能进行一些简单的计算。
|
||||
|
||||
幸运的是,还是有例外的:有些计算器不仅能做开平方和三角函数之外的更多运算,而且依然简单易用。本文将介绍两款这样强大的计算器,外加一堆额外的好工具。
|
||||
|
||||
### SpeedCrunch
|
||||
|
||||
[SpeedCrunch][1] 是一款高精度科学计算器,拥有 Qt5 图形界面前端,并以键盘操作为主。
|
||||
|
||||
![SpeedCrunch graphical interface][3]
|
||||
|
||||
SpeedCrunch 在工作时
|
||||
|
||||
|
||||
|
||||
它支持带单位的运算,并且内置了各种各样的函数。
|
||||
|
||||
例如,
|
||||
|
||||
`2 * 10^6 newton / (meter^2)`
|
||||
|
||||
你可以得到
|
||||
|
||||
`= 2000000 pascal`
|
||||
|
||||
SpeedCrunch 默认会将结果转换为国际单位制(SI)单位,但你可以用 “in” 指令把结果转换成其他单位。
|
||||
|
||||
例如:
|
||||
|
||||
`3*10^8 meter / second in kilo meter / hour`
|
||||
|
||||
结果是:
|
||||
`= 1080000000 kilo meter / hour`
|
||||
|
||||
`F5` 键可以将所有结果转为科学计数法 (`1.08e9 kilo meter / hour`),`F2`键可以只将那些很大的数或很小的数转为科学计数法。更多选项可以在配置(Configuration)页面找到。
|
||||
|
||||
可用函数的列表非常惊人。它可以运行在 Linux、Windows 和 macOS 上。许可证是 GPLv2,你可以在 [Bitbucket][4] 上获取它的源码。
|
||||
|
||||
### Qalculate!
|
||||
|
||||
[Qalculate!][5](有感叹号)有一段长而复杂的历史。
|
||||
|
||||
这个项目提供了一个强大的库,其它程序也可以使用它(在 Plasma 桌面中,krunner 就用它来做计算),另外还有一个用 GTK3 搭建的图形界面前端。它允许你转换单位、处理物理常量、绘制图像、使用复数、矩阵以及向量、选择任意精度,等等。
|
||||
|
||||
|
||||
![Qalculate! Interface][7]
|
||||
|
||||
正在 Qalculate! 中查找物理常量
|
||||
|
||||
在单位的使用上,Qalculate! 比 SpeedCrunch 更加直观,而且能识别常用的单位前缀。你听说过 exapascal 吗?反正我没有(太阳的中心压强大约是 26 PPa),但 Qalculate! 能准确识别 1 EPa。同时,Qalculate! 对语法错误的处理也更灵活,所以你不需要担心括号的问题:如果没有歧义,Qalculate! 会直接给出正确答案。
|
||||
|
||||
这个项目曾一度看上去被遗弃了,但在 2016 年它又强势回归,一年里更新了 10 个版本。它的许可证是 GPLv2(源码在 [GitHub][8] 上),提供 Linux、Windows 和 macOS 的版本。
|
||||
|
||||
### Bonus calculators
|
||||
|
||||
#### 转换一切
|
||||
|
||||
好吧,这不是“计算器”,但这个程序非常好用
|
||||
|
||||
大部分单位转换器只是一大个基本单位列表以及一大堆基本组合,但[ConvertAll][9]与它们不一样。有试过把光年转换为英尺每秒吗?不管它们说不说得通,只要你想转换任何种类的单位,ConvertAll 就是你要的工具。
|
||||
|
||||
只需要在相应的输入框内输入转换前和转换后的单位:如果单位相容,你会直接得到答案。
|
||||
|
||||
主程序是在 PyQt5 上搭建的,但也有[JavaScript 的在线版本][10]。
|
||||
|
||||
#### (wx)Maxima with the units package
|
||||
|
||||
有时候(好吧,是很多时候)一款桌面计算器是不够用的,你需要更强大的计算能力。
|
||||
|
||||
[Maxima][11]是一款计算机代数系统(LCTT 译者注:进行符号运算的软件。这种系统的要件是数学表示式的符号运算),你可以用它计算导数、积分、方程、特征值和特征向量、泰勒级数、拉普拉斯变换与傅立叶变换,以及任意精度的数字计算、二维或三维图像··· ···列出这些都够我们写几页纸的了。
|
||||
|
||||
[wxMaxima][12] 是一个设计精湛的 Maxima 图形前端,它简化了许多 Maxima 选项的使用,而又不限制其功能。在 Maxima 的基础上,wxMaxima 还允许你创建“笔记本”(notebooks),你可以在上面写笔记、保存图像等。(wx)Maxima 最惊艳的功能之一是它可以处理量纲单位。
|
||||
|
||||
|
||||
|
||||
只需要输入 `load("unit")`,
|
||||
|
||||
再按 Shift+Enter,等几秒钟,然后你就可以开始使用了。
|
||||
|
||||
默认情况下,单位包使用基本的 MKS 单位,但如果你希望结果以 `N`(牛顿)而不是 `kg*m/s2` 为单位,只需要输入:`setunits(N)`。
|
||||
|
||||
Maxima 的帮助(也可以在 wxMaxima 的帮助菜单中找到)会给你更多信息。
|
||||
|
||||
你使用这些程序吗?你知道还有其它好的科学、工程用途的桌面计算器或者其它相关的计算器吗?在评论区里告诉我们吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/scientific-calculators-linux
|
||||
|
||||
作者:[Ricardo Berlasso][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rgb-es
|
||||
[1]:http://speedcrunch.org/index.html
|
||||
[2]:/file/382511
|
||||
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png "SpeedCrunch graphical interface"
|
||||
[4]:https://bitbucket.org/heldercorreia/speedcrunch
|
||||
[5]:https://qalculate.github.io/
|
||||
[6]:/file/382506
|
||||
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png "Qalculate! Interface"
|
||||
[8]:https://github.com/Qalculate
|
||||
[9]:http://convertall.bellz.org/
|
||||
[10]:http://convertall.bellz.org/js/
|
||||
[11]:http://maxima.sourceforge.net/
|
||||
[12]:https://andrejv.github.io/wxmaxima/
|
@ -0,0 +1,311 @@
|
||||
如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习
|
||||
============================================================
|
||||
|
||||
这个项目探索了如何将机器学习应用到物联网上。具体来说,物联网平台我们将使用 **Android Things**,而机器学习引擎我们将使用 **Google TensorFlow**。
|
||||
|
||||
![Machine Learning with Android Things](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/machine_learning_android_things.png)
|
||||
|
||||
现如今,机器学习是物联网领域最热门的主题之一。机器学习最简单的定义,可能就是[维基百科上的定义][13]:机器学习是计算机科学的一个领域,它让计算机无需显式编程就能利用数据去“学习”(即逐步提升在特定任务上的性能)。
|
||||
|
||||
换句话说,经过训练之后,哪怕没有针对某些结果进行专门编程,系统也能够预测这些结果。另一方面,我们都了解物联网和联网设备的概念。一个前景看好的方向,就是如何在物联网上应用机器学习,构建能够“学习”的专门系统,并利用这些知识去控制和管理物理对象。
|
||||
|
||||
机器学习与物联网的结合能在很多领域产生重要价值,以下仅列举几个有趣的方向:
|
||||
|
||||
* 在工业物联网(IIoT)中的预见性维护
|
||||
|
||||
* 消费物联网中,机器学习可以让设备更智能,它通过调整使设备更适应我们的习惯
|
||||
|
||||
在本教程中,我们希望去探索如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习。这个 Adnroid Things 物联网项目的基本想法是,探索如何去*构建一个能够识别前方道路上基本形状(比如箭头)的无人驾驶汽车*。我们已经介绍了 [如何使用 Android Things 去构建一个无人驾驶汽车][5],因此,在开始这个项目之前,我们建议你去阅读那个教程。
|
||||
|
||||
这个机器学习和物联网项目包含如下的主题:
|
||||
|
||||
* 如何使用 Docker 配置 TensorFlow 环境
|
||||
|
||||
* 如何训练 TensorFlow 系统
|
||||
|
||||
* 如何使用 Android Things 去集成 TensorFlow
|
||||
|
||||
* 如何使用 TensorFlow 的成果去控制无人驾驶汽车
|
||||
|
||||
这个项目起源于 [Android Things TensorFlow 图像分类器][6]。
|
||||
|
||||
我们开始吧!
|
||||
|
||||
### 如何使用 Tensorflow 图像识别
|
||||
|
||||
在开始之前,需要安装和配置 TensorFlow 环境。我不是机器学习方面的专家,因此,我需要快速找到并且准备去使用一些东西,因此,我们可以构建 TensorFlow 图像识别器。为此,我们使用 Docker 去运行一个 TensorFlow 镜像。以下是操作步骤:
|
||||
|
||||
1. 克隆 TensorFlow 仓库:
|
||||
```
|
||||
git clone https://github.com/tensorflow/tensorflow.git
|
||||
cd /tensorflow
|
||||
git checkout v1.5.0
|
||||
```
|
||||
|
||||
2. 创建一个目录(`/tf-data`),它将用于保存这个项目中使用的所有文件。
|
||||
|
||||
3. 运行 Docker:
|
||||
```
|
||||
docker run -it \
|
||||
--volume /tf-data:/tf-data \
|
||||
--volume /tensorflow:/tensorflow \
|
||||
--workdir /tensorflow tensorflow/tensorflow:1.5.0 bash
|
||||
```
|
||||
|
||||
使用这个命令,我们运行一个交互式 TensorFlow 环境,可以在使用项目期间挂载一些目录。
|
||||
|
||||
### 如何训练 TensorFlow 去识别图像
|
||||
|
||||
在 Android Things 系统能够识别图像之前,我们需要去训练 TensorFlow 引擎,以使它能够构建它的模型。为此,我们需要去收集一些图像。正如前面所言,我们需要使用箭头来控制 Android Things 无人驾驶汽车,因此,我们至少要收集四种类型的箭头:
|
||||
|
||||
* 向上的箭头
|
||||
|
||||
* 向下的箭头
|
||||
|
||||
* 向左的箭头
|
||||
|
||||
* 向右的箭头
|
||||
|
||||
为训练这个系统,需要使用这四类不同的图像去创建一个“知识库”。在 `/tf-data` 目录下创建一个名为 `images` 的目录,然后在它下面创建如下名字的四个子目录:
|
||||
|
||||
* up-arrow
|
||||
|
||||
* down-arrow
|
||||
|
||||
* left-arrow
|
||||
|
||||
* right-arrow
|
||||
|
||||
现在,我们去找图片。我使用的是 Google 图片搜索,你也可以使用其它的方法。为了简化图片下载过程,你可以安装一个 Chrome 下载插件,这样你只需要点击就可以下载选定的图片。别忘了多下载一些图片,这样训练效果更好,当然,这样创建模型的时间也会相应增加。
|
||||
|
||||
**扩展阅读**
|
||||
[如何使用 API 去集成 Android Things][2]
|
||||
[如何与 Firebase 一起使用 Android Things][3]
|
||||
|
||||
打开浏览器,开始去查找四种箭头的图片:
|
||||
|
||||
![TensorFlow image classifier](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/TensorFlow-image-classifier.png)
|
||||
[Save][7]
|
||||
|
||||
每个类别我下载了 80 张图片。不用管图片文件的扩展名。
|
||||
|
||||
所有类别的图片都下载完成后,在 Docker 界面下运行如下命令来训练系统:
|
||||
|
||||
```
|
||||
python /tensorflow/examples/image_retraining/retrain.py \
|
||||
--bottleneck_dir=tf_files/bottlenecks \
|
||||
--how_many_training_steps=4000 \
|
||||
--output_graph=/tf-data/retrained_graph.pb \
|
||||
--output_labels=/tf-data/retrained_labels.txt \
|
||||
--image_dir=/tf-data/images
|
||||
```
|
||||
|
||||
这个过程你需要耐心等待,它需要花费很长时间。结束之后,你将在 `/tf-data` 目录下发现如下的两个文件:
|
||||
|
||||
1. retrained_graph.pb
|
||||
|
||||
2. retrained_labels.txt
|
||||
|
||||
第一个文件包含了 TensorFlow 训练过程产生的结果模型,而第二个文件包含了我们的四个图片类相关的标签。
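在继续之前,可以用下面这段示意性的 Python 代码快速确认这两个文件能被正确加载(假设使用 TensorFlow 1.x 的 API,文件路径与上文一致):

```
# 示意代码:验证训练产物能够被正确加载(假设 TensorFlow 1.x)。
import tensorflow as tf

with tf.gfile.GFile('/tf-data/retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())   # 解析训练得到的模型

with open('/tf-data/retrained_labels.txt') as f:
    labels = [line.strip() for line in f]

print('模型节点数:', len(graph_def.node))
print('标签:', labels)   # 应该输出四个箭头类别
```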
|
||||
|
||||
### 如何测试 Tensorflow 模型
|
||||
|
||||
如果你想去测试这个模型,去验证它是否能按预期工作,你可以使用如下的命令:
|
||||
|
||||
```
|
||||
python -m scripts.label_image \
|
||||
--graph=/tf-data/retrained_graph.pb \
|
||||
--image=/tf-data/images/[category]/[image_name.jpg]
|
||||
```
|
||||
|
||||
### 优化模型
|
||||
|
||||
在 Android Things 项目中使用我们的 TensorFlow 模型之前,需要去优化它:
|
||||
|
||||
```
|
||||
python /tensorflow/python/tools/optimize_for_inference.py \
|
||||
--input=/tf-data/retrained_graph.pb \
|
||||
--output=/tf-data/opt_graph.pb \
|
||||
--input_names="Mul" \
|
||||
--output_names="final_result"
|
||||
```
|
||||
|
||||
那个就是我们全部的模型。我们将使用这个模型,把 TensorFlow 与 Android Things 集成到一起,在物联网或者更多任务上应用机器学习。目标是使用 Android Things 应用程序智能识别箭头图片,并反应到接下来的无人驾驶汽车的方向控制上。
|
||||
|
||||
如果你想去了解关于 TensorFlow 以及如何生成模型的更多细节,请查看官方文档以及这篇 [教程][8]。
|
||||
|
||||
### 如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习
|
||||
|
||||
TensorFlow 的数据模型准备就绪之后,我们继续下一步:如何将 Android Things 与 TensorFlow 集成到一起。为此,我们将这个任务分为两步来完成:
|
||||
|
||||
1. 硬件部分,我们将把电机和其它部件连接到 Android Things 开发板上
|
||||
|
||||
2. 实现这个应用程序
|
||||
|
||||
### Android Things 示意图
|
||||
|
||||
在深入到如何连接外围部件之前,先列出在这个 Android Things 项目中使用到的组件清单:
|
||||
|
||||
1. Android Things 开发板(树莓派 3)
|
||||
|
||||
2. 树莓派摄像头
|
||||
|
||||
3. 一个 LED 灯
|
||||
|
||||
4. LN298N 双 H 桥电机驱动模块(连接控制电机)
|
||||
|
||||
5. 一个带两个轮子的无人驾驶汽车底盘
|
||||
|
||||
我不再重复 [如何使用 Android Things 去控制电机][9] 了,因为在以前的文章中已经讲过了。
|
||||
|
||||
下面是示意图:
|
||||
|
||||
![Integrating Android Things with IoT](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/tensor_bb.png)
|
||||
[Save][10]
|
||||
|
||||
上图中没有展示摄像头。最终成果如下图:
|
||||
|
||||
![Integrating Android Things with TensorFlow](https://www.survivingwithandroid.com/wp-content/uploads/2018/03/android_things_with_tensorflow-min.jpg)
|
||||
[Save][11]
|
||||
|
||||
### 使用 TensorFlow 实现 Android Things 应用程序
|
||||
|
||||
最后一步是实现 Android Things 应用程序。为此,我们可以复用 Github 上名为 [TensorFlow 图片分类器示例][12] 的示例代码。开始之前,先克隆 Github 仓库,这样你就可以修改源代码。
|
||||
|
||||
这个 Android Things 应用程序与原始的应用程序是不一样的,因为:
|
||||
|
||||
1. 它不使用按钮去开启摄像头图像捕获
|
||||
|
||||
2. 它使用了不同的模型
|
||||
|
||||
3. 它使用一个闪烁的 LED 灯来提示,摄像头将在 LED 停止闪烁后拍照
|
||||
|
||||
4. 当 TensorFlow 检测到图像时(箭头)它将控制电机。此外,在第 3 步的循环开始之前,它将打开电机 5 秒钟。
|
||||
|
||||
为了让 LED 闪烁,使用如下的代码:
|
||||
|
||||
```
|
||||
private Handler blinkingHandler = new Handler();
|
||||
private Runnable blinkingLED = new Runnable() {
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
// If the motor is running the app does not start the cam
|
||||
if (mc.getStatus())
|
||||
return ;
|
||||
|
||||
Log.d(TAG, "Blinking..");
|
||||
mReadyLED.setValue(!mReadyLED.getValue());
|
||||
if (currentValue <= NUM_OF_TIMES) {
|
||||
currentValue++;
|
||||
blinkingHandler.postDelayed(blinkingLED,
|
||||
BLINKING_INTERVAL_MS);
|
||||
}
|
||||
else {
|
||||
mReadyLED.setValue(false);
|
||||
currentValue = 0;
|
||||
mBackgroundHandler.post(mBackgroundClickHandler);
|
||||
}
|
||||
} catch (IOException e) {
|
||||
e.printStackTrace();
|
||||
}
|
||||
}
|
||||
};
|
||||
```
|
||||
|
||||
当 LED 停止闪烁后,应用程序将捕获图片。
|
||||
|
||||
现在需要去关心如何根据检测到的图片去控制电机。修改这个方法:
|
||||
|
||||
```
|
||||
@Override
|
||||
public void onImageAvailable(ImageReader reader) {
|
||||
final Bitmap bitmap;
|
||||
try (Image image = reader.acquireNextImage()) {
|
||||
bitmap = mImagePreprocessor.preprocessImage(image);
|
||||
}
|
||||
|
||||
final List<Classifier.Recognition> results =
|
||||
mTensorFlowClassifier.doRecognize(bitmap);
|
||||
|
||||
Log.d(TAG,
|
||||
"Got the following results from Tensorflow: " + results);
|
||||
|
||||
// Check the result
|
||||
if (results == null || results.size() == 0) {
|
||||
Log.d(TAG, "No command..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
Classifier.Recognition rec = results.get(0);
|
||||
Float confidence = rec.getConfidence();
|
||||
Log.d(TAG, "Confidence " + confidence.floatValue());
|
||||
|
||||
if (confidence.floatValue() < 0.55) {
|
||||
Log.d(TAG, "Confidence too low..");
|
||||
blinkingHandler.post(blinkingLED);
|
||||
return ;
|
||||
}
|
||||
|
||||
String command = rec.getTitle();
|
||||
Log.d(TAG, "Command: " + rec.getTitle());
|
||||
|
||||
if (command.indexOf("down") != -1)
|
||||
mc.backward();
|
||||
else if (command.indexOf("up") != -1)
|
||||
mc.forward();
|
||||
else if (command.indexOf("left") != -1)
|
||||
mc.turnLeft();
|
||||
else if (command.indexOf("right") != -1)
|
||||
mc.turnRight();
|
||||
}
|
||||
```
|
||||
|
||||
在这个方法中,当 TensorFlow 返回捕获的图片匹配到的可能的标签之后,应用程序将比较这个结果与可能的方向,并因此来控制电机。
|
||||
|
||||
最后,就可以使用前面创建的模型了。把 `opt_graph.pb` 和 `retrained_labels.txt` 拷贝到 _assets_ 文件夹下,替换现有的文件。
|
||||
|
||||
打开 `Helper.java` 并修改如下的行:
|
||||
|
||||
```
|
||||
public static final int IMAGE_SIZE = 299;
|
||||
private static final int IMAGE_MEAN = 128;
|
||||
private static final float IMAGE_STD = 128;
|
||||
private static final String LABELS_FILE = "retrained_labels.txt";
|
||||
public static final String MODEL_FILE = "file:///android_asset/opt_graph.pb";
|
||||
public static final String INPUT_NAME = "Mul";
|
||||
public static final String OUTPUT_OPERATION = "output";
|
||||
public static final String OUTPUT_NAME = "final_result";
|
||||
```
|
||||
|
||||
运行这个应用程序,并给摄像头展示几种箭头,以检查它的反应。无人驾驶汽车将根据展示的箭头进行移动。
|
||||
|
||||
### 总结
|
||||
|
||||
教程到此结束,我们讲解了如何使用 Android Things 和 TensorFlow 在物联网上应用机器学习。我们使用图片去控制无人驾驶汽车的移动。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html
|
||||
|
||||
作者:[Francesco Azzola ][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[1]:https://www.survivingwithandroid.com/author/francesco-azzolagmail-com
|
||||
[2]:https://www.survivingwithandroid.com/2017/11/building-a-restful-api-interface-using-android-things.html
|
||||
[3]:https://www.survivingwithandroid.com/2017/10/synchronize-android-things-with-firebase-real-time-control-firebase-iot.html
|
||||
[4]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Machine%20Learning%20with%20Android%20Things
|
||||
[5]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[6]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[7]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=TensorFlow%20image%20classifier
|
||||
[8]:https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
|
||||
[9]:https://www.survivingwithandroid.com/2017/12/building-a-remote-controlled-car-using-android-things-gpio.html
|
||||
[10]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20IoT
|
||||
[11]:http://pinterest.com/pin/create/bookmarklet/?media=data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=&url=https://www.survivingwithandroid.com/2018/03/apply-machine-learning-iot-using-android-things-tensorflow.html&is_video=false&description=Integrating%20Android%20Things%20with%20TensorFlow
|
||||
[12]:https://github.com/androidthings/sample-tensorflow-imageclassifier
|
||||
[13]:https://en.wikipedia.org/wiki/Machine_learning
|
@ -0,0 +1,116 @@
|
||||
面向数据科学的 Anaconda Python 入门
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
像很多人一样,我一直努力加入到快速发展的数据科学领域。我上过 Udemy 的 [R][1] 及 [Python][2] 语言编程课,那时我分别下载并安装了应用程序。当我试图解决各种依赖关系,安装类似 [Numpy][3] 和 [Matplotlib][4] 这样的数据科学扩展包时,我了解了 [Anaconda Python 发行版][5]。
|
||||
|
||||
Anaconda 是一个完备、[开源][6]的数据科学包,拥有超过 600 万社区用户。[下载][7]和安装 Anaconda 都很容易,支持的操作系统包括 Linux, MacOS 及 Windows。
|
||||
|
||||
我感谢 Anaconda 降低了初学者的学习门槛。发行版自带 1000 多个数据科学包以及 [Conda][8] 包和虚拟环境管理器,让你无需单独学习每个库的安装方法。就像 Anaconda 官网上提到的,“Anaconda 库中的 Python 和 R 语言的conda 包是我们在安全环境中修订并编译得到的优化二进制程序,可以在你系统上工作”。
|
||||
|
||||
我推荐使用 [Anaconda Navigator][9],它是一个桌面 GUI (graphical user interface) 系统,包含发行版自带应用的链接,包括[RStudio][10], [iPython][11], [Jupyter Notebook][12], [JupyterLab][13], [Spyder][14], [Glue][15] 和 [Orange][16]。默认环境采用 Python 3.6,但你可以轻松安装 Python 3.5, Python 2.7 或 R。[文档][16]十分详尽,而且用户社区极好,可以提供额外的支持。
|
||||
|
||||
### 安装 Anaconda
|
||||
|
||||
为了在我的 Linux 笔记本(i3 CPU、4GB 内存)上安装 Anaconda,我下载了 Anaconda 5.1 Linux 版安装器,并运行 `md5sum` 进行文件校验:
|
||||
```
|
||||
$ md5sum Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
接着按照[安装文档][17]的说明,无论是否在 Bash shell 环境下,执行如下 shell 命令:
|
||||
```
|
||||
$ bash Anaconda3-5.1.0-Linux-x86_64.sh
|
||||
|
||||
```
|
||||
|
||||
我完全按照安装指南操作,运行这个精心编写的脚本,大约花费 5 分钟完成了安装。安装过程中会提示:“是否希望安装器将 Anaconda 的安装路径加入到你的 `/home/<user>/.bashrc` 中?”我选择了允许,并重启了 shell,这会让 `.bashrc` 中的环境变量生效。
|
||||
|
||||
安装完成后,我启动了 Anaconda Navigator,具体操作是在 shell 中执行如下命令:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
Anaconda Navigator 每次启动时会检查是否有可更新的软件包,如果有,会提醒你进行更新。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-update.png?itok=wMk78pGQ)
|
||||
|
||||
按照提醒进行更新即可,无需使用命令行。Anaconda 初次启动会有些慢,如果涉及更新会额外花费几分钟。
|
||||
|
||||
当然,你也可以通过执行如下命令手动更新:
|
||||
|
||||
```
|
||||
$ conda update anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
### 浏览和安装应用
|
||||
|
||||
Navigator 启动后,可以很容易地浏览 Anaconda 发行版自带的应用。按照文档所述,64 位 Python 3.6 版本的 Anaconda [支持 499 个软件包][18]。我浏览的第一个应用是 [Jupyter QtConsole][19],这个简单易用的 GUI 支持内联图形(inline figures)和语法高亮。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyterqtconsole.png?itok=fQQoErIO)
|
||||
|
||||
发行版中包含 Jupyter Notebook,故(不像我用的其它 Python 环境那样)无需在发行版外安装。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-jupyternotebook.png?itok=VqvbyOcI)
|
||||
|
||||
我习惯使用的 RStudio 并没有默认安装,但安装它也仅需点击一下鼠标。其它应用的启动或安装也仅需点击一下鼠标,包括 JupyterLab, Orange, Glue 和 Spyder 等。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-otherapps.png?itok=9QmSUdel)
|
||||
|
||||
Anaconda 发行版的一个强大功能是创建多套环境。假如我需要创建一套与默认 Python 3.6 不同的 Python 2.7 的环境,可以在 shell 中执行如下命令:
|
||||
```
|
||||
$ conda create -n py27 python=2.7 anaconda
|
||||
|
||||
```
|
||||
|
||||
Conda 会负责整个安装流程;要使用这个新环境,只需在 shell 中执行如下命令再次启动 Navigator:
|
||||
```
|
||||
$ anaconda-navigator
|
||||
|
||||
```
|
||||
|
||||
在 Anaconda GUI 的 "Applications on" 下拉菜单中选取 **py27** 即可。
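如果想确认自己确实运行在新环境中,可以在该环境的 Python 里做一个简单的检查(仅为示意):

```
# 示意代码:确认当前运行的 Python 版本
import sys
print(sys.version)   # 在 py27 环境中应显示 2.7.x,在默认环境中应显示 3.6.x
```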
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/anaconda-navigator.png?itok=2i5qYAyG)
|
||||
|
||||
### 更多内容
|
||||
|
||||
如果你想了解更多关于 Anaconda 的信息,可供参考的资源十分丰富。不妨从检索 [Anaconda 社区][20]及对应的[邮件列表][21]开始。
|
||||
|
||||
你是否在使用 Anaconda 发行版及 Navigator 呢?欢迎在评论中留下你的使用感想。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://www.r-project.org/
|
||||
[2]:https://www.python.org/
|
||||
[3]:http://www.numpy.org/
|
||||
[4]:https://matplotlib.org/
|
||||
[5]:https://www.anaconda.com/distribution/
|
||||
[6]:https://docs.anaconda.com/anaconda/eula
|
||||
[7]:https://www.anaconda.com/download/#linux
|
||||
[8]:https://conda.io/
|
||||
[9]:https://docs.anaconda.com/anaconda/navigator/
|
||||
[10]:https://www.rstudio.com/
|
||||
[11]:https://ipython.org/
|
||||
[12]:http://jupyter.org/
|
||||
[13]:https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906
|
||||
[14]:https://spyder-ide.github.io/
|
||||
[15]:http://glueviz.org/
|
||||
[16]:https://orange.biolab.si/
|
||||
[17]:https://docs.anaconda.com/anaconda/install/linux
|
||||
[18]:https://docs.anaconda.com/anaconda/packages/py3.6_linux-64
|
||||
[19]:http://qtconsole.readthedocs.io/en/stable/
|
||||
[20]:https://www.anaconda.com/community/
|
||||
[21]:https://groups.google.com/a/continuum.io/forum/#!forum/anaconda
|