mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-03-27 02:30:10 +08:00)

Merge remote-tracking branch 'LCTT/master' (commit 8527596cc2)

published/20171101 -dev-urandom- entropy explained.md (new file, 104 lines)
/dev/[u]random: entropy explained
======

### Entropy

When the topic of `/dev/random` and `/dev/urandom` comes up, you always hear this word: "entropy". Everyone seems to have their own metaphor for it. So why not me? I like to think of entropy as "random juice". It is the juice that random numbers need to become more random.

If you have ever generated an SSL certificate or a GPG key, you have probably seen something like this:

```
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
++++++++++..+++++.+++++++++++++++.++++++++++...+++++++++++++++...++++++
+++++++++++++++++++++++++++++.+++++..+++++.+++++.+++++++++++++++++++++++++>.
++++++++++>+++++...........................................................+++++
Not enough random bytes available. Please do some other work to give
the OS a chance to collect more entropy! (Need 290 more bytes)
```

By typing on the keyboard and moving the mouse, you help generate entropy, or random juice.

You might ask yourself... why do I need entropy? And why is it so important for random numbers to be truly random? Well, suppose our sources of entropy were limited to data from the keyboard, the mouse, and disk IO. But our system is a server, so I know there is no mouse or keyboard input. That means the only input is your disk IO, and if it is a single, barely used disk, you will have low entropy. That means your system's ability to be random is weak. In other words, I can play the probability game and drastically cut down the time it takes to crack your ssh keys or decrypt what you thought was an encrypted session.

OK, but that is hard to pull off in practice, right? No, actually, it is not. Take a look at this [Debian OpenSSH vulnerability][1]. That particular issue was caused by someone removing some of the code responsible for adding entropy. Rumor has it they removed it because it triggered valgrind warnings. In doing so, however, the random numbers became far less random. In fact, the entropy was reduced so much that brute forcing became a viable attack vector.

Hopefully by now we understand how important entropy is to security, whether or not you realize you are using it.

### /dev/random and /dev/urandom

`/dev/urandom` is a pseudorandom number generator, and it will **not** block when it runs low on entropy.

`/dev/random` is a true random number generator, and it blocks when it runs out of entropy.

Most of the time, if we are dealing with something practical that does not guard your most critical secrets, `/dev/urandom` is the right choice. Otherwise, if you use `/dev/random`, then when the system runs out of entropy your program gets interesting: whether it fails outright or just hangs until it gathers enough entropy depends on how you wrote it.
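When `/dev/urandom` is the right choice, reading from it takes one line of shell. A minimal sketch (the 16-byte length is an arbitrary choice for illustration):

```shell
# Read 16 bytes from the non-blocking pool and print them as lowercase hex.
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'
echo
```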

### Checking the entropy

So, how much entropy do you have?

```
[root@testbox test]# cat /proc/sys/kernel/random/poolsize
4096
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
2975
```

`/proc/sys/kernel/random/poolsize` is the size of the entropy pool, in bits; in other words, how much random juice we can store before the pump shuts off. `/proc/sys/kernel/random/entropy_avail` is the amount of random juice currently in the pool, also in bits.
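Combining the two files gives a quick picture of how full the pool is; a small sketch (assuming a Linux `/proc`):

```shell
# Report how much of the entropy pool is currently filled.
poolsize=$(cat /proc/sys/kernel/random/poolsize)
avail=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy: ${avail}/${poolsize} bits ($(( avail * 100 / poolsize ))% of the pool)"
```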

### How can we influence this number?

The number is depleted as we use it. The simplest example I can come up with is to cat `/dev/random` into `/dev/null`:

```
[root@testbox test]# cat /dev/random > /dev/null &
[1] 19058
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
0
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
1
```

The easiest way to raise it is to run [Haveged][2]. Haveged is a daemon that uses processor "flutter" to add entropy to the system entropy pool. Installation and basic setup are very straightforward:

```
[root@b08s02ur ~]# systemctl enable haveged
Created symlink from /etc/systemd/system/multi-user.target.wants/haveged.service to /usr/lib/systemd/system/haveged.service.
[root@b08s02ur ~]# systemctl start haveged
```

On a machine with relatively moderate traffic:

```
[root@testbox ~]# pv /dev/random > /dev/null
40 B 0:00:15 [ 0 B/s] [ <=> ]
52 B 0:00:23 [ 0 B/s] [ <=> ]
58 B 0:00:25 [5.92 B/s] [ <=> ]
64 B 0:00:30 [6.03 B/s] [ <=> ]
^C
[root@testbox ~]# systemctl start haveged
[root@testbox ~]# pv /dev/random > /dev/null
7.12MiB 0:00:05 [1.43MiB/s] [ <=> ]
15.7MiB 0:00:11 [1.44MiB/s] [ <=> ]
27.2MiB 0:00:19 [1.46MiB/s] [ <=> ]
43MiB 0:00:30 [1.47MiB/s] [ <=> ]
^C
```

Using `pv` we can see how much data we are pushing through the pipe. As you can see, before running `haveged` we were getting about 2.1 B/s (bytes per second). After starting `haveged` and adding processor flutter to our entropy pool, we get roughly 1.5 MiB/s.

--------------------------------------------------------------------------------

via: http://jhurani.com/linux/2017/11/01/entropy-explained.html

Author: [James J][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]:https://jblevins.org/log/ssh-vulnkey
[1]:http://jhurani.com/linux/2017/11/01/%22https://jblevins.org/log/ssh-vulnkey%22
[2]:http://www.issihosts.com/haveged/
How to kill the largest memory-consuming process in an unresponsive Linux system
======



As a blogger, I have bookmarked many blogs, websites, and forums to look for Linux and Unix related content. Sometimes I have so many tabs open in my browser that the operating system becomes unresponsive for several minutes. I cannot move the mouse, kill a process, or close any open tab. At that point I have no choice but to force-reboot the system. Of course I also use browser extensions such as **OneTab** (a Chrome extension that collapses open tabs into a list) and **The Great Suspender** (a Chrome extension that automatically freezes tabs), but they do not help much here. I keep running out of memory, and that is where **Early OOM** comes in. When things get bad, it kills the largest memory-consuming process in an unresponsive system. Early OOM checks available memory and free swap 10 times per second; once both drop below 10%, it kills the largest process.
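Early OOM's 10% rule can be approximated by hand from `/proc/meminfo`. A rough sketch (the field names are standard on Linux, but this only illustrates the threshold check, not earlyoom's actual code):

```shell
# Mirror earlyoom's check: how much memory is available, as a percentage?
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
pct=$(( avail * 100 / total ))
echo "available memory: ${pct}% of total"
if [ "$pct" -lt 10 ]; then
  echo "below the 10% threshold - this is when earlyoom would step in"
fi
```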

### Why Early OOM? Why not the built-in OOM killer?

Before going any further, let me give a short introduction to the OOM killer, the **O**ut **O**f **M**emory killer. The OOM killer is a process the kernel uses when available memory is very low. Its main task is to keep killing processes until enough memory is freed for the rest of the processes the kernel is running to work smoothly. The OOM killer finds the processes that are least important to the system and can free up the most memory, and kills them. Under the per-process `pid` directories in `/proc`, we can see each process's `oom_score`.

Example:

```
$ cat /proc/10299/oom_score
1
```

The higher a process's `oom_score`, the more likely it is to be killed by the OOM killer when the system runs out of memory.
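To see which processes the built-in OOM killer currently favors, you can sort every process by its `oom_score`; a small sketch:

```shell
# Print the five highest oom_score values and their /proc entries.
for d in /proc/[0-9]*; do
  score=$(cat "$d/oom_score" 2>/dev/null) || continue  # process may have exited
  printf '%s %s\n' "$score" "$d"
done | sort -rn | head -5
```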

The developer of Early OOM says it has one big advantage over the built-in OOM killer. As I said before, the OOM killer kills the process with the highest `oom_score`, which means the Chrome browser always ends up being the first process to die. To avoid this, Early OOM uses `/proc/*/status` instead of `echo f > /proc/sysrq-trigger` (a command that invokes the OOM killer). The developer also says that manually triggering the OOM killer may not work at all on the latest Linux kernels.

### Installing Early OOM

Early OOM is available in the AUR (Arch User Repository), so you can install it with any AUR helper on Arch and its derivatives.

Using [Pacaur][1]:

```
pacaur -S earlyoom
```

Using [Packer][2]:

```
packer -S earlyoom
```

Using [Yaourt][3]:

```
yaourt -S earlyoom
```

Enable and start the Early OOM daemon:

```
sudo systemctl enable earlyoom
sudo systemctl start earlyoom
```

On other Linux distributions, compile and install it as shown below:

```
git clone https://github.com/rfjakob/earlyoom.git
cd earlyoom
make
sudo make install
```

### Early OOM - kill the largest process in an unresponsive Linux system

Run the following command to start Early OOM:

```
earlyoom
```

If you installed it by compiling from source, run:

```
./earlyoom
```

Sample output:

```
earlyoom 0.12
mem total: 3863 MiB, min: 386 MiB (10 %)
mem avail: 1784 MiB (46 %), swap free: 2047 MiB (99 %)
[...]
```

As you can see in the output above, Early OOM shows how much memory and swap you have, and how much of each is available. Remember that it keeps running until you press `CTRL+C`.

If both available memory and free swap drop below 10%, Early OOM automatically kills the largest processes until the system has enough memory to run smoothly. You can also configure the minimum percentages to suit your needs.

To set the minimum available memory percentage, run:

```
earlyoom -m <PERCENT_HERE>
```

To set the minimum free swap percentage, run:

```
earlyoom -s <PERCENT_HERE>
```
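Both thresholds can also be combined in a single run, e.g. `earlyoom -m 5 -s 5`. When running it as a service, some distribution packages read the flags from a defaults file; the path and variable name below are an assumption and vary by distribution:

```
# /etc/default/earlyoom -- path and variable name are distro-dependent (assumption)
EARLYOOM_ARGS="-m 5 -s 5"
```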

For more details, see the help section:

```
$ earlyoom -h
earlyoom 0.12
Usage: earlyoom [OPTION]...
```

Thanks!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/kill-largest-process-unresponsive-linux-system/

Author: [Aditya Goturu][a]
Translator: [cizezsy](https://github.com/cizezsy)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
published/20180507 4 Firefox extensions to install now.md (new file, 81 lines)
4 Firefox extensions to install now
=====

> The right extensions can greatly enhance your browser's capabilities, but choose them carefully.



As I mentioned in my [original article][1] about Firefox extensions, the web browser has become a key component of many users' computing experience. Modern browsers have evolved into powerful, extensible platforms, and extensions can add or modify their functionality. Firefox extensions are built with the WebExtensions API, a cross-browser development system.

In the first article, I asked readers: "Which extensions should you install?" To reiterate, that decision mostly comes down to how you use your browser, your views on privacy, how much you trust extension developers, and other personal preferences. Since that article was published, one extension I recommended (Xmarks) has been discontinued. That article also received a lot of feedback, which has been taken into account in this update.

I want to point out again that browser extensions often require the ability to read and/or change everything on the web pages you visit. You should weigh this very carefully. If an extension has modify access to all the web pages you visit, it could act as a keylogger, intercept credit card information, track you online, insert ads, and perform various other malicious activities. That does not mean every extension secretly does these things, but before installing any extension you should carefully consider the installation source, the permissions involved, your risk profile, and other factors. Keep in mind that you can use profiles to manage how extensions affect your attack surface; for example, use a dedicated profile with no extensions for tasks such as online banking.

With that in mind, here are four open source Firefox extensions you may want to consider.

### uBlock Origin

![ublock origin ad blocker screenshot][2]

My first recommendation is unchanged. [uBlock Origin][3] is a fast, low-memory, wide-spectrum blocker that not only blocks ads but also lets you enforce your own content filtering. Its default behavior is to block ads, trackers, and malware sites using multiple predefined filter lists. It lets you add lists and rules as you wish, and can even be locked down into a default-deny mode. Despite its power, it has proven to be efficient and performant. It continues to be updated regularly and is one of the best options for this functionality.

### Privacy Badger

![privacy badger ad blocker][4]

My second recommendation is also unchanged. If anything, privacy has received even more attention since my previous article, which makes this extension an easy recommendation. As the name implies, [Privacy Badger][5] is a privacy-focused extension that blocks ads and other third-party trackers. It is a project of the Electronic Frontier Foundation (EFF), who say:

> "Privacy Badger was born out of our desire to be able to recommend a single extension that would automatically analyze and block any tracker or ad that violated the principle of user consent; which could function well without any settings, knowledge, or configuration by the user; which is produced by an organization that is unambiguously working for its users rather than for advertisers; and which uses algorithmic methods to decide what is and isn't tracking."

Why is Privacy Badger on this list when its functionality looks so similar to that of the previous extension? For a couple of reasons: first, it fundamentally works differently from uBlock Origin. Second, defense in depth is a sound practice. Speaking of defense in depth, the EFF also maintains the [HTTPS Everywhere][6] extension, which automatically ensures that https is used on many major websites. When you install Privacy Badger, you may want to consider HTTPS Everywhere as well.

If you are starting to think this article is just a rehash of the previous one, here is where my recommendations diverge.

### Bitwarden

![Bitwarden][7]

When I recommended LastPass in the previous article, I noted that it might be a controversial choice. That certainly proved true. Whether you should use a password manager at all, and if so, whether you should choose one with a browser plugin, is a hotly debated topic, and the answer depends heavily on your personal risk profile. I think most typical computer users should use one, because it is far better than the most common alternative: using the same weak password everywhere! I still believe that.

[Bitwarden][8] has really matured since I last looked at it. Like LastPass, it is user friendly, supports two-factor authentication, and is reasonably secure. Unlike LastPass, it is [open source][9]. It can be used with or without the browser plugin and supports importing from other solutions, including LastPass. Its core functionality is completely free, and there is a premium version for $10 per year.

### Vimium-FF

![Vimium][10]

[Vimium][11] is another open source extension that provides Vim-like keyboard navigation and control for Firefox, the "hacker's browser" as it calls itself. Defaults such as `<c-x>`, `<m-x>`, and `<a-x>`, corresponding to `Ctrl+x`, `Meta+x`, and `Alt+x` respectively, can easily be customized. Once you have installed Vimium, you can type `?` at any time to see the list of keyboard bindings. Note that if you prefer Emacs, there are extensions for those key bindings as well. Either way, I think keyboard shortcuts are an underutilized productivity booster.

### Bonus: Grammarly

Not everyone is lucky enough to write a column on Opensource.com, although you should seriously consider writing for the site; if you have questions, are interested, or would like a mentor, reach out and let's chat. But even with no column to write, correct grammar is beneficial in a wide variety of situations. Enter [Grammarly][12]. This extension is unfortunately not open source, but it does make sure everything you type is clear, effective, and mistake-free. It does this by scanning your text for common and complex grammatical mistakes, covering everything from subject-verb agreement to article use to modifier placement. Its basic functionality is free, with a premium version available for an additional monthly fee. I used it on this article, and it caught a number of mistakes my own proofreading missed.

Again, Grammarly is the only extension on this list that is not open source, so if you know of a similar high-quality open source alternative, please let us know in the comments.

These extensions are ones I have found useful and recommend to others. Please let me know what you think of the updated recommendations in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/firefox-extensions

Author: [Jeremy Garcia][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]:https://opensource.com/users/jeremy-garcia
[1]:https://opensource.com/article/18/1/top-5-firefox-extensions
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/ublock.png?itok=_QFEbDmq (ublock origin ad blocker screenshot)
[3]:https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/privacy_badger_1.0.1.png?itok=qZXQeKtc (privacy badger ad blocker screenshot)
[5]:https://www.eff.org/privacybadger
[6]:https://www.eff.org/https-everywhere
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/bitwarden.png?itok=gZPrCYoi (Bitwarden)
[8]:https://bitwarden.com/
[9]:https://github.com/bitwarden
[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/vimium.png?itok=QRESXjWG (Vimium)
[11]:https://addons.mozilla.org/en-US/firefox/addon/vimium-ff/
[12]:https://www.grammarly.com/
Is DevOps compatible with part-time community teams?
======



DevOps seems to be the talk of the IT world of late—and for good reason. DevOps has streamlined the process and production of IT development and operations. However, there is also an upfront cost to embracing a DevOps ideology, in terms of time, effort, knowledge, and financial investment. Larger companies may have the bandwidth, budget, and time to make the necessary changes, but is it feasible for part-time, resource-strapped communities?

Part-time communities are teams of like-minded people who take on projects outside of their normal work schedules. The members of these communities are driven by passion and a shared purpose. For instance, one such community is the [ALM | DevOps Rangers][1]. With 100 rangers engaged across the globe, a DevOps solution may seem daunting; nonetheless, they took on the challenge and embraced the ideology. Through their example, we've learned that DevOps is not only feasible but desirable in smaller teams. To read about their transformation, check out [How DevOps eliminates development bottlenecks][2].

> “DevOps is the union of people, process, and products to enable continuous delivery of value to our end customers.” - Donovan Brown

### The cost of DevOps

As stated above, there is an upfront "cost" to DevOps. The cost manifests itself in many forms, such as the time and collaboration between development, operations, and other stakeholders, planning a smooth-flowing process that delivers continuous value, finding the best DevOps products, and training the team in new technologies, to name a few. This aligns directly with Donovan's definition of DevOps, in fact—a **process** for delivering **continuous value** and the **people** who make that happen.

Streamlined DevOps takes a lot of planning and training just to create the process, and that doesn't even consider the testing phase. We also can't forget the existing in-flight projects that need to be converted into the new system. While the cost increases the more pervasive the transformation—for instance, if an organization aims to unify its entire development organization under a single process, then that would cost more versus transforming a single pilot or subset of the entire portfolio—these upfront costs must be addressed regardless of their scale. There are a lot of resources and products already out there that can be implemented for a smoother transition—but again, we face the time and effort that will be necessary just to research which ones might work best.

In the case of the ALM | DevOps Rangers, they had to halt all projects for a couple of sprints to set up the initial process. Many organizations would not be able to do that. Even part-time groups might have very good reasons to keep things moving, which only adds to the complexity. In such scenarios, additional cutover planning (and therefore additional cost) is needed, and the overall state of the community is one of flux and change, which adds risk, which—you guessed it—requires more cost to mitigate.

There is also an ongoing "cost" that teams will face with a DevOps mindset: Simple maintenance of the system, training and transitioning new team members, and keeping up with new, improved technologies are all a part of the process.

### DevOps for a part-time community

The answer to that is dependent on a few variables, such as the ability of the t

Luckily, we aren't without examples to demonstrate just how DevOps can benefit a smaller group. Let's take a quick look at the ALM Rangers again. The results from their transformation help us understand how DevOps changed their community:



As illustrated, there are some huge benefits for part-time community teams. Planning goes from long, arduous design sessions to a quick prototyping and storyboarding process. Builds become automated, reliable, and resilient. Testing and bug detection are proactive instead of reactive, which turns into a happier clientele. Multiple full-time program managers are replaced with self-managing teams with a single part-time manager to oversee projects. Teams become smaller and more efficient, which equates to higher production rates and higher-quality project delivery. With results like these, it's hard to argue against DevOps.

Still, the upfront and ongoing costs aren't right for every community. The numbe

Another important question to ask: How can a low-bandwidth group make such a massive transition? The good news is that a DevOps transformation doesn’t need to happen all at once. Taken in smaller, more manageable steps, organizations of any size can embrace DevOps.

1. Determine why DevOps may be the solution you need. Are your projects bottlenecking? Are they running over budget and over time? Of course, these concerns are common for any community, big or small. Answering these questions leads us to step two:
2. Develop the right framework to improve the engineering process. DevOps is all about automation, collaboration, and streamlining. Rather than trying to fit everyone into the same process box, the framework should support the work habits, preferences, and delivery needs of the community. Some broad standards should be established (for example, that all teams use a particular version control system). Beyond that, however, let the teams decide their own best process.
3. Use the current products that are already available if they meet your needs. Why reinvent the wheel?
4. Finally, implement and test the actual DevOps solution. This is, of course, where the actual value of DevOps is realized. There will likely be a few issues and some heartburn, but it will all be worth it in the end because, once established, the products of the community’s work will be nimbler and faster for the users.

### Reuse DevOps solutions

One benefit to creating effective CI/CD pipelines is the reusability of those pipelines. Although there is no one-size-fits-all solution, anyone can adopt a process. There are several pre-made templates available for you to examine, such as build templates on VSTS, ARM templates to deploy Azure resources, and "cookbook"-style textbooks from technical publishers. Once it identifies a process that works well, a community can also create its own template by defining and establishing standards and making that template easily discoverable by the entire community. For more information on DevOps journeys and tools, check out [this site][3].

### Summary

Overall, the success or failure of DevOps relies on the culture of a community. It doesn't matter if the community is a large, resource-rich enterprise or a small, resource-sparse, part-time group. DevOps will still bring solid benefits. The difference is in the approach for adoption and the scale of that adoption. There are both upfront and ongoing costs, but the value greatly outweighs those costs. Communities can use any of the powerful tools available today for their pipelines, and they can also leverage reusability, such as templates, to reduce upfront implementation costs. DevOps is most certainly feasible—and even critical—for the success of part-time community teams.

---
**[See our related story, [How DevOps eliminates development bottlenecks][4].]**

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/devops-compatible-part-time-community-teams

Author: [Edward Fry][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]:https://opensource.com/users/edwardf
[1]:https://github.com/ALM-Rangers
[2]:https://opensource.com/article/17/11/devops-rangers-transformation
[3]:https://www.visualstudio.com/devops/
[4]:https://opensource.com/article/17/11/devops-rangers-transformation
@ -1,111 +0,0 @@
|
||||
# How will the GDPR impact open source communities?
|
||||
|
||||

|
||||
|
||||
Image by :
|
||||
|
||||
opensource.com
|
||||
|
||||
|
||||
|
||||
On May 25, 2018 the [General Data Protection Regulation][1] will go into effect. This new regulation by the European Union will impact how organizations need to protect personal data on a global scale. This could include open source projects, including communities.
|
||||
|
||||
### GDPR details
|
||||
|
||||
The General Data Protection Regulation (GDPR) was approved by the EU Parliament on April 14, 2016, and will be enforced beginning May 25, 2018. The GDPR replaces the Data Protection Directive 95/46/EC that was designed "to harmonize data privacy laws across Europe, to protect and empower all EU citizens data privacy and to reshape the way organizations across the region approach data privacy."
|
||||
|
||||
The aim of the GDPR is to protect the personal data of individuals in the EU in an increasingly data-driven world.

### To whom does it apply?

One of the biggest changes that comes with the GDPR is an increased territorial scope. The GDPR applies to all organizations processing the personal data of data subjects residing in the European Union, regardless of the organization's location.

While most of the online articles covering the GDPR mention companies selling goods or services, we can also look at this territorial scope with open source projects in mind. There are a few variations, such as a software company (for-profit) running a community, and a non-profit organization, i.e. an open source software project and its community. Once these communities are run on a global scale, it is very likely that EU-based persons are taking part in them.

When such a global community has an online presence, using platforms such as a website, forum, issue tracker, etc., it is very likely that it is processing personal data of these EU persons, such as their names, e-mail addresses and possibly even more. These activities will trigger a need to comply with the GDPR.

### GDPR changes and their impact

The GDPR brings [many changes][2], strengthening data protection and privacy of EU persons compared to the previous Directive. Some of these changes have a direct impact on a community as described earlier. Let's look at some of these changes.

#### Consent

Let's assume that the community in question uses a forum for its members, and also has one or more forms on its website for registration purposes. Under the GDPR you will no longer be able to use one lengthy and illegible privacy policy and terms and conditions. For each specific purpose, such as registering on the forum or submitting one of those forms, you will need to obtain explicit consent. This consent must be "freely given, specific, informed, and unambiguous."

In case of such a form, you could have a checkbox, which should not be pre-checked, with clear text indicating for which purposes the personal data is used, preferably linking to an 'addendum' of your existing privacy policy and terms of use.

#### Right to access

EU persons get expanded rights by the GDPR. One of them is the right to ask an organization if, where and which personal data is processed. Upon request, they should also be provided with a copy of this data, free of charge, and in an electronic format if this data subject (e.g. EU citizen) asks for it.

#### Right to be forgotten

Another right EU citizens get through the GDPR is the "right to be forgotten," also known as data erasure. This means that, subject to certain limitations, the organization will have to erase their data, and possibly even stop any further processing, including by the organization's third parties.

The above three changes imply that your platform software will need to comply with certain aspects of the GDPR as well. It will need to have specific features such as obtaining and storing consent, extracting data and providing a copy in electronic format to a data subject, and finally the means to erase specific data about a data subject.
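
To make those access and erasure features concrete, here is a loose, hypothetical sketch (all names are invented, not taken from any real platform) of what a minimal member store offering them could look like:

```python
import json

class MemberStore:
    """Toy in-memory store illustrating GDPR-style access and erasure.

    This is only an illustration: a real platform would also need
    consent records, audit logs, and propagation to third parties.
    """

    def __init__(self):
        self._members = {}

    def record(self, member_id, **personal_data):
        # Store or update personal data for one member.
        self._members.setdefault(member_id, {}).update(personal_data)

    def export(self, member_id):
        # Right to access: a free copy of the data, in an electronic format.
        return json.dumps(self._members.get(member_id, {}), sort_keys=True)

    def erase(self, member_id):
        # Right to be forgotten: remove the subject's data entirely.
        return self._members.pop(member_id, None) is not None

store = MemberStore()
store.record("u1", name="Alice", email="alice@example.org")
print(store.export("u1"))   # {"email": "alice@example.org", "name": "Alice"}
print(store.erase("u1"))    # True
print(store.export("u1"))   # {}
```

The point is not the code itself but that each right maps to a feature your platform must actually implement.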

#### Breach notification

Under the GDPR, a data breach occurs whenever personal data is taken or stolen without the authorization of the data subject. Once discovered, you should notify your affected community members within 72 hours unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. This breach notification is mandatory under the GDPR.

#### Register

As an organization, you will become responsible for keeping a register which will include detailed descriptions of all procedures, purposes, etc., for which you process personal data. This register will act as proof of the organization's compliance with the GDPR's requirement to maintain a record of personal data processing activities, and will be used for audit purposes.

#### Fines

Organizations that do not comply with the GDPR risk fines up to 4% of annual global turnover or €20 million (whichever is greater). According to the GDPR, "this is the maximum fine that can be imposed for the most serious infringements e.g. not having sufficient customer consent to process data or violating the core of Privacy by Design concepts."

### Final words

My article should not be used as legal advice or a definitive guide to GDPR compliance. I have covered some of the parts of the regulation that could be of impact to an open source community, raising awareness about the GDPR and its impact. Obviously, the regulation contains much more which you will need to know about and possibly comply with.

As you can probably conclude yourself, you will have to take steps when you are running a global community to comply with the GDPR. If you already apply robust security standards in your community, such as ISO 27001, NIST or PCI DSS, you should have a head start.

You can find more information about the GDPR at the following sites/resources:

  * [GDPR Portal][3] (by the EU)
  * [Official Regulation (EU) 2016/679][4] (GDPR, including translations)
  * [What is GDPR? 8 things leaders should know][5] (The Enterprisers Project)
  * [How to avoid a GDPR compliance audit: Best practices][6] (The Enterprisers Project)

### About the author

Robin Muilwijk is an advisor on internet and e-government. He also serves as a community moderator for Opensource.com, an online publication by Red Hat, and as ambassador for The Open Organization. Robin is also Chair of the eZ Community Board, and Community Manager at [eZ Systems][8]. Robin writes and is active on social media to promote and advocate for open source in our businesses and lives. Follow him on Twitter... [more about Robin Muilwijk][9]

[More about me][10]

  * [Learn how you can contribute][11]

---

via: [https://opensource.com/article/18/4/gdpr-impact][12]

Author: [Robin Muilwijk][13] Topic selection: [@lujun9972][14] Translator: [译者ID][15] Proofreader: [校对者ID][16]

This article was originally compiled by [LCTT][17] and is proudly presented by [Linux中国][18].

[1]: https://www.eugdpr.org/eugdpr.org.html
[2]: https://www.eugdpr.org/key-changes.html
[3]: https://www.eugdpr.org/eugdpr.org.html
[4]: http://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1520531479111&uri=CELEX:32016R0679
[5]: https://enterprisersproject.com/article/2018/4/what-gdpr-8-things-leaders-should-know
[6]: https://enterprisersproject.com/article/2017/9/avoiding-gdpr-compliance-audit-best-practices
[7]: https://opensource.com/users/robinmuilwijk
[8]: http://ez.no
[9]: https://opensource.com/users/robinmuilwijk
[10]: https://opensource.com/users/robinmuilwijk
[11]: https://opensource.com/participate
[12]: https://opensource.com/article/18/4/gdpr-impact
[13]: https://opensource.com/users/robinmuilwijk
[14]: https://github.com/lujun9972
[15]: https://github.com/译者ID
[16]: https://github.com/校对者ID
[17]: https://github.com/LCTT/TranslateProject
[18]: https://linux.cn/
@ -0,0 +1,63 @@

What is a Linux server and why does your business need one?
======



IT organizations strive to deliver business value by increasing productivity and delivering services faster while remaining flexible enough to incorporate innovations like cloud, containers, and configuration automation. Modern workloads, whether they run on bare metal, virtual machines, containers, or private or public clouds, are expected to be portable and scalable. Supporting all this requires a modern, secure platform.

The most direct route to innovation is not always a straight line. With the growing adoption of private and public clouds, multiple architectures, and virtualization, today's data center is like a globe, with varying infrastructure choices bringing it dimension and depth. And just as a pilot depends on air traffic controllers to provide continuous updates, your digital transformation journey should be guided by a trusted operating system like Linux to provide continuously updated technology and the most efficient and secure access to innovations like cloud, containers, and configuration automation.

Linux is a family of free, open source software operating systems built around the Linux kernel. Originally developed for personal computers based on the Intel x86 architecture, Linux has since been ported to more platforms than any other operating system. Thanks to the dominance of the Linux kernel-based Android OS on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is also the leading operating system on servers and "big iron" systems such as mainframe computers, and it is the only OS used on [TOP500][1] supercomputers.

To tap this functionality, many enterprise companies have adopted servers with a high-powered variant of the Linux open source operating system. These are designed to handle the most demanding business application requirements, such as network and system administration, database management, and web services. Linux servers are often chosen over other server operating systems for their stability, security, and flexibility. Leading Linux server operating systems include [Debian][2], [Ubuntu Server][3], [CentOS][4], [Slackware][5], and [Gentoo][6].

What features and benefits of an enterprise-grade Linux server should you consider for an enterprise workload? First, built-in security controls and scale-out manageability through interfaces that are familiar to both Linux and Windows administrators will enable you to focus on business growth instead of reacting to security vulnerabilities and costly management configuration mistakes. The Linux server you choose should provide security technologies and certifications and maintain enhancements to combat intrusions, protect your data, and meet regulatory compliance for an open source project or a specific OS vendor. It should:

  * **Deliver resources with security** using integrated control features such as centralized identity management and [Security-Enhanced Linux][7] (SELinux), mandatory access controls (MAC) on a foundation that is [Common Criteria-][8] and [FIPS 140-2-certified][9], as well as the first Linux container framework support to be Common Criteria-certified.
  * **Automate regulatory compliance and security configuration remediation** across your system and within containers with image scanning like [OpenSCAP][10] that checks for and remediates vulnerabilities and configuration security baselines, including against [National Checklist Program][11] content for [PCI-DSS][12], [DISA STIG][13], and more. Additionally, it should centralize and scale out configuration remediation across your entire hybrid environment.
  * **Receive continuous vulnerability security updates** from the upstream community itself or a specific OS vendor, which remedies and delivers fixes for all critical issues by the next business day, if possible, to minimize business impact.

As the foundation of your hybrid data center, the Linux server should provide platform manageability and flexible integration with legacy management and automation infrastructure. This will save IT staff time and reduce unplanned downtime compared to a non-paid Linux infrastructure. It should:

  * **Speed image building, deployment, and patch management** across the data center with built-in capabilities for system life-cycle management, provisioning, and enhanced patching.
  * **Manage individual systems from an easy-to-use web interface** that includes storage, networking, containers, services, and more.
  * **Automate consistency and compliance** across heterogeneous environments and reduce scripting rework with system roles using native configuration management tools like [Ansible][14], [Chef][15], [Salt][16], [Puppet][17], and more.
  * **Simplify platform updates** with in-place upgrades that eliminate the hassle of machine migrations and application rebuilds.
  * **Resolve technical issues** before they impact business operations by using predictive analytics tools to automate identification and remediation of anomalies and their root causes.

Linux servers are powering innovation around the globe. As the platform for enterprise workloads, a Linux server should provide a stable, secure, and performance-driven foundation for the applications that run the business of today and tomorrow.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/what-linux-server

Author: [Daniel Oh][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/daniel-oh
[1]:https://en.wikipedia.org/wiki/TOP500
[2]:https://www.debian.org/
[3]:https://www.ubuntu.com/download/server
[4]:https://www.centos.org/
[5]:http://www.slackware.com/
[6]:https://www.gentoo.org/
[7]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
[8]:https://en.wikipedia.org/wiki/Common_Criteria
[9]:https://en.wikipedia.org/wiki/FIPS_140-2
[10]:https://www.open-scap.org/
[11]:https://www.nist.gov/programs-projects/national-checklist-program
[12]:https://www.pcisecuritystandards.org/pci_security/
[13]:https://iase.disa.mil/stigs/Pages/index.aspx
[14]:https://www.ansible.com/
[15]:https://www.chef.io/chef/
[16]:https://saltstack.com/salt-open-source/
[17]:https://puppet.com/
@ -1,3 +1,5 @@

Translating by MjSeven

The Best Linux Tools for Teachers and Students
======
Linux is a platform ready for everyone. If you have a niche, Linux is ready to meet or exceed the needs of said niche. One such niche is education. If you are a teacher or a student, Linux is ready to help you navigate the waters of nearly any level of the educational system. From study aids, to writing papers, to managing classes, to running an entire institution, Linux has you covered.
@ -1,3 +1,4 @@

Translating by zhouzhuowei
Getting started with Python for data science
======
102
sources/tech/20180525 A Bittorrent Filesystem Based On FUSE.md
Normal file
@ -0,0 +1,102 @@

A Bittorrent Filesystem Based On FUSE
======



Torrents have been around for a long time as a way to share and download data from the Internet. There are a plethora of GUI and CLI torrent clients available on the market. Sometimes, you just can not sit and wait for your download to complete. You might want to watch the content immediately. This is where **BTFS**, the bittorrent filesystem, comes in handy. Using BTFS, you can mount a torrent file or magnet link as a directory and then use it like any read-only directory in your file tree. The contents of the files will be downloaded on demand as they are read by applications. Since BTFS runs on top of FUSE, it does not require any changes to the Linux kernel.

## BTFS – A Bittorrent Filesystem Based On FUSE

### Installing BTFS

BTFS is available in the default repositories of most Linux distributions.

On Arch Linux and its variants, run the following command to install BTFS:
```
$ sudo pacman -S btfs
```

On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install btfs
```

On Gentoo:
```
# emerge -av btfs
```

BTFS can also be installed using the [**Linuxbrew**][1] package manager:
```
$ brew install btfs
```

### Usage

BTFS usage is fairly simple. All you have to do is find the .torrent file or magnet link and mount it in a directory. The contents of the torrent file or magnet link will be mounted inside the directory of your choice. When a program tries to access a file for reading, the actual data will be downloaded on demand. Furthermore, tools like **ls**, **cat** and **cp** work as expected for manipulating the torrents. Applications like **vlc** and **mplayer** can also work without changes. The players don't even know that the actual content is not physically present on the local disk and that the content is collected in parts from peers on demand.

Create a directory to mount the torrent/magnet link:
```
$ mkdir mnt
```

Mount the torrent/magnet link:
```
$ btfs video.torrent mnt
```

[![][2]][3]

`cd` into the directory:
```
$ cd mnt
```

And, start watching!
```
$ vlc <path-to-video.mp4>
```

Give BTFS a few moments to find and contact the tracker. Once the real data is loaded, BTFS won't require the tracker anymore.

![][4]

To unmount the BTFS filesystem, simply run the following command:
```
$ fusermount -u mnt
```

Now, the contents in the mounted directory will be gone. To access the contents again, you need to mount the torrent as described above.

The BTFS application will turn your VLC or Mplayer into Popcorn Time. Mount your favorite TV show or movie torrent file or magnet link and start watching without having to download the entire contents of the torrent or wait for your download to complete. The contents of the torrent or magnet link will be downloaded on demand when accessed by the applications.
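
The mount-then-unmount workflow above is easy to script. As a small, hypothetical sketch (assuming `btfs` and `fusermount` are installed and on your `PATH`), a Python context manager can guarantee the unmount step always runs:

```python
import subprocess
from contextlib import contextmanager

def btfs_mount_cmd(torrent, mountpoint):
    # Mirrors `btfs video.torrent mnt` from the steps above.
    return ["btfs", torrent, mountpoint]

def btfs_umount_cmd(mountpoint):
    # Mirrors `fusermount -u mnt` from the steps above.
    return ["fusermount", "-u", mountpoint]

@contextmanager
def btfs_mounted(torrent, mountpoint):
    """Mount a torrent, yield the mountpoint, and always unmount afterwards."""
    subprocess.run(btfs_mount_cmd(torrent, mountpoint), check=True)
    try:
        yield mountpoint
    finally:
        subprocess.run(btfs_umount_cmd(mountpoint), check=True)

print(" ".join(btfs_mount_cmd("video.torrent", "mnt")))  # btfs video.torrent mnt
```

Inside a `with btfs_mounted("video.torrent", "mnt") as mnt:` block you could then launch your player on files under `mnt`, and the filesystem is unmounted even if the player crashes.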

And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/btfs-a-bittorrent-filesystem-based-on-fuse/

Author: [SK][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/btfs.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/05/btfs-1.png
@ -0,0 +1,89 @@

How to Set Different Wallpaper for Each Monitor in Linux
======
**Brief: If you want to display different wallpapers on multiple monitors on Ubuntu 18.04 or any other Linux distribution with the GNOME, MATE or Budgie desktop environment, this nifty tool will help you achieve it.**

A multi-monitor setup often leads to multiple issues on Linux, but I am not going to discuss those issues in this article. Instead, this is a positive article about multi-monitor support on Linux.

If you are using multiple monitors, perhaps you would like to set up a different wallpaper for each monitor. I am not sure about other Linux distributions and desktop environments, but Ubuntu with [GNOME desktop][1] doesn't provide this functionality on its own.

Fret not! In this quick tutorial, I'll show you how to set a different wallpaper for each monitor on Linux distributions with the GNOME desktop environment.

### Setting up different wallpapers for each monitor on Ubuntu 18.04 and other Linux distributions

![Different wallpaper on each monitor in Ubuntu][2]

I am going to use a nifty tool called [HydraPaper][3] for setting different backgrounds on different monitors. HydraPaper is a [GTK][4]-based application to set a different background for each monitor in the [GNOME desktop environment][5].

It also supports the [MATE][6] and [Budgie][7] desktop environments, which means Ubuntu MATE and [Ubuntu Budgie][8] users can also benefit from this application.

#### Install HydraPaper on Linux using FlatPak

HydraPaper can be installed easily using [FlatPak][9]. Ubuntu 18.04 already provides support for FlatPaks, so all you need to do is download the application file and double-click on it to open it with the GNOME Software Center.

You can refer to this article to learn [how to enable FlatPak support][10] on your distribution. Once you have FlatPak support enabled, just download it from [FlatHub][11] and install it.

[Download HydraPaper][12]

#### Using HydraPaper for setting different backgrounds on different monitors

Once installed, just look for HydraPaper in the application menu and start the application. You'll see images from your Pictures folder here, because by default the application takes images from the user's Pictures folder.

You can add your own folder(s) where you keep your wallpapers. Do note that it doesn't find images recursively. If you have nested folders, it will only show images from the top folder.

![Setting up different wallpaper for each monitor on Linux][13]

Using HydraPaper is absolutely simple. Just select the wallpapers for each monitor and click on the apply button at the top. You can easily identify external monitors, as they are labeled HDMI.

![Setting up different wallpaper for each monitor on Linux][14]

You can also add selected wallpapers to 'Favorites' for quick access. Doing this will move the 'favorite wallpapers' from the Wallpapers tab to the Favorites tab.

![Setting up different wallpaper for each monitor on Linux][15]

You don't need to start HydraPaper at each boot. Once you set different wallpapers for different monitors, the settings are saved and you'll see your chosen wallpapers even after a restart. This is the expected behavior, of course, but I thought I would mention the obvious.

One big downside of HydraPaper lies in the way it is designed to work. You see, HydraPaper combines your selected wallpapers into one single image and stretches it across the screens, giving the impression of a different background on each display. And this becomes an issue when you remove the external display.
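
HydraPaper's actual code isn't shown here, but the spanning idea it uses boils down to simple offset arithmetic: paste one image per monitor side by side onto a single canvas. A rough illustrative sketch:

```python
def spanned_layout(monitors):
    """Place one wallpaper per monitor side by side on one canvas.

    `monitors` is a list of (width, height) tuples. Returns the canvas
    size and the x-offset where each monitor's image would be pasted.
    This is only an illustration of the spanning approach, not
    HydraPaper's real implementation.
    """
    canvas_w = sum(w for w, _ in monitors)
    canvas_h = max(h for _, h in monitors)
    offsets, x = [], 0
    for w, _ in monitors:
        offsets.append(x)
        x += w
    return (canvas_w, canvas_h), offsets

# A laptop panel plus an external 1080p display:
size, offsets = spanned_layout([(1366, 768), (1920, 1080)])
print(size)     # (3286, 1080)
print(offsets)  # [0, 1366]
```

Once the external display is gone, the lone remaining screen is still handed that one wide combined image, which is exactly the problem described next.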

For example, when I tried using my laptop without the external display, it showed me a background image like this.

![Dual Monitor wallpaper HydraPaper][16]

Quite obviously, this is not what I would expect.

#### Did you like it?

HydraPaper makes setting up different backgrounds on different monitors a painless task. It supports more than two monitors and monitors with different orientations. Its simple interface with only the required features makes it an ideal application for those who always use dual monitors.

How do you set different wallpapers for different monitors on Linux? Do you think HydraPaper is an application worth installing?

Do share your views, and if you find this article useful, please share it on various social media channels such as Twitter and [Reddit][17].

--------------------------------------------------------------------------------

via: https://itsfoss.com/wallpaper-multi-monitor/

Author: [Abhishek Prakash][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://itsfoss.com/author/abhishek/
[1]:https://www.gnome.org/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/multi-monitor-wallpaper-setup-800x450.jpeg
[3]:https://github.com/GabMus/HydraPaper
[4]:https://www.gtk.org/
[5]:https://itsfoss.com/gnome-tricks-ubuntu/
[6]:https://mate-desktop.org/
[7]:https://budgie-desktop.org/home/
[8]:https://itsfoss.com/ubuntu-budgie-18-review/
[9]:https://flatpak.org
[10]:https://flatpak.org/setup/
[11]:https://flathub.org
[12]:https://flathub.org/apps/details/org.gabmus.hydrapaper
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-2-800x631.jpeg
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-1.jpeg
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-3.jpeg
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/hydra-paper-dual-monitor-800x450.jpeg
[17]:https://www.reddit.com/r/LinuxUsersGroup/
@ -0,0 +1,114 @@

How to clean up your data in the command line
======


I work part-time as a data auditor. Think of me as a proofreader who works with tables of data rather than pages of prose. The tables are exported from relational databases and are usually fairly modest in size: 100,000 to 1,000,000 records and 50 to 200 fields.

I haven't seen an error-free data table, ever. The messiness isn't limited, as you might think, to duplicate records, spelling and formatting errors, and data items placed in the wrong field. I also find:

  * broken records spread over several lines because data items had embedded line breaks
  * data items in one field disagreeing with data items in another field, in the same record
  * records with truncated data items, often because very long strings were shoehorned into fields with 50- or 100-character limits
  * character encoding failures producing the gibberish known as [mojibake][1]
  * invisible [control characters][2], some of which can cause data processing errors
  * [replacement characters][3] and mysterious question marks inserted by the last program that failed to understand the data's character encoding
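
The last three of these can be surfaced mechanically. As a minimal sketch of what such a check amounts to (the shell tools discussed below can do the same with a `grep` one-liner), here is a small scanner for control characters and the U+FFFD replacement character:

```python
import unicodedata

def find_suspect_chars(line):
    """Return (position, codepoint) pairs for control characters
    (except tab) and U+FFFD in one line of text.
    Strip the trailing newline first, or it will be flagged too."""
    suspects = []
    for i, ch in enumerate(line):
        if ch == "\ufffd" or (unicodedata.category(ch) == "Cc" and ch != "\t"):
            suspects.append((i, hex(ord(ch))))
    return suspects

print(find_suspect_chars("name\tvalue"))         # []
print(find_suspect_chars("bad\x07x\ufffd"))      # [(3, '0x7'), (5, '0xfffd')]
```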

Cleaning up these problems isn't hard, but there are non-technical obstacles to finding them. The first is everyone's natural reluctance to deal with data errors. Before I see a table, the data owners or managers may well have gone through all five stages of Data Grief:

  1. There are no errors in our data.
  2. Well, maybe there are a few errors, but they're not that important.
  3. OK, there are a lot of errors; we'll get our in-house people to deal with them.
  4. We've started fixing a few of the errors, but it's time-consuming; we'll do it when we migrate to the new database software.
  5. We didn't have time to clean the data when moving to the new database; we could use some help.

The second progress-blocking attitude is the belief that data cleaning requires dedicated applications—either expensive proprietary programs or the excellent open source program [OpenRefine][4]. To deal with problems that dedicated applications can't solve, data managers might ask a programmer for help—someone good with [Python][5] or [R][6].

But data auditing and cleaning generally don't require dedicated applications. Plain-text data tables have been around for many decades, and so have text-processing tools. Open up a Bash shell and you have a toolbox loaded with powerful text processors like `grep`, `cut`, `paste`, `sort`, `uniq`, `tr`, and `awk`. They're fast, reliable, and easy to use.

I do all my data auditing on the command line, and I've put many of my data-auditing tricks on a ["cookbook" website][7]. Operations I do regularly get stored as functions and shell scripts (see the example below).

Yes, a command-line approach requires that the data to be audited have been exported from the database. And yes, the audit results need to be edited later within the database, or (database permitting) the cleaned data items need to be imported as replacements for the messy ones.

But the advantages are remarkable. `awk` will process a few million records in seconds on a consumer-grade desktop or laptop. Uncomplicated regular expressions will find all the data errors you can imagine. And all of this will happen safely outside the database structure: Command-line auditing cannot affect the database, because it works with data liberated from its database prison.

Readers who trained on Unix will be smiling smugly at this point. They remember manipulating data on the command line many years ago in just these ways. What's happened since then is that processing power and RAM have increased spectacularly, and the standard command-line tools have been made substantially more efficient. Data auditing has never been faster or easier. And now that Microsoft Windows 10 can run Bash and GNU/Linux programs, Windows users can appreciate the Unix and Linux motto for dealing with messy data: Keep calm and open a terminal.

![Tshirt, Keep Calm and Open A Terminal][9]

Photo by Robert Mesibov, CC BY

### An example

Suppose I want to find the longest data item in a particular field of a big table. That's not really a data auditing task, but it will show how shell tools work. For demonstration purposes, I'll use the tab-separated table `full0`, which has 1,122,023 records (plus a header line) and 49 fields, and I'll look in field number 36. (I get field numbers with a function explained [on my cookbook site][10].)

The command begins by using `tail` to remove the header line from `full0`. The result is piped to `cut`, which extracts the decapitated field 36. Next in the pipeline is `awk`. Here the variable `big` is initialized to a value of 0; then `awk` tests the length of the data item in the first record. If that length is bigger than `big`, `awk` resets `big` to the new length, stores the line number (`NR`) in the variable `line`, and stores the whole data item in the variable `text`. `awk` then processes each of the remaining 1,122,022 records in turn, resetting the three variables when it finds a longer data item. Finally, it prints out a neatly separated list of line number, length of data item, and full text of the longest data item. (In the following code, the command has been broken up for clarity onto several lines.)
```
tail -n +2 full0 \
  | cut -f36 \
  | awk 'BEGIN {big=0} length($0)>big \
      {big=length($0); line=NR; text=$0} \
      END {print "\nline: "line"\nlength: "big"\ntext: "text}'
```

How long does this take? About 4 seconds on my desktop (core i5, 8GB RAM):

![][20180529a]

Now for the neat part: I can pop that long command into a shell function, `longest`, which takes as its arguments the filename (`$1`) and the field number (`$2`):

![][20180529b]

I can then re-run the command as a function, finding longest data items in other fields and in other files without needing to remember how the command is written:

![][20180529c]

As a final tweak, I can add to the output the name of the numbered field I'm searching. To do this, I use `head` to extract the header line of the table, pipe that line to `tr` to convert tabs to newlines, and pipe the resulting list to `tail` and `head` to print the `$2`-th field name on the list, where `$2` is the field number argument. The field name is stored in the shell variable `field` and passed to `awk` for printing as the internal `awk` variable `fld`.
```
longest() { field=$(head -n 1 "$1" | tr '\t' '\n' | tail -n +"$2" | head -n 1); \
  tail -n +2 "$1" \
  | cut -f"$2" \
  | awk -v fld="$field" 'BEGIN {big=0} length($0)>big \
      {big=length($0); line=NR; text=$0} \
      END {print "\nfield: "fld"\nline: "line"\nlength: "big"\ntext: "text}'; }
```

![][20180529d]

Note that if I'm looking for the longest data item in a number of different fields, all I have to do is press the Up Arrow key to get the last `longest` command, then backspace the field number and enter a new one.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/command-line-data-auditing

Author: [Bob Mesibov][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/bobmesibov
[1]:https://en.wikipedia.org/wiki/Mojibake
[2]:https://en.wikipedia.org/wiki/Control_character
[3]:https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character
[4]:http://openrefine.org/
[5]:https://www.python.org/
[6]:https://www.r-project.org/about.html
[7]:https://www.polydesmida.info/cookbook/index.html
[9]:https://opensource.com/sites/default/files/uploads/terminal_tshirt.jpg (Tshirt, Keep Calm and Open A Terminal)
[10]:https://www.polydesmida.info/cookbook/functions.html#fields
307
sources/tech/20180528 What is behavior-driven Python.md
Normal file
@ -0,0 +1,307 @@
What is behavior-driven Python?
======
Have you heard about [behavior-driven development][1] (BDD) and wondered what all the buzz is about? Maybe you've caught team members talking in "gherkin" and felt left out of the conversation. Or perhaps you're a Pythonista looking for a better way to test your code. Whatever the circumstance, learning about BDD can help you and your team achieve better collaboration and test automation, and Python's `behave` framework is a great place to start.
### What is BDD?

In software, a behavior is how a feature operates within a well-defined scenario of inputs, actions, and outcomes. Products can exhibit countless behaviors, such as:

  * Submitting forms on a website
  * Searching for desired results
  * Saving a document
  * Making REST API calls
  * Running command-line interface commands

Defining a product's features based on its behaviors makes it easier to describe them, develop them, and test them. This is the heart of BDD: making behaviors the focal point of software development. Behaviors are defined early in development using a [specification by example][2] language. One of the most common behavior spec languages is [Gherkin][3], the Given-When-Then scenario format from the [Cucumber][4] project. Behavior specs are basically plain-language descriptions of how a behavior works, with a little bit of formal structure for consistency and focus. Test frameworks can easily automate these behavior specs by "gluing" step texts to code implementations.
Below is an example of a behavior spec written in Gherkin:
```
Scenario: Basic DuckDuckGo Search
  Given the DuckDuckGo home page is displayed
  When the user searches for "panda"
  Then results are shown for "panda"
```
At a quick glance, the behavior is intuitive to understand. Except for a few keywords, the language is freeform. The scenario is concise yet meaningful. A real-world example illustrates the behavior. Steps declaratively indicate what should happen—without getting bogged down in the details of how.

The [main benefits of BDD][5] are good collaboration and automation. Everyone can contribute to behavior development, not just programmers. Expected behaviors are defined and understood from the beginning of the process. Tests can be automated together with the features they cover. Each test covers a singular, unique behavior in order to avoid duplication. And, finally, existing steps can be reused by new behavior specs, creating a snowball effect.

### Python's behave framework

`behave` is one of the most popular BDD frameworks in Python. It is very similar to other Gherkin-based Cucumber frameworks despite not holding the official Cucumber designation. `behave` has two primary layers:

  1. Behavior specs written in Gherkin `.feature` files
  2. Step definitions and hooks written in Python modules that implement Gherkin steps

As shown in the example above, Gherkin scenarios use a three-part format:

  1. Given some initial state
  2. When an action is taken
  3. Then verify the outcome

Each step is "glued" by a decorator to a Python function when `behave` runs tests.
### Installation
As a prerequisite, make sure you have Python and `pip` installed on your machine. I strongly recommend using Python 3. (I also recommend using [`pipenv`][6], but the following example commands use the more basic `pip`.)

Only one package is required for `behave`:

```
pip install behave
```

Other packages may also be useful, such as:

```
pip install requests   # for REST API calls
pip install selenium   # for Web browser interactions
```

The [behavior-driven-Python][7] project on GitHub contains the examples used in this article.
### Gherkin features
The Gherkin syntax that `behave` uses is practically compliant with the official Cucumber Gherkin standard. A `.feature` file has Feature sections, which in turn have Scenario sections with Given-When-Then steps. Below is an example:

```
Feature: Cucumber Basket
  As a gardener,
  I want to carry many cucumbers in a basket,
  So that I don’t drop them all.

  @cucumber-basket
  Scenario: Add and remove cucumbers
    Given the basket is empty
    When "4" cucumbers are added to the basket
    And "6" more cucumbers are added to the basket
    But "3" cucumbers are removed from the basket
    Then the basket contains "7" cucumbers
```
There are a few important things to note here:

  * Both the Feature and Scenario sections have [short, descriptive titles][8].
  * The lines immediately following the Feature title are comments ignored by `behave`. It is a good practice to put the user story there.
  * Scenarios and Features can have tags (notice the `@cucumber-basket` mark) for hooks and filtering (explained below).
  * Steps follow a [strict Given-When-Then order][9].
  * Additional steps can be added for any type using `And` and `But`.
  * Steps can be parametrized with inputs—notice the values in double quotes.
Scenarios can also be written as templates with multiple input combinations by using a Scenario Outline:
```
Feature: Cucumber Basket

  @cucumber-basket
  Scenario Outline: Add cucumbers
    Given the basket has "<initial>" cucumbers
    When "<more>" cucumbers are added to the basket
    Then the basket contains "<total>" cucumbers

    Examples: Cucumber Counts
      | initial | more | total |
      | 0       | 1    | 1     |
      | 1       | 2    | 3     |
      | 5       | 4    | 9     |
```
Scenario Outlines always have an Examples table, in which the first row gives column titles and each subsequent row gives an input combo. The row values are substituted wherever a column title appears in a step surrounded by angle brackets. In the example above, the scenario will be run three times because there are three rows of input combos. Scenario Outlines are a great way to avoid duplicate scenarios.
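Conceptually, each Examples row is substituted into the step templates before the scenario runs. A rough Python sketch of that substitution (my own illustration, not `behave`'s actual implementation):

```python
def fill_outline(template, row):
    # Replace each <column> placeholder in a step template with the row's value.
    for col, val in row.items():
        template = template.replace('<' + col + '>', val)
    return template

rows = [{'initial': '0', 'more': '1', 'total': '1'},
        {'initial': '1', 'more': '2', 'total': '3'},
        {'initial': '5', 'more': '4', 'total': '9'}]
# One concrete scenario is generated per Examples row:
steps = [fill_outline('Given the basket has "<initial>" cucumbers', r) for r in rows]
```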
There are other elements of the Gherkin language, but these are the main mechanics. To learn more, read the Automation Panda articles [Gherkin by Example][10] and [Writing Good Gherkin][11].

### Python mechanics

Every Gherkin step must be "glued" to a step definition, a Python function that provides the implementation. Each function has a step type decorator with the matching string. It also receives a shared context and any step parameters. Feature files must be placed in a directory named `features/`, while step definition modules must be placed in a directory named `features/steps/`. Any feature file can use step definitions from any module—they do not need to have the same names. Below is an example Python module with step definitions for the cucumber basket features.
```
from behave import *

from cucumbers.basket import CucumberBasket


@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
    context.basket = CucumberBasket(initial_count=initial)


@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
    context.basket.add(some)


@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
    assert context.basket.count == total
```
Three [step matchers][12] are available: `parse`, `cfparse`, and `re`. The default and simplest matcher is `parse`, which is shown in the example above. Notice how parametrized values are parsed and passed into the functions as input arguments. A common best practice is to put double quotes around parameters in steps.
Each step definition function also receives a [context][13] variable that holds data specific to the current scenario being run, such as `feature`, `scenario`, and `tags` fields. Custom fields may be added, too, to share data between steps. Always use context to share data—never use global variables!
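A rough mental model of that sharing: the context behaves like a namespace object handed to every step in the scenario. This sketch uses Python's `SimpleNamespace` as a stand-in for `behave`'s real context class:

```python
from types import SimpleNamespace

# Stand-in for behave's context: one shared object per scenario.
context = SimpleNamespace()
context.basket_count = 4        # a Given step stores initial state...
context.basket_count += 6       # ...a When step updates it...
result = context.basket_count   # ...and a Then step reads it to verify
```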
`behave` also supports [hooks][14] to handle automation concerns outside of Gherkin steps. A hook is a function that will be run before or after a step, scenario, feature, or whole test suite. Hooks are reminiscent of [aspect-oriented programming][15]. They should be placed in a special `environment.py` file under the `features/` directory. Hook functions can check the current scenario's tags, as well, so logic can be selectively applied. The example below shows how to use hooks to set up and tear down a Selenium WebDriver instance for any scenario tagged as `@web`.
```
from selenium import webdriver


def before_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser = webdriver.Firefox()
        context.browser.implicitly_wait(10)


def after_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser.quit()
```
Note: Setup and cleanup can also be done with [fixtures][16] in `behave`.

To offer an idea of what a `behave` project should look like, here's the example project's directory structure:

Any Python packages and custom modules can be used with `behave`. Use good design patterns to build a scalable test automation solution. Step definition code should be concise.
### Running tests
To run tests from the command line, change to the project's root directory and run the `behave` command. Use the `--help` option to see all available options.
Below are a few common use cases:
```
# run all tests
behave

# run the scenarios in a feature file
behave features/web.feature

# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo

# run all tests that do not have the @unit tag
behave --tags ~@unit

# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```
For convenience, options may be saved in [config][17] files.

### Other options

`behave` is not the only BDD test framework in Python. Other good frameworks include:
  * `pytest-bdd`, a plugin for `pytest`. Like `behave`, it uses Gherkin feature files and step definition modules, but it also leverages all the features and plugins of `pytest`. For example, it can run Gherkin scenarios in parallel using `pytest-xdist`. BDD and non-BDD tests can also be executed together with the same filters. `pytest-bdd` also offers a more flexible directory layout.
  * `radish` is a "Gherkin-plus" framework—it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. It also offers rich command line options like `behave`.
  * `lettuce` is an older BDD framework very similar to `behave`, with minor differences in framework mechanics. However, GitHub shows little recent activity in the project (as of May 2018).

Any of these frameworks would be good choices.

Also, remember that Python test frameworks can be used for any black box testing, even for non-Python products! BDD frameworks are great for web and service testing because their tests are declarative, and Python is a [great language for test automation][18].

This article is based on the author's [PyCon Cleveland 2018][19] talk, [Behavior-Driven Python][20].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/behavior-driven-python
作者:[Andrew Knight][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/andylpk247
[1]:https://automationpanda.com/bdd/
[2]:https://en.wikipedia.org/wiki/Specification_by_example
[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/
[4]:https://cucumber.io/
[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/
[6]:https://docs.pipenv.org/
[7]:https://github.com/AndyLPK247/behavior-driven-python
[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/
[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/
[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/
[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/
[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
[19]:https://us.pycon.org/2018/
[20]:https://us.pycon.org/2018/schedule/presentation/87/
@ -0,0 +1,65 @@
5 trending open source machine learning JavaScript frameworks
======

The tremendous growth of the machine learning field has been driven by the availability of open source tools that allow developers to build applications easily. (For example, [AndreyBu][1], who is from Germany and has more than five years of experience in machine learning, has been utilizing various open source frameworks to build captivating machine learning projects.)

Although the Python programming language powers most of the machine learning frameworks, JavaScript hasn’t been left behind. JavaScript developers have been using various frameworks for training and deploying machine learning models in the browser.

Here are the five trending open source machine learning frameworks in JavaScript.

### 1\. TensorFlow.js

[TensorFlow.js][2] is an open source library that allows you to run machine learning programs completely in the browser. It is the successor of Deeplearn.js, which is no longer supported. TensorFlow.js improves on the functionalities of Deeplearn.js and empowers you to make the most of the browser for a deeper machine learning experience.

With the library, you can use versatile and intuitive APIs to define, train, and deploy models from scratch right in the browser. Furthermore, it automatically offers support for WebGL and Node.js.

If you have pre-existing trained models you want to import to the browser, TensorFlow.js will allow you to do that. You can also retrain existing models without leaving the browser.
### 2\. Machine learning tools
The [machine learning tools][3] library is a compilation of resourceful open source tools for supporting widespread machine learning functionalities in the browser. The tools provide support for several machine learning algorithms, including unsupervised learning, supervised learning, data processing, artificial neural networks (ANN), math, and regression.
If you are coming from a Python background and looking for something similar to Scikit-learn for JavaScript in-browser machine learning, this suite of tools could have you covered.
### 3\. Keras.js
[Keras.js][4] is another trending open source framework that allows you to run machine learning models in the browser. It offers GPU mode support using WebGL. If you have models in Node.js, you’ll run them only in CPU mode. Keras.js also offers support for models trained using any backend framework, such as the Microsoft Cognitive Toolkit (CNTK).

Some of the Keras models that can be deployed on the client-side browser include Inception v3 (trained on ImageNet), 50-layer Residual Network (trained on ImageNet), and Convolutional variational auto-encoder (trained on MNIST).

### 4\. Brain.js

Machine learning concepts are very math-heavy, which may discourage people from starting. The technicalities and jargon in this field may make beginners freak out. This is where [Brain.js][5] becomes important. It is an open source, JavaScript-powered framework that simplifies the process of defining, training, and running neural networks.

If you are a JavaScript developer who is completely new to machine learning, Brain.js could reduce your learning curve. It can be used with Node.js or in the client-side browser for training machine learning models. Some of the networks that Brain.js supports include feed-forward networks, Elman networks, and gated recurrent unit networks.

### 5\. STDLib

[STDLib][6] is an open source library for powering JavaScript and Node.js applications. If you are looking for a library that emphasizes in-browser support for scientific and numerical web-based machine learning applications, STDLib could suit your needs.

The library comes with comprehensive and advanced mathematical and statistical functions to assist you in building high-performing machine learning models. You can also use its expansive utilities for building applications and other libraries. Furthermore, if you want a framework for data visualization and exploratory data analysis, you’ll find STDLib worthwhile.

### Conclusion

If you are a JavaScript developer who intends to delve into the exciting world of [machine learning][7] or a machine learning expert who intends to start using JavaScript, the above open source frameworks will intrigue you.

Do you know of another open source library that offers in-browser machine learning capabilities? Please let us know in the comment section below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/machine-learning-javascript-frameworks
作者:[Dr.Michael J.Garbade][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/drmjg
[1]:https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/
[2]:https://js.tensorflow.org/
[3]:https://github.com/mljs/ml
[4]:https://transcranial.github.io/keras-js/#/
[5]:https://github.com/BrainJS/brain.js
[6]:https://stdlib.io/
[7]:https://www.liveedu.tv/guides/artificial-intelligence/machine-learning/
119
sources/tech/20180529 Copying and renaming files on Linux.md
Normal file
@ -0,0 +1,119 @@
Copying and renaming files on Linux
======

Linux users have for many decades been using simple cp and mv commands to copy and rename files. These commands are some of the first that most of us learned and are used every day by possibly millions of people. But there are other techniques, handy variations, and another command for renaming files that offers some unique options.

First, let’s think about why you might want to copy a file. You might need the same file in another location or you might want a copy because you’re going to edit the file and want to be sure you have a handy backup just in case you need to revert to the original file. The obvious way to do that is to use a command like “cp myfile myfile-orig”.

If you want to copy a large number of files, however, that strategy might get old real fast. Better alternatives are to:

  * Use tar to create an archive of all of the files you want to back up before you start editing them.
  * Use a for loop to make the backup copies easier.

The tar option is very straightforward. For all files in the current directory, you’d use a command like:
```
$ tar cf myfiles.tar *
```
For a group of files that you can identify with a pattern, you’d use a command like this:
```
$ tar cf myfiles.tar *.txt
```
In each case, you end up with a myfiles.tar file that contains all the files in the directory or all files with the .txt extension.
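If you prefer scripting that archive step, the same `tar cf myfiles.tar *.txt` backup can be sketched with Python's standard library. The function name `backup_txt_files` and its defaults are my own choices for illustration:

```python
import glob
import tarfile

def backup_txt_files(archive='myfiles.tar', pattern='*.txt'):
    # Equivalent of `tar cf myfiles.tar *.txt`:
    # bundle every matching file in the current directory into one archive.
    with tarfile.open(archive, 'w') as tar:
        for name in sorted(glob.glob(pattern)):
            tar.add(name)
    return archive
```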
An easy loop would allow you to make backup copies with modified names:
```
$ for file in *
> do
>   cp $file $file-orig
> done
```
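The same backup loop can be sketched in Python for anyone scripting it rather than typing it at the shell; `backup_all` is a hypothetical helper name, and the `-orig` check is my own guard so re-running it doesn't back up the backups:

```python
import glob
import os
import shutil

def backup_all(pattern='*'):
    # Equivalent of: for file in *; do cp $file $file-orig; done
    for path in glob.glob(pattern):
        if os.path.isfile(path) and not path.endswith('-orig'):
            shutil.copy(path, path + '-orig')
```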
When you’re backing up a single file and that file just happens to have a long name, you can rely on filename completion (hit the Tab key after entering enough letters to uniquely identify the file) and use syntax like this to append “-orig” to the copy.
```
$ cp file-with-a-very-long-name{,-orig}
```
You then have a file-with-a-very-long-name and a file-with-a-very-long-name-orig.
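What the shell's `{,-orig}` brace expansion actually hands to `cp` can be sketched like this (the expansion happens before the command runs):

```python
# The shell expands name{,-orig} into two arguments before cp is invoked:
# one with an empty suffix, one with '-orig' appended.
name = 'file-with-a-very-long-name'
args = [name + suffix for suffix in ('', '-orig')]
```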
### Renaming files on Linux
The traditional way to rename a file is to use the mv command. This command will move a file to a different directory, change its name and leave it in place, or do both.
```
$ mv myfile /tmp
$ mv myfile notmyfile
$ mv myfile /tmp/notmyfile
```
But we now also have the rename command to do some serious renaming for us. The trick to using the rename command is to get used to its syntax, but if you know some Perl, you might not find it tricky at all.

Here’s a very useful example. Say you wanted to rename the files in a directory to replace all of the uppercase letters with lowercase ones. In general, you don’t find a lot of files with capital letters on Unix or Linux systems, but you could. Here’s an easy way to rename them without having to use the mv command for each one of them. The /A-Z/a-z/ specification tells the rename command to change any letters in the range A-Z to the corresponding letters in a-z.
```
$ ls
Agenda  Group.JPG  MyFile
$ rename 'y/A-Z/a-z/' *
$ ls
agenda  group.jpg  myfile
```
You can also use rename to remove file extensions. Maybe you’re tired of seeing text files with .txt extensions. Simply remove them — and in one command.
```
$ ls
agenda.txt  notes.txt  weekly.txt
$ rename 's/\.txt$//' *
$ ls
agenda  notes  weekly
```
Now let’s imagine you have a change of heart and want to put those extensions back. No problem. Just change the command. The trick is understanding that the “s” before the first slash means “substitute”. What’s in between the first two slashes is what we want to change, and what’s in between the second and third slashes is what we want to change it to. So, $ represents the end of the filename, and we’re changing it to “.txt”.
```
$ ls
agenda  notes  weekly
$ rename 's/$/.txt/' *
$ ls
agenda.txt  notes.txt  weekly.txt
```
You can change other parts of filenames, as well. Keep the **s/old/new/** rule in mind.
```
$ ls
draft-minutes-2018-03  draft-minutes-2018-04  draft-minutes-2018-05
$ rename 's/draft/approved/' *minutes*
$ ls
approved-minutes-2018-03  approved-minutes-2018-04  approved-minutes-2018-05
```
Note in the examples above that when we use an **s**, as in **s/old/new/**, we are substituting one part of the name with another. When we use **y**, we are transliterating (substituting characters from one range to another).
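For readers more at home in Python than Perl, the two operations map cleanly onto `re.sub` (substitution) and `str.translate` (transliteration); the file names below are just the article's examples:

```python
import re
import string

# s/draft/approved/ -- substitute one substring for another
new_name = re.sub('draft', 'approved', 'draft-minutes-2018-03')

# y/A-Z/a-z/ -- transliterate, mapping each character in one range
# to the corresponding character in another
lower = 'Group.JPG'.translate(str.maketrans(string.ascii_uppercase,
                                            string.ascii_lowercase))
```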
### Wrap-up

There are a lot of options for copying and renaming files. I hope some of them will make your time on the command line more enjoyable.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3276349/linux/copying-and-renaming-files-on-linux.html
作者:[Sandra Henry-Stocker][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world
@ -0,0 +1,196 @@
Manage your workstation with Ansible: Configure desktop settings
======

In the [first article][1] of this series on using Ansible to configure a workstation, we set up a repository and configured a few basic things. In the [second part][2], we automated Ansible to apply settings automatically when changes are made to our repository. In this third (and final) article, we'll use Ansible to configure GNOME desktop settings.

This configuration will work only on newer distributions (such as Ubuntu 18.04, which I'll use in my examples). Older versions of Ubuntu will not work, as they ship with a version of `python-psutil` that is too old for Ansible's `dconf` module to work properly. If you're using a newer version of your Linux distribution, you should have no issues.

Before you begin, make sure you've worked through parts one and two of this series, as part three builds upon that groundwork. If you haven't already, download the GitHub repository you've been using in those first two articles. We'll add a few more features to it.
### Set a wallpaper and lock screen
First, we'll create a taskbook to hold our GNOME settings. In the root of the repository, you should have a file named `local.yml`. Add the following line to it:
```
- include: tasks/gnome.yml
```
The entire file should now look like this:
```
- hosts: localhost
  become: true
  pre_tasks:
    - name: update repositories
      apt: update_cache=yes
      changed_when: False

  tasks:
    - include: tasks/users.yml
    - include: tasks/cron.yml
    - include: tasks/packages.yml
    - include: tasks/gnome.yml
```
Basically, this added a reference to a file named `gnome.yml` that will be stored in the `tasks` directory inside the repository. We haven't created this file yet, so let's do that now. Create `gnome.yml` file in the `tasks` directory, and place the following content inside:
```
- name: Install python-psutil package
  apt: name=python-psutil

- name: Copy wallpaper file
  copy: src=files/wallpaper.jpg dest=/home/jay/.wallpaper.jpg owner=jay group=jay mode=600

- name: Set GNOME Wallpaper
  become_user: jay
  dconf: key="/org/gnome/desktop/background/picture-uri" value="'file:///home/jay/.wallpaper.jpg'"
```
Note that this code refers to my username (`jay`) several times, so make sure to replace every occurrence of `jay` with the username you use on your machine. Also, if you're not using Ubuntu 18.04 (as I am), you'll have to change the `apt` line to match the package manager for your chosen distribution and to confirm the name of the `python-psutil` package for your distribution, as it may be different.
In the example tasks, I referred to a file named `wallpaper.jpg` inside the `files` directory. This file must exist, or the Ansible configuration will fail. Inside the `tasks` directory, create a subdirectory named `files`. Find a wallpaper image you like, name it `wallpaper.jpg`, and place it inside the `files` directory. If the file is a PNG image instead of a JPG, change the file extension in both the code and in the repository. If you're not feeling creative, I have an example wallpaper file in the [GitHub repository][3] for this article series that you can use.
Once you've made all these changes, commit everything to your GitHub repository, and push those changes. To recap, you should've completed the following:
  * Modified the `local.yml` file to refer to the `tasks/gnome.yml` playbook
  * Created the `tasks/gnome.yml` playbook with the content mentioned above
  * Created a `files` directory inside the `tasks` directory, with an image file named `wallpaper.jpg` (or whatever you chose to call it)

Once you've completed those steps and pushed your changes back to the repository, the configuration should be automatically applied during its next scheduled run. (You may recall that we automated this in the previous article.) If you're in a hurry, you can apply the configuration immediately with the following command:
```
sudo ansible-pull -U https://github.com/<github_user>/ansible.git
```
If everything ran correctly, you should see your new wallpaper.
Let's take a moment to go through what the new GNOME taskbook does. First, we added a play to install the `python-psutil` package. If we don't add this, we can't use the `dconf` module, since it requires this package to be installed before we can modify GNOME settings. Next, we used the `copy` module to copy the wallpaper file to our `home` directory, and we named the resulting file starting with a period to hide it. If you'd prefer not to have this file in the root of your `home` directory, you can always instruct this section to copy it somewhere else—it will still work as long as you refer to it in the correct place. In the next play, we used the `dconf` module to change GNOME settings. In this case, we adjusted the `/org/gnome/desktop/background/picture-uri` key and set it equal to `file:///home/jay/.wallpaper.jpg`. Note the quotes in this section of the playbook—`dconf` values are wrapped in double quotes for YAML, and string values must additionally be enclosed in single quotes inside those double quotes.
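One way to see why both quote layers are needed: YAML consumes the outer double quotes, and `dconf` then parses the remaining text as a GVariant literal, in which strings are single-quoted. A tiny Python sketch of the two layers (illustrative only, not the actual module code):

```python
# Layer 1: YAML strips the double quotes, so the dconf module receives:
yaml_value = "'file:///home/jay/.wallpaper.jpg'"

# Layer 2: dconf parses that text as a GVariant literal; for strings the
# single quotes are syntax, so the stored value is the bare URI
stored = yaml_value[1:-1] if yaml_value.startswith("'") else yaml_value
print(stored)  # file:///home/jay/.wallpaper.jpg
```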
|
||||
|
||||
Now, let's take our configuration a step further and apply a background to the lock screen. Here's the GNOME taskbook again, but with two additional plays added:
|
||||
```
|
||||
- name: Install python-psutil package
|
||||
|
||||
apt: name=python-psutil
|
||||
|
||||
|
||||
|
||||
- name: Copy wallpaper file
|
||||
|
||||
copy: src=files/wallpaper.jpg dest=/home/jay/.wallpaper.jpg owner=jay group=jay mode=600
|
||||
|
||||
|
||||
|
||||
- name: Set GNOME wallpaper
|
||||
|
||||
dconf: key="/org/gnome/desktop/background/picture-uri" value="'file:///home/jay/.wallpaper.jpg'"
|
||||
|
||||
|
||||
|
||||
- name: Copy lockscreenfile
|
||||
|
||||
copy: src=files/lockscreen.jpg dest=/home/jay/.lockscreen.jpg owner=jay group=jay mode=600
|
||||
|
||||
|
||||
|
||||
- name: Set lock screen background
|
||||
|
||||
become_user: jay
|
||||
|
||||
dconf: key="/org/gnome/desktop/screensaver/picture-uri" value="'file:///home/jay/.lockscreen.jpg'"
|
||||
|
||||
```
|
||||
|
||||
As you can see, we're pretty much doing the same thing as we did with the wallpaper. We added two additional tasks, one to copy the lock screen image and place it in our `home` directory, and another to apply the setting to GNOME so it will be used. Again, be sure to change your username from `jay` and also name your desired lock screen picture `lockscreen.jpg` and copy it to the `files` directory. Once you've committed these changes to your repository, the new lock screen should be applied during the next scheduled Ansible run.
|
||||
|
||||
### Apply a new desktop theme
|
||||
|
||||
Setting the wallpaper and lock screen background is cool and all, but let's go even further and apply a desktop theme. First, let's add an instruction to our taskbook to install the package for the `arc` theme. Add the following code to the beginning of the GNOME taskbook:
|
||||
```
|
||||
- name: Install arc theme
|
||||
|
||||
apt: name=arc-theme
|
||||
|
||||
```
|
||||
|
||||
Then, at the bottom, add the following play:
|
||||
```
|
||||
- name: Set GTK theme
|
||||
|
||||
become_user: jay
|
||||
|
||||
dconf: key="/org/gnome/desktop/interface/gtk-theme" value="'Arc'"
|
||||
|
||||
```
|
||||
|
||||
Did you see GNOME's GTK theme change right before your eyes? We added a play to install the `arc-theme` package via the `apt` module and another play to apply this theme to GNOME.
|
||||
|
||||
### Make other customizations
|
||||
|
||||
Now that you've changed some GNOME settings, feel free to add additional customizations on your own. Any setting you can tweak in GNOME can be automated this way; setting the wallpapers and the theme were just a few examples. You may be wondering how to find the settings that you want to change. Here's a trick that works for me.
|
||||
|
||||
First, take a snapshot of ALL your current `dconf` settings by running the following command on the machine you're managing:
|
||||
```
|
||||
dconf dump / > before.txt
|
||||
|
||||
```
|
||||
|
||||
This command exports all your current changes to a file named `before.txt`. Next, manually change the setting you want to automate, and capture the `dconf` settings again:
|
||||
```
|
||||
dconf dump / > after.txt
|
||||
|
||||
```
|
||||
|
||||
Now, you can use the `diff` command to see what's different between the two files:
|
||||
```
|
||||
diff before.txt after.txt
|
||||
|
||||
```
|
||||
|
||||
This should give you a list of keys that changed. While it's true that changing settings manually defeats the purpose of automation, what you're essentially doing is capturing the keys that change when you update your preferred settings, which then allows you to create Ansible plays to modify those settings so you'll never need to touch those settings again. If you ever need to restore your machine, your Ansible repository will take care of each and every one of your customizations. If you have multiple machines, or even a fleet of workstations, you only have to manually make the change once, and all other workstations will have the new settings applied and be completely in sync.
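If you'd rather script the comparison, the same capture-and-diff idea can be sketched with Python's `difflib` (the dump contents below are hypothetical):

```python
import difflib

# Hypothetical dconf dumps captured before and after changing the wallpaper
before = """[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/default.jpg'
"""
after = """[org/gnome/desktop/background]
picture-uri='file:///home/jay/.wallpaper.jpg'
"""

# unified_diff surfaces exactly which keys changed between the two captures
for line in difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm=""):
    print(line)
```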
|
||||
|
||||
### Wrapping up
|
||||
|
||||
If you've followed along with this series, you should know how to set up Ansible to automate your workstation. These examples offer a useful baseline, and you can use the syntax and examples to make additional customizations. As you go along, you can continue to add new modifications, which will make your Ansible configuration grow over time.
|
||||
|
||||
I've used Ansible in this way to automate everything, including my user account and password; configuration files for Vim, tmux, etc.; desktop packages; SSH settings; SSH keys; and basically everything I could ever want to customize. Using this series as a starting point will pave the way for you to completely automate your workstations.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/manage-your-workstation-ansible-part-3
|
||||
|
||||
作者:[Jay LaCroix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jlacroix
|
||||
[1]:https://opensource.com/article/18/3/manage-workstation-ansible
|
||||
[2]:https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2
|
||||
[3]:https://github.com/jlacroix82/ansible_article.git
|
@ -0,0 +1,122 @@
|
||||
The Source Code Line Counter And Analyzer
|
||||
======
|
||||
|
||||

|
||||
|
||||
**Ohcount** is a simple command-line utility that analyzes source code and prints the total number of lines in a source code file. It is not just a source code line counter: it also detects popular open source licenses, such as the GPL, within a large directory of source code. Additionally, Ohcount can detect code that targets a particular programming API, such as KDE or Win32. As of writing this guide, Ohcount supports over 70 popular programming languages. It is written in the **C** programming language and was originally developed by **Ohloh** to generate the reports at [www.openhub.net][1].
|
||||
|
||||
In this brief tutorial, we are going to see how to install and use Ohcount to analyze source code files in Debian, Ubuntu, and their variants like Linux Mint.
|
||||
|
||||
### Ohcount – The source code line counter
|
||||
|
||||
**Installation**
|
||||
|
||||
Ohcount is available in the default repositories of Debian, Ubuntu, and their derivatives, so you can install it using the APT package manager as shown below.
|
||||
```
|
||||
$ sudo apt-get install ohcount
|
||||
|
||||
```
|
||||
|
||||
**Usage**
|
||||
|
||||
Ohcount usage is extremely simple.
|
||||
|
||||
All you have to do is go to the directory that contains the source code you want to analyze and run the ohcount program.
|
||||
|
||||
Say, for example, I am going to analyze the source code of the [**coursera-dl**][2] program.
|
||||
```
|
||||
$ cd coursera-dl-master/
|
||||
|
||||
$ ohcount
|
||||
|
||||
```
|
||||
|
||||
Here is the line count summary of Coursera-dl program:
|
||||
|
||||
![][4]
|
||||
|
||||
As you can see, the source code of the Coursera-dl program contains 141 files in total. The first column specifies the names of the programming languages the source code consists of. The second column displays the number of files in each programming language. The third column displays the total number of lines in each programming language. The fourth and fifth columns display how many lines of comments there are and their percentage of the code. The sixth column displays the number of blank lines. And the final, seventh column displays the total lines of code in each language, along with the gross total for the coursera-dl program.
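As a rough illustration of the code/comment/blank split behind those columns, here is a toy counter in Python (it only understands `#` comments; the real ohcount does per-language lexing):

```python
def count_lines(source):
    """Classify each line of source text as code, comment, or blank."""
    counts = {"code": 0, "comment": 0, "blank": 0}
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            counts["blank"] += 1
        elif stripped.startswith("#"):
            counts["comment"] += 1
        else:
            counts["code"] += 1
    return counts

sample = "x = 1\n\n# a comment\ny = x + 1\n"
print(count_lines(sample))  # {'code': 2, 'comment': 1, 'blank': 1}
```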
|
||||
|
||||
Alternatively, you can mention the complete path directly, like below.
|
||||
```
|
||||
$ ohcount coursera-dl-master/
|
||||
|
||||
```
|
||||
|
||||
The path can be any number of individual files or directories. Directories will be probed recursively. If no path is given, the current directory will be used.
|
||||
|
||||
If you don't want to mention the whole directory path each time, just cd into the directory and use the ohcount utility to analyze the code there.
|
||||
|
||||
To count lines of code per file, use **-i** flag.
|
||||
```
|
||||
$ ohcount -i
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
![][5]
|
||||
|
||||
The Ohcount utility can also show the annotated source code when you use the **-a** flag.
|
||||
```
|
||||
$ ohcount -a
|
||||
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
As you can see, the contents of all source code files found in this directory are displayed. Each line is prefixed with a tab-delimited language name and semantic categorization (code, comment, or blank).
|
||||
|
||||
Sometimes, you just want to know the license used in the source code. To do so, use the **-l** flag.
|
||||
```
|
||||
$ ohcount -l
|
||||
lgpl3, coursera_dl.py
|
||||
gpl coursera_dl.py
|
||||
|
||||
```
|
||||
|
||||
Another available option is **-re**, which is used to print raw entity information to the screen (mainly for debugging).
|
||||
```
|
||||
$ ohcount -re
|
||||
|
||||
```
|
||||
|
||||
To find all source code files within the given paths recursively, use the **-d** flag.
|
||||
```
|
||||
$ ohcount -d
|
||||
|
||||
```
|
||||
|
||||
The above command will display all source code files in the current working directory; each file name will be prefixed with a tab-delimited language name.
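The recursive discovery that `-d` performs can be sketched with the Python standard library (the extension-to-language map here is illustrative, not ohcount's real table):

```python
import os
import tempfile

# Illustrative extension-to-language map (not ohcount's real table)
LANG_BY_EXT = {".py": "python", ".c": "c", ".sh": "shell"}

def find_source_files(root):
    """Yield (language, path) for recognized source files under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            ext = os.path.splitext(name)[1]
            if ext in LANG_BY_EXT:
                yield LANG_BY_EXT[ext], os.path.join(dirpath, name)

# Demo on a scratch directory
demo = tempfile.mkdtemp()
for name in ("main.c", "util.py", "README.md"):
    open(os.path.join(demo, name), "w").close()
for lang, path in find_source_files(demo):
    print("{}\t{}".format(lang, os.path.basename(path)))
# c	main.c
# python	util.py
```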
|
||||
|
||||
To know more details and supported options, run:
|
||||
```
|
||||
$ ohcount --help
|
||||
|
||||
```
|
||||
|
||||
Ohcount is quite useful for developers who want to analyze code written by themselves or other developers, check how many lines it contains, see which languages were used to write it, find the license details of the code, and so on.
|
||||
|
||||
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:http://www.openhub.net
|
||||
[2]:https://www.ostechnix.com/coursera-dl-a-script-to-download-coursera-videos/
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2018/05/ohcount-2.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2018/05/ohcount-1-5.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/05/ohcount-2-2.png
|
270
sources/tech/20180530 3 Python command-line tools.md
Normal file
270
sources/tech/20180530 3 Python command-line tools.md
Normal file
@ -0,0 +1,270 @@
|
||||
3 Python command-line tools
|
||||
======
|
||||
|
||||

|
||||
|
||||
This article was co-written with [Lacey Williams Henschel][1].
|
||||
|
||||
Sometimes the right tool for the job is a command-line application. A command-line application is a program that you interact with and run from something like your shell or Terminal. [Git][2] and [Curl][3] are examples of command-line applications that you might already be familiar with.
|
||||
|
||||
Command-line apps are useful when you have a bit of code you want to run several times in a row or on a regular basis. Django developers run commands like `./manage.py runserver` to start their web servers; Docker developers run `docker-compose up` to spin up their containers. The reasons you might want to write a command-line app are as varied as the reasons you might want to write code in the first place.
|
||||
|
||||
For this month's Python column, we have three libraries to recommend to Pythonistas looking to write their own command-line tools.
|
||||
|
||||
### Click
|
||||
|
||||
[Click][4] is our favorite Python package for command-line applications. It:
|
||||
|
||||
* Has great documentation filled with examples
|
||||
|
||||
* Includes instructions on packaging your app as a Python application so it's easier to run
|
||||
|
||||
* Automatically generates useful help text
|
||||
|
||||
* Lets you stack optional and required arguments and even [several commands][5]
|
||||
|
||||
* Has a Django version ([`django-click`][6]) for writing management commands
|
||||
|
||||
|
||||
|
||||
|
||||
Click uses its `@click.command()` decorator to declare a function as a command and specify required or optional arguments.
|
||||
```
|
||||
# hello.py
|
||||
|
||||
import click
|
||||
|
||||
|
||||
|
||||
@click.command()
|
||||
|
||||
@click.option('--name', default='', help='Your name')
|
||||
|
||||
def say_hello(name):
|
||||
|
||||
click.echo("Hello {}!".format(name))
|
||||
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
    say_hello()
|
||||
|
||||
```
|
||||
|
||||
The `@click.option()` decorator declares an [optional argument][7], and the `@click.argument()` decorator declares a [required argument][8]. You can combine optional and required arguments by stacking the decorators. The `echo()` method prints results to the console.
|
||||
```
|
||||
$ python hello.py --name='Lacey'
|
||||
|
||||
Hello Lacey!
|
||||
|
||||
```
|
||||
|
||||
### Docopt
|
||||
|
||||
[Docopt][9] is a command-line application parser, sort of like Markdown for your command-line apps. If you like writing the documentation for your apps as you go, Docopt has by far the best-formatted help text of the options in this article. It isn't our favorite command-line app library because its documentation throws you into the deep end right away, which makes it a little more difficult to get started. Still, it's a lightweight library that is very popular, especially if exceptionally nice documentation is important to you.
|
||||
|
||||
|
||||
|
||||
Docopt is very particular about how you format the required docstring at the top of your file. The top element in your docstring after the name of your tool must be "Usage," and it should list the ways you expect your command to be called (e.g., by itself, with arguments, etc.). Usage should include the `help` and `version` flags.
|
||||
|
||||
The second element in your docstring should be "Options," and it should provide more information about the options and arguments you identified in "Usage." The content of your docstring becomes the content of your help text.
|
||||
```
|
||||
"""HELLO CLI
|
||||
|
||||
|
||||
|
||||
Usage:
|
||||
|
||||
hello.py
|
||||
|
||||
hello.py <name>
|
||||
|
||||
hello.py -h|--help
|
||||
|
||||
hello.py -v|--version
|
||||
|
||||
|
||||
|
||||
Options:
|
||||
|
||||
<name> Optional name argument.
|
||||
|
||||
-h --help Show this screen.
|
||||
|
||||
-v --version Show version.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
|
||||
from docopt import docopt
|
||||
|
||||
|
||||
|
||||
def say_hello(name):
|
||||
|
||||
return("Hello {}!".format(name))
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
arguments = docopt(__doc__, version='DEMO 1.0')
|
||||
|
||||
if arguments['<name>']:
|
||||
|
||||
print(say_hello(arguments['<name>']))
|
||||
|
||||
else:
|
||||
|
||||
print(arguments)
|
||||
|
||||
```
|
||||
|
||||
At its most basic level, Docopt is designed to return your arguments to the console as key-value pairs. If I call the above command without specifying a name, I get a dictionary back:
|
||||
```
|
||||
$ python hello.py
|
||||
|
||||
{'--help': False,
|
||||
|
||||
'--version': False,
|
||||
|
||||
'<name>': None}
|
||||
|
||||
```
|
||||
|
||||
This shows me I did not input the `help` or `version` flags, and the `name` argument is `None`.
|
||||
|
||||
But if I call it with a name, the `say_hello` function will execute.
|
||||
```
|
||||
$ python hello.py Jeff
|
||||
|
||||
Hello Jeff!
|
||||
|
||||
```
|
||||
|
||||
Docopt allows both required and optional arguments and has different syntax conventions for each. Required arguments should be represented in `ALLCAPS` or in `<carets>`, and options should be represented with double or single dashes, like `--name`. Read more about Docopt's [patterns][10] in the docs.
|
||||
|
||||
### Fire
|
||||
|
||||
[Fire][11] is a Google library for writing command-line apps. We especially like it when your command needs to take more complicated arguments or deal with Python objects, as it tries to handle parsing your argument types intelligently.
|
||||
|
||||
Fire's [docs][12] include a ton of examples, but I wish the docs were a bit better organized. Fire can handle [multiple commands in one file][13], commands as methods on [objects][14], and [grouping][15] commands.
|
||||
|
||||
Its weakness is the documentation it makes available to the console. Docstrings on your commands don't appear in the help text, and the help text doesn't necessarily identify arguments.
|
||||
```
|
||||
import fire
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
def say_hello(name=''):
|
||||
|
||||
return 'Hello {}!'.format(name)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
fire.Fire()
|
||||
|
||||
```
|
||||
|
||||
Arguments are made required or optional depending on whether you specify a default value for them in your function or method definition. To call this command, you must specify the filename and the function name, more like Click's syntax:
|
||||
```
|
||||
$ python hello.py say_hello Rikki
|
||||
|
||||
Hello Rikki!
|
||||
|
||||
```
|
||||
|
||||
You can also pass arguments as flags, like `--name=Rikki`.
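Fire's required-vs-optional rule is just plain Python default values; here's a stdlib sketch of how such a tool could detect them (illustrative, not Fire's actual implementation):

```python
import inspect

def say_hello(name=''):
    return 'Hello {}!'.format(name)

def greet(name):
    return 'Hi {}!'.format(name)

def required_params(func):
    """Parameters without a default value are the required ones."""
    sig = inspect.signature(func)
    return [p.name for p in sig.parameters.values()
            if p.default is inspect.Parameter.empty]

print(required_params(say_hello))  # [] -- 'name' has a default, so optional
print(required_params(greet))      # ['name'] -- no default, so required
```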
|
||||
|
||||
### Bonus: Packaging!
|
||||
|
||||
Click includes instructions (and highly recommends you follow them) for [packaging][16] your commands using `setuptools`.
|
||||
|
||||
To package our first example, add this content to your `setup.py` file:
|
||||
```
|
||||
from setuptools import setup
|
||||
|
||||
|
||||
|
||||
setup(
|
||||
|
||||
name='hello',
|
||||
|
||||
version='0.1',
|
||||
|
||||
py_modules=['hello'],
|
||||
|
||||
install_requires=[
|
||||
|
||||
'Click',
|
||||
|
||||
],
|
||||
|
||||
entry_points='''
|
||||
|
||||
[console_scripts]
|
||||
|
||||
hello=hello:say_hello
|
||||
|
||||
''',
|
||||
|
||||
)
|
||||
|
||||
```
|
||||
|
||||
Everywhere you see `hello`, substitute the name of your module but omit the `.py` extension. Where you see `say_hello`, substitute the name of your function.
|
||||
|
||||
Then, run `pip install --editable .` to make your command available to the command line.
|
||||
|
||||
You can now call your command like this:
|
||||
```
|
||||
$ hello --name='Jeff'
|
||||
|
||||
Hello Jeff!
|
||||
|
||||
```
|
||||
|
||||
By packaging your command, you omit the extra step in the console of having to type `python hello.py --name='Jeff'` and save yourself several keystrokes. These instructions will probably also work for the other libraries we mentioned.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/3-python-command-line-tools
|
||||
|
||||
作者:[Jeff Triplett][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/laceynwilliams
|
||||
[1]:https://opensource.com/users/laceynwilliams
|
||||
[2]:https://git-scm.com/
|
||||
[3]:https://curl.haxx.se/
|
||||
[4]:http://click.pocoo.org/5/
|
||||
[5]:http://click.pocoo.org/5/commands/
|
||||
[6]:https://github.com/GaretJax/django-click
|
||||
[7]:http://click.pocoo.org/5/options/
|
||||
[8]:http://click.pocoo.org/5/arguments/
|
||||
[9]:http://docopt.org/
|
||||
[10]:https://github.com/docopt/docopt#usage-pattern-format
|
||||
[11]:https://github.com/google/python-fire
|
||||
[12]:https://github.com/google/python-fire/blob/master/docs/guide.md
|
||||
[13]:https://github.com/google/python-fire/blob/master/docs/guide.md#exposing-multiple-commands
|
||||
[14]:https://github.com/google/python-fire/blob/master/docs/guide.md#version-3-firefireobject
|
||||
[15]:https://github.com/google/python-fire/blob/master/docs/guide.md#grouping-commands
|
||||
[16]:http://click.pocoo.org/5/setuptools/
|
@ -0,0 +1,94 @@
|
||||
Introduction to the Pony programming language
|
||||
======
|
||||
|
||||

|
||||
|
||||
At [Wallaroo Labs][1], where I'm the VP of engineering, we're building a [high-performance, distributed stream processor][2] written in the [Pony][3] programming language. Most people haven't heard of Pony, but it has been an excellent choice for Wallaroo, and it might be an excellent choice for your next project, too.
|
||||
|
||||
> "A programming language is just another tool. It's not about syntax. It's not about expressiveness. It's not about paradigms or models. It's about managing hard problems." —Sylvan Clebsch, creator of Pony
|
||||
|
||||
I'm a contributor to the Pony project, but here I'll touch on why Pony is a good choice for applications like Wallaroo and share ways I've used Pony. If you are interested in a more in-depth look at why we use Pony to write Wallaroo, we have a [blog post][4] about that.
|
||||
|
||||
### What is Pony?
|
||||
|
||||
You can think of Pony as something like "Rust meets Erlang." Pony sports buzzworthy features. It is:
|
||||
|
||||
* Type-safe
|
||||
* Memory-safe
|
||||
* Exception-safe
|
||||
* Data-race-free
|
||||
* Deadlock-free
|
||||
|
||||
|
||||
|
||||
Additionally, it's compiled to efficient native code, and it's developed in the open and available under a two-clause BSD license.
|
||||
|
||||
That's a lot of features, but here I'll focus on the few that were key to my company adopting Pony.
|
||||
|
||||
### Why Pony?
|
||||
|
||||
Writing fast, safe, efficient, highly concurrent programs is not easy with most of our existing tools. "Fast, efficient, and highly concurrent" is an achievable goal, but throw in "safe," and things get a lot harder. With Wallaroo, we wanted to accomplish all four, and Pony has made it easy to achieve.
|
||||
|
||||
#### Highly concurrent
|
||||
|
||||
Pony makes concurrency easy. Part of the way it does that is by providing an opinionated concurrency story. In Pony, all concurrency happens via the [actor model][5].
|
||||
|
||||
The actor model is best known from its implementations in Erlang and Akka. It has been around since the 1970s, and details vary widely from implementation to implementation. What doesn't vary is that all computation is executed by actors that communicate via asynchronous messaging.
|
||||
|
||||
Think of the actor model this way: objects in object-oriented programming are state + synchronous methods and actors are state + asynchronous methods.
|
||||
|
||||
When an actor receives a message, it executes a corresponding method. That method might operate on state that is accessible only by that actor. The actor model allows us to use mutable state in a concurrency-safe manner. Every actor is single-threaded. Two methods within an actor are never run concurrently. This means that, within a given actor, data updates cannot cause data races or other problems commonly associated with threads and mutable state.
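To make "state + asynchronous methods" concrete, here is a toy actor in Python, using a single mailbox-draining thread (Pony's compiled runtime is far more sophisticated than this sketch):

```python
import queue
import threading

class CounterActor:
    """Toy actor: one thread drains a mailbox, so its state is race-free."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self.count = 0  # state touched only by the actor's own thread
        threading.Thread(target=self._run, daemon=True).start()

    def increment(self):
        # An asynchronous "method": callers only enqueue a message
        self._mailbox.put("increment")

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "increment":
                self.count += 1
            self._mailbox.task_done()

actor = CounterActor()
for _ in range(100):
    actor.increment()
actor._mailbox.join()  # demo only: wait until the mailbox is drained
print(actor.count)     # 100
```

Because only the actor's own thread ever touches `count`, the hundred increments need no lock, which is the property Pony's runtime guarantees for every actor.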
|
||||
|
||||
#### Fast and efficient
|
||||
|
||||
Pony actors are scheduled with an efficient work-stealing scheduler. There's a single Pony scheduler per available CPU. The thread-per-core concurrency model is part of Pony's attempt to work in concert with the CPU to operate as efficiently as possible. The Pony runtime attempts to be as CPU-cache friendly as possible. The less your code thrashes the cache, the better it will run. Pony aims to help your code play nice with CPU caches.
|
||||
|
||||
The Pony runtime also features per-actor heaps so that, during garbage collection, there's no "stop the world" garbage collection step. This means your program is always doing at least some work. As a result, Pony programs end up with very consistent performance and predictable latencies.
|
||||
|
||||
#### Safe
|
||||
|
||||
The Pony type system introduces a novel concept: reference capabilities, which make data safety part of the type system. The type of every variable in Pony includes information about how the data can be shared between actors. The Pony compiler uses the information to verify, at compile time, that your code is data-race- and deadlock-free.
|
||||
|
||||
If this sounds a bit like Rust, it's because it is. Pony's reference capabilities and Rust's borrow checker both provide data safety; they just approach it in different ways and have different tradeoffs.
|
||||
|
||||
### Is Pony right for you?
|
||||
|
||||
Deciding whether to use a new programming language for a non-hobby project is hard. You must weigh the appropriateness of the tool against its immaturity compared to other solutions. So, what about Pony and you?
|
||||
|
||||
Pony might be the right solution if you have a hard concurrency problem to solve. Concurrent applications are Pony's raison d'être. If you can accomplish what you want in a single-threaded Python script, you probably don't need Pony. If you have a hard concurrency problem, you should consider Pony and its powerful data-race-free, concurrency-aware type system.
|
||||
|
||||
You'll get a compiler that will prevent you from introducing many concurrency-related bugs and a runtime that will give you excellent performance characteristics.
|
||||
|
||||
### Getting started with Pony
|
||||
|
||||
If you're ready to get started with Pony, your first visit should be the [Learn section][6] of the Pony website. There you will find directions for installing the Pony compiler and resources for learning the language.
|
||||
|
||||
If you like to contribute to the language you are using, we have some [beginner-friendly issues][7] waiting for you on GitHub.
|
||||
|
||||
Also, I can't wait to start talking with you on [our IRC channel][8] and the [Pony mailing list][9].
|
||||
|
||||
To learn more about Pony, attend Sean Allen's talk, [Pony: How I learned to stop worrying and embrace an unproven technology][10], at the [20th OSCON][11], July 16-19, 2018, in Portland, Ore.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/pony
|
||||
|
||||
作者:[Sean T Allen][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/seantallen
|
||||
[1]:http://www.wallaroolabs.com/
|
||||
[2]:https://github.com/wallaroolabs/wallaroo
|
||||
[3]:https://www.ponylang.org/
|
||||
[4]:https://blog.wallaroolabs.com/2017/10/why-we-used-pony-to-write-wallaroo/
|
||||
[5]:https://en.wikipedia.org/wiki/Actor_model
|
||||
[6]:https://www.ponylang.org/learn/
|
||||
[7]:https://github.com/ponylang/ponyc/issues?q=is%3Aissue+is%3Aopen+label%3A%22complexity%3A+beginner+friendly%22
|
||||
[8]:https://webchat.freenode.net/?channels=%23ponylang
|
||||
[9]:https://pony.groups.io/g/user
|
||||
[10]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/213590
|
||||
[11]:https://conferences.oreilly.com/oscon/oscon-or
|
@ -0,0 +1,104 @@
|
||||
# GDPR 将如何影响开源社区?
|
||||
|
||||

|
||||
|
||||
图片来源:opensource.com
|
||||
|
||||
2018 年 5 月 25 日,[<ruby>通用数据保护条例<rt>General Data Protection Regulation, GDPR</rt></ruby>][1] 开始生效。欧盟出台的该条例将在全球范围内对企业如何保护个人数据产生重大影响。影响也会波及到开源项目以及开源社区。
|
||||
|
||||
### GDPR 概述
|
||||
|
||||
GDPR 于 2016 年 4 月 14 日在欧盟议会通过,从 2018 年 5 月 25 日起开始生效。GDPR 用于替代 95/46/EC 号<ruby>数据保护指令<rt>Data Protection Directive</rt></ruby>,该指令被设计用于“协调欧洲各国数据隐私法,保护和授权全体欧盟公民的数据隐私,改变欧洲范围内企业处理数据隐私的方式”。
|
||||
|
||||
GDPR 目标是在当前日益数据驱动的世界中保护欧盟公民的个人数据。
|
||||
|
||||
### 它对谁生效
|
||||
|
||||
GDPR 带来的最大改变之一是影响范围的扩大。不管企业本身是否位于欧盟,只要涉及欧盟公民个人数据的处理,GDPR 就会对其生效。
|
||||
|
||||
大部分提及 GDPR 的网上文章关注销售商品或服务的公司,但关注影响范围时,我们也不要忘记开源项目。有几种不同的类型,包括运营开源社区的(营利性)软件公司和非营利性组织(例如,开源软件项目及其社区)。对于面向全球的社区,几乎总是会有欧盟居民加入其中。
|
||||
|
||||
如果一个面向全球的社区有对应的在线平台,包括网站、论坛和问题跟踪系统等,那么很可能会涉及欧盟居民的个人数据处理,包括姓名、邮箱地址甚至更多。这些处理行为都需要遵循 GDPR。
|
||||
|
||||
### GDPR 带来的改变及其影响
|
||||
|
||||
相比被替代的指令,GDPR 带来了[很多改变][2],强化了对欧盟居民数据和隐私的保护。正如前文所说,一些改变给社区带来了直接的影响。我们来看看若干改变。
|
||||
|
||||
#### 授权
|
||||
|
||||
我们假设社区为成员提供论坛,网站中包含若干个用于注册的表单。要遵循 GDPR,你不能再使用冗长、无法辨识的隐私策略和条件条款。无论是每种特殊用途,在论坛注册或使用网站表单注册,你都需要获取明确的授权。授权必须是无条件的、具体的、通知性的以及无歧义的。
|
||||
|
||||
以表单为例,你需要提供一个复选框,处于未选中状态并给出个人数据用途的明确说明,一般是当前使用的隐私策略和条件条款附录的超链接。
|
||||
|
||||
#### 访问权
|
||||
|
||||
GDPR 赋予欧盟公民更多的权利。其中一项权利是向企业查询个人数据包括哪些,保存在哪里;如果<ruby>数据相关人<rt>data subject</rt></ruby>(例如欧盟公民)提出获取相应数据副本的需求,企业还应免费提供数字形式的数据。
|
||||
|
||||
#### 遗忘权
|
||||
|
||||
欧盟居民还从 GDPR 获得了“遗忘权”,即数据擦除。该权利是指,在一定限制条件下,企业必须删除个人数据,甚至可能停止其自身或第三方机构后续处理申请人的数据。
|
||||
|
||||
上述三种改变要求你的平台软件也要遵循 GDPR 的某些方面。需要提供特定的功能,例如获取并保存授权,提取数据并向数据相关人提供数字形式的副本,以及删除数据相关人对应的数据等。
|
||||
|
||||
#### 泄露通知
|
||||
|
||||
在 GDPR 看来,不经数据相关人授权情况下使用或偷取个人数据都被视为<ruby>数据泄露<rt>data breach</rt></ruby>。一旦发现,你应该在 72 小时内通知社区成员,除非这些个人数据不太可能给<ruby>自然人<rt>natural persons</rt></ruby>的权利与自由带来风险。GDPR 强制要求执行泄露通知。
|
||||
|
||||
#### 披露记录
|
||||
|
||||
企业负责提供一份记录,用于详细披露个人数据处理的过程和目的等。该记录用于证明企业遵从 GDPR 要求,维护了一份个人数据处理行为的记录;同时该记录也用于审计。
|
||||
|
||||
#### 罚款
|
||||
|
||||
不遵循 GDPR 的企业最高可面临全球年收入总额 4% 或 2000 万欧元 (取两者较大值)的罚款。根据 GDPR,“最高处罚针对严重的侵权行为,包括未经用户充分授权的情况下处理数据,以及违反设计理念中核心隐私部分”。
|
||||
|
||||
### 补充说明
|
||||
|
||||
本文不应用于法律建议或 GDPR 合规的指导书。我提到了可能对开源社区有影响的条约部分,希望引起大家对 GDPR 及其影响的关注。当然,条约包含了更多你需要了解和可能需要遵循的条款。
|
||||
|
||||
你自己也可能认识到,当运营一个面向全球的社区时,需要行动起来使其遵循 GDPR。如果在社区中,你已经遵循包括 ISO 27001,NIST 和 PCI DSS 在内的健壮安全标准,你已经先人一步。
|
||||
|
||||
可以从如下网站/资源中获取更多关于 GDPR 的信息:
|
||||
|
||||
* [GDPR 官网][3] (欧盟提供)
|
||||
* [官方条约 (欧盟) 2016/679][4] (GDPR,包含翻译)
|
||||
* [GDPR 是什么? 领导人需要知道的 8 件事][5] (企业人项目)
|
||||
* [如何规避 GDPR 合规审计:最佳实践][6] (企业人项目)
|
||||
|
||||
|
||||
### 关于作者
|
||||
|
||||
[][7]
|
||||
|
||||
Robin Muilwijk \- Robin Muilwijk 是一名互联网和电子政务顾问,在 Red Hat 旗下在线发布平台 Opensource.com 担任社区版主,在 Open Organization 担任大使。此外,Robin 还是 eZ 社区董事会成员,[eZ 系统][8] 社区的管理员。Robin 活跃在社交媒体中,促进和支持商业和生活领域的开源项目。可以在 Twitter 上关注 [Robin Muilwijk][9] 以获取更多关于他的信息。
|
||||
|
||||
[更多关于我的信息][10]
|
||||
|
||||
* [学习如何做出贡献][11]
|
||||
|
||||
---

via: [https://opensource.com/article/18/4/gdpr-impact][12]

作者:[Robin Muilwijk][13] 选题:[@lujun9972][14] 译者:[pinewall][15] 校对:[校对者ID][16]

本文由 [LCTT][17] 原创编译,[Linux中国][18] 荣誉推出
[1]: https://www.eugdpr.org/eugdpr.org.html
[2]: https://www.eugdpr.org/key-changes.html
[3]: https://www.eugdpr.org/eugdpr.org.html
[4]: http://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1520531479111&uri=CELEX:32016R0679
[5]: https://enterprisersproject.com/article/2018/4/what-gdpr-8-things-leaders-should-know
[6]: https://enterprisersproject.com/article/2017/9/avoiding-gdpr-compliance-audit-best-practices
[7]: https://opensource.com/users/robinmuilwijk
[8]: http://ez.no
[9]: https://opensource.com/users/robinmuilwijk
[10]: https://opensource.com/users/robinmuilwijk
[11]: https://opensource.com/participate
[12]: https://opensource.com/article/18/4/gdpr-impact
[13]: https://opensource.com/users/robinmuilwijk
[14]: https://github.com/lujun9972
[15]: https://github.com/pinewall
[16]: https://github.com/校对者ID
[17]: https://github.com/LCTT/TranslateProject
[18]: https://linux.cn/
/dev/[u]random:对熵的解释
======

### 熵

当谈到 /dev/random 和 /dev/urandom 的话题时,你总是会听到这个词:“熵”。每个人对此似乎都有自己的比喻。那我的呢?我喜欢将熵视为“随机果汁”。它是果汁,随机数需要它来变得更随机。

如果你曾经生成过 SSL 证书或 GPG 密钥,那么可能已经看到过像下面这样的内容:

```
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
++++++++++..+++++.+++++++++++++++.++++++++++...+++++++++++++++...++++++
+++++++++++++++++++++++++++++.+++++..+++++.+++++.+++++++++++++++++++++++++>.
++++++++++>+++++...........................................................+++++
Not enough random bytes available. Please do some other work to give
the OS a chance to collect more entropy! (Need 290 more bytes)
```
通过在键盘上打字并移动鼠标,你可以帮助生成熵,也就是随机果汁。

你可能会问自己……为什么我需要熵?为什么它对随机数真正变得随机如此重要?假设我们的熵来源仅限于键盘、鼠标和磁盘 IO 的数据。而我们的系统是一台服务器,因此没有鼠标和键盘输入。这意味着唯一的来源就是磁盘 IO。如果它是一块单独的、几乎不使用的磁盘,熵就会很低。这意味着系统产生随机数的能力很弱。换句话说,攻击者可以玩概率游戏,大幅缩短破解 ssh 密钥或解密你以为已加密的会话所需的时间。

好的,但这很难实现对吧?不,实际上并非如此。看看这个 [Debian OpenSSH 漏洞][1]。这个问题是由于某人删除了一些负责添加熵的代码引起的。有传言说,他们删除它是因为它导致 valgrind 发出警告。然而,这样做之后,随机数的随机性就差了很多。事实上,熵少了很多,以至于暴力破解变成了一个可行的攻击向量。

希望到现在为止,我们已经理解了熵对安全性的重要性,无论你是否意识到自己正在使用它。
### /dev/random 和 /dev/urandom

/dev/urandom 是一个伪随机数生成器,在缺乏熵时它也**不会**停止。

/dev/random 是一个真随机数生成器,它会在缺乏熵的时候停止(阻塞)。

大多数情况下,如果我们处理的是实际事务,并且它不涉及你的核心机密,那么 /dev/urandom 是正确的选择。否则,如果使用 /dev/random,那么当系统的熵耗尽时,你的程序就会变得有趣:是直接失败,还是一直挂起直到获得足够的熵,这取决于你编写的程序。
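作为补充示例(假设在 Linux 上,`head` 和 `od` 可用),下面的命令演示了从 /dev/urandom 读取数据不会阻塞;把设备换成 /dev/random,在熵不足的旧内核上就可能挂起:

```shell
#!/bin/sh
# 示意:从 /dev/urandom 读取 16 个字节并以十六进制打印。
# /dev/urandom 不会因为熵不足而阻塞;/dev/random 则可能会。
head -c 16 /dev/urandom | od -An -tx1
```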
### 检查熵

那么,你有多少熵?

```
[root@testbox test]# cat /proc/sys/kernel/random/poolsize
4096
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
2975
[root@testbox test]#
```
/proc/sys/kernel/random/poolsize 说明熵池的大小(以比特为单位),也就是在“水泵”停止出水之前,我们最多能储存多少随机果汁。/proc/sys/kernel/random/entropy_avail 则是当前池中随机果汁的数量(以比特为单位)。
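基于这两个文件,可以写一个小脚本来查看熵池的“水位”。这只是一个示意脚本:这些 /proc 路径为 Linux 专有,读取失败时退回到假设的默认值:

```shell
#!/bin/sh
# 示意:读取熵池大小和当前可用熵,计算填充百分比。
# /proc 路径是 Linux 专有的;读取失败时使用演示用的回退值。
pool=$(cat /proc/sys/kernel/random/poolsize 2>/dev/null || echo 4096)
avail=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
echo "entropy: ${avail}/${pool} bits ($((avail * 100 / pool))%)"
```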
### 我们如何影响这个数字?

这个数字会随着使用而消耗。我能想到的最简单的例子是把 /dev/random 重定向到 /dev/null:

```
[root@testbox test]# cat /dev/random > /dev/null &
[1] 19058
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
0
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
1
[root@testbox test]#
```
影响这个数字的最简单方法是运行 [Haveged][2]。Haveged 是一个守护进程,它利用处理器的“抖动”向系统熵池中添加熵。安装和基本设置都非常简单。

```
[root@b08s02ur ~]# systemctl enable haveged
Created symlink from /etc/systemd/system/multi-user.target.wants/haveged.service to /usr/lib/systemd/system/haveged.service.
[root@b08s02ur ~]# systemctl start haveged
[root@b08s02ur ~]#
```
在流量相对中等的机器上:

```
[root@testbox ~]# pv /dev/random > /dev/null
40 B 0:00:15 [ 0 B/s] [ <=> ]
52 B 0:00:23 [ 0 B/s] [ <=> ]
58 B 0:00:25 [5.92 B/s] [ <=> ]
64 B 0:00:30 [6.03 B/s] [ <=> ]
^C
[root@testbox ~]# systemctl start haveged
[root@testbox ~]# pv /dev/random > /dev/null
7.12MiB 0:00:05 [1.43MiB/s] [ <=> ]
15.7MiB 0:00:11 [1.44MiB/s] [ <=> ]
27.2MiB 0:00:19 [1.46MiB/s] [ <=> ]
43MiB 0:00:30 [1.47MiB/s] [ <=> ]
^C
[root@testbox ~]#
```

使用 pv,我们可以看到通过管道传输了多少数据。如你所见,在运行 haveged 之前,速率只有约 2.1 字节/秒(B/s);而在启动 haveged、把处理器的抖动加入熵池之后,我们得到了大约 1.5MiB/s 的速率。
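如果没有安装 pv,也可以用 dd 做一个粗略的对比(示意命令;对 /dev/random 做同样的操作在低熵系统上可能长时间阻塞,请小心用 Ctrl+C 终止):

```shell
#!/bin/sh
# 示意:从 /dev/urandom 读取 1MiB 并丢弃,time 会报告耗时。
# 把 if= 换成 /dev/random,在低熵的旧内核上可能阻塞很久。
time dd if=/dev/urandom of=/dev/null bs=1024 count=1024 2>/dev/null
```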
--------------------------------------------------------------------------------

via: http://jhurani.com/linux/2017/11/01/entropy-explained.html

作者:[James J][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jblevins.org/log/ssh-vulnkey
[1]:https://jblevins.org/log/ssh-vulnkey
[2]:http://www.issihosts.com/haveged/
4 个 Firefox 扩展
=====



正如我在之前那篇关于 Firefox 扩展的[文章][1]中提到的,Web 浏览器已成为许多用户计算体验的关键组件。现代浏览器已经发展成为功能强大且可扩展的平台,扩展可以添加或修改其功能。Firefox 的扩展是使用 WebExtensions API(一种跨浏览器开发系统)构建的。

在第一篇文章中,我问读者:“你应该安装哪些扩展?”重申一下,这一决定主要取决于你如何使用浏览器、你对隐私的看法、你对扩展开发人员的信任程度以及其他个人偏好。自那篇文章发表以来,我推荐过的一个扩展(Xmarks)已经停止服务。另外,那篇文章收到了大量反馈,本篇更新已将这些反馈考虑在内。

我想再次指出,浏览器扩展通常需要能够读取和(或)修改你访问的网页上的所有内容,你应该仔细考虑这一点。如果一个扩展拥有修改你访问的所有网页的权限,它就可能充当键盘记录器、拦截信用卡信息、在线跟踪你、插入广告,以及执行各种其他恶意活动。这并不意味着每个扩展都会暗中做这些事,但在安装任何扩展之前,你都应该仔细考虑安装来源、所涉及的权限、风险状况以及其他因素。请记住,你可以使用配置文件来控制扩展对攻击面的影响,例如,用一个不装任何扩展的专用配置文件来执行网上银行等任务。

考虑到这一点,这里有四个你可能想要考虑的开源 Firefox 扩展。
### uBlock Origin

![ublock origin ad blocker screenshot][2]

我的第一个建议保持不变。[uBlock Origin][3] 是一款快速、低内存占用、拦截范围广泛的拦截器,它不仅可以拦截广告,还支持你自定义内容过滤。uBlock Origin 的默认行为是使用多个预定义的过滤器列表来拦截广告、跟踪器和恶意软件站点。它允许你任意添加列表和规则,甚至可以锁定为默认拒绝模式。尽管功能强大,它依然高效且性能出色。它仍在定期更新,是同类功能的最佳选择之一。
### Privacy Badger

![privacy badger ad blocker][4]

我的第二个建议也保持不变。如果说有什么变化,那就是自我上一篇文章发表以来,隐私问题愈发受到关注,这使得推荐这个扩展变得更加理所当然。顾名思义,[Privacy Badger][5] 是一个专注于隐私的扩展,可以拦截广告和其他第三方跟踪器。它是<ruby>电子前哨基金会<rt>Electronic Frontier Foundation</rt></ruby>(EFF)的一个项目,他们说:

> “Privacy Badger 诞生于我们的一个愿望:能够推荐一个单独的扩展,它可以自动分析并拦截任何违反用户同意原则的跟踪器或广告;它无需用户进行任何设置、无需专门知识或配置就能良好运行;它由一个明确为用户而非广告商服务的组织开发;它使用算法来判定哪些行为属于跟踪,哪些不属于。”

为什么功能与上一个扩展看起来很类似的 Privacy Badger 也出现在这个列表上?原因有几个:首先,它的工作原理与 uBlock Origin 有根本不同;其次,纵深防御是一项合理的策略。说到纵深防御,EFF 还维护着 [HTTPS Everywhere][6] 扩展,它自动确保许多主流网站使用 HTTPS。安装 Privacy Badger 时,你也可以考虑一并使用 HTTPS Everywhere。

如果你开始觉得这篇文章只是在重复上一篇,那么接下来就是我的建议出现分歧的地方。
### Bitwarden

![Bitwarden][7]

在上一篇文章中推荐 LastPass 时,我提到这可能是一个有争议的选择,事实也确实如此。你是否应该使用密码管理器,以及如果使用,是否应该选择带浏览器插件的密码管理器,都是备受争议的话题,答案很大程度上取决于你的个人风险状况。我认为大多数普通计算机用户都应该使用一个,因为这比最常见的替代方案(在所有地方使用同一个弱密码)要好得多。我仍然坚持这一点。

[Bitwarden][8] 自我上次评估以来确实成熟了。像 LastPass 一样,它对用户友好,支持双因素身份验证,并且相当安全。与 LastPass 不同的是,它是[开源的][9]。它可以搭配或不搭配浏览器插件使用,并支持从包括 LastPass 在内的其他方案导入数据。它的核心功能完全免费,另有每年 10 美元的高级版本。
### Vimium-FF

![Vimium][10]

[Vimium][11] 是另一个开源扩展,它为 Firefox 提供类似 Vim 的键盘导航和控制,自称“黑客的浏览器”。它的默认键位可以轻松定制,其中 `<c-x>`、`<m-x>` 和 `<a-x>` 分别表示 Ctrl+x、Meta+x 和 Alt+x。安装 Vimium 之后,你可以随时键入 `?` 来查看键盘绑定列表。请注意,如果你更喜欢 Emacs,也有一些提供 Emacs 键绑定的扩展。无论哪种方式,我都认为键盘快捷键是一种未被充分利用的效率利器。
### 额外福利:Grammarly

不是每个人都有幸在 Opensource.com 上撰写专栏(尽管你应该认真考虑为网站撰稿;如果你有问题、有兴趣,或者想找一位导师,请联系我,我们聊聊)。但即使不写专栏,正确的语法在各种场景下都大有裨益。试试 [Grammarly][12]。遗憾的是,这个扩展不是开源的,但它确实可以帮助你确保输入的所有内容清晰、有效且没有错误。它通过扫描文本中常见和复杂的语法错误来实现这一点,覆盖从主谓一致、冠词使用到修饰语位置的方方面面。它的基本功能免费,高级版本按月收费。我写这篇文章时就用了它,它发现了许多我自己校对时没有发现的错误。

再次说明,Grammarly 是这个列表中唯一一个非开源的扩展,因此如果你知道类似的高质量开源替代品,请在评论中告诉我们。

这些扩展是我自己觉得有用并推荐给其他人的。请在评论中告诉我你对这次更新建议的看法。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/firefox-extensions

作者:[Jeremy Garcia][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jeremy-garcia
[1]:https://opensource.com/article/18/1/top-5-firefox-extensions
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/ublock.png?itok=_QFEbDmq (ublock origin ad blocker screenshot)
[3]:https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/
[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/privacy_badger_1.0.1.png?itok=qZXQeKtc (privacy badger ad blocker screenshot)
[5]:https://www.eff.org/privacybadger
[6]:https://www.eff.org/https-everywhere
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/bitwarden.png?itok=gZPrCYoi (Bitwarden)
[8]:https://bitwarden.com/
[9]:https://github.com/bitwarden
[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/vimium.png?itok=QRESXjWG (Vimium)
[11]:https://addons.mozilla.org/en-US/firefox/addon/vimium-ff/
[12]:https://www.grammarly.com/