commit
a081a9497c
@ -0,0 +1,245 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10625-1.html)
|
||||
[#]: subject: (Advanced Techniques for Reducing Emacs Startup Time)
|
||||
[#]: via: (https://blog.d46.us/advanced-emacs-startup/)
|
||||
[#]: author: (Joe Schafer https://blog.d46.us/)
|
||||
|
||||
降低 Emacs 启动时间的高级技术
|
||||
======
|
||||
|
||||
> 《[Emacs Start Up Profiler][1]》 的作者教你六项减少 Emacs 启动时间的技术。
|
||||
|
||||
简而言之,照下面几个步骤做:
|
||||
|
||||
1. 使用 Esup 进行性能检测。
|
||||
2. 调整垃圾回收的阈值。
|
||||
3. 使用 use-package 来自动(延迟)加载所有东西。
|
||||
4. 不要使用会引起立即加载的辅助函数。
|
||||
5. 参考我的 [配置][2]。
|
||||
|
||||
### 从 .emacs.d 的失败到现在
|
||||
|
||||
我最近宣布了 .emacs.d 的第三次失败,并完成了第四次 Emacs 配置的迭代。演化过程为:
|
||||
|
||||
1. 拷贝并粘贴 elisp 片段到 `~/.emacs` 中,希望它能工作。
|
||||
2. 借助 `el-get` 来以更结构化的方式来管理依赖关系。
|
||||
3. 放弃自己从零配置,以 Spacemacs 为基础。
|
||||
4. 厌倦了 Spacemacs 的复杂性,基于 `use-package` 重写配置。
|
||||
|
||||
本文汇聚了三次重写和创建 《[Emacs Start Up Profiler][1]》过程中的技巧。非常感谢 Spacemacs、use-package 等背后的团队。没有这些无私的志愿者,这项任务将会困难得多。
|
||||
|
||||
### 不过守护进程模式又如何呢
|
||||
|
||||
在我们开始之前,让我反驳一下优化 Emacs 时的常见观念:“Emacs 旨在作为守护进程来运行的,因此你只需要运行一次而已。”
|
||||
|
||||
这个观点很好,只不过:
|
||||
|
||||
- 速度总是越快越好。
|
||||
- 配置 Emacs 时,可能会有不得不通过重启 Emacs 的情况。例如,你可能为 `post-command-hook` 添加了一个运行缓慢的 `lambda` 函数,很难删掉它。
|
||||
- 重启 Emacs 能帮你验证不同会话之间是否还能保留配置。
|
||||
|
||||
### 1、估算当前以及最佳的启动时间
|
||||
|
||||
第一步是测量当前的启动时间。最简单的方法就是在启动时显示后续步骤进度的信息。
|
||||
|
||||
```
|
||||
;; Use a hook so the message doesn't get clobbered by other messages.
|
||||
(add-hook 'emacs-startup-hook
|
||||
(lambda ()
|
||||
(message "Emacs ready in %s with %d garbage collections."
|
||||
(format "%.2f seconds"
|
||||
(float-time
|
||||
(time-subtract after-init-time before-init-time)))
|
||||
gcs-done)))
|
||||
```
|
||||
|
||||
第二步、测量最佳的启动速度,以便了解可能的情况。我的是 0.3 秒。
|
||||
|
||||
```
|
||||
# -q ignores personal Emacs files but loads the site files.
|
||||
emacs -q --eval='(message "%s" (emacs-init-time))'
|
||||
|
||||
;; For macOS users:
|
||||
open -n /Applications/Emacs.app --args -q --eval='(message "%s" (emacs-init-time))'
|
||||
```
|
||||
|
||||
### 2、检测 Emacs 启动指标对你大有帮助
|
||||
|
||||
《[Emacs StartUp Profiler][1]》(ESUP)将会给你顶层语句执行的详细指标。
|
||||
|
||||
![esup.png][3]
|
||||
|
||||
*图 1: Emacs Start Up Profiler 截图*
|
||||
|
||||
> 警告:Spacemacs 用户需要注意,ESUP 目前与 Spacemacs 的 init.el 文件有冲突。遵照 <https://github.com/jschaf/esup/issues/48> 上说的进行升级。
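顺带一提,如果你也用 `use-package` 管理包,安装 ESUP 大概是下面这个样子(假设已经配置好 MELPA 源,这里只是一个示意片段):

```
;; 示意:从 MELPA 安装 esup,并且只在手动执行 M-x esup 时才真正加载它
(use-package esup
  :ensure t
  :commands (esup))
```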
|
||||
|
||||
### 3、调高启动时垃圾回收的阈值
|
||||
|
||||
这为我节省了 **0.3 秒**。
|
||||
|
||||
Emacs 默认值是 760kB,这在现代机器看来极其保守。真正的诀窍在于初始化完成后再把它降到合理的水平。这为我节省了 0.3 秒。
|
||||
|
||||
```
|
||||
;; Make startup faster by reducing the frequency of garbage
|
||||
;; collection. The default is 800 kilobytes. Measured in bytes.
|
||||
(setq gc-cons-threshold (* 50 1000 1000))
|
||||
|
||||
;; The rest of the init file.
|
||||
|
||||
;; Make gc pauses faster by decreasing the threshold.
|
||||
(setq gc-cons-threshold (* 2 1000 1000))
|
||||
```
|
||||
|
||||
*~/.emacs.d/init.el*
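另一种常见的写法(并非原文给出的配置,只是一个等价的示意)是把“调回阈值”这一步挂在 `emacs-startup-hook` 上,这样就不必依赖语句在 init 文件中的位置:

```
;; 启动时调高阈值,减少垃圾回收次数
(setq gc-cons-threshold (* 50 1000 1000))

;; 初始化完成后,再通过钩子把阈值降回合理水平
(add-hook 'emacs-startup-hook
          (lambda ()
            (setq gc-cons-threshold (* 2 1000 1000))))
```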
|
||||
|
||||
### 4、不要 require 任何东西,而是使用 use-package 来自动加载
|
||||
|
||||
让 Emacs 启动更快的最好方法就是让它少做一些事情。`require` 会立即加载源文件,但是很少有需要在启动阶段就立即用到这些功能的情况。
|
||||
|
||||
在 [use-package][4] 中你只需要声明好需要哪个包中的哪个功能,`use-package` 就会帮你完成正确的事情。它看起来是这样的:
|
||||
|
||||
```
|
||||
(use-package evil-lisp-state ; the Melpa package name
|
||||
|
||||
:defer t ; autoload this package
|
||||
|
||||
:init ; Code to run immediately.
|
||||
(setq evil-lisp-state-global nil)
|
||||
|
||||
:config ; Code to run after the package is loaded.
|
||||
(abn/define-leader-keys "k" evil-lisp-state-map))
|
||||
```
|
||||
|
||||
可以通过查看 `features` 变量来了解 Emacs 当前加载了哪些包。想要更好看的输出,可以使用 [lpkg explorer][5] 或者我在 [abn-funcs-benchmark.el][6] 中的变体。输出看起来类似这样:
|
||||
|
||||
```
|
||||
479 features currently loaded
|
||||
- abn-funcs-benchmark: /Users/jschaf/.dotfiles/emacs/funcs/abn-funcs-benchmark.el
|
||||
- evil-surround: /Users/jschaf/.emacs.d/elpa/evil-surround-20170910.1952/evil-surround.elc
|
||||
- misearch: /Applications/Emacs.app/Contents/Resources/lisp/misearch.elc
|
||||
- multi-isearch: nil
|
||||
- <many more>
|
||||
```
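顺带一提,如果只想快速看一眼已加载特性的数量,也可以直接在 `*scratch*` 缓冲区里对下面的表达式求值(示意,不依赖任何额外的包):

```
;; 当前已加载的特性数量
(length features)

;; 查看完整的特性列表(用 C-h v features RET 也可以)
features
```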
|
||||
|
||||
### 5、不要使用辅助函数来设置模式
|
||||
|
||||
通常,Emacs 包会建议通过运行一个辅助函数来设置键绑定。下面是一些例子:
|
||||
|
||||
* `(evil-escape-mode)`
|
||||
* `(windmove-default-keybindings) ; 设置快捷键。`
|
||||
* `(yas-global-mode 1) ; 复杂的片段配置。`
|
||||
|
||||
可以通过 `use-package` 来对此进行重构以提高启动速度。这些辅助函数只会让你立即加载那些尚用不到的包。
|
||||
|
||||
下面这个例子告诉你如何自动加载 `evil-escape-mode`。
|
||||
|
||||
```
|
||||
;; The definition of evil-escape-mode.
|
||||
(define-minor-mode evil-escape-mode
|
||||
(if evil-escape-mode
|
||||
(add-hook 'pre-command-hook 'evil-escape-pre-command-hook)
|
||||
(remove-hook 'pre-command-hook 'evil-escape-pre-command-hook)))
|
||||
|
||||
;; Before:
|
||||
(evil-escape-mode)
|
||||
|
||||
;; After:
|
||||
(use-package evil-escape
|
||||
:defer t
|
||||
;; Only needed for functions without an autoload comment (;;;###autoload).
|
||||
:commands (evil-escape-pre-command-hook)
|
||||
|
||||
;; Adding to a hook won't load the function until we invoke it.
|
||||
;; With pre-command-hook, that means the first command we run will
|
||||
;; load evil-escape.
|
||||
:init (add-hook 'pre-command-hook 'evil-escape-pre-command-hook))
|
||||
```
|
||||
|
||||
下面来看一个关于 `org-babel` 的例子,这个例子更为复杂。我们通常的配置是这样的:
|
||||
|
||||
```
|
||||
(org-babel-do-load-languages
|
||||
'org-babel-load-languages
|
||||
'((shell . t)
|
||||
(emacs-lisp . nil)))
|
||||
```
|
||||
|
||||
这不是个好的配置,因为 `org-babel-do-load-languages` 定义在 `org.el` 中,而该文件有超过 2 万 4 千行的代码,需要花 0.2 秒来加载。通过查看源代码可以看到 `org-babel-do-load-languages` 仅仅只是加载 `ob-<lang>` 包而已,像这样:
|
||||
|
||||
```
|
||||
;; From org.el in the org-babel-do-load-languages function.
|
||||
(require (intern (concat "ob-" lang)))
|
||||
```
|
||||
|
||||
而在 `ob-<lang>.el` 文件中,我们只关心其中的两个方法 `org-babel-execute:<lang>` 和 `org-babel-expand-body:<lang>`。我们可以延时加载 org-babel 相关功能而无需调用 `org-babel-do-load-languages`,像这样:
|
||||
|
||||
```
|
||||
;; Avoid `org-babel-do-load-languages' since it does an eager require.
|
||||
(use-package ob-python
|
||||
:defer t
|
||||
:ensure org-plus-contrib
|
||||
:commands (org-babel-execute:python))
|
||||
|
||||
(use-package ob-shell
|
||||
:defer t
|
||||
:ensure org-plus-contrib
|
||||
:commands
|
||||
(org-babel-execute:sh
|
||||
org-babel-expand-body:sh
|
||||
|
||||
org-babel-execute:bash
|
||||
org-babel-expand-body:bash))
|
||||
```
|
||||
|
||||
### 6、使用惰性定时器来推迟加载非立即需要的包
|
||||
|
||||
我推迟加载了 9 个包,这帮我节省了 **0.4 秒**。
|
||||
|
||||
有些包特别有用,你希望可以很快就能使用它们,但是它们本身在 Emacs 启动过程中又不是必须的。这些软件包包括:
|
||||
|
||||
- `recentf`:保存最近的编辑过的那些文件。
|
||||
- `saveplace`:保存访问过文件的光标位置。
|
||||
- `server`:开启 Emacs 守护进程。
|
||||
- `autorevert`:自动重载被修改过的文件。
|
||||
- `paren`:高亮匹配的括号。
|
||||
- `projectile`:项目管理工具。
|
||||
- `whitespace`:高亮行尾的空格。
|
||||
|
||||
不要 `require` 这些软件包,**而是等到空闲 N 秒后再加载它们**。我在 1 秒后加载那些比较重要的包,在 2 秒后加载其他所有的包。
|
||||
|
||||
```
|
||||
(use-package recentf
|
||||
;; Loads after 1 second of idle time.
|
||||
:defer 1)
|
||||
|
||||
(use-package uniquify
|
||||
;; Less important than recentf.
|
||||
:defer 2)
|
||||
```
|
||||
|
||||
### 不值得的优化
|
||||
|
||||
不要费力把你的 Emacs 配置文件编译成字节码了。这只节省了大约 0.05 秒。把配置文件编译成字节码还可能导致源文件与编译后的文件不一致从而难以重现错误进行调试。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.d46.us/advanced-emacs-startup/
|
||||
|
||||
作者:[Joe Schafer][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.d46.us/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/jschaf/esup
|
||||
[2]: https://github.com/jschaf/dotfiles/blob/master/emacs/start.el
|
||||
[3]: https://blog.d46.us/images/esup.png
|
||||
[4]: https://github.com/jwiegley/use-package
|
||||
[5]: https://gist.github.com/RockyRoad29/bd4ca6fdb41196a71662986f809e2b1c
|
||||
[6]: https://github.com/jschaf/dotfiles/blob/master/emacs/funcs/abn-funcs-benchmark.el
|
@ -0,0 +1,227 @@
|
||||
toplip:一款十分强大的文件加密解密 CLI 工具
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/12/Toplip-720x340.jpg)
|
||||
|
||||
在市场上能找到许多用来保护文件的文档加密工具。我们已经介绍过其中一些例如 [Cryptomater][1]、[Cryptkeeper][2]、[CryptGo][3]、[Cryptr][4]、[Tomb][5],以及 [GnuPG][6] 等加密工具。今天我们将讨论另一款叫做 “toplip” 的命令行文件加密解密工具。它是一款使用一种叫做 [AES256][7] 的强大加密方法的自由开源的加密工具。它同时也使用了 XTS-AES 设计以保护你的隐私数据。它还使用了 [Scrypt][8],一种基于密码的密钥生成函数来保护你的密码免于暴力破解。
|
||||
|
||||
### 优秀的特性
|
||||
|
||||
相比于其它文件加密工具,toplip 自带以下独特且杰出的特性。
|
||||
|
||||
* 非常强大的基于 XTS-AES256 的加密方法。
|
||||
* <ruby>合理的推诿<rt>Plausible deniability</rt></ruby>。
|
||||
* 加密并嵌入文件到图片(PNG/JPG)中。
|
||||
* 多重密码保护。
|
||||
* 可防护直接暴力破解。
|
||||
* 无可辨识的输出标记。
|
||||
* 开源(GPLv3)。
|
||||
|
||||
### 安装 toplip
|
||||
|
||||
没有什么需要安装的。`toplip` 是独立的可执行二进制文件。你所要做的仅是从 [产品官方页面][9] 下载最新版的 `toplip` 并赋予它可执行权限。为此你只要运行:
|
||||
|
||||
```
|
||||
chmod +x toplip
|
||||
```
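如果想在任意目录下都能直接调用它,可以(可选地)把这个二进制文件放进 `PATH` 中的某个目录,下面的路径只是示意:

```
# 可选:安装到 /usr/local/bin,之后无需 ./ 前缀即可运行 toplip
sudo install -m 755 toplip /usr/local/bin/toplip
```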
|
||||
|
||||
### 使用
|
||||
|
||||
如果你不带任何参数运行 `toplip`,你将看到帮助页面。
|
||||
|
||||
```
|
||||
./toplip
|
||||
```
|
||||
|
||||
![][10]
|
||||
|
||||
请允许我给你展示一些例子。
|
||||
|
||||
为了达到指导目的,我建了两个文件 `file1` 和 `file2`。我同时也有 `toplip` 可执行二进制文件。我把它们全都保存进一个叫做 `test` 的目录。
|
||||
|
||||
![][12]
|
||||
|
||||
#### 加密/解密单个文件
|
||||
|
||||
现在让我们加密 `file1`。为此,运行:
|
||||
|
||||
```
|
||||
./toplip file1 > file1.encrypted
|
||||
```
|
||||
|
||||
这行命令将让你输入密码。一旦你输入完密码,它就会加密 `file1` 的内容并将它们保存进你当前工作目录下一个叫做 `file1.encrypted` 的文件。
|
||||
|
||||
上述命令行的示例输出将会是这样:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
为了验证文件是否的确经过加密,试着打开它你会发现一些随机的字符。
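如果不想直接打开文件,也可以用系统自带的工具粗略检查一下(这不是 toplip 的功能,只是一个示意;`xxd` 在部分系统上可能需要另行安装):

```
# 查看加密文件的前 64 字节,应当是不可读的随机数据
head -c 64 file1.encrypted | xxd

# file 命令也识别不出任何特征,这正是“无可辨识的输出标记”的体现
file file1 file1.encrypted
```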
|
||||
|
||||
为了解密加密过的文件,像以下这样使用 `-d` 参数:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted
|
||||
```
|
||||
|
||||
这行命令会解密提供的文档并在终端窗口显示内容。
|
||||
|
||||
为了保存文档而不是写入到标准输出,运行:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
输入正确的密码解密文档。`file1.encrypted` 的所有内容将会存入一个叫做 `file1.decrypted` 的文档。
|
||||
|
||||
请不要用这种命名方法,我这样用仅仅是为了便于理解。使用其它难以预测的名字。
|
||||
|
||||
#### 加密/解密多个文件
|
||||
|
||||
现在我们来分别用不同的密码加密这两个文件。
|
||||
|
||||
```
|
||||
./toplip -alt file1 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
你会被要求为每个文件输入一个密码,使用不同的密码。
|
||||
|
||||
上述命令行的示例输出将会是这样:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file2 Passphrase #1 : generating keys...Done
|
||||
file1 Passphrase #1 : generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
上述命令会加密两个文件的内容,并把它们保存进同一个叫做 `file3.encrypted` 的文件,解密时提供各自的密码即可取回对应的文件。比如说,如果你提供 `file1` 的密码,`toplip` 将复原 `file1`;如果你提供 `file2` 的密码,`toplip` 将复原 `file2`。
|
||||
|
||||
每个 `toplip` 加密输出都可能包含最多四个单独的文件,并且每个文件都建有各自独特的密码。由于加密输出放在一起的方式,一下判断出是否存在多个文档不是一件容易的事。默认情况下,甚至就算确实只有一个文件是由 `toplip` 加密,随机数据都会自动加上。如果指定了多于一个文件,每个都有自己的密码,那么你可以有选择性地独立解码每个文件,以此来否认其它文件存在的可能性。这能有效地使一个用户在可控的暴露风险下打开一个加密的捆绑文件包。并且对于敌人来说,在计算上没有一种低廉的办法来确认额外的秘密数据存在。这叫做“<ruby>合理的推诿<rt>Plausible deniability</rt></ruby>”,是 toplip 著名的特性之一。
|
||||
|
||||
为了从 `file3.encrypted` 解码 `file1`,仅需输入:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file1.encrypted
|
||||
```
|
||||
|
||||
你将会被要求输入 `file1` 的正确密码。
|
||||
|
||||
为了从 `file3.encrypted` 解码 `file2`,输入:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file2.encrypted
|
||||
```
|
||||
|
||||
别忘了输入 `file2` 的正确密码。
|
||||
|
||||
#### 使用多重密码保护
|
||||
|
||||
这是我中意的另一个炫酷特性。在加密过程中我们可以为单个文件提供多重密码。这样可以保护密码免于暴力尝试。
|
||||
|
||||
```
|
||||
./toplip -c 2 file1 > file1.encrypted
|
||||
```
|
||||
|
||||
这里,`-c 2` 代表两个不同的密码。上述命令行的示例输出将会是这样:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file1 Passphrase #1: generating keys...Done
|
||||
file1 Passphrase #2: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
正如你在上述示例中所看到的,`toplip` 要求我输入两个密码。请注意你必须提供两个不同的密码,而不是提供两遍同一个密码。
|
||||
|
||||
为了解码这个文件,这样做:
|
||||
|
||||
```
|
||||
$ ./toplip -c 2 -d file1.encrypted > file1.decrypted
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file1.encrypted Passphrase #1: generating keys...Done
|
||||
file1.encrypted Passphrase #2: generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
#### 将文件藏在图片中
|
||||
|
||||
将一个文件、消息、图片或视频藏在另一个文件里的方法叫做隐写术。幸运的是 `toplip` 默认包含这个特性。
|
||||
|
||||
为了将文件藏入图片中,像如下所示的样子使用 `-m` 参数。
|
||||
|
||||
```
|
||||
$ ./toplip -m image.png file1 > image1.png
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
这行命令将 `file1` 的内容藏入一张叫做 `image1.png` 的图片中。
|
||||
|
||||
要解码,运行:
|
||||
|
||||
```
|
||||
$ ./toplip -d image1.png > file1.decrypted This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
image1.png Passphrase #1: generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
#### 增加密码复杂度
|
||||
|
||||
为了进一步使文件变得难以破译,我们可以像以下这样增加密码复杂度:
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -alt file1 -c 10 -i 10 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
上述命令将会要求你为 `file1` 输入五条密码,为 `file2` 输入十条密码,并将它们存入单个叫做 `file3.encrypted` 的文件。如你所注意到的,我们在这个例子中又用了另一个 `-i` 参数。这是用来指定密钥生成循环次数。这个选项覆盖了 `scrypt` 函数初始和最终 PBKDF2 阶段的默认循环次数 1。十六进制和十进制数值都是允许的。比如说 `0x8000`、`10` 等。请注意这会大大增加计算次数。
|
||||
|
||||
为了解码 `file1`,使用:
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -d file3.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
为了解码 `file2`,使用:
|
||||
|
||||
```
|
||||
./toplip -c 10 -i 10 -d file3.encrypted > file2.decrypted
|
||||
```
|
||||
|
||||
参考 `toplip` [官网](https://2ton.com.au/toplip/)以了解更多关于其背后的技术信息和使用的加密方式。
|
||||
|
||||
我个人对所有想要保护自己数据的人的建议是,别依赖单一的方法。总是使用多种工具/方法来加密文件。不要在纸上写下密码,也不要将密码存入本地或云端。记住密码,阅后即焚。如果你记不住,可以考虑使用你信赖的密码管理器。
|
||||
|
||||
- [KeeWeb – An Open Source, Cross Platform Password Manager](https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/)
|
||||
- [Buttercup – A Free, Secure And Cross-platform Password Manager](https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/)
|
||||
- [Titan – A Command line Password Manager For Linux](https://www.ostechnix.com/titan-command-line-password-manager-linux/)
|
||||
|
||||
今天就到此为止了,更多好东西后续推出,请保持关注。
|
||||
|
||||
顺祝时祺!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
|
||||
[2]:https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
|
||||
[3]:https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
|
||||
[4]:https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
|
||||
[5]:https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
|
||||
[6]:https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
|
||||
[7]:http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
||||
[8]:http://en.wikipedia.org/wiki/Scrypt
|
||||
[9]:https://2ton.com.au/Products/
|
||||
[10]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
|
||||
[12]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png
|
||||
|
158
published/20180206 Power(Shell) to the people.md
Normal file
@ -0,0 +1,158 @@
|
||||
给大家安利一下 PowerShell
|
||||
======
|
||||
|
||||
> 代码更简洁、脚本更清晰、跨平台一致性等好处是让 Linux 和 OS X 用户喜爱 PowerShell 的原因。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightbulbs.png?itok=pwp22hTw)
|
||||
|
||||
今年(2018)早些时候,[Powershell Core][1] 以 [MIT][3] 开源协议发布了[正式可用版(GA)][2]。PowerShell 算不上是新技术。自 2006 年为 Windows 发布了第一版 PowerShell 以来,PowerShell 的创建者在[结合了][4] Unix shell 的强大和灵活的同时也在弥补他们所意识到的缺点,特别是从组合命令中获取值时所要进行的文本操作。
|
||||
|
||||
在发布了 5 个主要版本之后,PowerShell 已经可以在所有主流操作系统上(包括 OS X 和 Linux)本地运行同样创新的 shell 和命令行环境。一些人(应该说是大多数人)可能依旧在嘲弄这位诞生于 Windows 的闯入者的大胆和冒失:为那些远古以来(从千禧年开始算不算?)便存在着强大的 shell 环境的平台引荐自己。在本帖中,我希望可以将 PowerShell 的优势介绍给大家,甚至是那些经验老道的用户。
|
||||
|
||||
### 跨平台一致性
|
||||
|
||||
如果你计划将脚本从一个执行环境迁移到另一个平台时,你需要确保只使用了那些在两个平台下都起作用的命令和语法。比如在 GNU 系统中,你可以通过以下方式获取昨天的日期:
|
||||
|
||||
```
|
||||
date --date="1 day ago"
|
||||
```
|
||||
|
||||
在 BSD 系统中(比如 OS X),上述语法将没办法工作,因为 BSD 的 date 工具需要以下语法:
|
||||
|
||||
```
|
||||
date -v -1d
|
||||
```
|
||||
|
||||
因为 PowerShell 具有宽松的许可证,并且在所有的平台都有构建,所以你可以把 PowerShell 和你的应用一起打包。因此,当你的脚本运行在目标系统中时,它们会运行在一样的 shell 环境中,使用与你的测试环境中同样的命令实现。
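举个例子,下面这条获取昨天日期的命令在 Linux、macOS 和 Windows 的 PowerShell 中写法完全相同(其中的对象语法会在后文展开,这里只是一个示意):

```
# 跨平台一致:同一条命令在任何平台的 PowerShell 中都能得到昨天的日期
(Get-Date).AddDays(-1).ToString('yyyy-MM-dd')
```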
|
||||
|
||||
### 对象和结构化数据
|
||||
|
||||
*nix 命令和工具依赖于你使用和操控非结构化数据的能力。对于那些长期活在 `sed`、 `grep` 和 `awk` 环境下的人们来说,这可能是小菜一碟,但现在有更好的选择。
|
||||
|
||||
让我们使用 PowerShell 重写那个获取昨天日期的实例。为了获取当前日期,使用 `Get-Date` cmdlet(读作 “commandlet”):
|
||||
|
||||
```
|
||||
> Get-Date
|
||||
|
||||
Sunday, January 21, 2018 8:12:41 PM
|
||||
```
|
||||
|
||||
你所看到的输出实际上并不是一个文本字符串。不如说,这是 .Net Core 对象的一个字符串表现形式。就像任何 OOP 环境中的对象一样,它具有类型以及你可以调用的方法。
|
||||
|
||||
让我们来证明这一点:
|
||||
|
||||
```
|
||||
> $(Get-Date).GetType().FullName
|
||||
System.DateTime
|
||||
```
|
||||
|
||||
`$(...)` 语法就像你所期望的 POSIX shell 中那样,计算括弧中的命令然后替换整个表达式。但是在 PowerShell 中,这种表达式中的 `$` 是可选的。并且,最重要的是,结果是一个 .Net 对象,而不是文本。因此我们可以调用该对象中的 `GetType()` 方法来获取该对象类型(类似于 Java 中的 `Class` 对象),`FullName` [属性][5] 则用来获取该类型的全称。
|
||||
|
||||
那么,这种对象导向的 shell 是如何让你的工作变得更加简单呢?
|
||||
|
||||
首先,你可以将任何对象通过管道传给 `Get-Member` cmdlet,来查看它提供的所有方法和属性。
|
||||
|
||||
```
|
||||
> (Get-Date) | Get-Member
|
||||
PS /home/yevster/Documents/ArticlesInProgress> $(Get-Date) | Get-Member
|
||||
|
||||
|
||||
TypeName: System.DateTime
|
||||
|
||||
Name MemberType Definition
|
||||
---- ---------- ----------
|
||||
Add Method datetime Add(timespan value)
|
||||
AddDays Method datetime AddDays(double value)
|
||||
AddHours Method datetime AddHours(double value)
|
||||
AddMilliseconds Method datetime AddMilliseconds(double value)
|
||||
AddMinutes Method datetime AddMinutes(double value)
|
||||
AddMonths Method datetime AddMonths(int months)
|
||||
AddSeconds Method datetime AddSeconds(double value)
|
||||
AddTicks Method datetime AddTicks(long value)
|
||||
AddYears Method datetime AddYears(int value)
|
||||
CompareTo Method int CompareTo(System.Object value), int ...
|
||||
```
|
||||
|
||||
你可以很快的看到 DateTime 对象具有一个 `AddDays` 方法,从而可以使用它来快速的获取昨天的日期:
|
||||
|
||||
```
|
||||
> (Get-Date).AddDays(-1)
|
||||
|
||||
Saturday, January 20, 2018 8:24:42 PM
|
||||
```
|
||||
|
||||
为了做一些更刺激的事,让我们调用 Yahoo 的天气服务(因为它不需要 API 令牌)然后获取你的本地天气。
|
||||
|
||||
```
|
||||
$city="Boston"
|
||||
$state="MA"
|
||||
$url="https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text%3D%22${city}%2C%20${state}%22)&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys"
|
||||
```
|
||||
|
||||
现在,我们可以使用老派的方法直接运行 `curl $url` 来获取一大块 JSON 文本,或者……
|
||||
|
||||
```
|
||||
$weather=(Invoke-RestMethod $url)
|
||||
```
|
||||
|
||||
如果你查看了 `$weather` 类型(运行 `echo $weather.GetType().FullName`),你将会发现它是一个 `PSCustomObject`。这是一个反映了 JSON 结构的动态对象。
|
||||
|
||||
然后 PowerShell 可以通过 tab 补齐来帮助你完成命令输入。只需要输入 `$weather.`(确保包含了 `.`)然后按下 `Tab` 键。你将看到所有根级别的 JSON 键。输入其中的一个,然后跟上 `.`,再一次按下 `Tab` 键,你将看到它所有的子键(如果有的话)。
|
||||
|
||||
因此,你可以轻易的导航到你所想要的数据:
|
||||
|
||||
```
|
||||
> echo $weather.query.results.channel.atmosphere.pressure
|
||||
1019.0
|
||||
|
||||
> echo $weather.query.results.channel.wind.chill
41
|
||||
```
|
||||
|
||||
并且如果你有非结构化的 JSON 或 CSV 数据(通过外部命令返回的),只需要将它通过管道传给相应的 `ConvertFrom-Json` 或 `ConvertFrom-Csv` cmdlet,你就可以得到一个漂亮干净的对象。
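下面是一个小示意(其中的 JSON 字符串和 CSV 数据都是虚构的例子):

```
# 把一段 JSON 文本转换成对象,然后像访问属性一样取值
$json = '{ "name": "emacs", "stars": 4000 }'
($json | ConvertFrom-Json).name

# CSV 同理:每行文本经 ConvertFrom-Csv 变成带属性的对象
'name,age', 'alice,30', 'bob,25' |
    ConvertFrom-Csv |
    Where-Object { [int]$_.age -gt 26 }
```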
|
||||
|
||||
### 计算 vs. 自动化
|
||||
|
||||
我们使用 shell 有两种目的。一种是用于计算:运行单独的命令,然后手动响应它们的输出。另一种是自动化:编写脚本来执行多个命令,然后以编程的方式响应它们的输出。
|
||||
|
||||
我们大多数人都能发现这两种目的在 shell 上的不同且互相冲突的要求。计算任务要求 shell 简洁明了。用户输入的越少越好。但如果用户输入对其他用户来说几乎难以理解,那这一点就不重要了。脚本,从另一个角度来讲是代码。可读性和可维护性是关键。这一方面,POSIX 工具通常是失败的。虽然一些命令通常会为它们的参数提供简洁明了的语法(如:`-f` 和 `--force`),但是命令名字本身就不简洁明了。
|
||||
|
||||
PowerShell 提供了几种机制来消除这种浮士德式的取舍。
|
||||
|
||||
首先,tab 补齐可以消除键入参数名的需要。比如:键入 `Get-Random -Mi`,按下 `Tab` 然后 PowerShell 将会为你完成参数:`Get-Random -Minimum`。但是如果你想更简洁一些,你甚至不需要按下 `Tab`。如下所示,PowerShell 可以理解:
|
||||
|
||||
```
|
||||
Get-Random -Mi 1 -Ma 10
|
||||
```
|
||||
|
||||
因为 `Mi` 和 `Ma` 每一个都具有独立不同的补齐。
|
||||
|
||||
你可能已经留意到所有的 PowerShell cmdlet 名称都具有动词-名词结构。这有助于提高脚本的可读性,但是你可能不想一而再、再而三地键入 `Get-`。其实并不需要!如果你直接键入一个名词而没有动词,PowerShell 将查找带有该名词的 `Get-` 命令。
|
||||
|
||||
> 小心:尽管 PowerShell 不区分大小写,但在使用 PowerShell 命令时,名词首字母大写是一个好习惯。比如,键入 `date` 将会调用系统中的 `date` 工具。键入 `Date` 将会调用 PowerShell 的 `Get-Date` cmdlet。
|
||||
|
||||
如果这还不够,PowerShell 还提供了别名,用来创建简单的名字。比如,如果键入 `alias -name cd`,你将会发现 `cd` 在 PowerShell 中实际上是 `Set-Location` 命令的别名。
|
||||
|
||||
所以回顾一下 —— 你可以使用强大的 tab 补全、别名和名词补全来保持命令简短、自动化并保持一致性,与此同时还能享受丰富、可读的语法格式。
|
||||
|
||||
### 那么……你看呢?
|
||||
|
||||
这些只是 PowerShell 的一部分优势。还有更多特性和 cmdlet,我还没讨论(如果你想弄哭 `grep` 的话,可以查看 [Where-Object][6] 或其别称 `?`)。如果你有点怀旧的话,PowerShell 可以为你加载原来的本地工具。但是给自己足够的时间来适应 PowerShell 面向对象 cmdlet 的世界,然后你将发现自己会选择忘记回去的路。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/powershell-people
|
||||
|
||||
作者:[Yev Bronshteyn][a]
|
||||
译者:[sanfusu](https://github.com/sanfusu)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/yevster
|
||||
[1]:https://github.com/PowerShell/PowerShell/blob/master/README.md
|
||||
[2]:https://blogs.msdn.microsoft.com/powershell/2018/01/10/powershell-core-6-0-generally-available-ga-and-supported/
|
||||
[3]:https://spdx.org/licenses/MIT
|
||||
[4]:http://www.jsnover.com/Docs/MonadManifesto.pdf
|
||||
[5]:https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/properties
|
||||
[6]:https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/where-object?view=powershell-6
|
@ -0,0 +1,204 @@
|
||||
Python 的 ChatOps 库:Opsdroid 和 Errbot
|
||||
======
|
||||
|
||||
> 学习一下 Python 世界里最广泛使用的 ChatOps 库:每个都能做什么,如何使用。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
|
||||
|
||||
ChatOps 是基于会话导向而进行的开发。其思路是你可以编写能够对聊天窗口中的某些输入进行回复的可执行代码。作为一个开发者,你能够用 ChatOps 从 Slack 合并拉取请求,自动从收到的 Facebook 消息中给某人分配支持工单,或者通过 IRC 检查开发状态。
|
||||
|
||||
在 Python 世界,最为广泛使用的 ChatOps 库是 Opsdroid 和 Errbot。在这个月的 Python 专栏,让我们一起聊聊使用它们是怎样的体验,它们各自适用于什么方面以及如何着手使用它们。
|
||||
|
||||
### Opsdroid
|
||||
|
||||
[Opsdroid][2] 是一个相对年轻的(始于 2016)Python 开源聊天机器人库。它有着良好的开发文档,不错的教程,并且包含能够帮助你对接流行的聊天服务的插件。
|
||||
|
||||
#### 它内置了什么
|
||||
|
||||
库本身并没有自带所有你上手所需要的东西,但这是故意的。这个轻量级的框架鼓励你使用它现有的连接器(Opsdroid 对帮你接入聊天服务的插件的称呼)或者编写你自己的连接器,但它不会因为自带一堆你用不到的连接器而变得臃肿。你可以轻松使用现有的 Opsdroid 连接器来接入:
|
||||
|
||||
+ 命令行
|
||||
+ Cisco Spark
|
||||
+ Facebook
|
||||
+ GitHub
|
||||
+ Matrix
|
||||
+ Slack
|
||||
+ Telegram
|
||||
+ Twitter
|
||||
+ Websocket
|
||||
|
||||
Opsdroid 会调用一些函数来让聊天机器人展现它的“技能”。这些技能其实是异步 Python 函数,并使用 Opsdroid 称为“匹配器”的装饰器来匹配消息。你可以让 Opsdroid 项目使用与配置文件位于同一代码库中的“技能”,也可以调用外部公共或私有仓库中的“技能”。
|
||||
|
||||
你同样可以启用一些现存的 Opsdroid “技能”,包括 [seen][3] —— 它会告诉你聊天机器人上次是什么时候看到某个用户的,以及 [weather][4] —— 会将天气报告给用户。
|
||||
|
||||
最后,Opsdroid 允许你使用现存的数据库模块设置数据库。现在 Opsdroid 支持的数据库包括:
|
||||
|
||||
+ Mongo
|
||||
+ Redis
|
||||
+ SQLite
|
||||
|
||||
你可以在 Opsdroid 项目的 `configuration.yaml` 文件中设置数据库、技能和连接器。
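一个最小化的 `configuration.yaml` 片段大致如下(连接器和技能部分沿用本文后面的例子,数据库部分的具体键名请以 Opsdroid 官方文档为准,这里只是示意):

```
connectors:
  - name: slack
    api-token: "this-is-my-token"

databases:
  - name: sqlite

skills:
  - name: hello
```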
|
||||
|
||||
#### Opsdroid 的优势
|
||||
|
||||
**Docker 支持:**从一开始 Opsdroid 就打算在 Docker 中良好运行,关于 Docker 的说明是它的 [安装文档][5] 中的一部分。把 Opsdroid 和 Docker Compose 一起使用也很简单:将 Opsdroid 设置成一个服务,当你运行 `docker-compose up` 时,Opsdroid 服务将会启动,你的聊天机器人也就绪了。
|
||||
|
||||
```
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
opsdroid:
|
||||
container_name: opsdroid
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
```
|
||||
|
||||
**丰富的连接器:** Opsdroid 支持 Slack 和 GitHub 等九种外部服务的连接器。你所要做的只是在设置文件中启用相应的连接器,然后把必要的口令或者 API 密钥传给它。比如为了让 Opsdroid 能在一个叫做 `#updates` 的 Slack 频道发帖,你需要将以下代码加入设置文件的 `connectors` 部分:
|
||||
|
||||
```
|
||||
- name: slack
|
||||
api-token: "this-is-my-token"
|
||||
default-room: "#updates"
|
||||
```
|
||||
|
||||
在设置 Opsdroid 以接入 Slack 之前你需要[添加一个机器人用户][6]。
|
||||
|
||||
如果你需要接入一个 Opsdroid 不支持的服务,在[文档][7]里有有添加你自己的连接器的教程。
|
||||
|
||||
**相当不错的文档:** 特别是对于一个在积极开发中的新兴库来说,Opsdroid 的文档十分有帮助。这些文档包括一篇带你创建几个不同的基本技能的[教程][8]。Opsdroid 在[技能][9]、[连接器][7]、[数据库][10],以及[匹配器][11]方面的文档也十分清晰。
|
||||
|
||||
它所支持的技能和连接器的仓库为它的技能提供了富有帮助的示范代码。
|
||||
|
||||
**自然语言处理:** Opsdroid 的技能里面能使用正则表达式,但也同样提供了几个包括 [Dialogflow][12],[luis.ai][13],[Recast.AI][14] 以及 [wit.ai][15] 的 NLP API。
|
||||
|
||||
#### Opsdroid 可能的不足
|
||||
|
||||
Opsdroid 对它的一部分连接器还没有启用全部的特性。比如说,Slack API 允许你向你的消息添加色条、图片以及其他的“附件”。Opsdroid Slack 连接器并没有启用“附件”特性,所以如果那些特性对你来说很重要的话,你需要编写一个自定义的 Slack 连接器。如果连接器缺少一个你需要的特性,Opsdroid 将欢迎你的[贡献][16]。文档中还可以补充更多的例子,特别是针对可以预见的常见使用场景。
|
||||
|
||||
#### 示例用法
|
||||
|
||||
```
|
||||
from opsdroid.matchers import match_regex
|
||||
import random
|
||||
|
||||
|
||||
@match_regex(r'hi|hello|hey|hallo')
|
||||
async def hello(opsdroid, config, message):
|
||||
text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(message.user)
|
||||
await message.respond(text)
|
||||
```
|
||||
|
||||
*hello/\_\_init\_\_.py*
|
||||
|
||||
|
||||
```
|
||||
connectors:
|
||||
- name: websocket
|
||||
|
||||
skills:
|
||||
- name: hello
|
||||
repo: "https://github.com/<user_id>/hello-skill"
|
||||
|
||||
```
|
||||
|
||||
*configuration.yaml*
|
||||
|
||||
|
||||
### Errbot
|
||||
|
||||
[Errbot][17] 是一个功能齐全的开源聊天机器人。Errbot 发行于 2012 年,并且拥有人们从一个成熟的项目能期待的一切,包括良好的文档、优秀的教程以及许多帮你连入现有的流行聊天服务的插件。
|
||||
|
||||
#### 它内置了什么
|
||||
|
||||
不像采用了较轻量级方式的 Opsdroid,Errbot 自带了你需要可靠地创建一个自定义机器人的一切东西。
|
||||
|
||||
Errbot 包括了对于本地 XMPP、IRC、Slack、Hipchat 以及 Telegram 服务的支持。它通过社区支持的后端列出了另外十种服务。
|
||||
|
||||
#### Errbot 的优势
|
||||
|
||||
**良好的文档:** Errbot 的文档成熟易读。
|
||||
|
||||
**动态插件架构:** Errbot 允许你通过和聊天机器人交谈来安全地安装、卸载、更新、启用以及禁用插件,这使得开发和添加特性十分简便。得益于 Errbot 的细粒度权限系统,出于安全考虑,这一切都可以被锁定。
|
||||
|
||||
当某个人输入 `!help`,Errbot 会使用你插件中的文档字符串来为可用的命令生成文档,这使得了解每条命令的作用更加简便。
|
||||
|
||||
**内置的管理和安全特性:** Errbot 允许你限制拥有管理员权限的用户列表,甚至提供细粒度的访问控制。比如说,你可以限制特定用户或聊天房间访问特定命令。
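在 `config.py` 中,这类限制大致长下面这个样子(具体键名请以 Errbot 文档为准,用户名是虚构的示意):

```
# 拥有管理员权限的用户
BOT_ADMINS = ('@yourname',)

# 细粒度访问控制:限制某条命令只允许特定用户使用(键名以官方文档为准)
ACCESS_CONTROLS = {
    'uptime': {'allowusers': ('@yourname',)},
}
```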
|
||||
|
||||
**额外的插件框架:** Errbot 支持钩子、回调、子命令、webhook、轮询以及其它[更多特性][18]。如果那些还不够,你甚至可以编写[动态插件][19]。当你需要基于在远程服务器上的可用命令来启用对应的聊天命令时,这个特性十分有用。
|
||||
|
||||
**自带测试框架:** Errbot 支持 [pytest][20],同时也自带一些能使你简便测试插件的有用功能。它的“[测试你的插件][21]”的文档出于深思熟虑,并提供了足够的资料让你上手。
|
||||
|
||||
#### Errbot 可能的不足
|
||||
|
||||
**以 “!” 开头:** 默认情况下,Errbot 命令发出时以一个惊叹号打头(`!help` 以及 `!hello`)。一些人可能会喜欢这样,但是另一些人可能认为这让人烦恼。谢天谢地,这很容易关掉。
|
||||
|
||||
**插件元数据:** 首先,Errbot 的 [Hello World][22] 插件示例看上去易于使用。然而我一直无法加载我的插件,直到进一步阅读教程后才发现还需要一个 `.plug` 文件,这是 Errbot 用来加载插件的文件。这可能有点吹毛求疵,但在我深挖文档之前,这一点并不是显而易见的。
|
||||
|
||||
### 示例用法
|
||||
|
||||
|
||||
```
|
||||
import random
|
||||
from errbot import BotPlugin, botcmd
|
||||
|
||||
class Hello(BotPlugin):
|
||||
|
||||
@botcmd
|
||||
def hello(self, msg, args):
|
||||
text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(msg.frm)
|
||||
return text
|
||||
```
|
||||
|
||||
*hello.py*
|
||||
|
||||
```
|
||||
[Core]
|
||||
Name = Hello
|
||||
Module = hello
|
||||
|
||||
[Python]
|
||||
Version = 2+
|
||||
|
||||
[Documentation]
|
||||
Description = Example "Hello" plugin
|
||||
```
|
||||
|
||||
*hello.plug*
|
||||
|
||||
|
||||
你用过 Errbot 或 Opsdroid 吗?如果用过请留下关于你对于这些工具印象的留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/python-chatops-libraries-opsdroid-and-errbot
|
||||
|
||||
作者:[Jeff Triplett][a], [Lacey Williams Henschel][1]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/laceynwilliams
|
||||
[1]:https://opensource.com/users/laceynwilliams
|
||||
[2]:https://opsdroid.github.io/
|
||||
[3]:https://github.com/opsdroid/skill-seen
|
||||
[4]:https://github.com/opsdroid/skill-weather
|
||||
[5]:https://opsdroid.readthedocs.io/en/stable/#docker
|
||||
[6]:https://api.slack.com/bot-users
|
||||
[7]:https://opsdroid.readthedocs.io/en/stable/extending/connectors/
|
||||
[8]:https://opsdroid.readthedocs.io/en/stable/tutorials/introduction/
|
||||
[9]:https://opsdroid.readthedocs.io/en/stable/extending/skills/
|
||||
[10]:https://opsdroid.readthedocs.io/en/stable/extending/databases/
|
||||
[11]:https://opsdroid.readthedocs.io/en/stable/matchers/overview/
|
||||
[12]:https://opsdroid.readthedocs.io/en/stable/matchers/dialogflow/
|
||||
[13]:https://opsdroid.readthedocs.io/en/stable/matchers/luis.ai/
|
||||
[14]:https://opsdroid.readthedocs.io/en/stable/matchers/recast.ai/
|
||||
[15]:https://opsdroid.readthedocs.io/en/stable/matchers/wit.ai/
|
||||
[16]:https://opsdroid.readthedocs.io/en/stable/contributing/
|
||||
[17]:http://errbot.io/en/latest/
|
||||
[18]:http://errbot.io/en/latest/features.html#extensive-plugin-framework
|
||||
[19]:http://errbot.io/en/latest/user_guide/plugin_development/dynaplugs.html
|
||||
[20]:http://pytest.org/
|
||||
[21]:http://errbot.io/en/latest/user_guide/plugin_development/testing.html
|
||||
[22]:http://errbot.io/en/latest/index.html#simple-to-build-upon
|
@ -0,0 +1,77 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10639-1.html)
|
||||
[#]: subject: (Asynchronous rsync with Emacs, dired and tramp)
|
||||
[#]: via: (https://vxlabs.com/2018/03/30/asynchronous-rsync-with-emacs-dired-and-tramp/)
|
||||
[#]: author: (cpbotha https://vxlabs.com/author/cpbotha/)
|
||||
|
||||
在 Emacs 的 dired 和 tramp 中异步运行 rsync
|
||||
======
|
||||
|
||||
[Trần Xuân Trường][2] 写的 [tmtxt-dired-async][1] 是一个不为人知的 Emacs 包,它可以扩展 dired(Emacs 内置的文件管理器),使之可以异步地运行 `rsync` 和其他命令 (例如压缩、解压缩和下载)。
|
||||
|
||||
这意味着你可以拷贝上 GB 的目录而不影响 Emacs 的其他任务。
|
||||
|
||||
它的一个功能是让你可以通过 `C-c C-a` 从不同位置添加任意多的文件到一个等待列表中,然后按下 `C-c C-v`,异步地使用 `rsync` 将整个等待列表中的文件同步到目标目录中。仅这一个功能就值得一试了。
|
||||
|
||||
例如这里将 arduino 1.9 的 beta 存档同步到另一个目录中:
|
||||
|
||||
![][4]
|
||||
|
||||
整个进度完成后,底部的窗口会在 5 秒后自动退出。下面是异步解压上面的 arduino 存档后出现的另一个会话:
|
||||
|
||||
![][6]
|
||||
|
||||
这个包进一步增加了我 dired 配置的实用性。
|
||||
|
||||
我刚刚贡献了 [一个拉取请求来允许 tmtxt-dired-async 同步到远程 tramp 目录中][7],而且我立即使用该功能来将上 GB 的新照片传输到 Linux 服务器上。
|
||||
|
||||
若你想配置 tmtxt-dired-async,下载 [tmtxt-async-tasks.el][8](被依赖的库)以及 [tmtxt-dired-async.el][9](若你想让它支持 tramp,请确保合并使用了我的拉取请求)到 `~/.emacs.d/` 目录中,然后添加下面配置:
|
||||
|
||||
```
|
||||
;; no MELPA packages of this, so we have to do a simple check here
|
||||
(setq dired-async-el (expand-file-name "~/.emacs.d/tmtxt-dired-async.el"))
|
||||
(when (file-exists-p dired-async-el)
|
||||
(load (expand-file-name "~/.emacs.d/tmtxt-async-tasks.el"))
|
||||
(load dired-async-el)
|
||||
(define-key dired-mode-map (kbd "C-c C-r") 'tda/rsync)
|
||||
(define-key dired-mode-map (kbd "C-c C-z") 'tda/zip)
|
||||
(define-key dired-mode-map (kbd "C-c C-u") 'tda/unzip)
|
||||
|
||||
(define-key dired-mode-map (kbd "C-c C-a") 'tda/rsync-multiple-mark-file)
|
||||
(define-key dired-mode-map (kbd "C-c C-e") 'tda/rsync-multiple-empty-list)
|
||||
(define-key dired-mode-map (kbd "C-c C-d") 'tda/rsync-multiple-remove-item)
|
||||
(define-key dired-mode-map (kbd "C-c C-v") 'tda/rsync-multiple)
|
||||
|
||||
(define-key dired-mode-map (kbd "C-c C-s") 'tda/get-files-size)
|
||||
|
||||
(define-key dired-mode-map (kbd "C-c C-q") 'tda/download-to-current-dir))
|
||||
```
|
||||
|
||||
祝你开心!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://vxlabs.com/2018/03/30/asynchronous-rsync-with-emacs-dired-and-tramp/
|
||||
|
||||
作者:[cpbotha][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://vxlabs.com/author/cpbotha/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://truongtx.me/tmtxt-dired-async.html
|
||||
[2]: https://truongtx.me/about.html
|
||||
[3]: https://i0.wp.com/vxlabs.com/wp-content/uploads/2018/03/rsync-arduino-zip.png?resize=660%2C340&ssl=1
|
||||
[4]: https://i0.wp.com/vxlabs.com/wp-content/uploads/2018/03/rsync-arduino-zip.png?ssl=1
|
||||
[5]: https://i1.wp.com/vxlabs.com/wp-content/uploads/2018/03/progress-window-5s.png?resize=660%2C310&ssl=1
|
||||
[6]: https://i1.wp.com/vxlabs.com/wp-content/uploads/2018/03/progress-window-5s.png?ssl=1
|
||||
[7]: https://github.com/tmtxt/tmtxt-dired-async/pull/6
|
||||
[8]: https://github.com/tmtxt/tmtxt-async-tasks
|
||||
[9]: https://github.com/tmtxt/tmtxt-dired-async
|
194
published/20180826 Be productive with Org-mode.md
Normal file
@ -0,0 +1,194 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10634-1.html)
|
||||
[#]: subject: (Be productive with Org-mode)
|
||||
[#]: via: (https://www.badykov.com/emacs/2018/08/26/be-productive-with-org-mode/)
|
||||
[#]: author: (Ayrat Badykov https://www.badykov.com)
|
||||
|
||||
高效使用 Org 模式
|
||||
======
|
||||
|
||||
![org-mode-collage][1]
|
||||
|
||||
### 简介
|
||||
|
||||
在我 [前一篇关于 Emacs 的文章中][2] 我提到了 <ruby>[Org 模式][3]<rt>Org-mode</rt></ruby>,这是一个笔记管理工具和组织工具。本文中,我将会描述一下我日常的 Org 模式使用案例。
|
||||
|
||||
### 笔记和待办列表
|
||||
|
||||
首先而且最重要的是,Org 模式是一个管理笔记和待办列表的工具,Org 模式的所有工具都聚焦于使用纯文本文件记录笔记。我使用 Org 模式管理多种笔记。
|
||||
|
||||
#### 一般性笔记
|
||||
|
||||
Org 模式最基本的应用场景就是以笔记的形式记录下你想记住的事情。比如,下面是我正在学习的笔记内容:
|
||||
|
||||
```
|
||||
* Learn
|
||||
** Emacs LISP
|
||||
*** Plan
|
||||
|
||||
- [ ] Read best practices
|
||||
- [ ] Finish reading Emacs Manual
|
||||
- [ ] Finish Exercism Exercises
|
||||
- [ ] Write a couple of simple plugins
|
||||
- Notification plugin
|
||||
|
||||
*** Resources
|
||||
|
||||
https://www.gnu.org/software/emacs/manual/html_node/elisp/index.html
|
||||
http://exercism.io/languages/elisp/about
|
||||
[[http://batsov.com/articles/2011/11/30/the-ultimate-collection-of-emacs-resources/][The Ultimate Collection of Emacs Resources]]
|
||||
|
||||
** Rust gamedev
|
||||
*** Study [[https://github.com/SergiusIW/gate][gate]] 2d game engine with web assembly support
|
||||
*** [[ggez][https://github.com/ggez/ggez]]
|
||||
*** [[https://www.amethyst.rs/blog/release-0-8/][Amethyst 0.8 Relesed]]
|
||||
|
||||
** Upgrade Elixir/Erlang Skills
|
||||
*** Read Erlang in Anger
|
||||
```
|
||||
|
||||
借助 [org-bullets][4] 它看起来是这样的:
|
||||
|
||||
![notes][5]
|
||||
|
||||
在这个简单的例子中,你能看到 Org 模式的一些功能:
|
||||
|
||||
- 笔记允许嵌套
|
||||
- 链接
|
||||
- 带复选框的列表
|
||||
|
||||
#### 项目待办
|
||||
|
||||
我在工作时时常会发现一些能够改进或修复的事情。我并不会在代码文件中留下 TODO 注释 (坏味道),相反我使用 [org-projectile][6] 来在另一个文件中记录一个 TODO 事项,并留下一个快捷方式。下面是一个该文件的例子:
|
||||
|
||||
```
|
||||
* [[elisp:(org-projectile-open-project%20"mana")][mana]] [3/9]
|
||||
:PROPERTIES:
|
||||
:CATEGORY: mana
|
||||
:END:
|
||||
** DONE [[file:~/Development/mana/apps/blockchain/lib/blockchain/contract/create_contract.ex::insufficient_gas_before_homestead%20=][fix this check using evm.configuration]]
|
||||
CLOSED: [2018-08-08 Ср 09:14]
|
||||
[[https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md][eip2]]:
|
||||
If contract creation does not have enough gas to pay for the final gas fee for
|
||||
adding the contract code to the state, the contract creation fails (i.e. goes out-of-gas)
|
||||
rather than leaving an empty contract.
|
||||
** DONE Upgrade Elixir to 1.7.
|
||||
CLOSED: [2018-08-08 Ср 09:14]
|
||||
** TODO [#A] Difficulty tests
|
||||
** TODO [#C] Upgrage to OTP 21
|
||||
** DONE [#A] EIP150
|
||||
CLOSED: [2018-08-14 Вт 21:25]
|
||||
*** DONE operation cost changes
|
||||
CLOSED: [2018-08-08 Ср 20:31]
|
||||
*** DONE 1/64th for a call and create
|
||||
CLOSED: [2018-08-14 Вт 21:25]
|
||||
** TODO [#C] Refactor interfaces
|
||||
** TODO [#B] Caching for storage during execution
|
||||
** TODO [#B] Removing old merkle trees
|
||||
** TODO do not calculate cost twice
|
||||
* [[elisp:(org-projectile-open-project%20".emacs.d")][.emacs.d]] [1/3]
|
||||
:PROPERTIES:
|
||||
:CATEGORY: .emacs.d
|
||||
:END:
|
||||
** TODO fix flycheck issues (emacs config)
|
||||
** TODO use-package for fetching dependencies
|
||||
** DONE clean configuration
|
||||
CLOSED: [2018-08-26 Вс 11:48]
|
||||
```
|
||||
|
||||
它看起来是这样的:
|
||||
|
||||
![project-todos][7]
|
||||
|
||||
本例中你能看到更多的 Org 模式的功能:
|
||||
|
||||
- 待办列表具有 `TODO`、`DONE` 两个状态。你还可以定义自己的状态 (`WAITING` 等)
|
||||
- 关闭的事项有 `CLOSED` 时间戳
|
||||
- 有些事项有优先级 - A、B、C
|
||||
- 链接可以指向本地文件(`[[file:~/...]]`)
|
||||
|
||||
#### 捕获模板
|
||||
|
||||
正如 Org 模式的文档中所描述的,捕获可以在不怎么干扰你工作流的情况下让你快速存储笔记。
|
||||
|
||||
我配置了许多捕获模板,可以帮我快速记录想要记住的事情。
|
||||
|
||||
```
|
||||
(setq org-capture-templates
|
||||
'(("t" "Todo" entry (file+headline "~/Dropbox/org/todo.org" "Todo soon")
|
||||
"* TODO %? \n %^t")
|
||||
("i" "Idea" entry (file+headline "~/Dropbox/org/ideas.org" "Ideas")
|
||||
"* %? \n %U")
|
||||
("e" "Tweak" entry (file+headline "~/Dropbox/org/tweaks.org" "Tweaks")
|
||||
"* %? \n %U")
|
||||
("l" "Learn" entry (file+headline "~/Dropbox/org/learn.org" "Learn")
|
||||
"* %? \n")
|
||||
("w" "Work note" entry (file+headline "~/Dropbox/org/work.org" "Work")
|
||||
"* %? \n")
|
||||
("m" "Check movie" entry (file+headline "~/Dropbox/org/check.org" "Movies")
|
||||
"* %? %^g")
|
||||
("n" "Check book" entry (file+headline "~/Dropbox/org/check.org" "Books")
|
||||
"* %^{book name} by %^{author} %^g")))
|
||||
```
|
||||
|
||||
做书本记录时我需要记下它的名字和作者,做电影记录时我需要记下标签,等等。
|
||||
|
||||
### 规划
|
||||
|
||||
Org 模式的另一个超棒的功能是你可以用它来作日常规划。让我们来看一个例子:
|
||||
|
||||
![schedule][8]
|
||||
|
||||
我没有挖空心思虚构一个例子,这就是我现在真实文件的样子。它看起来内容并不多,但它有助于你把时间花在重要的事情上,并帮你对抗拖延症。
|
||||
|
||||
#### 习惯
|
||||
|
||||
根据 Org 模式的文档,Org 能够跟踪一种特殊的待办事项,称为“习惯”。当我想养成新的习惯时,我会将该功能与日常规划功能结合使用:
|
||||
|
||||
![habits][9]
|
||||
|
||||
你可以看到,目前我在尝试每天早起,并且每两天锻炼一次。另外,它也有助于让我每天阅读书籍。
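在 Org 文件里,一个启用了习惯跟踪的条目大致像下面这样(需要把 `org-habit` 加入 `org-modules`;条目内容是虚构的示意):

```
** TODO 每天阅读 30 分钟
   SCHEDULED: <2018-08-27 Mon .+1d>
   :PROPERTIES:
   :STYLE:    habit
   :END:
```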
|
||||
|
||||
#### 议事日程视图
|
||||
|
||||
最后,我还使用议事日程视图功能。待办事项可能分散在不同文件中(比如我就是日常规划和习惯分散在不同文件中),议事日程视图可以提供所有待办事项的总览:
|
||||
|
||||
![agenda][10]
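要让议事日程视图知道去哪里收集待办事项,只需把相关文件加入 `org-agenda-files`(下面的路径沿用了前文捕获模板中的文件,仅作示意):

```
;; 告诉 Org 从哪些文件收集待办事项
(setq org-agenda-files '("~/Dropbox/org/todo.org"
                         "~/Dropbox/org/work.org"))
;; 之后执行 M-x org-agenda 即可打开议事日程视图
```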
|
||||
|
||||
### 更多 Org 模式的功能
|
||||
|
||||
+ 手机应用([Android](https://play.google.com/store/apps/details?id=com.orgzly&hl=en)、[ios](https://itunes.apple.com/app/id1238649962]))
|
||||
+ [将 Org 模式文档导出为其他格式](https://orgmode.org/manual/Exporting.html)(html、markdown、pdf、latex 等)
|
||||
+ 使用 [ledger](https://github.com/ledger/ledger-mode) [追踪财务状况](https://orgmode.org/worg/org-tutorials/weaving-a-budget.html)
|
||||
|
||||
### 总结
|
||||
|
||||
本文我描述了 Org 模式广泛功能中的一小部分,我每天都用它来提高工作效率,把时间花在重要的事情上。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.badykov.com/emacs/2018/08/26/be-productive-with-org-mode/
|
||||
|
||||
作者:[Ayrat Badykov][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.badykov.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i.imgur.com/hgqCyen.jpg
|
||||
[2]: http://www.badykov.com/emacs/2018/07/31/why-emacs-is-a-great-editor/
|
||||
[3]: https://orgmode.org/
|
||||
[4]: https://github.com/sabof/org-bullets
|
||||
[5]: https://i.imgur.com/lGi60Uw.png
|
||||
[6]: https://github.com/IvanMalison/org-projectile
|
||||
[7]: https://i.imgur.com/Hbu8ilX.png
|
||||
[8]: https://i.imgur.com/z5HpuB0.png
|
||||
[9]: https://i.imgur.com/YJIp3d0.png
|
||||
[10]: https://i.imgur.com/CKX9BL9.png
|
@ -0,0 +1,59 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10642-1.html)
|
||||
[#]: subject: (Get started with Cypht, an open source email client)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-cypht-email)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
开始使用 Cypht 吧,一个开源的电子邮件客户端
|
||||
======
|
||||
|
||||
> 使用 Cypht 将你的电子邮件和新闻源集成到一个界面中,这是我们 19 个开源工具系列中的第 4 个,它将使你在 2019 年更高效。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)
|
||||
|
||||
每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
|
||||
|
||||
这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 4 个工具来帮助你在 2019 年更有效率。
|
||||
|
||||
### Cypht
|
||||
|
||||
我们花了很多时间来处理电子邮件,有效地[管理你的电子邮件][1]可以对你的工作效率产生巨大影响。像 Thunderbird、Kontact/KMail 和 Evolution 这样的程序似乎都有一个共同点:它们试图复制 Microsoft Outlook 的功能,这在过去 10 年左右并没有真正改变。在过去十年中,甚至像 Mutt 和 Cone 这样的[著名控制台程序][2]也没有太大变化。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-1.png)
|
||||
|
||||
[Cypht][3] 是一个简单、轻量级和现代的 Webmail 客户端,它将多个帐户聚合到一个界面中。除了电子邮件帐户,它还包括 Atom/RSS 源。在 “Everything” 中,不仅可以显示收件箱中的邮件,还可以显示新闻源中的最新文章,从而使得阅读不同来源的内容变得简单。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-2.png)
|
||||
|
||||
它使用简化的 HTML 消息来显示邮件,或者你也可以将其设置为查看纯文本版本。由于 Cypht 不会加载远程图像(以帮助维护安全性),HTML 渲染可能有点粗糙,但它足以完成工作。你将看到包含大量富文本邮件的纯文本视图 —— 这意味着很多链接并且难以阅读。我不会说是 Cypht 的问题,因为这确实是发件人所做的,但它确实降低了阅读体验。阅读新闻源大致相同,但它们与你的电子邮件帐户集成,这意味着可以轻松获取最新的(我有时会遇到问题)。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-3.png)
|
||||
|
||||
用户可以使用预配置的邮件服务器并添加他们使用的任何其他服务器。Cypht 的自定义选项包括纯文本与 HTML 邮件显示,它支持多个配置文件以及更改主题(并自行创建)。你要记得单击左侧导航栏上的“保存”按钮,否则你的自定义设置将在该会话后消失。如果你在不保存的情况下注销并重新登录,那么所有更改都将丢失,你将获得开始时的设置。因此可以轻松地实验,如果你需要重置,只需在不保存的情况下注销,那么在再次登录时就会看到之前的配置。
|
||||
|
||||
![](https://opensource.com/sites/default/files/pictures/cypht-4.png)
|
||||
|
||||
本地[安装 Cypht][4] 非常容易。虽然它不使用容器或类似技术,但安装说明非常清晰且易于遵循,并且不需要我做任何更改。在我的笔记本上,从安装开始到首次登录大约需要 10 分钟。服务器上的共享安装使用相同的步骤,因此它应该大致相同。
|
||||
|
||||
最后,Cypht 是桌面和基于 Web 的电子邮件客户端的绝佳替代方案,它有简单的界面,可帮助你快速有效地处理电子邮件。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-cypht-email
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/17/7/email-alternatives-thunderbird
|
||||
[2]: https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
|
||||
[3]: https://cypht.org/
|
||||
[4]: https://cypht.org/install.html
|
@ -0,0 +1,60 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10636-1.html)
|
||||
[#]: subject: (Get started with CryptPad, an open source collaborative document editor)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-cryptpad)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
开始使用 CryptPad 吧,一个开源的协作文档编辑器
|
||||
======
|
||||
|
||||
> 使用 CryptPad 安全地共享你的笔记、文档、看板等,这是我们在开源工具系列中的第 5 个工具,它将使你在 2019 年更高效。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
|
||||
|
||||
每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
|
||||
|
||||
这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 5 个工具来帮助你在 2019 年更有效率。
|
||||
|
||||
### CryptPad
|
||||
|
||||
我们已经介绍过 [Joplin][1],它能很好地保存自己的笔记,但是,你可能已经注意到,它没有任何共享或协作功能。
|
||||
|
||||
[CryptPad][2] 是一个安全、可共享的笔记应用和文档编辑器,它能够安全地协作编辑。与 Joplin 不同,它是一个 NodeJS 应用,这意味着你可以在桌面或其他服务器上运行它,并使用任何现代 Web 浏览器访问。它开箱即用,它支持富文本、Markdown、投票、白板,看板和 PPT。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-1.png)
|
||||
|
||||
它支持不同的文档类型且功能齐全。它的富文本编辑器涵盖了你所期望的所有基础功能,并允许你将文件导出为 HTML。它的 Markdown 的编辑能与 Joplin 相提并论,它的看板虽然不像 [Wekan][3] 那样功能齐全,但也做得不错。其他支持的文档类型和编辑器也很不错,并且有你希望在类似应用中看到的功能,尽管投票功能显得有些粗糙。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-2.png)
|
||||
|
||||
然而,CryptPad 的真正强大之处在于它的共享和协作功能。共享文档只需在“共享”选项中获取可共享 URL,CryptPad 支持使用 `<iframe>` 标签嵌入其他网站的文档。可以在“编辑”或“查看”模式下使用密码和会过期的链接共享文档。内置聊天能够让编辑者相互交谈(请注意,具有浏览权限的人也可以看到聊天但无法发表评论)。
|
||||
|
||||
![](https://opensource.com/sites/default/files/pictures/cryptpad-3.png)
|
||||
|
||||
所有文件都使用用户密码加密存储。服务器管理员无法读取文档,这也意味着如果你忘记或丢失了密码,文件将无法恢复。因此,请确保将密码保存在安全的地方,例如放在[密码保险箱][4]中。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-4.png)
|
||||
|
||||
当它在本地运行时,CryptPad 是一个用于创建和编辑文档的强大应用。当在服务器上运行时,它成为了用于多用户文档创建和编辑的出色协作平台。在我的笔记本电脑上安装它不到五分钟,并且开箱即用。开发者还加入了在 Docker 中运行 CryptPad 的说明,并且还有一个社区维护用于方便部署的 Ansible 角色。CryptPad 不支持任何第三方身份验证,因此用户必须创建自己的帐户。如果你不想运行自己的服务器,CryptPad 还有一个社区支持的托管版本。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-cryptpad
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/1/productivity-tool-joplin
|
||||
[2]: https://cryptpad.fr/index.html
|
||||
[3]: https://opensource.com/article/19/1/productivity-tool-wekan
|
||||
[4]: https://opensource.com/article/18/4/3-password-managers-linux-command-line
|
@ -0,0 +1,131 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10637-1.html)
|
||||
[#]: subject: (ODrive (Open Drive) – Google Drive GUI Client For Linux)
|
||||
[#]: via: (https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
ODrive:Linux 中的 Google 云端硬盘图形客户端
|
||||
======
|
||||
|
||||
这个我们已经多次讨论过。但是,我还要简要介绍一下它。截至目前,还没有官方的 Google 云端硬盘的 Linux 客户端,我们需要使用非官方客户端。Linux 中有许多集成 Google 云端硬盘的应用。每个应用都提供了一组功能。
|
||||
|
||||
我们过去在网站上写过一些此类文章。
|
||||
|
||||
这些文章是 [DriveSync][1] 、[Google Drive Ocamlfuse 客户端][2] 和 [在 Linux 中使用 Nautilus 文件管理器挂载 Google 云端硬盘][3]。
|
||||
|
||||
今天我们也将讨论相同的主题,程序名字是 ODrive。
|
||||
|
||||
### ODrive 是什么?
|
||||
|
||||
ODrive 意即 Open Drive。它是 Google 云端硬盘的图形客户端,它用 electron 框架编写。
|
||||
|
||||
它简单的图形界面能让用户几步就能集成 Google 云端硬盘。
|
||||
|
||||
### 如何在 Linux 上安装和设置 ODrive?
|
||||
|
||||
由于开发者提供了 AppImage 包,因此在 Linux 上安装 ODrive 没有任何困难。
|
||||
|
||||
只需使用 `wget` 命令从开发者的 GitHub 页面下载最新的 ODrive AppImage 包。
|
||||
|
||||
```
|
||||
$ wget https://github.com/liberodark/ODrive/releases/download/0.1.3/odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
你必须为 ODrive AppImage 文件设置可执行文件权限。
|
||||
|
||||
```
|
||||
$ chmod +x odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
只需运行 ODrive AppImage 文件以启动 ODrive GUI 以进行进一步设置。
|
||||
|
||||
```
|
||||
$ ./odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
运行上述命令时,可能会看到下面的窗口。只需按下“下一步”按钮即可进行进一步设置。
|
||||
|
||||
![][5]
|
||||
|
||||
点击“连接”链接添加 Google 云端硬盘帐户。
|
||||
|
||||
![][6]
|
||||
|
||||
输入你要设置 Google 云端硬盘帐户的电子邮箱。
|
||||
|
||||
![][7]
|
||||
|
||||
输入邮箱密码。
|
||||
|
||||
![][8]
|
||||
|
||||
允许 ODrive 访问你的 Google 帐户。
|
||||
|
||||
![][9]
|
||||
|
||||
默认情况下,它将选择文件夹位置。如果你要选择特定文件夹,则可以更改。
|
||||
|
||||
![][10]
|
||||
|
||||
最后点击“同步”按钮开始将文件从 Google 下载到本地系统。
|
||||
|
||||
![][11]
|
||||
|
||||
同步正在进行中。
|
||||
|
||||
![][12]
|
||||
|
||||
同步完成后。它会显示所有已下载的文件。
|
||||
|
||||
![][13]
|
||||
|
||||
我看到所有文件都下载到上述目录中。
|
||||
|
||||
![][14]
|
||||
|
||||
如果要将本地系统中的任何新文件同步到 Google 。只需从应用菜单启动 “ODrive”,但它不会实际启动应用。它将在后台运行,我们可以使用 `ps` 命令查看。
|
||||
|
||||
```
|
||||
$ ps -df | grep odrive
|
||||
```
|
||||
|
||||
![][15]
|
||||
|
||||
将新文件添加到 Google文件夹后,它会自动开始同步。从通知菜单中也可以看到。是的,我看到一个文件已同步到 Google 中。
|
||||
|
||||
![][16]
|
||||
|
||||
同步完成后图形界面没有加载出来,我不确定这个功能是否正常。我会向开发者反馈,之后根据他的回复更新本文。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/
|
||||
[2]: https://linux.cn/article-10517-1.html
|
||||
[3]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/
|
||||
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-1.png
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-2.png
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-3.png
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-4.png
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-5.png
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-6.png
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-7.png
|
||||
[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-8a.png
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9.png
|
||||
[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-11.png
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9b.png
|
||||
[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-10.png
|
@ -0,0 +1,61 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10648-1.html)
|
||||
[#]: subject: (Get started with Freeplane, an open source mind mapping application)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-freeplane)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
开始使用 Freeplane 吧,一款开源思维导图
|
||||
======
|
||||
|
||||
> 使用 Freeplane 进行头脑风暴,这是我们开源工具系列中的第 13 个,它将使你在 2019 年更高效。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
|
||||
每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
|
||||
|
||||
这是我挑选出的 19 个新的(或者对你而言新的)开源工具中的第 13 个工具来帮助你在 2019 年更有效率。
|
||||
|
||||
### Freeplane
|
||||
|
||||
[思维导图][1]是我用于快速头脑风暴和捕捉数据的最有价值的工具之一。思维导图是一个灵活的过程,有助于显示事物的相关性,并可用于快速组织相互关联的信息。从规划角度来看,思维导图让你快速将大脑中的单个概念、想法或技术表达出来。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-1.png)
|
||||
|
||||
[Freeplane][2] 是一款桌面应用,可以轻松创建、查看、编辑和共享思维导图。它是对 [FreeMind][3] 的重新打造,后者在很长时间内都是思维导图应用的首选。
|
||||
|
||||
安装 Freeplane 非常简单。它是一个 [Java][4] 应用,并使用 ZIP 文件分发,可使用脚本在 Linux、Windows 和 MacOS 上启动。在第一次启动它时,主窗口会包含一个示例思维导图,其中包含指向你可以使用 Freeplane 执行的所有不同操作的文档的链接。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-2.png)
|
||||
|
||||
创建新思维导图时,你可以选择模板。标准模板(可能位于列表底部)适用于大多数情况。你只需开始输入开头的想法或短语,你的文本就会替换中心的文本。按“Insert”键将从中心添加一个分支(或节点),其中包含一个空白字段,你可以在其中填写与该想法相关的内容。再次按“Insert”将添加另一个节点到第一个上。在节点上按回车键将添加与该节点平行的节点。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-3.png)
|
||||
|
||||
在添加节点时,你可能会想到与主题相关的另一个想法。使用鼠标或箭头键,返回到导图的中心,然后按“Insert”键。这将在主题之外创建一个新节点。
|
||||
|
||||
如果你想使用 Freeplane 其他功能,请右键单击任何节点以显示该节点的“属性”菜单。工具窗口(在视图 ->控制菜单下激活)包含丰富的自定义选项,包括线条形状和粗细、边框形状、颜色等等。“日历”选项允许你在节点中插入日期,并为节点设置到期提醒。 (请注意,提醒仅在 Freeplane 运行时有效。)思维导图可以导出为多种格式,包括常见的图像、XML、Microsoft Project、Markdown 和 OPML。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-4.png)
|
||||
|
||||
Freeplane 为你提供了创建生动和实用的思维导图所需的所有工具,让你表达头脑中的想法,并可采取行动。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-freeplane
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Mind_map
|
||||
[2]: https://www.freeplane.org/wiki/index.php/Home
|
||||
[3]: https://sourceforge.net/projects/freemind/
|
||||
[4]: https://java.com
|
@ -1,28 +1,30 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10654-1.html)
|
||||
[#]: subject: (19 days of productivity in 2019: The fails)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-wish-list)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
2019 年的 19 个高效日:失败了
|
||||
======
|
||||
以下是开源世界没有做到的一些工具。
|
||||
|
||||
> 以下是开源世界没有做到的一些工具。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
|
||||
|
||||
每年年初似乎都有疯狂的冲动想提高工作效率。新年的决心,渴望开启新的一年,当然,“抛弃旧的,拥抱新的”的态度促成了这一切。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。
|
||||
|
||||
保持高效一部分是接受失败发生。我是 [Howard Tayler's][1] 的第 70 条座右铭的支持者:“失败不是一种选择,它是一定的。可以选择的是是否让失败成为你做的最后一件事。”我对这个系列的有很多话想多,但是我没有找到好的答案。
|
||||
保持高效一部分是接受失败。我是 [Howard Tayler][1] 的第 70 条座右铭的支持者:“失败不是一种选择,它是一定的。可以选择的是是否让失败成为你做的最后一件事。”在这个系列中我想谈的很多事情都没有找到好的答案。
|
||||
|
||||
关于我的 19 个新的(或对你而言新的)帮助你在 2019 年更高效的工具的最终版,我想到了一些我想要的,但是没有找到的。我希望读者你能够帮我找到下面这些项目的好的方案。如果你发现了,请在下面的留言中分享。
|
||||
关于我的 19 个新的(或对你而言新的)帮助你在 2019 年更高效的工具的最终版,我想到了一些我想要、但是没有找到的。我希望读者你能够帮我找到下面这些项目的好的方案。如果你发现了,请在下面的留言中分享。
|
||||
|
||||
### 日历
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/thunderbird-1.png)
|
||||
|
||||
如果开源世界有一件事缺乏,那就是日历。我尝试过的日历程序和尝试电子邮件程序的数量一样多。共享日历基本上有三个很好的选择:[Evolution][2]、[Thunderbird 中的 Lightning 附加组件][3] 或 [KOrganizer][4]。我尝试过的所有其他应用 (包括 [Orage][5]、[Osmo][6] 以及几乎所有 [Org 模式][7]附加组件) 似乎只可靠地支持对远程日历的只读访问。如果共享日历使用 [Google 日历][8] 或 [Microsoft Exchange][9] 作为服务器,那么前三个是唯一易于配置的选择(即便如此,通常还需要其他附加组件)。
|
||||
如果开源世界有一件事缺乏,那就是日历。我尝试过的日历程序和电子邮件程序的数量一样多。共享日历基本上有三个很好的选择:[Evolution][2]、[Thunderbird 中的 Lightning 附加组件][3] 或 [KOrganizer][4]。我尝试过的所有其他应用 (包括 [Orage][5]、[Osmo][6] 以及几乎所有 [Org 模式][7]附加组件) 似乎只能可靠地支持对远程日历的只读访问。如果共享日历使用 [Google 日历][8] 或 [Microsoft Exchange][9] 作为服务器,那么前三个是唯一易于配置的选择(即便如此,通常还需要其他附加组件)。
|
||||
|
||||
### Linux 内核的系统
|
||||
|
||||
@ -36,7 +38,7 @@
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/opennms_jira_dashboard-3.png)
|
||||
|
||||
对于大大小小的公司来说,客户服务是一件大事。现在,随着近来对 DevOps 的关注,有必要使用工具来弥补差距。我工作的几乎每家公司都使用 [Jira][15]、[GitHub][16] 或 [GitLab][17] 来提代码问题,但这些工具都不是很擅长客户支持工单(没有很多工作)。虽然围绕客户支持工单和问题设计了许多应用,但大多数(如果不是全部)应用都是与其他系统不兼容的孤岛,同样没有大量工作。
|
||||
对于大大小小的公司来说,客户服务是一件大事。现在,随着近来对 DevOps 的关注,有必要使用工具来弥补差距。我工作的几乎每家公司都使用 [Jira][15]、[GitHub][16] 或 [GitLab][17] 来报告代码问题,但这些工具都不是很擅长于客户支持工单(除非付出大量的工作)。虽然围绕客户支持工单和问题设计了许多应用,但大多数(如果不是全部)应用都是与其他系统不兼容的孤岛(同样除非付出了大量的工作)。
|
||||
|
||||
我的愿望是有一个开源解决方案,它能让客户、支持人员和开发人员一起工作,而无需笨重的代码将多个系统粘合在一起。
|
||||
|
||||
@ -53,7 +55,7 @@ via: https://opensource.com/article/19/1/productivity-tool-wish-list
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,189 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (pityonline)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10626-1.html)
|
||||
[#]: subject: (How To Remove/Delete The Empty Lines In A File In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
在 Linux 中如何删除文件中的空行
|
||||
======
|
||||
|
||||
有时你可能需要在 Linux 中删除某个文件中的空行。如果是这样,你可以使用下面的方法之一。能做到这一点的方法有很多,但我在这里只列举一些简单的方法。
|
||||
|
||||
你可能已经知道 `grep`、`awk` 和 `sed` 命令是专门用来处理文本数据的工具。
|
||||
|
||||
如果你想了解更多关于这些命令的文章,请访问这几个 URL:[在 Linux 中创建指定大小的文件的几种方法][1],[在 Linux 中创建一个文件的几种方法][2] 以及 [在 Linux 中删除一个文件中的匹配的字符串][3]。
|
||||
|
||||
这些属于高级命令,它们可用在大多数 shell 脚本中执行所需的操作。
|
||||
|
||||
下列 5 种方法可以做到。
|
||||
|
||||
* `sed`:过滤和替换文本的流编辑器。
|
||||
* `grep`:输出匹配到的行。
|
||||
* `cat`:合并文件并打印内容到标准输出。
|
||||
* `tr`:替换或删除字符。
|
||||
* `awk`:awk 工具用于执行 awk 语言编写的程序,专门用于文本处理。
|
||||
* `perl`:Perl 是一种用于处理文本的编程语言。
|
||||
|
||||
我创建了一个 `2daygeek.txt` 文件来测试这些命令。下面是文件的内容。
|
||||
|
||||
```
|
||||
$ cat 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
|
||||
It's FIVE years old blog.
|
||||
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
|
||||
He got two GIRL babys.
|
||||
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
现在一切就绪,我们准备开始用多种方法来验证。
|
||||
|
||||
### 使用 sed 命令
|
||||
|
||||
`sed` 是一个<ruby>流编辑器<rt>stream editor</rt></ruby>。流编辑器是用来编辑输入流(文件或管道)中的文本的。
|
||||
|
||||
```
|
||||
$ sed '/^$/d' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
以下是命令展开的细节:
|
||||
|
||||
* `sed`: 该命令本身。
|
||||
* `//`: 标记匹配范围。
|
||||
* `^`: 匹配字符串开头。
|
||||
* `$`: 匹配字符串结尾。
|
||||
* `d`: 删除匹配的字符串。
|
||||
* `2daygeek.txt`: 源文件名。
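
作为补充,下面给出两个基于同样思路的 `sed` 用法(文件名沿用上文的 `2daygeek.txt`,仅作演示):如果“空行”里其实还含有空格或制表符,可以把匹配范围放宽;如果想把结果直接写回原文件,可以加上 `-i` 选项。

```
# 同时删除只包含空格或制表符的“空白行”
$ sed '/^[[:space:]]*$/d' 2daygeek.txt

# 使用 -i.bak 直接修改原文件,并保留一份 .bak 备份
$ sed -i.bak '/^[[:space:]]*$/d' 2daygeek.txt
```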
|
||||
|
||||
### 使用 grep 命令
|
||||
|
||||
`grep` 可以在文件中按正则表达式进行搜索。模式可以是以换行符分隔的一个或多个表达式,`grep` 会打印出所有匹配的行。
|
||||
|
||||
```
|
||||
$ grep . 2daygeek.txt
|
||||
or
|
||||
$ grep -Ev "^$" 2daygeek.txt
|
||||
or
|
||||
$ grep -v -e '^$' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
以下是命令展开的细节:
|
||||
|
||||
* `grep`: 该命令本身。
|
||||
* `.`: 匹配任意单个字符。
|
||||
* `^`: 匹配字符串开头。
|
||||
* `$`: 匹配字符串结尾。
|
||||
* `E`: 使用扩展正则匹配模式。
|
||||
* `e`: 使用常规正则匹配模式。
|
||||
* `v`: 反向匹配。
|
||||
* `2daygeek.txt`: 源文件名。
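
如果你只是想先统计一下文件里有多少空行,而不是立即删除,可以顺手用 `grep` 的 `-c` 选项(下面只是一个小示例,按上文展示的示例文件内容,输出应该是 4):

```
# 统计完全为空的行的数量
$ grep -c '^$' 2daygeek.txt
4
```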
|
||||
|
||||
### 使用 awk 命令
|
||||
|
||||
`awk` 可以执行使用 awk 语言写的脚本,大多是专用于处理文本的。awk 脚本是一系列 `awk` 命令和正则的组合。
|
||||
|
||||
```
|
||||
$ awk NF 2daygeek.txt
|
||||
or
|
||||
$ awk '!/^$/' 2daygeek.txt
|
||||
or
|
||||
$ awk '/./' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
以下是命令展开的细节:
|
||||
|
||||
* `awk`: 该命令本身。
|
||||
* `//`: 标记匹配范围。
|
||||
* `^`: 匹配字符串开头。
|
||||
* `$`: 匹配字符串结尾。
|
||||
* `.`: 匹配任意字符。
|
||||
* `!`: 对匹配结果取反,即只输出不匹配的行。
|
||||
* `2daygeek.txt`: 源文件名。
|
||||
|
||||
### 组合使用 cat 和 tr 命令
|
||||
|
||||
`cat` 是<ruby>串联(拼接)<rt>concatenate</rt></ruby>的简写。经常用于在 Linux 中读取一个文件的内容。
|
||||
|
||||
`cat` 是在类 Unix 系统中使用频率最高的命令之一。它提供了常用的三个处理文本文件的功能:显示文件内容、将多个文件拼接成一个,以及创建一个新文件。
|
||||
|
||||
`tr` 可以将标准输入中的字符转换,压缩或删除,然后重定向到标准输出。
|
||||
|
||||
```
|
||||
$ cat 2daygeek.txt | tr -s '\n'
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
以下是命令展开的细节:
|
||||
|
||||
* `cat`: cat 命令本身。
|
||||
* `tr`: tr 命令本身。
|
||||
* `|`: 管道符号。它可以将前面的命令的标准输出作为下一个命令的标准输入。
|
||||
* `s`: 将输入中连续重复出现的字符压缩为一个。
|
||||
* `\n`: 指定要压缩的字符为换行符。
|
||||
* `2daygeek.txt`: 源文件名。
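
顺带一提,这里的 `cat` 并不是必需的,`tr` 可以直接从重定向的标准输入读取数据;另外要注意,`tr -s '\n'` 只能压缩连续的换行符,如果“空行”里还含有空格,它是删不掉的。下面是一个等价的写法(仅作演示):

```
# 不借助 cat,直接使用输入重定向
$ tr -s '\n' < 2daygeek.txt
```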
|
||||
|
||||
### 使用 perl 命令
|
||||
|
||||
Perl 表示<ruby>实用的提取和报告语言<rt>Practical Extraction and Reporting Language</rt></ruby>。Perl 在初期被设计为一个专用于文本处理的编程语言,现在已扩展应用到 Linux 系统管理,网络编程和网站开发等多个领域。
|
||||
|
||||
```
|
||||
$ perl -ne 'print if /\S/' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
以下是命令展开的细节:
|
||||
|
||||
* `perl`: perl 命令。
|
||||
* `n`: 逐行读入数据。
|
||||
* `e`: 执行某个命令。
|
||||
* `print`: 打印信息。
|
||||
* `if`: if 条件分支。
|
||||
* `//`: 标记匹配范围。
|
||||
* `\S`: 匹配任意非空白字符。
|
||||
* `2daygeek.txt`: 源文件名。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[pityonline](https://github.com/pityonline)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
|
||||
[2]: https://www.2daygeek.com/linux-command-to-create-a-file/
|
||||
[3]: https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/
|
@ -0,0 +1,154 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10641-1.html)
|
||||
[#]: subject: (Run Particular Commands Without Sudo Password In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
在 Linux 中运行特定命令而无需 sudo 密码
|
||||
======
|
||||
|
||||
我有一台部署在 AWS 上的 Ubuntu 系统,在它的里面有一个脚本,这个脚本的原有目的是以一定间隔(准确来说是每隔 1 分钟)去检查某个特定服务是否正在运行,如果这个服务因为某些原因停止了,就自动重启这个服务。但问题是我需要 sudo 权限来开启这个服务。正如你所知道的那样,当我们以 sudo 用户运行命令时,我们应该提供密码,但我并不想这么做,实际上我想做的是以 sudo 用户的身份运行这个服务但无需提供密码。假如你曾经经历过这样的情形,那么我知道一个简单的方法来做到这点。今天,在这个简短的指南中,我将教你如何在类 Unix 的操作系统中运行特定命令而无需 sudo 密码。
|
||||
|
||||
就让我们看看下面的例子吧。
|
||||
|
||||
```
|
||||
$ sudo mkdir /ostechnix
|
||||
[sudo] password for sk:
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
正如上面的截图中看到的那样,当我在根目录(`/`)中创建一个名为 `ostechnix` 的目录时,我需要提供 sudo 密码。每当我们尝试以 sudo 特权执行一个命令时,我们必须输入密码。而在我的预想中,我不想提供 sudo 密码。下面的内容便是我如何在我的 Linux 机子上运行一个 `sudo` 命令而无需输入密码的过程。
|
||||
|
||||
### 在 Linux 中运行特定命令而无需 sudo 密码
|
||||
|
||||
基于某些原因,假如你想允许一个用户运行特定命令而无需提供 sudo 密码,则你需要在 `sudoers` 文件中添加上这个命令。
|
||||
|
||||
假如我想让名为 `sk` 的用户去执行 `mkdir` 而无需提供 sudo 密码,下面就让我们看看该如何做到这点。
|
||||
|
||||
使用下面的命令来编辑 `sudoers` 文件:
|
||||
|
||||
```
|
||||
$ sudo visudo
|
||||
```
|
||||
|
||||
将下面的命令添加到这个文件的最后。
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
其中 `sk` 是用户名。根据上面一行的内容,用户 `sk` 可以从任意终端执行 `mkdir` 命令而不必输入 sudo 密码。
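
添加完成后,可以用 `sudo -l` 来确认这条规则是否生效,它会列出当前用户可以执行的 sudo 命令(下面的输出经过截取,只保留了相关部分,具体内容因系统而异):

```
$ sudo -l
...
User sk may run the following commands on this host:
    (root) NOPASSWD: /bin/mkdir
```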
|
||||
|
||||
你可以用逗号分隔的值来添加额外的命令(例如 `chmod`),正如下面展示的那样。
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod
|
||||
```
|
||||
|
||||
保存并关闭这个文件,然后注销(或重启)你的系统。现在以普通用户 `sk` 登录,然后试试使用 `sudo` 来运行这些命令,看会发生什么。
|
||||
|
||||
```
|
||||
$ sudo mkdir /dir1
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
看到了吗?即便我以 sudo 特权运行 `mkdir` 命令,也不会弹出提示让我输入密码。从现在开始,当用户 `sk` 运行 `mkdir` 时,就不必输入 sudo 密码了。
|
||||
|
||||
当运行除了添加到 `sudoers` 文件之外的命令时,你将被提示输入 sudo 密码。
|
||||
|
||||
让我们用 `sudo` 来运行另一个命令。
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][5]
|
||||
|
||||
看到了吗?这个命令将提示我输入 sudo 密码。
|
||||
|
||||
假如你不想让这个命令提示你输入 sudo 密码,请编辑 `sudoers` 文件:
|
||||
|
||||
```
|
||||
$ sudo visudo
|
||||
```
|
||||
|
||||
像下面这样将 `apt` 命令添加到 `sudoers` 文件中:
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/usr/bin/apt
|
||||
```
|
||||
|
||||
你注意到了上面命令中 `apt` 可执行文件的路径与 `mkdir` 的有所不同吗?是的,你必须提供正确的可执行文件路径。要找到某个命令的可执行文件路径,例如这里的 `apt`,可以像下面这样使用 `whereis` 命令来查看:
|
||||
|
||||
```
|
||||
$ whereis apt
|
||||
apt: /usr/bin/apt /usr/lib/apt /etc/apt /usr/share/man/man8/apt.8.gz
|
||||
```
|
||||
|
||||
如你所见,`apt` 命令的可执行文件路径为 `/usr/bin/apt`,所以我将这个路径添加到了 `sudoers` 文件中。
|
||||
|
||||
正如我前面提及的那样,你可以添加任意多个以逗号分隔的命令。一旦你做完添加的动作,保存并关闭你的 `sudoers` 文件,接着注销,然后重新登录进你的系统。
|
||||
|
||||
现在就检验你是否可以直接运行以 `sudo` 开头的命令而不必使用密码:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
看到了吗?`apt` 命令没有让我输入 sudo 密码,即便我用 `sudo` 来运行它。
|
||||
|
||||
下面展示另一个例子。假如你想运行一个特定服务,例如 `apache2`,那么就添加下面这条命令到 `sudoers` 文件中:
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/usr/bin/apt,/bin/systemctl restart apache2
|
||||
```
|
||||
|
||||
现在用户 `sk` 就可以运行 `sudo systemctl restart apache2` 命令而不必输入 sudo 密码了。
|
||||
|
||||
能不能让某个特定的命令重新要求输入 sudo 密码呢?当然可以!只需要从 `sudoers` 文件中删除对应的命令,注销后再次登录即可。
|
||||
|
||||
除了这种方法外,你还可以在命令的前面添加 `PASSWD:` 指令。让我们看看下面的例子:
|
||||
|
||||
在 `sudoers` 文件中添加或者修改下面的一行:
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod,PASSWD:/usr/bin/apt
|
||||
```
|
||||
|
||||
在这种情况下,用户 `sk` 可以运行 `mkdir` 和 `chmod` 命令而不用输入 sudo 密码。然而,当他运行 `apt` 命令时,就必须提供 sudo 密码了。
|
||||
|
||||
免责声明:本篇指南仅用于教学目的。在使用这个方法的时候,你必须非常小心。这种做法既可能带来便利,也可能造成破坏性的后果。例如,假如你允许用户执行 `rm` 命令而不输入 sudo 密码,那么他们就可能无意或有意地删除某些重要文件。我可是警告过你了!
|
||||
|
||||
那么这就是全部的内容了。希望这个能够给你带来帮助。更多精彩内容即将呈现,请保持关注!
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-1.png
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-7.png
|
||||
[4]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-6.png
|
||||
[5]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-4.png
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-5.png
|
published/20190215 4 Methods To Change The HostName In Linux.md
@ -0,0 +1,226 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10651-1.html)
|
||||
[#]: subject: (4 Methods To Change The HostName In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Linux 中改变主机名的 4 种方法
|
||||
======
|
||||
|
||||
昨天我们已经在我们的网站中写过[如何在 Linux 中修改主机名的文章][1]。今天,我们将向你展示使用不同的方法来修改主机名。你可以从中选取最适合你的方法。
|
||||
|
||||
使用 `systemd` 的系统自带一个名为 `hostnamectl` 的好用工具,它可以使我们能够轻易地管理系统的主机名。
|
||||
|
||||
当你使用这个原生命令时,它可以立刻改变主机名而无需重启来生效。
|
||||
|
||||
但假如你通过手动修改某个配置文件来更改主机名,那么就可能需要经过重启来生效。
|
||||
|
||||
在这篇文章中,我们将展示在使用 `systemd` 的系统中改变主机名的 4 种方法。
|
||||
|
||||
`hostnamectl` 命令允许在 Linux 中设置三类主机名,它们的细节如下:
|
||||
|
||||
* **静态:** 这是静态主机名,由系统管理员添加。
|
||||
* **瞬时/动态:** 这个由 DHCP 或者 DNS 服务器在运行时赋予。
|
||||
* **易读形式:** 它可以由系统管理员赋予。这个是自由形式的主机名,以一种易读形式来表示服务器,例如 “JBOSS UAT Server” 这样的名字。
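
下面这组命令演示了如何用 `hostnamectl` 分别设置这三类主机名(这只是一个示意性的例子,其中的主机名都是随意取的):

```
# 设置静态主机名
$ sudo hostnamectl set-hostname --static dev-server-01

# 设置瞬时主机名
$ sudo hostnamectl set-hostname --transient dev-server-01

# 设置易读形式的主机名
$ sudo hostnamectl set-hostname --pretty "JBOSS UAT Server"
```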
|
||||
|
||||
这些都可以使用下面 4 种方法来设置。
|
||||
|
||||
* `hostnamectl` 命令:控制系统主机名。
|
||||
* `nmcli` 命令:是一个控制 NetworkManager 的命令行工具。
|
||||
* `nmtui` 命令:是一个控制 NetworkManager 的文本用户界面。
|
||||
* `/etc/hostname` 文件:这个文件中包含系统的静态主机名。
|
||||
|
||||
### 方法 1:在 Linux 中使用 hostnamectl 来改变主机名
|
||||
|
||||
`hostnamectl` 可被用来查询和改变系统的主机名,以及相关设定。只需运行 `hostnamectl` 便可以查看系统的主机名了。
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
```
|
||||
|
||||
或者使用下面的命令:
|
||||
|
||||
```
|
||||
$ hostnamectl status
|
||||
Static hostname: daygeek-Y700
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
假如你想改变主机名,可以使用下面的命令格式:
|
||||
|
||||
语法格式:
|
||||
|
||||
```
|
||||
$ hostnamectl set-hostname [YOUR NEW HOSTNAME]
|
||||
```
|
||||
|
||||
使用下面的命令来使用 `hostnamectl` 更改主机名。在这个例子中,我将把主机名从 `daygeek-Y700` 改为 `magi-laptop`。
|
||||
|
||||
```
|
||||
$ hostnamectl set-hostname magi-laptop
|
||||
```
|
||||
|
||||
你可以使用下面的命令来查看更新后的主机名。
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
Static hostname: magi-laptop
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
### 方法 2:在 Linux 中使用 nmcli 命令来更改主机名
|
||||
|
||||
`nmcli` 是一个命令行工具,旨在控制 NetworkManager 并报告网络状态。
|
||||
|
||||
`nmcli` 被用来创建、展示、编辑、删除、激活和注销网络连接,同时还可以用来控制和展示网络设备的状态。另外,它也允许我们更改主机名。
|
||||
|
||||
使用下面的命令来利用 `nmcli` 查看当前的主机名。
|
||||
|
||||
```
|
||||
$ nmcli general hostname
|
||||
daygeek-Y700
|
||||
```
|
||||
|
||||
语法格式:
|
||||
|
||||
```
|
||||
$ nmcli general hostname [YOUR NEW HOSTNAME]
|
||||
```
|
||||
|
||||
使用下面的命令来借助 `nmcli` 命令可以更改主机名。在这个例子中,我将把主机名从 `daygeek-Y700` 变成 `magi-laptop`。
|
||||
|
||||
```
|
||||
$ nmcli general hostname magi-laptop
|
||||
```
|
||||
|
||||
它可以在不重启设备的情况下生效,但为了保险起见,还需要重启 `systemd-hostnamed` 服务来让更改完全生效。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart systemd-hostnamed
|
||||
```
|
||||
|
||||
再次运行相同的 `nmcli` 命令来检查更改后的主机名。
|
||||
|
||||
```
|
||||
$ nmcli general hostname
|
||||
magi-laptop
|
||||
```
|
||||
|
||||
### 方法 3:在 Linux 中使用 nmtui 来更改主机名
|
||||
|
||||
`nmtui` 是一个基于 `curses` 库的 TUI 应用,被用来和 NetworkManager 交互。当启动 `nmtui` 后,如果没有指定 `nmtui` 的第一个命令行参数,它将提醒用户选择执行某项活动。
|
||||
|
||||
在终端中运行下面的命令来开启文本用户界面。
|
||||
|
||||
```
|
||||
$ nmtui
|
||||
```
|
||||
|
||||
使用向下箭头按键来选择 “Set system hostname” 这个选项,然后敲击回车键。
|
||||
|
||||
![][3]
|
||||
|
||||
下面的截图展示的是原来的主机名。
|
||||
|
||||
![][4]
|
||||
|
||||
我们需要做的就是删除原来的主机名,再输入新的主机名,然后选中 “OK” 敲击回车确认就可以了。
|
||||
|
||||
![][5]
|
||||
|
||||
然后它将在屏幕中向你展示更新后的主机名,再次选中 “OK” 敲击回车确认就完成更改了。
|
||||
|
||||
![][6]
|
||||
|
||||
最后,选中 “Quit” 按钮来从 `nmtui` 终端界面离开。
|
||||
|
||||
![][7]
|
||||
|
||||
它可以在不重启设备的情况下生效,但为了安全目的,需要重启 `systemd-hostnamed` 服务来使得更改生效。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart systemd-hostnamed
|
||||
```
|
||||
|
||||
你可以运行下面的命令来查看更新后的主机名。
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
Static hostname: daygeek-Y700
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
### 方法 4:在 Linux 中使用 /etc/hostname 来更改主机名
|
||||
|
||||
除了上面的方法外,我们还可以通过修改 `/etc/hostname` 文件来达到修改主机名的目的。但这个方法需要服务器重启才能生效。
|
||||
|
||||
使用下面的命令来检查 `/etc/hostname` 文件以查看当前的主机名:
|
||||
|
||||
```
|
||||
$ cat /etc/hostname
|
||||
daygeek-Y700
|
||||
```
|
||||
|
||||
要改变主机名,只需覆写这个文件就行了,因为这个文件只包含主机名这一项内容。
|
||||
|
||||
```
|
||||
$ echo "magi-daygeek" | sudo tee /etc/hostname    # 重定向是由当前 shell 以普通用户身份执行的,因此这里用 tee 以 root 权限写入
|
||||
|
||||
$ cat /etc/hostname
|
||||
magi-daygeek
|
||||
```
|
||||
|
||||
然后使用下面的命令重启系统:
|
||||
|
||||
```
|
||||
$ sudo init 6
|
||||
```
|
||||
|
||||
最后查看 `/etc/hostname` 文件的内容来验证主机名是否被更改了。
|
||||
|
||||
```
|
||||
$ cat /etc/hostname
|
||||
magi-daygeek
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-change-set-hostname/
|
||||
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-1.png
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-2.png
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-3.png
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-4.png
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-5.png
|
@ -1,28 +1,30 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10627-1.html)
|
||||
[#]: subject: (Emoji-Log: A new way to write Git commit messages)
|
||||
[#]: via: (https://opensource.com/article/19/2/emoji-log-git-commit-messages)
|
||||
[#]: author: (Ahmad Awais https://opensource.com/users/mrahmadawais)
|
||||
|
||||
Emoji-Log:编写 Git 提交信息的新方法
|
||||
======
|
||||
使用 Emoji-Log 为你的提交添加上下文。
|
||||
|
||||
> 使用 Emoji-Log 为你的提交添加上下文。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/emoji_tech_keyboard.jpg?itok=ncBNKZFl)
|
||||
|
||||
我是一名全职开源开发人员,我喜欢称自己为“开源者”。我从事开源软件工作已经超过十年,并[构建了数百个][1]开源软件应用程序。
|
||||
我是一名全职的开源开发人员,我喜欢称自己为“开源者”。我从事开源软件工作已经超过十年,并[构建了数以百计的][1]开源软件应用程序。
|
||||
|
||||
同时我也是不要重复自己 Don't Repeat Yourself(DRY)哲学的忠实粉丝,并且相信编写更好的 Git 提交消息是 DRY 的一个重要组成部分。它们具有足够的上下文关联可以作为你开源软件的变更日志。我编写的众多工作流之一是 [Emoji-Log][2],它是一个简单易用的开源 Git 提交日志标准。它通过使用表情符号来创建更好的 Git 提交消息,从而改善了开发人员的体验(DX)。
|
||||
同时我也是“<ruby>避免重复工作<rt>Don't Repeat Yourself</rt></ruby>”(DRY)哲学的忠实粉丝,并且我相信编写更好的 Git 提交消息是 DRY 的一个重要组成部分。它们具有足够的上下文关联,可以作为你开源软件的变更日志。我编写的众多工作流之一是 [Emoji-Log][2],它是一个简单易用的开源 Git 提交日志标准。它通过使用表情符号来创建更好的 Git 提交消息,从而改善了开发人员的体验(DX)。
|
||||
|
||||
我使用 Emoji-Log 构建了 [VSCode Tips & Tricks 仓库][3] 和我的 🦄 [紫色 VSCode 主题仓库][4],以及一个看起来很漂亮的[自动变更日志][5]。
|
||||
|
||||
### Emoji-Log 的哲学
|
||||
|
||||
我喜欢表情符号,我很喜欢它们。编程,代码,极客/书呆子,开源...所有这一切本质上都很枯燥,有时甚至很无聊。表情符号帮助我添加颜色和情感。想要将感受添加到 2D、平面和基于文本的代码世界并没有错。
|
||||
我喜欢(很多)表情符号,我很喜欢它们。编程、代码、极客/书呆子、开源……所有这一切本质上都很枯燥,有时甚至很无聊。表情符号帮助我添加颜色和情感。想要将感受添加到这个 2D 的、平板的、基于文本的代码世界并没有错。
|
||||
|
||||
相比于[数百个表情符号][6],我学会了更好的办法是保持小类别和普遍性。以下是指导使用 Emoji-Log 编写提交信息的原则:
|
||||
相比于[数百个表情符号][6],我学会的更好办法是让类别较小和普遍性。以下是指导使用 Emoji-Log 编写提交信息的原则:
|
||||
|
||||
1. **必要的**
|
||||
* Git 提交信息是必要的。
|
||||
@ -31,10 +33,10 @@ Emoji-Log:编写 Git 提交信息的新方法
|
||||
* 例如,使用 ✅ **Create** 而不是 ❌ **Creating**
|
||||
2. **规则**
|
||||
* 少数类别易于记忆。
|
||||
* 不多不也少
|
||||
* 例如 **📦 NEW** , **👌 IMPROVE** , **🐛 FIX** , **📖 DOC** , **🚀 RELEASE**, **✅ TEST**
|
||||
* 不多也不少
|
||||
* 例如 **📦 NEW** 、 **👌 IMPROVE** 、 **🐛 FIX** 、 **📖 DOC** 、 **🚀 RELEASE** 、 **✅ TEST**
|
||||
3. **行为**
|
||||
* 让 Git 基于你所采取的操作提交
|
||||
* 让 Git 的提交基于你所采取的操作
|
||||
* 使用像 [VSCode][7] 这样的编辑器来提交带有提交信息的正确文件。
|
||||
|
||||
### 编写提交信息
|
||||
@ -63,7 +65,7 @@ Emoji-Log:编写 Git 提交信息的新方法
|
||||
|
||||
### Emoji-Log 函数
|
||||
|
||||
为了快速构建原型,我写了以下函数,你可以将它们添加到 **.bashrc** 或者 **.zshrc** 文件中以快速使用 Emoji-Log。
|
||||
为了快速构建原型,我写了以下函数,你可以将它们添加到 `.bashrc` 或者 `.zshrc` 文件中以快速使用 Emoji-Log。
|
||||
|
||||
```
|
||||
#.# Better Git Logs.
|
||||
@ -126,7 +128,8 @@ funcsave gdoc
|
||||
funcsave gtst
|
||||
```
|
||||
|
||||
如果你愿意,可以将这些别名直接粘贴到 **~/.gitconfig** 文件:
|
||||
如果你愿意,可以将这些别名直接粘贴到 `~/.gitconfig` 文件:
|
||||
|
||||
```
|
||||
# Git Commit, Add all and Push — in one step.
|
||||
cap = "!f() { git add .; git commit -m \"$@\"; git push; }; f"
|
||||
@ -145,6 +148,17 @@ doc = "!f() { git cap \"📖 DOC: $@\"; }; f"
|
||||
tst = "!f() { git cap \"✅ TEST: $@\"; }; f"
|
||||
```
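
假设你已经把上面这些别名加入了 `~/.gitconfig`,日常使用起来大致是下面这个样子(提交信息只是随手举的例子):

```
# 自动执行 git add .、git commit 和 git push,并生成 “📖 DOC: ...” 格式的提交信息
$ git doc "Update installation steps"

# 生成 “✅ TEST: ...” 格式的提交信息
$ git tst "Add unit tests for the parser"
```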
|
||||
|
||||
### Emoji-Log 例子
|
||||
|
||||
这里列出了一些使用 Emoji-Log 的仓库:
|
||||
|
||||
- [Create-guten-block toolkit](https://github.com/ahmadawais/create-guten-block/commits/)
|
||||
- [VSCode Shades of Purple theme](https://github.com/ahmadawais/shades-of-purple-vscode/commits/)
|
||||
- [Ahmad Awais' GitHub repos](https://github.com/ahmadawais) (我的最新的仓库)
|
||||
- [CaptainCore CLI](https://github.com/CaptainCore/captaincore-cli/commits/) (WordPress 管理工具)
|
||||
- [CaptainCore GUI](https://github.com/CaptainCore/captaincore-gui/commits/) (WordPress 插件)
|
||||
|
||||
你呢?如果你的仓库使用 Emoji-Log,请将这个 [Emoji-Log 徽章](https://on.ahmda.ws/rOMZ/c)放到你的 README 中,并给我发送一个[拉取请求](https://github.com/ahmadawais/Emoji-Log/pulls),以让我可以将你的仓库列在这里。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -153,7 +167,7 @@ via: https://opensource.com/article/19/2/emoji-log-git-commit-messages
|
||||
作者:[Ahmad Awais][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10640-1.html)
|
||||
[#]: subject: (SPEED TEST: x86 vs. ARM for Web Crawling in Python)
|
||||
[#]: via: (https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-in-python/)
|
||||
[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)
|
||||
@ -12,7 +12,7 @@ x86 和 ARM 的 Python 爬虫速度对比
|
||||
|
||||
![][1]
|
||||
|
||||
如果你的老板给你的任务是不断地访问竞争对手的网站,把对方商品的价格记录下来,而且要纯手工操作,恐怕你会想要把整个办公室都烧掉。
|
||||
假如说,如果你的老板给你的任务是一次又一次地访问竞争对手的网站,把对方商品的价格记录下来,而且要纯手工操作,恐怕你会想要把整个办公室都烧掉。
|
||||
|
||||
之所以现在网络爬虫的影响力如此巨大,就是因为网络爬虫可以被用于追踪客户的情绪和趋向、搜寻空缺的职位、监控房地产的交易,甚至是获取 UFC 的比赛结果。除此以外,还有很多意想不到的用途。
|
||||
|
||||
@ -32,21 +32,21 @@ x86 和 ARM 的 Python 爬虫速度对比
|
||||
|
||||
ARM 是目前世界上最流行的 CPU 架构。
|
||||
|
||||
但 ARM 架构处理器在很多人眼中的地位只是作为一个节省成本的选择,而不是跑在生产环境中的处理器的首选。
|
||||
但 ARM 架构处理器在很多人眼中的地位只是作为一个省钱又省电的选择,而不是跑在生产环境中的处理器的首选。
|
||||
|
||||
然而,诞生于英国剑桥的 ARM CPU,最初是用于昂贵的 [Acorn Archimedes][4] 计算机上的,这是当时世界上最强大的计算机,它的运算速度甚至比最快的 386 还要快好几倍。
|
||||
然而,诞生于英国剑桥的 ARM CPU,最初是用于极其昂贵的 [Acorn Archimedes][4] 计算机上的,这是当时世界上最强大的桌面计算机,甚至在很长一段时间内,它的运算速度甚至比最快的 386 还要快好几倍。
|
||||
|
||||
Acorn 公司和 Commodore、Atari 的理念类似,他们认为一家伟大的计算机公司就应该制造出伟大的计算机,让人感觉有点目光短浅。而比尔盖茨的想法则有所不同,他力图在更多不同种类和价格的 x86 机器上使用他的 DOS 系统。
|
||||
|
||||
拥有大量用户基础的平台会让更多开发者开发出众多适应平台的软件,而软件资源丰富又让计算机更受用户欢迎。
|
||||
拥有大量用户基数的平台会成为第三方开发者开发软件的平台,而软件资源丰富又会让你的计算机更受用户欢迎。
|
||||
|
||||
即使是苹果公司也在这上面吃到了苦头,不得不在 x86 芯片上投入大量的财力。最终,这些芯片不再仅仅用于专业的计算任务,走进了人们的日常生活中。
|
||||
即使是苹果公司也几乎被打败。在 x86 芯片上投入大量的财力,最终,这些芯片被用于生产环境计算任务。
|
||||
|
||||
ARM 架构也并没有消失。基于 ARM 架构的芯片不仅运算速度快,同时也非常节能。因此诸如机顶盒、PDA、数码相机、MP3 播放器这些电子产品多数都会采用 ARM 架构的芯片,甚至在很多需要用电池、不配备大散热风扇的电子产品上,都可以见到 ARM 芯片的身影。
|
||||
但 ARM 架构也并没有消失。基于 ARM 架构的芯片不仅运算速度快,同时也非常节能。因此诸如机顶盒、PDA、数码相机、MP3 播放器这些电子产品多数都会采用 ARM 架构的芯片,甚至在很多需要用电池或不配备大散热风扇的电子产品上,都可以见到 ARM 芯片的身影。
|
||||
|
||||
而 ARM 则脱离 Acorn 成为了一种独立的商业模式,他们不生产实物芯片,仅仅是向芯片生产厂商出售相关的知识产权。
|
||||
而 ARM 则脱离 Acorn 成为了一种特殊的商业模式,他们不生产实物芯片,仅仅是向芯片生产厂商出售相关的知识产权。
|
||||
|
||||
因此,ARM 芯片被应用于很多手机和平板电脑上。当 Linux 被移植到这种架构的芯片上时,开源技术的大门就已经向它打开了,这才让我们今天得以在这些芯片上运行 web 爬虫程序。
|
||||
因此,这或多或少是 ARM 芯片被应用于如此之多的手机和平板电脑上的原因。当 Linux 被移植到这种架构的芯片上时,开源技术的大门就已经向它打开了,这才让我们今天得以在这些芯片上运行 web 爬虫程序。
|
||||
|
||||
#### 服务器端的 ARM
|
||||
|
||||
@ -64,11 +64,11 @@ ARM 架构也并没有消失。基于 ARM 架构的芯片不仅运算速度快
|
||||
|
||||
#### Scaleway
|
||||
|
||||
Scaleway 自身的定位是“专为开发者设计”。我觉得这个定位很准确,对于开发原型来说,Scaleway 提供的产品确实可以作为一个很好的沙盒环境。
|
||||
Scaleway 自身的定位是“专为开发者设计”。我觉得这个定位很准确,对于开发和原型设计来说,Scaleway 提供的产品确实可以作为一个很好的沙盒环境。
|
||||
|
||||
Scaleway 提供了一个简洁的页面,让用户可以快速地从主页进入 bash shell 界面。对于很多小企业、自由职业者或者技术顾问,如果想要运行 web 爬虫,这个产品毫无疑问是一个物美价廉的选择。
|
||||
Scaleway 提供了一个简洁的仪表盘页面,让用户可以快速地从主页进入 bash shell 界面。对于很多小企业、自由职业者或者技术顾问,如果想要运行 web 爬虫,这个产品毫无疑问是一个物美价廉的选择。
|
||||
|
||||
ARM 方面我们选择 [ARM64-2GB][10] 这一款服务器,每月只需要 3 欧元。它带有 4 个 Cavium ThunderX 核心,是在 2014 年推出的第一款服务器级的 ARMv8 处理器。但现在看来它已经显得有点落后了,并逐渐被更新的 ThunderX2 取代。
|
||||
ARM 方面我们选择 [ARM64-2GB][10] 这一款服务器,每月只需要 3 欧元。它带有 4 个 Cavium ThunderX 核心,这是在 2014 年推出的第一款服务器级的 ARMv8 处理器。但现在看来它已经显得有点落后了,并逐渐被更新的 ThunderX2 取代。
|
||||
|
||||
x86 方面我们选择 [1-S][11],每月的费用是 4 欧元。它拥有 2 个英特尔 Atom C3995 核心。英特尔的 Atom 系列处理器的特点是低功耗、单线程,最初是用在笔记本电脑上的,后来也被服务器所采用。
|
||||
|
||||
@ -76,43 +76,45 @@ x86 方面我们选择 [1-S][11],每月的费用是 4 欧元。它拥有 2 个
|
||||
|
||||
为了避免我不能熟练使用包管理器的尴尬局面,两方的操作系统我都会选择使用 Debian 9。
|
||||
|
||||
#### Amazon Web Services
|
||||
#### Amazon Web Services(AWS)
|
||||
|
||||
当你还在注册 AWS 账号的时候,使用 Scaleway 的用户可能已经把提交信用卡信息、启动 VPS 实例、添加sudoer、安装依赖包这一系列流程都完成了。AWS 的操作相对来说比较繁琐,甚至需要详细阅读手册才能知道你正在做什么。
|
||||
当你还在注册 AWS 账号的时候,使用 Scaleway 的用户可能已经把提交信用卡信息、启动 VPS 实例、添加 sudo 用户、安装依赖包这一系列流程都完成了。AWS 的操作相对来说比较繁琐,甚至需要详细阅读手册才能知道你正在做什么。
|
||||
|
||||
当然这也是合理的,对于一些需求复杂或者特殊的企业用户,确实需要通过详细的配置来定制合适的使用方案。
|
||||
|
||||
我们所采用的 AWS Graviton 处理器是 AWS EC2(Elastic Compute Cloud)的一部分,我会以按需实例的方式来运行,这也是最贵但最简捷的方式。AWS 同时也提供[竞价实例][12],这样可以用较低的价格运行实例,但实例的运行时间并不固定。如果实例需要长时间持续运行,还可以选择[预留实例][13]。
|
||||
我们所采用的 AWS Graviton 处理器是 AWS EC2(<ruby>弹性计算云<rt>Elastic Compute Cloud</rt></ruby>)的一部分,我会以按需实例的方式来运行,这也是最贵但最简捷的方式。AWS 同时也提供[竞价实例][12],这样可以用较低的价格运行实例,但实例的运行时间并不固定。如果实例需要长时间持续运行,还可以选择[预留实例][13]。
|
||||
|
||||
看,AWS 就是这么复杂……
|
||||
|
||||
我们分别选择 [a1.medium][14] 和 [t2.small][15] 两种型号的实例进行对比,两者都带有 2GB 内存。这个时候问题来了,手册中提到的 vCPU 又是什么?两种型号的不同之处就在于此。
|
||||
我们分别选择 [a1.medium][14] 和 [t2.small][15] 两种型号的实例进行对比,两者都带有 2GB 内存。这个时候问题来了,这里提到的 vCPU 又是什么?两种型号的不同之处就在于此。
|
||||
|
||||
对于 a1.medium 型号的实例,vCPU 是 AWS Graviton 芯片提供的单个计算核心。这个芯片由被亚马逊在 2015 收购的以色列厂商 Annapurna Labs 研发,是 AWS 独有的单线程 64 位 ARMv8 内核。它的按需价格为每小时 0.0255 美元。
|
||||
|
||||
而 t2.small 型号实例使用英特尔至强系列芯片,但我不确定具体是其中的哪一款。它每个核心有两个线程,但我们并不能用到整个核心,甚至整个线程。我们能用到的只是“20% 的基准性能,可以使用 CPU 积分突破这个基准”。这可能有一定的原因,但我没有弄懂。它的按需价格是每小时 0.023 美元。
|
||||
而 t2.small 型号实例使用英特尔至强系列芯片,但我不确定具体是其中的哪一款。它每个核心有两个线程,但我们并不能用到整个核心,甚至整个线程。
|
||||
|
||||
我们能用到的只是“20% 的基准性能,可以使用 CPU 积分突破这个基准”。这可能有一定的原因,但我没有弄懂。它的按需价格是每小时 0.023 美元。
|
||||
|
||||
在镜像库中没有 Debian 发行版的镜像,因此我选择了 Ubuntu 18.04。
|
||||
|
||||
### Beavis and Butthead Do Moz’s Top 500
|
||||
### 瘪四与大头蛋爬取 Moz 排行榜前 500 的网站
|
||||
|
||||
要测试这些 VPS 的 CPU 性能,就该使用爬虫了。一般来说都是对几个网站在尽可能短的时间里发出尽可能多的请求,但这种操作太暴力了,我的做法是只向大量网站发出少数几个请求。
|
||||
要测试这些 VPS 的 CPU 性能,就该使用爬虫了。一个方法是对几个网站在尽可能短的时间里发出尽可能多的请求,但这种操作不太礼貌,我的做法是只向大量网站发出少数几个请求。
|
||||
|
||||
为此,我编写了 `beavs.py` 这个爬虫程序(致敬我最喜欢的物理学家和制片人 Mike Judge)。这个程序会将 Moz 上排行前 500 的网站都爬取 3 层的深度,并计算 “wood” 和 “ass” 这两个单词在 HTML 文件中出现的次数。
|
||||
为此,我编写了 `beavis.py`(瘪四)这个爬虫程序(致敬我最喜欢的物理学家和制片人 Mike Judge)。这个程序会将 Moz 上排行前 500 的网站都爬取 3 层的深度,并计算 “wood” 和 “ass” 这两个单词在 HTML 文件中出现的次数。(LCTT 译注:beavis(瘪四)和 butt-head(大头蛋) 都是 Mike Judge 的动画片《瘪四与大头蛋》中的角色)
|
||||
|
||||
但我实际爬取的网站可能不足 500 个,因为我需要遵循网站的 `robot.txt` 协定,另外还有些网站需要提交 javascript 请求,也不一定会计算在内。但这已经是一个足以让 CPU 保持繁忙的爬虫任务了。
|
||||
|
||||
Python 的[全局解释器锁][16]机制会让我的程序只能用到一个 CPU 线程。为了测试多线程的性能,我需要启动多个独立的爬虫程序进程。
|
||||
|
||||
因此我还编写了 `butthead.py`,尽管 Butthead 很粗鲁,它也比 Beavis 要略胜一筹(译者注:beavis 和 butt-head 都是 Mike Judge 的动画片《Beavis and Butt-head》中的角色)。
|
||||
因此我还编写了 `butthead.py`,尽管大头蛋很粗鲁,它也总是比瘪四要略胜一筹。
|
||||
|
||||
我将整个爬虫任务拆分为多个部分,这可能会对爬取到的链接数量有一点轻微的影响。但无论如何,每次爬取都会有所不同,我们要关注的是爬取了多少个页面,以及耗时多长。
|
||||
|
||||
### 在 ARM 服务器上安装 Scrapy
|
||||
|
||||
安装 Scrapy 的过程与芯片的不同架构没有太大的关系,都是安装 pip 和相关的依赖包之后,再使用 pip 来安装Scrapy。
|
||||
安装 Scrapy 的过程与芯片的不同架构没有太大的关系,都是安装 `pip` 和相关的依赖包之后,再使用 `pip` 来安装 Scrapy。
|
||||
|
||||
据我观察,在使用 ARM 的机器上使用 pip 安装 Scrapy 确实耗时要长一点,我估计是由于需要从源码编译为二进制文件。
|
||||
据我观察,在使用 ARM 的机器上使用 `pip` 安装 Scrapy 确实耗时要长一点,我估计是由于需要从源码编译为二进制文件。
|
||||
|
||||
在 Scrapy 安装结束后,就可以通过 shell 来查看它的工作状态了。
|
||||
|
||||
@ -143,7 +145,7 @@ Scrapy 的官方文档建议[将爬虫程序的 CPU 使用率控制在 80% 到 9
|
||||
|
||||
我使用了 [top][18] 工具来查看爬虫程序运行期间的 CPU 使用率。在任务刚开始的时候,两者的 CPU 使用率都达到了 100%,但 ThunderX 大部分时间都达到了 CPU 的极限,无法看出来 Atom 的性能会比 ThunderX 超出多少。
|
||||
|
||||
通过 top 工具,我还观察了它们的内存使用情况。随着爬取任务的进行,ARM 机器的内存使用率最终达到了 14.7%,而 x86 则最终是 15%。
|
||||
通过 `top` 工具,我还观察了它们的内存使用情况。随着爬取任务的进行,ARM 机器的内存使用率最终达到了 14.7%,而 x86 则最终是 15%。
|
||||
|
||||
从运行日志还可以看出来,当 CPU 使用率到达极限时,会有大量的超时页面产生,最终导致页面丢失。这也是合理出现的现象,因为 CPU 过于繁忙会无法完整地记录所有爬取到的页面。
|
||||
|
||||
@ -156,19 +158,19 @@ Scrapy 的官方文档建议[将爬虫程序的 CPU 使用率控制在 80% 到 9
|
||||
| a1.medium | 100m 39.900s | 41,294 | 24,612.725 | 1.03605 |
|
||||
| t2.small | 78m 53.171s | 41,200 | 31,336.286 | 0.73397 |
|
||||
|
||||
为了方便比较,对于在 AWS 上跑的爬虫,我记录的指标和 Scaleway 上一致,但似乎没有达到预期的效果。这里我没有使用 top,而是使用了 AWS 提供的控制台来监控 CPU 的使用情况,从监控结果来看,我的爬虫程序并没有完全用到这两款服务器所提供的所有性能。
|
||||
为了方便比较,对于在 AWS 上跑的爬虫,我记录的指标和 Scaleway 上一致,但似乎没有达到预期的效果。这里我没有使用 `top`,而是使用了 AWS 提供的控制台来监控 CPU 的使用情况,从监控结果来看,我的爬虫程序并没有完全用到这两款服务器所提供的所有性能。
|
||||
|
||||
a1.medium 型号的机器尤为如此,在任务开始阶段,它的 CPU 使用率达到了峰值 45%,但随后一直在 20% 到 30% 之间。
|
||||
|
||||
让我有点感到意外的是,这个程序在 ARM 处理器上的运行速度相当慢,但却远未达到 Graviton CPU 能力的极限,而在 Inter 处理器上则可以在某些时候达到 CPU 能力的极限。它们运行的代码是完全相同的,处理器的不同架构可能导致了对代码的不同处理方式。
|
||||
让我有点感到意外的是,这个程序在 ARM 处理器上的运行速度相当慢,但却远未达到 Graviton CPU 能力的极限,而在 Intel Atom 处理器上则可以在某些时候达到 CPU 能力的极限。它们运行的代码是完全相同的,处理器的不同架构可能导致了对代码的不同处理方式。
|
||||
|
||||
个中原因无论是由于处理器本身的特性,还是而今是文件的编译,又或者是两者皆有,对我来说都是一个黑盒般的存在。我认为,既然在 AWS 机器上没有达到 CPU 处理能力的极限,那么只有在 Scaleway 机器上跑出来的性能数据是可以作为参考的。
|
||||
个中原因无论是由于处理器本身的特性,还是二进制文件的编译,又或者是两者皆有,对我来说都是一个黑盒般的存在。我认为,既然在 AWS 机器上没有达到 CPU 处理能力的极限,那么只有在 Scaleway 机器上跑出来的性能数据是可以作为参考的。
|
||||
|
||||
t2.small 型号的机器性能让人费解。CPU 利用率大概 20%,最高才达到 35%,是因为手册中说的“20% 的基准性能,可以使用 CPU 积分突破这个基准”吗?但在控制台中可以看到 CPU 积分并没有被消耗。
|
||||
|
||||
为了确认这一点,我安装了 [stress][19] 这个软件,然后运行了一段时间,这个时候发现居然可以把 CPU 使用率提高到 100% 了。
|
||||
|
||||
显然,我需要调整一下它们的配置文件。我将 CONCURRENT_REQUESTS 参数设置为 5000,将 REACTOR_THREADPOOL_MAXSIZE 参数设置为 120,将爬虫任务的负载调得更大。
|
||||
显然,我需要调整一下它们的配置文件。我将 `CONCURRENT_REQUESTS` 参数设置为 5000,将 `REACTOR_THREADPOOL_MAXSIZE` 参数设置为 120,将爬虫任务的负载调得更大。
|
||||
|
||||
| 机器种类 | 耗时 | 爬取页面数 | 每小时爬取页面数 | 每万页面费用(美元) |
|
||||
| ----------------------- | ----------- | ---------- | ---------------- | -------------------- |
|
||||
@ -182,7 +184,7 @@ a1.medium 型号机器的 CPU 使用率在爬虫任务开始后 5 分钟飙升
|
||||
|
||||
现在我们看到它们的性能都差不多了。但至强处理器的线程持续跑满了 CPU,Graviton 处理器则只是有一段时间如此。可以认为 Graviton 略胜一筹。
|
||||
|
||||
然而,如果 CPU 积分耗尽了呢?这种情况下的对比可能更为公平。为了测试这种情况,我使用 stress 把所有的 CPU 积分用完,然后再次启动了爬虫任务。
|
||||
然而,如果 CPU 积分耗尽了呢?这种情况下的对比可能更为公平。为了测试这种情况,我使用 `stress` 把所有的 CPU 积分用完,然后再次启动了爬虫任务。
|
||||
|
||||
在没有 CPU 积分的情况下,CPU 使用率在 27% 就到达极限不再上升了,同时又出现了丢失页面的现象。这么看来,它的性能比负载较低的时候更差。
|
||||
|
||||
@ -190,7 +192,7 @@ a1.medium 型号机器的 CPU 使用率在爬虫任务开始后 5 分钟飙升
|
||||
|
||||
将爬虫任务分散到不同的进程中,可以有效利用机器所提供的多个核心。
|
||||
|
||||
一开始,我将爬虫任务分布在 10 个不同的进程中并同时启动,结果发现比仅使用 1 个进程的时候还要慢。
|
||||
一开始,我将爬虫任务分布在 10 个不同的进程中并同时启动,结果发现比每个核心仅使用 1 个进程的时候还要慢。
|
||||
|
||||
经过尝试,我得到了一个比较好的方案。把爬虫任务分布在 10 个进程中,但每个核心只启动 1 个进程,在每个进程接近结束的时候,再从剩余的进程中选出 1 个进程启动起来。
|
||||
|
||||
@ -198,7 +200,7 @@ a1.medium 型号机器的 CPU 使用率在爬虫任务开始后 5 分钟飙升
|
||||
|
||||
想要预估某个域名的页面量,一定程度上可以参考这个域名主页的链接数量。我用另一个程序来对这个数量进行了统计,然后按照降序排序。经过这样的预处理之后,只会额外增加 1 分钟左右的时间。
|
||||
|
||||
结果,爬虫运行的总耗时找过了两个小时!毕竟把链接最多的域名都堆在同一个进程中也存在一定的弊端。
|
||||
结果,爬虫运行的总耗时超过了两个小时!毕竟把链接最多的域名都堆在同一个进程中也存在一定的弊端。
|
||||
|
||||
针对这个问题,也可以通过调整各个进程爬取的域名数量来进行优化,又或者在排序之后再作一定的修改。不过这种优化可能有点复杂了。
|
||||
|
||||
@ -225,7 +227,9 @@ a1.medium 型号机器的 CPU 使用率在爬虫任务开始后 5 分钟飙升
|
||||
|
||||
### 结论
|
||||
|
||||
从上面的数据来看,不同架构的 CPU 性能和它们的问世时间没有直接的联系,AWS Graviton 是单线程情况下性能最佳的。
|
||||
从上面的数据来看,对于性能而言,CPU 的架构并没有它们的问世时间重要,2018 年生产的 AWS Graviton 是单线程情况下性能最佳的。
|
||||
|
||||
你当然可以说按核心来比,Xeon 仍然赢了。但是,你不但需要计算美元的变化,甚至还要计算线程数。
|
||||
|
||||
另外在性能方面 2017 年生产的 Atom 轻松击败了 2014 年生产的 ThunderX,而 ThunderX 则在性价比方面占优。当然,如果你使用 AWS 的机器的话,还是使用 Graviton 吧。
|
||||
|
||||
@ -243,7 +247,7 @@ a1.medium 型号机器的 CPU 使用率在爬虫任务开始后 5 分钟飙升
|
||||
|
||||
要运行这些代码,需要预先安装 Scrapy,并且需要 [Moz 上排名前 500 的网站][21]的 csv 文件。如果要运行 `butthead.py`,还需要安装 [psutil][22] 这个库。
|
||||
|
||||
##### beavis.py
|
||||
*beavis.py*
|
||||
|
||||
```
|
||||
import scrapy
|
||||
@ -347,7 +351,7 @@ if __name__ == '__main__':
|
||||
print('Uh huhuhuhuh. It said wood ' + str(wood) + ' times.')
|
||||
```
|
||||
|
||||
##### butthead.py
|
||||
*butthead.py*
|
||||
|
||||
```
|
||||
import scrapy, time, psutil
|
||||
@ -494,7 +498,7 @@ via: https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-i
|
||||
作者:[James Mawson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,163 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10646-1.html)
|
||||
[#]: subject: (Set up two-factor authentication for SSH on Fedora)
|
||||
[#]: via: (https://fedoramagazine.org/two-factor-authentication-ssh-fedora/)
|
||||
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
|
||||
|
||||
在 Fedora 上为 SSH 设置双因子验证
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2019/02/twofactor-auth-ssh-816x345.png)
|
||||
|
||||
每天似乎都有一个安全漏洞的新闻报道,说我们的数据会因此而存在风险。尽管 SSH 是一种远程连接系统的安全方式,但你仍然可以使它更安全。本文将向你展示如何做到这一点。
|
||||
|
||||
此时<ruby>双因子验证<rt>two-factor authentication</rt></ruby>(2FA)就有用武之地了。即使你禁用密码并只允许使用公钥和私钥进行 SSH 连接,但如果未经授权的用户偷窃了你的密钥,他仍然可以借此访问系统。
|
||||
|
||||
使用双因子验证,你不能仅仅使用 SSH 密钥连接到服务器,你还需要提供手机上的验证器应用程序随机生成的数字。
|
||||
|
||||
本文展示的方法是<ruby>基于时间的一次性密码<rt>Time-based One-time Password</rt></ruby>(TOTP)算法。[Google Authenticator][1] 用作服务器应用程序。默认情况下,Google Authenticator 在 Fedora 中是可用的。
|
||||
|
||||
至于手机,你可以使用任何兼容 TOTP 的双因子验证应用程序。Android 或 iOS 上有许多可以与 TOTP 和 Google Authenticator 配合使用的免费应用程序。本文以 [FreeOTP][2] 为例。
|
||||
|
||||
### 安装并设置 Google Authenticator
|
||||
|
||||
首先,在你的服务器上安装 Google Authenticator。
|
||||
```
|
||||
$ sudo dnf install -y google-authenticator
|
||||
```
|
||||
|
||||
运行应用程序:
|
||||
|
||||
```
|
||||
$ google-authenticator
|
||||
```
|
||||
|
||||
该应用程序提供了一系列问题。下面的片段展示了如何进行合理的安全设置:
|
||||
|
||||
```
|
||||
Do you want authentication tokens to be time-based (y/n) y
|
||||
Do you want me to update your "/home/user/.google_authenticator" file (y/n)? y
|
||||
```
|
||||
|
||||
这个应用程序为你提供一个密钥、验证码和恢复码。把它们放在安全的地方。如果你丢失了手机,恢复码是访问服务器的**唯一**方式。
|
||||
|
||||
### 设置手机验证
|
||||
|
||||
在你的手机上安装验证器应用程序(FreeOTP)。如果你有一台安卓手机,那么你可以在 Google Play 中找到它,也可以在苹果 iPhone 的 iTunes 商店中找到它。
|
||||
|
||||
Google Authenticator 会在屏幕上显示一个二维码。打开手机上的 FreeOTP 应用程序,选择添加新账户,在应用程序顶部选择二维码形状工具,然后扫描二维码即可。设置完成后,在每次远程连接服务器时,你必须提供验证器应用程序生成的随机数。
|
||||
|
||||
### 完成配置
|
||||
|
||||
应用程序会向你询问更多的问题。下面示例展示了如何设置合理的安全配置。
|
||||
|
||||
```
|
||||
Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y
|
||||
By default, tokens are good for 30 seconds. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of +-1min (window size of 3) to about +-4min (window size of 17 acceptable tokens).
|
||||
Do you want to do so? (y/n) n
|
||||
If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s.
|
||||
Do you want to enable rate-limiting (y/n) y
|
||||
```
|
||||
|
||||
现在,你必须设置 SSH 来利用新的双路验证。
|
||||
|
||||
### 配置 SSH
|
||||
|
||||
在完成此步骤之前,**确保你已使用公钥建立了一个可用的 SSH 连接**,因为我们将禁用密码连接。如果出现问题或错误,一个已经建立的连接将允许你修复问题。
|
||||
|
||||
在你的服务器上,使用 [sudo][3] 编辑 `/etc/pam.d/sshd` 文件。
|
||||
|
||||
```
|
||||
$ sudo vi /etc/pam.d/sshd
|
||||
```
|
||||
|
||||
注释掉 `auth substack password-auth` 这一行:
|
||||
|
||||
```
|
||||
#auth substack password-auth
|
||||
```
|
||||
|
||||
将以下行添加到文件底部:
|
||||
|
||||
```
|
||||
auth sufficient pam_google_authenticator.so
|
||||
```
|
||||
|
||||
保存并关闭文件。然后编辑 `/etc/ssh/sshd_config` 文件:
|
||||
|
||||
```
|
||||
$ sudo vi /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
找到 `ChallengeResponseAuthentication` 这一行并将其更改为 `yes`:
|
||||
|
||||
```
|
||||
ChallengeResponseAuthentication yes
|
||||
```
|
||||
|
||||
找到 `PasswordAuthentication` 这一行并将其更改为 `no`:
|
||||
|
||||
```
|
||||
PasswordAuthentication no
|
||||
```
|
||||
|
||||
将以下行添加到文件底部:
|
||||
|
||||
```
|
||||
AuthenticationMethods publickey,password publickey,keyboard-interactive
|
||||
```
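
在保存文件并重启 SSH 之前,可以先用 sshd 自带的测试模式检查一下配置文件有没有语法错误,这是一个可选但稳妥的步骤(如果提示找不到命令,可以改用完整路径 `/usr/sbin/sshd`):

```
$ sudo sshd -t
```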
|
||||
|
||||
保存并关闭文件,然后重新启动 SSH:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart sshd
|
||||
```
|
||||
|
||||
### 测试双因子验证
|
||||
|
||||
当你尝试连接到服务器时,系统会提示你输入验证码:
|
||||
|
||||
```
|
||||
[user@client ~]$ ssh user@example.com
|
||||
Verification code:
|
||||
```
|
||||
|
||||
验证码由你手机上的验证器应用程序随机生成。由于这个数字每隔几秒就会发生变化,因此你需要在它变化之前输入它。
|
||||
|
||||
![][4]
|
||||
|
||||
如果你不输入验证码,你将无法访问系统,你会收到一个权限被拒绝的错误:
|
||||
|
||||
```
|
||||
[user@client ~]$ ssh user@example.com
|
||||
Verification code:
|
||||
Verification code:
|
||||
Verification code:
|
||||
Permission denied (keyboard-interactive).
|
||||
[user@client ~]$
|
||||
```
|
||||
|
||||
### 结论
|
||||
|
||||
通过添加这种简单的双路验证,现在未经授权的用户访问你的服务器将变得更加困难。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/two-factor-authentication-ssh-fedora/
|
||||
|
||||
作者:[Curt Warfield][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/rcurtiswarfield/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Google_Authenticator
|
||||
[2]: https://freeotp.github.io/
|
||||
[3]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/02/freeotp-1.png
|
@ -1,14 +1,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10624-1.html)
|
||||
[#]: subject: (All about {Curly Braces} in Bash)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash)
|
||||
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
|
||||
|
||||
浅析 Bash 中的 {花括号}
|
||||
======
|
||||
> 让我们继续我们的 Bash 基础之旅,来近距离观察一下花括号,了解一下如何和何时使用它们。
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/curly-braces-1920.jpg?itok=cScRhWrX)
|
||||
|
||||
@ -70,7 +71,6 @@ month=("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec")
|
||||
|
||||
```
|
||||
$ echo ${month[3]} # 数组索引从 0 开始,因此 [3] 对应第 4 个元素
|
||||
|
||||
Apr
|
||||
```
|
||||
|
||||
@ -94,13 +94,12 @@ dec2bin=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
|
||||
|
||||
```
|
||||
$ echo ${dec2bin[25]}
|
||||
|
||||
00011001
|
||||
```
|
||||
|
||||
对于进制转换,确实还有更好的方法,但这不失为一个有趣的方法。
|
||||
|
||||
### <ruby>参数展开<rt>parameter expansion</rt></ruby>
|
||||
### 参数展开
|
||||
|
||||
再看回前面的
|
||||
|
||||
@ -108,7 +107,7 @@ $ echo ${dec2bin[25]}
|
||||
echo ${month[3]}
|
||||
```
|
||||
|
||||
在这里,花括号的作用就不是构造序列了,而是用于参数展开。顾名思义,参数展开就是将花括号中的变量展开为这个变量实际的内容。
|
||||
在这里,花括号的作用就不是构造序列了,而是用于<ruby>参数展开<rt>parameter expansion</rt></ruby>。顾名思义,参数展开就是将花括号中的变量展开为这个变量实际的内容。
|
||||
|
||||
我们继续使用上面的 `month` 数组来举例:
|
||||
|
||||
@ -132,7 +131,7 @@ a="Too longgg"
|
||||
echo ${a%gg}
|
||||
```
|
||||
|
||||
可以输出“too long”,也就是去掉了最后的两个 g。
|
||||
可以输出 “too long”,也就是去掉了最后的两个 g。
|
||||
|
||||
在这里,
|
||||
|
||||
@ -141,8 +140,6 @@ echo ${a%gg}
|
||||
* `%` 告诉 shell 需要在展开字符串之后从字符串的末尾去掉某些内容
|
||||
* `gg` 是被去掉的内容
|
||||
|
||||
|
||||
|
||||
这个特性在转换文件格式的时候会比较有用,我来举个例子:
|
||||
|
||||
[ImageMagick][3] 是一套可以用于操作图像文件的命令行工具,它有一个 `convert` 命令。这个 `convert` 命令的作用是可以为某个格式的图像文件制作一个另一格式的副本。
|
||||
@ -206,14 +203,6 @@ echo "I found all these PNGs:"; find . -iname "*.png"; echo "Within this bunch o
|
||||
|
||||
在后续的文章中,我会介绍其它“包裹”类符号的用法,敬请关注。
|
||||
|
||||
相关阅读:
|
||||
|
||||
[And, Ampersand, and & in Linux][4]
|
||||
|
||||
[Ampersands and File Descriptors in Bash][5]
|
||||
|
||||
[Logical & in Bash][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
|
||||
@ -221,15 +210,12 @@ via: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
|
||||
[2]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
|
||||
[1]: https://linux.cn/article-10465-1.html
|
||||
[3]: http://www.imagemagick.org/
|
||||
[4]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
|
||||
[5]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
|
||||
|
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10631-1.html)
|
||||
[#]: subject: (Linux security: Cmd provides visibility, control over user activity)
|
||||
[#]: via: (https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Linux 安全:Cmd 提供对用户活动的可见性与控制力
|
||||
======
|
||||
> Cmd 可以帮助机构监控、验证和阻止那些超出系统预期使用范围的活动。
|
||||
|
||||
![](https://images.techhive.com/images/article/2017/01/background-1900329_1920-100705659-large.jpg)
|
||||
|
||||
有一个新的 Linux 安全工具你值得了解一下:Cmd(读作 “see em dee”),它极大地改变了可以对 Linux 用户进行控制的类型。它远远超出了传统的用户权限配置,并在监视和控制用户能够在 Linux 系统上运行的命令方面发挥了积极作用。
|
||||
|
||||
Cmd 由同名公司开发,专注于云应用。鉴于越来越多的应用迁移到依赖于 Linux 的云环境中,而可用工具的缺口使得难以充分实施所需的安全性。除此以外,Cmd 还可用于管理和保护本地系统。
|
||||
|
||||
### Cmd 与传统 Linux 安全控件的区别
|
||||
|
||||
Cmd 公司的领导 Milun Tesovic 和 Jake King 表示,除非了解了用户日常如何工作以及什么才算是“正常”,否则机构无法自信地预测或控制用户行为。他们寻求提供一种能够精细控制、监控和验证用户活动的工具。
|
||||
|
||||
Cmd 通过形成用户活动配置文件(描绘这些用户通常进行的活动)来监视用户活动,注意其在线行为的异常(登录时间、使用的命令、用户位置等),以及预防和报告某些意味着系统攻击的活动(例如,下载或修改文件和运行特权命令)。产品的行为是可配置的,可以快速进行更改。
|
||||
|
||||
如今大多数人用来检测威胁、识别漏洞和控制用户权限的工具,我们已经使用了很久了,但我们仍在努力抗争保持系统和数据的安全。Cmd 让我们更能够确定恶意用户的意图,无论这些用户是设法侵入帐户还是代表内部威胁。
|
||||
|
||||
![1 sources live sessions][1]
|
||||
|
||||
*查看实时 Linux 会话*
|
||||
|
||||
### Cmd 如何工作?
|
||||
|
||||
在监视和管理用户活动时,Cmd 可以:
|
||||
|
||||
* 收集描述用户活动的信息
|
||||
* 使用基线来确定什么是正常的
|
||||
* 使用特定指标检测并主动防止威胁
|
||||
* 向负责人发送警报
|
||||
|
||||
![2 triggers][3]
|
||||
|
||||
*在 Cmd 中构建自定义策略*
|
||||
|
||||
Cmd 扩展了系统管理员通过传统方法可以控制的内容,例如配置 `sudo` 权限,提供更精细和特定情境的控制。
|
||||
|
||||
管理员可以选择独立于 Linux 系统管理员所管理的用户权限控制之外、单独进行管理的提权策略。
|
||||
|
||||
Cmd 客户端提供实时可视化(而不是事后日志分析),并且可以阻止操作、要求额外的身份验证或根据需要进行协商授权。
|
||||
|
||||
此外,如果有用户位置信息,Cmd 支持基于地理定位的自定义规则。并且可以在几分钟内将新策略推送到部署在主机上的客户端。
|
||||
|
||||
![3 command blocked][4]
|
||||
|
||||
*在 Cmd 中构建触发器查询*
|
||||
|
||||
### Cmd 的融资新闻
|
||||
|
||||
[Cmd][2] 最近完成了由 [GV][6] (前身为 Google Ventures)领投,Expa、Amplify Partners 和其他战略投资者跟投的 [1500 万美元的融资][5]。这使该公司的融资金额达到了 2160 万美元,这将帮助其继续为该产品增加新的防御能力并发展其工程师团队。
|
||||
|
||||
此外,该公司还任命 GV 的普通合伙人 Karim Faris 为董事会成员。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/1-sources-live-sessions-100789431-large.jpg
|
||||
[2]: https://cmd.com
|
||||
[3]: https://images.idgesg.net/images/article/2019/02/2-triggers-100789432-large.jpg
|
||||
[4]: https://images.idgesg.net/images/article/2019/02/3-command-blocked-100789433-large.jpg
|
||||
[5]: https://www.linkedin.com/pulse/changing-cybersecurity-announcing-cmds-15-million-funding-jake-king/
|
||||
[6]: https://www.gv.com/
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,199 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10635-1.html)
|
||||
[#]: subject: (How To Find Available Network Interfaces On Linux)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Linux 中查看可用的网络接口
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/02/network-interface-720x340.jpeg)
|
||||
|
||||
在我们安装完一个 Linux 系统后最为常见的任务便是网络配置了。当然,你可以在安装系统时进行网络接口的配置。但是,对于某些人来说,他们更偏爱在安装完系统后再进行网络的配置或者更改现存的设置。众所周知,为了在命令行中进行网络设定的配置,我们首先必须知道系统中有多少个可用的网络接口。本次这个简单的指南将列出所有可能的方式来在 Linux 和 Unix 操作系统中找到可用的网络接口。
|
||||
|
||||
### 在 Linux 中找到可用的网络接口
|
||||
|
||||
我们可以使用下面的这些方法来找到可用的网络接口。
|
||||
|
||||
#### 方法 1 使用 ifconfig 命令
|
||||
|
||||
使用 `ifconfig` 命令来查看网络接口仍然是最常使用的方法。我相信还有很多 Linux 用户仍然使用这个方法。
|
||||
|
||||
```
|
||||
$ ifconfig -a
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
enp5s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
|
||||
ether 24:b6:fd:37:8b:29 txqueuelen 1000 (Ethernet)
|
||||
RX packets 0 bytes 0 (0.0 B)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 0 bytes 0 (0.0 B)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
|
||||
inet 127.0.0.1 netmask 255.0.0.0
|
||||
inet6 ::1 prefixlen 128 scopeid 0x10<host>
|
||||
loop txqueuelen 1000 (Local Loopback)
|
||||
RX packets 171420 bytes 303980988 (289.8 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 171420 bytes 303980988 (289.8 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
wlp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
|
||||
inet 192.168.225.37 netmask 255.255.255.0 broadcast 192.168.225.255
|
||||
inet6 2409:4072:6183:c604:c218:85ff:fe50:474f prefixlen 64 scopeid 0x0<global>
|
||||
inet6 fe80::c218:85ff:fe50:474f prefixlen 64 scopeid 0x20<link>
|
||||
ether c0:18:85:50:47:4f txqueuelen 1000 (Ethernet)
|
||||
RX packets 564574 bytes 628671925 (599.5 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 299706 bytes 60535732 (57.7 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
```
|
||||
|
||||
如上面的输出所示,在我的 Linux 机器上有两个网络接口,它们分别叫做 `enp5s0`(主板上的有线网卡)和 `wlp9s0`(无线网卡)。其中的 `lo` 是环回接口,用来访问本地网络服务,通常它的 IP 地址为 `127.0.0.1`。
|
||||
|
||||
我们也可以在许多 UNIX 变种例如 FreeBSD 中使用相同的 `ifconfig` 来列出可用的网卡。
|
||||
|
||||
#### 方法 2 使用 ip 命令
|
||||
|
||||
在最新的 Linux 版本中, `ifconfig` 命令已经被弃用了。你可以使用 `ip` 命令来罗列出网络接口,正如下面这样:
|
||||
|
||||
```
|
||||
$ ip link show
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
2: enp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
|
||||
link/ether 24:b6:fd:37:8b:29 brd ff:ff:ff:ff:ff:ff
|
||||
3: wlp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
|
||||
link/ether c0:18:85:50:47:4f brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/02/ip-command.png)
|
||||
|
||||
你也可以使用下面的命令来查看。
|
||||
|
||||
```
|
||||
$ ip addr
|
||||
```
|
||||
|
||||
```
|
||||
$ ip -s link
|
||||
```
|
||||
|
||||
你注意到了吗?这些命令同时还显示出了已经连接的网络接口的状态。假如你仔细查看上面的输出,你将注意到我的有线网卡并没有跟网络线缆连接(从上面输出中的 `DOWN` 可以看出)。另外,我的无线网卡已经连接了(从上面输出中的 `UP` 可以看出)。想知晓更多的细节,可以查看我们先前的指南 [在 Linux 中查看网络接口的已连接状态][1]。
|
||||
|
||||
这两个命令(`ifconfig` 和 `ip`)已经足够在你的 LInux 系统中查看可用的网卡了。
|
||||
|
||||
然而,仍然有其他方法来列出 Linux 中的网络接口,下面我们接着看。
|
||||
|
||||
#### 方法 3 使用 /sys/class/net 目录
|
||||
|
||||
Linux 内核将网络接口的详细信息保存在 `/sys/class/net` 目录中,你可以通过查看这个目录的内容来检验可用接口的列表是否和前面的结果相符。
|
||||
|
||||
```
|
||||
$ ls /sys/class/net
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
enp5s0 lo wlp9s0
|
||||
```
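
`/sys/class/net` 下的每一项其实都是一个目录,里面还保存着接口状态、MAC 地址等信息。下面这个小循环可以顺便打印出每个接口当前的连接状态(仅作演示,输出以上文那台机器为例):

```
$ for iface in /sys/class/net/*; do echo "$(basename "$iface"): $(cat "$iface/operstate")"; done
enp5s0: down
lo: unknown
wlp9s0: up
```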
|
||||
|
||||
#### 方法 4 使用 /proc/net/dev 文件
|
||||
|
||||
在 Linux 操作系统中,文件 `/proc/net/dev` 中包含有关网络接口的信息。
|
||||
|
||||
要查看可用的网卡,只需使用下面的命令来查看上面文件的内容:
|
||||
|
||||
```
|
||||
$ cat /proc/net/dev
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
Inter-| Receive | Transmit
|
||||
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
|
||||
wlp9s0: 629189631 566078 0 0 0 0 0 0 60822472 300922 0 0 0 0 0 0
|
||||
enp5s0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
|
||||
lo: 303980988 171420 0 0 0 0 0 0 303980988 171420 0 0 0 0 0 0
|
||||
```
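
如果只想从这个文件里提取出接口的名字,可以顺手用 `awk` 处理一下(下面只是一个小示例):

```
# 跳过前两行表头,去掉空格后打印冒号前的接口名
$ awk -F: 'NR > 2 {gsub(/ /, "", $1); print $1}' /proc/net/dev
wlp9s0
enp5s0
lo
```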
|
||||
|
||||
#### 方法 5 使用 netstat 命令
|
||||
|
||||
`netstat` 命令可以列出各种不同的信息,例如网络连接、路由表、接口统计信息、伪装连接和多播成员等。
|
||||
|
||||
```
|
||||
$ netstat -i
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
Kernel Interface table
|
||||
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
|
||||
lo 65536 171420 0 0 0 171420 0 0 0 LRU
|
||||
wlp9s0 1500 565625 0 0 0 300543 0 0 0 BMRU
|
||||
```
|
||||
|
||||
请注意 `netstat` 被弃用了, `netstat -i` 的替代命令是 `ip -s link`。另外需要注意的是这个方法将只列出激活的接口,而不是所有可用的接口。
|
||||
|
||||
#### 方法 6 使用 nmcli 命令
|
||||
|
||||
`nmcli` 是一个用来控制 NetworkManager 和报告网络状态的命令行工具。它可以被用来创建、展示、编辑、删除、激活、停用网络连接和展示网络状态。
|
||||
|
||||
假如你的 Linux 系统中安装了 NetworkManager,你便可以使用下面的 `nmcli` 命令来列出可用的网络接口:
|
||||
|
||||
```
|
||||
$ nmcli device status
|
||||
```
|
||||
|
||||
或者
|
||||
|
||||
```
|
||||
$ nmcli connection show
|
||||
```
|
||||
|
||||
现在你知道了如何在 Linux 中找到可用网络接口的方法,接下来,请查看下面的指南来知晓如何在 Linux 中配置 IP 地址吧。
|
||||
|
||||
- [如何在 Linux 和 Unix 中配置静态 IP 地址][2]
|
||||
- [如何在 Ubuntu 18.04 LTS 中配置 IP 地址][3]
|
||||
- [如何在 Arch Linux 中配置静态和动态 IP 地址][4]
|
||||
- [如何在 Linux 中为单个网卡分配多个 IP 地址][5]
|
||||
|
||||
假如你知道其他快捷的方法来在 Linux 中找到可用的网络接口,请在下面的评论部分中分享出来,我将检查你们的评论并更新这篇指南。
|
||||
|
||||
这就是全部的内容了,更多精彩内容即将呈现,请保持关注!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/how-to-find-out-the-connected-state-of-a-network-cable-in-linux/
|
||||
[2]: https://www.ostechnix.com/configure-static-ip-address-linux-unix/
|
||||
[3]: https://www.ostechnix.com/how-to-configure-ip-address-in-ubuntu-18-04-lts/
|
||||
[4]: https://www.ostechnix.com/configure-static-dynamic-ip-address-arch-linux/
|
||||
[5]: https://www.ostechnix.com/how-to-assign-multiple-ip-addresses-to-single-network-card-in-linux/
|
@ -0,0 +1,59 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (sanfusu)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10650-1.html)
|
||||
[#]: subject: (Blockchain 2.0: An Introduction [Part 1])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction/)
|
||||
[#]: author: (ostechnix https://www.ostechnix.com/author/editor/)
|
||||
|
||||
区块链 2.0:介绍(一)
|
||||
==============
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/blockchain-introduction-720x340.png)
|
||||
|
||||
### 区块链 2.0:下一个计算范式
|
||||
|
||||
**区块链**现在显然被认为是一种变革性技术,它将为人们使用互联网的方式带来革新。本系列文章将探讨即将到来的基于区块链 2.0 的技术和应用浪潮。各类利益相关者对它表现出的极大兴趣,足以证明区块链的分量。
|
||||
|
||||
对于任何打算使用互联网做任何事情的人来说,了解它是什么以及它是如何工作的都是至关重要的。即使你所做的只是盯着 Instagram 上朋友们的早餐照片,或者寻找下一个最好的视频片段,你也需要知道这项技术能对这些提供什么样的帮助。
|
||||
|
||||
尽管区块链的基本概念早在上世纪 90 年代就被学术界提及,但它之所以成为网民热词,要归功于诸如**比特币**和**以太币**等支付平台的崛起。
|
||||
|
||||
比特币最初是一种去中心化的数字货币。它的出现意味着你基本上可以通过互联网进行完全匿名、安全可靠的支付。不过,在比特币这个简单的金融令牌系统背后,是区块链。您可以将比特币技术或任何加密货币看作是 3 层结构。区块链基础技术可以验证、记录和确认交易,在这个基础之上是协议,本质上来讲是一个规则或在线礼仪,用来兑现、记录和确认交易,当然,最重要的是通常被称作比特币的加密货币令牌。一旦记录了协议相关的事务,则由区块链生成令牌。
|
||||
|
||||
虽然大多数人只看到了最顶层,即代表比特币真正含义的硬币或代币,但很少有人知道,在区块链基础技术的帮助下,金融交易只是众多此类可能性中的一种。目前正在探讨这些可能性,以产生和开发所有去中心化交易方式的新标准。
|
||||
|
||||
在最基本的层面上,区块链可以被认为是一个包含所有记录和交易的账簿。这实际上意味着区块链理论上可以处理所有类型的记录。未来这方面的发展可能会导致各种硬资产(如房地产契约、实物钥匙等)和软无形资产(如身份记录、专利、商标、预约等)被编码为数字资产,通过区块链进行保护和转让。
|
||||
|
||||
对于不熟悉区块链的人来说,区块链上的事务本质上被认为是无偏见的永久记录。这是可能的,因为协议中内置了**共识系统**。所有交易均由系统参与者确认、审核和记录,在比特币加密货币平台中,该角色由**矿工**和交易所负责。这可能因不同的平台或区块链而异。构建该平台的协议栈是由开源代码所定义的,并且对任何具有技术能力的人都是免费的。与目前互联网上运行的许多其他平台不同,公开透明被内置进了该系统。
|
||||
|
||||
一旦事务被记录并编码到区块链中,它们就会被看到。参与者有义务按照它们最初的执行方式履行其交易和合约。除非原来的规则禁止了它,否则执行本身将由平台自动处理,因为它是硬编码的。区块链平台对于试图篡改记录、记录的持久性等方面的恢复能力,在因特网上是闻所未闻的。当这项技术的支持者们宣称其日益重要的意义时,这种能力是经常被提及的附加信任层。
|
||||
|
||||
这些特性并不是最近才被发现的隐藏的平台潜力,而是从一开始就被设想出来的。传说中的比特币创造者<ruby>中本聪<rt>Satoshi Nakamoto</rt></ruby>在一份公报中说**“我花了数年的时间来构造一个用来支撑巨大的各种可能事务类型的设计……如果比特币能够流行起来,这些就是我们未来要探索的……但是它们在最初就设计,以确保它们将来能够实现。”**。这些特性被设计并融入到已经存在的协议中的事实印证了这些话。关键的想法是,去中心化的事务分类账就像区块链的功能一样,可以用于传输、部署和执行各种形式的合约。
|
||||
|
||||
领先的机构目前正在探索重新发明股票、养老金和衍生品等金融工具的可能性,而世界各国政府更关注区块链的防篡改和永久性保存记录的潜力。该平台的支持者声称,一旦开发达到一个关键的门槛,从你的酒店钥匙卡到版权和专利,那时起,一切都将通过区块链记录和实现。
|
||||
|
||||
**Ledra Capital**在[这个][1]页面上汇编并维护了几乎完整的项目和细节列表,这些项目和细节理论上可以通过区块链模型实现。想要真正意识到区块链对我们生活的影响有多大是一项艰巨的任务,但看看这个清单就会再次证明这么做的重要性。
|
||||
|
||||
现在,上面提到的所有官僚和商业用途可能会让你相信,这样的技术只会出现在政府和大型私营企业领域。然而,事实远非如此。鉴于该系统的巨大潜力使其对此类用途具有吸引力,而区块链还具有其它可能性和特性。还有一些与该技术相关的更复杂的概念,如 DApp、DAO、DAC、DAS 等,本系列文章将深入讨论这些概念。
|
||||
|
||||
基本上,开发正在如火如荼地进行,任何人对基于区块链的系统的定义、标准和功能进行指点以便进行更广泛的推广还为时尚早,但是这种可能性及其即将产生的影响无疑是存在的。甚至有人谈到基于区块链的智能手机和选举期间的民意调查。
|
||||
|
||||
这只是一个简短的对这个平台能力的鸟瞰。我们将通过一系列这样详细的帖子和文章来研究这些不同的可能性。关注[本系列的下一篇文章][2],它将探索区块链是如何革新交易和契约的。
|
||||
|
||||
---
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-an-introduction/
|
||||
|
||||
作者:[ostechnix][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[sanfusu](https://github.com/sanfusu)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://ledracapital.com/blog/2014/3/11/bitcoin-series-24-the-mega-master-blockchain-list
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-revolutionizing-the-financial-system/
|
70
published/20190303 How to boot up a new Raspberry Pi.md
Normal file
@ -0,0 +1,70 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qhwdw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10644-1.html)
|
||||
[#]: subject: (How to boot up a new Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/how-boot-new-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
树莓派使用入门:如何启动一个新的树莓派
|
||||
======
|
||||
> 在本系列文章的第三篇中,我们将教你开始使用树莓派,学习如何安装一个 Linux 操作系统。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y)
|
||||
|
||||
如果你按顺序看我们本系列的文章,那么你已经 [选择][1] 和 [购买][2] 了你的树莓派和外围设备,现在,你将要去使用它。在第三篇文章中,我们来看一下你需要做些什么才能让它启动起来。
|
||||
|
||||
与你的笔记本、台式机、智能手机或平板电脑不一样的是,树莓派上并没有内置存储,而是需要使用一个 Micro SD 卡去存储操作系统和文件。这么做的最大好处就是携带你的文件比较方便(甚至都不用带着树莓派)。不利之处是存储卡丢失和损坏的风险可能很高,这将导致你的文件丢失。因此,只要保护好你的 Micro SD 卡就没什么问题了。
|
||||
|
||||
你应该也知道,SD 卡的读写速度比起机械硬盘或固态硬盘要慢很多,因此,你的树莓派的启动、读取和写入速度将不如其它设备。
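如果你想亲自验证这一点,可以在树莓派上粗略测一下 SD 卡的读写速度。下面是一个简单的示意(假设 SD 卡对应的设备名是 `/dev/mmcblk0`,并且系统中已经安装了 `hdparm`;设备名在你的系统上可能不同):

```
# 测量 SD 卡的缓冲读取速度(只读测试,不会修改卡上的数据)
sudo hdparm -t /dev/mmcblk0

# 用 dd 粗略测一下写入速度:写入一个 100MB 的临时文件,然后删掉它
dd if=/dev/zero of=~/speedtest.tmp bs=1M count=100 conv=fdatasync
rm ~/speedtest.tmp
```

把结果和你电脑上的硬盘或固态硬盘对比一下,就能直观感受到两者的差距。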
|
||||
|
||||
### 如何安装 Raspbian
|
||||
|
||||
你拿到新树莓派的第一件事情就是将它的操作系统安装到一个 Micro SD 卡上。尽管树莓派上可用的操作系统很多(基于 Linux 的或非基于 Linux 的都有),但本系列课程将专注于 [Raspbian][3],它是树莓派的官方 Linux 版本。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/raspbian.png)
|
||||
|
||||
安装 Raspbian 的最简单的方式是使用 [NOOBS][4],它是 “New Out Of Box Software” 的缩写。树莓派官方提供了非常详细的 [NOOBS 文档][5],因此,我就不在这里重复这些安装指令了。
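作为补充,下面给出一个在 Linux 上用命令行为 NOOBS 准备 Micro SD 卡的大致示意(其中 `/dev/sdX` 和 `~/Downloads/NOOBS.zip` 只是假设的设备名和文件路径,在你的环境中可能不同;具体步骤请以官方 NOOBS 文档为准,Windows 和 macOS 用户直接按文档用图形化工具操作即可):

```
# 警告:请先确认 /dev/sdX 确实是你的 SD 卡,下面的操作会清空卡上的全部数据!
# 1. 将整张卡重新分区并格式化为一个 FAT32 分区(NOOBS 需要 FAT 文件系统)
sudo parted --script /dev/sdX mklabel msdos mkpart primary fat32 1MiB 100%
sudo mkfs.vfat /dev/sdX1

# 2. 挂载分区,并把 NOOBS 压缩包的内容解压到卡的根目录
sudo mount /dev/sdX1 /mnt
sudo unzip ~/Downloads/NOOBS.zip -d /mnt

# 3. 卸载后把卡插入树莓派,接通电源即可进入 NOOBS 的安装界面
sudo umount /mnt
```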
|
||||
|
||||
NOOBS 可以让你选择安装以下的这些操作系统:
|
||||
|
||||
+ [Raspbian][6]
|
||||
+ [LibreELEC][7]
|
||||
+ [OSMC][8]
|
||||
+ [Recalbox][9]
|
||||
+ [Lakka][10]
|
||||
+ [RISC OS][11]
|
||||
+ [Screenly OSE][12]
|
||||
+ [Windows 10 IoT Core][13]
|
||||
+ [TLXOS][14]
|
||||
|
||||
再强调一次,我们在本系列的课程中使用的是 Raspbian,因此,拿起你的 Micro SD 卡,然后按照 NOOBS 文档去安装 Raspbian 吧。在本系列的第四篇文章中,我们将带你看看如何使用 Linux,包括你需要掌握的一些主要命令。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/how-boot-new-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://linux.cn/article-10611-1.html
|
||||
[2]: https://linux.cn/article-10615-1.html
|
||||
[3]: https://www.raspbian.org/RaspbianFAQ
|
||||
[4]: https://www.raspberrypi.org/downloads/noobs/
|
||||
[5]: https://www.raspberrypi.org/documentation/installation/noobs.md
|
||||
[6]: https://www.raspbian.org/RaspbianFAQ
|
||||
[7]: https://libreelec.tv/
|
||||
[8]: https://osmc.tv/
|
||||
[9]: https://www.recalbox.com/
|
||||
[10]: http://www.lakka.tv/
|
||||
[11]: https://www.riscosopen.org/wiki/documentation/show/Welcome%20to%20RISC%20OS%20Pi
|
||||
[12]: https://www.screenly.io/ose/
|
||||
[13]: https://developer.microsoft.com/en-us/windows/iot
|
||||
[14]: https://thinlinx.com/
|
@ -1,18 +1,19 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qhwdw)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10645-1.html)
|
||||
[#]: subject: (Learn Linux with the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/learn-linux-raspberry-pi)
|
||||
[#]: author: (Andersn Silva https://opensource.com/users/ansilva)
|
||||
|
||||
用树莓派学 Linux
|
||||
树莓派使用入门:用树莓派学 Linux
|
||||
======
|
||||
我们的《树莓派使用入门》的第四篇文章将进入到 Linux 命令行。
|
||||
> 我们的《树莓派使用入门》的第四篇文章将进入到 Linux 命令行。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
|
||||
|
||||
在本系列的 [第三篇文章][1] 中开始了我们的树莓派探索之旅,我分享了如何安装 `Raspbian`,它是树莓派的官方 Linux 版本。现在,你已经安装好了 `Raspbian` 并用它引导你的新树莓派,你已经具备学习 Linux 相关知识的条件了。
|
||||
在本系列的 [第三篇文章][1] 中开始了我们的树莓派探索之旅,我分享了如何安装 Raspbian,它是树莓派的官方 Linux 版本。现在,你已经安装好了 Raspbian 并用它引导你的新树莓派,你已经具备学习 Linux 相关知识的条件了。
|
||||
|
||||
在这样一篇简短的文章中去解决像“如何使用 Linux”这样的宏大主题显然是不切实际的,因此,我只是给你提供一些关于如何使用树莓派来学习更多 Linux 知识的创意而已。
|
||||
|
||||
@ -22,17 +23,15 @@
|
||||
|
||||
如果你想成为一个 Linux 用户,从终端中尝试以下的命令行开始:
|
||||
|
||||
* 使用像 **ls**、**cd**、和 **pwd** 这样的命令导航到你的 Home 目录。
|
||||
* 使用 **mkdir**、**rm**、**mv**、和 **cp** 命令创建、删除、和重命名目录。
|
||||
* 使用像 `ls`、`cd` 和 `pwd` 这样的命令导航到你的 Home 目录。
|
||||
* 使用 `mkdir`、`rm`、`mv` 和 `cp` 命令创建、删除、和重命名目录。
|
||||
* 使用命令行编辑器(如 Vi、Vim、Emacs 或 Nano)去创建一个文本文件。
|
||||
* 尝试一些其它命令,比如 **chmod**、**chown**、**w**、**cat**、**more**、**less**、**tail**、**free**、**df**、**ps**、**uname**、和 **kill**。
|
||||
* 尝试一下 **/bin** 和 **/usr/bin** 目录中的其它命令。
|
||||
* 尝试一些其它命令,比如 `chmod`、`chown`、`w`、`cat`、`more`、`less`、`tail`、`free`、`df`、`ps`、`uname` 和 `kill`。
|
||||
* 尝试一下 `/bin` 和 `/usr/bin` 目录中的其它命令。
|
||||
|
||||
学习命令行的最佳方式还是阅读它的 “man 手册”(简称手册);在命令行中输入 `man <command>` 就可以像上面那样打开它。并且在互联网上搜索 Linux 命令速查表可以让你更清楚地了解命令的用法 —— 你应该会找到一大堆能帮你学习的资料。
|
||||
|
||||
|
||||
学习命令行的最佳方式还是阅读它的 “man 手册”(简称手册);在命令行中输入 **man <command>** 就可以像上面那样打开它。并且在互联网上搜索 Linux 命令速查表可以让你更清楚地了解命令的用法 — 你应该会找到一大堆能帮你学习的资料。
|
||||
|
||||
Raspbian 就像主流的 Linux 发行版一样有非常多的命令,假以时日,你最终将比其他人会用更多的命令。我使用 Linux 命令行已经超过二十年了,即便这样仍然有些一些命令我从来没有使用过,即便是那些我使用的过程中一直就存在的命令。
|
||||
Raspbian 就像主流的 Linux 发行版一样有非常多的命令,假以时日,你最终将比其他人会用更多的命令。我使用 Linux 命令行已经超过二十年了,即便这样仍然有一些命令我从来没有使用过,即便是那些我使用的过程中就一直存在的命令。
|
||||
|
||||
最后,你可以使用图形环境去更快地工作,但是只有深入到 Linux 命令行,你才能够获得操作系统真正的强大功能和知识。
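下面是一个把上面提到的几个命令串起来的终端小练习,可以在 Raspbian 的终端里照着敲一遍(其中的目录名和文件名只是示例,换成任何你喜欢的名字都可以):

```
pwd                         # 查看当前所在目录
ls -l                       # 列出当前目录的内容
mkdir practice              # 创建一个练习用的目录
cd practice                 # 进入该目录
echo "hello" > note.txt     # 创建一个文本文件
cat note.txt                # 查看文件内容
cp note.txt note-copy.txt   # 复制文件
mv note-copy.txt backup.txt # 重命名(移动)文件
df -h                       # 查看磁盘使用情况
man ls                      # 阅读 ls 的手册页,按 q 退出
cd .. && rm -r practice     # 返回上一级并删除练习目录
```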
|
||||
|
||||
@ -43,11 +42,11 @@ via: https://opensource.com/article/19/3/learn-linux-raspberry-pi
|
||||
作者:[Andersn Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/2/how-boot-new-raspberry-pi
|
||||
[1]: https://linux.cn/article-10644-1.html
|
||||
[2]: https://opensource.com/article/18/8/window-manager
|
@ -1,40 +1,42 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qhwdw)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10653-1.html)
|
||||
[#]: subject: (5 ways to teach kids to program with Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/teach-kids-program-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
教孩子们使用树莓派学编程的 5 种方法。
|
||||
树莓派使用入门:教孩子们用树莓派学编程的 5 种方法
|
||||
======
|
||||
这是我们的《树莓派入门指南》系列的第五篇文章,它探索了帮助孩子们学习编程的一些资源。
|
||||
|
||||
> 这是我们的《树莓派入门指南》系列的第五篇文章,它探索了帮助孩子们学习编程的一些资源。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesgen_rh_032x_0.png?itok=cApG9aB4)
|
||||
|
||||
无数的学校、图书馆和家庭已经证明,树莓派是让孩子们接触编程的最好方式。在本系列的前四篇文章中,你已经学习了如何去[购买][1]、[安装][2]、和[配置][3]一个树莓派。在第五篇文章中,我们将分享一些帮助孩子们使用树莓派编程的入门级资源。
|
||||
|
||||
### Scratch
|
||||
|
||||
[Scratch][4] 是让孩子们了解编程基本概念(比如变量、布尔逻辑、循环等等)的一个很好的方式。你在 Raspbian 中就可以找到它,并且在互联网上你可以找到非常多的有关 Scratch 的文章和教程,包括在 `Opensource.com` 上的 [今天的 Scratch 是不是像“上世纪八十年代教孩子学LOGO编程”?][5]。
|
||||
[Scratch][4] 是让孩子们了解编程基本概念(比如变量、布尔逻辑、循环等等)的一个很好的方式。你在 Raspbian 中就可以找到它,并且在互联网上你可以找到非常多的有关 Scratch 的文章和教程,包括在 Opensource.com 上的 [今天的 Scratch 是不是像“上世纪八十年代教孩子学 LOGO 编程”?][5]。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/scratch2.png)
|
||||
|
||||
### Code.org
|
||||
|
||||
[Code.org][6] 是另一个非常好的教孩子学编程的在线资源。这个组织的使命是让更多的人通过课程、教程和流行的一小时学编程来接触编程。许多学校 — 包括我五年级的儿子就读的学校 — 都使用它,让更多的孩子学习编程和计算机科学的概念。
|
||||
[Code.org][6] 是另一个非常好的教孩子学编程的在线资源。这个组织的使命是让更多的人通过课程、教程和流行的一小时学编程来接触编程。许多学校(包括我五年级的儿子就读的学校)都使用它,让更多的孩子学习编程和计算机科学的概念。
|
||||
|
||||
### 阅读
|
||||
|
||||
读书是学习编程的另一个很好的方式。学习如何编程并不需要你会说英语,当然,如果你会英语的话,学习起来将更容易,因为大多数的编程语言都是使用英文关键字去描述命令的。如果你的英语很好,能够轻松地阅读接下来的这个树莓派系列文章,那么你就完全有能力去阅读有关编程的书籍、论坛和其它的出版物。我推荐一本由 `Jason Biggs` 写的书: [儿童学 Python:非常有趣的 Python 编程入门][7]。
|
||||
读书是学习编程的另一个很好的方式。学习如何编程并不需要你会说英语,当然,如果你会英语的话,学习起来将更容易,因为大多数的编程语言都是使用英文关键字去描述命令的。如果你的英语很好,能够轻松地阅读接下来的这个树莓派系列文章,那么你就完全有能力去阅读有关编程的书籍、论坛和其它的出版物。我推荐一本由 Jason Biggs 写的书: [儿童学 Python:非常有趣的 Python 编程入门][7]。
|
||||
|
||||
### Raspberry Jam
|
||||
|
||||
另一个让你的孩子进入编程世界的好方法是在聚会中让他与其他人互动。树莓派基金会赞助了一个称为 [Raspberry Jams][8] 的活动,让世界各地的孩子和成人共同参与在树莓派上学习。如果你所在的地区没有 `Raspberry Jam`,基金会有一个[指南][9]和其它资源帮你启动一个 `Raspberry Jam`。
|
||||
另一个让你的孩子进入编程世界的好方法是在聚会中让他与其他人互动。树莓派基金会赞助了一个称为 [Raspberry Jams][8] 的活动,让世界各地的孩子和成人共同参与在树莓派上学习。如果你所在的地区没有 Raspberry Jam,基金会有一个[指南][9]和其它资源帮你启动一个 Raspberry Jam。
|
||||
|
||||
### 游戏
|
||||
|
||||
最后一个(是本文的最后一个,当然还有其它的方式),[Minecraft][10] 有一个树莓派版本。<ruby>我的世界<rt>Minecraft</rt></ruby>已经从一个多玩家的、类似于”数字乐高“这样的游戏,成长为一个任何人都能使用 Pythonb 和其它编程语言去构建我自己的虚拟世界。更多内容查看 [Minecraft Pi 入门][11] 和 [Minecraft 一小时入门教程][12]。
|
||||
最后一个(是本文的最后一个,当然还有其它的方式),[Minecraft][10] 有一个树莓派版本。<ruby>我的世界<rt>Minecraft</rt></ruby>已经从一个多玩家的、类似于“数字乐高”这样的游戏,成长为一个任何人都可以使用 Python 和其它编程语言来构建自己的虚拟世界的平台。更多内容请查看 [Minecraft Pi 入门][11] 和 [Minecraft 一小时入门教程][12]。
|
||||
|
||||
你还有教孩子用树莓派学编程的珍藏资源吗?请在下面的评论区共享出来吧。
|
||||
|
||||
@ -45,15 +47,15 @@ via: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/2/how-buy-raspberry-pi
|
||||
[2]: https://opensource.com/article/19/2/how-boot-new-raspberry-pi
|
||||
[3]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
|
||||
[1]: https://linux.cn/article-10615-1.html
|
||||
[2]: https://linux.cn/article-10644-1.html
|
||||
[3]: https://linux.cn/article-10645-1.html
|
||||
[4]: https://scratch.mit.edu/
|
||||
[5]: https://opensource.com/article/17/3/logo-scratch-teach-programming-kids
|
||||
[6]: https://code.org/
|
74
published/20190307 13 open source backup solutions.md
Normal file
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qhwdw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10655-1.html)
|
||||
[#]: subject: (13 open source backup solutions)
|
||||
[#]: via: (https://opensource.com/article/19/3/backup-solutions)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
13 个开源备份解决方案
|
||||
======
|
||||
|
||||
> 读者们推荐了十多个他们喜欢的数据保护解决方案。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
|
||||
|
||||
最近,我发起了一个 [投票][1],让读者投票选出他们最喜欢的开源备份解决方案。在我们的 [版主社区][2] 上,我们提供了六个推荐的解决方案 —— Cronopete、Deja Dup、Rclone、Rdiff-backup、Restic 和 Rsync,而参与的读者也在评论区分享了一些其它的选择。读者们提供的这 13 个其它的解决方案,是我们(到目前为止)没有想到或没有听说过的。
|
||||
|
||||
到目前为止,最受欢迎的推荐是 [BorgBackup][3]。它是一个带有压缩和加密特性以及数据去重功能的备份解决方案。它基于 BSD 许可证,支持 Linux、MacOS 和 BSD。
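下面是一个使用 BorgBackup 备份家目录中文档的最小示意(其中的仓库路径 `/mnt/backup/borg-repo` 和归档名只是假设的示例,请换成你自己的路径):

```
# 初始化一个加密的备份仓库,只需执行一次(会提示你设置密码)
borg init --encryption=repokey /mnt/backup/borg-repo

# 创建一个以日期命名的归档,备份 ~/Documents
borg create --stats --compression lz4 \
    /mnt/backup/borg-repo::docs-{now:%Y-%m-%d} ~/Documents

# 列出仓库中已有的归档
borg list /mnt/backup/borg-repo
```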
|
||||
|
||||
第二个是 [UrBackup][4],它可以做镜像和文件的完整和增量备份;你可以保存整个分区或单个目录。它有 Windows、Linux、和 MacOS 客户端,并且采用 GNU Affero 公共许可证。
|
||||
|
||||
第三个是 [LuckyBackup][5];根据其网站介绍,“它是一个易于使用、快速(只传输变化部分,而不是全部数据)、安全(在做任何数据操作之前,先检查所有需要备份的目录,以确保数据安全)、可靠和完全可定制的备份解决方案”。它在 GPL 许可证下发行。
|
||||
|
||||
[Casync][6] 是一个内容可寻址的同步解决方案 —— 它被设计用于备份、同步、存储和检索大型文件系统的多个相关版本。它使用 GNU Lesser 公共许可证。
|
||||
|
||||
[Syncthing][7] 用于在两台计算机之间同步文件。它基于 Mozilla 公共许可证发布,根据其网站介绍,它是安全和私密的。它可以工作于 MacOS、Windows、Linux、FreeBSD、Solaris 和 OpenBSD。
|
||||
|
||||
[Duplicati][8] 是一个可工作于 Windows、MacOS 和 Linux 上的、并且支持多种标准协议(比如 FTP、SSH、WebDAV 和云服务)、免费的备份解决方案。它的特性是强大的加密功能,并且它使用 GPL 许可证。
|
||||
|
||||
[Dirvish][9] 是一个基于磁盘的虚拟镜像备份系统,它使用 OSL-3.0 许可证。它要求必须安装有 Rsync、Perl5、SSH。
|
||||
|
||||
[Bacula][10] 的网站上介绍说:“它是一套计算机程序,允许系统管理员对跨网络的不同种类计算机上的多种数据进行备份、恢复和管理”。它支持在 Linux、FreeBSD、Windows、MacOS、OpenBSD 和 Solaris 上运行,并且它的大部分源代码都是基于 AGPLv3 许可证的。
|
||||
|
||||
[BackupPC][11] 的网站上介绍说:”它是一个高性能的、企业级的、可以备份 Linux、Windows 和 MacOS 系统的 PC 和笔记本电脑上的数据到服务器磁盘上的备份解决方案“。它是基于 GPLv3 许可证的。
|
||||
|
||||
[Amanda][12] 是一个使用 C 和 Perl 写的备份系统,它允许系统管理员去备份整个网络中的客户端到一台服务器上的磁带、磁盘或基于云的系统。它是由马里兰大学于 1991 年开发并拥有版权,并且它有一个 BSD 式的许可证。
|
||||
|
||||
[Back in Time][13] 是一个为 Linux 设计的简单的备份实用程序。它提供了命令行和图形用户界面,它们都是用 Python 写的。去执行一个备份,只需要指定存储快照的位置、需要备份的文件夹,和备份频率即可。它使用的是 GPLv2 许可证。
|
||||
|
||||
[Timeshift][14] 是一个 Linux 上的备份实用程序,它类似于 Windows 上的系统还原和 MacOS 上的时间机器(Time Machine)。它的 GitHub 仓库上介绍说:“Timeshift 通过定期递增的文件系统快照来保护你的系统。这些快照可以在日后用于数据恢复,以撤销某些对文件系统的修改。”
|
||||
|
||||
[Kup][15] 是一个帮助用户将他们的文件备份到 USB 驱动器上的备份解决方案,但它也可以用于执行网络备份。它的 GitHub 仓库上介绍说:“当插入你的外部硬盘时,Kup 将自动启动并复制你的最新修改。”
|
||||
|
||||
感谢大家在我们的投票中分享你们喜爱的开源备份解决方案!如果还有其它的、没有提到的开源备份解决方案,请在下面的评论区分享它们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/backup-solutions
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/2/linux-backup-solutions
|
||||
[2]: https://opensource.com/opensourcecom-team
|
||||
[3]: https://www.borgbackup.org/
|
||||
[4]: https://www.urbackup.org/
|
||||
[5]: http://luckybackup.sourceforge.net/
|
||||
[6]: http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html
|
||||
[7]: https://syncthing.net/
|
||||
[8]: https://www.duplicati.com/
|
||||
[9]: http://dirvish.org/
|
||||
[10]: https://www.bacula.org/
|
||||
[11]: https://backuppc.github.io/backuppc/
|
||||
[12]: http://www.amanda.org/
|
||||
[13]: https://github.com/bit-team/backintime
|
||||
[14]: https://github.com/teejee2008/timeshift
|
||||
[15]: https://github.com/spersson/Kup
|
@ -0,0 +1,65 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10629-1.html)
|
||||
[#]: subject: (How To Fix “Network Protocol Error” On Mozilla Firefox)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-fix-network-protocol-error-on-mozilla-firefox/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何修复 Mozilla Firefox 中出现的 “Network Protocol Error”
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-logo-1-720x340.png)
|
||||
|
||||
Mozilla Firefox 多年来一直是我的默认 Web 浏览器,我每天用它来进行日常网络活动,例如访问邮件,浏览喜欢的网站等。今天,我在使用 Firefox 时遇到了一个奇怪的错误。我试图在 Reddit 平台上分享我们的一个指南时,在 Firefox 上出现了以下错误消息:
|
||||
|
||||
> Network Protocol Error
|
||||
|
||||
> Firefox has experienced a network protocol violation that cannot be repaired.
|
||||
|
||||
> The page you are trying to view cannot be shown because an error in the network protocol was detected.
|
||||
|
||||
> Please contact the website owners to inform them of this problem.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox.png)
|
||||
|
||||
老实说,我有点慌,我以为可能是我的系统受到了某种恶意软件的影响。哈哈!但是我发现我错了。我在 Arch Linux 桌面上使用的是最新的 Firefox 版本,我在 Chromium 浏览器中打开了相同的链接,它正确显示了,我猜这是 Firefox 相关的错误。在谷歌上搜索后,我解决了这个问题,如下所述。
|
||||
|
||||
出现这种问题主要是因为“浏览器缓存”。如果你遇到此类错误,例如 “Network Protocol Error” 或 “Corrupted Content Error”,可以遵循以下任意一种方法来解决。
|
||||
|
||||
**方法 1:**
|
||||
|
||||
要修复 “Network Protocol Error” 或 “Corrupted Content Error”,你需要在重新加载网页时绕过缓存。为此,按下 `Ctrl + F5` 或 `Ctrl + Shift + R` 快捷键,它将从服务器重新加载页面,而不是从 Firefox 缓存加载。这样网页就应该可以正常工作了。
|
||||
|
||||
**方法 2:**
|
||||
|
||||
如果方法 1 不起作用,尝试以下方法。
|
||||
|
||||
打开 “Edit -> Preferences”,在 “Preferences” 窗口中,打开左窗格中的 “Privacy & Security” 选项卡,单击 “Clear Data” 选项清除 Firefox 缓存。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-1.png)
|
||||
|
||||
确保你选中了 “Cookies and Site Data” 和 “Cached Web Content” 选项,然后单击 “Clear”。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-2.png)
|
||||
|
||||
完成!现在 Cookie 和离线内容将被删除。注意,Firefox 可能会将你从登录的网站中注销,稍后你可以重新登录这些网站。最后,关闭 Firefox 浏览器并重新启动系统。现在网页加载没有任何问题。
|
||||
|
||||
希望这对你有帮助。更多好东西要来了,敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-fix-network-protocol-error-on-mozilla-firefox/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
296
sources/talk/20160921 lawyer The MIT License, Line by Line.md
Normal file
@ -0,0 +1,296 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (lawyer The MIT License, Line by Line)
|
||||
[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html)
|
||||
[#]: author: (Kyle E. Mitchell https://kemitchell.com/)
|
||||
|
||||
lawyer The MIT License, Line by Line
|
||||
======
|
||||
|
||||
### The MIT License, Line by Line
|
||||
|
||||
[The MIT License][1] is the most popular open-source software license. Here’s one read of it, line by line.
|
||||
|
||||
#### Read the License
|
||||
|
||||
If you’re involved in open-source software and haven’t taken the time to read the license from top to bottom—it’s only 171 words—you need to do so now. Especially if licenses aren’t your day-to-day. Make a mental note of anything that seems off or unclear, and keep trucking. I’ll repeat every word again, in chunks and in order, with context and commentary. But it’s important to have the whole in mind.
|
||||
|
||||
> The MIT License (MIT)
|
||||
>
|
||||
> Copyright (c) <year> <copyright holders>
|
||||
>
|
||||
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
|
||||
>
|
||||
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
|
||||
>
|
||||
> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software.
|
||||
|
||||
The license is arranged in five paragraphs, but breaks down logically like this:
|
||||
|
||||
* **Header**
|
||||
* **License Title** : “The MIT License”
|
||||
* **Copyright Notice** : “Copyright (c) …”
|
||||
* **License Grant** : “Permission is hereby granted …”
|
||||
* **Grant Scope** : “… to deal in the Software …”
|
||||
* **Conditions** : “… subject to …”
|
||||
* **Attribution and Notice** : “The above … shall be included …”
|
||||
* **Warranty Disclaimer** : “The software is provided ‘as is’ …”
|
||||
* **Limitation of Liability** : “In no event …”
|
||||
|
||||
|
||||
|
||||
Here we go:
|
||||
|
||||
#### Header
|
||||
|
||||
##### License Title
|
||||
|
||||
> The MIT License (MIT)
|
||||
|
||||
“The MIT License” is a not a single license, but a family of license forms derived from language prepared for releases from the Massachusetts Institute of Technology. It has seen a lot of changes over the years, both for the original projects that used it, and also as a model for other projects. The Fedora Project maintains a [kind of cabinet of MIT license curiosities][2], with insipid variations preserved in plain text like anatomical specimens in formaldehyde, tracing a wayward kind of evolution.
|
||||
|
||||
Fortunately, the [Open Source Initiative][3] and [Software Package Data eXchange][4] groups have standardized a generic MIT-style license form as “The MIT License”. OSI in turn has adopted SPDX’ standardized [string identifiers][5] for common open-source licenses, with `MIT` pointing unambiguously to the standardized form “MIT License”. If you want MIT-style terms for a new project, use [the standardized form][1].
|
||||
|
||||
Even if you include “The MIT License” or “SPDX:MIT” in a `LICENSE` file, any responsible reviewer will still run a comparison of the text against the standard form, just to be sure. While various license forms calling themselves “MIT License” vary only in minor details, the looseness of what counts as an “MIT License” has tempted some authors into adding bothersome “customizations”. The canonical horrible, no good, very bad example of this is [the JSON license][6], an MIT-family license plus “The Software shall be used for Good, not Evil.”. This kind of thing might be “very Crockford”. It is definitely a pain in the ass. Maybe the joke was supposed to be on the lawyers. But they laughed all the way to the bank.
|
||||
|
||||
Moral of the story: “MIT License” alone is ambiguous. Folks probably have a good idea what you mean by it, but you’re only going to save everyone—yourself included—time by copying the text of the standard MIT License form into your project. If you use metadata, like the `license` property in package manager metadata files, to designate the `MIT` license, make sure your `LICENSE` file and any header comments use the standard form text. All of this can be [automated][7].
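As a rough illustration of that kind of check, here is one way a reviewer might compare a project's `LICENSE` file against a local copy of the standardized text (the filename `mit-standard.txt` is just an assumed local copy of the SPDX MIT form; whitespace is normalized first so line wrapping doesn't register as a difference, and the copyright notice line will of course differ from a template with placeholders):

```
# Collapse runs of whitespace so only wording differences remain, then
# compare the project's LICENSE against a local copy of the standard text.
diff <(tr -s '[:space:]' ' ' < LICENSE) \
     <(tr -s '[:space:]' ' ' < mit-standard.txt) \
  && echo "LICENSE matches the standard MIT form (ignoring whitespace)"
```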
|
||||
|
||||
##### Copyright Notice
|
||||
|
||||
> Copyright (c) <year> <copyright holders>
|
||||
|
||||
Until the 1976 Copyright Act, United States copyright law required specific actions, called “formalities”, to secure copyright in creative works. If you didn’t follow those formalities, your rights to sue others for unauthorized use of your work were limited, often completely lost. One of those formalities was “notice”: Putting marks on your work and otherwise making it known to the market that you were claiming copyright. The © is a standard symbol for marking copyrighted works, to give notice of copyright. The ASCII character set doesn’t have the © symbol, but `Copyright (c)` gets the same point across.
|
||||
|
||||
The 1976 Copyright Act, which “implemented” many requirements of the international Berne Convention, eliminated formalities for securing copyright. At least in the United States, copyright holders still need to register their copyrighted works before suing for infringement, with potentially higher damages if they register before infringement begins. In practice, however, many register copyright right before bringing suit against someone in particular. You don’t lose your copyright just by failing to put notices on it, registering, sending a copy to the Library of Congress, and so on.
|
||||
|
||||
Even if copyright notices aren’t as absolutely necessary as they used to be, they are still plenty useful. Stating the year a work was authored and who the copyright belonged to give some sense of when copyright in the work might expire, bringing the work into the public domain. The identity of the author or authors is also useful: United States law calculates copyright terms differently for individual and “corporate” authors. Especially in business use, it may also behoove a company to think twice about using software from a known competitor, even if the license terms give very generous permission. If you’re hoping others will see your work and want to license it from you, copyright notices serve nicely for attribution.
|
||||
|
||||
As for “copyright holder”: Not all standard form licenses have a space to write this out. More recent license forms, like [Apache 2.0][8] and [GPL 3.0][9], publish `LICENSE` texts that are meant to be copied verbatim, with header comments and separate files elsewhere to indicate who owns copyright and is giving the license. Those approaches neatly discourage changes to the “standard” texts, accidental or intentional. They also make automated license identification more reliable.
|
||||
|
||||
The MIT License descends from language written for releases of code by institutions. For institutional releases, there was just one clear “copyright holder”, the institution releasing the code. Other institutions cribbed these licenses, replacing “MIT” with their own names, leading eventually to the generic forms we have now. This process repeated for other short-form institutional licenses of the era, notably the [original four-clause BSD License][10] for the University of California, Berkeley, now used in [three-clause][11] and [two-clause][12] variants, as well as [The ISC License][13] for the Internet Systems Consortium, an MIT variant.
|
||||
|
||||
In each case, the institution listed itself as the copyright holder in reliance on rules of copyright ownership, called “[works made for hire][14]” rules, that give employers and clients ownership of copyright in some work their employees and contractors do on their behalf. These rules don’t usually apply to distributed collaborators submitting code voluntarily. This poses a problem for project-steward foundations, like the Apache Foundation and Eclipse Foundation, that accept contributions from a more diverse group of contributors. The usual foundation approach thus far has been to use a house license that states a single copyright holder—[Apache 2.0][8] and [EPL 1.0][15]—backed up by contributor license agreements—[Apache CLAs][16] and [Eclipse CLAs][17]—to collect rights from contributors. Collecting copyright ownership in one place is even more important under “copyleft” licenses like the GPL, which rely on copyright owners to enforce license conditions to promote software-freedom values.
|
||||
|
||||
These days, loads of projects without any kind of institutional or business steward use MIT-style license terms. SPDX and OSI have helped these use cases by standardizing forms of licenses like MIT and ISC that don’t refer to a specific entity or institutional copyright holder. Armed with those forms, the prevailing practice of project authors is to fill their own name in the copyright notice of the form very early on … and maybe bump the year here and there. At least under United States copyright law, the resulting copyright notice doesn’t give a full picture.
|
||||
|
||||
The original owner of a piece of software retains ownership of their work. But while MIT-style license terms give others rights to build on and change the software, creating what the law calls “derivative works”, they don’t give the original author ownership of copyright in others’ contributions. Rather, each contributor has copyright in any [even marginally creative][18] work they make using the existing code as a starting point.
|
||||
|
||||
Most of these projects also balk at the idea of taking contributor license agreements, to say nothing of signed copyright assignments. That’s both naive and understandable. Despite the assumption of some newer open-source developers that sending a pull request on GitHub “automatically” licenses the contribution for distribution on the terms of the project’s existing license, United States law doesn’t recognize any such rule. Strong copyright protection, not permissive licensing, is the default.
|
||||
|
||||
Update: GitHub later changed its site-wide terms of service to include an attempt to flip this default, at least on GitHub.com. I’ve written up some thoughts on that development, not all of them positive, in [another post][19].
|
||||
|
||||
To fill the gap between legally effective, well-documented grants of rights in contributions and no paper trail at all, some projects have adopted the [Developer Certificate of Origin][20], a standard statement contributors allude to using `Signed-Off-By` metadata tags in their Git commits. The Developer Certificate of Origin was developed for Linux kernel development in the wake of the infamous SCO lawsuits, which alleged that chunks of Linux’ code derived from SCO-owned Unix source. As a means of creating a paper trail showing that each line of Linux came from a contributor, the Developer Certificate of Origin functions nicely. While the Developer Certificate of Origin isn’t a license, it does provide lots of good evidence that those submitting code expected the project to distribute their code, and for others to use it under the kernel’s existing license terms. The kernel also maintains a machine-readable `CREDITS` file listing contributors with name, affiliation, contribution area, and other metadata. I’ve done [some][21] [experiments][22] adapting that approach for projects that don’t use the kernel’s development flow.
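For what it's worth, contributors typically add that metadata just by passing a flag to Git when committing; the commit message below is only an example:

```
# Adds a "Signed-off-by: Your Name <you@example.com>" trailer to the commit
# message, using the name and email from your Git configuration.
git commit --signoff -m "Fix typo in README"
```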
|
||||
|
||||
#### License Grant
|
||||
|
||||
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),
|
||||
|
||||
The meat of The MIT License is, you guessed it, a license. In general terms, a license is permission that one person or legal entity—the “licensor”—gives another—the “licensee”—to do something the law would otherwise let them sue for. The MIT License is a promise not to sue.
|
||||
|
||||
The law sometimes distinguishes licenses from promises to give licenses. If someone breaks a promise to give a license, you may be able to sue them for breaking their promise, but you may not end up with a license. “Hereby” is one of those hokey, archaic-sounding words lawyers just can’t get rid of. It’s used here to show that the license text itself gives the license, and not just a promise of a license. It’s a legal [IIFE][23].
|
||||
|
||||
While many licenses give permission to a specific, named licensee, The MIT License is a “public license”. Public licenses give everybody—the public at large—permission. This is one of the three great ideas in open-source licensing. The MIT License captures this idea by giving a license “to any person obtaining a copy of … the Software”. As we’ll see later, there is also a condition to receiving this license that ensures others will learn about their permission, too.
|
||||
|
||||
The parenthetical with a capitalized term in quotation marks (a “Definition”), is the standard way to give terms specific meanings in American-style legal documents. Courts will reliably look back to the terms of the definition when they see a defined, capitalized term used elsewhere in the document.
|
||||
|
||||
##### Grant Scope
|
||||
|
||||
> to deal in the Software without restriction,
|
||||
|
||||
From the licensee’s point of view, these are the seven most important words in The MIT License. The key legal concerns are getting sued for copyright infringement and getting sued for patent infringement. Neither copyright law nor patent law uses “to deal in” as a term of art; it has no specific meaning in court. As a result, any court deciding a dispute between a licensor and a licensee would ask what the parties meant and understood by this language. What the court will see is that the language is intentionally broad and open-ended. It gives licensees a strong argument against any claim by a licensor that they didn’t give permission for the licensee to do that specific thing with the software, even if the thought clearly didn’t occur to either side when the license was given.
|
||||
|
||||
> including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
|
||||
No piece of legal writing is perfect, “fully settled in meaning”, or unmistakably clear. Beware anyone who pretends otherwise. This is the least perfect part of The MIT License. There are three main issues:
|
||||
|
||||
First, “including without limitation” is a legal antipattern. It crops up in any number of flavors:
|
||||
|
||||
* “including, without limitation”
|
||||
* “including, without limiting the generality of the foregoing”
|
||||
* “including, but not limited to”
|
||||
* many, many pointless variations
|
||||
|
||||
|
||||
|
||||
All of these share a common purpose, and they all fail to achieve it reliably. Fundamentally, drafters who use them try to have their cake and eat it, too. In The MIT License, that means introducing specific examples of “dealing in the Software”—“use, copy, modify” and so on—without implying that licensee action has to be something like the examples given to count as “dealing in”. The trouble is that, if you end up needing a court to review and interpret the terms of a license, the court will see its job as finding out what those fighting meant by the language. If the court needs to decide what “deal in” means, it cannot “unsee” the examples, even if you tell it to. I’d argue that “deal in the Software without restriction” alone would be better for licensees. Also shorter.
|
||||
|
||||
Second, the verbs given as examples of “deal in” are a hodgepodge. Some have specific meanings under copyright or patent law, others almost do or just plain don’t:
|
||||
|
||||
* use appears in [United States Code title 35, section 271(a)][24], the patent law’s list of what patent owners can sue others for doing without permission.
|
||||
|
||||
* copy appears in [United States Code title 17, section 106][25], the copyright law’s list of what copyright owners can sue others for doing without permission.
|
||||
|
||||
* modify doesn’t appear in either copyright or patent statute. It is probably closest to “prepare derivative works” under the copyright statute, but may also implicate improving or otherwise derivative inventions.
|
||||
|
||||
* merge doesn’t appear in either copyright or patent statute. “Merger” has a specific meaning in copyright, but that’s clearly not what’s intended here. Rather, a court would probably read “merge” according to its meaning in industry, as in “to merge code”.
|
||||
|
||||
* publish doesn’t appear in either copyright or patent statute. Since “the Software” is what’s being published, it probably hews closest to “distribute” under the [copyright statute][25]. That statute also covers rights to perform and display works “publicly”, but those rights apply only to specific kinds of copyrighted work, like plays, sound recordings, and motion pictures.
|
||||
|
||||
* distribute appears in the [copyright statute][25].
|
||||
|
||||
* sublicense is a general term of intellectual property law. The right to sublicense means the right to give others licenses of their own, to do some or all of what you have permission to do. The MIT License’s right to sublicense is actually somewhat unusual in open-source licenses generally. The norm is what Heather Meeker calls a “direct licensing” approach, where everyone who gets a copy of the software and its license terms gets a license direct from the owner. Anyone who might get a sublicense under the MIT License will probably end up with a copy of the license telling them they have a direct license, too.
|
||||
|
||||
* sell copies of is a mongrel. It is close to “offer to sell” and “sell” in the [patent statute][24], but refers to “copies”, a copyright concept. On the copyright side, it seems close to “distribute”, but the [copyright statute][25] makes no mention of sales.
|
||||
|
||||
* permit persons to whom the Software is furnished to do so seems redundant of “sublicense”. It’s also unnecessary to the extent folks who get copies also get a direct license.
|
||||
|
||||
|
||||
|
||||
|
||||
Lastly, as a result of this mishmash of legal, industry, general-intellectual-property, and general-use terms, it isn’t clear whether The MIT License includes a patent license. The general language “deal in” and some of the example verbs, especially “use”, point toward a patent license, albeit a very unclear one. The fact that the license comes from the copyright holder, who may or may not have patent rights in inventions in the software, as well as most of the example verbs and the definition of “the Software” itself, all point strongly toward a copyright license. More recent permissive open-source licenses, like [Apache 2.0][8], address copyright, patent, and even trademark separately and specifically.
|
||||
|
||||
##### Three License Conditions
|
||||
|
||||
> subject to the following conditions:
|
||||
|
||||
There’s always a catch! MIT has three!
|
||||
|
||||
If you don’t follow The MIT License’s conditions, you don’t get the permission the license offers. So failing to do what the conditions say at least theoretically leaves you open to a lawsuit, probably a copyright lawsuit.
|
||||
|
||||
Using the value of the software to the licensee to motivate compliance with conditions, even though the licensee paid nothing for the license, is the second great idea of open-source licensing. The last, not found in The MIT License, builds off license conditions: “Copyleft” licenses like the [GNU General Public License][9] use license conditions to control how those making changes can license and distribute their changed versions.
|
||||
|
||||
##### Notice Condition
|
||||
|
||||
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
|
||||
|
||||
If you give someone a copy of the software, you need to include the license text and any copyright notice. This serves a few critical purposes:
|
||||
|
||||
1. Gives others notice that they have permission for the software under the public license. This is a key part of the direct-licensing model, where each user gets a license direct from the copyright holder.
|
||||
|
||||
2. Makes known who’s behind the software, so they can be showered in praises, glory, and cold, hard cash donations.
|
||||
|
||||
3. Ensures the warranty disclaimer and limitation of liability (coming up next) follow the software around. Everyone who gets a copy should get a copy of those licensor protections, too.
|
||||
|
||||
|
||||
|
||||
|
||||
There’s nothing to stop you charging for providing a copy, or even a copy in compiled form, without source code. But when you do, you can’t pretend that the MIT code is your own proprietary code, or provided under some other license. Those receiving get to know their rights under the “public license”.
|
||||
|
||||
Frankly, compliance with this condition is breaking down. Nearly every open-source license has such an “attribution” condition. Makers of system and installed software often understand they’ll need to compile a notices file or “license information” screen, with copies of license texts for libraries and components, for each release of their own. The project-steward foundations have been instrumental in teaching those practices. But web developers, as a whole, haven’t got the memo. It can’t be explained away by a lack of tooling—there is plenty—or the highly modular nature of packages from npm and other repositories—which uniformly standardize metadata formats for license information. All the good JavaScript minifiers have command-line flags for preserving license header comments. Other tools will concatenate `LICENSE` files from package trees. There’s really no excuse.
|
||||
|
||||
##### Warranty Disclaimer
|
||||
|
||||
> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
|
||||
|
||||
Nearly every state in the United States has enacted a version of the Uniform Commercial Code, a model statute of laws governing commercial transactions. Article 2 of the UCC—“Division 2” in California—governs contracts for sales of goods, from used automobiles bought off the lot to large shipments of industrial chemicals to manufacturing plants.
|
||||
|
||||
Some of the UCC’s rules about sales contracts are mandatory. These rules always apply, whether those buying and selling like them or not. Others are just “defaults”. Unless buyers and sellers opt out in writing, the UCC implies that they want the baseline rule found in the UCC’s text for their deal. Among the default rules are implied “warranties”, or promises by sellers to buyers about the quality and usability of the goods being sold.
|
||||
|
||||
There is a big theoretical debate about whether public licenses like The MIT License are contracts—enforceable agreements between licensors and licensees—or just licenses, which go one way, but may come with strings attached, their conditions. There is less debate about whether software counts as “goods”, triggering the UCC’s rules. There is no debate among licensors on liability: They don’t want to get sued for lots of money if the software they give away for free breaks, causes problems, doesn’t work, or otherwise causes trouble. That’s exactly the opposite of what three default rules for “implied warranties” do:
|
||||
|
||||
1. The implied warranty of “merchantability” under [UCC section 2-314][26] is a promise that “the goods”—the Software—are of at least average quality, properly packaged and labeled, and fit for the ordinary purposes they are intended to serve. This warranty applies only if the one giving the software is a “merchant” with respect to the software, meaning they deal in software and hold themselves out as skilled in software.
|
||||
|
||||
2. The implied warranty of “fitness for a particular purpose” under [UCC section 2-315][27] kicks in when the seller knows the buyer is relying on them to provide goods for a particular purpose. The goods need to actually be “fit” for that purpose.
|
||||
|
||||
3. The implied warranty of “noninfringement” is not part of the UCC, but is a common feature of general contract law. This implied promise protects the buyer if it turns out the goods they received infringe somebody else’s intellectual property rights. That would be the case if the software under The MIT License didn’t actually belong to the one trying to license it, or if it fell under a patent owned by someone else.
|
||||
|
||||
|
||||
|
||||
|
||||
[Section 2-316(3)][28] of the UCC requires language opting out of, or “excluding”, implied warranties of merchantability and fitness for a particular purpose to be conspicuous. “Conspicuous” in turn means written or formatted to call attention to itself, the opposite of microscopic fine print meant to slip past unwary consumers. State law may impose a similar attention-grabbing requirement for disclaimers of noninfringement.
|
||||
|
||||
Lawyers have long suffered under the delusion that writing anything in `ALL-CAPS` meets the conspicuous requirement. That isn’t true. Courts have criticized the Bar for pretending as much, and most everyone agrees all-caps does more to discourage reading than compel it. All the same, most open-source-license forms set their warranty disclaimers in all-caps, in part because that’s the only obvious way to make it stand out in plain-text `LICENSE` files. I’d prefer to use asterisks or other ASCII art, but that ship sailed long, long ago.
|
||||
|
||||
##### Limitation of Liability
|
||||
|
||||
> In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
|
||||
|
||||
The MIT License gives permission for software “free of charge”, but the law does not assume that folks receiving licenses free of charge give up their rights to sue when things go wrong and the licensor is to blame. “Limitations of liability”, often paired with “damages exclusions”, work a lot like licenses, as promises not to sue. But these are protections for the licensor against lawsuits by licensees.
|
||||
|
||||
In general, courts read limitations of liability and damages exclusions warily, since they can shift an incredible amount of risk from one side to another. To protect the community’s vital interest in giving folks a way to redress wrongs done in court, they “strictly construe” language limiting liability, reading it against the one protected by it where possible. Limitations of liability have to be specific to stand up. Especially in “consumer” contracts and other situations where those giving up the right to sue lack sophistication or bargaining power, courts have sometimes refused to honor language that seemed buried out of sight. Partly for that reason, partly by sheer force of habit, lawyers tend to give limits of liability the all-caps treatment, too.
|
||||
|
||||
Drilling down a bit, the “limitation of liability” part is a cap on the amount of money a licensee can sue for. In open-source licenses, that limit is always no money at all, $0, “not liable”. By contrast, in commercial licenses, it’s often a multiple of license fees paid in the last 12-month period, though it’s often negotiated.
|
||||
|
||||
The “exclusion” part lists, specifically, kinds of legal claims—reasons to sue for damages—the licensor cannot use. Like many, many legal forms, The MIT License mentions actions “of contract”—for breaching a contract—and “of tort”. Tort rules are general rules against carelessly or maliciously harming others. If you run someone down on the road while texting, you have committed a tort. If your company sells faulty headphones that burn peoples’ ears off, your company has committed a tort. If a contract doesn’t specifically exclude tort claims, courts sometimes read exclusion language in a contract to prevent only contract claims. For good measure, The MIT License throws in “or otherwise”, just to catch the odd admiralty law or other, exotic kind of legal claim.
|
||||
|
||||
The phrase “arising from, out of or in connection with” is a recurring tick symptomatic of the legal draftsman’s inherent, anxious insecurity. The point is that any lawsuit having anything to do with the software is covered by the limitation and exclusions. On the off chance something can “arise from”, but not “out of”, or “in connection with”, it feels better to have all three in the form, so pack ‘em in. Never mind that any court forced to split hairs in this part of the form will have to come up with different meanings for each, on the assumption that a professional drafter wouldn’t use different words in a row to mean the same thing. Never mind that in practice, where courts don’t feel good about a limitation that’s disfavored to begin with, they’ll be more than ready to read the scope trigger narrowly. But I digress. The same language appears in literally millions of contracts.
|
||||
|
||||
#### Overall
|
||||
|
||||
All these quibbles are a bit like spitting out gum on the way into church. The MIT License is a legal classic. The MIT License works. It is by no means a panacea for all software IP ills, in particular the software patent scourge, which it predates by decades. But MIT-style licenses have served admirably, fulfilling a narrow purpose—reversing troublesome default rules of copyright, sales, and contract law—with a minimal combination of discreet legal tools. In the greater context of computing, its longevity is astounding. The MIT License has outlasted and will outlast the vast majority of software licensed under it. We can only guess how many decades of faithful legal service it will have given when it finally loses favor. It’s been especially generous to those who couldn’t have afforded their own lawyer.
|
||||
|
||||
We’ve seen how The MIT License we know today is a specific, standardized set of terms, bringing order at long last to a chaos of institution-specific, haphazard variations.
|
||||
|
||||
We’ve seen how its approach to attribution and copyright notice informed intellectual property management practices for academic, standards, commercial, and foundation institutions.
|
||||
|
||||
We’ve seen how The MIT License grants permission for software to all, for free, subject to conditions that protect licensors from warranties and liability.
|
||||
|
||||
We’ve seen that despite some crusty verbiage and lawyerly affectation, one hundred and seventy one little words can get a hell of a lot of legal work done, clearing a path for open-source software through a dense underbrush of intellectual property and contract.
|
||||
|
||||
I’m so grateful for all who’ve taken the time to read this rather long post, to let me know they found it useful, and to help improve it. As always, I welcome your comments via [e-mail][29], [Twitter][30], and [GitHub][31].
|
||||
|
||||
A number of folks have asked where they can read more, or find run-downs of other licenses, like the GNU General Public License or the Apache 2.0 license. No matter what your particular continuing interest may be, I heartily recommend the following books:
|
||||
|
||||
* Andrew M. St. Laurent’s [Understanding Open Source & Free Software Licensing][32], from O’Reilly.
|
||||
|
||||
I start with this one because, while it’s somewhat dated, its approach is also closest to the line-by-line approach used above. O’Reilly has made it [available online][33].
|
||||
|
||||
* Heather Meeker’s [Open (Source) for Business][34]
|
||||
|
||||
In my opinion, by far the best writing on the GNU General Public License and copyleft more generally. This book covers the history, the licenses, their development, as well as compatibility and compliance. It’s the book I lend to clients considering or dealing with the GPL.
|
||||
|
||||
* Larry Rosen’s [Open Source Licensing][35], from Prentice Hall.
|
||||
|
||||
A great first book, also available for free [online][36]. This is the best introduction to open-source licensing and related law for programmers starting from scratch. This one is also a bit dated in some specific details, but Larry’s taxonomy of licenses and succinct summary of open-source business models stand the test of time.
|
||||
|
||||
|
||||
|
||||
|
||||
All of these were crucial to my own education as an open-source licensing lawyer. Their authors are professional heroes of mine. Have a read! — K.E.M
|
||||
|
||||
I license this article under a [Creative Commons Attribution-ShareAlike 4.0 license][37].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html
|
||||
|
||||
作者:[Kyle E. Mitchell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://kemitchell.com/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://spdx.org/licenses/MIT
|
||||
[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT
|
||||
[3]: https://opensource.org
|
||||
[4]: https://spdx.org
|
||||
[5]: http://spdx.org/licenses/
|
||||
[6]: https://spdx.org/licenses/JSON
|
||||
[7]: https://www.npmjs.com/package/licensor
|
||||
[8]: https://www.apache.org/licenses/LICENSE-2.0
|
||||
[9]: https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[10]: http://spdx.org/licenses/BSD-4-Clause
|
||||
[11]: https://spdx.org/licenses/BSD-3-Clause
|
||||
[12]: https://spdx.org/licenses/BSD-2-Clause
|
||||
[13]: http://www.isc.org/downloads/software-support-policy/isc-license/
|
||||
[14]: http://worksmadeforhire.com/
|
||||
[15]: https://www.eclipse.org/legal/epl-v10.html
|
||||
[16]: https://www.apache.org/licenses/#clas
|
||||
[17]: https://wiki.eclipse.org/ECA
|
||||
[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co.
|
||||
[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html
|
||||
[20]: http://developercertificate.org/
|
||||
[21]: https://github.com/berneout/berneout-pledge
|
||||
[22]: https://github.com/berneout/authors-certificate
|
||||
[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression
|
||||
[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271
|
||||
[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106
|
||||
[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM
|
||||
[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM
|
||||
[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM
|
||||
[29]: mailto:kyle@kemitchell.com
|
||||
[30]: https://twitter.com/kemitchell
|
||||
[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md
|
||||
[32]: https://lccn.loc.gov/2006281092
|
||||
[33]: http://www.oreilly.com/openbook/osfreesoft/book/
|
||||
[34]: https://www.amazon.com/dp/1511617772
|
||||
[35]: https://lccn.loc.gov/2004050558
|
||||
[36]: http://www.rosenlaw.com/oslbook.htm
|
||||
[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode
|
@ -1,56 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (IRC vs IRL: How to run a good IRC meeting)
|
||||
[#]: via: (https://opensource.com/article/19/2/irc-vs-irl-meetings)
|
||||
[#]: author: (Ben Cotton https://opensource.com/users/bcotton)
|
||||
|
||||
IRC vs IRL: How to run a good IRC meeting
|
||||
======
|
||||
Internet Relay Chat meetings can be a great way to move a project forward if you follow these best practices.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m)
|
||||
|
||||
There's an art to running a meeting in any format. Many people have learned to run in-person or telephone meetings, but [Internet Relay Chat][1] (IRC) meetings have unique characteristics that differ from "in real life" (IRL) meetings. This article will share the advantages and disadvantages of the IRC format as well as tips that will help you lead IRC meetings more effectively.
|
||||
|
||||
Why IRC? Despite the wealth of real-time chat options available today, [IRC remains a cornerstone of open source projects][2]. If your project uses another communication method, don't worry. Most of this advice works for any synchronous text chat mechanism, perhaps with a few tweaks here and there.
|
||||
|
||||
### Challenges of IRC meetings
|
||||
|
||||
IRC meetings pose certain challenges compared to in-person meetings. You know that lag between when one person finishes talking and the next one begins? It's worse in IRC because people have to type what they're thinking. This is slower than talking and—unlike with talking—you can't tell when someone else is trying to compose a message. Moderators must remember to insert long pauses when asking for responses or moving to the next topic. And someone who wants to speak up should insert a brief message (e.g., a period) to let the moderator know.
|
||||
|
||||
IRC meetings also lack the metadata you get from other methods. You can't read facial expressions or tone of voice in text. This means you have to be careful with your word choice and phrasing.
|
||||
|
||||
And IRC meetings make it really easy to get distracted. At least when someone is looking at funny cat GIFs during an in-person meeting, you'll see them smile and hear them laugh at inopportune times. In IRC, unless they accidentally paste the wrong text, there's no peer pressure even to pretend to pay attention. With IRC, you can even be in multiple meetings at once. I've done this, but it's dangerous if you need to be an active participant.
|
||||
|
||||
### Benefits of IRC meetings
|
||||
|
||||
IRC meetings have some unique advantages, too. IRC is a very resource-light medium. It doesn't tax bandwidth or CPU. This lowers the barrier for participation, which is advantageous for both the underprivileged and people who are on the road. For volunteer contributors, it means they may be able to participate during their workday. And it means participants don't need to find a quiet space where they can talk without bothering those around them.
|
||||
|
||||
With a meeting bot, IRC can produce meeting minutes instantly. In Fedora, we use Zodbot, an instance of Debian's [Meetbot][3], to log meetings and provide interaction. When a meeting ends, the minutes and full logs are immediately available to the community. This can reduce the administrative overhead of running the meeting.
|
||||
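To make that concrete, here is a hedged sketch of what driving a MeetBot-style bot looks like; the nicknames are invented, the bot's replies are omitted, and the exact command set can differ between deployments, so check the documentation for your instance before relying on it.

```
<alice> #startmeeting Infrastructure team weekly sync
<alice> #topic Review of last week's action items
<alice> #info the new build servers are online
<alice> #action bob will draft the outage announcement
<alice> #agreed the maintenance window moves to Thursday
<alice> #endmeeting
```

When the moderator issues `#endmeeting`, the minutes and full log are generated, as described above.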
|
||||
### It's like a normal meeting, but different
|
||||
|
||||
Conducting a meeting via IRC or other text-based medium means thinking about the meeting in a slightly different way. Although it lacks some of the benefits of higher-bandwidth modes of communication, it has advantages, too. Running an IRC meeting provides the opportunity to develop discipline that can help you run any type of meeting.
|
||||
|
||||
Like any meeting, IRC meetings are best when there's a defined agenda and purpose. A good meeting moderator knows when to let the conversation follow twists and turns and when it's time to reel it back in. There's no hard and fast rule here—it's very much an art. But IRC offers an advantage in this regard. By setting the channel topic to the meeting's current topic, people have a visible reminder of what they should be talking about.
|
||||
|
||||
If your project doesn't already conduct synchronous meetings, you should give it some thought. For projects with a diverse set of time zones, finding a mutually agreeable time to hold a meeting is hard. You can't rely on meetings as your only source of coordination. But they can be a valuable part of how your project works.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/2/irc-vs-irl-meetings
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/bcotton
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
|
||||
[2]: https://opensource.com/article/16/6/getting-started-irc
|
||||
[3]: https://wiki.debian.org/MeetBot
|
115
sources/talk/20190314 A Look Back at the History of Firefox.md
Normal file
@ -0,0 +1,115 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (A Look Back at the History of Firefox)
|
||||
[#]: via: (https://itsfoss.com/history-of-firefox)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
A Look Back at the History of Firefox
|
||||
======
|
||||
|
||||
The Firefox browser has been a mainstay of the open-source community for a long time. For many years it was the default web browser on (almost) all Linux distros and the lone obstacle to Microsoft’s total dominance of the internet. This browser has roots that go back all the way to the very early days of the internet. Since this week marks the 30th anniversary of the World Wide Web, there is no better time to talk about how Firefox became the browser we all know and love.
|
||||
|
||||
### Early Roots
|
||||
|
||||
In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor’s degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications][2]. During that time [Sir Tim Berners-Lee][3] released an early form of the web standards that we know today. Marc [was introduced][4] to a very primitive web browser named [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy-to-install browser for Unix named [NCSA Mosaic][6]. The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic became very popular because it was easier to use than other browsing software.
|
||||
|
||||
In 1994, Marc graduated and moved to California. He was approached by Jim Clark, who had made his money selling computer hardware and software. Clark had used Mosaic and saw the financial possibilities of the internet. Clark recruited Marc and Eric to start an internet software company. The company was originally named Mosaic Communications Corporation; however, the University of Illinois did not like [their use of the name Mosaic][7]. As a result, the company name was changed to Netscape Communications Corporation.
|
||||
|
||||
The company’s first project was an online gaming network for the Nintendo 64, but that fell through. The first product they released was a web browser named Mosaic Netscape 0.9, subsequently renamed Netscape Navigator. Internally, the browser project was codenamed mozilla, which stood for “Mosaic killer”. An employee created a cartoon of a [Godzilla-like creature][8]. They wanted to take out the competition.
|
||||
|
||||
![Early Firefox Mascot][9]Early Mozilla mascot at Netscape
|
||||
|
||||
They succeeded mightily. At the time, one of the biggest advantages that Netscape had was the fact that its browser looked and functioned the same on every operating system. Netscape described this as giving everyone a level playing field.
|
||||
|
||||
As usage of Netscape Navigator increased, the market share of NCSA Mosaic cratered. In 1995, Netscape went public. [On the first day][10], the stock started at $28, jumped to $75 and ended the day at $58. Netscape was without any rivals.
|
||||
|
||||
But that didn’t last for long. In the summer of 1995, Microsoft released Internet Explorer 1.0, which was based on Spyglass Mosaic, itself derived from NCSA Mosaic. The [browser wars][11] had begun.
|
||||
|
||||
Over the next few years, Netscape and Microsoft competed for dominance of the internet. Each added features to compete with the other. Unfortunately, Internet Explorer had an advantage because it came bundled with Windows. On top of that, Microsoft had more programmers and money to throw at the problem. Toward the end of 1997, Netscape started to run into financial problems.
|
||||
|
||||
### Going Open Source
|
||||
|
||||
![Mozilla Firefox][12]
|
||||
|
||||
In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [goal][13] was to “harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into future versions of Netscape’s software. This strategy is designed to accelerate development and free distribution by Netscape of future high-quality versions of Netscape Communicator to business customers and individuals.”
|
||||
|
||||
The project was to be shepherded by the newly created Mozilla Organization. However, the code from Netscape Communicator 4.0 proved to be very difficult to work with due to its size and complexity. On top of that, several parts could not be open sourced because of licensing agreements with third parties. In the end, it was decided to rewrite the browser from scratch using the new [Gecko][14] rendering engine.
|
||||
|
||||
In November 1998, Netscape was acquired by AOL in a [stock swap valued at $4.2 billion][15].
|
||||
|
||||
Starting from scratch was a major undertaking. Mozilla Firefox (initially nicknamed Phoenix) was created in June 2002 and it worked on multiple operating systems, such as Linux, Mac OS, Microsoft Windows, and Solaris.
|
||||
|
||||
The following year, AOL announced that they would be shutting down browser development. The Mozilla Foundation was subsequently created to handle the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.
|
||||
|
||||
In March 2003, Mozilla [announced plans][16] to separate the suite into stand-alone applications because of creeping software bloat. The stand-alone browser was initially named Phoenix. However, the name was changed due to a trademark dispute with the BIOS manufacturer Phoenix Technologies, which had a BIOS-based browser of its own. Phoenix was renamed Firebird, only to run afoul of the Firebird database server project. The browser was once more renamed to the Firefox that we all know.
|
||||
|
||||
At the time, [Mozilla said][17], “We’ve learned a lot about choosing names in the past year (more than we would have liked to). We have been very careful in researching the name to ensure that we will not have any problems down the road. We have begun the process of registering our new trademark with the US Patent and Trademark office.”
|
||||
|
||||
![Mozilla Firefox 1.0][18]Firefox 1.0 : [Picture Credit][19]
|
||||
|
||||
The first official release of Firefox was [0.8][20] on February 8, 2004. 1.0 followed on November 9, 2004. Versions 2.0 and 3.0 followed in October 2006 and June 2008, respectively. Each major release brought with it many new features and improvements. In many respects, Firefox pulled ahead of Internet Explorer in terms of features and technology, but IE still had more users.
|
||||
|
||||
That changed with the release of Google’s Chrome browser. In the months before the release of Chrome in September 2008, Firefox accounted for 30% of all [browser usage][21] and IE had over 60%. According to StatCounter’s [January 2019 report][22], Firefox accounts for less than 10% of all browser usage, while Chrome has over 70%.
|
||||
|
||||
### Fun Fact
|
||||
|
||||
Contrary to popular belief, the logo of Firefox doesn’t feature a fox. It’s actually a [Red Panda][23]. In Chinese, “fire fox” is another name for the red panda.
|
||||
|
||||
### The Future
|
||||
|
||||
As noted above, Firefox currently has the lowest market share in its recent history. There was a time when a bunch of browsers were based on Firefox, such as the early version of the [Flock browser][24]. Now most browsers are based on Google technology, such as Opera and Vivaldi. Even Microsoft is giving up on browser development and [joining the Chromium band wagon][25].
|
||||
|
||||
This might seem like quite a downer after the heights of the early Netscape years. But don’t forget what Firefox has accomplished. A group of developers from around the world have created the second most used browser in the world. They clawed 30% of the market share away from Microsoft’s monopoly, and they can do it again. After all, they have us, the open source community, behind them.
|
||||
|
||||
The fight against the monopoly is one of the several reasons [why I use Firefox][26]. Mozilla regained some of its lost market share with the revamped release of [Firefox Quantum][27], and I believe that it will continue on that upward path.
|
||||
|
||||
What event from Linux and open source history would you like us to write about next? Please let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][28].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/history-of-firefox
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
|
||||
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
|
||||
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
|
||||
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
|
||||
[5]: http://viola.org/
|
||||
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
|
||||
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
|
||||
[8]: http://www.davetitus.com/mozilla/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
|
||||
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
|
||||
[11]: https://en.wikipedia.org/wiki/Browser_wars
|
||||
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
|
||||
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
|
||||
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
|
||||
[15]: http://news.cnet.com/2100-1023-218360.html
|
||||
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
|
||||
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
|
||||
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
|
||||
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
|
||||
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
|
||||
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
|
||||
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
|
||||
[23]: https://en.wikipedia.org/wiki/Red_panda
|
||||
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
|
||||
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
|
||||
[26]: https://itsfoss.com/why-firefox/
|
||||
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
|
||||
[28]: http://reddit.com/r/linuxusersgroup
|
||||
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1
|
@ -0,0 +1,84 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why feedback, not metrics, is critical to DevOps)
|
||||
[#]: via: (https://opensource.com/article/19/3/devops-feedback-not-metrics)
|
||||
[#]: author: (Ranjith Varakantam (Red Hat) https://opensource.com/users/ranjith)
|
||||
|
||||
Why feedback, not metrics, is critical to DevOps
|
||||
======
|
||||
|
||||
Metrics can tell you some things, but not the most important things about how your products and teams are doing.
|
||||
|
||||
![CICD with gears][1]
|
||||
|
||||
Most managers and agile coaches depend on metrics over feedback from their teams, users, and even customers. In fact, quite a few use feedback and metrics synonymously, where they present feedback from teams or customers as a bunch of numbers or a graphical representation of those numbers. This is not only unfortunate, but it can be misleading as it presents only part of the story and not the entire truth.
|
||||
|
||||
When it comes to two critical factors—how we manage or guide our teams and how we operate and influence the product that our teams are developing—only a few exceptional leaders and teams get it right. For one thing, it has become very easy to get your hands on data and metrics; real feedback from teams and users, on the other hand, is still hard to come by. It requires significant investments of time and energy, and unless everyone understands the critical need for it, getting and giving feedback tends to be a low priority that keeps getting pushed to the back burner.
|
||||
|
||||
### How to manage and guide teams
|
||||
|
||||
With the acceptance of agile, a lot of teams have put a ridiculously high value on metrics, such as velocity, burndown charts, cumulative flow diagram (CFD), etc., instead of the value delivered by the team in each iteration or deployment. The focus is on the delivery or output produced without a clear understanding of how this relates to personal performance or implications for the project, product, or service.
|
||||
|
||||
A few managers and agile coaches even abuse the true spirit of agile by misusing metrics to chastise or even penalize their teams. Instead of creating an environment of empowerment, they are slipping back into the command-and-control method where metrics are used to bully teams into submission.
|
||||
|
||||
In our group, the best managers have weekly one-on-one meetings with every team member. These meetings not only give them a real pulse on team morale but also a profound understanding of the project and the decisions being made to move it forward. This weekly feedback loop also helps the team members communicate technical, functional, and even personal issues better. As a result, the team is much more cohesive in understanding the overall project needs and able to make decisions promptly.
|
||||
|
||||
These leaders also skip levels—reaching out to team members two or three levels below them—and have frequent conversations with other group members who interact with their teams on a regular basis. These actions give the managers a holistic picture, which they couldn't get if they relied on feedback from one manager or lead, and help them identify any blind spots the leads and managers may have.
|
||||
|
||||
These one-on-one meetings effectively transform a manager into a coach who has a close understanding of every team member. Like a good coach, these managers both give and receive feedback from the team members regarding the product, decision-making transparency, places where the team feels management is lagging, and areas that are being ignored. This empowers the teams by giving them a voice, not once in a while in an annual meeting or an annual survey, but every week. This is the level where DevOps teams should be in order to deliver their commitments successfully.
|
||||
|
||||
This demands significant investments of time and energy, but the results more than justify it. The alternative is to rely on metrics, annual reviews, and surveys, an approach that has failed miserably. Unless we begin valuing feedback over metrics, we will keep seeing the metrics we want to see while projects fail and team morale suffers.
|
||||
|
||||
### Influencing projects and product development
|
||||
|
||||
We see similar behavior on the project or product side, with too few conversations with the users and developers and too much focus on metrics. Let's take the example of a piece of software that was released to the community or market, where the primary success metric is the number of downloads or installs. This can be deceiving for several reasons:
|
||||
|
||||
1. This product was packaged into another piece of software that users installed; even though the users are not even aware of your product's existence or purpose, it is still counted as a win and something the user needs.
|
||||
|
||||
2. The marketing team spent a huge budget promoting the product—and even offered an incentive to developers to download it. The _incentive_ drives the downloads, not user need or desire, but the metric is still considered a measure of success.
|
||||
|
||||
3. Software updates are counted as downloads, even when they are involuntary updates pushed rather than initiated by the user. This keeps bumping up the number, even though the user might have used it once, a year ago, for a specific task.
|
||||
|
||||
|
||||
|
||||
|
||||
In these cases, the user automatically becomes a metric that's used to report how well the product is doing, just based on the fact it was downloaded and it's accepting updates, regardless of whether the user likes or uses the software. Instead, we should be focusing on actual usage of the product and the feedback these users have to offer us, rather than stopping short at the download numbers.
|
||||
|
||||
The same holds true for SaaS products—instead of counting the number of signups, we should look at how often users use the product or service. Signups by themselves have little meaning, especially to the DevOps team where the focus is on getting constant feedback and striving for continuous improvements.
|
||||
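As a rough sketch of the difference, the snippet below (Python, with a made-up event-log format; the field names are assumptions, not any particular product's schema) contrasts the raw signup count with how many users actually came back during a given week.

```
from datetime import date, timedelta

# Hypothetical event log: one record per user action.
events = [
    {"user": "u1", "action": "signup", "day": date(2019, 3, 4)},
    {"user": "u1", "action": "open",   "day": date(2019, 3, 5)},
    {"user": "u2", "action": "signup", "day": date(2019, 3, 4)},
    {"user": "u3", "action": "signup", "day": date(2019, 3, 6)},
    {"user": "u3", "action": "open",   "day": date(2019, 3, 20)},
]

def weekly_active_users(events, week_start):
    """Count distinct users who did something other than sign up this week."""
    week_end = week_start + timedelta(days=7)
    return len({
        e["user"] for e in events
        if e["action"] != "signup" and week_start <= e["day"] < week_end
    })

signups = len({e["user"] for e in events if e["action"] == "signup"})
print("Signups:", signups)                                  # 3
print("Active in week of 2019-03-04:",
      weekly_active_users(events, date(2019, 3, 4)))        # 1
```

Three signups look healthy; one returning user tells the DevOps team a very different story.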
|
||||
### Gathering feedback
|
||||
|
||||
So, why do we rely on metrics so much? My guess is they are easy to collect, and the marketing team is more interested in getting the product into the users' hands than evaluating how it is faring. Unless the engineering team invests quite a bit of time in tracing (which captures how often the program is executed and which components are used most often), feedback can be difficult to collect.
|
||||
|
||||
A big advantage of working in an open source community is that we first release the piece of software into a community where we can get feedback. Most open source enthusiasts take the time to log issues and bugs based on their experience with the product. If we can supplement this data with tracing, the team has an accurate record of how the product is used.
|
||||
|
||||
Open as many channels of communication as possible (chat, email, Twitter, etc.) and allow users to choose their feedback channel.
|
||||
|
||||
A few DevOps teams have integrated blue-green deployments, A/B testing, and canary releases to shorten the feedback loop. Setting up these frameworks is not a trivial matter; it calls for a huge upfront investment and constant updates to make them work seamlessly. But once everything is set up and data begins to flow, the team can act upon real feedback based on real user interactions with every new bit of software released.
|
||||
|
||||
Most agile practitioners and lean movement activists push for a build-deploy-measure-learn cycle, and for this to happen, we need to collect feedback in addition to metrics. It might seem expensive and time-consuming in the short term, but in the long run, it is a foolproof way of learning.
|
||||
|
||||
### Proof that feedback pays off
|
||||
|
||||
Whether it pertains to people or projects, it pays to rely on first-hand feedback rather than metrics, which are seldom interpreted in impartial ways. We have ample proof of this in other industries, where companies such as Zappos and the Virgin Group have done wonders for their business simply by listening to their customers. There is no reason we cannot follow suit, especially those of us working in open source communities.
|
||||
|
||||
Feedback is the only effective way we can uncover our blind spots. Metrics are not of much help in this regard, as we can't find out what's wrong when we are dealing with unknowns. Blind spots can create serious gaps between reality and what we think we know. Feedback not only encourages continuous improvement, whether it's on a personal or a product level, but the simple act of listening and acting on it increases trust and loyalty.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/devops-feedback-not-metrics
|
||||
|
||||
作者:[Ranjith Varakantam (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ranjith
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
|
@ -0,0 +1,148 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 steps to stop ethical debt in AI product development)
|
||||
[#]: via: (https://opensource.com/article/19/3/ethical-debt-ai-product-development)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
|
||||
6 steps to stop ethical debt in AI product development
|
||||
======
|
||||
|
||||
Machine bias in artificial intelligence is a known and unavoidable problem—but it is not unmanageable.
|
||||
|
||||
![old school calculator][1]
|
||||
|
||||
It's official: artificial intelligence (AI) isn't the unbiased genius we want it to be.
|
||||
|
||||
Alphabet (Google's parent company) used its latest annual report [to warn][2] that ethical concerns about its products might hurt future revenue. Entrepreneur Joy Buolamwini established the [Safe Face Pledge][3] to prevent abuse of facial analysis technology.
|
||||
|
||||
And years after St. George's Hospital Medical School in London was found to have used AI that inadvertently [screened out qualified female candidates][4], Amazon scrapped a recruiting tool last fall after machine learning (ML) specialists found it [doing the same thing][5].
|
||||
|
||||
We've learned the hard way that technologies built with AI are biased like people. Left unchecked, the datasets used to train such products can pose [life-or-death consequences][6] for their end users.
|
||||
|
||||
For example, imagine a self-driving car that can't recognize commands from people with certain accents. If the dataset used to train the technology powering that car isn't exposed to enough voice variations and inflections, it risks not recognizing all its users as fully human.
|
||||
|
||||
Here's the good news: Machine bias in AI is unavoidable—but it is _not_ unmanageable. Just like product and development teams work to reduce technical debt, you can [reduce the risk of ethical debt][7] as well.
|
||||
|
||||
Here are six steps that your technical team can start taking today:
|
||||
|
||||
### 1\. Document your priorities upfront
|
||||
|
||||
Reducing ethical debt within your product will require you to answer two key questions in the product specification phase:
|
||||
|
||||
* Which methods of fairness will you use?
|
||||
* How will you prioritize them?
|
||||
|
||||
|
||||
|
||||
If your team is building a product based on ML, it's not enough to reactively fix bugs or pull products from shelves. Instead, answer these questions [in your tech spec][8] so that they're included from the start of your product lifecycle.
|
||||
|
||||
### 2\. Train your data under fairness constraints
|
||||
|
||||
This step is tough because when you try to control or eliminate both direct and indirect bias, you'll find yourself in a Catch-22.
|
||||
|
||||
If you train exclusively on non-sensitive attributes, you eliminate direct discrimination but introduce or reinforce indirect bias.
|
||||
|
||||
However, if you train separate classifiers for each sensitive feature, you reintroduce direct discrimination.
|
||||
|
||||
Another challenge is that detection can occur only after you've trained your model. When this occurs, the only recourse is to scrap the model and retrain it from scratch.
|
||||
|
||||
To reduce these risks, don't just measure average strengths of acceptance and rejection across sensitive groups. Instead, use limits to determine what is or isn't included in the model you're training. When you do this, discrimination tests are expressed as restrictions and limitations on the learning process.
|
||||
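One way such a restriction can be expressed is as a penalty added to the training loss. The sketch below is a minimal, hand-rolled illustration on invented data: a logistic-regression log loss plus a penalty on the gap in mean predicted acceptance rates between two groups, with the penalty weight `lam` chosen arbitrarily. Real fairness constraints come in several flavors, so treat this as the shape of the idea rather than a recommendation.

```
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 features, a binary label, and a sensitive-group flag.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
group = (rng.random(200) > 0.5).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam=1.0):
    """Log loss plus a penalty on the gap in mean predicted rates between groups."""
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = p[group == 1].mean() - p[group == 0].mean()
    grad_log = X.T @ (p - y) / len(y)
    # d(gap)/dp_i is +1/n1 for group 1 and -1/n0 for group 0.
    dgap_dp = np.where(group == 1, 1.0 / (group == 1).sum(), -1.0 / (group == 0).sum())
    grad_pen = 2.0 * lam * gap * (X.T @ (dgap_dp * p * (1.0 - p)))
    return log_loss + lam * gap ** 2, grad_log + grad_pen

w = np.zeros(3)
for _ in range(500):          # plain gradient descent, no library magic
    loss, grad = loss_and_grad(w)
    w -= 0.1 * grad
print("loss with fairness penalty:", round(loss, 3))
```

Because the constraint is part of the objective, the discrimination test is enforced while the model learns instead of being checked only after the fact.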
|
||||
### 3\. Monitor your datasets throughout the product lifecycle
|
||||
|
||||
Developers build training sets based on data they hope the model will encounter. But many don't monitor the data their creations receive from the real world.
|
||||
|
||||
ML products are unique in that they're continuously taking in data. New data allows the algorithms powering these products to keep refining their results.
|
||||
|
||||
But such products often encounter data in deployment that differs from the data they were trained on. It's also not uncommon for the algorithm to be updated without the model itself being revalidated.
|
||||
|
||||
This risk will decrease if you appoint someone to monitor the source, history, and context of the data in your algorithm. This person should conduct continuous audits to find unacceptable behavior.
|
||||
|
||||
Bias should be reduced as much as possible while maintaining an acceptable level of accuracy, as defined in the product specification. If unacceptable biases or behaviors are detected, the model should be rolled back to an earlier state prior to the first time you saw bias.
|
||||
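What a continuous audit might look like in its simplest form is sketched below: a periodic job that compares positive-prediction rates across sensitive groups against a limit agreed on in the product specification. The record format and the 0.10 threshold are assumptions for illustration only.

```
from collections import defaultdict

# Hypothetical audit input: one record per scored request in the last window.
predictions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    # ... streamed in from production logs
]

MAX_RATE_GAP = 0.10   # agreed-on limit from the product specification

def approval_rates(records):
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(predictions)
gap = max(rates.values()) - min(rates.values())
if gap > MAX_RATE_GAP:
    # In a real pipeline this would page the model owner and trigger a rollback
    # to the last model version that passed the audit.
    print(f"ALERT: approval-rate gap {gap:.2f} exceeds limit {MAX_RATE_GAP}")
```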
|
||||
### 4\. Use tagged training data
|
||||
|
||||
We live in a world with trillions of images and videos at our fingertips, but most neural networks can't use this data for one reason: Most of it isn't tagged.
|
||||
|
||||
Tagging refers to marking which classes are present in an image and where each one is located.
|
||||
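As a purely hypothetical illustration, a single tagged image is often stored as something like the record below: the image path plus one entry per object, giving its class and a bounding box. Field names and coordinate conventions vary between annotation tools.

```
# A hypothetical annotation record for one image: each tag names a class and
# gives its location as a pixel-coordinate bounding box [x, y, width, height].
tagged_image = {
    "image": "frames/highway_000231.jpg",
    "objects": [
        {"label": "person", "bbox": [412, 96, 38, 104]},
        {"label": "person", "bbox": [655, 120, 41, 98]},
        {"label": "car",    "bbox": [120, 210, 180, 95]},
    ],
}
```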
|
||||
This sounds simple—until you realize how much work it would take to draw shapes around every single person in a photo of a crowd or a box around every single person on a highway.
|
||||
|
||||
Even if you succeeded, you might rush your tagging and draw your shapes sloppily, leading to a poorly trained neural network.
|
||||
|
||||
The good news is that more products are coming to market that can decrease the time and cost of tagging.
|
||||
|
||||
As one example, [Brain Builder][9] is a data annotation product from Neurala that uses open source frameworks like TensorFlow and Caffe. Its goal is to help users [manage and annotate their training data][10]. It also aims to bring diverse class examples to datasets—another key step in data training.
|
||||
|
||||
### 5\. Use diverse class examples
|
||||
|
||||
Training data needs both positive and negative examples of its classes. If you want the model to recognize specific classes of objects, you also need examples of what those objects are not. This (hopefully) mimics the data that the algorithm will encounter in the wild.
|
||||
|
||||
Consider the example of “homes” within a dataset. If the algorithm contains only images of homes in North America, it won't know to recognize homes in Japan, Morocco, or other international locations. Its concept of a “home” is thus limited.
|
||||
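A cheap way to catch that kind of gap before training is to tally the manifest by class and by whatever context attribute matters (region here, but it could just as well be lighting or accent). The manifest format below is invented for the example.

```
from collections import Counter

# Hypothetical training manifest: one entry per labeled image.
manifest = [
    {"label": "home", "region": "North America"},
    {"label": "home", "region": "North America"},
    {"label": "home", "region": "Japan"},
    {"label": "not_home", "region": "North America"},
]

by_label = Counter(item["label"] for item in manifest)
by_region = Counter(item["region"] for item in manifest if item["label"] == "home")

print(by_label)    # Counter({'home': 3, 'not_home': 1})
print(by_region)   # Counter({'North America': 2, 'Japan': 1}) -- Morocco is missing entirely
```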
|
||||
Neurala warns, "Most AI applications require thousands of images to be tagged, and since data tagging cost is proportional to the time spent tagging, this step alone often costs tens to hundreds of thousands of dollars per project."
|
||||
|
||||
Luckily, 2018 saw a strong increase in the number of open source AI datasets. Synced has a helpful [roundup of 10 datasets][11]—from multi-label images to semantic parsing—that were open sourced last year. If you're looking for datasets by industry, GitHub [has a longer list][12].
|
||||
|
||||
### 6\. Focus on the subject, not the context
|
||||
|
||||
Tech leaders monitoring ML datasets should aim to understand how the algorithm classifies data. That's because AI sometimes focuses on irrelevant attributes that are shared by several targets in the training set.
|
||||
|
||||
Let's start by looking at the biased training set below. Wolves were tagged standing in snow, but the model wasn't shown images of dogs. So, when dogs were introduced, the model started tagging them as wolves because both animals were standing in snow. In this case, the AI put too much emphasis on the context (a snowy backdrop).
|
||||
|
||||
![Wolves in snow][13]
|
||||
|
||||
Source: [Gartner][14] (full research available for clients)
|
||||
|
||||
By contrast, here is a training set from Brain Builder that is focused on the subject dogs. When monitoring your own training set, make sure the AI is giving more weight to the subject of each image. If you saw an image classifier state that one of the dogs below is a wolf, you would need to know which aspects of the input led to this misclassification. This is a sign to check your training set and confirm that the data is accurate.
|
||||
|
||||
![Dogs training set][15]
|
||||
|
||||
Source: [Brain Builder][16]
|
||||
|
||||
Reducing ethical debt isn't just the “right thing to do”—it also reduces _technical_ debt. Since programmatic bias is so tough to detect, working to reduce it, from the start of your lifecycle, will save you the need to retrain models from scratch.
|
||||
|
||||
This isn't an easy or perfect job; tech teams will have to make tradeoffs between fairness and accuracy. But this is the essence of product management: compromises based on what's best for the product and its end users.
|
||||
|
||||
Strategy is the soul of all strong products. If your team includes measures of fairness and algorithmic priorities from the start, you'll sail ahead of your competition.
|
||||
|
||||
* * *
|
||||
|
||||
_Lauren Maffeo will present_ [_Erase Unconscious Bias From Your AI Datasets_][17] _at[DrupalCon][18] in Seattle, April 8-12, 2019._
|
||||
|
||||
* * *
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/ethical-debt-ai-product-development
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/math_money_financial_calculator_colors.jpg?itok=_yEVTST1 (old school calculator)
|
||||
[2]: https://www.yahoo.com/news/google-warns-rise-ai-may-181710642.html?soc_src=mail&soc_trk=ma
|
||||
[3]: https://www.safefacepledge.org/
|
||||
[4]: https://futurism.com/ai-bias-black-box
|
||||
[5]: https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUKKCN1MK08G
|
||||
[6]: https://opensource.com/article/18/10/open-source-classifiers-ai-algorithms
|
||||
[7]: https://thenewstack.io/tech-ethics-new-years-resolution-dont-build-software-you-will-regret/
|
||||
[8]: https://eng.lyft.com/awesome-tech-specs-86eea8e45bb9
|
||||
[9]: https://www.neurala.com/tech
|
||||
[10]: https://www.roboticsbusinessreview.com/ai/brain-builder-neurala-video-annotation/
|
||||
[11]: https://medium.com/syncedreview/2018-in-review-10-open-sourced-ai-datasets-696b3b49801f
|
||||
[12]: https://github.com/awesomedata/awesome-public-datasets
|
||||
[13]: https://opensource.com/sites/default/files/uploads/wolves_in_snow.png (Wolves in snow)
|
||||
[14]: https://www.gartner.com/doc/3889586/control-bias-eliminate-blind-spots
|
||||
[15]: https://opensource.com/sites/default/files/uploads/ai_ml_canine_recognition.png (Dogs training set)
|
||||
[16]: https://www.neurala.com/
|
||||
[17]: https://events.drupal.org/seattle2019/sessions/erase-unconscious-bias-your-ai-datasets
|
||||
[18]: https://events.drupal.org/seattle2019
|
143
sources/talk/20190322 How to save time with TiDB.md
Normal file
@ -0,0 +1,143 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to save time with TiDB)
|
||||
[#]: via: (https://opensource.com/article/19/3/how-save-time-tidb)
|
||||
[#]: author: (Morgan Tocker https://opensource.com/users/morgo)
|
||||
|
||||
How to save time with TiDB
|
||||
======
|
||||
|
||||
TiDB, an open source-compatible, cloud-based database engine, simplifies many of MySQL database administrators' common tasks.
|
||||
|
||||
![Team checklist][1]
|
||||
|
||||
Last November, I wrote about key [differences between MySQL and TiDB][2], an open source-compatible, cloud-based database engine, from the perspective of scaling both solutions in the cloud. In this follow-up article, I'll dive deeper into the ways [TiDB][3] streamlines and simplifies administration.
|
||||
|
||||
If you come from a MySQL background, you may be used to doing a lot of manual tasks that are either not required or much simpler with TiDB.
|
||||
|
||||
The inspiration for TiDB came from the founders managing sharded MySQL at scale at some of China's largest internet companies. Since requirements for operating a large system at scale are a key concern, I'll look at some typical MySQL database administrator (DBA) tasks and how they translate to TiDB.
|
||||
|
||||
[![TiDB architecture][4]][5]
|
||||
|
||||
In [TiDB's architecture][5]:
|
||||
|
||||
* SQL processing is separated from data storage. The SQL processing (TiDB) and storage (TiKV) components independently scale horizontally.
|
||||
* PD (Placement Driver) acts as the cluster manager and stores metadata.
|
||||
* All components natively provide high availability, with PD and TiKV using the [Raft consensus algorithm][6].
|
||||
* You can access your data via either MySQL (TiDB) or Spark (TiSpark) protocols.
|
||||
|
||||
|
||||
|
||||
### Adding/fixing replication slaves
|
||||
|
||||
**tl;dr:** It doesn't happen in the same way as in MySQL.
|
||||
|
||||
Replication and redundancy of data are automatically managed by TiKV. You also don't need to worry about creating initial backups to seed replicas, as _both_ the provisioning and replication are handled for you.
|
||||
|
||||
Replication is also quorum-based using the Raft consensus algorithm, so you don't have to worry about the inconsistency problems surrounding failures that you do with asynchronous replication (the default in MySQL and what many users are using).
|
||||
|
||||
TiDB does support its own binary log, so it can be used for asynchronous replication between clusters.
|
||||
|
||||
### Optimizing slow queries
|
||||
|
||||
**tl;dr:** Still happens in TiDB
|
||||
|
||||
There is no real way around optimizing slow queries that have been introduced by development teams.
|
||||
|
||||
As a mitigating factor, though, if you need to add breathing room to your database's capacity while you work on optimization, TiDB's architecture allows you to scale horizontally.
|
||||
|
||||
### Upgrades and maintenance
|
||||
|
||||
**tl;dr:** Still required, but generally easier
|
||||
|
||||
Because the TiDB server is stateless, you can roll through an upgrade and deploy new TiDB servers. Then you can remove the older TiDB servers from the load balancer pool, shutting them down once connections have drained.
|
||||
|
||||
Upgrading PD is also quite straightforward since only the PD leader actively answers requests at a time. You can perform a rolling upgrade and upgrade PD's non-leader peers one at a time, and then change the leader before upgrading the final PD server.
|
||||
|
||||
For TiKV, the upgrade is marginally more complex. If you want to remove a node, I recommend first setting it to be a follower on each of the regions where it is currently a leader. After that, you can bring down the node without impacting your application. If the downtime is brief, TiKV will recover with its regional peers from the Raft log. In a longer downtime, it will need to re-copy data. This can all be managed for you, though, if you choose to deploy using Ansible or Kubernetes.
|
||||
|
||||
### Manual sharding
|
||||
|
||||
**tl;dr:** Not required
|
||||
|
||||
Manual sharding is mainly a pain on the part of the application developers, but as a DBA, you might have to get involved if the sharding is naive or has problems such as hotspots (many workloads do) that require re-balancing.
|
||||
|
||||
In TiDB, re-sharding or re-balancing happens automatically in the background. The PD server observes when data regions (TiKV's term for chunks of data in key-value form) get too small, too big, or too frequently accessed.
|
||||
|
||||
You can also explicitly configure PD to store regions on certain TiKV servers. This works really well when combined with MySQL partitioning.
|
||||
|
||||
### Capacity planning
|
||||
|
||||
**tl;dr:** Much easier
|
||||
|
||||
Capacity planning on a MySQL database can be a little bit hard because you need to plan your physical infrastructure requirements two to three years from now. As data grows (and the working set changes), this can be a difficult task. I wouldn't say it completely goes away in the cloud either, since changing a master server's hardware is always hard.
|
||||
|
||||
TiDB splits data into approximately 100MiB chunks that it distributes among TiKV servers. Because this increment is much smaller than a full server, it's much easier to move around and redistribute data. It's also possible to add new servers in smaller increments, which is easier on planning.
|
||||
|
||||
### Scaling
|
||||
|
||||
**tl;dr:** Much easier
|
||||
|
||||
This is related to capacity planning and sharding. When we talk about scaling, many people think about very large _systems,_ but that is not exclusively how I think of the problem:
|
||||
|
||||
* Scaling is being able to start with something very small, without having to make huge investments upfront on the chance it could become very large.
|
||||
* Scaling is also a people problem. If a system requires too much internal knowledge to operate, it can become hard to grow as an engineering organization. The barrier to entry for new hires can become very high.
|
||||
|
||||
|
||||
|
||||
Thus, by providing automatic sharding, TiDB can scale much easier.
|
||||
|
||||
### Schema changes (DDL)
|
||||
|
||||
**tl;dr:** Mostly better
|
||||
|
||||
The data definition language (DDL) supported in TiDB is all online, which means it doesn't block other reads or writes to the system. It also doesn't block the replication stream.
|
||||
|
||||
That's the good news, but there are a couple of limitations to be aware of:
|
||||
|
||||
* TiDB does not currently support all DDL operations, such as changing the primary key or some "change data type" operations.
|
||||
* TiDB does not currently allow you to chain multiple DDL changes in the same command, e.g., _ALTER TABLE t1 ADD INDEX (x), ADD INDEX (y)_. You will need to break these queries up into individual DDL queries.
|
||||
|
||||
|
||||
|
||||
This is an area that we're looking to improve in [TiDB 3.0][7].
|
||||
|
||||
### Creating one-off data dumps for the reporting team
|
||||
|
||||
**tl;dr:** May not be required
|
||||
|
||||
DBAs loathe manual tasks that create one-off exports of data to be consumed by another team, perhaps in an analytics tool or data warehouse.
|
||||
|
||||
This is often required when the types of queries executed on the dataset are analytical. TiDB has hybrid transactional/analytical processing (HTAP) capabilities, so in many cases, these queries should work fine. If your analytics team is using Spark, you can also use the [TiSpark][8] connector to allow them to connect directly to TiKV.
|
||||
|
||||
This is another area we are improving with [TiFlash][7], a column store accelerator. We are also working on a plugin system to support external authentication. This will make it easier to manage access by the reporting team.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In this post, I looked at some common MySQL DBA tasks and how they translate to TiDB. If you would like to learn more, check out our [TiDB Academy course][9] designed for MySQL DBAs (it's free!).
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/how-save-time-tidb
|
||||
|
||||
作者:[Morgan Tocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/morgo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
|
||||
[2]: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb
|
||||
[3]: https://github.com/pingcap/tidb
|
||||
[4]: https://opensource.com/sites/default/files/uploads/tidb_architecture.png (TiDB architecture)
|
||||
[5]: https://pingcap.com/docs/architecture/
|
||||
[6]: https://raft.github.io/
|
||||
[7]: https://pingcap.com/blog/tidb-3.0-beta-stability-at-scale/
|
||||
[8]: https://github.com/pingcap/tispark
|
||||
[9]: https://pingcap.com/tidb-academy/
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to transition into a Developer Relations career)
|
||||
[#]: via: (https://opensource.com/article/19/3/developer-relations-career)
|
||||
[#]: author: (Mary Thengvall https://opensource.com/users/marygrace-0)
|
||||
|
||||
How to transition into a Developer Relations career
|
||||
======
|
||||
|
||||
Combine your love for open source software with your love for the community in a way that allows you to invest your time in both.
|
||||
|
||||
![][1]
|
||||
|
||||
Let's say you've found an open source project you really love and you want to do more than just contribute. Or you love coding, but you don't want to spend the rest of your life interacting more with your computer than you do with people. How do you combine your love for open source software with your love for the community in a way that allows you to invest your time in both?
|
||||
|
||||
### Developer Relations: A symbiotic relationship
|
||||
|
||||
Enter community management, or as it's more commonly called in the tech industry, Developer Relations (DevRel for short). The goal of DevRel is, at its core, to empower developers. From writing content and creating documentation to supporting local meetups and bubbling up developer feedback internally, everything that a Developer Relations professional does on a day-to-day basis is for the benefit of the community. That's not to say that it doesn't benefit the company as well! After all, as Developer Relations professionals understand, if the community succeeds, so will the company. It's the best kind of symbiotic relationship!
|
||||
|
||||
These hybrid roles have been around since shortly after the open source and free software movements started, but the Developer Relations industry—and the Developer Advocate role, in particular—have exploded over the past few years. So what is Developer Relations exactly? Let's start by defining "community" so that we're all on the same page:
|
||||
|
||||
> **Community:** A group of people who not only share common principles, but also develop and share practices that help individuals in the group thrive.
|
||||
|
||||
This could be a group of people who have gathered around an open source project, a particular topic such as email, or who are all in a similar job function—the DevOps community, for instance.
|
||||
|
||||
As I mentioned, the role of a DevRel team is to empower the community by building up, encouraging, and amplifying the voice of the community members. While this will look slightly different at every company, depending on its goals, priorities, and direction, there are a few themes that are consistent throughout the industry.
|
||||
|
||||
1. **Listen:** Before making any plans or goals, take the time to listen.
|
||||
* _Listen to your company stakeholders:_ What do they expect of your team? What do they think you should be responsible for? What metrics are they accustomed to? And what business needs do they care most about?
|
||||
* _Listen to your customer community:_ What are customers' biggest pain points with your product? Where do they struggle with onboarding? Where does the documentation fail them?
|
||||
* _Listen to your product's technical audience:_ What problems are they trying to solve? What could be done to make their work life easier? Where do they get their content? What technological advances are they most excited about?
|
||||
|
||||
|
||||
2. **Gather information**
|
||||
Based on these answers, you can start making your plan. Find the overlapping areas where you can make your product a better fit for the larger technical audience and also make it easier for your customers to use. Figure out what content you can provide that not only answers your community's questions but also solves problems for your company's stakeholders. Learn about the areas where your co-workers struggle and see where your strengths can supplement those needs.
|
||||
|
||||
|
||||
3. **Make connections**
|
||||
Above all, community managers are responsible for making connections within the community as well as between community members and coworkers. These connections, or "DevRel qualified leads," are what ultimately shows the business value of a community manager's work. By making connections between community members Marie and Bob, who are both interested in the latest developments in Python, or between Marie and your coworker Phil, who's responsible for developer-focused content on your website, you're making your community a valuable source of information for everyone around you.
|
||||
|
||||
|
||||
|
||||
By getting to know your technical community, you become an expert on what customer needs your product can meet. With great power comes great responsibility. As the expert, you are now responsible for advocating internally for those needs, and you have the potential to make a big difference for your community.
|
||||
|
||||
### Getting started
|
||||
|
||||
So now what? If you're still with me, congratulations! You might just be a good fit for a Community Manager or Developer Advocate role. I'd encourage you to take community work for a test drive and see if you like the pace and the work. There's a lot of context switching and moving around between tasks, which can be a bit of an adjustment for some folks.
|
||||
|
||||
Volunteer to write a blog post for your marketing team (or for [Opensource.com][2]) or help out at an upcoming conference. Apply to speak at a local meetup or offer to advise on a few technical support cases. Get to know your community members on a deeper level.
|
||||
|
||||
Above all, Community Managers are 100% driven by a passion for building technical communities and bringing people together. If that resonates with you, it may be time for a career change!
|
||||
|
||||
I love talking to professionals that help others grow through community and Developer Relations practices. Don't hesitate to [reach out to me][3] if you have any questions or send me a [DM on Twitter][4].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/developer-relations-career
|
||||
|
||||
作者:[Mary Thengvall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/marygrace-0
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI
|
||||
[2]: https://opensource.com/how-submit-article
|
||||
[3]: https://www.marythengvall.com/about
|
||||
[4]: http://twitter.com/mary_grace
|
146
sources/tech/20170414 5 projects for Raspberry Pi at home.md
Normal file
@ -0,0 +1,146 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 projects for Raspberry Pi at home)
|
||||
[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
|
||||
[#]: author: (Ben Nuttall (Community Moderator) )
|
||||
|
||||
5 projects for Raspberry Pi at home
|
||||
======
|
||||
|
||||
![5 projects for Raspberry Pi at home][1]
|
||||
|
||||
The [Raspberry Pi][2] computer can be used in all kinds of settings and for a variety of purposes. It obviously has a place in education for helping students with learning programming and maker skills in the classroom and the hackspace, and it has plenty of industrial applications in the workplace and in factories. I'm going to introduce five projects you might want to build in your own home.
|
||||
|
||||
### Media center
|
||||
|
||||
One of the most common uses for Raspberry Pi in people's homes is behind the TV running media center software serving multimedia files. It's easy to set this up, and the Raspberry Pi provides plenty of GPU (Graphics Processing Unit) power to render HD TV shows and movies to your big screen TV. [Kodi][3] (formerly XBMC) on a Raspberry Pi is a great way to playback any media you have on a hard drive or network-attached storage. You can also install a plugin to play YouTube videos.
|
||||
|
||||
There are a few different options available, most prominently [OSMC][4] (Open Source Media Center) and [LibreELEC][5], both based on Kodi. They both perform well at playing media content, but OSMC has a more visually appealing user interface, while LibreELEC is much more lightweight. All you have to do is choose a distribution, download the image and install it on an SD card (or just use [NOOBS][6]), boot it up, and you're ready to go.
|
||||
|
||||
![LibreElec ][7]
|
||||
|
||||
LibreElec; Raspberry Pi Foundation, CC BY-SA
|
||||
|
||||
![OSMC][8]
|
||||
|
||||
OSMC.tv, Copyright, Used with permission
|
||||
|
||||
Before proceeding, you'll need to decide [which Raspberry Pi model to use][9]. These distributions will work on any Pi (1, 2, 3, or Zero), and video playback will be essentially the same on each of them. Apart from the Pi 3 (and Zero W) having built-in Wi-Fi, the only noticeable difference is the reaction speed of the user interface, which will be much faster on a Pi 3. A Pi 2 will not be much slower, so that's fine if you don't need Wi-Fi, but the Pi 3 will noticeably outperform the Pi 1 and Zero when it comes to flicking through the menus.
|
||||
|
||||
### SSH gateway
|
||||
|
||||
If you want to be able to access computers and devices on your home network from outside over the internet, you have to open up ports on those devices to allow outside traffic. Opening ports to the internet is a security risk, meaning you're always at risk of attack, misuse, or any kind of unauthorized access. However, if you install a Raspberry Pi on your network and set up port forwarding to allow only SSH access to that Pi, you can use that as a secure gateway to hop onto other Pis and PCs on the network.
|
||||
|
||||
Most routers allow you to configure port-forwarding rules. You'll need to give your Pi a fixed internal IP address and set up port 22 on your router to map to port 22 on your Raspberry Pi. If your ISP provides you with a static IP address, you'll be able to SSH into it with this as the host address (for example, **ssh pi@123.45.56.78**). If you have a domain name, you can configure a subdomain to point to this IP address, so you don't have to remember it (for example, **ssh [pi@home.mydomain.com][10]**).
|
||||
|
||||
![][11]
|
||||
|
||||
However, if you're going to expose a Raspberry Pi to the internet, you should be very careful not to put your network at risk. There are a few simple procedures you can follow to make it sufficiently secure:
|
||||
|
||||
1\. Most people suggest you change your login password (which makes sense, seeing as the default password “raspberry” is well known), but this does not protect against brute-force attacks. You could change your password and add two-factor authentication (so you need your password _and_ a time-dependent passcode generated by your phone), which is more secure. However, I believe the best way to secure your Raspberry Pi from intruders is to [disable “password authentication”][12] in your SSH configuration, so you allow only SSH key access. This means that anyone trying to SSH in by guessing your password will never succeed. Only with your private SSH key can anyone gain access. Similarly, most people suggest changing the SSH port from the default 22 to something unexpected, but a simple [Nmap][13] of your IP address will reveal your true SSH port.
|
||||
|
||||
2\. Ideally, you would not run much in the way of other software on this Pi, so you don't end up accidentally exposing anything else. If you want to run other software, you might be better off running it on another Pi on the network that is not exposed to the internet. Ensure that you keep your packages up to date by upgrading regularly, particularly the **openssh-server** package, so that any security vulnerabilities are patched.

3\. Install [sshblack][14] or [fail2ban][15] to blacklist any users who seem to be acting maliciously, such as attempting to brute force your SSH password.
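Following on from point 1, the key-only setup boils down to a few lines in the OpenSSH server configuration. Treat this as an illustrative excerpt rather than a complete hardening guide, and make sure key-based login already works before you turn password authentication off:

```
# Excerpt from /etc/ssh/sshd_config on the gateway Pi
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

After editing, restart the SSH service (on Raspbian, `sudo systemctl restart ssh`) from a session you know still works.
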
Once you've secured your Raspberry Pi and put it online, you'll be able to log in to your network from anywhere in the world. Once you're on your Raspberry Pi, you can SSH into other devices on the network using their local IP address (for example, 192.168.1.31). If you have passwords on these devices, just use the password. If they're also SSH-key-only, you'll need to ensure your key is forwarded over SSH by using the **-A** flag: **ssh -A pi@123.45.67.89**.
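As a concrete sketch of that two-step hop, using the example addresses above:

```
# Hop onto the gateway with agent forwarding, then onto an internal machine
ssh -A pi@123.45.67.89
ssh pi@192.168.1.31

# With a reasonably recent OpenSSH client (7.3 or later), the same hop in one go
ssh -J pi@123.45.67.89 pi@192.168.1.31
```
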
### CCTV / pet camera
Another great home project is to set up a camera module to take photos or stream video, capture and save files, or stream internally or over the internet. There are many reasons you might want to do this, but two common use cases are for a homemade security camera or to monitor a pet.

The [Raspberry Pi camera module][16] is a brilliant accessory. It provides full HD photo and video, lots of advanced configuration, and is [easy to program][17]. The [infrared camera][18] is ideal for this kind of use, and with an infrared LED (which the Pi can control) you can see in the dark!

If you want to take still images on a regular basis to keep an eye on things, you can just write a short [Python][19] script or use the command line tool [raspistill][20], and schedule it to run regularly with [cron][21]. You might want to have it save them to [Dropbox][22] or another web service, upload them to a web server, or you can even create a [web app][23] to display them.

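For example, a crontab entry along these lines captures a timestamped still every five minutes. The output directory is an assumption, so create it first and adjust the path to taste:

```
# Edit the pi user's crontab with: crontab -e
# (% characters must be escaped as \% inside a crontab)
*/5 * * * * raspistill -o /home/pi/captures/$(date +\%Y\%m\%d-\%H\%M).jpg
```
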
If you want to stream video, internally or externally, that's really easy, too. A simple MJPEG (Motion JPEG) example is provided in the [picamera documentation][24] (under “web streaming”). Just download or copy that code into a file, run it and visit the Pi's IP address at port 8000, and you'll see your camera's output live.
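In practice that amounts to something like the following, where `stream.py` is just an assumed name for wherever you saved the recipe:

```
# On the Pi: run the saved picamera web-streaming recipe
python3 stream.py

# Then, from another device on the same network, open:
#   http://<pi-ip-address>:8000/
```
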
A more advanced streaming project, [pistreaming][25], is also available. It uses [JSMpeg][26] (a JavaScript video player) with the web server and a separate websocket for the camera stream. This method performs better and is just as easy to get running as the previous example, but it involves more code and, if set up to stream over the internet, requires you to open two ports.

Once you have web streaming set up, you can position the camera where you want it. I have one set up to keep an eye on my pet tortoise:
![Tortoise ][27]
Ben Nuttall, CC BY-SA
If you want to be able to control where the camera actually points, you can do so using servos. A neat solution is to use Pimoroni's [Pan-Tilt HAT][28], which allows you to move the camera easily in two dimensions. To integrate this with pistreaming, see the project's [pantilthat branch][29].
![Pan-tilt][30]
Pimoroni.com, Copyright, Used with permission
If you want to position your Pi outside, you'll need a waterproof enclosure and some way of getting power to the Pi. PoE (Power-over-Ethernet) cables can be a good way of achieving this.
### Home automation and IoT
It's 2017 and there are internet-connected devices everywhere, especially in the home. Our lightbulbs have Wi-Fi, our toasters are smarter than they used to be, and our tea kettles are at risk of attack from Russia. As long as you keep your devices secure, or don't connect them to the internet if they don't need to be, then you can make great use of IoT devices to automate tasks around the home.
There are plenty of services you can buy or subscribe to, like Nest Thermostat or Philips Hue lightbulbs, which allow you to control your heating or your lighting from your phone, respectively—whether you're inside or away from home. You can use a Raspberry Pi to boost the power of these kinds of devices by automating interactions with them according to a set of rules involving timing or even sensors. One thing you can't do with Philips Hue is have the lights come on when you enter the room, but with a Raspberry Pi and a motion sensor, you can use a Python API to turn on the lights. Similarly, you can configure your Nest to turn on the heating when you're at home, but what if you only want it to turn on if there are at least two people home? Write some Python code to check which phones are on the network and, if there are at least two, tell the Nest to turn on the heat.

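Here is a minimal sketch of that presence check, assuming the phones have fixed IP addresses on your LAN; the addresses and the final action are placeholders, not a real Nest or Hue integration:

```
#!/usr/bin/env python3
"""Rough presence check: ping known phone IPs and count who is home."""
import subprocess

# Hypothetical fixed addresses for the phones on your network
PHONES = ["192.168.1.20", "192.168.1.21"]

def is_home(ip):
    # One ping with a one-second timeout; True if the phone answered
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

people_home = sum(is_home(ip) for ip in PHONES)

if people_home >= 2:
    # Placeholder: call whatever turns your heating on here
    print("At least two people are home - turn the heating on")
else:
    print("Fewer than two people home - leave the heating alone")
```

Run it from cron every few minutes; note that phones that aggressively sleep their Wi-Fi can produce false negatives, so an ARP-based check may work better in practice.
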
You can also do a great deal without integrating with existing IoT devices at all, using only simple components: a homemade burglar alarm, an automated chicken coop door opener, a night light, a music box, a timed heat lamp, an automated backup server, a print server, or whatever else you can imagine.

### Tor proxy and blocking ads
Adafruit's [Onion Pi][31] is a [Tor][32] proxy that makes your web traffic anonymous, allowing you to use the internet free of snoopers and any kind of surveillance. Follow Adafruit's tutorial on setting up Onion Pi and you're on your way to a peaceful anonymous browsing experience.
![Onion-Pi][33]
Onion-pi from Adafruit, Copyright, Used with permission
![Pi-hole][34]

You can install a Raspberry Pi on your network that intercepts all web traffic and filters out any advertising. Simply download the [Pi-hole][35] software onto the Pi, and all devices on your network will be ad-free (it even blocks in-app ads on your mobile devices).

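At the time of writing, the Pi-hole project documents a one-line installer; as with any `curl | bash` command, read the script first or follow the manual instructions on the [Pi-hole][35] site if you prefer:

```
curl -sSL https://install.pi-hole.net | bash
```
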
There are plenty more uses for the Raspberry Pi at home. What do you use Raspberry Pi for at home? What do you want to use it for?
Let us know in the comments.
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
|
||||
|
||||
作者:[Ben Nuttall (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
|
||||
[2]: https://www.raspberrypi.org/
|
||||
[3]: https://kodi.tv/
|
||||
[4]: https://osmc.tv/
|
||||
[5]: https://libreelec.tv/
|
||||
[6]: https://www.raspberrypi.org/downloads/noobs/
|
||||
[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec )
|
||||
[8]: https://opensource.com/sites/default/files/osmc.png (OSMC)
|
||||
[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project
|
||||
[10]: mailto:pi@home.mydomain.com
|
||||
[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png
|
||||
[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication
|
||||
[13]: https://nmap.org/
|
||||
[14]: http://www.pettingers.org/code/sshblack.html
|
||||
[15]: https://www.fail2ban.org/wiki/index.php/Main_Page
|
||||
[16]: https://www.raspberrypi.org/products/camera-module-v2/
|
||||
[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
|
||||
[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/
|
||||
[19]: http://picamera.readthedocs.io/
|
||||
[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md
|
||||
[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md
|
||||
[22]: https://github.com/RZRZR/plant-cam
|
||||
[23]: https://github.com/bennuttall/bett-bot
|
||||
[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming
|
||||
[25]: https://github.com/waveform80/pistreaming
|
||||
[26]: http://jsmpeg.com/
|
||||
[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise)
|
||||
[28]: https://shop.pimoroni.com/products/pan-tilt-hat
|
||||
[29]: https://github.com/waveform80/pistreaming/tree/pantilthat
|
||||
[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt)
|
||||
[31]: https://learn.adafruit.com/onion-pi/overview
|
||||
[32]: https://www.torproject.org/
|
||||
[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi)
|
||||
[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole)
|
||||
[35]: https://pi-hole.net/
|
@ -1,3 +1,6 @@
|
||||
Translating by MjSeven
|
||||
|
||||
|
||||
iWant – The Decentralized Peer To Peer File Sharing Commandline Application
|
||||
======
|
||||
|
||||
|
@ -1,252 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Advanced Techniques for Reducing Emacs Startup Time)
|
||||
[#]: via: (https://blog.d46.us/advanced-emacs-startup/)
|
||||
[#]: author: (Joe Schafer https://blog.d46.us/)
|
||||
|
||||
Advanced Techniques for Reducing Emacs Startup Time
|
||||
======
|
||||
|
||||
Six techniques to reduce Emacs startup time by the author of the [Emacs Start Up Profiler][1].
|
||||
|
||||
tl;dr: Do these steps:
|
||||
|
||||
1. Profile with Esup.
|
||||
2. Adjust the garbage collection threshold.
|
||||
3. Autoload **everything** with use-package.
|
||||
4. Avoid helper functions which cause eager loads.
|
||||
5. See my Emacs [config][2] for an example.
|
||||
|
||||
|
||||
|
||||
### From .emacs.d Bankruptcy to Now
|
||||
|
||||
I recently declared my third .emacs.d bankruptcy and finished the fourth iteration of my Emacs configuration. The evolution was:
|
||||
|
||||
1. Copy and paste elisp snippets into `~/.emacs` and hope it works.
|
||||
2. Adopt a more structured approach with `el-get` to manage dependencies.
|
||||
3. Give up and outsource to Spacemacs.
|
||||
4. Get tired of Spacemacs intricacies and rewrite with `use-package`.
|
||||
|
||||
|
||||
|
||||
This article is a collection of tips gathered during the three rewrites and from creating the Emacs Start Up Profiler. Many thanks to the teams behind Spacemacs, use-package, and general. Without these dedicated volunteers, this task would be vastly more difficult.
|
||||
|
||||
### But What About Daemon Mode
|
||||
|
||||
Before we get started, let me acknowledge the common retort when optimizing Emacs: “Emacs is meant to run as a daemon so you’ll only start it once.” That’s all well and good except:
|
||||
|
||||
* Fast things feel nicer.
|
||||
* When customizing Emacs, you sometimes get into weird states that can be hard to recover from without restarting. For example, if you add a slow `lambda` function to your `post-command-hook`, it’s tough to remove it.
|
||||
* Restarting Emacs helps verify that customization will persist between sessions.
|
||||
|
||||
|
||||
|
||||
### 1\. Establish the Current and Best Possible Start Up Time
|
||||
|
||||
The first step is to measure the current start up time. The easy way is to display the information at startup which will show progress through the next steps.
|
||||
|
||||
```
|
||||
(add-hook 'emacs-startup-hook
|
||||
(lambda ()
|
||||
(message "Emacs ready in %s with %d garbage collections."
|
||||
(format "%.2f seconds"
|
||||
(float-time
|
||||
(time-subtract after-init-time before-init-time)))
|
||||
gcs-done)))
|
||||
```
|
||||
|
||||
Second, measure the best possible startup speed so you know what’s possible. Mine is 0.3 seconds.
|
||||
|
||||
```
|
||||
emacs -q --eval='(message "%s" (emacs-init-time))'
|
||||
|
||||
;; For macOS users:
|
||||
open -n /Applications/Emacs.app --args -q --eval='(message "%s" (emacs-init-time))'
|
||||
```
|
||||
|
||||
### 2\. Profile Emacs Startup for Easy Wins
|
||||
|
||||
The [Emacs StartUp Profiler][1] (ESUP) will give you detailed metrics for top-level expressions.
|
||||
|
||||
![esup.png][3]
|
||||
|
||||
Figure 1: Emacs Start Up Profiler Screenshot
|
||||
|
||||
WARNING: Spacemacs users, ESUP currently chokes on the Spacemacs init.el file. Follow <https://github.com/jschaf/esup/issues/48> for updates.
|
||||
|
||||
### 3\. Set the Garbage Collection Threshold Higher during Startup
|
||||
|
||||
This saves about **0.3 seconds** on my configuration.
|
||||
|
||||
The default value for Emacs is 760kB which is extremely conservative on a modern machine. The real trick is to lower it back to something reasonable after initialization. This saves about 0.3 seconds on my init files.
|
||||
|
||||
```
|
||||
;; Make startup faster by reducing the frequency of garbage
|
||||
;; collection. The default is 800 kilobytes. Measured in bytes.
|
||||
(setq gc-cons-threshold (* 50 1000 1000))
|
||||
|
||||
;; The rest of the init file.
|
||||
|
||||
;; Make gc pauses faster by decreasing the threshold.
|
||||
(setq gc-cons-threshold (* 2 1000 1000))
|
||||
```
|
||||
|
||||
### 4\. Never require anything; autoload with use-package instead
|
||||
|
||||
The best way to make Emacs faster is to do less. Running `require` eagerly loads the underlying source file. It’s rare that you’ll need functionality immediately at startup time.
|
||||
|
||||
With [`use-package`][4], you declare which features you need from a package and `use-package` does the right thing. Here’s what it looks like:
|
||||
|
||||
```
|
||||
(use-package evil-lisp-state ; the Melpa package name
|
||||
|
||||
:defer t ; autoload this package
|
||||
|
||||
:init ; Code to run immediately.
|
||||
(setq evil-lisp-state-global nil)
|
||||
|
||||
:config ; Code to run after the package is loaded.
|
||||
(abn/define-leader-keys "k" evil-lisp-state-map))
|
||||
```
|
||||
|
||||
To see what packages Emacs currently has loaded, examine the `features` variable. For nice output see [lpkg explorer][5] or my variant in [abn-funcs-benchmark.el][6]. The output looks like:
|
||||
|
||||
```
|
||||
479 features currently loaded
|
||||
- abn-funcs-benchmark: /Users/jschaf/.dotfiles/emacs/funcs/abn-funcs-benchmark.el
|
||||
- evil-surround: /Users/jschaf/.emacs.d/elpa/evil-surround-20170910.1952/evil-surround.elc
|
||||
- misearch: /Applications/Emacs.app/Contents/Resources/lisp/misearch.elc
|
||||
- multi-isearch: nil
|
||||
- <many more>
|
||||
```
|
||||
|
||||
### 5\. Avoid Helper Functions to Set Up Modes
|
||||
|
||||
Often, Emacs packages will suggest running a helper function to set up keybindings. Here are a few examples:
|
||||
|
||||
* `(evil-escape-mode)`
|
||||
* `(windmove-default-keybindings) ; Sets up keybindings.`
|
||||
* `(yas-global-mode 1) ; Complex snippet setup.`
|
||||
|
||||
|
||||
|
||||
Rewrite these with use-package to improve startup speed. These helper functions are really just sneaky ways to trick you into eagerly loading packages before you need them.
|
||||
|
||||
As an example, here’s how to autoload `evil-escape-mode`.
|
||||
|
||||
```
|
||||
;; The definition of evil-escape-mode.
|
||||
(define-minor-mode evil-escape-mode
|
||||
(if evil-escape-mode
|
||||
(add-hook 'pre-command-hook 'evil-escape-pre-command-hook)
|
||||
(remove-hook 'pre-command-hook 'evil-escape-pre-command-hook)))
|
||||
|
||||
;; Before:
|
||||
(evil-escape-mode)
|
||||
|
||||
;; After:
|
||||
(use-package evil-escape
|
||||
:defer t
|
||||
;; Only needed for functions without an autoload comment (;;;###autoload).
|
||||
:commands (evil-escape-pre-command-hook)
|
||||
|
||||
;; Adding to a hook won't load the function until we invoke it.
|
||||
;; With pre-command-hook, that means the first command we run will
|
||||
;; load evil-escape.
|
||||
:init (add-hook 'pre-command-hook 'evil-escape-pre-command-hook))
|
||||
```
|
||||
|
||||
For a much trickier example, consider `org-babel`. The common recipe is:
|
||||
|
||||
```
|
||||
(org-babel-do-load-languages
|
||||
'org-babel-load-languages
|
||||
'((shell . t)
|
||||
(emacs-lisp . nil)))
|
||||
```
|
||||
|
||||
This is bad because `org-babel-do-load-languages` is defined in `org.el`, which is over 24k lines of code and takes about 0.2 seconds to load. After examining the source code, `org-babel-do-load-languages` is simply requiring the `ob-<lang>` package like so:
|
||||
|
||||
```
|
||||
;; From org.el in the org-babel-do-load-languages function.
|
||||
(require (intern (concat "ob-" lang)))
|
||||
```
|
||||
|
||||
In `ob-<lang>.el`, there are only two functions we care about, `org-babel-execute:<lang>` and `org-babel-expand-body:<lang>`. We can autoload the org-babel functionality instead of `org-babel-do-load-languages` like so:
|
||||
|
||||
```
|
||||
;; Avoid `org-babel-do-load-languages' since it does an eager require.
|
||||
(use-package ob-python
|
||||
:defer t
|
||||
:ensure org-plus-contrib
|
||||
:commands (org-babel-execute:python))
|
||||
|
||||
(use-package ob-shell
|
||||
:defer t
|
||||
:ensure org-plus-contrib
|
||||
:commands
|
||||
(org-babel-execute:sh
|
||||
org-babel-expand-body:sh
|
||||
|
||||
org-babel-execute:bash
|
||||
org-babel-expand-body:bash))
|
||||
```
|
||||
|
||||
### 6\. Defer Packages you don’t need Immediately with Idle Timers
|
||||
|
||||
This saves about **0.4 seconds** for the 9 packages I defer.
|
||||
|
||||
Some packages are useful and you want them available soon, but are not essential for immediate editing. These modes include:
|
||||
|
||||
* `recentf`: Saves recent files.
|
||||
* `saveplace`: Saves point of visited files.
|
||||
* `server`: Starts Emacs daemon.
|
||||
* `autorevert`: Automatically reloads files that changed on disk.
|
||||
* `paren`: Highlights matching parentheses.
|
||||
* `projectile`: Project management tools.
|
||||
* `whitespace`: Highlights trailing whitespace.
|
||||
|
||||
|
||||
|
||||
Instead of requiring these modes, **load them after N seconds of idle time**. I use 1 second for the more important packages and 2 seconds for everything else.
|
||||
|
||||
```
|
||||
(use-package recentf
|
||||
;; Loads after 1 second of idle time.
|
||||
:defer 1)
|
||||
|
||||
(use-package uniquify
|
||||
;; Less important than recentf.
|
||||
:defer 2)
|
||||
```
|
||||
|
||||
### Optimizations that aren’t Worth It
|
||||
|
||||
Don’t bother byte-compiling your personal Emacs files. It saved about 0.05 seconds. Byte compiling causes difficult-to-debug errors when the source file gets out of sync with the compiled file.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.d46.us/advanced-emacs-startup/
|
||||
|
||||
作者:[Joe Schafer][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.d46.us/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/jschaf/esup
|
||||
[2]: https://github.com/jschaf/dotfiles/blob/master/emacs/start.el
|
||||
[3]: https://blog.d46.us/images/esup.png
|
||||
[4]: https://github.com/jwiegley/use-package
|
||||
[5]: https://gist.github.com/RockyRoad29/bd4ca6fdb41196a71662986f809e2b1c
|
||||
[6]: https://github.com/jschaf/dotfiles/blob/master/emacs/funcs/abn-funcs-benchmark.el
|
@ -1,283 +0,0 @@
|
||||
tomjlw is translating
|
||||
Toplip – A Very Strong File Encryption And Decryption CLI Utility
|
||||
======
|
||||
There are numerous file encryption tools available on the market to protect
|
||||
your files. We have already reviewed some encryption tools such as
|
||||
[**Cryptomater**][1], [**Cryptkeeper**][2], [**CryptGo**][3], [**Cryptr**][4],
|
||||
[**Tomb**][5], and [**GnuPG**][6]. Today, we will be discussing yet
another file encryption and decryption command line utility named
**Toplip**. It is a free and open source encryption utility that uses a very
|
||||
strong encryption method called **[AES256][7]** , along with an **XTS-AES**
|
||||
design to safeguard your confidential data. Also, it uses [**Scrypt**][8], a
|
||||
password-based key derivation function, to protect your passphrases against
|
||||
brute-force attacks.
|
||||
|
||||
### Prominent features
|
||||
|
||||
Compared to other file encryption tools, toplip ships with the following
|
||||
unique and prominent features.
|
||||
|
||||
* Very strong XTS-AES256 based encryption method.
|
||||
* Plausible deniability.
|
||||
* Encrypt files inside images (PNG/JPG).
|
||||
* Multiple passphrase protection.
|
||||
* Simplified brute force recovery protection.
|
||||
* No identifiable output markers.
|
||||
* Open source/GPLv3.
|
||||
|
||||
### Installing Toplip
|
||||
|
||||
There is no installation required. Toplip is a standalone executable binary
|
||||
file. All you have to do is download the latest toplip from the [**official
|
||||
products page**][9] and make it as executable. To do so, just run:
|
||||
|
||||
```
|
||||
chmod +x toplip
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
If you run toplip without any arguments, you will see the help section.
|
||||
|
||||
```
|
||||
./toplip
|
||||
```
|
||||
|
||||
[![][10]][11]
|
||||
|
||||
Allow me to show you some examples.
|
||||
|
||||
For the purpose of this guide, I have created two files namely **file1** and
|
||||
**file2**. Also, I have an image file which we need it to hide the files
|
||||
inside it. And finally, I have **toplip** executable binary file. I have kept
|
||||
them all in a directory called **test**.
|
||||
|
||||
[![][12]][13]
|
||||
|
||||
**Encrypt/decrypt a single file**
|
||||
|
||||
Now, let us encrypt **file1**. To do so, run:
|
||||
|
||||
```
|
||||
./toplip file1 > file1.encrypted
|
||||
```
|
||||
|
||||
This command will prompt you to enter a passphrase. Once you have given the
|
||||
passphrase, it will encrypt the contents of **file1** and save them in a file
|
||||
called **file1.encrypted** in your current working directory.
|
||||
|
||||
Sample output of the above command would be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
To verify that the file is really encrypted, try to open it and you will see
|
||||
some random characters.
|
||||
|
||||
To decrypt the encrypted file, use **-d** flag like below:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted
|
||||
```
|
||||
|
||||
This command will decrypt the given file and display the contents in the
|
||||
Terminal window.
|
||||
|
||||
To restore the file instead of writing to stdout, do:
|
||||
|
||||
```
|
||||
./toplip -d file1.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
Enter the correct passphrase to decrypt the file. All contents of **file1.encrypted** will be restored in a file called **file1.decrypted**.
|
||||
|
||||
Please don't follow this naming method in practice; I used it only for the sake of easy understanding. Use names that are hard to predict.
|
||||
|
||||
**Encrypt/decrypt multiple files**
|
||||
|
||||
Now we will encrypt two files, using a separate passphrase for each one.
|
||||
|
||||
```
|
||||
./toplip -alt file1 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
You will be asked to enter passphrase for each file. Use different
|
||||
passphrases.
|
||||
|
||||
Sample output of the above command will be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file2 Passphrase #1** : generating keys...Done
|
||||
**file1 Passphrase #1** : generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
What the above command will do is encrypt the contents of two files and save
|
||||
them in a single file called **file3.encrypted**. While restoring, just give
|
||||
the respective password. For example, if you give the passphrase of the file1,
|
||||
toplip will restore file1. If you enter the passphrase of file2, toplip will
|
||||
restore file2.
|
||||
|
||||
Each **toplip** encrypted output may contain up to four wholly independent
|
||||
files, and each created with their own separate and unique passphrase. Due to
|
||||
the way the encrypted output is put together, there is no way to easily
|
||||
determine whether or not multiple files actually exist in the first place. By
|
||||
default, even if only one file is encrypted using toplip, random data is added
|
||||
automatically. If more than one file is specified, each with their own
|
||||
passphrase, then you can selectively extract each file independently and thus
|
||||
deny the existence of the other files altogether. This effectively allows a
|
||||
user to open an encrypted bundle with controlled exposure risk, and no
|
||||
computationally inexpensive way for an adversary to conclusively identify that
|
||||
additional confidential data exists. This is called **Plausible deniability**, one of the notable features of toplip.
|
||||
|
||||
To decrypt **file1** from **file3.encrypted** , just enter:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file1.encrypted
|
||||
```
|
||||
|
||||
You will be prompted to enter the correct passphrase of file1.
|
||||
|
||||
To decrypt **file2** from **file3.encrypted** , enter:
|
||||
|
||||
```
|
||||
./toplip -d file3.encrypted > file2.encrypted
|
||||
```
|
||||
|
||||
Do not forget to enter the correct passphrase of file2.
|
||||
|
||||
**Use multiple passphrase protection**
|
||||
|
||||
This is another cool feature that I admire. We can provide multiple
|
||||
passphrases for a single file when encrypting it. It will protect the
|
||||
passphrases against brute force attempts.
|
||||
|
||||
```
|
||||
./toplip -c 2 file1 > file1.encrypted
|
||||
```
|
||||
|
||||
Here, **-c 2** represents two different passphrases. Sample output of above
|
||||
command would be:
|
||||
|
||||
```
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file1 Passphrase #1:** generating keys...Done
|
||||
**file1 Passphrase #2:** generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
As you see in the above example, toplip prompted me to enter two passphrases.
|
||||
Please note that you must **provide two different passphrases** , not a single
|
||||
passphrase twice.
|
||||
|
||||
To decrypt this file, do:
|
||||
|
||||
```
|
||||
$ ./toplip -c 2 -d file1.encrypted > file1.decrypted
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
**file1.encrypted Passphrase #1:** generating keys...Done
|
||||
**file1.encrypted Passphrase #2:** generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
**Hide files inside image**
|
||||
|
||||
The practice of concealing a file, message, image, or video within another
|
||||
file is called **steganography**. Fortunately, this feature exists in toplip
|
||||
by default.
|
||||
|
||||
To hide a file(s) inside images, use **-m** flag as shown below.
|
||||
|
||||
```
|
||||
$ ./toplip -m image.png file1 > image1.png
|
||||
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
file1 Passphrase #1: generating keys...Done
|
||||
Encrypting...Done
|
||||
```
|
||||
|
||||
This command conceals the contents of file1 inside an image named image1.png.
|
||||
To decrypt it, run:
|
||||
|
||||
```
|
||||
$ ./toplip -d image1.png > file1.decrypted This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
|
||||
image1.png Passphrase #1: generating keys...Done
|
||||
Decrypting...Done
|
||||
```
|
||||
|
||||
**Increase password complexity**
|
||||
|
||||
To make things even harder to break, we can increase the password complexity
|
||||
like below.
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -alt file1 -c 10 -i 10 file2 > file3.encrypted
|
||||
```
|
||||
|
||||
The above command will prompt you to enter 5 passphrases for file1 and 10
passphrases for file2, and encrypt both of them in a single file called
"file3.encrypted". As you may have noticed, we used one additional flag,
**-i**, in this example. This is used to specify key derivation iterations.
|
||||
This option overrides the default iteration count of 1 for scrypt's initial
|
||||
and final PBKDF2 stages. Hexadecimal or decimal values permitted, e.g.
|
||||
**0x8000** , **10** , etc. Please note that this can dramatically increase the
|
||||
calculation times.
|
||||
|
||||
To decrypt file1, use:
|
||||
|
||||
```
|
||||
./toplip -c 5 -i 0x8000 -d file3.encrypted > file1.decrypted
|
||||
```
|
||||
|
||||
To decrypt file2, use:
|
||||
|
||||
```
|
||||
./toplip -c 10 -i 10 -d file3.encrypted > file2.decrypted
|
||||
```
|
||||
|
||||
To know more about the underlying technical information and crypto methods
|
||||
used in toplip, refer its official website given at the end.
|
||||
|
||||
My personal recommendation to all those who want to protect their data: don't
rely on a single method. Always use more than one tool or method to encrypt
your files. Do not write passphrases/passwords on paper, and do not save them
in your local or cloud storage; just memorize them and destroy the notes. If
you're poor at remembering passwords, consider using a trustworthy password
manager.
|
||||
|
||||
And, that's all. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
|
||||
[2]:https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
|
||||
[3]:https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
|
||||
[4]:https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
|
||||
[5]:https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
|
||||
[6]:https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
|
||||
[7]:http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
||||
[8]:http://en.wikipedia.org/wiki/Scrypt
|
||||
[9]:https://2ton.com.au/Products/
|
||||
[10]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png%201366w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-300x157.png%20300w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-768x403.png%20768w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2-1024x537.png%201024w
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
|
||||
[12]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png%20779w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1-300x101.png%20300w,%20https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1-768x257.png%20768w
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png
|
||||
|
@ -1,156 +0,0 @@
|
||||
Power(Shell) to the people
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightbulbs.png?itok=pwp22hTw)
|
||||
|
||||
Earlier this year, [PowerShell Core][1] [became generally available][2] under an Open Source ([MIT][3]) license. PowerShell is hardly a new technology. From its first release for Windows in 2006, PowerShell's creators [sought][4] to incorporate the power and flexibility of Unix shells while remedying their perceived deficiencies, particularly the need for text manipulation to derive value from combining commands.
|
||||
|
||||
Five major releases later, PowerShell Core allows the same innovative shell and command environment to run natively on all major operating systems, including OS X and Linux. Some (read: almost everyone) may still scoff at the audacity and/or the temerity of this Windows-born interloper to offer itself to platforms that have had strong shell environments since time immemorial (at least as defined by a millennial). In this post, I hope to make the case that PowerShell can provide advantages to even seasoned users.
|
||||
|
||||
### Consistency across platforms
|
||||
|
||||
If you plan to port your scripts from one execution environment to another, you need to make sure you use only the commands and syntaxes that work. For example, on GNU systems, you would obtain yesterday's date as follows:
|
||||
```
|
||||
date --date="1 day ago"
|
||||
|
||||
```
|
||||
|
||||
On BSD systems (such as OS X), the above syntax wouldn't work, as the BSD date utility requires the following syntax:
|
||||
```
|
||||
date -v -1d
|
||||
|
||||
```
|
||||
|
||||
Because PowerShell is licensed under a permissive license and built for all platforms, you can ship it with your application. Thus, when your scripts run in the target environment, they'll be running on the same shell using the same command implementations as the environment in which you tested your scripts.
|
||||
|
||||
### Objects and structured data
|
||||
|
||||
*nix commands and utilities rely on your ability to consume and manipulate unstructured data. Those who have lived for years with `sed`, `grep`, and `awk` may be unbothered by this statement, but there is a better way.
|
||||
|
||||
Let's redo the yesterday's date example in PowerShell. To get the current date, run the `Get-Date` cmdlet (pronounced "commandlet"):
|
||||
```
|
||||
> Get-Date
|
||||
|
||||
|
||||
|
||||
Sunday, January 21, 2018 8:12:41 PM
|
||||
|
||||
```
|
||||
|
||||
The output you see isn't really a string of text. Rather, it is a string representation of a .Net Core object. Just like any other object in any other OOP environment, it has a type and most often, methods you can call.
|
||||
|
||||
Let's prove this:
|
||||
```
|
||||
> $(Get-Date).GetType().FullName
|
||||
|
||||
System.DateTime
|
||||
|
||||
```
|
||||
|
||||
The `$(...)` syntax behaves exactly as you'd expect from POSIX shells—the result of the evaluation of the command in parentheses is substituted for the entire expression. In PowerShell, however, the $ is strictly optional in such expressions. And, most importantly, the result is a .Net object, not text. So we can call the `GetType()` method on that object to get its type object (similar to `Class` object in Java), and the `FullName` [property][5] to get the full name of the type.
|
||||
|
||||
So, how does this object-orientedness make your life easier?
|
||||
|
||||
First, you can pipe any object to the `Get-Member` cmdlet to see all the methods and properties it has to offer.
|
||||
```
|
||||
> (Get-Date) | Get-Member
|
||||
|
||||
|
||||
|
||||
TypeName: System.DateTime
|
||||
|
||||
|
||||
Name MemberType Definition
|
||||
---- ---------- ----------
|
||||
Add Method datetime Add(timespan value)
|
||||
AddDays Method datetime AddDays(double value)
|
||||
AddHours Method datetime AddHours(double value)
|
||||
AddMilliseconds Method datetime AddMilliseconds(double value)
|
||||
AddMinutes Method datetime AddMinutes(double value)
|
||||
AddMonths Method datetime AddMonths(int months)
|
||||
AddSeconds Method datetime AddSeconds(double value)
|
||||
AddTicks Method datetime AddTicks(long value)
|
||||
AddYears Method datetime AddYears(int value)
|
||||
CompareTo Method int CompareTo(System.Object value), int ...
|
||||
```
|
||||
|
||||
You can quickly see that the DateTime object has an `AddDays` method that you can use to get yesterday's date:
|
||||
```
|
||||
> (Get-Date).AddDays(-1)
|
||||
|
||||
|
||||
Saturday, January 20, 2018 8:24:42 PM
|
||||
```
|
||||
|
||||
To do something slightly more exciting, let's call Yahoo's weather service (because it doesn't require an API token) and get your local weather.
|
||||
```
|
||||
$city="Boston"
|
||||
$state="MA"
|
||||
$url="https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text%3D%22${city}%2C%20${state}%22)&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys"
|
||||
```
|
||||
|
||||
Now, we could do things the old-fashioned way and just run `curl $url` to get a giant blob of JSON, or...
|
||||
```
|
||||
$weather=(Invoke-RestMethod $url)
|
||||
```
|
||||
|
||||
If you look at the type of `$weather` (by running `echo $weather.GetType().FullName`), you will see that it's a `PSCustomObject`. It's a dynamic object that reflects the structure of the JSON.
|
||||
|
||||
And PowerShell will be thrilled to help you navigate through it with its tab completion. Just type `$weather.` (making sure to include the ".") and press Tab. You will see all the root-level JSON keys. Type one, followed by a "`.`", press Tab again, and you'll see its children (if any).
|
||||
|
||||
Thus, you can easily navigate to the data you want:
|
||||
```
|
||||
> echo $weather.query.results.channel.atmosphere.pressure
|
||||
1019.0
|
||||
|
||||
|
||||
> echo $weather.query.results.channel.wind.chill
|
||||
41
|
||||
```
|
||||
|
||||
And if you have JSON or CSV lying around (or returned by an outside command) as unstructured data, just pipe it into the `ConvertFrom-Json` or `ConvertFrom-CSV` cmdlet, respectively, and you can have your data in nice clean objects.
|
||||
|
||||
### Computing vs. automation
|
||||
|
||||
We use shells for two purposes. One is for computing, to run individual commands and to manually respond to their output. The other is automation, to write scripts that execute multiple commands and respond to their output programmatically.
|
||||
|
||||
A problem that most of us have learned to overlook is that these two purposes place different and conflicting requirements on the shell. Computing requires the shell to be laconic. The fewer keystrokes a user can get away with, the better. It's unimportant if what the user has typed is barely legible to another human being. Scripts, on the other hand, are code. Readability and maintainability are key. And here, POSIX utilities often fail us. While some commands do offer both laconic and readable syntaxes (e.g. `-f` and `--force`) for some of their parameters, the command names themselves err on the side of brevity, not readability.
|
||||
|
||||
PowerShell includes several mechanisms to eliminate that Faustian tradeoff.
|
||||
|
||||
First, tab completion eliminates typing of argument names. For instance, type `Get-Random -Mi`, press Tab and PowerShell will complete the argument for you: `Get-Random -Minimum`. But if you really want to be laconic, you don't even need to press Tab. For instance, PowerShell will understand
|
||||
```
|
||||
Get-Random -Mi 1 -Ma 10
|
||||
```
|
||||
|
||||
because `Mi` and `Ma` each have unique completions.
|
||||
|
||||
You may have noticed that all PowerShell cmdlet names have a verb-noun structure. This can help script readability, but you probably don't want to keep typing `Get-` over and over in the command line. So don't! If you type a noun without a verb, PowerShell will look for a `Get-` command with that noun.
|
||||
|
||||
Caution: although PowerShell is not case-sensitive, it's a good practice to capitalize the first letter of the noun when you intend to use a PowerShell command. For example, typing `date` will call your system's `date` utility. Typing `Date` will call PowerShell's `Get-Date` cmdlet.
|
||||
|
||||
And if that's not enough, PowerShell has aliases to create simple names. For example, if you type `alias -name cd`, you will discover the `cd` command in PowerShell is itself an alias for the `Set-Location` command.
|
||||
|
||||
So to review—you get powerful tab completion, aliases, and noun completions to keep your command names short, automatic and consistent parameter name truncation, while still enjoying a rich, readable syntax for scripting.
|
||||
|
||||
### So... friends?
|
||||
|
||||
These are just some of the advantages of PowerShell. There are more features and cmdlets I haven't discussed (check out [Where-Object][6] or its alias `?` if you want to make `grep` cry). And hey, if you really feel homesick, PowerShell will be happy to launch your old native utilities for you. But give yourself enough time to get acclimated in PowerShell's object-oriented cmdlet world, and you may find yourself choosing to forget the way back.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/powershell-people
|
||||
|
||||
作者:[Yev Bronshteyn][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/yevster
|
||||
[1]:https://github.com/PowerShell/PowerShell/blob/master/README.md
|
||||
[2]:https://blogs.msdn.microsoft.com/powershell/2018/01/10/powershell-core-6-0-generally-available-ga-and-supported/
|
||||
[3]:https://spdx.org/licenses/MIT
|
||||
[4]:http://www.jsnover.com/Docs/MonadManifesto.pdf
|
||||
[5]:https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/properties
|
||||
[6]:https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/where-object?view=powershell-6
|
@ -1,212 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (GraveAccent)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (JSON vs XML vs TOML vs CSON vs YAML)
|
||||
[#]: via: (https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/)
|
||||
[#]: author: (Tim Anderson https://www.zionandzion.com)
|
||||
|
||||
JSON vs XML vs TOML vs CSON vs YAML
|
||||
======
|
||||
|
||||
|
||||
### A Super Serious Segment About Sets, Subsets, and Supersets of Sample Serialization
|
||||
|
||||
I’m a developer. I read code. I write code. I write code that writes code. I write code that writes code for other code to read. It’s all very mumbo-jumbo, but beautiful in its own way. However, that last bit, writing code that writes code for other code to read, can get more convoluted than this paragraph—quickly. There are a lot of ways to do it. One not-so-convoluted way and a favorite among the developer community is through data serialization. For those who aren’t savvy on the super buzzword I just threw at you, data serialization is the process of taking some information from one system, churning it into a format that other systems can read, and then passing it along to those other systems.
|
||||
|
||||
While there are enough [data serialization formats][1] out there to bury the Burj Khalifa, they all mostly fall into two categories:
|
||||
|
||||
* simplicity for humans to read and write,
|
||||
* and simplicity for machines to read and write.
|
||||
|
||||
|
||||
|
||||
It’s difficult to have both as we humans enjoy loosely typed, flexible formatting standards that allow us to be more expressive, whereas machines tend to enjoy being told exactly what everything is without doubt or lack of detail, and consider “strict specifications” to be their favorite flavor of Ben & Jerry’s.
|
||||
|
||||
Since I’m a web developer and we’re an agency who creates websites, we’ll stick to those special formats that web systems can understand, or be made to understand without much effort, and that are particularly useful for human readability: XML, JSON, TOML, CSON, and YAML. Each has benefits, cons, and appropriate use cases.
|
||||
|
||||
### Facts First
|
||||
|
||||
Back in the early days of the interwebs, [some really smart fellows][2] decided to put together a standard language which every system could read and creatively named it Standard Generalized Markup Language, or SGML for short. SGML was incredibly flexible and well defined by its publishers. It became the father of languages such as XML, SVG, and HTML. All three fall under the SGML specification, but are subsets with stricter rules and shorter flexibility.
|
||||
|
||||
Eventually, people started seeing a great deal of benefit in having very small, concise, easy to read, and easy to generate data that could be shared programmatically between systems with very little overhead. Around that time, JSON was born and was able to fulfil all requirements. In turn, other languages began popping up to deal with more specialized cases such as CSON, TOML, and YAML.
|
||||
|
||||
### XML: Ixnayed
|
||||
|
||||
Originally, the XML language was amazingly flexible and easy to write, but its drawback was that it was verbose, difficult for humans to read, really difficult for computers to read, and had a lot of syntax that wasn’t entirely necessary to communicate information.
|
||||
|
||||
Today, it’s all but dead for data serialization purposes on the web. Unless you’re writing HTML or SVG, both siblings to XML, you probably aren’t going to see XML in too many other places. Some outdated systems still use it today, but using it to pass data around tends to be overkill for the web.
|
||||
|
||||
I can already hear the XML greybeards beginning to scribble upon their stone tablets as to why XML is ah-may-zing, so I’ll provide a small addendum: XML can be easy to read and write by systems and people. However, it is really, and I mean ridiculously, hard to create a system that can read it to specification. Here’s a simple, beautiful example of XML:
|
||||
|
||||
```
|
||||
<book id="bk101">
|
||||
<author>Gambardella, Matthew</author>
|
||||
<title>XML Developer's Guide</title>
|
||||
<genre>Computer</genre>
|
||||
<price>44.95</price>
|
||||
<publish_date>2000-10-01</publish_date>
|
||||
<description>An in-depth look at creating applications
|
||||
with XML.</description>
|
||||
</book>
|
||||
```
|
||||
|
||||
Wonderful. Easy to read, reason about, write, and code a system that can read and write. But consider this example:
|
||||
|
||||
```
|
||||
<!DOCTYPE r [ <!ENTITY y "a]>b"> ]>
|
||||
<r>
|
||||
<a b="&y;>" />
|
||||
<![CDATA[[a>b <a>b <a]]>
|
||||
<?x <a> <!-- <b> ?> c --> d
|
||||
</r>
|
||||
```
|
||||
|
||||
The above is 100% valid XML. Impossible to read, understand, or reason about. Writing code that can consume and understand this would cost at least 36 heads of hair and 248 pounds of coffee grounds. We don’t have that kind of time nor coffee, and most of us greybeards are balding nowadays. So let’s let it live only in our memory alongside [css hacks][3], [internet explorer 6][4], and [vacuum tubes][5].
|
||||
|
||||
### JSON: Juxtaposition Jamboree
|
||||
|
||||
Okay, we’re all in agreement. XML = bad. So, what’s a good alternative? JavaScript Object Notation, or JSON for short. JSON (read like the name Jason) was invented by Brendan Eich, and made popular by the great and powerful Douglas Crockford, the [Dutch Uncle of JavaScript][6]. It’s used just about everywhere nowadays. The format is easy to write by both human and machine, fairly easy to [parse][7] with strict rules in the specification, and flexible—allowing deep nesting of data, all of the primitive data types, and interpretation of collections as either arrays or objects. JSON became the de facto standard for transferring data from one system to another. Nearly every language out there has built-in functionality for reading and writing it.
|
||||
|
||||
JSON syntax is straightforward. Square brackets denote arrays, curly braces denote records, and a name and a value separated by a colon denote a property (or ‘key’) on the left, and its value on the right. All keys must be wrapped in double quotes:
|
||||
|
||||
```
|
||||
{
|
||||
"books": [
|
||||
{
|
||||
"id": "bk102",
|
||||
"author": "Crockford, Douglas",
|
||||
"title": "JavaScript: The Good Parts",
|
||||
"genre": "Computer",
|
||||
"price": 29.99,
|
||||
"publish_date": "2008-05-01",
|
||||
"description": "Unearthing the Excellence in JavaScript"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
This should make complete sense to you. It’s nice and concise, and has stripped much of the extra nonsense from XML to convey the same amount of information. JSON is king right now, and the rest of this article will go into other language formats that are nothing more than JSON boiled down in an attempt to be either more concise or more readable by humans, but follow very similar structure.
|
||||
|
||||
### TOML: Truncated to Total Altruism
|
||||
|
||||
TOML (Tom’s Obvious, Minimal Language) allows for defining deeply-nested data structures rather quickly and succinctly. The name-in-the-name refers to its creator, [Tom Preston-Werner][8], a software developer who’s active in our industry. The syntax is a bit awkward when compared to JSON, and is more akin to an [ini file][9]. It’s not a bad syntax, but could take some getting used to:
|
||||
|
||||
```
|
||||
[[books]]
|
||||
id = 'bk101'
|
||||
author = 'Crockford, Douglas'
|
||||
title = 'JavaScript: The Good Parts'
|
||||
genre = 'Computer'
|
||||
price = 29.99
|
||||
publish_date = 2008-05-01T00:00:00+00:00
|
||||
description = 'Unearthing the Excellence in JavaScript'
|
||||
```
|
||||
|
||||
A couple great features have been integrated into TOML, such as multiline strings, auto-escaping of reserved characters, datatypes such as dates, time, integers, floats, scientific notation, and “table expansion”. That last bit is special, and is what makes TOML so concise:
|
||||
|
||||
```
|
||||
[a.b.c]
|
||||
d = 'Hello'
|
||||
e = 'World'
|
||||
```
|
||||
|
||||
The above expands to the following:
|
||||
|
||||
```
{
  "a": {
    "b": {
      "c": {
        "d": "Hello",
        "e": "World"
      }
    }
  }
}
```
|
||||
|
||||
You can definitely see how much you can save in both time and file length using TOML. There are few systems which use it or something very similar for configuration, and that is its biggest con. There simply aren’t very many languages or libraries out there written to interpret TOML.
|
||||
|
||||
### CSON: Simple Samples Enslaved by Specific Systems
|
||||
|
||||
First off, there are two CSON specifications. One stands for CoffeeScript Object Notation, the other stands for Cursive Script Object Notation. The latter isn’t used too often, so we won’t be getting into it. Let’s just focus on the CoffeeScript one.
|
||||
|
||||
[CSON][10] will take a bit of intro. First, let’s talk about CoffeeScript. [CoffeeScript][11] is a language that runs through a compiler to generate JavaScript. It allows you to write JavaScript in a more syntactically concise way, and have it [transcompiled][12] into actual JavaScript, which you would then use in your web application. CoffeeScript makes writing JavaScript easier by removing a lot of the extra syntax necessary in JavaScript. A big one that CoffeeScript gets rid of is curly braces—no need for them. In that same token, CSON is JSON without the curly braces. It instead relies on indentation to determine hierarchy of your data. CSON is very easy to read and write and usually requires fewer lines of code than JSON because there are no brackets.
|
||||
|
||||
CSON also offers up some extra niceties that JSON doesn’t have to offer. Multiline strings are incredibly easy to write, you can enter [comments][13] by starting a line with a hash, and there’s no need for separating key-value pairs with commas.
|
||||
|
||||
```
|
||||
books: [
|
||||
id: 'bk102'
|
||||
author: 'Crockford, Douglas'
|
||||
title: 'JavaScript: The Good Parts'
|
||||
genre: 'Computer'
|
||||
price: 29.99
|
||||
publish_date: '2008-05-01'
|
||||
description: 'Unearthing the Excellence in JavaScript'
|
||||
]
|
||||
```
|
||||
|
||||
Here’s the big issue with CSON. It’s **CoffeeScript** Object Notation. Meaning CoffeeScript is what you use to parse/tokenize/lex/transcompile or otherwise use CSON. CoffeeScript is the system that reads the data. If the intent of data serialization is to allow data to be passed from one system to another, and here we have a data serialization format that’s only read by a single system, well that makes it about as useful as a fireproof match, or a waterproof sponge, or that annoyingly flimsy fork part of a spork.
|
||||
|
||||
If this format is adopted by other systems, it could be pretty useful in the developer world. Thus far that hasn’t happened in a comprehensive manner, so using it in alternative languages such as PHP or Java is a no-go.
|
||||
|
||||
### YAML: Yielding Yips from Youngsters
|
||||
|
||||
Developers rejoice, as YAML comes into the scene from [one of the contributors to Python][14]. YAML has the same feature set and similar syntax as CSON, a boatload of new features, and parsers available in just about every web programming language there is. It also has some extra features, like circular referencing, soft-wraps, multi-line keys, typecasting tags, binary data, object merging, and [set maps][15]. It has incredibly good human readability and writability, and is a superset of JSON, so you can use fully qualified JSON syntax inside YAML and all will work well. You almost never need quotes, and it can interpret most of your base data types (strings, integers, floats, booleans, etc.).
|
||||
|
||||
```
|
||||
books:
|
||||
- id: bk102
|
||||
author: Crockford, Douglas
|
||||
title: 'JavaScript: The Good Parts'
|
||||
genre: Computer
|
||||
price: 29.99
|
||||
publish_date: !!str 2008-05-01
|
||||
description: Unearthing the Excellence in JavaScript
|
||||
```
|
||||
|
||||
The younglings of the industry are rapidly adopting YAML as their preferred data serialization and system configuration format. They are smart to do so. YAML has all the benefits of being as terse as CSON, and all the datatype interpretation features of JSON. YAML is as easy to read as Canadians are to hang out with.

There are two issues with YAML that stick out to me, and the first is a big one. At the time of this writing, YAML parsers haven’t yet been built into very many languages, so you’ll need to use a third-party library or extension for your chosen language to parse .yaml files. This wouldn’t be a big deal; however, it seems most developers who’ve created parsers for YAML have chosen to throw “additional features” into their parsers at random. Some allow [tokenization][16], some allow [chain referencing][17], some even allow inline calculations. This is all well and good (sort of), except that none of these features are part of the specification, and so are difficult to find amongst other parsers in other languages. This results in system-locking; you end up with the same issue that CSON is subject to. If you use a feature found in only one parser, other parsers won’t be able to interpret the input. Most of these features are nonsense that don’t belong in a dataset, but rather in your application logic, so it’s best to simply ignore them and write your YAML to specification.

The second issue is that few parsers yet completely implement the specification. All the basics are there, but it can be difficult to find some of the more complex and newer things like soft-wraps, document markers, and circular references in your preferred language. I have yet to see an absolute need for these things, so hopefully they shouldn’t slow you down too much. With the above considered, I tend to keep to the more matured feature set presented in the [1.1 specification][18], and avoid the newer stuff found in the [1.2 specification][19]. However, programming is an ever-evolving monster, so by the time you finish reading this article, you’re likely to be able to use the 1.2 spec.

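The 1.1-versus-1.2 gap is easy to demonstrate with a couple of lines of Python (again assuming PyYAML, which follows the 1.1 resolution rules): bare words like `no` and `on` are booleans under YAML 1.1, while the 1.2 spec reads them as plain strings.

```
import yaml  # PyYAML implements the YAML 1.1 rules

# The classic "Norway problem": a bare `no` is a boolean under YAML 1.1.
print(yaml.safe_load("country: no"))    # {'country': False}

# Quoting it keeps it a string under either version of the spec.
print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}
```

If you stick to the 1.1 feature set as suggested above, quote anything that could be mistaken for a boolean, a number, or a date.
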
### Final Philosophy

The final word here is that each serialization language should be treated with a case-by-case reverence. Some are the bee’s knees when it comes to machine readability, some are the cat’s meow for human readability, and some are simply gilded turds. Here’s the ultimate breakdown: If you are writing code for other code to read, use YAML. If you are writing code that writes code for other code to read, use JSON. Finally, if you are writing code that transcompiles code into code that other code will read, rethink your life choices.

--------------------------------------------------------------------------------

via: https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/

作者:[Tim Anderson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.zionandzion.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Comparison_of_data_serialization_formats
[2]: https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language#History
[3]: https://www.quirksmode.org/css/csshacks.html
[4]: http://www.ie6death.com/
[5]: https://en.wikipedia.org/wiki/Vacuum_tube
[6]: https://twitter.com/BrendanEich/status/773403975865470976
[7]: https://en.wikipedia.org/wiki/Parsing#Parser
[8]: https://en.wikipedia.org/wiki/Tom_Preston-Werner
[9]: https://en.wikipedia.org/wiki/INI_file
[10]: https://github.com/bevry/cson#what-is-cson
[11]: http://coffeescript.org/
[12]: https://en.wikipedia.org/wiki/Source-to-source_compiler
[13]: https://en.wikipedia.org/wiki/Comment_(computer_programming)
[14]: http://clarkevans.com/
[15]: http://exploringjs.com/es6/ch_maps-sets.html
[16]: https://www.tutorialspoint.com/compiler_design/compiler_design_lexical_analysis.htm
[17]: https://en.wikipedia.org/wiki/Fluent_interface
[18]: http://yaml.org/spec/1.1/current.html
[19]: http://www.yaml.org/spec/1.2/spec.html

@ -1,241 +0,0 @@

Python ChatOps libraries: Opsdroid and Errbot
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)

This article was co-written with [Lacey Williams Henschel][1].

ChatOps is conversation-driven development. The idea is you can write code that is executed in response to something typed in a chat window. As a developer, you could use ChatOps to merge pull requests from Slack, automatically assign a support ticket to someone from a received Facebook message, or check the status of a deployment through IRC.

In the Python world, the most widely used ChatOps libraries are Opsdroid and Errbot. In this month's Python column, let's chat about what it's like to use them, what each does well, and how to get started with them.

### Opsdroid

[Opsdroid][2] is a relatively young (since 2016) open source chatbot library written in Python. It has good documentation, a great tutorial, and includes plugins to help you connect to popular chat services.

#### What's built in

The library itself doesn't ship with everything you need to get started, but this is by design. The lightweight framework encourages you to enable its existing connectors (what Opsdroid calls the plugins that help you connect to chat services) or write your own, but it doesn't weigh itself down by shipping with connectors you may not need. You can easily enable existing Opsdroid connectors for:

+ The command line
+ Cisco Spark
+ Facebook
+ GitHub
+ Matrix
+ Slack
+ Telegram
+ Twitter
+ Websockets

Opsdroid calls the functions the chatbot performs "skills." Skills are `async` Python functions and use Opsdroid's matching decorators, called "matchers." You can configure your Opsdroid project to use skills from the same codebase your configuration file is in or import skills from outside public or private repositories.

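As a taste of what a matcher-based skill looks like (the complete example project appears in the "Example usage" section below), here is a small sketch of a skill that pulls an argument out of the chat message. The `match_regex` decorator and `message.respond()` call are the same ones used later in this article; the `message.regex` attribute for reaching the match object is an assumption based on Opsdroid's matcher docs, so verify it against your version:

```
from opsdroid.matchers import match_regex


@match_regex(r'ping (?P<target>\w+)')
async def ping(opsdroid, config, message):
    # message.regex should hold the regex match for the triggering text
    # (an assumption; check Opsdroid's matcher documentation for your version).
    target = message.regex.group('target')
    await message.respond('pinging {}...'.format(target))
```
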
You can enable some existing Opsdroid skills as well, including [seen][3], which tells you when a specific user was last seen by the bot, and [weather][4], which will report the weather to the user.

Finally, Opsdroid allows you to configure databases using its existing database modules. Current databases with Opsdroid support include:

+ Mongo
+ Redis
+ SQLite

You configure databases, skills, and connectors in the `configuration.yaml` file in your Opsdroid project.

#### Opsdroid pros

**Docker support:** Opsdroid is meant to work well in Docker from the get-go. Docker instructions are part of its [installation documentation][5]. Using Opsdroid with Docker Compose is also simple: Set up Opsdroid as a service and when you run `docker-compose up`, your Opsdroid service will start and your chatbot will be ready to chat.

```
version: "3"

services:
  opsdroid:
    container_name: opsdroid
    build:
      context: .
      dockerfile: Dockerfile
```

**Lots of connectors:** Opsdroid supports nine connectors to services like Slack and GitHub out of the box; all you need to do is enable those connectors in your configuration file and pass necessary tokens or API keys. For example, to enable Opsdroid to post in a Slack channel named `#updates`, add this to the `connectors` section of your configuration file:

```
- name: slack
  api-token: "this-is-my-token"
  default-room: "#updates"
```

You will have to [add a bot user][6] to your Slack workspace before configuring Opsdroid to connect to Slack.

If you need to connect to a service that Opsdroid does not support, there are instructions for adding your own connectors in the [docs][7].

**Pretty good docs:** Especially for a young-ish library in active development, Opsdroid's docs are very helpful. The docs include a [tutorial][8] that leads you through creating a couple of different basic skills. The Opsdroid documentation on [skills][9], [connectors][7], [databases][10], and [matchers][11] is also clear.

The repositories for its supported skills and connectors provide helpful example code for when you start writing your own custom skills and connectors.

**Natural language processing:** Opsdroid supports regular expressions for its skills, but also several NLP APIs, including [Dialogflow][12], [luis.ai][13], [Recast.AI][14], and [wit.ai][15].

#### Possible Opsdroid concern

Opsdroid doesn't yet enable the full features of some of its connectors. For example, the Slack API allows you to add color bars, images, and other "attachments" to your message. The Opsdroid Slack connector doesn't enable the "attachments" feature, so you would need to write a custom Slack connector if those features were important to you. If a connector is missing a feature you need, though, Opsdroid would welcome your [contribution][16]. The docs could use some more examples, especially of expected use cases.

#### Example usage

`hello/__init__.py`

```
from opsdroid.matchers import match_regex
import random


@match_regex(r'hi|hello|hey|hallo')
async def hello(opsdroid, config, message):
    text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(message.user)
    await message.respond(text)
```

`configuration.yaml`

```
connectors:
  - name: websocket

skills:
  - name: hello
    repo: "https://github.com/<user_id>/hello-skill"
```

### Errbot

[Errbot][17] is a batteries-included open source chatbot. Errbot was released in 2012 and has everything anyone would expect from a mature project, including good documentation, a great tutorial, and plenty of plugins to help you connect to existing popular chat services.

#### What's built in

Unlike Opsdroid, which takes a more lightweight approach, Errbot ships with everything you need to build a customized bot safely.

Errbot includes support for XMPP, IRC, Slack, Hipchat, and Telegram services natively. It lists support for 10 other services through community-supplied backends.

#### Errbot pros

**Good docs:** Errbot's docs are mature and easy to use.

**Dynamic plugin architecture:** Errbot allows you to securely install, uninstall, update, enable, and disable plugins by chatting with the bot. This makes development and adding features easy. For the security conscious, this can all be locked down thanks to Errbot's granular permission system.

Errbot uses your plugin docstrings to generate documentation for available commands when someone types `!help`, which makes it easier to know what each command does.

**Built-in administration and security:** Errbot allows you to restrict lists of users who have administrative rights and even has fine-grained access controls. For example, you can restrict which commands may be called by specific users and/or specific rooms.

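Those restrictions live in Errbot's `config.py`, which is plain Python. A minimal sketch follows; the `BOT_ADMINS` and `ACCESS_CONTROLS` names come from Errbot's configuration template, but treat the exact keys as assumptions to confirm for your Errbot version:

```
# config.py (excerpt) -- a sketch of Errbot's permission settings.

# Users allowed to run administrative commands such as plugin installs.
BOT_ADMINS = ('@alice',)

# Fine-grained rules: limit individual commands to specific users or rooms.
ACCESS_CONTROLS = {
    'restart': {'allowusers': BOT_ADMINS},
    'status': {'allowrooms': ('#ops',)},
}
```
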
**Extensive plugin framework:** Errbot supports hooks, callbacks, subcommands, webhooks, polling, and many [more features][18]. If those aren't enough, you can even write [Dynamic plugins][19]. This feature is useful if you want to enable chat commands based on what commands are available on a remote server.

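For instance, the webhook support means a plugin method can be triggered by an HTTP request instead of a chat message. Here is a minimal sketch modeled on the `@webhook` decorator in Errbot's plugin docs; consider the exact signature and routing an assumption to double-check:

```
from errbot import BotPlugin, webhook


class Deployments(BotPlugin):
    @webhook
    def deploy_done(self, incoming_request):
        # Runs when something POSTs to the route Errbot exposes for this method;
        # incoming_request holds the parsed request payload.
        self.log.info("Deploy webhook received: %r", incoming_request)
        return "OK"
```
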
**Ships with a testing framework:** Errbot supports [pytest][20] and ships with some useful utilities that make testing your plugins easy. Its "[testing your plugins][21]" docs are well thought out and provide enough to get started.

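A sketch of what such a test can look like, following the pattern in Errbot's testing docs (the `errbot.backends.test` pytest plugin and the `testbot` fixture are what those docs describe; treat the details as assumptions and adjust paths for your project):

```
# test_hello.py -- run with `pytest`; assumes the Hello plugin lives in this directory.
pytest_plugins = ["errbot.backends.test"]

extra_plugin_dir = "."


def test_hello_replies(testbot):
    # Simulate a user issuing the command and inspect the bot's reply.
    testbot.push_message("!hello")
    assert testbot.pop_message().startswith(("Hi", "Hello", "Hey"))
```
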
#### Possible Errbot concerns

**Initial !:** By default, Errbot commands are issued starting with an exclamation mark (`!help` and `!hello`). Some people may like this, but others may find it annoying. Thankfully, this is easy to turn off.

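Turning the prefix off, or swapping in another character, is again a `config.py` tweak. The setting names below come from Errbot's configuration template; confirm them for your version:

```
# config.py (excerpt) -- a sketch of the prefix-related settings.
BOT_PREFIX = '!'                    # change the leading character if you prefer
BOT_PREFIX_OPTIONAL_ON_CHAT = True  # let chatroom messages omit the prefix entirely
```
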
**Plugin metadata:** At first, Errbot's [Hello World][22] plugin example seems easy to use. However, I couldn't get my plugin to load until I read further into the tutorial and discovered that I also needed a `.plug` file, a file Errbot uses to load plugins. This is a pretty minor nitpick, but it wasn't obvious to me until I dug further into the docs.

#### Example usage

`hello.py`

```
import random
from errbot import BotPlugin, botcmd


class Hello(BotPlugin):
    @botcmd
    def hello(self, msg, args):
        # msg.frm identifies the sender of the incoming message.
        text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(msg.frm)
        return text
```

`hello.plug`

```
[Core]
Name = Hello
Module = hello

[Python]
Version = 2+

[Documentation]
Description = Example "Hello" plugin
```

Have you used Errbot or Opsdroid? If so, please leave a comment with your impressions on these tools.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/python-chatops-libraries-opsdroid-and-errbot

作者:[Jeff Triplett][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/laceynwilliams
[1]:https://opensource.com/users/laceynwilliams
[2]:https://opsdroid.github.io/
[3]:https://github.com/opsdroid/skill-seen
[4]:https://github.com/opsdroid/skill-weather
[5]:https://opsdroid.readthedocs.io/en/stable/#docker
[6]:https://api.slack.com/bot-users
[7]:https://opsdroid.readthedocs.io/en/stable/extending/connectors/
[8]:https://opsdroid.readthedocs.io/en/stable/tutorials/introduction/
[9]:https://opsdroid.readthedocs.io/en/stable/extending/skills/
[10]:https://opsdroid.readthedocs.io/en/stable/extending/databases/
[11]:https://opsdroid.readthedocs.io/en/stable/matchers/overview/
[12]:https://opsdroid.readthedocs.io/en/stable/matchers/dialogflow/
[13]:https://opsdroid.readthedocs.io/en/stable/matchers/luis.ai/
[14]:https://opsdroid.readthedocs.io/en/stable/matchers/recast.ai/
[15]:https://opsdroid.readthedocs.io/en/stable/matchers/wit.ai/
[16]:https://opsdroid.readthedocs.io/en/stable/contributing/
[17]:http://errbot.io/en/latest/
[18]:http://errbot.io/en/latest/features.html#extensive-plugin-framework
[19]:http://errbot.io/en/latest/user_guide/plugin_development/dynaplugs.html
[20]:http://pytest.org/
[21]:http://errbot.io/en/latest/user_guide/plugin_development/testing.html
[22]:http://errbot.io/en/latest/index.html#simple-to-build-upon

@ -1,77 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Asynchronous rsync with Emacs, dired and tramp.)
[#]: via: (https://vxlabs.com/2018/03/30/asynchronous-rsync-with-emacs-dired-and-tramp/)
[#]: author: (cpbotha https://vxlabs.com/author/cpbotha/)

Asynchronous rsync with Emacs, dired and tramp.
======

[tmtxt-dired-async][1] by [Trần Xuân Trường][2] is an unfortunately lesser known Emacs package which extends dired, the Emacs file manager, to be able to run rsync and other commands (zip, unzip, downloading) asynchronously.

This means you can copy gigabytes of directories around whilst still happily continuing with all of your other tasks in the Emacs operating system.

It has a feature where you can add any number of files from different locations into a wait list with `C-c C-a`, and then asynchronously rsync the whole wait list into a final destination directory with `C-c C-v`. This alone is worth the price of admission.

For example, here it is pointlessly rsyncing the Arduino 1.9 beta archive to another directory:

[![][3]][4]

When the process is complete, the window at the bottom will automatically be killed after 5 seconds. Here is a separate session right after the asynchronous unzipping of the above-mentioned Arduino archive:

[![][5]][6]

This package has further increased the utility of my dired configuration.

I just contributed [a pull request that enables tmtxt-dired-async to rsync to remote tramp-based directories][7], and I immediately used this new functionality to sort a few gigabytes of new photos onto the Linux server.

To add tmtxt-dired-async to your config, download [tmtxt-async-tasks.el][8] (a required library) and [tmtxt-dired-async.el][9] (check that my PR is in there if you plan to use this with tramp) into your `~/.emacs.d/` and add the following to your config:

```
;; no MELPA packages of this, so we have to do a simple check here
(setq dired-async-el (expand-file-name "~/.emacs.d/tmtxt-dired-async.el"))
(when (file-exists-p dired-async-el)
  (load (expand-file-name "~/.emacs.d/tmtxt-async-tasks.el"))
  (load dired-async-el)
  (define-key dired-mode-map (kbd "C-c C-r") 'tda/rsync)
  (define-key dired-mode-map (kbd "C-c C-z") 'tda/zip)
  (define-key dired-mode-map (kbd "C-c C-u") 'tda/unzip)

  (define-key dired-mode-map (kbd "C-c C-a") 'tda/rsync-multiple-mark-file)
  (define-key dired-mode-map (kbd "C-c C-e") 'tda/rsync-multiple-empty-list)
  (define-key dired-mode-map (kbd "C-c C-d") 'tda/rsync-multiple-remove-item)
  (define-key dired-mode-map (kbd "C-c C-v") 'tda/rsync-multiple)

  (define-key dired-mode-map (kbd "C-c C-s") 'tda/get-files-size)

  (define-key dired-mode-map (kbd "C-c C-q") 'tda/download-to-current-dir))
```

Enjoy!

--------------------------------------------------------------------------------

via: https://vxlabs.com/2018/03/30/asynchronous-rsync-with-emacs-dired-and-tramp/

作者:[cpbotha][a]
选题:[lujun9972][b]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://vxlabs.com/author/cpbotha/
[b]: https://github.com/lujun9972
[1]: https://truongtx.me/tmtxt-dired-async.html
[2]: https://truongtx.me/about.html
[3]: https://i0.wp.com/vxlabs.com/wp-content/uploads/2018/03/rsync-arduino-zip.png?resize=660%2C340&ssl=1
[4]: https://i0.wp.com/vxlabs.com/wp-content/uploads/2018/03/rsync-arduino-zip.png?ssl=1
[5]: https://i1.wp.com/vxlabs.com/wp-content/uploads/2018/03/progress-window-5s.png?resize=660%2C310&ssl=1
[6]: https://i1.wp.com/vxlabs.com/wp-content/uploads/2018/03/progress-window-5s.png?ssl=1
[7]: https://github.com/tmtxt/tmtxt-dired-async/pull/6
[8]: https://github.com/tmtxt/tmtxt-async-tasks
[9]: https://github.com/tmtxt/tmtxt-dired-async

@ -1,3 +1,12 @@

[#]: collector: (lujun9972)
[#]: translator: (pityonline)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get Started with Snap Packages in Linux)
[#]: via: (https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages-linux)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)

Get Started with Snap Packages in Linux
======

@ -139,7 +148,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages

作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
译者:[pityonline](https://github.com/pityonline)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,3 +1,4 @@

DaivdMax2006 is translating
100 Best Ubuntu Apps
======

@ -1,362 +0,0 @@

Building tiny container images
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)

When [Docker][1] exploded onto the scene a few years ago, it brought containers and container images to the masses. Although Linux containers existed before then, Docker made it easy to get started with a user-friendly command-line interface and an easy-to-understand way to build images using the Dockerfile format. But while it may be easy to jump in, there are still some nuances and tricks to building container images that are usable, even powerful, but still small in size.

### First pass: Clean up after yourself

Some of these examples involve the same kind of cleanup you would use with a traditional server, but more rigorously followed. Smaller image sizes are critical for quickly moving images around, and storing multiple copies of unnecessary data on disk is a waste of resources. Consequently, these techniques should be used more regularly than on a server with lots of dedicated storage.

An example of this kind of cleanup is removing cached files from an image to recover space. Consider the difference in size between a base image with [Nginx][2] installed by `dnf` with and without the metadata and yum cache cleaned up:

```
# Dockerfile with cache
FROM fedora:28
LABEL maintainer Chris Collins <collins.christopher@gmail.com>

RUN dnf install -y nginx

-----

# Dockerfile w/o cache
FROM fedora:28
LABEL maintainer Chris Collins <collins.christopher@gmail.com>

RUN dnf install -y nginx \
        && dnf clean all \
        && rm -rf /var/cache/yum

-----

[chris@krang] $ docker build -t cache -f Dockerfile .
[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}" | head -n 1
cache: 464 MB

[chris@krang] $ docker build -t no-cache -f Dockerfile-wo-cache .
[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}" | head -n 1
no-cache: 271 MB
```

That is a significant difference in size. The version with the `dnf` cache is almost twice the size of the image without the metadata and cache. Package manager cache, Ruby gem temp files, `nodejs` cache, even downloaded source tarballs are all perfect candidates for cleaning up.

### Layers—a potential gotcha

Unfortunately (or fortunately, as you’ll see later), based on the way layers work with containers, you cannot simply add a `RUN rm -rf /var/cache/yum` line to your Dockerfile and call it a day. Each instruction of a Dockerfile is stored in a layer, with changes between layers applied on top. So even if you were to do this:

```
RUN dnf install -y nginx
RUN dnf clean all
RUN rm -rf /var/cache/yum
```

...you’d still end up with three layers, one of which contains all the cache, and two intermediate layers that "remove" the cache from the image. But the cache is actually still there, just as when you mount a filesystem over the top of another one, the files are there—you just can’t see or access them.

You’ll notice that the example in the previous section chains the cache cleanup in the same Dockerfile instruction where the cache is generated:

```
RUN dnf install -y nginx \
        && dnf clean all \
        && rm -rf /var/cache/yum
```

This is a single instruction and ends up being a single layer within the image. You’ll lose a bit of the Docker (*ahem*) cache this way, making a rebuild of the image slightly longer, but the cached data will not end up in your final image. As a nice compromise, just chaining related commands (e.g., `yum install` and `yum clean all`, or downloading, extracting and removing a source tarball, etc.) can save a lot on your final image size while still allowing you to take advantage of the Docker cache for quicker development.

This layer "gotcha" is more subtle than it first appears, though. Because the image layers document the _changes_ to each layer, one upon another, it’s not just the existence of files that adds up, but any change to a file. For example, _even changing the mode_ of the file creates a copy of that file in the new layer.

For example, the output of `docker images` below shows information about two images. The first, `layer_test_1`, was created by adding a single 1GB file to a base CentOS image. The second image, `layer_test_2`, was created `FROM layer_test_1` and did nothing but change the mode of the 1GB file with `chmod u+x`.

```
layer_test_2  latest  e11b5e58e2fc  7 seconds ago  2.35 GB
layer_test_1  latest  6eca792a4ebe  2 minutes ago  1.27 GB
```

As you can see, the new image is more than 1GB larger than the first. Despite the fact that `layer_test_1` is only the first two layers of `layer_test_2`, there’s still an extra 1GB file floating around hidden inside the second image. This is true anytime you remove, move, or change any file during the image build process.

### Purpose-built images vs. flexible images

An anecdote: As my office heavily invested in [Ruby on Rails][3] applications, we began to embrace the use of containers. One of the first things we did was to create an official Ruby base image for all of our teams to use. For simplicity’s sake (and suffering under “this is the way we did it on our servers”), we used [rbenv][4] to install the latest four versions of Ruby into the image, allowing our developers to migrate all of their applications into containers using a single image. This resulted in a very large but flexible (we thought) image that covered all the bases of the various teams we were working with.

This turned out to be wasted work. The effort required to maintain separate, slightly modified versions of a particular image was easy to automate, and selecting a specific image with a specific version actually helped to identify applications approaching end-of-life before a breaking change was introduced, wreaking havoc downstream. It also wasted resources: When we started to split out the different versions of Ruby, we ended up with multiple images that shared a single base and took up very little extra space if they coexisted on a server, but were considerably smaller to ship around than a giant image with multiple versions installed.

That is not to say building flexible images is not helpful, but in this case, creating purpose-built images from a common base ended up saving both storage space and maintenance time, and each team could modify their setup however they needed while maintaining the benefit of the common base image.

### Start without the cruft: Add what you need to a blank image

As friendly and easy-to-use as the _Dockerfile_ is, there are tools available that offer the flexibility to create very small Docker-compatible container images without the cruft of a full operating system—even those as small as the standard Docker base images.

[I’ve written about Buildah before][5], and I’ll mention it again because it is flexible enough to create an image from scratch using tools from your host to install packaged software and manipulate the image. Those tools then never need to be included in the image itself.

Buildah replaces the `docker build` command. With it, you can mount the filesystem of your container image to your host machine and interact with it using tools from the host.

Let’s try Buildah with the Nginx example from above (ignoring caches for now):

```
#!/usr/bin/env bash
set -o errexit

# Create a container
container=$(buildah from scratch)

# Mount the container filesystem
mountpoint=$(buildah mount $container)

# Install a basic filesystem and minimal set of packages, and nginx
dnf install --installroot $mountpoint --releasever 28 glibc-minimal-langpack nginx --setopt install_weak_deps=false -y

# Save the container to an image
buildah commit --format docker $container nginx

# Cleanup
buildah unmount $container

# Push the image to the Docker daemon’s storage
buildah push nginx:latest docker-daemon:nginx:latest
```

You’ll notice we’re no longer using a Dockerfile to build the image, but a simple Bash script, and we’re building it from a scratch (or blank) image. The Bash script mounts the container’s root filesystem to a mount point on the host, and then uses the host’s `dnf` command to install the packages. This way the package manager doesn’t even have to exist inside the container.

Without extra cruft—all the extra stuff in the base image, like `dnf`, for example—the image weighs in at only 304 MB, more than 100 MB smaller than the Nginx image built with a Dockerfile above.

```
[chris@krang] $ docker images |grep nginx
docker.io/nginx  buildah  2505d3597457  4 minutes ago  304 MB
```

_Note: The image name has `docker.io` appended to it due to the way the image is pushed into the Docker daemon’s namespace, but it is still the image built locally with the build script above._

That 100 MB is already a huge savings when you consider a base image is already around 300 MB on its own. Installing Nginx with a package manager brings in a ton of dependencies, too. For something compiled from source using tools from the host, the savings can be even greater because you can choose the exact dependencies and not pull in any extra files you don’t need.

If you’d like to try this route, [Tom Sweeney][6] wrote a much more in-depth article, [Creating small containers with Buildah][7], which you should check out.

Using Buildah to build images without a full operating system and included build tools can enable much smaller images than you would otherwise be able to create. For some types of images, we can take this approach even further and create images with _only_ the application itself included.

### Create images with only statically linked binaries

Following the same philosophy that leads us to ditch administrative and build tools inside images, we can go a step further. If we specialize enough and abandon the idea of troubleshooting inside of production containers, do we need Bash? Do we need the [GNU core utilities][8]? Do we _really_ need the basic Linux filesystem? You can do this with any compiled language that allows you to create binaries with [statically linked libraries][9]—where all the libraries and functions needed by the program are copied into and stored within the binary itself.

This is a relatively popular way of doing things within the [Golang][10] community, so we’ll use a Go application to demonstrate.

The Dockerfile below takes a small Go Hello-World application and compiles it in an image `FROM golang:1.8`:

```
FROM golang:1.8

ENV GOOS=linux
ENV appdir=/go/src/gohelloworld

COPY ./ /go/src/goHelloWorld
WORKDIR /go/src/goHelloWorld

RUN go get
RUN go build -o /goHelloWorld -a

CMD ["/goHelloWorld"]
```

The resulting image, containing the binary, the source code, and the base image layer, comes in at 716 MB. The only thing we actually need for our application is the compiled binary, however. Everything else is unused cruft that gets shipped around with our image.

If we disable `cgo` with `CGO_ENABLED=0` when we compile, we can create a binary that doesn’t wrap C libraries for some of its functions:

```
GOOS=linux CGO_ENABLED=0 go build -a goHelloWorld.go
```

The resulting binary can be added to an empty, or "scratch" image:

```
FROM scratch
COPY goHelloWorld /
CMD ["/goHelloWorld"]
```

Let’s compare the difference in image size between the two:

```
[ chris@krang ] $ docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
goHello      scratch   a5881650d6e9   13 seconds ago   1.55 MB
goHello      builder   980290a100db   14 seconds ago   716 MB
```

That’s a huge difference. The image built from `golang:1.8` with the `goHelloWorld` binary in it (tagged "builder" above) is _460_ times larger than the scratch image with just the binary. The entirety of the scratch image with the binary is only 1.55 MB. That means we’d be shipping around 713 MB of unnecessary data if we used the builder image.

As mentioned above, this method of creating small images is used often in the Golang community, and there is no shortage of blog posts on the subject. [Kelsey Hightower][11] wrote [an article on the subject][12] that goes into more detail, including dealing with dependencies other than just C libraries.

### Consider squashing, if it works for you

There’s an alternative to chaining all the commands into layers in an attempt to save space: Squashing your image. When you squash an image, you’re really exporting it, removing all the intermediate layers, and saving a single layer with the current state of the image. This has the advantage of reducing that image to a much smaller size.

Squashing layers used to require some creative workarounds to flatten an image—exporting the contents of a container and re-importing it as a single layer image, or using tools like `docker-squash`. Starting in version 1.13, Docker introduced a handy flag, `--squash`, to accomplish the same thing during the build process:

```
FROM fedora:28
LABEL maintainer Chris Collins <collins.christopher@gmail.com>

RUN dnf install -y nginx
RUN dnf clean all
RUN rm -rf /var/cache/yum

[chris@krang] $ docker build -t squash -f Dockerfile-squash --squash .
[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}" | head -n 1
squash: 271 MB
```

Using `docker build --squash` with this multi-layer Dockerfile, we end up with another 271MB image, as we did with the chained instruction example. This works great for this use case, but there’s a potential gotcha.

“What? ANOTHER gotcha?”

Well, sort of—it’s the same issue as before, causing problems in another way.

### Going too far: Too squashed, too small, too specialized

Images can share layers. The base may be _x_ megabytes in size, but it only needs to be pulled/stored once and each image can use it. The effective size of all the images sharing layers is the base layers plus the diff of each specific change on top of that. In this way, thousands of images may take up only a small amount more than a single image.

This is a drawback with squashing or specializing too much. When you squash an image into a single layer, you lose any opportunity to share layers with other images. Each image ends up being as large as the total size of its single layer. This might work well for you if you use only a few images and run many containers from them, but if you have many diverse images, it could end up costing you space in the long run.

Revisiting the Nginx squash example, we can see it’s not a big deal for this case. We end up with Fedora, Nginx installed, no cache, and squashing that is fine. Nginx by itself is not incredibly useful, though. You generally need customizations to do anything interesting—e.g., configuration files, other software packages, maybe some application code. Each of these would end up being more instructions in the Dockerfile.

With a traditional image build, you would have a single base image layer with Fedora, a second layer with Nginx installed (with or without cache), and then each customization would be another layer. Other images with Fedora and Nginx could share these layers.

Need an image:

```
[ App 1 Layer ( 5 MB) ] [ App 2 Layer (6 MB) ]
[ Nginx Layer ( 21 MB) ] ------------------^
[ Fedora Layer (249 MB) ]
```

But if you squash the image, then even the Fedora base layer is squashed. Any squashed image based on Fedora has to ship around its own Fedora content, adding another 249 MB for _each image!_

```
[ Fedora + Nginx + App 1 (275 MB)] [ Fedora + Nginx + App 2 (276 MB) ]
```

This also becomes a problem if you build lots of highly specialized, super-tiny images.

As with everything in life, moderation is key. Again, thanks to how layers work, you will find diminishing returns as your container images become smaller and more specialized and can no longer share base layers with other related images.

Images with small customizations can share base layers. As explained above, the base may be _x_ megabytes in size, but it only needs to be pulled/stored once and each image can use it. The effective size of all the images is the base layers plus the diff of each specific change on top of that. In this way, thousands of images may take up only a small amount more than a single image.

```
[ specific app ] [ specific app 2 ]
[ customizations ]--------------^
[ base layer ]
```

If you go too far with your image shrinking and you have too many variations or specializations, you can end up with many images, none of which share base layers and all of which take up their own space on disk.

```
[ specific app 1 ] [ specific app 2 ] [ specific app 3 ]
```

### Conclusion

There are a variety of different ways to reduce the amount of storage space and bandwidth you spend working with container images, but the most effective way is to reduce the size of the images themselves. Whether you simply clean up your caches (avoiding leaving them orphaned in intermediate layers), squash all your layers into one, or add only static binaries in an empty image, it’s worth spending some time looking at where bloat might exist in your container images and slimming them down to an efficient size.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/building-container-images

作者:[Chris Collins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/clcollins
[1]:https://www.docker.com/
[2]:https://www.nginx.com/
[3]:https://rubyonrails.org/
[4]:https://github.com/rbenv/rbenv
[5]:https://opensource.com/article/18/6/getting-started-buildah
[6]:https://twitter.com/TSweeneyRedHat
[7]:https://opensource.com/article/18/5/containers-buildah
[8]:https://www.gnu.org/software/coreutils/coreutils.html
[9]:https://en.wikipedia.org/wiki/Static_library
[10]:https://golang.org/
[11]:https://twitter.com/kelseyhightower
[12]:https://medium.com/@kelseyhightower/optimizing-docker-images-for-static-binaries-b5696e26eb07

@ -1,202 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Be productive with Org-mode)
[#]: via: (https://www.badykov.com/emacs/2018/08/26/be-productive-with-org-mode/)
[#]: author: (Ayrat Badykov https://www.badykov.com)

Be productive with Org-mode
======

![org-mode-collage][1]

### Introduction

In my [previous post about emacs][2] I mentioned [Org-mode][3], a note manager and organizer. In this post, I’ll describe my day-to-day Org-mode use cases.

### Notes and to-do lists

First and foremost, Org-mode is a tool for managing notes and to-do lists, and all work in Org-mode is centered around writing notes in plain text files. I manage several kinds of notes using Org-mode.

#### General notes

The most basic Org-mode use case is writing simple notes about things that you want to remember. For example, here are my notes about things I’m learning right now:

```
* Learn
** Emacs LISP
*** Plan

- [ ] Read best practices
- [ ] Finish reading Emacs Manual
- [ ] Finish Exercism Exercises
- [ ] Write a couple of simple plugins
  - Notification plugin

*** Resources

https://www.gnu.org/software/emacs/manual/html_node/elisp/index.html
http://exercism.io/languages/elisp/about
[[http://batsov.com/articles/2011/11/30/the-ultimate-collection-of-emacs-resources/][The Ultimate Collection of Emacs Resources]]

** Rust gamedev
*** Study [[https://github.com/SergiusIW/gate][gate]] 2d game engine with web assembly support
*** [[ggez][https://github.com/ggez/ggez]]
*** [[https://www.amethyst.rs/blog/release-0-8/][Amethyst 0.8 Relesed]]

** Upgrade Elixir/Erlang Skills
*** Read Erlang in Anger
```

How it looks using [org-bullets][4]:

![notes][5]

In this simple example you can see some of the Org-mode features:

* nested notes
* links
* lists with checkboxes

#### Project todos

Often when I’m working on some task I notice things that I can improve or fix. Instead of leaving TODO comments in source code files (bad smell) I use [org-projectile][6], which allows me to write TODO items with a single shortcut in a separate file. Here’s an example of this file:

```
* [[elisp:(org-projectile-open-project%20"mana")][mana]] [3/9]
  :PROPERTIES:
  :CATEGORY: mana
  :END:
** DONE [[file:~/Development/mana/apps/blockchain/lib/blockchain/contract/create_contract.ex::insufficient_gas_before_homestead%20=][fix this check using evm.configuration]]
   CLOSED: [2018-08-08 Ср 09:14]
   [[https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md][eip2]]:
   If contract creation does not have enough gas to pay for the final gas fee for
   adding the contract code to the state, the contract creation fails (i.e. goes out-of-gas)
   rather than leaving an empty contract.
** DONE Upgrade Elixir to 1.7.
   CLOSED: [2018-08-08 Ср 09:14]
** TODO [#A] Difficulty tests
** TODO [#C] Upgrage to OTP 21
** DONE [#A] EIP150
   CLOSED: [2018-08-14 Вт 21:25]
*** DONE operation cost changes
    CLOSED: [2018-08-08 Ср 20:31]
*** DONE 1/64th for a call and create
    CLOSED: [2018-08-14 Вт 21:25]
** TODO [#C] Refactor interfaces
** TODO [#B] Caching for storage during execution
** TODO [#B] Removing old merkle trees
** TODO do not calculate cost twice
* [[elisp:(org-projectile-open-project%20".emacs.d")][.emacs.d]] [1/3]
  :PROPERTIES:
  :CATEGORY: .emacs.d
  :END:
** TODO fix flycheck issues (emacs config)
** TODO use-package for fetching dependencies
** DONE clean configuration
   CLOSED: [2018-08-26 Вс 11:48]
```

How it looks in Emacs:

![project-todos][7]

In this example you can see more Org mode features:

* todo items have states - `TODO`, `DONE`. You can define your own states (`WAITING` etc)
* closed items have a `CLOSED` timestamp
* some items have priorities - A, B, C.
* links can be internal (`[[file:~/...]]`)

#### Capture templates

As described in Org-mode’s documentation, capture lets you quickly store notes with little interruption of your workflow.

I configured several capture templates which help me to quickly create notes about things that I want to remember.

```
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "~/Dropbox/org/todo.org" "Todo soon")
         "* TODO %? \n %^t")
        ("i" "Idea" entry (file+headline "~/Dropbox/org/ideas.org" "Ideas")
         "* %? \n %U")
        ("e" "Tweak" entry (file+headline "~/Dropbox/org/tweaks.org" "Tweaks")
         "* %? \n %U")
        ("l" "Learn" entry (file+headline "~/Dropbox/org/learn.org" "Learn")
         "* %? \n")
        ("w" "Work note" entry (file+headline "~/Dropbox/org/work.org" "Work")
         "* %? \n")
        ("m" "Check movie" entry (file+headline "~/Dropbox/org/check.org" "Movies")
         "* %? %^g")
        ("n" "Check book" entry (file+headline "~/Dropbox/org/check.org" "Books")
         "* %^{book name} by %^{author} %^g")))
```

For a book note I should add its name and its author; for a movie note I should add tags; etc.

### Planning

Another great feature of Org-mode is that you can use it as a day planner. Let’s see an example of one of my days:

![schedule][8]

I didn’t give a lot of thought to this example; it’s my real file for today. It doesn’t look like much, but it helps you spend your time on things that are important to you and fight procrastination.

#### Habits

From Org mode’s documentation, Org has the ability to track the consistency of a special category of TODOs, called “habits”. I use this feature along with day planning when I want to create new habits:

![habits][9]

As you can see, currently I’m trying to wake up early every day and work out every other day. It has also helped me start reading books every day.

#### Agenda views

Last but not least, I use agenda views. Todo items can be scattered throughout different files (in my case, the daily plan and habits are in separate files); agenda views give an overview of all todo items:

![agenda][10]

### More Org mode features

+ Smartphone apps (Android, iOS)
+ Exporting Org mode files into different formats (html, markdown, pdf, latex etc)
+ Tracking Finances with ledger

### Conclusion

In this post, I described a small subset of Org-mode’s extensive functionality that helps me be productive every day, spending time on things that are important to me.

--------------------------------------------------------------------------------

via: https://www.badykov.com/emacs/2018/08/26/be-productive-with-org-mode/

作者:[Ayrat Badykov][a]
选题:[lujun9972][b]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.badykov.com
[b]: https://github.com/lujun9972
[1]: https://i.imgur.com/hgqCyen.jpg
[2]: http://www.badykov.com/emacs/2018/07/31/why-emacs-is-a-great-editor/
[3]: https://orgmode.org/
[4]: https://github.com/sabof/org-bullets
[5]: https://i.imgur.com/lGi60Uw.png
[6]: https://github.com/IvanMalison/org-projectile
[7]: https://i.imgur.com/Hbu8ilX.png
[8]: https://i.imgur.com/z5HpuB0.png
[9]: https://i.imgur.com/YJIp3d0.png
[10]: https://i.imgur.com/CKX9BL9.png

@ -0,0 +1,128 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Host your own cloud with Raspberry Pi NAS)
[#]: via: (https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi?extIdCarryOver=true)
[#]: author: (Manuel Dewald https://opensource.com/users/ntlx)

Host your own cloud with Raspberry Pi NAS
======

Protect and secure your data with a self-hosted cloud powered by your Raspberry Pi.

![Tree clouds][1]

In the first two parts of this series, we discussed the [hardware and software fundamentals][2] for building network-attached storage (NAS) on a Raspberry Pi. We also put a proper [backup strategy][3] in place to secure the data on the NAS. In this third part, we will talk about a convenient way to store, access, and share your data with [Nextcloud][4].

![Raspberry Pi NAS infrastructure with Nextcloud][6]

### Prerequisites

To use Nextcloud conveniently, you have to meet a few prerequisites. First, you should have a domain you can use for the Nextcloud instance. For the sake of simplicity in this how-to, we'll use **nextcloud.pi-nas.com**. This domain should be directed to your Raspberry Pi. If you want to run it on your home network, you probably need to set up dynamic DNS for this domain and enable port forwarding of ports 80 and 443 (if you go for an SSL setup, which is highly recommended; otherwise port 80 should be sufficient) from your router to the Raspberry Pi.

You can automate dynamic DNS updates from the Raspberry Pi using [ddclient][7].

### Install Nextcloud

To run Nextcloud on your Raspberry Pi (using the setup described in the [first part][2] of this series), install the following packages as dependencies to Nextcloud using **apt**.

```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```

The next step is to download Nextcloud. [Get the latest release's URL][8] and copy it so you can download it via **wget** on the Raspberry Pi. In the first article in this series, we attached two disk drives to the Raspberry Pi, one for current data and one for backups. Install Nextcloud on the data drive to make sure data is backed up automatically every night.

```
sudo mkdir -p /nas/data/nextcloud
sudo chown pi /nas/data/nextcloud
cd /nas/data/
wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
unzip nextcloud.zip
sudo ln -s /nas/data/nextcloud /var/www/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```

When I wrote this, the latest release (as you see in the code above) was 14. Nextcloud is under heavy development, so you may find a newer version when installing your copy of Nextcloud onto your Raspberry Pi.

### Database setup

When we installed Nextcloud above, we also installed MySQL as a dependency to use it for all the metadata Nextcloud generates (for example, the users you create to access Nextcloud). If you would rather use a Postgres database, you'll need to adjust some of the modules installed above.

To access the MySQL database as root, start the MySQL client as root:

```
sudo mysql
```

This will open a SQL prompt where you can insert the following commands—substituting the placeholder with the password you want to use for the database connection—to create a database for Nextcloud.

```
CREATE USER nextcloud IDENTIFIED BY '<insert-password-here>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```

You can exit the SQL prompt by pressing **Ctrl+D** or entering **quit**.

### Web server configuration

Nextcloud can be configured to run using Nginx or other web servers, but for this how-to, I decided to go with the Apache web server on my Raspberry Pi NAS. (Feel free to try out another alternative and let me know if you think it performs better.)

To set it up, configure a virtual host for the domain you created for your Nextcloud instance **nextcloud.pi-nas.com**. To create a virtual host, create the file **/etc/apache2/sites-available/001-nextcloud.conf** with content similar to the following. Make sure to adjust the ServerName to your domain and paths, if you didn't use the ones suggested earlier in this series.

```
<VirtualHost *:80>
    ServerName nextcloud.pi-nas.com
    ServerAdmin admin@pi-nas.com
    DocumentRoot /var/www/nextcloud/

    <Directory /var/www/nextcloud/>
        AllowOverride None
    </Directory>
</VirtualHost>
```

To enable this virtual host, run the following two commands.

```
a2ensite 001-nextcloud
sudo systemctl reload apache2
```

With this configuration, you should now be able to reach the web server with your domain via the web browser. To secure your data, I recommend using HTTPS instead of HTTP to access Nextcloud. A very easy (and free) way is to obtain a [Let's Encrypt][10] certificate with [Certbot][11] and have a cron job automatically refresh it. That way you don't have to mess around with self-signed or expiring certificates. Follow Certbot's simple how-to [instructions to install it on your Raspberry Pi][12]. During Certbot configuration, you can even decide to automatically forward HTTP to HTTPS, so visitors to **<http://nextcloud.pi-nas.com>** will be redirected to **<https://nextcloud.pi-nas.com>**. Please note, if your Raspberry Pi is running behind your home router, you must have port forwarding enabled for ports 443 and 80 to obtain Let's Encrypt certificates.

### Configure Nextcloud

The final step is to visit your fresh Nextcloud instance in a web browser to finish the configuration. To do so, open your domain in a browser and insert the database details from above. You can also set up your first Nextcloud user here, the one you can use for admin tasks. By default, the data directory should be inside the Nextcloud folder, so you don't need to change anything for the backup mechanisms from the [second part of this series][3] to pick up the data stored by users in Nextcloud.

Afterward, you will be directed to your Nextcloud and can log in with the admin user you created previously. To see a list of recommended steps to ensure a performant and secure Nextcloud installation, visit the Basic Settings tab in the Settings page (in our example: <https://nextcloud.pi-nas.com/settings/admin>) and see the Security & Setup Warnings section.

Congratulations! You've set up your own Nextcloud powered by a Raspberry Pi. Go ahead and [download a Nextcloud client][13] from the Nextcloud page to sync data with your client devices and access it offline. Mobile clients even provide features like instant upload of pictures you take, so they'll automatically sync to your desktop PC and you won't have to wonder how to get them there.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi?extIdCarryOver=true

作者:[Manuel Dewald][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ntlx
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP (Tree clouds)
[2]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi
[3]: https://opensource.com/article/18/8/automate-backups-raspberry-pi
[4]: https://nextcloud.com/
[5]: /file/409336
[6]: https://opensource.com/sites/default/files/uploads/nas_part3.png (Raspberry Pi NAS infrastructure with Nextcloud)
[7]: https://sourceforge.net/p/ddclient/wiki/Home/
[8]: https://nextcloud.com/install/#instructions-server
[9]: mailto:admin@pi-nas.com
[10]: https://letsencrypt.org/
[11]: https://certbot.eff.org/
[12]: https://certbot.eff.org/lets-encrypt/debianother-apache
[13]: https://nextcloud.com/install/#install-clients

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (LuuMing)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Auk7F7)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (Arch-Wiki-Man – A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline)

@ -1,58 +0,0 @@

[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with Cypht, an open source email client)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-cypht-email)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with Cypht, an open source email client
|
||||
======
|
||||
Integrate your email and news feeds into one view with Cypht, the fourth in our series on 19 open source tools that will make you more productive in 2019.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)
|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the fourth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### Cypht
|
||||
|
||||
We spend a lot of time dealing with email, and effectively [managing your email][1] can make a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn't really changed in the last 10 years or so. Even the [console standard-bearers][2] like Mutt and Cone haven't changed much in the last decade.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-1.png)
|
||||
|
||||
[Cypht][3] is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an "Everything" screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-2.png)
|
||||
|
||||
It uses a simplified version of HTML messages to display mail or you can set it to view a plain-text version. Since Cypht doesn't load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. You'll get plain-text views with most rich-text mail—meaning lots of links and hard to read. I don't fault Cypht, since this is really the email senders' doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cypht-3.png)
|
||||
|
||||
Users can use a preconfigured mail server and add any additional servers they use. Cypht's customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (and make your own). You have to remember to click the "Save" button on the left navigation bar, though, or your custom settings will disappear when the session ends. This does make it easy to experiment: if you need to reset things, simply log out without saving and the previous setup will be back when you log in again.
|
||||
|
||||
![](https://opensource.com/sites/default/files/pictures/cypht-4.png)
|
||||
|
||||
[Installing Cypht][4] locally is very easy. While it is not in a container or similar technology, the setup instructions were very clear and easy to follow and didn't require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should be about the same.
|
||||
|
||||
In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-cypht-email
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/17/7/email-alternatives-thunderbird
|
||||
[2]: https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
|
||||
[3]: https://cypht.org/
|
||||
[4]: https://cypht.org/install.html
|
@ -1,58 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with CryptPad, an open source collaborative document editor)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-cryptpad)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with CryptPad, an open source collaborative document editor
|
||||
======
|
||||
Securely share your notes, documents, kanban boards, and more with CryptPad, the fifth in our series on open source tools that will make you more productive in 2019.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the fifth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### CryptPad
|
||||
|
||||
We already talked about [Joplin][1], which is good for keeping your own notes but—as you may have noticed—doesn't have any sharing or collaboration features.
|
||||
|
||||
[CryptPad][2] is a secure, shareable note-taking app and document editor that allows for secure, collaborative editing. Unlike Joplin, it is a NodeJS app, which means you can run it on your desktop or a server elsewhere and access it with any modern web browser. Out of the box, it supports rich text, Markdown, polls, whiteboards, kanban, and presentations.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-1.png)
|
||||
|
||||
The different document types are robust and fully featured. The rich text editor covers all the bases you'd expect from a good editor and allows you to export files to HTML. The Markdown editor is on par with Joplin, and the kanban board, though not as full-featured as [Wekan][3], is really well done. The rest of the supported document types and editors are also very polished and have the features you'd expect from similar apps, although polls feel a little clunky.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-2.png)
|
||||
|
||||
CryptPad's real power, though, comes in its sharing and collaboration features. Sharing a document is as simple as getting the sharable URL from the "share" option, and CryptPad supports embedding documents in iFrame tags on other websites. Documents can be shared in Edit or View mode with a password and with links that expire. The built-in chat allows editors to talk to each other (note that people with View access can also see the chat but can't comment).
|
||||
|
||||
![](https://opensource.com/sites/default/files/pictures/cryptpad-3.png)
|
||||
|
||||
All files are stored encrypted with the user's password. Server administrators can't read the documents, which also means if you forget or lose your password, the files are unrecoverable. So make sure you keep the password in a secure place, like a [password vault][4].
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cryptpad-4.png)
|
||||
|
||||
When it's run locally, CryptPad is a robust app for creating and editing documents. When run on a server, it becomes an excellent collaboration platform for multi-user document creation and editing. Installation took less than five minutes on my laptop, and it just worked out of the box. The developers also include instructions for running CryptPad in Docker, and there is a community-maintained Ansible role for ease of deployment. CryptPad does not support any third-party authentication methods, so users must create their own accounts. CryptPad also has a community-supported hosted version if you don't want to run your own server.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-cryptpad
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/1/productivity-tool-joplin
|
||||
[2]: https://cryptpad.fr/index.html
|
||||
[3]: https://opensource.com/article/19/1/productivity-tool-wekan
|
||||
[4]: https://opensource.com/article/18/4/3-password-managers-linux-command-line
|
@ -1,127 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (ODrive (Open Drive) – Google Drive GUI Client For Linux)
|
||||
[#]: via: (https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
ODrive (Open Drive) – Google Drive GUI Client For Linux
|
||||
======
|
||||
|
||||
We have discussed this many times before. However, I will give a small introduction to it here.
|
||||
|
||||
As of now, there is no official Google Drive client for Linux, so we need to use unofficial clients.
|
||||
|
||||
There are many applications available in Linux for Google Drive integration.
|
||||
|
||||
Each application comes with its own set of features.
|
||||
|
||||
We have written a few articles about this on our website in the past.
|
||||
|
||||
Those are **[DriveSync][1]** , **[Google Drive Ocamlfuse Client][2]** and **[Mount Google Drive in Linux Using Nautilus File Manager][3]**.
|
||||
|
||||
Today we are going to discuss the same topic again, and the utility's name is ODrive.
|
||||
|
||||
### What’s ODrive?
|
||||
|
||||
ODrive stands for Open Drive. It's a GUI client for Google Drive written with the Electron framework.
|
||||
|
||||
Its simple GUI allows users to integrate Google Drive in a few steps.
|
||||
|
||||
### How To Install & Setup ODrive on Linux?
|
||||
|
||||
Since the developer offers an AppImage package, there is no difficulty installing ODrive on Linux.
|
||||
|
||||
Simply download the latest ODrive AppImage package from the developer's GitHub page using the **wget command**.
|
||||
|
||||
```
|
||||
$ wget https://github.com/liberodark/ODrive/releases/download/0.1.3/odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
You have to set the executable permission on the ODrive AppImage file.
|
||||
|
||||
```
|
||||
$ chmod +x odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
Simply run the ODrive AppImage file as shown below to launch the ODrive GUI for further setup.
|
||||
|
||||
```
|
||||
$ ./odrive-0.1.3-x86_64.AppImage
|
||||
```
|
||||
|
||||
You should get a window like the one below when you run the above command. Just hit the **`Next`** button to continue the setup.
|
||||
![][5]
|
||||
|
||||
Click the **`Connect`** link to add a Google Drive account.
|
||||
![][6]
|
||||
|
||||
Enter the email ID of the Google account you want to set up.
|
||||
![][7]
|
||||
|
||||
Enter the password for the given email ID.
|
||||
![][8]
|
||||
|
||||
Allow ODrive (Open Drive) to access your Google account.
|
||||
![][9]
|
||||
|
||||
By default, it will choose a folder location for the sync. You can change it if you want to use a specific one.
|
||||
![][10]
|
||||
|
||||
Finally, hit the **`Synchronize`** button to start downloading the files from Google Drive to your local system.
|
||||
![][11]
|
||||
|
||||
Synchronizing is in progress.
|
||||
![][12]
|
||||
|
||||
Once synchronizing is completed, it shows you that all the files have been downloaded.
|
||||
![][13]
|
||||
|
||||
I can see that all the files were downloaded to the chosen directory.
|
||||
![][14]
|
||||
|
||||
If you want to sync new files from the local system to Google Drive, just start `ODrive` from the application menu. It won't actually open a window, but it will run in the background, which you can confirm with the ps command.
|
||||
|
||||
```
|
||||
$ ps -df | grep odrive
|
||||
```
|
||||
|
||||
![][15]
|
||||
|
||||
It will automatically sync once you add a new file to the Google Drive folder. I checked this through the notification menu: one file was synced to Google Drive.
|
||||
![][16]
|
||||
|
||||
The GUI does not load again after syncing, and I'm not sure whether this is intended. I will check with the developer and update this article based on his input.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/
|
||||
[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/
|
||||
[3]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/
|
||||
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-1.png
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-2.png
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-3.png
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-4.png
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-5.png
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-6.png
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-7.png
|
||||
[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-8a.png
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9.png
|
||||
[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-11.png
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9b.png
|
||||
[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-10.png
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (MZqk)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,61 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Get started with Freeplane, an open source mind mapping application)
|
||||
[#]: via: (https://opensource.com/article/19/1/productivity-tool-freeplane)
|
||||
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))
|
||||
|
||||
Get started with Freeplane, an open source mind mapping application
|
||||
======
|
||||
|
||||
Map your brainstorming sessions with Freeplane, the 13th in our series on open source tools that will make you more productive in 2019.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
|
||||
|
||||
There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.
|
||||
|
||||
Here's the 13th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.
|
||||
|
||||
### Freeplane
|
||||
|
||||
[Mind maps][1] are one of the more valuable tools I've used for quickly brainstorming ideas and capturing data. Mind mapping is a versatile process that helps show how things are related and can be used to quickly organize interrelated information. From a planning perspective, mind mapping allows you to quickly perform a brain dump around a single concept, idea, or technology.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-1.png)
|
||||
|
||||
[Freeplane][2] is a desktop application that makes it easy to create, view, edit, and share mind maps. It is a redesign of [FreeMind][3], which was the go-to mind-mapping application for quite some time.
|
||||
|
||||
Installing Freeplane is pretty easy. It is a [Java][4] application and distributed as a ZIP file with scripts to start the application on Linux, Windows, and MacOS. At its first startup, its main window includes an example mind map with links to documentation about all the different things you can do with Freeplane.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-2.png)
|
||||
|
||||
You have a choice of templates when you create a new mind map. The standard template (likely at the bottom of the list) works for most cases. Just start typing the idea or phrase you want to start with, and your text will replace the center text. Pressing the Insert key will add a branch (or node) off the center with a blank field where you can fill in something associated with the idea. Pressing Insert again will add another node connected to the first one. Pressing Enter on a node will add a node parallel to that one.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-3.png)
|
||||
|
||||
As you add nodes, you may come up with another thought or idea related to the main topic. Using either the mouse or the Arrow keys, go back to the center of the map and press Insert. A new node will be created off the main topic.
|
||||
|
||||
If you want to go beyond Freeplane's base functionality, right-click on any of the nodes to bring up a Properties menu for that node. The Tool pane (activated under the View–>Controls menu) contains customization options galore, including line shape and thickness, border shapes, colors, and much, much more. The Calendar tab allows you to insert dates into the nodes and set reminders for when nodes are due. (Note that reminders work only when Freeplane is running.) Mind maps can be exported to several formats, including common images, XML, Microsoft Project, Markdown, and OPML.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/freeplane-4.png)
|
||||
|
||||
Freeplane gives you all the tools you'll need to create vibrant and useful mind maps, getting your ideas out of your head and into a place where you can take action on them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/1/productivity-tool-freeplane
|
||||
|
||||
作者:[Kevin Sonney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksonney (Kevin Sonney)
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Mind_map
|
||||
[2]: https://www.freeplane.org/wiki/index.php/Home
|
||||
[3]: https://sourceforge.net/projects/freemind/
|
||||
[4]: https://java.com
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (ustblixin)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (sndnvaps)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,192 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( pityonline )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Remove/Delete The Empty Lines In A File In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Remove/Delete The Empty Lines In A File In Linux
|
||||
======
|
||||
|
||||
Sometimes you may want to remove or delete the empty lines in a file in Linux.
|
||||
|
||||
If so, you can use one of the methods below to achieve it.
|
||||
|
||||
It can be done in many ways, but I have listed the simple methods in this article.
|
||||
|
||||
You may be aware that the grep, awk, and sed commands are specialized for textual data manipulation.
|
||||
|
||||
Navigate to the following URLs if you would like to read more about these kinds of topics: multiple ways of **[creating a file of a specific size in Linux][1]**, multiple ways of **[creating a file in Linux][2]**, and **[removing a matching string from a file in Linux][3]**.
|
||||
|
||||
These fall into the advanced commands category because they are used in most shell scripts to do the required work.
|
||||
|
||||
It can be done using the following commands (five methods in total, since cat and tr are used together).
|
||||
|
||||
* **`sed Command:`** Stream editor for filtering and transforming text.
|
||||
* **`grep Command:`** Print lines that match patterns.
|
||||
  * **`cat Command:`** It concatenates files and prints them on the standard output.
|
||||
* **`tr Command:`** Translate or delete characters.
|
||||
* **`awk Command:`** The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation.
|
||||
* **`perl Command:`** Perl is a programming language specially designed for text editing.
|
||||
|
||||
|
||||
|
||||
To test this, I have already created a file called `2daygeek.txt` with some text and empty lines. The details are below.
|
||||
|
||||
```
|
||||
$ cat 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
|
||||
It's FIVE years old blog.
|
||||
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
|
||||
He got two GIRL babes.
|
||||
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
Now everything is ready, and I'm going to test this in multiple ways.
|
||||
|
||||
### How To Remove/Delete The Empty Lines In A File In Linux Using sed Command?
|
||||
|
||||
Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).
|
||||
|
||||
```
|
||||
$ sed '/^$/d' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
The details are as follows:
|
||||
|
||||
* **`sed:`** It’s a command
|
||||
* **`//:`** It holds the searching string.
|
||||
* **`^:`** Matches start of string.
|
||||
* **`$:`** Matches end of string.
|
||||
  * **`d:`** Delete the lines that match the pattern.
|
||||
* **`2daygeek.txt:`** Source file name.
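The command above only prints the result. If you want to modify the file itself, GNU sed can edit it in place; this is a small extension of the example above, and the optional `.bak` suffix keeps a backup copy, which is safer when experimenting.

```
# -i edits the file in place; the .bak suffix stores a backup as 2daygeek.txt.bak
$ sed -i.bak '/^$/d' 2daygeek.txt
```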
|
||||
|
||||
|
||||
|
||||
### How To Remove/Delete The Empty Lines In A File In Linux Using grep Command?
|
||||
|
||||
grep searches for PATTERNS in each FILE. PATTERNS is one or more patterns separated by newline characters, and grep prints each line that matches a pattern.
|
||||
|
||||
```
|
||||
$ grep . 2daygeek.txt
|
||||
or
|
||||
$ grep -Ev "^$" 2daygeek.txt
|
||||
or
|
||||
$ grep -v -e '^$' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
The details are as follows:
|
||||
|
||||
* **`grep:`** It’s a command
|
||||
  * **`.:`** Matches any single character.
|
||||
* **`^:`** matches start of string.
|
||||
* **`$:`** matches end of string.
|
||||
  * **`-E:`** Use extended regular expressions for pattern matching.
|
||||
  * **`-e:`** Specify the regular expression pattern to match.
|
||||
  * **`-v:`** Select the non-matching lines from the file.
|
||||
* **`2daygeek.txt:`** Source file name.
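If your file also contains lines that hold only spaces or tabs, a slightly broader pattern removes those as well. This goes beyond the article's test file, so treat it as an optional extension.

```
# Drop lines that are empty or contain only whitespace characters
$ grep -v '^[[:space:]]*$' 2daygeek.txt
```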
|
||||
|
||||
|
||||
|
||||
### How To Remove/Delete The Empty Lines In A File In Linux Using awk Command?
|
||||
|
||||
The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions.
|
||||
|
||||
```
|
||||
$ awk NF 2daygeek.txt
|
||||
or
|
||||
$ awk '!/^$/' 2daygeek.txt
|
||||
or
|
||||
$ awk '/./' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
The details are as follows:
|
||||
|
||||
* **`awk:`** It’s a command
|
||||
* **`//:`** It holds the searching string.
|
||||
* **`^:`** matches start of string.
|
||||
* **`$:`** matches end of string.
|
||||
  * **`.:`** Matches any single character.
|
||||
  * **`!:`** Negates the pattern, so only the lines that do not match (the non-empty lines) are printed.
|
||||
* **`2daygeek.txt:`** Source file name.
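awk only prints to standard output, so to keep the cleaned result you would normally redirect it to a new file; GNU awk 4.1 and later also provides an inplace extension. Both are shown below as a sketch, and the output file name is just an example.

```
# Redirect the filtered output to a new file
$ awk NF 2daygeek.txt > 2daygeek-clean.txt

# Or, with GNU awk 4.1+, edit the file in place
$ gawk -i inplace 'NF' 2daygeek.txt
```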
|
||||
|
||||
|
||||
|
||||
### How To Delete The Empty Lines In A File In Linux Using A Combination Of The cat And tr Commands?
|
||||
|
||||
cat stands for concatenate. It is very frequently used in Linux to read data from a file.
|
||||
|
||||
cat is one of the most frequently used commands on Unix-like operating systems. It offers three functions related to text files: displaying the content of a file, combining multiple files into a single output, and creating new files.
|
||||
|
||||
tr translates, squeezes, and/or deletes characters from standard input, writing to standard output.
|
||||
|
||||
```
|
||||
$ cat 2daygeek.txt | tr -s '\n'
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
|
||||
|
||||
The details are as follows:
|
||||
|
||||
* **`cat:`** It’s a command
|
||||
* **`tr:`** It’s a command
|
||||
  * **`|:`** Pipe symbol. It passes the output of the first command as input to the next command.
|
||||
  * **`-s:`** Squeeze each sequence of a repeated character that is listed in the last specified SET into a single occurrence.
|
||||
  * **`\n:`** The newline character; repeated newlines (the empty lines) are squeezed into one.
|
||||
* **`2daygeek.txt:`** Source file name.
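As a side note, the cat is not strictly required here; tr reads standard input directly, so a plain redirection produces the same result.

```
$ tr -s '\n' < 2daygeek.txt
```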
|
||||
|
||||
|
||||
|
||||
### How To Remove/Delete The Empty Lines In A File In Linux Using perl Command?
|
||||
|
||||
Perl stands for “Practical Extraction and Reporting Language”. It is a programming language specially designed for text editing. It is now widely used for a variety of purposes, including Linux system administration, network programming, and web development.
|
||||
|
||||
```
|
||||
$ perl -ne 'print if /\S/' 2daygeek.txt
|
||||
2daygeek.com is a best Linux blog to learn Linux.
|
||||
It's FIVE years old blog.
|
||||
This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0.
|
||||
He got two GIRL babes.
|
||||
Her names are Tanisha & Renusha.
|
||||
```
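Like sed, perl can also modify the file directly with its `-i` switch; the optional `.bak` suffix keeps a backup copy. This is an extension of the example above.

```
$ perl -i.bak -ne 'print if /\S/' 2daygeek.txt
```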
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[pityonline](https://github.com/pityonline)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/
|
||||
[2]: https://www.2daygeek.com/linux-command-to-create-a-file/
|
||||
[3]: https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/
|
@ -1,157 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Run Particular Commands Without Sudo Password In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Run Particular Commands Without Sudo Password In Linux
|
||||
======
|
||||
|
||||
I had a script on my Ubuntu system deployed on AWS. The primary purpose of this script is to check, at a regular interval (every minute, to be precise), whether a specific service is running and to start that service automatically if it has stopped for any reason. But the problem is that I need sudo privileges to start the service. As you may already know, we have to provide a password when we run something with sudo, and I don’t want to do that. What I actually want is to start the service with sudo but without a password. If you’re ever in a situation like this, I know a small workaround. Today, in this brief guide, I will teach you how to run particular commands without a sudo password in Unix-like operating systems.
|
||||
|
||||
Have a look at the following example.
|
||||
|
||||
```
|
||||
$ sudo mkdir /ostechnix
|
||||
[sudo] password for sk:
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
As you can see in the above screenshot, I need to provide the sudo password when creating a directory named ostechnix in the root (/) folder. Whenever we try to execute a command with sudo privileges, we must enter the password. However, in my scenario, I don’t want to provide the sudo password. Here is what I did to run a sudo command without a password on my Linux box.
|
||||
|
||||
### Run Particular Commands Without Sudo Password In Linux
|
||||
|
||||
If, for any reason, you want to allow a user to run a particular command without entering the sudo password, you need to add that command to the **sudoers** file.
|
||||
|
||||
I want the user named **sk** to execute the **mkdir** command without entering the sudo password. Let us see how to do it.
|
||||
|
||||
Edit sudoers file:
|
||||
|
||||
```
|
||||
$ sudo visudo
|
||||
```
|
||||
|
||||
Add the following line at the end of the file.
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
Here, **sk** is the username. As per the above line, the user **sk** can run the ‘mkdir’ command from any terminal without a sudo password.
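You can also ask sudo itself what the rule grants before testing it. The `sudo -l` command lists the commands the invoking user may run, and NOPASSWD entries show up in that list. The output below is only an illustration; the exact wording varies between distributions and sudo versions.

```
$ sudo -l
...
User sk may run the following commands on this host:
    (root) NOPASSWD: /bin/mkdir
```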
|
||||
|
||||
You can add additional commands (for example **chmod** ) with comma-separated values as shown below.
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod
|
||||
```
|
||||
|
||||
Save and close the file. Log out (or reboot) your system. Now, log in as normal user ‘sk’ and try to run those commands with sudo and see what happens.
|
||||
|
||||
```
|
||||
$ sudo mkdir /dir1
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
See? Even though I ran the ‘mkdir’ command with sudo privileges, there was no password prompt. From now on, the user **sk** does not need to enter the sudo password when running the ‘mkdir’ command.
|
||||
|
||||
When running any other command not added to the sudoers file, you will be prompted for the sudo password.
|
||||
|
||||
Let us run another command with sudo.
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][5]
|
||||
|
||||
See? This command prompts me to enter the sudo password.
|
||||
|
||||
If you don’t want this command to ask for the sudo password, edit the sudoers file again:
|
||||
|
||||
```
|
||||
$ sudo visudo
|
||||
```
|
||||
|
||||
Add the ‘apt’ command to the sudoers file as shown below:
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD: /bin/mkdir,/usr/bin/apt
|
||||
```
|
||||
|
||||
Did you notice that the apt executable path is different from mkdir’s? Yes, you must provide the correct executable file path. To find the executable path of any command, for example ‘apt’, use the ‘whereis’ command as shown below.
|
||||
|
||||
```
|
||||
$ whereis apt
|
||||
apt: /usr/bin/apt /usr/lib/apt /etc/apt /usr/share/man/man8/apt.8.gz
|
||||
```
|
||||
|
||||
As you can see, the executable file for the apt command is **/usr/bin/apt**, hence I added it to the sudoers file.
|
||||
|
||||
As I already mentioned, you can add any number of commands with comma-separated values. Save and close your sudoers file once you’re done. Log out and log in again to your system.
|
||||
|
||||
Now, check whether you can run the command with sudo without entering the password:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
See? The apt command didn’t ask me for the password even though I ran it with sudo.
|
||||
|
||||
Here is yet another example. If you want to run a specific service, for example apache2, add it as shown below.
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/usr/bin/apt,/bin/systemctl restart apache2
|
||||
```
|
||||
|
||||
Now, the user can run the ‘sudo systemctl restart apache2’ command without a sudo password.
|
||||
|
||||
Can I require the password again for a particular command in the above case? Of course, yes! Just remove the added command from the sudoers file, then log out and log back in.
|
||||
|
||||
Alternatively, you can add the **‘PASSWD:’** directive in front of the command. Look at the following example.
|
||||
|
||||
Add/modify the following line as shown below.
|
||||
|
||||
```
|
||||
sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod,PASSWD:/usr/bin/apt
|
||||
```
|
||||
|
||||
In this case, the user **sk** can run the ‘mkdir’ and ‘chmod’ commands without entering the sudo password. However, he must provide the sudo password when running the ‘apt’ command.
|
||||
|
||||
**Disclaimer:** This is for educational purposes only. You should be very careful when applying this method; it can be both productive and destructive. For example, if you allow users to execute the ‘rm’ command without a sudo password, they could accidentally or intentionally delete important files. You have been warned!
|
||||
|
||||
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
|
||||
[2]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-1.png
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-7.png
|
||||
[4]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-6.png
|
||||
[5]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-4.png
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-5.png
|
@ -1,227 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (4 Methods To Change The HostName In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
4 Methods To Change The HostName In Linux
|
||||
======
|
||||
|
||||
We published an article yesterday on our website about **[changing the hostname in Linux][1]**.
|
||||
|
||||
Today we are going to show you how to change the hostname using different methods, so you can choose the one that works best for you.
|
||||
|
||||
systemd systems come with a handy tool called `hostnamectl` that allows us to manage the system hostname easily.
|
||||
|
||||
It changes the hostname instantly and doesn’t require a reboot when you use the native commands.
|
||||
|
||||
But if you modify the hostname manually in any of the configuration files, a reboot is required.
|
||||
|
||||
In this article, we will show you four methods to change the hostname on a systemd system.
|
||||
|
||||
The hostnamectl command allows you to set three kinds of hostnames in Linux; the details are below.
|
||||
|
||||
  * **`Static:`** The static hostname, which is set by the system admin.
|
||||
  * **`Transient/Dynamic:`** Assigned by a DHCP or DNS server at run time.
|
||||
  * **`Pretty:`** Can be assigned by the system admin. It is a free-form hostname that represents the server in a human-friendly way, such as “JBOSS UAT Server” (see the sketch after this list).
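As a quick sketch, hostnamectl can set each kind explicitly with a flag; the hostnames below simply reuse the example values from this article.

```
$ hostnamectl set-hostname --static magi-laptop
$ hostnamectl set-hostname --transient magi-laptop
$ hostnamectl set-hostname --pretty "JBOSS UAT Server"
```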
|
||||
|
||||
|
||||
|
||||
It can be done using the following four methods.
|
||||
|
||||
  * **`hostnamectl Command:`** The hostnamectl command controls the system hostname.
|
||||
* **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager.
|
||||
  * **`nmtui Command:`** nmtui is a text user interface for controlling NetworkManager.
|
||||
  * **`/etc/hostname file:`** This file contains the static system hostname.
|
||||
|
||||
|
||||
|
||||
### Method-1: Change The HostName Using hostnamectl Command in Linux
|
||||
|
||||
hostnamectl may be used to query and change the system hostname and related settings.
|
||||
|
||||
Simply run the `hostnamectl` command to view the system hostname.
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
or
|
||||
$ hostnamectl status
|
||||
|
||||
Static hostname: daygeek-Y700
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
If you would like to change the hostname, use the following command format.
|
||||
|
||||
**The general syntax:**
|
||||
|
||||
```
|
||||
$ hostnamectl set-hostname [YOUR NEW HOSTNAME]
|
||||
```
|
||||
|
||||
Use the following command to change the hostname with the hostnamectl command. In this example, I’m going to change the hostname from `daygeek-Y700` to `magi-laptop`.
|
||||
|
||||
```
|
||||
$ hostnamectl set-hostname magi-laptop
|
||||
```
|
||||
|
||||
You can view the updated hostname by running the following command.
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
Static hostname: magi-laptop
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
### Method-2: Change The HostName Using nmcli Command in Linux
|
||||
|
||||
nmcli is a command-line tool for controlling NetworkManager and reporting network status.
|
||||
|
||||
nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as control and display network device status. Also, it allows us to change the hostname.
|
||||
|
||||
Use the following format to view the current hostname using nmcli.
|
||||
|
||||
```
|
||||
$ nmcli general hostname
|
||||
daygeek-Y700
|
||||
```
|
||||
|
||||
**The general syntax:**
|
||||
|
||||
```
|
||||
$ nmcli general hostname [YOUR NEW HOSTNAME]
|
||||
```
|
||||
|
||||
Use the following command to change the hostname with the nmcli command. In this example, I’m going to change the hostname from `daygeek-Y700` to `magi-laptop`.
|
||||
|
||||
```
|
||||
$ nmcli general hostname magi-laptop
|
||||
```
|
||||
|
||||
The change takes effect without restarting the service below. However, to be safe, restart the systemd-hostnamed service so that everything picks up the new hostname.
|
||||
|
||||
```
|
||||
$ sudo systemctl restart systemd-hostnamed
|
||||
```
|
||||
|
||||
Run the same nmcli command again to check the changed hostname.
|
||||
|
||||
```
|
||||
$ nmcli general hostname
|
||||
magi-laptop
|
||||
```
|
||||
|
||||
### Method-3: Change The HostName Using nmtui Command in Linux
|
||||
|
||||
nmtui is a curses‐based TUI application for interacting with NetworkManager. When starting nmtui, the user is prompted to choose the activity to perform unless it was specified as the first argument.
|
||||
|
||||
Run the following command in a terminal to launch the text user interface.
|
||||
|
||||
```
|
||||
$ nmtui
|
||||
```
|
||||
|
||||
Use the `Down Arrow` key to choose the `Set system hostname` option, then hit `Enter`.
|
||||
![][3]
|
||||
|
||||
This screenshot shows the old hostname.
|
||||
![][4]
|
||||
|
||||
Just remove the old one, enter the new one, then hit the `OK` button.
|
||||
![][5]
|
||||
|
||||
It will show you the updated hostname on the screen; simply hit the `OK` button to complete the change.
|
||||
![][6]
|
||||
|
||||
Finally, hit the `Quit` button to exit nmtui.
|
||||
![][7]
|
||||
|
||||
The change takes effect without restarting the service below. However, to be safe, restart the systemd-hostnamed service so that everything picks up the new hostname.
|
||||
|
||||
```
|
||||
$ sudo systemctl restart systemd-hostnamed
|
||||
```
|
||||
|
||||
You can view the updated hostname by running the following command.
|
||||
|
||||
```
|
||||
$ hostnamectl
|
||||
Static hostname: daygeek-Y700
|
||||
Icon name: computer-laptop
|
||||
Chassis: laptop
|
||||
Machine ID: 31bdeb7b83230a2025d43547368d75bc
|
||||
Boot ID: 267f264c448f000ea5aed47263c6de7f
|
||||
Operating System: Manjaro Linux
|
||||
Kernel: Linux 4.19.20-1-MANJARO
|
||||
Architecture: x86-64
|
||||
```
|
||||
|
||||
### Method-4: Change The HostName Using /etc/hostname File in Linux
|
||||
|
||||
Alternatively, we can change the hostname by modifying the `/etc/hostname` file, but this method requires a server reboot for the changes to take effect.
|
||||
|
||||
Check the current hostname in the /etc/hostname file.
|
||||
|
||||
```
|
||||
$ cat /etc/hostname
|
||||
daygeek-Y700
|
||||
```
|
||||
|
||||
To change the hostname, simply overwrite the file, because it contains only the hostname. Note that the shell performs the redirection before sudo runs, so write the file through tee to get root privileges.
|
||||
|
||||
```
|
||||
$ echo "magi-daygeek" | sudo tee /etc/hostname
magi-daygeek
|
||||
|
||||
$ cat /etc/hostname
|
||||
magi-daygeek
|
||||
```
|
||||
|
||||
Reboot the system by running the following command.
|
||||
|
||||
```
|
||||
$ sudo init 6
|
||||
```
|
||||
|
||||
Finally, verify the updated hostname in the /etc/hostname file.
|
||||
|
||||
```
|
||||
$ cat /etc/hostname
|
||||
magi-daygeek
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-change-set-hostname/
|
||||
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-1.png
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-2.png
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-3.png
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-4.png
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-5.png
|
@ -1,170 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Set up two-factor authentication for SSH on Fedora)
|
||||
[#]: via: (https://fedoramagazine.org/two-factor-authentication-ssh-fedora/)
|
||||
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
|
||||
|
||||
Set up two-factor authentication for SSH on Fedora
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2019/02/twofactor-auth-ssh-816x345.png)
|
||||
|
||||
Every day there seems to be a security breach reported in the news where our data is at risk. Despite the fact that SSH is a secure way to connect remotely to a system, you can still make it even more secure. This article will show you how.
|
||||
|
||||
That’s where two-factor authentication (2FA) comes in. Even if you disable passwords and only allow SSH connections using public and private keys, an unauthorized user could still gain access to your system if they steal your keys.
|
||||
|
||||
With two-factor authentication, you can’t connect to a server with just your SSH keys. You also need to provide the randomly generated number displayed by an authenticator application on a mobile phone.
|
||||
|
||||
The Time-based One-time Password algorithm (TOTP) is the method shown in this article. [Google Authenticator][1] is used as the server application. Google Authenticator is available by default in Fedora.
|
||||
|
||||
For your mobile phone, you can use any two-factor authentication application that is compatible with TOTP. There are numerous free applications for Android or iOS that work with TOTP and Google Authenticator. This article uses [FreeOTP][2] as an example.
|
||||
|
||||
### Install and set up Google Authenticator
|
||||
|
||||
First, install the Google Authenticator package on your server.
|
||||
|
||||
```
|
||||
$ sudo dnf install -y google-authenticator
|
||||
```
|
||||
|
||||
Run the application.
|
||||
|
||||
```
|
||||
$ google-authenticator
|
||||
```
|
||||
|
||||
The application presents you with a series of questions. The snippets below show you how to answer for a reasonably secure setup.
|
||||
|
||||
```
|
||||
Do you want authentication tokens to be time-based (y/n) y
|
||||
Do you want me to update your "/home/user/.google_authenticator" file (y/n)? y
|
||||
```
|
||||
|
||||
The app provides you with a secret key, verification code, and recovery codes. Keep these in a secure, safe location. The recovery codes are the **only** way to access your server if you lose your mobile phone.
|
||||
|
||||
### Set up mobile phone authentication
|
||||
|
||||
Install the authenticator application (FreeOTP) on your mobile phone. You can find it in Google Play if you have an Android phone, or in the iTunes store for an Apple iPhone.
|
||||
|
||||
A QR code is displayed on the screen. Open up the FreeOTP app on your mobile phone. To add a new account, select the QR code shaped tool at the top on the app, and then scan the QR code. After the setup is complete, you’ll have to provide the random number generated by the authenticator application every time you connect to your server remotely.
|
||||
|
||||
### Finish configuration
|
||||
|
||||
The application asks further questions. The example below shows you how to answer to set up a reasonably secure configuration.
|
||||
|
||||
```
|
||||
Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y
|
||||
By default, tokens are good for 30 seconds. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of +-1min (window size of 3) to about +-4min (window size of 17 acceptable tokens).
|
||||
Do you want to do so? (y/n) n
|
||||
If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s.
|
||||
Do you want to enable rate-limiting (y/n) y
|
||||
```
|
||||
|
||||
Now you have to set up SSH to take advantage of the new two-factor authentication.
|
||||
|
||||
### Configure SSH
|
||||
|
||||
Before completing this step, **make sure you’ve already established a working SSH connection** using public SSH keys, since we’ll be disabling password connections. If there is a problem or mistake, having a connection will allow you to fix the problem.
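If key-based login is not in place yet, it can be set up with ssh-keygen and ssh-copy-id; the user and host below are the same example values used later in this article.

```
$ ssh-keygen -t ed25519          # skip this if you already have a key pair
$ ssh-copy-id user@example.com   # install your public key on the server
$ ssh user@example.com           # confirm that key-based login works
```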
|
||||
|
||||
On your server, use [sudo][3] to edit the /etc/pam.d/sshd file.
|
||||
|
||||
```
|
||||
$ sudo vi /etc/pam.d/sshd
|
||||
```
|
||||
|
||||
Comment out the auth substack password-auth line:
|
||||
|
||||
```
|
||||
#auth substack password-auth
|
||||
```
|
||||
|
||||
Add the following line to the bottom of the file.
|
||||
|
||||
```
|
||||
auth sufficient pam_google_authenticator.so
|
||||
```
|
||||
|
||||
Save and close the file. Next, edit the /etc/ssh/sshd_config file.
|
||||
|
||||
```
|
||||
$ sudo vi /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
Look for the ChallengeResponseAuthentication line and change it to yes.
|
||||
|
||||
```
|
||||
ChallengeResponseAuthentication yes
|
||||
```
|
||||
|
||||
Look for the PasswordAuthentication line and change it to no.
|
||||
|
||||
```
|
||||
PasswordAuthentication no
|
||||
```
|
||||
|
||||
Add the following line to the bottom of the file.
|
||||
|
||||
```
|
||||
AuthenticationMethods publickey,password publickey,keyboard-interactive
|
||||
```
|
||||
|
||||
Save and close the file, and then restart SSH.
|
||||
|
||||
```
|
||||
$ sudo systemctl restart sshd
|
||||
```
|
||||
|
||||
### Testing your two-factor authentication
|
||||
|
||||
When you attempt to connect to your server you’re now prompted for a verification code.
|
||||
|
||||
```
|
||||
[user@client ~]$ ssh user@example.com
|
||||
Verification code:
|
||||
```
|
||||
|
||||
The verification code is randomly generated by your authenticator application on your mobile phone. Since this number changes every few seconds, you need to enter it before it changes.
|
||||
|
||||
![][4]
|
||||
|
||||
If you do not enter the verification code, you won’t be able to access the system, and you’ll get a permission denied error:
|
||||
|
||||
```
|
||||
[user@client ~]$ ssh user@example.com
|
||||
|
||||
Verification code:
|
||||
|
||||
Verification code:
|
||||
|
||||
Verification code:
|
||||
|
||||
Permission denied (keyboard-interactive).
|
||||
|
||||
[user@client ~]$
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
By adding this simple two-factor authentication step, you’ve now made it much more difficult for an unauthorized user to gain access to your server.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/two-factor-authentication-ssh-fedora/
|
||||
|
||||
作者:[Curt Warfield][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/rcurtiswarfield/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Google_Authenticator
|
||||
[2]: https://freeotp.github.io/
|
||||
[3]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/02/freeotp-1.png
|
@ -1,86 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux security: Cmd provides visibility, control over user activity)
|
||||
[#]: via: (https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Linux security: Cmd provides visibility, control over user activity
|
||||
======
|
||||
|
||||
![](https://images.techhive.com/images/article/2017/01/background-1900329_1920-100705659-large.jpg)
|
||||
|
||||
There's a new Linux security tool you should be aware of — Cmd (pronounced "see em dee") dramatically modifies the kind of control that can be exercised over Linux users. It reaches way beyond the traditional configuration of user privileges and takes an active role in monitoring and controlling the commands that users are able to run on Linux systems.
|
||||
|
||||
Provided by a company of the same name, Cmd focuses on cloud usage. Given the increasing number of applications being migrated into cloud environments that rely on Linux, gaps in the available tools make it difficult to adequately enforce required security. However, Cmd can also be used to manage and protect on-premises systems.
|
||||
|
||||
### How Cmd differs from traditional Linux security controls
|
||||
|
||||
The leaders at Cmd — Milun Tesovic and Jake King — say organizations cannot confidently predict or control user behavior until they understand how users work routinely and what is considered “normal.” They seek to provide a tool that will granularly control, monitor, and authenticate user activity.
|
||||
|
||||
Cmd monitors user activity by forming user activity profiles (characterizing the activities these users generally perform), noticing abnormalities in their online behavior (login times, commands used, user locations, etc.), and preventing and reporting certain activities (e.g., downloading or modifying files and running privileged commands) that suggest some kind of system compromise might be underway. The product's behaviors are configurable and changes can be made rapidly.
|
||||
|
||||
The kind of tools most of us are using today to detect threats, identify vulnerabilities, and control user privileges have taken us a long way, but we are still fighting the battle to keep our systems and data safe. Cmd brings us a lot closer to identifying the intentions of hostile users whether those users are people who have managed to break into accounts or represent insider threats.
|
||||
|
||||
![1 sources live sessions][1]
|
||||
|
||||
View live Linux sessions
|
||||
|
||||
### How does Cmd work?
|
||||
|
||||
In monitoring and managing user activity, Cmd:
|
||||
|
||||
* Collects information that profiles user activity
|
||||
* Uses the baseline to determine what is considered normal
|
||||
* Detects and proactively prevents threats using specific indicators
|
||||
* Sends alerts to responsible people
|
||||
|
||||
|
||||
|
||||
![2 triggers][3]
|
||||
|
||||
Building custom policies in Cmd
|
||||
|
||||
Cmd goes beyond defining what sysadmins can control through traditional methods, such as configuring sudo privileges, providing much more granular and situation-specific controls.
|
||||
|
||||
Administrators can select escalation policies that can be managed separately from the user privilege controls managed by Linux sysadmins.
|
||||
|
||||
The Cmd agent provides real-time visibility (not after-the-fact log analysis) and can block actions, require additional authentication, or negotiate authorization as needed.
|
||||
|
||||
Also, Cmd supports custom rules based on geolocation if user locations are available. And new policies can be pushed to agents deployed on hosts within minutes.
|
||||
|
||||
![3 command blocked][4]
|
||||
|
||||
Building a trigger query in Cmd
|
||||
|
||||
### Funding news for Cmd
|
||||
|
||||
[Cmd][2] recently got a financial boost, having [completed a $15 million round of funding][5] led by [GV][6] (formerly Google Ventures) with participation from Expa, Amplify Partners, and additional strategic investors. This brings the company's total funding raised to $21.6 million and will help it continue to add new defensive capabilities to the product and grow its engineering teams.
|
||||
|
||||
In addition, the company appointed Karim Faris, general partner at GV, to its board of directors.
|
||||
|
||||
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3342454/linux-security-cmd-provides-visibility-control-over-user-activity.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/1-sources-live-sessions-100789431-large.jpg
|
||||
[2]: https://cmd.com
|
||||
[3]: https://images.idgesg.net/images/article/2019/02/2-triggers-100789432-large.jpg
|
||||
[4]: https://images.idgesg.net/images/article/2019/02/3-command-blocked-100789433-large.jpg
|
||||
[5]: https://www.linkedin.com/pulse/changing-cybersecurity-announcing-cmds-15-million-funding-jake-king/
|
||||
[6]: https://www.gv.com/
|
||||
[7]: https://www.facebook.com/NetworkWorld/
|
||||
[8]: https://www.linkedin.com/company/network-world
|
@ -1,202 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Find Available Network Interfaces On Linux)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Find Available Network Interfaces On Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/02/network-interface-720x340.jpeg)
|
||||
|
||||
One of the common tasks we do after installing a Linux system is network configuration. Of course, you can configure network interfaces at installation time, but some of you might prefer to do it after installation or to change the existing settings. As you already know, you must first know how many interfaces are available on the system in order to configure network settings from the command line. This brief tutorial addresses all the possible ways to find available network interfaces on Linux and Unix operating systems.
|
||||
|
||||
### Find Available Network Interfaces On Linux
|
||||
|
||||
We can find the available network cards in a couple of ways.
|
||||
|
||||
**Method 1 – Using ‘ifconfig’ Command:**
|
||||
|
||||
The most commonly used method to find the network interface details is the **‘ifconfig’** command. I believe some Linux users still use it.
|
||||
|
||||
```
|
||||
$ ifconfig -a
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
```
|
||||
enp5s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
|
||||
ether 24:b6:fd:37:8b:29 txqueuelen 1000 (Ethernet)
|
||||
RX packets 0 bytes 0 (0.0 B)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 0 bytes 0 (0.0 B)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
|
||||
inet 127.0.0.1 netmask 255.0.0.0
|
||||
inet6 ::1 prefixlen 128 scopeid 0x10<host>
|
||||
loop txqueuelen 1000 (Local Loopback)
|
||||
RX packets 171420 bytes 303980988 (289.8 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 171420 bytes 303980988 (289.8 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
wlp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
|
||||
inet 192.168.225.37 netmask 255.255.255.0 broadcast 192.168.225.255
|
||||
inet6 2409:4072:6183:c604:c218:85ff:fe50:474f prefixlen 64 scopeid 0x0<global>
|
||||
inet6 fe80::c218:85ff:fe50:474f prefixlen 64 scopeid 0x20<link>
|
||||
ether c0:18:85:50:47:4f txqueuelen 1000 (Ethernet)
|
||||
RX packets 564574 bytes 628671925 (599.5 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 299706 bytes 60535732 (57.7 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
```
|
||||
|
||||
As you can see in the above output, I have two network interfaces, namely **enp5s0** (the on-board wired Ethernet adapter) and **wlp9s0** (the wireless network adapter), on my Linux box. Here, **lo** is the loopback interface, which is used to access network services locally. It has the IP address 127.0.0.1.
|
||||
|
||||
We can also use the same ‘ifconfig’ command on many UNIX variants, for example **FreeBSD**, to list the available network cards.
|
||||
|
||||
**Method 2 – Using ‘ip’ Command:**
|
||||
|
||||
The ‘ifconfig’ command is deprecated in recent Linux versions, so you can use the **‘ip’** command instead to display the network interfaces, as shown below.
|
||||
|
||||
```
|
||||
$ ip link show
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
```
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
2: enp5s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
|
||||
link/ether 24:b6:fd:37:8b:29 brd ff:ff:ff:ff:ff:ff
|
||||
3: wlp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
|
||||
link/ether c0:18:85:50:47:4f brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/02/ip-command.png)
|
||||
|
||||
You can also use the following commands.
|
||||
|
||||
```
|
||||
$ ip addr
|
||||
|
||||
$ ip -s link
|
||||
```
|
||||
|
||||
Did you notice that these commands also show the connected state of the network interfaces? If you look closely at the above output, you will notice that my Ethernet card is not connected to a network cable (see the word **“DOWN”** in the above output), while the wireless network card is connected (see the word **“UP”**). For more details, check our previous guide to [**find the connected state of network interfaces on Linux**][1].
|
||||
|
||||
These two commands (ifconfig and ip) are just enough to find the available network cards on your Linux systems.
|
||||
|
||||
However, there are a few other methods available to list network interfaces on Linux. Here they are.
|
||||
|
||||
**Method 3:**
|
||||
|
||||
The Linux kernel saves the network interface details inside the **/sys/class/net** directory. You can verify the list of available interfaces by looking into this directory.
|
||||
|
||||
```
|
||||
$ ls /sys/class/net
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
enp5s0 lo wlp9s0
|
||||
```
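
The same sysfs directory can also tell you a little more about each interface. The following is a small sketch (not from the original article) that pairs each interface name with its `operstate` value, assuming the standard sysfs layout shown above:

```
# List each interface in /sys/class/net together with its link state.
for iface in /sys/class/net/*; do
    printf '%-10s %s\n' "$(basename "$iface")" "$(cat "$iface/operstate")"
done
```

On the example machine above, this would print enp5s0, lo, and wlp9s0 together with states such as down, unknown, and up.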
|
||||
|
||||
**Method 4:**
|
||||
|
||||
In Linux operating systems, the **/proc/net/dev** file contains statistics about the network interfaces.
|
||||
|
||||
To view the available network cards, just view its contents using the following command:
|
||||
|
||||
```
|
||||
$ cat /proc/net/dev
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
Inter-| Receive | Transmit
|
||||
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
|
||||
wlp9s0: 629189631 566078 0 0 0 0 0 0 60822472 300922 0 0 0 0 0 0
|
||||
enp5s0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
|
||||
lo: 303980988 171420 0 0 0 0 0 0 303980988 171420 0 0 0 0 0 0
|
||||
```
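
If you only need the interface names from this file, you can filter out the two header lines. The following one-liner is a small sketch based on the layout shown above:

```
# Skip the two header lines and print only the interface names.
awk -F: 'NR > 2 { gsub(/ /, "", $1); print $1 }' /proc/net/dev
```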
|
||||
|
||||
**Method 5: Using ‘netstat’ command**
|
||||
|
||||
The **netstat** command displays various details such as network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
|
||||
|
||||
```
|
||||
$ netstat -i
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Kernel Interface table
|
||||
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
|
||||
lo 65536 171420 0 0 0 171420 0 0 0 LRU
|
||||
wlp9s0 1500 565625 0 0 0 300543 0 0 0 BMRU
|
||||
```
|
||||
|
||||
Please be mindful that netstat is obsolete. The replacement for “netstat -i” is “ip -s link”. Also note that this method lists only the active interfaces, not all available interfaces.
|
||||
|
||||
**Method 6: Using ‘nmcli’ command**
|
||||
|
||||
nmcli is a command-line tool for controlling NetworkManager and reporting network status. It is used to create, display, edit, delete, activate, and deactivate network connections, and to display network status.
|
||||
|
||||
If you have a Linux system with NetworkManager installed, you can list the available network interfaces using the nmcli tool with the following commands:
|
||||
|
||||
```
|
||||
$ nmcli device status
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ nmcli connection show
|
||||
```
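
If you want the interface names in a script-friendly form, nmcli can also print terse, colon-separated output. A small sketch (not from the original article; check `man nmcli` for the exact field names on your version):

```
# Terse output: one line per device with only the selected fields.
nmcli -t -f DEVICE,TYPE,STATE device status
```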
|
||||
|
||||
You now know how to find the available network interfaces on Linux. Next, check the following guides to learn how to configure an IP address on Linux.
|
||||
|
||||
[How To Configure Static IP Address In Linux And Unix][2]
|
||||
|
||||
[How To Configure IP Address In Ubuntu 18.04 LTS][3]
|
||||
|
||||
[How To Configure Static And Dynamic IP Address In Arch Linux][4]
|
||||
|
||||
[How To Assign Multiple IP Addresses To Single Network Card In Linux][5]
|
||||
|
||||
If you know any other quick ways to do it, please share them in the comment section below. I will check and update the guide with your inputs.
|
||||
|
||||
And, that’s all. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-find-available-network-interfaces-on-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/how-to-find-out-the-connected-state-of-a-network-cable-in-linux/
|
||||
[2]: https://www.ostechnix.com/configure-static-ip-address-linux-unix/
|
||||
[3]: https://www.ostechnix.com/how-to-configure-ip-address-in-ubuntu-18-04-lts/
|
||||
[4]: https://www.ostechnix.com/configure-static-dynamic-ip-address-arch-linux/
|
||||
[5]: https://www.ostechnix.com/how-to-assign-multiple-ip-addresses-to-single-network-card-in-linux/
|
@ -1,61 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blockchain 2.0: An Introduction [Part 1])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction/)
|
||||
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
|
||||
|
||||
Blockchain 2.0: An Introduction [Part 1]
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/blockchain-introduction-720x340.png)
|
||||
|
||||
### Blockchain 2.0 – The next paradigm of computing
|
||||
|
||||
The **Blockchain** is now easily distinguishable as a transformational technology poised to bring in revolutionary changes in the way people use the internet. The present series of posts will explore the upcoming wave of Blockchain 2.0 based technologies and applications. The Blockchain is here to stay as evidenced by the tremendous interest in it shown by different stakeholders.
|
||||
|
||||
Staying on top of what it is and how it works is paramount for anyone who plans on using the internet for literally anything. Even if all you do is stare at your friends’ morning breakfast pics on Instagram or look for the next best clip to watch, you need to know what this technology can do to all of that.
|
||||
|
||||
Even though the basic concept behind the Blockchain was first talked about in academia in the **1990s**, its rise from obscurity to a trending buzzword among netizens is owed to the emergence of payment platforms such as **Bitcoin** and **Ether**.
|
||||
|
||||
Bitcoin started off as a decentralized digital currency. Its advent meant that you could basically pay people over the internet while remaining totally anonymous, safe, and secure. What lay beneath the simple financial token system that was Bitcoin, though, was the BLOCKCHAIN. You can think of Bitcoin technology, or any cryptocurrency for that matter, as being built up from three layers: at the bottom is the foundational Blockchain technology that verifies, records, and confirms transactions; on top of that foundation rests the protocol, basically a rule or an online etiquette to honor, record, and confirm transactions; and on top of it all is the cryptocurrency token, commonly called Bitcoin. A token is generated by the Blockchain once a transaction respecting the protocol is recorded on it.
|
||||
|
||||
While most people only saw the top layer, the coins or tokens being representative of what bitcoin really was, few ventured deep enough to understand that financial transactions were just one of many such possibilities that could be accomplished with the help of the Blockchain foundation. These possibilities are now being explored to generate and develop new standards for decentralizing all manners of transactions.
|
||||
|
||||
At its very basic level, the Blockchain can be thought of as an all-encompassing ledger of records and transactions. This in effect means that all kinds of records can theoretically be handled by the Blockchain. Developments in this area may eventually allow all kinds of hard assets (such as real estate deeds, physical keys, etc.) and soft, intangible assets (such as identity records, patents, trademarks, reservations, etc.) to be encoded as digital assets that are protected and transferred via the blockchain.
|
||||
|
||||
For the uninitiated, transactions on the Blockchain are inherently thought of and designed to be unbiased, permanent records. This is possible because of a **“consensus system”** that is built into the protocol. All transactions are confirmed, vetted and recorded by the participants of the system, in the case of the Bitcoin cryptocurrency platform, this role is taken care of by **miners** and exchanges. This can vary from platform to platform or from blockchain to blockchain. The protocol stack on which the platform is built is by definition supposed to be open-source and free for anyone with the technical know-how to verify. Transparency is woven into the system unlike much of the other platforms that the internet currently runs on.
|
||||
|
||||
Once transactions are recorded and coded into the Blockchain, they will be seen through. Participants are bound to honor their transactions and contracts the way they were originally intended to be executed. The execution itself is automatically taken care of by the platform, since it’s hardcoded into it, unless of course the original terms forbid it. This resilience of the Blockchain platform against attempts to tamper with records, the permanency of those records, and so on, is hitherto unheard of for something working over the internet. This is the added layer of trust that is often talked about when supporters of the technology point to its rising significance.
|
||||
|
||||
These features are not recently discovered hidden potentials of the platform; they were envisioned from the start. In a communique, **Satoshi Nakamoto**, the fabled creator(s) of Bitcoin, mentioned, **“the design supports a tremendous variety of possible transaction types that I designed years ago… If Bitcoin catches on in a big way, these are things we’ll want to explore in the future… but they all had to be designed at the beginning to make sure they would be possible later.”** This cements the fact that these features were designed and baked into the existing protocols from the beginning. The key idea is that the decentralized transaction-ledger functionality of the Blockchain could be used to transfer, deploy, and execute all manner of contracts.
|
||||
|
||||
Leading institutions are currently exploring the possibility of re-inventing financial instruments such as stocks, pensions, and derivatives, while governments all over the world are concerned more with the tamper-proof permanent record keeping potential of the Blockchain. Supporters of the platform claim that once development reaches a critical threshold, everything from your hotel key cards to copyrights and patents will from then on be recorded and implemented via the use of Blockchains.
|
||||
|
||||
An almost full list of items and particulars that could theoretically be implemented via a Blockchain model is compiled and maintained on [**this**][1] page by **Ledra Capital**. A thought experiment to actually realize how much of our lives the Blockchain might affect is a daunting task, but a look at that list will reiterate the importance of doing so.
|
||||
|
||||
Now, all of the bureaucratic and commercial uses mentioned above might lead you to believe that a technology such as this will be solely in the domain of governments and large private corporations. However, the truth is far from that. While the vast potential of the system makes it attractive for such uses, Blockchains harbor other possibilities and features as well. There are more intricate concepts related to the technology, such as **DApps**, **DAOs**, **DACs**, **DASs**, etc., more of which will be covered in depth in this series of articles.
|
||||
|
||||
Basically, development is going on in full swing, and it’s too early for anyone to comment on definitions, standards, and capabilities of such Blockchain-based systems for a wider roll-out, but the possibilities and their imminent effects are doubtless. There is even talk of Blockchain-based smartphones and polling during elections.
|
||||
|
||||
This was just a brief bird’s-eye view of what the platform is capable of. We’ll look at the distinct possibilities through a series of such detailed posts and articles. Keep an eye out for the [**next post of the series**][2], which will explore how the Blockchain is revolutionizing transactions and contracts.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-an-introduction/
|
||||
|
||||
作者:[EDITOR][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://ledracapital.com/blog/2014/3/11/bitcoin-series-24-the-mega-master-blockchain-list
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-revolutionizing-the-financial-system/
|
84
sources/tech/20190301 Emacs for (even more of) the win.md
Normal file
84
sources/tech/20190301 Emacs for (even more of) the win.md
Normal file
@ -0,0 +1,84 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Emacs for (even more of) the win)
|
||||
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
|
||||
[#]: author: (Norman Walsh https://so.nwalsh.com)
|
||||
|
||||
Emacs for (even more of) the win
|
||||
======
|
||||
|
||||
I use Emacs every day. I rarely notice it. But when I do, it usually brings me joy.
|
||||
|
||||
> If you are a professional writer… Emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.
|
||||
|
||||
I’ve been using [Emacs][1] for well over twenty years. I use it for writing almost anything and everything (I edit Scala and Java in [IntelliJ][2]). I read my email in it. If it can be done in Emacs, that’s where I prefer to do it.
|
||||
|
||||
Although I’ve used Emacs for literally decades, I realized around the new year that very little about my use of Emacs had changed in the past decade or more. New editing modes had come along, of course, I’d picked up a package or two, and I did adopt [Helm][3] a few years ago, but mostly it just did all the heavy lifting that I required of it, day in and day out without complaining or getting in my way. On the one hand, that’s a testament to how good it is. On the other hand, that’s an invitation to dig in and see what I’ve missed.
|
||||
|
||||
At about the same time, I resolved to improve several aspects of my work life:
|
||||
|
||||
* **Better meeting management.** I’m the lead on a couple of projects at work, and those projects have meetings, both regularly scheduled and ad hoc; some of them I run, and some I only attend.
|
||||
|
||||
I realized I’d become sloppy about my participation in meetings. It’s all too easy to sit in a room where there’s a meeting going on but actually read email and work on other items. (I strongly oppose the “no laptops” rule in meetings, but that’s a topic for another day.)
|
||||
|
||||
There are a couple of problems with sloppy participation. First, it’s disrespectful to the person who convened the meeting and the other participants. That’s actually sufficient reason not to do it, but I think there’s another problem: it disguises the cost of meetings.
|
||||
|
||||
If you’re in a meeting but also answering your email and maybe fixing a bug, then that meeting didn’t cost anything (or as much). If meetings are cheap, then there will be more of them.
|
||||
|
||||
I want fewer, shorter meetings. I don’t want to disguise their cost, I want them to be perceived as damned expensive and to be avoided unless absolutely necessary.
|
||||
|
||||
Sometimes, they are absolutely necessary. And I appreciate that a quick meeting can sometimes resolve an issue quickly. But if I have ten short meetings a day, let’s not pretend that I’m getting anything else productive accomplished.
|
||||
|
||||
I resolved to take notes at all the meetings I attend. I’m not offering to take minutes, necessarily, but I am taking minutes of a sort. It keeps me focused on the meeting and not catching up on other things.
|
||||
|
||||
* **Better time management.** There are lots and lots of things that I need or want to do, both professionally and personally. I’ve historically kept track of some of them in issue lists, some in saved email threads (in Emacs and [Gmail][4], for slightly different types of reminders), in my calendar, on “todo lists” of various sorts on my phone, and on little scraps of paper. And probably other places as well.
|
||||
|
||||
I resolved to keep them all in one place. Not because I think there’s one place that’s uniformly best or better, but because I hope to accomplish two things. First, by having them all in one place, I hope to be able to develop a better and more holistic view of where I’m putting my energies. Second, because I want to develop a habit (n. “A settled or regular tendency or practice, especially one that is hard to give up.”) of recording, tracking, and preserving them.
|
||||
|
||||
* **Better accountability.** If you work in certain science or engineering disciplines, you will have developed the habit of keeping a [lab notebook][5]. Alas, I did not. But I resolved to do so.
|
||||
|
||||
I’m not interested in the legal aspects that encourage bound pages or scribing only in permanent marker. What I’m interested in is developing the habit of keeping a record. My goal is to have a place to jot down ideas and design sketches and the like. If I have sudden inspiration or if I think of an edge case that isn’t in the test suite, I want my instinct to be to write it in my journal instead of scribbling it on a scrap of paper or promising myself that I’ll remember it.
|
||||
|
||||
|
||||
|
||||
|
||||
This confluence of resolutions led me quickly and more-or-less directly to [Org][6]. There is a large, active, and loyal community of Org users. I’ve played with it in the past (I even [wrote about it][7], at least in passing, a couple of years ago) and I tinkered long enough to [integrate MarkLogic][8] into it. (Boy has that paid off in the last week or two!)
|
||||
|
||||
But I never used it.
|
||||
|
||||
I am now using it. I take minutes in it, I record all of my todo items in it, and I keep a journal in it. I’m not sure there’s much value in me attempting to wax eloquent about it or enumerate all its features; you’ll find plenty of both with a quick web search.
|
||||
|
||||
If you use Emacs, you should be using Org. If you don’t use Emacs, I’m confident you wouldn’t be the first person who started because of Org. It does a lot. It takes a little time to learn your way around and remember the shortcuts, but I think it’s worth it. (And if you carry an [iOS][9] device in your pocket, I recommend [beorg][10] for recording items while you’re on the go.)
|
||||
|
||||
Naturally, I worked out how to [get XML out of it][11] (“worked out” sure is a funny way to spell “hacked together in elisp”). And from there, how to turn it back into the markup my weblog expects (and do so at the push of a button in Emacs, of course). So this is the first posting written in Org. It won’t be the last.
|
||||
|
||||
P.S. Happy birthday [little weblog][12].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://so.nwalsh.com/2019/03/01/emacs
|
||||
|
||||
作者:[Norman Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://so.nwalsh.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Emacs
|
||||
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
|
||||
[3]: https://emacs-helm.github.io/helm/
|
||||
[4]: https://en.wikipedia.org/wiki/Gmail
|
||||
[5]: https://en.wikipedia.org/wiki/Lab_notebook
|
||||
[6]: https://en.wikipedia.org/wiki/Org-mode
|
||||
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
|
||||
[8]: https://github.com/ndw/ob-ml-marklogic/
|
||||
[9]: https://en.wikipedia.org/wiki/IOS
|
||||
[10]: https://beorgapp.com/
|
||||
[11]: https://github.com/ndw/org-to-xml
|
||||
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
|
@ -1,69 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to boot up a new Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/how-boot-new-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
How to boot up a new Raspberry Pi
|
||||
======
|
||||
Learn how to install a Linux operating system, in the third article in our guide to getting started with Raspberry Pi.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y)
|
||||
|
||||
If you've been following along in this series, you've [chosen][1] and [bought][2] your Raspberry Pi board and peripherals and now you're ready to start using it. Here, in the third article, let's look at what you need to do to boot it up.
|
||||
|
||||
Unlike your laptop, desktop, smartphone, or tablet, the Raspberry Pi doesn't come with built-in storage. Instead, it uses a Micro SD card to store the operating system and your files. The great thing about this is it gives you the flexibility to carry your files (even if you don't have your Raspberry Pi with you). The downside is it may also increase the risk of losing or damaging the card—and thus losing your files. Just protect your Micro SD card, and you should be fine.
|
||||
|
||||
You should also know that SD cards aren't as fast as mechanical or solid state drives, so booting, reading, and writing from your Pi will not be as speedy as you would expect from other devices.
|
||||
|
||||
### How to install Raspbian
|
||||
|
||||
The first thing you need to do when you get a new Raspberry Pi is to install its operating system on a Micro SD card. Even though there are other operating systems (both Linux- and non-Linux-based) available for the Raspberry Pi, this series focuses on [Raspbian][3], Raspberry Pi's official Linux version.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/raspbian.png)
|
||||
|
||||
The easiest way to install Raspbian is with [NOOBS][4], which stands for "New Out Of Box Software." Raspberry Pi offers great [documentation for NOOBS][5], so I won't repeat the installation instructions here.
|
||||
|
||||
NOOBS gives you the choice of installing the following operating systems:
|
||||
|
||||
+ [Raspbian][6]
|
||||
+ [LibreELEC][7]
|
||||
+ [OSMC][8]
|
||||
+ [Recalbox][9]
|
||||
+ [Lakka][10]
|
||||
+ [RISC OS][11]
|
||||
+ [Screenly OSE][12]
|
||||
+ [Windows 10 IoT Core][13]
|
||||
+ [TLXOS][14]
|
||||
|
||||
Again, Raspbian is the operating system we'll use in this series, so go ahead, grab your Micro SD and follow the NOOBS documentation to install it. I'll meet you in the fourth article in this series, where we'll look at how to use Linux, including some of the main commands you'll need to know.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/how-boot-new-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/3/which-raspberry-pi-choose
|
||||
[2]: https://opensource.com/article/19/2/how-buy-raspberry-pi
|
||||
[3]: https://www.raspbian.org/RaspbianFAQ
|
||||
[4]: https://www.raspberrypi.org/downloads/noobs/
|
||||
[5]: https://www.raspberrypi.org/documentation/installation/noobs.md
|
||||
[6]: https://www.raspbian.org/RaspbianFAQ
|
||||
[7]: https://libreelec.tv/
|
||||
[8]: https://osmc.tv/
|
||||
[9]: https://www.recalbox.com/
|
||||
[10]: http://www.lakka.tv/
|
||||
[11]: https://www.riscosopen.org/wiki/documentation/show/Welcome%20to%20RISC%20OS%20Pi
|
||||
[12]: https://www.screenly.io/ose/
|
||||
[13]: https://developer.microsoft.com/en-us/windows/iot
|
||||
[14]: https://thinlinx.com/
|
@ -1,94 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Manage Your Mirrors with ArchLinux Mirrorlist Manager)
|
||||
[#]: via: (https://itsfoss.com/archlinux-mirrorlist-manager)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
Manage Your Mirrors with ArchLinux Mirrorlist Manager
|
||||
======
|
||||
|
||||
**ArchLinux Mirrorlist Manager is a simple GUI program that allows you to easily manage mirrors in your Arch Linux system.**
|
||||
|
||||
For Linux users, it is important to make sure that you keep your mirror list in good shape. Today we will take a quick look at an application designed to help manage your Arch mirror list.
|
||||
|
||||
![ArchLinux Mirrorlist Manager][1]

ArchLinux Mirrorlist Manager
|
||||
|
||||
### What is a Mirror?
|
||||
|
||||
For those new to the world of Linux, Linux operating systems depend on a series of servers placed around the world. These servers contain identical copies of all of the packages and software available for a particular distro. This is why they are called “mirrors”.
|
||||
|
||||
The ultimate goal is to have multiple mirrors in each country. This allows local users to quickly update their systems. However, this is not always true. Sometimes mirrors from another country can be faster.
|
||||
|
||||
### ArchLinux Mirrorlist Manager makes managing mirrors simpler in Arch Linux
|
||||
|
||||
![ArchLinux Mirrorlist Manager][2]

Main Screen
|
||||
|
||||
[Managing and sorting][3] the available mirrors in Arch is not easy. It involves fairly lengthy commands. Thankfully, someone came up with a solution.
|
||||
|
||||
Last year, [Rizwan Hasan][4] created a little Python and Qt application entitled [ArchLinux Mirrorlist Manager][5]. You might recognize Rizwan’s name because it is not the first time that we featured something he created on this site. Over a year ago, I wrote about a new Arch-based distro that Rizwan created named [MagpieOS][6]. I imagine that Rizwan’s experience with MagpieOS inspired him to create this application.
|
||||
|
||||
There really isn’t much to ArchLinux Mirrorlist Manager. It allows you to rank mirrors by response speed and limit the results by number and country of origin.
|
||||
|
||||
In other words, if you are located in Germany, you can restrict your mirrors to the 3 fastest in Germany.
|
||||
|
||||
### Install ArchLinux Mirrorlist Manager
|
||||
|
||||
**It is only for Arch Linux users**

> Pay attention! ArchLinux Mirrorlist Manager is for the Arch Linux distribution only. Don’t try to use it on other Arch-based distributions unless you make sure that the distro uses Arch mirrors. Otherwise, you might face the issues that I encountered with Manjaro (explained in the section below).
|
||||
|
||||
**Mirrorlist Manager alternative for Manjaro**

When it comes to using something Archy, my go-to system is Manjaro. In preparation for this article, I decided to install ArchLinux Mirrorlist Manager on my Manjaro machine. It quickly sorted the available mirrors and saved them to my mirror list.

I then proceeded to try to update my system and immediately ran into problems. When ArchLinux Mirrorlist Manager sorted the mirrors my system was using, it replaced all of my Manjaro mirrors with vanilla Arch mirrors. (Manjaro is based on Arch, but has its own mirrors because the dev team tests all package updates before pushing them to the users to ensure there are no system-breaking bugs.) Thankfully, the Manjaro forum helped me fix my mistake.

If you are a Manjaro user, please do not make the same mistake that I did. ArchLinux Mirrorlist Manager is only for Arch and Arch-based distros that use Arch’s mirrors.

Luckily, there is an easy-to-use terminal application that Manjaro users can use to manage their mirror lists. It is called [Pacman-mirrors][7]. Just like ArchLinux Mirrorlist Manager, you can sort by response speed. Just type `sudo pacman-mirrors --fasttrack`. If you want to limit the results to the five fastest mirrors, you can type `sudo pacman-mirrors --fasttrack 5`. To restrict the results to one or more countries, type `sudo pacman-mirrors --country Germany,Spain,Austria`. You can limit the results to your country by typing `sudo pacman-mirrors --geoip`. You can visit the [Manjaro wiki][7] for more information about Pacman-mirrors.

After you run Pacman-mirrors, you have to synchronize your package database and update your system by typing `sudo pacman -Syyu`.

Note: Pacman-mirrors is for **Manjaro only**.
|
||||
|
||||
ArchLinux Mirrorlist Manager is available in the [Arch User Repository][8]. More advanced Arch users can download the PKGBUILD directly from [the GitHub page][9].
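
For example, building it from the AUR can be done with the usual makepkg workflow. The snippet below is a sketch: it assumes the AUR package is named `mirrorlist-manager` (as in the link above) and that `git` and the `base-devel` group are installed; you may prefer an AUR helper such as `yay` instead.

```
# Clone the AUR package and build/install it with makepkg.
git clone https://aur.archlinux.org/mirrorlist-manager.git
cd mirrorlist-manager
makepkg -si
```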
|
||||
|
||||
### Final Thoughts on ArchLinux Mirrorlist Manager
|
||||
|
||||
Even though [ArchLinux Mirrorlist Manager][5] isn’t very useful for me, I’m glad it exists. It shows that Linux users are actively trying to make Linux easier to use. As I said earlier, managing a mirror list on Arch is not easy. Rizwan’s little tool will help make Arch more usable for beginning users.
|
||||
|
||||
Have you ever used ArchLinux Mirrorlist Manager? What is your method to manage your Arch mirrors? Please let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][10].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/archlinux-mirrorlist-manager
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/mirrorlist-manager2.png?ssl=1
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/mirrorlist-manager4.jpg?ssl=1
|
||||
[3]: https://wiki.archlinux.org/index.php/Mirrors
|
||||
[4]: https://github.com/Rizwan-Hasan
|
||||
[5]: https://github.com/Rizwan-Hasan/ArchLinux-Mirrorlist-Manager
|
||||
[6]: https://itsfoss.com/magpieos/
|
||||
[7]: https://wiki.manjaro.org/index.php?title=Pacman-mirrors
|
||||
[8]: https://aur.archlinux.org/packages/mirrorlist-manager
|
||||
[9]: https://github.com/Rizwan-Hasan/MagpieOS-Packages/tree/master/ArchLinux-Mirrorlist-Manager
|
||||
[10]: http://reddit.com/r/linuxusersgroup
|
@ -1,366 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (leommxj)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 Ways To Generate A Random/Strong Password In Linux Terminal)
|
||||
[#]: via: (https://www.2daygeek.com/5-ways-to-generate-a-random-strong-password-in-linux-terminal/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
5 Ways To Generate A Random/Strong Password In Linux Terminal
|
||||
======
|
||||
|
||||
We recently wrote an article about **[password strength and password score check][1]** on our website.
|
||||
|
||||
It will help you validate your password’s strength and score.
|
||||
|
||||
We can manually create the few passwords we need, but if you would like to generate passwords for multiple users or servers, what is the solution?
|
||||
|
||||
Yes, there are many utilities available in Linux to fulfill this requirement. However, I’m going to cover the five best password generators in this article.
|
||||
|
||||
These tools generate strong random passwords for you. Navigate to the following article if you would like to update the password for multiple users and servers.
|
||||
|
||||
These tools are easy to use, which is why I preferred to go with them. By default they generate a strong password, and if you would like to generate a super strong password, use the available options.
|
||||
|
||||
They will help you generate a super strong password with the following combination: a minimum length of 12-15 characters, including alphabetic characters (lower case and upper case), numbers, and special characters.
|
||||
|
||||
The tools are listed below.
|
||||
|
||||
* `pwgen:` The pwgen program generates passwords which are designed to be easily memorized by humans, while being as secure as possible.
|
||||
* `openssl:` The openssl program is a command line tool for using the various cryptography functions of OpenSSL’s crypto library from the shell.
|
||||
* `gpg:` OpenPGP encryption and signing tool
|
||||
* `mkpasswd:` generate new password, optionally apply it to a user
|
||||
* `makepasswd:` makepasswd generates true random passwords using /dev/urandom, with the emphasis on security over pronounceability.
|
||||
* `/dev/urandom file:` The character special files /dev/random and /dev/urandom (present since Linux 1.3.30) provide an interface to the kernel’s random number generator.
|
||||
* `md5sum:` md5sum is a computer program that calculates and verifies 128-bit MD5 hashes.
|
||||
* `sha256sum:` The program sha256sum is designed to verify data integrity using the SHA-256 (SHA-2 family with a digest length of 256 bits).
|
||||
* `sha1pass:` sha1pass creates a SHA1 password hash. In the absence of a salt value on the command line, a random salt vector will be generated.
|
||||
|
||||
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using pwgen Command?
|
||||
|
||||
The pwgen program generates passwords which are designed to be easily memorized by humans, while being as secure as possible.
|
||||
|
||||
Human-memorable passwords are never going to be as secure as completely random passwords.
|
||||
|
||||
Use the `-s` option to generate completely random, hard-to-memorize passwords. These should only be used for machine passwords, since we can’t memorize them.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][2]** to install pwgen.
|
||||
|
||||
```
|
||||
$ sudo dnf install pwgen
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install pwgen.
|
||||
|
||||
```
|
||||
$ sudo apt install pwgen
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install pwgen.
|
||||
|
||||
```
|
||||
$ sudo pacman -S pwgen
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install pwgen.
|
||||
|
||||
```
|
||||
$ sudo yum install pwgen
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install pwgen.
|
||||
|
||||
```
|
||||
$ sudo zypper install pwgen
|
||||
```
|
||||
|
||||
### How To Use pwgen Command In Linux?
|
||||
|
||||
It’s a simple and straightforward method. Use whichever of the examples below you prefer. By default, it generates human-memorable passwords.
|
||||
|
||||
To do so, simply run the `pwgen` command in your terminal. It generates 160 passwords in a single shot. These 160 passwords are printed in 20 rows and 8 columns.
|
||||
|
||||
```
|
||||
$ pwgen
|
||||
ameiK2oo aibi3Cha EPium0Ie aisoh1Ee Nidee9ae uNga0Bee uPh9ieM1 ahn1ooNg
|
||||
oc5ooTea tai7eKid tae2yieS hiecaiR8 wohY2Ohk Uab2maed heC4aXoh Ob6Nieso
|
||||
Shaeriu3 uy9Juk5u hoht7Doo Fah6yah3 faz9Jeew eKiek4ju as0Xuosh Eiwo4epo
|
||||
oot8teeZ Ui1yoohi Aechae7A Ohdi2ael cae5Thoh Au1aeTei ais0aiC2 Cai2quin
|
||||
Oox9ohz4 neev0Che ahza8AQu Ahz7eica meiBeeW0 Av3bo7ah quoiTu3f taeNg3ae
|
||||
Aiko7Aiz SheiGh8E aesaeSh7 haet6Loo AeTel3oN Ath7zeer IeYah4ie UG3ootha
|
||||
Ohch9Och Phuap6su iel5Xu7s diqui7Bu ieF2dier eeluHa1u Thagei0i Ceeth3oh
|
||||
OCei1ahj zei2aiYo Jahgh1ia ooqu1Cej eez2aiPo Wahd5soo noo7Mei9 Hie5ashe
|
||||
Uith4Or2 Xie3uh2b fuF9Eilu eiN2sha9 zae2YaSh oGh5ephi ohvao4Ae aixu6aeM
|
||||
fo4Ierah iephei6A hae9eeGa eiBeiY3g Aic8Kee9 he8AheCh ohM4bid9 eemae3Zu
|
||||
eesh2EiM cheiGa4j PooV2vii ahpeeg5E aezauX2c Xe7aethu Ahvaph7a Joh2heec
|
||||
Ii5EeShi aij7Uo8e ooy2Ahth mieKe2ni eiQuu8fe giedaQu0 eiPhob3E oox1uo2U
|
||||
eehia4Hu ga9Ahw0a ohxuZei7 eV4OoXio Kid2wu1n ku4Ahf5s uigh8uQu AhWoh0po
|
||||
vo1Eeb2u Ahth7ve5 ieje4eiL ieci1Ach Meephie9 iephieY8 Eesoom7u eakai2Bo
|
||||
uo8Ieche Zai3aev5 aGhahf0E Wowoo5th Oraeb0ah Gah3nah0 ieGhah0p aeCh0OhJ
|
||||
ahQu2feZ ahQu0gah foik7Ush cei1Wai1 Aivi3ooY eephei5U MooZae3O quooRoh7
|
||||
aequae5U pae6Ceiv eizahF1k ohmi7ETa ahyaeK1N Mohw2no8 ooc8Oone coo7Ieve
|
||||
eePhei9h Weequ8eV Vie4iezu neeMiim4 ie6aiZoh Queegh2E shahwi3N Inichie8
|
||||
Sid1aeji mohj4Ko7 lieDi0pe Zeemah6a thuevu2E phi4Ohsh paiKeix1 ooz1Ceph
|
||||
ahV4yore ue2laePh fu1eThui qui7aePh Fahth1nu ohk9puLo aiBeez0b Neengai5
|
||||
```
|
||||
|
||||
To generate secure random passwords, use the `-s` option with the pwgen command.
|
||||
|
||||
```
|
||||
$ pwgen -s
|
||||
CU75lgZd 7HzzKgtA 2ktBJDpR F6XJVhBs UjAm3bNL zO7Dw7JJ pxn8fUvp Ka3lLilG
|
||||
ywJX7iJl D9ajxb6N 78c1HOg2 g8vtWCra Jp6pBGBw oYuev9Vl gbA6gHV8 G6XQoVO5
|
||||
uQN98IU4 50GgQfrX FrTsou2t YQorO4x6 UGer8Yi2 O7DB5nw1 1ax370UR 1xVRPkA1
|
||||
RVaGDr2i Nt11ekUd 9Vm3D244 ck8Lnpd0 SjDt8uWn 5ERT4tf8 4EONFzyY Jc6T83jg
|
||||
WZa6bKPW H4HMo1YU bsDDRik3 gBwV7LOW 9H1QRQ4x 3Ak7RcSe IJu2RBF9 e508xrLC
|
||||
SzTrW191 AslxDa6E IkWWov2b iOb6EmTy qHt82OwG 5ZFO7B53 97zmjOPu A4KZuhYV
|
||||
uQpoJR4D 0eKyOiUr Rz96smeO 3HTABu3N 6W0VmEls uPsp5zpw 8UD3VkMG YTct6Rd4
|
||||
VKo0cVmq E07ZX7j9 kQSlvA69 Nm3fpv3i xWvF2xMu yEfcw8uA oQGVX3l9 grTzx7Xj
|
||||
s4GVEYtM uJl5sYMe n3icRPiY ED3Mup4B k3M9KHI7 IkxqoSM0 dt2cxmMU yb2tUkut
|
||||
2Q9wGZQx 8Rpo11s9 I13siOHu 7GV64Fjv 3VONzD8i SCDfVD3F oiPTx239 6BQakoiJ
|
||||
XUEokiC4 ybL7VGmL el2RfvWk zKc7CLcE 3FqNBSyA NjDWrvZ5 KI3NSX4h VFyo6VPr
|
||||
h4q3XeqZ FDYMoX6f uTU5ZzU3 6u4ob4Ep wiYPt05n CZga66qh upzH6Z9y RuVcqbe8
|
||||
taQv11hq 1xsY67a8 EVo9GLXA FCaDLGb1 bZyh0YN8 0nTKo0Qy RRVUwn9t DuU8mwwv
|
||||
x96LWpCb tFLz3fBG dNb4gCKf n6VYcOiH 1ep6QYFZ x8kaJtrY 56PDWuW6 1R0If4kV
|
||||
2XK0NLQK 4XQqhycl Ip08cn6c Bnx9z2Bz 7gjGlON7 CJxLR1U4 mqMwir3j ovGXWu0z
|
||||
MfDjk5m8 4KwM9SAN oz0fZ5eo 5m8iRtco oP5BpLh0 Z5kvwr1W f34O2O43 hXao1Sp8
|
||||
tKoG5VNI f13fuYvm BQQn8MD3 bmFSf6Mf Z4Y0o17U jT4wO1DG cz2clBES Lr4B3qIY
|
||||
ArKQRND6 8xnh4oIs nayiK2zG yWvQCV3v AFPlHSB8 zfx5bnaL t5lFbenk F2dIeBr4
|
||||
C6RqDQMy gKt28c9O ZCi0tQKE 0Ekdjh3P ox2vWOMI 14XF4gwc nYA0L6tV rRN3lekn
|
||||
lmwZNjz1 4ovmJAr7 shPl9o5f FFsuNwj0 F2eVkqGi 7gw277RZ nYE7gCLl JDn05S5N
|
||||
```
|
||||
|
||||
If you would like to generate five strong passwords of 14 characters, use the following format.
|
||||
|
||||
```
|
||||
$ pwgen -s 14 5
|
||||
7YxUwDyfxGVTYD em2NT6FceXjPfT u8jlrljbrclcTi IruIX3Xu0TFXRr X8M9cB6wKNot1e
|
||||
```
|
||||
|
||||
If you really want to generate twenty super strong random passwords, use the following format.
|
||||
|
||||
```
|
||||
$ pwgen -cnys 14 20
|
||||
mQ3E=vfGfZ,5[B #zmj{i5|ZS){jg Ht_8i7OqJ%N`~2 443fa5iJ\W-L?] ?Qs$o=vz2vgQBR
|
||||
^'Ry0Az|J9p2+0 t2oA/n7U_'|QRx EsX*%_(4./QCRJ ACr-,8yF9&eM[* !Xz1C'bw?tv50o
|
||||
8hfv-fK(VxwQGS q!qj?sD7Xmkb7^ N#Zp\_Y2kr%!)~ 4*pwYs{bq]Hh&Y |4u=-Q1!jS~8=;
|
||||
]{$N#FPX1L2B{h I|01fcK.z?QTz" l~]JD_,W%5bp.E +i2=D3;BQ}p+$I n.a3,.D3VQ3~&i
|
||||
```
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using openssl Command?
|
||||
|
||||
The openssl program is a command line tool for using the various cryptography functions of OpenSSL’s crypto library from the shell.
|
||||
|
||||
Run the openssl command in the following format to generate a strong random password. Note that the argument 14 is the number of random bytes; the Base64-encoded output will be slightly longer than 14 characters.
|
||||
|
||||
```
|
||||
$ openssl rand -base64 14
|
||||
WjzyDqdkWf3e53tJw/c=
|
||||
```
|
||||
|
||||
If you would like to generate ten strong random passwords using the openssl command, use the following for loop.
|
||||
|
||||
```
|
||||
$ for pw in {1..10}; do openssl rand -base64 14; done
|
||||
6i0hgHDBi3ohZ9Mil8I=
|
||||
gtn+y1bVFJFanpJqWaA=
|
||||
rYu+wy+0nwLf5lk7TBA=
|
||||
xrdNGykIzxaKDiLF2Bw=
|
||||
cltejRkDPdFPC/zI0Pg=
|
||||
G6aroK6d4xVVYFTrZGs=
|
||||
jJEnFoOk1+UTSx/wJrY=
|
||||
TFxVjBmLx9aivXB3yxE=
|
||||
oQtOLPwTuO8df7dIv9I=
|
||||
ktpBpCSQFOD+5kIIe7Y=
|
||||
```
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using gpg Command?
|
||||
|
||||
gpg is the OpenPGP part of the GNU Privacy Guard (GnuPG). It is a tool to provide digital encryption and signing services using the OpenPGP standard. gpg features complete key management and all the bells and whistles you would expect from a full OpenPGP implementation.
|
||||
|
||||
Run the gpg command in the following format to generate a strong random password from 14 random bytes (the armored output is Base64-encoded, so it will be a little longer than 14 characters).
|
||||
|
||||
```
|
||||
$ gpg --gen-random --armor 1 14
|
||||
or
|
||||
$ gpg2 --gen-random --armor 1 14
|
||||
jq1mtY4gBa6gIuJrggM=
|
||||
```
|
||||
|
||||
If you would like to generate ten strong random passwords using the gpg command, use the following for loop.
|
||||
|
||||
```
|
||||
$ for pw in {1..10}; do gpg --gen-random --armor 1 14; done
|
||||
or
|
||||
$ for pw in {1..10}; do gpg2 --gen-random --armor 1 14; done
|
||||
F5ZzLSUMet2kefG6Ssc=
|
||||
8hh7BFNs8Qu0cnrvHrY=
|
||||
B+PEt28CosR5xO05/sQ=
|
||||
m21bfx6UG1cBDzVGKcE=
|
||||
wALosRXnBgmOC6+++xU=
|
||||
TGpjT5xRxo/zFq/lNeg=
|
||||
ggsKxVgpB/3aSOY15W4=
|
||||
iUlezWxL626CPc9omTI=
|
||||
pYb7xQwI1NTlM2rxaCg=
|
||||
eJjhtA6oHhBrUpLY4fM=
|
||||
```
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using mkpasswd Command?
|
||||
|
||||
mkpasswd generates passwords and can apply them automatically to users. With no arguments, mkpasswd returns a new password. It’s part of the expect package, so you have to install the expect package to use the mkpasswd command.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][2]** to install mkpasswd.
|
||||
|
||||
```
|
||||
$ sudo dnf install expect
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install mkpasswd.
|
||||
|
||||
```
|
||||
$ sudo apt install expect
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install mkpasswd.
|
||||
|
||||
```
|
||||
$ sudo pacman -S expect
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install mkpasswd.
|
||||
|
||||
```
|
||||
$ sudo yum install expect
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install mkpasswd.
|
||||
|
||||
```
|
||||
$ sudo zypper install expect
|
||||
```
|
||||
|
||||
Run the `mkpasswd` command in terminal to generate a random password.
|
||||
|
||||
```
|
||||
$ mkpasswd
|
||||
37_slQepD
|
||||
```
|
||||
|
||||
Run the mkpasswd command with the following format to generate a random strong password with 14 characters.
|
||||
|
||||
```
|
||||
$ mkpasswd -l 14
|
||||
W1qP1uv=lhghgh
|
||||
```
|
||||
|
||||
Run the mkpasswd command with the following format to generate a strong random password of 14 characters that combines alphabetic characters (lower and upper case), numbers, and special characters.
|
||||
|
||||
```
|
||||
$ mkpasswd -l 14 -d 3 -C 3 -s 3
|
||||
3aad!bMWG49"t,
|
||||
```
|
||||
|
||||
If you would like to generate ten such strong random passwords of 14 characters (combining alphabetic characters, numbers, and special characters) using the mkpasswd command, use the following for loop.
|
||||
|
||||
```
|
||||
$ for pw in {1..10}; do mkpasswd -l 14 -d 3 -C 3 -s 3; done
|
||||
zmSwP[q9;P1r6[
|
||||
E42zcvzM"i3%B\
|
||||
8}1#[email protected]
|
||||
0X:zB(mmU22?nj
|
||||
0sqqL44M}ko(O^
|
||||
43tQ(.6jG;ceRq
|
||||
-jB6cp3x1GZ$e=
|
||||
$of?Rj9kb2N(1J
|
||||
9HCf,nn#gjO79^
|
||||
Tu9m56+Ev_Yso(
|
||||
```
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using makepasswd Command?
|
||||
|
||||
makepasswd generates true random passwords using /dev/urandom, with the emphasis on security over pronounceability. It can also encrypt plaintext passwords given on the command line.
|
||||
|
||||
Run the `makepasswd` command in terminal to generate a random password.
|
||||
|
||||
```
|
||||
$ makepasswd
|
||||
HdCJafVaN
|
||||
```
|
||||
|
||||
Run the makepasswd command with the following format to generate a random strong password with 14 characters.
|
||||
|
||||
```
|
||||
$ makepasswd --chars 14
|
||||
HxJDv5quavrqmU
|
||||
```
|
||||
|
||||
Run the makepasswd command with the following format to generate ten strong random passwords of 14 characters.
|
||||
|
||||
```
|
||||
$ makepasswd --chars 14 --count 10
|
||||
TqmKVWnRGeoVNr
|
||||
mPV2P98hLRUsai
|
||||
MhMXPwyzYi2RLo
|
||||
dxMGgLmoFpYivi
|
||||
8p0G7JvJjd6qUP
|
||||
7SmX95MiJcQauV
|
||||
KWzrh5npAjvNmL
|
||||
oHPKdq1uA9tU85
|
||||
V1su9GjU2oIGiQ
|
||||
M2TMCEoahzLNYC
|
||||
```
|
||||
|
||||
### How To Generate A Random Strong Password In Linux Using Multiple Commands?
|
||||
|
||||
If you are still looking for other options, you can use the following utilities to generate a random password in Linux.
|
||||
|
||||
**Using md5sum:** md5sum is a computer program that calculates and verifies 128-bit MD5 hashes.
|
||||
|
||||
```
|
||||
$ date | md5sum
|
||||
9baf96fb6e8cbd99601d97a5c3acc2c4 -
|
||||
```
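
The hash itself is 32 hexadecimal characters; if you want a shorter password out of it, you can truncate the output. A minimal sketch (not from the original article):

```
# Keep only the hash field and take the first 14 characters as the password.
date | md5sum | awk '{print $1}' | head -c 14; echo
```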
|
||||
|
||||
**Using /dev/urandom:** The character special files /dev/random and /dev/urandom (present since Linux 1.3.30) provide an interface to the kernel’s random number generator. File /dev/random has major device number 1 and minor device number 8. File /dev/urandom has major device number 1 and minor device number 9.
|
||||
|
||||
```
|
||||
$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 14
|
||||
15LQB9J84Btnzz
|
||||
```
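
The example above only draws letters and digits. If you also want special characters in the password, you can widen the character set given to `tr`; the symbols below are an arbitrary choice.

```
# Same idea, but also allow a handful of punctuation characters.
tr -dc 'a-zA-Z0-9!@#$%_+=' < /dev/urandom | head -c 14; echo
```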
|
||||
|
||||
**Using sha256sum:** The program sha256sum is designed to verify data integrity using the SHA-256 (SHA-2 family with a digest length of 256 bits).
|
||||
|
||||
```
|
||||
$ date | sha256sum
|
||||
a114ae5c458ae0d366e1b673d558d921bb937e568d9329b525cf32290478826a -
|
||||
```
|
||||
|
||||
**Using sha1pass:** sha1pass creates a SHA1 password hash. In the absence of a salt value on the command line, a random salt vector will be generated.
|
||||
|
||||
```
|
||||
$ sha1pass
|
||||
$4$9+JvykOv$e7U0jMJL2yBOL+RVa2Eke8SETEo$
|
||||
```
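
sha1pass can also take the password and an optional salt as arguments, so you can hash a specific password with a salt of your choosing. Treat the exact invocation below as an assumption and confirm it with `man sha1pass` on your system:

```
# Hash a given password with an explicit salt (arguments assumed: password, then salt).
sha1pass MySecretPassword my-salt
```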
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/5-ways-to-generate-a-random-strong-password-in-linux-terminal/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[leommx](https://github.com/leommxj)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/
|
||||
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -1,65 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (3 popular programming languages you can learn with Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/programming-languages-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
3 popular programming languages you can learn with Raspberry Pi
|
||||
======
|
||||
Become more valuable on the job market by learning to program with the Raspberry Pi.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_language_c.png?itok=mPwqDAD9)
|
||||
|
||||
In the last article in this series, I shared some ways to [teach kids to program with Raspberry Pi][1]. In theory, there is absolutely nothing stopping an adult from using resources designed for kids, but you might be better served by learning the programming languages that are in demand in the job market.
|
||||
|
||||
Here are three programming languages you can learn with the Raspberry Pi.
|
||||
|
||||
### Python
|
||||
|
||||
[Python][2] has become one of the [most popular programming languages][3] in the open source world. Its interpreter has been packaged and made available in every popular Linux distribution. If you install Raspbian on your Raspberry Pi, you will see an app called [Thonny][4], which is a Python integrated development environment (IDE) for beginners. In a nutshell, an IDE is an application that provides all you need to get your code executed, often including things like debuggers, documentation, auto-completion, and emulators. Here is a [great little tutorial][5] to get you started using Thonny and Python on the Raspberry Pi.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/thonny.png)
|
||||
|
||||
### Java
|
||||
|
||||
Although arguably not as attractive as it once was, [Java][6] remains heavily used in universities around the world and deeply embedded in the enterprise. So, even though some will disagree with my recommending it as a beginner's language, I am compelled to do so; for one thing, it is still very popular, and for another, there are a lot of books, classes, and other resources available for learning Java. Get started on the Raspberry Pi by using the [BlueJ][7] Java IDE.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/bluejayide.png)
|
||||
|
||||
### JavaScript
|
||||
|
||||
"Back in my day…" [JavaScript][8] was a client-side language that basically allowed people to streamline and automate user events in a browser and modify HTML elements. Today, JavaScript has escaped the browser and is available for other types of clients like mobile apps and even server-side programming. [Node.js][9] is a popular runtime environment that allows developers to code beyond the client-browser paradigm. To learn more about running Node.js on the Raspberry Pi, check out [W3Schools tutorial][10].
|
||||
|
||||
### Other languages
|
||||
|
||||
If there's another language you want to learn, don't despair. There's a high likelihood that you can use your Raspberry Pi to compile or interpret any language of your choice, including C, C++, PHP, and Ruby.
|
||||
|
||||
Microsoft's [Visual Studio Code][11] also [runs on the Raspberry Pi][12]. It's an open source code editor from Microsoft that supports several markup and programming languages.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/programming-languages-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/2/teach-kids-program-raspberry-pi
|
||||
[2]: https://opensource.com/resources/python
|
||||
[3]: https://www.economist.com/graphic-detail/2018/07/26/python-is-becoming-the-worlds-most-popular-coding-language
|
||||
[4]: https://thonny.org/
|
||||
[5]: https://raspberrypihq.com/getting-started-with-python-programming-and-the-raspberry-pi/
|
||||
[6]: https://opensource.com/resources/java
|
||||
[7]: https://www.bluej.org/raspberrypi/
|
||||
[8]: https://developer.mozilla.org/en-US/docs/Web/JavaScript
|
||||
[9]: https://nodejs.org/en/
|
||||
[10]: https://www.w3schools.com/nodejs/nodejs_raspberrypi.asp
|
||||
[11]: https://code.visualstudio.com/
|
||||
[12]: https://pimylifeup.com/raspberry-pi-visual-studio-code/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,72 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (13 open source backup solutions)
|
||||
[#]: via: (https://opensource.com/article/19/3/backup-solutions)
|
||||
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
|
||||
|
||||
13 open source backup solutions
|
||||
======
|
||||
Readers suggest more than a dozen of their favorite solutions for protecting data.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
|
||||
|
||||
Recently, we published a [poll][1] that asked readers to vote on their favorite open source backup solution. We offered six solutions recommended by our [moderator community][2]—Cronopete, Deja Dup, Rclone, Rdiff-backup, Restic, and Rsync—and invited readers to share other options in the comments. And you came through, offering 13 other solutions (so far) that we either hadn't considered or hadn't even heard of.
|
||||
|
||||
By far the most popular suggestion was [BorgBackup][3]. It is a deduplicating backup solution that features compression and encryption. It is supported on Linux, MacOS, and BSD and has a BSD License.
|
||||
|
||||
Second was [UrBackup][4], which does full and incremental image and file backups; you can save whole partitions or single directories. It has clients for Windows, Linux, and MacOS and has a GNU Affero Public License.
|
||||
|
||||
Third was [LuckyBackup][5]; according to its website, "it is simple to use, fast (transfers over only changes made and not all data), safe (keeps your data safe by checking all declared directories before proceeding in any data manipulation), reliable, and fully customizable." It carries a GNU Public License.
|
||||
|
||||
[Casync][6] is content-addressable synchronization—it's designed for backup and synchronizing and stores and retrieves multiple related versions of large file systems. It is licensed with the GNU Lesser Public License.
|
||||
|
||||
[Syncthing][7] synchronizes files between two computers. It is licensed with the Mozilla Public License and, according to its website, is secure and private. It works on MacOS, Windows, Linux, FreeBSD, Solaris, and OpenBSD.
|
||||
|
||||
[Duplicati][8] is a free backup solution that works on Windows, MacOS, and Linux and supports a variety of standard protocols, such as FTP, SSH, and WebDAV, as well as cloud services. It features strong encryption and is licensed with the GPL.
|
||||
|
||||
[Dirvish][9] is a disk-based virtual image backup system licensed under OSL-3.0. It also requires Rsync, Perl5, and SSH to be installed.
|
||||
|
||||
[Bacula][10]'s website says it "is a set of computer programs that permits the system administrator to manage backup, recovery, and verification of computer data across a network of computers of different kinds." It is supported on Linux, FreeBSD, Windows, MacOS, OpenBSD, and Solaris and the bulk of its source code is licensed under AGPLv3.
|
||||
|
||||
[BackupPC][11] "is a high-performance, enterprise-grade system for backing up Linux, Windows, and MacOS PCs and laptops to a server's disk," according to its website. It is licensed under the GPLv3.
|
||||
|
||||
[Amanda][12] is a backup system written in C and Perl that allows a system administrator to back up an entire network of client machines to a single server using tape, disk, or cloud-based systems. It was developed and copyrighted in 1991 at the University of Maryland and has a BSD-style license.
|
||||
|
||||
[Back in Time][13] is a simple backup utility designed for Linux. It provides a command line client and a GUI, both written in Python. To do a backup, just specify where to store snapshots, what folders to back up, and the frequency of the backups. BackInTime is licensed with GPLv2.
|
||||
|
||||
[Timeshift][14] is a backup utility for Linux that is similar to System Restore for Windows and Time Capsule for MacOS. According to its GitHub repository, "Timeshift protects your system by taking incremental snapshots of the file system at regular intervals. These snapshots can be restored at a later date to undo all changes to the system."
|
||||
|
||||
[Kup][15] is a backup solution that was created to help users back up their files to a USB drive, but it can also be used to perform network backups. According to its GitHub repository, "When you plug in your external hard drive, Kup will automatically start copying your latest changes."
|
||||
|
||||
Thanks for sharing your favorite open source backup solutions in our poll! If there are still others that haven't been mentioned yet, please share them in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/backup-solutions
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/2/linux-backup-solutions
|
||||
[2]: https://opensource.com/opensourcecom-team
|
||||
[3]: https://www.borgbackup.org/
|
||||
[4]: https://www.urbackup.org/
|
||||
[5]: http://luckybackup.sourceforge.net/
|
||||
[6]: http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html
|
||||
[7]: https://syncthing.net/
|
||||
[8]: https://www.duplicati.com/
|
||||
[9]: http://dirvish.org/
|
||||
[10]: https://www.bacula.org/
|
||||
[11]: https://backuppc.github.io/backuppc/
|
||||
[12]: http://www.amanda.org/
|
||||
[13]: https://github.com/bit-team/backintime
|
||||
[14]: https://github.com/teejee2008/timeshift
|
||||
[15]: https://github.com/spersson/Kup
|
@ -1,51 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to keep your Raspberry Pi updated)
|
||||
[#]: via: (https://opensource.com/article/19/3/how-raspberry-pi-update)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
How to keep your Raspberry Pi updated
|
||||
======
|
||||
Learn how to keep your Raspberry Pi patched and working well in the seventh article in our guide to getting started with the Raspberry Pi.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_happy_sad_developer_programming.png?itok=72nkfSQ_)
|
||||
|
||||
Just like your tablet, cellphone, and laptop, you need to keep your Raspberry Pi updated. Not only will the latest enhancements keep your Pi running smoothly, they will also keep you safer, especially if you are connected to a network. The seventh article in our guide to getting started with the Raspberry Pi shares two pieces of advice on keeping your Pi working well.
|
||||
|
||||
### Update Raspbian
|
||||
|
||||
Updating your Raspbian installation is a [two-step process][1]:
|
||||
|
||||
1. In your terminal type: **sudo apt-get update**
|
||||
The command **sudo** allows you to run **apt-get update** as admin (aka root). Note that **apt-get update** will not install anything new on your system; rather it will update the list of packages and dependencies that need to be updated.
|
||||
|
||||
|
||||
2. Then type: **sudo apt-get dist-upgrade**
|
||||
From the documentation: "Generally speaking, doing this regularly will keep your installation up to date, in that it will be equivalent to the latest released image available from [raspberrypi.org/downloads][2]."
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/update_sudo_rpi.png)
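Put together, the two steps above look like this in a terminal (the same commands as in the list, run one after the other):

```
# Refresh the package lists, then bring the installed packages up to date.
sudo apt-get update
sudo apt-get dist-upgrade
```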
|
||||
|
||||
### Be careful with rpi-update
|
||||
|
||||
Raspbian comes with another little update utility called [rpi-update][3]. This utility can be used to upgrade your Pi to the latest firmware, which may or may not be broken or buggy. You may find information explaining how to use it, but lately the recommendation is to never use this application unless you have a really good reason to do so.
|
||||
|
||||
Bottom line: Keep your systems updated!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/how-raspberry-pi-update
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/documentation/raspbian/updating.md
|
||||
[2]: https://www.raspberrypi.org/downloads/
|
||||
[3]: https://github.com/Hexxeh/rpi-update
|
@ -1,48 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Emulators and Native Linux games on the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/play-games-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
Emulators and Native Linux games on the Raspberry Pi
|
||||
======
|
||||
The Raspberry Pi is a great platform for gaming; learn how in the ninth article in our series on getting started with the Raspberry Pi.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_minecraft_copy.png?itok=iz4RF7f8)
|
||||
|
||||
Back in the [fifth article][1] in our series on getting started with the Raspberry Pi, I mentioned Minecraft as a way to teach kids to program using a gaming platform. Today we'll talk about other ways you can play games on your Raspberry Pi, as it's a great platform for gaming—with and without emulators.
|
||||
|
||||
### Gaming with emulators
|
||||
|
||||
Emulators are software that allow you to play games from different systems and different decades on your Raspberry Pi. Of the many emulators available today, the most popular for the Raspberry Pi is [RetroPie][2]. You can use it to play games from systems such as Apple II, Amiga, Atari 2600, Commodore 64, Game Boy Advance, and [many others][3].
|
||||
|
||||
If RetroPie sounds interesting, check out [these instructions][4] on how to get started, and start having fun today!
|
||||
|
||||
### Native Linux games
|
||||
|
||||
There are also plenty of native Linux games available on Raspbian, the Raspberry Pi's operating system. Make Use Of has a great article on [how to play 10 old favorites][5] like Doom and Duke Nukem 3D on the Raspberry Pi.
|
||||
|
||||
You can also use your Raspberry Pi as a [game server][6]. For example, you can set up Terraria, Minecraft, and QuakeWorld servers on the Raspberry Pi.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/play-games-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
|
||||
[2]: https://retropie.org.uk/
|
||||
[3]: https://retropie.org.uk/about/systems
|
||||
[4]: https://opensource.com/article/19/1/retropie
|
||||
[5]: https://www.makeuseof.com/tag/classic-games-raspberry-pi-without-emulators/
|
||||
[6]: https://www.makeuseof.com/tag/raspberry-pi-game-servers/
|
@ -1,67 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Fix “Network Protocol Error” On Mozilla Firefox)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-fix-network-protocol-error-on-mozilla-firefox/)
|
||||
[#]: author: (SK https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Fix “Network Protocol Error” On Mozilla Firefox
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-logo-1-720x340.png)
|
||||
|
||||
Mozilla Firefox has been my default web browser for years. I have been using it for my day-to-day web activities, such as accessing my mail, browsing my favorite websites, etc. Today, I experienced a strange error while using Firefox. I tried to share one of our guides on Reddit and got the following error message.
|
||||
|
||||
```
|
||||
Network Protocol Error
|
||||
|
||||
Firefox has experienced a network protocol violation that cannot be repaired.
|
||||
|
||||
The page you are trying to view cannot be shown because an error in the network protocol was detected.
|
||||
|
||||
Please contact the website owners to inform them of this problem.
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox.png)
|
||||
|
||||
To be honest, I panicked a bit and thought my system might be infected with some kind of malware. LOL! I was wrong! I am using the latest Firefox version on my Arch Linux desktop. I opened the same link in the Chromium browser and it worked fine! So I guessed it was a Firefox-related error. After Googling a bit, I fixed this issue as described below.
|
||||
|
||||
This kind of problem occurs mostly because of the **browser’s cache**. If you’ve encountered these kinds of errors, such as “Network Protocol Error” or “Corrupted Content Error”, follow any one of these methods.
|
||||
|
||||
**Method 1:**
|
||||
|
||||
To fix “Network Protocol Error” or “Corrupted Content Error”, you need to reload the webpage while bypassing the cache. To do so, Press **Ctrl + F5** or **Ctrl + Shift + R** keys. It will reload the webpage fresh from the server, not from the Firefox cache. Now the web page should work just fine.
|
||||
|
||||
**Method 2:**
|
||||
|
||||
If method 1 doesn’t work, please try this method.
|
||||
|
||||
Go to **Edit -> Preferences**. From the Preferences window, navigate to the **Privacy & Security** tab on the left pane. Now clear the Firefox cache by clicking on the **“Clear Data”** option.
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-1.png)
|
||||
|
||||
Make sure you have checked both “Cookies and Site Data” and “Cached Web Content” options and click **“Clear”**.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/firefox-2.png)
|
||||
|
||||
Done! Now the cookies and offline content will be removed. Please note that Firefox may sign you out of the logged-in websites. You can re-login to those websites later. Finally, close the Firefox browser and restart your system. Now the webpage will load without any issues.
|
||||
|
||||
Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-fix-network-protocol-error-on-mozilla-firefox/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
@ -1,48 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Let's get physical: How to use GPIO pins on the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/gpio-pins-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
Let's get physical: How to use GPIO pins on the Raspberry Pi
|
||||
======
|
||||
The 10th article in our series on getting started with Raspberry Pi explains how the GPIO pins work.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspbery_pi_zero_wireless_hardware.jpg?itok=9YFzdxFQ)
|
||||
|
||||
Until now, this series has focused on the Raspberry Pi's software side, but today we'll get into the hardware. The availability of [general-purpose input/output][1] (GPIO) pins was one of the main features that interested me in the Pi when it first came out. GPIO allows you to programmatically interact with the physical world by attaching sensors, relays, and other types of circuitry to the Raspberry Pi.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/raspberrypi_10_gpio-pins-pi2.jpg)
|
||||
|
||||
Each pin on the board either has a predefined function or is designated as general purpose. Also, different Raspberry Pi models have either 26 or 40 pins for you to use at your discretion. Wikipedia has a [good overview of each pin][2] and its functionality.
|
||||
|
||||
You can do many things with the Pi's GPIO pins. I've written some other articles about using the GPIOs, including a trio of articles ([Part I][3], [Part II][4], and [Part III][5]) about controlling holiday lights with the Raspberry Pi while using open source software to pair the lights with music.
|
||||
|
||||
The Raspberry Pi community has done a great job in creating libraries in different programming languages, so you should be able to interact with the pins using [C][6], [Python][7], [Scratch][8], and other languages.
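As a minimal, hedged sketch of how simple this can be even from the shell: the Raspbian releases of this era expose the kernel's legacy sysfs GPIO interface (newer kernels favor libgpiod instead), and BCM GPIO 17 here is just an arbitrary, unused pin chosen for illustration.

```
# Export BCM GPIO 17, configure it as an output, drive it high, then release it.
echo 17  | sudo tee /sys/class/gpio/export
echo out | sudo tee /sys/class/gpio/gpio17/direction
echo 1   | sudo tee /sys/class/gpio/gpio17/value
echo 17  | sudo tee /sys/class/gpio/unexport
```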
|
||||
|
||||
Also, if you want the ultimate experience to interact with the physical world, pick up a [Raspberry Pi Sense Hat][9]. It is an affordable expansion board for the Pi that plugs into the GPIO pins so you can programmatically interact with LEDs, joysticks, and barometric pressure, temperature, humidity, gyroscope, accelerometer, and magnetometer sensors.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/documentation/usage/gpio/
|
||||
[2]: https://en.wikipedia.org/wiki/Raspberry_Pi#General_purpose_input-output_(GPIO)_connector
|
||||
[3]: https://opensource.com/life/15/2/music-light-show-with-raspberry-pi
|
||||
[4]: https://opensource.com/life/15/12/ssh-your-christmas-tree-raspberry-pi
|
||||
[5]: https://opensource.com/article/18/12/lightshowpi-raspberry-pi
|
||||
[6]: https://www.bigmessowires.com/2018/05/26/raspberry-pi-gpio-programming-in-c/
|
||||
[7]: https://www.raspberrypi.org/documentation/usage/gpio/python/README.md
|
||||
[8]: https://www.raspberrypi.org/documentation/usage/gpio/scratch2/README.md
|
||||
[9]: https://opensource.com/life/16/4/experimenting-raspberry-pi-sense-hat
|
@ -1,62 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Learn about computer security with the Raspberry Pi and Kali Linux)
|
||||
[#]: via: (https://opensource.com/article/19/3/computer-security-raspberry-pi)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
|
||||
|
||||
Learn about computer security with the Raspberry Pi and Kali Linux
|
||||
======
|
||||
Raspberry Pi is a great way to learn about computer security. Learn how in the 11th article in our getting-started series.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
|
||||
|
||||
Is there a hotter topic in technology than securing your computer? Some experts will tell you that there is no such thing as perfect security. They joke that if you want your server or application to be truly secure, then turn off your server, unplug it from the network, and put it in a safe somewhere. The problem with that should be obvious: What good is an app or server that nobody can use?
|
||||
|
||||
That's the conundrum around security. How can we make something secure enough and still usable and valuable? I am not a security expert by any means, although I hope to be one day. With that in mind, I thought it would make sense to share some ideas about what you can do with a Raspberry Pi to learn more about security.
|
||||
|
||||
I should note that, like the other articles in this series dedicated to Raspberry Pi beginners, my goal is not to dive in deep, rather to light a fire of interest for you to learn more about these topics.
|
||||
|
||||
### Kali Linux
|
||||
|
||||
When it comes to "doing security things," one of the Linux distributions that comes to mind is [Kali Linux][1]. Kali's development is primarily focused on forensics and penetration testing. It has more than 600 preinstalled [penetration-testing programs][2] to test your computer's security, and a [forensics mode][3], which prevents it from touching the internal hard drive or swap space of the system being examined.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/raspberrypi_11_kali.png)
|
||||
|
||||
Like Raspbian, Kali Linux is based on the Debian distribution, and you can find directions on installing it on the Raspberry Pi in its main [documentation portal][4]. If you installed Raspbian or another Linux distribution on your Raspberry Pi, you should have no problem installing Kali. Kali Linux's creators have even put together [training, workshops, and certifications][5] to help boost your career in the security field.
|
||||
|
||||
### Other Linux distros
|
||||
|
||||
Most standard Linux distributions, like Raspbian, Ubuntu, and Fedora, also have [many security tools available][6] in their repositories. Some great tools to explore include [Nmap][7], [Wireshark][8], [auditctl][9], and [SELinux][10].
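As a small, hedged taste of one of those tools (only scan machines you own or are authorized to test), the -sV flag asks Nmap to probe open ports for service and version information:

```
# Service/version scan of the local machine.
sudo nmap -sV localhost
```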
|
||||
|
||||
### Projects
|
||||
|
||||
There are many other security-related projects you can run on your Raspberry Pi, such as [Honeypots][11], [Ad blockers][12], and [USB sanitizers][13]. Take some time and learn about them!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/computer-security-raspberry-pi
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.kali.org/
|
||||
[2]: https://en.wikipedia.org/wiki/Kali_Linux#Development
|
||||
[3]: https://docs.kali.org/general-use/kali-linux-forensics-mode
|
||||
[4]: https://docs.kali.org/kali-on-arm/install-kali-linux-arm-raspberry-pi
|
||||
[5]: https://www.kali.org/penetration-testing-with-kali-linux/
|
||||
[6]: https://linuxblog.darkduck.com/2019/02/9-best-linux-based-security-tools.html
|
||||
[7]: https://nmap.org/
|
||||
[8]: https://www.wireshark.org/
|
||||
[9]: https://linux.die.net/man/8/auditctl
|
||||
[10]: https://opensource.com/article/18/7/sysadmin-guide-selinux
|
||||
[11]: https://trustfoundry.net/honeypi-easy-honeypot-raspberry-pi/
|
||||
[12]: https://pi-hole.net/
|
||||
[13]: https://www.circl.lu/projects/CIRCLean/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,77 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (14 days of celebrating the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
|
||||
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
|
||||
|
||||
14 days of celebrating the Raspberry Pi
|
||||
======
|
||||
|
||||
In the 14th and final article in our series on getting started with the Raspberry Pi, take a look back at all the things we've learned.
|
||||
|
||||
![][1]
|
||||
|
||||
**Happy Pi Day!**
|
||||
|
||||
Every year on March 14th, we geeks celebrate Pi Day. In the way we abbreviate dates in the US (MM/DD), March 14 is written 03/14, which numerically reminds us of 3.14, the first three digits of [pi][2]. What many Americans don't realize is that virtually no other country in the world uses this [date format][3], so Pi Day pretty much only works in the US, though it is celebrated globally.
|
||||
|
||||
Wherever you are in the world, let's celebrate the Raspberry Pi and wrap up this series by reviewing the topics we've covered in the past two weeks:
|
||||
|
||||
* Day 1: [Which Raspberry Pi should you choose?][4]
|
||||
* Day 2: [How to buy a Raspberry Pi][5]
|
||||
* Day 3: [How to boot up a new Raspberry Pi][6]
|
||||
* Day 4: [Learn Linux with the Raspberry Pi][7]
|
||||
* Day 5: [5 ways to teach kids to program with Raspberry Pi][8]
|
||||
* Day 6: [3 popular programming languages you can learn with Raspberry Pi][9]
|
||||
* Day 7: [How to keep your Raspberry Pi updated][10]
|
||||
* Day 8: [How to use your Raspberry Pi for entertainment][11]
|
||||
* Day 9: [Play games on the Raspberry Pi][12]
|
||||
* Day 10: [Let's get physical: How to use GPIO pins on the Raspberry Pi][13]
|
||||
* Day 11: [Learn about computer security with the Raspberry Pi][14]
|
||||
* Day 12: [Do advanced math with Mathematica on the Raspberry Pi][15]
|
||||
* Day 13: [Contribute to the Raspberry Pi community][16]
|
||||
|
||||
|
||||
|
||||
![Pi Day illustration][18]
|
||||
|
||||
I'll end this series by thanking everyone who was brave enough to follow along and especially those who learned something from it during these past 14 days! I also want to encourage everyone to keep expanding their knowledge about the Raspberry Pi and all of the open (and closed) source technology that has been built around it.
|
||||
|
||||
I also encourage you to learn about other cultures, philosophies, religions, and worldviews. What makes us human is this amazing (and sometimes amusing) ability that we have to adapt not only to external environmental circumstances—but also intellectual ones.
|
||||
|
||||
No matter what you do, keep learning!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/happy-pi-day
|
||||
|
||||
作者:[Anderson Silva (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
|
||||
[2]: https://www.piday.org/million/
|
||||
[3]: https://en.wikipedia.org/wiki/Date_format_by_country
|
||||
[4]: https://opensource.com/article/19/3/which-raspberry-pi-choose
|
||||
[5]: https://opensource.com/article/19/3/how-buy-raspberry-pi
|
||||
[6]: https://opensource.com/article/19/3/how-boot-new-raspberry-pi
|
||||
[7]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
|
||||
[8]: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
|
||||
[9]: https://opensource.com/article/19/3/programming-languages-raspberry-pi
|
||||
[10]: https://opensource.com/article/19/3/how-raspberry-pi-update
|
||||
[11]: https://opensource.com/article/19/3/raspberry-pi-entertainment
|
||||
[12]: https://opensource.com/article/19/3/play-games-raspberry-pi
|
||||
[13]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
|
||||
[14]: https://opensource.com/article/19/3/learn-about-computer-security-raspberry-pi
|
||||
[15]: https://opensource.com/article/19/3/do-math-raspberry-pi
|
||||
[16]: https://opensource.com/article/19/3/contribute-raspberry-pi-community
|
||||
[17]: /file/426561
|
||||
[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)
|
@ -0,0 +1,169 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Navigate Inside A Directory/Folder In Linux Without CD Command?)
|
||||
[#]: via: (https://www.2daygeek.com/navigate-switch-directory-without-using-cd-command-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Navigate Inside A Directory/Folder In Linux Without CD Command?
|
||||
======
|
||||
|
||||
As everybody knows, we can’t navigate into a directory in Linux without the cd command.
|
||||
|
||||
Yes, that’s true, but Linux has a builtin command called `shopt` that helps us solve this issue.
|
||||
|
||||
[shopt][1] is a shell builtin command for setting and unsetting various bash shell options. Since it is built in, there is no need to install anything.
|
||||
|
||||
Yes, we can navigate into a directory without the cd command after enabling this option.
|
||||
|
||||
We will show you how to do this in this article. It is a small tweak, but it’s very useful for newbies who are moving from Windows to Linux.
|
||||
|
||||
This is not so useful for Linux administrators, because we are used to navigating with the cd command and it is already second nature.
|
||||
|
||||
If you try to navigate into a directory/folder in Linux without the cd command, you will get the following error message. This is the default behavior in Linux.
|
||||
|
||||
```
|
||||
$ Documents/
|
||||
bash: Documents/: Is a directory
|
||||
```
|
||||
|
||||
To achieve this, we need to add the following line to the user’s `.bashrc` file.
|
||||
|
||||
### What Is the .bashrc File?
|
||||
|
||||
The “.bashrc” file is a shell script which is run every time a user opens a new shell in interactive mode.
|
||||
|
||||
You can add any command in that file that you want to type at the command prompt.
|
||||
|
||||
The .bashrc file itself contains a series of configurations for the terminal session. This includes setting up or enabling: colouring, completion, the shell history, command aliases and more.
|
||||
|
||||
```
|
||||
$ vi ~/.bashrc
|
||||
|
||||
shopt -s autocd
|
||||
```
|
||||
|
||||
Run the following command to make the changes take effect.
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
We have done all the configuration. Now run a simple test to confirm whether it is working.
|
||||
|
||||
```
|
||||
$ Documents/
|
||||
cd -- Documents/
|
||||
|
||||
$ daygeek/
|
||||
cd -- daygeek/
|
||||
|
||||
$ /home/daygeek/Documents/daygeek
|
||||
cd -- /home/daygeek/Documents/daygeek
|
||||
|
||||
$ pwd
|
||||
/home/daygeek/Documents/daygeek
|
||||
```
|
||||
|
||||
![][3]
|
||||
Yes, it’s working fine as expected.
|
||||
|
||||
However, this works in the `fish shell` out of the box, without making any changes to the `.bashrc` file.
|
||||
![][4]
|
||||
|
||||
If you would like to enable this only temporarily, use the following commands to set and unset it. The setting lasts only for the current shell session and will be gone once you close the terminal or reboot the system.
|
||||
|
||||
```
|
||||
# shopt -s autocd
|
||||
|
||||
# shopt | grep autocd
|
||||
autocd on
|
||||
|
||||
# shopt -u autocd
|
||||
|
||||
# shopt | grep autocd
|
||||
autocd off
|
||||
```
|
||||
|
||||
The shopt command offers many other options; if you want to see them all, run the following command.
|
||||
|
||||
```
|
||||
$ shopt
|
||||
autocd on
|
||||
assoc_expand_once off
|
||||
cdable_vars off
|
||||
cdspell on
|
||||
checkhash off
|
||||
checkjobs off
|
||||
checkwinsize on
|
||||
cmdhist on
|
||||
compat31 off
|
||||
compat32 off
|
||||
compat40 off
|
||||
compat41 off
|
||||
compat42 off
|
||||
compat43 off
|
||||
compat44 off
|
||||
complete_fullquote on
|
||||
direxpand off
|
||||
dirspell off
|
||||
dotglob off
|
||||
execfail off
|
||||
expand_aliases on
|
||||
extdebug off
|
||||
extglob off
|
||||
extquote on
|
||||
failglob off
|
||||
force_fignore on
|
||||
globasciiranges on
|
||||
globstar off
|
||||
gnu_errfmt off
|
||||
histappend on
|
||||
histreedit off
|
||||
histverify off
|
||||
hostcomplete on
|
||||
huponexit off
|
||||
inherit_errexit off
|
||||
interactive_comments on
|
||||
lastpipe off
|
||||
lithist off
|
||||
localvar_inherit off
|
||||
localvar_unset off
|
||||
login_shell off
|
||||
mailwarn off
|
||||
no_empty_cmd_completion off
|
||||
nocaseglob off
|
||||
nocasematch off
|
||||
nullglob off
|
||||
progcomp on
|
||||
progcomp_alias off
|
||||
promptvars on
|
||||
restricted_shell off
|
||||
shift_verbose off
|
||||
sourcepath on
|
||||
xpg_echo off
|
||||
```
|
||||
|
||||
I have also found a few other utilities that help us navigate to a directory faster in Linux than the cd command does.
|
||||
|
||||
Those are pushd, popd, the up shell script, and the bd utility. We will cover these topics in upcoming articles; a quick taste of pushd and popd is sketched below.
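Purely as a hedged teaser (pushd and popd are bash builtins; the up script and bd are separate utilities not shown here):

```
$ pushd /tmp     # save the current directory on a stack and switch to /tmp
/tmp ~
$ popd           # return to the saved directory
~
```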
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/navigate-switch-directory-without-using-cd-command-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
|
||||
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2019/03/navigate-switch-directory-without-using-cd-command-in-linux-1.jpg
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2019/03/navigate-switch-directory-without-using-cd-command-in-linux-2.jpg
|
@ -0,0 +1,264 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "zhangxiangping "
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "How To Parse And Pretty Print JSON With Linux Commandline Tools"
|
||||
[#]: via: "https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/"
|
||||
[#]: author: "EDITOR https://www.ostechnix.com/author/editor/"
|
||||
|
||||
How To Parse And Pretty Print JSON With Linux Commandline Tools
|
||||
======
|
||||
|
||||
**JSON** is a lightweight and language-independent data storage format that is easy to integrate with most programming languages and also easy for humans to understand, when properly formatted, of course. JSON stands for **J**ava**S**cript **O**bject **N**otation; although it started with JavaScript and is primarily used to exchange data between server and browser, it is now used in many fields, including embedded systems. Here we’re going to parse and pretty print JSON with command line tools on Linux, which is extremely useful for handling large amounts of JSON data in shell scripts.
|
||||
|
||||
### What is pretty printing?
|
||||
|
||||
JSON data is structured to be somewhat human readable. However, in most cases JSON data is stored on a single line, often without even a line ending character.
|
||||
|
||||
Obviously that’s not very convenient for reading and editing manually.
|
||||
|
||||
That’s when pretty print is useful. The name is quite self explanatory, re-formatting the JSON text to be more legible by humans. This is known as **JSON pretty printing**.
|
||||
|
||||
### Parse And Pretty Print JSON With Linux Commandline Tools
|
||||
|
||||
JSON data can be parsed with command line text processors like **awk**, **sed**, and **grep**. In fact, JSON.awk is an awk script that does exactly that. However, there are dedicated tools for the same purpose.
|
||||
|
||||
1. **jq** or **jshon** – JSON parsers for the shell; both of them are quite useful.
|
||||
|
||||
2. Shell scripts like **JSON.sh** or **jsonv.sh** to parse JSON in bash, zsh or dash shell.
|
||||
|
||||
3. **JSON.awk** , JSON parser awk script.
|
||||
|
||||
4. Python modules like **json.tool**.
|
||||
|
||||
5. **underscore-cli** , Node.js and javascript based.
|
||||
|
||||
|
||||
|
||||
|
||||
In this tutorial I’m focusing only on **jq**, which is a quite powerful JSON parser for shells with advanced filtering and scripting capability.
|
||||
|
||||
### JSON pretty printing
|
||||
|
||||
JSON data often arrives on a single line that is nearly illegible for humans, so to make it somewhat readable, JSON pretty printing is here.
|
||||
|
||||
**Example:** To get your external IP address in JSON format from **jsonip.com**, use **curl** or **wget** like below.
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O -
|
||||
```
|
||||
|
||||
The actual data looks like this:
|
||||
|
||||
```
|
||||
{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}
|
||||
```
|
||||
|
||||
Now pretty print it with jq:
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O - | jq '.'
|
||||
```
|
||||
|
||||
This should look like below, after filtering the result with jq.
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"ip": "111.222.333.444",
|
||||
|
||||
"about": "/about",
|
||||
|
||||
"Pro!": "http://getjsonip.com"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
The same thing can be done with the Python **json.tool** module. Here is an example:
|
||||
|
||||
```
|
||||
$ cat anything.json | python -m json.tool
|
||||
```
|
||||
|
||||
This Python based solution should be fine for most users, but it’s not that useful where Python is not pre-installed or could not be installed, like on embedded systems.
|
||||
|
||||
However, the json.tool Python module has a distinct advantage: it’s cross-platform, so you can use it seamlessly on Windows, Linux, or macOS.
|
||||
|
||||
|
||||
### How to parse JSON with jq
|
||||
|
||||
First, you need to install jq. It’s already packaged by most GNU/Linux distributions; install it with the respective package manager command.
|
||||
|
||||
On Arch Linux:
|
||||
|
||||
```
|
||||
$ sudo pacman -S jq
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint:
|
||||
|
||||
```
|
||||
$ sudo apt-get install jq
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install jq
|
||||
```
|
||||
|
||||
On openSUSE:
|
||||
|
||||
```
|
||||
$ sudo zypper install jq
|
||||
```
|
||||
|
||||
For other OS or platforms, see the [official installation instructions][1].
|
||||
|
||||
**Basic filters and identifiers of jq**
|
||||
|
||||
jq can read JSON data either from **stdin** or from a **file**. You will use both, depending on the situation.
|
||||
|
||||
The single symbol **.** is the most basic filter. These filters are also called **object identifier-index** filters. Using a single **.** along with jq basically pretty prints the input JSON file.
|
||||
|
||||
**Single quotes** – You don’t always have to use single quotes, but if you’re combining several filters on a single line, then you must use them.
|
||||
|
||||
**Double quotes** – You have to enclose any special character like **@**, **#**, or **$** within double quotes, as in this example: **jq .foo.”@bar”**
|
||||
|
||||
**Raw data print** – If, for any reason, you need only the final parsed data, not enclosed within double quotes, use the -r flag with the jq command, like this: **jq -r .foo.bar**.
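For instance, using the jsonip.com response shown earlier, the quoted and raw outputs compare like this (the field name is taken from that response):

```
$ wget -cq http://jsonip.com/ -O - | jq .about
"/about"

$ wget -cq http://jsonip.com/ -O - | jq -r .about
/about
```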
|
||||
|
||||
**Parsing specific data**
|
||||
|
||||
To filter out a specific part of JSON, you’ve to look into the pretty printed JSON file’s data hierarchy.
|
||||
|
||||
An example of JSON data, from Wikipedia:
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"firstName": "John",
|
||||
|
||||
"lastName": "Smith",
|
||||
|
||||
"age": 25,
|
||||
|
||||
"address": {
|
||||
|
||||
"streetAddress": "21 2nd Street",
|
||||
|
||||
"city": "New York",
|
||||
|
||||
"state": "NY",
|
||||
|
||||
"postalCode": "10021"
|
||||
|
||||
},
|
||||
|
||||
"phoneNumber": [
|
||||
|
||||
{
|
||||
|
||||
"type": "home",
|
||||
|
||||
"number": "212 555-1234"
|
||||
|
||||
},
|
||||
|
||||
{
|
||||
|
||||
"type": "fax",
|
||||
|
||||
"number": "646 555-4567"
|
||||
|
||||
}
|
||||
|
||||
],
|
||||
|
||||
"gender": {
|
||||
|
||||
"type": "male"
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
I’m going to use this JSON data as the example in this tutorial, saved as **sample.json**.
|
||||
|
||||
Let’s say I want to filter out the address from sample.json file. So the command should be like:
|
||||
|
||||
```
|
||||
$ jq .address sample.json
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"streetAddress": "21 2nd Street",
|
||||
|
||||
"city": "New York",
|
||||
|
||||
"state": "NY",
|
||||
|
||||
"postalCode": "10021"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Again, let’s say I want the postal code; then I have to add another **object identifier-index**, i.e. another filter.
|
||||
|
||||
```
|
||||
$ cat sample.json | jq .address.postalCode
|
||||
```
|
||||
|
||||
Also note that the **filters are case sensitive**, and you have to use exactly the same string to get meaningful output instead of null.
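For example, with the same sample.json, getting the case wrong silently returns null rather than an error:

```
$ jq .address.postalcode sample.json
null

$ jq .address.postalCode sample.json
"10021"
```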
|
||||
|
||||
**Parsing elements from JSON array**
|
||||
|
||||
Elements of a JSON array are enclosed within square brackets, which makes them quite versatile to use.
|
||||
|
||||
To parse elements from an array, you have to use the **[] identifier** along with the other object identifier-indexes.
|
||||
|
||||
In this sample JSON data, the phone numbers are stored inside an array; to get all the contents of this array, you only have to use the brackets, as in this example.
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[] sample.json
|
||||
```
|
||||
|
||||
Let’s say you just want the first element of the array; then use the array index, starting from 0: for the first item, use **[0]**, and for each following item, increment the index by one.
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[0] sample.json
|
||||
```
|
||||
|
||||
**Scripting examples**
|
||||
|
||||
Let’s say I want only the number for home, not the entire JSON array data. Here’s where scripting within the jq command comes in handy.
|
||||
|
||||
```
|
||||
$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'
|
||||
```
|
||||
|
||||
Here I’m first piping the result of one filter to another, then using the select attribute to select a particular type of data, and again piping the result to another filter.
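As a variation on the same filter chain (same sample.json, just selecting a different type), you can pull out the fax number instead:

```
$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "fax") | .number'
646 555-4567
```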
|
||||
|
||||
Explaining every type of jq filter and script is beyond the scope and purpose of this tutorial. It’s highly suggested to read the jq manual for a better understanding.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/
|
||||
|
||||
作者:[EDITOR][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://stedolan.github.io/jq/download/
|
@ -0,0 +1,317 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to create portable documents with CBZ and DjVu)
|
||||
[#]: via: (https://opensource.com/article/19/3/comic-book-archive-djvu)
|
||||
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
|
||||
|
||||
How to create portable documents with CBZ and DjVu
|
||||
======
|
||||
|
||||
Stop using PDFs with these two smart digital archive formats.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_stack_library_reading.jpg?itok=uulcS8Sw)
|
||||
|
||||
Recently, I discovered that my great-great-grandfather wrote two books near the turn of the 20th century: one about sailing and the other about his career as [New York City's fire chief][1]. The books have a niche audience, but since they are part of my family history, I wanted to preserve a digital copy of each. But, I wondered, what portable document format is best suited for such an endeavor?
|
||||
|
||||
I decided early on that PDF was not an option. The format, while good for printing preflight, seems condemned to nonstop feature bloat, and it produces documents that are difficult to introspect and edit. I wanted a smarter format with similar features. Two came to mind: comic book archive and DjVu.
|
||||
|
||||
### Comic book archive
|
||||
|
||||
[Comic book archive][2] is a simple format most often used, as the name suggests, for comic books. You can see examples of comic book archives on sites like [Comic Book Plus][3] and [The Digital Comic Museum][4].
|
||||
|
||||
The greatest feature of a comic book archive is also its weakest: it's so simple, it's almost more of a convention than a format. In fact, a comic book archive is just a ZIP, TAR, 7Z, or RAR archive given the extension .cbz, .cbt, .cb7, or .cbr, respectively. It has no standard for storing metadata.
|
||||
|
||||
They are, however, very easy to create.
|
||||
|
||||
#### Creating comic book archives
|
||||
|
||||
1. Create a directory full of image files, and rename the images so that they have an inherent order:
|
||||
|
||||
```
|
||||
$ n=0 && for i in *.png ; do mv $i `printf %04d $n`.png ; done
|
||||
```
|
||||
|
||||
|
||||
|
||||
2. Archive the files using your favorite archive tool. In my experience, CBZ is best supported.
|
||||
|
||||
```
|
||||
$ zip comicbook.zip -r *.png
|
||||
```
|
||||
|
||||
|
||||
|
||||
3. Finally, rename the file with the appropriate extension.
|
||||
|
||||
```
|
||||
$ mv comicbook.zip comicbook.cbz
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
The resulting file should open on most of your devices. On Linux, both [Evince][5] and [Okular][6] can open CBZ files. On Android, [Document Viewer][7] and [Bubble][8] can open them.
|
||||
|
||||
#### Uncompressing comic book archives
|
||||
|
||||
Getting your data back out of a comic book archive is also easy: just unarchive the CBZ file.
|
||||
|
||||
Since your favorite archive tool may not recognize the .cbz extension as a valid archive, it's best to rename it back to its native extension:
|
||||
```
|
||||
|
||||
```
|
||||
|
||||
$ mv comicbook.cbz comicbook.zip
|
||||
$ unzip comicbook.zip
|
||||
|
||||
### DjVu
|
||||
|
||||
A more advanced format, developed more than 20 years ago by AT&T, is [DjVu][9] (pronounced "déjà vu"). It's a digital document format with advanced compression technology and is viewable in more applications than you probably realize, including [Evince][5], [Okular][6], [DjVu.js][10] online, the [DjVu.js viewer][11] Firefox extension, [GNU Emacs][12], [Document Viewer][7] on Android, and the open source, cross-platform [DjView][13] viewer on Sourceforge.
|
||||
|
||||
You can read more about DjVu and find sample .djvu files, at [djvu.org][14].
|
||||
|
||||
DjVu has several appealing features, including image compression, outline (bookmark) structure, and support for embedded text. It's easy to introspect and edit using free and open source tools.
|
||||
|
||||
#### Installing DjVu
|
||||
|
||||
The open source toolchain is [DjVuLibre][15], which you can find in your distribution's software repository. For example, on Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install djvulibre
|
||||
```
|
||||
|
||||
#### Creating a DjVu file
|
||||
|
||||
A .djvu is an image that has been encoded as a DjVu file. A .djvu can contain one or more images (stored as "pages").
|
||||
|
||||
To manually produce a DjVu, you can use one of two encoders: **c44** for high-quality images or **cjb2** for simple bi-tonal images. Each encoder accepts a different image format: c44 can process .pnm or .jpeg files, while cjb2 can process .pbm or .tiff images.
|
||||
|
||||
If you need to preprocess an image, you can do that in a terminal with [Image Magick][16], using the **-density** option to define your desired resolution:
|
||||
|
||||
```
|
||||
$ convert -density 200 foo.png foo.pnm
|
||||
```
|
||||
|
||||
Then you can convert it to DjVu:
|
||||
|
||||
```
|
||||
$ c44 -dpi 200 foo.pnm foo.djvu
|
||||
```
|
||||
|
||||
If your image is simple, like black text on a white page, you can try to convert it using the simpler encoder. If necessary, use Image Magick first to convert it to a compatible intermediate format:
|
||||
|
||||
```
|
||||
$ convert -density 200 foo.png foo.pbm
|
||||
```
|
||||
|
||||
And then convert it to DjVu:
|
||||
|
||||
```
|
||||
$ cjb2 -dpi 200 foo.pbm foo.djvu
|
||||
```
|
||||
|
||||
You now have a simple, single-page .djvu document.
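If you want to peek inside the result, DjVuLibre also ships an introspection tool; a minimal check looks like this (the exact chunk listing depends on your image):

```
# Print the internal structure (pages, resolution, compression chunks) of the file.
$ djvudump foo.djvu
```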
|
||||
|
||||
#### Creating a multi-page DjVu file
|
||||
|
||||
While a single-page DjVu can be useful, given DjVu's sometimes excellent compression, it's most commonly used as a multi-page format.
|
||||
|
||||
Assuming you have a directory of many .djvu files, you can bundle them together with the **djvm** command:
|
||||
|
||||
```
|
||||
$ djvm -c pg_1.djvu two.djvu 003.djvu mybook.djvu
|
||||
```
|
||||
|
||||
Unlike a CBZ archive, the names of the bundled images have no effect on their order in the DjVu document; rather, djvm preserves the order you provide on the command line. If you had the foresight to name them in a natural sorting order (001.djvu, 002.djvu, 003.djvu, 004.djvu, and so on), you can use a wildcard:
|
||||
|
||||
```
|
||||
$ djvm -c *.djvu mybook.djvu
|
||||
```
|
||||
|
||||
#### Manipulating a DjVu document
|
||||
|
||||
It's easy to edit DjVu documents with **djvm**. For instance, you can insert a page into an existing DjVu document:
|
||||
|
||||
```
|
||||
$ djvm -i mybook.djvu newpage.djvu 2
|
||||
```
|
||||
|
||||
In this example, the page _newpage.djvu_ becomes the new page 2 in the file _mybook.djvu_.
|
||||
|
||||
You can also delete a page. For example, to delete page 4 from _mybook.djvu_ :
|
||||
|
||||
```
|
||||
$ djvm -d mybook.djvu 4
|
||||
```
|
||||
|
||||
#### Setting an outline
|
||||
|
||||
You can add metadata to a DjVu file, such as an outline (commonly called "bookmarks"). To do this manually, create a plaintext file with the document's outline. A DjVu outline is expressed in a [Lisp][17]-like structure, with an opening **bookmarks** element followed by bookmark names and page numbers:
|
||||
```
|
||||
(bookmarks
|
||||
("Front cover" "#1")
|
||||
("Chapter 1" "#3")
|
||||
("Chapter 2" "#18")
|
||||
("Chapter 3" "#26")
|
||||
)
|
||||
```
|
||||
|
||||
The parentheses define levels in the outline. The outline currently has only top-level bookmarks, but any section can have a subsection by delaying its closing parenthesis. For example, to add a subsection to Chapter 1:
|
||||
```
|
||||
(bookmarks
|
||||
("Front cover" "#1")
|
||||
("Chapter 1" "#3"
|
||||
("Section 1" "#6"))
|
||||
("Chapter 2" "#18")
|
||||
("Chapter 3" "#26")
|
||||
)
|
||||
```
|
||||
|
||||
Once the outline is complete, save the file and apply it to your DjVu file using the **djvused** command:
|
||||
|
||||
```
|
||||
$ djvused -e 'set-outline outline.txt' -s mybook.djvu
|
||||
```
|
||||
|
||||
Open the DjVu file to see the outline.
|
||||
|
||||
![A DjVu with an outline as viewed in Okular][19]
|
||||
|
||||
#### Embedding text
|
||||
|
||||
If you want to store the text of a document you're creating, you can embed text elements ("hidden text" in **djvused** terminology) in your DjVu file so that applications like Okular or DjView can select and copy the text to a user's clipboard.
|
||||
|
||||
This is a complex operation because, in order to embed text, you must first have text. If you have access to a good OCR application (or the time and dedication to transcribe the printed page), you may have that data, but then you must map the text to the bitmap image.
|
||||
|
||||
Once you have the text and the coordinates for each line (or, if you prefer, for each word), you can write a **djvused** script with blocks for each page:
|
||||
```
|
||||
select; remove-ant; remove-txt
|
||||
# -------------------------
|
||||
select "p0004.djvu" # page 4
|
||||
set-txt
|
||||
(page 0 0 2550 3300
|
||||
(line 1661 2337 2235 2369 "Fires and Fire-fighters")
|
||||
(line 1761 2337 2235 2369 "by John Kenlon"))
|
||||
|
||||
.
|
||||
# -------------------------
|
||||
select "p0005.djvu" # page 5
|
||||
set-txt
|
||||
(page 0 0 2550 3300
|
||||
(line 294 2602 1206 2642 "Some more text here, blah blah blah."))
|
||||
```
|
||||
|
||||
The integers for each line represent the minimum and maximum locations for the X and Y coordinates of each line ( **xmin** , **ymin** , **xmax** , **ymax** ). Each line is a rectangle measured in pixels, with an origin at the _bottom-left_ corner of the page.
|
||||
|
||||
You can define embedded text elements as words, lines, and hyperlinks, and you can map complex regions with shapes other than just rectangles. You can also embed specially defined metadata, such as BibTeX keys, which are expressed in lowercase (year, booktitle, editor, author, and so on), and DocInfo keys, borrowed from the PDF spec, always starting with an uppercase letter (Title, Author, Subject, Creator, Producer, CreationDate, ModDate, and so on).
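Metadata can be applied much like the outline: put the keys in a plain text file and apply it with **djvused** 's **set-meta** command. The following is only a sketch, not a transcript from the original workflow; the file name and values are examples:

```
$ cat metadata.txt
Title   "Fires and Fire-fighters"
Author  "John Kenlon"

$ djvused -e 'set-meta metadata.txt' -s mybook.djvu
```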
|
||||
|
||||
#### Automating DjVu creation
|
||||
|
||||
While it's nice to be able to handcraft a finely detailed DjVu document, if you adopt DjVu as an everyday format, you'll notice that your applications lack some of the conveniences available in the more ubiquitous PDF. For instance, few (if any) applications offer a convenient _Print to DjVu_ or _Export to DjVu_ option, as they do for PDF.
|
||||
|
||||
However, you can still use DjVu by leveraging PDF as an intermediate format.
|
||||
|
||||
Unfortunately, the library required for easy, automated DjVu conversion is licensed under the CPL, which has requirements that cannot be satisfied by the GPL code in the toolchain. For this reason, it can't be distributed as a compiled library, but you're free to compile it yourself.
|
||||
|
||||
The process is relatively simple due to an excellent build script provided by the DjVuLibre team.
|
||||
|
||||
1. First, prepare your system with software development tools. On Fedora, the quick-and-easy way is with a DNF group:
|
||||
|
||||
```
|
||||
$ sudo dnf group install @c-development
|
||||
```
|
||||
|
||||
On Ubuntu:
|
||||
|
||||
```
|
||||
$ sudo apt-get install build-essential
|
||||
```
|
||||
|
||||
|
||||
|
||||
2. Next, download the [**GSDjVu** source code][20] from Sourceforge. Be sure to download **GSDjVu** , not **DjVuLibre** (in other words, don't click on the big green button at the top of the file listing, but on the latest file instead).
|
||||
|
||||
|
||||
3. Unarchive the file you just downloaded, and change directory into it:
|
||||
```
|
||||
$ cd ~/Downloads
|
||||
$ tar xvf gsdjvu-X.YY.tar.gz
|
||||
$ cd gsdjvu-X.YY
|
||||
```
|
||||
|
||||
|
||||
|
||||
4. Create a directory called **BUILD**. It must be called **BUILD** , so quell your creativity:
|
||||
```
|
||||
$ mkdir BUILD
|
||||
$ cd BUILD
|
||||
```
|
||||
|
||||
|
||||
|
||||
5. Download the additional source packages required to build the **GSDjVu **application. Specifically, you must download the source for **Ghostscript** (you almost certainly already have this installed, but you need its source to build against). Additionally, your system must have source packages for **jpeg** , **libpng** , **openjpeg** , and **zlib**. If you think your system already has the source packages for these projects, you can run the build script; if the sources are not found, the script will fail and let you correct the error before trying again.
|
||||
|
||||
|
||||
6. Run the interactive **build-gsdjvu** build script included in the download. This script unpacks the source files, patches Ghostscript with the **gdevdjvu** driver, compiles Ghostscript, and prunes unnecessary files from the build results.
|
||||
|
||||
|
||||
7. You can install **GSDjVu **anywhere in your path. If you don't know what your **PATH** variable is, you can see it with **echo $PATH**. For example, to install it to the **/usr/local** prefix:
|
||||
```
|
||||
$ sudo cp -r BUILD/INST/gsdjvu /usr/local/lib64
|
||||
$ cd /usr/local/bin
|
||||
$ sudo ln -s ../lib64/gsdjvu/gsdjvu gsdjvu
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
#### Converting a PDF to DjVu
|
||||
|
||||
Now that you've built the Ghostscript driver, converting a PDF to DjVu requires just one command:
|
||||
|
||||
```
|
||||
$ djvudigital --words mydocument.pdf mydocument.djvu
|
||||
```
|
||||
|
||||
This transforms all pages, bookmarks, and embedded text in a PDF into a DjVu file. The `--words` option maps the embedded PDF text to the corresponding points in the DjVu file. If the PDF has no embedded text, then none is carried over. With this tool, you can keep using the convenient PDF functions of your applications and still end up with DjVu files.
|
||||
|
||||
### Why DjVu and CBZ?
|
||||
|
||||
DjVu and comic book archive are great additional document formats for your archival arsenal. It seems silly to stuff a series of images into a PostScript format, like PDF, or a format clearly meant mostly for text, like EPUB, so it's nice to have CBZ and DjVu as additional options. They might not be right for all of your documents, but it's good to get comfortable with them so you can use one when it makes the most sense.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/comic-book-archive-djvu
|
||||
|
||||
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.fireengineering.com/articles/print/volume-56/issue-27/features/chief-john-kenlon-of-new-york-city.html
|
||||
[2]: https://en.wikipedia.org/wiki/Comic_book_archive
|
||||
[3]: https://comicbookplus.com/
|
||||
[4]: https://digitalcomicmuseum.com/
|
||||
[5]: https://wiki.gnome.org/Apps/Evince
|
||||
[6]: https://okular.kde.org
|
||||
[7]: https://f-droid.org/en/packages/org.sufficientlysecure.viewer/
|
||||
[8]: https://f-droid.org/en/packages/com.nkanaev.comics/
|
||||
[9]: http://djvu.org/
|
||||
[10]: http://djvu.js.org/
|
||||
[11]: https://github.com/RussCoder/djvujs
|
||||
[12]: https://elpa.gnu.org/packages/djvu.html
|
||||
[13]: http://djvu.sourceforge.net/djview4.html
|
||||
[14]: http://djvu.org
|
||||
[15]: http://djvu.sourceforge.net
|
||||
[16]: https://www.imagemagick.org/
|
||||
[17]: https://en.wikipedia.org/wiki/Lisp_(programming_language)
|
||||
[18]: /file/426061
|
||||
[19]: https://opensource.com/sites/default/files/uploads/outline.png (A DjVu with an outline as viewed in Okular)
|
||||
[20]: https://sourceforge.net/projects/djvu/files/GSDjVu/1.10/
|
@ -0,0 +1,73 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
|
||||
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
|
||||
[#]: author: (Jeff Macharyas (Community Moderator) )
|
||||
|
||||
Sweet Home 3D: An open source tool to help you decide on your dream home
|
||||
======
|
||||
|
||||
Interior design application makes it easy to render your favorite house—real or imaginary.
|
||||
|
||||
![Houses in a row][1]
|
||||
|
||||
I recently accepted a new job in Virginia. Since my wife was working and watching our house in New York until it sold, it was my responsibility to go out and find a new house for us and our cat. A house that she would not see until we moved into it!
|
||||
|
||||
I contracted with a real estate agent and looked at a few houses, taking many pictures and writing down illegible notes. At night, I would upload the photos into a Google Drive folder, and my wife and I would review them simultaneously over the phone while I tried to remember whether the room was on the right or the left, whether it had a fan, etc.
|
||||
|
||||
Since this was a rather tedious and not very accurate way to present my findings, I went in search of an open source solution that would better illustrate what our future dream house would look like, one that wouldn't hinge on my fuzzy memory and blurry photos.
|
||||
|
||||
[Sweet Home 3D][2] did exactly what I wanted it to do. Sweet Home 3D is available on Sourceforge and released under the GNU General Public License. The [website][3] is very informative, and I was able to get it up and running in no time. Sweet Home 3D was developed by Paris-based Emmanuel Puybaret of eTeks.
|
||||
|
||||
### Hanging the drywall
|
||||
|
||||
I downloaded Sweet Home 3D onto my MacBook Pro and added a PNG version of a flat floorplan of a house to use as a background base map.
|
||||
|
||||
From there, it was a simple matter of using the Rooms palette to trace the pattern and set the "real life" dimensions. After I mapped the rooms, I added the walls, which I could customize by color, thickness, height, etc.
|
||||
|
||||
![Sweet Home 3D floorplan][5]
|
||||
|
||||
Now that I had the "drywall" built, I downloaded various pieces of "furniture" from a large array that includes actual furniture as well as doors, windows, shelves, and more. Each item downloads as a ZIP file, so I created a folder of all my uncompressed pieces. I could customize each piece of furniture, and repetitive items, such as doors, were easy to copy-and-paste into place.
|
||||
|
||||
Once I had all my walls and doors and windows in place, I used the application's 3D view to navigate through the house. Drawing upon my photos and memory, I made adjustments to all the objects until I had a close representation of the house. I could have spent more time modifying the house by adding textures, additional furniture, and objects, but I got it to the point I needed.
|
||||
|
||||
![Sweet Home 3D floorplan][7]
|
||||
|
||||
After I finished, I exported the plan as an OBJ file, which can be opened in a variety of programs, such as [Blender][8] and Preview on the Mac, to spin the house around and examine it from various angles. The Video function was most useful, as I could create a starting point, draw a path through the house, and record the "journey." I exported the video as a MOV file, which I opened and viewed on the Mac using QuickTime.
|
||||
|
||||
My wife was able to see (almost) exactly what I saw, and we could even start arranging furniture ahead of the move, too. Now, all I have to do is load up the moving truck and head south.
|
||||
|
||||
Sweet Home 3D will also prove useful at my new job. I was looking for a way to improve the map of the college's buildings and was planning to just re-draw it in [Inkscape][9] or Illustrator or something. However, since I have the flat map, I can use Sweet Home 3D to create a 3D version of the floorplan and upload it to our website to make finding the bathrooms so much easier!
|
||||
|
||||
### An open source crime scene?
|
||||
|
||||
An interesting aside: according to the [Sweet Home 3D blog][10], "the French Forensic Police Office (Scientific Police) recently chose Sweet Home 3D as a tool to design plans [to represent roads and crime scenes]. This is a concrete application of the recommendation of the French government to give the preference to free open source solutions."
|
||||
|
||||
This is one more bit of evidence of how open source solutions are being used by citizens and governments to create personal projects, solve crimes, and build worlds.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/tool-find-home
|
||||
|
||||
作者:[Jeff Macharyas (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
|
||||
[2]: https://sourceforge.net/projects/sweethome3d/
|
||||
[3]: http://www.sweethome3d.com/
|
||||
[4]: /file/426441
|
||||
[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
|
||||
[6]: /file/426451
|
||||
[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
|
||||
[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
|
||||
[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
|
||||
[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html
|
@ -0,0 +1,141 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Program the real world using Rust on Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/physical-computing-rust-raspberry-pi)
|
||||
[#]: author: (Rahul Thakoor https://opensource.com/users/rahul27)
|
||||
|
||||
Program the real world using Rust on Raspberry Pi
|
||||
======
|
||||
|
||||
rust_gpiozero uses the Rust programming language to do physical computing on the Raspberry Pi.
|
||||
|
||||
![][1]
|
||||
|
||||
If you own a Raspberry Pi, chances are you may already have experimented with physical computing—writing code to interact with the real, physical world, like blinking some LEDs or [controlling a servo motor][2]. You may also have used [GPIO Zero][3], a Python library that provides a simple interface to GPIO devices from Raspberry Pi with a friendly Python API. GPIO Zero is developed by [Opensource.com][4] community moderator [Ben Nuttall][5].
|
||||
|
||||
I am working on [**rust_gpiozero**][6], a port of the awesome GPIO Zero library that uses the Rust programming language. It is still a work in progress, but it already includes some useful components.
|
||||
|
||||
[Rust][7] is a systems programming language developed at Mozilla. It is focused on performance, reliability, and productivity. The Rust website has [great resources][8] if you'd like to learn more about it.
|
||||
|
||||
### Getting started
|
||||
|
||||
Before starting with rust_gpiozero, it's smart to have a basic grasp of the Rust programming language. I recommend working through at least the first three chapters in [The Rust Programming Language][9] book.
|
||||
|
||||
I recommend [installing Rust][10] on your Raspberry Pi using [**rustup**][11]. Alternatively, you can set up a cross-compilation environment using [cross][12] (which works only on an x86_64 Linux host) or [this how-to][13].
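For reference, a **cross**-based build typically looks something like the following sketch (not from the original article). It assumes Docker is available on the x86_64 host and that **armv7-unknown-linux-gnueabihf** matches your Pi model; older boards such as the Pi 1 or Zero use **arm-unknown-linux-gnueabihf** instead:

```
# On the x86_64 build machine
$ cargo install cross
$ cross build --release --target armv7-unknown-linux-gnueabihf

# Copy the binary to the Pi (host name and project name are placeholders)
$ scp target/armv7-unknown-linux-gnueabihf/release/rust_gpiozero_demo pi@raspberrypi.local:
```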
|
||||
|
||||
After you've installed Rust, create a new Rust project by entering:
|
||||
|
||||
```
|
||||
cargo new rust_gpiozero_demo
|
||||
```
|
||||
|
||||
Add **rust_gpiozero** as a dependency (currently in v0.2.0) by adding the following to the dependencies section in your **Cargo.toml** file
|
||||
|
||||
```
|
||||
rust_gpiozero = "0.2.0"
|
||||
```
|
||||
|
||||
Next, blink an LED (the "hello world" of physical computing) by modifying the **main.rs** file with the following:
|
||||
```
|
||||
use rust_gpiozero::*;
use std::thread;
use std::time::Duration;

fn main() {
    // Create a new LED attached to Pin 17
    let led = LED::new(17);

    // Blink the LED 5 times
    for _ in 0..5 {
        led.on();
        thread::sleep(Duration::from_secs(1));
        led.off();
        thread::sleep(Duration::from_secs(1));
    }
}
|
||||
```
|
||||
|
||||
rust_gpiozero provides an easier interface for blinking an LED. You can use the blink method, providing the number of seconds it should stay on and off. This simplifies the code to the following:
|
||||
```
|
||||
use rust_gpiozero::*;

fn main() {
    // Create a new LED attached to Pin 17
    let mut led = LED::new(17);

    // on_time = 2 secs, off_time = 3 secs
    led.blink(2.0, 3.0);

    // prevent program from exiting immediately
    led.wait();
}
|
||||
```
|
||||
|
||||
### Other components
|
||||
|
||||
rust_gpiozero provides several components that are similar to GPIO Zero for controlling output and input devices. These include [LED][14], [Buzzer][15], [Motor][16], Pulse Width Modulation LED ([PWMLED][17]), [Servo][18], and [Button][19].
|
||||
|
||||
Support for other components, sensors, and devices will be added eventually. You can refer to the [documentation][20] for further usage information.
|
||||
|
||||
### More resources
|
||||
|
||||
rust_gpiozero is still a work in progress. If you need more resources for getting started with Rust on your Raspberry Pi, here are some useful links:
|
||||
|
||||
#### Raspberry Pi Peripheral Access Library (RPPAL)
|
||||
|
||||
Similar to GPIO Zero, which is based on the [RPi.GPIO][21] library, rust_gpiozero builds upon the awesome **[RPPAL][22]** library by [Rene van der Meer][23]. If you want more control for your projects using Rust, you should definitely try RPPAL. It has support for GPIO, Inter-Integrated Circuit (I 2C), hardware and software Pulse Width Modulation (PWM), and Serial Peripheral Interface (SPI). Universal asynchronous receiver-transmitter (UART) support is currently in development.
|
||||
|
||||
#### Sense HAT support
|
||||
|
||||
**[Sensehat-rs][24]** is a library by [Jonathan Pallant][25] ([@therealjpster][26]) that provides Rust support for the Raspberry Pi [Sense HAT][27] add-on board. Jonathan also has a [starter workshop][28] for using the library and he wrote a beginner's intro to use Rust on Raspberry Pi, "Read Sense HAT with Rust," in [Issue 73 of _The MagPi_][29] magazine.
|
||||
|
||||
### Wrap Up
|
||||
|
||||
Hopefully, this has inspired you to use the Rust programming language for physical computing on your Raspberry Pi. rust_gpiozero is a library which provides useful components such as LED, Buzzer, Motor, PWMLED, Servo, and Button. More features are planned and you can follow me on [twitter][30] or check out [my blog][31] to stay tuned.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/physical-computing-rust-raspberry-pi
|
||||
|
||||
作者:[Rahul Thakoor][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl
|
||||
[2]: https://projects.raspberrypi.org/en/projects/grandpa-scarer/4
|
||||
[3]: https://gpiozero.readthedocs.io/en/stable/#
|
||||
[4]: http://Opensource.com
|
||||
[5]: https://opensource.com/users/bennuttall
|
||||
[6]: https://crates.io/crates/rust_gpiozero
|
||||
[7]: https://www.rust-lang.org/
|
||||
[8]: https://www.rust-lang.org/learn
|
||||
[9]: https://doc.rust-lang.org/book/
|
||||
[10]: https://www.rust-lang.org/tools/install
|
||||
[11]: https://rustup.rs/
|
||||
[12]: https://github.com/rust-embedded/cross
|
||||
[13]: https://github.com/kunerd/clerk/wiki/How-to-use-HD44780-LCD-from-Rust#setting-up-the-cross-toolchain
|
||||
[14]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/output_devices/struct.LED.html
|
||||
[15]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/output_devices/struct.Buzzer.html
|
||||
[16]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/output_devices/struct.Motor.html
|
||||
[17]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/output_devices/struct.PWMLED.html
|
||||
[18]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/output_devices/struct.Servo.html
|
||||
[19]: https://docs.rs/rust_gpiozero/0.2.0/rust_gpiozero/input_devices/struct.Button.html
|
||||
[20]: https://docs.rs/rust_gpiozero/
|
||||
[21]: https://pypi.org/project/RPi.GPIO/
|
||||
[22]: https://github.com/golemparts/rppal
|
||||
[23]: https://twitter.com/golemparts
|
||||
[24]: https://crates.io/crates/sensehat
|
||||
[25]: https://github.com/thejpster
|
||||
[26]: https://twitter.com/therealjpster
|
||||
[27]: https://www.raspberrypi.org/products/sense-hat/
|
||||
[28]: https://github.com/thejpster/pi-workshop-rs/
|
||||
[29]: https://www.raspberrypi.org/magpi/issues/73/
|
||||
[30]: https://twitter.com/rahulthakoor
|
||||
[31]: https://rahul-thakoor.github.io/
|
331
sources/tech/20190318 10 Python image manipulation tools.md
Normal file
331
sources/tech/20190318 10 Python image manipulation tools.md
Normal file
@ -0,0 +1,331 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (10 Python image manipulation tools)
|
||||
[#]: via: (https://opensource.com/article/19/3/python-image-manipulation-tools)
|
||||
[#]: author: (Parul Pandey https://opensource.com/users/parul-pandey)
|
||||
|
||||
10 Python image manipulation tools
|
||||
======
|
||||
|
||||
These Python libraries provide an easy and intuitive way to transform images and make sense of the underlying data.
|
||||
|
||||
![][1]
|
||||
|
||||
Today's world is full of data, and images form a significant part of this data. However, before they can be used, these digital images must be processed—analyzed and manipulated in order to improve their quality or extract some information that can be put to use.
|
||||
|
||||
Common image processing tasks include displays; basic manipulations like cropping, flipping, rotating, etc.; image segmentation, classification, and feature extractions; image restoration; and image recognition. Python is an excellent choice for these types of image processing tasks due to its growing popularity as a scientific programming language and the free availability of many state-of-the-art image processing tools in its ecosystem.
|
||||
|
||||
This article looks at 10 of the most commonly used Python libraries for image manipulation tasks. These libraries provide an easy and intuitive way to transform images and make sense of the underlying data.
|
||||
|
||||
### 1\. scikit-image
|
||||
|
||||
**[scikit-image][2]** is an open source Python package that works with [NumPy][3] arrays. It implements algorithms and utilities for use in research, education, and industry applications. It is a fairly simple and straightforward library, even for those who are new to Python's ecosystem. The code is high-quality, peer-reviewed, and written by an active community of volunteers.
|
||||
|
||||
#### Resources
|
||||
|
||||
scikit-image is very well [documented][4] with a lot of examples and practical use cases.
|
||||
|
||||
#### Usage
|
||||
|
||||
The package is imported as **skimage** , and most functions are found within the submodules.
|
||||
|
||||
Image filtering:
|
||||
|
||||
```
|
||||
import matplotlib.pyplot as plt
|
||||
%matplotlib inline
|
||||
|
||||
from skimage import data,filters
|
||||
|
||||
image = data.coins() # ... or any other NumPy array!
|
||||
edges = filters.sobel(image)
|
||||
plt.imshow(edges, cmap='gray')
|
||||
```
|
||||
|
||||
![Image filtering in scikit-image][6]
|
||||
|
||||
Template matching using the [match_template][7] function:
|
||||
|
||||
![Template matching in scikit-image][9]
|
||||
|
||||
You can find more examples in the [gallery][10].
|
||||
|
||||
### 2\. NumPy
|
||||
|
||||
[**NumPy**][11] is one of the core libraries in Python programming and provides support for arrays. An image is essentially a standard NumPy array containing pixels of data points. Therefore, by using basic NumPy operations, such as slicing, masking, and fancy indexing, you can modify the pixel values of an image. The image can be loaded using **skimage** and displayed using Matplotlib.
|
||||
|
||||
#### Resources
|
||||
|
||||
A complete list of resources and documentation is available on NumPy's [official documentation page][11].
|
||||
|
||||
#### Usage
|
||||
|
||||
Using NumPy to mask an image:
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
from skimage import data
|
||||
import matplotlib.pyplot as plt
|
||||
%matplotlib inline
|
||||
|
||||
image = data.camera()
|
||||
type(image)
|
||||
numpy.ndarray #Image is a NumPy array:
|
||||
|
||||
mask = image < 87
|
||||
image[mask]=255
|
||||
plt.imshow(image, cmap='gray')
|
||||
```
|
||||
|
||||
![NumPy][13]
|
||||
|
||||
### 3\. SciPy
|
||||
|
||||
**[SciPy][14]** is another of Python's core scientific modules (like NumPy) and can be used for basic image manipulation and processing tasks. In particular, the submodule [**scipy.ndimage**][15] (in SciPy v1.1.0) provides functions operating on n-dimensional NumPy arrays. The package currently includes functions for linear and non-linear filtering, binary morphology, B-spline interpolation, and object measurements.
|
||||
|
||||
#### Resources
|
||||
|
||||
For a complete list of functions provided by the **scipy.ndimage** package, refer to the [documentation][16].
|
||||
|
||||
#### Usage
|
||||
|
||||
Using SciPy to blur an image with a [Gaussian filter][17]:
|
||||
```
|
||||
import matplotlib.pyplot as plt
from scipy import misc, ndimage

face = misc.face()
blurred_face = ndimage.gaussian_filter(face, sigma=3)
very_blurred = ndimage.gaussian_filter(face, sigma=5)

# Results (show one of the blurred images)
plt.imshow(blurred_face)
plt.show()
|
||||
```
|
||||
|
||||
![Using a Gaussian filter in SciPy][19]
|
||||
|
||||
### 4\. PIL/Pillow
|
||||
|
||||
**PIL** (Python Imaging Library) is a free library for the Python programming language that adds support for opening, manipulating, and saving many different image file formats. However, its development has stagnated, with its last release in 2009. Fortunately, there is [**Pillow**][20], an actively developed fork of PIL, that is easier to install, runs on all major operating systems, and supports Python 3. The library contains basic image processing functionality, including point operations, filtering with a set of built-in convolution kernels, and color-space conversions.
|
||||
|
||||
#### Resources
|
||||
|
||||
The [documentation][21] has instructions for installation as well as examples covering every module of the library.
|
||||
|
||||
#### Usage
|
||||
|
||||
Enhancing an image in Pillow using ImageFilter:
|
||||
|
||||
```
|
||||
from PIL import Image,ImageFilter
|
||||
#Read image
|
||||
im = Image.open('image.jpg')
|
||||
#Display image
|
||||
im.show()
|
||||
|
||||
from PIL import ImageEnhance
|
||||
enh = ImageEnhance.Contrast(im)
|
||||
enh.enhance(1.8).show("30% more contrast")
|
||||
```
|
||||
|
||||
![Enhancing an image in Pillow using ImageFilter][23]
|
||||
|
||||
[Image source code][24]
|
||||
|
||||
### 5\. OpenCV-Python
|
||||
|
||||
**OpenCV** (Open Source Computer Vision Library) is one of the most widely used libraries for computer vision applications. [**OpenCV-Python**][25] is the Python API for OpenCV. OpenCV-Python is not only fast, since the background consists of code written in C/C++, but it is also easy to code and deploy (due to the Python wrapper in the foreground). This makes it a great choice to perform computationally intensive computer vision programs.
|
||||
|
||||
#### Resources
|
||||
|
||||
The [OpenCV2-Python-Guide][26] makes it easy to get started with OpenCV-Python.
|
||||
|
||||
#### Usage
|
||||
|
||||
Using _Image Blending using Pyramids_ in OpenCV-Python to create an "Orapple":
|
||||
|
||||
|
||||
![Image blending using Pyramids in OpenCV-Python][28]
|
||||
|
||||
[Image source code][29]
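The pyramid-blending recipe shown above is fairly involved, so here is a much smaller, self-contained sketch (not from the article) to give a feel for the everyday cv2 API; the output file name is just an example:

```
import numpy as np
import cv2

# Draw a filled white rectangle on a black canvas, then detect its edges with Canny.
canvas = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(canvas, (40, 40), (160, 160), 255, -1)   # color=255, thickness=-1 (filled)
edges = cv2.Canny(canvas, 100, 200)                    # lower/upper hysteresis thresholds
cv2.imwrite('edges.png', edges)                        # output file name is just an example
```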
|
||||
|
||||
### 6\. SimpleCV
|
||||
|
||||
[**SimpleCV**][30] is another open source framework for building computer vision applications. It offers access to several high-powered computer vision libraries such as OpenCV, but without having to know about bit depths, file formats, color spaces, etc. Its learning curve is substantially smaller than OpenCV's, and (as its tagline says), "it's computer vision made easy." Some points in favor of SimpleCV are:
|
||||
|
||||
* Even beginning programmers can write simple machine vision tests
|
||||
* Cameras, video files, images, and video streams are all interoperable
|
||||
|
||||
|
||||
|
||||
#### Resources
|
||||
|
||||
The official [documentation][31] is very easy to follow and has tons of examples and use cases to follow.
|
||||
|
||||
#### Usage
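The usage example is shown only as a screenshot below, so here is a minimal textual sketch of typical SimpleCV code (not from the article). Note that SimpleCV targets Python 2, and the input file name is only a placeholder:

```
from SimpleCV import Image

img = Image('image.jpg')   # load an image from disk (placeholder name)
blobs = img.findBlobs()    # look for connected regions ("blobs") in the image
if blobs:
    blobs.draw()           # outline the blobs that were found
img.show()                 # open a window displaying the result
```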
|
||||
|
||||
|
||||
|
||||
![SimpleCV][33]
|
||||
|
||||
### 7\. Mahotas
|
||||
|
||||
**[Mahotas][34]** is another computer vision and image processing library for Python. It contains traditional image processing functions such as filtering and morphological operations, as well as more modern computer vision functions for feature computation, including interest point detection and local descriptors. The interface is in Python, which is appropriate for fast development, but the algorithms are implemented in C++ and tuned for speed. Mahotas' library is fast with minimalistic code and even minimum dependencies. Read its [official paper][35] for more insights.
|
||||
|
||||
#### Resources
|
||||
|
||||
The [documentation][36] contains installation instructions, examples, and even some tutorials to help you get started using Mahotas easily.
|
||||
|
||||
#### Usage
|
||||
|
||||
The Mahotas library relies on simple code to get things done. For example, it does a good job with the [Finding Wally][37] problem with a minimum amount of code.
|
||||
|
||||
Solving the Finding Wally problem:
|
||||
|
||||
![Finding Wally problem in Mahotas][39]
|
||||
|
||||
[Image source code][40]
|
||||
|
||||
![Finding Wally problem in Mahotas][42]
|
||||
|
||||
[Image source code][40]
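As a feel for everyday Mahotas code, here is a short, self-contained sketch (not from the article) that smooths a synthetic image and thresholds it with Otsu's method; it assumes a recent NumPy for `default_rng`:

```
import mahotas
import numpy as np

# Build a small synthetic grayscale image instead of reading one from disk.
rng = np.random.default_rng(0)
image = (rng.random((128, 128)) * 255).astype(np.uint8)

smoothed = mahotas.gaussian_filter(image.astype(float), 4)   # returns a float image
threshold = mahotas.otsu(smoothed.astype(np.uint8))          # Otsu's method picks a global threshold
binary = smoothed > threshold
print("threshold:", threshold, "| pixels above it:", binary.sum())
```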
|
||||
|
||||
### 8\. SimpleITK
|
||||
|
||||
[**ITK**][43] (Insight Segmentation and Registration Toolkit) is an "open source, cross-platform system that provides developers with an extensive suite of software tools for image analysis. **[SimpleITK][44]** is a simplified layer built on top of ITK, intended to facilitate its use in rapid prototyping, education, [and] interpreted languages." It's also an image analysis toolkit with a [large number of components][45] supporting general filtering operations, image segmentation, and registration. SimpleITK is written in C++, but it's available for a large number of programming languages including Python.
|
||||
|
||||
#### Resources
|
||||
|
||||
There are a large number of [Jupyter Notebooks][46] illustrating the use of SimpleITK for educational and research activities. The notebooks demonstrate using SimpleITK for interactive image analysis using the Python and R programming languages.
|
||||
|
||||
#### Usage
|
||||
|
||||
Visualization of a rigid CT/MR registration process created with SimpleITK and Python:
|
||||
|
||||
![SimpleITK animation][48]
|
||||
|
||||
[Image source code][49]
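Registration is a fairly advanced starting point, so here is a much smaller, self-contained sketch (not from the article) showing basic SimpleITK filtering on a synthetic image:

```
import numpy as np
import SimpleITK as sitk

# Build a small synthetic image, smooth it, and convert back to NumPy for inspection.
array = np.zeros((128, 128), dtype=np.float32)
array[32:96, 32:96] = 1.0                               # a bright square in the middle
image = sitk.GetImageFromArray(array)                   # NumPy array -> SimpleITK image
blurred = sitk.SmoothingRecursiveGaussian(image, 4.0)   # Gaussian smoothing with sigma = 4.0
result = sitk.GetArrayFromImage(blurred)                # back to a NumPy array
print(result.min(), result.max())
```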
|
||||
|
||||
### 9\. pgmagick
|
||||
|
||||
[**pgmagick**][50] is a Python-based wrapper for the GraphicsMagick library. The [**GraphicsMagick**][51] image processing system is sometimes called the Swiss Army Knife of image processing. Its robust and efficient collection of tools and libraries supports reading, writing, and manipulating images in over 88 major formats including DPX, GIF, JPEG, JPEG-2000, PNG, PDF, PNM, and TIFF.
|
||||
|
||||
#### Resources
|
||||
|
||||
pgmagick's [GitHub repository][52] has installation instructions and requirements. There is also a detailed [user guide][53].
|
||||
|
||||
#### Usage
|
||||
|
||||
Image scaling:
|
||||
|
||||
![Image scaling in pgmagick][55]
|
||||
|
||||
[Image source code][56]
|
||||
|
||||
Edge extraction:
|
||||
|
||||
![Edge extraction in pgmagick][58]
|
||||
|
||||
[Image source code][59]
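As a rough idea of what scaling looks like in code, here is a minimal, self-contained sketch (not taken verbatim from the pgmagick cookbook); the output file name is just an example:

```
from pgmagick import Color, Geometry, Image

# Create a blank white canvas, scale it down, and write it out.
img = Image(Geometry(400, 400), Color('white'))   # 400x400 white image
img.scale('200x200')                              # scale to fit within 200x200 pixels
img.write('scaled.png')                           # output file name is just an example
```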
|
||||
|
||||
### 10\. Pycairo
|
||||
|
||||
[**Pycairo**][60] is a set of Python bindings for the [Cairo][61] graphics library. Cairo is a 2D graphics library for drawing vector graphics. Vector graphics are interesting because they don't lose clarity when resized or transformed. Pycairo can call Cairo commands from Python.
|
||||
|
||||
#### Resources
|
||||
|
||||
The Pycairo [GitHub repository][62] is a good resource with detailed instructions on installation and usage. There is also a [getting started guide][63], which has a brief tutorial on Pycairo.
|
||||
|
||||
#### Usage
|
||||
|
||||
Drawing lines, basic shapes, and radial gradients with Pycairo:
|
||||
|
||||
![Pycairo][65]
|
||||
|
||||
[Image source code][66]
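As a flavor of the API, here is a small, self-contained sketch (not from the article) that draws a filled rectangle and a line, then saves the result as a PNG:

```
import cairo

# Create a 200x200 image surface and a drawing context for it.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200)
ctx = cairo.Context(surface)

ctx.set_source_rgb(1, 1, 1)          # white background
ctx.paint()

ctx.set_source_rgb(0.2, 0.4, 0.8)    # blue fill
ctx.rectangle(40, 40, 120, 80)       # x, y, width, height
ctx.fill()

ctx.set_source_rgb(0, 0, 0)          # black line
ctx.set_line_width(4)
ctx.move_to(20, 180)
ctx.line_to(180, 20)
ctx.stroke()

surface.write_to_png('drawing.png')  # save the result
```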
|
||||
|
||||
### Conclusion
|
||||
|
||||
These are some of the useful and freely available image processing libraries in Python. Some are well known and others may be new to you. Try them out to get to know more about them!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/python-image-manipulation-tools
|
||||
|
||||
作者:[Parul Pandey][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/parul-pandey
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/daisy_gimp_art_design.jpg?itok=6kCxAKWO
|
||||
[2]: https://scikit-image.org/
|
||||
[3]: http://docs.scipy.org/doc/numpy/reference/index.html#module-numpy
|
||||
[4]: http://scikit-image.org/docs/stable/user_guide.html
|
||||
[5]: /file/426206
|
||||
[6]: https://opensource.com/sites/default/files/uploads/1-scikit-image.png (Image filtering in scikit-image)
|
||||
[7]: http://scikit-image.org/docs/dev/auto_examples/features_detection/plot_template.html#sphx-glr-auto-examples-features-detection-plot-template-py
|
||||
[8]: /file/426211
|
||||
[9]: https://opensource.com/sites/default/files/uploads/2-scikit-image.png (Template matching in scikit-image)
|
||||
[10]: https://scikit-image.org/docs/dev/auto_examples
|
||||
[11]: http://www.numpy.org/
|
||||
[12]: /file/426216
|
||||
[13]: https://opensource.com/sites/default/files/uploads/3-numpy.png (NumPy)
|
||||
[14]: https://www.scipy.org/
|
||||
[15]: https://docs.scipy.org/doc/scipy/reference/ndimage.html#module-scipy.ndimage
|
||||
[16]: https://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html#correlation-and-convolution
|
||||
[17]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter.html
|
||||
[18]: /file/426221
|
||||
[19]: https://opensource.com/sites/default/files/uploads/4-scipy.png (Using a Gaussian filter in SciPy)
|
||||
[20]: https://python-pillow.org/
|
||||
[21]: https://pillow.readthedocs.io/en/3.1.x/index.html
|
||||
[22]: /file/426226
|
||||
[23]: https://opensource.com/sites/default/files/uploads/5-pillow.png (Enhancing an image in Pillow using ImageFilter)
|
||||
[24]: http://sipi.usc.edu/database/
|
||||
[25]: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_intro/py_intro.html
|
||||
[26]: https://github.com/abidrahmank/OpenCV2-Python-Tutorials
|
||||
[27]: /file/426236
|
||||
[28]: https://opensource.com/sites/default/files/uploads/6-opencv.jpeg (Image blending using Pyramids in OpenCV-Python)
|
||||
[29]: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_pyramids/py_pyramids.html#pyramids
|
||||
[30]: http://simplecv.org/
|
||||
[31]: http://examples.simplecv.org/en/latest/
|
||||
[32]: /file/426241
|
||||
[33]: https://opensource.com/sites/default/files/uploads/7-_simplecv.png (SimpleCV)
|
||||
[34]: https://mahotas.readthedocs.io/en/latest/
|
||||
[35]: https://openresearchsoftware.metajnl.com/articles/10.5334/jors.ac/
|
||||
[36]: https://mahotas.readthedocs.io/en/latest/install.html
|
||||
[37]: https://blog.clarifai.com/wheres-waldo-using-machine-learning-to-find-all-the-waldos
|
||||
[38]: /file/426246
|
||||
[39]: https://opensource.com/sites/default/files/uploads/8-mahotas.png (Finding Wally problem in Mahotas)
|
||||
[40]: https://mahotas.readthedocs.io/en/latest/wally.html
|
||||
[41]: /file/426251
|
||||
[42]: https://opensource.com/sites/default/files/uploads/9-mahotas.png (Finding Wally problem in Mahotas)
|
||||
[43]: https://itk.org/
|
||||
[44]: http://www.simpleitk.org/
|
||||
[45]: https://itk.org/ITK/resources/resources.html
|
||||
[46]: http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/
|
||||
[47]: /file/426256
|
||||
[48]: https://opensource.com/sites/default/files/uploads/10-simpleitk.gif (SimpleITK animation)
|
||||
[49]: https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/blob/master/Utilities/intro_animation.py
|
||||
[50]: https://pypi.org/project/pgmagick/
|
||||
[51]: http://www.graphicsmagick.org/
|
||||
[52]: https://github.com/hhatto/pgmagick
|
||||
[53]: https://pgmagick.readthedocs.io/en/latest/
|
||||
[54]: /file/426261
|
||||
[55]: https://opensource.com/sites/default/files/uploads/11-pgmagick.png (Image scaling in pgmagick)
|
||||
[56]: https://pgmagick.readthedocs.io/en/latest/cookbook.html#scaling-a-jpeg-image
|
||||
[57]: /file/426266
|
||||
[58]: https://opensource.com/sites/default/files/uploads/12-pgmagick.png (Edge extraction in pgmagick)
|
||||
[59]: https://pgmagick.readthedocs.io/en/latest/cookbook.html#edge-extraction
|
||||
[60]: https://pypi.org/project/pycairo/
|
||||
[61]: https://cairographics.org/
|
||||
[62]: https://github.com/pygobject/pycairo
|
||||
[63]: https://pycairo.readthedocs.io/en/latest/tutorial.html
|
||||
[64]: /file/426271
|
||||
[65]: https://opensource.com/sites/default/files/uploads/13-pycairo.png (Pycairo)
|
||||
[66]: http://zetcode.com/gfx/pycairo/basicdrawing/
|
@ -0,0 +1,162 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (3 Ways To Check Whether A Port Is Open On The Remote Linux System?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
3 Ways To Check Whether A Port Is Open On The Remote Linux System?
|
||||
======
|
||||
|
||||
This is an important and useful topic, not only for Linux administrators but for anyone working in IT infrastructure.

Before proceeding to the next steps of a task, they often have to check whether a particular port is open on a Linux server.

If the port is not open, they can ask the Linux admin to look into it; if it is open, the next step is to check with the application team, and so on.

In this article, we will show you how to check this using three methods. It can be done with the following Linux commands.
|
||||
|
||||
* **`nc:`** Netcat is a simple Unix utility which reads and writes data across network connections, using TCP or UDP protocol.
|
||||
* **`nmap:`** Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks.
|
||||
* **`telnet:`** The telnet command is used for interactive communication with another host using the TELNET protocol.
|
||||
|
||||
|
||||
|
||||
### How To Check Whether A Port Is Open On The Remote Linux System Using nc (netcat) Command?
|
||||
|
||||
nc stands for netcat. Netcat is a simple Unix utility which reads and writes data across network connections, using TCP or UDP protocol.
|
||||
|
||||
It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts.
|
||||
|
||||
At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.
|
||||
|
||||
Netcat has three main modes of functionality. These are the connect mode, the listen mode, and the tunnel mode.
|
||||
|
||||
**Common Syntax for nc (netcat):**
|
||||
|
||||
```
|
||||
$ nc [-options] [HostName or IP] [PortNumber]
|
||||
```
|
||||
|
||||
In this example, we are going to check whether the port 22 is open or not on the remote Linux system.
|
||||
|
||||
If it succeeds, you will get output like the following.
|
||||
|
||||
```
|
||||
# nc -zvw3 192.168.1.8 22
|
||||
Connection to 192.168.1.8 22 port [tcp/ssh] succeeded!
|
||||
```
|
||||
|
||||
**Details:**
|
||||
|
||||
* **`nc:`** It’s a command.
|
||||
* **`z:`** zero-I/O mode (used for scanning).
|
||||
* **`v:`** For verbose.
|
||||
* **`w3:`** Wait no more than 3 seconds for the connection to complete (timeout).
|
||||
* **`192.168.1.8:`** Destination system IP.
|
||||
* **`22:`** Port number needs to be verified.
|
||||
|
||||
|
||||
|
||||
If it fails, you will get output like the following.
|
||||
|
||||
```
|
||||
# nc -zvw3 192.168.1.95 22
|
||||
nc: connect to 192.168.1.95 port 22 (tcp) failed: Connection refused
|
||||
```
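If you need to test several ports in one go, you can wrap the same **nc** flags in a small shell loop. This is only a sketch, not part of the original article; the IP address and the port list are examples:

```
$ for port in 22 80 443; do
    nc -zvw3 192.168.1.8 "$port"
done
```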
|
||||
|
||||
### How To Check Whether A Port Is Open On The Remote Linux System Using nmap Command?
|
||||
|
||||
Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks, although it works fine against single hosts.
|
||||
|
||||
Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.
|
||||
|
||||
While Nmap is commonly used for security audits, many systems and network administrators find it useful for routine tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.
|
||||
|
||||
**Common Syntax for nmap:**
|
||||
|
||||
```
|
||||
$ nmap [-options] [HostName or IP] [-p] [PortNumber]
|
||||
```
|
||||
|
||||
If it succeeds, you will get output like the following.
|
||||
|
||||
```
|
||||
# nmap 192.168.1.8 -p 22
|
||||
|
||||
Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-16 03:37 IST
Nmap scan report for 192.168.1.8
Host is up (0.00031s latency).
|
||||
|
||||
PORT STATE SERVICE
|
||||
|
||||
22/tcp open ssh
|
||||
|
||||
Nmap done: 1 IP address (1 host up) scanned in 13.06 seconds
|
||||
```
|
||||
|
||||
If it fails, you will get output like the following.
|
||||
|
||||
```
|
||||
# nmap 192.168.1.8 -p 80
|
||||
Starting Nmap 7.70 ( https://nmap.org ) at 2019-03-16 04:30 IST
|
||||
Nmap scan report for 192.168.1.8
|
||||
Host is up (0.00036s latency).
|
||||
|
||||
PORT STATE SERVICE
|
||||
80/tcp closed http
|
||||
|
||||
Nmap done: 1 IP address (1 host up) scanned in 13.07 seconds
|
||||
```
|
||||
|
||||
### How To Check Whether A Port Is Open On The Remote Linux System Using telnet Command?
|
||||
|
||||
The telnet command is used for interactive communication with another host using the TELNET protocol.
|
||||
|
||||
**Common Syntax for telnet:**
|
||||
|
||||
```
|
||||
$ telnet [HostName or IP] [PortNumber]
|
||||
```
|
||||
|
||||
If it succeeds, you will get output like the following.
|
||||
|
||||
```
|
||||
$ telnet 192.168.1.9 22
|
||||
Trying 192.168.1.9...
|
||||
Connected to 192.168.1.9.
|
||||
Escape character is '^]'.
|
||||
SSH-2.0-OpenSSH_5.3
|
||||
^]
|
||||
Connection closed by foreign host.
|
||||
```
|
||||
|
||||
If it fails, you will get output like the following.
|
||||
|
||||
```
|
||||
$ telnet 192.168.1.9 80
|
||||
Trying 192.168.1.9...
|
||||
telnet: Unable to connect to remote host: Connection refused
|
||||
```
|
||||
|
||||
We have covered only these three methods. If you know of any other ways, please let us know in the comments section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,176 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Building and augmenting libraries by calling Rust from JavaScript)
|
||||
[#]: via: (https://opensource.com/article/19/3/calling-rust-javascript)
|
||||
[#]: author: (Ryan Levick https://opensource.com/users/ryanlevick)
|
||||
|
||||
Building and augmenting libraries by calling Rust from JavaScript
|
||||
======
|
||||
|
||||
Explore how to use WebAssembly (Wasm) to embed Rust inside JavaScript.
|
||||
|
||||
![JavaScript in Vim][1]
|
||||
|
||||
In _[Why should you use Rust in WebAssembly?][2]_ , I looked at why you might want to write WebAssembly (Wasm), and why you might choose Rust as the language to do it in. Now I'll share what that looks like by exploring ways to embed Rust inside JavaScript.
|
||||
|
||||
This is something that separates Rust from Go, C#, and other languages with large runtimes that can compile to Wasm. Rust has a minimal runtime (basically just an allocator), making it easy to use Rust from JavaScript libraries. C and C++ have similar stories, but what sets Rust apart is its tooling, which we'll take a look at now.
|
||||
|
||||
### The basics
|
||||
|
||||
If you've never used Rust before, you'll first want to get that set up. It's pretty easy. First download [**Rustup**][3], which is a way to control versions of Rust and different toolchains for cross-compilation. This will give you access to [**Cargo**][4], which is the Rust build tool and package manager.
|
||||
|
||||
Now we have a decision to make. We can easily write Rust code that runs in the browser through WebAssembly, but if we want to do anything other than make people's CPU fans spin, we'll probably at some point want to interact with the Document Object Model (DOM) or use some JavaScript API. In other words: _we need JavaScript interop_ (aka the JavaScript interoperability API).
|
||||
|
||||
### The problem and the solutions
|
||||
|
||||
WebAssembly is an extremely simple machine language. If we want to be able to communicate with JavaScript, Wasm gives us only four data types to do it with: 32- and 64-bit floats and integers. Wasm doesn't have a concept of strings, arrays, objects, or any other rich data type. Basically, we can only pass around pointers between Rust and JavaScript. Needless to say, this is less than ideal.
|
||||
|
||||
The good news is there are two libraries that facilitate communication between Rust-based Wasm and JavaScript: [**wasm-bindgen**][5] and [**stdweb**][6]. The bad news, however, is these two libraries are unfortunately incompatible with each other. **wasm-bindgen** is lower-level than **stdweb** and attempts to provide full control over how JavaScript and Rust interact. In fact, there is even talk of [rewriting **stdweb** using **wasm-bindgen**][7], which would get rid of the issue of incompatibility.
|
||||
|
||||
Because **wasm-bindgen** is the lighter-weight option (and the one officially worked on by the [Rust WebAssembly working group][8]), we'll focus on that.
|
||||
|
||||
### wasm-bindgen and wasm-pack
|
||||
|
||||
We're going to create a function that takes a string from JavaScript, makes it uppercase and prepends "HELLO, " to it, and returns it back to JavaScript. We'll call this function **excited_greeting**!
|
||||
|
||||
First, let's create our Rust library that will house this fabulous function:
|
||||
|
||||
```
|
||||
$ cargo new my-wasm-library --lib
|
||||
$ cd my-wasm-library
|
||||
```
|
||||
|
||||
Now we'll want to replace the contents of **src/lib.rs** with our exciting logic. I think it's best to write the code out instead of copy/pasting.
|
||||
|
||||
```
|
||||
// Include the `wasm_bindgen` attribute into the current namespace.
|
||||
use wasm_bindgen::prelude::wasm_bindgen;
|
||||
|
||||
// This attribute makes calling Rust from JavaScript possible.
|
||||
// It generates code that can convert the basic types wasm understands
|
||||
// (integers and floats) into more complex types like strings and
|
||||
// vice versa. If you're interested in how this works, check this out:
|
||||
// <https://blog.ryanlevick.com/posts/wasm-bindgen-interop/>
|
||||
#[wasm_bindgen]
|
||||
// This is pretty plain Rust code. If you've written Rust before this
|
||||
// should look extremely familiar. If not, why wait?! Check this out:
|
||||
// <https://doc.rust-lang.org/book/>
|
||||
pub fn excited_greeting(original: &str) -> String {
|
||||
format!("HELLO, {}", original.to_uppercase())
|
||||
}
|
||||
```
|
||||
|
||||
Second, we'll have to make two changes to our **Cargo.toml** configuration file:
|
||||
|
||||
* Add **wasm_bindgen** as a dependency.
|
||||
* Configure the type of library binary to be a **cdylib** or dynamic system library. In this case, our system is **wasm** , and setting this option is how we produce **.wasm** binary files.
|
||||
|
||||
|
||||
```
|
||||
[package]
|
||||
name = "my-wasm-library"
|
||||
version = "0.1.0"
|
||||
authors = ["$YOUR_INFO"]
|
||||
edition = "2018"
|
||||
|
||||
[lib]
|
||||
crate-type = ["cdylib", "rlib"]
|
||||
|
||||
[dependencies]
|
||||
wasm-bindgen = "0.2.33"
|
||||
```
|
||||
|
||||
Now let's build! If we just use **cargo build** , we'll get a **.wasm** binary, but in order to make it easy to call our Rust code from JavaScript, we'd like to have some JavaScript code that converts rich JavaScript types like strings and objects to pointers and passes these pointers to the Wasm module on our behalf. Doing this manually is tedious and prone to bugs.
|
||||
|
||||
Luckily, in addition to being a library, **wasm-bindgen** also has the ability to create this "glue" JavaScript for us. This means in our code we can interact with our Wasm module using normal JavaScript types, and the generated code from **wasm-bindgen** will do the dirty work of converting these rich types into the pointer types that Wasm actually understands.
|
||||
|
||||
We can use the awesome **wasm-pack** to build our Wasm binary, invoke the **wasm-bindgen** CLI tool, and package all of our JavaScript (and any optional generated TypeScript types) into one nice and neat package. Let's do that now!
|
||||
|
||||
First we'll need to install **wasm-pack** :
|
||||
|
||||
```
|
||||
$ cargo install wasm-pack
|
||||
```
|
||||
|
||||
By default, **wasm-bindgen** produces ES6 modules. We'll use our code from a simple script tag, so we just want it to produce a plain old JavaScript object that gives us access to our Wasm functions. To do this, we'll pass it the **\--target no-modules** option.
|
||||
|
||||
```
|
||||
$ wasm-pack build --target no-modules
|
||||
```
|
||||
|
||||
We now have a **pkg** directory in our project. If we look at the contents, we'll see the following:
|
||||
|
||||
* **package.json** : useful if we want to package this up as an NPM module
|
||||
* **my_wasm_library_bg.wasm** : our actual Wasm code
|
||||
* **my_wasm_library.js** : the JavaScript "glue" code
|
||||
* Some TypeScript definition files
|
||||
|
||||
|
||||
|
||||
Now we can create an **index.html** file that will make use of our JavaScript and Wasm:
|
||||
|
||||
```
|
||||
<html>
  <head>
    <meta content="text/html;charset=utf-8" http-equiv="Content-Type" />
  </head>
  <body>
    <!-- Include our glue code -->
    <script src='./pkg/my_wasm_library.js'></script>
    <!-- Use the glue code to load and call our Wasm module -->
    <script>
      window.addEventListener('load', async () => {
        // Load the wasm file
        await wasm_bindgen('./pkg/my_wasm_library_bg.wasm');
        // Once it's loaded the `wasm_bindgen` object is populated
        // with the functions defined in our Rust code
        const greeting = wasm_bindgen.excited_greeting("Ryan")
        console.log(greeting)
      });
    </script>
  </body>
</html>
|
||||
```
|
||||
|
||||
You may be tempted to open the HTML file in your browser, but unfortunately, this is not possible. For security reasons, Wasm files have to be served from the same domain as the HTML file. You'll need an HTTP server. If you have a favorite static HTTP server that can serve files from your filesystem, feel free to use that. I like to use [**basic-http-server**][14], which you can install and run like so:
|
||||
|
||||
```
|
||||
$ cargo install basic-http-server
|
||||
$ basic-http-server
|
||||
```
|
||||
|
||||
Now open the **index.html** file through the web server by going to **<http://localhost:4000/index.html>** and check your JavaScript console. You should see a very exciting greeting there!
|
||||
|
||||
If you have any questions, please [let me know][15]. Next time, we'll take a look at how we can use various browser and JavaScript APIs from within our Rust code.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/calling-rust-javascript
|
||||
|
||||
作者:[Ryan Levick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ryanlevick
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/javascript_vim.jpg?itok=mqkAeakO (JavaScript in Vim)
|
||||
[2]: https://opensource.com/article/19/2/why-use-rust-webassembly
|
||||
[3]: https://rustup.rs/
|
||||
[4]: https://doc.rust-lang.org/cargo/
|
||||
[5]: https://github.com/rustwasm/wasm-bindgen
|
||||
[6]: https://github.com/koute/stdweb
|
||||
[7]: https://github.com/koute/stdweb/issues/318
|
||||
[8]: https://www.rust-lang.org/governance/wgs/wasm
|
||||
[9]: http://december.com/html/4/element/html.html
|
||||
[10]: http://december.com/html/4/element/head.html
|
||||
[11]: http://december.com/html/4/element/meta.html
|
||||
[12]: http://december.com/html/4/element/body.html
|
||||
[13]: http://december.com/html/4/element/script.html
|
||||
[14]: https://github.com/brson/basic-http-server
|
||||
[15]: https://twitter.com/ryan_levick
|
@ -0,0 +1,266 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Install MEAN.JS Stack In Ubuntu 18.04 LTS)
|
||||
[#]: via: (https://www.ostechnix.com/install-mean-js-stack-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Install MEAN.JS Stack In Ubuntu 18.04 LTS
|
||||
======
|
||||
|
||||
![Install MEAN.JS Stack][1]
|
||||
|
||||
**MEAN.JS** is an Open-Source, full-Stack JavaScript solution for building fast, and robust web applications. **MEAN.JS** stack consists of **MongoDB** (NoSQL database), **ExpressJs** (NodeJS server-side application web framework), **AngularJS** (Client-side web application framework), and **Node.js** (JavaScript run-time, popular for being a web server platform). In this tutorial, we will be discussing how to install MEAN.JS stack in Ubuntu. This guide was tested in Ubuntu 18.04 LTS server. However, it should work on other Ubuntu versions and Ubuntu variants.
|
||||
|
||||
### Install MongoDB
|
||||
|
||||
**MongoDB** is a free, cross-platform, open source, NoSQL document-oriented database. To install MongoDB on your Ubuntu system, refer to the following guide:
|
||||
|
||||
* [**Install MongoDB Community Edition In Linux**][2]
|
||||
|
||||
|
||||
|
||||
### Install Node.js
|
||||
|
||||
**NodeJS** is an open source, cross-platform, and lightweight JavaScript run-time environment that can be used to build scalable network applications.
|
||||
|
||||
To install NodeJS on your system, refer to the following guide:
|
||||
|
||||
* [**How To Install NodeJS On Linux**][3]
|
||||
|
||||
|
||||
|
||||
After installing MongoDB and Node.js, we need to install the other required components, such as **Yarn**, **Grunt**, and **Gulp**, for the MEAN.JS stack.
|
||||
|
||||
### Install Yarn package manager
|
||||
|
||||
Yarn is a package manager used by MEAN.JS stack to manage front-end packages.
|
||||
|
||||
To install Yarn, run the following command:
|
||||
|
||||
```
|
||||
$ npm install -g yarn
|
||||
```
|
||||
|
||||
### Install Grunt Task Runner
|
||||
|
||||
Grunt Task Runner is used to automate the development process.
|
||||
|
||||
To install Grunt, run:
|
||||
|
||||
```
|
||||
$ npm install -g grunt-cli
|
||||
```
|
||||
|
||||
To verify if Yarn and Grunt have been installed, run:
|
||||
|
||||
```
|
||||
$ npm list -g --depth=0
/home/sk/.nvm/versions/node/v11.11.0/lib
├── [email protected]
├── [email protected]
└── [email protected]
|
||||
```
|
||||
|
||||
### Install Gulp Task Runner (Optional)
|
||||
|
||||
This is optional. You can use Gulp instead of Grunt. To install Gulp Task Runner, run the following command:
|
||||
|
||||
```
|
||||
$ npm install -g gulp
|
||||
```
|
||||
|
||||
We have installed all required prerequisites. Now, let us deploy MEAN.JS stack.
|
||||
|
||||
### Download and Install MEAN.JS Stack
|
||||
|
||||
Install Git if it is not installed already:
|
||||
|
||||
```
|
||||
$ sudo apt-get install git
|
||||
```
|
||||
|
||||
Next, git clone the MEAN.JS repository with command:
|
||||
|
||||
```
|
||||
$ git clone https://github.com/meanjs/mean.git meanjs
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Cloning into 'meanjs'...
|
||||
remote: Counting objects: 8596, done.
|
||||
remote: Compressing objects: 100% (12/12), done.
|
||||
remote: Total 8596 (delta 3), reused 0 (delta 0), pack-reused 8584 Receiving objects: 100% (8596/8596), 2.62 MiB | 140.00 KiB/s, done.
|
||||
Resolving deltas: 100% (4322/4322), done.
|
||||
Checking connectivity... done.
|
||||
```
|
||||
|
||||
The above command will clone the latest version of the MEAN.JS repository to the **meanjs** folder in your current working directory.
|
||||
|
||||
Go to the meanjs folder:
|
||||
|
||||
```
|
||||
$ cd meanjs/
|
||||
```
|
||||
|
||||
Run the following command to install the Node.js dependencies required for testing and running our application:
|
||||
|
||||
```
|
||||
$ npm install
|
||||
```
|
||||
|
||||
This will take some time. Please be patient.
|
||||
|
||||
* * *
|
||||
|
||||
**Troubleshooting:**
|
||||
|
||||
When I run the above command in Ubuntu 18.04 LTS, I get the following error:
|
||||
|
||||
```
|
||||
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node
|
||||
Cannot download "https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node":
|
||||
|
||||
HTTP error 404 Not Found
|
||||
|
||||
[....]
|
||||
```
|
||||
|
||||
If you ever get common errors like these related to “node-sass” and “gulp-sass”, do the following:
|
||||
|
||||
First uninstall the project and global gulp-sass modules using the following commands:
|
||||
|
||||
```
|
||||
$ npm uninstall gulp-sass
|
||||
$ npm uninstall -g gulp-sass
|
||||
```
|
||||
|
||||
Next uninstall the global node-sass module:
|
||||
|
||||
```
|
||||
$ npm uninstall -g node-sass
|
||||
```
|
||||
|
||||
Install the global node-sass first. Then install the gulp-sass module at the local project level.
|
||||
|
||||
```
|
||||
$ npm install -g node-sass
|
||||
$ npm install gulp-sass
|
||||
```
|
||||
|
||||
Now try the npm install again from the project folder using command:
|
||||
|
||||
```
|
||||
$ npm install
|
||||
```
|
||||
|
||||
Now all dependencies will start to install without any issues.
|
||||
|
||||
* * *
|
||||
|
||||
Once all dependencies are installed, run the following command to install all the front-end modules needed for the application:
|
||||
|
||||
```
|
||||
$ yarn --allow-root --config.interactive=false install
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ yarn --allow-root install
|
||||
```
|
||||
|
||||
You will see the following message at the end if the installation is successful.
|
||||
|
||||
```
|
||||
[...]
|
||||
> meanjs@0.6.0 snyk-protect /home/sk/meanjs
|
||||
> snyk protect
|
||||
|
||||
Successfully applied Snyk patches
|
||||
|
||||
Done in 99.47s.
|
||||
```
|
||||
|
||||
### Test MEAN.JS
|
||||
|
||||
The MEAN.JS stack has been installed. We can now start a sample application using the following command:
|
||||
|
||||
```
|
||||
$ npm start
|
||||
```
|
||||
|
||||
After a few seconds, you will see a message like below. This means MEAN.JS stack is working!
|
||||
|
||||
```
|
||||
[...]
|
||||
MEAN.JS - Development Environment
|
||||
|
||||
Environment: development
|
||||
Server: http://0.0.0.0:3000
|
||||
Database: mongodb://localhost/mean-dev
|
||||
App version: 0.6.0
|
||||
MEAN.JS version: 0.6.0
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
To verify, open up the browser and navigate to **<http://localhost:3000>** or **<http://IP-Address:3000/>**. You should see a screen like the one below.
|
||||
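If you are on a headless server, you can also do a quick sanity check from the terminal. This is just an optional sketch, assuming curl is installed; a successful HTTP response means the Express server is up:

```
$ curl -I http://localhost:3000
```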
|
||||
![][5]
|
||||
|
||||
Mean stack test page
|
||||
|
||||
Congratulations! MEAN.JS stack is ready to start building web applications.
|
||||
|
||||
For further details, I recommend you refer to the official **[MEAN.JS stack documentation][6]**.
|
||||
|
||||
* * *
|
||||
|
||||
Want to set up the MEAN.JS stack on CentOS, RHEL, or Scientific Linux? Check the following link for more details.
|
||||
|
||||
* **[Install MEAN.JS Stack in CentOS 7][7]**
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
And, that’s all for now, folks. I hope this tutorial will help you set up the MEAN.JS stack.
|
||||
|
||||
If you find this tutorial useful, please share it on your social, professional networks and support OSTechNix.
|
||||
|
||||
More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
**Resources:**
|
||||
|
||||
* **[MEAN.JS website][8]**
|
||||
* [**MEAN.JS GitHub Repository**][9]
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/install-mean-js-stack-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.ostechnix.com/install-mongodb-linux/
|
||||
[3]: https://www.ostechnix.com/install-node-js-linux/
|
||||
[4]: http://www.ostechnix.com/wp-content/uploads/2016/03/meanjs.png
|
||||
[5]: http://www.ostechnix.com/wp-content/uploads/2016/03/mean-stack-test-page.png
|
||||
[6]: http://meanjs.org/docs.html
|
||||
[7]: http://www.ostechnix.com/install-mean-js-stack-centos-7/
|
||||
[8]: http://meanjs.org/
|
||||
[9]: https://github.com/meanjs/mean
|
@ -0,0 +1,108 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Solus 4 ‘Fortitude’ Released with Significant Improvements)
|
||||
[#]: via: (https://itsfoss.com/solus-4-release)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Solus 4 ‘Fortitude’ Released with Significant Improvements
|
||||
======
|
||||
|
||||
Finally, after a year of work, the much anticipated Solus 4 is here. It’s a significant release not just because this is a major upgrade, but also because this is the first major release after [Ikey Doherty (the founder of Solus) left the project][1] a few months ago.
|
||||
|
||||
Now that everything’s under control with the new _management_, **Solus 4 Fortitude** has officially been released, with an updated Budgie desktop and other significant improvements.
|
||||
|
||||
### What’s New in Solus 4
|
||||
|
||||
![Solus 4 Fortitude][2]
|
||||
|
||||
#### Core Improvements
|
||||
|
||||
Solus 4 comes loaded with **[Linux Kernel 4.20.16][3]** which enables better hardware support (like Touchpad support, improved support for Intel Coffee Lake and Ice Lake CPUs, and for AMD Picasso & Raven2 APUs).
|
||||
|
||||
This release also ships with the latest [FFmpeg 4.1.1][4]. They have also enabled support for [dav1d][5] – an open source AV1 decoder – in [VLC][6]. So, you can expect these upgrades to significantly improve the multimedia experience.
|
||||
|
||||
It also includes some minor fixes to the Software Center – if you were encountering any issues while finding an application or viewing the description.
|
||||
|
||||
In addition, WPS Office has been removed from the listing.
|
||||
|
||||
#### UI Improvements
|
||||
|
||||
![Budgie 10.5][7]
|
||||
|
||||
The Budgie desktop update includes some minor changes and also comes baked in with the [Plata (Noir) GTK Theme][8].
|
||||
|
||||
You will no longer see the same application listed multiple times in the menu; they’ve fixed this. They have also introduced a “**Caffeine**” mode as an applet, which prevents the system from suspending, locking the screen, or changing the brightness while you are working. You can schedule the time accordingly.
|
||||
|
||||
![Caffeine Mode][9]
|
||||
|
||||
The new Budgie desktop experience also adds quick actions to the app icons on the task bar, dubbed the “**Icon Tasklist**”. It makes it easy to manage the active tabs on a browser or to minimize a window and move it to a new workspace (as shown in the image below).
|
||||
|
||||
![Icon Tasklist][10]
|
||||
|
||||
As the [change log][11] mentions, the above pop over design lets you do more:
|
||||
|
||||
* _Close all instances of the selected application_
|
||||
* _Easily access per-window controls for marking it always on top, maximizing / unmaximizing, minimizing, and moving it to various workspaces._
|
||||
* _Quickly favorite / unfavorite apps_
|
||||
* _Quickly launch a new instance of the selected application_
|
||||
* _Scroll up or down on an IconTasklist button when a single window is open to activate and bring it into focus, or minimize it, based on the scroll direction._
|
||||
* _Toggle to minimize and unminimize various application windows_
|
||||
|
||||
|
||||
|
||||
The notification area now groups the notifications from specific applications instead of piling them all up. So, that’s a good improvement.
|
||||
|
||||
In addition to these, the sound widget got some cool improvements while letting you personalize the look and feel of your desktop in an efficient manner.
|
||||
|
||||
To know about all the nitty-gritty details, do refer to the official [release notes][11].
|
||||
|
||||
### Download Solus 4
|
||||
|
||||
You can get the latest version of Solus from its download page below. It is available in the default Budgie, GNOME and MATE desktop flavors.
|
||||
|
||||
[Get Solus 4][12]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Solus 4 is definitely an impressive upgrade – it doesn’t introduce any unnecessary fancy features, just useful, subtle changes.
|
||||
|
||||
What do you think about the latest Solus 4 Fortitude? Have you tried it yet?
|
||||
|
||||
Let us know your thoughts in the comments below.
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/solus-4-release
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/ikey-leaves-solus/
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?fit=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/kernel-4-20-release/
|
||||
[4]: https://www.ffmpeg.org/
|
||||
[5]: https://code.videolan.org/videolan/dav1d
|
||||
[6]: https://www.videolan.org/index.html
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Budgie-desktop.jpg?resize=800%2C450&ssl=1
|
||||
[8]: https://gitlab.com/tista500/plata-theme
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/caffeine-mode.jpg?ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/IconTasklistPopover.jpg?ssl=1
|
||||
[11]: https://getsol.us/2019/03/17/solus-4-released/
|
||||
[12]: https://getsol.us/download/
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Budgie-desktop.jpg?fit=800%2C450&ssl=1
|
||||
[14]: https://www.facebook.com/sharer.php?t=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&u=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F
|
||||
[15]: https://twitter.com/intent/tweet?text=Solus+4+%E2%80%98Fortitude%E2%80%99+Released+with+Significant+Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F&via=itsfoss2
|
||||
[16]: https://www.linkedin.com/shareArticle?title=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F&mini=true
|
||||
[17]: https://www.reddit.com/submit?title=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F
|
@ -0,0 +1,50 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/)
|
||||
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
|
||||
|
||||
Blockchain 2.0: Blockchain In Real Estate [Part 4]
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png)
|
||||
|
||||
### Blockchain 2.0: Smart‘er’ Real Estate
|
||||
|
||||
The [**previous article**][1] of this series explored the features of blockchain which will enable institutions to transform and interlace **traditional banking** and **financing systems** with it. This part will explore **blockchain in real estate**. The real estate industry is ripe for a revolution. It’s among the most actively traded and most significant asset classes known to man. However, filled with regulatory hurdles and numerous possibilities of fraud and deceit, it’s also one of the toughest to participate in. The distributed ledger capabilities of the blockchain, utilizing an appropriate consensus algorithm, are touted as the way forward for an industry that is traditionally regarded as conservative in its attitude to change.
|
||||
|
||||
Real estate has always been a very conservative industry in terms of its myriad operations. Somewhat rightfully so as well. A major economic crisis such as the 2008 financial crisis or the great depression from the early half of the 20th century managed to destroy the industry and its participants. However, like most products of economic value, the real estate industry is resilient and this resilience is rooted in its conservative nature.
|
||||
|
||||
The global real estate market comprises an asset class worth **$228 trillion** [1], give or take. Other investment assets such as stocks, bonds, and shares combined are only worth **$170 trillion**. Obviously, any and all transactions implemented in such an industry are naturally carefully planned and meticulously executed, for the most part. For the most part, because real estate is also notorious for numerous instances of fraud and the devastating losses that ensue. The industry, because of the very conservative nature of its operations, is also tough to navigate. It’s heavily regulated, with complex laws creating an intertwined web of nuances that are just too difficult for an average person to understand fully. This makes entry and participation near impossible for most people. If you’ve ever been involved in one such deal, you’ll know how heavy and long the paper trail was.
|
||||
|
||||
This hard reality is now set to change, albeit through a slow and gradual transformation. The very reasons the industry has stuck to its hardy, tested roots all this while can finally give way to its modern-day counterpart. The backbone of the real estate industry has always been its paper records. Land deeds, titles, agreements, rental insurance, proofs, declarations, etc., are just the tip of the iceberg here. If you’ve noticed the pattern here, this should be obvious: the distributed ledger technology that is blockchain fits in perfectly with the needs here. Forget paper records; conventional database systems are also points of major failure. They can be modified by multiple participants, are not tamper-proof or un-hackable, and have a complicated set of ever-changing regulatory parameters, making auditing and verifying data a nightmare. The blockchain perfectly solves all of these issues and more.
|
||||
|
||||
Starting with a trivial albeit important example to show just how bad the current record management practices are in the real estate sector, consider the **Title Insurance business** [2], [3]. Title insurance is used to hedge against the possibility of the land’s titles and ownership records being inadmissible and hence unenforceable. An insurance product such as this is also referred to as an indemnity cover. It is by law required in many cases that properties have title insurance, especially when dealing with property that has changed hands multiple times over the years. Mortgage firms might insist on the same as well when they back real estate deals. The fact that a product of this kind has existed since the 1850s and that it does business worth at least **$1.5 trillion a year in the US alone** is a testament to the statement at the start. A revolution in terms of how these records are maintained is imperative in this situation, and the blockchain provides a sustainable solution. Title fraud averages around $100k per case as per the **American Land Title Association**, and 25% of all titles involved in transactions have an issue regarding their documents[4]. The blockchain allows for setting up an immutable permanent database that will track the property itself, recording each and every transaction or investment that has gone into it. Such a ledger system will make life easier for everyone involved in the real estate industry, including one-time home buyers, and make financial products such as title insurance basically irrelevant. Converting a physical asset such as real estate to a digital asset like this is unconventional and extant only in theory at the moment. However, such a change is imminent sooner rather than later[5].
|
||||
|
||||
Among the areas in which blockchain will have the most impact within real estate is, as highlighted above, maintaining a transparent and secure title management system for properties. A blockchain-based record of the property can contain information about the property, its location, history of ownership, and any related public record of the same[6]. This will permit closing real estate deals fast and obviates the need for 3rd party monitoring and oversight. Tasks such as real estate appraisal and tax calculations become matters of tangible objective parameters rather than subjective measures and guesses, because of reliable historical data which is publicly verifiable. **UBITQUITY** is one such platform that offers customized blockchain-based solutions to enterprise customers. The platform allows customers to keep track of all property details, payment records, and mortgage records, and even allows running smart contracts that’ll take care of taxation and leasing automatically[7].
|
||||
|
||||
This brings us to the second biggest opportunity and use case of blockchains in real estate. Since the sector is highly regulated by numerous 3rd parties apart from the counterparties involved in the trade, due-diligence and financial evaluations can be significantly time-consuming. These processes are predominantly carried out using offline channels, and paperwork needs to travel for days before a final evaluation report comes out. This is especially true for corporate real estate deals and forms a bulk of the total billable hours charged by consultants. In case the transaction is backed by a mortgage, duplication of these processes is unavoidable. Once combined with digital identities for the people and institutions involved along with the property, the current inefficiencies can be avoided altogether and transactions can take place in a matter of seconds. The tenants, investors, institutions involved, consultants, etc., could individually validate the data and arrive at a critical consensus, thereby validating the property records in perpetuity[8]. This increases the accuracy of verification manifold. Real estate giant **RE/MAX** has recently announced a partnership with service provider **XYO Network Partners** for building a national database of real estate listings in Mexico. They hope to one day create one of the largest (as of yet) decentralized real estate title registries in the world[9].
|
||||
|
||||
However, another significant and arguably very democratic change that the blockchain can bring about is with respect to investing in real estate. Unlike other investment asset classes where even small household investors can potentially participate, real estate often requires large down payments to participate. Companies such as **ATLANT** and **BitOfProperty** tokenize the book value of a property and convert it into equivalents of a cryptocurrency. These tokens are then put up for sale on their exchanges, similar to how stocks and shares are traded. Any cash flow that the real estate property generates afterward is credited or debited to the token owners depending on their “share” in the property[4].
|
||||
|
||||
However, even with all of that said, blockchain technology is still in the very early stages of adoption in the real estate sector, and current regulations are not exactly defined with it in mind either[8]. Concepts such as distributed applications, distributed anonymous organizations, and smart contracts are unheard of in the legal domain in many countries. A complete overhaul of existing regulations and guidelines, once all the stakeholders are well educated on the intricacies of the blockchain, is the most pragmatic way forward. Again, it’ll be a slow and gradual change to go through, however a much-needed one nonetheless. The next article in the series will look at how **“smart contracts”**, such as those implemented by companies like UBITQUITY and XYO, are created and executed on the blockchain.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/
|
||||
|
||||
作者:[EDITOR][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/
|
@ -0,0 +1,342 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Five Commands To Use Calculator In Linux Command Line?)
|
||||
[#]: via: (https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Five Commands To Use Calculator In Linux Command Line?
|
||||
======
|
||||
|
||||
As a Linux administrator, you may use a command line calculator many times a day for various purposes.
|
||||
|
||||
I have used this especially when creating LVM volumes using the PE (Physical Extent) values.
|
||||
|
||||
There are many commands available for this purpose, and I’m going to list the most used ones in this article.
|
||||
|
||||
These command line calculators allow us to perform all kinds of calculations, such as scientific, financial, or even simple arithmetic.
|
||||
|
||||
Also, we can use these commands in shell scripts for complex math.
|
||||
|
||||
In this article, I’m listing the top five command line calculator commands.
|
||||
|
||||
Those command line calculator commands are listed below.
|
||||
|
||||
* **`bc:`** An arbitrary precision calculator language
|
||||
* **`calc:`** arbitrary precision calculator
|
||||
* **`expr:`** evaluate expressions
|
||||
* **`gcalccmd:`** gnome-calculator – a desktop calculator
|
||||
  * **`qalc:`** powerful and easy to use command line calculator (part of Qalculate)
|
||||
  * **`Linux Shell:`** shell facilities such as echo and awk for basic arithmetic
|
||||
|
||||
|
||||
|
||||
### How To Perform Calculation In Linux Using bc Command?
|
||||
|
||||
bc stands for Basic Calculator. bc is a language that supports arbitrary precision numbers with interactive execution of statements. There are some similarities in the syntax to the C programming language.
|
||||
|
||||
A standard math library is available by command line option. If requested, the math library is defined before processing any files. bc starts by processing code from all the files listed on the command line in the order listed.
|
||||
|
||||
After all files have been processed, bc reads from the standard input. All code is executed as it is read.
|
||||
|
||||
By default, the bc command is installed on most Linux systems. If not, use the following procedure to install it.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][1]** to install bc.
|
||||
|
||||
```
|
||||
$ sudo dnf install bc
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install bc.
|
||||
|
||||
```
|
||||
$ sudo apt install bc
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install bc.
|
||||
|
||||
```
|
||||
$ sudo pacman -S bc
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install bc.
|
||||
|
||||
```
|
||||
$ sudo yum install bc
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install bc.
|
||||
|
||||
```
|
||||
$ sudo zypper install bc
|
||||
```
|
||||
|
||||
### How To Use The bc Command To Perform Calculation In Linux?
|
||||
|
||||
We can use the bc command to perform all kinds of calculations right from the terminal.
|
||||
|
||||
```
|
||||
$ bc
|
||||
bc 1.07.1
|
||||
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
|
||||
This is free software with ABSOLUTELY NO WARRANTY.
|
||||
For details type `warranty'.
|
||||
|
||||
1+2
|
||||
3
|
||||
|
||||
10-5
|
||||
5
|
||||
|
||||
2*5
|
||||
10
|
||||
|
||||
10/2
|
||||
5
|
||||
|
||||
(2+4)*5-5
|
||||
25
|
||||
|
||||
quit
|
||||
```
|
||||
|
||||
Use the `-l` flag to load the standard math library.
|
||||
|
||||
```
|
||||
$ bc -l
|
||||
bc 1.07.1
|
||||
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
|
||||
This is free software with ABSOLUTELY NO WARRANTY.
|
||||
For details type `warranty'.
|
||||
|
||||
3/5
|
||||
.60000000000000000000
|
||||
|
||||
quit
|
||||
```
|
||||
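As mentioned earlier, these calculators are also handy inside shell scripts. Here is a minimal sketch of using bc non-interactively, by piping an expression into it (the `scale` setting controls the number of decimal places):

```
$ echo "scale=2; 10 / 3" | bc
3.33
```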
|
||||
### How To Perform Calculation In Linux Using calc Command?
|
||||
|
||||
calc is an arbitrary precision calculator. It’s a simple calculator that allows us to perform all kinds of calculations on the Linux command line.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][1]** to install calc.
|
||||
|
||||
```
|
||||
$ sudo dnf install calc
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install calc.
|
||||
|
||||
```
|
||||
$ sudo apt install calc
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install calc.
|
||||
|
||||
```
|
||||
$ sudo pacman -S calc
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install calc.
|
||||
|
||||
```
|
||||
$ sudo yum install calc
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install calc.
|
||||
|
||||
```
|
||||
$ sudo zypper install calc
|
||||
```
|
||||
|
||||
### How To Use The calc Command To Perform Calculation In Linux?
|
||||
|
||||
We can use the calc command to perform all kinds of calculations right from the terminal.
|
||||
|
||||
Interactive mode
|
||||
|
||||
```
|
||||
$ calc
|
||||
C-style arbitrary precision calculator (version 2.12.7.1)
|
||||
Calc is open software. For license details type: help copyright
|
||||
[Type "exit" to exit, or "help" for help.]
|
||||
|
||||
; 5+1
|
||||
6
|
||||
; 5-1
|
||||
4
|
||||
; 5*2
|
||||
10
|
||||
; 10/2
|
||||
5
|
||||
; quit
|
||||
```
|
||||
|
||||
Non-interactive mode
|
||||
|
||||
```
|
||||
$ calc 3/5
|
||||
0.6
|
||||
```
|
||||
|
||||
### How To Perform Calculation In Linux Using expr Command?
|
||||
|
||||
expr evaluates an expression and prints its value to standard output. It’s part of GNU coreutils, so there is no need to install it.
|
||||
|
||||
### How To Use The expr Command To Perform Calculation In Linux?
|
||||
|
||||
Use the following format for basic calculations.
|
||||
|
||||
For addition
|
||||
|
||||
```
|
||||
$ expr 5 + 1
|
||||
6
|
||||
```
|
||||
|
||||
For subtraction
|
||||
|
||||
```
|
||||
$ expr 5 - 1
|
||||
4
|
||||
```
|
||||
|
||||
For division.
|
||||
|
||||
```
|
||||
$ expr 10 / 2
|
||||
5
|
||||
```
|
||||
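For multiplication (not shown above), note that the asterisk must be escaped so the shell does not expand it as a wildcard:

```
$ expr 5 \* 2
10
```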
|
||||
### How To Perform Calculation In Linux Using gcalccmd Command?
|
||||
|
||||
gnome-calculator is the official calculator of the GNOME desktop environment. gcalccmd is the console version of the GNOME Calculator utility. It is installed by default with the GNOME desktop.
|
||||
|
||||
### How To Use The gcalccmd Command To Perform Calculation In Linux?
|
||||
|
||||
I have added a few examples below.
|
||||
|
||||
```
|
||||
$ gcalccmd
|
||||
|
||||
> 5+1
|
||||
6
|
||||
|
||||
> 5-1
|
||||
4
|
||||
|
||||
> 5*2
|
||||
10
|
||||
|
||||
> 10/2
|
||||
5
|
||||
|
||||
> sqrt(16)
|
||||
4
|
||||
|
||||
> 3/5
|
||||
0.6
|
||||
|
||||
> quit
|
||||
```
|
||||
|
||||
### How To Perform Calculation In Linux Using qalc Command?
|
||||
|
||||
Qalculate is a multi-purpose cross-platform desktop calculator. It is simple to use but provides power and versatility normally reserved for complicated math packages, as well as useful tools for everyday needs (such as currency conversion and percent calculation).
|
||||
|
||||
Features include a large library of customizable functions, unit calculations and conversion, symbolic calculations (including integrals and equations), arbitrary precision, uncertainty propagation, interval arithmetic, plotting, and a user-friendly interface (GTK+ and CLI).
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][1]** to install qalc.
|
||||
|
||||
```
|
||||
$ sudo dnf install libqalculate
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install qalc.
|
||||
|
||||
```
|
||||
$ sudo apt install libqalculate
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install qalc.
|
||||
|
||||
```
|
||||
$ sudo pacman -S libqalculate
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install qalc.
|
||||
|
||||
```
|
||||
$ sudo yum install libqalculate
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install qalc.
|
||||
|
||||
```
|
||||
$ sudo zypper install libqalculate
|
||||
```
|
||||
|
||||
### How To Use The qalc Command To Perform Calculation In Linux?
|
||||
|
||||
I have added a few examples below.
|
||||
|
||||
```
|
||||
$ qalc
|
||||
> 5+1
|
||||
|
||||
5 + 1 = 6
|
||||
|
||||
> ans*2
|
||||
|
||||
ans * 2 = 12
|
||||
|
||||
> ans-2
|
||||
|
||||
ans - 2 = 10
|
||||
|
||||
> 1 USD to INR
|
||||
It has been 36 day(s) since the exchange rates last were updated.
|
||||
Do you wish to update the exchange rates now? y
|
||||
|
||||
error: Failed to download exchange rates from coinbase.com: Resolving timed out after 15000 milliseconds.
|
||||
1 * dollar = approx. INR 69.638581
|
||||
|
||||
> 10 USD to INR
|
||||
|
||||
10 * dollar = approx. INR 696.38581
|
||||
|
||||
> quit
|
||||
```
|
||||
|
||||
### How To Perform Calculation In Linux Using Linux Shell Command?
|
||||
|
||||
We can use shell commands such as echo, awk, etc., to perform calculations.
|
||||
|
||||
For addition using the echo command:
|
||||
|
||||
```
|
||||
$ echo $((5+5))
|
||||
10
|
||||
```
|
||||
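Since awk is mentioned above as well, here is a quick sketch of using it for the same kind of calculation. awk is handy when you need floating point results, which plain shell arithmetic expansion does not provide:

```
$ awk 'BEGIN { print 10 / 3 }'
3.33333
```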
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[5]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -0,0 +1,365 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Set Up a Firewall with GUFW on Linux)
|
||||
[#]: via: (https://itsfoss.com/set-up-firewall-gufw)
|
||||
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
|
||||
|
||||
How To Set Up a Firewall with GUFW on Linux
|
||||
======
|
||||
|
||||
**UFW (Uncomplicated Firewall)** is a simple-to-use firewall utility with plenty of options for most users. It is an interface for **iptables**, which is the classic (and harder to get comfortable with) way to set up rules for your network.
|
||||
|
||||
**Do you really need a firewall for desktop?**
|
||||
|
||||
![][1]
|
||||
|
||||
A **[firewall][2]** is a way to regulate the incoming and outgoing traffic on your network. A well-configured firewall is crucial for the security of servers.
|
||||
|
||||
But what about normal, desktop users? Do you need a firewall on your Linux system? Most likely you are connected to the internet via a router linked to your internet service provider (ISP). Some routers already have a built-in firewall. On top of that, your actual system is hidden behind NAT. In other words, you probably have a security layer when you are on your home network. Still, a laptop moves between networks you don’t control, and a host firewall gives you control over traffic regardless of what sits upstream, so running one is a sensible precaution.
|
||||
|
||||
Now that you know you should be using a firewall on your system, let’s see how you can easily install and configure a firewall on Ubuntu or any other Linux distribution.
|
||||
|
||||
### Setting Up A Firewall With GUFW
|
||||
|
||||
**[GUFW][3]** is a graphical utility for managing [Uncomplicated Firewall][4] ( **UFW** ). In this guide, I’ll go over configuring a firewall using **GUFW** that suits your needs, going over the different modes and rules.
|
||||
|
||||
But first, let’s see how to install GUFW.
|
||||
|
||||
#### Installing GUFW on Ubuntu and other Linux
|
||||
|
||||
GUFW is available in all major Linux distributions. I advise using your distribution’s package manager for installing GUFW.
|
||||
|
||||
If you are using Ubuntu, make sure you have the Universe repository enabled. To do that, open up a terminal (default hotkey: CTRL+ALT+T) and enter:
|
||||
|
||||
```
|
||||
sudo add-apt-repository universe
|
||||
sudo apt update -y
|
||||
```
|
||||
|
||||
Now you can install GUFW with this command:
|
||||
|
||||
```
|
||||
sudo apt install gufw -y
|
||||
```
|
||||
|
||||
That’s it! If you prefer not touching the terminal, you can install it from the Software Center as well.
|
||||
|
||||
Open Software Center and search for **gufw** and click on the search result.
|
||||
|
||||
![Search for gufw in software center][5]
|
||||
|
||||
Go ahead and click **Install**.
|
||||
|
||||
![Install GUFW from the Software Center][6]
|
||||
|
||||
To open **gufw** , go to your menu and search for it.
|
||||
|
||||
![Start GUFW][7]
|
||||
|
||||
This will open the firewall application and you’ll be greeted by a “ **Getting Started** ” section.
|
||||
|
||||
![GUFW Interface and Welcome Screen][8]
|
||||
|
||||
#### Turn on the firewall
|
||||
|
||||
The first thing to notice about this menu is the **Status** toggle. Pressing this button will turn on/off the firewall ( **default:** off), applying your preferences (policies and rules).
|
||||
|
||||
![Turn on the firewall][9]
|
||||
|
||||
If turned on, the shield icon turns from grey to colored. The colors, as noted later in this article, reflect your policies. This will also make the firewall **automatically start** on system startup.
|
||||
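Under the hood, this toggle enables UFW itself. If you ever want to confirm from a terminal that the firewall is active, the plain ufw command shows the current state and default policies (just a quick check, not required for using GUFW):

```
$ sudo ufw status verbose
```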
|
||||
**Note:** _**Home** will be turned **off** by default. The other profiles (see next section) will be turned **on.**_
|
||||
|
||||
#### Understanding GUFW and its profiles
|
||||
|
||||
As you can see in the menu, you can select different **profiles**. Each profile comes with different **default policies**. What this means is that they offer different behaviors for incoming and outgoing traffic.
|
||||
|
||||
The **default profiles** are:
|
||||
|
||||
* Home
|
||||
* Public
|
||||
* Office
|
||||
|
||||
|
||||
|
||||
You can select another profile by clicking on the current one ( **default: Home** ).
|
||||
|
||||
![][10]
|
||||
|
||||
Selecting one of them will modify the default behavior. Further down, you can change Incoming and Outgoing traffic preferences.
|
||||
|
||||
By default, both in **Home** and in **Office** , these policies are **Deny Incoming** and **Allow Outgoing**. This enables you to use services such as http/https without letting anything get in ( **e.g.** ssh).
|
||||
|
||||
For **Public** , they are **Reject Incoming** and **Allow Outgoing**. **Reject** , similar to **deny** , doesn’t let services in, but also sends feedback to the user/service that tried accessing your machine (instead of simply dropping/hanging the connection).
|
||||
|
||||
Note
|
||||
|
||||
If you are an average desktop user, you can stick with the default profiles. You’ll have to manually change the profiles if you change the network.
|
||||
|
||||
So if you are travelling, set the firewall to the public profile, and from here forward, the firewall will be set to public mode on each reboot.
|
||||
|
||||
#### Configuring firewall rules and policies [for advanced users]
|
||||
|
||||
All profiles use the same rules, only the policies the rules build upon will differ. Changing the behavior of a policy ( **Incoming/Outgoing** ) will apply the changes to the selected profile.
|
||||
|
||||
Note that the policies can only be changed while the firewall is active (Status: ON).
|
||||
|
||||
Profiles can easily be added, deleted and renamed from the **Preferences** menu.
|
||||
|
||||
##### Preferences
|
||||
|
||||
In the top bar, click on **Edit**. Select **Preferences**.
|
||||
|
||||
![Open Preferences Menu in GUFW][11]
|
||||
|
||||
This will open up the **Preferences** menu.
|
||||
|
||||
![][12]
|
||||
|
||||
Let’s go over the options you have here!
|
||||
|
||||
**Logging** means exactly what you would think: how much information the firewall writes down in the log files.
|
||||
|
||||
The options under **Gufw** are quite self-explanatory.
|
||||
|
||||
In the section under **Profiles** is where we can add, delete and rename profiles. Double-clicking on a profile will allow you to **rename** it. Pressing **Enter** will complete this process and pressing **Esc** will cancel the rename.
|
||||
|
||||
![][13]
|
||||
|
||||
To **add** a new profile, click on the **+** under the list of profiles. This will add a new profile. However, it won’t notify you about it. You’ll also have to scroll down the list to see the profile you created (using the mouse wheel or the scroll bar on the right side of the list).
|
||||
|
||||
**Note:** _The newly added profile will **Deny Incoming** and **Allow Outgoing** traffic._
|
||||
|
||||
![][14]
|
||||
|
||||
Clicking a profile highlights that profile. Pressing the **–** button will **delete** the highlighted profile.
|
||||
|
||||
![][15]
|
||||
|
||||
**Note:** _You can’t rename/remove the currently selected profile_.
|
||||
|
||||
You can now click on **Close**. Next, I’ll go into setting up different **rules**.
|
||||
|
||||
##### Rules
|
||||
|
||||
Back to the main menu, somewhere in the middle of the screen you can select different tabs ( **Home, Rules, Report, Logs)**. We already covered the **Home** tab (that’s the quick guide you see when you start the app).
|
||||
|
||||
![][16]
|
||||
|
||||
Go ahead and select **Rules**.
|
||||
|
||||
![][17]
|
||||
|
||||
This will be the bulk of your firewall configuration: networking rules. You need to understand the concepts UFW is based on. That is **allowing, denying, rejecting** and **limiting** traffic.
|
||||
|
||||
**Note:** _In UFW, the rules apply from top to bottom (the top rules take effect first and on top of them are added the following ones)._
|
||||
|
||||
**Allow, Deny, Reject, Limit:** These are the available policies for the rules you’ll add to your firewall. (For reference, equivalent plain `ufw` commands are sketched right after the list below.)
|
||||
|
||||
Let’s see exactly what each of them means:
|
||||
|
||||
* **Allow:** allows any entry traffic to a port
|
||||
* **Deny:** denies any entry traffic to a port
|
||||
* **Reject:** denies any entry traffic to a port and informs the requester about the rejection
|
||||
* **Limit:** denies entry traffic if an IP address has attempted to initiate 6 or more connections in the last 30 seconds
|
||||
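Purely as a sketch, since GUFW handles all of this for you, the same four policies map to plain `ufw` commands on the terminal (the port numbers here are arbitrary examples):

```
$ sudo ufw allow 22/tcp      # Allow
$ sudo ufw deny 3306/tcp     # Deny
$ sudo ufw reject 25/tcp     # Reject (sends a rejection back to the requester)
$ sudo ufw limit ssh         # Limit (rate-limits repeated connection attempts)
```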
|
||||
|
||||
|
||||
##### Adding Rules
|
||||
|
||||
There are three ways to add rules in GUFW. I’ll present all three methods in the following section.
|
||||
|
||||
**Note:** _After you added the rules, changing their order is a very tricky process and it’s easier to just delete them and add them in the right order._
|
||||
|
||||
But first, click on the **+** at the bottom of the **Rules** tab.
|
||||
|
||||
![][18]
|
||||
|
||||
This should open a pop-up menu ( **Add a Firewall Rule** ).
|
||||
|
||||
![][19]
|
||||
|
||||
At the top of this menu, you can see the three ways you can add rules. I’ll guide you through each method i.e. **Preconfigured, Simple, Advanced**. Click to expand each section.
|
||||
|
||||
**Preconfigured Rules**
|
||||
|
||||
This is the most beginner-friendly way to add rules.
|
||||
|
||||
The first step is choosing a policy for the rule (from the ones detailed above).
|
||||
|
||||
![][20]
|
||||
|
||||
The next step is to choose the direction the rule will affect ( **Incoming, Outgoing, Both** ).
|
||||
|
||||
![][21]
|
||||
|
||||
There are plenty of **Category** and **Subcategory** choices. These narrow down the **Applications** you can select.
|
||||
|
||||
Choosing an **Application** will set up a set of ports based on what is needed for that particular application. This is especially useful for apps that might operate on multiple ports, or if you don’t want to bother with manually creating rules for handwritten port numbers.
|
||||
|
||||
If you wish to further customize the rule, you can click on the **orange arrow icon**. This will copy the current settings (the Application with its ports, etc.) and take you to the **Advanced** rule menu. I’ll cover that later in this article.
|
||||
|
||||
For this example, I picked an **Office Database** app: **MySQL**. I’ll deny all incoming traffic to the ports used by this app.
|
||||
To create the rule, click on **Add**.
|
||||
|
||||
![][22]
|
||||
|
||||
You can now **Close** the pop-up (if you don’t want to add any other rules). You can see that the rule has been successfully added.
|
||||
|
||||
![][23]
|
||||
|
||||
The ports have been added by GUFW, and the rules have been automatically numbered. You may wonder why are there two new rules instead of just one; the answer is that UFW automatically adds both a standard **IP** rule and an **IPv6** rule.
|
||||
|
||||
**Simple Rules**
|
||||
|
||||
Although setting up preconfigured rules is nice, there is another easy way to add a rule. Click on the **+** icon again and go to the **Simple** tab.
|
||||
|
||||
![][24]
|
||||
|
||||
The options here are straightforward. Enter a name for your rule and select the policy and the direction. I’ll add a rule for rejecting incoming SSH attempts.
|
||||
|
||||
![][25]
|
||||
|
||||
The **Protocols** you can choose are **TCP, UDP** or **Both**.
|
||||
|
||||
You must now enter the **Port** for which you want to manage the traffic. You can enter a **port number** (e.g. 22 for ssh), a **port range** with inclusive ends separated by a **:** ( **colon** ) (e.g. 81:89), or a **service name** (e.g. ssh). I’ll use **ssh** and select **both TCP and UDP** for this example. As before, click on **Add** to complete the creation of your rule. You can click the **red arrow icon** to copy the settings to the **Advanced** rule creation menu.
|
||||
|
||||
![][26]
|
||||
|
||||
If you select **Close** , you can see that the new rule (along with the corresponding IPv6 rule) has been added.
|
||||
|
||||
![][27]
|
||||
|
||||
**Advanced Rules**
|
||||
|
||||
I’ll now go into how to set up more advanced rules, to handle traffic from specific IP addresses and subnets and targeting different interfaces.
|
||||
|
||||
Let’s open up the **Rules** menu again. Select the **Advanced** tab.
|
||||
|
||||
![][28]
|
||||
|
||||
By now, you should already be familiar with the basic options: **Name, Policy, Direction, Protocol, Port**. These are the same as before.
|
||||
|
||||
![][29]
|
||||
|
||||
**Note:** _You can choose both a receiving port and a requesting port._
|
||||
|
||||
What changes is that now you have additional options to further specialize our rules.
|
||||
|
||||
I mentioned before that rules are automatically numbered by GUFW. With **Advanced** rules you specify the position of your rule by entering a number in the **Insert** option.
|
||||
|
||||
**Note:** _Inputting **position 0** will add your rule after all existing rules._
|
||||
|
||||
**Interface** lets you select any network interface available on your machine. By doing so, the rule will only have effect on traffic to and from that specific interface.
|
||||
|
||||
**Log** changes exactly that: what will and what won’t be logged.
|
||||
|
||||
You can also choose IPs for the requesting and for the receiving port/service ( **From** , **To** ).
|
||||
|
||||
All you have to do is specify an **IP address** (e.g. 192.168.0.102) or an entire **subnet** (e.g. 192.168.0.0/24 for IPv4 addresses ranging from 192.168.0.0 to 192.168.0.255).
|
||||
|
||||
In my example, I’ll set up a rule to allow all incoming TCP SSH requests from systems on my subnet to a specific network interface of the machine I’m currently running. I’ll add the rule after all my standard IP rules, so that it takes effect on top of the other rules I have set up.
|
||||
|
||||
![][30]
|
||||
|
||||
**Close** the menu.
|
||||
|
||||
![][31]
|
||||
|
||||
The rule has been successfully added after the other standard IP rules.
|
||||
|
||||
##### Edit Rules
|
||||
|
||||
Clicking a rule in the rules list will highlight it. Now, if you click on the **little cog icon** at the bottom, you can **edit** the highlighted rule.
|
||||
|
||||
![][32]
|
||||
|
||||
This will open up a menu looking something like the **Advanced** menu I explained in the last section.
|
||||
|
||||
![][33]
|
||||
|
||||
**Note:** _Editing any options of a rule will move it to the end of your list._
|
||||
|
||||
You can now either select **Apply** to modify your rule and move it to the end of the list, or hit **Cancel**.
|
||||
|
||||
##### Delete Rules
|
||||
|
||||
After selecting (highlighting) a rule, you can also click on the **–** icon.
|
||||
|
||||
![][34]
|
||||
|
||||
##### Reports
|
||||
|
||||
Select the **Report** tab. Here you can see services that are currently running (along with information about them, such as Protocol, Port, Address and Application name). From here, you can **Pause Listening Report (Pause Icon)** or **Create a rule from a highlighted service from the listening report (+ Icon)**.
|
||||
|
||||
![][35]
|
||||
|
||||
##### Logs
|
||||
|
||||
Select the **Logs** tab. This is where you’ll have to check for any errors or suspicious rules. I’ve tried creating some invalid rules to show you what these might look like when you don’t know why you can’t add a certain rule. In the bottom section, there are two icons. Clicking the **first icon copies the logs** to your clipboard and clicking the **second icon clears the log**.
|
||||
|
||||
![][36]
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Having a firewall that is properly configured can greatly contribute to your Ubuntu experience, making your machine safer to use and allowing you to have full control over incoming and outgoing traffic.
|
||||
|
||||
I have covered the different uses and modes of **GUFW** , going into how to set up different rules and configure a firewall to your needs. I hope that this guide has been helpful to you.
|
||||
|
||||
If you are a beginner, this should prove to be a comprehensive guide; even if you are more versed in the Linux world and maybe getting your feet wet into servers and networking, I hope you learned something new.
|
||||
|
||||
Let us know in the comments if this article helped you and why you decided a firewall would improve your system!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/set-up-firewall-gufw
|
||||
|
||||
作者:[Sergiu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/sergiu/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firewall-linux.png?resize=800%2C450&ssl=1
|
||||
[2]: https://en.wikipedia.org/wiki/Firewall_(computing)
|
||||
[3]: http://gufw.org/
|
||||
[4]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_gufw-1.jpg?ssl=1
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_install_gufw.jpg?ssl=1
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/show_applications_gufw.jpg?ssl=1
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw.jpg?ssl=1
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_toggle_status.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_select_profile-1.jpg?ssl=1
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_open_preferences.jpg?ssl=1
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preferences.png?fit=800%2C585&ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rename_profile.png?fit=800%2C551&ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_profile.png?ssl=1
|
||||
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_profile.png?ssl=1
|
||||
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_home_tab.png?ssl=1
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rules_tab.png?ssl=1
|
||||
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rule.png?ssl=1
|
||||
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rules_menu.png?ssl=1
|
||||
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_policy.png?ssl=1
|
||||
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_direction.png?ssl=1
|
||||
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_add_rule.png?ssl=1
|
||||
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_added.png?ssl=1
|
||||
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rules_menu.png?ssl=1
|
||||
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_name_policy_direction.png?ssl=1
|
||||
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rule.png?ssl=1
|
||||
[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_added.png?ssl=1
|
||||
[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rules_menu.png?ssl=1
|
||||
[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_basic_options.png?ssl=1
|
||||
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rule.png?ssl=1
|
||||
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_added.png?ssl=1
|
||||
[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_highlighted_rule.png?ssl=1
|
||||
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_rule_menu.png?ssl=1
|
||||
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_rule.png?ssl=1
|
||||
[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_report_tab.png?ssl=1
|
||||
[36]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_log_tab-1.png?ssl=1
|
||||
[37]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firewall-linux.png?fit=800%2C450&ssl=1
|
@ -0,0 +1,107 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up a homelab from hardware to firewall)
|
||||
[#]: via: (https://opensource.com/article/19/3/home-lab)
|
||||
[#]: author: (Michael Zamot (Red Hat) https://opensource.com/users/mzamot)
|
||||
|
||||
How to set up a homelab from hardware to firewall
|
||||
======
|
||||
|
||||
Take a look at hardware and software options for building your own homelab.
|
||||
|
||||
![][1]
|
||||
|
||||
Do you want to create a homelab? Maybe you want to experiment with different technologies, create development environments, or have your own private cloud. There are many reasons to have a homelab, and this guide aims to make it easier to get started.
|
||||
|
||||
There are three categories to consider when planning a home lab: hardware, software, and maintenance. We'll look at the first two categories here and save maintaining your computer lab for a future article.
|
||||
|
||||
### Hardware
|
||||
|
||||
When thinking about your hardware needs, first consider how you plan to use your lab as well as your budget, noise, space, and power usage.
|
||||
|
||||
If buying new hardware is too expensive, search local universities, ads, and websites like eBay or Craigslist for recycled servers. They are usually inexpensive, and server-grade hardware is built to last many years. You'll need three types of hardware: a virtualization server, storage, and a router/firewall.
|
||||
|
||||
#### Virtualization servers
|
||||
|
||||
A virtualization server allows you to run several virtual machines that share the physical box's resources while staying isolated from one another. If you break one virtual machine, you won't have to rebuild the entire server, just the virtual one. If you want to do a test or try something without the risk of breaking your entire system, just spin up a new virtual machine and you're ready to go.
|
||||
|
||||
The two most important factors to consider in a virtualization server are the number and speed of its CPU cores and its memory. If there are not enough resources to share among all the virtual machines, they'll be overallocated and try to steal each other's CPU cycles and memory.
|
||||
|
||||
So, consider a CPU platform with multiple cores. You want to ensure the CPU supports virtualization instructions (VT-x for Intel and AMD-V for AMD). Examples of good consumer-grade processors that can handle virtualization are Intel i5 or i7 and AMD Ryzen. If you are considering server-grade hardware, the Xeon class for Intel and EPYC for AMD are good options. Memory can be expensive, especially the latest DDR4 SDRAM. When estimating memory requirements, factor at least 2GB for the host operating system's memory consumption.
|
||||
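If you already have a Linux box handy, a quick way to check whether the CPU exposes those instructions is to look at its flags (this is just a sanity check and works on any distribution):

```
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means hardware
# virtualization is missing or disabled in the firmware/BIOS.
$ grep -Ec '(vmx|svm)' /proc/cpuinfo
```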
|
||||
If your electricity bill or noise is a concern, solutions like Intel's NUC devices provide a small form factor, low power usage, and reduced noise, but at the expense of expandability.
|
||||
|
||||
#### Network-attached storage (NAS)
|
||||
|
||||
If you want a machine loaded with hard drives to store all your personal data, movies, pictures, etc. and provide storage for the virtualization server, network-attached storage (NAS) is what you want.
|
||||
|
||||
In most cases, you won't need a powerful CPU; in fact, many commercial NAS solutions use low-powered ARM CPUs. A motherboard that supports multiple SATA disks is a must. If your motherboard doesn't have enough ports, use a host bus adapter (HBA) SAS controller to add extras.
|
||||
|
||||
Network performance is critical for a NAS, so select a gigabit network interface (or better).
|
||||
|
||||
Memory requirements will differ based on your filesystem. ZFS is one of the most popular filesystems for NAS, and you'll need more memory to use features such as caching or deduplication. Error-correcting code (ECC) memory is your best bet to protect data from corruption (but make sure your motherboard supports it before you buy). Last, but not least, don't forget an uninterruptible power supply (UPS), because losing power can cause data corruption.
|
||||
|
||||
#### Firewall and router
|
||||
|
||||
Have you ever realized that a cheap router/firewall is usually the main thing protecting your home network from the outside world? These routers rarely receive timely security updates, if they receive any at all. Scared now? Well, [you should be][2]!
|
||||
|
||||
You usually don't need a powerful CPU or a great deal of memory to build your own router/firewall, unless you are handling a huge throughput or want to do CPU-intensive tasks, like a VPN server or traffic filtering. In such cases, you'll need a multicore CPU with AES-NI support.
|
||||
|
||||
You may want to get at least two 1-gigabit (or better) Ethernet network interface cards (NICs). A managed switch is not required but is recommended: connecting it to your DIY router lets you create VLANs to further isolate and secure your network.
|
||||
|
||||
![Home computer lab PfSense][4]
|
||||
|
||||
### Software
|
||||
|
||||
After you've selected your virtualization server, NAS, and firewall/router, the next step is exploring the different operating systems and software to maximize their benefits. While you could use a regular Linux distribution like CentOS, Debian, or Ubuntu, they usually take more time to configure and administer than the following options.
|
||||
|
||||
#### Virtualization software
|
||||
|
||||
**[KVM][5]** (Kernel-based Virtual Machine) lets you turn Linux into a hypervisor so you can run multiple virtual machines in the same box. The best thing is that KVM is part of Linux, and it is the go-to option for many enterprises and home users. If you want, you can install **[libvirt][6]** and **[virt-manager][7]** to manage your virtualization platform more comfortably.
|
||||
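As a rough sketch of what getting started looks like on a Fedora- or RHEL-style host (package names and commands will differ on other distributions):

```
$ sudo dnf install qemu-kvm libvirt virt-install virt-manager
$ sudo systemctl enable --now libvirtd   # start the management daemon
$ virsh list --all                       # an empty list means KVM/libvirt is ready
```

From there, virt-manager gives you a point-and-click way to create and manage guests.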
|
||||
**[Proxmox VE][8]** is a robust, enterprise-grade solution and a full open source virtualization and container platform. It is based on Debian and uses KVM as its hypervisor and LXC for containers. Proxmox offers a powerful web interface, an API, and can scale out to many clustered nodes, which is helpful because you'll never know when you'll run out of capacity in your lab.
|
||||
|
||||
**[oVirt][9] (RHV)** is another enterprise-grade solution that uses KVM as the hypervisor. Just because it's enterprise doesn't mean you can't use it at home. oVirt offers a powerful web interface and an API and can handle hundreds of nodes (if you are running that many servers, I don't want to be your neighbor!). The potential problem with oVirt for a home lab is that it requires a minimum set of nodes: you'll need one external storage system, such as a NAS, and at least two additional virtualization nodes (you can run it on just one, but you'll run into problems maintaining your environment).
|
||||
|
||||
#### NAS software
|
||||
|
||||
**[FreeNAS][10]** is the most popular open source NAS distribution, and it's based on the rock-solid FreeBSD operating system. One of its most robust features is its use of the ZFS filesystem, which provides data-integrity checking, snapshots, replication, and multiple levels of redundancy (mirroring, striped mirrors, and striping). On top of that, everything is managed from the powerful and easy-to-use web interface. Before installing FreeNAS, check its hardware support, as it is not as broad as that of Linux-based distributions.
|
||||
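Those redundancy levels correspond to how ZFS lays out its vdevs. Purely as an illustration (FreeNAS normally builds pools for you through the web UI, and the `ada*` device names below are placeholders for your own disks), a pool of striped mirrors could be created like this:

```
$ sudo zpool create tank mirror ada0 ada1 mirror ada2 ada3
$ zpool status tank
```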
|
||||
Another popular alternative is the Linux-based **[OpenMediaVault][11]**. One of its main features is its modularity, with plugins that extend and add features. Among its included features are a web-based administration interface; protocols like CIFS, SFTP, NFS, iSCSI; and volume management, including software RAID, quotas, access control lists (ACLs), and share management. Because it is Linux-based, it has extensive hardware support.
|
||||
|
||||
#### Firewall/router software
|
||||
|
||||
**[pfSense][12]** is an open source, enterprise-grade FreeBSD-based router and firewall distribution. It can be installed directly on a server or even inside a virtual machine (to manage your virtual or physical networks and save space). It has many features and can be expanded using packages. It is managed entirely using the web interface, although it also has command-line access. It has all the features you would expect from a router and firewall, like DHCP and DNS, as well as more advanced features, such as intrusion detection (IDS) and intrusion prevention (IPS) systems. You can create multiple networks listening on different interfaces or using VLANs, and you can create a secure VPN server with a few clicks. pfSense uses pf, a stateful packet filter that was developed for the OpenBSD operating system using a syntax similar to IPFilter. Many companies and organizations use pfSense.
|
||||
|
||||
* * *
|
||||
|
||||
With all this information in mind, it's time for you to get your hands dirty and start building your lab. In a future article, I will get into the third category of running a home lab: using automation to deploy and maintain it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/home-lab
|
||||
|
||||
作者:[Michael Zamot (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
|
||||
[2]: https://opensource.com/article/18/5/how-insecure-your-router
|
||||
[3]: /file/427426
|
||||
[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
|
||||
[5]: https://www.linux-kvm.org/page/Main_Page
|
||||
[6]: https://libvirt.org/
|
||||
[7]: https://virt-manager.org/
|
||||
[8]: https://www.proxmox.com/en/proxmox-ve
|
||||
[9]: https://ovirt.org/
|
||||
[10]: https://freenas.org/
|
||||
[11]: https://www.openmediavault.org/
|
||||
[12]: https://www.pfsense.org/
|
121
sources/tech/20190320 4 cool terminal multiplexers.md
Normal file
121
sources/tech/20190320 4 cool terminal multiplexers.md
Normal file
@ -0,0 +1,121 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (4 cool terminal multiplexers)
|
||||
[#]: via: (https://fedoramagazine.org/4-cool-terminal-multiplexers/)
|
||||
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
|
||||
|
||||
4 cool terminal multiplexers
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
The Fedora OS is comfortable and easy for lots of users. It has a stunning desktop that makes it easy to get everyday tasks done. Under the hood is all the power of a Linux system, and the terminal is the easiest way for power users to harness it. By default terminals are simple and somewhat limited. However, a _terminal multiplexer_ allows you to turn your terminal into an even more incredible powerhouse. This article shows off some popular terminal multiplexers and how to install them.
|
||||
|
||||
Why would you want to use one? Well, for one thing, it lets you log out of your system while _leaving your terminal session undisturbed_. It’s incredibly useful to log out of your console, secure it, travel somewhere else, then remotely log in with SSH and continue where you left off. Here are some utilities to check out.
|
||||
|
||||
One of the oldest and most well-known terminal multiplexers is _screen._ However, because the code is no longer maintained, this article focuses on more recent apps. (“Recent” is relative — some of these have been around for years!)
|
||||
|
||||
### Tmux
|
||||
|
||||
The _tmux_ utility is one of the most widely used replacements for _screen._ It has a highly configurable interface. You can program tmux to start up specific kinds of sessions based on your needs. You’ll find a lot more about tmux in this article published earlier:
|
||||
|
||||
> [Use tmux for a more powerful terminal][2]
|
||||
|
||||
Already a tmux user? You might like [this additional article on making your tmux sessions more effective][3].
|
||||
|
||||
To install tmux, use the _sudo_ command along with _dnf_ , since you’re probably in a terminal already:
|
||||
|
||||
```
|
||||
$ sudo dnf install tmux
|
||||
```
|
||||
|
||||
To start learning, run the _tmux_ command. A single pane window starts with your default shell. Tmux uses a _modifier key_ to signal that a command is coming next. This key is **Ctrl+B** by default. If you enter **Ctrl+B, C** you’ll create a new window with a shell in it.
|
||||
|
||||
Here’s a hint: Use **Ctrl+B, ?** to enter a help mode that lists all the keys you can use. To keep things simple, look for the lines starting with _bind-key -T prefix_ at first. These are keys you can use right after the modifier key to configure your tmux session. You can hit **Ctrl+C** to exit the help mode back to tmux.
|
||||
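Once you know which bindings you like, you can make them permanent in a configuration file. Here is a small, entirely optional _~/.tmux.conf_ as one possible starting point:

```
# ~/.tmux.conf -- example configuration; adjust to taste.
# Use Ctrl+A instead of the default Ctrl+B as the modifier key.
unbind C-b
set -g prefix C-a
bind C-a send-prefix
# Let the mouse select panes and resize splits (tmux 2.1 and later).
set -g mouse on
```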
|
||||
To completely exit tmux, use the standard _exit_ command or _Ctrl+D_ keystroke to exit all the shells.
|
||||
|
||||
### Dvtm
|
||||
|
||||
You might have recently seen the Magazine article on [dwm, a dynamic window manager][4]. Like dwm, _dvtm_ is for tiling window management — but in a terminal. It’s designed to adhere to the legacy UNIX philosophy of “do one thing well” — in this case managing windows in a terminal.
|
||||
|
||||
Installing dvtm is easy as well. However, if you want the logout functionality mentioned earlier, you’ll also need the _abduco_ package which handles session management for dvtm.
|
||||
|
||||
```
|
||||
$ sudo dnf install dvtm abduco
|
||||
```
|
||||
|
||||
The dvtm utility has many keystrokes already mapped to allow you to manage windows in the terminal. By default, it uses **Ctrl+G** as its modifier key. This keystroke tells dvtm that the following character is going to be a command it should process. For instance, **Ctrl+G, C** creates a new window and **Ctrl+G, X** removes it.
|
||||
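The detach-and-reattach workflow comes from _abduco_ rather than dvtm itself. A typical session (the name _work_ here is arbitrary) looks something like this:

```
# Create a detachable dvtm session named "work".
$ abduco -c work dvtm
# Detach with Ctrl+\ (abduco's default), log out, come back later, then:
$ abduco -a work
```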
|
||||
For more information on using dvtm, check out the dvtm [home page][5] which includes numerous tips and get-started information.
|
||||
|
||||
### Byobu
|
||||
|
||||
While _byobu_ isn’t truly a multiplexer on its own — it wraps _tmux_ or even the older _screen_ to add functions — it’s worth covering here too. Byobu makes terminal multiplexers better for novices, by adding a help menu and window tabs that are slightly easier to navigate.
|
||||
|
||||
Of course it’s available in the Fedora repos as well. To install, use this command:
|
||||
|
||||
```
|
||||
$ sudo dnf install byobu
|
||||
```
|
||||
|
||||
By default the _byobu_ command runs _screen_ underneath, so you might want to run _byobu-tmux_ to wrap _tmux_ instead. You can then use the **F9** key to open up a help menu for more information to help you get started.
|
||||
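If you decide to keep using it, _byobu_ also ships small helper commands to toggle launching it automatically for your login shell:

```
$ byobu-enable    # start byobu automatically at future logins
$ byobu-disable   # turn automatic launch back off
```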
|
||||
### Mtm
|
||||
|
||||
The _mtm_ utility is one of the smallest multiplexers you’ll find. In fact, it’s only about 1000 lines of code! You might find it helpful if you’re in a limited environment such as old hardware, a minimal container, and so forth. To get started, you’ll need a couple packages.
|
||||
|
||||
```
|
||||
$ sudo dnf install git ncurses-devel make gcc
|
||||
```
|
||||
|
||||
Then clone the repository where mtm lives:
|
||||
|
||||
```
|
||||
$ git clone https://github.com/deadpixi/mtm.git
|
||||
```
|
||||
|
||||
Change directory into the _mtm_ folder and build the program:
|
||||
|
||||
```
|
||||
$ make
|
||||
```
|
||||
|
||||
You might receive a few warnings, but when you’re done, you’ll have the very small _mtm_ utility. Run it with this command:
|
||||
|
||||
```
|
||||
$ ./mtm
|
||||
```
|
||||
|
||||
You can find all the documentation for the utility [on its GitHub page][6].
|
||||
|
||||
These are just some of the terminal multiplexers out there. Got one you’d like to recommend? Leave a comment below with your tips and enjoy building windows in your terminal!
|
||||
|
||||
* * *
|
||||
|
||||
_Photo by _[ _Michael_][7]_ on [Unsplash][8]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/4-cool-terminal-multiplexers/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/pfrields/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2018/08/tmuxers-4-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
[3]: https://fedoramagazine.org/4-tips-better-tmux-sessions/
|
||||
[4]: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
|
||||
[5]: http://www.brain-dump.org/projects/dvtm/#why
|
||||
[6]: https://github.com/deadpixi/mtm
|
||||
[7]: https://unsplash.com/photos/48yI_ZyzuLo?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[8]: https://unsplash.com/search/photos/windows?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Choosing an open messenger client: Alternatives to WhatsApp)
|
||||
[#]: via: (https://opensource.com/article/19/3/open-messenger-client)
|
||||
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
|
||||
|
||||
Choosing an open messenger client: Alternatives to WhatsApp
|
||||
======
|
||||
|
||||
Keep in touch with far-flung family, friends, and colleagues without sacrificing your privacy.
|
||||
|
||||
![Team communication, chat][1]
|
||||
|
||||
Like many families, mine is inconveniently spread around, and I have many colleagues in North and South America. So, over the years, I've relied more and more on WhatsApp to stay in touch with people. The claimed end-to-end encryption appeals to me, as I prefer to maintain some shreds of privacy, and moreover to avoid forcing those with whom I communicate to use an insecure mechanism. But all this [WhatsApp/Facebook/Instagram "convergence"][2] has led our family to decide to vote with our feet. We no longer use WhatsApp for anything except communicating with others who refuse to use anything else, and we're working on them.
|
||||
|
||||
So what do we use instead? Before I spill the beans, I'd like to explain what other options we looked at and how we chose.
|
||||
|
||||
### Options we considered and how we evaluated them
|
||||
|
||||
There is an absolutely [crazy number of messaging apps out there][3], and we spent a good deal of time thinking about what we needed for a replacement. We started by reading Dan Arel's article on [five social media alternatives to protect privacy][4].
|
||||
|
||||
Then we came up with our list of core needs:
|
||||
|
||||
* Our entire family uses Android phones.
|
||||
* One of us has a Windows desktop; the rest use Linux.
|
||||
* Our main interest is something we can use to chat, both individually and as a group, on our phones, but it would be nice to have a desktop client available.
|
||||
* It would also be nice to have voice and video calling as well.
|
||||
* Our privacy is important. Ideally, the code should be open source to facilitate security reviews. If the operation is not pure peer-to-peer, then the organization operating the server components should not operate a business based on the commercialization of our personal information.
|
||||
|
||||
|
||||
|
||||
At that point, we narrowed the long list down to [Viber][5], [Line][6], [Signal][7], [Threema][8], [Wire][9], and [Riot.im][10]. While I lean strongly to open source, we wanted to include some closed source and paid solutions to make sure we weren't missing something important. Here's how those six alternatives measured up.
|
||||
|
||||
### Line
|
||||
|
||||
[Line][11] is a popular messaging application, and it's part of a larger Line "ecosystem"—online gaming, Taxi (an Uber-like service in Japan), Wow (a food delivery service), Today (a news hub), shopping, and others. For us, Line checks a few too many boxes with all those add-on features. Also, I could not determine its current security quality, and it's not open source. The business model seems to be to build a community and figure out how to make money through that community.
|
||||
|
||||
### Riot.im
|
||||
|
||||
[Riot.im][12] operates on top of the Matrix protocol and therefore lets the user choose a Matrix provider. It also appears to check all of our "needs" boxes, although in operation it looks more like Slack, with a room-oriented and interoperable/federated design. It offers desktop clients, and it's open source. Since the Matrix protocol can be hosted anywhere, any business model would be particular to the Matrix provider.
|
||||
|
||||
### Signal
|
||||
|
||||
[Signal][13] offers a similar user experience to WhatsApp. It checks all of our "needs" boxes, with solid security validated by external audit. It is open source, and it is developed and operated by a not-for-profit foundation, in principle similar to the Mozilla Foundation. Interestingly, Signal's communications protocol appears to be used by other messaging apps, [including WhatsApp][14].
|
||||
|
||||
### Threema
|
||||
|
||||
[Threema][15] is extremely privacy-focused. It checks some of our "needs" boxes, with decent external audit results of its security. It doesn't offer a desktop client, and it [isn't fully open source][16] though some of its core components are. Threema's business model appears to be to offer paid secure communications.
|
||||
|
||||
### Viber
|
||||
|
||||
[Viber][17] is a very popular messaging application. It checks most of our "needs" boxes; however, it doesn't seem to have solid proof of its security—it seems to use a proprietary encryption mechanism, and as far as I could determine, its current security mechanisms are not externally audited. It's not open source. The owner, Rakuten, seems to be planning for a paid subscription as a business model.
|
||||
|
||||
### Wire
|
||||
|
||||
[Wire][18] was started and is built by some ex-Skype people. It appears to check all of our "needs" boxes, although I am not completely comfortable with its security profile, since it stores client data on its servers apparently without encryption. It offers desktop clients and is open source. The developer and operator, Wire Swiss, appears to have a [pay-for-service track][9] as its future business model.
|
||||
|
||||
### The final verdict
|
||||
|
||||
In the end, we picked Signal. We liked its open-by-design approach, its serious and ongoing [privacy and security stance][7], and the availability of a Signal app on our GNOME (and Windows) desktops. It performs very well on our Android handsets and our desktops. Moreover, it wasn't a big surprise to our small user community; it feels much more like WhatsApp than, for example, Riot.im, which we also tried extensively. Having said that, if we were trying to replace Slack, we'd probably move to Riot.im.
|
||||
|
||||
_Have a favorite messenger? Tell us about it in the comments below._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/open-messenger-client
|
||||
|
||||
作者:[Chris Hermansen (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
|
||||
[2]: https://www.cnbc.com/2018/03/28/facebook-new-privacy-settings-dont-address-instagram-whatsapp.html
|
||||
[3]: https://en.wikipedia.org/wiki/Comparison_of_instant_messaging_clients
|
||||
[4]: https://opensource.com/article/19/1/open-source-social-media-alternatives
|
||||
[5]: https://en.wikipedia.org/wiki/Viber
|
||||
[6]: https://en.wikipedia.org/wiki/Line_(software)
|
||||
[7]: https://en.wikipedia.org/wiki/Signal_(software)
|
||||
[8]: https://en.wikipedia.org/wiki/Threema
|
||||
[9]: https://en.wikipedia.org/wiki/Wire_(software)
|
||||
[10]: https://en.wikipedia.org/wiki/Riot.im
|
||||
[11]: https://line.me/en/
|
||||
[12]: https://about.riot.im/
|
||||
[13]: https://signal.org/
|
||||
[14]: https://en.wikipedia.org/wiki/Signal_Protocol
|
||||
[15]: https://threema.ch/en
|
||||
[16]: https://threema.ch/en/faq/source_code
|
||||
[17]: https://www.viber.com/
|
||||
[18]: https://wire.com/en/
|
130
sources/tech/20190320 Move your dotfiles to version control.md
Normal file
130
sources/tech/20190320 Move your dotfiles to version control.md
Normal file
@ -0,0 +1,130 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Move your dotfiles to version control)
|
||||
[#]: via: (https://opensource.com/article/19/3/move-your-dotfiles-version-control)
|
||||
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
|
||||
|
||||
Move your dotfiles to version control
|
||||
======
|
||||
Back up or sync your custom configurations across your systems by sharing dotfiles on GitLab or GitHub.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ)
|
||||
|
||||
There is something truly exciting about customizing your operating system through the collection of hidden files we call dotfiles. In [What a Shell Dotfile Can Do For You][1], H. "Waldo" Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let's dig into the why and how of sharing them.
|
||||
|
||||
### What's a dotfile?
|
||||
|
||||
"Dotfiles" is a common term for all the configuration files we have floating around our machines. These files usually start with a **.** at the beginning of the filename, like **.gitconfig** , and operating systems often hide them by default. For example, when I use **ls -a** on MacOS, it shows all the lovely dotfiles that would otherwise not be in the output.
|
||||
|
||||
```
|
||||
dotfiles on master
|
||||
➜ ls
|
||||
README.md Rakefile bin misc profiles zsh-custom
|
||||
|
||||
dotfiles on master
|
||||
➜ ls -a
|
||||
. .gitignore .oh-my-zsh README.md zsh-custom
|
||||
.. .gitmodules .tmux Rakefile
|
||||
.gemrc .global_ignore .vimrc bin
|
||||
.git .gvimrc .zlogin misc
|
||||
.gitconfig .maid .zshrc profiles
|
||||
```
|
||||
|
||||
If I take a look at one, **.gitconfig** , which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. Here's a snippet from the **[alias]** block:
|
||||
|
||||
```
|
||||
87 # Show the diff between the latest commit and the current state
|
||||
88 d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat"
|
||||
89
|
||||
90 # `git di $number` shows the diff between the state `$number` revisions ago and the current state
|
||||
91 di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d"
|
||||
92
|
||||
93 # Pull in remote changes for the current repository and all its submodules
|
||||
94 p = !"git pull; git submodule foreach git pull origin master"
|
||||
95
|
||||
96 # Checkout a pull request from origin (of a github repository)
|
||||
97 pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr"
|
||||
```
|
||||
|
||||
Since my **.gitconfig** has over 200 lines of customization, I have no interest in rewriting it on every new computer or system I use, and neither does anyone else. This is one reason sharing dotfiles has become more and more popular, especially with the rise of the social coding site GitHub. The canonical article advocating for sharing dotfiles is Zach Holman's [Dotfiles Are Meant to Be Forked][2] from 2008. The premise is true to this day: I want to share them, with myself, with those new to dotfiles, and with those who have taught me so much by sharing their customizations.
|
||||
|
||||
### Sharing dotfiles
|
||||
|
||||
Many of us have multiple systems or know hard drives are fickle enough that we want to back up our carefully curated customizations. How do we keep these wonderful files in sync across environments?
|
||||
|
||||
My favorite answer is distributed version control, preferably a service that will handle the heavy lifting for me. I regularly use GitHub and continue to enjoy GitLab as I get more experienced with it. Either one is a perfect place to share your information. To set yourself up:
|
||||
|
||||
1. Sign into your preferred Git-based service.
|
||||
2. Create a repository called "dotfiles." (Make it public! Sharing is caring.)
|
||||
3. Clone it to your local environment.*
|
||||
4. Copy your dotfiles into the folder.
|
||||
5. Symbolically link (symlink) them back to their target folder (most often **$HOME** ).
|
||||
6. Push them to the remote repository.
|
||||
|
||||
|
||||
|
||||
* You may need to set up your Git configuration commands to clone the repository. Both GitHub and GitLab will prompt you with the commands to run.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/gitlab-new-project.png)
|
||||
|
||||
Step 4 above is the crux of this effort and can be a bit tricky. Whether you use a script or do it by hand, the workflow is to symlink from your dotfiles folder to the dotfiles destination so that any updates to your dotfiles are easily pushed to the remote repository. To do this for my **.gitconfig** file, I would enter:
|
||||
|
||||
```
|
||||
$ cd dotfiles/
|
||||
$ ln -nfs "$PWD/.gitconfig" "$HOME/.gitconfig"
|
||||
```
|
||||
|
||||
The flags added to the symlinking command offer a few additional benefits:
|
||||
|
||||
* **-s** creates a symbolic link instead of a hard link
|
||||
* **-f** forces the link to be created by first removing any existing destination file (handy when re-running the command or using it in loops)
|
||||
* **-n** treats a destination that is itself a symlink to a directory as a plain file rather than following it (same as **-h** for other versions of **ln** )
|
||||
|
||||
|
||||
|
||||
You can review the IEEE and Open Group [specification of **ln**][3] and the version on [MacOS 10.14.3][4] if you want to dig deeper into the available parameters. I had to look up these flags since I pulled them from someone else's dotfiles.
|
||||
|
||||
You can also make updating simpler with a little additional code, like the [Rakefile][5] I forked from [Brad Parbs][6]. Alternatively, you can keep it incredibly simple, as Jeff Geerling does [in his dotfiles][7]. He symlinks files using [this Ansible playbook][8]. Keeping everything in sync at this point is easy: you can cron job or occasionally **git push** from your dotfiles folder.
|
||||
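If you would rather not pull in Ruby or Ansible, a plain shell loop can do the same job. This is only a minimal sketch, and it assumes your repository is checked out at **~/dotfiles** :

```
# Link every dotfile in ~/dotfiles into $HOME, skipping the repository's own metadata.
for f in "$HOME"/dotfiles/.??*; do
  name=$(basename "$f")
  [ "$name" = ".git" ] && continue
  ln -nfs "$f" "$HOME/$name"
done
```

Because of the **-f** flag, re-running the loop simply refreshes links that already exist.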
|
||||
### Quick aside: What not to share
|
||||
|
||||
Before we move on, it is worth noting what you should not add to a shared dotfile repository—even if it starts with a dot. Anything that is a security risk, like files in your **.ssh/** folder, is not a good choice to share using this method. Be sure to double-check your configuration files before publishing them online and triple-check that no API tokens are in your files.
|
||||
|
||||
### Where should I start?
|
||||
|
||||
If Git is new to you, my [article about the terminology][9] and [a cheat sheet][10] of my most frequently used commands should help you get going.
|
||||
|
||||
There are other incredible resources to help you get started with dotfiles. Years ago, I came across [dotfiles.github.io][11] and continue to go back to it for a broader look at what people are doing. There is a lot of tribal knowledge hidden in other people's dotfiles. Take the time to scroll through some and don't be shy about adding them to your own.
|
||||
|
||||
I hope this will get you started on the joy of having consistent dotfiles across your computers.
|
||||
|
||||
What's your favorite dotfile trick? Add a comment or tweet me [@mbbroberg][12].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/move-your-dotfiles-version-control
|
||||
|
||||
作者:[Matthew Broberg][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mbbroberg
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/18/9/shell-dotfile
|
||||
[2]: https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/
|
||||
[3]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html
|
||||
[4]: https://www.unix.com/man-page/FreeBSD/1/ln/
|
||||
[5]: https://github.com/mbbroberg/dotfiles/blob/master/Rakefile
|
||||
[6]: https://github.com/bradp/dotfiles
|
||||
[7]: https://github.com/geerlingguy/dotfiles
|
||||
[8]: https://github.com/geerlingguy/mac-dev-playbook
|
||||
[9]: https://opensource.com/article/19/2/git-terminology
|
||||
[10]: https://opensource.com/downloads/cheat-sheet-git
|
||||
[11]: http://dotfiles.github.io/
|
||||
[12]: https://twitter.com/mbbroberg?lang=en
|
@ -0,0 +1,186 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Nuvola: Desktop Music Player for Streaming Services)
|
||||
[#]: via: (https://itsfoss.com/nuvola-music-player)
|
||||
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
|
||||
|
||||
Nuvola: Desktop Music Player for Streaming Services
|
||||
======
|
||||
|
||||
[Nuvola][1] is not like your usual music players. It’s different because it allows you to play a number of streaming services in a desktop music player.
|
||||
|
||||
Nuvola provides a runtime called [Nuvola Apps Runtime][2] which runs web apps. This is why Nuvola can support a host of streaming services. Some of the major players it supports are:
|
||||
|
||||
* Spotify
|
||||
* Google Play Music
|
||||
* YouTube, YouTube Music
|
||||
* [Pandora][3]
|
||||
* [SoundCloud][4]
|
||||
* and many many more.
|
||||
|
||||
|
||||
|
||||
You can find the full list [here][1] in the Music streaming services section. Apple Music is not supported, if you were wondering.
|
||||
|
||||
Why would you use a streaming music service in a different desktop player when you can run it in a web browser? The advantage with Nuvola is that it provides tight integration with many [desktop environments][5].
|
||||
|
||||
Ideally it should work with all DEs, but the officially supported ones are GNOME, Unity, and Pantheon (elementary OS).
|
||||
|
||||
### Features of Nuvola Music Player
|
||||
|
||||
Let’s see some of the main features of the open source project Nuvola:
|
||||
|
||||
* Supports a wide variety of music streaming services
|
||||
* Desktop integration with GNOME, Unity, and Pantheon.
|
||||
* Keyboard shortcuts with the ability to customize them
|
||||
* Support for keyboard’s multimedia keys (paid feature)
|
||||
* Background play with notifications
|
||||
* [GNOME Media Player][6] extension support
|
||||
* App Tray indicator
|
||||
* Dark and Light themes
|
||||
* Enable or disable features
|
||||
* Password Manager for web services
|
||||
* Remote control over internet (paid feature)
|
||||
* Available for a lot of distros ([Flatpak][7] packages)
|
||||
|
||||
|
||||
|
||||
Complete list of features is available [here][8].
|
||||
|
||||
### How to install Nuvola on Ubuntu & other Linux distributions
|
||||
|
||||
Installing Nuvola consists of a few more steps than simply adding a PPA and then installing the software. Since it is based on [Flatpak][7], you have to set up Flatpak first and then you can install Nuvola.
|
||||
|
||||
[Enable Flatpak Support][9]
|
||||
|
||||
The steps are pretty simple. You can follow the guide [here][10] if you want to install using the GUI; however, I prefer terminal commands since they’re easier and faster.
|
||||
|
||||
**Warning: If already installed, remove the older version of Nuvola**
|
||||
|
||||
If you have ever installed Nuvola before, you need to uninstall it to avoid issues. Run these commands in the terminal to do so.
|
||||
|
||||
```
|
||||
sudo apt remove nuvolaplayer*
|
||||
```
|
||||
|
||||
```
|
||||
rm -rf ~/.cache/nuvolaplayer3 ~/.local/share/nuvolaplayer ~/.config/nuvolaplayer3 ~/.local/share/applications/nuvolaplayer3*
|
||||
```
|
||||
|
||||
Once you have made sure that your system has Flatpak, add the Flathub and Nuvola Flatpak repositories using these commands:
|
||||
|
||||
```
|
||||
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
|
||||
flatpak remote-add --if-not-exists nuvola https://dl.tiliado.eu/flatpak/nuvola.flatpakrepo
|
||||
```
|
||||
|
||||
This is an optional step, but I recommend you install it, since the service allows you to configure common settings, such as shortcuts, for each of the streaming services that you might use.
|
||||
|
||||
```
|
||||
flatpak install nuvola eu.tiliado.Nuvola
|
||||
```
|
||||
|
||||
Nuvola supports 29 streaming services. To get them, you need to add those services individually. You can find all the supported music services on this [page][10].
|
||||
|
||||
For the purpose of this tutorial, I’m going to go with [YouTube Music][11].
|
||||
|
||||
```
|
||||
flatpak install nuvola eu.tiliado.NuvolaAppYoutubeMusic
|
||||
```
|
||||
|
||||
After this, you should have the app installed and should be able to see the icon if you search for it.
|
||||
|
||||
![Nuvola App specific icons][12]
|
||||
|
||||
Clicking on the icon will bring up the first-time setup. You’ll have to accept the Privacy Policy and then continue.
|
||||
|
||||
![Terms and Conditions page][13]
|
||||
|
||||
After accepting terms and conditions, you should launch into the web app of the respective streaming service, YouTube Music in this case.
|
||||
|
||||
![YouTube Music web app running on Nuvola Runtime][14]
|
||||
|
||||
In case of installation on other distributions, specific guidelines are available on the [Nuvola website][15].
|
||||
|
||||
### My experience with Nuvola Music Player
|
||||
|
||||
Initially I thought that it wouldn’t be too different from simply running the web app in [Firefox][16], since many desktop environments like KDE support media controls and shortcuts for media playing in Firefox.
|
||||
|
||||
However, this isn’t the case with many other desktop environments, and that’s where Nuvola comes in handy. Often, it’s also faster to access than loading the website in the browser.
|
||||
|
||||
Once loaded, it behaves pretty much like a normal web app with the benefit of keyboard shortcuts. Speaking of shortcuts, you should check out the list of must know [Ubuntu shortcuts][17].
|
||||
|
||||
![Viewing an Artist’s page][18]
|
||||
|
||||
Integration with the DE comes in handy when you quickly want to change a song or play/pause your music without leaving your current application. Nuvola gives you access in GNOME notifications as well as provides an app tray icon.
|
||||
|
||||
* ![Notification music controls][19]
|
||||
|
||||
* ![App tray music controls][20]
|
||||
|
||||
|
||||
|
||||
|
||||
Keyboard shortcuts work well, globally as well as in-app. You get a notification when the song changes, whether you change it yourself or it automatically switches to the next song.
|
||||
|
||||
![][21]
|
||||
|
||||
By default, very few keyboard shortcuts are provided. However, you can enable them for almost everything you can do with the app. For example, I set the song-change shortcuts to Ctrl + Arrow keys, as you can see in the screenshot.
|
||||
|
||||
![Keyboard Shortcuts][22]
|
||||
|
||||
All in all, it works pretty well and it’s fast and responsive. Definitely more so than your usual Snap app.
|
||||
|
||||
**Some criticism**
|
||||
|
||||
Something that did not please me as much was the installation size. Since it requires a browser back-end and GNOME integration, it essentially installs a browser and the necessary GNOME libraries for Flatpak, which results in almost 350MB of dependencies.
|
||||
|
||||
After that, you install individual apps. The individual apps themselves are not heavy at all. But if you just use one streaming service, having a 300+ MB installation might not be ideal if you’re concerned about disk space.
|
||||
|
||||
Nuvola also does not support local music, at least as far as I could find.
|
||||
|
||||
**Conclusion**
|
||||
|
||||
Hope this article helped you to know more about Nuvola Music Player and its features. If you like such different applications, why not take a look at some of the [lesser known music players for Linux][23]?
|
||||
|
||||
As always, if you have any suggestions or questions, I look forward to reading your comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/nuvola-music-player
|
||||
|
||||
作者:[Atharva Lele][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/atharva/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://nuvola.tiliado.eu/
|
||||
[2]: https://nuvola.tiliado.eu/#fn:1
|
||||
[3]: https://itsfoss.com/install-pandora-linux-client/
|
||||
[4]: https://itsfoss.com/install-soundcloud-linux/
|
||||
[5]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[6]: https://extensions.gnome.org/extension/55/media-player-indicator/
|
||||
[7]: https://flatpak.org/
|
||||
[8]: http://tiliado.github.io/nuvolaplayer/documentation/4/explore.html
|
||||
[9]: https://itsfoss.com/flatpak-guide/
|
||||
[10]: https://nuvola.tiliado.eu/nuvola/ubuntu/bionic/
|
||||
[11]: https://nuvola.tiliado.eu/app/youtube_music/ubuntu/bionic/
|
||||
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music_icon.png?resize=800%2C450&ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_eula.png?resize=800%2C450&ssl=1
|
||||
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music.png?resize=800%2C450&ssl=1
|
||||
[15]: https://nuvola.tiliado.eu/index/
|
||||
[16]: https://itsfoss.com/why-firefox/
|
||||
[17]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player.png?resize=800%2C449&ssl=1
|
||||
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_music_controls.png?fit=800%2C450&ssl=1
|
||||
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player2.png?fit=800%2C450&ssl=1
|
||||
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_song_change_notification-e1553077619208.png?ssl=1
|
||||
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_shortcuts.png?resize=800%2C450&ssl=1
|
||||
[23]: https://itsfoss.com/lesser-known-music-players-linux/
|
@ -0,0 +1,268 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Setup Linux Media Server Using Jellyfin)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Setup Linux Media Server Using Jellyfin
|
||||
======
|
||||
|
||||
![Setup Linux Media Server Using Jellyfin][1]
|
||||
|
||||
We’ve already written about setting up your own streaming media server on Linux using [**Streama**][2]. Today, we are going to set up yet another media server using **Jellyfin**. Jellyfin is a free, cross-platform, and open source alternative to proprietary media streaming applications such as **Emby** and **Plex**. The main developer of Jellyfin forked it from Emby after the announcement of Emby transitioning to a proprietary model. Jellyfin doesn’t include any premium features, licenses, or membership plans. It is a completely free and open source project supported by hundreds of community members. Using Jellyfin, we can set up a Linux media server in minutes and access it via LAN/WAN from any device using multiple apps.
|
||||
|
||||
### Setup Linux Media Server Using Jellyfin
|
||||
|
||||
Jellyfin supports GNU/Linux, Mac OS and Microsoft Windows operating systems. You can install it on your Linux distribution as described below.
|
||||
|
||||
##### Install Jellyfin On Linux
|
||||
|
||||
As of writing this guide, Jellyfin packages are available for most popular Linux distributions, such as Arch Linux, Debian, CentOS, Fedora and Ubuntu.
|
||||
|
||||
On **Arch Linux** and its derivatives like **Antergos** and **Manjaro Linux**, you can install Jellyfin using any AUR helper tool, for example [**YaY**][3].
|
||||
|
||||
```
|
||||
$ yay -S jellyfin-git
|
||||
```
|
||||
|
||||
On **CentOS/RHEL** :
|
||||
|
||||
Download the latest Jellyfin rpm package from [**here**][4] and install it as shown below.
|
||||
|
||||
```
|
||||
$ wget https://repo.jellyfin.org/releases/server/centos/jellyfin-10.2.2-1.el7.x86_64.rpm
|
||||
|
||||
$ sudo yum localinstall jellyfin-10.2.2-1.el7.x86_64.rpm
|
||||
```
|
||||
|
||||
On **Fedora** :
|
||||
|
||||
Download Jellyfin for Fedora from [**here**][5].
|
||||
|
||||
```
|
||||
$ wget https://repo.jellyfin.org/releases/server/fedora/jellyfin-10.2.2-1.fc29.x86_64.rpm
|
||||
|
||||
$ sudo dnf install jellyfin-10.2.2-1.fc29.x86_64.rpm
|
||||
```
|
||||
|
||||
On **Debian** :
|
||||
|
||||
Install HTTPS transport for APT if it is not installed already:
|
||||
|
||||
```
|
||||
$ sudo apt install apt-transport-https
|
||||
```
|
||||
|
||||
Import the Jellyfin GPG signing key:
|
||||
|
||||
```
|
||||
$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add -
|
||||
```
|
||||
|
||||
Add Jellyfin repository:
|
||||
|
||||
```
|
||||
$ sudo touch /etc/apt/sources.list.d/jellyfin.list
|
||||
|
||||
$ echo "deb [arch=amd64] https://repo.jellyfin.org/debian $( lsb_release -c -s ) main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
|
||||
```
|
||||
|
||||
Finally, update the Jellyfin repository and install Jellyfin using these commands:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
|
||||
$ sudo apt install jellyfin
|
||||
```
|
||||
|
||||
On **Ubuntu 18.04 LTS** :
|
||||
|
||||
Install HTTPS transport for APT if it is not installed already:
|
||||
|
||||
```
|
||||
$ sudo apt install apt-transport-https
|
||||
```
|
||||
|
||||
Import and add the Jellyfin GPG signing key:
|
||||
|
||||
```
|
||||
$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add -
|
||||
```
|
||||
|
||||
Add the Jellyfin repository:
|
||||
|
||||
```
|
||||
$ sudo touch /etc/apt/sources.list.d/jellyfin.list
|
||||
|
||||
$ echo "deb https://repo.jellyfin.org/ubuntu bionic main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
|
||||
```
|
||||
|
||||
For Ubuntu 16.04, just replace **bionic** with **xenial** in the above URL.
|
||||
|
||||
Finally, update the Jellyfin repository and install Jellyfin using these commands:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
|
||||
$ sudo apt install jellyfin
|
||||
```
|
||||
|
||||
##### Start Jellyfin service
|
||||
|
||||
Run the following commands to enable the Jellyfin service on every reboot and to start it now:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable jellyfin
|
||||
|
||||
$ sudo systemctl start jellyfin
|
||||
```
|
||||
|
||||
To check if the service has been started or not, run:
|
||||
|
||||
```
|
||||
$ sudo systemctl status jellyfin
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
```
|
||||
● jellyfin.service - Jellyfin Media Server
|
||||
Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; vendor preset: enabled)
|
||||
Drop-In: /etc/systemd/system/jellyfin.service.d
|
||||
└─jellyfin.service.conf
|
||||
Active: active (running) since Wed 2019-03-20 12:20:19 UTC; 1s ago
|
||||
Main PID: 4556 (jellyfin)
|
||||
Tasks: 11 (limit: 2320)
|
||||
CGroup: /system.slice/jellyfin.service
|
||||
└─4556 /usr/bin/jellyfin --datadir=/var/lib/jellyfin --configdir=/etc/jellyfin --logdir=/var/log/jellyfin --cachedir=/var/cache/jellyfin --r
|
||||
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Photos, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Server.Implementations, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nu
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.MediaEncoding, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Dlna, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.LocalMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Notifications, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.XbmcMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading jellyfin, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite version: 3.26.0
|
||||
Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite compiler options: COMPILER=gcc-5.4.0 20160609,DEFAULT_FOREIGN_KEYS,ENABLE_COLUMN_M
|
||||
```
|
||||
|
||||
If you see output like the above, congratulations! The Jellyfin service has been started.
|
||||
|
||||
Next, we should do some initial configuration.
|
||||
|
||||
##### Configure Jellyfin
|
||||
|
||||
Once Jellyfin is installed, open a browser and navigate to `http://<domain-name>:8096` or `http://<IP-Address>:8096`.
|
||||
|
||||
You will see the following welcome screen. Select your preferred language and click Next.
|
||||
|
||||
![][6]
|
||||
|
||||
Enter your user details. You can add more users later from the Jellyfin Dashboard.
|
||||
|
||||
![][7]
|
||||
|
||||
The next step is to select the media files you want to stream. To do so, click the “Add media Library” button:
|
||||
|
||||
![][8]
|
||||
|
||||
Choose the content type (audio, video, movies, etc.), enter a display name, and click the plus (+) sign next to the Folders icon to choose the location where you keep your media files. You can further choose other library settings, such as the preferred download language and country. Click OK after choosing the preferred options.
|
||||
|
||||
![][9]
|
||||
|
||||
Similarly, add all of the media files. Once you have chosen everything to stream, click Next.
|
||||
|
||||
![][10]
|
||||
|
||||
Choose the Metadata language and click Next:
|
||||
|
||||
![][11]
|
||||
|
||||
Next, you need to configure whether you want to allow remote connections to this media server. Make sure you have allowed the remote connections. Also, enable automatic port mapping and click Next:
|
||||
|
||||
![][12]
|
||||
|
||||
You’re all set! Click Finish to complete Jellyfin configuration.
|
||||
|
||||
![][13]
|
||||
|
||||
You will now be redirected to the Jellyfin login page. Click on the username and enter the password we set up earlier.
|
||||
|
||||
![][14]
|
||||
|
||||
This is how the Jellyfin dashboard looks.
|
||||
|
||||
![][15]
|
||||
|
||||
As you can see in the screenshot, all of your media files are shown in the dashboard itself under the My Media section. Just click on any media file of your choice and start watching it!
|
||||
|
||||
![][16]
|
||||
|
||||
You can access this Jellyfin media server from any system on the network using the URL <http://ip-address:8096>. You don’t need to install any extra apps. All you need is a modern web browser.
|
||||
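If the server runs a host firewall, you may need to open that port first. On a firewalld-based system, for example (adjust accordingly if you use ufw or plain iptables), it could look like this:

```
$ sudo firewall-cmd --permanent --add-port=8096/tcp
$ sudo firewall-cmd --reload
```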
|
||||
If you want to change anything or reconfigure, click on the three horizontal bars on the Home screen. Here, you can add users and media files, change playback settings, add TV/DVR, install plugins, change the default port number, and adjust many more settings.
|
||||
|
||||
![][17]
|
||||
|
||||
For more details, check out [**Jellyfin official documentation**][18] page.
|
||||
|
||||
And, that’s all for now. As you can see, setting up a streaming media server on Linux is no big deal. I tested it on my Ubuntu 18.04 LTS VM and it worked fine out of the box. I was able to watch movies from other systems on my LAN. If you’re looking for an easy, quick, and free solution for hosting a media server, Jellyfin is a good choice.
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.ostechnix.com/streama-setup-your-own-streaming-media-server-in-minutes/
|
||||
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]: https://repo.jellyfin.org/releases/server/centos/
|
||||
[5]: https://repo.jellyfin.org/releases/server/fedora/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-1.png
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-2-1.png
|
||||
[8]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-3-1.png
|
||||
[9]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-4-1.png
|
||||
[10]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-5-1.png
|
||||
[11]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-6.png
|
||||
[12]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-7.png
|
||||
[13]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-8-1.png
|
||||
[14]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-9.png
|
||||
[15]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-10.png
|
||||
[16]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-11.png
|
||||
[17]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-12.png
|
||||
[18]: https://jellyfin.readthedocs.io/en/latest/
|
||||
[19]: https://github.com/jellyfin/jellyfin
|
||||
[20]: http://feedburner.google.com/fb/a/mailverify?uri=ostechnix (Subscribe to our Email newsletter)
|
||||
[21]: https://www.paypal.me/ostechnix (Donate Via PayPal)
|
||||
[22]: http://ostechnix.tradepub.com/category/information-technology/1207/
|
||||
[23]: https://www.facebook.com/ostechnix/
|
||||
[24]: https://twitter.com/ostechnix
|
||||
[25]: https://plus.google.com/+SenthilkumarP/
|
||||
[26]: https://www.linkedin.com/in/ostechnix
|
||||
[27]: http://feeds.feedburner.com/Ostechnix
|
||||
[28]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=reddit (Click to share on Reddit)
|
||||
[29]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=twitter (Click to share on Twitter)
|
||||
[30]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=facebook (Click to share on Facebook)
|
||||
[31]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=linkedin (Click to share on LinkedIn)
|
||||
[32]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=pocket (Click to share on Pocket)
|
||||
[33]: https://api.whatsapp.com/send?text=How%20To%20Setup%20Linux%20Media%20Server%20Using%20Jellyfin%20https%3A%2F%2Fwww.ostechnix.com%2Fhow-to-setup-linux-media-server-using-jellyfin%2F (Click to share on WhatsApp)
|
||||
[34]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=telegram (Click to share on Telegram)
|
||||
[35]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=email (Click to email this to a friend)
|
||||
[36]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/#print (Click to print)
|
151
sources/tech/20190321 How to add new disk in Linux.md
Normal file
151
sources/tech/20190321 How to add new disk in Linux.md
Normal file
@ -0,0 +1,151 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to add new disk in Linux)
|
||||
[#]: via: (https://kerneltalks.com/hardware-config/how-to-add-new-disk-in-linux/)
|
||||
[#]: author: (kerneltalks https://kerneltalks.com)
|
||||
|
||||
How to add new disk in Linux
|
||||
======
|
||||
|
||||
* * *
|
||||
|
||||
_Step-by-step procedure to add a disk to a Linux machine_
|
||||
|
||||
![New disk addition in Linux][1]
|
||||
|
||||
In this article, we will walk you through the steps to add a new disk to a Linux machine. Adding a raw disk to a Linux machine may vary depending upon the type of server you have, but once the disk is presented to the machine, the procedure for getting it to a mount point is almost the same.
|
||||
|
||||
**Objective** : Add new 10GB disk to server and create 5GB mount point out of it using LVM and newly created volume group.
|
||||
|
||||
* * *
|
||||
|
||||
### Adding raw disk to Linux machine
|
||||
|
||||
If you are using AWS EC2 Linux server, you may [follow these steps][2] to add raw disk. If you are on VMware Linux VM you will have different set of steps to follow to add disk. If you are running physical rack mount/blade server then adding disk will be a physical task.
|
||||
|
||||
Now once the disk is attached to Linux machine physically/virtually, it will be identified by kernel and then our rally starts.
|
||||
|
||||
* * *
|
||||
|
||||
### Identifying newly added disk in Linux
|
||||
|
||||
After attachment of raw disk, you need to ask kernel to [scan new disk][3]. Mostly its done now automatically by kernel in new versions.
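If the kernel does not pick the disk up on its own, you can trigger a rescan manually. A minimal sketch, assuming a SCSI-attached disk (host paths vary per system, and Xen/EC2 `xvd*` devices usually appear without this step):

```
# Ask every SCSI host adapter to rescan its bus for new devices
for host in /sys/class/scsi_host/host*; do
    echo "- - -" | sudo tee "$host/scan" > /dev/null
done

# The new device should now show up here
lsblk
```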
The first thing to do is identify the newly added disk and the name the kernel gave it. There are numerous ways to achieve this; I will list a few –

* You can observe `lsblk` output before and after adding/scanning the disk to get the new disk name.
* Check newly created disk files in the `/dev` filesystem. Match the timestamp of the file with the disk addition time.
* Observe `fdisk -l` output before and after adding/scanning the disk to get the new disk name.
For our example, I am using an AWS EC2 server and I added a 10GB disk to it. Here is my `lsblk` output –
```
[root@kerneltalks ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
├─xvda1 202:1 0 1M 0 part
└─xvda2 202:2 0 10G 0 part /
xvdf 202:80 0 10G 0 disk
```

You can see that `xvdf` is our newly added disk. The full path of the disk is `/dev/xvdf`.
* * *

### Add new disk in LVM

We are using LVM here since it is a widely used and flexible volume manager on the Linux platform. Make sure you have the `lvm` or `lvm2` [package installed on your system][4]. If not, [install the lvm/lvm2 package][5].

Now, we are going to add this raw disk to the Logical Volume Manager and create a 5GB mount point out of it. The commands you need to follow are –

* [pvcreate][6]
* [vgcreate][7]
* [lvcreate][8]

If you want to add the disk to an existing mount point and use its space to [extend that mount point][9], then `vgcreate` should be replaced by `vgextend`.
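For reference, a minimal sketch of that alternative flow, reusing the `vgdata`/`lvdata` names from the session below and assuming the logical volume carries an ext4 filesystem:

```
# Register the new disk as a physical volume and add it to the existing VG
pvcreate /dev/xvdf
vgextend vgdata /dev/xvdf

# Grow the logical volume by 5GB and resize the ext4 filesystem to match
lvextend -L +5G /dev/vgdata/lvdata
resize2fs /dev/vgdata/lvdata
```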
Sample outputs from my session –

```
[root@kerneltalks ~]# pvcreate /dev/xvdf
Physical volume "/dev/xvdf" successfully created.
[root@kerneltalks ~]# vgcreate vgdata /dev/xvdf
Volume group "vgdata" successfully created
[root@kerneltalks ~]# lvcreate -L 5G -n lvdata vgdata
Logical volume "lvdata" created.
```

Now you have the logical volume created. You need to format it with the filesystem of your choice and mount it. We are choosing the ext4 filesystem here and formatting it using `mkfs.ext4`.
```
[root@kerneltalks ~]# mkfs.ext4 /dev/vgdata/lvdata
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
```
* * *

### Mounting volume from new disk on mount point

Let's mount the 5GB logical volume which we created and formatted on the `/data` mount point, using the `mount` command.

```
[root@kerneltalks ~]# mount /dev/vgdata/lvdata /data
[root@kerneltalks ~]# df -Ph /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgdata-lvdata 4.8G 20M 4.6G 1% /data
```

Verify your mount point with the `df` command as above and you are all done! You can always add an entry in [/etc/fstab][10] to make this mount persistent across reboots.
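For example, an entry along these lines (adjust the device path, mount point and filesystem to your setup) can be appended to `/etc/fstab`, after which `mount -a` is a quick way to confirm it parses cleanly:

```
# Persist the /data mount across reboots
echo '/dev/vgdata/lvdata  /data  ext4  defaults  0  0' | sudo tee -a /etc/fstab

# Re-read fstab and mount anything not yet mounted; errors here mean a bad entry
sudo mount -a
```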
You have attached a 10GB disk to the Linux machine and created a 5GB mount point out of it!

--------------------------------------------------------------------------------

via: https://kerneltalks.com/hardware-config/how-to-add-new-disk-in-linux/

作者:[kerneltalks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://kerneltalks.com
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/kerneltalks.com/wp-content/uploads/2019/03/How-to-add-new-disk-in-Linux.png?ssl=1
[2]: https://kerneltalks.com/cloud-services/how-to-add-ebs-disk-on-aws-linux-server/
[3]: https://kerneltalks.com/disk-management/howto-scan-new-lun-disk-linux-hpux/
[4]: https://kerneltalks.com/tools/check-package-installed-linux/
[5]: https://kerneltalks.com/tools/package-installation-linux-yum-apt/
[6]: https://kerneltalks.com/disk-management/lvm-command-tutorials-pvcreate-pvdisplay/
[7]: https://kerneltalks.com/disk-management/lvm-commands-tutorial-vgcreate-vgdisplay-vgscan/
[8]: https://kerneltalks.com/disk-management/lvm-commands-tutorial-lvcreate-lvdisplay-lvremove/
[9]: https://kerneltalks.com/disk-management/extend-file-system-online-lvm/
[10]: https://kerneltalks.com/config/understanding-etcfstab-file/
@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker (Community Moderator) https://opensource.com/users/barkerd427)

12 open source tools for natural language processing
======

Take a look at a dozen options for your next NLP application.

![Chat bubbles][1]
Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application.

For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. (I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons.

The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool.

I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates.

Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work.

### Python tools

#### Natural Language Toolkit (NLTK)

It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms.

#### SpaCy

[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm.

#### TextBlob

[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects.

#### Textacy

This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code.

#### PyTorch-NLP

[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into.

### Node tools

#### Retext

[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process.

#### Compromise

[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage.

#### Natural

[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency–inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective.

#### Nlp.js

[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. This will likely take you a long way in your applications before you need something more powerful or more flexible.

### Java tools

#### OpenNLP

[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java.

#### StanfordNLP

[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best Machine Learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources.

#### CogCompNLP

[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java.

* * *

What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/natural-language-processing-tools

作者:[Dan Barker (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: http://www.nltk.org/
[3]: http://www.nltk.org/book_1ed/
[4]: https://spacy.io/
[5]: https://textblob.readthedocs.io/en/dev/
[6]: https://readthedocs.org/projects/textacy/
[7]: https://pytorchnlp.readthedocs.io/en/latest/
[8]: https://www.npmjs.com/package/retext
[9]: https://unified.js.org/
[10]: https://www.npmjs.com/package/compromise
[11]: https://www.npmjs.com/package/natural
[12]: https://www.npmjs.com/package/node-nlp
[13]: https://opennlp.apache.org/
[14]: https://stanfordnlp.github.io/CoreNLP/
[15]: https://opensource.com/article/19/2/learn-data-science-ai
[16]: https://github.com/CogComp/cogcomp-nlp
sources/tech/20190322 Easy means easy to debug.md
@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Easy means easy to debug)
[#]: via: (https://arp242.net/weblog/easy.html)
[#]: author: (Martin Tournoij https://arp242.net/)
What does it mean for a framework, library, or tool to be “easy”? There are many possible definitions one could use, but my definition is usually that it’s easy to debug. I often see people advertise a particular program, framework, library, file format, or something else as easy because “look with how little effort I can do task X, this is so easy!” That’s great, but an incomplete picture.

You only write software once, but will almost always go through several debugging cycles. With debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”. To debug code, you need to understand it, so “easy to debug” by extension means “easy to understand”.

Abstractions which make something easier to write often come at the cost of making things harder to understand. Sometimes this is a good trade-off, but often it’s not. In general I will happily spend a little bit more effort writing something now if that makes things easier to understand and debug later on, as it’s often a net time-saver.

Simplicity isn’t the only thing that makes programs easier to debug, but it is probably the most important. Good documentation helps too, but unfortunately good documentation is uncommon (note that quality is not measured by word count!)

This is not exactly a novel insight; from the 1974 book _The Elements of Programming Style_ by Brian W. Kernighan and P. J. Plauger:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

A lot of stuff I see seems to be written “as clever as can be” and is consequently hard to debug. I’ll list a few examples of this pattern below. It’s not my intention to argue that any of these things are bad per se, I just want to highlight the trade-offs in “easy to use” vs. “easy to debug”.
* When I tried running [Let’s Encrypt][1] a few years ago it required running a daemon as root(!) to automatically rewrite nginx files. I looked at the source a bit to understand how it worked and it was all pretty complex, so I thought “let’s not” and opted to just pay €10 to the CA mafia, as not much can go wrong with putting a file in /etc/nginx/, whereas a lot can go wrong with complex Python daemons running as root.

(I don’t know the current state/options for Let’s Encrypt; at a quick glance there may be better/alternative ACME clients that suck less now.)

* Some people claim that systemd is easier than SysV init.d scripts because it’s easier to write systemd unit files than it is to write shell scripts. In particular, this is the argument Lennart Poettering used in his [systemd myths][2] post (point 5).

I think this completely misses the point. I agree with Poettering that shell scripts are hard – [I wrote an entire post about that][3] – but making the interface easier doesn’t mean the entire system becomes easier. Look at [this issue][4] I encountered and [the fix][5] for it. Does that look easy to you?

* Many JavaScript frameworks I’ve used can be hard to fully understand. Clever state keeping logic is great and all, until that state won’t work as you expect, and then you better hope there’s a Stack Overflow post or GitHub issue to help you out.

* Docker is great, right up to the point you get:
```
ERROR: for elasticsearch Cannot start service elasticsearch:
oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258:
applying cgroup configuration for process caused \"failed to write 898 to cgroup.procs: write
/sys/fs/cgroup/cpu,cpuacct/docker/b13312efc203e518e3864fc3f9d00b4561168ebd4d9aad590cc56da610b8dd0e/cgroup.procs:
invalid argument\""
```

or

```
ERROR: for elasticsearch Cannot start service elasticsearch: EOF
```

And … now what?
* Many testing libraries can make things harder to debug. Ruby’s rspec is a good example where I’ve occasionally used the library wrong by accident and had to spend quite a long time figuring out what exactly went wrong (as the errors it gave me were very confusing!)

I wrote a bit more about that in my [Testing isn’t everything][6] post.

* ORM libraries can make database queries a lot easier, at the cost of making things a lot harder to understand once you want to solve a problem.

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/easy.html

作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Let%27s_Encrypt
[2]: http://0pointer.de/blog/projects/the-biggest-myths.html
[3]: https://arp242.net/weblog/shell-scripting-trap.html
[4]: https://unix.stackexchange.com/q/185495/33645
[5]: https://cgit.freedesktop.org/systemd/systemd/commit/?id=6e392c9c45643d106673c6643ac8bf4e65da13c1
[6]: /weblog/testing.html
[7]: mailto:martin@arp242.net
[8]: https://github.com/Carpetsmoker/arp242.net/issues/new
@ -0,0 +1,205 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install OpenLDAP on Ubuntu Server 18.04)
[#]: via: (https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)

How to Install OpenLDAP on Ubuntu Server 18.04
======

![OpenLDAP][1]

In part one of this short tutorial series, Jack Wallen explains how to install OpenLDAP.

[Creative Commons Zero][2]
The Lightweight Directory Access Protocol (LDAP) allows for the querying and modification of an X.500-based directory service. In other words, LDAP is used over a Local Area Network (LAN) to manage and access a distributed directory service. LDAP’s primary purpose is to provide a set of records in a hierarchical structure. What can you do with those records? The best use-case is for user validation/authentication against desktops. If both server and client are set up properly, you can have all your Linux desktops authenticating against your LDAP server. This makes for a great single point of entry so that you can better manage (and control) user accounts.

The most popular iteration of LDAP for Linux is [OpenLDAP][3]. OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol, and makes it incredibly easy to get your LDAP server up and running.

In this three-part series, I’ll be walking you through the steps of:

1. Installing OpenLDAP server.
2. Installing the web-based LDAP Account Manager.
3. Configuring Linux desktops, such that they can communicate with your LDAP server.

In the end, all of your Linux desktop machines (that have been configured properly) will be able to authenticate against a centralized location, which means you (as the administrator) have much more control over the management of users on your network.

In this first piece, I’ll be demonstrating the installation and configuration of OpenLDAP on Ubuntu Server 18.04. All you will need to make this work is a running instance of Ubuntu Server 18.04 and a user account with sudo privileges.

Let’s get to work.
### Update/Upgrade

The first thing you’ll want to do is update and upgrade your server. Do note, if the kernel gets updated, the server will need to be rebooted (unless you have Live Patch, or a similar service running). Because of this, run the update/upgrade at a time when the server can be rebooted.

To update and upgrade Ubuntu, log into your server and run the following commands:

```
sudo apt-get update

sudo apt-get upgrade -y
```

When the upgrade completes, reboot the server (if necessary), and get ready to install and configure OpenLDAP.
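On Ubuntu you can also check whether the upgrade actually requested a reboot before scheduling one; a small sketch using the flag files written by the standard update tooling:

```
# Present only when an updated package (e.g. a new kernel) wants a reboot
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required by:"
    cat /var/run/reboot-required.pkgs
else
    echo "No reboot needed"
fi
```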
### Installing OpenLDAP

Since we’ll be using OpenLDAP as our LDAP server software, it can be installed from the standard repository. To install the necessary pieces, log into your Ubuntu Server and issue the following command:

```
sudo apt-get install slapd ldap-utils -y
```

During the installation, you’ll be first asked to create an administrator password for the LDAP directory. Type and verify that password (Figure 1).

![password][4]

Figure 1: Creating an administrator password for LDAP.

[Used with permission][5]

### Configuring LDAP
With the installation of the components complete, it’s time to configure LDAP. Fortunately, there’s a handy tool we can use to make this happen. From the terminal window, issue the command:

```
sudo dpkg-reconfigure slapd
```

In the first window, hit Enter to select No and continue on. In the second window of the configuration tool (Figure 2), you must type the DNS domain name for your server. This will serve as the base DN (the point from where a server will search for users) for your LDAP directory. In my example, I’ve used example.com (you’ll want to change this to fit your needs).

![domain name][6]

Figure 2: Configuring the domain name for LDAP.

[Used with permission][5]

In the next window, type your Organizational name (i.e., the name of your company or department). You will then be prompted to (once again) create an administrator password (you can use the same one as you did during the installation). Once you’ve taken care of that, you’ll be asked the following questions:

* Database backend to use - select **MDB**.
* Do you want the database to be removed when slapd is purged? - Select **No.**
* Move old database? - Select **Yes.**

OpenLDAP is now ready for data.
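Before adding any entries, it can be worth confirming that slapd is running and that the base DN you just configured answers queries. A quick check, assuming the example.com domain used above:

```
# Is the LDAP daemon up?
sudo systemctl status slapd --no-pager

# Ask the server for its base entry (adjust the base DN to your domain)
ldapsearch -x -LLL -b dc=example,dc=com -s base dn
```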
### Adding Initial Data

Now that OpenLDAP is installed and running, it’s time to populate the directory with a bit of initial data. In the second piece of this series, we’ll be installing a web-based GUI that makes it much easier to handle this task, but it’s always good to know how to add data the manual way.

One of the best ways to add data to the LDAP directory is via a text file, which can then be imported with the __ldapadd__ command. Create a new file with the command:

```
nano ldap_data.ldif
```

In that file, paste the following contents:
```
dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Groups,dc=EXAMPLE,dc=COM
objectClass: organizationalUnit
ou: Groups

dn: cn=DEPARTMENT,ou=Groups,dc=EXAMPLE,dc=COM
objectClass: posixGroup
cn: SUBGROUP
gidNumber: 5000

dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: USER
sn: LASTNAME
givenName: FIRSTNAME
cn: FULLNAME
displayName: DISPLAYNAME
uidNumber: 10000
gidNumber: 5000
userPassword: PASSWORD
gecos: FULLNAME
loginShell: /bin/bash
homeDirectory: USERDIRECTORY
```
In the above file, every entry in all caps needs to be modified to fit your company needs. Once you’ve modified the above file, save and close it with the [Ctrl]+[x] key combination.

To add the data from the file to the LDAP directory, issue the command:

```
ldapadd -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f ldap_data.ldif
```

Remember to alter the dc entries (EXAMPLE and COM) in the above command to match your domain name. After running the command, you will be prompted for the LDAP admin password. When you successfully authenticate to the LDAP server, the data will be added. You can then ensure the data is there by running a search like so:

```
ldapsearch -x -LLL -b dc=EXAMPLE,dc=COM 'uid=USER' cn gidNumber
```

Where EXAMPLE and COM are your domain name and USER is the user to search for. The command should report the entry you searched for (Figure 3).

![search][7]

Figure 3: Our search was successful.

[Used with permission][5]

Now that you have your first entry into your LDAP directory, you can edit the above file to create even more. Or, you can wait until the next entry in the series (installing LDAP Account Manager) and take care of the process with the web-based GUI. Either way, you’re one step closer to having LDAP authentication on your network.
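One practical follow-up, since the LDIF above carries a placeholder `userPassword` value: you can set a properly hashed password for the new account with `ldappasswd` (a sketch reusing the admin and user DNs from the examples; adjust them to your domain):

```
# -S prompts for the new user password, -W prompts for the admin (bind) password
ldappasswd -x -D cn=admin,dc=EXAMPLE,dc=COM -W -S uid=USER,ou=People,dc=EXAMPLE,dc=COM
```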
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap.png?itok=r9viT8n6 (OpenLDAP)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.openldap.org/
[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_1.jpg?itok=vbWScztB (password)
[5]: /LICENSES/CATEGORY/USED-PERMISSION
[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_2.jpg?itok=10CSCm6Z (domain name)
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_3.jpg?itok=df2Y65Dv (search)
@ -1,292 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Amanda0212)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (lawyer The MIT License, Line by Line)
[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html)
[#]: author: (Kyle E. Mitchell https://kemitchell.com/)
MIT许可证的“精华”
|
||||
======
|
||||
|
||||
### MIT许可证的“精华”
|
||||
|
||||
[MIT许可证][1] 是最流行的开源软件许可证,请阅读下面的内容。
|
||||
|
||||
#### 阅读许可证
|
||||
|
||||
如果你参与了开源软件的开发,然而你还没有花时间从头开始阅读尽管只有171个单词的许可证,你现在就需要这么做,尤其是如果许可证不是你的日常生活。记住任何看起来不对劲或不清楚的事情,然后继续思考。我将重复上下文和评论的每一个字,按块和顺序。但重要的是你要做到心中有数。
|
||||
|
||||
> MIT 许可证
|
||||
>
|
||||
> 版权(c)<年份><版权持有人>
|
||||
>
|
||||
> 现免费准许任何人取得本软件及相关文件档案("软件")的副本,以便不受限制地处理该软件,包括不受限制地使用、复制、修改,合并、公布、分发、转授许可证和/或出售软件副本的权利,并允许向其提供软件的人这样做,但须符合下列条件:
|
||||
>
|
||||
> 在软件和软件的所有副本中都必须包含版权声明和许可声明。
|
||||
>
|
||||
> 该软件是"原封不动地"提供的,没有任何明示或默示的保证,包括但不限于适销性、适合某一特定目的和不侵权的保证。在任何情况下,作者或版权所有人都不应对因下列原因引起的任何索赔、损害或其他责任负责,与软件或软件中的使用或其他交易有关的。
|
||||
|
||||
许可证由五个部分组成,在逻辑组成上像下面这样:
|
||||
|
||||
* **页眉**
|
||||
* **许可证所有权** : “MIT执照”
|
||||
* **版权公告** : “版权 (c) …”
|
||||
* **许可证授予** : “特此批准 …”
|
||||
* **授予范围** : “… 在软件方面的处理 …”
|
||||
* **条件** : “… 服从于 …”
|
||||
* **归属和通知** : “以上...应包含...在内”
|
||||
* **保修免责声明** : “该软件“原封不动地”提供...”
|
||||
* **赔偿责任限制** : “在任何情况下....”
|
||||
|
||||
|
||||
|
||||
我们开始了:
|
||||
|
||||
#### 写在前面
|
||||
|
||||
##### 许可证所有权
|
||||
|
||||
> 麻省理工许可证(MIT)
|
||||
|
||||
“麻省理工许可证”并不是一个单一的许可证,而是包含来自于麻省理工学院为发布而准备的语言的一系列许可证表格。这些年来,无论是对于使用它的原始项目都经历了许多变化,还是作为其他项目的模型。fedora项目保持了一种[mit许可证陈列室的种类][2],在简单文本中保留了微小的变化,如甲醛中的解剖标本,追踪着一种没有规律的进化。
|
||||
|
||||
幸运的是,[开源计划][3]和[数据交换软件包][4]组已经将通用的MIT式许可证表格标准化为“mit许可证”。而OSI则将SPDX的标准化[字符串识别符][5]用于普通的开源许可证,“MIT”明确指向标准化的表格“MIT许可证”。
|
||||
|
||||
即使在“许可证”文件中包含“MIT许可证”或“SPDX:MIT”,负责任的评论者为了确定仍然会对文本和标准表单进行比较。尽管各种自称为“麻省理工许可证”的许可证表格只在细节上有所不同,但所谓的“麻省理工许可证”的宽松已经诱使一些作者添加了令人讨厌的“定制”。典型的可怕的的例子就是[JSON许可证][6],一个MIT-family许可证加上“软件应该用于正途的,而不是邪恶的...”这种事情可能是“非常克罗克福德”。这绝对是一个痛苦的事情。可能是律师们开的一个玩笑,但他们一路笑到最后。
|
||||
|
||||
这个故事的寓意是:“麻省理工许可证”本身是模棱两可的。人们或许对你的意思有清晰的理解,但是你只会通过将标准的MIT许可证表格的文本复制到你的项目中来节省所有人的时间,包括你自己。如果您使用元数据,如包管理器元数据文件中的“许可证”属性来指定“MIT”许可证,请确保您的“许可证”文件和任何头注使用标准窗体文本。而这些都可以[自动化][7]。
|
||||
|
||||
##### 版权公告
|
||||
|
||||
> 版权(c)<年份><版权持有人>
|
||||
|
||||
直到1976年的版权法,美国版权法要求采取具体行动以确保创作作品的版权,称为“手续”。如果你没有遵循这些手续,你起诉他人未经授权使用你的作品的权利是有限的,但这种权利基本丧失。其中一种形式是“通知”:在你的作品上做标记,或者在声明过后让市场知道你拥有版权。 © 是标记有版权作品的标准符号,以发出版权通知。ascii字符集没有 © 符号,但是“版权(c)”得到了这种权利。
|
||||
|
||||
1976年“执行”了伯尔尼国际公约的许多要求版权法却取消了保护版权的手续。至少在美国,版权拥有者仍然需要在起诉侵权行为之前登记他们的版权作品,但如果他们在侵权行为开始前登记,也许会造成更高的损害。然而,现实中许多人在提起诉讼之前登记了版权。你不会因为只是没有在版权上张贴通知、注册、向国会图书馆发送副本等行为而失去版权。
|
||||
|
||||
即使版权公告不再像过去那样绝对必要,但它们仍然很有用。说明作品创作的年份和版权持有者,使人们对作品的版权何时到期有某种感觉,从而使作品进入公共领域。作者的身份也很有用:美国法律对个人和“公司”两种身份的作者的版权计算方法不同。尤其是在商业上,即使许可条款给予非常慷慨的许可,公司也有必要对使用已知竞争对手的软件三思而后行。如果你希望别人看到你的作品,并希望从你那里获得许可,版权公告就可以很好裁决归属问题。
|
||||
|
||||
至于“版权持有人”:并不是所有的标准格式许可证都有地方写出来。近期的许可证表格,如[apache 2.0][8]和[gpl 3.0][9],发布“许可证”文本,这些文本本应逐字复制,与头注和单独的文件其他地方,以表明谁拥有版权,并正在给予许可证。这些做法有意无意地阻止了对"标准"文本的更改,还使自动许可证认证更加可靠。
|
||||
|
||||
MIT许可证是机构为发布代码而编写的语言。对于机构发布,只有一个明确的“版权持有人”,即机构发布代码。其他一些机构用他们自己的名字代替了“MIT”,最终形成了我们现在的通用格式。这个过程重复了这个时代的其他短形式的机构许可证,值得注意的是加州大学伯克利分校的[原四条款BSD许可证][10],现在用于[三条款][11]和[二条款][12]的变体,以及互联网系统联盟(MIT的一种变体)的(ISC许可证)。
|
||||
|
||||
在所有情况下,机构都将自己列为版权所有人,以依赖版权所有权规则,称为"[受雇作品][14]"规则,这就给了雇主和客户一些版权,他们的雇员和承包商代表他们做的工作。这些规则通常不适用于自愿提交代码的分布式合作者。这给项目管理者基金会带来了一个问题,比如apache基金会和日食基金会,这些基金会接受了来自更多样化的捐助者群体的捐助。到目前为止,通常的基础方法是使用一种房屋许可证,该许可证规定只有一个版权持有人——[Apache 2.0][8]和[EPL 1.0][15]——由出资人许可证协议支持——[apache clas][16]并且[clas][17]——从贡献者那里收集权利。在像GPL这样的“版权许可”下,在一个地方收集版权所有权更加重要,因为它依赖于版权所有人强制执行许可条件以促进软件自由值。
|
||||
|
||||
如今,只有很少的任何机构或企业管理者的项目在使用MIT式的许可证条款。SPDX和OSI通过规范MIT和ISC等不涉及特定实体或机构版权持有人的许可证形式,帮助了这些案例的使用。有了这些表格,项目作者的普遍做法是在很早表格的版权通知时就填写他们自己的名字,也许会在这里或那里增加一年的时间。至少根据美国版权法,由此产生的版权通知并没有给出一个完整的情况。
|
||||
|
||||
软件的原拥有者保留对其作品的所有权。但是,尽管MIT式的许可条款赋予了其他人在软件上进行开发和更改的权利,创造了法律所称的“衍生作品”,但它们并没有赋予原始作者在他人贡献中的版权所有权。相反,每个贡献者都拥有以现有代码为起点的作品的版权。
|
||||
|
||||
这些项目中的大多数也对接受贡献者许可协议的想法有所保留,更不用说签署版权转让了。这其实很好理解。尽管假定一些较新的开源开发者对github“自动”发送退出请求,根据项目的现有许可条款授权分配贡献,但美国法律不承认这样的规则。默认的是强有力的版权保护,而不是许可许可。
|
||||
|
||||
更新:GitHub后来改变了它在整个网站的服务条款,包括试图翻转这个默认,至少在GitHub.com。我已经写了一些关于这个发展的想法,不是所有的都是积极的,在[另一个帖子][19]。
|
||||
|
||||
为填补具有法律效力的且有充分文件证明的捐款权利授予与完全没有书面记录之间的空白,一些项目采用了[开发商原产地证书][20],一个标准的语句贡献者暗示在他们的Git提交中使用“签名关闭”元数据标签。在臭名昭著的SCO诉讼事件之后,Linux内核开发人员为Linux内核开发了原产地证书,该诉讼指控Linux的代码块来自于SCO拥有的UNIX源代码。作为一种创建文件线索的手段,显示每一行Linux代码的贡献者,开发人员原产地证书的功能很好。虽然开发人员的原产地证书不是许可证,但它确实提供了大量的证据,证明提交代码的人希望项目分发他们的代码,并让其他人在内核的现有许可条款下使用。内核还维护一个机器可读的“信用”文件,列出具有名称、关联、贡献区域和其他元数据的贡献者。我已经做了[一些][实验][22]将这种方法应用于不使用内核开发流程的项目。
|
||||
|
||||
#### 许可授权
|
||||
|
||||
> 现免费准许任何人取得本软件及相关文件档案("软件")的副本.
|
||||
|
||||
按照猜测,麻省理工执照的重点是执照。一般而言,许可是一个人或一个法律规定的实体----"许可人"----允许另一人----"被许可人"----做法律本来允许他们起诉的事情。MIT执照承诺不起诉。
|
||||
|
||||
法律有时会将许可证与承诺颁发许可证区分开来。如果有人违背了给他发许可证的承诺,你也许可以控告他违反了承诺,但你可能没有得到许可证。"在此"是律师们无法摆脱的时髦的古语之一。它在这里用来显示许可证文本本身给出了许可证,而不仅仅是一个许可证的承诺。这是合法的[IIFE][23]。
|
||||
|
||||
虽然许多许可证给予许可的具体命名的许可证,麻省理工学院许可证则是一个“公共许可证”。公共许可证给予广大公众许可。这是开放源码许可的三大理念之一。MIT许可证通过向“获得该软件副本的任何人”颁发许可证来捕捉这一想法。正如我们将在后面看到的,若获得这个许可证也有一个条件,以确保其他人也会知道他们的许可。
|
||||
|
||||
在美国式的法律文件中,用大写的引号(一个“定义”)来表示术语的标准方法。当法院在文件的其他地方看到一个定义明确、资本化的术语时,大写的引号将会很可靠地回顾定义的术语。
|
||||
|
||||
##### 授予范围
|
||||
|
||||
> 不受限制的软件处理
|
||||
|
||||
麻省理工许可证中最重要的七个词就是从被许可人的观点来看。其关键的法律问题是被起诉侵犯版权和被起诉侵犯专利。版权法和专利法都没有使用“处理”作为术语,它在法庭上没有具体的含义。因此,对许可人与被许可人之间的争议作出裁决的任何法院都会询问该措词所指的当事人的含义和理解。法院将会看到的是,该描述是不详细的和开放式的。这给了被许可人一个强有力的论据——在许可人没有允许被许可人对软件做特定的事情时反对许可人的任何主张,即使在许可的时候,这一想法都没有出现。
|
||||
|
||||
|
||||
> 包括在不受限制的情况下使用、复制、修改、合并、公布、分发、转授许可证和/或出售软件副本的权利,以及允许软件给予者这样做的权利。
|
||||
|
||||
没有哪篇法律文章是完美的,“在根本上完全解决”,或者明确无误的。要小心那些装模作样的人。这是麻省理工执照中最不完美的部分。有三个主要问题:
|
||||
|
||||
首先,"包括不受限制"是一种法律上的反模式。它产生了各种理解:
|
||||
|
||||
* “包括,不受限制”
|
||||
* “包括,在不限制上述一般性的情况下”
|
||||
* “包括,但不局限于”
|
||||
* 很多很多毫无意义的措辞变化
|
||||
|
||||
|
||||
|
||||
所有这些都有共同的目标,但都没能真正地实现。从根本上说,它们的创始人也在尝试从中得到好处。在MIT许可证中,这意味着引入“软件交易”,其具体示例是“使用、复制、修改”等等,而不是暗示被许可人的行为必须类似于给出的“交易”的示例。问题是,如果你最终需要一个法庭来审查和解释许可证的条款,法庭则看作找出那些争论的语言的含义。法院不能“忽略”示例决定什么是“交易”,,即使你告诉它。也是较短的。
|
||||
|
||||
第二,作为“处理”的例子给出的动词是一个大杂烩。有些在版权法或专利法下有特定的含义,有些却没有:
|
||||
|
||||
*使用出现在[美国法典第35章,第271(a)条][24],专利法的清单中,列出了专利所有人可以因未经许可而起诉他人的行为。
|
||||
|
||||
*复制出现在《版权法》(美国法典第17编第106条)[25]中,列出了版权所有人可以因未经许可而起诉他人的行为
|
||||
|
||||
*修改不出现在版权或专利法规中。它可能最接近版权法规下的“准备衍生作品”,但也可能涉及改进或其他衍生发明。
|
||||
|
||||
*合并没有出现在版权或专利法规中。“合并”在版权上有特定的含义,但这显然不是这里的本意。相反,法院可能根据其在业界的含义来理解“合并”,如“合并代码”。
|
||||
|
||||
*出版不出现在版权或专利法规中。由于“软件”是发布的内容,它可能最接近于[版权法规][25]下的“发布”。法律也涵盖了“公开”表演和展示作品的权利,但这些权利只适用于特定类型的有版权的作品,如戏剧、录音和电影。
|
||||
|
||||
*散布出现在[版权法规][25]中。
|
||||
|
||||
*次级许可是知识产权法的总称。转牌权是指利用给予他人自己的执照做一些有权利做的事情的权利。MIT许可证的转行权在开源许可证中是不常见的。每个人只要拿到软件及其许可条款的副本,就可以直接从所有者那里得到许可,这就是Heather Meeker所说的“直接授权”方法。任何可能通过麻省理工许可证获得次级许可证的人,很可能最终会得到一份许可证副本,告诉他们自己也有直接许可证。
|
||||
|
||||
*出售的副本是一个混合物。它接近[专利法规][24]中的"要约销售"和"销售",却指的是版权概念中的"副本"。在版权方面,它似乎接近“发行”,但[版权法规][25]没有提到出售。
|
||||
|
||||
*允许向软件提供者这样做,"次级许可"似乎是多余的并且也是不必要的,因为人们得到的副本也获得了直接许可证。
|
||||
|
||||
|
||||
|
||||
最后,由于法律、工业、一般知识产权和一般用途条款的混淆,目前还不清楚麻省理工学院的许可证是否包括专利许可证。一般的语言“处理”和一些例子动词,特别是“使用”,指向专利许可,尽管是一个非常不清楚的。许可证来自版权持有人,他对软件中的发明可能有也可能没有专利权,以及大多数示例动词和"软件"本身的定义,所有这些都强烈指向版权许可证.更近期的许可开放源码许可证,如[apache 2.0][8],涉及单独和具体的版权、专利甚至商标。
|
||||
|
||||
##### 许可证的三个条件
|
||||
|
||||
> 须符合以下条件:
|
||||
|
||||
总有人能符合MIT的三个条件!
|
||||
|
||||
如果你不遵守麻省理工的许可条件,你就得不到许可。因此,如果不按条件所说的做,至少在理论上,你会面临一场诉讼,很可能是一场版权诉讼。
|
||||
|
||||
利用软件的价值来激励被许可人遵守条件,即使被许可人没有为许可支付任何费用,是开源许可的第二个伟大想法。最后一个,在MIT许可证中没有,建立在许可证条件的基础上:“版权”许可证,像[GNU通用公共许可证][9]使用利益来控制那些进行更改的人去许可和分发他们的更改版本。
|
||||
|
||||
##### 公告条件
|
||||
|
||||
> 上述版权通知和许可通知应包含在软件的副本的绝大部分内容中。
|
||||
|
||||
如果你给某人软件的副本,你需要包括许可证文本和任何版权通知。这有几个关键目的:
|
||||
|
||||
1.通知他人他们有许可使用该软件的公共许可证。这是直接授权模式的一个关键部分,即每个用户直接从版权持有人那里获得许可证。
|
||||
|
||||
2.让他们知道谁是软件的参与者,使得他们能够接受检验。
|
||||
|
||||
3.确保免责声明和赔偿责任限制(下一步)伴随软件。每个得到副本的人也应该得到一份这些许可人保护的副本。
|
||||
|
||||
|
||||
|
||||
没有源代码的情况下,没有什么可以阻止您付费获得副本,甚至是编译形式的副本。但是当你这样做的时候,你不能假装MIT的代码是你自己的专有代码,或者是在其他许可证下提供的代码。领取者得了解他们在"公共许可证"下的权利。
|
||||
|
||||
坦率地说,这一条件很难去遵守。几乎每个开源许可证都有这样的“归属”条件。系统和已安装软件的制造商通常都明白,他们需要为自己的每个版本编写一个通知文件或“许可证信息”,并为库和组件提供许可证文本的副本。项目管理基金会在传授这些做法方面发挥了重要作用。但总体而言,网络开发者并没有得到这份备忘录。这不能用缺少工具来解释——有大量的工具——或者是来自npm和其他存储库的软件包的高度模块化性质——它们统一地将许可证信息的元数据格式标准化。所有好的JavaScript迷你机都有用于保存许可证标题注释的命令行标记。其他工具将从包装箱树连接“许可证”文件。实在没有理由去遗忘这一条件。
|
||||
|
||||
##### 保护免责声明
|
||||
|
||||
> 该软件是"原封不动地"提供的,没有任何明示或默示的保证,包括但不限于适销性、适合某一特定目的和不侵权的保证。
|
||||
|
||||
美国几乎每个州都颁布了统一商业法典,这是一个规范商业交易的法律范本。加州大学洛杉矶分校(UCC)的第2条则规定了货物销售合同,从旧汽车的购进到工业化学品的大规模运输,再到制造工厂。
|
||||
|
||||
加州大学洛杉矶分销关于销售合同的某些规则是强制性的。不管那些买卖他们喜欢与否这些规则总是适用的。而其他的只是“默认”。除非买卖双方以书面形式选择不参与,否则UCC暗示他们希望在UCC的文本中找到他们交易的基准规则。在默认规则中,隐含着“保证”,或卖方向买方承诺所售货物的质量和可用性。
|
||||
|
||||
对于像MIT许可证这样的公共许可证到底是合同(许可人和被许可人之间可执行的协议)还是仅仅只是许可证(只有一种方式,但可能附带条件),存在着很大的争论。而关于软件是否算作“商品”的争论则较少,这触发了UCC的规则。许可证持有者之间没有关于赔偿责任的争论:他们不想因为大量的钱财而被起诉,如果他们赠送的软件是免费、则会引起一些麻烦。这与“默认保证”的三个默认规则正好相反:
|
||||
|
||||
1.[UCC第2-314][26]条对"适销性"的默认保证是"货物"或者软件应至少具有平均质量,包装和适当的标签,适合他们的用途。这个保证只适用于提供软件的人是软件的“商人”,这意味着他们从事软件交易,并坚持自己在软件方面很熟练。
|
||||
|
||||
2.[UCC第2-315][27]条中关于"适合某一特定目的"的默认保证,在卖方知道买方因为某一用途而购买软件时,默认保护是有用的。因此,货物必须"合适"。
|
||||
|
||||
3.“不侵权”的默认保证不是合同约定的一部分,而是一般合同法的共同特征。如果买方收到的货物侵犯了他人的知识产权,该默认承诺保护买方。如果MIT许可下的软件实际上不属于试图授权它的软件,或者它属于其他人拥有的专利,就不会去保护买方。
|
||||
|
||||
|
||||
|
||||
UCC[章节 2-316(3)][28]条要求选择或"排除"关于适销性和适合某一特定目的的隐含保证的语言非常一目了然。而“显眼”则意味着书写或格式化的目的是为了引起人们对其本身的注意,这与微观的细微印刷相反,其目的是为了避开不谨慎的消费者。州法律可能对不侵权的免责声明规定类似的吸引注意力的要求。
|
||||
|
||||
长期以来,律师们一直被一种错觉所困扰,认为用“全盖”写任何东西都符合明显的要求,然而这确实假的。法院已经批判了这一标准,因为它太假了,而且大多数人都同意全上限的做法更多的是为了阻止阅读,而不是强迫阅读。尽管如此,大多数开源许可证的格式都将其保证免责声明设置为全上限,部分原因是这是唯一明显的方法,可以使其在纯文本“许可证”文件中脱颖而出。我宁愿用星号或其他的ASCII艺术,但那已经很久了。
|
||||
|
||||
##### 限制
|
||||
> 在任何情况下,作者或版权持有人均不对因下列原因而引起的任何索赔、损害或其他责任承担任何责任或是与软件或者软件中的使用或其他交易有关的责任。
|
||||
|
||||
MIT许可证允许“免费”使用软件,但法律并不认为获得免费许可证的人会在事情出了差错时放弃起诉的权利,而应归咎于许可人。“赔偿责任限制”,通常与“损害赔偿排除”结合在一起,就像保证不起诉一样,很像许可证。但这些是对许可人免受被许可人诉讼的保护。
|
||||
|
||||
一般来说,法院谨慎地解读赔偿责任和损害赔偿排除的限制,因为它们可以将巨大的风险从一方转移到另一方。为了保护社区的重大利益,他们“严格解释”限制责任的语言让人们能够纠正在法庭上犯下的错误,在可能的情况下对照受其保护的语言来解读。限制赔偿责任必须是明确的,才能成立。尤其是在“消费者”合同和其他情况下,那些放弃起诉权的人缺乏技巧或议价能力,法院有时拒绝尊重那些言外之意。律师们由于这个原因可能也由于纯粹的习惯而倾向于限制给予赔偿责任。
|
||||
|
||||
只要稍微钻低一点,“赔偿责任限额”就是被许可人可以起诉的金额的上限。在开源许可证中,这个限制总是没有钱的——0美元,“不负责任”。相比之下,在商业许可证中,尽管它通常是经过协商的而决定出为过去12个月期间支付的许可证费用的倍数。
|
||||
|
||||
"排除"部分具体列出了各类法律索赔以及许可人不能使用造成损害赔偿的理由。和许多法律形式一样,MIT执照提到了“合同”行为中的违约行为和“侵权行为”。侵权行为规则是防止不小心或恶意伤害他人的一般规则。如果你发短信的时候在路上撞上了某人,你就犯下了侵权行为。如果你的公司出售的耳机有问题,让人耳朵发麻,那么你的公司就犯下了侵权行为。如果合同没有明确排除侵权索赔,法院有时会在合同中阅读排除语,以防止只发生合同索赔。为了更好地衡量这种排除部分,麻省理工的执照上写着“或者其他”,仅仅是不同寻常的法律主张。
|
||||
|
||||
"产生于、或与之有关"这句话是法律起草人固有的、焦虑的不安全感的反复出现的症状。重点是任何与软件有关的诉讼都在限制和排除范围之内。在偶然的机会,一些东西可以"产生",但不是"产生",或"联系",不必介意它在形式上的三种的说法。出现在不同地方的同一个词语或许都是不同意思,假设一个专业起草人不会使用不同的词语在一排的意思相同的事情则不必介意,在实践中,如果法院对一开始就不满意想这个限制,那么他们就会非常愿意仔细地解读范围触发因素。但我离题了。同样的语言出现在数百万份合同中或许理解都不一样。
|
||||
|
||||
#### 总结
|
||||
|
||||
这些俏皮话虽然有点像碎碎念,但MIT许可证却是法律上的经典。MIT许可证是有用的。虽然MIT式的许可证非常出色但它绝不是解决所有软件ip弊病的灵丹妙药,尤其是早于它几十年出现的软件专利的祸害。实现了用最低限度的谨慎的法律工具组合来扭转麻烦的版权、销售和合同法的默认规则这个狭隘的目标。在更大的计算环境中,它的生命周期是惊人的。MIT许可证的有效期已经超过了它所授权的绝大多数软件。我们只能猜测,当它最终对那些自己请不起律师的人失去好感时,它将提供多少几十年忠实的法律服务。
|
||||
|
||||
我们已经看到了我们今天所知的MIT许可证是一套具体的、标准化的术语,最终给机构特有的、随意变化的混乱带来了秩序。
|
||||
|
||||
我们已经看到了它是如何为学术、标准、商业和基金会机构的知识产权管理实践提供归属和版权通知的依据。
|
||||
|
||||
我们已经看到了MIT许可证是如何向所有人免费授予软件许可的,但我们必须遵守保护许可人免受担保和赔偿责任的条件。
|
||||
|
||||
我们已经看到,尽管有一些不是很精准的词和修饰,但这一百七十一个词已经足够严谨,能够通过一个密集的知识产权和合同为开源软件开辟新的道路。
|
||||
|
||||
我非常感谢所有愿意花时间来阅读这篇长文的人,让我知道他们认为它有用,并帮助改进它。和往常一样,我欢迎您通过[电子邮件][29], [推特][30], 和 [GitHub][31].发表评论。
|
||||
|
||||
若是想阅读更多的内容或者找到其他许可证的概要,比如GNU公共许可证或者Apache 2.0许可证。无论你有什么关于这方面的兴趣,我都衷心推荐以下的书:
|
||||
|
||||
* Andrew M. St. Laurent’s [Understanding Open Source & Free Software Licensing][32], from O’Reilly.
|
||||
|
||||
我是这本书开始入门的,虽然它有点过时,但它的方法也最接近上面使用的逐行方法。O’Reilly 已经在网上提供了它。
|
||||
|
||||
* Heather Meeker’s [Open (Source) for Business][34]
|
||||
|
||||
在我看来,目前为止,关于GNU公共许可证和版权写的比较好的已经有很多了。这本书涵盖了许可证的历史、发展、以及兼容性和合规性。这是我借给客户的书,考虑或处理GPL。
|
||||
|
||||
* Larry Rosen’s [Open Source Licensing][35], from Prentice Hall.
|
||||
|
||||
这是很棒的一本书,也[在线][36]免费的。这是对程序员关于从零开始的开源许可和相关法律的最好的介绍。虽然这一点也有点过时了,但是在一些具体的细节中拉里对许可证的分类法和对开源商业模式的简洁总结是经得起时间考验的。
|
||||
|
||||
|
||||
|
||||
所有的这些教育都对作为开源授权律师的我至关重要。他们的作者是我这行的英雄。强烈推荐阅读!-- K.E.M
|
||||
|
||||
我在[创意共享许可4.0版本][37]授权这篇文章.
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html

作者:[Kyle E. Mitchell][a]
选题:[lujun9972][b]
译者:[Amanda0212](https://github.com/Amanda0212)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://kemitchell.com/
[b]: https://github.com/lujun9972
[1]: http://spdx.org/licenses/MIT
[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT
[3]: https://opensource.org
[4]: https://spdx.org
[5]: http://spdx.org/licenses/
[6]: https://spdx.org/licenses/JSON
[7]: https://www.npmjs.com/package/licensor
[8]: https://www.apache.org/licenses/LICENSE-2.0
[9]: https://www.gnu.org/licenses/gpl-3.0.en.html
[10]: http://spdx.org/licenses/BSD-4-Clause
[11]: https://spdx.org/licenses/BSD-3-Clause
[12]: https://spdx.org/licenses/BSD-2-Clause
[13]: http://www.isc.org/downloads/software-support-policy/isc-license/
[14]: http://worksmadeforhire.com/
[15]: https://www.eclipse.org/legal/epl-v10.html
[16]: https://www.apache.org/licenses/#clas
[17]: https://wiki.eclipse.org/ECA
[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co.
[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html
[20]: http://developercertificate.org/
[21]: https://github.com/berneout/berneout-pledge
[22]: https://github.com/berneout/authors-certificate
[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression
[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271
[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106
[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM
[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM
[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM
[29]: mailto:kyle@kemitchell.com
[30]: https://twitter.com/kemitchell
[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md
[32]: https://lccn.loc.gov/2006281092
[33]: http://www.oreilly.com/openbook/osfreesoft/book/
[34]: https://www.amazon.com/dp/1511617772
[35]: https://lccn.loc.gov/2004050558
[36]: http://www.rosenlaw.com/oslbook.htm
[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode