This commit is contained in:
geekpi 2018-07-03 08:51:19 +08:00
commit e386b5cdfa
19 changed files with 2458 additions and 747 deletions


@ -0,0 +1,48 @@
Container Basics: Terms You Need to Know
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-krissia-cruz.jpg?itok=-TNPqdcp)
[In a previous article][1], we talked about what containers are and how they foster innovation and help enterprises move fast. In follow-up articles, we will discuss how to use containers. Before we dive into that topic, however, we need to understand some of the terms and commands used in the container world. Mastering this vocabulary will save you from confusion later.
Let's explore some of the basic terms used in the [Docker][2] container world.
**Container**: So what exactly is a container? It is a runtime instance of a Docker image. It contains a Docker image, an execution environment, and instructions. It is totally isolated from the system, so you can run many containers on the same machine, each completely oblivious of the others. You can replicate multiple containers from the same image, scaling a service up when demand is high and scaling it back down when demand is low.
**Docker Image**: This is no different from the image of a Linux distribution that you would download. It is a package of dependencies and information used for creating, deploying, and executing containers. You can spin up any number of identical containers in a matter of seconds. Images are built up in layers, and once an image has been created, it cannot be changed. If you want to change a container, you create a new image and deploy new containers from that image instead.
**Repository (repo)**: Linux users will already be familiar with the term repository: a storehouse of packages that can be downloaded and installed on a system. In the Docker world, the only difference is that a repository holds Docker images, organized by tags. You can find different versions or different variants of the same application, each appropriately tagged.
**Registry**: Think of it as GitHub. It is an online service that hosts and provides access to repositories of Docker images; the default public registry, for example, is Docker Hub. Vendors can upload their repositories to Docker Hub so that their customers can download and use their official images. Some companies run their own registry services for their images. A registry does not have to be run and managed by a third party; organizations can use on-premises registries to manage access to repositories within the organization.
**Tag**: When you create a Docker image, you can give it an appropriate tag so that different variants or versions are easy to identify. This is no different from what you see in any software package; Docker images are tagged when they are added to a repository.
Now that you have a grasp of the basics, the next stage is to understand the terminology used when actually working with Docker containers.
**Dockerfile**: This is a text file that contains the commands you would otherwise execute manually to build a Docker image. Docker can build the image automatically by reading these instructions.
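As a sketch of what such a file can look like, here is a minimal, hypothetical Dockerfile (the base image, the `hello.sh` script, and its path are all just illustrative):

```dockerfile
# Start from a small official base image
FROM alpine:3.7

# Copy a (hypothetical) script into the image
COPY hello.sh /usr/local/bin/hello.sh

# The command the resulting container runs by default
CMD ["/usr/local/bin/hello.sh"]
```

Running `docker build` in the directory containing this file would produce an image from these instructions.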
**Build**: This is the process of creating an image from a Dockerfile.
**Push**: Once the image is created, "push" is the process of publishing it to a repository. The term is also one of the commands we will learn in the next article.
**Pull**: A user can retrieve an image from a repository through the "pull" process.
**Compose**: Complex applications are made up of more than one container. docker-compose is a command-line tool for running multi-container applications. It lets you run a whole multi-container application with a single command, which simplifies the headaches that multiple containers bring.
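As a sketch, a hypothetical two-container application (a web front end plus a database; the service and image names here are purely illustrative) could be described in a `docker-compose.yml` like this and started with a single `docker-compose up`:

```yaml
version: "3"
services:
  web:
    image: my-web-app:latest    # hypothetical application image
    ports:
      - "8080:80"               # publish container port 80 on host port 8080
    depends_on:
      - db
  db:
    image: postgres:10          # official PostgreSQL image
    environment:
      POSTGRES_PASSWORD: example
```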
### Summary
The scope of container terminology is vast, but these are the basic terms you will encounter again and again. The next time you see them, you will know exactly what they mean. In the next article, we will get started working with Docker containers.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
Author: [Swapnil Bhartiya][a]
Translated by: [jessie-pang](https://github.com/jessie-pang)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.linux.com/users/arnieswap
[1]:https://linux.cn/article-9468-1.html
[2]:https://www.docker.com/


@ -0,0 +1,208 @@
The Linux Filesystem Explained
=====================
> This tutorial will help you get up to speed quickly on the Linux filesystem.
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/search.jpg?itok=7hj0YYjF)
Back in 1996, I learned how to install software on my brand-new Linux before really understanding the layout of the filesystem. This turned out to be a problem, though not a big one for programs, because they would magically work even though I hadn't a clue where the actual executable files landed. The problem was the documentation.
You see, back then, Linux was not the intuitive, user-friendly system it is today. You had to read a lot. You had to know the scanning frequency of your CRT monitor and the ins and outs of your noisy dial-up modem, among hundreds of other things. I soon realized I would need to invest some time in mastering how the directories were organized and what `/etc` (not for "etcetera" files), `/usr` (not for "user" files), and `/bin` (not a "trash can") meant.
This tutorial will help you get up to speed faster than I did.
### Structure
It makes sense to explore the Linux filesystem from a terminal window, not because the author is a grumpy old man who frowns upon new kids and their pretty graphical tools (although there is some truth to that), but because a terminal, despite being text-only, is a better tool to show the structure of the Linux directory tree.
In fact, the name of the first tool you should install to help you explore it says it all: `tree`. If you are using Ubuntu or Debian, you can do:
```
sudo apt install tree
```
On Red Hat or Fedora, do:
```
sudo dnf install tree
```
For SUSE/openSUSE, you can use `zypper`:
```
sudo zypper install tree
```
For Arch-based distros (Manjaro, Antergos, etc.), use:
```
sudo pacman -S tree
```
... and so on.
Once installed, run the `tree` command in a terminal window:
```
tree /
```
The `/` in the instruction above refers to the root directory, the directory from which all the other directories in your system branch off. When you run `tree` and tell it to start with `/`, you will see the whole directory tree: all the directories and their subdirectories in the system, along with their files.
If you have been using your system for some time, this may take a while, because, even if you haven't generated many files yourself, a Linux system and its applications are always logging, cacheing, and storing temporary files. The number of entries in the filesystem grows quickly.
Don't feel overwhelmed, though. Instead, try this:
```
tree -L 1 /
```
You should see something like what is shown in Figure 1.
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/f01_tree01.png?itok=aGKzzC0C)
The instruction above can be translated as "show me just the first level of the directory tree starting at `/` (the root directory)." The `-L` option tells `tree` how many levels down you want to see.
Most Linux distributions will show you the same or a very similar layout to what you see in the image above. This means that, even if you feel confused now, master this, and you will have mastered most, if not all, Linux filesystems in the whole wide world.
To get you started on the road towards mastery, let's look at what each directory is used for. As we go through each one, you can peek at its contents using `ls`.
### The directories
From top to bottom, the directories you are seeing are as follows.
#### /bin
`/bin` is the directory that contains binaries, that is, some of the applications and programs you can run. You will find the `ls` program mentioned above in this directory, as well as basic tools for making and removing files and directories, moving them around, and so on. There are more bin directories in other parts of the filesystem tree, but we will talk about those in a minute.
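To take a quick peek yourself, try something like this (the exact list of programs will vary from system to system):

```shell
# Show a handful of the programs that live in /bin
ls /bin | head -n 5

# Ask the shell where it finds a command such as ls
type ls
```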
#### /boot
`/boot` contains files required for starting your system. Do I have to say this? Okay, I'll say it: **DO NOT TOUCH!** If you mess up one of the files in here, you may not be able to run your Linux, and repairing a broken system is a real pain. On the other hand, don't worry too much about accidentally destroying your system: you have to have superuser privileges to do that.
#### /dev
`/dev` contains device files. Many of these are generated at boot time or even on the fly. For example, if you plug a new webcam or a USB thumb drive into your machine, a new device entry will automagically pop up.
#### /etc
`/etc` is where names start to get confusing. `/etc` gets its name from the earliest Unixes, and it literally meant "etcetera" because it was the dumping ground for system files that administrators were not sure where else to put.
Nowadays, it would be more appropriate to say that `/etc` stands for "Everything to configure," as it contains most, if not all, system-wide configuration files. For example, files that contain the name of your system, the users and their passwords, the names of machines on your network, and when and where the partitions on your hard disks should be mounted are all in here. Again, if you are new to Linux, it may be best not to touch too much in here until you have a better understanding of how things work.
#### /home
`/home` is where you will find your users' personal directories. In my case, there are two directories under `/home`: `/home/paul`, which contains all my stuff, and `/home/guest`, in case anybody needs to borrow my computer.
#### /lib
`/lib` is where libraries live. Libraries are files containing code that applications can use. They contain the snippets of code that applications use to draw windows on your desktop, control peripherals, or send files to your hard disk.
There are more `lib` directories scattered around the filesystem, but this one, the one hanging directly off `/`, is special in that, among other things, it contains the all-important kernel modules. Kernel modules are the drivers that make things like your video card, sound card, WiFi, printer, and so on work.
#### /media
The `/media` directory is where external storage is automatically mounted when you plug it in and try to access it. As opposed to most of the other items on this list, `/media` does not hail back to the 1970s, mainly because inserting and detecting storage (thumb drives, USB hard disks, SD cards, external SSDs, and so on) while a computer is running is a relatively recent development.
#### /mnt
The `/mnt` directory, however, is a bit of a leftover from days gone by. This is where you would manually mount storage devices or partitions. It is not used very often nowadays.
#### /opt
`/opt` is often where software you compile yourself (that is, software you build from source code and do not install from your distribution's repositories) ends up. Applications land in the `/opt/bin` directory and libraries in `/opt/lib`.
A slight digression: another place where applications and libraries end up is `/usr/local`. When software is installed there, there will also be `/usr/local/bin` and `/usr/local/lib` directories. What determines which location gets used is how the developers have configured the files that control the compilation and installation process.
#### /proc
`/proc`, like `/dev`, is a virtual directory. It contains information about your computer, such as information about your CPU and the kernel your Linux system is running. As with `/dev`, the files and directories are generated when your computer starts, or on the fly, as your system runs and things change.
#### /root
`/root` is the home directory of the superuser (also known as the "administrator") of the system. It is separate from the rest of the users' home directories **because you are not supposed to touch it**. Keep your own stuff in your own directories, people.
#### /run
`/run` is another newish directory. System processes use it to store temporary data for their own nefarious reasons. This is another one of those **DO NOT TOUCH** folders.
#### /sbin
`/sbin``/bin` 类似,但它包含的应用程序只有超级用户(即首字母的 `s` )才需要。你可以使用 `sudo` 命令使用这些应用程序,该命令暂时允许你在许多 Linux 发行版上拥有超级用户权限。`/sbin` 目录通常包含可以安装、删除和格式化各种东西的工具。你可以想象,如果你使用不当,这些指令中有一些是致命的,所以要小心处理。
#### /usr
`/usr` was, in the early days of UNIX, where users' home directories lived. However, as we saw above, that is now the job of `/home`. These days, `/usr` contains a mish-mash of directories that in turn contain applications, libraries, documentation, wallpapers, icons, and a long list of other stuff that needs to be shared by applications and services.
You will also find `bin`, `sbin`, and `lib` directories in `/usr`. How do they differ from the ones hanging directly off the root directory? Not much, nowadays. Originally, `/bin` (the one hanging off `/`) contained only the most basic commands, like `ls`, `mv`, and `rm`: the kind of commands that came pre-installed with the system and were needed for basic maintenance. `/usr/bin`, on the other hand, contained the things users installed themselves and used for their work, such as word processors, browsers, and other applications.
But many modern Linux distributions just put everything into `/usr/bin` and have `/bin` point to `/usr/bin`, just in case erasing it completely would break something. So, while Debian, Ubuntu, and Mint still keep `/bin` and `/usr/bin` (and `/sbin` and `/usr/sbin`) separate, others, such as Arch and its derivatives, just have one "real" directory for binaries, `/usr/bin`, and the rest of the `bin` directories are "fake" directories that point to `/usr/bin`.
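You can check what your own distro does; on a merged-`/usr` system, `/bin` shows up as a symbolic link:

```shell
# On merged-/usr distros this prints something like: /bin -> usr/bin
ls -ld /bin

# readlink prints the link target, or nothing if /bin is a real directory
readlink /bin || true
```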
#### /srv
`/srv` contains data for servers. If you are running a web server on your Linux machine, the HTML files for your site would go into `/srv/http` (or `/srv/www`). If you were running an FTP server, your files would go into `/srv/ftp`.
#### /sys
`/sys` is another virtual directory, like `/proc` and `/dev`, and it also contains information about the devices connected to your computer.
In some cases, you can also manipulate those devices. I can, for example, change the brightness of my laptop's screen by modifying the value stored in `/sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-eDP-1/intel_backlight/brightness` (on your machine you will probably have a different file). But to do that, you have to become superuser, because, as with many other virtual directories, messing with the contents and files in `/sys` can be dangerous and you can trash your system. DO NOT TOUCH until you are sure you know what you are doing.
#### /tmp
`/tmp` contains temporary files, usually placed there by running applications. The files and directories often (though not always) contain data that an application does not need right now but may need later on.
You can also use `/tmp` to store your own temporary files; `/tmp` is one of the few directories hanging off `/` that you can actually interact with without becoming superuser.
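For example, you can create and remove a scratch file in `/tmp` as a regular user:

```shell
# mktemp safely creates a uniquely named temporary file
scratch=$(mktemp /tmp/demo.XXXXXX)

echo "hello from /tmp" > "$scratch"
cat "$scratch"        # prints: hello from /tmp

rm "$scratch"         # clean up after yourself
```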
#### /var
`/var` was originally given its name because its contents were deemed variable, in that they change frequently. Today the name is a bit of a misnomer, because there are many other directories that also contain data that changes frequently, especially the virtual directories we saw above.
Be that as it may, `/var` contains things like logs in the `/var/log` subdirectory. Logs are files that record events that happen on the system. If something fails in the kernel, it will be logged in a file in `/var/log`; if someone tries to break into your computer from outside, your firewall will also log the attempt here. `/var` also contains spools for tasks. These "tasks" can be jobs you send to a shared printer that must wait to execute because another user is printing a long document, or mail that is waiting to be delivered to users on the system.
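You can peek at the logs on your own system (the exact file names vary from distro to distro):

```shell
# Show a few of the log files the system has been keeping
ls /var/log | head -n 5
```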
Your system may have some directories we haven't mentioned above. In the screenshot, for example, there is a `/snap` directory. That is because the shot was captured on an Ubuntu system. Ubuntu has recently incorporated [snap][1] packages as a way of distributing software. The `/snap` directory contains all the files and software installed from snaps.
### Digging deeper
Here we have only covered the directories hanging directly off the root directory, but many of the subdirectories lead to their own sets of files and subdirectories. Figure 2 gives you an overall idea of what the basic filesystem tree looks like (the image is provided under a CC BY-SA license, courtesy of Paul Gardner), and [Wikipedia has a summary of what each directory is used for][2].
![filesystem][4]
*Figure 2: Standard Unix filesystem hierarchy. [Used with permission][5], courtesy of Paul Gardner.*
To explore the filesystem yourself, use the `cd` command: `cd` will take you to the directory of your choice (`cd` stands for "change directory").
If you get confused about where you are, `pwd` will tell you exactly which directory you are in (`pwd` stands for "print working directory"), and running `cd` with no options or parameters will take you straight back to your own home directory, a safe and cozy place.
Finally, `cd ..` will take you up one level, one step closer to the root directory. If you are in `/usr/share/wallpapers` and run `cd ..`, you will move up to `/usr/share`.
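Putting those three commands together, a short exploration session looks like this:

```shell
cd /usr/share    # go somewhere interesting
pwd              # prints: /usr/share

cd ..            # up one level
pwd              # prints: /usr

cd               # no arguments takes you home
pwd              # prints your home directory, e.g. /home/paul
```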
To see what is inside a directory, use `ls` (or simply `l` on some systems) to list the contents of the directory you are in.
And, of course, you can always use `tree` to get an overview of what a directory contains. Try it on `/usr/share`; there is a lot of interesting stuff in there.
### Conclusion
Although there are minor differences between Linux distributions, the layout of their filesystems is remarkably similar. So much so that you could say: once you know one, you know them all. And the best way to get to know the filesystem is to explore it. So go forth with `tree`, `ls`, and `cd` into uncharted territory.
You cannot damage the filesystem just by looking at it, so move from one directory to another and take a look around. Soon you will discover that the Linux filesystem and how it is laid out really makes a lot of sense, and you will intuitively know where to find apps, documentation, and other resources.
Learn more about Linux through the free ["Introduction to Linux"][6] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained
Author: [PAUL BROWN][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [amwps290](https://github.com/amwps290)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.linux.com/users/bro66
[1]:https://www.ubuntu.com/desktop/snappy
[2]:https://en.wikipedia.org/wiki/Unix_filesystem#Conventional_directory_layout
[3]:https://www.linux.com/files/images/standard-unix-filesystem-hierarchypng
[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/standard-unix-filesystem-hierarchy.png?itok=CVqmyk6P "filesystem"
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -0,0 +1,69 @@
Python Debugging Tips
======
When it comes to debugging, there are a lot of choices you can make, and it is hard to give generally applicable advice that always works (other than "Have you tried turning it off and on again?").
Here are a few of my favorite Python debugging tips.
### Make a branch
Trust me on this. Even if you never intend to commit your changes back upstream, you will be glad your experiments are contained within their own branch.
If nothing else, it makes cleanup a lot easier!
### Install pdb++
Seriously, it makes your life easier if you are using the command line.
All pdb++ does is replace the standard pdb module with a better one. Here is what you get with `pip install pdbpp`:
* A colorized prompt!
* Tab completion! (Great for exploring!)
* Slicing support!
Okay, maybe that last one is a bit gratuitous... but in all seriousness, installing pdb++ is well worth it.
### Poke around
Sometimes the best approach is to just mess around and see what happens. Put a breakpoint in an "obvious" place and make sure it actually gets hit. Pepper the code with `print()` and/or `logging.debug()` statements and see where the code is actually executing.
Check the arguments being passed into your functions. Check the versions of your libraries (if things are getting really desperate).
### Change only one thing at a time
Once you have poked around a bit, you will have an idea of what you could do. But before you start fiddling with the code, take a step back and think about what you could change, and then change just one thing.
After making the change, test it and see whether you are closer to solving the problem. If not, change it back and try something else.
Changing only one thing lets you know what works and what does not. Plus, once it does work, your new commit will be much smaller (because there will be fewer changes).
This is pretty much what we do in the scientific process: change only one variable at a time. By allowing yourself to see and measure the results of one change at a time, you can keep your sanity and arrive at a working solution faster.
### Don't assume; ask questions
Occasionally a developer (not you, of course!) will commit some questionable code in a hurry. When you go to debug that code, you need to stop and make sure you understand what it is trying to accomplish.
Make no assumptions. Just because the code lives in `model.py` does not mean it will not try to render some HTML.
Likewise, double-check all of your external connections before doing anything destructive. About to delete some configuration data? **MAKE SURE you are not connected to your production system.**
### Be clever, but not too clever
Sometimes we write code that is so amazingly awesome it is not obvious how it works.
We may feel very clever when we publish that code, but we often end up feeling dumb later on, when the code breaks and we have to remember how it works in order to figure out why it is not working.
Keep an eye out for any sections of code that look overly complicated and long, or extremely short. These can be the places where complexity hides and breeds bugs.
--------------------------------------------------------------------------------
via: https://pythondebugging.com/articles/python-debugging-tips
Author: [PythonDebugging.com][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://pythondebugging.com


@ -1,73 +0,0 @@
Translating by MjSeven
Can we build a social network that serves users rather than advertisers?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK)
Today, open source software is far-reaching and has played a key role driving innovation in our digital economy. The world is undergoing radical change at a rapid pace. People in all parts of the world need a purpose-built, neutral, and transparent online platform to meet the challenges of our time.
And open principles might just be the way to get us there. What would happen if we married digital innovation with social innovation using open-focused thinking?
This question is at the heart of our work at [Human Connection][1], a forward-thinking, Germany-based knowledge and action network with a mission to create a truly social network that serves the world. We're guided by the notion that human beings are inherently generous and sympathetic, and that they thrive on benevolent actions. But we haven't seen a social network that has fully supported our natural tendency towards helpfulness and cooperation to promote the common good. Human Connection aspires to be the platform that allows everyone to become an active changemaker.
In order to achieve the dream of a solution-oriented platform that enables people to take action around social causes by engaging with charities, community groups, and social change activists, Human Connection embraces open values as a vehicle for social innovation.
Here's how.
### Transparency first
Transparency is one of Human Connection's guiding principles. Human Connection invites programmers around the world to jointly work on the platform's source code (JavaScript, Vue, nuxt) by [making their source code available on Github][2] and support the idea of a truly social network by contributing to the code or programming additional functions.
But our commitment to transparency extends beyond our development practices. In fact—when it comes to building a new kind of social network that promotes true connection and interaction between people who are passionate about changing the world for the better—making the source code available is just one step towards being transparent.
To facilitate open dialogue, the Human Connection team holds [regular public meetings online][3]. Here we answer questions, encourage suggestions, and respond to potential concerns. Our Meet The Team events are also recorded and made available to the public afterwards. By being fully transparent with our process, our source code, and our finances, we can protect ourselves against critics or other potential backlashes.
The commitment to transparency also means that all user contributions that are shared publicly on Human Connection will be released under a Creative Commons license and can eventually be downloaded as a data pack. By making crowd knowledge available, especially in a decentralized way, we create the opportunity for social pluralism.
Guiding all of our organizational decisions is one question: "Does it serve the people and the greater good?" And we use the [UN Charter][4] and the Universal Declaration of Human Rights as a foundation for our value system. As we'll grow bigger, especially with our upcoming open beta launch, it's important for us to stay accountable to that mission. I'm even open to the idea of inviting the Chaos Computer Club or other hacker clubs to verify the integrity of our code and our actions by randomly checking into our platform.
### A collaborative community
A [collaborative, community-centered approach][5] to programming the Human Connection platform is the foundation for an idea that extends beyond the practical applications of a social network. Our team is driven by finding an answer to the question: "What makes a social network truly social?"
A network that abandons the idea of a profit-driven algorithm serving advertisers instead of end-users can only thrive by turning to the process of peer production and collaboration. Organizations like [Code Alliance][6] and [Code for America][7], for example, have demonstrated how technology can be created in an open source environment to benefit humanity and disrupt the status quo. Community-driven projects like the map-based reporting platform [FixMyStreet][8] or the [Tasking Manager][9] built for the Humanitarian OpenStreetMap initiative have embraced crowdsourcing as a way to move their mission forward.
Our approach to building Human Connection has been collaborative from the start. To gather initial data on the necessary functions and the purpose of a truly social network, we collaborated with the National Institute for Oriental Languages and Civilizations (INALCO) at the University Sorbonne in Paris and the Stuttgart Media University in Germany. Research findings from both projects were incorporated into the early development of Human Connection. Thanks to that research, [users will have a whole new set of functions available][10] that put them in control of what content they see and how they engage with others. As early supporters are [invited to the network's alpha version][10], they can experience the first available noteworthy functions. Here are just a few:
* Linking information to action was one key theme emerging from our research sessions. Current social networks leave users in the information stage. Student groups at both universities saw a need for an action-oriented component that serves our human instinct of working together to solve problems. So we built a ["Can Do" function][11] into our platform. It's one of the ways individuals can take action after reading about a certain topic. "Can Do's" are user-suggested activities in the "Take Action" area that everyone can implement.
* The "Versus" function is another defining result. Where traditional social networks are limited to a comment function, our student groups saw the need for a more structured and useful way to engage in discussions and arguments. A "Versus" is a counter-argument to a public post that is displayed separately and provides an opportunity to highlight different opinions around an issue.
* Today's social networks don't provide a lot of options to filter content. Research has shown that a filtering option by emotions can help us navigate the social space in accordance with our daily mood and potentially protect our emotional wellbeing by not displaying sad or upsetting posts on a day where we want to see uplifting content only.
Human Connection invites changemakers to collaborate on the development of a network with the potential to mobilize individuals and groups around the world to turn negative news into "Can Do's"—and participate in social innovation projects in conjunction with charities and non-profit organizations.
[Subscribe to our weekly newsletter][12] to learn more about open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/3/open-social-human-connection
Author: [Dennis Hack][a]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/dhack
[1]:https://human-connection.org/en/
[2]:https://github.com/human-connection/
[3]:https://youtu.be/tPcYRQcepYE
[4]:http://www.un.org/en/charter-united-nations/index.html
[5]:https://youtu.be/BQHBno-efRI
[6]:http://codealliance.org/
[7]:https://www.codeforamerica.org/
[8]:http://fixmystreet.org/
[9]:https://tasks.hotosm.org/
[10]:https://youtu.be/AwSx06DK2oU
[11]:https://youtu.be/g2gYLNx686I
[12]:https://opensource.com/open-organization/resources/newsletter


@ -1,217 +0,0 @@
Translating by qhwdw
Operating a Kubernetes network
============================================================
I've been working on Kubernetes networking a lot recently. One thing I've noticed is, while there's a reasonable amount written about how to **set up** your Kubernetes network, I haven't seen much about how to **operate** your network and be confident that it won't create a lot of production incidents for you down the line.
In this post I'm going to try to convince you of three things (all of which I think are pretty reasonable :)):
* Avoiding networking outages in production is important
* Operating networking software is hard
* It's worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say "this is what we do at Google". (Google engineers are doing great work on Kubernetes!! But I think it's important to still look at the architecture and make sure it makes sense for your organization.)
I'm definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to.
### Operating networking software is hard
Here I'm not talking about operating physical networks (I don't know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly.
I have been working on a team that's responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start:
* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”.
* Networking requirements change over time (for example maybe you're doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP, which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues.
* To fix a production networking issue you often need a lot of expertise. (for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I'm a lot better at debugging networking issues than I was, but that's only after spending a huge amount of time investing in my knowledge of Linux networking.
I am still far from an expert at networking operations but I think it seems important to:
1. Very rarely make major changes to the production networking infrastructure (because it's super disruptive)
2. When you  _are_  making major changes, think really carefully about what the failure modes are for the new network architecture
3. Have multiple people who are able to understand your networking setup
Switching to Kubernetes is obviously a pretty major networking change! So let's talk about some of the things that can go wrong!
### Kubernetes networking components
The Kubernetes networking components we're going to talk about in this post are:
* Your overlay network backend (like flannel/calico/weave net/romana)
* `kube-dns`
* `kube-proxy`
* Ingress controllers / load balancers
* The `kubelet`
If you're going to set up HTTP services you probably need all of these. I'm not using most of these components yet but I'm trying to understand them, so that's what this post is about.
### The simplest way: Use host networking for all your containers
Let's start with the simplest possible thing you can do. This won't let you run HTTP services in Kubernetes. I think it's pretty safe because there are fewer moving parts.
If you use host networking for all your containers I think all you need to do is:
1. Configure the kubelet to configure DNS correctly inside your containers
2. That's it
If you use host networking for literally every pod you don't need kube-dns or kube-proxy. You don't even need a working overlay network.
In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can't connect to your pods.
This isn't super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it's interesting to realize that at some level all of this networking complexity isn't strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can.
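For reference, opting a pod into host networking is a single field in its spec. A minimal sketch (the pod and image names here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example
spec:
  hostNetwork: true             # pod shares the node's network namespace
  containers:
  - name: app
    image: example/app:latest   # placeholder image
```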
### Operating an overlay network
The first networking component we're going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say "overlay network" this is what I mean ("the system that lets you refer to a pod by its IP address").
All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the [kubernetes networking model here][10].
The way Kelsey Hightower describes it in [kubernetes the hard way][11] seems pretty good, but it's not really viable on AWS for clusters of more than 50 nodes or so, so I'm not going to talk about that.
There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I'm concerned an overlay network has 2 responsibilities:
1. Make sure your pods can send network requests outside your cluster
2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.
Okay! So! What can go wrong with your overlay network?
* The overlay network is responsible for setting up iptables rules (basically `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can't connect to the external network. This isn't that hard (it's just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient.
* Something can go wrong with adding or deleting nodes. We're using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3].
* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed)
* You upgrade Docker and everything breaks
* Probably more things!
I'm mostly talking about past issues in Flannel here, but I promise I'm not picking on Flannel. I actually really **like** Flannel because I feel like it's relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it's possible for me to reason through any issues with it. And it's obviously continuously improving. They've been great about reviewing pull requests.
My approach to operating an overlay network so far has been:
* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it's doing the correct thing)
* Maintain an internal build so it's easy to patch it if needed
* When there are issues, contribute patches upstream
I think it's actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past. It's a bit time consuming but is a great way to get a concrete list of the kinds of issues other people have run into.
It's possible that for other people their overlay networks just work, but that hasn't been my experience, and I've heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster of more than 50-100 nodes where you feel more confident about operating it, I would like to know.
### Operating kube-proxy and kube-dns?
Now that we have some thoughts about operating overlay networks, let's talk about kube-proxy and kube-dns!
There's a question mark next to this one because I haven't done this. Here I have more questions than answers.
Here's how Kubernetes services work! A service is a collection of pods, each of which has its own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6).
1. Every Kubernetes service gets an IP address (like 10.23.1.2)
2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)
3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don't recommend using it.
So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.
Some things that I can imagine going wrong with this:
* `kube-dns` is misconfigured
* `kube-proxy` dies and your iptables rules don't get updated
* Some issue related to maintaining a large number of iptables rules
Let's talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before!
kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13])
```
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD[b][c]
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
```
So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that for my network? There's a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad.
I definitely don't have 5,000 services in my cluster, but 5,000 isn't SUCH a big number. The proposal they give to solve this problem is to replace the iptables backend for kube-proxy with IPVS, which is a load balancer that lives in the Linux kernel.
It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don't support UDP load balancing.
But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this [thread on kubernetes-sig-network][15] saying:
> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it's useful, please take a look and give a try.
So that's an interesting option! I definitely don't have answers here, but, some thoughts:
* Load balancers are complicated
* DNS is also complicated
* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy)
* I've been thinking about whether we want to be using kube-proxy or kube-dns at all. I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy.
As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I'm still not super experienced with them. It's totally possible that kube-proxy and kube-dns are fine and that they will just work fine, but I still find it helpful to think through what some of the implications of using them are (for example "you can't have 5,000 Kubernetes services").
### Ingress
If you're running a Kubernetes cluster, it's pretty likely that you actually need HTTP requests to get into your cluster. This blog post is already too long and I don't know much about ingress yet so we're not going to talk about that.
### Useful links
A couple of useful links, to summarize:
* [The Kubernetes networking model][6]
* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8]
### I think networking operations is important
My sense of all this Kubernetes networking software is that it's all still quite new and I'm not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn't a bad thing, it's just a thing.
My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible.
As usual I hope this was helpful and I would very much like to know what I got wrong in this post!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/
Author: [Julia Evans][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://jvns.ca/about
[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/
[2]:https://github.com/coreos/flannel/pull/808
[3]:https://github.com/coreos/flannel/pull/803
[4]:https://github.com/coreos/flannel/issues/610
[5]:https://github.com/AdoHe/kube2haproxy
[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ
[8]:https://www.youtube.com/watch?v=4-pawkiazEg
[9]:https://jvns.ca/categories/kubernetes
[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md
[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan
[13]:https://github.com/kubernetes/kubernetes/issues/37932
[14]:https://www.youtube.com/watch?v=4-pawkiazEg
[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0

translating----hoppipolla-
3 Python command-line tools
======

[Being translated by ZenMoore]
Historical inventory of collaborative editors
======
A quick inventory of major collaborative editor efforts, in chronological order.
As with any such list, it must start with an honorable mention to [the mother of all demos][25] during which [Doug Engelbart][26] presented what is basically an exhaustive list of all possible software written since 1968. This includes not only a collaborative editor, but graphics, programming and math editors.
Everything else after that demo is just a slower implementation to compensate for the acceleration of hardware.
> Software gets slower faster than hardware gets faster. - Wirth's law
So without further ado, here is the list of notable collaborative editors that I could find. By "notable" I mean that they introduce a notable feature or implementation detail.
| Project | Date | Platform | Notes |
| --- | --- | --- | --- |
| [SubEthaEdit][1] | 2003-2015? | Mac-only | first collaborative, real-time, multi-cursor editor I could find. [reverse-engineering attempt in Emacs][2] |
| [DocSynch][3] | 2004-2007 | ? | built on top of IRC ![(!)](https://anarc.at/smileys/idea.png) |
| [Gobby][4] | 2005-now | C, multi-platform | first open, solid and reliable implementation. still around! protocol ("[libinfinoted][5]") notoriously hard to port to other editors (e.g. [Rudel][6] failed to implement this in Emacs. 0.7 release in jan 2017 adds possible python bindings that might improve this. Interesting plugins: autosave to disk. |
| [moonedit][7] | 2005-2008? | ? | Original website died. Other users' cursors visible and emulated keystroke noises. Calculator and music sequencer. |
| [synchroedit][8] | 2006-2007 | ? | First web app. |
| [Etherpad][9] | 2008-now | Web | First solid webapp. Originally developed as a heavy Java app in 2008, acquired and open-sourced by Google in 2009, then rewritten in Node.js in 2011. Widely used. |
| [CRDT][10] | 2011 | Specification | Standard for replicating a document's data structure among different computers reliably. |
| [Operational transform][11] | 2013 | Specification | Similar to CRDT, yet, well, different. |
| [Floobits][12] | 2013-now | ? | Commercial, but opensource plugins for different editors |
| [HackMD][13] | 2015-now | ? | Commercial but [opensource][14]. Inspired by hackpad, which was bought up by Dropbox. |
| [Cryptpad][15] | 2016-now | web? | spin-off of xwiki. encrypted, "zero-knowledge" on server |
| [Prosemirror][16] | 2016-now | Web, Node.JS | "Tries to bridge the gap between Markdown text editing and classical WYSIWYG editors." Not really an editor, but something that can be used to build one. |
| [Quill][17] | 2013-now | Web, Node.JS | Rich text editor, also javascript. Not sure it is really collaborative. |
| [Nextcloud][18] | 2017-now | Web | Some sort of Google docs equivalent |
| [Teletype][19] | 2017-now | WebRTC, Node.JS | For the GitHub's [Atom editor][20], introduces "portal" idea that makes guests follow what the host is doing across multiple docs. p2p with webRTC after visit to introduction server, CRDT based. |
| [Tandem][21] | 2018-now | Node.JS? | Plugins for atom, vim, neovim, sublime... uses a relay to set up p2p connections, CRDT based. [Dubious license issues][22] were resolved thanks to the involvement of Debian developers, which makes it a promising standard to follow in the future. |
### Other lists
* [Emacs wiki][23]
* [Wikipedia][24]
--------------------------------------------------------------------------------
via: https://anarc.at/blog/2018-06-26-collaborative-editors-history/
Author: [Anacr][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://anarc.at
[1]:https://www.codingmonkeys.de/subethaedit/
[2]:https://www.emacswiki.org/emacs/SubEthaEmacs
[3]:http://docsynch.sourceforge.net/
[4]:https://gobby.github.io/
[5]:http://infinote.0x539.de/libinfinity/API/libinfinity/
[6]:https://www.emacswiki.org/emacs/Rudel
[7]:https://web.archive.org/web/20060423192346/http://www.moonedit.com:80/
[8]:http://www.synchroedit.com/
[9]:http://etherpad.org/
[10]:https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
[11]:http://operational-transformation.github.io/
[12]:https://floobits.com/
[13]:https://hackmd.io/
[14]:https://github.com/hackmdio/hackmd
[15]:https://cryptpad.fr/
[16]:https://prosemirror.net/
[17]:https://quilljs.com/
[18]:https://nextcloud.com/collaboraonline/
[19]:https://teletype.atom.io/
[20]:https://atom.io
[21]:http://typeintandem.com/
[22]:https://github.com/typeintandem/tandem/issues/131
[23]:https://www.emacswiki.org/emacs/CollaborativeEditing
[24]:https://en.wikipedia.org/wiki/Collaborative_real-time_editor
[25]:https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
[26]:https://en.wikipedia.org/wiki/Douglas_Engelbart

Blockchain evolution: A quick guide and why open source is at the heart of it
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc)
It isn't uncommon, when working on a new version of an open source project, to suffix it with "-ng", for "next generation." Fortunately, in their rapid evolution blockchains have so far avoided this naming pitfall. But in this evolutionary open source ecosystem, changes have been abundant, and good ideas have been picked up, remixed, and evolved between many different projects in a typical open source fashion.
In this article, I will look at the different generations of blockchains and what ideas have emerged to address the problems the ecosystem has encountered. Of course, any attempt at classifying an ecosystem will have limits—and objectors—but it should provide a rough guide to the jungle of blockchain projects.
### The beginning: Bitcoin
The first generation of blockchains stems from the [Bitcoin][1] blockchain, the ledger underpinning the decentralized, peer-to-peer cryptocurrency that has gone from [Slashdot][2] miscellanea to a mainstream topic.
This blockchain is a distributed ledger that keeps track of all users' transactions to prevent them from double-spending their coins (a task historically entrusted to third parties: banks). To prevent attackers from gaming the system, the ledger is replicated to every computer participating in the Bitcoin network and can be updated by only one computer in the network at a time. To decide which computer earns the right to update the ledger, every 10 minutes the system organizes a race between the computers, which costs them (a lot of) energy to enter. The winner wins the right to commit the last 10 minutes of transactions to the ledger (the "block" in blockchain) and some Bitcoin as a reward for their efforts. This setup is called a _proof of work_ consensus mechanism.
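As a toy illustration of that race (my own sketch, not Bitcoin's actual scheme: real mining hashes an 80-byte block header twice with SHA-256 and compares against a 256-bit difficulty target), the shell loop below brute-forces a nonce until the hash of some made-up block data starts with a required prefix of zeros:

```shell
# Toy proof-of-work sketch (illustrative only): find a nonce such that
# sha256("<block data><nonce>") starts with the prefix "00".
data="block 1: alice pays bob 10 coins; previous hash: abc123"
prefix="00"     # tiny difficulty so the loop finishes in well under a second
nonce=0
while :; do
    hash=$(printf '%s%d' "$data" "$nonce" | sha256sum | awk '{print $1}')
    case "$hash" in
        "$prefix"*) break ;;   # found a winning nonce
    esac
    nonce=$((nonce + 1))
done
echo "found nonce=$nonce hash=$hash"
```

Each extra hex digit of required prefix multiplies the expected work by 16, which is essentially the knob real networks turn to keep the block interval constant as hardware gets faster.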
This is where it gets interesting. Bitcoin was released as an [open source project][3] in January 2009. In 2010, realizing that quite a few of these elements can be tweaked, the community that had aggregated around Bitcoin, often on the [bitcointalk forums][4], started experimenting with them.
First, seeing that the Bitcoin blockchain is a form of a distributed database, the [Namecoin][5] project emerged, proposing to store arbitrary data in its transaction database. If the blockchain can record the transfer of money, it could also record the transfer of other assets, such as domain names. This is exactly Namecoin's main use case, which went live in April 2011, two years after Bitcoin's introduction.
Where Namecoin tweaked the content of the blockchain, [Litecoin][6] tweaked two technical aspects: reducing the time between two blocks from 10 to 2.5 minutes and changing how the race is run (replacing the SHA-256 secure hashing algorithm with [scrypt][7]). This was possible because Bitcoin was released as open source software and Litecoin is essentially identical to Bitcoin in all other places. Litecoin was the first fork to modify the consensus mechanism, paving the way for many more.
Along the way, many more variations of the Bitcoin codebase have appeared. Some started as proposed extensions to Bitcoin, such as the [Zerocash][8] protocol, which aimed to provide transaction anonymity and fungibility but was eventually spun off into its own currency, [Zcash][9].
While Zcash has brought its own innovations, using recent cryptographic advances known as zero-knowledge proofs, it maintains compatibility with the vast majority of the Bitcoin code base, meaning it too can benefit from upstream Bitcoin innovations.
Another project, [CryptoNote][10], didn't use the same code base but sprouted from the same community, building on (and against) Bitcoin and again, on older ideas. Published in December 2012, it led to the creation of several cryptocurrencies, of which [Monero][11] (2014) is the best-known. Monero takes a different approach to Zcash but aims to solve the same issues: privacy and fungibility.
As is often the case in the open source world, there is more than one tool for the job.
### The next generations: "Blockchain-ng"
So far, however, all these variations have only really been about refining cryptocurrencies or extending them to support another type of transaction. This brings us to the second generation of blockchains.
Once the community started modifying what a blockchain could be used for and tweaking technical aspects, it didn't take long for some people to expand and rethink them further. A longtime follower of Bitcoin, [Vitalik Buterin][12] suggested in late 2013 that a blockchain's transactions could represent the change of states of a state machine, conceiving the blockchain as a distributed computer capable of running applications ("smart contracts"). The project, [Ethereum][13], went live in July 2015. It has seen fair success in running distributed apps, and the popularity of some of its better-known distributed apps ([CryptoKitties][14]) has even caused the Ethereum blockchain to slow down.
This demonstrates one of the big limitations of current blockchains: speed and capacity. (Speed is often measured in transactions per second, or TPS.) Several approaches have been suggested to solve this, from sharding to sidechains and so-called "second-layer" solutions. The need for more innovation here is strong.
With the words "smart contract" in the air and a proved—if still slow—technology to run them, another idea came to fruition: permissioned blockchains. So far, all the blockchain networks we've described have had two unsaid characteristics: They are public (anyone can see them function), and they are without permission (anyone can join them). These two aspects are both desirable and necessary to run a distributed, non-third-party-based currency.
As blockchains were being considered more and more separately from cryptocurrencies, it started to make sense to consider them in some private, permissioned settings. A consortium-type group of actors that have business relationships but don't necessarily trust each other fully can benefit from these types of blockchains—for example, actors along a logistics chain, financial or insurance institutions that regularly do bilateral settlements or use a clearinghouse, and the same goes for healthcare institutions.
Once you change the setting from "anyone can join" to "invitation-only," further changes and tweaks to the blockchain building blocks become possible, yielding interesting results for some.
For a start, proof of work, designed to protect the network from malicious and spammy actors, can be replaced by something simpler and less resource-hungry, such as a [Raft][15]-based consensus protocol. A tradeoff appears between a high level of security or faster speed, embodied by the option of simpler consensus algorithms. This is highly desirable to many groups, as they can trade some cryptography-based assurance for assurance based on other means—legal relationships, for instance—and avoid the energy-hungry arms race that proof of work often leads to. This is another area where innovation is ongoing, with [Proof of Stake][16] a notable contender for the public network consensus mechanism of choice. It would likely also find its way to permissioned networks too.
Several projects make it simple to create permissioned blockchains, including [Quorum][17] (a fork of Ethereum) and [Hyperledger][18]'s [Fabric][19] and [Sawtooth][20], two open source projects based on new code.
Permissioned blockchains can avoid certain complexities that public, non-permissioned ones can't, but they still have their own set of issues. Proper management of participants is one: Who can join? How do they identify? How can they be removed from the network? Does one entity on the network manage a central public key infrastructure (PKI)?
### Open nature of blockchains
In all of the cases so far, one thing is clear: The goal of using a blockchain is to raise the level of trust participants have in the network and the data it produces—ideally, enough to be able to use it as is, without further work.
Reaching this level of trust is possible only if the software that powers the network is free and open source. Even a correctly distributed proprietary blockchain is essentially a collection of independent agents running the same third party's code. By nature, it's necessary—but not sufficient—for a blockchain's source code to be open source. This has both been a minimum guarantee and the source of further innovation as the ecosystem keeps growing.
Finally, it is worth mentioning that while the open nature of blockchains has been a source of innovation and variation, it has also been seen as a form of governance: governance by code, where users are expected to run whichever specific version of the code contains a function or approach they think the whole network should embrace. In this respect, one can say the open nature of some blockchains has also become a cop-out regarding governance. But this is being addressed.
### Third and fourth generations: governance
Next, I will look at what I am currently considering the third and fourth generations of blockchains: blockchains with built-in governance tools and projects to solve the tricky question of interconnecting the multitude of different blockchain projects to let them exchange information and value with each other.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/blockchain-guide-next-generation
Author: [Axel Simon][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/axel
[1]:https://bitcoin.org
[2]:https://slashdot.org/
[3]:https://github.com/bitcoin/bitcoin
[4]:https://bitcointalk.org/
[5]:https://www.namecoin.org/
[6]:https://litecoin.org/
[7]:https://en.wikipedia.org/wiki/Scrypt
[8]:http://zerocash-project.org/index
[9]:https://z.cash
[10]:https://cryptonote.org/
[11]:https://en.wikipedia.org/wiki/Monero_(cryptocurrency)
[12]:https://en.wikipedia.org/wiki/Vitalik_Buterin
[13]:https://ethereum.org
[14]:http://cryptokitties.co/
[15]:https://en.wikipedia.org/wiki/Raft_(computer_science)
[16]:https://www.investopedia.com/terms/p/proof-stake-pos.asp
[17]:https://www.jpmorgan.com/global/Quorum
[18]:https://hyperledger.org/
[19]:https://www.hyperledger.org/projects/fabric
[20]:https://www.hyperledger.org/projects/sawtooth

Sosreport A Tool To Collect System Logs And Diagnostic Information
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/sos-720x340.png)
If you're working as an RHEL administrator, you have probably heard about **Sosreport**, an extensible, portable support data collection tool. It is a tool to collect system configuration details and diagnostic information from a Unix-like operating system. When a user raises a support ticket, he/she has to run this tool and send the resulting report generated by Sosreport to the Red Hat support executive. The executive will then perform an initial analysis based on the report and try to find what the problem is in the system. Not just on RHEL systems; you can use it on any Unix-like operating system for collecting system logs and other debug information.
### Installing Sosreport
Sosreport is available in the official repositories of Red Hat systems, so you can install it using the Yum or DNF package managers as shown below.
```
$ sudo yum install sos
```
Or,
```
$ sudo dnf install sos
```
On Debian, Ubuntu and Linux Mint, run:
```
$ sudo apt install sosreport
```
### Usage
Once installed, run the following command to collect your system configuration details and other diagnostic information.
```
$ sudo sosreport
```
You will be asked to enter some details of your system, such as your name, the case id, etc. Type the details accordingly and press the ENTER key to generate the report. If you don't want to change anything and want to use the default values, simply press ENTER.
Sample output from my CentOS 7 server:
```
sosreport (version 3.5)
This command will collect diagnostic and configuration information from
this CentOS Linux system and installed applications.
An archive containing the collected information will be generated in
/var/tmp/sos.DiJXi7 and may be provided to a CentOS support
representative.
Any information provided to CentOS will be treated in accordance with
the published support policies at:
https://wiki.centos.org/
The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.
No changes will be made to system configuration.
Press ENTER to continue, or CTRL-C to quit.
Please enter your first initial and last name [server.ostechnix.local]:
Please enter the case id that you are generating this report for []:
Setting up archive ...
Setting up plugins ...
Running plugins. Please wait ...
Running 73/73: yum...
Creating compressed archive...
Your sosreport has been generated and saved in:
/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
The checksum is: 8f08f99a1702184ec13a497eff5ce334
Please send this file to your support representative.
```
If you don't want to be prompted to enter such details, simply use batch mode as shown below.
```
$ sudo sosreport --batch
```
As you can see in the above output, an archived report is generated and saved in **/var/tmp/sos.DiJXi7**. In RHEL 6/CentOS 6, the report will be generated in the **/tmp** location. You can now send this report to your support executive, so that they can do an initial analysis and find what the problem is.
You might be concerned about or want to know what's in the report. If so, you can view it by running the following command:
```
$ sudo tar -tf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
Or,
```
$ sudo vim /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
Please note that the above commands will not extract the archive, but only display the list of files and folders in the archive. If you want to view the actual contents of the files in the archive, first extract the archive using the following command:
```
$ sudo tar -xf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
All the contents of the archive will be extracted into a directory named “sosreport-server.ostechnix.local-20180628171844/” in the current working directory. Go to that directory and view the contents of any file using the cat command or any other text viewer:
```
$ cd sosreport-server.ostechnix.local-20180628171844/
$ cat uptime
17:19:02 up 1:03, 2 users, load average: 0.50, 0.17, 0.10
```
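The difference between listing (`-t`) and extracting (`-x`) trips people up, so here is a small self-contained demo of the same tar workflow that you can run anywhere; the file names are made up and stand in for a real sosreport archive:

```shell
# Build a throwaway .tar.xz that mimics a (much smaller) sosreport archive,
# then show that -t only lists its contents while -x actually extracts them.
workdir=$(mktemp -d)
cd "$workdir"
mkdir sosdemo
echo " 17:19:02 up 1:03, 2 users" > sosdemo/uptime
tar -cJf sosdemo.tar.xz sosdemo
rm -r sosdemo                     # remove the originals to prove the point
tar -tf sosdemo.tar.xz            # lists sosdemo/ and sosdemo/uptime
test ! -e sosdemo/uptime && echo "not extracted yet"
tar -xf sosdemo.tar.xz            # now the files exist on disk again
cat sosdemo/uptime
```

The same pattern applies to the real archive: `-t` to review what you are about to send to support, `-x` only if you actually need the files on disk.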
For more details about Sosreport, refer to the man pages.
```
$ man sosreport
```
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/sosreport-a-tool-to-collect-system-logs-and-diagnostic-information/
Author: [SK][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/

Discover hidden gems in LibreOffice
======
![](https://fedoramagazine.org/wp-content/uploads/2018/06/libreoffice6-gems-816x345.jpg)
LibreOffice is the most popular free and open source office suite. It's included by default in many Linux distributions, such as [Fedora Workstation][1]. Chances are that you use it fairly often, but how many of its features have you really explored? What hidden gems are there in LibreOffice that not so many people know about?
This article explores some lesser-known features in the suite, and shows you how to make the most of them. Then it wraps up with a quick look at the LibreOffice community, and how you can help to make the software even better.
### Notebookbar
Recent versions of LibreOffice have seen gradual improvements to the user interface, such as reorganized menus and additional toolbar buttons. However, the general layout hasn't changed drastically since the software was born back in 2010. But now, a completely new (and optional!) user interface called the [Notebookbar][2] is under development, and it looks like this:
![LibreOffice's \(experimental\) Notebookbar][3]
Yes, it's substantially different to the current “traditional” design, and there are a few variants. Because LibreOffice's design team is still working on the Notebookbar, it's not available by default in current versions of the suite. Instead, it's an experimental option.
To try it, make sure you're running a recent release of LibreOffice, such as 5.4 or 6.0. (LibreOffice 6.x is already available in Fedora 28.) Then go to Tools > Options in the menu. In the dialog box that appears, go to Advanced on the left-hand side. Tick the Enable experimental features box, click OK, and then you'll be prompted to restart LibreOffice. Go ahead and do that.
Now, in Writer, Calc and Impress, go to View > Toolbar Layout in the menu, and choose Notebookbar. You'll see the new interface straight away. Remember that this is still experimental, though, and not ready for production use, so don't be surprised if you see some bugs or glitches in places!
The default Notebookbar layout is called “tabbed”, and you can see tabs along the top of the window to display different sets of buttons. But if you go to View > Notebookbar in the menu, you'll see other variants of the design as well. Try them out! If you need to access the familiar menu bar, you'll find an icon for it in the top-right of the window. And to revert back to the regular interface, just go to View > Toolbar Layout > Default.
### Command line tips and tricks
Yes, you can even use LibreOffice from the Bash prompt. This is most useful if you want to perform batch operations on large numbers of files. For instance, let's say you have 20 .odt (OpenDocument Text) files in a directory, and want to make PDFs of them. Via LibreOffice's graphical user interface, you'd have to do a lot of clicking to achieve this. But at the command line, it's simple:
```
libreoffice --convert-to pdf *.odt
```
Or take another example: you have a set of Microsoft Office documents, and you want to convert them all to ODT:
```
libreoffice --convert-to odt *.docx
```
Another useful batch operation is printing. If you have a bunch of documents and want to print them all in one fell swoop, without manually opening them and clicking on the printer icon, do this:
```
libreoffice -p *.odt
```
It's also worth noting some of the other command line flags that LibreOffice uses. For instance, if you want to create a launcher in your program menu that starts Calc directly, instead of showing the opening screen, use:
```
libreoffice --calc
```
It's also possible to launch Impress and jump straight into the first slide of a presentation, without showing the LibreOffice user interface:
```
libreoffice --show presentation.odp
```
### Extra goodies in Draw
Writer, Calc and Impress are the most popular components of LibreOffice. But Draw is a capable tool as well for creating diagrams, leaflets and other materials. When you're working with multiple objects, there are various tricks you can do to speed up your work.
For example, you probably know you can select multiple objects by clicking and dragging a selection area around them. But you can also select and deselect objects in the group by holding down the Shift key while clicking.
When moving individual shapes or groups of shapes, you can use keyboard modifiers to change the movement speed. Try it out: select a bunch of objects, then use the cursor keys to move them around. Now try holding Shift to move them in greater increments, or Alt for fine-tuning. (The Ctrl key comes in useful here too, for panning around inside a document without moving the shapes.)
[LibreOffice 5.1][4] added a useful feature to equalize the widths and heights of multiple shapes. Select them with the mouse, right-click on the selection, and then go to the Shapes part of the context menu. There you'll see the Equalize options. This is good for making objects more consistent, and it works in Impress too!
![Equalizing shape sizes in Draw][5]
Lastly, here's a shortcut for duplicating objects: the Ctrl key. Try clicking and dragging on an object, with Ctrl held down, and you'll see that a copy of the object is made immediately. This is quicker and more elegant than using the Duplicate dialog box.
### Over to you!
So those are some features and tricks in LibreOffice you can now use in your work. But there's always room for improvement, and the LibreOffice community is working hard on the next release, [LibreOffice 6.1][6], which is due in early August. Give them a hand! You can help to test the beta releases, trying out new features and reporting bugs. Or [get involved in other areas][7] such as design, marketing, documentation, translations and more.
Photo by [William Iven][8] on [Unsplash][9].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/discover-hidden-gems-libreoffice/
Author: [Mike Saunders][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://fedoramagazine.org/author/mikesaunders/
[1]:https://getfedora.org/workstation
[2]:https://wiki.documentfoundation.org/Development/NotebookBar
[3]:https://fedoramagazine.org/wp-content/uploads/2018/06/libreoffice_gems_notebookbar-300x109.png
[4]:https://wiki.documentfoundation.org/ReleaseNotes/5.1
[5]:https://fedoramagazine.org/wp-content/uploads/2018/06/libreoffice_gems_draw-300x178.png
[6]:https://wiki.documentfoundation.org/ReleaseNotes/6.1
[7]:https://www.libreoffice.org/community/get-involved/
[8]:https://unsplash.com/photos/jrh5lAq-mIs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[9]:https://unsplash.com/search/photos/documents?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

Reflecting on the GPLv3 license for its 11th anniversary
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_vaguepatent_520x292.png?itok=_zuxUwyt)
Last year, I missed the opportunity to write about the 10th anniversary of [GPLv3][1], the third version of the GNU General Public License. GPLv3 was officially released by the Free Software Foundation (FSF) on June 29, 2007—better known in technology history as the date Apple launched the iPhone. Now, one year later, I feel some retrospection on GPLv3 is due. For me, much of what is interesting about GPLv3 goes back somewhat further than 11 years, to the public drafting process in which I was an active participant.
In 2005, following nearly a decade of enthusiastic self-immersion in free software, yet having had little open source legal experience to speak of, I was hired by Eben Moglen to join the Software Freedom Law Center as counsel. SFLC was then outside counsel to the FSF, and my role was conceived as focusing on the incipient public phase of the GPLv3 drafting process. This opportunity rescued me from a previous career turn that I had found rather dissatisfying. Free and open source software (FOSS) legal matters would come to be my new specialty, one that I found fascinating, gratifying, and intellectually rewarding. My work at SFLC, and particularly the trial by fire that was my work on GPLv3, served as my on-the-job training.
GPLv3 must be understood as the product of an earlier era of FOSS, the contours of which may be difficult for some to imagine today. By the beginning of the public drafting process in 2006, Linux and open source were no longer practically synonymous, as they might have been for casual observers several years earlier, but the connection was much closer than it is now.
Reflecting the profound impact that Linux was already having on the technology industry, everyone assumed GPL version 2 was the dominant open source licensing model. We were seeing the final shakeout of a Cambrian explosion of open source (and pseudo-open source) business models. A frothy business-fueled hype surrounded open source (for me most memorably typified by the Open Source Business Conference) that bears little resemblance to the present-day embrace of open source development by the software engineering profession. Microsoft, with its expanding patent portfolio and its competitive opposition to Linux, was commonly seen in the FOSS community as an existential threat, and the [SCO litigation][2] had created a cloud of legal risk around Linux and the GPL that had not quite dissipated.
That environment necessarily made the drafting of GPLv3 a high-stakes affair, unprecedented in free software history. Lawyers at major technology companies and top law firms scrambled for influence over the license, convinced that GPLv3 was bound to take over and thoroughly reshape open source and all its massive associated business investment.
A similar mindset existed within the technical community; it can be detected in the fears expressed in the final paragraph of the Linux kernel developers' momentous September 2006 [denunciation][3] of GPLv3. Those of us close to the FSF knew a little better, but I think we assumed the new license would be either an overwhelming success or a resounding failure—where "success" meant something approximating an upgrade of the existing GPLv2 project ecosystem to GPLv3, though perhaps without the kernel. The actual outcome was something in the middle.
I have no confidence in attempts to measure open source license adoption, which have in recent years typically been used to demonstrate a loss of competitive advantage for copyleft licensing. My own experience, which is admittedly distorted by proximity to Linux and my work at Red Hat, suggests that GPLv3 has enjoyed moderate popularity as a license choice for projects launched since 2007, though most GPLv2 projects that existed before 2007, along with their post-2007 offshoots, remained on the old license. (GPLv3's sibling licenses LGPLv3 and AGPLv3 never gained comparable popularity.) Most of the existing GPLv2 projects (with a few notable exceptions like the kernel and Busybox) were licensed as "GPLv2 or any later version." The technical community decided early on that "GPLv2 or later" was a politically neutral license choice that embraced both GPLv2 and GPLv3; this goes some way to explain why adoption of GPLv3 was somewhat gradual and limited, especially within the Linux community.
During the GPLv3 drafting process, some expressed concerns about a "balkanized" Linux ecosystem, whether because of the overhead of users having to understand two different, strong copyleft licenses or because of GPLv2/GPLv3 incompatibility. These fears turned out to be entirely unfounded. Within mainstream server and workstation Linux stacks, the two licenses have peacefully coexisted for a decade now. This is partly because such stacks are made up of separate units of strong copyleft scope (see my discussion of [related issues in the container setting][4]). As for incompatibility inside units of strong copyleft scope, here, too, the prevalence of "GPLv2 or later" was seen by the technical community as neatly resolving the theoretical problem, despite the fact that nominal license upgrading of GPLv2-or-later to GPLv3 hardly ever occurred.
I have alluded to the handwringing that some of us FOSS license geeks have brought to the topic of supposed copyleft decline. GPLv3 has taken its share of abuse from critics as far back as the beginning of the public drafting process, and some, predictably, have drawn a link between GPLv3 in particular and GPL or copyleft disfavor in general.
I have viewed it somewhat differently: Largely because of its complexity and baroqueness, GPLv3 was a lost opportunity to create a strong copyleft license that would appeal very broadly to modern individual software authors and corporate licensors. I believe individual developers today tend to prefer short, simple, easy to understand, minimalist licenses, the most obvious example of which is the [MIT License][5].
Some corporate decisionmakers around open source license selection may naturally share that view, while others may associate some parts of GPLv3, such as the patent provisions or the anti-lockdown requirements, as too risky or incompatible with their business models. The great irony is that the characteristics of GPLv3 that fail to attract these groups are there in part because of conscious attempts to make the license appeal to these same sorts of interests.
How did GPLv3 come to be so baroque? As I have said, GPLv3 was the product of an earlier time, in which FOSS licenses were viewed as the primary instruments of project governance. (Today, we tend to associate governance with other kinds of legal or quasi-legal tools, such as structuring of nonprofit organizations, rules around project decision making, codes of conduct, and contributor agreements.)
GPLv3, in its drafting, was the high point of an optimistic view of FOSS licenses as ambitious means of private regulation. This was already true of GPLv2, but GPLv3 took things further by addressing in detail a number of new policy problems—software patents, anti-circumvention laws, device lockdown. That was bound to make the license longer and more complex than GPLv2, as the FSF and SFLC noted apologetically in the first GPLv3 [rationale document][6].
But a number of other factors at play in the drafting of GPLv3 unintentionally caused the complexity of the license to grow. Lawyers representing vendors' and commercial users' interests provided useful suggestions for improvements from a legal and commercial perspective, but these often took the form of making simply worded provisions more verbose, arguably without net increases in clarity. Responses to feedback from the technical community, typically identifying loopholes in license provisions, had a similar effect.
The GPLv3 drafters also famously got entangled in a short-term political crisis—the controversial [Microsoft/Novell deal][7] of 2006—resulting in the permanent addition of new and unusual conditions in the patent section of the license, which arguably served little purpose after 2007 other than to make license compliance harder for conscientious patent-holding vendors. Of course, some of the complexity in GPLv3 was simply the product of well-intended attempts to make compliance easier, especially for community project developers, or to codify FSF interpretive practice. Finally, one can take issue with the style of language used in GPLv3, much of which had a quality of playful parody or mockery of conventional software license legalese; a simpler, straightforward form of phrasing would in many cases have been an improvement.
The complexity of GPLv3 and the movement towards preferring brevity and simplicity in license drafting and unambitious license policy objectives meant that the substantive text of GPLv3 would have little direct influence on later FOSS legal drafting. But, as I noted with surprise and [delight][8] back in 2012, MPL 2.0 adapted two parts of GPLv3: the 30-day cure and 60-day repose language from the GPLv3 termination provision, and the assurance that downstream upgrading to a later license version adds no new obligations on upstream licensors.
The GPLv3 cure language has come to have a major impact, particularly over the past year. Following the Software Freedom Conservancy's promulgation, with the FSF's support, of the [Principles of Community-Oriented GPL Enforcement][9], which calls for extending GPLv3 cure opportunities to GPLv2 violations, the Linux Foundation Technical Advisory Board published a [statement][10], endorsed by over a hundred Linux kernel developers, which incorporates verbatim the cure language of GPLv3. This in turn was followed by a Red Hat-led series of [corporate commitments][11] to extend the GPLv3 cure provisions to GPLv2 and LGPLv2.x noncompliance, a campaign to get individual open source developers to extend the same commitment, and an announcement by Red Hat that henceforth GPLv2 and LGPLv2.x projects it leads will use the commitment language directly in project repositories. I discussed these developments in a recent [blog post][12].
One lasting contribution of GPLv3 concerns changed expectations for how revisions of widely-used FOSS licenses are done. It is no longer acceptable for such licenses to be revised entirely in private, without opportunity for comment from the community and without efforts to consult key stakeholders. The drafting of MPL 2.0 and, more recently, EPL 2.0 reflects this new norm.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/gplv3-anniversary
作者:[Richard Fontana][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/fontana
[1]:https://www.gnu.org/licenses/gpl-3.0.en.html
[2]:https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes
[3]:https://lwn.net/Articles/200422/
[4]:https://opensource.com/article/18/1/containers-gpl-and-copyleft
[5]:https://opensource.org/licenses/MIT
[6]:http://gplv3.fsf.org/gpl-rationale-2006-01-16.html
[7]:https://en.wikipedia.org/wiki/Novell#Agreement_with_Microsoft
[8]:https://opensource.com/law/12/1/the-new-mpl
[9]:https://sfconservancy.org/copyleft-compliance/principles.html
[10]:https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html
[11]:https://www.redhat.com/en/about/press-releases/technology-industry-leaders-join-forces-increase-predictability-open-source-licensing
[12]:https://www.redhat.com/en/blog/gpl-cooperation-commitment-and-red-hat-projects?source=author&term=26851

View File

@ -0,0 +1,188 @@
SoCLI Easy Way To Search And Browse Stack Overflow From The Terminal
======
Stack Overflow is the largest, most trusted online community for developers to learn, share their programming knowledge, and build their careers. It's the world's largest developer community, where users can ask and answer questions. It's an open alternative to earlier question-and-answer sites such as Experts-Exchange.
It's my favorite website; I have learned a lot about programming there and found plenty of Linux-related material as well. I have even asked many questions, and answered a few when I had the time.
Today I stumbled upon two nice CLI utilities, SoCLI and how2, both of which let you browse Stack Overflow from the terminal with ease; they are very helpful when you don't have a GUI. Today we are going to discuss SoCLI, and we will cover how2 in an upcoming article.
**Suggested Read :**
**(#)** [How To Search The Arch Wiki Website Right From Terminal][1]
**(#)** [Googler Google Search from the command line on Linux][2]
**(#)** [Buku A Powerful Command-line Bookmark Manager for Linux][3]
This can be very useful for *NIX folks who spend most of their time on the CLI.
[SoCLI][4] is a Stack Overflow command-line interface written in Python. It allows you to search and browse Stack Overflow from the terminal.
### SoCLI Features:
* A variety of search modes: quick search, manual search & interactive search
* Coloured interface
* Question stats view
* Topic-based search using tags
* Can view user profiles
* Can create a new question via the web browser
* Can open the page in a browser
### How to Install pip
Make sure your system has the python-pip package, which is required to install SoCLI. pip is a Python package installer bundled with setuptools, and it is one of the recommended tools for installing Python packages on Linux.
For **`Debian/Ubuntu`** , use [apt-get command][5] or [apt command][6] to install pip.
```
$ sudo apt install python-pip
```
For **`RHEL/CentOS`** , use [YUM command][7] to install pip.
```
$ sudo yum install python-pip python-devel
```
For **`Fedora`** , use [dnf command][8] to install pip.
```
$ sudo dnf install python-pip
```
For **`Arch Linux`** , use [pacman command][9] to install pip.
```
$ sudo pacman -S python-pip
```
For **`openSUSE`** , use [Zypper Command][10] to install pip.
```
$ sudo zypper install python-pip
```
### How to Install SoCLI
Simply use the pip command to install socli.
```
$ sudo pip install socli
```
### How to Update SoCLI
Run the following command to upgrade your existing installation of socli to the newest version and get the latest features.
```
$ sudo pip install --upgrade socli
```
### How to Use SoCLI
Simply run the socli command in the terminal to start exploring Stack Overflow from the Linux command line. It offers various arguments that will speed up your searches even further.
Common syntax for **`SoCLI`**
```
socli [Arguments] [Search Query]
```
### Quick Search
The following command searches Stack Overflow for the query `command to check apache active connections` and displays the most-voted question together with its most-voted answer.
```
$ socli command to check apache active connections
```
![][12]
### Interactive Search
To enable interactive search, use the `-iq` argument followed by your search query.
The following command searches for the query `delete matching string` and prints a list of matching questions from Stack Overflow.
```
$ socli -iq delete matching string
```
![][13]
It allows you to choose any of the questions interactively by entering its number at the prompt below the results. In my case I chose question `2`, and SoCLI displayed the complete description of the chosen question along with its most-voted answer.
![][14]
Use `UP` and `DOWN` arrow keys to navigate to other answers. Press `LEFT` arrow key to go back to the list of questions.
### Manual Search
SoCLI lets you display the question at a given position in the results for a query. The following command searches Stack Overflow for the query `netstat command examples` and, just like a quick search, displays the full information of the second question for that query.
```
$ socli -r 2 -q netstat command examples
```
![][15]
### Topic Based Search
SoCLI supports topic-based search using specific tags. Just mention the tags with the `-t` argument, followed by the search query `command to increase jvm heap memory`.
```
$ socli -t linux -q command to increase jvm heap memory
```
![][16]
For multiple tags, just separate them with a comma.
```
$ socli -t linux,unix -q grep
```
### Post a New Question
Can't find an answer to your question on Stack Overflow? Don't worry, you can post a new question by running the following command.
```
$ socli -n
```
It will open the new question page of Stack Overflow in the web browser for you to create a new question.
### Getting Help
To learn more about SoCLI's options and arguments, check the built-in help.
```
$ socli -h
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/search-arch-wiki-website-command-line-terminal/
[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/
[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/
[4]:https://github.com/gautamkrishnar/socli
[5]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[6]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[7]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[8]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[9]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[12]:https://www.2daygeek.com/wp-content/uploads/2017/08/socli-search-and-browse-stack-overflow-from-command-line-1.png
[13]:https://www.2daygeek.com/wp-content/uploads/2017/08/socli-search-and-browse-stack-overflow-from-command-line-2.png
[14]:https://www.2daygeek.com/wp-content/uploads/2017/08/socli-search-and-browse-stack-overflow-from-command-line-2a.png
[15]:https://www.2daygeek.com/wp-content/uploads/2017/08/socli-search-and-browse-stack-overflow-from-command-line-3.png
[16]:https://www.2daygeek.com/wp-content/uploads/2017/08/socli-search-and-browse-stack-overflow-from-command-line-4.png

View File

@ -0,0 +1,70 @@
我们能否建立一个服务于用户而非广告商的社交网络?
=====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK)
如今,开源软件具有深远的意义,在推动数字经济创新方面发挥着关键作用。世界正在快速而彻底地改变。世界各地的人们需要一个专门的、中立的、透明的在线平台,来迎接我们这个时代的挑战。
开放的原则也许正是带领我们达成这一目标的途径。如果我们以开放的思维方式将数字创新与社会创新结合起来,会发生什么?
这个问题是我们在 [Human Connection][1] 工作的核心这是一个具有前瞻性的、总部位于德国的知识和行动网络其使命是创建一个服务于全球的真正的社交网络。我们受到这样一种观念的指引人类天生慷慨而富有同情心并且在慈善行为中茁壮成长。但我们还没有看到一个能充分支持人类乐于助人、乐于合作以促进共同利益这一天性的社交网络。Human Connection 渴望成为让每个人都成为积极变革者的平台。
为了实现一个以解决方案为导向的平台的梦想让人们通过与慈善机构、社区团体和社会变革活动人士的接触围绕社会公益事业采取行动Human Connection 将开放的价值观作为社会创新的载体。
以下是有关它如何工作的。
### 首先是透明
透明是 Human Connection 的指导原则之一。Human Connection 邀请世界各地的程序员通过[在 GitHub 上提交源代码][2]共同开发这个平台JavaScript、Vue、Nuxt并通过贡献代码或开发附加功能来支持一个真正的社交网络。
但我们对透明的承诺超出了我们的发展实践。事实上,当涉及到建立一种新的社交网络,促进那些对让世界变得更好的人之间的真正联系和互动,分享源代码只是迈向透明的一步。
为促进公开对话Human Connection 团队举行[定期在线公开会议][3]。我们在会上回答问题鼓励大家提出建议并对潜在的问题作出回应。我们的“与团队见面”Meet The Team活动也会被录制下来并在会后向公众公开。通过对我们的流程、源代码和财务状况完全透明我们可以保护自己免受批评或其他潜在的不利影响。
对透明的承诺意味着,所有在 Human Connection 上公开分享的用户贡献者将在 Creative Commons 许可下发布,最终作为数据包下载。通过让大众知识变得可用,特别是以一种分散的方式,我们创造了一个多元化社会的机会。
一个问题指导我们所有的组织决策:“它是否服务于人民和更大的利益?”我们用[联合国宪章(UN Charter)][4]和“世界人权宣言(Universal Declaration of Human Rights)”作为我们价值体系的基础。随着我们的规模越来越大,尤其是即将推出的公测版,我们必须对此任务负责。我甚至愿意邀请 Chaos Computer Club (译者注:这是欧洲最大的黑客联盟)或其他黑客俱乐部通过随机检查我们的平台来验证我们的代码和行为的完整性。
### 一个合作的社会
以一种[以社区为中心的协作方法][5]来编写 Human Connection 平台是超越社交网络实际应用理念的基础。我们的团队是通过找到问题的答案来驱动:“是什么让一个社交网络真正地社会化?”
一个抛弃了以利润为导向的算法、为广告商而不是最终用户服务的网络,只能通过转向对等生产和协作的过程而繁荣起来。例如,像 [Code Alliance][6] 和 [Code for America][7] 这样的组织已经证明了如何在开源环境中创造技术,造福人类并改变现状。社区驱动的项目,如基于地图的报告平台 [FixMyStreet][8],或者为 Humanitarian OpenStreetMap 建立的 [Tasking Manager][9],已经将众包作为推动其使用的一种方式。
我们建立 Human Connection 的方法从一开始就是合作。为了收集关于必要功能和真正社交网络的目的的初步数据我们与巴黎索邦大学University Sorbonne的国家东方语言与文明研究所National Institute for Oriental Languages and Civilizations, INALCO和德国斯图加特媒体大学Stuttgart Media University合作。这两个项目的研究结果都被纳入了 Human Connection 的早期开发。多亏了这项研究,[用户将拥有一套全新的功能][10],让他们可以控制自己看到的内容,以及自己如何与他人互动。由于早期的支持者[被邀请参与网络的 alpha 版本][10],他们可以率先体验到第一批值得关注的可用功能。以下是其中几项:
* 将信息与行动联系起来是我们研究会议的一个重要主题。当前的社交网络让用户处于信息阶段。这两所大学的学生团体都认为,需要一个以行动为导向的组件,以满足人类共同解决问题的本能。所以我们在平台上构建了一个[“Can Do”功能][11]。这是一个人在阅读了某个话题后可以采取行动的一种方式。“Can Do” 是用户建议的活动在“采取行动Take Action”领域每个人都可以实现。
* “Versus” 功能是我们研究得出的另一项成果。在传统社交网络仅限于评论功能的地方,我们的学生团体认为需要一种更加结构化且有用的方式来进行讨论和争论。“Versus” 是对公共帖子的反驳,它会单独显示,并提供一个突出围绕某个问题的不同意见的机会。
* 今天的社交网络并没有提供很多过滤内容的选项。研究表明,情绪过滤选项可以帮助我们根据日常情绪驾驭社交空间,并可能通过在我们希望仅看到令人振奋的内容的那一天时,不显示悲伤或难过的帖子来潜在地保护我们的情绪健康。
Human Connection 邀请改革者合作开发一个网络,有可能动员世界各地的个人和团体将负面新闻变成 “Can Do”并与慈善机构和非营利组织一起参与社会创新项目。
[订阅我们的每周时事通讯][12]以了解有关开放组织的更多信息。
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/3/open-social-human-connection
作者:[Dennis Hack][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dhack
[1]:https://human-connection.org/en/
[2]:https://github.com/human-connection/
[3]:https://youtu.be/tPcYRQcepYE
[4]:http://www.un.org/en/charter-united-nations/index.html
[5]:https://youtu.be/BQHBno-efRI
[6]:http://codealliance.org/
[7]:https://www.codeforamerica.org/
[8]:http://fixmystreet.org/
[9]:https://tasks.hotosm.org/
[10]:https://youtu.be/AwSx06DK2oU
[11]:https://youtu.be/g2gYLNx686I
[12]:https://opensource.com/open-organization/resources/newsletter

View File

@ -0,0 +1,216 @@
运营一个 Kubernetes 网络
============================================================
最近我一直在研究 Kubernetes 网络。我注意到一件事情就是,虽然关于如何设置 Kubernetes 网络的文章很多,也写得很不错,但是却没有看到关于如何去运营 Kubernetes 网络的文章、以及如何完全确保它不会给你造成生产事故。
在本文中,我将尽力让你相信三件事情(我觉得这些都很合理 :)
* 避免生产系统网络中断非常重要
* 运营联网软件是很难的
* 对你的网络基础设施做重大改变时,值得深思熟虑,认真考虑这种改变对可靠性的影响。(虽然非常“牛x”的谷歌人常说“这是我们在谷歌正在用的”谷歌工程师在 Kubernetes 上正做着很重大的工作!但是我认为重要的仍然是研究这个架构,并确保它对你的组织有意义。)
我肯定不是 Kubernetes 网络方面的专家,但是我在配置 Kubernetes 网络时遇到了一些问题,并且比以前更加了解 Kubernetes 网络了。
### 运营联网软件是很难的
在这里,我并不讨论有关运营物理网络的话题(对于它我不懂),而是讨论关于如何让像 DNS 服务、负载均衡以及代理这样的软件正常工作方面的内容。
我在一个负责很多网络基础设施的团队工作过一年时间,并且因此学到了一些运营网络基础设施的知识!(显然我还有很多的知识需要继续学习)在我们开始之前有三个整体看法:
* 联网软件经常重度依赖 Linux 内核。因此除了正确配置软件之外你还需要确保许多不同的系统控制sysctl配置正确而一个错误配置的系统控制就很容易让你处于“一切都很好”和“到处都出问题”的差别中。
* 联网需求会随时间而发生变化(比如,你的 DNS 查询或许比上一年多了五倍!或者你的 DNS 服务器突然开始返回 TCP 协议的 DNS 响应而不是 UDP 的,它们是完全不同的内核负载!)。这意味着之前正常工作的软件突然开始出现问题。
* 修复一个生产网络的问题,你必须有足够的经验。(例如,看这篇 [由 Sophie Haskins 写的关于 kube-dns 问题调试的文章][1])我在网络调试方面比以前进步多了,但那也是我花费了大量时间研究 Linux 网络知识之后的事了。
我距离成为一名网络运营专家还差得很远,但是我认为以下几点很重要:
1. 对生产网络的基础设施做重大的更改是很难的(因为它会产生巨大的混乱)
2. 当你对网络基础设施做重大更改时,真的应该仔细考虑如果新网络基础设施失败该如何处理
3. 是否有很多人都能理解你的网络配置
切换到 Kubernetes 显然是个非常大的更改!因此,我们来讨论一下可能会导致错误的地方!
### Kubernetes 网络组件
在本文中我们将要讨论的 Kubernetes 网络组件有:
* 覆盖网络后端flannel、calico、weave、romana 等)
* `kube-dns`
* `kube-proxy`
* 入站控制器 / 负载均衡器
* `kubelet`
如果你打算配置 HTTP 服务,或许这些你都会用到。这些组件中的大部分我都不会用到,但是我尽可能去理解它们,因此,本文将涉及它们有关的内容。
### 最简化的方式:为所有容器使用宿主机网络
我们从你能做到的最简单的东西开始。这并不能让你在 Kubernetes 中运行 HTTP 服务。我认为它是非常安全的,因为在这里面可以让你动的东西很少。
如果你为所有容器使用宿主机网络,我认为需要你去做的全部事情仅有:
1. 配置 kubelet以便于容器内部正确配置 DNS
2. 没了,就这些!
如果你为每个 Pod 直接使用宿主机网络,那就不需要 kube-dns 或者 kube-proxy 了。你都不需要一个作为基础的覆盖网络。
这种配置方式中,你的 pod 们都可以连接到外部网络(同样的方式,你的宿主机上的任何进程都可以与外部网络对话),但外部网络不能连接到你的 pod 们。
这并不是最重要的(我认为大多数人想在 Kubernetes 中运行 HTTP 服务并与这些服务进行真实的通讯),但我认为有趣的是,从某种程度上来说,网络的复杂性并不是绝对需要的,并且有时候你不用这么复杂的网络就可以实现你的需要。如果可以的话,尽可能地避免让网络过于复杂。
### 运营一个覆盖网络
我们将要讨论的第一个网络组件是有关覆盖网络的。Kubernetes 假设每个 pod 都有一个 IP 地址,这样你就可以与那个 pod 中的服务进行通讯了。我在说到“覆盖网络”这个词时,指的就是这个意思(即“让你通过 IP 地址访问到 pod”的系统
所有其它的 Kubernetes 网络的东西都依赖正确工作的覆盖网络。更多关于它的内容,你可以读 [这里的 kubernetes 网络模型][10]。
Kelsey Hightower 在 [kubernetes the hard way][11] 中描述的方式看起来似乎很好,但是,事实上它的作法在超过 50 个节点的 AWS 上是行不通的,因此,我不打算讨论它了。
有许多覆盖网络后端calico、flannel、weaveworks、romana并且规划非常混乱。就我的观点来看我认为一个覆盖网络有 2 个职责:
1. 确保你的 pod 能够发送网络请求到外部的集群
2. 保持一个到子网络的稳定的节点映射,并且保持集群中每个节点都可以使用那个映射得以更新。当添加和删除节点时,能够做出正确的反应。
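上述第二条职责可以用一段极简的 Python 草图来示意(这只是一个假设性的演示,并非任何覆盖网络后端的真实实现,其中的子网和节点 IP 均为虚构的示例值):

```python
class RouteTable:
    """极简示意:维护“Pod 子网 -> 节点”的映射,
    类似 flannel host-gw 后端在每个节点上维护的路由表。
    仅为假设性草图,不是任何真实后端的代码。"""

    def __init__(self):
        self.routes = {}  # pod 子网 -> 节点 IP

    def node_added(self, node_ip, pod_subnet):
        # 新节点加入:为它的 pod 子网添加一条“经由该节点”的路由
        self.routes[pod_subnet] = node_ip

    def node_removed(self, node_ip):
        # 节点删除:必须清理所有指向它的路由,
        # 否则发往这些子网的流量会被转发到一个已不存在的节点
        self.routes = {s: n for s, n in self.routes.items() if n != node_ip}

    def next_hop(self, pod_subnet):
        return self.routes.get(pod_subnet)
```

如果 `node_removed` 这一步没有被正确执行,发往被删节点子网的流量就会无处可去——这正是节点增删时最容易出错的地方。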
Okay! 因此!你的覆盖网络可能会出现的问题是什么呢?
* 覆盖网络负责设置 iptables 规则(最基本的是 `iptables -A -t nat POSTROUTING -s $SUBNET -j MASQUERADE`),以确保那个容器能够向 Kubernetes 之外发出网络请求。如果在这个规则上有错误,你的容器就不能连接到外部网络。这并不很难(它只是几条 iptables 规则而已),但是它非常重要。我发起了一个 [pull request][2],因为我想确保它有很好的弹性。
* 添加或者删除节点时可能会有错误。我们使用 `flannel hostgw` 后端,我们开始使用它的时候,节点删除 [尚未开始工作][3]。
* 你的覆盖网络或许依赖一个分布式数据库etcd。如果那个数据库发生什么问题这将导致覆盖网络发生问题。例如[https://github.com/coreos/flannel/issues/610][4] 上说,如果在你的 `flannel etcd` 集群上丢失了数据,最后的结果将是在容器中网络连接会丢失。(现在这个问题已经被修复了)
* 你升级 Docker 以及其它东西导致的崩溃
* 还有更多的其它的可能性!
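上面第一条提到的 MASQUERADE 规则,展开来大致是下面这个样子(这里的 Pod 子网 `10.244.0.0/16` 只是一个常见的示例值,并非任何特定环境的真实配置,仅作示意):

```shell
# 对源地址在 Pod 子网内、目的地址在集群之外的流量做源地址伪装SNAT
# 这样容器发出的包才能以节点 IP 的身份访问外部网络并收到回包
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
```

如果这条规则缺失或配错,容器内的进程就无法访问集群之外的地址——排查“容器不能上网”时,这是值得首先检查的地方之一。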
我在这里主要讨论的是过去发生在 Flannel 中的问题,但这并不是在劝你不要使用 Flannel —— 事实上我很喜欢 Flannel因为我觉得它很简单比如类似 [vxlan 在后端这一块的部分][12] 只有 500 行代码),并且我觉得对我来说,通过代码来找出问题的根源是可行的。而且很显然,它在不断地改进,他们在审查 pull request 方面做得很好。
到目前为止,我运营覆盖网络的方法是:
* 学习它的工作原理的详细内容以及如何去调试它比如Flannel 用于创建路由的 hostgw 网络后端,因此,你只需要使用 `sudo ip route list` 命令去查看它是否正确即可)
* 如果需要的话,维护一个内部构建版本,这样打补丁比较容易
* 有问题时,向上游贡献补丁
我认为去遍历所有已合并的 PR 以及过去已修复的 bug 清单真的是非常有帮助的 —— 这需要花费一些时间,但这是得到一个其它人遇到的各种问题的清单的好方法。
对其他人来说他们的覆盖网络可能工作得很好但是我并不能从中得到任何经验而且我也曾听说过其他人报告类似的问题。如果你有一个类似配置的覆盖网络a在 AWS 上运行并且b节点数多于 50-100 个,我想知道你对运营这样一个网络有多大的把握。
### 运营 kube-proxy 和 kube-dns
现在,我已经分享了一些关于运营覆盖网络的想法,接下来我们讨论 kube-proxy 和 kube-dns。
这个标题的最后面有一个问号,那是因为我并没有真的运营过它们;在这里,我的问题比答案更多。
先来说说 Kubernetes 的服务是如何工作的!一个服务是一组 pod其中每个 pod 都有自己的 IP 地址(像 10.1.0.3、10.2.3.5、10.3.5.6 这样)。
1. 每个 Kubernetes 服务有一个 IP 地址(像 10.23.1.2 这样)
2. `kube-dns` 去解析 Kubernetes 服务 DNS 名字为 IP 地址因此my-svc.my-namespace.svc.cluster.local 可能映射到 10.23.1.2 上)
3. `kube-proxy` 配置 `iptables` 规则是为了在它们之间随机进行均衡负载。Kube-proxy 也有一个用户空间的轮询负载均衡器,但是在我的印象中,他们并不推荐使用它。
因此,当你发出一个请求到 `my-svc.my-namespace.svc.cluster.local` 时,它将解析为 10.23.1.2,然后,在你本地主机上的 `iptables` 规则(由 kube-proxy 生成)将随机重定向到 10.1.0.3 或者 10.2.3.5 或者 10.3.5.6 中的一个上。
在这个过程中我能想像出的可能出问题的地方:
* `kube-dns` 配置错误
* `kube-proxy` 挂了,以致于你的 `iptables` 规则没有得以更新
* 维护大量的 `iptables` 规则相关的一些问题
我们来讨论一下 `iptables` 规则,因为创建大量的 `iptables` 规则是我以前从没有听过的事情!
kube-proxy 像如下这样为服务的每个后端端点创建一条 `iptables` 规则(这些规则来自[这里][13]
```
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD[b][c]
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
```
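这些规则里逐条递增的 probability 值大致是 1/5、1/4、1/3、1/2最后一条无条件命中组合起来恰好等价于在 5 个后端之间做均匀随机选择。下面用一个假设性的 Python 草图来说明这个匹配逻辑(仅为示意,并非 kube-proxy 的真实实现):

```python
import random

def pick_endpoint(endpoints):
    """模拟 iptables 链的逐条匹配:
    第 i 条规则(从 0 开始计)以 1/(n-i) 的概率命中并跳转到对应后端,
    未命中则继续匹配下一条;最后一条规则无条件命中。
    整体效果等价于在 n 个后端中均匀随机选择一个。"""
    n = len(endpoints)
    for i, ep in enumerate(endpoints[:-1]):
        if random.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # 最后一条规则无条件命中
```

例如对 5 个后端,第一条规则以 1/5 的概率命中;若未命中(概率 4/5第二条以 1/4 命中,两者相乘仍是 1/5以此类推每个后端被选中的概率都是 1/5。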
因此kube-proxy 创建了许多 `iptables` 规则。它们都是什么意思?它对我的网络有什么样的影响?这里有一个来自华为的非常好的演讲,叫做 [支持 50,000 个服务的可伸缩 Kubernetes][14],它说如果在你的 Kubernetes 集群中有 5,000 个服务,增加一条新规则将需要 **11 分钟**。如果这种事情发生在真实的集群中,我认为这将是一件非常糟糕的事情。
在我的集群中肯定不会有 5,000 个服务,但是 5,000 并不是那么大的一个数字。为解决这个问题,他们给出的解决方案是 kube-proxy 用 IPVS 来替换这个 `iptables` 后端IPVS 是存在于 Linux 内核中的一个负载均衡器。
看起来,像 kube-proxy 正趋向于使用各种基于 Linux 内核的负载均衡器。我认为这只是一定程度上是这样,因为他们支持 UDP 负载均衡,而其它类型的负载均衡器(像 HAProxy并不支持 UDP 负载均衡。
但是,我觉得使用 HAProxy 更舒服!它能够用于去替换 kube-proxy我用谷歌搜索了一下然后发现了这个 [thread on kubernetes-sig-network][15],它说:
> kube-proxy 是很难用的,我们在生产系统中使用它近一年了,它在大部分的时间都表现的很好,但是,随着我们集群中的服务越来越多,我们发现它的排错和维护工作越来越难。在我们的团队中没有 iptables 方面的专家,我们只有 HAProxy&LVS 方面的专家,由于我们已经使用它们好几年了,因此我们决定使用一个中心化的 HAProxy 去替换分布式的代理。我觉得这可能会对在 Kubernetes 中使用 HAProxy 的其他人有用,因此,我们更新了这个项目,并将它开源:[https://github.com/AdoHe/kube2haproxy][5]。如果你发现它有用,你可以去看一看、试一试。
因此,那是一个有趣的选择!我在这里确实没有答案,但是,有一些想法:
* 负载均衡器是很复杂的
* DNS 也很复杂
* 如果你有运营某种类型的负载均衡器(比如 HAProxy的经验那么做一些额外的工作、用你熟悉的负载均衡器去替换全新的负载均衡器比如 kube-proxy或许更有意义。
* 我一直在考虑,我们希望在什么地方能够完全使用 kube-proxy 或者 kube-dns —— 我认为,最好是只在 Envoy 上投入,并且在负载均衡&服务发现上完全依赖 Envoy 来做。因此,你只需要将 Envoy 运营好就可以了。
正如你所看到的,我在关于如何运营 Kubernetes 中的内部代理方面的思路还是很混乱的并且我也没有使用它们的太多经验。总体上来说kube-proxy 和 kube-dns 还是很好的,也能够很好地工作,但是我仍然认为应该去考虑使用它们可能产生的一些问题(例如,“你不能有超过 5000 个 Kubernetes 服务”)。
### 入口
如果你正在运行着一个 Kubernetes 集群,那么到目前为止,很有可能的是,你事实上需要 HTTP 请求去进入到你的集群中。这篇博客已经太长了,并且关于入口我知道的也不多,因此,我们将不讨论关于入口的内容。
### 有用的链接
几个有用的链接,总结如下:
* [Kubernetes 网络模型][6]
* GKE 网络是如何工作的:[https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
* 上述的有关 `kube-proxy` 上性能的讨论:[https://www.youtube.com/watch?v=4-pawkiazEg][8]
### 我认为网络运营很重要
我对 Kubernetes 的所有这些联网软件的感觉是,它们都仍然是非常新的,并且我并不能确定我们(作为一个社区)真的知道如何去把它们运营好。这让我作为一个操作者感到很焦虑,因为我真的想让我的网络运行的很好!:) 而且我觉得作为一个组织,运行你自己的 Kubernetes 集群需要相当大的投入,以确保你理解所有的代码片段,这样当它们出现问题时你可以去修复它们。这不是一件坏事,它只是一个事而已。
我现在的计划是,继续不断地学习关于它们都是如何工作的,以尽可能多地减少对我动过的那些部分的担忧。
一如既往,我希望这篇文章对你有帮助;如果你发现文中有任何错误,非常欢迎告诉我。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/
作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/
[2]:https://github.com/coreos/flannel/pull/808
[3]:https://github.com/coreos/flannel/pull/803
[4]:https://github.com/coreos/flannel/issues/610
[5]:https://github.com/AdoHe/kube2haproxy
[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ
[8]:https://www.youtube.com/watch?v=4-pawkiazEg
[9]:https://jvns.ca/categories/kubernetes
[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md
[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan
[13]:https://github.com/kubernetes/kubernetes/issues/37932
[14]:https://www.youtube.com/watch?v=4-pawkiazEg
[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0

View File

@ -1,49 +0,0 @@
容器基础知识:你需要知道的术语
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-krissia-cruz.jpg?itok=-TNPqdcp)
[在前一篇文章中][1]我们谈到了容器container是什么以及它是如何培育创新并助力企业快速发展的。在以后的文章中我们将讨论如何使用容器。然而在深入探讨这个话题之前我们需要了解关于容器的一些术语和命令。掌握了这些术语才不至于产生混淆。
让我们来探讨 [Docker][2] 容器世界中使用的一些基本术语吧。
**容器Container** **** 到底什么是容器呢?它是一个 Docker 镜像image的运行实例。它包含一个 Docker 镜像、执行环境和说明。它与系统完全隔离,所以多个容器可以在系统上运行,并且完全无视对方的存在。你可以从同一镜像中复制出多个容器,并在需求较高时扩展服务,在需求低时对这些容器进行缩减。
**Docker 镜像Image** **** 这与你下载的 Linux 发行版的镜像别无二致。它是一个安装包,包含了用于创建、部署和执行容器的一系列依赖关系和信息。你可以在几秒钟内创建任意数量的完全相同的容器。镜像是分层叠加的。一旦镜像被创建出来,是不能更改的。如果你想对容器进行更改,则只需创建一个新的镜像并从该镜像部署新的容器即可。
**仓库Repository / repo** **:** Linux 的用户对于仓库这个术语一定不陌生吧。它是一个软件库,存储了可下载并安装在系统中的软件包。在 Docker 容器中,唯一的区别是它管理的是通过标签分类的 Docker 镜像。你可以找到同一个应用程序的不同版本或不同变体,他们都有适当的标记。
**镜像管理服务Registry** **:** 可以将其想象成 GitHub。这是一个在线服务管理并提供了对 Docker 镜像仓库的访问例如默认的公共镜像仓库——DockerHub。供应商可以将他们的镜像库上传到 DockerHub 上,以便他们的客户下载和使用官方镜像。一些公司为他们的镜像提供自己的服务。镜像管理服务不必由第三方机构来运行和管理。组织机构可以使用预置的服务来管理内部范围的镜像库访问。
**标签Tag** **:** 当你创建 Docker 镜像时可以给它添加一个合适的标签以便轻松识别不同的变体或版本。这与你在任何软件包中看到的并无区别。Docker 镜像在添加到镜像仓库时被标记。
现在你已经掌握了基本知识,下一个阶段是理解实际使用 Docker 容器时用到的术语。
**Dockerfile** **:** 这是一个文本文件,包含为了构建 Docker 镜像需手动执行的命令。Docker 使用这些指令自动构建镜像。
**Build** **:** 这是从 Dockerfile 创建成镜像的过程。
**Push** **:** 一旦镜像创建完成“push”是将镜像发布到仓库的过程。该术语也是我们下一篇文章要学习的命令之一。
**Pull** **:** 用户可以通过“pull”过程从仓库检索该镜像。
**Compose** **:** 复杂的应用程序会包含多个容器。Compose 是一个用于运行多容器应用程序的命令行工具。它允许你用单条命令运行一个多容器的应用程序,简化了多容器带来的问题。
### 总结
容器术语的范围很广泛,这里是经常遇到的一些基本术语。下一次当你看到这些术语时,你会确切地知道它们的含义。在下一篇文章中,我们将开始使用 Docker 容器。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
作者:[Swapnil Bhartiya][a]
译者:[jessie-pang](https://github.com/jessie-pang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/what-are-containers-and-why-should-you-care
[2]:https://www.docker.com/

View File

@ -1,257 +0,0 @@
# Linux 文件系统详解
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/search.jpg?itok=7hj0YYjF)
早在 1996 年,在真正理解文件系统的结构之前,我就学会了如何在我崭新的 Linux 系统上安装软件。这是有问题的,但问题不在于程序,因为即使我不知道实际的可执行文件放在哪里,它们也会神奇地正常工作。问题在于文档。
你看那时候的 Linux 不像今天这样直观、用户友好。你必须读很多东西。你必须知道你的 CRT 显示器的频率、拨号调制解调器噪音背后的来龙去脉,以及其他数以百计的事情。我很快就意识到,我需要花一些时间来掌握目录的组织方式,以及 */etc*(不是用于存放杂项文件)、*/usr*(不是用于存放用户文件)和 */bin*(不是垃圾桶)的含义。
本教程将帮助您加快速度,比我做得更快。
### 结构
从终端窗口探索 Linux 文件系统是有道理的。这并不是因为本文作者是一个脾气暴躁、对新潮的图形工具不以为然的老头(尽管确实有那么一点),而是因为终端虽然只有文本,却是显示 Linux 目录树结构的更好工具。
事实上,这是您将安装的第一个工具的名称:*tree*。如果你正在使用 Ubuntu 或 Debian ,你可以:
```
sudo apt install tree
```
在 Red Hat 或 Fedora :
```
sudo dnf install tree
```
对于 SUSE/openSUSE 可以使用 `zypper`
```
sudo zypper install tree
```
对于使用 Arch ManjaroAntergos等等使用
```
sudo pacman -S tree
```
...等等。
一旦安装好,在终端窗口运行 *tree* 命令:
```
tree /
```
上述指令中的 `/` 指的是根目录。系统中的其他目录都是从根目录分支而出,当你运行 `tree` 命令,并且告诉它从根目录开始,那么你就可以看到整个目录树,系统中的所有目录及其子目录,还有他们的文件。
如果您已经使用您的系统有一段时间了这可能需要一段时间因为即使您自己还没有生成很多文件Linux系统及其应用程序总是记录、缓存和存储临时文件。文件系统中的条目数量可以快速增长。
不过,不要感到不知所措。 相反,试试这个:
```
tree -L 1 /
```
您应该会看到类似图 1 的结果。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/f01_tree01.png?itok=aGKzzC0C)
上面的指令可以理解为“只显示以 /(根目录)开头的目录树的第一级”。`-L` 选项告诉 `tree` 你想看到多少层目录。
大多数 Linux 发行版都会向您显示与上图相同或非常类似的结构。这意味着,即使你现在感到困惑,只要掌握了这一点,你就掌握了世界上大部分(如果不是全部的话)Linux 系统的文件系统结构。
为了让您开始走上掌握之路,让我们看看每个目录的用途。当我们逐一查看时,你可以使用 *ls* 来查看它们的内容。
### 目录
从上到下,你所看到的目录如下
#### _/bin_
*/bin* 目录是包含一些二进制文件的目录,即可以运行的一些应用程序。你会在这个目录中找到上面提到的 *ls* 程序,以及用于新建、删除、移动文件和目录的其他基本工具,还有另外一些程序。文件系统树的其他地方还有更多的 *bin* 目录,不过我们稍后再来讨论它们。
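想知道某个命令对应的二进制文件放在哪个目录,可以用 `command -v` 查一下(具体输出路径因发行版而异):

```shell
# 查询 ls 命令对应的可执行文件所在位置
command -v ls
# 典型输出:/bin/ls 或 /usr/bin/ls
```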
#### _/boot_
*/boot* 目录包含启动系统所需的文件。 我必须要说吗? 好吧,我会说:**不要动它** 如果你在这里弄乱了其中一个文件,你可能无法运行你的 Linux修复破坏的系统是非常痛苦的一件事。 另一方面,不要太担心无意中破坏系统:您必须拥有超级用户权限才能执行此操作。
#### _/dev_
*/dev* 目录包含设备文件。 其中许多是在启动时或甚至在运行时生成的。 例如,如果您将新的网络摄像头或 USB 随身碟连接到您的机器中,则会自动弹出一个新的设备条目。
#### _/etc_
*/etc* 这个目录的名称会让人非常困惑。它的名字来自最早的 Unix 系统,字面意思是 “et cetera”(等等),因为它曾是系统管理员不确定该把文件放在哪里时的堆放处。
现在,说 */etc* 是“要配置的所有内容”更为恰当,因为它包含大部分(如果不是全部的话)系统配置文件。例如,记录系统名称、用户及其密码、网络上计算机的名称,以及硬盘分区挂载位置和时间的文件都在这里。再说一遍,如果你是 Linux 新手,在对系统的工作方式有更好的理解之前,最好不要在这里做太多改动。
#### _/home_
*/home* 是存放用户个人目录的地方。以我为例,*/home* 下有两个目录:*/home/paul*,其中包含我所有的东西;另一个是 */home/guest* 目录,以备有人需要使用我的电脑。
#### _/lib_
*/lib* 是库文件所在的地方。库是一类文件,包含可供应用程序使用的代码。它们包含应用程序用来在桌面上绘制窗口、控制外围设备或向硬盘写入文件的代码片段。
在文件系统的其他地方还散布着更多的 *lib* 目录,但直接挂载在 / 下的这个 */lib* 目录是特殊的,因为除此之外,它还包含了至关重要的内核模块。内核模块是使您的显卡、声卡、WiFi、打印机等正常工作的驱动程序。
#### _/media_
当您插入外部存储设备并试图访问它时,系统会把它自动挂载到 */media* 目录。与此列表中的大多数其他条目不同,*/media* 并不能追溯到 1970 年代这主要是因为在计算机运行时动态插入并检测存储设备U 盘、USB 硬盘、SD 卡、外部 SSD 等)是最近才出现的事。
#### _/mnt_
然而,*/mnt* 目录是一些过去的残余。这是您手动挂载存储设备或分区的地方。现在不常用了。
#### _/opt_
*/opt* 目录通常是您编译软件(即,您从源代码构建,并不是从您的系统的软件库中安装软件)的地方。应用程序最终会出现在 */opt/bin* 目录,库会在 */opt/lib* 目录中出现。
稍微说点题外话:应用程序和库的另一个去处是 */usr/local*,在这里安装软件时,也会用到 */usr/local/bin* 和 */usr/local/lib* 目录。软件最终安装到哪里,取决于开发人员如何配置那些控制编译和安装过程的文件。
#### _/proc_
*/proc*,就像 */dev* 是一个虚拟目录。它包含有关您的计算机的信息,例如关于您的 CPU 和您的 Linux 系统正在运行的内核的信息。与 */dev* 一样,文件和目录是在计算机启动或运行时生成的,因为您的系统正在运行且会发生变化。
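可以亲自验证 */proc* 的“虚拟”性质:下面读取的“文件”并不占用磁盘空间,而是内核在你读取它们的那一刻即时生成的(输出内容因机器而异):

```shell
# 查看内核版本信息(由内核即时生成)
cat /proc/version
# 查看 CPU 信息的前几行
head -n 5 /proc/cpuinfo
```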
#### _/root_
*/root* 是系统超级用户(也称为“管理员”)的主目录。它与其他用户的主目录分开存放,是因为您不应该随意动它。所以请把自己的东西放在你自己的目录中,伙计们。
#### _/run_
*/run* 是另一个较新的目录。系统进程出于自己的邪恶原因,用它来存储临时数据。这是又一个“不要动它”的目录。
#### _/sbin_
*/sbin* 与 */bin* 类似,但它包含的应用程序只有超级用户(首字母的 s 即由此而来)才需要。您可以通过 `sudo` 命令使用这些应用程序,该命令可以让您在许多发行版上临时获得超级用户权限。*/sbin* 目录通常包含用于安装、删除和格式化的工具。可想而知,其中一些指令使用不当会造成毁灭性后果,所以要小心使用。
#### _/usr_
*/usr* 目录是在 UNIX 早期用户的主目录被保留的地方。然而,正如我们上面看到的,现在 */home* 是用户保存他们的东西的地方。如今,*/usr* 包含了大量目录,而这些目录又包含了应用程序、库、文档、壁纸、图标和许多其他需要应用程序和服务共享的内容。
您还可以在 */usr* 目录下找到 */bin*、*/sbin* 和 */lib* 目录,它们与直接挂载在根目录下的那些目录有什么区别呢?现在的区别已经不大了。在早期,*/bin*(挂载在根目录下的那个)只包含一些基本的命令,例如 *ls*、*mv* 和 *rm*,这些命令随系统一起安装,用于基本的系统维护;而 */usr/bin* 目录则包含用户自己安装、日常工作使用的软件,例如文字处理器、浏览器和其他一些软件。
但是许多现代 Linux 发行版干脆把所有东西都放进 */usr/bin*,并让 */bin* 指向 */usr/bin*以防彻底删除它会破坏某些东西。因此Debian、Ubuntu 和 Mint 仍然保持 */bin* 和 */usr/bin*(以及 */sbin* 和 */usr/sbin*)分离;其他的,比如 Arch 及其衍生版,则只有一个存放二进制文件的“真实”目录 */usr/bin*,其余的 *bin* 目录都是指向 */usr/bin* 的“假”目录。
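你可以检查自己的发行版属于哪一种布局(下面的脚本只做只读检查,在任何发行版上运行都是安全的):

```shell
# 如果 /bin 是符号链接,说明该发行版采用了合并的 /usr 布局
if [ -L /bin ]; then
    echo "/bin 是指向 $(readlink /bin) 的符号链接"
else
    echo "/bin 是一个独立的目录"
fi
```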
#### _/srv_
*/srv* 目录包含服务器的数据。如果您正在 Linux 机器上运行 Web 服务器,您网站的 HTML 文件将放在 */srv/http*(或 */srv/www*)中。如果您运行的是 FTP 服务器,则您的文件将放在 */srv/ftp* 中。
#### _/sys_
*/sys* 是另一个类似 */proc**/dev* 的虚拟目录,它还包含连接到计算机的设备的信息。
在某些情况下,您还可以操纵这些设备。例如,我可以通过修改存储在 /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-eDP-1/intel_backlight/brightness 中的值来更改笔记本电脑屏幕的亮度(在你的机器上可能是不同的文件)。但要做到这一点,你必须成为超级用户。原因是,与许多其他虚拟目录一样,在 */sys* 中乱动内容和文件可能很危险,您可能会破坏系统。所以,除非你确信自己知道在做什么,否则不要动它。
#### _/tmp_
*/tmp* 包含临时文件,通常由正在运行的应用程序放置。这里的文件和目录通常(但并非总是)包含应用程序现在不需要、但稍后可能需要的数据。为了释放内存,这些数据被暂存在这里。
您也可以用 */tmp* 存放自己的临时文件。*/tmp* 是少数几个挂载在根目录下、无需成为超级用户即可实际操作的目录之一。问题是,应用程序有时不会回来取走并删除这些文件和目录,所以 */tmp* 常常会吃光硬盘空间,被垃圾塞满。在本系列的后面,我们将看到如何清理它。
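以下是在 */tmp* 中安全使用临时文件的一个小例子(用 `mktemp` 生成独一无二的文件名,避免与其他程序冲突):

```shell
# 创建一个唯一命名的临时文件,写入数据,用完即删
tmpfile=$(mktemp /tmp/demo.XXXXXX)
echo "scratch data" > "$tmpfile"
cat "$tmpfile"        # 输出scratch data
rm "$tmpfile"         # 自己清理,别等系统来收拾
```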
#### _/var_
*/var* 最初被命名是因为它的内容被认为是可变的,因为它经常变化。今天,它有点用词不当,因为还有许多其他目录也包含频繁更改的数据,特别是我们上面看到的虚拟目录。
不管怎样,*/var* 目录包含了像 */var/log* 这样的子目录,用来存放日志文件。日志是记录系统中所发生事件的文件。如果内核中出现了什么问题,它会被记录到 */var/log* 下的文件中;如果有人试图从外部侵入您的计算机,您的防火墙也会记录这些尝试。它还包含任务的假脱机spool文件这些“任务”可以是你发送给共享打印机、因为另一个用户正在打印长文档而不得不排队等待的打印任务也可以是等待投递给系统用户的邮件。
您的系统中可能还有一些我们上面没有提到的目录。例如,在屏幕截图中,有一个 */snap* 目录。这是因为这张截图是在 Ubuntu 系统上截取的。Ubuntu 最近把 [snap][1] 包作为一种分发软件的方式,*/snap* 目录就包含所有以 snap 方式安装的文件和软件。
### 更深入的研究
这里仅仅谈了根目录,但是许多子目录都指向它们自己的一组文件和子目录。图 2 给出了基本文件系统的总体概念(图片是在 Paul Gardner 的 CC BY-SA 许可下提供的),[Wikipedia 对每个目录的用途进行了总结][2]。
![filesystem][4]
图 2标准 Unix 文件系统
[许可][5]
Paul Gardner
要自行探索文件系统,请使用 `cd` 命令:
```
cd
```
将带您到您所选择的目录( *cd* 代表更改目录)。
如果你不知道你在哪儿,
```
pwd
```
*pwd* 会告诉你当前到底在哪里(*pwd* 代表“打印工作目录”)。同时
```
cd
```
*cd* 命令在没有任何选项或者参数的时候,将会直接带你到你自己的主目录,这是一个安全舒适的地方。
最后,
```
cd ..
```
*cd ..* 将会带你到上一层目录,离根目录更近一层。如果你在 */usr/share/wallpapers* 目录中执行 `cd ..`,你就会跳转到 */usr/share* 目录。
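把 `cd`、`pwd` 和 `cd ..` 连起来用,就能体会到目录树的层级关系:

```shell
cd /usr/share   # 进入 /usr/share
pwd             # 输出:/usr/share
cd ..           # 回到上一层
pwd             # 输出:/usr
cd              # 不带参数,回到自己的主目录
```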
要查看目录里有什么内容,使用
```
ls
```
或者简单地使用
```
l
```
列出你所在目录的内容。
当然,您总是可以使用 `tree` 来获得目录中内容的概述。在 */usr/share* 上试试——里面有很多有趣的东西。
### 总结
尽管 Linux 发行版之间存在细微差别,但它们的文件系统布局非常相似。可以这么说:认识了一个,就认识了全部。认识文件系统的最好方法就是探索它。因此,带上 `tree`、`ls` 和 `cd`,进入未知的领域吧。
您不能仅仅通过查看文件系统就破坏文件系统,因此请从一个目录移动到另一个目录并进行浏览。 很快你就会发现 Linux 文件系统及其布局的确很有意义,并且你会直观地知道在哪里可以找到应用程序,文档和其他资源。
通过 Linux 基金会和 edX 免费的 “[Linux入门][6]” 课程了解更多有关 Linux 的信息。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained
作者:[PAUL BROWN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.ubuntu.com/desktop/snappy
[2]:https://en.wikipedia.org/wiki/Unix_filesystem#Conventional_directory_layout
[3]:https://www.linux.com/files/images/standard-unix-filesystem-hierarchypng
[4]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/standard-unix-filesystem-hierarchy.png?itok=CVqmyk6P "filesystem"
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,70 +0,0 @@
Python 调试技巧
======
当涉及到调试时,你可以做出很多选择。很难提供总是有效的通用建议(除了“你试过关闭再打开么?”)。
这里有一些我最喜欢的 Python 调试技巧。
### 建立一个分支
请相信我。即使你从来没有打算将修改提交回上游,你也会很庆幸自己的实验都放在单独的分支里。
即使没有别的好处,它也会让事后清理更容易!
### 安装 pdb++
认真地说,如果你使用命令行,它会让你的生活更轻松。
pdb++ 所做的一切,就是用一个更好的模块替换标准的 pdb 模块。以下是执行 `pip install pdbpp` 后你会得到的:
* 彩色提示!
* tab 补全(非常适合探索!)
* 支持切分!
好吧,也许最后一条有点多余……但认真地说,安装 pdb++ 非常值得。
### 探索
有时候最好的办法就是随便折腾一下,看看会发生什么。在“明显”的位置放置一个断点,并确保它被命中。在代码中加入 `print()` 和/或 `logging.debug()` 语句,并查看代码的执行路径。
检查传递给你的函数的参数。检查库的版本(如果事情变得非常绝望)。
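下面的小脚本演示了用 `logging.debug()` 观察函数被调用时收到的参数(其中的 `divide` 函数是为演示而假设的)。调试信息输出到标准错误,不会混入程序本身的输出:

```shell
python3 - <<'EOF'
import logging
# 打开 DEBUG 级别日志,调试信息会输出到标准错误
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def divide(a, b):
    logging.debug("divide(a=%r, b=%r)", a, b)  # 记录传入的参数
    return a / b

print(divide(10, 2))   # 标准输出5.0
EOF
```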
### 一次只能改变一件事
在你探索一番后,你会对自己能做的事情有所了解。但在动手修改代码之前,先退一步,考虑一下你可以改变什么,然后只改变一件事。
做出改变后,然后测试一下,看看你是否接近解决问题。如果没有,请将它改回来,然后尝试其他方法。
一次只改一处,能让你清楚哪些改动有效、哪些无效。另外,问题解决后,你的新提交也会小得多(因为改动更少)。
这几乎就是科学方法的做法:一次只改变一个变量。通过亲眼观察并衡量每次改动的结果,你可以保持头脑清醒,并更快地找到解决方案。
### 不要假设,提出问题
偶尔一个开发人员(当然不是你!)会匆忙提交一些有问题的代码。当你去调试这段代码时,你需要停下来,并确保你明白它想要完成什么。
不要做任何假设。仅仅因为代码在 `model.py` 中并不意味着它不会尝试渲染一些 HTML。
同样,在做任何破坏性的事情之前,仔细检查你的所有外部连接。要删除一些配置数据?请确保你没有连接到你的生产系统。
### 聪明,但不太聪明
有时候我们写出的代码棒得令人吃惊,以至于连它是如何工作的都不显而易见。
发布这样的代码时,我们可能会觉得自己很聪明;但当代码崩溃时,我们往往会感到愚蠢,因为必须先回忆起它是如何工作的,才能弄清楚它为什么不工作。
留意任何看起来过于复杂、冗长或极短的代码段。这些地方可能隐藏着复杂性,滋生着错误。
--------------------------------------------------------------------------------
via: https://pythondebugging.com/articles/python-debugging-tips
作者:[PythonDebugging.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://pythondebugging.com

View File

@ -0,0 +1,76 @@
协同编辑器的历史性发明
======
不妨按时间顺序快速回顾一下主要协同编辑器的发展历程。
正如任何此类清单一样,它必定要从受人尊敬的“所有演示之母”说起:道格·恩格尔巴特在 1968 年的那场演示中,展示的内容基本上是一份此后所有可能被编写的软件的详尽清单。这不仅包括协同编辑器,还包括图形、编程和数学编辑器。
那次演示之后的所有编辑器,都只是为了跟上硬件加速发展而做出的更慢的软件实现。
> 软件加快的速度比硬件加快的速度慢。——沃斯定律
因此,闲话少说,下面是我找到的可圈可点的协同编辑器的清单。我说“可圈可点”,意思是它们具有可圈可点的特性或实现细节。
| 项目 | 日期 | 平台 | 说明 |
| --- | --- | --- | --- |
| [SubEthaEdit][1] | 2003-2015? | 仅 Mac | 我能找到的首个协同、实时、多光标编辑器,[这是在 Emacs 上对其进行逆向工程的尝试][2]。|
| [DocSynch][3] | 2004-2007 | ? | 构建在 IRC互联网中继聊天之上[(!)](https://anarc.at/smileys/idea.png)|
| [Gobby][4] | 2005 至今 | C多平台 | 首个开源、实现稳固的编辑器,至今仍然存在!其著名的协议“[libinfinoted][5]”出了名地难以移植到其他编辑器(例如 [Rudel][6] 未能在 Emacs 中实现该协议。2017 年 1 月发布的 0.7 版本添加了或许可以改善这一状况的 Python 绑定。有趣的插件:自动保存到磁盘。|
| [moonedit][7] | 2005-2008? | ? | 原网站已关闭。可以看到其他用户的光标,还会模仿击键的声音。内置计算器和音乐音序器。|
| [synchroedit][8] | 2006-2007 | ? |首款网络应用。|
| [Etherpad][9] | 2008 至今 | 网络 |首款稳定的网络应用。 最初在 2008 年被开发为一款大型 Java 应用,在 2009 年被谷歌获取并开源,然后在 2011 年被用 Node.JS 重写。广泛使用。|
| [CRDT][10] | 2011 | 规范 | 用于在不同计算机之间可靠地复制文件的数据结构的标准。|
| [Operational transform][11] | 2013 | 规范 | 与 CRDT 类似,但准确地说,两者并不相同。|
| [Floobits][12] | 2013 至今 | ? | 商业服务,但面向多种编辑器的插件是开源的。|
| [HackMD][13] | 2015 至今 | ? | 商业的,但[开源][14]。受 hackpadhackpad 已被 Dropbox 收购)的启发。|
| [Cryptpad][15] | 2016 至今 | 网络? | Xwiki 的副产品。加密的,服务器端“零知识”。|
| [Prosemirror][16] | 2016 至今 | 网络Node.JS | “试图架起 Markdown 文本编辑和传统 WYSIWYG 编辑器之间的桥梁。”并非一个完整的编辑器,而是一种可以用来构建编辑器的工具。|
| [Quill][17] | 2013 至今 | 网络Node.JS | 支持 JavaScript 的富文本编辑器。不确定是否支持协同编辑。|
| [Nextcloud][18] | 2017 至今 | 网络 | 类似谷歌文档的文档协作编辑。|
| [Teletype][19] | 2017 至今 | WebRTCNode.JS | 为 GitHub 的 [Atom 编辑器][20]引入了“门户portal”的想法使访客可以跟随主人在多个文档之间穿梭。在通过介绍服务器建立连接后使用基于 CRDT 的点对点P2P实时通信。|
| [Tandem][21] | 2018 至今 | Node.JS? | Atom、Vim、Neovim、Sublime 等编辑器的插件。借助中继服务器建立基于 CRDT 的 P2P 连接。多亏 Debian 开发者的参与,[可疑的证书问题][22]已被解决,这使它成为未来很有希望的一个选择。|
### 其他清单
  * [Emacs 维基][23]
  * [维基百科][24]
--------------------------------------------------------------------------------
via: https://anarc.at/blog/2018-06-26-collaborative-editors-history/
作者:[Anacr][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://anarc.at
[1]:https://www.codingmonkeys.de/subethaedit/
[2]:https://www.emacswiki.org/emacs/SubEthaEmacs
[3]:http://docsynch.sourceforge.net/
[4]:https://gobby.github.io/
[5]:http://infinote.0x539.de/libinfinity/API/libinfinity/
[6]:https://www.emacswiki.org/emacs/Rudel
[7]:https://web.archive.org/web/20060423192346/http://www.moonedit.com:80/
[8]:http://www.synchroedit.com/
[9]:http://etherpad.org/
[10]:https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
[11]:http://operational-transformation.github.io/
[12]:https://floobits.com/
[13]:https://hackmd.io/
[14]:https://github.com/hackmdio/hackmd
[15]:https://cryptpad.fr/
[16]:https://prosemirror.net/
[17]:https://quilljs.com/
[18]:https://nextcloud.com/collaboraonline/
[19]:https://teletype.atom.io/
[20]:https://atom.io
[21]:http://typeintandem.com/
[22]:https://github.com/typeintandem/tandem/issues/131
[23]:https://www.emacswiki.org/emacs/CollaborativeEditing
[24]:https://en.wikipedia.org/wiki/Collaborative_real-time_editor
[25]:https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
[26]:https://en.wikipedia.org/wiki/Douglas_Engelbart