Merge pull request #1 from LCTT/master

update
lixinyuxx 2018-11-23 12:17:50 +08:00 committed by GitHub
commit fc85ebd596
114 changed files with 3406 additions and 4125 deletions


@ -1,37 +1,35 @@
使用企业版 Docker 搭建自己的私有注册服务器
使用 Docker 企业版搭建自己的私有注册服务器
======
![docker trusted registry][1]
Docker 真的很酷,特别是和使用虚拟机相比,转移 Docker 镜像十分容易。如果你已准备好使用 Docker那你肯定已从 [Docker Hub][2] 上拉取完整的镜像。Docker Hub 是 Docker 的云端注册服务器服务,包含成千上万个供选择的 Docker 镜像。如果你开发了自己的软件包并创建了自己的 Docker 镜像,那么你会想有自己私有注册服务器。如果你有搭配着专有许可的镜像或想为你的构建系统提供复杂的持续集成CI过程则更应该拥有自己的私有注册服务器。
Docker 真的很酷,特别是和使用虚拟机相比,转移 Docker 镜像十分容易。如果你已准备好使用 Docker那你肯定已从 [Docker Hub][2] 上拉取完整的镜像。Docker Hub 是 Docker 的云端注册服务器服务,包含成千上万个供选择的 Docker 镜像。如果你开发了自己的软件包并创建了自己的 Docker 镜像,那么你会想有自己私有注册服务器。如果你有搭配着专有许可的镜像或想为你的构建系统提供复杂的持续集成CI过程则更应该拥有自己的私有注册服务器。
Docker 企业版包括 Docker Trusted Registry译者注:DTRDocker 可信注册服务器)。这是一个具有安全镜像管理功能的高可用的注册服务器,为在你自己的数据中心或基于云端的架构上运行而构建。在接下来的几周,我们将了解 DTR 是提供安全、可重用且连续的[软件供应链][3]的一个关键组件。你可以通过我们的[免费托管小样][4]立即开始使用,或者通过下载安装进行 30 天的免费试用。下面是开始自己安装的步骤。
Docker 企业版包括 <ruby>Docker 可信注册服务器<rt>Docker Trusted Registry</rt></ruby>DTR。这是一个具有安全镜像管理功能的高可用的注册服务器为在你自己的数据中心或基于云端的架构上运行而构建。在接下来我们将了解 DTR 是提供安全、可重用且连续的[软件供应链][3]的一个关键组件。你可以通过我们的[免费托管小样][4]立即开始使用,或者通过下载安装进行 30 天的免费试用。下面是开始自己安装的步骤。
## 配置 Docker 企业版
### 配置 Docker 企业版
Docker Trusted Registry 在通用控制面板UCP上运行,所以开始前要安装一个单节点集群。如果你已经有了自己的 UCP 集群,可以跳过这一步。在你的 docker 托管主机上,运行以下命令:
DTR 运行于通用控制面板UCP之上,所以开始前要安装一个单节点集群。如果你已经有了自己的 UCP 集群,可以跳过这一步。在你的 docker 托管主机上,运行以下命令:
```
# 拉取并安装 UCP
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp:latest install
```
当 UCP 启动并运行后,在安装 DTR 之前你还有几件事要做。针对刚刚安装的 UCP 实例,打开浏览器。在日志输出的末尾应该有一个链接。如果你已经有了 Docker 企业版的许可证,那就在这个界面上输入它吧。如果你还没有,可以访问 [Docker 商店][5]获取30天的免费试用版。
当 UCP 启动并运行后,在安装 DTR 之前你还有几件事要做。针对刚刚安装的 UCP 实例,打开浏览器。在日志输出的末尾应该有一个链接。如果你已经有了 Docker 企业版的许可证,那就在这个界面上输入它吧。如果你还没有,可以访问 [Docker 商店][5]获取 30 天的免费试用版。
准备好许可证后,你可能会需要改变一下 UCP 运行的端口。因为这是一个单节点集群DTR 和 UCP 可能会以相同的端口运行们的 web 服务。如果你拥有不只一个节点的 UCP 集群,这就不是问题,因为 DTR 会寻找有所需空闲端口的节点。在 UCP 中,点击管理员设置 -> 集群配置并修改控制器端口,比如 5443。
准备好许可证后,你可能会需要改变一下 UCP 运行的端口。因为这是一个单节点集群DTR 和 UCP 可能会以相同的端口运行它们的 web 服务。如果你拥有不只一个节点的 UCP 集群,这就不是问题,因为 DTR 会寻找有所需空闲端口的节点。在 UCP 中,点击“管理员设置 -> 集群配置”并修改控制器端口,比如 5443。
## 安装 DTR
### 安装 DTR
我们要安装一个简单的、单节点的 Docker Trusted Registry 实例。如果你要安装实际生产用途的 DTR那么你会将其设置为高可用HA模式即需要另一种存储介质比如基于云端的对象存储或者 NFSNetwork File System网络文件系统。因为目前安装的是一个单节点实例我们依然使用默认的本地存储。
我们要安装一个简单的、单节点的 DTR 实例。如果你要安装实际生产用途的 DTR那么你会将其设置为高可用HA模式即需要另一种存储介质比如基于云端的对象存储或者 NFSLCTT 译注Network File System网络文件系统。因为目前安装的是一个单节点实例我们依然使用默认的本地存储。
首先我们需要拉取 DTR 的 bootstrap 镜像。Boostrap 镜像是一个微小的独立安装程序,包括连接到 UCP 以及设置和启动 DTR 所需的所有容器、卷和逻辑网络。
首先我们需要拉取 DTR 的 bootstrap 镜像。bootstrap 镜像是一个微小的独立安装程序,包括连接到 UCP 以及设置和启动 DTR 所需的所有容器、卷和逻辑网络。
使用命令:
```
# 拉取并运行 DTR 引导程序
docker run -it --rm docker/dtr:latest install --ucp-insecure-tls
```
@ -39,55 +37,42 @@ docker run -it -rm docker/dtr:latest install -ucp-insecure-tls
然后 DTR bootstrap 镜像会让你确定几项设置,比如 UCP 安装的 URL 地址以及管理员的用户名和密码。从拉取所有的 DTR 镜像到设置全部完成,只需要一到两分钟的时间。
## 保证一切安全
### 保证一切安全
一切都准备好后,就可以向注册服务器推送或者从中拉取镜像了。在做这一步之前,让我们设置 TLS 证书,以便安全的与 DTR 通信。
一切都准备好后,就可以向注册服务器推送或者从中拉取镜像了。在做这一步之前,让我们设置 TLS 证书,以便与 DTR 安全地通信。
在 Linux 上,我们可以使用以下命令(只需确保更改了 DTR_HOSTNAME 变量,来正确映射我们刚刚设置的 DTR
在 Linux 上,我们可以使用以下命令(只需确保更改了 `DTR_HOSTNAME` 变量,来正确映射我们刚刚设置的 DTR
```
# 从 DTR 拉取 CA 证书(如果 curl 不可用,你可以使用 wget
DTR_HOSTNAME=<DTR 主机名>
curl -k https://${DTR_HOSTNAME}/ca > ${DTR_HOSTNAME}.crt
sudo mkdir -p /etc/docker/certs.d/${DTR_HOSTNAME}
sudo cp ${DTR_HOSTNAME}.crt /etc/docker/certs.d/${DTR_HOSTNAME}/
# 重启 docker 守护进程(在 Ubuntu 14.04 上,使用 `sudo service docker restart` 命令)
sudo systemctl restart docker
```
对于 Mac 和 Windows 版的 Docker我们会以不同的方式安装客户端。转入设置 -> 守护进程,在 Insecure Registries译者注不安全的注册服务器部分输入你的 DTR 主机名。点击应用docker 守护进程应在重启后可以良好使用。
对于 Mac 和 Windows 版的 Docker我们会以不同的方式安装客户端。转入“设置 -> 守护进程”,在“不安全的注册服务器”部分,输入你的 DTR 主机名。点击“应用”docker 守护进程应在重启后可以良好使用。
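在 Linux 上,与这个“不安全的注册服务器”设置等价的做法是修改 Docker 守护进程的 `daemon.json`。下面是一个简单的示意仅建议用于测试环境其中的主机名 `dtr.example.com` 是假设的示例,若该文件已有内容请手动合并而不要直接覆盖):

```
# 将 DTR 主机名加入 insecure-registries 列表(注意:这会覆盖已有的 daemon.json
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["dtr.example.com"]
}
EOF

# 重启 docker 守护进程使配置生效
sudo systemctl restart docker
```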
## 推送和拉取镜像
### 推送和拉取镜像
现在我们需要设置一个仓库来存放镜像。这和 Docker Hub 有一点不同,如果你做的 docker 推送仓库中不存在,它会自动创建一个。要创建一个仓库,在浏览器中打开 https://<Your DTR hostname> 并在出现登录提示时使用你的管理员凭据登录。如果你向 UCP 添加了许可证,则 DTR 会自动获取该许可证。如果没有,请现在确认上传你的许可证。
现在我们需要设置一个仓库来存放镜像。这和 Docker Hub 有一点不同,如果你做的 docker 推送仓库中不存在,它会自动创建一个。要创建一个仓库,在浏览器中打开 `https://<Your DTR hostname>` 并在出现登录提示时使用你的管理员凭据登录。如果你向 UCP 添加了许可证,则 DTR 会自动获取该许可证。如果没有,请现在确认上传你的许可证。
进入刚才的网页之后,点击`新建仓库`按钮来创建新的仓库。
进入刚才的网页之后,点击“新建仓库”按钮来创建新的仓库。
我们会创建一个用于存储 Alpine linux 的仓库,所以在名字输入处键入 `alpine`,点击`保存`(在 DTR 2.5 及更高版本中叫`创建`)。
我们会创建一个用于存储 Alpine linux 的仓库,所以在名字输入处键入 “alpine”点击“保存”在 DTR 2.5 及更高版本中叫“创建”)。
现在我们回到 shell 界面输入以下命令:
```
# 拉取 Alpine Linux 的最新版
docker pull alpine:latest
# 登入新的 DTR 实例
docker login <Your DTR hostname>
# 给 Alpine 镜像打上标签,以便将其推送到你的 DTR
docker tag alpine:latest <Your DTR hostname>/admin/alpine:latest
# 向 DTR 推送镜像
docker push <Your DTR hostname>/admin/alpine:latest
```
@ -95,22 +80,19 @@ docker push <Your DTR hostname>/admin/alpine:latest
```
# 从 DTR 中拉取镜像
docker pull <Your DTR hostname>/admin/alpine:latest
```
DTR 具有许多优秀的镜像管理功能,例如图像缓存,镜像,扫描,签名甚至自动化供应链策略。这些功能我们在后期的博客文章中更详细的探讨。
DTR 具有许多优秀的镜像管理功能,例如镜像缓存、镜像复制、扫描、签名,甚至自动化供应链策略。这些功能我们会在后面的博客文章中更详细地探讨。
--------------------------------------------------------------------------------
via: https://blog.docker.com/2018/01/dtr/
作者:[Patrick Devine;Rolf Neugebauer;Docker Core Engineering;Matt Bentley][a]
作者:[Patrick Devine][a]
译者:[fuowang](https://github.com/fuowang)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,18 +3,17 @@
为开源项目作贡献最好的方式是为它减少代码,我们应致力于写出让新手程序员无需注释就容易理解的代码,让维护者也无需花费太多精力就能着手维护。
在学生时代,我们会更多地用复杂巧妙的技术去挑战新的难题。首先我们会学习循环,然后是函数啊,类啊,等等。 当我们到达一定高的程度,能用更高级的技术写更长的程序,我们会因此受到称赞。 此刻我们发现老司机们用 monads 而新手们用 loop 作循环。
在学生时代,我们会更多地用复杂巧妙的技术去挑战新的难题。首先我们会学习循环,然后是函数啊,类啊,等等。当我们达到一定的水平,能用更高级的技术写更长的程序,我们会因此受到称赞。此刻我们发现老司机们用 monad而新手们用循环。
之后我们毕业找了工作,或者和他人合作开源项目。我们用在学校里学到的各种炫技寻求并骄傲地给出解决方案的代码实现。
哈哈, 我能扩展这个项目并实现某牛X功能啦 我这里能用继承啦, 我太聪明啦!
*哈哈,我能扩展这个项目,并实现某牛 X 功能啦,我这里能用继承啦,我太聪明啦!*
我们实现了某个小的功能,并以充分的理由觉得自己做到了。现实项目中的编程却不是针对某某部分的功能而言。以我个人的经验而言,以前我很开心的去写代码,并骄傲地向世界展示我所知道的事情。 有例为证,作为对某种编程技术的偏爱,这是和另一段元语言代码混合在一起的 [一行 algebra 代码][1],我注意到多年以后一直没人愿意碰它。
我们实现了某个小的功能,并以充分的理由觉得自己做到了。现实项目中的编程却不是针对某某部分的功能而言。以我个人的经验而言,以前我很开心的去写代码,并骄傲地向世界展示我所知道的事情。有例为证,作为对某种编程技术的偏爱,这是用另一种元编程语言构建的一个 [线性代数语言][1],注意,这么多年以来一直没人愿意碰它。
在维护了更多的代码后,我的观点发生了变化。
1. 我们不应去刻意探求如何构建软件。 软件是我们为解决问题所付出的代价, 那才是我们真实的目的。 我们应努力为了解决问题而构建较小的软件。
1. 我们不应去刻意探求如何构建软件。软件是我们为解决问题所付出的代价,那才是我们真实的目的。我们应努力为了解决问题而构建较小的软件。
2. 我们应使用尽可能简单的技术,这样就有更多的人可能会使用它,并且无需理解我们所知的高级技术就能扩展软件的功能。当然,在我们不知道如何使用简单技术去实现时,我们也可以使用高级技术。
所有的这些例子都不是听来的故事。我遇到的大部分人会认同某些部分,但不知为什么,当我们向一个新项目贡献代码时又会忘掉这个初衷。直觉里用复杂技术去构建的念头往往会占据上风。
@ -23,7 +22,7 @@
你写的每行代码都要花费人力。写代码当然是需要时间的,也许你会认为只是你个人在奉献,然而这些代码在被审阅的时候也需要花时间理解,对于未来维护和开发人员来说,他们在维护和修改代码时同样要花费时间。否则他们完全可以用这时间出去晒晒太阳,或者陪伴家人。
所以,当你向某个项目贡献代码时,请心怀谦恭。就像是,你正和你的家人进餐时,餐桌上却没有足够的食物,你索取你所需的部分,别人对你的自我约束将肃然起敬。以更少的代码去解决问题是很难的,你肩负重任的同时自然减轻了别人的重负。
所以,当你向某个项目贡献代码时,请心怀谦恭。就像是,你正和你的家人进餐时,餐桌上却没有足够的食物,你索取你所需的部分,别人对你的自我约束将肃然起敬。以更少的代码去解决问题是很难的,你肩负重任的同时自然减轻了别人的重负。
### 技术越复杂越难维护
@ -31,13 +30,13 @@
而在现实中,和团队去解决问题时,情况发生了逆转。现在,我们致力于尽可能使用简单的代码去解决问题。简单方式解决问题使新手程序员能够以此扩展并解决其他问题。简单的代码让别人容易上手,效果立竿见影。我们藉以只用简单的技术去解决难题,从而展示自己的价值。
看, 我用循环替代了递归函数并且一样达到了我们的需求。 当然我明白这是不够聪明的做法, 不过我注意到新手同事之前在这里会遇上麻烦,我觉得这改变将有所帮助吧。
*看,我用循环替代了递归函数并且一样达到了我们的需求。当然我明白这是不够聪明的做法,不过我注意到新手同事在这里会遇上麻烦,我觉得这改变将有所帮助吧。*
如果你是个好的程序员,你不需要证明你知道很多炫技。相应的,你可以通过用一个简单的方法解决一个问题来显示你的价值,并激发你的团队在未来的时间里去完善它。
### 当然,也请保持节制
话虽如此, 过于遵循 “用简单的工具去构建” 的教条也会降低生产力。经常地,用递归会比用循环解决问题更简单,用类或 monad 才是正确的途径。还有两种情况另当别论,一是只是只为满足自我而创建的系统,或者是别人毫无构建经验的系统。
话虽如此,过于遵循 “用简单的工具去构建” 的教条也会降低生产力。通常,用递归会比用循环解决问题更简单,用类或 monad 才是正确的途径。还有两种情况另当别论:一是只为满足自我而创建的系统,二是别人毫无构建经验的系统。
--------------------------------------------------------------------------------
@ -46,7 +45,7 @@ via: http://matthewrocklin.com/blog/work/2018/01/27/write-dumb-code
作者:[Matthew Rocklin][a]
译者:[plutoid](https://github.com/plutoid)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,77 @@
8 个很棒的 pytest 插件
======
> Python 测试工具最好的一方面是其强大的生态系统。这里列出了八个最好的插件。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A)
我们是 [pytest][1] 的忠实粉丝,并将其作为工作和开源项目的默认 Python 测试工具。在本月的 Python 专栏中,我们分享了为什么我们喜欢 `pytest` 以及一些让 `pytest` 测试工作更有趣的插件。
### 什么是 pytest
正如该工具的网站所说“pytest 框架可以轻松地编写小型测试,也能进行扩展以支持应用和库的复杂功能测试。”
`pytest` 允许你在任何名为 `test_*.py` 的文件中定义测试,并将其定义为以 `test_*` 开头的函数。然后pytest 将在整个项目中查找所有测试,并在控制台中运行 `pytest` 时自动运行这些测试。pytest 接受[标志和参数][2],它们可以改变测试运行器何时停止、如何输出结果、运行哪些测试,以及输出中包含哪些信息。它还包括一个 `set_trace()` 函数,可以插入到你的测试中;它会暂停你的测试,让你能够与变量交互,并在终端中“四处探查”来调试你的项目。
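下面是一个最小的示意(文件名与函数名均为假设的例子),演示 pytest 如何自动发现并运行测试:

```
# 创建一个以 test_ 开头命名的测试文件pytest 会自动发现它
cat > test_example.py <<'EOF'
def add(a, b):
    return a + b

def test_add():
    # 以 test_ 开头的函数会被 pytest 自动收集并执行
    assert add(1, 2) == 3
EOF

# 在当前目录运行 pytest-v 会输出更详细的结果
pytest -v
```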
`pytest` 最好的一方面是其强大的插件生态系统。因为 `pytest` 是一个非常流行的测试库,所以多年来创建了许多插件来扩展、定制和增强其功能。这八个插件是我们的最爱。
### 8 个很棒的插件
#### 1、pytest-sugar
[pytest-sugar][3] 改变了 `pytest` 的默认外观,添加了一个进度条,并立即显示失败的测试。它不需要配置,只需 `pip install pytest-sugar`,用 `pytest` 运行测试,来享受更漂亮、更有用的输出。
#### 2、pytest-cov
[pytest-cov][4] 在 `pytest` 中增加了覆盖率支持,来显示哪些代码行已经测试过,哪些还没有。它还将包括项目的测试覆盖率。
#### 3、pytest-picked
[pytest-picked][5] 对你已经修改但尚未提交 `git` 的代码运行测试。安装库并运行 `pytest --picked` 来仅测试自上次提交后已更改的文件。
#### 4、pytest-instafail
[pytest-instafail][6] 修改 `pytest` 的默认行为来立即显示失败和错误,而不是等到 `pytest` 完成所有测试。
#### 5、pytest-tldr
一个全新的 `pytest` 插件,可以将输出限制为你需要的东西。`pytest-tldr``tldr` 代表 “too long, didn't read” —— 太长,不想读),就像 pytest-sugar 一样,除基本安装外不需要配置。不像 pytest 的默认输出那么详细,[pytest-tldr][7] 将默认输出限制为失败测试的回溯信息,并忽略了一些令人讨厌的颜色编码。添加 `-v` 标志会为喜欢它的人返回更详细的输出。
#### 6、pytest-xdist
[pytest-xdist][8] 允许你通过 `-n` 标志并行运行多个测试:例如,`pytest -n 2` 将在两个 CPU 上运行你的测试。这可以显著加快你的测试速度。它还包括 `--looponfail` 标志,它将自动重新运行你的失败测试。
#### 7、pytest-django
[pytest-django][9] 为 Django 应用和项目添加了 `pytest` 支持。具体来说,`pytest-django` 引入了使用 pytest fixture 测试 Django 项目的能力,而省略了导入 `unittest` 和复制/粘贴其他样板测试代码的需要,并且比标准的 Django 测试套件运行得更快。
#### 8、django-test-plus
[django-test-plus][10] 并不是专门为 `pytest` 开发,但它现在支持 `pytest`。它包含自己的 `TestCase` 类,你的测试可以继承该类,让你能够用更少的击键完成常见的测试用例,例如检查特定的 HTTP 错误代码。
我们上面提到的库绝不是你扩展 `pytest` 的唯一选择,有用的 pytest 插件还有很多。查看 [pytest 插件兼容性][11]页面来自行探索。你最喜欢哪些插件?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/pytest-plugins
作者:[Jeff Triplett][a1], [Lacery Williams Henschel][a2]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://opensource.com/users/jefftriplett
[a2]:https://opensource.com/users/laceynwilliams
[1]:https://docs.pytest.org/en/latest/
[2]:https://docs.pytest.org/en/latest/usage.html
[3]:https://github.com/Frozenball/pytest-sugar
[4]:https://github.com/pytest-dev/pytest-cov
[5]:https://github.com/anapaulagomes/pytest-picked
[6]:https://github.com/pytest-dev/pytest-instafail
[7]:https://github.com/freakboy3742/pytest-tldr
[8]:https://github.com/pytest-dev/pytest-xdist
[9]:https://pytest-django.readthedocs.io/en/latest/
[10]:https://django-test-plus.readthedocs.io/en/latest/
[11]:https://plugincompat.herokuapp.com/


@ -1,24 +1,26 @@
Dropbox 在 Linux 上终止除了 Ext4 之外所有文件系统的同步支持
Dropbox 在 Linux 上终止除了 Ext4 之外所有文件系统的同步支持
======
Dropbox 正考虑将同步支持限制为少数几种文件系统类型Windows 的 NTFS、macOS 的 HFS+/APFS 和 Linux 的 Ext4。
> Dropbox 正考虑将同步支持限制为少数几种文件系统类型Windows 的 NTFS、macOS 的 HFS+/APFS 和 Linux 的 Ext4。
![Dropbox ends support for various file system types][1]
[Dropbox][2] 是最受欢迎的[ Linux 中的云服务][3]之一。很多人正好使用的是 Linux 下的 Dropbox 同步客户端。但是,最近,一些用户在他们的 Dropbox Linux 桌面客户端上收到一条警告说:
[Dropbox][2] 是最受欢迎的 [Linux 中的云服务][3]之一。很多人都在使用 Linux 下的 Dropbox 同步客户端。但是,最近,一些用户在他们的 Dropbox Linux 桌面客户端上收到一条警告说:
> “移动 Dropbox 文件夹位置,
> Dropbox 将在 11 月停止同步”
### Dropbox 将仅支持少量文件系统
一个[ Reddit 主题][4]高亮了一位用户在[ Dropbox 论坛][5]上查询了该消息后的公告,该消息是社区管理员带来的意外消息。这是[回复][6]中的内容:
一个 [Reddit 主题][4]强调了一位用户在 [Dropbox 论坛][5]上查询了该消息后的公告,该消息被社区管理员标记为意外新闻。这是[回复][6]中的内容:
> **“大家好,在 2018 年 11 月 7 日,我们会结束 Dropbox 在某些不常见文件系统的同步支持。支持的文件系统是 Windows 的 NTFS、macOS 的 HFS+ 或 APFS以及Linux 的 Ext4。**
> “大家好,在 2018 年 11 月 7 日,我们会结束 Dropbox 在某些不常见文件系统的同步支持。支持的文件系统是 Windows 的 NTFS、macOS 的 HFS+ 或 APFS以及 Linux 的 Ext4。”
>
> [Dropbox 官方论坛][6]
![Dropbox official confirmation over limitation on supported file systems][7]
Dropbox 官方确认支持文件系统的限制
*Dropbox 官方确认支持文件系统的限制*
此举旨在提供稳定和一致的体验。Dropbox 还更新了其[桌面要求][8]。
@ -31,11 +33,12 @@ Linux 仅支持 Ext4 文件系统。但这并不是一个令人担忧的新闻
在 Ubuntu 或其他基于 Ubuntu 的发行版上,打开磁盘应用并查看 Linux 系统所在分区的文件系统。
![Check file system type on Ubuntu][9]
检查 Ubuntu 上的文件系统类型
*检查 Ubuntu 上的文件系统类型*
如果你的系统上没有安装磁盘应用,那么可以[使用命令行了解文件系统类型][10]。
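例如,下面两条命令都可以快速确认某个目录所在分区的文件系统类型(这里以家目录为例,你也可以换成 Dropbox 文件夹所在的路径):

```
# 查看家目录所在文件系统的类型,输出的 Type 列为 ext4 即可放心
df -T "$HOME"

# 或者用 findmnt 只输出文件系统类型
findmnt -n -o FSTYPE --target "$HOME"
```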
如果你使用的是 Ext4 文件系统并仍然收到来自 Dropbox 的警告,请检查你是否有可能收到通知的非活动计算机/设备。如果是,[将该系统与你的 Dropbox 帐户取消接][11]。
如果你使用的是 Ext4 文件系统并仍然收到来自 Dropbox 的警告,请检查你是否有可能收到通知的非活动计算机/设备。如果是,[将该系统与你的 Dropbox 帐户取消链接][11]。
### Dropbox 也不支持加密的 Ext4 吗?
@ -51,21 +54,21 @@ via: https://itsfoss.com/dropbox-linux-ext4-only/
作者:[Ankush Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/dropbox-filesystem-support-featured.png
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/08/dropbox-filesystem-support-featured.png?w=800&ssl=1
[2]: https://www.dropbox.com/
[3]: https://itsfoss.com/cloud-services-linux/
[4]: https://www.reddit.com/r/linux/comments/966xt0/linux_dropbox_client_will_stop_syncing_on_any/
[5]: https://www.dropboxforum.com/t5/Syncing-and-uploads/
[6]: https://www.dropboxforum.com/t5/Syncing-and-uploads/Linux-Dropbox-client-warn-me-that-it-ll-stop-syncing-in-Nov-why/m-p/290065/highlight/true#M42255
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/dropbox-stopping-file-system-supports.jpeg
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/dropbox-stopping-file-system-supports.jpeg?w=800&ssl=1
[8]: https://www.dropbox.com/help/desktop-web/system-requirements#desktop
[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/check-file-system-type-ubuntu.jpg
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/check-file-system-type-ubuntu.jpg?w=800&ssl=1
[10]: https://www.thegeekstuff.com/2011/04/identify-file-system-type/
[11]: https://www.dropbox.com/help/mobile/unlink-relink-computer-mobile
[12]: https://www.dropbox.com/help/desktop-web/cant-establish-secure-connection#location


@ -0,0 +1,100 @@
顶级 Linux 开发者推荐的编程书籍
======
![](https://www.hpe.com/content/dam/hpe/insights/articles/2018/08/top-linux-developers-recommended-programming-books/featuredStory/Top-recommended-programming-books.jpg)
> 毫无疑问Linux 是由那些拥有深厚计算机知识背景而且才华横溢的程序员发明的。让那些大名鼎鼎的 Linux 程序员向如今的开发者分享一些曾经带领他们登堂入室的好书和技术参考资料吧,你会不会也读过其中几本呢?
Linux 毫无争议地属于 21 世纪的操作系统。虽然 Linus Torvalds 在建立开源社区这件事上做了很多工作和社区决策,不过那些网络专家和开发者愿意接受 Linux 的原因还是因为它卓越的代码质量和高可用性。Torvalds 是个编程天才,同时必须承认他还是得到了很多其他同样极具才华的开发者的无私帮助。
就此我咨询了 Torvalds 和其他一些顶级 Linux 开发者,有哪些书籍帮助他们走上了成为顶级开发者的道路,下面请听我一一道来。
### 熠熠生辉的 C 语言
Linux 是在大约上世纪 90 年代开发出来的,与它一起问世的还有其他一些完成基础功能的开源软件。与此相应,那时的开发者使用的工具和语言反映了那个时代的印记,也就是说 C 语言。可能 [C 语言不再流行了][1]可对于很多已经建功立业的开发者来说C 语言是他们的第一个在实际开发中使用的语言,这一点也在他们推选的对他们有着深远影响的书单中反映出来。
Torvalds 说,“你不应该再选用我那个时代使用的语言或者开发方式”,他的开发道路始于 BASIC然后转向机器码“甚至都不是汇编语言而是真真正正的二进制机器码”他解释道再然后转向汇编语言和 C 语言。
“任何人都不应该再从这些语言开始进入开发这条路了”,他补充道。“这些语言中的一些今天已经没有什么意义(如 BASIC 和机器语言)。尽管 C 还是一个主流语言,我也不推荐你从它开始。”
并不是他不喜欢 C。不管怎样Linux 是用 [GNU C 语言][2]写就的。“我始终认为 C 是一个伟大的语言,它有着非常简单的语法,对于很多方向的开发都很合适,但是我怀疑你会遇到重重挫折:从你的第一个 Hello World 程序开始,到你真正能开发出能用的东西,当中有很大一步要走”。他认为,用现在的标准,如果作为入门语言的话,从 C 语言开始的代价太大。
在他那个时代Torvalds 的唯一选择的书就只能是 Brian W. Kernighan 和 Dennis M. Ritchie 合著的《<ruby>[C 编程语言,第二版][3]<rt>C Programming Language, 2nd Edition</rt></ruby>》,它在编程圈内也被尊称为 K&R。“这本书简单精炼但是你要先有编程的背景才能欣赏它”Torvalds 说到。
Torvalds 并不是唯一一个推荐 K&R 的开源开发者。以下几位也同样引用了这本他们认为值得推荐的书籍他们有Linux 和 Oracle 虚拟化开发副总裁 Wim CoekaertsLinux 开发者 Alan CoxGoogle 云 CTO Brian StevensCanonical 技术运营部副总裁 Pete Graner。
如果你今日还想同 C 语言较量一番的话Samba 的共同创始人 Jeremy Allison 推荐《<ruby>[C 程序设计新思维][4]<rt>21st Century C: C Tips from the New School</rt></ruby>》。他还建议,同时也去阅读一本比较旧但是写的更详细的《<ruby>[C 专家编程][5]<rt>Expert C Programming: Deep C Secrets</rt></ruby>》和有着 20 年历史的《<ruby>[POSIX 多线程编程][6]<rt>Programming with POSIX Threads</rt></ruby>》。
### 如果不选 C 语言, 那选什么?
Linux 开发者推荐的书籍自然都是他们认为适合今时今日的开发项目的语言工具。这也折射了开发者自身的个人偏好。例如Allison 认为年轻的开发者应该在《<ruby>[Go 编程语言][7]<rt>The Go Programming Language</rt></ruby>》和《<ruby>[Rust 编程][8]<rt>Rust with Programming Rust</rt></ruby>》的帮助下去学习 Go 语言和 Rust 语言。
但是超越编程语言来考虑问题也不无道理(尽管这些书传授了你编程技巧)。今日要做些有意义的开发工作的话,"要从那些已经完成了 99% 显而易见工作的框架开始,然后你就能围绕着它开始写脚本了" Torvalds 推荐了这种做法。
“坦率来说,语言本身远远没有围绕着它的基础架构重要”,他继续道,“可能你会从 Java 或者 Kotlin 开始,但那是因为你想为自己的手机开发一个应用,因此安卓 SDK 成为了最佳的选择,又或者,你对游戏开发感兴趣,你选择了一个游戏开发引擎来开始,而通常它们有着自己的脚本语言”。
这里提及的基础架构包括那些和操作系统本身相关的编程书籍。
Garner 在读完了大名鼎鼎的 K&R 后又拜读了 W. Richard Steven 的《<ruby>[Unix 网络编程][10]<rt>Unix Network Programming</rt></ruby>》。特别是Steven 的《<ruby>[TCP/IP 详解卷1协议][11]<rt>TCP/IP Illustrated, Volume 1: The Protocols</rt></ruby>》在出版了 30 年之后仍然被认为是必读之书。因为 Linux 开发很大程度上和[和网络基础架构有关][12]Garner 也推荐了很多 O'Reilly 在 [Sendmail][13]、[Bash][14]、[DNS][15] 以及 [IMAP/POP][16] 等方面的书。
Coekaerts 也是 Maurice Bach 的《<ruby>[UNIX 操作系统设计][17]<rt>The Design of the Unix Operation System</rt></ruby>》的书迷之一。James Bottomley 也是这本书的推崇者,作为一个 Linux 内核开发者,当 Linux 刚刚问世时 James 就用 Bach 的这本书所传授的知识将它研究了个底朝天。
### 软件设计知识永不过时
尽管这样说有点太局限在技术领域。Stevens 还是说到,“所有的开发者都应该在开始钻研语法前先研究如何设计,《<ruby>[设计心理学][18]<rt>The Design of Everyday Things</rt></ruby>》是我的最爱”。
Coekaerts 喜欢 Kernighan 和 Rob Pike 合著的《<ruby>[程序设计实践][19]<rt>The Practice of Programming</rt></ruby>》。这本关于设计实践的书在 Coekaerts 还在学校念书的时候还未出版,他说道,“但是我把它推荐给每一个人”。
不管何时当你问一个长期从事于开发工作的开发者他最喜欢的计算机书籍时你迟早会听到一个名字和一本书Donald Knuth 和他所著的《<ruby>[计算机程序设计艺术1-4A][20]<rt>The Art of Computer Programming, Volumes 1-4A</rt></ruby>》。VMware 首席开源官 Dirk Hohndel 认为这本书尽管有永恒的价值但他也承认“今时今日并非极其有用”。LCTT 译注:不代表译者观点)
### 读代码。大量的读。
编程书籍能教会你很多,也请别错过另外一个在开源社区特有的学习机会:《<ruby>[代码阅读方法与实践][21]<rt>Code Reading: The Open Source Perspective</rt></ruby>》。那里有不可计数的代码例子阐述如何解决编程问题以及如何让你陷入麻烦……。Stevens 说,谈到磨炼编程技巧,在他的书单里排名第一的“书”是 Unix 的源代码。
“也请不要忽略从他人身上学习的各种机会。” Cox 道,“我是在一个计算机俱乐部里和其他人一起学的 BASIC在我看来这仍然是一个学习的最好办法”他从《<ruby>[精通 ZX81 机器码][22]<rt>Mastering machine code on your ZX81</rt></ruby>》这本书和 Honeywell L66 B 编译器手册里学习到了如何编写机器码,但是学习技术这点来说,单纯阅读和与其他开发者在工作中共同学习仍然有着很大的不同。
Cox 说,“我始终认为最好的学习方法是和一群人一起试图去解决你们共同关心的一些问题并从中找到快乐,这和你是 5 岁还是 55 岁无关”。
最让我吃惊的是这些顶级 Linux 开发者都是在非常底层级别开始他们的开发之旅的,甚至不是从汇编语言或 C 语言,而是从机器码开始开发。毫无疑问,这对帮助开发者理解计算机在非常微观的底层级别是怎么工作的起了非常大的作用。
那么现在你准备好尝试一下硬核 Linux 开发了吗Greg Kroah-Hartman这位 Linux 内核稳定分支的维护者,推荐了 Steve Oualline 的《<ruby>[实用 C 语言编程][23]<rt>Practical C Programming</rt></ruby>》和 Samuel Harbison 与 Guy Steele 合著的《<ruby>[C 语言参考手册][24]<rt>C: A Reference Manual</rt></ruby>》。接下来请阅读<ruby>[如何进行 Linux 内核开发][25]<rt>HOWTO do Linux kernel development</rt></ruby>,到这时,就像 Kroah-Hartman 所说,你已经准备好启程了。
与此同时,还请你刻苦学习并大量编码,最后祝你在跟随顶级 Linux 开发者脚步的道路上好运相随。
--------------------------------------------------------------------------------
via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html
作者:[Steven Vaughan-Nichols][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/DavidChenLiang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/
[2]:https://www.gnu.org/software/gnu-c-manual/
[3]:https://amzn.to/2nhyjEO
[4]:https://amzn.to/2vsL8k9
[5]:https://amzn.to/2KBbWn9
[6]:https://amzn.to/2M0rfeR
[7]:https://amzn.to/2nhyrnMe
[8]:http://shop.oreilly.com/product/0636920040385.do
[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818
[10]:https://amzn.to/2MfpbyC
[11]:https://amzn.to/2MpgrTn
[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html
[13]:http://shop.oreilly.com/product/9780596510299.do
[14]:http://shop.oreilly.com/product/9780596009656.do
[15]:http://shop.oreilly.com/product/9780596100575.do
[16]:http://shop.oreilly.com/product/9780596000127.do
[17]:https://amzn.to/2vsCJgF
[18]:https://amzn.to/2APzt3Z
[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US
[20]:https://amzn.to/2OknFsJ
[21]:https://amzn.to/2M4VVL3
[22]:https://amzn.to/2OjccJA
[23]:http://shop.oreilly.com/product/9781565923065.do
[24]:https://amzn.to/2OjzgrT
[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html


@ -0,0 +1,98 @@
容器技术对 DevOps 的一些启发
======
> 容器技术的使用支撑了目前 DevOps 三大主要实践:工作流、及时反馈、持续学习。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf)
有人说容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 设计理念的流行,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了[支撑 DevOps 的三大主要实践][1]:工作流、及时反馈、持续学习。
### 工作流
#### 容器中的工作流
每个容器都可以看成一个独立的运行环境,对于容器内部,不需要考虑外部的宿主环境、集群环境,以及其它基础设施。在容器内部,每个功能看起来都是以传统的方式运行。从外部来看,容器内运行的应用一般作为整个应用系统架构的一部分:比如 web API、web app 用户界面、数据库、任务执行、缓存系统、垃圾回收等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。
#### 现实中的工作流
那些跟“容器”一样业务功能独立的团队,也可以借鉴这种容器思维。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造 [《杰森一家》中的斯贝斯利太空飞轮][2] 等),还是技术中的工作流(开发、测试、运维、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用这种线性的工作流有效降低了工作耦合性。
#### DevOps 中的工作流
DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及问题,并最终修复它们。
> 践行这样的工作流后,可以避免将一个已知缺陷带到工作流的下游,避免局部优化导致可能的全局性能下降,要不断探索如何优化工作流,持续加深对于系统的理解。
> —— Gene Kim《[支撑 DevOps 的三个实践][3]》IT 革命2017.4.25
### 反馈
#### 容器中的反馈
除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的状态。比如目前[流行的][5] [Prometheus][4],可以用来收集容器和容器集群中相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但同时也带来了不透明的特性,这时,从中快速收集信息来解决其内部出现的问题就显得尤为重要了。
#### 现实中的反馈
在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速地定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队被淹没在大量不相关的事件时,那些真正需要快速反馈的重要信息很容易被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快地意识到传送带太快了那么制作出的巧克力可能就没什么问题了尽管这样就不那么搞笑了LCTT 译注:露西和埃塞尔是上世纪 50 年代的著名黑白情景喜剧《我爱露西》中的主角)
#### DevOps 中的反馈
DevOps 中的第二条原则就是快速收集所有相关的有用信息这样在问题影响到其它开发流程之前就可以被识别出。DevOps 团队应该努力去“优化下游”,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时地响应。
> 快速的反馈对于提高技术的质量、可用性、安全性至关重要。
> —— Gene Kim 等人,《DevOps 手册:如何在技术组织中创造世界级的敏捷性、可靠性和安全性》IT 革命2016
### 持续学习
#### 容器中的持续学习
践行第三条原则“持续学习”是一个不小的挑战。在不需要掌握太多边缘的或难以理解的东西的情况下,容器技术让我们的开发工程师和运营团队依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录和分享。
#### 现实中的持续学习
举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被安排对一个运行着某个大学核心 IT 部门网站的 Apache 虚拟主机配置进行更改。由于没有方便的测试环境,我直接在生产站点上修改配置,当时觉得配置没问题就发布了,几分钟后,我无意中听到了隔壁同事说:
“等会,网站挂了?”
“没错,怎么回事?”
很多人蒙圈了……
在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 —— 我老板的老板的老板 —— 来到我的工位询问发生了什么事。“别担心,”她告诉我。“我们不会责怪你,这是一个错误,现在你已经学会了。”
而在容器中,这种情形在我的笔记本上就很容易测试了,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。
#### DevOps 中的持续学习
持续学习文化的一部分是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是成长而不是惩罚,所以不要畏惧失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。
随着系统越来越被细分,我们更需要将注意力集中在具体的点上:上面提到的两条原则主要关注整体流程,而持续学习关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。
> 实验和冒险让我们能够不懈地改进我们的工作,但也要求我们尝试之前未用过的工作方式。
> —— Gene Kim 等人,《[凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7]》IT 革命2013
### 容器技术带给 DevOps 的启迪
有效地应用容器技术可以学习 DevOps 的三条原则:工作流,反馈以及持续学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破隔阂,并作为一个团队工作,以提升整体表现和深度了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈机制,以便在问题发生影响之前发现问题。最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/containers-can-teach-us-devops
作者:[Chris Hermansen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[littleji](https://github.com/littleji)
校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/
[2]: https://en.wikipedia.org/wiki/The_Jetsons
[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops
[4]: https://prometheus.io/
[5]: https://opensource.com/article/18/9/prometheus-operational-advantage
[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI
[7]: https://itrevolution.com/book/the-phoenix-project/


@ -1,13 +1,13 @@
服务器的 LinuxBoot告别 UEFI、拥抱开源
============================================================
============
[LinuxBoot][13] 是私有的 [UEFI][15] 固件的 [替代者][14]。它发布于去年并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。
[LinuxBoot][13] 是私有的 [UEFI][15] 固件的开源 [替代品][14]。它发布于去年并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。
这个项目最初是由 Ron Minnich 在 2017 年 1 月提出,它是 LinuxBIOS 的创造人,并且在 Google 领导 [coreboot][17] 的工作。
Google、Facebook、[Horizon 计算解决方案][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。
Google、Facebook、[Horizon Computing Solutions][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。
它开放允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的[运行时][22] 和用他们自己的密钥去 [刷入固件][23]。他们不需要等待供应商的更新。
开放允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的 [运行时环境][22] 和用他们自己的密钥去 [刷入固件][23],而不需要等待供应商的更新。
下面是第一次使用 NERF BIOS 去引导 [Ubuntu Xenial][24] 的视频:
@ -17,40 +17,36 @@ Google、Facebook、[Horizon 计算解决方案][18]、和 [Two Sigma][19] 共
### LinuxBoot 超越 UEFI 的优势
![LinuxBoot vs UEFI](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/linuxboot-uefi.png)
![LinuxBoot vs UEFI](https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/linuxboot-uefi.png?w=800&ssl=1)
下面是一些 LinuxBoot 超越 UEFI 的主要优势:
### 启动速度显著加快
#### 启动速度显著加快
它能在 20 秒钟以内完成服务器启动,而 UEFI 需要几分钟的时间。
### 显著的灵活性
#### 显著的灵活性
LinuxBoot 可以用在各种设备、文件系统和 Linux 支持的协议上。
LinuxBoot 可以用在 Linux 支持的各种设备、文件系统和协议上。
### 更加安全
#### 更加安全
相比 UEFI 而言LinuxBoot 在设备驱动程序和文件系统方面进行更加严格的检查。
我们可能主张 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。
我们可能争辩说 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。
其它方面LinuxBoot 有非常少的二进制文件,它仅用了大约一百多 KB相比而言UEFI 的二进制文件有 32 MB。
另一方面LinuxBoot 有非常小的二进制文件,它仅用了大约几百 KB相比而言UEFI 的二进制文件有 32 MB。
严格来说LinuxBoot 与 UEFI 不一样,更适合于[可信计算基础][28]。
[建议阅读 Linux 上最好的自由开源的 Adobe 产品的替代者][29]
LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动 Windows/非 Linux 内核,但这影响并不大,因为主流的云都是基于 Linux 的服务器。
### LinuxBoot 的采用者
自 2011 年, [Facebook][32] 发起了[开源计算项目][31],它的一些服务器是基于[开源][33]设计的目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试:
自 2011 年, [Facebook][32] 发起了[开源计算项目OCP][31],它的一些服务器是基于[开源][33]设计的目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试:
* Winterfell
* Leopard
* Tioga Pass
更多 [OCP][34] 硬件在[这里][35]有一个简短的描述。OCP 基金会通过[开源系统固件][36]运行一个专门的固件项目。
@ -58,9 +54,7 @@ LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动
支持 LinuxBoot 的其它一些设备有:
* [QEMU][9] 仿真的 [Q35][10] 系统
* [Intel S2600wf][11]
* [Dell R630][12]
上个月底2018 年 9 月 24 日),[Equus 计算解决方案][37] [宣布][38] 发行它的 [白盒开放式™][39] M2660 和 M2760 服务器,作为它们的定制的、成本优化的、开放硬件服务器和存储平台的一部分。它们都支持 LinuxBoot 灵活定制服务器的 BIOS以提升安全性和设计一个非常快的纯净的引导体验。
@ -73,10 +67,10 @@ LinuxBoot 在 [GitHub][40] 上有很丰富的文档。你喜欢它与 UEFI 不
via: https://itsfoss.com/linuxboot-uefi/
作者:[ Avimanyu Bandyopadhyay][a]
作者:[Avimanyu Bandyopadhyay][a]
选题:[oska874][b]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,184 @@
Lisp 是怎么成为上帝的编程语言的
======
当程序员们谈论各类编程语言的相对优势时,他们通常会采用相当平淡的措词,就好像这些语言是一条工具带上的各种工具似的 —— 有适合写操作系统的,也有适合把其它程序黏在一起来完成特殊工作的。这种讨论方式非常合理;不同语言的能力不同。不声明特定用途就声称某门语言比其他语言更优秀只能导致侮辱性的无用争论。
但有一门语言似乎受到和用途无关的特殊尊敬:那就是 Lisp。即使是恨不得给每个说出形如“某某语言比其他所有语言都好”这类话的人都来一拳的键盘远征军们也会承认 Lisp 处于另一个层次。 Lisp 超越了用于评判其他语言的实用主义标准,因为普通程序员并不使用 Lisp 编写实用的程序 —— 而且,多半他们永远也不会这么做。然而,人们对 Lisp 的敬意是如此深厚,甚至于到了这门语言会时而被加上神话属性的程度。
大家都喜欢的网络漫画合集 xkcd 就至少在两组漫画中如此描绘过 Lisp[其中一组漫画][1]中,某人得到了某种 Lisp 启示,而这好像使他理解了宇宙的基本构架。
![](https://imgs.xkcd.com/comics/lisp.jpg)
在[另一组漫画][2]中,一个穿着长袍的老程序员给他的徒弟递了一沓圆括号,说这是“文明时代的优雅武器”,暗示着 Lisp 就像原力那样拥有各式各样的神秘力量。
![](https://imgs.xkcd.com/comics/lisp_cycles.png)
另一个绝佳例子是 Bob Kanefsky 的滑稽剧插曲,《上帝就在人间》。这部剧叫做《永恒之火》,撰写于 1990 年代中期;剧中描述了上帝必然是使用 Lisp 创造世界的种种原因。完整的歌词可以在 [GNU 幽默合集][3]中找到,如下是一段摘抄:
> 因为上帝用祂的 Lisp 代码
> 让树叶充满绿意。
> 分形的花儿和递归的根:
> 我见过的奇技淫巧之中没什么比这更可爱。
> 当我对着雪花深思时,
> 从未见过两片相同的,
> 我知道,上帝偏爱那一门
> 名字是四个字母的语言。
以下这句话我实在不好在人前说;不过,我还是觉得,这样一种 “Lisp 是奥术魔法”的文化模因实在是有史以来最奇异、最迷人的东西。Lisp 是象牙塔的产物,是人工智能研究的工具;因此,它对于编程界的俗人而言总是陌生的,甚至是带有神秘色彩的。然而,当今的程序员们[开始怂恿彼此,“在你死掉之前至少试一试 Lisp”][4],就像这是一种令人恍惚入迷的致幻剂似的。尽管 Lisp 是广泛使用的编程语言中第二古老的(只比 Fortran 年轻一岁)[^1] ,程序员们也仍旧在互相怂恿。想象一下,如果你的工作是为某种组织或者团队推广一门新的编程语言的话,忽悠大家让他们相信你的新语言拥有神力难道不是绝佳的策略吗?—— 但你如何能够做到这一点呢?或者,换句话说,一门编程语言究竟是如何变成人们口中“隐晦知识的载体”的呢?
Lisp 究竟是怎么成为这样的?
![Byte 杂志封面,1979年八月。][5]
*Byte 杂志封面1979年八月。*
### 理论 A :公理般的语言
Lisp 的创造者<ruby>约翰·麦卡锡<rt>John McCarthy</rt></ruby>最初并没有想过把 Lisp 做成优雅、精炼的计算法则结晶。然而在一两次运气使然的深谋远虑和一系列优化之后Lisp 的确变成了那样的东西。 <ruby>保罗·格雷厄姆<rt>Paul Graham</rt></ruby>(我们一会儿之后才会聊到他)曾经这么写道, 麦卡锡通过 Lisp “为编程作出的贡献就像是欧几里得对几何学所做的贡献一般” [^2]。人们可能会在 Lisp 中看出更加隐晦的含义 —— 因为麦卡锡创造 Lisp 时使用的要素实在是过于基础,基础到连弄明白他到底是创造了这门语言、还是发现了这门语言,都是一件难事。
最初,麦卡锡产生要造一门语言的想法,是在 1956 年的<ruby>达特茅斯人工智能夏季研究项目<rt>Dartmouth Summer Research Project on Artificial Intelligence</rt></ruby>上。夏季研究项目是个持续数周的学术会议,直到现在也仍旧在举行;它是此类会议之中最早开始举办的会议之一。麦卡锡当初还是个达特茅斯的数学助教,而“<ruby>人工智能<rt>artificial intelligence</rt></ruby>AI”这个词事实上就是他建议举办该会议时发明的 [^3]。在整个会议期间大概有十人参加 [^4]。他们之中包括了<ruby>艾伦·纽厄尔<rt>Allen Newell</rt></ruby>和<ruby>赫伯特·西蒙<rt>Herbert Simon</rt></ruby>,两名隶属于<ruby>兰德公司<rt>RAND Corporation</rt></ruby>和<ruby>卡内基梅隆大学<rt>Carnegie Mellon</rt></ruby>的学者。这两人不久之前设计了一门语言,叫做 IPL。
当时,纽厄尔和西蒙正试图制作一套能够在命题演算中生成证明的系统。两人意识到,用电脑的原生指令集编写这套系统会非常困难;于是他们决定创造一门语言——他们的原话是“<ruby>伪代码<rt>pseudo-code</rt></ruby>”,这样,他们就能更加轻松自然地表达这台“<ruby>逻辑理论机器<rt>Logic Theory Machine</rt></ruby>”的底层逻辑了 [^5]。这门语言叫做 IPL即“<ruby>信息处理语言<rt>Information Processing Language</rt></ruby>”;比起我们现在认知中的编程语言,它更像是一种高层次的汇编语言方言。 纽厄尔和西蒙提到,当时人们开发的其它“伪代码”都抓着标准数学符号不放 —— 也许他们指的是 Fortran [^6];与此不同的是,他们的语言使用成组的符号方程来表示命题演算中的语句。通常,用 IPL 写出来的程序会调用一系列的汇编语言宏,以此在这些符号方程列表中对表达式进行变换和求值。
麦卡锡认为,一门实用的编程语言应该像 Fortran 那样使用代数表达式;因此,他并不怎么喜欢 IPL [^7]。然而,他也认为,在给人工智能领域的一些问题建模时,符号列表会是非常好用的工具 —— 而且在那些涉及演绎的问题上尤其有用。麦卡锡的渴望最终被诉诸行动;他要创造一门代数的列表处理语言 —— 这门语言会像 Fortran 一样使用代数表达式,但拥有和 IPL 一样的符号列表处理能力。
当然,今日的 Lisp 可不像 Fortran。在会议之后的几年中麦卡锡关于“理想的列表处理语言”的见解似乎在逐渐演化。到 1957 年,他的想法发生了改变。他那时候正在用 Fortran 编写一个能下国际象棋的程序;越是长时间地使用 Fortran ,麦卡锡就越确信其设计中存在不当之处,而最大的问题就是尴尬的 `IF` 声明 [^8]。为此,他发明了一个替代品,即条件表达式 `true`;这个表达式会在给定的测试通过时返回子表达式 `A` ,而在测试未通过时返回子表达式 `B` *而且*,它只会对返回的子表达式进行求值。在 1958 年夏天,当麦卡锡设计一个能够求导的程序时,他意识到,他发明的 `true` 条件表达式让编写递归函数这件事变得更加简单自然了 [^9]。也是这个求导问题让麦卡锡创造了 `maplist` 函数;这个函数会将其它函数作为参数并将之作用于指定列表的所有元素 [^10]。在给项数多得叫人抓狂的多项式求导时,它尤其有用。
然而,以上的所有这些,在 Fortran 中都是没有的;因此,在 1958 年的秋天,麦卡锡请来了一群学生来实现 Lisp。因为他那时已经成了一名麻省理工助教所以这些学生可都是麻省理工的学生。当麦卡锡和学生们最终将他的主意变为能运行的代码时这门语言得到了进一步的简化。这之中最大的改变涉及了 Lisp 的语法本身。最初,麦卡锡在设计语言时,曾经试图加入所谓的 “M 表达式”;这是一层语法糖,能让 Lisp 的语法变得类似于 Fortran。虽然 M 表达式可以被翻译为 S 表达式 —— 基础的、“用圆括号括起来的列表”,也就是 Lisp 最著名的特征 —— 但 S 表达式事实上是一种给机器看的低阶表达方法。唯一的问题是,麦卡锡用方括号标记 M 表达式,但他的团队在麻省理工使用的 IBM 026 键盘打孔机的键盘上根本没有方括号 [^11]。于是 Lisp 团队坚定不移地使用着 S 表达式,不仅用它们表示数据列表,也拿它们来表达函数的应用。麦卡锡和他的学生们还作了另外几样改进,包括将数学符号前置;他们也修改了内存模型,这样 Lisp 实质上就只有一种数据类型了 [^12]。
到 1960 年,麦卡锡发表了他关于 Lisp 的著名论文《用符号方程表示的递归函数及它们的机器计算》。那时候Lisp 已经被极大地精简,而这让麦卡锡意识到,他的作品其实是“一套优雅的数学系统”,而非普通的编程语言 [^13]。他后来这么写道,对 Lisp 的许多简化使其“成了一种描述可计算函数的方式,而且它比图灵机或者一般情况下用于递归函数理论的递归定义更加简洁” [^14]。在他的论文中,他不仅使用 Lisp 作为编程语言,也将它当作一套用于研究递归函数行为方式的表达方法。
通过“从一小撮规则中逐步实现出 Lisp”的方式麦卡锡将这门语言介绍给了他的读者。后来保罗·格雷厄姆在短文《<ruby>[Lisp 之根][6]<rt>The Roots of Lisp</rt></ruby>》中用更易读的语言回顾了麦卡锡的步骤。格雷厄姆只用了七种原始运算符、两种函数写法,以及使用原始运算符定义的六个稍微高级一点的函数来解释 Lisp。毫无疑问Lisp 的这种只需使用极少量的基本规则就能完整说明的特点加深了其神秘色彩。格雷厄姆称麦卡锡的论文为“使计算公理化”的一种尝试 [^15]。我认为,在思考 Lisp 的魅力从何而来时,这是一个极好的切入点。其它编程语言都有明显的人工构造痕迹,表现为 `While``typedef``public static void` 这样的关键词;而 Lisp 的设计却简直像是纯粹计算逻辑的鬼斧神工。Lisp 的这一性质,以及它和晦涩难懂的“递归函数理论”的密切关系,使它具备了获得如今声望的充分理由。
### 理论 B属于未来的机器
Lisp 诞生二十年后,它成了著名的《<ruby>[黑客词典][7]<rt>Hackers Dictionary</rt></ruby>》中所说的人工智能研究的“母语”。Lisp 在此之前传播迅速,多半是托了语法规律的福 —— 不管在怎么样的电脑上,实现 Lisp 都是一件相对简单直白的事。而学者们之后坚持使用它乃是因为 Lisp 在处理符号表达式这方面有巨大的优势;在那个时代,人工智能很大程度上就意味着符号,于是这一点就显得十分重要。在许多重要的人工智能项目中都能见到 Lisp 的身影。这些项目包括了 [SHRDLU 自然语言程序][8]、[Macsyma 代数系统][9] 和 [ACL2 逻辑系统][10]。
然而,在 1970 年代中期人工智能研究者们的电脑算力开始不够用了。PDP-10 就是一个典型。这个型号在人工智能学界曾经极受欢迎;但面对这些用 Lisp 写的 AI 程序,它的 18 位地址空间一天比一天显得吃紧 [^16]。许多的 AI 程序在设计上可以与人互动。要让这些既极度要求硬件性能、又有互动功能的程序在分时系统上优秀发挥,是很有挑战性的。麻省理工的<ruby>彼得·杜奇<rt>Peter Deutsch</rt></ruby>给出了解决方案:那就是针对 Lisp 程序来特别设计电脑。就像是我那[关于 Chaosnet 的上一篇文章][11]所说的那样,这些<ruby>Lisp 计算机<rt>Lisp machines</rt></ruby>会给每个用户都专门分配一个为 Lisp 特别优化的处理器。到后来,考虑到硬核 Lisp 程序员的需求,这些计算机甚至还配备上了完全由 Lisp 编写的开发环境。在当时那样一个小型机时代已至尾声而微型机的繁盛尚未完全到来的尴尬时期Lisp 计算机就是编程精英们的“高性能个人电脑”。
有那么一会儿Lisp 计算机被当成是未来趋势。好几家公司雨后春笋般出现,追着赶着要把这项技术商业化。其中最成功的一家叫做 Symbolics由麻省理工 AI 实验室的前成员创立。上世纪八十年代,这家公司生产了所谓的 3600 系列计算机,它们当时在 AI 领域和需要高性能计算的产业中应用极广。3600 系列配备了大屏幕、位图显示、鼠标接口,以及[强大的图形与动画软件][12]。它们都是惊人的机器,能让惊人的程序运行起来。例如,之前在推特上跟我聊过的机器人研究者 Bob Culley就能用一台 1985 年生产的 Symbolics 3650 写出带有图形演示的寻路算法。他向我解释说,在 1980 年代,位图显示和面向对象编程(能够通过 [Flavors 扩展][13]在 Lisp 计算机上使用都刚刚出现。Symbolics 站在时代的最前沿。
![Bob Culley 的寻路程序。][14]
*Bob Culley 的寻路程序。*
而以上这一切导致 Symbolics 的计算机奇贵无比。在 1983 年,一台 Symbolics 3600 能卖 111,000 美金 [^16]。所以,绝大部分人只可能远远地赞叹 Lisp 计算机的威力和操作员们用 Lisp 编写程序的奇妙技术。不止他们赞叹,从 1979 年到 1980 年代末Byte 杂志曾经多次提到过 Lisp 和 Lisp 计算机。在 1979 年八月发行的、关于 Lisp 的一期特别杂志中,杂志编辑激情洋溢地写道,麻省理工正在开发的计算机配备了“大坨大坨的内存”和“先进的操作系统” [^17];他觉得,这些 Lisp 计算机的前途是如此光明,以至于它们的面世会让 1978 和 1977 年 —— 诞生了 Apple II、Commodore PET 和 TRS-80 的两年 —— 显得黯淡无光。五年之后,在 1985 年,一名 Byte 杂志撰稿人描述了为“复杂精巧、性能强悍的 Symbolics 3670”编写 Lisp 程序的体验,并力劝读者学习 Lisp称其为“绝大数人工智能工作者的语言选择”和将来的通用编程语言 [^18]。
我问过<ruby>保罗·麦克琼斯<rt>Paul McJones</rt></ruby>(他在<ruby>山景城<rt>Mountain View</rt></ruby>的<ruby>计算机历史博物馆<rt>Computer History Museum</rt></ruby>做了许多 Lisp 的[保护工作][15]),人们是什么时候开始将 Lisp 当作高维生物的赠礼一样谈论的呢他说这门语言自有的性质毋庸置疑地促进了这种现象的产生然而他也说Lisp 上世纪六七十年代在人工智能领域得到的广泛应用,很有可能也起到了作用。当 1980 年代到来、Lisp 计算机进入市场时,象牙塔外的某些人由此接触到了 Lisp 的能力,于是传说开始滋生。时至今日,很少有人还记得 Lisp 计算机和 Symbolics 公司;但 Lisp 得以在八十年代一直保持神秘,很大程度上要归功于它们。
### 理论 C学习编程
1985 年,两位麻省理工的教授,<ruby>哈尔·阿伯尔森<rt>Harold "Hal" Abelson</rt></ruby><ruby>杰拉尔德·瑟斯曼<rt>Gerald Sussman</rt></ruby>,外加瑟斯曼的妻子<ruby>朱莉·瑟斯曼<rt>Julie Sussman</rt></ruby>,出版了一本叫做《<ruby>计算机程序的构造和解释<rt>Structure and Interpretation of Computer Programs</rt></ruby>》的教科书。这本书用 Scheme一种 Lisp 方言)向读者们示范了如何编程。它被用于教授麻省理工入门编程课程长达二十年之久。出于直觉,我认为 SICP这本书的名字通常缩写为 SICP倍增了 Lisp 的“神秘要素”。SICP 使用 Lisp 描绘了深邃得几乎可以称之为哲学的编程理念。这些理念非常普适,可以用任意一种编程语言展现;但 SICP 的作者们选择了 Lisp。结果这本阴阳怪气、卓越不凡、吸引了好几代程序员还成了一种[奇特的模因][16]的著作臭名远扬之后Lisp 的声望也顺带被提升了。Lisp 已不仅仅是一如既往的“麦卡锡的优雅表达方式”;它现在还成了“向你传授编程的不传之秘的语言”。
SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日,这本书的古怪之处和 Lisp 的古怪之处是相辅相成的。书的封面就透着一股古怪。那上面画着一位朝着桌子走去,准备要施法的巫师或者炼金术士。他的一只手里抓着一副测径仪 —— 或者圆规另一只手上拿着个球上书“eval”和“apply”。他对面的女人指着桌子在背景中希腊字母 λ lambda漂浮在半空释放出光芒。
![SICP 封面上的画作][17]
*SICP 封面上的画作。*
说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 微积分之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。
然而这本书的内容通常并不比封面正常多少。SICP 跟你读过的所有计算机科学教科书都不同。在引言中,作者们表示,这本书不只教你怎么用 Lisp 编程 —— 它是关于“现象的三个焦点:人的心智、复数的计算机程序,和计算机”的作品 [^19]。在之后,他们对此进行了解释,描述了他们对如下观点的坚信:编程不该被当作是一种计算机科学的训练,而应该是“<ruby>程序性认识论<rt>procedural epistemology</rt></ruby>”的一种新表达方式 [^20]。程序是将那些偶然被送入计算机的思想组织起来的全新方法。这本书的第一章简明地介绍了 Lisp但是之后的绝大部分都在讲述更加抽象的概念。其中包括了对不同编程范式的讨论对于面向对象系统中“时间”和“一致性”的讨论在书中的某一处还有关于通信的基本限制可能会如何带来同步问题的讨论 —— 而这些基本限制在通信中就像是光速不变在相对论中一样关键 [^21]。都是些高深难懂的东西。
以上这些并不是说这是本糟糕的书;这本书其实棒极了。在我读过的所有作品中,这本书对于重要的编程理念的讨论是最为深刻的;那些理念我琢磨了很久,却一直无力用文字去表达。一本入门编程教科书能如此迅速地开始描述面向对象编程的根本缺陷,和函数式语言“将可变状态降到最少”的优点,实在是一件让人印象深刻的事。而这种描述之后变为了另一种震撼人心的讨论:某种(可能类似于今日的 [RxJS][18] 的流范式能如何同时具备两者的优秀特性。SICP 用和当初麦卡锡的 Lisp 论文相似的方式提纯出了高级程序设计的精华。你读完这本书之后,会立即想要将它推荐给你的程序员朋友们;如果他们找到这本书,看到了封面,但最终没有阅读的话,他们就只会记住长着动物腿的桌子上方那神秘的、根本的、给予魔法师特殊能力的、写着 eval/apply 的东西。话说回来,书上这两人的鞋子也让我印象颇深。
然而SICP 最重要的影响恐怕是,它将 Lisp 由一门怪语言提升成了必要的教学工具。在 SICP 面世之前,人们互相推荐 Lisp以学习这门语言为提升编程技巧的途径。1979 年的 Byte 杂志 Lisp 特刊印证了这一事实。之前提到的那位编辑不仅就麻省理工的新 Lisp 计算机大书特书还说Lisp 这门语言值得一学,因为它“代表了分析问题的另一种视角” [^22]。但 SICP 并未只把 Lisp 作为其它语言的陪衬来使用SICP 将其作为*入门*语言。这就暗含了一种论点那就是Lisp 是最能把握计算机编程基础的语言。可以认为,如今的程序员们彼此怂恿“在死掉之前至少试试 Lisp”的时候他们很大程度上是因为 SICP 才这么说的。毕竟,编程语言 [Brainfuck][19] 想必同样也提供了“分析问题的另一种视角”;但人们学习 Lisp 而非学习 Brainfuck那是因为他们知道前者的那种 Lisp 视角在二十年中都被看作是极其有用的,有用到麻省理工在给他们的本科生教其它语言之前,必然会先教 Lisp。
### Lisp 的回归
在 SICP 出版的同一年,<ruby>本贾尼·斯特劳斯特卢普<rt>Bjarne Stroustrup</rt></ruby>发布了 C++ 语言的首个版本它将面向对象编程带到了大众面前。几年之后Lisp 计算机的市场崩盘AI 寒冬开始了。在下一个十年的变革中, C++ 和后来的 Java 成了前途无量的语言,而 Lisp 被冷落,无人问津。
理所当然地,确定人们对 Lisp 重新燃起热情的具体时间并不可能;但这多半是保罗·格雷厄姆发表他那几篇声称 Lisp 是首选入门语言的短文之后的事了。保罗·格雷厄姆是 Y-Combinator 的联合创始人和《Hacker News》的创始者他这几篇短文有很大的影响力。例如在短文《<ruby>[胜于平庸][20]<rt>Beating the Averages</rt></ruby>》中,他声称 Lisp 宏使 Lisp 比其它语言更强。他说,因为他在自己创办的公司 Viaweb 中使用 Lisp他得以比竞争对手更快地推出新功能。至少[一部分程序员][21]被说服了。然而,庞大的主流程序员群体并未换用 Lisp。
实际上出现的情况是Lisp 并未流行,但越来越多 Lisp 式的特性被加入到广受欢迎的语言中。Python 有了列表理解。C# 有了 Linq。Ruby……嗯[Ruby 是 Lisp 的一种][22]。就如格雷厄姆之前在 2001 年提到的那样,“在一系列常用语言中所体现出的‘默认语言’正越发朝着 Lisp 的方向演化” [^23]。尽管其它语言变得越来越像 LispLisp 本身仍然保留了其作为“很少人了解但是大家都该学的神秘语言”的特殊声望。在 1980 年Lisp 的诞生二十周年纪念日上麦卡锡写道Lisp 之所以能够存活这么久,是因为它具备“编程语言领域中的某种近似局部最优” [^24]。这句话并未充分地表明 Lisp 的真正影响力。Lisp 能够存活超过半个世纪之久,并非因为程序员们一年年地勉强承认它就是最好的编程工具;事实上,即使绝大多数程序员根本不用它,它还是存活了下来。多亏了它的起源和它的人工智能研究用途,说不定还要多亏 SICP 的遗产Lisp 一直都那么让人着迷。在我们能够想象上帝用其它新的编程语言创造世界之前Lisp 都不会走下神坛。
--------------------------------------------------------------------------------
[^1]: John McCarthy, “History of Lisp”, 14, Stanford University, February 12, 1979, accessed October 14, 2018, http://jmc.stanford.edu/articles/lisp/lisp.pdf
[^2]: Paul Graham, “The Roots of Lisp”, 1, January 18, 2002, accessed October 14, 2018, http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf.
[^3]: Martin Childs, “John McCarthy: Computer scientist known as the father of AI”, The Independent, November 1, 2011, accessed on October 14, 2018, https://www.independent.co.uk/news/obituaries/john-mccarthy-computer-scientist-known-as-the-father-of-ai-6255307.html.
[^4]: Lisp Bulletin History. http://www.artinfo-musinfo.org/scans/lb/lb3f.pdf
[^5]: Allen Newell and Herbert Simon, “Current Developments in Complex Information Processing,” 19, May 1, 1956, accessed on October 14, 2018, http://bitsavers.org/pdf/rand/ipl/P-850_Current_Developments_In_Complex_Information_Processing_May56.pdf.
[^6]: ibid.
[^7]: Herbert Stoyan, “Lisp History”, 43, Lisp Bulletin #3, December 1979, accessed on October 14, 2018, http://www.artinfo-musinfo.org/scans/lb/lb3f.pdf
[^8]: McCarthy, “History of Lisp”, 5.
[^9]: ibid.
[^10]: McCarthy “History of Lisp”, 6.
[^11]: Stoyan, “Lisp History”, 45
[^12]: McCarthy, “History of Lisp”, 8.
[^13]: McCarthy, “History of Lisp”, 2.
[^14]: McCarthy, “History of Lisp”, 8.
[^15]: Graham, “The Roots of Lisp”, 11.
[^16]: Guy Steele and Richard Gabriel, “The Evolution of Lisp”, 22, History of Programming Languages 2, 1993, accessed on October 14, 2018, http://www.dreamsongs.com/Files/HOPL2-Uncut.pdf. 2
[^17]: Carl Helmers, “Editorial”, Byte Magazine, 154, August 1979, accessed on October 14, 2018, https://archive.org/details/byte-magazine-1979-08/page/n153.
[^18]: Patrick Winston, “The Lisp Revolution”, 209, April 1985, accessed on October 14, 2018, https://archive.org/details/byte-magazine-1985-04/page/n207.
[^19]: Harold Abelson, Gerald Jay. Sussman, and Julie Sussman, Structure and Interpretation of Computer Programs (Cambridge, Mass: MIT Press, 2010), xiii.
[^20]: Abelson, xxiii.
[^21]: Abelson, 428.
[^22]: Helmers, 7.
[^23]: Paul Graham, “What Made Lisp Different”, December 2001, accessed on October 14, 2018, http://www.paulgraham.com/diff.html.
[^24]: John McCarthy, “Lisp—Notes on its past and future”, 3, Stanford University, 1980, accessed on October 14, 2018, http://jmc.stanford.edu/articles/lisp20th/lisp20th.pdf.
via: https://twobithistory.org/2018/10/14/lisp.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[Northurland](https://github.com/Northurland)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://xkcd.com/224/
[2]: https://xkcd.com/297/
[3]: https://www.gnu.org/fun/jokes/eternal-flame.en.html
[4]: https://www.reddit.com/r/ProgrammerHumor/comments/5c14o6/xkcd_lisp/d9szjnc/
[5]: https://twobithistory.org/images/byte_lisp.jpg
[6]: http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf
[7]: https://en.wikipedia.org/wiki/Jargon_File
[8]: https://hci.stanford.edu/winograd/shrdlu/
[9]: https://en.wikipedia.org/wiki/Macsyma
[10]: https://en.wikipedia.org/wiki/ACL2
[11]: https://twobithistory.org/2018/09/30/chaosnet.html
[12]: https://youtu.be/gV5obrYaogU?t=201
[13]: https://en.wikipedia.org/wiki/Flavors_(programming_language)
[14]: https://twobithistory.org/images/symbolics.jpg
[15]: http://www.softwarepreservation.org/projects/LISP/
[16]: https://knowyourmeme.com/forums/meme-research/topics/47038-structure-and-interpretation-of-computer-programs-hugeass-image-dump-for-evidence
[17]: https://twobithistory.org/images/sicp.jpg
[18]: https://rxjs-dev.firebaseapp.com/
[19]: https://en.wikipedia.org/wiki/Brainfuck
[20]: http://www.paulgraham.com/avg.html
[21]: https://web.archive.org/web/20061004035628/http://wiki.alu.org/Chris-Perkins
[22]: http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/


@ -1,9 +1,9 @@
Chrony 一个类 Unix 系统可选的 NTP 客户端和服务器
Chrony:一个类 Unix 系统上 NTP 客户端和服务器替代品
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg)
在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上可选的 NTP 客户端和服务器。Chrony 可以更快的同步系统时钟具有更好的时钟准确度并且它对于那些不是一直在线的系统很有帮助。Chrony 是免费、开源的,并且支持 GNU/Linux 和 BSD 衍生版比如 FreeBSDNetBSDmacOS 和 Solaris 等。
在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上 NTP 客户端和服务器的替代品。Chrony 可以更快地同步系统时钟具有更好的时钟准确度并且它对于那些不是一直在线的系统很有帮助。Chrony 是自由开源的,并且支持 GNU/Linux 和 BSD 衍生版比如 FreeBSD、NetBSD、macOS 和 Solaris 等。
### 安装 Chrony
@ -13,7 +13,7 @@ Chrony 可以从大多数 Linux 发行版的默认软件库中获得。如果你
$ sudo pacman -S chrony
```
在 DebianUbuntuLinux Mint 上:
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install chrony
@ -25,7 +25,7 @@ $ sudo apt-get install chrony
$ sudo dnf install chrony
```
当安装完成后,如果之前没有启动过的话需启动 **chronyd.service** 守护进程:
当安装完成后,如果之前没有启动过的话需启动 `chronyd.service` 守护进程:
```
$ sudo systemctl start chronyd.service
@ -37,7 +37,7 @@ $ sudo systemctl start chronyd.service
$ sudo systemctl enable chronyd.service
```
为了确认 Chronyd.service 已经启动,运行:
为了确认 `chronyd.service` 已经启动,运行:
```
$ sudo systemctl status chronyd.service
@ -71,7 +71,7 @@ Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200
### 配置 Chrony
NTP 客户端需要知道它要连接到哪个 NTP 服务器来获取当前时间。我们可以直接在 NTP 配置文件中的 **server** 或者 **pool** 项指定 NTP 服务器。通常,默认的配置文件位于 **/etc/chrony/chrony.conf** 或者 **/etc/chrony.conf**,取决于 Linux 发行版版本。为了更可靠的时间同步,建议指定至少三个服务器。
NTP 客户端需要知道它要连接到哪个 NTP 服务器来获取当前时间。我们可以直接在该 NTP 配置文件中的 `server` 或者 `pool` 项指定 NTP 服务器。通常,默认的配置文件位于 `/etc/chrony/chrony.conf` 或者 `/etc/chrony.conf`,取决于 Linux 发行版版本。为了更可靠的同步时间,建议指定至少三个服务器。
下面几行是我的 Ubuntu 18.04 LTS 服务器上的一个示例。
@ -87,19 +87,19 @@ pool 2.ubuntu.pool.ntp.org iburst maxsources 2
[...]
```
从上面的输出中你可以看到,[**NTP Pool Project**][1] 已经被设置成为了默认的时间服务器。对于那些好奇的人NTP Pool project 是一个时间服务器集群,用来为全世界千万个客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。
从上面的输出中你可以看到,[NTP 服务器池项目][1] 已经被设置成为了默认的时间服务器。对于那些好奇的人NTP 服务器池项目是一个时间服务器集群,用来为全世界千万个客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。
在这里,
* **iburst** 选项用来加速初始的同步过程
* **maxsources** 代表 NTP 源的最大数量
* `iburst` 选项用来加速初始的同步过程
* `maxsources` 代表 NTP 源的最大数量
请确保你选择的 NTP 服务器是同步的、稳定的、离你的位置较近的,以便使用这些 NTP 源来提升时间准确度。
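下面是一个简单的示意,演示如何追加自选的 NTP 服务器并让配置生效(其中的服务器地址是假设的示例,配置文件路径请按你的发行版调整):

```
# 向 chrony 配置文件追加自选的 NTP 服务器Debian/Ubuntu 上通常是 /etc/chrony/chrony.conf
echo "server ntp1.example.com iburst" | sudo tee -a /etc/chrony/chrony.conf
echo "server ntp2.example.com iburst" | sudo tee -a /etc/chrony/chrony.conf

# 重启 chronyd 使新配置生效,并查看当前正在使用的时间源
sudo systemctl restart chronyd.service
chronyc sources -v
```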
### 在命令行中管理 Chronyd
Chrony 有一个命令行工具叫做 **chronyc** 用来控制和监控 **chrony** 守护进程chronyd)。
chrony 有一个命令行工具叫做 `chronyc` 用来控制和监控 chrony 守护进程(`chronyd`)。
为了检查是否 **chrony** 已经同步,我们可以使用下面展示的 **tracking** 命令。
为了检查 chrony 是否已经同步,我们可以使用下面展示的 `tracking` 命令。
```
$ chronyc tracking
@ -135,7 +135,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample
^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms
```
Chronyc 工具可以对每个源进行统计,比如使用 **sourcestats** 命令获得漂移速率和进行偏移估计。
`chronyc` 工具可以对每个源进行统计,比如使用 `sourcestats` 命令获得漂移速率和进行偏移估计。
```
$ chronyc sourcestats
@ -152,7 +152,7 @@ sin1.m-d.net 29 13 83m +0.049 6.060 -8466us 9940us
ns2.pulsation.fr 32 17 88m +0.784 9.834 -62ms 22ms
```
如果你的系统没有连接到 Internet你需要告知 Chrony 系统没有连接到 Internet。为了这样做,运行:
如果你的系统没有连接到互联网,你需要告知 Chrony 系统没有连接到互联网。为了这样做,运行:
```
$ sudo chronyc offline
@ -174,7 +174,7 @@ $ chronyc activity
可以看到,我的所有源此时都是离线状态。
一旦你连接到 Internet,只需要使用命令告知 Chrony 你的系统已经回到在线状态:
一旦你连接到互联网,只需要使用命令告知 Chrony 你的系统已经回到在线状态:
```
$ sudo chronyc online
@ -193,11 +193,10 @@ $ chronyc activity
0 sources with unknown address
```
所有选项和参数的详细解释,请参考帮助手册。
所有选项和参数的详细解释,请参考帮助手册。
```
$ man chronyc
$ man chronyd
```
@ -206,7 +205,6 @@ $ man chronyd
保持关注!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-unix-like-systems/
@ -214,7 +212,7 @@ via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-u
作者:[SK][a]
选题:[lujun9972][b]
译者:[zianglei](https://github.com/zianglei)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,30 +1,19 @@
2018 年 10 月在 COPR 中值得尝试的 4 个很酷的新项目
COPR 仓库中 4 个很酷的新软件2018.10
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR是软件的个人存储库的[集合] [1],它不在标准的 Fedora 仓库中携带。某些软件不符合允许轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是免费和开源的。COPR 可以在标准的 Fedora 包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,或者是由项目自己签名的。但是,它是尝试新的或实验性软件的一种很好的方法。
COPR 是软件的个人存储库的[集合][1],它包含那些不在标准的 Fedora 仓库中的软件。某些软件不符合允许轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是自由开源的。COPR 可以在标准的 Fedora 包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有该项目的签名背书。但是,它是尝试新的或实验性软件的一种很好的方法。
这是 COPR 中一组新的有趣项目。
### GitKraken
[编者按:这些项目里面有一个并不适合通过 COPR 分发,所以从本文中也删除了。相关的评论也删除了,以免误导读者。对此带来的不便,我们深表歉意。]
[GitKraken][2] 是一个有用的 git 客户端它适合喜欢图形界面而非命令行的用户并提供你期望的所有功能。此外GitKraken 可以创建仓库和文件并具有内置编辑器。GitKraken 的一个有用功能是暂存行或者文件,并快速切换分支。但是,在某些情况下,在遇到较大项目时会有性能问题。
![][3]
#### 安装说明
该仓库目前为 Fedora 27、28、29 、Rawhide 以及 OpenSUSE Tumbleweed 提供 GitKraken。要安装 GitKraken请使用以下命令
```
sudo dnf copr enable elken/gitkraken
sudo dnf install gitkraken
```
LCTT 译注:本文后来移除了对 “GitKraken” 项目的介绍。)
### Music On Console
[Music On Console][4] 播放器或称为 mocp是一个简单的控制台音频播放器。它有一个类似于 “Midnight Commander” 的界面并且很容易使用。你只需进入包含音乐的目录然后选择要播放的文件或目录。此外mocp 提供了一组命令,允许直接从命令行进行控制。
[Music On Console][4] 播放器(简称 mocp是一个简单的控制台音频播放器。它有一个类似于 “Midnight Commander” 的界面并且很容易使用。你只需进入包含音乐的目录然后选择要播放的文件或目录。此外mocp 提供了一组命令,允许直接从命令行进行控制。
![][5]
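下面是几个常用的命令行控制操作的示意(具体可用选项请以 `man mocp` 为准):

```
# 播放、暂停/继续、切换曲目、查看信息、关闭服务端
mocp --play
mocp --toggle-pause
mocp --next
mocp --info
mocp --exit
```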
@ -39,7 +28,7 @@ sudo dnf install moc
### cnping
[Cnping][6]是小型的图形化 ping IPv4 工具,可用于可视化显示 RTT 的变化。它提供了一个选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外cnping 还提供 RTT 和丢包的基本统计数据。
[Cnping][6] 是小型的图形化 ping IPv4 工具,可用于可视化显示 RTT 的变化。它提供了一个选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外cnping 还提供 RTT 和丢包的基本统计数据。
![][7]
@ -54,7 +43,7 @@ sudo dnf install cnping
### Pdfsandwich
[Pdfsandwich][8] 是将文本添加到图像形式的文本 PDF 文件 (如扫描书籍) 的工具。它使用光学字符识别 (OCR) 创建一个额外的图层, 包含了原始页面已识别的文本。这对于复制和处理文本很有用。
[Pdfsandwich][8] 是将文本添加到图像形式的文本 PDF 文件 (如扫描书籍) 的工具。它使用光学字符识别 (OCR) 创建一个额外的图层, 包含了原始页面已识别的文本。这对于复制和处理文本很有用。
#### 安装说明
@ -72,7 +61,7 @@ via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,9 +1,11 @@
监测数据库的健康和行为: 有哪些重要指标?
监测数据库的健康和行为有哪些重要指标?
======
对数据库的监测可能过于困难或者没有监测到关键点。本文将讲述如何正确的监测数据库。
> 对数据库的监测可能过于困难或者没有找到关键点。本文将讲述如何正确的监测数据库。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D)
我们没有足够的讨论数据库。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现的很好,以至于我们单纯地信任它能把任务完成的很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。
我们对数据库的讨论还不够多。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现得很好,以至于我们单纯地信任它能把任务完成得很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos)
@ -13,11 +15,11 @@
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4)
更具体地说,数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。
更具体地说数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。
### 问题
最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是Scrooge McDuck你可以浏览所有可用的指标。如果这是Wrestlemania,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标?
最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是<ruby>史高治<rt>Scrooge McDuck</rt></ruby>LCTT 译注:史高治,唐老鸭的舅舅,以一毛不拔著称),你不会放过任何一个可用的指标。如果这是<ruby>摔角狂热<rt>Wrestlemania</rt></ruby> 比赛,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标?
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt)
@ -29,11 +31,11 @@
开始检测数据库的最好方法是跟踪它所接到请求的数量。我们对数据库有较高期望;期望它能稳定的存储数据,并处理我们抛给它的所有查询,这些查询可能是一天一次大规模查询,或者是来自用户一天到晚的数百万次查询。吞吐量可以告诉我们数据库是否如我们期望的那样工作。
你也可以将请求安照类型(读,写,服务器端,客户端等)分组,以开始分析流量。
你也可以将请求按照类型(读、写、服务器端、客户端等)分组,以开始分析流量。
### 执行时间:数据库完成工作需要多长时间?
这个指标看起来很明显,但往往被忽视了。 你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而参考上下文来讨论执行时间非常重要像InfluxDB这样的时间序列数据库中的慢与像MySQL这样的关系型数据库中的慢不一样。InfluxDB中的慢可能意味着毫秒而MySQL的“SLOW_QUERY”变量的默认值是10秒。
这个指标看起来很明显,但往往被忽视了。你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而,参考上下文来讨论执行时间非常重要:像 InfluxDB 这样的时间序列数据库中的慢与像 MySQL 这样的关系型数据库中的慢不一样。InfluxDB 中的慢可能意味着毫秒,而 MySQL 的 `SLOW_QUERY` 变量的默认值是 10 秒。
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8)
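以 MySQL 为例,下面的示意演示了如何查看并临时调整慢查询阈值,把“慢”的标准对齐到你自己的预期(需要管理员权限,数值仅为示例):

```
# 查看当前的慢查询日志开关和阈值(默认 10 秒)
mysql -e "SHOW VARIABLES LIKE 'slow_query_log'; SHOW VARIABLES LIKE 'long_query_time';"

# 临时开启慢查询日志,并把阈值降到 1 秒(重启 MySQL 后失效)
mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
```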
@ -43,7 +45,7 @@
一旦你知道数据库正在处理多少请求以及每个请求需要多长时间,你就需要添加一层复杂性以开始从这些指标中获得实际值。
如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是否忙碌了100秒、10秒或者介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。
如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是忙碌了 100 秒、10 秒,还是介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。
并发性还能影响延迟,这不仅包括任务完成所需的时间(执行时间),还包括任务在处理之前需要等待的时间。
@ -53,15 +55,15 @@
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP)
该指标对于确定数据库的整体健康和性能特别有用。如果只能在80%的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。
该指标对于确定数据库的整体健康和性能特别有用。如果只能在 80% 的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。
### 好消息
监测和分析似乎非常困难特别是因为我们大多数人不是数据库专家我们可能没有时间去理解这些指标。但好消息是大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库Postgrespg_stats、CouchDBRuntime_.、InfluxDB_internal等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。
监测和分析似乎非常困难特别是因为我们大多数人不是数据库专家我们可能没有时间去理解这些指标。但好消息是大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库Postgres`pg_stats`、CouchDB`Runtime_Statistics`、InfluxDB`_internal` 等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。
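举例来说,如果你用的是 Postgres下面的查询就能从内置的统计视图中取出几个与吞吐量和并发相关的基础指标这只是一个简单的示意这些视图是 Postgres 自带的):

```
# 每个数据库的事务提交/回滚次数、缓存命中与磁盘读取次数(吞吐量相关)
psql -c "SELECT datname, xact_commit, xact_rollback, blks_hit, blks_read FROM pg_stat_database;"

# 当前的活跃连接数(并发相关)
psql -c "SELECT count(*) FROM pg_stat_activity;"
```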
### 结论
数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。 从吞吐量,执行时间,并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。
数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。从吞吐量、执行时间、并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx)
@ -74,7 +76,7 @@ via: https://opensource.com/article/18/10/database-metrics-matter
作者:[Katy Farmer][a]
选题:[lujun9972][b]
译者:[ChiZelin](https://github.com/ChiZelin)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,11 +1,11 @@
设计更快的网页(三):字体和 CSS 转换
设计更快的网页(三):字体和 CSS 调整
======
![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg)
欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一][1]和[第二][2]部分讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS[层叠式样式表][3])和字体中减掉更多的脂肪。
欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一部分][1]和[第二部分][2]讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS[层叠式样式表][3])和字体中减掉更多的脂肪。
### CSS 转换
### 调整 CSS
首先我们先来看看问题的源头。CSS 的出现曾是技术的一大进步。你可以用一个集中式的样式表来装饰多个网页。如今很多 Web 开发者都会使用 Bootstrap 这样的框架。
@ -35,7 +35,7 @@ Font-awesome CSS 代表了包含未使用样式的极端。这个页面中只用
current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2
```
所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来摆脱这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。
所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来去除这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。
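除了 FontForge 和 Fontello你也可以用 fonttools 提供的 `pyftsubset` 工具在命令行里完成子集化。下面是一个简单的示意(文件名和要保留的码位都是假设的示例):

```
# 安装 fonttools输出 woff2 还需要 brotli
pip install fonttools brotli

# 只保留需要的几个图标码位,并输出为体积更小的 woff2
pyftsubset fontawesome-webfont.ttf \
  --unicodes="U+F015,U+F099,U+F09A" \
  --flavor=woff2 \
  --output-file=fontawesome-subset.woff2
```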
这种自定义字体集的缺点在于,你必须自己来托管字体文件。你也没法使用其它在线服务来提供更新。但与更快的性能相比,这可能算不上一个缺点。
@ -53,14 +53,14 @@ via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/
作者:[Sirko Kemter][a]
选题:[lujun9972][b]
译者:[StdioA](https://github.com/StdioA)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/gnokii/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/
[2]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
[1]: https://linux.cn/article-10166-1.html
[2]: https://linux.cn/article-10217-1.html
[3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets
[4]: https://getfedora.org
[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png


@ -0,0 +1,367 @@
我们如何得知安装的包来自哪个仓库?
==========
有时候你可能想知道安装的软件包来自于哪个仓库。这将帮助你在遇到包冲突问题时进行故障排除。
因为[第三方仓库][1]拥有最新版本的软件包,所以有时候当你试图安装一些包的时候会出现兼容性的问题。
在 Linux 上一切皆有可能,因为即使某个包在你的发行版系统中没有提供,你也可以安装它。
你也可以安装一个最新版本的包,即使你的发行版系统仓库还没有这个版本,怎么做到的呢?
这就是为什么出现了第三方仓库。它们允许用户从库中安装所有可用的包。
几乎所有的发行版系统都允许第三方软件库。一些发行版还会官方推荐一些不会取代基础仓库的第三方仓库,例如 CentOS 官方推荐安装 [EPEL 库][2]。
下面是常用的仓库列表和它们的详细信息。
* CentOS[EPEL][2]、[ELRepo][3] 等是 [CentOS 社区认证仓库][4]。
* Fedora [RPMfusion 仓库][5] 是经常被很多 [Fedora][6] 用户使用的仓库。
* ArchLinuxArchLinux 社区仓库包含了来自于 Arch 用户仓库的、由可信用户审核通过的软件包。
* openSUSE [Packman 仓库][7] 为 openSUSE 提供了各种附加的软件包,特别是但不限于那些在 openSUSE Build Service 应用黑名单上的与多媒体相关的应用和库。它是 openSUSE 软件包的最大外部软件库。
* Ubuntu个人软件包归档PPA是一种软件仓库。开发者们可以创建这种仓库来分发他们的软件。你可以在 PPA 导航页面找到相关信息。同时,你也可以启用 Canonical 合作伙伴软件仓库。
### 仓库是什么?
软件仓库是存储特定的应用程序的软件包的集中场所。
所有的 Linux 发行版都在维护他们自己的仓库,并允许用户在他们的机器上获取和安装包。
每个厂商都提供了各自的包管理工具来管理它们的仓库,例如搜索、安装、更新、升级、删除等等。
除了 RHEL 和 SUSE 以外大部分 Linux 发行版都是自由软件。要访问付费的仓库,你需要购买其订阅服务。
### 为什么我们需要启用第三方仓库?
在 Linux 里,并不建议从源代码安装包,因为这样做可能会在升级软件和系统的时候产生很多问题,这也是为什么我们建议从库中安装包而不是从源代码安装。
### 在 RHEL/CentOS 系统上我们如何得知安装的软件包来自哪个仓库?
这可以通过很多方法实现。我们会给你所有可能的选择,你可以选择一个对你来说最合适的。
#### 方法-1使用 yum 命令
RHEL 和 CentOS 系统使用 RPM 包,因此我们能够使用 [Yum 包管理器][8] 来获得信息。
YUM 即 “Yellowdog Updater, Modified”是适用于基于 RPM 的系统(例如 RHEL 和 CentOS的一个开源命令行前端包管理工具。
`yum` 是从发行版仓库和其他第三方库中获取、安装、删除、查询和管理 RPM 包的一个主要工具。
```
# yum info apachetop
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* epel: epel.mirror.constant.com
Installed Packages
Name : apachetop
Arch : x86_64
Version : 0.15.6
Release : 1.el7
Size : 65 k
Repo : installed
From repo : epel
Summary : A top-like display of Apache logs
URL : https://github.com/tessus/apachetop
License : BSD
Description : ApacheTop watches a logfile generated by Apache (in standard common or
: combined logformat, although it doesn't (yet) make use of any of the extra
: fields in combined) and generates human-parsable output in realtime.
```
`apachetop` 包来自 EPEL 仓库。
#### 方法-2使用 yumdb 命令
`yumdb info` 提供了类似于 `yum info` 的信息但是它又提供了包校验和数据、类型、用户信息(谁安装的软件包)。从 yum 3.2.26 开始yum 已经开始在 rpmdatabase 之外存储额外的信息user 表示软件是用户安装的dep 表示它是作为依赖项引入的)。
```
# yumdb info lighttpd
Loaded plugins: fastestmirror
lighttpd-1.4.50-1.el7.x86_64
checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37
checksum_type = sha256
command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel
from_repo = epel
from_repo_revision = 1540756729
from_repo_timestamp = 1540757483
installed_by = 0
origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm
reason = user
releasever = 7
var_contentdir = centos
var_infra = stock
var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8
```
`lighttpd` 包来自 EPEL 仓库。
#### 方法-3使用 rpm 命令
[RPM 命令][9] 即 “Red Hat Package Manager” 是一个适用于基于 Red Hat 的系统(例如 RHEL、CentOS、Fedora、openSUSE & Mageia的强大的命令行包管理工具。
这个工具允许你在你的 Linux 系统/服务器上安装、更新、移除、查询和验证软件。RPM 文件具有 .rpm 后缀名。RPM 包是用必需的库和依赖关系构建的,不会与系统上安装的其他包冲突。
```
# rpm -qi apachetop
Name : apachetop
Version : 0.15.6
Release : 1.el7
Architecture: x86_64
Install Date: Mon 29 Oct 2018 06:47:49 AM EDT
Group : Applications/Internet
Size : 67020
License : BSD
Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5
Source RPM : apachetop-0.15.6-1.el7.src.rpm
Build Date : Sat 20 Jun 2015 09:02:37 PM EDT
Build Host : buildvm-22.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager : Fedora Project
Vendor : Fedora Project
URL : https://github.com/tessus/apachetop
Summary : A top-like display of Apache logs
Description :
ApacheTop watches a logfile generated by Apache (in standard common or
combined logformat, although it doesn't (yet) make use of any of the extra
fields in combined) and generates human-parsable output in realtime.
```
`apachetop` 包来自 EPEL 仓库。
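需要注意的是rpm 数据库本身并不记录仓库名,`rpm` 能提供的只是 Vendor、Packager 这类可以间接提示来源的字段。例如在 CentOS 上EPEL 软件包的 Vendor 通常是 “Fedora Project”,下面这个粗略的筛查示例就利用了这一点(字段取值取决于打包方,结果仅供参考):

```
# rpm -qa --queryformat '%{NAME}-%{VERSION} %{VENDOR}\n' | grep -i "fedora project"
```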
#### 方法-4使用 repoquery 命令
`repoquery` 是一个从 YUM 库查询信息的程序,类似于 rpm 查询。
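顺带一提,`repoquery` 命令通常由 yum-utils 软件包提供,如果系统中没有这个命令,可以先安装它(以 CentOS/RHEL 为例):

```
# yum install yum-utils
```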
```
# repoquery -i httpd
Name : httpd
Version : 2.4.6
Release : 80.el7.centos.1
Architecture: x86_64
Size : 9817285
Packager : CentOS BuildSystem
Group : System Environment/Daemons
URL : http://httpd.apache.org/
Repository : updates
Summary : Apache HTTP Server
Source : httpd-2.4.6-80.el7.centos.1.src.rpm
Description :
The Apache HTTP Server is a powerful, efficient, and extensible
web server.
```
`httpd` 包来自 CentOS updates 仓库。
### 在 Fedora 系统上我们如何得知安装的包来自哪个仓库?
DNF 是 “Dandified yum” 的缩写。DNF 是使用 hawkey/libsolv 库作为后端的下一代 yum 包管理器yum 的分支。Aleš Kozumplík 从 Fedora 18 起开始开发 DNF它最终在 Fedora 22 中得以应用/启用。
[dnf 命令][10] 用于在 Fedora 22 以及之后的系统上安装、更新、搜索和删除包。它会自动解决依赖并使安装包的过程变得顺畅,不会出现任何问题。
```
$ dnf info tilix
Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST.
Installed Packages
Name : tilix
Version : 1.6.4
Release : 1.fc26
Arch : x86_64
Size : 3.6 M
Source : tilix-1.6.4-1.fc26.src.rpm
Repo : @System
From repo : updates
Summary : Tiling terminal emulator
URL : https://github.com/gnunn1/tilix
License : MPLv2.0 and GPLv3+ and CC-BY-SA
Description : Tilix is a tiling terminal emulator with the following features:
:
: - Layout terminals in any fashion by splitting them horizontally or vertically
: - Terminals can be re-arranged using drag and drop both within and between
: windows
: - Terminals can be detached into a new window via drag and drop
: - Input can be synchronized between terminals so commands typed in one
: terminal are replicated to the others
: - The grouping of terminals can be saved and loaded from disk
: - Terminals support custom titles
: - Color schemes are stored in files and custom color schemes can be created by
: simply creating a new file
: - Transparent background
: - Supports notifications when processes are completed out of view
:
: The application was written using GTK 3 and an effort was made to conform to
: GNOME Human Interface Guidelines (HIG).
```
`tilix` 包来自 Fedora updates 仓库。
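另外,`dnf list installed` 的输出中,最后一列通常也会以 `@` 前缀标注软件包的安装来源仓库(具体格式可能因 dnf 版本而略有差异):

```
$ dnf list installed tilix
```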
### 在 openSUSE 系统上我们如何得知安装的包来自哪个仓库?
Zypper 是一个使用 libzypp 的命令行包管理器。[Zypper 命令][11] 提供了存储库访问、依赖处理、包安装等功能。
```
$ zypper info nano
Loading repository data...
Reading installed packages...
Information for package nano:
-----------------------------
Repository : Main Repository (OSS)
Name : nano
Version : 2.4.2-5.3
Arch : x86_64
Vendor : openSUSE
Installed Size : 1017.8 KiB
Installed : No
Status : not installed
Source package : nano-2.4.2-5.3.src
Summary : Pico editor clone with enhancements
Description :
GNU nano is a small and friendly text editor. It aims to emulate
the Pico text editor while also offering a few enhancements.
```
`nano` 包来自于 openSUSE Main 仓库OSS
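如果只想快速确认某个已安装软件包对应的仓库,也可以使用 `zypper search` 的详细模式,其输出中带有 Repository 一列(以下命令仅作示意):

```
$ zypper search --installed-only --details nano
```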
### 在 ArchLinux 系统上我们如何得知安装的包来自哪个仓库?
[Pacman 命令][12] 即包管理器工具package manager utility ),是一个简单的用来安装、构建、删除和管理 Arch Linux 软件包的命令行工具。Pacman 使用 libalpm 作为后端来执行所有的操作。
```
# pacman -Ss chromium
extra/chromium 48.0.2564.116-1
The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser
extra/qt5-webengine 5.5.1-9 (qt qt5)
Provides support for web applications using the Chromium browser project
community/chromium-bsu 0.9.15.1-2
A fast paced top scrolling shooter
community/chromium-chromevox latest-1
Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This
package does not contain the extension code.
community/fcitx-mozc 2.17.2313.102-1
Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese
Input)
```
`chromium` 包来自 ArchLinux extra 仓库。
或者,我们可以使用以下选项获得关于包的详细信息。
```
# pacman -Si chromium
Repository : extra
Name : chromium
Version : 48.0.2564.116-1
Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser
Architecture : x86_64
URL : http://www.chromium.org/
Licenses : BSD
Groups : None
Provides : None
Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus
flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir
desktop-file-utils hicolor-icon-theme
Optional Deps : kdebase-kdialog: needed for file dialogs in KDE
gnome-keyring: for storing passwords in GNOME keyring
kwallet: for storing passwords in KWallet
Conflicts With : None
Replaces : None
Download Size : 44.42 MiB
Installed Size : 172.44 MiB
Packager : Evangelos Foutras
Build Date : Fri 19 Feb 2016 04:17:12 AM IST
Validated By : MD5 Sum SHA-256 Sum Signature
```
`chromium` 包来自 ArchLinux extra 仓库。
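反过来,如果想列出某个仓库中所有已安装的软件包,可以用 `-Sl` 选项按仓库列出软件包,已安装的包会带有 installed 标记(下面以 extra 仓库为例):

```
# pacman -Sl extra | grep "\[installed\]"
```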
### 在基于 Debian 的系统上我们如何得知安装的包来自哪个仓库?
在基于 Debian 的系统(例如 Ubuntu、Linux Mint上可以使用以下两种方法实现。
#### 方法-1使用 apt-cache 命令
[apt-cache 命令][13] 可以显示存储在 APT 内部数据库的很多信息。这些信息是一种缓存,因为它们是从列在 `sources.list` 文件里的不同的源中获得的,这个过程发生在 `apt update` 操作期间。
```
$ apt-cache policy python3
python3:
Installed: 3.6.3-0ubuntu2
Candidate: 3.6.3-0ubuntu3
Version table:
3.6.3-0ubuntu3 500
500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages
*** 3.6.3-0ubuntu2 500
500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages
100 /var/lib/dpkg/status
```
已安装的 `python3` 包3.6.3-0ubuntu2来自 Ubuntu 的 artful/main 仓库,而候选的更新版本3.6.3-0ubuntu3则来自 artful-updates 仓库。
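如果想一次检查多个软件包,可以把 `apt-cache policy` 放进一个简单的 shell 循环,只保留标记已安装版本的 `***` 行及其后紧跟的来源行(下面的软件包名仅作示例):

```
$ for pkg in python3 vim curl; do echo "== $pkg"; apt-cache policy "$pkg" | grep -A1 '\*\*\*'; done
```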
#### 方法-2使用 apt 命令
[APT 命令][14] 即 “Advanced Packaging Tool”,是 `apt-get` 命令的替代品,就像 DNF 取代了 YUM 一样。它是一个功能丰富的命令行工具,将 `apt-cache`、`apt-search`、`dpkg`、`apt-cdrom`、`apt-config`、`apt-key` 等工具的功能整合进了一个命令APT并且还有几个独有的功能。例如我们可以通过 APT 轻松安装 .deb 包,但 `apt-get` 却做不到,诸如此类的功能都被整合进了 APT 命令。正是由于 `apt-get` 存在很多未被解决的功能缺失,它才被 `apt` 所取代。
```
$ apt -a show notepadqq
Package: notepadqq
Version: 1.3.2-1~artful1
Priority: optional
Section: editors
Maintainer: Daniele Di Sarli
Installed-Size: 1,352 kB
Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2)
Download-Size: 356 kB
APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages
Description: Notepad++-like editor for Linux
Text editor with support for multiple programming
languages, multiple encodings and plugin support.
Package: notepadqq
Version: 1.2.0-1~artful1
Status: install ok installed
Priority: optional
Section: editors
Maintainer: Daniele Di Sarli
Installed-Size: 1,352 kB
Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2)
Homepage: http://notepadqq.altervista.org
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: Notepad++-like editor for Linux
Text editor with support for multiple programming
languages, multiple encodings and plugin support.
```
`notepadqq` 包来自 Launchpad PPA。
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[zianglei](https://github.com/zianglei)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/repository/
[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/
[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/
[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/
[6]: https://fedoraproject.org/wiki/Third_party_repositories
[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/
[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[9]: https://www.2daygeek.com/rpm-command-examples/
[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/

View File

@ -1,35 +1,38 @@
CPod一个开源、跨平台播客应用
======
播客是一个很好的娱乐和获取信息的方式。事实上,我会听十几个不同的播客,包括技术、神秘事件、历史和喜剧。当然,[Linux 播客][1]也在此列表中。
今天,我们将看一个简单的跨平台应用来收听你的播客。
![][2]
推荐的播客和播客搜索
*推荐的播客和播客搜索*
### 应用程序
[CPod][3] 是 [Zack Guardz -----][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在最大的操作系统Linux、Windows、Mac OS上运行。
[CPod][3] 是 [Zack Guard][4] 的作品。**它是一个 [Electron][5] 程序**,这使它能够在大多数操作系统Linux、Windows、Mac OS上运行。
一个小事CPod 最初被命名为 Cumulonimbus。
> 一个小事CPod 最初被命名为 Cumulonimbus。
应用的大部分被两个面板占用来显示内容和选项。屏幕左侧的小条让你可以使用应用的不同功能。CPod 的不同栏目包括主页、队列、订阅、浏览和设置。
![cpod settings][6]
设置
*设置*
### CPod 的功能
以下是 CPod 提供的功能列表:
* 简洁,干净的设计
* 可在顶级计算机平台上使用
* 可在主流计算机平台上使用
* 有 Snap 包
* 搜索 iTunes 的播客目录
* 下载以及无需下载播放节目
* 可下载也可无需下载就播放节目
* 查看播客信息和节目
* 搜索播客的个别节目
* 黑暗模式
* 深色模式
* 改变播放速度
* 键盘快捷键
* 将你的播客订阅与 gpodder.net 同步
@ -39,13 +42,13 @@ CPod一个开源、跨平台播客应用
* 多语言支持
![search option in cpod application][7]
搜索 ZFS 节目
*搜索 ZFS 节目*
### 在 Linux 上体验 CPod
我最后在两个系统上安装了 CPodArchLabs 和 Windows。[Arch 用户仓库][8] 中有两个版本的 CPod。但是它们都已过时一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我不得已而有不同的体验。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。
我最后在两个系统上安装了 CPodArchLabs 和 Windows。[Arch 用户仓库][8] 中有两个版本的 CPod。但是它们都已过时一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我的体验有所不同。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。
我马上能够找到我最喜欢的播客。我可以粘贴 RSS 源的 URL 来添加 iTunes 列表中没有的那些播客。
@ -55,7 +58,7 @@ CPod一个开源、跨平台播客应用
### 安装 CPod
在 [GitHub][11]上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。
在 [GitHub][11] 上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。
你可以使用 [Snap][12] 安装 CPod。你需要做的就是使用以下命令
@ -63,14 +66,15 @@ CPod一个开源、跨平台播客应用
sudo snap install cpod
```
就像我之前说的那样CPod 的 [Arch 用户仓库][8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch或基于 Arch 的发行版),我建议你这样做。
就像我之前说的那样CPod 的 [Arch 用户仓库][8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch或基于 Arch 的发行版),我建议你这样做。
![cpod for Linux pidcasts][13]
播放其中一个我最喜欢的播客
*播放其中一个我最喜欢的播客*
### 最后的想法
总的来说,我喜欢 CPod。它外观漂亮使用简单。事实上我更喜欢原来的名字 Cumulonimbus但是它有点拗口。
总的来说,我喜欢 CPod。它外观漂亮使用简单。事实上我更喜欢原来的名字Cumulonimbus但是它有点拗口。
我刚刚在程序中遇到两个问题。首先,我希望每个播客都有评分。其次,在打开黑暗模式后,根据长度、日期、下载状态和播放进度对剧集进行排序的菜单不起作用。
@ -85,23 +89,23 @@ via: https://itsfoss.com/cpod-podcast-app/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-podcasts/
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod1.1.jpg
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod1.1.jpg?w=800&ssl=1
[3]: https://github.com/z-------------/CPod
[4]: https://github.com/z-------------
[5]: https://electronjs.org/
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod2.1.png
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod4.1.jpg
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod2.1.png?w=800&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod4.1.jpg?w=800&ssl=1
[8]: https://aur.archlinux.org/packages/?O=0&K=cpod
[9]: https://latenightlinux.com/
[10]: https://itsfoss.com/what-is-zfs/
[11]: https://github.com/z-------------/CPod/releases
[12]: https://snapcraft.io/cumulonimbus
[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod3.1.jpg
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod3.1.jpg?w=800&ssl=1
[14]: http://reddit.com/r/linuxusersgroup

View File

@ -1,24 +1,24 @@
命令行快捷提示:如何定位一个文件
命令行快速技巧:如何定位一个文件
======
![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg)
我们都会有文件存储在电脑里 —— 目录,相片,源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。
我们都会有文件存储在电脑里 —— 目录、相片、源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。
好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:ls、tree 和 tree
好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:`ls`、`tree` 和 `find`
### ls
如果你知道文件在哪里你只需要列出它们或者查看有关它们的信息ls 就是为此而生的。
如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,`ls` 就是为此而生的。
只需运行 ls 就可以列出当下目录中所有可见的文件和目录:
只需运行 `ls` 就可以列出当下目录中所有可见的文件和目录:
```
$ ls
Documents Music Pictures Videos notes.txt
```
添加 **-l** 选项可以查看文件的相关信息。同时再加上 **-h** 选项,就可以用一种人们易读的格式查看文件的大小:
添加 `-l` 选项可以查看文件的相关信息。同时再加上 `-h` 选项,就可以用一种人们易读的格式查看文件的大小:
```
$ ls -lh
@ -30,7 +30,7 @@ drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Videos
-rw-r--r-- 1 adam adam 43K Nov 2 13:12 notes.txt
```
**ls** 也可以搜索一个指定位置:
`ls` 也可以搜索一个指定位置:
```
$ ls Pictures/
@ -44,7 +44,7 @@ $ ls *.txt
notes.txt
```
少了点什么?想要查看一个隐藏文件?没问题,使用 **-a** 选项:
少了点什么?想要查看一个隐藏文件?没问题,使用 `-a` 选项:
```
$ ls -a
@ -52,7 +52,7 @@ $ ls -a
.. .bash_profile .vimrc Music Videos
```
**ls** 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多:
`ls` 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多:
```
$ man ls
@ -60,13 +60,13 @@ $ man ls
### tree
如果你想查看你的文件的树状结构tree 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装:
如果你想查看你的文件的树状结构,`tree` 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装:
```
$ sudo dnf install tree
```
如果不带任何选项或者参数地运行 tree将会以当前目录开始显示出包含其下所有目录和文件的一个树状图。提醒一下这个输出可能会非常大因为它包含了这个目录下的所有目录和文件
如果不带任何选项或者参数地运行 `tree`,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件:
```
$ tree
@ -89,7 +89,7 @@ $ tree
`-- notes.txt
```
如果列出的太多了,使用 -L 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级:
如果列出的太多了,使用 `-L` 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级:
```
$ tree -L 2
@ -118,13 +118,13 @@ Documents/work/
`-- status-reports.txt
```
如果使用 tree 列出的是一个很大的树状图,你可以把它跟 less 组合使用:
如果使用 `tree` 列出的是一个很大的树状图,你可以把它跟 `less` 组合使用:
```
$ tree | less
```
再一次tree 有很多其他的选项可以使用你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项:
再一次,`tree` 有很多其他的选项可以使用你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项:
```
$ man tree
@ -134,13 +134,13 @@ $ man tree
那么如果不知道文件在哪里呢?就让我们来找到它们吧!
要是你的系统中没有 find你可以使用 DNF 安装它:
要是你的系统中没有 `find`,你可以使用 DNF 安装它:
```
$ sudo dnf install findutils
```
运行 find 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。
运行 `find` 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。
```
$ find
@ -167,7 +167,7 @@ $ find
./Music
```
但是 find 真正强大的是你可以使用文件名进行搜索:
但是 `find` 真正强大的是你可以使用文件名进行搜索:
```
$ find -name do-things.sh
@ -184,6 +184,7 @@ $ find -name "*.txt"
./Documents/work/project-abc/project-notes.txt
./notes.txt
```
你也可以根据文件大小来寻找文件。在磁盘空间不足的时候,这种方法也许特别有用。现在来列出所有大于 1 MB 的文件:
```
@ -207,7 +208,7 @@ $ find Documents -name "*project*" -type f
Documents/work/project-abc/project-notes.txt
```
最后再一次find 还有很多供你使用的选项要是你想使用它们man 手册页绝对可以帮到你:
最后再一次,`find` 还有很多供你使用的选项要是你想使用它们man 手册页绝对可以帮到你:
```
$ man find
@ -220,7 +221,7 @@ via: https://fedoramagazine.org/commandline-quick-tips-locate-file/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,22 +1,23 @@
使用 gitbase 在 git 仓库进行 SQL 查询
gitbase用 SQL 查询 Git 仓库
======
gitbase 是一个使用 go 开发的的开源项目,它实现了在 git 仓库上执行 SQL 查询。
> gitbase 是一个使用 go 开发的开源项目,它实现了在 Git 仓库上执行 SQL 查询。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
git 已经成为了代码版本控制的事实标准,但尽管 git 相当普及,对代码仓库的深入分析的工作难度却没有因此而下降;而 SQL 在大型代码库的查询方面则已经是一种久经考验的语言,因此诸如 Spark 和 BigQuery 这样的项目都采用了它。
Git 已经成为了代码版本控制的事实标准,但尽管 Git 相当普及,对代码仓库的深入分析的工作难度却没有因此而下降;而 SQL 在大型代码库的查询方面则已经是一种久经考验的语言,因此诸如 Spark 和 BigQuery 这样的项目都采用了它。
所以source{d} 很顺理成章地将这两种技术结合起来,就产生了 gitbase。gitbase 是一个<ruby>代码即数据<rt>code-as-data</rt></ruby>的解决方案,可以使用 SQL 对 git 仓库进行大规模分析。
所以source{d} 很顺理成章地将这两种技术结合起来,就产生了 gitbaseLCTT 译注source{d} 是一家开源公司,本文作者是该公司开发者关系副总裁)。gitbase 是一个<ruby>代码即数据<rt>code-as-data</rt></ruby>的解决方案,可以使用 SQL 对 git 仓库进行大规模分析。
[gitbase][1] 是一个完全开源的项目。它站在了很多巨人的肩上,因此得到了足够的发展竞争力。下面就来介绍一下其中的一些“巨人”。
![](https://opensource.com/sites/default/files/uploads/gitbase.png)
[gitbase playground][2] 为 gitbase 提供了一个可视化的操作环境。
*[gitbase playground][2] 为 gitbase 提供了一个可视化的操作环境。*
### 用 Vitess 解析 SQL
gitbase 通过 SQL 与用户进行交互,因此需要能够遵循 MySQL 协议来对传入的 SQL 请求作出解析和理解,万幸由 YouTube 建立的 [Vitess][3] 项目已经在这一方面给出了解决方案。Vitess 是一个横向扩展的 MySQL数据库集群系统。
gitbase 通过 SQL 与用户进行交互,因此需要能够遵循 MySQL 协议来对通过网络传入的 SQL 请求作出解析和理解,万幸由 YouTube 建立的 [Vitess][3] 项目已经在这一方面给出了解决方案。Vitess 是一个横向扩展的 MySQL 数据库集群系统。
我们只是使用了这个项目中的部分重要代码,并将其转化为一个可以让任何人在数分钟以内编写出一个 MySQL 服务器的[开源程序][4],就像我在 [justforfunc][5] 视频系列中展示的 [CSVQL][6] 一样,它可以使用 SQL 操作 CSV 文件。
@ -28,17 +29,17 @@ gitbase 通过 SQL 与用户进行交互,因此需要能够遵循 MySQL 协议
### 使用 enry 检测语言、使用 babelfish 解析文件
gitbase 集成了我们的语言检测开源项目 [enry][9] 以及代码解析项目 [babelfish][10],因此在分析 git 仓库历史代码的能力也相当强大。babelfish 是一个自托管服务,普适于各种源代码解析,并将代码文件转换为<ruby>通用抽象语法树<rt>Universal Abstract Syntax Tree</rt></ruby>UAST
gitbase 集成了我们开源的语言检测项目 [enry][9] 以及代码解析项目 [babelfish][10],因此在分析 git 仓库历史代码方面的能力也相当强大。babelfish 是一个自托管服务,适用于各种源代码的解析,可以将代码文件转换为<ruby>通用抽象语法树<rt>Universal Abstract Syntax Tree</rt></ruby>UAST
这两个功能在 gitbase 中可以被用户以函数 LANGUAGE 和 UAST 调用,诸如“查找上个月最常被修改的函数的名称”这样的请求就需要通过这两个功能实现。
这两个功能在 gitbase 中可以被用户以函数 `LANGUAGE``UAST` 调用,诸如“查找上个月最常被修改的函数的名称”这样的请求就需要通过这两个功能实现。
### 提高性能
gitbase 可以对非常大的数据集进行分析,例如源代码大小达 3 TB 的 Public Git Archive。面临的工作量如此巨大,因此每一点性能都必须运用到极致。于是,我们也使用到了 Rubex 和 Pilosa 这两个项目。
gitbase 可以对非常大的数据集进行分析,例如来自 GitHub 的、源代码多达 3 TB 的 Public Git Archive[公告][11])。面临的工作量如此巨大,因此每一点性能都必须运用到极致。于是,我们也使用到了 Rubex 和 Pilosa 这两个项目。
#### 使用 Rubex 和 Oniguruma 优化正则表达式速度
[Rubex][12] 是 go 的正则表达式标准库包的一个准替代品。之所以说它是准替代品,是因为它没有在 regexp.Regexp 类中实现 LiteralPrefix 方法,直到现在都还没有。
[Rubex][12] 是 go 的正则表达式标准库包的一个准替代品。之所以说它是准替代品,是因为它没有在 `regexp.Regexp` 类中实现 `LiteralPrefix` 方法,直到现在都还没有。
Rubex 的高性能是由于使用 [cgo][14] 调用了 [Oniguruma][13],它是一个高度优化的 C 代码库。
@ -65,7 +66,7 @@ via: https://opensource.com/article/18/11/gitbase
作者:[Francesc Campoy][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,186 @@
在 Linux 中如何查找一个命令或进程的执行时间
======
![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png)
在类 Unix 系统中,你可能知道一个命令或进程开始执行的时间,以及[一个进程运行了多久][1]。 但是,你如何知道这个命令或进程何时结束或者它完成运行所花费的总时长呢? 在类 Unix 系统中,这是非常容易的! 有一个专门为此设计的程序名叫 **GNU time**。 使用 `time` 程序,我们可以轻松地测量 Linux 操作系统中命令或程序的总执行时间。 `time` 命令在大多数 Linux 发行版中都有预装,所以你不必去安装它。
### 在 Linux 中查找一个命令或进程的执行时间
要测量一个命令或程序的执行时间,运行:
```
$ /usr/bin/time -p ls
```
或者,
```
$ time ls
```
输出样例:
```
dir1 dir2 file1 file2 mcelog
real 0m0.007s
user 0m0.001s
sys 0m0.004s
```
```
$ time ls -a
. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful
.. .bashrc dir2 .gnupg .profile .wget-hsts
.bash_history .cache file1 .local .stack
real 0m0.008s
user 0m0.001s
sys 0m0.005s
```
以上命令显示出了 `ls` 命令的总执行时间。 你可以将 `ls` 替换为任何命令或进程,以查找总的执行时间。
输出详解:
1. `real` —— 指的是命令或程序所花费的总时间
2. `user` —— 指的是在用户模式下程序所花费的时间
3. `sys` —— 指的是在内核模式下程序所花费的时间
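为了更直观地验证这几个数值,可以对一个耗时已知的命令计时,比如让它睡眠 2 秒,此时 `real` 应接近 2 秒,而 `user``sys` 几乎为零(输出大致如下,具体数值会有差异):

```
$ time sleep 2

real 0m2.002s
user 0m0.001s
sys 0m0.001s
```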
我们也可以将命令限制为仅运行一段时间。参考如下教程了解更多细节:
- [在 Linux 中如何让一个命令运行特定的时长](https://www.ostechnix.com/run-command-specific-time-linux/)
### time 与 /usr/bin/time
你可能注意到了,我们在上面的例子中使用了 `time``/usr/bin/time` 两个命令,所以你可能会想知道它们之间的不同。
首先, 让我们使用 `type` 命令看看 `time` 命令到底是什么。对于那些我们不了解的 Linux 命令,`type` 命令用于查找相关命令的信息。 更多详细信息,[请参阅本指南][2]。
```
$ type -a time
time is a shell keyword
time is /usr/bin/time
```
正如你在上面的输出中看到的一样,`time` 是两个东西:
* 一个是 BASH shell 中内建的关键字
* 一个是可执行文件,如 `/usr/bin/time`
由于 shell 关键字的优先级高于可执行文件,当你没有给出完整路径只运行 `time` 命令时,你运行的是 shell 内建的命令。 但是,当你运行 `/usr/bin/time` 时,你运行的是真正的 **GNU time** 命令。 因此,为了执行真正的命令你可能需要给出完整路径。
在大多数 shell 中如 BASH、ZSH、CSH、KSH、TCSH 等,内建的关键字 `time` 是可用的。 `time` 关键字的选项少于该可执行文件,你可以使用的唯一选项是 `-p`
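除了输入完整路径之外,还有一个常见的小技巧可以绕过 shell 关键字、直接调用外部的 GNU time 程序:在命令前加上反斜杠,或者借助 `command`(下面的写法以 BASH 为例,仅供参考):

```
$ \time ls
$ command time ls
```

这两种写法都会让 shell 跳过关键字解析,转而在 PATH 中查找 `time`,从而执行 `/usr/bin/time`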
你现在知道了如何使用 `time` 命令查找给定命令或进程的总执行时间。 想进一步了解 GNU time 工具吗? 继续阅读吧!
### 关于 GNU time 程序的简要介绍
GNU time 程序运行带有给定参数的命令或程序,并在命令完成后将系统资源使用情况汇总到标准输出。与 `time` 关键字不同GNU time 程序不仅显示命令或进程的执行时间,还会显示内存、I/O 和 IPC 调用等其他资源的使用情况。
`time` 命令的语法是:
```
/usr/bin/time [options] command [arguments...]
```
上述语法中的 `options` 是指一组可以与 `time` 命令一起使用去执行特定功能的选项。 下面给出了可用的选项:
* `-f, --format` —— 使用此选项可以根据需求指定输出格式。
* `-p, --portability` —— 使用简要的输出格式。
* `-o FILE, --output=FILE` —— 将输出写到指定文件中而不是到标准输出。
* `-a, --append` —— 将输出追加到文件中而不是覆盖它。
* `-v, --verbose` —— 此选项显示 `time` 命令输出的详细信息。
* `--quiet` —— 此选项可以防止 `time` 命令报告程序的状态。
当不带任何选项使用 GNU time 命令时,你将看到以下输出。
```
$ /usr/bin/time wc /etc/hosts
9 28 273 /etc/hosts
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k
0inputs+0outputs (0major+73minor)pagefaults 0swaps
```
如果你用 shell 关键字 `time` 运行相同的命令, 输出会有一点儿不同:
```
$ time wc /etc/hosts
9 28 273 /etc/hosts
real 0m0.006s
user 0m0.001s
sys 0m0.004s
```
有时,你可能希望将系统资源使用情况输出到文件中而不是终端上。 为此, 你可以使用 `-o` 选项,如下所示。
```
$ /usr/bin/time -o file.txt ls
dir1 dir2 file1 file2 file.txt mcelog
```
正如你看到的,`time` 命令的统计结果没有显示在终端上,因为我们把输出写到了 `file.txt` 文件中。让我们看一下这个文件的内容:
```
$ cat file.txt
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k
0inputs+0outputs (0major+106minor)pagefaults 0swaps
```
当你使用 `-o` 选项时, 如果你没有一个名为 `file.txt` 的文件,它会创建一个并把输出写进去。如果文件存在,它会覆盖文件原来的内容。
你可以使用 `-a` 选项(与 `-o` 一起使用)将输出追加到文件后面,而不是覆盖它的内容。
```
$ /usr/bin/time -a -o file.txt ls
```
`-f` 选项允许用户根据自己的喜好控制输出格式。比如说,以下命令的输出仅显示总耗时、用户时间和系统时间。
```
$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls
dir1 dir2 file1 file2 mcelog
0:00.00 real, 0.00 user, 0.00 sys
```
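前面列出的 `-v`--verbose选项也很实用它会额外输出最大常驻内存、缺页次数、上下文切换等更详细的资源统计信息输出较长这里不再展示

```
$ /usr/bin/time -v ls
```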
请注意 shell 中内建的 `time` 命令并不具有 GNU time 程序的所有功能。
有关 GNU time 程序的详细说明可以使用 `man` 命令来查看。
```
$ man time
```
想要了解有关 Bash 内建 `time` 关键字的更多信息,请运行:
```
$ help time
```
就到这里吧。 希望对你有所帮助。
会有更多好东西分享哦。 请关注我们!
加油哦!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[caixiangyue](https://github.com/caixiangyue)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/find-long-process-running-linux/
[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/

View File

@ -0,0 +1,79 @@
为 Linux 选择打印机
======
> Linux 为打印机提供了广泛的支持。学习如何利用它。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
我们在传闻已久的无纸化社会方面取得了重大进展,但我们仍需要不时打印文件。如果你是 Linux 用户,并有一台没有 Linux 安装盘的打印机,或者你正准备在市场上购买新设备,那么你很幸运。因为大多数 Linux 发行版(以及 MacOS都使用通用 Unix 打印系统([CUPS][1]),它包含了当今大多数打印机的驱动程序。这意味着 Linux 为打印机提供了比 Windows 更广泛的支持。
### 选择打印机
如果你需要购买新打印机,了解它是否支持 Linux 的最佳方法是查看包装盒或制造商网站上的文档。你也可以搜索 [Open Printing][2] 数据库。它是检查各种打印机与 Linux 兼容性的绝佳资源。
以下是与 Linux 兼容的佳能打印机的一些 Open Printing 结果。
![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png)
下面的截图是 Open Printing 的 Hewlett-Packard LaserJet 4050 的结果 —— 根据数据库,它应该可以“完美”工作。这里列出了建议驱动以及通用说明,让我了解它适用于 CUPS、行式打印守护程序LPD、LPRng 等。
![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png)
在任何情况下,最好在购买打印机之前检查制造商的网站并询问其他 Linux 用户。
### 检查你的连接
有几种方法可以将打印机连接到计算机。如果你的打印机是通过 USB 连接的,那么可以在 Bash 提示符下输入 `lsusb` 来轻松检查连接。
```
$ lsusb
```
该命令返回 “Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard” —— 这没有太多价值,但可以得知打印机已连接。我可以通过输入以下命令获得有关打印机的更多信息:
```
$ dmesg | grep -i usb
```
结果更加详细。
![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png)
如果你尝试将打印机连接到并口(假设你的计算机有并口 —— 如今很少见),你可以使用此命令检查连接:
```
$ dmesg | grep -i parport
```
返回的信息可以帮助我为我的打印机选择正确的驱动程序。我发现,如果我坚持使用流行的名牌打印机,大部分时间我都能获得良好的效果。
### 设置你的打印机软件
Fedora Linux 和 Ubuntu Linux 都包含简单的打印机设置工具。[Fedora][3] 维护着一个出色的 wiki 来解答打印相关的问题。这些工具可以在 GUI 的设置界面中轻松启动,也可以在命令行上调用 `system-config-printer`
![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png)
HP 支持 Linux 打印的 [HP Linux 成像和打印][4] HPLIP 软件可能已安装在你的 Linux 系统上。如果没有,你可以为你的发行版[下载][5]最新版本。打印机制造商 [Epson][6] 和 [Brother][7] 也有带有 Linux 打印机驱动程序和信息的网页。
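如果你更喜欢命令行,也可以直接使用 CUPS 自带的工具检查打印系统的状态。下面是几个常用命令的示意(假设 CUPS 服务已在运行,部分命令可能需要管理员权限,输出内容因系统而异):

```
$ lpstat -p -d      # 列出已配置的打印机及默认打印机
$ lpinfo -v         # 列出 CUPS 检测到的可用设备(连接方式)
$ lpinfo -m | less  # 列出本机可用的打印机驱动(输出较长)
```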
你最喜欢的 Linux 打印机是什么?请在评论中分享你的意见。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/choosing-printer-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://www.cups.org/
[2]: http://www.openprinting.org/printers
[3]: https://fedoraproject.org/wiki/Printing
[4]: https://developers.hp.com/hp-linux-imaging-and-printing
[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip
[6]: https://epson.com/Support/wa00821
[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on

View File

@ -8,25 +8,25 @@ more、less 和 most 的区别
### more 命令
`more` 是一个较为传统的终端阅读工具,它可以用于打开指定的文件并进行交互式阅读。如果文件的内容太长,在一屏以内无法完整显示,就会逐页显示文件内容。使用回车键或者空格键可以滚动浏览文件的内容,但有一个限制,就是只能够单向滚动。也就是说只能按顺序往下翻页,而不能进行回看。
`more` 是一个老式的、基础的终端分页阅读器,它可以用于打开指定的文件并进行交互式阅读。如果文件的内容太长,在一屏以内无法完整显示,就会逐页显示文件内容。使用回车键或者空格键可以滚动浏览文件的内容,但有一个限制,就是只能够单向滚动。也就是说只能按顺序往下翻页,而不能进行回看。
![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-command-demo.gif)
**更**
**更新:**
有的 Linux 用户向我指出,在 `more` 当中是可以向上翻页的。不过,最原始版本的 `more` 确实只允许向下翻页,在后续出现的较新的版本中也允许了有限次数的向上翻页,只需要在浏览过程中按 `b` 键即可向上翻页。唯一的限制是 `more` 不能搭配管道使用。(我使用 more 是可以搭配管道使用的,不知道原作者为什么要这样写,麻烦校对确认一下这一句是否需要去掉
有的 Linux 用户向我指出,在 `more` 当中是可以向上翻页的。不过,最原始版本的 `more` 确实只允许向下翻页,在后续出现的较新的版本中也允许了有限次数的向上翻页,只需要在浏览过程中按 `b` 键即可向上翻页。唯一的限制是 `more` 不能搭配管道使用(如 `ls | more`LCTT 译注:此处原作者疑似有误,译者使用 `more` 是可以搭配管道使用的,或许与不同 `more` 版本有关
`q` 即可退出 `more`
**更多示例**
打开 ostechnix.txt 文件进行交互式阅读,可以执行以下命令:
打开 `ostechnix.txt` 文件进行交互式阅读,可以执行以下命令:
```
$ more ostechnix.txt
```
在阅读过程中,如果需要查找某个字符串,只需要像下面这样在斜杠(/)之后输入需要查找的内容:
在阅读过程中,如果需要查找某个字符串,只需要像下面这样输入斜杠(`/`)之后接着输入需要查找的内容:
```
/linux
@ -34,13 +34,13 @@ $ more ostechnix.txt
`n` 键可以跳转到下一个匹配的字符串。
如果需要在文件的第 10 行开始阅读,只需要执行:
如果需要在文件的第 `10` 行开始阅读,只需要执行:
```
$ more +10 file
```
就可以从文件的第 10 行开始显示文件的内容了。
就可以从文件的第 `10` 行开始显示文件的内容了。
如果你需要让 `more` 提示你按空格键来翻页,可以加上 `-d` 参数:
@ -60,11 +60,11 @@ $ more -d ostechnix.txt
$ man more
```
### less 命令
### less 命令
`less` 命令也是用于打开指定的文件并进行交互式阅读,它也支持翻页和搜索。如果文件的内容太长,也会对输出进行分页,因此也可以翻页阅读。比 `more` 命令更好的一点是,`less` 支持向上翻页和向下翻页,也就是可以在整个文件中任意阅读。
![](https://www.ostechnix.com/wp-content/uploads/2018/11/less-command-demo.gif)
![][4]
在使用功能方面,`less` 比 `more` 命令具有更多优点,以下列出其中几个:
@ -73,8 +73,6 @@ $ man more
* 可以跳转到文件的末尾并立即从文件的开头开始阅读
* 在编辑器中打开指定的文件
**更多示例**
打开文件:
@ -85,7 +83,7 @@ $ less ostechnix.txt
按空格键或回车键可以向下翻页,按 `b` 键可以向上翻页。
如果需要向下搜索,在斜杠(/)之后输入需要搜索的内容:
如果需要向下搜索,在输入斜杠(`/`)之后接着输入需要搜索的内容:
```
/linux
@ -93,7 +91,7 @@ $ less ostechnix.txt
`n` 键可以跳转到下一个匹配的字符串,如果需要跳转到上一个匹配的字符串,可以按 `N` 键。
如果需要向上搜索,在问号(?)之后输入需要搜索的内容:
如果需要向上搜索,在输入问号(`?`)之后接着输入需要搜索的内容:
```
?linux
@ -115,7 +113,7 @@ $ man less
### most 命令
`most` 同样是一个终端阅读工具,而且比 `more``less` 的功能更为丰富。`most` 支持同时打开多个文件、编辑当前打开的文件、迅速跳转到文件中的某一行、分屏阅读、同时锁定或滚动多个屏幕等等功能。在默认情况下,对于较长的行,`most` 不会将其截断成多行显示,而是提供了左右滚动功能在同一行内显示。
`most` 同样是一个终端阅读工具,而且比 `more``less` 的功能更为丰富。`most` 支持同时打开多个文件。你可以在打开的文件之间切换、编辑当前打开的文件、迅速跳转到文件中的某一行、分屏阅读、同时锁定或滚动多个屏幕等等功能。在默认情况下,对于较长的行,`most` 不会将其截断成多行显示,而是提供了左右滚动功能在同一行内显示。
**更多示例**
@ -124,14 +122,16 @@ $ man less
```
$ most ostechnix1.txt
```
![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-command.png)
`e` 键可以编辑当前文件。
如果需要向下搜索,在斜杠(/)或 S 或 f 之后输入需要搜索的内容,按 `n` 键就可以跳转到下一个匹配的字符串。
如果需要向下搜索,在斜杠(`/`)或 `S``f` 之后输入需要搜索的内容,按 `n` 键就可以跳转到下一个匹配的字符串。
![][3]
如果需要向上搜索,在问号(?)之后输入需要搜索的内容,也是通过按 `n` 键跳转到下一个匹配的字符串。
如果需要向上搜索,在问号(`?`)之后输入需要搜索的内容,也是通过按 `n` 键跳转到下一个匹配的字符串。
同时打开多个文件:
@ -139,7 +139,8 @@ $ most ostechnix1.txt
$ most ostechnix1.txt ostechnix2.txt ostechnix3.txt
```
在打开了多个文件的状态下,可以输入 `:n` 切换到其它文件,使用`↑` 或 `↓` 键选择需要切换到的文件,按回车键就可以查看对应的文件。
在打开了多个文件的状态下,可以输入 `:n` 切换到下一个文件,使用 `↑``↓` 键选择需要切换到的文件,按回车键就可以查看对应的文件。
![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-2.gif)
要打开文件并跳转到某个字符串首次出现的位置(例如 linux可以执行以下命令
@ -154,44 +155,36 @@ $ most file +/linux
移动:
* **空格键或 `D` 键** 向下滚动一屏
* **DELETE 键或 `U` 键** 向上滚动一屏
* **`↓` 键** 向下移动一行
* **`↑` 键** 向上移动一行
* **`T` 键** 移动到文件开头
* **`B` 键** 移动到文件末尾
* **`>` 键或 TAB 键** 向右滚动屏幕
* **`<` 键** 向左滚动屏幕
* **`→` 键** 向右移动一列
* **`←` 键** 向左移动一列
* **`J` 键或 `G` 键** 移动到某一行,例如 `10j` 可以移动到第 10 行
* **`%` 键** 移动到文件长度某个百分比的位置
* 空格键或 `D` 向下滚动一屏
* `DELETE` 键或 `U` 向上滚动一屏
* `↓` 向下移动一行
* `↑` 向上移动一行
* `T` 移动到文件开头
* `B` 移动到文件末尾
* `>` 键或 `TAB` 向右滚动屏幕
* `<` 向左滚动屏幕
* `→` 向右移动一列
* `←` 向左移动一列
* `J` 键或 `G` 移动到某一行,例如 `10j` 可以移动到第 10 行
* `%` 移动到文件长度某个百分比的位置
窗口命令:
* **`Ctrl-X 2`、`Ctrl-W 2`** 分屏
* **`Ctrl-X 1`、`Ctrl-W 1`** 只显示一个窗口
* **`O` 键、`Ctrl-X O`** 切换到另一个窗口
* **`Ctrl-X 0`** 删除窗口
* `Ctrl-X 2`、`Ctrl-W 2` 分屏
* `Ctrl-X 1`、`Ctrl-W 1` 只显示一个窗口
* `O` 键、`Ctrl-X O` 切换到另一个窗口
* `Ctrl-X 0` 删除窗口
文件内搜索:
* **`S` 键或 `f` 键或 `/` 键** 向下搜索
* **`?` 键** 向上搜索
* **`n` 键** 跳转到下一个匹配的字符串
* `S` 键或 `f` 键或 `/` 向下搜索
* `?` 向上搜索
* `n` 跳转到下一个匹配的字符串
退出:
* **`q` 键** 退出 `most` ,且所有打开的文件都会被关闭
* **`:N`、`:n`** 退出当前文件并查看下一个文件(使用`↑` 键、`↓` 键选择下一个文件)
* `q` 退出 `most` ,且所有打开的文件都会被关闭
* `:N`、`:n` 退出当前文件并查看下一个文件(使用 `↑` 键、`↓` 键选择下一个文件)
要查看 `most` 的更多详细信息,可以参考手册:
@ -201,16 +194,14 @@ $ man most
### 总结
**`more`** 传统且基础的文件阅读工具,仅支持向下翻页和有限次数的向上翻页。
`more` 传统且基础的分页阅读工具,仅支持向下翻页和有限次数的向上翻页。
**`less`** `more` 功能丰富,支持向下翻页和向上翻页,也支持文本搜索。在打开大文件的时候,比 `vi` 这类文本编辑器启动得更快。
`less` `more` 功能丰富,支持向下翻页和向上翻页,也支持文本搜索。在打开大文件的时候,比 `vi` 这类文本编辑器启动得更快。
**`most`** 在上述两个工具功能的基础上,还加入了同时打开多个文件、同时锁定或滚动多个屏幕、分屏等等大量功能。
`most` 在上述两个工具功能的基础上,还加入了同时打开多个文件、同时锁定或滚动多个屏幕、分屏等等大量功能。
以上就是我的介绍,希望能让你通过我的文章对这三个工具有一定的认识。如果想了解这篇文章以外的关于这几个工具的详细功能,请参阅它们的 `man` 手册。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-difference-between-more-less-and-most-commands/
@ -218,7 +209,7 @@ via: https://www.ostechnix.com/the-difference-between-more-less-and-most-command
作者:[SK][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -227,4 +218,4 @@ via: https://www.ostechnix.com/the-difference-between-more-less-and-most-command
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/more-1.png
[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/most-1-1.gif
[4]: https://www.ostechnix.com/wp-content/uploads/2018/11/less-command-demo.gif

24
scripts/status/status.sh Executable file
View File

@ -0,0 +1,24 @@
#!/usr/bin/env bash
set -e
cd "$(dirname $0)/../.." # 进入TP root
function file-translating-p ()
{
local file="$*"
head -n 3 "$file" |grep -E -i "translat|fanyi|翻译" >/dev/null 2>&1
}
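# 以 JSON 形式输出指定文件最近一次提交的文件名、日期和作者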
function get_status_of()
{
local file="$@"
git log --date=short --pretty=format:"{\"file\":\"${file}\",\"time\":\"%ad\",\"user\":\"%an\"}" -n 1 "${file}"
}
(
git grep -niE "translat|fanyi|翻译" sources/*.md |awk -F ":" '{if ($2<=3) print $1}' |xargs -I{} git log --date=short --pretty=format:"{\"filename\":\"{}\",\"time\":\"%ad\",\"user\":\"%an\"}" -n 1 "{}"|jq --slurp
find sources -name "2*.md"|sort|while read file;do
if ! file-translating-p "${file}";then
get_status_of "${file}"
fi
done |jq --slurp
)|jq --slurp '{"translating":.[0],"unselected":.[1]}'

View File

@ -1,4 +1,3 @@
translating by lujun9972
Finding Jobs in Software
======

View File

@ -1,56 +0,0 @@
heguangzhi translating
Linus, His Apology, And Why We Should Support Him
======
![](https://i1.wp.com/www.jonobacon.com/wp-content/uploads/2018/09/Linus-Torvalds-640x353.jpg?resize=640%2C353&ssl=1)
Today, Linus Torvalds, the creator of Linux, which powers everything from smartwatches to electrical grids posted [a pretty remarkable note on the kernel mailing list][1].
As a little bit of backstory, Linus has sometimes come under fire for the ways in which he has expressed feedback, provided criticism, and reacted to various scenarios on the kernel mailing list. This criticism has been fair in many cases: he has been overly aggressive at times, and while the kernel maintainers are a tight-knit group, the optics, particularly for those new to kernel development has often been pretty bad.
Like many conflict scenarios, this feedback has been communicated back to him in both constructive and non-constructive ways. Historically he has been seemingly reluctant to really internalize this feedback, I suspect partially because (a) the Linux kernel is a very successful project, and (b) some of the critics have at times gone nuclear at him (which often doesnt work as a strategy towards defensive people). Well, things changed today.
In his post today he shared some self-reflection on this feedback:
> This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.
He went on to not just share an admission that this has been a problem, but to also share a very personal acceptance that he struggles to understand and engage with peoples emotions:
> The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand peoples emotions and respond appropriately.
His post is sure to light up the open source, Linux, and tech world for the next few weeks. For some it will be celebrated as a step in the right direction. For some it will be too little too late, and their animus will remain. For some they will be cautiously supportive, but defer judgement until they have seen his future behavior demonstrate substantive changes.
### My Take
I wouldnt say I know Linus very closely; we have a casual relationship. I see him at conferences from time to time, and we often bump into each other and catch up. I interviewed him for my book and for the Global Learning XPRIZE. From my experience he is a funny, genuine, friendly guy. Interestingly, and not unusually at all for open source, his online persona is rather different to his in-person persona. I am not going to deny that when I would see these dust-ups on LKML, it didnt reflect the Linus I know. I chalked it down to a mixture of his struggles with social skills, dogmatic pragmatism, and ego.
His post today is a pretty remarkable change of posture for him, and I encourage that we as a community support him in making these changes.
**Accepting these personal challenges is tough, particularly for someone in his position**. Linux is a global phenomenon. It has resulted in billions of dollars of technology creation, powering thousands of companies, and changing the norms around of how software is consumed and created. It is easy to forget that Linux was started by a quiet Finnish kid in his university dorm room. It is important to remember that **just because Linux has scaled elegantly, it doesnt mean that Linus has been able to**. He isnt a codebase, he is a human being, and bugs are harder to spot and fix in humans. You cant just deploy a fix immediately. It takes time to identify the problem and foster and grow a change. The starting point for this is to support people in that desire for change, not re-litigate the ills of the past: that will get us nowhere quickly.
[![Young Linus Torvalds][2]][3]
I am also mindful of ego. None of us like to admit we have an ego, but we all do. You don't get to build one of the most fundamental technologies in the last thirty years and not have an ego. He built it…they came…and a revolution was energized because of what he created. While Linus's ego is more subtle, and thankfully doesn't extend to faddish self-promotion, overly expensive suits, and forays into Hollywood (quite the opposite), his ego has naturally resulted in abrupt opinions on how his project should run, sometimes plugging fingers in his ears to particularly challenging viewpoints from others. **His post today is a clear example of him putting Linux as a project ahead of his own personal ego**.
This is important for a few reasons. Firstly, being in such a public position and accepting your personal flaws isnt a problem many people face, and isnt a situation many people handle well. I work with a lot of CEOs, and they often say it is the loneliest job on the planet. I have heard American presidents say the same in interviews. This is because they are the top of the tree with all the responsibility and expectations on their shoulders. Put yourself in Linuss position: his little project has blown up into a global phenomenon, and he didnt necessarily have the social tools to be able to handle this change. Ego forces these internal struggles under the surface and to push them down and avoid them. So, to accept them as publicly and openly as he did today is a very firm step in the right direction. Now, the true test will be results, but we need to all provide the breathing space for him to accomplish them.
So, I would encourage everyone to give Linus a shot. This doesnt mean the frustrations of the past are erased, and he has acknowledged and apologized for these mistakes as a first step. He has accepted he struggles with understanding others emotions, and a desire to help improve this for the betterment of the project and himself. **He is a human, and the best tonic for humans to resolve their own internal struggles is the support and encouragement of other humans**. This is not unique to Linus, but to anyone who faces similar struggles.
All the best, Linus.
--------------------------------------------------------------------------------
via: https://www.jonobacon.com/2018/09/16/linus-his-apology-and-why-we-should-support-him/
作者:[Jono Bacon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.jonobacon.com/author/admin/
[1]: https://lkml.org/lkml/2018/9/16/167
[2]: https://i1.wp.com/www.jonobacon.com/wp-content/uploads/2018/09/linus.jpg?resize=499%2C342&ssl=1
[3]: https://i1.wp.com/www.jonobacon.com/wp-content/uploads/2018/09/linus.jpg?ssl=1

View File

@ -1,112 +0,0 @@
translating by belitex
translating by belitex
translating by belitex
translating by belitex
Directing traffic: Demystifying internet-scale load balancing
======
Common techniques used to balance network traffic come with advantages and trade-offs.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys)
Large, multi-site, internet-facing systems, including content-delivery networks (CDNs) and cloud providers, have several options for balancing traffic coming onto their networks. In this article, we'll describe common traffic-balancing designs, including techniques and trade-offs.
If you were an early cloud computing provider, you could take a single customer web server, assign it an IP address, configure a domain name system (DNS) record to associate it with a human-readable name, and advertise the IP address via the border gateway protocol (BGP), the standard way of exchanging routing information between networks.
It wasn't load balancing per se, but there probably was load distribution across redundant network paths and networking technologies to increase availability by routing around unavailable infrastructure (giving rise to phenomena like [asymmetric routing][1]).
### Doing simple DNS load balancing
As traffic to your customer's service grows, the business' owners want higher availability. You add a second web server with its own publicly accessible IP address and update the DNS record to direct users to both web servers (hopefully somewhat evenly). This is OK for a while until one web server unexpectedly goes offline. Assuming you detect the failure quickly, you can update the DNS configuration (either manually or with software) to stop referencing the broken server.
Unfortunately, because DNS records are cached, around 50% of requests to the service will likely fail until the record expires from the client caches and those of other nameservers in the DNS hierarchy. DNS records generally have a time to live (TTL) of several minutes or more, so this can create a significant impact on your system's availability.
Worse, some proportion of clients ignore TTL entirely, so some requests will be directed to your offline web server for some time. Setting very short DNS TTLs is not a great idea either; it means higher load on DNS services plus increased latency because clients will have to perform DNS lookups more often. If your DNS service is unavailable for any reason, access to your service will degrade more quickly with a shorter TTL because fewer clients will have your service's IP address cached.
### Adding network load balancing
To work around this problem, you can add a redundant pair of [Layer 4][2] (L4) network load balancers that serve the same virtual IP (VIP) address. They could be hardware appliances or software balancers like [HAProxy][3]. This means the DNS record points only at the VIP and no longer does load balancing.
![Layer 4 load balancers balance connections across webservers.][5]
Layer 4 load balancers balance connections from users across two webservers.
The L4 balancers load-balance traffic from the internet to the backend servers. This is generally done based on a hash (a mathematical function) of each IP packet's 5-tuple: the source and destination IP address and port plus the protocol (such as TCP or UDP). This is fast and efficient (and still maintains essential properties of TCP) and doesn't require the balancers to maintain state per connection. (For more information, [Google's paper on Maglev][6] discusses implementation of a software L4 balancer in significant detail.)
The L4 balancers can do health-checking and send traffic only to web servers that pass checks. Unlike in DNS balancing, there is minimal delay in redirecting traffic to another web server if one crashes, although existing connections will be reset.
L4 balancers can do weighted balancing, dealing with backends with varying capacity. L4 balancing gives significant power and flexibility to operators while being relatively inexpensive in terms of computing power.
### Going multi-site
The system continues to grow. Your customers want to stay up even if your data center goes down. You build a new data center with its own set of service backends and another cluster of L4 balancers, which serve the same VIP as before. The DNS setup doesn't change.
The edge routers in both sites advertise address space, including the service VIP. Requests sent to that VIP can reach either site, depending on how each network between the end user and the system is connected and how their routing policies are configured. This is known as anycast. Most of the time, this works fine. If one site isn't operating, you can stop advertising the VIP for the service via BGP, and traffic will quickly move to the alternative site.
![Serving from multiple sites using anycast][8]
Serving from multiple sites using anycast.
This setup has several problems. Its worst failing is that you can't control where traffic flows or limit how much traffic is sent to a given site. You also don't have an explicit way to route users to the nearest site (in terms of network latency), but the network protocols and configurations that determine the routes should, in most cases, route requests to the nearest site.
### Controlling inbound requests in a multi-site system
To maintain stability, you need to be able to control how much traffic is served to each site. You can get that control by assigning a different VIP to each site and use DNS to balance them using simple or weighted [round-robin][9].
![Serving from multiple sites using a primary VIP][11]
Serving from multiple sites using a primary VIP per site, backed up by secondary sites, with geo-aware DNS.
You now have two new problems.
First, using DNS balancing means you have cached records, which is not good if you need to redirect traffic quickly.
Second, whenever users do a fresh DNS lookup, a VIP connects them to the service at an arbitrary site, which may not be the closest site to them. If your service runs on widely separated sites, individual users will experience wide variations in your system's responsiveness, depending upon the network latency between them and the instance of your service they are using.
You can solve the first problem by having each site constantly advertise and serve the VIPs for all the other sites (and consequently the VIP for any faulty site). Networking tricks (such as advertising less-specific routes from the backups) can ensure that VIP's primary site is preferred, as long as it is available. This is done via BGP, so we should see traffic move within a minute or two of updating BGP.
There isn't an elegant solution to the problem of serving users from sites other than the nearest healthy site with capacity. Many large internet-facing services use DNS services that attempt to return different results to users in different locations, with some degree of success. This approach is always somewhat [complex and error-prone][12], given that internet-addressing schemes are not organized geographically, blocks of addresses can change locations (e.g., when a company reorganizes its network), and many end users can be served from a single caching nameserver.
### Adding Layer 7 load balancing
Over time, your customers begin to ask for more advanced features.
While L4 load balancers can efficiently distribute load among multiple web servers, they operate only on source and destination IP addresses, protocol, and ports. They don't know anything about the content of a request, so you can't implement many advanced features in an L4 balancer. Layer 7 (L7) load balancers are aware of the structure and contents of requests and can do far more.
Some things that can be implemented in L7 load balancers are caching, rate limiting, fault injection, and cost-aware load balancing (some requests require much more server time to process).
They can also balance based on a request's attributes (e.g., HTTP cookies), terminate SSL connections, and help defend against application layer denial-of-service (DoS) attacks. The downside of L7 balancers at scale is cost—they do more computation to process requests, and each active request consumes some system resources. Running L4 balancers in front of one or more pools of L7 balancers can help with scaling.
### Conclusion
Load balancing is a difficult and complex problem. In addition to the strategies described in this article, there are different [load-balancing algorithms][13], high-availability techniques used to implement load balancers, client load-balancing techniques, and the recent rise of service meshes.
Core load-balancing patterns have evolved alongside the growth of cloud computing, and they will continue to improve as large web services work to improve the control and flexibility that load-balancing techniques offer.
Laura Nolan and Murali Suriar will present [Keeping the Balance: Load Balancing Demystified][14] at [LISA18][15], October 29-31 in Nashville, Tennessee, USA.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/internet-scale-load-balancing
作者:[Laura Nolan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lauranolan
[b]: https://github.com/lujun9972
[1]: https://www.noction.com/blog/bgp-and-asymmetric-routing
[2]: https://en.wikipedia.org/wiki/Transport_layer
[3]: https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/
[4]: /file/412596
[5]: https://opensource.com/sites/default/files/uploads/loadbalancing1_l4-network-loadbalancing.png (Layer 4 load balancers balance connections across webservers.)
[6]: https://ai.google/research/pubs/pub44824
[7]: /file/412601
[8]: https://opensource.com/sites/default/files/uploads/loadbalancing2_going-multisite.png (Serving from multiple sites using anycast)
[9]: https://en.wikipedia.org/wiki/Round-robin_scheduling
[10]: /file/412606
[11]: https://opensource.com/sites/default/files/uploads/loadbalancing3_controlling-inbound-requests.png (Serving from multiple sites using a primary VIP)
[12]: https://landing.google.com/sre/book/chapters/load-balancing-frontend.html
[13]: https://medium.com/netflix-techblog/netflix-edge-load-balancing-695308b5548c
[14]: https://www.usenix.org/conference/lisa18/presentation/suriar
[15]: https://www.usenix.org/conference/lisa18

View File

@ -0,0 +1,63 @@
translating---geekpi
Akash Angle: How do you Fedora?
======
![](https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-816x345.jpg)
We recently interviewed Akash Angle on how he uses Fedora. This is [part of a series][1] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][2] to express your interest in becoming a interviewee.
### Who is Akash Angle?
Akash is a Linux user who ditched Windows some time ago. An avid Fedora user for the past 9 years, he has tried out almost all the Fedora flavors and spins to get his day to day tasks done. He was introduced to Fedora by a school friend.
### What Hardware?
Akash uses a Lenovo B490 at work. It is equipped with an Intel Core i3-3310 Processor, and a 240GB Kingston SSD. “This laptop is great for day to work like surfing the internet, blogging, and a little bit of photo editing and video editing too. Although not a professional laptop and the specs not being that high end, it does the job perfectly,” says Akash.
He uses a Logitech basic wireless mouse and would like to eventually get a mechanical keyboard. His personal computer — which is a custom-built desktop — has the latest 7th-generation Intel i5 7400 processor, and 8GB Corsair Vengeance RAM.
![][3]
### What Software?
Akash is a fan of the GNOME 3 desktop environment. He loves most of the goodies and bells and whistles the OS can throw in for getting basic tasks done.
For practical reasons he prefers a fresh installation as a way of upgrading to the latest Fedora version. He thinks Fedora 29 is arguably the the best workstation out there. Akash says this has been backed up by reviews of various tech evangelists and open source news sites.
To play videos, his go-to is the VLC video player packaged as a [Flatpak][4], which gives him the latest stable version. When Akash wants to make screenshots, the ultimate tool for him is [Shutter, which the Magazine has covered in the past][5]. For graphics, GIMP is something without which he wouldnt be able to work.
Google Chrome stable, and the dev channel, are his most used web browsers. He also uses Chromium and the default version of Firefox, and sometimes even Opera makes its way into the party as well.
All the rest of the magic Akash does is from the terminal, as he is a power user. The GNOME Terminal app is the one for him.
#### Favorite wallpapers
One of his favorite wallpapers originally coming from Fedora 16 is the following one:
![][6]
And this is the one he currently uses on his Fedora 29 Workstation today:
![][7]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/akash-angle-how-do-you-fedora/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/tag/how-do-you-fedora/
[2]: https://fedoramagazine.org/submit-an-idea-or-tip/
[3]: https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-desktop-300x259.png
[4]: https://fedoramagazine.org/getting-started-flatpak/
[5]: https://fedoramagazine.org/screenshot-everything-shutter-fedora/
[6]: https://fedoramagazine.org/wp-content/uploads/2018/11/Fedora-16-300x188.png
[7]: https://fedoramagazine.org/wp-content/uploads/2018/11/wallpaper2you_72588-300x169.jpg

View File

@ -1,4 +1,3 @@
translating by lujun9972
Continuous infrastructure: The other CI
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q)

View File

@ -1,102 +0,0 @@
ANNOUNCING THE GENERAL AVAILABILITY OF CONTAINERD 1.0, THE INDUSTRY-STANDARD RUNTIME USED BY MILLIONS OF USERS
============================================================
Today, were pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. containerd has already been deployed in millions of systems in production today, making it the most widely adopted runtime and an essential upstream component of the Docker platform.
Built to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes, containerd ensures users have a consistent dev to ops experience. From [Dockers initial announcement][22] last year that it was spinning out its core runtime to [its donation to the CNCF][23] in March 2017, the containerd project has experienced significant growth and progress over the past 12 months. .
Within both the Docker and Kubernetes communities, there has been a significant uptick in contributions from independents and CNCF member companies alike including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU. Similarly, the maintainers have been working to add key functionality to containerd.The initial containerd donation provided everything users need to ensure a seamless container experience including methods for:
* transferring container images,
* container execution and supervision,
* low-level local storage and network interfaces and
* the ability to work on both Linux, Windows and other platforms. 
Additional work has been done to add even more powerful capabilities to containerd including a:
* Complete storage and distribution system that supports both OCI and Docker image formats and
* Robust events system
* More sophisticated snapshot model to manage container filesystems
These changes helped the team build out a smaller interface for the snapshotters, while still fulfilling the requirements needed from things like a builder. It also reduces the amount of code needed, making it much easier to maintain in the long run.
The containerd 1.0 milestone comes after several months testing both the alpha and version versions, which enabled the  team to implement many performance improvements. Some of these,improvements include the creation of a stress testing system, improvements in garbage collection and shim memory usage.
“In 2017 key functionality has been added containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime thats lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”
containerd is already being used by Kubernetes for its[ cri-containerd project][24], which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using [gRPC][25] and exposes metrics in the [Prometheus][26] format. containerd also fully leverages the Open Container Initiative (OCI) runtime, image format specifications and OCI reference implementation ([runC][27]), and will pursue OCI certification when it is available.
Key Milestones in the progress to 1.0 include:
![containerd 1.0](https://i2.wp.com/blog.docker.com/wp-content/uploads/4f8d8c4a-6233-4d96-a0a2-77ed345bf42b-5.jpg?resize=720%2C405&ssl=1)
Notable containerd facts and figures:
* 1994 GitHub stars, 401 forks
* 108 contributors
* 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU
* 3030+ commits, 26 releases
Availability and Resources
* To participate in containerd: [github.com/containerd/containerd][28]
* Getting Started with containerd: [http://mobyproject.org/blog/2017/08/15/containerd-getting-started/][8]
* Roadmap: [https://github.com/containerd/containerd/blob/master/ROADMAP.md][1]
* Scope table: [https://github.com/containerd/containerd#scope][2]
* Architecture document: [https://github.com/containerd/containerd/blob/master/design/architecture.md][3]
* APIs: [https://github.com/containerd/containerd/tree/master/api/][9].
* Learn more about containerd at KubeCon by attending Justin Cormack's [LinuxKit & Kubernetes talk at Austin Docker Meetup][10], Patrick Chanezon's [Moby session][11], [Phil Estes' session][12] or the [containerd salon][13]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/12/cncf-containerd-1-0-ga-announcement/
作者:[Patrick Chanezon ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/chanezon/
[1]:https://github.com/docker/containerd/blob/master/ROADMAP.md
[2]:https://github.com/docker/containerd#scope
[3]:https://github.com/docker/containerd/blob/master/design/architecture.md
[4]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users&summary=Today,%20we%E2%80%99re%20pleased%20to%20announce%20that%20containerd%20(pronounced%20Con-Tay-Ner-D),%20an%20industry-standard%20runtime%20for%20building%20container%20solutions,%20has%20reached%20its%201.0%20milestone.%20containerd%20has%20already%20been%20deployed%20in%20millions%20of%20systems%20in%20production%20today,%20making%20it%20the%20most%20widely%20adopted%20runtime%20and%20an%20essential%20upstream%20component%20of%20the%20Docker%20platform.%20Built%20...
[5]:http://www.reddit.com/submit?url=http://dockr.ly/2ArQe3G&title=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
[6]:https://plus.google.com/share?url=http://dockr.ly/2ArQe3G
[7]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2ArQe3G&t=Announcing%20the%20General%20Availability%20of%20containerd%201.0%2C%20the%20industry-standard%20runtime%20used%20by%20millions%20of%20users
[8]:http://mobyproject.org/blog/2017/08/15/containerd-getting-started/
[9]:https://github.com/docker/containerd/tree/master/api/
[10]:https://www.meetup.com/Docker-Austin/events/245536895/
[11]:http://sched.co/CU6G
[12]:https://kccncna17.sched.com/event/CU6g/embedding-the-containerd-runtime-for-fun-and-profit-i-phil-estes-ibm
[13]:https://kccncna17.sched.com/event/Cx9k/containerd-salon-hosted-by-derek-mcgowan-docker-lantao-liu-google
[14]:https://blog.docker.com/author/chanezon/
[15]:https://blog.docker.com/tag/cloud-native-computing-foundation/
[16]:https://blog.docker.com/tag/cncf/
[17]:https://blog.docker.com/tag/container-runtime/
[18]:https://blog.docker.com/tag/containerd/
[19]:https://blog.docker.com/tag/cri-containerd/
[20]:https://blog.docker.com/tag/grpc/
[21]:https://blog.docker.com/tag/kubernetes/
[22]:https://blog.docker.com/2016/12/introducing-containerd/
[23]:https://blog.docker.com/2017/03/docker-donates-containerd-to-cncf/
[24]:http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html
[25]:http://www.grpc.io/
[26]:https://prometheus.io/
[27]:https://github.com/opencontainers/runc
[28]:http://github.com/containerd/containerd

View File

@ -1,4 +1,3 @@
translating by lujun9972
Sysadmin 101: Troubleshooting
======
I typically keep this blog strictly technical, keeping observations, opinions and the like to a minimum. But this, and the next few posts will be about basics and fundamentals for starting out in system administration/SRE/system engineer/sysops/devops-ops (whatever you want to call yourself) roles more generally.

View File

@ -1,74 +0,0 @@
Ubuntu Updates for the Meltdown / Spectre Vulnerabilities
============================================================
![](https://insights.ubuntu.com/wp-content/uploads/0372/Screenshot-from-2018-01-04-12-39-25.png)
* For up-to-date patch, package, and USN links, please refer to: [https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown][2]
Unfortunately, youve probably already read about one of the most widespread security issues in modern computing history — colloquially known as “[Meltdown][5]” ([CVE-2017-5754][6]) and “[Spectre][7]” ([CVE-2017-5753][8] and [CVE-2017-5715][9]) — affecting practically every computer built in the last 10 years, running any operating system.  That includes [Ubuntu][10].
I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry-standard best practice, which has broken down in this case.
At its heart, this vulnerability is a CPU hardware architecture design issue.  But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable.  As a result, operating system kernels — Windows, MacOS, Linux, and many others — are being patched to mitigate the critical security vulnerability.
Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Years holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.
Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible.  Updates will be available for:
* Ubuntu 17.10 (Artful) — Linux 4.13 HWE
* Ubuntu 16.04 LTS (Xenial) — Linux 4.4 (and 4.4 HWE)
* Ubuntu 14.04 LTS (Trusty) — Linux 3.13
* Ubuntu 12.04 ESM** (Precise) — Linux 3.2
* Note that an [Ubuntu Advantage license][1] is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the [KPTI][11] patchset as integrated upstream.
Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonicals [Certified Public Clouds][12] including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.
These kernel fixes will not be [Livepatch-able][13]. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.
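For the impatient, applying the fix is the usual kernel upgrade plus a reboot. The following is a hedged sketch of how one might install the update and then confirm the mitigation is active; the sysfs `vulnerabilities` files and the exact log message only exist on kernels new enough to report them:
```
# Check the running kernel, pull in the patched packages, and reboot.
uname -r
sudo apt update && sudo apt dist-upgrade
sudo reboot

# After rebooting, kernels that expose mitigation status report it here:
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null
# KPTI also announces itself in the kernel log on supported kernels:
dmesg | grep -i isolation
```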
Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.
We don't have a performance analysis to share at this time, but please do stay tuned here, as we'll follow up with that as soon as possible.
Thanks,
[@DustinKirkland][14]
VP of Product
Canonical / Ubuntu
### About the author
![Dustin's photo](https://insights.ubuntu.com/wp-content/uploads/6f45/kirkland.jpg)
Dustin Kirkland is part of Canonical's Ubuntu Product and Strategy team, working for Mark Shuttleworth, and leading the technical strategy, road map, and life cycle of the Ubuntu Cloud and IoT commercial offerings. Formerly the CTO of Gazzang, a venture funded start-up acquired by Cloudera, Dustin designed and implemented an innovative key management system for the cloud, called zTrustee, and delivered comprehensive security for cloud and big data platforms with eCryptfs and other encryption technologies. Dustin is an active Core Developer of the Ubuntu Linux distribution, maintainer of 20+ open source projects, and the creator of Byobu, DivItUp.com, and LinuxSearch.org. A Fightin' Texas Aggie Class of 2001 graduate, Dustin lives in Austin, Texas, with his wife Kim, daughters, and his Australian Shepherds, Aggie and Tiger. Dustin is also an avid home brewer.
[More articles by Dustin][3]
--------------------------------------------------------------------------------
via: https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/
作者:[Dustin Kirkland][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://insights.ubuntu.com/author/kirkland/
[1]:https://www.ubuntu.com/support/esm
[2]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
[3]:https://insights.ubuntu.com/author/kirkland/
[4]:https://insights.ubuntu.com/author/kirkland/
[5]:https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)
[6]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html
[7]:https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
[8]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5753.html
[9]:https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5715.html
[10]:https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
[11]:https://lwn.net/Articles/742404/
[12]:https://partners.ubuntu.com/programmes/public-cloud
[13]:https://www.ubuntu.com/server/livepatch
[14]:https://twitter.com/dustinkirkland

View File

@ -1,85 +0,0 @@
Reckoning The Spectre And Meltdown Performance Hit For HPC
============================================================
![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2015/03/intel-chip-logo-bw-200x147.jpg)
While no one has yet created an exploit to take advantage of the Spectre and Meltdown speculative execution vulnerabilities that were exposed by Google six months ago and that were revealed in early January, it is only a matter of time. The [patching frenzy has not settled down yet][2], and a big concern is not just whether these patches fill the security gaps, but at what cost they do so in terms of application performance.
To try to ascertain the performance impact of the Spectre and Meltdown patches, most people have relied on comments from Google on the negligible nature of the performance hit on its own applications and some tests done by Red Hat on a variety of workloads, [which we profiled in our initial story on the vulnerabilities][3]. This is a good starting point, but what companies really need to do is profile the performance of their applications before and after applying the patches and in such a fine-grained way that they can use the data to debug the performance hit and see if there is any remediation they can take to alleviate the impact.
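As a purely illustrative sketch of that kind of before/after profiling (the job name, run count and script name are placeholders, not anything the vendors used), one could wrap a representative job in a small timing harness, run it on the unpatched kernel, then patch, reboot, and run it again with a different label:
```
#!/bin/bash
# Hypothetical timing harness: run the same job several times and log
# wall-clock seconds so the "before" and "after" kernels can be compared.
APP="./my_hpc_job"           # placeholder for your real benchmark command
LABEL=${1:-before-patch}     # e.g. ./timeit.sh before-patch / ./timeit.sh after-patch
RUNS=20

for i in $(seq 1 "$RUNS"); do
    /usr/bin/time -f "%e" -o "${LABEL}.times" -a $APP > /dev/null
done

echo "median wall-clock seconds for ${LABEL}:"
sort -n "${LABEL}.times" | awk '{ a[NR] = $1 } END { print a[int((NR + 1) / 2)] }'
```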
In the meantime, we are relying on researchers and vendors to figure out the performance impacts. Networking chip maker Mellanox Technologies, always eager to promote the benefits of the offload model of its switch and network interface chips, has run some tests to show the effects of the Spectre and Meltdown patches on high performance networking for various workloads and using various networking technologies, including its own Ethernet and InfiniBand devices and Intel's OmniPath. Some HPC researchers at the University of Buffalo have also done some preliminary benchmarking of selected HPC workloads to see the effect on compute and network performance. This is a good starting point, but is far from a complete picture of the impact that might be seen on HPC workloads after organizations deploy the Spectre and Meltdown patches to their systems.
To recap, here is what Red Hat found out when it tested the initial Spectre and Meltdown patches running its Enterprise Linux 7 release on servers using Intels “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake” Xeon SP processors:
* **Measurable, 8 percent to 19 percent:** Highly cached random memory, with buffered I/O, OLTP database workloads, and benchmarks with high kernel-to-user space transitions are impacted between 8 percent and 19 percent. Examples include OLTP Workloads (TPC), sysbench, pgbench, netperf (< 256 byte), and fio (random I/O to NvME).
* **Modest, 3 percent to 7 percent:** Database analytics, Decision Support System (DSS), and Java VMs are impacted less than the Measurable category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests to moderate level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour and overall analytic timing (sec).
* **Small, 2 percent to 5 percent:** HPC CPU-intensive workloads are affected the least with only 2 percent to 5 percent performance impact because jobs run mostly in user space and are scheduled using CPU pinning or NUMA control. Examples include Linpack NxN on X86 and SPECcpu2006.
* **Minimal impact:** Linux accelerator technologies that generally bypass the kernel in favor of user direct access are the least affected, with less than 2% overhead measured. Examples tested include DPDK (VsPERF at 64 byte) and OpenOnload (STAC-N). Userspace accesses to VDSO like get-time-of-day are not impacted. We expect similar minimal impact for other offloads.
And just to remind you, according to Red Hat, containerized applications running atop Linux do not incur an extra Spectre or Meltdown penalty compared to applications running on bare metal, because they are implemented as generic Linux processes themselves. But for applications running inside virtual machines atop hypervisors, Red Hat does expect that, thanks to the increase in the frequency of user-to-kernel transitions, the performance hit will be higher. (How much has not yet been revealed.)
Gilad Shainer, the vice president of marketing for the InfiniBand side of the Mellanox house, shared some initial performance data from the companys labs with regard to the Spectre and Meltdown patches. ([The presentation is available online here.][4])
In general, Shainer tells _The Next Platform_, the offload model that Mellanox employs in its InfiniBand switches (RDMA is a big component of this) and in its Ethernet (the RoCE clone of RDMA is used here) is a very big deal, given the fact that the network drivers bypass the operating system kernel. The exploits take advantage, in one of three forms, of the porous barrier between the kernel and user spaces in the operating systems, so anything that is kernel-heavy will be adversely affected. This, says Shainer, includes the TCP/IP protocol that underpins Ethernet as well as the OmniPath protocol, which by its nature tries to have the CPUs in the system do a lot of the network processing. Intel and others who have used an onload model have contended that this allows for networks to be more scalable, and clearly there are very scalable InfiniBand and OmniPath networks, with many thousands of nodes, so both approaches seem to work in production.
Here are the feeds and speeds on the systems that Mellanox tested on two sets of networking tests. For the comparison of Ethernet with RoCE added and standard TCP over Ethernet, the hardware was a two-socket server using Intel's Xeon E5-2697A v4 running at 2.60 GHz. This machine was configured with Red Hat Enterprise Linux 7.4, with kernel versions 3.10.0-693.11.6.el7.x86_64 and 3.10.0-693.el7.x86_64. (Those numbers _are_ different: one of them has an _11.6_ in the middle.) The machines were equipped with ConnectX-5 server adapters with firmware 16.22.0170 and the MLNX_OFED_LINUX-4.3-0.0.5.0 driver. The workload that was tested was not a specific HPC application, but rather a very low-level, homegrown interconnect benchmark that is used to stress switch chips and NICs to see their peak _sustained_ performance, as distinct from peak _theoretical_ performance, which is the absolute ceiling. This particular test was run on a two-node cluster, passing data from one machine to the other.
Here is how the performance stacked up before and after the Spectre and Meltdown patches were added to the systems:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-roce-versus-tcp.jpg)][5]
As you can see, at this very low level, there is no impact on network performance between two machines supporting RoCE on Ethernet, but running plain vanilla TCP without an offload on top of Ethernet, there are some big performance hits. Interestingly, on this low-level test, the impact was greatest on small message sizes in the TCP stack and then disappeared as the message sizes got larger.
On a separate round of tests pitting InfiniBand from Mellanox against OmniPath from Intel, the server nodes were configured with a pair of Intel Xeon SP Gold 6138 processors running at 2 GHz, also with Red Hat Enterprise Linux 7.4 with the 3.10.0-693.el7.x86_64 and 3.10.0-693.11.6.el7.x86_64 kernel versions. The OmniPath adapter uses the IntelOPA-IFS.RHEL74-x86_64.10.6.1.0.2 driver and the Mellanox ConnectX-5 adapter uses the MLNX_OFED 4.2 driver.
Here is how the InfiniBand and OmniPath protocols did on the tests before and after the patches:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-infiniband-versus-omnipath.jpg)][6]
Again, thanks to the offload model and the fact that this was a low level benchmark that did not hit the kernel very much (and some HPC applications might cross that boundary and therefore invoke the Spectre and Meltdown performance penalties), there was no real effect on the two-node cluster running InfiniBand. With the OmniPath system, the impact was around 10 percent for small message sizes, and then grew to 25 percent or so once the message sizes transmitted reached 512 bytes.
We have no idea what the performance implications are for clusters of more than two machines using the Mellanox approach. It would be interesting to see if the degradation compounds or doesnt.
### Early HPC Performance Tests
While such low level benchmarks provide some initial guidance on what the effect might be of the Spectre and Meltdown patches on HPC performance, what you really need is a benchmark run of real HPC applications running on clusters of various sizes, both before and after the Spectre and Meltdown patches are applied to the Linux nodes. A team of researchers led by Nikolay Simakov at the Center For Computational Research at SUNY Buffalo fired up some HPC benchmarks and a performance monitoring tool derived from the National Science Foundations Extreme Digital (XSEDE) program to see the effect of the Spectre and Meltdown patches on how much work they could get done as gauged by wall clock time to get that work done.
The paper that Simakov and his team put together on the initial results [is found here][7]. The tool that was used to monitor the performance of the systems was called XD Metrics on Demand, or XDMoD, and it was open sourced and is available for anyone to use. (You might consider [Open XDMoD][8] for your own metrics to determine the performance implications of the Spectre and Meltdown patches.) The benchmarks tested by the SUNY Buffalo researchers included the NAMD molecular dynamics and NWChem computational chemistry applications, as well as the HPC Challenge suite, which itself includes the STREAM memory bandwidth test and the NASA Parallel Benchmarks (NPB), the Interconnect MPI Benchmarks (IMB). The researchers also tested the IOR file reading and the MDTest metadata benchmark tests from Lawrence Livermore National Laboratory. The IOR and MDTest benchmarks were run in local mode and in conjunction with a GPFS parallel file system running on an external 3 PB storage cluster. (The tests with a “.local” suffix in the table are run on storage in the server nodes themselves.)
SUNY Buffalo has an experimental cluster with two-socket machines based on Intel “Nehalem” Xeon L5520 processors, which have eight cores and which are, by our reckoning, very long in the tooth indeed in that they are nearly nine years old. Each node has 24 GB of main memory and has 40 Gb/sec QDR InfiniBand links cross connecting them together. The systems are running the latest CentOS 7.4.1708 release, without and then with the patches applied. (The same kernel patches outlined above in the Mellanox test.) Simakov and his team ran each benchmark on a single node configuration and then ran the benchmark on a two node configuration, and it shows the difference between running a low-level benchmark and actual applications when doing tests. Take a look at the table of results:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/suny-buffalo-spectre-meltdown-test-table.jpg)][9]
The “before” measurements for each application were based on around 20 runs, and the “after” measurements on around 50 runs. For the core HPC applications (NAMD, NWChem, and the elements of HPCC), the performance degradation was between 2 percent and 3 percent, consistent with what Red Hat told people to expect back in the first week that the Spectre and Meltdown vulnerabilities were revealed and the initial patches were available. However, moving on to two-node configurations, where network overhead was taken into account, the performance impact ranged from 5 percent to 11 percent. This is more than you would expect based on the low-level benchmarks that Mellanox has done. Just to make things interesting, on the IOR and MDTest benchmarks, moving from one to two nodes actually lessened the performance impact; running the IOR test on the local disks resulted in a smaller performance hit than over the network for a single node, but was not as low as for a two-node cluster running out to the GPFS file system.
There is a lot of food for thought in this data, to say the least.
What we want to know and what the SUNY Buffalo researchers are working on is what happens to performance on these HPC applications when the cluster is scaled out.
“We will know that answer soon,” Simakov tells  _The Next Platform_ . “But there are only two scenarios that are possible. Either it is going to get worse or it is going to stay about the same as a two-node cluster. We think that it will most likely stay the same, because all of the MPI communication happens through the shared memory on a single node, and when you get to two nodes, you get it into the network fabric and at that point, you are probably paying all of the extra performance penalties.”
We will update this story with data on larger scale clusters as soon as Simakov and his team provide the data.
--------------------------------------------------------------------------------
via: https://www.nextplatform.com/2018/01/30/reckoning-spectre-meltdown-performance-hit-hpc/
作者:[Timothy Prickett Morgan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nextplatform.com/author/tpmn/
[1]:https://www.nextplatform.com/author/tpmn/
[2]:https://www.nextplatform.com/2018/01/18/datacenters-brace-spectre-meltdown-impact/
[3]:https://www.nextplatform.com/2018/01/08/cost-spectre-meltdown-server-taxes/
[4]:http://www.mellanox.com/related-docs/presentations/2018/performance/Spectre-and-Meltdown-Performance.pdf?homepage
[5]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-roce-versus-tcp.jpg
[6]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-infiniband-versus-omnipath.jpg
[7]:https://arxiv.org/pdf/1801.04329.pdf
[8]:http://open.xdmod.org/7.0/index.html
[9]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/suny-buffalo-spectre-meltdown-test-table.jpg

View File

@ -1,59 +0,0 @@
Downloading all the Critical Role podcasts in one batch
======
I've been watching [Critical Role][1]1 for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.
I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.
After the 10th time opening the terminal on my phone to download the podcast using some `wget` magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.
I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using `youtube-dl` to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google.
I then had the idea to use the iTunes RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:
```
https://itunes.apple.com/lookup?id=1243705452&entity=podcast
```
Surprise surprise: from the JSON file this link points to, I found out the main Critical Role podcast page [has a proper RSS feed][2]. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.
Anyway, once you have the RSS feed, it's only a matter of using `grep` and `sed` until you get what you want.
Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!
Video clip loop of the Critical Role doing a victory dance.
### Script
Here's the bash script I wrote. You will need `recode` to run it, as the RSS feed includes some HTML entities.
```
# Get the whole RSS feed
wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/
# Extract the URLS and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
| sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )
# Download all the episodes under their titles
for i in ${!titles[*]}
do
wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]}
done
```
1 - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.
--------------------------------------------------------------------------------
via: https://veronneau.org/downloading-all-the-critical-role-podcasts-in-one-batch.html
作者:[Louis-Philippe Véronneau][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://veronneau.org/
[1]:https://en.wikipedia.org/wiki/Critical_Role
[2]:http://criticalrolepodcast.geekandsundry.com/feed/

View File

@ -1,108 +0,0 @@
Announcing NGINX Unit 1.0
============================================================
Today, April 12, marks a significant milestone in the development of [NGINX Unit][8], our dynamic web and application server. Approximately six months after its [first public release][9], we're now happy to announce that NGINX Unit is generally available and production-ready. NGINX Unit is our new open source initiative led by Igor Sysoev, creator of the original NGINX Open Source software, which is now used by more than [409 million websites][10].
“I set out to make an application server which will be remotely and dynamically configured, and able to switch dynamically from one language or application version to another,” explains Igor. “Dynamic configuration and switching I saw as being certainly the main problem. People want to reconfigure servers without interrupting client processing.”
NGINX Unit is dynamically configured using a REST API; there is no static configuration file. All configuration changes happen directly in memory. Configuration changes take effect without requiring process reloads or service interruptions.
![NGINX runs Go, Perl, PHP, Python, and Ruby together on the same server](https://cdn-1.wp.nginx.com/wp-content/uploads/2017/09/dia-FM-2018-04-11-what-is-nginx-unit-01_1024x725-1024x725.png)
NGINX Unit runs multiple languages simultaneously
“The dynamic switching requires that we can run different languages and language versions in one server,” continues Igor.
As of Release 1.0, NGINX Unit supports Go, Perl, PHP, Python, and Ruby on the same server. Multiple language versions are also supported, so you can, for instance, run applications written for PHP 5 and PHP 7 on the same server. Support for additional languages, including Java, is planned for future NGINX Unit releases.
Note: We have an additional blog post on [how to configure NGINX, NGINX Unit, and WordPress][11] to work together.
Igor studied at Moscow State Technical University, which was a pioneer in the Russian space program, and April 12 has a special significance. “This is the anniversary of the first manned spaceflight in history, made by [Yuri Gagarin][12]. The first public version of NGINX (0.1.0) was released on [October 4, 2004][7], the anniversary of the [Sputnik][13] launch, and NGINX 1.0 was launched on April 12, 2011.”
### What Is NGINX Unit?
NGINX Unit is a dynamic web and application server, suitable for both standalone applications and distributed, microservices application architectures. It launches and scales application processes on demand, executing each application instance in its own secure sandbox.
NGINX Unit manages and routes all incoming network transactions to the application through a separate “router” process, so it can rapidly implement configuration changes without interrupting service.
“The configuration is in JSON format, so users can edit it manually, and it's very suitable for scripting. We hope to add capabilities to [NGINX Controller][14] and [NGINX Amplify][15] to work with Unit configuration too,” explains Igor.
The NGINX Unit configuration process is described thoroughly in the [documentation][16].
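As a minimal sketch of what that looks like in practice (the listener port, application name and document root here are made up, and the control-socket path depends on how Unit was packaged on your system), you push a JSON object to the control socket and read it back with plain `curl`:
```
# Write a tiny configuration: one listener routed to one PHP application.
cat > /tmp/unit-config.json <<'EOF'
{
  "listeners": {
    "*:8300": { "application": "hello" }
  },
  "applications": {
    "hello": {
      "type": "php",
      "root": "/srv/hello",
      "index": "index.php"
    }
  }
}
EOF

# Push the configuration to the running unitd instance and read it back.
sudo curl -X PUT -d @/tmp/unit-config.json \
     --unix-socket /var/run/control.unit.sock http://localhost/config
sudo curl --unix-socket /var/run/control.unit.sock http://localhost/config
```
Because the change is applied directly in memory, the new listener starts serving immediately, without restarting the router process.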
“Now Unit can run Python, PHP, Ruby, Perl and Go: five languages. For example, during our beta, one of our users used Unit to run a number of different PHP platform versions on a single host,” says Igor.
NGINX Units ability to run multiple language runtimes is based on its internal separation between the router process, which terminates incoming HTTP requests, and groups of application processes, which implement the application runtime and execute application code.
![NGINX Unit architecture](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/04/dia-FM-2018-04-11-Unit-1.0.0-blog-router-process-01-horiz_1024x576-1024x576.png)
NGINX Unit architecture
The router process is persistent (it never restarts), meaning that configuration updates can be implemented seamlessly, without any interruption in service. Each application process is deployed in its own sandbox (with support for [Linux control groups][17], or cgroups, under active development), so that NGINX Unit provides secure isolation for user code.
### Whats Next for NGINX Unit?
The next milestones for the NGINX Unit engineering team after Release 1.0 are concerned with HTTP maturity, serving static content, and additional language support.
“We plan to add SSL and HTTP/2 capabilities in Unit,” says Igor. “Also, we plan to support routing in configurations; currently, we have direct mapping from one listen port to one application. We plan to add routing using URIs and hostnames, etc.”
“In addition, we want to add more language support to Unit. We are completing the Ruby implementation, and next we will consider Node.js and Java. Java will be added in a Tomcat-compatible fashion.”
The end goal for NGINX Unit is to create an open source platform for distributed, polyglot applications which can run application code securely, reliably, and with the best possible performance. The platform will self-manage, with capabilities such as autoscaling to meet SLAs within resource constraints, and service discovery and internal load balancing to make it easy to create a [service mesh][18].
### NGINX Unit and the NGINX Application Platform
An NGINX Unit platform will typically be delivered with a front-end tier of NGINX Open Source or NGINX Plus reverse proxies to provide ingress control, edge load balancing, and security. The joint platform (NGINX Unit and NGINX or NGINX Plus) can then be managed fully using NGINX Controller to monitor, configure, and control the entire platform.
![NGINX Application Platform for microservices and monolithic applications with NGINX Controller, NGINX Plus, and NGINX Unit](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/03/nginx.com-NAP-diagram-01ag_Main-Products-print-Roboto-white-1024x1008.png)
The NGINX Application Platform is our vision for building microservices
Together, these three components (NGINX Plus, NGINX Unit, and NGINX Controller) make up the [NGINX Application Platform][19]. The NGINX Application Platform is a product suite that delivers load balancing, caching, API management, a WAF, and application serving, with rich management and control planes that simplify the tasks of operating monolithic, microservices, and transitional applications.
### Getting Started with NGINX Unit
NGINX Unit is free and open source. Please see the [installation instructions][20] to get started. We have prebuilt packages for most operating systems, including Ubuntu and Red Hat Enterprise Linux. We also make a [Docker container][21] available on Docker Hub.
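For a quick look without installing packages, the Docker Hub image linked above can be pulled and started directly. This is just a sketch; the tools bundled in the image and the control-socket path inside it may differ between tags, so treat those details as assumptions:
```
# Pull the official image, start a container, and open a shell inside it
# to poke at the control socket.
docker pull nginx/unit
docker run -d --name unit nginx/unit
docker exec -it unit /bin/sh
```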
The source code is available in our [Mercurial repository][22] and [mirrored on GitHub][23]. The code is available under the Apache 2.0 license. You can compile NGINX Unit yourself on most popular Linux and Unix systems.
If you have any questions, please use the [GitHub issues board][24] or the [NGINX Unit mailing list][25]. Wed love to hear how you are using NGINX Unit, and we welcome [code contributions][26] too.
We're also happy to extend technical support for NGINX Unit to NGINX Plus customers with Professional or Enterprise support contracts. Please refer to our [Support page][27] for details of the support services we can offer.
--------------------------------------------------------------------------------
via: https://www.nginx.com/blog/nginx-unit-1-0-released/
作者:[www.nginx.com ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:www.nginx.com
[1]:https://twitter.com/intent/tweet?text=Announcing+NGINX+Unit+1.0+by+%40nginx+https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
[2]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&summary=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D
[3]:https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&t=Announcing%20NGINX%20Unit%201.0&text=Today,%20April%2012,%20marks%20a%20significant%20milestone%20in%20the%20development%20of%20NGINX%C2%A0Unit,%20our%20dynamic%20web%20and%20application%20server.%20Approximately%20six%20months%20after%20its%20first%20public%20release,%20we%E2%80%99re%20now%20happy%20to%20announce%20that%20NGINX%C2%A0Unit%20is%20generally%20available%20and%20production%E2%80%91ready.%20NGINX%C2%A0Unit%20is%20our%20new%20open%20source%20initiative%20led%20by%20Igor%C2%A0Sysoev,%20creator%20of%20the%20original%20NGINX%20Open%20Source%20[%E2%80%A6]
[4]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
[5]:https://plus.google.com/share?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F
[6]:http://www.reddit.com/submit?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&text=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D
[7]:http://nginx.org/en/CHANGES
[8]:https://www.nginx.com/products/nginx-unit/
[9]:https://www.nginx.com/blog/introducing-nginx-unit/
[10]:https://news.netcraft.com/archives/2018/03/27/march-2018-web-server-survey.html
[11]:https://www.nginx.com/blog/installing-wordpress-with-nginx-unit/
[12]:https://en.wikipedia.org/wiki/Yuri_Gagarin
[13]:https://en.wikipedia.org/wiki/Sputnik_1
[14]:https://www.nginx.com/products/nginx-controller/
[15]:https://www.nginx.com/products/nginx-amplify/
[16]:http://unit.nginx.org/configuration/
[17]:https://en.wikipedia.org/wiki/Cgroups
[18]:https://www.nginx.com/blog/what-is-a-service-mesh/
[19]:https://www.nginx.com/products
[20]:http://unit.nginx.org/installation/
[21]:https://hub.docker.com/r/nginx/unit/
[22]:http://hg.nginx.org/unit
[23]:https://github.com/nginx/unit
[24]:https://github.com/nginx/unit/issues
[25]:http://mailman.nginx.org/mailman/listinfo/unit
[26]:https://unit.nginx.org/contribution/
[27]:https://www.nginx.com/support
[28]:https://www.nginx.com/blog/tag/releases/
[29]:https://www.nginx.com/blog/tag/nginx-unit/

View File

@ -1,3 +1,5 @@
translating---geekpi
How To Browse Stack Overflow From Terminal
======

View File

@ -1,294 +0,0 @@
Things to do After Installing Ubuntu 18.04
======
**Brief: This list of things to do after installing Ubuntu 18.04 helps you get started with Bionic Beaver for a smoother desktop experience.**
[Ubuntu][1] 18.04 Bionic Beaver releases today. You are perhaps already aware of the [new features in Ubuntu 18.04 LTS][2] release. If not, here's the video review of Ubuntu 18.04 LTS:
[Subscribe to YouTube Channel for more Ubuntu Videos][3]
If you opted to install Ubuntu 18.04, I have listed out a few recommended steps that you can follow to get started with it.
### Things to do after installing Ubuntu 18.04 Bionic Beaver
![Things to do after installing Ubuntu 18.04][4]
I should mention that the list of things to do after installing Ubuntu 18.04 depends a lot on you and your interests and needs. If you are a programmer, you'll focus on installing programming tools. If you are a graphic designer, you'll focus on installing graphics tools.
Still, there are a few things that should be applicable to most Ubuntu users. This list is composed of those things plus a few of my favorites.
Also, this list is for the default [GNOME desktop][5]. If you are using some other flavor like [Kubuntu][6], Lubuntu etc., then the GNOME-specific stuff won't be applicable to your system.
You don't have to follow each and every point on the list blindly. You should see if the recommended action suits your requirements or not.
With that said, let's get started with this list of things to do after installing Ubuntu 18.04.
#### 1\. Update the system
This is the first thing you should do after installing Ubuntu. Update the system without fail. It may sound strange because you just installed a fresh OS but still, you must check for the updates.
In my experience, if you don't update the system right after installing Ubuntu, you might face issues while trying to install a new program.
To update Ubuntu 18.04, press Super Key (Windows Key) to launch the Activity Overview and look for Software Updater. Run it to check for updates.
![Software Updater in Ubuntu 17.10][7]
**Alternatively**, you can use these famous commands in the terminal (use Ctrl+Alt+T):
```
sudo apt update && sudo apt upgrade
```
#### 2\. Enable additional repositories for more software
[Ubuntu has several repositories][8] from where it provides software for your system. These repositories are:
* Main - Free and open-source software supported by the Ubuntu team
* Universe - Free and open-source software maintained by the community
* Restricted - Proprietary drivers for devices
* Multiverse - Software restricted by copyright or legal issues
* Canonical Partners - Software packaged by Ubuntu for their partners
Enabling all these repositories will give you access to more software and proprietary drivers.
Go to Activity Overview by pressing Super Key (Windows key), and search for Software & Updates:
![Software and Updates in Ubuntu 17.10][9]
Under the Ubuntu Software tab, make sure you have the Main, Universe, Restricted and Multiverse repositories checked.
![Setting repositories in Ubuntu 18.04][10]
Now move to the **Other Software** tab, check the option of **Canonical Partners**.
![Enable Canonical Partners repository in Ubuntu 17.10][11]
You'll have to enter your password in order to update the software sources. Once it completes, you'll find more applications to install in the Software Center.
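If you prefer the terminal, roughly the same thing can be done with `add-apt-repository`. The component names below are standard, but treat the partner repository line as an assumption and double-check it against your `/etc/apt/sources.list`:
```
# Enable the extra components and the Canonical Partners repository from the terminal.
sudo add-apt-repository universe
sudo add-apt-repository multiverse
sudo add-apt-repository restricted
sudo add-apt-repository "deb http://archive.canonical.com/ubuntu bionic partner"
sudo apt update
```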
#### 3\. Install media codecs
In order to play media files like MP3, MPEG4, AVI etc., you'll need to install media codecs. Ubuntu has them in its repository but doesn't install them by default because of copyright issues in various countries.
As an individual, you can install these media codecs easily using the Ubuntu Restricted Extra package. Click on the link below to install it from the Software Center.
[Install Ubuntu Restricted Extras][12]
Or alternatively, use the command below to install it:
```
sudo apt install ubuntu-restricted-extras
```
#### 4\. Install software from the Software Center
Now that you have set up the repositories and installed the codecs, it is time to get software. If you are absolutely new to Ubuntu, please follow this [guide to installing software in Ubuntu][13].
There are several ways to install software. The most convenient way is to use the Software Center that has thousands of software available in various categories. You can install them in a few clicks from the software center.
![Software Center in Ubuntu 17.10 ][14]
Which software you would like to install depends on you. I'll suggest some of my favorites here.
  * **VLC** - media player for videos
  * **GIMP** - Photoshop alternative for Linux
  * **Pinta** - Paint alternative in Linux
  * **Calibre** - eBook management tool
  * **Chromium** - open source web browser
  * **Kazam** - screen recorder tool
  * [**Gdebi**][15] - lightweight package installer for .deb packages
  * **Spotify** - for streaming music
  * **Skype** - for video messaging
  * **Kdenlive** - [video editor for Linux][16]
  * **Atom** - [code editor][17] for programming
You may also refer to this list of [must-have Linux applications][18] for more software recommendations.
#### 5\. Install software from the Web
Though Ubuntu has thousands of applications in the software center, you may not find some of your favorite applications despite the fact that they support Linux.
Many software vendors provide ready-to-install .deb packages. You can download these .deb files from their websites and install them by double-clicking on them.
[Google Chrome][19] is one such application that you can download from the web and install.
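A downloaded .deb can also be installed from the terminal; the filename below is just an illustration of what the Chrome package is usually called, and `apt` will pull in any missing dependencies:
```
cd ~/Downloads
sudo apt install ./google-chrome-stable_current_amd64.deb
```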
#### 6\. Opt out of data collection in Ubuntu 18.04 (optional)
Ubuntu 18.04 collects some harmless statistics about your system hardware and your system installation preference. It also collects crash reports.
You'll be given the option to not send this data to Ubuntu servers when you log in to Ubuntu 18.04 for the first time.
![Opt out of data collection in Ubuntu 18.04][20]
If you missed it at that time, you can disable it by going to System Settings -> Privacy and then setting Problem Reporting to Manual.
![Privacy settings in Ubuntu 18.04][21]
#### 7\. Customize the GNOME desktop (Dock, themes, extensions and more)
The GNOME desktop looks good in Ubuntu 18.04, but that doesn't mean you cannot change it.
You can do a few visual changes from the System Settings. You can change the wallpaper of the desktop and the lock screen, you can change the position of the dock (launcher on the left side), change power settings, Bluetooth etc. In short, you can find many settings that you can change as per your need.
![Ubuntu 17.10 System Settings][22]
Changing themes and icons is the major way to change the looks of your system. I advise going through the list of [best GNOME themes][23] and [icons for Ubuntu][24]. Once you have found the theme and icon of your choice, you can use them with the GNOME Tweaks tool.
You can install GNOME Tweaks via the Software Center or you can use the command below to install it:
```
sudo apt install gnome-tweak-tool
```
Once it is installed, you can easily [install new themes and icons][25].
![Change theme is one of the must to do things after installing Ubuntu 17.10][26]
You should also have a look at [using GNOME extensions][27] to further enhance the looks and capabilities of your system. I made a video about using GNOME extensions in 17.10, and you can follow the same steps for Ubuntu 18.04.
If you are wondering which extension to use, do take a look at this list of [best GNOME extensions][28].
I also recommend reading this article on [GNOME customization in Ubuntu][29] so that you can know the GNOME desktop in detail.
#### 8\. Prolong your battery and prevent overheating
Let's move on to [preventing overheating in Linux laptops][30]. TLP is a wonderful tool that controls CPU temperature and extends your laptop's battery life in the long run.
Make sure that you haven't installed any other power-saving application such as [Laptop Mode Tools][31]. You can install TLP using the command below in a terminal:
```
sudo apt install tlp tlp-rdw
```
Once installed, run the command below to start it:
```
sudo tlp start
```
#### 9\. Save your eyes with Nightlight
Night Light is my favorite feature of the GNOME desktop. Keeping [your eyes safe at night][32] from the computer screen is very important. Reducing blue light helps reduce eye strain at night.
![flux effect][33]
GNOME provides a built-in Night Light option, which you can activate in the System Settings.
Just go to System Settings-> Devices-> Displays and turn on the Night Light option.
![Enabling night light is a must to do in Ubuntu 17.10][34]
#### 10\. Disable automatic suspend for laptops
Ubuntu 18.04 comes with a new automatic suspend feature for laptops. If the system is running on battery and is inactive for 20 minutes, it will go into suspend mode.
I understand that the intention is to save battery life, but it is an inconvenience as well. You can't keep the power plugged in all the time because it's not good for the battery life. And you may need the system to be running even when you are not using it.
Thankfully, you can change this behavior. Go to System Settings -> Power. Under Suspend & Power Button section, either turn off the Automatic Suspend option or extend its time period.
![Disable automatic suspend in Ubuntu 18.04][35]
You can also change the screen dimming behavior in here.
#### 11\. System cleaning
I have written in detail about [how to clean up your Ubuntu system][36]. I recommend reading that article to know various ways to keep your system free of junk.
Normally, you can use this little command to free up space from your system:
```
sudo apt autoremove
```
It's a good idea to run this command every once in a while. If you don't like the command line, you can use a GUI tool like [Stacer][37] or [BleachBit][38].
#### 12\. Going back to Unity or Vanilla GNOME (not recommended)
If you have been using Unity or GNOME in the past, you may not like the new customized GNOME desktop in Ubuntu 18.04. Ubuntu has customized GNOME so that it resembles Unity but at the end of the day, it is neither completely Unity nor completely GNOME.
So if you are a hardcore Unity or GNOME fan, you may want to use your favorite desktop in its real form. I wouldn't recommend it, but if you insist, here are some tutorials for you:
#### 13\. Can't log in to Ubuntu 18.04 after an incorrect password? Here's a workaround
I noticed a [little bug in Ubuntu 18.04][39] while trying to change the desktop session to Ubuntu Community theme. It seems if you try to change the sessions at the login screen, it rejects your password first and at the second attempt, the login gets stuck. You can wait for 5-10 minutes to get it back or force power it off.
The workaround here is that after it displays the incorrect password message, click Cancel, then click your name, then enter your password again.
#### 14\. Experience the Community theme (optional)
Ubuntu 18.04 was supposed to have a dashing new theme developed by the community. The theme could not be completed, so it could not become the default look of the Bionic Beaver release. I am guessing that it will be the default theme in Ubuntu 18.10.
![Ubuntu 18.04 Communitheme][40]
You can try out the aesthetic theme even today. [Installing Ubuntu Community Theme][41] is very easy. Just look for it in the software center, install it, restart your system and then at the login choose the Communitheme session.
#### 15\. Get Windows 10 in VirtualBox (if you need it)
In a situation where you must use Windows for some reason, you can [install Windows in VirtualBox inside Linux][42]. It will run as a regular Ubuntu application.
It's not the best way, but it still gives you an option. You can also [use WINE to run Windows software on Linux][43]. In both cases, I suggest trying the alternative native Linux application first before jumping to a virtual machine or WINE.
#### What do you do after installing Ubuntu?
Those were my suggestions for getting started with Ubuntu. There are many more tutorials that you can find under [Ubuntu 18.04][44] tag. You may go through them as well to see if there is something useful for you.
Enough from my side. Your turn now. What are the items on your list of **things to do after installing Ubuntu 18.04**? The comment section is all yours.
--------------------------------------------------------------------------------
via: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://www.ubuntu.com/
[2]:https://itsfoss.com/ubuntu-18-04-release-features/
[3]:https://www.youtube.com/c/itsfoss?sub_confirmation=1
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/things-to-after-installing-ubuntu-18-04-featured-800x450.jpeg
[5]:https://www.gnome.org/
[6]:https://kubuntu.org/
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-update-ubuntu-17-10.jpg
[8]:https://help.ubuntu.com/community/Repositories/Ubuntu
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-updates-ubuntu-17-10.jpg
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/repositories-ubuntu-18.png
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-repository-ubuntu-17-10.jpeg
[12]:apt://ubuntu-restricted-extras
[13]:https://itsfoss.com/remove-install-software-ubuntu/
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Ubuntu-software-center-17-10-800x551.jpeg
[15]:https://itsfoss.com/gdebi-default-ubuntu-software-center/
[16]:https://itsfoss.com/best-video-editing-software-linux/
[17]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[18]:https://itsfoss.com/essential-linux-applications/
[19]:https://www.google.com/chrome/
[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/opt-out-of-data-collection-ubuntu-18-800x492.jpeg
[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/privacy-ubuntu-18-04-800x417.png
[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/System-Settings-Ubuntu-17-10-800x573.jpeg
[23]:https://itsfoss.com/best-gtk-themes/
[24]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[25]:https://itsfoss.com/install-themes-ubuntu/
[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/GNOME-Tweak-Tool-Ubuntu-17-10.jpeg
[27]:https://itsfoss.com/gnome-shell-extensions/
[28]:https://itsfoss.com/best-gnome-extensions/
[29]:https://itsfoss.com/gnome-tricks-ubuntu/
[30]:https://itsfoss.com/reduce-overheating-laptops-linux/
[31]:https://wiki.archlinux.org/index.php/Laptop_Mode_Tools
[32]:https://itsfoss.com/night-shift-flux-ubuntu-linux/
[33]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/03/flux-eyes-strain.jpg
[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Enable-Night-Light-Feature-Ubuntu-17-10-800x396.jpeg
[35]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/disable-automatic-suspend-ubuntu-18-800x586.jpeg
[36]:https://itsfoss.com/free-up-space-ubuntu-linux/
[37]:https://itsfoss.com/optimize-ubuntu-stacer/
[38]:https://itsfoss.com/bleachbit-2-release/
[39]:https://gitlab.gnome.org/GNOME/gnome-shell/issues/227
[40]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubunt-18-theme.jpeg
[41]:https://itsfoss.com/ubuntu-community-theme/
[42]:https://itsfoss.com/install-windows-10-virtualbox-linux/
[43]:https://itsfoss.com/use-windows-applications-linux/
[44]:https://itsfoss.com/tag/ubuntu-18-04/

View File

@ -1,111 +0,0 @@
Checking out the notebookbar and other improvements in LibreOffice 6.0 FOSS adventures
======
With any new openSUSE release, I am interested in the improvements that the big applications have made. One of these big applications is LibreOffice. Ever since LibreOffice has forked from OpenOffice.org, there has been a constant delivery of new features and new fixes every 6 months. openSUSE Leap 15 brought us the upgrade from LibreOffice 5.3.3 to LibreOffice 6.0.4. In this post, I will highlight the improvements that I found most newsworthy.
### Notebookbar
One of the experimental features of LibreOffice 5.3 was the Notebookbar. In LibreOffice 6.0 this feature has matured a lot and has gained a new form: the groupedbar. Let's take a look at the 3 variants. You can enable the Notebookbar by clicking on View > Toolbar Layout and then Notebookbar.
![][1]
Please be aware that switching back to the Default Toolbar Layout is a bit of a hassle. To list the tricks:
* The contextual groups notebookbar shows the menubar by default. Make sure that you don't hide it. Change the Layout via the View menu in the menubar.
* The tabbed notebookbar has a hamburger menu on the upper right side. Select menubar. Then change the Layout via the View menu in the menubar.
* The groupedbar notebookbar has a menu dropdown menu on the upper right side. Make sure to maximize the window. Otherwise it might be hidden.
The most talked about version of the notebookbar is the tabbed version. This looks similar to the Microsoft Office 2007 ribbon. That fact alone is enough to ruffle some feathers in the open source community. In comparison to the ribbon, the tabs (other than Home) can feel rather empty. The reason for that is that the icons are not designed to be big and bold. Another reason is that there are no sub-sections in the tabs. In the Microsoft version of the ribbon, you have names of the sub-sections underneath the icons. This helps to fill the empty space. However, in terms of ease of use, this design does the job. It provides you with a lot of functions in an easy to understand interface.
![][2]
The most successful version of the notebookbar is in my opinion the groupedbar. It gives you all of the most needed functions in a single overview. And the dropdown menus (File / Edit / Styles / Format / Paragraph / Insert / Reference) all show useful functions that are not so often used.
![][3]
By the way, it also works great for Calc (Spreadsheets) and Impress (Presentations).
![][4]
![][5]
Finally there is the contextual groups version. The “groups” version is not very helpful. It shows a very limited number of basic functions. And it takes up a lot of space. If you want to use more advanced functions, you need to use the traditional menubar. The traditional menubar works perfectly, but in that case I'd rather combine it with the Default toolbar layout.
![][6]
The contextual single version is the better version. If you compare it to the “normal” single toolbar, it contains more functions and the order in which the functions are arranged is easier to use.
![][7]
There is no real need to make the switch to the notebookbar. But it provides you with choice. One of these user interfaces might just suit your taste.
### Microsoft Office compatibility
Microsoft Office compatibility (especially .docx, .xlsx and .pptx) is one of the things that I find very important. As a former Business Consultant I have created a lot of documents in the past. I have created 200+ page reports. They need to work flawlessly, including getting the page breaks right, which is incredibly difficult as the margins are never the same. Also the index, headers, footers, grouped drawings and SmartArt drawings need to display as originally composed. I have created large PowerPoint presentations with branded slides with +30 layouts, grouped drawings and SmartArt drawings. I need these to render perfectly in the slideshow. Furthermore, I have created large multi-tabbed Excel sheets with filters, pivot tables, graphs and conditional formatting. All of these need to be preserved when I open these files in LibreOffice.
And no, LibreOffice is still not perfect. But damn, it is close. This time I have seen no major problems when opening older documents. Which means that LibreOffice finally gets SmartArt drawings right. In Writer, page breaks fall in different places compared to Microsoft Word. That has always been an issue. But I don't see many other issues. In Calc, the rendering of the graphs is less beautiful. But it's similar enough to Excel. In Impress, presentations can look strange, because sometimes you see bigger/smaller fonts in the same slide (and that is not on purpose). But I was very impressed to see branded slides with multiple sections render correctly. If I needed to score it, I would give LibreOffice a 7 out of 10 for Microsoft Office compatibility. A very solid score. Below are some examples of compatibility done right.
![][8]
![][9]
![][10]
### Noteworthy features
Finally, there are the noteworthy features. I will only highlight the ones that I find cool. The first one is the ability to rotate images to any degree. Below is an example of me rotating a Gecko.
![][11]
The second cool feature is that the old collection of autoformat table styles has been replaced with a new collection of table styles. You can access these styles via the menubar: Table > AutoFormat Styles. In the screenshots below, I show how to change a table from the Box List Green to the Box List Red format.
![][12]
![][13]
The third feature is the ability to copy-paste unformatted text in Calc. This is something I will use a lot, making it a cool feature.
![][14]
The final feature is the new and improved LibreOffice Online help. This is not the same as the LibreOffice help (press F1 to see what I mean). That is still there (and as far as I know unchanged). But this is the online wiki that you will find on the LibreOffice.org website. Some contributors obviously put a lot of effort in this feature. It looks good, now also on a mobile device. Kudos!
![][15]
If you want to learn about all of the other introduced features, read the [release notes][16]. They are really well written.
### And that's not all folks
I discussed LibreOffice on openSUSE Leap 15. However, LibreOffice is also available on Android and in the Cloud. You can get the Android version from the [Google Play Store][17]. And you can see the Cloud version in action if you go to the [Collabora website][18]. Check them out for yourselves.
--------------------------------------------------------------------------------
via: https://www.fossadventures.com/checking-out-the-notebookbar-and-other-improvements-in-libreoffice-6-0/
作者:[Martin De Boer][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.fossadventures.com/author/martin_de_boer/
[1]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice06.jpeg
[2]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice09.jpeg
[3]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice11.jpeg
[4]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice10.jpeg
[5]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice08.jpeg
[6]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice07.jpeg
[7]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice12.jpeg
[8]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice14.jpeg
[9]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice15.jpeg
[10]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice16.jpeg
[11]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice01.jpeg
[12]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice02.jpeg
[13]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice03.jpeg
[14]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice04.jpeg
[15]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice05.jpeg
[16]:https://wiki.documentfoundation.org/ReleaseNotes/6.0
[17]:https://play.google.com/store/apps/details?id=org.documentfoundation.libreoffice&hl=en
[18]:https://www.collaboraoffice.com/press-releases/collabora-office-6-0-released/

View File

@ -1,3 +1,4 @@
translating by runningwater)
Version Control Before Git with CVS
======
Github was launched in 2008. If your software engineering career, like mine, is no older than Github, then Git may be the only version control software you have ever used. While people sometimes grouse about its steep learning curve or unintuitive interface, Git has become everyone's go-to for version control. In Stack Overflow's 2015 developer survey, 69.3% of respondents used Git, almost twice as many as used the second-most-popular version control system, Subversion. After 2015, Stack Overflow stopped asking developers about the version control systems they use, perhaps because Git had become so popular that the question was uninteresting.
@ -296,7 +297,7 @@ via: https://twobithistory.org/2018/07/07/cvs.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,101 +0,0 @@
Anbox: How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way
======
**[Anbox][1], or Android in a Box, is a free and open source tool that allows running Android applications on Linux.** It works by running the Android runtime environment in an LXC container, recreating the directory structure of Android as a mountable loop image, while using the native Linux kernel to execute applications.
Its key features are security, performance, integration and convergence (scales across different form factors), according to its website.
**Using Anbox, each Android application or game is launched in a separate window, just like system applications** , and they behave more or less like regular windows, showing up in the launcher, can be tiled, etc.
By default, Anbox doesn't ship with the Google Play Store or support for ARM applications. To install applications you must download each app APK and install it manually using adb. Also, installing ARM applications or games doesn't work by default with Anbox - trying to install ARM apps results in the following error being displayed:
```
Failed to install PACKAGE.NAME.apk: Failure [INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113]
```
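For reference, manually sideloading a single APK with adb usually looks something like the sketch below (the APK file name is hypothetical, and it assumes the Anbox session manager is already running):
```
# list devices; the Anbox runtime should show up as an emulator device
adb devices
# install a hypothetical APK into the Anbox Android environment
adb install my-app.apk
```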
You can set up both Google Play Store and support for ARM applications (through libhoudini) manually for Android in a Box, but it's quite a complicated process. **To make it easier to install Google Play Store and Google Play Services on Anbox, and get it to support ARM applications and games (using libhoudini), the folks at [geeks-r-us.de][2] (linked article is in German) have created a [script][3] that automates these tasks.**
Before using this, I'd like to make it clear that not all Android applications and games work in Anbox, even after integrating libhoudini for ARM support. Some Android applications and games may not show up in the Google Play Store at all, while others may be available for installation but will not work. Also, some features may not be available in some applications.
### Install Google Play Store and enable ARM applications / games support on Anbox (Android in a Box)
These instructions will obviously not work if Anbox is not already installed on your Linux desktop. If you haven't already, install Anbox by following the installation instructions found [here][5]. Also, make sure to run `anbox.appmgr` at least once after installing Anbox and before using this script, to avoid running into issues.
1\. Install the required dependencies (`wget`, `lzip`, `unzip` and `squashfs-tools`).
In Debian, Ubuntu or Linux Mint, use this command to install the required dependencies:
```
sudo apt install wget lzip unzip squashfs-tools
```
2\. Download and run the script that automatically downloads and installs Google Play Store (and Google Play Services) and libhoudini (for ARM apps / games support) on your Android in a Box installation.
**Warning: never run a script you didn't write without knowing what it does. Before running this script, check out its [code][4].**
To download the script, make it executable and run it on your Linux desktop, use these commands in a terminal:
```
wget https://raw.githubusercontent.com/geeks-r-us/anbox-playstore-installer/master/install-playstore.sh
chmod +x install-playstore.sh
sudo ./install-playstore.sh
```
3\. To get Google Play Store to work in Anbox, you need to enable all the permissions for both Google Play Store and Google Play Services.
To do this, run Anbox:
```
anbox.appmgr
```
Then go to `Settings > Apps > Google Play Services > Permissions` and enable all available permissions. Do the same for Google Play Store!
You should now be able to login using a Google account into Google Play Store.
Without enabling all permissions for Google Play Store and Google Play Services, you may encounter an issue when trying to login to your Google account, with the following error message: " _Couldn't sign in. There was a problem communicating with Google servers. Try again later_ ", as you can see in this screenshot:
After logging in, you can disable some of the Google Play Store / Google Play Services permissions.
**If you're encountering some connectivity issues when logging in to your Google account on Anbox,** make sure the `anbox-bridge.sh` script is running:
* to start it:
```
sudo /snap/anbox/current/bin/anbox-bridge.sh start
```
* to restart it:
```
sudo /snap/anbox/current/bin/anbox-bridge.sh restart
```
You may also need to install the dnsmasq package if you continue to have connectivity issues with Anbox, according to [this GitHub issue comment][6].
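On Debian, Ubuntu or Linux Mint that would be, for example:
```
sudo apt install dnsmasq
```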
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/07/anbox-how-to-install-google-play-store.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://anbox.io/
[2]:https://geeks-r-us.de/2017/08/26/android-apps-auf-dem-linux-desktop/
[3]:https://github.com/geeks-r-us/anbox-playstore-installer/
[4]:https://github.com/geeks-r-us/anbox-playstore-installer/blob/master/install-playstore.sh
[5]:https://docs.anbox.io/userguide/install.html
[6]:https://github.com/anbox/anbox/issues/118#issuecomment-295270113

View File

@ -1,3 +1,5 @@
translating---geekpi
GPaste Is A Great Clipboard Manager For Gnome Shell
======
**[GPaste][1] is a clipboard management system that consists of a library, daemon, and interfaces for the command line and Gnome (using a native Gnome Shell extension).**

View File

@ -1,3 +1,5 @@
translating by lixinyuxx
5 reasons the i3 window manager makes Linux better
======

View File

@ -1,75 +0,0 @@
translating---geekpi
Publishing Markdown to HTML with MDwiki
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
There are plenty of reasons to like Markdown, a simple language with an easy-to-learn syntax that can be used with any text editor. Using tools like [Pandoc][1], you can convert Markdown text to [a variety of popular formats][2], including HTML. You can also automate that conversion process in a web server. An HTML5 and JavaScript application called [MDwiki][3], created by Timo Dörr, can take a stack of Markdown files and turn them into a website when requested from a browser. The MDwiki site includes a how-to guide and other information to help you get started:
![MDwiki site getting started][5]
What an MDwiki site looks like.
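As mentioned above, a one-off conversion with Pandoc (separate from MDwiki) might look like this sketch; the file names are hypothetical:
```
# convert a single Markdown file to a standalone HTML page
pandoc -f markdown -t html5 -s page.md -o page.html
```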
Inside the web server, a basic MDwiki site looks like this:
![MDwiki site inside web server][7]
What the webserver folder for that site looks like.
I renamed the MDwiki HTML file to `START.HTML` for this project. There is also one Markdown file that deals with navigation and a JSON file to hold a few configuration settings. Everything else is site content.
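For illustration only, the folder for such a site might be laid out roughly like this; apart from `START.HTML`, the file names are hypothetical:
```
START.HTML       # the renamed MDwiki HTML file
navigation.md    # the Markdown file that defines the navigation
config.json      # a few configuration settings
index.md         # site content
pages/           # more site content
```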
While the overall website design is pretty much fixed by MDwiki, the content, styling, and number of pages are not. You can view a selection of different sites generated by MDwiki at [the MDwiki site][8]. It is fair to say that MDwiki sites lack the visual appeal that a web designer could achieve—but they are functional, and users should balance their simple appearance against the speed and ease of creating and editing them.
Markdown comes in various flavors that extend a stable core functionality for different specific purposes. MDwiki uses GitHub flavor [Markdown][9], which adds features such as formatted code blocks and syntax highlighting for popular programming languages, making it well-suited for producing program documentation and tutorials.
MDwiki also supports what it calls "gimmicks," which add extra functionality such as embedding YouTube video content and displaying mathematical formulas. These are worth exploring if you need them for specific projects. I find MDwiki an ideal tool for creating technical documentation and educational resources. I have also discovered some tricks and hacks that might not be immediately apparent.
MDwiki works with any modern web browser when deployed in a web server; however, you do not need a web server if you access MDwiki with Mozilla Firefox. Most MDwiki users will opt to deploy completed projects on a web server to avoid excluding potential users, but development and testing can be done with just a text editor and Firefox. Completed MDwiki projects that are loaded into a Moodle Virtual Learning Environment (VLE) can be read by any modern browser, which could be useful in educational contexts. (This is probably also true for other VLE software, but you should test that.)
MDwiki's default color scheme is not ideal for all projects, but you can replace it with another theme downloaded from [Bootswatch.com][10]. To do this, simply open the MDwiki HTML file in an editor, take out the `extlib/css/bootstrap-3.0.0.min.css` code, and insert the downloaded Bootswatch theme. There is also an MDwiki gimmick that lets users choose a Bootswatch theme to replace the default after MDwiki loads in their browser. I often work with users who have visual impairments, and they tend to prefer high-contrast themes, with white text on a dark background.
![MDwiki screen with Bootswatch Superhero theme][12]
MDwiki screen using the Bootswatch Superhero theme
MDwiki, Markdown files, and static images are fine for many purposes. However, you might sometimes want to include, say, a JavaScript slideshow or a feedback form. Markdown files can include HTML code, but mixing Markdown with HTML can get confusing. One solution is to create the feature you want in a separate HTML file and display it inside a Markdown file with an iframe tag. I took this idea from the [Twine Cookbook][13], a support site for the Twine interactive fiction engine. The Twine Cookbook doesn't actually use MDwiki, but combining Markdown and iframe tags opens up a wide range of creative possibilities.
Here is an example:
This HTML will display an HTML page created by the Twine interactive fiction engine inside a Markdown file.
```
<iframe height="400" src="sugarcube_dungeonmoving_example.html" width="90%"></iframe>
```
The result in an MDwiki-generated site looks like this:
![](https://opensource.com/sites/default/files/uploads/4_-_mdwiki_site_summary.png)
In short, MDwiki is an excellent small application that achieves its purpose extremely well.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/markdown-html-publishing
作者:[Peter Cheer][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/petercheer
[1]: https://pandoc.org/
[2]: https://opensource.com/downloads/pandoc-cheat-sheet
[3]: http://dynalon.github.io/mdwiki/#!index.md
[4]: https://opensource.com/file/407306
[5]: https://opensource.com/sites/default/files/uploads/1_-_mdwiki_screenshot.png (MDwiki site getting started)
[6]: https://opensource.com/file/407311
[7]: https://opensource.com/sites/default/files/uploads/2_-_mdwiki_inside_web_server.png (MDwiki site inside web server)
[8]: http://dynalon.github.io/mdwiki/#!examples.md
[9]: https://guides.github.com/features/mastering-markdown/
[10]: https://bootswatch.com/
[11]: https://opensource.com/file/407316
[12]: https://opensource.com/sites/default/files/uploads/3_-_mdwiki_bootswatch_superhero.png (MDwiki screen with Bootswatch Superhero theme)
[13]: https://github.com/iftechfoundation/twine-cookbook

View File

@ -1,616 +0,0 @@
Lab 1: PC Bootstrap and GCC Calling Conventions
======
### Lab 1: Booting a PC
#### Introduction
This lab is split into three parts. The first part concentrates on getting familiarized with x86 assembly language, the QEMU x86 emulator, and the PC's power-on bootstrap procedure. The second part examines the boot loader for our 6.828 kernel, which resides in the `boot` directory of the `lab` tree. Finally, the third part delves into the initial template for our 6.828 kernel itself, named JOS, which resides in the `kernel` directory.
##### Software Setup
The files you will need for this and subsequent lab assignments in this course are distributed using the [Git][1] version control system. To learn more about Git, take a look at the [Git user's manual][2], or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of Git][3] useful.
The URL for the course Git repository is <https://pdos.csail.mit.edu/6.828/2018/jos.git>. To install the files in your Athena account, you need to _clone_ the course repository, by running the commands below. You must use an x86 Athena machine; that is, `uname -a` should mention `i386 GNU/Linux` or `i686 GNU/Linux` or `x86_64 GNU/Linux`. You can log into a public Athena host with `ssh -X athena.dialup.mit.edu`.
```
athena% mkdir ~/6.828
athena% cd ~/6.828
athena% add git
athena% git clone https://pdos.csail.mit.edu/6.828/2018/jos.git lab
Cloning into lab...
athena% cd lab
athena%
```
Git allows you to keep track of the changes you make to the code. For example, if you are finished with one of the exercises, and want to checkpoint your progress, you can _commit_ your changes by running:
```
athena% git commit -am 'my solution for lab1 exercise 9'
Created commit 60d2135: my solution for lab1 exercise 9
1 files changed, 1 insertions(+), 0 deletions(-)
athena%
```
You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, `origin/lab1` is the name of the git branch with the initial code you downloaded from our server for this assignment.
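For convenience, those two commands as you would type them:
```
athena% git diff              # changes since your last commit
athena% git diff origin/lab1  # changes relative to the initial lab1 code
```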
We have set up the appropriate compilers and simulators for you on Athena. To use them, run add -f 6.828. You must run this command every time you log in (or add it to your `~/.environment` file). If you get obscure errors while compiling or running `qemu`, double check that you added the course locker.
If you are working on a non-Athena machine, you'll need to install `qemu` and possibly `gcc` following the directions on the [tools page][4]. We've made several useful debugging changes to `qemu` and some of the later labs depend on these patches, so you must build your own. If your machine uses a native ELF toolchain (such as Linux and most BSD's, but notably _not_ OS X), you can simply install `gcc` from your package manager. Otherwise, follow the directions on the tools page.
##### Hand-In Procedure
You will turn in your assignments using the [submission website][5]. You need to request an API key from the submission website before you can turn in any assignments or labs.
The lab code comes with GNU Make rules to make submission easier. After committing your final changes to the lab, type make handin to submit your lab.
```
athena% git commit -am "ready to submit my lab"
[lab1 c2e3c8b] ready to submit my lab
2 files changed, 18 insertions(+), 2 deletions(-)
athena% make handin
git archive --prefix=lab1/ --format=tar HEAD | gzip > lab1-handin.tar.gz
Get an API key for yourself by visiting https://6828.scripts.mit.edu/2018/handin.py/
Please enter your API key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 50199 100 241 100 49958 414 85824 --:--:-- --:--:-- --:--:-- 85986
athena%
```
make handin will store your API key in _myapi.key_. If you need to change your API key, just remove this file and let make handin generate it again ( _myapi.key_ must not include newline characters).
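In other words, switching to a new key is just:
```
athena% rm myapi.key
athena% make handin    # prompts for and stores the new API key
```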
If you use make handin and you have either uncommitted changes or untracked files, you will see output similar to the following:
```
M hello.c
?? bar.c
?? foo.pyc
Untracked files will not be handed in. Continue? [y/N]
```
Inspect the above lines and make sure all files that your lab solution needs are tracked, i.e. not listed in a line that begins with ??.
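If an untracked file is actually part of your solution, the fix is simply to track it before handing in, e.g. (using the `bar.c` from the example output above):
```
athena% git add bar.c
athena% git commit -am "track bar.c"
athena% make handin
```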
In the case that make handin does not work properly, try fixing the problem with the curl or Git commands. Or you can run make tarball. This will make a tar file for you, which you can then upload via our [web interface][5].
You can run make grade to test your solutions with the grading program. The [web interface][5] uses the same grading program to assign your lab submission a grade. You should check the output of the grader (it may take a few minutes since the grader runs periodically) and ensure that you received the grade which you expected. If the grades don't match, your lab submission probably has a bug -- check the output of the grader (resp-lab*.txt) to see which particular test failed.
For Lab 1, you do not need to turn in answers to any of the questions below. (Do answer them for yourself though! They will help with the rest of the lab.)
#### Part 1: PC Bootstrap
The purpose of the first exercise is to introduce you to x86 assembly language and the PC bootstrap process, and to get you started with QEMU and QEMU/GDB debugging. You will not have to write any code for this part of the lab, but you should go through it anyway for your own understanding and be prepared to answer the questions posed below.
##### Getting Started with x86 assembly
If you are not already familiar with x86 assembly language, you will quickly become familiar with it during this course! The [PC Assembly Language Book][6] is an excellent place to start. Hopefully, the book contains a mixture of new and old material for you.
_Warning:_ Unfortunately the examples in the book are written for the NASM assembler, whereas we will be using the GNU assembler. NASM uses the so-called _Intel_ syntax while GNU uses the _AT&T_ syntax. While semantically equivalent, an assembly file will differ quite a lot, at least superficially, depending on which syntax is used. Luckily the conversion between the two is pretty simple, and is covered in [Brennan's Guide to Inline Assembly][7].
Exercise 1. Familiarize yourself with the assembly language materials available on [the 6.828 reference page][8]. You don't have to read them now, but you'll almost certainly want to refer to some of this material when reading and writing x86 assembly.
We do recommend reading the section "The Syntax" in [Brennan's Guide to Inline Assembly][7]. It gives a good (and quite brief) description of the AT&T assembly syntax we'll be using with the GNU assembler in JOS.
Certainly the definitive reference for x86 assembly language programming is Intel's instruction set architecture reference, which you can find on [the 6.828 reference page][8] in two flavors: an HTML edition of the old [80386 Programmer's Reference Manual][9], which is much shorter and easier to navigate than more recent manuals but describes all of the x86 processor features that we will make use of in 6.828; and the full, latest and greatest [IA-32 Intel Architecture Software Developer's Manuals][10] from Intel, covering all the features of the most recent processors that we won't need in class but you may be interested in learning about. An equivalent (and often friendlier) set of manuals is [available from AMD][11]. Save the Intel/AMD architecture manuals for later or use them for reference when you want to look up the definitive explanation of a particular processor feature or instruction.
##### Simulating the x86
Instead of developing the operating system on a real, physical personal computer (PC), we use a program that faithfully emulates a complete PC: the code you write for the emulator will boot on a real PC too. Using an emulator simplifies debugging; you can, for example, set break points inside of the emulated x86, which is difficult to do with the silicon version of an x86.
In 6.828 we will use the [QEMU Emulator][12], a modern and relatively fast emulator. While QEMU's built-in monitor provides only limited debugging support, QEMU can act as a remote debugging target for the [GNU debugger][13] (GDB), which we'll use in this lab to step through the early boot process.
To get started, extract the Lab 1 files into your own directory on Athena as described above in "Software Setup", then type make (or gmake on BSD systems) in the `lab` directory to build the minimal 6.828 boot loader and kernel you will start with. (It's a little generous to call the code we're running here a "kernel," but we'll flesh it out throughout the semester.)
```
athena% cd lab
athena% make
+ as kern/entry.S
+ cc kern/entrypgdir.c
+ cc kern/init.c
+ cc kern/console.c
+ cc kern/monitor.c
+ cc kern/printf.c
+ cc kern/kdebug.c
+ cc lib/printfmt.c
+ cc lib/readline.c
+ cc lib/string.c
+ ld obj/kern/kernel
+ as boot/boot.S
+ cc -Os boot/main.c
+ ld boot/boot
boot block is 380 bytes (max 510)
+ mk obj/kern/kernel.img
```
(If you get errors like "undefined reference to `__udivdi3'", you probably don't have the 32-bit gcc multilib. If you're running Debian or Ubuntu, try installing the gcc-multilib package.)
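On Debian or Ubuntu that would be, for example:
```
sudo apt-get install gcc-multilib
```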
Now you're ready to run QEMU, supplying the file `obj/kern/kernel.img`, created above, as the contents of the emulated PC's "virtual hard disk." This hard disk image contains both our boot loader (`obj/boot/boot`) and our kernel (`obj/kernel`).
```
athena% make qemu
```
or
```
athena% make qemu-nox
```
This executes QEMU with the options required to set the hard disk and direct serial port output to the terminal. Some text should appear in the QEMU window:
```
Booting from Hard Disk...
6828 decimal is XXX octal!
entering test_backtrace 5
entering test_backtrace 4
entering test_backtrace 3
entering test_backtrace 2
entering test_backtrace 1
entering test_backtrace 0
leaving test_backtrace 0
leaving test_backtrace 1
leaving test_backtrace 2
leaving test_backtrace 3
leaving test_backtrace 4
leaving test_backtrace 5
Welcome to the JOS kernel monitor!
Type 'help' for a list of commands.
K>
```
Everything after '`Booting from Hard Disk...`' was printed by our skeletal JOS kernel; the `K>` is the prompt printed by the small _monitor_ , or interactive control program, that we've included in the kernel. If you used make qemu, these lines printed by the kernel will appear in both the regular shell window from which you ran QEMU and the QEMU display window. This is because for testing and lab grading purposes we have set up the JOS kernel to write its console output not only to the virtual VGA display (as seen in the QEMU window), but also to the simulated PC's virtual serial port, which QEMU in turn outputs to its own standard output. Likewise, the JOS kernel will take input from both the keyboard and the serial port, so you can give it commands in either the VGA display window or the terminal running QEMU. Alternatively, you can use the serial console without the virtual VGA by running make qemu-nox. This may be convenient if you are SSH'd into an Athena dialup. To quit qemu, type Ctrl+a x.
There are only two commands you can give to the kernel monitor, `help` and `kerninfo`.
```
K> help
help - display this list of commands
kerninfo - display information about the kernel
K> kerninfo
Special kernel symbols:
entry f010000c (virt) 0010000c (phys)
etext f0101a75 (virt) 00101a75 (phys)
edata f0112300 (virt) 00112300 (phys)
end f0112960 (virt) 00112960 (phys)
Kernel executable memory footprint: 75KB
K>
```
The `help` command is obvious, and we will shortly discuss the meaning of what the `kerninfo` command prints. Although simple, it's important to note that this kernel monitor is running "directly" on the "raw (virtual) hardware" of the simulated PC. This means that you should be able to copy the contents of `obj/kern/kernel.img` onto the first few sectors of a _real_ hard disk, insert that hard disk into a real PC, turn it on, and see exactly the same thing on the PC's real screen as you did above in the QEMU window. (We don't recommend you do this on a real machine with useful information on its hard disk, though, because copying `kernel.img` onto the beginning of its hard disk will trash the master boot record and the beginning of the first partition, effectively causing everything previously on the hard disk to be lost!)
##### The PC's Physical Address Space
We will now dive into a bit more detail about how a PC starts up. A PC's physical address space is hard-wired to have the following general layout:
```
+------------------+ <- 0xFFFFFFFF (4GB)
| 32-bit |
| memory mapped |
| devices |
| |
/\/\/\/\/\/\/\/\/\/\
/\/\/\/\/\/\/\/\/\/\
| |
| Unused |
| |
+------------------+ <- depends on amount of RAM
| |
| |
| Extended Memory |
| |
| |
+------------------+ <- 0x00100000 (1MB)
| BIOS ROM |
+------------------+ <- 0x000F0000 (960KB)
| 16-bit devices, |
| expansion ROMs |
+------------------+ <- 0x000C0000 (768KB)
| VGA Display |
+------------------+ <- 0x000A0000 (640KB)
| |
| Low Memory |
| |
+------------------+ <- 0x00000000
```
The first PCs, which were based on the 16-bit Intel 8088 processor, were only capable of addressing 1MB of physical memory. The physical address space of an early PC would therefore start at 0x00000000 but end at 0x000FFFFF instead of 0xFFFFFFFF. The 640KB area marked "Low Memory" was the _only_ random-access memory (RAM) that an early PC could use; in fact the very earliest PCs only could be configured with 16KB, 32KB, or 64KB of RAM!
The 384KB area from 0x000A0000 through 0x000FFFFF was reserved by the hardware for special uses such as video display buffers and firmware held in non-volatile memory. The most important part of this reserved area is the Basic Input/Output System (BIOS), which occupies the 64KB region from 0x000F0000 through 0x000FFFFF. In early PCs the BIOS was held in true read-only memory (ROM), but current PCs store the BIOS in updateable flash memory. The BIOS is responsible for performing basic system initialization such as activating the video card and checking the amount of memory installed. After performing this initialization, the BIOS loads the operating system from some appropriate location such as floppy disk, hard disk, CD-ROM, or the network, and passes control of the machine to the operating system.
When Intel finally "broke the one megabyte barrier" with the 80286 and 80386 processors, which supported 16MB and 4GB physical address spaces respectively, the PC architects nevertheless preserved the original layout for the low 1MB of physical address space in order to ensure backward compatibility with existing software. Modern PCs therefore have a "hole" in physical memory from 0x000A0000 to 0x00100000, dividing RAM into "low" or "conventional memory" (the first 640KB) and "extended memory" (everything else). In addition, some space at the very top of the PC's 32-bit physical address space, above all physical RAM, is now commonly reserved by the BIOS for use by 32-bit PCI devices.
Recent x86 processors can support _more_ than 4GB of physical RAM, so RAM can extend further above 0xFFFFFFFF. In this case the BIOS must arrange to leave a _second_ hole in the system's RAM at the top of the 32-bit addressable region, to leave room for these 32-bit devices to be mapped. Because of design limitations JOS will use only the first 256MB of a PC's physical memory anyway, so for now we will pretend that all PCs have "only" a 32-bit physical address space. But dealing with complicated physical address spaces and other aspects of hardware organization that evolved over many years is one of the important practical challenges of OS development.
##### The ROM BIOS
In this portion of the lab, you'll use QEMU's debugging facilities to investigate how an IA-32 compatible computer boots.
Open two terminal windows and cd both shells into your lab directory. In one, enter make qemu-gdb (or make qemu-nox-gdb). This starts up QEMU, but QEMU stops just before the processor executes the first instruction and waits for a debugging connection from GDB. In the second terminal, from the same directory you ran `make`, run make gdb. You should see something like this,
```
athena% make gdb
GNU gdb (GDB) 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i486-linux-gnu".
+ target remote localhost:26000
The target architecture is assumed to be i8086
[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b
0x0000fff0 in ?? ()
+ symbol-file obj/kern/kernel
(gdb)
```
We provided a `.gdbinit` file that set up GDB to debug the 16-bit code used during early boot and directed it to attach to the listening QEMU. (If it doesn't work, you may have to add an `add-auto-load-safe-path` in your `.gdbinit` in your home directory to convince `gdb` to process the `.gdbinit` we provided. `gdb` will tell you if you have to do this.)
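If gdb does ask for that, adding a line along these lines to the `.gdbinit` in your home directory is usually enough (the path shown is just an example; use your own lab directory):
```
add-auto-load-safe-path /home/you/6.828/lab/.gdbinit
```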
The following line:
```
[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b
```
is GDB's disassembly of the first instruction to be executed. From this output you can conclude a few things:
* The IBM PC starts executing at physical address 0x000ffff0, which is at the very top of the 64KB area reserved for the ROM BIOS.
* The PC starts executing with `CS = 0xf000` and `IP = 0xfff0`.
* The first instruction to be executed is a `jmp` instruction, which jumps to the segmented address `CS = 0xf000` and `IP = 0xe05b`.
Why does QEMU start like this? This is how Intel designed the 8088 processor, which IBM used in their original PC. Because the BIOS in a PC is "hard-wired" to the physical address range 0x000f0000-0x000fffff, this design ensures that the BIOS always gets control of the machine first after power-up or any system restart - which is crucial because on power-up there _is_ no other software anywhere in the machine's RAM that the processor could execute. The QEMU emulator comes with its own BIOS, which it places at this location in the processor's simulated physical address space. On processor reset, the (simulated) processor enters real mode and sets CS to 0xf000 and the IP to 0xfff0, so that execution begins at that (CS:IP) segment address. How does the segmented address 0xf000:fff0 turn into a physical address?
To answer that we need to know a bit about real mode addressing. In real mode (the mode that the PC starts off in), address translation works according to the formula: _physical address_ = 16 * _segment_ + _offset_. So, when the PC sets CS to 0xf000 and IP to 0xfff0, the physical address referenced is:
```
16 * 0xf000 + 0xfff0 # in hex multiplication by 16 is
= 0xf0000 + 0xfff0 # easy--just append a 0.
= 0xffff0
```
`0xffff0` is 16 bytes before the end of the BIOS (`0x100000`). Therefore we shouldn't be surprised that the first thing that the BIOS does is `jmp` backwards to an earlier location in the BIOS; after all how much could it accomplish in just 16 bytes?
Exercise 2. Use GDB's si (Step Instruction) command to trace into the ROM BIOS for a few more instructions, and try to guess what it might be doing. You might want to look at [Phil Storrs I/O Ports Description][14], as well as other materials on the [6.828 reference materials page][8]. No need to figure out all the details - just the general idea of what the BIOS is doing first.
When the BIOS runs, it sets up an interrupt descriptor table and initializes various devices such as the VGA display. This is where the "`Starting SeaBIOS`" message you see in the QEMU window comes from.
After initializing the PCI bus and all the important devices the BIOS knows about, it searches for a bootable device such as a floppy, hard drive, or CD-ROM. Eventually, when it finds a bootable disk, the BIOS reads the _boot loader_ from the disk and transfers control to it.
#### Part 2: The Boot Loader
Floppy and hard disks for PCs are divided into 512 byte regions called _sectors_. A sector is the disk's minimum transfer granularity: each read or write operation must be one or more sectors in size and aligned on a sector boundary. If the disk is bootable, the first sector is called the _boot sector_ , since this is where the boot loader code resides. When the BIOS finds a bootable floppy or hard disk, it loads the 512-byte boot sector into memory at physical addresses 0x7c00 through 0x7dff, and then uses a `jmp` instruction to set the CS:IP to `0000:7c00`, passing control to the boot loader. Like the BIOS load address, these addresses are fairly arbitrary - but they are fixed and standardized for PCs.
The ability to boot from a CD-ROM came much later during the evolution of the PC, and as a result the PC architects took the opportunity to rethink the boot process slightly. As a result, the way a modern BIOS boots from a CD-ROM is a bit more complicated (and more powerful). CD-ROMs use a sector size of 2048 bytes instead of 512, and the BIOS can load a much larger boot image from the disk into memory (not just one sector) before transferring control to it. For more information, see the ["El Torito" Bootable CD-ROM Format Specification][15].
For 6.828, however, we will use the conventional hard drive boot mechanism, which means that our boot loader must fit into a measly 512 bytes. The boot loader consists of one assembly language source file, `boot/boot.S`, and one C source file, `boot/main.c`. Look through these source files carefully and make sure you understand what's going on. The boot loader must perform two main functions:
1. First, the boot loader switches the processor from real mode to _32-bit protected mode_ , because it is only in this mode that software can access all the memory above 1MB in the processor's physical address space. Protected mode is described briefly in sections 1.2.7 and 1.2.8 of [PC Assembly Language][6], and in great detail in the Intel architecture manuals. At this point you only have to understand that translation of segmented addresses (segment:offset pairs) into physical addresses happens differently in protected mode, and that after the transition offsets are 32 bits instead of 16.
2. Second, the boot loader reads the kernel from the hard disk by directly accessing the IDE disk device registers via the x86's special I/O instructions. If you would like to understand better what the particular I/O instructions here mean, check out the "IDE hard drive controller" section on [the 6.828 reference page][8]. You will not need to learn much about programming specific devices in this class: writing device drivers is in practice a very important part of OS development, but from a conceptual or architectural viewpoint it is also one of the least interesting.
After you understand the boot loader source code, look at the file `obj/boot/boot.asm`. This file is a disassembly of the boot loader that our GNUmakefile creates _after_ compiling the boot loader. This disassembly file makes it easy to see exactly where in physical memory all of the boot loader's code resides, and makes it easier to track what's happening while stepping through the boot loader in GDB. Likewise, `obj/kern/kernel.asm` contains a disassembly of the JOS kernel, which can often be useful for debugging.
You can set address breakpoints in GDB with the `b` command. For example, b *0x7c00 sets a breakpoint at address 0x7C00. Once at a breakpoint, you can continue execution using the c and si commands: c causes QEMU to continue execution until the next breakpoint (or until you press Ctrl-C in GDB), and si _N_ steps through the instructions _`N`_ at a time.
To examine instructions in memory (besides the immediate next one to be executed, which GDB prints automatically), you use the x/i command. This command has the syntax x/ _N_ i _ADDR_ , where _N_ is the number of consecutive instructions to disassemble and _ADDR_ is the memory address at which to start disassembling.
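Put together, a short session with these commands might look like the following (the exact output will vary):
```
(gdb) b *0x7c00
Breakpoint 1 at 0x7c00
(gdb) c
(gdb) si 2
(gdb) x/5i 0x7c00
```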
Exercise 3. Take a look at the [lab tools guide][16], especially the section on GDB commands. Even if you're familiar with GDB, this includes some esoteric GDB commands that are useful for OS work.
Set a breakpoint at address 0x7c00, which is where the boot sector will be loaded. Continue execution until that breakpoint. Trace through the code in `boot/boot.S`, using the source code and the disassembly file `obj/boot/boot.asm` to keep track of where you are. Also use the `x/i` command in GDB to disassemble sequences of instructions in the boot loader, and compare the original boot loader source code with both the disassembly in `obj/boot/boot.asm` and GDB.
Trace into `bootmain()` in `boot/main.c`, and then into `readsect()`. Identify the exact assembly instructions that correspond to each of the statements in `readsect()`. Trace through the rest of `readsect()` and back out into `bootmain()`, and identify the begin and end of the `for` loop that reads the remaining sectors of the kernel from the disk. Find out what code will run when the loop is finished, set a breakpoint there, and continue to that breakpoint. Then step through the remainder of the boot loader.
Be able to answer the following questions:
* At what point does the processor start executing 32-bit code? What exactly causes the switch from 16- to 32-bit mode?
* What is the _last_ instruction of the boot loader executed, and what is the _first_ instruction of the kernel it just loaded?
* _Where_ is the first instruction of the kernel?
* How does the boot loader decide how many sectors it must read in order to fetch the entire kernel from disk? Where does it find this information?
##### Loading the Kernel
We will now look in further detail at the C language portion of the boot loader, in `boot/main.c`. But before doing so, this is a good time to stop and review some of the basics of C programming.
Exercise 4. Read about programming with pointers in C. The best reference for the C language is _The C Programming Language_ by Brian Kernighan and Dennis Ritchie (known as 'K&R'). We recommend that students purchase this book (here is an [Amazon Link][17]) or find one of [MIT's 7 copies][18].
Read 5.1 (Pointers and Addresses) through 5.5 (Character Pointers and Functions) in K&R. Then download the code for [pointers.c][19], run it, and make sure you understand where all of the printed values come from. In particular, make sure you understand where the pointer addresses in printed lines 1 and 6 come from, how all the values in printed lines 2 through 4 get there, and why the values printed in line 5 are seemingly corrupted.
There are other references on pointers in C (e.g., [A tutorial by Ted Jensen][20] that cites K&R heavily), though not as strongly recommended.
_Warning:_ Unless you are already thoroughly versed in C, do not skip or even skim this reading exercise. If you do not really understand pointers in C, you will suffer untold pain and misery in subsequent labs, and then eventually come to understand them the hard way. Trust us; you don't want to find out what "the hard way" is.
To make sense out of `boot/main.c` you'll need to know what an ELF binary is. When you compile and link a C program such as the JOS kernel, the compiler transforms each C source ('`.c`') file into an _object_ ('`.o`') file containing assembly language instructions encoded in the binary format expected by the hardware. The linker then combines all of the compiled object files into a single _binary image_ such as `obj/kern/kernel`, which in this case is a binary in the ELF format, which stands for "Executable and Linkable Format".
Full information about this format is available in [the ELF specification][21] on [our reference page][8], but you will not need to delve very deeply into the details of this format in this class. Although as a whole the format is quite powerful and complex, most of the complex parts are for supporting dynamic loading of shared libraries, which we will not do in this class. The [Wikipedia page][22] has a short description.
For purposes of 6.828, you can consider an ELF executable to be a header with loading information, followed by several _program sections_ , each of which is a contiguous chunk of code or data intended to be loaded into memory at a specified address. The boot loader does not modify the code or data; it loads it into memory and starts executing it.
An ELF binary starts with a fixed-length _ELF header_ , followed by a variable-length _program header_ listing each of the program sections to be loaded. The C definitions for these ELF headers are in `inc/elf.h`. The program sections we're interested in are:
* `.text`: The program's executable instructions.
* `.rodata`: Read-only data, such as ASCII string constants produced by the C compiler. (We will not bother setting up the hardware to prohibit writing, however.)
* `.data`: The data section holds the program's initialized data, such as global variables declared with initializers like `int x = 5;`.
When the linker computes the memory layout of a program, it reserves space for _uninitialized_ global variables, such as `int x;`, in a section called `.bss` that immediately follows `.data` in memory. C requires that "uninitialized" global variables start with a value of zero. Thus there is no need to store contents for `.bss` in the ELF binary; instead, the linker records just the address and size of the `.bss` section. The loader or the program itself must arrange to zero the `.bss` section.
Examine the full list of the names, sizes, and link addresses of all the sections in the kernel executable by typing:
```
athena% objdump -h obj/kern/kernel
(If you compiled your own toolchain, you may need to use i386-jos-elf-objdump)
```
You will see many more sections than the ones we listed above, but the others are not important for our purposes. Most of the others are to hold debugging information, which is typically included in the program's executable file but not loaded into memory by the program loader.
Take particular note of the "VMA" (or _link address_ ) and the "LMA" (or _load address_ ) of the `.text` section. The load address of a section is the memory address at which that section should be loaded into memory.
The link address of a section is the memory address from which the section expects to execute. The linker encodes the link address in the binary in various ways, such as when the code needs the address of a global variable, with the result that a binary usually won't work if it is executing from an address that it is not linked for. (It is possible to generate _position-independent_ code that does not contain any such absolute addresses. This is used extensively by modern shared libraries, but it has performance and complexity costs, so we won't be using it in 6.828.)
Typically, the link and load addresses are the same. For example, look at the `.text` section of the boot loader:
```
athena% objdump -h obj/boot/boot.out
```
The boot loader uses the ELF _program headers_ to decide how to load the sections. The program headers specify which parts of the ELF object to load into memory and the destination address each should occupy. You can inspect the program headers by typing:
```
athena% objdump -x obj/kern/kernel
```
The program headers are then listed under "Program Headers" in the output of objdump. The areas of the ELF object that need to be loaded into memory are those that are marked as "LOAD". Other information for each program header is given, such as the virtual address ("vaddr"), the physical address ("paddr"), and the size of the loaded area ("memsz" and "filesz").
Back in boot/main.c, the `ph->p_pa` field of each program header contains the segment's destination physical address (in this case, it really is a physical address, though the ELF specification is vague on the actual meaning of this field).
The BIOS loads the boot sector into memory starting at address 0x7c00, so this is the boot sector's load address. This is also where the boot sector executes from, so this is also its link address. We set the link address by passing `-Ttext 0x7C00` to the linker in `boot/Makefrag`, so the linker will produce the correct memory addresses in the generated code.
Exercise 5. Trace through the first few instructions of the boot loader again and identify the first instruction that would "break" or otherwise do the wrong thing if you were to get the boot loader's link address wrong. Then change the link address in `boot/Makefrag` to something wrong, run make clean, recompile the lab with make, and trace into the boot loader again to see what happens. Don't forget to change the link address back and make clean again afterward!
Look back at the load and link addresses for the kernel. Unlike the boot loader, these two addresses aren't the same: the kernel is telling the boot loader to load it into memory at a low address (1 megabyte), but it expects to execute from a high address. We'll dig in to how we make this work in the next section.
Besides the section information, there is one more field in the ELF header that is important to us, named `e_entry`. This field holds the link address of the _entry point_ in the program: the memory address in the program's text section at which the program should begin executing. You can see the entry point:
```
athena% objdump -f obj/kern/kernel
```
You should now be able to understand the minimal ELF loader in `boot/main.c`. It reads each section of the kernel from disk into memory at the section's load address and then jumps to the kernel's entry point.
Exercise 6. We can examine memory using GDB's x command. The [GDB manual][23] has full details, but for now, it is enough to know that the command x/ _N_ x _ADDR_ prints _`N`_ words of memory at _`ADDR`_. (Note that both '`x`'s in the command are lowercase.) _Warning_ : The size of a word is not a universal standard. In GNU assembly, a word is two bytes (the 'w' in xorw, which stands for word, means 2 bytes).
Reset the machine (exit QEMU/GDB and start them again). Examine the 8 words of memory at 0x00100000 at the point the BIOS enters the boot loader, and then again at the point the boot loader enters the kernel. Why are they different? What is there at the second breakpoint? (You do not really need to use QEMU to answer this question. Just think.)
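For example, the eight words at 0x00100000 can be dumped with:
```
(gdb) x/8x 0x00100000
```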
#### Part 3: The Kernel
We will now start to examine the minimal JOS kernel in a bit more detail. (And you will finally get to write some code!). Like the boot loader, the kernel begins with some assembly language code that sets things up so that C language code can execute properly.
##### Using virtual memory to work around position dependence
When you inspected the boot loader's link and load addresses above, they matched perfectly, but there was a (rather large) disparity between the _kernel's_ link address (as printed by objdump) and its load address. Go back and check both and make sure you can see what we're talking about. (Linking the kernel is more complicated than the boot loader, so the link and load addresses are at the top of `kern/kernel.ld`.)
Operating system kernels often like to be linked and run at very high _virtual address_ , such as 0xf0100000, in order to leave the lower part of the processor's virtual address space for user programs to use. The reason for this arrangement will become clearer in the next lab.
Many machines don't have any physical memory at address 0xf0100000, so we can't count on being able to store the kernel there. Instead, we will use the processor's memory management hardware to map virtual address 0xf0100000 (the link address at which the kernel code _expects_ to run) to physical address 0x00100000 (where the boot loader loaded the kernel into physical memory). This way, although the kernel's virtual address is high enough to leave plenty of address space for user processes, it will be loaded in physical memory at the 1MB point in the PC's RAM, just above the BIOS ROM. This approach requires that the PC have at least a few megabytes of physical memory (so that physical address 0x00100000 works), but this is likely to be true of any PC built after about 1990.
In fact, in the next lab, we will map the _entire_ bottom 256MB of the PC's physical address space, from physical addresses 0x00000000 through 0x0fffffff, to virtual addresses 0xf0000000 through 0xffffffff respectively. You should now see why JOS can only use the first 256MB of physical memory.
For now, we'll just map the first 4MB of physical memory, which will be enough to get us up and running. We do this using the hand-written, statically-initialized page directory and page table in `kern/entrypgdir.c`. For now, you don't have to understand the details of how this works, just the effect that it accomplishes. Up until `kern/entry.S` sets the `CR0_PG` flag, memory references are treated as physical addresses (strictly speaking, they're linear addresses, but boot/boot.S set up an identity mapping from linear addresses to physical addresses and we're never going to change that). Once `CR0_PG` is set, memory references are virtual addresses that get translated by the virtual memory hardware to physical addresses. `entry_pgdir` translates virtual addresses in the range 0xf0000000 through 0xf0400000 to physical addresses 0x00000000 through 0x00400000, as well as virtual addresses 0x00000000 through 0x00400000 to physical addresses 0x00000000 through 0x00400000. Any virtual address that is not in one of these two ranges will cause a hardware exception which, since we haven't set up interrupt handling yet, will cause QEMU to dump the machine state and exit (or endlessly reboot if you aren't using the 6.828-patched version of QEMU).
Exercise 7. Use QEMU and GDB to trace into the JOS kernel and stop at the `movl %eax, %cr0`. Examine memory at 0x00100000 and at 0xf0100000. Now, single step over that instruction using the stepi GDB command. Again, examine memory at 0x00100000 and at 0xf0100000. Make sure you understand what just happened.
What is the first instruction _after_ the new mapping is established that would fail to work properly if the mapping weren't in place? Comment out the `movl %eax, %cr0` in `kern/entry.S`, trace into it, and see if you were right.
##### Formatted Printing to the Console
Most people take functions like `printf()` for granted, sometimes even thinking of them as "primitives" of the C language. But in an OS kernel, we have to implement all I/O ourselves.
Read through `kern/printf.c`, `lib/printfmt.c`, and `kern/console.c`, and make sure you understand their relationship. It will become clear in later labs why `printfmt.c` is located in the separate `lib` directory.
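Schematically, the glue between them might look something like the sketch below (this is an illustration of the layering, not a copy of the actual sources; check `kern/printf.c` for the real signatures):

```c
#include <stdarg.h>

/* Provided elsewhere: cputchar() by kern/console.c, vprintfmt() by lib/printfmt.c. */
void cputchar(int c);
void vprintfmt(void (*putch)(int, void *), void *putdat, const char *fmt, va_list ap);

/* Device-specific output callback: hand one character to the console driver. */
static void putch(int ch, void *cnt)
{
	cputchar(ch);
	(*(int *)cnt)++;
}

int vcprintf(const char *fmt, va_list ap)
{
	int cnt = 0;

	/* printfmt.c does the format parsing; it stays device-independent by
	 * emitting every output character through the callback it is given. */
	vprintfmt(putch, &cnt, fmt, ap);
	return cnt;
}

int cprintf(const char *fmt, ...)
{
	va_list ap;
	int cnt;

	va_start(ap, fmt);
	cnt = vcprintf(fmt, ap);
	va_end(ap);
	return cnt;
}
```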
Exercise 8. We have omitted a small fragment of code - the code necessary to print octal numbers using patterns of the form "%o". Find and fill in this code fragment.
Be able to answer the following questions:
1. Explain the interface between `printf.c` and `console.c`. Specifically, what function does `console.c` export? How is this function used by `printf.c`?
2. Explain the following from `console.c`:
```
if (crt_pos >= CRT_SIZE) {
	int i;
	memmove(crt_buf, crt_buf + CRT_COLS, (CRT_SIZE - CRT_COLS) * sizeof(uint16_t));
	for (i = CRT_SIZE - CRT_COLS; i < CRT_SIZE; i++)
		crt_buf[i] = 0x0700 | ' ';
	crt_pos -= CRT_COLS;
}
```
3. For the following questions you might wish to consult the notes for Lecture 2. These notes cover GCC's calling convention on the x86.
Trace the execution of the following code step-by-step:
```
int x = 1, y = 3, z = 4;
cprintf("x %d, y %x, z %d\n", x, y, z);
```
* In the call to `cprintf()`, to what does `fmt` point? To what does `ap` point?
* List (in order of execution) each call to `cons_putc`, `va_arg`, and `vcprintf`. For `cons_putc`, list its argument as well. For `va_arg`, list what `ap` points to before and after the call. For `vcprintf` list the values of its two arguments.
4. Run the following code.
```
unsigned int i = 0x00646c72;
cprintf("H%x Wo%s", 57616, &i);
```
What is the output? Explain how this output is arrived at in the step-by-step manner of the previous exercise. [Here's an ASCII table][24] that maps bytes to characters.
The output depends on the fact that the x86 is little-endian. If the x86 were instead big-endian, what would you set `i` to in order to yield the same output? Would you need to change `57616` to a different value?
[Here's a description of little- and big-endian][25] and [a more whimsical description][26].
5. In the following code, what is going to be printed after `'y='`? (note: the answer is not a specific value.) Why does this happen?
```
cprintf("x=%d y=%d", 3);
```
6. Let's say that GCC changed its calling convention so that it pushed arguments on the stack in declaration order, so that the last argument is pushed last. How would you have to change `cprintf` or its interface so that it would still be possible to pass it a variable number of arguments?
Challenge! Enhance the console to allow text to be printed in different colors. The traditional way to do this is to make it interpret [ANSI escape sequences][27] embedded in the text strings printed to the console, but you may use any mechanism you like. There is plenty of information on [the 6.828 reference page][8] and elsewhere on the web on programming the VGA display hardware. If you're feeling really adventurous, you could try switching the VGA hardware into a graphics mode and making the console draw text onto the graphical frame buffer.
##### The Stack
In the final exercise of this lab, we will explore in more detail the way the C language uses the stack on the x86, and in the process write a useful new kernel monitor function that prints a _backtrace_ of the stack: a list of the saved Instruction Pointer (IP) values from the nested `call` instructions that led to the current point of execution.
Exercise 9. Determine where the kernel initializes its stack, and exactly where in memory its stack is located. How does the kernel reserve space for its stack? And at which "end" of this reserved area is the stack pointer initialized to point to?
The x86 stack pointer (`esp` register) points to the lowest location on the stack that is currently in use. Everything _below_ that location in the region reserved for the stack is free. Pushing a value onto the stack involves decreasing the stack pointer and then writing the value to the place the stack pointer points to. Popping a value from the stack involves reading the value the stack pointer points to and then increasing the stack pointer. In 32-bit mode, the stack can only hold 32-bit values, and `esp` is always divisible by four. Various x86 instructions, such as `call`, are "hard-wired" to use the stack pointer register.
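Expressed in C, the push and pop operations described above behave like this (a user-space illustration of the convention, not kernel code):

```c
#include <stdint.h>

uint32_t stack_area[64];
uint32_t *sim_esp = &stack_area[64];   /* empty stack: points just past the reserved region */

void sim_push(uint32_t v)
{
	sim_esp--;               /* decrease the stack pointer ...                 */
	*sim_esp = v;            /* ... then write the value it points to          */
}

uint32_t sim_pop(void)
{
	uint32_t v = *sim_esp;   /* read the value the stack pointer points to ... */
	sim_esp++;               /* ... then increase the stack pointer            */
	return v;
}
```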
The `ebp` (base pointer) register, in contrast, is associated with the stack primarily by software convention. On entry to a C function, the function's _prologue_ code normally saves the previous function's base pointer by pushing it onto the stack, and then copies the current `esp` value into `ebp` for the duration of the function. If all the functions in a program obey this convention, then at any given point during the program's execution, it is possible to trace back through the stack by following the chain of saved `ebp` pointers and determining exactly what nested sequence of function calls caused this particular point in the program to be reached. This capability can be particularly useful, for example, when a particular function causes an `assert` failure or `panic` because bad arguments were passed to it, but you aren't sure _who_ passed the bad arguments. A stack backtrace lets you find the offending function.
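A minimal sketch of what following that chain looks like in C is shown below (assuming the `read_ebp()` helper from `inc/x86.h` and the kernel's `cprintf()`; this is a schematic of the convention, not a drop-in solution for the exercises that follow):

```c
#include <stdint.h>

uint32_t read_ebp(void);             /* inc/x86.h: returns the current %ebp value */
int cprintf(const char *fmt, ...);   /* the kernel's console printf               */

/* In each frame: ebp[0] holds the caller's saved ebp, ebp[1] the return eip,
 * and ebp[2], ebp[3], ... the arguments the caller pushed. */
void walk_stack(void)
{
	uint32_t *ebp = (uint32_t *)read_ebp();

	while (ebp != 0) {               /* kern/entry.S zeroes %ebp before the first call */
		cprintf("ebp %08x eip %08x arg0 %08x\n",
		        (uint32_t)ebp, ebp[1], ebp[2]);
		ebp = (uint32_t *)ebp[0];    /* step to the caller's frame */
	}
}
```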
Exercise 10. To become familiar with the C calling conventions on the x86, find the address of the `test_backtrace` function in `obj/kern/kernel.asm`, set a breakpoint there, and examine what happens each time it gets called after the kernel starts. How many 32-bit words does each recursive nesting level of `test_backtrace` push on the stack, and what are those words?
Note that, for this exercise to work properly, you should be using the patched version of QEMU available on the [tools][4] page or on Athena. Otherwise, you'll have to manually translate all breakpoint and memory addresses to linear addresses.
The above exercise should give you the information you need to implement a stack backtrace function, which you should call `mon_backtrace()`. A prototype for this function is already waiting for you in `kern/monitor.c`. You can do it entirely in C, but you may find the `read_ebp()` function in `inc/x86.h` useful. You'll also have to hook this new function into the kernel monitor's command list so that it can be invoked interactively by the user.
The backtrace function should display a listing of function call frames in the following format:
```
Stack backtrace:
ebp f0109e58 eip f0100a62 args 00000001 f0109e80 f0109e98 f0100ed2 00000031
ebp f0109ed8 eip f01000d6 args 00000000 00000000 f0100058 f0109f28 00000061
...
```
Each line contains an `ebp`, `eip`, and `args`. The `ebp` value indicates the base pointer into the stack used by that function: i.e., the position of the stack pointer just after the function was entered and the function prologue code set up the base pointer. The listed `eip` value is the function's _return instruction pointer_ : the instruction address to which control will return when the function returns. The return instruction pointer typically points to the instruction after the `call` instruction (why?). Finally, the five hex values listed after `args` are the first five arguments to the function in question, which would have been pushed on the stack just before the function was called. If the function was called with fewer than five arguments, of course, then not all five of these values will be useful. (Why can't the backtrace code detect how many arguments there actually are? How could this limitation be fixed?)
The first line printed reflects the _currently executing_ function, namely `mon_backtrace` itself, the second line reflects the function that called `mon_backtrace`, the third line reflects the function that called that one, and so on. You should print _all_ the outstanding stack frames. By studying `kern/entry.S` you'll find that there is an easy way to tell when to stop.
Here are a few specific points you read about in K&R Chapter 5 that are worth remembering for the following exercise and for future labs.
* If `int *p = (int*)100`, then `(int)p + 1` and `(int)(p + 1)` are different numbers: the first is `101` but the second is `104`. When adding an integer to a pointer, as in the second case, the integer is implicitly multiplied by the size of the object the pointer points to.
* `p[i]` is defined to be the same as `*(p+i)`, referring to the i'th object in the memory pointed to by p. The above rule for addition helps this definition work when the objects are larger than one byte.
* `&p[i]` is the same as `(p+i)`, yielding the address of the i'th object in the memory pointed to by p.
Although most C programs never need to cast between pointers and integers, operating systems frequently do. Whenever you see an addition involving a memory address, ask yourself whether it is an integer addition or pointer addition and make sure the value being added is appropriately multiplied or not.
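A small user-space program makes these rules concrete (assuming a 4-byte `int`, as in the 32-bit environment these labs target):

```c
#include <stdio.h>

int main(void)
{
	int *p = (int *)100;

	printf("%d\n", (int)p + 1);     /* 101: plain integer addition                  */
	printf("%d\n", (int)(p + 1));   /* 104: p + 1 advances by sizeof(int) = 4 bytes */

	int a[4] = { 10, 20, 30, 40 };
	printf("%d %d\n", a[2], *(a + 2));                  /* 30 30: a[i] is *(a + i)  */
	printf("%p %p\n", (void *)&a[2], (void *)(a + 2));  /* same address             */
	return 0;
}
```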
Exercise 11. Implement the backtrace function as specified above. Use the same format as in the example, since otherwise the grading script will be confused. When you think you have it working right, run make grade to see if its output conforms to what our grading script expects, and fix it if it doesn't. _After_ you have handed in your Lab 1 code, you are welcome to change the output format of the backtrace function any way you like.
If you use `read_ebp()`, note that GCC may generate "optimized" code that calls `read_ebp()` _before_ `mon_backtrace()`'s function prologue, which results in an incomplete stack trace (the stack frame of the most recent function call is missing). While we have tried to disable optimizations that cause this reordering, you may want to examine the assembly of `mon_backtrace()` and make sure the call to `read_ebp()` is happening after the function prologue.
At this point, your backtrace function should give you the addresses of the function callers on the stack that lead to `mon_backtrace()` being executed. However, in practice you often want to know the function names corresponding to those addresses. For instance, you may want to know which functions could contain a bug that's causing your kernel to crash.
To help you implement this functionality, we have provided the function `debuginfo_eip()`, which looks up `eip` in the symbol table and returns the debugging information for that address. This function is defined in `kern/kdebug.c`.
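Its typical use looks roughly like the sketch below; the `struct Eipdebuginfo` fields shown are the ones declared in `kern/kdebug.h`, but check the header for the authoritative definitions:

```c
#include <stdint.h>

/* Abbreviated declarations mirroring kern/kdebug.h (see the header itself). */
struct Eipdebuginfo {
	const char *eip_file;      /* source file name for the address             */
	int eip_line;              /* source line number                           */
	const char *eip_fn_name;   /* function name -- NOT null-terminated ...     */
	int eip_fn_namelen;        /* ... so its length is given separately        */
	uintptr_t eip_fn_addr;     /* address of the function's first instruction  */
};
int debuginfo_eip(uintptr_t addr, struct Eipdebuginfo *info);
int cprintf(const char *fmt, ...);

/* Sketch: turn a return eip into "file:line: function+offset". */
void print_frame_info(uintptr_t eip)
{
	struct Eipdebuginfo info;

	if (debuginfo_eip(eip, &info) == 0)
		cprintf("%s:%d: %.*s+%d\n",
		        info.eip_file, info.eip_line,
		        info.eip_fn_namelen, info.eip_fn_name,
		        (int)(eip - info.eip_fn_addr));
}
```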
Exercise 12. Modify your stack backtrace function to display, for each `eip`, the function name, source file name, and line number corresponding to that `eip`.
In `debuginfo_eip`, where do `__STAB_*` come from? This question has a long answer; to help you to discover the answer, here are some things you might want to do:
* look in the file `kern/kernel.ld` for `__STAB_*`
* run objdump -h obj/kern/kernel
* run objdump -G obj/kern/kernel
* run gcc -pipe -nostdinc -O2 -fno-builtin -I. -MD -Wall -Wno-format -DJOS_KERNEL -gstabs -c -S kern/init.c, and look at init.s.
* see if the bootloader loads the symbol table in memory as part of loading the kernel binary
Complete the implementation of `debuginfo_eip` by inserting the call to `stab_binsearch` to find the line number for an address.
Add a `backtrace` command to the kernel monitor, and extend your implementation of `mon_backtrace` to call `debuginfo_eip` and print a line for each stack frame of the form:
```
K> backtrace
Stack backtrace:
ebp f010ff78 eip f01008ae args 00000001 f010ff8c 00000000 f0110580 00000000
kern/monitor.c:143: monitor+106
ebp f010ffd8 eip f0100193 args 00000000 00001aac 00000660 00000000 00000000
kern/init.c:49: i386_init+59
ebp f010fff8 eip f010003d args 00000000 00000000 0000ffff 10cf9a00 0000ffff
kern/entry.S:70: <unknown>+0
K>
```
Each line gives the file name and line within that file of the stack frame's `eip`, followed by the name of the function and the offset of the `eip` from the first instruction of the function (e.g., `monitor+106` means the return `eip` is 106 bytes past the beginning of `monitor`).
Be sure to print the file and function names on a separate line, to avoid confusing the grading script.
Tip: printf format strings provide an easy, albeit obscure, way to print non-null-terminated strings like those in STABS tables. `printf("%.*s", length, string)` prints at most `length` characters of `string`. Take a look at the printf man page to find out why this works.
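For instance, the following user-space snippet prints a fixed-length slice of a buffer that carries no terminating `'\0'` (the buffer contents here are made up for the example):

```c
#include <stdio.h>

int main(void)
{
	/* Not null-terminated: the precision keeps printf from running past the end. */
	char name[] = { 'i', '3', '8', '6', '_', 'i', 'n', 'i', 't', 'X', 'Y' };
	int namelen = 9;

	printf("%.*s\n", namelen, name);   /* prints "i386_init" */
	return 0;
}
```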
You may find that some functions are missing from the backtrace. For example, you will probably see a call to `monitor()` but not to `runcmd()`. This is because the compiler in-lines some function calls. Other optimizations may cause you to see unexpected line numbers. If you get rid of the `-O2` from `GNUMakefile`, the backtraces may make more sense (but your kernel will run more slowly).
**This completes the lab.** In the `lab` directory, commit your changes with git commit and type make handin to submit your code.
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: http://www.git-scm.com/
[2]: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
[3]: http://eagain.net/articles/git-for-computer-scientists/
[4]: https://pdos.csail.mit.edu/6.828/2018/tools.html
[5]: https://6828.scripts.mit.edu/2018/handin.py/
[6]: https://pdos.csail.mit.edu/6.828/2018/readings/pcasm-book.pdf
[7]: http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html
[8]: https://pdos.csail.mit.edu/6.828/2018/reference.html
[9]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm
[10]: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html
[11]: http://developer.amd.com/resources/developer-guides-manuals/
[12]: http://www.qemu.org/
[13]: http://www.gnu.org/software/gdb/
[14]: http://web.archive.org/web/20040404164813/members.iweb.net.au/~pstorr/pcbook/book2/book2.htm
[15]: https://pdos.csail.mit.edu/6.828/2018/readings/boot-cdrom.pdf
[16]: https://pdos.csail.mit.edu/6.828/2018/labguide.html
[17]: http://www.amazon.com/C-Programming-Language-2nd/dp/0131103628/sr=8-1/qid=1157812738/ref=pd_bbs_1/104-1502762-1803102?ie=UTF8&s=books
[18]: http://library.mit.edu/F/AI9Y4SJ2L5ELEE2TAQUAAR44XV5RTTQHE47P9MKP5GQDLR9A8X-10422?func=item-global&doc_library=MIT01&doc_number=000355242&year=&volume=&sub_library=
[19]: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/pointers.c
[20]: https://pdos.csail.mit.edu/6.828/2018/readings/pointers.pdf
[21]: https://pdos.csail.mit.edu/6.828/2018/readings/elf.pdf
[22]: http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[23]: https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html
[24]: http://web.cs.mun.ca/~michael/c/ascii-table.html
[25]: http://www.webopedia.com/TERM/b/big_endian.html
[26]: http://www.networksorcery.com/enp/ien/ien137.txt
[27]: http://rrbrandt.dee.ufcg.edu.br/en/docs/ansi/

View File

@ -1,272 +0,0 @@
Lab 2: Memory Management
======
### Lab 2: Memory Management
#### Introduction
In this lab, you will write the memory management code for your operating system. Memory management has two components.
The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called _pages_. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory.
The second component of memory management is _virtual memory_ , which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide.
##### Getting started
In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called `lab2` based on our lab2 branch, `origin/lab2`:
```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab2 origin/lab2
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
Switched to a new branch "lab2"
athena%
```
The git checkout -b command shown above actually does two things: it first creates a local branch `lab2` that is based on the `origin/lab2` branch provided by the course staff, and second, it changes the contents of your `lab` directory to reflect the files stored on the `lab2` branch. Git allows switching between existing branches using git checkout _branch-name_ , though you should commit any outstanding changes on one branch before switching to a different one.
You will now need to merge the changes you made in your `lab1` branch into the `lab2` branch, as follows:
```
athena% git merge lab1
Merge made by recursive.
kern/kdebug.c | 11 +++++++++--
kern/monitor.c | 19 +++++++++++++++++++
lib/printfmt.c | 7 +++----
3 files changed, 31 insertions(+), 6 deletions(-)
athena%
```
In some cases, Git may not be able to figure out how to merge your changes with the new lab assignment (e.g. if you modified some of the code that is changed in the second lab assignment). In that case, the git merge command will tell you which files are _conflicted_ , and you should first resolve the conflict (by editing the relevant files) and then commit the resulting files with git commit -a.
Lab 2 contains the following new source files, which you should browse through:
* `inc/memlayout.h`
* `kern/pmap.c`
* `kern/pmap.h`
* `kern/kclock.h`
* `kern/kclock.c`
`memlayout.h` describes the layout of the virtual address space that you must implement by modifying `pmap.c`. `memlayout.h` and `pmap.h` define the `PageInfo` structure that you'll use to keep track of which pages of physical memory are free. `kclock.c` and `kclock.h` manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in `pmap.c` needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works.
Pay particular attention to `memlayout.h` and `pmap.h`, since this lab requires you to use and understand many of the definitions they contain. You may want to review `inc/mmu.h`, too, as it also contains a number of definitions that will be useful for this lab.
Before beginning the lab, don't forget to add -f 6.828 to get the 6.828 version of QEMU.
##### Lab Requirements
In this lab and subsequent labs, do all of the regular exercises described in the lab and _at least one_ challenge problem. (Some challenge problems are more challenging than others, of course!) Additionally, write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab2.txt` in the top level of your `lab` directory before handing in your work.
##### Hand-In Procedure
When you are ready to hand in your lab code and write-up, add your `answers-lab2.txt` to the Git repository, commit your changes, and then run make handin.
```
athena% git add answers-lab2.txt
athena% git commit -am "my answer to lab2"
[lab2 a823de9] my answer to lab2
4 files changed, 87 insertions(+), 10 deletions(-)
athena% make handin
```
As before, we will be grading your solutions with a grading program. You can run make grade in the `lab` directory to test your kernel with the grading program. You may change any of the kernel source and header files you need to in order to complete the lab, but needless to say you must not change or otherwise subvert the grading code.
#### Part 1: Physical Page Management
The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with _page granularity_ so that it can use the MMU to map and protect each piece of allocated memory.
You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of `struct PageInfo` objects (which, unlike xv6, are not embedded in the free pages themselves), each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables.
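In rough outline, the data structure and the free-list manipulation this describes look like the sketch below (the field names follow the lab text; the real declarations are in `inc/memlayout.h`, and the real `page_alloc()`/`page_free()` in `kern/pmap.c` additionally handle flags, zeroing, and error checks):

```c
#include <stddef.h>
#include <stdint.h>

struct PageInfo {
	struct PageInfo *pp_link;   /* next page on the free list        */
	uint16_t pp_ref;            /* number of references to this page */
};

static struct PageInfo *page_free_list;   /* head of the free list */

/* Take one physical page off the free list; NULL means no free memory. */
struct PageInfo *page_alloc_sketch(void)
{
	struct PageInfo *pp = page_free_list;

	if (pp) {
		page_free_list = pp->pp_link;
		pp->pp_link = NULL;     /* an allocated page is on no list */
	}
	return pp;
}

/* Return a page whose reference count has dropped to zero to the free list. */
void page_free_sketch(struct PageInfo *pp)
{
	pp->pp_link = page_free_list;
	page_free_list = pp;
}
```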
Exercise 1. In the file `kern/pmap.c`, you must implement code for the following functions (probably in the order given).
`boot_alloc()`
`mem_init()` (only up to the call to `check_page_free_list(1)`)
`page_init()`
`page_alloc()`
`page_free()`
`check_page_free_list()` and `check_page_alloc()` test your physical page allocator. You should boot JOS and see whether `check_page_alloc()` reports success. Fix your code so that it passes. You may find it helpful to add your own `assert()`s to verify that your assumptions are correct.
This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes.
#### Part 2: Virtual Memory
Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely _segmentation_ and _page translation_.
Exercise 2. Look at chapters 5 and 6 of the [Intel 80386 Reference Manual][1], if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses the paging hardware for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it.
##### Virtual, Linear, and Physical Addresses
In x86 terminology, a _virtual address_ consists of a segment selector and an offset within the segment. A _linear address_ is what you get after segment translation but before page translation. A _physical address_ is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM.
```
Selector +--------------+ +-----------+
---------->| | | |
| Segmentation | | Paging |
Software | |-------->| |----------> RAM
Offset | Mechanism | | Mechanism |
---------->| | | |
+--------------+ +-----------+
Virtual Linear Physical
```
A C pointer is the "offset" component of the virtual address. In `boot/boot.S`, we installed a Global Descriptor Table (GDT) that effectively disabled segment translation by setting all segment base addresses to 0 and limits to `0xffffffff`. Hence the "selector" has no effect and the linear address always equals the offset of the virtual address. In lab 3, we'll have to interact a little more with segmentation to set up privilege levels, but as for memory translation, we can ignore segmentation throughout the JOS labs and focus solely on page translation.
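As a concrete reminder of what the paging mechanism then does with a 32-bit linear address, here is a small user-space sketch of the index arithmetic (the 10/10/12-bit split used by ordinary 4KB x86 paging; the real translation is of course performed by the MMU):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t la = 0xf0100034;                /* an example linear address */

	uint32_t pdx    = (la >> 22) & 0x3ff;    /* page directory index: top 10 bits */
	uint32_t ptx    = (la >> 12) & 0x3ff;    /* page table index: next 10 bits    */
	uint32_t offset =  la        & 0xfff;    /* offset within the 4KB page        */

	printf("la=0x%08x -> pdx=%u ptx=%u offset=0x%03x\n", la, pdx, ptx, offset);
	return 0;
}
```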
Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual address space layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of the virtual address space.
Exercise 3. While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU [monitor commands][2] from the lab tools guide, especially the `xp` command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console).
Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data.
Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual addresses are mapped and with what permissions.
From code executing on the CPU, once we're in protected mode (which we entered first thing in `boot/boot.S`), there's no way to directly use a linear or physical address. _All_ memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses.
The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type `uintptr_t` represents opaque virtual addresses, and `physaddr_t` represents physical addresses. Both these types are really just synonyms for 32-bit integers (`uint32_t`), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler _will_ complain if you try to dereference them.
The JOS kernel can dereference a `uintptr_t` by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a `physaddr_t` to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.
To summarize:
| C type | Address type |
|--------------|--------------|
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |
Question
1. Assuming that the following JOS kernel code is correct, what type should variable `x` have, `uintptr_t` or `physaddr_t`?
```
mystery_t x;
char* value = return_a_pointer();
*value = 10;
x = (mystery_t) value;
```
The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel cannot bypass virtual address translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use `KADDR(pa)` to do that addition.
The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by `boot_alloc()` are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use `PADDR(va)` to do that subtraction.
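In essence, the two helpers just add or subtract the 0xf0000000 remapping offset. Here is a self-contained sketch of that arithmetic (the real `KADDR`/`PADDR` in `kern/pmap.h` are macros that additionally panic on addresses outside the valid range):

```c
#include <stdint.h>

#define KERNBASE 0xf0000000u

typedef uint32_t physaddr_t;

/* Physical address -> kernel virtual address in the remapped region. */
static inline void *kaddr_sketch(physaddr_t pa)
{
	return (void *)(pa + KERNBASE);
}

/* Kernel virtual address (inside the remapped region) -> physical address. */
static inline physaddr_t paddr_sketch(void *va)
{
	return (physaddr_t)((uintptr_t)va - KERNBASE);
}
```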
##### Reference counting
In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the `pp_ref` field of the `struct PageInfo` corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should be equal to the number of times the physical page appears below `UTOP` in all page tables (the mappings above `UTOP` are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages.
Be careful when using `page_alloc`. The page it returns will always have a reference count of 0, so `pp_ref` should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, `page_insert`) and sometimes the function calling `page_alloc` must do it directly.
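A caller that maps a freshly allocated page itself would therefore be expected to do something along these lines (a sketch; `page_alloc`, `ALLOC_ZERO`, and `E_NO_MEM` are the names used by the lab's own `kern/pmap.c` and `inc/error.h`):

```c
#include <inc/error.h>   /* E_NO_MEM */
#include <kern/pmap.h>   /* struct PageInfo, page_alloc(), ALLOC_ZERO */

/* Sketch: allocate one page and account for the reference we are about to create. */
static int grab_one_page(struct PageInfo **out)
{
	struct PageInfo *pp = page_alloc(ALLOC_ZERO);

	if (!pp)
		return -E_NO_MEM;   /* no free physical pages left */

	pp->pp_ref++;           /* page_alloc leaves pp_ref at 0; the caller (or a
	                         * helper such as page_insert) must count the reference */
	*out = pp;
	return 0;
}
```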
##### Page Table Management
Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed.
Exercise 4. In the file `kern/pmap.c`, you must implement code for the following functions.
```
pgdir_walk()
boot_map_region()
page_lookup()
page_remove()
page_insert()
```
`check_page()`, called from `mem_init()`, tests your page table management routines. You should make sure it reports success before proceeding.
#### Part 3: Kernel Address Space
JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol `ULIM` in `inc/memlayout.h`, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time.
You'll find it helpful to refer to the JOS memory layout diagram in `inc/memlayout.h` both for this part and for later labs.
##### Permissions and Fault Isolation
Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data. Note that the writable permission bit (`PTE_W`) affects both user and kernel code!
The user environment will have no permission to any of the memory above `ULIM`, while the kernel will be able to read and write this memory. For the address range `[UTOP,ULIM)`, both the kernel and the user environment have the same permission: they can read but not write this address range. This range of address is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below `UTOP` is for the user environment to use; the user environment will set permissions for accessing this memory.
##### Initializing the Kernel Address Space
Now you'll set up the address space above `UTOP`: the kernel part of the address space. `inc/memlayout.h` shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear to physical mappings.
Exercise 5. Fill in the missing code in `mem_init()` after the call to `check_page()`.
Your code should now pass the `check_kern_pgdir()` and `check_page_installed_pgdir()` checks.
Question
2. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible:
| Entry | Base Virtual Address | Points to (logically): |
|-------|----------------------|---------------------------------------|
| 1023 | ? | Page table for top 4MB of phys memory |
| 1022 | ? | ? |
| . | ? | ? |
| . | ? | ? |
| . | ? | ? |
| 2 | 0x00800000 | ? |
| 1 | 0x00400000 | ? |
| 0 | 0x00000000 | [see next question] |
3. We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory?
4. What is the maximum amount of physical memory that this operating system can support? Why?
5. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down?
6. Revisit the page table setup in `kern/entry.S` and `kern/entrypgdir.c`. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary?
Challenge! We consumed many physical pages to hold the page tables for the KERNBASE mapping. Do a more space-efficient job using the PTE_PS ("Page Size") bit in the page directory entries. This bit was _not_ supported in the original 80386, but is supported on more recent x86 processors. You will therefore have to refer to [Volume 3 of the current Intel manuals][3]. Make sure you design the kernel to use this optimization only on processors that support it!
Challenge! Extend the JOS kernel monitor with commands to:
* Display in a useful and easy-to-read format all of the physical page mappings (or lack thereof) that apply to a particular range of virtual/linear addresses in the currently active address space. For example, you might enter `'showmappings 0x3000 0x5000'` to display the physical page mappings and corresponding permission bits that apply to the pages at virtual addresses 0x3000, 0x4000, and 0x5000.
* Explicitly set, clear, or change the permissions of any mapping in the current address space.
* Dump the contents of a range of memory given either a virtual or physical address range. Be sure the dump code behaves correctly when the range extends across page boundaries!
* Do anything else that you think might be useful later for debugging the kernel. (There's a good chance it will be!)
##### Address Space Layout Alternatives
The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the _upper_ part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as _virtual 8086 mode_ , is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there.
It is even possible, though much more difficult, to design the kernel so as not to have to reserve _any_ fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the _entire_ 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other!
Challenge! Each user-level environment maps the kernel. Change JOS so that the kernel has its own page table and so that a user-level environment runs with a minimal number of kernel pages mapped. That is, each user-level environment has just enough pages mapped so that it can enter and leave the kernel correctly. You also have to come up with a plan for the kernel to read/write arguments to system calls.
Challenge! Write up an outline of how a kernel could be designed to allow user environments unrestricted use of the full 4GB virtual and linear address space. Hint: do the previous challenge exercise first, which reduces the kernel to a few mappings in a user environment. Hint: the technique is sometimes known as " _follow the bouncing kernel_. " In your design, be sure to address exactly what has to happen when the processor transitions between kernel and user modes, and how the kernel would accomplish such transitions. Also describe how the kernel would access physical memory and I/O devices in this scheme, and how the kernel would access a user environment's virtual address space during system calls and the like. Finally, think about and describe the advantages and disadvantages of such a scheme in terms of flexibility, performance, kernel complexity, and other factors you can think of.
Challenge! Since our JOS kernel's memory management system only allocates and frees memory on page granularity, we do not have anything comparable to a general-purpose `malloc`/`free` facility that we can use within the kernel. This could be a problem if we want to support certain types of I/O devices that require _physically contiguous_ buffers larger than 4KB in size, or if we want user-level environments, and not just the kernel, to be able to allocate and map 4MB _superpages_ for maximum processor efficiency. (See the earlier challenge problem about PTE_PS.)

Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system.
**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab2.txt`. Commit your changes (including adding `answers-lab2.txt`) and type make handin in the `lab` directory to hand in your lab.
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab2/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm
[2]: https://pdos.csail.mit.edu/6.828/2018/labguide.html#qemu
[3]: https://pdos.csail.mit.edu/6.828/2018/readings/ia32/IA32-3A.pdf

View File

@ -1,265 +0,0 @@
Translating by jlztan
Turn your book into a website and an ePub using Pandoc
======
Write once, publish twice using Markdown and Pandoc.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF.
In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it in GitHub. Finally, I demonstrate how to generate its companion ePub book.
You can find the code in my [Programming Fight Club][4] GitHub repository.
### Setting up the writing structure
I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 ( **#** ). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
My about file might begin like:
```
# About this book {-}
## Who should read this book {-}
Before creating a complex software system one needs to create a solid foundation.
General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
responsibilities to software classes in object-oriented programming.
```
Once the chapters are finished, the next step is to add meta-information to setup the format for the website and the ePub.
### Generating the website
#### Create the HTML meta-information file
The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the `<head>` tag, and content for the beginning and end of the HTML file.
I recommend (at minimum) including the following fields in the web-metadata.yaml file:
```
---
title: <a href="/grasp-principles/toc/">GRASP principles for the Object-oriented mind</a>
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
header-includes:
- |
  \```{=html}
  <link href="https://fonts.googleapis.com/css?family=Inconsolata" rel="stylesheet">
  <link href="https://fonts.googleapis.com/css?family=Gentium+Basic|Inconsolata" rel="stylesheet">
  \```
include-before:
- |
  \```{=html}
  <p>If you like this book, please consider
      spreading the word or
      <a href="https://www.buymeacoffee.com/programming">
        buying me a coffee
      </a>
  </p>
  \```
include-after:
- |
  \```{=html}
  <div class="footnotes">
    <hr>
    <div class="container">
        <nav class="pagination" role="pagination">
          <ul>
          <p>
          <span class="page-number">Designed with</span> ❤️  <span class="page-number"> from Uppsala, Sweden</span>
           </p>
           <p>
           <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>
           </p>
           </ul>
        </nav>
    </div>
  </div>
  \```
---
```
Some variables to note:
* The **header-includes** variable contains HTML that will be embedded inside the `<head>` tag.
* The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.)
* The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee.
* The **include-after** variable appends raw HTML at the end of the website and shows my book's license.
These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others.
#### Split the website into chapters
The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website.
To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html.
For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). This way, when users type `http://<your-website.com>/about/`, the index.html file from the folder about will be displayed in their browser.
The following Makefile does all of this:
```
# Your book files
DEPENDENCIES= toc preface about
# Placement of your HTML files
DOCS=docs
all: web
web: setup $(DEPENDENCIES)
        @cp $(DOCS)/toc/index.html $(DOCS)
# Creation and copy of stylesheet and images into
# the assets folder. This is important to deploy the
# website to Github Pages.
setup:
        @mkdir -p $(DOCS)
        @cp -r assets $(DOCS)
# Creation of folder and index.html file on a
# per-chapter basis
$(DEPENDENCIES):
        @mkdir -p $(DOCS)/$@
        @pandoc -s --toc web-metadata.yaml parts/$@.md \
        -c /assets/pandoc.css -o $(DOCS)/$@/index.html
clean:
        @rm -rf $(DOCS)
.PHONY: all clean web setup
```
The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the `<head>` HTML tag, Pandoc adds the following line:
```
<link rel="stylesheet" href="/assets/pandoc.css">
```
To generate the website, type:
```
make
```
The root folder should contain now the following structure and files:
```
.---parts
|    |--- toc.md
|    |--- preface.md
|    |--- about.md
|
|---docs
    |--- assets/
    |--- index.html
    |--- toc
    |     |--- index.html
    |
    |--- preface
    |     |--- index.html
    |
    |--- about
          |--- index.html
   
```
#### Deploy the website
To deploy the website on GitHub, follow these steps:
1. Create a new repository
2. Push your content to the repository
3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
You can get more details on the [GitHub Pages][5] site.
Check out [my book's website][6], generated using this process, to see the result.
### Generating the ePub book
#### Create the ePub meta-information file
The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
```
---
title: 'GRASP principles for the Object-oriented Mind'
publisher: 'Programming Language Fight Club'
author: Kiko Fernandez-Reyes
rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
cover-image: assets/cover.png
stylesheet: assets/epub.css
...
```
Add the following content to the previous Makefile:
```
epub:
        @pandoc -s --toc epub-meta.yaml \
        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
```
The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends the Markdown extension to them, and prepends them with the path to the folder where the chapters live (`parts/`) so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** was only **preface about**, then the Makefile would call:
```
@pandoc -s --toc epub-meta.yaml \
parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
```
Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the `assets` folder.
Here's an [example][7] of an ePub created using this process.
### Summarizing the process
The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
* HTML book:
* Write chapters in Markdown
* Add metadata
* Create a Makefile to glue pieces together
* Set up GitHub Pages
* Deploy
* ePub book:
* Reuse chapters from previous work
* Add new metadata file
* Create a Makefile to glue pieces together
* Set up GitHub Pages
* Deploy
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc
作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kikofernandez
[1]: https://opensource.com/article/18/9/intro-pandoc
[2]: https://pandoc.org/
[3]: https://www.programmingfightclub.com/
[4]: https://github.com/kikofernandez/programmingfightclub
[5]: https://pages.github.com/
[6]: https://www.programmingfightclub.com/grasp-principles/
[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub

View File

@ -1,121 +0,0 @@
Translating by fuowang
Archiving web sites
======
I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.
### Converting simple sites
The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard.
For simple or static sites, the venerable [Wget][3] program works well. The incantation to mirror a full web site, however, is byzantine:
```
$ nice wget --mirror --execute robots=off --no-verbose --convert-links \
--backup-converted --page-requisites --adjust-extension \
--base=./ --directory-prefix=./ --span-hosts \
--domains=www.example.com,example.com http://www.example.com/
```
The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores [`robots.txt`][] rules, as is now [common practice for archivists][4], and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
### JavaScript doom
Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm.
Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.
As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.
### Creating and displaying WARC files
At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13].
A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`:
```
$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback
```
This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents.
Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked:
```
$ crawl https://example.com/
```
(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb.
### Future work and alternatives
There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.
This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.
In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is, however, [missing some features][30] from Wget, most notably reject patterns, WARC output, and FTP support, but it adds RSS, DNS caching, and improved TLS support.
Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.
The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37].
--------------------------------------------------------------------------------
via: https://anarc.at/blog/2018-10-04-archiving-web-sites/
作者:[Anarcat][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://anarc.at
[1]: https://anarc.at/blog
[2]: https://drupal.org
[3]: https://www.gnu.org/software/wget/
[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
[5]: https://en.wikipedia.org/wiki/Progressive_enhancement
[6]: https://noscript.net/
[7]: https://github.com/gorhill/uMatrix
[8]: https://pamplemousse.ca/
[9]: https://archive.org
[10]: http://www.archive.org/web/researcher/ArcFileFormat.php
[11]: https://iipc.github.io/warc-specifications/
[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium
[13]: https://github.com/internetarchive/heritrix3/wiki
[14]: https://github.com/webrecorder/pywb
[15]: https://webrecorder.io/
[16]: https://github.com/webrecorder/pywb/issues/294
[17]: https://github.com/iipc/warc-specifications/issues/23
[18]: https://github.com/iipc/warc-specifications/pull/24
[19]: https://git.autistici.org/ale/crawl/
[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem
[21]: https://github.com/chfoo/wpull
[22]: http://phantomjs.org/
[23]: http://rg3.github.io/youtube-dl/
[24]: https://www.archiveteam.org/index.php?title=ArchiveBot
[25]: https://archiveteam.org/
[26]: https://github.com/JustAnotherArchivist/snscrape
[27]: https://github.com/PromyLOPh/crocoite
[28]: http://www.httrack.com/
[29]: https://gitlab.com/gnuwget/wget2
[30]: https://gitlab.com/gnuwget/wget2/wikis/home
[31]: https://wallabag.org/
[32]: https://getpocket.com/
[33]: https://github.com/wallabag/wallabag/issues/2825
[34]: https://github.com/wallabag/wallabag/issues/2914
[35]: https://pirate.github.io/bookmark-archiver/
[36]: https://github.com/kanishka-linux/reminiscence
[37]: http://iabak.archiveteam.org

View File

@ -0,0 +1,157 @@
5 Easy Tips for Linux Web Browser Security
======
![](https://www.linux.com/learn/intro-to-linux/2018/11/5-easy-tips-linux-web-browser-security)
If you use your Linux desktop and never open a web browser, you are a special kind of user. For most of us, however, a web browser has become one of the most-used digital tools on the planet. We work, we play, we get news, we interact, we bank… the number of things we do via a web browser far exceeds what we do in local applications. Because of that, we need to be cognizant of how we work with web browsers, and do so with a nod to security. Why? Because there will always be nefarious sites and people attempting to steal information. Considering the sensitive nature of the information we send through our web browsers, it should be obvious why security is of utmost importance.
So, what is a user to do? In this article, I'll offer a few basic tips, for users of all sorts, to help decrease the chances that your data will end up in the hands of the wrong people. I will be demonstrating on the Firefox web browser, but many of these tips cross the application threshold and can be applied to any flavor of web browser.
### 1. Choose Your Browser Wisely
Although most of these tips apply to most browsers, it is imperative that you select your web browser wisely. One of the more important aspects of browser security is the frequency of updates. New issues are discovered quite frequently and you need to have a web browser that is as up to date as possible. Of major browsers, here is how they rank with updates released in 2017:
1. Chrome released 8 updates (with Chromium following up with numerous security patches throughout the year).
2. Firefox released 7 updates.
3. Edge released 2 updates.
4. Safari released 1 update (although Apple does release 5-6 security patches yearly).
But even if your browser of choice releases an update every month, if you (as a user) don't upgrade, that update does you no good. This can be problematic with certain Linux distributions. Although many of the more popular flavors of Linux do a good job of keeping web browsers up to date, others do not. So, it's crucial that you manually keep on top of browser updates. This might mean your distribution of choice doesn't include the latest version of your web browser of choice in its standard repository. If that's the case, you can always manually download the latest version of the browser from the developer's download page and install from there.
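As a sketch of that manual route (the download URL and install location are assumptions based on Mozilla's generic Linux tarball; adjust them for your distribution and preferences):

```
# fetch the latest generic Linux build of Firefox
wget -O firefox.tar.bz2 "https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US"
# unpack it into /opt and put the binary on the PATH
sudo tar -xjf firefox.tar.bz2 -C /opt
sudo ln -sf /opt/firefox/firefox /usr/local/bin/firefox
```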
If you like to live on the edge, you can always use a beta or daily build version of your browser. Do note that using a daily build or beta version comes with the possibility of unstable software. Say, however, you're okay with using a daily build of Firefox on an Ubuntu-based distribution. To do that, add the necessary repository with the command:
```
sudo apt-add-repository ppa:ubuntu-mozilla-daily/ppa
```
Update apt and install the daily Firefox with the commands:
```
sudo apt-get update
sudo apt-get install firefox
```
What's most important here is to never allow your browser to get far out of date. You want to have the most updated version possible on your desktop. Period. If you fail this one thing, you could be using a browser that is vulnerable to numerous issues.
### 2. Use A Private Window
Now that you have your browser updated, how do you best make use of it? If you happen to be of the really concerned type, you should consider always using a private window. Why? Private browser windows don't retain your data: No passwords, no cookies, no cache, no history… nothing. The one caveat to browsing through a private window is that (as you probably expect), every time you go back to a web site, or use a service, you'll have to re-type any credentials to log in. If you're serious about browser security, never saving credentials should be your default behavior.
This leads me to a reminder that everyone needs: Make your passwords strong! In fact, at this point in the game, everyone should be using a password manager to store very strong passwords. My password manager of choice is [Universal Password Manager][1].
### 3. Protect Your Passwords
For some, having to retype those passwords every single time might be too much. So what do you do if you want to protect those passwords, while not having to type them constantly? If you use Firefox, there's a built-in tool called Master Password. With this enabled, none of your browser's saved passwords are accessible until you correctly type the master password. To set this up, do the following:
1. Open Firefox.
2. Click the menu button.
3. Click Preferences.
4. In the Preferences window, click Privacy & Security.
5. In the resulting window, click the checkbox for Use a master password (Figure 1).
6. When prompted, type and verify your new master password (Figure 2).
7. Close and reopen Firefox.
![Master Password][3]
Figure 1: The Master Password option in Firefox Preferences.
[Used with permission][4]
![Setting password][6]
Figure 2: Setting the Master Password in Firefox.
[Used with permission][4]
### 4. Know your Extensions
There are plenty of privacy-focused extensions available for most browsers. What extensions you use will depend upon what you want to focus on. For myself, I choose the following extensions for Firefox:
* [Firefox Multi-Account Containers][7] - Allows you to configure certain sites to open in a containerized tab.
* [Facebook Container][8] - Always opens Facebook in a containerized tab (Firefox Multi-Account Containers is required for this).
* [Avast Online Security][9] - Identifies and blocks known phishing sites and displays a website's security rating (curated by the Avast community of over 400 million users).
* [Mining Blocker][10] - Blocks all CPU-Crypto Miners before they are loaded.
* [PassFF][11] - Integrates with pass (a UNIX password manager) to store credentials safely.
* [Privacy Badger][12] - Automatically learns to block trackers.
* [uBlock Origin][13] - Blocks trackers based on known lists.
Of course, you'll find plenty more security-focused extensions for:
+ [Firefox][2]
+ [Chrome, Chromium, & Vivaldi][5]
+ [Opera][14]
Not every web browser offers extensions. Some, such as Midori, offer a limited number of built-in plugins that can be enabled/disabled (Figure 3). However, you won't find third-party plugins available for the majority of these lightweight browsers.
![Midori Browser][15]
Figure 3: The Midori Browser plugins window.
[Used with permission][4]
### 5. Virtualize
For those that are concerned about releasing locally stored data to prying eyes, one option would be to only use a browser on a virtual machine. To do this, install the likes of [VirtualBox][16], install a Linux guest, and then run whatever browser you like in the virtual environment. If you then apply the above tips, you can be sure your browsing experience will be safe.
### The Truth of the Matter
The truth is, if the machine you are working from is on a network, you're never going to be 100% safe. However, if you use that web browser intelligently you'll get more bang out of your security buck and be less prone to having data stolen. The silver lining with Linux is that the chances of getting malicious software installed on your machine are exponentially smaller than if you were using another platform. Just remember to always use the latest release of your browser, keep your operating system updated, and use caution with the sites you visit.
Learn more about Linux through the free ["Introduction to Linux" ][17] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/11/5-easy-tips-linux-web-browser-security
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: http://upm.sourceforge.net/
[2]: https://addons.mozilla.org/en-US/firefox/search/?q=security
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_1.jpg?itok=gHMPKEvr (Master Password)
[4]: https://www.linux.com/licenses/category/used-permission
[5]: https://chrome.google.com/webstore/search/security
[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_2.jpg?itok=4L7DR2Ik (Setting password)
[7]: https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/?src=search
[8]: https://addons.mozilla.org/en-US/firefox/addon/facebook-container/?src=search
[9]: https://addons.mozilla.org/en-US/firefox/addon/avast-online-security/?src=search
[10]: https://addons.mozilla.org/en-US/firefox/addon/miningblocker/?src=search
[11]: https://addons.mozilla.org/en-US/firefox/addon/passff/?src=search
[12]: https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/
[13]: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/?src=search
[14]: https://addons.opera.com/en/search/?query=security
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_3.jpg?itok=hdNor0gw (Midori Browser)
[16]: https://www.virtualbox.org/
[17]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,3 +1,4 @@
[zianglei translating]
How to manage storage on Linux with LVM
======
Create, expand, and encrypt storage pools as needed with the Linux LVM utilities.

View File

@ -1,124 +0,0 @@
translating---geekpi
Automate a web browser with Selenium
======
![](https://fedoramagazine.org/wp-content/uploads/2018/10/selenium-816x345.jpg)
[Selenium][1] is a great tool for browser automation. With Selenium IDE you can record sequences of commands (like click, drag and type), validate the result and finally store this automated test for later. This is great for active development in the browser. But when you want to integrate these tests with your CI/CD flow it's time to move on to Selenium WebDriver.
WebDriver exposes an API with bindings for many programming languages, which lets you integrate browser tests with your other tests. This post shows you how to run WebDriver in a container and use it together with a Python program.
### Running Selenium with Podman
Podman is the container runtime in the following examples. See [this previous post][2] for how to get started with Podman.
This example uses a standalone container for Selenium that contains both the WebDriver server and the browser itself. To launch the server container in the background run the following command:
```
$ podman run -d --network host --privileged --name server \
docker.io/selenium/standalone-firefox
```
When you run the container with the privileged flag and host networking, you can connect to this container later from a Python program. You do not need to use sudo.
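If you want to confirm the server is up before writing any Python, a quick check is to query its status endpoint; this is only a sanity-check sketch and assumes the standalone image listens on its default port, 4444:

```
# ask the WebDriver server for its status; it returns a small JSON document
$ curl http://127.0.0.1:4444/wd/hub/status
```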
### Using Selenium from Python
Now you can provide a simple program that uses this server. This program is minimal, but should give you an idea about what you can do:
```
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
server ="http://127.0.0.1:4444/wd/hub"
driver = webdriver.Remote(command_executor=server,
desired_capabilities=DesiredCapabilities.FIREFOX)
print("Loading page...")
driver.get("https://fedoramagazine.org/")
print("Loaded")
assert "Fedora" in driver.title
driver.quit()
print("Done.")
```
First the program connects to the container you already started. Then it loads the Fedora Magazine web page and asserts that “Fedora” is part of the page title. Finally, it quits the session.
Python bindings are required in order to run the program. And since you're already using containers, why not do this in a container as well? Save the following to a file named Dockerfile:
```
FROM fedora:29
RUN dnf -y install python3
RUN pip3 install selenium
```
Then build your container image using Podman, in the same folder as Dockerfile:
```
$ podman build -t selenium-python .
```
To run your program in the container, mount the file containing your Python code as a volume when you run the container:
```
$ podman run -t --rm --network host \
-v $(pwd)/browser-test.py:/browser-test.py:z \
selenium-python python3 browser-test.py
```
The output should look like this:
```
Loading page...
Loaded
Done.
```
### What to do next
The example program above is minimal, and perhaps not that useful. But it barely scratched the surface of what's possible! Check out the documentation for [Selenium][3] and for the [Python bindings][4]. There you'll find examples for how to locate elements in a page, handle popups, or fill in forms. Drag and drop is also possible, and of course waiting for various events.
With a few nice tests implemented, you may want to include the whole thing in your CI/CD pipeline. Luckily enough, this is fairly straightforward since everything was containerized to begin with.
You may also be interested in setting up a [grid][5] to run the tests in parallel. Not only does this help speed things up, but it also allows you to test several different browsers at the same time.
### Cleaning up
When you're done playing with your containers, you can stop and remove the standalone container with the following commands:
```
$ podman stop server
$ podman rm server
```
If you also want to free up disk space, run these commands to remove the images as well:
```
$ podman rmi docker.io/selenium/standalone-firefox
$ podman rmi selenium-python fedora:29
```
### Conclusion
In this post, you've seen how easy it is to get started with Selenium using container technology. It allowed you to automate interaction with a website, as well as test the interaction. Podman allowed you to run the containers necessary without super user privileges or the Docker daemon. Finally, the Python bindings let you use normal Python code to interact with the browser.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/automate-web-browser-selenium/
作者:[Lennart Jern][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/lennartj/
[b]: https://github.com/lujun9972
[1]: https://www.seleniumhq.org/
[2]: https://fedoramagazine.org/running-containers-with-podman/
[3]: https://www.seleniumhq.org/docs/
[4]: https://selenium-python.readthedocs.io
[5]: https://www.seleniumhq.org/docs/07_selenium_grid.jsp

View File

@ -1,186 +0,0 @@
translating by caixiangyue
How To Find The Execution Time Of A Command Or Process In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png)
You probably know the start time of a command/process and [**how long a process is running**][1] in Unix-like systems. But how do you know when it ended and/or how much time the command/process took in total to complete? Well, it's easy! On Unix-like systems, there is a utility named **GNU time** that is specifically designed for this purpose. Using the time utility, we can easily measure the total execution time of a command or program in Linux operating systems. The good thing is that the time command comes preinstalled in most Linux distributions, so you don't have to bother with installation.
### Find The Execution Time Of A Command Or Process In Linux
To measure the execution time of a command/program, just run:
```
$ /usr/bin/time -p ls
```
Or,
```
$ time ls
```
Sample output:
```
dir1 dir2 file1 file2 mcelog
real 0m0.007s
user 0m0.001s
sys 0m0.004s
$ time ls -a
. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful
.. .bashrc dir2 .gnupg .profile .wget-hsts
.bash_history .cache file1 .local .stack
real 0m0.008s
user 0m0.001s
sys 0m0.005s
```
The above commands display the total execution time of the **ls** command. Replace “ls” with any command/process of your choice to find its total execution time.
Here,
1. **real** refers to the total time taken by the command/program,
2. **user** refers to the time taken by the program in user mode,
3. **sys** refers to the time taken by the program in kernel mode.
We can also limit a command to run only for a certain amount of time; refer to the following guide for more details.
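That time limit is not something GNU time itself does; one common way is the timeout utility from GNU coreutils. A quick sketch (the command and duration are just examples):

```
# let the command run for at most 5 seconds, then terminate it
$ timeout 5s ping -c 100 localhost
```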
### time vs /usr/bin/time
As you may have noticed, we used two commands, **time** and **/usr/bin/time**, in the above examples. So, you might wonder what the difference between them is.
First, let us see what time actually is, using the type command. For those who don't know, the **type** command is used to find out information about a Linux command. For more details, refer to [**this guide**][2].
```
$ type -a time
time is a shell keyword
time is /usr/bin/time
```
As you see in the above output, time is both:
* A keyword built into the BASH shell
* An executable file, i.e. **/usr/bin/time**
Since shell keywords take precedence over executable files, when you just run the `time` command without its full path, you run the built-in shell keyword. But when you run `/usr/bin/time`, you run the real **GNU time** program. So, in order to access the real command, you may need to specify its explicit path. Clear, good?
The built-in time shell keyword is available in most shells like BASH, ZSH, CSH, KSH, TCSH etc. The time shell keyword has fewer options than the executable. The only option you can use with the time keyword is **-p**.
You now know how to find the total execution time of a given command/process using the time command. Want to know a little bit more about the GNU time utility? Read on!
### A brief introduction about GNU time program
The GNU time program runs a command/program with the given arguments and summarizes the system resource usage as standard output after the command has completed. Unlike the time keyword, the GNU time program not only displays the time used by the command/process, but also other resources like memory, I/O and IPC calls.
The typical syntax of the time command is:
```
/usr/bin/time [options] command [arguments...]
```
The options in the above syntax refer to a set of flags that can be used with the time command to perform a particular functionality. The list of available options is given below.
* **-f, --format** Use this option to specify the format of output as you wish.
* **-p, --portability** Use the portable output format.
* **-o FILE, --output=FILE** Writes the output to **FILE** instead of displaying it as standard output.
* **-a, --append** Append the output to the FILE instead of overwriting it.
* **-v, --verbose** This option displays the detailed description of the output of the time utility.
* **--quiet** This option prevents the time utility from reporting the status of the program.
When using the GNU time program without any options, you will see output something like below.
```
$ /usr/bin/time wc /etc/hosts
9 28 273 /etc/hosts
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k
0inputs+0outputs (0major+73minor)pagefaults 0swaps
```
If you run the same command with the shell built-in keyword time, the output would be a bit different:
```
$ time wc /etc/hosts
9 28 273 /etc/hosts
real 0m0.006s
user 0m0.001s
sys 0m0.004s
```
Sometimes, you might want to write the system resource usage output to a file rather than displaying it in the Terminal. To do so, use the **-o** flag as shown below.
```
$ /usr/bin/time -o file.txt ls
dir1 dir2 file1 file2 file.txt mcelog
```
As you can see, the time utility doesn't display the output, because we wrote it to a file named file.txt. Let us have a look at this file:
```
$ cat file.txt
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k
0inputs+0outputs (0major+106minor)pagefaults 0swaps
```
When you use the **-o** flag, if there is no file named file.txt, it will be created and the output written to it. If file.txt is already present, its contents will be overwritten.
You can also append the output to the file instead of overwriting it using the **-a** flag.
```
$ /usr/bin/time -a file.txt ls
```
The **-f** flag allows users to control the format of the output to their liking. For example, the following command displays the output of the ls command and shows just the elapsed (real), user, and system time.
```
$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls
dir1 dir2 file1 file2 mcelog
0:00.00 real, 0.00 user, 0.00 sys
```
Please be mindful that the built-in shell keyword time doesn't support all features of the GNU time program.
For more details about the GNU time utility, refer to the man pages.
```
$ man time
```
To know more about the Bash built-in time keyword, run:
```
$ help time
```
And, that's all for now. Hope this was useful.
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/find-long-process-running-linux/
[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/

View File

@ -1,76 +0,0 @@
translating---geekpi
Choosing a printer for Linux
======
Linux offers widespread support for printers. Learn how to take advantage of it.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
We've made significant strides toward the long-rumored paperless society, but we still need to print hard copies of documents from time to time. If you're a Linux user and have a printer without a Linux installation disk or you're in the market for a new device, you're in luck. That's because most Linux distributions (as well as MacOS) use the Common Unix Printing System ([CUPS][1]), which contains drivers for most printers available today. This means Linux offers much wider support than Windows for printers.
### Selecting a printer
If you're buying a new printer, the best way to find out if it supports Linux is to check the documentation on the box or the manufacturer's website. You can also search the [Open Printing][2] database. It's a great resource for checking various printers' compatibility with Linux.
Here are some Open Printing results for Linux-compatible Canon printers.
![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png)
The screenshot below is Open Printing's results for a Hewlett-Packard LaserJet 4050—according to the database, it should work "perfectly." The recommended driver is listed along with generic instructions letting me know it works with CUPS, Line Printing Daemon (LPD), LPRng, and more.
![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png)
In all cases, it's best to check the manufacturer's website and ask other Linux users before buying a printer.
### Checking your connection
There are several ways to connect a printer to a computer. If your printer is connected through USB, it's easy to check the connection by issuing **lsusb** at the Bash prompt.
```
$ lsusb
```
The command returns **Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard** —it's not much information, but I can tell the printer is connected. I can get more information about the printer by entering the following command:
```
$ dmesg | grep -i usb
```
The results are much more verbose.
![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png)
If you're trying to connect your printer to a parallel port (assuming your computer has a parallel port—they're rare these days), you can check the connection with this command:
```
$ dmesg | grep -i parport
```
The information returned can help me select the right driver for my printer. I have found that if I stick to popular, name-brand printers, most of the time I get good results.
### Setting up your printer software
Both Fedora Linux and Ubuntu Linux contain easy printer setup tools. [Fedora][3] maintains an excellent wiki for answers to printing issues. The tools are easily launched from Settings in the GUI or by invoking **system-config-printer** on the command line.
![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png)
Hewlett-Packard's [HP Linux Imaging and Printing][4] (HPLIP) software, which supports Linux printing, is probably already installed on your Linux system; if not, you can [download][5] the latest version for your distribution. Printer manufacturers [Epson][6] and [Brother][7] also have web pages with Linux printer drivers and information.
What's your favorite Linux printer? Please share your opinion in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/choosing-printer-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://www.cups.org/
[2]: http://www.openprinting.org/printers
[3]: https://fedoraproject.org/wiki/Printing
[4]: https://developers.hp.com/hp-linux-imaging-and-printing
[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip
[6]: https://epson.com/Support/wa00821
[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on

View File

@ -1,101 +0,0 @@
Meet TiDB: An open source NewSQL database
======
5 key differences between MySQL and TiDB for scaling in the cloud
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k)
As businesses adopt cloud-native architectures, conversations will naturally lead to what we can do to make the database horizontally scalable. The answer will likely be to take a closer look at [TiDB][1].
TiDB is an open source [NewSQL][2] database released under the Apache 2.0 License. Because it speaks the [MySQL][3] protocol, your existing applications will be able to connect to it using any MySQL connector, and [most SQL functionality][4] remains identical (joins, subqueries, transactions, etc.).
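For example, assuming a local TiDB instance listening on its default port of 4000, the stock MySQL command-line client connects as usual (the host, port, and user here are assumptions for a local test setup):

```
# connect to TiDB with the ordinary MySQL client
mysql -h 127.0.0.1 -P 4000 -u root
```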
Step under the covers, however, and there are differences. If your architecture is based on MySQL with Read Replicas, you'll see things work a little bit differently with TiDB. In this post, I'll go through the top five key differences I've found between TiDB and MySQL.
### 1. TiDB natively distributes query execution and storage
With MySQL, it is common to scale-out via replication. Typically you will have one MySQL master with many slaves, each with a complete copy of the data. Using either application logic or technology like [ProxySQL][5], queries are routed to the appropriate server (offloading queries from the master to slaves whenever it is safe to do so).
Scale-out replication works very well for read-heavy workloads, as the query execution can be divided between replication slaves. However, it becomes a bottleneck for write-heavy workloads, since each replica must have a full copy of the data. Another way to look at this is that MySQL Replication scales out SQL processing, but it does not scale out the storage. (By the way, this is true for traditional replication as well as newer solutions such as Galera Cluster and Group Replication.)
TiDB works a little bit differently:
* Query execution is handled via a layer of TiDB servers. Scaling out SQL processing is possible by adding new TiDB servers, which is very easy to do using Kubernetes [ReplicaSets][6]. This is because TiDB servers are [stateless][7]; its [TiKV][8] storage layer is responsible for all of the data persistence.
* The data for tables is automatically sharded into small chunks and distributed among TiKV servers. Three copies of each data region (the TiKV name for a shard) are kept in the TiKV cluster, but no TiKV server requires a full copy of the data. To use MySQL terminology: Each TiKV server is both a master and a slave at the same time, since for some data regions it will contain the primary copy, and for others, it will be secondary.
* TiDB supports queries across data regions or, in MySQL terminology, cross-shard queries. The metadata about where the different regions are located is maintained by the Placement Driver, the management server component of any TiDB Cluster. All operations are fully [ACID][9] compliant, and an operation that modifies data across two regions uses a [two-phase commit][10].
For MySQL users learning TiDB, a simpler explanation is that the TiDB servers are like an intelligent proxy that translates SQL into batched key-value requests to be sent to TiKV. TiKV servers store your tables with range-based partitioning. The ranges automatically balance to keep each partition at 96MB (by default, but configurable), and each range can be stored on a different TiKV server. The Placement Driver server keeps track of which ranges are located where and automatically rebalances a range if it becomes too large or too hot.
This design has several advantages over scale-out replication:
* It independently scales the SQL Processing and Data Storage tiers. For many workloads, you will hit one bottleneck before the other.
* It incrementally scales by adding nodes (for both SQL and Data Storage).
* It utilizes hardware better. To scale out MySQL to one master and four replicas, you would have five copies of the data. TiDB would use only three replicas, with hotspots automatically rebalanced via the Placement Driver.
### 2. TiDB's storage engine is RocksDB
MySQL's default storage engine has been InnoDB since 2010. Internally, InnoDB uses a [B+tree][11] data structure, which is similar to what traditional commercial databases use.
By contrast, TiDB uses RocksDB as the storage engine with TiKV. RocksDB has advantages for large datasets because it can compress data more effectively and insert performance does not degrade when indexes can no longer fit in memory.
Note that both MySQL and TiDB support an API that allows new storage engines to be made available. For example, Percona Server and MariaDB both support RocksDB as an option.
### 3. TiDB gathers metrics in Prometheus/Grafana
Tracking key metrics is an important part of maintaining database health. MySQL centralizes these fast-changing metrics in Performance Schema. Performance Schema is a set of in-memory tables that can be queried via regular SQL queries.
With TiDB, rather than retaining the metrics inside the server, a strategic choice was made to ship the information to a best-of-breed service. Prometheus+Grafana is a common technology stack among operations teams today, and the included graphs make it easy to create your own or configure thresholds for alarms.
![](https://opensource.com/sites/default/files/uploads/tidb_metrics.png)
### 4. TiDB handles DDL significantly better
If we ignore for a second that not all data definition language (DDL) changes in MySQL are online, a larger challenge when running a distributed MySQL system is externalizing schema changes on all nodes at the same time. Think about a scenario where you have 10 shards and add a column, but each shard takes a different length of time to complete the modification. This challenge still exists without sharding, since replicas will process DDL after a master.
TiDB implements online DDL using the [protocol introduced by the Google F1 paper][12]. In short, DDL changes are broken up into smaller transition stages so they can prevent data corruption scenarios, and the system tolerates an individual node being behind up to one DDL version at a time.
### 5. TiDB is designed for HTAP workloads
The MySQL team has traditionally focused its attention on optimizing performance for online transaction processing ([OLTP][13]) queries. That is, the MySQL team spends more time making simpler queries perform better instead of making all or complex queries perform better. There is nothing wrong with this approach since many applications only use simple queries.
TiDB is designed to perform well across hybrid transaction/analytical processing ([HTAP][14]) queries. This is a major selling point for those who want real-time analytics on their data because it eliminates the need for batch loads between their MySQL database and an analytics database.
### Conclusion
These are my top five observations based on 15 years in the MySQL world and coming to TiDB. While many of them refer to internal differences, I recommend checking out the TiDB documentation on [MySQL Compatibility][4]. It describes some of the finer points about any differences that may affect your applications.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb
作者:[Morgan Tocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/morgo
[b]: https://github.com/lujun9972
[1]: https://www.pingcap.com/docs/
[2]: https://en.wikipedia.org/wiki/NewSQL
[3]: https://en.wikipedia.org/wiki/MySQL
[4]: https://www.pingcap.com/docs/sql/mysql-compatibility/
[5]: https://proxysql.com/
[6]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
[7]: https://en.wikipedia.org/wiki/State_(computer_science)
[8]: https://github.com/tikv/tikv/wiki
[9]: https://en.wikipedia.org/wiki/ACID_(computer_science)
[10]: https://en.wikipedia.org/wiki/Two-phase_commit_protocol
[11]: https://en.wikipedia.org/wiki/B%2B_tree
[12]: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41344.pdf
[13]: https://en.wikipedia.org/wiki/Online_transaction_processing
[14]: https://en.wikipedia.org/wiki/Hybrid_transactional/analytical_processing_(HTAP)

View File

@ -0,0 +1,138 @@
[Translating by ChiZelin]
3 best practices for continuous integration and deployment
======
Learn about automating, using a Git repository, and parameterizing Jenkins pipelines.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M)
The article covers three key topics: automating CI/CD configuration, using a Git repository for common CI/CD artifacts, and parameterizing Jenkins pipelines.
### Terminology
First things first; let's define a few terms. **CI/CD** is a practice that allows teams to quickly and automatically test, package, and deploy their applications. It is often achieved by leveraging a server called **[Jenkins][1]**, which serves as the CI/CD orchestrator. Jenkins listens to specific inputs (often a Git hook following a code check-in) and, when triggered, kicks off a pipeline.
A **pipeline** consists of code written by development and/or operations teams that instructs Jenkins which actions to take during the CI/CD process. This pipeline is often something like "build my code, then test my code, and if those tests pass, deploy my application to the next highest environment (usually a development, test, or production environment)." Organizations often have more complex pipelines, incorporating tools such as artifact repositories and code analyzers, but this provides a high-level example.
Now that we understand the key terminology, let's dive into some best practices.
### 1. Automation is key
To run CI/CD on a PaaS, you need the proper infrastructure to be configured on the cluster. In this example, I will use [OpenShift][2].
"Hello, World" implementations of this are quite simple to achieve. Simply run **oc new-app jenkins- <persistent/ephemeral>** and voilà, you have a running Jenkins server ready to go. Uses in the enterprise, however, are much more complex. In addition to the Jenkins server, admins will often need to deploy a code analysis tool such as SonarQube and an artifact repository such as Nexus. They will then have to create pipelines to perform CI/CD and Jenkins slaves to reduce the load on the master. Most of these entities are backed by OpenShift resources that need to be created to deploy the desired CI/CD infrastructure.
Eventually, the manual steps required to deploy your CI/CD components may need to be replicated, and you might not be the person to perform those steps. To ensure the outcome is produced quickly, error-free, and exactly as it was before, an automation method should be incorporated in the way your infrastructure is created. This can be an Ansible playbook, a Bash script, or any other way you would like to automate the deployment of CI/CD infrastructure. I have used [Ansible][3] and the [OpenShift-Applier][4] role to automate my implementations. You may find these tools valuable, or you may find something else that works better for you and your organization. Either way, you'll find that automation significantly reduces the workload required to recreate CI/CD components.
#### Configuring the Jenkins master
Outside of general "automation," I'd like to single out the Jenkins master and talk about a few ways admins can take advantage of OpenShift to automate Jenkins configuration. The Jenkins image from the [Red Hat Container Catalog][5] comes packaged with the [OpenShift-Sync plugin][6] installed. In the [video][7], we discuss how this plugin can be used to create Jenkins pipelines and slaves.
To create a Jenkins pipeline, create an OpenShift BuildConfig similar to this:
```
apiVersion: v1
kind: BuildConfig
...
spec:  
  source:      
    git:  
      ref: master      
      uri: <repository-uri>  
  ...  
  strategy:    
    jenkinsPipelineStrategy:    
      jenkinsfilePath: Jenkinsfile      
    type: JenkinsPipeline
```
The OpenShift-Sync plugin will notice that a BuildConfig with the strategy **jenkinsPipelineStrategy** has been created and will convert it into a Jenkins pipeline, pulling from the Jenkinsfile specified by the Git source. An inline Jenkinsfile can also be used instead of pulling from one from a Git repository. See the [documentation][8] for more information.
To create a Jenkins slave, create an OpenShift ImageStream that starts with the following definition:
```
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    slave-label: jenkins-slave
  labels:
    role: jenkins-slave
```
Notice the metadata defined in this ImageStream. The OpenShift-Sync plugin will convert any ImageStream with the label **role: jenkins-slave** into a Jenkins slave. The Jenkins slave will be named after the value from the **slave-label** annotation.
ImageStreams work just fine for simple Jenkins slave configurations, but some teams will find it necessary to configure nitty-gritty details such as resource limits, readiness and liveness probes, and instance caps. This is where ConfigMaps come into play:
```
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    role: jenkins-slave
...
data:
  template1: |-
    <Kubernetes pod template>
```
Notice that the **role: jenkins-slave** label is still required to convert the ConfigMap into a Jenkins slave. The **Kubernetes pod template** consists of a lengthy bit of XML that will configure every detail to your organization's liking. To view this XML, as well as more information on converting ImageStreams and ConfigMaps into Jenkins slaves, see the [documentation][9].
Notice with the three examples shown above that none of the operations required an administrator to make manual changes to the Jenkins console. By using OpenShift resources, Jenkins can be configured in a way that is easily automated.
### 2. Sharing is caring
The second best practice is maintaining a Git repository of common CI/CD artifacts. The main idea is to prevent teams from reinventing the wheel. Imagine your team needs to perform a blue/green deployment to an OpenShift environment as part of the pipeline's CD phase. The members of your team responsible for writing the pipeline may not be OpenShift experts, nor may they have the bandwidth to write this functionality from scratch. Luckily, somebody has already written a function that incorporates that functionality in a common CI/CD repository, so your team can use that function instead of spending time writing one.
To take this a step further, your organization may decide to maintain entire pipelines. You may find that teams are writing pipelines with similar functionality. It would be more efficient for those teams to use a parameterized pipeline from a common repository as opposed to writing their own from scratch.
### 3. Less is more
As I hinted in the previous section, the third and final best practice is to parameterize your CI/CD pipelines. Parameterization will prevent an over-abundance of pipelines, making your CI/CD system easier to maintain. Imagine I have multiple regions where I can deploy my application. Without parameterization, I would need a separate pipeline for each region.
To parameterize a pipeline written as an OpenShift build config, add the **env** stanza to the configuration:
```
...
spec:
  ...
  strategy:
    jenkinsPipelineStrategy:
      env:
      - name: REGION
        value: US-West          
      jenkinsfilePath: Jenkinsfile      
    type: JenkinsPipeline
```
With this configuration, I can pass the **REGION** parameter to the pipeline to deploy my application to the specified region.
The [video][7] provides a more substantial case where parameterization is a must. Some organizations decide to split up their CI/CD pipelines into separate CI and CD pipelines, usually because there is some sort of approval process that happens before deployment. Imagine I have four images and three different environments to deploy to. Without parameterization, I would need 12 CD pipelines to allow all deployment possibilities. This can get out of hand very quickly. To make maintenance of the CD pipeline easier, organizations would find it better to parameterize the image and environment to allow one pipeline to perform the work of many.
### Summary
CI/CD at the enterprise level tends to become more complex than many organizations anticipate. Luckily, with Jenkins, there are many ways to seamlessly provide automation of your setup. Maintaining a Git repository of common CI/CD artifacts will also ease the effort, as teams can pull from maintained dependencies instead of writing their own from scratch. Finally, parameterization of your CI/CD pipelines will reduce the number of pipelines that will have to be maintained.
If you've found other practices you can't do without, please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/best-practices-cicd
作者:[Austin Dewey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/adewey
[b]: https://github.com/lujun9972
[1]: https://jenkins.io/
[2]: https://www.openshift.com/
[3]: https://docs.ansible.com/
[4]: https://github.com/redhat-cop/openshift-applier
[5]: https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/openshift3/jenkins-2-rhel7
[6]: https://github.com/openshift/jenkins-sync-plugin
[7]: https://www.youtube.com/watch?v=zlL7AFWqzfw
[8]: https://docs.openshift.com/container-platform/3.11/dev_guide/dev_tutorials/openshift_pipeline.html#the-pipeline-build-config
[9]: https://docs.openshift.com/container-platform/3.11/using_images/other_images/jenkins.html#configuring-the-jenkins-kubernetes-plug-in

View File

@ -0,0 +1,73 @@
7 command-line tools for writers
======
Put away your word processor and start writing from the command line using these open source tools.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE)
For most people (especially non-techies), the act of writing means tapping out words using LibreOffice Writer or another GUI word processing application. But there are many other options available to help anyone communicate their message in writing, especially for the growing number of writers [embracing plaintext][1].
There's also room in a GUI writer's world for command line tools that can help them write, check their writing, and more—regardless of whether they're banging out an article, blog post, or story; writing a README; or prepping technical documentation.
Here's a look at some command-line tools that any writer will find useful.
### Editors
Yes, you _can_ do actual writing at the command line. I know writers who do their work using editors like [Nano][2], [Vim][3], [Emacs][4], and [Jove][5] in a terminal window. And those editors [aren't the only games in town][6]. Text editors are great because they (at a basic level, anyway) are easy to use and distraction free. They're perfect for tapping out a first draft of anything or even completing a long and complicated writing project.
If you want a more word processor-like experience at the command line, take a look at [WordGrinder][7]. WordGrinder is a bare-bones word processor, but it has more than enough features for writing and publishing your work. It supports basic formatting and styles, and you can export your writing to formats like Markdown, ODT, LaTeX, and HTML.
### Spell checkers
Every writer does (or at least should do) a spelling check on their work at least once. Why? An immutable law of the writing universe states that, no matter how many times you look over your manuscript, a spelling mistake or typo will creep in.
My favorite command-line spelling checker is [GNU Aspell][8], which I previously [looked at][9] in detail. Aspell checks plaintext documents interactively and not only highlights errors but often puts the best correction at the top of its list of suggestions. Aspell also ignores many markup languages while doing its thing.
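Using it is as simple as pointing it at a file; a minimal sketch (the filename is just an example):

```
# interactively spell-check a plain-text file
aspell check draft.txt
```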
A much older but still useful alternative is [Ispell][10]. It's a bit slower than Aspell, but both utilities work the same way. As you interact with your text file, Ispell suggests corrections. Ispell also has good support for foreign languages.
### Prose linters
Software developers use [linters][11] to check their code for errors or bugs. There are also linters for prose that check for style and syntax errors; think of them as the _Elements of Style_ for the command line. While any writer can (and probably should) use one, a prose linter is especially useful for team documentation projects that require a consistent voice and style.
[Proselint][12] is a comprehensive tool for checking what you're writing. It looks for jargon, hyperbole, incorrect date and time format, misused terms, and [much more][13]. It's also easy to run and ignores markup in a plaintext file.
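As a sketch, assuming Proselint was installed with pip, checking a draft is a single command:

```
# report style and usage problems, one finding per line
proselint draft.md
```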
[Alex][14] is a simple yet powerful prose linter. Run it against a plaintext document or one formatted with Markdown or HTML. Alex pumps out warnings of "gender favouring, polarising, race related, religion inconsiderate, or other unequal phrasing in text." If you want to give Alex a test drive, there's an [online demo][15].
### Other tools
Sometimes you just can't find the right synonym for a word. But you don't need to grab a "dead tree" thesaurus or go to a dedicated website to perfect your word choice. Just run [Aiksaurus][16] against the word you want to replace, and it does the work for you. This utility's main drawback, though, is that it supports English only.
Even writers with few (if any) technical skills are embracing [Markdown][17] to quickly and easily format their work. Sometimes, though, you need to convert files formatted with Markdown to something else. That's where [Pandoc][18] comes in. You can use it to convert your documents to HTML, Word, LibreOffice Writer, LaTeX, EPUB, and other formats. You can even use Pandoc to produce books and [research papers][19].
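For instance, converting a Markdown draft to HTML or to a word-processor format is a one-liner each; the filenames here are only examples:

```
# convert a Markdown file to standalone HTML
pandoc -s draft.md -o draft.html
# or to an ODT file that LibreOffice Writer can open
pandoc draft.md -o draft.odt
```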
Do you have a favorite command-line tool for writing? Share it with the Opensource.com community by leaving a comment.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/command-line-tools-writers
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://plaintextproject.online
[2]: https://www.nano-editor.org/
[3]: https://www.vim.org
[4]: https://www.gnu.org/software/emacs/
[5]: https://opensource.com/article/17/1/jove-lightweight-alternative-vim
[6]: https://en.wikipedia.org/wiki/List_of_text_editors#Text_user_interface
[7]: https://cowlark.com/wordgrinder/
[8]: http://aspell.net/
[9]: https://opensource.com/article/18/2/how-check-spelling-linux-command-line-aspell
[10]: https://www.cs.hmc.edu/~geoff/ispell.html
[11]: https://en.wikipedia.org/wiki/Lint_(software)
[12]: http://proselint.com/
[13]: http://proselint.com/checks/
[14]: https://github.com/get-alex/alex
[15]: https://alexjs.com/#demo
[16]: http://aiksaurus.sourceforge.net/
[17]: https://en.wikipedia.org/wiki/Markdown
[18]: https://pandoc.org
[19]: https://opensource.com/article/18/9/pandoc-research-paper

View File

@ -0,0 +1,256 @@
9 obscure Python libraries for data science
======
Go beyond pandas, scikit-learn, and matplotlib and learn some new tricks for doing data science in Python.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-python.jpg?itok=F2PYP2wT)
Python is an amazing language. In fact, it's one of the fastest growing programming languages in the world. It has time and again proved its usefulness both in developer job roles and data science positions across industries. The entire ecosystem of Python and its libraries makes it an apt choice for users (beginners and advanced) all over the world. One of the reasons for its success and popularity is its set of robust libraries that make it so dynamic and fast.
In this article, we will look at some of the Python libraries for data science tasks other than the commonly used ones like **pandas**, **scikit-learn**, and **matplotlib**. Although libraries like **pandas** and **scikit-learn** are the ones that come to mind for machine learning tasks, it's always good to learn about other Python offerings in this field.
### Wget
Extracting data, especially from the web, is one of a data scientist's vital tasks. [Wget][1] is a free utility for the non-interactive download of files from the web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Since it is non-interactive, it can work in the background even if the user isn't logged in. So the next time you want to download a website or all the images from a page, **wget** will be there to assist.
#### Installation
```
$ pip install wget
```
#### Example
```
import wget
url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
filename = wget.download(url)
100% [................................................] 3841532 / 3841532
filename
'razorback.mp3'
```
### Pendulum
For people who get frustrated when working with date-times in Python, **[Pendulum][2]** is here. It is a Python package to ease **datetime** manipulations, and it is a drop-in replacement for Python's native **datetime** class. Refer to the [documentation][3] for in-depth information.
#### Installation
```
$ pip install pendulum
```
#### Example
```
import pendulum
dt_toronto = pendulum.datetime(2012, 1, 1, tz='America/Toronto')
dt_vancouver = pendulum.datetime(2012, 1, 1, tz='America/Vancouver')
print(dt_vancouver.diff(dt_toronto).in_hours())
3
```
### Imbalanced-learn
Most classification algorithms work best when the number of samples in each class is almost the same (i.e., balanced). But real-life cases are full of imbalanced datasets, which can have a bearing upon the learning phase and the subsequent prediction of machine learning algorithms. Fortunately, the **[imbalanced-learn][4]** library was created to address this issue. It is compatible with [**scikit-learn**][5] and is part of **[scikit-learn-contrib][6]** projects. Try it the next time you encounter imbalanced datasets.
#### Installation
```
pip install -U imbalanced-learn
# or
conda install -c conda-forge imbalanced-learn
```
#### Example
For usage and examples refer to the [documentation][7].
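As a quick sketch (the toy dataset below is made up for illustration, and on older imbalanced-learn releases the resampling method is called `fit_sample` instead of `fit_resample`), oversampling a minority class with SMOTE looks roughly like this:
```
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Build a deliberately imbalanced two-class dataset (roughly 90% / 10%)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print(Counter(y))        # e.g. Counter({0: 896, 1: 104})

# Oversample the minority class so both classes end up the same size
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y_res))    # e.g. Counter({0: 896, 1: 896})
```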
### FlashText
Cleaning text data during natural language processing (NLP) tasks often requires replacing keywords in or extracting keywords from sentences. Usually, such operations can be accomplished with regular expressions, but they can become cumbersome if the number of terms to be searched runs into the thousands.
Python's **[FlashText][8]** module, which is based upon the [FlashText algorithm][9], provides an apt alternative for such situations. The best part of FlashText is that the runtime is the same irrespective of the number of search terms. You can read more about it in the [documentation][10].
#### Installation
```
$ pip install flashtext
```
#### Examples
##### **Extract keywords:**
```
from flashtext import KeywordProcessor
keyword_processor = KeywordProcessor()
# keyword_processor.add_keyword(<unclean name>, <standardised name>)
keyword_processor.add_keyword('Big Apple', 'New York')
keyword_processor.add_keyword('Bay Area')
keywords_found = keyword_processor.extract_keywords('I love Big Apple and Bay Area.')
keywords_found
['New York', 'Bay Area']
```
##### **Replace keywords:**
```
keyword_processor.add_keyword('New Delhi', 'NCR region')
new_sentence = keyword_processor.replace_keywords('I love Big Apple and new delhi.')
new_sentence
'I love New York and NCR region.'
```
For more examples, refer to the [usage][11] section in the documentation.
### FuzzyWuzzy
The name sounds weird, but **[FuzzyWuzzy][12]** is a very helpful library when it comes to string matching. It can easily implement operations like string comparison ratios, token ratios, etc. It is also handy for matching records kept in different databases.
#### Installation
```
$ pip install fuzzywuzzy
```
#### Example
```
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
# Simple Ratio
fuzz.ratio("this is a test", "this is a test!")
97
# Partial Ratio
fuzz.partial_ratio("this is a test", "this is a test!")
100
```
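The **process** module imported above is handy for matching a query against a whole list of choices; a small sketch:
```
from fuzzywuzzy import process

choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]

# Top two matches with their scores
process.extract("new york jets", choices, limit=2)
# [('New York Jets', 100), ('New York Giants', 78)]

# Only the single best match
process.extractOne("cowboys", choices)
# ('Dallas Cowboys', 90)
```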
More examples can be found in FuzzyWuzzy's [GitHub repo][12].
### PyFlux
Time-series analysis is one of the most frequently encountered problems in machine learning. **[PyFlux][13]** is an open source library in Python that was explicitly built for working with time-series problems. The library has an excellent array of modern time-series models, including but not limited to **ARIMA**, **GARCH**, and **VAR** models. In short, PyFlux offers a probabilistic approach to time-series modeling. It's worth trying out.
#### Installation
```
pip install pyflux
```
#### Example
Please refer to the [documentation][14] for usage and examples.
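As a minimal sketch (the series below is simulated purely for illustration), fitting an ARIMA model with PyFlux looks roughly like this:
```
import numpy as np
import pandas as pd
import pyflux as pf

# Simulate a simple autoregressive series to play with
np.random.seed(0)
values = np.zeros(200)
for t in range(1, 200):
    values[t] = 0.6 * values[t - 1] + np.random.normal()
df = pd.DataFrame({'series': values})

# Fit an ARIMA(2,0,2) model by maximum likelihood and inspect the results
model = pf.ARIMA(data=df, ar=2, ma=2, integ=0, target='series', family=pf.Normal())
results = model.fit("MLE")
results.summary()
```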
### IPyvolume
Communicating results is an essential aspect of data science, and visualizing results offers a significant advantage. **[IPyvolume][15]** is a Python library to visualize 3D volumes and glyphs (e.g., 3D scatter plots) in the Jupyter notebook with minimal configuration and effort. However, it is currently in the pre-1.0 stage. A good analogy would be something like this: IPyvolume's **volshow** is to 3D arrays what matplotlib's **imshow** is to 2D arrays. You can read more about it in the [documentation][16].
#### Installation
```
# Using pip
$ pip install ipyvolume
# Using conda/Anaconda
$ conda install -c conda-forge ipyvolume
```
#### Examples
**Animation:**
![](https://opensource.com/sites/default/files/uploads/ipyvolume_animation.gif)
**Volume rendering:**
![](https://opensource.com/sites/default/files/uploads/ipyvolume_volume-rendering.gif)
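**Scatter plot:** a minimal sketch, meant to be run inside a Jupyter notebook cell (the random data is just for illustration):
```
import numpy as np
import ipyvolume as ipv

# 1,000 random points rendered as an interactive 3D scatter plot
x, y, z = np.random.normal(size=(3, 1000))
ipv.quickscatter(x, y, z, size=1, marker="sphere")
```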
### Dash
**[Dash][17]** is a productive Python framework for building web applications. It is written on top of Flask, Plotly.js, and React.js and ties modern UI elements like drop-downs, sliders, and graphs to your analytical Python code without the need for JavaScript. Dash is highly suitable for building data visualization apps that can be rendered in the web browser. Consult the [user guide][18] for more details.
#### Installation
```
pip install dash==0.29.0  # The core dash backend
pip install dash-html-components==0.13.2  # HTML components
pip install dash-core-components==0.36.0  # Supercharged components
pip install dash-table==3.1.3  # Interactive DataTable component (new!)
```
#### Example
The following example shows a highly interactive graph with drop-down capabilities. As the user selects a value in the drop-down, the application code dynamically exports data from Google Finance into a Pandas DataFrame.
![](https://opensource.com/sites/default/files/uploads/dash_animation.gif)
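The drop-down application above takes a bit more code, but a minimal "hello world" Dash app (a sketch written against the 0.x package versions listed above) looks like this:
```
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash(__name__)

# A static bar chart; callbacks can later make it react to drop-downs and sliders
app.layout = html.Div([
    html.H1('Hello Dash'),
    dcc.Graph(
        id='example-graph',
        figure={
            'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'demo'}],
            'layout': {'title': 'A simple bar chart'}
        }
    )
])

if __name__ == '__main__':
    app.run_server(debug=True)
```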
### Gym
**[Gym][19]** from [OpenAI][20] is a toolkit for developing and comparing reinforcement learning algorithms. It is compatible with any numerical computation library, such as TensorFlow or Theano. The Gym library is a collection of test problems, also called environments, that you can use to work out your reinforcement-learning algorithms. These environments have a shared interface, which allows you to write general algorithms.
#### Installation
```
pip install gym
```
#### Example
The following example will run an instance of the environment **[CartPole-v0][21]** for 1,000 timesteps, rendering the environment at each step.
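A sketch of that loop, following the quick-start from the Gym documentation:
```
import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action
env.close()
```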
![](https://opensource.com/sites/default/files/uploads/gym_animation.gif)
You can read about [other environments][22] on the Gym website.
### Conclusion
These are my picks for useful, but little-known Python libraries for data science. If you know another one to add to this list, please mention it in the comments below.
This was originally published on the [Analytics Vidhya][23] Medium channel and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/11/python-libraries-data-science
作者:[Parul Pandey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/parul-pandey
[b]: https://github.com/lujun9972
[1]: https://pypi.org/project/wget/
[2]: https://github.com/sdispater/pendulum
[3]: https://pendulum.eustace.io/docs/#installation
[4]: https://github.com/scikit-learn-contrib/imbalanced-learn
[5]: http://scikit-learn.org/stable/
[6]: https://github.com/scikit-learn-contrib
[7]: http://imbalanced-learn.org/en/stable/api.html
[8]: https://github.com/vi3k6i5/flashtext
[9]: https://arxiv.org/abs/1711.00046
[10]: https://flashtext.readthedocs.io/en/latest/
[11]: https://flashtext.readthedocs.io/en/latest/#usage
[12]: https://github.com/seatgeek/fuzzywuzzy
[13]: https://github.com/RJT1990/pyflux
[14]: https://pyflux.readthedocs.io/en/latest/index.html
[15]: https://github.com/maartenbreddels/ipyvolume
[16]: https://ipyvolume.readthedocs.io/en/latest/?badge=latest
[17]: https://github.com/plotly/dash
[18]: https://dash.plot.ly/
[19]: https://github.com/openai/gym
[20]: https://openai.com/
[21]: https://gym.openai.com/envs/CartPole-v0
[22]: https://gym.openai.com/
[23]: https://medium.com/analytics-vidhya/python-libraries-for-data-science-other-than-pandas-and-numpy-95da30568fad


@ -0,0 +1,298 @@
How To Customize Bash Prompt In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2017/10/BASH-720x340.jpg)
As you know already, **BASH** (the **B**ourne-**A**gain **Sh**ell) is the default shell for most modern Linux distributions. In this guide, we are going to customize the BASH prompt and enhance its look by adding some colors and styles. Of course, there are many plugins/tools available to get this job done easily and quickly. However, we can still do some basic customization, such as adding and modifying elements or changing the foreground and background colors, without having to install any additional tools or plugins. Let us get started!
### Customize Bash Prompt In Linux
In BASH, we can customize and change the prompt the way we want by changing the value of the **PS1** environment variable.
Usually, the BASH prompt will look something like below:
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal.png)
Here, **sk** is my username and **ubuntuserver** is my hostname.
Now, we are going to change this prompt to your liking by inserting some backslash-escaped special characters called **Escape Sequences**.
Let me show you some examples.
Before going further, it is highly recommended to back up the **~/.bashrc** file.
```
$ cp ~/.bashrc ~/.bashrc.bak
```
**Modify the "username@hostname" part in the Bash prompt**
As I mentioned above, the BASH prompt has a "username@hostname" part by default in most Linux distributions. You can change this part to something else.
To do so, edit the **~/.bashrc** file:
```
$ vi ~/.bashrc
```
Add the following line at the end:
```
PS1="ostechnix> "
```
Replace “ostechnix” with any letters/words of your choice. Once added, hit the **ESC** key and type **:wq** to save and exit the file.
Run the following command to update the changes:
```
$ source ~/.bashrc
```
Now, the shell prompt will show the letters "ostechnix".
![][3]
Here is another example. I am going to replace the "username@hostname" part with "username@hostname>".
To do so, add an entry like the following in your **~/.bashrc** file.
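An entry along these lines does the trick (**\u** and **\h** are escape sequences that expand to your username and hostname; adjust the trailing characters to taste):
```
# one possible entry; tweak the text after \h as you like
export PS1="\u@\h> "
```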
Don't forget to update the changes using the "source ~/.bashrc" command.
Here is the output of my BASH prompt in Ubuntu 18.04 LTS.
![](https://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-1.png)
**Display username only:**
To display the username only, just add the following line in **~/.bashrc** file.
```
export PS1="\u "
```
Here, **\u** is the escape sequence.
Here are some more values you can add to your PS1 variable to change the BASH prompt. After adding each entry, you must run the "source ~/.bashrc" command for the changes to take effect.
**Add username with hostname:**
```
export PS1="\u\h "
```
Your prompt will now look like below:
```
skubuntuserver
```
**Add username and FQDN (Fully Qualified Domain Name):**
```
export PS1="\u\H "
```
**Add extra characters between username and hostname:**
If you want to add any character, for example **@**, between the username and hostname, use the following entry:
```
export PS1="\u@\h "
```
The bash prompt will look like below:
```
sk@ubuntuserver
```
**Add username with hostname with $ symbol at the end:**
```
export PS1="\u@\h\\$ "
```
**Add special characters between and after username and hostname:**
```
export PS1="\u@\h> "
```
This entry will change the BASH prompt as shown below.
```
sk@ubuntuserver>
```
Similarly, you can add other special characters, such as a colon, semicolon, asterisk (*), underscore, or space.
**Display username, hostname, shell name:**
```
export PS1="\u@\h>\s "
```
**Display username, hostname, shell, and shell version:**
```
export PS1="\u@\h>\s\v "
```
Bash prompt output:
![][4]
**Display username, hostname, and the path to the current directory:**
```
export PS1="\u@\h\w "
```
You will see a tilde (~) symbol if the current directory is $HOME.
**Display date in BASH prompt:**
To display date with your username and hostname in the BASH prompt, add the following entry in ~/.bashrc file.
```
export PS1="\u@\h>\d "
```
![][5]
**Date and time in 12 hour format in BASH prompt:**
```
export PS1="\u@\h>\d\@ "
```
**Date and 12 hour time hh:mm:ss format:**
```
export PS1="\u@\h>\d\T "
```
**Date and 24 hour time:**
```
export PS1="\u@\h>\d\A "
```
**Date and 24 hour hh:mm:ss format:**
```
export PS1="\u@\h>\d\t "
```
These are some common escape sequences to change the Bash prompt format. There are a few more escape sequences available; you can view them all in the **bash man page** under the **"PROMPTING"** section.
And, you can view the current prompt settings at any time using the following command:
```
$ echo $PS1
```
**Hide “username@hostname” Part In Bash prompt**
I dont want to change anything. Can I hide it altogether? Yes, you can!
If you're a blogger or tech writer, chances are you have to upload screenshots of your Linux terminal to your websites and blogs. Your username/hostname might be too cool, so you may not want others to copy and use it as their own. On the other hand, your username/hostname might be too weird, too embarrassing, or contain offensive characters, so you don't want others to see it. In such cases, this small tip might help you hide or modify the "username@hostname" part in the terminal.
If you don't want others to see your username/hostname part, just follow the steps given below.
Edit your **“~/.bashrc”** file:
```
$ vi ~/.bashrc
```
Add the following at the end:
```
PS1="\W> "
```
Type **:wq** to save and close the file.
Then, run the following command for the changes to take effect.
```
$ source ~/.bashrc
```
That's it. Now, check your terminal. You will not see the "username@hostname" part; you will only see the **~ >** symbol.
![][6]
Want to know another simple way that doesn't involve touching the **~/.bashrc** file? Just create another user account with a generic username and hostname, and use that account for making guides and videos to upload to your blog or elsewhere online. Now you don't have to worry about your identity.
**Warning:** This is a bad practice in some cases. For example, if another shell like zsh inherits your current shell, it will cause some problems. Use this tip only for hiding or modifying the "username@hostname" part, and only if you use a single shell. Apart from hiding the "username@hostname" part in the terminal, this tip is pretty useless and might be problematic.
### Colorizing BASH prompt
What we have seen so far is that we just changed or added some elements to the BASH prompt. In this section, we are going to add colors to those elements.
You can change the foreground (text) and background colors of the BASH prompt's elements by adding some code to the ~/.bashrc file.
For example, to change the foreground color of the hostname text to red, add the following code:
```
export PS1="\u@\[\e[31m\]\h\[\e[m\] "
```
Once added, update the changes using the "source ~/.bashrc" command.
Now, your BASH prompt will look like below:
![][7]
Similarly, to change the background color of the hostname, add this code:
```
export PS1="\u@\[\e[31;46m\]\h\[\e[m\] "
```
![][8]
### Adding Emojis
Who doesn't love emoji? We can add an emoji by placing the following code in the **~/.bashrc** file.
```
PS1="\W 🔥 >"
```
Please note that some terminals may not show emojis properly, depending on the font used. You may see either garbled characters or a monochrome emoji if you don't have a suitable font.
### Customizing BASH is a bit difficult for me. Is there an easier way?
If you're a newbie, writing and adding PS1 values can be confusing and difficult. You may also find it a bit difficult to arrange the elements to get the result you want. No worries! There is an online Bash PS1 generator that allows you to easily generate different PS1 values as you wish.
Go to the following website:
[![EzPrompt](https://www.ostechnix.com/wp-content/uploads/2017/10/EzPrompt.png)][9]
Just pick the elements you want to use in your BASH prompt, add colors to them, and re-arrange them in any order you like. Preview the output instantly, and finally copy/paste the resulting code into your **~/.bashrc** file. It is that simple! Most of the examples mentioned in this guide are taken from this website.
### I messed up my .bashrc file! How do I restore it to the default settings?
As I mentioned earlier, it is strongly recommended to back up **~/.bashrc** (or any important configuration file in general) before making any changes, so you can restore it to the previous working version if something goes wrong. However, if you forgot to back up the ~/.bashrc file in the first place, you can still restore it to the default settings as described in the following guide.
[How To Restore .bashrc File To Default Settings][10]
The above guide is based on Ubuntu, but it may be applicable to other Linux distributions as well. To be clear, that guide resets ~/.bashrc to its default settings as of a fresh installation; any changes made afterwards will be lost.
And, that's all for now. I will keep updating this guide as I learn more ways to customize the BASH prompt in the future.
Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[3]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-2.png
[4]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-2.png
[5]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-3.png
[6]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-1.png
[7]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-4/
[8]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-5/
[9]: http://ezprompt.net/
[10]: https://www.ostechnix.com/restore-bashrc-file-default-settings-ubuntu/


@ -0,0 +1,84 @@
How To Change GDM Login Screen Background In Ubuntu
======
Whenever you log in or lock and unlock your Ubuntu 18.04 LTS desktop, you are greeted with a plain purple-colored screen. It has been the default GDM (GNOME Display Manager) background since Ubuntu 17.04. Some of you may find this plain background boring and want to make the login screen something cool and eye-catching! If so, you're on the right track. This brief guide describes how to change the GDM login screen background on the Ubuntu 18.04 LTS desktop.
### Change GDM Login Screen Background In Ubuntu
Here is how the default GDM login screen background looks on the Ubuntu 18.04 LTS desktop.
![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-1.png)
Whether you like it or not, you will stumble upon this screen every time you log in or lock and unlock the system. No worries! You can replace this background with any beautiful image of your choice.
Changing the desktop wallpaper and the user's profile picture is not a big deal in Ubuntu; we can do it with a few mouse clicks in no time. However, changing the login/lock screen background requires a little editing of a file called **ubuntu.css** located under the **/usr/share/gnome-shell/theme** directory.
Before modifying this file, take a backup so we can restore it if something goes wrong.
```
$ sudo cp /usr/share/gnome-shell/theme/ubuntu.css /usr/share/gnome-shell/theme/ubuntu.css.bak
```
Now, edit ubuntu.css file:
```
$ sudo nano /usr/share/gnome-shell/theme/ubuntu.css
```
Find the following lines under the directive named **“lockDialogGroup”** in the file:
```
#lockDialogGroup {
background: #2c001e url(resource:///org/gnome/shell/theme/noise-texture.png);
background-repeat: repeat;
}
```
![](https://www.ostechnix.com/wp-content/uploads/2018/11/ubuntu_css.png)
As you can see, the default image for the GDM login screen is **noise-texture.png**.
Now, change the background image by adding the path to your own image. You can use either a .jpg or a .png file; both formats worked fine for me. After editing the file, its contents will look like below:
```
#lockDialogGroup {
background: #2c001e url(file:///home/sk/image.png);
background-repeat: no-repeat;
background-size: cover;
background-position: center;
}
```
Please pay close attention to the modified version of this directive in the ubuntu.css file.
As you might have noticed, I have changed "…**url(resource:///org/gnome/shell/theme/noise-texture.png);**" to "…**url(file:///home/sk/image.png);**". That is, you should change "…**url(resource**…" to "…**url(file**…".
Also, I have changed the value of the "background-repeat:" parameter from **"repeat"** to **"no-repeat"** and added two more lines. You can simply copy/paste the above lines into your ubuntu.css file and change the image path to your own.
Once you are done, save and close the file. And, reboot your system.
Here is my GDM login screen with updated backgrounds:
![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-2.png)
![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-3.png)
Cool, yeah? As you can see, changing the GDM login screen is not that difficult. All you have to do is change the image path in the ubuntu.css file and restart your system. It is as simple as that. Have fun!
You can also edit the **gdm3.css** file located under the **/usr/share/gnome-shell/theme** directory and modify it as shown above to get the same result. Again, don't forget to back up the file before making any changes.
And, that's all for now. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-change-gdm-login-screen-background-in-ubuntu/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

Some files were not shown because too many files have changed in this diff.