Cleaning Up Your Linux Startup Process
=======

![Linux cleanup](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner-cleanup-startup.png?itok=dCcKwdoP "Clean up your startup process")

A general-purpose Linux distribution launches all sorts of services at boot, including many you may never use, such as Bluetooth, Avahi, ModemManager, and pppd-dns (the author wrote "ppp-dns", apparently a typo for pppd-dns). What are these things? What are they for, and what do they do?

Systemd provides a number of good tools for examining what happens during system startup, and for controlling what runs at boot. In this article, I'll show how to turn off some of the more annoying services on systemd-based distributions.

### Viewing boot-time services

In the old days, you could easily see which services were set to start at boot by looking in `/etc/init.d`. Systemd does things differently; you can use the following command to list the services enabled at boot:

```
$ systemctl list-unit-files --type=service | grep enabled
accounts-daemon.service enabled
anacron-resume.service enabled
anacron.service enabled
bluetooth.service enabled
brltty.service enabled
[...]
```

Near the top of this list, the Bluetooth service is redundant for me, because I don't need Bluetooth on this computer, so there is no need to run it. The following commands stop the service and keep it from starting at boot:

```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
```

You can confirm that it worked with the following command:

```
$ systemctl status bluetooth.service
bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
```

A disabled service can still be started by another service. If you really never want it to start at boot under any circumstances, you don't have to uninstall it; masking it prevents it from starting no matter what:

```
$ sudo systemctl mask bluetooth.service
Created symlink from /etc/systemd/system/bluetooth.service to /dev/null.
```
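
Masking is reversible, should you change your mind later. A quick sketch of the reverse procedure, reusing the same example unit:

```
$ sudo systemctl unmask bluetooth.service
$ sudo systemctl enable bluetooth.service
$ sudo systemctl start bluetooth.service
```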

Once you are satisfied that disabling a service has no ill effects, you may also choose to uninstall the program entirely.

You can get a list of all services, with their states, by running:

```
$ systemctl list-unit-files --type=service
UNIT FILE STATE
accounts-daemon.service enabled
acpid.service disabled
alsa-restore.service static
alsa-utils.service masked
```

You cannot enable or disable static services, because these are depended on by other units and are not meant to run on their own.
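
If you're curious why a static unit exists, systemd can show which units pull it in; a quick sketch (the unit name is just an example from the list above):

```
$ systemctl list-dependencies --reverse alsa-restore.service
```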

### Which services can you disable?

How do you know which services you need, and which ones you can safely disable? It always depends on your particular needs.

Here are some examples of what various services do. Many services are distribution-specific, so you should consult your distribution's documentation (e.g., via Google or StackOverflow).

- **accounts-daemon.service** is a potential security risk. It is part of AccountsService, which allows programs to get and manipulate user account information. I can't think of a good reason to allow this kind of background operation, so I mask it.
- **avahi-daemon.service** is for zero-configuration network discovery, and is supposed to make it super-easy to find printers and other hosts on your network. I always disable it and don't miss it.
- **brltty.service** provides support for Braille devices, such as Braille displays.
- **debug-shell.service** opens up a giant security hole (it provides a password-less root shell to help debug systemd problems) and should never be enabled unless you are actually using it.
- **ModemManager.service** is a D-Bus-activated daemon that controls mobile broadband (2G/3G/4G) interfaces. If you don't have such an interface — whether built-in, paired over Bluetooth with a phone, or a USB adapter — you don't need this service.
- **pppd-dns.service** is a relic of the dim past of computing; keep it if you use dial-up Internet access, otherwise you don't need it.
- **rtkit-daemon.service** sounds scary, like a rootkit, but you need it, because it is the real-time kernel scheduler.
- **whoopsie.service** is the Ubuntu error-reporting service. It collects Ubuntu crash reports and sends them to https://daisy.ubuntu.com. You may safely disable it, or uninstall it permanently.
- **wpa_supplicant.service** is needed only if you use Wi-Fi connections.

### What happens during boot?

Systemd provides some commands to help debug boot-time issues. This command replays all of your boot messages:

```
$ journalctl -b

-- Logs begin at Mon 2016-05-09 06:18:11 PDT,
end at Mon 2016-05-09 10:17:01 PDT. --
May 16 06:18:11 studio systemd-journal[289]:
Runtime journal (/run/log/journal/) is currently using 8.0M.
Maximum allowed usage is set to 157.2M.
Leaving at least 235.9M free (of currently available 1.5G of space).
Enforced usage limit is thus 157.2M.
[...]
```

You can review the previous boot with `journalctl -b -1`, the boot before that with `journalctl -b -2`, and so on.

This command prints a great deal of output, and you probably only care about the parts related to your problem. Systemd provides several filters to help you target specific items. Let's take process ID 1, the parent of all other processes, as an example:

```
$ journalctl _PID=1

May 08 06:18:17 studio systemd[1]: Starting LSB: Raise network interfaces....
May 08 06:18:17 studio systemd[1]: Started LSB: Raise network interfaces..
May 08 06:18:17 studio systemd[1]: Reached target System Initialization.
May 08 06:18:17 studio systemd[1]: Started CUPS Scheduler.
May 08 06:18:17 studio systemd[1]: Listening on D-Bus System Message Bus Socket
May 08 06:18:17 studio systemd[1]: Listening on CUPS Scheduler.
[...]
```

These messages show what was started, or what is being attempted.

One of the most useful commands, `systemd-analyze blame`, helps you see which services take the longest to start:

```
$ systemd-analyze blame
8.708s gpu-manager.service
8.002s NetworkManager-wait-online.service
5.791s mysql.service
2.975s dev-sda3.device
1.810s alsa-restore.service
1.806s systemd-logind.service
1.803s irqbalance.service
1.800s lm-sensors.service
1.800s grub-common.service
```
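
If one service stands out as slow, a related systemd tool can show the chain of units that gated the slowest part of the boot; a quick sketch:

```
$ systemd-analyze critical-chain
```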

This particular example doesn't show anything out of the ordinary, but if there were a boot bottleneck, this command would find it.

You can also learn more about how systemd works from these resources:

- [Understanding and Using Systemd](https://www.linux.com/learn/understanding-and-using-systemd)
- [Intro to Systemd Runlevels and Service Management Commands](https://www.linux.com/learn/intro-systemd-runlevels-and-service-management-commands)
- [Here We Go Again, Another Linux Init: Intro to Systemd](https://www.linux.com/learn/here-we-go-again-another-linux-init-intro-systemd)

----

via: https://www.linux.com/learn/cleaning-your-linux-startup-process

Author: [David Both](https://www.linux.com/users/cschroder)
Translator: [penghuster](https://github.com/penghuster)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by LCTT and is proudly presented by Linux中国.
The Everyday Git Commands for a Week of Work
============================================================

![](https://cdn-images-1.medium.com/max/1600/1*frC0VgM2etsVCJzJrNMZTQ.png)

Like most newbies, I started out searching StackOverflow for Git commands and copy-pasting the answers, without really understanding what they actually did.

![](https://cdn-images-1.medium.com/max/1600/1*0o9GZUzXiNnI4poEvxvy8g.png)

*Image credit: [XKCD][7]*

I used to think: "Wouldn't it be great if there were a list of the most common Git commands, along with what each one does?"

Years later, I've put together such a list, with some best practices thrown in, so that newcomers — and even intermediate and advanced developers — can find something useful in it.

To keep it practical, I checked the list against the Git commands I actually used over the past week.

Almost every developer uses Git, and most likely GitHub. But the majority of developers probably spend 99% of their time on just these three commands:

```
git add --all
git commit -am "<message>"
git push origin master
```

That works fine when you're flying solo, at a hackathon, or building a throw-away app, but once stability and maintainability become a priority, cleaning up commits, sticking to a branching strategy, and writing disciplined commit messages all start to matter.

I'll start with the list of commonly used commands, to make it easier for newcomers to see what Git can do, and then move on to more advanced functionality and best practices.

### Frequently used commands

To initialize Git in a repository (repo), you just type the following command. If you haven't initialized Git, you can't run any other Git commands inside that repo.

```
git init
```

If you're using GitHub and pushing your code to a GitHub repo stored online, you're using a remote repo. The default name (also known as an alias) for that remote repo is `origin`. If you've copied a project from GitHub, it already has an `origin`. You can view it with the command `git remote -v`, which lists the URLs of your remote repos.

If you initialized your own Git repo and want to associate it with a GitHub repo, you'll have to create one on GitHub, copy the URL provided by the new repo, and run `git remote add origin <URL>`, replacing `<URL>` with the URL GitHub provided. From there, you can add, commit, and push changes to your remote repo.

The last command is used when you need to change the remote repo — say, if you copied a repo from someone else and want to point the remote away from the original owner to your own GitHub account. The process is the same as `git remote add origin`, except you use `set-url` to change the remote.

```
git remote -v
git remote add origin <url>
git remote set-url origin <url>
```

The most common way to copy a repo is with `git clone`, followed by the URL of the repo.

Keep in mind that the remote repo stays connected to the account the cloned repo originally belongs to. So if you clone a repo that belongs to someone else, you won't be able to push to GitHub until you change the `origin` with the commands above.

```
git clone <url>
```

You'll find yourself using branches before long. If you don't yet understand what branches are, there are plenty of more in-depth tutorials you should read before continuing. ([Here's one.][8])

The command `git branch` lists all the branches on your local machine. If you want to create a new branch, use `git branch <name>`, where `<name>` is the name of the branch, such as `master`.

The `git checkout <name>` command switches to an existing branch. You can also create a new branch and switch to it immediately with `git checkout -b <name>`. Most people use this instead of separate `branch` and `checkout` commands.

```
git branch
git branch <name>
git checkout <name>
git checkout -b <name>
```

If you've made a series of changes on a branch — say it's called `develop` — and want to merge that branch back into the `master` branch, you use the `git merge <branch>` command. You check out the `master` branch first, then run `git merge develop` to merge `develop` into it.

```
git merge <branch>
```
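
Put together, a typical merge looks roughly like this (branch names follow the example in the paragraph above):

```
# switch to the target branch, then merge the feature branch into it
git checkout master
git merge develop
```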

If you're collaborating with several people, you'll find that sometimes the GitHub repo has been updated but your local copy hasn't. If so, you can pull the latest changes from the remote branch with `git pull origin <branch>`.

```
git pull origin <branch>
```

If you're curious to see which files have been changed and which ones are being tracked, use `git status`. If you want to see the changes in each file, use `git diff` to see the changed lines in each file.

```
git status
git diff --stat
```

### Advanced commands and best practices

Soon you'll reach a point where you want your commits to look clean and consistent. You may also need to massage your commit history to make commits easier to understand, or to back out an accidental breaking change.

The `git log` command prints out the commit history. You'll use it to review the history of your commits.

Each of your commits comes with a message and a hash, a random-looking string of numbers and letters. A sample hash looks like this: `c3d882aa1aa4e3d5f18b3890132670fbeac912f7`.

```
git log
```

Let's say you pushed something that broke your app. Rather than fix it and push something new, you'd prefer to go back one commit and try again with a correct one.

If you want to go back in time and check out your app at a previous commit, you can do this directly, using the hash as the branch name. This will detach your app from the current version (because you're editing a historical record, not the current one).

```
git checkout c3d88eaa1aa4e4d5f
```
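
To get back to the present after exploring, you check out your normal branch again; and if you want to build on the old commit instead, it's usually safer to give it a branch name. A minimal sketch (the new branch name is just an example):

```
# return to the tip of master
git checkout master

# or continue from the old commit on a new branch
git checkout -b fix-from-history c3d88eaa1aa4e4d5f
```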

Then, if you make changes from that historical branch and want to push again, you'll have to force-push.

**Caution**: Force-pushing is dangerous and should only be done when absolutely necessary. It will overwrite your app's history, and you will lose anything that came after.

```
git push -f origin master
```

At other times, keeping everything in one commit just isn't practical. Perhaps you want to save your progress before trying something potentially risky, or perhaps you made a mistake and want to spare your version history the embarrassment of keeping it around. For that, we have `git rebase`.

Let's say you have 4 commits in your local history (not pushed to GitHub) that you want to tidy up. Your commit log looks messy and long-winded. You can use `rebase` to combine all of those commits into a single, simple one.

```
git rebase -i HEAD~4
```

The command above opens your computer's default editor (Vim, unless you've changed the default to something else), with several options for how you can modify your commits. It looks something like the code below:

```
pick 130deo9 oldest commit message
pick 4209fei second oldest commit message
pick 4390gne third oldest commit message
pick bmo0dne newest commit message
```

To combine the commits, we change the `pick` option to `fixup` (as the documentation below the code explains) to squash each commit into the previous one and discard its message. Note that in Vim you have to press `a` or `i` before you can edit the text, and to save and quit you press the `Esc` key followed by `shift + z + z`. Don't ask me why; it just is.

```
pick 130deo9 oldest commit message
fixup 4209fei second oldest commit message
fixup 4390gne third oldest commit message
fixup bmo0dne newest commit message
```

This merges all of your commits into the single commit with the message `oldest commit message`.

The next step is to rename your commit message. This is entirely a matter of opinion, but as long as you follow a consistent pattern, anything works. I recommend the [commit guidelines Google uses for Angular.js][9].

To change the commit message, use the `amend` flag.

```
git commit --amend
```

This also opens Vim, and the same text-editing and saving rules apply. To give an example of a good commit message, here's one that follows the rules from the guidelines:

```
feat: add stripe checkout button to payments page

- add stripe checkout button
- write tests for checkout
```

One advantage of sticking to the types listed in the guidelines is that it makes writing changelogs easier. You can also include information in the footer (again, as specified in the guidelines) to reference issues.

**Note:** If you're collaborating on a project and have pushed code to GitHub, you should avoid rebasing and squashing your commits. If you start changing version history under people's noses, you can end up with hard-to-track bugs and make everyone miserable.

Git has countless commands, but the ones covered here are probably all you'll need to know for your first few years of programming.

* * *

Sam Corcos is the lead developer and co-founder of [Sightline Maps][10], the most intuitive platform for 3D-printing topographical maps, as well as [LearnPhoenix.io][11], a site with intermediate-to-advanced tutorials for building scalable production apps with Phoenix and React. Use the coupon code free_code_camp to get $20 off LearnPhoenix.

(Banner image: [GitHub Octodex][6])

--------------------------------------------------------------------------------

via: https://medium.freecodecamp.org/git-cheat-sheet-and-best-practices-c6ce5321f52

Author: [Sam Corcos][a]
Translator: [firmianay](https://github.com/firmianay)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://medium.freecodecamp.org/@SamCorcos?source=post_header_lockup
[1]:https://medium.freecodecamp.org/tagged/git?source=post
[2]:https://medium.freecodecamp.org/tagged/github?source=post
[3]:https://medium.freecodecamp.org/tagged/programming?source=post
[4]:https://medium.freecodecamp.org/tagged/software-development?source=post
[5]:https://medium.freecodecamp.org/tagged/web-development?source=post
[6]:https://octodex.github.com/
[7]:https://xkcd.com/1597/
[8]:https://guides.github.com/introduction/flow/
[9]:https://github.com/angular/angular.js/blob/master/CONTRIBUTING.md#-git-commit-guidelines
[10]:http://sightlinemaps.com/
[11]:http://learnphoenix.io/
Getting Started with Perl on the Raspberry Pi
============================================================

> The Raspberry Pi: run whatever you like on it.

![Getting started with Perl on the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_blue_board.jpg?itok=hZkYuk-m "Getting started with Perl on the Raspberry Pi")

When I spoke at SVPerl (Silicon Valley Perl) recently about running Perl on the Raspberry Pi, someone asked, "I heard the Raspberry Pi is supposed to use Python. Is that right?" I was glad to answer him, because it's a common misconception. The Raspberry Pi can run any language: Python, Perl, and anything else in the initial installation of Raspbian Linux, the official Raspberry Pi software.

It sounds impressive, but it's really simple. Eben Upton, the British computer science professor who created the Raspberry Pi, has said that the "Pi" in the name was meant to sound like Python, because he likes the language and chose it as a first language for kids. But he and his team built a general-purpose computer. The open source software places no restrictions on the Pi. We can run whatever we want, entirely as we please.

My second point, at SVPerl and in this article, is to introduce my "PiFlash" script. It's written in Perl, but you don't need to know much Perl to use it to automate flashing a Raspberry Pi system image onto an SD card from Linux. It's friendly to beginners, saving them from accidentally wiping an entire hard drive while flashing an SD card. Even power users can benefit from its automation, myself included, which is why I wrote it. Similar tools already exist for Windows and Mac, but the Raspberry Pi website doesn't point Linux users to anything comparable. Now there is one.

Open source software has a long tradition of projects that exist because a developer wanted to "scratch their own itch" — to solve their own problem. That approach was described in Eric S. Raymond's 1997 paper and 1999 book "[The Cathedral and the Bazaar][8]," which defined the open source methodology. I wrote this script to fill just such a need for Linux users like me.

### Download system images

To begin a Raspberry Pi project, you first need to download an operating system for it. We call that a "system image" file. Once you've downloaded it to your desktop, laptop, or even another Raspberry Pi, you need to write, or "flash," it to your SD card. The details are covered online. Doing it by hand takes some expertise: you must flash the system image to the whole SD card, not to a partition within it. The system image must contain at least one partition of its own, because the Raspberry Pi's boot procedure requires a FAT32 filesystem partition, which is where booting starts. Beyond the boot partition, the other partitions can be any type the operating system kernel supports.

On most Raspberry Pis, we run some distribution based on the Linux kernel. There is a range of commonly used system images you can download. (And, of course, nothing stops you from building your own.)

The Raspberry Pi Foundation recommends "[NOOBS][9]" for newcomers. It stands for "New Out of the Box System," and it is obviously meant to sound like the term "noob," or more politely, "newbie." NOOBS is a Raspberry Pi-based Linux system that presents a menu from which it can automatically download and install several other system images onto your Pi.

[Raspbian Linux][10] is the Raspberry Pi port of the Debian Linux distribution. It is the official Linux distribution developed for the Pi and is maintained by the Raspberry Pi Foundation. Nearly all Raspberry Pi drivers and software are tried on Raspbian first before making it to other distributions. It installs Perl by default.

Ubuntu Linux (and the community edition, Ubuntu MATE) also includes the Raspberry Pi among its supported platforms with ARM (Advanced RISC Machines) processors — RISC standing for Reduced Instruction Set Computer. Ubuntu is a commercially supported open source fork of Debian Linux, and it also uses DEB packages. Perl is included. It only supports the Raspberry Pi 2 and 3, with their 32-bit ARM7 or 64-bit ARM8 processors. The ARM6 of the Raspberry Pi 1 and Zero was never supported by Ubuntu's build process.

[Fedora Linux][12] supports the Raspberry Pi 2, and Fedora 25 supports the Pi 3. Fedora is an open source project affiliated with Red Hat. Fedora serves as the base to which the commercial RHEL (Red Hat Enterprise Linux) adds commercial packages and support, so its software comes in RPM (Red Hat Package Manager) packages, like all Red Hat-compatible distributions. Like the others, it includes Perl.

[RISC OS][13] is a single-user operating system built specifically for ARM processors. It's worth considering if you want a compact desktop that is simpler (with fewer functions) than a Linux system. It also supports Perl.

[RaspBSD][14] is the Raspberry Pi distribution of FreeBSD. It's a Unix-based system rather than Linux. As an open source Unix, it carries on the Unix heritage and has a great deal in common with Linux, including a similar system environment built from similar open source software — Perl included.

[OSMC][15], the Open Source Media Center, and [LibreElec][16], a TV entertainment center, are both based on the Kodi entertainment center running on a Linux kernel. They are small, special-purpose Linux systems, so don't count on them supporting Perl.

[Microsoft Windows IoT Core][17] is the newcomer, and runs only on the Raspberry Pi 3. You need a Microsoft developer membership to download it. As a Linux geek, I simply don't look at it. My PiFlash script doesn't support it yet, but if it's what you're looking for, go take a look.

### The PiFlash script

If you look at the [Raspberry Pi SD-card flashing instructions][19], you'll find tools you can download for Windows or Mac systems to do the job. For Linux systems, however, there is only a set of manual instructions. I've done that procedure by hand far too many times, and it easily triggers a developer's instinct to automate the process — and that is the origin of the PiFlash script. It's a bit tricky, because Linux can be configured in so many ways, but they are all based on the Linux kernel.

I always imagined that the biggest potential mistake when doing this by hand is accidentally erasing the wrong device instead of the SD card, and destroying things on my hard drive that I wanted to keep. In my SVPerl talk, I was surprised to find people in the audience who had made that mistake (and weren't afraid to admit it). Therefore, one of PiFlash's purposes is to keep newcomers safe by refusing to erase devices that aren't SD cards. The PiFlash script also refuses to overwrite devices that contain mounted filesystems.

For experienced users, myself included, the PiFlash script also offers a convenient bit of automation. After downloading a system image, I don't have to manually extract it from its zip archive. PiFlash extracts it directly, whatever the format, and flashes it straight to the SD card.

I've posted [PiFlash and its instructions][21] on GitHub.

The command-line usage is as follows:

```
piflash [--verbose] input-file output-device
piflash [--verbose] --SDsearch
```

The `input-file` argument is the system image file you want to write — whatever you downloaded from a Raspberry Pi distribution site. The `output-device` argument is the path of the block device of the SD card you want to write to.

You can also use the `--SDsearch` argument to list the device names of SD cards mounted on the system.

The optional `--verbose` flag prints all of the program's state data, which is useful when you need help, when submitting a bug report, or when troubleshooting on your own. It's what I used while developing the script.

Here's an example of using the script to write a Raspbian image, still in its zip archive, to the SD card at `/dev/mmcblk0`:

```
piflash 2016-11-25-raspbian-jessie.img.zip /dev/mmcblk0
```

If you had specified `/dev/mmcblk0p1` (the first partition of the SD card) instead, it would recognize that the partition is not the right place to write the image, and it would refuse.

Recognizing which device is the SD card differs among Linux systems — it's a bit of an art. A name like mmcblk0 is the PCI-based SD-card interface on my laptop. If I used a USB SD-card adapter, the device would be something like `/dev/sdb`, which is harder to distinguish on systems with multiple hard drives. Still, only a handful of Linux block-device drivers support SD cards, and PiFlash checks the block device's parameters in both cases. If all else fails, it accepts a USB drive that is writable and removable and has the right physical sector count as an SD card.
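
Independent of PiFlash, a quick sanity check before flashing is to compare the block-device list before and after inserting the card; `lsblk` is a standard Linux tool for this. A quick sketch:

```
# the entry that appears after inserting the card is your SD card
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```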

I think that covers most cases. But what if you have an SD-card interface I don't know about? I'd like to hear from you. Please include the output with the `--verbose --SDsearch` flags, so I can see your system's environment. Ideally, if the PiFlash script becomes widely used, we can build an open source community around it to help as many Raspberry Pi users as possible.

### CPAN modules for the Raspberry Pi

[CPAN][22] (the Comprehensive Perl Archive Network) is a worldwide network of download mirrors containing a vast array of Perl modules, all of them open source. A great many CPAN modules have stood the test of time. For thousands of tasks, you don't need to reinvent the wheel; just use the code someone else has published, and then you can contribute your new features back.

Although the Raspberry Pi is a full-fledged Linux system supporting most CPAN modules, here I want to highlight the modules made specifically for Raspberry Pi hardware. Generally, these are for embedded systems doing measurement, control, and robotics. You can connect your Pi to external electronics through its GPIO (General-Purpose Input/Output) pins.

Modules that can use the Raspberry Pi's GPIO pins include [Device::SMBus][23], [Device::I2C][24], [RPi::PIGPIO][25], [RPi::SPI][26], [RPi::WiringPi][27], [Device::WebIO::RaspberryPi][28], and [Device::PiGlow][29]. Embedded modules the Pi supports include [UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C][30], [RPi::DHT11][31] (temperature/humidity), [RPi::HCSR04][32] (ultrasonic), [App::RPi::EnvUI][33], [RPi::DigiPot::MCP4XXXX][34], [RPi::ADC::ADS][35], [Device::PaPiRus][36], and [Device::BCM2835::Timer][37].

### Examples

Here are some examples of things we can do with Perl on a Raspberry Pi.

#### Example 1: Flash OSMC with PiFlash and play a video

For this example, you'll practice setting up and running a Raspberry Pi using the OSMC operating system.

* Go to the downloads area at [RaspberryPi.Org][5] and download the latest version of OSMC.
* Insert a blank SD card into your Linux desktop or laptop. First-generation Raspberry Pis take a full-size SD card; everything since uses a microSD, so you may need a universal adapter to insert it.
* Run `cat /proc/partitions` before and after inserting the card to see which device name the system assigned to your hardware. It could be something like `/dev/mmcblk0` or `/dev/sdb`. Flash the system image to the SD card with a command like: `piflash OSMC_TGT_rbp2_20170210.img.gz /dev/mmcblk0`.
* Eject the SD card, insert it into the Raspberry Pi, connect an HDMI monitor, and boot it up.
* While OSMC is setting up, grab a USB stick and put some videos on it. For demonstration purposes, I'll use the `youtube-dl` program to download two videos. Run `youtube-dl OHF2xDrq8dY` (Bloomberg's piece on the UK's high-tech industry, including the Raspberry Pi) and `youtube-dl nAvZMgXbE9c` (CNet's "Top 5 Raspberry Pi Projects"). Download them to the USB stick, then unmount and remove it.
* Plug the USB stick into the OSMC Raspberry Pi. Click the Videos option and navigate to the external device.
* Once you can play the videos on your Raspberry Pi, congratulations — you've completed the exercise. Have fun.

#### Example 2: A script to play videos from a directory in random order

This example uses a script to shuffle-play videos from a directory on the Raspberry Pi. Depending on the videos and where the device is placed, this could serve as a kiosk display. I wrote it to display videos of the great indoors.

* Set up the Raspberry Pi to boot Raspbian Linux. Connect it to an HDMI monitor.
* Download the [do-video script][6] from GitHub and put it on the Raspberry Pi.
* Follow the installation instructions on that page. The main thing is to install the omxplayer package, which plays videos smoothly using the Raspberry Pi's hardware video acceleration.
* Put some videos in the Videos directory under the home directory.
* Run `do-video`, and videos should start playing.

#### Example 3: A script to read GPS data

This example is deeper and more focused: it shows how Perl reads data from an external device. On the "[Perl on Pi][6]" page of my GitHub repo that appeared in the previous example, there is a gps-read.pl script. It reads NMEA (National Marine Electronics Association) data from a GPS over a serial port. The page also includes instructions, such as the AdaFruit Industries parts I used to build it, but you can use any GPS that outputs NMEA data.
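
If you just want to peek at the raw NMEA stream before trying the Perl script, something like this works from a shell — the device path and baud rate here are assumptions that depend on your GPS adapter:

```
$ stty -F /dev/ttyUSB0 9600 raw
$ grep --line-buffered '^\$GP' /dev/ttyUSB0
```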

With these tasks under your belt, I think you can use Perl on your Raspberry Pi as well as any other language. I hope you enjoy it.

--------------------------------------------------------------------------------

About the author:

Ian Kluft - Ian has liked programming and flying since he was in school. He has consistently worked with Unix, and switched to Linux six months after the kernel was first released. He holds a master's degree in computer science and the CSSLP (Certified Secure Software Lifecycle Professional) credential; on the other side, he's a pilot and a certified flight instructor. As a licensed radio amateur for more than 25 years, he has been experimenting with electronics in recent years, including on the Raspberry Pi.

------------------

via: https://opensource.com/article/17/3/perl-raspberry-pi

Author: [Ian Kluft][a]
Translator: [Taylor1024](https://github.com/Taylor1024)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/ikluft
[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
[5]:http://raspberrypi.org/
[6]:https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi
[7]:https://opensource.com/article/17/3/perl-raspberry-pi?rate=OsZH1-H_xMfLtSFqZw4SC-_nyV4yo_sgKKBJGjUsbfM
[8]:http://www.catb.org/~esr/writings/cathedral-bazaar/
[9]:https://www.raspberrypi.org/downloads/noobs/
[10]:https://www.raspberrypi.org/downloads/raspbian/
[11]:https://www.raspberrypi.org/downloads/raspbian/
[12]:https://fedoraproject.org/wiki/Raspberry_Pi#Downloading_the_Fedora_ARM_image
[13]:https://www.riscosopen.org/content/downloads/raspberry-pi
[14]:http://www.raspbsd.org/raspberrypi.html
[15]:https://osmc.tv/
[16]:https://libreelec.tv/
[17]:http://ms-iot.github.io/content/en-US/Downloads.htm
[18]:http://ms-iot.github.io/content/en-US/Downloads.htm
[19]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[20]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[21]:https://github.com/ikluft/ikluft-tools/tree/master/piflash
[22]:http://www.cpan.org/
[23]:https://metacpan.org/pod/Device::SMBus
[24]:https://metacpan.org/pod/Device::I2C
[25]:https://metacpan.org/pod/RPi::PIGPIO
[26]:https://metacpan.org/pod/RPi::SPI
[27]:https://metacpan.org/pod/RPi::WiringPi
[28]:https://metacpan.org/pod/Device::WebIO::RaspberryPi
[29]:https://metacpan.org/pod/Device::PiGlow
[30]:https://metacpan.org/pod/UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C
[31]:https://metacpan.org/pod/RPi::DHT11
[32]:https://metacpan.org/pod/RPi::HCSR04
[33]:https://metacpan.org/pod/App::RPi::EnvUI
[34]:https://metacpan.org/pod/RPi::DigiPot::MCP4XXXX
[35]:https://metacpan.org/pod/RPi::ADC::ADS
[36]:https://metacpan.org/pod/Device::PaPiRus
[37]:https://metacpan.org/pod/Device::BCM2835::Timer
[38]:https://opensource.com/user/120171/feed
[39]:https://opensource.com/article/17/3/perl-raspberry-pi#comments
[40]:https://opensource.com/users/ikluft
How to Manage Security Vulnerabilities in Your Open Source Product
============================================================

![software vulnerabilities](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-software-vulnerabilities.jpg?itok=D3joblgb "software vulnerabilities")

At the ELC + OpenIoT Summit, Intel security architect Ryan Ware will explain how to navigate the flood of vulnerabilities and manage the security of your product.

When developing open source software, the security vulnerabilities you need to consider can seem overwhelming. Common Vulnerabilities and Exposures (CVE) IDs, zero-days, and other vulnerabilities seem to be announced every day. With this flood of information, how can you keep up?

Intel security architect Ryan Ware says: "If you ship a product based on Linux kernel 4.4.1, that kernel already has, as of today, 9 CVEs filed against it. They all affect your product, despite the fact that you didn't know about them when you shipped it."

At [ELC][6] + [OpenIoT Summit][7], Ryan Ware's talk will cover strategies for implementing and successfully managing the security of your product. In the talk, Ware discusses the most common developer mistakes, strategies for keeping up with the latest vulnerabilities, and more.

**Linux.com: Let's start at the beginning. Could you briefly introduce Common Vulnerabilities and Exposures (CVEs), zero-days, and other vulnerabilities? What are they, and why do they matter?**

Ryan Ware: Great question. Common Vulnerabilities and Exposures (CVE) is a database maintained, at the behest of the U.S. government, by the MITRE Corporation, a non-profit. It is currently funded by the U.S. Department of Homeland Security. It was created in 1999 to hold information about all publicly disclosed security vulnerabilities. Each of these vulnerabilities has its own identifier (a CVE-ID) and can be referenced by it. The term CVE, which originally referred to the whole database, has gradually come to mean an individual security vulnerability: a CVE vulnerability.

Many of the vulnerabilities that end up in the CVE database started out as zero-days. These are vulnerabilities that, for whatever reason, did not follow a more orderly disclosure process such as "responsible disclosure." The key point is that they became public and exploitable without a software vendor being able to respond with some kind of fix, typically a software patch. These and other unpatched software vulnerabilities are critically important, because until the software is patched, the vulnerability can be exploited. In many ways, publishing a CVE or a zero-day is like firing a starting gun. Until you reach the end of the race, your customers are vulnerable.

**Linux.com: How many vulnerabilities are there? How do you determine which ones are relevant to your product?**

Ryan: Before exploring how many there are, anyone who ships software in any form should keep this in mind: even if you make every effort to ensure the software you ship has no known vulnerabilities, your software *will* have vulnerabilities. They just aren't known yet. For example, if you ship a product based on Linux kernel 4.4.1, that kernel already has, as of today, 9 CVEs filed against it. They all affect your product, even though you didn't know about them when you shipped it.

At this point, the CVE database contains 80,957 entries (as of January 30, 2017), including all records going back to 1999, when there were 894 documented issues. The largest number recorded in a single year so far was in 2014, when 7,946 issues were recorded. That said, I don't believe the smaller numbers over the past two years mean there are fewer security vulnerabilities. That's something I'll address in my talk.

**Linux.com: What strategies can developers use to keep up with all this information?**

Ryan: Developers can keep up with this flood of vulnerability information in various ways. One of my favorite tools is [CVE Details][8]. It presents the information from MITRE in a very digestible way. Its best feature is the ability to create custom RSS feeds, so you can track vulnerabilities in the components you care about. Those with more complex tracking needs can start by downloading the MITRE CVE database (which is freely available) and updating it regularly. Other excellent tools, such as cvechecker, let you check for known vulnerabilities in your software.

For the critical pieces of your software stack, I also highly recommend a very useful resource: engaging with the upstream community. These are the people who understand the software you're using best. There are no better experts in the world. Work with them.

**Linux.com: How do you know whether your product has addressed all its vulnerabilities? Are there tools you recommend?**

Ryan: Unfortunately, as I said above, you can never remove all vulnerabilities from your product. Some of the tools mentioned above are key. But there's one piece I haven't mentioned yet that is absolutely critical for any product you ship: a software update mechanism. If you cannot update your product's software in the field, you cannot address security issues when customers are affected. Your software must be updatable, and the easier the update process is, the better protected your customers will be.

**Linux.com: What else do developers need to know to successfully manage security vulnerabilities?**

Ryan: There's one mistake I see over and over. Developers always need to keep in mind the idea of minimizing their attack surface. What does that mean? In practice, it means including only what your product actually needs! That includes not only making sure you don't pull extraneous software packages into your product, but also compiling projects with configurations that turn off features you don't need.

How does this help? Imagine it's 2014. You've just come in to work and seen the tech headlines about Heartbleed. You know you include OpenSSL in your product, because you need it for some basic cryptographic functionality, but you don't use the TLS heartbeat feature the vulnerability is tied to. Would you rather:

a. Spend time working with customers and partners to push out a critical software update fixing this high-profile security issue?

b. Simply tell your customers and partners that you compiled your product's OpenSSL with the `-DOPENSSL_NO_HEARTBEATS` flag, that they are not exposed, and stay focused on new features and other productive work?

The easiest vulnerability to address is the one you don't include.
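
As a rough sketch of what option (b) looks like in practice — the flag itself comes from Ware's example above, while the exact build steps vary by OpenSSL version and platform:

```
# build OpenSSL with the TLS heartbeat code compiled out
$ ./config -DOPENSSL_NO_HEARTBEATS
$ make && make test
```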

(Banner image: [Creative Commons Zero][2] Pixabay)

--------------------------------------------------------------------------------

via: https://www.linux.com/news/event/elcna/2017/2/how-manage-security-vulnerabilities-your-open-source-product

Author: [Amber Ankerholz][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linux.com/users/aankerholz
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/ryan-ware01jpg
[4]:https://www.linux.com/files/images/security-software-vulnerabilitiesjpg
[5]:http://events.linuxfoundation.org/events/embedded-linux-conference/program/schedule?utm_source=linux&utm_campaign=elc17&utm_medium=blog&utm_content=video-blog
[6]:http://events.linuxfoundation.org/events/embedded-linux-conference
[7]:http://events.linuxfoundation.org/events/openiot-summit
[8]:http://www.cvedetails.com/
You've Heard of Timetables, but Do You Know Hash Tables?
============================================================

Exploring the world of hash tables and understanding their underlying mechanics is fascinating and very rewarding. So let's get into it and explore them from the ground up.

The hash table is a common data structure in many modern software applications. It provides dictionary-like functionality, letting you perform operations such as insertion, deletion, and lookup. Say I want to find the definition of "apple," and I know that definition is stored in the hash table I defined. I query my hash table for the definition. Its entry inside the hash table might look like: `"apple" => "a green fruit known as the king of fruits"`. Here, "apple" is my key, and "a green fruit known as the king of fruits" is the associated value.

One more example to make things clear; here are the contents of a hash table:

```
"bread" => "solid"
"water" => "liquid"
"soup" => "liquid"
"corn chips" => "solid"
```

I want to know whether *bread* is a solid or a liquid, so I query the hash table for the value associated with it, and it returns "solid." Now we have a rough idea of how a hash table works. Another important concept when using a hash table is that every key is unique. Suppose tomorrow I have a bread milkshake (which is a liquid); we then need to update the hash table, changing "solid" to "liquid" to reflect the change. So we add an entry to the dictionary: the key is "bread," and the value is "liquid." Can you spot what changed in the table below? (Translator's note: whatever a "bread milkshake" actually is — presumably a milkshake made of bread — just take it that the author treats it as bread in liquid form.)

```
"bread" => "liquid"
"water" => "liquid"
"soup" => "liquid"
"corn chips" => "solid"
```

That's right: the value associated with "bread" has been updated to "liquid."

**Keys are unique** — my bread can't be both a liquid and a solid. But what makes this data structure so special compared with the others? Why not just use an [array][1] instead? It depends on the nature of the problem. For some particular problem, an array may describe things better, so the key point to note is that **we should choose the data structure best suited to the problem**. For example, if all you need is to store a simple grocery list, an array is a fine fit. Consider the following two problems, which are quite different in nature:

1. I need a list of fruits
2. I need a list of fruits and what each one costs (per kilogram)

As you can see below, an array is probably the better choice for storing the list of fruits, but a hash table looks like the better choice for storing the price of each fruit.

```
//sample array
["apple", "orange", "pear", "grape"]
//sample hash table
{ "apple" : 3.05,
  "orange" : 5.5,
  "pear" : 8.4,
  "grape" : 12.4
}
```
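
If you want to experiment with the idea from a shell, Bash (version 4+) associative arrays behave like a small hash table; a quick sketch using the fruit prices above:

```
# declare an associative array, then set and look up keys
declare -A price
price[apple]=3.05
price[orange]=5.5
echo "an apple costs ${price[apple]} per kg"
```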

In fact, there are many opportunities to [use][2] hash tables.

### Time, and what it means to you

[Here is a refresher on time and space complexity][3].

On average, searching, inserting, and deleting records in a hash table all take `O(1)` time. In practice, `O(1)` is read "big O of 1" and denotes constant time. That means the running time of each operation does not depend on the amount of data in the data set. I can promise you that looking up, inserting, and deleting an item will each take constant time, "if and only if" the hash table is implemented correctly. If it isn't, it can be as slow as `O(n)`, especially if all the data hashes to the same position/slot in the table.

### Building a good hash table

So far we know how to use a hash table, but what if we want to **build** one? Essentially, what we need to do is map a string (such as "dog") to a hash code (a generated number), which maps to an index of an array. You might ask, why not just use the index directly? Why bother? This way, we can query for "dog" directly and immediately get the position where "dog" lives: `String name = Array["dog"] // the name is Rufus`. With plain indexes, we might not know the index where the name is located. For example, `String name = Array[10] // the name is now Bob` — and that's not my dog's name. That is the benefit of mapping a string to a hash code (which corresponds to an index of an array). We can compute the array index using the modulo operator and the size of the hash table: `index = hash_code % table_size`.
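
To make the mapping concrete, you can fake a hash function in a shell with a checksum tool and take the modulus; a toy sketch, where cksum is only a stand-in for a real hash function:

```
# CRC of the key, modulo the table size, gives the slot index
key="dog"; table_size=16
hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
echo "slot index: $(( hash % table_size ))"
```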

Another situation we need to avoid is two keys mapping to the same index. This is called a **hash collision**, and it can easily happen if the hash function is poorly implemented. In fact, every hash function with more inputs than outputs has some chance of collision. Here is a simple collision demonstrated by two outputs of the same function:

```
int cat_idx = hashCode("cat") % table_size; //cat_idx is now equal to 1
int dog_idx = hashCode("dog") % table_size; //dog_idx is also equal to 1
```

We can see that both array indexes are now 1. The values would overwrite each other, because they are written to the same index. If we looked up the value for "cat," it would return "Rufus," which is not what we want. There are many ways of [resolving hash collisions][4], but one popular approach is called **chaining**. The idea of chaining is that there is a linked list for each index of the array, and if a collision occurs, the value is stored in the linked list. So in the previous example, we would get the value we asked for, but we would have to search the linked list attached to index 1 of the array. Hashing with chaining takes `O(1 + α)` time, where α is the load factor, which can be expressed as n/k, n being the number of entries in the hash table and k the number of available slots. But remember, this conclusion holds only if the keys you give it are particularly random (relying on [SUHA][5]).

That's a big assumption to make, because there is always the possibility that non-equal keys hash to the same slot. One solution is to take the reliance on randomness away from the keys, and instead concentrate the randomness in how the keys are hashed, reducing the likelihood of conflicts. This is known as...

### Universal hashing

The idea is pretty simple: select a hash function h at random from a universal hash family to compute the hash code. In other words, choose any random hash function to hash the key. This way, the likelihood that two different keys hash to the same result is very low. (Translator's note: the original says "not be the same," which appears to be a slip.) I'll only mention it briefly; if you don't believe me, trust [the math][6]. One more thing to watch out for when implementing this approach: if you choose a bad universal hash family, it can drag the time and space complexity down to `O(U)`, where U is the size of the hash family. The challenge is finding a hash family that doesn't take too much time to compute or too much space to store.

### A hash function fit for the gods

The pursuit of perfection is human nature. Can we build a *perfect hash function* that maps keys into a set of integers with essentially *no collisions*? The good news is that we can, to some degree — but our data has to be static (meaning no insertions/deletions/updates within a certain period of time). One approach to implementing a perfect hash function is 2-level hashing, which is basically a combination of the two techniques we discussed above. It uses *universal hashing* to select which hash function to use, and combines that with *chaining* — except that instead of a linked-list data structure, we use another hash table. Let's see how this looks below:

[![2-Level Hashing](http://www.zeroequalsfalse.press/2017/02/20/hashtables/Diagram.png "2-Level Hashing")][8]

**But how does this work, and how can we be sure there are no collisions to worry about?**

It works the opposite way to the [birthday paradox][7]. The paradox says that in a group of randomly chosen people, some pair will share a birthday. But if the number of days in a year far exceeds the number of people (squared or more), then there is an overwhelming probability that no one shares a birthday. So here's how it relates: for each chained hash table, its size is the square of the size of the first-level hash table. That is, if two elements hash to the same slot, the chained hash table will have a size of 4. Most of the time, the chained tables will be very sparse/empty.

Repeat the following two steps to ensure collisions are nothing to worry about:

* Select a hash function from the universal hash family to compute the hash
* If a collision occurs, select another hash function from the universal hash family and compute again

Literally, that's it (this is an `O(n^2)` space solution). If space is a concern, a different approach is clearly needed. The good news, though, is that on average the process only needs to be done **twice**.

### Summary

A hash table is only as good as its hash function. Deriving a perfect hash function that delivers the functionality within time and space constraints is hard. I recommend you always consider hash tables first when solving a problem; they can give you a huge performance advantage, and they can make a noticeable difference in the usability of your application. Hash tables and perfect hash functions are often used in real-time programming applications, and they have been widely used in all kinds of algorithms. Whether you see them or not, hash tables are everywhere.

--------------------------------------------------------------------------------

via: http://www.zeroequalsfalse.press/2017/02/20/hashtables/

Author: [Marty Jacobs][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://www.zeroequalsfalse.press/about
[1]:https://en.wikipedia.org/wiki/Array_data_type
[2]:https://en.wikipedia.org/wiki/Hash_table#Uses
[3]:https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/
[4]:https://en.wikipedia.org/wiki/Hash_table#Collision_resolution
[5]:https://en.wikipedia.org/wiki/SUHA_(computer_science)
[6]:https://en.wikipedia.org/wiki/Universal_hashing#Mathematical_guarantees
[7]:https://en.wikipedia.org/wiki/Birthday_problem
[8]:http://www.zeroequalsfalse.press/2017/02/20/hashtables/Diagram.png
Learn Ruby Programming with Open Source Books
============================================================

![](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/Ruby-Montage.png?w=565&ssl=1)

### Open source Ruby books

Ruby is a general-purpose, scripting, structured, flexible, fully object-oriented programming language developed by Yukihiro "Matz" Matsumoto. It features a fully dynamic type system, which means most of its type checking is done at runtime rather than at compile time, so programmers don't need to worry excessively about whether something is an integer or a string. Ruby handles memory management automatically, and it shares many features with Python, Perl, Lisp, Ada, Eiffel, and Smalltalk.

This article is part of [OSSBlog's series on open source programming books][18].

### 《[Ruby Best Practices][1]》

![Ruby Best Practices](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/03/RubyBestPractices.jpg?resize=200%2C262&ssl=1)

Author: Gregory Brown (328 pages)

* Driving code through tests — covers a great deal of testing philosophy and technique, using mocks and stubs
* Designing beautiful APIs by exploiting Ruby's mysterious powers: flexible argument processing and code blocks
* Using the dynamic toolkit to show developers how to build flexible interfaces, implement per-object behavior, extend and modify existing code, and build classes and modules programmatically
* Text processing and file management, focusing on regular expressions, the file and tempfile standard libraries, and text-processing strategies in practice
* Functional programming techniques for optimizing module code organization, memoization, infinite lists, and higher-order procedures
* Understanding how and why code can go wrong, and how to handle logging
* Breaking down cultural barriers by exploiting Ruby's multilingual capabilities
* Skillful project maintenance

This is an open book, released under a CC NC-SA license.

[Download 《Ruby Best Practices》 here][1].

### 《[I Love Ruby][2]》

![I Love Ruby](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/03/LoveRuby.png?resize=200%2C282&ssl=1)

Author: Karthikeyan A K (246 pages)

《I Love Ruby》 explains fundamental concepts and techniques in greater depth than traditional introductions. That approach provides a solid foundation for writing useful, correct, maintainable, and efficient Ruby code.

Chapters cover:

* Gems
* Metaprogramming

Under the GNU Free Documentation License, you may copy, distribute, and modify this book — version 1.3 or any later version published by the Free Software Foundation.

[Download 《I Love Ruby》 here][2].

### [Programming Ruby – The Pragmatic Programmer's Guide][3]

![Programming Ruby - The Pragmatic Programmer's Guide](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/ProgrammingRuby.jpeg?resize=200%2C248&ssl=1)

Authors: David Thomas, Andrew Hunt (HTML)

《Programming Ruby – The Pragmatic Programmer's Guide》 is a tutorial and reference for the Ruby programming language. With Ruby, you will be able to write better code, be more productive, and make programming a more enjoyable experience.

The first edition of this book was released under the Open Publication License v1.0 or later. The updated second edition covers Ruby 1.8 and includes descriptions of all the newly available libraries, but it is not released under a free-distribution license.

[Download 《Programming Ruby – The Pragmatic Programmer's Guide》 here][3].

### 《[Why's (Poignant) Guide to Ruby][4]》

![Why's (Poignant) Guide to Ruby](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/WhysGuideRuby.jpg?resize=200%2C218&ssl=1)

Author: why the lucky stiff (176 pages)

This book is available under a CC-SA license.

[Download 《Why's (Poignant) Guide to Ruby》 here][4].

### 《[Ruby Hacking Guide][5]》

![Ruby Hacking Guide](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/RubyHackingGuide.png?resize=200%2C250&ssl=1)

Author: Minero Aoki; translated by Vincent Isambart and Clifford Escobar Caoille (HTML)

The official site supporting the original book is [i.loveruby.net/ja/rhg/][10].

[Download 《Ruby Hacking Guide》 here][5].

### 《[The Book Of Ruby][6]》

![The Book Of Ruby](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/BookRuby.jpg?resize=200%2C270&ssl=1)

Author: Huw Collingbourne (425 pages)

《The Book Of Ruby》 is provided as a PDF file, and every example in every chapter is accompanied by runnable source code. There is also an introduction explaining how to run the Ruby code in Steel or in any other editor/IDE of your choice. It focuses mainly on version 1.8.x of the Ruby language.

The book is divided into bite-sized chunks. Each chapter introduces one topic, divided into several subtopics. Every programming topic consists of one or more small, self-contained, runnable Ruby programs.

* Strings, numbers, classes, and objects — getting input and output, strings and embedded evaluation, numbers and conditional tests (if ... then), local and global variables, classes and objects, instance variables, messages, methods, polymorphism, constructors, and inspecting objects
* Class hierarchies, attributes, and class variables — superclasses and subclasses, passing arguments up to the superclass, accessor methods, "set" accessors, attribute readers and writers, calling superclass methods, and class variables

This book is published by SapphireSteel Software, developer of the Ruby In Steel IDE for Visual Studio. Readers may copy and distribute the text and code of the book (free edition).

[Download 《The Book Of Ruby》 here][6].

### 《[The Little Book Of Ruby][7]》

![The Little Book of Ruby](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/03/TheLittleBookRuby.png?resize=200%2C259&ssl=1)

Author: Huw Collingbourne (87 pages)

This book may be freely copied and distributed, as long as the original text is kept intact and the copyright notice is retained.

[Download 《The Little Book of Ruby》 here][7].

### 《[Kestrels, Quirky Birds, and Hopeless Egocentricity][8]》

![Kestrels, Quirky Birds, and Hopeless Egocentricity](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/03/KestrelsQuirkyBirds.jpeg?resize=200%2C259&ssl=1)

Author: Reg "raganwald" Braithwaite (123 pages)

This book is released under the MIT license.

[Download 《Kestrels, Quirky Birds, and Hopeless Egocentricity》 here][8].

### 《[Ruby Programming][9]》

![Ruby Programming](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/03/RubyProgrammingWikibooks.png?resize=200%2C285&ssl=1)

Author: Wikibooks.org (261 pages)

Ruby is an interpreted, object-oriented programming language.

This book is released under the CC-SA 3.0 Unported license.

[Download 《Ruby Programming》 here][9].

* * *

In no particular order, I'll close with a few Ruby programming books that are not released under open source licenses but can be downloaded free of charge:

* [Mr. Neighborly's Humble Little Ruby Book][11] – an easy-to-read, easy-to-follow complete guide to Ruby.
* [Introduction to Programming with Ruby][12] – learn the basics of programming, starting from scratch.
* [Object Oriented Programming with Ruby][13] – learn the basics of programming, starting from scratch.
* [Core Ruby Tools][14] – a short overview of Ruby's four core tools: Gems, Ruby version managers, Bundler, and Rake.
* [Learn Ruby the Hard Way, 3rd Edition][15] – an introductory book for beginners.
* [Learn to Program][16] – by Chris Pine.

via: https://www.ossblog.org/study-ruby-programming-with-open-source-books/

Author: [Steve Emms][a]
Translator: [ucasFL](https://github.com/ucasFL)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
The Problem with Software Before Standards
============================================================

> Open source projects need to take seriously the standards they include in their deliverables.

![The problem with software before standards](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY "The problem with software before standards")

By any measure, the rise of open source software as an alternative to traditional proprietary software has been remarkably successful. Today there are tens of millions of repositories on GitHub alone, and the number of significant projects is growing rapidly. As of this writing, the [Apache Software Foundation][4] hosts more than [300 projects][5], and the [Linux Foundation][6] supports more than 60. Meanwhile, the [OpenStack Foundation][7] has more than 60,000 members in over 180 countries.

So what could possibly be wrong with this picture?

What's wrong is that open source software, for lack of sufficient awareness, cannot by itself satisfy all of the many expectations users place on it. Worse, many members of open source communities (business managers as well as developers) have no interest in using the most appropriate tools to close the gap.

Let's start by identifying the problem that needs solving, and look at how that problem was solved in the past.

The problem is this: many projects often address overlapping pieces of the same big problem, while customers want to be able to choose among competing products and to switch easily to another product if they become dissatisfied. Right now that often isn't possible, and until this problem is solved, it will hold back the adoption of open source software.

This is not a new problem, nor one without a traditional solution. For a century and a half, users' expectations of broad choice and the freedom to switch vendors have been met through the development of standards. In the physical world, you can choose among countless vendors of screws, light bulbs, tires, and extension cords — even among uniquely shaped wine glasses, should you care to — because standards provide the physical specifications for each of these items. In the areas of health and safety, our well-being likewise depends on thousands of standards developed by the private sector, ensuring the best outcomes while maximizing competition.

As information and communications technology (ICT) developed, the same approach gave rise to major standards-setting organizations such as the International Telecommunication Union (ITU), the International Electrotechnical Commission (IEC), and the IEEE Standards Association (IEEE-SA). Close to a thousand consortia have followed, developing, promoting, and testing compliance with ICT standards.

While not all ICT standards deliver seamless interoperability, the technological world we live in today runs on thousands of essential standards — covering computers, mobile devices, Wi-Fi routers, and everything else that relies on electricity — and those standards have largely kept that promise.

Crucially, this system evolved over a long period of time to serve customers' desire for a rich choice of products, freedom from vendor lock-in, and globally available services.

Now let's look at how open source has evolved.

The good news is that great software is being created. The bad news is that in key areas such as cloud computing and network virtualization, no single foundation is developing the entire stack. Instead, individual projects develop a single layer or layers, relying on ad hoc, goodwill-based cooperation for those projects to eventually stack up. When that process works well, the results are good, but it can also produce the potential for the same kind of lock-in as traditional proprietary products. When the process works badly, the result is wasted time and effort for vendors and community members alike, and disappointed customer expectations.

The clearest solution is to create standards that allow customers to avoid lock-in and that encourage multiple solutions to compete productively on added services and features. There are exceptions, of course, but that is not what is happening in the open source world.

The main reason is that the prevailing view in the open source community is that standards mean constraint, obsolescence, and redundancy. For a single layer of an integrated stack, perhaps they do. But customers want the freedom that comes from ongoing choice and vigorous competition, and so the result circles back to the bad outcome: multiple vendors offering similar integrated stacks, each locked in to a single technology.

This problem is well described in "[We'll Be Enslaved to Proprietary Clouds Unless We Collaborate][8]," written by Yaron Haviv on June 14, 2017.

> There is a problem in today's open source ecosystem: cross-project integration is not common. Open source projects are capable of large-scale collaboration, building layered, modular architectures — Linux has proven that again and again. But in sharp contrast to the Linux ethos is the daily reality of many open source communities today.
>
> Take the big data ecosystem as an example: it is built by stacking many shared components or common APIs and layers. The process also lacks standard wire protocols, and each processing framework (think Spark, Presto, and Flink) has its own data-source API.
>
> This lack of collaboration is creating concern. Without it, projects become non-interchangeable, which ultimately hurts customers. Because everyone has to start over and redevelop from scratch, it essentially locks customers in and slows the projects down.

Haviv proposes two solutions:

* Closer collaboration among projects, consolidating multiple projects to eliminate overlap and make the stack more tightly integrated;
* Developing APIs that make switching easier.

Both approaches would work. But unless something changes, we will only see the first, and that leads to the technology lock-in foreseen above. The result would look like the industry of the old WinTel world, or like Apple throughout its history, where competing products trade away choice for tight integration.

The same thing seems very likely to happen in the new open source world if open source projects continue to ignore the need for standards: competition within layers, and even between stacks, will disappear. It needn't happen, if action is taken now.

Because if only lip service is paid — software first, standards later — no real interest in developing standards will ever emerge. The main reason is that most business people and developers know little about standards. Unfortunately, the reasons things got this way are understandable. There are several of them:

* Universities provide almost no training in standards;
* Companies that once had professional standards staff have disbanded those departments, and the engineers now deployed receive far too little training from standards organizations;
* There is little career value in building expertise in an employer's standards work;
* Engineers participating in standards activities may have to advance their employer's strategic interests at the expense of what they believe to be the best technical solution;
* Within many companies, standards professionals and open source developers rarely communicate;
* Many software engineers view standards as being in direct conflict with the "four freedoms" that define FOSS.

Now, let's look at what is happening in the open source world:

* Hardly a software engineer today doesn't know about open source;
* Engineers enjoy the convenience of open source tools every day;
* Much of the most exciting, cutting-edge work is happening in open source projects;
* Experienced developers in hot open source areas are in great demand and earn substantial rewards;
* Developers on well-regarded projects enjoy unprecedented autonomy in how they develop software;
* Indeed, virtually every major ICT company participates in multiple open source projects, and at the highest membership level, the combined cost per company (dues plus dedicated employees) often exceeds a million dollars a year.

Taken out of context, this comparison might suggest that standards are headed for the ash heap of ICT history. But the reality is rather different. One overlooked fact is that open source development is a more delicate flower than commonly assumed. The reasons:

* A project's principal supporters can withdraw (it has happened), causing the project to fail;
* Personality and culture clashes within a community can tear it apart;
* Whether important projects can integrate more tightly remains to be seen;
* Sometimes proprietary interests get diluted in the game, and well-funded open source projects can, in certain cases, end in failure;
* Over time, individual companies may decide their open source strategies didn't deliver the returns they expected;
* Too much attention to the failures of key open source projects could lead vendors to abandon investment in new projects, and persuade customers to be cautious about committing to open source solutions.

Curiously, the collaborative bodies most actively addressing these issues are standards organizations — in part because they have felt threatened by the rise of open source collaboration. Their responses include updating their intellectual property policies to allow all kinds of collaboration on that basis, developing open source tools, including open source code in standards, and developing open source handbooks in other types of work programs.

It would be a great shame if all this came to nothing, because it would be an enormous loss for open source. Whether that happens depends on the decisions today's projects make: to supply what the marketplace demands, or to settle for a future of declining relevance rather than continued success.

*This article originally appeared on ConsortiumInfo.org's [Standards Blog][2] and is republished with permission.*

(Banner image: opensource.com)

--------------------------------------------------------------------------------

About the author:

Andy Updegrove - Andy helps CEOs, management teams, and the investors behind them build successful organizations. He has been a pioneer, providing business-savvy legal counsel and strategic advice to high-tech companies since 1979. On the global stage, he has frequently acted as a representative, helping to launch more than 135 worldwide standards-setting, open source, and advocacy consortia, including some of the largest and most influential standards bodies in the world.

---

via: https://opensource.com/article/17/7/software-standards

Author: [Andy Updegrove][a]
Translator: [softpaopao](https://github.com/softpaopao)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
Ubuntu Linux Installation Types: Server vs. Desktop
============================================================

> The kernel is the core of any Linux machine.

Previously I covered obtaining and installing Ubuntu Linux; this time I'll discuss the desktop and server installations. Both types of installation serve certain needs, and they are separate downloads from Ubuntu. You can choose what you need from [Ubuntu.com/downloads][1].

Regardless of the installation type, there are some similarities.

![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_1.jpg)

*Packages can be added from the desktop system's graphical user interface or from the server system's command line.*

Both use the same kernel and package manager system. The package manager system is a repository of programs precompiled to run on almost any Ubuntu system. Programs are grouped into packages, and packages are what get installed. Packages can be added from the desktop system's graphical user interface or from the server system's command line.

Programs are installed with a utility called `apt-get`. This is a package manager system, or program manager system. The end user simply types `apt-get install (package-name)` on the command line, and Ubuntu automatically fetches the package and installs it.
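
For example, installing the Apache web server mentioned later in this article would look roughly like this (apache2 is the Ubuntu package name; the rest is standard apt-get usage):

```
# refresh the package index, then install a package by name
sudo apt-get update
sudo apt-get install apache2
```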

Packages usually install commands that come with documentation accessible through man pages (a topic in itself). The man pages can be viewed by typing `man (command)`. That opens a page describing the command's usage in detail. An end user can also Google just about any Linux command or package and find a wealth of information about it.

For example, after installing the network-attached storage suite, it can be administered from the command line, from a GUI, or with a program called Webmin. Webmin installs a web-based administration interface for configuring most Linux packages, and it's popular with the server-only crowd, because it installs as a web page and doesn't require a GUI. It also allows administering a server remotely.

Most, if not all, Linux-based packages have videos and web pages dedicated to helping you run them. Just search YouTube for "Linux Ubuntu NAS," and you'll find a video guiding you through setting up and configuring that service. There are videos dedicated to the setup and operation of Webmin, too.

The kernel is the core of any Linux installation. Since the kernel is modular, it is surprisingly small (as the name implies). I have run a Linux server from a small 32MB flash drive. That's not a typo — 32MB of space! Most of the space a Linux system uses is taken up by the packages installed.

**Server**

The server installation ISO image is the smallest download Ubuntu offers. It is a stripped-down version of the operating system, optimized for server operations. This version has no GUI. By default, it runs entirely from the command line.

Removing the GUI and other components simplifies the system and maximizes performance. Packages that aren't installed initially can be added later through the command-line package manager. Since there is no GUI, all configuration, troubleshooting, and package management must be done from the command line. Many administrators use the server installation to get a clean or minimal system, and then add only the specific packages they need. That includes adding a desktop GUI system to make a slimmed-down desktop system.

A radio station might use a Linux server as an Apache web server or a database server. Those are the programs that genuinely consume processing power, which is why they're typically run on the server installation, without a GUI. SNORT and Cacti are other programs you can run on your Linux server (both applications were covered in the previous article, which you can find here: [_http://tinyurl.com/yd8dyegu_][2]).

**Desktop**

The desktop installation ISO image is quite large, and it carries a number of packages the server installation ISO doesn't. This installation is meant for workstation or daily desktop use. It allows customizing the installed packages (programs), or you can accept the default desktop configuration.

![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_2.jpg)

*The desktop installation ISO image is quite large, and it carries a number of packages the server installation ISO doesn't. This installation is designed for workstation or daily desktop use.*

Packages are installed through the apt-get package manager system, just as in the server installation. The difference between the two is that in the desktop installation, the apt-get package manager has a nice GUI front end. That makes it easy to install or remove packages from the system with a few mouse clicks! The desktop installation sets up a GUI along with many packages associated with a desktop operating system.

![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_3.jpg)

*Packages are installed through the apt-get package manager system, just as in the server installation. The difference between the two is that in the desktop installation, the apt-get package manager has a nice GUI front end.*

The system is ready to use after installation and can be a nice replacement for your Windows or Mac desktop. It comes with plenty of packages, including an office suite and a web browser.

Linux is a mature and powerful operating system. Regardless of the installation type, it can be configured to fit almost any need. From a powerful database server to a basic desktop operating system for browsing the web and writing letters to grandma, the sky's the limit, and the available packages are nearly inexhaustible. If you run into a problem that calls for a computerized solution, Linux probably offers free or low-cost software to solve it.

By offering two installation versions, Ubuntu has done a good job of starting people off in the right direction.

*Cottingham is a former radio chief engineer, now working in streaming media.*

--------------------------------------------------------------------------------

via: http://www.radiomagonline.com/deep-dig/0005/linux-installation-types-server-vs-desktop/39123

Author: [Chris Cottingham][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://www.radiomagonline.com/author/chris-cottingham
[1]:https://www.ubuntu.com/download
[2]:http://tinyurl.com/yd8dyegu
[3]:http://www.radiomagonline.com/author/chris-cottingham
@ -1,19 +1,17 @@
|
||||
六个聪明的 Linux 命令行技巧
|
||||
六个优雅的 Linux 命令行技巧
|
||||
============================================================
|
||||
|
||||
### 一些有用的命令能让命令行上的生命更有价值
|
||||
> 一些非常有用的命令能让命令行的生活更满足
|
||||
|
||||
![command key keyboard](https://images.idgesg.net/images/article/2017/08/commands-micah_elizabeth_scott-cropped-100733439-large.jpg)
|
||||
[Micah Elizabeth Scott][32] [(CC BY 2.0)][33]RELATED
|
||||
|
||||
|
||||
使用 Linux 命令工作可以获得许多乐趣,但是如果您使用一些命令,它们可以减少您的工作或以有趣的方式显示信息时,您将获得更多的乐趣。在今天的文章中,我们将介绍六个命令,它们可能会使你用在命令行上的时间更加值当。
|
||||
|
||||
### watch
|
||||
|
||||
watch 命令会重复运行您给出的任何命令,并显示输出。默认情况下,它每两秒运行一次命令。命令的每次运行都将覆盖上一次运行时显示的内容,因此您始终可以看到最新的数据。
|
||||
`watch` 命令会重复运行您给出的任何命令,并显示输出。默认情况下,它每两秒运行一次命令。命令的每次运行都将覆盖上一次运行时显示的内容,因此您始终可以看到最新的数据。
|
||||
|
||||
您可能会在等待某人登录时使用它。在这种情况下,您可以使用 “watch who” 命令或者 “watch -n 15 who” 命令使每次运行的时间变为 15 秒,而不是两秒。另外终端窗口的右上角会显示日期和时间。
|
||||
您可能会在等待某人登录时使用它。在这种情况下,您可以使用 `watch who` 命令或者 `watch -n 15 who` 命令使每 15 秒运行一次,而不是两秒一次。另外终端窗口的右上角会显示日期和时间。
|
||||
|
||||
```
|
||||
$ watch -n 5 who
|
||||
@ -25,7 +23,6 @@ zoe pts/1 2017-08-23 08:15 (192.168.0.19)
|
||||
|
||||
您也可以使用它来查看日志文件。如果您显示的数据没有任何变化,则只有窗口角落里的日期和时间会发生变化。
|
||||
|
||||
|
||||
```
|
||||
$ watch tail /var/log/syslog
|
||||
Every 2.0s: tail /var/log/syslog stinkbug: Wed Aug 23 15:16:37 2017
|
||||
@ -47,11 +44,11 @@ Aug 23 15:15:01 stinkbug CRON[7828]: (root) CMD (command -v debian-sa1 > /dev/nu
|
||||
ll && debian-sa1 1 1)
|
||||
```
|
||||
|
||||
这里的输出和使用命令 “tail -f /var/log/syslog” 的输出相似。
|
||||
这里的输出和使用命令 `tail -f /var/log/syslog` 的输出相似。
|
||||
|
||||
### look
|
||||
|
||||
这个命令的名字 look 可能会让我们以为它和 watch 做类似的事情,但其实是不同的。look 命令用于搜索以某个特定字符串开头的单词。
|
||||
这个命令的名字 `look` 可能会让我们以为它和 `watch` 做类似的事情,但其实是不同的。`look` 命令用于搜索以某个特定字符串开头的单词。
|
||||
|
||||
```
|
||||
$ look ecl
|
||||
@ -70,7 +67,7 @@ ecliptic
|
||||
ecliptic's
|
||||
```
|
||||
|
||||
look 命令通常有助于单词的拼写,它使用 /usr/share/dict/words 文件,除非你使用如下的命令指定了文件名:
|
||||
`look` 命令通常有助于单词的拼写,它使用 `/usr/share/dict/words` 文件,除非你使用如下的命令指定了文件名:
|
||||
|
||||
```
|
||||
$ look esac .bashrc
|
||||
@ -79,12 +76,11 @@ esac
|
||||
esac
|
||||
```
|
||||
|
||||
在这种情况下,它的作用就像跟在一个 awk 命令后面的 grep ,只打印匹配行上的第一个单词。
|
||||
|
||||
在这种情况下,它的作用就像跟在一个 `awk` 命令后面的 `grep` ,只打印匹配行上的第一个单词。
|
||||
|
||||
### man -k
|
||||
|
||||
man -k 命令列出包含指定单词的手册页。它的工作基本上和 apropos 命令一样。
|
||||
`man -k` 命令列出包含指定单词的手册页。它的工作基本上和 `apropos` 命令一样。
|
||||
|
||||
```
|
||||
$ man -k logrotate
|
||||
@ -95,8 +91,7 @@ logrotate.conf (5) - rotates, compresses, and mails system logs
|
||||
|
||||
### help
|
||||
|
||||
当你完全绝望的时候,您可能会试图使用此命令,help 命令实际上是显示一个 shell 内置的列表。最令人惊讶的是它相当多的参数变量。你可能会看到这样的东西,然后开始想知道这些内置功能可以为你做些什么:
|
||||
|
||||
当你完全绝望的时候,您可能会试图使用此命令,`help` 命令实际上是显示一个 shell 内置命令的列表。最令人惊讶的是它有相当多的参数变量。你可能会看到这样的东西,然后开始想知道这些内置功能可以为你做些什么:
|
||||
```
|
||||
$ help
|
||||
GNU bash, version 4.4.7(1)-release (i686-pc-linux-gnu)
|
||||
@ -149,7 +144,7 @@ A star (*) next to a name means that the command is disabled.
|
||||
|
||||
### stat -c
|
||||
|
||||
stat 命令用于显示文件的大小、所有者、组、索引节点号、权限、修改和访问时间等重要的统计信息。这是一个非常有用的命令,可以显示比 ls -l 更多的细节。
|
||||
`stat` 命令用于显示文件的大小、所有者、用户组、索引节点号、权限、修改和访问时间等重要的统计信息。这是一个非常有用的命令,可以显示比 `ls -l` 更多的细节。
|
||||
|
||||
```
|
||||
$ stat .bashrc
|
||||
@ -163,14 +158,14 @@ Change: 2017-06-21 17:37:11.899157791 -0400
|
||||
Birth: -
|
||||
```
|
||||
|
||||
使用 -c 选项,您可以指定要查看的字段。例如,如果您只想查看一个文件或一系列文件的文件名和访问权限,则可以这样做:
|
||||
使用 `-c` 选项,您可以指定要查看的字段。例如,如果您只想查看一个文件或一系列文件的文件名和访问权限,则可以这样做:
|
||||
|
||||
```
|
||||
$ stat -c '%n %a' .bashrc
|
||||
.bashrc 644
|
||||
```
|
||||
|
||||
在此命令中, %n 表示每个文件的名称,而 %a 表示访问权限。一个 %u 表示数字类型的 UID,并且 %U 表示用户名。
|
||||
在此命令中, `%n` 表示每个文件的名称,而 `%a` 表示访问权限。`%u` 表示数字类型的 UID,而 `%U` 表示用户名。
|
||||
|
||||
```
|
||||
$ stat -c '%n %a' bin/*
|
||||
@ -188,17 +183,18 @@ bin/show_release 700 shs
|
||||
|
||||
### TAB
|
||||
|
||||
如果你没有使用过 tab 命令来补全文件名,你真的错过了一个非常有用的命令行技巧。tab 命令提供文件名补全功能(包括使用 cd 时的目录)。它在出现歧义之前尽可能多的填充文件名(多个文件以相同的字母开头。如果您有一个名为 bigplans 的文件,另一个名为 bigplans2017 的文件会发生歧义,你必须决定是按下回车键还是输入 “2” 之后再按下 tab 键选择第二个文件。
|
||||
如果你没有使用过 tab 键来补全文件名,你真的错过了一个非常有用的命令行技巧。tab 键提供文件名补全功能(包括使用 `cd` 时的目录)。它在出现歧义之前尽可能多的填充文件名(多个文件以相同的字母开头。如果您有一个名为 `bigplans` 的文件,另一个名为 `bigplans2017` 的文件会发生歧义,你将听到一个声音,然后需要决定是按下回车键还是输入 `2` 之后再按下 tab 键选择第二个文件。
|
||||
|
||||
(题图:[Micah Elizabeth Scott][32] [(CC BY 2.0)][33])

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3219684/linux/half-a-dozen-clever-linux-command-line-tricks.html

作者:[Sandra Henry-Stocker][a]
译者:[firmianay](https://github.com/firmianay)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

53
published/201708/20170824 Understanding OPNFV Starts Here.md
Normal file
@ -0,0 +1,53 @@
从这开始了解 OPNFV
============================================================

![OPNFV](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/network-transformation.png?itok=uNTYBeQb "OPNFV")

如果电信运营商或企业今天从头开始构建网络,那么他们可能用软件定义资源的方式构建,这与 Google 或 Facebook 的基础设施类似。这是网络功能虚拟化 (NFV) 的前提。

NFV 是一场划时代的颠覆,将彻底改变网络的建设和运营。而且,[OPNFV][3] 是一个领先的开源 NFV 项目,旨在加速这项技术的采用。

你是想要知道有哪些开源项目可能会帮助你进行 NFV 转换计划的电信运营商或者相关的企业员工么?还是要将你的产品和服务推向新的 NFV 世界的技术提供商?或者,也许是一名想使用开源项目来发展你事业的工程师、网络运维或商业领袖(例如 2013 年 Rackspace [提到][4],拥有 OpenStack 技能的网络工程师的平均工资比他们的同行高 13%)?如果这其中任何一个适用于你,那么 _理解 OPNFV_ 一书是你的完美资源。

![OPNFV Book](https://www.linux.com/sites/lcom/files/understanding-opnfv.jpeg)

*《理解 OPNFV》一书高屋建瓴地介绍了 OPNFV,以及它如何帮助你和你们的组织。*

本书(由 Mirantis 的 Nick Chase 和我撰写)在 11 个易于阅读的章节和超过 144 页中介绍了从 NFV 概述、NFV 转换、OPNFV 项目的各个方面到 VNF 入门的一系列主题。阅读本书后,你将对 OPNFV 是什么以及它如何帮助你或你们的组织有一个高屋建瓴的理解。这本书不是专门面向开发人员的,虽然有开发背景信息会很有用。如果你是开发人员,希望作为贡献者参与 OPNFV 项目,那么 [wiki.opnfv.org][5] 仍然是你的最佳资源。

在本博客系列中,我们会向你展示本书的部分内容:书中有些什么,以及你可能从中学到什么。

让我们从第一章开始。毫不奇怪,第 1 章是对 NFV 的介绍。它从业务驱动因素(需要差异化服务、成本压力和敏捷需求)、NFV 是什么,以及你可以从 NFV 获得什么好处的角度做了简要概述。

简而言之,NFV 可以在数据中心的计算节点上执行复杂的网络功能。在计算节点上执行的网络功能称为虚拟网络功能(VNF)。因此,VNF 可以作为网络运行,NFV 还会添加机制来确定如何将它们链接在一起,以提供对网络中流量的控制。

虽然大多数人认为它用在电信领域,但 NFV 涵盖了广泛的使用场景:从基于应用或流量类型的基于角色的访问控制(RBAC),到用于在需要的地方管理网络内容的内容分发网络(CDN),再到更典型的电信相关用例,如演进分组核心(EPC)和 IP 多媒体子系统(IMS)。

此外,一些主要收益包括增加收入、改善客户体验、减少运营支出(OPEX)、减少资本支出(CAPEX)和为新项目腾出资源。本节还提供了具体的 NFV 总体拥有成本(TCO)分析。这些话题的处理很简单,因为我们假设你有一些 NFV 背景。然而,如果你刚接触 NFV,也不要担心,介绍材料足以让你理解本书的其余部分。

本章最后总结了 NFV 的各项要求:安全性、性能、互操作性、易操作性以及某些具体要求,如服务保证和服务功能链。任何不满足这些要求的 NFV 架构或技术都难以真正成功。

阅读本章后,你将对为什么 NFV 非常重要、NFV 是什么,以及 NFV 成功的技术要求有一个很好的概念。我们将在今后的博客文章中浏览后面的章节。

这本书已被证明是行业活动上最受欢迎的赠品,中文版正在翻译之中!但是你现在可以[下载 PDF 格式的电子书][6],或者在亚马逊上购买[打印版本][7]。

(题图:[Creative Commons Zero][1] Pixabay)

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/opnfv/2017/8/understanding-opnfv-starts-here

作者:[AMAR KAPADIA][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/akapadia
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/network-transformationpng
[3]:https://www.opnfv.org/
[4]:https://blog.rackspace.com/solving-the-openstack-talent-gap
[5]:https://wiki.opnfv.org/
[6]:https://www.opnfv.org/resources/download-understanding-opnfv-ebook
[7]:https://www.amazon.com/dp/B071LQY724/ref=cm_sw_r_cp_ep_dp_pgFMzbM8YHJA9

@ -1,15 +1,15 @@
OpenStack 上的 OpenShift:更好地交付应用程序
============================================================

你有没有问过自己,我应该在哪里运行 OpenShift?答案是任何地方 - 它可以在裸机、虚拟机、私有云或公共云中很好地运行。但是,这里有一些为什么人们正迁移到围绕全栈和资源消耗自动化相关的私有云和公有云的原因。传统的操作系统一直是关于[硬件资源的展示和消耗][2] - 硬件提供资源,应用程序消耗它们,操作系统一直是交通警察。但传统的操作系统一直局限于单机^注1 。

那么,在原生云的世界里,现在意味着这个概念扩展到包括多个操作系统实例。这就是 OpenStack 和 OpenShift 所在。在原生云世界,虚拟机、存储卷和网段都成为动态配置的构建块。我们从这些构建块构建我们的应用程序。它们通常按小时或分钟付费,并在不再需要时被取消配置。但是,你需要将它们视为应用程序的动态配置能力。 OpenStack 在动态配置能力(展示)方面非常擅长,OpenShift 在动态配置应用程序(消费)方面做的很好,但是我们如何将它们结合在一起来提供一个动态的、高度可编程的多节点操作系统呢?

要理解这个,让我们来看看如果我们在传统的环境中安装 OpenShift 会发生什么 - 想像我们想要为开发者提供动态访问来创建新的应用程序,或者想象我们想要提供业务线,使其能够访问现有应用程序的新副本以满足合同义务。每个应用程序都需要访问持久存储。持久存储不是临时的,在传统的环境中,这通过提交一张工单实现。没关系,我们可以连到 OpenShift,每次需要存储时都会提交一张工单。存储管理员可以登录企业存储阵列并根据需要划分出卷,然后将其交回 OpenShift 以满足应用程序。但这将是一个非常慢的手动过程,而且你可能会遇到存储管理员辞职。

![](https://blog.openshift.com/wp-content/uploads/OpenShift-on-OpenStack-Delivering-Applications-Better-Together-Traditional-Storage-1024x615.png)

在原生云的世界里,我们应该将其视为一个策略驱动的自动化流程。存储管理员变得更具战略性,负责设置策略、配额和服务级别(白银、黄金等),而实际的配置则变得动态化。

![](https://blog.openshift.com/wp-content/uploads/OpenShift-on-OpenStack-Delivering-Applications-Better-Together-Cloud-Storage-1024x655.png)

@ -17,21 +17,21 @@ OpenStack 上的 OpenShift:更好地交付应用程序

![](https://blog.openshift.com/wp-content/uploads/OpenShift-on-OpenStack-Delivering-Applications-Better-Together-Persistent-Volume-Claims-Persistent-Volumes-Demo-1024x350.png)

下面的演示视频展示了动态存储配置如何与 Red Hat OpenStack 平台(Cinder 卷)以及 Red Hat OpenShift 容器平台配合使用,但动态配置并不限于存储。想象一下这样的环境:当 OpenShift 的一个实例需要更多容量时,节点会自动扩展。想象一下,在推送一个敏感的程序更改之前,先划分出一个网段,用来对 OpenShift 的特定实例进行负载测试。这些是你为何需要动态配置 IT 构建块的原因。OpenStack 实际上是以 API 驱动的方式实现的。
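
为了更具体一点,这里给出一个最小的示意(笔者补充,非原文内容;资源名称为虚构,并假设集群已经配置好由 Cinder 之类后端支撑的默认动态存储供应):开发者只需提交一个持久卷声明(PVC),存储卷就会被自动创建并绑定,而无需再提交工单:

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim        # 虚构的示例名称
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
$ oc get pvc demo-claim
```
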
[YOUTUBE VIDEO](https://youtu.be/PfWmAS9Fc7I)

OpenShift 和 OpenStack 一起更好地交付应用程序。OpenStack 动态提供资源,而 OpenShift 会动态地消耗它们。它们一起为你所有的容器和虚拟机需求提供灵活的原生云解决方案。

注1:高可用性集群和一些专门的操作系统在一定程度上弥合了这一差距,但在计算中通常是一个边缘情况。

--------------------------------------------------------------------------------

via: https://blog.openshift.com/openshift-on-openstack-delivering-applications-better-together/

作者:[SCOTT MCCARTY][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

108
published/20170821 Manage your finances with LibreOffice Calc.md
Normal file
@ -0,0 +1,108 @@
使用 LibreOffice Calc 管理你的财务
============================================================

> 你想知道你的钱花在哪里?这个精心设计的电子表格可以一目了然地回答这个问题。

![Get control of your finances with LibreOffice Calc](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic.png?itok=7nAi_Db_ "Get control of your finances with LibreOffice Calc")

如果你像大多数人一样,没有一个取之不尽的银行帐户,那么你可能需要仔细关注你的每月支出。

有很多方法可以做到这一点,但是最快最简单的方法是使用电子表格。许多人创建一个非常基本的电子表格来完成这项工作,它由两个长列组成,总计位于底部。这是可行的,但这有点傻。

我将通过使用 LibreOffice Calc 创建一个更便于细查的(我认为)以及更具视觉吸引力的个人消费电子表格。

你说不用 LibreOffice?没关系。你可以使用 [Gnumeric][7]、[Calligra Sheets][8] 或 [EtherCalc][9] 等电子表格工具来应用本文中的方法。

### 首先列出你的费用

先别急着打开 LibreOffice。坐下来用笔和纸,列出你的每月日常开支。花点时间,翻遍你的记录,记下所有的开销,无论它多么微小。不要担心具体花了多少钱,重点放在你把钱花在了哪里。

完成之后,将你的费用分组到最有意义的标题下。例如,将你的燃气、电力和水费放在“水电费”下。你也可能想为我们每个月都会遇到的意外费用设立一个名为“种种”的分组。

### 创建电子表格

启动 LibreOffice Calc 并创建一个空的电子表格。在电子表格的顶部留下三个空白行,之后我们会用到它们。

你把你的费用归类是有原因的:这些组将成为电子表格上的块。我们首先将最重要的花费组(例如“家庭”)放在电子表格的顶部。

在工作表顶部第四行的第一个单元格中输入该花费组的名称。将它放大(可以是 12 号字体)、加粗,使它显眼。

在该标题下方的行中,添加以下三列:

* 花费
* 日期
* 金额

在“花费”列下的单元格中输入该组内各项花费的名称。

接下来,选择“日期”标题下的单元格。单击 **Format** 菜单,然后选择 **Number Format > Date**。对“金额”标题下的单元格重复此操作,然后选择 **Number Format > Currency**。

你会看到这样的效果:

![A group of expenses](https://opensource.com/sites/default/files/u128651/spreadsheet-expense-block.png "A group of expenses")

这就是一组开支的样子。不要为每个花费组创建新块,而是复制你创建的内容并将其粘贴到第一个块旁边。我建议一行放三块,在它们之间留一个空列。

你会看到这样的效果:

![A row of expenses](https://opensource.com/sites/default/files/u128651/spreadsheet-expense-rows.png "A row of expenses")

对你所有的花费组重复这一操作。

### 总计所有

查看单项费用是一回事,但你也会想查看每组费用的小计和所有费用的总和。

我们首先总计每个费用组的金额。你可以让 LibreOffice Calc 自动做这些。高亮显示“金额”列底部的单元格,然后单击 “Formula” 工具栏上的 “Sum” 按钮。

![The Sum button](https://opensource.com/sites/default/files/u128651/spreadsheet-sum-button.png "The Sum button")

单击“金额”列中的第一个单元格,然后将光标拖动到该列中的最后一个单元格,再按下回车键。

![An expense block with a total](https://opensource.com/sites/default/files/u128651/spreadsheet-totaled-expenses.png "An expense block with a total")

现在让我们用你在顶部留下的两三行空白行做点事情:这里将放置你所有费用的总和。我建议把它放在那里,这样无论何时你打开文件,它都是可见的。

在表格左上角的其中一个单元格中,输入类似“月总计”的字样。然后,在它旁边的单元格中,输入 `=SUM()`。这是 LibreOffice Calc 的求和函数,它可以把电子表格中特定单元格的值加在一起。

不要手动输入要相加的单元格的名称,而是按住键盘上的 Ctrl 键,然后在电子表格上逐个单击你在每组费用中算出小计的单元格。
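
举个例子(单元格引用仅为示意,具体取决于你自己的布局):假设三个费用组的小计分别位于 C15、F15 和 I15,那么“月总计”旁边的公式最终看起来会像这样:

```
=SUM(C15,F15,I15)
```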

### 完成

现在你有了一张追踪一个月花费的表。只跟踪单个月花费的电子表格有点浪费,为什么不用它跟踪全年的每月支出呢?

右键单击电子表格底部的选项卡,然后选择 **Move or Copy Sheet**。在弹出的窗口中,单击 **-move to end position-**,然后按下回车键。一直重复到你有 12 张表 - 每月一张。以月份重命名每张表,然后使用像 _Monthly Expenses 2017.ods_ 这样的描述性名称保存电子表格。

现在设置完成了,你可以使用电子表格了。使用电子表格跟踪你的花费本身并不能让你的财务基础变得坚实,但它可以帮助你控制每个月的花费。

(题图: opensource.com)

--------------------------------------------------------------------------------

作者简介:

Scott Nesbitt - 我是一名长期使用自由/开源软件的用户,并为了乐趣和收益写各种各样的东西。我不会太严肃。你可以在网上这些地方找到我:Twitter、Mastodon、GitHub。

----------------

via: https://opensource.com/article/17/8/budget-libreoffice-calc

作者:[Scott Nesbitt][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/file/366811
[2]:https://opensource.com/file/366831
[3]:https://opensource.com/file/366821
[4]:https://opensource.com/file/366826
[5]:https://opensource.com/article/17/8/budget-libreoffice-calc?rate=C87fXAfGoIpA1OuF-Zx1nv-98UN9GgbFUz4tl_bKug4
[6]:https://opensource.com/user/14925/feed
[7]:http://www.gnumeric.org/
[8]:https://www.calligra.org/sheets/
[9]:https://ethercalc.net/
[10]:https://opensource.com/users/scottnesbitt
[11]:https://opensource.com/users/scottnesbitt
[12]:https://opensource.com/article/17/8/budget-libreoffice-calc#comments

@ -0,0 +1,110 @@
为什么开源应该是云原生环境的首选
============================================================

> 基于和 Linux 击败专有软件一样的原因,开源应该成为云原生环境的首选。

![Why open source should be the first choice for cloud-native environments](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn "Why open source should be the first choice for cloud-native environments")

让我们回溯到上世纪 90 年代,当时专有软件大行其道,而开源才刚开始进入它自己的时代。是什么导致了这种转变?更重要的是,今天当我们转向云原生环境时,我们能从中学到什么?

### 基础设施的历史经验

我将以一个高度武断的、开源的视角开始,来看看基础设施过去 30 年的历史。在上世纪 90 年代,Linux 只是大多数组织视野中一个微不足道的小光点而已——如果他们听说过它的话。那些早早采用它的公司很快就发现了 Linux 的好处,它主要是作为专有的 Unix 的廉价替代品,而当时部署服务器的标准方式是使用专有的 Unix,或者日渐增多地使用 Microsoft Windows NT。

这种模式的专有本性为更专有的软件提供了一个肥沃的生态系统。软件被装在盒子里面放在商店出售。甚至开源软件也参与了这种装盒游戏;你可以在货架上买到 Linux,而不是用你的互联网连接免费下载。去商店和从你的软件供应商那里只是你得到软件的不同方式而已。

![Ubuntu box packaging on a Best Buy shelf](https://opensource.com/sites/default/files/u128651/ubuntu_box.png "Ubuntu box packaging on a Best Buy shelf")

*Ubuntu 包装盒出现在百思买的货架上*

我认为,随着 LAMP 系列(Linux、Apache、MySQL 和 PHP / Perl / Python)的崛起,情况发生了变化。LAMP 系列非常成功。它是稳定的、可伸缩的和相对用户友好的。与此同时,我开始看到对专有解决方案的不满。一旦客户在 LAMP 系列中尝过了开源的甜头,他们就会改变他们对软件的期望,包括:

* 不愿被供应商绑架,
* 关注安全,
* 希望自己来修复 bug ,以及
* 认识到孤立开发的软件意味着创新被扼杀。

在技术方面,我们也看到了各种组织在如何使用软件上的巨大变化。忽然有一天,网站的宕机变成不可接受的了,这就带来了对扩展性和自动化的更多依赖。特别是在过去的十年里,我们看到了基础设施从传统的“宠物”模式到“群牛”模式的转变:在这种模式中,服务器可以被换下和替换,而不是一直运行和被指定。公司使用大量的数据,更注重数据留存以及数据处理并返回给用户的速度。

开源和开源社区,以及来自大公司的日益增多的投入,为我们改变如何使用软件提供了基础。系统管理员的岗位开始要求 Linux 技能和对开源技术和理念的熟悉。通过开源类似 Chef cookbook 和 Puppet 模块这样的东西,管理员可以分享他们的配置模式。我们不再单独配置和调优 MySQL;我们创建了一个掌控这些基础部分的系统,从而可以专注于更有趣的、可以给我们的雇主带来更高价值的工程工作。

开源现在无处不在,围绕它的模式也无处不在。曾经仇视这个想法的公司不仅通过协同项目与外界拥抱开源,而且进一步地,还发布了他们自己的开源软件项目并且围绕它们构建了社区。

![A "Microsoft Linux" USB stick](https://opensource.com/sites/default/files/u128651/microsoft_linux_stick.png "A "Microsoft Linux" USB stick")

### 转向云端

今天,我们生活在一个 DevOps 和云端的世界里。我们收获了开源运动带来的创新成果。在公司内部采用开源软件开发实践的情况,即 Tim O'Reilly 所称的 “[内部开源][11]”,有了明显增长。我们为云平台共享部署配置。像 Terraform 这样的工具甚至允许我们编写和分享我们如何部署特定的平台。

但这些平台本身呢?

> “大多数人想都不想就使用了云……许多用户将钱投入到根本不属于他们的基础设施中,而对放弃他们的数据和信息毫无顾虑。”
> —Edward Snowden, OpenStack Summit, May 9, 2017

现在,是时候对这种本能地迁移或扩展到云端的做法多加考虑了。

就像 Snowden 强调的那样,现在我们正面临着对我们的用户和客户的数据的失控风险。抛开安全不谈,如果我们回顾一下我们转向开源的原因,个中原因还包括对被厂商绑架的担忧、创新难以推动、甚至修复 bug 的考虑。

在把你自己和/或你的公司锁定在一个专有平台之前,考虑以下问题:

* 我使用的服务是遵循开放标准,还是会被厂商绑架?
* 如果服务供应商破产或被竞争对手收购,什么是我可以依赖的?
* 关于停机、安全等问题,供应商与其客户的沟通是否有一个明确而真诚的历史记录?
* 供应商是否响应 bug 和特性请求,即使那是来自小客户?
* 供应商是否会在我不知情的情况下使用我们的数据(或者更糟,以我们的客户协议所不允许的方式使用)?
* 供应商是否有一个计划来处理长期的、不断上升的增长成本,特别是如果最初的成本很低呢?

你可以在逐条考虑这些问题之后,仍然决定使用专有的解决方案。这很好,很多公司一直都在这么做。然而,如果你像我一样,宁愿找到一个更开放的解决方案而仍然受益于云,你确实有别的选择。

### 基于私有云

当您寻找私有云解决方案时,您的首选是开源:投资一个核心运行在开源软件上的云提供商。[OpenStack][12] 是行业领袖,在其 7 年的历史中,有 100 多个参与组织和成千上万的贡献者(包括我)。OpenStack 项目已经证明,结合多个基于 OpenStack 的云不仅是可行的,而且相对简单。各云公司的 API 是相似的,所以您不必局限于特定的 OpenStack 供应商。作为一个开源项目,您仍然可以影响该基础设施的特性、bug 请求和发展方向。

第二种选择是继续在基础层面上使用私有云,但选用一个开源的容器编排系统。无论您选择 [DC/OS][13](基于 [Apache Mesos][14])、[Kubernetes][15] 还是 [Docker Swarm 模式][16],这些平台都允许您将私有云系统提供的虚拟机当作独立的 Linux 机器,并在此之上安装您的平台。您所需要的只是 Linux 而已,不会立即被锁定在特定云的工具或平台上。可以根据具体情况来决定是否使用特定的专属后端,但如果你这样做,就应该着眼于未来。

有了这两种选择,你也可以选择完全离开云服务商。您可以部署自己的 OpenStack 云,或者将容器平台移回您自己的数据中心。

### 做一个登月计划

最后,我想谈一谈开源项目基础设施。今年 3 月,在召开的[南加州 Linux 展会][17]上,多个开源项目的参与者讨论了为他们的项目运行开源基础设施的经验。(更多内容,请阅读我[关于该会议的总结][18]。)我认为这些项目正在做的这个工作是基础设施开源的最后一步。除了我们现在正在做的基本分享之外,我相信公司和组织们可以在不放弃与竞争对手相区分的“独门秘方”的情况下,进一步充分利用他们的基础设施开源。

开源了自己的基础设施的开源项目,已经证明了允许多个公司和组织向他们的基础设施提交有见地的 bug 报告,甚至是补丁和特性的价值。突然之间,你可以邀请兼职的贡献者。你的客户可以通过了解你的基础设施,“深入引擎盖之下”,从而获得信心。

想要更多的证据吗?访问[开源基础设施][19]的网站,了解开源了自己基础设施的项目(以及他们已经发布的大量基础设施)。

可以在 8 月 26 日在费城举办的 FOSSCON 大会上 Elizabeth K. Joseph 的演讲“[基础架构开源][4]”上了解更多。

(题图:[Jason Baker][6]. [CC BY-SA 4.0][7]. Source: [Cloud][8], [Globe][9]. Both [CC0][10].)

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/8/open-sourcing-infrastructure

作者:[Elizabeth K. Joseph][a]
译者:[wenzhiyi](https://github.com/wenzhiyi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/pleia2
[1]:https://opensource.com/file/366596
[2]:https://opensource.com/file/366591
[3]:https://opensource.com/article/17/8/open-sourcing-infrastructure?rate=PdT-huv5y5HFZVMHOoRoo_qd95RG70y4DARqU5pzgkU
[4]:https://fosscon.us/node/12637
[5]:https://opensource.com/user/25923/feed
[6]:https://opensource.com/users/jason-baker
[7]:https://creativecommons.org/licenses/by-sa/4.0/
[8]:https://pixabay.com/en/clouds-sky-cloud-dark-clouds-1473311/
[9]:https://pixabay.com/en/globe-planet-earth-world-1015311/
[10]:https://creativecommons.org/publicdomain/zero/1.0/
[11]:https://opensource.com/life/16/11/create-internal-innersource-community
[12]:https://www.openstack.org/
[13]:https://dcos.io/
[14]:http://mesos.apache.org/
[15]:https://kubernetes.io/
[16]:https://docs.docker.com/engine/swarm/
[17]:https://www.socallinuxexpo.org/
[18]:https://opensource.com/article/17/3/growth-open-source-project-infrastructures
[19]:https://opensourceinfra.org/
[20]:https://opensource.com/users/pleia2
[21]:https://opensource.com/users/pleia2

@ -0,0 +1,153 @@
Linux 1.0 之旅:回顾这一切的开始
============================================================

> 通过安装 SLS 1.05,展示 Linux 内核在这 26 年间走过了多远。

![Happy anniversary, Linux: A look back at where it all began](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_tux.png?itok=GoaC0Car "Happy anniversary, Linux: A look back at where it all began")

我第一次安装 Linux 是在 1993 年。那时我跑的是 MS-DOS,但我真的很喜欢学校机房电脑上的 Unix 系统,我的大学本科时光就是在那里度过的。当我听说了 Linux,一个可以在我家的 386 电脑上运行的免费版本的 Unix 的时候,我立刻就想要试试。我的第一个 Linux 发行版是 [Softlanding Linux System][27](SLS)1.03,使用带有 11 级补丁的 0.99 alpha 版本的 Linux 内核。它要求高达 2 MB 的内存,如果你想要编译项目则需要 4 MB,运行 X windows 需要 8 MB。

我认为 Linux 相较于 MS-DOS 世界是一个巨大的进步。尽管 Linux 缺乏 MS-DOS 上那样广泛的应用和游戏,但我发现 Linux 带给我的是巨大的灵活性。不像 MS-DOS,现在我可以进行真正的多任务,同时运行不止一个程序。并且 Linux 提供了丰富的工具,包括一个 C 语言编译器,让我可以构建自己的项目。

一年后,我升级到了 SLS 1.05,它支持全新的 Linux 内核 1.0。更重要的是,Linux 1.0 引入了内核模块。通过内核模块,你不再需要为支持新硬件而重新编译整个内核;取而代之的是,只需要从 Linux 内核自带的 63 个模块里加载一个就行。在 SLS 1.05 的发行自述文件中包含这些关于模块的注释:

> 内核的模块化旨在切实减少并最终消除重新编译内核的要求,无论是为了变更、修改设备驱动,还是为了动态访问不常用的驱动。也许更为重要的是,各个工作小组的工作不再影响内核的正确开发。事实上,这让以二进制发布官方内核成为了可能。

在 8 月 25 日,Linux 内核将迎来它的第 26 周年(LCTT 译注:已经过去了 =.= )。为了庆祝,我重新安装了 SLS 1.05 来提醒自己 Linux 1.0 内核是什么样子,并看看 Linux 自二十世纪 90 年代以来走了多远。和我一起踏上 Linux 的怀旧之旅吧!

### 安装

SLS 是第一个真正的“发行版”,因为它包含一个安装程序。尽管安装过程并不像现代发行版那样顺畅:不能从 CD-ROM 启动安装,我需要从安装软盘启动我的系统,然后从 **login** 提示中运行安装程序。

![Installing SLS 1.05 from the login prompt](https://opensource.com/sites/default/files/u128651/install1.png "Installing SLS 1.05 from the login prompt")

在 SLS 1.05 中引入的一个漂亮的功能是支持彩色的文本模式安装器。当我选择彩色模式时,安装器切换到一个带有黑色文字的亮蓝色背景,不再是我们祖祖辈辈们使用的原始的普通黑白文本。

![Color-enabled text-mode installer in SLS 1.05](https://opensource.com/sites/default/files/u128651/install2.png "Color-enabled text-mode installer in SLS 1.05")

SLS 安装器是个简单的东西,文本从屏幕底部滚动而上,显示其所做的工作。通过响应一些简单的提示,我能够创建一个 Linux 分区,挂载上 ext2 文件系统,并安装 Linux。安装包含了 X windows 和开发工具的 SLS 1.05 需要大约 85 MB 的磁盘空间。依照今天的标准这听起来可能不算多,但在 Linux 1.0 出来的时候,120 MB 的硬盘才是主流配置。

![Creating a partition for Linux, putting an ext2 filesystem on it, and installing Linux](https://opensource.com/sites/default/files/u128651/install10.png "Creating a partition for Linux, putting an ext2 filesystem on it, and installing Linux")

![First boot](https://opensource.com/sites/default/files/u128651/firstboot1.png "First boot")

### 系统级别

当我第一次启动进入 Linux 时,我想起了这个早期版本 Linux 系统的一些特点。首先,Linux 占用的空间不大。在启动系统之后运行一些程序来检查的时候,Linux 占用了不到 4 MB 的内存。在一个拥有 16 MB 内存的系统中,这就意味着有很多内存省下来用于运行程序。

![Checking out the filesystem and available disk space](https://opensource.com/sites/default/files/u128651/uname-df.png "Checking out the filesystem and available disk space")

熟悉的 `/proc` 元文件系统在 Linux 1.0 中就存在了,尽管和我们今天在现代系统上看到的相比,它能提供的信息并不多。在 Linux 1.0 中,`/proc` 包含一些用来探测 `meminfo` 和 `stat` 之类基本系统状态的接口。

![The familiar /proc meta filesystem](https://opensource.com/sites/default/files/u128651/proc.png "The familiar /proc meta filesystem")
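
如果你想在现代系统上体会这一点,可以直接查看这两个接口(命令为笔者补充的示例;在 Linux 1.0 和今天的内核上,这些文件内容的详略程度差别很大):

```
$ cat /proc/meminfo
$ cat /proc/stat
```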

在这个系统上的 `/etc` 文件目录非常简单。值得一提的是,SLS 1.05 借用了来自 [BSD Unix][28] 的 **rc** 脚本来控制系统启动。初始化是通过 **rc** 脚本进行的,由 `rc.local` 文件来定义本地系统的调整。后来,许多 Linux 发行版采用了来自 [Unix System V][29] 的很相似的 **init** 脚本,再后来则是 [systemd][30] 初始化系统。

![The /etc directory](https://opensource.com/sites/default/files/u128651/etc.png "The /etc directory")

### 你能做些什么

随着我的系统启动并运行起来,接下来就可以使用它了。那么,在这样的早期 Linux 系统上你能做些什么?

让我们从基本的文件管理开始。每次在你登录的时候,SLS 会让你使用 Softlanding 菜单界面(MESH),这是一个文件管理程序,现代的用户们可能觉得它和 [Midnight Commander][31] 很相似。而二十世纪 90 年代的用户们可能会拿 MESH 与更为接近的 [Norton Commander][32] 相比,后者可以说是 MS-DOS 上最流行的第三方文件管理程序。

![The Softlanding menu shell (MESH)](https://opensource.com/sites/default/files/u128651/mesh.png "The Softlanding menu shell (MESH)")

除了 MESH 之外,SLS 1.05 中还包含了少量的全屏应用程序。你可以找到熟悉的用户工具,包括 Elm 邮件阅读器、GNU Emacs 可编程编辑器,以及古老的 Vim 编辑器。

![Elm mail reader](https://opensource.com/sites/default/files/u128651/elm.png "Elm mail reader")

![GNU Emacs programmable editor](https://opensource.com/sites/default/files/u128651/emacs19.png "GNU Emacs programmable editor")

SLS 1.05 甚至包含了一个可以让你在终端里玩的俄罗斯方块版本。

![Tetris for terminals](https://opensource.com/sites/default/files/u128651/tetris.png "Tetris for terminals")

在二十世纪 90 年代,多数住宅的网络接入是通过拨号连接的,所以 SLS 1.05 包含了 Minicom 调制解调器拨号程序。Minicom 提供一个与调制解调器的直接连接,并需要用户通过贺氏调制解调器的 **AT** 命令来完成一些像是拨号或挂电话这样的基础功能。Minicom 同样支持宏和其他简单功能,使连接你的本地调制解调器池更容易。

![Minicom modem-dialer application](https://opensource.com/sites/default/files/u128651/minicom.png "Minicom modem-dialer application")

但如果你想要写一篇文档时怎么办?SLS 1.05 的出现要比 LibreOffice 或者 OpenOffice 早很长时间。在二十世纪 90 年代,Linux 还没有这些应用。相反,如果你想要使用一个文字处理器,可能需要引导你的系统进入 MS-DOS,然后运行你喜欢的文字处理器程序,如 WordPerfect 或者共享软件 GalaxyWrite。

但是所有的 Unix 系统都包含一套简单的文本格式化程序,叫做 nroff 和 troff。在 Linux 系统中,它们被合并成 GNU groff 包,而 SLS 1.05 包含了 groff 的一个版本。我在 SLS 1.05 上的一项测试就是用 nroff 生成一个简单的文本文档。

![A simple nroff text document](https://opensource.com/sites/default/files/u128651/paper-me-emacs.png "A simple nroff text document")

![nroff text document output](https://opensource.com/sites/default/files/u128651/paper-me-out.png "nroff text document output")
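
如果你想自己试试,下面是一个最小的示意(文件名和内容均为笔者虚构;从截图的文件名 paper-me 推测,作者用的可能是 nroff 的 -me 宏包):

```
$ cat paper.me
.ce
A Simple Document
.sp
.pp
This paragraph was formatted by nroff.
$ nroff -me paper.me | more
```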

### 运行 X windows

安装 X windows 并不特别容易,正如 SLS 安装文件所暗示的那样:

> 在你的 PC 上安装并运行 X windows 可能会是一次发人深省的体验,主要是因为 PC 的显示卡类型太多。Linux X11 仅支持 VGA 类型的显示卡,但在众多类型的 VGA 中仅有某些个别类型是完全支持的。SLS 带有两种 X windows 服务器。全彩的 XFree86,支持部分或全部的 ET3000、ET400、PVGA1、GVGA、Trident、S3、8514、Accelerated cards、ATI plus 等。
>
> 另一个服务器 XF86_Mono,能够工作在几乎所有的 VGA 卡上,但只提供单色模式。因此,相比于彩色服务器,它会占用更少的内存并拥有更快的速度。当然就是看起来不怎么漂亮。
>
> X windows 的配置信息都堆放在目录 “/usr/X386/lib/X11/”。需要注意的是,“Xconfig” 文件为监视器和显示卡定义了时序。默认情况下,X windows 设置使用彩色服务器,如果彩色服务器出现问题,你可以切换到单色服务器 x386mono,因为它已经支持各种标准的 VGA。本质上,这只是将 /usr/X386/bin/X 链接到它。
>
> 只需要编辑 Xconfig 来设置鼠标驱动类型和时序,然后键入 “startx” 即可。

这些听起来令人困惑,但它就是这样。手工配置 X windows 真的可以是一次发人深省的体验。幸好,SLS 1.05 包含了 syssetup 程序来帮你确定系统组件的种类,包括 X windows 的显示设置。在一些提示过后,经过一些实验和调整,最终我成功启动了 X windows!

![The syssetup program](https://opensource.com/sites/default/files/u128651/syssetup.png "The syssetup program")

但这是来自于 1994 年的 X windows,它还没有桌面的概念。我可以从 FVWM(一个虚拟窗口管理器)或 TWM(选项卡式的窗口管理器)中选择。TWM 设置起来很直观,提供了一个功能简单的图形环境。

![TWM](https://opensource.com/sites/default/files/u128651/twm_720.png "TWM")

### 关机

我已经在我的 Linux 寻根之旅中沉浸许久,是时候最终回到我的现代桌面上了。最初我跑 Linux 用的是一台仅有 8 MB 内存和一个 120 MB 硬盘驱动器的 32 位 386 电脑,而我现在的系统已经强大得多了。拥有双核 64 位 Intel Core i5 处理器、4 GB 内存和一个 128 GB 的固态硬盘,我可以在我的运行着 Linux 内核 4.11.11 的系统上做更多事情。那么,在我的 SLS 1.05 的实验结束之后,是时候离开了。

![Shutting down](https://opensource.com/sites/default/files/u128651/shutdown-h.png "Shutting down")

再见,Linux 1.0。很高兴看到你茁壮成长。

(题图:图片来源:[litlnemo][25],由 Opensource.com 修改。[CC BY-SA 2.0][26])

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/8/linux-anniversary

作者:[Jim Hall][a]
译者:[softpaopao](https://github.com/softpaopao)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jim-hall
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/file/365166
[7]:https://opensource.com/file/365171
[8]:https://opensource.com/file/365176
[9]:https://opensource.com/file/365161
[10]:https://opensource.com/file/365221
[11]:https://opensource.com/file/365196
[12]:https://opensource.com/file/365156
[13]:https://opensource.com/file/365181
[14]:https://opensource.com/file/365146
[15]:https://opensource.com/file/365151
[16]:https://opensource.com/file/365211
[17]:https://opensource.com/file/365186
[18]:https://opensource.com/file/365191
[19]:https://opensource.com/file/365226
[20]:https://opensource.com/file/365206
[21]:https://opensource.com/file/365236
[22]:https://opensource.com/file/365201
[23]:https://opensource.com/article/17/8/linux-anniversary?rate=XujKSFS7GfDmxcV7Jf_HUK_MdrW15Po336fO3G8s1m0
[24]:https://opensource.com/user/126046/feed
[25]:https://www.flickr.com/photos/litlnemo/19777182/
[26]:https://creativecommons.org/licenses/by-sa/2.0/
[27]:https://en.wikipedia.org/wiki/Softlanding_Linux_System
[28]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[29]:https://en.wikipedia.org/wiki/UNIX_System_V
[30]:https://en.wikipedia.org/wiki/Systemd
[31]:https://midnight-commander.org/
[32]:https://en.wikipedia.org/wiki/Norton_Commander
[33]:https://opensource.com/users/jim-hall
[34]:https://opensource.com/users/jim-hall
[35]:https://opensource.com/article/17/8/linux-anniversary#comments

@ -1,95 +0,0 @@
How to use pull requests to improve your code reviews
============================================================

Spend more time building and less time fixing with GitHub Pull Requests for proper code review.

![Measure](https://d3tdunqjn7n0wj.cloudfront.net/360x240/measure-106354_1920-a7f65d82a54323773f847cf572e640a4.jpg)

> Take a look at Brent and Peter’s book, [ _Introducing GitHub_ ][5], for more on creating projects, starting pull requests, and getting an overview of your team’s software development process.

If you don’t write code every day, you may not know some of the problems that software developers face on a daily basis:

* Security vulnerabilities in the code
* Code that causes your application to crash
* Code that can be referred to as “technical debt” and needs to be re-written later
* Code that has already been written somewhere that you didn’t know about

Code review helps improve the software we write by allowing other people and/or tools to look it over for us. This review can happen with automated code analysis or test coverage tools — two important pieces of the software development process that can save hours of manual work — or peer review. Peer review is a process where developers review each other's work. When it comes to developing software, speed and urgency are two components that often result in some of the previously mentioned problems. If you don’t release soon enough, your competitor may come out with a new feature first. If you don’t release often enough, your users may doubt whether or not you still care about improvements to your application.

### Weighing the time trade-off: code review vs. bug fixing

If someone is able to bring together multiple types of code review in a way that has minimal friction, then the quality of the software written over time will be improved. It would be naive to think that the introduction of new tools or processes would not at first introduce some amount of delay. But what is more expensive: time to fix bugs in production, or improving the software before it makes it into production? Even if new tools introduce some lag time before a new feature can be released and appreciated by customers, that lag time will shorten as the software developers improve their own skills, and the software release cycles will return to previous levels while bugs decrease.

One of the keys for achieving this goal of proactively improving code quality with code review is using a platform that is flexible enough to allow software developers to quickly write code, plug in the tools they are familiar with, and do peer review of each others’ code. [GitHub][9] is a great example of such a platform. However, putting your code on GitHub doesn’t just magically make code review happen; you have to open a pull request to start down this journey.

### Pull requests: a living discussion about code

[Pull requests][10] are a tool on GitHub that allows software developers to discuss and propose changes to the main codebase of a project that later can be deployed for all users to see. They were created back in February of 2008 for the purpose of suggesting a change to someone’s work before it would be accepted (merged) and later deployed to production for end-users to see that change.

Pull requests started out as a loose way to offer your change to someone’s project, but they have evolved into:

* A living discussion about the code you want merged
* Added functionality of increasing the visibility of what changed
* Integration of your favorite tools
* Explicit pull request reviews that can be required as part of a protected branch workflow

### Considering code: URLs are forever

Looking at the first two bullet points above, pull requests foster an ongoing code discussion that makes code changes very visible, as well as making it easy to pick up where you left off on your review. For both new and experienced developers, being able to refer back to these previous discussions about why a feature was developed the way it was, or being linked to another conversation about a related feature, is priceless. Context can be so important when coordinating features across multiple projects, and keeping everyone in the loop as close as possible to the code is great too. If those features are still being developed, it’s important to be able to just see what’s changed since you last reviewed. After all, it’s far easier to [review a small change than a large one][11], but that’s not always possible with large features. So, it’s important to be able to pick up where you last reviewed and only view the changes since then.

### Integrating tools: software developers are opinionated

Considering the third point above, GitHub’s pull requests have a lot of functionality, but developers will always have a preference for additional tools. Code quality is a whole realm of code review that involves the other component of code reviews that isn’t necessarily human. Detecting code that’s “inefficient” or slow, a potential security vulnerability, or just not up to company standards is a task best left to automated tools. Tools like [SonarQube][12] and [Code Climate][13] can analyse your code, while tools like [Codecov][14] and [Coveralls][15] can tell you if the new code you just wrote is not well tested. The wonder of these tools is that they can plug into GitHub and report their findings right back into the pull request! This means the conversation not only has people reviewing the code, but the tools are reporting there too. Everyone can stay in the loop of exactly how a feature is developing.

Lastly, depending on the preference of your team, you can make the tools and the peer review required by leveraging the required status feature of the [protected branch workflow][16].

Whether you are just getting started on your software development journey, a business stakeholder who wants to know how a project is doing, or a project manager who wants to ensure the timeliness and quality of a project, getting involved in the pull request by setting up an approval workflow and thinking about integration with additional tools to ensure quality is important at any level of software development.

Whether it’s for your personal website, your company’s online store, or the latest combine to harvest this year’s corn with maximum yield, writing good software involves having good code review. Having good code review involves the right tools and platform. To learn more about GitHub and the software development process, take a look at the O’Reilly book, [ _Introducing GitHub_ ][17], where you can understand creating projects, starting pull requests, and getting an overview of your team’s software development process.

--------------------------------------------------------------------------------

作者简介:

**Brent Beer**

Brent Beer has used Git and GitHub for over 5 years through university classes, contributions to open source projects, and professionally as a web developer. While working as a trainer for GitHub, he also became a published author of “Introducing GitHub” for O’Reilly. He now works as a solutions engineer for GitHub in Amsterdam to help bring Git and GitHub to developers across the world.

**Peter Bell**

Peter Bell is the founder and CTO of Ronin Labs. Training is broken - we're fixing it through technology enhanced training! He is an experienced entrepreneur, technologist, agile coach and CTO specializing in EdTech projects. He wrote "Introducing GitHub" for O'Reilly, created the "Mastering GitHub" course for code school and "Git and GitHub LiveLessons" for Pearson. He has presented regularly at national and international conferences on ruby, nodejs, NoSQL (especially MongoDB and neo4j), cloud computing, software craftsmanship, java, groovy, j...

-------------

via: https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311

作者:[Brent Beer][a],[Peter Bell][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[b]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[1]:https://pixabay.com/en/measure-measures-rule-metro-106354/
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[4]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[5]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
[9]:https://github.com/about
[10]:https://help.github.com/articles/about-pull-requests/
[11]:https://blog.skyliner.io/ship-small-diffs-741308bec0d1
[12]:https://github.com/integrations/sonarqube
[13]:https://github.com/integrations/code-climate
[14]:https://github.com/integrations/codecov
[15]:https://github.com/integrations/coveralls
[16]:https://help.github.com/articles/about-protected-branches/
[17]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower

@ -1,166 +0,0 @@
penghuster apply for it

Cleaning Up Your Linux Startup Process
============================================================

![Linux cleanup](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner-cleanup-startup.png?itok=dCcKwdoP "Clean up your startup process")

Learn how to clean up your Linux startup process. [Used with permission][1]

The average general-purpose Linux distribution launches all kinds of stuff at startup, including a lot of services that don't need to be running. Bluetooth, Avahi, ModemManager, ppp-dns… What are these things, and who needs them?

Systemd provides a lot of good tools for seeing what happens during your system startup, and controlling what starts at boot. In this article, I’ll show how to turn off startup cruft on Systemd distributions.

### View Boot Services

In the olden days, you could easily see which services were set to launch at boot by looking in /etc/init.d. Systemd does things differently. You can use the following incantation to list enabled boot services:

```
systemctl list-unit-files --type=service | grep enabled
accounts-daemon.service enabled
anacron-resume.service enabled
anacron.service enabled
bluetooth.service enabled
brltty.service enabled
[...]
```

And, there near the top is my personal nemesis: Bluetooth. I don't use it on my PC, and I don't need it running. The following commands stop it and then disable it from starting at boot:

```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
```

You can confirm by checking the status:

```
$ systemctl status bluetooth.service
bluetooth.service - Bluetooth service
Loaded: loaded (/lib/systemd/system/bluetooth.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:bluetoothd(8)
```

A disabled service can be started by another service. If you really want it dead, without uninstalling it, then you can mask it to prevent it from starting under any circumstances:

```
$ sudo systemctl mask bluetooth.service
Created symlink from /etc/systemd/system/bluetooth.service to /dev/null.
```

Once you are satisfied that disabling a service has no bad side effects, you may elect to uninstall it.

You can generate a list of all services:

```
$ systemctl list-unit-files --type=service
UNIT FILE STATE
accounts-daemon.service enabled
acpid.service disabled
alsa-restore.service static
alsa-utils.service masked
```

You cannot enable or disable static services, because these are dependencies of other systemd services and are not meant to run by themselves.

### Can I Get Rid of These Services?

How do you know what you need, and what you can safely disable? As always, that depends on your particular setup.

Here is a sampling of services and what they are for. Many services are distro-specific, so have your distribution documentation handy (i.e., Google and Stack Overflow).

* **accounts-daemon.service** is a potential security risk. It is part of AccountsService, which allows programs to get and manipulate user account information. I can't think of a good reason to allow this kind of behind-my-back operations, so I mask it.
* **avahi-daemon.service** is supposed to provide zero-configuration network discovery, and make it super-easy to find printers and other hosts on your network. I always disable it and don't miss it.
* **brltty.service** provides Braille device support, for example, Braille displays.
* **debug-shell.service** opens a giant security hole and should never be enabled except when you are using it. This provides a password-less root shell to help with debugging systemd problems.
* **ModemManager.service** is a DBus-activated daemon that controls mobile broadband (2G/3G/4G) interfaces. If you don't have a mobile broadband interface -- built-in, paired with a mobile phone via Bluetooth, or USB dongle -- you don't need this.
* **pppd-dns.service** is a relic of the dim past. If you use dial-up Internet, keep it. Otherwise, you don't need it.
* **rtkit-daemon.service** sounds scary, like rootkit, but you need it because it is the real-time kernel scheduler.
* **whoopsie.service** is the Ubuntu error reporting service. It collects crash reports and sends them to [https://daisy.ubuntu.com][2]. You may safely disable it, or you can remove it permanently by uninstalling apport.
* **wpa_supplicant.service** is necessary only if you use a Wi-Fi network interface.

### What Happens During Bootup

Systemd has some commands to help debug boot issues. This command replays all of your boot messages:

```
$ journalctl -b

-- Logs begin at Mon 2016-05-09 06:18:11 PDT,
end at Mon 2016-05-09 10:17:01 PDT. --
May 16 06:18:11 studio systemd-journal[289]:
Runtime journal (/run/log/journal/) is currently using 8.0M.
Maximum allowed usage is set to 157.2M.
Leaving at least 235.9M free (of currently available 1.5G of space).
Enforced usage limit is thus 157.2M.
[...]
```

You can review previous boots with **journalctl -b -1**, which displays the previous startup; **journalctl -b -2** shows two boots ago, and so on.

This spits out a giant amount of output, which is interesting but maybe not all that useful. It has several filters to help you find what you want. Let's look at PID 1, which is the parent process for all other processes:

```
$ journalctl _PID=1

May 08 06:18:17 studio systemd[1]: Starting LSB: Raise network interfaces....
May 08 06:18:17 studio systemd[1]: Started LSB: Raise network interfaces..
May 08 06:18:17 studio systemd[1]: Reached target System Initialization.
May 08 06:18:17 studio systemd[1]: Started CUPS Scheduler.
May 08 06:18:17 studio systemd[1]: Listening on D-Bus System Message Bus Socket
May 08 06:18:17 studio systemd[1]: Listening on CUPS Scheduler.
[...]
```

This shows what was started -- or attempted to start.

One of the most useful tools is **systemd-analyze blame**, which shows which services are taking the longest to start up.

```
$ systemd-analyze blame
8.708s gpu-manager.service
8.002s NetworkManager-wait-online.service
5.791s mysql.service
2.975s dev-sda3.device
1.810s alsa-restore.service
1.806s systemd-logind.service
1.803s irqbalance.service
1.800s lm-sensors.service
1.800s grub-common.service
```

This particular example doesn't show anything unusual, but if there is a startup bottleneck, this command will find it.
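
A related command worth knowing (my addition, not part of the original article) is **systemd-analyze critical-chain**, which prints the chain of units on the critical path of the boot, along with when each unit became active:

```
$ systemd-analyze critical-chain
```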

You may also find these previous Systemd how-tos useful:

* [Understanding and Using Systemd][3]
* [Intro to Systemd Runlevels and Service Management Commands][4]
* [Here We Go Again, Another Linux Init: Intro to systemd][5]

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/cleaning-your-linux-startup-process

作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://daisy.ubuntu.com/
[3]:https://www.linux.com/learn/understanding-and-using-systemd
[4]:https://www.linux.com/learn/intro-systemd-runlevels-and-service-management-commands
[5]:https://www.linux.com/learn/here-we-go-again-another-linux-init-intro-systemd
[6]:https://www.linux.com/files/images/banner-cleanup-startuppng

@ -1,214 +0,0 @@
translating by firmianay

Here are all the Git commands I used last week, and what they do.
============================================================

![](https://cdn-images-1.medium.com/max/1600/1*frC0VgM2etsVCJzJrNMZTQ.png)Image credit: [GitHub Octodex][6]

Like most newbies, I started out searching StackOverflow for Git commands, then copy-pasting answers, without really understanding what they did.

![](https://cdn-images-1.medium.com/max/1600/1*0o9GZUzXiNnI4poEvxvy8g.png)Image credit: [XKCD][7]

I remember thinking, “Wouldn’t it be nice if there were a list of the most common Git commands along with an explanation as to why they are useful?”

Well, here I am years later to compile such a list, and lay out some best practices that even intermediate-advanced developers should find useful.

To keep things practical, I’m basing this list off of the actual Git commands I used over the past week.

Almost every developer uses Git, and most likely GitHub. But the average developer probably only uses these three commands 99% of the time:

```
git add --all
git commit -am "<message>"
git push origin master
```

That’s all well and good when you’re working on a one-person team, a hackathon, or a throw-away app, but when stability and maintenance start to become a priority, cleaning up commits, sticking to a branching strategy, and writing coherent commit messages become important.

I’ll start with the list of commonly used commands to make it easier for newbies to understand what is possible with Git, then move into the more advanced functionality and best practices.

#### Regularly used commands

To initialize Git in a repository (repo), you just need to type the following command. If you don’t initialize Git, you cannot run any other Git commands within that repo.

```
git init
```

If you’re using GitHub and you’re pushing code to a GitHub repo that’s stored online, you’re using a remote repo. The default name (also known as an alias) for that remote repo is origin. If you’ve copied a project from Github, it already has an origin. You can view that origin with the command git remote -v, which will list the URL of the remote repo.

If you initialized your own Git repo and want to associate it with a GitHub repo, you’ll have to create one on GitHub, copy the URL provided, and use the command git remote add origin <URL>, with the URL provided by GitHub replacing “<URL>”. From there, you can add, commit, and push to your remote repo.

The last one is used when you need to change the remote repository. Let’s say you copied a repo from someone else and want to change the remote repository from the original owner’s to your own GitHub account. Follow the same process as git remote add origin, except use set-url instead to change the remote repo.

```
git remote -v
git remote add origin <url>
git remote set-url origin <url>
```

The most common way to copy a repo is to use git clone, followed by the URL of the repo.

Keep in mind that the remote repository will be linked to the account from which you cloned the repo. So if you cloned a repo that belongs to someone else, you will not be able to push to GitHub until you change the origin using the commands above.

```
git clone <url>
```

You’ll quickly find yourself using branches. If you don’t understand what branches are, there are other tutorials that are much more in-depth, and you should read those before proceeding ([here’s one][8]).

The command git branch lists all branches on your local machine. If you want to create a new branch, you can use git branch <name>, with <name> representing the name of the branch, such as “master”.

The git checkout <name> command switches to an existing branch. You can also use the git checkout -b <name> command to create a new branch and immediately switch to it. Most people use this instead of separate branch and checkout commands.

```
git branch
git branch <name>
git checkout <name>
git checkout -b <name>
```

If you’ve made a bunch of changes to a branch, let’s call it “develop”, and you want to merge that branch back into your master branch, you use the git merge <branch> command. You’ll want to checkout the master branch, then run git merge develop to merge develop into the master branch.

```
git merge <branch>
```

If you’re working with multiple people, you’ll find yourself in a position where a repo was updated on GitHub, but you don’t have the changes locally. If that’s the case, you can use git pull origin <branch> to pull the most recent changes from that remote branch.

```
git pull origin <branch>
```

If you’re curious to see what files have been changed and what’s being tracked, you can use git status. If you want to see _how much_ each file has been changed, you can use git diff to see the number of lines changed in each file.

```
git status
git diff --stat
```

### Advanced commands and best practices

Soon you reach a point where you want your commits to look nice and stay consistent. You might also have to fiddle around with your commit history to make your commits easier to comprehend or to revert an accidental breaking change.

The git log command lets you see the commit history. You’ll want to use this to see the history of your commits.

Your commits will come with messages and a hash, which is a random series of numbers and letters. An example hash might look like this: c3d882aa1aa4e3d5f18b3890132670fbeac912f7

```
git log
```

Let’s say you pushed something that broke your app. Rather than fix it and push something new, you’d rather just go back one commit and try again.

If you want to go back in time and checkout your app from a previous commit, you can do this directly by using the hash as the branch name. This will detach your app from the current version (because you’re editing a historical record, rather than the current version).

```
git checkout c3d88eaa1aa4e4d5f
```

Then, if you make changes from that historical branch and you want to push again, you’d have to do a force push.

Caution: Force pushing is dangerous and should only be done if you absolutely must. It will overwrite the history of your app and you will lose whatever came after.

```
git push -f origin master
```
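
A gentler alternative worth mentioning (my addition, not in the original article): the --force-with-lease flag refuses to overwrite the remote branch if someone else has pushed to it since you last fetched, which limits the damage a force push can do.

```
git push --force-with-lease origin master
```
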
Other times it’s just not practical to keep everything in one commit. Perhaps you want to save your progress before trying something potentially risky, or perhaps you made a mistake and want to spare yourself the embarrassment of having an error in your version history. For that, we have git rebase.

Let’s say you have 4 commits in your local history (not pushed to GitHub) in which you’ve gone back and forth. Your commits look sloppy and indecisive. You can use rebase to combine all of those commits into a single, concise commit.

```
git rebase -i HEAD~4
```

The above command will open up your computer’s default editor (which is Vim unless you’ve set it to something else), with several options for how you can change your commits. It will look something like the code below:

```
pick 130deo9 oldest commit message
pick 4209fei second oldest commit message
pick 4390gne third oldest commit message
pick bmo0dne newest commit message
```

In order to combine these, we need to change the “pick” option to “fixup” (as the documentation below the code says) to meld the commits and discard the commit messages. Note that in vim, you need to press “a” or “i” to be able to edit the text, and to save and exit, you need to type the escape key followed by “shift + z + z”. Don’t ask me why, it just is.

```
pick 130deo9 oldest commit message
fixup 4209fei second oldest commit message
fixup 4390gne third oldest commit message
fixup bmo0dne newest commit message
```

This will merge all of your commits into the commit with the message “oldest commit message”.

The next step is to rename your commit message. This is entirely a matter of opinion, but so long as you follow a consistent pattern, anything you do is fine. I recommend using the [commit guidelines put out by Google for Angular.js][9].

In order to change the commit message, use the amend flag.

```
git commit --amend
```

This will also open vim, and the text editing and saving rules are the same as above. To give an example of a good commit message, here’s one following the rules from the guideline:

```
feat: add stripe checkout button to payments page

- add stripe checkout button
- write tests for checkout
```

One advantage to keeping with the types listed in the guideline is that it makes writing change logs easier. You can also include information in the footer (again, specified in the guideline) that references issues.

Note: you should avoid rebasing and squashing your commits if you are collaborating on a project, and have code pushed to GitHub. If you start changing version history under people’s noses, you could end up making everyone’s lives more difficult with bugs that are difficult to track.

There are an almost endless number of possible commands with Git, but these commands are probably the only ones you’ll need to know for your first few years of programming.
* * *
|
||||
|
||||
_Sam Corcos is the lead developer and co-founder of _ [_Sightline Maps_][10] _, the most intuitive platform for 3D printing topographical maps, as well as _ [_LearnPhoenix.io_][11] _, an intermediate-advanced tutorial site for building scalable production apps with Phoenix and React. Get $20 off of LearnPhoenix with the coupon code: _ _free_code_camp_
|
||||
|
||||
|

* [Git][1]
* [Github][2]
* [Programming][3]
* [Software Development][4]
* [Web Development][5]

--------------------------------------------------------------------------------

via: https://medium.freecodecamp.org/git-cheat-sheet-and-best-practices-c6ce5321f52

作者:[Sam Corcos][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.freecodecamp.org/@SamCorcos?source=post_header_lockup
[1]:https://medium.freecodecamp.org/tagged/git?source=post
[2]:https://medium.freecodecamp.org/tagged/github?source=post
[3]:https://medium.freecodecamp.org/tagged/programming?source=post
[4]:https://medium.freecodecamp.org/tagged/software-development?source=post
[5]:https://medium.freecodecamp.org/tagged/web-development?source=post
[6]:https://octodex.github.com/
[7]:https://xkcd.com/1597/
[8]:https://guides.github.com/introduction/flow/
[9]:https://github.com/angular/angular.js/blob/master/CONTRIBUTING.md#-git-commit-guidelines
[10]:http://sightlinemaps.com/
[11]:http://learnphoenix.io/

214
sources/tech/20161031 An introduction to Linux filesystems.md
Normal file
@ -0,0 +1,214 @@

ucasFL translating

An introduction to Linux filesystems
============================================================

![Introduction to Linux filesystems](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A "Introduction to Linux filesystems")

Image credits: Original photo by Rikki Endsley. [CC BY-SA 4.0][9]

This article is intended to be a very high-level discussion of Linux filesystem concepts. It is not intended to be a low-level description of how a particular filesystem type, such as EXT4, works, nor is it intended to be a tutorial of filesystem commands.

More Linux resources

* [What is Linux?][1]
* [What are Linux containers?][2]
* [Download Now: Linux commands cheat sheet][3]
* [Advanced Linux commands cheat sheet][4]
* [Our latest Linux articles][5]

Every general-purpose computer needs to store data of various types on a hard disk drive (HDD) or some equivalent, such as a USB memory stick. There are a couple of reasons for this. First, RAM loses its contents when the computer is switched off. There are non-volatile types of RAM that can maintain the data stored there after power is removed (such as flash RAM that is used in USB memory sticks and solid state drives), but flash RAM is much more expensive than standard, volatile RAM like DDR3 and other, similar types.

The second reason that data needs to be stored on hard drives is that even standard RAM is still more expensive than disk space. Both RAM and disk costs have been dropping rapidly, but RAM still leads the way in terms of cost per byte. A quick calculation of the cost per byte, based on costs for 16GB of RAM vs. a 2TB hard drive, shows that RAM is about 71 times more expensive per unit than the hard drive. A typical cost for RAM today is around $0.0000000043743750 per byte.

For a quick historical note to put present RAM costs in perspective: in the very early days of computing, one type of memory was based on dots on a CRT screen. This was very expensive, at about $1.00 _per bit_!

### Definitions

You may hear people talk about filesystems in a number of different and confusing ways. The word itself can have multiple meanings, and you may have to discern the correct meaning from the context of a discussion or document.

I will attempt to define the various meanings of the word "filesystem" based on how I have observed it being used in different circumstances. Note that while attempting to conform to standard "official" meanings, my intent is to define the term based on its various usages. These meanings will be explored in greater detail in the following sections of this article.

1. The entire Linux directory structure, starting at the top (/) root directory.
2. A specific type of data storage format, such as EXT3, EXT4, BTRFS, XFS, and so on. Linux supports almost 100 types of filesystems, including some very old ones as well as some of the newest. Each of these filesystem types uses its own metadata structures to define how the data is stored and accessed.
3. A partition or logical volume formatted with a specific type of filesystem that can be mounted on a specified mount point on a Linux filesystem.

### Basic filesystem functions

Disk storage is a necessity that brings with it some interesting and inescapable details. Obviously, a filesystem is designed to provide space for non-volatile storage of data; that is its ultimate function. However, there are many other important functions that flow from that requirement.

All filesystems need to provide a namespace—that is, a naming and organizational methodology. This defines how a file can be named, specifically the length of a filename and the subset of characters that can be used for filenames out of the total set of characters available. It also defines the logical structure of the data on a disk, such as the use of directories for organizing files instead of just lumping them all together in a single, huge conglomeration of files.

Once the namespace has been defined, a metadata structure is necessary to provide the logical foundation for that namespace. This includes the data structures required to support a hierarchical directory structure; structures to determine which blocks of space on the disk are used and which are available; structures that allow for maintaining the names of the files and directories; information about the files such as their size and the times they were created, modified, or last accessed; and the location or locations of the data belonging to the file on the disk. Other metadata is used to store high-level information about the subdivisions of the disk, such as logical volumes and partitions. This higher-level metadata and the structures it represents contain the information describing the filesystem stored on the drive or partition, but it is separate from and independent of the filesystem metadata.

Filesystems also require an Application Programming Interface (API) that provides access to system function calls which manipulate filesystem objects like files and directories. APIs provide for tasks such as creating, moving, and deleting files. They also provide the algorithms that determine things like where a file is placed on a filesystem. Such algorithms may account for objectives such as speed or minimizing disk fragmentation.

Modern filesystems also provide a security model, which is a scheme for defining access rights to files and directories. The Linux filesystem security model helps to ensure that users only have access to their own files and not those of others or of the operating system itself.
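
As a concrete illustration of that security model, the classic owner/group/other permission bits are visible with `ls -l`; the file name and ownership below are hypothetical:

```
$ ls -l /home/alice/notes.txt
-rw-r--r--. 1 alice alice 1024 Oct 31 09:00 /home/alice/notes.txt
# rw- : the owner (alice) may read and write
# r-- : members of the group alice may only read
# r-- : everyone else may only read
```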

The final building block is the software required to implement all of these functions. Linux uses a two-part software implementation as a way to improve both system and programmer efficiency.

<center>
![](https://opensource.com/sites/default/files/filesystem_diagram.png)

Figure 1: The Linux two-part filesystem software implementation.</center>

The first part of this two-part implementation is the Linux virtual filesystem. This virtual filesystem provides a single set of commands for the kernel, and developers, to access all types of filesystems. The virtual filesystem software calls the specific device driver required to interface to the various types of filesystems. The filesystem-specific device drivers are the second part of the implementation. The device driver interprets the standard set of filesystem commands to ones specific to the type of filesystem on the partition or logical volume.

### Directory structure

As a usually very organized Virgo, I like things stored in smaller, organized groups rather than in one big bucket. The use of directories helps me to be able to store and then locate the files I want when I am looking for them. Directories are also known as folders because they can be thought of as folders in which files are kept, in a sort of physical desktop analogy.

In Linux and many other operating systems, directories can be structured in a tree-like hierarchy. The Linux directory structure is well defined and documented in the [Linux Filesystem Hierarchy Standard][10] (FHS). Referencing those directories when accessing them is accomplished by using the sequentially deeper directory names connected by forward slashes (/), such as /var/log and /var/spool/mail. These are called paths.

The following table provides a very brief list of the standard, well-known, and defined top-level Linux directories and their purposes.

| Directory | Description |
| --- | --- |
| / (root filesystem) | The root filesystem is the top-level directory of the filesystem. It must contain all of the files required to boot the Linux system before other filesystems are mounted. It must include all of the required executables and libraries required to boot the remaining filesystems. After the system is booted, all other filesystems are mounted on standard, well-defined mount points as subdirectories of the root filesystem. |
| /bin | The /bin directory contains user executable files. |
| /boot | Contains the static bootloader and kernel executable and configuration files required to boot a Linux computer. |
| /dev | This directory contains the device files for every hardware device attached to the system. These are not device drivers; rather, they are files that represent each device on the computer and facilitate access to those devices. |
| /etc | Contains the local system configuration files for the host computer. |
| /home | Home directory storage for user files. Each user has a subdirectory in /home. |
| /lib | Contains shared library files that are required to boot the system. |
| /media | A place to mount external removable media devices such as USB thumb drives that may be connected to the host. |
| /mnt | A temporary mountpoint for regular filesystems (as in not removable media) that can be used while the administrator is repairing or working on a filesystem. |
| /opt | Optional files such as vendor-supplied application programs should be located here. |
| /root | This is not the root (/) filesystem. It is the home directory for the root user. |
| /sbin | System binary files. These are executables used for system administration. |
| /tmp | Temporary directory. Used by the operating system and many programs to store temporary files. Users may also store files here temporarily. Note that files stored here may be deleted at any time without prior notice. |
| /usr | These are shareable, read-only files, including executable binaries and libraries, man files, and other types of documentation. |
| /var | Variable data files are stored here. This can include things like log files, MySQL and other database files, web server data files, email inboxes, and much more. |

<center>Table 1: The top level of the Linux filesystem hierarchy.</center>

The directories shown in Table 1, along with their subdirectories, that have a teal background are considered an integral part of the root filesystem. That is, they cannot be created as a separate filesystem and mounted at startup time. This is because they (specifically, their contents) must be present at boot time in order for the system to boot properly.

The /media and /mnt directories are part of the root filesystem, but they should never contain any data. Rather, they are simply temporary mount points.

The remaining directories, those that have no background color in Table 1, do not need to be present during the boot sequence, but will be mounted later, during the startup sequence that prepares the host to perform useful work.

Be sure to refer to the official [Linux Filesystem Hierarchy Standard][11] (FHS) web page for details about each of these directories and their many subdirectories. Wikipedia also has a good description of the [FHS][12]. This standard should be followed as closely as possible to ensure operational and functional consistency. Regardless of the filesystem types used on a host, this hierarchical directory structure is the same.

### Linux unified directory structure

In some non-Linux PC operating systems, if there are multiple physical hard drives or multiple partitions, each disk or partition is assigned a drive letter. It is necessary to know on which hard drive a file or program is located, such as C: or D:. Then you issue the drive letter as a command, **D:**, for example, to change to the D: drive, and then you use the **cd** command to change to the correct directory to locate the desired file. Each hard drive has its own separate and complete directory tree.

The Linux filesystem unifies all physical hard drives and partitions into a single directory structure. It all starts at the top–the root (/) directory. All other directories and their subdirectories are located under the single Linux root directory. This means that there is only one single directory tree in which to search for files and programs.

This can work only because a filesystem, such as /home, /tmp, /var, /opt, or /usr, can be created on a separate physical hard drive, a different partition, or a different logical volume from the / (root) filesystem and then be mounted on a mountpoint (directory) as part of the root filesystem tree. Even removable drives such as a USB thumb drive or an external USB or eSATA hard drive will be mounted onto the root filesystem and become an integral part of that directory tree.

One good reason to do this is apparent during an upgrade from one version of a Linux distribution to another, or when changing from one distribution to another. In general, and aside from any upgrade utilities like dnf-upgrade in Fedora, it is wise to occasionally reformat the hard drive(s) containing the operating system during an upgrade to positively remove any cruft that has accumulated over time. If /home is part of the root filesystem, it will be reformatted as well and would then have to be restored from a backup. By having /home as a separate filesystem, it will be known to the installation program as a separate filesystem, and formatting of it can be skipped. This can also apply to /var, where database files, email inboxes, website data, and other variable user and system data are stored.

There are other reasons for maintaining certain parts of the Linux directory tree as separate filesystems. For example, a long time ago, when I was not yet aware of the potential issues surrounding having all of the required Linux directories as part of the / (root) filesystem, I managed to fill up my home directory with a large number of very big files. Since neither the /home directory nor the /tmp directory were separate filesystems but simply subdirectories of the root filesystem, the entire root filesystem filled up. There was no room left for the operating system to create temporary files or to expand existing data files. At first, the application programs started complaining that there was no room to save files, and then the OS itself started to act very strangely. Booting to single-user mode and clearing out the offending files in my home directory allowed me to get going again. I then reinstalled Linux using a pretty standard multi-filesystem setup and was able to prevent complete system crashes from occurring again.

I once had a situation where a Linux host continued to run, but prevented the user from logging in using the GUI desktop. I was able to log in using the command line interface (CLI) locally using one of the [virtual consoles][13], and remotely using SSH. The problem was that the /tmp filesystem had filled up and some temporary files required by the GUI desktop could not be created at login time. Because the CLI login did not require files to be created in /tmp, the lack of space there did not prevent me from logging in using the CLI. In this case, the /tmp directory was a separate filesystem and there was plenty of space available in the volume group the /tmp logical volume was a part of. I simply [expanded the /tmp logical volume][14] to a size that accommodated my fresh understanding of the amount of temporary file space needed on that host, and the problem was solved. Note that this solution did not require a reboot, and as soon as the /tmp filesystem was enlarged the user was able to log in to the desktop.
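
The fix described above maps to a single LVM command; a minimal sketch, assuming a hypothetical volume group named `vg01` with free extents:

```
# Grow the /tmp logical volume by 2GiB and resize the filesystem in one step;
# no reboot or unmount is required for common filesystems such as EXT4
lvextend --size +2G --resizefs /dev/vg01/tmp
```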

Another situation occurred while I was working as a lab administrator at one large technology company. One of our developers had installed an application in the wrong location (/var). The application was crashing because the /var filesystem was full and the log files, which are stored in /var/log on that filesystem, could not be appended with new messages due to the lack of space. However, the system remained up and running because the critical / (root) and /tmp filesystems did not fill up. Removing the offending application and reinstalling it in the /opt filesystem resolved that problem.

### Filesystem types

Linux supports reading around 100 partition types; it can create and write to only a few of these. But it is possible—and very common—to mount filesystems of different types on the same root filesystem. In this context we are talking about filesystems in terms of the structures and metadata required to store and manage the user data on a partition of a hard drive or a logical volume. The complete list of filesystem partition types recognized by the Linux **fdisk** command is provided here, so that you can get a feel for the high degree of compatibility that Linux has with very many types of systems.

```
 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto
1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep
1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT
```

The main purpose in supporting the ability to read so many partition types is to allow for compatibility and at least some interoperability with other computer systems' filesystems. The choices available when creating a new filesystem with Fedora are shown in the following list.

* btrfs
* **cramfs**
* **ext2**
* **ext3**
* **ext4**
* fat
* gfs2
* hfsplus
* minix
* **msdos**
* ntfs
* reiserfs
* **vfat**
* xfs

Other distributions support creating different filesystem types. For example, CentOS 6 supports creating only those filesystems highlighted in bold in the above list.
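
Creating one of these filesystems on a block device is a one-liner; a sketch, assuming a hypothetical empty partition /dev/sdb1 (this destroys any data on it):

```
# Format the partition with EXT4, then mount it to verify
sudo mkfs -t ext4 /dev/sdb1
sudo mount /dev/sdb1 /mnt
df -hT /mnt   # should report the filesystem type as ext4
```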

### Mounting

The term "to mount" a filesystem in Linux refers back to the early days of computing when a tape or removable disk pack would need to be physically mounted on an appropriate drive device. After being physically placed on the drive, the filesystem on the disk pack would be logically mounted by the operating system to make the contents available for access by the OS, application programs, and users.

A mount point is simply a directory, like any other, that is created as part of the root filesystem. So, for example, the home filesystem is mounted on the directory /home. Filesystems can be mounted at mount points on other non-root filesystems, but this is less common.

The Linux root filesystem is mounted on the root directory (/) very early in the boot sequence. Other filesystems are mounted later, by the Linux startup programs, either **rc** under SystemV or **systemd** in newer Linux releases. Mounting of filesystems during the startup process is managed by the /etc/fstab configuration file. An easy way to remember that is that fstab stands for "file system table," and it is a list of filesystems that are to be mounted, their designated mount points, and any options that might be needed for specific filesystems.
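
For reference, here are a couple of illustrative /etc/fstab entries; the UUID and logical volume name are hypothetical, but the six-column layout (device, mount point, type, options, dump, fsck order) is standard:

```
# <device>                                <mount point> <type> <options> <dump> <pass>
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /             ext4   defaults  1      1
/dev/mapper/vg01-home                     /home         ext4   defaults  1      2
```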

Filesystems are mounted on an existing directory/mount point using the **mount** command. In general, any directory that is used as a mount point should be empty and not have any other files contained in it. Linux will not prevent users from mounting one filesystem over one that is already there or on a directory that contains files. If you mount a filesystem on an existing directory or filesystem, the original contents will be hidden and only the content of the newly mounted filesystem will be visible.
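
That shadowing behavior is easy to demonstrate; the device and paths here are hypothetical:

```
ls /mnt                     # shows the directory's original contents
sudo mount /dev/sdb1 /mnt   # mount a filesystem on top of it
ls /mnt                     # now shows only the new filesystem's contents
sudo umount /mnt            # the original contents reappear
```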

### Conclusion

I hope that some of the possible confusion surrounding the term filesystem has been cleared up by this article. It took a long time and a very helpful mentor for me to truly understand and appreciate the complexity, elegance, and functionality of the Linux filesystem in all of its meanings.

If you have questions, please add them to the comments below and I will try to answer them.

### Next month

Another important concept is that for Linux, everything is a file. This concept has some interesting and important practical applications for users and system admins. The reason I mention this is that you might want to read my "[Everything is a file][15]" article before the article I am planning for next month on the /dev directory.

-----------------

作者简介:

David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for Opensource.com. [More about me][7]

* [Learn how you can contribute][22]

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/10/introduction-linux-filesystems

作者:[David Both][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/life/16/10/introduction-linux-filesystems?rate=Qyf2jgkdgrj5_zfDwadBT8KsHZ2Gp5Be2_tF7R-s02Y
[7]:https://opensource.com/users/dboth
[8]:https://opensource.com/user/14106/feed
[9]:https://creativecommons.org/licenses/by-sa/4.0/
[10]:http://www.pathname.com/fhs/
[11]:http://www.pathname.com/fhs/
[12]:https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
[13]:https://en.wikipedia.org/wiki/Virtual_console
[14]:https://opensource.com/business/16/9/linux-users-guide-lvm
[15]:https://opensource.com/life/15/9/everything-is-a-file
[16]:https://opensource.com/users/dboth
[17]:https://opensource.com/users/dboth
[18]:https://opensource.com/users/dboth
[19]:https://opensource.com/life/16/10/introduction-linux-filesystems#comments
[20]:https://opensource.com/tags/linux
[21]:https://opensource.com/tags/sysadmin
[22]:https://opensource.com/participate

@ -0,0 +1,110 @@

How to create an internal innersource community
============================================================

![How to create an internal innersource community](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_openisopen.png?itok=FjmDxIaL "How to create an internal innersource community")

Image by: opensource.com

In recent years, we have seen more and more interest in a variant of open source known as _innersource_. Put simply, innersource is taking the principles of open source and bringing them inside the walls of an organization. As such, you build collaboration and community that may look and taste like open source, but in which all code and community is private within the walls of the organization.

As a [community strategy and leadership consultant][5], I work with many companies to help build their innersource communities. As such, I thought it could be fun to share some of the most important principles that apply to most of my clients and beyond. This could be a helpful primer if you are considering exploring innersource inside your organization.

### Culture then code

For most companies, the innersource journey starts out as a pilot project. Typically the project will focus on a few specific teams who may be more receptive to the experiment. As such, innersource is almost always a new workflow and methodology being brought into an existing culture. Making this cultural adjustment is key to being successful, and it is the biggest challenge.

There is an understandable misconception that the key to doing innersource well is to focus on building the open source software development lifecycle in your company. That is, open code repositories and communication channels, encouraging forking, code review, and continuous integration/deployment. This is definitely an essential part of the innersource approach, but think of these pieces as Lego bricks. The real trick is building an environment where people have the permission and incentive to build incredible things with those bricks.

As such, doing innersource well is more about building a culture, environment, and set of incentives that encourage and reward the behavior we often associate with open source. Building that culture from scratch is much easier; adapting an existing culture, particularly in traditional organizations, is where most of the work lies.

Cultural change can't be dictated. It is the amalgamation of individual actions that builds the social conventions by which the wider group is influenced. To do this well, you need to look at the problem from the vantage point of your staff. How can you make their work easier, more efficient, more rewarding, and more collaborative?

When you understand the existing cultural pain points, and your staff see the new innersource culture as a way to relieve those issues, the adaptation goes much more smoothly.

### Methodology

Given that innersource is largely a cultural adjustment that incorporates some proven open source workflow methodologies, it raises an interesting question: How do you manage the rollout of an innersource program?

I have seen good and bad ways in which organizations do this. Some take a top-down approach and announce to their staff that things are going to be different and that teams and staff need to fall in line with a specific innersource schedule. Other companies take a more bottom-up approach, where a dedicated innersource team informally tries to get people on board with the innersource program.

I wouldn't recommend either approach exclusively, but instead a combination. For any cultural change you are going to need a top-down approach in emphasizing and encouraging a new way of working from your executive team and company leaders. Rather than dictating rules, these leaders should instead be supporting an environment where your staff can help shape the workings and implementation of your innersource program in a more bottom-up way.

Let's be honest, everyone hates these kinds of cultural adjustments. We have all lived through company executives bringing in new ways of working—agile, kanban, pair-programming, or whatever else. Often these new ways of working are pushed onto staff, and it sets the dynamic off on the wrong foot: Instead of encouraging your staff to shape the culture, you are instead mandating it to them.

The approach I recommend here is to put in place a regular cadence that iterates your innersource strategy.

For example, every six months a new cycle begins. Prior to the end of the previous cycle, the leadership team will survey staff to get their perspectives on how the innersource program is working, review their feedback, and set out core goals for the new cycle. Staff will then have structured ways in which they can play a role in helping to shape how these goals can be accomplished. For example, this could be individual teams/groups that focus on communication, peer review, QA, community growth and engagement, and more. Throughout the entire process a core innersource team will facilitate these conversations, mentor and guide the leadership team, support individual staff in making worthwhile contributions, and keep the train moving forward.

This is how it works in open source. If you hack on a project, you don't just get to submit code; you can also play a role in the operational dynamics of the project. It is important you bring this sense of influence into your innersource program and to your staff—the most rewarding companies are the ones where the staff feel they can influence the culture in a positive way.

### Asynchronous and remote working

One of the most interesting challenges with innersource is that it depends extensively on asynchronous collaboration. That is, you can collaborate digitally without requiring participants to be in the same timezone or location.

As an example, some companies require all staff members to work from the same office, and much of the business that gets done takes place in in-person meetings in conference rooms, on conference calls, and via email. This can make it difficult for the company to grow and hire remote workers, or for staff to be productive when they are on the road at conferences or at home.

A core component of what makes open source work well is that participants can work asynchronously. All communication, development (code submission, issue management, and code review), QA, and release management can often be performed entirely online and from any timezone. The combination of asynchronous work with semi-regular in-person sprints/meetings is a hugely productive and efficient methodology.

This can be a difficult transition for companies. For example, some of my clients have traditionally held in-person meetings to plan projects, performed code review over a board-room table, and lack the infrastructure and communication channels to operate asynchronously. To do innersource well, it is important to put in place a plan to work as asynchronously as possible, blending the benefits of in-person communication at the office with support for digital collaboration.

The side benefit of doing this is that you build an environment that can support remote working. As anyone who works in technology will likely agree, hiring good people is _hard_, and a blocker can often be a required relocation to your city. Thus, your investment in innersource will also make it easier not just to hire people remotely (anyone can do that), but importantly to have an environment where remote workers can be successful.

### Peer review and workflow

For companies making the adjustment to innersource, one of the most interesting "sticking points" is the peer review process.

For those of you familiar with open source, there are two important principles at work in everyday collaboration. Firstly, all contributions are reviewed by other developers (both new features and bug fixes), and secondly, this peer review takes place out in the open for other people to see. This open peer review element can be a tough pill to swallow for people new to open source. If your engineers have either not conducted code review or have done it privately, it can be socially awkward and unsettling to move over to a more open review process.

This adjustment is something that needs to be carefully managed. A large part of this is building a culture in which critique is something to be favored, not avoided, in which we celebrate failure, and in which we cherish our peers helping us to be better at what we do. Again, this is about framing these adjustments around the benefit to your staff so that, while it may be awkward at first, they will feel the ultimate benefits soon after.

As you can probably see, a core chunk of building communities (whether public communities or innersource communities in companies) is understanding the psychology of your community members and converting those psychological patterns into workflow that helps you encourage the behavior you want to see.

As an example, there are two behavioral economics principles that play a key role in peer review and workflow. The first is the _Ikea effect_. This is where, if you and I were each to put together the exact same Ikea table (or build something else), we will each think our respective table is somehow better or more valuable. Thus, we put more value into the things we make, often overstated value. Secondly, there is the principle of _autonomy_, which is essentially that _choice_ is critically important to people. If we don't feel we have control of our destiny, that we can make choices, we feel boxed in and restricted.

Just these two principles have an important impact on workflow and peer review. In terms of the Ikea effect, we should expect that most people's pull requests and fixes will likely be seen as very valuable to them. As such, we need to use the peer review process to define the value of the contribution in an independent and unemotional way (e.g., reviewing specific pieces of the diff, requiring at least two reviewers, encouraging specific implementation feedback, etc.). With the autonomy principle, we should ensure staff can refine, customize, and hack on their toolchain and workflow as much as possible, and regularly give them an opportunity to provide feedback and input on making it better.

### Reputation, incentives, and engagement

Another key element of building an innersource culture in a company is to carefully carve out how you will track great work and incentivize and encourage that kind of behavior.

This piece has three core components.

Firstly, we need to have a way of getting a quantitative representation of the quality of a person's work. This can be as involved as building a complex system for tracking individual actions and weighting their value, or as simple as observationally watching how people work. What is important here is that people should be judged on their merit, and not on factors such as how much they schmooze with the bosses or how many donuts they bring to the office.

Secondly, based on this representation of their work, we need to provide different incentives and rewards to encourage the behavior we want to see.

There are two core types of rewards. _Extrinsic_ rewards are material in nature, such as T-shirts, hoodies, gift cards, money, and more. _Intrinsic_ rewards are of the more touchy-feely kind, such as respect, recognition, and admiration. Both matter, and it is important to get the right balance between them. Based on the behavior you want to encourage, I recommend putting together incentives that inspire action and rewards that validate those actions.

Finally, it can be helpful to sub-divide your staff into different groups based on their work and engage them in different ways. For example, people who are new will benefit from mentoring, support, and wider guidance. On the other hand, the most active and accomplished staff members can be a tremendous source of insight and guidance, and they often enjoy playing a role in helping to shape the company culture further.

So, there are a few starting points for the broader brushstrokes that typically need to be made when painting an innersource picture inside your organization. As usual, let me know your thoughts in the comments and feel free to reach out to me with questions!

-----------------

作者简介

Jono Bacon - Jono Bacon is a leading community manager, speaker, author, and podcaster. He is the founder of [Jono Bacon Consulting][2], which provides community strategy/execution, developer workflow, and other services. He also previously served as director of community at GitHub, Canonical, XPRIZE, and OpenAdvantage, and has consulted and advised a range of organizations. [More about me][3]

--------------------------------------------------------------------------------

via: https://opensource.com/life/16/11/create-internal-innersource-community

作者:[Jono Bacon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jonobacon
[1]:https://opensource.com/life/16/11/create-internal-innersource-community?rate=QnpszlpgMXpNG5m2OLbZPYAg_RV_DA1i48tI00CPyTc
[2]:http://www.jonobacon.com/consulting
[3]:https://opensource.com/users/jonobacon
[4]:https://opensource.com/user/26312/feed
[5]:http://www.jonobacon.org/consulting
[6]:https://opensource.com/users/jonobacon
[7]:https://opensource.com/users/jonobacon
[8]:https://opensource.com/users/jonobacon
[9]:https://opensource.com/tags/community-management
[10]:https://opensource.com/tags/six-degrees-column
[11]:https://opensource.com/participate

@ -0,0 +1,412 @@

Understanding Firewalld in Multi-Zone Configurations
============================================================

Stories of compromised servers and data theft fill today's news. It isn't difficult for someone who has read an informative blog post to access a system via a misconfigured service, take advantage of a recently exposed vulnerability, or gain control using a stolen password. Any of the many internet services found on a typical Linux server could harbor a vulnerability that grants unauthorized access to the system.

Since it's an impossible task to harden a system at the application level against every possible threat, firewalls provide security by limiting access to a system. Firewalls filter incoming packets based on their IP of origin, their destination port, and their protocol. This way, only a few IP/port/protocol combinations interact with the system, and the rest do not.

Linux firewalls are handled by netfilter, which is a kernel-level framework. For more than a decade, iptables has provided the userland abstraction layer for netfilter. iptables subjects packets to a gauntlet of rules, and if the IP/port/protocol combination of the rule matches the packet, the rule is applied, causing the packet to be accepted, rejected, or dropped.

Firewalld is a newer userland abstraction layer for netfilter. Unfortunately, its power and flexibility are underappreciated due to a lack of documentation describing multi-zoned configurations. This article provides examples to remedy this situation.

### Firewalld Design Goals

The designers of firewalld realized that most iptables usage cases involve only a few unique IP sources, for each of which a whitelist of services is allowed and the rest are denied. To take advantage of this pattern, firewalld categorizes incoming traffic into zones defined by the source IP and/or network interface. Each zone has its own configuration to accept or deny packets based on specified criteria.

Another improvement over iptables is a simplified syntax. Firewalld makes it easier to specify services by using the name of the service rather than its port(s) and protocol(s)—for example, samba rather than UDP ports 137 and 138 and TCP ports 139 and 445. It further simplifies syntax by removing the dependence on the order of statements, as was the case for iptables.

Finally, firewalld enables the interactive modification of netfilter, allowing a change in the firewall to occur independently of the permanent configuration stored in XML. Thus, the following is a temporary modification that will be overwritten by the next reload:

```
# firewall-cmd <some modification>
```

And, the following is a permanent change that persists across reboots:

```
# firewall-cmd --permanent <some modification>
# firewall-cmd --reload
```

### Zones

The top layer of organization in firewalld is zones. A packet is part of a zone if it matches that zone's associated network interface or IP/mask source. Several predefined zones are available:

```
# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
```

An active zone is any zone that is configured with an interface and/or a source. To list active zones:

```
# firewall-cmd --get-active-zones
public
  interfaces: eno1 eno2
```

**Interfaces** are the system's names for hardware and virtual network adapters, as you can see in the above example. All active interfaces will be assigned to zones, either to the default zone or to a user-specified one. However, an interface cannot be assigned to more than one zone.

In its default configuration, firewalld pairs all interfaces with the public zone and doesn't set up sources for any zones. As a result, public is the only active zone.

**Sources** are incoming IP address ranges, which also can be assigned to zones. A source (or overlapping sources) cannot be assigned to multiple zones. Doing so results in undefined behavior, as it would not be clear which rules should be applied to that source.

Since specifying a source is not required, for every packet there will be a zone with a matching interface, but there won't necessarily be a zone with a matching source. This indicates some form of precedence, with priority going to the more specific source zones, but more on that later. First, let's inspect how the public zone is configured:

```
# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno1 eno2
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```

Going line by line through the output:

* `public (default, active)` indicates that the public zone is the default zone (interfaces default to it when they come up), and it is active because it has at least one interface or source associated with it.
* `interfaces: eno1 eno2` lists the interfaces associated with the zone.
* `sources:` lists the sources for the zone. There aren't any now, but if there were, they would be of the form xxx.xxx.xxx.xxx/xx.
* `services: dhcpv6-client ssh` lists the services allowed through the firewall. You can get an exhaustive list of firewalld's defined services by executing `firewall-cmd --get-services`.
* `ports:` lists port destinations allowed through the firewall. This is useful if you need to allow a service that isn't defined in firewalld.
* `masquerade: no` indicates that IP masquerading is disabled for this zone. If enabled, this would allow IP forwarding, with your computer acting as a router.
* `forward-ports:` lists ports that are forwarded.
* `icmp-blocks:` a blacklist of blocked icmp traffic.
* `rich rules:` advanced configurations, processed first in a zone.
* `default` is the target of the zone, which determines the action taken on a packet that matches the zone yet isn't explicitly handled by one of the above settings.

### A Simple Single-Zoned Example

Say you just want to lock down your firewall. Simply remove the services currently allowed by the public zone and reload:

```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --reload
```

These commands result in the following firewall:

```
# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno1 eno2
  sources:
  services:
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
```

In the spirit of keeping security as tight as possible, if a situation arises where you need to open a temporary hole in your firewall (perhaps for ssh), you can add the service to just the current session (omit `--permanent`) and instruct firewalld to revert the modification after a specified amount of time:

```
# firewall-cmd --zone=public --add-service=ssh --timeout=5m
```

The timeout option takes time values in seconds (s), minutes (m) or hours (h).
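
To confirm that the hole exists only in the runtime configuration, you can compare the runtime and permanent service lists; a sketch, where the permanent list should still be empty:

```
# firewall-cmd --zone=public --list-services
ssh
# firewall-cmd --permanent --zone=public --list-services
(no output; the permanent configuration is unchanged)
```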

### Targets

When a zone processes a packet due to its source or interface, but there is no rule that explicitly handles the packet, the target of the zone determines the behavior:

* `ACCEPT`: accept the packet.
* `%%REJECT%%`: reject the packet, returning a reject reply.
* `DROP`: drop the packet, returning no reply.
* `default`: don't do anything. The zone washes its hands of the problem and kicks it "upstairs".

There was a bug present in firewalld 0.3.9 (fixed in 0.3.10) for source zones with targets other than `default`, in which the target was applied regardless of allowed services. For example, a source zone with the target `DROP` would drop all packets, even if they were whitelisted. Unfortunately, this version of firewalld was packaged for RHEL7 and its derivatives, causing it to be a fairly common bug. The examples in this article avoid situations that would manifest this behavior.

### Precedence

Active zones fulfill two different roles. Zones with associated interface(s) act as interface zones, and zones with associated source(s) act as source zones (a zone could fulfill both roles). Firewalld handles a packet in the following order:

1. The corresponding source zone. Zero or one such zone may exist. If the source zone deals with the packet because the packet satisfies a rich rule, the service is whitelisted, or the target is not default, we end here. Otherwise, we pass the packet on.
2. The corresponding interface zone. Exactly one such zone will always exist. If the interface zone deals with the packet, we end here. Otherwise, we pass the packet on.
3. The firewalld default action. Accept icmp packets and reject everything else.

The take-away message is that source zones have precedence over interface zones. Therefore, the general design pattern for multi-zoned firewalld configurations is to create a privileged source zone to allow specific IPs elevated access to system services and a restrictive interface zone to limit the access of everyone else.
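
Expressed as commands, that pattern looks something like the following sketch; the interface name and address range are hypothetical:

```
## privileged source zone: a trusted range gets the internal zone's services
# firewall-cmd --permanent --zone=internal --add-source=10.0.0.0/24
## restrictive interface zone: all other traffic arriving on eth0 hits public
# firewall-cmd --permanent --zone=public --change-interface=eth0
# firewall-cmd --reload
```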

### A Simple Multi-Zoned Example

To demonstrate precedence, let's swap ssh for http in the public zone and set up the default internal zone for our favorite IP address, 1.1.1.1. The following commands accomplish this task:

```
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=internal --add-source=1.1.1.1
# firewall-cmd --reload
```

which results in the following configuration:

```
# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno1 eno2
  sources:
  services: dhcpv6-client http
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --permanent --zone=public --get-target
default
# firewall-cmd --zone=internal --list-all
internal (active)
  interfaces:
  sources: 1.1.1.1
  services: dhcpv6-client mdns samba-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --permanent --zone=internal --get-target
default
```

With the above configuration, if someone attempts to `ssh` in from 1.1.1.1, the request would succeed because the source zone (internal) is applied first, and it allows ssh access.

If someone attempts to `ssh` from somewhere else, say 2.2.2.2, there wouldn't be a source zone, because no zones match that source. Therefore, the request would pass directly to the interface zone (public), which does not explicitly handle ssh. Since public's target is `default`, the request passes to the firewalld default action, which is to reject it.

What if 1.1.1.1 attempts http access? The source zone (internal) doesn't allow it, but the target is `default`, so the request passes to the interface zone (public), which grants access.

Now let's suppose someone from 3.3.3.3 is trolling your website. To restrict access for that IP, simply add it to the preconfigured drop zone, aptly named because it drops all connections:

```
# firewall-cmd --permanent --zone=drop --add-source=3.3.3.3
# firewall-cmd --reload
```

The next time 3.3.3.3 attempts to access your website, firewalld will send the request first to the source zone (drop). Since the target is `DROP`, the request will be denied and won't make it to the interface zone (public) to be accepted.
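
At this point, three zones should show up as active; `firewall-cmd --get-active-zones` gives a quick sanity check (the output is sketched from the configuration above):

```
# firewall-cmd --get-active-zones
drop
  sources: 3.3.3.3
internal
  sources: 1.1.1.1
public
  interfaces: eno1 eno2
```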

### A Practical Multi-Zoned Example

Suppose you are setting up a firewall for a server at your organization. You want the entire world to have http and https access, your organization (1.1.0.0/16) and workgroup (1.1.1.0/24) to have ssh access, and your workgroup to have samba access. Using zones in firewalld, you can set up this configuration in an intuitive manner.

Given the naming, it seems logical to commandeer the public zone for your world-wide purposes and the internal zone for local use. Start by replacing the dhcpv6-client and ssh services in the public zone with http and https:

```
# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=public --add-service=https
```

Then trim mdns, samba-client, and dhcpv6-client out of the internal zone (leaving only ssh) and add your organization as the source:

```
# firewall-cmd --permanent --zone=internal --remove-service=mdns
# firewall-cmd --permanent --zone=internal --remove-service=samba-client
# firewall-cmd --permanent --zone=internal --remove-service=dhcpv6-client
# firewall-cmd --permanent --zone=internal --add-source=1.1.0.0/16
```

To accommodate your elevated workgroup samba privileges, add a rich rule:

```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule family=ipv4 source address="1.1.1.0/24" service name="samba" accept'
```

Finally, reload, pulling the changes into the active session:

```
# firewall-cmd --reload
```

Only a few more details remain. Attempting to `ssh` in to your server from an IP outside the internal zone results in a reject message, which is the firewalld default. It is more secure to exhibit the behavior of an inactive IP and instead drop the connection. Change the public zone's target to `DROP` rather than `default` to accomplish this:
|
||||
|
||||
```
|
||||
|
||||
# firewall-cmd --permanent --zone=public --set-target=DROP
|
||||
# firewall-cmd --reload
|
||||
|
||||
```
|
||||
|
||||
But wait, you no longer can ping, even from the internal zone! And icmp (the protocol ping goes over) isn't on the list of services that firewalld can whitelist. That's because icmp is an IP layer 3 protocol and has no concept of a port, unlike services that are tied to ports. Before setting the public zone to `DROP`, pinging could pass through the firewall because both of your `default` targets passed it on to the firewalld default, which allowed it. Now it's dropped.

To restore pinging to the internal network, use a rich rule:

```
# firewall-cmd --permanent --zone=internal --add-rich-rule='rule protocol value="icmp" accept'
# firewall-cmd --reload
```

In summary, here's the configuration for the two active zones:

```
# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno1 eno2
  sources:
  services: http https
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
# firewall-cmd --permanent --zone=public --get-target
DROP
# firewall-cmd --zone=internal --list-all
internal (active)
  interfaces:
  sources: 1.1.0.0/16
  services: ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
        rule family=ipv4 source address="1.1.1.0/24" service name="samba" accept
        rule protocol value="icmp" accept
# firewall-cmd --permanent --zone=internal --get-target
default
```

This setup demonstrates a three-layer nested firewall. The outermost layer, public, is an interface zone and spans the entire world. The next layer, internal, is a source zone and spans your organization, which is a subset of public. Finally, a rich rule adds the innermost layer spanning your workgroup, which is a subset of internal.

The take-away message here is that when a scenario can be broken into nested layers, the broadest layer should use an interface zone, the next layer should use a source zone, and additional layers should use rich rules within the source zone.

### Debugging

Firewalld employs intuitive paradigms for designing a firewall, yet it gives rise to ambiguity more easily than its predecessor, iptables. Should unexpected behavior occur, or if you simply want to understand better how firewalld works, it can be useful to obtain an iptables description of how netfilter has been configured to operate. Output for the previous example follows, with forward, output, and logging lines trimmed for simplicity:

```
# iptables -S
-P INPUT ACCEPT
... (forward and output lines) ...
-N INPUT_ZONES
-N INPUT_ZONES_SOURCE
-N INPUT_direct
-N IN_internal
-N IN_internal_allow
-N IN_internal_deny
-N IN_public
-N IN_public_allow
-N IN_public_deny
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
... (forward and output lines) ...
-A INPUT_ZONES -i eno1 -j IN_public
-A INPUT_ZONES -i eno2 -j IN_public
-A INPUT_ZONES -j IN_public
-A INPUT_ZONES_SOURCE -s 1.1.0.0/16 -g IN_internal
-A IN_internal -j IN_internal_deny
-A IN_internal -j IN_internal_allow
-A IN_internal_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/24 -p udp -m udp --dport 137 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/24 -p udp -m udp --dport 138 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/24 -p tcp -m tcp --dport 139 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -s 1.1.1.0/24 -p tcp -m tcp --dport 445 -m conntrack --ctstate NEW -j ACCEPT
-A IN_internal_allow -p icmp -m conntrack --ctstate NEW -j ACCEPT
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public -j DROP
-A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
```

In the above iptables output, new chains (lines starting with `-N`) are declared first. The rest are rules appended (starting with `-A`) to iptables. Established connections and local traffic are accepted, and incoming packets go to the `INPUT_ZONES_SOURCE` chain, at which point IPs are sent to the corresponding zone, if one exists. After that, traffic goes to the `INPUT_ZONES` chain, at which point it is routed to an interface zone. If it isn't handled there, icmp is accepted, invalids are dropped, and everything else is rejected.
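
When tracing why a particular packet is or isn't getting through, per-rule packet counters can help. Listing any of the chains above with `-nv` (plain iptables usage, nothing firewalld-specific) shows how many packets each rule has matched:

```
# iptables -L IN_internal_allow -nv
```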

### Conclusion

Firewalld is an under-documented firewall configuration tool with more potential than many people realize. With its innovative paradigm of zones, firewalld allows the system administrator to break up traffic into categories where each receives a unique treatment, simplifying the configuration process. Because of its intuitive design and syntax, it is practical for both simple single-zoned and complex multi-zoned configurations.

--------------------------------------------------------------------------------

via: https://www.linuxjournal.com/content/understanding-firewalld-multi-zone-configurations?page=0,0

Author: [Nathan Vance][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:https://www.linuxjournal.com/users/nathan-vance
@ -1,181 +0,0 @@

Translating ++++++++++++++
++++++++++++++

Getting started with Perl on the Raspberry Pi
============================================================

> We're all free to pick what we want to run on our Raspberry Pi.

![Getting started with Perl on the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberry_pi_blue_board.jpg?itok=01NR5MX4 "Getting started with Perl on the Raspberry Pi")

> Image by: opensource.com

When I spoke recently at SVPerl (Silicon Valley Perl) about Perl on the Raspberry Pi, someone asked, "I heard the Raspberry Pi is supposed to use Python. Is that right?" I was glad he asked, because it's a common misconception. The Raspberry Pi can run any language. Perl, Python, and others are part of the initial installation of Raspbian Linux, the official software for the board.

The origin of the myth is simple. The Raspberry Pi's creator, UK computer science professor Eben Upton, has told the story that the "Pi" part of the name was intended to sound like Python because he likes the language. He chose it as his emphasis for kids learning to code. But he and his team made a general-purpose computer. The open source software on the Raspberry Pi places no restrictions on us. We're all free to pick what we want to run and make each Raspberry Pi our own.

The second point of my presentation at SVPerl, and of this article, is to introduce my "PiFlash" script. It was written in Perl, but it doesn't require any knowledge of Perl to automate your task of flashing SD cards for a Raspberry Pi from a Linux system. It provides safety for beginners, so they won't accidentally erase a hard drive while trying to flash an SD card. It offers automation and convenience for power users, which includes me and is why I wrote it. Similar tools already existed for Windows and Macs, but the instructions on the Raspberry Pi website oddly offer no automated tool for Linux users. Now one exists.

Open source software has a long tradition of new projects starting because an author wanted to "scratch their own itch," or to solve their own problems. That's the way Eric S. Raymond described it in his 1997 paper and 1999 book "[The Cathedral and the Bazaar][8]," which defined the open source software development methodology. I wrote PiFlash to fill a need for Linux users like myself.

### Downloadable system images

When setting up a Raspberry Pi, you first need to download an operating system for it. We call this a "system image" file. Once you download it to your desktop, laptop, or even another Raspberry Pi, you have to write or "flash" it to an SD card. The details are covered online already. It can be a bit tricky to do manually because it matters that the system image goes onto the whole SD card and not onto a partition. The system image will actually contain at least one partition of its own, because the Raspberry Pi's boot procedure needs a FAT32 filesystem partition from which to start. Partitions after the boot partition can be any filesystem type supported by the OS kernel.

In most cases on the Raspberry Pi, we're running some distribution with a Linux kernel. Here's a list of common system images that you can download for the Raspberry Pi (but there's nothing to stop you from building your own from scratch).

The ["NOOBS"][9] system from the Raspberry Pi Foundation is their recommended system for new users. It stands for "New Out of the Box System." It's obviously intended to sound like the term "noob," short for "newbie." NOOBS starts a Raspbian-based Linux system, which presents a menu that you can use to automatically download and install several other system images on your Raspberry Pi.

[Raspbian Linux][10] is Debian Linux specialized for the Raspberry Pi. It's the official Linux distribution for the Raspberry Pi and is maintained by the Raspberry Pi Foundation. Nearly all Raspberry Pi software and drivers start with Raspbian before going to other Linux distributions. It runs on all models of the Raspberry Pi. The default installation includes Perl.

Ubuntu Linux (and the community edition Ubuntu MATE) includes the Raspberry Pi as one of its supported platforms for the ARM (Advanced RISC Machines, where RISC stands for Reduced Instruction Set Computer) processor. Ubuntu is a commercially supported open source variant of Debian Linux, so its software comes as DEB packages. Perl is included. It only works on the Raspberry Pi 2 and 3 models, with their 32-bit ARM7 and 64-bit ARM8 processors. The ARM6 processor of the Raspberry Pi 1 and Zero was never supported by Ubuntu's build process.

[Fedora Linux][12] supports the Raspberry Pi 2 and 3 as of Fedora 25. Fedora is the open source project affiliated with Red Hat. Fedora serves as the base to which the commercial RHEL (Red Hat Enterprise Linux) adds commercial packages and support, so its software comes as RPM (Red Hat Package Manager) packages, like all Red Hat-compatible Linux distributions. Like the others, it includes Perl.

[RISC OS][13] is a single-user operating system made specifically for the ARM processor. If you want to experiment with a small desktop that is more compact than Linux (due to fewer features), it's an option. Perl runs on RISC OS.

[RaspBSD][14] is the Raspberry Pi distribution of FreeBSD. It's a Unix-based system, but it isn't Linux. As an open source Unix, form follows function, and it has many similarities to Linux, including that the operating system environment is made from a similar set of open source packages, including Perl.

[OSMC][15], the Open Source Media Center, and [LibreELEC][16] are TV entertainment center systems. They are both based on the Kodi entertainment center, which runs on a Linux kernel. Each is a really compact and specialized Linux system, so don't expect to find Perl on them.

[Microsoft Windows IoT Core][17] is a new entrant that runs only on the Raspberry Pi 3. You need Microsoft developer access to download it, so as a Linux geek, that deterred me from looking at it. My PiFlash script doesn't support it, but if that's what you're looking for, it's there.

### The PiFlash script

If you look at the Raspberry Pi's [SD card flashing instructions][19], you'll see that the instructions for Windows or Mac involve downloading a tool to write to the SD card. For Linux systems, though, it's a set of manual instructions. I've done that manual procedure so many times that it triggered my software-developer instinct to automate the process, and that's where the PiFlash script came from. It's tricky because there are many ways a Linux system can be set up, but they are all based on the Linux kernel.

I always imagined that one of the biggest potential errors of the manual procedure is accidentally erasing the wrong device instead of the SD card, destroying the data on a hard drive you wanted to keep. In my presentation at SVPerl, I was surprised to find someone in the audience who had made that mistake (and wasn't afraid to admit it). Therefore, one of the purposes of the PiFlash script, providing safety for new users by refusing to erase a device that isn't an SD card, is even more needed than I expected. PiFlash will also refuse to overwrite a device that contains a mounted filesystem.

For experienced users, including me, the PiFlash script offers the convenience of automation. After downloading the system image, I don't have to uncompress it or extract the system image from a zip archive. PiFlash will extract it from whichever format it's in and directly flash the SD card.

I posted [PiFlash and its instructions][21] on GitHub.

It's a command-line tool with the following usages:

**piflash [--verbose] input-file output-device**

**piflash [--verbose] --SDsearch**

The **input-file** parameter is the system image file, whatever you downloaded from the Raspberry Pi software distribution sites. The **output-device** parameter is the path of the block device for the SD card you want to write to.

Alternatively, use **--SDsearch** to print a list of the device names of SD cards on the system.

The optional **--verbose** parameter is useful for printing out all of the program's state data in case you need to ask for help, submit a bug report, or troubleshoot a problem yourself. That's what I used while developing it.

This example of using the script writes a Raspbian image, still in its zip archive, to the SD card at **/dev/mmcblk0**:

**piflash 2016-11-25-raspbian-jessie.img.zip /dev/mmcblk0**

If you had specified **/dev/mmcblk0p1** (the first partition on the SD card), it would have recognized that a partition is not the correct location and refused to write to it.

One tricky aspect is recognizing which devices are SD cards on various Linux systems. The example with **mmcblk0** is from the PCI-based SD card interface on my laptop. If I used a USB SD card interface, it would be **/dev/sdb**, which is harder to distinguish from the hard drives present on many systems. However, there are only a few Linux block drivers that support SD cards. PiFlash checks the parameters of the block devices in both those cases. If all else fails, it will accept USB drives that are writable, removable, and have the right physical sector count for an SD card.
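
To illustrate the kind of check involved (a simplified sketch of the idea, not PiFlash's actual code), a Perl script can read a block device's attributes from sysfs to see whether the kernel considers it removable and how large it is:

```
#!/usr/bin/perl
# sd-check.pl - simplified sketch of sysfs-based block device inspection
use strict;
use warnings;

my $dev = shift @ARGV
    or die "usage: $0 device-name (e.g. mmcblk0 or sdb)\n";

# read a one-line attribute file from /sys/block/<dev>/
sub sys_attr {
    my ($attr) = @_;
    open my $fh, '<', "/sys/block/$dev/$attr" or return undef;
    chomp(my $value = <$fh>);
    close $fh;
    return $value;
}

my $removable = sys_attr("removable");  # "1" if the kernel flags it removable
my $sectors   = sys_attr("size");       # device size in 512-byte sectors
defined $sectors or die "$dev: no such block device\n";

printf "%s: removable=%s size=%.1f GB\n",
    $dev, $removable // "?", $sectors * 512 / 1e9;
```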

I think that covers most cases. However, what if you have another SD card interface I haven't seen? I'd like to hear from you. Please include the **--verbose --SDsearch** output, so I can see what environment was present on your system when it tried. Ideally, if the PiFlash script becomes widely used, we should build up an open source community around maintaining it for as many Raspberry Pi users as we can.

### CPAN modules for Raspberry Pi

CPAN is the [Comprehensive Perl Archive Network][22], a worldwide network of download mirrors containing a wealth of Perl modules. All of them are open source. The vast quantity of modules on CPAN has been a huge strength of Perl over the years. For many thousands of tasks, there is no need to reinvent the wheel; you can just use the code someone else has already posted, then submit your own once you have something new.

As the Raspberry Pi is a full-fledged Linux system, most CPAN modules will run normally on it, but I'll focus on some that are specifically for the Raspberry Pi's hardware. These would usually be for embedded systems projects like measurement, control, or robotics. You can connect your Raspberry Pi to external electronics via its GPIO (general-purpose input/output) pins.

Modules specifically for accessing the Raspberry Pi's GPIO pins include [Device::SMBus][23], [Device::I2C][24], [RPi::PIGPIO][25], [RPi::SPI][26], [RPi::WiringPi][27], [Device::WebIO::RaspberryPi][28], and [Device::PiGlow][29]. Modules for other embedded systems with Raspberry Pi support include [UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C][30], [RPi::DHT11][31] (temperature/humidity), [RPi::HCSR04][32] (ultrasonic), [App::RPi::EnvUI][33] (lights for growing plants), [RPi::DigiPot::MCP4XXXX][34] (potentiometer), [RPi::ADC::ADS][35] (A/D conversion), [Device::PaPiRus][36], and [Device::BCM2835::Timer][37] (the on-board timer chip).

### Examples

Here are some examples of what you can do with Perl on a Raspberry Pi.

### Example 1: Flash OSMC with PiFlash and play a video

For this example, you'll practice setting up and running a Raspberry Pi using OSMC (the Open Source Media Center).

* Go to [RaspberryPi.Org][5]. In the downloads area, get the latest version of OSMC.
* Insert a blank SD card in your Linux desktop or laptop. The Raspberry Pi 1 uses a full-size SD card. Everything else uses a microSD, which may require a common adapter to insert it.
* Check "cat /proc/partitions" before and after inserting the SD card to see which device name the system assigned it. It could be something like **/dev/mmcblk0** or **/dev/sdb**. Substitute your correct system image file and output device in a command that looks like this:

**piflash OSMC_TGT_rbp2_20170210.img.gz /dev/mmcblk0**

* Eject the SD card. Put it in the Raspberry Pi and boot it connected to an HDMI monitor.
* While OSMC is setting up, get a USB stick and put some videos on it. For purposes of the demonstration, I suggest using the "youtube-dl" program to download two videos. Run "youtube-dl OHF2xDrq8dY" (the Bloomberg "Hello World" episode about UK tech, including the Raspberry Pi) and "youtube-dl nAvZMgXbE9c" (CNET's top 5 Raspberry Pi projects). Move them to the USB stick, then unmount and remove it.
* Insert the USB stick in the OSMC Raspberry Pi. Follow the Videos menu to the external device.
* When you can play the videos on the Raspberry Pi, you have completed the exercise. Have fun.

### Example 2: A script to play random videos from a directory

This example uses a script to shuffle-play videos from a directory on the Raspberry Pi. Depending on the videos and where it's installed, this could be a kiosk display. I wrote it to display videos while using indoor exercise equipment.

* Set up a Raspberry Pi to boot Raspbian Linux. Connect it to an HDMI monitor.
* Download my ["do-video" script][6] from GitHub and put it on the Raspberry Pi.
* Follow the installation instructions on the page. The main thing is to install the **omxplayer** package, which plays videos smoothly using the Raspberry Pi's hardware video acceleration.
* Put some videos in a directory called Videos under the home directory.
* Run "do-video" and videos should start playing.

### Example 3: A script to read GPS data

This example is more advanced and optional, but it shows how Perl can read from external devices. On my "Perl on Pi" page on GitHub from the previous example, there is also a **gps-read.pl** script. It reads NMEA (National Marine Electronics Association) data from a GPS via the serial port. Instructions are on the page, including the parts I used from Adafruit Industries to build it, but any GPS that outputs NMEA data could be used.
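
For a sense of what such a script does, here is a minimal sketch (not the actual gps-read.pl; it assumes the GPS shows up as a serial device such as /dev/ttyUSB0, already configured to the right baud rate):

```
#!/usr/bin/perl
# nmea-read.pl - minimal sketch: read NMEA sentences and print position fixes
use strict;
use warnings;

my $port = shift @ARGV // "/dev/ttyUSB0";   # hypothetical serial device path
open my $gps, '<', $port or die "can't open $port: $!\n";

while (my $line = <$gps>) {
    $line =~ s/\r?\n$//;
    # $GPGGA sentences carry UTC time, latitude, longitude, and fix quality
    next unless $line =~ /^\$GPGGA,/;
    my (undef, $time, $lat, $ns, $lon, $ew, $fix) = split /,/, $line;
    next unless $fix;                       # fix quality 0 means no fix yet
    print "time=$time lat=$lat$ns lon=$lon$ew\n";
}
```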

With these tasks, I've made the case that you really can use Perl as well as any other language on a Raspberry Pi. I hope you enjoy it.

--------------------------------------------------------------------------------

About the author:

Ian Kluft - Ian has had parallel interests since grade school in computing and flight. He was coding on Unix before there was Linux, and started on Linux six months after the kernel was posted. He has a master's degree in computer science and is a CSSLP (Certified Secure Software Lifecycle Professional). On the side, he's a pilot and a certified flight instructor. As a licensed ham radio operator for over 25 years, his experimentation with electronics has evolved in recent years to include the Raspberry Pi.

------------------

via: https://opensource.com/article/17/3/perl-raspberry-pi

Author: [Ian Kluft][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/ikluft

[5]:http://raspberrypi.org/
[6]:https://github.com/ikluft/ikluft-tools/tree/master/perl-on-pi
[8]:http://www.catb.org/~esr/writings/cathedral-bazaar/
[9]:https://www.raspberrypi.org/downloads/noobs/
[10]:https://www.raspberrypi.org/downloads/raspbian/
[12]:https://fedoraproject.org/wiki/Raspberry_Pi#Downloading_the_Fedora_ARM_image
[13]:https://www.riscosopen.org/content/downloads/raspberry-pi
[14]:http://www.raspbsd.org/raspberrypi.html
[15]:https://osmc.tv/
[16]:https://libreelec.tv/
[17]:http://ms-iot.github.io/content/en-US/Downloads.htm
[19]:https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[21]:https://github.com/ikluft/ikluft-tools/tree/master/piflash
[22]:http://www.cpan.org/
[23]:https://metacpan.org/pod/Device::SMBus
[24]:https://metacpan.org/pod/Device::I2C
[25]:https://metacpan.org/pod/RPi::PIGPIO
[26]:https://metacpan.org/pod/RPi::SPI
[27]:https://metacpan.org/pod/RPi::WiringPi
[28]:https://metacpan.org/pod/Device::WebIO::RaspberryPi
[29]:https://metacpan.org/pod/Device::PiGlow
[30]:https://metacpan.org/pod/UAV::Pilot::Wumpus::Server::Backend::RaspberryPiI2C
[31]:https://metacpan.org/pod/RPi::DHT11
[32]:https://metacpan.org/pod/RPi::HCSR04
[33]:https://metacpan.org/pod/App::RPi::EnvUI
[34]:https://metacpan.org/pod/RPi::DigiPot::MCP4XXXX
[35]:https://metacpan.org/pod/RPi::ADC::ADS
[36]:https://metacpan.org/pod/Device::PaPiRus
[37]:https://metacpan.org/pod/Device::BCM2835::Timer
@ -1,254 +0,0 @@

MonkeyDEcho translating

Introduction to functional programming
============================================================

> We explain what functional programming is, explore its benefits, and look at resources for learning functional programming.

![Introduction to functional programming ](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "Introduction to functional programming ")

Image by: opensource.com

Depending on whom you ask, _functional programming_ (FP) is either an enlightened approach to programming that should be spread far and wide, or an overly academic approach with few real-world benefits. In this article, I will explain what functional programming is, explore its benefits, and recommend resources for learning it.

### Syntax primer

Code examples in this article are in the [Haskell][40] programming language. All you need to understand for this article is the basic function syntax:

```
even :: Int -> Bool
even = ... -- implementation goes here
```

This defines a one-argument function named **even**. The first line is the _type declaration_, which says that **even** takes an **Int** and returns a **Bool**. The implementation follows and consists of one or more _equations_. We'll ignore the implementation (the name and type tell us enough):

```
map :: (a -> b) -> [a] -> [b]
map = ...
```

In this example, **map** is a function that takes two arguments:

1. **(a -> b)**: a function that turns an **a** into a **b**
2. **[a]**: a list of **a**

and returns a list of **b**. Again, we don't care about the definition; the type is more interesting! **a** and **b** are _type variables_ that could stand for any type. In the expression below, **a** is **Int** and **b** is **Bool**:

```
map even [1,2,3]
```

It evaluates to a **[Bool]**:

```
[False,True,False]
```

If you see other syntax that you do not understand, don't panic; full comprehension of the syntax is not essential.

### Myths about functional programming

Let's begin by dispelling common misconceptions:

* Functional programming is not the rival or antithesis of imperative or object-oriented programming. This is a false dichotomy.
* Functional programming is not just the domain of academics. It is true that the history of functional programming is steeped in academia, and languages such as Haskell and OCaml are popular research languages. But today many companies use functional programming for large-scale systems, small specialized programs, and everything in between. There's even an annual conference for [Commercial Users of Functional Programming][33]; past programs give an insight into how functional programming is being used in industry, and by whom.
* Functional programming has nothing to do with [monads][34], nor any other particular abstraction. For all the hand-wringing around this topic, monad is just an abstraction with laws. Some things are monads, others are not.
* Functional programming is not especially hard to learn. Some languages may have different syntax or evaluation semantics from those you already know, but these differences are superficial. There are dense concepts in functional programming, but this is also true of other approaches.

### What is functional programming?

At its core, functional programming is just programming with functions: _pure_ mathematical functions. The result of a function depends only on its arguments, and there are no side effects, such as I/O or mutation of state. Programs are built by combining functions together. One way of combining functions is _function composition_:

```
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(g . f) x = g (f x)
```

This _infix_ function combines two functions into one, applying **g** to the output of **f**. We'll see it used in an upcoming example. For comparison, the same function in Python looks like:

```
def compose(g, f):
    return lambda x: g(f(x))
```

The beauty of functional programming is that because functions are deterministic and have no side effects, you can always replace a function application with the result of the application. This substitution of equals for equals enables _equational reasoning_. Every programmer has to reason about their own code and others', and equational reasoning is a great tool for doing that. Let's look at an example. You encounter the expression:

```
map even . map (+1)
```

What does this program do? Can it be simplified? Equational reasoning lets you analyze the code through a series of substitutions:

```
map even . map (+1)
map (even . (+1))          -- from the definition of 'map'
map (\x -> even (x + 1))   -- lambda abstraction
map odd                    -- since even (x + 1) = odd x
```

We can use equational reasoning to understand programs and optimize for readability. The Haskell compiler uses equational reasoning to perform many kinds of program optimizations. Without pure functions, equational reasoning either isn't possible or requires an inordinate effort from the programmer.

### Functional programming languages

What do you need from a programming language to be able to do functional programming?

Doing functional programming meaningfully in a language without _higher-order functions_ (the ability to pass functions as arguments and return functions), _lambdas_ (anonymous functions), and _generics_ is difficult. Most modern languages have these, but there are differences in _how well_ different languages support functional programming. The languages with the best support are called _functional programming languages_. These include _Haskell_, _OCaml_, _F#_, and _Scala_, which are statically typed, and the dynamically typed _Erlang_ and _Clojure_.

Even among functional languages there are big differences in how far you can exploit functional programming. Having a type system helps a lot, especially if it supports _type inference_ (so you don't always have to type the types). There isn't room in this article to go into detail, but suffice it to say, not all type systems are created equal.

As with all languages, different functional languages emphasize different concepts, techniques, or use cases. When choosing a language, it is important to consider how well it supports functional programming and whether it fits your use case. If you're stuck using some non-FP language, you will still benefit from applying functional programming to the extent the language supports it.

### Don't open that trap door!

Recall that the result of a function depends only on its inputs. Alas, almost all programming languages have "features" that break this assumption. Null values, type case (**instanceof**), type casting, exceptions, side effects, and the possibility of infinite recursion are trap doors that break equational reasoning and impair a programmer's ability to reason about the behavior or correctness of a program. (_Total languages_, which do not have any trap doors, include Agda, Idris, and Coq.)

Fortunately, as programmers, we can choose to avoid these traps, and if we are disciplined, we can pretend that the trap doors do not exist. This idea is called _fast and loose reasoning_. It costs nothing (almost any program can be written without using the trap doors), and by avoiding them you win back equational reasoning, composability, and reuse.

Let's discuss exceptions in detail. This trap door breaks equational reasoning because the possibility of abnormal termination is not reflected in the type. (Count yourself lucky if the documentation even mentions the exceptions that could be thrown.) But there is no reason why we can't have a return type that encompasses all the failure modes.

Avoiding trap doors is an area in which language features can make a big difference. For avoiding exceptions, _algebraic data types_ can be used to model error conditions, like so:

```
-- new data type for results of computations that can fail
--
data Result e a = Error e | Success a

-- new data type for three kinds of arithmetic errors
--
data ArithError = DivByZero | Overflow | Underflow

-- integer division, accounting for divide-by-zero
--
safeDiv :: Int -> Int -> Result ArithError Int
safeDiv x y =
  if y == 0
    then Error DivByZero
    else Success (div x y)
```

The trade-off in this example is that you must now work with values of type **Result ArithError Int** instead of plain old **Int**, but there are abstractions for dealing with this. You no longer need to handle exceptions and can use fast and loose reasoning, so overall it's a win.
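
For instance, a caller can pattern-match on the result, and the compiler will insist that every case is covered (a small sketch building on the definitions above):

```
-- one way to consume a Result: pattern matching handles every
-- failure mode explicitly, so none can be forgotten
describeDiv :: Int -> Int -> String
describeDiv x y = case safeDiv x y of
  Success n       -> "result: " ++ show n
  Error DivByZero -> "tried to divide by zero"
  Error Overflow  -> "overflow"
  Error Underflow -> "underflow"
```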

### Theorems for free

Most modern statically typed languages have _generics_ (also called _parametric polymorphism_), where functions are defined over one or more abstract types. For example, consider a function over lists:

```
f :: [a] -> [a]
f = ...
```

The same function in Java looks like:

```
static <A> List<A> f(List<A> xs) { ... }
```

The compiled program is a proof that this function will work with _any_ choice for the type **a**. With that in mind, and employing fast and loose reasoning, can you work out what the function does? Does knowing the type help?

In this case, the type doesn't tell us exactly what the function does (it could reverse the list, drop the first element, or many other things), but it does tell us a lot. Just from the type, we can derive theorems about the function:

* **Theorem 1**: Every element in the output appears in the input; the function couldn't possibly add an **a** to the list because it has no knowledge of what **a** is or how to construct one.
* **Theorem 2**: If you map any function over the list and then apply **f**, the result is the same as applying **f** and then mapping.

Theorem 1 helps us understand what the code is doing, and Theorem 2 is useful for program optimization. We learned all this just from the type! This result, the ability to derive useful theorems from types, is called _parametricity_. It follows that a type is a partial (sometimes complete) specification of a function's behavior, and a kind of machine-checked documentation.
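
Theorem 2 can even be spot-checked mechanically. Here is a sketch using the QuickCheck testing library, with **reverse** standing in for an arbitrary **f :: [a] -> [a]** and **(+1)** for the mapped function:

```
import Test.QuickCheck

f :: [a] -> [a]
f = reverse   -- any function of this type would satisfy the property

-- Theorem 2: mapping then applying f equals applying f then mapping
prop_theorem2 :: [Int] -> Bool
prop_theorem2 xs = f (map (+1) xs) == map (+1) (f xs)

main :: IO ()
main = quickCheck prop_theorem2
```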

Now it's your turn to exploit parametricity. What can you conclude from the types of **map** and **(.)**, or from the following functions?

* **foo :: a -> (a, a)**
* **bar :: a -> a -> a**
* **baz :: b -> a -> a**

### Resources for learning functional programming

Perhaps you have been convinced that functional programming is a better way to write software, and you are wondering how to get started. There are several approaches to learning functional programming; here are some I recommend (with, I admit, a strong bias toward Haskell):

* UPenn's [CIS 194: Introduction to Haskell][35] is a solid introduction to functional programming concepts and real-world Haskell development. The course material is available, but the lectures are not (you could view the Brisbane Functional Programming Group's [series of talks covering CIS 194][36] from a few years ago instead).
* Good introductory books include _[Functional Programming in Scala][30]_, _[Thinking Functionally with Haskell][31]_, and _[Haskell Programming from first principles][32]_.
* The [Data61 FP course][37] (formerly known as the NICTA course) teaches foundational abstractions and data structures through _type-driven development_. The payoff is huge, but it is _difficult by design_, having its origins in training workshops, so only attempt it if you know a functional programmer who is willing to mentor you.
* Start practicing functional programming in whatever code you're working on. Write pure functions (avoid non-determinism and mutation), use higher-order functions and recursion instead of loops, and exploit parametricity for improved readability and reuse. Many people start out in functional programming by experimenting and experiencing the benefits in all kinds of languages.
* Join a functional programming user group or study group in your area, or start one, and look out for functional programming conferences (new ones are popping up all the time).

### Conclusion

In this article, I discussed what functional programming is and is not, and looked at advantages of functional programming, including equational reasoning and parametricity. We learned that you can do _some_ functional programming in most programming languages, but the choice of language affects how much you can benefit, with _functional programming languages_, such as Haskell, having the most to offer. I also recommended resources for learning functional programming.

Functional programming is a rich field and there are many deeper (and denser) topics awaiting exploration. I would be remiss not to mention a few that have practical implications, such as:

* lenses and prisms (first-class, composable getters and setters; great for working with nested data);
* theorem proving (why test your code when you could _prove it correct_ instead?);
* lazy evaluation (lets you work with potentially infinite data structures);
* and category theory (the origin of many beautiful and practical abstractions in functional programming).

I hope that you have enjoyed this introduction to functional programming and are inspired to dive into this fun and practical approach to software development.

_This article is published under the [CC BY 4.0][38] license._

--------------------------------------------------------------------------------

About the author:

Fraser Tweedale is a software engineer at Red Hat, interested in functional programming, category theory, and other intersections of math and programming. Crazy about jalapeños.

----------------------

via: https://opensource.com/article/17/4/introduction-functional-programming

Author: [Fraser Tweedale][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/frasertweedale

[30]:https://www.manning.com/books/functional-programming-in-scala
[31]:http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell
[32]:http://haskellbook.com/
[33]:http://cufp.org/
[34]:https://www.haskell.org/tutorial/monads.html
[35]:https://www.cis.upenn.edu/~cis194/fall16/
[36]:https://github.com/bfpg/cis194-yorgey-lectures
[37]:https://github.com/data61/fp-course
[38]:https://creativecommons.org/licenses/by/4.0/
[40]:https://wiki.haskell.org/Introduction
@ -1,3 +1,5 @@

MonkeyDEcho translated

[MySQL infrastructure testing automation at GitHub][31]
============================================================
@ -1,230 +0,0 @@

ucasFL translating

An Intro to Compilers
============================================================

### How to Speak to Computers, Pre-Siri

A compiler is just a program that translates other programs. Traditional compilers translate source code into executable machine code that your computer understands. (Some compilers translate source code into another programming language. These compilers are called source-to-source translators or transpilers.) [LLVM][7] is a widely used compiler project, consisting of many modular compiler tools.

Traditional compiler design comprises three parts:

![](https://nicoleorchard.com/img/blog/compilers/compiler1.jpg)

* The Frontend translates source code into an intermediate representation (IR)*. [`clang`][1] is LLVM's frontend for the C family of languages.
* The Optimizer analyzes the IR and translates it into a more efficient form. [`opt`][2] is the LLVM optimizer tool.
* The Backend generates machine code by mapping the IR to the target hardware instruction set. [`llc`][3] is the LLVM backend tool.

\* LLVM IR is a low-level language that is similar to assembly. However, it abstracts away hardware-specific information.

### Hello, Compiler 👋

Below is a simple C program that prints "Hello, Compiler!" to stdout. The C syntax is human-readable, but my computer wouldn't know what to do with it. I'm going to walk through the three compilation phases to make this program machine-executable.

```
// compile_me.c
// Wave to the compiler. The world can wait.

#include <stdio.h>

int main() {
    printf("Hello, Compiler!\n");
    return 0;
}
```

### The Frontend

As I mentioned above, `clang` is LLVM's frontend for the C family of languages. Clang consists of a C preprocessor, lexer, parser, semantic analyzer, and IR generator.

* The C Preprocessor modifies the source code before beginning the translation to IR. The preprocessor handles including external files, like `#include <stdio.h>` above. It will replace that line with the entire contents of the `stdio.h` C standard library file, which will include the declaration of the `printf` function.

_See the output of the preprocessor step by running:_

```
clang -E compile_me.c -o preprocessed.i
```

* The Lexer (or scanner, or tokenizer) converts a string of characters to a string of words. Each word, or token, is assigned to one of five syntactic categories: punctuation, keyword, identifier, literal, or comment.

_Tokenization of compile_me.c_

![](https://nicoleorchard.com/img/blog/compilers/lexer.jpg)

* The Parser determines whether or not the stream of words consists of valid sentences in the source language. After analyzing the grammar of the token stream, it outputs an abstract syntax tree (AST). Nodes in a Clang AST represent declarations, statements, and types.

_The AST of compile_me.c_

![](https://nicoleorchard.com/img/blog/compilers/tree.jpg)

* The Semantic Analyzer traverses the AST, determining if code sentences have valid meaning. This phase checks for type errors. If the main function in compile_me.c returned `"zero"` instead of `0`, the semantic analyzer would throw an error because `"zero"` is not of type `int`.
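
_As an illustration (a hypothetical variant, not part of the original program), this version would be rejected during semantic analysis; clang reports an incompatible-conversion diagnostic for the return statement:_

```
// bad_compile_me.c
#include <stdio.h>

int main() {
    printf("Hello, Compiler!\n");
    return "zero";  // returning a string where an int is expected
}
```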

* The IR Generator translates the AST to IR.

_Run the clang frontend on compile_me.c to generate LLVM IR:_

```
clang -S -emit-llvm -o llvm_ir.ll compile_me.c
```

_The main function in llvm_ir.ll:_

```
; llvm_ir.ll
@.str = private unnamed_addr constant [18 x i8] c"Hello, Compiler!\0A\00", align 1

define i32 @main() {
    %1 = alloca i32, align 4 ; <- memory allocated on the stack
    store i32 0, i32* %1, align 4
    %2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str, i32 0, i32 0))
    ret i32 0
}

declare i32 @printf(i8*, ...)
```

### The Optimizer

The job of the optimizer is to improve code efficiency based on its understanding of the program's runtime behavior. The optimizer takes IR as input and produces improved IR as output. LLVM's optimizer tool, `opt`, will optimize for processor speed with the flag `-O2` (capital O, two) and for size with the flag `-Os` (capital O, s).

Take a look at the difference between the LLVM IR code our frontend generated above and the result of running:

```
opt -O2 -S llvm_ir.ll -o optimized.ll
```

_The main function in optimized.ll:_

```
; optimized.ll

@str = private unnamed_addr constant [17 x i8] c"Hello, Compiler!\00"

define i32 @main() {
    %puts = tail call i32 @puts(i8* getelementptr inbounds ([17 x i8], [17 x i8]* @str, i64 0, i64 0))
    ret i32 0
}

declare i32 @puts(i8* nocapture readonly)
```

In the optimized version, main doesn't allocate memory on the stack, since it doesn't use any memory. The optimized code also calls `puts` instead of `printf` because none of `printf`'s formatting functionality was used.

Of course, the optimizer does more than just know when to use `puts` in lieu of `printf`. The optimizer also unrolls loops and inlines the results of simple calculations. Consider the program below, which adds two integers and prints the result.

```
// add.c
#include <stdio.h>

int main() {
    int a = 5, b = 10, c = a + b;
    printf("%i + %i = %i\n", a, b, c);
}
```

_Here is the unoptimized LLVM IR:_

```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1

define i32 @main() {
    %1 = alloca i32, align 4        ; <- allocate stack space for var a
    %2 = alloca i32, align 4        ; <- allocate stack space for var b
    %3 = alloca i32, align 4        ; <- allocate stack space for var c
    store i32 5, i32* %1, align 4   ; <- store 5 at memory location %1
    store i32 10, i32* %2, align 4  ; <- store 10 at memory location %2
    %4 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %4
    %5 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %5
    %6 = add nsw i32 %4, %5         ; <- add the values in registers %4 and %5; put the result in register %6
    store i32 %6, i32* %3, align 4  ; <- put the value of register %6 into memory address %3
    %7 = load i32, i32* %1, align 4 ; <- load the value at memory address %1 into register %7
    %8 = load i32, i32* %2, align 4 ; <- load the value at memory address %2 into register %8
    %9 = load i32, i32* %3, align 4 ; <- load the value at memory address %3 into register %9
    %10 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0), i32 %7, i32 %8, i32 %9)
    ret i32 0
}

declare i32 @printf(i8*, ...)
```

_Here is the optimized LLVM IR:_

```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1

define i32 @main() {
    %1 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i64 0, i64 0), i32 5, i32 10, i32 15)
    ret i32 0
}

declare i32 @printf(i8* nocapture readonly, ...)
```

Our optimized main function is essentially the `printf` call and the `ret` of the unoptimized version, with the variable values inlined. `opt` calculated the addition because all of the variables were constant. Pretty cool, huh?

### The Backend

LLVM's backend tool is `llc`. It generates machine code from LLVM IR input in three phases:

* Instruction selection is the mapping of IR instructions to the instruction set of the target machine. This step uses an infinite namespace of virtual registers.
* Register allocation is the mapping of virtual registers to actual registers on your target architecture. My CPU has an x86 architecture, which is limited to 16 registers. However, the compiler will use as few registers as possible.
* Instruction scheduling is the reordering of operations to reflect the target machine's performance constraints.

_Running this command will produce some machine code!_

```
llc -o compiled-assembly.s optimized.ll
```

```
_main:
    pushq   %rbp
    movq    %rsp, %rbp
    leaq    L_str(%rip), %rdi
    callq   _puts
    xorl    %eax, %eax
    popq    %rbp
    retq
L_str:
    .asciz  "Hello, Compiler!"
```

This program is x86 assembly language, the human-readable syntax for the language my computer speaks. Someone finally understands me 🙌
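
To close the loop, you can assemble and link that output and run the result (clang drives the system assembler and linker when handed a `.s` file; this assumes you run it on the same machine that produced the assembly):

```
clang compiled-assembly.s -o hello
./hello
Hello, Compiler!
```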

* * *

Resources

1. [Engineering a Compiler][4]
2. [Getting Started with LLVM Core Libraries][5]

--------------------------------------------------------------------------------

via: https://nicoleorchard.com/blog/compilers

Author: [Nicole Orchard][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]:https://nicoleorchard.com/

[1]:http://clang.llvm.org/
[2]:http://llvm.org/docs/CommandGuide/opt.html
[3]:http://llvm.org/docs/CommandGuide/llc.html
[4]:https://www.amazon.com/Engineering-Compiler-Second-Keith-Cooper/dp/012088478X
[5]:https://www.amazon.com/Getting-Started-LLVM-Core-Libraries/dp/1782166920
[7]:http://llvm.org/
@ -1,188 +0,0 @@

haoqixu translating

[Kubernetes at GitHub][10]
============================================================

Over the last year, GitHub has gradually evolved the infrastructure that runs the Ruby on Rails application responsible for `github.com` and `api.github.com`. We reached a big milestone recently: all web and API requests are served by containers running in [Kubernetes][13] clusters deployed on our [metal cloud][14]. Moving a critical application to Kubernetes was a fun challenge, and we're excited to share some of what we've learned with you today.

### Why change?

Before this move, our main Ruby on Rails application (we call it `github/github`) was configured a lot like it was eight years ago: [Unicorn][16] processes managed by a Ruby process manager called [God][17], running on Puppet-managed servers. Similarly, our [chatops deployment][18] worked a lot like it did when it was first introduced: Capistrano established SSH connections to each frontend server, then [updated the code in place][19] and restarted application processes. When peak request load exceeded available frontend CPU capacity, GitHub site reliability engineers would [provision additional capacity][20] and add it to the pool of active frontend servers.

![Previous unicorn service design](https://githubengineering.com/images/kubernetes-at-github/before.png)

While our basic production approach didn't change much in those years, GitHub itself changed a lot: new features, larger software communities, more GitHubbers on staff, and way more requests per second. As we grew, this approach began to exhibit new problems. Many teams wanted to extract the functionality they were responsible for from this large application into a smaller service that could run and be deployed independently. As the number of services we ran increased, the SRE team began supporting similar configurations for dozens of other applications, increasing the percentage of our time spent on server maintenance, provisioning, and other work not directly related to improving the overall GitHub experience. New services took days, weeks, or months to deploy, depending on their complexity and the SRE team's availability. Over time, it became clear that this approach did not provide our engineers the flexibility they needed to continue building a world-class service. Our engineers needed a self-service platform they could use to experiment, deploy, and scale new services. We also needed that same platform to fit the needs of our core Ruby on Rails application, so that engineers and/or robots could respond to changes in demand by allocating additional compute resources in seconds instead of hours, days, or longer.

In response to those needs, the SRE, Platform, and Developer Experience teams began a joint project that led us from an initial evaluation of container orchestration platforms to where we are today: deploying the code that powers `github.com` and `api.github.com` to Kubernetes clusters dozens of times per day. This post aims to provide a high-level overview of the work involved in that journey.
|
||||
|
||||
### Why Kubernetes?
|
||||
|
||||
As a part of evaluating the existing landscape of “platform as a service” tools, we took a closer look at Kubernetes, a project from Google that described itself at the time as _an open-source system for automating deployment, scaling, and management of containerized applications_. Several qualities of Kubernetes stood out from the other platforms we evaluated: the vibrant open source community supporting the project, the first-run experience (which allowed us to deploy a small cluster and an application in the first few hours of our initial experiment), and a wealth of information available about the [experience][22] that motivated its design.
|
||||
|
||||
These experiments quickly grew in scope: a small team was assembled to build a Kubernetes cluster and deployment tooling in support of an upcoming hack week to gain some practical experience with the platform. Our experience with this project as well as the feedback from engineers who used it was overwhelmingly positive. It was time to expand our experiments, so we started planning a larger rollout.
|
||||
|
||||
### Why start with `github/github`?
|
||||
|
||||
At the earliest stages of this project, we made a deliberate decision to target the migration of a critical workload: `github/github`. Many factors contributed to this decision, but a few stood out:
|
||||
|
||||
* We knew that the deep knowledge of this application throughout GitHub would be useful during the process of migration.
|
||||
|
||||
* We needed self-service capacity expansion tooling to handle continued growth.
|
||||
|
||||
* We wanted to make sure the habits and patterns we developed were suitable for large applications as well as smaller services.
|
||||
|
||||
* We wanted to better insulate the app from differences between development, staging, production, enterprise, and other environments.
|
||||
|
||||
* We knew that migrating a critical, high-visibility workload would encourage further Kubernetes adoption at GitHub.
|
||||
|
||||
Given the critical nature of the workload we chose to migrate, we needed to build a high level of operational confidence before serving any production traffic.
|
||||
|
||||
### Rapid iteration and confidence building with a review lab
|
||||
|
||||
As a part of this migration, we designed, prototyped, and validated a replacement for the service currently provided by our frontend servers using Kubernetes primitives like Pods, Deployments, and Services. Some validation of this new design could be performed by running `github/github`’s existing test suites in a container rather than on a server configured similarly to frontend servers, but we also needed to observe how this container behaved as a part of a larger set of Kubernetes resources. It quickly became clear that an environment that supported exploratory testing of the combination of Kubernetes and the services we intended to run would be necessary during the validation phase.
|
||||
|
||||
Around the same time, we observed that our existing patterns for exploratory testing of `github/github` pull requests had begun to show signs of growing pains. As the rate of deploys increased along with the number of engineers working on the project, so did the utilization of the several [additional deploy environments][25] used as a part of the process of validating a pull request to `github/github`. The small number of fully-featured deploy environments were usually booked solid during peak working hours, which slowed the process of deploying a pull request. Engineers frequently requested the ability to test more of the various production subsystems on “branch lab.” While branch lab allowed concurrent deployment from many engineers, it only started a single Unicorn process for each, which meant it was only useful when testing API and UI changes. These needs overlapped substantially enough for us to combine the projects and start work on a new Kubernetes-powered deployment environment for `github/github` called “review lab.”
|
||||
|
||||
In the process of building review lab, we shipped a handful of sub-projects, each of which could likely be covered in its own blog post. Along the way, we shipped:
|
||||
|
||||
* A Kubernetes cluster running in an AWS VPC managed using a combination of [Terraform][2] and [kops][3].
|
||||
|
||||
* A set of Bash integration tests that exercise ephemeral Kubernetes clusters, used heavily in the beginning of the project to gain confidence in Kubernetes.
|
||||
|
||||
* A Dockerfile for `github/github`.
|
||||
|
||||
* Enhancements to our internal CI platform to support building and publishing containers to a container registry.
|
||||
|
||||
* YAML representations of 50+ Kubernetes resources, checked into `github/github` (a minimal sketch of one such resource follows this list).
|
||||
|
||||
* Enhancements to our internal deployment application to support deploying Kubernetes resources from a repository into a Kubernetes namespace, as well as the creation of Kubernetes secrets from our internal secret store.
|
||||
|
||||
* A service that combines haproxy and consul-template to route traffic from Unicorn pods to the existing services that publish their service information in Consul.
|
||||
|
||||
* A service that reads Kubernetes events and sends abnormal ones to our internal error tracking system.
|
||||
|
||||
* A [chatops-rpc][4]-compatible service called `kube-me` that exposes a limited set of `kubectl` commands to users via chat.
|
||||
|
||||
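Here is the sketch promised above: a minimal Kubernetes Deployment of the general shape such a checked-in resource might take. Every detail below (names, image path, replica count, port) is hypothetical; the post doesn’t publish GitHub’s actual manifests.

```
apiVersion: apps/v1              # current API group; 2017-era manifests would have used an older apiVersion
kind: Deployment
metadata:
  name: unicorn-web              # hypothetical resource name
  labels:
    app: unicorn-web
spec:
  replicas: 3                    # hypothetical; real counts would be far larger
  selector:
    matchLabels:
      app: unicorn-web
  template:
    metadata:
      labels:
        app: unicorn-web
    spec:
      containers:
        - name: unicorn
          # hypothetical image path, built and published by the CI enhancements above
          image: registry.example.com/github/github:latest
          ports:
            - containerPort: 8080   # hypothetical port the Unicorn workers listen on
```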
The end result is a chat-based interface for creating an isolated deployment of GitHub for any pull request. Once a pull request has passed all required CI jobs, a user can deploy it to review lab like so:
|
||||
|
||||
![jnewland](https://avatars0.githubusercontent.com/jnewland?v=3&s=22)
|
||||
**jnewland**.deploy https://github.com/github/github/pull/4815162342 to review-lab
|
||||
![hubot](https://avatars1.githubusercontent.com/hubot?v=3&s=22)
|
||||
**Hubot** [@jnewland][1]'s review-lab deployment of github/add-pre-stop-hook (00cafefe) is done! (12 ConfigMaps, 17 Deployments, 1 Ingress, 1 Namespace, 6 Secrets, and 23 Services) (77.62s) Your lab is available at https://jnewland.review-lab.github.com
|
||||
|
||||
Like branch lab before it, labs are cleaned up one day after their last deploy. As each lab is created in its own Kubernetes namespace, cleanup is as simple as deleting the namespace, which our deployment system performs automatically when necessary.
|
||||
|
||||
Review lab was a successful project with a number of positive outcomes. Before making this environment generally available to engineers, it served as an essential proving ground and prototyping environment for our Kubernetes cluster design as well as the design and configuration of the Kubernetes resources that now describe the `github/github` Unicorn workload. After release, it exposed a large number of engineers to a new style of deployment, helping us build confidence via feedback from interested engineers as well as continued use from engineers who didn’t notice any change. And just recently, we observed some engineers on our High Availability team use review lab to experiment with the interaction between Unicorn and the behavior of a new experimental subsystem by deploying it to a shared lab. We’re extremely pleased with the way that this environment empowers engineers to experiment and solve problems in a self-service manner.
|
||||
|
||||
![Deploys per day to branch lab and review lab](https://githubengineering.com/images/kubernetes-at-github/deploys.png)
|
||||
|
||||
### Kubernetes on Metal
|
||||
|
||||
With review lab shipped, our attention shifted to `github.com`. To satisfy the performance and reliability requirements of our flagship service - which depends on low-latency access to other data services - we needed to build out Kubernetes infrastructure that supported the [metal cloud][27] we run in our physical data centers and POPs. Again, nearly a dozen subprojects were involved in this effort:
|
||||
|
||||
* A timely and thorough post about [container networking][5] helped us select the [Calico][6] network provider, which provided the out-of-the-box functionality we needed to ship a cluster quickly in `ipip` mode while giving us the flexibility to explore peering with our network infrastructure later.
|
||||
|
||||
* Following no less than a dozen reads of [@kelseyhightower][7]’s indispensable [Kubernetes the hard way][8], we assembled a handful of manually provisioned servers into a temporary Kubernetes cluster that passed the same set of integration tests we used to exercise our AWS clusters.
|
||||
|
||||
* We built a small tool to generate the CA and configuration necessary for each cluster in a format that could be consumed by our internal Puppet and secret systems.
|
||||
|
||||
* We Puppetized the configuration of two instance roles - Kubernetes nodes and Kubernetes apiservers - in a fashion that allows a user to provide the name of an already-configured cluster to join at provision time.
|
||||
|
||||
* We built a small Go service to consume container logs, append metadata in key/value format to each line, and send them to the hosts’ local syslog endpoint.
|
||||
|
||||
* We enhanced [GLB][9], our internal load balancing service, to support Kubernetes NodePort Services (see the sketch after this list).
|
||||
|
||||
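For context on that last item: a NodePort Service asks Kubernetes to open the same port on every node, which gives an external load balancer like GLB a stable set of backends to target. A minimal sketch, with hypothetical names and ports:

```
apiVersion: v1
kind: Service
metadata:
  name: unicorn-web              # hypothetical; matches the Deployment sketch earlier
spec:
  type: NodePort
  selector:
    app: unicorn-web             # routes to pods carrying this label
  ports:
    - port: 80                   # port exposed inside the cluster
      targetPort: 8080           # container port on the pods
      nodePort: 30080            # port opened on every node for the external load balancer
```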
The combination of all of this hard work resulted in a cluster that passed our internal acceptance tests. Given that, we were fairly confident that the same set of inputs (the Kubernetes resources in use by review lab), the same set of data (the network services review lab connected to over a VPN), and the same tools would create a similar result. In less than a week’s time - much of which was spent on internal communication and sequencing in case the migration had significant impact - we were able to migrate this entire workload from a Kubernetes cluster running on AWS to one running inside one of our data centers.
|
||||
|
||||
### Raising the confidence bar
|
||||
|
||||
With a successful and repeatable pattern for assembling Kubernetes clusters on our metal cloud, it was time to build confidence in the ability of our Unicorn deployment to replace the pool of current frontend servers. At GitHub, it is common practice for engineers and their teams to validate new functionality by creating a [Flipper][29] feature and then opting into it as soon as it is viable to do so. After enhancing our deployment system to deploy a new set of Kubernetes resources to a `github-production` namespace in parallel with our existing production servers and enhancing GLB to support routing staff requests to a different backend based on a Flipper-influenced cookie, we allowed staff to opt-in to the experimental Kubernetes backend with a button in our [mission control bar][30]:
|
||||
|
||||
![Staff UI for opting-in to Kubernetes-powered infrastructure](https://githubengineering.com/images/kubernetes-at-github/button.png)
|
||||
|
||||
The load from internal users helped us find problems, fix bugs, and start getting comfortable with Kubernetes in production. During this period, we worked to increase our confidence by simulating procedures we anticipated performing in the future, writing runbooks, and performing failure tests. We also routed small amounts of production traffic to this cluster to confirm our assumptions about performance and reliability under load, starting with 100 requests per second and expanding later to 10% of the requests to `github.com` and `api.github.com`. With several of these simulations under our belt, we paused briefly to re-evaluate the risk of a full migration.
|
||||
|
||||
![Kubernetes unicorn service design](https://githubengineering.com/images/kubernetes-at-github/after.png)
|
||||
|
||||
### Cluster Groups
|
||||
|
||||
Several of our failure tests produced results we didn’t expect. In particular, a test that simulated the failure of a single apiserver node disrupted the cluster in a way that negatively impacted the availability of running workloads. Investigations into the results of these tests did not produce conclusive results, but helped us identify that the disruption was likely related to an interaction between the various clients that connect to the Kubernetes apiserver (like `calico-agent`, `kubelet`, `kube-proxy`, and `kube-controller-manager`) and our internal load balancer’s behavior during an apiserver node failure. Given that we had observed a Kubernetes cluster degrade in a way that might disrupt service, we started looking at running our flagship application on multiple clusters in each site and automating the process of diverting requests away from an unhealthy cluster to the other healthy ones.
|
||||
|
||||
Similar work was already on our roadmap to support deploying this application into multiple independently-operated sites, and other positive trade-offs of this approach - including presenting a viable story for low-disruption cluster upgrades and associating clusters with existing failure domains like shared network and power devices - influenced us to go down this route. We eventually settled on a design that uses our deployment system’s support for deploying to multiple “partitions” and enhanced it to support cluster-specific configuration via a custom Kubernetes resource annotation, forgoing the existing federation solutions for an approach that allowed us to use the business logic already present in our deployment system.
|
||||
|
||||
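The post doesn’t publish the annotation itself, so the key below is hypothetical, but the mechanism would look roughly like the earlier Deployment sketch with one extra line of metadata for the deployment tooling to read:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unicorn-web
  annotations:
    # Hypothetical annotation key: the deployment system reads it to decide
    # which cluster partition this resource should be applied to.
    deploy.github.example/partition: production-cluster-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: unicorn-web
  template:
    metadata:
      labels:
        app: unicorn-web
    spec:
      containers:
        - name: unicorn
          image: registry.example.com/github/github:latest   # hypothetical image path
```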
### From 10% to 100%
|
||||
|
||||
With Cluster Groups in place, we gradually converted frontend servers into Kubernetes nodes and increased the percentage of traffic routed to Kubernetes. Alongside a number of other responsible engineering groups, we completed the frontend transition in just over a month while keeping performance and error rates within our targets.
|
||||
|
||||
![Percentage of web traffic served by cluster](https://githubengineering.com/images/kubernetes-at-github/rollout.png)
|
||||
|
||||
During this migration, we encountered an issue that persists to this day: during times of high load and/or high rates of container churn, some of our Kubernetes nodes will kernel panic and reboot. While we’re not satisfied with this situation and are continuing to investigate it with high priority, we’re happy that Kubernetes is able to route around these failures automatically and continue serving traffic within our target error bounds. We’ve performed a handful of failure tests that simulated kernel panics with `echo c > /proc/sysrq-trigger` and have found this to be a useful addition to our failure testing patterns.
|
||||
|
||||
### What’s next?
|
||||
|
||||
We’re inspired by our experience migrating this application to Kubernetes, and are looking forward to migrating more soon. While the scope of our first migration was intentionally limited to stateless workloads, we’re excited about experimenting with patterns for running stateful services on Kubernetes.
|
||||
|
||||
During the last phase of this project, we also shipped a workflow for deploying new applications and services into a similar group of Kubernetes clusters. Over the last several months, engineers have already deployed dozens of applications to this cluster. Each of these applications would have previously required configuration management and provisioning support from SREs. With a self-service application provisioning workflow in place, SRE can devote more of our time to delivering infrastructure products to the rest of the engineering organization in support of our best practices, building toward a faster and more resilient GitHub experience for everyone.
|
||||
|
||||
### Thanks
|
||||
|
||||
We’d like to extend our deep thanks to the entire Kubernetes team for their software, words, and guidance along the way. I’d also like to thank the following GitHubbers for their incredible work on this project: [@samlambert][35], [@jssjr][36], [@keithduncan][37], [@jbarnette][38], [@sophaskins][39], [@aaronbbrown][40], [@rhettg][41], [@bbasata][42], and [@gamefiend][43].
|
||||
|
||||
### Come work with us!
|
||||
|
||||
Want to help the GitHub SRE team solve interesting problems like this? We’d love for you to join us. Apply [here][45]!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://githubengineering.com/kubernetes-at-github/
|
||||
|
||||
作者:[jnewland][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/jnewland
|
||||
[1]:https://github.com/jnewland
|
||||
[2]:https://github.com/hashicorp/terraform
|
||||
[3]:https://github.com/kubernetes/kops
|
||||
[4]:https://github.com/bhuga/hubot-chatops-rpc
|
||||
[5]:https://jvns.ca/blog/2016/12/22/container-networking/
|
||||
[6]:https://www.projectcalico.org/
|
||||
[7]:https://github.com/kelseyhightower
|
||||
[8]:https://github.com/kelseyhightower/kubernetes-the-hard-way
|
||||
[9]:https://githubengineering.com/introducing-glb/
|
||||
[10]:https://githubengineering.com/kubernetes-at-github/
|
||||
[11]:https://github.com/jnewland
|
||||
[12]:https://github.com/jnewland
|
||||
[13]:https://github.com/kubernetes/kubernetes/
|
||||
[14]:https://githubengineering.com/githubs-metal-cloud/
|
||||
[15]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#why-change
|
||||
[16]:https://github.com/blog/517-unicorn
|
||||
[17]:http://godrb.com/
|
||||
[18]:https://githubengineering.com/deploying-branches-to-github-com/
|
||||
[19]:https://github.com/blog/470-deployment-script-spring-cleaning
|
||||
[20]:https://githubengineering.com/githubs-metal-cloud/
|
||||
[21]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#why-kubernetes
|
||||
[22]:http://queue.acm.org/detail.cfm?id=2898444
|
||||
[23]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#why-start-with-githubgithub
|
||||
[24]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#rapid-iteration-and-confidence-building-with-a-review-lab
|
||||
[25]:https://githubengineering.com/deploying-branches-to-github-com/#deploy-environments
|
||||
[26]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#kubernetes-on-metal
|
||||
[27]:https://githubengineering.com/githubs-metal-cloud/
|
||||
[28]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#raising-the-confidence-bar
|
||||
[29]:https://github.com/jnunemaker/flipper
|
||||
[30]:https://github.com/blog/1252-how-we-keep-github-fast
|
||||
[31]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#cluster-groups
|
||||
[32]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#from-10-to-100
|
||||
[33]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#whats-next
|
||||
[34]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#thanks
|
||||
[35]:https://github.com/samlambert
|
||||
[36]:https://github.com/jssjr
|
||||
[37]:https://github.com/keithduncan
|
||||
[38]:https://github.com/jbarnette
|
||||
[39]:https://github.com/sophaskins
|
||||
[40]:https://github.com/aaronbbrown
|
||||
[41]:https://github.com/rhettg
|
||||
[42]:https://github.com/bbasata
|
||||
[43]:https://github.com/gamefiend
|
||||
[44]:https://githubengineering.com/kubernetes-at-github/?utm_source=webopsweekly&utm_medium=email#come-work-with-us
|
||||
[45]:https://boards.greenhouse.io/github/jobs/788701
|
@ -1,4 +1,4 @@
|
||||
Your Serverless Raspberry Pi cluster with Docker
|
||||
【翻译中 @haoqixu】Your Serverless Raspberry Pi cluster with Docker
|
||||
============================================================
|
||||
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Getting started with ImageMagick
|
||||
============================================================
|
||||
|
||||
|
@ -1,75 +0,0 @@
|
||||
Linux Installation Types: Server Vs. Desktop
|
||||
============================================================
|
||||
|
||||
The kernel is the heart of any Linux installation
|
||||
|
||||
|
||||
I have previously covered obtaining and installing Ubuntu Linux, and this time I will touch on desktop and server installations. Each type of installation addresses certain needs, and each is downloaded separately from Ubuntu. You can choose the one you need from _[Ubuntu.com/downloads][1]_.
|
||||
|
||||
Regardless of the installation type, there are some similarities.
|
||||
|
||||
|
||||
![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_1.jpg)
|
||||
|
||||
**Packages can be added from the desktop system graphical user interface or from the server system command line.**
|
||||
|
||||
Both utilize the same kernel and package management system. The package management system draws on a repository of programs precompiled to run on almost any Ubuntu system. Programs are grouped into packages, and packages are then installed. Packages can be added from the graphical user interface on a desktop system or from the command line on a server system.
|
||||
|
||||
Programs are installed with a tool called apt-get, the package manager’s command-line front end. The end user simply types “apt-get install (package-name)” at the command line, and Ubuntu will automatically fetch the software package and install it.
|
||||
|
||||
Packages usually install commands whose documentation is accessed via the man pages (a topic unto itself). You read them by typing “man (command).” This brings up a page that describes the command, with details on usage. An end user can also Google any Linux command or package and find a wealth of information about it as well.
|
||||
|
||||
As an example, after installing the Network Attached Storage suite of packages, one would administer it via the command line, with the GUI, or with a program called Webmin. Webmin installs a web-based administrative interface for configuring most Linux packages, and it’s popular with the server-only install crowd because it installs as a webpage and does not require a GUI. It also allows for administering the server remotely.
|
||||
|
||||
Most, if not all, of these Linux-based package installs have videos and web pages dedicated to helping you run whatever package you install. Just search YouTube for “Linux Ubuntu NAS,” and you will find a video instructing you on how to set up and configure this service. There are also videos dedicated to the setup and operation of Webmin.
|
||||
|
||||
The kernel is the heart of any Linux installation. Since the kernel is modular, it is incredibly small (as the name suggests). I have run a Linux server installation from a small 32 MB CompactFlash card. That is not a typo — 32 MB of space! Most of the space utilized by a Linux system is used by the packages installed.
|
||||
|
||||
|
||||
![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_2.jpg)
|
||||
|
||||
**The desktop install ISO is fairly large and has a number of optional install packages not found on the server install ISO. This installation is designed for workstation or daily desktop use.**
|
||||
|
||||
**SERVER**
|
||||
|
||||
The server install ISO is the smallest download from Ubuntu. It is a stripped-down version of the operating system, optimized for server operations. This version does not have a GUI; by default, it is run entirely from the command line.
|
||||
|
||||
Removing the GUI and other components streamlines the system and maximizes performance. Any necessary packages that are not initially installed can be added later via the command-line package manager. Since there is no GUI, all configuration, troubleshooting, and package management must be done from the command line. A lot of administrators will use the server installation to get a clean or minimal system and then add only the specific packages they require. This can even include adding a desktop GUI to create a streamlined desktop system.
|
||||
|
||||
A Linux server could be used at the radio station as an Apache web server or a database server. Those are the real apps that require the horsepower, and that’s why they are usually run with a server install and no GUI. SNORT and Cacti are other applications that could be run on your Linux server (both covered in a previous article, found here: [_http://tinyurl.com/yd8dyegu_][2] ).
|
||||
|
||||
|
||||
![](http://www.radiomagonline.com/Portals/0/radio-managing-tech-Ubuntu_3.jpg)
|
||||
|
||||
**Packages are installed via the apt-get package manager system, just like the server install. The difference between the two is that on a desktop install, the apt-get package manager has a nice GUI front end.**
|
||||
|
||||
**DESKTOP**
|
||||
|
||||
The desktop install ISO is fairly large and has a number of optional install packages not found on the server install ISO. This installation is designed for workstation or daily desktop use. It lets you customize which packages (programs) are installed, or you can select a default desktop configuration.
|
||||
|
||||
Packages are installed via the apt-get package manager system, just like the server install. The difference between the two is that on a desktop install, the apt-get package manager has a nice GUI front end. This allows packages to be installed on or removed from the system easily, with the click of a mouse! The desktop install will set up a GUI and a lot of packages related to a desktop operating system.
|
||||
|
||||
This system is ready to go after being installed and can be a nice replacement for your Windows or Mac desktop computer. It has a lot of packages, including an office suite and a web browser.
|
||||
|
||||
Linux is a mature and powerful operating system. Regardless of the installation type, it can be configured to fit almost any need. From a powerful database server to a basic desktop operating system used for web browsing and writing letters to grandma, the sky is the limit and the packages available are almost inexhaustible. If you can think of a problem that requires a computerized solution, Linux probably has software for free or low cost to address that problem.
|
||||
|
||||
By offering two installation starting points, Ubuntu has done a great job of getting people started in the right direction.
|
||||
|
||||
_Cottingham is a former radio chief engineer, now working in streaming media._
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.radiomagonline.com/deep-dig/0005/linux-installation-types-server-vs-desktop/39123
|
||||
|
||||
作者:[Chris Cottingham][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://www.ubuntu.com/download
|
||||
[2]:http://tinyurl.com/yd8dyegu
|
||||
[3]:http://www.radiomagonline.com/author/chris-cottingham
|
@ -1,116 +0,0 @@
|
||||
Manage your finances with LibreOffice Calc
|
||||
============================================================
|
||||
|
||||
### Do you wonder where all your money goes? This well-designed spreadsheet can answer that question at a glance.
|
||||
|
||||
![Get control of your finances with LibreOffice Calc](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic.png?itok=7nAi_Db_ "Get control of your finances with LibreOffice Calc")
|
||||
Image by: opensource.com
|
||||
|
||||
If you're like most people, you don't have a bottomless bank account. You probably need to watch your monthly spending carefully.
|
||||
|
||||
There are many ways to do that, but the quickest and easiest way is to use a spreadsheet. Many folks create a very basic spreadsheet to do the job, one that consists of two long columns with a total at the bottom. That works, but it's kind of blah.
|
||||
|
||||
I'm going to walk you through creating a more scannable and (I think) more visually appealing personal expense spreadsheet using LibreOffice Calc.
|
||||
|
||||
Say you don't use LibreOffice? That's OK. You can use the information in this article with spreadsheet tools like [Gnumeric][7], [Calligra Sheets][8], or [EtherCalc][9].
|
||||
|
||||
### Start by making a list of your expenses
|
||||
|
||||
Don't bother firing up LibreOffice Calc just yet. Sit down with pen and paper and list your regular monthly expenses. Take your time, go through your records, and note everything, no matter how small. Don't worry about how much you're spending. Focus on where you're putting your money.
|
||||
|
||||
Once you've done that, group your expenses under headings that make the most sense to you. For example, group your gas, electric, and water bills under the heading Utilities. You might also want to have a group of expenses with a name like Various for those unexpected expenses we all run into each month.
|
||||
|
||||
### Create the spreadsheet
|
||||
|
||||
Start LibreOffice Calc and create an empty spreadsheet. Leave three blank rows at the top of the spreadsheet. We'll come back to them.
|
||||
|
||||
There's a reason you grouped your expenses: Those groups will become blocks on the spreadsheet. Let's start by putting your most important expense group (e.g., Home) at the top of the spreadsheet.
|
||||
|
||||
Type that expense group's name in the first cell of the fourth row from the top of the sheet. Make it stand out by putting it in a larger (12 points is good), bold font.
|
||||
|
||||
In the row below that heading, add the following three columns:
|
||||
|
||||
* Expense
|
||||
|
||||
* Date
|
||||
|
||||
* Amount
|
||||
|
||||
Type the names of the expenses within that group into the cells under the Expense column.
|
||||
|
||||
Next, select the cells under the Date heading. Click the **Format** menu and select **Number Format > Date**. Repeat that for the cells under the Amount heading, and choose **Number Format > Currency**.
|
||||
|
||||
You'll have something that looks like this:
|
||||
|
||||
|
||||
|
||||
![A group of expenses](https://opensource.com/sites/default/files/u128651/spreadsheet-expense-block.png "A group of expenses")
|
||||
|
||||
That's one group of expenses out of the way. Instead of creating a new block for each expense group, copy what you created and paste it beside the first block. I recommend having rows of three blocks, with an empty column between them.
|
||||
|
||||
You'll have something like this:
|
||||
|
||||
|
||||
|
||||
![A row of expenses](https://opensource.com/sites/default/files/u128651/spreadsheet-expense-rows.png "A row of expenses")
|
||||
|
||||
Repeat that for all your expense groups.
|
||||
|
||||
### Total it all up
|
||||
|
||||
It's one thing to see all your individual expenses, but you'll also want to view totals for each group of expenses and for all of your expenses together.
|
||||
|
||||
Let's start by totaling the amounts for each expense group. You can get LibreOffice Calc to do that automatically. Highlight a cell at the bottom of the Amount column and then click the **Sum** button on the Formula toolbar.
|
||||
|
||||
|
||||
|
||||
![The Sum button](https://opensource.com/sites/default/files/u128651/spreadsheet-sum-button.png "The Sum button")
|
||||
|
||||
Click the first cell in the Amount column and drag the cursor to the last cell in the column. Then, press Enter.
|
||||
|
||||
|
||||
|
||||
![An expense block with a total](https://opensource.com/sites/default/files/u128651/spreadsheet-totaled-expenses.png "An expense block with a total")
|
||||
|
||||
Now let's do something with the two or three blank rows you left at the top of the spreadsheet. That's where you'll put the grand total of all your expenses. I advise putting it up there so it's visible whenever you open the file.
|
||||
|
||||
In one of the cells at the top left of the sheet, type something like Grand Total or Total for the Month. Then, in the cell beside it, type **=SUM()**. That's the LibreOffice Calc function that adds the values of specific cells on a spreadsheet.
|
||||
|
||||
Instead of manually entering the names of the cells to add, press and hold Ctrl on your keyboard. Then click the cells where you totaled each group of expenses on your spreadsheet. The finished formula will look something like **=SUM(C10;G10;K10)**, although the exact cell references (and whether your locale separates them with semicolons or commas) will vary.
|
||||
|
||||
### Finishing up
|
||||
|
||||
You now have a sheet for tracking a month's expenses. Having a spreadsheet for only a single month's expenses is a bit of a waste, though. Why not use it to track your monthly expenses for the full year instead?
|
||||
|
||||
Right-click on the tab at the bottom of the spreadsheet and select **Move or Copy Sheet**. In the window that pops up, click **-move to end position-** and press Enter. Repeat that until you have 12 sheets—one for each month. Rename each sheet for a month of the year, then save the spreadsheet with a descriptive name like _Monthly Expenses 2017.ods_.
|
||||
|
||||
Now that your setup is out of the way, you're ready to use the spreadsheet. While using a spreadsheet to track your expenses won't, by itself, put you on firmer financial footing, it can help you keep on top of and control what you're spending each month.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
作者简介:
|
||||
|
||||
Scott Nesbitt - I'm a long-time user of free/open source software, and write various things for both fun and profit. I don't take myself too seriously. You can find me at these fine establishments on the web: Twitter, Mastodon, GitHub,
|
||||
|
||||
----------------
|
||||
|
||||
via: https://opensource.com/article/17/8/budget-libreoffice-calc
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/file/366811
|
||||
[2]:https://opensource.com/file/366831
|
||||
[3]:https://opensource.com/file/366821
|
||||
[4]:https://opensource.com/file/366826
|
||||
[5]:https://opensource.com/article/17/8/budget-libreoffice-calc?rate=C87fXAfGoIpA1OuF-Zx1nv-98UN9GgbFUz4tl_bKug4
|
||||
[6]:https://opensource.com/user/14925/feed
|
||||
[7]:http://www.gnumeric.org/
|
||||
[8]:https://www.calligra.org/sheets/
|
||||
[9]:https://ethercalc.net/
|
||||
[10]:https://opensource.com/users/scottnesbitt
|
||||
[11]:https://opensource.com/users/scottnesbitt
|
||||
[12]:https://opensource.com/article/17/8/budget-libreoffice-calc#comments
|
@ -1,125 +0,0 @@
|
||||
Using Ansible for deploying serverless applications
|
||||
============================================================
|
||||
|
||||
### Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.
|
||||
|
||||
![Using Ansible for deploying serverless applications](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY "Using Ansible for deploying serverless applications")
|
||||
Image by: opensource.com
|
||||
|
||||
[Ansible][8] is designed as the simplest deployment tool that actually works. What that means is that it's not a full programming language. You write YAML templates that define tasks, listing whatever steps you need to automate your job.
|
||||
|
||||
Most people think of Ansible as a souped-up version of "SSH in a 'for' loop," and that's true for simple use cases. But Ansible is really about _tasks_, not about SSH. For a lot of use cases we connect via SSH, but Ansible also supports things like Windows Remote Management (WinRM) for Windows machines, different protocols for network devices, and the HTTPS APIs that are the lingua franca of cloud services.
|
||||
|
||||
More on Ansible
|
||||
|
||||
* [How Ansible works][1]
|
||||
|
||||
* [Free Ansible eBooks][2]
|
||||
|
||||
* [Ansible quick start video][3]
|
||||
|
||||
* [Download and install Ansible][4]
|
||||
|
||||
In a cloud, Ansible can operate on two separate layers: the control plane and the on-instance resources. The control plane consists of everything _not_ running on the OS. This includes setting up networks, spawning instances, provisioning higher-level services like Amazon's S3 or DynamoDB, and everything else you need to keep your cloud infrastructure secure and serving customers.
|
||||
|
||||
On-instance work is what you already know Ansible for: starting and stopping services, templating config files, installing packages, and everything else OS-related that you can do over SSH.
|
||||
|
||||
Now, what about [serverless][9]? Depending on who you ask, serverless is either the ultimate extension of the continued rush to the public cloud or a wildly new paradigm where everything is an API call, and it's never been done before.
|
||||
|
||||
Ansible takes the first view. Before "serverless" was a term of art, users had to manage and provision EC2 instances, virtual private cloud (VPC) networks, and everything else. Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.
|
||||
|
||||
Before we go into a [Lambda][10] example, let's look at a simpler task for provisioning a CloudFormation stack:
|
||||
|
||||
```
- name: Build network
  cloudformation:
    stack_name: prod-vpc
    state: present
    template: base_vpc.yml
```
|
||||
|
||||
Writing a task like this takes just a couple minutes, but it brings the last semi-manual step involved in building your infrastructure—clicking "Create Stack"—into a playbook with everything else. Now your VPC is just another task you can call when building up a new region.
|
||||
|
||||
Since cloud providers are the real source of truth when it comes to what's actually happening in your account, Ansible has a number of ways to pull that back and use the IDs, names, and other parameters to filter and query running instances or networks. Take, for example, the **cloudformation_facts** module, which we can use to get the subnet IDs, network ranges, and other data back out of the template we just created.
|
||||
|
||||
```
- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack
```
|
||||
|
||||
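As a minimal sketch of how that registered data might then be consumed: the facts land under a `cloudformation` variable keyed by stack name, so a later task can reference the stack's outputs. The `PrivateSubnet` output name and the AMI ID below are hypothetical and depend entirely on what your template exports.

```
- name: Launch app servers into a subnet exported by the stack
  ec2:
    image: ami-0123456789abcdef0   # hypothetical AMI ID
    instance_type: t2.medium
    count: 2
    # "PrivateSubnet" is a hypothetical output name defined in base_vpc.yml
    vpc_subnet_id: "{{ cloudformation['prod-vpc'].stack_outputs.PrivateSubnet }}"
```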
For serverless applications, you'll definitely need a complement of Lambda functions in addition to any other DynamoDB tables, S3 buckets, and whatever else. Fortunately, by using the **lambda** modules, Lambda functions can be created in the same way as the stack from the last tasks:
|
||||
|
||||
```
- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function
```
|
||||
|
||||
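The shape of what the **lambda** module returns is easiest to learn by printing it. A quick debug task is a low-risk way to inspect the `new_function` variable registered above before wiring its values into later tasks:

```
- name: Inspect everything the lambda module reported back
  debug:
    var: new_function
```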
If you have another tool that you prefer for shipping the serverless parts of your application, that works too. The open source [Serverless Framework][11] has its own Ansible module that will work just as well:
|
||||
|
||||
```
- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls

- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts
```
|
||||
|
||||
That's not quite everything you need, since the serverless project itself must also exist; that's where you'll do the heavy lifting of defining your functions and event sources. For this example, we'll make a single function that responds to HTTP requests. The Serverless Framework uses YAML as its config language (as does Ansible), so this should look familiar.
|
||||
|
||||
```
# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get
```
|
||||
|
||||
At [AnsibleFest][12], I'll be covering this example and other in-depth deployment strategies to take the best advantage of the Ansible playbooks and infrastructure you already have, along with new serverless practices. Whether you're able to be there or not, I hope these examples can get you started using Ansible—whether or not you have any servers to manage.
|
||||
|
||||
_AnsibleFest is a day-long conference bringing together hundreds of Ansible users, developers, and industry partners. Join us for product updates, inspirational talks, tech deep dives, hands-on demos, and a day of networking. Get your tickets to AnsibleFest in San Francisco on September 7. Save 25% on [**registration**][6] with the discount code **OPENSOURCE**._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/8/ansible-serverless-applications
|
||||
|
||||
作者:[Ryan Scott Brown][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ryansb
|
||||
[1]:https://www.ansible.com/how-ansible-works?intcmp=701f2000000h4RcAAI
|
||||
[2]:https://www.ansible.com/ebooks?intcmp=701f2000000h4RcAAI
|
||||
[3]:https://www.ansible.com/quick-start-video?intcmp=701f2000000h4RcAAI
|
||||
[4]:https://docs.ansible.com/ansible/latest/intro_installation.html?intcmp=701f2000000h4RcAAI
|
||||
[5]:https://opensource.com/article/17/8/ansible-serverless-applications?rate=zOgBPQUEmiTctfbajpu_TddaH-8b-ay3pFCK0b43vFw
|
||||
[6]:https://www.eventbrite.com/e/ansiblefest-san-francisco-2017-tickets-34008433139
|
||||
[7]:https://opensource.com/user/12043/feed
|
||||
[8]:https://www.ansible.com/
|
||||
[9]:https://en.wikipedia.org/wiki/Serverless_computing
|
||||
[10]:https://aws.amazon.com/lambda/
|
||||
[11]:https://serverless.com/
|
||||
[12]:https://www.ansible.com/ansiblefest?intcmp=701f2000000h4RcAAI
|
||||
[13]:https://opensource.com/users/ryansb
|
||||
[14]:https://opensource.com/users/ryansb
|
@ -1,132 +0,0 @@
|
||||
Why open source should be the first choice for cloud-native environments
|
||||
============================================================
|
||||
|
||||
### For the same reasons Linux beat out proprietary software, open source should be the first choice for cloud-native environments.
|
||||
|
||||
|
||||
![Why open source should be the first choice for cloud-native environments](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn "Why open source should be the first choice for cloud-native environments")
|
||||
Image by:
|
||||
|
||||
[Jason Baker][6]. [CC BY-SA 4.0][7]. Source: [Cloud][8], [Globe][9]. Both [CC0][10].
|
||||
|
||||
Let's take a trip back in time to the 1990s, when proprietary software reigned, but open source was starting to come into its own. What caused this switch, and more importantly, what can we learn from it today as we shift into cloud-native environments?
|
||||
|
||||
### An infrastructure history lesson
|
||||
|
||||
I'll begin with a highly opinionated, open source view of infrastructure's history over the past 30 years. In the 1990s, Linux was merely a blip on most organizations' radar, if they knew anything about it. You had early buy-in from companies that quickly saw the benefits of Linux, mostly as a cheap replacement for proprietary Unix, but the standard way of deploying a server was with a proprietary form of Unix or—increasingly—by using Microsoft Windows NT.
|
||||
|
||||
The proprietary nature of this tooling provided a fertile ecosystem for even more proprietary software. Software was boxed up to be sold in stores. Even open source got in on the packaging game; you could buy Linux on the shelf instead of tying up your internet connection downloading it from free sources. Going to the store or working with your software vendor was just how you got software.
|
||||
|
||||
|
||||
|
||||
![Ubuntu box packaging on a Best Buy shelf](https://opensource.com/sites/default/files/u128651/ubuntu_box.png "Ubuntu box packaging on a Best Buy shelf")
|
||||
|
||||
Ubuntu box packaging on a Best Buy shelf
|
||||
|
||||
|
||||
|
||||
Where I think things changed was with the rise of the LAMP stack (Linux, Apache, MySQL, and PHP/Perl/Python). The LAMP stack is a major success story. It was stable, scalable, and relatively user-friendly. At the same time, I started seeing dissatisfaction with proprietary solutions. Once customers had this taste of open source in the LAMP stack, they changed what they expected from software, including:
|
||||
|
||||
* reluctance to be locked in by a vendor,
|
||||
|
||||
* concern over security,
|
||||
|
||||
* desire to fix bugs themselves, and
|
||||
|
||||
* recognition that innovation is stifled when software is developed in isolation.
|
||||
|
||||
On the technical side, we also saw a massive change in how organizations use software. Suddenly, downtime for a website was unacceptable. There was a move to a greater reliance on scaling and automation. In the past decade especially, we've seen a move from the traditional "pet" model of infrastructure to a "cattle" model, where servers can be swapped out and replaced, rather than kept and named. Companies work with massive amounts of data, causing a greater focus on data retention and the speed of processing and returning that data to users.
|
||||
|
||||
Open source, with open communities and increasing investment from major companies, provided the foundation to satisfy this change in how we started using software. Systems administrators' job descriptions began requiring skill with Linux and familiarity with open source technologies and methodologies. Through the open sourcing of things like Chef cookbooks and Puppet modules, administrators could share the configuration of their tooling. No longer were we individually configuring and tuning MySQL in silos; we created a system for handling the basic parts so we could focus on the more interesting engineering work that brought specific value to our employers.
|
||||
|
||||
Open source is ubiquitous today, and so is the tooling surrounding it. Companies once hostile to the idea are not only embracing open source through interoperability programs and outreach, but also by releasing their own open source software projects and building communities around it.
|
||||
|
||||
|
||||
|
||||
![A "Microsoft Linux" USB stick](https://opensource.com/sites/default/files/u128651/microsoft_linux_stick.png "A "Microsoft Linux" USB stick")
|
||||
|
||||
A "Microsoft
|
||||
![heart](https://opensource.com/sites/all/libraries/ckeditor/plugins/smiley/images/heart.png "heart")
|
||||
Linux" USB stick
|
||||
|
||||
### Turning to the cloud
|
||||
|
||||
Today, we're living in a world of DevOps and clouds. We've reaped the rewards of the innovation that open source movements brought. There's a sharp rise in what Tim O'Reilly called "[inner-sourcing][11]," where open source software development practices are adopted inside of companies. We're sharing deployment configurations for cloud platforms. Tools like Terraform are even allowing us to write and share how we deploy to specific platforms.
|
||||
|
||||
But what about these platforms themselves?
|
||||
|
||||
> "Most people just consume the cloud without thinking ... many users are sinking cost into infrastructure that is not theirs, and they are giving up data and information about themselves without thinking."
|
||||
> —Edward Snowden, OpenStack Summit, May 9, 2017
|
||||
|
||||
It's time to put more thought into our knee-jerk reaction to move or expand to the cloud.
|
||||
|
||||
As Snowden highlighted, we now risk losing control of the data that we maintain for our users and customers. Security aside, if we look back at our list of reasons for switching to open source, high among them were also concerns about vendor lock-in and the inability to drive innovation or even fix bugs.
|
||||
|
||||
Before you lock yourself and/or your company into a proprietary platform, consider the following questions:
|
||||
|
||||
* Is the service I'm using adhering to open standards, or am I locked in?
|
||||
|
||||
* What is my recourse if the service vendor goes out of business or is bought by a competitor?
|
||||
|
||||
* Does the vendor have a history of communicating clearly and honestly with its customers about downtime, security, etc.?
|
||||
|
||||
* Does the vendor respond to bugs and feature requests, even from smaller customers?
|
||||
|
||||
* Will the vendor use our data in a way that I'm not comfortable with (or worse, isn't allowed by our own customer agreements)?
|
||||
|
||||
* Does the vendor have a plan to handle long-term, escalating costs of growth, particularly if initial costs are low?
|
||||
|
||||
You may go through this questionnaire, discuss each of the points, and still decide to use a proprietary solution. That's fine; companies do it all the time. However, if you're like me and would rather find a more open solution while still benefiting from the cloud, you do have options.
|
||||
|
||||
### Beyond the proprietary cloud
|
||||
|
||||
As you look beyond proprietary cloud solutions, your first option to go open source is by investing in a cloud provider whose core runs on open source software. [OpenStack][12] is the industry leader, with more than 100 participating organizations and thousands of contributors in its seven-year history (including me for a time). The OpenStack project has proven that interfacing with multiple OpenStack-based clouds is not only possible, but relatively trivial. The APIs are similar between cloud companies, so you're not necessarily locked in to a specific OpenStack vendor. As an open source project, you can still influence the features, bug requests, and direction of the infrastructure.
|
||||
|
||||
The second option is to continue to use proprietary clouds at a basic level, but within an open source container orchestration system. Whether you select [DC/OS][13] (built on [Apache Mesos][14]), [Kubernetes][15], or [Docker in swarm mode][16], these platforms allow you to treat the virtual machines served up by proprietary cloud systems as independent Linux machines and install your platform on top of that. All you need is Linux—and you don't get immediately locked into cloud-specific tooling or platforms. Decisions can be made on a case-by-case basis about whether to use specific proprietary backends, but if you do, try to keep an eye toward the future should a move be required.
|
||||
|
||||
With either option, you also have the choice to depart from the cloud entirely. You can deploy your own OpenStack cloud or move your container platform in-house to your own data center.
|
||||
|
||||
### Making a moonshot
|
||||
|
||||
To conclude, I'd like to talk a bit about open source project infrastructures. Back in March, participants from various open source projects convened at the [Southern California Linux Expo][17] to talk about running open source infrastructures for their projects. (For more, read my [summary of this event][18].) I see the work these projects are doing as the final step in the open sourcing of infrastructure. Beyond the basic sharing that we're doing now, I believe companies and organizations can make far more of their infrastructures open source without giving up the "secret sauce" that distinguishes them from competitors.
|
||||
|
||||
The open source projects that have open sourced their infrastructures have proven the value of allowing multiple companies and organizations to submit educated bug reports, and even patches and features, to their infrastructure. Suddenly you can invite part-time contributors. Your customers can derive confidence by knowing what your infrastructure looks like "under the hood."
|
||||
|
||||
Want more evidence? Visit [Open Source Infrastructure][19]'s website to learn more about the projects making their infrastructures open source (and the extensive amount of infrastructure they've released).
|
||||
|
||||
_Learn more in Elizabeth K. Joseph's talk, [The Open Sourcing of Infrastructure][4], at FOSSCON August 26th in Philadelphia._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/8/open-sourcing-infrastructure
|
||||
|
||||
作者:[Elizabeth K. Joseph][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/pleia2
|
||||
[1]:https://opensource.com/file/366596
|
||||
[2]:https://opensource.com/file/366591
|
||||
[3]:https://opensource.com/article/17/8/open-sourcing-infrastructure?rate=PdT-huv5y5HFZVMHOoRoo_qd95RG70y4DARqU5pzgkU
|
||||
[4]:https://fosscon.us/node/12637
|
||||
[5]:https://opensource.com/user/25923/feed
|
||||
[6]:https://opensource.com/users/jason-baker
|
||||
[7]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]:https://pixabay.com/en/clouds-sky-cloud-dark-clouds-1473311/
|
||||
[9]:https://pixabay.com/en/globe-planet-earth-world-1015311/
|
||||
[10]:https://creativecommons.org/publicdomain/zero/1.0/
|
||||
[11]:https://opensource.com/life/16/11/create-internal-innersource-community
|
||||
[12]:https://www.openstack.org/
|
||||
[13]:https://dcos.io/
|
||||
[14]:http://mesos.apache.org/
|
||||
[15]:https://kubernetes.io/
|
||||
[16]:https://docs.docker.com/engine/swarm/
|
||||
[17]:https://www.socallinuxexpo.org/
|
||||
[18]:https://opensource.com/article/17/3/growth-open-source-project-infrastructures
|
||||
[19]:https://opensourceinfra.org/
|
||||
[20]:https://opensource.com/users/pleia2
|
||||
[21]:https://opensource.com/users/pleia2
|
@ -1,49 +0,0 @@
|
||||
Understanding OPNFV Starts Here
|
||||
============================================================
|
||||
|
||||
![OPNFV](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/network-transformation.png?itok=uNTYBeQb "OPNFV")
|
||||
The "Understanding OPNFV" book provides a high-level understanding of what OPNFV is and how it can help you or your organization. Download now.[Creative Commons Zero][1]Pixabay
|
||||
|
||||
If telecom operators or enterprises were to build their networks from scratch today, they would likely build them as software-defined resources, similar to Google or Facebook’s infrastructure. That’s the premise of Network Functions Virtualization (NFV).
|
||||
|
||||
NFV is a once-in-a-generation disruption that will completely transform how networks are built and operated. And [OPNFV][3] is a leading open source NFV project that aims to accelerate the adoption of this technology.
|
||||
|
||||
Are you a telecom operator or connected enterprise employee wondering which open source projects might help you with your NFV transformation initiatives? Or a technology vendor attempting to position your products and services in the new NFV world? Or perhaps an engineer, network operator or business leader wanting to progress your career using open source projects (case in point, in 2013 Rackspace [stated][4] that network engineers with OpenStack skills made, on average, 13 percent more salary than their counterparts)? If any of this applies to you, the _Understanding OPNFV_ book is a perfect resource for you.
|
||||
|
||||
![OPNFV Book](https://www.linux.com/sites/lcom/files/understanding-opnfv.jpeg)
|
||||
In 11 easy-to-read chapters and over 144 pages, this book (written by Nick Chase from Mirantis and me) covers an entire range of topics from an overview of NFV, NFV transformation, all aspects of the OPNFV project, to VNF onboarding. After reading this book, you will have an excellent high-level understanding of what OPNFV is and how it can help you or your organization. This book is not specifically meant for developers, though it may be useful for background information. If you are a developer looking to get involved in a specific OPNFV project as a contributor, then [wiki.opnfv.org][5] is still the best resource for you.
|
||||
|
||||
In this blog series, we will give you a flavor of portions of the book — in terms of what’s there and what you might learn.
|
||||
|
||||
Let’s start with the first chapter. Chapter 1, no surprise, provides an introduction to NFV. It gives a super-brief overview of NFV in terms of business drivers (the need for differentiated services, cost pressures and need for agility), what NFV is and what benefits you can expect from NFV.
|
||||
|
||||
Briefly, NFV enables complex network functions to be performed on compute nodes in data centers. A network function performed on a compute node is called a Virtualized Network Function (VNF). So that VNFs can behave as a network, NFV also adds the mechanisms to determine how they can be chained together to provide control over traffic within a network.
|
||||
|
||||
Although most people think of it in terms of telecommunications, NFV encompasses a broad set of use cases, from Role Based Access Control (RBAC) based on application or traffic type, to Content Delivery Networks (CDN) that manage content at the edges of the network (where it is often needed), to the more obvious telecom-related use cases such as Evolved Packet Core (EPC) and IP Multimedia System (IMS).
|
||||
|
||||
Additionally, some of the main benefits include increased revenue, improved customer experience, reduced operational expenditure (OPEX), reduced capital expenditures (CAPEX) and freed-up resources for new projects. This section also provides results of a concrete NFV total-cost-of-ownership (TCO) analysis. Treatment of these topics is brief since we assume you will have some NFV background; however, if you are new to NFV, not to worry — the introductory material is adequate to understand the rest of the book.
|
||||
|
||||
The chapter concludes with a summary of NFV requirements — security, performance, interoperability, ease-of-operations and some specific requirements such as service assurance and service function chaining. No NFV architecture or technology can be truly successful without meeting these requirements.
|
||||
|
||||
After reading this chapter, you will have a good overview of why NFV is important, what NFV is, and what is technically required to make NFV successful. We will look at following chapters in upcoming blog posts.
|
||||
|
||||
This book has proven to be our most popular giveaway at industry events and a Chinese version is now under development! But you can [download the eBook in PDF][6] right now, or [order a printed version][7] on Amazon.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/opnfv/2017/8/understanding-opnfv-starts-here
|
||||
|
||||
作者:[AMAR KAPADIA][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/akapadia
|
||||
[1]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[2]:https://www.linux.com/files/images/network-transformationpng
|
||||
[3]:https://www.opnfv.org/
|
||||
[4]:https://blog.rackspace.com/solving-the-openstack-talent-gap
|
||||
[5]:https://wiki.opnfv.org/
|
||||
[6]:https://www.opnfv.org/resources/download-understanding-opnfv-ebook
|
||||
[7]:https://www.amazon.com/dp/B071LQY724/ref=cm_sw_r_cp_ep_dp_pgFMzbM8YHJA9
|
@ -1,4 +1,4 @@

Guide to Linux App Is a Handy Tool for Every Level of Linux User

[JanzenLiu Translating...] Guide to Linux App Is a Handy Tool for Every Level of Linux User
============================================================

![Guide to Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/guide-to-linux.png?itok=AAcrxjjc "Guide to Linux")
@ -1,198 +0,0 @@

Happy anniversary, Linux: A look back at where it all began
============================================================

### Installing SLS 1.05 shows just how far the Linux kernel has come in 26 years.

![Happy anniversary, Linux: A look back at where it all began](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_tux.png?itok=GoaC0Car "Happy anniversary, Linux: A look back at where it all began")
Image by: [litlnemo][25]. Modified by Opensource.com. [CC BY-SA 2.0.][26]

I first installed Linux in 1993. I ran MS-DOS at the time, but I really liked the Unix systems in our campus computer lab, where I spent much of my time as an undergraduate university student. When I heard about Linux, a free version of Unix that I could run on my 386 computer at home, I immediately wanted to try it out. My first Linux distribution was [Softlanding Linux System][27] (SLS) 1.03, with Linux kernel 0.99 alpha patch level 11. That required a whopping 2MB of RAM, or 4MB if you wanted to compile programs, and 8MB to run X windows.

More Linux resources

* [What is Linux?][1]
* [What are Linux containers?][2]
* [Download Now: Linux commands cheat sheet][3]
* [Advanced Linux commands cheat sheet][4]
* [Our latest Linux articles][5]

I thought Linux was a huge step up from the world of MS-DOS. While Linux lacked the breadth of applications and games available on MS-DOS, I found Linux gave me a greater degree of flexibility. Unlike MS-DOS, I could now do true multi-tasking, running more than one program at a time. And Linux provided a wealth of tools, including a C compiler that I could use to build my own programs.

A year later, I upgraded to SLS 1.05, which sported the brand-new Linux kernel 1.0. More importantly, Linux 1.0 introduced kernel modules. With modules, you no longer needed to completely recompile your kernel to support new hardware; instead you loaded one of the 63 included Linux kernel modules. SLS 1.05 included this note about modules in the distribution's README file:

> Modularization of the kernel is aimed squarely at reducing, and eventually eliminating, the requirements for recompiling the kernel, either for changing/modifying device drivers or for dynamic access to infrequently required drivers. More importantly, perhaps, the efforts of individual working groups need no longer affect the development of the kernel proper. In fact, a binary release of the official kernel should now be possible.

On August 25, the Linux kernel will reach its 26th anniversary. To celebrate, I reinstalled SLS 1.05 to remind myself what the Linux 1.0 kernel was like and to recognize how far Linux has come since the 1990s. Join me on this journey into Linux nostalgia!

### Installation

Softlanding Linux System was the first true "distribution" that included an install program. Yet the install process isn't the same smooth process you find in modern distributions. Instead of booting from an install CD-ROM, I needed to boot my system from an install floppy, then run the install program from the **login** prompt.

### [install1.png][6]

![Installing SLS 1.05 from the login prompt](https://opensource.com/sites/default/files/u128651/install1.png "Installing SLS 1.05 from the login prompt")

A neat feature introduced in SLS 1.05 was the color-enabled text-mode installer. When I selected color mode, the installer switched to a light blue background with black text, instead of the plain white-on-black text used by our primitive forebears.

### [install2.png][7]

![Color-enabled text-mode installer in SLS 1.05](https://opensource.com/sites/default/files/u128651/install2.png "Color-enabled text-mode installer in SLS 1.05")

The SLS installer is a simple affair, scrolling text from the bottom of the screen, but it does the job. By responding to a few simple prompts, I was able to create a partition for Linux, put an ext2 filesystem on it, and install Linux. Installing SLS 1.05, including X windows and development tools, required about 85MB of disk space. That may not sound like much space by today's standards, but when Linux 1.0 came out, 120MB hard drives were still common.

### [install10.png][8]

![Creating a partition for Linux, putting an ext2 filesystem on it, and installing Linux](https://opensource.com/sites/default/files/u128651/install10.png "Creating a partition for Linux, putting an ext2 filesystem on it, and installing Linux")

### [firstboot1.png][9]

![First boot](https://opensource.com/sites/default/files/u128651/firstboot1.png "First boot")

### System level

When I first booted into Linux, my memory triggered a few system things about this early version of Linux. First, Linux doesn't take up much space. After booting the system and running a few utilities to check it out, Linux occupied less than 4MB of memory. On a system with 16MB of memory, that meant lots left over to run programs.

### [uname-df.png][10]

![Checking out the filesystem and available disk space](https://opensource.com/sites/default/files/u128651/uname-df.png "Checking out the filesystem and available disk space")

The familiar **/proc** meta filesystem exists in Linux 1.0, although it doesn't provide much information compared to what you see in modern systems. In Linux 1.0, **/proc** includes interfaces to probe basic system statistics like **meminfo** and **stat**.

### [proc.png][11]

![The familiar /proc meta filesystem](https://opensource.com/sites/default/files/u128651/proc.png "The familiar /proc meta filesystem")

The **/etc** directory on this system is pretty bare. Notably, SLS 1.05 borrows the **rc** scripts from [BSD Unix][28] to control system startup. Everything gets started via **rc** scripts, with local system changes defined in the **rc.local** file. Later, most Linux distributions would adopt the more familiar **init** scripts from [Unix System V][29], then the [systemd][30] initialization system.

### [etc.png][12]

![The /etc directory](https://opensource.com/sites/default/files/u128651/etc.png "The /etc directory")

### What you can do

With my system up and running, it was time to get to work. So, what can you do with this early Linux system?

Let's start with basic file management. Every time you log in, SLS reminds you about the Softlanding menu shell (MESH), a file-management program that modern users might recognize as similar to [Midnight Commander][31]. Users in the 1990s would have compared MESH more closely to [Norton Commander][32], arguably the most popular third-party file manager available on MS-DOS.

### [mesh.png][13]

![The Softlanding menu shell (MESH)](https://opensource.com/sites/default/files/u128651/mesh.png "The Softlanding menu shell (MESH)")

Aside from MESH, there are relatively few full-screen applications included with SLS 1.05. But you can find the familiar user tools, including the Elm mail reader, the GNU Emacs programmable editor, and the venerable Vim editor.

### [elm.png][14]

![Elm mail reader](https://opensource.com/sites/default/files/u128651/elm.png "Elm mail reader")

### [emacs19.png][15]

![GNU Emacs programmable editor](https://opensource.com/sites/default/files/u128651/emacs19.png "GNU Emacs programmable editor")

SLS 1.05 even included a version of Tetris that you could play at the terminal.

### [tetris.png][16]

![Tetris for terminals](https://opensource.com/sites/default/files/u128651/tetris.png "Tetris for terminals")

In the 1990s, most residential internet access was via dial-up connections, so SLS 1.05 included the Minicom modem-dialer application. Minicom provided a direct connection to the modem and required users to navigate the Hayes modem **AT** commands to do basic functions like dial a number or hang up the phone. Minicom also supported macros and other neat features to make it easier to connect to your local modem pool.

### [minicom.png][17]

![Minicom modem-dialer application](https://opensource.com/sites/default/files/u128651/minicom.png "Minicom modem-dialer application")

But what if you wanted to write a document? SLS 1.05 existed long before the likes of LibreOffice or OpenOffice. Linux just didn't have those applications in the early 1990s. Instead, if you wanted to use a word processor, you likely booted your system into MS-DOS and ran your favorite word processor program, such as WordPerfect or the shareware GalaxyWrite.

But all Unix systems include a set of simple text formatting programs, called nroff and troff. On Linux systems, these are combined into the GNU groff package, and SLS 1.05 includes a version of groff. One of my tests with SLS 1.05 was to generate a simple text document using nroff.

### [paper-me-emacs.png][18]

![A simple nroff text document](https://opensource.com/sites/default/files/u128651/paper-me-emacs.png "A simple nroff text document")

### [paper-me-out.png][19]

![nroff text document output](https://opensource.com/sites/default/files/u128651/paper-me-out.png "nroff text document output")

### Running X windows

Getting X windows to perform was not exactly easy, as the SLS install file promised:

> Getting X windows to run on your PC can sometimes be a bit of a sobering experience, mostly because there are so many types of video cards for the PC. Linux X11 supports only VGA type video cards, but there are so many types of VGAs that only certain ones are fully supported. SLS comes with two X windows servers. The full color one, XFree86, supports some or all ET3000, ET4000, PVGA1, GVGA, Trident, S3, 8514, Accelerated cards, ATI plus, and others.
>
> The other server, XF86_Mono, should work with virtually any VGA card, but only in monochrome mode. Accordingly, it also uses less memory and should be faster than the color one. But of course it doesn't look as nice.
>
> The bulk of the X windows configuration information is stored in the directory "/usr/X386/lib/X11/". In particular, the file "Xconfig" defines the timings for the monitor and the video card. By default, X windows is set up to use the color server, but you can switch to using the monochrome server x386mono, if the color one gives you trouble, since it should support any standard VGA. Essentially, this just means making /usr/X386/bin/X a link to it.
>
> Just edit Xconfig to set the mouse device type and timings, and enter "startx".

If that sounds confusing, it is. Configuring X windows by hand really can be a sobering experience. Fortunately, SLS 1.05 included the syssetup program to help you define various system components, including display settings for X windows. After a few prompts, and some experimenting and tweaking, I was finally able to launch X windows!

### [syssetup.png][20]

![The syssetup program](https://opensource.com/sites/default/files/u128651/syssetup.png "The syssetup program")

But this is X windows from 1994, and the concept of a desktop didn't exist yet. My options were either FVWM (a virtual window manager) or TWM (the tabbed window manager). TWM was straightforward to set up and provided a simple, yet functional, graphical environment.

### [twm_720.png][21]

![TWM](https://opensource.com/sites/default/files/u128651/twm_720.png "TWM")

### Shutdown

As much as I enjoyed exploring my Linux roots, eventually it was time to return to my modern desktop. I originally ran Linux on a 32-bit 386 computer with just 8MB of memory and a 120MB hard drive, and my system today is much more powerful. I can do so much more on my dual-core, 64-bit Intel Core i5 CPU with 4GB of memory and a 128GB solid-state drive running Linux kernel 4.11.11. So, after my experiments with SLS 1.05 were over, it was time to leave.

### [shutdown-h.png][22]

![Shutting down](https://opensource.com/sites/default/files/u128651/shutdown-h.png "Shutting down")

So long, Linux 1.0. It's good to see how well you've grown up.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/8/linux-anniversary

作者:[Jim Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jim-hall
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/file/365166
[7]:https://opensource.com/file/365171
[8]:https://opensource.com/file/365176
[9]:https://opensource.com/file/365161
[10]:https://opensource.com/file/365221
[11]:https://opensource.com/file/365196
[12]:https://opensource.com/file/365156
[13]:https://opensource.com/file/365181
[14]:https://opensource.com/file/365146
[15]:https://opensource.com/file/365151
[16]:https://opensource.com/file/365211
[17]:https://opensource.com/file/365186
[18]:https://opensource.com/file/365191
[19]:https://opensource.com/file/365226
[20]:https://opensource.com/file/365206
[21]:https://opensource.com/file/365236
[22]:https://opensource.com/file/365201
[23]:https://opensource.com/article/17/8/linux-anniversary?rate=XujKSFS7GfDmxcV7Jf_HUK_MdrW15Po336fO3G8s1m0
[24]:https://opensource.com/user/126046/feed
[25]:https://www.flickr.com/photos/litlnemo/19777182/
[26]:https://creativecommons.org/licenses/by-sa/2.0/
[27]:https://en.wikipedia.org/wiki/Softlanding_Linux_System
[28]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[29]:https://en.wikipedia.org/wiki/UNIX_System_V
[30]:https://en.wikipedia.org/wiki/Systemd
[31]:https://midnight-commander.org/
[32]:https://en.wikipedia.org/wiki/Norton_Commander
[33]:https://opensource.com/users/jim-hall
[34]:https://opensource.com/users/jim-hall
[35]:https://opensource.com/article/17/8/linux-anniversary#comments
@ -0,0 +1,62 @@

An economically efficient model for open source software license compliance
============================================================

### Using open source the way it was intended benefits your bottom line and the open source ecosystem.

![An economically efficient model for open source software license compliance](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_EvidencedBasedIP_520x292_CS.png?itok=mmhCWuZR "An economically efficient model for open source software license compliance")
Image by: opensource.com

"The Compliance Industrial Complex" is a term that evokes dystopian imagery of organizations engaging in elaborate and highly expensive processes to comply with open source license terms. As life often imitates art, many organizations engage in this practice, sadly robbing them of the many benefits of the open source model. This article presents an economically efficient approach to open source software license compliance.

Open source licenses generally impose three requirements on a distributor of code licensed from a third party:

1. Provide a copy of the open source license(s)
2. Include copyright notices
3. For copyleft licenses (like GPL), make the corresponding source code available to the distributees

_(As with any general statement, there may be exceptions, so it is always advised to review license terms and, if necessary, seek the advice of an attorney.)_

Because the source code (and any associated files, e.g. license/README) generally contains all of this information, the easiest way to comply is to simply provide the source code along with your binary/executable application.

The alternative is more difficult and expensive, because, in most situations, you are still required to provide a copy of the open source licenses and retain copyright notices. Extracting this information to accompany your binary/executable release is not trivial. You need processes, systems, and people to copy this information out of the sources and associated files and insert them into a separate text file or document.

The amount of time and expense to create this file is not to be underestimated. Although there are software tools that may be used to partially automate the process, these tools often require resources (e.g., engineers, quality managers, release managers) to prepare code for scan and to review the results for accuracy (no tool is perfect and review is almost always required). Your organization has finite resources, and diverting them to this activity leads to opportunity costs. Compounding this expense, each subsequent release—major or minor—will require a new analysis and revision.

There are also other costs resulting from not choosing to release sources that are not well recognized. These stem from not releasing source code back to the original authors and/or maintainers of the open source project, an activity known as upstreaming. Upstreaming alone seldom meets the requirements of most open source licenses, which is why this article advocates releasing sources along with your binary/executable; however, both upstreaming and providing the source code along with your binary/executable affords additional economic benefits. This is because your organization will no longer be required to keep a private fork of your code changes that must be internally merged with the open source bits upon every release—an increasingly costly and messy endeavor as your internal code base diverges from the community project. Upstreaming also enhances the open source ecosystem, which encourages further innovations from the community from which your organization may benefit.

So why do a significant number of organizations not release source code for their products to simplify their compliance efforts? In many cases, this is because they are under the belief that it may reveal information that gives them a competitive edge. This belief may be misplaced in many situations, considering that substantial amounts of code in these proprietary products are likely direct copies of open source code to enable functions such as WiFi or cloud services, foundational features of most contemporary products.

Even if changes are made to these open source works to adapt them for proprietary offerings, such changes are often de minimis and contain little new copyright expression or patentable content. As such, any organization should look at its code through this lens, as it may discover that an overwhelming percentage of its code base is open source, with only a small percentage truly proprietary and enabling differentiation from its competitors. So why then not distribute and upstream the source to those non-differentiating bits?

Consider rejecting the Compliance Industrial Complex mindset to lower your cost and drastically simplify compliance. Use open source the way it was intended and experience the joy of releasing your source code to benefit your bottom line and the open source ecosystem from which you will continue to reap increasing benefits.

------------------------

作者简介

Jeffrey Robert Kaufman - Jeffrey R. Kaufman is an Open Source IP Attorney for Red Hat, Inc., the world's leading provider of open source software solutions. Jeffrey also serves as an adjunct professor at the Thomas Jefferson School of Law. Previous to Red Hat, Jeffrey served as Patent Counsel at Qualcomm Incorporated providing open source counsel to the Office of the Chief Scientist. Jeffrey holds multiple patents in RFID, barcoding, image processing, and printing technologies. [More about me][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/9/economically-efficient-model

作者:[Jeffrey Robert Kaufman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jkaufman
[1]:https://opensource.com/article/17/9/economically-efficient-model?rate=0SO3DeFAxtgLdmZxE2ZZQyTRTTbu2OOlksFZSUXmjJk
[2]:https://opensource.com/users/jkaufman
[3]:https://opensource.com/user/74461/feed
[4]:https://opensource.com/users/jkaufman
[5]:https://opensource.com/users/jkaufman
[6]:https://opensource.com/users/jkaufman
[7]:https://opensource.com/tags/law
[8]:https://opensource.com/tags/licensing
[9]:https://opensource.com/participate
@ -1,74 +0,0 @@

如何管理开源产品的安全漏洞
============================================================

![software vulnerabilities](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-software-vulnerabilities.jpg?itok=D3joblgb "software vulnerabilities")

在即将举行的 ELC + OpenIoT 峰会上,英特尔安全架构师 Ryan Ware 将会解释如何应对大量漏洞,并管理好你产品的安全性。

[Creative Commons Zero][2] Pixabay

在开发开源软件时,你需要考虑的安全漏洞数量可能是巨大的。常见漏洞和披露(CVE)ID、零日漏洞和其他漏洞似乎每天都在公布。面对如此大量的信息,你怎么才能跟上最新消息?

英特尔安全架构师 Ryan Ware 表示:“如果你发布了基于 Linux 内核 4.4.1 的产品,截至今日,已经有 9 个针对该内核的 CVE。这些都会影响你的产品,尽管在出货时它们还不为人所知。”

![Ryan Ware](https://www.linux.com/sites/lcom/files/styles/floated_images/public/ryan-ware_01.jpg?itok=cy13TM9g "Ryan Ware")

英特尔安全架构师 Ryan Ware [经许可使用][1]

在即将举行的 [ELC][6] + [OpenIoT 峰会][7]上,英特尔安全架构师 Ryan Ware 的演讲将介绍实施并成功管理产品安全性的策略。在演讲中,Ware 讨论了开发者最常犯的错误、跟上最新漏洞的策略等等。

**Linux.com:让我们从头开始。你能否简要介绍一下常见漏洞和披露(CVE)、零日漏洞以及其他漏洞么?它们是什么,为什么重要?**

Ryan Ware:好问题。常见漏洞和披露(CVE)是应美国政府的要求,由 MITRE Corporation(一个非营利组织)维护的数据库,目前由美国国土安全部资助。它创建于 1999 年,用来收录所有公开的安全漏洞的信息。其中每一个漏洞都有自己的标识符(CVE-ID),并且可以被引用。CVE 这个术语,也因此演变成了代指一个具体的安全漏洞:一个 CVE。

CVE 数据库中的许多漏洞最初是零日漏洞。这些漏洞出于种种原因没有遵循更有序的披露过程,比如负责任的披露。它们的关键共同点是:在软件供应商能够以某种修复(通常是软件补丁)进行响应之前,它们就已经公开并可被利用了。这些和其他未打补丁的软件漏洞至关重要,因为在打上补丁之前,漏洞始终是可以被利用的。在许多方面,发布一个 CVE 或者零日漏洞就像鸣响了发令枪:在你跑完这场比赛之前,你的客户都处于易受攻击的状态。

**Linux.com:有多少漏洞?你如何确定哪些与你的产品相关?**

Ryan:在探讨有多少之前,任何以任何形式发布软件的人都应该记住:即使你尽一切努力确保发布的软件没有已知漏洞,你的软件*也会*存在漏洞,只是它们还不为人所知而已。例如,如果你发布了一个基于 Linux 内核 4.4.1 的产品,截至今日,已经有 9 个针对该内核的 CVE。这些都会影响你的产品,尽管在出货时它们还不为人所知。

此时,CVE 数据库包含 80,957 个条目(截至 2017 年 1 月 30 日),其中包括可以追溯到 1999 年的记录,那一年记录了 894 个问题。迄今为止,一年中数量最多的是 2014 年,共记录了 7,946 个问题。话虽如此,我认为过去两年数量的减少并不意味着安全漏洞变少了。这是我会在演讲中谈到的内容。

**Linux.com:开发人员可以使用哪些策略来跟上这些信息?**

Ryan:开发人员可以通过各种方式跟上漏洞信息的洪流。我最喜欢的工具之一是 [CVE Details][8]。它以一种非常容易理解的方式展示了来自 MITRE 的信息。它最好的功能是可以创建自定义 RSS 订阅源,以便你跟踪自己关心的组件的漏洞。有更复杂追踪需求的人可以从下载 MITRE CVE 数据库(免费提供)开始,并定期更新。其他优秀的工具,比如 cvechecker,可以让你检查软件中已知的漏洞。
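
对于“自定义 RSS 订阅源”这条思路,下面给出一段 Go 语言的示意代码,演示如何拉取一个漏洞订阅源并打印新条目。注意:这只是一个草图,其中的订阅源地址是一个占位符(并非 CVE Details 的真实 URL),请替换为你自己创建的订阅源:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

// rssFeed 只建模我们关心的字段:条目标题和链接。
type rssFeed struct {
	Items []struct {
		Title string `xml:"title"`
		Link  string `xml:"link"`
	} `xml:"channel>item"`
}

func main() {
	// 占位符 URL:替换为你在漏洞跟踪网站上创建的自定义 RSS 源地址。
	const feedURL = "https://example.com/your-custom-cve-feed.rss"

	resp, err := http.Get(feedURL)
	if err != nil {
		fmt.Println("获取订阅源失败:", err)
		return
	}
	defer resp.Body.Close()

	var feed rssFeed
	if err := xml.NewDecoder(resp.Body).Decode(&feed); err != nil {
		fmt.Println("解析 RSS 失败:", err)
		return
	}

	// 逐条打印新披露漏洞的标题和链接,便于每日巡检。
	for _, item := range feed.Items {
		fmt.Printf("%s\n  %s\n", item.Title, item.Link)
	}
}
```

把这样一段程序挂到 cron 或 CI 任务里,就可以把“盯漏洞”这件事部分自动化。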

对于软件栈中的关键部分,我还推荐一个非常有用的做法:参与上游社区。他们是最理解你所发布软件的人,世界上没有比他们更好的专家。与他们一起合作。

**Linux.com:你怎么知道你的产品是否涵盖了所有漏洞?有推荐的工具吗?**

Ryan:不幸的是,正如我上面所说,你永远无法从你的产品中移除所有的漏洞。上面提到的一些工具是关键。但是,我还没有提到一个对你发布的任何产品来说都至关重要的部分:软件更新机制。如果你无法在发现问题后随时更新产品软件,那么当客户受到影响时,你就无法解决安全问题。你的软件必须能够更新;更新过程越容易,你的客户就能得到越好的保护。

**Linux.com:开发人员还需要知道什么才能成功管理安全漏洞?**

Ryan:有一个我反复看到的错误。开发人员需要始终牢记将攻击面最小化的想法。这是什么意思?在实践中,这意味着只包括你的产品实际需要的东西!这不仅包括确保不把无关的软件包加入到产品中,还包括在编译项目时通过配置关闭不需要的功能。

这有什么帮助?想象现在是 2014 年。你刚刚上班就看到了 Heartbleed 的技术新闻。你知道你的产品中包含 OpenSSL,因为你需要执行一些基本的加密功能,但你并不使用与该漏洞相关的 TLS 心跳功能。你愿意:

a. 花费时间与客户和合作伙伴合作,通过关键的软件更新来修复这个高危安全问题?

b. 还是只需要告诉你的客户和合作伙伴,你编译 OpenSSL 时使用了 “-DOPENSSL_NO_HEARTBEATS” 标志,因此他们不会受到影响,然后你就可以继续专注于新功能和其他生产活动?

修复一个漏洞最简单的方法,就是从一开始就不把它包含进来。

_嵌入式 Linux 会议 + OpenIoT 北美峰会将于 2017 年 2 月 21 日至 23 日在俄勒冈州波特兰举行。查看关于 Linux 内核、嵌入式开发和系统,以及最新的开放物联网的[超过 130 个会话][5]。_

--------------------------------------------------------------------------------

via: https://www.linux.com/news/event/elcna/2017/2/how-manage-security-vulnerabilities-your-open-source-product

作者:[AMBER ANKERHOLZ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/aankerholz
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/ryan-ware01jpg
[4]:https://www.linux.com/files/images/security-software-vulnerabilitiesjpg
[5]:http://events.linuxfoundation.org/events/embedded-linux-conference/program/schedule?utm_source=linux&utm_campaign=elc17&utm_medium=blog&utm_content=video-blog
[6]:http://events.linuxfoundation.org/events/embedded-linux-conference
[7]:http://events.linuxfoundation.org/events/openiot-summit
[8]:http://www.cvedetails.com/
@ -0,0 +1,95 @@

如何使用 pull requests 来改善你的代码审查
============================================================

通过在 GitHub 上使用 pull requests 做代码审查,把更多的时间花在构建上,把更少的时间花在修复上。

![Measure](https://d3tdunqjn7n0wj.cloudfront.net/360x240/measure-106354_1920-a7f65d82a54323773f847cf572e640a4.jpg)

> 想了解更多有关创建项目、发起 pull requests 以及团队软件开发流程的内容,请参阅 Brent 和 Peter 所著的 [_Introducing GitHub_][5] 一书。

如果你不是每天编写代码,你可能不知道软件开发人员每天面临的一些问题:

* 代码中的安全漏洞
* 导致应用程序崩溃的代码
* 被称作“技术债务”、之后需要重写的代码
* 在你不知道的地方已经被重写过的代码

代码审查允许其他人和工具来检查并帮助我们改善所编写的软件。这种审查既可以由自动化代码分析或测试覆盖工具完成(这是软件开发过程中重要的两个部分,能节省数小时的手工劳动),也可以通过同行评审完成,即开发人员互相审查彼此工作的过程。在软件开发过程中,速度和紧迫性是经常要面对的两个问题:如果你不尽快发布,竞争对手可能会先于你发布相似的产品;如果你不经常发布新版本,用户可能会怀疑你是否还关心这个应用程序的改进。

### 衡量时间上的权衡:代码审查 vs. bug 修复

如果有人能够以最小摩擦的方式把多种类型的代码审查汇集起来,那么随着时间的推移,软件的质量将会得到改善。认为引入新的工具或流程在最初不会拖慢进度,是天真的想法。但真正昂贵的是哪一个:是修复生产环境中的错误,还是在软件进入生产环境之前就改进它?即使新工具一开始延迟了新功能的发布和客户对它的体验,随着软件开发人员技能的提高,开发周期也会回升到以前的水平,而错误则会随之减少。

通过代码审查实现提升代码质量这一目标的关键之一,就是使用一个足够灵活的平台,让软件开发人员能够快速编写代码、使用自己熟悉的工具,并且可以彼此进行同行代码评审。GitHub 就是这样的一个平台。然而,仅仅把代码放到 [GitHub][9] 上并不会神奇地让代码审查自动发生;你必须使用 pull requests 来开启这个美妙的旅程。

### Pull requests:围绕代码的持续讨论

[Pull requests][10] 是 GitHub 上的一个工具,允许软件开发人员讨论并提议对项目主代码库的更改,稍后可以让所有用户看到。它们创建于 2008 年 2 月,目的是在某人建议的更改被接受(合并)、进而被部署到生产环境供最终用户看到之前,提供一个提出和讨论更改的机制。

Pull requests 起初是一种以松散的方式为某人的项目提议更改的机制,但如今它已经演变成:

* 围绕你想要合并的代码的持续讨论
* 为功能的增加和修改(更改)提供可见性
* 集成你最喜爱的工具
* 作为受保护分支工作流的一部分,可能需要显式的 pull request 评审
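
为了更直观地感受 pull requests 的数据形态,下面给出一段 Go 语言的示意代码,通过 GitHub 的公开 REST API 列出某个仓库的开放 pull requests。这只是一个草图:示例中的仓库 golang/go 仅作演示,未认证的请求还会受到 GitHub 的速率限制:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// pullRequest 只建模我们要展示的字段。
type pullRequest struct {
	Number int    `json:"number"`
	Title  string `json:"title"`
	User   struct {
		Login string `json:"login"`
	} `json:"user"`
}

func main() {
	// 列出开放状态的 pull requests;仓库名请替换为你自己的。
	resp, err := http.Get("https://api.github.com/repos/golang/go/pulls?state=open")
	if err != nil {
		fmt.Println("请求失败:", err)
		return
	}
	defer resp.Body.Close()

	var prs []pullRequest
	if err := json.NewDecoder(resp.Body).Decode(&prs); err != nil {
		fmt.Println("解析 JSON 失败:", err)
		return
	}

	for _, pr := range prs {
		fmt.Printf("#%d %s(作者:%s)\n", pr.Number, pr.Title, pr.User.Login)
	}
}
```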
### 考虑代码:URL 是永久的

看看上面的前两点,pull requests 促成了一种持续进行的代码讨论,使代码的变化非常直观,并且让你很容易在回顾的过程中找到所需的代码。无论对于新人还是有经验的开发人员,能够回顾以前的讨论、了解一个功能为什么以这种方式开发出来,或者链接到另一个关于相关功能的讨论,都是非常便利的。当需要跨多个项目协调功能、并让每个人都尽可能贴近代码时,前后的讨论上下文也非常重要。如果这些功能仍在开发中,重要的是能够看到上次审查以来更改了哪些内容。毕竟,[审查小的更改比大的更改容易得多][11],但小更改对大的功能来说并不总是可行。因此,重要的是能够从你上次审查的地方继续,只查看从那时起发生的变化。

### 集成工具:软件开发人员各有所好

考虑到上述第三点,GitHub 上的 pull requests 有很多功能,但开发人员总会对第三方工具有自己的偏好。代码质量是代码审查中一个完整的领域,其中涉及的组件评审不一定需要人来完成。检测“低效”或缓慢的代码、潜在的安全漏洞或不符合公司标准的代码,是留给自动化工具的任务。像 [SonarQube][12] 和 [Code Climate][13] 这样的工具可以分析你的代码,而像 [Codecov][14] 和 [Coveralls][15] 这样的工具可以告诉你新写的代码是否缺少足够的测试。这些工具最令人惊奇的地方在于,它们可以接入 GitHub,并在 pull requests 中报告它们的发现!这意味着不仅有人在检查代码,工具也在那里报告情况,每个人都能随时了解一个功能的开发进展。

最后,根据团队的偏好,你可以利用[受保护分支工作流][16]的必需状态检查特性,把工具检查和同行评审都作为合并前的要求。

虽然你可能才刚刚开始你的软件开发之旅,但希望了解项目进展的业务相关方,或者想要确保项目及时性和质量的项目经理,都可以参与到 pull requests 中:设置批准工作流程,并考虑与更多工具集成以确保质量。这在软件开发的任何层面都很重要。

无论是为你的个人网站、你公司的在线商店,还是用最新的联合收割机以最大产量收割今年的玉米,编写好的软件都离不开良好的代码审查,而良好的代码审查离不开合适的工具和平台。要了解有关 GitHub 和软件开发过程的更多信息,请参阅 O'Reilly 的 [_Introducing GitHub_][17] 一书,你可以在其中了解如何创建项目、发起 pull requests,以及团队软件开发流程的概览。

--------------------------------------------------------------------------------

作者简介:

**Brent Beer**

Brent Beer 通过大学课程、对开源项目的贡献,以及专业网站开发人员的工作,使用 Git 和 GitHub 已经超过五年。在担任 GitHub 培训师期间,他也成为了 O'Reilly 出版的 _Introducing GitHub_ 的合著者。他现在在阿姆斯特丹担任 GitHub 的解决方案工程师,帮助 Git 和 GitHub 服务世界各地的开发人员。

**Peter Bell**

Peter Bell 是 Ronin 实验室的创始人兼 CTO。培训行业已经支离破碎,我们正在通过技术增强的培训来修复它!他是一位有经验的企业家、技术专家、敏捷教练和 CTO,专门从事 EdTech 项目。他为 O'Reilly 撰写了 _Introducing GitHub_,为 Code School 创建了“掌握 GitHub”课程,为 Pearson 创建了“Git 和 GitHub LiveLessons”课程。他经常在国内和国际会议上就 ruby、nodejs、NoSQL(尤其是 MongoDB 和 neo4j)、云计算、软件工艺、java、groovy、j ...

-------------

via: https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311

作者:[Brent Beer][a],[Peter Bell][b]
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[b]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[1]:https://pixabay.com/en/measure-measures-rule-metro-106354/
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[4]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[5]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
[9]:https://github.com/about
[10]:https://help.github.com/articles/about-pull-requests/
[11]:https://blog.skyliner.io/ship-small-diffs-741308bec0d1
[12]:https://github.com/integrations/sonarqube
[13]:https://github.com/integrations/code-climate
[14]:https://github.com/integrations/codecov
[15]:https://github.com/integrations/coveralls
[16]:https://help.github.com/articles/about-protected-branches/
[17]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower
@ -1,12 +1,12 @@

我对 Go 的错误处理有哪些不满,以及我是如何处理的
======================

写 Go 的人往往对它的错误处理模式有一定的看法。根据你对其他语言的经验,你可能习惯于不同的方法。这就是为什么我决定要写这篇文章,尽管有点固执己见,但我认为吸收我的经验在辩论中是有用的。 我想要解决的主要问题是,很难去强制良好的错误处理实践,错误没有堆栈追踪,并且错误处理本身太冗长。不过,我已经看到了一些潜在的解决方案或许能帮助解决一些问题。
写 Go 的人往往对它的错误处理模式有一定的看法。按不同的语言经验,人们可能有不同的习惯处理方法。这就是为什么我决定要写这篇文章,尽管有点固执己见,但我认为吸收我的经验是有用的。我想要讲的主要问题是,很难去强制良好的错误处理实践,经常错误没有堆栈追踪,并且错误处理本身太冗长。不过,我已经看到了一些潜在的解决方案,或许能帮助解决一些问题。

### 与其他语言的快速比较

[在 Go 中,所有的错误是值][1]。因为这点,相当多的函数最后会返回一个 `error`, 看起来像这样:
[在 Go 中,所有的错误都是值][1]。因为这点,相当多的函数最后会返回一个 `error`, 看起来像这样:

```
func (s *SomeStruct) Function() (string, error)
@ -21,7 +21,7 @@
}
```

另外一种是在其他语言中如 Java、C#、Javascript、Objective C、Python 等使用的 `try-catch` 模式。如下你可以看到与先前的 Go 示例类似的 Java 代码,声明 `throws` 而不是返回 `error`:
另外一种方法,是在其他语言中,如 Java、C#、Javascript、Objective C、Python 等使用的 `try-catch` 模式。如下你可以看到与先前的 Go 示例类似的 Java 代码,声明 `throws` 而不是返回 `error`:

```
public String function() throws Exception
@ -39,7 +39,7 @@
}
```

当然,还有其他的不同。不如,`error` 不会使你的程序崩溃,然而 `Exception` 会。还有其他的一些,我希望在在本篇中专注在这些上。
当然,还有其他的不同。例如,`error` 不会使你的程序崩溃,然而 `Exception` 会。还有其他的一些,在本篇中会专门提到这些。

### 实现集中式错误处理

@ -68,7 +68,7 @@
}
```

这并不是一个好的解决方案,因为我们不得不重复在所有的处理函数中处理错误。为了能更好地维护,最好能在一处地方处理错误。幸运的是,[在 Go 的博客中,Andrew Gerrand 提供了一个替代方法][2]可以完美地实现。我们可以错见一个处理错误的类型:
这并不是一个好的解决方案,因为我们不得不重复在所有的处理函数中处理错误。为了能更好地维护,最好能在一处地方处理错误。幸运的是,[在 Go 的博客中,Andrew Gerrand 提供了一个替代方法][2],可以完美地实现。我们可以创建一个处理错误的 Type:

```
type appHandler func(http.ResponseWriter, *http.Request) error
@ -109,9 +109,9 @@

调用链可能会相当深,在整个过程中,各种错误可能在不同的地方实例化。[Russ Cox][4]的这篇文章解释了如何避免遇到太多这类问题的最佳实践:

在 Go 中错误报告的部分约定是函数包含相关的上下文、包含正在尝试的操作(比如函数名和它的参数)
“在 Go 中错误报告的部分约定是函数包含相关的上下文,包括正在尝试的操作(比如函数名和它的参数)。”

给出的例子是 OS 包中的一个调用:
给出的例子是对 OS 包的一个调用:

```
err := os.Remove("/tmp/nonexist")
@ -140,11 +140,11 @@
}
```

这意味着错误发生时它们没有交流。
这意味着错误何时发生并没有传递出来。

应该注意的是,所有这些错误都可以在 `Exception` 驱动的模型中发生 - 糟糕的错误信息、隐藏异常等。那么为什么我认为该模型更有用?

如果我们在处理一个糟糕的异常消息,_我们仍然能够了解堆栈中发生了什么_。因为堆栈跟踪,这引发了一些我对 Go 不了解的部分 - 你知道 Go 的 `panic` 包含了堆栈追踪,但是 `error` 没有。我认为推论是 `panic` 可能会使你的程序崩溃,因此需要一个堆栈追踪,而处理错误并不会,因为它会假定你在它发生的地方做一些事。
如果我们在处理一个糟糕的异常消息,_我们仍然能够了解它发生在调用堆栈中什么地方_。因为堆栈跟踪,这引发了一些我对 Go 不了解的部分 - 你知道 Go 的 `panic` 包含了堆栈追踪,但是 `error` 没有。我推测可能是 `panic` 会使你的程序崩溃,因此需要一个堆栈追踪,而处理错误并不会,因为它会假定你在它发生的地方做一些事。

所以让我们回到之前的例子 - 一个有糟糕错误信息的第三方库,它只是输出了调用链。你认为调试会更容易吗?

@ -170,13 +170,13 @@
如果我们使用 Java 作为一个随意的例子,其中人们犯的一个最愚蠢的错误是不记录堆栈追踪:

```
LOGGER.error(ex.getMessage()) // Doesn't log stack trace
LOGGER.error(ex.getMessage(), ex) // Does log stack trace
LOGGER.error(ex.getMessage()) // 不记录堆栈追踪
LOGGER.error(ex.getMessage(), ex) // 记录堆栈追踪
```

但是 Go 设计中似乎没有这个信息
但是 Go 似乎设计中就没有这个信息。

在获取上下文信息方面 - Russ 还提到了社区正在讨论一些潜在的接口用于剥离上下文错误。了解更多这点或许会很有趣。
在获取上下文信息方面 - Russ 还提到了社区正在讨论一些潜在的接口用于剥离上下文错误。关于这点,了解更多或许会很有趣。

### 堆栈追踪问题解决方案

@ -188,7 +188,7 @@
}
```

不过,我认为这个功能能成为语言的一等公民将是一个改进,这样你就不必对类型做一些修改了。此外,如果我们像先前的例子那样使用第三方库,那就可能不必使用 `crashy` - 我们仍有相同的问题。
不过,我认为这个功能如果能成为语言的<ruby>第一类公民<rt>first class citizenship </rt></ruby>将是一个改进,这样你就不必对类型做一些修改了。此外,如果我们像先前的例子那样使用第三方库,它可能没有使用 `crashy` - 我们仍有相同的问题。
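
关于“错误没有堆栈追踪”这一点,社区已经有现成的补救手段。下面是一段可运行的 Go 示意代码(笔者的草图,并非原文作者的方案),使用第三方库 `github.com/pkg/errors` 在创建和包装错误时记录堆栈,打印时用 `%+v` 输出完整追踪(运行前需要先 `go get github.com/pkg/errors`):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func readConfig() error {
	// errors.New 会在创建错误的同时记录当前的调用堆栈。
	return errors.New("config file not found")
}

func setup() error {
	if err := readConfig(); err != nil {
		// errors.Wrap 在保留原始错误和堆栈的前提下附加上下文。
		return errors.Wrap(err, "setup failed")
	}
	return nil
}

func main() {
	if err := setup(); err != nil {
		// %+v 会打印错误消息以及完整的堆栈追踪。
		fmt.Printf("%+v\n", err)
	}
}
```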
### 我们对错误应该做什么?

@ -201,7 +201,7 @@
}
```

如果我们想要调用大量会返回错误的方法时会发生什么,在同一个地方处理它们么?看上去像这样:
如果我们想要调用大量方法,它们会产生错误,然后在一个地方处理所有错误,这时会发生什么?看上去像这样:

```
err := doSomething()
@ -222,7 +222,7 @@
}
```

这感觉有点冗余,然而在其他语言中你可以将多条语句作为一个整体处理。
这感觉有点冗余,在其他语言中你可以将多条语句作为一个整体处理。

```
try {
@ -245,9 +245,9 @@
}
```

我个人认为这两个例子实现了一件事情,只有 `Exception` 模式更少冗余更加弹性。如果有什么,我发现 `if err!= nil` 感觉像样板。也许有一种方法可以清理?
我个人认为这两个例子实现了一件事情,只是 `Exception` 模式更少冗余,更加弹性。如果有什么的话,我觉得 `if err!= nil` 感觉像样板。也许有一种方法可以清理?

### 将多条语句像一个整体那样发生错误
### 将失败的多条语句做为一个整体处理错误

首先,我做了更多的阅读,并[在 Rob Pike 写的 Go 博客中][7]发现了一个比较务实的解决方案。
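
为了方便对照,下面按该博客中 errWriter 模式的大意给出一段可运行的示意代码(细节为笔者补充):一个会“记住”第一个错误的类型,让一连串操作只需要在最后检查一次错误:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// errWriter 记住第一次出现的错误,之后的写入调用会直接跳过。
type errWriter struct {
	w   io.Writer
	err error
}

func (ew *errWriter) write(buf []byte) {
	if ew.err != nil {
		return // 已经出错,后续操作不再执行
	}
	_, ew.err = ew.w.Write(buf)
}

func main() {
	ew := &errWriter{w: os.Stdout}
	ew.write([]byte("first line\n"))
	ew.write([]byte("second line\n"))
	ew.write([]byte("third line\n"))
	// 所有写入结束后,只需要检查一次错误。
	if ew.err != nil {
		fmt.Fprintln(os.Stderr, "write failed:", ew.err)
	}
}
```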
@ -317,11 +317,11 @@
}
```

这可以用,但是并没有帮助太大,因为它最后比标准的 `if err != nil` 检查带来了更多的冗余。我有兴趣听到有人能提供其他解决方案。或许语言本身需要一些方法来以不那么臃肿的方式的传递或者组合错误 - 但是感觉似乎是特意设计成不那么做。
这可以用,但是并没有太大帮助,因为它最终比标准的 `if err != nil` 检查带来了更多的冗余。如果有人能提供其他解决方案,我会很有兴趣听。或许这个语言本身需要一些方法来以不那么臃肿的方式传递或者组合错误 - 但是感觉似乎是特意设计成不那么做。

### 总结

看完这些之后,你可能会认为我反对在 Go 中使用 `error`。但事实并非如此,我只是描述了如何将它与 `try catch` 模型的经验进行比较。它是一个用于系统编程很好的语言,并且已经出现了一些优秀的工具。仅举几例有 [Kubernetes][8]、[Docker][9]、[Terraform][10]、[Hoverfly][11] 等。还有小型、高性能、本地二进制的优点。但是,`error` 难以适应。 我希望我的推论是有道理的,而且一些方案和解决方法可能会有帮助。
看完这些之后,你可能会认为我在对 `error` 挑刺儿,由此推论我反对 Go。事实并非如此,我只是将它与我使用 `try catch` 模型的经验进行比较。它是一个用于系统编程很好的语言,并且已经出现了一些优秀的工具。仅举几例,有 [Kubernetes][8]、[Docker][9]、[Terraform][10]、[Hoverfly][11] 等。还有小型、高性能、本地二进制的优点。但是,`error` 难以适应。 我希望我的推论是有道理的,而且一些方案和解决方法可能会有帮助。

--------------------------------------------------------------------------------

@ -336,7 +336,7 @@

作者:[Andrew Morgan][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[jasminepeng](https://github.com/jasminepeng)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,115 +0,0 @@

你知道“时间表”,但你知道“哈希表”吗?
============================================================

探索哈希表的世界并理解其背后的机制是非常有趣的,并且将会受益匪浅。所以,让我们开始探索它吧。

哈希表是许多现代软件、应用程序中一种常见的数据结构。它提供了类似字典的功能,让你能够执行插入、查找和删除其中的项等操作。这么说吧,比如我想找出“苹果”的定义是什么,并且我知道该定义被存储在了我定义的哈希表中,我就可以查询我的哈希表来得到定义。哈希表内的记录看起来可能像这样:`“苹果” => “一种拥有水果之王之称的绿色水果”`。这里,“苹果”是我的关键字,而“一种拥有水果之王之称的绿色水果”是与之关联的值。

再来看一个例子,假设下面是一个哈希表的内容:

```
"面包" => "固体"
"水" => "液体"
"汤" => "液体"
"玉米片" => "固体"
```

我想知道*面包*是固体还是液体,于是我查询哈希表来获取与之关联的值,哈希表将返回“固体”给我。现在,我们大致了解了哈希表是如何工作的。使用哈希表需要注意的另一个重要概念是,每一个关键字都是唯一的。如果到了明天,我有了一杯面包奶昔(它是液体),那么我们就需要更新哈希表,把“固体”改为“液体”,来反映这一变化。所以,我们需要添加一条记录到字典中:关键字为“面包”,对应的值为“液体”。你能发现下面的表发生了什么变化吗?

```
"面包" => "液体"
"水" => "液体"
"汤" => "液体"
"玉米片" => "固体"
```

没错,“面包”对应的值被更新为了“液体”。
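
用 Go 内置的 map 可以把上面的例子直接跑起来(示意代码,演示“插入、查找、更新”这几个基本操作):

```go
package main

import "fmt"

func main() {
	// 用内置 map 模拟文中的哈希表:关键字是食物,值是它的状态。
	state := map[string]string{
		"面包": "固体",
		"水":  "液体",
		"汤":  "液体",
		"玉米片": "固体",
	}

	// 查找:面包是固体还是液体?
	fmt.Println(state["面包"]) // 固体

	// 更新:面包奶昔出现了;关键字唯一,旧值被覆盖。
	state["面包"] = "液体"
	fmt.Println(state["面包"]) // 液体
}
```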
**关键字是唯一的**,我的面包不能既是液体又是固体。但是,是什么使得该数据结构与其他数据结构相比如此特殊呢?为什么不使用一个[数组][1]来代替呢?这取决于问题的本质。对于某个特定的问题,使用数组来描述可能会更好,因此,我们需要注意的关键点就是,**我们应该选择最适合问题的数据结构**。例如,如果你需要做的只是存储一个简单的杂货列表,那么使用数组会很合适。考虑下面的两个问题,两个问题的本质完全不同:

1. 我需要一个水果的列表
2. 我需要一个水果的列表以及各种水果的价格(每千克)

正如你在下面所看到的,用数组来存储水果的列表可能是更好的选择,而用哈希表来存储每一种水果的价格看起来则是更好的选择:

```
//示例数组
["apple", "orange", "pear", "grape"]

//示例哈希表
{ "apple" : 3.05, "orange" : 5.5, "pear" : 8.4, "grape" : 12.4 }
```

实际上,哈希表有许多的[用武之地][2]。

### 时间以及它对你的意义

[这里是对时间复杂度和空间复杂度的一个复习][3]。

平均情况下,在哈希表中进行搜索、插入和删除记录的时间复杂度均为 O(1)。实际上,O(1) 读作“大 O 1”,表示常数时间。这意味着执行每一种操作的运行时间不依赖于数据集中数据的数量。我可以保证,查找、插入和删除项目均只花费常数时间,“当且仅当”哈希表的实现正确。如果实现不正确,可能需要 O(n) 的时间,非常慢,尤其是当所有的数据都映射到了哈希表中的同一位置/槽位的时候。

### 构建一个好的哈希表

到目前为止,我们已经知道如何使用哈希表了,但是如果我们想**构建**一个哈希表呢?本质上我们需要做的就是把一个字符串(比如 "dog")映射到一个哈希代码(一个生成的数),即映射到一个数组的索引。你可能会问,为什么不直接使用索引呢?何必多此一举?通过这种方式,我们可以直接用 "dog" 查询,立即得到 "dog" 所在位置的值:`String name = Array["dog"] //name is Lassy`。而如果用索引来查询名称,就可能出现我们不知道名称所在索引的情况。比如,`String name = Array[10] // name is now "Bob"`,这就不是我的狗的名字了。这就是把一个字符串映射到一个哈希代码的好处(它对应于一个数组的索引)。我们可以通过使用模运算符和哈希表的大小来计算出数组的索引:`index = hash_code % table_size`。

我们需要避免的另一种情况是两个关键字映射到同一个索引,这叫做**哈希碰撞**。如果选取的哈希函数不合适,这很容易发生。实际上,每一个输入比输出多的哈希函数都有可能发生碰撞。下面通过同一个函数的两个输出来展示一个简单的碰撞:

`int cat_idx = hashCode("cat") % table_size; //cat_idx 现在等于 1`

`int dog_idx = hashCode("dog") % table_size; //dog_idx 也等于 1`

我们可以看到,现在两个数组的索引均是 1,这样两个值就会相互覆盖,因为它们被写到了相同的索引中。如果我们查找 "cat" 的值,将会返回 "Lassy",但这并不是我们想要的。有许多可以[解决哈希碰撞][4]的方法,其中比较受欢迎的一种叫做**链接**(chaining)。链接的想法是,数组的每一个索引位置都挂一个链表,如果碰撞发生,值就被存到链表中。因此,在前面的例子中,我们仍会得到我们需要的值,只是需要搜索数组中索引为 1 的位置上的链表。使用链接的哈希实现需要 O(1 + α) 的时间,其中 α 是装载因子,它可以表示为 n/k,其中 n 是哈希表中的记录数目,k 是哈希表中可用位置的数目。但是请记住,只有当你给出的关键字非常随机时,这一结论才成立(依赖于 [SUHA][5])。
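
下面是链接法的一个 Go 语言示意实现(笔者的草图,并非原文作者的代码;哈希函数选用标准库的 FNV 仅作演示):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// entry 是桶内链表的节点:发生碰撞时,落在同一个桶里的记录串成一条链。
type entry struct {
	key, value string
	next       *entry
}

// HashTable 用“链接”(chaining)解决碰撞:每个桶是一条链表。
type HashTable struct {
	buckets []*entry
}

func NewHashTable(size int) *HashTable {
	return &HashTable{buckets: make([]*entry, size)}
}

// index 对应文中的公式:index = hash_code % table_size。
func (h *HashTable) index(key string) int {
	f := fnv.New32a()
	f.Write([]byte(key))
	return int(f.Sum32() % uint32(len(h.buckets)))
}

// Put 插入或更新;因为关键字唯一,重复插入同一关键字会覆盖旧值。
func (h *HashTable) Put(key, value string) {
	i := h.index(key)
	for e := h.buckets[i]; e != nil; e = e.next {
		if e.key == key {
			e.value = value
			return
		}
	}
	h.buckets[i] = &entry{key: key, value: value, next: h.buckets[i]}
}

// Get 先定位桶,再沿链表查找,平均时间为 O(1 + α)。
func (h *HashTable) Get(key string) (string, bool) {
	for e := h.buckets[h.index(key)]; e != nil; e = e.next {
		if e.key == key {
			return e.value, true
		}
	}
	return "", false
}

func main() {
	t := NewHashTable(2) // 故意把表开得很小,方便制造碰撞
	t.Put("cat", "Lassy")
	t.Put("dog", "Rex")
	if v, ok := t.Get("cat"); ok {
		fmt.Println("cat =>", v) // 即使 cat 和 dog 碰撞,也能取回 Lassy
	}
}
```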
不过,“关键字足够随机”是一个很大的假设,因为总是有可能出现不相等的关键字被散列到同一位置的情况。解决这一问题的一个方法是去除哈希表对关键字随机性的依赖,转而把随机性放在“如何散列关键字”上,从而减少碰撞发生的可能性。这被称为……

### 通用散列

这个概念很简单:从一个通用散列函数家族中随机选择一个哈希函数来计算哈希代码。换句话说,就是随机选择一个哈希函数来散列关键字。通过这种方法,两个不同的关键字被散列到同一结果的可能性将非常低。我只是简单地提一下,如果你不相信我,那么请相信[数学][6]。实现这一方法时需要注意的另一件事是,如果选择了一个不好的通用散列家族,它会把时间和空间复杂度拖到 O(U),其中 U 是散列家族的大小。而其中的挑战就是找到一个既不需要太多时间来计算、也不需要太多空间来存储的散列家族。
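
下面用一段 Go 示意代码感受一下“从散列家族中随机挑选哈希函数”的意思。这是笔者基于经典的 Carter–Wegman 家族 h(x) = ((a*x + b) mod p) mod m 写的草图,并假设关键字已经是 [0, p) 范围内的整数:

```go
package main

import (
	"fmt"
	"math/rand"
)

// p 是大于所有可能关键字的素数,这里取 2^31 - 1。
const p = 2147483647

// universalHash 表示家族中的一个成员:a、b 在建表时随机选定。
type universalHash struct {
	a, b, m uint64 // m 是哈希表的大小
}

// newUniversalHash 随机挑选家族中的一个成员,
// 相当于“随机选择一个哈希函数”。(演示用,未设随机种子。)
func newUniversalHash(m uint64) universalHash {
	return universalHash{
		a: uint64(rand.Int63n(p-1)) + 1, // a ∈ [1, p-1]
		b: uint64(rand.Int63n(p)),       // b ∈ [0, p-1]
		m: m,
	}
}

func (u universalHash) hash(x uint64) uint64 {
	return ((u.a*x + u.b) % p) % u.m
}

func main() {
	// 同一批关键字,两个随机挑出的哈希函数给出不同的分布。
	h1, h2 := newUniversalHash(16), newUniversalHash(16)
	for _, key := range []uint64{42, 43, 44} {
		fmt.Println(h1.hash(key), h2.hash(key))
	}
}
```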
### 上帝哈希函数

追求完美是人的天性。我们是否能够构建一个*完美的哈希函数*,把关键字映射到整数集中,并且几乎没有碰撞?好消息是我们能够在一定程度上做到,但是数据必须是静态的(这意味着在一定时间内没有插入/删除/更新)。实现完美哈希函数的一个方法是使用两级哈希,它基本上是我们前面讨论过的两种方法的组合:使用*通用散列*来选择使用哪个哈希函数,再结合链接的思想,但这次不是使用链表数据结构,而是使用另一个哈希表。让我们看一看下面它是怎么实现的: [![2-Level Hashing](http://www.zeroequalsfalse.press/2017/02/20/hashtables/Diagram.png "2-Level Hashing")][8]

**但是这是如何工作的?我们如何能够确保无需担心碰撞?**

它的工作方式与[生日悖论][7]恰好相反。生日悖论说,在随机选择的一群人中,很可能有一些人生日相同;但是如果一年中的天数远远大于人数(达到人数的平方以上),那么所有人生日都不相同的可能性就非常大。这两者是如何联系起来的呢?对于每一个链接哈希表,其大小是第一级哈希表中散列到该位置的元素个数的平方。也就是说,如果有两个元素被散列到同一个位置,那么该位置的链接哈希表的大小将为 4。这样,大多数时候链接哈希表都会非常稀疏/空。

重复下面两个步骤来确保无需担心碰撞:

* 从通用散列家族中选择一个哈希函数来计算
* 如果发生碰撞,那么继续从通用散列家族中选择另一个哈希函数来计算

字面上看就是这样(这是一个 O(n^2) 空间的解)。如果需要考虑空间问题,那么显然需要另一个不同的方法。但值得庆幸的是,该过程平均只需要进行**两次**。

### 总结

只有具有一个好的哈希函数,才能算得上是一个好的哈希表。在同时保证功能实现、时间和空间的前提下,构建一个完美的哈希函数是一件很困难的事。我推荐你在解决问题的时候首先考虑哈希表,因为它能够为你提供巨大的性能优势,而且它能够对应用程序的可用性产生显著差异。哈希表和完美哈希函数常被用于实时编程应用中,并且在各种算法中都得到了广泛应用。你见或者不见,哈希表就在这儿。

--------------------------------------------------------------------------------

via: http://www.zeroequalsfalse.press/2017/02/20/hashtables/

作者:[Marty Jacobs][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.zeroequalsfalse.press/about
[1]:https://en.wikipedia.org/wiki/Array_data_type
[2]:https://en.wikipedia.org/wiki/Hash_table#Uses
[3]:https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/
[4]:https://en.wikipedia.org/wiki/Hash_table#Collision_resolution
[5]:https://en.wikipedia.org/wiki/SUHA_(computer_science)
[6]:https://en.wikipedia.org/wiki/Universal_hashing#Mathematical_guarantees
[7]:https://en.wikipedia.org/wiki/Birthday_problem
[8]:http://www.zeroequalsfalse.press/2017/02/20/hashtables/Diagram.png
@ -0,0 +1,787 @@

详解 Ubuntu snap 包的制作过程
============================================================

> 如果你看过译者以前翻译的 snappy 文章,不知有没有感觉相关主题都是浅尝辄止,讲得不够透彻,看得也不太过瘾?如果有的话,相信这篇详细讲解如何从零开始制作一个 snap 包的文章应该不会让你失望。

在这篇文章中,我们将看到如何为名为 [timg][1] 的实用程序制作对应的 snap 包。如果这是你第一次听说 snap 安装包,你可以先看看[如何创建你的第一个 snap 包][2]。

今天我们将学习以下有关使用 snapcraft 制作 snap 包的内容:

* [timg][3] 源码中的 Makefile 文件是手工编写的,我们需要修改一些 [make 插件参数][4]。
* 这个程序是用 C++ 语言写的,依赖几个额外的库文件。我们需要把相关的代码添加到 snap 包中。
* 严格限制还是传统限制?我们将会讨论如何在它们之间进行选择。

首先,我们了解下 [timg][5] 有什么用?

### 背景

Linux 终端模拟器已经变得非常炫酷,并且还能显示颜色!

![1.png-19.9kB][6]

除了标准的颜色,大多数终端模拟器都支持真彩色(1600 万种颜色)。

![图片.png-61.9kB][7]

是的!终端模拟器已经支持真彩色了!从 [多个终端和终端应用程序已经支持真彩色(1600 万种颜色)][8] 这个页面可以获取用于测试的 AWK 代码。你可以看到,代码中使用了一些[转义序列][9]来指定 RGB 的值(256 * 256 * 256 ~= 1600 万种颜色)。

### timg 是什么?

好了,言归正传,[timg][10] 有什么用?它能将输入的图片重新调整为终端窗口字符所能显示的大小(比如:80 x 25),然后在任何分辨率的终端窗口中用彩色字符显示图像。

![图片.png-37.3kB][11]

这幅图用彩色块字符显示了 [Ubuntu 的 logo][12],原图是一个 PNG 格式的文件。

![图片.png-165kB][13]

这是 [@Doug8888 拍摄的花][14]。

如果你通过远程连接服务器来管理自己的业务,并希望查看图像文件,那么 [timg][15] 将会特别有用。

除了静态图片,[timg][16] 同样也可以显示 gif 动图。让我们开始 snap 之旅吧!

### 熟悉 timg 的源码

[timg][17] 的源码可以在[这里][18]找到。让我们试着手动编译它,以了解它有什么需求。

![图片.png-128.4kB][19]

`Makefile` 在 `src/` 子文件夹中而不是项目的根文件夹中。在 github 页面上,他们说需要安装这两个开发包(GraphicsMagick++ 和 WebP),然后使用 `make` 就能生成可执行文件。在截图中可以看到我已经将它们安装好了(在我读完相关的 Readme.md 文件后)。

因此,在编写 snapcraft.yaml 文件时已经有了四条腹稿:

1. Makefile 在 src/ 子文件夹中而不是项目的根文件夹中。
2. 这个程序编译时需要两个开发库。
3. 为了让 timg 以 snap 包形式运行,我们需要将这两个库捆绑在 snap 包中(或者静态链接它们)。
4. [timg][20] 是用 C++ 编写的,所以需要安装 g++。在编译之前,让我们通过 snapcraft.yaml 文件来检查 `build-essential` 元包是否已经安装。
### 从 snapcraft 开始

让我们新建一个名为 `timg-snap/` 的文件夹,并在其中运行 `snapcraft init` 这条命令来创建 `snapcraft.yaml` 工作的框架。

```
ubuntu@snaps:~$ mkdir timg-snap
ubuntu@snaps:~$ cd timg-snap/
ubuntu@snaps:~/timg-snap$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml
name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the most important story about your snap. Keep it under 100 words though, we live in tweetspace and your description wants to look good in the snap store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
```

### 填充元数据

snapcraft.yaml 配置文件的上半部分是元数据。我们需要一个一个地把它们填满,这算是比较容易的部分。元数据由以下字段组成:

1. `名字` —— snap 包的名字,它将公开在 Ubuntu 商店中。
2. `版本` —— snap 包的版本号。可以是源代码存储库中一个适当的分支或者标记,如果没有分支或标记的话,也可以是当前日期。
3. `摘要` —— 不超过 80 个字符的简短描述。
4. `描述` —— 长一点的描述,100 个字以下。
5. `等级` —— 稳定或者开发。因为我们想要在 Ubuntu 商店的稳定通道中发布这个 snap 包,所以在 snap 包能正常工作后,就把它设置成稳定。
6. `限制` —— 我们首先设置为开发模式,这样系统将不会以任何方式限制 snap 包。一旦它在开发模式下能正常工作,我们再考虑选择严格还是传统限制。

我们将使用 `timg` 这个名字:

```
ubuntu@snaps:~/timg-snap$ snapcraft register timg
Registering timg.
You already own the name 'timg'.
```

是的,这个名字我已经注册了 :-)。

接下来,我们应该选择哪个版本的 timg?

![图片.png-72.7kB][21]

当在仓库中寻找分支或标记时,我们会发现有一个 v0.9.5 标签,其中有 2016 年 6 月 27 日最新提交的代码。

![图片.png-71.4kB][22]

然而主分支(`master`)中有两个看起来很重要的提交。因此我们使用主分支,而不用 `v0.9.5` 标签的那个。我们使用今天的日期 `20170226` 作为版本号。

我们从仓库中搜集了摘要和描述。其中摘要的内容为 `一个终端图像查看器`,描述的内容为 `一个能用 24 位颜色和 unicode 字符块来在终端中显示图像的查看器`。

最后,将等级设置为稳定,将限制设置为开发模式(一直到 snap 包真正可以工作为止)。

这是更新后的 snapcraft.yaml,带有所有的元数据:

```
ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks to display images in the terminal.

grade: stable
confinement: devmode

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
```

### 弄清楚 "parts:" 是什么

现在我们需要将上面已经存在的 `parts:` 部分替换成真实的 `parts:`。

![timg-git-url.png-8kB][23]

Git 仓库的 URL。

![图片.png-28.7kB][24]

存在 Makefile,因此我们需要 make 插件。

(这两张图在原文中是并排显示的。)

我们已经知道 git 仓库的 URL 链接,并且 timg 源码中存在 Makefile 文件。至于 [snapcraft make 插件][25] 的用法,正如文档所言,这个插件总是会先运行 `make`,再运行 `make install`。为了确认 `make` 插件的用法,我查看了 [snapcraft 可用插件列表][26]。

因此,我们将最初的配置:

```
parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
```

修改为:

```
parts:
  timg:
    source: https://github.com/hzeller/timg.git
    plugin: make
```

这是当前 snapcraft.yaml 文件的内容:

```
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks
  to display images in the terminal.

grade: stable
confinement: devmode

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    plugin: make
```

让我们运行下 `snapcraft prime` 命令看看会发生什么:

```
ubuntu@snaps:~/timg-snap$ snapcraft prime
Preparing to pull timg
Pulling timg
Cloning into '/home/ubuntu/timg-snap/parts/timg/src'...
remote: Counting objects: 144, done.
remote: Total 144 (delta 0), reused 0 (delta 0), pack-reused 144
Receiving objects: 100% (144/144), 116.00 KiB | 0 bytes/s, done.
Resolving deltas: 100% (89/89), done.
Checking connectivity... done.
Preparing to build timg
Building timg
make -j4
make: *** No targets specified and no makefile found. Stop.
Command '['/bin/sh', '/tmp/tmpem97fh9d', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$
```
我们可以看到 `snapcraft` 无法在源代码中找到 `Makefile` 文件,正如我们之前所暗示的,`Makefile` 只位于 `src/` 子文件夹中。那么,我们可以让 `snapcraft` 使用 `src/` 文件夹中的 `Makefile` 文件吗?
|
||||
|
||||
每个 snapcraft 插件都有自己的选项,并且有一些通用选项是所有插件共享的。在本例中,我们希望研究那些[与源代码相关的 snapcraft 选项][27]。我们开始吧:
|
||||
|
||||
* source-subdir:path
|
||||
|
||||
snapcraft 会检出(checkout) `source` 关键字所引用的仓库或者解压归档文件到 `parts/<part-name>/src/` 中,但是它只会将特定的子目录复制到 `parts/<part-name>/build/` 中。
|
||||
|
||||
我们已经有了适当的选项,下面更新下 `parts`:
|
||||
|
||||
```
|
||||
parts:
|
||||
timg:
|
||||
source: https://github.com/hzeller/timg.git
|
||||
source-subdir: src
|
||||
plugin: make
|
||||
```

然后再次运行 `snapcraft prime`:

```
ubuntu@snaps:~/timg-snap$ snapcraft prime
The 'pull' step of 'timg' is out of date:

The 'source-subdir' part property appears to have changed.

Please clean that part's 'pull' step in order to continue
ubuntu@snaps:~/timg-snap$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
ubuntu@snaps:~/timg-snap$ snapcraft prime
Skipping pull timg (already ran)
Preparing to build timg
Building timg
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC -c -o terminal-canvas.o terminal-canvas.cc
/bin/sh: 1: GraphicsMagick++-config: not found
timg.cc:33:22: fatal error: Magick++.h: No such file or directory
compilation terminated.
Makefile:10: recipe for target 'timg.o' failed
make: *** [timg.o] Error 1
make: *** Waiting for unfinished jobs....
Command '['/bin/sh', '/tmp/tmpeeyxj5kw', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$
```

从错误信息我们可以得知 snapcraft 找不到 GraphicsMagick++ 这个开发库。根据 [snapcraft 常见关键字][29] 可知,我们需要在 snapcraft.yaml 中指定这个库,这样 snapcraft 才能安装它:

* `build-packages: [deb, deb, deb…]`

  构建 part 前需要在主机中安装的 Ubuntu 包列表。这些包通常不会进入最终的 snap 包中,除非它们含有 snap 包中二进制文件直接依赖的库文件(在这种情况下,可以通过 `ldd` 发现它们),或者在 `stage-packages` 中显式地指定了它们。

让我们寻找下这个开发包的名字:

```
ubuntu@snaps:~/timg-snap$ apt-cache search graphicsmagick++ | grep dev
graphicsmagick-libmagick-dev-compat/xenial 1.3.23-1build1 all
libgraphicsmagick++1-dev/xenial 1.3.23-1build1 amd64
  format-independent image processing - C++ development files
libgraphicsmagick1-dev/xenial 1.3.23-1build1 amd64
  format-independent image processing - C development files
ubuntu@snaps:~/timg-snap$
```

可以看到包名为 `libgraphicsmagick++1-dev`,下面是更新后的 `parts`:

```
parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
```

再次运行 `snapcraft`:

```
ubuntu@snaps:~/timg-snap$ snapcraft
Installing build dependencies: libgraphicsmagick++1-dev
[...]
The following NEW packages will be installed:
  libgraphicsmagick++-q16-12 libgraphicsmagick++1-dev libgraphicsmagick-q16-3
  libgraphicsmagick1-dev libwebp5
[...]
Building timg
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
/usr/bin/ld: cannot find -lwebp
collect2: error: ld returned 1 exit status
Makefile:7: recipe for target 'timg' failed
make: *** [timg] Error 1
Command '['/bin/sh', '/tmp/tmptma45jzl', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$
```

虽然我们只指定了开发库 `libgraphicsmagick++1-dev`,但 Ubuntu 还安装了相应的代码库 `libgraphicsmagick++-q16-12`,以及动态库 `libwebp5`。

这里仍然有一个链接错误,这是因为缺少开发版本的 `webp` 库(它提供静态库)。我们可以通过下面的命令找到它:

```
ubuntu@snaps:~/timg-snap$ apt-cache search libwebp | grep dev
libwebp-dev - Lossy compression of digital photographic images.
ubuntu@snaps:~/timg-snap$
```

上面安装的 `libwebp5` 包只提供了动态库(.so)。通过 `libwebp-dev` 包,我们可以得到相应的静态库(.a)。好了,让我们更新下 `parts:` 部分:

```
parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
```

下面是更新后 snapcraft.yaml 文件的内容:

```
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks
  to display images in the terminal.

grade: stable
confinement: devmode

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
```

让我们运行下 `snapcraft prime`:

```
ubuntu@snaps:~/timg-snap$ snapcraft prime
Skipping pull timg (already ran)
Preparing to build timg
Building timg
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
make install DESTDIR=/home/ubuntu/timg-snap/parts/timg/install
install timg /usr/local/bin
install: cannot create regular file '/usr/local/bin/timg': Permission denied
Makefile:13: recipe for target 'install' failed
make: *** [install] Error 1
Command '['/bin/sh', '/tmp/tmptq_s1itc', 'make', 'install', 'DESTDIR=/home/ubuntu/timg-snap/parts/timg/install']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$
```

我们遇到了一个新问题。由于 `Makefile` 是手工编写的,没有遵循 [snapcraft make 插件][30] 约定的参数,所以 `make install` 没有把文件安装到 `prime/` 文件夹中,而是试图安装到 `/usr/local/bin` 中。

我们需要告诉 [snapcraft make 插件][31] 不要运行 `make install`,而是找到 `timg` 可执行文件并把它放到 `prime/` 文件夹中。根据文档的描述:

```
- artifacts:
  (列表)
  将 make 生成的指定文件复制或链接到 snap 包的安装目录。如果使用该选项,`make install` 这一步将被跳过。
```

所以,我们需要将一些东西放到 `artifacts:` 中。但具体是哪些东西呢?

```
ubuntu@snaps:~/timg-snap/parts/timg$ ls build/src/
Makefile            terminal-canvas.h  timg*    timg.o
terminal-canvas.cc  terminal-canvas.o  timg.cc
ubuntu@snaps:~/timg-snap/parts/timg$
```

在 `build/` 子目录中可以找到 `make` 的输出结果。由于我们设置了 `source-subdir:` 为 `src`,所以 `artifacts:` 的基目录是 `build/src`。在这里我们可以找到可执行文件 `timg`,我们需要将它设置为 `artifacts:` 的一个参数。通过 `artifacts:`,我们可以把 `make` 输出的某些文件复制到 snap 包的安装目录(即 `prime/` 中)。

下面是更新后 snapcraft.yaml 文件中 `parts:` 部分的内容:

```
parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]
```

让我们运行 `snapcraft prime`:

```
ubuntu@snaps:~/timg-snap$ snapcraft prime
Preparing to pull timg
Pulling timg
Cloning into '/home/ubuntu/timg-snap/parts/timg/src'...
remote: Counting objects: 144, done.
remote: Total 144 (delta 0), reused 0 (delta 0), pack-reused 144
Receiving objects: 100% (144/144), 116.00 KiB | 207.00 KiB/s, done.
Resolving deltas: 100% (89/89), done.
Checking connectivity... done.
Preparing to build timg
Building timg
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
Staging timg
Priming timg
ubuntu@snaps:~/timg-snap$
```

构建成功了,我们将继续迭代。

### 导出命令

到目前为止,snapcraft 生成了可执行文件,但没有导出供用户使用的命令。接下来我们需要通过 `apps:` 导出一个命令。

首先我们需要知道命令在 `prime/` 的哪个子文件夹中:

```
ubuntu@snaps:~/timg-snap$ ls prime/
meta/  snap/  timg*  usr/
ubuntu@snaps:~/timg-snap$
```

它就在 `prime/` 文件夹的根目录中。现在,我们已经准备好在 snapcraft.yaml 中增加 `apps:` 的内容了:

```
ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks
  to display images in the terminal.

grade: stable
confinement: devmode

apps:
  timg:
    command: timg

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]
```

让我们再次运行 `snapcraft prime`,然后测试下生成的 snap 包:

```
ubuntu@snaps:~/timg-snap$ snapcraft prime
Skipping pull timg (already ran)
Skipping build timg (already ran)
Skipping stage timg (already ran)
Skipping prime timg (already ran)
ubuntu@snaps:~/timg-snap$ snap try --devmode prime/
timg 20170226 mounted from /home/ubuntu/timg-snap/prime
ubuntu@snaps:~/timg-snap$
```

![图片.png-42.3kB][32]

*图片来源:https://www.flickr.com/photos/mustangjoe/6091603784/*

我们可以通过 `snap try --devmode prime/` 启用这个 snap 包,然后测试 `timg` 命令。这是一种高效的测试方法,可以避免生成 .snap 文件,也无需反复安装和卸载它们,因为 `snap try prime/` 直接使用了 `prime/` 文件夹中的内容。

### 限制 snap

到目前为止,snap 包一直运行在不受限制的开发者模式下。让我们看看如何限制它的运行:

```
ubuntu@snaps:~/timg-snap$ snap list
Name  Version   Rev   Developer  Notes
core  16-2      1337  canonical  -
timg  20170226  x1               devmode,try
ubuntu@snaps:~/timg-snap$ snap try --jailmode prime
timg 20170226 mounted from /home/ubuntu/timg-snap/prime
ubuntu@snaps:~/timg-snap$ snap list
Name  Version   Rev   Developer  Notes
core  16-2      1337  canonical  -
timg  20170226  x2               jailmode,try
ubuntu@snaps:~/timg-snap$ timg pexels-photo-149813.jpeg
Trouble loading pexels-photo-149813.jpeg (Magick: Unable to open file (pexels-photo-149813.jpeg) reported by magick/blob.c:2828 (OpenBlob))
ubuntu@snaps:~/timg-snap$
```

通过这种方式,我们无需修改 snapcraft.yaml 文件,就能从开发者模式(devmode)切换到限制模式(jailmode,相当于 `confinement: strict`)。正如预期的那样,timg 无法读取图像,因为我们没有授予它访问文件系统的权限。

现在,我们需要作出决定。使用严格限制,我们可以很容易地授予某个命令访问用户 `$HOME` 目录中文件的权限,但也只能访问那里。如果图像文件位于其它地方,我们就总是需要先把它复制到 `$HOME` 目录,再对 `$HOME` 中的副本运行 timg。如果我们对此感到满意,那么可以把 snapcraft.yaml 设置为:

```
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks
  to display images in the terminal.

grade: stable
confinement: strict

apps:
  timg:
    command: timg
    plugs: [home]

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]
```

另一方面,如果希望 timg snap 包能访问整个文件系统,我们可以设置传统(classic)限制来实现。对应的 snapcraft.yaml 内容如下:

```
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks
  to display images in the terminal.

grade: stable
confinement: classic

apps:
  timg:
    command: timg

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]
```

接下来我们将选择严格(strict)约束选项。因此,图像应该只放在 `$HOME` 中。

### 打包和测试

让我们打包这个 snap,也就是制作 .snap 文件,然后在新安装的 Ubuntu 系统上对它进行测试。

```
ubuntu@snaps:~/timg-snap$ snapcraft
Skipping pull timg (already ran)
Skipping build timg (already ran)
Skipping stage timg (already ran)
Skipping prime timg (already ran)
Snapping 'timg' \
Snapped timg_20170226_amd64.snap
ubuntu@snaps:~/timg-snap$
```

我们如何在几秒钟内得到一个全新安装的 Ubuntu 系统,来对 snap 包进行测试呢?

请查看[尝试在 Ubuntu 上使用 LXD 容器][33],并在你的系统上设置 LXD。然后回到这里,尝试运行下面的命令:

```
$ lxc launch ubuntu:x snaptesting
Creating snaptesting
Starting snaptesting
$ lxc file push timg_20170226_amd64.snap snaptesting/home/ubuntu/
$ lxc exec snaptesting -- sudo su - ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@snaptesting:~$ ls
timg_20170226_amd64.snap
ubuntu@snaptesting:~$ snap install timg_20170226_amd64.snap
error: access denied (try with sudo)
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap
error: cannot find signatures with metadata for snap "timg_20170226_amd64.snap"
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap --dangerous
error: cannot perform the following tasks:
- Mount snap "core" (1337) ([start snap-core-1337.mount] failed with exit status 1: Job for snap-core-1337.mount failed. See "systemctl status snap-core-1337.mount" and "journalctl -xe" for details.
)
ubuntu@snaptesting:~$ sudo apt install squashfuse
[...]
Setting up squashfuse (0.1.100-0ubuntu1~ubuntu16.04.1) ...
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap --dangerous
timg 20170226 installed
ubuntu@snaptesting:~$ wget https://farm7.staticflickr.com/6187/6091603784_d6960c8be2_z_d.jpg
[...]
2017-02-26 22:12:18 (636 KB/s) - ‘6091603784_d6960c8be2_z_d.jpg’ saved [240886/240886]
ubuntu@snaptesting:~$ timg 6091603784_d6960c8be2_z_d.jpg
[it worked!]
ubuntu@snaptesting:~$
```

我们启动了一个名为 `snaptesting` 的 LXD 容器,并将 .snap 文件复制进去。然后,以普通用户身份连接到容器,并尝试安装 snap 包。最初,安装失败了,因为在无特权的 LXD 容器中安装 snap 包需要使用 `sudo`;接着又失败了,因为 .snap 包没有经过签名(所以需要使用 `--dangerous` 参数);然后还是失败了,这次是因为我们需要安装 `squashfuse` 包(Ubuntu 16.04 镜像中没有预装)。最后,我们成功安装了 snap 包,并看到了 timg 显示的图像。

在全新安装的 Linux 系统中测试 snap 包很重要,因为只有这样才能确保 snap 包中打包了所有必需的库。在这个例子中,我们使用了静态库,运行良好。

### 发布到 Ubuntu 商店

这是[发布 snap 包到 Ubuntu 商店的说明][34]。在之前的教程中,我们已经发布过一些 snap 包。对于 `timg` 来说,我们设置了严格限制和稳定(stable)等级,因此会将它发布到稳定通道。

```
$ snapcraft push timg_20170226_amd64.snap
Pushing 'timg_20170226_amd64.snap' to the store.
Uploading timg_20170226_amd64.snap [                                       ]   0%
Uploading timg_20170226_amd64.snap [=======================================] 100%
Ready to release!
Revision 6 of 'timg' created.
$ snapcraft release timg 6 stable
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       0.9.5      5
                           edge       0.9.5      5
The 'stable' channel is now open.
```

我们把 .snap 包推送到 Ubuntu 商店后,得到了修订版本号(revision)6。然后,我们将 timg 的修订版本 6 发布到了 Ubuntu 商店的稳定通道。

候选(candidate)通道中没有已发布的 snap 包,它继承的是稳定通道的包,所以显示 `^` 字符。

在之前的测试中,我将一些较老版本的 snap 包上传到了 beta 和 edge 通道。这些旧版本使用的是 timg 标签为 `0.9.5` 的源代码。

我们可以通过把稳定版本也发布到 beta 和 edge 通道,来移除旧的 0.9.5 版本的包:

```
$ snapcraft release timg 6 beta
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       20170226   6
                           edge       0.9.5      5
$ snapcraft release timg 6 edge
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       20170226   6
                           edge       20170226   6
```

### 使用 timg

让我们不带参数运行 timg:

```
ubuntu@snaptesting:~$ timg
Expected image filename.
usage: /snap/timg/x1/timg [options] <image> [<image>...]
Options:
    -g<w>x<h>  : Output pixel geometry. Default from terminal 80x48
    -s[<ms>]   : Scroll horizontally (optionally: delay ms (60)).
    -d<dx:dy>  : delta x and delta y when scrolling (default: 1:0).
    -w<seconds>: If multiple images given: Wait time between (default: 0.0).
    -t<seconds>: Only animation or scrolling: stop after this time.
    -c<num>    : Only Animation or scrolling: number of runs through a full cycle.
    -C         : Clear screen before showing image.
    -F         : Print filename before showing picture.
    -v         : Print version and exit.
If both -c and -t are given, whatever comes first stops.
If both -w and -t are given for some animation/scroll, -t takes precedence
ubuntu@snaptesting:~$
```

输出告诉我们:在当前终端模拟器的缩放级别下,默认的输出分辨率是 80 × 48 像素。

让我们把字体缩小一点,并最大化 GNOME 终端窗口,然后再次查看帮助:

```
    -g<w>x<h>  : Output pixel geometry. Default from terminal 635x428
```

分辨率好多了,但我几乎看不清字符,因为它们太小了。让我们调用前面的命令,再次显示那辆车。

![图片.png-904.9kB][35]

你所看到的是调整后的图像(1080p)。虽然它是用彩色文本字符显示的,但看起来依旧很棒。

接下来呢?timg 其实也可以播放 gif 动画哦!

```
$ wget https://m.popkey.co/9b7141/QbAV_f-maxage-0.gif -O JonahHillAmazed.gif
$ timg JonahHillAmazed.gif
```

你可以试着安装 timg 来体验 gif 动画。要是不想自己动手,可以在 [asciinema][36] 上查看相关演示记录(如果播放看起来卡顿,请重新播放一次)。

谢谢阅读!

-----

译者简介:

经常混迹于 snapcraft.io,对 Ubuntu Core、Snaps 和 Snapcraft 有着浓厚的兴趣,并致力于将这些还在快速发展的新技术通过翻译或原创的方式介绍到中文世界。有兴趣的小伙伴也可以关注译者个人公众号:`Snapcraft`。

-----

via: https://blog.simos.info/how-to-create-a-snap-for-timg-with-snapcraft-on-ubuntu/

作者:[Mi blog lah!][37]
译者:[Snapcrafter](https://github.com/Snapcrafter)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[1]: https://github.com/hzeller/timg
[2]: https://tutorials.ubuntu.com/tutorial/create-first-snap
[3]: https://github.com/hzeller/timg
[4]: https://snapcraft.io/docs/reference/plugins/make
[5]: https://github.com/hzeller/timg
[6]: http://static.zybuluo.com/apollomoon/ynm5k5urc7idb037ahca2s93/%E5%9B%BE%E7%89%87.png
[7]: http://static.zybuluo.com/apollomoon/h2ynj68axdqiy7dwszgw5z1f/%E5%9B%BE%E7%89%87.png
[8]: https://gist.github.com/XVilka/8346728
[9]: https://en.wikipedia.org/wiki/Escape_sequence
[10]: https://github.com/hzeller/timg
[11]: http://static.zybuluo.com/apollomoon/nzlqpq3xn4rs72h4r96k4xlw/%E5%9B%BE%E7%89%87.png
[12]: http://design.ubuntu.com/wp-content/uploads/ubuntu-logo112.png
[13]: http://static.zybuluo.com/apollomoon/vo1nxnu4xfaghyib03fnkvq4/%E5%9B%BE%E7%89%87.png
[14]: https://www.flickr.com/photos/doug88888/5776072628/in/photolist-9WCiNQ-7U3Trc-7YUZBL-5DwkEQ-6e1iT8-a372aS-5F75aL-a1gbow-6eNayj-8gWK2H-5CtH7P-6jVqZv-86RpwN-a2nEnB-aiRmsc-6aKvwK-8hmXrN-5CWDNP-62hWM8-a9smn1-ahQqHw-a22p3w-a36csK-ahN4Pv-7VEmnt-ahMSiT-9NpTa7-5A3Pon-ai7DL7-9TKCqV-ahr7gN-a1boqP-83ZzpH-9Sqjmq-5xujdi-7UmDVb-6J2zQR-5wAGNR-5eERar-5KVDym-5dL8SZ-5S2Uut-7RVyHg-9Z6MAt-aiRiT4-5tLesw-aGLSv6-5ftp6j-5wAVBq-5T2KAP
[15]: https://github.com/hzeller/timg
[16]: https://github.com/hzeller/timg
[17]: https://github.com/hzeller/timg
[18]: https://github.com/hzeller/timg
[19]: http://static.zybuluo.com/apollomoon/hovu73yqx08pdhm8qmdg6f6a/%E5%9B%BE%E7%89%87.png
[20]: https://github.com/hzeller/timg
[21]: http://static.zybuluo.com/apollomoon/o64i7jm65u3o12wg3fqqcn7x/%E5%9B%BE%E7%89%87.png
[22]: http://static.zybuluo.com/apollomoon/t4w1uak9j4h6rfn4ghc8q15k/%E5%9B%BE%E7%89%87.png
[23]: http://static.zybuluo.com/apollomoon/cvuetj2rzd5nee7pgfcp7wr3/timg-git-url.png
[24]: http://static.zybuluo.com/apollomoon/dxtl628r1qavphhzu70jiw1n/%E5%9B%BE%E7%89%87.png
[25]: https://snapcraft.io/docs/reference/plugins/make
[26]: https://snapcraft.io/docs/reference/plugins/
[27]: https://snapcraft.io/docs/reference/plugins/source
[28]: https://snapcraft.io/docs/reference/plugins/source
[29]: https://snapcraft.io/docs/reference/plugins/common
[30]: https://snapcraft.io/docs/reference/plugins/make
[31]: https://snapcraft.io/docs/reference/plugins/make
[32]: http://static.zybuluo.com/apollomoon/v9y3vutt8li4wwaxeigwr4yz/%E5%9B%BE%E7%89%87.png
[33]: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/
[34]: https://snapcraft.io/docs/build-snaps/publish
[35]: http://static.zybuluo.com/apollomoon/clnv44g3bwhaqog7o1jpvpcd/%E5%9B%BE%E7%89%87.png
[36]: https://asciinema.org/a/dezbe2gpye84e0pjndp8t0pvh
[37]: https://blog.simos.info/
@ -0,0 +1,253 @@

函数式编程简介
============================================================

> 我们来解释什么是函数式编程,它有哪些优点,并介绍一些学习函数式编程的资源。

![Introduction to functional programming](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb_computer_person_general_.png?itok=BRGJXU7e "函数式编程简介")

图片来源:opensource.com

取决于你问的是谁,_函数式编程_(FP)要么是一种应当广泛传播的开明的编程方法,要么是一种学术气过重、在现实中没有多少用处的编程方法。在这篇文章中,我将讲解函数式编程,探究其优点,并推荐学习函数式编程的资源。

### 语法入门

本文的代码示例使用的是 [Haskell][40] 编程语言。理解这篇文章只需要了解基本的函数语法:

```
even :: Int -> Bool
even = ... -- implementation goes here
```

上面的示例定义了一个单参数的函数 **even**。第一行是 _类型声明_:**even** 函数接受一个 **Int** 类型的参数,返回一个 **Bool** 类型的值。实现部分紧随其后;在这里我们将忽略具体实现(名称和类型已经足够了):

```
map :: (a -> b) -> [a] -> [b]
map = ...
```

这个示例中,**map** 是一个接受两个参数的函数:

1. **(a -> b)**:一个将 **a** 转换成 **b** 的函数;
2. **[a]**:一个 **a** 的列表(List,与其他语言中的数组类似)。**map** 会把该函数作用到 **[a]** 的每一个元素上,把结果收集到列表 **[b]** 中,并返回这个 **[b]**。

同样,我们不关心它是如何实现的,只对它的类型感兴趣。**a** 和 **b** 是可以代表任何类型的 _类型变量_。在下面的表达式中,**a** 就是 **Int** 类型,而 **b** 就是 **Bool** 类型:

```
map even [1,2,3]
```

它的结果是一个 **Bool** 类型的列表:

```
[False,True,False]
```

如果你遇到其他看不懂的语法,不要惊慌;完全理解语法并不是必要的。

### 函数式编程的误区

我们先来解释一下常见的误区:

* 函数式编程并不是命令式编程或者面向对象编程的对立面,认为它们非此即彼是一种错误的二分法。
* 函数式编程并非只在学术界使用。诚然,在函数式编程的历史上,像 Haskell 和 OCaml 这样的语言曾是最流行的研究对象,但是今天许多公司使用函数式编程来构建大型系统、小型专用程序,以及介于两者之间的各种软件。甚至还有一个面向函数式编程商业用户的[年度会议][33];历届的议程让我们了解到函数式编程在工业界的用途,以及都有谁在使用它。
* 函数式编程并不等同于 [monad][34],也不等同于任何其他特定的抽象。monad 只是一种抽象,有些函数式编程会用到它,有些则不会。
* 函数式编程并不是特别难学。有些语言的语法或语义可能与你已经熟悉的不同,但这些差异都是表面上的。函数式编程中确实有一些艰深的概念,但其他编程方法也同样如此。

### 什么是函数式编程?

函数式编程的核心是:只使用 _纯粹_ 的数学函数来编程。函数的结果仅取决于它的参数,没有诸如 I/O 或者状态改变之类的副作用。程序是通过 _组合函数_ 的方法构建的:

```
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(g . f) x = g (f x)
```

中缀函数 **(.)** 将两个函数组合成一个:先把 **f** 作用于输入,再把 **g** 作用于 **f** 的结果。我们将在下一个示例中看到它的用法。作为对比,下面是 Python 中的等价函数:

```
def compose(g, f):
    return lambda x: g(f(x))
```

函数式编程的优点在于:由于函数是确定性的,你总是可以用一次函数应用的结果来替换这次函数应用本身,这种替换使 _等式推理_ 成为可能。每个程序员都需要对自己和别人的代码进行推理,而等式推理是完成这种推理的好工具。来看一个示例。假设你遇到了这个表达式:

```
map even . map (+1)
```

这段代码是做什么的?可以简化吗?通过等式推理,可以用一系列替换来分析代码:

```
map even . map (+1)
map (even . (+1))         -- from definition of 'map'
map (\x -> even (x + 1))  -- lambda abstraction
map odd                   -- from definition of 'even'
```

我们可以使用等式推理来理解程序并优化它。Haskell 编译器也使用等式推理来执行多种程序优化。没有纯函数,等式推理要么无法实现,要么需要程序员付出多得多的努力。
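
作为示意,GHC 正是通过“重写规则”来利用这类等式的。下面是一个简化的草图(假设使用 GHC;其标准库中自带与之类似的 "map/map" 规则,这里的模块名 `RewriteDemo` 是为演示虚构的):

```
-- RewriteDemo.hs:一个最小示意,假设用 GHC 编译
module RewriteDemo where

-- 告诉编译器:相邻的两个 map 可以合并成一次遍历。
-- 这条等式之所以安全,正是因为纯函数满足等式推理。
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}
```

用 `ghc -O2` 编译时,优化器在匹配到 `map f (map g xs)` 形式的代码处就可以应用这条规则,把两次列表遍历合并为一次。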

### 函数式编程语言

要进行函数式编程,你需要一种合适的编程语言。

在一种没有高阶函数(把函数作为参数传递、以及返回函数的能力)、_lambda_(匿名函数)和泛型的语言中,很难有意义地进行函数式编程。大多数现代语言都具备这些特性,但不同语言对函数式编程的支持程度存在差异。其中支持最好的语言被称为函数式编程语言,包括静态类型的 _Haskell_、_OCaml_、_F#_ 和 _Scala_,以及动态类型的 _Erlang_ 和 _Clojure_。

即使在函数式语言之间,能在多大程度上利用函数式编程也有很大差异。类型系统会有很大的帮助,特别是当它支持 _类型推断_ 时(这样你就不必总是手动标注类型)。这篇文章不会详细介绍这部分内容,但可以肯定地说,并非所有的类型系统都旗鼓相当。

与所有语言一样,不同的函数式语言强调不同的概念、技术或用例。选择语言时,重要的是考虑它对函数式编程的支持程度,以及它是否适合你的用例。如果你暂时只能使用某种非函数式语言,也可以在该语言支持的范围内从函数式编程中受益。

### 不要打开陷阱之门

回想一下,纯函数的结果只取决于它的输入。不幸的是,几乎所有的编程语言都留有破坏这一性质的“陷阱门”:空值(null)、类型分支(`instanceof`)、类型强制转换、异常,以及无限递归的可能性。这些陷阱会打破等式推理,削弱程序员对程序行为进行推理的能力。(完全没有这些陷阱的语言包括 Agda、Idris 和 Coq。)

幸运的是,作为程序员,我们可以选择避开这些陷阱;只要足够自律,我们就可以假装陷阱不存在。这种做法叫做 _快速宽松的推理_(fast and loose reasoning)。它几乎没有成本:几乎任何程序都可以在不使用陷阱的情况下写出来,而通过避开陷阱,我们重新获得了等式推理、可组合性和可重用性。

让我们以异常为例详细讨论一下。这个陷阱之所以打破等式推理,是因为异常终止的可能性没有反映在类型中。(如果文档中提到了可能抛出的异常,你已经算幸运的了。)但是,我们完全可以用一个涵盖所有失败模式的返回类型来取代异常。

避开陷阱是语言特性能够产生巨大影响的一个领域。为了避免异常,可以使用代数数据类型(algebraic data type)来对错误情况建模,就像这样:

```
-- new data type for results of computations that can fail
--
data Result e a = Error e | Success a

-- new data type for three kinds of arithmetic errors
--
data ArithError = DivByZero | Overflow | Underflow

-- integer division, accounting for divide-by-zero
--
safeDiv :: Int -> Int -> Result ArithError Int
safeDiv x y =
  if y == 0
    then Error DivByZero
    else Success (div x y)
```

这个例子中的权衡在于:你现在必须处理 `Result ArithError Int` 类型的值,而不是过去那种简单的 `Int` 值,但失败的可能性如今体现在了类型里。你不再需要担心异常,可以放心使用 _快速宽松的推理_,所以总体来说这是一个胜利。
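
下面是一个简短的使用示意(假设与上面的 `Result`、`ArithError`、`safeDiv` 定义位于同一个模块中;辅助函数 `describeDiv` 是为演示虚构的),展示调用方如何通过模式匹配来处理所有失败模式:

```
-- 调用方必须对 Result 进行模式匹配,
-- 编译器因此会强制我们处理除零等失败情形。
describeDiv :: Int -> Int -> String
describeDiv x y =
  case safeDiv x y of
    Error DivByZero -> "cannot divide by zero"
    Error _         -> "arithmetic error"
    Success n       -> "result is " ++ show n

main :: IO ()
main = do
  putStrLn (describeDiv 10 2)  -- 输出:result is 5
  putStrLn (describeDiv 1 0)   -- 输出:cannot divide by zero
```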

### 免费的定理

大多数现代静态类型语言都具有 _泛型_(也称为 _参数多态_),其中函数是通过一个或多个抽象类型定义的。例如,考虑一个作用于列表的函数:

```
f :: [a] -> [a]
f = ...
```

Java 中的相同函数如下所示:

```
static <A> List<A> f(List<A> xs) { ... }
```

这两个版本都表明:无论为类型 _a_ 做出怎样的选择,这个函数都必须适用。考虑到这一点,并采用快速宽松的推理,仅从类型出发,你能推断出这样的函数可能做什么吗?

在这种情况下,类型并不能告诉我们函数具体做了什么(它可能反转列表、删除第一个元素,或者做许多其他事情),但它确实告诉了我们很多信息。仅仅从类型,我们就可以推导出关于这个函数的定理:

* **定理 1**:输出中的每个元素都来自输入;函数不可能向列表中添加新的 **a** 值,因为它不知道 **a** 究竟是什么类型,因此无法构造出新的 **a**。
* **定理 2**:如果先对列表 map 任意一个函数、再应用 **f**,其结果与先应用 **f**、再 map 这个函数是相同的。

定理 1 帮助我们了解这个函数做了什么,定理 2 则对程序优化很有帮助。而这一切我们都是从类型中学到的!从类型推导出有用定理的能力称为参数性(parametricity)。因此,类型既是函数行为的部分(有时甚至是完整的)规范,也是一种检查机制。
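
作为示意,可以用 QuickCheck 库对定理 2 做随机测试。下面是一个假设性的小脚本(需要先安装 QuickCheck 包;这里任选 `reverse` 作为满足 `f :: [a] -> [a]` 的函数,`(+1)` 作为被 map 的函数):

```
import Test.QuickCheck (quickCheck)

-- 任何满足 f :: [a] -> [a] 的函数都可以,这里以 reverse 为例。
f :: [a] -> [a]
f = reverse

-- 定理 2:map g . f 与 f . map g 的结果相同(g 这里取 (+1))。
prop_theorem2 :: [Int] -> Bool
prop_theorem2 xs = map (+1) (f xs) == f (map (+1) xs)

main :: IO ()
main = quickCheck prop_theorem2  -- 输出:+++ OK, passed 100 tests.
```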

现在你可以自己体验一下参数性了。从 **map** 和 **(.)** 的类型,或者下面这些函数的类型中,你能发现什么呢?(本节末尾附有一份参考草图。)

* **foo :: a -> (a, a)**
* **bar :: a -> a -> a**
* **baz :: b -> a -> a**
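
下面是一份假设性的参考草图:在快速宽松的推理下,这些类型几乎唯一地决定了函数能做什么:

```
-- foo 只能把输入复制成一对:它无法凭空构造新的 a。
foo :: a -> (a, a)
foo x = (x, x)

-- bar 只能返回两个参数之一(同样无法合成新的 a)。
bar :: a -> a -> a
bar x _ = x   -- 另一种合法实现:bar _ y = y

-- baz 对 b 一无所知、无法使用它,只能原样返回它的 a 参数。
baz :: b -> a -> a
baz _ y = y
```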

### 学习函数式编程的资源

也许你已经相信函数式编程是一种编写软件的好方法,想知道该如何开始?学习函数式编程有几种途径;这里是我推荐的一些资源(我承认,我对 Haskell 有所偏爱):

* 宾夕法尼亚大学的 [CIS 194:Haskell 入门][35] 是学习函数式编程概念和 Haskell 开发的不错起点,课程材料可以在线获取(你还可以观看 Brisbane 函数式编程小组几年前针对旧版 CIS 194 录制的[系列讲座][36])。
* 不错的入门书籍有 _[Scala 函数式编程][30]_、_[Haskell 函数式思维][31]_ 和 _[Haskell 编程原理][32]_。
* [Data61 FP 课程][37](前身为 _NICTA_ 课程)通过 _类型驱动_ 开发来教授抽象和数据结构。它十分困难,但收获也十分丰富;如果有一位熟悉函数式编程的程序员愿意引导你,不妨一试。
* 在你手头的代码中就开始做函数式编程:写一些纯函数(避免不确定性和异常),使用高阶函数而不是手写循环,利用参数性来提高可读性和可重用性。许多人正是通过在自己已有的语言中这样尝试,才体会到函数式编程的美妙。
* 加入你所在地区的函数式编程小组或学习小组,也可以参加函数式编程会议(新的会议总是不断涌现)。

### 总结

在本文中,我讨论了什么是函数式编程、什么不是函数式编程,并介绍了它的优点,包括等式推理和参数性。我们了解到,在大多数编程语言中都能做一些函数式编程,但语言的选择会影响受益的程度,而 Haskell 等函数式语言对它的支持最好。我还推荐了一些学习函数式编程的资源。

函数式编程是一个丰富的领域,还有许多更深入(也更神秘)的主题等待探索。我还没有提到那些具有实际意义的内容,比如:

* 透镜(lens)与棱镜(prism)(一等公民的 getter 和 setter,非常适合处理嵌套数据);
* 定理证明(既然可以证明代码正确,为什么还只满足于测试?);
* 惰性求值(让你可以处理潜在无限大的数据结构);
* 范畴论(函数式编程中许多优美而实用的抽象的起源)。

我希望你喜欢这篇函数式编程简介,并从中受到启发,去尝试这种有趣而实用的软件开发方法。

_本文根据 [CC BY 4.0][38] 许可证发布。_

--------------------------------------------------------------------------------

作者简介:

红帽软件工程师。对函数式编程、范畴论和数学感兴趣。为墨西哥辣椒(jalapeño)痴狂。

----------------------

via: https://opensource.com/article/17/4/introduction-functional-programming

作者:[Fraser Tweedale][a]
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/frasertweedale
[1]: https://opensource.com/tags/javascript?src=programming_resource_menu
[2]: https://opensource.com/tags/perl?src=programming_resource_menu
[3]: https://opensource.com/tags/python?src=programming_resource_menu
[4]: https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
[5]: https://developers.redhat.com/products/#developer_tools?intcmp=7016000000127cYAAQ&src=programming_resource_menu
[6]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
[7]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
[8]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
[9]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:div
[10]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[11]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Int
[12]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#t:Bool
[13]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[14]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[15]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[16]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[17]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[18]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[19]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[20]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[21]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[22]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[23]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[24]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[25]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[26]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[27]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:even
[28]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:map
[29]: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:odd
[30]: https://www.manning.com/books/functional-programming-in-scala
[31]: http://www.cambridge.org/gb/academic/subjects/computer-science/programming-languages-and-applied-logic/thinking-functionally-haskell
[32]: http://haskellbook.com/
[33]: http://cufp.org/
[34]: https://www.haskell.org/tutorial/monads.html
[35]: https://www.cis.upenn.edu/~cis194/fall16/
[36]: https://github.com/bfpg/cis194-yorgey-lectures
[37]: https://github.com/data61/fp-course
[38]: https://creativecommons.org/licenses/by/4.0/
[39]: https://opensource.com/article/17/4/introduction-functional-programming?rate=_tO5hNzT4hRKNMJtWwQM-K3Jmxm10iPeqoy3bbS12MQ
[40]: https://wiki.haskell.org/Introduction
[41]: https://opensource.com/user/123116/feed
[42]: https://opensource.com/users/frasertweedale
219
translated/tech/20170813 An Intro to Compilers.md
Normal file
219
translated/tech/20170813 An Intro to Compilers.md
Normal file
@ -0,0 +1,219 @@

编译器简介
============================================================

### 如何与计算机对话(在 Siri 出现之前)

简单说来,编译器不过是一个可以翻译其他程序的程序。传统的编译器把源代码翻译成你的机器能够理解的可执行机器代码。(一些编译器将源代码翻译成另一种程序语言,这样的编译器称为源到源翻译器或转译器。)[LLVM][7] 是一个广泛使用的编译器项目,包含许多模块化的编译工具。

传统的编译器设计包含三个部分:

![](https://nicoleorchard.com/img/blog/compilers/compiler1.jpg)

* 前端将源代码翻译为中间表示(IR)。[`clang`][1] 是 LLVM 中用于 C 家族语言的前端工具。
* 优化器分析 IR,并将其转化为更高效的形式。[`opt`][2] 是 LLVM 的优化工具。
* 后端将 IR 指令映射到目标硬件的指令集,从而生成机器代码。[`llc`][3] 是 LLVM 的后端工具。
* LLVM IR 是一种与汇编类似的低级语言,但它把特定硬件的信息抽象掉了。

### Hello, Compiler

下面是一个打印 "Hello, Compiler!" 到标准输出的简单 C 程序。C 语法是人类可读的,但是计算机却不能直接理解它。我将通过三个编译阶段,使该程序变成机器可执行的程序。

```
// compile_me.c
// Wave to the compiler. The world can wait.

#include <stdio.h>

int main() {
  printf("Hello, Compiler!\n");
  return 0;
}
```

### 前端

正如我在上面所提到的,`clang` 是 LLVM 中用于 C 家族语言的前端工具。Clang 包含 C 预处理器、词法分析器、语法解析器、语义分析器和 IR 生成器。

* C 预处理器在把源程序翻译成 IR 之前对其进行修改。预处理器会处理外部包含文件,比如上面的 `#include <stdio.h>`:它将把这一行替换为 C 标准库头文件 `stdio.h` 的完整内容,其中包含 `printf` 函数的声明。

*通过运行下面的命令来查看预处理步骤的输出:*

```
clang -E compile_me.c -o preprocessed.i
```

* 词法分析器(或称扫描器、分词器)将一串字符转化为一串单词。每一个单词,或者说记号(token),被归入五种语法类别之一:标点符号、关键字、标识符、字面量或注释。

*compile_me.c 的分词过程:*

![](https://nicoleorchard.com/img/blog/compilers/lexer.jpg)

* 语法分析器检查源程序中的单词流是否组成了合法的句子。在分析记号流的语法之后,它会输出一棵抽象语法树(AST)。Clang 的 AST 中的节点表示声明、语句和类型。

*compile_me.c 的语法树:*

![](https://nicoleorchard.com/img/blog/compilers/tree.jpg)

* 语义分析器遍历抽象语法树,判断代码语句是否具有正确的意义。这个阶段会检查类型错误。如果 compile_me.c 的 main 函数返回了 `"zero"` 而不是 `0`,那么语义分析器将会抛出一个错误,因为 `"zero"` 不是 `int` 类型。

* IR 生成器将抽象语法树翻译为 IR。

*对 compile_me.c 运行 clang 来生成 LLVM IR:*

```
clang -S -emit-llvm -o llvm_ir.ll compile_me.c
```

*llvm_ir.ll 中的 main 函数:*

```
; llvm_ir.ll
@.str = private unnamed_addr constant [18 x i8] c"Hello, Compiler!\0A\00", align 1

define i32 @main() {
  %1 = alloca i32, align 4 ; <- memory allocated on the stack
  store i32 0, i32* %1, align 4
  %2 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([18 x i8], [18 x i8]* @.str, i32 0, i32 0))
  ret i32 0
}

declare i32 @printf(i8*, ...)
```

### 优化程序

优化程序的工作是基于对程序运行时行为的理解来提高代码效率。优化程序以 IR 作为输入,生成改进后的 IR 作为输出。LLVM 的优化工具 `opt` 可以通过标志 `-O2`(大写 O,数字 2)优化处理器速度,通过标志 `-Os`(大写 O,小写 s)减少指令数目。

看一看上面的前端工具生成的 LLVM IR 代码,和运行下面的命令生成的结果之间的区别:

```
opt -O2 -S llvm_ir.ll -o optimized.ll
```

*optimized.ll 中的 main 函数:*

```
; optimized.ll
@str = private unnamed_addr constant [17 x i8] c"Hello, Compiler!\00"

define i32 @main() {
  %puts = tail call i32 @puts(i8* getelementptr inbounds ([17 x i8], [17 x i8]* @str, i64 0, i64 0))
  ret i32 0
}

declare i32 @puts(i8* nocapture readonly)
```

优化后的版本中,main 函数没有在栈上分配内存,因为它不需要使用任何栈内存。优化后的代码调用 `puts` 函数而不是 `printf` 函数,因为程序并没有用到 `printf` 的格式化功能。

当然,优化程序不仅仅知道何时可以用 `puts` 函数代替 `printf` 函数。优化程序也能展开循环、内联简单计算的结果。考虑下面的程序,它将两个整数相加并打印出结果:

```
// add.c
#include <stdio.h>

int main() {
  int a = 5, b = 10, c = a + b;
  printf("%i + %i = %i\n", a, b, c);
}
```

*下面是未优化的 LLVM IR:*

```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1

define i32 @main() {
  %1 = alloca i32, align 4          ; <- allocate stack space for var a
  %2 = alloca i32, align 4          ; <- allocate stack space for var b
  %3 = alloca i32, align 4          ; <- allocate stack space for var c
  store i32 5, i32* %1, align 4     ; <- store 5 at memory location %1
  store i32 10, i32* %2, align 4    ; <- store 10 at memory location %2
  %4 = load i32, i32* %1, align 4   ; <- load the value at memory address %1 into register %4
  %5 = load i32, i32* %2, align 4   ; <- load the value at memory address %2 into register %5
  %6 = add nsw i32 %4, %5           ; <- add the values in registers %4 and %5. put the result in register %6
  store i32 %6, i32* %3, align 4    ; <- put the value of register %6 into memory address %3
  %7 = load i32, i32* %1, align 4   ; <- load the value at memory address %1 into register %7
  %8 = load i32, i32* %2, align 4   ; <- load the value at memory address %2 into register %8
  %9 = load i32, i32* %3, align 4   ; <- load the value at memory address %3 into register %9
  %10 = call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i32 0, i32 0), i32 %7, i32 %8, i32 %9)
  ret i32 0
}

declare i32 @printf(i8*, ...)
```

*下面是优化后的 LLVM IR:*

```
@.str = private unnamed_addr constant [14 x i8] c"%i + %i = %i\0A\00", align 1

define i32 @main() {
  %1 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([14 x i8], [14 x i8]* @.str, i64 0, i64 0), i32 5, i32 10, i32 15)
  ret i32 0
}

declare i32 @printf(i8* nocapture readonly, ...)
```

优化后的 main 函数本质上就是未优化版本中的第 17 行和第 18 行,只是把变量的值内联了进去。由于所有变量都是常数,`opt` 在编译期就完成了加法计算。很酷吧,对不对?

### 后端

LLVM 的后端工具是 `llc`。它以 LLVM IR 作为输入,分三个阶段生成机器代码:

* 指令选择:将 IR 指令映射到目标机器的指令集。这个步骤使用的是一个无限的虚拟寄存器命名空间。
* 寄存器分配:将虚拟寄存器映射到目标体系结构的实际寄存器。我的 CPU 是 x86 架构,只有 16 个寄存器可用。不过,编译器会尽可能少地使用寄存器。
* 指令调度:对操作进行重排,以符合目标机器的性能约束。

*运行下面这个命令将会产生机器代码:*

```
llc -o compiled-assembly.s optimized.ll
```

```
_main:
    pushq   %rbp
    movq    %rsp, %rbp
    leaq    L_str(%rip), %rdi
    callq   _puts
    xorl    %eax, %eax
    popq    %rbp
    retq
L_str:
    .asciz  "Hello, Compiler!"
```

这个程序是 x86 汇编语言,也就是计算机真正使用的语言的人类可读形式。终于有“人”能够理解我了。
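
作为补充,下面是一个假设性的收尾步骤(以原文作者的 macOS/x86 环境为前提):可以让 clang 充当汇编器和链接器,把这个 .s 文件变成可执行文件并运行:

```
clang compiled-assembly.s -o hello   # 汇编并链接成可执行文件
./hello                              # 输出:Hello, Compiler!
```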

* * *

相关资源:

1. [设计一个编译器][4]
2. [开始探索 LLVM 核心库][5]

--------------------------------------------------------------------------------

via: https://nicoleorchard.com/blog/compilers

作者:[Nicole Orchard][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://nicoleorchard.com/
[1]: http://clang.llvm.org/
[2]: http://llvm.org/docs/CommandGuide/opt.html
[3]: http://llvm.org/docs/CommandGuide/llc.html
[4]: https://www.amazon.com/Engineering-Compiler-Second-Keith-Cooper/dp/012088478X
[5]: https://www.amazon.com/Getting-Started-LLVM-Core-Libraries/dp/1782166920
[6]: https://twitter.com/norchard/status/864246049266958336
[7]: http://llvm.org/
540
translated/tech/20170815 Getting Started with Headless Chrome.md
Normal file
540
translated/tech/20170815 Getting Started with Headless Chrome.md
Normal file
@ -0,0 +1,540 @@

Headless Chrome 入门
============================================================

### 摘要

[Headless Chrome][9] 从 Chrome 59 开始提供。这是一种在无界面(headless)环境下运行 Chrome 浏览器的方式。从本质上来说,就是不用 Chrome 界面来运行 Chrome!它将 Chromium 和 Blink 渲染引擎提供的所有现代 Web 平台功能都带入了命令行。

它为什么有用?

Headless 浏览器对于自动化测试和不需要可视化 UI 界面的服务器环境来说是一个很好的工具。例如,你可能需要对真实的网页运行一些测试、创建一个 PDF,或者只是检查浏览器如何呈现某个 URL。

> **注意:** Mac 和 Linux 上的 Chrome 59 都可以运行 Headless 模式。[对 Windows 的支持][2]将在 Chrome 60 中提供。要检查你使用的 Chrome 版本,请打开 `chrome://version`。

### 开启 Headless 模式(命令行界面)

开启 Headless 模式最简单的方法是从命令行打开 Chrome 二进制文件。如果你已经安装了 Chrome 59 以上的版本,请使用 `--headless` 标志启动 Chrome:

```
chrome \
  --headless \                   # Runs Chrome in headless mode.
  --disable-gpu \                # Temporarily needed for now.
  --remote-debugging-port=9222 \
  https://www.chromestatus.com   # URL to open. Defaults to about:blank.
```

> **注意:** 目前你仍然需要使用 `--disable-gpu` 标志,但它最终会变得不再必要。

`chrome` 应该指向你安装 Chrome 的位置。确切的位置会因平台差异而不同。当前我在 Mac 上操作,所以我为安装的每个版本的 Chrome 都创建了方便使用的别名。

如果你使用 Chrome 的稳定版,并且无法获得测试版,我建议你使用 `chrome-canary`:

```
alias chrome="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome"
alias chrome-canary="/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary"
alias chromium="/Applications/Chromium.app/Contents/MacOS/Chromium"
```

在[这里][10]下载 Chrome Canary。

### 命令行的功能

在某些情况下,你可能不需要以编程方式[编写脚本][11]来操控 Headless Chrome,而是可以使用一些[有用的命令行标志][12]来执行常见的任务。

### 打印 DOM

`--dump-dom` 标志将把 `document.body.innerHTML` 打印到标准输出:

```
chrome --headless --disable-gpu --dump-dom https://www.chromestatus.com/
```

### 创建一个 PDF

`--print-to-pdf` 标志将把页面转出为 PDF 文件:

```
chrome --headless --disable-gpu --print-to-pdf https://www.chromestatus.com/
```

### 截图

要捕获页面的屏幕截图,请使用 `--screenshot` 标志:

```
chrome --headless --disable-gpu --screenshot https://www.chromestatus.com/

# Size of a standard letterhead.
chrome --headless --disable-gpu --screenshot --window-size=1280,1696 https://www.chromestatus.com/

# Nexus 5x
chrome --headless --disable-gpu --screenshot --window-size=412,732 https://www.chromestatus.com/
```

使用 `--screenshot` 标志运行将在当前工作目录中生成一个名为 `screenshot.png` 的文件。如果你想要整个页面的截图,那么会涉及到更多事情。来自 David Schnurr 的一篇很棒的博文已经介绍了这一内容,请查看[使用 headless Chrome 作为自动化截图工具][13]。

### REPL 模式(read-eval-print loop)

`--repl` 标志可以让 Headless Chrome 运行在一个交互模式下,你可以在其中用浏览器求值 JS 表达式:

```
$ chrome --headless --disable-gpu --repl https://www.chromestatus.com/
[0608/112805.245285:INFO:headless_shell.cc(278)] Type a Javascript expression to evaluate or "quit" to exit.
>>> location.href
{"result":{"type":"string","value":"https://www.chromestatus.com/features"}}
>>> quit
$
```

### 在没有浏览器界面的情况下调试 Chrome

当你使用 `--remote-debugging-port=9222` 运行 Chrome 时,它会启动一个开启了 [DevTools 协议][14]的实例。该协议用于与 Chrome 进行通信,并驱动 headless 浏览器实例。此外,它也是 Sublime、VS Code 和 Node 等工具用于远程调试应用程序的协议。#协同效应

由于你没有浏览器用户界面可以用来查看网页,请在另一个浏览器中输入 `http://localhost:9222`,以检查一切是否正常。你将会看到一个可检查页面的列表,点击它们就能查看 Headless 正在呈现的内容:

![DevTools Remote](https://developers.google.com/web/updates/images/2017/04/headless-chrome/remote-debugging-ui.jpg)

*DevTools 远程调试界面*

从这里,你就可以像往常一样使用熟悉的 DevTools 来检查、调试和调整页面了。如果你以编程方式使用 Headless,这个页面也是一个功能强大的调试工具,可以用来查看所有通过网络与浏览器交互的原始 DevTools 协议命令。
### 使用编程模式(Node)

### Puppeteer 库 API

[Puppeteer][15] 是一个由 Chrome 团队开发的 Node 库。它提供了控制 headless(或完整版)Chrome 的高层次 API。它与 Phantom 和 NightmareJS 这样的其他自动化测试库类似,但是只适用于最新版本的 Chrome。

除此之外,Puppeteer 还可用于轻松截取屏幕截图、创建 PDF、在页面间导航以及获取有关这些页面的信息。如果你想快速地进行浏览器的自动化测试,我建议使用该库。它隐藏了 DevTools 协议的复杂性,并可以处理诸如启动 Chrome 调试实例等冗余的任务。

安装:

```
yarn add puppeteer
```

**例子** - 打印用户代理:

```
const puppeteer = require('puppeteer');

(async() => {
  const browser = await puppeteer.launch();
  console.log(await browser.version());
  browser.close();
})();
```

**例子** - 生成页面的 PDF:

```
const puppeteer = require('puppeteer');

(async() => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.chromestatus.com', {waitUntil: 'networkidle'});
  await page.pdf({path: 'page.pdf', format: 'A4'});

  browser.close();
})();
```

查看 [Puppeteer 的文档][16],了解完整 API 的更多信息。

### CRI 库

[chrome-remote-interface][17] 是一个比 Puppeteer API 更低层次的库。如果你想要更接近原始信息、更直接地使用 [DevTools 协议][18],推荐使用它。

#### 启动 Chrome

chrome-remote-interface 不会为你启动 Chrome,所以你需要自己启动它。

在前面的命令行部分,我们使用 `--headless --remote-debugging-port=9222` [手动启动了 Chrome][19]。但是,要想实现完全自动化测试,你可能希望从应用程序中启动 Chrome。

其中一种方法是使用 `child_process`:

```
const execFile = require('child_process').execFile;

function launchHeadlessChrome(url, callback) {
  // Assuming MacOSx.
  const CHROME = '/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome';
  execFile(CHROME, ['--headless', '--disable-gpu', '--remote-debugging-port=9222', url], callback);
}

launchHeadlessChrome('https://www.chromestatus.com', (err, stdout, stderr) => {
  ...
});
```

但是如果你想要一个能在多个平台上运行的可移植方案,事情就会变得很棘手。看看上面这个硬编码的 Chrome 路径就知道了。

##### 使用 ChromeLauncher

[Lighthouse][20] 是一个优秀的网络应用质量测试工具。用于启动 Chrome 的强大模块正是在 Lighthouse 中开发的,现在已经被提取出来可以单独使用。[`chrome-launcher` NPM 模块][21]可以找到 Chrome 的安装位置、设置调试实例、启动浏览器,并在程序运行完之后将其杀死。最好的一点是,得益于 Node,它可以跨平台工作!

默认情况下,**`chrome-launcher` 会尝试启动 Chrome Canary**(如果已经安装),但是你也可以更改它,手动选择使用的 Chrome 版本。要想使用它,首先从 npm 安装:

```
yarn add chrome-launcher
```

**例子** - 使用 `chrome-launcher` 启动 Headless Chrome:

```
const chromeLauncher = require('chrome-launcher');

// Optional: set logging level of launcher to see its output.
// Install it using: yarn add lighthouse-logger
// const log = require('lighthouse-logger');
// log.setLevel('info');

/**
 * Launches a debugging instance of Chrome.
 * @param {boolean=} headless True (default) launches Chrome in headless mode.
 *     False launches a full version of Chrome.
 * @return {Promise<ChromeLauncher>}
 */
function launchChrome(headless=true) {
  return chromeLauncher.launch({
    // port: 9222, // Uncomment to force a specific port of your choice.
    chromeFlags: [
      '--window-size=412,732',
      '--disable-gpu',
      headless ? '--headless' : ''
    ]
  });
}

launchChrome().then(chrome => {
  console.log(`Chrome debuggable on port: ${chrome.port}`);
  ...
  // chrome.kill();
});
```

运行这个脚本不会做太多的事情,但你应该能在任务管理器中看到一个 Chrome 实例启动了,它加载着页面 `about:blank`。记住,它不会有任何的浏览器界面,因为我们是 headless 的。

为了控制浏览器,我们需要 DevTools 协议!

#### 检索有关页面的信息

> **警告:** DevTools 协议可以做很多有趣的事情,但是起初可能让人有点望而生畏。我建议先花点时间浏览 [DevTools 协议查看器][3],然后转到 `chrome-remote-interface` 的 API 文档,看看它是如何包装原始协议的。

我们来安装该库:

```
yarn add chrome-remote-interface
```

##### 示例

**例子** - 打印用户代理:

```
const CDP = require('chrome-remote-interface');

...

launchChrome().then(async chrome => {
  const version = await CDP.Version({port: chrome.port});
  console.log(version['User-Agent']);
});
```

结果是类似 `HeadlessChrome/60.0.3082.0` 这样的东西。

**例子** - 检查网站是否有 [Web 应用程序清单][22]:

```
const CDP = require('chrome-remote-interface');

...

(async function() {

const chrome = await launchChrome();
const protocol = await CDP({port: chrome.port});

// Extract the DevTools protocol domains we need and enable them.
// See API docs: https://chromedevtools.github.io/devtools-protocol/
const {Page} = protocol;
await Page.enable();

Page.navigate({url: 'https://www.chromestatus.com/'});

// Wait for window.onload before doing stuff.
Page.loadEventFired(async () => {
  const manifest = await Page.getAppManifest();

  if (manifest.url) {
    console.log('Manifest: ' + manifest.url);
    console.log(manifest.data);
  } else {
    console.log('Site has no app manifest');
  }

  protocol.close();
  chrome.kill(); // Kill Chrome.
});

})();
```

**例子** - 使用 DOM API 提取页面的 `<title>`:

```
const CDP = require('chrome-remote-interface');

...

(async function() {

const chrome = await launchChrome();
const protocol = await CDP({port: chrome.port});

// Extract the DevTools protocol domains we need and enable them.
// See API docs: https://chromedevtools.github.io/devtools-protocol/
const {Page, Runtime} = protocol;
await Promise.all([Page.enable(), Runtime.enable()]);

Page.navigate({url: 'https://www.chromestatus.com/'});

// Wait for window.onload before doing stuff.
Page.loadEventFired(async () => {
  const js = "document.querySelector('title').textContent";
  // Evaluate the JS expression in the page.
  const result = await Runtime.evaluate({expression: js});

  console.log('Title of page: ' + result.result.value);

  protocol.close();
  chrome.kill(); // Kill Chrome.
});

})();
```

### 使用 Selenium、WebDriver 和 ChromeDriver

目前,Selenium 会启动 Chrome 的完整实例。换句话说,这是一个自动化的解决方案,但不是完全 headless 的。不过,只需进行少量配置,Selenium 也可以运行 headless Chrome。如果你想要自己动手配置的完整说明,我推荐[使用 Headless Chrome 来运行 Selenium][23];你也可以从下面的一些示例开始。

#### 使用 ChromeDriver

[ChromeDriver][24] 2.3.0 支持 Chrome 59 及更新版本,可与 headless Chrome 配合使用。在某些情况下,你可能需要 Chrome 60 以解决 bug,例如,已知 Chrome 59 中的屏幕截图存在问题。

安装:

```
yarn add selenium-webdriver chromedriver
```

例子:

```
const fs = require('fs');
const webdriver = require('selenium-webdriver');
const chromedriver = require('chromedriver');

// This should be the path to your Canary installation.
// I'm assuming Mac for the example.
const PATH_TO_CANARY = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary';

const chromeCapabilities = webdriver.Capabilities.chrome();
chromeCapabilities.set('chromeOptions', {
  binary: PATH_TO_CANARY, // Screenshots require Chrome 60. Force Canary.
  'args': [
    '--headless',
  ]
});

const driver = new webdriver.Builder()
  .forBrowser('chrome')
  .withCapabilities(chromeCapabilities)
  .build();

// Navigate to google.com, enter a search.
driver.get('https://www.google.com/');
driver.findElement({name: 'q'}).sendKeys('webdriver');
driver.findElement({name: 'btnG'}).click();
driver.wait(webdriver.until.titleIs('webdriver - Google Search'), 1000);

// Take screenshot of results page. Save to disk.
driver.takeScreenshot().then(base64png => {
  fs.writeFileSync('screenshot.png', new Buffer(base64png, 'base64'));
});

driver.quit();
```

#### 使用 WebDriverIO

[WebDriverIO][25] 是构建在 Selenium WebDriver 之上的更高层次的 API。

安装:

```
yarn add webdriverio chromedriver
```

例子:过滤 chromestatus.com 上的 CSS 功能:

```
const webdriverio = require('webdriverio');
const chromedriver = require('chromedriver');

// This should be the path to your Canary installation.
// I'm assuming Mac for the example.
const PATH_TO_CANARY = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary';
const PORT = 9515;

chromedriver.start([
  '--url-base=wd/hub',
  `--port=${PORT}`,
  '--verbose'
]);

(async () => {

const opts = {
  port: PORT,
  desiredCapabilities: {
    browserName: 'chrome',
    chromeOptions: {
      binary: PATH_TO_CANARY, // Screenshots require Chrome 60. Force Canary.
      args: ['--headless']
    }
  }
};

const browser = webdriverio.remote(opts).init();

await browser.url('https://www.chromestatus.com/features');

const title = await browser.getTitle();
console.log(`Title: ${title}`);

await browser.waitForText('.num-features', 3000);
let numFeatures = await browser.getText('.num-features');
console.log(`Chrome has ${numFeatures} total features`);

await browser.setValue('input[type="search"]', 'CSS');
console.log('Filtering features...');
await browser.pause(1000);

numFeatures = await browser.getText('.num-features');
console.log(`Chrome has ${numFeatures} CSS features`);

const buffer = await browser.saveScreenshot('screenshot.png');
console.log('Saved screenshot...');

chromedriver.stop();
browser.end();

})();
```

### 更多资源

以下是一些可以帮助你入门的有用资源:

文档

* [DevTools Protocol Viewer][4] - API 参考文档

工具

* [chrome-remote-interface][5] - 基于 DevTools 协议的 node 模块
* [Lighthouse][6] - 测试 Web 应用程序质量的自动化工具;大量使用了该协议
* [chrome-launcher][7] - 用于启动 Chrome 的 node 模块,可以实现自动化

样例

* "[The Headless Web][8]" - Paul Kinlan 发布的使用了 Headless 和 api.ai 的精彩博客

### 常见问题

**我需要 `--disable-gpu` 标志吗?**

目前是需要的。`--disable-gpu` 标志在规避一些 bug 时是必需的,在未来版本的 Chrome 中就不再需要了。查看 [https://crbug.com/546953#c152][26] 和 [https://crbug.com/695212][27] 获取更多信息。

**所以我仍然需要 Xvfb 吗?**

不需要。Headless Chrome 不使用窗口,所以不再需要像 Xvfb 这样的显示服务器。没有它你也可以愉快地运行自动化测试。

什么是 Xvfb?Xvfb 是一个用于类 Unix 系统的内存显示服务器,可以让你在没有物理显示器的情况下运行图形应用程序(如 Chrome)。许多人使用 Xvfb 配合早期版本的 Chrome 来进行 "headless" 测试。

**如何创建一个运行 Headless Chrome 的 Docker 容器?**

查看 [lighthouse-ci][28]。它有一个使用 Ubuntu 作为基础镜像的 [Dockerfile 示例][29],并且在 App Engine Flexible 容器中安装和运行了 Lighthouse。

**我可以把它和 Selenium / WebDriver / ChromeDriver 一起使用吗?**

是的。查看[使用 Selenium、WebDriver 和 ChromeDriver][30] 一节。

**它和 PhantomJS 有什么关系?**

Headless Chrome 和 [PhantomJS][31] 是类似的工具。它们都可以用来在 headless 环境中进行自动化测试。两者的主要不同在于 Phantom 使用了一个较老版本的 WebKit 作为它的渲染引擎,而 Headless Chrome 使用的是最新版本的 Blink。

目前,Phantom 还提供了比 [DevTools 协议][32]更高层次的 API。

**我在哪儿提交 bug?**

对于 Headless Chrome 的 bug,请提交到 [crbug.com][33]。

对于 DevTools 协议的 bug,请提交到 [github.com/ChromeDevTools/devtools-protocol][34]。

--------------------------------------------------------------------------------

作者简介:

[Eric Bidelman][1],谷歌工程师,从事 Lighthouse 开发、Web 和 Web 组件开发以及 Chrome 开发。

-----------------------------------

via: https://developers.google.com/web/updates/2017/04/headless-chrome

作者:[Eric Bidelman][a]
译者:[firmianay](https://github.com/firmianay)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://developers.google.com/web/resources/contributors#ericbidelman
[1]: https://developers.google.com/web/resources/contributors#ericbidelman
[2]: https://bugs.chromium.org/p/chromium/issues/detail?id=686608
[3]: https://chromedevtools.github.io/devtools-protocol/
[4]: https://chromedevtools.github.io/devtools-protocol/
[5]: https://www.npmjs.com/package/chrome-remote-interface
[6]: https://github.com/GoogleChrome/lighthouse
[7]: https://github.com/GoogleChrome/lighthouse/tree/master/chrome-launcher
[8]: https://paul.kinlan.me/the-headless-web/
[9]: https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md
[10]: https://www.google.com/chrome/browser/canary.html
[11]: https://developers.google.com/web/updates/2017/04/headless-chrome#node
[12]: https://cs.chromium.org/chromium/src/headless/app/headless_shell_switches.cc
[13]: https://medium.com/@dschnr/using-headless-chrome-as-an-automated-screenshot-tool-4b07dffba79a
[14]: https://chromedevtools.github.io/devtools-protocol/
[15]: https://github.com/GoogleChrome/puppeteer
[16]: https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md
[17]: https://www.npmjs.com/package/chrome-remote-interface
[18]: https://chromedevtools.github.io/devtools-protocol/
[19]: https://developers.google.com/web/updates/2017/04/headless-chrome#cli
[20]: https://developers.google.com/web/tools/lighthouse/
[21]: https://www.npmjs.com/package/chrome-launcher
[22]: https://developers.google.com/web/fundamentals/engage-and-retain/web-app-manifest/
[23]: https://intoli.com/blog/running-selenium-with-headless-chrome/
[24]: https://sites.google.com/a/chromium.org/chromedriver/
[25]: http://webdriver.io/
[26]: https://bugs.chromium.org/p/chromium/issues/detail?id=546953#c152
[27]: https://bugs.chromium.org/p/chromium/issues/detail?id=695212
[28]: https://github.com/ebidel/lighthouse-ci
[29]: https://github.com/ebidel/lighthouse-ci/blob/master/builder/Dockerfile.headless
[30]: https://developers.google.com/web/updates/2017/04/headless-chrome#drivers
[31]: http://phantomjs.org/
[32]: https://chromedevtools.github.io/devtools-protocol/
[33]: https://bugs.chromium.org/p/chromium/issues/entry?components=Blink&blocking=705916&cc=skyostil%40chromium.org&Proj=Headless
[34]: https://github.com/ChromeDevTools/devtools-protocol/issues/new
@@ -0,0 +1,170 @@

Automated testing with Headless Chrome
============================================================

If you want to run automated tests using Headless Chrome, look no further! This article will get you set up using Karma as a runner and Mocha+Chai for authoring tests.

**What are all these things?**

Karma, Mocha, Chai, Headless Chrome, oh my!

[Karma][2] is a testing harness that works with all of the most popular testing frameworks ([Jasmine][3], [Mocha][4], [QUnit][5]).

[Chai][6] is an assertion library that works with Node and in the browser. We need the latter here.

[Headless Chrome][7] is a way to run the Chrome browser in a headless environment, without the full browser UI. One of the benefits of using Headless Chrome (as opposed to testing directly in Node) is that your JavaScript tests are executed in the same environment as the users of your site. Headless Chrome gives you a real browser environment, without the memory overhead of running a full version of Chrome.
### Setup
Install Karma, the relevant plugins, and the testing libraries using `yarn`:
```
yarn add --dev karma karma-chrome-launcher karma-mocha karma-chai
yarn add --dev mocha chai
```
Or use `npm`:
```
npm i --save-dev karma karma-chrome-launcher karma-mocha karma-chai
npm i --save-dev mocha chai
```
I'm using [Mocha][8] and [Chai][9] in this post, but if you're not a fan, choose your favorite assertion library that works in the browser.
### Configure Karma

Create a `karma.conf.js` file that uses the `ChromeHeadless` launcher.

**karma.conf.js**
```
module.exports = function(config) {
  config.set({
    frameworks: ['mocha', 'chai'],
    files: ['test/**/*.js'],
    reporters: ['progress'],
    port: 9876,  // karma web server port
    colors: true,
    logLevel: config.LOG_INFO,
    browsers: ['ChromeHeadless'],
    autoWatch: false,
    // singleRun: false, // Karma captures browsers, runs the tests and exits
    concurrency: Infinity
  })
}
```
**Note:** Run `./node_modules/karma/bin/karma init karma.conf.js` to generate the Karma configuration file.
### Write a test

Write a test in `/test/test.js`:

**/test/test.js**
```
describe('Array', () => {
  describe('#indexOf()', () => {
    it('should return -1 when the value is not present', () => {
      assert.equal(-1, [1,2,3].indexOf(4));
    });
  });
});
```
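
If you prefer Chai's `expect` style over `assert`, the same test can be written as below; this assumes the karma-chai adapter exposes `expect` as a browser global, as it does `assert`:

```
// An expect-style variant of the test above.
// Assumes karma-chai exposes `expect` globally, as it does `assert`.
describe('Array', () => {
  describe('#indexOf()', () => {
    it('should return -1 when the value is not present', () => {
      expect([1, 2, 3].indexOf(4)).to.equal(-1);
    });
  });
});
```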
### Run your tests

Add a test script in `package.json` that runs Karma with our settings.

**package.json**
```
"scripts": {
  "test": "karma start --single-run --browsers ChromeHeadless karma.conf.js"
}
```
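
If you would rather launch Karma from a Node script than through npm scripts, the `karma` package also exposes a programmatic API. A minimal sketch, assuming Karma's documented `Server` constructor; the file name `run-tests.js` is just illustrative:

```
// run-tests.js - launch Karma programmatically instead of via the CLI.
const path = require('path');
const {Server} = require('karma');

const server = new Server({
  configFile: path.resolve(__dirname, 'karma.conf.js'),
  singleRun: true
}, (exitCode) => {
  console.log(`Karma has exited with code ${exitCode}`);
  process.exit(exitCode);
});

server.start();
```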
When you run your tests (`yarn test`), Headless Chrome should fire up and output the results to your terminal:

![Output from Karma](https://developers.google.com/web/updates/images/2017/06/headless-karma.png)
### Creating your own Headless Chrome launcher

The `ChromeHeadless` launcher is great because it works out of the box for testing on Headless Chrome. It includes the appropriate Chrome flags for you and launches a remote debugging version of Chrome on port `9222`.

However, sometimes you may want to pass custom flags to Chrome or change the remote debugging port the launcher uses. To do that, extend the base `ChromeHeadless` launcher by creating a `customLaunchers` field:

**karma.conf.js**
```
module.exports = function(config) {
  ...

  config.set({
    browsers: ['Chrome', 'ChromeHeadless', 'MyHeadlessChrome'],

    customLaunchers: {
      MyHeadlessChrome: {
        base: 'ChromeHeadless',
        flags: ['--disable-translate', '--disable-extensions', '--remote-debugging-port=9223']
      }
    },
  })
};
```
### Running it all on Travis CI

Configuring Karma to run your tests in Headless Chrome is the hard part. Continuous integration on Travis is just a few changes away!

To run your tests on Travis, use `dist: trusty` and install the Chrome stable addon:

**.travis.yml**
```
language: node_js
node_js:
  - "7"
dist: trusty # needs Ubuntu Trusty
sudo: false  # no need for virtualization.
addons:
  chrome: stable # have Travis install chrome stable.
cache:
  yarn: true
  directories:
    - node_modules
install:
  - yarn
script:
  - yarn test
```
--------------------------------------------------------------------------------

About the author

[Eric Bidelman][1] is an engineer at Google working on Lighthouse, the web, web components, and Chrome.

----------------
via: https://developers.google.com/web/updates/2017/06/headless-karma-mocha-chai

Author: [Eric Bidelman][a]
Translator: [firmianay](https://github.com/firmianay)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://developers.google.com/web/resources/contributors#ericbidelman
[1]:https://developers.google.com/web/resources/contributors#ericbidelman
[2]:https://karma-runner.github.io/
[3]:https://jasmine.github.io/
[4]:https://mochajs.org/
[5]:https://qunitjs.com/
[6]:http://chaijs.com/
[7]:https://developers.google.com/web/updates/2017/04/headless-chrome
[8]:https://mochajs.org/
[9]:http://chaijs.com/
@@ -0,0 +1,125 @@

Using Ansible for deploying serverless applications
============================================================

### Serverless is another step in the direction of managed services, and it plays nicely with Ansible's agentless architecture.

![Using Ansible for deploying serverless applications](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY "Using Ansible for deploying serverless applications")

Image by: opensource.com

[Ansible][8] is designed as the simplest deployment tool that actually works. What that means is that it's not a full programming language: you write YAML templates that define tasks and list whatever tasks you need to automate.

Most people think of Ansible as a souped-up "SSH in a for loop," and that's true for simple use cases. But really Ansible is about _tasks_, not about SSH. For a lot of use cases, we connect via SSH, but it also supports things like Windows Remote Management (WinRM) for Windows machines, and HTTPS APIs as the lingua franca of cloud services.

More on Ansible

* [How Ansible works][1]
* [Free Ansible eBooks][2]
* [Ansible quick start video][3]
* [Download and install Ansible][4]

In a cloud, Ansible can operate on two separate layers: the control plane and the on-instance resources. The control plane consists of everything _not_ running on the OS: setting up networks, spawning instances, provisioning higher-level services like Amazon's S3 or DynamoDB, and everything else you need to keep your cloud infrastructure secure and serving customers.

On-instance work is what you already know Ansible for: starting and stopping services, templating config files, installing packages, and all the other OS-level things you can do over SSH.

Now, what about [serverless][9]? Depending on whom you ask, serverless is either an unlimited extension of the public cloud, or a wholly new paradigm where everything is an API call, and it's never been done before.

Ansible takes the first view. Before "serverless" was a term of art, users had to manage and provision their EC2 instances, virtual private cloud (VPC) networks, and everything else. Serverless is another step in the direction of managed services, and it plays nicely with Ansible's agentless architecture.

Before we get to a [Lambda][10] example, let's look at a simpler task for provisioning a CloudFormation stack:
```
- name: Build network
  cloudformation:
    stack_name: prod-vpc
    state: present
    template: base_vpc.yml
```
Writing a task like this takes only a couple of minutes, but it brings the last semi-manual step involved in building your infrastructure, clicking "Create Stack," into a playbook alongside everything else. Now your VPC is just another task you can call when building up a new region.

Since cloud providers are the real source of truth for what's actually happening in your account, Ansible has a number of ways to pull that information back and use the IDs, names, and other parameters to filter and query running instances or networks. Take, for example, the **cloudformation_facts** module, which we can use to get the subnet IDs, network ranges, and other data back out of the template we just created.
```
- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack
```
For serverless applications, you'll definitely need a complement of Lambda functions in addition to any DynamoDB tables, S3 buckets, and whatever else. Fortunately, by using the **lambda** module, Lambda functions can be created in much the same way as the stack from the previous tasks:
```
- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function
```
If you have another tool that you prefer for shipping the serverless parts of your application, that works as well. The open source [Serverless Framework][11] has its own Ansible module that works just as well:
```
- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls

- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts
```
That's not quite everything you need, since the serverless project also has to exist, and that's where you'll do the bulk of the work of defining your functions and event sources. For this example, we'll make a single function that responds to HTTP requests. The Serverless Framework uses YAML as its config language (as does Ansible), so this should look familiar.
```
# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get
```
At [AnsibleFest][12], I'll be covering this example and other in-depth deployment strategies to take the best advantage of the playbooks and infrastructure you already have, along with new serverless practices. Whether or not you can make it, I hope these examples get you started using Ansible, whether or not you have any servers to manage.

_AnsibleFest is a one-day conference bringing together hundreds of Ansible users, developers, and industry partners. Join us for product updates, inspirational talks, tech deep dives, hands-on demos, and a full day of networking. Get your AnsibleFest tickets for September 7 in San Francisco, and save 25% on [**registration**][6] with the discount code **OPENSOURCE**._

--------------------------------------------------------------------------------
via: https://opensource.com/article/17/8/ansible-serverless-applications

Author: [Ryan Scott Brown][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/ryansb
[1]:https://www.ansible.com/how-ansible-works?intcmp=701f2000000h4RcAAI
[2]:https://www.ansible.com/ebooks?intcmp=701f2000000h4RcAAI
[3]:https://www.ansible.com/quick-start-video?intcmp=701f2000000h4RcAAI
[4]:https://docs.ansible.com/ansible/latest/intro_installation.html?intcmp=701f2000000h4RcAAI
[5]:https://opensource.com/article/17/8/ansible-serverless-applications?rate=zOgBPQUEmiTctfbajpu_TddaH-8b-ay3pFCK0b43vFw
[6]:https://www.eventbrite.com/e/ansiblefest-san-francisco-2017-tickets-34008433139
[7]:https://opensource.com/user/12043/feed
[8]:https://www.ansible.com/
[9]:https://en.wikipedia.org/wiki/Serverless_computing
[10]:https://aws.amazon.com/lambda/
[11]:https://serverless.com/
[12]:https://www.ansible.com/ansiblefest?intcmp=701f2000000h4RcAAI
[13]:https://opensource.com/users/ryansb
[14]:https://opensource.com/users/ryansb
@@ -1,19 +1,19 @@

Using Kubernetes for Local Development — Minikube
============================================================
If your ops team is using Docker and Kubernetes, it is recommended to adopt the same or similar technologies in development. This will reduce the number of incompatibility and portability problems and make everyone consider the application container a common responsibility of both the Dev and Ops teams.

![](https://cdn-images-1.medium.com/max/1000/1*3RHSw_mAFsUhObmbHyjVOg.jpeg)

This blog post introduces the usage of Kubernetes in development mode, and it is inspired by a screencast that you can find in the [Painless Docker Course][10].

[![](https://cdn-images-1.medium.com/max/800/1*a02rarYYYvd7GalkyQ3AXg.jpeg)][1]

Minikube is a tool that makes developers' lives easier by allowing them to use and run a Kubernetes cluster on a local machine.

In this blog post, for the examples that I tested, I am using Linux Mint 18, but nothing changes apart from the installation part.
```
cat /etc/lsb-release
@@ -29,11 +29,11 @@ DISTRIB_DESCRIPTION="Linux Mint 18.1 Serena"

![](https://cdn-images-1.medium.com/max/800/1*DZzICImnejKbNV-NCa3gEg.png)

#### Prerequisites

In order to work with Minikube, we should have Kubectl and Minikube installed, plus some virtualization drivers.

* For OS X, install the [xhyve driver][2], [VirtualBox][3], or [VMware Fusion][4], then Kubectl and Minikube.
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
@@ -51,19 +51,19 @@ sudo mv ./kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

* For Windows, install [VirtualBox][6] or [Hyper-V][7], then Kubectl and Minikube.
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe
```
Add the binary to your PATH (this [article][11] explains how to modify the PATH).

Download the `minikube-windows-amd64.exe` file, rename it to `minikube.exe`, and add it to your path.

Find the latest release [here][12].

* For Linux, install [VirtualBox][8] or [KVM][9], then Kubectl and Minikube.
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
@@ -81,9 +81,9 @@ sudo mv ./kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

#### Using Minikube

Let's start by creating an image from this Dockerfile:
```
FROM busybox
@@ -92,21 +92,21 @@ EXPOSE 8000
CMD httpd -p 8000 -h /www; tail -f /dev/null
```

Add something you'd like to see in the index.html page.

Build the image:

```
docker build -t eon01/hello-world-web-server .
```

Let's run the container to test it:

```
docker run -d --name webserver -p 8000:8000 eon01/hello-world-web-server
```

This is the output of docker ps:
```
docker ps
@@ -118,145 +118,145 @@ CONTAINER ID IMAGE COMMAND CREA
2ad8d688d812 eon01/hello-world-web-server "/bin/sh -c 'httpd..." 3 seconds ago Up 2 seconds 0.0.0.0:8000->8000/tcp webserver
```

Let's commit the image and upload it to the public Docker Hub. You can use your own private registry:

```
docker commit webserver
docker push eon01/hello-world-web-server
```

Remove the container, since we will use it with Minikube.

```
docker rm -f webserver
```

Time to start Minikube:

```
minikube start
```
Check the status:

```
minikube status
```

We are running a single node:

```
kubectl get node
```

Run the webserver:

```
kubectl run webserver --image=eon01/hello-world-web-server --port=8000
```

A webserver should have its port exposed:

```
kubectl expose deployment webserver --type=NodePort
```
In order to get the service URL, type:

```
minikube service webserver --url
```

We can see the content of the web page using:

```
curl $(minikube service webserver --url)
```

To show a summary of the running cluster, run:

```
kubectl cluster-info
```

For more details:

```
kubectl cluster-info dump
```

We can also list the pods using:

```
kubectl get pods
```
And to access the dashboard, use:

```
minikube dashboard
```

If you would like to access the frontend of the web application, type:

```
kubectl proxy
```

If we want to execute a command inside the container, get the pod id using:

```
kubectl get pods
```

Then use it like this:

```
kubectl exec webserver-2022867364-0v1p9 -it -- /bin/sh
```
To finish, delete all deployments:

```
kubectl delete deployments --all
```

Delete all pods:

```
kubectl delete pods --all
```

And stop Minikube:

```
minikube stop
```

I hope you enjoyed this introduction.
### Connect Deeper

If this article resonated with you, you can find more interesting content in the [Painless Docker Course][13].

We at [Eralabs][14] will be happy to help you with your Docker and cloud computing projects; [contact us][15] and we will be happy to hear about your projects.

Please subscribe to [DevOpsLinks][16]: an online community of thousands of IT experts and DevOps enthusiasts from all over the world.

You may also be interested in joining our newsletter [Shipped][17], a newsletter focused on containers, orchestration, and serverless technologies.

You can find me on [Twitter][18], [Clarity][19], or my [website][20], and you can also check out my books: [SaltStack For DevOps][21].

Don't forget to join my latest project, [Jobs For DevOps][22]!

If you liked this post, please recommend it and share it with your followers.

--------------------------------------------------------------------------------

About the author:

Aymen El Amri - Cloud & software architect, entrepreneur, author, CEO of www.eralabs.io, founder of www.devopslinks.com. Personal page: www.aymenelamri.com

-------------------
@@ -264,7 +264,7 @@ Cloud & Software Architect, Entrepreneur, Author, CEO www.eralabs.io, Founder ww

via: https://medium.com/devopslinks/using-kubernetes-minikube-for-local-development-c37c6e56e3db

Author: [Aymen El Amri][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).