详解 UEFI 模式下安装 Linux
============================================================

> 此页面是免费浏览的,没有烦人的外部广告;然而,我的确花了时间准备,网站托管也花了钱。如果您发现此页面帮到了您,请考虑进行小额[捐款](http://www.rodsbooks.com/linux-uefi/),以帮助保持网站的运行。谢谢!

> 原著于 2013/10/19;最后修改于 2015/3/16

### 引言

几年来,一种新的固件技术悄然出现,而大多数普通用户对此并无所知。该技术被称为 [<ruby>可扩展固件接口<rt>Extensible Firmware Interface</rt></ruby>][29](EFI),或更新一些的统一可扩展固件接口(Unified EFI,UEFI,本质上是 EFI 2.x),它已经开始替代古老的[<ruby>基本输入/输出系统<rt>Basic Input/Output System</rt></ruby>][30](BIOS)固件技术,有经验的计算机用户或多或少都有些熟悉 BIOS。

本页面是给 Linux 用户使用 EFI 技术的一个快速介绍,其中包括有关开始将 Linux 安装到此类计算机上的建议。不幸的是,EFI 是一个庞杂的话题;EFI 软件本身是复杂的,许多实现有系统特定的怪异行为甚至是缺陷。因此,我无法在一个页面上描述在 EFI 计算机上安装和使用 Linux 的一切知识。我希望你能将本页面作为一个有用的起点,每个部分以及末尾[参考文献][31]部分的链接可以指引你找到更多的文档。

### 你的计算机是否使用 EFI 技术?

EFI 是一种_固件_,意味着它是内置于计算机中处理低级任务的软件。最重要的是,固件控制着计算机的引导过程,这也意味着基于 EFI 的计算机与基于 BIOS 的计算机的引导过程不同。(有关此规律的例外之处稍后再说。)这种差异会使操作系统安装介质的设计变得非常复杂,但是一旦安装好并运行之后,它对计算机的日常操作几乎没有影响。请注意,大多数制造商使用术语 “BIOS” 来表示他们的 EFI。我认为这种用法容易引起混乱,所以我避免这样用;在我看来,EFI 和 BIOS 是两种不同类型的固件。

> **注意:** 苹果公司的 Mac 使用的 EFI 在许多方面是不同寻常的。尽管本页面的大部分内容同样适用于 Mac,但有些细节上的出入,特别是在设置 EFI 引导加载程序的时候。这个任务最好在 OS X 上使用 Mac 的 [bless][49] 工具进行,我不在此做过多描述。

自从 2006 年第一次推出以来,EFI 已被用于基于英特尔的 Mac 上。从 2012 年底开始,大多数安装 Windows 8 或更高版本系统的计算机就已经默认使用 UEFI 启动;实际上大多数 PC 从 2011 年中期就开始使用 UEFI,虽然默认情况下它们可能无法以 EFI 模式启动。2011 年前售出的 PC 也有一些支持 EFI,尽管它们大都默认使用 BIOS 模式启动。

如果你不确定你的计算机是否支持 EFI,则应查看固件设置实用程序和用户手册中关于 _EFI_、_UEFI_ 以及 _legacy booting_ 的部分。(可以通过搜索用户手册的 PDF 文件来快速了解。)如果你没有找到类似的参考,你的计算机可能使用老式的(“legacy”)BIOS 引导;但如果你找到了这些术语,几乎可以肯定它使用了 EFI 技术。你还可以尝试引导_只_支持 EFI 模式的引导加载器的安装介质。使用 [rEFInd][50] 制作的 USB 闪存驱动器或 CD-R 镜像是用来测试的不错选择。

在继续之前,你应当了解大多数 x86 和 x86-64 架构的计算机上的 EFI 都包含一个叫做<ruby>兼容支持模块<rt>Compatibility Support Module</rt></ruby>(CSM)的组件,这使得 EFI 能够使用旧的 BIOS 风格的引导机制来引导操作系统。这种向后兼容非常方便;但它也会导致一些意外情况,因为对于计算机以 EFI 模式引导还是以 BIOS(也称为 CSM 或 legacy)模式引导,并没有标准化的控制规范和用户界面。特别地,你的 Linux 安装介质非常容易意外地以 BIOS/CSM/legacy 模式启动,这会导致 Linux 以 BIOS/CSM/legacy 模式安装。如果 Linux 是唯一的操作系统,也可以正常工作,但是如果与在 EFI 模式下的 Windows 组成双启动的话,就会非常复杂。(反过来问题也可能发生。)以下部分将帮助你以正确模式引导安装程序。如果你在阅读这篇文章之前就已经以 BIOS 模式安装了 Linux,并且希望切换引导模式,请阅读后续章节,[哎呀:将传统模式下安装的系统转为 EFI 模式下引导][51]。

UEFI 的一个附加功能值得一提:<ruby>安全启动<rt>Secure Boot</rt></ruby>。此特性旨在最大限度地降低计算机受到 _boot kit_ 病毒感染的风险,这是一种感染计算机引导加载程序的恶意软件。Boot kit 很难检测和删除,因此阻止它们运行是很值得追求的目标。微软公司要求所有带有 Windows 8 徽标的台式机和笔记本电脑启用安全启动。这一配置使 Linux 的安装变得复杂,尽管有些发行版可以较好地处理这个问题。不要将安全启动和 EFI 或 UEFI 混淆;支持 EFI 的计算机不一定支持安全启动,而且支持 EFI 的 x86-64 计算机也可以禁用安全启动。微软允许用户在 Windows 8 认证的 x86 和 x86-64 计算机上禁用安全启动功能;然而对装有 Windows 8 的 ARM 计算机而言却相反,它们必须**不允许**用户禁用安全启动。幸运的是,基于 ARM 的 Windows 8 计算机目前很少见。我建议避免使用它们。

### 你的发行版是否支持 EFI 技术?

大多数 Linux 发行版已经支持 EFI 好多年了。然而,不同的发行版对 EFI 的支持程度不同。大多数主流发行版(Fedora、OpenSUSE、Ubuntu 等)都能很好地支持 EFI,包括对安全启动的支持。另外一些“自行打造”的发行版,比如 Gentoo,对 EFI 的支持较弱,但它们的性质使其很容易添加 EFI 支持。事实上,可以向_任意_ Linux 发行版添加 EFI 支持:你需要安装 Linux(即使是在 BIOS 模式下),然后在计算机上安装 EFI 引导加载程序。有关如何执行此操作的信息,请参阅[哎呀:将传统模式下安装的系统转为 EFI 模式下引导][52]部分。

你应当查看发行版的功能列表,来确定它是否支持 EFI。你还应当注意你的发行版对安全启动的支持情况,特别是如果你打算和 Windows 8 组成双启动。请注意,即使正式支持安全启动的发行版也可能要求禁用此功能,因为 Linux 对安全启动的支持通常很差劲,或者会导致意外情况的发生。

### 准备安装 Linux

下面几个准备步骤有助于让 Linux 在 EFI 计算机上的安装更加顺利:

#### 1、 升级固件

有些 EFI 是有问题的,不过硬件制造商偶尔会发布其固件的更新。因此我建议你将固件升级到最新可用的版本。如果你从论坛的帖子知道自己计算机的 EFI 有问题,你应当在安装 Linux 之前更新它,因为如果安装 Linux 之后更新固件,会有些问题需要额外的操作才能解决。另一方面,升级固件是有一定风险的,所以升级时最好遵循制造商提供的指导进行。

#### 2、 了解如何使用固件

通常你可以通过在引导过程之初按 Del 键或功能键进入固件设置实用程序。按下开机键后尽快查看相关的提示信息,或者尝试每个功能键。类似的,Esc 键或功能键通常可以进入固件的内置引导管理器,可以选择要进入的操作系统或外部设备。一些制造商把这些设置隐藏得很深。在某些情况下,如[此页面][32]所述,你可以在 Windows 8 内做到这些。

#### 3、 调整以下固件设置

* **快速启动** — 此功能可以通过在硬件初始化时使用快捷方式来加快引导过程。这很好用,但有时候会使 USB 设备不能初始化,导致计算机无法从 USB 闪存驱动器或类似的设备启动。因此禁用快速启动_可能_有一定的帮助,甚至是必须的;你可以让它保持激活,而只在 Linux 安装程序启动遇到问题时将其停用。请注意,此功能有时可能会以其它名字出现。在某些情况下,你必须_启用_ USB 支持,而不是_禁用_快速启动功能。

* **安全启动** — Fedora、OpenSUSE、Ubuntu 以及其它的发行版官方就支持安全启动;但是如果在启动引导加载程序或内核时遇到问题,可能需要禁用此功能。不幸的是,没办法具体描述怎么禁用,因为不同计算机的设置方法也不同。请参阅[我的安全启动页面][1]获取更多关于此话题的信息。

> **注意:** 一些教程说安装 Linux 时需要启用 BIOS/CSM/legacy 支持。通常情况下,这样做是错的。启用这些支持可以解决启动安装程序时遇到的问题,但也会带来新的问题。按这种方式安装所造成的问题,通常可以通过“引导修复”来解决,但最好从一开始就做对。本页面提供了帮助你以 EFI 模式启动 Linux 安装程序的提示,从而避免以后的问题。

* **CSM/legacy 选项** — 如果你想以 EFI 模式安装,请_关闭_这些选项。一些教程推荐启用这些选项,有时这是必须的 —— 比如,有些附加视频卡需要在固件中启用 BIOS 模式。尽管如此,大多数情况下启用 CSM/legacy 支持只会无意中增加以 BIOS 模式启动 Linux 的风险,但你并_不想_这样。请注意,安全启动和 CSM/legacy 选项有时会交织在一起,因此更改任一选项之后务必检查另一个。

#### 4、 禁用 Windows 的快速启动功能

[这个页面][33]描述了如何禁用此功能,不禁用的话可能会导致文件系统损坏。请注意此功能与固件的快速启动不同。

#### 5、 检查分区表

使用 [GPT fdisk][34]、parted 或其它任意分区工具检查磁盘分区。理想情况下,你应该创建一份包含每个分区确切起点和终点(以扇区为单位)的纸面记录。这会是很有用的参考,特别是在安装时进行手动分区的时候。如果已经安装了 Windows,确定可以识别你的 [EFI 系统分区(ESP)][35],它是一个 FAT 分区,设置了“启动标记”(在 parted 或 GParted 中)或在 gdisk 中的类型码为 EF00。
### 安装 Linux

大部分 Linux 发行版都提供了足够的安装说明;然而我注意到了在 EFI 模式安装中的几个常见的绊脚石:

* **确保使用正确位深的发行版** — EFI 引导加载器的位深必须与 EFI 自身相同。现代计算机通常是 64 位,尽管最初几代基于 Intel 的 Mac、一些现代的平板电脑和变形本、以及一些鲜为人知的电脑使用 32 位 EFI。虽然可以将 32 位 EFI 引导加载程序添加至 32 位发行版,但我还没有遇到过正式支持 32 位 EFI 的 Linux 发行版。(我的《[在 Linux 上管理 EFI 引导加载程序][36]》一文概述了引导加载程序,理解了这些原则,你就可以修改 32 位发行版的安装程序,尽管这不是一个初学者该做的。)在 64 位 EFI 的计算机上安装 32 位发行版最让人头疼,我不准备在这里描述这一过程;在具有 64 位 EFI 的计算机上,你应当使用 64 位的发行版。

* **正确准备引导介质** — 将 .iso 镜像传输到 USB 闪存驱动器的第三方工具,比如 unetbootin,在创建正确的 EFI 模式引导项时经常失败。我建议按照发行版维护者的建议来创建 USB 闪存驱动器。如果没有类似的建议,使用 Linux 的 dd 工具,通过执行 `dd if=image.iso of=/dev/sdc` 在识别为 `/dev/sdc` 的 USB 闪存驱动器上创建一个镜像(参见本列表之后的第一个示例)。至于 Windows,有 [WinDD][37] 和 [dd for windows][38],但我从没测试过它们。请注意,使用不兼容 EFI 的工具创建安装介质,是导致人们误以 BIOS 模式安装、事后又得纠正的最大原因之一,所以不要忽视这一点!

* **备份 ESP 分区** — 如果计算机已经存在 Windows 或者其它的操作系统,我建议在安装 Linux 之前备份你的 ESP 分区(参见本列表之后的第二个示例)。尽管 Linux _不应该_损坏 ESP 分区已有的文件,但这种事似乎时不时会发生,此时备份会有很大用处。只需简单的文件级备份(使用 cp、tar 或者 zip 类似的工具)就足够了。

* **以 EFI 模式启动** — 以 BIOS/CSM/legacy 模式引导 Linux 安装程序的意外非常容易发生,特别是当固件启用 CSM/legacy 选项时。下面一些提示可以帮助你避免此问题:

  * 进入 Linux shell 环境执行 `ls /sys/firmware/efi` 验证当前是否处于 EFI 引导模式(参见本列表之后的第三个示例)。如果你看到一系列文件和目录,表明你已经以 EFI 模式启动,可以忽略以下多余的提示;如果没有,表明你是以 BIOS 模式启动的,应当重新检查你的设置。

  * 使用固件内置的引导管理器(你应该已经知道在哪;请参阅[了解如何使用固件][26])使之以 EFI 模式启动。一般你会看到 CD-R 或 USB 闪存驱动器的两个选项,其中一个选项的描述包含 _EFI_ 或 _UEFI_ 字样,另一个不包含。使用 EFI/UEFI 选项来启动介质。

  * 禁用安全启动 — 即使你使用的发行版官方支持安全启动,有时它们也不能生效。在这种情况下,计算机会静默地转到下一个引导加载程序,它可能是启动介质的 BIOS 模式的引导加载程序,导致你以 BIOS 模式启动。请参阅我的[安全启动的相关文章][27]以得到禁用安全启动的相关提示。

  * 如果 Linux 安装程序总是无法以 EFI 模式启动,试试用我的 [rEFInd 引导管理器][28] 制作的 USB 闪存驱动器或 CD-R。如果 rEFInd 启动成功,那它保证是以 EFI 模式运行的,而且在基于 UEFI 的 PC 上,它只显示 EFI 模式的引导项,因此若您启动到 Linux 安装程序,则应处于 EFI 模式。(但是在 Mac 上,除了 EFI 模式选项之外,rEFInd 还显示 BIOS 模式的引导项。)

* **准备 ESP 分区** — 除了 Mac,EFI 使用 ESP 分区来保存引导加载程序。如果你的计算机已经预装了 Windows,那么 ESP 分区就已存在,可以在 Linux 上直接使用。如果不是这样,那么我建议创建一个大小为 550 MB 的 ESP 分区。(如果你已有的 ESP 分区比这小,别担心,直接用就行。)在此分区上创建一个 FAT32 文件系统(参见本列表之后的第四个示例)。如果你使用 GParted 或者 parted 准备 ESP 分区,记得给它一个“启动标记”。如果你使用 GPT fdisk(gdisk、cgdisk 或 sgdisk)准备 ESP 分区,记得给它一个 EF00 类型码。有些安装程序会创建一个较小的 ESP 分区,并且设置为 FAT16 文件系统。尽管这样能正常工作,但如果你之后需要重装 Windows,安装程序会无法识别 FAT16 文件系统的 ESP 分区,所以你需要将其备份后转为 FAT32 文件系统。

* **使用 ESP 分区** — 不同发行版的安装程序以不同的方式辨识 ESP 分区。比如,Debian 和 Ubuntu 的某些版本把 ESP 分区称为 “EFI boot partition”,而且不会明确显示它的挂载点(尽管它会在后台挂载);但是有些发行版,像 Arch 或 Gentoo,需要你去手动挂载。尽管将 ESP 分区挂载到 /boot 进行相应配置后也可以正常工作,特别是当你想使用 gummiboot 或 ELILO(LCTT 译注:gummiboot 和 ELILO 都是 EFI 引导工具)时,但是在 Linux 中最标准的 ESP 分区挂载点是 /boot/efi。某些发行版的 /boot 不能用 FAT 分区。因此,当你设置 ESP 分区挂载点时,请将其设置为 /boot/efi。除非 ESP 分区上还没有文件系统,否则_不要_为其新建文件系统 — 如果已经安装 Windows 或其它操作系统,它们的引导文件都在 ESP 分区里,新建文件系统会销毁这些文件。

* **设置引导程序的位置** — 某些发行版会询问将引导程序(GRUB)装到何处。如果 ESP 分区按上述内容正确标记,不必理会此问题,但有些发行版仍会询问。请尝试使用 ESP 分区。

* **其它分区** — 除了 ESP 分区,不再需要其它的特殊分区;你可以设置根(/)分区、swap 分区、/home 分区,或者其它分区,就像你在 BIOS 模式下安装时一样。请注意 EFI 模式下_不需要_[BIOS 启动分区][39],所以如果安装程序提示你需要它,意味着你可能意外地进入了 BIOS 模式。另一方面,如果你创建了 BIOS 启动分区,会更灵活,因为你可以安装 BIOS 模式下的 GRUB,然后以任意模式(EFI 模式或 BIOS 模式)引导。

* **解决无显示问题** — 2013 年,许多人在 EFI 模式下经常遇到(之后出现的频率逐渐降低)无显示的问题。有时可以通过为内核命令行添加 `nomodeset` 参数解决这一问题;在 GRUB 界面按 `e` 键会打开一个简易文本编辑器用于编辑内核选项。大多数情况下你需要搜索有关此问题的更多信息,因为此问题更多是由特定硬件引起的。
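下面按顺序给出上面列表中提到的几个示例。第一个是用 dd 写入安装介质的示意命令,假设镜像文件名为 `image.iso`、U 盘被识别为 `/dev/sdc`(设备名请务必先核实,写错会清掉其它磁盘上的数据):

```bash
# 先确认 U 盘对应的设备名(这里假设是 /dev/sdc,注意是整盘而不是分区)
lsblk

# 将 ISO 镜像原样写入 U 盘;bs=4M 只是为了加快写入速度
sudo dd if=image.iso of=/dev/sdc bs=4M

# 确保缓冲区数据全部写入后再拔出 U 盘
sync
```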
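第二个示例:对 ESP 分区做一个简单的文件级备份。这里假设 ESP 已挂载在 `/boot/efi`,备份文件的路径和名字都可以自行选择:

```bash
# 将 ESP 的全部内容打包备份到家目录
sudo tar -czf ~/esp-backup.tar.gz -C /boot/efi .

# 核对一下备份里的文件列表
tar -tzf ~/esp-backup.tar.gz | head
```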
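第三个示例:在安装程序的 shell 中确认当前的引导模式:

```bash
# 该目录存在且有内容,说明是以 EFI 模式启动的
ls /sys/firmware/efi

# 没有该目录则说明当前处于 BIOS/CSM/legacy 模式
[ -d /sys/firmware/efi ] && echo "EFI 模式" || echo "BIOS 模式"
```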
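第四个示例:手动创建并挂载 ESP 分区的示意步骤。这里假设磁盘为 `/dev/sda`、新分区编号为 1,实际编号请按你自己的分区表调整:

```bash
# 用 sgdisk 新建一个 550 MiB 的分区,类型码设为 EF00(EFI 系统分区)
sudo sgdisk -n 1:0:+550M -t 1:EF00 -c 1:"EFI System Partition" /dev/sda

# 在该分区上创建 FAT32 文件系统
sudo mkfs.vfat -F32 /dev/sda1

# 挂载到 Linux 下的标准挂载点 /boot/efi
sudo mkdir -p /boot/efi
sudo mount /dev/sda1 /boot/efi
```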
在某些情况下,你可能不得不以 BIOS 模式安装 Linux,但之后可以手动安装 EFI 引导程序,让 Linux 以 EFI 模式启动。请参阅《[在 Linux 上管理 EFI 引导加载程序][53]》页面获取更多有关这些引导程序以及如何安装它们的信息。

### 解决安装后的问题

如果 Linux 无法在 EFI 模式下工作,但在 BIOS 模式下成功了,那么你可以完全放弃 EFI 模式。在只有 Linux 的计算机上这非常简单;安装 BIOS 引导程序即可(如果你是在 BIOS 模式下安装的,引导程序也应随之装好)。如果是和 EFI 下的 Windows 组成双系统,最简单的方法是安装我的 [rEFInd 引导管理器][54]。在 Windows 上安装它,然后编辑 `refind.conf` 文件:取消注释 `scanfor` 一行,并确保其中包含 `hdbios` 选项。这样 rEFInd 在引导时就会扫描并显示 BIOS 模式的引导项。
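对应的 `refind.conf` 配置行大致如下(一个最小示意,各关键字的含义请以 rEFInd 自带的注释和文档为准):

```
# 取消注释 scanfor 一行,并确保其中包含 hdbios
scanfor internal,external,optical,hdbios
```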
如果重启后计算机直接进入了 Windows,很可能是 Linux 的引导程序或管理器安装不正确。(但是应当首先尝试禁用安全启动;之前提到过,它经常引发各种问题。)下面是关于此问题的几种可能的解决方案:

* **使用 efibootmgr** — 你可以以 _EFI 模式_引导一个 Linux 急救盘,使用 efibootmgr 实用工具尝试重新注册你的 Linux 引导程序,如[这里][40]所述(参见本列表之后的第一个示例)。

* **使用 Windows 上的 bcdedit** — 在 Windows 管理员命令提示符窗口中,输入 `bcdedit /set {bootmgr} path \EFI\fedora\grubx64.efi` 会用 ESP 分区的 `EFI/fedora/grubx64.efi` 文件作为默认的引导加载程序。根据需要更改此路径,指向你想设置的引导文件。如果你启用了安全启动,需要设置 `shim.efi`、`shimx64.efi` 或者 `PreLoader.efi`(视你有哪个而定)为引导程序,而不是 `grubx64.efi`。

* **安装 rEFInd** — 有时候 rEFInd 可以解决这个问题。我推荐使用 [CD-R 或者 USB 闪存驱动器][41]进行测试。如果 Linux 可以启动,就安装 Debian 软件包、RPM 程序,或者 .zip 文件包。(请注意,你需要在高亮某个 Linux vmlinuz* 选项时按两次 `F2` 或 `Insert` 来修改启动选项。如果你有单独的 /boot 分区,这就更有必要了,因为这种情况下,rEFInd 无法找到根(/)分区,也就无法将其传递给内核。)

* **使用引导修复程序** — Ubuntu 的[引导修复实用工具][42]可以自动修复一些问题;然而,我建议只在 Ubuntu 和密切相关的发行版上使用,比如 Mint。有时候,有必要通过高级选项备份并替换 Windows 的引导文件。

* **劫持 Windows 引导程序** — 有些不完善的 EFI 只能引导 Windows,即 ESP 分区上的 `EFI/Microsoft/Boot/bootmgfw.efi` 文件。因此,你可能需要将该引导程序改名(我建议将其移动到上级目录 `EFI/Microsoft/bootmgfw.efi`),然后将首选引导程序复制到这里(参见本列表之后的第二个示例)。(大多数发行版会在 EFI 的子目录放置 GRUB 的副本,例如 Ubuntu 的 EFI/ubuntu,Fedora 的 EFI/fedora。)请注意此方法是个丑陋的解决方法,有用户反映 Windows 会替换回它自己的引导程序,所以这个办法不是 100% 有效。然而,这是在不完善的 EFI 上生效的唯一办法。在尝试之前,我建议你先升级固件,并重新注册自己的引导程序:Linux 上用 efibootmgr,Windows 上用 bcdedit。
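下面是其中两种办法的命令示意。第一个示例:用 efibootmgr 重新注册引导程序,假设 ESP 是 `/dev/sda` 的第 1 个分区、引导文件是 Fedora 的 `grubx64.efi`(请按你的实际情况替换):

```bash
# 查看现有的 EFI 引导项
sudo efibootmgr -v

# 新建一个指向 ESP 上 GRUB 的引导项
# -d 指定磁盘,-p 指定 ESP 的分区号,-L 是菜单中显示的名字,-l 是引导文件路径(用反斜杠)
sudo efibootmgr -c -d /dev/sda -p 1 -L "Fedora" -l '\EFI\fedora\grubx64.efi'
```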
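第二个示例:“劫持” Windows 引导程序的操作示意。这里假设 ESP 挂载在 `/boot/efi`,你的发行版把 GRUB 放在 `EFI/ubuntu` 子目录(目录名与文件名均为假设,请按实际情况替换):

```bash
cd /boot/efi/EFI

# 先把 Windows 的引导程序移动到上级目录留作备份
sudo mv Microsoft/Boot/bootmgfw.efi Microsoft/bootmgfw.efi

# 再把首选引导程序复制到 Windows 原来的位置
sudo cp ubuntu/grubx64.efi Microsoft/Boot/bootmgfw.efi
```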
有关引导程序的其它类型的问题:如果 GRUB(或者你的发行版默认的其它引导程序或引导管理器)没有引导操作系统,你必须修复这个问题。因为 GRUB 2 引导 Windows 时非常挑剔,所以 Windows 经常启动失败;在某些情况下,安全启动会加剧这个问题。请参阅[我的关于 GRUB 2 的页面][55]获取一个引导 Windows 的 GRUB 2 示例。还有很多原因会导致 Linux 引导出现问题,这些问题和 BIOS 模式下的类似,所以我没有全部写出来。

尽管 GRUB 2 使用很普遍,但我对它的评价却不高 —— 它很复杂,而且难以配置和使用。因此,如果你在使用 GRUB 的时候遇到了问题,我的第一反应就是用别的东西代替。[我的用于 Linux 的 EFI 引导程序页面][56]列出了其它的选择。其中包括我的 [rEFInd 引导管理器][57],它除了能够让许多发行版上的 GRUB 2 工作,也更容易安装和维护 —— 但是它还不能完全代替 GRUB 2。

除此之外,EFI 引导的问题可能很奇怪,所以你可能需要去论坛发帖求助,此时应尽量将问题描述完整。[Boot Info Script][58] 可以帮助你提供有用的信息:运行此脚本,将生成的名为 RESULTS.txt 的文件粘贴到论坛的帖子上。一定要将文本粘贴到 `[code]` 和 `[/code]` 之间,不然会遭人埋怨;或者将 RESULTS.txt 文件上传到 pastebin 网站上,比如 [pastebin.com][59],然后将网站给你的 URL 地址发布到论坛。
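运行该脚本的示意步骤(假设下载后的脚本文件名为 `bootinfoscript`,实际文件名以你下载的版本为准):

```bash
# 以 root 权限运行脚本,结果会写入当前目录下的 RESULTS.txt
sudo bash ./bootinfoscript

# 查看生成的报告
less RESULTS.txt
```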
### 哎呀:将传统模式下安装的系统转为 EFI 模式下引导

**警告:** 这些指南主要用于基于 UEFI 的 PC。如果你的 Mac 已经安装了 BIOS 模式下的 Linux,但想以 EFI 模式启动 Linux,可以_在 OS X_ 中安装引导程序。rEFInd(或者旧式的 rEFIt)是 Mac 上的常用选择,但也可以让 GRUB 工作起来。

论坛上有很多人看了错误的教程,在已经存在 EFI 模式的 Windows 的情况下,安装了 BIOS 引导的 Linux,这一问题在 2015 年初很普遍。这样的配置效果很不好,因为大多数 EFI 很难在两种模式之间切换,而且 GRUB 也无法胜任这项工作。你可能会遇到不完善的 EFI 无法启动外部介质的情况,也可能遇到 EFI 模式下的显示问题,或者其它问题。

如前所述,在[解决安装后的问题][60]部分,解决办法之一就是_在 Windows_ 上安装 rEFInd,将其配置为支持 BIOS 模式引导,然后就可以引导 rEFInd 并链式引导到你的 BIOS 模式的 GRUB。在 Linux 上遇到 EFI 特有的问题时,例如无法使用显卡,我建议你使用这个办法修复。如果你没有这样的 EFI 特有的问题,在 Windows 中安装 rEFInd 和合适的 EFI 文件系统驱动,就可以让 Linux 直接以 EFI 模式启动。这个解决方案的效果和我下面描述的方法基本等同。

大多数情况下,最好将 Linux 配置为以 EFI 模式启动。有很多办法可以做到,但最好的办法是:以 EFI 模式引导进入 Linux(或者也可以是 Windows,或一个 EFI shell),然后注册你首选的引导管理器。实现这一目标的方法如下:

1. 下载适用于 USB 闪存驱动器或 CD-R 的 [rEFInd 引导管理器][43]。

2. 从下载的镜像文件生成安装介质。可以在任何计算机上准备,不管是 EFI 还是 BIOS 的计算机都可以(或者在其它平台上使用其它方法)。

3. 如果你还没有这样做,[请禁用安全启动][44]。因为 rEFInd 的 CD-R 和 USB 镜像不支持安全启动,所以这很必要,你可以在以后重新启用它。

4. 在目标计算机上启动 rEFInd。如前所述,你可能需要调整固件设置,并使用内置引导管理器选择要引导的介质。你选择的那一项也许在其描述中包含 _UEFI_ 这样的字符串。

5. 在 rEFInd 上测试引导项。你应该至少看到一个启动 Linux 内核的选项(名字含有 vmlinuz 这样的字符串)。有两种方法可以启动它:

   * 如果你_没有_独立的 `/boot` 分区,只需简单地选择内核并按回车键,Linux 就会启动。

   * 如果你_确实有_一个独立的 `/boot` 分区,按两次 `Insert` 或 `F2` 键。这样会打开一个行编辑器,你可以用它来编辑内核选项。增加一个 `root=` 选项以标识根(/)文件系统,如果根(/)分区在 `/dev/sda5` 上,就添加 `root=/dev/sda5`。如果不知道根文件系统在哪个分区,那你需要重启,想办法先弄清楚。

   在一些罕见的情况下,你可能需要添加其它内核选项来代替或补充 `root=` 选项。比如配置了 LVM(LCTT 译注:Logical Volume Manager,逻辑卷管理)的 Gentoo 就需要 `dolvm` 选项。

6. Linux 一旦启动,安装你想要的引导程序。rEFInd 的安装很简单,可以通过 RPM、Debian 软件包、PPA,或从 [rEFInd 下载页面][45]下载的二进制 .zip 文件进行安装。在 Ubuntu 和相关的发行版上,引导修复程序可以相对简单地修复你的 GRUB 设置,但你得对它能正常工作有信心。(它通常工作良好,但有时候会把事情搞得一团糟。)另外一些选择都在我的《[在 Linux 上管理 EFI 引导加载程序][46]》页面上。

7. 如果你想在安全启动激活的情况下引导,只需重启并启用它。但是,请注意,可能需要额外的安装步骤才能将引导程序设置为使用安全启动。有关详细信息,请参阅[我关于这个主题的页面][47]或你的引导程序有关安全启动的文档资料。

重启时,你应该能看到刚才安装的引导程序。如果计算机进入了 BIOS 模式下的 GRUB,你应当进入固件禁用 BIOS/CSM/legacy 支持,或调整引导顺序。如果计算机直接进入了 Windows,那么你应当阅读前一部分,[解决安装后的问题][61]。

你可能想要或需要调整你的配置,通常是为了看到额外的引导选项,或者隐藏某些选项。请参阅引导程序的文档资料,以了解如何进行这些更改。

### 参考和附加信息

* **信息网页**

  * 我的《[在 Linux 上管理 EFI 引导加载程序][2]》页面含有可用的 EFI 引导程序和引导管理器。

  * [OS X bless 工具的手册页][3]在设置 OS X 平台上的引导程序或引导管理器时可能会很有用。

  * [EFI 启动过程][4]描述了 EFI 启动时的大致框架。

  * [Arch Linux UEFI wiki 页面][5]有大量关于 UEFI 和 Linux 的详细信息。

  * 亚当·威廉姆森写的一篇不错的《[什么是 EFI,它是怎么工作的][6]》。

  * [这个页面][7]描述了如何从 Windows 8 调整 EFI 的固件设置。

  * 马修·J·加勒特是支持安全启动的 Shim 引导程序的开发者,他维护的[博客][8]经常更新有关 EFI 的问题。

  * 如果你对 EFI 软件的开发感兴趣,我的《[EFI 编程][9]》页面可以为你起步助力。

* **附加程序**

  * [rEFInd 官网][10]

  * [gummiboot 官网][11]

  * [ELILO 官网][12]

  * [GRUB 官网][13]

  * [GPT fdisk 分区软件官网][14]

  * Ubuntu 的[引导修复实用工具][15]可帮助解决一些引导问题

* **交流**

  * [Sourceforge 上的 rEFInd 交流论坛][16]是 rEFInd 用户互相交流或与我联系的一种方法。

  * Pastebin 网站,比如 [http://pastebin.com][17],是在 Web 论坛上与其他用户交换大量文本的一种便捷的方法。

--------------------------------------------------------------------------------

via: http://www.rodsbooks.com/linux-uefi/

作者:[Roderick W. Smith][a]
译者:[fuowang](https://github.com/fuowang)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: mailto:rodsmith@rodsbooks.com
[1]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
[2]:http://www.rodsbooks.com/efi-bootloaders/
[3]:http://ss64.com/osx/bless.html
[4]:http://homepage.ntlworld.com/jonathan.deboynepollard/FGA/efi-boot-process.html
[5]:https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface
[6]:https://www.happyassassin.net/2014/01/25/uefi-boot-how-does-that-actually-work-then/
[7]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
[8]:http://mjg59.dreamwidth.org/
[9]:http://www.rodsbooks.com/efi-programming/
[10]:http://www.rodsbooks.com/refind/
[11]:http://freedesktop.org/wiki/Software/gummiboot
[12]:http://elilo.sourceforge.net/
[13]:http://www.gnu.org/software/grub/
[14]:http://www.rodsbooks.com/gdisk/
[15]:https://help.ubuntu.com/community/Boot-Repair
[16]:https://sourceforge.net/p/refind/discussion/
[17]:http://pastebin.com/
[18]:http://www.rodsbooks.com/linux-uefi/#intro
[19]:http://www.rodsbooks.com/linux-uefi/#isitefi
[20]:http://www.rodsbooks.com/linux-uefi/#distributions
[21]:http://www.rodsbooks.com/linux-uefi/#preparing
[22]:http://www.rodsbooks.com/linux-uefi/#installing
[23]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
[24]:http://www.rodsbooks.com/linux-uefi/#oops
[25]:http://www.rodsbooks.com/linux-uefi/#references
[26]:http://www.rodsbooks.com/linux-uefi/#using_firmware
[27]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
[28]:http://www.rodsbooks.com/refind/getting.html
[29]:https://en.wikipedia.org/wiki/Uefi
[30]:https://en.wikipedia.org/wiki/BIOS
[31]:http://www.rodsbooks.com/linux-uefi/#references
[32]:http://www.eightforums.com/tutorials/20256-uefi-firmware-settings-boot-inside-windows-8-a.html
[33]:http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html
[34]:http://www.rodsbooks.com/gdisk/
[35]:http://en.wikipedia.org/wiki/EFI_System_partition
[36]:http://www.rodsbooks.com/efi-bootloaders
[37]:https://sourceforge.net/projects/windd/
[38]:http://www.chrysocome.net/dd
[39]:https://en.wikipedia.org/wiki/BIOS_Boot_partition
[40]:http://www.rodsbooks.com/efi-bootloaders/installation.html
[41]:http://www.rodsbooks.com/refind/getting.html
[42]:https://help.ubuntu.com/community/Boot-Repair
[43]:http://www.rodsbooks.com/refind/getting.html
[44]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html#disable
[45]:http://www.rodsbooks.com/refind/getting.html
[46]:http://www.rodsbooks.com/efi-bootloaders/
[47]:http://www.rodsbooks.com/efi-bootloaders/secureboot.html
[48]:mailto:rodsmith@rodsbooks.com
[49]:http://ss64.com/osx/bless.html
[50]:http://www.rodsbooks.com/refind/getting.html
[51]:http://www.rodsbooks.com/linux-uefi/#oops
[52]:http://www.rodsbooks.com/linux-uefi/#oops
[53]:http://www.rodsbooks.com/efi-bootloaders/
[54]:http://www.rodsbooks.com/refind/
[55]:http://www.rodsbooks.com/efi-bootloaders/grub2.html
[56]:http://www.rodsbooks.com/efi-bootloaders
[57]:http://www.rodsbooks.com/refind/
[58]:http://sourceforge.net/projects/bootinfoscript/
[59]:http://pastebin.com/
[60]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
[61]:http://www.rodsbooks.com/linux-uefi/#troubleshooting
微流冷却技术可能让摩尔定律起死回生
============================================================

![](http://tr1.cbsistatic.com/hub/i/r/2015/12/09/a7cb82d1-96e8-43b5-bfbd-d4593869b230/resize/620x/9607388a284e3a61a39f4399a9202bd7/networkingistock000042544852agsandrew.jpg)

*Image: iStock/agsandrew*

现有的技术无法对微芯片进行有效的冷却,这正快速成为摩尔定律消亡的第一原因。

随着对数字计算速度的需求,科学家和工程师正努力地将更多的晶体管和支撑电路放在已经很拥挤的硅片上。的确,它非常地复杂,然而,和复杂性相比,热量聚积引起的问题更严重。

洛克希德马丁公司首席研究员 John Ditri 在新闻稿中说到:当前,我们可以放入微芯片的功能是有限的,最主要的原因之一是发热的管理。如果你能管理好发热,你可以用较少的芯片,也就是说较少的材料,那样就可以节约成本,并能减少系统的大小和重量。如果你能管理好发热,用相同数量的芯片将能获得更好的系统性能。

硅对电子流动的阻力产生了热量,在如此小的空间封装如此多的晶体管,累积的热量足以毁坏元器件。一种消除热累积的方法是在芯片层用光子学技术减少电子的流动,然而光子学技术有它自己的一系列问题。

参见:[2015 年硅光子将引起数据中心的革命][5]

### 微流冷却技术可能是问题的解决之道

为了寻找其他解决办法,美国国防高级研究计划局 DARPA 发起了一个关于 [ICECool 应用][6](片内/片间增强冷却技术)的项目。[GSA 的网站 FedBizOpps.gov][7] 报道:ICECool 正在探索革命性的热技术,其将减轻热耗对军用电子系统的限制,同时能显著减小军用电子系统的尺寸、重量和功耗。

微流冷却方法的独特之处在于组合使用片内和(或)片间微流冷却技术和片上热互连技术。

![](http://tr4.cbsistatic.com/hub/i/r/2016/05/25/fd3d0d17-bd86-4d25-a89a-a7050c4d59c4/resize/300x/e9c18034bde66526310c667aac92fbf5/microcooling-1.png)

*MicroCooling 1 Image: DARPA*

[DARPA ICECool 应用发布的公告][8] 指出,这种微型片内和(或)片间通道可采用轴向微通道、径向通道和(或)横流通道,采用微孔和歧管结构及局部液体喷射形式来疏散和重新引导微流,从而以最有利的方式来满足指定的散热指标。

通过上面的技术,洛克希德马丁的工程师已经实验性地证明了片上冷却是如何得到显著改善的。洛克希德马丁新闻报道:ICECool 项目的第一阶段发现,当冷却具有多个局部 30kW/cm2 热点、发热为 1kW/cm2 的芯片时,热阻减少了 4 倍,进而验证了洛克希德的嵌入式微流冷却方法的有效性。

第二阶段,洛克希德马丁的工程师聚焦于 RF 放大器。通过 ICECool 的技术,团队演示了 RF 的输出功率可以得到 6 倍的增长,而放大器仍然比其常规冷却的更凉。

### 投产

出于对技术的信心,洛克希德马丁已经在设计和制造实用的微流冷却发射天线。洛克希德马丁还与 Qorvo 合作,将其热解决方案与 Qorvo 的高性能 [GaN 工艺][9] 相集成。

研究论文 [DARPA 的片间/片内增强冷却技术(ICECool)项目][10] 的作者认为 ICECool 将使电子系统的热管理模式发生改变。ICECool 应用的执行者将根据应用来定制片内和片间的热管理方法,这个方法需要兼顾应用的材料、制造工艺和工作环境。

如果微流冷却能像科学家和工程师所说的那样成功的话,摩尔定律似乎将有机会起死回生。

--------------------------------------------------------------------------------

via: http://www.techrepublic.com/article/microfluidic-cooling-may-prevent-the-demise-of-moores-law/

作者:[Michael Kassner][a]
译者:[messon007](https://github.com/messon007)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
大数据初步:在树莓派上通过 Apache Spark on YARN 搭建 Hadoop 集群
===

有些时候我们想从 DQYDJ 网站的数据中分析点有用的东西出来,在过去,我们要[用 R 语言提取固定宽度的数据](https://dqydj.com/how-to-import-fixed-width-data-into-a-spreadsheet-via-r-playing-with-ipums-cps-data/),然后通过数学建模来分析[美国的最低收入补贴](http://dqydj.com/negative-income-tax-cost-calculator-united-states/),当然也包括其他优秀的方法。

今天我将向你展示对大数据的一点探索,不过有点变化,使用的是全世界最流行的微型电脑——[树莓派](https://www.raspberrypi.org/)。如果手头没有,那就看下一篇吧(可能是已经处理好的数据);对于其他用户,请继续阅读吧,今天我们要建立一个树莓派 Hadoop 集群!

### I. 为什么要建立一个树莓派的 Hadoop 集群?

![](https://dqydj.com/wp-content/uploads/2016/08/IMG_9132-245x300.png)

*由三个树莓派节点组成的 Hadoop 集群*

我们对 DQYDJ 的数据做了[大量的处理工作](https://dqydj.com/finance-calculators-investment-calculators-and-visualizations/),但这些还不能称得上是大数据。

和许许多多有争议的话题一样,数据的大小之别被解释成这样一个笑话:

> 如果能被内存所存储,那么它就不是大数据。 ——佚名

似乎这儿有两种解决问题的方法:

1. 我们可以找到一个足够大的数据集合,任何家用电脑的物理或虚拟内存都存不下。

2. 我们可以买一些不用特别定制、内存小到会被我们现有数据淹没的电脑:

——于是,树莓派 2B 登场了!

这个由设计师和工程师制作出来的精致小玩意儿拥有 1GB 的内存,MicroSD 卡充当它的硬盘。此外,每一台的价格都低于 50 美元,这意味着你可以花不到 250 美元的价格搭建一个 Hadoop 集群。

或许天下没有比这更便宜的入场券来带你进入大数据的大门。

### II. 制作一个树莓派集群

我最喜欢的环节:采购原材料!

这里我将给出我当初为了制作树莓派集群购买原材料的链接,如果以后要在亚马逊购买的话,你可以先把这些链接收藏起来,也是对本站的一点支持。(谢谢)

- [树莓派 2B 3 块](http://amzn.to/2bEFTVh)
- [4 层亚克力支架](http://amzn.to/2bTo1br)
- [6 口 USB 转接器](http://amzn.to/2bEGO8g),我选了白色 RAVPower 50W 10A 6 口 USB 转接器
- [MicroSD 卡](http://amzn.to/2cguV9I),这个五件套 32GB 卡非常棒
- [短的 MicroUSB 数据线](http://amzn.to/2bX2mwm),用于给树莓派供电
- [短网线](http://amzn.to/2bDACQJ)
- 双面胶,我有一些 3M 的,很好用

#### 开始制作

1. 首先,装好三个树莓派,每一个用螺丝钉固定在亚克力面板上。(看下图)

2. 接下来,安装以太网交换机,用双面胶把它贴在其中一块亚克力面板上。

3. 用双面胶将 USB 转接器贴在另一块亚克力面板上,使之成为最顶层。

4. 接着就是一层一层都拼好——这里我选择将树莓派放在交换机和 USB 转接器的底下(可以看看完整安装好的两张截图)。

想办法把线路放在需要的地方——如果你和我一样购买了短的 USB 线和网线,可以将它们卷起来放在亚克力板子的各层之间。

现在不要急着上电,需要先将系统烧录到 SD 卡上才能继续。

#### 烧录 Raspbian

按照[这个教程](https://www.raspberrypi.org/downloads/raspbian/)将 Raspbian 烧录到三张 SD 卡上,我使用的是 Win7 下的 [Win32DiskImager][2]。

将其中一张烧录好的 SD 卡插在你想作为主节点的树莓派上,连接 USB 线并启动它。

#### 启动主节点

这里有[一篇非常棒的 “Because We Can Geek” 的教程](http://www.becausewecangeek.com/building-a-raspberry-pi-hadoop-cluster-part-1/),讲如何安装 Hadoop 2.7.1,此处就不再赘述。

在启动过程中有一些要注意的地方,我将带着你一起设置,直到最后一步。记住我现在使用的 IP 段为 192.168.1.50 – 192.168.1.52,主节点是 .50,从节点是 .51 和 .52。你的网络可能会有所不同,如果你想设置静态 IP 的话,可以在评论区看看或讨论。

一旦你完成了这些步骤,接下来要做的就是启用交换文件。Spark on YARN 会分割出一块非常接近内存大小的交换文件,当你内存快用完时便会使用这个交换分区。

(如果你以前没有做过有关交换分区的操作的话,可以看看[这篇教程](https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04),并让 `swappiness` 保持较低水准,因为 MicroSD 卡的性能扛不住频繁换页。)
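下面是一个调低 `swappiness` 的示意操作(假设通过 `/etc/sysctl.conf` 持久化;数值 10 只是一个常见的保守取值):

```bash
# 临时生效:把 swappiness 调低,减少对 MicroSD 卡的写入
sudo sysctl vm.swappiness=10

# 永久生效:写入 /etc/sysctl.conf
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

# 确认当前值
cat /proc/sys/vm/swappiness
```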
下面我来介绍我的设置和 “Because We Can Geek” 教程之间的一些细微差别。

首先,确保你给你的树莓派起了一个正式的主机名——在 `/etc/hostname` 中设置。我的主节点设置为 ‘RaspberryPiHadoopMaster’,从节点设置为 ‘RaspberryPiHadoopSlave#’。

主节点的 `/etc/hosts` 配置如下:

```
#/etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.50 RaspberryPiHadoopMaster
192.168.1.51 RaspberryPiHadoopSlave1
192.168.1.52 RaspberryPiHadoopSlave2
```

如果你想让 Hadoop、YARN 和 Spark 运行正常的话,你也需要修改下面这些配置文件(不妨现在就编辑)。

这是 `hdfs-site.xml`:

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://RaspberryPiHadoopMaster:54310</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hdfs/tmp</value>
</property>
</configuration>
```

这是 `yarn-site.xml`(注意内存方面的改变):

```
<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>4</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>128</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>4</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
<description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>4</value>
<description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>RaspberryPiHadoopMaster:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>RaspberryPiHadoopMaster:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>RaspberryPiHadoopMaster:8040</value>
</property>
</configuration>
```

`slaves`:

```
RaspberryPiHadoopMaster
RaspberryPiHadoopSlave1
RaspberryPiHadoopSlave2
```

`core-site.xml`:

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://RaspberryPiHadoopMaster:54310</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hdfs/tmp</value>
</property>
</configuration>
```

#### 设置两个从节点:

接下来[按照 “Because We Can Geek” 上的教程](http://www.becausewecangeek.com/building-a-raspberry-pi-hadoop-cluster-part-2/),你需要对上面的文件作出小小的改动。注意:`yarn-site.xml` 与主节点上的相同,无需改动;而从节点上不需要 `slaves` 文件。

### III. 在我们的树莓派集群中测试 YARN

如果所有设备都正常工作,在主节点上你应该执行如下命令:

```
start-dfs.sh
start-yarn.sh
```

这些命令要在设备启动后,以你的 Hadoop 用户身份执行;如果你遵循了教程,这个用户应该是 `hduser`。

接下来执行 `hdfs dfsadmin -report` 查看三个节点是否都正确启动,确认你看到一行粗体文字 ‘Live datanodes (3)’:

```
Configured Capacity: 93855559680 (87.41 GB)
Present Capacity: 65321992192 (60.84 GB)
DFS Remaining: 62206627840 (57.93 GB)
DFS Used: 3115364352 (2.90 GB)
DFS Used%: 4.77%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):
Name: 192.168.1.51:50010 (RaspberryPiHadoopSlave1)
Hostname: RaspberryPiHadoopSlave1
Decommission Status : Normal
```

你现在可以做一些简单的诸如 ‘Hello, World!’ 的测试,或者直接进行下一步。

### IV. 安装 Spark on YARN

YARN 的意思是“另一种资源协调器”(Yet Another Resource Negotiator),它作为一个易用的资源管理器,已经集成在 Hadoop 基础安装包中。

[Apache Spark](https://spark.apache.org/) 是 Hadoop 生态圈中的另一款软件包,它是一个执行引擎,就像那个毁誉参半的、[捆绑在 Hadoop 中的 MapReduce](https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html) 一样。在一般情况下,相对于基于磁盘存储的 MapReduce,Spark 更适合基于内存的存储,某些运行任务能够得到 10 到 100 倍的提升——安装完成集群后,你可以试试 Spark 和 MapReduce 有什么不同。

我个人对 Spark 留下了非常深刻的印象,因为它提供了两种数据工程师和科学家都比较擅长的语言——Python 和 R。

安装 Apache Spark 非常简单:在你的家目录下,用 `wget` 下载“为 Hadoop 2.7 构建的 Apache Spark”([来自这个页面](https://spark.apache.org/downloads.html)),然后运行 `tar -xzf “tgz 文件”`,最后把解压出来的文件移动至 `/opt`,并清除刚才下载的文件,以上这些就是安装步骤。
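具体命令大致如下(仅为示意;这里以 spark-2.0.0-bin-hadoop2.7 为例,与本文后面 spark-submit 命令中的路径一致,实际版本号和下载地址请以官方下载页为准):

```bash
cd ~

# 下载为 Hadoop 2.7 预构建的 Spark(版本号和镜像地址按下载页面的实际情况替换)
wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz

# 解压并移动到 /opt,然后清理下载的压缩包
tar -xzf spark-2.0.0-bin-hadoop2.7.tgz
sudo mv spark-2.0.0-bin-hadoop2.7 /opt/
rm spark-2.0.0-bin-hadoop2.7.tgz
```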
我又在 Spark 的配置文件目录中创建了只有两行的文件 `spark-env.sh`:

```
SPARK_MASTER_IP=192.168.1.50
SPARK_WORKER_MEMORY=512m
```

(在 YARN 跑起来之前,我不确定这些是否有必要。)

### V. 你好,世界!为 Apache Spark 寻找有趣的数据集!

在 Hadoop 世界里面的 ‘Hello, World!’ 就是做单词计数。

我决定让我们的作品做一些内省式的分析——为什么不统计一下本站最常用的单词呢?也许统计一些关于本站的大数据会更有用。

如果你有一个正在运行的 WordPress 博客,可以通过简单的两步来导出和净化。

1. 我使用 [Export to Text](https://wordpress.org/support/plugin/export-to-text) 插件导出文章的内容到纯文本文件中

2. 我使用 [bleach 这个 HTML 清理库](https://pypi.python.org/pypi/bleach)编写了一个 Python 脚本来剔除 HTML 标签

```
import bleach

# Change this next line to your 'import' filename, whatever you would like to strip
# HTML tags from.
ascii_string = open('dqydj_with_tags.txt', 'r').read()

new_string = bleach.clean(ascii_string, tags=[], attributes={}, styles=[], strip=True)
new_string = new_string.encode('utf-8').strip()

# Change this next line to your 'export' filename
f = open('dqydj_stripped.txt', 'w')
f.write(new_string)
f.close()
```

现在我们有了一个更小的、适合复制到树莓派所搭建的 HDFS 集群上的文件。

如果你不是在树莓派主节点上完成上面的操作的,想个办法将它传输上去(scp、rsync 等等),然后用下列命令行复制到 HDFS 上。

```
hdfs dfs -copyFromLocal dqydj_stripped.txt /dqydj_stripped.txt
```

现在准备进行最后一步——向 Apache Spark 写入一些代码。

### VI. 点亮 Apache Spark

Cloudera 有个极棒的程序可以作为我们的超级单词计数程序的基础,[你可以在这里找到](https://www.cloudera.com/documentation/enterprise/5-6-x/topics/spark_develop_run.html)。我们接下来为我们的内省式单词计数程序修改它。

在主节点上[安装 ‘stop-words’](https://pypi.python.org/pypi/stop-words) 这个 Python 第三方包。虽然很有趣(我在 DQYDJ 上用了 23,295 次 “the” 这个单词),但你大概不想看到这些虚词占据着单词计数的前列。另外,在下列代码中,用你自己的数据集替换所有指向 dqydj 文件的地方。

```
import sys

from stop_words import get_stop_words
from pyspark import SparkContext, SparkConf

if __name__ == "__main__":

    # create Spark context with Spark configuration
    conf = SparkConf().setAppName("Spark Count")
    sc = SparkContext(conf=conf)

    # get threshold
    try:
        threshold = int(sys.argv[2])
    except:
        threshold = 5

    # read in text file and split each document into words
    tokenized = sc.textFile(sys.argv[1]).flatMap(lambda line: line.split(" "))

    # count the occurrence of each word
    wordCounts = tokenized.map(lambda word: (word.lower().strip(), 1)).reduceByKey(lambda v1,v2:v1 +v2)

    # filter out words with fewer than threshold occurrences
    filtered = wordCounts.filter(lambda pair:pair[1] >= threshold)

    print "*" * 80
    print "Printing top words used"
    print "-" * 80
    filtered_sorted = sorted(filtered.collect(), key=lambda x: x[1], reverse = True)
    for (word, count) in filtered_sorted: print "%s : %d" % (word.encode('utf-8').strip(), count)

    # Remove stop words
    print "\n\n"
    print "*" * 80
    print "Printing top non-stop words used"
    print "-" * 80
    # Change this to your language code (see the stop-words documentation)
    stop_words = set(get_stop_words('en'))
    no_stop_words = filter(lambda x: x[0] not in stop_words, filtered_sorted)
    for (word, count) in no_stop_words: print "%s : %d" % (word.encode('utf-8').strip(), count)
```

保存好 wordCount.py,确保上面的路径都是正确无误的。

现在,准备念出咒语,让运行在 YARN 上的 Spark 跑起来,你可以看到我在 DQYDJ 使用最多的单词是哪一个。

```
/opt/spark-2.0.0-bin-hadoop2.7/bin/spark-submit --master yarn --executor-memory 512m --name wordcount --executor-cores 8 wordCount.py /dqydj_stripped.txt
```

### VII. 我在 DQYDJ 使用最多的单词

可能进入前列的单词有哪一些呢?“can, will, it’s, one, even, like, people, money, don’t, also”。

嘿,不错,“money”悄悄挤进了前十。在一个致力于金融、投资和经济的网站上,这看起来是件好事,对吧?

下面是前 50 个最常用的词汇,你可以据此对我的文章风格得出自己的结论。

![](https://dqydj.com/wp-content/uploads/2016/08/dqydj_pk_most_used_words.png)

我希望你能喜欢这篇关于 Hadoop、YARN 和 Apache Spark 的教程,现在你可以在 Spark 上运行和编写其他的应用了。

你的下一步任务是开始[阅读 pyspark 文档](https://spark.apache.org/docs/2.0.0/api/python/index.html)(以及用于其他语言的该库),去学习一些可用的功能。根据你的兴趣和你实际存储的数据,你将会深入学习到更多——有流数据、SQL,甚至机器学习的软件包!

你怎么看?你要建立一个树莓派 Hadoop 集群吗?想要在其中挖掘一些什么吗?你在上面看到最令你惊奇的单词是什么?为什么 'S&P' 也能上榜?

--------------------------------------------------------------------------------

via: https://dqydj.com/raspberry-pi-hadoop-cluster-apache-spark-yarn/

作者:[PK][a]
译者:[popy32](https://github.com/sfantree)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dqydj.com/about/#contact_us
[1]: https://www.raspberrypi.org/downloads/raspbian/
[2]: https://sourceforge.net/projects/win32diskimager/
[3]: http://www.becausewecangeek.com/building-a-raspberry-pi-hadoop-cluster-part-1/
[4]: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
[5]: http://www.becausewecangeek.com/building-a-raspberry-pi-hadoop-cluster-part-2/
[6]: https://spark.apache.org/
[7]: https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html
[8]: https://spark.apache.org/downloads.html
[9]: https://wordpress.org/support/plugin/export-to-text
[10]: https://pypi.python.org/pypi/bleach
Rowhammer:针对物理内存的攻击可以取得 Android 设备的 root 权限
===

> 攻击者确实可以通过在物理存储单元中实现位翻转来达到侵入移动设备与计算机的目的

![](http://images.techhive.com/images/idgnsImport/2015/08/id-2969037-security1-100606370-large.jpg)

研究者们发现了一种新的侵入 Android 设备的方式:不利用任何软件漏洞,而是利用内存芯片物理设计上的弱点。这种攻击技术同样可以影响到其它如 ARM 和 x86 架构的设备与计算机。

这种称之为 “Rowhammer” 的攻击,源于过去十多年中将更多的 DRAM(动态随机存取存储器)容量封装进越来越小的芯片中,这将导致在特定情况下存储单元的电荷可以从相邻两<ruby>行<rt>row</rt></ruby>的一边泄漏到另一边。(LCTT 译注:参见 https://en.wikipedia.org/wiki/Row_hammer)

例如,反复且快速地访问相同的物理储存位置——这种行为被称为 “<ruby>锤击<rt>hammering</rt></ruby>”——可以导致相邻位置的位值从 0 翻转成 1,或者相反。

这样的电子干扰现象早已被生产商知晓,并从可靠性角度研究了一段时间,因为内存错误可能导致系统崩溃。而研究者们现在展示了,当它以可控的方式触发时,会带来怎样严重的安全隐患。

在 2015 年 4 月,来自谷歌 Project Zero 项目的研究者公布了两份基于内存 “Rowhammer” 漏洞、针对 x86-64 CPU 架构的[提权利用][7]。其中一份利用可以使代码从谷歌的 Chrome 浏览器沙盒里逃逸并且直接在系统上执行,另一份可以在 Linux 机器上获取内核级权限。

此后,其他的研究者进行了更深入的调查,并且展示了[通过网站中 JavaScript 脚本进行利用的方式][6],甚至能够影响运行在云环境下的[虚拟服务器][5]。然而,对于这项技术是否可以应用在智能手机和移动设备大量使用的 ARM 架构中,仍然存在疑问。

现在,一支由来自荷兰阿姆斯特丹自由大学 VUSec 小组、奥地利格拉茨技术大学和加州大学圣塔芭芭拉分校的研究者组成的团队,已经证明了 Rowhammer 不仅仅可以应用在 ARM 架构上,并且甚至比在 x86 架构上更容易。

研究者们将他们的新攻击命名为 Drammer,代表了 Rowhammer 确实存在,并且计划于周三在维也纳举办的第 23 届 ACM 计算机与通信安全大会上展示。这种攻击建立在之前就被发现与实现的 Rowhammer 技术之上。

VUSec 小组的研究者已经制造了一个适用于 Android 设备的恶意应用,当它被执行的时候,利用不易察觉的内存位翻转,在不需要任何权限的情况下就可以获取设备的 root 权限。

研究者们测试了来自不同制造商的 27 款 Android 设备,其中 21 款使用 ARMv7(32 位)指令集架构,其它 6 款使用 ARMv8(64 位)指令集架构。他们成功地在 17 款 ARMv7 设备和 1 款 ARMv8 设备上实现了位翻转,表明了这些设备是易受攻击的。

此外,Drammer 能够与其它的 Android 漏洞组合使用,例如 [Stagefright][4] 或者 [BAndroid][3],来实现无需用户手动下载恶意应用的远程攻击。

谷歌已经注意到了这一类型的攻击。“在研究者向谷歌漏洞奖励计划报告了这个问题之后,我们与他们进行了密切的沟通来深入理解这个问题,以便我们更好地保护用户,”一位谷歌的代表在一份邮件申明中这样说到。“我们已经开发了一个缓解方案,将会包含在十一月的安全更新中。”(LCTT 译注:缓解方案,参见 https://en.wikipedia.org/wiki/Vulnerability_management)

VUSec 的研究者认为,谷歌的缓解方案将会使得攻击过程更为复杂,但是它不能修复潜在的问题。

事实上,从软件上去修复一个由硬件导致的问题是不现实的。硬件供应商正在研究相关问题,并且有可能在将来的内存芯片中修复,但是在现有设备的芯片中风险依然存在。

更糟的是,研究者们说,由于有许多因素会影响到攻击的成功与否,并且这些因素尚未被研究透彻,因此很难断言哪些设备会被影响到。例如,内存控制器可能会在不同电量的情况下表现出不同的行为,因此一个设备可能在满电的情况下没有风险,而当它处于低电量的情况下就是有风险的。

同样的,在网络安全中有这样一句俗语:<ruby>攻击只会越来越强,绝不会越来越弱<rt>Attacks always get better, they never get worse</rt></ruby>。Rowhammer 攻击已经从理论变成了现实可能;同样的,它也可能会从现在的现实可能变成确确实实的存在。这意味着今天某个不受影响的设备,明天就有可能被改进后的 Rowhammer 技术证明是存在风险的。

Drammer 在 Android 上实现,是因为研究者期望研究基于 ARM 设备的影响,但是其底层技术可以应用于所有的架构与操作系统。新的攻击相较于之前那些依赖运气、特殊功能特性或特定平台、并且十分容易失效的技术,已经是一个巨大的进步了。

Drammer 攻击的实现依靠于被大量硬件子系统(包括图形、网络、声音等)所使用的 DMA(直接存储访问)缓存。Drammer 的实现采用了 Android 的 ION 内存分配器,但类似的接口与方法在所有操作系统上都存在,这一警示正是该论文的主要贡献之一。

“我们首次成功地展示了,可以在不依赖任何特定功能特性的情况下,完全可靠地触发 Rowhammer,” VUSec 小组中的一位研究者 Cristiano Giuffrida 这样说道。“攻击所利用的内存位置并非是 Android 独有的。攻击在任何的 Linux 平台上都能工作——我们甚至怀疑其它操作系统也可以——因为它利用的是操作系统内核内存管理中固有的特性。”

“我预计我们可以看到更多针对其它平台的攻击变种,”阿姆斯特丹自由大学的教授兼 VUSec 系统安全研究小组的领导者 Herbert Bos 补充道。

在他们的[论文][2]之外,研究者们也释出了一个 Android 应用,用来测试 Android 设备在当前已知的技术条件下受到 Rowhammer 攻击时是否有风险。该应用还没有上架谷歌应用商店,可以从 [VUSec Drammer 网站][1]下载并手动安装。一个开源的 Rowhammer 模拟器同样能够帮助其他的研究者更深入地研究这个问题。

--------------------------------------------------------------------------------

via: http://www.csoonline.com/article/3134726/security/physical-ram-attack-can-root-android-and-possibly-other-devices.html

作者:[Lucian Constantin][a]
译者:[wcnnbdk1](https://github.com/wcnnbdk1)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://www.csoonline.com/author/Lucian-Constantin/
[1]:https://www.vusec.net/projects/drammer/
[2]:https://vvdveen.com/publications/drammer.pdf
[3]:https://www.vusec.net/projects/bandroid/
[4]:http://www.csoonline.com/article/3045836/security/new-stagefright-exploit-puts-millions-of-android-devices-at-risk.html
[5]:http://www.infoworld.com/article/3105889/security/flip-feng-shui-attack-on-cloud-vms-exploits-hardware-weaknesses.html
[6]:http://www.computerworld.com/article/2954582/security/researchers-develop-astonishing-webbased-attack-on-a-computers-dram.html
[7]:http://www.computerworld.com/article/2895898/google-researchers-hack-computers-using-dram-electrical-leaks.html
[8]:http://csoonline.com/newsletters/signup.html
GitLab 工作流概览
======

GitLab 是一个基于 git 的仓库管理程序,也是一个方便软件开发的强大完整应用。

GitLab 拥有一个对新用户友好的界面,通过图形界面和命令行界面,使你的工作更加具有效率。GitLab 不仅仅对开发者是一个有用的工具,它甚至可以被集成到你的整个团队中,让每一个人都在同一个平台上协作。

GitLab 工作流逻辑符合使用者思维,使得整个平台变得更加易用。相信我,使用一次,你就离不开它了!

### GitLab 工作流

**GitLab 工作流**是在软件开发过程中,在使用 GitLab 作为代码托管平台时,可以采取的动作的一个逻辑序列。

GitLab 工作流遵循了 [GitLab Flow][97] 策略,这是由一系列**基于 Git** 的方法和策略组成的,这些方法为版本的管理,例如**分支策略**、**Git 最佳实践**等等提供了保障。

通过 GitLab 工作流,可以很方便地[提升](https://about.gitlab.com/2016/09/13/gitlab-master-plan/)团队的工作效率以及凝聚力。这种提升,从引入一个新的项目开始,一直到发布这个项目成为一个产品,都有所体现。这就是我们所说的“如何通过最快的速度把一个点子在 10 步之内变成一个产品”。

![FROM IDEA TO PRODUCTION IN 10 STEPS](https://about.gitlab.com/images/blogimages/idea-to-production-10-steps.png)

#### 软件开发阶段

一般情况下,软件开发经过 10 个主要阶段;GitLab 为这 10 个阶段依次提供了解决方案:

1. **IDEA**: 每一个从点子开始的项目,通常来源于一次闲聊。在这个阶段,GitLab 集成了 [Mattermost][44]。

2. **ISSUE**: 最有效的讨论一个点子的方法,就是为这个点子建立一个工单讨论。你的团队和你的合作伙伴可以在<ruby>工单追踪器<rt>issue tracker</rt></ruby>中帮助你去提升这个点子。

3. **PLAN**: 一旦讨论得到一致的同意,就是开始编码的时候了。但是等等!首先,我们需要优先排序并组织我们的工作流。对于此,我们可以使用<ruby>工单看板<rt>Issue Board</rt></ruby>。

4. **CODE**: 现在,当一切准备就绪,我们可以开始写代码了。

5. **COMMIT**: 当我们为我们的初步成果欢呼的时候,我们就可以在版本控制下,提交代码到功能分支了。

6. **TEST**: 通过 [GitLab CI][41],我们可以运行脚本来构建和测试我们的应用。

7. **REVIEW**: 一旦脚本成功运行,我们测试和构建成功,我们就可以进行<ruby>代码复审<rt>code review</rt></ruby>以及批准。

8. **STAGING**: 现在是时候[将我们的代码部署到演示环境][39]来检查一下,看看是否一切就像我们预估的那样顺畅——或者我们可能仍然需要修改。

9. **PRODUCTION**: 当一切都如预期,就是[部署到生产环境][38]的时候了!

10. **FEEDBACK**: 现在是时候回过头来看我们项目中需要提升的部分了。我们使用[<ruby>周期分析<rt>Cycle Analytics</rt></ruby>][37]来对当前项目中关键的部分进行反馈。

简单浏览这些步骤,我们可以发现,提供强大的工具来支持这些步骤是十分重要的。在接下来的部分,我们为 GitLab 的可用工具提供一个简单的概览。

### GitLab 工单追踪器

GitLab 有一个强大的工单追踪系统,在使用过程中,允许你和你的团队,以及你的合作者分享和讨论建议。

![issue tracker - view list](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-tracker-list-view.png)

工单是 GitLab 工作流的第一个重要特性。[从工单的讨论开始][95],是追踪一个新点子的演变的最佳方式。

这十分有利于:

* 讨论点子
* 提交功能建议
* 提问题
* 提交错误和故障
* 获取支持
* 精细化新代码的引入

每一个在 GitLab 上部署的项目都有一个工单追踪器。找到你的项目中的 **Issues** > **New issue** 来创建一个新的工单。建立一个标题来总结要被讨论的主题,并且使用 [Markdown][94] 来描述它。看看下面的“专业技巧”来加强你的工单描述。

GitLab 工单追踪器还提供了一些额外的实用功能,使得工单管理起来更容易、更有条理。下面的部分仔细描述了它们。

![new issue - additional settings](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issue-features-view.png)

#### 秘密工单

无论何时,如果你仅仅想要在团队中讨论这个工单,可以使[该工单成为秘密的][92]。即使你的项目是公开的,你的工单也会被保密起来。当一个不是本项目成员的人,就算是[报告人级别][91],想要访问工单的地址时,浏览器也会返回一个 404 错误。

#### 截止日期

每一个工单允许你填写一个[截止日期][90]。有些团队工作时间表安排紧凑,因此需要以某种方式设置一个截止日期来解决问题。这些都可以通过截止日期这一功能实现。

当你对一个多任务项目有截止日期的时候——比如说,一个新的发布活动、项目的启动,或者按阶段追踪任务——你可以使用[里程碑][89]。

#### 受托者

要让某人处理某个工单,可以将其分配给他。你可以任意修改被分配者,直到满足你的需求。这个功能的想法是,一个受托者本身对这个工单负责,直到其将这个工单重新赋予其他人。

这也可以用于按受托者筛选工单。

#### 标签

GitLab 标签也是 GitLab 流的一个重要组成部分。你可以使用它们来分类你的工单,在工作流中定位,以及通过[优先级标签][88]按优先级组织它们。

标签使得你可以与 [GitLab 工单看板][87]协同工作,加快工程进度以及组织你的工作流。

**新功能:** 你可以创建[组标签][86],它可以使你在每一个项目组中使用相同的标签。

#### 工单权重

你可以添加一个[工单权重][85],使得一个工单的重要性表现得更为清晰。01 - 03 表示工单不是特别重要,07 - 09 表示十分重要,04 - 06 表示程度适中。此外,你可以与你的团队自行定义工单重要性的指标。

注:该功能仅可用于 GitLab 企业版和 GitLab.com 上。

#### GitLab 工单看板

在项目中,[GitLab 工单看板][84]是一个用于计划以及组织你的工单,使之符合你的项目工作流的工具。

看板包含了与其相关的相应标签,每一个列表包含了相关的被标记的工单,并且以卡片的形式展示出来。

这些卡片可以在列表之间移动,被移动的卡片,其标签将会依据你移动的位置相应更新到列表上。

![GitLab Issue Board](https://about.gitlab.com/images/blogimages/designing-issue-boards/issue-board.gif)

**新功能:** 你也可以通过点击列表上方的 “+” 按钮在看板右边创建工单。当你这么做的时候,这个工单将会自动添加与列表相关的标签。

**新功能:** 我们[最近推出了][83]每一个 GitLab 项目拥有**多个工单看板**的功能(仅存在于 [GitLab 企业版][82]);这是为不同的工作流组织你的工单的好方法。

![Multiple Issue Boards](https://about.gitlab.com/images/8_13/m_ib.gif)

### 通过 GitLab 进行代码复审

在工单追踪器中讨论了新的提议之后,就是在代码上做工作的时候了。你在本地书写代码,一旦你完成了你的第一个版本,提交你的代码并且推送到你的 GitLab 仓库。你基于 Git 的管理策略可以在 [GitLab 流][81]中得到提升。

#### 第一次提交

在你的第一次提交信息中,你可以提及相关的工单号。通过这样做,你可以将两个阶段的开发工作流链接起来:工单本身以及关于这个工单的第一次提交。

如果你提交的代码和工单属于同一个项目,你可以简单地添加 `#xxx` 到提交信息中(LCTT 译注:`git commit message`),`xxx` 是一个工单号。如果它们不在一个项目中,你可以添加该工单的完整 URL(`https://gitlab.com/<username>/<projectname>/issues/<xxx>`)。

```
git commit -m "this is my commit message. Ref #xxx"
```

或者

```
git commit -m "this is my commit message. Related to https://gitlab.com/<username>/<projectname>/issues/<xxx>"
```

当然,你也可以用你自己的 GitLab 实例地址替换 `gitlab.com`。

**注:** 链接工单和你的第一次提交,是为了通过 [GitLab 周期分析][80]追踪你的进展。这将会衡量计划执行该工单所花的时间,即创建工单与第一次提交的间隔时间。

#### 合并请求

一旦将你的改动提交到功能分支,GitLab 将识别该修改,并且建议你提交一次<ruby>合并请求<rt>Merge Request</rt></ruby>(MR)。

每一次 MR 都会有一个标题(这个标题总结了这次的改动),以及一个用 [Markdown][79] 书写的描述。在描述中,你可以简单地描述该 MR 做了什么,提及任何工单以及 MR(在它们之间创建联系);并且,你也可以添加一个[关闭工单模式][78],当该 MR 被**合并**的时候,相关联的工单就会被关闭。

例如:

```
## 增加一个新页面

这个 MR 将会为这个项目创建一个包含该 app 概览的 `readme.md`。

Closes #xxx and https://gitlab.com/<username>/<projectname>/issues/<xxx>

预览:

![预览新页面](#image-url)

cc/ @Mary @Jane @John
```

当你创建一个如上的带有描述的 MR,它将会:

* 当合并时,关闭工单 `#xxx` 以及 `https://gitlab.com/<username>/<projectname>/issues/<xxx>`
* 展示一张图片
* 通过邮件提醒用户 `@Mary`、`@Jane` 以及 `@John`

你可以分配这个 MR 给你自己,直到你完成你的工作,然后把它分配给其他人来做一次代码复审。如果有必要的话,这个 MR 可以被重新分配多次,直到完成你所需要的所有复审。

它也可以被标记,并且添加一个[里程碑][77]来促进管理。

当你从图形界面而不是命令行添加或者修改一个文件并且提交一个新的分支时,也很容易创建一个新的 MR:只需要标记一下复选框,“以这些改变开始一个新的合并请求”,然后,一旦你提交你的改动,GitLab 将会自动创建一个新的 MR。

![commit to a feature branch and add a new MR from the UI](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/start-new-mr-edit-from-ui.png)

**注:** 添加[关闭工单模式][76]到你的 MR,以便可以使用 [GitLab 周期分析][75]追踪你的项目进展,是十分重要的。它将会追踪 “CODE” 阶段,衡量第一次提交到创建一个相关的合并请求所间隔的时间。

**新功能:** 我们已经开发了[审查应用][74],这是一个可以让你部署你的应用到一个动态的环境中的新功能,在此你可以按分支名字、每个合并请求来预览改变。参看这里的[可用示例][73]。

#### WIP MR

WIP MR 的含义是**在工作过程中的合并请求**,是一个我们在 GitLab 中避免 MR 在准备就绪前被合并的技术。只需要添加 `WIP:` 在 MR 的标题开头,它将不会被合并,除非你把 `WIP:` 删除。

当你的改动已经准备好被合并,编辑 MR 描述来手动删除 `WIP:`,或者使用如下 MR 描述下方的快捷方式。

![WIP MR click to remove WIP from the title](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-wip-mr.png)

**新功能:** `WIP` 模式可以通过[斜线命令][71] `/wip` [快速添加到合并请求中][72]。只需要在评论或者 MR 描述中输入它并提交即可。

#### 复审

一旦你创建了一个合并请求,就是你开始从你的团队以及合作方收取反馈的时候了。使用图形界面中的差异比较功能,你可以简单地添加行内注释,以及回复或者解决它们。

你也可以通过点击行号获取每一行代码的链接。

在图形界面中可以看到提交历史,通过提交历史,你可以追踪文件的每一次改变。你可以以行内差异或左右对比的方式浏览它们。

![code review in MRs at GitLab](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-review.png)

**新功能:** 如果你遇到合并冲突,可以快速地[通过图形界面来解决][70],或者依据你的需要修改文件来修复冲突。

![mr conflict resolution](https://about.gitlab.com/images/8_13/inlinemergeconflictresolution.gif)

### 构建、测试以及发布

[GitLab CI][69] 是一个强大的内建工具,其作用是[持续集成、持续发布以及持续分发][68],它可以按照你希望的方式运行一些脚本。它的可能性是无止尽的:你可以把它看做是替你运行的命令行。

它完全是通过一个名为 `.gitlab-ci.yml` 的 YAML 文件设置的,其放置在你的项目仓库中。使用 Web 界面简单地添加一个文件,命名为 `.gitlab-ci.yml`,就可以触发一个下拉菜单,为不同的应用选择各种 CI 模版。
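下面是一个极简的 `.gitlab-ci.yml` 示意(这里借助 shell 的 heredoc 创建文件;任务名 `test_job` 和其中的命令只是占位的假设内容,请替换成你自己的构建或测试步骤):

```bash
# 在仓库根目录创建一个最简单的 .gitlab-ci.yml(示意)
cat > .gitlab-ci.yml <<'EOF'
stages:
  - test

test_job:          # 任务名可以随意取
  stage: test
  script:
    - echo "在这里运行你的构建或测试命令"
EOF

git add .gitlab-ci.yml
git commit -m "Add minimal CI configuration"
```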
![GitLab CI templates - dropdown menu](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-ci-template.png)
|
||||
|
||||
#### Koding
|
||||
|
||||
Use GitLab's [Koding integration][67] to run your entire development environment in the cloud. This means that you can check out a project or just a merge request in a full-fledged IDE with the press of a button.
|
||||
|
||||
可以使用 GitLab 的 [Koding 集成][67]功能在云端运行你的整个云端开发环境。这意味着你可以轻轻一键即可在一个完整的 IDE 中检出以个项目,或者合并一个请求。
|
||||
|
||||
#### 使用案例
|
||||
|
||||
GitLab CI 的使用案例:
|
||||
|
||||
* 用它来[构建][36]任何[静态网站生成器][35],并且通过 [GitLab Pages][34] 发布你的网站。
|
||||
* 用它来[发布你的网站][33] 到 `staging` 以及 `production` [环境][32]。
|
||||
* 用它来[构建一个 iOS 应用][31]。
|
||||
* 用它来[构建和发布你的 Docker 镜像][30]到 [GitLab 容器注册库][29]。
|
||||
|
||||
我们已经准备一大堆 [GitLab CI 样例工程][66]作为您的指南。看看它们吧!
|
||||
|
||||
### 反馈:周期分析
|
||||
|
||||
当你遵循 GitLab 工作流进行工作,你的团队从点子到产品,在每一个[过程的关键部分][64],你将会在下列时间获得一个 [GitLab 周期分析][65]的反馈:
|
||||
|
||||
* **Issue**: 从创建一个工单,到分配这个工单给一个里程碑或者添加工单到你的工单看板的时间。
|
||||
* **Plan**: 从给工单分配一个里程碑或者把它添加到工单看板,到推送第一次提交的时间。
|
||||
* **Code**: 从第一次提交到提出该合并请求的时间。
|
||||
* **Test**: CI 为了相关合并请求而运行整个过程的时间。
|
||||
* **Review**: 从创建一个合并请求到合并它的时间。
|
||||
* **Staging**: 从合并到发布成为产品的时间。
|
||||
* **Production(Total)**: 从创建工单到把代码发布成[产品][28]的时间。
|
||||
|
||||
### 加强
|
||||
|
||||
#### 工单以及合并请求模版
|
||||
|
||||
[工单以及合并请求模版][63]允许你为你的项目去定义一个特定内容的工单模版和合并请求的描述字段。
|
||||
|
||||
你可以以 [Markdown][62] 形式书写它们,并且把它们加入仓库的默认分支。当创建工单或者合并请求时,可以通过下拉菜单访问它们。
|
||||
|
||||
它们节省了您在描述工单和合并请求的时间,并标准化了需要持续跟踪的重要信息。它确保了你需要的一切都在你的掌控之中。
|
||||
|
||||
你可以创建许多模版,用于不同的用途。例如,你可以有一个提供功能建议的工单模版,或者一个 bug 汇报的工单模版。在 [GitLab CE project][61] 中寻找真实的例子吧!
|
||||
|
||||
![issues and MR templates - dropdown menu screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/issues-choose-template.png)
|
||||
|
||||
#### 里程碑
|
||||
|
||||
[里程碑][60] 是 GitLab 中基于共同的目标、详细的日期追踪你队伍工作的最好工具。
|
||||
|
||||
不同情况下的目的是不同的,但是大致是相同的:你有为了达到特定的目标的工单的集合以及正在编码的合并请求。
|
||||
|
||||
这个目标基本上可以是任何东西——用来结合团队的工作,在一个截止日期前完成一些事情。例如,发布一个新的版本,启动一个新的产品,在某个日期前完成,或者按季度收尾一些项目。
|
||||
|
||||
![milestone dashboard](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-milestone.png)
|
||||
|
||||
### 专业技巧
|
||||
|
||||
#### 工单和 MR
|
||||
|
||||
* 在工单和 MR 的描述中:
|
||||
* 输入 `#` 来触发一个已有工单的下拉列表
|
||||
* 输入 `!` 来触发一个已有 MR 的下拉列表
|
||||
* 输入 `/` 来触发[斜线命令][4]
|
||||
* 输入 `:` 来出发 emoji 表情 (也支持行中评论)
|
||||
* 通过按钮“附加文件”来添加图片(jpg、png、gif) 和视频到行内评论
|
||||
* 通过 [GitLab Webhooks][26] [自动应用标签][27]
|
||||
* [构成引用][24]: 使用语法 `>>>` 来开始或者结束一个引用
|
||||
|
||||
```
|
||||
>>>
|
||||
Quoted text
|
||||
|
||||
Another paragraph
|
||||
>>>
|
||||
```
|
||||
* 创建[任务列表][23]:
|
||||
|
||||
```
|
||||
- [ ] Task 1
|
||||
- [ ] Task 2
|
||||
- [ ] Task 3
|
||||
```
|
||||
|
||||
##### 订阅
|
||||
|
||||
你是否发现你有一个工单或者 MR 想要追踪?展开你的右边的导航,点击[订阅][59],你就可以在随时收到一个评论的提醒。要是你想要一次订阅多个工单和 MR?使用[批量订阅][58]。
|
||||
|
||||
##### 添加代办
|
||||
|
||||
除了一直留意工单和 MR,如果你想要对它预先做点什么,或者不管什么时候你想要在 GitLab 代办列表中添加点什么,点击你右边的导航,并且[点击**添加代办**][57]。
|
||||
|
||||
##### 寻找你的工单和 MR
|
||||
|
||||
当你寻找一个在很久以前由你开启的工单或 MR——它们可能数以十计、百计、甚至千计——所以你很难找到它们。打开你左边的导航,并且点击**工单**或者**合并请求**,你就会看到那些分配给你的。同时,在那里或者任何工单追踪器里,你可以通过作者、分配者、里程碑、标签以及重要性来过滤工单,也可以通过搜索所有不同状态的工单,例如开启的、合并的,关闭的等等。
|
||||
|
||||
#### 移动工单
|
||||
|
||||
一个工单在一个错误的项目中结束了?不用担心,点击**Edit**,然后[移动工单][56]到正确的项目。
|
||||
|
||||
#### 代码片段
|
||||
|
||||
你经常在不同的项目以及文件中使用一些相同的代码段和模版吗?创建一个代码段并且使它在你需要的时候可用。打开左边导航栏,点击**[Snipptes][25]**。所有你的片段都会在那里。你可以把它们设置成公开的,内部的(仅为 GitLab 注册用户提供),或者私有的。
|
||||
|
||||
![Snippets - screenshot](https://about.gitlab.com/images/blogimages/gitlab-workflow-an-overview/gitlab-code-snippet.png)
|
||||
|
||||
### GitLab 工作流用户案例概要
|
||||
|
||||
作为总结,让我们把所有东西聚在一起理顺一下。不必担心,这十分简单。
|
||||
|
||||
让我们假设:你工作于一个专注于软件开发的公司。你创建了一个新的工单,这个工单是为了开发一个新功能,实施于你的一个应用中。
|
||||
|
||||
**标签策略**
|
||||
|
||||
为了这个应用,你已经创建了几个标签,“讨论”、“后端”、“前端”、“正在进行”、“展示”、“就绪”、“文档”、“营销”以及“产品”。所有都已经在工单看板有它们自己的列表。你的当前的工单已经有了标签“讨论”。
|
||||
|
||||
在工单追踪器中的讨论达成一致之后,你的后端团队开始在工单上工作,所以他们把这个工单的标签从“讨论”移动到“后端”。第一个开发者开始写代码,并且把这个工单分配给自己,增加标签“正在进行”。
|
||||
|
||||
**编码 & 提交**
|
||||
|
||||
在他的第一次提交的信息中,他提及了他的工单编号。在工作后,他把他的提交推送到一个功能分支,并且创建一个新的合并请求,在 MR 描述中,包含工单关闭模式。他的团队复审了他的代码并且保证所有的测试和建立都已经通过。
|
||||
|
||||
**使用工单看板**
|
||||
|
||||
一旦后端团队完成了他们的工作,他们就删除“正在进行”标签,并且把工单从“后端”移动到“前端”看板。所以,前端团队接到通知,这个工单已经为他们准备好了。
|
||||
|
||||
**发布到演示**
|
||||
|
||||
当一个前端开发者开始在该工单上工作,他(她)增加一个标签“正在进行”,并且把这个工单重新分配给自己。当工作完成,该实现将会被发布到一个**演示**环境。标签“正在进行”就会被删除,然后在工单看板里,工单卡被移动到“演示”列表中。
|
||||
|
||||
**团队合作**
|
||||
|
||||
最后,当新功能成功实现,你的团队把它移动到“就绪”列表。
|
||||
|
||||
然后,就是你的技术文档编写团队的时间了,他们为新功能书写文档。一旦某个人完成书写,他添加标签“文档”。同时,你的市场团队开始启动并推荐该功能,所以某个人添加“市场”。当技术文档书写完毕,书写者删除标签“文档”。一旦市场团队完成他们的工作,他们将工单从“市场”移动到“生产”。
|
||||
|
||||
**部署到生产环境**
|
||||
|
||||
最后,你将会成为那个为新版本负责的人,合并“合并请求”并且将新功能部署到**生产**环境,然后工单的状态转变为**关闭**。
|
||||
|
||||
**反馈**
|
||||
|
||||
通过[周期分析][55],你和你的团队节省了如何从点子到产品的时间,并且开启另一个工单,来讨论如何将这个过程进一步提升。
|
||||
|
||||
### 总结
|
||||
|
||||
GitLab 工作流通过一个单一平台帮助你的团队加速从点子到生产的改变:
|
||||
|
||||
* 它是**有效的**:因为你可以获取你想要的结果
|
||||
* 它是**高效的**:因为你可以用最小的努力和成本达到最大的生产力
|
||||
* 它是**高产的**:因为你可以非常有效的计划和行动
|
||||
* 它是**简单的**:因为你不需要安装不同的工具去完成你的目的,仅仅需要 GitLab
|
||||
* 它是**快速的**:因为你不需要在多个平台间跳转来完成你的工作
|
||||
|
||||
每一个月的 22 号都会有一个新的 GitLab 版本释出,让它在集成软件开发解决方案上变得越来越好,让团队可以在一个单一的、唯一的界面下一起工作。
|
||||
|
||||
在 GitLab,每个人都可以奉献!多亏了我们强大的社区,我们获得了我们想要的。并且多亏了他们,我们才能一直为你提供更好的产品。
|
||||
|
||||
还有什么问题和反馈吗?请留言,或者在推特上@我们[@GitLab][54]!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/
|
||||
|
||||
作者:[Marcia Ramos][a]
|
||||
译者:[svtter](https://github.com/svtter)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://twitter.com/XMDRamos
|
||||
[1]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#search-for-your-issues-and-mrs
[2]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#add-to-do
[3]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#subscribe
[4]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[5]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-snippets
[6]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#moving-issues
[7]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#for-both-issues-and-mrs
[8]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[9]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-and-mr-templates
[10]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#use-cases
[11]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#koding
[12]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#review
[13]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#wip-mr
[14]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#merge-request
[15]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#first-commit
[16]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[17]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#issue-weight
[18]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#labels
[19]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#assignee
[20]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#due-dates
[21]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#confidential-issues
[22]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#stages-of-software-development
[23]:https://docs.gitlab.com/ee/user/markdown.html#task-lists
[24]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#blockquote-fence-syntax
[25]:https://gitlab.com/dashboard/snippets
[26]:https://docs.gitlab.com/ce/web_hooks/web_hooks.html
[27]:https://about.gitlab.com/2016/08/19/applying-gitlab-labels-automatically/
[28]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[29]:https://about.gitlab.com/2016/05/23/gitlab-container-registry/
[30]:https://about.gitlab.com/2016/08/11/building-an-elixir-release-into-docker-image-using-gitlab-ci-part-1/
[31]:https://about.gitlab.com/2016/03/10/setting-up-gitlab-ci-for-ios-projects/
[32]:https://docs.gitlab.com/ce/ci/yaml/README.html#environment
[33]:https://about.gitlab.com/2016/08/26/ci-deployment-and-environments/
[34]:https://pages.gitlab.io/
[35]:https://about.gitlab.com/2016/06/17/ssg-overview-gitlab-pages-part-3-examples-ci/
[36]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[37]:https://about.gitlab.com/solutions/cycle-analytics/
[38]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[39]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[40]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-code-review
[41]:https://about.gitlab.com/gitlab-ci/
[42]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[43]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[44]:https://about.gitlab.com/2015/08/18/gitlab-loves-mattermost/
[45]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#conclusions
[46]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow-use-case-scenario
[47]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[48]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#enhance
[49]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[50]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#build-test-and-deploy
[51]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#code-review-with-gitlab
[52]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-tracker
[53]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-workflow
[54]:https://twitter.com/gitlab
[55]:https://about.gitlab.com/solutions/cycle-analytics/
[56]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#move-issues-to-other-projects
[57]:https://about.gitlab.com/2016/06/22/gitlab-8-9-released/#manually-add-todos
[58]:https://about.gitlab.com/2016/07/22/gitlab-8-10-released/#bulk-subscribe-to-issues
[59]:https://about.gitlab.com/2016/03/22/gitlab-8-6-released/#subscribe-to-a-label
[60]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#milestones
[61]:https://gitlab.com/gitlab-org/gitlab-ce/issues/new
[62]:https://docs.gitlab.com/ee/user/markdown.html
[63]:https://docs.gitlab.com/ce/user/project/description_templates.html
[64]:https://about.gitlab.com/2016/09/21/cycle-analytics-feature-highlight/
[65]:https://about.gitlab.com/solutions/cycle-analytics/
[66]:https://docs.gitlab.com/ee/ci/examples/README.html
[67]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#koding-integration
[68]:https://about.gitlab.com/2016/08/05/continuous-integration-delivery-and-deployment-with-gitlab/
[69]:https://about.gitlab.com/gitlab-ci/
[70]:https://about.gitlab.com/2016/08/22/gitlab-8-11-released/#merge-conflict-resolution
[71]:https://docs.gitlab.com/ce/user/project/slash_commands.html
[72]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#wip-slash-command
[73]:https://gitlab.com/gitlab-examples/review-apps-nginx/
[74]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#ability-to-stop-review-apps
[75]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[76]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[77]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[78]:https://docs.gitlab.com/ce/administration/issue_closing_pattern.html
[79]:https://docs.gitlab.com/ee/user/markdown.html
[80]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#feedback
[81]:https://about.gitlab.com/2014/09/29/gitlab-flow/
[82]:https://about.gitlab.com/free-trial/
[83]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#multiple-issue-boards-ee
[84]:https://about.gitlab.com/solutions/issueboard
[85]:https://docs.gitlab.com/ee/workflow/issue_weight.html
[86]:https://about.gitlab.com/2016/10/22/gitlab-8-13-released/#group-labels
[87]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#gitlab-issue-board
[88]:https://docs.gitlab.com/ee/user/project/labels.html#prioritize-labels
[89]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#milestones
[90]:https://about.gitlab.com/2016/08/05/feature-highlight-set-dates-for-issues/#due-dates-for-issues
[91]:https://docs.gitlab.com/ce/user/permissions.html
[92]:https://about.gitlab.com/2016/03/31/feature-highlihght-confidential-issues/
[93]:https://about.gitlab.com/2016/10/25/gitlab-workflow-an-overview/#pro-tips
[94]:https://docs.gitlab.com/ee/user/markdown.html
[95]:https://about.gitlab.com/2016/03/03/start-with-an-issue/
[96]:https://about.gitlab.com/2016/09/13/gitlab-master-plan/
[97]:https://about.gitlab.com/2014/09/29/gitlab-flow/
@ -0,0 +1,138 @@
向 Linus Torvalds 学习让编出的代码具有 “good taste”
========

在[最近关于 Linus Torvalds 的一次采访][1]中,这位 Linux 的创始人在采访进行到大约 14:20 的时候,提及了关于代码的 “good taste”。“good taste”?采访者请他展示更多的细节,于是,Linus Torvalds 展示了一张提前准备好的幻灯片。

他展示的是一个代码片段。但这段代码并没有 “good taste”。这是一个具有 “poor taste” 的代码片段,把它作为例子,是为了提供一些初步的对比。

![Poor Taste Code Example](https://d262ilb51hltx0.cloudfront.net/max/1200/1*X2VgEA_IkLvsCS-X4iPY7g.png)

这是一个用 C 写的函数,作用是删除链表中的一个对象,它包含有 10 行代码。

他把注意力集中在底部的 `if` 语句上,他批判的正是这个 `if` 语句。

我暂停了这段视频,开始研究幻灯片。我发现我最近写过和这很像的代码。Linus 不就是在说我的代码品味很差吗?我放下自傲,继续观看视频。

随后,Linus 向观众解释,正如我们所知道的,当从链表中删除一个对象时,需要考虑两种可能的情况。当所需删除的对象位于链表的表头时,删除过程和位于链表中间的情况不同。这就是这个 `if` 语句具有 “poor taste” 的原因。

但既然他承认考虑这两种不同的情况是必要的,那为什么像上面那样写就如此糟糕呢?
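幻灯片里的代码只以图片形式出现,为了便于对照,下面给出一段按采访内容重构的示意代码。需要说明的是,这里的 `list_entry` 类型、函数签名和变量名都是为了演示而假设的,并非幻灯片原文:

```
#include <stddef.h>

/* 假设的单链表节点类型,仅为演示 */
typedef struct list_entry {
    struct list_entry *next;
} list_entry;

/* “poor taste” 版本:删除节点时需要单独处理表头 */
void remove_list_entry(list_entry **head, list_entry *entry)
{
    list_entry *prev = NULL;
    list_entry *walk = *head;

    /* 遍历链表,找到目标节点,同时记录它的前驱 */
    while (walk != entry) {
        prev = walk;
        walk = walk->next;
    }

    /* 受到批判的 if 语句:表头是一个特殊情况 */
    if (!prev)
        *head = entry->next;
    else
        prev->next = entry->next;
}
```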
接下来,他又向观众展示了第二张幻灯片。这张幻灯片展示的是实现同样功能的一个函数,但这段代码具有 “good taste”。

![Good Taste Code Example](https://d262ilb51hltx0.cloudfront.net/max/1200/1*GHFLYFB3vDQeakMyUGPglw.png)

原先的 10 行代码现在减少为 4 行。

但代码的行数并不重要,关键是那个 `if` 语句,它不见了,因为不再需要了。代码已经被重构,所以,不用管对象在链表中的位置,都可以运用同样的操作把它删除。

Linus 解释了一下新的代码,它消除了边缘情况,就是这样。然后采访转入了下一个话题。
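同样地,下面是按第二张幻灯片的思路重构的示意版本(沿用上面假设的 `list_entry` 类型和函数签名):它用一个指向指针的间接指针,把“表头”和“表中”两种情况统一成了同一条语句,`if` 语句自然就消失了:

```
/* “good taste” 版本:用二级指针统一两种情况 */
void remove_list_entry(list_entry **head, list_entry *entry)
{
    /* indirect 指向“指向当前节点的那个指针”:
       最初是 head 本身,之后是各个前驱节点的 next 字段 */
    list_entry **indirect = head;

    /* 顺着链表前进,直到 *indirect 正是要删除的节点 */
    while (*indirect != entry)
        indirect = &(*indirect)->next;

    /* 无论 entry 位于表头还是表中,这一句都成立 */
    *indirect = entry->next;
}
```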
我琢磨了一会这段代码。Linus 是对的,的确,第二个函数更好。如果这是一个判断代码具有 “good taste” 还是 “bad taste” 的测试,那么很遗憾,我失败了。我从未想过可以去除那个条件语句。我写过不止一次这样的 `if` 语句,因为我经常使用链表。

这个例子的意义,不仅仅是教给了我们一个从链表中删除对象的更好方法,而是启发了我们去思考自己写的代码。你通过程序实现的一个简单算法,可能还有改进的空间,只是你从来没有考虑过。

带着这种思路,我回去审查了最近正在做的项目的代码。也许是一个巧合,它刚好也是用 C 写的。

我尽最大的努力去审查代码。“good taste” 的一个基本要求是消除边缘情况,而我们通常会使用条件语句来处理边缘情况。你使用的条件语句越少,你的代码就会有越好的 “taste”。

下面,我将分享一个在审查之后得到改进的具体例子。

这是一个用来初始化网格(grid)边缘的算法,网格以一个二维数组表示:`grid[行][列]`。

再次说明,这段代码的目的只是用来初始化位于 grid 边缘的点的值,所以,只需要给最上方一行、最下方一行、最左边一列以及最右边一列赋值即可。

为了完成这件事,我最初通过循环遍历 grid 中的每一个点,然后使用条件语句来测试该点是否位于边缘。代码看起来就是下面这样:

```
for (r = 0; r < GRID_SIZE; ++r) {
    for (c = 0; c < GRID_SIZE; ++c) {

        // Top Edge
        if (r == 0)
            grid[r][c] = 0;

        // Left Edge
        if (c == 0)
            grid[r][c] = 0;

        // Right Edge
        if (c == GRID_SIZE - 1)
            grid[r][c] = 0;

        // Bottom Edge
        if (r == GRID_SIZE - 1)
            grid[r][c] = 0;
    }
}
```

虽然这样做是对的,但回过头来看,这个结构存在一些问题。

1. 复杂性:在双层循环里面使用 4 个条件语句似乎过于复杂。
2. 高效性:假设 `GRID_SIZE` 的值为 64,那么这个循环需要执行 4096 次,但需要进行赋值的只有位于边缘的 256 个点。

用 Linus 的眼光来看,这段代码没有 “good taste”。
所以,我对上面的问题进行了一番思考。经过思考,我把复杂度减少为包含四个条件语句的单层 `for` 循环。虽然只是稍微改进了一下复杂性,但在性能上有了极大的提高,因为它只沿着边缘的点循环了 256 次。

```
for (i = 0; i < GRID_SIZE * 4; ++i) {

    // Top Edge
    if (i < GRID_SIZE)
        grid[0][i] = 0;

    // Right Edge
    else if (i < GRID_SIZE * 2)
        grid[i - GRID_SIZE][GRID_SIZE - 1] = 0;

    // Left Edge
    else if (i < GRID_SIZE * 3)
        grid[i - (GRID_SIZE * 2)][0] = 0;

    // Bottom Edge
    else
        grid[GRID_SIZE - 1][i - (GRID_SIZE * 3)] = 0;
}
```

的确是一个很大的提高。但是它看起来很丑,并不是易于阅读理解的代码。基于这一点,我并不满意。

我继续思考,是否可以进一步改进呢?事实上,答案是 YES!最后,我想出了一个非常简单且优雅的算法,老实说,我不敢相信我竟然花了那么长时间才发现它。

下面是这段代码的最后版本。它只有一层 `for` 循环并且没有条件语句。另外,循环只执行 64 次迭代,极大地改善了复杂度和效率。

```
for (i = 0; i < GRID_SIZE; ++i) {

    // Top Edge
    grid[0][i] = 0;

    // Bottom Edge
    grid[GRID_SIZE - 1][i] = 0;

    // Left Edge
    grid[i][0] = 0;

    // Right Edge
    grid[i][GRID_SIZE - 1] = 0;
}
```

这段代码在每次循环迭代中给四条边缘上各一个点赋初值。它并不复杂,而且非常高效,易于阅读。和原始的版本,甚至和第二个版本相比,都有天壤之别。

至此,我已经非常满意了。

那么,我是一个有 “good taste” 的开发者么?

我觉得我是,但这并不是因为我上面提供的这个例子,也不是因为我在这篇文章中没有提到的其它代码……而是因为具有 “good taste” 的编码工作远非一段代码所能代表。Linus 自己也说,他所提供的这段代码不足以完全表达他的观点。

我明白 Linus 的意思,也明白那些具有 “good taste” 的程序员虽然各有不同,但他们都是愿意花时间重构既有代码的人。他们清晰地界定所开发组件的边界,以及这些组件如何与其它组件交互。他们试着确保每一样东西都完美、优雅。

其结果就是类似于 Linus 的 “good taste” 的例子,或者像我的例子一样,不过是千千万万个 “good taste” 中的一个。

你会让你的下个项目也具有这种 “good taste” 吗?

--------------------------------------------------------------------------------

via: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a

作者:[Brian Barto][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@bartobri?source=post_header_lockup
[1]:https://www.ted.com/talks/linus_torvalds_the_mind_behind_linux
@ -0,0 +1,165 @@
Linux 容器能否弥补 IoT 的安全短板?
========

![](http://hackerboards.com/files/internet_of_things_wikimedia1-thm.jpg)

> 在这个物联网系列的最后一篇文章中,Canonical 和 Resin.io 提出以 Linux 容器技术作为解决方案,向物联网的安全性和互操作性问题发起挑战。

![](http://hackerboards.com/files/samsung_artik710-thm.jpg)

*Artik 7*

尽管受到日益增长的安全威胁,但对物联网(IoT)的炒作没有显示减弱的迹象。为了刷存在感,公司们正忙于重新规划它们的物联网方面的路线图。物联网大潮迅猛异常,比移动互联网革命渗透得更加深入和广泛。IoT 像黑洞一样吞噬一切,包括智能手机,它通常是我们通向物联网世界的窗口,有时也作为我们的汇聚点或终端。

新的针对物联网的处理器和嵌入式主板继续重塑其技术版图。自从 9 月份推出[面向物联网的 Linux 和开源硬件][5]系列文章之后,我们看到了面向物联网网关的 “Apollo Lake” SoC 芯片 [Intel Atom E3900][6] 以及[三星新的 Artik 模块][7],包括用于网关并由 Linux 驱动的 64 位 Artik 7 COM 及自带 RTOS 的 Cortex-M4 Artik。ARM 为具有 ARMv8-M 和 TrustZone 安全性的 IoT 终端发布了 [Cortex-M23 和 Cortex-M33][8] 芯片。

讲道理,安全是这些产品的卖点。最近攻击 Dyn 服务并在一天内瘫痪了美国大部分互联网的 Mirai 僵尸网络,将基于 Linux 的物联网推到了台前,当然这种方式似乎不太体面。就像 IoT 设备可以成为 DDoS 的帮凶一样,设备及其所有者同样可能直接遭受恶意攻击。

![](http://hackerboards.com/files/arm_cortexm33m23-thm.jpg)

*Cortex-M33 和 -M23*

Dyn 攻击更加证明了这种观点,即物联网将更加蓬勃地在受控制和受保护的工业环境中发展,而不是家用环境中。这不是因为没有消费级[物联网安全技术][9],但除非产品设计之初就以安全为目标,否则如我们的[智能家居集线器系列][10]中的许多解决方案一样,后期再考虑安全就会增加成本和复杂性。

在物联网系列的最后这个未来展望的部分,我们将探讨两种基于 Linux 的、面向 Docker 的容器技术,它们被提出作为物联网安全解决方案。容器还可以帮助解决我们在[物联网框架][11]中探讨的开发复杂性和互操作性障碍的问题。

我们与 Canonical 的 Ubuntu 客户平台工程副总裁 Oliver Ries 讨论了 Ubuntu Core 和适用于 Docker 的容器式 Snaps 包管理技术。我们还就新的基于 Docker 的物联网方案 ResinOS 采访了 Resin.io 首席执行官和联合创始人 Alexandros Marinos。

### Ubuntu Core Snaps

Canonical 面向物联网的 [Snappy Ubuntu Core][12] 版本的 Ubuntu,是围绕一个类似容器的快照(snap)包管理机制而构建的,并提供应用商店支持。snap 技术最近[自行发布了][13]用于其他 Linux 发行版的版本。去年 11 月 3 日,Canonical 发布了 [Ubuntu Core 16][14],该版本改进了白标应用商店和更新控制服务。

![](http://hackerboards.com/files/canonical_ubuntucore16_diagram.jpg)

*传统 Ubuntu(左)与 Ubuntu Core 16 的架构对比*

快照机制提供自动更新,并有助于阻止未经授权的更新。使用事务系统管理,快照可确保更新要么按预期部署,要么完全不部署。在 Ubuntu Core 中,使用 AppArmor 进一步加强了安全性,并且所有应用程序文件都是只读的且保存在隔离的孤岛中。

![](http://hackerboards.com/files/limesdr-thm.jpg)

*LimeSDR*

Ubuntu Core 是我们最近展开的[开源物联网操作系统调查][16]的一部分,现在运行于 Gumstix 主板、Erle 机器人无人机、Dell Edge 网关、[Nextcloud Box][17]、LimeSDR、Mycroft 家庭集线器、英特尔的 Joule 和符合 Linaro 的 96Boards 规范的 SBC(单板计算机)上。Canonical 公司还与 Linaro 物联网和嵌入式(LITE)部门在其 [96Boards 物联网版(IE)][18] 上达成合作。最初,96Boards IE 专注于 Zephyr 驱动的 Cortex-M4 板卡,如 Seeed 的 [BLE Carbon][19],不过它将扩展到可以运行 Ubuntu Core 的网关板卡上。

“Ubuntu Core 和 snap 具有从边缘到网关再到云的相关性,”Canonical 的 Ries 说。“能够在任何主要发行版(包括 Ubuntu Server 和 Ubuntu for Cloud)上运行快照包,使我们能够提供一致的体验。snap 可以使用事务更新以防故障的方式升级,可用于安全性更新、错误修复或新功能的持续更新,这在物联网环境中非常重要。”
![](http://hackerboards.com/files/nextcloud_box3-thm.jpg)

*Nextcloud 盒子*

安全性和可靠性是关注的重点,Ries 说。“snap 应用可以完全独立于彼此和操作系统而运行,使得两个应用程序可以安全地在单个网关上运行,”他说。“snap 是只读的和经过认证的,可以保证代码的完整性。”

Ries 还说这种技术减少了开发时间。“snap 软件包允许开发人员向支持它的任何平台提供相同的二进制包,从而降低开发和测试成本,减少部署时间,提高更新速度。”“使用 snap 软件包,开发人员可以完全控制开发生命周期,并可以立即更新。snap 包提供了所有必需的依赖项,因此开发人员可以选择定制他们使用的组件。”

### ResinOS:为 IoT 而生的 Docker

Resin.io 公司,与其商用的 IoT 框架同名,最近剥离了该框架的基于 Yocto Linux 的 [ResinOS 2.0][20],ResinOS 2.0 将作为一个独立的开源项目运营。Ubuntu Core 在 snap 包中运行 Docker 容器引擎,而 ResinOS 则在主机上运行 Docker。极致简约的 ResinOS 屏蔽了使用 Yocto 代码的复杂性,使开发人员能够快速部署 Docker 容器。

![](http://hackerboards.com/files/resinio_resinos_arch.jpg)

*ResinOS 2.0 架构*

与基于 Linux 的 CoreOS 一样,ResinOS 集成了 systemd 控制服务和网络协议栈,可通过异构网络安全地部署更新的应用程序。但是,它是为在资源受限的设备(如 ARM 开发板)上运行而设计的;与之相反,CoreOS 和其他基于 Docker 的操作系统(例如基于 Red Hat 的 Project Atomic)目前仅能运行在 x86 上,并且更适合资源丰富的服务器平台。ResinOS 可以在 20 种 Linux 设备上运行,而且数量还在不断增长,包括 Raspberry Pi、BeagleBone 和 Odroid-C1 等。

“我们认为 Linux 容器对嵌入式系统比对云更加重要,”Resin.io 的 Marinos 说。“在云中,容器代表了对已有流程的优化,但在嵌入式领域,它们代表了姗姗来迟的通用虚拟化。”

![](http://hackerboards.com/files/beaglebone-hand-thm.jpg)

*BeagleBone Black*

当应用于物联网时,完整的企业级虚拟机存在性能缺陷,并且在直接访问硬件方面受到限制,Marinos 说。像 OSGi 和 Android 的 Dalvik 这样的移动设备虚拟机可以用于 IoT,但是它们依赖 Java 并有其他限制。

对于企业开发人员来说,使用 Docker 似乎很自然,但是你如何说服嵌入式黑客转向全新的范式呢?Marinos 解释说:“ResinOS 不是把云技术的实践经验照单全收,而是针对嵌入式进行了优化。”此外,他说,容器比典型的物联网技术更能包容故障。“如果有软件缺陷,主机操作系统可以继续正常工作,甚至保持连接。要恢复,你可以重新启动容器或推送更新。更新设备而不重新启动它的能力进一步降低了故障引发问题的几率。”

据 Marinos 所说,其他好处源自与云技术的一致性,例如拥有更广泛的开发人员。容器提供了“跨数据中心和边缘的统一范式,以及一种方便地将技术、工作流、基础设施,甚至应用程序转移到边缘(终端)的方式。”

Marinos 说,容器中的固有安全性优势正在被其他技术增强。“随着 Docker 社区推动实现镜像签名和鉴证,这些自然会转移并应用到 ResinOS,”他说。“当 Linux 内核被强化以提高容器安全性,或者获得更好地管理容器所消耗资源的能力时,也会产生类似的好处。”

容器也适合开源 IoT 框架,Marinos 说。“Linux 容器很容易与几乎各种协议、应用程序、语言和库结合使用,”Marinos 说。“Resin.io 参加了 AllSeen 联盟,我们与使用 IoTivity 和 Thread 的伙伴一起合作。”

### IoT 的未来:智能网关与智能终端
Marinos 和 Canonical 的 Ries 对未来物联网的几个发展趋势有一致的看法。首先,物联网的最初概念(其中基于 MCU 的终端直接与云进行通信以进行处理)正在迅速被雾计算架构所取代。这需要更智能的网关,它要做的也远不只是在 ZigBee 和 WiFi 之间聚合和转换数据。

其次,网关和智能边缘设备越来越多地运行多个应用程序。第三,许多这类设备将提供板载分析,这些在最新的[智能家居集线器][22]上都有体现。最后,富媒体将很快成为物联网组合的一部分。

![](http://hackerboards.com/files/eurotech_reliagate2026.jpg)

*最新设备网关:Eurotech 的 [ReliaGate 20-26][1]*

![](http://hackerboards.com/files/advantech_ubc221.jpg)

*最新设备网关:Advantech 的 [UBC-221][2]*

“智能网关正在接管最初为云服务设计的许多处理和控制功能,”Marinos 说。“因此,我们看到对容器化的推动力在增加,可以在 IoT 设备中使用类似云的工作流程来部署与功能和安全相关的优化。去中心化是由移动数据紧缩、不断发展的法律框架和各种物理限制等因素驱动的。”

Ubuntu Core 等平台正在使“可用于网关的软件爆炸式增长”,Canonical 的 Ries 说。“在单个设备上运行多个应用程序的能力吸引了众多单一功能设备的用户,以及现在可以产生持续的软件收入的设备所有者。”

![](http://hackerboards.com/files/myomega_mynxg-sm.jpg)

*两种 IoT 网关:[MyOmega MYNXG IC2 Controller][3]*

![](http://hackerboards.com/files/technexion_ls1021aiot_front-sm.jpg)

*两种 IoT 网关:TechNexion 的 [LS1021A-IoT Gateway][4]*

不仅是网关,终端也变得更聪明。“阅读大量的物联网新闻报道,你得到的印象是所有终端都运行在微控制器上,”Marinos 说。“但是大量直接执行任务、而不只是作为操作中介(数据转发)的 Linux 终端让我们感到惊讶,例如数字标牌、无人机和工业机械等。我们称之为影子 IoT。”

Canonical 的 Ries 同意,对极简技术的专注使人们忽视了新兴的物联网领域。“在一个发展速度与物联网一样快的行业中,轻量化的概念很快就会过时,”Ries 说。“今天的高级消费硬件可以为终端持续供电数月。”

虽然大多数物联网设备将保持轻量和“无头”(一种配置方式,比如物联网设备缺少显示器、键盘等),它们装备有如加速度计和温度传感器这样的传感器,并通过低速率的数据流通信,但是许多较新的物联网应用已经开始使用富媒体。“媒体输入/输出只是另一种类型的外设,”Marinos 说。“总是存在多个容器竞争有限资源的问题,但它与传感器或蓝牙竞争天线资源没有太大区别。”

Ries 看到了工业和家庭网关中“提高边缘智能”的趋势。“我们看到人工智能、机器学习、计算机视觉和上下文感知的大幅增长,”Ries 说。“如果相同的软件可以在边缘设备上运行,而又没有网络延迟、带宽和计算成本,为什么还要在云中运行面部检测软件呢?”

当我们在这个物联网系列的[开篇故事][27]中探讨时就发现,物联网存在与安全相关的问题,例如隐私的丧失,以及生活在监视文化中的权衡;还有一些问题,如把个人决策交给可能被他人操控的 AI 来裁定。这些都不会被容器、快照或任何其他技术完全解决。

如果 AWS Alexa 可以处理生活琐事,而我们专注于要事,也许我们会更快乐。或许有一个方法来平衡隐私和效用,现在,我们仍在探索,如此甚好。
--------------------------------------------------------------------------------

via: http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/

作者:[Eric Brown][a]
译者:[firstadream](https://github.com/firstadream)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
[1]:http://hackerboards.com/atom-based-gateway-taps-new-open-source-iot-cloud-platform/
[2]:http://hackerboards.com/compact-iot-gateway-runs-yocto-linux-on-quark/
[3]:http://hackerboards.com/wireless-crazed-customizable-iot-gateway-uses-arm-or-x86-coms/
[4]:http://hackerboards.com/iot-gateway-runs-linux-on-qoriq-accepts-arduino-shields/
[5]:http://hackerboards.com/linux-and-open-source-hardware-for-building-iot-devices/
[6]:http://hackerboards.com/intel-launches-14nm-atom-e3900-and-spins-an-automotive-version/
[7]:http://hackerboards.com/samsung-adds-first-64-bit-and-cortex-m4-based-artik-modules/
[8]:http://hackerboards.com/new-cortex-m-chips-add-armv8-and-trustzone/
[9]:http://hackerboards.com/exploring-security-challenges-in-linux-based-iot-devices/
[10]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[11]:http://hackerboards.com/open-source-projects-for-the-internet-of-things-from-a-to-z/
[12]:http://hackerboards.com/lightweight-snappy-ubuntu-core-os-targets-iot/
[13]:http://hackerboards.com/canonical-pushes-snap-as-a-universal-linux-package-format/
[14]:http://hackerboards.com/ubuntu-core-16-gets-smaller-goes-all-snaps/
[15]:http://hackerboards.com/files/canonical_ubuntucore16_diagram.jpg
[16]:http://hackerboards.com/open-source-oses-for-the-internet-of-things/
[17]:http://hackerboards.com/private-cloud-server-and-iot-gateway-runs-ubuntu-snappy-on-rpi/
[18]:http://hackerboards.com/linaro-beams-lite-at-internet-of-things-devices/
[19]:http://hackerboards.com/96boards-goes-cortex-m4-with-iot-edition-and-carbon-sbc/
[20]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
[21]:http://hackerboards.com/files/resinio_resinos_arch.jpg
[22]:http://hackerboards.com/linux-based-smart-home-hubs-advance-into-ai/
[23]:http://hackerboards.com/files/eurotech_reliagate2026.jpg
[24]:http://hackerboards.com/files/advantech_ubc221.jpg
[25]:http://hackerboards.com/files/myomega_mynxg.jpg
[26]:http://hackerboards.com/files/technexion_ls1021aiot_front.jpg
[27]:http://hackerboards.com/an-open-source-perspective-on-the-internet-of-things-part-1/
[28]:http://hackerboards.com/can-linux-containers-save-iot-from-a-security-meltdown/
@ -1,8 +1,7 @@
如何在 Ubuntu 16.04 中用 Apache 部署 Jenkins 自动化服务器
============================================================

Jenkins 是从 Hudson 项目衍生出来的自动化服务器。Jenkins 是一个基于服务器的应用程序,运行在 Java servlet 容器中,它支持包括 Git、SVN 以及 Mercurial 在内的多种 SCM(Source Control Management,源码控制工具)。Jenkins 提供了上百种插件帮助你的项目实现自动化。Jenkins 由 Kohsuke Kawaguchi 开发,在 2011 年使用 MIT 协议发布了第一个发行版,它是个自由软件。

在这篇指南中,我会向你介绍如何在 Ubuntu 16.04 中安装最新版本的 Jenkins。我们会用自己的域名运行 Jenkins,在 Apache web 服务器中安装和配置 Jenkins,而且支持反向代理。
@ -17,22 +16,28 @@ Jenkins 基于 Java,因此我们需要在服务器上安装 Java OpenJDK 7。
默认情况下,Ubuntu 16.04 没有安装用于管理 PPA 仓库的 python-software-properties 软件包,因此我们首先需要安装这个软件。使用 apt 命令安装 python-software-properties。

```
apt-get install python-software-properties
```

下一步,添加 Java PPA 仓库到服务器中。

```
add-apt-repository ppa:openjdk-r/ppa
```

输入回车键。用 apt 命令更新 Ubuntu 仓库并安装 Java OpenJDK。

```
apt-get update
apt-get install openjdk-7-jdk
```

输入下面的命令验证安装:

```
java -version
```

你会看到安装到服务器上的 Java 版本。
@ -46,21 +51,29 @@ Jenkins 给软件安装包提供了一个 Ubuntu 仓库,我们会从这个仓
用下面的命令添加 Jenkins 密钥和仓库到系统中。

```
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo 'deb https://pkg.jenkins.io/debian-stable binary/' | tee -a /etc/apt/sources.list
```

更新仓库并安装 Jenkins。

```
apt-get update
apt-get install jenkins
```

安装完成后,用下面的命令启动 Jenkins。

```
systemctl start jenkins
```

通过检查 Jenkins 默认使用的端口(端口 8080)验证 Jenkins 正在运行。我会像下面这样用 `netstat` 命令检测:

```
netstat -plntu
```

Jenkins 已经安装好了并运行在 8080 端口。
@ -70,23 +83,29 @@ Jenkins 已经安装好了并运行在 8080 端口。
### 第三步 - 为 Jenkins 安装和配置 Apache 作为反向代理

在这篇指南中,我们会在一个 Apache web 服务器中运行 Jenkins,我们会为 Jenkins 配置 apache 作为反向代理。首先我会安装 apache 并启用一些需要的模块,然后我会为 Jenkins 用域名 my.jenkins.id 创建虚拟主机文件。请在这里使用你自己的域名,并在所有配置文件中出现的地方替换。

从 Ubuntu 仓库安装 apache2 web 服务器。

```
apt-get install apache2
```

安装完成后,启用 proxy 和 proxy_http 模块,以便将 apache 配置为 Jenkins 的前端服务器/反向代理。

```
a2enmod proxy
a2enmod proxy_http
```

下一步,在 `sites-available` 目录创建新的虚拟主机文件。

```
cd /etc/apache2/sites-available/
vim jenkins.conf
```

粘贴下面的虚拟主机配置。

```
<Virtualhost *:80>
@ -106,18 +125,24 @@ Jenkins 已经安装好了并运行在 8080 端口。
</Virtualhost>
```

保存文件。然后用 `a2ensite` 命令激活 Jenkins 虚拟主机。

```
a2ensite jenkins
```

重启 Apache 和 Jenkins。

```
systemctl restart apache2
systemctl restart jenkins
```

检查 Jenkins 和 Apache 正在使用 80 和 8080 端口。

```
netstat -plntu
```

[
![检查 Apache 和 Jenkins 是否在运行](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/3.png)
@ -127,29 +152,30 @@ Jenkins 已经安装好了并运行在 8080 端口。
Jenkins 用域名 'my.jenkins.id' 运行。打开你的 web 浏览器然后输入 URL。你会看到要求你输入初始管理员密码的页面。Jenkins 已经生成了一个密码,因此我们只需要显示并把结果复制到密码框。

用 `cat` 命令显示 Jenkins 初始管理员密码。

```
cat /var/lib/jenkins/secrets/initialAdminPassword
a1789d1561bf413c938122c599cf65c9
```

[
![获取 Jenkins 管理员密码](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/4.png)
][12]

将结果粘贴到密码框然后点击 Continue。

[
![安装和配置 Jenkins](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/5.png)
][13]

现在为了后面能更好地使用,我们需要在 Jenkins 中安装一些插件。选择 Install Suggested Plugin,点击它。

[
![安装 Jenkins 插件](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/6.png)
][14]

Jenkins 插件安装过程:

[
![Jenkins 安装完插件](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/7.png)
@ -199,27 +225,29 @@ Jenkins 在 ‘**Access Control**’ 部分提供了多种认证方法。为了
![在 Jenkins 中创建新的任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/13.png)
][21]

输入任务的名称,在这里我输入 ‘Checking System’,选择 Freestyle Project 然后点击 OK。

[
![配置 Jenkins 任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/14.png)
][22]

进入 Build 标签页。在 Add build step,选择选项 Execute shell。

在输入框输入下面的命令。

```
top -b -n 1 | head -n 5
```

点击 Save。

[
![启动 Jenkins 任务](https://www.howtoforge.com/images/how-to-install-jenkins-with-apache-on-ubuntu-16-04/15.png)
][23]

现在你是在任务 ‘Project checking system’ 的任务页。点击 Build Now 执行任务 ‘checking system’。

任务执行完成后,你会看到 Build History,点击第一个任务查看结果。

下面是 Jenkins 任务执行的结果。
@ -233,9 +261,9 @@ Jenkins 在 ‘**Access Control**’ 部分提供了多种认证方法。为了
via: https://www.howtoforge.com/tutorial/how-to-install-jenkins-with-apache-on-ubuntu-16-04/

作者:[Muhammad Arul][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,68 +1,67 @@
该死,原生移动应用的开发成本太高了!
============================================================

> 一个有价值的命题

我们遇到了一个临界点。除去少数几个特别的用例之外,使用原生框架和原生应用开发团队构建、维护移动应用再也没有意义了。

![](https://cdn-images-1.medium.com/max/1000/1*4nyeufIIgw9B7nMSr5Sybg.jpeg)

*在美国,雇佣 [iOS、Android][1],[JavaScript][2] 开发人员的平均花费*

在过去的几年,原生移动应用开发的费用螺旋式上升,无法控制。对没有大量资金的新创业者来说,创建原生应用、MVP 设计架构和原型的难度大大增加。现有的公司需要抓住人才,以便在现有应用上进行迭代开发或者构建一个新的应用。要尽一切努力才能留住最好的人才,与 [世界各地的公司][9] 拼尽全力[争个][6][高][7][下][8]。

![](https://cdn-images-1.medium.com/max/800/1*imThyh2e45RW1np0xXIE4Q.png)

*2015 年初,原生方式和混合方式开发 MVP 设计架构的费用[对比][3]*

### 这一切对于我们意味着什么?

如果你的公司很大或者有足够多的现金,旧思维是只要你在原生应用开发方面投入足够多的资金,就能高枕无忧。但事实不再如此。

Facebook 是你最不会想到的在人才战中失败的公司(因为他们没有失败),它也遇到了原生应用方面金钱无法解决的问题。他们的移动应用庞大而又复杂,[他们发现编译它竟然需要 15 分钟][10]。这意味着哪怕是极小的用户界面改动,比如移动几个点,测试起来都要花费几个小时(甚至几天)。

除了冗长的编译时间,应用的每一个小改动在测试时都需要在两个完全不同的环境(iOS 和 Android)实施,开发团队需要使用两种语言和框架工作,这趟水更浑了。

Facebook 对这个问题的解决方案是 [React Native][11]。

### 能不能抛弃移动应用,仅面向 Web 呢?

[一些人认为移动应用的末日已到][12]。尽管我很欣赏、尊重 [Eric Elliott][13] 和他的工作,但我们还是通过考察一些近期的数据,进而讨论一下某些相反的观点:

![](https://cdn-images-1.medium.com/max/800/1*s0O7X2PgIqP5_zselxQdqQ.png)

*人们在移动应用上花费的[时间][4](2016 年 4 月)*

> 人们使用 APP 的时间占使用手机总时长的 90%

目前世界上有 25 亿人在使用移动手机。[这个数字增长到 50 亿的速度会比我们想象的还要快。][14] 在正常情况下,丢掉 45 亿人的生意,或者抛弃有 45 亿人使用的应用程序是绝对荒唐且行不通的。

老问题是原生移动应用的开发成本对大多数公司来说太高了。然而,面向 web 的开发成本也在增加。[在美国,JavaScript 开发者的平均工资已达到 $97,000.00][15]。

伴随着复杂性的增加以及对高质量 web 开发的需求暴涨,雇佣一个 JavaScript 开发者的平均价格直逼原生应用开发者。论证 web 开发更便宜已经没用了。

### 那混合开发呢?

混合应用是将 HTML5 应用内嵌在原生应用的容器里,并且提供实现原生平台特性所需的权限。Cordova 和 PhoneGap 就是典型的例子。

如果你想构建一个 MVP 设计架构、一个产品原型,或者不在意模仿原生应用的用户体验,那么混合应用会很适合你。但谨记如果你最后想把它转为原生应用,整个项目都得重写。

此领域有很多创新的东西,我最喜欢的当属 [Ionic Framework][16]。混合开发正变得越来越好,但还不如原生开发那么流畅自然。

对很多公司来说,既包括初创公司,也包括大中型公司,混合应用在质量上的表现似乎没有达到客户的要求,给人的感觉是粗糙、不够专业。

[听说应用商店里的前 100 名都不是混合应用][17],我没有证据支持这一观点。如果说有百分之零到百分之五是混合应用,我就不怀疑了。

> [我们最大的错误是在 HTML5 身上下了太多的赌注][18] — 马克·扎克伯格

### 解决方案

如果你紧跟移动开发动向,那么你绝对听说过像 [NativeScript][19] 和 [React Native][20] 这样的项目。

通过这些项目,使用由 JavaScript 写成的基本 UI 组成块,像常规 iOS 和 Android 应用那样,就可以构建出高质量的原生移动应用。

你可以仅用一位工程师,也可以用一个专业的工程师团队,通过 React Native 使用 [现有代码库][22] 或者 [底层技术][23] 进行跨平台移动应用开发、[原生桌面开发][21],甚至还有 web 开发。把你的应用发布到 App Store 上、Play Store 上,还有 Web 上。如此可以在保证不丧失原生应用性能和质量的同时,使成本仅占传统开发的一小部分。

通过 React Native 进行跨平台开发时,复用 90% 的代码并非不可能,这个比例通常在 80% 到 90% 之间。
@ -72,7 +71,7 @@ Facebook 对这个问题的解决方案是 [React Native][11]。
React Native 还可以使用 [Code Push][24] 和 [AppHub][25] 这样的工具来远程更新你的 JavaScript 代码。这意味着你可以向用户实时推送更新、新特性,快速修复 bug,绕过打包、发布这些工作,绕过 App Store、Google Play Store 的审核,省去了耗时 2 到 7 天的过程(App Store 一直是整个过程的痛点)。这些是原生应用不可能比得上的优势。

如果这个领域的创新力能像刚发行时那样保持,将来你甚至可以为 [Apple Watch][26]、[Apple TV][27] 和 [Tizen][28] 这样的平台开发应用。

> 由 Angular 2 驱动的 NativeScript 依然是个相当年轻的框架,[上个月刚刚发布测试版][29]。但只要它保持良好的市场份额,未来就很有前途。
@ -84,49 +83,57 @@ React Native 还可以使用 [Code Push][24] 和 [AppHub][25] 这样的工具
看下面的例子,[这是一个使用 React Native 技术的著名应用列表][31]。

#### Facebook

![](https://cdn-images-1.medium.com/max/800/1*36atCP-kVNoYrit2RMR-8g.jpeg)

*Facebook 公司的 React Native 应用*

Facebook 的两款应用 [Ads Manager][32] 和 [Facebook Groups][33] 都在使用 React Native 技术,并且[将会应用到实现动态消息的框架上][34]。

Facebook 投入大量的资金创立和维护像 React Native 这样的开源项目,而开源社区的开发者们最近也创建了很多了不起的项目。像我一样的开发者以及全世界的企业,每天都从中受益良多。

#### Instagram

![](https://cdn-images-1.medium.com/max/800/1*MQ0ezjRsUW3A5I0ahryHPg.jpeg)

*Instagram*

Instagram 应用的一部分已经使用了 React Native 技术。

#### Airbnb

![](https://cdn-images-1.medium.com/max/800/1*JS3R_cfLsDFCmAZJmtVEvg.jpeg)

*Airbnb*

Airbnb 的很多东西正在用 React Native 重写。(来自 [Leland Richardson][36])

超过 90% 的 Airbnb 旅行平台都是用 React Native 写的。(来自 [spikebrehm][37])

#### Vogue

![](https://cdn-images-1.medium.com/max/800/1*V9JMA2L3lXcO1nczCN3gcA.jpeg)

*Vogue 是 2016 年度十佳应用之一*

Vogue 这么突出不仅仅因为它也用 React Native 写成,而是[因为它被苹果公司评为年度十佳应用之一][38]。

![](https://cdn-images-1.medium.com/max/800/1*vPDVV-vwvjfL3MsHpOO8rQ.jpeg)

#### 沃尔玛

![](https://cdn-images-1.medium.com/max/800/1*ZlUk9AGwfOAPKdEBpa8avg.jpeg)

*Walmart Labs*

查看这篇 [Keerti](https://medium.com/@Keerti) 的[文章](https://medium.com/walmartlabs/react-native-at-walmartlabs-cdd140589560#.azpn97g8t)来了解沃尔玛是怎样看待 React Native 的优势的。

#### 微软

微软在 React Native 身上下的赌注很大。

它早已发布多个开源工具,包括 [Code Push][39]、[React Native VS Code][40],以及 [React Native Windows][41],旨在帮助开发者向 React Native 领域转移。

微软考虑的是那些已经使用 React Native 为 iOS 和 Android 开发应用的开发者,他们可以重用高达 90% 的代码,不用花费太多额外的时间和成本就可将应用发布到 Windows 上。
@ -136,11 +143,11 @@ Vogue 这么突出不仅仅因为它也用 React Native 写成,而是[因为
移动应用界面设计和移动应用开发要进行范式转变,下一步就是 React Native 以及与其相似的技术。

#### 公司

如果你的公司正想着削减成本、加快开发速度,而又不想在应用质量和性能上妥协,这是最适合使用 React Native 的时候,它能提高你的净利润。

#### 开发者

如果你是一个开发者,想进入一个将来会快速发展的领域,我强烈推荐你把 React Native 列入你的学习清单。
@ -166,7 +173,7 @@ via: https://hackernoon.com/the-cost-of-native-mobile-app-development-is-too-dam
作者:[Nader Dabit][a]
译者:[fuowang](https://github.com/fuowang)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

237
published/20161221 Living Android without Kotlin.md
Normal file
@ -0,0 +1,237 @@
在没有 Kotlin 的世界与 Android 共舞
============================================================

![](https://cdn-images-1.medium.com/max/2000/1*Fd349rzh3XWwSbCP2IV7zA.jpeg)

> 开始投入一件事比远离它更容易。 — Donald Rumsfeld

没有 Kotlin 的生活就像在触摸板上玩魔兽争霸 3。购买鼠标很简单,但如果你的新雇主不想让你在生产环境中使用 Kotlin,你该怎么办?

下面有一些选择。

* 与你的产品负责人争取获得使用 Kotlin 的权利。
* 使用 Kotlin 但不告诉其他人,因为你知道最好的东西只适合自己。
* 擦掉你的眼泪,自豪地使用 Java。

想象一下,你在和产品负责人的斗争中失败了,而作为一个专业的工程师,你不能在没有得到同意的情况下私自使用那些时髦的技术。我知道这听起来非常恐怖,特别是当你已经品尝到 Kotlin 的好处时,不过不要失去生活的信念。

在文章接下来的部分,我想简短地描述一些 Kotlin 的特性,让你可以通过一些知名的工具和库,把它们应用到你的 Android Java 代码中去。阅读本文需要对 Kotlin 和 Java 有基本的认识。

### 数据类

我想你肯定已经喜欢上 Kotlin 的数据类。对于你来说,得到 `equals()`、`hashCode()`、`toString()` 和 `copy()` 这些是很容易的。具体来说,`data` 关键字还可以按照声明顺序生成对应于属性的 `componentN()` 函数,它们用于解构声明。

```
data class Person(val name: String)
val (riddle) = Person("Peter")
println(riddle)
```

你知道什么会被打印出来吗?确实,它不会是从 `Person` 类的 `toString()` 返回的值。这是解构声明的作用,它把 `name` 赋值给 `riddle`。使用圆括号 `(riddle)`,编译器就知道它必须使用解构声明机制。

```
val (riddle): String = Person("Peter").component1()
println(riddle) // prints Peter
```

> 这段代码不会被编译,它只是展示了解构声明的工作原理。

正如你所看到的,`data` 关键字是一个超级有用的语言特性,那么你能做什么把它带到你的 Java 世界?使用注解处理器并修改抽象语法树(Abstract Syntax Tree)。如果你想更深入,请阅读文章末尾列出的文章(Project Lombok — Trick Explained)。

使用 Lombok 项目,你可以实现 `data` 关键字所提供的几乎相同的功能。不幸的是,没有办法进行解构声明。

```
import lombok.Data;

@Data class Person {
    final String name;
}
```
`@Data` 注解生成 `equals()`、`hashCode()` 和 `toString()`。此外,它为所有字段创建 getter,为所有非 final 字段创建 setter,并为所有必填字段(final)创建构造函数。值得注意的是,Lombok 仅用于编译,因此库代码不会被添加到你最终的 .apk 中。

### Lambda 表达式

Android 工程师有一个非常艰难的生活,因为 Android 中缺乏 Java 8 的特性,而其中之一就是 lambda 表达式。lambda 是很棒的,因为它们为你减少了成吨的样板代码。你可以在回调和流中使用它们。在 Kotlin 中,lambda 表达式是内置的,它们看起来比在 Java 中好多了。此外,lambda 的字节码可以直接插入到调用方法的字节码中,因此方法计数不会增加,这可以通过内联函数做到。

```
button.setOnClickListener { println("Hello World") }
```

最近 Google 宣布在 Android 中支持 Java 8 的特性,借助 Jack 编译器,你可以在你的代码中使用 lambda。还要提及的是,它们在 API 23 或者更低的级别都可用。

```
button.setOnClickListener(view -> System.out.println("Hello World!"));
```

怎样使用它们?只需添加下面几行到你的 `build.gradle` 文件中。

```
defaultConfig {
    jackOptions {
        enabled true
    }
}

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
```

如果你不喜欢用 Jack 编译器,或者你由于一些原因不能使用它,这里有一个不同的解决方案提供给你。Retrolambda 项目允许你在 Java 7、6 或者 5 上运行带有 lambda 表达式的 Java 8 代码,下面是设置过程。

```
dependencies {
    classpath 'me.tatarka:gradle-retrolambda:3.4.0'
}

apply plugin: 'me.tatarka.retrolambda'

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
```

正如我前面提到的,在 Kotlin 中使用内联函数的 lambda 不会增加方法计数,但是在 Jack 或者 Retrolambda 下使用它们会怎样呢?显然,它们不是没有成本的,隐藏的成本如下。

![](https://cdn-images-1.medium.com/max/800/1*H7h2MB2auMslMkdaDtqAfg.png)

*该表展示了使用不同版本的 Retrolambda 和 Jack 编译器生成的方法数量。该比较结果来自 Jake Wharton 的“[探索 Java 的隐藏成本](http://jakewharton.com/exploring-java-hidden-costs/)”技术讨论。*
### 数据操作

Kotlin 引入了高阶函数作为流的替代。当你必须将一组数据转换为另一组数据,或者要过滤集合时,它们非常有用。

```
fun foo(persons: MutableList<Person>) {
    persons.filter { it.age >= 21 }
           .filter { it.name.startsWith("P") }
           .map { it.name }
           .sorted()
           .forEach(::println)
}

data class Person(val name: String, val age: Int)
```

流也由 Google 通过 Jack 编译器提供。不幸的是,Jack 不能配合 Lombok 使用,因为它在编译代码时跳过生成中间的 `.class` 文件,而 Lombok 却依赖于这些文件。

```
void foo(List<Person> persons) {
    persons.stream()
           .filter(it -> it.getAge() >= 21)
           .filter(it -> it.getName().startsWith("P"))
           .map(Person::getName)
           .sorted()
           .forEach(System.out::println);
}

class Person {
    final private String name;
    final private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    String getName() { return name; }
    int getAge() { return age; }
}
```

这简直太好了,那么问题在哪里?令人悲伤的是,流从 API 24 才可用。谷歌做了件好事,但哪个应用程序能用 `minSdkVersion = 24` 呢?

幸运的是,Android 平台有一个很棒的开源社区,提供了许多优秀的库。Lightweight-Stream-API 就是其中之一,它包含了基于迭代器的、适用于 Java 7 及以下版本的流实现。

```
import lombok.Data;
import com.annimon.stream.Stream;

void foo(List<Person> persons) {
    Stream.of(persons)
          .filter(it -> it.getAge() >= 21)
          .filter(it -> it.getName().startsWith("P"))
          .map(Person::getName)
          .sorted()
          .forEach(System.out::println);
}

@Data class Person {
    final String name;
    final int age;
}
```

上面的例子结合了 Lombok、Retrolambda 和 Lightweight-Stream-API,它看起来几乎和 Kotlin 一样棒。使用静态工厂方法可以将任何 Iterable 转换为流,并对其应用 lambda,就像 Java 8 的流一样。如果能把静态调用 `Stream.of(persons)` 包装为 Iterable 类型的扩展函数就完美了,但是 Java 不支持它。
### 扩展函数

扩展机制提供了向类添加功能而无需继承它的能力。这个众所周知的概念非常适合 Android 世界,这就是 Kotlin 在该社区很受欢迎的原因。

有没有技术或魔法能将扩展函数添加到你的 Java 工具箱?借助 Lombok,你可以把它们作为一个实验性功能来使用。根据 Lombok 文档的说明,如果没有大的变化,他们想尽快把它从实验状态移出。让我们重构最后一个例子,并将 `Stream.of(persons)` 包装成扩展函数。

```
import lombok.Data;
import lombok.experimental.ExtensionMethod;

@ExtensionMethod(Streams.class)
public class Foo {
    void foo(List<Person> persons) {
        persons.toStream()
               .filter(it -> it.getAge() >= 21)
               .filter(it -> it.getName().startsWith("P"))
               .map(Person::getName)
               .sorted()
               .forEach(System.out::println);
    }
}

@Data class Person {
    final String name;
    final int age;
}

class Streams {
    static <T> Stream<T> toStream(List<T> list) {
        return Stream.of(list);
    }
}
```

所有 `public`、`static`、并且至少有一个非原始类型参数的方法,都会被当作扩展方法。`@ExtensionMethod` 注解允许你指定一个包含你的扩展函数的类。你也可以传递数组,而不是使用一个 `.class` 对象。

* * *

我完全清楚我的一些想法是非常有争议的,特别是 Lombok。我也知道,还有很多的库可以使你的生活更轻松。请不要犹豫,在评论里分享你的经验。干杯!

![](https://cdn-images-1.medium.com/max/800/1*peB9mmElOn6xwR3eH0HXXA.png)

---------------------------------

作者简介:

![](https://cdn-images-1.medium.com/fit/c/60/60/1*l7_L6VCKzkOm0gq4Kplnkw.jpeg)

Coder and professional dreamer @ Grid Dynamics

--------------------------------------------------------------------------------

via: https://medium.com/proandroiddev/living-android-without-kotlin-db7391a2b170

作者:[Piotr Ślesarew][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://hackernoon.com/@piotr.slesarew?source=post_header_lockup
[1]:http://jakewharton.com/exploring-java-hidden-costs/
[2]:https://medium.com/u/8ddd94878165
[3]:https://projectlombok.org/index.html
[4]:https://github.com/aNNiMON/Lightweight-Stream-API
[5]:https://github.com/orfjackal/retrolambda
[6]:http://notatube.blogspot.com/2010/11/project-lombok-trick-explained.html
[7]:http://notatube.blogspot.com/2010/11/project-lombok-trick-explained.html
[8]:https://twitter.com/SliskiCode
@ -0,0 +1,405 @@
GraphQL 用例:使用 Golang 和 PostgreSQL 构建一个博客引擎 API
============================================================

### 摘要

GraphQL 在生产环境中似乎难以使用:虽然对于建模功能来说图接口非常灵活,但是并不适用于关系型存储,不管是在实现还是性能方面。

在这篇博客中,我们会设计并实现一个简单的博客引擎 API,它支持以下功能:

* 三种类型的资源(用户、博文以及评论)支持多种功能(创建用户、创建博文、给博文添加评论、关注其它用户的博文和评论,等等)。
* 使用 PostgreSQL 作为后端数据存储(选择它是因为它是一个流行的关系型数据库)。
* 使用 Golang(开发 API 的一个流行语言)实现 API。

我们会比较简单的 GraphQL 实现和纯 REST 替代方案,在一种普通场景(呈现博客文章页面)下对比它们的实现复杂性和效率。

### 介绍

GraphQL 是一种 IDL(Interface Definition Language,接口定义语言),设计者用它来定义数据类型,并把数据建模为一个图(graph)。每个顶点都是一种数据类型的一个实例,边代表了节点之间的关系。这种方式非常灵活,能适应任何业务领域。然而,问题是设计过程更加复杂,而且传统的数据存储不能很好地映射到图模型。阅读 _附录1_ 了解更多关于这个问题的详细信息。

GraphQL 在 2014 年由 Facebook 的工程师团队首次提出。尽管它的优点和功能非常有趣而且引人注目,但它并没有得到大规模应用。开发者需要在 REST 的设计简单性、熟悉性和丰富的工具,与 GraphQL 的灵活性(不受限于 CRUD(LCTT 译注:Create、Read、Update、Delete),并优化了与服务器的网络往返)之间进行权衡。

大部分关于 GraphQL 的教程和指南都跳过了从数据存储获取数据以便解决查询的问题。也就是,如何使用通用目的、流行的存储方案(例如关系型数据库),为 GraphQL API 设计一个支持高效数据提取的数据库。

这篇博客介绍了构建一个博客引擎 GraphQL API 的流程,其功能并不简单。为了便于和基于 REST 的方法进行比较,它的范围被限制在一个熟悉的业务领域。

这篇博客的文章结构如下:

* 第一部分我们会设计一个 GraphQL 模式,并介绍所使用语言的一些功能。
* 第二部分是 PostgreSQL 数据库的设计。
* 第三部分介绍了使用 Golang 实现第一部分设计的 GraphQL 模式。
* 第四部分我们以从后端获取所需数据的角度来比较呈现博客文章页面的任务。

### 相关阅读

* 很棒的 [GraphQL 介绍文档][1]。
* 该项目的完整实现代码在 [github.com/topliceanu/graphql-go-example][2]。

### 在 GraphQL 中建模一个博客引擎

下述 _列表1_ 包括了博客引擎 API 的全部模式。它显示了组成图的顶点的数据类型。顶点之间的关系,也就是边,被建模为指定类型的属性。
```
type User {
  id: ID
  email: String!
  post(id: ID!): Post
  posts: [Post!]!
  follower(id: ID!): User
  followers: [User!]!
  followee(id: ID!): User
  followees: [User!]!
}

type Post {
  id: ID
  user: User!
  title: String!
  body: String!
  comment(id: ID!): Comment
  comments: [Comment!]!
}

type Comment {
  id: ID
  user: User!
  post: Post!
  title: String
  body: String!
}

type Query {
  user(id: ID!): User
}

type Mutation {
  createUser(email: String!): User
  removeUser(id: ID!): Boolean
  follow(follower: ID!, followee: ID!): Boolean
  unfollow(follower: ID!, followee: ID!): Boolean
  createPost(user: ID!, title: String!, body: String!): Post
  removePost(id: ID!): Boolean
  createComment(user: ID!, post: ID!, title: String!, body: String!): Comment
  removeComment(id: ID!): Boolean
}
```
_列表1_

模式使用 GraphQL DSL 编写,它用于定义自定义数据类型,例如 `User`、`Post` 和 `Comment`。该语言也提供了一系列原始数据类型,例如 `String`、`Boolean` 和 `ID`(它是 `String` 的别名,但是有顶点唯一标识符的额外语义)。

`Query` 和 `Mutation` 是语法解析器能识别并用于查询图的可选类型。从 GraphQL API 读取数据等同于遍历图,因此需要提供这样一个起始顶点;该角色通过 `Query` 类型来实现。在这种情况中,所有图的查询都要从一个由 id 指定的用户 `user(id:ID!)` 开始。对于写数据,定义了 `Mutation` 顶点。它提供了一系列操作,建模为能遍历(并返回)新创建顶点类型的参数化属性。_列表2_ 是这些查询的一些例子。

顶点属性能被参数化,也就是能接受参数。在图遍历场景中,如果一个博文顶点有多个评论顶点,你可以通过指定 `comment(id: ID)` 只遍历其中的一个。所有这些都取决于设计,设计者可以选择不提供到每个独立顶点的直接路径。

`!` 字符是一个类型后缀,适用于原始类型和用户定义类型,它有两种语义:

* 当被用于参数化属性的参数类型时,表示这个参数是必须的。
* 当被用于一个属性的返回类型时,表示当顶点被获取时该属性不会为空。
* 也可以把它们组合起来,例如 `[Comment!]!` 表示一个非空 Comment 顶点链表,其中 `[]`、`[Comment]` 是有效的,但 `null`、`[null]`、`[Comment, null]` 就不是。

_列表2_ 包括一系列用于博客 API 的 `curl` 命令,它们会使用 mutation 填充图,然后查询图以便获取数据。要运行它们,按照 [topliceanu/graphql-go-example][3] 仓库中的指令编译并运行服务。
```
# 创建用户 1、2 和 3 的更改。更改和查询类似,在该场景中我们检索新创建用户的 id 和 email。
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user1@x.co"){id, email}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user2@x.co"){id, email}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user3@x.co"){id, email}}'
# 为用户添加博文的更改。为了和模式匹配我们需要检索他们的 id,否则会出现错误。
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:1,title:"post1",body:"body1"){id}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:1,title:"post2",body:"body2"){id}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:2,title:"post3",body:"body3"){id}}'
# 给博文添加评论的更改。`createComment` 需要用户 id、博文 id、标题和正文。见列表 1 的模式。
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:2,post:1,title:"comment1",body:"comment1"){id}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:1,post:3,title:"comment2",body:"comment2"){id}}'
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:3,post:3,title:"comment3",body:"comment3"){id}}'
# 让用户 3 关注用户 1 和用户 2 的更改。注意 `follow` 更改只返回一个布尔值,不需要指定要检索的字段。
curl -XPOST http://vm:8080/graphql -d 'mutation {follow(follower:3, followee:1)}'
curl -XPOST http://vm:8080/graphql -d 'mutation {follow(follower:3, followee:2)}'

# 用于获取用户 1 所有数据的查询。
curl -XPOST http://vm:8080/graphql -d '{user(id:1)}'
# 用于获取用户 2 和用户 1 的关注者的查询。
curl -XPOST http://vm:8080/graphql -d '{user(id:2){followers{id, email}}}'
curl -XPOST http://vm:8080/graphql -d '{user(id:1){followers{id, email}}}'
# 检测用户 2 是否被用户 1 关注的查询。如果是,检索用户 1 的 email,否则返回空。
curl -XPOST http://vm:8080/graphql -d '{user(id:2){follower(id:1){email}}}'
# 返回用户 3 关注的所有用户的 id 和 email 的查询。
curl -XPOST http://vm:8080/graphql -d '{user(id:3){followees{id, email}}}'
# 如果用户 3 被用户 1 关注,就获取用户 3 email 的查询。
curl -XPOST http://vm:8080/graphql -d '{user(id:1){followee(id:3){email}}}'
# 获取用户 1 的第二篇博文的查询,检索它的标题和正文。如果博文 2 不是由用户 1 创建的,就会返回空。
curl -XPOST http://vm:8080/graphql -d '{user(id:1){post(id:2){title,body}}}'
# 获取用户 1 的所有博文的所有数据的查询。
curl -XPOST http://vm:8080/graphql -d '{user(id:1){posts{id,title,body}}}'
# 获取撰写博文 2 的用户的查询(如果博文 2 是由用户 1 撰写的);这是一个体现语言灵活性的例证。
curl -XPOST http://vm:8080/graphql -d '{user(id:1){post(id:2){user{id,email}}}}'
```
_列表2_

通过仔细设计 mutation 和类型属性,可以实现强大而富有表达力的查询。

### 设计 PostgreSQL 数据库

关系型数据库的设计,一如既往,由避免数据冗余的需求驱动。选择该方式有两个原因:

1. 表明实现 GraphQL API 不需要定制化的数据库技术,也不需要学习和使用新的设计技巧。
2. 表明 GraphQL API 能在现有的数据库之上创建,更具体地说,是最初为 REST 后端甚至传统的呈现 HTML 站点的服务器所设计的数据库。

阅读 _附录1_ 了解关于关系型和图数据库在构建 GraphQL API 方面的区别。_列表3_ 显示了用于创建新数据库的 SQL 命令。数据库模式和 GraphQL 模式相对应。为了支持 `follow/unfollow` 更改,需要添加 `followers` 关系。
```
CREATE TABLE IF NOT EXISTS users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(100) NOT NULL
);
CREATE TABLE IF NOT EXISTS posts (
  id SERIAL PRIMARY KEY,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  title VARCHAR(200) NOT NULL,
  body TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS comments (
  id SERIAL PRIMARY KEY,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  post_id INTEGER NOT NULL REFERENCES posts(id) ON DELETE CASCADE,
  title VARCHAR(200) NOT NULL,
  body TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS followers (
  follower_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  followee_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  PRIMARY KEY(follower_id, followee_id)
);
```
_列表3_

### Golang API 实现

本项目使用的用 Go 实现的 GraphQL 语法解析器是 `github.com/graphql-go/graphql`。它包括一个查询解析器,但不包括模式解析器。这要求开发者利用库提供的结构,在 Go 中构建 GraphQL 模式。这和 [nodejs 实现][3] 不同,后者提供了一个模式解析器,并为数据获取暴露了钩子。因此 _列表1_ 中的模式只是作为指导使用,需要转化为 Golang 代码。然而,这个 _“限制”_ 也提供了一个机会,让我们得以在同等的抽象级别上工作,并了解模式如何和用于检索数据的图遍历模型相关联。_列表4_ 显示了 `Comment` 顶点类型的实现:
```
var CommentType = graphql.NewObject(graphql.ObjectConfig{
    Name: "Comment",
    Fields: graphql.Fields{
        "id": &graphql.Field{
            Type: graphql.NewNonNull(graphql.ID),
            Resolve: func(p graphql.ResolveParams) (interface{}, error) {
                if comment, ok := p.Source.(*Comment); ok == true {
                    return comment.ID, nil
                }
                return nil, nil
            },
        },
        "title": &graphql.Field{
            Type: graphql.NewNonNull(graphql.String),
            Resolve: func(p graphql.ResolveParams) (interface{}, error) {
                if comment, ok := p.Source.(*Comment); ok == true {
                    return comment.Title, nil
                }
                return nil, nil
            },
        },
        "body": &graphql.Field{
            Type: graphql.NewNonNull(graphql.String),
            Resolve: func(p graphql.ResolveParams) (interface{}, error) {
                if comment, ok := p.Source.(*Comment); ok == true {
                    return comment.Body, nil
                }
                return nil, nil
            },
        },
    },
})

func init() {
    CommentType.AddFieldConfig("user", &graphql.Field{
        Type: UserType,
        Resolve: func(p graphql.ResolveParams) (interface{}, error) {
            if comment, ok := p.Source.(*Comment); ok == true {
                return GetUserByID(comment.UserID)
            }
            return nil, nil
        },
    })
    CommentType.AddFieldConfig("post", &graphql.Field{
        Type: PostType,
        Args: graphql.FieldConfigArgument{
            "id": &graphql.ArgumentConfig{
                Description: "Post ID",
                Type:        graphql.NewNonNull(graphql.ID),
            },
        },
        Resolve: func(p graphql.ResolveParams) (interface{}, error) {
            i := p.Args["id"].(string)
            id, err := strconv.Atoi(i)
            if err != nil {
                return nil, err
            }
            return GetPostByID(id)
        },
    })
}
```
_列表4_

正如 _列表1_ 中的模式,`Comment` 类型是静态定义的一个有三个属性的结构体:`id`、`title` 和 `body`。为了避免循环依赖,动态定义了 `user` 和 `post` 两个其它属性。

Go 并不适用于这种动态建模,它只支持有限的类型检查,代码中大部分变量都是 `interface{}` 类型,在使用之前都需要进行类型断言。`CommentType` 是一个 `graphql.Object` 类型的变量,它的属性是 `graphql.Field` 类型。因此,GraphQL DSL 和 Go 中使用的数据结构并没有直接的转换。

每个字段的 `resolve` 函数暴露了 `Source` 参数,它是表示遍历时前一个节点的数据类型顶点。`Comment` 的所有属性都以当前的 `CommentType` 顶点作为 source。检索 `id`、`title` 和 `body` 是直接的属性访问,而检索 `user` 和 `post` 则要求图遍历,也需要数据库查询。由于它们非常简单,这篇文章并没有介绍这些 SQL 查询,但在 _参考文献_ 部分列出的 github 仓库中可以找到。

### 普通场景下和 REST 的对比

在这一部分,我们会展示一个普通的博客文章呈现场景,并比较 REST 和 GraphQL 的实现。关注重点会放在入站/出站请求数量,因为这些是造成页面呈现延迟的最主要原因。

场景:呈现一个博客文章页面。它应该包含关于作者(email)、博客文章(标题、正文)、所有评论(标题、正文)以及评论人是否关注博客文章作者的信息。_图1_ 和 _图2_ 显示了客户端 SPA、API 服务器以及数据库之间的交互,一个是 REST API,另一个是对应的 GraphQL API。
```
+------+                     +------+                       +--------+
|client|                     |server|                       |database|
+--+---+                     +--+---+                       +----+---+
   |  GET /blogs/:id            |                                |
1. +--------------------------->   SELECT * FROM blogs...        |
   |                            +-------------------------------->
   |                            <--------------------------------+
   <---------------------------+                                |
   |                            |                                |
   |  GET /users/:id            |                                |
2. +--------------------------->   SELECT * FROM users...        |
   |                            +-------------------------------->
   |                            <--------------------------------+
   <---------------------------+                                |
   |                            |                                |
   |  GET /blogs/:id/comments   |                                |
3. +--------------------------->   SELECT * FROM comments...     |
   |                            +-------------------------------->
   |                            <--------------------------------+
   <---------------------------+                                |
   |                            |                                |
   |  GET /users/:id/followers  |                                |
4. +--------------------------->   SELECT * FROM followers..     |
   |                            +-------------------------------->
   |                            <--------------------------------+
   <---------------------------+                                |
   |                            |                                |
   +                            +                                +
```
_图1_
```
+------+                     +------+                       +--------+
|client|                     |server|                       |database|
+--+---+                     +--+---+                       +----+---+
   |  GET /graphql              |                                |
1. +--------------------------->   SELECT * FROM blogs...        |
   |                            +-------------------------------->
   |                            <--------------------------------+
   |                            |                                |
2. |                            |  SELECT * FROM users...        |
   |                            +-------------------------------->
   |                            <--------------------------------+
   |                            |                                |
3. |                            |  SELECT * FROM comments...     |
   |                            +-------------------------------->
   |                            <--------------------------------+
   |                            |                                |
4. |                            |  SELECT * FROM followers..     |
   |                            +-------------------------------->
   |                            <--------------------------------+
   <---------------------------+                                |
   |                            |                                |
   +                            +                                +
```
_图2_

_列表5_ 是一条用于获取所有呈现博文所需数据的简单 GraphQL 查询。
```
{
  user(id: 1) {
    email
    followers
    post(id: 1) {
      title
      body
      comments {
        id
        title
        user {
          id
          email
        }
      }
    }
  }
}
```
|
||||
|
||||
_列表5_
|
||||
|
||||
对于这种情况,对数据库的查询次数是故意相同的,但是到 API 服务器的 HTTP 请求已经减少到只有一个。我们认为在这种类型的应用程序中通过互联网的 HTTP 请求是最昂贵的。
|
||||
|
||||
为了利用 GraphQL 的优势,后端并不需要进行特别设计,从 REST 到 GraphQL 的转换可以逐步完成。这使得可以测量性能提升和优化。从这一点,API 设计者可以开始优化(潜在的合并) SQL 查询从而提高性能。缓存的机会在数据库和 API 级别都大大增加。
|
||||
|
||||
SQL 之上的抽象(例如 ORM 层)通常会和 `n+1` 问题相抵触。在 REST 示例的步骤 4 中,客户端可能不得不在单独的请求中为每个评论的作者请求关注状态。这是因为在 REST 中没有标准的方式来表达两个以上资源之间的关系,而 GraphQL 旨在通过使用嵌套查询来防止这类问题。这里我们通过获取用户的所有关注者来作弊。我们向客户提出了如何确定评论并关注了作者的用户的逻辑。
|
||||
|
||||
另一个区别是,REST 为了不破坏资源抽象,往往会返回比客户端所需更多的数据。解析和存储这些不需要的数据所消耗的带宽和电池电量,对客户端来说是不可忽视的。
|
||||
|
||||
### 总结
|
||||
|
||||
GraphQL 是 REST 的一个可用替代方案,因为:
|
||||
|
||||
* 尽管设计 API 更加困难,但该过程可以逐步完成。也是由于这个原因,从 REST 转换到 GraphQL 非常容易,两个流程可以没有任何问题地共存。
|
||||
* 在网络请求方面更加高效,即使是类似本博客中的简单实现。它还提供了更多查询优化和结果缓存的机会。
|
||||
* 在用于解析结果的带宽消耗和 CPU 周期方面它更加高效,因为它只返回呈现页面所需的数据。
|
||||
|
||||
REST 仍然非常有用,如果:
|
||||
|
||||
* 你的 API 非常简单,只有少量的资源或者资源之间关系简单。
|
||||
* 在你的组织中已经在使用 REST API,而且你已经配置好了所有工具,或者你的客户希望获取 REST API。
|
||||
* 你有复杂的 ACL(LCTT 译注:Access Control List)策略。在博客例子中,设想这样的功能:允许用户精细地控制谁能查看他们的电子邮箱、博客、特定博客的评论、他们关注了谁,等等。在优化数据获取的同时还要检查复杂的业务规则,可能会更加困难。
|
||||
|
||||
### 附录1:图数据库和高效数据存储
|
||||
|
||||
尽管将其应用领域数据想象为一个图非常直观,正如这篇博文介绍的那样,但是支持这种接口的高效数据存储问题仍然没有解决。
|
||||
|
||||
近年来图数据库变得越来越流行。通过将 GraphQL 查询转换为特定的图数据库查询语言从而延迟解决请求的复杂性似乎是一种可行的方案。
|
||||
|
||||
问题是,和关系型数据库相比,图并不是一种高效的数据结构。图中一个顶点可能与任何其它顶点相连,访问模式难以预测,因此优化的机会也较少。
|
||||
|
||||
以缓存为例:为了快速访问,需要将哪些顶点保存在内存中?通用的缓存算法在图遍历场景中可能没那么高效。
|
||||
|
||||
数据库分片问题:把数据库切分为更小、互不相交的部分并保存到独立的硬件上。在学术上,基于最小割的图划分问题已经得到了很好的研究,但其结果可能是次优的,而且在病态的最坏情况下还可能导致高度不平衡的切分。
|
||||
|
||||
在关系型数据库中,数据被建模为记录(行或者元组)和列,表和数据库名称都只是简单的命名空间。大部分数据库都是面向行的,意味着每个记录都是一个连续的内存块,一个表中的所有记录在磁盘上一个接一个地整齐地打包(通常按照某个关键列排序)。这非常高效,因为这是物理存储最优的工作方式。HDD 最昂贵的操作是将磁头移动到磁盘上的另一个扇区,因此最小化此类访问非常重要。
|
||||
|
||||
很有可能如果应用程序对一条特定记录感兴趣,它需要获取整条记录,而不仅仅是记录中的其中一列。也很有可能如果应用程序对一条记录感兴趣,它也会对该记录周围的记录感兴趣,例如全表扫描。这两点使得关系型数据库相当高效。然而,也是因为这个原因,关系型数据库的最差使用场景就是总是随机访问所有数据。图数据库正是如此。
|
||||
|
||||
随着支持更快随机访问的 SSD 的出现、内存变得更便宜使得缓存大部分图数据成为可能,再加上图缓存和分区优化技术的进步,图数据库开始成为一种可行的存储解决方案。大部分大公司也在使用它:Facebook 有 Social Graph,Google 有 Knowledge Graph。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://alexandrutopliceanu.ro/post/graphql-with-go-and-postgresql
|
||||
|
||||
作者:[Alexandru Topliceanu][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/topliceanu
|
||||
[1]:http://graphql.org/learn/
|
||||
[2]:https://github.com/topliceanu/graphql-go-example
|
||||
[3]:https://github.com/graphql/graphql-js
|
@ -0,0 +1,59 @@
|
||||
一位老极客的眼中的开发和部署
|
||||
============================================================
|
||||
|
||||
![The difference between development and deployment](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_OpenSourceExperience_520x292_cm.png?itok=APna2N9Y "The difference between development and deployment")
|
||||
|
||||
图片提供 : opensource.com
|
||||
|
||||
多年前,我曾是一名 Smalltalk 程序员,这种经验让我以一种不同的视角来观察编程的世界,例如,需要花时间来适应源代码应该存储在文本文件中的这种做法。
|
||||
|
||||
我们作为程序员通常会区分“开发”和“部署”,特别是我们在开发的地方所使用的工具不同于我们在之后部署软件时的地点和工具时。而在 Smalltalk 世界里,没有这样的区别。
|
||||
|
||||
Smalltalk 的构建思路是:虚拟机中就包含了你的开发环境(IDE、调试器、文本编辑器、版本控制等),如果你需要修改任何一处代码,就得修改内存中的运行副本。如果需要的话,你可以为运行中的机器做个快照;如果你想分发你的代码,你可以发送一个运行中的机器的镜像副本(包括 IDE、调试器、文本编辑器、版本控制等)给用户。这就是上世纪 90 年代(对我们中的一些人来说)软件开发的方式。
|
||||
|
||||
如今,部署环境与开发环境有了很大的不同。首先,你不能期望部署环境里有任何开发工具:一旦部署,就没有版本控制、没有调试、没有开发环境,有的只是日志记录和监控,而这些在开发环境中反而没有;另外还有一个“构建管道”,它将我们的软件从开发形式转换为部署形式。作为一个例证,Docker 容器试图找回上世纪 90 年代 Smalltalk 程序员那种简单的部署体验,却没有带来与之相伴的开发体验。
|
||||
|
||||
我想,如果 Smalltalk 世界是我唯一一段无法区分开发和部署环境的编程经历,我可能只会偶尔怀念一下它。但是在成为 Smalltalk 程序员之前,我还是一名 APL 程序员,那同样是一个可修改的虚拟机镜像的世界,其中开发和部署无法区分。因此我相信,当前这种人们编辑单独的源代码文件、再运行构建管道去生成编辑代码时尚不存在的部署产物、然后将这些产物部署给用户的时代,是一种被我们制度化了的反模式;而不断发展的软件环境的需求,正迫使我们找回上世纪 90 年代那些更有效的技术。Docker 的成功正由此而来,所以我要提出我的建议。
|
||||
|
||||
我有两个建议:我们在运行时系统中实现(并使用)版本控制,以及,我们通过更改运行中的系统来开发软件,而不是用新的运行系统替换它们。这两个想法是相关的。为了安全地更改正在运行的系统,我们需要一些版本控制功能来支持“撤消”功能。也许公平地说,我只提出了一个建议。让我举例来说明。
|
||||
|
||||
让我们开始假设一个静态网站。你要修改一些 HTML 文件。你应该如何工作?如果你像大多数开发者一样,你会有两个,也许三个网站 - 一个用于开发,一个用于 QA(或者预发布),一个用于生产。你将直接编辑开发实例中的文件。准备就绪后,你将把你的修改“部署”到预发布实例。在用户验收测试之后,你将再次部署,这次是生产环境。
|
||||
|
||||
运用<ruby>奥卡姆剃刀<rt>Occam's Razor</rt></ruby>原则,让我们避免不必要地创建实例。我们需要多少台机器?我们可以只用一台电脑。我们需要多少台 web 服务器?我们可以使用带有多个虚拟主机的单台 web 服务器。能不能连多个虚拟主机也省掉,只用单个虚拟主机呢?那么我们就需要多个目录,并需要使用 URL 的顶级路径来区分不同的版本,而不是虚拟主机名。但是为什么我们需要多个目录?因为 web 服务器要从文件系统中提供静态文件。我们的问题是,同一个目录有三个不同的版本,我们的解决方案是创建目录的三个不同的副本。这不正是 Subversion 和 Git 这样的版本控制系统解决的问题吗?制作目录的多个副本以存储多个版本的策略,可以追溯到 CVS 这样的版本控制出现之前的日子。为什么不使用比如说一个裸(bare)Git 仓库来存储文件呢?要这样做,web 服务器需要能够从 git 仓库读取文件(参见 [mod_git][3])。
|
||||
|
||||
这将是一个支持版本控制的运行时系统。
|
||||
|
||||
使用这样的 web 服务器,用户所使用的版本可以由 cookie 来标识。这样,任何人都可以推送到仓库,而用户将继续看到他们发起会话时所分配的版本。版本控制系统的提交是不可变的;一旦会话开始,开发人员就可以在不影响正在使用的用户的情况下快速推送更改。开发人员可以重置自己的会话以跟踪新的提交,因此开发人员或测试人员就可以像普通用户一样,在同一台服务器的同一个 URL 上查看正在开发或测试的版本。作为一个附带的好处,A/B 测试不过是把不同的用户分配到不同的提交而已。所有用于管理多个版本的 git 设施都可以在运行环境中发挥作用。当然,`git reset` 为我们提供了前面提到的“撤销”功能。
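
为了把这个思路说得更具体一点,下面是一个极简的假设性草图(并非某个现成的工具,仓库路径和 cookie 名称都是虚构的):一个小小的 web 服务器根据会话 cookie 中记录的版本,用 `git show` 从裸仓库中读出对应版本的文件。

```
package main

import (
	"net/http"
	"os/exec"
)

// commitFromSession 从会话 cookie 中取出该用户绑定的版本;
// 新会话则简单地绑定到 master(真实实现会记录当时的具体提交号)。
func commitFromSession(r *http.Request) string {
	if c, err := r.Cookie("release"); err == nil && c.Value != "" {
		return c.Value
	}
	return "master"
}

func main() {
	repo := "/srv/site.git" // 假设的裸仓库路径

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		commit := commitFromSession(r)
		path := r.URL.Path[1:]
		if path == "" {
			path = "index.html"
		}
		// 相当于在仓库中执行:git show <提交号>:<文件路径>
		out, err := exec.Command("git", "-C", repo, "show", commit+":"+path).Output()
		if err != nil {
			http.NotFound(w, r)
			return
		}
		// 把用户固定在其发起会话时所分配的版本上
		http.SetCookie(w, &http.Cookie{Name: "release", Value: commit})
		w.Write(out)
	})

	http.ListenAndServe(":8080", nil)
}
```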
|
||||
|
||||
为什么不是每个人都这样做?
|
||||
|
||||
一种可能性是,像版本控制系统这样的工具并不是为在生产环境中使用而设计的。例如,无法做到只给某人推送到测试分支而不是生产分支的权限。对这个方案最常见的反对意见是,如果发现了一个漏洞,你会想要把某些提交标记为不可访问。这同样是一种更细粒度的权限问题:开发人员将具有对所有提交的读取权限,而外部用户没有。我们可能需要对现有工具进行一些额外的改造来支持这种模式,但是这些功能很容易理解,并且已经出现在其他软件的设计中。例如,Linux(或 PostgreSQL)就实现了针对不同用户的细粒度权限。
|
||||
|
||||
随着云环境变得越来越普及,这些想法变得更加相关:云总是在运行。例如,我们可以看到,AWS 中等价的 “文件系统”(S3)实现了版本控制,所以你可能有一个不同的想法,使用一台 web 服务器提供来自 S3 的资源文件,并根据会话信息选择不同版本的资源文件。重要的并不是哪个实现是最好的,而是支持这种运行时版本控制的愿景。
|
||||
|
||||
部署的软件环境应该是“版本感知”的原则,应该扩展到除了服务静态文件的 web 服务器之外的其他工具。在将来的文章中,我将介绍版本库,数据库和应用程序服务器的方法。
|
||||
|
||||
_想了解更多,请关注 Robert Lefkowitz 在 Hobart 举行的 linux.conf.au 2017([#lca2017][1])上的演讲:[保持 Linux 伟大][2]。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/robert_lefkowitz.jpg?itok=CFoX-OUI)
|
||||
|
||||
Robert M. Lefkowitz - Robert(即 r0ml)是一个喜欢复杂编程语言的编程语言爱好者。他是一个提高清晰度、提高可靠性和最大限度地简化的编程技术收藏家。他通过让计算机更加容易获得来使其普及化。他经常就中世纪晚期和文艺复兴早期对编程艺术的影响发表演讲。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/difference-between-development-deployment
|
||||
|
||||
作者:[Robert M. Lefkowitz][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Bestony](https://github.com/Bestony)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/r0ml
|
||||
[1]:https://twitter.com/search?q=%23lca2017&src=typd
|
||||
[2]:https://www.linux.conf.au/schedule/presentation/107/
|
||||
[3]:https://github.com/r0ml/mod_git
|
@ -0,0 +1,67 @@
|
||||
六个开源软件开发的“潜规则”
|
||||
============================================================
|
||||
|
||||
> 你想成为开源项目中得意满满、功成名就的那个人吗,那就要遵守下面的“潜规则”。
|
||||
|
||||
![The 6 unwritten rules of open source development](http://images.techhive.com/images/article/2016/12/09_opensource-100698477-large.jpg)
|
||||
|
||||
正如体育界不成文的规定一样,这些规则基本上不会出现在官方文档和正式记录上。比如说,在棒球运动中,从比分领先时不要盗垒,到跑垒员跑了第一时也不要放弃四坏球保送。对于圈外人来讲,这些东西很难懂,甚至觉得没什么意义。但是对于那些想成为 MVP 的队员来说,这些都是理所当然的。
|
||||
|
||||
软件开发,特别是开源软件开发中,也有一套不成文的规定。和其它的团队运动一样,这些规定很大程度上决定了开源社区如何看待一名开发者,特别是新加入社区的开发者。
|
||||
|
||||
### 运行之前先调试
|
||||
|
||||
在参与一个社区(开源社区或者其它社区)之前,你需要先做一些基本功课。对于有抱负的开源贡献者,这意味着你需要理解社区的目标,并了解应该从哪里起步。人人都想贡献源代码,但是只有少数人做好了准备,并且乐意、同时也有能力完成那些艰苦卓绝的工作:测试补丁、复审代码、撰写文档、修正错误。所有这些不受待见的任务在一个健康的社区中都是必要的。
|
||||
|
||||
为什么要在优雅地写代码前做这些呢?这是一种信任,更重要的是,不要只关注自己开发的功能,而是要关注整个社区的动向。
|
||||
|
||||
### 填坑而不是挖坑
|
||||
|
||||
当你在某个社区中建立起自己的声望后,很有必要全面了解该项目和代码。不要停留在自己的任务上,而是要去钻研项目本身,理解那些超出你擅长范围之外的部分。不要把自己的理解局限于自己负责的那一部分,这样你才能着眼于让你的代码产生更大的影响,而不只是你那一亩三分地。
|
||||
|
||||
打个比方,你已经完成了一个网络模块的测试版本。你测试了一下,觉得不错。然后你把它开放到社区,想要更多的人测试。结果发现,当它以特定的方式部署时,有可能会破坏安全设置,还可能导致主存储泄露。如果你将代码视为一个整体,而不是孤立地看待问题,这类问题本可以迎刃而解。这表明,你要对项目的各个部分如何与其他部分协作交互有比较深入的理解。让你的补丁填坑而不是挖坑,这样你就朝着成为社区精英的目标又前进了一大步。
|
||||
|
||||
### 不投放代码炸弹
|
||||
|
||||
代码提交完毕后你的工作还没结束。如果代码被接受,还会有一些关于这些更改的讨论和常见的问答,还要做测试。你要确保你可以准时提交,努力去理解如何在不影响社区其他成员的情况下,改进代码和补丁。
|
||||
|
||||
### 助己之前先助人
|
||||
|
||||
开源社区不是自相残杀的丛林世界,我们更看重项目的价值而非个体的贡献和成功。如果你想给自己加分,让自己成为更重要的社区成员、让社区接纳你的代码,那就努力帮助别人。如果你熟悉网络部分,那就去复审网络部分,用你的专业技能让整个代码更加优雅。道理很简单,顶级的审查者经常和顶级的贡献者打交道。你帮助的人越多,你就越有价值。
|
||||
|
||||
### 打磨抛光才算完
|
||||
|
||||
作为一个开发者,你很可能希望为开源项目解决一个特定的痛点。或许你想要运行在一个目前还不支持的系统上,抑或你很希望改革社区目前使用的安全技术。想要引进新技术,特别是比较有争议的技术,最好的办法就是让人无法拒绝它。你需要透彻地了解底层代码,考虑每个极端情况。在不影响已实现功能的前提下增加新功能。不仅仅是完成就行,还要在特性的完善上下功夫。
|
||||
|
||||
### 不离不弃方始终
|
||||
|
||||
开源社区也有许多浅尝辄止的人,但是承诺了就不要轻易失信。不要因为提交被拒就离开社区。找出原因,修正错误,然后再试一次。在开发的时候,要和整个代码库保持同步,确保即使项目发生变化,你的补丁仍然可用。不要把你的代码留给别人修复,要自己修复。这样可以在社区里形成良好的风气。
|
||||
|
||||
---
|
||||
|
||||
这些“潜规则”看上去很简单,但是仍有许多开源项目的贡献者并没有遵守。遵守这些规则的开发者不仅能成功地推动自己的项目,也有助于整个开源社区。
|
||||
|
||||
作者简介:
|
||||
|
||||
Matt Hicks 是 Red Hat 软件工程的副总裁,也是 Red Hat 开源合作团队的奠基成员之一。十五年来,他在软件工程领域担任过多种职务:开发、运维、架构和管理。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html
|
||||
|
||||
作者:[Matt Hicks][a]
|
||||
译者:[Taylor1024](https://github.com/Taylor1024)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/blog/new-tech-forum/
|
||||
[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&via=infoworld&text=The+6+unwritten+rules+of+open+source+development
|
||||
[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
|
||||
[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
|
||||
[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
|
||||
[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html&title=The+6+unwritten+rules+of+open+source+development
|
||||
[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3156776%2Fopen-source-tools%2Fthe-6-unwritten-rules-of-open-source-development.html
|
||||
[7]:http://www.infoworld.com/article/3156776/open-source-tools/the-6-unwritten-rules-of-open-source-development.html#email
|
||||
[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
|
||||
[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
|
@ -0,0 +1,74 @@
|
||||
5 个提升你开源项目贡献者基数的方法
|
||||
============================================================
|
||||
|
||||
![5 ways to expand your project's contributor base](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_cubestalk.png?itok=MxdS-jA_ "5 ways to expand your project's contributor base")
|
||||
|
||||
图片提供:opensource.com
|
||||
|
||||
许多自由和开源软件项目因解决问题而诞生,人们开始为它们做贡献,是因为他们也想修复自己遇到的问题。当项目的最终用户发现它对自己的需求有用时,项目就开始增长,而分享的意愿又把人们聚拢到同一个项目社区中。
|
||||
|
||||
就像任何有生命周期的事物一样,增长既是项目成功的标志,也是项目成功的源泉。那么,项目领导者和维护者如何激励贡献者基数的增长?这里有五种方法。
|
||||
|
||||
### 1、 提供好的文档
|
||||
|
||||
人们经常低估项目[文档][2]的重要性。它是项目贡献者的主要信息来源,它会激励他们努力。信息必须是正确和最新的。它应该包括如何构建该软件、如何提交补丁、编码风格指南等步骤。
|
||||
|
||||
查看经验丰富的科技作家、编辑 Bob Reselman 的 [7 个创建世界级文档的规则][3]。
|
||||
|
||||
开发人员文档的一个很好的例子是 [Python 开发人员指南][4]。它包括清晰简洁的步骤,涵盖 Python 开发的各个方面。
|
||||
|
||||
### 2、 降低进入门槛
|
||||
|
||||
如果你的项目有[工单或 bug 追踪工具][5],请确保将初级任务标记为一个“小 bug” 或“起点”。新的贡献者可以很容易地通过解决这些问题进入项目。追踪工具也是标记非编程任务(如平面设计、图稿和文档改进)的地方。有许多项目成员不是每天都编码,但是却通过这种方式成为推动力。
|
||||
|
||||
Fedora 项目维护着一个这样的[易修复和入门级问题的追踪工具][6]。
|
||||
|
||||
### 3、 为补丁提供常规反馈
|
||||
|
||||
确认每个补丁,即使它只有一行代码,并给作者反馈。提供反馈有助于吸引潜在的候选人,并指导他们熟悉项目。所有项目都应有一个邮件列表和[聊天功能][7]进行通信。问答可在这些媒介中发生。大多数项目不会在一夜之间成功,但那些繁荣的列表和沟通渠道为增长创造了环境。
|
||||
|
||||
### 4、 推广你的项目
|
||||
|
||||
始于解决问题的项目实际上可能对其他开发人员也有用。作为项目的主要贡献者,你的责任是为你的项目建立文档并推广它。写博客文章,并在社交媒体上分享项目的进展。你可以从简要描述如何成为项目的贡献者开始,并在该描述中提供主要开发者文档的参考链接。此外,请务必提供有关路线图和未来版本的信息。
|
||||
|
||||
关于如何面向你的受众写作,可以看看 Opensource.com 的社区经理 Rikki Endsley 写的[写作提示][8]。
|
||||
|
||||
### 5、 保持友好
|
||||
|
||||
友好的对话语调和迅速的回复将加强人们对你的项目的兴趣。最初,这些问题只是为了寻求帮助,但在未来,新的贡献者也可能会提出想法或建议。让他们有信心他们可以成为项目的贡献者。
|
||||
|
||||
记住,你一直在被人观察!人们会留意项目开发者在邮件列表或聊天中是如何交谈的,这些都反映了项目对新贡献者的欢迎和开放程度。使用技术交流时,我们有时会忘记人文关怀,但这对于任何项目的生态系统都很重要。设想这样一种情况:项目本身很好,但项目维护者却不太友好,这样的维护者可能会把用户从项目中赶走。对于拥有大量用户基数的项目而言,不友好的环境可能导致分裂,一部分用户可能决定复刻项目并启动新项目。在开源世界中已有这样的先例。
|
||||
|
||||
另外,拥有不同背景的人对于开源项目的持续增长和源源不断的点子是很重要的。
|
||||
|
||||
最后,项目负责人有责任维持和帮助项目成长。指导新的贡献者是项目的关键,他们将成为项目和社区未来的领导者。
|
||||
|
||||
阅读:由红帽的内容战略家 Nicole Engard 写的 [7 种让新的贡献者感到受欢迎的方式][1]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/ar1dbnui.jpg?itok=4Xa7f2cM)
|
||||
|
||||
Kushal Das - Kushal Das 是 Python 软件基金会的一名 CPython 核心开发人员和主管。他是一名长期的 FOSS 贡献者和导师,帮助新人进入贡献世界。他目前在 Red Hat 担任 Fedora 云工程师。他的博客在 https://kushaldas.in 。你也可以在 Twitter 上通过 @kushaldas 找到他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/expand-project-contributor-base
|
||||
|
||||
作者:[Kushal Das][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Bestony](https://github.com/bestony)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/kushaldas
|
||||
[1]:https://opensource.com/life/16/5/sumana-harihareswara-maria-naggaga-oscon
|
||||
[2]:https://opensource.com/tags/documentation
|
||||
[3]:https://opensource.com/business/16/1/scale-14x-interview-bob-reselman
|
||||
[4]:https://docs.python.org/devguide/
|
||||
[5]:https://opensource.com/tags/bugs-and-issues
|
||||
[6]:https://fedoraproject.org/easyfix/
|
||||
[7]:https://opensource.com/alternatives/slack
|
||||
[8]:https://opensource.com/business/15/10/what-stephen-king-can-teach-tech-writers
|
@ -0,0 +1,101 @@
|
||||
如何在 Linux 中捕获并流式传输你的游戏过程
|
||||
============================================================
|
||||
|
||||
也许使用 Linux 的铁杆游戏玩家没有那么多,但肯定有很多 Linux 用户喜欢玩游戏。如果你是其中之一,并希望向世界展示 Linux 游戏不再是一个笑话,那么你会喜欢下面这个关于如何录制和/或流式直播游戏过程的快速教程。我在这里将使用一个名为 “[Open Broadcaster Software Studio][5]” 的软件,这可能是我们所能找到的最好的一款。
|
||||
|
||||
### 捕获设置
|
||||
|
||||
在顶层菜单中,我们选择 “File” → “Settings”,然后我们选择 “Output” 来设置要生成的文件的选项。这里我们可以设置想要的音频和视频的比特率、新创建的文件的目标路径和文件格式。这上面还提供了粗略的质量设置。
|
||||
|
||||
[
|
||||
![Select output set in OBS Studio](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_1.png)
|
||||
][6]
|
||||
|
||||
如果我们将顶部的输出模式从 “Simple” 更改为 “Advanced”,就能够设置 CPU 占用,以控制 OBS 对系统的影响。根据所选的质量、CPU 能力和所捕获的游戏,可以找到一个不会导致丢帧的 CPU 负载值。你可能需要做一些试验才能找到最佳设置,不过如果把质量设为低,就不需要太多调整。
|
||||
|
||||
[
|
||||
![Change OBS output mode](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_2.png)
|
||||
][7]
|
||||
|
||||
接下来,我们转到设置的 “Video” 部分,在这里可以设置我们想要的输出视频分辨率。注意<ruby>缩小过滤<rt>downscaling filter</rt></ruby>方式的选择,因为它会影响最终的质量。
|
||||
|
||||
[
|
||||
![Down scaling filter](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_3.png)
|
||||
][8]
|
||||
|
||||
你可能还需要绑定热键以启动、暂停和停止录制。这特别有用,这样你就可以在录制时看到游戏的屏幕。为此,请在设置中选择 “Hotkeys” 部分,并在相应的框中分配所需的按键。当然,你不必每个框都填写,你只需要填写所需的。
|
||||
|
||||
[
|
||||
![Configure Hotkeys in OBS](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_4.png)
|
||||
][9]
|
||||
|
||||
如果你对流式传输感兴趣,而不仅仅是录制,请选择 “Stream” 分类的设置,然后你可以从支持的 30 种流媒体服务(包括 Twitch、Facebook Live 和 Youtube)中进行选择,再选择服务器并输入流密钥(stream key)。
|
||||
|
||||
[
|
||||
![Streaming settings](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_5.png)
|
||||
][10]
|
||||
|
||||
### 设置源
|
||||
|
||||
在左下方,你会发现一个名为 “Sources” 的框。我们按下加号来添加一个新的源,它本质上就是我们要录制的媒体源。在这里你可以设置音频和视频源,甚至也可以添加图像和文本。
|
||||
|
||||
[
|
||||
![OBS Media Source](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_6.png)
|
||||
][11]
|
||||
|
||||
前三个是关于音频源,接下来的两个是图像,JACK 选项用于从乐器捕获的实时音频, Media Source 用于添加文件等。这里我们感兴趣的是 “Screen Capture (XSHM)”、“Video Capture Device (V4L2)” 和 “Window Capture (Xcomposite)” 选项。
|
||||
|
||||
屏幕捕获选项让你选择要捕获的屏幕(也包括活动屏幕),以便记录屏幕上发生的所有内容,如工作区切换、窗口最小化等。对于之后要在发布前进行编辑的常规整体录制来说,这是一个合适的选项。
|
||||
|
||||
我们来探讨另外两个选项。Window Capture 让我们选择一个活动窗口并将其放入捕获画面;Video Capture Device 则可以把我们(摄像头拍到的脸)放在画面的一角,让观众在我们说话时可以看到我们。当然,每个添加的源都提供了一组选项,供我们实现想要的效果。
|
||||
|
||||
[
|
||||
![OBS Window Capture](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_7.png)
|
||||
][12]
|
||||
|
||||
添加的源可以调整大小,也可以在录制画面内移动位置,因此你可以添加多个源,并根据需要进行排列,最后通过右键单击执行基本的编辑任务。
|
||||
|
||||
[
|
||||
![Add Multiple sources](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_8.png)
|
||||
][13]
|
||||
|
||||
### 过渡
|
||||
|
||||
最后,假设你在流式传输游戏时,希望能够在游戏画面和自己(或任何其他来源)之间切换。为此,请在右下角切换到 “Studio Mode”,并添加一个分配了另一个源的场景。你还可以通过取消选中 “Duplicate scene”、并在 “Transitions” 旁边的齿轮图标中勾选 “Duplicate sources” 来进行切换。当你想在做简短点评时显示自己的脸,这很有帮助。
|
||||
|
||||
[
|
||||
![Studio mode](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_9.png)
|
||||
][14]
|
||||
|
||||
这个软件有许多过渡效果,你可以按中心的 “Quick Transitions” 旁边的加号图标添加更多。当你添加它们时,还将会提示你进行设置。
|
||||
|
||||
### 总结
|
||||
|
||||
OBS Studio 是一个功能强大的免费软件,它工作稳定,使用起来相当简单直接,并且拥有越来越多的用来扩展其功能的[附加插件][15]。如果你需要在 Linux 上录制并且/或者流式传输游戏会话,我想不出比 OBS 更好的解决方案。你有使用其他类似工具的经验么?请在评论中分享,也欢迎附上一个展示你技能的视频链接。:)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/
|
||||
|
||||
作者:[Bill Toulas][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/
|
||||
[1]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#capture-settings
|
||||
[2]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#setting-up-the-sources
|
||||
[3]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#transitioning
|
||||
[4]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#conclusion
|
||||
[5]:https://obsproject.com/download
|
||||
[6]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_1.png
|
||||
[7]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_2.png
|
||||
[8]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_3.png
|
||||
[9]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_4.png
|
||||
[10]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_5.png
|
||||
[11]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_6.png
|
||||
[12]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_7.png
|
||||
[13]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_8.png
|
||||
[14]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_9.png
|
||||
[15]:https://obsproject.com/forum/resources/categories/obs-studio-plugins.6/
|
@ -0,0 +1,117 @@
|
||||
如何在 Vim 中使用模式行进行文件特定的设置
|
||||
============================================================
|
||||
|
||||
虽然[插件][4]毫无疑问是 Vim 最大的优势,然而,还有其它一些功能,使得它成为当今 Linux 用户中最强大、功能最丰富的文本编辑器/IDE 之一。其中一个功能就是可以根据文件做特定的设置。我们可以使用该编辑器的模式行(Modeline)特性来实现该功能。
|
||||
|
||||
在这篇文章中,我将通过一些简单易懂的例子来讨论如何使用 Vim 的[模式行(Modeline)][5]特性。
|
||||
|
||||
在开始之前,值得提醒一下,这篇教程中提及的所有例子、命令和指令都已经在 Ubuntu 16.04 中使用 Vim 7.4 版本测试过。
|
||||
|
||||
### VIM 模式行
|
||||
|
||||
#### 用法
|
||||
|
||||
正如上面已经提到的,Vim 的模式行特性让你能够进行特定于文件的设置更改。比如,假设你想把项目中某个特定文件里的所有制表符替换为空格,并且确保这个更改不会影响到其它文件。这正是模式行的理想用武之地。
|
||||
|
||||
因此,你可以考虑将下面这一行加入文件的开头或结尾来完成这件事。
|
||||
|
||||
```
|
||||
# vim: set expandtab:
|
||||
```
|
||||
|
||||
(LCTT 译注:模式行就是一行以注释符,如 `#`、`//`、`/*` 开头,间隔一个空格,以 `vim:` 关键字触发的设置命令。可参看:http://vim.wikia.com/wiki/Modeline_magic )
|
||||
|
||||
如果你是在 Linux 系统上尝试上面的练习,很有可能它不会像你所期望的那样工作。如果是这样,也不必担心,因为在某些情况下,模式行特性需要先激活才能起作用(出于安全原因,一些系统比如 Debian、Ubuntu、Gentoo 和 OSX 上默认禁用了它)。
|
||||
|
||||
为了启用该特性,打开 `.vimrc` 文件(位于 `home` 目录),然后加入下面一行内容:
|
||||
|
||||
```
|
||||
set modeline
|
||||
```
|
||||
|
||||
现在,无论何时你在该文件输入一个制表符然后保存时(文件中已输入 `expandtab` 模式行命令的前提下),都会被自动转换为空格。
|
||||
|
||||
让我们考虑另一个用例。假设在 Vim 中, 制表符默认设置为 4 个空格,但对于某个特殊的文件,你想把它增加到 8 个。对于这种情况,你需要在文件的开头或末尾加上下面这行内容:
|
||||
|
||||
```
|
||||
// vim: noai:ts=8:
|
||||
```
|
||||
|
||||
现在,输入一个制表符,你会看到,空格的数量为 8 个。
|
||||
|
||||
你可能已经注意到,我前面说这些模式行命令需要加在靠近文件顶部或底部的位置。如果你好奇为什么是这样,原因是该特性就是以这种方式设计的。下面这段话(来自 Vim 官方文档)会解释清楚:
|
||||
|
||||
> “模式行不能随意放在文件中的任何位置:它需要放在文件中的前几行或最后几行。`modelines` 变量控制 Vim 检查模式行在文件中的确切位置。请查看 `:help modelines` 。默认情况下,设置为 5 行。”
|
||||
|
||||
下面是 `:help modelines` 命令(上面提到的)输出的内容:
|
||||
|
||||
> 如果 `modeline` 已启用并且 `modelines` 给出了行数,那么便在相应位置查找 `set` 命令。如果 `modeline` 禁用或 `modelines` 设置的行数为 0 则不查找。
|
||||
|
||||
尝试把模式行命令放在超出这个范围的位置(距离文件顶部和底部均超过 5 行),你会发现,制表符将会恢复为 Vim 默认数目的空格(在我的情况里是 4 个空格)。
|
||||
|
||||
然而,你可以按照自己的意愿改变默认行数,只需在你的 `.vimrc` 文件中加入下面一行命令:
|
||||
|
||||
```
|
||||
set modelines=[新值]
|
||||
```
|
||||
|
||||
比如,我把值从 5 增加到了 10 。
|
||||
|
||||
```
|
||||
set modelines=10
|
||||
```
|
||||
|
||||
这意味着,现在我可以把模式行命令置于文件前 10 行或最后 10 行的任意位置。
|
||||
|
||||
接下来,无论何时,当你在编辑一个文件的时候,都可以输入下面的命令(在 Vim 编辑器的命令模式下输入)来查看当前与模式行相关的设置,以及它们最近是在哪里设置的。
|
||||
|
||||
```
|
||||
:verbose set modeline? modelines?
|
||||
```
|
||||
|
||||
比如,在我的例子中,上面的命令产生了如下所示的输出:
|
||||
|
||||
```
|
||||
modeline
|
||||
Last set from ~/.vimrc
|
||||
modelines=10
|
||||
Last set from ~/.vimrc
|
||||
```
|
||||
|
||||
关于 Vim 的模式行特性,你还需要知道一些重要的点:
|
||||
|
||||
* 默认情况下,当 Vim 以非兼容(`nocompatible`)模式运行时该特性是启用的,但需要注意的是,在一些发行版中,出于安全考虑,系统的 `vimrc` 文件禁用了该选项。
|
||||
* 默认情况下,当以 root 权限编辑文件时,该特性被禁用(如果你是使用 `sudo` 方式打开该文件,那么该特性依旧能够正常工作)。
|
||||
* 通过 `set` 来设置模式行时,选项部分结束于第一个前面没有反斜杠的冒号。不使用 `set` 时,则该行后面的文本都是选项。比如,`/* vim: noai:ts=4:sw=4 */` 就是一个无效的模式行。
|
||||
|
||||
(LCTT 译注:关于模式行中的 `set`,上述描述指的是:如果用 `set` 来设置,那么当发现第一个 `:` 时,表明选项结束,后面的 `*/` 之类的为了闭合注释而出现的文本均无关;而如果不用 `set` 来设置,那么以 `vim:` 起头的该行所有内容均视作选项。 )
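
作为补充(这是一个额外的示例,并非原文内容):如果想在这种 C 风格注释中写一个有效的模式行,可以改用 `set` 形式,让选项部分结束于冒号,这样注释结尾的 `*/` 就会被忽略:

```
/* vim: set noai ts=4 sw=4: */
```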
|
||||
|
||||
#### 安全考虑
|
||||
|
||||
令人沮丧的是, Vim 的模式行特性可能会造成安全性问题。事实上,在过去,已经报道过多个和模式行相关的问题,包括 [shell 命令注入][6],[任意命令执行][7]和[无授权访问][8]等。我知道,这些问题发生在很早的一些时候,现在应该已经修复好了,但是,这提醒了我们,模式行特性有可能会被黑客滥用。
|
||||
|
||||
### 结论
|
||||
|
||||
模式行可能算是 Vim 编辑器的一个高级特性,但它并不难理解。虽然有一定的学习成本,但不可否认,该特性非常有用。当然,出于安全考虑,在启用并使用该选项之前,你需要对自己的选择进行权衡。
|
||||
|
||||
你有使用过模式行特性吗?你的体验是什么样的?记得在下面的评论中分享给我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/vim-modeline-settings/
|
||||
|
||||
作者:[Ansh][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/tutorial/vim-modeline-settings/
|
||||
[1]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#usage
|
||||
[2]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#vim-modeline
|
||||
[3]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#conclusion
|
||||
[4]:https://linux.cn/article-7901-1.html
|
||||
[5]:http://vim.wikia.com/wiki/Modeline_magic
|
||||
[6]:https://tools.cisco.com/security/center/viewAlert.x?alertId=13223
|
||||
[7]:http://usevim.com/2012/03/28/modelines/
|
||||
[8]:https://tools.cisco.com/security/center/viewAlert.x?alertId=5169
|
@ -1,71 +1,47 @@
|
||||
### 如何在 Linux 终端会话中使用 Asciinema 进行录制和回放
|
||||
如何在 Linux 中使用 Asciinema 进行录制和回放终端会话
|
||||
===========
|
||||
|
||||
![](https://linuxconfig.org/images/asciimena-video-example.jpg?58942057)
|
||||
|
||||
内容
|
||||
|
||||
* * [1、简介][11]
|
||||
* [2、困难][12]
|
||||
* [3、惯例][13]
|
||||
* [4、标准库安装][14]
|
||||
* [4.1、在 Arch Linux 上安装][1]
|
||||
* [4.2、在 Debian 上安装][2]
|
||||
* [4.3、在 Ubuntu 上安装][3]
|
||||
* [4.4、在 Fedora 上安装][4]
|
||||
* [5、从源代码安装][15]
|
||||
* [6、前提条件][16]
|
||||
* [6.1、在 Arch Linux 上安装 ruby][5]
|
||||
* [6.2、在 Debian 上安装 ruby][6]
|
||||
* [6.3、在 Ubuntu 安装 ruby][7]
|
||||
* [6.4、在 Fedora 安装 ruby][8]
|
||||
* [6.5、在 CentOS 安装 ruby][9]
|
||||
* [7、 安装 Linuxbrew][17]
|
||||
* [8、 安装 Asciinema][18]
|
||||
* [9、录制终端会话][19]
|
||||
* [10、回放已录制终端会话][20]
|
||||
* [11、将视频嵌入 HTML][21]
|
||||
* [12、结论][22]
|
||||
* [13、 故障排除][23]
|
||||
* [13.1、在 UTF-8 环境下运行 asciinema][10]
|
||||
|
||||
### 简介
|
||||
|
||||
Asciinema 是一个轻量并且非常高效的脚本终端会话录制器的替代品。使用它可以录制、回放和分享 JSON 格式的终端会话记录。和一些桌面录制器,比如 Recordmydesktop、Simplescreenrecorder、Vokoscreen 或 Kazam 相比,Asciinema 最主要的优点是,它能够以通过 ANSI 转义码编码的 ASCII 文本录制所有的标准终端输入、输出和错误。
|
||||
Asciinema 是一个轻量并且非常高效的终端会话录制器。使用它可以录制、回放和分享 JSON 格式的终端会话记录。与一些桌面录制器,比如 Recordmydesktop、Simplescreenrecorder、Vokoscreen 或 Kazam 相比,Asciinema 最主要的优点是,它能够把所有的标准终端输入、输出和错误信息录制为带有 ANSI 转义码的 ASCII 文本。
|
||||
|
||||
事实上,即使是很长的终端会话,录制出的 JSON 格式文件也非常小。另外,JSON 格式使得用户可以利用简单的文件转化器,将输出的 JSON 格式文件嵌入到 HTML 代码中,然后分享到公共网站,或者使用 asciinema 账户分享到 Asciinema.org 。最后,如果你的终端会话中有一些错误,并且你还懂一些 ANSI 转义码语法,那么你可以使用任何编辑器来修改已录制的终端会话。
|
||||
|
||||
### 困难
|
||||
**难易程度:**
|
||||
|
||||
很简单!
|
||||
|
||||
### 惯例
|
||||
**标准终端:**
|
||||
|
||||
* **#** - 给定命令需要以 root 用户权限运行或者使用 `sudo` 命令
|
||||
* **$** - 给定命令以常规权限用户运行
|
||||
|
||||
### 标准库安装
|
||||
### 从软件库安装
|
||||
|
||||
很有可能, asciinema 可以使用你的版本库进行安装。但是,如果不可以使用系统版本库进行安装或者你想安装最新的版本,那么,你可以像下面的“从源代码安装”部分所描述的那样,使用 Linuxbrew 包管理器来执行 Asciinema 安装。
|
||||
通常, asciinema 可以使用你的发行版的软件库进行安装。但是,如果不可以使用系统的软件库进行安装或者你想安装最新的版本,那么,你可以像下面的“从源代码安装”部分所描述的那样,使用 Linuxbrew 包管理器来执行 Asciinema 安装。
|
||||
|
||||
### 在 Arch Linux 上安装
|
||||
**在 Arch Linux 上安装:**
|
||||
|
||||
```
|
||||
# pacman -S asciinema
|
||||
```
|
||||
|
||||
### 在 Debian 上安装
|
||||
**在 Debian 上安装:**
|
||||
|
||||
```
|
||||
# apt install asciinema
|
||||
```
|
||||
|
||||
### 在 Ubuntu 上安装
|
||||
**在 Ubuntu 上安装:**
|
||||
|
||||
```
|
||||
$ sudo apt install asciinema
|
||||
```
|
||||
|
||||
### 在 Fedora 上安装
|
||||
**在 Fedora 上安装:**
|
||||
|
||||
```
|
||||
$ sudo dnf install asciinema
|
||||
@ -75,7 +51,7 @@ $ sudo dnf install asciinema
|
||||
|
||||
最简单并且值得推荐的方式是使用 Linuxbrew 包管理器,从源代码安装最新版本的 Asciinema 。
|
||||
|
||||
### 前提条件
|
||||
#### 前提条件
|
||||
|
||||
下面列出的前提条件是安装 Linuxbrew 和 Asciinema 需要满足的依赖关系:
|
||||
|
||||
@ -86,61 +62,69 @@ $ sudo dnf install asciinema
|
||||
|
||||
在安装 Linuxbrew 之前,请确保上面的这些包都已经安装在了你的 Linux 系统中。
|
||||
|
||||
### 在 Arch Linux 上安装 ruby
|
||||
**在 Arch Linux 上安装 ruby:**
|
||||
|
||||
```
|
||||
# pacman -S git gcc make ruby
|
||||
```
|
||||
|
||||
### 在 Debian 上安装 ruby
|
||||
**在 Debian 上安装 ruby:**
|
||||
|
||||
```
|
||||
# apt install git gcc make ruby
|
||||
```
|
||||
|
||||
### 在 Ubuntu 上安装 ruby
|
||||
**在 Ubuntu 上安装 ruby:**
|
||||
|
||||
```
|
||||
$ sudo apt install git gcc make ruby
|
||||
```
|
||||
|
||||
### 在 Fedora 上安装 ruby
|
||||
**在 Fedora 上安装 ruby:**
|
||||
|
||||
```
|
||||
$ sudo dnf install git gcc make ruby
|
||||
```
|
||||
|
||||
### 在 CentOS 上安装 ruby
|
||||
**在 CentOS 上安装 ruby:**
|
||||
|
||||
```
|
||||
# yum install git gcc make ruby
|
||||
```
|
||||
|
||||
### 安装 Linuxbrew
|
||||
#### 安装 Linuxbrew
|
||||
|
||||
Linuxbrew 包管理器是苹果 macOS 操作系统上很受欢迎的 Homebrew 包管理器的一个复刻版本。Homebrew 发布没多久,就以容易使用而著称。如果你想使用 Linuxbrew 来安装 Asciinema,那么,请运行下面命令在你的 Linux 发行版上安装 Linuxbrew:
|
||||
|
||||
```
|
||||
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install)"
|
||||
```
|
||||
|
||||
现在,Linuxbrew 已经安装到了 `$HOME/.linuxbrew/` 目录下。剩下要做的就是把它加入 `PATH` 环境变量。
|
||||
|
||||
```
|
||||
$ echo 'export PATH="$HOME/.linuxbrew/bin:$PATH"' >>~/.bash_profile
|
||||
$ . ~/.bash_profile
|
||||
```
|
||||
|
||||
为了确认 Linuxbrew 是否已经安装好,你可以使用 `brew` 命令来查看它的版本:
|
||||
|
||||
```
|
||||
$ brew --version
|
||||
Homebrew 1.1.7
|
||||
Homebrew/homebrew-core (git revision 5229; last commit 2017-02-02)
|
||||
```
|
||||
|
||||
### 安装 Asciinema
|
||||
#### 安装 Asciinema
|
||||
|
||||
安装好 Linuxbrew 以后,安装 Asciinema 就变得无比容易了:
|
||||
|
||||
```
|
||||
$ brew install asciinema
|
||||
```
|
||||
|
||||
检查 Asciinema 是否安装正确:
|
||||
|
||||
```
|
||||
$ asciinema --version
|
||||
asciinema 1.3.0
|
||||
@ -151,10 +135,13 @@ asciinema 1.3.0
|
||||
经过一番辛苦的安装工作以后,是时候来干一些有趣的事情了。Asciinema 是一个非常容易使用的软件。事实上,目前的 1.3 版本只有很少的几个可用命令行选项,其中一个是 `--help` 。
|
||||
|
||||
我们首先使用 `rec` 选项来录制终端会话。下面的命令将会开始录制终端会话,之后,你将会有一个选项来丢弃已录制记录或者把它上传到 asciinema.org 网站以便将来参考。
|
||||
|
||||
```
|
||||
$ asciinema rec
|
||||
```
|
||||
|
||||
运行上面的命令以后,你会注意到, Asciinema 已经开始录制终端会话了,你可以按下 `CTRL+D` 快捷键或执行 `exit` 命令来停止录制。如果你使用的是 Debian/Ubuntu/Mint Linux 系统,你可以像下面这样尝试进行第一次 asciinema 录制:
|
||||
|
||||
```
|
||||
$ su
|
||||
Password:
|
||||
@ -162,7 +149,9 @@ Password:
|
||||
# exit
|
||||
$ sl
|
||||
```
|
||||
|
||||
一旦输入最后一个 `exit` 命令以后,将会询问你:
|
||||
|
||||
```
|
||||
$ exit
|
||||
~ Asciicast recording finished.
|
||||
@ -170,11 +159,15 @@ $ exit
|
||||
|
||||
https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
|
||||
```
|
||||
|
||||
如果你不想上传你的私密命令行技巧到 asciinema.org 网站,那么有一个选项可以把 Asciinema 记录以 JSON 格式保存为本地文件。比如,下面的 asciinema 记录将被存为 `/tmp/my_rec.json`:
|
||||
|
||||
```
|
||||
$ asciinema rec /tmp/my_rec.json
|
||||
```
|
||||
另一个非常有用的 asciinema 特性是时间微调。如果你的键盘输入速度很慢,或者你在进行多任务,输入命令和执行命令之间的时间可以延长。Asciinema 会记录你的实时按键时间,这意味着每一个停顿都将反映在最终视频的长度上。可以使用 `-w` 选项来缩短按键的时间间隔。比如,下面的命令将按键的时间间隔缩短为 0.2 秒:
|
||||
|
||||
另一个非常有用的 asciinema 特性是时间微调。如果你的键盘输入速度很慢,或者你在进行多任务,输入命令和执行命令之间的时间会比较长。Asciinema 会记录你的实时按键时间,这意味着每一个停顿都将反映在最终视频的长度上。可以使用 `-w` 选项来缩短按键的时间间隔。比如,下面的命令将按键的时间间隔缩短为 0.2 秒:
|
||||
|
||||
```
|
||||
$ asciinema rec -w 0.2
|
||||
```
|
||||
@ -182,15 +175,19 @@ $ asciinema rec -w 0.2
|
||||
### 回放已录制终端会话
|
||||
|
||||
有两种方式可以来回放已录制会话。第一种方式是直接从 asciinema.org 网站上播放终端会话。这意味着,你之前已经把录制会话上传到了 asciinema.org 网站,并且需要提供有效链接:
|
||||
|
||||
```
|
||||
$ asciinema play https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4
|
||||
```
|
||||
Alternatively, use your locally stored JSON file:
|
||||
|
||||
另外,你也可以使用本地存储的 JSON 文件:
|
||||
|
||||
```
|
||||
$ asciinema play /tmp/my_rec.json
|
||||
```
|
||||
|
||||
如果要使用 `wget` 命令来下载之前的上传记录,只需在链接的后面加上 `.json`:
|
||||
|
||||
```
|
||||
$ wget -q -O steam_locomotive.json https://asciinema.org/a/7lw94ys68gsgr1yzdtzwijxm4.json
|
||||
$ asciinema play steam_locomotive.json
|
||||
@ -198,7 +195,8 @@ $ asciinema play steam_locomotive.json
|
||||
|
||||
### 将视频嵌入 HTML
|
||||
|
||||
最后,Asciinema 还带有一个独立的 JavaScript 播放器。这意味者你可以很容易的在你的网站上分享终端会话记录。下面,使用一段简单的 `index.html` 代码来说明这个方法。首先,下载所有必要的东西:
|
||||
最后,asciinema 还带有一个独立的 JavaScript 播放器。这意味者你可以很容易的在你的网站上分享终端会话记录。下面,使用一段简单的 `index.html` 代码来说明这个方法。首先,下载所有必要的东西:
|
||||
|
||||
```
|
||||
$ cd /tmp/
|
||||
$ mkdir steam_locomotive
|
||||
@ -208,6 +206,7 @@ $ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0
|
||||
$ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0/asciinema-player.js
|
||||
```
|
||||
之后,创建一个新的包含下面这些内容的 `/tmp/steam_locomotive/index.html` 文件:
|
||||
|
||||
```
|
||||
<html>
|
||||
<head>
|
||||
@ -219,28 +218,34 @@ $ wget -q https://github.com/asciinema/asciinema-player/releases/download/v2.4.0
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
完成以后,打开你的网页浏览器,按下 `CTRL+O` 来打开新创建的 `/tmp/steam_locomotive/index.html` 文件。
|
||||
|
||||
### 结论
|
||||
|
||||
正如前面所说的,使用 Asciinema 录制器来录制终端会话最主要的优点是它的输出文件非常小,这使得你的视频很容易分享出去。上面的例子产生了一个包含 58472 个字符的文件,它是一个只有 58 KB 大的 22 秒终端会话视频。如果我们查看输出的 JSON 文件,会发现甚至这个数字已经非常大了,这主要是因为一个 “蒸汽机车” 已经跑过了终端。这个长度的正常终端会话会产生一个更小的输出文件。
|
||||
正如前面所说的,使用 asciinema 录制器来录制终端会话最主要的优点是它的输出文件非常小,这使得你的视频很容易分享出去。上面的例子产生了一个包含 58472 个字符的文件,它是一个只有 58 KB 大小的 22 秒终端会话视频。如果我们查看输出的 JSON 文件,会发现这个数字甚至已经算很大了,这主要是因为一个“蒸汽机车”已经跑过了终端。这个长度的正常终端会话一般会产生一个更小的输出文件。
|
||||
|
||||
下次,当你想要在一个论坛上询问关于 Linux 配置的问题,并且很难描述你的问题的时候,只需运行下面的命令:
|
||||
|
||||
```
|
||||
$ asciinema rec
|
||||
```
|
||||
|
||||
然后把最后的链接贴到论坛的帖子里。
|
||||
|
||||
### 故障排除
|
||||
|
||||
### 在 UTF-8 环境下运行 asciinema
|
||||
#### 在 UTF-8 环境下运行 asciinema
|
||||
|
||||
错误信息:
|
||||
|
||||
```
|
||||
asciinema 需要在 UTF-8 环境下运行。请检查 `locale` 命令的输出。
|
||||
asciinema needs a UTF-8 native locale to run. Check the output of `locale` command.
|
||||
```
|
||||
|
||||
解决方法:
|
||||
生成并导出UTF-8语言环境。例如:
|
||||
生成并导出 UTF-8 语言环境。例如:
|
||||
|
||||
```
|
||||
$ localedef -c -f UTF-8 -i en_US en_US.UTF-8
|
||||
$ export LC_ALL=en_US.UTF-8
|
||||
@ -252,7 +257,7 @@ via: https://linuxconfig.org/record-and-replay-terminal-session-with-asciinema-o
|
||||
|
||||
作者:[Lubos Rendek][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,49 @@
|
||||
甲骨文的政策更改提高了其在 AWS 上的价格
|
||||
============================================================
|
||||
|
||||
> 这种改变使在 AWS 上运行甲骨文软件的价格翻了一番;它已经悄然生效,几乎没有通知用户。
|
||||
|
||||
![](http://windowsitpro.com/site-files/windowsitpro.com/files/imagecache/large_img/uploads/2017/02/ellison-hero.jpg)
|
||||
|
||||
之前有消息传出,甲骨文将亚马逊云上其产品的价格翻了一倍:它在[如何计算 AWS 的虚拟 CPU][6] 上耍了一些花招,而且没有做任何宣扬。该公司的新定价政策于 1 月 23 日生效,但直到 1 月 28 日甲骨文的关注者 Tim Hall 偶然发现 Big Red 公司的[甲骨文软件云计算环境许可][7]文件并披露出来之前,几乎没有人注意到这一变化。
|
||||
|
||||
乍一看,这一举动似乎并不算大,因为它仅仅是将甲骨文在 AWS 上的定价与 Microsoft Azure 上的拉平。但是 Azure 的体量只有市场领先的 AWS 的三分之一,所以如果你想在云中销售许可证,AWS 才是主战场。此举对已经在 AWS 上使用甲骨文产品的用户影响如何尚不明朗,目前还不清楚新规则是否适用于现有用户,但它肯定会让一些正在考虑使用甲骨文的用户另寻它处。
|
||||
|
||||
这个举动的主要原因是显而易见的:甲骨文希望使自己的云更具吸引力。这让 [The Register 评论][8]道:“拉里·埃里森确实承诺过甲骨文的云将会更快、更便宜”。更快和更便宜仍然有待观察。如果甲骨文的 SPARC 云计划启动,并且能像宣传的那样运行,那么它可能会更快,但是更便宜的可能性不大,甲骨文向来以对价格的强硬态度而著称。
|
||||
|
||||
随着其招牌数据库和业务软件栈销售的下滑,加上对 Sun 公司 74 亿美元的收购并未达到预期,甲骨文将自己的未来押在了云计算上。但是甲骨文来晚了,迄今为止,它的努力似乎还没有什么结果,一些金融分析师并不看好甲骨文云的前景。他们说,云是一个拥挤的市场,而四大公司(亚马逊、微软、IBM 和谷歌)已经有了领先优势。
|
||||
|
||||
确实如此。但是甲骨文面临的最大的障碍,好吧,就是甲骨文自己,它的名声早已在外。
|
||||
|
||||
客气地说,这家公司并不以出色的客户服务而闻名。事实上,各种新闻报道将甲骨文描绘成一个恶霸和操纵者。
|
||||
|
||||
例如,早在 2015 年,甲骨文就因为它的云并不像预期那样快速增长而越来越沮丧,开始[启用业内人士所称的“核选项”][9]:审核客户的数据中心,如果客户不符合规定,就发出“违规通知”(这通常只适用于大规模滥用的情况),并命令客户在 30 天内停止使用其软件。
|
||||
|
||||
或许你能想到,大量投入在甲骨文软件平台上的大公司们绝对不能在短时间内迁移到另一个解决方案。甲骨文的违规通知将会引发灾难。
|
||||
|
||||
《商业内幕》(Business Insider)的 Julie Bort 解释道:“为了使违规通知消失,或者减少高额的违规罚款,甲骨文的销售代表通常希望客户在合同中添加云额度”。
|
||||
|
||||
换句话说,甲骨文正在利用审计来迫使客户购买它的云,而无论他们是否有需要。这种策略与最近 AWS 上价格翻倍之间也可能存在联系。Hall 文章的评论者指出,这次价格上调之所以秘而不宣,背后的目的可能就是为了触发软件审计。
|
||||
|
||||
使用这些策略迟早会出麻烦。消息一旦传开,你的客户就会开始寻找其他选择。对 Big Red 而言,或许是时候参考微软的做法,开始打造一个更好、更温和、把客户需求放在第一位的甲骨文了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws
|
||||
|
||||
作者:[Christine Hall][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://windowsitpro.com/author/christine-hall
|
||||
[1]:http://windowsitpro.com/penton_ur/nojs/user/register?path=node%2F186491&nid=186491&source=email
|
||||
[2]:http://windowsitpro.com/author/christine-hall
|
||||
[3]:http://windowsitpro.com/author/christine-hall
|
||||
[4]:http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws#comments
|
||||
[5]:http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws#comments
|
||||
[6]:https://oracle-base.com/blog/2017/01/28/oracles-cloud-licensing-change-be-warned/
|
||||
[7]:http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
|
||||
[8]:https://www.theregister.co.uk/2017/01/30/oracle_effectively_doubles_licence_fees_to_run_in_aws/
|
||||
[9]:http://www.businessinsider.com/oracle-is-using-the-nuclear-option-to-sell-its-cloud-software-2015-7
|
@ -1,17 +1,17 @@
|
||||
5 个要了解的开源软件定义网络项目
|
||||
5 个需要知道的开源的软件定义网络(SDN)项目
|
||||
============================================================
|
||||
|
||||
|
||||
![SDN](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/software-defined-networking_0.jpg?itok=FeWzZo8n "SDN")
|
||||
|
||||
SDN 开始重新定义企业网络。这里有五个应该知道的开源项目。
|
||||
[Creative Commons Zero][1] Pixabay
|
||||
> SDN 开始重新定义企业网络。这里有五个应该知道的开源项目。
|
||||
|
||||
|
||||
纵观整个 2016 年,软件定义网络(SDN)持续快速发展并变得成熟。我们现在已经超出了开源网络的概念阶段,两年前评估这些项目潜力的公司已经开始了企业部署。如几年来所预测的,SDN 正在开始重新定义企业网络。
|
||||
|
||||
这与市场研究人员的观点基本上是一致的。IDC 在今年早些时候公布了 SDN 市场的[一份研究][3],它预计从 2014 年到 2020 年 SDN 的年均复合增长率为 53.9%,届时市场价值将达到 125 亿美元。此外,“<ruby>2016 技术趋势<rt>Technology Trends 2016</rt></ruby>” 报告中将 SDN 列为 2016 年最佳技术投资。
|
||||
|
||||
IDC 网络基础设施副总裁,[Rohit Mehra][4] 说:“云计算和第三方平台推动了 SDN 的需求,这将在 2020 年代表一个价值超过 125 亿美元的市场。毫无疑问的是 SDN 的价值将越来越多地渗透到网络虚拟化软件和 SDN 应用中,包括虚拟化网络和安全服务,大型企业在数据中心实现 SDN 的价值,但它们最终会认识到其在分支机构和校园网络中的广泛应用。“
|
||||
IDC 网络基础设施副总裁,[Rohit Mehra][4] 说:“云计算和第三方平台推动了 SDN 的需求,这预示着 2020 年的一个价值超过 125 亿美元的市场。毫无疑问的是 SDN 的价值将越来越多地渗透到网络虚拟化软件和 SDN 应用中,包括虚拟化网络和安全服务。大型企业现在正在数据中心体现 SDN 的价值,但它们最终会认识到其在分支机构和校园网络中的广泛应用。“
|
||||
|
||||
Linux 基金会最近[发布][5]了其 2016 年度报告[“开放云指南:当前趋势和开源项目”][6]。其中第三份年度报告全面介绍了开放云计算的状态,并包含关于 unikernel 的部分。你现在可以[下载报告][7]了,首先要注意的是汇总和分析研究,说明了容器、unikernel 等的趋势是如何重塑云计算的。该报告提供了对当今开放云环境中心的分类项目的描述和链接。
|
||||
|
||||
@ -19,27 +19,37 @@ Linux 基金会最近[发布][5]了其 2016 年度报告[“开放云指南:
|
||||
|
||||
### 软件定义网络
|
||||
|
||||
[ONOS][8]
|
||||
#### [ONOS][8]
|
||||
|
||||
<ruby>开放网络操作系统<rt>Open Network Operating System </rt></ruby>(ONOS)是一个 Linux 基金会项目,它是一个给服务提供商的软件定义网络操作系统,它具有可扩展性、高可用性、高性能和抽象功能来创建应用程序和服务。[ONOS 的 GitHub 地址][9]。
|
||||
<ruby>开放网络操作系统<rt>Open Network Operating System</rt></ruby>(ONOS)是一个 Linux 基金会项目,它是一个面向服务提供商的软件定义网络操作系统,它具有可扩展性、高可用性、高性能和抽象功能来创建应用程序和服务。
|
||||
|
||||
[OpenContrail][10]
|
||||
[ONOS 的 GitHub 地址][9]。
|
||||
|
||||
OpenContrail 是 Juniper Networks 的云开源网络虚拟化平台。它提供网络虚拟化的所有必要组件:SDN 控制器、虚拟路由器、分析引擎和已发布的上层 API。其 REST API 配置并收集来自系统的操作和分析数据。[OpenContrail 的 GitHub 地址][11]。
|
||||
#### [OpenContrail][10]
|
||||
|
||||
[OpenDaylight][12]
|
||||
OpenContrail 是 Juniper Networks 的云开源网络虚拟化平台。它提供网络虚拟化的所有必要组件:SDN 控制器、虚拟路由器、分析引擎和已发布的上层 API。其 REST API 配置并收集来自系统的操作和分析数据。
|
||||
|
||||
OpenDaylight 是 Linux 基金会的一个 OpenDaylight Foundation 项目,它是一个可编程的、提供给服务提供商和企业的软件定义网络平台。它基于微服务架构,可以在多供应商环境中的一系列硬件上实现网络服务。[OpenDaylight 的 GitHub 地址][13]。
|
||||
[OpenContrail 的 GitHub 地址][11]。
|
||||
|
||||
[Open vSwitch][14]
|
||||
#### [OpenDaylight][12]
|
||||
|
||||
Open vSwitch 是一个 Linux 基金会项目,具有生产级别质量的多层虚拟交换机。它通过程序化扩展设计用于大规模网络自动化,同时还支持标准管理接口和协议,包括 NetFlow、sFlow、IPFIX、RSPAN、CLI、LACP 和 802.1ag。它支持类似 VMware 的分布式 vNetwork 或者 Cisco Nexus 1000V 那样跨越多个物理服务器分发。[OVS 在 GitHub 的地址][15]。
|
||||
OpenDaylight 是 Linux 基金会旗下的 OpenDaylight 基金会项目,它是一个可编程的、提供给服务提供商和企业的软件定义网络平台。它基于微服务架构,可以在多供应商环境中的一系列硬件上实现网络服务。
|
||||
|
||||
[OPNFV][16]
|
||||
[OpenDaylight 的 GitHub 地址][13]。
|
||||
|
||||
<ruby>网络功能虚拟化开放平台<rt>Open Platform for Network Functions Virtualization</rt></ruby>(OPNFV) 是 Linux 基金会项目,它用于企业和服务提供商网络的 NFV 平台。它汇集了计算、存储和网络虚拟化方面的上游组件以创建 NFV 程序的端到端平台。[OPNFV 在 Bitergia 上的地址][17]。
|
||||
#### [Open vSwitch][14]
|
||||
|
||||
_要了解更多关于开源云计算趋势和查看顶级开源云计算项目完整列表,[请下载 Linux 基金会的 “开放云指南”。][18]_
|
||||
Open vSwitch 是一个 Linux 基金会项目,是具有生产级品质的多层虚拟交换机。它通过程序化扩展设计用于大规模网络自动化,同时还支持标准管理接口和协议,包括 NetFlow、sFlow、IPFIX、RSPAN、CLI、LACP 和 802.1ag。它支持类似 VMware 的分布式 vNetwork 或者 Cisco Nexus 1000V 那样跨越多个物理服务器分发。
|
||||
|
||||
[OVS 在 GitHub 的地址][15]。
|
||||
|
||||
#### [OPNFV][16]
|
||||
|
||||
<ruby>网络功能虚拟化开放平台<rt>Open Platform for Network Functions Virtualization</rt></ruby>(OPNFV)是 Linux 基金会项目,它用于企业和服务提供商网络的 NFV 平台。它汇集了计算、存储和网络虚拟化方面的上游组件以创建 NFV 程序的端到端平台。
|
||||
|
||||
[OPNFV 在 Bitergia 上的地址][17]。
|
||||
|
||||
_要了解更多关于开源云计算趋势和查看顶级开源云计算项目完整列表,[请下载 Linux 基金会的 “开放云指南”][18]。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
108
published/20170210 Best Third-Party Repositories for CentOS.md
Normal file
108
published/20170210 Best Third-Party Repositories for CentOS.md
Normal file
@ -0,0 +1,108 @@
|
||||
CentOS 上最佳的第三方仓库
|
||||
============================================================
|
||||
|
||||
![CentOS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/centos.png?itok=YRMQVk7U "CentOS")
|
||||
|
||||
> 从 Software Collections、EPEL 和 Remi 获得可靠的 CentOS 新版软件。
|
||||
|
||||
在 Red Hat 企业 Linux(RHEL) 上,提供那些早已老掉牙的软件已经是企业级软件厂商的传统了。这倒不是因为他们懒,而确实是用户需要。很多公司像看待家具一样看待软件:我买一张桌子,能用一辈子,软件不应该也这样吗?
|
||||
|
||||
CentOS 作为 RHEL 的复制品有着同样的遭遇。虽然 Red Hat 还在为这些被上游抛弃的过时软件提供支持、修补安全漏洞等,但如果你的应用依赖新版软件,你就得自己想办法了。我在这个问题上不止一次碰壁。LAMP 组合里任一个组件都需要其它所有组件能与其兼容,这有时就显得很麻烦。比如说去年我就被 RHEL/CentOS 折腾得够呛。RHEL/CentOS 第 6 版最高支持 PHP 5.3,第 7 版支持到 PHP 5.4。而 PHP 5.3 早在 2014 年 8 月就到达 EOL(End Of Life),不再被上游支持了;PHP 5.4 的 EOL 在 2015 年 9 月,5.5 则是 2016 年 7 月。有太多古老的软件版本,包括 MySQL、Python 等,它们本该像木乃伊一样被展示在博物馆里,却还活在你的系统上。
|
||||
|
||||
那么,可怜的管理员们该怎么办呢?如果你跑着 RHEL/CentOS,那应该先试试 [Software Collections][3],因为这是 Red Hat 唯一支持的新版软件包来源。Software Collections 为 CentOS 设立了专门的仓库,安装和管理都和其它第三方仓库一样。但如果你用的是 RHEL,情况就有点不同了,具体请参考 [RHEL 的解决方法][4]。Software Collections 同样支持 Fedora 和 Scientific Linux 。
|
||||
|
||||
### 安装 Software Collections
|
||||
|
||||
在 CentOS 6/7 上安装 Software Collections 的命令如下:
|
||||
|
||||
```
|
||||
$ sudo yum install centos-release-scl
|
||||
```
|
||||
|
||||
`centos-release-scl-rh` 可能作为依赖包被同时安装。
|
||||
|
||||
然后就可以像平常一样搜索、安装软件包了:
|
||||
|
||||
```
|
||||
$ yum search php7
|
||||
[...]
|
||||
rh-php70.x86_64 : Package that installs PHP 7.0
|
||||
[...]
|
||||
$ sudo yum install rh-php70
|
||||
```
|
||||
|
||||
最后一件事就是启用你的新软件包:
|
||||
|
||||
```
|
||||
$ scl enable rh-php70 bash
|
||||
$ php -v
|
||||
PHP 7.0.10
|
||||
```
|
||||
|
||||
此命令会开启一个新的 bash 并配置好环境变量以便运行新软件包。 如果需要的话,你还得安装对应的扩展包,比如对于 Python 、PHP、MySQL 等软件包,有些配置文件也需要修改以指向新版软件(比如 Apache )。
|
||||
|
||||
这些 SCL 软件包在重启后不会激活。SCL 的设计初衷就是在不影响原有配置的前提下,让新旧软件能一起运行。不过你可以通过 `~/.bashrc` 加载 SCL 提供的 `enable` 脚本来实现自动启用。 SCL 的所有软件包都安装在 `/opt` 下, 以我们的 PHP 7 为例,在 `~/.bashrc` 里加入一行:
|
||||
|
||||
```
|
||||
source /opt/rh/rh-php70/enable
|
||||
```
|
||||
|
||||
以后相应的软件包就能在重启后自动启用了。有新软件保驾护航,你终于可以专注于自己的业务了。
|
||||
|
||||
### 列出可用软件包
|
||||
|
||||
那么,到底 Software Collections 里都是些什么呢? centos-release-scl 里有一些由社区维护的额外的软件包。除了在 [CentOS Wiki][5] 查看软件包列表外,你还可以使用 Yum 。我们先来看看安装了哪些仓库:
|
||||
|
||||
```
|
||||
$ yum repolist
|
||||
[...]
|
||||
repo id repo name
|
||||
base/7/x86_64 CentOS-7 - Base
|
||||
centos-sclo-rh/x86_64 CentOS-7 - SCLo rh
|
||||
centos-sclo-sclo/x86_64 CentOS-7 - SCLo sclo
|
||||
extras/7/x86_64 CentOS-7 - Extras
|
||||
updates/7/x86_64 CentOS-7 - Updates
|
||||
```
|
||||
|
||||
Yum 没有专门用来打印某一个仓库中所有软件包的命令,所以你得这样来: (LCTT 译注:实际上有,`yum repo-pkgs REPO list`,需要 root 权限,dnf 同)
|
||||
|
||||
```
|
||||
$ yum --disablerepo "*" --enablerepo centos-sclo-rh \
|
||||
list available | less
|
||||
```
|
||||
|
||||
`--disablerepo` 与 `--enablerepo` 选项的用法没有详细的文档,这里简单说下。 实际上在这个命令里你并没有禁用或启用什么东西,而只是将你的搜索范围限制在某一个仓库内。 此命令会打印出一个很长的列表,所以我们用管道传递给 `less` 输出。
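
另外,按照上面译注提到的方式,较新的 Yum 也可以用 `repo-pkgs` 子命令直接列出某个仓库的软件包,效果类似(需要 root 权限):

```
$ sudo yum repo-pkgs centos-sclo-rh list | less
```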
|
||||
|
||||
### EPEL
|
||||
|
||||
强大的 Fedora 社区为 Fedora 及所有 RHEL 系的发行版维护着 [EPEL:Extra Packages for Enterprise Linux][6]。里面包含一些最新软件包以及一些未被发行版收纳的软件包。安装 EPEL 里的软件就不用麻烦 `enable` 脚本了,直接像平常一样用。你还可以用 `--disablerepo` 和 `--enablerepo` 选项指定从 EPEL 里安装软件包:
|
||||
|
||||
```
|
||||
$ sudo yum --disablerepo "*" --enablerepo epel install [package]
|
||||
```
|
||||
|
||||
### Remi Collet
|
||||
|
||||
Remi Collet 在 [Remi 的 RPM 仓库][7] 里维护着大量更新的和额外的软件包。使用前需要先安装 EPEL,因为 Remi 仓库依赖它。
|
||||
|
||||
CentOS wiki 上有较完整的仓库列表:[更多的第三方仓库][8] ,用哪些,不用哪些,里面都有建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/2/best-third-party-repositories-centos
|
||||
|
||||
作者:[CARLA SCHRODER][a]
|
||||
译者:[Dotcra](https://github.com/Dotcra)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/licenses/category/creative-commons-attribution
|
||||
[2]:https://www.linux.com/files/images/centospng
|
||||
[3]:https://www.softwarecollections.org/en/
|
||||
[4]:https://access.redhat.com/solutions/472793
|
||||
[5]:https://wiki.centos.org/SpecialInterestGroup/SCLo/CollectionsList
|
||||
[6]:https://fedoraproject.org/wiki/EPEL
|
||||
[7]:http://rpms.remirepo.net/
|
||||
[8]:https://wiki.centos.org/AdditionalResources/Repositories
|
97
published/20170210 Fedora 25 Wayland vs Xorg.md
Normal file
97
published/20170210 Fedora 25 Wayland vs Xorg.md
Normal file
@ -0,0 +1,97 @@
|
||||
Fedora 25: Wayland 大战 Xorg
|
||||
============
|
||||
|
||||
就像异形大战铁血战士的结果一样,后者略胜一筹。不管怎样,你可能知道,我最近测试了 [Fedora 25][1],体验还可以。总的来说,这个发行版表现的相当不错。它不是最快速的,但随着一系列的改进,变得足够稳定也足够好用。最重要的是,除了一些性能以及响应性的损失,Wayland 并没有造成我的系统瘫痪。但这还仅仅是个开始。
|
||||
|
||||
Wayland 作为一种面向消费者的技术还处在襁褓期,或者至少在桌面应用上应该这么看。因此,我必须继续测试,绝不弃坑。在过去积极使用 Fedora 25 的几个星期里,我确实碰到了几个其它的问题,有些不用太担心,有些确实很恼人,有些很奇怪,有些则毫无意义。让我们来一一讲述吧!
|
||||
|
||||
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-teaser.jpg)
|
||||
|
||||
注: 图片来自 [Wikimedia][2] 并做了些修改, [CC BY-SA 3.0][3] 许可
|
||||
|
||||
### Wayland 并不支持所有软件
|
||||
|
||||
没错,这是一个事实。如果你去网站上阅读相关的信息,你会发现各种各样的软件都还没为 Wayland 做好准备。当然,我们都知道 Fedora 是一个激进的高端发行版,它是为了探索新功能而出现的测试床。这没毛病。有一段时间,所有东西都很正常,没有瞎忙活,没有错误。但接下来,我突然需要使用 GParted。我当时很着急,正在排除一个大故障,然而我却让自己置身于一些无意义的额外工作。 GParted 没办法在 Wayland 下直接启动。在探索了更多一些细节之后,我知道了该分区软件目前还没有被 Wayland 支持。
|
||||
|
||||
![GParted 无法运行于 Wayland](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-gparted.jpg)
|
||||
|
||||
问题就在于,我并不清楚还有哪些应用不能在 Wayland 下运行,而且我没法提前准确地预判这一点。通过在线搜索,我还是不能快速找到一个简明的当前不兼容软件列表。可能只是我不太擅长搜索,但显而易见,这方面的信息就像 “Wayland + 兼容性” 这个话题本身一样零零散散。
|
||||
|
||||
我找到了一篇[自诩][4] Wayland 很棒的文章、一个目前已被这个新玩意儿支持的 [Gnome][5] 应用程序列表、一些 ArchWiki 上难懂的资料、一篇在[英伟达][6]开发论坛上晦涩得让我后悔点进去的主题,以及一些其他含糊的讨论。
|
||||
|
||||
### 再次提到性能
|
||||
|
||||
在 Fedora 25 上,我把登录会话从 Gnome(Wayland)切换到 Gnome Xorg,观察这会对系统产生什么影响。我之前已经提到过在同一台笔记本([联想 G50][8])上的性能跑分,以及与 [Fedora 24][7] 的比较,但这次的结果会更加准确。
|
||||
|
||||
Wayland(截图 1)空闲时的内存占用为 1.4GB, CPU 的平均负载约为 4-5%。Xorg(截图 2)占用了同样大小的内存,处理器消耗了全部性能的 3-4%,单纯从数字上来看少了一小点。但是 Xorg 会话的体验却好得多。虽然只是毫秒级的差距,但你能感受得到。传统的会话方式看起来更加的灵动、快速、清新一点。但 Wayland 落后得并不明显。如果你对你的电脑响应速度很敏感,你可能会对这点延迟不会太满意。当然,这也许只是作为新手缺乏优化的缘故,Wayland 会随时间进步。但这一点也是我们所不能忽视的。
|
||||
|
||||
![Wayland resources](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-resources-wayland.jpg)
|
||||
|
||||
![Xorg resources](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-resources-xorg.jpg)
|
||||
|
||||
### 批评
|
||||
|
||||
我对此并不高兴。虽然并不是很愤怒,但我不喜欢为了能完全享受我的桌面体验,我却需要登录到传统的 X 会话。因为 X 给了我全部,但 Wayland 则没有。这意味着我不能一天都用着 Wayland。我喜欢探索科技,但我不是一个盲目的狂热追随者。我只是想用我的桌面,有时我可能需要它快速响应。注销然后重新登录在急需使用的时候会成为恼人的麻烦。我们遇到这个问题的原因就是 Wayland 没有让 Linux 桌面用户的生活变得更简单,而恰恰相反。
|
||||
|
||||
引用:
|
||||
|
||||
> Wayland 是为了成为 X 的更加简单的替代品,更加容易开发和维护。建议 GNOME 和 KDE 都使用它。
|
||||
|
||||
你能看到,这就是问题的一方面原因。东西不应该被设计成容易开发或维护。它可以是满足了所有其它消费需求之后提供的有益副产品。但如果没有,那么不管它对于程序员敲代码来说多么困难或简单都将不重要。那是他们的工作。科技的所有目的都是为了达到一种无缝并且流畅的用户体验。
|
||||
|
||||
不幸的是,现在很大数量的产品都被重新设计或者开发,只是为了使软件人员更加轻松,而不是用户。在很大程度上,Gnome 3、PulseAudio、[Systemd][9] 和 Wayland 都没有遵循提高用户体验的宗旨。在这个意义上来说,它们更像是一种打扰,而没有为 Linux 桌面生态的稳定性和易用性作出贡献。
|
||||
|
||||
这也是 Linux 桌面相对不成熟的一个主要原因:它被设计成供开发人员自我服务的产品,像一个活生生的生物,而不是俯首服务于用户各种想法和心愿的工具。而伟大的事物本应这样形成:先满足主要需求,再去操心细节。优秀的用户体验不依赖于(也永远不会依赖于)编程语言、编译器的选择,或任何其他无意义的东西。如果依赖了,那么不管是谁设计了这个抽象得不够好的产品,我们得到的都会是一个失败的作品,需要把它的存在抹去。
|
||||
|
||||
那么在我的展望中,我不在乎是否要吐血十升去编译一个 Xorg 或其它什么的版本。我是一个用户,我所在乎的只是我的桌面能否像它昨天或者 5 年前一样健壮地工作。没事的情况下,我不会对宏、类、变量、声明、结构体,或其他任何极客的计算机科技感兴趣。那是不着边际的。一个产品宣传自己是被创造出来为人们的开发提供方便的,那是个悖论。如果接下来再也不用去开发它了,这样反而会使事情更简单。
|
||||
|
||||
现在,事实是 Wayland 大体上可用,但它仍然不如 Xorg 好,也不应该作为一个就绪的产品被放到任何桌面上。只有当它能够悄无声息地无缝取代那些过时的技术时,它才算获得了所需的成功;在那之后,它用 C、D 或者 K 之类的什么语言编写、拥有开发者需要的任何东西都无所谓。而在那之前,它都只是一个蚕食资源和人们精力的寄生虫而已。
|
||||
|
||||
不要误会,我们需要进步,需要改变。但它必须为了进化的目而服务。现在 Xorg 能很好地满足用户需求了吗?它能为第三方库提供图形支持吗?它能支持 HD、UHD、DPI 或其他的什么吗?你能用它玩最新的游戏吗?行还是不行?如果不行,那么它就需要被改进。这些就是进化的驱动力,而不是写代码或者编译代码的困难程度。软件开发者是数字工业的矿工,他们需要努力工作而取悦于用户。就像短语“更加易于开发”应该被取缔一样,那些崇尚于此的人也应该用老式收音机的电池处以电刑,然后用没有空调的飞船流放到火星上去。如果你不能写出高明的代码,那是你的问题。用户不能因为开发者认为自己是公主而遭受折磨。
|
||||
|
||||
### 结语
|
||||
|
||||
说到这里。大体上说,Wayland 还可以,并不差。但这说的就像是某人决定修改你工资单上分配比例,导致你从昨天能赚 100% 到今天只能赚 83% 一样。讲道理这是不能接受的,即使 Wayland 工作的相当好。正是那些不能运作的东西导致如此大的不同。忽略所有批评它的一面,Wayland 被认为降低了可用性、性能以及软件的知名度,这正是 Fedora 亟待解决的问题。
|
||||
|
||||
其他的发行版会跟进,然后我们会看到历史重演,就像 Gnome 3 和 Systemd 所发生的一样。没有完全准备好的东西被放到开放环境中,然后我们花费一两年时间修复那些本无需修复的东西,最终我们将拥有的是我们已经拥有的相同功能,只是用不同的编程语言来实现。我并不感兴趣这些。计算机科学曾在 1999 年非常受欢迎,当时 Excel 用户每小时能赚 50 美元。而现在,编程就像是躲在平底船下划桨,人们并不会在乎你在甲板下流下的汗水与磨出的水泡。
|
||||
|
||||
性能可能不是大问题,因为你可以接受 1-2% 的波动,尤其是它本来就会受到各种随机因素的影响,如果你用 Linux 超过一两年就会知道这一点。但是无法启动应用是个大问题。不过至少,Fedora 还友好地提供了传统的会话方式,只是它可能会在 Wayland 完全成熟之前就被移除。让我们拭目以待。不,不会有灾难,我最初的 Fedora 25 测评也支持这个看法。我们遇到的只是烦恼,不必要的烦恼。啊,这是 Linux 故事中的第 9000 集。
|
||||
|
||||
那么,在今天结束之际,我们已经讨论了所有事情。从中我们学到:**臣伏于 Xorg!天呐!**真棒,现在我将淡入背景音乐,而笑声会将你的欢乐带给寒冷的夜晚。再见!
|
||||
|
||||
干杯。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
我是 Igor Ljubuncic。现在大约 38 岁,已婚但还没有孩子。我现在在一个大胆创新的云科技公司做首席工程师。直到大约 2015 年初,我还在一个全世界最大的 IT 公司之一中做系统架构工程师,和一个工程计算团队开发新的基于 Linux 的解决方案,优化内核以及攻克 Linux 的问题。在那之前,我是一个为高性能计算环境设计创新解决方案的团队的技术领导。还有一些其他花哨的头衔,包括系统专家、系统程序员等等。所有这些都曾是我的爱好,但从 2008 年开始成为了我的有偿的工作。还有什么比这更令人满意的呢?
|
||||
|
||||
从 2004 年到 2008 年间,我曾通过作为医学影像行业的物理学家来糊口。我的工作专长集中在解决问题和算法开发。为此,我广泛地使用了 Matlab,主要用于信号和图像处理。另外,我得到了几个主要的工程方法学的认证,包括 MEDIC 六西格玛绿带、试验设计以及统计工程学。
|
||||
|
||||
我也写过书,包括高度魔幻类和 Linux 上的技术工作方面的,彼此交织。
|
||||
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: http://www.dedoimedo.com/computers/fedora-25-wayland-vs-xorg.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[cycoe](https://github.com/cycoe)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/faq.html
|
||||
|
||||
[1]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
|
||||
[2]:https://commons.wikimedia.org/wiki/File:DragonCon-AlienVsPredator.jpg
|
||||
[3]:https://creativecommons.org/licenses/by-sa/3.0/deed.en
|
||||
[4]:https://wayland.freedesktop.org/faq.html
|
||||
[5]:https://wiki.gnome.org/Initiatives/Wayland/Applications
|
||||
[6]:https://devtalk.nvidia.com/default/topic/925605/linux/nvidia-364-12-release-vulkan-glvnd-drm-kms-and-eglstreams/
|
||||
[7]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[8]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
|
||||
[9]:http://www.ocsmag.com/2016/10/19/systemd-progress-through-complexity/
|
@ -0,0 +1,216 @@
|
||||
8 个优秀的开源 Markdown 编辑器
|
||||
============================================================
|
||||
|
||||
### Markdown
|
||||
|
||||
首先,对 Markdown 进行一个简单的介绍。Markdown 是由 John Gruber 和 Aaron Swartz 共同创建的一种轻量级纯文本格式语法。Markdown 可以让用户“以易读、易写的纯文本格式来进行写作,然后可以将其转换为有效格式的 XHTML(或 HTML)“。Markdown 语法只包含一些非常容易记住的符号。其学习曲线平缓;你可以在炒蘑菇的同时一点点学习 Markdown 语法(大约 10 分钟)。通过使用尽可能简单的语法,错误率达到了最小化。除了拥有友好的语法,它还具有直接输出干净、有效的(X)HTML 文件的强大功能。如果你看过我的 HTML 文件,你就会知道这个功能是多么的重要。
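
举个简单的例子,下面这一小段就涵盖了最常用的几个符号(标题、强调、列表和链接),它们都可以被直接转换为干净的 HTML:

```
# 一级标题

这是 **加粗**、*斜体* 和 `行内代码`。

- 列表项一
- 列表项二

[一个链接](https://example.com)
```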
|
||||
|
||||
Markdown 格式语法的主要设计目标是实现最大的可读性:用户能够以纯文本的形式直接发布一份 Markdown 格式的文件。用 Markdown 写作的另一个优点是易于在计算机、智能手机和人与人之间共享。几乎所有的内容管理系统都支持 Markdown。它作为一种网络写作格式流行起来,并产生了被许多服务(比如 GitHub 和 Stack Exchange)采用的变种。
|
||||
|
||||
你可以使用任何文本编辑器来写 Markdown 文件。但我建议使用一个专门为这种语法设计的编辑器。这篇文章中所讨论的软件允许你使用 Markdown 语法来写各种格式的专业文档,包括博客文章、演示文稿、报告、电子邮件以及幻灯片等。另外,所有的应用都是在开源许可证下发布的,在 Linux、OS X 和 Windows 操作系统下均可用。
|
||||
|
||||
|
||||
### Remarkable
|
||||
|
||||
![Remarkable - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Remarkable.png?resize=800%2C319&ssl=1)
|
||||
|
||||
让我们从 Remarkable 开始。对这样一个相当出色的 Markdown 编辑器来说,这个名字可谓名副其实。它并不支持 Markdown 的全部功能特性,但该有的功能都有,其语法与 GitHub 的 Markdown 类似。
|
||||
|
||||
你可以使用 Remarkable 来写 Markdown 文档,并在实时预览窗口查看更改。你可以把你的文件导出为 PDF 格式(带有目录)和 HTML 格式文件。它有强大的配置选项,从而具有许多样式,因此,你可以把它配置成你最满意的 Markdown 编辑器。
|
||||
|
||||
其他一些特性:
|
||||
|
||||
* 语法高亮
|
||||
* 支持 [GitHub 风味的 Markdown](https://linux.cn/article-8399-1.html)
|
||||
* 支持 MathJax - 通过高级格式呈现丰富文档
|
||||
* 键盘快捷键
|
||||
|
||||
在 Debian、Ubuntu、Fedora、SUSE 和 Arch 系统上均有可用的 Remarkable 简易安装程序。
|
||||
|
||||
主页: [https://remarkableapp.github.io/][4]
|
||||
许可证: MIT 许可
|
||||
|
||||
### Atom
|
||||
|
||||
![Atom - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Atom-Markdown.png?resize=800%2C328&ssl=1)
|
||||
|
||||
毫无疑问,Atom 是一个神话般的文本编辑器:超过 50 个开源包集合在一个精巧的内核上,构成了 Atom。凭借对 Node.js 的支持以及全套功能特性,Atom 是我最喜欢用来写代码的编辑器。Atom 的特性在[杀手级开源应用][5]一文中有更详细的介绍,它确实非常强大。但是作为一个 Markdown 编辑器,Atom 的默认安装还有许多不足之处,缺少不少 Markdown 相关的特性。例如,正如上图所展示的,它默认不支持公式渲染。
|
||||
|
||||
但是,开源拥有强大的力量,这是我强烈提倡开源的一个重要原因。Atom 上有许多包以及一些复刻,从而添加了缺失的功能特性。比如,Markdown Preview Plus 提供了 Markdown 文件的实时预览,并伴有数学公式渲染和实时重加载。另外,你也可以尝试一下 [Markdown Preview Enhanced][6]。如果你需要自动滚动特性,那么 [markdown-scroll-sync][7] 可以满足你的需求。我是 [Markdown-Writer][8]和 [Markdown-pdf][9]的忠实拥趸,后者支持将 Markdown 快速转换为 PDF、PNG 以及 JPEG 文件。
|
||||
|
||||
这个方式体现了开源的理念:允许用户通过添加扩展来提供所需的特性。这让我想起了 Woolworths 的 n 种杂拌糖果的故事。虽然需要多付出一些努力,但能收获最好的回报。
|
||||
|
||||
主页: [https://atom.io/][10]
|
||||
许可证: MIT 许可
|
||||
|
||||
### Haroopad
|
||||
|
||||
![Haroopad - - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Haroopad-1.png?resize=800%2C332&ssl=1)
|
||||
|
||||
Haroopad 是一个优秀的 Markdown 编辑器,是一个用于创建适宜 Web 的文档的处理器。使用 Haroopad 可以创作各种格式的文档,比如博客文章、幻灯片、演示文稿、报告和电子邮件等。Haroopad 在 Windows、Mac OS X 和 Linux 上均可用,有 Debian/Ubuntu 的软件包,也有 Windows 和 Mac 的二进制文件。该应用程序使用了 node-webkit、CodeMirror、marked 以及 Twitter 的 Bootstrap。
|
||||
|
||||
Haroo 在韩语中的意思是“一天”。
|
||||
|
||||
它的功能列表非常可观。请看下面:
|
||||
|
||||
* 主题、皮肤和 UI 组件
|
||||
* 超过 30 种不同的编辑主题 - tomorrow-night-bright 和 zenburn 是近期刚添加的
|
||||
* 编辑器中的代码块的语法高亮
|
||||
* Ruby、Python、PHP、Javascript、C、HTML 和 CSS 的语法高亮支持
|
||||
* 基于 CodeMirror,这是一个在浏览器中使用 JavaScript 实现的通用文本编辑器
|
||||
* 实时预览主题
|
||||
* 基于 markdown-css 的 7 个主题
|
||||
* 语法高亮
|
||||
* 基于 hightlight.js 的 112 种语言以及 49 种样式
|
||||
* 定制主题
|
||||
* 基于 CSS (层叠样式表)的样式
|
||||
* 演示模式 - 对于现场演示非常有用
|
||||
* 绘图 - 流程图和序列图
|
||||
* 任务列表
|
||||
* 扩展 Markdown 语法,支持 TOC(目录)、 GitHub 风味 Markdown 以及数学表达式、脚注和任务列表等
|
||||
* 字体大小
|
||||
* 使用首选窗口和快捷键来设置编辑器和预览字体大小
|
||||
* 嵌入富媒体内容
|
||||
* 视频、音频、3D、文本、开放图形以及 oEmbed
|
||||
* 支持大约 100 种主要的网络服务(YouTude、SoundCloud、Flickr 等)
|
||||
* 支持拖放
|
||||
* 显示模式
|
||||
* 默认:编辑器|预览器,倒置:预览器|编辑器,仅编辑器,仅预览器(View -> Mode)
|
||||
* 插入当前日期和时间
|
||||
* 多种格式支持(Insert -> Data & Time)
|
||||
* HTML 到 Markdown
|
||||
* 拖放你在 Web 浏览器中选择好的文本
|
||||
* Markdown 解析选项
|
||||
* 大纲预览
|
||||
* 纯粹主义者的 Vim 键位绑定
|
||||
* Markdown 自动补全
|
||||
* 导出为 PDF 和 HTML
|
||||
* 带有样式的 HTML 复制到剪切板可用于所见即所得编辑器
|
||||
* 自动保存和恢复
|
||||
* 文件状态信息
|
||||
* 换行符或空格缩进
|
||||
* (一、二、三)列布局视图
|
||||
* Markdown 语法帮助对话框
|
||||
* 导入和导出设置
|
||||
* 通过 MathJax 支持 LaTeX 数学表达式
|
||||
* 导出文件为 HTML 和 PDF
|
||||
* 创建扩展来构建自己的功能
|
||||
* 高效地将文件转换进博客系统:WordPress、Evernote 和 Tumblr 等
|
||||
* 全屏模式-尽管该模式不能隐藏顶部菜单栏和顶部工具栏
|
||||
* 国际化支持:英文、韩文、西班牙文、简体中文、德文、越南文、俄文、希腊文、葡萄牙文、日文、意大利文、印度尼西亚文、土耳其文和法文
|
||||
|
||||
主页 [http://pad.haroopress.com/][11]
|
||||
许可证: GNU GPL v3 许可
|
||||
|
||||
### StackEdit
|
||||
|
||||
![StackEdit - a web based Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/StackEdit.png?resize=800%2C311&ssl=1)
|
||||
|
||||
StackEdit 是一个功能齐全的 Markdown 编辑器,基于 PageDown(该 Markdown 库被 Stack Overflow 和其他一些 Stack 交流网站使用)。不同于在这个列表中的其他编辑器,StackEdit 是一个基于 Web 的编辑器。在 Chrome 浏览器上即可使用 StackEdit 。
|
||||
|
||||
特性包括:
|
||||
|
||||
* 实时预览 HTML,并通过滚动绑定特性将编辑器和预览窗口的滚动条联动
|
||||
* 支持 Markdown Extra 和 GitHub 风味 Markdown,Prettify/Highlight.js 语法高亮
|
||||
* 通过 MathJax 支持 LaTeX 数学表达式
|
||||
* 所见即所得的控制按键
|
||||
* 布局配置
|
||||
* 不同风格的主题支持
|
||||
* 按需选用(à la carte)的扩展
|
||||
* 离线编辑
|
||||
* 可以与 Google 云端硬盘(多帐户)和 Dropbox 在线同步
|
||||
* 一键发布到 Blogger、Dropbox、Gist、GitHub、Google Drive、SSH 服务器、Tumblr 和 WordPress
|
||||
|
||||
主页: [https://stackedit.io/][12]
|
||||
许可证: Apache 许可
|
||||
|
||||
### MacDown
|
||||
|
||||
![MacDown - OS X Markdown editor](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/02/MacDown.png?resize=800%2C422&ssl=1)
|
||||
|
||||
MacDown 是这个列表中唯一一个只运行在 macOS 上的全特性编辑器。具体来说,它需要 OS X 10.8 或更高的版本。它在内部使用 Hoedown 将 Markdown 渲染成 HTML,这使得它的特性更加强大。Hoedown 是 Sundown 的一个复活复刻,它完全符合标准,无依赖,具有良好的扩展支持和 UTF-8 感知能力。
|
||||
|
||||
MacDown 基于 Mou,这是专为 Web 开发人员设计的专用解决方案。
|
||||
|
||||
它提供了良好的 Markdown 渲染:通过 Prism 提供的语言识别实现代码块的语法高亮,支持 MathML 和 LaTeX 渲染、GFM 任务列表、Jekyll 头信息(front-matter)以及可选的高级自动补全。更重要的是,它占用资源很少。想在 OS X 上写 Markdown?MacDown 是我针对 Web 开发者的开源推荐。
|
||||
|
||||
主页: [https://macdown.uranusjr.com/][13]
|
||||
许可证: MIT 许可
|
||||
|
||||
|
||||
### ghostwriter
|
||||
|
||||
![ghostwriter - cross-platform Markdown editor](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/02/ghostwriter.png?resize=800%2C310&ssl=1)
|
||||
|
||||
ghostwriter 是一个跨平台的、具有美感的、无干扰的 Markdown 编辑器。它内建了 Sundown 处理器支持,还可以自动检测 pandoc、MultiMarkdown、Discount 和 cmark 处理器。它试图成为一个朴实的编辑器。
|
||||
|
||||
ghostwriter 有许多很好的功能设置,包括语法高亮、全屏模式、聚焦模式、主题、通过 Hunspell 进行拼写检查、实时字数统计、实时 HTML 预览、HTML 预览自定义 CSS 样式表、图片拖放支持以及国际化支持。Hemingway 模式按钮可以禁用退格键和删除键。一个新的 “Markdown cheat sheet” HUD 窗口是一个有用的新增功能。主题支持很基本,但在 [GitHub 仓库上][14]也有一些可用的试验性主题。
|
||||
|
||||
ghostwriter 的功能有限。我越来越欣赏这个应用的通用性,部分原因是其简洁的界面能够让写作者完全集中在策划内容上。这一应用非常值得推荐。
|
||||
|
||||
ghostwirter 在 Linux 和 Windows 系统上均可用。在 Windows 系统上还有一个便携式的版本可用。
|
||||
|
||||
主页: [https://github.com/wereturtle/ghostwriter][15]
|
||||
许可证: GNU GPL v3 许可
|
||||
|
||||

### Abricotine

![Abricotine - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Abricotine.png?resize=800%2C316&ssl=1)

Abricotine 是一个为桌面环境构建的开源跨平台 Markdown 编辑器,在 Linux、OS X 和 Windows 上均可用。

该应用支持 Markdown 语法以及一些 GitHub 风味的 Markdown 增强(比如表格)。它允许用户直接在文本编辑器中预览文档,而不是在侧边窗格中。

该应用有一系列有用的特性,包括拼写检查、以 HTML 格式保存文件,或把富文本复制粘贴到邮件客户端。你也可以在侧边窗格中显示文档目录,展示语法高亮的代码,以及辅助标记、锚点和隐藏字符等。它目前正处于早期的开发阶段,因此还有一些很基本的 bug 需要修复,但它值得关注。它自带两个主题,如果有能力,你也可以添加你自己的主题。

主页: [http://abricotine.brrd.fr/][16]

许可证: GNU 通用公共许可证 v3 或更高许可

### ReText

![ReText - Linux Markdown editor](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/02/ReText.png?resize=800%2C270&ssl=1)

ReText 是一个简单而强大的 Markdown 和 reStructuredText 文本编辑器。用户可以控制所有输出的格式。它编辑的文件是纯文本文件,但可以导出为 PDF、HTML 和其他格式的文件。ReText 官方仅支持 Linux 系统。

特性包括:

* 全屏模式
* 实时预览
* 同步滚动(针对 Markdown)
* 支持数学公式
* 拼写检查
* 分页符
* 导出为 HTML、ODT 和 PDF 格式
* 使用其他标记语言

主页: [https://github.com/retext-project/retext][17]

许可证: GNU GPL v2 或更高许可

--------------------------------------------------------------------------------

via: https://www.ossblog.org/markdown-editors/

作者:[Steve Emms][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ossblog.org/author/steve/
[1]:https://www.ossblog.org/author/steve/
[2]:https://www.ossblog.org/markdown-editors/#comments
[3]:https://www.ossblog.org/category/utilities/
[4]:https://remarkableapp.github.io/
[5]:https://www.ossblog.org/top-software/2/
[6]:https://atom.io/packages/markdown-preview-enhanced
[7]:https://atom.io/packages/markdown-scroll-sync
[8]:https://atom.io/packages/markdown-writer
[9]:https://atom.io/packages/markdown-pdf
[10]:https://atom.io/
[11]:http://pad.haroopress.com/
[12]:https://stackedit.io/
[13]:https://macdown.uranusjr.com/
[14]:https://github.com/jggouvea/ghostwriter-themes
[15]:https://github.com/wereturtle/ghostwriter
[16]:http://abricotine.brrd.fr/
[17]:https://github.com/retext-project/retext
@ -1,24 +1,25 @@
|
||||
# 从损坏的 Linux EFI 安装中恢复
|
||||
从损坏的 Linux EFI 安装中恢复
|
||||
=========
|
||||
|
||||
在过去的十多年里,Linux 发行版在安装前、安装过程中、以及安装后偶尔会失败,但我总是有办法恢复系统并继续正常工作。然而,[Solus][1] 损坏了我的笔记本。
|
||||
|
||||
GRUB 恢复。不行,重装。还不行!Ubuntu 拒绝安装,报错目标设备不是这个或那个。哇。我之前还没有遇到过想这样的事情。我的测试机已变成无用的砖块。我们该失望吗?不,绝对不。让我来告诉你怎样你可以修复它吧。
|
||||
GRUB 恢复。不行,重装。还不行!Ubuntu 拒绝安装,目标设备的报错一会这样,一会那样。哇。我之前还没有遇到过像这样的事情。我的测试机已变成无用的砖块。难道我该绝望吗?不,绝对不。让我来告诉你怎样你可以修复它吧。
|
||||
|
||||
### 问题详情
|
||||
|
||||
所有事情都从 Solus 尝试安装它自己的启动引导器 - goofiboot 开始。不知道什么原因、它没有成功完成安装,留给我的就是一个无法启动的系统。BIOS 之后,我有一个 GRUB 恢复终端。
|
||||
所有事情都从 Solus 尝试安装它自己的启动引导器 - goofiboot 开始。不知道什么原因、它没有成功完成安装,留给我的就是一个无法启动的系统。经过 BIOS 引导之后,我进入一个 GRUB 恢复终端。
|
||||
|
||||
![安装失败](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
|
||||
![安装失败](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
|
||||
|
||||
我尝试在终端中手动修复,使用类似和我在我的扩展 [GRUB2 指南][2]中介绍的这个或那个命令。但还是不行。然后我尝试按照我在[GRUB2 和 EFI 指南][3]中的建议从 Live CD(译者注:Live CD 是一个完整的计算机可引导安装媒介,它包括在计算机内存中运行的操作系统,而不是从硬盘驱动器加载; CD 本身是只读的。 它允许用户为任何目的运行操作系统,而无需安装它或对计算机的配置进行任何更改)中恢复。我用 efibootmgr 工具创建了一个条目,确保标记它为有效。正如我们之前在指南中做的那样,之前这些是能正常工作的。哎,现在这个方法也不起作用。
|
||||
我尝试在终端中手动修复,使用类似和我在我详实的 [GRUB2 指南][2]中介绍的各种命令。但还是不行。然后我尝试按照我在 [GRUB2 和 EFI 指南][3]中的建议从 Live CD 中恢复(LCTT 译注:Live CD 是一个完整的计算机可引导安装媒介,它包括在计算机内存中运行的操作系统,而不是从硬盘驱动器加载;CD 本身是只读的。 它允许用户为任何目的运行操作系统,而无需安装它或对计算机的配置进行任何更改)。我用 efibootmgr 工具创建了一个引导入口,确保标记它为有效。正如我们之前在指南中做的那样,之前这些是能正常工作的。哎,现在这个方法也不起作用。
|
||||
|
||||
我尝试一次完整的 Ubuntu 安装,把它安装到 Solus 所在的分区,希望安装程序能给我一些有用的信息。但是 Ubuntu 无法按成安装。它报错:failed to install into /target。又回到开始的地方了。怎么办?
|
||||
我尝试做一个完整的 Ubuntu 安装,把它安装到 Solus 所在的分区,希望安装程序能给我一些有用的信息。但是 Ubuntu 无法完成安装。它报错:failed to install into /target。又回到开始的地方了。怎么办?
|
||||
|
||||
### 手动清除 EFI 分区
|
||||
|
||||
显然,我们的 EFI 分区出现了严重问题。简单回顾以下,如果你使用的是 UEFI,那么你需要一个单独的 FAT-32 格式化分区。该分区用于存储 EFI 引导镜像。例如,当你安装 Fedora 时,Fedora 引导镜像会被拷贝到 EFI 子目录。每个操作系统都会被存储到一个它自己的目录,一般是 /boot/efi/EFI/<操作系统版本>/。
|
||||
显然,我们的 EFI 分区出现了严重问题。简单回顾一下,如果你使用的是 UEFI,那么你需要一个单独的 FAT-32 格式化的分区。该分区用于存储 EFI 引导镜像。例如,当你安装 Fedora 时,Fedora 引导镜像会被拷贝到 EFI 子目录。每个操作系统都会被存储到一个它自己的目录,一般是 `/boot/efi/EFI/<操作系统版本>/`。
|
||||
|
||||
![EFI 分区内容](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
|
||||
![EFI 分区内容](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
|
||||
|
||||
在我的 [G50][4] 机器上,这里有很多各种发行版测试条目,包括:centos、debian、fedora、mx-15、suse、Ubuntu、zorin 以及其它。这里也有一个 goofiboot 目录。但是,efibootmgr 并没有在它的菜单中显示 goofiboot 条目。显然这里出现了一些问题。
|
||||
|
||||
@ -45,7 +46,7 @@ Boot2003* EFI Network
|
||||
P.S. 上面的输出是在 LIVE 会话中运行命令生成的!
|
||||
|
||||
|
||||
我决定清除所有非默认的以及非微软的条目然后重新开始。显然,有些东西被损坏了,妨碍了新的发行版设置它们自己的启动引导程序。因此我删除了 /boot/efi/EFI 分区下面出 Boot 和 Windows 外的所有目录。同时,我也通过删除所有额外的条目更新了启动管理器。
|
||||
我决定清除所有非默认的以及非微软的条目然后重新开始。显然,有些东西被损坏了,妨碍了新的发行版设置它们自己的启动引导程序。因此我删除了 `/boot/efi/EFI` 分区下面除了 Boot 和 Windows 以外的所有目录。同时,我也通过删除所有额外的条目更新了启动管理器。
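
原文没有给出删除这些目录的具体命令,下面是一个操作思路的示意(假设 EFI 分区挂载在 `/boot/efi`,目录名请以你机器上实际列出的为准,切勿误删 Boot 和 Windows/Microsoft 相关目录):

```
# 先列出 ESP 上现有的引导目录,确认哪些需要保留
ls /boot/efi/EFI

# 示意:逐个删除多余的发行版目录(goofiboot、fedora 等仅为文中提到的例子)
rm -r /boot/efi/EFI/goofiboot
rm -r /boot/efi/EFI/fedora
```

随后,再用下面的 efibootmgr 命令删除启动管理器中对应的无效条目: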
|
||||
|
||||
```
|
||||
efibootmgr -b <hex> -B <hex>
|
||||
@ -53,23 +54,20 @@ efibootmgr -b <hex> -B <hex>
|
||||
|
||||
最后,我重新安装了 Ubuntu,并仔细监控了 GRUB 安装和配置的过程。这次终于成功了。正如预期的那样,几个无效条目报了一些错误,但整个安装过程还是顺利完成了。
|
||||
|
||||
![安装错误](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
|
||||
![安装的错误消息](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
|
||||
|
||||
![安装成功](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
|
||||
![安装的成功消息](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
|
||||
|
||||
### 额外阅读
|
||||
|
||||
如果你不喜欢这种手动修复,你可以阅读:
|
||||
|
||||
```
|
||||
[Boot-Info][5] 手册,里面有帮助你恢复系统的自动化工具
|
||||
|
||||
[Boot-repair-cd][6] 自动恢复工具下载页面
|
||||
```
|
||||
- [Boot-Info][5] 手册,里面有帮助你恢复系统的自动化工具
|
||||
- [Boot-repair-cd][6] 自动恢复工具下载页面
|
||||
|
||||
### 总结
|
||||
|
||||
如果你遇到由于 EFI 分区破坏而导致系统严重瘫痪的情况,那么你可能需要遵循本指南中的建议。 删除所有非默认条目。 如果你使用 Windows 进行多重引导,请确保不要修改任何和 Microsoft 相关的东西。 然后相应地更新引导菜单,以便删除损坏的条目。 重新运行所需发行版的安装设置,或者尝试用之前介绍的比较不严格的修复方法。
|
||||
如果你遇到由于 EFI 分区破坏而导致系统严重瘫痪的情况,那么你可能需要遵循本指南中的建议。 删除所有非默认条目。 如果你使用 Windows 进行多重引导,请确保不要修改任何和 Microsoft 相关的东西。 然后相应地更新引导菜单,以便删除损坏的条目。 重新运行所需发行版的安装设置,或者尝试用之前介绍的比较不严谨的修复方法。
|
||||
|
||||
我希望这篇小文章能帮你节省一些时间。Solus 对我系统的更改使我很懊恼。这些事情本不应该发生,恢复过程也应该更简单。不管怎样,虽然事情看起来很可怕,修复起来并不难。你只需要删除损坏的文件,然后重新开始。你的数据应该不会受到影响,你也应该能够顺利进入运行中的系统并继续工作。开始吧。
|
||||
|
||||
@ -84,12 +82,6 @@ efibootmgr -b <hex> -B <hex>
|
||||
|
||||
从 2004 到 2008 年,我通过在医疗图像行业担任物理专家养活自己。我的工作主要关注解决问题和开发算法。为此,我广泛使用 Matlab,主要用于信号和图像处理。另外,我已通过几个主要工程方法的认证,包括 MEDIC Six Sigma Green Belt、实验设计以及统计工程。
|
||||
|
||||
有时候我也会写书,包括 Linux 创新及技术工作。
|
||||
|
||||
往下滚动你可以查看我开源项目的完整列表、发表文章以及专利。
|
||||
|
||||
有关我奖项、提名以及 IT 相关认证的完整列表,稍后也会有。
|
||||
|
||||
|
||||
-------------
|
||||
|
||||
@ -98,7 +90,7 @@ via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,28 +1,28 @@
|
||||
如何在 CentOS 7 中安装、配置 SFTP - [全面指南]
|
||||
完全指南:如何在 CentOS 7 中安装、配置和安全加固 FTP 服务
|
||||
============================================================
|
||||
|
||||
FTP(文件传输协议)是一种传统且被广泛使用的标准工具,用于通过网络[在服务器和客户端之间传输文件][1],特别是在不需要身份验证的情况下(允许匿名用户连接到服务器)。我们必须明白,默认情况下 FTP 是不安全的,因为它不会对传输的用户凭据和数据进行加密。
|
||||
|
||||
在本指南中,我们将介绍在 CentOS/RHEL 7 和 Fedora 发行版中安装、配置和保护 FTP 服务器(VSFTPD,代表 “Very Secure FTP Daemon”)的步骤。
|
||||
|
||||
请注意,本指南中的所有命令将以 root 身份运行,以防你不使用 root 帐户操作服务器,请使用 [sudo命令][2] 获取 root 权限。
|
||||
请注意,本指南中的所有命令将以 root 身份运行,如果你不使用 root 帐户操作服务器,请使用 [sudo命令][2] 获取 root 权限。
|
||||
|
||||
### 步骤 1:安装 FTP 服务器
|
||||
|
||||
1. 安装 vsftpd 服务器很直接,只要在终端运行下面的命令。
|
||||
1、 安装 vsftpd 服务器很直接,只要在终端运行下面的命令。
|
||||
|
||||
```
|
||||
# yum install vsftpd
|
||||
```
|
||||
|
||||
2. 安装完成后,服务会先被禁用,因此我们需要手动启动,并设置在下次启动时自动启用:
|
||||
2、 安装完成后,服务先是被禁用的,因此我们需要手动启动,并设置在下次启动时自动启用:
|
||||
|
||||
```
|
||||
# systemctl start vsftpd
|
||||
# systemctl enable vsftpd
|
||||
```
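
作为补充,可以用下面的命令确认服务确实已经处于运行状态(`systemctl status` 是 systemd 的标准查询命令):

```
# systemctl status vsftpd
```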
|
||||
|
||||
3. 接下来,为了允许从外部系统访问 FTP 服务,我们需要打开 FTP 守护进程监听 21 端口:
|
||||
3、 接下来,为了允许从外部系统访问 FTP 服务,我们需要打开 FTP 守护进程监听的 21 端口:
|
||||
|
||||
```
|
||||
# firewall-cmd --zone=public --permanent --add-port=21/tcp
|
||||
@ -32,7 +32,7 @@ FTP(文件传输协议)是一种用于通过网络[在服务器和客户端
|
||||
|
||||
### 步骤 2: 配置 FTP 服务器
|
||||
|
||||
4. 现在,我们会进行一些配置来设置并加密我们的 FTP 服务器,让我们先备份一下原始配置文件 /etc/vsftpd/vsftpd.conf:
|
||||
4、 现在,我们会进行一些配置来设置并加密我们的 FTP 服务器,让我们先备份一下原始配置文件 `/etc/vsftpd/vsftpd.conf`:
|
||||
|
||||
```
|
||||
# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig
|
||||
@ -41,30 +41,30 @@ FTP(文件传输协议)是一种用于通过网络[在服务器和客户端
|
||||
接下来,打开上面的文件,并将下面的选项设置相关的值:
|
||||
|
||||
```
|
||||
anonymous_enable=NO # disable anonymous login
|
||||
local_enable=YES # permit local logins
|
||||
write_enable=YES # enable FTP commands which change the filesystem
|
||||
local_umask=022 # value of umask for file creation for local users
|
||||
dirmessage_enable=YES # enable showing of messages when users first enter a new directory
|
||||
xferlog_enable=YES # a log file will be maintained detailing uploads and downloads
|
||||
connect_from_port_20=YES # use port 20 (ftp-data) on the server machine for PORT style connections
|
||||
xferlog_std_format=YES # keep standard log file format
|
||||
listen=NO # prevent vsftpd from running in standalone mode
|
||||
listen_ipv6=YES # vsftpd will listen on an IPv6 socket instead of an IPv4 one
|
||||
pam_service_name=vsftpd # name of the PAM service vsftpd will use
|
||||
userlist_enable=YES # enable vsftpd to load a list of usernames
|
||||
tcp_wrappers=YES # turn on tcp wrappers
|
||||
anonymous_enable=NO ### 禁用匿名登录
|
||||
local_enable=YES ### 允许本地用户登录
|
||||
write_enable=YES ### 允许对文件系统做改动的 FTP 命令
|
||||
local_umask=022 ### 本地用户创建文件所用的 umask 值
|
||||
dirmessage_enable=YES ### 当用户首次进入一个新目录时显示一个消息
|
||||
xferlog_enable=YES ### 用于记录上传、下载细节的日志文件
|
||||
connect_from_port_20=YES ### 使用端口 20 (ftp-data)用于 PORT 风格的连接
|
||||
xferlog_std_format=YES ### 使用标准的日志格式
|
||||
listen=NO ### 不要让 vsftpd 运行在独立模式
|
||||
listen_ipv6=YES ### vsftpd 将监听 IPv6 而不是 IPv4
|
||||
pam_service_name=vsftpd ### vsftpd 使用的 PAM 服务名
|
||||
userlist_enable=YES ### vsftpd 支持载入用户列表
|
||||
tcp_wrappers=YES ### 使用 tcp wrappers
|
||||
```
|
||||
|
||||
5. 现在基于用户列表文件 `/etc/vsftpd.userlist` 来配置 FTP 允许/拒绝用户访问。
|
||||
5、 现在基于用户列表文件 `/etc/vsftpd.userlist` 来配置 FTP 来允许/拒绝用户的访问。
|
||||
|
||||
默认情况下,如果设置了 userlist_enable=YES,当 userlist_deny 选项设置为 YES 的时候,`userlist_file=/etc/vsftpd.userlist` 中的用户列表被拒绝登录。
|
||||
默认情况下,如果设置了 `userlist_enable=YES`,当 `userlist_deny` 选项设置为 `YES` 的时候,`userlist_file=/etc/vsftpd.userlist` 中列出的用户被拒绝登录。
|
||||
|
||||
然而, userlist_deny=NO 更改了设置,意味着只有在 userlist_file=/etc/vsftpd.userlist 显式指定的用户才允许登录。
|
||||
然而, 更改配置为 `userlist_deny=NO`,意味着只有在 `userlist_file=/etc/vsftpd.userlist` 显式指定的用户才允许登录。
|
||||
|
||||
```
|
||||
userlist_enable=YES # vsftpd will load a list of usernames, from the filename given by userlist_file
|
||||
userlist_file=/etc/vsftpd.userlist # stores usernames.
|
||||
userlist_enable=YES ### vsftpd 将从 userlist_file 给出的文件中载入用户名列表
|
||||
userlist_file=/etc/vsftpd.userlist ### 存储用户名的文件
|
||||
userlist_deny=NO
|
||||
```
|
||||
|
||||
@ -72,30 +72,30 @@ userlist_deny=NO
|
||||
|
||||
接下来,我们将介绍如何将 FTP 用户 chroot 到 FTP 用户的家目录(本地 root)中的两种可能情况,如下所述。
|
||||
|
||||
6. 接下来添加下面的选项来限制 FTP 用户到它们自己的家目录。
|
||||
6、 接下来添加下面的选项来限制 FTP 用户到它们自己的家目录。
|
||||
|
||||
```
|
||||
chroot_local_user=YES
|
||||
allow_writeable_chroot=YES
|
||||
```
|
||||
|
||||
chroot_local_user=YES 意味着用户可以设置 chroot jail,默认是登录后的家目录。
|
||||
`chroot_local_user=YES` 意味着用户可以设置 chroot jail,默认是登录后的家目录。
|
||||
|
||||
同样默认的是,出于安全原因,vsftpd 不会允许 chroot jail 目录可写,然而,我们可以添加 allow_writeable_chroot=YES 来覆盖这个设置。
|
||||
同样默认的是,出于安全原因,vsftpd 不会允许 chroot jail 目录可写,然而,我们可以添加 `allow_writeable_chroot=YES` 来覆盖这个设置。
|
||||
|
||||
保存并关闭文件。
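
保存之后,别忘了重启服务让新配置生效(与后文步骤中使用的命令相同):

```
# systemctl restart vsftpd
```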
|
||||
|
||||
### 用 SELinux 加密 FTP 服务器
|
||||
### 步骤 3: 用 SELinux 加密 FTP 服务器
|
||||
|
||||
7. 现在,让我们设置下面的 SELinux 布尔值来允许 FTP 能读取用户家目录下的文件。请注意,这最初是使用以下命令完成的:
|
||||
7、现在,让我们设置下面的 SELinux 布尔值来允许 FTP 能读取用户家目录下的文件。请注意,这原本是使用以下命令完成的:
|
||||
|
||||
```
|
||||
# setsebool -P ftp_home_dir on
|
||||
```
|
||||
|
||||
然而,`ftp_home_dir` 指令由于这个 bug 报告:[https://bugzilla.redhat.com/show_bug.cgi?id=1097775][3] 默认是禁用的。
|
||||
然而,由于这个 bug 报告:[https://bugzilla.redhat.com/show_bug.cgi?id=1097775][3],`ftp_home_dir` 指令默认是禁用的。
|
||||
|
||||
现在,我们会使用 semanage 命令来设置 SELinux 规则来允许 FTP 读取/写入用户的家目录。
|
||||
现在,我们会使用 `semanage` 命令来设置 SELinux 规则来允许 FTP 读取/写入用户的家目录。
|
||||
|
||||
```
|
||||
# semanage boolean -m ftpd_full_access --on
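# semanage boolean -l | grep ftpd_full_access    ### (补充的验证步骤)确认该布尔值已经打开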
|
||||
@ -109,21 +109,21 @@ chroot_local_user=YES 意味着用户可以设置 chroot jail,默认是登录
|
||||
|
||||
### 步骤 4: 测试 FTP 服务器
|
||||
|
||||
8. 现在我们会用[ useradd 命令][4]创建一个 FTP 用户来测试 FTP 服务器。
|
||||
8、 现在我们会用 [useradd 命令][4]创建一个 FTP 用户来测试 FTP 服务器。
|
||||
|
||||
```
|
||||
# useradd -m -c "Ravi Saive, CEO" -s /bin/bash ravi
|
||||
# passwd ravi
|
||||
```
|
||||
|
||||
之后,我们如下使用[ echo 命令][5]添加用户 ravi 到文件 /etc/vsftpd.userlist 中:
|
||||
之后,我们如下使用 [echo 命令][5]添加用户 ravi 到文件 `/etc/vsftpd.userlist` 中:
|
||||
|
||||
```
|
||||
# echo "ravi" | tee -a /etc/vsftpd.userlist
|
||||
# cat /etc/vsftpd.userlist
|
||||
```
|
||||
|
||||
9. 现在是时候测试我们上面的设置是否可以工作了。让我们使用匿名登录测试,我们可以从下面的截图看到匿名登录不被允许。
|
||||
9、 现在是时候测试我们上面的设置是否可以工作了。让我们使用匿名登录测试,我们可以从下面的截图看到匿名登录没有被允许。
|
||||
|
||||
```
|
||||
# ftp 192.168.56.10
|
||||
@ -134,13 +134,14 @@ Name (192.168.56.10:root) : anonymous
|
||||
Login failed.
|
||||
ftp>
|
||||
```
|
||||
|
||||
[
|
||||
![Test Anonymous FTP Login](http://www.tecmint.com/wp-content/uploads/2017/02/Test-Anonymous-FTP-Login.png)
|
||||
][6]
|
||||
|
||||
测试 FTP 匿名登录
|
||||
*测试 FTP 匿名登录*
|
||||
|
||||
10. 让我们也测试一下没有列在 /etc/vsftpd.userlist 中的用户是否有权限登录,这不是下面截图中的例子:
|
||||
10、 让我们也测试一下没有列在 `/etc/vsftpd.userlist` 中的用户是否有权限登录,下面截图是没有列入的情况:
|
||||
|
||||
```
|
||||
# ftp 192.168.56.10
|
||||
@ -155,9 +156,9 @@ ftp>
|
||||
![FTP User Login Failed](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png)
|
||||
][7]
|
||||
|
||||
FTP 用户登录失败
|
||||
*FTP 用户登录失败*
|
||||
|
||||
11. 现在最后测试一下列在 /etc/vsftpd.userlis 中的用户是否在登录后真的进入了他/她的家目录:
|
||||
11、 现在最后测试一下列在 `/etc/vsftpd.userlist` 中的用户是否在登录后真的进入了他/她的家目录:
|
||||
|
||||
```
|
||||
# ftp 192.168.56.10
|
||||
@ -171,21 +172,22 @@ Remote system type is UNIX.
|
||||
Using binary mode to transfer files.
|
||||
ftp> ls
|
||||
```
|
||||
|
||||
[
|
||||
![FTP User Login Successful[](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png)
|
||||
![FTP User Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png)
|
||||
][8]
|
||||
|
||||
用户成功登录
|
||||
*用户成功登录*
|
||||
|
||||
警告:使用 `allow_writeable_chroot=YES' 有一定的安全隐患,特别是用户具有上传权限或 shell 访问权限时。
|
||||
警告:使用 `allow_writeable_chroot=YES` 有一定的安全隐患,特别是用户具有上传权限或 shell 访问权限时。
|
||||
|
||||
只有当你完全知道你正做什么时才激活此选项。重要的是要注意,这些安全性影响并不是 vsftpd 特定的,它们适用于所有 FTP 守护进程,它们也提供将本地用户置于 chroot jail中。
|
||||
只有当你完全知道你正做什么时才激活此选项。重要的是要注意,这些安全性影响并不是 vsftpd 特定的,它们适用于所有提供了将本地用户置于 chroot jail 中的 FTP 守护进程。
|
||||
|
||||
因此,我们将在下一节中看到一种更安全的方法来设置不同的不可写本地根目录。
|
||||
|
||||
### 步骤 5: 配置不同的 FTP 家目录
|
||||
|
||||
12. 再次打开 vsftpd 配置文件,并将下面不安全的选项注释掉:
|
||||
12、 再次打开 vsftpd 配置文件,并将下面不安全的选项注释掉:
|
||||
|
||||
```
|
||||
#allow_writeable_chroot=YES
|
||||
@ -199,7 +201,7 @@ ftp> ls
|
||||
# chmod a-w /home/ravi/ftp
|
||||
```
|
||||
|
||||
13. 接下来,在用户存储他/她的文件的本地根目录下创建一个文件夹:
|
||||
13、 接下来,在用户存储他/她的文件的本地根目录下创建一个文件夹:
|
||||
|
||||
```
|
||||
# mkdir /home/ravi/ftp/files
|
||||
@ -207,11 +209,11 @@ ftp> ls
|
||||
# chmod 0700 /home/ravi/ftp/files/
|
||||
```
|
||||
|
||||
、接着在 vsftpd 配置文件中添加/修改这些选项:
|
||||
接着在 vsftpd 配置文件中添加/修改这些选项:
|
||||
|
||||
```
|
||||
user_sub_token=$USER # 在本地根目录下插入用户名
|
||||
local_root=/home/$USER/ftp # 定义任何用户的本地根目录
|
||||
user_sub_token=$USER ### 在本地根目录下插入用户名
|
||||
local_root=/home/$USER/ftp ### 定义任何用户的本地根目录
|
||||
```
|
||||
|
||||
保存并关闭文件。再说一次,有新的设置后,让我们重启服务:
|
||||
@ -220,7 +222,7 @@ local_root=/home/$USER/ftp # 定义任何用户的本地根目录
|
||||
# systemctl restart vsftpd
|
||||
```
|
||||
|
||||
14. 现在最后在测试一次查看用户本地根目录就是我们在他的家目录创建的 FTP 目录。
|
||||
14、 现在,最后再测试一次,查看用户登录后的本地根目录是否就是我们在其家目录中创建的 FTP 目录。
|
||||
|
||||
```
|
||||
# ftp 192.168.56.10
|
||||
@ -238,7 +240,7 @@ ftp> ls
|
||||
![FTP User Home Directory Login Successful](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png)
|
||||
][9]
|
||||
|
||||
FTP 用户家目录登录成功
|
||||
*FTP 用户家目录登录成功*
|
||||
|
||||
就是这样了!在本文中,我们介绍了如何在 CentOS 7 中安装、配置以及加固 FTP 服务器,请使用下面的评论栏给我们回复,或者分享关于这个主题的任何有用信息。
|
||||
|
||||
@ -258,7 +260,7 @@ via: http://www.tecmint.com/install-ftp-server-in-centos-7/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -273,5 +275,5 @@ via: http://www.tecmint.com/install-ftp-server-in-centos-7/
|
||||
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login-Failed.png
|
||||
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Login.png
|
||||
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login-Successful.png
|
||||
[10]:http://www.tecmint.com/install-proftpd-in-centos-7/
|
||||
[10]:https://linux.cn/article-8504-1.html
|
||||
[11]:http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
|
@ -1,33 +1,33 @@
|
||||
如何在 CentOS 7 中使用 SSL/TLS 加固 FTP 服务器进行安全文件传输
|
||||
============================================================
|
||||
|
||||
在一开始的设计中,FTP(文件传输协议)是不安全的,意味着它不会加密两台机器之间传输的数据以及用户的凭据。这使得数据和服务器安全面临很大威胁。
|
||||
在一开始的设计中,FTP(文件传输协议)就是不安全的,意味着它不会加密两台机器之间传输的数据以及用户的凭据。这使得数据和服务器安全面临很大威胁。
|
||||
|
||||
在这篇文章中,我们会介绍在 CentOS/RHEL 7 以及 Fedora 中如何在 FTP 服务器中手动启用数据加密服务;我们会介绍使用 SSL/TLS 证书保护 VSFTPD(Very Secure FTP Daemon)服务的各个步骤。
|
||||
|
||||
#### 前提条件:
|
||||
|
||||
1. 你必须已经[在 CentOS 7 中安装和配置 FTP 服务][1]
|
||||
- 你必须已经[在 CentOS 7 中安装和配置 FTP 服务][1] 。
|
||||
|
||||
在我们开始之前,要注意本文中所有命令都以 root 用户运行,否则,如果现在你不是使用 root 用户控制服务器,你可以使用 [sudo 命令][2] 去获取 root 权限。
|
||||
|
||||
### 第一步:生成 SSL/TLS 证书和密钥
|
||||
|
||||
1. 我们首先要在 `/etc/ssl` 目录下创建用于保存 SSL/TLS 证书和密钥文件的子目录:
|
||||
1、 我们首先要在 `/etc/ssl` 目录下创建用于保存 SSL/TLS 证书和密钥文件的子目录:
|
||||
|
||||
```
|
||||
# mkdir /etc/ssl/private
|
||||
```
|
||||
|
||||
2. 然后运行下面的命令为 vsftpd 创建证书和密钥并保存到一个文件中,下面会解析使用的每个标签。
|
||||
2、 然后运行下面的命令为 vsftpd 创建证书和密钥并保存到一个文件中,下面会解析使用的每个选项。
|
||||
|
||||
1. req - 是 X.509 Certificate Signing Request (CSR,证书签名请求)管理的一个命令。
|
||||
2. x509 - X.509 证书数据管理。
|
||||
3. days - 定义证书的有效日期。
|
||||
4. newkey - 指定证书密钥处理器。
|
||||
5. rsa:2048 - RSA 密钥处理器,会生成一个 2048 位的密钥。
|
||||
6. keyout - 设置密钥存储文件。
|
||||
7. out - 设置证书存储文件,注意证书和密钥都保存在一个相同的文件:/etc/ssl/private/vsftpd.pem。
|
||||
1. `req` - 是 X.509 Certificate Signing Request (CSR,证书签名请求)管理的一个命令。
|
||||
2. `x509` - X.509 证书数据管理。
|
||||
3. `days` - 定义证书的有效天数。
|
||||
4. `newkey` - 指定证书密钥处理器。
|
||||
5. `rsa:2048` - RSA 密钥处理器,会生成一个 2048 位的密钥。
|
||||
6. `keyout` - 设置密钥存储文件。
|
||||
7. `out` - 设置证书存储文件,注意证书和密钥都保存在一个相同的文件:/etc/ssl/private/vsftpd.pem。
|
||||
|
||||
```
|
||||
# openssl req -x509 -nodes -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -days 365 -newkey rsa:2048
|
||||
@ -47,7 +47,7 @@ Email Address []:admin@tecmint.com
|
||||
|
||||
### 第二步:配置 VSFTPD 使用 SSL/TLS
|
||||
|
||||
3. 在我们进行任何 VSFTPD 配置之前,首先开放 990 和 40000-50000 端口,以便在 VSFTPD 配置文件中分别定义 TLS 连接的端口和被动端口的端口范围:
|
||||
3、 在我们进行任何 VSFTPD 配置之前,首先开放 990 和 40000-50000 端口,以便在 VSFTPD 配置文件中分别定义 TLS 连接的端口和被动端口的端口范围:
|
||||
|
||||
```
|
||||
# firewall-cmd --zone=public --permanent --add-port=990/tcp
|
||||
@ -55,7 +55,7 @@ Email Address []:admin@tecmint.com
|
||||
# firewall-cmd --reload
|
||||
```
|
||||
|
||||
4. 现在,打开 VSFTPD 配置文件并在文件中指定 SSL 的详细信息:
|
||||
4、 现在,打开 VSFTPD 配置文件并在文件中指定 SSL 的详细信息:
|
||||
|
||||
```
|
||||
# vi /etc/vsftpd/vsftpd.conf
|
||||
@ -70,14 +70,14 @@ ssl_sslv2=NO
|
||||
ssl_sslv3=NO
|
||||
```
|
||||
|
||||
5. 然后,添加下面的行定义 SSL 证书和密钥文件的位置:
|
||||
5、 然后,添加下面的行来定义 SSL 证书和密钥文件的位置:
|
||||
|
||||
```
|
||||
rsa_cert_file=/etc/ssl/private/vsftpd.pem
|
||||
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
|
||||
```
|
||||
|
||||
6. 下面,我们要阻止匿名用户使用 SSL,然后强制所有非匿名用户登录使用安全的 SSL 连接进行数据传输和登录过程中的密码发送:
|
||||
6、 下面,我们要阻止匿名用户使用 SSL,然后强制所有非匿名用户登录使用安全的 SSL 连接进行数据传输和登录过程中的密码发送:
|
||||
|
||||
```
|
||||
allow_anon_ssl=NO
|
||||
@ -85,7 +85,7 @@ force_local_data_ssl=YES
|
||||
force_local_logins_ssl=YES
|
||||
```
|
||||
|
||||
7. 另外,我们还可以添加下面的选项增强 FTP 服务器的安全性。当选项 `require_ssl_reuse` 被设置为 `YES` 时,要求所有 SSL 数据连接都显示 SSL 会话重用;表明它们知道与控制频道相同的主机密码。
|
||||
7、 另外,我们还可以添加下面的选项增强 FTP 服务器的安全性。当选项 `require_ssl_reuse` 被设置为 `YES` 时,会要求所有 SSL 数据连接都重用 SSL 会话,以证明它们知道与控制通道相同的主密钥(master secret)。
|
||||
|
||||
因此,我们需要把它关闭。
|
||||
|
||||
@ -93,19 +93,19 @@ force_local_logins_ssl=YES
|
||||
require_ssl_reuse=NO
|
||||
```
|
||||
|
||||
另外,我们还要用 `ssl_ciphers` 选项选择 VSFTPD 允许用于加密 SSL 连接的 SSL 密码。这可以大大限制尝试使用在漏洞中发现的特定密码的攻击者:
|
||||
另外,我们还要用 `ssl_ciphers` 选项选择 VSFTPD 允许用于加密 SSL 连接的 SSL 算法。这可以极大地限制那些尝试发现使用存在缺陷的特定算法的攻击者:
|
||||
|
||||
```
|
||||
ssl_ciphers=HIGH
|
||||
```
|
||||
|
||||
8. 现在,设置被动端口的端口范围(最小和最大端口)。
|
||||
8、 现在,设置被动端口的端口范围(最小和最大端口)。
|
||||
```
|
||||
pasv_min_port=40000
|
||||
pasv_max_port=50000
|
||||
```
|
||||
|
||||
9. 选择性启用 `debug_ssl` 选项以允许 SSL 调试,意味着 OpenSSL 连接诊断会被记录到 VSFTPD 日志文件:
|
||||
9、 选择性启用 `debug_ssl` 选项以允许 SSL 调试,这意味着 OpenSSL 连接诊断会被记录到 VSFTPD 日志文件:
|
||||
|
||||
```
|
||||
debug_ssl=YES
|
||||
@ -119,7 +119,7 @@ debug_ssl=YES
|
||||
|
||||
### 第三步:用 SSL/TLS 连接测试 FTP 服务器
|
||||
|
||||
10. 完成上面的所有配置之后,像下面这样通过在命令行中尝试使用 FTP 测试 VSFTPD 是否使用 SSL/TLS 连接:
|
||||
10、 完成上面的所有配置之后,像下面这样通过在命令行中尝试使用 FTP 测试 VSFTPD 是否使用 SSL/TLS 连接:
|
||||
|
||||
```
|
||||
# ftp 192.168.56.10
|
||||
@ -131,11 +131,12 @@ Login failed.
|
||||
421 Service not available, remote server has closed connection
|
||||
ftp>
|
||||
```
|
||||
|
||||
[
|
||||
![验证 FTP SSL 安全连接](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Secure-Connection.png)
|
||||
][3]
|
||||
|
||||
验证 FTP SSL 安全连接
|
||||
*验证 FTP SSL 安全连接*
|
||||
|
||||
从上面的截图中可以看到,错误提示表明 VSFTPD 只允许用户从支持加密服务的客户端登录。
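
在安装图形化客户端之前,也可以先用 openssl 自带的客户端从命令行验证 TLS 握手是否正常,这是一个补充的排查手段(IP 请换成你自己的服务器地址):

```
# openssl s_client -connect 192.168.56.10:21 -starttls ftp
```

如果配置正确,输出中会显示服务器证书的详细信息。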
|
||||
|
||||
@ -143,7 +144,7 @@ ftp>
|
||||
|
||||
### 第四步:安装 FileZilla 以便安全地连接到 FTP 服务器
|
||||
|
||||
11. FileZilla 是一个时尚、流行且重要的交叉平台 FTP 客户端,它默认支持 SSL/TLS 连接。
|
||||
11、 FileZilla 是一个现代化、流行且重要的跨平台的 FTP 客户端,它默认支持 SSL/TLS 连接。
|
||||
|
||||
要在 Linux 上安装 FileZilla,可以运行下面的命令:
|
||||
|
||||
@ -154,7 +155,7 @@ ftp>
|
||||
$ sudo apt-get install filezilla
|
||||
```
|
||||
|
||||
12. 当安装完成后(或者你已经安装了该软件),打开它,选择 File=>Sites Manager 或者按 `Ctrl + S` 打开 Site Manager 界面。
|
||||
12、 当安装完成后(或者你已经安装了该软件),打开它,选择 File => Sites Manager 或者按 `Ctrl + S` 打开 Site Manager 界面。
|
||||
|
||||
点击 New Site 按钮添加一个新的站点/主机连接详细信息。
|
||||
|
||||
@ -162,7 +163,7 @@ $ sudo apt-get install filezilla
|
||||
![在 FileZilla 中添加新 FTP 站点](http://www.tecmint.com/wp-content/uploads/2017/02/Add-New-FTP-Site-in-Filezilla.png)
|
||||
][4]
|
||||
|
||||
在 FileZilla 中添加新 FTP 站点
|
||||
*在 FileZilla 中添加新 FTP 站点*
|
||||
|
||||
13. 下一步,像下面这样设置主机/站点名称、添加 IP 地址、定义使用的协议、加密和登录类型(使用你自己情况的值):
|
||||
|
||||
@ -177,15 +178,15 @@ User: username
|
||||
![在 Filezilla 中添加 FTP 服务器详细信息](http://www.tecmint.com/wp-content/uploads/2017/02/Add-FTP-Server-Details-in-Filezilla.png)
|
||||
][5]
|
||||
|
||||
在 Filezilla 中添加 FTP 服务器详细信息
|
||||
*在 Filezilla 中添加 FTP 服务器详细信息*
|
||||
|
||||
14. 然后点击 Connect,再次输入密码,然后验证用于 SSL/TLS 连接的证书,再一次点击 `OK` 连接到 FTP 服务器:
|
||||
14、 然后点击 Connect,再次输入密码,然后验证用于 SSL/TLS 连接的证书,再一次点击 `OK` 连接到 FTP 服务器:
|
||||
|
||||
[
|
||||
![验证 FTP SSL 证书](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate.png)
|
||||
][6]
|
||||
|
||||
验证 FTP SSL 证书
|
||||
*验证 FTP SSL 证书*
|
||||
|
||||
到了这里,我们应该已经使用 TLS 连接成功地登录到了 FTP 服务器,可以在下面界面的连接状态部分查看更多信息。
|
||||
|
||||
@ -193,15 +194,15 @@ User: username
|
||||
![通过 TLS/SSL 连接到 FTP 服务器](http://www.tecmint.com/wp-content/uploads/2017/02/connected-to-ftp-server-with-tls.png)
|
||||
][7]
|
||||
|
||||
通过 TLS/SSL 连接到 FTP 服务器
|
||||
*通过 TLS/SSL 连接到 FTP 服务器*
|
||||
|
||||
15. 最后,在文件目录尝试 [从本地传输文件到 FTP 服务器][8],看 FileZilla 界面后面的部分查看文件传输相关的报告。
|
||||
15、 最后,尝试 [从本地传输文件到 FTP 服务器][8],并在 FileZilla 界面底部查看文件传输的相关报告。
|
||||
|
||||
[
|
||||
![使用 FTP 安全地传输文件](http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-Using-FTP.png)
|
||||
][9]
|
||||
|
||||
使用 FTP 安全地传输文件
|
||||
*使用 FTP 安全地传输文件*
|
||||
|
||||
就是这些。记住 FTP 默认是不安全的,除非我们像上面介绍的那样配置它使用 SSL/TLS 连接。在下面的评论框中和我们分享你关于这篇文章/主题的想法吧。
|
||||
|
||||
@ -217,7 +218,7 @@ via: http://www.tecmint.com/secure-vsftpd-using-ssl-tls-on-centos/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,484 @@
|
||||
如何安装 pandom : 一个针对 Linux 的真随机数生成器
|
||||
============================================================
|
||||
|
||||
本教程只针对 amd64/x86_64 架构 Linux 内核版本大于等于 2.6.9 的系统。本文将解释如何安装 [pandom][46],这是一个由 ncomputers.org 维护的定时抖动真随机数生成器。
|
||||
|
||||
### 简介
|
||||
|
||||
在如今常见的计算机环境中,比如配置了固态硬盘(SSD)的个人电脑和虚拟专用服务器(VPS)上,Linux 内核内置的真随机数发生器所能提供的吞吐量很低。
|
||||
|
||||
而出于各种不同的加密目的,对真随机数的需求持续增长,这使得低吞吐量的问题在 Linux 实现中变得越来越严重。
|
||||
|
||||
在与上述相同的物理或者虚拟环境下,并假设没有其它进程以 root 身份向 `/dev/random` 进行写操作的话,64 [ubits][47]/64 bits 的 pandom 可以以 8 KiB/s 的速率生成随机数。
|
||||
|
||||
### 1 pandom 的安装
|
||||
|
||||
#### 1.1 获得 root 权限
|
||||
|
||||
Pandom 必须以 root 身份来安装,所以在必要的时候请运行如下命令:
|
||||
|
||||
```
|
||||
su -
|
||||
```
|
||||
|
||||
#### 1.2 安装编译所需的依赖
|
||||
|
||||
为了下载并安装 pandom,你需要 GNU `as` 汇编器、GNU `make`、GNU `tar` 和 GNU `wget` (最后两个工具通常已被安装)。随后你可以按照你的意愿卸载它们。
|
||||
|
||||
**基于 Arch 的系统:**
|
||||
|
||||
```
|
||||
pacman -S binutils make
|
||||
```
|
||||
|
||||
**基于 Debian 的系统:**
|
||||
|
||||
```
|
||||
apt-get install binutils make
|
||||
```
|
||||
|
||||
**基于 Red Hat 的系统:**
|
||||
|
||||
```
|
||||
dnf install binutils make
|
||||
yum install binutils make
|
||||
```
|
||||
|
||||
**基于 SUSE 的系统:**
|
||||
|
||||
```
|
||||
zypper install binutils make
|
||||
```
|
||||
|
||||
#### 1.3 下载并析出源码
|
||||
|
||||
下面的命令将使用 `wget` 和 `tar` 从 ncomputers.org 下载 pandom 的源代码并将它们解压出来:
|
||||
|
||||
```
|
||||
wget http://ncomputers.org/pandom.tar.gz
|
||||
tar xf pandom.tar.gz
|
||||
cd pandom/amd64-linux
|
||||
```
|
||||
|
||||
#### 1.4 在安装前进行测试 (推荐)
|
||||
|
||||
这个被推荐的测试将花费大约 8 分钟的时间,它将检查内核支持情况并生成一个名为 `checkme` 的文件(在下一节中将被分析)。
|
||||
|
||||
```
|
||||
make check
|
||||
```
|
||||
#### 1.5 确定系统的初始化程序
|
||||
|
||||
在安装 pandom 之前,你需要知道你的系统使用的是哪个初始化程序。假如下面命令的输出中包含 `running`,则意味着你的系统使用了 `systemd`,否则你的系统则可能使用了一个 `init.d` 的实现(例如 upstart、sysvinit)。
|
||||
|
||||
```
|
||||
systemctl is-system-running
|
||||
running
|
||||
```
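
假如系统中没有 `systemctl` 命令,也可以直接查看 1 号进程的名字来判断初始化程序,这是一个补充的小技巧:

```
ps -p 1 -o comm=
```

输出为 `systemd` 则说明使用的是 systemd,输出为 `init` 则多半是某种 init.d 实现。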
|
||||
|
||||
#### 1.6 安装 pandom
|
||||
|
||||
一旦你知道了你的系统使用何种 Linux 实现,那么你就可以相应地安装 pandom 了。
|
||||
|
||||
**使用基于 init.d 作为初始化程序(如: upstart、sysvinit)的系统:**
|
||||
|
||||
假如你的系统使用了一个 init.d 的实现(如: upstart、sysvinit),请运行下面的命令来安装 pandom:
|
||||
|
||||
```
|
||||
make install-init.d
|
||||
```
|
||||
|
||||
**以 systemd 作为初始化程序的系统:**
|
||||
|
||||
假如你的系统使用 `systemd`,则请运行以下命令来安装 pandom:
|
||||
|
||||
```
|
||||
make install-systemd
|
||||
```
|
||||
|
||||
### 2 checkme 文件的分析
|
||||
|
||||
在使用 pandom 进行加密之前,强烈建议分析一下先前在安装过程中生成的 `checkme` 文件。通过分析我们便可以知道用 pandom 生成的数是否真的随机。本节将解释如何使用 ncomputers.org 的 shell 脚本 `entropyarray` 来测试由 pandom 产生的输出的熵及序列相关性。
|
||||
|
||||
**注**:整个分析过程也可以在另一台电脑上完成,例如在一个笔记本电脑或台式机上。举个例子:假如你正在一个资源受到限制的 VPS 上安装 pandom 程序,或许你更倾向于将 `checkme` 复制到自己的个人电脑中,然后再进行分析。
|
||||
|
||||
#### 2.1 获取 root 权限
|
||||
|
||||
`entropyarray` 程序也必须以 root 身份来安装,所以在必要时请运行如下命令:
|
||||
|
||||
```
|
||||
su -
|
||||
```
|
||||
|
||||
#### 2.2 安装编译所需的依赖
|
||||
|
||||
为了下载并安装 `entropyarray`, 你需要 GNU g++ 编译器、GNU `make`、GNU `tar` 和 GNU `wget`。在随后你可以任意卸载这些依赖。
|
||||
|
||||
**基于 Arch 的系统:**
|
||||
|
||||
```
|
||||
pacman -S gcc make
|
||||
```
|
||||
|
||||
**基于 Debian 的系统:**
|
||||
|
||||
```
|
||||
apt-get install g++ make
|
||||
```
|
||||
|
||||
**基于 Red Hat 的系统:**
|
||||
|
||||
```
|
||||
dnf install gcc-c++ make
|
||||
yum install gcc-c++ make
|
||||
```
|
||||
|
||||
**基于 SUSE 的系统:**
|
||||
|
||||
```
|
||||
zypper install gcc-c++ make
|
||||
```
|
||||
|
||||
#### 2.3 下载并析出源码
|
||||
|
||||
以下命令将使用 `wget` 和 `tar` 从 ncomputers.org 下载到 entropyarray 的源码并进行解压:
|
||||
|
||||
```
|
||||
wget http://ncomputers.org/rearray.tar.gz
|
||||
wget http://ncomputers.org/entropy.tar.gz
|
||||
wget http://ncomputers.org/entropyarray.tar.gz
|
||||
|
||||
tar xf entropy.tar.gz
|
||||
tar xf rearray.tar.gz
|
||||
tar xf entropyarray.tar.gz
|
||||
```
|
||||
|
||||
#### 2.4 安装 entropyarray
|
||||
|
||||
**注**:如果在编译过程中报有关 `-std=c++11` 的错误,则说明当前系统安装的 GNU g++ 版本不支持 ISO C++ 2011 标准,那么你可能需要在另一个支持该标准的系统中编译 ncomputers.org/entropy 和 ncomputers.org/rearray (例如在一个你喜爱的较新的 Linux 发行版本中来编译)。接着使用 `make install` 来安装编译好的二进制文件,再接着你可能想继续运行 `entropyarray` 程序,或者跳过运行该程序这一步骤,然而我还是建议在使用 pandom 来达到加密目地之前先分析一下 `checkme` 文件。
|
||||
|
||||
```
|
||||
cd rearray; make install; cd ..
|
||||
cd entropy; make install; cd ..
|
||||
cd entropyarray; make install; cd ..
|
||||
```
|
||||
|
||||
#### 2.5 分析 checkme 文件
|
||||
|
||||
**注**:64 [ubits][53] / 64 bits 的 pandom 实现所生成的结果中熵应该高于 `15.977` 且 `max` 字段低于 `70`。假如你的结果与之相差巨大,或许你应该按照下面第 5 节介绍的那样增加你的 pandom 实现的不可预测性。假如你跳过了生成 `checkme` 文件的那一步,你也可以使用其他的工具来进行测试,例如 [伪随机数序列测试][54]。
|
||||
|
||||
```
|
||||
entropyarray checkme
|
||||
|
||||
entropyarray in /tmp/tmp.mbCopmzqsg
|
||||
15.977339
|
||||
min:12
|
||||
med:32
|
||||
max:56
|
||||
15.977368
|
||||
min:11
|
||||
med:32
|
||||
max:58
|
||||
15.977489
|
||||
min:11
|
||||
med:32
|
||||
max:59
|
||||
15.977077
|
||||
min:12
|
||||
med:32
|
||||
max:60
|
||||
15.977439
|
||||
min:8
|
||||
med:32
|
||||
max:59
|
||||
15.977374
|
||||
min:13
|
||||
med:32
|
||||
max:60
|
||||
15.977312
|
||||
min:12
|
||||
med:32
|
||||
max:67
|
||||
```
|
||||
|
||||
#### 2.6 卸载 entropyarray (可选)
|
||||
|
||||
假如你打算不再使用 `entropyarray`,那么你可以按照你自己的需求卸载它:
|
||||
|
||||
|
||||
```
|
||||
cd entropyarray; make uninstall; cd ..
|
||||
cd entropy; make uninstall; cd ..
|
||||
cd rearray; make uninstall; cd ..
|
||||
```
|
||||
|
||||
### 3 使用 debian 的软件仓库来进行安装
|
||||
|
||||
假如你想在你基于 debian 的系统中让 pandom 保持更新,则你可以使用 ncomputers.org 的 debian 软件仓库来安装或者重新安装它。
|
||||
|
||||
#### 3.1 获取 root 权限
|
||||
|
||||
以下的 debian 软件包必须以 root 身份来安装,所以在必要时请运行下面这个命令:
|
||||
|
||||
```
|
||||
su -
|
||||
```
|
||||
|
||||
#### 3.2 安装密钥
|
||||
|
||||
下面的 debian 软件包中包含 ncomputers.org debian 软件仓库的公钥:
|
||||
|
||||
```
|
||||
wget http://ncomputers.org/debian/keyring.deb
|
||||
dpkg -i keyring.deb
|
||||
rm keyring.deb
|
||||
```
|
||||
|
||||
#### 3.3 安装软件源列表
|
||||
|
||||
下面这些 debian 软件包含有 ncomputers.org debian 软件仓库的软件源列表,这些软件源列表对应最新的 debian 发行版本(截至 2017 年)。
|
||||
|
||||
**注**:你也可以将下面的以 `#` 注释的行加入 `/etc/apt/sources.list` 文件中,而不是为你的 debian 发行版本安装对应的 debian 软件包。但假如这些源在将来改变了,你就需要手动更新它们。
|
||||
|
||||
**Wheezy:**
|
||||
|
||||
```
|
||||
#deb http://ncomputers.org/debian wheezy main
|
||||
wget http://ncomputers.org/debian/wheezy.deb
|
||||
dpkg -i wheezy.deb
|
||||
rm wheezy.deb
|
||||
```
|
||||
|
||||
**Jessie:**
|
||||
|
||||
```
|
||||
#deb http://ncomputers.org/debian jessie main
|
||||
wget http://ncomputers.org/debian/jessie.deb
|
||||
dpkg -i jessie.deb
|
||||
rm jessie.deb
|
||||
```
|
||||
|
||||
**Stretch:**
|
||||
|
||||
```
|
||||
#deb http://ncomputers.org/debian stretch main
|
||||
wget http://ncomputers.org/debian/stretch.deb
|
||||
dpkg -i stretch.deb
|
||||
rm stretch.deb
|
||||
```
|
||||
|
||||
#### 3.4 升级软件源列表
|
||||
|
||||
一旦密钥和软件源列表安装完成,则可以使用下面的命令来更新:
|
||||
|
||||
```
|
||||
apt-get update
|
||||
```
|
||||
|
||||
#### 3.5 测试 pandom
|
||||
|
||||
测试完毕后,你可以随意卸载下面的软件包。
|
||||
|
||||
**注**:假如你已经在你的 Linux 中测试了 pandom , 则你可以跳过这一步。
|
||||
|
||||
```
|
||||
apt-get install pandom-test
|
||||
pandom-test
|
||||
|
||||
generating checkme file, please wait around 8 minutes ...
|
||||
entropyarray in /tmp/tmp.5SkiYsYG3h
|
||||
15.977366
|
||||
min:12
|
||||
med:32
|
||||
max:57
|
||||
15.977367
|
||||
min:13
|
||||
med:32
|
||||
max:57
|
||||
15.977328
|
||||
min:12
|
||||
med:32
|
||||
max:61
|
||||
15.977431
|
||||
min:12
|
||||
med:32
|
||||
max:59
|
||||
15.977437
|
||||
min:11
|
||||
med:32
|
||||
max:57
|
||||
15.977298
|
||||
min:11
|
||||
med:32
|
||||
max:59
|
||||
15.977196
|
||||
min:10
|
||||
med:32
|
||||
max:57
|
||||
```
|
||||
|
||||
#### 3.6 安装 pandom
|
||||
|
||||
```
|
||||
apt-get install pandom
|
||||
```
|
||||
|
||||
### 4 管理 pandom
|
||||
|
||||
在 pandom 安装完成后,你可能想对它进行管理。
|
||||
|
||||
#### 4.1 性能测试
|
||||
|
||||
pandom 提供大约 8 kB/s 的随机数生成速率,但它的性能可能根据环境而有所差异。
|
||||
|
||||
```
|
||||
dd if=/dev/random of=/dev/null bs=8 count=512
|
||||
|
||||
512+0 records in
|
||||
512+0 records out
|
||||
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.451253 s, 9.1 kB/s
|
||||
```
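
除了测量吞吐量,还可以查看内核当前估计的可用熵值作为补充参考(该数值会随读取而波动):

```
cat /proc/sys/kernel/random/entropy_avail
```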
|
||||
|
||||
#### 4.2 熵和序列相关性检验
|
||||
|
||||
除了 ncomputers.org/entropyarray,还存在更多的测试,例如 [Ilja Gerhardt 的 NIST 测试套件][62]。
|
||||
|
||||
```
|
||||
entropyarray /dev/random 1M
|
||||
```
|
||||
|
||||
#### 4.3 系统服务
|
||||
|
||||
pandom 还可以以系统服务的形式运行。
|
||||
|
||||
**基于 init.d 的初始化系统(如 upstart、sysvinit):**
|
||||
|
||||
```
|
||||
/etc/init.d/random status
|
||||
/etc/init.d/random start
|
||||
/etc/init.d/random stop
|
||||
/etc/init.d/random restart
|
||||
```
|
||||
**以 systemd 作为初始化程序的系统:**
|
||||
|
||||
```
|
||||
systemctl status random
|
||||
systemctl start random
|
||||
systemctl stop random
|
||||
systemctl restart random
|
||||
```
|
||||
|
||||
### 5 增强不可预测性或者性能
|
||||
|
||||
假如你想增加你编译的 pandom 程序的不可预测性或者性能,你可以尝试增加或删减 CPU 时间测量选项。
|
||||
|
||||
#### 5.1 编辑源文件
|
||||
|
||||
请按照自己的意愿,在源文件 `test.s` 和 `tRNG.s` 中增加或者移除 `measurement block`(测量块)代码段。
|
||||
|
||||
```
|
||||
#measurement block
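# (补充注释)$35 是 amd64 Linux 上 nanosleep 的系统调用号;
# rdtsc 读取 CPU 的时间戳计数器,pandom 依靠系统调用前后
# 时间戳的抖动来采集随机性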
|
||||
mov $35,%rax
|
||||
syscall
|
||||
rdtsc
|
||||
[...]
|
||||
|
||||
#measurement block
|
||||
mov $35,%rax
|
||||
syscall
|
||||
rdtsc
|
||||
[...]
|
||||
```
|
||||
|
||||
#### 5.2 测试不可预测性
|
||||
|
||||
我们总是建议在使用个人定制的 pandom 实现来用于加密目地之前,先进行一些测试。
|
||||
|
||||
```
|
||||
make check
|
||||
```
|
||||
|
||||
#### 5.3 安装定制的 pandom
|
||||
|
||||
假如你对测试的结果很满意,你就可以使用下面的命令来安装你的 pandom 实现。
|
||||
|
||||
```
|
||||
make install
|
||||
```
|
||||
|
||||
更多额外信息及更新详见 [http://ncomputers.org/pandom][63]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/
|
||||
|
||||
作者:[Oliver][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/
|
||||
[1]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access
|
||||
[2]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-build-dependencies
|
||||
[3]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#arch-based-systems
|
||||
[4]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#debian-based-systems
|
||||
[5]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#red-hat-based-systems
|
||||
[6]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#suse-based-systems
|
||||
[7]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-download-and-extract-sources
|
||||
[8]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-before-installing-recommended
|
||||
[9]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-determine-init-system
|
||||
[10]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-pandom
|
||||
[11]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#initd-based-init-system-eg-upstart-sysvinit
|
||||
[12]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#systemd-as-init-system
|
||||
[13]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access-2
|
||||
[14]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-build-dependencies-2
|
||||
[15]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#arch-based-systems-2
|
||||
[16]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#debian-based-systems-2
|
||||
[17]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#red-hat-based-systems-2
|
||||
[18]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#suse-based-systems-2
|
||||
[19]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-download-and-extract-sources-2
|
||||
[20]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-entropyarray
|
||||
[21]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-analyze-checkme-file
|
||||
[22]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-uninstall-entropyarray-optional
|
||||
[23]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access-3
|
||||
[24]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-keyring
|
||||
[25]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-sources-list
|
||||
[26]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#wheezy
|
||||
[27]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#jessie
|
||||
[28]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#stretch
|
||||
[29]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-update-sources-list
|
||||
[30]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-pandom
|
||||
[31]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-pandom-2
|
||||
[32]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-performance-test
|
||||
[33]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-entropy-and-serial-correlation-test
|
||||
[34]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-system-service
|
||||
[35]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#initd-based-init-system-eg-upstart-sysvinit-2
|
||||
[36]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#systemd-as-init-system-2
|
||||
[37]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-edit-source-files
|
||||
[38]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-the-unpredictability
|
||||
[39]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-personalized-pandom
|
||||
[40]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#introduction
|
||||
[41]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-installation-of-pandom
|
||||
[42]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-analysis-of-checkme-file
|
||||
[43]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-installation-using-debian-repository
|
||||
[44]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-managing-pandom
|
||||
[45]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-increasing-unpredictability-or-performance
|
||||
[46]:http://ncomputers.org/pandom
|
||||
[47]:http://ncomputers.org/ubit
|
||||
[48]:http://ncomputers.org/pandom.tar.gz
|
||||
[49]:http://unix.stackexchange.com/a/18210/94448
|
||||
[50]:http://ncomputers.org/rearray.tar.gz
|
||||
[51]:http://ncomputers.org/entropy.tar.gz
|
||||
[52]:http://ncomputers.org/entropyarray.tar.gz
|
||||
[53]:http://ncomputers.org/ubit
|
||||
[54]:http://www.fourmilab.ch/random/
|
||||
[55]:http://ncomputers.org/debian/keyring.deb
|
||||
[56]:http://ncomputers.org/debian
|
||||
[57]:http://ncomputers.org/debian/wheezy.deb
|
||||
[58]:http://ncomputers.org/debian
|
||||
[59]:http://ncomputers.org/debian/jessie.deb
|
||||
[60]:http://ncomputers.org/debian
|
||||
[61]:http://ncomputers.org/debian/stretch.deb
|
||||
[62]:https://gerhardt.ch/random.php
|
||||
[63]:http://ncomputers.org/pandom
|
@ -1,25 +1,20 @@
|
||||
在 Linux 上给用户赋予指定目录的读写权限
|
||||
============================================================
|
||||
|
||||
|
||||
在上篇文章中我们向您展示了如何在 Linux 上[创建一个共享目录][3]。这次,我们会为您介绍如何将 Linux 上指定目录的读写权限赋予用户。
|
||||
|
||||
|
||||
有两种方法可以实现这个目标:第一种是 [使用 ACL (访问控制列表)][4] ,第二种是[创建用户组来管理文件权限][5],下面会一一介绍。
|
||||
|
||||
|
||||
为了完成这个教程,我们将使用以下设置。
|
||||
|
||||
```
|
||||
Operating system: CentOS 7
|
||||
Test directory: /shares/project1/reports
|
||||
Test user: tecmint
|
||||
Filesystem type: Ext4
|
||||
```
|
||||
- 操作系统:CentOS 7
|
||||
- 测试目录:`/shares/project1/reports`
|
||||
- 测试用户:tecmint
|
||||
- 文件系统类型:ext4
|
||||
|
||||
请确认所有的命令都是使用 root 用户执行的,或者使用 [sudo 命令][6] 来享受与之同样的权限。
|
||||
|
||||
让我们开始吧!下面,先使用 mkdir 命令来创建一个名为 `reports` 的目录。
|
||||
让我们开始吧!下面,先使用 `mkdir` 命令来创建一个名为 `reports` 的目录。
|
||||
|
||||
```
|
||||
# mkdir -p /shares/project1/reports
|
||||
@ -27,16 +22,16 @@ Filesystem type: Ext4
|
||||
|
||||
### 使用 ACL 来为用户赋予目录的读写权限
|
||||
|
||||
重要提示:打算使用此方法的话,您需要确认您的 Linux 文件系统类型(如 Ext3 和 Ext4, NTFS, BTRFS)支持 ACL。
|
||||
重要提示:打算使用此方法的话,您需要确认您的 Linux 文件系统类型(如 ext3 和 ext4, NTFS, BTRFS)支持 ACL。
|
||||
|
||||
1. 首先, 依照以下命令在您的系统中[检查当前文件系统类型][7],并且查看内核是否支持 ACL:
|
||||
1、 首先, 依照以下命令在您的系统中[检查当前文件系统类型][7],并且查看内核是否支持 ACL:
|
||||
|
||||
```
|
||||
# df -T | awk '{print $1,$2,$NF}' | grep "^/dev"
|
||||
# grep -i acl /boot/config*
|
||||
```
|
||||
|
||||
从下方的截屏可以看到,文件系统类型是 **Ext4**,并且从 **CONFIG_EXT4_FS_POSIX_ACL=y** 选项可以发现内核是支持 **POSIX ACLs** 的。
|
||||
从下方的截屏可以看到,文件系统类型是 `ext4`,并且从 `CONFIG_EXT4_FS_POSIX_ACL=y` 选项可以发现内核是支持 **POSIX ACLs** 的。
|
||||
|
||||
[
|
||||
![Check Filesystem Type and Kernel ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png)
|
||||
@ -44,7 +39,7 @@ Filesystem type: Ext4
|
||||
|
||||
*查看文件系统类型和内核的 ACL 支持。*
|
||||
|
||||
2. 接下来,查看文件系统(分区)挂载时是否使用了 ACL 选项。
|
||||
2、 接下来,查看文件系统(分区)挂载时是否使用了 ACL 选项。
|
||||
|
||||
```
|
||||
# tune2fs -l /dev/sda1 | grep acl
|
||||
@ -55,14 +50,14 @@ Filesystem type: Ext4
|
||||
|
||||
*查看分区是否支持 ACL*
|
||||
|
||||
通过上边的输出可以发现,默认的挂载项目中已经对 **ACL** 进行了支持。如果发现结果不如所愿,你可以通过以下命令对指定分区(此例中使用 **/dev/sda3**)开启 ACL 的支持。
|
||||
通过上边的输出可以发现,默认的挂载选项中已经启用了对 **ACL** 的支持。如果发现结果不如所愿,你可以通过以下命令对指定分区(此例中使用 `/dev/sda3`)开启 ACL 的支持。
|
||||
|
||||
```
|
||||
# mount -o remount,acl /
|
||||
# tune2fs -o acl /dev/sda3
|
||||
```
|
||||
|
||||
3. 现在是时候指定目录 `reports` 的读写权限分配给名为 `tecmint` 的用户了,依照以下命令执行即可。
|
||||
3、 现在是时候指定目录 `reports` 的读写权限分配给名为 `tecmint` 的用户了,依照以下命令执行即可。
|
||||
|
||||
```
|
||||
# getfacl /shares/project1/reports # Check the default ACL settings for the directory
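# setfacl -m u:tecmint:rw /shares/project1/reports   # (示意)赋予用户 tecmint 读写权限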
|
||||
@ -75,9 +70,9 @@ Filesystem type: Ext4
|
||||
|
||||
*通过 ACL 对指定目录赋予读写权限*
|
||||
|
||||
在上方的截屏中,通过输出结果的第二行 **getfacl** 命令可以发现,用户 `tecmint` 已经成功的被赋予了 **/shares/project1/reports** 目录的读写权限。
|
||||
在上方的截屏中,通过输出结果的第二行 `getfacl` 命令可以发现,用户 `tecmint` 已经成功的被赋予了 `/shares/project1/reports` 目录的读写权限。
|
||||
|
||||
如果想要获取ACL列表的更多信息。可以在下方查看我们的其他指南。
|
||||
如果想要获取 ACL 的更多信息,可以在下方查看我们的其他指南:
|
||||
|
||||
1. [如何使用访问控制列表(ACL)为用户/组设置磁盘配额][1]
|
||||
2. [如何使用访问控制列表(ACL)挂载网络共享][2]
|
||||
@ -86,7 +81,7 @@ Filesystem type: Ext4
|
||||
|
||||
### 使用用户组来为用户赋予指定目录的读写权限
|
||||
|
||||
1. 如果用户已经拥有了默认的用户组(通常组名与用户名相同),就可以简单的通过变更文件夹的所属用户组来完成。
|
||||
1、 如果用户已经拥有了默认的用户组(通常组名与用户名相同),就可以简单地通过变更文件夹的所属用户组来完成。
|
||||
|
||||
```
|
||||
# chgrp tecmint /shares/project1/reports
|
||||
@ -98,20 +93,20 @@ Filesystem type: Ext4
|
||||
# groupadd projects
|
||||
```
|
||||
|
||||
2. 接下来将用户 `tecmint` 添加到 `projects` 组中:
|
||||
2、 接下来将用户 `tecmint` 添加到 `projects` 组中:
|
||||
|
||||
```
|
||||
# usermod -aG projects tecmint # add user to projects
|
||||
# groups tecmint # check users groups
|
||||
```
|
||||
|
||||
3. 将目录的所属用户组变更为 projects:
|
||||
3、 将目录的所属用户组变更为 projects:
|
||||
|
||||
```
|
||||
# chgrp projects /shares/project1/reports
|
||||
```
|
||||
|
||||
4. 现在,给组成员设置读写权限。
|
||||
4、 现在,给组成员设置读写权限。
|
||||
|
||||
```
|
||||
# chmod -R 0760 /shares/project1/reports
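# 补充说明:0760 表示属主可读/写/执行,所属组(projects)可读/写,其他用户无任何权限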
|
||||
@ -141,7 +136,7 @@ via: http://www.tecmint.com/give-read-write-access-to-directory-in-linux/
|
||||
[a]:http://www.tecmint.com/author/aaronkili/
|
||||
[1]:http://www.tecmint.com/set-access-control-lists-acls-and-disk-quotas-for-users-groups/
|
||||
[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
|
||||
[3]:http://www.tecmint.com/create-a-shared-directory-in-linux/
|
||||
[3]:https://linux.cn/article-8187-1.html
|
||||
[4]:http://www.tecmint.com/secure-files-using-acls-in-linux/
|
||||
[5]:http://www.tecmint.com/manage-users-and-groups-in-linux/
|
||||
[6]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
|
@ -1,30 +1,30 @@
|
||||
如何用树莓派搭建一个自己的 web 服务器
|
||||
如何用树莓派搭建个人 web 服务器
|
||||
============================================================
|
||||
|
||||
![How to set up a personal web server with a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "How to set up a personal web server with a Raspberry Pi")
|
||||
|
||||
>图片来源 : opensource.com
|
||||
|
||||
个人网络服务器即 “云”,只不过是你拥有和控制它,而不是一个大型公司。
|
||||
个人 Web 服务器即 “云”,只不过是你拥有和控制它,而不是一个大型公司。
|
||||
|
||||
拥有一个自己的云有很多好处,包括定制,免费存储,免费的互联网服务,开源软件的路径,高品质的安全性,完全控制您的内容,快速更改的能力,实验代码的地方等等。 这些好处大部分是无法估量的,但在财务上,这些好处可以为您每个月节省超过 100 美元。
|
||||
拥有一个自己的云有很多好处,包括可定制、免费存储、免费的互联网服务、通往开源软件之路、高安全性、完全控制您的内容、快速更改的能力、实验代码的地方等等。 这些好处大部分是无法估量的,但在财务上,这些好处可以为您每个月节省超过 100 美元。
|
||||
|
||||
![Building your own web server with Raspberry Pi](https://opensource.com/sites/default/files/1-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Building your own web server with Raspberry Pi")
|
||||
|
||||
图片来自 Mitchell McLaughlin, CC BY-SA 4.0
|
||||
|
||||
我本可以选择 AWS,但我更喜欢完全的自由、可控的安全性,并且可以借此学习这些东西是如何搭建的。
|
||||
|
||||
* 私有主机: 不使用 BlueHost 或 DreamHost
|
||||
* 云存储:不使用 Dropbox, Box, Google Drive, Microsoft Azure, iCloud, 或是 AWS
|
||||
* 内部部署安全
|
||||
* 私有 Web 托管:而非 BlueHost 或 DreamHost
|
||||
* 云存储:而非 Dropbox、Box、Google Drive、Microsoft Azure、iCloud 或是 AWS
|
||||
* 自主部署安全
|
||||
* HTTPS:Let’s Encrypt
|
||||
* 分析: Google
|
||||
* OpenVPN:不需要专有互联网连接 (预计每个月花费 $7)
|
||||
* OpenVPN:不需要专有互联网连接(预计每个月花费 $7)
|
||||
|
||||
我所使用的物品清单:
|
||||
|
||||
* 树莓派 3 代 Model B
|
||||
* MicroSD 卡(推荐使用 32 GB, [兼容树莓派的 SD 卡][a1])
|
||||
* MicroSD 卡(推荐使用 32 GB, [兼容树莓派的 SD 卡][a1])
|
||||
* USB microSD 卡读卡器
|
||||
* 以太网络线
|
||||
* 连接上 Wi-Fi 的路由器
|
||||
@ -37,36 +37,38 @@
|
||||
* 显示器 (支持接入 HDMI)
|
||||
* MacBook Pro
|
||||
|
||||
### 步骤 1: 启动树莓派
|
||||
### 步骤 1: 启动树莓派
|
||||
|
||||
下载最新发布的 Raspbian (树莓派的操作系统)。 [Raspbian Jessie][a6] 的 ZIP 包就可以用 [1]。解压缩或提取下载的文件然后把它拷贝到 SD 卡里。使用 [Pi Filler][a7] 可以让这些过程变得更简单。[下载 Pi Filer 1.3][8] 或最新的版本。解压或提取下载文件之后打开它,你应该会看到这样的提示:
|
||||
下载最新发布的 Raspbian (树莓派的操作系统)。 [Raspbian Jessie][a6] 的 ZIP 包就可以用 [脚注 1]。解压缩或提取下载的文件然后把它拷贝到 SD 卡里。使用 [Pi Filler][a7] 可以让这些过程变得更简单。[下载 Pi Filer 1.3][8] 或最新的版本。解压或提取下载文件之后打开它,你应该会看到这样的提示:
|
||||
|
||||
![Pi Filler prompt](https://opensource.com/sites/default/files/2-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Pi Filler prompt")
|
||||
|
||||
确保 USB 读卡器这时还没有插上。如果已经插上了那就先推出。点 Continue 继续下一步。你会看到一个让你选择文件的界面,选择你之前解压缩后的树莓派系统文件。然后你会看到另一个提示,如图所示:
|
||||
确保 USB 读卡器这时还没有插上。如果已经插上了那就先弹出。点 “Continue” 继续下一步。你会看到一个让你选择文件的界面,选择你之前解压缩后的树莓派系统文件。然后你会看到另一个提示,如图所示:
|
||||
|
||||
![USB card reader prompt](https://opensource.com/sites/default/files/3-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "USB card reader")
|
||||
|
||||
把 MicroSD 卡 (推荐 32 GB ,至少 16GB) 插入到 USB MicroSD 卡读卡器里。然后把 USB 读卡器接入到你的电脑里。你可以把你的 SD 卡重命名为 “Raspberry” 以区别其他设备。然后点击 continue。请先确保你的 SD 卡是空的,因为 Pi Filler 会在运行时 _擦除_ 所有事先存在 SD 卡里的内容。如果你要备份卡里的内容,那你最好就马上备份。当你点 continue 的时候,Raspbian OS 就会被写入到 SD 卡里。这个过程大概会花费一到三分钟左右。当写入完成后,推出 USB 读卡器,把 SD 卡拔出来插入到树莓派的 SD 卡槽里。把电源线接上,给树莓派提供电源。这时树莓派就会自己启动。树莓派的默认登录账户信息是:
|
||||
把 MicroSD 卡(推荐 32 GB ,至少 16GB)插入到 USB MicroSD 卡读卡器里。然后把 USB 读卡器接入到你的电脑里。你可以把你的 SD 卡重命名为 “Raspberry” 以区别其他设备。然后点击 “Continue”。请先确保你的 SD 卡是空的,因为 Pi Filler 会在运行时 _擦除_ 所有事先存在 SD 卡里的内容。如果你要备份卡里的内容,那你最好就马上备份。当你点 “Continue” 的时候,Raspbian OS 就会被写入到 SD 卡里。这个过程大概会花费一到三分钟左右。当写入完成后,推出 USB 读卡器,把 SD 卡拔出来插入到树莓派的 SD 卡槽里。把电源线接上,给树莓派供电。这时树莓派就会自己启动。树莓派的默认登录账户信息是:
|
||||
|
||||
**用户名: pi
|
||||
密码: raspberry**
|
||||
- 用户名: pi
|
||||
- 密码:raspberry
|
||||
|
||||
当树莓派首次启动完成时,会跳出一个标题为 <ruby>设置选项<rt>Setup Options</rt></ruby> 的配置界面,就像下面的图片一样 [2]:
|
||||
当树莓派首次启动完成时,会跳出一个标题为 “<ruby>设置选项<rt>Setup Options</rt></ruby>” 的配置界面,就像下面的图片一样 [脚注 2]:
|
||||
|
||||
![Raspberry Pi software configuration setup](https://opensource.com/sites/default/files/4-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi software configuration setup")
|
||||
|
||||
选择 “Expand Filesystem” 这一选项并回车 [3]。 同时,我还推荐选择第二个选项 “Change User Password”。这对保证安全性来说尤为重要。它还能个性化你的树莓派。
|
||||
选择 “<ruby>扩展文件系统<rt>Expand Filesystem</rt></ruby>” 这一选项并回车 [脚注 3]。 同时,我还推荐选择第二个选项 “<ruby>修改密码<rt>Change User Password</rt></ruby>”。这对保证安全性来说尤为重要。它还能个性化你的树莓派。
|
||||
|
||||
在选项列表中选择第三项 “Enable Boot To Desktop/Scratch” 并回车。这时会跳到另一个标题为 “Choose boot option” 的界面,就像下面这张图这样。
|
||||
在选项列表中选择第三项 “<ruby>启用引导到桌面<rt>Enable Boot To Desktop/Scratch</rt></ruby>” 并回车。这时会跳到另一个标题为 “<ruby>选择引导选项<rt>Choose boot option</rt></ruby>” 的界面,就像下面这张图这样:
|
||||
|
||||
![Choose boot option](https://opensource.com/sites/default/files/5-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Choose boot option")
|
||||
|
||||
在 “Choose boot option” 这个界面选择第二个选项 “Desktop log in as user 'pi' at the graphical desktop” 并回车 [4]。完成这个操作之后会回到之前的 “Setup Options” 界面。如果没有回到之前的界面的话就选择当前界面底部的 “OK” 按钮并回车。
|
||||
在这个界面选择第二个选项 “<ruby>以用户‘pi’登录图形化桌面<rt>Desktop log in as user 'pi' at the graphical desktop</rt></ruby>” 并回车 [脚注 4]。完成这个操作之后会回到之前的 “<ruby>设置选项<rt>Setup Options</rt></ruby>” 界面。如果没有回到之前的界面的话就选择当前界面底部的 “OK” 按钮并回车。
|
||||
|
||||
当这些操作都完成之后,选择当前界面底部的 “Finish” 按钮并回车,这时它就会自动重启。如果没有自动重启的话,就在终端里使用如下命令来重启。
|
||||
|
||||
**$ sudo reboot**
|
||||
```
|
||||
$ sudo reboot
|
||||
```
|
||||
|
||||
接上一步的重启,如果所有步骤都顺利进行的话,你会进入到类似下面这样桌面环境中。
|
||||
|
||||
@ -76,17 +78,14 @@
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get upgrade -y
|
||||
|
||||
$ sudo apt-get dist-upgrade -y
|
||||
|
||||
$ sudo rpi-update
|
||||
```
|
||||
|
||||
这些操作可能会花费几分钟时间。完成之后,现在运行着的树莓派就是最新的了。
|
||||
|
||||
### 步骤 2: 配置树莓派
|
||||
### 步骤 2: 配置树莓派
|
||||
|
||||
SSH 指的是 Secure Shell,是一种加密网络协议,可让你在计算机和树莓派之间安全地传输数据。 你可以从 Mac 的命令行控制你的树莓派,而无需显示器或键盘。
|
||||
|
||||
@ -96,9 +95,9 @@ SSH 指的是 Secure Shell,是一种加密网络协议,可让你在计算机
|
||||
$ sudo ifconfig
|
||||
```
|
||||
|
||||
如果你在使用以太网,看 “eth0” 部分。如果你在使用 Wi-Fi, 看 “wlan0” 部分。
|
||||
如果你在使用以太网,看 `eth0` 部分。如果你在使用 Wi-Fi, 看 `wlan0` 部分。
|
||||
|
||||
查找 “inet addr”,后跟一个 IP 地址,如 192.168.1.115,这是本篇文章中使用的默认 IP。
|
||||
查找 `inet addr`,后跟一个 IP 地址,如 192.168.1.115,这是本篇文章中使用的默认 IP。
|
||||
|
||||
有了这个地址,在终端中输入 :
|
||||
|
||||
@ -106,9 +105,9 @@ $ sudo ifconfig
|
||||
$ ssh pi@192.168.1.115
|
||||
```
|
||||
|
||||
对于 PC 上的 SSH,请参见脚注 [5]。
|
||||
对于 PC 上的 SSH,请参见 [脚注 5]。
|
||||
|
||||
出现提示时输入默认密码 “raspberry”,除非你之前更改过密码。
|
||||
出现提示时输入默认密码 `raspberry`,除非你之前更改过密码。
|
||||
|
||||
现在你已经通过 SSH 登录成功。
|
||||
|
||||
@ -120,22 +119,18 @@ $ ssh pi@192.168.1.115
|
||||
$ sudo apt-get install xrdp
|
||||
```
|
||||
|
||||
Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
|
||||
xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
|
||||
|
||||
在 Mac 上,在 App store 中搜索 “Microsoft Remote Desktop”。 下载它。 (对于 PC,请参见脚注 [6]。)
|
||||
在 Mac 上,在 App store 中搜索 “Microsoft Remote Desktop”。 下载它。 (对于 PC,请参见 [脚注 6]。)
|
||||
|
||||
安装完成之后,在你的 Mac 中搜索一个叫 “Microsoft Remote Desktop” 的应用并打开它,你会看到 :
|
||||
|
||||
![Microsoft Remote Desktop](https://opensource.com/sites/default/files/7-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Microsoft Remote Desktop")
|
||||
|
||||
*图片来自 Mitchell McLaughlin, CC BY-SA 4.0*
|
||||
|
||||
点击 “New” 新建一个远程连接,在空白处填写如下配置。
|
||||
|
||||
![Setting up a remote connection](https://opensource.com/sites/default/files/8-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Setting up a remote connection")
|
||||
|
||||
*图片来自 Mitchell McLaughlin, CC BY-SA 4.0*
|
||||
|
||||
关闭 “New” 窗口就会自动保存。
|
||||
|
||||
你现在应该看到 “My Desktop” 下列出的远程连接。 双击它。
|
||||
@ -146,7 +141,7 @@ Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
|
||||
|
||||
好了,现在你不需要额外的鼠标、键盘或显示器就能控制你的树莓派。这是一个更为轻量级的配置。
|
||||
|
||||
### 静态本地 IP 地址
|
||||
### 静态化本地 IP 地址
|
||||
|
||||
有时候你的本地 IP 地址 192.168.1.115 会发生改变。我们需要让这个 IP 地址静态化。输入:
|
||||
|
||||
@ -154,17 +149,17 @@ Xrdp 支持 Mac 和 PC 的 Microsoft Remote Desktop 客户端。
|
||||
$ sudo ifconfig
|
||||
```
|
||||
|
||||
从 “eth0” 部分或 “wlan0” 部分,“inet addr”(树莓派当前 IP),“bcast”(广播 IP 范围)和 “mask”(子网掩码地址))中写入。 然后输入:
|
||||
从 `eth0` 部分或 `wlan0` 部分,记下 `inet addr`(树莓派当前 IP),`bcast`(广播 IP 范围)和 `mask`(子网掩码地址)。 然后输入:
|
||||
|
||||
```
|
||||
$ netstat -nr
|
||||
```
|
||||
|
||||
记下 “destination” 和 “gateway/network”。
|
||||
记下 `destination` 和 `gateway/network`。
|
||||
|
||||
![Setting up a local IP address](https://opensource.com/sites/default/files/setting_up_local_ip_address.png "Setting up a local IP address")
|
||||
|
||||
应该大概是这样子的:
|
||||
大概应该是这样子的:
|
||||
|
||||
```
|
||||
net address 192.168.1.115
|
||||
@ -181,7 +176,7 @@ destination 192.168.1.0
|
||||
$ sudo nano /etc/dhcpcd.conf
|
||||
```
|
||||
|
||||
不要使用 **/etc/network/interfaces**。
|
||||
不要去动 `/etc/network/interfaces`。
|
||||
|
||||
剩下要做的就是把这些内容追加到这个文件的底部,把 IP 换成你想要的 IP 地址。
|
||||
|
||||
@ -206,25 +201,25 @@ $ sudo ifconfig
|
||||
|
||||
这时你就可以看到你的树莓派上的新的静态配置了。
|
||||
|
||||
### 静态全局 IP address
|
||||
### 静态化全局 IP 地址
|
||||
|
||||
如果您的 ISP(互联网服务提供商)已经给您一个静态外部 IP 地址,您可以跳到端口转发部分。 如果没有,请继续阅读。
|
||||
|
||||
你已经设置了 SSH,远程桌面和静态内部 IP 地址,因此现在本地网络中的计算机将会知道在哪里可以找到你的树莓派。 但是你仍然无法从本地 Wi-Fi 网络外部访问你的树莓派。 你需要树莓派可以从互联网上的任何地方公开访问。这需要静态外部 IP 地址 [7]。
|
||||
你已经设置了 SSH、远程桌面和静态内部 IP 地址,因此现在本地网络中的计算机将会知道在哪里可以找到你的树莓派。 但是你仍然无法在本地 Wi-Fi 网络外部访问你的树莓派。 你需要树莓派可以从互联网上的任何地方公开访问。这需要静态的外部 IP 地址 [脚注 7]。
|
||||
|
||||
调用您的 ISP 并请求静态外部(有时称为静态全局)IP 地址可能会是一个非常敏感的过程。 ISP 拥有决策权,所以我会非常小心处理。 他们可能拒绝你的的静态外部 IP 地址请求。 如果他们拒绝了你的请求,你不要怪罪于他们,因为这种类型的请求有法律和操作风险。 他们特别不希望客户运行中型或大型互联网服务。 他们可能会明确地询问为什么需要一个静态的外部 IP 地址。 最好说实话,告诉他们你打算主办一个低流量的个人网站或类似的小型非营利互联网服务。 如果一切顺利,他们应该会建立一个任务,并在一两个星期内给你打电话。
|
||||
联系您的 ISP 并请求静态的外部(有时称为静态全局)IP 地址可能会是一个非常敏感的过程。 ISP 拥有决策权,所以我会非常小心处理。 他们可能拒绝你的的静态外部 IP 地址请求。 如果他们拒绝了你的请求,你不要怪罪于他们,因为这种类型的请求有法律和操作风险。 他们特别不希望客户运行中型或大型互联网服务。 他们可能会明确地询问为什么需要一个静态的外部 IP 地址。 最好说实话,告诉他们你打算主办一个低流量的个人网站或类似的小型非营利互联网服务。 如果一切顺利,他们应该会建立一个工单,并在一两个星期内给你打电话。
|
||||
|
||||
### 端口转发
|
||||
|
||||
这个新获得的 ISP 分配的静态全局 IP 地址是用于访问路由器。 树莓派现在仍然无法访问。 你需要设置端口转发才能访问树莓派。
|
||||
|
||||
端口是信息在互联网上传播的虚拟途径。 你有时需要转发端口,以使计算机像树莓派一样可以访问 Internet,因为它位于网络路由器后面。 VollmilchTV 专栏在 YouTube 上的一个视频,名字是[什么是 TCP/IP,端口,路由,Intranet,防火墙,互联网][9],帮助我更好地了解端口。
|
||||
端口是信息在互联网上传播的虚拟途径。 你有时需要转发端口,以使像树莓派这样位于网络路由器后面的计算机能够被 Internet 访问。 VollmilchTV 专栏在 YouTube 上的一个视频,名字是[什么是 TCP/IP,端口,路由,Intranet,防火墙,互联网][9],可以帮助你更好地了解端口。
|
||||
|
||||
端口转发可用于像 树莓派 Web 服务器或 VoIP 或点对点下载的应用程序。 有 [65,000+个端口][10]可供选择,因此你可以为你构建的每个 Internet 应用程序分配一个不同的端口。
|
||||
端口转发可用于像树莓派 Web 服务器、VoIP 或点对点下载这样的应用程序。 有 [65000 个以上的端口][10]可供选择,因此你可以为你构建的每个 Internet 应用程序分配一个不同的端口。
|
||||
|
||||
设置端口转发的方式取决于你的路由器。 如果你有 Linksys 的话,Gabriel Ramirez 在 YouTbue 上有一个标题叫 [How to go online with your Apache Ubuntu server][a2] 的视频解释了如何设置。 如果您没有 Linksys,请阅读路由器附带的文档,以便自定义和定义要转发的端口。
|
||||
设置端口转发的方式取决于你的路由器。 如果你有 Linksys 的话,Gabriel Ramirez 在 YouTube 上有一个标题叫 [如何让你的 Apache Ubuntu 服务器连到互联网][a2] 的视频解释了如何设置。 如果您没有 Linksys,请阅读路由器附带的文档,以便自定义并指定要转发的端口。
|
||||
|
||||
你将需要转发 SSH 以及远程桌面端口。
|
||||
你需要转发 SSH 以及远程桌面端口。
|
||||
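具体来说,通常需要像下面这样设置两条转发规则(内部 IP 以上文的静态地址为例;xrdp 使用远程桌面的默认端口 3389):

```
SSH     : 外部端口 22   -> 192.168.1.115 的端口 22
远程桌面 : 外部端口 3389 -> 192.168.1.115 的端口 3389
```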
|
||||
如果你认为你已经配置好端口转发了,输入下面的命令,通过 SSH 查看它是否正常工作:
|
||||
|
||||
@ -234,17 +229,17 @@ $ ssh pi@your_global_ip_address
|
||||
|
||||
它应该会提示你输入密码。
|
||||
|
||||
检查端口转发是否也适用于远程桌面。 打开 Microsoft Remote Desktop。 你之前的的远程连接设置应该已经保存了,但需要使用静态外部 IP 地址(例如195.198.227.116)来更新 “PC名称” 字段,而不是静态内部地址(例如 192.168.1.115)。
|
||||
检查端口转发是否也适用于远程桌面。 打开 Microsoft Remote Desktop。 你之前的远程连接设置应该已经保存了,但需要使用静态的外部 IP 地址(例如 195.198.227.116)来更新 “PC 名称” 字段,而不是静态的内部地址(例如 192.168.1.115)。
|
||||
|
||||
现在,尝试通过远程桌面连接。 它应该简单地加载并到达树莓派的桌面。
|
||||
现在,尝试通过远程桌面连接。 它应该简单地加载并显示树莓派的桌面。
|
||||
|
||||
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_1.png "Raspberry Pi desktop")
|
||||
|
||||
好了, 树莓派现在可以从互联网上访问了,并且已经准备好进行高级项目了。
|
||||
|
||||
作为一个奖励选项,您可以保持两个远程连接到您的 Pi。 一个通过互联网,另一个通过 LAN(局域网)。很容易设置。在 Microsoft Remote Desktop 中,保留一个称为 “Pi Internet” 的远程连接,另一个称为 “Pi Local”。 将 Pi Internet 的 “PC name” 配置为静态外部 IP 地址,例如 195.198.227.116。 将 Pi Local 的 “PC name” 配置为静态内部 IP 地址,例如192.168.1.115。 现在,您可以选择在全球或本地连接。
|
||||
作为一个奖励选项,您可以保持到您的 Pi 的两个远程连接。 一个通过互联网,另一个通过 LAN(局域网)。很容易设置。在 Microsoft Remote Desktop 中,保留一个称为 “Pi Internet” 的远程连接,另一个称为 “Pi Local”。 将 Pi Internet 的 “PC 名称” 配置为静态外部 IP 地址,例如 195.198.227.116。 将 Pi Local 的 “PC 名称” 配置为静态内部 IP 地址,例如 192.168.1.115。 现在,您可以选择在全局或本地连接。
|
||||
|
||||
如果你还没有看过由 Gabriel Ramirez 发布的 [如何使用您的Apache Ubuntu服务器上线][a3],那么你可以去看一下,作为过渡到第二个项目的教程。 它将向您展示项目背后的技术架构。 在我们的例子中,你使用的是树莓派而不是 Ubuntu 服务器。 动态 DNS 位于域公司和您的路由器之间,这是 Ramirez 省略的部分。 除了这个微妙之处外,视频是在整体上解释系统的工作原理。 您可能会注意到本教程涵盖了树莓派设置和端口转发,这是服务器端或后端。 查看原始来源,涵盖域名,动态 DNS,Jekyll(静态 HTML 生成器)和 Apache(网络托管)的更高级项目,这是客户端或前端。
|
||||
如果你还没有看过由 Gabriel Ramirez 发布的 [如何让你的 Apache Ubuntu 服务器连到互联网][a3],那么你可以去看一下,作为过渡到第二个项目的教程。 它将向您展示项目背后的技术架构。 在我们的例子中,你使用的是树莓派而不是 Ubuntu 服务器。 动态 DNS 位于域名公司和您的路由器之间,这是 Ramirez 省略的部分。 除了这个细微差别外,视频从整体上解释了系统的工作原理。 您可能会注意到,本教程涵盖了树莓派设置和端口转发,即服务器端或后端。 查看原始来源可以了解涉及域名、动态 DNS、Jekyll(静态 HTML 生成器)和 Apache(网络托管)的更高级项目,即客户端或前端。
|
||||
|
||||
### 脚注
|
||||
|
||||
@ -264,7 +259,7 @@ $ sudo-rasps-config
|
||||
|
||||
![PuTTY configuration](https://opensource.com/sites/default/files/putty_configuration.png "PuTTY configuration")
|
||||
|
||||
[下载并运行 PuTTY][11] 或 Windows 的另一个 SSH 客户端。 在该字段中输入你的 IP 地址,如上图所示。 将默认端口保留在 22。 回车,PuTTY 将打开一个终端窗口,提示你输入用户名和密码。 填写然后开始在树莓派上进行你的远程工作。
|
||||
[下载并运行 PuTTY][11] 或 Windows 的其它 SSH 客户端。 在该字段中输入你的 IP 地址,如上图所示。 将默认端口保留为 22。 回车,PuTTY 将打开一个终端窗口,提示你输入用户名和密码。 填写然后开始在树莓派上进行你的远程工作。
|
||||
|
||||
[6] 如果尚未安装,请下载 [Microsoft Remote Desktop][12]。 在您的计算机上搜索 Microsoft Remote Desktop 并运行。 提示时输入 IP 地址。 接下来,会弹出一个 xrdp 窗口,提示你输入用户名和密码。
|
||||
|
||||
@ -276,12 +271,10 @@ $ sudo-rasps-config
|
||||
|
||||
作者简介:
|
||||
|
||||
Mitchell McLaughlin - 我是一名开放网络的贡献者和开发者。我感兴趣的领域很广泛,但我特别喜欢开源软件/硬件,比特币和编程。 我住在旧金山 我有过一些简短的 GoPro 和 Oracle 工作经验。
|
||||
|
||||
Mitchell McLaughlin - 我是一名开放网络的贡献者和开发者。我感兴趣的领域很广泛,但我特别喜欢开源软件/硬件、比特币和编程。 我住在旧金山,曾在 GoPro 和 Oracle 有过短暂的工作经历。
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3
|
||||
|
||||
作者:[Mitchell McLaughlin ][a]
|
331
published/20170308 Our guide to a Golang logs world.md
Normal file
331
published/20170308 Our guide to a Golang logs world.md
Normal file
@ -0,0 +1,331 @@
|
||||
Go 语言日志指南
|
||||
============================================================
|
||||
|
||||
![golang logo](https://logmatic.io/wp-content/uploads/2017/03/golang-logo.png)
|
||||
|
||||
你是否厌烦了那些使用复杂语言编写的、难以部署的、总是在不停构建的解决方案?Golang 是解决这些问题的好方法,它和 C 语言一样快,又和 Python 一样简单。
|
||||
|
||||
但是你是如何使用 Golang 日志监控你的应用程序的呢?Golang 没有异常,只有错误。因此你的第一印象可能就是开发 Golang 日志策略并不是一件简单的事情。不支持异常事实上并不是什么问题,异常在很多编程语言中已经失去了其“异常”的本意:它们被滥用得太多,以至于其作用都被忽视了。
|
||||
|
||||
在进一步深入之前,我们首先会介绍 Golang 日志的基础,并讨论 Golang 日志标准、元数据意义、以及最小化 Golang 日志对性能的影响。通过日志,你可以追踪用户在你应用中的活动,快速识别你项目中失效的组件,并监控总体性能以及用户体验。
|
||||
|
||||
### I. Golang 日志基础
|
||||
|
||||
#### 1) 使用 Golang “log” 库
|
||||
|
||||
Golang 给你提供了一个称为 “log” 的原生[日志库][3] 。它的日志器完美适用于追踪简单的活动,例如通过使用可用的[选项][4]在错误信息之前添加一个时间戳。
|
||||
|
||||
下面是一个 Golang 中如何记录错误日志的简单例子:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
"errors"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func main() {
|
||||
/* 定义局部变量 */
|
||||
...
|
||||
|
||||
/* 除法函数,除以 0 的时候会返回错误 */
|
||||
ret, err = div(a, b)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
fmt.Println(ret)
|
||||
}
|
||||
```
|
||||
|
||||
如果你尝试除以 0,你就会得到类似下面的结果:
|
||||
|
||||
![golang 代码](https://logmatic.io/wp-content/uploads/2017/03/golang-code.png)
|
||||
|
||||
为了快速测试一个 Golang 函数,你可以使用 [go playground][5]。
|
||||
|
||||
为了确保你的日志总是能轻易访问,我们建议你把它们写到一个文件:
|
||||
|
||||
```
|
||||
package main
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
)
|
||||
func main() {
|
||||
// 按照所需读写权限创建文件
|
||||
f, err := os.OpenFile("filename", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
// 在用完后再用 defer 关闭它,而不是因为你觉得这样才地道!
|
||||
defer f.Close()
|
||||
// 设置日志输出到 f
|
||||
log.SetOutput(f)
|
||||
// 测试用例
|
||||
log.Println("check to make sure it works")
|
||||
}
|
||||
```
|
||||
|
||||
你可以在[这里][6]找到 Golang 日志的完整指南,以及 “log” [库][7]内可用函数的完整列表。
|
||||
|
||||
现在你就可以记录它们的错误以及根本原因啦。
|
||||
|
||||
另外,日志也可以帮你将活动流拼接在一起,查找需要修复的错误上下文,或者调查在你的系统中单个请求如何影响其它应用层和 API。
|
||||
|
||||
为了获得更好的日志效果,你首先需要在你的项目中使用尽可能多的上下文丰富你的 Golang 日志,并标准化你使用的格式。这就是 Golang 原生库能达到的极限。使用最广泛的库是 [glog][8] 和 [logrus][9]。必须承认还有很多好的库可以使用。如果你已经在使用支持 JSON 格式的库,你就不需要再换其它库了,后面我们会解释。
|
||||
|
||||
### II. 为你的 Golang 日志统一格式
|
||||
|
||||
#### 1) JSON 格式的结构优势
|
||||
|
||||
在一个项目或者多个微服务中结构化你的 Golang 日志可能是最困难的事情,但一旦完成就很轻松了。结构化你的日志能使机器可读(参考我们 [收集日志的最佳实践博文][10])。灵活性和层级是 JSON 格式的核心,因此信息能够轻易被人类和机器解析以及处理。
|
||||
|
||||
下面是一个使用 [Logrus/Logmatic.io][11] 如何用 JSON 格式记录日志的例子:
|
||||
|
||||
```
|
||||
package main
|
||||
import (
|
||||
log "github.com/Sirupsen/logrus"
|
||||
"github.com/logmatic/logmatic-go"
|
||||
)
|
||||
func main() {
|
||||
// 使用 JSONFormatter
|
||||
log.SetFormatter(&logmatic.JSONFormatter{})
|
||||
// 使用 logrus 像往常那样记录事件
|
||||
log.WithFields(log.Fields{"string": "foo", "int": 1, "float": 1.1 }).Info("My first ssl event from golang")
|
||||
}
|
||||
```
|
||||
|
||||
会输出结果:
|
||||
|
||||
```
|
||||
{
|
||||
"date":"2016-05-09T10:56:00+02:00",
|
||||
"float":1.1,
|
||||
"int":1,
|
||||
"level":"info",
|
||||
"message":"My first ssl event from golang",
|
||||
"String":"foo"
|
||||
}
|
||||
```
|
||||
|
||||
#### 2) 标准化 Golang 日志
|
||||
|
||||
同一个错误出现在代码的不同部分,却以不同形式被记录下来,是一件很遗憾的事情。下面是一个由于变量错误而无法确定 web 页面加载状态的例子。一个开发者的日志格式是:
|
||||
|
||||
```
|
||||
message: 'unknown error: cannot determine loading status from unknown error: missing or invalid arg value client'
|
||||
```
|
||||
|
||||
另一个人的格式却是:
|
||||
|
||||
```
|
||||
unknown error: cannot determine loading status - invalid client
|
||||
```
|
||||
|
||||
强制日志标准化的一个好的解决办法是在你的代码和日志库之间创建一个接口。这个标准化接口会包括所有你想添加到你日志中的可能行为的预定义日志消息。这么做可以防止出现不符合你想要的标准格式的自定义日志信息。这么做也便于日志调查。
|
||||
|
||||
![接口函数](https://logmatic.io/wp-content/uploads/2017/03/functions-interface.png)
|
||||
|
||||
由于日志格式都被统一处理,使它们保持更新也变得更加简单。如果出现了一种新的错误类型,它只需要被添加到一个接口,这样每个组员都会使用完全相同的信息。
|
||||
|
||||
最常用的简单例子,就是在 Golang 日志信息前面添加日志器名称和 id。然后你的代码就会发送“事件”到你的标准化接口,由它继续将它们转化为 Golang 日志消息。
|
||||
|
||||
```
|
||||
// 主要部分,我们会在这里定义所有消息。
|
||||
// Event 结构体很简单。为了当所有信息都被记录时能检索它们,
|
||||
// 我们维护了一个 Id
|
||||
var (
|
||||
invalidArgMessage = Event{1, "Invalid arg: %s"}
|
||||
invalidArgValueMessage = Event{2, "Invalid arg value: %s => %v"}
|
||||
missingArgMessage = Event{3, "Missing arg: %s"}
|
||||
)
|
||||
|
||||
// 在我们应用程序中可以使用的所有日志事件
|
||||
func (l *Logger)InvalidArg(name string) {
|
||||
l.entry.Errorf(invalidArgMessage.toString(), name)
|
||||
}
|
||||
func (l *Logger)InvalidArgValue(name string, value interface{}) {
|
||||
l.entry.WithField("arg." + name, value).Errorf(invalidArgValueMessage.toString(), name, value)
|
||||
}
|
||||
func (l *Logger)MissingArg(name string) {
|
||||
l.entry.Errorf(missingArgMessage.toString(), name)
|
||||
}
|
||||
```
|
||||
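上面的代码省略了 `Event` 的定义。结合下文日志输出中 “00003 - Missing arg: ...” 的形式,可以推测出类似下面这样的最小实现(仅为假设性示意,并非原文代码;使用时需要导入 `fmt`):

```
// 假设性示意:Event 保存消息 Id 和格式串
type Event struct {
	id      int
	message string
}

// toString 把消息 Id 补零到五位后拼在格式串前面,
// 得到类似 “00003 - Missing arg: %s” 的格式串
func (e Event) toString() string {
	return fmt.Sprintf("%05d - %s", e.id, e.message)
}
```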
|
||||
因此如果我们使用前面例子中无效的参数值,我们就会得到相似的日志信息:
|
||||
|
||||
```
|
||||
time="2017-02-24T23:12:31+01:00" level=error msg="LoadPageLogger00003 - Missing arg: client - cannot determine loading status" arg.client=<nil> logger.name=LoadPageLogger
|
||||
```
|
||||
|
||||
JSON 格式如下:
|
||||
|
||||
```
|
||||
{"arg.client":null,"level":"error","logger.name":"LoadPageLogger","msg":"LoadPageLogger00003 - Missing arg: client - cannot determine loading status", "time":"2017-02-24T23:14:28+01:00"}
|
||||
```
|
||||
|
||||
### III. Golang 日志上下文的力量
|
||||
|
||||
现在 Golang 日志已经按照特定结构和标准格式记录,时间会决定需要添加哪些上下文以及相关信息。为了能从你的日志中抽取信息,例如追踪一个用户活动或者工作流,上下文和元数据的顺序非常重要。
|
||||
|
||||
例如在 logrus 库中可以按照下面这样使用 JSON 格式添加 `hostname`、`appname` 和 `session` 参数:
|
||||
|
||||
```
|
||||
// 对于元数据,通常的做法是在不同日志语句之间复用这些字段。
|
||||
contextualizedLog := log.WithFields(log.Fields{
|
||||
"hostname": "staging-1",
|
||||
"appname": "foo-app",
|
||||
"session": "1ce3f6v"
|
||||
})
|
||||
contextualizedLog.Info("Simple event with global metadata")
|
||||
```
|
||||
|
||||
元数据可以视为 javascript 片段。为了更好地说明它们有多么重要,让我们看看几个 Golang 微服务中元数据的使用。你会清楚地看到是怎么在你的应用程序中跟踪用户的。这是因为你不仅需要知道一个错误发生了,还要知道是哪个实例以及什么模式导致了错误。假设我们有两个按顺序调用的微服务。上下文信息保存在头部(header)中传输:
|
||||
|
||||
```
|
||||
func helloMicroService1(w http.ResponseWriter, r *http.Request) {
|
||||
client := &http.Client{}
|
||||
// 该服务负责接收所有到来的用户请求
|
||||
// 我们会检查是否是一个新的会话还是已有会话的另一次调用
|
||||
session := r.Header.Get("x-session")
|
||||
if ( session == "") {
|
||||
session = generateSessionId()
|
||||
// 为新会话记录日志
|
||||
}
|
||||
// 每个请求的 Track Id 都是唯一的,因此我们会为每个会话生成一个
|
||||
track := generateTrackId()
|
||||
// 调用你的第二个微服务,添加 session/track
|
||||
reqService2, _ := http.NewRequest("GET", "http://localhost:8082/", nil)
|
||||
reqService2.Header.Add("x-session", session)
|
||||
reqService2.Header.Add("x-track", track)
|
||||
resService2, _ := client.Do(reqService2)
|
||||
….
|
||||
```
|
||||
|
||||
当调用第二个服务时:
|
||||
|
||||
```
|
||||
func helloMicroService2(w http.ResponseWriter, r *http.Request) {
|
||||
// 类似之前的微服务,我们检查会话并生成新的 track
|
||||
session := r.Header.Get("x-session")
|
||||
track := generateTrackId()
|
||||
// 这一次,我们检查请求中是否已经设置了一个 track id,
|
||||
// 如果是,它变为父 track
|
||||
parent := r.Header.Get("x-track")
|
||||
if (session == "") {
|
||||
w.Header().Set("x-parent", parent)
|
||||
}
|
||||
// 为响应添加 meta 信息
|
||||
w.Header().Set("x-session", session)
|
||||
w.Header().Set("x-track", track)
|
||||
if (parent == "") {
|
||||
w.Header().Set("x-parent", track)
|
||||
}
|
||||
// 填充响应
|
||||
w.WriteHeader(http.StatusOK)
|
||||
io.WriteString(w, fmt.Sprintf(aResponseMessage, 2, session, track, parent))
|
||||
}
|
||||
```
|
||||
|
||||
现在第二个微服务中已经有和初始查询相关的上下文和信息,一个 JSON 格式的日志消息看起来类似如下。
|
||||
|
||||
在第一个微服务:
|
||||
|
||||
```
|
||||
{"appname":"go-logging","level":"debug","msg":"hello from ms 1","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"UzWHRihF"}
|
||||
```
|
||||
|
||||
在第二个微服务:
|
||||
|
||||
```
|
||||
{"appname":"go-logging","level":"debug","msg":"hello from ms 2","parent":"UzWHRihF","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"DPRHBMuE"}
|
||||
```
|
||||
|
||||
如果在第二个微服务中出现了错误,多亏了 Golang 日志中保存的上下文信息,现在我们就可以确定它是怎样被调用的以及什么模式导致了这个错误。
|
||||
|
||||
如果你想进一步深挖 Golang 的追踪能力,还有一些库提供了追踪功能,例如 [Opentracing][12]。这个库提供了一种简单的方式,在或简单或复杂的架构中添加追踪实现。它允许你跨越不同步骤追踪用户的查询,就像下面这样:
|
||||
|
||||
![客户端事务](https://logmatic.io/wp-content/uploads/2017/03/client-transaction.png)
|
||||
|
||||
### IV. Golang 日志对性能的影响
|
||||
|
||||
#### 1) 不要在 Goroutine 中使用日志
|
||||
|
||||
在每个 goroutine 中创建一个新的日志器看起来很诱人,但最好别这么做。Goroutine 是一个轻量级线程管理器,用于完成一个“简单的”任务,因此它不应该负责日志。这么做可能导致并发问题:在每个 goroutine 中使用 `log.New()` 会重复创建日志接口,所有日志器会并发地尝试访问同一个 io.Writer。
|
||||
|
||||
为了限制对性能的影响以及避免并发调用 io.Writer,库通常使用一个特定的 goroutine 用于日志输出。
|
||||
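下面是这种做法的一个最小示意(假设性代码,不代表任何具体库的实现):所有 goroutine 把日志行发送到同一个 channel,由专门的一个 goroutine 串行写出,从而避免并发访问同一个 io.Writer。

```
package main

import (
	"fmt"
	"os"
)

func main() {
	ch := make(chan string, 100) // 缓冲 channel,收集各个 goroutine 的日志行
	done := make(chan struct{})

	// 专门负责日志输出的 goroutine:串行写入,避免并发写冲突
	go func() {
		for line := range ch {
			fmt.Fprintln(os.Stdout, line)
		}
		close(done)
	}()

	ch <- "hello from a goroutine-safe logger"
	close(ch) // 不再有日志后关闭 channel
	<-done    // 等待日志 goroutine 把剩余日志写完
}
```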
|
||||
#### 2) 使用异步库
|
||||
|
||||
尽管有很多可用的 Golang 日志库,但要注意它们中的大部分都是同步的(事实上是伪异步)。原因很可能是,到目前为止它们中还没有哪一个因为日志而严重影响过性能。
|
||||
|
||||
但正如 Kjell Hedström 在[他的实验][13]中展示的,使用多个线程创建成千上万日志,即便是在最坏情况下,异步 Golang 日志也会有 40% 的性能提升。因此日志是有开销的,也会对你的应用程序性能产生影响。如果你并不需要处理大量的日志,使用伪异步 Golang 日志库可能就足够了。但如果你需要处理大量的日志,或者很关注性能,Kjell Hedström 的异步解决方案就很有趣(尽管事实上你可能需要进一步开发,因为它只包括了最小的功能需求)。
|
||||
|
||||
#### 3)使用严重等级管理 Golang 日志
|
||||
|
||||
一些日志库允许你启用或停用特定的日志器,这可能会派上用场。例如在生产环境中你可能不需要一些特定等级的日志。下面是一个如何在 glog 库中停用日志器的例子,其中日志器被定义为布尔值:
|
||||
|
||||
```
|
||||
type Log bool
|
||||
func (l Log) Println(args ...interface{}) {
|
||||
fmt.Println(args...)
|
||||
}
|
||||
var debug Log = false
|
||||
if debug {
|
||||
debug.Println("DEBUGGING")
|
||||
}
|
||||
```
|
||||
|
||||
然后你就可以在配置文件中定义这些布尔参数来启用或者停用日志器。
|
||||
|
||||
如果没有好的 Golang 日志策略,Golang 日志的开销可能会很大。开发人员应该抵制记录几乎所有事情的诱惑,尽管这非常有趣!如果日志的目的是获取尽可能多的信息,就必须正确使用日志,避免无用信息带来的白噪音。
|
||||
|
||||
### V. 集中化 Golang 日志
|
||||
|
||||
![集中化 go 日志](https://logmatic.io/wp-content/uploads/2017/03/source-selector-1024x460-1.png)
|
||||
|
||||
如果你的应用程序部署在多台服务器上,集中化日志可以避免为了调查一个问题而逐台登录服务器的麻烦,确实很有用。
|
||||
|
||||
最好的实现方式是使用日志传送工具,例如 Windows 中的 Nxlog,Linux 中的 Rsyslog(默认已安装)、Logstash 和 FluentD。日志传送工具的唯一目的就是发送日志,因此它们能够处理连接失效以及其它你很可能会遇到的问题。
|
||||
|
||||
这里甚至有一个 [Golang syslog 软件包][14] 帮你将 Golang 日志发送到 syslog 守护进程。
|
||||
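一个最小的使用示意(基于标准库 `log/syslog`,在类 Unix 系统上可用;程序名 `my-go-app` 是随意取的示例):

```
package main

import (
	"log"
	"log/syslog"
)

func main() {
	// 连接本机的 syslog 守护进程,设置优先级和标签
	w, err := syslog.New(syslog.LOG_INFO|syslog.LOG_DAEMON, "my-go-app")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// 之后所有 log.* 调用都会写入 syslog
	log.SetOutput(w)
	log.Println("hello from Go via syslog")
}
```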
|
||||
### 希望你享受你的 Golang 日志之旅
|
||||
|
||||
在项目一开始就考虑你的 Golang 日志策略非常重要。如果在代码的任意地方都可以获得所有的上下文,追踪用户就会变得很简单。阅读来自不同服务、未经标准化的日志本来就是很痛苦的事情。一开始就计划好在多个微服务间传递相同的用户或请求 id,之后你就能比较容易地过滤信息并在系统中跟踪活动。
|
||||
|
||||
你是在构建一个大型 Golang 项目,还是几个微服务,也会影响你的日志策略。大项目的主要组件应该有按照其功能命名的专用 Golang 日志器,这使你可以立即判断出日志来自哪一部分代码。然而对于微服务或者小的 Golang 项目,只有较少的核心组件需要自己的日志器。但在每种情形中,日志器的数目都应该保持低于核心功能的数目。
|
||||
|
||||
你现在已经可以使用 Golang 日志量化决定你的性能或者用户满意度啦!
|
||||
|
||||
_如果你有想阅读的特定编程语言,在 Twitter [@logmatic][2] 上告诉我们吧。_
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
|
||||
作者:[Nils][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
[1]:https://twitter.com/logmatic?lang=en
|
||||
[2]:http://twitter.com/logmatic
|
||||
[3]:https://golang.org/pkg/log/
|
||||
[4]:https://golang.org/pkg/log/#pkg-constants
|
||||
[5]:https://play.golang.org/
|
||||
[6]:https://www.goinggo.net/2013/11/using-log-package-in-go.html
|
||||
[7]:https://golang.org/pkg/log/
|
||||
[8]:https://github.com/google/glog
|
||||
[9]:https://github.com/sirupsen/logrus
|
||||
[10]:https://logmatic.io/blog/beyond-application-monitoring-discover-logging-best-practices/
|
||||
[11]:https://github.com/logmatic/logmatic-go
|
||||
[12]:https://github.com/opentracing/opentracing-go
|
||||
[13]:https://sites.google.com/site/kjellhedstrom2/g2log-efficient-background-io-processign-with-c11/g2log-vs-google-s-glog-performance-comparison
|
||||
[14]:https://golang.org/pkg/log/syslog/
|
@ -1,10 +1,9 @@
|
||||
Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part 8
|
||||
Samba 系列(八):使用 Samba 和 Winbind 将 Ubuntu 16.04 添加到 AD 域
|
||||
============================================================
|
||||
使用 Samba 和 Winbind 将 Ubuntu 16.04 添加到 AD 域 ——(八)
|
||||
|
||||
这篇文章讲述了如何将 Ubuntu 主机加入到 Samba4 AD 域,并实现使用域帐号登录 Ubuntu 系统。
|
||||
|
||||
#### 要求:
|
||||
### 要求:
|
||||
|
||||
1. [在 Ubuntu 系统上使用 Samba4 软件来创建活动目录架构][1]
|
||||
|
||||
@ -12,7 +11,7 @@ Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part
|
||||
|
||||
1、在将 Ubuntu 主机加入到 AD DC 之前,你得先确保 Ubuntu 系统中的一些服务配置正常。
|
||||
|
||||
主机名是你的机器的一个重要标识。因此,在加入域前,使用 hostnamectl 命令或者通过手动编辑 /etc/hostname 文件来为 Ubuntu 主机设置一个合适的主机名。
|
||||
主机名是你的机器的一个重要标识。因此,在加入域前,使用 `hostnamectl` 命令或者通过手动编辑 `/etc/hostname` 文件来为 Ubuntu 主机设置一个合适的主机名。
|
||||
|
||||
```
|
||||
# hostnamectl set-hostname your_machine_short_name
|
||||
@ -23,32 +22,32 @@ Integrate Ubuntu 16.04 to AD as a Domain Member with Samba and Winbind – Part
|
||||
![Set System Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Ubuntu-System-Hostname.png)
|
||||
][2]
|
||||
|
||||
设置系统主机名
|
||||
*设置系统主机名*
|
||||
|
||||
2、在这一步中,打开并编辑网卡配置文件,为你的主机设置一个合适的 IP 地址。注意把 DNS 地址设置为你的域控制器的地址。
|
||||
|
||||
编辑 /etc/network/interfaces 文件,添加 dns-nameservers 参数,并设置为 AD 服务器的 IP 地址,添加 dns-search 参数,其值为域控制器的主机名,如下图所示。
|
||||
编辑 `/etc/network/interfaces` 文件,添加 `dns-nameservers` 参数,并设置为 AD 服务器的 IP 地址;添加 `dns-search` 参数,其值为域控制器的主机名,如下图所示。
|
||||
|
||||
并且,把上面设置的 DNS IP 地址和域名添加到 /etc/resolv.conf 配置文件中,如下图所示:
|
||||
并且,把上面设置的 DNS IP 地址和域名添加到 `/etc/resolv.conf` 配置文件中,如下图所示:
|
||||
|
||||
[
|
||||
![Configure Network Settings for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network-Settings-for-AD.png)
|
||||
][3]
|
||||
|
||||
为 AD 配置网络设置
|
||||
*为 AD 配置网络设置*
|
||||
|
||||
在上面的截图中, 192.168.1.254 和 192.168.1.253 是 Samba4 AD DC 服务器的 IP 地址, Tecmint.lan 是 AD 域名,已加入到这个域中的所有机器都可以查询到该域名。
|
||||
在上面的截图中, `192.168.1.254` 和 `192.168.1.253` 是 Samba4 AD DC 服务器的 IP 地址, `Tecmint.lan` 是 AD 域名,已加入到这个域中的所有机器都可以查询到该域名。
|
||||
|
||||
3、重启网卡服务或者重启计算机以使网卡配置生效。使用 ping 命令加上域名来检测 DNS 解析是否正常。
|
||||
|
||||
AD DC 应该返回的 FQDN 。如果你的网络中配置了 DHCP 服务器来为局域网中的计算机自动分配 IP 地址,请确保你已经把 AD DC 服务器的 IP 地址添加到 DHCP 服务器的 DNS 配置中。
|
||||
AD DC 应该返回的是 FQDN 。如果你的网络中配置了 DHCP 服务器来为局域网中的计算机自动分配 IP 地址,请确保你已经把 AD DC 服务器的 IP 地址添加到 DHCP 服务器的 DNS 配置中。
|
||||
|
||||
```
|
||||
# systemctl restart networking.service
|
||||
# ping -c2 your_domain_name
|
||||
```
|
||||
|
||||
4、最后一步是配置服务器时间同步。安装 ntpdate 包,使用下面的命令来查询并同步 AD DC 服务器的时间。
|
||||
4、最后一步是配置服务器时间同步。安装 `ntpdate` 包,使用下面的命令来查询并同步 AD DC 服务器的时间。
|
||||
|
||||
```
|
||||
$ sudo apt-get install ntpdate
|
||||
@ -59,7 +58,7 @@ $ sudo ntpdate your_domain_name
|
||||
![Time Synchronization with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Time-Synchronization-with-AD.png)
|
||||
][4]
|
||||
|
||||
AD 服务器时间同步
|
||||
*AD 服务器时间同步*
|
||||
|
||||
5、下一步,在 Ubuntu 机器上执行下面的命令来安装加入域环境所必需软件。
|
||||
|
||||
@ -70,7 +69,7 @@ $ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss
|
||||
![Install Samba4 in Ubuntu Client](http://www.tecmint.com/wp-content/uploads/2017/03/Install-Samba4-in-Ubuntu-Client.png)
|
||||
][5]
|
||||
|
||||
在 Ubuntu 机器上安装 Samba4 软件
|
||||
*在 Ubuntu 机器上安装 Samba4 软件*
|
||||
|
||||
在 Kerberos 软件包安装的过程中,你会被询问输入默认的域名。输入大写的域名,并按 Enter 键继续安装。
|
||||
|
||||
@ -78,7 +77,7 @@ $ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss
|
||||
![Add AD Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Add-AD-Domain-Name.png)
|
||||
][6]
|
||||
|
||||
添加 AD 域名
|
||||
*添加 AD 域名*
|
||||
|
||||
6、当所有的软件包安装完成之后,使用域管理员帐号来测试 Kerberos 认证,然后执行下面的命令来列出票据信息。
|
||||
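具体命令类似下面这样(域管理员帐号和域名仅为示例,请替换成你自己的;`kinit` 用于获取票据,`klist` 用于列出票据):

```
$ kinit ad_admin_user@TECMINT.LAN
$ klist
```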
|
||||
@ -90,7 +89,7 @@ $ sudo apt-get install samba krb5-config krb5-user winbind libpam-winbind libnss
|
||||
![Check Kerberos Authentication with AD](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kerberos-Authentication-with-AD.png)
|
||||
][7]
|
||||
|
||||
使用 AD 来检查 Kerberos 认证是否正常
|
||||
*使用 AD 来检查 Kerberos 认证是否正常*
|
||||
|
||||
### 第二步:将 Ubuntu 主机添加到 Samba4 AD DC
|
||||
|
||||
@ -129,11 +128,11 @@ store dos attributes = Yes
|
||||
![Configure Samba for AD](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Samba.png)
|
||||
][8]
|
||||
|
||||
Samba 服务的 AD 环境配置
|
||||
*Samba 服务的 AD 环境配置*
|
||||
|
||||
根据你本地的实际情况来替换 workgroup , realm , netbios name 和 dns forwarder 的参数值。
|
||||
根据你本地的实际情况来替换 `workgroup` , `realm` , `netbios name` 和 `dns forwarder` 的参数值。
|
||||
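作为参考,一个假设性的示例(以本文的 TECMINT.LAN 域为例,`UBUNTU01` 是随意取的计算机名,`dns forwarder` 指向前文的 AD DC 地址;具体值以你的环境为准):

```
workgroup = TECMINT
realm = TECMINT.LAN
netbios name = UBUNTU01
dns forwarder = 192.168.1.254
```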
|
||||
由于 winbind use default domain 这个参数会让 winbind 服务把任何登录系统的帐号都当作 AD 帐号。因此,如果存在本地帐号名跟域帐号同名的情况下,请不要设置该参数。
|
||||
由于 `winbind use default domain` 这个参数会让 winbind 服务把任何登录系统的帐号都当作 AD 帐号。因此,如果存在本地帐号名跟域帐号同名的情况下,请不要设置该参数。
|
||||
|
||||
8、现在,你应该重启 Samba 后台服务,停止并卸载一些不必要的服务,然后启用 samba 服务的 system-wide 功能,使用下面的命令来完成。
|
||||
|
||||
@ -153,7 +152,7 @@ $ sudo net ads join -U ad_admin_user
|
||||
![Join Ubuntu to Samba4 AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Join-Ubuntu-to-Samba4-AD-DC.png)
|
||||
][9]
|
||||
|
||||
把 Ubuntu 主机加入到 Samba4 AD DC
|
||||
*把 Ubuntu 主机加入到 Samba4 AD DC*
|
||||
|
||||
10、在 [安装了 RSAT 工具的 Windows 机器上][10] 打开 AD UC ,展开到包含的计算机。你可以看到已加入域的 Ubuntu 计算机。
|
||||
|
||||
@ -161,7 +160,7 @@ $ sudo net ads join -U ad_admin_user
|
||||
![Confirm Ubuntu Client in Windows AD DC](http://www.tecmint.com/wp-content/uploads/2017/03/Confirm-Ubuntu-Client-in-RSAT-.png)
|
||||
][11]
|
||||
|
||||
确认 Ubuntu 计算机已加入到 Windows AD DC
|
||||
*确认 Ubuntu 计算机已加入到 Windows AD DC*
|
||||
|
||||
### 第三步:配置 AD 帐号认证
|
||||
|
||||
@ -173,7 +172,7 @@ $ sudo net ads join -U ad_admin_user
|
||||
$ sudo nano /etc/nsswitch.conf
|
||||
```
|
||||
|
||||
然后在 passwd 和 group 行添加 winbind 值,如下图所示:
|
||||
然后在 `passwd` 和 `group` 行添加 winbind 值,如下图所示:
|
||||
|
||||
```
|
||||
passwd: compat winbind
|
||||
@ -183,9 +182,9 @@ group: compat winbind
|
||||
![Configure AD Accounts Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-AD-Accounts-Authentication.png)
|
||||
][12]
|
||||
|
||||
配置 AD 帐号认证
|
||||
*配置 AD 帐号认证*
|
||||
|
||||
12、为了测试 Ubuntu 机器是否已加入到域中,你可以使用 wbinfo 命令来列出域帐号和组。
|
||||
12、为了测试 Ubuntu 机器是否已加入到域中,你可以使用 `wbinfo` 命令来列出域帐号和组。
|
||||
|
||||
```
|
||||
$ wbinfo -u
|
||||
@ -195,9 +194,9 @@ $ wbinfo -g
|
||||
![List AD Domain Accounts and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/List-AD-Domain-Accounts-and-Groups.png)
|
||||
][13]
|
||||
|
||||
列出域帐号和组
|
||||
*列出域帐号和组*
|
||||
|
||||
13、同时,使用 getent 命令加上管道符及 grep 参数来过滤指定域用户或组,以测试 Winbind nsswitch 模块是否运行正常。
|
||||
13、同时,使用 `getent` 命令加上管道符及 `grep` 参数来过滤指定域用户或组,以测试 Winbind nsswitch 模块是否运行正常。
|
||||
|
||||
```
|
||||
$ sudo getent passwd| grep your_domain_user
|
||||
@ -207,9 +206,9 @@ $ sudo getent group|grep 'domain admins'
|
||||
![Check AD Domain Users and Groups](http://www.tecmint.com/wp-content/uploads/2017/03/Check-AD-Domain-Users-and-Groups.png)
|
||||
][14]
|
||||
|
||||
检查 AD 域用户和组
|
||||
*检查 AD 域用户和组*
|
||||
|
||||
14、为了让域帐号在 Ubuntu 机器上完成认证登录,你需要使用 root 帐号运行 pam-auth-update 命令,然后勾选 winbind 服务所需的选项,以让每个域帐号首次登录时自动创建 home 目录。
|
||||
14、为了让域帐号在 Ubuntu 机器上完成认证登录,你需要使用 root 帐号运行 `pam-auth-update` 命令,然后勾选 winbind 服务所需的选项,以让每个域帐号首次登录时自动创建 home 目录。
|
||||
|
||||
查看所有的选项,按 ‘[空格]’键选中,单击 OK 以应用更改。
|
||||
|
||||
@ -220,9 +219,9 @@ $ sudo pam-auth-update
|
||||
![Authenticate Ubuntu with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Ubuntu-with-Domain-Accounts.png)
|
||||
][15]
|
||||
|
||||
使用域帐号登录 Ubuntu 主机
|
||||
*使用域帐号登录 Ubuntu 主机*
|
||||
|
||||
15、在 Debian 系统中,如果想让系统自动为登录的域帐号创建家目录,你需要手动编辑 /etc/pam.d/common-account 配置文件,并添加下面的内容。
|
||||
15、在 Debian 系统中,如果想让系统自动为登录的域帐号创建家目录,你需要手动编辑 `/etc/pam.d/common-account` 配置文件,并添加下面的内容。
|
||||
|
||||
```
|
||||
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
|
||||
@ -231,9 +230,9 @@ session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
|
||||
![Authenticate Debian with Domain Accounts](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Debian-with-Domain-Accounts.png)
|
||||
][16]
|
||||
|
||||
使用域帐号登录 Debian 系统
|
||||
*使用域帐号登录 Debian 系统*
|
||||
|
||||
16、为了让 AD 用户能够在 Linux 的命令行下修改密码,你需要打开 /etc/pam.d/common-password 配置文件,在 password 那一行删除 use_authtok 参数,如下图所示:
|
||||
16、为了让 AD 用户能够在 Linux 的命令行下修改密码,你需要打开 `/etc/pam.d/common-password` 配置文件,在 `password` 那一行删除 `use_authtok` 参数,如下图所示:
|
||||
|
||||
```
|
||||
password [success=1 default=ignore] pam_winbind.so try_first_pass
|
||||
@ -242,9 +241,9 @@ password [success=1 default=ignore] pam_winbind.so try_first_pass
|
||||
![Users Allowed to Change Password](http://www.tecmint.com/wp-content/uploads/2017/03/AD-Domain-Users-Change-Password.png)
|
||||
][17]
|
||||
|
||||
允许域帐号在 Linux 命令行下修改密码
|
||||
*允许域帐号在 Linux 命令行下修改密码*
|
||||
|
||||
17、要使用 Samba4 AD 帐号来登录 Ubuntu 主机,在 su - 后面加上域用户名即可。你还可以通过运行 id 命令来查看 AD 帐号的其它信息。
|
||||
17、要使用 Samba4 AD 帐号来登录 Ubuntu 主机,在 `su -` 后面加上域用户名即可。你还可以通过运行 `id` 命令来查看 AD 帐号的其它信息。
|
||||
|
||||
```
|
||||
$ su - your_ad_user
|
||||
@ -253,9 +252,9 @@ $ su - your_ad_user
|
||||
![Find AD User Information](http://www.tecmint.com/wp-content/uploads/2017/03/Find-AD-User-Information.png)
|
||||
][18]
|
||||
|
||||
查看 AD 用户信息
|
||||
*查看 AD 用户信息*
|
||||
|
||||
使用 [pwd 命令][19]来查看域帐号的当前目录,如果你想修改域帐号的密码,你可以使用 passwd 命令来完成。
|
||||
使用 [pwd 命令][19]来查看域帐号的当前目录,如果你想修改域帐号的密码,你可以使用 `passwd` 命令来完成。
|
||||
|
||||
18、如果想让域帐号在 ubuntu 机器上拥有 root 权限,你可以使用下面的命令来把 AD 帐号添加到 sudo 系统组中:
|
||||
|
||||
@ -263,15 +262,15 @@ $ su - your_ad_user
|
||||
$ sudo usermod -aG sudo your_domain_user
|
||||
```
|
||||
|
||||
登录域帐号登录到 Ubuntu 主机,然后运行 apt-get-update 命令来更新系统,以验证域账号是否拥有 root 权限。
|
||||
用域帐号登录到 Ubuntu 主机,然后运行 `apt-get update` 命令来更新系统,以验证域账号是否拥有 root 权限。
|
||||
|
||||
[
|
||||
![Add Sudo User Root Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Sudo-User-Root-Group.png)
|
||||
][20]
|
||||
|
||||
给域帐号添加 root 权限
|
||||
*给域帐号添加 root 权限*
|
||||
|
||||
19、要为整个域用户组添加 root 权限,使用 vi 命令打开并编辑 /etc/sudoers 配置文件,如下图所示,添加如下内容:
|
||||
19、要为整个域用户组添加 root 权限,使用 `vi` 命令打开并编辑 `/etc/sudoers` 配置文件,如下图所示,添加如下内容:
|
||||
|
||||
```
|
||||
%YOUR_DOMAIN\\your_domain\ group ALL=(ALL:ALL) ALL
|
||||
@ -280,13 +279,13 @@ $ sudo usermod -aG sudo your_domain_user
|
||||
![Add Root Privileges to Domain Group](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Root-Privileges-to-Domain-Group.jpg)
|
||||
][21]
|
||||
|
||||
为域帐号组添加 root 权限
|
||||
*为域帐号组添加 root 权限*
|
||||
|
||||
使用反斜杠来转义域用户组名称中包含的空格,或者转义第一个反斜杠。在上面的例子中,TECMINT 域的域用户组名字是 “domain admins”。
|
||||
|
||||
前边的 `(%)` 表明我们指定是的用户组而不是用户名。
|
||||
前边的 `%` 表明我们指定是的用户组而不是用户名。
|
||||
|
||||
20、如果你使用的是图形界面的 Ubuntu 系统,并且你想使用域帐号来登录系统,你需要修改 LightDM 显示管理器,编辑 /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf 配置文件,添加下面的内容,然后重启系统才能生效。
|
||||
20、如果你使用的是图形界面的 Ubuntu 系统,并且你想使用域帐号来登录系统,你需要修改 LightDM 显示管理器,编辑 `/usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf` 配置文件,添加下面的内容,然后重启系统才能生效。
|
||||
|
||||
```
|
||||
greeter-show-manual-login=true
|
||||
@ -307,7 +306,7 @@ via: http://www.tecmint.com/join-ubuntu-to-active-directory-domain-member-samba-
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[rusking](https://github.com/rusking)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,68 @@
|
||||
买个 DDoS 服务干掉你的对手
|
||||
========================
|
||||
|
||||
> 随着物联网设备的普及,网络犯罪分子利用密码的缺陷来提供可雇佣的拒绝服务攻击。
|
||||
|
||||
![](http://images.techhive.com/images/article/2016/12/7606416730_e659cea89c_o-100698667-large.jpg)
|
||||
|
||||
随着物联网设备飞速发展,分布式拒绝服务(DDoS)攻击正在成为一种危险的趋势。就如 [DNS 服务商 Dyn 上年秋季之遭遇][3] 一样,黑客似乎瞄上了每个人,使用未保护的物联网设备来轰炸网络的做法正在抬头。
|
||||
|
||||
可雇佣的 DDoS 攻击的出现意味着即使是最不精通技术的人都能精准报复某些网站。就像在柜台前面买个东西一样方便,然后就可以彻底搞定一个公司。
|
||||
|
||||
根据 [Neustar][4] 的报告,四分之三的国际品牌、机构和公司都是 DDoS 攻击的受害者。[每天至少会发生 3700 起 DDoS 攻击][5]。
|
||||
|
||||
睿科网络公司(A10 Networks)网络运营总监 Chase Cunningham 说:“想要找个可用的物联网设备,你只需要在地下网站四处打听一下 Mirai 扫描器代码,一旦你找到了,你将能够利用在线的每一台设备来进行攻击”。
|
||||
|
||||
“或者你可以去一些类似 Shodan 的网站,然后简单的搜一下设备特定的请求。当你得到这些信息之后,你就可以将你所雇佣的 DDoS 工具配置正确的流量模拟器类型、指向正确的目标并发动攻击。”
|
||||
|
||||
“几乎所有东西都是可买卖的。”他补充道,“你可以购买一个 ‘stresser’,这就是个随便哪个会点按钮的人都会使用的 DDoS 僵尸网络。”
|
||||
|
||||
网络安全提供商 Imperva 说,用户只需要出几十美金,就可以快速发动攻击。有些公司在它们的网站上说它们的工具包含肉鸡负载和 CnC(命令与控制)文件。使用这些工具,那些有点想法的肉鸡大师(或者被称为 herders)就可以开始传播恶意软件,通过垃圾邮件、漏洞扫描程序、暴力攻击等来感染设备。
|
||||
|
||||
大部分 [stresser 和 booter][6] 都会有一个常见的、基于订阅服务的 SaaS(软件即服务)业务模式。来自 Incapsula 公司的 [Q2 2015 DDoS 报告][7] 显示,在 DDoS 上的月均每小时花费是 38 美元(规模较低的是 19.99 美元)。
|
||||
|
||||
![雇佣ddos服务](http://images.techhive.com/images/article/2017/03/ddos-hire-100713247-large.jpg)
|
||||
|
||||
“stresser 和 booter 只是新世界的一个副产品,这些可以扳倒企业和机构的服务只能运作在灰色领域”,Imperva 写道。
|
||||
|
||||
虽然成本不同,但是企业受到的[各种攻击,每次损失在 1.4 万美元到 235 万美元之间][8]。而且企业受到一次攻击后,[有 82% 的可能性会再次受到攻击][9]。
|
||||
|
||||
物联网洪水攻击(DoT,DDoS of Things)使用物联网设备建立的僵尸网络可造成非常大规模的 DDoS 攻击。物联网洪水攻击会利用成百上千的物联网设备攻击,无论是大型服务提供商还是企业,均无幸免。
|
||||
|
||||
“大部分讲究声誉的 DDoS 卖家都会将他们的工具配置为可修改的,这样你就可以轻松地设置攻击的类型。虽然我还没怎么看到有哪些可以‘付费的’物联网流量模拟器的选项,但我敢肯定就要有了。如果是我来搞这个服务,我是绝对会加入这个选项的。”Cunningham 如是说。
|
||||
|
||||
由 IDG 新闻服务的消息可知,要建造一个 DDoS 服务也是很简单的。通常黑客会租用 6 到 12 个左右的服务器,然后使用它们随意地攻击任何目标。去年十月下旬,HackForums.net [关闭][10]了他们的“服务器压力测试”版块,就是因为担心黑客会借助他们每月十美元的服务搭建可雇佣的 DDoS 服务。
|
||||
|
||||
同样地,在去年十二月,美国和欧洲的执法机构[逮捕][11]了 34 名参与可雇佣 DDoS 服务的嫌犯。
|
||||
|
||||
但是如果这很简单,怎么还没有经常发生攻击?
|
||||
|
||||
Cunningham 说这其实每时每刻都在发生,实际上每天每秒都在发生。他说:“你没有察觉到它们,是因为大部分都是扰乱性的攻击,而不是大规模的、想要搞垮公司的攻击。”
|
||||
|
||||
他说,大部分的攻击平台只出售那些会让系统宕机一个小时或稍长一点的攻击。通常宕机一小时的攻击大概需要花费 15 到 50 美元。当然这得看平台,有些可能一小时就要花上百美元。
|
||||
|
||||
**减少这些攻击的解决方案是让用户把所有联网设备的出厂预置的密码改掉,然后还要禁用那些你不需要的功能。**
|
||||
|
||||
(题图:[Victor](https://www.flickr.com/photos/v1ctor/7606416730/))
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.csoonline.com/article/3180246/data-protection/hire-a-ddos-service-to-take-down-your-enemies.html
|
||||
|
||||
作者:[Ryan Francis][a]
|
||||
译者:[kenxx](https://github.com/kenxx)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.csoonline.com/author/Ryan-Francis/
|
||||
[1]:http://csoonline.com/article/3103122/security/how-can-you-detect-a-fake-ransom-letter.html#tk.cso-infsb
|
||||
[2]:https://www.incapsula.com/ddos/ddos-attacks/denial-of-service.html
|
||||
[3]:http://csoonline.com/article/3135986/security/ddos-attack-against-overwhelmed-despite-mitigation-efforts.html
|
||||
[4]:https://ns-cdn.neustar.biz/creative_services/biz/neustar/www/resources/whitepapers/it-security/ddos/2016-apr-ddos-report.pdf
|
||||
[5]:https://www.a10networks.com/resources/ddos-trends-report
|
||||
[6]:https://www.incapsula.com/ddos/booters-stressers-ddosers.html
|
||||
[7]:https://www.incapsula.com/blog/ddos-global-threat-landscape-report-q2-2015.html
|
||||
[8]:http://www.datacenterknowledge.com/archives/2016/05/13/number-of-costly-dos-related-data-center-outages-rising/
|
||||
[9]:http://www.networkworld.com/article/3064677/security/hit-by-ddos-you-will-likely-be-struck-again.html
|
||||
[10]:http://www.pcworld.com/article/3136730/hacking/hacking-forum-cuts-section-allegedly-linked-to-ddos-attacks.html
|
||||
[11]:http://www.pcworld.com/article/3149543/security/dozens-arrested-in-international-ddos-for-hire-crackdown.html
|
@ -0,0 +1,123 @@
|
||||
如何通过 OpenELEC 创建你自己的媒体中心
|
||||
======
|
||||
|
||||
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-media-center.jpg "How to Build Your Own Media Center with OpenELECs")
|
||||
|
||||
你是否曾经想要创建你自己的家庭影院系统?如果是的话,这里有一个为你准备的指南!在本篇文章中,我们将会介绍如何设置一个由 OpenELEC 以及 Kodi 驱动的家庭娱乐系统。我们将会介绍如何制作安装介质,哪些设备可以运行该软件,如何安装它,以及其他一切需要知道的事情等等。
|
||||
|
||||
### 选择一个设备
|
||||
|
||||
在开始设定媒体中心的软件前,你需要选择一个设备。OpenELEC 支持一系列设备。从一般的桌面设备到树莓派 2/3 等等。选择好设备以后,考虑一下你怎么访问 OpenELEC 系统中的媒体并让其就绪。
|
||||
|
||||
**注意:** OpenELEC 基于 Kodi,有许多方式加载可播放的媒体(比如 Samba 网络共享、外部设备等等)。
|
||||
|
||||
### 制作安装磁盘
|
||||
|
||||
OpenELEC 安装磁盘需要一个 USB 存储器,且其至少有 1GB 的容量。这是安装该软件的唯一方式,因为开发者没有发布 ISO 文件。取而代之的是需要创建一个 IMG 原始文件。选择与你设备相关的链接并且[下载][10]原始磁盘镜像。当磁盘镜像下载完毕,打开一个终端,并且使用命令将数据从压缩包中解压出来。
|
||||
|
||||
**在 Linux/macOS 上**
|
||||
|
||||
```
|
||||
cd ~/Downloads
|
||||
gunzip -d OpenELEC*.img.gz
|
||||
```
|
||||
|
||||
**在 Windows 上**
|
||||
|
||||
下载 [7zip][11],安装它,然后解压压缩文件。
|
||||
|
||||
当原始的 .IMG 文件被解压后,下载 [Etcher USB creation tool][12],并且依据在界面上的指示来安装它并创建 USB 磁盘。
|
||||
|
||||
**注意:** 对于树莓派用户,Etcher 也支持将文件写入到 SD 卡中。
|
||||
|
||||
### 安装 OpenELEC
|
||||
|
||||
OpenELEC 的安装过程可能是所有操作系统中最简单的之一。插入 USB 设备,然后配置设备使其从 USB 启动。通常可以在开机时按 DEL 或者 F2 键进入 BIOS 来完成设置。然而并不是所有的 BIOS 都一样,所以最好查阅一下设备手册。
|
||||
|
||||
![openelec-installer-selection](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installer-selection.png "openelec-installer-selection")
|
||||
|
||||
一旦进入 BIOS,修改设置使其从 USB 磁盘中直接加载。这将会允许电脑从 USB 磁盘中启动,这将会使你进入到 Syslinux 引导屏幕。在提示符中,键入 `installer`,然后按下回车键。
|
||||
|
||||
![openelec-installation-selection-menu](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-selection-menu.png "openelec-installation-selection-menu")
|
||||
|
||||
默认情况下,快速安装选项已经是选中的。按回车键来开始安装。这将会使安装器跳转到磁盘选择界面。选择 OpenELEC 要被安装到的地方,然后按下回车键来开始安装过程。
|
||||
|
||||
![openelec-installation-in-progress](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-installation-in-progress.png "openelec-installation-in-progress")
|
||||
|
||||
一旦完成安装,重启系统并加载 OpenELEC。
|
||||
|
||||
### 配置 OpenELEC
|
||||
|
||||
![openelec-wireless-network-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-wireless-network-setup.jpg "openelec-wireless-network-setup")
|
||||
|
||||
在第一次启动时,用户必须配置一些东西。如果你的媒体中心拥有一个无线网卡,OpenELEC 将会提示用户将其连接到一个热点上。选择一个列表中的网络并且输入密码。
|
||||
|
||||
![openelec-sharing-setup](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-sharing-setup.jpg "openelec-sharing-setup")
|
||||
|
||||
在下一步“<ruby>欢迎来到 OpenELEC<rt>Welcome to OpenELEC</rt></ruby>”屏幕上,用户必须配置不同的分享设置(SSH 以及 Samba)。建议你把这些设置开启,这样就可以通过命令行访问,远程传输媒体文件也会变得很简单。
|
||||
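例如,开启 SSH 之后,可以像下面这样从另一台电脑远程拷贝媒体文件(IP 地址、文件名和目标路径均为假设值;OpenELEC 默认以 root 帐号通过 SSH 登录):

```
scp ~/Videos/movie.mkv root@192.168.1.120:/storage/videos/
```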
|
||||
### 增加媒体
|
||||
|
||||
要在 OpenELEC(Kodi)中增加媒体,首先选择你希望把媒体添加到的部分。照片、音乐等媒体的添加流程都相同。在这个指南中,我们将着重讲解添加视频。
|
||||
|
||||
![openelec-add-files-to-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-files-to-kodi.jpg "openelec-add-files-to-kodi")
|
||||
|
||||
点击主页的“<ruby>视频<rt>Video</rt></ruby>”选项来进入视频页面。选择“<ruby>文件<rt>Files</rt></ruby>”选项,在下一个页面点击“<ruby>添加视频...<rt>Add videos…</rt></ruby>”,这将会使用户进入 Kodi 的添加媒体页面。在这个页面,你可以随意地添加媒体源(包括内部和外部的)。
|
||||
|
||||
![openelec-add-media-source-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-add-media-source-kodi.jpg "openelec-add-media-source-kodi")
|
||||
|
||||
OpenELEC 会自动挂载外部设备(像是 USB、DVD 碟片等等),并且可以通过浏览文件挂载点来添加它们。一般情况下,这些设备都会被挂载在 `/run` 下;或者,返回你点击“<ruby>添加视频...<rt>Add videos…</rt></ruby>”的页面,在那里选择设备。任何外部设备,包括 DVD/CD,都会直接展示在那里,并可以直接访问。对于那些不懂如何找到挂载点的用户来说,这是一个很好的选择。
|
||||
|
||||
![openelec-name-video-source-folder](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-name-video-source-folder.jpg "openelec-name-video-source-folder")
|
||||
|
||||
现在这个设备在 Kodi 中被选中了,界面会让用户使用媒体中心的文件浏览器工具,在设备上浏览存放媒体文件的文件夹。一旦找到了放置文件的文件夹,添加它,给文件夹起一个名字,然后按下 OK 按钮来保存。
|
||||
|
||||
![openelec-show-added-media-kodi](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-show-added-media-kodi.jpg "openelec-show-added-media-kodi")
|
||||
|
||||
当一个用户浏览“<ruby>视频<rt>Videos</rt></ruby>”,他们将会看到可以点击的文件夹,这个文件夹中带有从外部设备添加的媒体。这些文件夹可以很容易地在系统上播放。
|
||||
|
||||
### 使用 OpenELEC
|
||||
|
||||
当用户登录后,将会看见一个“主界面”,这个主界面有许多部分,用户可以点击它们并且进入,包括:图片、视频、音乐、程序等等。当悬停在这些部分上的时候,子部分就会出现。例如,当悬停在“图片”上时,子部分“文件”以及“插件”就会出现。
|
||||
|
||||
![openelec-navigation-bar](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-navigation-bar.jpg "openelec-navigation-bar")
|
||||
|
||||
如果用户点击了某个部分中的子部分,例如“插件”,Kodi 的插件选择器就会出现。这个安装器允许用户浏览新的插件内容并安装到这个子部分(像是图片相关的插件等等),或者启动一个已经存在的图片相关插件(前提是它已经安装到系统上)。
|
||||
|
||||
此外,点击任何部分的文件子部分(例如视频)将会直接向用户显示该部分可用的文件。
|
||||
|
||||
### 系统设置
|
||||
|
||||
![openelec-system-settings](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/03/openelec-system-settings.jpg "openelec-system-settings")
|
||||
|
||||
Kodi 有丰富的设置区域。要找到这些设置,将鼠标悬停在右侧,目录选择器就会滚动到右侧并显示“<ruby>系统<rt>System</rt></ruby>”。点击它来打开全局系统设置区。
|
||||
|
||||
用户可以修改任何设置,从安装 Kodi 仓库的插件,到激活各种服务,到改变主题,甚至天气。如果想要退出设定区域并且返回主页面,点击右下方角落中的“home”图标。
|
||||
|
||||
### 结论
|
||||
|
||||
安装和配置好 OpenELEC 之后,你现在可以随意体验由 Linux 驱动的、属于你自己的家庭影院系统了。在所有的家庭影院 Linux 发行版中,这个是对用户最友好的。请记住,尽管这个系统名为“OpenELEC”,但它运行的是 Kodi,并兼容任何 Kodi 的插件、工具以及程序。
|
||||
|
||||
------
|
||||
|
||||
via: https://www.maketecheasier.com/build-media-center-with-openelec/
|
||||
|
||||
作者:[Derrik Diener][a]
|
||||
译者:[svtter](https://github.com/svtter)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.maketecheasier.com/author/derrikdiener/
|
||||
[1]: https://www.maketecheasier.com/author/derrikdiener/
|
||||
[2]: https://www.maketecheasier.com/build-media-center-with-openelec/#comments
|
||||
[3]: https://www.maketecheasier.com/category/linux-tips/
|
||||
[4]: http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
|
||||
[5]: http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F&text=How+to+Build+Your+Own+Media+Center+with+OpenELEC
|
||||
[6]: mailto:?subject=How%20to%20Build%20Your%20Own%20Media%20Center%20with%20OpenELEC&body=https%3A%2F%2Fwww.maketecheasier.com%2Fbuild-media-center-with-openelec%2F
|
||||
[7]: https://www.maketecheasier.com/permanently-disable-windows-defender-windows-10/
|
||||
[8]: https://www.maketecheasier.com/repair-mac-hard-disk-with-fsck/
|
||||
[9]: https://support.google.com/adsense/troubleshooter/1631343
|
||||
[10]: http://openelec.tv/get-openelec/category/1-openelec-stable-releases
|
||||
[11]: http://www.7-zip.org/
|
||||
[12]: https://etcher.io/
|
@ -1,15 +1,14 @@
|
||||
Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9
|
||||
Samba 系列(九):将 CentOS 7 桌面系统加入到 Samba4 AD 域环境中
|
||||
============================================================
|
||||
将 CentOS 7 桌面系统加入到 Samba4 AD 域环境中——(九)
|
||||
|
||||
这篇文章讲述了如何使用 Authconfig-gtk 工具将 CentOS 7 桌面系统加入到 Samba4 AD 域环境中,并使用域帐号登录到 CentOS 系统。
|
||||
|
||||
#### 要求
|
||||
### 要求
|
||||
|
||||
1、[在 Ubuntu 系统中使用 Samba4 创建活动目录架构][1]
|
||||
2、[CentOS 7.3 安装指南]][2]
|
||||
2、[CentOS 7.3 安装指南][2]
|
||||
|
||||
### 第一步:在 CentOS 系统中配置 Samba4 AD DC Step 1: Configure CentOS Network for Samba4 AD DC
|
||||
### 第一步:在 CentOS 系统中配置 Samba4 AD DC
|
||||
|
||||
1、在将 CentOS 7 加入到 Samba4 域环境之前,你得先配置 CentOS 系统的网络环境,确保在 CentOS 系统中通过 DNS 可以解析到域名。
|
||||
|
||||
@ -21,13 +20,13 @@ Join CentOS 7 Desktop to Samba4 AD as a Domain Member – Part 9
|
||||
![Network Settings](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg)
|
||||
][3]
|
||||
|
||||
网络设置
|
||||
*网络设置*
|
||||
|
||||
[
|
||||
![Configure Network](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg)
|
||||
][4]
|
||||
|
||||
配置网络
|
||||
*配置网络*
|
||||
|
||||
2、下一步,打开网络配置文件,在文件末尾添加一行域名信息。这样能确保当你仅使用主机名来查询域中的 DNS 记录时, DNS 解析器会自动把域名添加进来。
|
||||
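追加的内容如下(`your_domain_name` 请替换为你自己的域名,例如本文的 `tecmint.lan`):

```
SEARCH="your_domain_name"
```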
|
||||
@ -44,7 +43,7 @@ SEARCH="your_domain_name"
|
||||
![Network Interface Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg)
|
||||
][5]
|
||||
|
||||
网卡配置
|
||||
*网卡配置*
|
||||
|
||||
3、最后,重启网卡服务以应用更改,并验证解析器的配置文件是否正确配置。我们通过使用 ping 命令加上 DC 服务器的主机名或域名以验证 DNS 解析能否正常运行。
|
||||
|
||||
@ -54,12 +53,13 @@ $ cat /etc/resolv.conf
|
||||
$ ping -c1 adc1
|
||||
$ ping -c1 adc2
|
||||
$ ping tecmint.lan
|
||||
```
|
||||
```
|
||||
|
||||
[
|
||||
![Verify Network Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Network-Configuration.jpg)
|
||||
][6]
|
||||
|
||||
验证网络配置是否正常
|
||||
*验证网络配置是否正常*
|
||||
|
||||
4、同时,使用下面的命令来配置你的主机名,然后重启计算机以应用更改:
|
||||
|
||||
@ -68,7 +68,7 @@ $ sudo hostnamectl set-hostname your_hostname
|
||||
$ sudo init 6
|
||||
```
|
||||
|
||||
使用下面的命令来验证主机名是否正确配置
|
||||
使用下面的命令来验证主机名是否正确配置:
|
||||
|
||||
```
|
||||
$ cat /etc/hostname
|
||||
@ -106,30 +106,30 @@ $ sudo authconfig-gtk
|
||||
|
||||
打开身份或认证配置页面:
|
||||
|
||||
* 用户帐号数据库 = 选择 Winbind
|
||||
* Winbind 域 = 你的域名
|
||||
* 安全模式 = ADS
|
||||
* Winbind ADS 域 = 你的域名.TLD
|
||||
* 域控制器 = 域控服务器的全域名
|
||||
* 默认Shell = /bin/bash
|
||||
* 用户帐号数据库 : 选择 Winbind
|
||||
* Winbind 域 : 你的域名
|
||||
* 安全模式 : ADS
|
||||
* Winbind ADS 域 : 你的域名.TLD
|
||||
* 域控制器 : 域控服务器的全域名
|
||||
* 默认Shell : /bin/bash
|
||||
* 勾选允许离线登录
|
||||
|
||||
[
|
||||
![Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Configuration.jpg)
|
||||
][7]
|
||||
|
||||
域认证配置
|
||||
*域认证配置*
|
||||
|
||||
打开高级选项配置页面:
|
||||
|
||||
* 本地认证选项 = 支持指纹识别
|
||||
* 其它认证选项 = 用户首次登录创建家目录
|
||||
* 本地认证选项 : 支持指纹识别
|
||||
* 其它认证选项 : 用户首次登录创建家目录
|
||||
|
||||
[
|
||||
![Authentication Advance Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Authentication-Advance-Configuration.jpg)
|
||||
][8]
|
||||
|
||||
高级认证配置
|
||||
*高级认证配置*
|
||||
|
||||
9、修改完上面的配置之后,返回到身份或认证配置页面,点击加入域按钮,在弹出的提示框点保存即可。
|
||||
|
||||
@ -137,13 +137,13 @@ $ sudo authconfig-gtk
|
||||
![Identity and Authentication](http://www.tecmint.com/wp-content/uploads/2017/03/Identity-and-Authentication.jpg)
|
||||
][9]
|
||||
|
||||
身份和认证
|
||||
*身份和认证*
|
||||
|
||||
[
|
||||
![Save Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg)
|
||||
][10]
|
||||
|
||||
保存认证配置
|
||||
*保存认证配置*
|
||||
|
||||
10、保存配置之后,系统将会提示你提供域管理员信息以将 CentOS 系统加入到域中。输入域管理员帐号及密码后点击 OK 按钮,加入域完成。
|
||||
|
||||
@ -151,15 +151,15 @@ $ sudo authconfig-gtk
|
||||
![Joining Winbind Domain](http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg)
|
||||
][11]
|
||||
|
||||
加入 Winbind 域环境
|
||||
*加入 Winbind 域环境*
|
||||
|
||||
11、另入域后,点击应用按钮以让配置生效,选择所有的 windows 并重启机器。
|
||||
11、加入域后,点击应用按钮以让配置生效,关闭所有窗口并重启机器。
|
||||
|
||||
[
|
||||
![Apply Authentication Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg)
|
||||
][12]
|
||||
|
||||
应用认证配置
|
||||
*应用认证配置*
|
||||
|
||||
12、要验证 CentOS 是否已成功加入到 Samba4 AD DC 中,你可以在安装了 [RSAT 工具][13] 的 windows 机器上,打开 AD 用户和计算机工具,点击域中的计算机。
|
||||
|
||||
@ -169,7 +169,7 @@ $ sudo authconfig-gtk
|
||||
![Active Directory Users and Computers](http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg)
|
||||
][14]
|
||||
|
||||
活动目录用户和计算机
|
||||
*活动目录用户和计算机*
|
||||
|
||||
### 第四步:使用 Samba4 AD DC 帐号登录 CentOS 桌面系统
|
||||
|
||||
@ -177,20 +177,20 @@ $ sudo authconfig-gtk
|
||||
|
||||
```
|
||||
Domain\domain_account
|
||||
or
|
||||
或
|
||||
Domain_user@domain.tld
|
||||
```
|
||||
[
|
||||
![Not listed Users](http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg)
|
||||
][15]
|
||||
|
||||
使用其它账户
|
||||
*使用其它账户*
|
||||
|
||||
[
|
||||
![Enter Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg)
|
||||
][16]
|
||||
|
||||
输入域用户名
|
||||
*输入域用户名*
|
||||
|
||||
14、在 CentOS 系统的命令行中,你也可以使用下面的任一方式来切换到域帐号进行登录:
|
||||
|
||||
@ -202,15 +202,15 @@ $ su - domain_user@domain.tld
|
||||
![Authenticate Domain Username](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User.jpg)
|
||||
][17]
|
||||
|
||||
使用域帐号登录
|
||||
*使用域帐号登录*
|
||||
|
||||
[
|
||||
![Authenticate Domain User Email](http://www.tecmint.com/wp-content/uploads/2017/03/Authenticate-Domain-User-Email.jpg)
|
||||
][18]
|
||||
|
||||
使用域帐号邮箱登录
|
||||
*使用域帐号邮箱登录*
|
||||
|
||||
15、要为域用户或组添加 root 权限,在命令行下使用 root 权限帐号打开 sudoers 配置文件,添加下面一行内容:
|
||||
15、要为域用户或组添加 root 权限,在命令行下使用 root 权限帐号打开 `sudoers` 配置文件,添加下面一行内容:
|
||||
|
||||
```
|
||||
YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
|
||||
@ -220,7 +220,7 @@ YOUR_DOMAIN\\domain_username ALL=(ALL:ALL) ALL #For domain users
|
||||
![Assign Permission to User and Group](http://www.tecmint.com/wp-content/uploads/2017/03/Assign-Permission-to-User-and-Group.jpg)
|
||||
][19]
|
||||
|
||||
指定用户和用户组权限
|
||||
*指定用户和用户组权限*
|
||||
|
||||
16、使用下面的命令来查看域控制器信息:
|
||||
|
||||
@ -231,7 +231,7 @@ $ sudo net ads info
|
||||
![Check Domain Controller Info](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Domain-Controller-Info.jpg)
|
||||
][20]
|
||||
|
||||
查看域控制器信息
|
||||
*查看域控制器信息*
|
||||
|
||||
17、你可以在安装了 Winbind 客户端的机器上使用下面的命令来验证 CentOS 加入到 Samba4 AD DC 后的信任关系是否正常:
|
||||
|
||||
@ -242,17 +242,17 @@ $ sudo yum install samba-winbind-clients
|
||||
然后,执行下面的一些命令来查看 Samba4 AD DC 的相关信息:
|
||||
|
||||
```
|
||||
$ wbinfo -p #Ping 域名
|
||||
$ wbinfo -t #检查信任关系
|
||||
$ wbinfo -u #列出域用户帐号
|
||||
$ wbinfo -g #列出域用户组
|
||||
$ wbinfo -n domain_account #查看域帐号的 SID 信息
|
||||
$ wbinfo -p ### Ping 域名
|
||||
$ wbinfo -t ### 检查信任关系
|
||||
$ wbinfo -u ### 列出域用户帐号
|
||||
$ wbinfo -g ### 列出域用户组
|
||||
$ wbinfo -n domain_account ### 查看域帐号的 SID 信息
|
||||
```
|
||||
[
|
||||
![Get Samba4 AD DC Details](http://www.tecmint.com/wp-content/uploads/2017/03/Get-Samba4-AD-DC-Details.jpg)
|
||||
][21]
|
||||
|
||||
查看 Samba4 AD DC 信息
|
||||
*查看 Samba4 AD DC 信息*
|
||||
|
||||
18、如果你想让 CentOS 系统退出域环境,使用具有管理员权限的帐号执行下面的命令,后面加上域名及域管理员帐号,如下图所示:
|
||||
|
||||
@ -263,7 +263,7 @@ $ sudo net ads leave your_domain -U domain_admin_username
|
||||
![Leave Domain from Samba4 AD](http://www.tecmint.com/wp-content/uploads/2017/03/Leave-Domain-from-Samba4-AD.jpg)
|
||||
][22]
|
||||
|
||||
退出 Samba4 AD 域
|
||||
*退出 Samba4 AD 域*
|
||||
|
||||
这篇文章就写到这里吧!尽管上面的这些操作步骤是将 CentOS 7 系统加入到 Samba4 AD DC 域中,其实这些步骤也同样适用于将 CentOS 7 桌面系统加入到 Microsoft Windows Server 2008 或 2012 的域中。
|
||||
|
||||
@ -279,14 +279,14 @@ via: http://www.tecmint.com/join-centos-7-to-samba4-active-directory/
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[rusking](https://github.com/rusking)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/cezarmatei/
|
||||
|
||||
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
|
||||
[2]:http://www.tecmint.com/centos-7-3-installation-guide/
|
||||
[1]:https://linux.cn/article-8065-1.html
|
||||
[2]:https://linux.cn/article-8048-1.html
|
||||
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Settings.jpg
|
||||
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Network.jpg
|
||||
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Network-Interface-Configuration.jpg
|
||||
@ -297,7 +297,7 @@ via: http://www.tecmint.com/join-centos-7-to-samba4-active-directory/
|
||||
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Save-Authentication-Configuration.jpg
|
||||
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Joining-Winbind-Domain.jpg
|
||||
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Apply-Authentication-Configuration.jpg
|
||||
[13]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
|
||||
[13]:https://linux.cn/article-8097-1.html
|
||||
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Active-Directory-Users-and-Computers.jpg
|
||||
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Not-listed-Users.jpg
|
||||
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Enter-Domain-Username.jpg
|
@ -1,20 +1,11 @@
|
||||
如何在 Ubuntu 上使用 pm2 和 Nginx 部署 Node.js 应用
|
||||
============================================================
|
||||
|
||||
### 导航
|
||||
|
||||
1. [第一步 - 安装 Node.js][1]
|
||||
2. [第二步 - 生成 Express 事例 App][2]
|
||||
3. [第三步- 安装 pm2][3]
|
||||
4. [第四步 - 安装配置 Nginx 作为反向代理][4]
|
||||
5. [第五步 - 测试][5]
|
||||
6. [链接][6]
|
||||
|
||||
pm2 是一个 Node.js 应用的进程管理器,它允许你让你的应用程序保持运行,还有一个内建的负载均衡器。它非常简单而且强大,你可以零间断重启或重新加载你的 node 应用,它也允许你为你的 node 应用创建集群。
|
||||
|
||||
pm2 是一个 Node.js 应用的进程管理器,它可以让你的应用程序保持运行,还有一个内建的负载均衡器。它非常简单而且强大,你可以零间断重启或重新加载你的 node 应用,它也允许你为你的 node 应用创建集群。
|
||||
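例如,集群模式大致可以这样使用(假设性的用法示意,`app.js` 是随意取的入口文件名):

```
pm2 start app.js -i max    # -i max:按 CPU 核心数启动多个实例组成集群
pm2 reload all             # 零间断地重新加载所有应用
```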
|
||||
在这篇博文中,我会向你展示如何安装和配置 pm2 用于这个简单的 'Express' 应用,然后配置 Nginx 作为运行在 pm2 下的 node 应用的反向代理。
|
||||
|
||||
**前提**
|
||||
前提:
|
||||
|
||||
* Ubuntu 16.04 - 64bit
|
||||
* Root 权限
|
||||
@ -23,50 +14,64 @@ pm2 是一个 Node.js 应用的进程管理器,它允许你让你的应用程
|
||||
|
||||
在这篇指南中,我们会从零开始我们的实验。首先,我们需要在服务器上安装 Node.js。我会使用 Nodejs LTS 6.x 版本,它能从 nodesource 仓库中安装。
|
||||
|
||||
从 Ubuntu 仓库安装 '**python-software-properties**' 软件包并添加 'nodesource' Nodejs 仓库。
|
||||
从 Ubuntu 仓库安装 `python-software-properties` 软件包并添加 “nodesource” Nodejs 仓库。
|
||||
|
||||
`sudo apt-get install -y python-software-properties`
|
||||
`curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -`
|
||||
```
|
||||
sudo apt-get install -y python-software-properties
|
||||
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
|
||||
```
|
||||
|
||||
安装最新版本的 Nodejs LTS
|
||||
安装最新版本的 Nodejs LTS:
|
||||
|
||||
`sudo apt-get install -y nodejs`
|
||||
```
|
||||
sudo apt-get install -y nodejs
|
||||
```
|
||||
|
||||
安装完成后,查看 node 和 npm 版本。
|
||||
|
||||
`node -v`
|
||||
`npm -v`
|
||||
```
|
||||
node -v
|
||||
npm -v
|
||||
```
|
||||
|
||||
[
|
||||
![检查 node.js 版本](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/1.png)
|
||||
][10]
|
||||
|
||||
### 第二步 - 生成 Express 事例 App
|
||||
### 第二步 - 生成 Express 示例 App
|
||||
|
||||
我会使用 **express-generator**' 软件包生成的简单 web 应用框架进行事例安装。Express-generator 可以使用 npm 命令安装。
|
||||
我会使用 `express-generator` 软件包生成的简单 web 应用框架进行示例安装。`express-generator` 可以使用 `npm` 命令安装。
|
||||
|
||||
用 npm 安装 '**express-generator**':
|
||||
用 `npm` 安装 `express-generator`:
|
||||
|
||||
`npm install express-generator -g`
|
||||
```
|
||||
npm install express-generator -g
|
||||
```
|
||||
|
||||
**-g:** 在系统内部安装软件包
|
||||
- `-g` : 在系统内部安装软件包。
|
||||
|
||||
我会以普通用户运行应用程序,而不是 root 或者超级用户。我们首先需要创建一个新的用户。
|
||||
|
||||
创建一个名为 '**yume**' 的用户:
|
||||
创建一个名为 `yume` 的用户:
|
||||
|
||||
`useradd -m -s /bin/bash yume`
|
||||
`passwd yume`
|
||||
```
|
||||
useradd -m -s /bin/bash yume
|
||||
passwd yume
|
||||
```
|
||||
|
||||
使用 su 命令登录到新用户:
|
||||
使用 `su` 命令登录到新用户:
|
||||
|
||||
`su - yume`
|
||||
```
|
||||
su - yume
|
||||
```
|
||||
|
||||
下一步,用 express 命令生成一个新的简单 web 应用程序:
|
||||
下一步,用 `express` 命令生成一个新的简单 web 应用程序:
|
||||
|
||||
`express hakase-app`
|
||||
```
|
||||
express hakase-app
|
||||
```
|
||||
|
||||
命令会创建新项目目录 '**hakase-app**'。
|
||||
命令会创建新项目目录 `hakase-app`。
|
||||
|
||||
[
|
||||
![用 express-generator 生成应用框架](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/2.png)
|
||||
@ -74,47 +79,59 @@ pm2 是一个 Node.js 应用的进程管理器,它允许你让你的应用程
|
||||
|
||||
进入到项目目录并安装应用需要的所有依赖。
|
||||
|
||||
`cd hakase-app`
|
||||
`npm install`
|
||||
```
|
||||
cd hakase-app
|
||||
npm install
|
||||
```
|
||||
|
||||
然后用下面的命令测试并启动一个新的简单应用程序:
|
||||
|
||||
`DEBUG=myapp:* npm start`
|
||||
```
|
||||
DEBUG=myapp:* npm start
|
||||
```
|
||||
|
||||
默认情况下,我们的 express 应用汇运行在 **3000** 端口。现在访问服务器的 IP 地址:[192.168.33.10:3000][12]
|
||||
默认情况下,我们的 express 应用会运行在 `3000` 端口。现在访问服务器的 IP 地址 192.168.33.10:3000:
|
||||
|
||||
[
|
||||
![express nodejs 运行在 3000 端口](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/3.png)
|
||||
][13]
|
||||
|
||||
简单 web 应用框架以 'yume' 用户运行在 3000 端口。
|
||||
这个简单 web 应用框架现在以 'yume' 用户运行在 3000 端口。
|
||||
|
||||
### 第三步 - 安装 pm2
|
||||
|
||||
pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm 命令安装吧(用 root 权限,如果你仍然以 yume 用户登录,那么运行命令 "exit" 再次成为 root 用户):
|
||||
pm2 是一个 node 软件包,可以使用 `npm` 命令以 root 权限安装(如果你仍然以 yume 用户登录,那么先运行命令 `exit` 再次成为 root 用户):
|
||||
|
||||
`npm install pm2 -g`
|
||||
```
|
||||
npm install pm2 -g
|
||||
```
|
||||
|
||||
现在我们可以为我们的 web 应用使用 pm2 了。
|
||||
|
||||
进入应用目录 '**hakase-app**':
|
||||
进入应用目录 `hakase-app`:
|
||||
|
||||
`su - yume`
|
||||
`cd ~/hakase-app/`
|
||||
```
|
||||
su - yume
|
||||
cd ~/hakase-app/
|
||||
```
|
||||
|
||||
这里你可以看到一个名为 '**package.json**' 的文件,用 cat 命令显示它的内容。
|
||||
这里你可以看到一个名为 `package.json` 的文件,用 `cat` 命令显示它的内容。
|
||||
|
||||
`cat package.json`
|
||||
```
|
||||
cat package.json
|
||||
```
|
||||
|
||||
[
|
||||
![配置 express nodejs 服务](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/4.png)
|
||||
][14]
|
||||
|
||||
你可以看到 '**start**' 行有一个 nodejs 用于启动 express 应用的命令。我们会和 pm2 进程管理器一起使用这个命令。
|
||||
你可以看到 `start` 行有一个 nodejs 用于启动 express 应用的命令。我们会和 pm2 进程管理器一起使用这个命令。
|
||||
|
||||
像下面这样使用 pm2 命令运行 express 应用:
|
||||
像下面这样使用 `pm2` 命令运行 express 应用:
|
||||
|
||||
`pm2 start ./bin/www`
|
||||
```
|
||||
pm2 start ./bin/www
|
||||
```
|
||||
|
||||
现在你可以看到像下面这样的结果:
|
||||
|
||||
@ -122,9 +139,11 @@ pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm
|
||||
![使用 pm2 运行 nodejs app](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/5.png)
|
||||
][15]
|
||||
|
||||
我们的 express 应用正在 pm2 中运行,名称为 '**www**',id '**0**'。你可以用 show 选项 '**show nodeid|name**' 获取更多 pm2 下运行的应用的信息。
|
||||
我们的 express 应用正在 `pm2` 中运行,名称为 `www`,id 为 `0`。你可以用 show 选项 `show nodeid|name` 获取更多 pm2 下运行的应用的信息。
|
||||
|
||||
`pm2 show www`
|
||||
```
|
||||
pm2 show www
|
||||
```
|
||||
|
||||
[
|
||||
![pm2 服务状态](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/6.png)
|
||||
@ -132,7 +151,9 @@ pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm
|
||||
|
||||
如果你想看我们应用的日志,你可以使用 logs 选项。它包括访问和错误日志,你还可以看到应用程序的 HTTP 状态。
|
||||
|
||||
`pm2 logs www`
|
||||
```
|
||||
pm2 logs www
|
||||
```
|
||||
|
||||
[
|
||||
![pm2 服务日志](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/7.png)
|
||||
@ -140,14 +161,17 @@ pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm
|
||||
|
||||
你可以看到我们的程序正在运行。现在,让我们来让它开机自启动。
|
||||
|
||||
`pm2 startup systemd`
|
||||
```
|
||||
pm2 startup systemd
|
||||
```
|
||||
|
||||
**systemd**: Ubuntu 16 使用的是 systemd。
|
||||
- `systemd`: Ubuntu 16 使用的是 systemd。
|
||||
|
||||
你会看到要用 root 用户运行命令的信息。使用 "exit" 命令回到 root 用户然后运行命令。
|
||||
你会看到要用 root 用户运行命令的信息。使用 `exit` 命令回到 root 用户然后运行命令。
|
||||
|
||||
|
||||
`sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u yume --hp /home/yume`
|
||||
```
|
||||
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u yume --hp /home/yume
|
||||
```
|
||||
|
||||
它会为启动应用程序生成 systemd 配置文件。当你重启服务器的时候,应用程序就会自动运行。
|
||||
|
||||
@ -157,66 +181,73 @@ pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm
|
||||
|
||||
### 第四步 - 安装和配置 Nginx 作为反向代理
|
||||
|
||||
在这篇指南中,我们会使用 Nginx 作为 node 应用的反向代理。Ubuntu 仓库中有 Nginx,用 apt 命令安装它:
|
||||
在这篇指南中,我们会使用 Nginx 作为 node 应用的反向代理。Ubuntu 仓库中有 Nginx,用 `apt` 命令安装它:
|
||||
|
||||
`sudo apt-get install -y nginx`
|
||||
```
|
||||
sudo apt-get install -y nginx
|
||||
```
|
||||
|
||||
下一步,进入到 '**sites-available**' 目录并创建新的虚拟 host 配置文件。
|
||||
下一步,进入到 `sites-available` 目录并创建新的虚拟主机配置文件。
|
||||
|
||||
`cd /etc/nginx/sites-available/`
|
||||
`vim hakase-app`
|
||||
```
|
||||
cd /etc/nginx/sites-available/
|
||||
vim hakase-app
|
||||
```
|
||||
|
||||
粘贴下面的配置:
|
||||
|
||||
upstream hakase-app {
|
||||
# Nodejs app upstream
|
||||
server 127.0.0.1:3000;
|
||||
keepalive 64;
|
||||
}
|
||||
|
||||
# Server on port 80
|
||||
server {
|
||||
listen 80;
|
||||
server_name hakase-node.co;
|
||||
root /home/yume/hakase-app;
|
||||
|
||||
location / {
|
||||
# Proxy_pass configuration
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_set_header X-NginX-Proxy true;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_max_temp_file_size 0;
|
||||
proxy_pass http://hakase-app/;
|
||||
proxy_redirect off;
|
||||
proxy_read_timeout 240s;
|
||||
}
|
||||
}
|
||||
```
|
||||
upstream hakase-app {
|
||||
# Nodejs app upstream
|
||||
server 127.0.0.1:3000;
|
||||
keepalive 64;
|
||||
}
|
||||
|
||||
# Server on port 80
|
||||
server {
|
||||
listen 80;
|
||||
server_name hakase-node.co;
|
||||
root /home/yume/hakase-app;
|
||||
|
||||
location / {
|
||||
# Proxy_pass configuration
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_set_header X-NginX-Proxy true;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_max_temp_file_size 0;
|
||||
proxy_pass http://hakase-app/;
|
||||
proxy_redirect off;
|
||||
proxy_read_timeout 240s;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
保存文件并退出 vim。
|
||||
|
||||
在配置中:
|
||||
|
||||
* node 应用使用域名 '**hakase-node.co**' 运行。
|
||||
* 所有来自 nginx 的流量都会被转发到运行在 **3000** 端口的 node app。
|
||||
* node 应用使用域名 `hakase-node.co` 运行。
|
||||
* 所有来自 nginx 的流量都会被转发到运行在 `3000` 端口的 node app。
|
||||
|
||||
测试 Nginx 配置确保没有错误。
|
||||
|
||||
`nginx -t`
|
||||
```
|
||||
nginx -t
|
||||
```
|
||||
|
||||
启用 Nginx 并使其开机自启动。
|
||||
|
||||
`systemctl start nginx`
|
||||
`systemctl enable nginx`
|
||||
```
|
||||
systemctl start nginx
|
||||
systemctl enable nginx
|
||||
```
|
||||
|
||||
### 第五步 - 测试
|
||||
|
||||
打开你的 web 浏览器并访问域名(我的是):
|
||||
|
||||
[http://hakase-app.co][19]
|
||||
打开你的 web 浏览器并访问域名(我的是):[http://hakase-app.co][19]
|
||||
|
||||
你可以看到 express 应用正在 Nginx web 服务器中运行。
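如果域名还没有解析到服务器,也可以用 curl 指定 Host 头来测试(一个示意;`192.168.1.10` 是假设的服务器地址,请替换为你自己的):

```
curl -I -H "Host: hakase-node.co" http://192.168.1.10/
```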
|
||||
|
||||
@ -226,13 +257,17 @@ pm2 是一个 node 软件包,可以使用 npm 命令安装。让我们用 npm
|
||||
|
||||
下一步,重启你的服务器,确保你的 node app 能开机自启动:
|
||||
|
||||
`pm2 save`
|
||||
`sudo reboot`
|
||||
```
|
||||
pm2 save
|
||||
sudo reboot
|
||||
```
|
||||
|
||||
如果你再次登录到了你的服务器,检查 node app 进程。以 '**yume**' 用户运行下面的命令。
|
||||
如果你再次登录到了你的服务器,检查 node app 进程。以 `yume` 用户运行下面的命令。
|
||||
|
||||
`su - yume`
|
||||
`pm2 status www`
|
||||
```
|
||||
su - yume
|
||||
pm2 status www
|
||||
```
|
||||
|
||||
[
|
||||
![nodejs 在 pm2 下开机自启动](https://www.howtoforge.com/images/how_to_deploy_nodejs_applications_with_pm2_and_nginx_on_ubuntu/10.png)
|
||||
@ -250,9 +285,9 @@ Node 应用在 pm2 中运行并使用 Nginx 作为反向代理。
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/how-to-deploy-nodejs-applications-with-pm2-and-nginx-on-ubuntu/
|
||||
|
||||
作者:[Muhammad Arul ][a]
|
||||
作者:[Muhammad Arul][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -5,40 +5,34 @@
|
||||
|
||||
|
||||
![How to deploy Kubernetes on the Raspberry Pi ](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberrypi_cartoon.png?itok=sntNdheJ "How to deploy Kubernetes on the Raspberry Pi ")
|
||||
>图片提供: opensource.com
|
||||
|
||||
> 图片提供: opensource.com
|
||||
|
||||
当我开始对 [ARM][6]设备,特别是 Raspberry Pi 感兴趣时,我的第一个项目是一个 OpenVPN 服务器。
|
||||
|
||||
通过将 Raspberry Pi 作为家庭网络的安全网关,我可以使用我的手机来控制我的桌面,远程播放 Spotify,打开文档以及一些其他有趣的东西。我在第一个项目中使用了一个现有的教程,因为我害怕自己使用命令行。
|
||||
通过将 Raspberry Pi 作为家庭网络的安全网关,我可以使用我的手机来控制我的桌面,远程播放 Spotify,打开文档以及一些其他有趣的东西。我在第一个项目中使用了一个现有的教程,因为我害怕自己在命令行中摸索拼凑。
|
||||
|
||||
更多关于 Raspberry Pi 的文章:
|
||||
|
||||
* [最新的 Raspberry Pi][1]
|
||||
* [什么是 Raspberry Pi?][2]
|
||||
* [开始使用 Raspberry Pi][3]
|
||||
* [给我们发送你的 Raspberry Pi 项目和教程][4]
|
||||
|
||||
几个月后,这种恐惧消失了。我扩展了我的原始项目,并使用[ Samba 服务器][7]从文件服务器隔离了 OpenVPN 服务器。这是我第一个没有完全按照教程来的项目。不幸的是,在我的 Samba 项目结束后,我意识到我没有记录任何东西,所以我无法复制它。为了重新创建它,我不得不重新参考我曾经用过的那些单独的教程,将项目拼回到一起。
|
||||
几个月后,这种恐惧消失了。我扩展了我的原始项目,并使用 [Samba 服务器][7]从文件服务器分离出了 OpenVPN 服务器。这是我第一个没有完全按照教程来的项目。不幸的是,在我的 Samba 项目结束后,我意识到我没有记录任何东西,所以我无法复制它。为了重新创建它,我不得不重新参考我曾经用过的那些单独的教程,将项目拼回到一起。
|
||||
|
||||
我学到了关于开发人员工作流程的宝贵经验 - 跟踪你所有的更改。我在本地做了一个小的 git 仓库,并记录了我输入的所有命令。
|
||||
|
||||
### 发现 Kubernetes
|
||||
|
||||
2015 年 5 月,我发现了 Linux 容器和 Kubernetes。我觉得 Kubernetes 很有魅力,我可以使用仍在技术上发展的概念 - 并且我实际上可以用它。平台本身及其所呈现的可能性令人兴奋。在此之前,我才刚刚在一块 Raspberry Pi 上运行了一个程序。有了 Kubernetes,我可以做出比以前更高级的配置。
|
||||
2015 年 5 月,我发现了 Linux 容器和 Kubernetes。我觉得 Kubernetes 很有魅力,我可以使用这个仍在发展中的技术概念 - 并且我实际上可以用它。平台本身及其所呈现的可能性令人兴奋。在此之前,我才刚刚在一块 Raspberry Pi 上运行了一个程序。而有了 Kubernetes,我可以做出比以前更先进的配置。
|
||||
|
||||
那时候,Docker(v1.6 版本,如果我记得正确的话)在 ARM 上有一个 bug,这意味着在 Raspberry Pi 上运行 Kubernetes 实际上是不可能的。在早期的 0.x 版本中,Kubernetes 的变化很快。每次我在 AMD64 上找到一篇关于如何设置 Kubernetes 的指南时,它针对的还都是一个旧版本,与我当时使用的完全不兼容。
|
||||
|
||||
不管怎样,我用自己的方法在 Raspberry Pi 上创建了一个 Kubernetes 节点,而在 Kubernetes v1.0.1 中,我使用 Docker v1.7.1 [让它工作了][8]。这是第一个将 Kubernetes 全功能部署到 ARM 的方法。
|
||||
|
||||
在 Raspberry Pi 上运行 Kubernetes 的优势在于,由于 ARM 设备非常小巧,因此不会产生大量的功耗。如果程序以正确的方式构建,那么同样可以在 AMD64 上用同样的方法运行程序。有一块小型 IoT 板为教育创造了巨大的机会。用它来做演示也很有用,比如你要出差参加一个会议。携带 Raspberry Pi (通常)比拖着大型英特尔机器要容易得多。
|
||||
在 Raspberry Pi 上运行 Kubernetes 的优势在于,由于 ARM 设备非常小巧,因此不会产生大量的功耗。如果程序以正确的方式构建而成,那么就可以在 AMD64 上用同样的方法运行同一个程序。这样的一块小型 IoT 板为教育创造了巨大的机会。用它来做演示也很有用,比如你要出差参加一个会议。携带 Raspberry Pi (通常)比拖着大型英特尔机器要容易得多。
|
||||
|
||||
现在按照[我建议][9]的 ARM(32 位和 64 位)的支持已被合并到核心中。ARM 的二进制文件会自动与 Kubernetes 一起发布。虽然我们还没有为 ARM 提供自动化的 CI(持续集成)系统,在 PR 合并之前会自动确定它可在 ARM 上工作,它仍然工作得不错。
|
||||
现在,经由[我的提议][9],ARM(32 位和 64 位)支持已被合并到 Kubernetes 核心中,ARM 的二进制文件会自动与 Kubernetes 一起发布。虽然我们还没有自动化的 CI(持续集成)系统来在 PR 合并之前自动验证它可在 ARM 上工作,但它现在运转得不错。
|
||||
|
||||
### Raspberry Pi 上的分布式网络
|
||||
|
||||
我通过 [kubeadm][10] 发现了 Weave Net。[Weave Mesh][11]是一个有趣的分布式网络解决方案,因此我开始阅读更多关于它的内容。在 2016 年 12 月,我在 [Weaveworks][12] 收到了第一份合同工作。我是 Weave Net 中 ARM 支持团队的一员。
|
||||
我通过 [kubeadm][10] 发现了 Weave Net。[Weave Mesh][11] 是一个有趣的分布式网络解决方案,因此我开始了解更多关于它的内容。在 2016 年 12 月,我在 [Weaveworks][12] 收到了第一份合同工作,我成为了 Weave Net 中 ARM 支持团队的一员。
|
||||
|
||||
我很高兴可以在 Raspberry Pi 上运行 Weave Net 的工业案例,比如那些需要更加移动化的工厂。目前,将 Weave Scope 或 Weave Cloud 部署到 Raspberry Pi 可能不太现实(尽管可以考虑使用其他 ARM 设备),因为我猜这个软件需要更多的内存才能运行良好。理想情况下,随着 Raspberry Pi 升级到 2GB 内存,我想我可以在它上面运行 Weave Cloud 了。
|
||||
我很高兴可以在 Raspberry Pi 上运行 Weave Net 的工业案例,比如那些需要设备更加移动化的工厂。目前,将 Weave Scope 或 Weave Cloud 部署到 Raspberry Pi 可能不太现实(尽管可以考虑使用其他 ARM 设备),因为我猜这个软件需要更多的内存才能运行良好。理想情况下,随着 Raspberry Pi 升级到 2GB 内存,我想我可以在它上面运行 Weave Cloud 了。
|
||||
|
||||
在 Weave Net 1.9 中,Weave Net 支持了 ARM。Kubeadm(通常是 Kubernetes)在多个平台上工作。你可以使用 Weave 将 Kubernetes 部署到 ARM,就像在任何 AMD64 设备上一样安装 Docker、kubeadm、kubectl 和 kubelet。然后初始化控制面板组件运行的主机:
|
||||
|
||||
@ -52,7 +46,7 @@ kubeadm init
|
||||
kubectl apply -f https://git.io/weave-kube
|
||||
```
|
||||
|
||||
在此之前在 ARM 上,你只可以用 Flannel 安装一个 pod 网络,但是在 Weave Net 1.9 中已经改变了,它官方支持了 ARM。
|
||||
在此之前在 ARM 上,你只能用 Flannel 安装 pod 网络,但是在 Weave Net 1.9 中已经改变了,它官方支持了 ARM。
|
||||
|
||||
最后,加入你的节点:
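此处的 diff 未包含具体命令;作为示意(token 与主节点地址均为占位符,实际值以 `kubeadm init` 的输出为准),当时版本的加入命令大致形如:

```
kubeadm join --token <token> <master-ip>:<port>
```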
|
||||
|
||||
@ -84,7 +78,7 @@ Lucas Käldström - 谢谢你发现我!我是一名来自芬兰的说瑞典语
|
||||
|
||||
via: https://opensource.com/article/17/3/kubernetes-raspberry-pi
|
||||
|
||||
作者:[ Lucas Käldström][a]
|
||||
作者:[Lucas Käldström][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
@ -0,0 +1,111 @@
|
||||
AI 正快速入侵我们生活的五个方面
|
||||
============================================================
|
||||
|
||||
> 让我们来看看我们已经被人工智能包围的五个真实存在的方面。
|
||||
|
||||
![5 big ways AI is rapidly invading our lives](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/brain-think-ai-intelligence-ccby.png?itok=-EK6Vpz1 "5 big ways AI is rapidly invading our lives")
|
||||
|
||||
> 图片来源: opensource.com
|
||||
|
||||
开源项目[正在助推][2]人工智能(AI)进步,而且随着技术的成熟,我们将听到更多关于 AI 如何影响我们生活的消息。你有没有考虑过 AI 是如何改变你周围的世界的?让我们来看看这个日益被人工智能所包围的世界,并大胆预测一下 AI 影响下的未来。
|
||||
|
||||
### 1. AI 影响你的购买决定
|
||||
|
||||
最近 [VentureBeat][3] 上的一篇文章,[“AI 将如何帮助我们解读千禧一代”][4]吸引了我的注意。我承认我对人工智能没有思考太多,也没有费力尝试解读千禧一代,所以我很好奇,希望了解更多。事实证明,文章标题有点误导人,“如何卖东西给千禧一代”或许会是一个更准确的标题。
|
||||
|
||||
根据这篇文章,千禧一代是“一个令人垂涎的年龄阶段的人群,全世界的市场经理都在争抢他们”。通过分析网络行为(无论是购物、社交媒体或其他活动),机器学习可以帮助预测行为模式,这将可以变成有针对性的广告。文章接着解释如何对物联网和社交媒体平台进行挖掘形成数据点。“使用机器学习挖掘社交媒体数据,可以让公司了解千禧一代如何谈论其产品,他们对一个产品类别的看法,他们对竞争对手的广告活动如何响应,还可获得很多数据,用于设计有针对性的广告”,这篇文章解释说。AI 和千禧一代成为营销的未来并不是什么很令人吃惊的事,但是 X 一代和婴儿潮一代,你们也逃不掉呢!(LCTT 译注:X 一代指出生于 20 世纪 60 年代中期至 70 年代末的美国人,婴儿潮是指二战结束后,1946 年初至 1964 年底出生的人)
|
||||
|
||||
> 人工智能根据行为变化,将包括城市人在内的整个人群设为目标群体。
|
||||
|
||||
例如, [Raconteur 上][23]的一篇文章 —— “AI 将怎样改变购买者的行为”中解释说,AI 在网上零售行业最大的力量是它能够迅速适应客户行为不断变化的形势。人工智能创业公司 [Fluid AI][25] 首席执行官 Abhinav Aggarwal 表示,他公司的软件被一个客户用来预测顾客行为,有一次系统注意到在暴风雪期间发生了一个变化。“那些通常会忽略在一天中发送的电子邮件或应用内通知的用户现在正在打开它们,因为他们在家里没有太多的事情可做。一个小时之内,AI 系统就适应了新的情况,并在工作时间开始发送更多的促销材料。”他解释说。
|
||||
|
||||
AI 正在改变我们怎样花钱和为什么花钱,但是 AI 是怎样改变我们挣钱的方式的呢?
|
||||
|
||||
### 2. 人工智能正在改变我们如何工作
|
||||
|
||||
[Fast 公司][5]最近的一篇文章“2017 年人工智能将如何改变我们的生活”中说道,求职者将会从人工智能中受益。作者解释说,除更新薪酬趋势之外,人工智能将被用来给求职者发送相关职位空缺信息。当你应该升职的时候,你很可能会得到一个升职的机会。
|
||||
|
||||
人工智能也可以被公司用来帮助新入职的员工。文章解释说:“许多新员工在刚入职的几天内会获得大量信息,其中大部分都留不下来。”相反,机器人可以随着时间的推移,在新员工需要相关信息的时候再一点点地提供给他。
|
||||
|
||||
[Inc.][7] 有一篇文章[“没有偏见的企业:人工智能将如何重塑招聘机制”][8],观察了人才管理解决方案提供商 [SAP SuccessFactors][9] 是怎样利用人工智能作为工作描述的“偏见检查器”,以及检查员工薪酬方面的偏见。
|
||||
|
||||
[《Deloitte 2017 人力资本趋势报告》][10]显示,AI 正在激励组织进行重组。Fast 公司的文章[“AI 是怎样改变公司组织的方式”][11]审查了这篇报告,该报告是基于全球 10,000 多名人力资源和商业领袖的调查结果。这篇文章解释说:“许多公司现在更注重文化和环境的适应性,而不是聘请最有资格的人来做某个具体任务,因为知道个人角色必须随 AI 的实施而发展。”为了适应不断变化的技术,组织也从自上而下的结构转向多学科团队,文章说。
|
||||
|
||||
### 3. AI 正在改变教育
|
||||
|
||||
> AI 将使所有教育生态系统的利益相关者受益。
|
||||
|
||||
尽管教育预算正在缩减,但是教室的规模却正在增长。因此利用技术的进步有助于提高教育体系的生产率和效率,并在提高教育质量和负担能力方面发挥作用。根据 VentureBeat 上的一篇文章[“2017 年人工智能将怎样改变教育”][26],今年我们将看到 AI 对学生们的书面答案进行评分,机器人回答学生的问题,虚拟个人助理辅导学生等等。文章解释说:“AI 将惠及教育生态系统的所有利益相关者。学生将能够通过即时的反馈和指导学得更好,教师将获得丰富的学习分析和对个性化教学的见解,父母将以更低的成本看到他们的孩子的更好的职业前景,学校能够规模化优质的教育,政府能够向所有人提供可负担得起的教育。”
|
||||
|
||||
### 4. 人工智能正在重塑医疗保健
|
||||
|
||||
2017 年 2 月 [CB Insights][12] 的一篇文章挑选了 106 个医疗保健领域的人工智能初创公司,其中许多在过去几年中获得了首次股权融资。这篇文章说:“在 24 家成像和诊断公司中,19 家公司自 2015 年 1 月以来获得了首次股权融资。”这份名单上有从事远程病人监测、药物发现和肿瘤学方面人工智能的公司。
|
||||
|
||||
3 月 16 日发表在 TechCrunch 上的一篇关于 AI 进步如何重塑医疗保健的文章解释说:“一旦对人类的 DNA 有了更好的理解,就有机会更进一步,并能根据他们特殊的生活习性为他们提供个性化的见解。这种趋势预示着‘个性化遗传学’的新纪元,人们能够通过获得关于自己身体的前所未有的信息来充分控制自己的健康。”
|
||||
|
||||
本文接着解释说,AI 和机器学习降低了研发新药的成本和时间。部分由于需要大量的试验,新药进入市场需要 12 年以上的时间。这篇文章说:“机器学习算法可以让计算机根据先前处理的数据来‘学习’如何做出预测,或者选择(在某些情况下,甚至是设计)需要做的实验。类似的算法还可用于预测特定化合物对人体的副作用,这样可以加快审批速度。”这篇文章指出,2015 年旧金山的一个创业公司 [Atomwise][15] 一天内完成了可以减少埃博拉感染的两种新药物的分析,而不是花费数年时间。
|
||||
|
||||
> AI 正在帮助发现、诊断和治疗新疾病。
|
||||
|
||||
另外一个位于伦敦的初创公司 [BenevolentAI][27] 正在利用人工智能寻找科学文献中的模式。这篇文章说:“最近,这家公司找到了两种可能对 Alzheimer 起作用的化合物,引起了很多制药公司的关注。”
|
||||
|
||||
除了有助于研发新药,AI 正在帮助发现、诊断和治疗新疾病。TechCrunch 上的文章解释说,过去是根据显示的症状诊断疾病,但是现在 AI 正在被用于检测血液中的疾病特征,并利用对数十亿例临床病例分析进行深度学习获得经验来制定治疗计划。这篇文章说:“IBM 的 Watson 正在与纽约的 Memorial Sloan Kettering 合作,消化理解数十年来关于癌症患者和治疗方面的数据,以便为治疗疑难癌症病例的医生提供治疗方案建议。”
|
||||
|
||||
### 5. AI 正在改变我们的爱情生活
|
||||
|
||||
有 195 个国家的超过 5000 万活跃用户通过一个在 2012 年推出的约会应用程序 [Tinder][16] 找到潜在的伴侣。在一个 [Forbes 采访播客][17]中,Tinder 的创始人兼董事长 Sean Rad 与 Steven Bertoni 讨论了人工智能正在如何改变人们的约会方式。在[关于此次采访的文章][18]中,Bertoni 引用了 Rad 说的话,他说:“可能有这样一个时刻,Tinder 可以很好的推测你会感兴趣的人,在组织约会中还可能会做很多跑腿的工作”,所以,这个 app 会向用户推荐一些附近的同伴,并更进一步,协调彼此的时间安排一次约会,而不只是向用户显示一些有可能的同伴。
|
||||
|
||||
> 我们的后代真的可能会爱上人工智能。
|
||||
|
||||
你爱上了 AI 吗?我们的后代真的可能会爱上人工智能。Raya Bidshahri 发表在 [Singularity Hub][19] 的一篇文章“AI 将如何重新定义爱情”说,几十年后,我们可能会认为爱情不再受生物学的限制。
|
||||
|
||||
Bidshahri 解释说:“我们的技术符合摩尔定律,正在以惊人的速度增长 —— 智能设备正在越来越多地融入我们的生活。”他补充道:“到 2029 年,我们将会有和人类同等智慧的 AI,而到 21 世纪 40 年代,AI 将会比人类聪明无数倍。许多人预测,有一天我们会与强大的机器合并,我们自己可能会变成人工智能。”他认为,在这样的世界里,人们接受与完全的非生物对象相爱是不可避免的。
|
||||
|
||||
这听起来有点怪异,但是相比较于未来机器人将统治世界,爱上 AI 会是一个更乐观的结果。Bidshahri 说:“对 AI 进行编程,让它们能够感受到爱,这将使我们创造出更富有同情心的 AI,这可能也是避免很多人忧虑的 AI 大灾难的关键。”
|
||||
|
||||
这份 AI 正在入侵我们生活各领域的清单只是涉及到了我们身边的人工智能的表面。哪些 AI 创新是让你最兴奋的,或者是让你最烦恼的?大家可以在文章评论区写下你们的感受。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
Rikki Endsley - Rikki Endsley 是开源社区 Opensource.com 的管理员。在过去,她曾做过 Red Hat 开源和标准(OSAS)团队社区传播者、自由技术记者、USENIX 协会的社区管理员、Linux 权威杂志 ADMIN 和 Ubuntu User 的合作出版者,还是杂志 Sys Admin 和 UnixReview.com 的主编。在 Twitter 上关注她:@rikkiends。
|
||||
|
||||
----
|
||||
|
||||
via: https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
|
||||
作者:[Rikki Endsley][a]
|
||||
译者:[zhousiyu325](https://github.com/zhousiyu325)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rikki-endsley
|
||||
[1]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives?rate=ORfqhKFu9dpA9aFfg-5Za9ZWGcBcx-f0cUlf_VZNeQs
|
||||
[2]:https://www.linux.com/news/open-source-projects-are-transforming-machine-learning-and-ai
|
||||
[3]:https://twitter.com/venturebeat
|
||||
[4]:http://venturebeat.com/2017/03/16/how-ai-will-help-us-decipher-millennials/
|
||||
[5]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
[6]:https://www.fastcompany.com/3066620/this-is-how-ai-will-change-your-work-in-2017
|
||||
[7]:https://twitter.com/Inc
|
||||
[8]:http://www.inc.com/bill-carmody/businesses-beyond-bias-how-ai-will-reshape-hiring-practices.html
|
||||
[9]:https://www.successfactors.com/en_us.html
|
||||
[10]:https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html?id=us:2el:3pr:dup3575:awa:cons:022817:hct17
|
||||
[11]:https://www.fastcompany.com/3068492/how-ai-is-changing-the-way-companies-are-organized
|
||||
[12]:https://twitter.com/CBinsights
|
||||
[13]:https://www.cbinsights.com/blog/artificial-intelligence-startups-healthcare/
|
||||
[14]:https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/
|
||||
[15]:http://www.atomwise.com/
|
||||
[16]:https://twitter.com/Tinder
|
||||
[17]:https://www.forbes.com/podcasts/the-forbes-interview/#5e962e5624e1
|
||||
[18]:https://www.forbes.com/sites/stevenbertoni/2017/02/14/tinders-sean-rad-on-how-technology-and-artificial-intelligence-will-change-dating/#4180fc2e5b99
|
||||
[19]:https://twitter.com/singularityhub
|
||||
[20]:https://singularityhub.com/2016/08/05/how-ai-will-redefine-love/
|
||||
[21]:https://opensource.com/user/23316/feed
|
||||
[22]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives#comments
|
||||
[23]:https://twitter.com/raconteur
|
||||
[24]:https://www.raconteur.net/technology/how-ai-will-change-buyer-behaviour
|
||||
[25]:http://www.fluid.ai/
|
||||
[26]:http://venturebeat.com/2017/02/04/how-ai-will-transform-education-in-2017/
|
||||
[27]:https://twitter.com/benevolent_ai
|
||||
[28]:https://opensource.com/users/rikki-endsley
|
||||
|
@ -1,24 +1,24 @@
|
||||
Remmina - 一个 Linux 下功能丰富的远程桌面共享工具
|
||||
Remmina:一个 Linux 下功能丰富的远程桌面共享工具
|
||||
============================================================
|
||||
|
||||
**Remmina** 是一款在 Linux 和其他类 Unix 系统下的免费开源、功能丰富、强大的远程桌面客户端,它用 GTK+ 3 编写而成。它适用于那些需要远程访问及使用许多计算机的系统管理员和在外出行人员。
|
||||
**Remmina** 是一款在 Linux 和其他类 Unix 系统下的自由开源、功能丰富、强大的远程桌面客户端,它用 GTK+ 3 编写而成。它适用于那些需要远程访问及使用许多计算机的系统管理员和在外出行人员。
|
||||
|
||||
它以简单、统一、同一性、易于使用的用户界面支持多种网络协议。
|
||||
它以简单、统一、同质、易用的用户界面支持多种网络协议。
|
||||
|
||||
#### Remmina 功能
|
||||
### Remmina 功能
|
||||
|
||||
* 支持 RDP、VNC、NX、XDMCP 和 SSH。
|
||||
* 用户能够以组的形式维护一份连接配置列表。
|
||||
* 支持用户直接输入服务器地址的快速连接。
|
||||
* 具有更高分辨率的远程桌面,可以在窗口和全屏模式下滚动/缩放。
|
||||
* 支持窗口全屏模式;当鼠标移动到屏幕边缘时,远程桌面会自动滚动。
|
||||
* 还支持全屏模式浮动工具栏;使你能够在不同模式间切换、触发键盘获取、最小化等。
|
||||
* 提供选项卡式界面,可选择由组管理。
|
||||
* 还支持全屏模式的浮动工具栏;使你能够在不同模式间切换、触发键盘获取、最小化等。
|
||||
* 提供选项卡式界面,可以按组管理。
|
||||
* 还提供托盘图标,允许你快速访问已配置的连接文件。
|
||||
|
||||
在本文中,我们将向你展示如何在 Linux 中安装 Remmina,以及使用它通过支持的不同协议实现桌面共享。
|
||||
|
||||
#### 先决条件
|
||||
### 先决条件
|
||||
|
||||
* 在远程机器上允许桌面共享(让远程机器允许远程连接)。
|
||||
* 在远程机器上设置 SSH 服务。
|
||||
@ -43,7 +43,7 @@ $ sudo dnf copr enable hubbitus/remmina-next
|
||||
$ sudo dnf upgrade --refresh 'remmina*' 'freerdp*'
|
||||
```
|
||||
|
||||
一旦安装完成后,在 Ubuntu 或 Linux Mint 菜单中搜索 **remmina**,接着运行它:
|
||||
一旦安装完成后,在 Ubuntu 或 Linux Mint 菜单中搜索 `remmina`,接着运行它:
|
||||
|
||||
[
|
||||
![Remmina Desktop Sharing Client](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Desktop-Sharing-Client.png)
|
||||
@ -53,7 +53,7 @@ $ sudo dnf upgrade --refresh 'remmina*' 'freerdp*'
|
||||
|
||||
你可以通过图形界面或者编辑 `$HOME/.remmina` 或者 `$HOME/.config/remmina` 下的文件来进行配置。
|
||||
|
||||
要设置到一个新的远程服务器的连接,按下 `[Ctrl+N]` 并点击 **Connection -> New**,如下截图中配置远程连接。这是基本的设置界面。
|
||||
要设置到一个新的远程服务器的连接,按下 `Ctrl+N` 并点击 **Connection -> New**,如下截图中配置远程连接。这是基本的设置界面。
|
||||
|
||||
[
|
||||
![Remmina Basic Desktop Preferences](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Basic-Desktop-Preferences.png)
|
||||
@ -87,7 +87,7 @@ $ sudo dnf upgrade --refresh 'remmina*' 'freerdp*'
|
||||
|
||||
#### 使用 sFTP 连接到远程机器
|
||||
|
||||
选择连接配置并编辑设置,在 “**Protocols**” 下拉菜单中选择 **sFTP - 安全文件传输**。接着设置启动路径(可选),并指定 SSH 验证细节。最后点击**连接**。
|
||||
选择连接配置并编辑设置,在 “**Protocols**” 下拉菜单中选择 **sFTP - Secure File Transfer**。接着设置启动路径(可选),并指定 SSH 验证细节。最后点击**连接**。
|
||||
|
||||
[
|
||||
![Remmina sftp Connection](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-sftp-connection.png)
|
||||
@ -103,7 +103,7 @@ $ sudo dnf upgrade --refresh 'remmina*' 'freerdp*'
|
||||
|
||||
*输入 SSH 密码*
|
||||
|
||||
如果你看到下面的界面,那么代表 SFTP 连接成功了,你现在可以[在两台机器键传输文件了][8]。
|
||||
如果你看到下面的界面,那么代表 sFTP 连接成功了,你现在可以[在两台机器间传输文件了][8]。
|
||||
|
||||
[
|
||||
![Remmina Remote sFTP Filesystem](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-Remote-sFTP-Filesystem.png)
|
||||
@ -131,7 +131,7 @@ $ sudo dnf upgrade --refresh 'remmina*' 'freerdp*'
|
||||
|
||||
#### 使用 VNC 连接到远程机器
|
||||
|
||||
选择连接配置并编辑设置,在 “**Protocols**” 下拉菜单中选择 **VNC - 虚拟网络计算**。为连接配置基础、高级以及 ssh 设置,点击**连接**,接着输入用户 SSH 密码。
|
||||
选择连接配置并编辑设置,在 “**Protocols**” 下拉菜单中选择 **VNC - Virtual Network Computing**。为该连接配置基础、高级以及 ssh 设置,点击**连接**,接着输入用户 SSH 密码。
|
||||
|
||||
[
|
||||
![Remmina VNC Connection](http://www.tecmint.com/wp-content/uploads/2017/03/Remmina-VNC-Connection.png)
|
||||
@ -172,7 +172,7 @@ via: http://www.tecmint.com/remmina-remote-desktop-sharing-and-ssh-client/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,255 @@
|
||||
rdiff-backup:一个 Linux 中的远程增量备份工具
|
||||
============================================================
|
||||
|
||||
rdiff-backup 是一个用于本地/远程增量备份的强大而易用的 Python 脚本,它适用于任何 POSIX 操作系统,如 Linux、Mac OS X 或 [Cygwin][1]。它集合了镜像和增量备份的显著特性。
|
||||
|
||||
值得注意的是,它保留了子目录、dev 文件、硬链接,以及关键的文件属性,如权限、uid/gid 所有权、修改时间、扩展属性、acl 以及 resource fork。它可以通过管道以高效带宽的模式工作,这与流行的 [rsync 备份工具][2]类似。
|
||||
|
||||
rdiff-backup 通过使用 SSH 将单个目录备份到另一个目录,这意味着数据传输被加密并且是安全的。目标目录(在远程系统上)最终会得到源目录的完整副本;此外,反向差异会存储在目标目录的一个特殊子目录中,从而可以恢复一段时间之前丢失的文件。
|
||||
|
||||
### 依赖
|
||||
|
||||
要在 Linux 中使用 rdiff-backup,你需要在系统上安装以下软件包:
|
||||
|
||||
* Python v2.2 或更高版本
|
||||
* librsync v0.9.7 或更高版本
|
||||
* pylibacl 和 pyxattr Python 模块是可选的,但它们分别是 POSIX 访问控制列表(ACL)和扩展属性支持必需的。
|
||||
* rdiff-backup-statistics 需要 Python v2.4 或更高版本。
|
||||
|
||||
### 如何在 Linux 中安装 rdiff-backup
|
||||
|
||||
重要:如果你通过网络运行它,则必须在两个系统中都安装 rdiff-backup,两者最好是相同版本。
|
||||
|
||||
该脚本已经存在于主流 Linux 发行版的官方仓库中,只需运行以下命令来安装 rdiff-backup 及其依赖关系:
|
||||
|
||||
#### 在 Debian/Ubuntu 中
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install librsync-dev rdiff-backup
|
||||
```
|
||||
|
||||
#### 在 CentOS/RHEL 7 中
|
||||
|
||||
```
|
||||
# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
|
||||
# rpm -ivh epel-release-7-9.noarch.rpm
|
||||
# yum install librsync rdiff-backup
|
||||
```
|
||||
|
||||
#### 在 CentOS/RHEL 6 中
|
||||
|
||||
```
|
||||
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
|
||||
# rpm -ivh epel-release-6-8.noarch.rpm
|
||||
# yum install librsync rdiff-backup
|
||||
```
|
||||
|
||||
#### 在 Fedora 中
|
||||
|
||||
```
|
||||
# yum install librsync rdiff-backup
|
||||
# dnf install librsync rdiff-backup [Fedora 22+]
|
||||
```
|
||||
|
||||
### 如何在 Linux 中使用 rdiff-backup
|
||||
|
||||
如前所述,rdiff-backup 使用 SSH 连接到网络上的远程计算机,SSH 的默认身份验证方式是用户名/密码,这通常需要人工交互。
|
||||
|
||||
但是,要自动执行备份脚本之类的任务,你需要配置[使用 SSH 密钥无密码登录 SSH][3],因为 SSH 密钥增加了两台 Linux 服务器之间的信任,可以[简化文件同步或传输][4]。
|
||||
|
||||
在你设置了 [SSH 无密码登录][5]后,你可以使用下面的例子开始使用该脚本。
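作为参考,无密码登录通常只需两步即可配置好(一个最小示意,假设远程服务器为本文中的 192.168.56.102):

```
$ ssh-keygen -t rsa
$ ssh-copy-id root@192.168.56.102
```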
|
||||
|
||||
#### 备份文件到不同分区
|
||||
|
||||
下面的例子会备份 `/etc` 文件夹到另外一个分区的 `Backup` 文件夹内:
|
||||
|
||||
```
|
||||
$ sudo rdiff-backup /etc /media/aaronkilik/Data/Backup/mint_etc.backup
|
||||
```
|
||||
[
|
||||
![Backup Files to Different Partition](http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Files-to-Different-Partition.png)
|
||||
][6]
|
||||
|
||||
*备份文件到不同分区*
|
||||
|
||||
要排除一个特定文件夹和它的子目录,你可以如下使用 `--exclude` 选项:
|
||||
|
||||
```
|
||||
$ sudo rdiff-backup --exclude /etc/cockpit --exclude /etc/bluetooth /media/aaronkilik/Data/Backup/mint_etc.backup
|
||||
```
|
||||
|
||||
我们可以如下使用 `--include-special-files` 包含所有的设备文件、fifo 文件、socket 文件和链接文件:
|
||||
|
||||
```
|
||||
$ sudo rdiff-backup --include-special-files --exclude /etc/cockpit /media/aaronkilik/Data/Backup/mint_etc.backup
|
||||
```
|
||||
|
||||
还有另外两个用于选择文件的重要标志:`--max-file-size` 用来排除大于给定字节大小的文件,`--min-file-size` 用于排除小于给定字节大小的文件:
|
||||
|
||||
```
|
||||
$ sudo rdiff-backup --max-file-size 5M --include-special-files --exclude /etc/cockpit /media/aaronkilik/Data/Backup/mint_etc.backup
|
||||
```
|
||||
|
||||
#### 在本地 Linux 服务器上备份远程文件
|
||||
|
||||
要这么做,我们使用:
|
||||
|
||||
```
|
||||
Remote Server (tecmint) : 192.168.56.102
|
||||
Local Backup Server (backup) : 192.168.56.10
|
||||
```
|
||||
|
||||
如前所述,你必须在两台机器上安装相同版本的 rdiff-backup,如下所示,请尝试在两台机器上检查版本:
|
||||
|
||||
```
|
||||
$ rdiff-backup -V
|
||||
```
|
||||
[
|
||||
![Check rdiff Version on Servers](http://www.tecmint.com/wp-content/uploads/2017/03/check-rdif-versions-on-servers.png)
|
||||
][7]
|
||||
|
||||
*检查服务器中 rdiff 版本*
|
||||
|
||||
在备份服务器中,像这样创建一个存储备份文件的目录:
|
||||
|
||||
```
|
||||
# mkdir -p /backups
|
||||
```
|
||||
|
||||
现在在备份服务器中,运行下面的命令来将远程 Linux 服务器 192.168.56.102 中的 `/var/log/` 和 `/root` 备份到 `/backups` 中:
|
||||
|
||||
```
|
||||
# rdiff-backup root@192.168.56.102::/var/log/ /backups/192.168.56.102_logs.backup
|
||||
# rdiff-backup root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup
|
||||
```
|
||||
|
||||
下面的截图展示了远程服务器 192.168.56.102 中的 `root` 文件夹以及 192.168.56.10 备份服务器中的已备份文件:
|
||||
|
||||
[
|
||||
![Backup Remote Directory on Local Server](http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Remote-Linux-Directory-on-Local-Server.png)
|
||||
][8]
|
||||
|
||||
*在本地服务器备份远程目录*
|
||||
|
||||
注意截图中 “backup” 目录中创建的 rdiff-backup-data 文件夹,它包含了备份过程和增量文件的重要数据。
|
||||
|
||||
[
|
||||
![rdiff-backup - Backup Process Files](http://www.tecmint.com/wp-content/uploads/2017/03/rdiff-backup-data-directory-contents.png)
|
||||
][9]
|
||||
|
||||
*rdiff-backup – 备份过程文件*
|
||||
|
||||
现在,在 192.168.56.102 服务器中,如下所示 `root` 目录已经添加了额外的文件:
|
||||
|
||||
[
|
||||
![Verify Backup Directory](http://www.tecmint.com/wp-content/uploads/2017/03/additional-files-in-root-directory.png)
|
||||
][10]
|
||||
|
||||
*验证备份目录*
|
||||
|
||||
让我们再次运行备份命令以获取更改的数据,我们可以使用 `-v[0-9]`(其中数字指定详细程度级别,默认值为 3,这是静默模式)选项设置详细功能:
|
||||
|
||||
```
|
||||
# rdiff-backup -v4 root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup
|
||||
```
|
||||
[
|
||||
![Incremental Backup with Summary](http://www.tecmint.com/wp-content/uploads/2017/03/incremental-backup-of-root-files.png)
|
||||
][11]
|
||||
|
||||
*带有摘要的增量备份*
|
||||
|
||||
要列出 `/backups/192.168.56.102_rootfiles.backup` 目录中包含的部分增量备份的数量和日期,我们可以运行:
|
||||
|
||||
```
|
||||
# rdiff-backup -l /backups/192.168.56.102_rootfiles.backup/
|
||||
```
|
||||
|
||||
#### 使用 cron 自动进行 rdiff-back 备份
|
||||
|
||||
使用 `--print-statistics` 选项,我们可以在成功备份后打印摘要统计信息。但即使我们不设置此选项,仍可以从会话统计中获得这些信息。在手册页的 “STATISTICS” 部分中可以阅读有关此选项的更多信息。
|
||||
|
||||
`--remote-schema` 选项使我们能够指定使用替代方法连接到远程计算机。
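例如(仅为示意,这里假设远程主机的 SSH 端口为 2222),可以这样指定自定义的连接方式,其中 `%s` 会被替换为远程主机名:

```
# rdiff-backup --remote-schema 'ssh -p 2222 %s rdiff-backup --server' root@192.168.56.102::/var/log/ /backups/192.168.56.102_logs.backup
```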
|
||||
|
||||
现在,我们开始在备份服务器 192.168.56.10 上创建一个 `backup.sh` 脚本,如下所示:
|
||||
|
||||
```
|
||||
# cd ~/bin
|
||||
# vi backup.sh
|
||||
```
|
||||
|
||||
添加下面的行到脚本中。
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
#This is a rdiff-backup utility backup script
|
||||
#Backup command
|
||||
rdiff-backup --print-statistics --remote-schema 'ssh -C %s "sudo /usr/bin/rdiff-backup --server --restrict-read-only /"' root@192.168.56.102::/var/log /backups/192.168.56.102_logs.back
|
||||
#Checking rdiff-backup command success/error
|
||||
status=$?
|
||||
if [ $status != 0 ]; then
|
||||
#append error message in ~/backup.log file
|
||||
echo "rdiff-backup exit Code: $status - Command Unsuccessful" >>~/backup.log;
|
||||
exit 1;
|
||||
fi
|
||||
#Remove incremental backup files older than one month
|
||||
rdiff-backup --force --remove-older-than 1M /backups/192.168.56.102_logs.back
|
||||
```
|
||||
|
||||
保存文件并退出,接着运行下面的命令在服务器 192.168.56.10 上的 crontab 中添加此脚本:
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
```
|
||||
|
||||
添加此行在每天午夜运行你的备份脚本:
|
||||
|
||||
```
|
||||
0 0 * * * /root/bin/backup.sh > /dev/null 2>&1
|
||||
```
|
||||
|
||||
保存 crontab 并退出,现在我们已经成功自动化了备份过程。确保一切如希望那样工作。
|
||||
|
||||
阅读 rdiff-backup 的手册页获取更多信息、详尽的使用选项以及示例:
|
||||
|
||||
```
|
||||
# man rdiff-backup
|
||||
```
|
||||
|
||||
rdiff-backup 主页: [http://www.nongnu.org/rdiff-backup/][12]
|
||||
|
||||
就是这样了!在本教程中,我们向你展示了如何安装并基础地使用 rdiff-backup 这个易于使用的 Python 脚本,用于 Linux 中的本地/远程增量备份。 请通过下面的反馈栏与我们分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux SysAdmin 和 web 开发人员,目前是 TecMint 的内容创建者,他喜欢用电脑工作,并坚信分享知识。
|
||||
|
||||
|
||||
------------
|
||||
|
||||
via: http://www.tecmint.com/rdiff-backup-remote-incremental-backup-for-linux/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/aaronkili/
|
||||
[1]:http://www.tecmint.com/install-cygwin-to-run-linux-commands-on-windows-system/
|
||||
[2]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
|
||||
[3]:https://linux.cn/article-6901-1.html
|
||||
[4]:http://www.tecmint.com/sync-new-changed-modified-files-rsync-linux/
|
||||
[5]:https://linux.cn/article-6901-1.html
|
||||
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Files-to-Different-Partition.png
|
||||
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/check-rdif-versions-on-servers.png
|
||||
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Remote-Linux-Directory-on-Local-Server.png
|
||||
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/rdiff-backup-data-directory-contents.png
|
||||
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/additional-files-in-root-directory.png
|
||||
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/incremental-backup-of-root-files.png
|
||||
[12]:http://www.nongnu.org/rdiff-backup/
|
||||
[13]:http://www.tecmint.com/author/aaronkili/
|
||||
[14]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[15]:http://www.tecmint.com/free-linux-shell-scripting-books/
|
141
published/20170415 bd – Quickly Go Back to a Parent Directory.md
Normal file
@ -0,0 +1,141 @@
|
||||
bd:快速返回某级父目录而不用冗余地输入 “cd ../../..”
|
||||
============================================================
|
||||
|
||||
在 Linux 系统上通过命令行切换文件夹时,为了回到父目录(长路径),我们通常会重复输入 [cd 命令][1](`cd ../../..`),直到进入感兴趣的目录。
|
||||
|
||||
对于经验丰富的 Linux 用户或需要进行各种不同任务的系统管理员而言,这可能非常乏味,因此希望在操作系统时有一个快捷方式来简化工作。
|
||||
|
||||
**建议阅读:** [Autojump - 一个快速浏览 Linux 文件系统的高级 “cd” 命令][2]
|
||||
|
||||
在本文中,我们将介绍 bd 这个简单而有用的工具,帮助你在 Linux 中快速回到某级父目录。
|
||||
|
||||
bd 是用于切换文件夹的便利工具,它可以使你快速返回到父目录,而不必重复键入 `cd ../../..` 。 你可以可靠地将其与其他 Linux 命令组合以执行几个日常操作。
|
||||
|
||||
### 如何在 Linux 中安装 bd
|
||||
|
||||
运行下面的命令,使用 [wget 命令][3]下载并安装 bd 到 `/usr/bin/` 中,添加执行权限,并在 `~/.bashrc` 中创建需要的别名:
|
||||
|
||||
```
|
||||
$ wget --no-check-certificate -O /usr/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
|
||||
$ chmod +rx /usr/bin/bd
|
||||
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
注意:如果要启用大小写敏感的目录名匹配,请在上面创建的别名中,设置 `-s` 标志而不是 `-si` 标志。
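例如,启用大小写敏感匹配的别名可以这样写(与上面的安装命令相对应的一个示意):

```
$ echo 'alias bd=". bd -s"' >> ~/.bashrc
$ source ~/.bashrc
```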
|
||||
|
||||
要启用自动补全支持,运行这些命令:
|
||||
|
||||
```
|
||||
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
|
||||
$ source /etc/bash_completion.d/bd
|
||||
```
|
||||
|
||||
#### 如何在 Linux 中使用 bd
|
||||
|
||||
假设你目前在这个路径的顶层目录:
|
||||
|
||||
```
|
||||
/media/aaronkilik/Data/Computer Science/Documents/Books/LEARN/Linux/Books/server $
|
||||
```
|
||||
|
||||
你想要快速进入 “Documents” 目录,只要输入:
|
||||
|
||||
```
|
||||
$ bd Documents
|
||||
```
|
||||
|
||||
接着直接进入到 Data 目录,你可以输入:
|
||||
|
||||
```
|
||||
$ bd Data
|
||||
```
|
||||
[
|
||||
![Switch Between Directories Quickly](http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Between-Directories-Quickly.png)
|
||||
][4]
|
||||
|
||||
*目录间快速切换*
|
||||
|
||||
实际上,bd 让它变得更加直接,你要做的是输入 “bd <开头几个字母>”,比如:
|
||||
|
||||
```
|
||||
$ bd Doc
|
||||
$ bd Da
|
||||
```
|
||||
[
|
||||
![Quickly Switch Directories](http://www.tecmint.com/wp-content/uploads/2017/03/Quickly-Switch-Directories.png)
|
||||
][5]
|
||||
|
||||
*快速切换目录*
|
||||
|
||||
重要:如果层次结构中有不止一个具有相同名称的目录,bd 将会移动到最接近的目录,而不考虑最近的父目录,如下面的例子那样。
|
||||
|
||||
例如,在上面的路径中,有两个名称相同的目录 Books,如果你想移动到:
|
||||
|
||||
```
|
||||
/media/aaronkilik/Data/ComputerScience/Documents/Books/LEARN/Linux/Books
|
||||
```
|
||||
|
||||
输入 `bd Books` 会进入:
|
||||
|
||||
```
|
||||
/media/aaronkilik/Data/ComputerScience/Documents/Books
|
||||
```
|
||||
[
|
||||
![Move to 'Books' Directory Quickly](http://www.tecmint.com/wp-content/uploads/2017/03/Move-to-Directory-Quickly.png)
|
||||
][6]
|
||||
|
||||
*快速进入 ‘Books’ 目录*
|
||||
|
||||
另外,在反引号中使用 bd(如 `` `bd <开头几个字母>` ``)会打印出路径而不切换当前目录,所以你可以将 `` `bd <开头几个字母>` `` 与 [ls][7]、[echo][8] 等其他常见的 Linux 命令一起使用。
|
||||
|
||||
在下面的例子中,当前在 `/var/www/html/internship/assets/filetree` 目录中,要打印出绝对路径、详细列出内容、统计目录 html 中所有文件的大小,你不必进入它,只需要键入:
|
||||
|
||||
```
|
||||
$ echo `bd ht`
|
||||
$ ls -l `bd ht`
|
||||
$ du -cs `bd ht`
|
||||
```
|
||||
[
|
||||
![Switch Directory with Listing](http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Directory-with-Listing.png)
|
||||
][9]
|
||||
|
||||
*列出切换的目录*
|
||||
|
||||
要在 Github 上了解更多关于 bd 的信息:[https://github.com/vigneshwaranr/bd][10]
|
||||
|
||||
就是这样了!在本文中,我们展示了使用 bd 程序[在 Linux 中快速切换文件夹][11]的便捷方法。
|
||||
|
||||
通过下面的反馈栏单发表你的看法。此外,你还知道其他类似的工具么,在评论中让我们知道。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili是一名 Linux 和 F.O.S.S 的爱好者,未来的 Linux 系统管理员、网站开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并乐于分享知识。
|
||||
|
||||
---------------
|
||||
|
||||
via: http://www.tecmint.com/bd-quickly-go-back-to-a-linux-parent-directory/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/aaronkili/
|
||||
[1]:http://www.tecmint.com/cd-command-in-linux/
|
||||
[2]:https://linux.cn/article-5983-1.html
|
||||
[3]:http://www.tecmint.com/10-wget-command-examples-in-linux/
|
||||
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Between-Directories-Quickly.png
|
||||
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Quickly-Switch-Directories.png
|
||||
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Move-to-Directory-Quickly.png
|
||||
[7]:http://www.tecmint.com/tag/linux-ls-command/
|
||||
[8]:http://www.tecmint.com/echo-command-in-linux/
|
||||
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Directory-with-Listing.png
|
||||
[10]:https://github.com/vigneshwaranr/bd
|
||||
[11]:https://linux.cn/article-5983-1.html
|
||||
[12]:http://www.tecmint.com/author/aaronkili/
|
||||
[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[14]:http://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,134 @@
|
||||
Python-mode:在 Vim 编辑器中开发 Python 应用的 Vim 插件
|
||||
============================================================
|
||||
|
||||
Python-mode 是一个 Vim 插件,它使你能够在 [Vim 编辑器][1]中更快地利用包括 pylint、rope、pydoc、pyflakes、pep8、autopep8、pep257 和 mccabe 在内的各种库来写 Python 代码,这些库提供了一些编码功能,比如静态分析、特征重构、折叠、补全和文档等。
|
||||
|
||||
**推荐阅读:** [如何用 Bash-Support 插件将 Vim 编辑器打造成编写 Bash 脚本的 IDE][2]
|
||||
|
||||
这个插件包含了所有你在 Vim 编辑器中可以用来开发 Python 应用的特性。
|
||||
|
||||
### Python-mode 的特性
|
||||
|
||||
它包含下面这些值得一提的特性:
|
||||
|
||||
* 支持 Python 2.6+ 至 Python 3.2 版本
|
||||
* 语法高亮
|
||||
* 提供 virtualenv 支持
|
||||
* 支持 Python 式折叠
|
||||
* 提供增强的 Python 缩进
|
||||
* 能够在 Vim 中运行 Python 代码
|
||||
* 能够添加/删除断点
|
||||
* 支持 Python 的 motion 和运算符
|
||||
* 能够在运行的同时检查代码(pylint、pyflakes、pylama ……)
|
||||
* 支持自动修复 PEP8 错误
|
||||
* 允许在 Python 文档中进行搜索
|
||||
* 支持代码重构
|
||||
* 支持强代码补全
|
||||
* 支持定义跳转
|
||||
|
||||
在这篇教程中,我将阐述如何在 Linux 中为 Vim 安装设置 Python-mode,从而在 Vim 编辑器中开发 Python 应用。
|
||||
|
||||
### 如何在 Linux 系统中为 Vim 安装 Python-mode
|
||||
|
||||
首先安装 [Pathogen][3](它使得安装插件超级简单,并且运行文件位于私有目录中),从而更加容易地安装 Python-mode。
|
||||
|
||||
运行下面的命令来获取 `pathogen.vim` 文件和它需要的目录:
|
||||
|
||||
```
|
||||
# mkdir -p ~/.vim/autoload ~/.vim/bundle
|
||||
# curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
|
||||
```
|
||||
|
||||
然后把下面这些内容加入 `~/.vimrc` 文件中:
|
||||
|
||||
```
|
||||
execute pathogen#infect()
|
||||
syntax on
|
||||
filetype plugin indent on
|
||||
```
|
||||
|
||||
安装好 pathogen 以后,你可以像下面这样把 Python-mode 插件放入 `~/.vim/bundle` 目录中:
|
||||
|
||||
```
|
||||
# cd ~/.vim/bundle
|
||||
# git clone https://github.com/klen/python-mode.git
|
||||
```
|
||||
|
||||
然后像下面这样在 Vim 中重建 `helptags`:
|
||||
|
||||
```
|
||||
:helptags
|
||||
```
|
||||
|
||||
你需要启用 `filetype-plugin`(:help filetype-plugin-on)和 `filetype-indent`(:help filetype-indent-on)来使用 Python-mode。
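如果你的 `~/.vimrc` 中还没有这些设置,可以加入下面两行(与前文 pathogen 配置中的 `filetype plugin indent on` 一行等效):

```
filetype plugin on
filetype indent on
```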
|
||||
|
||||
### 在 Debian 和 Ubuntu 中安装 Python-mode
|
||||
|
||||
另一种在 Debian 和 Ubuntu 中安装 Python-mode 的方法是使用 PPA,就像下面这样:
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository https://klen.github.io/python-mode/deb main
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install vim-python-mode
|
||||
```
|
||||
|
||||
如果你遇到消息:“The following signatures couldn’t be verified because the public key is not available”,请运行下面的命令:
|
||||
|
||||
```
|
||||
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B5DF65307000E266
|
||||
```
|
||||
|
||||
现在,使用 `vim-addon-manager` 启用 Python-mode:
|
||||
|
||||
```
|
||||
$ sudo apt install vim-addon-manager
|
||||
$ vim-addons install python-mode
|
||||
```
|
||||
|
||||
### 在 Linux 中定制 Python-mode
|
||||
|
||||
如果想覆盖默认键位绑定,可以在 `.vimrc` 文件中重定义它们,比如:
|
||||
|
||||
```
|
||||
" Override go-to.definition key shortcut to Ctrl-]
|
||||
let g:pymode_rope_goto_definition_bind = "<C-]>"
|
||||
" Override run current python file key shortcut to Ctrl-Shift-e
|
||||
let g:pymode_run_bind = "<C-S-e>"
|
||||
" Override view python doc key shortcut to Ctrl-Shift-d
|
||||
let g:pymode_doc_bind = "<C-S-d>"
|
||||
```
|
||||
|
||||
注意,默认情况下,Python-mode 使用 Python 2 进行语法检查。你可以在 `.vimrc` 文件中加入下面这行内容,从而启用 Python 3 语法检查。
|
||||
|
||||
```
|
||||
let g:pymode_python = 'python3'
|
||||
```
|
||||
|
||||
你可以在 Python-mode 的 GitHub 仓库找到更多的配置选项: [https://github.com/python-mode/python-mode][4]
|
||||
|
||||
这就是全部内容了。在本教程中,我向你们展示了如何在 Linux 中使用 Python-mode 来配置 Vim 。请记得通过下面的反馈表来和我们分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是一个 Linux 和 F.O.S.S 爱好者、Linux 系统管理员、网络开发人员,现在也是 TecMint 的内容创作者,他喜欢和电脑一起工作,坚信共享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/python-mode-a-vim-editor-plugin/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/vi-editor-usage/
|
||||
[2]:https://linux.cn/article-8467-1.html
|
||||
[3]:https://github.com/tpope/vim-pathogen
|
||||
[4]:https://github.com/python-mode/python-mode
|
||||
[5]:https://www.tecmint.com/author/aaronkili/
|
||||
[6]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[7]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -1,14 +1,13 @@
|
||||
如何在 Ubuntu 中安装 Discord
|
||||
如何在 Ubuntu 中安装语音聊天工具 Discord
|
||||
============================================================
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2017/04/discord-feat.jpg "How to Install Discord on Ubuntu Linuxs")
|
||||
|
||||
Discord 是一个非常受欢迎的文字和语音聊天程序。虽然开始时主要面向游戏玩家,但它几乎获得了所有人的了广泛青睐。
|
||||
|
||||
Discord 是一个非常受欢迎的文字和语音聊天程序。虽然一开始主要面向游戏玩家,但它几乎获得了所有人的广泛青睐。
|
||||
Discord 不仅仅是一个很好的聊天客户端。当你安装它时,你还可以获得其强大的服务端功能,强力而自足。游戏玩家和非玩家都可以在几分钟内开启自己的私人聊天服务,这使 Discord 成为团队、公会和各种社区的明显选择。
|
||||
|
||||
Discord 不仅仅是一个很好的聊天客户端。当你安装它时,你还可以获得其强大的服务端功能,所需的一切一应俱全。游戏玩家和非玩家都可以在几分钟内开启自己的私人聊天服务,这使 Discord 成为团队、公会和各种社区的明显选择。
|
||||
|
||||
Linux 用户经常在游戏世界中被遗忘。但 Discord 并不是这样。它的开发人员也在 Linux 下积极构建并维护其流行聊天平台。Ubuntu 用户拥有更好的功能。Discord 捆绑在方便的 Debian/Ubuntu .deb 包中。
|
||||
Linux 用户经常被游戏世界遗忘。但 Discord 并不是这样。它的开发人员也在 Linux 下积极构建并维护其流行聊天平台。Ubuntu 用户甚至拥有更好的待遇,Discord 捆绑在方便的 Debian/Ubuntu .deb 包中。
|
||||
|
||||
### 获取并安装软件包
|
||||
|
||||
@ -46,7 +45,7 @@ sudo apt install libgconf-2-4 libappindicator1
|
||||
|
||||
### 命令行安装
|
||||
|
||||
懒惰的 Linux 熟手并不在意花哨的 GUI 工具。如果你是这个阵营的人,那么你有一个更直接的命令行选项。
|
||||
“懒惰”的 Linux 熟手并不在意花哨的 GUI 工具。如果你是这个阵营的人,那么你有一个更直接的命令行选项。
|
||||
|
||||
首先,打开一个终端并进入你的下载目录。在那里可以使用 `wget` 直接下载 .deb 包。
|
||||
|
||||
@ -55,7 +54,7 @@ cd ~/Downloads
|
||||
wget -O discord-0.0.1.deb "https://discordapp.com/api/download?platform=linux&format=deb"
|
||||
```
|
||||
|
||||
下载完成后,你可以使用 dpkg 直接安装 .deb 软件包。运行下面的命令:
|
||||
下载完成后,你可以使用 `dpkg` 直接安装 .deb 软件包。运行下面的命令:
|
||||
|
||||
```
|
||||
sudo dpkg -i discord-0.0.1.deb
|
||||
@ -69,19 +68,19 @@ sudo dpkg -i discord-0.0.1.deb
|
||||
|
||||
![Login to Discord on Ubuntu](https://www.maketecheasier.com/assets/uploads/2017/04/discord-login.jpg "Login to Discord on Ubuntu")
|
||||
|
||||
首次启动,你需要创建一个帐户或者登录。做任意一个你需要做的。
|
||||
首次启动时,根据你的需求创建一个帐户或者登录。
|
||||
|
||||
![Discord running on Ubuntu Linux](https://www.maketecheasier.com/assets/uploads/2017/04/discord-running.jpg "Discord running on Ubuntu Linux")
|
||||
|
||||
登录后,你就进入 Discord 了。它会提供一些介绍教程和建议。你可以直接略过开始尝试。欢迎进入你新的 Linux 聊天体验!
|
||||
登录后,你就进入 Discord 了。它会提供一些介绍教程和建议。你可以直接略过并开始尝试。欢迎进入你新的 Linux 聊天体验!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/install-discord-ubuntu/
|
||||
|
||||
作者:[ Nick Congleton][a]
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
132
published/20170501 Containers running Containers.md
Normal file
@ -0,0 +1,132 @@
|
||||
LinuxKit:在容器中运行容器
|
||||
============================================================
|
||||
|
||||
一些令人振奋的消息引发了我对今年 DockerCon 的兴趣,在这次会议中,无可争议的容器巨头公司 Docker 发布了一个新的操作系统:LinuxKit。
|
||||
|
||||
这家容器巨头宣布的是一个灵活的、可扩展的操作系统,而为了可移植性,系统服务也是运行在容器之中。甚至,令人惊讶的是,就连 Docker 运行时环境也是运行在容器内!
|
||||
|
||||
在本文中,我们将简要介绍一下 LinuxKit 中所承诺的内容,以及如何自己尝试一下这个不断精简、优化的操作系统。
|
||||
|
||||
### 少即是多
|
||||
|
||||
不可否认的是,用户一直在寻找一个可以运行他们的微服务的精简版本的 Linux 。通过容器化,你会尽可能地最小化每个应用程序,使其成为一个适合于运行在其自身容器内的独立进程。但是,由于你需要对那些驻留容器的宿主机出现的问题进行修补,因此你不断地在宿主机间移动容器。实际上,如果没有像 Kubernetes 或 Docker Swarm 这样的编排系统,容器编排几乎总是会导致停机。
|
||||
|
||||
不用说,这只是让你保持操作系统尽可能小的原因之一。
|
||||
|
||||
我曾多次在不同场合重复过的最喜爱的名言,来自荷兰的天才程序员 Wietse Zweitze Venema,他为我们提供了重要的 Email 软件 Postfix 和 TCP Wrappers 等知名软件。
|
||||
|
||||
[Postfix 网站][10]上指出,即使你编码时和 Wietse 一样小心,“每 1000 行[你]就会在 Postfix 中引入一个额外的 bug”。从我的专业的 DevSecOps 角度看,这里提到的 “bug” 可以大致看作安全问题。
|
||||
|
||||
从安全的角度来看,正是由于这个原因,代码世界中“少即是多”。简单地说,使用较少的代码行有很多好处,即安全性、管理时间和性能。首先,这意味着安全漏洞较少,更新软件包的时间更短,启动时间更快。
|
||||
|
||||
### 深入观察
|
||||
|
||||
考虑下在容器内部运行你的程序。
|
||||
|
||||
一个好的起点是 [Alpine Linux][1],它是一个苗条、精简的操作系统,通常比那些笨重的系统更受喜欢,如 Ubuntu 或 CentOS 等。Alpine 还提供了一个 miniroot 文件系统(用于容器内),最近我看到的大小是惊人的 1.8M。事实上,这个完整的 Linux 操作系统下载后有 80M。
|
||||
|
||||
如果你决定使用 Alpine Linux 作为 Docker 基础镜像,那么你可以在 Docker Hub 上[找到][2]一个,它将其描述为:“一个基于 Alpine Linux 的最小 Docker 镜像,具有完整的包索引,大小只有 5 MB!”
|
||||
|
||||
据说无处不在的 “Windows 开始菜单” 文件也是大致相同的大小!我没有验证过,也不会进一步评论。
|
||||
|
||||
讲真,希望你去了解一下这个创新的类 Unix 操作系统(如 Alpine Linux)的强大功能。
|
||||
|
||||
### 锁定一切
|
||||
|
||||
再说一点,Alpine Linux 是基于 [BusyBox][3] 的(这并不令人惊讶),BusyBox 是一套著名的将常用 Linux 命令打包在一起的工具集,许多人不会意识到他们的宽带路由器、智能电视,当然还有他们家庭中的物联网设备里就有它。
|
||||
|
||||
Alpine Linux 站点的“[关于][4]”页面的评论中指出:
|
||||
|
||||
> “Alpine Linux 的设计考虑到安全性。内核使用 grsecurity/PaX 的非官方移植进行了修补,所有用户态二进制文件都编译为具有堆栈保护的地址无关可执行文件(PIE)。 这些主动安全特性可以防止所有类别的零日漏洞和其它漏洞利用。”
|
||||
|
||||
换句话说,这些捆绑在 Alpine Linux 中的精简二进制文件提供的功能通过了那些行业级安全工具筛选,以缓解缓冲区溢出攻击所带来的危害。
|
||||
|
||||
### 多出一只袜子
|
||||
|
||||
你可能会问,为什么当我们谈及 Docker 的新操作系统时,容器的内部结构很重要?
|
||||
|
||||
那么,你可能已经猜到,当涉及容器时,他们的目标是精简。除非绝对必要,否则不包括任何东西。所以你可以放心地清理橱柜、花园棚子、车库和袜子抽屉了。
|
||||
|
||||
Docker 确实因为其先见之明而赢得了声望。据报道,2 月初,Docker 聘请了 Alpine Linux 的主要推动者 Natanael Copa,他帮助将默认的官方镜像库从 Ubuntu 切换到 Alpine。Docker Hub 因新近精简的镜像所节省的带宽而受到了赞誉。
|
||||
|
||||
并且最新的情况是,这项工作将与最新的基于容器的操作系统相结合:Docker 的 LinuxKit。
|
||||
|
||||
要说清楚的是,LinuxKit 并非要取代 Alpine,而是位于容器下层,并作为一个完整的操作系统出现,你可以高兴地启动你的运行时守护程序(在这里,就是生成你的容器的 Docker 守护程序)。
|
||||
|
||||
### 金发女郎的 Atomic
|
||||
|
||||
经过精心调试的宿主机绝对不是一件新事物(以前提到过嵌入式 Linux 的家用设备)。在过去几十年中一直在优化 Linux 的天才在某个时候意识到底层的操作系统才是快速生产含有大量容器主机的关键。
|
||||
|
||||
例如,强大的红帽长期以来一直在出售已经贡献给 [Project Atomic][6] 的 [红帽 Atomic][5]。后者继续解释:
|
||||
|
||||
> “基于 Red Hat Enterprise Linux 或 CentOS 和 Fedora 项目的成熟技术,Atomic Host 是一个轻量级的、不可变的平台,其设计目的仅在于运行容器化应用程序。”
|
||||
|
||||
将底层的、不可变的 Atomic OS 作为红帽的 OpenShift PaaS(平台即服务)产品推荐有一个很好理由:它最小化、高性能、尖端。
|
||||
|
||||
### 特性
|
||||
|
||||
在 Docker 关于 LinuxKit 的公告中,“少即是多”的口号是显而易见的。实现 LinuxKit 愿景的项目显然不是小事业,它由 Docker 老将、[Unikernel][7] 的主管 Justin Cormack 指导,并与 HPE、Intel、ARM、IBM 和 Microsoft 合作。LinuxKit 可以运行在从大型机到基于物联网的冰箱等各种设备之上。
|
||||
|
||||
LinuxKit 的可配置性、可插拔性和可扩展性将吸引许多寻求建立其服务基准的项目。通过开源项目,Docker 明智地邀请每个人全身心地投入其功能开发,随着时间的推移,它会像好的奶酪那样成熟。
|
||||
|
||||
### 布丁作证
|
||||
|
||||
按照该发布消息中所承诺的,那些急于使用新系统的人不用再等待了。如果你准备着手 LinuxKit,你可以从 GitHub 中开始:[LinuxKit][11]。
|
||||
|
||||
在 GitHub 页面上有关于如何启动和运行一些功能的指导。
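作为起步的一个示意(具体构建步骤请以仓库中的 README 为准):

```
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit
make
```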
|
||||
|
||||
时间允许的话我准备更加深入研究 LinuxKit。对有争议的 Kubernetes 与 Docker Swarm 编排功能对比会是有趣的尝试。此外,我还想看到内存占用、启动时间和磁盘空间使用率的基准测试。
|
||||
|
||||
如果该承诺可靠,则作为容器运行的可插拔系统服务是构建操作系统的迷人方式。Docker 在[博客][12]中提到:“因为 LinuxKit 是原生容器,它的尺寸非常小 - 只有 35MB,引导时间也非常短。所有系统服务都是容器,这意味着可以删除或替换所有的内容。”
|
||||
|
||||
我不知道你觉得怎么样,但这非常符合我的胃口。
|
||||
|
||||
### 呼叫警察
|
||||
|
||||
除了我站在 DevSecOps 角度看到的功能,我会看看其对安全的承诺。
|
||||
|
||||
Docker 在他们的博客上引用来自 NIST([国家标准与技术研究所][8])的话:
|
||||
|
||||
> “安全性是最高目标,这与 NIST 在其《应用程序容器安全指南》草案中说明的保持一致:‘使用容器专用操作系统而不是通用操作系统来减少攻击面。当使用专用容器操作系统时,攻击面通常比通用操作系统小得多,因此攻击和危及专用容器操作系统的机会较少。’”
|
||||
|
||||
可能最重要的容器到主机和主机到容器的安全创新是将系统容器(系统服务)完全地沙箱化到自己的非特权空间中,而只给它们需要的外部访问。
|
||||
|
||||
通过<ruby>内核自我保护项目<rt>Kernel Self Protection Project</rt></ruby>([KSPP][9])的协作来实现这一功能,我很满意 Docker 开始专注于一些非常值得做的东西上。对于那些不熟悉 KSPP 的人而言,它存在的理由如下:
|
||||
|
||||
> “启动这个项目的假设是内核 bug 的存在时间很长,内核必须设计成可以防止这些缺陷的危害。”
|
||||
|
||||
KSPP 网站进一步表态:
|
||||
|
||||
> “这些努力非常重要并还在进行,但如果我们要保护我们的十亿 Android 手机、我们的汽车、国际空间站,还有其他运行 Linux 的产品,我们必须在上游的 Linux 内核中建立积极的防御性技术。我们需要内核安全地出错,而不只是安全地运行。”
|
||||
|
||||
而且,如果 Docker 最初只是在 LinuxKit 前进了一小步,那么随着时间的推移,成熟度带来的好处可能会在容器领域中取得长足的进步。
|
||||
|
||||
### 离终点还远
|
||||
|
||||
像 Docker 这样不断发展壮大的巨头,无论在哪个方向上取得巨大的飞跃,都将会给用户和其他软件项目带来益处。
|
||||
|
||||
我鼓励所有对 Linux 感兴趣的人密切关注这个领域。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.devsecops.cc/devsecops/containers.html
|
||||
|
||||
作者:[Chris Binnie][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.devsecops.cc/
|
||||
[1]:https://alpinelinux.org/downloads/
|
||||
[2]:https://hub.docker.com/_/alpine
|
||||
[3]:https://busybox.net/
|
||||
[4]:https://www.alpinelinux.org/about/
|
||||
[5]:https://www.redhat.com/en/resources/red-hat-enterprise-linux-atomic-host
|
||||
[6]:http://www.projectatomic.io/
|
||||
[7]:https://en.wikipedia.org/wiki/Unikernel
|
||||
[8]:https://www.nist.gov/
|
||||
[9]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project
|
||||
[10]:http://www.postfix.org/TLS_README.html
|
||||
[11]:https://github.com/linuxkit/linuxkit
|
||||
[12]:https://blog.docker.com/2017/04/introducing-linuxkit-container-os-toolkit
|
@ -0,0 +1,159 @@
|
||||
在 Ubuntu 16.04 中安装支持 CPU 和 GPU 的 Google TensorFlow 神经网络软件
|
||||
============================================================
|
||||
|
||||
TensorFlow 是用于机器学习任务的开源软件。它的创建者 Google 希望提供一个强大的工具以帮助开发者探索和建立基于机器学习的应用,所以他们在去年作为开源项目发布了它。TensorFlow 是一个非常强大的工具,专注于一种称为<ruby>深层神经网络<rt>deep neural network</rt></ruby>(DNN)的神经网络。
|
||||
|
||||
深层神经网络被用来执行复杂的机器学习任务,例如图像识别、手写识别、自然语言处理、聊天机器人等等。这些神经网络被训练学习其所要执行的任务。由于训练所需的计算是非常巨大的,在大多数情况下需要 GPU 支持,这时 TensorFlow 就派上用场了。启用了 GPU 并安装了支持 GPU 的软件,那么训练所需的时间就可以大大减少。
|
||||
|
||||
本教程可以帮助你安装只支持 CPU 的和同时支持 GPU 的 TensorFlow。要使用带有 GPU 支持的 TensorFLow,你必须要有一块支持 CUDA 的 Nvidia GPU。CUDA 和 CuDNN(Nvidia 的计算库)的安装有点棘手,本指南会提供在实际安装 TensorFlow 之前一步步安装它们的方法。
|
||||
|
||||
Nvidia CUDA 是一个 GPU 加速库,它已经为标准神经网络中用到的标准例程调优过。CuDNN 是一个用于 GPU 的调优库,它负责 GPU 性能的自动调整。TensorFlow 同时依赖这两者用于训练并运行深层神经网络,因此它们必须在 TensorFlow 之前安装。
|
||||
|
||||
需要指出的是,那些不希望安装支持 GPU 的 TensorFlow 的人,你可以跳过以下所有的步骤并直接跳到:“步骤 5:安装只支持 CPU 的 TensorFlow”。
|
||||
|
||||
关于 TensorFlow 的介绍可以在[这里][10]找到。
|
||||
|
||||
### 1、 安装 CUDA
|
||||
|
||||
首先,在[这里][11]下载用于 Ubuntu 16.04 的 CUDA 库。此文件非常大(2GB),因此也许会花费一些时间下载。
|
||||
|
||||
下载的文件是 “.deb” 包。要安装它,运行下面的命令:
|
||||
|
||||
```
|
||||
sudo dpkg -i cuda-repo-ubuntu1604-8-0-local_8.0.44-1_amd64.deb
|
||||
```
|
||||
|
||||
[
|
||||
![Install CUDA](https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/image1.png)
|
||||
][12]
|
||||
|
||||
下面的的命令会安装所有的依赖,并最后安装 cuda 工具包:
|
||||
|
||||
```
|
||||
sudo apt install -f
|
||||
sudo apt update
|
||||
sudo apt install cuda
|
||||
```
|
||||
|
||||
如果成功安装,你会看到一条 “successfully installed” 的消息。安装完成后,你可以看到类似下面的输出:
|
||||
|
||||
[
|
||||
![Install CUDA with apt](https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/image2.png)
|
||||
][13]
|
||||
|
||||
### 2、安装 CuDNN 库
|
||||
|
||||
CuDNN 下载需要花费一些功夫。Nvidia 没有直接提供下载文件(虽然它是免费的)。通过下面的步骤获取 CuDNN。
|
||||
|
||||
1. 点击[此处][8]进入 Nvidia 的注册页面并创建一个帐户。第一页要求你输入你的个人资料,第二页会要求你回答几个调查问题。如果你不知道所有答案也没问题,你可以随便选择一个选项。
|
||||
2. 通过前面的步骤,Nvidia 会向你的邮箱发送一个激活链接。在你激活之后,直接进入[这里][9]的 CuDNN 下载链接。
|
||||
3. 登录之后,你需要填写另外一份类似的调查。随机勾选复选框,然后点击调查底部的 “proceed to Download”,在下一页我们点击同意使用条款。
|
||||
4. 最后,在下拉菜单中点击 “Download cuDNN v5.1 (Jan 20, 2017), for CUDA 8.0”,然后下载这两个文件:
|
||||
* [cuDNN v5.1 Runtime Library for Ubuntu14.04 (Deb)][6]
|
||||
* [cuDNN v5.1 Developer Library for Ubuntu14.04 (Deb)][7]
|
||||
|
||||
注意:即使上面说的是用于 Ubuntu 14.04 的库,它也适用于 16.04。
|
||||
|
||||
现在你已经同时有 CuDNN 的两个文件了,是时候安装它们了!在包含这些文件的文件夹内运行下面的命令:
|
||||
|
||||
```
|
||||
sudo dpkg -i libcudnn5_5.1.5-1+cuda8.0_amd64.deb
|
||||
sudo dpkg -i libcudnn5-dev_5.1.5-1+cuda8.0_amd64.deb
|
||||
```
|
||||
|
||||
下面的图片展示了这些命令的输出:
|
||||
|
||||
[
|
||||
![Install the CuDNN library](https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/image3.png)
|
||||
][14]
|
||||
|
||||
### 3、 在 bashrc 中添加安装位置
|
||||
|
||||
安装位置应该被添加到 bashrc 文件中,以便系统下一次知道如何找到这些用于 CUDA 的文件。使用下面的命令打开 bashrc 文件:
|
||||
|
||||
```
|
||||
sudo gedit ~/.bashrc
|
||||
```
|
||||
|
||||
文件打开后,添加下面两行到文件的末尾:
|
||||
|
||||
```
|
||||
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
|
||||
export CUDA_HOME=/usr/local/cuda
|
||||
```
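添加之后,让改动在当前会话中生效(或者重新打开一个终端):

```
source ~/.bashrc
```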
|
||||
|
||||
### 4、 安装带有 GPU 支持的 TensorFlow
|
||||
|
||||
这步我们将安装带有 GPU 支持的 TensorFlow。如果你使用的是 Python 2.7,运行下面的命令:
|
||||
|
||||
```
|
||||
pip install tensorflow-gpu
|
||||
```
|
||||
|
||||
如果安装了 Python 3.x,使用下面的命令:
|
||||
|
||||
```
|
||||
pip3 install tensorflow-gpu
|
||||
```
|
||||
|
||||
安装完后,你会看到一条 “successfully installed” 的消息。现在,剩下要测试的是是否已经正确安装。打开终端并输入下面的命令测试:
|
||||
|
||||
```
|
||||
python
|
||||
import tensorflow as tf
|
||||
```
|
||||
|
||||
你应该会看到类似下面图片的输出。在图片中你可以观察到 CUDA 库已经成功打开了。如果有任何错误,消息会提示说无法打开 CUDA 甚至无法找到模块。以防你遗漏了上面的某一步,请仔细重做教程的每一步。
|
||||
|
||||
[
|
||||
![Install TensorFlow with GPU support](https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/image4.png)
|
||||
][15]
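若想进一步验证,也可以运行一个经典的最小测试脚本(一个示意,假设使用的是当时的 TensorFlow 1.x API):

```
import tensorflow as tf

# 构造一个常量操作并在会话中执行,以验证安装可用
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
```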
|
||||
|
||||
### 5、 安装只支持 CPU 的 TensorFlow
|
||||
|
||||
注意:这步是对那些没有 GPU 或者没有 Nvidia GPU 的人而言的。其他人请忽略这步!!
|
||||
|
||||
安装只支持 CPU 的 TensorFlow 非常简单。使用下面两个命令:
|
||||
|
||||
```
|
||||
pip install tensorflow
|
||||
```
|
||||
|
||||
如果你有 python 3.x,使用下面的命令:
|
||||
|
||||
```
|
||||
pip3 install tensorflow
|
||||
```
|
||||
|
||||
是的,就是这么简单!
|
||||
|
||||
安装指南至此结束,你现在可以开始构建深度学习应用了。如果你刚刚起步,你可以在[这里][16]看下适合初学者的官方教程。如果你正在寻找更多的高级教程,你可以在[这里][17]学习了解如何设置可以高精度识别上千个物体的图片识别系统/工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/
|
||||
|
||||
作者:[Akshay Pai][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/
|
||||
[1]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/#-install-cuda
|
||||
[2]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/#-install-the-cudnn-library
|
||||
[3]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/#-add-the-installation-location-to-bashrc-file
|
||||
[4]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/#-install-tensorflow-with-gpu-support
|
||||
[5]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/#-install-tensorflow-with-only-cpu-support
|
||||
[6]:https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v5.1/prod_20161129/8.0/libcudnn5_5.1.10-1+cuda8.0_amd64-deb
|
||||
[7]:https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v5.1/prod_20161129/8.0/libcudnn5-dev_5.1.10-1+cuda8.0_amd64-deb
|
||||
[8]:https://developer.nvidia.com/group/node/873374/subscribe/og_user_node
|
||||
[9]:https://developer.nvidia.com/rdp/form/cudnn-download-survey
|
||||
[10]:http://sourcedexter.com/what-is-tensorflow/
|
||||
[11]:https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb
|
||||
[12]:https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/big/image1.png
|
||||
[13]:https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/big/image2.png
|
||||
[14]:https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/big/image3.png
|
||||
[15]:https://www.howtoforge.com/images/installing_tensorflow_machine_learning_software_for_cpu_and_gpu_on_ubuntu_1604/big/image4.png
|
||||
[16]:https://www.tensorflow.org/get_started/mnist/beginners
|
||||
[17]:https://www.tensorflow.org/tutorials/image_recognition
|
@ -0,0 +1,82 @@
|
||||
T-UI Launcher:将你的 Android 设备变成 Linux 命令行界面
|
||||
============================================================
|
||||
|
||||
不管你是一位命令行大师,还是仅仅不想让你的朋友和家人随意使用你的 Android 设备,都可以看看 T-UI Launcher 这个程序。Unix/Linux 用户一定会喜欢它。
|
||||
|
||||
T-UI Launcher 是一个免费的轻量级 Android 程序,具有类似 Linux 的命令行界面,它可将你的普通 Android 设备变成一个完整的命令行界面。对于喜欢使用基于文本的界面的人来说,这是一个简单、快速、智能的启动器。
|
||||
|
||||
#### T-UI Launcher 功能
|
||||
|
||||
下面是一些重要的功能:
|
||||
|
||||
* 第一次启动后展示快速使用指南。
|
||||
* 快速且可完全定制。
|
||||
* 提供自动补全菜单及快速、强大的别名系统。
|
||||
* 此外,提供预测建议,并提供有用的搜索功能。
|
||||
|
||||
它是免费的,你可以从 Google Play 商店[下载并安装它][1],接着在 Android 设备中运行。
|
||||
|
||||
安装完成后,第一次启动时你会看到一个快速指南。阅读完成之后,你可以如下面那样使用简单的命令开始使用了。
|
||||
|
||||
[![T-UI Commandline Help Guide](https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Help.jpg)][2]
|
||||
|
||||
*T-UI 命令行帮助指南*
|
||||
|
||||
要启动一个 app,只要输入几个字母,自动补全功能会在屏幕中展示可用的 app。接着点击你想打开的程序。
|
||||
|
||||
```
|
||||
$ Telegram ### 启动 telegram
|
||||
$ WhatsApp ### 启动 whatsapp
|
||||
$ Chrome ### 启动 chrome
|
||||
```
|
||||
|
||||
[![T-UI Commandline Usage](https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Usage.jpg)][3]
|
||||
|
||||
*T-UI 命令行使用*
|
||||
|
||||
要浏览你的 Android 设备状态(电池电量、wifi、移动数据),输入:
|
||||
|
||||
```
|
||||
$ status
|
||||
```
|
||||
|
||||
[![Android Phone Status](https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Status.jpg)][4]
|
||||
|
||||
*Android 电话状态*
|
||||
|
||||
其它有用的命令:
|
||||
|
||||
```
|
||||
$ uninstall telegram ### 卸载 telegram
|
||||
$ search [google, playstore, youtube, files] ### 搜索在线应用或本地文件
|
||||
$ wifi ### 打开或关闭 WIFI
|
||||
$ cp Downloads/* Music ### 从 Download 文件夹复制所有文件到 Music 文件夹
|
||||
$ mv Downloads/* Music ### 从 Download 文件夹移动所有文件到 Music 文件夹
|
||||
```
|
||||
|
||||
就是这样了!在本篇中,我们展示了一个带有类似 Linux CLI(命令行界面)的简单而有用的 Android 程序,它可以将你的常规 Android 设备变成一个完整的命令行界面。尝试一下并在下面的评论栏分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/t-ui-launcher-turns-android-device-into-linux-cli/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://play.google.com/store/apps/details?id=ohi.andre.consolelauncher
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Help.jpg
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Usage.jpg
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2017/05/T-UI-Commandline-Status.jpg
|
||||
[5]:https://www.tecmint.com/author/aaronkili/
|
||||
[6]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[7]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,125 @@
|
||||
4 个拥有绝佳命令行界面的终端程序
|
||||
============================================================
|
||||
|
||||
> 让我们来看几个精心设计的 CLI 程序,以及如何解决一些可发现性问题。
|
||||
|
||||
![4 awesome command-line tools](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/code_computer_development_programming.png?itok=wMspQJcO "4 awesome command-line tools")
|
||||
|
||||
> 图片提供: opensource.com
|
||||
|
||||
在本文中,我会指出命令行界面的<ruby>可发现性<rt>discoverability</rt></ruby>缺点以及克服这些问题的几种方法。
|
||||
|
||||
我喜欢命令行。我第一次接触命令行是在 1997 年的 DOS 6.2 上。我学习了各种命令的语法,并向别人炫耀如何在目录中列出隐藏的文件(`attrib`)。我会每次仔细检查命令中的每个字符。当我犯了一个错误,我会从头开始重新输入命令。直到有一天,有人向我展示了如何使用向上和向下箭头按键遍历命令行历史,我被震惊了。
|
||||
|
||||
后来当我接触到 Linux 时,让我感到惊喜的是,上下箭头保留了它们遍历历史记录的能力。我仍然很仔细地打字,但是现在,我了解如何盲打,并且我能打得很快,每分钟可以达到 55 个单词的速度。接着有人向我展示了 tab 补完,再一次改变了我的生活。
|
||||
|
||||
在 GUI 应用程序中,菜单、工具提示和图标用于向用户展示功能。而命令行缺乏这种能力,但是有办法克服这个问题。在深入解决方案之前,我会来看看几个有问题的 CLI 程序:
|
||||
|
||||
**1、 MySQL**
|
||||
|
||||
首先让我们看看我们所钟爱的 MySQL REPL。我经常习惯性地在输入 `SELECT * FROM` 之后按下 `Tab` 键。MySQL 会询问我是否想看到所有的 871 种可能性。我的数据库中绝对没有 871 张表。如果我选择 `yes`,它会显示一堆 SQL 关键字、表、函数等。(LCTT 译注:REPL —— Read-Eval-Print Loop,交互式开发环境)
|
||||
|
||||
![MySQL gif](https://opensource.com/sites/default/files/mysql.gif)
|
||||
|
||||
**2、 Python**
|
||||
|
||||
我们来看另一个例子,标准的 Python REPL。我开始输入命令,然后习惯按 `Tab` 键。瞧,插入了一个 `Tab` 字符,考虑到 `Tab` 在 Python 源代码中没有特定作用,这是一个问题。
|
||||
|
||||
![Python gif](https://opensource.com/sites/default/files/python.gif "Python gif")
|
||||
|
||||
### 好的用户体验
|
||||
|
||||
让我们看下设计良好的 CLI 程序以及它们是如何克服这些可发现性问题的。
|
||||
|
||||
#### 自动补全: bpython
|
||||
|
||||
[Bpython][15] 是对 Python REPL 的一个很好的替代。当我运行 bpython 并开始输入时,建议会立即出现。我不用通过特殊的键盘绑定触发它,甚至不用按下 `Tab` 键。
|
||||
|
||||
![bpython gif](https://opensource.com/sites/default/files/bpython.gif "bpython gif")
|
||||
|
||||
当我出于习惯按下 `Tab` 键时,它会用列表中的第一个建议补全。这是给 CLI 设计带来可发现性的一个很好的例子。
|
||||
|
||||
bpython 的另一个方面是可以展示模块和函数的文档。当我输入一个函数的名字时,它会显示这个函数附带的签名以及文档字符串。这是一个多么令人难以置信的周到设计啊。
|
||||
|
||||
#### 上下文感知补全:mycli
|
||||
|
||||
[mycli][16] 是默认的 MySQL 客户端的现代替代品。这个工具对 MySQL 来说就像 bpython 之于标准 Python REPL 一样。mycli 将在你输入时自动补全关键字、表名、列和函数。
|
||||
|
||||
补全建议是上下文相关的。例如,在 `SELECT * FROM` 之后,只有来自当前数据库的表才会列出,而不是所有可能的关键字。
|
||||
|
||||
![mycli gif](https://opensource.com/sites/default/files/mycli.gif "mycli gif")
|
||||
|
||||
#### 模糊搜索和在线帮助: pgcli
|
||||
|
||||
如果您正在寻找 PostgreSQL 版本的 mycli,请看看 [pgcli][17]。 与 mycli 一样,它提供了上下文感知的自动补全。菜单中的项目使用模糊搜索缩小范围。模糊搜索允许用户输入整体字符串中的任意子字符串来尝试找到正确的匹配项。
|
||||
|
||||
![pgcli gif](https://opensource.com/sites/default/files/pgcli.gif "pgcli gif")
|
||||
|
||||
pgcli 和 mycli 在其 CLI 中都实现了这个功能。斜杠命令的文档也作为补全菜单的一部分展示。
|
||||
|
||||
#### 可发现性: fish
|
||||
|
||||
在传统的 Unix shell(Bash、zsh 等)中,有一种搜索历史记录的方法,此搜索模式由 `Ctrl-R` 触发。当需要再次调用你上周运行过的命令(例如 **ssh** 或 **docker**)时,这是一个非常有用的工具。一旦你知道了这个功能,就会发现自己经常使用它。
|
||||
|
||||
如果这个功能是如此有用,那为什么不每次都搜索呢?这正是 [**fish** shell][18] 所做的。一旦你开始输入命令,**fish** 将开始建议与历史记录类似的命令。然后,你可以按右箭头键接受该建议。
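如果暂时不想换用 fish,也可以在 bash 中得到一个粗略的近似:借助 readline 的历史前缀搜索。下面只是一个基于 readline 的示意配置,并不是 fish 那样的实时自动建议:

```
# 临时启用:让上下箭头按已输入的前缀搜索历史
# (把对应条目写入 ~/.inputrc 可永久生效)
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'
```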
|
||||
|
||||
### 命令行规矩
|
||||
|
||||
我已经回顾了一些解决可发现性问题的创新方法,此外还有一些基本的命令行功能,应该成为每个 REPL 基础实现的一部分(列表之后附有一个简单的 bash 配置示例):
|
||||
|
||||
* 确保 REPL 有可通过箭头键调用的历史记录。确保会话之间的历史持续存在。
|
||||
* 提供在编辑器中编辑命令的方法。不管你的补全是多么棒,有时用户只需要一个编辑器来制作完美的命令来删除生产环境中所有的表。
|
||||
* 将输出用管道送到分页器(`pager`)中。不要让用户自己滚动终端。哦,还要为分页器设置一个合理的默认值。(记得添加处理颜色代码的选项。)
|
||||
* 提供一种通过 `Ctrl-R` 界面或者 fish 式的自动搜索来搜索历史记录的方法。
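下面是一个最小的 bash 配置示意,大致对应上面几条“规矩”(假设使用 bash 和 less,其中的数值仅为示例):

```
# 放入 ~/.bashrc
export HISTSIZE=10000       # 内存中保留的历史条数
export HISTFILESIZE=20000   # 历史文件的最大条数,让历史在会话之间持续存在
shopt -s histappend         # 退出时追加而不是覆盖历史文件
export PAGER='less -R'      # 合理的分页器默认值,-R 用来正确处理颜色代码
export EDITOR=vim           # 在 bash 中按 Ctrl-X Ctrl-E 即可在编辑器中编辑当前命令
```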
|
||||
|
||||
### 总结
|
||||
|
||||
在第 2 节中,我将介绍可以让你在 Python 中实现这些技术的特定库。同时,请查看下面这些精心设计的命令行应用程序:
|
||||
|
||||
* [bpython][5] 或 [ptpython][6]:具有自动补全支持的 Python REPL。
|
||||
* [http-prompt][7]:交互式 HTTP 客户端。
|
||||
* [mycli][8]:MySQL、MariaDB 和 Percona 的命令行界面,具有自动补全和语法高亮。
|
||||
* [pgcli][9]:具有自动补全和语法高亮,是对 [psql][10] 的替代工具。
|
||||
* [wharfee][11]:用于管理 Docker 容器的 shell。
|
||||
|
||||
_了解更多:Amjith Ramanujam 于 5 月 20 日在美国俄勒冈州波特兰市举办的 [PyCon US 2017][12] 上的演讲“[神奇的命令行工具][13]”。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Amjith Ramanujam - Amjith Ramanujam 是 pgcli 和 mycli 的创始人。人们认为它们很酷,他表示笑纳赞誉。他喜欢用 Python、Javascript 和 C 编程。他喜欢编写简单易懂的代码,它们有时甚至会成功。
|
||||
|
||||
-----------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/4-terminal-apps
|
||||
|
||||
作者:[Amjith Ramanujam][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/amjith
|
||||
[1]:https://opensource.com/tags/python?src=programming_resource_menu
|
||||
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
|
||||
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
|
||||
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
|
||||
[5]:http://bpython-interpreter.org/
|
||||
[6]:http://github.com/jonathanslenders/ptpython/
|
||||
[7]:https://github.com/eliangcs/http-prompt
|
||||
[8]:http://mycli.net/
|
||||
[9]:http://pgcli.com/
|
||||
[10]:https://www.postgresql.org/docs/9.2/static/app-psql.html
|
||||
[11]:http://wharfee.com/
|
||||
[12]:https://us.pycon.org/2017/
|
||||
[13]:https://us.pycon.org/2017/schedule/presentation/518/
|
||||
[14]:https://opensource.com/article/17/5/4-terminal-apps?rate=3HL0zUQ8_dkTrinonNF-V41gZvjlRP40R0RlxTJQ3G4
|
||||
[15]:https://bpython-interpreter.org/
|
||||
[16]:http://mycli.net/
|
||||
[17]:http://pgcli.com/
|
||||
[18]:https://fishshell.com/
|
||||
[19]:https://opensource.com/user/125521/feed
|
||||
[20]:https://opensource.com/article/17/5/4-terminal-apps#comments
|
||||
[21]:https://opensource.com/users/amjith
|
@ -0,0 +1,82 @@
|
||||
如何在 Linux 中删除大(100-200GB)文件
|
||||
============================================================
|
||||
|
||||
通常,要[在 Linux 终端删除一个文件][1],我们使用 rm 命令(删除文件)、shred 命令(安全删除文件)、wipe 命令(安全擦除文件)或者 secure-deletion 工具包(一个[安全文件删除工具][2]集合)。
|
||||
|
||||
我们可以使用上面任意一个工具来处理相对较小的文件。但是,如果想要删除很大的文件或文件夹,比如 100-200GB,那么就删除文件所花费的时间(I/O 调度)以及消耗的内存而言,事情就没有听上去那么简单了。
|
||||
|
||||
在本教程中,我们会解释如何在 Linux 中有效率并可靠地删除大文件/文件夹。
|
||||
|
||||
**建议阅读:** [5 个在 Linux 中清空或者删除大文件内容的方法][3]
|
||||
|
||||
主要的目标是使用一种不会在删除大文件时拖慢系统的技术,并有合理的 I/O 占用。我们可以用 **ionice 命令**实现这个目标。
|
||||
|
||||
### 在 Linux 中使用 ionice 命令删除大(200GB)文件
|
||||
|
||||
ionice 是一个可以为另一个程序设置或获取 I/O 调度级别和优先级的有用程序。如果没有给出参数或者只有 `-p`,那么 ionice 将会查询该进程当前的 I/O 调度级别以及优先级。
|
||||
|
||||
如果我们给出命令名称,如 rm 命令,它将使用给定的参数运行此命令。要指定想要获取或设置调度参数的进程的[进程 ID][4],运行:
|
||||
|
||||
```
|
||||
# ionice -p PID
|
||||
```
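例如,可以先查询当前 shell 自身的调度情况(`$$` 会被 shell 展开为它自己的 PID;下面的输出仅为示例,实际格式可能因 util-linux 版本和所用的 I/O 调度器而异):

```
# ionice -p $$
none: prio 4
```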
|
||||
|
||||
调度类可以用名字或者数字来指定(0 表示无、1 表示实时、2 表示尽力而为、3 表示空闲)。
|
||||
|
||||
下面的命令意味着 rm 将属于空闲 I/O 级别,只在其它进程不使用 I/O 的时候才执行:
|
||||
|
||||
```
|
||||
---- Deleting Huge Files in Linux -----
|
||||
# ionice -c 3 rm /var/logs/syslog
|
||||
# ionice -c 3 rm -rf /var/log/apache
|
||||
```
|
||||
|
||||
如果系统中没有很多空闲时间,那么我们可以使用尽力而为的调度级别,并指定一个低优先级:
|
||||
|
||||
```
|
||||
# ionice -c 2 -n 6 rm /var/logs/syslog
|
||||
# ionice -c 2 -n 6 rm -rf /var/log/apache
|
||||
```
|
||||
|
||||
注意:要使用安全的方法删除大文件,我们可以使用先前提到的 shred、wipe 以及 secure-deletion 工具包中的不同工具,而不是 rm 命令。
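例如,可以把 ionice 与 shred 组合起来使用(以下只是一个示意,文件路径为示例;GNU shred 的 `-u` 选项表示覆写之后删除文件):

```
# ionice -c 3 shred -u /path/to/huge-file.img
```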
|
||||
|
||||
**建议阅读:**[3 个在 Linux 中永久/安全删除文件/文件夹的方法][5]
|
||||
|
||||
要获取更多信息,查阅 ionice 的手册页:
|
||||
|
||||
```
|
||||
# man ionice
|
||||
```
|
||||
|
||||
就是这样了!你脑海里还有其他的方法么?在评论栏中与我们分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。
|
||||
|
||||
------------------
|
||||
|
||||
via: https://www.tecmint.com/delete-huge-files-in-linux/
|
||||
|
||||
作者:[Aaron Kili][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/aaronkili/
|
||||
[1]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[2]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[3]:https://www.tecmint.com/empty-delete-file-content-linux/
|
||||
[4]:https://www.tecmint.com/find-linux-processes-memory-ram-cpu-usage/
|
||||
[5]:https://www.tecmint.com/permanently-and-securely-delete-files-directories-linux/
|
||||
[6]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[7]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[8]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[9]:https://www.tecmint.com/delete-huge-files-in-linux/#
|
||||
[10]:https://www.tecmint.com/delete-huge-files-in-linux/#comments
|
||||
[11]:https://www.tecmint.com/author/aaronkili/
|
||||
[12]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[13]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -0,0 +1,69 @@
|
||||
如何使用 Cream 提高 Vim 的用户友好性
|
||||
============================================================
|
||||
|
||||
> Cream 附加包通过把一个更加熟悉的“面孔”置于 Vim 文本编辑器之上,使其更加容易使用,同时又保留了 Vim 的功能。
|
||||
|
||||
![How to make Vim user-friendly with Cream](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/osdc_edu_rightmix_520.png?itok=SCsog_qv "How to make Vim user-friendly with Cream")
|
||||
|
||||
图片来自:opensource.com
|
||||
|
||||
大约 10 年前,我既使用 Emacs 进行文本编辑,也使用 Vim 进行文本编辑。说到底,我的确是一个热衷于 Emacs 的家伙。尽管 Emacs 在我的心里占据了很重要的地位,但我知道, Vim 也不赖。
|
||||
|
||||
一些人,或者像我一样的人,在技术方面有些笨手笨脚。多年来,我和一些 Linux 新手交流,了解到他们想使用 Vim,但是却失望地发现,Vim 编辑器和他们在其它操作系统上使用过的编辑器不一样。
|
||||
|
||||
但是,当我把 Cream 介绍给他们以后,他们的失望就变成了满意。Cream 是 Vim 的一个附加包,它使得 Vim 更加容易使用。Cream 让这些 Linux 新手变成了 Vim 的坚决拥护者和忠心用户。
|
||||
|
||||
让我们来看一看 Cream 是什么以及它是如何让 Vim 变得更加容易使用的。
|
||||
|
||||
### Cream 的安装
|
||||
|
||||
在安装 Cream 之前,你需要先在电脑上安装好 Vim 及其 GUI 组件 GVim。我发现最容易完成这件事的方法是使用你的 Linux 发行版的包管理器。
|
||||
|
||||
安装好 Vim 以后,便可[下载 Cream 的安装程序][2],或者你也可以再次使用 Linux 发行版的包管理器进行安装。
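以 Debian/Ubuntu 为例,一个示意性的安装命令如下(假设你的发行版仓库中打包了 cream;提供 GVim 的包名因版本而异,可能是 vim-gtk 或 vim-gtk3):

```
sudo apt-get install vim-gtk cream
```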
|
||||
|
||||
安装好 Cream 以后,你可以从应用菜单选择它(比如,**Applications**->**Cream**),或者在程序启动器中输入 `cream`,从而启动 Cream。
|
||||
|
||||
![Cream’s main window](https://opensource.com/sites/default/files/resize/cream-main-window-520x336.png "Cream’s main window")
|
||||
|
||||
### Cream 的使用
|
||||
|
||||
如果你之前已经使用过 Gvim,那么你会注意到, Cream 几乎没改变该编辑器的外观和感觉。最大的不同是 Cream 的菜单栏和工具栏,它们取代了 Gvim 陈旧的菜单栏和工具栏,新的菜单栏和工具栏的外观和功能分组看起来和其它编辑器的一样。
|
||||
|
||||
Cream 的菜单栏对用户隐藏了更多的技术选项,比如指定一个编译器的能力,以及运行 `make` 命令的能力。当你通过使用 Cream 更加熟悉 Vim 以后,你只需要从 **Setting**->**Preferences**->**Behavior** 选择选项,就可以更容易地访问这些特性。有了这些选项,你可以(如果你想)体验到一个兼有 Cream 和传统 Vim 二者优点的强大编辑器。
|
||||
|
||||
Cream 并不是仅由菜单驱动的。尽管编辑器的许多功能只需单击一两下鼠标即可使用,但你也可以使用常见的键盘快捷键来执行操作,比如 `CTRL-O`(打开一个文件)、`CTRL-C`(复制文本)。你不需要在几种模式之间切换,也不需要记住一些晦涩难记的命令。
|
||||
|
||||
Cream 开始运行以后,打开一个文件,或者新建一个文件,然后就可以开始输入了。几个我向他们介绍过 Cream 的人说,虽然 Cream 保留了 Vim 的许多典型风格,但是 Cream 使用起来更加舒服。
|
||||
|
||||
![Cream add-on for VIM in action](https://opensource.com/sites/default/files/cream-in-action.png "Cream add-on for VIM in action")
|
||||
|
||||
并不是说 Cream 是 Vim 的简化版,远远不是。事实上, Cream 保留了 Vim 的全部特性,同时,它还有[一系列其他有用的特性][7]。我发现的 Cream 的一些有用的特性包括:
|
||||
|
||||
* 一个标签式界面
|
||||
* 语法高亮(特别是针对 Markdown、LaTeX 和 HTML)
|
||||
* 自动修正拼写错误
|
||||
* 字数统计
|
||||
* 内建文件浏览器
|
||||
|
||||
Cream 本身也有许多附加包,可以给编辑器增加一些新的特性。这些特性包括文本加密、清理电子邮件内容,甚至还有一个使用教程。老实说,我还没有发现哪一个附加包是真正有用的,不过你的感受可能会有所不同。
|
||||
|
||||
我曾听过一些 Vi/Vim 的狂热分子谴责 Cream “降低”(这是他们的原话)了 Vi/Vim 编辑器的水准。的确,Cream 并不是为他们设计的。它是为那些想快速使用 Vim,同时保留他们曾经使用过的编辑器的外观和感觉的人准备的。在这种情况下,Cream 是值得赞赏的,它使得 Vim 更加容易使用,更广泛地被人们使用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
译者:[ucasFL](https://github.com/ucasFL)
|
||||
校对:[jasminepeng](https://github.com/jasminepeng)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier?rate=sPQVOnwWoNwyyQX4wV2SZ_7Ly_KXd_Gu9pBu16LRyhU
|
||||
[2]:http://cream.sourceforge.net/download.html
|
||||
[3]:http://cream.sourceforge.net/featurelist.html
|
||||
[4]:https://opensource.com/user/14925/feed
|
||||
[5]:https://opensource.com/article/17/5/stir-bit-cream-make-vim-friendlier#comments
|
||||
[6]:https://opensource.com/users/scottnesbitt
|
||||
[7]:http://cream.sourceforge.net/featurelist.html
|
@ -0,0 +1,123 @@
|
||||
怎样在 Linux 命令行下杀死一个进程
|
||||
============================================================
|
||||
|
||||
> Linux 的命令行里面有用来停止正在运行的进程的所有所需工具。Jack Wallen 将为您讲述细节。
|
||||
|
||||
![stop processes](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/stop-processes.jpg?itok=vfNx8VRz "stop processes")
|
||||
|
||||
想像一下:你打开了一个程序(可能来自于你的桌面菜单或者命令行),然后开始使用这个程序,没想到程序会锁死、停止运行、或者意外死机。你尝试再次运行该程序,但是它反馈说原来的进程没有完全关闭。
|
||||
|
||||
你该怎么办?你需要结束这个进程。但该如何做呢?不管你信不信,最好的解决办法大都在命令行里。值得庆幸的是,Linux 提供了杀死出错进程所需的全部工具;不过,在执行杀死进程的命令之前,你首先需要确定要结束的是哪个进程,以及该如何处理这类任务。一旦掌握了这些工具,其实十分简单……
|
||||
|
||||
让我来介绍给你这些工具。
|
||||
|
||||
我来概述的步骤是每个 Linux 发行版都能用的,不论是桌面版还是服务器版。我将限定只使用命令行,请打开你的终端开始输入命令吧。
|
||||
|
||||
### 定位进程
|
||||
|
||||
杀死一个没有响应的进程的第一个步骤是定位这个进程。我用来定位进程的命令有两个:`top` 和 `ps`。`top` 是每个系统管理员都知道的工具,用 `top` 命令,你能够知道当前正在运行的进程都有哪些。在命令行里输入 `top`,就能看到正在运行的程序进程(图 1)。
|
||||
|
||||
![top](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/killa.jpg?itok=95cUI9Lh "top")
|
||||
|
||||
*图 1: top 命令给出你许多的信息。*
|
||||
|
||||
从显示的列表中你能够看到相当重要的信息。举个例子,假设 Chrome 浏览器反应迟钝,依据我们的 `top` 命令显示,我们能够辨别出有四个 Chrome 浏览器的进程在运行,进程的 PID 分别是 3827、3919、10764 和 11679。这个信息很重要,可以用于以特定的方式结束进程。
|
||||
|
||||
尽管 `top` 命令很是方便,但它并不是得到你所要信息的最有效方法。你已经知道要杀死的是 Chrome 进程,并且不想一直盯着 `top` 命令的实时显示。鉴于此,你可以使用 `ps` 命令,然后用 `grep` 命令过滤输出结果。`ps` 命令能够显示当前进程列表的快照,再用 `grep` 命令输出匹配的行。我们用 `grep` 过滤 `ps` 输出的理由很简单:如果只输入 `ps`,你将会得到当前所有进程的列表快照,而我们只需要列出与 Chrome 浏览器相关的进程。所以这个命令是这个样子:
|
||||
|
||||
```
|
||||
ps aux | grep chrome
|
||||
```
|
||||
|
||||
这里 `aux` 选项如下所示:
|
||||
|
||||
* a = 显示所有用户的进程
|
||||
* u = 显示进程的用户和拥有者
|
||||
* x = 也显示不依附于终端的进程
|
||||
|
||||
当你搜索图形化程序的信息时,这个 `x` 参数是很重要的。
|
||||
|
||||
当你输入以上命令的时候,你将会得到如图 2 所示的信息,而且它有时用起来比 `top` 命令更有效。
|
||||
|
||||
![ps command](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/killb.jpg?itok=vyWIuTva "ps command")
|
||||
|
||||
*图 2:用 ps 命令来定位所需的内容信息。*
|
||||
|
||||
### 结束进程
|
||||
|
||||
现在我们开始结束进程的任务。我们有两种可以帮我们杀死错误的进程的信息。
|
||||
|
||||
* 进程的名字
|
||||
* 进程的 ID (PID)
|
||||
|
||||
你用哪一个将会决定终端命令如何使用,通常有两个命令来结束进程:
|
||||
|
||||
* `kill` - 通过进程 ID 来结束进程
|
||||
* `killall` - 通过进程名字来结束进程
|
||||
|
||||
有两个不同的信号能够发送给这两个结束进程的命令。你发送的信号决定着你想要从结束进程命令中得到的结果。举个例子,你可以发送 `HUP`(挂起)信号给结束进程的命令,命令实际上将会重启这个进程。当你需要立即重启一个进程(比如就守护进程来说),这是一个明智的选择。你通过输入 `kill -l` 可以得到所有信号的列表,你将会发现大量的信号。
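下面是一个小小的示意(假设系统中运行着像 nginx 这样收到 HUP 信号后会重新加载配置的守护进程,并且系统提供 pidof 命令):

```
kill -l                     # 列出所有可用的信号
kill -HUP $(pidof nginx)    # 向 nginx 发送挂起信号,让它重新加载配置
```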
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/killc.jpg?itok=9ewRHFW2)
|
||||
|
||||
*图 3: 可用的结束进程信号。*
|
||||
|
||||
最经常使用的结束进程的信号是:
|
||||
|
||||
| 信号名 | 信号值 | 作用 |
|
||||
|-----------|-----------|----------|
|
||||
| SIGHUP | 1 | 挂起 |
|
||||
| SIGINT | 2 | 键盘的中断信号 |
|
||||
| SIGKILL | 9 | 发出杀死信号 |
|
||||
| SIGTERM | 15 | 发出终止信号 |
|
||||
| SIGSTOP | 17, 19, 23 | 停止进程 |
|
||||
|
||||
好在,你能用信号值来代替信号名字,所以你没有必要记住各种各样的信号名字。
|
||||
|
||||
所以,让我们现在用 `kill` 命令来杀死 Chrome 浏览器的进程。这个命令的结构是:
|
||||
|
||||
```
|
||||
kill SIGNAL PID
|
||||
```
|
||||
|
||||
这里的 SIGNAL 是要发送的信号,PID 是要被杀死的进程的 ID。我们已经从 `ps` 命令得知,要结束的进程 ID 是 3827、3919、10764 和 11679。所以要发送结束进程的信号,我们输入以下命令:
|
||||
|
||||
```
|
||||
kill -9 3827
|
||||
kill -9 3919
|
||||
kill -9 10764
|
||||
kill -9 11679
|
||||
```
|
||||
|
||||
一旦我们输入了以上命令,Chrome 浏览器的所有进程将会成功被杀死。
|
||||
|
||||
我们有更简单的方法!如果我们已经知道我们想要杀死的那个进程的名字,我们能够利用 `killall` 命令发送同样的信号,像这样:
|
||||
|
||||
```
|
||||
killall -9 chrome
|
||||
```
|
||||
|
||||
附带说明的是,上边这个命令可能不能捕捉到所有正在运行的 Chrome 进程。如果运行了上边这个命令之后,你输入 `ps aux | grep chrome` 过滤一下,发现还剩下一些正在运行的 Chrome 进程,那么最好的办法还是回到 `kill` 命令,通过进程 ID 发送信号值 `9` 来逐个结束它们。
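作为补充,如果你的系统带有 procps 工具集中的 pgrep 和 pkill(大多数发行版默认提供),也可以先按名字确认进程,再批量结束它们(以下仅为示意):

```
pgrep -a chrome    # 列出所有名字匹配 chrome 的进程及其完整命令行
pkill -9 chrome    # 向所有匹配的进程发送信号 9
```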
|
||||
|
||||
### 结束进程很容易
|
||||
|
||||
正如你所看到的,杀死出错的进程并没有你原本想象的那么有挑战性。当我需要结束一个顽固的进程时,我倾向于使用 `killall` 命令来有效地终止它;然而,当只需要结束某一个行为异常的进程时,`kill` 命令则是更好的选择。
|
||||
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2017/5/how-kill-process-command-line
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
译者:[hwlog](https://github.com/hwlog)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/jlwallen
|
||||
[1]:https://www.linux.com/licenses/category/used-permission
|
||||
[2]:https://www.linux.com/licenses/category/used-permission
|
||||
[3]:https://www.linux.com/licenses/category/used-permission
|
||||
[4]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[5]:https://www.linux.com/files/images/killajpg
|
||||
[6]:https://www.linux.com/files/images/killbjpg
|
||||
[7]:https://www.linux.com/files/images/killcjpg
|
||||
[8]:https://www.linux.com/files/images/stop-processesjpg
|
@ -0,0 +1,95 @@
|
||||
朝鲜 180 局的网络战部门让西方国家忧虑
|
||||
============================================================
|
||||
|
||||
本文译自澳大利亚广播公司相关文章,不代表本站及译者、编辑的态度。
|
||||
|
||||
[![在夜色的映衬下,部队的军车通过平壤市区](http://www.abc.net.au/news/image/8545124-3x2-700x467.jpg "Military trucks through Pyongyang")][13]
|
||||
|
||||
*[PHOTO:脱北者说,平壤的网络战攻击旨在为一个叫做“180 局”的部门筹集资金。(路透社:Damir Sagolj, file)][14]*
|
||||
|
||||
> 据脱北者、官员和网络安全专家的消息,朝鲜的情报机关有一个叫做 180 局的特殊部门,这个部门已经发起过多起大胆而成功的网络袭击。
|
||||
|
||||
近几年,朝鲜被指责在美国、韩国,及其周边的几个国家对金融网络发起多起在线袭击。
|
||||
|
||||
网络安全研究人员称,他们找到了技术证据,表明本月在全球 150 多个国家感染了 30 多万台计算机的[“想哭”勒索病毒与朝鲜有关联][15]。
|
||||
|
||||
平壤称该指控是“荒谬的”。
|
||||
|
||||
对朝鲜的关键指控是它与一个叫做拉撒路(Lazarus)的黑客组织有联系,这个组织去年从孟加拉国中央银行的网络中窃取了 8100 万美元,并在 2014 年攻击了索尼的好莱坞工作室的网络。
|
||||
|
||||
美国政府指责朝鲜发动了对索尼公司的黑客袭击,同时正在就平壤在孟加拉国银行的盗窃行为准备提起公诉。
|
||||
|
||||
不过,目前还没有提供确凿的证据,也没有提出刑事指控。朝鲜也否认了索尼公司和该银行的袭击与其有关。
|
||||
|
||||
朝鲜是世界上最封闭的国家之一,它秘密行动的一些细节很难获得。
|
||||
|
||||
但研究这个封闭国家的专家和流落到韩国和一些西方国家的脱北者已经给出了或多或少的提示。
|
||||
|
||||
### 黑客们喜欢以雇员身份来作为掩护
|
||||
|
||||
金恒光,一位朝鲜前计算机教授,2004 年叛逃到韩国,他仍然有着朝鲜内部的消息来源。他说,平壤的网络战旨在为其筹集资金,这一任务由侦察总局(RGB)下属的一个叫做 180 局的部门执行,侦察总局是朝鲜主要的海外情报机构。
|
||||
|
||||
金教授称:“180 局负责入侵金融机构,利用漏洞从银行账户中提取资金。”
|
||||
|
||||
他之前也说过,他以前的一些学生已经加入了朝鲜的网络战略司令部,即朝鲜的网络部队。
|
||||
|
||||
> “黑客们到海外寻找互联网服务比朝鲜更好的地方,以免留下痕迹。”金教授补充说。
|
||||
|
||||
他说,他们经常以贸易公司、朝鲜的海外分支机构,以及设在中国和东南亚的合资企业的雇员身份作为掩护。
|
||||
|
||||
位于华盛顿的战略与国际研究中心的朝鲜专家 James Lewis 称,平壤起初把黑客攻击作为间谍活动的工具,随后用来对韩国和美国的目标进行政治干扰。
|
||||
|
||||
他说:“索尼公司事件之后,他们改变了方法,开始利用黑客攻击支持犯罪活动,为国家赚取硬通货。”
|
||||
|
||||
“目前为止,网上贩毒、假冒伪劣、走私,都是他们惯用的伎俩。”
|
||||
|
||||
|
||||
[**VIDEO:** 你遇到过勒索病毒吗? (ABC News)][16] : https://dn-linuxcn.qbox.me/static/video/CNb_Ransomware_1505_512k.mp4
|
||||
|
||||
### 韩国声称拥有“大量的证据”
|
||||
|
||||
美国国防部在去年提交给国会的一份报告中称,朝鲜将网络视为一种有成本效益的、不对称的、可抵赖的工具,而且它遭受报复性袭击的风险很小,因为它的“网络”大部分是和因特网分离的。
|
||||
|
||||
> 报告中说:“它可能使用第三方国家的互联网基础设施。”
|
||||
|
||||
韩国政府称,他们拥有朝鲜网络战行动的大量证据。
|
||||
|
||||
“朝鲜通过第三方国家来掩盖网络袭击的来源,并利用这些国家的信息和通讯技术设施进行网络战”,韩国外交部副部长安总基在书面评论中告诉路透社。
|
||||
|
||||
他说,除了孟加拉国银行抢劫案,怀疑平壤还与菲律宾、越南和波兰的银行袭击有关。
|
||||
|
||||
去年六月,韩国警方称,朝鲜袭击了 160 家韩国公司和政府机构,入侵了大约 14 万台计算机,暗中在对手的计算机中植入恶意代码,为发动大规模网络攻击的长期计划做准备。
|
||||
|
||||
朝鲜还被怀疑在 2014 年对韩国核反应堆运营商发动过网络攻击,尽管朝鲜对此予以否认。
|
||||
|
||||
位于韩国首尔的杀毒软件厂商 Hauri 公司的高级安全研究员 Simon Choi 表示,这些网络袭击来自朝鲜设在中国的一个基地。
|
||||
|
||||
Choi 先生对朝鲜的黑客能力有着广泛的研究,他称:“他们在那里行动,不管他们究竟在做什么,他们用的是中国的 IP 地址。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.abc.net.au/news/2017-05-21/north-koreas-unit-180-cyber-warfare-cell-hacking/8545106
|
||||
|
||||
作者:[www.abc.net.au][a]
|
||||
译者:[hwlog](https://github.com/hwlog)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.abc.net.au
|
||||
[1]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[2]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[3]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[4]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
|
||||
[5]:http://www.abc.net.au/news/2017-05-18/adylkuzz-cyberattack-could-be-far-worse-than-wannacry:-expert/8537502
|
||||
[6]:http://www.google.com/maps/place/Korea,%20Democratic%20People%20S%20Republic%20Of/@40,127,5z
|
||||
[7]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[8]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
|
||||
[9]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[10]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
|
||||
[11]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[12]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
|
||||
[13]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
|
||||
[14]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
|
||||
[15]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
|
||||
[16]:http://www.abc.net.au/news/2017-05-15/have-you-been-hit-by-ransomware/8527854
|
@ -1,83 +0,0 @@
|
||||
How is your community promoting diversity?
|
||||
============================================================
|
||||
|
||||
> Open source foundation leaders weigh in.
|
||||
|
||||
![How is your community promoting diversity?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/world_hands_diversity.png?itok=LMT5xbxJ "How is your community promoting diversity?")
|
||||
Image by : opensource.com
|
||||
|
||||
Open source software is a great enabler for technology innovation. Diversity unlocks innovation and drives market growth. Open source and diversity seem like the ultimate winning combination, yet ironically open source communities are among the least diverse tech communities. This is especially true when it comes to inherent diversity: traits such as gender, age, ethnicity, and sexual orientation.
|
||||
|
||||
It is hard to get a true picture of the diversity of our communities in all the various dimensions. Gender diversity, by virtue of being noticeably lacking and more straightforward to measure, is the starting point and current yardstick for measuring diversity in tech communities.
|
||||
|
||||
For example, it is estimated that around 25% of all software developers are women, but [only 3% work][5] in free and open software. These figures are consistent with my personal experience working in open source for over 10 years.
|
||||
|
||||
Even when individuals in the community are [doing their best][6] (and I have worked with many who are), it seems to make little difference. And little has changed in the last ten years. However, we are, as a community, starting to have a better understanding of some of the factors that maintain this status quo, things like [unconscious bias][7] or [social graph and privilege][8] problems.
|
||||
|
||||
In order to overcome the gravity of these forces in open source, we need combined efforts that are sustained over the long term and that really work. There is no better example of how diversity can be improved rapidly in a relatively short space of time than the Python community. PyCon 2011 consisted of just 1% women speakers. Yet in 2014, 33% of speakers at PyCon were women. Now Python conferences regularly lay out [their diversity targets and how they intend to meet them][9].
|
||||
|
||||
What did it take to make that dramatic improvement in women speaker numbers? In her great talk at PyCon 2014, [Outreach Program for Women: Lessons in Collaboration][10], Marina Zhurakhinskaya outlines the key ingredients:
|
||||
|
||||
* The importance of having a Diversity Champion to spearhead the changes over the long term; in the Python community Jessica McKellar was the driving force behind the big improvement in diversity figures
|
||||
* Specifically marketing to under-represented groups; for example, how GNOME used outreach programs, such as [Outreachy][1], to market to women specifically
|
||||
|
||||
We know diversity issues, while complex, are eminently fixable. In this way, open source foundations can play a huge role in sustaining efforts to promote diversity initiatives. Are other open source communities also putting efforts into diversity? To find out, we asked a few open source foundation leaders:
|
||||
|
||||
### How does your foundation promote diversity in its open source community?
|
||||
|
||||
**Mike Milinkovich, executive director of the Eclipse Foundation:**
|
||||
|
||||
> "The Eclipse Foundation is committed to promoting diversity in its open source community. But that commitment does not mean that we are satisfied with where we are today. We have a long way to go, particularly in the area of gender diversity. That said, some of the tangible steps we've taken in the last couple of years are: (a) we put into place a [Community Code of Conduct][2] that covers all of our activities, (b) we are consciously recruiting women for our conference program committees, (c) we are consciously looking for women speakers for our conferences, including keynotes, and (d) we are supporting community channels to discuss diversity topics. It's been great to see members of our community step up to assume leadership roles on this topic, and we're looking forward to making a lot of progress in 2017."
|
||||
|
||||
**Abby Kearns, executive director for the Cloud Foundry:**
|
||||
|
||||
> "For Cloud Foundry we promote diversity in a variety of ways. For our community, this includes a heavy focus on diversity events at our summit, and on our keynote stage. I'm proud to say we doubled the representation by women and people of color at our last event. For our contributors, this takes on a slightly different meaning and includes diversification across company and role."
|
||||
|
||||
A recent Cloud Foundry Summit featured a [diversity luncheon][11] as well as a [keynote on diversity][12], which highlighted how [gender parity had been achieved][13] by one member company's team.
|
||||
|
||||
**Chris Aniszczyk, COO of the Cloud Native Computing Foundation:**
|
||||
|
||||
> "The Cloud Native Computing Foundation (CNCF) is a very young foundation still, and although we are only one year old as of December 2016, we've had promoting diversity as a goal since our inception. First, every conference hosted by CNCF has [diversity scholarships][3] available, and there are usually special diversity lunches or events at the conference to promote inclusion. We've also sponsored "[contribute your first patch][4]" style events to promote new contributors from all over. These are just some small things we currently do. In the near future, we are discussing launching a Diversity Workgroup within CNCF, and also as we ramp up our certification and training programs, we are discussing offering scholarships for folks from under-representative backgrounds."
|
||||
|
||||
Additionally, the Cloud Native Computing Foundation is part of the [Linux Foundation][14] as a formal Collaborative Project (along with other foundations, including the Cloud Foundry Foundation). The Linux Foundation has extensive [Diversity Programs][15] and, as an example, recently [partnered with the Girls In Tech][16] not-for-profit to improve diversity in open source. In the future, the CNCF actively plans to participate in these Linux Foundation-wide initiatives as they arise.
|
||||
|
||||
For open source to thrive, companies need to foster the right environment for innovation. Diversity is a big part of this. Seeing open source foundations making the conscious decision to take action is encouraging. Dedicated time, money, and resources to diversity is making a difference within communities, and we are slowly but surely starting to see the effects. Going forward, communities can collaborate and learn from each other about what works and makes a real difference.
|
||||
|
||||
If you work in open source, be sure to ask and find out what is being done in your community as a whole to foster and promote diversity. Then commit to supporting these efforts and taking the steps toward making a real difference. It is exciting to think that the next ten years might be a huge improvement over the last 10, and we can start to envision a future of truly diverse open source communities, the ultimate winning combination.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/dsc_0182.jpg?itok=c_u-wggj)
|
||||
|
||||
Tracy Miranda - Tracy Miranda is a software developer and founder of Kichwa Coders, a software consultancy specializing in Eclipse tools for scientific and embedded software. Tracy has been using Eclipse since 2003 and is actively involved in the community, particularly the Eclipse Science Working Group. Tracy has a background in electronics system design. She mentors young coders at the festival of code for Young Rewired State. Follow Tracy on Twitter @tracymiranda.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/1/take-action-diversity-tech
|
||||
|
||||
作者:[ Tracy Miranda][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/tracymiranda
|
||||
[1]:https://www.gnome.org/outreachy/
|
||||
[2]:https://www.eclipse.org/org/documents/Community_Code_of_Conduct.php
|
||||
[3]:http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america/attend/scholarship-opportunities
|
||||
[4]:http://conferences.oreilly.com/oscon/oscon-tx-2016/public/schedule/detail/53257
|
||||
[5]:https://www.linux.com/blog/how-bring-more-women-free-and-open-source-software
|
||||
[6]:https://trishagee.github.io/post/what_can_men_do/
|
||||
[7]:https://opensource.com/life/16/3/sxsw-diversity-google-org
|
||||
[8]:https://opensource.com/life/15/8/5-year-plan-improving-diversity-tech
|
||||
[9]:http://2016.pyconuk.org/diversity-target/
|
||||
[10]:https://www.youtube.com/watch?v=CA8HN20NnII
|
||||
[11]:https://www.youtube.com/watch?v=LSRrc5B1an0&list=PLhuMOCWn4P9io8gtd6JSlI9--q7Gw3epW&index=48
|
||||
[12]:https://www.youtube.com/watch?v=FjF8EK2zQU0&list=PLhuMOCWn4P9io8gtd6JSlI9--q7Gw3epW&index=50
|
||||
[13]:https://twitter.com/ab415/status/781036893286854656
|
||||
[14]:https://www.linuxfoundation.org/about/diversity
|
||||
[15]:https://www.linuxfoundation.org/about/diversity
|
||||
[16]:https://www.linux.com/blog/linux-foundation-partners-girls-tech-increase-diversity-open-source
|
@ -1,111 +0,0 @@
|
||||
Be the open source supply chain
|
||||
============================================================
|
||||
|
||||
### Learn why you should be a supply chain influencer.
|
||||
|
||||
![Be the open source supply chain](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=tzfeycyR "Be the open source supply chain")
|
||||
Image by :
|
||||
|
||||
opensource.com
|
||||
|
||||
I would bet that whoever is best at managing and influencing the open source supply chain will be best positioned to create the most innovative products. In this article, I’ll explain why you should be a supply chain influencer, and how your organization can be an active participant in your supply chain.
|
||||
|
||||
In my previous article, [Open source and the software supply chain][2], I discussed the basics of supply chain management, and where open source fits in this model. I left readers with this illustration of the model:
|
||||
|
||||
![supply chain](https://opensource.com/sites/default/files/f1_520_0.png "supply chain")
|
||||
|
||||
The question to ask your employer and team(s) is: How do we best take advantage of this? After all, if Apple can set the stage for its dominance by creating a better hardware supply chain, then surely one can do the same with software supply chains.
|
||||
|
||||
### Evaluating supply chains
|
||||
|
||||
Having worked with developers and product teams in many companies, I learned that the process for selecting components that go into a product is haphazard. Sometimes there is an official bake-off of one or two components against each other, but the developers often choose to work with a product based on "feel". When determining the best components, you must evaluate based on those projects’ longevity, stage of development, and enough other metrics to form the basis of a "build vs. buy" decision. Number of users, interested parties, commercial activity, involvement of development team in support, and so on are a few considerations in the decision-making process.
|
||||
|
||||
Over time, technology and business needs change, and in the world of open source software, even more so. Not only must an engineering and product team be able to select the best component at that time, they must also be able to switch it out for something else when the time comes—for example, when the community managing the old component moves on, or when a new component with better features emerges.
|
||||
|
||||
### What not to do
|
||||
|
||||
When evaluating supply chain components, teams are prone to make a number of mistakes, including these common ones:
|
||||
|
||||
* **Not Invented Here (NIH)**: I can’t tell you how many times engineering teams decided to "fix" shortcomings in existing supply chain components by deciding to write it themselves. I won’t say "never ever do that," but I will warn that if you take on the responsibility of writing an infrastructure component, understand that you’re chucking away all the advantages of the open source supply chain—namely upstream testing and upstream engineering—and deciding to take on those tasks, immediately saddling your team (and your product) with technical debt that will only grow over time. You’re making the choice to be less efficient, and you had better have a compelling reason for doing so.
|
||||
* **Carrying patches forward**: Any open source-savvy team understands the value of contributing patches to their respective upstream projects. When doing so, contributed code goes through that project’s automated testing procedures, which, when combined with your own team’s existing testing infrastructure, makes for a more hardened end product. Unfortunately, not all teams are open source-savvy. Sometimes these teams are faced with onerous legal requirements that deter them from seeking permission to contribute fixes upstream. In that case, encourage (i.e., nag) your manager to get blanket legal approval for such things, because the alternative is carrying all those changes forward, incurring significant technical debt, and applying patches until the day your project (or you) dies.
|
||||
* **Think you’re only a user**: Using open source components as part of your software supply chain is only the first step. To reap the rewards of open source supply chains, you must dive in and be an influencer. (More on that shortly.)
|
||||
|
||||
### Effective supply chain management example: Red Hat
|
||||
|
||||
Because of its upstream-first policies, [Red Hat][3] is an example of how both to utilize and influence software supply chains. To understand the Red Hat model, you must view their products through a supply chain perspective.
|
||||
|
||||
Products supported by Red Hat are composed of open source components often vetted by multiple upstream communities, and changes made to these components are pushed to their respective upstream projects, often before they land in a supported product from Red Hat. The workflow looks somewhat like:
|
||||
|
||||
![workflow diagram](https://opensource.com/sites/default/files/f2_520_0.png "workflow diagram")
|
||||
|
||||
There are multiple reasons for this kind of workflow:
|
||||
|
||||
* Testing, testing, testing: By offloading some initial testing, a company like Red Hat benefits from both the upstream community’s testing, as well as the testing done by other ecosystem participants, including competitors.
|
||||
* Upstream viability: The Red Hat model only works as long as upstream suppliers are viable and self-sustaining. Thus, it’s in Red Hat’s interest to make sure those communities stay healthy.
|
||||
* Engineering efficiency: Because Red Hat offloads common tasks to upstream communities, their engineers spend more time adding value to products for customers.
|
||||
|
||||
To understand the Red Hat approach to supply chain, let’s look at their approach to product development with OpenStack.
|
||||
|
||||
Curiously, Red Hat’s start with OpenStack was not to create a product or even to announce one; rather, they started pushing engineering resources into strategic projects in OpenStack (starting with Nova, Keystone, and Cinder). This list grew to include several other projects in the OpenStack community. A more traditional product management executive might look at this approach and think, "Why on earth would we contribute so much engineering to something that isn’t established and has no product? Why are we giving our competitors our work for free?"
|
||||
|
||||
Instead, here is the open source supply chain thought process:
|
||||
|
||||
### Step 1
|
||||
|
||||
Look at growth areas in the business or largest product gaps that need filling. Is there an open source community that fits a strategic gap? Or can we build a new project from scratch to do the same? In this case, Red Hat looked at the OpenStack community and eventually determined that it would fill a gap in the product portfolio.
|
||||
|
||||
### Step 2
|
||||
|
||||
Gradually turn up the dial on engineering resources. This does a couple of things. First, it helps the engineering team get a sense of the respective projects' prospects for success. If prospects aren't good, the company can stop contributing, with minimal investment spent. Once the project is determined to be worth the investment, the company can ensure its engineers will influence current and future development. This helps the project with quality code development, and ensures that the code meets future product requirements and acceptance criteria. Red Hat spent a lot of time slinging code in OpenStack repositories before ever announcing an OpenStack product, much less releasing one. But this was a fraction of the investment that would have been made if the company had developed an IaaS product from scratch.
|
||||
|
||||
### Step 3
|
||||
|
||||
Once the engineering investments begin, start a product management roadmap and marketing release plan. Once the code reaches a minimum level of quality, fork the upstream repository and start working on product-specific code. Bug fixes are pushed upstream to openstack.org and into product branches. (Remember: Red Hat’s model depends on upstream viability, so it makes no sense not to push fixes upstream.)
|
||||
|
||||
Lather, rinse, repeat. This is how you manage an open source software supply chain.
|
||||
|
||||
### Don't accumulate technical debt
|
||||
|
||||
If needed, Red Hat could decide that it would simply depend on upstream code, supply necessary proprietary product glue, and then release that as a product. This is, in fact, what most companies do with upstream open source code; however, this misses a crucial point I made previously. To develop a really great product, being heavily involved in the development process helps. How can an organization make sure that the code base meets its core product criteria if they’re not involved in the day-to-day architecture discussions?
|
||||
|
||||
To make matters worse, in an effort to protect backwards compatibility and interoperability, many companies fork the upstream code, make changes and don't contribute them upstream, choosing instead to carry them forward internally. That is a big no-no, saddling your engineering team forever with accumulated technical debt that will only grow over time. In that scenario, all the gains made from upstream testing, development and release go away in a whiff of stupidity.
|
||||
|
||||
### Red Hat and OpenShift
|
||||
|
||||
Once you begin to understand Red Hat’s approach to supply chain, which you can see manifested in its approach to OpenStack, you can understand its approach to OpenShift. Red Hat first released OpenShift as a proprietary product that was also open sourced. Everything was homegrown, built by a team that joined Red Hat as part of the [Makara acquisition][4] in 2010.
|
||||
|
||||
The technology initially suffered from NIH—using its own homegrown clustering and container management technologies, in spite of the recent (at the time) release of new projects: Kubernetes, Mesos, and Docker. What Red Hat did next is a testament to the company's commitment to its open source supply chain model: Between OpenShift versions 2 and 3, developers rewrote it to utilize and take advantage of new developments from the Kubernetes and Docker communities, ditching their NIH approach. By restructuring the project in that way, the company took advantage of economies of scale that resulted from the burgeoning developer communities for both projects.
|
||||
|
||||
Instead of Red Hat fashioning a complete QC/QA testing environment for the entire OpenShift stack, they could rely on testing infrastructure supplied by the Docker and Kubernetes communities. Thus, Red Hat contributions to both the Docker and Kubernetes code bases would undergo a few rounds of testing before ever reaching the company’s own product branches:
|
||||
|
||||
1. The first round of testing is done by the Docker and Kubernetes communities.
|
||||
2. Further testing is done by ecosystem participants building products on either or both projects.
|
||||
3. More testing happens on downstream code distributions or products that "embed" both projects.
|
||||
4. Final testing happens in Red Hat’s own product branch.
|
||||
|
||||
The amount of upstream (from Red Hat) testing done on the code ensures a level of quality that would be much more expensive for the company to do comprehensively and from scratch. This is the trick to open source supply chain management: Don’t just consume upstream code, minimally shimming it into a product. That approach won’t give you any of the advantages offered by open source development practices and direct participation for solving your customers’ problems.
|
||||
|
||||
To get the most benefit from the open source software supply chain, you must **be** the open source software supply chain.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
John Mark Walker - John Mark Walker is Director of Product Management at Dell EMC and is responsible for managing the ViPR Controller product as well as the CoprHD open source community. He has led many open source community efforts, including ManageIQ.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/1/be-open-source-supply-chain
|
||||
|
||||
作者:[John Mark Walker][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/johnmark
|
||||
[1]:https://opensource.com/article/17/1/be-open-source-supply-chain?rate=sz6X6GSpIX1EeYBj4B8PokPU1Wy-ievIcBeHAv0Rv2I
|
||||
[2]:https://opensource.com/article/16/12/open-source-software-supply-chain
|
||||
[3]:https://www.redhat.com/en
|
||||
[4]:https://www.redhat.com/en/about/press-releases/makara
|
||||
[5]:https://opensource.com/user/11815/feed
|
@ -1,74 +0,0 @@
|
||||
Developing open leaders
|
||||
============================================================
|
||||
|
||||
> "Off-the-shelf" leadership training can't sufficiently groom tomorrow's organizational leaders. Here's how we're doing it.
|
||||
|
||||
![Developing open leaders](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_community2.png?itok=ILQK65F1 "Developing open leaders")
|
||||
Image by : opensource.com
|
||||
|
||||
At Red Hat, we have a saying: Not everyone needs to be a people manager, but everyone is expected to be a leader.
|
||||
|
||||
For many people, that requires a profound mindset shift in how to think about leaders. Yet in some ways, it's what we all intuitively know about how organizations really work. As Red Hat CEO Jim Whitehurst has pointed out, in any organization, you have the thermometers—people who reflect the organizational "temperature" and sentiment and direction—and then you have the thermostats—people who _set_ those things for the organization.
|
||||
|
||||
Leadership is about maximizing influence and impact. But how do you develop leadership for an open organization?
|
||||
|
||||
In the first installment of this series, I will share the journey, from my perspective, on how we began to build a leadership development system at Red Hat to enable our growth while sustaining the best parts of our unique culture.
|
||||
|
||||
### Nothing 'off the shelf'
|
||||
|
||||
In an open organization, you can't just buy leadership development training "off the shelf" and expect it to resonate with people—or to reflect and reinforce your unique culture. But you also probably won't have the capacity and resources to build a great leadership development system entirely from scratch.
|
||||
|
||||
Early on in our journey at Red Hat, our leadership development efforts focused on understanding our own philosophy and approach, then taking a bit of an open source approach: sifting through what people had created for conventional organizations, then configuring those ideas in a way that made them feasible for an open organization.
|
||||
|
||||
Looking back, I can also see we spent a lot of energy looking for ways to plug specific capability gaps.
|
||||
|
||||
Many of our people managers were engineers and other subject matter experts who stepped into management roles because that's what our organization needed. Yet the reality was, many had little experience leading a team or group. So we had some big gaps in basic management skills.
|
||||
|
||||
We also had gaps—not just among managers but also among individual contributors—when it came to navigating tough conversations with respect. In a company where passion runs high and people love to engage in open and heated debate, making your voice heard without shouting others down wasn't always easy.
|
||||
|
||||
We couldn't find any end-to-end leadership development systems that would help train people for leading in a culture that favors flatness and meritocracy over hierarchy and seniority. And while we could build some of those things ourselves, we couldn't build everything fast enough to meet our growing organization's needs.
|
||||
|
||||
So when we saw a need for improved goal setting, we introduced some of the best offerings available—like Closing the Execution Gap and the concept of SMART goals (i.e. specific, measurable, attainable, relevant, and time-bound). To make these work for Red Hat, we configured them to pull through themes from our own culture that could be used in tandem to make the concepts resonate and become even more powerful.
|
||||
|
||||
### Considering meritocracy
|
||||
|
||||
In a culture that values meritocracy, being able to influence others is critical. Yet the passionate open communication and debate that we love at Red Hat sometimes created hard feelings between individuals or teams. We introduced [Crucial Conversations][2] to help everyone navigate those heated and impassioned topics, and also to help them recognize that those kinds of conversations provide the greatest opportunity for influence.
|
||||
|
||||
After building that foundation with Crucial Conversations, we introduced [Influencer Training][3] to help entire teams and organizations communicate and gain traction for their ideas across boundaries.
|
||||
|
||||
We also found a lot of value in Marcus Buckingham's strengths-based approach to leadership development, rather than the conventional models that encouraged people to spend their energy shoring up weaknesses.
|
||||
|
||||
Early on, we made a decision to make our leadership offerings available to individual contributors as well as managers, because we saw that these skills were important for everyone in an open organization.
|
||||
|
||||
Looking back, I can see that this gave us the added benefit of developing a shared understanding and language for talking about leadership throughout our organization. It helped us build and sustain a culture where leadership is expected at all levels and in any role.
|
||||
|
||||
At the same time, training was only part of the solution. We also began developing processes that would help entire departments develop important organizational capabilities, such as talent assessment and succession planning.
|
||||
|
||||
Piece by piece, our open leadership system was beginning to take shape. The story of how it came together is pretty remarkable—at least to me!—and over the next few months, I'll share the journey with you. I look forward to hearing about the journeys of other open organizations, too.
|
||||
|
||||
_(An earlier version of this article appeared in _[The Open Organization Leaders Manual][4]_, now available as a free download from Opensource.com.)_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
DeLisa Alexander - DeLisa Alexander | DeLisa is Executive Vice President and Chief People Officer at Red Hat. Under her leadership, this team focuses on acquiring, developing, and retaining talent and enhancing the Red Hat culture and brand. In her nearly 15 years with the company, DeLisa has also worked in the Office of General Counsel, where she wrote Red Hat's first subscription agreement and closed the first deals with its OEMs.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/17/1/developing-open-leaders
|
||||
|
||||
作者:[DeLisa Alexander][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/delisa
|
||||
[1]:https://opensource.com/open-organization/17/1/developing-open-leaders?rate=VU560k86SWs0OAchgX-ge2Avg041EOeU8BrlKgxEwqQ
|
||||
[2]:https://www.vitalsmarts.com/products-solutions/crucial-conversations/
|
||||
[3]:https://www.vitalsmarts.com/products-solutions/influencer/
|
||||
[4]:https://opensource.com/open-organization/resources/leaders-manual
|
||||
[5]:https://opensource.com/user/10594/feed
|
||||
[6]:https://opensource.com/open-organization/17/1/developing-open-leaders#comments
|
||||
[7]:https://opensource.com/users/delisa
|
@ -1,93 +0,0 @@
|
||||
4 questions to answer when choosing community metrics to measure
|
||||
============================================================
|
||||
|
||||
> When evaluating a specific metric that you are considering including in your metrics plan, you should answer four questions.
|
||||
|
||||
|
||||
![4 questions to answer when choosing community metrics to measure](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/metrics_lead-steps-measure.png?itok=dj9mvlQw "4 questions to answer when choosing community metrics to measure")
|
||||
Image by :
|
||||
|
||||
[Internet Archive Book Images][4]. Modified by Opensource.com. [CC BY-SA 4.0][5]
|
||||
|
||||
Thus far in the [Community Metrics Playbook][6] column, I've discussed the importance of [setting goals][7] to guide the metrics process, outlined the general [types of metrics][8] that are useful for studying your community, and reviewed technical details of [available tools][9]. As you are deciding which metrics to track for your community, having a deeper understanding of each area is important so you not only choose good metrics, but also understand and plan for what to do when the numbers don't line up with expectations.
|
||||
|
||||
When evaluating a specific metric that you are thinking about including in your metrics plan, you should answer four questions:
|
||||
|
||||
* Does it help achieve my goals?
|
||||
* How accurate is it?
|
||||
* What is its relationship to other metrics?
|
||||
* What will I do if the metrics goes "bad"?
|
||||
|
||||
### Goal-appropriate
|
||||
|
||||
This one should be obvious by now from my [previous discussion on goals][10]: Why do you need to know this metric? Does this metric have a relationship to your project's goals? If not, then you should consider ignoring it—or at least placing much less emphasis on it. Metrics that do not help measure your progress toward goals waste time and resources that could be better spent developing better metrics.
|
||||
|
||||
One thing to consider is intermediate metrics. These are metrics that may not have an obvious, direct relationship to your goals. They can be dangerous when considered alone and can lead to undesirable behavior simply to "meet the number," but when combined with and interpreted in the context of other intermediates, they can help projects improve.
|
||||
|
||||
### Accuracy
|
||||
|
||||
Accuracy is defined as the quality or state of being correct or precise. Gauging accuracy for metrics that have built-in subjectivity and bias, such as survey questions, is difficult, so for this discussion I'll talk about objective metrics obtained by computers, which are for the most part highly precise and accurate. [Data can't lie][11], so why are we even discussing accuracy of computed metrics? The potential for inaccurate metrics stems from their human interpretation. The classic example here is _number of downloads_. This metric can be measured easily—often as part of a download site's built-in metrics—but will not be accurate if your software is split into multiple packages, or known systemic processes produce artificially inflated (or deflated) numbers, such as automated testing systems that execute repeated downloads.
|
||||
|
||||
As long as you recognize and avoid fixating on absolute correctness, having slightly inaccurate metrics is usually better than no metrics at all. Web analytics are [notorious][12] for being inaccurate gauges of reality due to the underlying technical nature of web servers, browsers, proxies, caching, dynamic addressing, cookies, and other aspects of computing that can muddy the waters of visitor engagement metrics; however, multiple slightly inaccurate web metrics over time can be an accurate indicator that the website refresh you did reduced your repeat visits by 30%. So don't be afraid of the fact that you'll probably never achieve 100% accuracy.
|
||||
|
||||
### Understanding relationships
|
||||
|
||||
![fresh lemons graph](https://opensource.com/sites/default/files/f1-falkner-02-2017_520.png "fresh lemons graph")
|
||||
|
||||
_Data from: [NHTSA, DOT HS 810 780][1]. [U.S. Department of Agriculture (pdf)][2]_
|
||||
|
||||
The universe of metrics is full of examples stemming from the statistical phrase "[correlation does not imply causation][13]." When choosing metrics, carefully consider whether the chosen metric might have relationships to other metrics, directly or indirectly. Related metrics often can help diagnose success and failure, and indicate needed changes to your project to drive the improvement you're looking for.
|
||||
|
||||
Truly proving that one metric's behavior causes predictable changes in another requires quite a bit of experimentation and statistical analysis, but you don't have to take it that far. If you suspect a relationship, take note and observe their behavior over time, and if evidence suggests a relationship, then you can do experimentation in your own project to test the hypothesis.
|
||||
|
||||
For example, a typical goal of open source projects is to drive innovation by attracting new developers who bring their diverse experience and backgrounds to the project. A given project notices that when the "average time from contribution to code commit" decreases, the number of new contributors coming to the project increases. If evidence over time maintains this correlation, the project might decide to dedicate more resources to handling contributions. This can have an effect elsewhere—such as an increase in bugs due to lots of new code coming in—so try not to over-rotate while using your new-found knowledge.
|
||||
|
||||
### Planning for failure
|
||||
|
||||
After gauging the accuracy and applicability of a metric, you need to think about and plan for what you will do when things don't go as planned (which will happen). Consider this scenario: You've chosen several quality-related metrics for your project, and there is general agreement that they are accurate and important to the project. The QA team is working hard, yet your chosen metrics continue to suffer. What do you do? You have several choices:
|
||||
|
||||
* Do nothing.
|
||||
* Make the QA team come in on the weekend to write more tests.
|
||||
* Work with developers to find the root cause of all the bugs.
|
||||
* Choose different metrics.
|
||||
|
||||
Which is the correct choice? The answer shouldn't surprise you: _It depends_. You may not need to do anything if the trend is expected, for example if resource constraints are forcing you to trade quality for some other metric. QA might actually need to write more tests if you have known poor coverage. Or you may need to do root cause analysis for a systemic issue in development. The last one is particularly important to include in any plan; your metrics may have become outdated and no longer align with your project's goals, and should be regularly evaluated and eliminated or replaced as needed.
|
||||
|
||||
Rarely will there be a single correct choice—it's more important to outline, for each metric, the potential causes for failure and which questions you need to ask and what you will do in various contexts. It doesn't have to be a lengthy checklist of actions for each possible cause, but you should at least list a handful of potential causes and how to proceed to investigate failure.
|
||||
|
||||
By answering these four questions about your metrics, you will gain a greater understanding of their purpose and efficacy. More importantly, sharing the answers with the rest of the project will give your community members a greater feeling of autonomy and purpose, which can be a much better motivator than simply asking them to meet a set of seemingly arbitrary numbers.

--------------------------------------------------------------------------------

译者简介:

James Falkner - Technology evangelist, teacher, learner, author, dedicated to open source and open computing. I work at Red Hat as a technical evangelist for Red Hat's portfolio of open source products, and I love what we do, learning from others, and occasionally teaching at conferences.

Prior to Red Hat I spent 5 years at Liferay growing a large open source community, onboarding new contributors, meeting and engaging with beginners and experts, and championing open source as the de facto choice for businesses large and small. I am based in the Orlando, Florida, USA area.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/2/4-questions-answer-when-choosing-community-metrics-measure

作者:[James Falkner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/james-falkner
[1]:https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/810780
[2]:http://www.ers.usda.gov/media/320480/wrs0406f_1_.pdf
[3]:https://opensource.com/article/17/2/4-questions-answer-when-choosing-community-metrics-measure?rate=I8iVb2WNG2xAcYFvNaZfoEFTozgl_gQ-Pz8Ra1SveOE
[4]:https://www.flickr.com/photos/internetarchivebookimages/14753212581/in/photolist-otG57a-orWcFN-ovJbD4-orWgoN-otWQTN-otWmY9-otG3wg-otYjFc-otLxay-otWi5N-ovJ8pt-ocuoJr-otG4KZ-ovJ7ok-otWjdj-otY18v-otYqxn-orWptL-otWkzY-otWTnW-otYcHe-otWAx3-octWmY-otWNwd-otL2wq-otYco6-ovHSva-otFSq4-otFPP2-otWmAL-otYtwP-orWAj3-otLjQy-otWDRs-otWoPJ-otG7wR-otWBTQ-otG4b2-otWyD3-orWgCA-otWMzo-otYfHx-otY9oP-otGbrz-orWnwj-orW6gJ-ocuAd8-orW5U1-otWBcu-otFXgr/
[5]:https://creativecommons.org/licenses/by-sa/4.0/
[6]:https://opensource.com/tags/community-metrics-playbook
[7]:https://opensource.com/bus/16/8/measuring-community-health
[8]:https://opensource.com/business/16/9/choosing-right-metrics
[9]:https://opensource.com/article/16/11/tools-collecting-analyzing-community-metrics
[10]:https://opensource.com/bus/16/8/measuring-community-health
[11]:http://management.curiouscatblog.net/2007/08/09/data-cant-lie/
[12]:https://brianclifton.com/pro-lounge-files/accuracy-whitepaper.pdf
[13]:https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
[14]:https://opensource.com/user/18065/feed
[15]:https://opensource.com/users/james-falkner
@ -1,94 +0,0 @@

How the University of Hawaii is solving today's higher ed problems
============================================================

![How the University of Hawaii is solving today's higher ed problems](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_brainstorm_island_520px.png?itok=KRXqz2-m "How the University of Hawaii is solving today's higher ed problems")

> Image by: opensource.com

Openness invites greater participation and it takes advantage of the shared energy of collaborators. The strength of openly created educational resources comes, paradoxically, from the vulnerability of the shared experience of that creation process.

One of the leaders in Open Educational Resources (OER) is [Billy Meinke][3], educational technologist at the University of Hawaii at Manoa. The University's open creation model uses [Pressbooks][4], which Billy tells me more about in this interview.

**DW: How did your work at Creative Commons lead you to the University of Hawaii?**

**Billy Meinke (BM)**: Well, I've actually _returned_ to The University of Hawaii (UH) after being in the Bay Area for several years. I completed the ETEC educational technology Master's program here and then moved to San Francisco, where I worked with [Creative Commons][5] (CC). Being with CC was a rewarding and eye-opening experience, and I'm hopeful that what I learned out there will lend itself to the OER work we are ramping up at the University.

**DW: What came first: instructional design or OER? Are the two symbiotic?**

**BM**: To me, OER is just a better flavor of learning content. Instructional designers make lots of decisions about the learning product they want to create, be it a textbook or a course or a piece of media. But will they put an open license on that OER when it's published? Will they use an open source tool to author the content? Will they release it in an open format? An instructional designer can produce effective learning content without doing any of those things, but it won't be as useful to the next person. OERs are different because they are designed for reuse, regardless of pedagogical strategy or learning approach.

**DW: How long has the University of Hawaii been using OERs? What were the primary motivations?**

**BM**: The OER effort at UH started in 2014, and this past November I took over management of OER activities at UH Manoa, the University system's flagship campus.

The UH system has a healthy group of OER advocates throughout, primarily at the community colleges. They've transitioned hundreds of courses to textbook zero (textbooks at no cost) and have made lots of headway building OER-based courses for two-year students. I've been really impressed with how well they've moved towards OER and how much money they've saved students over the last few semesters. We want to empower faculty to take control of what content they teach with, which we expect will save students money at all of our campuses.

**DW: What are Pressbooks? Why are Pressbooks important to the creation of OERs?**

**BM**: Members of the faculty do have a choice in terms of what content they teach from, much of the time. Some write their own content, or maintain websites that house a course. Pressbooks is a WordPress-based publishing platform that makes it simpler to manage content like a book, with sections and chapters, a table of contents, author and publisher metadata, and the capability to export the "book" into formats that can be easily read _and_ reused.

Because most undergraduate courses still rely on a primary textbook, we're opening up a means for faculty to adopt an existing open textbook or to co-author a text with others. Pressbooks is the tool, and we're developing the processes for adapting OER as we go.

**DW: How can a person get involved in development of Pressbooks?**

**BM**: Pressbooks has a [GitHub repository][6] where they collaboratively build the supporting software, and I've lurked on it for the last year or so. It can take some getting used to, but the conversations that happen there reveal the direction of the software and give an idea of who is working on what. Pressbooks does offer free hosting of a limited version of the software (it includes a watermark to encourage folks to upgrade) for those who want to tinker without too much commitment. Also, the software is openly licensed (GPLv2), so anyone can use the code without cost or permission.

**DW: What other institutions use Pressbooks?**

**BM**: Some of the more widely known examples are [SUNY's Open Textbook project][7] and the [BCcampus OpenEd project][8]. [Lumen Learning][9] also has its own version of Pressbooks, as does [Open Oregon State][10].

We're looking at what all of these folks are doing to see where we can take our use of Pressbooks, and we hope to help pave the way for others who are developing their own OERs. In some cases, Pressbooks is being used to support entire courses, with integrated activities and assessments that can hook into the Learning Management System (LMS) an institution uses for course delivery.

Because Pressbooks is powered by WordPress, it actually has quite a bit of flexibility in terms of what it can do, but we're setting a humble roadmap for now. We'll be doing standalone open textbooks first.

**DW: How can other colleges and universities replicate your success? What are some first steps?**

**BM**: Forming a community that includes librarians, instructional designers, and faculty seems to be a healthy approach. The very first step will always be to get a handle on what is currently happening with OERs where you are and who is aware of (or knowledgeable about) OERs, and then to support them. My focus now is on curating the training resources around OERs that our team has developed, and helping the faculty gain the knowledge and skills they need to begin adapting OERs. We'll be supporting a number of open textbook adoptions and creations this year, and it's my opinion that we should support folks with OERs, but then get out of the way when they're ready to take to the sky.

**DW: How important is "release early, release often"?**

**BM**: Even though the saying has traditionally been used to describe open practices for developing software, I think the creators of OER content should work toward embracing it, too. All too often, an open license is placed on a piece of OER as a finishing step, and none of the drafts or working documents are ever shared before the final content is released. Many folks don't consider that there might be much to gain by publishing early, especially when working on OER independently or as part of a small team. Taking a page from Mozilla's Matt Thompson, [working openly][11] makes way for greater participation, agility, momentum, iteration, and leveraging the collective energy of folks who have goals similar to your own. Because my role at UH is to connect and facilitate the adoption and creation of OER, releasing drafts of planning documents and OER as I go makes sense.

To take advantage of the collective experience and knowledge that my networks have, I must improve the quality of the work continuously. This may be the most unsettling part of working openly: others can see your flaws and mistakes alongside your successes and wins. But in truth, I don't think many folks go around looking for issues in the work of others. More often, their assessment begins (after watching and lurking) with asking how useful the work of others is to their own work. If it seems useful on the surface, they'll take a deeper look; otherwise they'll move on to find the good work of others that can help them go further with their own project.

Being able to borrow ideas from, and in some cases directly use, the planning docs of others can help new OER projects find legs. That's part of my strategy with the UH system as well: sharing what works so that we can carry our OER initiative forward, together.

**DW: How is the Open Foundation's approach for [OERu][1] of select, design, develop, deliver, and revise similar to [David Wiley's 5Rs][12]?**

**BM**: Well, OERu's development workflow for OER courses is designed to outline the process of creating and revising OER, while Wiley's 5Rs framework is an assessment tool for an OER. You would (as we have) use OERu's workflow to understand how you can contribute to their course development. Wiley's 5Rs is more of a set of questions to ask to understand how open an OER is.

**DW: Why are these frameworks essential to the development cycle of OERs, and do you have your own framework?**

**BM**: While I don't believe that any framework or guide is a magic bullet or something that will guarantee success in developing OERs, I think that opening up the processes of content development can benefit teams and individuals who are taking on the challenge of adopting or creating OERs. At a minimum, a framework, or a set of them, can give a big-picture view of what it takes to produce OERs from start to finish. With tools like these, teams may better understand where they are in their own process, and have an idea of what it will take to reach the end points they have set for their OER work.

--------------------------------------------------------------------------------

译者简介:

Don Watkins - Educator, education technology specialist, entrepreneur, open source advocate. M.A. in Educational Psychology, MSED in Educational Leadership, Linux system administrator, CCNA, virtualization using Virtual Box. Follow me at @Don_Watkins .

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/2/interview-education-billy-meinke

作者:[Don Watkins][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://oeru.org/
[2]:https://opensource.com/article/17/2/interview-education-billy-meinke?rate=MTzLUGkz2UyQtAenC-MVjynw2M_qBr_X4B-vE-0KCVI
[3]:https://www.linkedin.com/in/billymeinke
[4]:https://pressbooks.com/
[5]:https://creativecommons.org/
[6]:https://github.com/pressbooks/pressbooks
[7]:http://textbooks.opensuny.org/
[8]:https://open.bccampus.ca/
[9]:http://lumenlearning.com/
[10]:http://open.oregonstate.edu/textbooks/
[11]:https://openmatt.org/2011/04/06/how-to-work-open/
[12]:https://opencontent.org/blog/archives/3221
[13]:https://opensource.com/user/15542/feed
[14]:https://opensource.com/article/17/2/interview-education-billy-meinke#comments
[15]:https://opensource.com/users/don-watkins
@ -1,117 +0,0 @@

A graduate degree could springboard you into an open source job
============================================================

![A graduate degree could springboard you into an open source job](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/rh_003784_02_os.comcareers_os_rh2x.png?itok=4wXjYMBw "A graduate degree could springboard you into an open source job")

Image by: opensource.com

Tech companies often prefer [hiring those who have open source experience][2] because, quite simply, open source experience is more valuable. This preference is only growing stronger now that open source software dominates the industry and free and open source hardware is gaining momentum. For example, an [Indeed.com salary analysis][3] shows that jobs with the keywords "Microsoft Windows" have an average salary of $64,000, while jobs with the keyword "Linux" have an average salary of $99,000. Enough said.

There are many good open source jobs available to those with Bachelor's degrees, but if you want to control your destiny, a higher degree will give you the freedom to be paid more for following your interests.

This was very important to me when deciding what education I would pursue, and I think it is true of most other PhDs. However, even if you do not put much stock in intellectual freedom, there is a pretty easy case to be made for "doing it for the Benjamins."

If you care about economic security, as an undergraduate you should consider graduate school. According to [data from the U.S. Bureau of Labor Statistics'][4] Current Population Survey, your average income is going to go up by over 20% if you get a Master's degree and by about 50% if you get a PhD. Similarly, the unemployment rate for those with a Bachelor's degree was about 5%; it drops to 3.6% for a Master's degree and is cut in half, to 2.5%, for those with a PhD.

Of course, all graduate programs and schools are _not_ equal. Most open source advocates would likely find themselves in some kind of engineering program. This is actually also pretty good news on the money front. [IEEE's Where the Jobs Are 2014][5] report says that engineering unemployment is just 1.9% and down to pre-recession levels. Similarly, a [survey][6] by the American Society of Mechanical Engineers (ASME) and the American Society of Civil Engineers (ASCE) found that during the recession (from 2011 to 2013) the average salary for engineers actually rose almost 5%.

Ironically, many students do not consider graduate school for economic reasons. On its face, grad school appears expensive, and working your way through it without shouldering a lot of debt seems impossible. For example, [MIT is $24,000 per term][7], and this does not even include room and board. Even at my more humble university (Michigan Tech, located in the [snow-blasted][8] upper peninsula of Michigan), graduate school will set you back more than [$40,000 a year][9] to become an electrical or computer engineer. Despite these costs, graduate school in technical disciplines almost always has an exceptionally high return on investment.

Also, I have even more good news: **If you are a solid student, graduate school will be more than free.**

In general, the best students are offered research assistantships that pay their way through graduate school completely, even at the nation's top schools. PhD and Master's degree students are generally fully funded, including tuition and monthly stipends. You will not get rich, but your ramen noodles will be covered. The real beauty of this path is that in general the research that you are paid for will go directly toward your own thesis.

If you are looking for a graduate degree that will springboard you into an open source job, not just any graduate program will do. A good place to start is with the [top 100 universities][10] that support FOSS.

There are also many institutions that have a fairly well-developed open source culture. Students at RIT can now [earn a minor in free and open source software][11] and free culture, and at Michigan Tech you can join the [Open Hardware Enterprise][12], which is essentially a student-run business. The Massachusetts Institute of Technology hosts [OpenCourseware][13], an open source approach to educational materials. However, be aware that although an academic pedigree is important, it is not the primary concern. This is because in graduate school (and particularly for funding) you are applying to a research group (i.e., a single professor) in addition to applying to the university and program.

### How to get a job in an open source lab

While many academics subscribe to open source principles and many schools are supportive of open source overall, the set of hard-core open source lab groups is fairly select. NetworkWorld offers [six examples][14], Wikipedia keeps an incomplete [list][15], and I maintain a list of contributors to open source hardware for science on [Appropedia][16]. There are many more to choose from (for example, anyone who attends the open science conferences, [GOSH][17], etc.).

I run one of these labs myself, and I hope to offer some insight into the process of acquiring funding for potential graduate students. My group studies solar cells and open hardware. [Solar photovoltaic technology represents one of the fastest growing industries][18] and the [open source hardware movement][19] (particularly [RepRap][20] 3D printers) is exploding. Because my lab, the Michigan Tech Open Sustainability Technology ([MOST][21]) Lab, is on the cutting edge of two popular fields, entrance into the group is extremely competitive. This is generally the case with most other open source research groups, which I am happy to report are increasing in both size and overall density within the academic community.

There are two routes you can take to getting a job in an open source lab: 1) the direct route and 2) the indirect route.

First, the direct route.

### Make personal contact and stand out

Applying to an open source academic lab usually starts with emailing the professor who runs the lab directly. To start, make sure your email is actually addressed to the professor by name and catches his or her interest in the subject line and first sentence. This is necessary because, in general, professors want students working in their labs who share an interest in their research areas. They do not simply want to hire someone who is looking for a job. There are thousands of students looking for positions, so professors can be fairly picky about their selections. You need to prove your interest. Professors literally get dozens of email applications a week, so you must make sure you stand out.

### Get good grades and study for the GREs

In addition, you need to cover all the obvious bases. You are going to be judged first by your numbers. You must maintain high grades and get good GRE scores. Even if you are an awesome person, if you do not have scores and grades high enough to impress, you will not meet the minimum requirements for the graduate program and will not even make the list for research assistantships. For my lab, competitive graduate students need to be in the top 10% in grades and test scores (GRE ninetieth-percentile scores are above 162 for verbal, 164 for quantitative, and 5 or higher in analytical writing; international students will need TOEFL scores greater than 100 and IELTS scores greater than 7.5).

You can find less competitive groups, but grades and scores will largely determine your chances, particularly the GRE if you are coming from outside the country. There are simply too many universities throughout the world to allow for the evaluation of the quality of a particular grade in a particular school in a particular class. Thus, and I realize this is absurdly reductionist, the practicalities of graduate school admission mean that the GRE becomes a way of quickly vetting students. Realize, however, that you can study for the GRE to improve your scores. Some international students are known for taking a year off to study and then knocking out perfect scores. You do not need to take it that far, because the nature of U.S. funding favors domestic students over international students, but you should study hard for the tests.

Even if your scores are not perfect, you can raise your chances considerably by proving your research interests. This is where the open source philosophy really pays some dividends. Unlike peers who intern at a proprietary company and can say generally, but not specifically, what they worked on, if you work in open source, a professor can see and vet your contributions to a project directly. Ideal applicants have a history and a portfolio already built up in the areas of the research group or closely related areas.

### Show and share your work

To gain entrance to my research group, and those like it, we really want to see your work. This means you should make some sort of personal webpage and load it up with your successful projects. You should have undertaken some major project in the research area you want to join. For my group it might be publishing a paper in a peer-reviewed journal as an undergrad, developing a new [open scientific method][22], or making valuable contributions to a large FOSS project, such as [Debian][23]. The project may be applied; for example, it could be an applied sustainability project, such as organizing an [Engineers Without Borders][24] chapter at your school, or open hardware, such as founding a [hackerspace][25].

However, not all of your accomplishments need to be huge or academic undergraduate research. If you restored a car, I want to know about it. If you have designed a cool video game, I want to play it. If you made a mod on the RepRap that I 3D print with, or were a major developer of FOSS our group uses, I can more or less guarantee you a position if I have one.

If you are a good student you will be accepted into many graduate programs, but if funding is low you may not be offered a research assistantship immediately. Do not take rejection personally. You might be the perfect student for a whole range of research projects, and a professor may really want you, but simply may not have the funding when you apply. Unfortunately, there has been a stream of pretty vicious [cutbacks to academia in the U.S.][26] in recent years, so research assistantships are not as numerous as they once were. You should apply to several programs and to many professors, because you never know who is going to have funding that matches up with your graduate school career.

This brings us to the second path to getting a good job in an open source graduate lab, the indirect one.

### Sneak in

The first step for this approach is ensuring you meet the minimum requirements for the particular graduate school, and apply. These requirements tend to be much lower than those advertised by an open source lab director. Once you are accepted to a university, you can be placed in the teaching assistant (TA) pool. This also is a way to pay for graduate school, although it lacks the benefit of being paid to work on your thesis, which you will have to do on your own time. While you are establishing yourself at the university by getting good grades and being a good TA, you can attempt to volunteer in the open source lab of your choosing. Most professors with capacity in their lab will take on such self-funded students. If there really is no money, often the professor will offer you some form of independent-study credits for your work. These can be used to reduce your class load, giving you time to do research. Take these credits, work hard, and prove yourself.

This gets your foot in the door. Your chances of pulling a research assistantship will skyrocket at this point. In general, professors are always applying for funding that is awarded somewhat unpredictably, and they often must fill a research position on short notice when it comes through. If you are good and physically there, your chances of winning those funds are much better. Even in the worst-case scenario, in which you are able to work in an open source lab but funding never comes, the nature of open source research will again help you. Your projects will be more easily accessible to other professors (who may have funding), and all of your research (even if only paid hourly) will be disclosed to the public. This is a major benefit that is lost to all of those working on proprietary or secret military-related projects. If your work is good, access to your technical work can help you land a position in another group, a program, a school (for example, as a Master's student applying to a PhD program elsewhere), or a better, higher-paying job.

**Work hard and share your research aggressively, following the open source model, and it will pay off.**

Good luck!

--------------------------------------------------------------------------------

译者简介:

Joshua Pearce - Dr. Joshua Pearce is cross-appointed as an Associate Professor in Materials Science & Engineering and Electrical & Computer Engineering at Michigan Tech. He currently runs the Michigan Tech Open Sustainability Technology (MOST) group. He is the author of the Open Source Lab.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/1/grad-school-open-source-academic-lab

作者:[Joshua Pearce][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jmpearce
[1]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab?rate=aJZB6TNyQIo2EOgqPxN8P9a5aoiYgLhtP9GujsPCJYk
[2]:http://www.wired.com/2014/07/openhatch/
[3]:http://www.indeed.com/salary?q1=linux&q2=microsoft+windows
[4]:http://www.appropedia.org/MOST_application_process#Undergraduates
[5]:http://spectrum.ieee.org/at-work/tech-careers/where-the-jobs-are-2014
[6]:https://www.asme.org/career-education/articles/early-career-engineers/engineering-salaries-on-the-rise
[7]:http://web.mit.edu/registrar/reg/costs/
[8]:http://www.mtu.edu/alumni/favorites/snowfall/
[9]:http://www.mtu.edu/gradschool/admissions/financial/cost/
[10]:http://www.portalprogramas.com/en/how-to/best-american-universities-open-source-2014.html
[11]:http://www.rit.edu/news/story.php?id=50590
[12]:http://www.mtu.edu/enterprise/teams/
[13]:https://ocw.mit.edu/index.htm
[14]:http://www.networkworld.com/article/3062660/open-source-tools/6-colleges-turning-out-open-source-talent.html
[15]:https://en.wikipedia.org/wiki/Open_Source_Lab
[16]:http://www.appropedia.org/Open-source_Lab#Examples
[17]:http://openhardware.science/
[18]:https://hbr.org/2016/08/what-if-all-u-s-coal-workers-were-retrained-to-work-in-solar
[19]:http://www.oshwa.org/
[20]:http://reprap.org/
[21]:http://www.appropedia.org/MOST
[22]:http://openwetware.org/wiki/Main_Page
[23]:https://www.debian.org/
[24]:http://www.appropedia.org/Engineers_Without_Borders
[25]:http://www.appropedia.org/Hackerspace
[26]:http://www.cbpp.org/research/state-by-state-fact-sheets-higher-education-cuts-jeopardize-students-and-states-economic
[27]:https://opensource.com/user/26164/feed
[28]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab#comments
[29]:https://opensource.com/users/jmpearce
@ -1,66 +0,0 @@

Poverty Helps You Keep Technology Safe and Easy
============================================================

> In the technology age, there might be some previously unknown advantages to living on the bottom rungs of the economic ladder. The question is, do they outweigh the disadvantages?

### Roblimo’s Hideaway

![Poor Linux](https://i0.wp.com/fossforce.com/wp-content/uploads/2017/02/trailerpark.jpg?resize=525%2C381)

Earlier this week I saw a ZDNet story titled [Vizio: The spy in your TV][1] by my friend Steven J. Vaughan-Nichols. Scary stuff. I had a vision of my wife and me and a few dozen of our closest friends having a secret orgy in our living room, except our smart TV’s unblinking eye was recording our every thrust and parry (you might say). Zut alors! In this day of Internet everywhere, we all know that what goes online, stays online. Suddenly our orgy wasn’t secret, and my hopes of becoming the next President were dashed.

Except… lucky me! I’m poor, so I have an oldie-but-goodie dumb TV that doesn’t have a camera. There’s no way _my_ old Vizio can spy on us. As Mel Brooks didn’t quite say, “[It’s good to be the poverty case][2].”

Now about that Internet-connected thermostat. I don’t have one. They’re not only expensive (which is why I don’t have one), but according to [this article,][3] they can be hacked to run ransomware. Oh my! Once again, poverty saves me from a tech problem that can easily afflict my more prosperous neighbors.

And how about the latest iPhone and the skinniest MacBook Pro? Apple sells the iPhone 7 Plus (gotta have the plussier one) for $769 or more. The MacBook, despite Scottish connotations of thrift, is Apple-priced “From $1299.” That’s a bunch of money, especially since we all know that as soon as you buy an Apple product it is obsolete and you need to get ready to buy a new, fancier one.

Also, don’t these things explode sometimes? Or catch on fire or something? My [sub-$100 Android phone][4] is safe as houses by comparison. (It has a bigger screen than the biggest-screen iPhone 7, too. Amazing!)

Really big, safe smartphone for cheap. Check. Simple, old-fashioned, non-networked thermostat that can’t be hacked. Check. TV without the spycams most of the Money-TVs have. Check.

But wait! There’s more! The [Android phones that got famous for burning up][5] everything in sight were top-dollar models my wife says she wouldn’t want even if we _could_ afford them. Safety first, right? Frugality’s up there, too.

Now let’s talk about how I got started with Linux.

Guess what? It was because I was poor! The PC I had back in the days of yore ran DOS just fine, but couldn’t touch Windows 98 when it came out. Not only that, but Windows was expensive, and I was poor. Luckily, I had time on my hands, so I rooted around on the Internet (at phone modem speed) and eventually lit upon Red Hat Linux, which took forever to download and had an install procedure so complicated that instead of figuring it out I wrote an article about how Linux might be great for home computer use someday in the future, but not at the moment.

This led to the discovery of several helpful local Linux Users Groups (LUGs) and skilled help getting Linux going on my admittedly creaky PC. And that, you might say, led to my career as an IT writer and editor, including my time at Slashdot, NewsForge, and Linux.com.

This effectively, albeit temporarily, ended my poverty, but with the help of needy relatives — and later, needy doctors and hospitals — I was able to stay true to my “po’ people” roots. I’m glad I did. You’ve probably seen [this article][6] about hackers remotely shutting down a Jeep Cherokee. Hah! My 1996 Jeep Cherokee is totally immune to this kind of attack. Even my 2013 Kia Soul is _relatively_ immune, since it lacks remote start/stop and other deluxe convenience features that make new cars easy to hack.

And the list goes on… same as [the beat went on][7] for Sonny and Cher. The more conveniences and Internet connections you have, the more vulnerable you are. Home automation? It makes you into a giant hacking target. There’s also a (distant) possibility that your automated, microprocessor-controlled home could become self-aware, suddenly say “I can’t do that, Dave,” and refuse to listen to your frantic cries that you aren’t Dave as it dumps you into the Internet-aware garbage disposal.

The solution? You got it! Stay poor! Own the fewest possible web-connected cameras and microphones. Don’t get a thermostat people in Nigeria can program to turn your temperature up and down on one-minute cycles. No automatic lights. I mean… I MEAN… is it really all that hard to flick a light switch? I know, that’s something a previous generation took for granted, the same way they once walked across the room to change TV channels and didn’t complain about it.

Computers? I have (not at my own expense) computers on my desk that run Mac OS, Windows, and Linux. Guess which OS causes me the least grief and confusion? You got it. _The one that cost the least!_

So I leave you with this thought: In today’s overly connected world of overly complex technology, one of the kindest parting comments you can make to someone you care about is, ** _“Stay poor, my friend!”_ **

--------------------------------------------------------------------------------

作者简介:

Robin "Roblimo" Miller is a freelance writer and former editor-in-chief at Open Source Technology Group, the company that owned SourceForge, freshmeat, Linux.com, NewsForge, ThinkGeek and Slashdot, and until recently served as a video editor at Slashdot. He also publishes the blog Robin ‘Roblimo’ Miller’s Personal Site. @robinAKAroblimo

--------------------------------------------------------------------------------

via: http://fossforce.com/2017/02/poverty-helps-keep-technology-safe-easy/

作者:[Robin "Roblimo" Miller][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.roblimo.com/
[1]:http://www.zdnet.com/article/vizio-the-spy-in-your-tv/
[2]:https://www.youtube.com/watch?v=StJS51d1Fzg
[3]:https://www.infosecurity-magazine.com/news/defcon-thermostat-control-hacked/
[4]:https://www.amazon.com/LG-Stylo-Prepaid-Carrier-Locked/dp/B01FSVN3W2/ref=sr_1_1
[5]:https://www.cnet.com/news/why-is-samsung-galaxy-note-7-exploding-overheating/
[6]:https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
[7]:https://www.youtube.com/watch?v=umrp1tIBY8Q
@ -1,64 +0,0 @@

How I became a project team leader in open source
============================================================

![How I became a project team leader in open source](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_leadership_brand.png?itok=XSHoZZoG "How I became a project team leader in open source")

Image by: opensource.com

> _The only people to whose opinions I listen now with any respect are people much younger than myself. They seem in front of me. Life has revealed to them her latest wonder._ ― [Oscar Wilde][1], [The Picture of Dorian Gray][2]

2017 marks two decades since I was first introduced to the concept of "open source" (though the term wasn't coined until later), and a decade since I made my first open source documentation contribution. Each year since has marked another milestone on that journey: new projects, new toolchains, becoming a core contributor, new languages, and becoming a Program Technical Lead (PTL).

2017 is also the year I will take a step back, take a deep breath, and consciously give the limelight to others.

As an idealistic young university undergraduate I hung around with the nerds in the computer science department. I was studying arts and, later, business, but somehow I recognized even then that these were my people. I'm forever grateful to a young man (his name was Michael, as so many people in my story are) who introduced me first to IRC and, gradually, to Linux, Google (the lesser-known search engine at the time), HTML, and the wonders of open source. He and I were the first people I knew to use USB storage drives, and oh how we loved explaining what they were to the curious in the campus computer lab.

After university, I found myself working for a startup in Canberra, Australia. Although the startup eventually failed to... well, start, I learned some valuable skills from another dear friend, David. I already knew I had a passion for writing, but David showed me how I could use that skill to build a career, and gave me the tools I needed to actually make that happen. He is also responsible for my first true language love: [LaTeX][3]. To this day, I can spot a LaTeX document from forty paces, which has prompted many an awkward conversation with the often-unwitting bearer of the document in question.

In 2007, I began working for Red Hat, in what was then known as Engineering Content Services. It was a heady time. Red Hat was determined to invest in an in-house documentation and translation team, and another man by the name of Michael was determined that this would happen in Brisbane, Australia. It was an extraordinary case of right place, right time. I seized the opportunity and, working alongside people I still count among the best and brightest technical writers I know, we set about making that thing happen.

Those early days at Red Hat were some of the craziest and most challenging of my career so far. We grew rapidly, there were always several new hires waiting for us to throw them in the deep end, and we had the determination and tenacity to try new things constantly. _Release early, release often_ became a central tenet of our group, and we came up with some truly revolutionary ways of delivering content, as well as some appallingly bad ones. It was here that I discovered the beauty of data typing, single sourcing, remixing content, and using metadata to drive content curation. We weren't trying to tell stories to our readers, but to give our readers the tools to create their own stories.

As the Red Hat team matured, so too did my career, and I eventually led a team of writers. Around the same time, I started attending and speaking at tech conferences, spreading the word about these new ways of developing content, and trying to lead developers into looking at documentation in new ways. I had a thirst for sharing this knowledge and passion for technical documentation with the world, and with the Red Hat content team slowing their growth and maturing, I found myself craving the fast pace of days gone by. It was time to find a new project.

When I joined [Rackspace][4], [OpenStack][5] was starting to really hit its stride. I was on the organizing team for [linux.conf.au][6] in 2013 (ably led by yet another Michael), which became known affectionately as openstack.conf.au due to the sheer amount of OpenStack content that was delivered that year. Anne Gentle had formed the OpenStack documentation team only a year earlier, and I had been watching with interest. The opportunity to work alongside Anne on such an exciting project was irresistible, so by the time 2013 drew to a close, Michael had hired me, and I had become a Racker and a Stacker.

In late 2014, as we were preparing the Kilo release, Anne asked if I would be willing to put my name forward as a candidate for documentation PTL. OpenStack works on a democratic system where individuals self-nominate for the lead, and the active contributors to each project vote when there is more than one candidate. The fact that Anne not only asked me to step up, but also thought I was capable of stepping in her footsteps, was an incredible honor. In early 2015, I was elected unopposed to lead the documentation team for the Liberty release, and we were off to Vancouver.

By 2015, I had managed documentation teams sized between three and 13 staff members, across many time zones, for nearly five years. I had a business management degree and an MBA to my name, had run my own business, seen a tech startup fail, and watched a new documentation team flourish. I felt as though I understood what being a manager was all about, and I guess I did, but I realized I didn't know what being a PTL was all about. All of a sudden, I had a team where I couldn't name each individual, couldn't rely on any one person to come to work on any given day, couldn't delegate tasks with any authority, and couldn't compensate team members for good work. Suddenly, the only tool I had in my arsenal to get work done was my own ability to convince people that they should.

My first release as documentation PTL was basically me stumbling around in the dark and poking at the things I encountered. I relied heavily on the expertise of the existing members of the group, particularly Anne Gentle and Andreas Jaeger (our documentation infrastructure guru), to work out what needed to be done, and I gradually started to document the things I learned along the way. I learned that the key to getting things done in a community was not just to talk and delegate, but to listen and collaborate. I had not only to tell people what to do, but also to convince them that it was a good idea, and to help them see the task through, picking up the pieces if they didn't.

Gradually, and through trial and error, I built the confidence and relationships to get through an OpenStack release successfully with my team and my sanity intact. This wouldn't have happened if the team hadn't been willing to stick by me through the times I was wandering in the woods, and the project would never have gotten off the ground in the first place without the advice and expertise of those who had gone before me. Shoulders of giants, etc.

Somewhat ironically, technical writers aren't very good at documenting their own team processes, so we've been codifying our practices, conventions, tools, and systems. We still have much work to do on this front, but we have made a good start. As the OpenStack documentation team has matured, we have accrued our fair share of [tech debt][7], so dealing with that has been a consistent ribbon through my tenure: not just closing old bugs (not that there hasn't been a lot of that), but also changing our systems to prevent it building up in the first place.

I am now in my tenth year as an open source contributor, and I have four OpenStack releases under my belt: Liberty, Mitaka, Newton, and Ocata. I have been a PTL for two years, and I have seen a lot of great documentation contributors come and go from our little community. I have made an effort to give those who are interested an opportunity to lead: through specialty teams looking after a book or two, release managers who perform the critical tasks to get each new release out into the wild, and moderators who lead a session at OpenStack Summit planning meetings (and help save my voice which, somewhat notoriously, is always completely gone by the end of Summit week).

From these humble roles, the team has grown leaders. In these people, I see myself. They are hungry for change, full of ideas and ideals, and ready to implement crazy schemes and see where it takes them. So, this year, I'm going to take that step back, allow someone else to lead this amazing team, and let the team take their own steps forward. I intend to be here, holding on for the ride. I can't wait to see what happens next.

--------------------------------------------------------------------------------

作者简介:

Lana Brindley - Lana Brindley has several university degrees, a few of which are even relevant to her field. She has been playing and working with technology since she discovered the Hitchhikers’ Guide to the Galaxy text adventure game in the 80’s. Eventually, she worked out a way to get paid for her two passions – writing and playing with gadgetry – and has been a technical writer ever since.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/2/my-open-source-story-leader

作者:[Lana Brindley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/loquacities
[1]:http://www.goodreads.com/author/show/3565.Oscar_Wilde
[2]:http://www.goodreads.com/work/quotes/1858012
[3]:https://www.latex-project.org/
[4]:https://www.rackspace.com/en-us
[5]:https://www.openstack.org/
[6]:https://linux.conf.au/
[7]:https://en.wikipedia.org/wiki/Technical_debt
@ -1,91 +0,0 @@

# rusking translating

What a Linux Desktop Does Better
============================================================

![linux-desktop-advantages](http://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-linux-1.jpg)

![](http://www.linuxinsider.com/images/2015/image-credit-adobe-stock_130x15.gif)

After I [resolved to adopt Linux][3], my confidence grew slowly but surely. Security-oriented considerations were compelling enough to convince me to switch, but I soon discovered many more advantages to the Linux desktop.

For those still unsure about making the transition, or those who have done so but may not know everything their system can do, I'll showcase here some of the Linux desktop's advantages.

### You Can't Beat Free!

First and foremost, Linux is literally free. Neither the operating system nor any of the programs you run will cost you a dime. Beyond the obvious financial benefit of getting software for free, Linux allows users to _be_ free by affording access to the basic tools of modern computer use -- such as word processing and photo editing -- which otherwise might be unavailable due to the cost barrier.

Microsoft Office, which sets the de facto standard formats for documents of nearly every kind, demands a US$70 per year subscription. However, you can run [LibreOffice][4] for free while still handling documents in all the same formats with ease.

Free software also gives you the chance to try new programs, and with them new ways of pursuing business and leisure, without their prospective costs forcing you to make a commitment.

Instead of painstakingly weighing the merits of Mac or Windows and then taking a leap of faith, you can consider a vast spectrum of choices offered by [hundreds of distributions][5] -- basically, different flavors of Linux -- by trying each in turn until you find the one that's right for you.

Linux can even save money on hardware, as some manufacturers -- notably Dell -- offer a discount for buying a computer with Linux preinstalled. They can charge less because they don't have to pass on the cost of licensing Windows from Microsoft.

### You Can Make It Your Own

There is practically nothing in Linux that can't be customized. Among the projects central to the Linux ecosystem are desktop environments -- that is, collections of basic user programs and visual elements, like status bars and launchers, that make up the user interface.

Some Linux distributions come bundled with a desktop environment. Ubuntu is paired with the Unity desktop, for example. Others, such as Debian, give you a choice at installation. In either case, users are free to change to any one they like.

Most distributions officially support (i.e., vouch for compatibility with) dozens of the most popular desktops, which makes finding the one you like best that much simpler. Within the pantheon of desktops, you can find anything from glossy modern interfaces like KDE Plasma or [Gnome][6], to simple and lightweight ones like Xfce and MATE. Within each of these, you can personalize your setup further by changing the themes, system trays and menus, choosing from galleries of other users' screens for inspiration.

The customization possibilities go well beyond aesthetics. If you prize system stability, you can run a distribution like Mint, which offers dependable hardware support and ensures smooth updates.

On the other hand, if you want to live on the cutting edge, you can install an OS like Arch Linux, which gives you the latest update to each program as soon as developers release it.

If you'd rather take the middle path and stick with a stable foundation while running a few programs on the bleeding edge, you can download the source code -- that is, the code files written by the program's developers -- and compile it yourself. That requires running the source code through a utility to translate it into files of 1s and 0s (called "binaries") that your computer can execute.

The Linux system is yours to tweak in whatever ways work best for you.

### Lock It Down

This versatility lends itself well to a third major advantage of Linux: security.

To start with, while there are viruses for Linux, the number pales in comparison even to those for Mac. More importantly, the fact that the code for the core OS framework is open source -- and thus transparent to evaluation -- means there are fewer vulnerabilities in your basic system.

While proprietary (i.e., non-open source) OSes are sometimes criticized as maliciously compromising user security, they pose just as great a threat due to poorly implemented, opaque processes.

For instance, lots of Windows computers by default [do not check the cryptographic signatures][7] -- the mathematically guaranteed seals of authenticity -- on OS updates.

With Linux, you can implement as much fine-grained control over signature checking as you choose, and the major distributions enforce safe default settings. This kind of accountability arises directly from the transparency of Linux's open source development model.

Rolling-release distributions like Arch add even more security, as critical patches are available almost as soon as they are approved. You would be hard-pressed to find a single mainstream OS that offers daily updates, but with Linux there are dozens.

### It's a Natural Development Platform

With a Linux desktop, developers -- or anyone interested in programming -- have the added benefit of Linux's great development tools. Among the best compiling tools around are the GNU C Compiler, or GCC, and GNU Autoconf, both key foundations of Linux.

Linux comfortably supports dozens of programming languages available in most default repositories, which are the pools of pre-compiled software available to a distribution.

Much of the Internet's infrastructure and many connected devices run on Linux -- from servers to smart devices such as security cameras and thermostats. Coding for these devices on Linux makes testing that much easier. If you have a computer-related project, Linux has everything you need to get the job done.

### Community Is at the Heart of Everything Linux

Finally, Linux has a tightly knit and friendly community. Because Linux is a relatively niche desktop OS, with around 3 percent market share, those who use it want prospective newcomers to stick around.

User forums, especially for beginner-friendly distributions like Ubuntu, include comprehensive guides to walk you through the basics and troubleshoot issues. Because power users tend to prefer Linux, wiki pages for distributions often contain thorough documentation -- often applicable across distributions -- to enable users to pursue even the most esoteric projects.

There are even casual Linux forums and [Reddit][8] threads for everything from comparing different software to showing off desktop themes. Taken together, this makes for a community with more camaraderie than I ever experienced as a Windows user.

Immersing myself in the world of Linux for just over two years has convinced me that it offers something for everyone. I hope this brief sampling of its advantages gives you a sense of what you might discover in a Linux desktop. But don't just take my word for it -- the real fun is finding out for yourself!

--------------------------------------------------------------------------------

via: http://www.linuxinsider.com/story/84326.html?rss=1

作者:[Jonathan Terrasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linkedin.com/company/ect-news-network
[1]:http://www.linuxinsider.com/story/84326.html?rss=1#
[2]:http://www.linuxinsider.com/perl/mailit/?id=84326
[3]:http://www.linuxinsider.com/story/84286.html
[4]:http://www.libreoffice.org/
[5]:https://en.wikipedia.org/wiki/Linux_distribution
[6]:http://en.wikipedia.org/wiki/GNOME
[7]:https://duo.com/blog/out-of-box-exploitation-a-security-analysis-of-oem-updaters
[8]:http://www.reddit.com/
@ -1,290 +0,0 @@

[A Programmer’s Introduction to Unicode][18]
============================================================

Unicode! 🅤🅝🅘🅒🅞🅓🅔‽ 🇺🇳🇮🇨🇴🇩🇪! 😄 The very name strikes fear and awe into the hearts of programmers worldwide. We all know we ought to “support Unicode” in our software (whatever that means—like using `wchar_t` for all the strings, right?). But Unicode can be abstruse, and diving into the thousand-page [Unicode Standard][27] plus its dozens of supplementary [annexes, reports][28], and [notes][29] can be more than a little intimidating. I don’t blame programmers for still finding the whole thing mysterious, even 30 years after Unicode’s inception.

A few months ago, I got interested in Unicode and decided to spend some time learning more about it in detail. In this article, I’ll give an introduction to it from a programmer’s point of view.

I’m going to focus on the character set and what’s involved in working with strings and files of Unicode text. However, in this article I’m not going to talk about fonts, text layout/shaping/rendering, or localization in detail—those are separate issues, beyond my scope (and knowledge) here.

* [Diversity and Inherent Complexity][10]
* [The Unicode Codespace][11]
  * [Codespace Allocation][2]
  * [Scripts][3]
  * [Usage Frequency][4]
* [Encodings][12]
  * [UTF-8][5]
  * [UTF-16][6]
* [Combining Marks][13]
  * [Canonical Equivalence][7]
  * [Normalization Forms][8]
  * [Grapheme Clusters][9]
* [And More…][14]

### Diversity and Inherent Complexity

As soon as you start to study Unicode, it becomes clear that it represents a large jump in complexity over character sets like ASCII that you may be more familiar with. It’s not just that Unicode contains a much larger number of characters, although that’s part of it. Unicode also has a great deal of internal structure, features, and special cases, making it much more than what one might expect a mere “character set” to be. We’ll see some of that later in this article.

When confronting all this complexity, especially as an engineer, it’s hard not to find oneself asking, “Why do we need all this? Is this really necessary? Couldn’t it be simplified?”

However, Unicode aims to faithfully represent the _entire world’s_ writing systems. The Unicode Consortium’s stated goal is “enabling people around the world to use computers in any language”. And as you might imagine, the diversity of written languages is immense! To date, Unicode supports 135 different scripts, covering some 1100 languages, and there’s still a long tail of [over 100 unsupported scripts][31], both modern and historical, which people are still working to add.

Given this enormous diversity, it’s inevitable that representing it is a complicated project. Unicode embraces that diversity, and accepts the complexity inherent in its mission to include all human writing systems. It doesn’t make a lot of trade-offs in the name of simplification, and it makes exceptions to its own rules where necessary to further its mission.

Moreover, Unicode is committed not just to supporting texts in any _single_ language, but also to letting multiple languages coexist within one text—which introduces even more complexity.

Most programming languages have libraries available to handle the gory low-level details of text manipulation, but as a programmer, you’ll still need to know about certain Unicode features in order to know when and how to apply them. It may take some time to wrap your head around it all, but don’t be discouraged—think about the billions of people for whom your software will be more accessible through supporting text in their language. Embrace the complexity!

### The Unicode Codespace

Let’s start with some general orientation. The basic elements of Unicode—its “characters”, although that term isn’t quite right—are called _code points_. Code points are identified by number, customarily written in hexadecimal with the prefix “U+”, such as [U+0041 “A” latin capital letter a][33] or [U+03B8 “θ” greek small letter theta][34]. Each code point also has a short name, and quite a few other properties, specified in the [Unicode Character Database][35].

The set of all possible code points is called the _codespace_. The Unicode codespace consists of 1,114,112 code points. However, only 128,237 of them—about 12% of the codespace—are actually assigned, to date. There’s plenty of room for growth! Unicode also reserves an additional 137,468 code points as “private use” areas, which have no standardized meaning and are available for individual applications to define for their own purposes.
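
To make those numbers concrete, here is a quick sanity check (a minimal sketch in Python, which exposes the codespace limits directly):

```python
import sys

# 17 planes of 65,536 code points each make up the whole codespace:
assert 17 * 65536 == 1114112
# ...which is exactly the range U+0000 through U+10FFFF:
assert 0x10FFFF + 1 == 1114112
# Python 3 strings can hold any code point in that range:
assert sys.maxunicode == 0x10FFFF
```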

### Codespace Allocation

To get a feel for how the codespace is laid out, it’s helpful to visualize it. Below is a map of the entire codespace, with one pixel per code point. It’s arranged in tiles for visual coherence; each small square is 16×16 = 256 code points, and each large square is a “plane” of 65,536 code points. There are 17 planes altogether.

[![Map of the Unicode codespace (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png "Map of the Unicode codespace (click to zoom)")][37]

White represents unassigned space. Blue is assigned code points, green is private-use areas, and the small red area is surrogates (more about those later). As you can see, the assigned code points are distributed somewhat sparsely, but concentrated in the first three planes.

Plane 0 is also known as the “Basic Multilingual Plane”, or BMP. The BMP contains essentially all the characters needed for modern text in any script, including Latin, Cyrillic, Greek, Han (Chinese), Japanese, Korean, Arabic, Hebrew, Devanagari (Indian), and many more.

(In the past, the codespace was just the BMP and no more—Unicode was originally conceived as a straightforward 16-bit encoding, with only 65,536 code points. It was expanded to its current size in 1996. However, the vast majority of code points in modern text belong to the BMP.)

Plane 1 contains historical scripts, such as Sumerian cuneiform and Egyptian hieroglyphs, as well as emoji and various other symbols. Plane 2 contains a large block of less-common and historical Han characters. The remaining planes are empty, except for a small number of rarely-used formatting characters in Plane 14; planes 15–16 are reserved entirely for private use.

### Scripts

Let’s zoom in on the first three planes, since that’s where the action is:

[![Map of scripts in Unicode planes 0–2 (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png "Map of scripts in Unicode planes 0–2 (click to zoom)")][39]

This map color-codes the 135 different scripts in Unicode. You can see how Han and Korean take up most of the range of the BMP (the left large square). By contrast, all of the European, Middle Eastern, and South Asian scripts fit into the first row of the BMP in this diagram.

Many areas of the codespace are adapted or copied from earlier encodings. For example, the first 128 code points of Unicode are just a copy of ASCII. This has clear benefits for compatibility—it’s easy to losslessly convert texts from smaller encodings into Unicode (and the other direction too, as long as no characters outside the smaller encoding are used).

### Usage Frequency

One more interesting way to visualize the codespace is to look at the distribution of usage—in other words, how often each code point is actually used in real-world texts. Below is a heat map of planes 0–2 based on a large sample of text from Wikipedia and Twitter (all languages). Frequency increases from black (never seen) through red and yellow to white.

[![Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png "Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)")][41]

You can see that the vast majority of this text sample lies in the BMP, with only scattered usage of code points from planes 1–2. The biggest exception is emoji, which show up here as the several bright squares in the bottom row of plane 1.

### Encodings

We’ve seen that Unicode code points are abstractly identified by their index in the codespace, ranging from U+0000 to U+10FFFF. But how do code points get represented as bytes, in memory or in a file?

The most convenient, computer-friendliest (and programmer-friendliest) thing to do would be to just store the code point index as a 32-bit integer. This works, but it consumes 4 bytes per code point, which is sort of a lot. Using 32-bit ints for Unicode will cost you a bunch of extra storage, memory, and performance in bandwidth-bound scenarios, if you work with a lot of text.

Consequently, there are several more-compact encodings for Unicode. The 32-bit integer encoding is officially called UTF-32 (UTF = “Unicode Transformation Format”), but it’s rarely used for storage. At most, it comes up sometimes as a temporary internal representation, for examining or operating on the code points in a string.

Much more commonly, you’ll see Unicode text encoded as either UTF-8 or UTF-16. These are both _variable-length_ encodings, made up of 8-bit or 16-bit units, respectively. In these schemes, code points with smaller index values take up fewer bytes, which saves a lot of memory for typical texts. The trade-off is that processing UTF-8/16 texts is more programmatically involved, and likely slower.
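
To see the trade-off concretely, here is a small Python comparison (a sketch; the byte counts in the comments apply to this particular sample string, which mixes ASCII, Greek, and an emoji):

```python
s = "Hello, θεοί! 😄"              # 14 code points

print(len(s.encode("utf-8")))      # 21 bytes: 1 per ASCII char, 2 per Greek letter, 4 for the emoji
print(len(s.encode("utf-16-le")))  # 30 bytes: 2 per BMP code point, 4 for the emoji
print(len(s.encode("utf-32-le")))  # 56 bytes: a flat 4 per code point
```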

### UTF-8

In UTF-8, each code point is stored using 1 to 4 bytes, based on its index value.

UTF-8 uses a system of binary prefixes, in which the high bits of each byte mark whether it’s a single byte, the beginning of a multi-byte sequence, or a continuation byte; the remaining bits, concatenated, give the code point index. This table shows how it works:

| UTF-8 (binary) | Code point (binary) | Range |
| --- | --- | --- |
| 0xxxxxxx | xxxxxxx | U+0000–U+007F |
| 110xxxxx 10yyyyyy | xxxxxyyyyyy | U+0080–U+07FF |
| 1110xxxx 10yyyyyy 10zzzzzz | xxxxyyyyyyzzzzzz | U+0800–U+FFFF |
| 11110xxx 10yyyyyy 10zzzzzz 10wwwwww | xxxyyyyyyzzzzzzwwwwww | U+10000–U+10FFFF |

A handy property of UTF-8 is that code points below 128 (ASCII characters) are encoded as single bytes, and all non-ASCII code points are encoded using sequences of bytes 128–255. This has a couple of nice consequences. First, any strings or files out there that are already in ASCII can also be interpreted as UTF-8 without any conversion. Second, lots of widely-used string programming idioms—such as null termination, or delimiters (newlines, tabs, commas, slashes, etc.)—will just work on UTF-8 strings. ASCII bytes never occur inside the encoding of non-ASCII code points, so searching byte-wise for a null terminator or a delimiter will do the right thing.
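
As an illustration, here is a naïve Python encoder that follows the table above directly (a sketch only; a production encoder would also have to reject the surrogate range U+D800–U+DFFF):

```python
def utf8_encode(cp: int) -> bytes:
    """Encode one code point to UTF-8 bytes, per the prefix table above."""
    if cp < 0x80:                     # 0xxxxxxx
        return bytes([cp])
    if cp < 0x800:                    # 110xxxxx 10yyyyyy
        return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
    if cp < 0x10000:                  # 1110xxxx 10yyyyyy 10zzzzzz
        return bytes([0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
    if cp <= 0x10FFFF:                # 11110xxx 10yyyyyy 10zzzzzz 10wwwwww
        return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
    raise ValueError("code point out of range")

assert utf8_encode(0x41) == b"A"                     # ASCII passes through unchanged
assert utf8_encode(0x3B8) == "θ".encode("utf-8")     # 2 bytes
assert utf8_encode(0x1F604) == "😄".encode("utf-8")  # 4 bytes
```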

Thanks to this convenience, it’s relatively simple to extend legacy ASCII programs and APIs to handle UTF-8 strings. UTF-8 is very widely used in the Unix/Linux and Web worlds, and many programmers argue [UTF-8 should be the default encoding everywhere][44].

However, UTF-8 isn’t a drop-in replacement for ASCII strings in all respects. For instance, code that iterates over the “characters” in a string will need to decode UTF-8 and iterate over code points (or maybe grapheme clusters—more about those later), not bytes. When you measure the “length” of a string, you’ll need to think about whether you want the length in bytes, the length in code points, the width of the text when rendered, or something else.
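
A short Python illustration of how quickly those notions of “length” diverge:

```python
s = "e\u0301"                  # "é" spelled as "e" plus a combining acute accent

print(len(s))                  # 2 (code points)
print(len(s.encode("utf-8")))  # 3 (bytes)
# ...and yet a reader sees one character, so pick your definition of "length" deliberately
```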

### UTF-16

The other encoding that you’re likely to encounter is UTF-16. It uses 16-bit words, with each code point stored as either 1 or 2 words.

Like UTF-8, we can express the UTF-16 encoding rules in the form of binary prefixes:

| UTF-16 (binary) | Code point (binary) | Range |
| --- | --- | --- |
| xxxxxxxxxxxxxxxx | xxxxxxxxxxxxxxxx | U+0000–U+FFFF |
| 110110xxxxxxxxxx 110111yyyyyyyyyy | xxxxxxxxxxyyyyyyyyyy + 0x10000 | U+10000–U+10FFFF |

A more common way that people talk about UTF-16 encoding, though, is in terms of code points called “surrogates”. All the code points in the range U+D800–U+DFFF—or in other words, the code points that match the binary prefixes `110110` and `110111` in the table above—are reserved specifically for UTF-16 encoding, and don’t represent any valid characters on their own. They’re only meant to occur in the 2-word encoding pattern above, which is called a “surrogate pair”. Surrogate code points are illegal in any other context! They’re not allowed in UTF-8 or UTF-32 at all.
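
The arithmetic behind a surrogate pair is easy to sketch in Python (a toy version, just to show where the bits go):

```python
def to_surrogate_pair(cp: int) -> tuple:
    """Split a supplementary code point (U+10000..U+10FFFF) into a UTF-16 surrogate pair."""
    assert 0x10000 <= cp <= 0x10FFFF
    v = cp - 0x10000                             # the 20 bits left to encode
    return (0xD800 | (v >> 10), 0xDC00 | (v & 0x3FF))

# U+1F604 "😄" becomes the pair D83D DE04:
assert to_surrogate_pair(0x1F604) == (0xD83D, 0xDE04)
```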

Historically, UTF-16 is a descendant of the original, pre-1996 versions of Unicode, in which there were only 65,536 code points. The original intention was that there would be no different “encodings”; Unicode was supposed to be a straightforward 16-bit character set. Later, the codespace was expanded to make room for a long tail of less-common (but still important) Han characters, which the Unicode designers didn’t originally plan for. Surrogates were then introduced, as—to put it bluntly—a kludge, allowing 16-bit encodings to access the new code points.

Today, JavaScript uses UTF-16 as its standard string representation: if you ask for the length of a string, or iterate over it, etc., the result will be in UTF-16 words, with any code points outside the BMP expressed as surrogate pairs. UTF-16 is also used by the Microsoft Win32 APIs; though Win32 supports either 8-bit or 16-bit strings, the 8-bit version unaccountably still doesn’t support UTF-8—only legacy code-page encodings, like ANSI. This leaves UTF-16 as the only way to get proper Unicode support in Windows.

By the way, UTF-16’s words can be stored either little-endian or big-endian. Unicode has no opinion on that issue, though it does encourage the convention of putting [U+FEFF zero width no-break space][46] at the top of a UTF-16 file as a [byte-order mark][47], to disambiguate the endianness. (If the file doesn’t match the system’s endianness, the BOM will be decoded as U+FFFE, which isn’t a valid code point.)
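
Python’s codecs make the convention easy to observe (a small demonstration; the exact byte values shown assume a little-endian machine):

```python
print("θ".encode("utf-16"))      # b'\xff\xfe\xb8\x03': the BOM, then U+03B8
print("θ".encode("utf-16-le"))   # b'\xb8\x03': endianness is explicit, so no BOM
print(b"\xff\xfe\xb8\x03".decode("utf-16"))  # 'θ': the decoder consumes the BOM
```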

### Combining Marks

In the story so far, we’ve been focusing on code points. But in Unicode, a “character” can be more complicated than just an individual code point!

Unicode includes a system for _dynamically composing_ characters, by combining multiple code points together. This is used in various ways to gain flexibility without causing a huge combinatorial explosion in the number of code points.

In European languages, for example, this shows up in the application of diacritics to letters. Unicode supports a wide range of diacritics, including acute and grave accents, umlauts, cedillas, and many more. All these diacritics can be applied to any letter of any alphabet—and in fact, _multiple_ diacritics can be used on a single letter.

If Unicode tried to assign a distinct code point to every possible combination of letter and diacritics, things would rapidly get out of hand. Instead, the dynamic composition system enables you to construct the character you want, by starting with a base code point (the letter) and appending additional code points, called “combining marks”, to specify the diacritics. When a text renderer sees a sequence like this in a string, it automatically stacks the diacritics over or under the base letter to create a composed character.

For example, the accented character “Á” can be expressed as a string of two code points: [U+0041 “A” latin capital letter a][49] plus [U+0301 “◌́” combining acute accent][50]. This string automatically gets rendered as a single character: “Á”.
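
You can check this yourself in any Unicode-aware language; in Python, for instance, the two spellings look the same on screen but differ in memory:

```python
precomposed = "\u00C1"    # one code point: U+00C1
combining   = "A\u0301"   # two code points: "A" plus the combining acute accent

print(precomposed, combining)            # both render as "Á"
print(len(precomposed), len(combining))  # 1 2
print(precomposed == combining)          # False: equal to the eye, not to ==
```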

Now, Unicode does also include many “precomposed” code points, each representing a letter with some combination of diacritics already applied, such as [U+00C1 “Á” latin capital letter a with acute][51] or [U+1EC7 “ệ” latin small letter e with circumflex and dot below][52]. I suspect these are mostly inherited from older encodings that were assimilated into Unicode, and kept around for compatibility. In practice, there are precomposed code points for most of the common letter-with-diacritic combinations in European-script languages, so they don’t use dynamic composition that much in typical text.

Still, the system of combining marks does allow for an _arbitrary number_ of diacritics to be stacked on any base character. The reductio-ad-absurdum of this is [Zalgo text][53], which works by ͖͟ͅr͞aṋ̫̠̖͈̗d͖̻̹óm̪͙͕̗̝ļ͇̰͓̳̫ý͓̥̟͍ ̕s̫t̫̱͕̗̰̼̘͜a̼̩͖͇̠͈̣͝c̙͍k̖̱̹͍͘i̢n̨̺̝͇͇̟͙ģ̫̮͎̻̟ͅ ̕n̼̺͈͞u̮͙m̺̭̟̗͞e̞͓̰̤͓̫r̵o̖ṷs҉̪͍̭̬̝̤ ̮͉̝̞̗̟͠d̴̟̜̱͕͚i͇̫̼̯̭̜͡ḁ͙̻̼c̲̲̹r̨̠̹̣̰̦i̱t̤̻̤͍͙̘̕i̵̜̭̤̱͎c̵s ͘o̱̲͈̙͖͇̲͢n͘ ̜͈e̬̲̠̩ac͕̺̠͉h̷̪ ̺̣͖̱ḻ̫̬̝̹ḙ̙̺͙̭͓̲t̞̞͇̲͉͍t̷͔̪͉̲̻̠͙e̦̻͈͉͇r͇̭̭̬͖,̖́ ̜͙͓̣̭s̘̘͈o̱̰̤̲ͅ ̛̬̜̙t̼̦͕̱̹͕̥h̳̲͈͝ͅa̦t̻̲ ̻̟̭̦̖t̛̰̩h̠͕̳̝̫͕e͈̤̘͖̞͘y҉̝͙ ̷͉͔̰̠o̞̰v͈͈̳̘͜er̶f̰͈͔ḻ͕̘̫̺̲o̲̭͙͠ͅw̱̳̺ ͜t̸h͇̭͕̳͍e̖̯̟̠ ͍̞̜͔̩̪͜ļ͎̪̲͚i̝̲̹̙̩̹n̨̦̩̖ḙ̼̲̼͢ͅ ̬͝s̼͚̘̞͝p͙̘̻a̙c҉͉̜̤͈̯̖i̥͡n̦̠̱͟g̸̗̻̦̭̮̟ͅ ̳̪̠͖̳̯̕a̫͜n͝d͡ ̣̦̙ͅc̪̗r̴͙̮̦̹̳e͇͚̞͔̹̫͟a̙̺̙ț͔͎̘̹ͅe̥̩͍ a͖̪̜̮͙̹n̢͉̝ ͇͉͓̦̼́a̳͖̪̤̱p̖͔͔̟͇͎͠p̱͍̺ę̲͎͈̰̲̤̫a̯͜r̨̮̫̣̘a̩̯͖n̹̦̰͎̣̞̞c̨̦̱͔͎͍͖e̬͓͘ ̤̰̩͙̤̬͙o̵̼̻̬̻͇̮̪f̴ ̡̙̭͓͖̪̤“̸͙̠̼c̳̗͜o͏̼͙͔̮r̞̫̺̞̥̬ru̺̻̯͉̭̻̯p̰̥͓̣̫̙̤͢t̳͍̳̖ͅi̶͈̝͙̼̙̹o̡͔n̙̺̹̖̩͝ͅ”̨̗͖͚̩.̯͓

A few other places where dynamic character composition shows up in Unicode:

* [Vowel-pointing notation][15] in Arabic and Hebrew. In these languages, words are normally spelled with some of their vowels left out. They then have diacritic notation to indicate the vowels (used in dictionaries, language-teaching materials, children’s books, and such). These diacritics are expressed with combining marks.

  | A Hebrew example, with [niqqud][1]: | אֶת דַלְתִּי הֵזִיז הֵנִיעַ, קֶטֶב לִשְׁכַּתִּי יָשׁוֹד |
  | Normal writing (no niqqud): | את דלתי הזיז הניע, קטב לשכתי ישוד |

* [Devanagari][16], the script used to write Hindi, Sanskrit, and many other South Asian languages, expresses certain vowels as combining marks attached to consonant letters. For example, “ह” + “ि” = “हि” (“h” + “i” = “hi”).

* Korean characters stand for syllables, but they are composed of letters called [jamo][17] that stand for the vowels and consonants in the syllable. While there are code points for precomposed Korean syllables, it’s also possible to dynamically compose them by concatenating their jamo. For example, “ᄒ” + “ᅡ” + “ᆫ” = “한” (“h” + “a” + “n” = “han”).

### Canonical Equivalence

In Unicode, precomposed characters exist alongside the dynamic composition system. A consequence of this is that there are multiple ways to express “the same” string—different sequences of code points that result in the same user-perceived characters. For example, as we saw earlier, we can express the character “Á” either as the single code point U+00C1, _or_ as the string of two code points U+0041 U+0301.

Another source of ambiguity is the ordering of multiple diacritics in a single character. Diacritic order matters visually when two diacritics apply to the same side of the base character, e.g. both above: “ǡ” (dot, then macron) is different from “ā̇” (macron, then dot). However, when diacritics apply to different sides of the character, e.g. one above and one below, then the order doesn’t affect rendering. Moreover, a character with multiple diacritics might have one of the diacritics precomposed and others expressed as combining marks.

For example, the Vietnamese letter “ệ” can be expressed in _five_ different ways:

* Fully precomposed: U+1EC7 “ệ”
* Partially precomposed: U+1EB9 “ẹ” + U+0302 “◌̂”
* Partially precomposed: U+00EA “ê” + U+0323 “◌̣”
* Fully decomposed: U+0065 “e” + U+0323 “◌̣” + U+0302 “◌̂”
* Fully decomposed: U+0065 “e” + U+0302 “◌̂” + U+0323 “◌̣”

Unicode refers to a set of strings like this as “canonically equivalent”. Canonically equivalent strings are supposed to be treated as identical for purposes of searching, sorting, rendering, text selection, and so on. This has implications for how you implement operations on text. For example, if an app has a “find in file” operation and the user searches for “ệ”, it should, by default, find occurrences of _any_ of the five versions of “ệ” above!
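
Python’s standard `unicodedata` module can demonstrate the equivalence; all five spellings collapse to a single string under normalization (covered in the next section):

```python
import unicodedata

spellings = [
    "\u1EC7",         # fully precomposed "ệ"
    "\u1EB9\u0302",   # "ẹ" plus combining circumflex
    "\u00EA\u0323",   # "ê" plus combining dot below
    "e\u0323\u0302",  # "e" plus dot below plus circumflex
    "e\u0302\u0323",  # "e" plus circumflex plus dot below
]

# All five are canonically equivalent, so NFC maps them to one and the same string:
assert len({unicodedata.normalize("NFC", s) for s in spellings}) == 1
```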

### Normalization Forms

To address the problem of “how to handle canonically equivalent strings”, Unicode defines several _normalization forms_: ways of converting strings into a canonical form so that they can be compared code-point-by-code-point (or byte-by-byte).

The “NFD” normalization form fully _decomposes_ every character down to its component base and combining marks, taking apart any precomposed code points in the string. It also sorts the combining marks in each character according to their rendered position, so e.g. diacritics that go below the character come before the ones that go above the character. (It doesn’t reorder diacritics in the same rendered position, since their order matters visually, as previously mentioned.)

The “NFC” form, conversely, puts things back together into precomposed code points as much as possible. If an unusual combination of diacritics is called for, there may not be any precomposed code point for it, in which case NFC still precomposes what it can and leaves any remaining combining marks in place (again ordered by rendered position, as in NFD).

There are also forms called NFKD and NFKC. The “K” here refers to _compatibility_ decompositions, which cover characters that are “similar” in some sense but not visually identical. However, I’m not going to cover that here.
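
In Python, all four forms are available through `unicodedata.normalize`; a brief illustration:

```python
import unicodedata

nfd = unicodedata.normalize("NFD", "Á")          # decompose
print([hex(ord(c)) for c in nfd])                # ['0x41', '0x301']

nfc = unicodedata.normalize("NFC", "A\u0301")    # recompose
print(hex(ord(nfc)))                             # '0xc1'

# The "K" forms additionally apply compatibility mappings:
print(unicodedata.normalize("NFKC", "\uFB01"))   # the ligature "ﬁ" becomes plain "fi"
```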

### Grapheme Clusters

As we’ve seen, Unicode contains various cases where a thing that a user thinks of as a single “character” might actually be made up of multiple code points under the hood. Unicode formalizes this using the notion of a _grapheme cluster_: a string of one or more code points that constitute a single “user-perceived character”.

[UAX #29][57] defines the rules for what, precisely, qualifies as a grapheme cluster. It’s approximately “a base code point followed by any number of combining marks”, but the actual definition is a bit more complicated; it accounts for things like Korean jamo, and [emoji ZWJ sequences][58].

The main thing grapheme clusters are used for is text _editing_: they’re often the most sensible unit for cursor placement and text selection boundaries. Using grapheme clusters for these purposes ensures that you can’t accidentally chop off some diacritics when you copy-and-paste text, that left/right arrow keys always move the cursor by one visible character, and so on.

Another place where grapheme clusters are useful is in enforcing a string length limit—say, on a database field. While the true, underlying limit might be something like the byte length of the string in UTF-8, you wouldn’t want to enforce that by just truncating bytes. At a minimum, you’d want to “round down” to the nearest code point boundary; but even better, round down to the nearest _grapheme cluster boundary_. Otherwise, you might be corrupting the last character by cutting off a diacritic, or interrupting a jamo sequence or ZWJ sequence.
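
Python’s standard library has no grapheme-cluster API, but the third-party `regex` module understands `\X`, the grapheme-cluster pattern from UAX #29; here is a sketch of cluster-safe truncation using it:

```python
import regex  # third-party (pip install regex); the stdlib "re" module lacks \X

def truncate_graphemes(s, limit):
    """Keep at most `limit` user-perceived characters, never splitting a cluster."""
    return "".join(regex.findall(r"\X", s)[:limit])

s = "he\u0301y"                  # "héy", with é as e + combining accent (4 code points)
print(regex.findall(r"\X", s))   # ['h', 'é', 'y']: 3 grapheme clusters
print(truncate_graphemes(s, 2))  # 'hé': the accent stays attached to its base
```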

### And More…

There’s much more that could be said about Unicode from a programmer’s perspective! I haven’t gotten into such fun topics as case mapping, collation, compatibility decompositions and confusables, Unicode-aware regexes, or bidirectional text. Nor have I said anything yet about implementation issues—how to efficiently store and look up data about the sparsely-assigned code points, or how to optimize UTF-8 decoding, string comparison, or NFC normalization. Perhaps I’ll return to some of those things in future posts.

Unicode is a fascinating and complex system. It has a many-to-one mapping between bytes and code points, and on top of that a many-to-one (or, under some circumstances, many-to-many) mapping between code points and “characters”. It has oddball special cases in every corner. But no one ever claimed that representing _all written languages_ was going to be _easy_, and it’s clear that we’re never going back to the bad old days of a patchwork of incompatible encodings.

Further reading:

* [The Unicode Standard][21]
* [UTF-8 Everywhere Manifesto][22]
* [Dark corners of Unicode][23] by Eevee
* [ICU (International Components for Unicode)][24]—C/C++/Java libraries implementing many Unicode algorithms and related things
* [Python 3 Unicode Howto][25]
* [Google Noto Fonts][26]—set of fonts intended to cover all assigned code points

--------------------------------------------------------------------------------

作者简介:

I’m a graphics programmer, currently freelancing in Seattle. Previously I worked at NVIDIA on the DevTech software team, and at Sucker Punch Productions developing rendering technology for the Infamous series of games for PS3 and PS4.

I’ve been interested in graphics since about 2002 and have worked on a variety of assignments, including fog, atmospheric haze, volumetric lighting, water, visual effects, particle systems, skin and hair shading, postprocessing, specular models, linear-space rendering, and GPU performance measurement and optimization.

You can read about what I’m up to on my blog. In addition to graphics, I’m interested in theoretical physics, and in programming language design.

You can contact me at nathaniel dot reed at gmail dot com, or follow me on Twitter (@Reedbeta) or Google+. I can also often be found answering questions at Computer Graphics StackExchange.

-------------------

via: http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311

作者:[Nathan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://reedbeta.com/about/
[1]:https://en.wikipedia.org/wiki/Niqqud
[2]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#codespace-allocation
[3]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#scripts
[4]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#usage-frequency
[5]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-8
[6]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-16
[7]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#canonical-equivalence
[8]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#normalization-forms
[9]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#grapheme-clusters
[10]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#diversity-and-inherent-complexity
[11]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#the-unicode-codespace
[12]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#encodings
[13]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#combining-marks
[14]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#and-more
[15]:https://en.wikipedia.org/wiki/Vowel_pointing
[16]:https://en.wikipedia.org/wiki/Devanagari
[17]:https://en.wikipedia.org/wiki/Hangul#Letters
[18]:http://reedbeta.com/blog/programmers-intro-to-unicode/
[19]:http://reedbeta.com/blog/category/coding/
[20]:http://reedbeta.com/blog/programmers-intro-to-unicode/#comments
[21]:http://www.unicode.org/versions/latest/
[22]:http://utf8everywhere.org/
[23]:https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/
[24]:http://site.icu-project.org/
[25]:https://docs.python.org/3/howto/unicode.html
[26]:https://www.google.com/get/noto/
[27]:http://www.unicode.org/versions/latest/
[28]:http://www.unicode.org/reports/
[29]:http://www.unicode.org/notes/
[30]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#diversity-and-inherent-complexity
[31]:http://linguistics.berkeley.edu/sei/
[32]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#the-unicode-codespace
[33]:http://unicode.org/cldr/utility/character.jsp?a=A
[34]:http://unicode.org/cldr/utility/character.jsp?a=%CE%B8
[35]:http://www.unicode.org/reports/tr44/
[36]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#codespace-allocation
[37]:http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png
[38]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#scripts
[39]:http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png
[40]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#usage-frequency
[41]:http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png
[42]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#encodings
[43]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-8
[44]:http://utf8everywhere.org/
[45]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-16
[46]:http://unicode.org/cldr/utility/character.jsp?a=FEFF
[47]:https://en.wikipedia.org/wiki/Byte_order_mark
[48]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#combining-marks
[49]:http://unicode.org/cldr/utility/character.jsp?a=A
[50]:http://unicode.org/cldr/utility/character.jsp?a=0301
[51]:http://unicode.org/cldr/utility/character.jsp?a=%C3%81
[52]:http://unicode.org/cldr/utility/character.jsp?a=%E1%BB%87
[53]:https://eeemo.net/
[54]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#canonical-equivalence
[55]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#normalization-forms
[56]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#grapheme-clusters
[57]:http://www.unicode.org/reports/tr29/
[58]:http://blog.emojipedia.org/emoji-zwj-sequences-three-letters-many-possibilities/
[59]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#and-more
@ -1,74 +0,0 @@
Does your open source project need a president?
============================================================

![Does your open source project need a president?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/osdc_transparent_whitehouse_520x292.jpg?itok=IAsYgvi- "Does your open source project need a president?")

> Image by: opensource.com

Recently I was lucky enough to be invited to attend the [Linux Foundation Open Source Leadership Summit][4]. The event was stacked with many of the people I consider mentors, friends, and definitely leaders in the various open source and free software communities that I participate in.

I was able to observe the [CNCF][5] Technical Oversight Committee meeting while there, and was impressed at the way they worked toward consensus where possible. It reminded me of the [OpenStack Technical Committee][6] in its make-up of well-spoken technical individuals who care about their users and stand up for the technical excellence of their foundations' activities.

But it struck me (and several other attendees) that this consensus building has limitations. [Adam Jacob][7] noted that Linus Torvalds had given an on-stage interview earlier in the day in which he said that most of his role was to listen closely to differing opinions for a time, stop the discussion when it was clear there was no consensus, select the option he felt was technically excellent, and move on. Linus, being the founder of Linux and the benevolent dictator of the project for its lifetime thus far, has earned this moral authority.

However, unlike Linux, many of the modern foundation-fostered projects lack an executive branch. The structure we see for governance is centered around ensuring that corporate sponsors have influence. Foundation members pay dues to get various levels of board seats or corporate access to events and data. And this is a good thing, as it keeps people like me paid to work in these communities.

However, I believe as technical contributors, we sometimes give this too much sway in the actual governance of the community and the projects. These foundation boards know that day-to-day decision making should be left to those working in the project, and as such allow committees like the [CNCF][8] TOC or the [OpenStack TC][9] full agency over the technical aspects of the member projects.

I believe these committees operate as a legislative branch. They evaluate conditions and regulate the projects accordingly, allocating budgets for infrastructure and passing edicts to avoid chaos. Since they're not as large as political legislative bodies like the US House of Representatives and Senate, they can usually operate on a consensus basis, and not drive everything to a contentious vote. By and large, these are as nimble as a legislative body can be.

However, I believe open source projects need an executive to be effective. At some point, we need a single person to listen to the facts, entertain theories, and then decide, and execute a plan. Some projects have natural single leaders like this. Most, however, do not.

I believe we as engineers aren't generally good at being like Linus. If you've spent any time in the corporate world, you've had an executive disagree with you and run you right over. When we get the chance to distribute power evenly, we do it.

But I think that's a mistake. I think we should strive to have executives. Not just organizers like the [OpenStack PTL][10], but more like the [Debian Project Leader][11]: empowered people with the responsibility to serve as a visionary and keep the project's decision making relevant and of high quality. This would also give the board somebody to interact with directly, so that it does not have to try to convince the whole community to move in a particular direction in order to wield influence. In this way, I believe we'd end up with a system of checks and balances similar to the US Constitution.

So here is my suggestion for how a project executive structure could work, assuming there is already a strong technical committee and a well-defined voting electorate that I call the "active technical contributors":

1. The president is elected by [Condorcet][1] vote of the active technical contributors of a project for a term of 1 year.

2. The president will have veto power over any proposed change to the project's technical assets.

3. The technical committee may override the president's veto by a supermajority vote.

4. The president will inform the technical contributors of their plans for the project every 6 months.

This system only works if the project contributors expect their project president to actively drive the vision of the project. Basically, the culture has to turn to this executive for final decision-making before it comes to a veto. The veto is for times when the community makes poor decisions. And this doesn't replace leaders of individual teams. Think of these like the governors of states in the US. They're running their sub-project inside the parameters set down by the technical committee and the president.

And in the case of foundations or communities with boards, I believe ultimately a board would serve as the judicial branch, checking the legality of changes made against the by-laws of the group. If there's no board of sorts, a judiciary could be appointed and confirmed, similar to the US Supreme Court or the [Debian CTTE][12]. This would also just be necessary to ensure that the technical arm of a project doesn't get the foundation into legal trouble of any kind, which is already what foundation boards tend to do.

I'd love to hear your thoughts on this on Twitter; please tweet me [@SpamapS][13] with the hashtag #OpenSourcePresident to get the discussion going.

_This article was originally published on [FewBar.com][2] as "Free and open source leaders—You need a president" and was republished with permission._

--------------------------------------------------------------------------------

作者简介:

Clint Byrum - Clint Byrum is a Cloud Architect at IBM (though his words here are his own, and not those of IBM). He is an active Open Source and Free Software contributor to Debian, Ubuntu, OpenStack, and various other projects spanning the past 20 years.

-------------------------

via: https://opensource.com/article/17/3/governance-needs-president

作者:[Clint Byrum][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/spamaps
[1]:https://en.wikipedia.org/wiki/Condorcet_method
[2]:http://fewbar.com/2017/02/open-source-governance-needs-presidents/
[3]:https://opensource.com/article/17/3/governance-needs-president?rate=g5uFkFg_AqVo7JnKqPHoAxKccWzo1XXgn5wj5hILAIk
[4]:http://events.linuxfoundation.org/events/open-source-leadership-summit
[5]:https://www.cncf.io/
[6]:https://www.openstack.org/foundation/tech-committee/
[7]:https://twitter.com/adamhjk
[8]:https://www.cncf.io/
[9]:https://www.openstack.org/foundation/tech-committee/
[10]:https://docs.openstack.org/project-team-guide/ptl.html
[11]:https://www.debian.org/devel/leader
[12]:https://www.debian.org/devel/tech-ctte
[13]:https://twitter.com/spamaps
[14]:https://opensource.com/user/121156/feed
[15]:https://opensource.com/users/spamaps
101
sources/talk/20170421 A Window Into the Linux Desktop.md
Normal file
101
sources/talk/20170421 A Window Into the Linux Desktop.md
Normal file
@ -0,0 +1,101 @@
A Window Into the Linux Desktop
============================================================

![linux-desktop](http://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2016-linux-1.jpg)

![](http://www.linuxinsider.com/images/2015/image-credit-adobe-stock_130x15.gif)

"What can it do that Windows can't?"

That is the first question many people ask when considering Linux for their desktop. While the open source philosophy that underpins Linux is a good enough draw for some, others want to know just how different its look, feel and functionality can get. To a degree, that depends on whether you choose a desktop environment or a window manager.

If you want a desktop experience that is lightning fast and uncompromisingly efficient, foregoing the classic desktop environment for a window manager might be for you.

### What's What

"Desktop environment" is the technical term for a typical, full-featured desktop -- that is, the complete graphical layout of your system. Besides displaying your programs, the desktop environment includes accoutrements such as app launchers, menu panels and widgets.

In Microsoft Windows, the desktop environment consists of, among other things, the Start menu, the taskbar of open applications and notification center, all the Windows programs that come bundled with the OS, and the frames enclosing open applications (with a dash, square and X in the upper right corner).

There are many similarities in Linux.

The Linux [Gnome][3] desktop environment, for instance, has a slightly different design, but it shares all of the Microsoft Windows basics -- from an app menu to a panel showing open applications, to a notification bar, to the windows framing programs.

Window frames rely on a component that draws them and lets you move and resize them: it's called the "window manager." So, as they all have windows, every desktop environment includes a window manager.

However, not every window manager is part of a desktop environment. You can run window managers by themselves, and there are reasons to consider doing just that.

### Out of Your Environment

For the purpose of this column, references to "window manager" refer to those that can stand alone. If you install a window manager on an existing Linux system, you can log out without shutting down, choose the new window manager on your login screen, and log back in.

You might not want to do this without researching your window manager first, though, because you will be greeted by a blank screen and a sparse status bar that may or may not be clickable.

There typically is a straightforward way to bring up a terminal in a window manager, because that's how you edit its configuration file. There you will find key- and mouse-bindings to launch programs, at which point you actually can use your new setup.

In the popular i3 window manager, for instance, you can launch a terminal by hitting the Super (i.e., Windows) key plus Enter -- or press Super plus D to bring up the app launcher. There you can type an app name and hit Enter to open it. All the existing apps can be found that way, and they will open to full screen once selected.

[![i3 window manager](http://www.linuxinsider.com/article_images/2017/84473_620x388-small.jpg)][4]

i3 is also a tiling window manager, meaning it ensures that all windows expand to evenly fit the screen, neither overlapping nor wasting space. When a new window pops up, it reduces the existing windows, nudging them aside to make room. Users can toggle to open the next window either vertically or horizontally adjacent.

### Features Can Be Friends or Foes

Desktop environments have their advantages, of course. First and foremost, they provide a feature-rich, recognizable interface. Each has its signature style, but overall they provide unobtrusive default settings out of the box, which makes desktop environments ready to use right from the start.

Another strong point is that desktop environments come with a constellation of programs and media codecs, allowing users to accomplish simple tasks immediately. Further, they include handy features like battery monitors, wireless widgets and system notifications.

As comprehensive as desktop environments are, the large software base and user experience philosophy unique to each means there are limits on how far they can go. That means they are not always very configurable. With desktop environments that emphasize flashy looks, oftentimes what you see is what you get.

Many desktop environments are notoriously heavy on system resources, so they're not friendly to lower-end hardware. Because of the visual effects running on them, there are more things that can go wrong, too. I once tried tweaking networking settings that were unrelated to the desktop environment I was running, and the whole thing crashed. When I started a window manager, I was able to change the settings.

Those prioritizing security may want to avoid desktop environments, since more programs means greater attack surface -- that is, entry points where malicious actors can break in.

However, if you want to give a desktop environment a try, XFCE is a good place to start, as its smaller software base trims some bloat, leaving less clutter behind if you don't stick with it.

It's not the prettiest at first sight, but after downloading some GTK theme packs (every desktop environment serves up either these or Qt themes, and XFCE is in the GTK camp) and enabling them in the Appearance section of settings, you easily can touch it up. You can even shop around at this [centralized gallery][5] to find the theme you like best.

### You Can Save a Lot of Time... if You Take the Time First

If you'd like to see what you can do outside of a desktop environment, you'll find a window manager allows plenty of room to maneuver.

More than anything, window managers are about customization. In fact, their customizability has spawned numerous galleries hosting a vibrant community of users whose palette of choice is a window manager.

The modest resource needs of window managers make them ideal for lower specs, and since most window managers don't come with any programs, they allow users who appreciate modularity to add only those they want.

Perhaps the most noticeable distinction from desktop environments is that window managers generally focus on efficiency by emphasizing mouse movements and keyboard hotkeys to open programs or launchers.

Keyboard-driven window managers are especially streamlined, since you can bring up new windows, enter text or more keyboard commands, move them around, and close them again -- all without moving your hands from the home row. Once you acculturate to the design logic, you will be amazed at how quickly you can blaze through your tasks.

In spite of the freedom they provide, window managers have their drawbacks. Most significantly, they are extremely bare-bones out of the box. Before you can make much use of one, you'll have to spend time reading your window manager's documentation for configuration syntax, and probably some more time getting the hang of said syntax.

Although you will have some user programs if you switched from a desktop environment (the likeliest scenario), you also will start out missing familiar things like battery indicators and network widgets, and it will take some time to set up new ones.

If you want to dive into window managers, i3 has [thorough documentation][6] and straightforward configuration syntax. The configuration file doesn't use any programming language -- it simply defines a variable-value pair on each line. Creating a hotkey is as easy as writing "bindsym", the key combination, and the action for that combination to launch.
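
For example, a minimal snippet in that syntax might look like the following (a sketch; the binding targets mirror i3's stock configuration, where `$mod` names your chosen modifier key):

```
# use the Super (Windows) key as the modifier
set $mod Mod4

# $mod+Enter opens a terminal; $mod+d starts the dmenu launcher
bindsym $mod+Return exec i3-sensible-terminal
bindsym $mod+d exec dmenu_run
```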

While window managers aren't for everyone, they offer a distinctive computing experience, and Linux is one of the few OSes that allows them. No matter which paradigm you ultimately go with, I hope this overview gives you enough information to feel confident about the choice you've made -- or confident enough to venture out of your familiar zone and see what else is available.

--------------------------------------------------------------------------------

作者简介:

**Jonathan Terrasi** has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.

-----------

via: http://www.linuxinsider.com/story/84473.html?rss=1

作者:[ ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[1]:http://www.linuxinsider.com/story/84473.html?rss=1#
[2]:http://www.linuxinsider.com/perl/mailit/?id=84473
[3]:http://en.wikipedia.org/wiki/GNOME
[4]:http://www.linuxinsider.com/article_images/2017/84473_1200x750.jpg
[5]:http://www.xfce-look.org/
[6]:https://i3wm.org/docs/
128
sources/talk/20170426 How to get started learning to program.md
Normal file
128
sources/talk/20170426 How to get started learning to program.md
Normal file
@ -0,0 +1,128 @@
How to get started learning to program
============================================================

### Ever wondered, "How can I learn to program?" We provide guidance to help you find the approach that best suits your needs and situation.

![Know thyself](https://opensource.com/sites/default/files/styles/image-full-size/public/u23316/roman-know-thyself-osdc-lead.png?itok=oWuH8hRr "Know thyself")

> Image by: Artist unknown. Public domain, [via Wikimedia Commons][20]. Modified by Opensource.com.

There's a lot of buzz lately about learning to program. Not only is there a [shortage of people][21] compared with the open and pending positions in software development, programming is also a career with one of the [highest salaries][22] and [highest job satisfaction rates][23]. No wonder so many people are looking to break into the industry!

But how, exactly, do you do that? "**How can I learn to program?**" is a common question. Although I don't have all the answers, hopefully this article will provide guidance to help you find the approach that best suits your needs and situation.

### What's your learning style?

Before you start your learning process, consider not only your options, but also yourself. The ancient Greeks had a saying, [γνῶθι σεαυτόν][24] (gnothi seauton), meaning "know thyself". Undertaking a large learning program is difficult. Self-awareness is necessary to make sure you are making the choices that will lead to the highest chance of success. Be honest with yourself when you answer the following questions:

* **What is your preferred learning style?** How do you learn best? Is it by reading? Hearing a lecture? Mostly hands-on experimentation? Choose the style that is most effective for you. Don't choose a style because it's popular or because someone else said it worked for them.

* **What are your needs and requirements?** Why are you looking into learning how to program? Is it because you wish to change jobs? If so, how quickly do you need to do that? Keep in mind, these are _needs_, not _wants_. You may _want_ a new job next week, but _need_ one within a year to help support your growing family. This sort of timing will matter when selecting a path.

* **What are your available resources?** Sure, going back to college and earning a computer science degree might be nice, but you must be realistic with yourself. Your life must accommodate your learning. Can you afford—both in time and money—to set aside several months to participate in a bootcamp? Do you even live in an area that provides learning opportunities, such as meetups or college courses? The resources available to you will have a large impact on how you proceed in your learning. Research these before diving in.

### Picking a language

As you start your path and consider your options, remember that despite what many will say, the choice of which programming language you use to start learning simply does _not_ matter. Yes, some languages are more popular than others. For instance, right now JavaScript, Java, PHP, and Python are among the [most popular languages][25] according to one study. But what is popular today may be passé next year, so don't get too hung up on choice of language. The underlying principles of methods, classes, functions, conditionals, control flow, and other programming concepts will remain more or less the same regardless of the language you use. Only the grammar and community best practices will change. Therefore you can learn to program just as well in [Perl][26] as you can in [Swift][27] or [Rust][28]. As a programmer, you will work with and in many different languages over the course of your career. Don't feel you're "stuck" with the first one you learn.

### Test the waters

Unless you already have dabbled a bit and know for sure that programming is something you'd like to spend the rest of your life doing, I advise you to dip a toe into the waters before diving in headfirst. This work is not for everyone. Before going all-in on a learning program, take a little time to try out one of the smaller, cheaper options to get a sense of whether you'll enjoy the work enough to spend 40 hours a week doing it. If you don't enjoy this work, it's unlikely you'll even finish the program. If you do finish your learning program despite that, you may be miserable in your subsequent job. Life is too short to spend a third of it doing something you don't enjoy.

Thankfully, there is a lot more to software development than simply programming. It's incredibly helpful to be familiar with programming concepts and to understand how software comes together, but you don't need to be a programmer to get a well-paying job in software development. Additional vital roles in the process are technical writer, project manager, product manager, quality assurance, designer, user experience, ops/sysadmin, and data scientist, among others. Many different roles and people are required to launch software successfully. Don't feel that learning to program requires you to become a programmer. Explore your options and choose what's best for you.
|
||||
|
||||
### Learning resources
|
||||
|
||||
What are your options for learning resources? As you've probably already discovered, those options are many and varied, although not all of them may be available in your area.
|
||||
|
||||
* **Bootcamps**: Bootcamps such as [App Academy][5] and [Bloc][6] have become popular in recent years. Often charging a fee of $10K USD or more, bootcamps advertise that they can train a student to become an employable programmer in a matter of weeks. Before enrolling in a coding bootcamp, research the program to make sure it delivers on its promises and is able to place its students in well-paying, long-term positions after graduation. The money is one cost, whereas the time is another—these typically are full-time programs that require the student to set aside any other obligations for several weeks in a row. These two costs often put bootcamps outside the budget of many prospective programmers.
|
||||
|
||||
* **Community college/vocational training center**: Community colleges often are overlooked by people investigating their options for learning to program, and that's a shame. The education you can receive at a community college or vocational training center can be as effective as other options, at a fraction of the cost.
|
||||
|
||||
* **State/local training programs**: Many regions recognize the economic benefits of boosting technology investments in their area and have developed training programs to create well-educated and -prepared workforces. Training program examples include [Code Oregon][7] and [Minneapolis TechHire][8]. Check to see whether your state, province, or municipality offers such a program.
* **Online training**: Many companies and organizations offer online technology training programs. Some, such as the [Linux Foundation][9], are dedicated to training people to be successful with open source technologies. Others, like [O'Reilly Media][10], [Lynda.com][11], and [Coursera][12], provide training in many aspects of software development. [Codecademy][13] provides an online introduction to programming concepts. The costs of each program will vary, but most of them will allow you to learn on your schedule.
* **MOOCs**: MOOCs—Massive Open Online Courses—have really picked up steam in the past few years. World-class universities, such as [Harvard][14], [Stanford][15], [MIT][16], and others have been recording their courses and making them available online for free. The self-directed nature of the courses may not be a good fit for everyone, but the material available makes this a valuable learning option.
* **Books**: Many people love self-directed learning using books. It's quite economical and provides ready reference material after the initial learning phase. Although you can order and access books through online services like [Safari][17] and [Amazon][18], don't forget to check your local public library as well.
### Support network
Whichever learning resources you choose, the process will be more successful with a support network. Sharing your experiences and challenges with others can help keep you motivated, while providing a safe place to ask questions that you might not feel confident enough to ask elsewhere yet. Many towns have local user groups that gather to discuss and learn about software technologies. Often you can find these listed at [Meetup.com][29]. Special interest groups, such as [Women Who Code][30] and [Code2040][31], frequently hold meetings and hackathons in most urban areas and are a great way to meet and build a support network while you're learning. Some software conferences host "hack days" where you can meet experienced software developers and get help with concepts on which you're stuck. For instance, every year [PyCon][32] features several days of the conference for people to gather and work together. Some projects, such as [BeeWare][33], use these sprint days to assist new programmers to learn and contribute to the project.
Your support network doesn't have to come from a formal meetup. A small study group can be just as effective at keeping you motivated to stay with your learning program, and it can be as easy to form as posting an invitation on your favorite social network. This is particularly useful if you live in an area that doesn't currently have a large enough community of software developers to support several meetups and user groups.
### Steps for getting started
In summary, to give yourself the best chance of success should you decide to learn to program, follow these steps:

1. Gather your list of requirements/needs and resources
2. Research the options available to you in your area
3. Discard the options that do not meet your requirements and resources
4. Select the option(s) that best suit your requirements, resources, and learning style
5. Find a support network
Remember, though: Your learning process will never be complete. The software industry moves quickly, with new technologies and advances popping up nearly every day. Once you learn to program, you must commit to spending time learning about these new advances. You cannot rely on your job to provide you with this training. Only you are responsible for your own career development, so if you wish to stay up-to-date and employable, you must stay abreast of the latest technologies in the industry.
Good luck!
--------------------------------------------------------------------------------
作者简介:
VM (Vicky) Brasseur - VM (aka Vicky) is a manager of technical people, projects, processes, products and p^Hbusinesses. In her more than 18 years in the tech industry she has been an analyst, programmer, product manager, software engineering manager, and director of software engineering. Currently she is a Senior Engineering Manager in service of an upstream open source development team at Hewlett Packard Enterprise. VM blogs at anonymoushash.vmbrasseur.com and tweets at @vmbrasseur.
--------
via: https://opensource.com/article/17/4/how-get-started-learning-program
作者:[VM (Vicky) Brasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/vmbrasseur
[1]:https://opensource.com/tags/python?src=programming_resource_menu
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
[5]:https://www.appacademy.io/
[6]:https://www.bloc.io/
[7]:http://codeoregon.org/
[8]:http://www.minneapolismn.gov/cped/metp/TechHire#start
[9]:https://training.linuxfoundation.org/
[10]:http://shop.oreilly.com/category/learning-path.do
[11]:https://www.lynda.com/
[12]:https://www.coursera.org/
[13]:https://www.codecademy.com/
[14]:https://www.edx.org/school/harvardx
[15]:http://online.stanford.edu/courses
[16]:https://ocw.mit.edu/index.htm
[17]:https://www.safaribooksonline.com/
[18]:https://amazon.com/
[19]:https://opensource.com/article/17/4/how-get-started-learning-program?rate=txl_aE6F2oOUSgQDveWFtrPWIbA1ULFwfOp017zV35M
[20]:https://commons.wikimedia.org/wiki/File:Roman-mosaic-know-thyself.jpg
[21]:http://www.techrepublic.com/article/report-40-of-employers-worldwide-face-talent-shortages-driven-by-it/
[22]:http://web.archive.org/web/20170328065655/http://www.businessinsider.com/highest-paying-jobs-in-america-2017-3/#-25
[23]:https://stackoverflow.com/insights/survey/2017/#career-satisfaction
[24]:https://en.wikipedia.org/wiki/Know_thyself
[25]:https://stackoverflow.com/insights/survey/2017/#most-popular-technologies
[26]:https://learn.perl.org/tutorials/
[27]:http://shop.oreilly.com/product/0636920045946.do
[28]:https://doc.rust-lang.org/book/
[29]:https://www.meetup.com/
[30]:https://www.womenwhocode.com/
[31]:http://www.code2040.org/
[32]:https://us.pycon.org/
[33]:http://pybee.org/
[34]:https://opensource.com/user/10683/feed
[35]:https://opensource.com/article/17/4/how-get-started-learning-program#comments
[36]:https://opensource.com/users/vmbrasseur
@ -0,0 +1,52 @@
Faster machine learning is coming to the Linux kernel
============================================================

### The addition of heterogeneous memory management to the Linux kernel will unlock new ways to speed up GPUs, and potentially other kinds of machine learning hardware
![Faster machine learning is coming to a Linux kernel near you](http://images.techhive.com/images/article/2015/12/machine_learning-100633721-primary.idge.jpg)
>Credit: Thinkstock
It's been a long time in the works, but a memory management feature intended to give machine learning or other GPU-powered applications a major performance boost is close to making it into one of the next revisions of the kernel.
Heterogeneous memory management (HMM) allows a device's driver to mirror the address space for a process under its own memory management. As Red Hat developer Jérôme Glisse [explains][10], this makes it easier for hardware devices like GPUs to directly access the memory of a process without the extra overhead of copying anything. It also doesn't violate the memory protection features afforded by modern OSes.
One class of application that stands to benefit most from HMM is GPU-based machine learning. Libraries like OpenCL and CUDA would be able to get a speed boost from HMM. HMM does this in much the same way as [speedups being done to GPU-based machine learning][11], namely by leaving data in place near the GPU, operating directly on it there, and moving it around as little as possible.
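As a rough illustration of the data movement at stake, here is a sketch in C against the CUDA runtime API (this is ordinary CUDA usage, shown for contrast; it is not HMM's own kernel-side interface, and the buffer sizes and names are made up). The first path pays for an explicit copy across the bus in each direction; the second uses a single allocation visible to both CPU and GPU, which is the style of sharing HMM is designed to make cheap for regular process memory:

```
#include <cuda_runtime.h>
#include <stdlib.h>

#define N (1 << 20)  /* illustrative buffer size */

int main(void) {
    size_t bytes = N * sizeof(float);

    /* Path 1: explicit staging. The data crosses the bus twice,
       once in each direction, whether or not it all gets used. */
    float *host = malloc(bytes);
    float *dev = NULL;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    /* ... launch kernels that read and write dev ... */
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);

    /* Path 2: one pointer valid on both CPU and GPU. Pages migrate
       on demand instead of whole buffers being copied eagerly;
       HMM extends this style of sharing to ordinary process memory. */
    float *shared = NULL;
    cudaMallocManaged((void **)&shared, bytes, cudaMemAttachGlobal);
    /* ... launch kernels that operate on shared in place ... */
    cudaDeviceSynchronize();
    cudaFree(shared);

    free(host);
    return 0;
}
```

The win is the one described above: the less often whole buffers make that round trip, the more of the GPU's time goes to arithmetic instead of waiting on the bus.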
These kinds of speed-ups for CUDA, Nvidia’s library for GPU-based processing, would only benefit operations on Nvidia GPUs, but those GPUs currently constitute the vast majority of the hardware used to accelerate number crunching. However, OpenCL was devised to write code that could target multiple kinds of hardware—CPUs, GPUs, FPGAs, and so on—so HMM could provide much broader benefits as that hardware matures.
There are a few obstacles to getting HMM into a usable state in Linux. First is kernel support, which has been in the works for quite some time. HMM was first proposed as a Linux kernel patchset [back in 2014][12], with Red Hat and Nvidia both involved as key developers. The amount of work involved wasn't trivial, but the developers believe code could be submitted for potential inclusion within the next couple of kernel releases.
The second obstacle is video driver support, which Nvidia has been working on separately. According to Glisse’s notes, AMD GPUs are likely to support HMM as well, so this particular optimization won’t be limited to Nvidia GPUs. AMD has been trying to ramp up its presence in the GPU market, potentially by [merging GPU and CPU processing][13] on the same die. However, the software ecosystem still plainly favors Nvidia; there would need to be a few more vendor-neutral projects like HMM, and OpenCL performance on a par with what CUDA can provide, to make real choice possible.
The third obstacle is hardware support, since HMM requires a replayable page-fault mechanism in hardware to work. Only Nvidia's Pascal line of high-end GPUs supports this feature. In a way that's good news, since it means Nvidia will only need to provide driver support for one piece of hardware—requiring less work on its part—to get HMM up and running.
Once HMM is in place, there will be pressure on public cloud providers with GPU instances to [support the latest-and-greatest generation of GPU][14], and not just by swapping old-school Nvidia Kepler cards for bleeding-edge Pascal GPUs. As each succeeding generation of GPU pulls further away from the pack, supporting optimizations like HMM will provide strategic advantages.
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html
作者:[Serdar Yegulalp][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/Serdar-Yegulalp/
[1]:https://twitter.com/intent/tweet?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&via=infoworld&text=Faster+machine+learning+is+coming+to+the+Linux+kernel
[2]:https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[3]:http://www.linkedin.com/shareArticle?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[4]:https://plus.google.com/share?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[5]:http://reddit.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html&title=Faster+machine+learning+is+coming+to+the+Linux+kernel
[6]:http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.infoworld.com%2Farticle%2F3196884%2Flinux%2Ffaster-machine-learning-is-coming-to-the-linux-kernel.html
[7]:http://www.infoworld.com/article/3196884/linux/faster-machine-learning-is-coming-to-the-linux-kernel.html#email
[8]:http://www.infoworld.com/article/3152565/linux/5-rock-solid-linux-distros-for-developers.html#tk.ifw-infsb
[9]:http://www.infoworld.com/newsletters/signup.html#tk.ifw-infsb
[10]:https://lkml.org/lkml/2017/4/21/872
[11]:http://www.infoworld.com/article/3195437/machine-learning-analytics-get-a-boost-from-gpu-data-frame-project.html
[12]:https://lwn.net/Articles/597289/
[13]:http://www.infoworld.com/article/3099204/hardware/amd-mulls-a-cpugpu-super-chip-in-a-server-reboot.html
[14]:http://www.infoworld.com/article/3126076/artificial-intelligence/aws-machine-learning-vms-go-faster-but-not-forward.html
136
sources/talk/20170515 How I got started with bash scripting.md
Normal file
136
sources/talk/20170515 How I got started with bash scripting.md
Normal file
@ -0,0 +1,136 @@
How I got started with bash scripting
============================================================

### With a few simple Google searches, a programming novice learned to write code that automates a previously tedious and time-consuming task.
![How Google helped me learn bash scripting](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/computer_happy_sad_developer_programming.png?itok=5E3k_t_r "How Google helped me learn bash scripting")
>Image by: opensource.com
I wrote a script the other day. For some of you, that sentence sounds like no big deal. For others, and I know you're out there, that sentence is significant. You see, I'm not a programmer. I'm a writer.
### What I needed to solve
My problem was fairly simple: I had to juggle files from engineering into our documentation. The files were available in a .zip format from a web URL. I was copying them to my desktop manually, then moving them into a different directory structure to match my documentation needs. A fellow writer gave me this advice: _"Why don't you just write a script to do this for you?"_
I thought _"just write a script?!?"_—as if it were the easiest thing in the world to do.
### How Google came to the rescue
My colleague's question got me thinking, and as I thought, I googled.
**What scripting languages are on Linux?**
This was my first Google search criteria, and many of you are probably thinking, "She's pretty clueless." Well, I was, but it did set me on a path to solving my problem. The most common result was Bash. Hmm, I've seen Bash. Heck, one of the files I had to document had Bash in it, that ubiquitous line **#!/bin/bash**. I took another look at that file, and I knew what it was doing because I had to document it.
So that led me to my next Google search request.
**How to download a zip file from a URL?**
That was my basic task really. I had a URL with a .zip file containing all the files I needed to include in my documentation, so I asked the All Powerful Google to help me out. That search gem, and a few more, led me to curl. But here's the best part: Not only did I find curl, but one of the top search hits also showed me a Bash script that used curl to download a .zip file and extract it. That was more than I asked for, but that's when I realized being specific in my Google search requests could give me the information I needed to write this script. So, with momentum in my favor, I wrote the simplest of scripts:
```
#!/bin/sh

curl http://rather.long.url | tar -xz -C my_directory --strip-components=1
```
What a moment to see that thing run! But then I realized one gotcha: The URL can change, depending on which set of files I'm trying to access. I had another problem to solve, which led me to my next search.
**How to pass parameters into a Bash script?**
I needed to be able to run this script with different URLs and different end directories. Google showed me how to put in **$1**, **$2**, etc., to replace what I typed on the command line with my script. For example:
```
bash myscript.sh http://rather.long.url my_directory
```
That was much better. Everything was working as I needed it to: I had flexibility, I had a working script, and most of all, I had a short command to type that saved me 30 minutes of copy-paste grunt work. That was a morning well spent.
Then I realized I had one more problem. You see, my memory is short, and I knew I'd run this script only every couple of months. That left me with two issues:
* How would I remember what to type for my script (URL first? directory first?)?

* How would another writer know how to run my script if I got hit by a truck?

I needed a usage message—something the script would display if I didn't use it correctly. For example:
```
usage: bash yaml-fetch.sh <'snapshot_url'> <directory>
```
Otherwise, run the script. My next search was:
**How to write "if/then/else" in a Bash script?**
Fortunately I already knew **if/then/else** existed in programming. I just had to find out how to do that. Along the way, I also learned to print from a Bash script using **echo**. What I ended up with was something like this:
```
#!/bin/sh

URL=$1
DIRECTORY=$2

if [ $# -eq 0 ];
then
    echo "usage: bash yaml-fetch.sh <'snapshot_url'> <directory>"
else

    # make the directory if it doesn't already exist
    echo 'create directory'
    mkdir "$DIRECTORY"

    # fetch and untar the yaml files
    echo 'fetch and untar the yaml files'
    curl "$URL" | tar -xz -C "$DIRECTORY" --strip-components=1
fi
```
### How Google and scripting rocked my world
Okay, slight exaggeration there, but this being the 21st century, learning new things (especially somewhat simple things) is a whole lot easier than it used to be. What I learned (besides how to write a short, self-documented Bash script) is that if I have a question, there's a good chance someone else had the same or a similar question before. When I get stumped, I can ask the next question, and the next question. And in the end, not only do I have a script, I have the start of a new skill that I can hold onto and use to simplify other tasks I've been avoiding.
Don't let that first script (or programming step) get the best of you. It's a skill, like any other, and there's a wealth of information out there to help you along the way. You don't need to read a massive book or take a month-long course. You can do it a simpler way with baby steps and baby scripts that get you started, then build on that skill and your confidence. There will always be a need for folks to write those thousands-of-lines-of-code programs with all the branching and merging and bug-fixing.
But there is also a strong need for simple scripts and other ways to automate/simplify tasks. And that's where a little script and a little confidence can give you a kickstart.
--------------------------------------------------------------------------------
作者简介:
Sandra McCann - Sandra McCann is a Linux and open source advocate. She's worked as a software developer, content architect for learning resources, and content creator. Sandra is currently a content creator for Red Hat in Westford, MA, focusing on OpenStack and NFV technology.
----
via: https://opensource.com/article/17/5/how-i-learned-bash-scripting
作者:[Sandra McCann][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sandra-mccann
[1]:https://opensource.com/tags/python?src=programming_resource_menu
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu
[3]:https://opensource.com/tags/perl?src=programming_resource_menu
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu
[5]:https://opensource.com/article/17/5/how-i-learned-bash-scripting?rate=s_R-jmOxcMvs9bi41yRwenl7GINDvbIFYrUMIJ8OBYk
[6]:https://opensource.com/user/39771/feed
[7]:https://opensource.com/article/17/5/how-i-learned-bash-scripting#comments
[8]:https://opensource.com/users/sandra-mccann
@ -0,0 +1,60 @@
How Microsoft is becoming a Linux vendor
=====================
>Microsoft is bridging the gap with Linux by baking it into its own products.
![](http://images.techhive.com/images/article/2017/05/microsoft-100722875-large.jpg)
Linux and open source technologies have become too dominant in data centers, cloud and IoT for Microsoft to ignore them.
On Microsoft's own cloud, one in three machines runs Linux. Those machines belong to Microsoft customers who are running Linux, and Microsoft needs to support the platforms its customers use, or those customers will go somewhere else.
Here's how Microsoft's Linux strategy breaks down on its developer platform (Windows 10), on its cloud (Azure), and in its datacenter (Windows Server).
**Linux in Windows**: IT professionals managing Linux machines on public or private cloud need native UNIX tooling. Linux and macOS are the only two platforms that offer such native capabilities. No wonder all you see is MacBooks or a few Linux desktops at events like DockerCon, OpenStack Summit or CoreOS Fest.
To bridge the gap, Microsoft worked with Canonical to build a Linux subsystem within Windows that offers native Linux tooling. It’s a great compromise, where IT professionals can continue to use Windows 10 desktop while getting to run almost all Linux utilities to manage their Linux machines.
**Linux in Azure**: What good is a cloud that can't run fully supported Linux machines? Microsoft has been working with Linux vendors to allow customers to run Linux applications and workloads on Azure.
Microsoft not only managed to sign deals with all three major Linux vendors (Red Hat, SUSE, and Canonical), it also worked with countless other companies to offer support for community-based distros like Debian.
**Linux in Windows Server**: This is the last missing piece of the puzzle. There is a massive ecosystem of Linux containers that are used by customers. There are over 900,000 Docker containers on Docker Hub, which can run only on Linux machines. Microsoft wanted to bring these containers to its own platform.
At DockerCon, Microsoft announced support for Linux containers on Windows Server, bringing all of those containers to its own platform.
Things are about to get even more interesting: after the success of Bash on Ubuntu on Windows 10, Microsoft is bringing Ubuntu Bash to Windows Server. Yes, you heard that right. Windows Server will now have a Linux subsystem.
Rich Turner, Senior Program Manager at Microsoft, told me, “WSL on the server provides admins with a preference for *NIX admin scripting & tools to have a more familiar environment in which to work.”
Microsoft said in an announcement that it will allow IT professionals “to use the same scripts, tools, procedures and container images they have been using for Linux containers on their Windows Server container host. These containers use our Hyper-V isolation technology combined with your choice of Linux kernel to host the workload while the management scripts and tools on the host use WSL.”
With all three bases covered, Microsoft has succeeded in creating an environment where its customers don't have to deal with any other Linux vendor.
### What does it mean for Microsoft?
By baking Linux into its own products, Microsoft has become a Linux vendor. They are part of the Linux Foundation, they are one of the many contributors to the Linux kernel, and they now distribute Linux from their own store.
There is only one minor problem. Microsoft doesn't own any Linux technologies. It is totally dependent on an external vendor, in this case Canonical, for its entire Linux layer. That's too risky a proposition if Canonical were ever acquired by a fierce competitor.
It might make sense for Microsoft to attempt to acquire Canonical and bring the core technologies in house.
### What does it mean for Linux vendors?
On the surface, it's a clear victory for Microsoft, as its customers can live within the Windows world. It will also contain the momentum of Linux in the datacenter. It might even affect Linux on the desktop: IT professionals looking for *NIX tooling no longer have to run a Linux desktop; they can do everything from within Windows.
Is Microsoft's victory a loss for traditional Linux vendors? To some degree, yes. Microsoft has become a direct competitor. But the clear winner here is Linux.
--------------------------------------------------------------------------------
via: http://www.cio.com/article/3197016/linux/how-microsoft-is-becoming-a-linux-vendor.html
作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.cio.com/author/Swapnil-Bhartiya/
103
sources/talk/20170517 Security Debt is an Engineers Problem.md
Normal file
103
sources/talk/20170517 Security Debt is an Engineers Problem.md
Normal file
@ -0,0 +1,103 @@
Security Debt is an Engineer’s Problem
============================================================
![](https://cdn.thenewstack.io/media/2017/05/d6fe35b0-11416417-1257170530979237-7594665410266720452-o_2_orig-1024x641.jpg)
![](https://cdn.thenewstack.io/media/2017/05/ea8298a9-keziah-slide1-300x165.png)
>Keziah Plattner of AirBnB Security.
Just like organizations can build up technical debt, they can also build up something called “security debt” if they don't plan accordingly, attendees learned at the [WomenWhoCode Connect][5] event at Twitter headquarters in San Francisco last month.
Security has got to be integral to every step of the software development process, stressed [Mary Ann Davidson][6], Oracle's Chief Security Officer, in a keynote conversation about security for developers with [Zassmin Montes de Oca][7] of [WomenWhoCode][8].
In the past, security was ignored by pretty much everyone, except banks. But security is more critical than it has ever been because there are so many access points. We've entered the era of the [Internet of Things][9], where thieves can just hack your fridge to see that you're not home.
Davidson is in charge of assurance at Oracle, “making sure we build security into everything we build, whether it’s an on-premise product, whether it’s a cloud service, even devices we have that support group builds at customer sites and reports data back to us, helping us do diagnostics — every single one of those things has to have security engineered into it.”
![](https://cdn.thenewstack.io/media/2017/05/8d5dc451-keziah-talking-225x300.jpg)

Plattner talking to a capacity crowd at #WWCConnect
AirBnB’s [Keziah Plattner][10] echoed that sentiment in her breakout session. “Most developers don’t see security as their job,” she said, “but this has to change.”
She shared four basic security principles for engineers. First, security debt is expensive. There's a lot of talk about [technical debt][11], and she thinks security debt should be included in those conversations.
“This historical attitude is ‘We’ll think about security later,’” Plattner said. As companies grab the low-hanging fruit of software efficiency and growth, they ignore security, but an initial insecure design can cause problems for years to come.
It’s very hard to add security to an existing vulnerable system, she said. Even when you know where the security holes are and have budgeted the time and resources to make the changes, it’s time-consuming and difficult to re-engineer a secure system.
So it’s key, she said, to build security into your design from the start. Think of security as part of the technical debt to avoid. And cover all possibilities.
Most important, according to Plattner, is the difficulty of getting people to change their behavior. No one will change voluntarily, she said, even when you point out that the new behavior is more secure. We all nodded.
Davidson said engineers need to start thinking about how their code could be attacked, and design from that perspective. She said she has only two rules. The first is to never trust any unvalidated data; rule two is to see rule one.
“People do this all the time. They say ‘My client sent me the data so it will be fine.’ Nooooooooo,” she said, to laughs.
The second key to security, Plattner said, is “never trust users.”
Davidson put it another way: “My job is to be a professional paranoid.” She worries all the time about how someone might breach her systems, even inadvertently. This is not academic; there have recently been denial-of-service attacks through IoT devices.
### Little Bobby Tables
If part of your security plan is trusting users to do the right thing, your system is inherently insecure regardless of whatever other security measures you have in place, said Plattner.
It's important to properly sanitize all user input, she explained, showing the [XKCD cartoon][12] in which a mom wiped out an entire school database because her son's name contained the SQL injection string “Robert'); DROP TABLE Students;--”.
So sanitize all user input. Check.
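For database-bound input, “sanitizing” in practice usually means never splicing user text into SQL at all. Here is a minimal sketch of the idea using SQLite's C API (the `students` table and this lookup are made up for illustration); a bound parameter is always treated as data, never parsed as SQL, so little Bobby's name gets stored and queried harmlessly:

```
#include <sqlite3.h>
#include <stdio.h>

/* Look up a student by name using a bound parameter instead of
   string concatenation, so input can never change the query itself.
   The "students" table is hypothetical, for illustration only. */
int find_student(sqlite3 *db, const char *name) {
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(
        db, "SELECT id FROM students WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* The driver treats the bound value purely as data. */
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);

    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
        printf("found id %d\n", sqlite3_column_int(stmt, 0));

    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}
```

The same principle holds in any language and any database driver: parameterized queries keep the boundary between code and data somewhere user input can't cross it.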
She showed an example of JavaScript developers using eval() in open source code. “A good ground rule is ‘Never use eval(),’” she cautioned. The [eval()][13] function evaluates a string as JavaScript code. “You're opening your system to random users if you do.”
Davidson cautioned that her paranoia extends to including security testing your example code in documentation. “Because we all know no one ever copies sample code,” she said to laughter. She underscored the point that any code should be subject to security checks.
![](https://cdn.thenewstack.io/media/2017/05/87efe589-keziah-path-300x122.png)
### Make it easy
Plattner's third suggestion: Make security easy. Take the path of least resistance, she suggested.
Externally, make users opt out of security instead of opting in, or, better yet, make it mandatory. Changing people’s behavior is the hardest problem in tech, she said. Once users get used to using your product in a non-secure way, getting them to change in the future is extremely difficult.
Internally, she suggested making tools that standardize security so it isn't something individual developers need to think about. For example, offer encryption as a service so engineers can just call the service to encrypt or decrypt data.
Make sure that your company is focused on good security hygiene, she said. Switch to good security habits across the company.
You're only as secure as your weakest link, so it's important that each individual practices good personal security hygiene in addition to the company's corporate security hygiene.
At Oracle, they’ve got this covered. Davidson said she got tired of explaining security to engineers who graduated college with absolutely no security training, so she wrote the first coding standards at Oracle. There are now hundreds of pages with lots of contributors, and there are classes that are mandatory. They have metrics for compliance to security requirements and measure it. The classes are not just for engineers, but for doc writers as well. “It’s a cultural thing,” she said.
And what discussion about security would be complete without a mention of passwords? Everyone should be using a good password manager, Plattner said, and they should be mandatory for work, along with two-factor authentication.
Basic password principles should be a part of every engineer’s waking life, she said. What matters most in passwords is their length and entropy — making the collection of keystrokes as random as possible. A robust password entropy checker is invaluable for this. She recommends [zxcvbn][14], the Dropbox open-source entropy checker.
Another trick is to use something intentionally slow, like [bcrypt][15], when hashing and checking passwords, said Plattner. The slowness doesn't bother most legitimate users, but it irritates attackers trying to brute-force passwords.
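As a sketch of that idea in C, assuming a Linux system whose libcrypt is libxcrypt (which supplies `crypt_gensalt` and the bcrypt `$2b$` scheme; build with `-lcrypt`), the cost factor is what buys the deliberate slowness:

```
#include <crypt.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Ask for a bcrypt ("$2b$") setting string with cost factor 12;
       each +1 on the cost doubles the work per guess for an attacker. */
    const char *setting = crypt_gensalt("$2b$", 12, NULL, 0);
    if (setting == NULL) {
        perror("crypt_gensalt");
        return 1;
    }

    /* Hash an (illustrative) password; store only the hash string. */
    const char *stored = crypt("correct horse battery staple", setting);
    printf("stored hash: %s\n", stored);

    /* Verify a login attempt: re-hash using the stored hash as the
       setting, then compare. (A constant-time compare is better.) */
    const char *attempt = crypt("correct horse battery staple", stored);
    puts(strcmp(attempt, stored) == 0 ? "password ok" : "password rejected");
    return 0;
}
```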
All of this adds up to job security for anyone wanting to get into the security side of technology, said Davidson. We’re putting more code more places, she said, and that creates systemic risk. “I don’t think anybody is not going to have a job in security as long as we keep doing interesting things in technology.”
--------------------------------------------------------------------------------
via: https://thenewstack.io/security-engineers-problem/
作者:[TC Currie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://thenewstack.io/author/tc/
[1]:http://twitter.com/share?url=https://thenewstack.io/security-engineers-problem/&text=Security+Debt+is+an+Engineer%E2%80%99s+Problem+
[2]:http://www.facebook.com/sharer.php?u=https://thenewstack.io/security-engineers-problem/
[3]:http://www.linkedin.com/shareArticle?mini=true&url=https://thenewstack.io/security-engineers-problem/
[4]:https://thenewstack.io/security-engineers-problem/#disqus_thread
[5]:http://connect2017.womenwhocode.com/
[6]:https://www.linkedin.com/in/mary-ann-davidson-235ba/
[7]:https://www.linkedin.com/in/zassmin/
[8]:https://www.womenwhocode.com/
[9]:https://www.thenewstack.io/tag/Internet-of-Things
[10]:https://twitter.com/ittskeziah
[11]:https://martinfowler.com/bliki/TechnicalDebt.html
[12]:https://xkcd.com/327/
[13]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
[14]:https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/
[15]:https://en.wikipedia.org/wiki/Bcrypt
[16]:https://thenewstack.io/author/tc/
@ -0,0 +1,72 @@
Translating by hwlog

North Korea's Unit 180, the cyber warfare cell that worries the West
============================================================

[![Military trucks pass through central Pyongyang at night](http://www.abc.net.au/news/image/8545124-3x2-700x467.jpg "Military trucks through Pyongyang")][13] [**PHOTO:** Defectors say Pyongyang's money-raising cyber attacks are run by a unit known as Unit 180. (Reuters: Damir Sagolj, file)][14]

According to defectors, officials, and internet security experts, North Korea's intelligence agency has a special cell called Unit 180 that has launched several daring and successful cyber attacks.

In recent years, North Korea has been accused by the United States, South Korea, and several neighboring countries of mounting a series of online attacks, mostly on financial networks.

Internet security researchers say they have found technical evidence linking North Korea's cyber warfare operations to the global WannaCry "ransomware" attack that infected more than 300,000 computers across 150 countries this month.

Pyongyang has called the allegations "ridiculous."

The crux of the allegations against North Korea is its suspected ties to a hacking group called Lazarus, which has been linked to last year's $80 million cyber heist at Bangladesh's central bank and to the 2014 attack on Sony's Hollywood studio.

The US Government has blamed North Korea for the hack on Sony, and US prosecutors have also pursued charges against Pyongyang over the Bangladesh Bank theft.

With no conclusive evidence, however, no criminal charges could be filed. North Korea has since denied any connection to the Sony and bank attacks.

North Korea is one of the most closed countries in the world, and details of its clandestine operations are difficult to obtain.

But experts who study the reclusive country, and defectors who have ended up in South Korea or the West, have offered some clues.

### Hackers like to use employees as cover

Kim Heung-kwang, a former computer science professor in North Korea who defected to South Korea in 2004 and still has sources inside the country, said Pyongyang's money-raising cyber attacks are run by Unit 180, part of the Reconnaissance General Bureau, its main overseas intelligence agency.

"Unit 180 is responsible for hacking into financial institutions and withdrawing money from bank accounts through vulnerabilities," Professor Kim said.

He has also said previously that some of his former students have joined North Korea's Strategic Cyber Command, the country's cyber army.

> "The hackers go overseas to find somewhere with better internet services than North Korea, so as not to leave a trace," Professor Kim added.

He said they often operate under the cover of employees of trading firms, overseas branches of North Korean companies, or joint ventures in China and South-East Asia.

James Lewis, a North Korea expert at the Washington-based Center for Strategic and International Studies, said Pyongyang first used hacking as a tool for espionage, and then for political disruption against South Korean and US targets.

After the Sony incident, they changed their approach, using hacking to support criminal activities that generate hard currency for the regime.

"So far, online drugs, counterfeiting, and smuggling have been their usual tricks."

Media player: Press the space bar to play or pause, "M" to mute, and "left" and "right" to seek.

[**VIDEO:** Have you been hit by ransomware? (ABC News)][16]

### South Korea claims "vast evidence"

The US Department of Defense said in a report submitted to Congress last year that North Korea probably views its cyber capability as a cost-effective, asymmetric, and deniable tool that it can use with little risk of retaliatory attacks, because its networks are largely separated from the internet.

> "It is likely to use internet infrastructure from third-party nations," the report said.

The South Korean Government says it has extensive evidence of North Korea's cyber warfare operations.

"North Korea is carrying out cyber attacks through third countries to cover up the origin of the attacks, using those countries' information and communications technology infrastructure," Ahn Chong-ghee, South Korea's Vice-Foreign Minister, told Reuters in written comments.

Besides the Bangladesh Bank heist, he said Pyongyang was also suspected of involvement in attacks on banks in the Philippines, Vietnam, and Poland.

In June last year, police said North Korea attacked 160 South Korean companies and government agencies, breaking into about 140,000 computers and covertly planting malicious code in its adversary's machines as part of a long-term plan to lay the groundwork for a massive cyber attack.

North Korea was also suspected of staging a cyber attack on the operator of South Korea's nuclear reactors in 2014, although it denied any involvement.

That attack was carried out from a base in China, according to Simon Choi, a senior security researcher at Hauri, an anti-virus company based in Seoul, South Korea.

"They operate from there so that, regardless of what kind of project they carry out, they have Chinese IP addresses," said Mr Choi, who has conducted extensive research into North Korea's hacking capabilities.
--------------------------------------------------------------------------------
via: http://www.abc.net.au/news/2017-05-21/north-koreas-unit-180-cyber-warfare-cell-hacking/8545106
作者:[www.abc.net.au][a]
译者:[hwlog](https://github.com/hwlog)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.abc.net.au
[1]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[2]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[3]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[4]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
[5]:http://www.abc.net.au/news/2017-05-18/adylkuzz-cyberattack-could-be-far-worse-than-wannacry:-expert/8537502
[6]:http://www.google.com/maps/place/Korea,%20Democratic%20People%20S%20Republic%20Of/@40,127,5z
[7]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[8]:http://www.abc.net.au/news/2017-05-16/wannacry-ransomware-showing-up-in-obscure-places/8527060
[9]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[10]:http://www.abc.net.au/news/2015-08-05/why-we-should-care-about-cyber-crime/6673274
[11]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[12]:http://www.abc.net.au/news/2017-05-15/what-to-do-if-youve-been-hacked/8526118
[13]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
[14]:http://www.abc.net.au/news/2017-05-21/military-trucks-trhough-pyongyang/8545134
[15]:http://www.abc.net.au/news/2017-05-16/researchers-link-wannacry-to-north-korea/8531110
[16]:http://www.abc.net.au/news/2017-05-15/have-you-been-hit-by-ransomware/8527854
@ -1,155 +0,0 @@
10 tools for visual effects in Linux with Kdenlive
================================================================================

![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/kdenlivetoolssummary.png)

Image credit: Seth Kenlon. [CC BY-SA 4.0.][1]
[Kdenlive][2] is one of those applications; you can use it daily for a year and wake up one morning only to realize that you still have only grazed the surface of all of its potential. That's why it's nice every once in a while to sit back and look over some of the lesser-used tricks and tools in Kdenlive. Even though something's not used as often as, say, the Spacer or Razor tools, it still may end up being just the right finishing touch on your latest masterpiece.
Most of the tools I'll discuss here are not officially part of Kdenlive; they are plugins from the [Frei0r][3] package. These are ubiquitous parts of video processing on Linux and Unix, and they usually get installed along with Kdenlive as distributed by most Linux distributions, so they often seem like part of the application. If your install of Kdenlive does not feature some of the tools mentioned here, make sure that you have Frei0r plugins installed.
Since many of the tools in this article affect the look of an image, here is the base image, without effects or adjustment:
![](https://opensource.com/sites/default/files/images/life-uploads/before_0.png)

Still image grabbed from a video by Footage Firm, Inc. [CC BY-SA 4.0.][1]

Let's get started.

### 1. Color effect ###

![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect.png)
You can find the **Color Effect** filter in the **Add Effect > Misc** context menu. As filters go, it's mostly just a preset; the only control it offers is which effect you want to use.

![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect_ctl_0.png)
Normally that's the kind of filter I avoid, but I have to be honest: Sometimes a plug-and-play solution is exactly what you want. This filter has a few different settings, but the two that make it worthwhile (at least for me) are the Sepia and XPro effects. Admittedly, controls to adjust just how sepia the sepia effect is would be nice, but no matter what, when you need a quick and familiar color effect, this is the filter to throw onto a clip. It's immediate, it's easy, and if your client asks for that look, this does the trick every time.

### 2. Colorize ###

![](https://opensource.com/sites/default/files/images/life-uploads/colorize.png)
The simplicity of the **Colorize** filter in **Add Effect > Misc** is also its strength. In some editing applications, it takes two filters and some compositing to achieve this simple color-wash effect. It's refreshing that in Kdenlive, it's a matter of one filter with three possible controls (only one of which, strictly speaking, is necessary to achieve the look).
![](https://opensource.com/sites/default/files/images/life-uploads/colorize_ctl.png)
Its use is intuitive; use the **Hue** slider to set the color. Use the other controls to adjust the luma of the base image as needed.
This is not a filter I use every day, but for ad spots, bumpers, dreamy sequences, or titles, it's the easiest and quickest path to a commonly needed look. Get a company's color, use it as the colorize effect, slap a logo over the top of the screen, and you've just created a winning corporate intro.
### 3. Dynamic Text ###

![](https://opensource.com/sites/default/files/images/life-uploads/dyntext.png)
For the assistant editor, the **Add Effect > Misc > Dynamic Text** effect is worth the price of Kdenlive. With one mostly preset filter, you can add a running timecode burn-in to your project, which is an absolute must-have safety feature when round-tripping your footage through effects and sound.

The controls look more complex than they actually are.

![](https://opensource.com/sites/default/files/images/life-uploads/dyntext_ctl.png)
The font settings are self-explanatory. Placement of the text is controlled by the Horizontal and Vertical Alignment settings; steer clear of the **Size** setting (it controls the size of the "canvas" upon which you are compositing the burn-in, not the size of the burn-in itself).
The text itself doesn't have to be timecode. From the dropdown menu, you can choose from a list of useful text, including frame count (useful for VFX, since animators work in frames), source frame rate, source dimensions, and more.
You are not limited to just one choice. The text field in the control panel will take whatever arbitrary text you put into it, so if you want to burn in more information than just timecode and frame rate (such as **Sc 3 - #timecode# - #meta.media.0.stream.frame_rate#**), then have at it.
### 4. Luminance ###

![](https://opensource.com/sites/default/files/images/life-uploads/luminance.png)
The **Add Effect > Misc > Luminance** filter is a no-options filter. Luminance does one thing and it does it well: It drops the chroma values of all pixels in an image so that they are displayed by their luma values. In simpler terms, it's a grayscale filter.
The nice thing about this filter is that it's quick, easy, efficient, and effective. This filter combines particularly well with other related filters (meaning that yes, I'm cheating and including three filters for one).
![](https://opensource.com/sites/default/files/images/life-uploads/luminance_ctl.png)
Combining, in this order, the **RGB Noise** for emulated grain, **Luminance** for grayscale, and **LumaLiftGainGamma** for levels can render a textured image that suggests the classic look and feel of [Kodak Tri-X][4] film.

### 5. Mask0mate ###

![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate.png)

Image by Footage Firm, Inc.
Better known as a four-point garbage mask, the **Add Effect > Alpha Manipulation > Mask0mate** tool is a quick, no-frills way to ditch parts of your frame that you don't need. There isn't much to say about it; it is what it is.
![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate_ctl.png)
The confusing thing about the effect is that it does not imply compositing. You can pull in the edges all you want, but you won't see it unless you add the **Composite** transition to reveal what's underneath the clip (even if that's nothing). Also, use the **Invert** function for the filter to act like you think it should act (without it, the controls will probably feel backward to you).
### 6. Pr0file ###

![](https://opensource.com/sites/default/files/images/life-uploads/pr0file.png)
The **Add Effect > Misc > Pr0file** filter is an analytical tool, not something you would actually leave on a clip for final export (unless, of course, you do). Pr0file consists of two components: the Marker, which dictates what area of the image is being analyzed, and the Graph, which displays information about the marked region.
Set the marker using the **X, Y, Tilt**, and **Length** controls. The graphical readout of all the relevant color channel information is displayed as a graph, superimposed over your image.
![](https://opensource.com/sites/default/files/images/life-uploads/pr0file_ctl.jpg)
The readout displays a profile of the colors within the region marked. The result is a sort of hyper-specific vectorscope (or oscilloscope, as the case may be) that can help you zero in on problem areas during color correction, or compare regions while color matching.
In other editors, the way to get the same information was simply to temporarily scale your image up to the region you want to analyze, look at your readout, and then hit undo to scale back. Both ways work, but the Pr0file filter does feel a little more elegant.
### 7. Vectorscope ###

![](https://opensource.com/sites/default/files/images/life-uploads/vectorscope.jpg)
Kdenlive features an inbuilt vectorscope, available from the **View** menu in the main menu bar. A vectorscope is not a filter; it's just another view of the footage in your Project Monitor, specifically a view of the color saturation in the current frame. If you are color correcting an image and you're not sure what colors you need to boost or counteract, looking at the vectorscope can be a huge help.
There are several different views available. You can render the vectorscope in traditional green monochrome (like the hardware vectorscopes you'd find in a broadcast control room), or a chromatic view (my personal preference), or subtracted from a color-wheel background, and more.
The vectorscope reads the entire frame, so unlike the Pr0file filter, you are not just getting a reading of one area in the frame. The result is a consolidated view of what colors are most prominent within a frame. Technically, the same sort of information can be intuited by several trial-and-error passes with color correction, or you can just leave your vectorscope open and watch the colors float along the color wheel and make adjustments accordingly.
Aside from how you want the vectorscope to look, there are no controls for this tool. It is a readout only.
### 8. Vertigo ###

![](https://opensource.com/sites/default/files/images/life-uploads/vertigo.jpg)
There's no way around it; **Add Effect > Misc > Vertigo** is a gimmicky special effect filter. So unless you're remaking [Fear and Loathing][5] or the movie adaptation of [Dead Island][6], you probably aren't going to use it that much; however, it's one of those high-quality filters that does the exact trick you want when you happen to be looking for it.
The controls are simple. You can adjust how distorted the image becomes and the rate at which it distorts. The overall effect is probably more drunk or vision-quest than vertigo, but it's good.
![](https://opensource.com/sites/default/files/images/life-uploads/vertigo_ctl.png)

### 9. Vignette ###

![](https://opensource.com/sites/default/files/images/life-uploads/vignette.jpg)
Another beautiful effect, the **Add Effect > Misc > Vignette** filter darkens the outer edges of the frame to provide a sort of soft-focus, nouveau portrait look. Combined with the Color Effect or the Luminance faux Tri-X trick, this can be a powerful and emotional look.
The softness of the border and the aspect ratio of the iris can be adjusted. The **Clear Center Size** attribute controls the size of the clear area, which has the effect of adjusting the intensity of the vignette effect.
![](https://opensource.com/sites/default/files/images/life-uploads/vignette_ctl.png)

### 10. Volume ###

![](https://opensource.com/sites/default/files/images/life-uploads/vol.jpg)
I don't believe in mixing sound within the video editing application, but I do acknowledge that sometimes it's just necessary for a quick fix or, sometimes, even for a tight production schedule. And that's when the **Audio correction > Volume (Keyframable)** effect comes in handy.
The control panel is clunky, and no one really wants to adjust volume that way, so the effect is best when used directly in the timeline. To create a volume change, double-click the volume line over the audio clip, and then click and drag to adjust. It's that simple.
Should you use it? Not really. Sound mixing should be done in a sound mixing application. Will you use it? Absolutely. At some point, you'll get audio that is too loud to play as you edit, or you'll be up against a deadline without a sound engineer in sight. Use it judiciously, watch your levels, and get the show finished.
### Everything else ###
This has been 10 (OK, 13 or 14) effects and tools that Kdenlive has quietly lying around to help your edits become great. Obviously there's a lot more to Kdenlive than just these little tricks. Some are obvious, some are cliché, some are obtuse, but they're all in your toolkit. Get to know them, explore your options, and you might be surprised what a few cheap tricks will get you.
--------------------------------------------------------------------------------
via: https://opensource.com/life/15/12/10-kdenlive-tools
作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/seth
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://kdenlive.org/
[3]:http://frei0r.dyne.org/
[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml
[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film)
[6]:https://en.wikipedia.org/wiki/Dead_Island
@ -1,473 +0,0 @@
[How debuggers work: Part 2 - Breakpoints][26]
============================================================
This is the second part in a series of articles on how debuggers work. Make sure you read [the first part][27] before this one.
### In this part
I'm going to demonstrate how breakpoints are implemented in a debugger. Breakpoints are one of the two main pillars of debugging - the other is the ability to inspect values in the debugged process's memory. We've already seen a preview of the other pillar in part 1 of the series, but breakpoints still remain mysterious. By the end of this article, they won't be.
### Software interrupts
To implement breakpoints on the x86 architecture, software interrupts (also known as "traps") are used. Before we get deep into the details, I want to explain the concept of interrupts and traps in general.
A CPU has a single stream of execution, working through instructions one by one [[1]][19]. To handle asynchronous events like IO and hardware timers, CPUs use interrupts. A hardware interrupt is usually a dedicated electrical signal to which a special "response circuitry" is attached. This circuitry notices an activation of the interrupt and makes the CPU stop its current execution, save its state, and jump to a predefined address where a handler routine for the interrupt is located. When the handler finishes its work, the CPU resumes execution from where it stopped.
Software interrupts are similar in principle but a bit different in practice. CPUs support special instructions that allow the software to simulate an interrupt. When such an instruction is executed, the CPU treats it like an interrupt - stops its normal flow of execution, saves its state and jumps to a handler routine. Such "traps" allow many of the wonders of modern OSes (task scheduling, virtual memory, memory protection, debugging) to be implemented efficiently.
Some programming errors (such as division by 0) are also treated by the CPU as traps, and are frequently referred to as "exceptions". Here the line between hardware and software blurs, since it's hard to say whether such exceptions are really hardware interrupts or software interrupts. But I've digressed too far away from the main topic, so it's time to get back to breakpoints.
### int 3 in theory
Having written the previous section, I can now simply say that breakpoints are implemented on the CPU by a special trap called int 3. int is x86 jargon for "trap instruction" - a call to a predefined interrupt handler. x86 supports the int instruction with an 8-bit operand specifying the number of the interrupt that occurred, so in theory 256 traps are supported. The first 32 are reserved by the CPU for itself, and number 3 is the one we're interested in here - it's called "trap to debugger".
Without further ado, I'll quote from the bible itself [[2]][20]:
|
||||
|
||||
> The INT 3 instruction generates a special one byte opcode (CC) that is intended for calling the debug exception handler. (This one byte form is valuable because it can be used to replace the first byte of any instruction with a breakpoint, including other one byte instructions, without over-writing other code).
|
||||
|
||||
The part in parens is important, but it's still too early to explain it. We'll come back to it later in this article.
|
||||
|
||||
### int 3 in practice
|
||||
|
||||
Yes, knowing the theory behind things is great, OK, but what does this really mean? How do we use int 3to implement breakpoints? Or to paraphrase common programming Q&A jargon - _Plz show me the codes!_
|
||||
|
||||
In practice, this is really very simple. Once your process executes the int 3 instruction, the OS stops it [[3]][21]. On Linux (which is what we're concerned with in this article) it then sends the process a signal - SIGTRAP.
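
You can see this mechanism without any debugger involved. Here's a minimal sketch of mine (GCC inline assembly, x86; not from the article's code archive) of a process executing int 3 on itself; run from a shell it dies with "Trace/breakpoint trap", since nothing handles the SIGTRAP:

```
/* A minimal sketch: a process that executes the debug trap on itself. */
#include <stdio.h>

int main(void)
{
    printf("before the trap\n");
    __asm__ volatile ("int3");   /* assembles to the single byte 0xCC */
    printf("after the trap\n");  /* reached only if SIGTRAP is intercepted */
    return 0;
}
```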

That's all there is to it - honest! Now recall from the first part of the series that a tracing (debugger) process gets notified of all the signals its child (or the process it attaches to for debugging) gets, and you can start getting a feel of where we're going.

That's it, no more computer architecture 101 jabber. It's time for examples and code.

### Setting breakpoints manually

I'm now going to show code that sets a breakpoint in a program. The target program I'm going to use for this demonstration is the following:

```
section .text
    ; The _start symbol must be declared for the linker (ld)
    global _start

_start:

    ; Prepare arguments for the sys_write system call:
    ;   - eax: system call number (sys_write)
    ;   - ebx: file descriptor (stdout)
    ;   - ecx: pointer to string
    ;   - edx: string length
    mov     edx, len1
    mov     ecx, msg1
    mov     ebx, 1
    mov     eax, 4

    ; Execute the sys_write system call
    int     0x80

    ; Now print the other message
    mov     edx, len2
    mov     ecx, msg2
    mov     ebx, 1
    mov     eax, 4
    int     0x80

    ; Execute sys_exit
    mov     eax, 1
    int     0x80

section .data

msg1    db      'Hello,', 0xa
len1    equ     $ - msg1
msg2    db      'world!', 0xa
len2    equ     $ - msg2
```

I'm using assembly language for now, in order to keep us clear of compilation issues and symbols that come up when we get into C code. What the program listed above does is simply print "Hello," on one line and then "world!" on the next line. It's very similar to the program demonstrated in the previous article.

I want to set a breakpoint after the first printout, but before the second one. Let's say right after the first int 0x80 [[4]][22], on the mov edx, len2 instruction. First, we need to know what address this instruction maps to. Running objdump -d:

```
traced_printer2:     file format elf32-i386

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00000033  08048080  08048080  00000080  2**4
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
  1 .data         0000000e  080490b4  080490b4  000000b4  2**2
                  CONTENTS, ALLOC, LOAD, DATA

Disassembly of section .text:

08048080 <.text>:
 8048080:    ba 07 00 00 00    mov    $0x7,%edx
 8048085:    b9 b4 90 04 08    mov    $0x80490b4,%ecx
 804808a:    bb 01 00 00 00    mov    $0x1,%ebx
 804808f:    b8 04 00 00 00    mov    $0x4,%eax
 8048094:    cd 80             int    $0x80
 8048096:    ba 07 00 00 00    mov    $0x7,%edx
 804809b:    b9 bb 90 04 08    mov    $0x80490bb,%ecx
 80480a0:    bb 01 00 00 00    mov    $0x1,%ebx
 80480a5:    b8 04 00 00 00    mov    $0x4,%eax
 80480aa:    cd 80             int    $0x80
 80480ac:    b8 01 00 00 00    mov    $0x1,%eax
 80480b1:    cd 80             int    $0x80
```

So, the address we're going to set the breakpoint on is 0x8048096. Wait, this is not how real debuggers work, right? Real debuggers set breakpoints on lines of code and on functions, not on some bare memory addresses? Exactly right. But we're still far from there - to set breakpoints like _real_ debuggers we still have to cover symbols and debugging information first, and it will take another part or two in the series to reach these topics. For now, we'll have to make do with bare memory addresses.

At this point I really want to digress again, so you have two choices. If it's really interesting for you to know _why_ the address is 0x8048096 and what it means, read the next section. If not, and you just want to get on with the breakpoints, you can safely skip it.

### Digression - process addresses and entry point

Frankly, 0x8048096 itself doesn't mean much, it's just a few bytes away from the beginning of the text section of the executable. If you look carefully at the dump listing above, you'll see that the text section starts at 0x08048080. This tells the OS to map the text section starting at this address in the virtual address space given to the process. On Linux these addresses can be absolute (i.e. the executable isn't being relocated when it's loaded into memory), because with the virtual memory system each process gets its own chunk of memory and sees the whole 32-bit address space as its own (called "linear" address).

If we examine the ELF [[5]][23] header with readelf, we get:

```
$ readelf -h traced_printer2
ELF Header:
  Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF32
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              EXEC (Executable file)
  Machine:                           Intel 80386
  Version:                           0x1
  Entry point address:               0x8048080
  Start of program headers:          52 (bytes into file)
  Start of section headers:          220 (bytes into file)
  Flags:                             0x0
  Size of this header:               52 (bytes)
  Size of program headers:           32 (bytes)
  Number of program headers:         2
  Size of section headers:           40 (bytes)
  Number of section headers:         4
  Section header string table index: 3
```

Note the "entry point address" section of the header, which also points to 0x8048080. So if we interpret the directions encoded in the ELF file for the OS, it says:

1. Map the text section (with given contents) to address 0x8048080
2. Start executing at the entry point - address 0x8048080

But still, why 0x8048080? For historic reasons, it turns out. Some googling led me to a few sources that claim that the first 128MB of each process's address space were reserved for the stack. 128MB happens to be 0x8000000, which is where other sections of the executable may start. 0x8048080, in particular, is the default entry point used by the Linux ld linker. This entry point can be modified by passing the -Ttext argument to ld.

To conclude, there's nothing really special in this address and we can freely change it. As long as the ELF executable is properly structured and the entry point address in the header matches the real beginning of the program's code (text section), we're OK.

### Setting breakpoints in the debugger with int 3

To set a breakpoint at some target address in the traced process, the debugger does the following:

1. Remember the data stored at the target address
2. Replace the first byte at the target address with the int 3 instruction

Then, when the debugger asks the OS to run the process (with PTRACE_CONT as we saw in the previous article), the process will run and eventually hit upon the int 3, where it will stop and the OS will send it a signal. This is where the debugger comes in again, receiving a signal that its child (or traced process) was stopped. It can then:

1. Replace the int 3 instruction at the target address with the original instruction
2. Roll the instruction pointer of the traced process back by one. This is needed because the instruction pointer now points _after_ the int 3, having already executed it.
3. Allow the user to interact with the process in some way, since the process is still halted at the desired target address. This is the part where your debugger lets you peek at variable values, the call stack and so on.
4. When the user wants to keep running, the debugger will take care of placing the breakpoint back (since it was removed in step 1) at the target address, unless the user asked to cancel the breakpoint.

Let's see how some of these steps are translated into real code. We'll use the debugger "template" presented in part 1 (forking a child process and tracing it). In any case, there's a link to the full source code of this example at the end of the article.
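
As a reminder, that part-1 "template" looks roughly like this - a condensed sketch of mine, with error checking omitted (see part 1 for the full version):

```
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <unistd.h>

void run_debugger(pid_t child_pid);    /* the debugger logic shown in this article */

int main(int argc, char** argv)
{
    pid_t child_pid = fork();
    if (child_pid == 0) {
        ptrace(PTRACE_TRACEME, 0, 0, 0);     /* ask to be traced by the parent */
        execl(argv[1], argv[1], (char*)0);   /* the child stops with SIGTRAP at entry */
    }
    else {
        run_debugger(child_pid);             /* parent: the snippets below run here */
    }
    return 0;
}
```

The snippets that follow all run inside the parent's debugger code, with `regs` being a `struct user_regs_struct` filled by PTRACE_GETREGS and `wait_status` an int.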

```
/* Obtain and show child's instruction pointer */
ptrace(PTRACE_GETREGS, child_pid, 0, &regs);
procmsg("Child started. EIP = 0x%08x\n", regs.eip);

/* Look at the word at the address we're interested in */
unsigned addr = 0x8048096;
unsigned data = ptrace(PTRACE_PEEKTEXT, child_pid, (void*)addr, 0);
procmsg("Original data at 0x%08x: 0x%08x\n", addr, data);
```

Here the debugger fetches the instruction pointer from the traced process, as well as examines the word currently present at 0x8048096. When run tracing the assembly program listed in the beginning of the article, this prints:

```
[13028] Child started. EIP = 0x08048080
[13028] Original data at 0x08048096: 0x000007ba
```

So far, so good. Next:

```
/* Write the trap instruction 'int 3' into the address */
unsigned data_with_trap = (data & 0xFFFFFF00) | 0xCC;
ptrace(PTRACE_POKETEXT, child_pid, (void*)addr, (void*)data_with_trap);

/* See what's there again... */
unsigned readback_data = ptrace(PTRACE_PEEKTEXT, child_pid, (void*)addr, 0);
procmsg("After trap, data at 0x%08x: 0x%08x\n", addr, readback_data);
```

Note how int 3 is inserted at the target address. This prints:

```
[13028] After trap, data at 0x08048096: 0x000007cc
```

Again, as expected - 0xba was replaced with 0xcc. The debugger now runs the child and waits for it to halt on the breakpoint:

```
/* Let the child run to the breakpoint and wait for it to
** reach it
*/
ptrace(PTRACE_CONT, child_pid, 0, 0);

wait(&wait_status);
if (WIFSTOPPED(wait_status)) {
    procmsg("Child got a signal: %s\n", strsignal(WSTOPSIG(wait_status)));
}
else {
    perror("wait");
    return;
}

/* See where the child is now */
ptrace(PTRACE_GETREGS, child_pid, 0, &regs);
procmsg("Child stopped at EIP = 0x%08x\n", regs.eip);
```

This prints:

```
Hello,
[13028] Child got a signal: Trace/breakpoint trap
[13028] Child stopped at EIP = 0x08048097
```

Note the "Hello," that was printed before the breakpoint - exactly as we planned. Also note where the child stopped - just after the single-byte trap instruction.

Finally, as was explained earlier, to keep the child running we must do some work. We replace the trap with the original instruction and let the process continue running from it.

```
/* Remove the breakpoint by restoring the previous data
** at the target address, and unwind the EIP back by 1 to
** let the CPU execute the original instruction that was
** there.
*/
ptrace(PTRACE_POKETEXT, child_pid, (void*)addr, (void*)data);
regs.eip -= 1;
ptrace(PTRACE_SETREGS, child_pid, 0, &regs);

/* The child can continue running now */
ptrace(PTRACE_CONT, child_pid, 0, 0);
```

This makes the child print "world!" and exit, just as planned.

Note that we don't restore the breakpoint here. That can be done by executing the original instruction in single-step mode, then placing the trap back and only then doing PTRACE_CONT - the idea is sketched below. The debug library demonstrated later in the article implements this.
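
In outline, that re-arming sequence might look like this. It's a sketch of mine reusing the variables from the snippets above (addr, data, data_with_trap, regs, wait_status), not debuglib's actual code:

```
/* Restore the original word and rewind EIP onto the original instruction. */
ptrace(PTRACE_POKETEXT, child_pid, (void*)addr, (void*)data);
regs.eip -= 1;
ptrace(PTRACE_SETREGS, child_pid, 0, &regs);

/* Execute just that one instruction, then re-arm the trap. */
ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0);
wait(&wait_status);
if (WIFSTOPPED(wait_status)) {
    ptrace(PTRACE_POKETEXT, child_pid, (void*)addr, (void*)data_with_trap);
    ptrace(PTRACE_CONT, child_pid, 0, 0);   /* run until the next breakpoint hit */
}
```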

### More on int 3

Now is a good time to come back and examine int 3 and that curious note from Intel's manual. Here it is again:

> This one byte form is valuable because it can be used to replace the first byte of any instruction with a breakpoint, including other one byte instructions, without over-writing other code

int instructions on x86 occupy two bytes - 0xcd followed by the interrupt number [[6]][24]. int 3 could've been encoded as cd 03, but there's a special single-byte instruction reserved for it - 0xcc.

Why so? Because this allows us to insert a breakpoint without ever overwriting more than one instruction. And this is important. Consider this sample code:

```
.. some code ..
jz foo
dec eax
foo:
    call bar
.. some code ..
```

Suppose we want to place a breakpoint on dec eax. This happens to be a single-byte instruction (with the opcode 0x48). Had the replacement breakpoint instruction been longer than 1 byte, we'd be forced to overwrite part of the next instruction (call), which would garble it and probably produce something completely invalid. But what if the branch jz foo was taken? Then, without stopping on dec eax, the CPU would go straight to execute the invalid instruction after it.

Having a special 1-byte encoding for int 3 solves this problem. Since 1 byte is the shortest an instruction can get on x86, we guarantee that only the instruction we want to break on gets changed.

### Encapsulating some gory details

Many of the low-level details shown in code samples of the previous section can be easily encapsulated behind a convenient API. I've done some encapsulation into a small utility library called debuglib - its code is available for download at the end of the article. Here I just want to demonstrate an example of its usage, but with a twist. We're going to trace a program written in C.
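
Reconstructed from how it is used below, debuglib's breakpoint interface boils down to something like this (the real header may differ slightly - see the download link at the end of the article):

```
/* The breakpoint API of debuglib, as reconstructed from its usage below. */
typedef struct debug_breakpoint_t debug_breakpoint;

/* Remember the original byte at addr in the traced process and patch in 0xCC. */
debug_breakpoint* create_breakpoint(pid_t pid, void* addr);

/* Restore the original byte, single-step over it, re-arm the trap and resume.
** Returns 0 if the child exited, 1 if it stopped on the breakpoint again. */
int resume_from_breakpoint(pid_t pid, debug_breakpoint* bp);

/* Free the breakpoint's bookkeeping. */
void cleanup_breakpoint(debug_breakpoint* bp);
```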

### Tracing a C program

So far, for the sake of simplicity, I focused on assembly language targets. It's time to go one level up and see how we can trace a program written in C.

It turns out things aren't very different - it's just a bit harder to find where to place the breakpoints. Consider this simple program:

```
#include <stdio.h>

void do_stuff()
{
    printf("Hello, ");
}

int main()
{
    for (int i = 0; i < 4; ++i)
        do_stuff();
    printf("world!\n");
    return 0;
}
```

Suppose I want to place a breakpoint at the entrance to do_stuff. I'll use our old friend objdump to disassemble the executable, but there's a lot in it. In particular, looking at the text section is a bit useless since it contains a lot of C runtime initialization code I'm currently not interested in. So let's just look for do_stuff in the dump:

```
080483e4 <do_stuff>:
 80483e4:    55                      push   %ebp
 80483e5:    89 e5                   mov    %esp,%ebp
 80483e7:    83 ec 18                sub    $0x18,%esp
 80483ea:    c7 04 24 f0 84 04 08    movl   $0x80484f0,(%esp)
 80483f1:    e8 22 ff ff ff          call   8048318 <puts@plt>
 80483f6:    c9                      leave
 80483f7:    c3                      ret
```

Alright, so we'll place the breakpoint at 0x080483e4, which is the first instruction of do_stuff. Moreover, since this function is called in a loop, we want to keep stopping at the breakpoint until the loop ends. We're going to use the debuglib library to make this simple. Here's the complete debugger function:

```
void run_debugger(pid_t child_pid)
{
    procmsg("debugger started\n");

    /* Wait for child to stop on its first instruction */
    wait(0);
    procmsg("child now at EIP = 0x%08x\n", get_child_eip(child_pid));

    /* Create breakpoint and run to it */
    debug_breakpoint* bp = create_breakpoint(child_pid, (void*)0x080483e4);
    procmsg("breakpoint created\n");
    ptrace(PTRACE_CONT, child_pid, 0, 0);
    wait(0);

    /* Loop as long as the child didn't exit */
    while (1) {
        /* The child is stopped at a breakpoint here. Resume its
        ** execution until it either exits or hits the
        ** breakpoint again.
        */
        procmsg("child stopped at breakpoint. EIP = 0x%08X\n", get_child_eip(child_pid));
        procmsg("resuming\n");
        int rc = resume_from_breakpoint(child_pid, bp);

        if (rc == 0) {
            procmsg("child exited\n");
            break;
        }
        else if (rc == 1) {
            continue;
        }
        else {
            procmsg("unexpected: %d\n", rc);
            break;
        }
    }

    cleanup_breakpoint(bp);
}
```

Instead of getting our hands dirty modifying EIP and the target process's memory space, we just use create_breakpoint, resume_from_breakpoint and cleanup_breakpoint. Let's see what this prints when tracing the simple C code displayed above:

```
$ bp_use_lib traced_c_loop
[13363] debugger started
[13364] target started. will run 'traced_c_loop'
[13363] child now at EIP = 0x00a37850
[13363] breakpoint created
[13363] child stopped at breakpoint. EIP = 0x080483E5
[13363] resuming
Hello,
[13363] child stopped at breakpoint. EIP = 0x080483E5
[13363] resuming
Hello,
[13363] child stopped at breakpoint. EIP = 0x080483E5
[13363] resuming
Hello,
[13363] child stopped at breakpoint. EIP = 0x080483E5
[13363] resuming
Hello,
world!
[13363] child exited
```

Just as expected!

### The code

[Here are][25] the complete source code files for this part. In the archive you'll find:

* debuglib.h and debuglib.c - the simple library for encapsulating some of the inner workings of a debugger
* bp_manual.c - the "manual" way of setting breakpoints presented first in this article. Uses the debuglib library for some boilerplate code.
* bp_use_lib.c - uses debuglib for most of its code, as demonstrated in the second code sample for tracing the loop in a C program.

### Conclusion and next steps

We've covered how breakpoints are implemented in debuggers. While implementation details vary between OSes, when you're on x86 it's all basically variations on the same theme - substituting int 3 for the instruction where we want the process to stop.

That said, I'm sure some readers, just like me, will be less than excited about specifying raw memory addresses to break on. We'd like to say "break on do_stuff", or even "break on _this_ line in do_stuff" and have the debugger do it. In the next article I'm going to show how it's done.

### References

I've found the following resources and articles useful in the preparation of this article:

* [How debugger works][12]
* [Understanding ELF using readelf and objdump][13]
* [Implementing breakpoints on x86 Linux][14]
* [NASM manual][15]
* [SO discussion of the ELF entry point][16]
* [This Hacker News discussion][17] of the first part of the series
* [GDB Internals][18]

[1] On a high-level view this is true. Down in the gory details, many CPUs today execute multiple instructions in parallel, some of them not in their original order.

[2] The bible in this case being, of course, Intel's Architecture software developer's manual, volume 2A.

[3] How can the OS stop a process just like that? The OS registered its own handler for int 3 with the CPU, that's how!

[4] Wait, int again? Yes! Linux uses int 0x80 to implement system calls from user processes into the OS kernel. The user places the number of the system call and its arguments into registers and executes int 0x80. The CPU then jumps to the appropriate interrupt handler, where the OS registered a procedure that looks at the registers and decides which system call to execute. In code, that description translates to the snippet below.
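
Here is footnote 4 in code form - a sketch of mine (GCC inline assembly, 32-bit x86 Linux only), invoking sys_write directly through int 0x80 from C:

```
/* Sketch: sys_write via int 0x80 on 32-bit x86 Linux.
** eax = syscall number (4 = sys_write), ebx = fd, ecx = buffer, edx = length. */
int main(void)
{
    const char msg[] = "hi\n";
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(1), "c"(msg), "d"(sizeof msg - 1)
                      : "memory");
    return 0;
}
```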

[5] ELF (Executable and Linkable Format) is the file format used by Linux for object files, shared libraries and executables.

[6] An observant reader can spot the translation of int 0x80 into cd 80 in the dumps listed above.

--------------------------------------------------------------------------------

via: http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints

作者:[Eli Bendersky][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://eli.thegreenplace.net/
[1]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id1
[2]:http://en.wikipedia.org/wiki/Out-of-order_execution
[3]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id2
[4]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id3
[5]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id4
[6]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id5
[7]:http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
[8]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id6
[9]:http://eli.thegreenplace.net/tag/articles
[10]:http://eli.thegreenplace.net/tag/debuggers
[11]:http://eli.thegreenplace.net/tag/programming
[12]:http://www.alexonlinux.com/how-debugger-works
[13]:http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html
[14]:http://mainisusuallyafunction.blogspot.com/2011/01/implementing-breakpoints-on-x86-linux.html
[15]:http://www.nasm.us/xdoc/2.09.04/html/nasmdoc0.html
[16]:http://stackoverflow.com/questions/2187484/elf-binary-entry-point
[17]:http://news.ycombinator.net/item?id=2131894
[18]:http://www.deansys.com/doc/gdbInternals/gdbint_toc.html
[19]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id7
[20]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id8
[21]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id9
[22]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id10
[23]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id11
[24]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints#id12
[25]:https://github.com/eliben/code-for-blog/tree/master/2011/debuggers_part2_code
[26]:http://eli.thegreenplace.net/2011/01/27/how-debuggers-work-part-2-breakpoints
[27]:http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1/
|
@ -1,3 +1,5 @@

wyangsun translating

Apache Spark @Scale: A 60 TB+ production use case
===========

@ -1,164 +0,0 @@

Translating by cposture 20161228

# Applying the Linus Torvalds “Good Taste” Coding Requirement

In [a recent interview][1], Linus Torvalds, the creator of Linux, made a quick point at approximately 14:20 about coding with “good taste”. Good taste? The interviewer prodded him for details and Linus came prepared with illustrations.

He presented a code snippet. But this wasn’t “good taste” code. This snippet was an example of poor taste in order to provide some initial contrast.

![](https://d262ilb51hltx0.cloudfront.net/max/1200/1*X2VgEA_IkLvsCS-X4iPY7g.png)

It’s a function, written in C, that removes an object from a linked list. It contains 10 lines of code.

He called attention to the if-statement at the bottom. It was _this_ if-statement that he criticized.

I paused the video and studied the slide. I had recently written very similar code. Linus was effectively saying I had poor taste. I swallowed my pride and continued the video.

Linus explained to the audience, as I already knew, that when removing an object from a linked list, there are two cases to consider. If the object is at the start of the list there is a different process for its removal than if it is in the middle of the list. And this is the reason for the “poor taste” if-statement.

But if he admits it is necessary, then why is it so bad?

Next he revealed a second slide to the audience. This was his example of the same function, but written with “good taste”.

![](https://d262ilb51hltx0.cloudfront.net/max/1200/1*GHFLYFB3vDQeakMyUGPglw.png)

The original 10 lines of code had now been reduced to 4.

But it wasn’t the line count that mattered. It was that if-statement. It’s gone. No longer needed. The code has been refactored so that, regardless of the object’s position in the list, the same process is applied to remove it.
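
Since the slides are images, here is a rough C reconstruction of the two versions for reference. The names and types are mine, not Linus’s exact code, but the shape of the transformation is the one he describes:

```
typedef struct entry {
    struct entry *next;
    /* ... payload ... */
} entry_t;

typedef struct {
    entry_t *head;
} list_t;

/* Poor taste: the head of the list is a special case. */
void remove_entry_v1(list_t *l, entry_t *entry)
{
    entry_t *prev = NULL, *walk = l->head;
    while (walk != entry) {
        prev = walk;
        walk = walk->next;
    }
    if (!prev)
        l->head = entry->next;      /* removing the first element */
    else
        prev->next = entry->next;   /* removing from the middle */
}

/* Good taste: an indirect pointer makes the head just another
** 'next' link, so the edge case disappears. */
void remove_entry_v2(list_t *l, entry_t *entry)
{
    entry_t **indirect = &l->head;
    while (*indirect != entry)
        indirect = &(*indirect)->next;
    *indirect = entry->next;
}
```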

Linus explained the new code, the elimination of the edge case, and that was it. The interview then moved on to the next topic.

I studied the code for a moment. Linus was right. The second slide _was_ better. If this was a test to determine good taste from poor taste, I would have failed. The thought that it may be possible to eliminate that conditional statement had never occurred to me. And I had written it more than once, since I commonly work with linked lists.

What’s good about this illustration isn’t just that it teaches you a better way to remove an item from a linked list, but that it makes you consider that the code you’ve written, the little algorithms you’ve sprinkled throughout the program, may have room for improvement in ways you’ve never considered.

So this was my focus as I went back and reviewed the code in my most recent project. Perhaps it was serendipitous that it also happened to be written in C.

To the best of my ability to discern, the crux of the “good taste” requirement is the elimination of edge cases, which tend to reveal themselves as conditional statements. The fewer conditions you test for, the better your code “_tastes_”.

Here is one particular example of an improvement I made that I wanted to share.

### Initializing Grid Edges

Below is an algorithm I wrote to initialize the points along the edge of a grid, which is represented as a multidimensional array: grid[rows][cols].

Again, the purpose of this code was to only initialize the values of the points that reside on the edge of the grid — so only the top row, bottom row, left column, and right column.

To accomplish this I initially looped over every point in the grid and used conditionals to test for the edges. This is what it looked like:

```
for (r = 0; r < GRID_SIZE; ++r) {
    for (c = 0; c < GRID_SIZE; ++c) {

        // Top Edge
        if (r == 0)
            grid[r][c] = 0;

        // Left Edge
        if (c == 0)
            grid[r][c] = 0;

        // Right Edge
        if (c == GRID_SIZE - 1)
            grid[r][c] = 0;

        // Bottom Edge
        if (r == GRID_SIZE - 1)
            grid[r][c] = 0;
    }
}
```

Even though it works, in hindsight, there are some issues with this construct.

1. Complexity — The use of 4 conditional statements inside 2 embedded loops seems overly complex.
2. Efficiency — Given that GRID_SIZE has a value of 64, this loop performs 4096 iterations in order to set values for only the 256 edge points.

Linus would probably agree, this is not very _tasty_.

So I did some tinkering with it. After a little bit I was able to reduce the complexity to only a single _for_-loop containing four conditionals. It was only a slight improvement in complexity, but a large improvement in performance, because it only performed 256 loop iterations, one for each point along the edge.

```
for (i = 0; i < GRID_SIZE * 4; ++i) {

    // Top Edge
    if (i < GRID_SIZE)
        grid[0][i] = 0;

    // Right Edge
    else if (i < GRID_SIZE * 2)
        grid[i - GRID_SIZE][GRID_SIZE - 1] = 0;

    // Left Edge
    else if (i < GRID_SIZE * 3)
        grid[i - (GRID_SIZE * 2)][0] = 0;

    // Bottom Edge
    else
        grid[GRID_SIZE - 1][i - (GRID_SIZE * 3)] = 0;
}
```

An improvement, yes. But it looked really ugly. It’s not exactly code that is easy to follow. Based on that alone, I wasn’t satisfied.

I continued to tinker. Could this really be improved further? In fact, the answer was _YES_. And what I eventually came up with was so astoundingly simple and elegant that I honestly couldn’t believe it took me this long to find it.

Below is the final version of the code. It has _one for-loop_ and _no conditionals_. Moreover, the loop only performs 64 iterations. It vastly improves both complexity and efficiency.

```
for (i = 0; i < GRID_SIZE; ++i) {

    // Top Edge
    grid[0][i] = 0;

    // Bottom Edge
    grid[GRID_SIZE - 1][i] = 0;

    // Left Edge
    grid[i][0] = 0;

    // Right Edge
    grid[i][GRID_SIZE - 1] = 0;
}
```

This code initializes four different edge points for each loop iteration. It’s not complex. It’s highly efficient. It’s easy to read. Compared to the original version, and even the second version, it is like night and day.

I was quite satisfied.

--------------------------------------------------------------------------------

via: https://medium.com/@bartobri/applying-the-linus-tarvolds-good-taste-coding-requirement-99749f37684a

作者:[Brian Barto][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.com/@bartobri?source=post_header_lockup
[1]:https://www.ted.com/talks/linus_torvalds_the_mind_behind_linux
@ -1,403 +0,0 @@

ictlyh Translating

GraphQL In Use: Building a Blogging Engine API with Golang and PostgreSQL
============================================================

### Abstract
|
||||
|
||||
GraphQL appears hard to use in production: the graph interface is flexible in its modeling capabilities but is a poor match for relational storage, both in terms of implementation and performance.
|
||||
|
||||
In this document, we will design and write a simple blogging engine API, with the following specification:
|
||||
|
||||
* three types of resources (users, posts and comments) supporting a varied set of functionality (create a user, create a post, add a comment to a post, follow posts and comments from another user, etc.)
|
||||
* use PostgreSQL as the backing data store (chosen because it’s a popular relational DB)
|
||||
* write the API implementation in Golang (a popular language for writing APIs).
|
||||
|
||||
We will compare a simple GraphQL implementation with a pure REST alternative in terms of implementation complexity and efficiency for a common scenario: rendering a blog post page.
|
||||
|
||||
### Introduction
|
||||
|
||||
GraphQL is an IDL (Interface Definition Language), designers define data types and model information as a graph. Each vertex is an instance of a data type, while edges represent relationships between nodes. This approach is flexible and can accommodate any business domain. However, the problem is that the design process is more complex and traditional data stores don’t map well to the graph model. See _Appendix 1_ for more details on this topic.
|
||||
|
||||
GraphQL has been first proposed in 2014 by the Facebook Engineering Team. Although interesting and compelling in its advantages and features, it hasn’t seen mass adoption. Developers have to trade REST’s simplicity of design, familiarity and rich tooling for GraphQL’s flexibility of not being limited to just CRUD and network efficiency (it optimizes for round-trips to the server).
|
||||
|
||||
Most walkthroughs and tutorials on GraphQL avoid the problem of fetching data from the data store to resolve queries. That is, how to design a database using general-purpose, popular storage solutions (like relational databases) to support efficient data retrieval for a GraphQL API.
|
||||
|
||||
This document goes through building a blog engine GraphQL API. It is moderately complex in its functionality. It is scoped to a familiar business domain to facilitate comparisons with a REST based approach.
|
||||
|
||||
The structure of this document is the following:
|
||||
|
||||
* in the first part we will design a GraphQL schema and explain some of features of the language that are used.
|
||||
* next is the design of the PostgreSQL database in section two.
|
||||
* part three covers the Golang implementation of the GraphQL schema designed in part one.
|
||||
* in part four we compare the task of rendering a blog post page from the perspective of fetching the needed data from the backend.
|
||||
|
||||
### Related
|
||||
|
||||
* The excellent [GraphQL introduction document][1].
|
||||
* The complete and working code for this project is on [github.com/topliceanu/graphql-go-example][2].
|
||||
|
||||
### Modeling a blog engine in GraphQL
|
||||
|
||||
_Listing 1_ contains the entire schema for the blog engine API. It shows the data types of the vertices composing the graph. The relationships between vertices, ie. the edges, are modeled as attributes of a given type.
|
||||
|
||||
```
|
||||
type User {
|
||||
id: ID
|
||||
email: String!
|
||||
post(id: ID!): Post
|
||||
posts: [Post!]!
|
||||
follower(id: ID!): User
|
||||
followers: [User!]!
|
||||
followee(id: ID!): User
|
||||
followees: [User!]!
|
||||
}
|
||||
|
||||
type Post {
|
||||
id: ID
|
||||
user: User!
|
||||
title: String!
|
||||
body: String!
|
||||
comment(id: ID!): Comment
|
||||
comments: [Comment!]!
|
||||
}
|
||||
|
||||
type Comment {
|
||||
id: ID
|
||||
user: User!
|
||||
post: Post!
|
||||
title: String
|
||||
body: String!
|
||||
}
|
||||
|
||||
type Query {
|
||||
user(id: ID!): User
|
||||
}
|
||||
|
||||
type Mutation {
|
||||
createUser(email: String!): User
|
||||
removeUser(id: ID!): Boolean
|
||||
follow(follower: ID!, followee: ID!): Boolean
|
||||
unfollow(follower: ID!, followee: ID!): Boolean
|
||||
createPost(user: ID!, title: String!, body: String!): Post
|
||||
removePost(id: ID!): Boolean
|
||||
createComment(user: ID!, post: ID!, title: String!, body: String!): Comment
|
||||
removeComment(id: ID!): Boolean
|
||||
}
|
||||
```
|
||||
|
||||
_Listing 1_
|
||||
|
||||
The schema is written in the GraphQL DSL, which is used for defining custom data types, such as `User`, `Post` and `Comment`. A set of primitive data types is also provided by the language, such as `String`, `Boolean` and `ID` (which is an alias of `String` with the additional semantics of being the unique identifier of a vertex).
|
||||
|
||||
`Query` and `Mutation` are optional types recognized by the parser and used in querying the graph. Reading data from a GraphQL API is equivalent to traversing the graph. As such a starting vertex needs to be provided; this role is fulfilled by the `Query` type. In this case, all queries to the graph must start with a user specified by id `user(id:ID!)`. For writing data, the `Mutation` vertex type is defined. This exposes a set of operations, modeled as parameterized attributes which traverse (and return) the newly created vertex types. See _Listing 2_ for examples of how these queries might look.
|
||||
|
||||
Vertex attributes can be parameterized, ie. accept arguments. In the context of graph traversal, if a post vertex has multiple comment vertices, you can traverse just one of them by specifying `comment(id: ID)`. All this is by design, the designer can choose not to provide direct paths to individual vertices.
|
||||
|
||||
The `!` character is a type post-fix, works for both primitive or user-defined types and has two semantics:

* when used for the type of a param in a parameterized attribute, it means that the param is required.
* when used for the return type of an attribute it means that the attribute will not be null when the vertex is retrieved.
* combinations are possible, for instance `[Comment!]!` represents a list of non-null Comment vertices, where `[]`, `[Comment]` are valid, but `null`, `[null]`, `[Comment, null]` are not.
|
||||
|
||||
_Listing 2_ contains a list of _curl_ commands against the blogging API which will populate the graph using mutations and then query it to retrieve data. To run them, follow the instructions in the [topliceanu/graphql-go-example][3] repo to build and run the service.
|
||||
|
||||
```
|
||||
# Mutations to create users 1,2 and 3\. Mutations also work as queries, in these cases we retrieve the ids and emails of the newly created users.
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user1@x.co"){id, email}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user2@x.co"){id, email}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createUser(email:"user3@x.co"){id, email}}'
|
||||
# Mutations to add posts for the users. We retrieve their ids to comply with the schema, otherwise we will get an error.
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:1,title:"post1",body:"body1"){id}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:1,title:"post2",body:"body2"){id}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createPost(user:2,title:"post3",body:"body3"){id}}'
|
||||
# Mutations to all comments to posts. `createComment` expects the user's ID, a title and a body. See the schema in Listing 1.
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:2,post:1,title:"comment1",body:"comment1"){id}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:1,post:3,title:"comment2",body:"comment2"){id}}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {createComment(user:3,post:3,title:"comment3",body:"comment3"){id}}'
|
||||
# Mutations to have the user3 follow users 1 and 2\. Note that the `follow` mutation only returns a boolean which doesn't need to be specified.
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {follow(follower:3, followee:1)}'
|
||||
curl -XPOST http://vm:8080/graphql -d 'mutation {follow(follower:3, followee:2)}'
|
||||
|
||||
# Query to fetch all data for user 1
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1)}'
|
||||
# Queries to fetch the followers of user2 and, respectively, user1.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:2){followers{id, email}}}'
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1){followers{id, email}}}'
|
||||
# Query to check if user2 is being followed by user1\. If so retrieve user1's email, otherwise return null.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:2){follower(id:1){email}}}'
|
||||
# Query to return ids and emails for all the users being followed by user3.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:3){followees{id, email}}}'
|
||||
# Query to retrieve the email of user3 if it is being followed by user1.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1){followee(id:3){email}}}'
|
||||
# Query to fetch user1's post2 and retrieve the title and body. If post2 was not created by user1, null will be returned.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1){post(id:2){title,body}}}'
|
||||
# Query to retrieve all data about all the posts of user1.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1){posts{id,title,body}}}'
|
||||
# Query to retrieve the user who wrote post2, if post2 was written by user1; a contrived example that displays the flexibility of the language.
|
||||
curl -XPOST http://vm:8080/graphql -d '{user(id:1){post(id:2){user{id,email}}}}'
|
||||
```
|
||||
|
||||
_Listing 2_

By carefully designing the mutations and type attributes, powerful and expressive queries are possible.
|
||||
|
||||
### Designing the PostgreSQL database

The relational database design is, as usual, driven by the need to avoid data duplication. This approach was chosen for two reasons: 1. to show that there is no need for a specialized database technology, or to learn and use new design techniques, to accommodate a GraphQL API; 2. to show that a GraphQL API can still be created on top of existing databases, more specifically databases originally designed to power REST endpoints or even traditional server-side rendered HTML websites.
|
||||
|
||||
See _Appendix 1_ for a discussion on differences between relational and graph databases with respect to building a GraphQL API. _Listing 3_ shows the SQL commands to create the new database. The database schema generally matches the GraphQL schema. The `followers` relation needed to be added to support the `follow/unfollow` mutations.
|
||||
|
||||
```
|
||||
CREATE TABLE IF NOT EXISTS users (
|
||||
id SERIAL PRIMARY KEY,
|
||||
email VARCHAR(100) NOT NULL
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS posts (
|
||||
id SERIAL PRIMARY KEY,
|
||||
user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||
title VARCHAR(200) NOT NULL,
|
||||
body TEXT NOT NULL
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS comments (
|
||||
id SERIAL PRIMARY KEY,
|
||||
user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||
post_id INTEGER NOT NULL REFERENCES posts(id) ON DELETE CASCADE,
|
||||
title VARCHAR(200) NOT NULL,
|
||||
body TEXT NOT NULL
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS followers (
|
||||
follower_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||
followee_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
|
||||
PRIMARY KEY(follower_id, followee_id)
|
||||
);
|
||||
```
|
||||
|
||||
_Listing 3_
|
||||
|
||||
### Golang API Implementation
|
||||
|
||||
The GraphQL parser implemented in Go and used in this project is `github.com/graphql-go/graphql`. It contains a query parser, but no schema parser. This requires the programmer to build the GraphQL schema in Go using the constructs offered by the library. This is unlike the reference [nodejs implementation][4], which offers a schema parser and exposes hooks for data fetching. As such the schema in `Listing 1` is only useful as a guideline and has to be translated into Golang code. However, this _“limitation”_ offers the opportunity to peer behind the levels of abstraction and see how the schema relates to the graph traversal model for retrieving data. _Listing 4_ shows the implementation of the `Comment` vertex type:
|
||||
|
||||
```
|
||||
var CommentType = graphql.NewObject(graphql.ObjectConfig{
|
||||
Name: "Comment",
|
||||
Fields: graphql.Fields{
|
||||
"id": &graphql.Field{
|
||||
Type: graphql.NewNonNull(graphql.ID),
|
||||
Resolve: func(p graphql.ResolveParams) (interface{}, error) {
|
||||
if comment, ok := p.Source.(*Comment); ok == true {
|
||||
return comment.ID, nil
|
||||
}
|
||||
return nil, nil
|
||||
},
|
||||
},
|
||||
"title": &graphql.Field{
|
||||
Type: graphql.NewNonNull(graphql.String),
|
||||
Resolve: func(p graphql.ResolveParams) (interface{}, error) {
|
||||
if comment, ok := p.Source.(*Comment); ok == true {
|
||||
return comment.Title, nil
|
||||
}
|
||||
return nil, nil
|
||||
},
|
||||
},
|
||||
"body": &graphql.Field{
|
||||
Type: graphql.NewNonNull(graphql.ID),
|
||||
Resolve: func(p graphql.ResolveParams) (interface{}, error) {
|
||||
if comment, ok := p.Source.(*Comment); ok == true {
|
||||
return comment.Body, nil
|
||||
}
|
||||
return nil, nil
|
||||
},
|
||||
},
|
||||
},
|
||||
})
|
||||
func init() {
|
||||
CommentType.AddFieldConfig("user", &graphql.Field{
|
||||
Type: UserType,
|
||||
Resolve: func(p graphql.ResolveParams) (interface{}, error) {
|
||||
if comment, ok := p.Source.(*Comment); ok == true {
|
||||
return GetUserByID(comment.UserID)
|
||||
}
|
||||
return nil, nil
|
||||
},
|
||||
})
|
||||
CommentType.AddFieldConfig("post", &graphql.Field{
|
||||
Type: PostType,
|
||||
Args: graphql.FieldConfigArgument{
|
||||
"id": &graphql.ArgumentConfig{
|
||||
Description: "Post ID",
|
||||
Type: graphql.NewNonNull(graphql.ID),
|
||||
},
|
||||
},
|
||||
Resolve: func(p graphql.ResolveParams) (interface{}, error) {
|
||||
i := p.Args["id"].(string)
|
||||
id, err := strconv.Atoi(i)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return GetPostByID(id)
|
||||
},
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
_Listing 4_
|
||||
|
||||
Just like in the schema in _Listing 1_, the `Comment` type is a structure with three attributes defined statically; `id`, `title` and `body`. Two other attributes `user` and `post` are defined dynamically to avoid circular dependencies.

Go does not lend itself well to this kind of dynamic modeling; there is little type-checking support, most of the variables in the code are of type `interface{}` and need to be type asserted before use. `CommentType` itself is a variable of type `graphql.Object` and its attributes are of type `graphql.Field`. So, there’s no direct translation between the GraphQL DSL and the data structures used in Go.
|
||||
|
||||
The `resolve` function for each field exposes the `Source` parameter which is a data type vertex representing the previous node in the traversal. All the attributes of a `Comment` have, as source, the current `CommentType` vertex. Retrieving the `id`, `title` and `body` is a straightforward attribute access, while retrieving the `user` and the `post` requires graph traversals, and thus database queries. The SQL queries are left out of this document because of their simplicity, but they are available in the github repository listed in the _References_ section.
|
||||
|
||||
### Comparison with REST in common scenarios
|
||||
|
||||
In this section we will present a common blog page rendering scenario and compare the REST and the GraphQL implementations. The focus will be on the number of inbound/outbound requests, because these are the biggest contributors to the latency of rendering the page.
|
||||
|
||||
The scenario: render a blog post page. It should contain information about the author (email), about the blog post (title, body), all comments (title, body) and whether the user that made the comment follows the author of the blog post or not. _Figure 1_ and _Figure 2_ show the interaction between the client SPA, the API server and the database, for a REST API and, respectively, for a GraphQL API.
|
||||
|
||||
```
|
||||
+------+ +------+ +--------+
|
||||
|client| |server| |database|
|
||||
+--+---+ +--+---+ +----+---+
|
||||
| GET /blogs/:id | |
|
||||
1\. +-------------------------> SELECT * FROM blogs... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
<-------------------------+ |
|
||||
| | |
|
||||
| GET /users/:id | |
|
||||
2\. +-------------------------> SELECT * FROM users... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
<-------------------------+ |
|
||||
| | |
|
||||
| GET /blogs/:id/comments | |
|
||||
3\. +-------------------------> SELECT * FROM comments... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
<-------------------------+ |
|
||||
| | |
|
||||
| GET /users/:id/followers| |
|
||||
4\. +-------------------------> SELECT * FROM followers.. |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
<-------------------------+ |
|
||||
| | |
|
||||
+ + +
|
||||
```
|
||||
|
||||
_Figure 1_
|
||||
|
||||
```
|
||||
+------+ +------+ +--------+
|
||||
|client| |server| |database|
|
||||
+--+---+ +--+---+ +----+---+
|
||||
| GET /graphql | |
|
||||
1\. +-------------------------> SELECT * FROM blogs... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
| | |
|
||||
| | |
|
||||
| | |
|
||||
2\. | | SELECT * FROM users... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
| | |
|
||||
| | |
|
||||
| | |
|
||||
3\. | | SELECT * FROM comments... |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
| | |
|
||||
| | |
|
||||
| | |
|
||||
4\. | | SELECT * FROM followers.. |
|
||||
| +--------------------------->
|
||||
| <---------------------------+
|
||||
<-------------------------+ |
|
||||
| | |
|
||||
+ + +
|
||||
```
|
||||
|
||||
_Figure 2_
|
||||
|
||||
_Listing 5_ contains the single GraphQL query which will fetch all the data needed to render the blog post.
|
||||
|
||||
```
|
||||
{
|
||||
user(id: 1) {
|
||||
email
|
||||
followers
|
||||
post(id: 1) {
|
||||
title
|
||||
body
|
||||
comments {
|
||||
id
|
||||
title
|
||||
user {
|
||||
id
|
||||
email
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
_Listing 5_
|
||||
|
||||
The number of queries to the database for this scenario is deliberately identical, but the number of HTTP requests to the API server has been reduced to just one. We argue that the HTTP requests over the Internet are the most costly in this type of application.
|
||||
|
||||
The backend doesn’t have to be designed differently to start reaping the benefits of GraphQL, transitioning from REST to GraphQL can be done incrementally. This allows to measure performance improvements and optimize. From this point, the API developer can start to optimize (potentially merge) SQL queries to improve performance. The opportunity for caching is greatly increased, both on the database and API levels.
|
||||
|
||||
Abstractions on top of SQL (for instance ORM layers) usually have to contend with the `n+1` problem. In step `4.` of the REST example, a client could have had to request the follower status for the author of each comment in separate requests. This is because in REST there is no standard way of expressing relationships between more than two resources, whereas GraphQL was designed to prevent this problem by using nested queries. Here, we cheat by fetching all the followers of the user. We defer to the client the logic of determining the users who commented and also followed the author.
|
||||
|
||||
Another difference is fetching more data than the client needs, in order to not break the REST resource abstractions. This is important for bandwidth consumption and battery life spent parsing and storing unneeded data.
|
||||
|
||||
### Conclusions
|
||||
|
||||
GraphQL is a viable alternative to REST because:
|
||||
|
||||
* while it is more difficult to design the API, the process can be done incrementally. Also for this reason, it’s easy to transition from REST to GraphQL, the two paradigms can coexist without issues.
|
||||
* it is more efficient in terms of network requests, even with naive implementations like the one in this document. It also offers more opportunities for query optimization and result caching.
|
||||
* it is more efficient in terms of bandwidth consumption and CPU cycles spent parsing results, because it only returns what is needed to render the page.
|
||||
|
||||
REST remains very useful if:
|
||||
|
||||
* your API is simple, either has a low number of resources or simple relationships between them.
|
||||
* you already work with REST APIs inside your organization and you have the tooling all set up or your clients expect REST APIs from your organization.
|
||||
* you have complex ACL policies. In the blog example, a potential feature could allow users fine-grained control over who can see their email, their posts, their comments on a particular post, whom they follow etc. Optimizing data retrieval while checking complex business rules can be more difficult.
|
||||
|
||||
### Appendix 1: Graph Databases And Efficient Data Storage
|
||||
|
||||
While it is intuitive to think about application domain data as a graph, as this document demonstrates, the question of efficient data storage to support such an interface is still open.
|
||||
|
||||
In recent years graph databases have become more popular. Deferring the complexity of resolving the request by translating the GraphQL query into a specific graph database query language seems like a viable solution.
|
||||
|
||||
The problem is that graphs are not an efficient data structure compared to relational databases. A vertex can have links to any other vertex in the graph and access patterns are less predictable and thus offer less opportunity for optimization.
|
||||
|
||||
For instance, the problem of caching, ie. which vertices need to be kept in memory for fast access? Generic caching algorithms may not be very efficient in the context of graph traversal.
|
||||
|
||||
The problem of database sharding: splitting the database into smaller, non-interacting databases, living on separate hardware. In academia, the problem of splitting a graph on the minimal cut is well understood but it is suboptimal and may potentially result in highly unbalanced cuts due to pathological worst-case scenarios.
|
||||
|
||||
With relational databases, data is modeled in records (or rows, or tuples) and columns, tables and database names are simply namespaces. Most databases are row-oriented, which means that each record is a contiguous chunk of memory, all records in a table are neatly packed one after the other on the disk (usually sorted by some key column). This is efficient because it is optimal for the way physical storage works. The most expensive operation for an HDD is to move the read/write head to another sector on the disk, so minimizing these accesses is critical.

There is also a high probability that, if the application is interested in a particular record, it will need the whole record, not just a single key from it. There is a high probability that if the application is interested in a record, it will be interested in its neighbours as well, for instance a table scan. These two observations make relational databases quite efficient. However, for this reason also, the worst use-case scenario for a relational database is random access across all data all the time. This is exactly what graph databases do.

With the advent of SSD drives which have faster random access, cheap RAM which makes caching large portions of a graph database possible, and better techniques to optimize graph caching and partitioning, graph databases have become a viable storage solution. And most large companies use them: Facebook has the Social Graph, Google has the Knowledge Graph.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://alexandrutopliceanu.ro/post/graphql-with-go-and-postgresql
|
||||
|
||||
作者:[Alexandru Topliceanu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/topliceanu
|
||||
[1]:http://graphql.org/learn/
|
||||
[2]:https://github.com/topliceanu/graphql-go-example
|
||||
[3]:https://github.com/topliceanu/graphql-go-example
|
||||
[4]:https://github.com/graphql/graphql-js
|
@ -1,77 +0,0 @@
|
||||
Open technology for land rights documentation
|
||||
============================================================
|
||||
|
||||
### One-third of people on the planet don't have documented rights to the land on which they rely.
|
||||
|
||||
|
||||
![Open technology for land rights documentation](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_tree_clouds.png?itok=dSV0oTDS "Open technology for land rights documentation")
|
||||
Image by: [Pixabay][4]. Modified by Opensource.com. [CC BY-SA 4.0][5]
|
||||
|
||||
The [Cadasta Foundation][6] creates tech to allow communities to document their land rights. By helping groups document the evidence of their individual and community rights to the land on which they depend, Cadasta enables them to eventually obtain legal recognition of those rights and, in the meantime, enjoy greater security.
|
||||
|
||||
We are motivated by the fact that most people in the world do not have documented legal rights to the land on which they live. Technology is only a small part of this larger social issue, but our hope is that tech tools can be part of the solution, even in the most remote and low-tech environments.
|
||||
|
||||
### The magnitude of property rights
|
||||
|
||||
Many of us who come from the global north probably take our rights to our land, property, and home for granted. We have titles, deeds, and rental agreements that document and solidly protect our rights.
|
||||
|
||||
But one-third of the people on the planet, from urban shanty towns to forest-dwelling indigenous communities, do not have documented rights to the land on which they rely. In fact, an estimated 70% of the property in emerging economies is undocumented. An estimated 25% of the world’s urban population live in homes to which they have no legal right. A majority of smallholder farmers around the world farm without the protection of having legal rights to their land documented by government records.
|
||||
|
||||
This is simply because government land and property records in many areas of the world either were never created or are out of date. For example, most rural land records in the state of Telangana, India haven't been updated since the 1940s. In other areas, such as parts of sub-Saharan Africa, there were never any records of land ownership to begin with—people simply farm the land their parents farmed, generation after generation.
|
||||
|
||||
Consider for a moment working land to which you have no secure rights. Would you invest your savings or labor in improving the land, including applying good quality seeds and fertilizer, with the knowledge that you could be displaced any day by a more powerful neighbor or investor? Imagine living in a home that could be bulldozed or usurped by an official any day. Or how could you sell your house, or use it for collateral for a loan, if you don’t have any proof that you own it?
|
||||
|
||||
For a majority of the world's population, these are not rhetorical questions. These are daily realities.
|
||||
|
||||
### How open source matters for land
|
||||
|
||||
Technology is only one part of the solution, but at Cadasta we believe it is a key component. While many governments had modern technology systems put in place to manage land records, often these were expensive to maintain, required highly trained staff, were not transparent, and were otherwise too complicated. Many of these systems, created at great expense by donor governments, are already outdated and no longer accurately reflect existing land and property rights.
|
||||
|
||||
By building open and user-friendly technology for land rights documentation we aim to overcome these problems and create land documentation systems that are flexible and accessible, allowing them to be treated as living documents that are updated continually.
|
||||
|
||||
We routinely train people who have never even used a smartphone before to use our technology to document their land rights in a single afternoon. The resulting data, hosted on an open source platform, is easy to access, update, and analyze. This flexibility means that governments in developing countries, should they adopt our platform, don't need to hire specially trained staff to manage the upkeep of these records.
|
||||
|
||||
We also believe that by contributing to and fostering open communities we can benefit more people, instead of attempting to develop all the technology ourselves. We do this by building a community around our tools as well as contributing to other existing software.
|
||||
|
||||
Over the past two years we've contributed to and been involved in [OpenStreetMap][7] through the [Missing Maps Project][8], used [OpenDataKit][9] extensively for data collection, and are currently integrating [Field Papers][10] with our system. Field Papers is technology that allows users to print paper maps, annotate those maps with pen, and then take a picture of those annotations with their phone and upload them to be transcribed.
|
||||
|
||||
We've also released a few Django libraries we hope will be useful in other Django applications. These include a policy-based permission system called [django-tutelary][11] and [django-jsonattrs][12], which provides JavaScript Object Notation (JSON)-based attribute management for PostgreSQL. If others use these pieces and contribute bug reports and patches, this can help make Cadasta's work stronger.
|
||||
|
||||
This work is critically important. Land rights are the foundation of stability and prosperity. Communities and countries seeking economic growth and sustainable development must document land rights and ensure land rights are secure for women, men, and communities.
|
||||
|
||||
_Learn more in Kate Chapman's talk at linux.conf.au 2017 ([#lca2017][1]) in Hobart: [Land Matters: Creating Open Technology for Land Rights][2]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/kate-crop.jpg?itok=JkHxWrIQ)
|
||||
|
||||
Kate Chapman - Kate Chapman is Chief Technology Officer of the Cadasta Foundation, leading the organization’s technology team and strategy. Cadasta develops free and open source software to help communities document their land rights around the world. Chapman is recognized as a leader in the domains of open source geospatial technology and community mapping, and an advocate for open imagery as a public good.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/1/land-rights-documentation-Cadasta
|
||||
|
||||
作者:[Kate Chapman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/wonderchook
|
||||
[1]:https://twitter.com/search?q=%23lca2017&src=typd
|
||||
[2]:https://linux.conf.au/schedule/presentation/50/
|
||||
[3]:https://opensource.com/article/17/1/land-rights-documentation-Cadasta?rate=E8gJkvb1mbBXytsZiKA_ZtBCOvpi41nDSfz4R8tNnoc
|
||||
[4]:https://pixabay.com/en/tree-field-cornfield-nature-247122/
|
||||
[5]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]:http://cadasta.org/
|
||||
[7]:http://www.openstreetmap.org/
|
||||
[8]:http://www.missingmaps.org/
|
||||
[9]:https://opendatakit.org/
|
||||
[10]:http://fieldpapers.org/
|
||||
[11]:https://github.com/Cadasta/django-tutelary
|
||||
[12]:https://github.com/Cadasta/django-jsonattrs
|
@ -1,104 +0,0 @@
|
||||
Red Hat's OpenShift Container Platform Expands Cloud Options
|
||||
============================================================
|
||||
|
||||
Red Hat on Wednesday announced the general availability of Red Hat OpenShift Container Platform 3.4.
|
||||
|
||||
![Red Hat's OpenShift Container Platform Expands Cloud Options](http://www.linuxinsider.com/ai/465314/red-hat-openshift.jpg)
|
||||
|
||||
This latest version helps organizations better embrace new Linux container technologies that can deliver innovative business applications and services without sacrificing existing IT investments.
|
||||
|
||||
Red Hat OpenShift Container Platform 3.4 provides a platform for innovation without giving up existing mission-critical workloads. It offers dynamic storage provisioning for both traditional and cloud-native applications, as well as multitenant capabilities that can support multiple applications, teams and deployment processes in a hybrid cloud environment.
|
||||
|
||||
Today's enterprises must balance management of their existing application portfolios with the goal of making it easier for developers to build new applications, observed Brian Gracely, director of product strategy for OpenShift at Red Hat.
|
||||
|
||||
The new release focuses on three complex areas for enterprises: managing storage, isolating resources for multiple groups (multitenancy), and running applications consistently across multiple cloud environments (public or private).
|
||||
|
||||
"Red Hat OpenShift Container Platform 3.4 builds on the momentum of both the Kubernetes and Docker projects, which are helping developers use containers to modernize existing applications and build new cloud-native microservices," Gracely told LinuxInsider.
|
||||
|
||||
OpenShift Container Platform 3.4 makes storage provisioning easier for developers and operators, and it enhances how the platform can be used to provide multitenant resources to multiple groups within an organization. Additionally, it continues to codify the best practices needed to deploy a consistent container platform across any cloud environment, such as AWS, Azure, GCP, OpenStack or VMware.
|
||||
|
||||
### Pushes Cloud Benefits
|
||||
|
||||
The new platform advances the process of creating and deploying applications by addressing the growing storage needs of applications across the hybrid cloud for enterprises. It allows modern and future-forward workloads to coexist on a single, enterprise-ready platform.
|
||||
|
||||
The new OpenShift Container Platform and service gives Red Hat customers an easy way to adopt and use Google Cloud as a public or hybrid cloud environment, noted Charles King, principal analyst at [Pund-IT][1].
|
||||
|
||||
"It will be a welcome addition in many or most enterprise IT shops, especially those that are active employing or exploring container solutions," he told LinuxInsider.
|
||||
|
||||
"Since Red Hat will act as the service provider of the new offering, customers should also be able to seamlessly integrate OpenShift support with their other Red Hat products and services," King pointed out.
|
||||
|
||||
The new release also provides an enterprise-ready version of Kubernetes 1.4 and the Docker container runtime, which will help customers roll out new services more quickly with the backing of Red Hat Enterprise Linux.
|
||||
|
||||
OpenShift Container Platform 3.4 integrates architectures, processes and services to enable delivery of critical business applications, whether legacy or cloud-native, and containerized workloads.
|
||||
|
||||
### Open Source and Linux Innovation
|
||||
|
||||
Kubernetes is becoming the de facto standard for orchestrating and managing Linux containers. OpenShift is delivering the leading enterprise-ready platform built on Kubernetes, noted Red Hat's Gracely.
|
||||
|
||||
"Kubernetes is one of the fastest-growing open source projects, with contributors from cloud providers, independent software vendors and [individual and business] end-users," he said. "It has become a project that has done an excellent job of considering and addressing the needs of many different groups with many types of application needs."
|
||||
|
||||
Both Red Hat and Google are pushing for innovation. Both companies are among the market's most proactive and innovative supporters of open source and Linux solutions.
|
||||
|
||||
"The pair's collaboration on this new service is a no-brainer that could eventually lead to Red Hat and Google finding or creating further innovative open source offerings," said Pund-IT's King.
|
||||
|
||||
### Features and Benefits
|
||||
|
||||
Among the new capabilities in the latest version of OpenShift Container Platform:
|
||||
|
||||
* Next-level container storage with support for dynamic storage provisioning -- This allows multiple storage types and multitier storage exposure in Kubernetes;
|
||||
* Container-native storage enabled by Red Hat Gluster Storage -- This now supports dynamic provisioning and push button deployment for stateful and stateless applications;
|
||||
* Software-defined, highly available and scalable storage solution -- This provides access across on-premises and public cloud environments for more cost efficiency over traditional hardware-based or cloud-only storage services;
|
||||
* Enhanced multitenancy through more simplified management of projects -- This feature is powered by Kubernetes namespaces in a single Kubernetes cluster. Applications can run fully isolated and share resources on a single Kubernetes cluster in OpenShift Container Platform.
|
||||
|
||||
### More Supplements
|
||||
|
||||
The OpenShift Container Platform upgrade adds the capacity to search for projects and project details, manage project membership, and more via a more streamlined Web console. This capability facilitates working with multiple projects across dispersed teams.
|
||||
|
||||
Another enhancement is the multitenancy feature that provides application development teams with their own cloud-like application environment. It lets them build and deploy customer-facing or internal applications using DevOps processes that are isolated from one another.
|
||||
|
||||
Also available in the new release are new hybrid cloud reference architectures for running Red Hat OpenShift Container Platform on OpenStack, VMware, Amazon Web Services, Google Cloud Engine and Microsoft Azure. These guides help walk a user through deployment across public and private clouds, virtual machines and bare metal.
|
||||
|
||||
"It also drastically simplifies how developers can access storage resources, allowing developers to dynamically provision storage resources/capacity with the click of a button -- effectively self-service for developers. It also allows developers to feel confident that the resources required for their applications will be properly isolated from other resource needs in the platform," said Red Hat's Gracely.
|
||||
|
||||
### Orchestration Backbone
|
||||
|
||||
The foundation for Red Hat OpenShift Container Platform 3.4 is the open source Kubernetes Project community. Kubernetes 1.4 features alpha support for expanded cluster federation APIs.
|
||||
|
||||
It enables multiple clusters federated across a hybrid environment. Red Hat engineers view this feature as a key component to enabling hybrid cloud deployments in the enterprise.
|
||||
|
||||
The latest version of OpenShift is available now via the Red Hat Customer Portal. It offers community innovation as hardened, production-grade features.
|
||||
|
||||
### Ensuring Customer Health
|
||||
|
||||
Red Hat's platform is vital to the success of The Vitality Group's global initiative and reward program, according to CIO Neil Adamson.
|
||||
|
||||
This program is a key component of how the company envisions the future of health, he said.
|
||||
|
||||
"Advanced services for our customers can only be delivered by embracing next-generation technologies, particularly those provided through the open source communities that drive Linux containers, Kubernetes and IoT," said Adamson.
|
||||
|
||||
Red Hat's OpenShift Container Platform provides his company with the best of these communities while still delivering a stable, more secure foundation that helps "reap the benefits of open source innovation while lessening the risks often inherent to emerging technologies."
|
||||
|
||||
The latest platform features will further support application development in the cloud. Container solutions are being adopted rapidly for many core IT tasks, including app development projects and processes, according to King, who noted that "being able to seamlessly deploy containers in a widely and easily accessible environment like Google Cloud should simplify development tasks."
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
![](http://www.linuxinsider.com/ai/465314/red-hat-openshift.jpg)
|
||||
|
||||
**Jack M. Germain** has been writing about computer technology since the early days of the Apple II and the PC. He still has his original IBM PC-Jr and a few other legacy DOS and Windows boxes. He left shareware programs behind for the open source world of the Linux desktop. He runs several versions of Windows and Linux OSes and often cannot decide whether to grab his tablet, netbook or Android smartphone instead of using his desktop or laptop gear. You can connect with him on [Google+][2].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxinsider.com/story/84239.html?rss=1
|
||||
|
||||
作者:[Jack M. Germain ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/116242401898170634809?rel=author
|
||||
[1]:http://www.pund-it.com/
|
||||
[2]:https://plus.google.com/116242401898170634809?rel=author
|
@ -1,216 +0,0 @@
|
||||
# Fedora 24 Gnome & HP Pavilion + Nvidia setup review
|
||||
|
||||
Recently, you may have come across my [Chapeau][1] review. This experiment prompted me to widen my Fedora family testing, and so I decided to try setting up [Fedora 24 Gnome][2] on my [HP][3] machine, a six-year-old laptop with 4 GB of RAM and an aging Nvidia card. Yes, Fedora 25 has since been released and I had it [tested][4] with delight. But we can still enjoy this little article now can we?
|
||||
|
||||
This review should complement - and contrast - my usual crop of testing on the notorious but capable [Lenovo G50][5] machine, purchased in 2015, so we have old versus new, but also the inevitable lack of proper Linux support for the [Realtek][6] network card on the newer box. We will then also check how well Fedora handles the Nvidia stack, test if Nouveau is a valid alternative, and of course, pimp the system to the max, using some of the beauty tricks we have witnessed in the Chapeau review. Should be more than interesting.
|
||||
|
||||
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-teaser.jpg)
|
||||
|
||||
### Installation
|
||||
|
||||
Nothing special to report here. The system has a much simpler setup than the Lenovo laptop. The new machine comes with UEFI, Secure Boot, 1TB disk with a GPT setup partitioned sixteen different ways, with Windows 10 and some 6-7 Linux distros on it. In comparison, the BIOS-fueled Pavilion only dual boots. Prior to this review, it was running Linux Mint 17.3 [Rosa Xfce][7], but it used to have all sorts of Ubuntu children on it, and I had used it quite extensively for arguably funny [video processing][8] and all sorts of games. The home partition dates back to the early setup, and has remained such since, including a lot of legacy config and many desktop environments.
|
||||
|
||||
![Live desktop](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-desktop-live.jpg)
|
||||
|
||||
I was able to boot from a USB drive, although I did use the Fedora tool to create the live media. I've never had any problems booting on this host, to the best of my memory, a far cry (not the [game][9], just an expression, hi hi) from the Lenovo experience. There, before a BIOS update, Fedora would [not even run][10], and a large number of distros used to [struggle][11] until very recently. All part of my great disappointment adventure with Linux.
|
||||
|
||||
Anyhow, this procedure went without any fuss. Fedora 24 took control of the bootloader, managing itself and the resident Windows 7 installation. If you're interested in more details on how to dual-boot, you might want to check these:
|
||||
|
||||
[Ubuntu & Windows 7][12] dual-boot guide
|
||||
|
||||
[Xubuntu & Windows 7][13] dual-boot guide - same same but different
|
||||
|
||||
[CentOS 7 & Windows 7][14] dual-boot guide - fairly similar to our Fedora attempt
|
||||
|
||||
[Ubuntu & Windows 8][15] dual-boot guide - this one covers a UEFI setup, too
|
||||
|
||||
### It's pimping time!
|
||||
|
||||
My Fedora [pimping guide][16] has it all. I set up RPM Fusion Free and Non-Free, then installed about 700 MB worth of media codecs, plugins and extra software, including Steam, Skype, GIMP, VLC, Gnome Tweak Tool, Chrome, several other helper utilities, and more.
|
||||
|
||||
On the aesthetics side, I grabbed both Faenza and Moka icons, and configured half a dozen Gnome [extensions][17], including the mandatory [Dash to Dock][18], which really helps transform this desktop environment into a usable product.
|
||||
|
||||
![About, with Nouveau](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-about-nouveau.jpg)
|
||||
|
||||
![Final looks](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-final.jpg)
|
||||
|
||||
What is that green icon on the right side? 'Tis a spoiler of things to be, that is.
|
||||
|
||||
I also had no problems with my smartphones, [Ubuntu Phone][19] or the [iPhone][20]. Both setups worked fine, which also casts the Apple device annoyance on Chapeau 24 in an even worse light. Rhythmbox would not play from any external media, though. Fail.
|
||||
|
||||
![Ubuntu Phone](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-ubuntu-phone.jpg)
|
||||
|
||||
![Media works fine](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-media-works-nice.jpg)
|
||||
|
||||
This is a teaser, implying wossname Nvidia thingie; well here we go.
|
||||
|
||||
### Nvidia setup
|
||||
|
||||
This is a tricky one. First, take a look at my generic [tutorial][21] on this topic. Then, take a look at my recent [Fedora 23][22] [experience][23] on this topic. Unlike Ubuntu, Red Hat distros do not quite like the whole pre-compiled setup. However, just to see whether things have changed in any way, I did use a helper tool called easyLife to set up the drivers. I've talked about this utility and Fedy in an OCS-Mag [article][24], and how you can use them to make your Fedora experience more colorful. Bottom line: good for lots of things, but not for drivers.
|
||||
|
||||
![easyLife & Nvidia](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-easylife-nvidia.png)
|
||||
|
||||
Yes, this resulted in a broken system. I had to manually install the drivers - luckily, I had installed the kernel sources and headers, as well as the other necessary build tools, gcc and make, beforehand, to prepare for this kind of scenario. Be warned, kids. In the end, the official way is the best.
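
For reference, those build prerequisites can be pulled in up front with dnf; the package names below are the usual Fedora ones, so treat this as a sketch and double-check them against your release:

```
sudo dnf install kernel-devel kernel-headers gcc make
```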
|
||||
|
||||
### Nouveau vs Nvidia, which is faster?
|
||||
|
||||
I did something you would not really expect. I benchmarked the actual performance of the graphics stack with the Nouveau driver first and then the closed-source blob, using the Unigine Heaven tool. This gives clear results on how the two compare.
|
||||
|
||||
![Heaven benchmark](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-benchmark.jpg)
|
||||
|
||||
Remember, this is an ancient laptop, and it does not stack up well against modern tools, so you will not be surprised to learn that Heaven reported a staggering 1 FPS with Nouveau, and it took some 5 minutes before the system actually responded and I was able to quit the benchmark.
|
||||
|
||||
![Nouveau benchmark](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nouveau.jpg)
|
||||
|
||||
Nvidia gave much better results. To begin with, I was able to use the system while testing, and Heaven responded to mouse clicks and key strokes, all the while reporting a very humble 5-6 FPS, which means it was roughly 500% more efficient than the Nouveau driver. That tells you all you need to know, ladies and gentlemen.
|
||||
|
||||
![Nvidia installed](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nvidia-installed.jpg)
|
||||
|
||||
![About, Nvidia installed](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-about-nvidia.jpg)
|
||||
|
||||
![Heaven, Nvidia installed, main menu](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-menu.jpg)
|
||||
|
||||
![Nvidia benchmark 1](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-1.jpg)
|
||||
|
||||
![Nvidia benchmark 2](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-2.jpg)
|
||||
|
||||
![Steam works](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-steam-works.jpg)
|
||||
|
||||
Also, Steam would not run at all with Nouveau, so there's that to consider, too. Funny how system requirements creep up over time. I used to play, I mean test, [Call of Duty][25], a highly mediocre and arcade-like shooter, on this box at the highest settings, but that feat feels like a completely different era.
|
||||
|
||||
![Nouveau & Steam fail](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-steam-nouveau-fail.png)
|
||||
|
||||
### Hardware compatibility
|
||||
|
||||
Things were quite all right overall. All of the Fn buttons worked fine, and so did the web camera. Power management also did its thing well, dimming the screen and whatnot, but we cannot really judge the battery life, as the cells are six years old now and quite broken. They only lend about 40 minutes of juice in the best case.
|
||||
|
||||
![Webcam](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-webcam.jpg)
|
||||
|
||||
![Battery, broken](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-battery-broken.jpg)
|
||||
|
||||
Bluetooth did not work at first, but this was because crucial packages were missing.
|
||||
|
||||
![Bluetooth does not work out of the box](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-bt-no-work.png)
|
||||
|
||||
You can resolve the issue using dnf:
|
||||
|
||||
```
dnf install blueman bluez
```
|
||||
|
||||
![Bluetooth works now](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-bt-works.png)
|
||||
|
||||
### Suspend & resume
|
||||
|
||||
No issues, even with the Nvidia drivers. The whole sequence was quick and smooth, about 2-3 seconds each direction, into the land of sweet dreams and out of it. I do recall some problems with this in the past, but not any more. Happy sailing.
|
||||
|
||||
### Resource utilization
|
||||
|
||||
We can again compare Nouveau with Nvidia. But first, I had to sort out the swap partition setup manually, as Fedora refused to activate it. This is a big fail, and this happens consistently. Anyhow, the resource utilization with either one driver was almost identical. Both tolled a hefty 1.2 GB of RAM, and CPU ticked at about 2-3%, which is not really surprising, given the age of this machine. I did not see any big noise or heat difference the way we would witness it in the past, which is a testament to the improvements in the open-source driver, even though it fails on some of the advanced graphics logic required from it. But for normal use, non-gaming use, it behaves fairly well.
|
||||
|
||||
![Resources, Nouveau](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-resources-nouveau.jpg)
|
||||
|
||||
### Problems
|
||||
|
||||
Well, I observed some interesting issues during my testing. SELinux complained about legitimate processes a few times, and this really annoys me. Now, to troubleshoot this, all you need to do is expand the alert, check the details, and then vomit. Why would anyone let ordinary users ever see this? Why?
|
||||
|
||||
![SELinux alerts](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-selinux.png)
|
||||
|
||||
![SELinux alerts, more](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-selinux-more.png)
|
||||
|
||||
```
SELinux is preventing totem-video-thu from write access on the directory gstreamer-1.0.

***** Plugin catchall_labels (83.8 confidence) suggests *****

If you want to allow totem-video-thu to have write access on the gstreamer-1.0 directory
Then you need to change the label on gstreamer-1.0
Do
# semanage fcontext -a -t FILE_TYPE 'gstreamer-1.0'
where FILE_TYPE is one of the following: cache_home_t, gstreamer_home_t, texlive_home_t, thumb_home_t, thumb_tmp_t, thumb_tmpfs_t, tmp_t, tmpfs_t, user_fonts_cache_t, user_home_dir_t, user_tmp_t.
Then execute:
restorecon -v 'gstreamer-1.0'
```
|
||||
|
||||
I want to execute something else, because hey, let us let developers be in charge of how things should be done. They know [best][26], right? This kind of garbage is what makes zombie apocalypses happen, when you miscode the safety lock on a lab confinement.
|
||||
|
||||
### Other observations
|
||||
|
||||
Exploring the system with gconf-editor and dconf-editor, I found tons of leftover settings from my old Gnome 2, Xfce and Cinnamon setups. One of the weird things was that Nemo would create, or rather restore, several desktop icons every time I launched it, and it did not cooperate with the global settings I configured through the Tweak Tool. In the end, I had to resort to some command line witchcraft:
|
||||
|
||||
```
gsettings set org.nemo.desktop home-icon-visible false
gsettings set org.nemo.desktop trash-icon-visible false
gsettings set org.nemo.desktop computer-icon-visible false
```
|
||||
|
||||
### Gallery
|
||||
|
||||
Finally, some sweet screenshots:
|
||||
|
||||
![Nice desktop 1](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-1.jpg)
|
||||
|
||||
![Nice desktop 2](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-2.jpg)
|
||||
|
||||
![Nice desktop 3](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-3.jpg)
|
||||
|
||||
![Nice desktop 4](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-4.jpg)
|
||||
|
||||
![Nice desktop 5](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-5.jpg)
|
||||
|
||||
### Conclusion
|
||||
|
||||
This was an interesting ordeal. It took me about four hours to finish the configuration and polish the system, the maniacal Fedora update that always runs in the deep hundreds and sometimes even thousands of packages, the graphics stack setup, and finally, all the gloss and trim needed to have a functional machine.
|
||||
|
||||
All in all, it works well. Fedora proved itself to be an adequate choice for the old HP machine, with decent performance and responsiveness, good hardware compatibility, fine aesthetics and functionality, once the extras are added, and only a small number of issues, some related to my laptop usage legacy. Not bad. Sure, the system could be faster, and Gnome isn't the best choice for olden hardware. But then, for something that was born in 2010, the HP laptop handles this desktop environment with grace, and it looks the part. Just proves that Red Hat makes a lot of sense once you release its essential oils and let the fragrance of extra software and codecs sweep you. It is your time to be enthused about this and commence your own testing.
|
||||
|
||||
Cheers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
|
||||
|
||||
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
|
||||
|
||||
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
|
||||
|
||||
Please see my full list of open-source projects, publications and patents, just scroll down.
|
||||
|
||||
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
|
||||
|
||||
|
||||
-------------
|
||||
|
||||
|
||||
via: http://www.dedoimedo.com/computers/hp-pavilion-fedora-24.html
|
||||
|
||||
作者:[Igor Ljubuncic][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.dedoimedo.com/faq.html
|
||||
|
||||
[1]:http://www.dedoimedo.com/computers/chapeau-24.html
|
||||
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
|
||||
[3]:http://www.dedoimedo.com/computers/my-new-new-laptop.html
|
||||
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
|
||||
[5]:http://www.dedoimedo.com/computers/lenovo-g50-review.html
|
||||
[6]:http://www.dedoimedo.com/computers/ubuntu-xerus-realtek-bug.html
|
||||
[7]:http://www.dedoimedo.com/computers/linux-mint-rosa-xfce.html
|
||||
[8]:http://www.dedoimedo.com/computers/frankenstein-media.html
|
||||
[9]:http://www.dedoimedo.com/games/far-cry-4-review.html
|
||||
[10]:http://www.dedoimedo.com/computers/lenovo-g50-fedora.html
|
||||
[11]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
|
||||
[12]:http://www.dedoimedo.com/computers/dual-boot-windows-7-ubuntu.html
|
||||
[13]:http://www.dedoimedo.com/computers/dual-boot-windows-7-xubuntu.html
|
||||
[14]:http://www.dedoimedo.com/computers/dual-boot-windows-7-centos-7.html
|
||||
[15]:http://www.dedoimedo.com/computers/dual-boot-windows-8-ubuntu.html
|
||||
[16]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
|
||||
[17]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
|
||||
[18]:http://www.dedoimedo.com/computers/gnome-3-dash.html
|
||||
[19]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
|
||||
[20]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
|
||||
[21]:http://www.dedoimedo.com/computers/fedora-nvidia-guide.html
|
||||
[22]:http://www.dedoimedo.com/computers/fedora-23-nvidia.html
|
||||
[23]:http://www.dedoimedo.com/computers/fedora-23-nvidia-steam.html
|
||||
[24]:http://www.ocsmag.com/2015/06/22/you-can-leave-your-fedora-on/
|
||||
[25]:http://www.dedoimedo.com/games/cod-mw2.html
|
||||
[26]:http://www.ocsmag.com/2016/10/19/systemd-progress-through-complexity/
|
@ -1,354 +0,0 @@
|
||||
What I Don’t Like About Error Handling in Go, and How to Work Around It
|
||||
======================
|
||||
|
||||
More often than not, people who write Go have some sort of opinion on its error handling model. Depending on your experience with other languages, you may be used to different approaches. That's why I've decided to write this article: despite being relatively opinionated, I think drawing on my experiences can be useful in the debate. The main issues I want to cover are that it is difficult to enforce good error handling practice, that errors don't have stack traces, and that error handling itself is too verbose. However, I've looked at some potential workarounds for these problems which could help negate the issues somewhat.
|
||||
|
||||
### Quick Comparison to Other Languages
|
||||
|
||||
|
||||
[In Go, all errors are values][1]. Because of this, a fair number of functions end up returning an `error`, looking something like this:
|
||||
|
||||
```
|
||||
func (s *SomeStruct) Function() (string, error)
|
||||
```
|
||||
|
||||
As a result of this, the calling code will regularly have `if` statements to check for them:
|
||||
|
||||
```
|
||||
result, err := someStruct.Function()
|
||||
if err != nil {
|
||||
// Process error
|
||||
}
|
||||
```
|
||||
|
||||
Another approach is the `try-catch` model used in other languages such as Java, C#, JavaScript, Objective-C, and Python. You could see the following Java code as analogous to the previous Go examples, declaring `throws` instead of returning an `error`:
|
||||
|
||||
```
|
||||
public String function() throws Exception
|
||||
```
|
||||
|
||||
And then doing `try-catch` instead of `if err != nil`:
|
||||
|
||||
```
|
||||
try {
|
||||
String result = someObject.function()
|
||||
// continue logic
|
||||
}
|
||||
catch (Exception e) {
|
||||
// process exception
|
||||
}
|
||||
```
|
||||
|
||||
Of course, there are more differences than this. For example, an `error` can’t crash your program, whereas an `Exception` can. There are others as well, and I want to focus on them in this article.
|
||||
|
||||
### Implementing Centralised Error Handling
|
||||
|
||||
Taking a step back, let’s look at why and how we might want to have a centralised place for handling errors.
|
||||
|
||||
An example most people would be familiar with is a web service – if some unexpected server-side error were to happen, we would generate a 5xx error. At a first pass in Go you might implement this:
|
||||
|
||||
```
|
||||
func init() {
|
||||
http.HandleFunc("/users", viewUsers)
|
||||
http.HandleFunc("/companies", viewCompanies)
|
||||
}
|
||||
|
||||
func viewUsers(w http.ResponseWriter, r *http.Request) {
|
||||
	user := ... // some code to fetch the user
|
||||
if err := userTemplate.Execute(w, user); err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
}
|
||||
}
|
||||
|
||||
func viewCompanies(w http.ResponseWriter, r *http.Request) {
|
||||
	companies := ... // some code to fetch the companies
|
||||
if err := companiesTemplate.Execute(w, companies); err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This isn’t a good solution, as we would have to repeat the same error handling across all of our handler functions. It would be much better to do it all in one place for maintainability purposes. Fortunately, there is [an alternative by Andrew Gerrand on the Go blog][2] which works quite nicely. We can create a Type which does http error handling:
|
||||
|
||||
```
|
||||
type appHandler func(http.ResponseWriter, *http.Request) error
|
||||
|
||||
func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
if err := fn(w, r); err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
And then it can be used as a wrapper to decorate our handlers:
|
||||
|
||||
```
|
||||
func init() {
|
||||
http.Handle("/users", appHandler(viewUsers))
|
||||
http.Handle("/companies", appHandler(viewCompanies))
|
||||
}
|
||||
```
|
||||
|
||||
Then all we need to do is change the signature of the handler functions so that they return an `error`. This works nicely, as we have been able to apply the [DRY][3] principle and avoid repeating code unnecessarily – now we return default errors in a single place.
|
||||
|
||||
### Error Context
|
||||
|
||||
In the previous example, there are many potential errors which we could receive, all of which could be generated in many parts of the call stack. This is when things start to get tricky.
|
||||
|
||||
To demonstrate this, we can expand on our handler. It’s more likely to look like this as the template execution is not the only place where an error could occur:
|
||||
|
||||
```
|
||||
func viewUsers(w http.ResponseWriter, r *http.Request) error {
	user, err := findUser(r.FormValue("id"))
	if err != nil {
		return err
	}
	return userTemplate.Execute(w, user)
}
|
||||
```
|
||||
|
||||
The call chain could get quite deep, and throughout it, all sorts of errors could be instantiated in different places. This post by [Russ Cox][4] explains the best practice to prevent this from being too much of a problem:
|
||||
|
||||
> Part of the intended contract for error reporting in Go is that functions include relevant available context, including the operation being attempted (such as the function name and its arguments)
|
||||
|
||||
The example given is a call to the OS package:
|
||||
|
||||
```
|
||||
err := os.Remove("/tmp/nonexist")
|
||||
fmt.Println(err)
|
||||
```
|
||||
|
||||
Which prints the output:
|
||||
|
||||
```
|
||||
remove /tmp/nonexist: no such file or directory
|
||||
```
|
||||
|
||||
To summarise, you are outputting the method called, the arguments given, and the specific thing that went wrong, immediately after doing it. When creating an `Exception` message in another language you would also follow this practice. So if we stuck to these rules in our `viewUsers` handler, it could almost always be clear what the cause of an error is.
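
As a minimal sketch of what that contract looks like when you follow it yourself (the `findUser` and `lookup` names here are illustrative, not from any real library):

```
package main

import (
	"errors"
	"fmt"
)

// lookup stands in for some lower-level operation that can fail.
func lookup(id string) (string, error) {
	return "", errors.New("no such user")
}

// findUser adds the operation and its argument to the error before
// returning it, so callers see what was attempted, not just the raw failure.
func findUser(id string) (string, error) {
	name, err := lookup(id)
	if err != nil {
		return "", fmt.Errorf("findUser %q: %v", id, err)
	}
	return name, nil
}

func main() {
	_, err := findUser("42")
	fmt.Println(err) // prints: findUser "42": no such user
}
```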
|
||||
|
||||
The problem comes from people not following this best practice, and quite often in third party Go libraries you will see messages like:
|
||||
|
||||
```
|
||||
Oh no I broke
|
||||
```
|
||||
|
||||
Which is just not helpful – you don't know anything about the context, which makes it really hard to debug. Even worse is when these sorts of errors are ignored or returned really far back up the stack until they are handled:
|
||||
|
||||
```
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
```
|
||||
|
||||
This means that the information about when and where the error happened is never communicated.
|
||||
|
||||
It should be noted that all these mistakes can be made in an `Exception` driven model – poor error messages, swallowing exceptions etc. So why do I think that model is more helpful?
|
||||
|
||||
If we’re dealing with a poor exception message, _we are still able to know where it occurred in the call stack_. This is because of stack traces, which raises something I don’t get about Go – you have the concept of `panic` in Go which contains a stack trace, but an `error` which does not. I think the reasoning is that a `panic` can crash your program, so requires a stack trace, whereas a handled error does not as you are supposed to do something about it where it occurs.
|
||||
|
||||
So let’s go back to our previous example – a third party library with a poor error message, which just gets propagated all the way up the call chain. Do you think debugging would be easier if you had this?
|
||||
|
||||
```
|
||||
panic: Oh no I broke
|
||||
[signal 0xb code=0x1 addr=0x0 pc=0xfc90f]
|
||||
|
||||
goroutine 1103 [running]:
|
||||
panic(0x4bed00, 0xc82000c0b0)
|
||||
/usr/local/go/src/runtime/panic.go:481 +0x3e6
|
||||
github.com/Org/app/core.(_app).captureRequest(0xc820163340, 0x0, 0x55bd50, 0x0, 0x0)
|
||||
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/main.go:313 +0x12cf
|
||||
github.com/Org/app/core.(_app).processRequest(0xc820163340, 0xc82064e1c0, 0xc82002aab8, 0x1)
|
||||
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/main.go:203 +0xb6
|
||||
github.com/Org/app/core.NewProxy.func2(0xc82064e1c0, 0xc820bb2000, 0xc820bb2000, 0x1)
|
||||
/home/ubuntu/.go_workspace/src/github.com/Org/App/core/proxy.go:51 +0x2a
|
||||
github.com/Org/app/core/vendor/github.com/rusenask/goproxy.FuncReqHandler.Handle(0xc820da36e0, 0xc82064e1c0, 0xc820bb2000, 0xc5001, 0xc820b4a0a0)
|
||||
/home/ubuntu/.go_workspace/src/github.com/Org/app/core/vendor/github.com/rusenask/goproxy/actions.go:19 +0x30
|
||||
```
|
||||
|
||||
I think this might be something which has been overlooked in the design of Go – not that things aren’t overlooked in all languages.
|
||||
|
||||
If we use Java as an arbitrary example, one of the silliest mistakes people make is not logging the stack trace:
|
||||
|
||||
```
|
||||
LOGGER.error(ex.getMessage()) // Doesn't log stack trace
|
||||
LOGGER.error(ex.getMessage(), ex) // Does log stack trace
|
||||
```
|
||||
|
||||
But Go seems to not have this information by design.
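
Nothing stops you from recording a trace yourself, though. Below is a minimal sketch using only the standard library's `runtime/debug` package; the `stackError` type is hypothetical, purely for illustration:

```
package main

import (
	"fmt"
	"runtime/debug"
)

// stackError is a hypothetical error type that records where it was created.
type stackError struct {
	msg   string
	stack []byte
}

func (e *stackError) Error() string { return e.msg }

// newStackError captures the calling goroutine's stack at creation time.
func newStackError(msg string) error {
	return &stackError{msg: msg, stack: debug.Stack()}
}

func broken() error {
	return newStackError("Oh no I broke")
}

func main() {
	err := broken()
	fmt.Println(err)
	// When debugging, the captured trace shows where the error originated.
	if se, ok := err.(*stackError); ok {
		fmt.Printf("%s", se.stack)
	}
}
```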
|
||||
|
||||
In terms of getting context information, Russ also mentions that the community is talking about some potential interfaces for stripping out error contexts. It would be interesting to hear more about this.
|
||||
|
||||
### Solution to the Stack Trace Problem
|
||||
|
||||
Fortunately, after doing some searching, I found this excellent [Go Errors][5] library, which helps solve the problem by adding stack traces to errors:
|
||||
|
||||
```
|
||||
if errors.Is(err, crashy.Crashed) {
|
||||
fmt.Println(err.(*errors.Error).ErrorStack())
|
||||
}
|
||||
```
|
||||
|
||||
However, I’d think it would be an improvement for this feature to have first class citizenship in the language, so you wouldn’t have to fiddle around with types. Also, if we are working with a third party library like in the previous example then it is probably not using `crashy` – we still have the same problem.
|
||||
|
||||
### What Should We Do with an Error?
|
||||
|
||||
We also have to think about what should happen when an error occurs. [It’s definitely useful that they can’t crash your program][6], and it’s also idiomatic to handle them immediately:
|
||||
|
||||
```
|
||||
err := method()
|
||||
if err != nil {
|
||||
// some logic that I must do now in the event of an error!
|
||||
}
|
||||
```
|
||||
|
||||
But what happens if we want to call lots of methods which return errors, and then handle them all in the same place? Something like this:
|
||||
|
||||
```
|
||||
err := doSomething()
|
||||
if err != nil {
|
||||
// handle the error here
|
||||
}
|
||||
|
||||
func doSomething() error {
|
||||
err := someMethod()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = someOther()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
someOtherMethod()
|
||||
}
|
||||
```
|
||||
|
||||
It feels a little verbose, whereas in other languages you can treat multiple statements that fail as a block:
|
||||
|
||||
```
|
||||
try {
|
||||
someMethod()
|
||||
someOther()
|
||||
someOtherMethod()
|
||||
}
|
||||
catch (Exception e) {
|
||||
// process exception
|
||||
}
|
||||
```
|
||||
|
||||
Or just propagate failure in the method signature:
|
||||
|
||||
```
|
||||
public void doSomething() throws SomeErrorToPropagate {
|
||||
someMethod()
|
||||
someOther()
|
||||
someOtherMethod()
|
||||
}
|
||||
```
|
||||
|
||||
Personally, I think both of these examples achieve the same thing, only the `Exception` model is less verbose and more flexible. If anything, I find the `if err != nil` blocks to feel like boilerplate. Maybe there is a way that they could be cleaned up?
|
||||
|
||||
### Treating Multiple Statements That Fail as a Block
|
||||
|
||||
To begin with, I did some more reading and found a relatively pragmatic solution [by Rob Pike on the Go Blog][7].
|
||||
|
||||
He defines a struct with a method which wraps errors:
|
||||
|
||||
```
|
||||
type errWriter struct {
|
||||
w io.Writer
|
||||
err error
|
||||
}
|
||||
|
||||
func (ew *errWriter) write(buf []byte) {
|
||||
if ew.err != nil {
|
||||
return
|
||||
}
|
||||
_, ew.err = ew.w.Write(buf)
|
||||
}
|
||||
```
|
||||
|
||||
This let’s us do:
|
||||
|
||||
```
|
||||
ew := &errWriter{w: fd}
|
||||
ew.write(p0[a:b])
|
||||
ew.write(p1[c:d])
|
||||
ew.write(p2[e:f])
|
||||
// and so on
|
||||
if ew.err != nil {
|
||||
return ew.err
|
||||
}
|
||||
```
|
||||
|
||||
This is also a good solution, but I still feel like something is missing – as we can’t re-use this pattern. If we wanted a method which took a string as an argument, then we’d have to change the function signature. Or what if we didn’t want to perform a write? We could try and make it more generic:
|
||||
|
||||
```
|
||||
type errWrapper struct {
|
||||
err error
|
||||
}
|
||||
```
|
||||
|
||||
```
|
||||
func (ew *errWrapper) do(f func() error) {
|
||||
if ew.err != nil {
|
||||
return
|
||||
}
|
||||
	ew.err = f()
|
||||
}
|
||||
```
|
||||
|
||||
But we have the same problem: it won't compile if we want to call functions which have different arguments. However, we can simply wrap those function calls:
|
||||
|
||||
```
|
||||
w := &errWrapper{}
|
||||
|
||||
w.do(func() error {
|
||||
	return someFunction(1, 2)
|
||||
})
|
||||
|
||||
w.do(func() error {
|
||||
return otherFunction("foo");
|
||||
})
|
||||
|
||||
err := w.err
|
||||
|
||||
if err != nil {
|
||||
// process error here
|
||||
}
|
||||
```
|
||||
|
||||
This works, but doesn’t help too much as it ends up being more verbose than the standard `if err != nil`checks. I would be interested to hear if anyone can offer any other solutions. Maybe the language itself needs some sort of way to propagate or group errors in a less bloated fashion – but it feels like it’s been specifically designed to not do that.
|
||||
|
||||
### Conclusion
|
||||
|
||||
After reading this, you might think that by picking on `errors` I'm opposed to Go. But that's not the case; I'm just describing how it compares to my experience with the `try-catch` model. Go is a great language for systems programming, and some outstanding tools have been built with it. To name a few, there are [Kubernetes][8], [Docker][9], [Terraform][10], [Hoverfly][11] and others. There's also the advantage of your tiny, highly performant, native binary. But `errors` have been difficult to adjust to. I hope my reasoning makes sense, and also that some of the solutions and workarounds could be of help.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Andrew is a Consultant for OpenCredo, having joined the company in 2015. Andrew has several years' experience working across a number of industries, developing web-based enterprise applications.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://opencredo.com/why-i-dont-like-error-handling-in-go
|
||||
|
||||
作者:[Andrew Morgan][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opencredo.com/author/andrew/
|
||||
[1]:https://blog.golang.org/errors-are-values
|
||||
[2]:https://blog.golang.org/error-handling-and-go
|
||||
[3]:https://en.wikipedia.org/wiki/Don't_repeat_yourself
|
||||
[4]:https://research.swtch.com/go2017
|
||||
[5]:https://github.com/go-errors/errors
|
||||
[6]:https://davidnix.io/post/error-handling-in-go/
|
||||
[7]:https://blog.golang.org/errors-are-values
|
||||
[8]:https://kubernetes.io/
|
||||
[9]:https://www.docker.com/
|
||||
[10]:https://www.terraform.io/
|
||||
[11]:http://hoverfly.io/en/latest/
|
@ -1,324 +0,0 @@
|
||||
<header class="post-header" style="text-rendering: optimizeLegibility; font-family: "Noto Serif", Georgia, Cambria, "Times New Roman", Times, serif; font-size: 20px; text-align: start; background-color: rgb(255, 255, 255);">[How to use slice capacity and length in Go][14]
|
||||
============================================================</header>
|
||||
|
||||
<aside class="post-side" style="text-rendering: optimizeLegibility; position: fixed; top: 80px; left: 0px; width: 195px; padding-right: 5px; padding-left: 5px; text-align: right; z-index: 300; font-family: "Noto Serif", Georgia, Cambria, "Times New Roman", Times, serif; font-size: 20px;"></aside>
|
||||
|
||||
Quick pop quiz - what does the following code output?
|
||||
|
||||
```
|
||||
vals := make([]int, 5)
|
||||
for i := 0; i < 5; i++ {
|
||||
vals = append(vals, i)
|
||||
}
|
||||
fmt.Println(vals)
|
||||
```
|
||||
|
||||
_[Cheat and run it on the Go Playground][1]_
|
||||
|
||||
If you guessed `[0 0 0 0 0 0 1 2 3 4]` you are correct.
|
||||
|
||||
_Wait, what?_ Why isn't it `[0 1 2 3 4]`?
|
||||
|
||||
Don't worry if you got the pop quiz wrong. This is a fairly common mistake when transitioning into Go, and in this post we are going to cover why the output isn't what you expected, along with how to utilize the nuances of Go to make your code more efficient.
|
||||
|
||||
### Slices vs Arrays
|
||||
|
||||
In Go there are both arrays and slices. This can be confusing at first, but once you get used to it you will love it. Trust me.
|
||||
|
||||
There are many differences between slices and arrays, but the primary one we want to focus on in this article is that the size of an array is part of its type, whereas slices can have a dynamic size because they are wrappers around arrays.
|
||||
|
||||
What does this mean in practice? Well, let's say we have the array `var a [10]int`. This array has a fixed size that can't be changed. If we were to call `len(a)` it would always return 10, because that size is part of the type. As a result, if you suddenly need more than 10 items in your array, you have to create a new object with an entirely different type, such as `var b [11]int`, and then copy all of your values from `a` over to `b`.
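
To make that copy step concrete, here is a minimal sketch using the built-in `copy` function:

```
package main

import "fmt"

func main() {
	var a [10]int
	a[0], a[1] = 1, 2

	// [11]int is a different type from [10]int, so we have to allocate
	// a new array and copy the old values over before we can grow.
	var b [11]int
	copy(b[:], a[:])
	b[10] = 42

	fmt.Println(len(a), len(b)) // prints: 10 11
}
```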
|
||||
|
||||
While having arrays with set sizes is valuable in specific cases, generally speaking this isn't what developers want. Instead, they want to work with something similar to an array in Go, but with the ability to grow over time. One crude way to do this would be to create an array that is much bigger than it needs to be and then to treat a subset of the array as your array. An example of this is shown in the code below.
|
||||
|
||||
```
|
||||
var vals [20]int
|
||||
for i := 0; i < 5; i++ {
|
||||
vals[i] = i * i
|
||||
}
|
||||
subsetLen := 5
|
||||
|
||||
fmt.Println("The subset of our array has a length of:", subsetLen)
|
||||
|
||||
// Add a new item to our array
|
||||
vals[subsetLen] = 123
|
||||
subsetLen++
|
||||
fmt.Println("The subset of our array has a length of:", subsetLen)
|
||||
```
|
||||
|
||||
_[Run it on the Go Playground][2]_
|
||||
|
||||
With this code we have an array with a set size of 20, but because we are only using a subset our code can pretend that the length of the array is 5, and then 6 after we add a new item to our array.
|
||||
|
||||
This is (very roughly speaking) how slices work. They wrap an array with a set size, much like our array in the previous example has a set size of 20.
|
||||
|
||||
They also keep track of the subset of the array that is available for your program to use - this is the `length` attribute, and it is similar to the `subsetLen` variable in the previous example.
|
||||
|
||||
Finally, a slice also has a `capacity`, which is similar to the total length of our array (20) in the previous example. This is useful because it tells you how large your subset can grow before it will no longer fit in the array that is backing the slice. When this does happen, a new array will need to be allocated, but all of this logic is hidden behind the `append` function.
|
||||
|
||||

In short, combining slices with the `append` function gives us a type that is very similar to arrays, but is capable of growing over time to handle more elements.

Let's look at the previous example again, but this time we will use a slice instead of an array.

```
var vals []int
for i := 0; i < 5; i++ {
	vals = append(vals, i)
	fmt.Println("The length of our slice is:", len(vals))
	fmt.Println("The capacity of our slice is:", cap(vals))
}

// Add a new item to our slice
vals = append(vals, 123)
fmt.Println("The length of our slice is:", len(vals))
fmt.Println("The capacity of our slice is:", cap(vals))

// Accessing items is the same as an array
fmt.Println(vals[5])
fmt.Println(vals[2])
```

_[Run it on the Go Playground][3]_

We can still access elements in our slice just like we would with arrays, but by using a slice and the `append` function we no longer have to think about the size of the backing array. We are still able to figure these things out by using the `len` and `cap` functions, but we don't have to worry too much about them. Neat, right?

### Back to the pop quiz

With that in mind, let's look back at our pop quiz code to see what went wrong.

```
vals := make([]int, 5)
for i := 0; i < 5; i++ {
	vals = append(vals, i)
}
fmt.Println(vals)
```

When calling `make` we are permitted to pass in up to 3 arguments. The first is the type that we are allocating, the second is the `length` of the slice, and the third is its `capacity` (_this parameter is optional_).

By passing in the arguments `make([]int, 5)` we are telling our program that we want to create a slice with a length of 5, and the capacity defaults to the length provided - 5 in this instance.
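
For comparison, here is a hypothetical call (not part of the quiz code) that sets the length and the capacity to different values, so you can see the two numbers move independently:

```
vals := make([]int, 5, 10)
fmt.Println(len(vals)) // prints 5 - five zero-valued elements we can index right away
fmt.Println(cap(vals)) // prints 10 - the size of the backing array
```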

While this might seem like what we wanted at first, the important distinction here is that we told our slice that we wanted to set both the `length` and `capacity` to 5, and then we proceeded to call the `append` function, which assumes you want to add a new element _after_ the initial 5, so it will increase the capacity and start adding new elements at the end of the slice.

You can actually see the capacity changing if you add a `Println()` statement to your code.

```
vals := make([]int, 5)
fmt.Println("Capacity was:", cap(vals))
for i := 0; i < 5; i++ {
	vals = append(vals, i)
	fmt.Println("Capacity is now:", cap(vals))
}

fmt.Println(vals)
```

_[Run it on the Go Playground][4]_

As a result, we end up getting the output `[0 0 0 0 0 0 1 2 3 4]` instead of the desired `[0 1 2 3 4]`.

How do we fix it? Well, there are several ways to do this, so we are going to cover two of them and you can pick whichever makes the most sense in your situation.

### Write directly to indexes instead of using `append`

The first fix is to leave the `make` call unchanged and explicitly state the index that you want to set each element to. Doing this, we would get the following code:

```
vals := make([]int, 5)
for i := 0; i < 5; i++ {
	vals[i] = i
}
fmt.Println(vals)
```

_[Run it on the Go Playground][5]_

In this case the value we are setting happens to be the same as the index we want to use, but you can also keep track of the index independently.

For example, if you wanted to get the keys of a map you could use the following code.

```
package main

import "fmt"

func main() {
	fmt.Println(keys(map[string]struct{}{
		"dog": struct{}{},
		"cat": struct{}{},
	}))
}

func keys(m map[string]struct{}) []string {
	ret := make([]string, len(m))
	i := 0
	for key := range m {
		ret[i] = key
		i++
	}
	return ret
}
```

_[Run it on the Go Playground][6]_

This works well because we know that the exact length of the slice we return will be the same as the length of the map, so we can initialize our slice with that length and then assign each element to an appropriate index. The downside to this approach is that we have to keep track of `i` so that we know what index to place every value in.

This leads us to the second approach we are going to cover...

### Use `0` as your length and specify your capacity instead

Rather than keeping track of which index we want to add our values to, we can instead update our `make` call and provide it with two arguments after the slice type. The first, the length of our new slice, will be set to `0`, as we haven't added any new elements to our slice. The second, the capacity of our new slice, will be set to the length of the map parameter because we know that our slice will eventually have that many strings added to it.

This will still construct the same array behind the scenes as the previous example, but now when we call `append` it will know to place items at the start of our slice because the length of the slice is 0.

```
package main

import "fmt"

func main() {
	fmt.Println(keys(map[string]struct{}{
		"dog": struct{}{},
		"cat": struct{}{},
	}))
}

func keys(m map[string]struct{}) []string {
	ret := make([]string, 0, len(m))
	for key := range m {
		ret = append(ret, key)
	}
	return ret
}
```

_[Run it on the Go Playground][7]_

### Why do we bother with capacity at all if `append` handles it?

The next thing you might be asking is, "Why are we even telling our program a capacity if the `append` function can handle increasing the capacity of my slice for me?"

The truth is, in most cases you don't need to worry about this too much. If it makes your code significantly more complicated, just initialize your slice with `var vals []int` and let the `append` function handle the heavy lifting for you.

But this case is different. It isn't an instance where declaring the capacity is difficult; in fact, it is actually quite easy to determine what the final capacity of our slice needs to be because we know it will map directly to the provided map. As a result, we can declare the capacity of our slice when we initialize it and save our program from needing to perform unnecessary memory allocations.

If you want to see what the extra memory allocations look like, run the following code on the Go Playground. Every time the capacity increases, our program needs to perform another memory allocation.

```
package main

import "fmt"

func main() {
	fmt.Println(keys(map[string]struct{}{
		"dog":       struct{}{},
		"cat":       struct{}{},
		"mouse":     struct{}{},
		"wolf":      struct{}{},
		"alligator": struct{}{},
	}))
}

func keys(m map[string]struct{}) []string {
	var ret []string
	fmt.Println(cap(ret))
	for key := range m {
		ret = append(ret, key)
		fmt.Println(cap(ret))
	}
	return ret
}
```

_[Run it on the Go Playground][8]_

Now compare this to the same code but with a predefined capacity.

```
package main

import "fmt"

func main() {
	fmt.Println(keys(map[string]struct{}{
		"dog":       struct{}{},
		"cat":       struct{}{},
		"mouse":     struct{}{},
		"wolf":      struct{}{},
		"alligator": struct{}{},
	}))
}

func keys(m map[string]struct{}) []string {
	ret := make([]string, 0, len(m))
	fmt.Println(cap(ret))
	for key := range m {
		ret = append(ret, key)
		fmt.Println(cap(ret))
	}
	return ret
}
```

_[Run it on the Go Playground][9]_

In the first code sample our capacity starts at `0`, and then increases to `1`, `2`, `4`, and then finally `8`, meaning we had to allocate a new array 5 different times, and on top of that the final array used to back our slice has a capacity of `8`, which is bigger than we ultimately needed.

On the other hand, our second sample starts and ends with the same capacity (`5`) and only needs to allocate it once at the start of the `keys()` function. We also avoid wasting any extra memory and return a slice with a perfectly sized array backing it.
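
If you want to measure the difference rather than just print capacities, Go's built-in benchmarking support makes the comparison easy. The sketch below is a hypothetical benchmark - the function names and the element count of 1000 are ours, not from the article. Save it in a file ending in `_test.go` and run `go test -bench=. -benchmem` to see the allocation counts reported side by side.

```
package main

import "testing"

// Grows a slice from nil, forcing append to reallocate repeatedly.
func BenchmarkGrowing(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		var vals []int
		for i := 0; i < 1000; i++ {
			vals = append(vals, i)
		}
	}
}

// Preallocates the full capacity up front, so append never reallocates.
func BenchmarkPreallocated(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		vals := make([]int, 0, 1000)
		for i := 0; i < 1000; i++ {
			vals = append(vals, i)
		}
	}
}
```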

### Don't over-optimize

As I said before, I typically wouldn't encourage anyone to worry about minor optimizations like this, but in cases where it is really obvious what the final size should be I strongly encourage you to set an appropriate capacity or length for your slices.

Not only does it help improve the performance of your application, but it can also help clarify your code a bit by explicitly stating the relationship between the size of your input and the size of your output.

### In summary...

> Hi there! I write a lot about Go, web development, and other topics I find interesting.
>
> If you want to stay up to date with my writing, please [sign up for my mailing list][10]. I'll send you a FREE sample of my upcoming book, Web Development with Go, and an occasional email when I publish a new article (usually 1-2 per week).
>
> Oh, and I promise I don't spam. I hate it as much as you do :)

This article is not meant to be an exhaustive discussion of the differences between slices and arrays, but instead is meant to serve as a brief introduction to how capacity and length affect your slices, and what purpose they serve in the grand scheme of things.

For further reading, I highly recommend the following articles from the Go Blog:

* [Go Slices: usage and internals][11]
* [Arrays, slices (and strings): The mechanics of 'append'][12]
* [Slice Tricks][13]

--------------------------------------------------------------------------------

作者简介:

Jon is a software consultant and the author of the book Web Development with Go. Prior to that he founded EasyPost, a Y Combinator backed startup, and worked at Google.

https://www.usegolang.com

--------------------------------------------------------------------------------

via: https://www.calhoun.io/how-to-use-slice-capacity-and-length-in-go

作者:[Jon Calhoun][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.calhoun.io/hire-me
[1]:https://play.golang.org/p/7PgUqBdZ6Z
[2]:https://play.golang.org/p/Np6-NEohm2
[3]:https://play.golang.org/p/M_qaNGVbC-
[4]:https://play.golang.org/p/d6OUulTYM7
[5]:https://play.golang.org/p/JI8Fx3fJCU
[6]:https://play.golang.org/p/kIKxkdX35B
[7]:https://play.golang.org/p/h5hVAHmqJm
[8]:https://play.golang.org/p/fDbAxtAjLF
[9]:https://play.golang.org/p/nwT8X9-7eQ
[10]:https://www.calhoun.io/how-to-use-slice-capacity-and-length-in-go/?utm_source=golangweekly&utm_medium=email#mailing-list-form
[11]:https://blog.golang.org/go-slices-usage-and-internals
[12]:https://blog.golang.org/slices
[13]:https://github.com/golang/go/wiki/SliceTricks
[14]:https://www.calhoun.io/how-to-use-slice-capacity-and-length-in-go/

How to install OTRS (OpenSource Trouble Ticket System) on CentOS 7
============================================================

### On this page

1. [The Environment][1]
2. [Preparation][2]
3. [Install MariaDB on CentOS 7][3]
4. [Install EPEL][4]
5. [Install OTRS][5]
6. [Configure OTRS on CentOS 7][6]

OTRS (Open-source Trouble Ticket System) is sophisticated open source software used by companies to improve their operations related to customer support, help desks, call centers and more. OTRS is written in Perl and provides the following important features:

* Customers can register and create/interact with a ticket via the customer portal and by email, phone, and fax with each queue (attendants'/technicians' post box).
* Tickets can be managed by their priority, assignment, transmission and follow-up. A ticket can be split or merged, bulk actions can be applied, tickets can be linked to each other, and notifications can be set. Services can be configured through the service catalog.
* To increase the team capacity, auto email (automatic answers), text templates and signatures can be configured. The system supports notes and attachments on tickets.
* Other capabilities include: statistics and reports (CSV/PDF), SLAs and many other features.

### The Environment

This article covers the OTRS 5 installation and basic configuration. It was written based on the following environment: a VirtualBox VM with CentOS 7 Minimal, 2GB RAM, an 8GB HD and 2 network interfaces (host-only and NAT).

### Preparation

Assuming that you use a fresh installation of CentOS 7 Minimal, before installing OTRS, run the following command to update the system:

```
yum update
```

```
Transaction Summary
================================================================================
Install   1 Package
Upgrade  39 Packages

Total download size: 91 M
Is this ok [y/d/N]: y
```

Install a text editor or use vi. In this article we use Vim; run the following command to install it:

```
yum install vim
```

To install the wget package, run the following command:

```
yum install wget
```

To configure the CentOS 7 network, run the following command to open the NMTUI (Network Manager Text User Interface) tool and edit the interfaces and hostname if necessary:

```
nmtui
```

[
![](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.jpg)
][7]

After setting up the network settings and hostname on CentOS 7, run the following command to apply the changes:

```
systemctl restart network
```

To verify the network information, run the following command:

```
ip addr
```

The output looks like this on my system:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:67:bc:73 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 84631sec preferred_lft 84631sec
    inet6 fe80::9e25:c982:1091:90eb/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:68:88:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.101/24 brd 192.168.56.255 scope global dynamic enp0s8
       valid_lft 1044sec preferred_lft 1044sec
    inet6 fe80::a00:27ff:fe68:88f3/64 scope link
       valid_lft forever preferred_lft forever
```

To disable SELinux (Security-Enhanced Linux) on CentOS 7, edit the following config file:

```
vim /etc/selinux/config
```

```
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
```

Change the value **enforcing** of the SELINUX directive to **disabled**, save the file and reboot the server.
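
If you prefer to make the same change non-interactively, a one-line `sed` command will do it, assuming the file still contains the default `SELINUX=enforcing` line; reboot afterwards as above:

```
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
reboot
```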

To check the status of SELinux on CentOS 7, run the following command:

```
getenforce
```

The output must be:

```
Disabled
```

### Install MariaDB on CentOS 7

To install MariaDB on CentOS 7, run the following command:

```
yum -y install mariadb-server
```

Create a file with the name **zotrs.cnf** in the following directory:

```
/etc/my.cnf.d/
```

To create and edit the file, run the following command:

```
vim /etc/my.cnf.d/zotrs.cnf
```

Fill the file with the following content and save it (the settings belong in the `[mysqld]` group, so the file needs that group header):

```
[mysqld]
max_allowed_packet = 20M
query_cache_size = 32M
innodb_log_file_size = 256M
```
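
Once MariaDB has been started in the next step, you can optionally confirm that the tuning values were picked up with a quick query such as:

```
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
```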

To start MariaDB, run the following command:

```
systemctl start mariadb
```

To increase the security of MariaDB, run the following command:

```
/usr/bin/mysql_secure_installation
```

Set up the options according to the following output:

```
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): <Press Enter>
```

```
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] <Press Y>
```

Set the root password:

```
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] <Press Y>
```

```
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] <Choose according to your needs>
```

```
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] <Press Y>
```

```
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] <Press Y>
```

```
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
```

Set up MariaDB to start automatically at boot time:

```
systemctl enable mariadb.service
```

To download OTRS, run the following command:

```
wget http://ftp.otrs.org/pub/otrs/RPMS/rhel/7/otrs-5.0.15-01.noarch.rpm
```

### Install EPEL

Before we install OTRS, set up the EPEL repository on CentOS 7. Run the following command to do so:

```
yum -y install http://mirror.globo.com/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
```

### Install OTRS

Install OTRS with the following command:

```
yum install --nogpgcheck otrs-5.0.15-01.noarch.rpm
```

A list of software packages will be installed, e.g. Apache, and all dependencies will be resolved automatically; at the end of the output, press y:

```
Transaction Summary
================================================================================
Install  1 Package (+143 Dependent packages)

Total size: 148 M
Total download size: 23 M
Installed size: 181 M
Is this ok [y/d/N]: y
```

To start Apache (httpd), run the following command:

```
systemctl start httpd.service
```

To enable Apache (httpd) startup with systemd on CentOS 7, run the following command:

```
systemctl enable httpd.service
```

Next, enable SSL in Apache and configure a self-signed certificate. To install the mod_ssl module for the Apache HTTP Server, run the following command:

```
yum -y install mod_ssl
```

To generate a self-signed SSL certificate, go to the following directory:

```
cd /etc/pki/tls/certs/
```

And run the following command to generate the key (centos7.key is the name of my certificate, feel free to change it):

```
make centos7.key
```

```
umask 77 ; \
/usr/bin/openssl genrsa -aes128 2048 > centos7.key
Generating RSA private key, 2048 bit long modulus
.+++
.........................................................................................+++
e is 65537 (0x10001)
Enter pass phrase: <Insert your own password>
Verifying - Enter pass phrase: <Retype the password>
```

To strip the passphrase from the server's SSL private key with OpenSSL, so that Apache can start without prompting for it, run the following command:

```
openssl rsa -in centos7.key -out centos7.key
```

```
Enter pass phrase for centos7.key: <Type the password>
writing RSA key
```

Run the following command to create the CSR (Certificate Signing Request) file (centos7.csr is the name of my certificate, feel free to change it):

```
make centos7.csr
```

Fill in the questions according to your needs:

```
umask 77 ; \
/usr/bin/openssl req -utf8 -new -key centos7.key -out centos7.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: <press enter>
An optional company name []:
```

Generate a self-signed certificate for the server from the CSR with the OpenSSL tool:

```
openssl x509 -in centos7.csr -out centos7.crt -req -signkey centos7.key
```

The output is:

```
Signature ok
subject=/C=BR/ST=SP/L=Campinas/O=Centos7/OU=Centos7/CN=centos7.local
Getting Private key
```

Before we edit the ssl.conf file, make a copy of the file with the following command:

```
cp /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.old
```

Then edit the file:

```
vim /etc/httpd/conf.d/ssl.conf
```

Find the following directives, uncomment each one and edit them like this (note that SSLCertificateFile must point at the .crt certificate, not the .csr request):

```
SSLCertificateKeyFile /etc/pki/tls/certs/centos7.key

SSLCertificateFile /etc/pki/tls/certs/centos7.crt

SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2

ServerName centos7.local:443
```
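
Optionally, before restarting Apache you can check that the configuration parses cleanly; this catches typos in the directives above:

```
apachectl configtest
```

If everything is fine, the command prints `Syntax OK`.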

Restart Apache with the following command:

```
systemctl restart httpd
```

To force OTRS to run in https mode, edit the following file:

```
vim /etc/httpd/conf/httpd.conf
```

At the end of the file, uncomment the following directive:

```
IncludeOptional conf.d/*.conf
```

Edit the file zzz_otrs.conf:

```
vim /etc/httpd/conf.d/zzz_otrs.conf
```

After line 26 (before the mod_version.c module line), add the following directives:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
```
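
After Apache is restarted in the next step, you can verify the redirect from the command line. Assuming the same centos7.local hostname used throughout this tutorial, a plain-HTTP request should answer with a redirect to the https URL:

```
curl -I http://centos7.local/otrs/index.pl
```

Look for a `302 Found` status and a `Location:` header pointing at `https://centos7.local/otrs/index.pl`.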

Restart Apache:

```
systemctl restart httpd
```

To use extended features in OTRS, we have to install some Perl modules. Run the following command to install them:

```
yum -y install "perl(Text::CSV_XS)" "perl(Crypt::Eksblowfish::Bcrypt)" "perl(YAML::XS)" "perl(JSON::XS)" "perl(Encode::HanExtra)" "perl(Mail::IMAPClient)" "perl(ModPerl::Util)"
```

The OTRS system has a tool to check the Perl modules. Run it like this to verify the system requirements:

```
cd /opt/otrs/bin
```

and run:

```
./otrs.CheckModules.pl
```

The output for our configuration should be:

```
  o Apache::DBI......................ok (v1.12)
  o Apache2::Reload..................ok (v0.13)
  o Archive::Tar.....................ok (v1.92)
  o Archive::Zip.....................ok (v1.30)
  o Crypt::Eksblowfish::Bcrypt.......ok (v0.009)
  o Crypt::SSLeay....................ok (v0.64)
  o Date::Format.....................ok (v2.24)
  o DBI..............................ok (v1.627)
  o DBD::mysql.......................ok (v4.023)
  o DBD::ODBC........................Not installed! (optional - Required to connect to a MS-SQL database.)
  o DBD::Oracle......................Not installed! (optional - Required to connect to a Oracle database.)
  o DBD::Pg..........................Not installed! Use: 'yum install "perl(DBD::Pg)"' (optional - Required to connect to a PostgreSQL database.)
  o Digest::SHA......................ok (v5.85)
  o Encode::HanExtra.................ok (v0.23)
  o IO::Socket::SSL..................ok (v1.94)
  o JSON::XS.........................ok (v3.01)
  o List::Util::XS...................ok (v1.27)
  o LWP::UserAgent...................ok (v6.13)
  o Mail::IMAPClient.................ok (v3.37)
    o IO::Socket::SSL................ok (v1.94)
  o ModPerl::Util....................ok (v2.000010)
  o Net::DNS.........................ok (v0.72)
  o Net::LDAP........................ok (v0.56)
  o Template.........................ok (v2.24)
  o Template::Stash::XS..............ok (undef)
  o Text::CSV_XS.....................ok (v1.00)
  o Time::HiRes......................ok (v1.9725)
  o Time::Piece......................ok (v1.20_01)
  o XML::LibXML......................ok (v2.0018)
  o XML::LibXSLT.....................ok (v1.80)
  o XML::Parser......................ok (v2.41)
  o YAML::XS.........................ok (v0.54)
```

To start the OTRS daemon with the "otrs" user, run the following command:

```
su -c "/opt/otrs/bin/otrs.Daemon.pl start" -s /bin/bash otrs
```

To disable the CentOS 7 firewall, run the following command:

```
systemctl stop firewalld
```

To prevent the CentOS 7 firewall from starting automatically at boot, run:

```
systemctl disable firewalld.service
```

Start the OTRS cron jobs as the otrs user:

```
su -c "/opt/otrs/bin/Cron.sh start" -s /bin/bash otrs
```

The output of the command should be:

```
/opt/otrs/bin
Cron.sh - start/stop OTRS cronjobs
Copyright (C) 2001-2012 OTRS AG, http://otrs.org/
(using /opt/otrs) done
```

If you want to check the OTRS daemon status, run the following command:

```
su -c "/opt/otrs/bin/otrs.Daemon.pl status" -s /bin/bash otrs
```

Now configure OTRS in the crontab. Switch from the root user to the otrs user and start editing the crontab:

```
su otrs
crontab -e
```

Fill the crontab with the following content and save it:

```
# --
# Copyright (C) 2001-2016 OTRS AG, http://otrs.com/
# --
# This software comes with ABSOLUTELY NO WARRANTY. For details, see
# the enclosed file COPYING for license information (AGPL). If you
# did not receive this file, see http://www.gnu.org/licenses/agpl.txt.
# --

# Who gets the cron emails?
MAILTO="root@localhost"

# check OTRS daemon status
*/5 * * * * $HOME/bin/otrs.Daemon.pl start >> /dev/null
```

### Configure OTRS on CentOS 7

Open a web browser and go to the URL [https://centos7.local/otrs/installer.pl][8]. Remember, centos7.local is the name of my server; insert your own hostname or IP address. The first screen shows the 4 steps to conclude the OTRS installation; press Next.

[
![OTRS installation screen](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.13_.jpg)
][9]

License: read and accept the license to continue:

[
![Accept the license and continue](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.14_.jpg)
][10]

Database Selection: select the option **MySQL**, and for the Install Type, mark the "Create a new database for OTRS" option and click on the Next button:

[
![Select database type mysql](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.15_.jpg)
][11]

Configure MySQL: fill in the User, Password and Host fields (remember the data from the MariaDB configuration that we made) and press "Check database settings":

[
![Insert database login details](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.16_.jpg)
][12]

The OTRS installer will create the database in MariaDB; press the Next button:

[
![Create OTRS database](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.17_.jpg)
][13]

OTRS database created successfully:

[
![OTRS Database created](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.18_.jpg)
][14]

Configure system settings: fill in the fields with your own information and press Next:

[
![Set the personal config details](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.19_.jpg)
][15]

OTRS e-mail configuration: fill in the fields according to your e-mail server. In my setup, I use SMTP with TLS on port 587 for outbound email and POP3 for inbound email; you will need an e-mail account. Check the mail configuration or skip this step:

[
![Email setup in OTRS](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.21_.jpg)
][16]

To finish, take note of the user and password used to access OTRS; after login you can change the password:

[
![OTRS Username and password](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.23_.jpg)
][17]

The OTRS login URL is [https://centos7.local/otrs/index.pl?][18]. Remember, centos7.local is the name of my server; insert your own hostname or IP address:

[
![Login to OTRS](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.25_.jpg)
][19]

Log in to OTRS:

[
![OTRS Admin Login](https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/OTRS_How_To_Alexandre_Costa.27_.jpg)
][20]

OTRS is installed and ready to be configured with your support rules or business model.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/

作者:[Alexandre Costa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/
[1]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#thenbspenvironment
[2]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#preparation
[3]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-mariadb-on-centos-
[4]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-epelnbsp
[5]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#install-otrs
[6]:https://www.howtoforge.com/tutorial/how-to-install-otrs-on-centos-7/#configure-otrs-on-centos-
[7]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.jpg
[8]:http://centos7.local/otrs/installer.pl
[9]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.13_.jpg
[10]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.14_.jpg
[11]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.15_.jpg
[12]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.16_.jpg
[13]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.17_.jpg
[14]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.18_.jpg
[15]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.19_.jpg
[16]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.21_.jpg
[17]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.23_.jpg
[18]:https://centos7.local/otrs/index.pl
[19]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.25_.jpg
[20]:https://www.howtoforge.com/images/how_to_install_and_configure_otrs_open_source_trouble_ticket_system_software_on_centos_7/big/OTRS_How_To_Alexandre_Costa.27_.jpg

Dedicated engineering team in South Africa deploys open source tools, saves lives
============================================================

![Dedicated engineering team in South Africa deploys open source tools, saves lives](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/world_hands_diversity.png?itok=LMT5xbxJ "Dedicated engineering team in South Africa deploys open source tools, saves lives")
Image by : opensource.com

In 2006, a groundbreaking TED talk used statistics to reveal surprising [insights about the developing world][2], including how many people in South Africa have HIV despite free and available anti-retroviral drugs.

[Gustav Praekelt][3], founder of [Praekelt.org][4], heard this TED talk and began tenaciously calling a local hospital to convince them to start an SMS program that would promote anti-retrovirals. The program that resulted from those calls became [txtAlert][5]—a successful and widely recognized mobile health program that dramatically improves medical appointment adherence and creates a free channel for patients to communicate with the hospital.

Today, nearly a decade later, the organization that Gustav founded in 2007, Praekelt.org, continues to harness the power of mobile technology.

The global nonprofit organization uses open source technologies to deliver essential information and vital services to millions of people around the world, particularly in Africa. We are deeply committed to the idea that our software innovations should be shared with the development community that made delivering our products possible. By participating in and giving back to this community we support and sustain the rich ecosystem of tools and products that they have developed to improve the lives of people around the world.

Praekelt.org is a supporter of the [Principles for Digital Development][6] and in particular [Cause 6][7], which states:

* Adopt and expand existing open standards.
* Open data and functionalities and expose them in documented Application Programming Interfaces (APIs) where use by a larger community is possible.
* Invest in software as a public good.
* Develop software to be open source by default with the code made available in public repositories and supported through developer communities.

A great example of this can be found in our original work to make population-scale messaging possible in the majority world. We had and continue to have success with txtAlert in South Africa, but despite considerable interest, replicating this success in other places has been very challenging. The integration work required for each new messaging service provider demands too much customization.

To solve this, we created [Vumi][8], a software library that provides a single point of integration for messaging communication channels. It abstracts away all of the differences that would otherwise require customized integrations and provides a single consistent API to speak to all of them. The result is a dramatic increase in the re-use of both integrations and applications, because they only need to be written once and can be used widely.

Vumi provides the means of integration, and this past year, in collaboration with UNICEF, we launched [Junebug][9], an application server that provides APIs to launch Vumi integrations, enabling direct messaging system integrations in both cloud- and on-premise-based scenarios. Junebug now powers national-scale maternal health programs in South Africa, Nigeria, and Uganda, delivering essential information for expecting women and mothers. It also provides SMS and [Unstructured Supplementary Service Data][10] (USSD) access to vital services, such as national helpdesks and FAQ services.

These systems have processed over 375 million real-time messages in the last year.

We are a relatively small engineering team based out of South Africa. We could not fathom developing these services were we not standing on the shoulders of giants. All of the services we provide or build on are available as open source software.

Our language of choice is [Python][11], which enables us to express our ideas in code succinctly and in a way that is both readable and maintainable. Our messaging systems are built using [Twisted][12], an excellent event-driven network programming framework built using Python. [Molo][13], our web publishing platform, is built using [Django][14] and the wonderful open source [Wagtail CMS][15] built by our friends at [Torchbox][16].

Our three-person site reliability engineering team is able to run over a thousand applications in production by relying on Mesosphere's [Marathon][17] for [Apache Mesos][18]. We have recently released [Marathon Acme][19], which enables automatic SSL/TLS certificate provisioning via [LetsEncrypt][20] for Marathon's load balancer, ensuring our services are secure.

Our engineering team is distributed, and the workflow enabled by [Git][21] allows us to develop software in a reliable fashion. For example, by using test-driven development we are able to automate our deploys. Using these open source tools and systems we've averaged 21 automated deploys a day over the course of 2016. Developing software in an open environment is easier and more effective. Our work would have been significantly more difficult had there not been such an active and vibrant community on which to build.

We are excited to be part of these developments in open source technology integration. As a mission-driven organization we are deeply committed to continuing to [share][22] [what we learn][23] and develop. If you are interested in joining our team, [apply here][24]. Our open source repositories have documented OS licenses and contribution guidelines. We welcome any community contributions. Please email us at [dev@praekelt.org][25].

--------------------------------------------------------------------------------

作者简介:

Simon de Haan - Simon de Haan is the Chief Engineer at Praekelt Foundation and has the rare talent to demystify software systems and platforms for nonengineers. He was the team lead on Praekelt Foundation's Vumi platform, an open source messaging platform that allows for interactive conversations over SMS, USSD, Gtalk and other basic technologies at low cost and at population scale in the majority world. Vumi is the technology that powers various groundbreaking initiatives such as Wikipedia Text, PeaceTXT,

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/2/open-source-tools-south-africa

作者:[Simon de Haan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/praekelt
[1]:https://opensource.com/article/17/2/open-source-tools-south-africa?rate=XZZ1Mtc79KokPszccwi_HiEkWMJyoJZghkUumJTwIiI
[2]:https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen
[3]:http://www.praekelt.org/
[4]:http://www.praekelt.org/
[5]:http://txtalert.praekeltfoundation.org/bookings/about-txtalert/
[6]:http://digitalprinciples.org/
[7]:http://digitalprinciples.org/use-open-standards-open-data-open-source-and-open-innovation/
[8]:https://github.com/praekelt/vumi
[9]:http://junebug.praekelt.org/
[10]:https://en.wikipedia.org/wiki/Unstructured_Supplementary_Service_Data
[11]:https://www.python.org/
[12]:https://en.wikipedia.org/wiki/Twisted_(software)
[13]:http://molo.readthedocs.io/
[14]:http://www.djangoproject.com/
[15]:https://wagtail.io/
[16]:https://torchbox.com/work/wagtail/
[17]:https://mesosphere.github.io/marathon/
[18]:http://mesos.apache.org/
[19]:https://github.com/praekeltfoundation/marathon-acme
[20]:https://letsencrypt.org/
[21]:http://git-scm.org/
[22]:https://medium.com/@praekeltorg
[23]:https://medium.com/@praekeltorg
[24]:http://www.praekelt.org/careers/
[25]:mailto:dev@praekelt.org
[26]:https://opensource.com/user/108011/feed
[27]:https://opensource.com/users/praekelt

How to capture and stream your gaming session on Linux
============================================================

### On this page

1. [Capture settings][1]
2. [Setting up the sources][2]
3. [Transitioning][3]
4. [Conclusion][4]

There may not be many hardcore gamers who use Linux, but there certainly are quite a lot of Linux users who like to play a game now and then. If you are one of them and would like to show the world that Linux gaming isn't a joke anymore, then you will find the following quick tutorial on how to capture and/or stream your gaming session interesting. The software tool that I will be using for this purpose is called "[Open Broadcaster Software Studio][5]" and it is perhaps the best tool of its kind that we have at our disposal.

### Capture settings

Through the top panel menu, we choose File → Settings and then we select "Output" to set our preferences for the file that is to be produced. Here we can set the audio and video bitrate that we want, the destination path for the newly created file, and the file format. A rough setting for the quality is also available on this screen.

[
![Select output set in OBS Studio](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_1.png)
][6]

If we change the output mode at the top from "Simple" to "Advanced", we will be able to set the CPU usage load that we allow OBS to place on our system. Depending on the selected quality, the CPU capabilities, and the game that we are capturing, there's a CPU load setting that won't cause the frames to drop. You may have to do some trial and error to find the optimal setting, but if the quality is set to low you shouldn't worry about it.

[
![Change OBS output mode](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_2.png)
][7]

Next, we go to the "Video" section of the settings where we can set the output video resolution that we want. Pay attention to the down-scaling filtering method, as it makes all the difference in regards to the quality of the end result.

[
![Down scaling filter](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_3.png)
][8]

You may also want to bind hotkeys for starting, pausing, and stopping a recording. This is especially useful since you will be seeing your game's screen while recording. To do this, choose the "Hotkeys" section in the settings and assign the keys that you want in the corresponding boxes. Of course, you don't have to fill out every box, only the ones you need.

[
![Configure Hotkeys in OBS](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_4.png)
][9]

If you are interested in streaming and not just recording, then select the "Stream" category of settings. There you may select the streaming service among the 30 that are supported, including Twitch, Facebook Live and YouTube, and then select a server and enter a stream key.

[
![Streaming settings](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_5.png)
][10]

### Setting up the sources

On the lower left, you will find a box entitled "Sources". There we press the plus sign button to add a new source that is essentially our recording media source. Here you can set not only audio and video sources, but images and even text as well.

[
![OBS Media Source](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_6.png)
][11]

The first three concern audio sources, the next two images, the JACK option is for live audio capturing from an instrument, the Media Source is for the addition of a file, etc. What we are interested in for our purpose are the "Screen Capture (XSHM)", the "Video Capture Device (V4L2)", and the "Window Capture (Xcomposite)" options.

The screen capture option lets you select the screen that you want to capture (including the active one), so everything is recorded: workspace changes, window minimizations, etc. It is a suitable option for a standard bulk recording that will get edited before getting released.

Let's explore the other two. The Window Capture will let us select one of our active windows and put it into the capturing monitor. The Video Capture Device is useful in order to put our face right there in a corner so people can see us while we're talking. Of course, each added source offers a set of options that we can fiddle with in order to achieve the result that we are after.

[
![OBS Window Capture](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_7.png)
][12]

The added sources are re-sizable and also movable along the plane of the recording frame, so you may add multiple sources, arrange them as you like, and finally perform basic editing tasks by right-clicking on them.

[
![Add Multiple sources](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_8.png)
][13]

### Transitioning

Finally, let's suppose that you are streaming your gaming session and you want to be able to rotate between the game view and yourself (or any other source). To do this, change to "Studio Mode" from the lower right and add a second scene with another source assigned to it. You may also rotate between sources by unchecking "Duplicate scene" and checking "Duplicate sources" in the gear icon next to "Transitions". This is helpful for when you want to show your face only for short commentary, etc.

[
![Studio mode](https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/pic_9.png)
][14]

There are many transition effects available in this software and you may add more by pressing the plus sign icon next to "Quick Transitions" in the center. As you add them, you will also be prompted to configure them.

### Conclusion

The OBS Studio software is a powerful piece of free software that works stably, is fairly simple and straightforward to use, and has a growing set of [additional plugins][15] that extend its functionality. If you need to record and/or stream your gaming session on Linux, I can't think of a better solution than using OBS. What is your experience with this or other similar tools? Share in the comments and feel free to also include a video link that showcases your skills. :)

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/

作者:[Bill Toulas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/
[1]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#capture-settings
[2]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#setting-up-the-sources
[3]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#transitioning
[4]:https://www.howtoforge.com/tutorial/how-to-capture-and-stream-your-gaming-session-on-linux/#conclusion
[5]:https://obsproject.com/download
[6]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_1.png
[7]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_2.png
[8]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_3.png
[9]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_4.png
[10]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_5.png
[11]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_6.png
[12]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_7.png
[13]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_8.png
[14]:https://www.howtoforge.com/images/how-to-capture-and-stream-your-gaming-session-on-linux/big/pic_9.png
[15]:https://obsproject.com/forum/resources/categories/obs-studio-plugins.6/
How to make file-specific setting changes in Vim using Modeline
|
||||
============================================================
|
||||
ch-cn translating
|
||||
|
||||
### On this page
|
||||
|
||||
1. [VIM Modeline][2]
|
||||
1. [Usage][1]
|
||||
2. [Conclusion][3]
|
||||
|
||||
While [plugins][4] are no doubt one of Vim's biggest strengths, there are several other functionalities that make it one of the most powerful and feature-rich text editors/IDEs available to Linux users today. One of these functionalities is the ability to make file-specific setting changes. This ability can be accessed using the editor's Modeline feature.
|
||||
|
||||
In this article, we will discuss how you can use Vim's [Modeline][5] feature using easy to understand examples.
|
||||
|
||||
But before we start doing that, it's worth mentioning that all the examples, commands, and instructions mentioned in this tutorial have been tested on Ubuntu 16.04, and the Vim version we've used is 7.4.
|
||||
|
||||
### VIM Modeline
|
||||
|
||||
### Usage
|
||||
|
||||
As we've already mentioned, Vim's Modeline feature lets you make file-specific changes. For example, suppose you want to replace all the tabs used in a particular file of your project with spaces, and make sure that all other files aren't affected by this change. This is an ideal use-case where Modeline helps you in what you want to do.
|
||||
|
||||
So, what you can do is, you can put the following line in the beginning or end of the file in question:
|
||||
|
||||
```
|
||||
# vim: set expandtab:
|
||||
```
|
||||
|
||||
There are high chances that if you try doing the aforementioned exercise to test the use-case on your Linux machine, things won't work as expected. If that's the case, worry not, as the Modeline feature needs to be activated first in some cases (it's disabled by default on systems such as Debian, Ubuntu, Gentoo, and OSX for security reasons).
|
||||
|
||||

To enable the feature, open the .vimrc file (located in your home directory), and then add the following line to it:

```
set modeline
```

Now, whenever you enter a tab and save the file (where the expandtab modeline command was entered), the tab will automatically be converted into spaces.

Let's consider another use-case. Suppose the default tab width in Vim is set to 4, but for a particular file, you want to increase it to 8. For this, you need to add the following line at the beginning or the end of the file:

```
// vim: noai:ts=8:
```

Now try entering a tab and you'll see that the number of spaces it covers is 8.

You might have noticed me saying that these modeline commands need to be entered somewhere near the top or the bottom of the file. If you're wondering why this is so, the reason is that the feature is designed this way. The following lines (taken from the official Vim documentation) should make this clearer:

"The modeline cannot be anywhere in the file: it must be in the first or last few lines. The exact location where vim checks for the modeline is controlled by the `modelines` variable; see :help modelines. By default, it is set to 5 lines."

And here's what the :help modelines command (referred to in the above lines) says:

If 'modeline' is on 'modelines' gives the number of lines that is checked for set commands. If 'modeline' is off or 'modelines' is zero no lines are checked.

Try putting the modeline command beyond the default 5-line range (from either the top or the bottom), and you'll notice that the tab width reverts to the Vim default - in my case that's 4 spaces.

However, you can change this behavior if you want, using the following command in your .vimrc file.

```
set modelines=[new-value]
```

For example, I increased the value from 5 to 10.

```
set modelines=10
```

This means that I can now put the modeline command anywhere within the first or last 10 lines of the file.

Moving on, at any point in time while editing a file, you can enter the following (with the Vim editor in command mode) to see the current modeline-related settings as well as where they were last set.

```
:verbose set modeline? modelines?
```

For example, in my case, the above command produced the following output:

```
  modeline
        Last set from ~/.vimrc
  modelines=10
        Last set from ~/.vimrc
```

Here are some of the important points you need to know about Vim's Modeline feature:

* This feature is enabled by default for Vim running in nocompatible (non Vi-compatible) mode, but some notable distributions of Vim disable this option in the system vimrc for security.
* The feature is disabled by default when editing as root (if you've opened the file using 'sudo' then there's no issue - the feature works).
* With `set`, the modeline ends at the first colon not following a backslash. And without `set`, no text can follow the options. For example, **/* vim: noai:ts=4:sw=4 */** is an invalid modeline (see the example just below this list).
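
To make that last distinction concrete, here is the same modeline rewritten in the `set` form. It is valid because, with `set`, processing stops at the first colon not preceded by a backslash, so the trailing `*/` of the C comment is ignored:

```
/* vim: set noai ts=4 sw=4: */
```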

### Security Concerns

Sadly, Vim's Modeline feature can be used to compromise security. In fact, multiple security-related Modeline issues have been reported in the past, including [shell command injection][6], [arbitrary command execution][7], [unauthorized access][8], and more. Agreed, most of these are old and should have been fixed by now, but they do give an idea that the Modeline feature could be misused by hackers.
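
If that trade-off doesn't sit well with you, you can make sure the feature stays off by disabling it explicitly in your .vimrc:

```
" keep modeline processing switched off
set nomodeline
```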

### Conclusion

Modeline may be an advanced feature of the Vim editor, but it's not very difficult to understand. There's no doubt that there's a bit of a learning curve involved, but that's not much to ask given how useful the feature is. Of course, there are security concerns, which means that you should weigh your options before enabling and using the feature.

Have you ever used the Modeline feature? How was your experience? Share with us (and the whole HowtoForge community) in the comments below.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/vim-modeline-settings/

作者:[Ansh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/vim-modeline-settings/
[1]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#usage
[2]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#vim-modeline
[3]:https://www.howtoforge.com/tutorial/vim-modeline-settings/#conclusion
[4]:https://www.howtoforge.com/tutorial/vim-editor-plugins-for-software-developers-3/
[5]:http://vim.wikia.com/wiki/Modeline_magic
[6]:https://tools.cisco.com/security/center/viewAlert.x?alertId=13223
[7]:http://usevim.com/2012/03/28/modelines/
[8]:https://tools.cisco.com/security/center/viewAlert.x?alertId=5169
@ -1,49 +0,0 @@
Oracle Policy Change Raises Prices on AWS
============================================================

>The change, which effectively doubles Oracle's prices for implementing its software on AWS, was put into effect quietly, with little notification to users.

![](http://windowsitpro.com/site-files/windowsitpro.com/files/imagecache/large_img/uploads/2017/02/ellison-hero.jpg)

News came last week that Oracle has, in effect, doubled the price for running its products on Amazon's cloud. It has done so with a bit of sleight-of-hand on [how it counts AWS's virtual CPUs][6]. It also did so without fanfare. The company's new pricing policy went into effect on January 23, and pretty much went unnoticed until January 28, when Oracle follower Tim Hall stumbled on the change in Big Red's ["Licensing Oracle Software in the Cloud Computing Environment"][7] document and blew the whistle.

At first glance, this move might not seem to mean much, as it only puts Oracle's AWS pricing on par with its prices on Microsoft Azure. But Azure is only about a third the size of market-leading AWS, so if you want to make money selling licenses in the cloud, AWS is the place to be. And while this move may or may not affect those already using Oracle on AWS -- it's not clear whether the new rules apply to those already using the products -- it will certainly push some new users who might otherwise consider Oracle to look elsewhere.

The main reason for this move is obvious. Oracle is hoping to make its own cloud more attractive -- which led [The Register to observe][8] with a bit of snark, "Larry Ellison did promise Oracle's cloud would be faster and cheaper." Faster and cheaper both remain to be seen. Faster maybe, if Oracle's SPARC cloud launches as planned and if it performs as advertised. Cheaper might be less likely. Oracle is known for playing hardball with its prices.

With declining sales of its signature database and business stack, and with its $7.4 billion bet on Sun not working out as planned, Oracle is betting its future on the cloud. But Oracle came late to the party and its efforts so far seem to be returning lackluster results, with some financial forecasters not seeing a bright future for Oracle Cloud. The cloud is a crowded market, they say, and the big four -- Amazon, Microsoft, IBM and Google -- already have a commanding lead.

That's true. But the biggest obstacle Oracle faces in the cloud is...well, Oracle. Its reputation precedes it.

It's an understatement to say the company is not known for stellar customer service. Indeed, press reports paint Oracle as something of a bully and a manipulator.

Back in 2015, for example, Oracle, evidently growing frustrated because its cloud wasn't growing as fast as anticipated, began [activating what Business Insider called the "nuclear option."][9] It would audit a client's datacenter and, if the client wasn't in compliance, it would issue a "breach notice" -- usually reserved only for cases of large scale abuse -- and order the client to quit using its software within 30 days.

In case you don't know, big corporations heavily invested in Oracle's stack absolutely couldn't migrate to another solution on such short notice. An Oracle breach notice spelled disaster.

"[T]o make the breach notice go away — or to reduce an outrageously high out-of-compliance fine — an Oracle sales rep often wants the customer to add cloud "credits" to the contract...," Business Insider's Julie Bort explained.

In other words, Oracle was using the audit to arm-twist clients into buying into its cloud, whether or not they had a need. There might also be a tie-in between this tactic and the recent price doubling on AWS. A commenter on Hall's article noted that the purpose behind the secrecy surrounding the price boost could possibly be to trigger software audits.

The trouble with employing tactics like these is that sooner or later they catch up with you. Word gets out. Your customers start looking for other options. It might be time for Big Red to take a page from Microsoft's playbook and start working to build a kinder and gentler Oracle that puts the needs of its customers first.

--------------------------------------------------------------------------------

via: http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws

作者:[Christine Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://windowsitpro.com/author/christine-hall
[1]:http://windowsitpro.com/penton_ur/nojs/user/register?path=node%2F186491&nid=186491&source=email
[2]:http://windowsitpro.com/author/christine-hall
[3]:http://windowsitpro.com/author/christine-hall
[4]:http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws#comments
[5]:http://windowsitpro.com/cloud/oracle-policy-change-raises-prices-aws#comments
[6]:https://oracle-base.com/blog/2017/01/28/oracles-cloud-licensing-change-be-warned/
[7]:http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
[8]:https://www.theregister.co.uk/2017/01/30/oracle_effectively_doubles_licence_fees_to_run_in_aws/
[9]:http://www.businessinsider.com/oracle-is-using-the-nuclear-option-to-sell-its-cloud-software-2015-7
@ -1,544 +0,0 @@
Blocking of international spam botnets with a Postfix plugin
============================================================

### On this page

1. [Introduction][1]
2. [How an international botnet works][2]
3. [Defending against botnet spammers][3]
4. [Installation][4]

This article contains an analysis of, and a solution for, blocking international SPAM botnets, along with a tutorial for installing the anti-spam plugin for the postfix firewall - postfwd - in the postfix MTA.

### Introduction

One of the most important and hardest tasks for every company that provides mail services is staying out of the mail blacklists.

If a mail domain appears in one of the mail domain blacklists, other mail servers will stop accepting and relaying its e-mails. This will practically ban the domain from the majority of mail providers and prevent the provider's customers from sending e-mails. There is only one thing that a mail provider can do afterwards: ask the blacklist providers for removal from the list, or change the IP addresses and domain names of its mail servers.

Getting into a mail blacklist is very easy when a mail provider does not have protection against spammers. Only one compromised customer mail account from which a hacker starts sending spam is needed to appear in a blacklist.

There are several ways in which hackers send spam from compromised mail accounts. In this article, I would like to show you how to completely mitigate international botnet spammers, who are characterized by logging into mail accounts from multiple IP addresses located in multiple countries worldwide.

### How an international botnet works

Hackers who use an international botnet for spamming operate very efficiently and are not easy to track. I started to analyze the behaviour of such an international spam botnet in October 2016 and implemented a plugin for the **postfix firewall** - **postfwd** - which intelligently bans all spammers from international botnets.

The first step was an analysis of the behavior of the international spam botnet, done by tracking one compromised mail account. I created a simple bash one-liner to select the sasl login IP addresses of the compromised mail account from the postfwd login mail logs.
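
A minimal sketch of such a one-liner, assuming standard postfix smtpd lines in /var/log/mail.log of the form `... client=host[1.2.3.4], ... sasl_username=user@example.com` (the account name and log path are placeholders; adjust them to your setup):

```
grep 'sasl_username=user@example.com' /var/log/mail.log \
  | grep -oE '\[[0-9]+(\.[0-9]+){3}\]' | tr -d '[]' \
  | sort | uniq -c | sort -rn
```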

**Data in the following table were dumped 90 minutes after the compromise of one mail account and contain these attributes:**

* IP addresses from which the hacker logged into the account (ip_address)
* Corresponding country codes of the IP addresses from the GeoIP database (state_code)
* Number of sasl logins which the hacker did from one IP address (login_count)

```
+-----------------+------------+-------------+
| ip_address      | state_code | login_count |
+-----------------+------------+-------------+
| 41.63.176.___   | AO         |           8 |
| 200.80.227.___  | AR         |          41 |
| 120.146.134.___ | AU         |          18 |
| 79.132.239.___  | BE         |          15 |
| 184.149.27.___  | CA         |           1 |
| 24.37.20.___    | CA         |          13 |
| 70.28.77.___    | CA         |          21 |
| 70.25.65.___    | CA         |          23 |
| 72.38.177.___   | CA         |          24 |
| 174.114.121.___ | CA         |          27 |
| 206.248.139.___ | CA         |           4 |
| 64.179.221.___  | CA         |           4 |
| 184.151.178.___ | CA         |          40 |
| 24.37.22.___    | CA         |          51 |
| 209.250.146.___ | CA         |          66 |
| 209.197.185.___ | CA         |           8 |
| 47.48.223.___   | CA         |           8 |
| 70.25.41.___    | CA         |          81 |
| 184.71.9.___    | CA         |          92 |
| 84.226.27.___   | CH         |           5 |
| 59.37.9.___     | CN         |           6 |
| 181.143.131.___ | CO         |          24 |
| 186.64.177.___  | CR         |           6 |
| 77.104.244.___  | CZ         |           1 |
| 78.108.109.___  | CZ         |          18 |
| 185.19.1.___    | CZ         |          58 |
| 95.208.250.___  | DE         |           1 |
| 79.215.89.___   | DE         |          15 |
| 47.71.223.___   | DE         |          23 |
| 31.18.251.___   | DE         |          27 |
| 2.164.183.___   | DE         |          32 |
| 79.239.97.___   | DE         |          32 |
| 80.187.103.___  | DE         |          54 |
| 109.84.1.___    | DE         |           6 |
| 212.97.234.___  | DK         |          49 |
| 190.131.134.___ | EC         |          42 |
| 84.77.172.___   | ES         |           1 |
| 91.117.105.___  | ES         |          10 |
| 185.87.99.___   | ES         |          14 |
| 95.16.51.___    | ES         |          15 |
| 95.127.182.___  | ES         |          16 |
| 195.77.90.___   | ES         |          19 |
| 188.86.18.___   | ES         |           2 |
| 212.145.210.___ | ES         |          38 |
| 148.3.169.___   | ES         |          39 |
| 95.16.35.___    | ES         |           4 |
| 81.202.61.___   | ES         |          45 |
| 88.7.246.___    | ES         |           7 |
| 81.36.5.___     | ES         |           8 |
| 88.14.192.___   | ES         |           8 |
| 212.97.161.___  | ES         |           9 |
| 193.248.156.___ | FR         |           5 |
| 82.34.32.___    | GB         |           1 |
| 86.180.214.___  | GB         |          11 |
| 81.108.174.___  | GB         |          12 |
| 86.11.209.___   | GB         |          13 |
| 86.150.224.___  | GB         |          15 |
| 2.102.31.___    | GB         |          17 |
| 93.152.88.___   | GB         |          18 |
| 86.178.68.___   | GB         |          19 |
| 176.248.121.___ | GB         |           2 |
| 2.97.227.___    | GB         |           2 |
| 62.49.34.___    | GB         |           2 |
| 79.64.78.___    | GB         |          20 |
| 2.126.140.___   | GB         |          22 |
| 87.114.222.___  | GB         |          23 |
| 188.29.164.___  | GB         |          24 |
| 82.11.14.___    | GB         |          26 |
| 81.168.46.___   | GB         |          29 |
| 86.136.125.___  | GB         |           3 |
| 90.199.85.___   | GB         |           3 |
| 86.177.93.___   | GB         |          31 |
| 82.32.186.___   | GB         |           4 |
| 79.68.153.___   | GB         |          46 |
| 151.226.42.___  | GB         |           6 |
| 2.123.234.___   | GB         |           6 |
| 90.217.211.___  | GB         |           6 |
| 212.159.148.___ | GB         |          68 |
| 88.111.94.___   | GB         |           7 |
| 77.98.186.___   | GB         |           9 |
| 41.222.232.___  | GH         |           4 |
| 176.63.29.___   | HU         |          30 |
| 86.47.237.___   | IE         |          10 |
| 37.46.22.___    | IE         |           4 |
| 95.83.249.___   | IE         |           4 |
| 109.79.69.___   | IE         |           6 |
| 79.176.100.___  | IL         |          13 |
| 122.175.34.___  | IN         |          19 |
| 114.143.5.___   | IN         |          26 |
| 115.112.159.___ | IN         |           4 |
| 79.62.179.___   | IT         |          11 |
| 79.53.217.___   | IT         |          19 |
| 188.216.54.___  | IT         |           2 |
| 46.44.203.___   | IT         |           2 |
| 80.86.57.___    | IT         |           2 |
| 5.170.192.___   | IT         |          27 |
| 80.23.42.___    | IT         |           3 |
| 89.249.177.___  | IT         |           3 |
| 93.39.141.___   | IT         |          31 |
| 80.183.6.___    | IT         |          34 |
| 79.25.107.___   | IT         |          35 |
| 81.208.25.___   | IT         |          39 |
| 151.57.154.___  | IT         |           4 |
| 79.60.239.___   | IT         |          42 |
| 79.47.25.___    | IT         |           5 |
| 188.216.114.___ | IT         |           7 |
| 151.31.139.___  | IT         |           8 |
| 46.185.139.___  | JO         |           9 |
| 211.180.177.___ | KR         |          22 |
| 31.214.125.___  | KW         |           2 |
| 89.203.17.___   | KW         |           3 |
| 94.187.138.___  | KW         |           4 |
| 209.59.110.___  | LC         |          18 |
| 41.137.40.___   | MA         |          12 |
| 189.211.204.___ | MX         |           5 |
| 89.98.64.___    | NL         |           6 |
| 195.241.8.___   | NL         |           9 |
| 195.1.82.___    | NO         |          70 |
| 200.46.9.___    | PA         |          30 |
| 111.125.66.___  | PH         |           1 |
| 89.174.81.___   | PL         |           7 |
| 64.89.12.___    | PR         |          24 |
| 82.154.194.___  | PT         |          12 |
| 188.48.145.___  | SA         |           8 |
| 42.61.41.___    | SG         |          25 |
| 87.197.112.___  | SK         |           3 |
| 116.58.231.___  | TH         |           4 |
| 195.162.90.___  | UA         |           5 |
| 108.185.167.___ | US         |           1 |
| 108.241.56.___  | US         |           1 |
| 198.24.64.___   | US         |           1 |
| 199.249.233.___ | US         |           1 |
| 204.8.13.___    | US         |           1 |
| 206.81.195.___  | US         |           1 |
| 208.75.20.___   | US         |           1 |
| 24.149.8.___    | US         |           1 |
| 24.178.7.___    | US         |           1 |
| 38.132.41.___   | US         |           1 |
| 63.233.138.___  | US         |           1 |
| 68.15.198.___   | US         |           1 |
| 72.26.57.___    | US         |           1 |
| 72.43.167.___   | US         |           1 |
| 74.65.154.___   | US         |           1 |
| 74.94.193.___   | US         |           1 |
| 75.150.97.___   | US         |           1 |
| 96.84.51.___    | US         |           1 |
| 96.90.244.___   | US         |           1 |
| 98.190.153.___  | US         |           1 |
| 12.23.72.___    | US         |          10 |
| 50.225.58.___   | US         |          10 |
| 64.140.101.___  | US         |          10 |
| 66.185.229.___  | US         |          10 |
| 70.63.88.___    | US         |          10 |
| 96.84.148.___   | US         |          10 |
| 107.178.12.___  | US         |          11 |
| 170.253.182.___ | US         |          11 |
| 206.127.77.___  | US         |          11 |
| 216.27.83.___   | US         |          11 |
| 72.196.170.___  | US         |          11 |
| 74.93.168.___   | US         |          11 |
| 108.60.97.___   | US         |          12 |
| 205.196.77.___  | US         |          12 |
| 63.159.160.___  | US         |          12 |
| 204.93.122.___  | US         |          13 |
| 206.169.117.___ | US         |          13 |
| 208.104.106.___ | US         |          13 |
| 65.28.31.___    | US         |          13 |
| 66.119.110.___  | US         |          13 |
| 67.84.164.___   | US         |          13 |
| 69.178.166.___  | US         |          13 |
| 71.232.229.___  | US         |          13 |
| 96.3.6.___      | US         |          13 |
| 205.214.233.___ | US         |          14 |
| 38.96.46.___    | US         |          14 |
| 67.61.214.___   | US         |          14 |
| 173.233.58.___  | US         |         141 |
| 64.251.53.___   | US         |          15 |
| 73.163.215.___  | US         |          15 |
| 24.61.176.___   | US         |          16 |
| 67.10.184.___   | US         |          16 |
| 173.14.42.___   | US         |          17 |
| 173.163.34.___  | US         |          17 |
| 104.138.114.___ | US         |          18 |
| 23.24.168.___   | US         |          18 |
| 50.202.9.___    | US         |          19 |
| 96.248.123.___  | US         |          19 |
| 98.191.183.___  | US         |          19 |
| 108.215.204.___ | US         |           2 |
| 50.198.37.___   | US         |           2 |
| 69.178.183.___  | US         |           2 |
| 74.190.39.___   | US         |           2 |
| 76.90.131.___   | US         |           2 |
| 96.38.10.___    | US         |           2 |
| 96.60.117.___   | US         |           2 |
| 96.93.6.___     | US         |           2 |
| 74.69.197.___   | US         |          21 |
| 98.140.180.___  | US         |          21 |
| 50.252.0.___    | US         |          22 |
| 69.71.200.___   | US         |          22 |
| 71.46.59.___    | US         |          22 |
| 74.7.35.___     | US         |          22 |
| 12.191.73.___   | US         |          23 |
| 208.123.156.___ | US         |          23 |
| 65.190.29.___   | US         |          23 |
| 67.136.192.___  | US         |          23 |
| 70.63.216.___   | US         |          23 |
| 96.66.144.___   | US         |          23 |
| 173.167.128.___ | US         |          24 |
| 64.183.78.___   | US         |          24 |
| 68.44.33.___    | US         |          24 |
| 23.25.9.___     | US         |          25 |
| 24.100.92.___   | US         |          25 |
| 107.185.110.___ | US         |          26 |
| 208.118.179.___ | US         |          26 |
| 216.133.120.___ | US         |          26 |
| 75.182.97.___   | US         |          26 |
| 107.167.202.___ | US         |          27 |
| 66.85.239.___   | US         |          27 |
| 71.122.125.___  | US         |          28 |
| 74.218.169.___  | US         |          28 |
| 76.177.204.___  | US         |          28 |
| 216.165.241.___ | US         |          29 |
| 24.178.50.___   | US         |          29 |
| 63.149.147.___  | US         |          29 |
| 174.66.84.___   | US         |           3 |
| 184.183.156.___ | US         |           3 |
| 50.233.39.___   | US         |           3 |
| 70.183.165.___  | US         |           3 |
| 71.178.212.___  | US         |           3 |
| 72.175.83.___   | US         |           3 |
| 74.142.22.___   | US         |           3 |
| 98.174.50.___   | US         |           3 |
| 98.251.168.___  | US         |           3 |
| 206.74.148.___  | US         |          30 |
| 24.131.201.___  | US         |          30 |
| 50.80.199.___   | US         |          30 |
| 69.251.49.___   | US         |          30 |
| 108.6.53.___    | US         |          31 |
| 74.84.229.___   | US         |          31 |
| 172.250.78.___  | US         |          32 |
| 173.14.75.___   | US         |          32 |
| 216.201.55.___  | US         |          33 |
| 40.130.243.___  | US         |          33 |
| 164.58.163.___  | US         |          34 |
| 70.182.187.___  | US         |          35 |
| 184.170.168.___ | US         |          37 |
| 198.46.110.___  | US         |          37 |
| 24.166.234.___  | US         |          37 |
| 65.34.19.___    | US         |          37 |
| 75.146.12.___   | US         |          37 |
| 107.199.135.___ | US         |          38 |
| 206.193.215.___ | US         |          38 |
| 50.254.150.___  | US         |          38 |
| 69.54.48.___    | US         |          38 |
| 172.8.30.___    | US         |           4 |
| 24.106.124.___  | US         |           4 |
| 65.127.169.___  | US         |           4 |
| 71.227.65.___   | US         |           4 |
| 71.58.72.___    | US         |           4 |
| 74.9.236.___    | US         |           4 |
| 12.166.108.___  | US         |          40 |
| 174.47.56.___   | US         |          40 |
| 66.76.176.___   | US         |          40 |
| 76.111.90.___   | US         |          41 |
| 96.10.70.___    | US         |          41 |
| 97.79.226.___   | US         |          41 |
| 174.79.117.___  | US         |          42 |
| 70.138.178.___  | US         |          42 |
| 64.233.225.___  | US         |          43 |
| 97.89.203.___   | US         |          43 |
| 12.28.231.___   | US         |          44 |
| 64.235.157.___  | US         |          45 |
| 76.110.237.___  | US         |          45 |
| 71.196.10.___   | US         |          46 |
| 173.167.177.___ | US         |          49 |
| 24.7.92.___     | US         |          49 |
| 68.187.225.___  | US         |          49 |
| 184.75.77.___   | US         |           5 |
| 208.91.186.___  | US         |           5 |
| 71.11.113.___   | US         |           5 |
| 75.151.112.___  | US         |           5 |
| 98.189.112.___  | US         |           5 |
| 69.170.187.___  | US         |          51 |
| 97.64.182.___   | US         |          51 |
| 24.239.92.___   | US         |          52 |
| 72.211.28.___   | US         |          53 |
| 66.179.44.___   | US         |          54 |
| 66.188.47.___   | US         |          55 |
| 64.60.22.___    | US         |          56 |
| 73.1.95.___     | US         |          56 |
| 75.140.143.___  | US         |          58 |
| 24.199.140.___  | US         |          59 |
| 216.240.53.___  | US         |           6 |
| 216.26.16.___   | US         |           6 |
| 50.242.1.___    | US         |           6 |
| 65.83.137.___   | US         |           6 |
| 68.119.102.___  | US         |           6 |
| 68.170.224.___  | US         |           6 |
| 74.94.231.___   | US         |           6 |
| 96.64.21.___    | US         |           6 |
| 71.187.41.___   | US         |          60 |
| 184.177.173.___ | US         |          61 |
| 75.71.114.___   | US         |          61 |
| 75.82.232.___   | US         |          61 |
| 97.77.161.___   | US         |          63 |
| 50.154.213.___  | US         |          65 |
| 96.85.169.___   | US         |          67 |
| 100.33.70.___   | US         |          68 |
| 98.100.71.___   | US         |          68 |
| 24.176.214.___  | US         |          69 |
| 74.113.89.___   | US         |          69 |
| 204.116.101.___ | US         |           7 |
| 216.216.68.___  | US         |           7 |
| 65.188.191.___  | US         |           7 |
| 69.15.165.___   | US         |           7 |
| 74.219.118.___  | US         |           7 |
| 173.10.219.___  | US         |          71 |
| 97.77.209.___   | US         |          72 |
| 173.163.236.___ | US         |          73 |
| 162.210.13.___  | US         |          79 |
| 12.236.19.___   | US         |           8 |
| 208.180.242.___ | US         |           8 |
| 24.221.97.___   | US         |           8 |
| 40.132.97.___   | US         |           8 |
| 50.79.227.___   | US         |           8 |
| 64.130.109.___  | US         |           8 |
| 66.80.57.___    | US         |           8 |
| 74.68.130.___   | US         |           8 |
| 74.70.242.___   | US         |           8 |
| 96.80.61.___    | US         |          81 |
| 74.43.153.___   | US         |          83 |
| 208.123.153.___ | US         |          85 |
| 75.149.238.___  | US         |          87 |
| 96.85.138.___   | US         |          89 |
| 208.117.200.___ | US         |           9 |
| 208.68.71.___   | US         |           9 |
| 50.253.180.___  | US         |           9 |
| 50.84.132.___   | US         |           9 |
| 63.139.29.___   | US         |           9 |
| 70.43.78.___    | US         |           9 |
| 74.94.154.___   | US         |           9 |
| 50.76.82.___    | US         |          94 |
+-----------------+------------+-------------+
```

**In the next table, we can see the distribution of IP addresses by country:**

```
+--------+
| 214 US |
|  28 GB |
|  17 IT |
|  15 ES |
|  15 CA |
|   8 DE |
|   4 IE |
|   3 KW |
|   3 IN |
|   3 CZ |
|   2 NL |
|   1 UA |
|   1 TH |
|   1 SK |
|   1 SG |
|   1 SA |
|   1 PT |
|   1 PR |
|   1 PL |
|   1 PH |
|   1 PA |
|   1 NO |
|   1 MX |
|   1 MA |
|   1 LC |
|   1 KR |
|   1 JO |
|   1 IL |
|   1 HU |
|   1 GH |
|   1 FR |
|   1 EC |
|   1 DK |
|   1 CR |
|   1 CO |
|   1 CN |
|   1 CH |
|   1 BE |
|   1 AU |
|   1 AR |
|   1 AO |
+--------+
```

Based on these tables, multiple facts can be drawn, according to which we designed our plugin:

* Spam was spread from a botnet. This is indicated by logins from a huge number of client IP addresses.
* Spam was spread with a low cadence of messages in order to avoid rate limits.
* Spam was spread from IP addresses in multiple countries (more than 30 countries after a few minutes), which indicates an international botnet.

From these tables we extracted the statistics of the IP addresses used, the number of logins, and the countries from which the hacker logged in:

* Total number of logins: 7531.
* Total number of IP addresses used: 342.
* Total number of unique countries: 41.

### Defending against botnet spammers

The solution to this kind of spam behavior was to make a plugin for the postfix firewall - postfwd. Postfwd is a program that can be used to block users by rate limits, by using mail blacklists, and by other means.

We designed and implemented a plugin that counts the number of unique countries from which a user logged in to his account by sasl authentication. Then, in the postfwd configuration, you can set a limit on the number of countries; once a user gets above the limit, he receives a selected SMTP code reply and is blocked from sending e-mails.
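
Conceptually, the ban condition boils down to counting distinct login countries per account. Here is a sketch of that idea as a query run through the mysql client, assuming a hypothetical `login_country` table with `sasl_username` and `state_code` columns (the actual schema is created and managed by the plugin):

```
# hypothetical table and column names - the real schema is managed by the plugin
mysql -u testuser -p test -e "SELECT sasl_username, COUNT(DISTINCT state_code) AS countries
  FROM login_country GROUP BY sasl_username;"
```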

I have been using this plugin at a medium-sized internet provider company for 6 months, and so far it has automatically caught over 50 compromised users without any intervention from the administrator's side. Another interesting fact after 6 months of usage: after finding a spammer and sending SMTP code 544 (Host not found - not in DNS) to the compromised account (sent directly from postfwd), the botnets stopped trying to log into the compromised accounts. It looks like the botnet spam application is intelligent and does not want to waste botnet resources. Sending other SMTP codes did not stop the botnet from trying.

The plugin is available at my company's github - [https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin][5]

### Installation

In this part, I will give you a basic tutorial on how to make postfix work with postfwd, how to install the plugin, and how to add a postfwd rule that uses it. The installation was tested and done on Debian 8 Jessie. Instructions for parts of this installation are also available on the github project page.

1\. First install and configure postfix with sasl authentication. There are a lot of great tutorials on the installation and configuration of postfix, therefore I will continue right away with the postfwd installation.

2\. The next thing after you have postfix with sasl authentication installed is to install postfwd. On Debian systems, you can do it with the apt package manager by executing the following command. (This will also automatically create a user **postfw** and the file **/etc/default/postfwd**, which we need to update with the correct configuration for autostart.)

apt-get install postfwd

3\. Now we proceed with downloading the git project with our postfwd plugin:

apt-get install git
git clone https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin /etc/postfix/postfwd-anti-geoip-spam-plugin
chown -R postfw:postfix /etc/postfix/postfwd-anti-geoip-spam-plugin/

4\. If you do not have git or do not want to use git, you can download the raw plugin file:

mkdir /etc/postfix/postfwd-anti-geoip-spam-plugin
wget https://raw.githubusercontent.com/Vnet-as/postfwd-anti-geoip-spam-plugin/master/postfwd-anti-spam.plugin -O /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin
chown -R postfw:postfix /etc/postfix/postfwd-anti-geoip-spam-plugin/

5\. Then update the postfwd default config in the **/etc/default/postfwd** file and add the plugin parameter '**--plugins /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin**' to it:

sed -i 's/STARTUP=0/STARTUP=1/' /etc/default/postfwd # Auto-Startup

sed -i 's/ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size"/#ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size"/' /etc/default/postfwd # Comment out old startup parameters

echo 'ARGS="--summary=600 --cache=600 --cache-rdomain-only --cache-no-size --plugins /etc/postfix/postfwd-anti-geoip-spam-plugin/postfwd-anti-spam.plugin"' >> /etc/default/postfwd # Add new startup parameters

6\. Now create a basic postfwd configuration file with the anti spam botnet rule:

cat <<_EOF_ >> /etc/postfix/postfwd.cf
# Anti spam botnet rule
# This example shows how to limit an e-mail address defined by sasl_username to logins from at most 5 different countries; otherwise it will be blocked from sending messages.
id=COUNTRY_LOGIN_COUNT ; \
sasl_username=~^(.+)$ ; \
incr_client_country_login_count != 0 ; \
action=dunno
id=BAN_BOTNET ; \
sasl_username=~^(.+)$ ; \
client_uniq_country_login_count > 5 ; \
action=rate(sasl_username/1/3600/554 Your mail account was compromised. Please change your password immediately after next login.)
_EOF_

7\. Update the postfix configuration file **/etc/postfix/main.cf** to use the policy service on the default postfwd port **10040** (or a different port according to the configuration in **/etc/default/postfwd**). Your configuration should have the following option in the **smtpd_recipient_restrictions** line. Note that the following restriction does not work without other restrictions, such as one of **reject_unknown_recipient_domain** or **reject_unauth_destination**.

echo 'smtpd_recipient_restrictions = check_policy_service inet:127.0.0.1:10040' >> /etc/postfix/main.cf

8\. Install the dependencies of the plugin:

`apt-get install -y libgeo-ip-perl libtime-piece-perl libdbd-mysql-perl libdbd-pg-perl`

9\. Install a MySQL or PostgreSQL database and configure one user which will be used by the plugin.
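
For example, a minimal MySQL setup matching the credentials used in the plugin configuration example of the next step (database **test**, user **testuser**, password **password** - placeholders you should change for any real deployment):

```
mysql -u root -p -e "CREATE DATABASE test;
  CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'password';
  GRANT ALL PRIVILEGES ON test.* TO 'testuser'@'localhost';
  FLUSH PRIVILEGES;"
```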

10\. Update the database connection part of the plugin to match your database backend configuration. This example shows the MySQL configuration for the user testuser and the database test.

```
# my $driver = "Pg";
my $driver = "mysql";
my $database = "test";
my $host = "127.0.0.1";
my $port = "3306";
# my $port = "5432";
my $dsn = "DBI:$driver:database=$database;host=$host;port=$port";
my $userid = "testuser";
my $password = "password";
```

11\. Now restart the postfix and postfwd services.

```
service postfix restart && service postfwd restart
```
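
As a quick sanity check after the restart, you can verify that the policy daemon is accepting connections on the expected port (assuming the default 10040):

```
ss -ltn | grep 10040
```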

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/

作者:[Ondrej Vasko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/
[1]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#introduction
[2]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#how-international-botnet-works
[3]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#defending-against-botnet-spammers
[4]:https://www.howtoforge.com/tutorial/blocking-of-international-spam-botnets-postfix-plugin/#installation
[5]:https://github.com/Vnet-as/postfwd-anti-geoip-spam-plugin
@ -1,109 +0,0 @@
Dotcra translating
Best Third-Party Repositories for CentOS
============================================================

![CentOS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/centos.png?itok=YRMQVk7U "CentOS")
>Get reliable up-to-date packages for CentOS from the Software Collections repository, EPEL, and Remi. [Creative Commons Attribution][1]

Red Hat Enterprise Linux, in the grand tradition of enterprise software vendors, packages and supports old mold long after it should be dead and buried. They don't do this out of laziness, but because that is what their customers want. A lot of businesses view software the same way they see furniture: you buy a desk once and keep it forever, and software is just like a desk.

CentOS, as a RHEL clone, suffers from this as well. Red Hat supports deprecated software that is no longer supported by upstream -- presumably patching security holes and keeping it working. But that is not good enough when you are running a software stack that requires newer versions. I have bumped into this numerous times running web servers on RHEL and CentOS. LAMP stacks are not forgiving, and every piece of the stack must be compatible with all of the others. For example, last year I had ongoing drama with RHEL/CentOS because version 6 shipped with PHP 5.3, and version 7 had PHP 5.4. PHP 5.3 was end-of-life in August 2014 and unsupported by upstream. PHP 5.4 went EOL in September 2015, and 5.5 in July 2016. MySQL, Python, and many other ancient packages that should be on display in museums as mummies also ship in these releases.

So, what's a despairing admin to do? If you run both RHEL and CentOS, turn first to the [Software Collections][3], as this is the only Red Hat-supported source of updated packages. There is a Software Collections repository for CentOS, and installing and managing it is similar to any third-party repository, with a couple of unique twists. (If you're running RHEL, the procedure is different, as it is for all software management; you must do it [the RHEL way][4].) Software Collections also supports Fedora and Scientific Linux.

### Installing Software Collections

Install Software Collections on CentOS 6 and 7 with this command:

```
$ sudo yum install centos-release-scl
```

Then use Yum to search for and install packages in the usual way:

```
$ yum search php7
[...]
rh-php70.x86_64 : Package that installs PHP 7.0
[...]
$ sudo yum install rh-php70
```

This may also pull in `centos-release-scl-rh` as a dependency.

There is one more step, and that is enabling your new packages:

```
$ scl enable rh-php70 bash
$ php -v
PHP 7.0.10
```

This runs a script that loads the new package and changes your environment, and you should see a change in your prompt. You must also install the appropriate connectors for the new package if necessary, for example for Python, PHP, and MySQL, and update configuration files (e.g., Apache) to use the new version.

The SCL package will not be active after a reboot. SCL is designed to run your old and new versions side-by-side and not overwrite your existing configurations. You can start your new packages automatically by sourcing their `enable` scripts in `.bashrc`. SCL installs everything into `/opt`, so for our PHP 7 example, add this line to `.bashrc`:

```
source /opt/rh/rh-php70/enable
```

It will automatically load and be available at startup, and you can go about your business cloaked in the warm glow of fresh up-to-date software.

### Listing Available Packages

So, what exactly do you get in Software Collections on CentOS? There are some extra community-maintained packages in `centos-release-scl`. You can see package lists in the [CentOS Wiki][5], or use Yum. First, let's see all our installed repos:

```
$ yum repolist
[...]
repo id                  repo name
base/7/x86_64            CentOS-7 - Base
centos-sclo-rh/x86_64    CentOS-7 - SCLo rh
centos-sclo-sclo/x86_64  CentOS-7 - SCLo sclo
extras/7/x86_64          CentOS-7 - Extras
updates/7/x86_64         CentOS-7 - Updates
```

Yum does not have a simple command to list packages in a single repo, so you have to do this:

```
$ yum --disablerepo "*" --enablerepo centos-sclo-rh \
 list available | less
```

This use of the `--disablerepo` and `--enablerepo` options is not well documented. You're not really disabling or enabling anything, but only limiting your search query to a single repo. It spits out a giant list of packages, and that is why we pipe it through `less`.

### EPEL

The excellent Fedora people maintain the [EPEL, Extra Packages for Enterprise Linux][6] repository for Fedora and all RHEL-compatible distributions. It contains updated package versions and software that is not included in the stock distributions. Install software from EPEL in the usual way, without having to bother with enable scripts. Specify that you want packages from EPEL using the `--disablerepo` and `--enablerepo` options:

```
$ sudo yum --disablerepo "*" --enablerepo epel install [package]
```
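
If the EPEL repo is not yet configured on your system, the usual way to add it on CentOS is via the `epel-release` package from the Extras repo:

```
$ sudo yum install epel-release
```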

### Remi Collet

Remi Collet maintains a large collection of updated and extra packages at [Remi's RPM repository][7]. Install EPEL first, as Remi's repo depends on it.
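
One common way to add Remi's repo on CentOS 7 (at the time of writing; check Remi's site for current instructions and other releases):

```
$ sudo yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
```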

The CentOS wiki has a list of [additional third-party repositories][8] to use, and some to avoid.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2017/2/best-third-party-repositories-centos

作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-attribution
[2]:https://www.linux.com/files/images/centospng
[3]:https://www.softwarecollections.org/en/
[4]:https://access.redhat.com/solutions/472793
[5]:https://wiki.centos.org/SpecialInterestGroup/SCLo/CollectionsList
[6]:https://fedoraproject.org/wiki/EPEL
[7]:http://rpms.remirepo.net/
[8]:https://wiki.centos.org/AdditionalResources/Repositories
@ -1,230 +0,0 @@
WRITE MARKDOWN WITH 8 EXCEPTIONAL OPEN SOURCE EDITORS
============================================================

### Markdown

By way of a succinct introduction, Markdown is a lightweight plain text formatting syntax created by John Gruber together with Aaron Swartz. Markdown lets individuals “write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)”. Markdown’s syntax consists of easy-to-remember symbols. It has a gentle learning curve; you can literally learn the Markdown syntax in the time it takes to fry some mushrooms (that’s about 10 minutes). By keeping the syntax as simple as possible, the risk of errors is minimized. Besides being a friendly syntax, it has the virtue of producing clean and valid (X)HTML output. If you have seen my HTML, you would know that’s pretty essential.
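
To give a taste of those symbols, here is a minimal sketch of a Markdown snippet (the URL is just a placeholder):

```
# A heading

Some *emphasized* and **strong** text, a [link](https://example.com), and a list:

* first item
* second item
```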

The main goal of the formatting syntax is to make it extremely readable. Users should be able to publish a Markdown-formatted document as plain text. Text written in Markdown has the virtue of being easy to share between computers, smart phones, and individuals. Almost all content management systems support Markdown. Its popularity as a format for writing for the web has also led to variants being adopted by many services such as GitHub and Stack Exchange.

Markdown can be composed in any text editor. But I recommend an editor purposely designed for this syntax. The software featured in this roundup allows an author to write professional documents of various formats including blog posts, presentations, reports, email, slides and more. All of the applications are, of course, released under an open source license. Users of Linux, OS X, and Windows are catered for.

* * *

### Remarkable

![Remarkable - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Remarkable.png?resize=800%2C319&ssl=1)

Let’s start with Remarkable. An apt name. Remarkable is a reasonably featured Markdown editor – it doesn’t have all the bells and whistles, but there’s nothing critical missing. Its syntax is like GitHub Flavored Markdown.

With this editor you can write Markdown and view the changes as you make them in the live preview window. You can export your files to PDF (with a TOC) and HTML. There are multiple styles available along with extensive configuration options, so you can configure it to your heart’s content.

Other features include:

* Syntax highlighting
* GitHub Flavored Markdown support
* MathJax support – render rich documents with advanced formatting
* Keyboard shortcuts

There are easy installers available for Debian, Ubuntu, Fedora, SUSE and Arch systems.

Homepage: [https://remarkableapp.github.io/][4]
License: MIT License

* * *

### Atom

![Atom - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Atom-Markdown.png?resize=800%2C328&ssl=1)

Make no bones about it, Atom is a fabulous text editor. Atom consists of over 50 open source packages integrated around a minimal core. With Node.js support, and a full set of features, Atom is my preferred way to edit code. It features in our [Killer Open Source Apps][5]; it is that masterly. But as a Markdown editor, Atom leaves a lot to be desired – its default packages are bereft of Markdown-specific features; for example, it doesn’t render equations, as illustrated in the graphic above.

But here lies the power of open source, and one of the reasons I’m a strong advocate of openness. There is a plethora of packages, some forks, which add the missing functionality. For example, Markdown Preview Plus provides a real-time preview of markdown documents, with math rendering and live reloading. Alternatively, you might try [Markdown Preview Enhanced][6]. If you need an auto-scroll feature, there’s [markdown-scroll-sync][7]. I’m a big fan of [Markdown-Writer][8] and [markdown-pdf][9]; the latter converts markdown to PDF, PNG and JPEG on the fly.

The approach embodies the open source mentality, allowing the user to add extensions to provide only the features needed. It reminds me of Woolworths pick ‘n’ mix sweets. A bit more effort, but the best outcome.

Homepage: [https://atom.io/][10]
License: MIT License

* * *

### Haroopad

![Haroopad - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Haroopad-1.png?resize=800%2C332&ssl=1)

Haroopad is an excellent markdown-enabled document processor for creating web-friendly documents. You can author various formats of documents such as blog articles, slides, presentations, reports, and e-mail. Haroopad runs on Windows, Mac OS X, and Linux. There are Debian/Ubuntu packages, and binaries for Windows and Mac. The application uses node-webkit, CodeMirror, marked, and Twitter Bootstrap.

Haroo means “A Day” in Korean.

The feature list is rather impressive; take a look below:

* Themes, Skins and UI Components
  * Over 30 different themes to edit – tomorrow-night-bright and zenburn are recent additions
  * Syntax highlighting in fenced code blocks in the editor
  * Ruby, Python, PHP, Javascript, C, HTML, CSS
  * Based on CodeMirror, a versatile text editor implemented in JavaScript for the browser
* Live Preview themes
  * 7 themes based on markdown-css
* Syntax Highlighting
  * 112 languages & 49 styles based on highlight.js
* Custom Theme
  * Style based on CSS (Cascading Style Sheet)
* Presentation Mode – useful for on-the-spot presentations
* Draw diagrams – flowcharts and sequence diagrams
* Tasklist
* Enhanced Markdown syntax with TOC, GitHub Flavored Markdown and extensions, mathematical expressions, footnotes, tasklists, and more
* Font Size
  * Editor and Viewer font size control using the Preference Window & Shortcuts
* Embedding Rich Media Contents
  * Video, Audio, 3D, Text, Open Graph and oEmbed
  * About 100 major internet services (YouTube, SoundCloud, Flickr …) supported
  * Drag & Drop support
* Display Mode
  * Default (Editor:Viewer), Reverse (Viewer:Editor), Only Editor, Only Viewer (View > Mode)
* Insert Current Date & Time
  * Various format support (Insert > Date & Time)
* HTML to Markdown
  * Drag & Drop your selected text from a web browser
* Options for markdown parsing
* Outline View
* Vim key-bindings for purists
* Markdown Auto Completion
* Export to PDF, HTML
* Styled HTML copy to clipboard for WYSIWYG editors
* Auto Save & Restore
* Document state information
* Tab or Spaces for Indentation
* Column (Single, Two and Three) Layout View
* Markdown Syntax Help Dialog
* Import and Export settings
* Support for LaTeX mathematical expressions using MathJax
* Export documents to HTML and PDF
* Build extensions for making your own features
* Effortlessly transform documents into a blog system: WordPress, Evernote and Tumblr
* Full screen mode – although the mode fails to hide the top menu bar or the bottom toolbar
* Internationalization support: English, Korean, Spanish, Chinese Simplified, German, Vietnamese, Russian, Greek, Portuguese, Japanese, Italian, Indonesian, Turkish, and French

Homepage: [http://pad.haroopress.com/][11]
License: GNU GPL v3

* * *

### StackEdit

![StackEdit - a web based Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/StackEdit.png?resize=800%2C311&ssl=1)

StackEdit is a full-featured Markdown editor based on PageDown, the Markdown library used by Stack Overflow and the other Stack Exchange sites. Unlike the other editors in this roundup, StackEdit is a web based editor. A Chrome app is also available.

Features include:

* Real-time HTML preview with a Scroll Link feature to bind the editor and preview scrollbars
* Markdown Extra/GitHub Flavored Markdown support and Prettify/Highlight.js syntax highlighting
* LaTeX mathematical expressions using MathJax
* WYSIWYG control buttons
* Configurable layout
* Theming support with different themes available
* A la carte extensions
* Offline editing
* Online synchronization with Google Drive (multi-accounts) and Dropbox
* One-click publishing to Blogger, Dropbox, Gist, GitHub, Google Drive, SSH server, Tumblr, and WordPress

Homepage: [https://stackedit.io/][12]
License: Apache License

* * *

### MacDown

![MacDown - OS X Markdown editor](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/02/MacDown.png?resize=800%2C422&ssl=1)

MacDown is the only editor featured in this roundup which only runs on macOS. Specifically, it requires OS X 10.8 or later. Hoedown is used internally to render Markdown into HTML, which gives an edge to its performance. Hoedown is a revived fork of Sundown; it is fully standards compliant with no dependencies, good extension support, and UTF-8 awareness.

MacDown is based on Mou, a proprietary solution designed for web developers.

It offers good Markdown rendering, syntax highlighting for fenced code blocks with language identifiers rendered by Prism, MathML and LaTeX rendering, GFM task lists, Jekyll front-matter, and optional advanced auto-completion. And above all, it isn’t a resource hog. Want to write Markdown on OS X? MacDown is my open source recommendation for web developers.

Homepage: [https://macdown.uranusjr.com/][13]
License: MIT License

* * *

### ghostwriter

![ghostwriter - cross-platform Markdown editor](https://i0.wp.com/www.ossblog.org/wp-content/uploads/2017/02/ghostwriter.png?resize=800%2C310&ssl=1)

ghostwriter is a cross-platform, aesthetic, distraction-free Markdown editor. It has built-in support for the Sundown processor, but can also auto-detect the Pandoc, MultiMarkdown, Discount and cmark processors. It seeks to be an unobtrusive editor.

ghostwriter has a good feature set which includes syntax highlighting, a full-screen mode, a focus mode, themes, spell checking with Hunspell, a live word count, a live HTML preview, custom CSS style sheets for the HTML preview, drag and drop support for images, and internationalization support. A Hemingway mode button disables the backspace and delete keys. A new Markdown cheat sheet HUD window is a useful addition. Theme support is pretty basic, but there are some experimental themes available at this [GitHub repository][14].

ghostwriter is an under-rated utility. I have come to appreciate the versatility of this application more and more, in part because its spartan interface helps the writer fully concentrate on curating content. Recommended.

ghostwriter is available for Linux and Windows. There is also a portable version available for Windows.

Homepage: [https://github.com/wereturtle/ghostwriter][15]
License: GNU GPL v3

* * *

### Abricotine

![Abricotine - cross-platform Markdown editor](https://i2.wp.com/www.ossblog.org/wp-content/uploads/2017/02/Abricotine.png?resize=800%2C316&ssl=1)

Abricotine is a promising cross-platform open-source markdown editor built for the desktop. It is available for Linux, OS X and Windows.

The application supports markdown syntax combined with some GitHub-flavored Markdown enhancements (such as tables). It lets users preview documents directly in the text editor, as opposed to a side pane.

The tool has a reasonable set of features, including a spell checker and the ability to save documents as HTML or copy rich text to paste into your email client. You can also display a document's table of contents in the side pane, and display syntax highlighting for code, as well as helpers, anchors and hidden characters. It is at a fairly early stage of development, with some basic bugs that need fixing, but it is one to keep an eye on. There are 2 themes, with the ability to add your own.

Homepage: [http://abricotine.brrd.fr/][16]
License: GNU General Public License v3 or later

* * *

### ReText

![ReText - Linux Markdown editor](https://i1.wp.com/www.ossblog.org/wp-content/uploads/2017/02/ReText.png?resize=800%2C270&ssl=1)

ReText is a simple but powerful editor for Markdown and reStructuredText. It gives users the power to control all output formatting. The files it works with are plain text files, however it can export to PDF, HTML and other formats. ReText is officially supported on Linux only.

Features include:

* Full screen mode
* Live previews
* Synchronised scrolling (for Markdown)
* Support for math formulas
* Spell checking
* Page breaks
* Export to HTML, ODT and PDF
* Use of other markup languages

Homepage: [https://github.com/retext-project/retext][17]
License: GNU GPL v2 or higher

--------------------------------------------------------------------------------

via: https://www.ossblog.org/markdown-editors/

作者:[Steve Emms][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ossblog.org/author/steve/
[1]:https://www.ossblog.org/author/steve/
[2]:https://www.ossblog.org/markdown-editors/#comments
[3]:https://www.ossblog.org/category/utilities/
[4]:https://remarkableapp.github.io/
[5]:https://www.ossblog.org/top-software/2/
[6]:https://atom.io/packages/markdown-preview-enhanced
[7]:https://atom.io/packages/markdown-scroll-sync
[8]:https://atom.io/packages/markdown-writer
[9]:https://atom.io/packages/markdown-pdf
[10]:https://atom.io/
[11]:http://pad.haroopress.com/
[12]:https://stackedit.io/
[13]:https://macdown.uranusjr.com/
[14]:https://github.com/jggouvea/ghostwriter-themes
[15]:https://github.com/wereturtle/ghostwriter
[16]:http://abricotine.brrd.fr/
[17]:https://github.com/retext-project/retext
How to install pandom: a true random number generator for Linux
============================================================

### On this page

1. [Introduction][40]
2. [1 Installation of pandom][41]
    1. [1.1 Gain root access][1]
    2. [1.2 Install build dependencies][2]
    3. [Arch based systems][3]
    4. [Debian based systems][4]
    5. [Red Hat based systems][5]
    6. [SUSE based systems][6]
    7. [1.3 Download and extract sources][7]
    8. [1.4 Test before installing (recommended)][8]
    9. [1.5 Determine init system][9]
    10. [1.6 Install pandom][10]
    11. [init.d based init system (e.g: upstart, sysvinit)][11]
    12. [systemd as init system][12]
3. [2 Analysis of checkme file][42]
    1. [2.1 Gain root access][13]
    2. [2.2 Install build dependencies][14]
    3. [Arch based systems][15]
    4. [Debian based systems][16]
    5. [Red Hat based systems][17]
    6. [SUSE based systems][18]
    7. [2.3 Download and extract sources][19]
    8. [2.4 Install entropyarray][20]
    9. [2.5 Analyze checkme file][21]
    10. [2.6 Uninstall entropyarray (optional)][22]
4. [3 Installation using debian repository][43]
    1. [3.1 Gain root access][23]
    2. [3.2 Install keyring][24]
    3. [3.3 Install sources list][25]
    4. [Wheezy][26]
    5. [Jessie][27]
    6. [Stretch][28]
    7. [3.4 Update sources list][29]
    8. [3.5 Test pandom][30]
    9. [3.6 Install pandom][31]
5. [4 Managing pandom][44]
    1. [4.1 Performance test][32]
    2. [4.2 Entropy and serial correlation test][33]
    3. [4.3 System service][34]
    4. [init.d based init system (e.g: upstart, sysvinit)][35]
    5. [systemd as init system][36]
6. [5 Increasing unpredictability or performance][45]
    1. [5.1 Edit source files][37]
    2. [5.2 Test the unpredictability][38]
    3. [5.3 Install personalized pandom][39]
This tutorial is for amd64 / x86_64 Linux kernels version 2.6.9 and above. It explains how to install [pandom][46]: a timing-jitter true random number generator maintained by ncomputers.org.

### Introduction

The built-in Linux kernel true random number generator provides low throughput under modern circumstances, for example on personal computers with solid state drives (SSD) and on virtual private servers (VPS).

This problem is becoming more common across Linux installations, because of the continuously increasing need for true random numbers, mainly for diverse cryptographic purposes.

Pandom outputs around 8 KiB/s of entropy at 64 [ubits][47] / 64 bits, is compatible with physical and virtual environments, and assumes that no other process running as root writes to /dev/random.
### 1 Installation of pandom

### 1.1 Gain root access

Pandom must be installed as root; run this command if needed.

```
su -
```

### 1.2 Install build dependencies

In order to download and install pandom, you need the GNU **as**sembler, GNU **make**, GNU **tar**, and GNU **wget** (the last two are usually installed already). You may uninstall them later at will.

### Arch based systems

```
pacman -S binutils make
```

### Debian based systems

```
apt-get install binutils make
```

### Red Hat based systems

```
dnf install binutils make
```

```
yum install binutils make
```

### SUSE based systems

```
zypper install binutils make
```

### 1.3 Download and extract sources

These commands download and extract the sources of pandom from ncomputers.org using **wget** and **tar**.

```
wget http://ncomputers.org/pandom.tar.gz
tar xf pandom.tar.gz
cd pandom/amd64-linux
```
### 1.4 Test before installing (recommended)

This recommended test takes around 8 minutes. It checks for kernel support and generates a file named **checkme** (analyzed in the next section).

```
make check
```
### 1.5 Determine init system

Before installing pandom, you need to know which init software your system uses. If the following command outputs the word **running**, your system is using **systemd**; otherwise it is likely that your system is using an **init.d** implementation (e.g: upstart, sysvinit). There may be some exceptions; more information can be found in these [unix.stackexchange.com][49] answers.

```
systemctl is-system-running
```

```
running
```
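If `systemctl` is absent altogether, the system is certainly not running systemd. As a rough cross-check (my suggestion, not part of the original tutorial), you can also inspect what runs as PID 1:

```
# prints "systemd" on systemd machines, typically "init" on init.d based ones
ps -p 1 -o comm=
```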
### 1.6 Install pandom

Once you know which init system your Linux installation uses, you may install pandom accordingly.

### init.d based init system (e.g: upstart, sysvinit)

Install pandom by running this command if your system uses an **init.d** implementation (e.g: upstart, sysvinit).

```
make install-init.d
```

### systemd as init system

Install pandom by running this command if your system uses **systemd**.

```
make install-systemd
```
### 2 Analysis of checkme file

Before using pandom for cryptographic purposes, it is highly recommended to analyze the **checkme** file generated during the installation process in the previous section of this tutorial. This task is useful for knowing whether the numbers are truly random or not. This section explains how to analyze the **checkme** file using ncomputers.org/**entropyarray**: a shell script that tests the entropy and serial correlation of its input.

**Note**: this analysis may be run on another computer, such as a laptop or desktop. For example, if you are installing pandom on a resource-constrained virtual private server (VPS), you might opt to copy the **checkme** file to your personal computer and analyze it there.
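As a minimal sketch of that copy step (the user, hostname, and path below are placeholders, not from the original):

```
# run on your personal computer; adjust user, host and path to match your VPS
scp root@vps.example.com:pandom/amd64-linux/checkme .
```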
### 2.1 Gain root access

Entropyarray must be installed as root; run this command if needed.

```
su -
```

### 2.2 Install build dependencies

In order to download and install entropyarray, you need the GNU **g++** compiler, GNU **make**, GNU **tar**, and GNU **wget** (the last two are usually installed already). You may uninstall them later at will.

### Arch based systems

```
pacman -S gcc make
```

### Debian based systems

```
apt-get install g++ make
```

### Red Hat based systems

```
dnf install gcc-c++ make
```

```
yum install gcc-c++ make
```

### SUSE based systems

```
zypper install gcc-c++ make
```
### 2.3 Download and extract sources

These commands download and extract the sources of entropyarray from ncomputers.org using **wget** and **tar**.

```
wget http://ncomputers.org/rearray.tar.gz
wget http://ncomputers.org/entropy.tar.gz
wget http://ncomputers.org/entropyarray.tar.gz

tar xf entropy.tar.gz
tar xf rearray.tar.gz
tar xf entropyarray.tar.gz
```
### 2.4 Install entropyarray

**Note**: errors regarding -std=c++11 mean that your GNU **g++** compiler version doesn't support the ISO C++ 2011 standard. You may try to compile ncomputers.org/**entropy** and ncomputers.org/**rearray** on another system that supports it (e.g. GNU g++ on a newer version of your favorite Linux distribution), and then install the compiled binaries using **make install** on the system where you would like to run **entropyarray**. Alternatively, you may skip this step, although it is highly recommended that you analyze the **checkme** file before using pandom for any cryptographic purpose.

```
cd rearray; make install; cd ..
cd entropy; make install; cd ..
cd entropyarray; make install; cd ..
```
### 2.5 Analyze checkme file

**Note**: 64 [ubits][53] / 64 bits pandom implementations should pass this test with entropy above **15.977** and **max** frequency below **70**. If your results differ too much, you may try to increase the unpredictability of your pandom implementation as described in the fifth section of this tutorial. In case you skipped the last step, you may use other tools, such as the [pseudorandom number sequence test][54].

```
entropyarray checkme
```

```
entropyarray in /tmp/tmp.mbCopmzqsg
15.977339
min:12
med:32
max:56
15.977368
min:11
med:32
max:58
15.977489
min:11
med:32
max:59
15.977077
min:12
med:32
max:60
15.977439
min:8
med:32
max:59
15.977374
min:13
med:32
max:60
15.977312
min:12
med:32
max:67
```
### 2.6 Uninstall entropyarray (optional)

If you do not plan to use entropyarray any more, you may uninstall it at will.

```
cd entropyarray; make uninstall; cd ..
cd entropy; make uninstall; cd ..
cd rearray; make uninstall; cd ..
```
### 3 Installation using debian repository

If you would like to keep pandom updated on your Debian based system, you may opt to install or reinstall it using the ncomputers.org debian repository.

### 3.1 Gain root access

The debian packages below must be installed as root; run this command if needed.

```
su -
```

### 3.2 Install keyring

This debian package includes the public key of the ncomputers.org debian repository.

```
wget http://ncomputers.org/debian/keyring.deb
dpkg -i keyring.deb
rm keyring.deb
```

### 3.3 Install sources list

These debian packages include the sources list of the ncomputers.org debian repository for the latest debian distributions (as of 2017).

**Note**: it is also possible to write the commented lines below into **/etc/apt/sources.list** instead of installing the respective debian package for your distribution (see the sketch after the Stretch section), but if these sources change in the future, you would then need to update them manually.
### Wheezy

```
#deb http://ncomputers.org/debian wheezy main
wget http://ncomputers.org/debian/wheezy.deb
dpkg -i wheezy.deb
rm wheezy.deb
```

### Jessie

```
#deb http://ncomputers.org/debian jessie main
wget http://ncomputers.org/debian/jessie.deb
dpkg -i jessie.deb
rm jessie.deb
```

### Stretch

```
#deb http://ncomputers.org/debian stretch main
wget http://ncomputers.org/debian/stretch.deb
dpkg -i stretch.deb
rm stretch.deb
```
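As a sketch of the manual alternative mentioned in the note above (using the Jessie line as an example; pick the line that matches your distribution):

```
# append the ncomputers.org repository line for Debian Jessie manually
echo "deb http://ncomputers.org/debian jessie main" >> /etc/apt/sources.list
```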
### 3.4 Update sources list

Once the keyring and sources list are installed, update the package index.

```
apt-get update
```
### 3.5 Test pandom

Once tested, you may uninstall the package below at will.

**Note**: if you have already tested pandom on your Linux installation, you may skip this step.

```
apt-get install pandom-test
pandom-test
```

```
generating checkme file, please wait around 8 minutes ...
entropyarray in /tmp/tmp.5SkiYsYG3h
15.977366
min:12
med:32
max:57
15.977367
min:13
med:32
max:57
15.977328
min:12
med:32
max:61
15.977431
min:12
med:32
max:59
15.977437
min:11
med:32
max:57
15.977298
min:11
med:32
max:59
15.977196
min:10
med:32
max:57
```
### 3.6 Install pandom

```
apt-get install pandom
```
### 4 Managing pandom

After pandom is installed, you may want to manage it.
### 4.1 Performance test

Pandom offers around 8 kilobytes per second, but its performance may vary depending on the environment.

```
dd if=/dev/random of=/dev/null bs=8 count=512
```

```
512+0 records in
512+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.451253 s, 9.1 kB/s
```
### 4.2 Entropy and serial correlation test

Besides ncomputers.org/**entropyarray**, there are other tests, for example the [NIST test suite by Ilja Gerhardt][62].

```
entropyarray /dev/random 1M
```
### 4.3 System service

Pandom runs as a system service.

### init.d based init system (e.g: upstart, sysvinit)

```
/etc/init.d/random status
/etc/init.d/random start
/etc/init.d/random stop
/etc/init.d/random restart
```

### systemd as init system

```
systemctl status random
systemctl start random
systemctl stop random
systemctl restart random
```
### 5 Increasing unpredictability or performance

If you would like to try to increase the unpredictability or the performance of your pandom implementation, you may try to add or delete CPU time measurements.

### 5.1 Edit source files

In the source files **test.s** and **tRNG.s**, add or remove measurement blocks at will.

```
#measurement block
mov $35,%rax    # syscall number 35 (nanosleep on x86-64)
syscall
rdtsc           # read the CPU time-stamp counter
[...]

#measurement block
mov $35,%rax    # syscall number 35 (nanosleep on x86-64)
syscall
rdtsc           # read the CPU time-stamp counter
[...]
```
### 5.2 Test the unpredictability

We recommend always testing any personalized pandom implementation before using it for cryptographic purposes.

```
make check
```
### 5.3 Install personalized pandom

If you are happy with the results, you may install your personalized pandom implementation.

```
make install
```

Additional information and updates: [http://ncomputers.org/pandom][63]
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/

作者:[Oliver][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/
[1]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access
[2]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-build-dependencies
[3]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#arch-based-systems
[4]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#debian-based-systems
[5]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#red-hat-based-systems
[6]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#suse-based-systems
[7]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-download-and-extract-sources
[8]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-before-installing-recommended
[9]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-determine-init-system
[10]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-pandom
[11]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#initd-based-init-system-eg-upstart-sysvinit
[12]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#systemd-as-init-system
[13]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access-2
[14]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-build-dependencies-2
[15]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#arch-based-systems-2
[16]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#debian-based-systems-2
[17]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#red-hat-based-systems-2
[18]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#suse-based-systems-2
[19]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-download-and-extract-sources-2
[20]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-entropyarray
[21]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-analyze-checkme-file
[22]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-uninstall-entropyarray-optional
[23]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-gain-root-access-3
[24]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-keyring
[25]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-sources-list
[26]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#wheezy
[27]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#jessie
[28]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#stretch
[29]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-update-sources-list
[30]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-pandom
[31]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-pandom-2
[32]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-performance-test
[33]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-entropy-and-serial-correlation-test
[34]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-system-service
[35]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#initd-based-init-system-eg-upstart-sysvinit-2
[36]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#systemd-as-init-system-2
[37]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-edit-source-files
[38]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-test-the-unpredictability
[39]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-install-personalized-pandom
[40]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#introduction
[41]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-installation-of-pandom
[42]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-analysis-of-checkme-file
[43]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-installation-using-debian-repository
[44]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-managing-pandom
[45]:https://www.howtoforge.com/tutorial/how-to-install-pandom-a-true-random-number-generator/#-increasing-unpredictability-or-performance
[46]:http://ncomputers.org/pandom
[47]:http://ncomputers.org/ubit
[48]:http://ncomputers.org/pandom.tar.gz
[49]:http://unix.stackexchange.com/a/18210/94448
[50]:http://ncomputers.org/rearray.tar.gz
[51]:http://ncomputers.org/entropy.tar.gz
[52]:http://ncomputers.org/entropyarray.tar.gz
[53]:http://ncomputers.org/ubit
[54]:http://www.fourmilab.ch/random/
[55]:http://ncomputers.org/debian/keyring.deb
[56]:http://ncomputers.org/debian
[57]:http://ncomputers.org/debian/wheezy.deb
[58]:http://ncomputers.org/debian
[59]:http://ncomputers.org/debian/jessie.deb
[60]:http://ncomputers.org/debian
[61]:http://ncomputers.org/debian/stretch.deb
[62]:https://gerhardt.ch/random.php
[63]:http://ncomputers.org/pandom
The Perfect Server CentOS 7.3 with Apache, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3.1
============================================================

### This tutorial exists for these OS versions

* **CentOS 7.3**
* [CentOS 7.2][3]
* [CentOS 7.1][4]
* [CentOS 7][5]

### On this page

1. [1 Requirements][6]
2. [2 Preliminary Note][7]
3. [3 Prepare the server][8]
4. [4 Enable Additional Repositories and Install Some Software][9]
5. [5 Quota][10]
    1. [Enabling quota on the / (root) partition][1]
    2. [Enabling quota on a separate /var partition][2]
6. [6 Install Apache, MySQL, phpMyAdmin][11]
This tutorial shows the installation of ISPConfig 3.1 on a CentOS 7.3 (64Bit) server. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman, and many more.

### 1 Requirements

To install such a system, you will need the following:

* A CentOS 7.3 minimal server system. This can be a server installed from scratch as described in our [CentOS 7.3 minimal server tutorial][12], or a virtual server or root server from a hosting company that has a minimal CentOS 7.3 setup installed.
* A fast Internet connection.
### 2 Preliminary Note

In this tutorial, I use the hostname server1.example.com with the IP address 192.168.1.100 and the gateway 192.168.1.1. These settings might differ for you, so you have to replace them where appropriate.

Please note that HHVM and XMPP are not supported in ISPConfig for the CentOS platform yet. If you would like to manage an XMPP chat server from within ISPConfig or use HHVM (HipHop Virtual Machine) in an ISPConfig website, then please use Debian 8 or Ubuntu 16.04 as the server OS instead of CentOS 7.3.
### 3 Prepare the server

**Set the keyboard layout**

In case the keyboard layout of the server does not match your keyboard, you can switch to the right keyboard (in my case "de" for a German keyboard layout) with the localectl command:

```
localectl set-keymap de
```

To get a list of all available keymaps, run:

```
localectl list-keymaps
```
I want to install ISPConfig at the end of this tutorial. ISPConfig ships with the Bastille firewall script that I will use as the firewall, therefore I disable the default CentOS firewall now. Of course, you are free to leave the CentOS firewall on and configure it to your needs (but then you shouldn't use any other firewall later on, as it will most probably interfere with the CentOS firewall).

Run...

```
yum -y install net-tools
systemctl stop firewalld.service
systemctl disable firewalld.service
```

... to stop and disable the CentOS firewall. It is OK if you get errors here; this just indicates that the firewall was not installed.

Then you should check that the firewall has really been disabled. To do so, run the command:

```
iptables -L
```

The output should look like this:

```
[root@server1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```

Or use the firewall-cmd command:

```
firewall-cmd --state
```

```
[root@server1 ~]# firewall-cmd --state
not running
[root@server1 ~]#
```
Now I will install the network configuration editor and the shell-based editor "nano" that I will use in the next steps to edit the config files:

```
yum -y install nano wget NetworkManager-tui
```

If you did not configure your network card during the installation, you can do that now. Run...

```
nmtui
```

... and go to "Edit a connection":

[![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui1.png)][13]

Select your network interface:

[![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui2.png)][14]

Then fill in your network details - disable DHCP and fill in a static IP address, a netmask, your gateway, and one or two nameservers, then hit Ok:

[![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui3.png)][15]

Next select OK to confirm the changes that you made in the network settings:

[![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui4.png)][16]

and Quit to close the nmtui network configuration tool.

[![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui5.png)][17]

You should run

```
ifconfig
```

now to check if the installer got your IP address right:

```
[root@server1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fecd:cc52 prefixlen 64 scopeid 0x20

        ether 00:0c:29:cd:cc:52 txqueuelen 1000 (Ethernet)
        RX packets 55621 bytes 79601094 (75.9 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 28115 bytes 2608239 (2.4 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10
        loop txqueuelen 0 (Local Loopback)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```

If your network card does not show up there, it is not enabled on boot. In this case, open the file /etc/sysconfig/network-scripts/ifcfg-ens33

```
nano /etc/sysconfig/network-scripts/ifcfg-ens33
```

and set ONBOOT to yes:

```
[...]
ONBOOT=yes
[...]
```

and reboot the server.

Check your /etc/resolv.conf to see if it lists all nameservers that you've previously configured:

```
cat /etc/resolv.conf
```

If nameservers are missing, run

```
nmtui
```

and add the missing nameservers again.

Now, on to the configuration...
**Adjusting /etc/hosts and /etc/hostname**

Next, we will edit /etc/hosts. Make it look like this:

```
nano /etc/hosts
```

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.100   server1.example.com     server1

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
```

Set the hostname in the /etc/hostname file. The file shall contain the fully qualified domain name (e.g. server1.example.com in my case) and not just the short name like "server1". Open the file with the nano editor:

```
nano /etc/hostname
```

And set the hostname in the file.

```
server1.example.com
```

Save the file and exit nano.
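Alternatively (my addition, not part of the original steps), CentOS 7 lets you set the same FQDN through systemd's hostnamectl, which writes /etc/hostname for you:

```
# set the fully qualified hostname in one step
hostnamectl set-hostname server1.example.com
```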
**Disable SELinux**

SELinux is a security extension of CentOS that should provide extended security. In my opinion you don't need it to configure a secure system, and it usually causes more problems than advantages (think of it after you have done a week of troubleshooting because some service wasn't working as expected, and then you find out that everything was OK and only SELinux was causing the problem). Therefore I disable it (this is a must if you want to install ISPConfig later on).

Edit /etc/selinux/config and set SELINUX=disabled:

```
nano /etc/selinux/config
```

```
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
```

Afterwards we must reboot the system:

```
reboot
```
### 4 Enable Additional Repositories and Install Some Software

First, we import the GPG keys for software packages:

```
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
```

Then we enable the EPEL repository on our CentOS system, as lots of the packages that we are going to install in the course of this tutorial are not available in the official CentOS 7 repository:

```
yum -y install epel-release
```

```
yum -y install yum-priorities
```

Edit /etc/yum.repos.d/epel.repo...

```
nano /etc/yum.repos.d/epel.repo
```

... and add the line priority=10 to the [epel] section:

```
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[...]
```
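To confirm that the repository is active, a quick optional check (my addition, not in the original tutorial) is:

```
# list enabled repositories and look for the EPEL entry
yum repolist enabled | grep -i epel
```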
Then we update our existing packages on the system:

```
yum -y update
```

Now we install some software packages that are needed later on:

```
yum -y groupinstall 'Development Tools'
```
### 5 Quota

(If you have chosen a different partitioning scheme than I did, you must adjust this chapter so that quota applies to the partitions where you need it.)

To install quota, we run this command:

```
yum -y install quota
```

Now we check if quota is already enabled for the filesystem where the website (/var/www) and maildir data (/var/vmail) is stored. In this example setup, I have one big root partition, so I search for ' / ':

```
mount | grep ' / '
```

```
[root@server1 ~]# mount | grep ' / '
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,noquota)
[root@server1 ~]#
```

If you have a separate /var partition, then use:

```
mount | grep ' /var '
```

instead. If the line contains the word "**noquota**", then proceed with the following steps to enable quota.
### Enabling quota on the / (root) partition

Normally you would enable quota in the /etc/fstab file, but if the filesystem is the root filesystem "/", then quota has to be enabled by a boot parameter of the Linux kernel.

Edit the grub configuration file:

```
nano /etc/default/grub
```

Search for the line that starts with GRUB_CMDLINE_LINUX and add rootflags=uquota,gquota to the command-line parameters, so that the resulting line looks like this:

```
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet rootflags=uquota,gquota"
```

Then apply the changes by running the following commands:

```
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg_bak
grub2-mkconfig -o /boot/grub2/grub.cfg
```

and reboot the server.

```
reboot
```
Now check if quota is enabled:

```
mount | grep ' / '
```

```
[root@server1 ~]# mount | grep ' / '
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
[root@server1 ~]#
```

When quota is active, we can see "**usrquota,grpquota**" in the mount option list.
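As an additional, optional check (my addition; XFS filesystems only), the xfs_quota tool can print a usage report once quota is active:

```
# print a human-readable quota usage report for the root filesystem
xfs_quota -x -c 'report -h' /
```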
### Enabling quota on a separate /var partition

If you have a separate /var partition, then edit /etc/fstab and add ,uquota,gquota to the /var partition (/dev/mapper/centos-var):

```
nano /etc/fstab
```

```
#
# /etc/fstab
# Created by anaconda on Sun Sep 21 16:33:45 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        1 1
/dev/mapper/centos-var  /var                    xfs     defaults,uquota,gquota        1 2
UUID=9ac06939-7e43-4efd-957a-486775edd7b4 /boot xfs     defaults        1 3
/dev/mapper/centos-swap swap                    swap    defaults        0 0
```

Then run

```
mount -o remount /var
```

```
quotacheck -avugm
quotaon -avug
```

to enable quota. If you get an error saying that no partition has quota enabled, reboot the server before you proceed.
### 6 Install Apache, MySQL, phpMyAdmin

We can install the needed packages with one single command:

```
yum -y install ntp httpd mod_ssl mariadb-server php php-mysql php-mbstring phpmyadmin
```

To ensure that the server cannot be attacked through the [HTTPOXY][18] vulnerability, we will disable the HTTP_PROXY header in Apache globally.

Add the Apache header rule at the end of the httpd.conf file:

```
echo "RequestHeader unset Proxy early" >> /etc/httpd/conf/httpd.conf
```

And restart httpd to apply the configuration change.

```
service httpd restart
```
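As a quick sanity check (my addition, not in the original tutorial) that the directive landed and the configuration still parses:

```
# show the appended line, then let Apache validate its configuration
grep -n "RequestHeader unset Proxy" /etc/httpd/conf/httpd.conf
apachectl configtest
```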
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/

作者:[Till Brehm][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
[1]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-the-root-partition
[2]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-a-separate-var-partition
[3]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-2-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
[4]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-1-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3/
[5]:https://www.howtoforge.com/perfect-server-centos-7-apache2-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3
[6]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-requirements
[7]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-preliminary-note
[8]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspprepare-the-server
[9]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspenable-additional-repositories-and-install-some-software
[10]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-quota
[11]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-install-apache-mysql-phpmyadmin
[12]:https://www.howtoforge.com/tutorial/centos-7-minimal-server/
[13]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui1.png
[14]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui2.png
[15]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui3.png
[16]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui4.png
[17]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui5.png
[18]:https://www.howtoforge.com/tutorial/httpoxy-protect-your-server/
Monitoring a production-ready microservice
============================================================

Explore essential components, principles, and key metrics.

![Container ship](https://d3tdunqjn7n0wj.cloudfront.net/360x240/container-1638068_1400-532657d38c05bb5bd8bd23571f7b3b88.jpg)

This is an excerpt from [Production-Ready Microservices][8], by Susan J. Fowler.

A production-ready microservice is one that is properly monitored. Proper monitoring is one of the most important parts of building a production-ready microservice and guarantees higher microservice availability. In this chapter, the essential components of microservice monitoring are covered, including which key metrics to monitor, how to log key metrics, building dashboards that display key metrics, how to approach alerting, and on-call best practices.
### Principles of Microservice Monitoring

The majority of outages in a microservice ecosystem are caused by bad deployments. The second most common cause of outages is the lack of proper _monitoring_. It's easy to see why this is the case. If the state of a microservice is unknown, if key metrics aren't tracked, then any precipitating failures will remain unknown until an actual outage occurs. By the time a microservice experiences an outage due to lack of monitoring, its availability has already been compromised. During these outages, the time to mitigation and time to repair are prolonged, pulling the availability of the microservice down even further: without easily accessible information about the microservice's key metrics, developers are often faced with a blank slate, unprepared to quickly resolve the issue. This is why proper monitoring is essential: it provides the development team with all of the relevant information about the microservice. When a microservice is properly monitored, its state is never unknown.

Monitoring a production-ready microservice has four components. The first is proper _logging_ of all relevant and important information, which allows developers to understand the state of the microservice at any time in the present or in the past. The second is the use of well-designed _dashboards_ that accurately reflect the health of the microservice, and are organized in such a way that anyone at the company could view the dashboard and understand the health and status of the microservice without difficulty. The third component is actionable and effective _alerting_ on all key metrics, a practice that makes it easy for developers to mitigate and resolve problems with the microservice before they cause outages. The final component is the implementation and practice of running a sustainable _on-call rotation_ responsible for the monitoring of the microservice. With effective logging, dashboards, alerting, and on-call rotation, the microservice's availability can be protected: failures and errors will be detected, mitigated, and resolved before they bring down any part of the microservice ecosystem.
###### A Production-Ready Service Is Properly Monitored

* Its key metrics are identified and monitored at the host, infrastructure, and microservice levels.
* It has appropriate logging that accurately reflects the past states of the microservice.
* Its dashboards are easy to interpret, and contain all key metrics.
* Its alerts are actionable and are defined by signal-providing thresholds.
* There is a dedicated on-call rotation responsible for monitoring and responding to any incidents and outages.
* There is a clear, well-defined, and standardized on-call procedure in place for handling incidents and outages.
### Key Metrics

Before we jump into the components of proper monitoring, it's important to identify precisely _what_ we want and need to monitor: we want to monitor a microservice, but what does that _actually_ mean? A microservice isn't an individual object that we can follow or track, it cannot be isolated and quarantined—it's far more complicated than that. Deployed across dozens, if not hundreds, of servers, the behavior of a microservice is the sum of its behavior across all of its instantiations, which isn't the easiest thing to quantify. The key is identifying which properties of a microservice are necessary and sufficient for describing its behavior, and then determining what changes in those properties tell us about the overall status and health of the microservice. We'll call these properties _key metrics_.

There are two types of key metrics: host and infrastructure metrics, and microservice metrics. Host and infrastructure metrics are those that pertain to the status of the infrastructure and the servers on which the microservice is running, while microservice metrics are metrics that are unique to the individual microservice. In terms of the four-layer model of the microservice ecosystem as described in [Chapter 1, _Microservices_][9], host and infrastructure metrics are metrics belonging to layers 1–3, while microservice metrics are those belonging to layer 4.

Separating key metrics into these two different types is important both organizationally and technically. Host and infrastructure metrics often affect more than one microservice: for example, if there is a problem with a particular server, and the microservice ecosystem shares the hardware resources among multiple microservices, host-level key metrics will be relevant to every microservice team that has a microservice deployed to that host. Likewise, microservice-specific metrics will rarely be applicable or useful to anyone but the team of developers working on that particular microservice. Teams should monitor both types of key metrics (that is, all metrics relevant to their microservice), and any metrics relevant to multiple microservices should be monitored and shared between the appropriate teams.

The host and infrastructure metrics that should be monitored for each microservice are the CPU utilized by the microservice on each host, the RAM utilized by the microservice on each host, the available threads, the microservice's open file descriptors (FD), and the number of database connections that the microservice has to any databases it uses. Monitoring these key metrics should be done in such a way that the status of each metric is accompanied by information about the infrastructure and the microservice. This means that monitoring should be granular enough that developers can know the status of the key metrics for their microservice on any particular host and across all of the hosts that it runs on. For example, developers should be able to know how much CPU their microservice is using on one particular host _and_ how much CPU their microservice is using across all hosts it runs on.
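As an illustration only (the process name is a hypothetical stand-in, and abstracted cluster setups will differ), these host-level numbers can be sampled per host with standard tools:

```
# sample host-level key metrics for a service process named "myservice" (hypothetical name)
PID=$(pgrep -o myservice)           # oldest matching process
ps -p "$PID" -o %cpu=,%mem=,nlwp=   # CPU %, RAM %, thread count
ls /proc/"$PID"/fd | wc -l          # open file descriptors
```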
### Monitoring Host-Level Metrics When Resources Are Abstracted

Some microservice ecosystems may use cluster management applications (like Mesos) in which the resources (CPU, RAM, etc.) are abstracted away from the host level. Host-level metrics won't be available in the same way to developers in these situations, but all key metrics for the microservice overall should still be monitored by the microservice team.
Determining the necessary and sufficient key metrics at the microservice level is a bit more complicated because it can depend on the particular language that the microservice is written in. Each language comes with its own special way of processing tasks, for example, and these language-specific features must be monitored closely in the majority of cases. Consider a Python service that utilizes uwsgi workers: the number of uwsgi workers is a necessary key metric for proper monitoring.

In addition to language-specific key metrics, we also must monitor the availability of the service, the service-level agreement (SLA) of the service, latency (of both the service as a whole and its API endpoints), success of API endpoints, responses and average response times of API endpoints, the services (clients) from which API requests originate (along with which endpoints they send requests to), errors and exceptions (both handled and unhandled), and the health and status of dependencies.

Importantly, all key metrics should be monitored everywhere that the application is deployed. This means that every stage of the deployment pipeline should be monitored. Staging must be closely monitored in order to catch any problems before a new candidate for production (a new build) is deployed to servers running production traffic. It almost goes without saying that all deployments to production servers should be monitored carefully, both in the canary and production deployment phases. (For more information on deployment pipelines, see [Chapter 3, _Stability and Reliability_][10].)

Once the key metrics for a microservice have been identified, the next step is to capture the metrics emitted by your service. Capture them, and then log them, graph them, and alert on them. We'll cover each of these steps in the following sections.
###### Summary of Key Metrics

**Host and infrastructure key metrics:**

* CPU
* RAM
* Threads
* File descriptors
* Database connections

**Microservice key metrics:**

* Language-specific metrics
* Availability
* Latency
* Endpoint success
* Endpoint responses
* Endpoint response times
* Clients
* Errors and exceptions
* Dependencies
### Logging

_Logging_ is the first component of production-ready monitoring. It begins and belongs in the codebase of each microservice, nestled deep within the code of each service, capturing all of the information necessary to describe the state of the microservice. In fact, describing the state of the microservice at any given time in the recent past is the ultimate goal of logging.

One of the benefits of microservice architecture is the freedom it gives developers to deploy new features and code changes frequently, and one of the consequences of this newfound developer freedom and increased development velocity is that the microservice is always changing. In most cases, the service will not be the same service it was 12 hours ago, let alone several days ago, and reproducing any problems will be impossible. When faced with a problem, often the only way to determine the root cause of an incident or outage is to comb through the logs, discover the state of the microservice at the time of the outage, and figure out why the service failed in that state. Logging needs to be such that developers can determine from the logs exactly what went wrong and where things fell apart.
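Purely as an illustration of state-describing log lines (the tag and fields below are invented, not from the book), even a shell-level service can emit structured entries via syslog:

```
# log one request's outcome as structured key=value fields to syslog
logger -t myservice "event=request_handled request_id=42 endpoint=/users status=200 duration_ms=37"
```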
### Logging Without Microservice Versioning

Microservice versioning is often discouraged because it can lead to other (client) services pinning to specific versions of a microservice that may not be the best or most updated version of the microservice. Without versioning, determining the state of a microservice when a failure or outage occurred can be difficult, but thorough logging can prevent this from becoming a problem: if the logging is good enough that the state of a microservice at the _time_ of an outage can be sufficiently known and understood, the lack of versioning ceases to be a hindrance to quick and effective mitigation and resolution.

Determining precisely _what_ to log is specific to each microservice. The best guidance on determining what needs to be logged is, somewhat unfortunately, necessarily vague: log whatever information is essential to describing the state of the service at a given time. Luckily, we can narrow down which information is necessary by restricting our logging to whatever can be contained in the code of the service. Host-level and infrastructure-level information won't (and shouldn't) be logged by the application itself, but by services and tools running the application platform. Some microservice-level key metrics and information, like hashed user IDs and request and response details, can and should be located in the microservice's logs.

There are, of course, some things that _should never, ever be logged_. Logs should never contain identifying information, such as names of customers, Social Security numbers, and other private data. They should never contain information that could present a security risk, such as passwords, access keys, or secrets. In most cases, even seemingly innocuous things like user IDs and usernames should not be logged unless encrypted.

At times, logging at the individual microservice level will not be enough. As we've seen throughout this book, microservices do not live alone, but within complex chains of clients and dependencies within the microservice ecosystem. While developers can try their best to log and monitor everything important and relevant to their service, tracking and logging requests and responses throughout the entire client and dependency chains from end to end can illuminate important information about the system that would otherwise go unknown (such as total latency and availability of the stack). To make this information accessible and visible, building a production-ready microservice ecosystem requires tracing each request through the entire stack.

The reader might have noticed at this point that it appears that a lot of information needs to be logged. Logs are data, and logging is expensive: they are expensive to store, they are expensive to access, and both storing and accessing logs comes with the additional cost associated with making expensive calls over the network. The cost of storing logs may not seem like much for an individual microservice, but if the logging needs of all the microservices within a microservice ecosystem are added together, the cost is rather high.
###### Warning

### Logs and Debugging

Avoid adding debugging logs in code that will be deployed to production—such logs are very costly. If any logs are added specifically for the purpose of debugging, developers should take great care to ensure that any branch or build containing these additional logs does not ever touch production.

Logging needs to be scalable, it needs to be available, and it needs to be easily accessible _and_ searchable. To keep the cost of logs down and to ensure scalability and high availability, it's often necessary to impose per-service logging quotas along with limits and standards on what information can be logged, how many logs each microservice can store, and how long the logs will be stored before being deleted.
### Dashboards

Every microservice must have at least one _dashboard_ where all key metrics (such as hardware utilization, database connections, availability, latency, responses, and the status of API endpoints) are collected and displayed. A dashboard is a graphical display that is updated in real time to reflect all the most important information about a microservice. Dashboards should be easily accessible, centralized, and standardized across the microservice ecosystem.

Dashboards should be easy to interpret so that an outsider can quickly determine the health of the microservice: anyone should be able to look at the dashboard and know immediately whether or not the microservice is working correctly. This requires striking a balance between overloading a viewer with information (which would render the dashboard effectively useless) and not displaying enough information (which would also make the dashboard useless): only the necessary minimum of information about key metrics should be displayed.

A dashboard should also serve as an accurate reflection of the overall quality of monitoring of the entire microservice. Any key metric that is alerted on should be included in the dashboard (we will cover this in the next section): the exclusion of any key metric in the dashboard will reflect poor monitoring of the service, while the inclusion of metrics that are not necessary will reflect a neglect of alerting (and, consequently, monitoring) best practices.

There are several exceptions to the rule against inclusion of nonkey metrics. In addition to key metrics, information about each phase of the deployment pipeline should be displayed, though not necessarily within the same dashboard. Developers working on microservices that require monitoring a large number of key metrics may opt to set up separate dashboards for each deployment phase (one for staging, one for canary, and one for production) to accurately reflect the health of the microservice at each deployment phase: since different builds will be running on the deployment phases simultaneously, accurately reflecting the health of the microservice in a dashboard might require approaching dashboard design with the goal of reflecting the health of the microservice at a particular deployment phase (treating them almost as different microservices, or at least as different instantiations of a microservice).
###### Warning

### Dashboards and Outage Detection

Even though dashboards can illuminate anomalies and negative trends of a microservice's key metrics, developers should never need to watch a microservice's dashboard in order to detect incidents and outages. Doing so is an anti-pattern that leads to deficiencies in alerting and overall monitoring.

To assist in determining problems introduced by new deployments, it helps to include information about when a deployment occurred in the dashboard. The most effective and useful way to accomplish this is to make sure that deployment times are shown within the graphs of each key metric. Doing so allows developers to quickly check graphs after each deployment to see if any strange patterns emerge in any of the key metrics.

Well-designed dashboards also give developers an easy, visual way to detect anomalies and determine alerting thresholds. Very slight or gradual changes or disturbances in key metrics run the risk of not being caught by alerting, but a careful look at an accurate dashboard can illuminate anomalies that would otherwise go undetected. Alerting thresholds, which we will cover in the next section, are notoriously difficult to determine, but can be set appropriately when historical data on the dashboard is examined: developers can see normal patterns in key metrics, view spikes in metrics that occurred with outages (or led to outages) in the past, and then set thresholds accordingly.
### Alerting
|
||||
|
||||
The third component of monitoring a production-ready microservice is real-time _alerting_ . The detection of failures, as well as the detection of changes within key metrics that could lead to a failure, is accomplished through alerting. To ensure this, all key metrics—host-level metrics, infrastructure metrics, and microservice-specific metrics—should be alerted on, with alerts set at various thresholds. Effective and actionable alerting is essential to preserving the availability of a microservice and preventing downtime.
|
||||
|
||||
|
||||
|
||||
### Setting up Effective Alerting

Alerts must be set up for all key metrics. Any change in a key metric at the host level, infrastructure level, or microservice level that could lead to an outage, cause a spike in latency, or otherwise harm the availability of the microservice should trigger an alert. Importantly, alerts should also be triggered whenever a key metric is _not_ seen.

All alerts should be useful: they should be defined by good, signal-providing thresholds. Three types of thresholds should be set for each key metric, each with both an upper and a lower bound: _normal_, _warning_, and _critical_. Normal thresholds reflect the usual, appropriate upper and lower bounds of each key metric and shouldn’t ever trigger an alert. Warning thresholds trigger alerts when there is a deviation from the norm that could lead to a problem with the microservice; they should be set so that alerts fire _before_ any deviation causes an outage or otherwise negatively affects the microservice. Critical thresholds should be set based on which upper and lower bounds on key metrics actually cause an outage, cause latency to spike, or otherwise hurt a microservice’s availability. In an ideal world, warning thresholds trigger alerts that lead to quick detection, mitigation, and resolution before any critical threshold is reached. In each category, thresholds should be high enough to avoid noise, but low enough to catch any and all real problems with key metrics.
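
As a sketch (not the book's implementation), the three threshold types can be read as nested bands, where anything outside the normal band but still inside the critical band counts as a warning; the type and names below are illustrative:

```
package alerting

// Severity of a metric reading relative to its configured thresholds.
type Severity int

const (
	Normal Severity = iota
	Warning
	Critical
)

// Thresholds holds the upper and lower bounds for one key metric:
// a normal band nested inside a wider critical band.
type Thresholds struct {
	NormalLow, NormalHigh     float64
	CriticalLow, CriticalHigh float64
}

// Classify maps a metric value to a severity: beyond the critical bounds
// is Critical, beyond the normal bounds (but within critical) is Warning.
func (t Thresholds) Classify(value float64) Severity {
	switch {
	case value < t.CriticalLow || value > t.CriticalHigh:
		return Critical
	case value < t.NormalLow || value > t.NormalHigh:
		return Warning
	default:
		return Normal
	}
}
```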

### Determining Thresholds Early in the Lifecycle of a Microservice

Thresholds for key metrics can be very difficult to set without historical data. Any thresholds set early in a microservice’s lifecycle run the risk of either being useless or triggering too many alerts. To determine the appropriate thresholds for a new microservice (or even an old one), developers can run load testing on the microservice to gauge where the thresholds should lie. Running "normal" traffic loads through the microservice can determine the normal thresholds, while running larger-than-expected traffic loads can help determine warning and critical thresholds.
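
One hedged way to turn load-test data into starting thresholds is to derive them from percentiles of the recorded samples, reusing the Thresholds type from the sketch above; the exact percentiles chosen here are arbitrary placeholders:

```
package alerting

import "sort"

// percentile returns the approximate p-th percentile (0-100) of samples
// using the nearest-rank method; it sorts a copy, leaving the input intact.
func percentile(samples []float64, p float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	return s[int(p/100*float64(len(s)-1))]
}

// thresholdsFromLoadTest derives tentative thresholds from metric samples
// recorded during a load test: the bulk of the observations bounds the
// normal band, while the extremes suggest the critical bounds.
func thresholdsFromLoadTest(samples []float64) Thresholds {
	return Thresholds{
		NormalLow:    percentile(samples, 1),
		NormalHigh:   percentile(samples, 95),
		CriticalLow:  percentile(samples, 0.1),
		CriticalHigh: percentile(samples, 99.9),
	}
}
```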

All alerts need to be actionable. Nonactionable alerts are those that are triggered and then resolved (or ignored) by the developer(s) on call for the microservice because they are not important, not relevant, do not signify that anything is wrong with the microservice, or alert on a problem that cannot be resolved by the developer(s). Any alert that cannot be immediately acted on by the on-call developer(s) should be removed from the pool of alerts, reassigned to the relevant on-call rotation, or (if possible) changed so that it becomes actionable.

Some key microservice metrics run the risk of being nonactionable. For example, alerting on the availability of dependencies can easily lead to nonactionable alerts if dependency outages, increases in dependency latency, or dependency downtime do not require any action from their client(s). If no action needs to be taken, then the thresholds should be adjusted accordingly, or in more extreme cases, no alerts should be set on dependencies at all. However, if any action at all should be taken, even something as small as contacting the dependency’s on-call or development team to flag the issue and/or coordinate mitigation and resolution, then an alert should be triggered.

### Handling Alerts

Once an alert has been triggered, it needs to be handled quickly and effectively. The root cause of the triggered alert should be mitigated and resolved. To handle alerts quickly and effectively, there are several steps that can be taken.

The first step is to create step-by-step instructions for each known alert that detail how to triage, mitigate, and resolve it. These instructions should live within an on-call runbook in the centralized documentation of each microservice, making them easily accessible to anyone who is on call for the microservice (more details on runbooks can be found in [Chapter 7, _Documentation and Understanding_][6]). Runbooks are crucial to the monitoring of a microservice: they give any on-call developer step-by-step instructions on how to mitigate and resolve the root causes of each alert. Since each alert is tied to a deviation in a key metric, runbooks can be written so that they address each key metric, known causes of deviations from the norm, and how to go about debugging the problem.

Two types of on-call runbooks should be created. The first are runbooks for host-level and infrastructure-level alerts that should be shared across the whole engineering organization; these should be written for every key host-level and infrastructure-level metric. The second are on-call runbooks for specific microservices, with step-by-step instructions for microservice-specific alerts triggered by changes in key metrics; for example, a spike in latency should trigger an alert, and there should be step-by-step instructions in the on-call runbook that clearly document how to debug, mitigate, and resolve spikes in the microservice’s latency.

The second step is to identify alerting anti-patterns. If the microservice on-call rotation is overwhelmed by alerts yet the microservice appears to work as expected, then any alert that is seen more than once but can be easily mitigated and/or resolved should be automated away: that is, the mitigation and/or resolution steps should be built into the microservice itself. Writing step-by-step instructions for alerts within on-call runbooks makes this strategy straightforward to execute; in fact, any alert that, once triggered, requires only a simple set of steps to be mitigated and resolved can easily be automated away. Once this level of production-ready monitoring has been established, a microservice should never experience the same exact problem twice.
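
A minimal sketch of what "building the mitigation into the microservice" could look like is a mapping from known alerts to automated remediation steps, paging a human only when automation is missing or fails; every name here is hypothetical:

```
package alerting

import "log"

// remediations maps alert names to automated mitigation steps, each one
// encoding what used to be a manual runbook procedure (names are made up).
var remediations = map[string]func() error{
	"disk-nearly-full":   rotateAndCompressLogs,
	"worker-queue-stuck": restartWorkerPool,
}

// handleAlert runs the automated remediation if one exists, and only
// pages a human when the alert is unknown or the automation itself fails.
func handleAlert(name string) {
	fix, ok := remediations[name]
	if !ok {
		pageOnCall(name) // no automation yet: a developer must triage
		return
	}
	if err := fix(); err != nil {
		log.Printf("automated remediation for %q failed: %v", name, err)
		pageOnCall(name)
	}
}

// Stubs standing in for real operations.
func rotateAndCompressLogs() error { return nil }
func restartWorkerPool() error     { return nil }
func pageOnCall(alert string)      {}
```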

### On-Call Rotations

In a microservice ecosystem, the development teams themselves are responsible for the availability of their microservices. Where monitoring is concerned, this means that developers need to be on call for their own microservices. The goal of each developer on call for a microservice needs to be clear: they are to detect, mitigate, and resolve any issue that arises with the microservice during their on-call shift before the issue causes an outage for their microservice or impacts the business itself.

In some larger engineering organizations, site reliability engineers, DevOps, or other operations engineers may take on the responsibility for monitoring and on call, but this requires each microservice to be relatively stable and reliable before the on-call responsibilities can be handed off to another team. In most microservice ecosystems, microservices rarely reach this high level of stability because, as we’ve seen throughout the previous chapters, microservices are constantly changing. In a microservice ecosystem, developers need to bear the responsibility of monitoring the code that they deploy.

Designing good on-call rotations is crucial and requires the involvement of the entire team. To prevent burnout, on-call rotations should be both brief and shared: no fewer than two developers should be on call at any one time, shifts should last no longer than one week, and each developer's shifts should be at least one month apart.

The on-call rotations of each microservice should be internally publicized and easily accessible. If a microservice team is experiencing issues with one of their dependencies, they should be able to track down the on-call engineers for that microservice and contact them very quickly. Hosting this information in a centralized place makes developers more effective at triaging problems and preventing outages.

Developing standardized on-call procedures across an engineering organization will go a long way toward building a sustainable microservice ecosystem. Developers should be trained in how to approach their on-call shifts, be made aware of on-call best practices, and be ramped up to join the on-call rotation very quickly. Standardizing this process and making on-call expectations completely clear to every developer will prevent the burnout, confusion, and frustration that usually accompany any mention of joining an on-call rotation.
### Evaluate Your Microservice

Now that you have a better understanding of monitoring, use the following list of questions to assess the production-readiness of your microservice(s) and microservice ecosystem. The questions are organized by topic, and correspond to the sections within this chapter.

### Key Metrics

* What are this microservice’s key metrics?

* What are the host and infrastructure metrics?

* What are the microservice-level metrics?

* Are all the microservice’s key metrics monitored?

### Logging

* What information does this microservice need to log?

* Does this microservice log all important requests?

* Does the logging accurately reflect the state of the microservice at any given time?

* Is this logging solution cost-effective and scalable?

### Dashboards

* Does this microservice have a dashboard?

* Is the dashboard easy to interpret? Are all key metrics displayed on the dashboard?

* Can I determine whether or not this microservice is working correctly by looking at the dashboard?

### Alerting

* Is there an alert for every key metric?

* Are all alerts defined by good, signal-providing thresholds?

* Are alert thresholds set appropriately so that alerts will fire before an outage occurs?

* Are all alerts actionable?

* Are there step-by-step triage, mitigation, and resolution instructions for each alert in the on-call runbook?

### On-Call Rotations

* Is there a dedicated on-call rotation responsible for monitoring this microservice?

* Is there a minimum of two developers on each on-call shift?

* Are there standardized on-call procedures across the engineering organization?

--------------------------------------------------------------------------------

作者简介:

Susan J. Fowler is the author of Production-Ready Microservices. She is currently an engineer at Stripe. Previously, Susan worked on microservice standardization at Uber, developed application platforms and infrastructure at several small startups, and studied particle physics at the University of Pennsylvania.

----------------------------

via: https://www.oreilly.com/learning/monitoring-a-production-ready-microservice

作者:[Susan Fowler][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.oreilly.com/people/susan_fowler
[1]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[2]:https://pixabay.com/en/container-container-ship-port-1638068/
[3]:https://www.oreilly.com/learning/monitoring-a-production-ready-microservice?imm_mid=0ee8c5&cmp=em-webops-na-na-newsltr_20170310
[4]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[5]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[6]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch07.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices
[7]:https://www.oreilly.com/people/susan_fowler
[8]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=monitoring-production-ready-microservices
[9]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch01.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices
[10]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch03.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices
@ -1,122 +0,0 @@

# How to work around video and subtitle embed errors

This is going to be a slightly weird tutorial. The background story is as follows. Recently, I created a bunch of [sweet][1] [parody][2] [clips][3] of the [Risitas y las paelleras][4] sketch, famous for the insane laughter of its protagonist, Risitas. As always, I had them uploaded to Youtube, but from the moment I decided on what subtitles to use to the moment when the videos finally became available online, there was a long and twisty journey.

In this guide, I would like to present several typical issues that you may encounter when creating your own media, mostly with subtitles and the subsequent upload to media sharing portals, specifically Youtube, and how you can work around those. After me.

### The background story

My software of choice for video editing is Kdenlive, which I started using when I created the most silly [Frankenstein][5] clip, and it's been my loyal companion ever since. Normally, I render files to a WebM container, with the VP8 video codec and Vorbis audio codec, because that's what Google likes. Indeed, I had no issues with the roughly 40 different clips I uploaded in the last seven odd years.

![Kdenlive, create project](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-create-project.jpg)

![Kdenlive, render](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-render.png)

However, after I completed my Risitas & Linux project, I was in a bit of a predicament. The video file and the subtitle file were still two separate entities, and I needed somehow to put them together. My original article on subtitle work mentions Avidemux and Handbrake, and both of these are valid options.

However, I was not too happy with the output generated by either one of these, and for a variety of reasons, something was ever so slightly off. Avidemux did not handle the video codecs well, whereas Handbrake omitted a couple of lines of subtitle text from the final product, and the font was ugly. Solvable, but not the topic for today.

Therefore, I decided to use VideoLAN (VLC) to embed subtitles onto the video. There are several ways to do this. You can use the Media > Convert/Save option, but this one does not have everything we need. Instead, you should use Media > Stream, which comes with a more fully fledged wizard, and it also offers an editable summary of the transcoding options, which we DO need - see my [tutorial][6] on subtitles for this please.

### Errors!

The process of embedding subtitles is not trivial. You will most likely encounter several problems along the way. This guide should help you work around these so you can focus on your work and not waste time debugging weird software errors. Anyhow, here's a small but probable collection of issues you will face while working with subtitles in VLC. Trial & error, but also nerdy design.

### No playable streams

You have probably chosen weird output settings. Double check that you have selected the right video and audio codecs, remember that some media players may not have all the codecs, and make sure you test on the system you want these clips to play on.

![No playable streams](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-no-playable-streams.png)

### Subtitles overlaid twice

This can happen if you check the box that reads Use a subtitle file in the first step of the streaming media wizard. Just select the file you need and click Stream. Leave the box unchecked.

![Select file](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-select.png)

### No subtitle output is generated

This can happen for two main reasons. One, you have selected the wrong encapsulation format. Do make sure the subtitles are marked correctly on the profile page when you edit it before proceeding. If the format does not support subtitles, it might not work.

![Encapsulation](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-encap.png)

Two, you may have left the subtitle codec render enabled in the final output. You do not need this. You only need to overlay the subtitles onto the video clip. Please check the generated stream output string and delete the option that reads scodec=<something> before you click the Stream button.

![Remove text from output string](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-remove-text.png)

### Missing codecs + workaround

This is a common [bug][7] due to how experimental codecs are implemented, and you will most likely see it if you choose the following profile: Video - H.264 + AAC (MP4). The file will be rendered, and if you selected subtitles, they will be overlaid, too, but without any audio. However, we can fix this with a hack.

![AAC codec](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-aac-codec.png)

![MP4A error](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-mp4a.png)

One possible hack is to start VLC from the command line with the --sout-ffmpeg-strict=-2 option (might work). The other and surer workaround is to take the audio-less video with the subtitles overlaid and re-render it through Kdenlive, using the original project video render (the one without subtitles) as the audio source. Sounds complicated, so in detail:

* Move existing clips (containing audio) from video to audio. Delete the rest.

* Alternatively, use the rendered WebM file as your audio source.

* Add a new clip - the one we created with embedded subtitles AND no audio.

* Place the clip as the new video.

* Render as WebM again.

![Repeat render](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-repeat-render.jpg)

Using other types of audio codecs will most likely work (e.g. MP3), and you will have a complete project with video, audio and subtitles. If you're happy that nothing is missing, you can now upload to Youtube. But then ...

### Youtube video manager & unknown format

If you're trying to upload a non-WebM clip (say MP4), you might get an unspecified error that your clip does not meet the media format requirements. I was not sure why VLC generated a non-Youtube-compliant file. However, again, the fix is easy. Use Kdenlive to recreate the video, and this should result in a file that has all the right meta fields and whatnot that Youtube likes. Back to my original story and the 40-odd clips created through Kdenlive this way.

P.S. If your clip has valid audio, then just re-run it through Kdenlive. If it does not, do the video/audio trick from before. Mute clips as necessary. In the end, this is just like an overlay, except you're using the video source from one clip and the audio from another for the final render. Job done.

### More reading

I do not wish to repeat myself or spam unnecessarily with links. I have loads of clips on VLC in the Software & Security section, so you might want to consult those. The earlier mentioned article on VLC & Subtitles has links to about half a dozen related tutorials, covering additional topics like streaming, logging, video rotation, remote file access, and more. I'm sure you can work the search engine like pros.

### Conclusion

I hope you find this guide helpful. It covers a lot, and I tried to make it linear and simple, and to address as many pitfalls as enterprising streamers and subtitle lovers may face when working with VLC. It's all about containers and codecs, but also the fact that there are virtually no standards in the media world, and when you go from one format to another, sometimes you may encounter corner cases.

If you do hit an error or three, the tips and tricks here should help you solve at least some of them, including unplayable streams, missing or duplicate subtitles, missing codecs and the wicked Kdenlive workaround, Youtube upload errors, hidden VLC command line options, and a few other extras. Quite a lot for a single piece of text, right. Luckily, all good stuff. Take care, children of the Internet. And if you have any other requests as to what my future VLC articles should cover, do feel liberated enough to send an email.

Cheers.

--------------------------------------------------------------------------------

作者简介:

My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?

From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.

I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.

Please see my full list of open-source projects, publications and patents; just scroll down.

For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.

-------------

via: http://www.dedoimedo.com/computers/vlc-subtitles-errors.html

作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.dedoimedo.com/faq.html
[1]:https://www.youtube.com/watch?v=MpDdGOKZ3dg
[2]:https://www.youtube.com/watch?v=KHG6fXEba0A
[3]:https://www.youtube.com/watch?v=TXw5lRi97YY
[4]:https://www.youtube.com/watch?v=cDphUib5iG4
[5]:http://www.dedoimedo.com/computers/frankenstein-media.html
[6]:http://www.dedoimedo.com/computers/vlc-subtitles.html
[7]:https://trac.videolan.org/vlc/ticket/6184
@ -1,272 +0,0 @@

Understanding 7z command switches - part I
============================================================

### On this page

1. [Include files][1]
2. [Exclude files][2]
3. [Set password for your archive][3]
4. [Set output directory][4]
5. [Creating multiple volumes][5]
6. [Set compression level of archive][6]
7. [Display technical information of archive][7]

7z is no doubt a feature-rich and powerful archiver (claimed to offer the highest compression ratio). Here at HowtoForge, we have [already discussed][9] how you can install and use it. But that discussion was limited to the basic features you can access using the 'function letters' the tool provides.

Expanding our coverage of the tool, in this tutorial we will be discussing some of the 'switches' 7z offers. But before we proceed, it's worth sharing that all the instructions and commands mentioned in this tutorial have been tested on Ubuntu 16.04 LTS.

**Note**: We will be using the files displayed in the following screenshot for performing various operations using 7zip.

[
![ls from test directory](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/ls.png)
][10]

### Include files

The 7z tool allows you to selectively include files in an archive. This feature can be accessed using the -i switch.

Syntax:

-i[r[-|0]]{@listfile|!wildcard}

For example, if you want to include only '.txt' files in your archive, you can use the following command:

$ 7z a '-i!*.txt' include.7z

Here is the output:

[
![add files to 7zip](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/include.png)
][11]

Now, to check whether the newly-created archive contains only '.txt' files, you can use the following command:

$ 7z l include.7z

Here is the output:

[
![Result](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/includelist.png)
][12]

In the above screenshot, you can see that only the 'testfile.txt' file has been added to the archive.

### Exclude files

If you want, you can also exclude files that you don't need. This can be done using the -x switch.

Syntax:

-x[r[-|0]]{@listfile|!wildcard}

For example, if you want to exclude a file named 'abc.7z' from the archive that you are going to create, then you can use the following command:

$ 7z a '-x!abc.7z' exclude.7z

Here is the output:

[
![exclude files from 7zip](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/exclude.png)
][13]

To check whether the resulting archive file has excluded 'abc.7z' or not, you can use the following command:

$ 7z l exclude.7z

Here is the output:

[
![result of file exclusion](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/excludelist.png)
][14]

In the above screenshot, you can see that the 'abc.7z' file has been excluded from the new archive.

**Pro tip**: Suppose the task is to exclude all .7z files with names starting with the letter 't' and include all .7z files with names starting with the letter 'a'. This can be done by combining both the '-i' and '-x' switches in the following way:

$ 7z a '-x!t*.7z' '-i!a*.7z' combination.7z

### Set password for your archive

7z also lets you password-protect your archive file. This feature can be accessed using the -p switch.

$ 7z a [archive-filename] -p[your-password] -mhe=[on/off]

**Note**: The -mhe option enables or disables archive header encryption (default is off).

For example:

$ 7z a password.7z -pHTF -mhe=on

Needless to say, when you extract your password-protected archive, the tool will ask you for the password. To extract a password-protected file, use the 'e' function letter. Following is an example:

$ 7z e password.7z

[
![protect 7zip archive with a password](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/password.png)
][15]

### Set output directory

The tool also lets you extract an archive file into the directory of your choice. This can be done using the -o switch. Needless to say, the switch only works when the command contains either the 'e' function letter or the 'x' function letter.

$ 7z [e/x] [existing-archive-filename] -o[path-of-directory]

For example, suppose the following command is run in the present working directory:

$ 7z e output.7z -ohow/to/forge

And, as the value passed to the -o switch suggests, the aim is to extract the archive into the ./how/to/forge directory.

Here is the output:

[
![7zip output directory](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/output.png)
][16]

In the above screenshot, you can see that all the contents of the existing archive have been extracted. But where? To check whether the archive was extracted into the ./how/to/forge directory, we can use the 'ls -R' command.

[
![result](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/ls_-R.png)
][17]

In the above screenshot, we can see that all the contents of output.7z have indeed been extracted to ./how/to/forge.

### Creating multiple volumes

With the help of the 7z tool, you can create multiple volumes (smaller sub-archives) of your archive file. This is very useful when transferring large files over a network or on a USB drive. This feature can be accessed using the -v switch. The switch requires you to specify the size of the sub-archives.

We can specify the size of sub-archives in bytes (b), kilobytes (k), megabytes (m) and gigabytes (g).

$ 7z a [archive-filename] [files-to-archive] -v[size-of-sub-archive1] -v[size-of-sub-archive2] ....

Let's understand this using an example. Please note that we will be using a new directory for performing operations with the -v switch.

Here is a screenshot of the directory contents:

[
![7zip volumes](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volumels.png)
][18]

Now, we can run the following command to create multiple volumes (sized 100b each) of an archive file:

$ 7z a volume.7z * -v100b

Here is the screenshot:

[
![compressing volumes](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volume.png)
][19]

Now, to see the list of sub-archives that were created, use the 'ls' command.

[
![list of archives](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volumels2.png)
][20]

As seen in the above screenshot, a total of four volumes have been created: volume.7z.001, volume.7z.002, volume.7z.003, and volume.7z.004.

**Note**: You can extract files using the .7z.001 archive. But, for that, all the other sub-archive volumes should be present in the same directory.
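
For example, assuming all four volumes created above are present in the current directory, extraction can be started from the first volume as follows (standard 7z usage; the remaining volumes are picked up automatically):

$ 7z x volume.7z.001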

### Set compression level of archive

7z also allows you to set the compression level of your archives. This feature can be accessed using the -m switch. There are various compression levels in 7z, such as -mx0, -mx1, -mx3, -mx5, -mx7 and -mx9.

Here's a brief summary of these levels:

* **-mx0** = Don't compress at all - just copy the contents to the archive.
* **-mx1** = Consumes the least time, but compression is low.
* **-mx3** = Better than -mx1.
* **-mx5** = This is the default (compression is normal).
* **-mx7** = Maximum compression.
* **-mx9** = Ultra compression.

**Note**: For more information on these compression levels, head [here][8].

$ 7z a [archive-filename] [files-to-archive] -mx=[0,1,3,5,7,9]

For example, we have a bunch of files and folders in a directory, which we tried compressing using a different compression level each time. Just to give you an idea, here's the command used when the archive was created with compression level '0' (note the quotes, which keep the shell from interpreting the parentheses in the filename):

$ 7z a 'compression(-mx0).7z' * -mx=0

Similarly, other commands were executed.

Here is the list of output archives (produced using the 'ls' command), with their names suggesting the compression level used in their creation, and the fifth column in the output revealing the effect of the compression level on their size.

[
![7zip compression level](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/compression.png)
][21]

### Display technical information of archive

If you want, 7z also lets you display technical information about an archive - its type, physical size, header size, and so on - on the standard output. This feature can be accessed using the -slt switch. This switch only works with the 'l' function letter.

$ 7z l -slt [archive-filename]

For example:

$ 7z l -slt abc.7z

Here is the output:

[
![](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/slt.png)
][22]

### Specify type of archive to create

If you want to create an archive in a format other than the default 7z, you can specify your choice using the -t switch.

$ 7z a -t[specify-type-of-archive] [archive-filename] [file-to-archive]

The following example shows a command to create a .zip file:

$ 7z a -tzip howtoforge *

The output file produced is 'howtoforge.zip'. To cross-verify its type, use the 'file' command:

[
![](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/type.png)
][23]

So, howtoforge.zip is indeed a ZIP file. Similarly, you can create the other kinds of archives that 7z supports.

### Conclusion

As you would agree, knowledge of 7z 'function letters' along with 'switches' lets you make the most of the tool. We aren't yet done with switches - there are some more that will be discussed in part 2.

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/understanding-7z-command-switches/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/
[1]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#include-files
[2]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#exclude-files
[3]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-password-for-your-archive
[4]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-output-directory
[5]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#creating-multiple-volumes
[6]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-compression-level-of-archive
[7]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#display-technical-information-of-archive
[8]:http://askubuntu.com/questions/491223/7z-ultra-settings-for-zip-format
[9]:https://www.howtoforge.com/tutorial/how-to-install-and-use-7zip-file-archiver-on-ubuntu-linux/
[10]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls.png
[11]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/include.png
[12]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/includelist.png
[13]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/exclude.png
[14]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/excludelist.png
[15]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/password.png
[16]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/output.png
[17]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls_-R.png
[18]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels.png
[19]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volume.png
[20]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels2.png
[21]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/compression.png
[22]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/slt.png
[23]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/type.png
@ -1,332 +0,0 @@

Our guide to a Golang logs world
============================================================

![golang logo](https://logmatic.io/wp-content/uploads/2017/03/golang-logo.png)

Do you ever get tired of solutions that use convoluted languages, that are complex to deploy, and for which building takes forever? Golang is the solution to these very issues, being as fast as C and as simple as Python.

But how do you monitor your application with Golang logs? There are no exceptions in Golang, only errors. Your first impression might thus be that developing a Golang logging strategy is not going to be a straightforward affair. The lack of exceptions is not in fact that troublesome, as exceptions have lost their exceptionality in many programming languages: they are often overused to the point of being overlooked.

We'll first cover Golang logging basics before going the extra mile to discuss standardization of Golang logs, the significance of metadata, and how to minimize the performance impact of logging. By then, you'll be able to track a user's behavior across your application, quickly identify failing components in your project, and monitor overall performance and user happiness.

### I. Basic Golang logging

### 1) Use the Golang "log" library

Golang provides you with a native [logging library][3] simply called "log". Its logger is perfectly suited to tracking simple behaviors, such as adding a timestamp before an error message, by using the available [flags][4].

Here is a basic example of how to log an error in Golang:

```
package main

import (
	"errors"
	"fmt"
	"log"
)

/* function for division, which returns an error on division by 0 */
func div(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	/* local variable definition */
	a, b := 10, 0

	ret, err := div(a, b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ret)
}
```

And here is what you get if you try to divide by 0:

![golang code](https://logmatic.io/wp-content/uploads/2017/03/golang-code.png)

In order to quickly test a function in Golang, you can use the [go playground][5].

To make sure your logs are easily accessible at all times, we recommend writing them to a file:

```
package main

import (
	"log"
	"os"
)

func main() {
	// create your file with desired read/write permissions
	f, err := os.OpenFile("filename", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
	if err != nil {
		log.Fatal(err)
	}
	// defer to close when you're done with it, not because you think it's idiomatic!
	defer f.Close()
	// set output of logs to f
	log.SetOutput(f)
	// test case
	log.Println("check to make sure it works")
}
```

You can find a complete tutorial on the Golang log library [here][6], and the complete list of available functions within the "log" [library][7].

So now you should be all set to log errors and their root causes.

But logs can also help you piece an activity stream together, identify an error context that needs fixing, or investigate how a single request impacts several layers and APIs in your system. And to get this enhanced type of vision, you first need to enrich your Golang logs with as much context as possible, as well as standardize the format you use across your project. This is where the Golang native library reaches its limits. The most widely used libraries are [glog][8] and [logrus][9]. It must be said though that many good libraries are available, so if you're already using one that uses the JSON format, you don't necessarily have to change libraries, as we'll explain just below.

### II. A consistent format for your Golang logs

### 1) The structuring advantage of JSON format

Structuring your Golang logs in one project or across multiple microservices is probably the hardest part of the journey, even though it _could_ seem trivial once done. Structuring your logs is what makes them especially readable by machines (cf. our [collecting logs best practices blogpost][10]). Flexibility and hierarchy are at the very core of the JSON format, so information can be easily parsed and manipulated by humans as well as by machines.

Here is an example of how to log in JSON format with the [Logrus/Logmatic.io][11] library:

```
package main

import (
	log "github.com/Sirupsen/logrus"
	"github.com/logmatic/logmatic-go"
)

func main() {
	// use JSONFormatter
	log.SetFormatter(&logmatic.JSONFormatter{})
	// log an event as usual with logrus
	log.WithFields(log.Fields{"string": "foo", "int": 1, "float": 1.1}).Info("My first ssl event from golang")
}
```

Which comes out as:

```
{
    "date": "2016-05-09T10:56:00+02:00",
    "float": 1.1,
    "int": 1,
    "level": "info",
    "message": "My first ssl event from golang",
    "string": "foo"
}
```

### 2) Standardization of Golang logs

It really is a shame when the same error encountered in different parts of your code is registered differently in the logs. Picture, for example, not being able to determine a web page's loading status because of an error on one variable. One developer logged:

```
message: 'unknown error: cannot determine loading status from unknown error: missing or invalid arg value client'
```

While the other registered:

```
unknown error: cannot determine loading status - invalid client
```

A good solution to enforce log standardization is to create an interface between your code and the logging library. This standardization interface would contain pre-defined log messages for all the behavior you want to see in your logs. Doing so prevents custom log messages that would not match your desired standard format, and in so doing facilitates log investigation.

![interface function](https://logmatic.io/wp-content/uploads/2017/03/functions-interface.png)

As log formats are centralized, it becomes way easier to keep them up to date. If a new type of issue arises, it only needs to be added to one interface for every team member to use the exact same message.

The most basic example would be to add the logger name and an id before the Golang log message. Your code would then send "events" to this interface, which would transform them into Golang log messages:

```
// Minimal definitions the original fragment assumed: an Event type, its
// format-string renderer, and a Logger wrapping a logrus entry (this
// fragment assumes the logrus import aliased as log, plus fmt, as above;
// the zero-padded id format is an assumption based on the sample output).
type Event struct {
	id      int
	message string
}

func (e Event) toString() string {
	return fmt.Sprintf("%05d - %s", e.id, e.message)
}

type Logger struct {
	entry *log.Entry
}

// The main part, we define all messages right here.
// We maintain an id to be sure to retrieve simply all messages
// once they are logged
var (
	invalidArgMessage      = Event{1, "Invalid arg: %s"}
	invalidArgValueMessage = Event{2, "Invalid arg value: %s => %v"}
	missingArgMessage      = Event{3, "Missing arg: %s"}
)

// And here they are, all the log events that can be used in our app
func (l *Logger) InvalidArg(name string) {
	l.entry.Errorf(invalidArgMessage.toString(), name)
}

func (l *Logger) InvalidArgValue(name string, value interface{}) {
	l.entry.WithField("arg."+name, value).Errorf(invalidArgValueMessage.toString(), name, value)
}

func (l *Logger) MissingArg(name string) {
	l.entry.Errorf(missingArgMessage.toString(), name)
}
```

So if we use the previous example of the invalid argument value, we would get similar log messages:

```
time="2017-02-24T23:12:31+01:00" level=error msg="LoadPageLogger00003 - Missing arg: client - cannot determine loading status" arg.client=<nil> logger.name=LoadPageLogger
```

And in JSON format:

```
{"arg.client":null,"level":"error","logger.name":"LoadPageLogger","msg":"LoadPageLogger00003 - Missing arg: client - cannot determine loading status", "time":"2017-02-24T23:14:28+01:00"}
```

### III. The power of context in Golang logs

Now that the Golang logs are written in a structured and standardized format, it is time to decide which context and other relevant information should be added to them. Context and metadata are critical for extracting insights from your logs, such as following a user's activity or their workflow.

For instance, the hostname, appname and session parameters could be added as follows using the JSON format of the logrus library:

```
// For metadata, a common pattern is to re-use fields between logging statements
contextualizedLog := log.WithFields(log.Fields{
	"hostname": "staging-1",
	"appname":  "foo-app",
	"session":  "1ce3f6v",
})
contextualizedLog.Info("Simple event with global metadata")
```

Metadata can be seen as breadcrumbs. To better illustrate how important they are, let's have a look at the use of metadata across several Golang microservices. You'll clearly see how decisive it is to track users on your application, because you do not simply need to know that an error occurred, but also on which instance and what pattern created the error. So let's imagine we have two microservices which are called sequentially. The contextual information is transmitted and stored in the headers:

```
// generateSessionId and generateTrackId are helpers assumed by the article
func helloMicroService1(w http.ResponseWriter, r *http.Request) {
	client := &http.Client{}
	// This service receives all incoming user requests, so we check
	// whether it's a new user session or another call from an existing one
	session := r.Header.Get("x-session")
	if session == "" {
		session = generateSessionId()
		// log something for the new session
	}
	// The track id is unique per request, so we generate one in each case
	track := generateTrackId()
	// Call your 2nd microservice, adding the session/track headers
	reqService2, _ := http.NewRequest("GET", "http://localhost:8082/", nil)
	reqService2.Header.Add("x-session", session)
	reqService2.Header.Add("x-track", track)
	resService2, _ := client.Do(reqService2)
	// ...
```

So when the second service is called:

```
func helloMicroService2(w http.ResponseWriter, r *http.Request) {
	// As in the first microservice, we check the session and generate a new track
	session := r.Header.Get("x-session")
	track := generateTrackId()
	// This time, we check if a track id is already set in the request;
	// if yes, it becomes the parent track
	parent := r.Header.Get("x-track")
	if session == "" {
		w.Header().Set("x-parent", parent)
	}
	// Add meta to the response
	w.Header().Set("x-session", session)
	w.Header().Set("x-track", track)
	if parent == "" {
		w.Header().Set("x-parent", track)
	}
	// Write the response body
	w.WriteHeader(http.StatusOK)
	io.WriteString(w, fmt.Sprintf(aResponseMessage, 2, session, track, parent))
}
```

Context and information relative to the initial query are now available in the second microservice, and a log message in JSON format looks like the following:

In the first microservice:

```
{"appname":"go-logging","level":"debug","msg":"hello from ms 1","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"UzWHRihF"}
```

Then in the second:

```
{"appname":"go-logging","level":"debug","msg":"hello from ms 2","parent":"UzWHRihF","session":"eUBrVfdw","time":"2017-03-02T15:29:26+01:00","track":"DPRHBMuE"}
```

In the case of an error occurring in the second microservice, we are now able, thanks to the contextual information held in the Golang logs, to determine how it was called and what pattern created the error.

If you wish to dig deeper into Golang tracking possibilities, there are several libraries that offer tracing features, such as [Opentracing][12]. This specific library delivers an easy way to add tracing implementations to complex (or simple) architectures. It allows you to track user queries across the different steps of any process, as done below:

![client transaction](https://logmatic.io/wp-content/uploads/2017/03/client-transaction.png)

### IV. Performance impact of Golang logging

### 1) Do not log in goroutines

It is tempting to create a new logger per goroutine, but it should not be done. A goroutine is a lightweight thread manager used to accomplish a "simple" task, and it should not be in charge of logging: doing so risks concurrency issues, as using log.New() in each goroutine would duplicate the interface and all loggers would concurrently try to access the same io.Writer. Moreover, libraries usually use a specific goroutine for log writing to limit the impact on your performance and avoid concurrent calls to the io.Writer.
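
A minimal sketch of the safe pattern is to create the logger once and share that single instance across goroutines; the standard library's log.Logger serializes access to its io.Writer, so concurrent use is fine:

```
package main

import (
	"log"
	"os"
	"sync"
)

func main() {
	// One logger for the whole process; *log.Logger guarantees serialized
	// access to its writer, so sharing it across goroutines is safe.
	logger := log.New(os.Stdout, "worker ", log.LstdFlags)

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Do not call log.New here: reuse the shared instance instead.
			logger.Printf("goroutine %d done", id)
		}(i)
	}
	wg.Wait()
}
```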

### 2) Work with asynchronous libraries

While it is true that many Golang logging libraries are available, it's important to note that most of them are synchronous (pseudo-asynchronous in fact). The reason for this is probably that, so far, no one has reported a serious impact on performance due to logging.

But as Kjell Hedström showed in [his experiment][13] using several threads that created millions of logs, asynchronous Golang logging could yield a 40% performance increase even in the worst-case scenario. So logging comes at a cost, and can have consequences on your application's performance. If you do not handle such a volume of logs, using a pseudo-asynchronous Golang logging library might be efficient enough. But if you're dealing with large amounts of logs, or are keen on performance, Kjell Hedström's asynchronous solution is interesting (despite the fact that you would probably have to develop it a bit, as it only contains the minimum required features).
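
To illustrate the idea (a bare-bones sketch, not Hedström's implementation), an asynchronous logger can decouple callers from the io.Writer by pushing messages onto a buffered channel that a single goroutine drains:

```
package main

import (
	"fmt"
	"os"
)

// asyncLogger lets callers drop messages into a buffered channel while a
// single background goroutine performs the actual writes. Real libraries
// add levels, formatting, and back-pressure policies on top of this.
type asyncLogger struct {
	ch   chan string
	done chan struct{}
}

func newAsyncLogger(buffer int) *asyncLogger {
	l := &asyncLogger{ch: make(chan string, buffer), done: make(chan struct{})}
	go func() {
		for msg := range l.ch {
			fmt.Fprintln(os.Stdout, msg)
		}
		close(l.done)
	}()
	return l
}

// Log enqueues a message; it blocks only when the buffer is full.
func (l *asyncLogger) Log(msg string) { l.ch <- msg }

// Close flushes pending messages and stops the writer goroutine.
func (l *asyncLogger) Close() {
	close(l.ch)
	<-l.done
}

func main() {
	logger := newAsyncLogger(1024)
	logger.Log("an asynchronous hello")
	logger.Close()
}
```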

### 3) Use severity levels to manage your Golang logs volume

Some logging libraries allow you to enable or disable specific loggers, which can come in handy. You might not need certain levels of logs once in production, for example. Here is an example of how to disable a logger, in the style of the glog library where loggers are defined as booleans:

```
package main

import "fmt"

// Log is a boolean switch that doubles as a logger
type Log bool

func (l Log) Println(args ...interface{}) {
	fmt.Println(args...)
}

var debug Log = false

func main() {
	if debug {
		debug.Println("DEBUGGING")
	}
}
```
|
||||
|
||||
You can then define those boolean parameters in a configuration file and use them to enable or disable loggers.
|
||||
|
||||
Golang logging can be expensive without a good Golang logging strategy. Developers should resist to the temptation of logging almost everything – even if much is interesting! If the purpose of logging is to gather as much information as possible, it has to be done properly in order to avoid the white noise of logs containing useless elements.
|
||||
|
||||
### V. Centralize Golang logs
|
||||
|
||||
![centralize go logs](https://logmatic.io/wp-content/uploads/2017/03/source-selector-1024x460-1.png)
|
||||
If your application is deployed on several servers, the hassle of connecting to each one of them to investigate a phenomenon can be avoided. Log centralization does make a difference.
|
||||
|
||||
Using log shippers such as Nxlog for windows, Rsyslog for linux (as it is installed by default) or Logstash and FluentD is the best way to do so. Log shippers only purpose is to send logs, and so they manage connection failures or other issues you could face very well.
|
||||
|
||||
There is even a [Golang syslog package][14] that takes care of sending Golang logs to the syslog daemon for you.
|
||||
|
||||
### Hope you enjoyed your Golang logs tour
|
||||
|
||||
Thinking about your Golang logging strategy at the beginning of your project is important. Tracking a user is much easier if overall context can be accessed from anywhere in the code. Reading logs from different services when they are not standardized is painful. Planning ahead to spread the same user or request id through several microservices will later on allow you to easily filter the information and follow an activity across your system.
|
||||
|
||||
Whether you’re building a large Golang project or several microservices also impacts your logging strategy. The main components of a large project should have their specific Golang logger named after their functionality. This enables you to instantly spot from which part of the code the logs are coming from. However with microservices or small Golang projects, fewer core components require their own logger. In each case though, the number of loggers should be kept below the number of core functionalities.
|
||||
|
||||
You're now all set to make quantified decisions about performance and users' happiness with your Golang logs!
|
||||
|
||||
_Is there a specific coding language you want to read about? Let us know on Twitter [@logmatic][2]._
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
|
||||
作者:[Nils][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://logmatic.io/blog/our-guide-to-a-golang-logs-world/
|
||||
[1]:https://twitter.com/logmatic?lang=en
|
||||
[2]:http://twitter.com/logmatic
|
||||
[3]:https://golang.org/pkg/log/
|
||||
[4]:https://golang.org/pkg/log/#pkg-constants
|
||||
[5]:https://play.golang.org/
|
||||
[6]:https://www.goinggo.net/2013/11/using-log-package-in-go.html
|
||||
[7]:https://golang.org/pkg/log/
|
||||
[8]:https://github.com/google/glog
|
||||
[9]:https://github.com/sirupsen/logrus
|
||||
[10]:https://logmatic.io/blog/beyond-application-monitoring-discover-logging-best-practices/
|
||||
[11]:https://github.com/logmatic/logmatic-go
|
||||
[12]:https://github.com/opentracing/opentracing-go
|
||||
[13]:https://sites.google.com/site/kjellhedstrom2/g2log-efficient-background-io-processign-with-c11/g2log-vs-google-s-glog-performance-comparison
|
||||
[14]:https://golang.org/pkg/log/syslog/
|
@ -1,3 +1,5 @@
|
||||
GitFuture is translating
|
||||
|
||||
OpenGL & Go Tutorial Part 2: Drawing the Game Board
|
||||
============================================================
|
||||
|
||||
|
@ -1,219 +0,0 @@
|
||||
translating by chenxinlong
|
||||
AWS cloud terminology
|
||||
============================================================
|
||||
|
||||
* * *
|
||||
|
||||
![AWS Cloud terminology](http://cdn2.kerneltalks.com/wp-content/uploads/2017/03/AWS-Cloud-terminology-150x150.png)
|
||||
|
||||
_Understand the AWS cloud terminology of 71 services! Get acquainted with the terms used in the AWS world to start your AWS cloud career!_
|
||||
|
||||
* * *
|
||||
|
||||
AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It is one of the most famous cloud platforms to date. Due to its flexibility, availability, elasticity, scalability and freedom from maintenance, many corporations are moving to the cloud. Since many companies use these services, it has become necessary for sysadmins and DevOps engineers to be aware of AWS.
|
||||
|
||||
This article aims at listing the services provided by AWS and explaining the terminology used in the AWS world.
|
||||
|
||||
As of today, AWS offers a total of 71 services, which are grouped together in 17 groups as below:
|
||||
|
||||
* * *
|
||||
|
||||
_Compute_
|
||||
|
||||
This is cloud computing, i.e. virtual server provisioning. This group provides the below services.

1. EC2: EC2 stands for Elastic Compute Cloud. This service provides you scalable [virtual machines per your requirements][11].
2. EC2 Container Service: A high-performance, highly scalable service which allows you to run services in an EC2 clustered environment.
3. Lightsail: This service enables the user to launch and manage virtual servers (EC2) very easily.
4. Elastic Beanstalk: This service manages the capacity provisioning, load balancing, scaling and health monitoring of your application automatically, thus reducing your management load.
5. Lambda: It allows you to run your code only when needed, without managing any servers for it.
6. Batch: It enables users to run computing workloads (batches) in a customized, managed way.
|
||||
|
||||
* * *
|
||||
|
||||
_Storage_
|
||||
|
||||
This is cloud storage, i.e. the cloud storage facility provided by Amazon. This group includes:

1. S3: S3 stands for Simple Storage Service (three times S). It provides you online storage to store/retrieve any data at any time, from anywhere.
2. EFS: EFS stands for Elastic File System. It is online storage which can be used with EC2 servers.
3. Glacier: It is a low-cost/slow-performance data storage solution mainly aimed at archives and long-term backups.
4. Storage Gateway: It is an interface which connects your on-premises applications (hosted outside AWS) with AWS storage.
|
||||
|
||||
* * *
|
||||
|
||||
_Database_
|
||||
|
||||
AWS also offers to host databases on their infrastructure so that clients can benefit from the cutting-edge tech Amazon has for faster/efficient/secured data processing. This group includes:

1. RDS: RDS stands for Relational Database Service. It helps to set up, operate and manage a relational database in the cloud.
2. DynamoDB: It is a NoSQL database providing fast processing and high scalability.
3. ElastiCache: It is a way to manage in-memory caches for your web applications to make them run faster!
4. Redshift: It is a huge (petabyte-size), fully scalable data warehouse service in the cloud.
|
||||
|
||||
* * *
|
||||
|
||||
_Networking & Content Delivery_
|
||||
|
||||
As AWS provides cloud EC2 servers, it is a corollary that networking will be in the picture too. Content delivery is used to serve files to users from the location geographically nearest to them. This is pretty famous for speeding up websites nowadays.

1. VPC: VPC stands for Virtual Private Cloud. It is your very own virtual network dedicated to your AWS account.
2. CloudFront: It is the content delivery network of AWS.
3. Direct Connect: It is a way of connecting your datacenter/premises with AWS via a dedicated network to increase throughput, reduce network costs and avoid connectivity issues which may arise with internet-based connectivity.
4. Route 53: It is a cloud domain name system (DNS) web service.
|
||||
|
||||
* * *
|
||||
|
||||
_Migration_
|
||||
|
||||
This is a set of services to help you migrate from on-premises services to AWS. It includes:

1. Application Discovery Service: A service dedicated to analysing your servers, network and applications to help/speed up migration.
2. DMS: DMS stands for Database Migration Service. It is used to migrate your data from an on-premises DB to RDS or a DB hosted on EC2.
3. Server Migration: Also called SMS (Server Migration Service), it is an agentless service which moves your workloads from on-premises to AWS.
4. Snowball: Intended for use when you want to transfer huge amounts of data in/out of AWS using physical storage appliances (rather than internet/network-based transfers).
|
||||
|
||||
* * *
|
||||
|
||||
_Developer Tools_
|
||||
|
||||
As the name suggests, this is a group of services helping developers code in an easier/better way on the cloud.

1. CodeCommit: It is a secure, scalable, managed source control service to host code repositories.
2. CodeBuild: A code builder in the cloud. It executes and tests code and builds software packages for deployment.
3. CodeDeploy: A deployment service to automate application deployments to AWS servers or on-premises.
4. CodePipeline: This deployment service enables coders to visualize their application release process.
5. X-Ray: Analyse applications through their event calls.
|
||||
|
||||
* * *
|
||||
|
||||
_Management Tools_
|
||||
|
||||
A group of services which help you manage your web services in the AWS cloud.

1. CloudWatch: A monitoring service to monitor your AWS resources or applications.
2. CloudFormation: Infrastructure as code! It is a way of managing AWS-related infra in a collective and orderly manner.
3. CloudTrail: An audit and compliance tool for your AWS account.
4. Config: AWS resource inventory, configuration history and configuration change notifications to enable security and governance.
5. OpsWorks: Automation to configure and deploy EC2 or on-premises compute.
6. Service Catalog: Create and manage catalogs of IT services which are approved for use in your/your company's account.
7. Trusted Advisor: It is an AWS AI which helps you get a better, money-saving AWS infra by inspecting your AWS infra.
8. Managed Service: Provides ongoing infra management.
|
||||
|
||||
* * *
|
||||
|
||||
_Security, Identity & compliance_
|
||||
|
||||
An important group of AWS services helping you secure your AWS space.

1. IAM: IAM stands for Identity and Access Management. It controls user access to your AWS resources and services.
2. Inspector: An automated security assessment service helping you secure your apps on AWS and keep them compliant.
3. Certificate Manager: Provision, manage and deploy SSL/TLS certificates for AWS applications.
4. Directory Service: It is Microsoft Active Directory for AWS.
5. WAF & Shield: WAF stands for Web Application Firewall. It monitors and controls access to your content on CloudFront or a load balancer.
6. Compliance Reports: Compliance reporting for your AWS infra space to make sure your apps and infra are compliant with your policies.
|
||||
|
||||
* * *
|
||||
|
||||
_Analytics_
|
||||
|
||||
Data analytics of your AWS space to help you see, plan and act on what is happening in your account.

1. Athena: It is a SQL-based query service to analyse data stored in S3.
2. EMR: EMR stands for Elastic MapReduce. A service for big data processing and analysis.
3. CloudSearch: Search capability for AWS within applications and services.
4. Elasticsearch Service: To create a domain and deploy, operate and scale Elasticsearch clusters in the AWS cloud.
5. Kinesis: Streams large amounts of data in real time.
6. Data Pipeline: Helps to move data between different AWS services.
7. QuickSight: Collect, analyse and present insights from business data on AWS.
|
||||
|
||||
* * *
|
||||
|
||||
_Artificial Intelligence_
|
||||
|
||||
AI in AWS!
|
||||
|
||||
1. Lex: Helps to build conversational interfaces in applications using voice and text.
2. Polly: It is a text-to-speech service.
3. Rekognition: Gives you the ability to add image analysis to applications.
4. Machine Learning: It has algorithms to learn patterns in your data.
|
||||
|
||||
* * *
|
||||
|
||||
_Internet of Things_
|
||||
|
||||
This group of services enables AWS to work with internet-connected devices.

1. AWS IoT: It lets connected hardware devices interact with AWS applications.
|
||||
|
||||
* * *
|
||||
|
||||
_Game Development_
|
||||
|
||||
As the name suggests, this group of services aims at game development.

1. Amazon GameLift: This service aims at deploying and managing dedicated gaming servers for session-based multiplayer games.
|
||||
|
||||
* * *
|
||||
|
||||
_Mobile Services_
|
||||
|
||||
A group of services mainly aimed at handheld devices.

1. Mobile Hub: Helps you create mobile app backend features and integrate them into mobile apps.
2. Cognito: Controls mobile users' authentication and access to AWS on internet-connected devices.
3. Device Farm: A mobile app testing service which enables you to test apps across Android and iOS on real phones hosted by AWS.
4. Mobile Analytics: Measure, track and analyze mobile app data on AWS.
5. Pinpoint: Targeted push notifications and mobile engagement.
|
||||
|
||||
* * *
|
||||
|
||||
_Application Services_
|
||||
|
||||
This is a group of services which can be used with your applications in AWS.

1. Step Functions: Define and use various functions in your applications.
2. SWF: SWF stands for Simple Workflow Service. It is a cloud workflow management service which helps developers coordinate and contribute at different stages of the application life cycle.
3. API Gateway: Helps developers create, manage and host APIs.
4. Elastic Transcoder: Helps developers convert media files to play on various devices.
|
||||
|
||||
* * *
|
||||
|
||||
_Messaging_
|
||||
|
||||
Notification and messaging services in AWS.

1. SQS: SQS stands for Simple Queue Service. A fully managed message queuing service to communicate between services and apps in AWS.
2. SNS: SNS stands for Simple Notification Service. A push notification service for AWS users to alert them about their services in the AWS space.
3. SES: SES stands for Simple Email Service. It is a cost-effective email service from AWS for its own customers.
|
||||
|
||||
* * *
|
||||
|
||||
_Business Productivity_
|
||||
|
||||
A group of services to help boost your business productivity.

1. WorkDocs: A collaborative file sharing, storing and editing service.
2. WorkMail: A secured business mail and calendar service.
3. Amazon Chime: Online business meetings!
|
||||
|
||||
* * *
|
||||
|
||||
_Desktop & App Streaming_
|
||||
|
||||
This is desktop and app streaming over the cloud.

1. WorkSpaces: A fully managed, secured desktop computing service in the cloud.
2. AppStream 2.0: Stream desktop applications from the cloud.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kerneltalks.com/virtualization/aws-cloud-terminology/
|
||||
|
||||
作者:[Shrikant Lavhate][a]
|
||||
译者:[chenxinlong](https://github.com/chenxinlong)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://kerneltalks.com/virtualization/aws-cloud-terminology/
|
@ -1,410 +0,0 @@
|
||||
How to control GPIO pins and operate relays with the Raspberry Pi
|
||||
============================================================
|
||||
|
||||
> Learn how to operate relays and control GPIO pins with the Pi using PHP and a temperature sensor.
|
||||
|
||||
![How to control GPIO pins and operate relays with the Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/raspberry_pi_day_lead_0.jpeg?itok=lCxmviRD "How to control GPIO pins and operate relays with the Raspberry Pi")
|
||||
>Image by : opensource.com
|
||||
|
||||
Ever wondered how to control items like your fans, lights, and more using your phone or computer from anywhere?
|
||||
|
||||
I was looking to control my Christmas lights using any mobile phone, tablet, laptop... simply by using a Raspberry Pi. Let me show you how to operate relays and control GPIO pins with the Pi using PHP and a temperature sensor. I put them all together using AJAX.
|
||||
|
||||
### Hardware requirements
|
||||
|
||||
* Raspberry Pi
|
||||
* SD Card with Raspbian installed (any SD card would work, but I prefer to use a 32GB class 10 card)
|
||||
* Power adapter
|
||||
* Jumper wires (female to female and male to female)
|
||||
* Relay board (I use a 12V relay board with four relays)
|
||||
* DS18B20 temperature probe
|
||||
* Wi-Fi adapter for Raspberry Pi
|
||||
* Router (for Internet access, you need to have a port-forwarding supported router)
|
||||
* 10K-ohm resistor
|
||||
|
||||
### Software requirements
|
||||
|
||||
* Download and install Raspbian on your SD Card
|
||||
* Working Internet connection
|
||||
* Apache web server
|
||||
* PHP
|
||||
* WiringPi
|
||||
* SSH client on a Mac or Windows client
|
||||
|
||||
### General configurations and setup
|
||||
|
||||
1\. Insert the SD card into the Raspberry Pi and connect it to the router using an Ethernet cable.
|
||||
|
||||
2\. Connect the Wi-Fi Adapter.
|
||||
|
||||
3\. Now SSH to Pi and edit the **interfaces** file using:
|
||||
|
||||
**sudo nano /etc/network/interfaces**
|
||||
|
||||
This will open the file in an editor called **nano**. It is a very simple text editor that is easy to approach and use. If you're not familiar with Linux-based operating systems, just use the arrow keys.
|
||||
|
||||
After opening the file in **nano** you will see a screen like this:
|
||||
|
||||
![File editor nano](https://opensource.com/sites/default/files/putty_0.png "File editor nano")
|
||||
|
||||
4\. To configure your wireless network, modify the file as follows:
|
||||
|
||||
**iface lo inet loopback**
|
||||
|
||||
**iface eth0 inet dhcp**
|
||||
|
||||
**allow-hotplug wlan0**
|
||||
|
||||
**auto wlan0**
|
||||
|
||||
**iface wlan0 inet dhcp**
|
||||
|
||||
** wpa-ssid "Your Network SSID"**
|
||||
|
||||
** wpa-psk "Your Password"**
|
||||
|
||||
5\. Press CTRL + O to save it, and then CTRL + X to exit the editor.
|
||||
|
||||
At this point, everything is configured and all you need to do is reload the network interfaces by running:
|
||||
|
||||
**sudo service networking reload**
|
||||
|
||||
(Warning: if you are connected using a remote connection it will disconnect now.)
|
||||
|
||||
### Software configurations
|
||||
|
||||
### Installing Apache Web Server
|
||||
|
||||
Apache is a popular web server application you can install on the Raspberry Pi to allow it to serve web pages. On its own, Apache can serve HTML files over HTTP, and with additional modules it can serve dynamic web pages using scripting languages such as PHP.
|
||||
|
||||
Install Apache by typing the following command on the command line:
|
||||
|
||||
**sudo apt-get install apache2 -y**
|
||||
|
||||
Once the installation is complete, type in the IP Address of your Pi to test the server. If you get the next image, then you have installed and set up your server successfully.
|
||||
|
||||
![Successful server setup](https://opensource.com/sites/default/files/itworks.png "Successful server setup")
|
||||
|
||||
To change this default page and add your own HTML file, go to **/var/www/html**:
|
||||
|
||||
**cd /var/www/html**
|
||||
|
||||
To test this, add any file to this folder.
|
||||
|
||||
### Installing PHP
|
||||
|
||||
PHP is a preprocessor, meaning this is code that runs when the server receives a request for a web page. It runs, works out what needs to be shown on the page, then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. Other languages are capable of this, but since WordPress is written in PHP it's what you need to use this time. PHP is a very popular language on the web, with large projects like Facebook and Wikipedia written in it.
|
||||
|
||||
Install the PHP and Apache packages with the following command:
|
||||
|
||||
**sudo apt-get install php5 libapache2-mod-php5 -y**
|
||||
|
||||
### Testing PHP
|
||||
|
||||
Create the file **index.php**:
|
||||
|
||||
**sudo nano index.php**
|
||||
|
||||
Put some PHP content in it:
|
||||
|
||||
**<?php echo "hello world"; ?>**
|
||||
|
||||
Save the file. Next, delete "index.html" because it takes precedence over "index.php":
|
||||
|
||||
**sudo rm index.html**
|
||||
|
||||
Refresh your browser. You should see “hello world.” This is not dynamic, but it is still served by PHP. If you see the raw PHP above instead of “hello world,” reload and restart Apache with:
|
||||
|
||||
**sudo /etc/init.d/apache2 reload**
|
||||
|
||||
**sudo /etc/init.d/apache2 restart**
|
||||
|
||||
### Installing WiringPi
|
||||
|
||||
WiringPi is maintained under **git** for ease of change tracking; however, you have a plan B if you’re unable to use **git** for whatever reason. (Usually your firewall will be blocking you, so do check that first!)
|
||||
|
||||
If you do not have **git** installed, then under any of the Debian releases (e.g., Raspbian), you can install it with:
|
||||
|
||||
**sudo apt-get install git-core**
|
||||
|
||||
If you get any errors here, make sure your Pi is up to date with the latest version of Raspbian:
|
||||
|
||||
**sudo apt-get update**

**sudo apt-get upgrade**
|
||||
|
||||
To obtain WiringPi using **git**:
|
||||
|
||||
**sudo git clone git://git.drogon.net/wiringPi**
|
||||
|
||||
If you have already used the clone operation for the first time, then:
|
||||
|
||||
**cd wiringPi**

**git pull origin**
|
||||
|
||||
It will fetch an updated version, and then you can re-run the build script below.
|
||||
|
||||
To build/install there is a new simplified script:
|
||||
|
||||
**cd wiringPi**

**./build**
|
||||
|
||||
The new build script will compile and install it all for you. It does use the **sudo** command at one point, so you may wish to inspect the script before running it.
|
||||
|
||||
### Testing WiringPi
|
||||
|
||||
Run the **gpio** command to check the installation:
|
||||
|
||||
**gpio -v**

**gpio readall**
|
||||
|
||||
This should give you some confidence that it’s working OK.
|
||||
|
||||
### Connecting DS18B20 To Raspberry Pi
|
||||
|
||||
* The Black wire on your probe is for GND
|
||||
* The Red wire is for VCC
|
||||
* The Yellow wire is the GPIO wire
|
||||
|
||||
![GPIO image](https://opensource.com/sites/default/files/gpio_0.png "GPIO image")
|
||||
|
||||
Connect:
|
||||
|
||||
* VCC to 3V Pin 1
|
||||
* GPIO wire to Pin 7 (GPIO 04)
|
||||
* Ground wire to any GND Pin 9
|
||||
|
||||
### Software Configuration
|
||||
|
||||
To use the DS18B20 temperature sensor module with PHP, you need to activate the kernel modules for the GPIO pins on the Raspberry Pi and the DS18B20 by executing the commands:
|
||||
|
||||
**sudo modprobe w1-gpio**
|
||||
|
||||
**sudo modprobe w1-therm**
|
||||
|
||||
You do not want to do that manually every time the Raspberry Pi reboots, so you want to enable these modules on every boot. This is done by adding the following lines to the file **/etc/modules**:
|
||||
|
||||
**sudo nano /etc/modules**
|
||||
|
||||
Add the following lines to it:
|
||||
|
||||
**w1-gpio**
|
||||
|
||||
**w1-therm**
|
||||
|
||||
To test this, type in:
|
||||
|
||||
**cd /sys/bus/w1/devices/**
|
||||
|
||||
Now type **ls**.
|
||||
|
||||
You should see your device information. In the device drivers, your DS18B20 sensor should be listed as a series of numbers and letters. In this case, the device is registered as 28-000005e2fdc3\. You then need to access the sensor with the cd command, replacing my serial number with your own: **cd 28-000005e2fdc3**.
|
||||
|
||||
The DS18B20 sensor periodically writes to the **w1_slave** file, so you simply use the cat command to read it: **cat w1_slave**.
|
||||
|
||||
This yields the following two lines of text, with the output **t=** showing the temperature in degrees Celsius. Place a decimal point after the first two digits (e.g., the temperature reading I received is 30.125 degrees Celsius).
|
||||
|
||||
### Connecting the relay
|
||||
|
||||
1\. Take two jumper wires and connect one of them to the GPIO 24 (Pin18) on the Pi and the other one to the GND Pin. You may refer the following diagram.
|
||||
|
||||
2\. Now connect the other ends of the wires to the relay board. Connect the GND to the GND on the relay and the GPIO output wire to the relay channel pin number, which depends on the relay that you are using. Remember, the GND goes to GND on the relay and the GPIO output goes to the relay input pin.
|
||||
|
||||
![Headers](https://opensource.com/sites/default/files/headers.png "Headers")
|
||||
|
||||
Caution! Be very careful with the relay connections with the Pi because if it causes a backflow of current, you will have a short circuit.
|
||||
|
||||
3\. Now connect the power supply to the relay, either using 12V power adapter or by connecting the VCC Pin to 3.3V or 5V on the Pi.
|
||||
|
||||
### Controlling the relay using PHP
|
||||
|
||||
Let's create a PHP script to control the GPIO pins on the Raspberry Pi, with the help of the WiringPi software.
|
||||
|
||||
1\. Create a file in the Apache server’s root web directory. Navigate using:
|
||||
|
||||
**cd ../../../**
|
||||
|
||||
**cd var/www/html/**
|
||||
|
||||
2\. Create a new folder called Home:
|
||||
|
||||
**sudo mkdir Home**
|
||||
|
||||
3\. Create a new PHP file called **on.php**:
|
||||
|
||||
**sudo nano on.php**
|
||||
|
||||
4\. Add the following code to it:
|
||||
|
||||
```
|
||||
<?php
// gpio -g selects the Broadcom GPIO numbering scheme
system("gpio -g mode 24 out");
system("gpio -g write 24 1");
?>
|
||||
```
|
||||
|
||||
5\. Save the file using CTRL + O and exit using CTRL + X
|
||||
|
||||
In the code above, in the first line you've set the GPIO Pin 24 to output mode using the command:
|
||||
|
||||
```
|
||||
system("gpio -g mode 24 out");
|
||||
```
|
||||
|
||||
In the second line, you've turned on GPIO pin 24 using "1", where "1" in binary refers to ON and "0" means OFF.
|
||||
|
||||
6\. To turn off the relay, create another file called **off.php** and replace “1” with “0.”
|
||||
|
||||
```
|
||||
<?php
system("gpio -g mode 24 out");
system("gpio -g write 24 0");
?>
|
||||
```
|
||||
|
||||
7\. If you have your relay connected to the Pi, visit your web browser and type in the IP Address of your Pi followed by the directory name and file name:
|
||||
|
||||
**http://{IPADDRESS}/Home/on.php**
|
||||
|
||||
This will turn ON the relay.
|
||||
|
||||
8\. To turn it OFF, open the page called **off.php**,
|
||||
|
||||
**http://{IPADDRESS}/Home/off.php**
|
||||
|
||||
Now you need to control both these things from a single page without refreshing or visiting the pages individually. For that you'll use AJAX.
|
||||
|
||||
9\. Create a new HTML file and add this code to it.
|
||||
|
||||
```
|
||||
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function() {
    $('#on').click(function() {
        var a = new XMLHttpRequest();
        a.open("GET", "on.php");
        a.onreadystatechange = function() {
            if (a.readyState == 4) {
                if (a.status == 200) {
                    // relay switched on
                } else alert("http error");
            }
        };
        a.send();
    });

    $('#off').click(function() {
        var a = new XMLHttpRequest();
        a.open("GET", "off.php");
        a.onreadystatechange = function() {
            if (a.readyState == 4) {
                if (a.status == 200) {
                    // relay switched off
                } else alert("http error");
            }
        };
        a.send();
    });
});
</script>
</head>
<body>
<button id="on" type="button">Switch Lights On</button>
<button id="off" type="button">Switch Lights Off</button>
</body>
</html>
|
||||
```
|
||||
|
||||
10\. Save the file, go to your web browser, and open that page. You’ll see two buttons, which will turn lights on and off. Based on the same idea, you can create a beautiful web interface using bootstrap and CSS skills.
|
||||
|
||||
### Viewing temperature on this web page
|
||||
|
||||
1\. Create a file called **temperature.php**:
|
||||
|
||||
```
sudo nano temperature.php
```
|
||||
|
||||
2\. Add the following code to it, replace 10-000802292522 with your device ID:
|
||||
|
||||
```
|
||||
<?php
// File to read
$file = '/sys/devices/w1_bus_master1/10-000802292522/w1_slave';
// Read the file line by line
$lines = file($file);
// Get the temp from the second line
$temp = explode('=', $lines[1]);
// Set up some nice formatting (i.e., 21,3)
$temp = number_format($temp[1] / 1000, 1, ',', '');
// And echo that temp
echo $temp . " °C";
?>
|
||||
```
|
||||
|
||||
3\. Go to the HTML file that you just created, and create a new **<div>** with the **id** "screen": **<div id="screen"></div>**.
|
||||
|
||||
4\. Add the following code after the **<body>** tag or at the end of the document:
|
||||
|
||||
```
|
||||
<script>
$(document).ready(function(){
    setInterval(function(){
        $("#screen").load('temperature.php')
    }, 1000);
});
</script>
|
||||
```
|
||||
|
||||
In this, **#screen** is the **id** of **<div>** in which you want to display the temperature. It loads the **temperature.php** file every 1000 milliseconds.
|
||||
|
||||
I have used bootstrap to make a beautiful panel for displaying temperature. You can add multiple icons and glyphicons as well to make it more attractive.
|
||||
|
||||
This was just a basic system that controls a relay board and displays the temperature. You can develop it even further by creating event-based triggers based on timings, temperature readings from the thermostat, etc.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Abdul Hannan Mustajab - I'm 17 years old and live in India. I am pursuing an education in science, math, and computer science. I blog about my projects at spunkytechnology.com. I've been working on AI based IoT using different micro controllers and boards .
|
||||
|
||||
--------
|
||||
|
||||
|
||||
via: https://opensource.com/article/17/3/operate-relays-control-gpio-pins-raspberry-pi
|
||||
|
||||
作者:[ Abdul Hannan Mustajab][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mustajabhannan
|
||||
[1]:http://www.php.net/system
|
||||
[2]:http://www.php.net/system
|
||||
[3]:http://www.php.net/system
|
||||
[4]:http://www.php.net/system
|
||||
[5]:http://www.php.net/system
|
||||
[6]:http://www.php.net/file
|
||||
[7]:http://www.php.net/explode
|
||||
[8]:http://www.php.net/number_format
|
||||
[9]:https://opensource.com/article/17/3/operate-relays-control-gpio-pins-raspberry-pi?rate=RX8QqLzmUb_wEeLw0Ee0UYdp1ehVokKZ-JbbJK_Cn5M
|
||||
[10]:https://opensource.com/user/123336/feed
|
||||
[11]:https://opensource.com/users/mustajabhannan
|
@ -1,116 +0,0 @@
|
||||
translated by zhousiyu325
|
||||
|
||||
5 big ways AI is rapidly invading our lives
|
||||
============================================================
|
||||
|
||||
> Let's look at five real ways we're already surrounded by artificial intelligence.
|
||||
|
||||
|
||||
![5 big ways AI is rapidly invading our lives](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/brain-think-ai-intelligence-ccby.png?itok=-EK6Vpz1 "5 big ways AI is rapidly invading our lives")
|
||||
>Image by : opensource.com
|
||||
|
||||
Open source projects [are helping drive][2] artificial intelligence advancements, and we can expect to hear much more about how AI impacts our lives as the technologies mature. Have you considered how AI is changing the world around you already? Let's take a look at our increasingly artificially enhanced universe and consider the bold predictions about our AI-influenced future.
|
||||
|
||||
### 1\. AI influences your purchasing decisions
|
||||
|
||||
A recent story on [VentureBeat][3], "[How AI will help us decipher millennials][4]," caught my eye. I confess that I haven't given much thought to artificial intelligence—nor have I had a hard time deciphering millennials—so I was curious to learn more. As it turns out, the headline was a bit misleading; "How to sell to millennials" would have been a more accurate title.
|
||||
|
||||
According to the article, the millennial generation is "the demographic segment so coveted that marketing managers from all over the globe are fighting over them." By analyzing online behavior—be it shopping, social media, or other activities—machine learning can help predict behavioral patterns, which can then turn into targeted advertising. The article goes on to explain how the Internet of Things and social media platforms can be mined for data points. "Using machine learning to mine social media data allows companies to determine how millennials talk about its products, what their sentiments are towards a product category, how they respond to competitors’ advertising campaigns, and a multitude of other data that can be used to design targeted advertising campaigns," the article explains. That AI and millennials are the future of marketing is no huge surprise, but Gen Xers and Baby Boomers, you're not off the hook yet.
|
||||
|
||||
>AI is being used to target entire groups—including cities—of people based on behavior changes.
|
||||
|
||||
For example, an article on [Raconteur][23], "[How AI will change buyer behaviour][24]," explains that the biggest strength of AI in the online retail industry is its ability to adapt quickly to fluid situations that change customer behavior. Abhinav Aggarwal, chief executive of artificial intelligence startup [Fluid AI][25], says that his company's software was being used by a client to predict customer behavior, and the system noticed a change during a snow storm. "Users who would typically ignore the e-mails or in-app notifications sent in the middle of the day were now opening them as they were stuck at home without much to do. Within an hour the AI system adapted to the new situation and started sending more promotional material during working hours," he explains.
|
||||
|
||||
AI is changing how, why, and when we spend money, but how is it changing the way we earn our paychecks?
|
||||
|
||||
### 2\. AI is changing how we work
|
||||
|
||||
A recent [Fast Company][5] article, "[This is how AI will change your work in 2017][6]," says that job seekers will benefit from artificial intelligence. The author explains that AI will be used to send job seekers alerts for relevant job openings, in addition to updates on salary trends, when you're due for a promotion, and the likelihood that you'll get one.
|
||||
|
||||
Artificial intelligence also will be used by companies to help on-board new talent. "Many new hires get a ton of information during their first couple of days on the job, much of which won't get retained," the article explains. Instead, a bot could "drip information" to a new employee over time as it becomes more relevant.
|
||||
|
||||
On [Inc.][7], "[Businesses Beyond Bias: How AI Will Reshape Hiring Practices][8]" looks at how [SAP SuccessFactors][9], a talent management solutions provider, leverages AI as a job description "bias checker" and to check for bias in employee compensation.
|
||||
|
||||
[Deloitte's 2017 Human Capital Trends Report][10] indicates that AI is motivating organizations to restructure. Fast Company's article "[How AI is changing the way companies are organized][11]" examines the report, which was based on surveys with more than 10,000 HR and business leaders around the world. "Instead of hiring the most qualified person for a specific task, many companies are now putting greater emphasis on cultural fit and adaptability, knowing that individual roles will have to evolve along with the implementation of AI," the article explains. To adapt to changing technologies, organizations are also moving away from top-down structures and to multidisciplinary teams, the article says.
|
||||
|
||||
### 3\. AI is transforming education
|
||||
|
||||
>AI will benefit all the stakeholders of the education ecosystem.
|
||||
|
||||
Education budgets are shrinking, whereas classroom sizes are growing, so leveraging technological advancements can help improve the productivity and efficiency of the education system, and play a role in improving the quality and affordability of education, according to an article on VentureBeat. "[How AI will transform education in 2017][26]" says that this year we'll see AI grading students' written answers, bots answering students' questions, virtual personal assistants tutoring students, and more. "AI will benefit all the stakeholders of the education ecosystem," the article explains. "Students would be able to learn better with instant feedback and guidance, teachers would get rich learning analytics and insights to personalize instruction, parents would see improved career prospects for their children at a reduced cost, schools would be able to scale high-quality education, and governments would be able to provide affordable education to all."
|
||||
|
||||
### 4\. AI is reshaping healthcare
|
||||
|
||||
A February 2017 article on [CB Insights][12] rounded up [106 artificial intelligence startups in healthcare][13], and many of those raised their first equity funding round within the past couple of years. "19 out of the 24 companies under imaging and diagnostics raised their first equity funding round since January 2015," the article says. Other companies on the list include those working on AI for remote patient monitoring, drug discovery, and oncology.
|
||||
|
||||
An article published on March 16 on TechCrunch that looks at [how AI advances are reshaping healthcare][14] explains, "Once a better understanding of human DNA is established, there is an opportunity to go one step further and provide personalized insights to individuals based on their idiosyncratic biological dispositions. This trend is indicative of a new era of 'personalized genetics,' whereby individuals are able to take full control of their health through access to unprecedented information about their own bodies."
|
||||
|
||||
The article goes on to explain that AI and machine learning are lowering the cost and time to discover new drugs. Thanks in part to extensive testing, new drugs can take more than 12 years to enter the market. "ML algorithms can allow computers to 'learn' how to make predictions based on the data they have previously processed or choose (and in some cases, even conduct) what experiments need to be done. Similar types of algorithms also can be used to predict the side effects of specific chemical compounds on humans, speeding up approvals," the article says. In 2015, the article notes, a San Francisco-based startup, [Atomwise][15], completed analysis on two new drugs to reduce Ebola infectivity within one day, instead of taking years.
|
||||
|
||||
>AI is helping with discovering, diagnosing, and managing new diseases.
|
||||
|
||||
Another startup, London-based [BenevolentAI][27], is harnessing AI to look for patterns in scientific literature. "Recently, the company identified two potential chemical compounds that may work on Alzheimer’s, attracting the attention of pharmaceutical companies," the article says.
|
||||
|
||||
In addition to drug discovery, AI is helping with discovering, diagnosing, and managing new diseases. The TechCrunch article explains that, historically, illnesses are diagnosed based on symptoms displayed, but AI is being used to detect disease signatures in the blood, and to develop treatment plans using deep learning insights from analyzing billions of clinical cases. "IBM's Watson is working with Memorial Sloan Kettering in New York to digest reams of data on cancer patients and treatments used over decades to present and suggest treatment options to doctors in dealing with unique cancer cases," the article says.
|
||||
|
||||
### 5\. AI is changing our love lives
|
||||
|
||||
More than 50 million active users across 195 countries swipe through potential mates with [Tinder][16], a dating app launched in 2012\. In a [Forbes Interview podcast][17], Tinder founder and chairman Sean Rad spoke with Steven Bertoni about how artificial intelligence is changing the dating game. In [an article][18] about the interview, Bertoni quotes Rad, who says, "There might be a moment when Tinder is just so good at predicting the few people that you're interested in, and Tinder might do a lot of the leg work in organizing a date, right?" So instead of presenting users with potential partners, the app would make a suggestion for a nearby partner and take it a step further, coordinate schedules, and set up a date.
|
||||
|
||||
>Future generations literally might fall in love with artificial intelligence.
|
||||
|
||||
Are you in love with AI yet? Future generations literally might fall in love with artificial intelligence. An article by Raya Bidshahri on [Singularity Hub][19], "[How AI will redefine love][20]," says that in a few decades we might be arguing that love is not limited by biology.
|
||||
|
||||
"Our technology, powered by Moore's law, is growing at a staggering rate—intelligent devices are becoming more and more integrated to our lives," Bidshahri explains, adding, "Futurist Ray Kurzweil predicts that we will have AI at a human level by 2029, and it will be a billion times more capable than humans by the 2040s. Many predict that one day we will merge with powerful machines, and we ourselves may become artificially intelligent." She argues that it's inevitable in such a world that humans would accept being in love with entirely non-biological beings.
|
||||
|
||||
That might sound a bit freaky, but falling in love with AI is a more optimistic outcome than a future in which robots take over the world. "Programming AI to have the capacity to feel love can allow us to create more compassionate AI and may be the very key to avoiding the AI apocalypse many fear," Bidshahri says.
|
||||
|
||||
This list of big ways AI is invading all areas of our lives barely scrapes the surface of the artificial intelligence bubbling up around us. Which AI innovations are most exciting—or troubling—to you? Let us know about them in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
作者简介:
|
||||
|
||||
Rikki Endsley - Rikki Endsley is a community manager for Opensource.com. In the past, she worked as the community evangelist on the Open Source and Standards (OSAS) team at Red Hat; a freelance tech journalist; community manager for the USENIX Association; associate publisher of Linux Pro Magazine, ADMIN, and Ubuntu User; and as the managing editor of Sys Admin magazine and UnixReview.com. Follow her on Twitter at: @rikkiends.
|
||||
|
||||
|
||||
-------------------
|
||||
|
||||
via: https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
|
||||
作者:[Rikki Endsley ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rikki-endsley
|
||||
[1]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives?rate=ORfqhKFu9dpA9aFfg-5Za9ZWGcBcx-f0cUlf_VZNeQs
|
||||
[2]:https://www.linux.com/news/open-source-projects-are-transforming-machine-learning-and-ai
|
||||
[3]:https://twitter.com/venturebeat
|
||||
[4]:http://venturebeat.com/2017/03/16/how-ai-will-help-us-decipher-millennials/
|
||||
[5]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives
|
||||
[6]:https://www.fastcompany.com/3066620/this-is-how-ai-will-change-your-work-in-2017
|
||||
[7]:https://twitter.com/Inc
|
||||
[8]:http://www.inc.com/bill-carmody/businesses-beyond-bias-how-ai-will-reshape-hiring-practices.html
|
||||
[9]:https://www.successfactors.com/en_us.html
|
||||
[10]:https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html?id=us:2el:3pr:dup3575:awa:cons:022817:hct17
|
||||
[11]:https://www.fastcompany.com/3068492/how-ai-is-changing-the-way-companies-are-organized
|
||||
[12]:https://twitter.com/CBinsights
|
||||
[13]:https://www.cbinsights.com/blog/artificial-intelligence-startups-healthcare/
|
||||
[14]:https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/
|
||||
[15]:http://www.atomwise.com/
|
||||
[16]:https://twitter.com/Tinder
|
||||
[17]:https://www.forbes.com/podcasts/the-forbes-interview/#5e962e5624e1
|
||||
[18]:https://www.forbes.com/sites/stevenbertoni/2017/02/14/tinders-sean-rad-on-how-technology-and-artificial-intelligence-will-change-dating/#4180fc2e5b99
|
||||
[19]:https://twitter.com/singularityhub
|
||||
[20]:https://singularityhub.com/2016/08/05/how-ai-will-redefine-love/
|
||||
[21]:https://opensource.com/user/23316/feed
|
||||
[22]:https://opensource.com/article/17/3/5-big-ways-ai-rapidly-invading-our-lives#comments
|
||||
[23]:https://twitter.com/raconteur
|
||||
[24]:https://www.raconteur.net/technology/how-ai-will-change-buyer-behaviour
|
||||
[25]:http://www.fluid.ai/
|
||||
[26]:http://venturebeat.com/2017/02/04/how-ai-will-transform-education-in-2017/
|
||||
[27]:https://twitter.com/benevolent_ai
|
||||
[28]:https://opensource.com/users/rikki-endsley
|
@ -0,0 +1,346 @@
|
||||
ictlyh Translating
|
||||
Writing a Linux Debugger Part 3: Registers and memory
|
||||
============================================================
|
||||
|
||||
In the last post we added simple address breakpoints to our debugger. This time we’ll be adding the ability to read and write registers and memory, which will allow us to screw around with our program counter, observe state and change the behaviour of our program.
|
||||
|
||||
* * *
|
||||
|
||||
### Series index
|
||||
|
||||
These links will go live as the rest of the posts are released.
|
||||
|
||||
1. [Setup][3]
|
||||
|
||||
2. [Breakpoints][4]
|
||||
|
||||
3. [Registers and memory][5]
|
||||
|
||||
4. [Elves and dwarves][6]
|
||||
|
||||
5. [Source and signals][7]
|
||||
|
||||
6. [Source-level stepping][8]
|
||||
|
||||
7. Source-level breakpoints
|
||||
|
||||
8. Stack unwinding
|
||||
|
||||
9. Reading variables
|
||||
|
||||
10. Next steps
|
||||
|
||||
* * *
|
||||
|
||||
### Registering our registers
|
||||
|
||||
Before we actually read any registers, we need to teach our debugger a bit about our target, which is x86_64\. Alongside sets of general and special purpose registers, x86_64 has floating point and vector registers available. I’ll be omitting the latter two for simplicity, but you can choose to support them if you like. x86_64 also allows you to access some 64 bit registers as 32, 16, or 8 bit registers, but I’ll just be sticking to 64\. Due to these simplifications, for each register we just need its name, its DWARF register number, and where it is stored in the structure returned by `ptrace`. I chose to have a scoped enum for referring to the registers, then I laid out a global register descriptor array with the elements in the same order as in the `ptrace` register structure.
|
||||
|
||||
```
|
||||
enum class reg {
    rax, rbx, rcx, rdx,
    rdi, rsi, rbp, rsp,
    r8,  r9,  r10, r11,
    r12, r13, r14, r15,
    rip, rflags,    cs,
    orig_rax, fs_base,
    gs_base,
    fs, gs, ss, ds, es
};

constexpr std::size_t n_registers = 27;

struct reg_descriptor {
    reg r;
    int dwarf_r;
    std::string name;
};

const std::array<reg_descriptor, n_registers> g_register_descriptors {{
    { reg::r15, 15, "r15" },
    { reg::r14, 14, "r14" },
    { reg::r13, 13, "r13" },
    { reg::r12, 12, "r12" },
    { reg::rbp, 6, "rbp" },
    { reg::rbx, 3, "rbx" },
    { reg::r11, 11, "r11" },
    { reg::r10, 10, "r10" },
    { reg::r9, 9, "r9" },
    { reg::r8, 8, "r8" },
    { reg::rax, 0, "rax" },
    { reg::rcx, 2, "rcx" },
    { reg::rdx, 1, "rdx" },
    { reg::rsi, 4, "rsi" },
    { reg::rdi, 5, "rdi" },
    { reg::orig_rax, -1, "orig_rax" },
    { reg::rip, -1, "rip" },
    { reg::cs, 51, "cs" },
    { reg::rflags, 49, "eflags" },
    { reg::rsp, 7, "rsp" },
    { reg::ss, 52, "ss" },
    { reg::fs_base, 58, "fs_base" },
    { reg::gs_base, 59, "gs_base" },
    { reg::ds, 53, "ds" },
    { reg::es, 50, "es" },
    { reg::fs, 54, "fs" },
    { reg::gs, 55, "gs" },
}};
|
||||
```
|
||||
|
||||
You can typically find the register data structure in `/usr/include/sys/user.h` if you’d like to look at it yourself, and the DWARF register numbers are taken from the [System V x86_64 ABI][11].
|
||||
|
||||
Now we can write a bunch of functions to interact with registers. We’d like to be able to read registers, write to them, retrieve a value from a DWARF register number, and lookup registers by name and vice versa. Let’s start with implementing `get_register_value`:
|
||||
|
||||
```
|
||||
uint64_t get_register_value(pid_t pid, reg r) {
    user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
    //...
}
|
||||
```
|
||||
|
||||
Again, `ptrace` gives us easy access to the data we want. We just construct an instance of `user_regs_struct` and give that to `ptrace` alongside the `PTRACE_GETREGS` request.
|
||||
|
||||
Now we want to read `regs` depending on which register was requested. We could write a big switch statement, but since we’ve laid out our `g_register_descriptors` table in the same order as `user_regs_struct`, we can just search for the index of the register descriptor, and access `user_regs_struct` as an array of `uint64_t`s.[1][9]
|
||||
|
||||
```
|
||||
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [r](auto&& rd) { return rd.r == r; });

    return *(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors)));
|
||||
```
|
||||
|
||||
The cast to `uint64_t` is safe because `user_regs_struct` is a standard layout type, but I think the pointer arithmetic is technically UB. No current compilers even warn about this and I’m lazy, but if you want to maintain utmost correctness, write a big switch statement.
|
||||
|
||||
`set_register_value` is much the same, we just write to the location and write the registers back at the end:
|
||||
|
||||
```
|
||||
void set_register_value(pid_t pid, reg r, uint64_t value) {
    user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [r](auto&& rd) { return rd.r == r; });

    *(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors))) = value;
    ptrace(PTRACE_SETREGS, pid, nullptr, &regs);
}
|
||||
```
|
||||
|
||||
Next is lookup by DWARF register number. This time I’ll actually check for an error condition just in case we get some weird DWARF information:
|
||||
|
||||
```
|
||||
uint64_t get_register_value_from_dwarf_register (pid_t pid, unsigned regnum) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [regnum](auto&& rd) { return rd.dwarf_r == regnum; });
    if (it == end(g_register_descriptors)) {
        throw std::out_of_range{"Unknown dwarf register"};
    }

    return get_register_value(pid, it->r);
}
|
||||
```
|
||||
|
||||
Nearly finished; now we have register name lookups:
|
||||
|
||||
```
|
||||
std::string get_register_name(reg r) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [r](auto&& rd) { return rd.r == r; });
    return it->name;
}

reg get_register_from_name(const std::string& name) {
    auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
                           [name](auto&& rd) { return rd.name == name; });
    return it->r;
}
|
||||
```
|
||||
|
||||
And finally we’ll add a simple helper to dump the contents of all registers:
|
||||
|
||||
```
|
||||
void debugger::dump_registers() {
    for (const auto& rd : g_register_descriptors) {
        std::cout << rd.name << " 0x"
                  << std::setfill('0') << std::setw(16) << std::hex << get_register_value(m_pid, rd.r) << std::endl;
    }
}
|
||||
```
|
||||
|
||||
As you can see, iostreams has a very concise interface for outputting hex data nicely[2][10]. Feel free to make an I/O manipulator to get rid of this mess if you like.
|
||||
|
||||
This gives us enough support to handle registers easily in the rest of the debugger, so we can now add this to our UI.
|
||||
|
||||
* * *
|
||||
|
||||
### Exposing our registers
|
||||
|
||||
All we need to do here is add a new command to the `handle_command` function. With the following code, users will be able to type `register read rax`, `register write rax 0x42` and so on.
|
||||
|
||||
```
|
||||
else if (is_prefix(command, "register")) {
    if (is_prefix(args[1], "dump")) {
        dump_registers();
    }
    else if (is_prefix(args[1], "read")) {
        std::cout << get_register_value(m_pid, get_register_from_name(args[2])) << std::endl;
    }
    else if (is_prefix(args[1], "write")) {
        std::string val {args[3], 2}; //assume 0xVAL
        set_register_value(m_pid, get_register_from_name(args[2]), std::stol(val, 0, 16));
    }
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### Where is my mind?
|
||||
|
||||
We’ve already read from and written to memory when setting our breakpoints, so we just need to add a couple of functions to hide the `ptrace` call a bit.
|
||||
|
||||
```
|
||||
uint64_t debugger::read_memory(uint64_t address) {
    return ptrace(PTRACE_PEEKDATA, m_pid, address, nullptr);
}

void debugger::write_memory(uint64_t address, uint64_t value) {
    ptrace(PTRACE_POKEDATA, m_pid, address, value);
}
|
||||
```
|
||||
|
||||
You might want to add support for reading and writing more than a word at a time, which you can do by just incrementing the address each time you want to read another word. You could also use [`process_vm_readv` and `process_vm_writev`][12] or `/proc/<pid>/mem` instead of `ptrace` if you like.
|
||||
|
||||
Now we’ll add commands for our UI:
|
||||
|
||||
```
|
||||
else if(is_prefix(command, "memory")) {
    std::string addr {args[2], 2}; //assume 0xADDRESS

    if (is_prefix(args[1], "read")) {
        std::cout << std::hex << read_memory(std::stol(addr, 0, 16)) << std::endl;
    }
    if (is_prefix(args[1], "write")) {
        std::string val {args[3], 2}; //assume 0xVAL
        write_memory(std::stol(addr, 0, 16), std::stol(val, 0, 16));
    }
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### Patching `continue_execution`
|
||||
|
||||
Before we test out our changes, we’re now in a position to implement a more sane version of `continue_execution`. Since we can get the program counter, we can check our breakpoint map to see if we’re at a breakpoint. If so, we can disable the breakpoint and step over it before continuing.
|
||||
|
||||
First we’ll add a couple of helper functions for clarity and brevity:
|
||||
|
||||
```
|
||||
uint64_t debugger::get_pc() {
    return get_register_value(m_pid, reg::rip);
}

void debugger::set_pc(uint64_t pc) {
    set_register_value(m_pid, reg::rip, pc);
}
|
||||
```
|
||||
|
||||
Then we can write a function to step over a breakpoint:
|
||||
|
||||
```
|
||||
void debugger::step_over_breakpoint() {
    // - 1 because execution will go past the breakpoint
    auto possible_breakpoint_location = get_pc() - 1;

    if (m_breakpoints.count(possible_breakpoint_location)) {
        auto& bp = m_breakpoints[possible_breakpoint_location];

        if (bp.is_enabled()) {
            auto previous_instruction_address = possible_breakpoint_location;
            set_pc(previous_instruction_address);

            bp.disable();
            ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
            wait_for_signal();
            bp.enable();
        }
    }
}
|
||||
```
|
||||
|
||||
First we check to see if there’s a breakpoint set for the value of the current PC. If there is, we first put execution back to before the breakpoint, disable it, step over the original instruction, and re-enable the breakpoint.
|
||||
|
||||
`wait_for_signal` will encapsulate our usual `waitpid` pattern:
|
||||
|
||||
```
|
||||
void debugger::wait_for_signal() {
    int wait_status;
    auto options = 0;
    waitpid(m_pid, &wait_status, options);
}
|
||||
```
|
||||
|
||||
Finally we rewrite `continue_execution` like this:
|
||||
|
||||
```
|
||||
void debugger::continue_execution() {
    step_over_breakpoint();
    ptrace(PTRACE_CONT, m_pid, nullptr, nullptr);
    wait_for_signal();
}
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
### Testing it out
|
||||
|
||||
Now that we can read and modify registers, we can have a bit of fun with our hello world program. As a first test, try setting a breakpoint on the call instruction again and continue from it. You should see `Hello world` being printed out. For the fun part, set a breakpoint just after the output call, continue, then write the address of the call argument setup code to the program counter (`rip`) and continue. You should see `Hello world` being printed a second time due to this program counter manipulation. Just in case you aren’t sure where to set the breakpoint, here’s my `objdump` output from the last post again:
|
||||
|
||||
```
|
||||
0000000000400936 <main>:
  400936: 55                   push   rbp
  400937: 48 89 e5             mov    rbp,rsp
  40093a: be 35 0a 40 00       mov    esi,0x400a35
  40093f: bf 60 10 60 00       mov    edi,0x601060
  400944: e8 d7 fe ff ff       call   400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
  400949: b8 00 00 00 00       mov    eax,0x0
  40094e: 5d                   pop    rbp
  40094f: c3                   ret
||||
```
|
||||
|
||||
You’ll want to move the program counter back to `0x40093a` so that the `esi` and `edi` registers are set up properly.
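If it helps to see it end-to-end, here’s roughly what that session could look like with the commands we’ve built so far. This is a sketch only — the exact prompt and command spellings depend on how you wired up your own UI in the earlier posts:

```
minidbg> break 0x400949                 # stop just after the output call
minidbg> continue                       # prints "Hello world", traps at the breakpoint
minidbg> register write rip 0x40093a    # rewind to the argument setup code
minidbg> continue                       # prints "Hello world" a second time
```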
In the next post, we’ll take our first look at DWARF information and add various kinds of single stepping to our debugger. After that, we’ll have a mostly functioning tool which can step through code, set breakpoints wherever we like, modify data, and so forth. As always, drop a comment below if you have any questions!

You can find the code for this post [here][13].

* * *

1. You could also reorder the `reg` enum and cast them to the underlying type to use as indexes, but I wrote it this way in the first place, it works, and I’m too lazy to change it. [↩][1]

2. Ahahahahahahahahahahahahahahahaha [↩][2]
--------------------------------------------------------------------------------

via: https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/

作者:[TartanLlama][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.twitter.com/TartanLlama
[1]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fnref:1
[2]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fnref:2
[3]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
[4]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
[5]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
[6]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
[7]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
[8]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
[9]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fn:2
[10]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/#fn:1
[11]:https://www.uclibc.org/docs/psABI-x86_64.pdf
[12]:http://man7.org/linux/man-pages/man2/process_vm_readv.2.html
[13]:https://github.com/TartanLlama/minidbg/tree/tut_registers
@ -0,0 +1,329 @@
ictlyh Translating

Writing a Linux Debugger Part 4: Elves and dwarves
============================================================

Up until now you’ve heard whispers of dwarves, of debug information, of a way to understand the source code without just parsing the thing. Today we’ll be going into the details of source-level debug information in preparation for using it in following parts of this tutorial.
* * *

### Series index

These links will go live as the rest of the posts are released.

1. [Setup][1]
2. [Breakpoints][2]
3. [Registers and memory][3]
4. [Elves and dwarves][4]
5. [Source and signals][5]
6. [Source-level stepping][6]
7. Source-level breakpoints
8. Stack unwinding
9. Reading variables
10. Next steps

* * *
### Introduction to ELF and DWARF

ELF and DWARF are two components which you may not have heard of, but probably use most days. ELF (Executable and Linkable Format) is the most widely used object file format in the Linux world; it specifies a way to store all of the different parts of a binary, like the code, static data, debug information, and strings. It also tells the loader how to take the binary and ready it for execution, which involves noting where different parts of the binary should be placed in memory, which bits need to be fixed up depending on the position of other components (_relocations_), and more. I won’t cover much more of ELF in these posts, but if you’re interested, you can have a look at [this wonderful infographic][7] or [the standard][8].

DWARF is the debug information format most commonly used with ELF. It’s not necessarily tied to ELF, but the two were developed in tandem and work very well together. This format allows a compiler to tell a debugger how the original source code relates to the binary which is to be executed. This information is split across different ELF sections, each with its own piece of information to relay. Here are the different sections which are defined, taken from the highly informative, if slightly out of date, [Introduction to the DWARF Debugging Format][9]:
* `.debug_abbrev` Abbreviations used in the `.debug_info` section
* `.debug_aranges` A mapping between memory addresses and compilation units
* `.debug_frame` Call Frame Information
* `.debug_info` The core DWARF data containing DWARF Information Entries (DIEs)
* `.debug_line` Line Number Program
* `.debug_loc` Location descriptions
* `.debug_macinfo` Macro descriptions
* `.debug_pubnames` A lookup table for global objects and functions
* `.debug_pubtypes` A lookup table for global types
* `.debug_ranges` Address ranges referenced by DIEs
* `.debug_str` String table used by `.debug_info`
* `.debug_types` Type descriptions
We are most interested in the `.debug_line` and `.debug_info` sections, so let’s have a look at some DWARF for a simple program:

```
int main() {
    long a = 3;
    long b = 2;
    long c = a + b;
    a = 4;
}
```
* * *

### DWARF line table

If you compile this program with the `-g` option and run the result through `dwarfdump`, you should see something like this for the line number section:
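For reference, the steps to reproduce this look roughly like the following (a hedged sketch: the dump above came from Clang 3.9, and your `dwarfdump` build may spell its flags slightly differently):

```
clang++ -g variable.cpp -o variable
dwarfdump -l variable   # dump the .debug_line section
dwarfdump -i variable   # dump the .debug_info section (used later in this post)
```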
```
.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 0x0000000b):

NS new statement, BB new basic block, ET end of text sequence
PE prologue end, EB epilogue begin
IS=val ISA number, DI=val discriminator value
<pc>        [lno,col] NS BB ET PE EB IS= DI= uri: "filepath"
0x00400670  [   1, 0] NS uri: "/home/simon/play/MiniDbg/examples/variable.cpp"
0x00400676  [   2,10] NS PE
0x0040067e  [   3,10] NS
0x00400686  [   4,14] NS
0x0040068a  [   4,16]
0x0040068e  [   4,10]
0x00400692  [   5, 7] NS
0x0040069a  [   6, 1] NS
0x0040069c  [   6, 1] NS ET
```
The first bunch of lines is some information on how to understand the dump – the main line number data starts at the line beginning with `0x00400670`. Essentially, each entry maps a code memory address to a line and column number in some file. `NS` means that the address marks the beginning of a new statement, which is often used for setting breakpoints or stepping. `PE` marks the end of the function prologue, which is helpful for setting function entry breakpoints. `ET` marks the end of the translation unit. The information isn’t actually encoded like this; the real encoding is a very space-efficient program of sorts which can be executed to build up this line information.

So, say we want to set a breakpoint on line 4 of variable.cpp, what do we do? We look for entries corresponding to that file, then we look for a relevant line entry, look up the address which corresponds to it, and set a breakpoint there. In our example, that’s this entry:

```
0x00400686  [   4,14] NS
```

So we want to set a breakpoint at address `0x00400686`. You could do so by hand with the debugger you’ve already written if you want to give it a try.

The reverse works just as well. If we have a memory location – say, a program counter value – and want to find out where that is in the source, we just find the closest mapped address in the line table information and grab the line from there.
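As a concrete (if simplified) illustration of both lookups, here’s a sketch over an in-memory copy of the table. The `line_entry` struct is an assumption for illustration only – a real debugger would execute the encoded DWARF line number program (or use a library) to produce rows like these:

```
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical decoded line-table row; real DWARF stores a compressed
// program which must be run to produce rows like this.
struct line_entry {
    uint64_t address;
    unsigned line;
    bool     is_stmt; // the NS flag in the dump above
};

// line -> address: find the first new-statement entry for the given line
uint64_t address_for_line(const std::vector<line_entry>& table, unsigned line) {
    for (auto const& e : table) {
        if (e.line == line && e.is_stmt) return e.address;
    }
    return 0; // not found
}

// address -> line: rows are sorted by address, so take the closest
// entry at or below the given PC
unsigned line_for_address(const std::vector<line_entry>& table, uint64_t pc) {
    auto it = std::upper_bound(table.begin(), table.end(), pc,
        [](uint64_t addr, const line_entry& e) { return addr < e.address; });
    return it == table.begin() ? 0 : std::prev(it)->line;
}
```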
* * *

### DWARF debug info

The `.debug_info` section is the heart of DWARF. It gives us information about the types, functions, variables, hopes, and dreams present in our program. The fundamental unit in this section is the DWARF Information Entry, affectionately known as a DIE. A DIE consists of a tag telling you what kind of source-level entity is being represented, followed by a series of attributes which apply to that entity. Here’s the `.debug_info` section for the simple example program I posted above:
```
.debug_info

COMPILE_UNIT<header overall offset = 0x00000000>:
< 0><0x0000000b>  DW_TAG_compile_unit
                    DW_AT_producer      clang version 3.9.1 (tags/RELEASE_391/final)
                    DW_AT_language      DW_LANG_C_plus_plus
                    DW_AT_name          /super/secret/path/MiniDbg/examples/variable.cpp
                    DW_AT_stmt_list     0x00000000
                    DW_AT_comp_dir      /super/secret/path/MiniDbg/build
                    DW_AT_low_pc        0x00400670
                    DW_AT_high_pc       0x0040069c

LOCAL_SYMBOLS:
< 1><0x0000002e>    DW_TAG_subprogram
                      DW_AT_low_pc      0x00400670
                      DW_AT_high_pc     0x0040069c
                      DW_AT_frame_base  DW_OP_reg6
                      DW_AT_name        main
                      DW_AT_decl_file   0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
                      DW_AT_decl_line   0x00000001
                      DW_AT_type        <0x00000077>
                      DW_AT_external    yes(1)
< 2><0x0000004c>      DW_TAG_variable
                        DW_AT_location  DW_OP_fbreg -8
                        DW_AT_name      a
                        DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
                        DW_AT_decl_line 0x00000002
                        DW_AT_type      <0x0000007e>
< 2><0x0000005a>      DW_TAG_variable
                        DW_AT_location  DW_OP_fbreg -16
                        DW_AT_name      b
                        DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
                        DW_AT_decl_line 0x00000003
                        DW_AT_type      <0x0000007e>
< 2><0x00000068>      DW_TAG_variable
                        DW_AT_location  DW_OP_fbreg -24
                        DW_AT_name      c
                        DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
                        DW_AT_decl_line 0x00000004
                        DW_AT_type      <0x0000007e>
< 1><0x00000077>    DW_TAG_base_type
                      DW_AT_name        int
                      DW_AT_encoding    DW_ATE_signed
                      DW_AT_byte_size   0x00000004
< 1><0x0000007e>    DW_TAG_base_type
                      DW_AT_name        long int
                      DW_AT_encoding    DW_ATE_signed
                      DW_AT_byte_size   0x00000008
```
The first DIE represents a compilation unit (CU), which is essentially a source file with all of the `#include`s and such resolved. Here are the attributes annotated with their meaning:

```
DW_AT_producer   clang version 3.9.1 (tags/RELEASE_391/final)      <-- The compiler which produced
                                                                       this binary
DW_AT_language   DW_LANG_C_plus_plus                               <-- The source language
DW_AT_name       /super/secret/path/MiniDbg/examples/variable.cpp  <-- The name of the file which
                                                                       this CU represents
DW_AT_stmt_list  0x00000000                                        <-- An offset into the line table
                                                                       which tracks this CU
DW_AT_comp_dir   /super/secret/path/MiniDbg/build                  <-- The compilation directory
DW_AT_low_pc     0x00400670                                        <-- The start of the code for
                                                                       this CU
DW_AT_high_pc    0x0040069c                                        <-- The end of the code for
                                                                       this CU
```
The other DIEs follow a similar scheme, and you can probably intuit what the different attributes mean.

Now we can try and solve a few practical problems with our new-found knowledge of DWARF.

### Which function am I in?

Say we have a program counter value and want to figure out what function we’re in. A simple algorithm for this is:
```
for each compile unit:
    if the pc is between DW_AT_low_pc and DW_AT_high_pc:
        for each function in the compile unit:
            if the pc is between DW_AT_low_pc and DW_AT_high_pc:
                return function information
```
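Rendered as real code, the same algorithm might look like the following sketch. The flattened structs are hypothetical stand-ins for illustration; in practice you would walk the DIEs through whichever DWARF library you use:

```
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical flattened view of the DIEs we care about
struct function_info {
    std::string name;   // DW_AT_name
    uint64_t low_pc;    // DW_AT_low_pc
    uint64_t high_pc;   // DW_AT_high_pc
};

struct compile_unit_info {
    uint64_t low_pc;
    uint64_t high_pc;
    std::vector<function_info> functions;
};

std::optional<function_info>
function_for_pc(const std::vector<compile_unit_info>& cus, uint64_t pc) {
    for (auto const& cu : cus) {
        if (pc < cu.low_pc || pc >= cu.high_pc) continue; // wrong CU
        for (auto const& f : cu.functions) {
            if (pc >= f.low_pc && pc < f.high_pc) return f;
        }
    }
    return std::nullopt; // e.g. the PC is in code without debug info
}
```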
This will work for many purposes, but things get a bit more difficult in the presence of member functions and inlining. With inlining, for example, once we’ve found the function whose range contains our PC, we’ll need to recurse over the children of that DIE to see if there are any inlined functions which are a better match. I won’t deal with inlining in my code for this debugger, but you can add support for it if you like.

### How do I set a breakpoint on a function?

Again, this depends on whether you want to support member functions, namespaces and suchlike. For free functions you can just iterate over the functions in different compile units until you find one with the right name. If your compiler is kind enough to fill in the `.debug_pubnames` section, you can do this a lot more efficiently.

Once the function has been found, you could set a breakpoint on the memory address given by `DW_AT_low_pc`. However, that would break at the start of the function prologue, and it’s preferable to break at the start of the user code. Since the line table information can specify the memory address where the prologue ends, you could look up the value of `DW_AT_low_pc` in the line table, then keep reading until you get to the entry marked as the prologue end. Some compilers won’t output this information though, so another option is to set a breakpoint on the address given by the second line entry for that function.
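In code, that heuristic could be sketched like this, reusing the hypothetical decoded-row idea from the line table section above (the prologue-end flag corresponds to the PE column in the dump):

```
#include <cstdint>
#include <vector>

// Hypothetical decoded line-table row for one function, in address order
struct lt_row {
    uint64_t address;
    bool prologue_end; // the PE flag
};

// Pick the address to break on for a function entry: prefer the
// prologue-end entry, and fall back to the second row if the compiler
// didn't emit one. Assumes `rows` is non-empty and starts at DW_AT_low_pc.
uint64_t entry_breakpoint_address(const std::vector<lt_row>& rows) {
    for (auto const& r : rows) {
        if (r.prologue_end) return r.address;
    }
    return rows.size() > 1 ? rows[1].address : rows[0].address;
}
```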
Say we want to set a breakpoint on `main` in our example program. We search for the function called `main`, and get this DIE:

```
< 1><0x0000002e>    DW_TAG_subprogram
                      DW_AT_low_pc      0x00400670
                      DW_AT_high_pc     0x0040069c
                      DW_AT_frame_base  DW_OP_reg6
                      DW_AT_name        main
                      DW_AT_decl_file   0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
                      DW_AT_decl_line   0x00000001
                      DW_AT_type        <0x00000077>
                      DW_AT_external    yes(1)
```

This tells us that the function begins at `0x00400670`. If we look this up in our line table, we get this entry:

```
0x00400670  [   1, 0] NS uri: "/super/secret/path/MiniDbg/examples/variable.cpp"
```

We want to skip the prologue, so we read ahead an entry:

```
0x00400676  [   2,10] NS PE
```

Clang has included the prologue end flag on this entry, so we know to stop here and set a breakpoint on address `0x00400676`.
### How do I read the contents of a variable?

Reading variables can be very complex. They are elusive things which can move around throughout a function, sit in registers, be placed in memory, be optimised out, hide in the corner, whatever. Fortunately our simple example is, well, simple. If we want to read the contents of variable `a`, we have a look at its `DW_AT_location` attribute:

```
DW_AT_location    DW_OP_fbreg -8
```

This says that the contents are stored at an offset of `-8` from the base of the stack frame. To work out where this base is, we look at the `DW_AT_frame_base` attribute on the containing function:

```
DW_AT_frame_base  DW_OP_reg6
```
`reg6` on x86-64 is the frame pointer register (`rbp`), as specified by the [System V x86_64 ABI][10]. So we read the contents of the frame pointer, subtract 8 from it, and we’ve found our variable.
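Using the helpers built up over the last few posts, that read could be sketched like so. This is illustrative only, under the assumption that the variable really does live at `DW_OP_fbreg -8` with `DW_OP_reg6` as the frame base, as in the dump above:

```
// Illustrative only: pid is the inferior's PID, and get_register_value /
// read_memory are the helpers from the registers-and-memory post
uint64_t read_variable_a(pid_t pid, debugger& dbg) {
    auto frame_base = get_register_value(pid, reg::rbp); // DW_OP_reg6
    return dbg.read_memory(frame_base - 8);              // DW_OP_fbreg -8
}
```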
If we actually want to make sense of the thing, we’ll need to look at its type:

```
< 2><0x0000004c>      DW_TAG_variable
                        DW_AT_name      a
                        DW_AT_type      <0x0000007e>
```

If we look up this type in the debug information, we get this DIE:

```
< 1><0x0000007e>    DW_TAG_base_type
                      DW_AT_name        long int
                      DW_AT_encoding    DW_ATE_signed
                      DW_AT_byte_size   0x00000008
```

This tells us that the type is an 8 byte (64 bit) signed integer type, so we can go ahead and interpret those bytes as an `int64_t` and display it to the user.
Of course, types can get waaaaaaay more complex than that, as they have to be able to express things like C++ types, but this gives you a basic idea of how they work.

Coming back to that frame base for a second, Clang was nice enough to track the frame base with the frame pointer register. Recent versions of GCC tend to prefer `DW_OP_call_frame_cfa`, which involves parsing the `.eh_frame` ELF section, and that’s an entirely different article which I won’t be writing. If you tell GCC to use DWARF 2 instead of more recent versions, it’ll tend to output location lists, which are somewhat easier to read:

```
DW_AT_frame_base  <loclist at offset 0x00000000 with 4 entries follows>
  low-off : 0x00000000 addr 0x00400696 high-off 0x00000001 addr 0x00400697>DW_OP_breg7+8
  low-off : 0x00000001 addr 0x00400697 high-off 0x00000004 addr 0x0040069a>DW_OP_breg7+16
  low-off : 0x00000004 addr 0x0040069a high-off 0x00000031 addr 0x004006c7>DW_OP_breg6+16
  low-off : 0x00000031 addr 0x004006c7 high-off 0x00000032 addr 0x004006c8>DW_OP_breg7+8
```

A location list gives different locations depending on where the program counter is. This example says that if the PC is at an offset of `0x0` from `DW_AT_low_pc`, then the frame base is an offset of 8 away from the value stored in register 7; if it’s between `0x1` and `0x4` away, then it’s at an offset of 16 away from the same; and so on.
* * *

### Take a breath

That’s a lot of information to get your head round, but the good news is that in the next few posts we’re going to have a library do the hard work for us. It’s still useful to understand the concepts at play, particularly when something goes wrong or when you want to support some DWARF concept which isn’t implemented in whatever DWARF library you use.

If you want to learn more about DWARF, then you can grab the standard [here][11]. At the time of writing, DWARF 5 has just been released, but DWARF 4 is more commonly supported.
--------------------------------------------------------------------------------

via: https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/

作者:[TartanLlama][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.twitter.com/TartanLlama
[1]:https://blog.tartanllama.xyz/2017/03/21/writing-a-linux-debugger-setup/
[2]:https://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/
[3]:https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/
[4]:https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/
[5]:https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/
[6]:https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/
[7]:https://github.com/corkami/pics/raw/master/binary/elf101/elf101-64.pdf
[8]:http://www.skyfree.org/linux/references/ELF_Format.pdf
[9]:http://www.dwarfstd.org/doc/Debugging%20using%20DWARF-2012.pdf
[10]:https://www.uclibc.org/docs/psABI-x86_64.pdf
[11]:http://dwarfstd.org/Download.php
@ -1,256 +0,0 @@
rdiff-backup – A Remote Incremental Backup Tool for Linux
============================================================

rdiff-backup is a powerful and easy-to-use Python script for local/remote incremental backup, which works on any POSIX operating system such as Linux, Mac OS X or [Cygwin][1]. It brings together the remarkable features of a mirror and an incremental backup.

Significantly, it preserves subdirectories, dev files, hard links, and critical file attributes such as permissions, uid/gid ownership, modification times, extended attributes, ACLs, and resource forks. It can work in a bandwidth-efficient mode over a pipe, in a similar way to the popular [rsync backup tool][2].

rdiff-backup backs up a single directory to another over a network using SSH, which means the data transfer is encrypted and thus secure. The target directory (on the remote system) ends up as an exact copy of the source directory; additionally, reverse diffs are stored in a special subdirectory of the target directory, making it possible to recover files lost some time ago.
#### Dependencies

To use rdiff-backup in Linux, you’ll need the following packages installed on your system:

* Python v2.2 or later
* librsync v0.9.7 or later
* pylibacl and pyxattr Python modules, which are optional but necessary for POSIX access control list (ACL) and extended attribute support respectively
* rdiff-backup-statistics, which requires Python v2.4 or later
### How to Install rdiff-backup in Linux

Important: If you are operating over a network, you’ll have to install rdiff-backup on both systems, and preferably both installations should be exactly the same version.

The script is already present in the official repositories of the mainstream Linux distributions; simply run the appropriate commands below to install rdiff-backup as well as its dependencies:
#### On Debian/Ubuntu

```
$ sudo apt-get update
$ sudo apt-get install librsync-dev rdiff-backup
```

#### On CentOS/RHEL 7

```
# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
# rpm -ivh epel-release-7-9.noarch.rpm
# yum install librsync rdiff-backup
```

#### On CentOS/RHEL 6

```
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm
# yum install librsync rdiff-backup
```

#### On Fedora

```
# yum install librsync rdiff-backup
# dnf install librsync rdiff-backup    [On Fedora 22+]
```
### How to Use rdiff-backup in Linux

As mentioned before, rdiff-backup uses SSH to connect to remote machines on your network, and the default authentication in SSH is the username/password method, which normally requires human interaction.

However, to automate tasks such as automatic backups with scripts and beyond, you will need to configure [SSH Passwordless Login Using SSH keys][3], because SSH keys increase the trust between two Linux servers for [easy file synchronization or transfer][4].

Once you have set up [SSH Passwordless Login][5], you can start using the script with the following examples.
#### Backup Files to Different Partition

The example below will back up the `/etc` directory into a `Backup` directory on another partition:

```
$ sudo rdiff-backup /etc /media/aaronkilik/Data/Backup/mint_etc.backup
```

[![Backup Files to Different Partition](http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Files-to-Different-Partition.png)][6]

Backup Files to Different Partition
To exclude a particular directory as well as its subdirectories, you can use the `--exclude` option as follows (note that the source directory, `/etc` here, must still be given before the destination):

```
$ sudo rdiff-backup --exclude /etc/cockpit --exclude /etc/bluetooth /etc /media/aaronkilik/Data/Backup/mint_etc.backup
```

We can include all device files, fifo files, socket files, and symbolic links with the `--include-special-files` option as below:

```
$ sudo rdiff-backup --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup
```

There are two other important flags we can set for file selection: `--max-file-size size`, which excludes files that are larger than the given size in bytes, and `--min-file-size size`, which excludes files that are smaller than the given size in bytes:

```
$ sudo rdiff-backup --max-file-size 5M --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup
```
#### Backup Remote Files on Local Linux Server

For the purpose of this section, we’ll use:

```
Remote Server (tecmint)      : 192.168.56.102
Local Backup Server (backup) : 192.168.56.10
```

As we stated before, you must install the same version of rdiff-backup on both machines. Now check the version on both machines as follows:

```
$ rdiff-backup -V
```

[![Check rdiff Version on Servers](http://www.tecmint.com/wp-content/uploads/2017/03/check-rdif-versions-on-servers.png)][7]

Check rdiff Version on Servers
On the backup server, create a directory which will store the backup files, like so:

```
# mkdir -p /backups
```

Now from the backup server, run the following commands to make a backup of the directories `/var/log/` and `/root` from the remote Linux server 192.168.56.102 into `/backups`:

```
# rdiff-backup root@192.168.56.102::/var/log/ /backups/192.168.56.102_logs.backup
# rdiff-backup root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup
```

The screenshot below shows the root files on the remote server 192.168.56.102 and the backed up files on the backup server 192.168.56.10:

[![Backup Remote Directory on Local Server](http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Remote-Linux-Directory-on-Local-Server.png)][8]

Backup Remote Directory on Local Server

Take note of the rdiff-backup-data directory created inside the backup directory, as seen in the screenshot; it contains vital data concerning the backup process and incremental files.

[![rdiff-backup - Backup Process Files](http://www.tecmint.com/wp-content/uploads/2017/03/rdiff-backup-data-directory-contents.png)][9]

rdiff-backup – Backup Process Files
Now, on the server 192.168.56.102, additional files have been added to the root directory as shown below:

[![Verify Backup Directory](http://www.tecmint.com/wp-content/uploads/2017/03/additional-files-in-root-directory.png)][10]

Verify Backup Directory

Let’s run the backup command once more to pick up the changed data. We can use the `-v[0-9]` option (where the number specifies the verbosity level; the default is 3, which is silent) to set the verbosity:

```
# rdiff-backup -v4 root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup
```

[![Incremental Backup with Summary](http://www.tecmint.com/wp-content/uploads/2017/03/incremental-backup-of-root-files.png)][11]

Incremental Backup with Summary

And to list the number and date of partial incremental backups contained in the /backups/192.168.56.102_rootfiles.backup directory, we can run:

```
# rdiff-backup -l /backups/192.168.56.102_rootfiles.backup/
```
#### Automating rdiff-backup Backups Using Cron

We can print summary statistics after a successful backup with the `--print-statistics` flag. However, if we don’t set this option, the info will still be available from the session statistics file. Read more concerning this option in the STATISTICS section of the man page.

And the `--remote-schema` flag enables us to specify an alternative method of connecting to a remote computer.

Now, let’s start by creating a `backup.sh` script on the backup server 192.168.56.10 as follows:

```
# cd ~/bin
# vi backup.sh
```

Add the following lines to the script file.
```
#!/bin/bash
# This is an rdiff-backup utility backup script

# Backup command
rdiff-backup --print-statistics --remote-schema 'ssh -C %s "sudo /usr/bin/rdiff-backup --server --restrict-read-only /"' root@192.168.56.102::/var/log /backups/192.168.56.102_logs.back

# Check rdiff-backup command success/error
status=$?
if [ $status != 0 ]; then
    # Append an error message to the ~/backup.log file
    echo "rdiff-backup exit Code: $status - Command Unsuccessful" >>~/backup.log;
    exit 1;
fi

# Remove incremental backup files older than one month
rdiff-backup --force --remove-older-than 1M /backups/192.168.56.102_logs.back
```
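One small assumption worth stating: the crontab entry below expects the script at /root/bin/backup.sh (which matches the `cd ~/bin` above when run as root), so make sure the script is executable first:

```
# chmod u+x /root/bin/backup.sh
```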
Save the file and exit, then run the following command to add the script to the crontab on the backup server 192.168.56.10:

```
# crontab -e
```

Add this line to run your backup script daily at midnight:

```
0 0 * * * /root/bin/backup.sh > /dev/null 2>&1
```

Save the crontab and close it; we’ve now successfully automated the backup process. Ensure that it is working as expected.

Read through the rdiff-backup man page for additional info, exhaustive usage options and examples:

```
# man rdiff-backup
```

rdiff-backup Homepage: [http://www.nongnu.org/rdiff-backup/][12]

That’s it for now! In this tutorial, we showed you how to install and basically use rdiff-backup, an easy-to-use Python script for local/remote incremental backup in Linux. Do share your thoughts with us via the feedback section below.
--------------------------------------------------------------------------------

作者简介:

Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.

------------

via: http://www.tecmint.com/rdiff-backup-remote-incremental-backup-for-linux/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-cygwin-to-run-linux-commands-on-windows-system/
[2]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
[3]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[4]:http://www.tecmint.com/sync-new-changed-modified-files-rsync-linux/
[5]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Files-to-Different-Partition.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/check-rdif-versions-on-servers.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Backup-Remote-Linux-Directory-on-Local-Server.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/rdiff-backup-data-directory-contents.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/additional-files-in-root-directory.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/incremental-backup-of-root-files.png
[12]:http://www.nongnu.org/rdiff-backup/
[13]:http://www.tecmint.com/author/aaronkili/
[14]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[15]:http://www.tecmint.com/free-linux-shell-scripting-books/
@ -1,142 +0,0 @@
bd – Quickly Go Back to a Parent Directory Instead of Typing “cd ../../..” Redundantly
============================================================

While navigating the file system via the command line on Linux systems, in order to move back into a parent directory (in a long path), we would normally issue the [cd command][1] repeatedly (`cd ../../..`) until we land in the directory of interest.

This can be tedious much of the time, especially for experienced Linux users or system administrators who carry out many different tasks and therefore hope to discover shortcuts that ease their jobs while operating a system.

**Suggested Read:** [Autojump – An Advanced ‘cd’ Command to Quickly Navigate Linux Filesystem][2]

In this article, we will review a simple but helpful utility for quickly moving back into a parent directory in Linux: the bd tool.

bd is a handy utility for navigating the filesystem; it enables you to quickly go back to a parent directory without typing `cd ../../..` repeatedly. You can reliably combine it with other Linux commands to perform a few daily operations.
### How to Install bd in Linux Systems

Run the following commands to download and install bd under `/usr/bin/` using the [wget command][3], make it executable, and create the required alias in your `~/.bashrc` file (note that writing into `/usr/bin/` requires root privileges):

```
$ sudo wget --no-check-certificate -O /usr/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
$ sudo chmod +rx /usr/bin/bd
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
$ source ~/.bashrc
```
Note: To enable case-sensitive directory name matching, set the `-s` flag instead of `-si` in the alias created above.

To enable autocomplete support, run these commands (`source` is a shell builtin, so it is run without sudo):

```
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd
```
#### How to Use bd in Linux Systems

Assuming you’re currently in the top directory in this path:

```
/media/aaronkilik/Data/Computer Science/Documents/Books/LEARN/Linux/Books/server $
```

and you want to go to the Documents directory quickly, then simply type:

```
$ bd Documents
```

Then to go straight into the Data directory, you can type:

```
$ bd Data
```
[![Switch Between Directories Quickly](http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Between-Directories-Quickly.png)][4]

Switch Between Directories Quickly

Actually, bd makes it even more straightforward: all you need to do is type `bd <few starting letters>`, such as:

```
$ bd Doc
$ bd Da
```
[![Quickly Switch Directories](http://www.tecmint.com/wp-content/uploads/2017/03/Quickly-Switch-Directories.png)][5]

Quickly Switch Directories

Important: In case there is more than one directory with the same name up in the hierarchy, bd will move you into the closest matching ancestor, not counting the current directory itself, as explained in the example below.

For instance, in the path above, there are two directories with the same name, Books. If you want to move into:

```
/media/aaronkilik/Data/ComputerScience/Documents/Books/LEARN/Linux/Books
```

typing `bd books` will take you into:

```
/media/aaronkilik/Data/ComputerScience/Documents/Books
```

[![Move to 'Books' Directory Quickly](http://www.tecmint.com/wp-content/uploads/2017/03/Move-to-Directory-Quickly.png)][6]

Move to ‘Books’ Directory Quickly
Additionally, using bd within backticks, in the form ``bd <letter(s)>``, prints out the path without changing the current directory, so you can use ``bd <letter(s)>`` with other common Linux commands such as [ls][7], [echo][8], etc.

In the example below, I am currently in the directory /var/www/html/internship/assets/filetree, and to print the absolute path, long-list the contents, and sum up the size of all files in the directory html without moving into it, I can just type:

```
$ echo `bd ht`
$ ls -l `bd ht`
$ du -cs `bd ht`
```

[![Switch Directory with Listing](http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Directory-with-Listing.png)][9]

Switch Directory with Listing
Find out more about the bd tool on Github: [https://github.com/vigneshwaranr/bd][10]

That’s all! In this article, we reviewed a handy way of [quickly navigating the filesystem in Linux][11] using the bd utility.

Have your say via the feedback form below. Also, if you know of any similar utilities out there, let us know in the comments as well.
--------------------------------------------------------------------------------

作者简介:

Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.

---------------

via: http://www.tecmint.com/bd-quickly-go-back-to-a-linux-parent-directory/

作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/cd-command-in-linux/
[2]:http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
[3]:http://www.tecmint.com/10-wget-command-examples-in-linux/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Between-Directories-Quickly.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Quickly-Switch-Directories.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Move-to-Directory-Quickly.png
[7]:http://www.tecmint.com/tag/linux-ls-command/
[8]:http://www.tecmint.com/echo-command-in-linux/
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Switch-Directory-with-Listing.png
[10]:https://github.com/vigneshwaranr/bd
[11]:http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/
[12]:http://www.tecmint.com/author/aaronkili/
[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[14]:http://www.tecmint.com/free-linux-shell-scripting-books/
@ -1,3 +1,5 @@
translated by mudongliang
FEWER MALLOCS IN CURL
===========================================================