Merge pull request #1 from LCTT/master

Update
This commit is contained in:
Bestony 2017-03-13 20:51:13 +08:00 committed by GitHub
commit edb4ca4ef6
125 changed files with 13006 additions and 5473 deletions

A Look at the Kernel's "Suspend to Idle" Implementation
===============

### Introduction

The Linux kernel provides a variety of sleep states, each of which saves power by putting different parts of the system into low-power modes. There are currently four sleep states: suspend to idle, power-on standby (standby), suspend to RAM, and suspend to disk. These correspond to ACPI states S0, S1, S3, and S4 respectively. Suspend to idle is implemented purely in software and keeps the CPUs in the deepest possible idle state. Power-on standby (standby) puts devices into low-power states and powers off all non-boot CPUs. Suspend to RAM goes further, powering off all CPUs and putting the RAM into self-refresh mode. Suspend to disk saves the most power of all: it powers off as much of the system as possible, including the memory. The contents of memory are written out to disk, and on wakeup they are restored from disk back into memory.

This blog post focuses on the implementation of suspend to idle. As noted above, it is implemented mostly in software. The suspend process on a typical platform involves freezing user space and putting peripherals into low-power modes. However, rather than powering off and hot-unplugging the CPUs, the system quietly forces the CPUs into the idle state. With the peripherals in low-power mode, no interrupts should occur apart from wakeup-related ones. Wakeup interrupts include those from timers set up to wake the system (the RTC, generic timers, and so on), the power button, USB, and other peripherals.

During the freeze, a special CPU idle function is called whenever the system goes idle. This `enter_freeze()` function can be as simple as calling the same `enter()` function used to idle the CPU, or it can be far more complex. The complexity is determined by the conditions and mechanisms required to put the SoC into its lowest power mode.
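To make the shape of this hook concrete, here is a minimal, hypothetical sketch of a cpuidle driver state carrying both callbacks, assuming a kernel of this era (~4.x) where `struct cpuidle_state` exposes an `enter_freeze` member (later kernels renamed it `enter_s2idle`); the driver and function names are invented for illustration:

```
#include <linux/cpuidle.h>

/* Regular idle entry point for the state. */
static int my_soc_enter(struct cpuidle_device *dev,
                        struct cpuidle_driver *drv, int index)
{
	/* Platform-specific low-power entry would go here, e.g. WFI on ARM. */
	return index;
}

/* Suspend-to-idle variant: same mechanics in this sketch, but it runs
 * with the tick frozen and must not re-enable interrupts. Note the
 * different prototype (void return), as discussed later in this post. */
static void my_soc_enter_freeze(struct cpuidle_device *dev,
                                struct cpuidle_driver *drv, int index)
{
	my_soc_enter(dev, drv, index);
}

static struct cpuidle_driver my_soc_idle_driver = {
	.name = "my_soc_idle",
	.states[0] = {
		.enter        = my_soc_enter,
		.enter_freeze = my_soc_enter_freeze,
		.exit_latency = 1,
		.name         = "WFI",
		.desc         = "ARM wait-for-interrupt",
	},
	.state_count = 1,
};
```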
### Prerequisites

#### `platform_suspend_ops`

Ordinarily, to support suspend to idle, a system must implement `platform_suspend_ops` and provide at least minimal suspend support. That means supplying at least the `valid()` function in `platform_suspend_ops`. If both suspend to idle and suspend to RAM are to be supported, `suspend_valid_only_mem` should be used as the `valid()` function.
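A minimal sketch of what that registration might look like (the kernel helpers `suspend_valid_only_mem` and `suspend_set_ops()` are real; the SoC-specific names and the empty `enter()` body are placeholders):

```
#include <linux/init.h>
#include <linux/suspend.h>

static int my_soc_suspend_enter(suspend_state_t state)
{
	/* Platform-specific code to enter suspend-to-RAM would go here. */
	return 0;
}

static const struct platform_suspend_ops my_soc_suspend_ops = {
	.valid = suspend_valid_only_mem,   /* advertise suspend-to-RAM support */
	.enter = my_soc_suspend_enter,
};

static int __init my_soc_pm_init(void)
{
	suspend_set_ops(&my_soc_suspend_ops);
	return 0;
}
late_initcall(my_soc_pm_init);
```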
Recently, however, the kernel gained automatic support for suspend to idle. Sudeep Holla proposed a change that provides suspend-to-idle support without requiring systems to satisfy the `platform_suspend_ops` requirement. The patch was accepted and merged for version 4.9; it is available here: [https://lkml.org/lkml/2016/8/19/474][1].

If `suspend_ops` are defined, the suspend states the system supports can be discovered by reading the `/sys/power/state` file:
```
# cat /sys/power/state
freeze mem
```
The result in this example shows that the platform supports S0 (suspend to idle) and S3 (suspend to RAM). With Sudeep's change, platforms that do not implement `platform_suspend_ops` will show only the freeze state.
#### Wakeup interrupts

Once the system is in a sleep state, it must receive a wakeup event in order to resume. Such wakeup events are generally produced by the system's devices, so it is important that device drivers use wakeup interrupts and configure themselves to generate wakeup events upon receiving them. If wakeup devices are not identified correctly, the system will take the interrupt and then stay asleep instead of resuming.

Once devices make the proper calls into the wakeup infrastructure, they can be used to generate wakeup events. Make sure the DT file configures the wakeup sources correctly. Below is an example configuration from `arch/arm/boot/dts/am335x-evm.dts`:
```
gpio_keys: volume_keys@0 {
	compatible = "gpio-keys";
	#address-cells = <1>;
	#size-cells = <0>;
	autorepeat;

	switch@9 {
		label = "volume-up";
		linux,code = <115>;
		gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
		wakeup-source;
	};

	switch@10 {
		label = "volume-down";
		linux,code = <114>;
		gpios = <&gpio0 3 GPIO_ACTIVE_LOW>;
		wakeup-source;
	};
};
```
As shown above, two GPIO keys are configured as wakeup sources; pressing either of them while the system is suspended generates a wakeup event.

As an alternative to DT configuration, a device driver may set up wakeup support itself in code, and that becomes the default wakeup configuration.
### Implementation

#### The freeze function

Systems that want to take full advantage of suspend to idle should define an `enter_freeze()` function in their CPU idle driver code. The `enter_freeze()` prototype differs slightly from that of `enter()`, so the same `enter()` function cannot be assigned to both `enter` and `enter_freeze`. At a minimum, the system will call `enter()` directly. If `enter_freeze()` is not defined, the system will suspend, but the functions that fire only when `enter_freeze()` is defined, such as `tick_freeze()` and `stop_critical_timings()`, will not be called. As a result, timer interrupts will wake the system without resuming it: after handling the interrupt, the system simply goes back to suspend.

During suspend, the fewer interrupts the better (ideally none at all).

The figure below plots power consumption against time. The two spikes in the chart are the suspend and the resume. The power spikes before and after the suspend are the system coming out of idle for bookkeeping, process scheduling, timer handling, and so on. Because of latency, it takes the system a while to settle into the deeper idle states.
![blog-picture-one](http://www.linaro.org/wp-content/uploads/2016/10/blog-picture-one-1024x767.png)
*Power usage over time*
The next figure shows an ftrace capture of the activity on the four CPUs before, during, and after the suspend/resume operation. As you can see, no requests or interrupts are handled during the suspend.
![blog-picture-2](http://www.linaro.org/wp-content/uploads/2016/10/blog-picture-2-1024x577.png)
*Ftrace capture of suspend/resume activity*
#### Idle states

You must determine which idle states support freeze. During freeze, the power management code decides which idle state to use for freezing by checking which states define `enter_freeze()`. The CPU idle driver code or the SoC suspend code must identify which idle states implement the freeze operation, and configure this by assigning a freeze function to each applicable idle state of every CPU.

For example, Qualcomm defines the `enter_freeze` function in the suspend init function of its platform suspend code. This is done after the CPU idle driver has been initialized, so that all the structures are already in place.
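As a hedged illustration of that late assignment (again with invented names, reusing the hypothetical `my_soc_enter_freeze()` from the earlier sketch), the wiring-up might look like this:

```
/* Hypothetical: attach a freeze handler to every state of the
 * registered cpuidle driver, once it is known to be initialized. */
static void my_soc_setup_freeze(void)
{
	struct cpuidle_driver *drv = cpuidle_get_driver();
	int i;

	if (!drv)
		return;

	for (i = 0; i < drv->state_count; i++)
		drv->states[i].enter_freeze = my_soc_enter_freeze;
}
```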
#### Driver suspend/resume support

You may run into driver-related bugs after your first successful suspend. Many driver developers have not had the bandwidth to fully test the suspend and resume paths of their code. You may even find that suspend has very little to do, because `pm_runtime` has already done everything the suspend path would otherwise need to: with user space frozen, the devices are already quiesced and runtime PM is disabled.

### Testing

Testing suspend to idle can be done manually, with scripts or processes that implement auto-suspend or auto-sleep, or with something like Android's `wakelock` mechanism to let the system suspend. If testing by hand, the following will freeze the system:
```
/ # echo freeze > /sys/power/state
[ 142.580832] PM: Syncing filesystems … done.
[ 142.583977] Freezing user space processes … (elapsed 0.001 seconds) done.
[ 142.591164] Double checking all user space processes after OOM killer disable… (elapsed 0.000 seconds)
[ 142.600444] Freezing remaining freezable tasks … (elapsed 0.001 seconds) done.
[ 142.608073] Suspending console(s) (use no_console_suspend to debug)
[ 142.708787] mmc1: Reset 0x1 never completed.
[ 142.710608] msm_otg 78d9000.phy: USB in low power mode
[ 142.711379] PM: suspend of devices complete after 102.883 msecs
[ 142.712162] PM: late suspend of devices complete after 0.773 msecs
[ 142.712607] PM: noirq suspend of devices complete after 0.438 msecs
< system suspended >
….
< wake irq triggered >
[ 147.700522] PM: noirq resume of devices complete after 0.216 msecs
[ 147.701004] PM: early resume of devices complete after 0.353 msecs
[ 147.701636] msm_otg 78d9000.phy: USB exited from low power mode
[ 147.704492] PM: resume of devices complete after 3.479 msecs
[ 147.835599] Restarting tasks … done.
/ #
```
In the example above, note that the MMC driver accounted for 100 ms of the 102.883 ms device suspend time. Some device drivers have a lot of work to do when suspending, such as flushing data out to disk or other time-consuming operations.

If the system has freeze defined, it will attempt the suspend. If it does not have freeze capability, you will see this instead:
```
/ # echo freeze > /sys/power/state
sh: write error: Invalid argument
/ #
```
### Future development

Two areas of work on suspend to idle are under way for ARM platforms. The first, mentioned earlier in the `platform_suspend_ops` section, is the work of always accepting the freeze state, which was merged into the 4.9 kernel. The other is freeze function support.

Freeze support should be an ongoing effort if you want devices to respond and perform at their best. However, since many SoCs use ARM's CPU idle driver, it makes more sense for ARM's CPU idle driver to implement its own generic freeze functionality, and ARM is in fact working on adding that generic support. SoC vendors only need to implement their own freeze functions if they implement their own CPU idle drivers, or if they require additional support to reach their deepest freeze/sleep states.
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/suspend-to-idle/
Author: [Andy Gross][a]
Translator: [beyondworld](https://github.com/beyondworld)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:http://www.linaro.org/author/andygross/
[1]:https://lkml.org/lkml/2016/8/19/474

Samba Series (Part 4): Manage Samba4 AD Domain Controller DNS and Group Policy from Windows
============================================================

Following on from the previous tutorial on [managing a Samba4 Active Directory infrastructure with the RSAT tools on Windows 10][4], in this article we will learn how to use the Microsoft DNS Manager to remotely manage the DNS server of our Samba AD domain controller, how to create DNS records, how to create a reverse lookup zone, and how to create a domain policy with the Group Policy Management tool.

#### Requirements

1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu 16.04 (Part 1)][1]
2. [Manage Samba4 AD Infrastructure from the Linux Command Line (Part 2)][2]
3. [Manage Samba4 Active Directory Infrastructure from Windows 10 via RSAT (Part 3)][3]

### Step 1: Manage the Samba DNS server

Samba4 AD DC uses an internal DNS resolver module, which is created during the initial domain provisioning (unless the BIND9 DLZ module was specifically enabled).

Samba4's internal DNS module supports the basic features an AD domain controller requires. There are two ways to manage the domain DNS server: directly from the command line via the samba-tool interface, or remotely with the RSAT DNS Manager from a Microsoft workstation joined to the domain.
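For reference, a hedged sketch of the command-line route (the DC hostname `dc1.tecmint.lan` is a placeholder; the zone, record name, and IP reuse the examples from later in this article; check `samba-tool dns --help` on your version for the exact syntax):

```
# samba-tool dns add dc1.tecmint.lan tecmint.lan gate A 192.168.1.1 -U administrator
# samba-tool dns query dc1.tecmint.lan tecmint.lan gate A -U administrator
```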
In this article we will use the second method, because it is intuitive and less error-prone.

1. To manage the DNS server on the domain controller with the RSAT tools, go to your Windows machine, open Control Panel -> System and Security -> Administrative Tools, and run the DNS Manager tool.

When the tool opens, it asks which running DNS server it should connect to. Choose "The following computer", enter the domain name (an IP address or FQDN also works), check "Connect to the specified computer now", and click OK to open the Samba DNS service.

[
![Connect Samba4 DNS on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Connect-Samba4-DNS-on-Windows.png)
][5]

*Connect to the Samba4 DNS server on Windows*

2. To add a DNS record (for example, an A record pointing to our LAN gateway), open DNS Manager, go to the domain's forward lookup zone, right-click in the right pane, and choose New Host (A or AAAA).

[
![Add DNS A Record on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-A-Record.png)
][6]

*Add a DNS A record on Windows*

3. In the New Host window, enter the hostname and the IP address of the DNS resource. The DNS Manager fills in the FQDN automatically. When done, click Add Host; a pop-up window will inform you that the DNS A record has been created.

Make sure you add DNS A records only for resources on your network with [statically configured IP addresses][7]. Do not add A records for hosts that get their IP addresses from a DHCP server or whose IP addresses change frequently.

[
![Configure Samba Host on Windows](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Host-on-Windows.png)
][8]

*Configure a Samba host on Windows*

To update a DNS record, just double-click the record and enter the changes. To delete a record, right-click the record and choose Delete from the menu.

In the same way, you can add other types of DNS records for your domain, such as CNAME records (also known as DNS alias records), MX records (very useful for mail servers), or records of other types (SPF, TXT, SRV).

### Step 2: Create a reverse lookup zone

By default, Samba4 AD DC does not automatically add a reverse lookup zone and PTR records for your domain, because these record types are not crucial for the domain controller to function.

Conversely, DNS reverse zones and PTR records prove very useful for some important network services, such as mail services, because these record types can be used to verify the identity of clients requesting a service.

[
![Create Reverse Lookup DNS Zone](http://www.tecmint.com/wp-content/uploads/2016/12/Create-Reverse-Lookup-DNS-Zone.png)
][9]

*Create a reverse lookup DNS zone*

5. Next, click the Next button and choose Primary zone from the Zone Type wizard.

[
![Select DNS Zone Type](http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-Zone-Type.png)
][10]

*Select the DNS zone type*

6. Next, for the AD zone replication scope, choose to replicate to all DNS servers running on domain controllers in this domain, select IPv4 reverse lookup zone, and click Next to continue.

[
![Select DNS for Samba Domain Controller](http://www.tecmint.com/wp-content/uploads/2016/12/Select-DNS-for-Samba-Domain-Controller.png)
][11]

*Select DNS servers for the Samba domain controller*

[
![Add Reverse Lookup Zone Name](http://www.tecmint.com/wp-content/uploads/2016/12/Add-Reverse-Lookup-Zone-Name.png)
][12]

*Add the reverse lookup zone name*

7. Next, enter your LAN IP address in the Network ID field and click Next to continue.

All PTR records added in this zone for your resources (devices) will point only to the 192.168.1.0/24 network segment. If you want to create a PTR record for a server that is not in this segment (for example, a mail server on the 10.0.0.0/24 segment), you will have to create a new reverse lookup zone for that segment.

[
![Add IP Address of Reverse Lookup DNS Zone](http://www.tecmint.com/wp-content/uploads/2016/12/Add-IP-Address-of-Reverse-DNS-Zone.png)
][13]

*Add the IP address of the reverse lookup DNS zone*

8. In the next screenshot, choose Allow only secure dynamic updates, click Next to continue, and finally click Finish to complete the creation of the reverse lookup zone.

[
![Enable Secure Dynamic Updates](http://www.tecmint.com/wp-content/uploads/2016/12/Enable-Secure-Dynamic-Updates.png)
][14]

*Enable secure dynamic updates*

[
![New DNS Zone Summary](http://www.tecmint.com/wp-content/uploads/2016/12/New-DNS-Zone-Summary.png)
][15]

*New DNS zone summary*

9. At this point you have a valid DNS reverse lookup zone for your domain. To add a PTR record in this zone, right-click in the right pane and choose to create a PTR record for a network resource.

Here, we have created a pointer for the gateway. To test whether the record was added correctly and works for clients, open a command prompt and run `nslookup` against the resource name, then against the IP address.

Both queries should return the correct results for your DNS resource.

```
nslookup gate.tecmint.lan
nslookup 192.168.1.1
ping gate
```

[
![Add DNS PTR Record and Query PTR](http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-PTR-Record-and-Query.png)
][16]

*Add and query a PTR record*

### Step 3: Manage domain Group Policy

10. The most important role of a domain controller is centralized control of system resources and security. The domain Group Policy feature of the domain controller makes these kinds of tasks easy.

Unfortunately, the only way to edit or manage Group Policy on a Samba domain controller is through Microsoft's RSAT GPM tool.

In the example below, we will see how simple it is to use Group Policy to create an interactive logon banner for domain users in a Samba domain environment.

To access the Group Policy console, open Control Panel -> System and Security -> Administrative Tools, then open the Group Policy Management console.

[
![Manage Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Manage-Samba-Domain-Group-Policy.png)
][17]

*Manage Samba domain Group Policy*

11. In the Group Policy Management Editor window, go to Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Local Policies -> Security Options, and you will see a new list of options in the right pane.

In the right pane, find and edit your custom settings, following the two entries shown in the image below.

[
![Configure Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Configure-Samba-Domain-Group-Policy.png)
][18]

*Configure Samba domain Group Policy*

12. When you have finished editing the two entries, close all windows, open a CMD window, and run the following command to force your Group Policy to apply:

```
gpupdate /force
```

[
![Update Samba Domain Group Policy](http://www.tecmint.com/wp-content/uploads/2016/12/Update-Samba-Domain-Group-Policy.png)
][19]

*Update Samba domain Group Policy*

13. Finally, reboot your computer, and when you are about to log in you will see the logon banner take effect.

[
![Samba4 AD Domain Controller Logon Banner](http://www.tecmint.com/wp-content/uploads/2016/12/Samba4-Domain-Controller-User-Login.png)
][20]

*Samba4 AD domain controller logon banner*

That's all for now! Group Policy is a vast and sensitive subject that must be handled with great care when administering your systems. Also, note that Group Policy settings you configure will not apply in any way to Linux systems joined to the domain.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/manage-samba4-dns-group-policy-from-windows/

Author: [Matei Cezar][a]
Translator: [rusking](https://github.com/rusking)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]:http://www.tecmint.com/author/cezarmatei/
[1]:https://linux.cn/article-8065-1.html
[2]:https://linux.cn/article-8070-1.html
[3]:https://linux.cn/article-8097-1.html
[4]:https://linux.cn/article-8097-1.html
[5]:http://www.tecmint.com/wp-content/uploads/2016/12/Connect-Samba4-DNS-on-Windows.png
[6]:http://www.tecmint.com/wp-content/uploads/2016/12/Add-DNS-A-Record.png
[7]:http://www.tecmint.com/set-add-static-ip-address-in-linux/

Terrible Ideas in Git
============================================================

![Corey Quinn](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/corey-quinn-lcna.png)

"Git really does let you do some amazingly powerful things," said Corey Quinn of FutureAdvisor at LinuxCon North America. "'Powerful', in this talk, is a polite way of saying 'stupid'." Who hasn't had moments with Git that made them feel like an idiot? Of course Git is great, everyone uses it, and you can do most of your work with a few basic commands. But it also has powerful features that leave us feeling like we have no idea what we're doing.

But that's really not fair to ourselves. Nobody knows everything, and everyone knows different things. Quinn reminds us: "In the Q&A portion of many of my talks, people sometimes raise a hand and say, 'Well, I have a stupid question.' And you can see people out there going, 'Yeah! That is a really stupid question.' But when they get the answer, those same people are scribbling notes."

![Git](https://www.linux.com/sites/lcom/files/styles/floated_images/public/heffalump-git-corey-quinn_0.png)

Quinn opened the talk with some entertaining demonstrations of terrible things you can do with Git, such as rebasing master and then force pushing to mess up the whole project, feeding git mangled commands to make it complain, and committing large binary files. Then he showed how to make those terrible things less terrible, such as managing large binary files more sensibly. "You can commit large binary files, you can brute-force them into Git, and if you need to store large binaries there are two tools that will speed up the loading: git-annex, developed by Debian developer Joey Hess, and git-lfs, which is backed by GitHub."

Do you make the same typos over and over, for example typing `git stitis` when you mean `git status`? Quinn has a fix: "Git has built-in alias support, so you can give a relatively long, complex invocation a short Git command name." You can also use shell aliases.
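As a quick illustration (the alias names here are just examples):

```
# Git-level aliases, stored in ~/.gitconfig
git config --global alias.st status
git config --global alias.co checkout

# Or a shell-level alias for the habitual typo (bash/zsh)
alias gst='git status'
```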
Quinn said: "We've all heard of the rebase-master-then-force-push prank; it rewrites history, suddenly everyone is caught off guard, and the whole team gets swept up in the mess. A group of whales is called a 'pod', a group of crows is called a 'murder', and a group of developers is called a 'merge conflict'... More seriously, if someone does this, you have several options: restore master from a backup, revert the commits, or throw the culprit screaming off the roof. Or take some precautions and use a little-known Git feature called protected branches. With branch protection enabled, a branch cannot be deleted or force-pushed, and a pull request must have at least one review before it is accepted."

Quinn demonstrated several more wonderfully useful tools that make Git more efficient and foolproof, such as mr, vcsh, and customized shell prompts. You can watch the complete video below to learn more fun things.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/LinuxCon-Europe/2016/terrible-ideas-git-0
Author: [CARLA SCHRODER][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [Bestony](https://github.com/Bestony)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/linux-foundation
[3]:https://www.linux.com/files/images/heffalump-git-corey-quinnpng-0
[4]:https://www.linux.com/files/images/corey-quinn-lcnapng
[5]:http://events.linuxfoundation.org/events/linuxcon-north-america

Use Docker Remotely on an Atomic Host
==========

![remote-atomic-docker](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/remote-atomic-docker-945x400.jpg)

Atomic Host from the [Atomic project][2] is a lightweight, container-based operating system that can run Linux containers. It is optimized for use as a container runtime system in cloud environments. For instance, it can host a Docker daemon and containers. At times, you may want to run docker commands on that host while managing the server from elsewhere. This article describes how to remotely access the [Docker][3] daemon of a Fedora Atomic host, [which you can download here][4]. The entire process is automated by [Ansible][5], which is a truly great tool for anything involving automation!

### A note on security

We use [TLS][6] to secure the Docker daemon, since we are connecting over the network. This process requires a client certificate and a server certificate. The OpenSSL package is used to create the certificate keys for establishing the TLS connection. Here, the Atomic host runs the daemon, and our local [Fedora Workstation][7] acts as the client.

Before you follow these steps, note that *any* process on the client that can access the TLS certificates has **full root access on the server**. Thus, the client can do anything it wants on the server. We therefore need to grant certificate access only to the specific client host that can be trusted. You should copy the client certificates only to a client host completely under your control. Even in that case, the security of the client machine is critical.

However, this method is only one way to access the daemon remotely. Orchestration tools often provide more secure controls. The simple method below works well for personal experimentation, but may not be appropriate for an open network.

### Get the Ansible role

[Chris Houseknecht][8] wrote an Ansible role that creates all the certificates required, so there is no need to run `openssl` commands manually. The role is available in the [Ansible role repository][9]. Clone it to your current working host:

```
$ mkdir docker-remote-access
$ cd docker-remote-access
$ git clone https://github.com/ansible/role-secure-docker-daemon.git
```

### Create the configuration files

Next, you must create an Ansible configuration file, an inventory file, and a playbook file to set up the client and the daemon. The following instructions create client and server certificates on the Atomic host, fetch the client certificates to the local machine, and configure the daemon and the client so they can talk to each other.

Here is the directory structure you need. Create each of the files below as shown.

```
docker-remote-access/
├── ansible.cfg
├── inventory
├── remote-access.yml
└── role-secure-docker-daemon
```
`ansible.cfg`:

```
$ vim ansible.cfg
```

```
[defaults]
inventory=inventory
```

The inventory file (`inventory`):

```
$ vim inventory
```

```
[daemonhost]
'IP_OF_ATOMIC_HOST' ansible_ssh_private_key_file='PRIVATE_KEY_FILE'
```

Replace `IP_OF_ATOMIC_HOST` in the inventory file with the IP of your Atomic host, and replace `PRIVATE_KEY_FILE` with the location of the SSH private key file on your local system.

The playbook file (`remote-access.yml`), which begins as follows:

```
$ vim remote-access.yml
```

```
- name: Docker Client Set up
  hosts: daemonhost
  gather_facts: no
```
### Access the Atomic host

Now run the Ansible playbook:

```
$ ansible-playbook remote-access.yml
```
Make sure that tcp port 2376 is open on your Atomic host. If you are using OpenStack, add TCP port 2376 to your security rules. If you use AWS, add it to your security group.

Now, a `docker` command run as a regular user on your workstation talks to the daemon of the Atomic host and executes the commands there. You do not need to manually `ssh` in or issue commands on the Atomic host. This lets you launch containerized applications remotely, easily, and securely.
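A hedged sketch of what the client side typically needs once the certificates are in place (the IP and certificate path are placeholders; the Ansible role may set this up for you):

```
# Point the Docker client at the remote daemon and require TLS verification
export DOCKER_HOST=tcp://IP_OF_ATOMIC_HOST:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker   # directory holding ca.pem, cert.pem, key.pem

docker info   # should now report the Atomic host's daemon
```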
If you want to clone the Ansible playbook and configuration files, here is the [git repository][10].

![docker-daemon](https://cdn.fedoramagazine.org/wp-content/uploads/2017/01/docker-daemon.jpg)

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-docker-remotely-atomic-host/

Author: [Trishna Guha][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

Do I need to provide access to source code under the AGPLv3 license?
============================================================

![Do I need to provide access to source code under the AGPLv3 license?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/law/LAW_PatentSpotlight_520x292_cm.png.png?itok=bCn-kMx2 "Do I need to provide access to source code under the AGPLv3 license?")

Image credit: opensource.com

The [GNU Affero General Public License version 3][1] (AGPLv3) is a copyleft license nearly identical to GPLv3. Both licenses have the same copyleft scope, but they differ materially in one important respect. Section 13 of AGPLv3 states an additional condition that does not exist in GPLv2 or GPLv3:

> Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version, free of charge, from a network server through some standard or customary means of facilitating copying of software.

The scope of "interacting remotely through a computer network" is chiefly understood to cover SaaS deployments, although as written it reaches beyond conventional SaaS scenarios. Its aim is to close the gap left by the ordinary GPL when users interact with functionality such as web services without the code ever being conveyed. Section 13 therefore adds a source disclosure requirement on top of the object-code distribution triggers contained in Section 3 of GPLv2 and Section 6 of GPLv3 and AGPLv3.

A point often misunderstood is that the source distribution requirement in AGPLv3 Section 13 is triggered only where the AGPLv3 software has been modified by "you" (for example, the entity providing the network service). My reading is that as long as "you" do not modify the AGPLv3 code, the license should not be interpreted as requiring access to the corresponding source in the manner described in Section 13. As I see it, many unmodified, standard deployments of software modules under the AGPL simply do not trigger Section 13, even though making source available when the license does not require it is still a good idea.

How the terms and conditions of the AGPL are interpreted, including whether the AGPL software has been modified, may require legal analysis based on the facts and details of the specific situation.
--------------------------------------------------------------------------------
Author bio:

![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/kaufman-picture.jpg?itok=FPIizDR-)

Jeffrey R. Kaufman is an open source IP attorney at Red Hat, the world's leading provider of open source software solutions. Jeffrey also serves as an adjunct professor at Thomas Jefferson School of Law. Before joining Red Hat, Jeffrey was patent counsel at Qualcomm, providing open source counsel to the Office of the Chief Scientist. Jeffrey holds multiple patents in RFID, barcoding, image processing, and printing technologies.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/providing-corresponding-source-agplv3-license
Author: [Jeffrey Robert Kaufman][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [Bestony](https://github.com/Bestony)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://opensource.com/users/jkaufman
[1]:https://www.gnu.org/licenses/agpl-3.0-standalone.html

Why Linux installers need to add security features
============================================================

> With security problems mounting, Linux distributions need to highlight basic security options in their installers instead of leaving users to add them manually later.

Twelve years ago, Linux distributions were striving to make installation simple. Led by Ubuntu and Fedora, they long since achieved that goal. Now, with growing attention to security, they need to shift direction slightly and highlight basic security options in the installer, rather than leaving users to add those options manually later.

Of course, persuading users to set up security features is difficult under the best of circumstances. Too many users won't even bother with features as simple as an unprivileged user account or a password, apparently preferring to reduce risk by reinstalling or by consulting an expert at eighty dollars an hour.

Yet even if ordinary users won't go out of their way for security, they might pay attention during installation. They may never think of it again, but during the install, while their attention is focused, especially if visible online help explains the benefits, they might be persuaded to check a box.

Nor would this shift be a dramatic one. Many installers already offer the choice of automatic login, a feature that may be acceptable for installs containing no personal data, but is more likely used by those who find logging in inconvenient. Likewise, thanks to Ubuntu's choice to encrypt the filesystem (at least in the home directory), encryption has become standard in many installers. What I am really suggesting is more of the same.

Moreover, external programs have set the example: Firefox has seamlessly incorporated private browsing, and [Signal Private Messenger][8] is a drop-in replacement for the standard Android phone and contacts apps.

These suggestions are far from radical. All that is needed to implement them is will and imagination.

### First steps for Linux security

What sorts of security features should be added to installers?

First, firewalls. There are plenty of graphical tools for setting up a firewall. Yet despite seventeen years of experience, like Byron discussing Coleridge's metaphysics, I sometimes still wish someone would explain them to me.

Despite good intentions, most firewall tools feel like a raw dump of iptables. The now-discontinued hardening system [Bastille Linux][9] could install a basic firewall, and I see no reason why other distributions cannot do the same.

Some tools are intended for use after installation, yet could be added to an installer with little difficulty. For example, [Grub 2][10], the boot manager used by most distributions, includes basic password protection. Admittedly, the password can be bypassed with a Live CD, but it still offers a degree of protection in everyday circumstances, including remote logins.

Similarly, a password generator such as [pwgen][11] could be added to the account-setup stage of the installer. These tools enforce acceptable password lengths and mixes of upper and lower case letters, numbers, and special characters. Many of them can generate passwords for you, and some can even make the generated passwords pronounceable so they are easier to remember.
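For instance (options vary by version; `grub-mkpasswd-pbkdf2` is the stock Grub 2 helper for hashing a boot-loader password):

```
# Generate one 16-character password that includes symbols
$ pwgen -y 16 1

# Hash a password for use in a Grub 2 configuration
$ grub-mkpasswd-pbkdf2
```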
Other tools could be added to this part of the install process as well. For example, the installer could ask for a regular backup schedule, then add a cron job and a simple backup tool such as [kbackup][12].

And what about encrypted email? The most popular mail readers today include the ability to encrypt messages, but setting up and using encryption requires enough extra effort, complicating a common task, that users skip it. Yet looking at how simple Signal makes encryption on a phone, it is obvious that encryption could be easier on laptops and workstations. Most distributions would probably prefer peer-to-peer encryption over a centralized server like Signal's, but programs such as [Ring][13] could provide exactly that.

Whatever features are added to the installer, perhaps these precautions could also be extended to productivity software such as LibreOffice. Most security efforts focus on email, web browsing, and chat, but word processors and spreadsheets, with their macro languages, are an obvious source of malware infection and a privacy concern. With a few exceptions such as [Qubes OS][14] or [Subgraph][15], little effort has been made to bring productivity software within security precautions, which may leave a security gap unfilled.

### Adapting to modern times

Of course, security-minded users may take such measures anyway; such users can look after themselves.

My concern is for users who know less about security or are less willing to do their own patching. The need for easy-to-use security keeps increasing, and it is long overdue.

These examples are only a start. The tools required mostly exist already; what is needed is to implement them in ways users cannot ignore and can use without understanding much. Implementing all of this might take no more than a person-month, including prototyping, UI design, and testing.

Until these features are added, however, mainstream Linux distributions can hardly be said to take security seriously. After all, how good are tools if users never use them?
--------------------------------------------------------------------------------
via: http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html
Author: [Bruce Byfield][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [Bestony](https://github.com/Bestony), [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html
[1]:http://www.datamation.com/feedback/http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html
[2]:http://www.datamation.com/author/Bruce-Byfield-6030.html
[3]:http://www.datamation.com/e-mail/http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html
[4]:http://www.datamation.com/print/http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html
[5]:http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html#comment_form
[6]:http://www.datamation.com/security/why-linux-installers-need-to-add-security-features.html#
[7]:http://www.datamation.com/author/Bruce-Byfield-6030.html
[8]:https://whispersystems.org/
[9]:http://bastille-linux.sourceforge.net/
[10]:https://help.ubuntu.com/community/Grub2/Passwords
[11]:http://pwgen-win.sourceforge.net/downloads.html
[12]:http://kbackup.sourceforge.net/
[13]:https://savannah.gnu.org/projects/ring/
[14]:https://www.qubes-os.org/
[15]:https://subgraph.com/sgos/

Using Tuleap for software project management
============================================================

> Tuleap is being used by the Eclipse Foundation to replace Bugzilla.

![Get to know Tuleap for project management](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=kOixOaEU "Get to know Tuleap for project management")

Image credit: opensource.com

Tuleap is a unique open source project management tool with real momentum behind it: a major release now ships every month. It was also listed among the [top 5 open source project management tools of 2015][1] and the [top 11 project management tools of 2016][2].

Manuel Vacelet, co-founder and CTO of Enalean, the company behind the Tuleap project, says: "Tuleap is a complete GPLv2 platform for hosting software projects. It provides a centralized place where teams can find all the tools they need to track the lifecycle of their software projects: support for project management (Scrum, Kanban, waterfall, hybrid, and more), source control (git and svn) and code review (pull requests and gerrit), continuous integration, issue tracking, wikis, and documentation."

In this interview, I talk with Manuel about how to get started with Tuleap and how the project itself is managed as open source.

**Nitish Tiwari (NT): Why is the Tuleap project important?**

**Manuel Vacelet (MV):** Tuleap is important because we firmly believe that a successful software project must involve all stakeholders: developers, project managers, QA, customers, and users.

Long ago, I was an intern on a SourceForge fork (back when SourceForge was still a free and open source project) that years later became Tuleap. My first contribution was integrating PhpWiki into the tool (don't tell anyone, the code was awful).

Today, I am happy to work as CTO and product owner at Enalean, the main company contributing to the Tuleap project.

**NT: Let's talk about the technical side.**

**MV:** The Tuleap core system is LAMP-based and built on CentOS. Today's development stack is AngularJS (v1), a REST backend (PHP), and a NodeJS-based realtime push server. But if you want to become a full-stack Tuleap developer, you will also need to touch bash, Perl, Python, Docker, Make, and more.

On the technical side, one distinctive Tuleap characteristic worth emphasizing is its scalability: a single Tuleap instance running on a single server, with no complex IT architecture, can handle more than 10,000 people.

**NT: Tell us about the project's users and community. Who is involved, and how do they use the tool?**

**MV:** The users are very diverse: from small startups that use Tuleap to track project progress and manage their source code, to very large companies such as the French telecom operator Orange, which deployed it for more than 17,000 users hosting 5,000 projects.

Many users rely on Tuleap to facilitate agile projects and track their progress. Developers and customers share the same workspace. Customers don't need to learn how to use GitHub, and developers don't have to do extra work to move their output to a "customer-accessible" platform.

This year, Tuleap was adopted by the [Eclipse Foundation][3] to replace Bugzilla.

India's Ministry of Electronics and Information Technology used Tuleap to create an open collaborative development platform for the Indian government's open e-governance.

Tuleap is used and configured in many different ways. Some people use it as the backend of a Drupal customer portal; they plug into Tuleap through the REST API to manage bugs and service requests.
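As a hedged sketch of that kind of integration (the host and artifact ID are placeholders, and authentication headers are omitted; consult your instance's REST documentation for the exact resources):

```
# Fetch one tracker artifact as JSON via the REST API
curl https://tuleap.example.com/api/artifacts/123
```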
Some architects even use it to manage their work progress and AutoCAD files.

**NT: Does Tuleap do anything special to make the community safer and more diverse?**

**MV:** We have not yet created a code of conduct (the community is very peaceful and welcoming to newcomers), but we plan to. Tuleap's developers and contributors come from different countries (for example, Canada, Tunisia, and France), and 35% of the active developers and contributors are women.

**NT: What percentage of Tuleap features are proposed by the community?**

**MV:** Almost 100% of the features are community-driven.

That is one of Enalean's key challenges: finding a business model that lets us do open source software the right way. For us, the "open core" model (where the core of the application is open but the interesting and useful parts are closed source) is not the right approach, because you still end up depending on closed source. So we invented [OpenRoadmap][4], an approach whereby we collect needs from the community and end users, and find companies to fund the development.
--------------------------------------------------------------------------------
Author bio:

![](https://opensource.com/sites/default/files/styles/profile_pictures/public/nitish-crop.png?itok=h4PaLDQq)

Nitish is a professional software developer with a passion for open source. As a technical author for a Linux-based magazine, he tries out new open source tools, and he loves reading and exploring anything related to open source. In his free time, he enjoys reading motivational books. He is currently building DevUp, a platform for developers to connect all their tools and embrace DevOps in a true sense. You can follow him on Twitter @tiwari_nitish.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/interview-Tuleap-project
Author: [Nitish Tiwari][a]
Translator: [geekpi](https://github.com/geeki)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://opensource.com/users/tiwarinitish86
[1]:https://opensource.com/business/15/1/top-project-management-tools-2015
[2]:https://opensource.com/business/16/3/top-project-management-tools-2016
[3]:http://www.eclipse.org/
[4]:https://blog.enalean.com/enalean-open-roadmap-how-it-works/

Long-term maintenance of embedded Linux kernels made easier
============================================================

![Jan Lübbe ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/jan-lubbe-elc.png?itok=6G5lADKu "Jan Lübbe ")

*Pengutronix kernel hacker Jan Lübbe summarized the growing security threats in embedded Linux and, at Embedded Linux Conference Europe, outlined a plan for keeping long-lived devices secure and fully functional.* [Linux Foundation][1]

The good old days when security breaches happened only on Windows are rapidly coming to an end. Malware hackers and denial-of-service veterans increasingly target out-of-date embedded Linux devices, so several talks at October's [Embedded Linux Conference Europe][3] (ELCE) addressed fixing Linux security holes.

One of the best talks to start with is that of [Pengutronix][4] kernel hacker Jan Lübbe, on maintaining (or mismanaging) embedded systems for 10 years and more. After summarizing the growing security threats in embedded Linux, Lübbe laid out a plan for keeping long-lived devices secure and functional. "We need to move to newer, more stable kernels and do continuous maintenance to fix the critical vulnerabilities," Lübbe said. "We need to do upstream updates and automate processes, and establish a sustainable workflow. There is no excuse for leaving outdated software in a system."

As Linux devices grow older, the traditional lifecycle no longer applies. "Typically, you take a kernel from your SoC vendor or mainline, take a build system, and add your user space," Lübbe said. "You customize, add applications, and do some testing. But then comes a 15-year maintenance phase in which you had better hope the platform doesn't change, you never want new features, and no administrative adjustments are needed."

All those changes increasingly expose your system to new bugs and require substantial updates to stay in sync with upstream software. "Bugs that cause problems in the kernel are not always unintentional," Lübbe said, referring to the [backdoor][5] found last year in an Allwinner kernel. "These vendor kernels never go through the review process of the mainline kernel community."

"You can't assume your vendor is on top of things," Lübbe continued. "Maybe only one or two engineers ever looked at the backdoor code. That wouldn't happen if the patch were posted on the Linux kernel mailing list, because someone always notices. Hardware vendors don't care about security or maintenance. Maybe you get an update after a year or two, but even then, they start development from a fixed version, and it usually takes them years to release a stable one. If you then base your own development on that, perhaps another half year passes, leaving you even further out of date."

More and more embedded developers build long-term products on Long Term Stable (LTS) kernels. But that doesn't mean you're done. "Once a product ships, people often stop following the stable release chain and stop applying security patches," Lübbe said. "Then you get the worst of both worlds: an outdated kernel and no security. You lose the benefit of all the people testing."

Lübbe noted that Pengutronix customers who use server-oriented distributions such as Red Hat often run into problems caused by rapid customization and by deployment and upgrade systems that require sysadmin intervention.

"Updates work for some things, especially on x86, but every project basically builds its own infrastructure to get updates to a new version."

Many developers choose backporting as the solution for updating long-term products. "It's easy at the beginning, but once you're outside the project's maintenance window, they won't tell you whether the version you use is affected by a bug, so it becomes hard to judge whether a fix is relevant. So you keep patching and updating while the bugs accumulate, and you have to maintain the patches yourself, because nobody else uses them. The benefit of using open source software is lost."

### Follow the upstream projects

The best solution, Lübbe argues, is to track the versions maintained by the upstream projects. "We focus primarily on development based on the mainline kernel, so we have as little divergence as possible between our products and the mainline kernel and other upstream projects. Long-term systems are well supported on the mainline kernel. Most systems that don't use 3D graphics need only a few patches. Newer kernel versions also bring many [new hardening features][6] that reduce the impact of vulnerabilities."

Following mainline may seem daunting to many developers, but it is relatively easy if you do it from the start and then stick with it, Lübbe said: "You need processes for everything you do on your system. You always need to know what software is running, which is easier with a good build system. Every software version should define the complete system, so you can update everything that's relevant. If you don't know what's there, you can't fix it. You also need automated testing and automated deployment of updates."

To "reduce the update cycles", Lübbe recommends using the latest Linux kernel when starting development, and moving to a stable kernel only when entering testing. After that, he suggests updating all the software in the system once a year, including the kernel, the build system, user space, glibc, and components such as OpenSSL, to the versions supported by the upstream projects that year.

"Getting updates doesn't mean you need to deploy them," Lübbe said. "If you don't see a security vulnerability, you can set the patch aside and use it when you need it."

Finally, Lübbe recommends reading the release announcements every month, and checking the CVEs and security announcements on the mainline list every week. You only have to ask yourself whether a given security announcement actually affects you. "If your kernel is recent enough, it's not much work," he added. "You don't want to get feedback on your product by seeing your devices in the news."
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/ELCE/2017/long-term-embedded-linux-maintenance-made-easier
Author: [ERIC BROWN][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://www.linux.com/licenses/category/linux-foundation
[2]:https://www.linux.com/files/images/jan-lubbe-elcpng
[3]:http://events.linuxfoundation.org/events/archive/2016/embedded-linux-conference-europe
[4]:http://www.pengutronix.de/index_en.html
[5]:http://arstechnica.com/security/2016/05/chinese-arm-vendor-left-developer-backdoor-in-kernel-for-android-pi-devices/
[6]:https://www.linux.com/news/event/ELCE/2017hardening-kernel-protect-against-attackers

How to write web apps in R with Shiny
============================================================

![How to write web apps in R with Shiny](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUSINESS_lightbulbs.png?itok=70w-2-Ta "How to write web apps in R with Shiny")

Image credit: opensource.com

I'm working on a number of longer articles this month, so you'll see those here in the coming weeks. This month, I want to briefly mention a really cool R library that I've been playing with.

A close friend of mine has recently been coding things in R. I've always been interested in it and have been trying to carve out time to learn more about R and the things that can be done with it. Exploring R's powerful number-crunching abilities has been a challenge for me, since I don't have quite the mathematical mind my friend does. My progress has been a bit slow, but I've kept trying to connect it with my experience in other areas, and I've even started thinking about very simple web applications.

[Shiny][1] is a toolkit from RStudio that makes creating web applications easier. It installs easily from the R console in a single line, loading the latest stable version for use. There is an excellent [tutorial][2] that walks you through the concepts of building applications, with each lesson building on the earlier ones. Shiny is licensed GPLv3, and the source is available on [GitHub][3].
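That one-line install is an ordinary CRAN install from the R console (nothing here is specific to Shiny beyond the package name):

```
install.packages("shiny")  # fetch the latest stable release from CRAN
library(shiny)             # load the library into the session
```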
Here is a simple little web app written with Shiny:
```
library(shiny)

server <- function(input, output, session) {
  observe({
    myText <- paste("Value above is: ", input$textIn)
    updateTextInput(session, "textOut", value = myText)
  })
}

ui <- basicPage(
  h3("My very own sample application!"),
  textInput("textIn", "Input goes here, please."),
  textInput("textOut", "Results will be printed in this box")
)

shinyApp(ui = ui, server = server)
```
As you type text into the input box, it is copied to the output box after the prompt text. Nothing fancy, but it shows the basic structure of a Shiny application. The "server" section lets you handle all the backend work: computation, database retrieval, or whatever else your app needs to happen. The "ui" section defines the interface, which can be as simple or as complex as you need.

[Bootstrap][4] is included in Shiny, so there is a wealth of styling and theming, and with a little learning you can build very feature-rich web applications in R. Add-on packages extend the capabilities to more advanced JavaScript apps, templating, and more.

There are several ways to handle the backend work of Shiny. If you're just running your apps locally, loading the library does the job. For apps you want to publish to the web, you can share them on [RStudio's Shiny site][5], run the open source version of Shiny Server, or buy Shiny Server Pro from RStudio on an annual subscription.

Experienced R gurus probably already know about Shiny; it has been around for a few years now. For someone like me, who comes from a completely different set of programming languages and wants to learn a little R, it has been quite helpful.
--------------------------------------------------------------------------------
Author bio:

![](https://opensource.com/sites/default/files/styles/profile_pictures/public/ruth1_avi.jpg?itok=I_EE7NmY)

D Ruth Bavousett has been a system administrator and software developer for a long, long time; her professional career started on a VAX 11/780. In her career so far, she has devoted a lot of time to serving the needs of libraries, and she has been a contributor to the Koha open source library automation suite since 2008. Ruth currently works as a Perl developer at cPanel in Houston, and also serves as chief of staff for two cats.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/writing-new-web-apps-shiny
Author: [D Ruth Bavousett][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://opensource.com/users/druthb
[1]:http://shiny.rstudio.com/
[2]:http://shiny.rstudio.com/tutorial
[3]:https://github.com/rstudio/shiny
[4]:http://getbootstrap.com/
[5]:http://shinyapps.io/

How to join a technical community
============================================================

> Following a few simple steps can make it much easier to fit into a new community.

![How to join a technical community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_DebucketizeOrgChart_A.png?itok=oBdRm8vc "How to join a technical community")

*Image credit: opensource.com*

Joining a new community can be a daunting task in many situations. The anxiety can be especially acute when joining a new technical community, some of which are famous for being harsh, even derisive, toward newcomers.

While it is possible to land somewhere unfair, I think you will find that most technical communities are quite reasonable, and a few simple steps can ease the transition from outsider to member.

### Assess the fit

Before you actually join a community, first make sure the community is right for you and, at the same time, that you are right for the community.

That sounds simple, but every community has its own culture, attitude, philosophy, and accepted norms. If you know relatively little about a topic, a community aimed at industry professionals may not be the ideal starting point. Likewise, if you are a seasoned expert looking for in-depth answers to extremely complex questions, a beginners' community is certainly not a good fit either. Either way, a mismatch on both sides almost guarantees disappointment all round. Similarly, some communities are very formal and business-oriented, while others are loose and laid back, and many sit somewhere in between. Choosing a community that suits you, or at least one that does not repel you, helps ensure your long-term participation and smooths this first step.

### Lurk first

Watching a community in read-only mode is a good way to begin. That does not mean you should not create an account or join right away; it simply means that observing gives you a sense of the space (whether virtual or physical). Lurking for a while helps you acclimate to the community's rules and culture, so you can decide whether you think it is a good platform for you.

### Introduce yourself

Depending on the community, the details of introducing yourself will vary greatly. Again, make sure you do it in a way the community finds acceptable.

Some communities have a dedicated introductions section, while in others it may mean filling out your profile with meaningful and relevant information. If the community is a mailing list or an IRC channel, it may make more sense to include a brief introduction in your first question. This lets the community know who you are and why you want to be a part of it, and tells them a little about yourself and your technical level.

### Be respectful

While accepted styles vary greatly from community to community, you should always be respectful. Avoid flame wars and personal attacks, and always aim to be constructive. Remember: what you post on the Internet stays there, forever, visible to everyone.

### Questions, answers, and discussions

#### Asking questions

Remember, well-crafted questions get better answers faster, as I pointed out in my October column, [The Queue][2].

#### Answering

When you come across a basic or very easily answered question on a topic you know well, the ideal of "respect" applies just as much as when you ask. A long-winded, condescending, technically correct answer is not the right way to introduce yourself to a new community.

#### Chatter

Even in a technical community, not every discussion is about a question and an answer. In those cases, offering differing opinions or challenging someone's views respectfully and thoughtfully, without insults or personal attacks, is the healthy and proper approach.

### Enjoy yourself

The most important thing for long-term participation in a community is that it satisfies you. Participating in a vibrant community is a great opportunity to learn, grow, challenge yourself, and improve. Much of the time it is not easy, but it is worth it.
--------------------------------------------------------------------------------
Author bio:

Jeremy Garcia is the founder of LinuxQuestions.org and a passionate yet pragmatic open source advocate. Personal Twitter: @linuxquestions
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/how-join-technical-community
Author: [Jeremy Garcia][a]
Translator: [livc](https://github.com/livc)
Proofreader: [Bestony](https://github.com/Bestony), [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://opensource.com/users/jeremy-garcia
[1]:https://opensource.com/article/17/1/how-join-technical-community?rate=SfjMzwYInmhZiq6Yva3D87kngE-ocLOVraCD0wWbBss
[2]:https://opensource.com/life/16/10/how-ask-technical-questions
[3]:https://opensource.com/user/86816/feed
[4]:https://opensource.com/article/17/1/how-join-technical-community#comments
[5]:https://opensource.com/users/jeremy-garcia

Cool Search Tricks for Vim
================================

Although we have already [covered][8] several of Vim's features, the editor's feature set is so vast that no matter how much we learn, it never seems to be enough. Continuing our series of Vim tutorials, in this article we discuss the various search techniques the editor offers.

Before we proceed, note that all the examples, commands, and instructions mentioned in this article were tested on Ubuntu 14.04 with Vim 7.4.

### Basic search operations in Vim

When you have a file open in Vim and want to search for a particular word or pattern, the first step is to exit insert mode by pressing the `Esc` key (if you are currently in insert mode). Then type `/` followed by the word or pattern you want to search for.

For example, if the word you want to search for is `linux`, this is how the search command looks at the bottom of the Vim window:
[![Search for words in vim](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-basic-search.png) ][9]
After you hit Enter, Vim places the cursor on the first line containing the word, starting from the position the cursor held in insert mode. If you have just opened a file and begin searching immediately, the search starts from the first line of the file.

To move to the next occurrence of the searched word, press `n`. When you have cycled through all the matches, pressing `n` again repeats the search and the cursor jumps back to the first match.
[ ![Move to next search hit](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-end.png) ][10]
While moving through the matches, press `N` (that is, `Shift` + `n`) to jump back to the previous match. Also note that at any time, you can type `ggn` to jump to the first match, or `GN` to jump to the last one.

When you happen to be at the bottom of a file and want to search backwards, start the search with `?` instead of `/`. Here is an example:
[![search backwards](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-back.png)][11]
### Customizing your search

#### 1. Search highlighting

Although jumping between matches of the searched word or pattern with `n` and `N` is easy, things get friendlier when the matches are highlighted. For example, see the following screenshot:

[![Search Highlighting in VIM](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-highlight-search.png) ][12]

This can be enabled by setting the `hlsearch` variable, for example with the following command in normal/command-line mode:
```
:set hlsearch
```
[![set hlsearch](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-set-hlsearch.png) ][13]
#### 2. Making the search case-insensitive

Searches in Vim are case-sensitive by default. This means that if I search for `linux`, then `Linux` will not match. If that is not the search behavior you want, you can make searches case-insensitive with the following command:
```
:set ignorecase
```
So after I set the `ignorecase` variable and then searched for `linux` with the command mentioned earlier, occurrences of `Linux` were highlighted as well.
[![search case-insensitive](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-case.png) ][14]
#### 3. Smartcase search

Vim also offers a feature whereby the editor is case-sensitive only when the word or pattern you are searching for contains uppercase letters. To enable it, set `ignorecase` first, followed by the `smartcase` variable.
```
:set ignorecase
:set smartcase
```
For example, if a file contains both `LINUX` and `linux`, then with smartcase search enabled, only the word `LINUX` is found when you search with `/LINUX`. Conversely, a search for `/linux` matches both, regardless of case.

#### 4. Incremental search

Just like Google, which updates its results as you type each character of your query, Vim also offers incremental search. To use the feature, run the following command before searching:
```
:set incsearch
```
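To make these settings stick across sessions, you can put them all in your `~/.vimrc` (the same options discussed above, simply combined):

```
" Persistent search settings, combining the options above
set hlsearch    " highlight matches
set ignorecase  " case-insensitive by default...
set smartcase   " ...unless the pattern contains uppercase
set incsearch   " show matches while typing
```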
### Some other cool Vim search tips and tricks

You may find several other search-related tricks useful.

To start off: if you want to search for a word that is in the file but do not want to type it out, just move the cursor onto the word and press `*` (or `Shift` + `8`). For a partial match (for example, matching both `in` and `terminal`), move the cursor onto the word (`in` in this example) and press `g*` (press `g` once, then `*`).

Note: press `#` or `g#` to search in the reverse direction.

Next, if you want, you can get a list of all the lines (with line numbers) containing matches of the searched word or pattern. Do this by pressing `[I` after you have started a search. Here is an example of how the results are grouped and displayed at the bottom of the Vim window:
[![grouped search results](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-results-list.png) ][15]
接下来你可能已经得知Vim 默认是环形搜索的,意味着在到达文件结尾处(或者被搜索单词的最后一处匹配)时,如果继续按 “搜索下一个” 会将光标再次带回第一处匹配处。如果你希望禁止环形搜索,可以使用如下命令:
```
:set nowrapscan
```
To re-enable wrap-around searching, use:
```
:set wrapscan
```
Finally, suppose you want to make a small change to a word that already exists in the file and then search for the modified word. One way is to type `/` followed by the word, but if the word is long or complicated, it may take a while to type it out.

An easy alternative is to move the cursor onto the word you want to slightly modify, press `/`, then press `Ctrl` + `r` followed by `Ctrl` + `w`. The word under the cursor is not just copied, it is pasted after the `/`, letting you edit it and then proceed with the search.

For more tricks (including how the mouse can make things easier in Vim), head to the [official Vim documentation][16].

### Conclusion

Of course, nobody expects you to memorize all the tricks mentioned here. What you should do is start practicing with whichever trick you think will benefit you most. Once it becomes a habit and is embedded in your memory, come back here to figure out which one to learn next.

Do you know any other tricks like these that you would like to share? Leave a comment below!
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/
Author: [Himanshu Arora][a]
Translator: [xiaow6](https://github.com/xiaow6)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/
[1]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-search-highlighting
[2]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-making-searchnbspcaseinsensitive
[3]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-smartcase-search
[4]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-incremental-search
[5]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#customize-your-search
[6]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#some-other-cool-vim-search-tipstricks
[7]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#conclusion
[8]:https://www.howtoforge.com/tutorial/vim-editor-modes-explained/
[9]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-basic-search.png
[10]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-end.png
[11]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-back.png
[12]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-highlight-search.png
[13]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-set-hlsearch.png
[14]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-case.png
[15]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-results-list.png
[16]:http://vim.wikia.com/wiki/Searching

A Beginner's Guide to sudo
============================================================

Have you ever gotten a "Permission denied" error while working on the Linux command line? Chances are you were trying to perform an operation that requires root privileges. For example, the screenshot below shows the error produced when I tried to copy a binary file into a system directory.

[
![Permission denied in the shell](https://www.howtoforge.com/images/sudo-beginners-guide/perm-denied-error.png)
][11]

So how do you deal with this error? Simple: use the `sudo` command.

[
![Running a command with sudo](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-example.png)
][12]

The user who runs the command is prompted for their (**own**) login password. Once the correct password is entered, the operation succeeds.

Without a doubt, `sudo` is a command that anyone who uses the command line on Linux must know. But there are several related (and in-depth) details you should know in order to use the command more responsibly and effectively. That is what we will discuss in this article.

*Before we proceed, it is worth mentioning that all the commands and instructions in this article have been tested on Bash 4.3.11 on Ubuntu 14.04 LTS.*
### What is sudo?

As most of you know, `sudo` is used to execute commands with elevated privileges (usually as root). An example was discussed in the introduction above. However, if you want, you can also use `sudo` to run commands as other (non-root) users.

This is handled by the tool's `-u` command-line option. For example, as shown below, I (`himanshu`) tried to rename a file in another user's (`howtoforge`) home directory and got a "Permission denied" error. Then I ran the same `mv` command prefixed with `sudo -u howtoforge`, and it succeeded:

[
![What is sudo](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-switch-user.png)
][13]

### Can any user use sudo?

No. For a user to be able to run `sudo`, there should be an entry for that user in the `/etc/sudoers` file. The following excerpt from the Ubuntu website explains it better:

> The `/etc/sudoers` file controls who can run what commands as what users on what machines, and can also control special cases such as whether you need a password for particular commands. The file is composed of aliases (basically variables) and user specifications (which control who can run what).

If you are using Ubuntu, making a user able to run `sudo` is easy: all you need to do is change the account type to administrator. This can be done directly in System Settings -> User Accounts.

[
![sudo users](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-user-accounts.png)
][14]

First unlock the window:

[
![unlocking window](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-user-unlock.png)
][15]

Then select the user whose type you want to change, and switch the type to administrator:

[
![choose sudo accounts](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-admin-account.png)
][16]

However, if you are not using Ubuntu, or if your distribution does not provide this feature, you can edit the `/etc/sudoers` file manually to make the change. Add a line like this to the file:
```
[user] ALL=(ALL:ALL) ALL
```
Needless to say, `[user]` should be replaced by the username of the user being granted sudo privileges. One important thing worth mentioning here is that the officially recommended way to edit this file is through the `visudo` command; all you need to do is run:
```
sudo visudo
```
To make it clear why, here is an excerpt from the `visudo` man page:

> `visudo` edits the `sudoers` file in a safe fashion. `visudo` locks the `sudoers` file against multiple simultaneous edits, provides basic sanity checks, and checks for syntax errors. If the `sudoers` file is currently being edited, you will receive a message to try again later.

For more on visudo, head [here][17].
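For illustration, a couple of other common shapes a sudoers rule can take (standard sudoers syntax; the user, group, and command below are placeholders):

```
# Let members of the admin group run any command as any user
%admin ALL=(ALL:ALL) ALL

# Let one user run a single command as root without a password prompt
alice ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx
```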
### What is a sudo session?

If you use the `sudo` command frequently, you have surely noticed that after successfully entering the password once, you can run several `sudo` commands without being asked for it again. But after a while, `sudo` prompts for your password again.

This behavior has nothing to do with the number of sudo-powered commands you run; it is time-based. Yes, by default `sudo` will not ask for a password for 15 minutes after the user has entered it once. After those 15 minutes, you will be prompted for the password again.

If you want, however, you can change this behavior. Open the `/etc/sudoers` file with:
```
sudo visudo
```
Find the following line:
```
Defaults env_reset
```
[
![env_reset](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-session-time-default.png)
][18]
and append the following variable at the end of that line:
```
Defaults env_reset,timestamp_timeout=[new-value]
```
where `[new-value]` is the number of minutes you want a `sudo` session to last. For example, we set the value to 40:
[
![sudo timeout value](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-session-timeout.png)
][19]
If you want `sudo` to ask for a password every time you use it, set this variable to 0. And to make the sudo session never time out, set it to -1.

Note that setting `timestamp_timeout` to "-1" is strongly discouraged.

### The sudo password

You may have noticed that when `sudo` prompts for your password and you start typing, nothing is displayed, not even the customary asterisks. While that is no big deal, some users simply want the asterisks shown.

The good news is that this is possible and easy to do. All you need to do is change the following line in the `/etc/sudoers` file:
```
Defaults env_reset
```
to:
```
Defaults env_reset,pwfeedback
```
and save the file.

Now, whenever you type a `sudo` password, asterisks are displayed.
[
![hide the sudo password](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-password.png)
][20]
### Some important sudo command-line options

Aside from the `-u` option (which we discussed at the beginning of this tutorial), there are other important `sudo` command-line options that deserve a mention. In this section, we will discuss some of them.

#### The -k option

Consider a situation where you have just run a few sudo-powered commands after entering your password. As you know, the sudo session remains active for 15 minutes by default. Suppose during this session you have to let someone use your terminal, but you do not want them to be able to use `sudo`. What do you do?

Thankfully, the `-k` command-line option lets a user revoke sudo permission. Here is how the `sudo` man page explains it:
> `-k`, `--reset-timestamp`
> When used without a command, invalidates the user's cached credentials. In other words, the next time `sudo` is run a password will be required. This option does not require a password and may be placed in a `.logout` file to revoke sudo permissions.
> When used in conjunction with a command or an option that may require a password, this option causes `sudo` to ignore the user's cached credentials. As a result, `sudo` will prompt for a password (if one is required by the security policy) and will not update the user's cached credentials.
#### The -s option

Sometimes your work requires running a bunch of commands that need root privileges, and you do not want to enter the password every time. Nor do you want to change the sudo session timeout by tweaking the `/etc/sudoers` file.

In that case, you can use `sudo`'s `-s` option. Here is how the `sudo` man page explains it:
> `-s`, `--shell`
> Run the shell specified by the SHELL environment variable if it is set, or the shell specified by the invoking user's password database entry. If a command is specified, it is passed to the shell for execution via the shell's `-c` option. If no command is specified, an interactive shell is executed.

So, basically, what this option does is:

* Start a new shell. Which shell is determined by the SHELL environment variable; if `$SHELL` is empty, the shell defined in `/etc/passwd` is used.
* If you pass a command name along with `-s` (for example `sudo -s whoami`), what actually runs is `sudo /bin/bash -c whoami`.
* If you do not try to execute any other command (that is, you just run `sudo -s`), you get an interactive shell with root privileges.

Keep in mind that the `-s` option gives you a shell with root privileges, but not the root environment: it is still your own `.bashrc` that gets sourced, and the shell stays in your current working directory rather than switching to root's login environment.
#### The -i option

The `-i` option is similar to the `-s` option we just discussed, but with some differences. One important difference is that `-i` gives you the root environment as well, meaning your (user's) `.bashrc` is ignored. It is like becoming root without explicitly logging in as root. What's more, you do not have to enter the root user's password.

**Important**: note that the `su` command also lets you switch users (to root by default). That command requires the root password. To avoid that, you can run it with `sudo` (`sudo su`); then you only need to enter your own login password. However, `su` and `sudo su` differ in subtle ways; to understand them, as well as how they compare with `sudo -i`, see [here][10].

### Conclusion

I hope that by now you know at least the basics of `sudo` and how to tweak its default behavior. Do try out the `/etc/sudoers` tweaks we explained, and browse some forum discussions for deeper insight into the sudo command.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/sudo-beginners-guide/
Author: [Himanshu Arora][a]
Translator: [ypingcn](https://ypingcn.github.io/wiki/lctt)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)
[a]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/
[1]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-k-option
[2]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-s-option
[3]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-i-option
[4]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#what-is-sudo
[5]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#can-any-user-use-sudo
[6]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#what-is-a-sudo-session
[7]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-sudo-password
[8]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#some-important-sudo-command-line-options
[9]: https://www.howtoforge.com/tutorial/sudo-beginners-guide/#conclusion
[10]: http://unix.stackexchange.com/questions/98531/difference-between-sudo-i-and-sudo-su
[11]: https://www.howtoforge.com/images/sudo-beginners-guide/big/perm-denied-error.png
[12]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-example.png
[13]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-switch-user.png
[14]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-user-accounts.png
[15]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-user-unlock.png
[16]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-admin-account.png
[17]: https://www.sudo.ws/man/1.8.17/visudo.man.html
[18]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-session-time-default.png
[19]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-session-timeout.png
[20]: https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-password.png

How to Auto Execute Commands/Scripts During Reboot or Startup in Linux
============================================================

I have always been curious about what happens behind the scenes when you [boot a Linux system and log in][1]. When you press the power button or start a virtual machine, you trigger a series of events that leads to a fully functional system, sometimes in under a minute. The same is true when you log out or shut down.

More interestingly, you can also make the system execute specific actions at startup and when a user logs in or out.

In this article, we will explore the traditional ways of accomplishing these goals in Linux.

**Note**: we assume that **Bash** is the main shell for login and logout. If you use a different shell, some of these methods may or may not work. When in doubt, refer to the documentation for your shell.

### Executing Linux scripts at startup
However, this method requires attention to two points (see the sketch after this list for what a typical entry looks like):

* a) the cron daemon must be running (which is usually the case), and
* b) the script or the crontab file must include the required environment variables, if any (see StackOverflow for more details).
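A hedged sketch of the cron route (the standard `@reboot` specifier; the script path reuses the example below):

```
# Edit the current user's crontab...
$ crontab -e

# ...and add a line like this to run the script once at every boot
@reboot /home/gacanepa/script1.sh
```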
#### Method #2 - Using /etc/rc.d/rc.local

First, make sure the file is executable:

```
# chmod +x /etc/rc.d/rc.local
```

Then add your scripts at the bottom of the file.

The following image shows how to run two sample scripts (`/home/gacanepa/script1.sh` and `/home/gacanepa/script2.sh`) using a cron job and rc.local, respectively.

script1.sh:

```
#!/bin/bash
DATE=$(date +'%F %H:%M:%S')
DIR=/home/gacanepa
echo "Current date and time: $DATE" > $DIR/file1.txt
```

script2.sh:

```
#!/bin/bash
SITE="Tecmint.com"
DIR=/home/gacanepa
echo "$SITE rocks... add us to your bookmarks." > $DIR/file2.txt
```

[
![Execute Linux Scripts During Startup](http://www.tecmint.com/wp-content/uploads/2017/02/Run-Linux-Commands-at-Startup.png)
][3]

*Execute Linux scripts during startup*

Remember to give both sample scripts execute permissions beforehand:

```
$ chmod +x /home/gacanepa/script1.sh
$ chmod +x /home/gacanepa/script2.sh
```

### Executing Linux scripts at login and logout

To execute a script at login or logout, use `~/.bash_profile` and `~/.bash_logout`, respectively. Most likely, you will need to create the latter file manually. Adding a line to invoke your script at the bottom of each file, as shown in the earlier example, will do the trick.

### Summary

In this article we have explored how to run scripts at reboot, login, and logout. If you have other methods to add, please use the comment form below to point them out. We look forward to hearing from you!

--------------------------------------------------------------------------------

Author bio:

Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer products company and takes great pleasure in using FOSS tools to increase productivity in his daily work.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/auto-execute-linux-scripts-during-reboot-or-startup/

Author: [Gabriel Cánepa][a]
Translator: [zhb127](https://github.com/zhb127)
Proofreader: [jasminepeng](https://github.com/jasminepeng)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux.cn](https://linux.cn/)

[a]:http://www.tecmint.com/author/gacanepa/
使用 Orange Pi 搭建 Time Machine 服务器
=================================
![Orange Pi as Time Machine Server](https://i1.wp.com/piboards.com/wp-content/uploads/2017/02/OPiTM.png?resize=960%2C450)
我的工作之一是为各类家用计算机安排自动备份,其中包括存放重要数据的一组 Mac 计算机。我决定用一块运行 [Armbian Linux][4] 的便宜的 [Orange Pi][3] 做实验,目标是让 [Time Machine][5] 可以通过网络使用挂载在 Orange Pi 主板上的 USB 驱动器。为此,我找到并成功地安装了 Netatalk。
[Netatalk][6] 是一个用作苹果文件服务器的开源软件。通过 [Avahi][7] 和 Netatalk 配合运行,你的 Mac 设备能够识别网络上的 Orange Pi 设备,甚至会将 Orange pi 设备当作 “Mac” 类型的设备。这使得你能够手动连接到该网络设备,更重要的是使得 Time Machine 能够发现并使用远程驱动器。如果你想在 Mac 上设置类似的备份机制,下面的指南也许能够帮到你。
### 准备工作
为了配置该 USB 驱动器,我首先尝试了 HFS+ 格式文件系统,不幸的是我没能成功写入。所以我选择创建一个 EXT4 文件系统,并确保用户 `pi` 有读写权限。Linux 有很多格式化磁盘的方法,但是我最喜欢(而且推荐)的仍然是 [gparted][8]。由于 gparted 已经集成在 Armbian 桌面了,所以我直接使用了该工具。
我需要当 Orange Pi 启动或者 USB 驱动连接的时候,这个设备能够自动挂载到相同的位置。于是我创建了一个目录(`timemachine`)用于挂载:在其下新建一个 `tm` 目录用于真正的备份路径,并将 `tm` 的所有者更改为用户 `pi`
```
cd /mnt
sudo mkdir timemachine
cd timemachine
sudo mkdir tm
sudo chown pi:pi tm
```
下一步,我打开一个终端并编辑 `/etc/fstab` 文件。
```
sudo nano /etc/fstab
```
并在该文件末尾添加了一行我的设备信息(根据我的设备情况,设置为 `sdc2`
```
/dev/sdc2 /mnt/timemachine ext4 rw,user,exec 0 0
```
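编辑完 `/etc/fstab` 后,可以先手动挂载一次来确认配置无误(示意;设备名以你的实际情况为准):

```
# 按照 /etc/fstab 挂载所有条目,并查看挂载结果
sudo mount -a
df -h /mnt/timemachine
```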
你需要通过命令行预装一些包,可能其中一些已经安装在你的系统上了:
```
sudo apt-get install build-essential libevent-dev libssl-dev libgcrypt11-dev libkrb5-dev libpam0g-dev libwrap0-dev libdb-dev libtdb-dev libmysqlclient-dev avahi-daemon libavahi-client-dev libacl1-dev libldap2-dev libcrack2-dev systemtap-sdt-dev libdbus-1-dev libdbus-glib-1-dev libglib2.0-dev libio-socket-inet6-perl tracker libtracker-sparql-1.0-dev libtracker-miner-1.0-dev hfsprogs hfsutils avahi-daemon
```
### 安装并配置 Netatalk
下一步是下载 Netatalk解压下载的文件然后切换到 Netatalk 目录:
```
wget https://sourceforge.net/projects/netatalk/files/netatalk/3.1.10/netatalk-3.1.10.tar.bz2
tar xvf netatalk-3.1.10.tar.bz2
cd netatalk-3.1.10
```
然后需要顺序执行 `./configure`、`make`、`make install` 命令安装软件。在 netatalk-3.1.10 目录中执行如下的 `./configure` 命令,这个命令需要花点时间才能执行完。
```
./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0
```
`./configure` 运行完成后执行 `make`
```
make
```
执行完 `make` 命令需要花较长时间,可以考虑喝杯咖啡或者做点其他什么。之后,执行以下命令:
```
sudo make install
```
这个命令能够快速执行完成。现在你可以通过下面两个命令验证安装是否成功,同时找到配置文件位置。
```
sudo netatalk -V
sudo afpd -V
```
然后你需要编辑 `afp.conf` 配置文件并在其中指定 Time Machine 备份路径,可以访问的帐号名并指定是否使用 [Spotlight][9] 为备份建立索引。
```
sudo nano /usr/local/etc/afp.conf
```
下面是 `afp.conf` 的配置示例:
```
[My Time Machine Volume]
path = /mnt/timemachine/tm
valid users = pi
time machine = yes
spotlight = no
```
最后,启用 Avahi 和 Netatalk 并启动它们。
```
sudo systemctl enable avahi-daemon
sudo systemctl enable netatalk
sudo systemctl start avahi-daemon
sudo systemctl start netatalk
```
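可以顺便确认两个服务都处于运行状态(示意;服务名以实际安装生成的 systemd 单元为准):

```
# 查看两个服务的状态,应显示 active (running)
sudo systemctl status avahi-daemon netatalk
```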
### 连接到网络驱动器
此时,你的 Mac 可能已经发现并识别了你的 Pi 设备和网络驱动器。打开 Mac 中的 Finder看看是否有像下面的内容
![](https://i2.wp.com/piboards.com/wp-content/uploads/2017/02/TM_drive.png?resize=241%2C89)
当然你也可以通过主机名或者 ip 地址访问,比如:
```
afp://192.168.1.25
```
### Time Machine 备份
最后,打开 Mac 上的 Time Machine然后点击“选择磁盘”选择你的 Orange Pi。
![](https://i1.wp.com/piboards.com/wp-content/uploads/2017/02/OPiTM.png?resize=579%2C381)
这样设置肯定有效Orange Pi 能够很好地处理备份进程,不过这可能并不是最快速的备份方式。但是,这个方法比较简单且便宜,并且正如其展示的一样能够正常工作。如果你按照这些设置成功了,或者做了改进,请在下面留言或者发送消息给我。
![](https://i0.wp.com/piboards.com/wp-content/uploads/2017/02/backup_complete.png?resize=300%2C71)
Amazon 上有售卖 Orange Pi 主板。
--------------------------------------------------------------------------------
via: http://piboards.com/2017/02/13/orange-pi-as-time-machine-server/
作者:[MIKE WILMOTH][a]
译者:[beyondworld](https://github.com/beyondworld)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://piboards.com/author/piguy/
[1]:http://piboards.com/author/piguy/
[2]:http://piboards.com/2017/02/13/orange-pi-as-time-machine-server/
[3]:https://www.amazon.com/gp/product/B018W6OTIM/ref=as_li_tl?ie=UTF8&tag=piboards-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B018W6OTIM&linkId=08bd6573c99ddb8a79746c8590776c39
[4]:https://www.armbian.com/
[5]:https://support.apple.com/kb/PH25710?locale=en_US
[6]:http://netatalk.sourceforge.net/
[7]:https://en.wikipedia.org/wiki/Avahi_(software)
[8]:http://gparted.org/
[9]:https://support.apple.com/en-us/HT204014
哪个 Linux 系统最适合玩游戏?
============================================================
> 告诉我们哪个 Linux 发行版对游戏支持得最好
在过去几个月中,出于游戏目的,我们尝试了多种 GNU/Linux 发行版,我们得出的结论是没有专为 Linux 游戏设计的完美的操作系统。
我们都知道,游戏世界分成 Nvidia 和 AMD 两个阵营。现在,如果你使用的是 Nvidia 显卡,即使是五年前的一块显卡,也可以在大多数基于 Linux 的操作系统上使用,因为 Nvidia 差不多为其所有的 GPU 都提供了最新的视频驱动程序。
当然,这意味着如果你有一块 Nvidia GPU在大多数 GNU/Linux 发行版上你不会有什么大问题。至少与游戏中的图形或其他性能问题无关,这种问题将严重影响你的游戏体验。
### AMD Radeon 用户最好的游戏发行版
如果你使用 AMD Radeon GPU事情会是完全不同的。我们都知道AMD 的专有显卡驱动程序仍然需要大量的工作来兼容最新的 GNU/Linux 发行版本,对于所有的 AMD GPU即便是在最新的 X.Org 服务端和 Linux 内核版本上也是这样。
目前AMDGPU-PRO 视频驱动程序只能在 Ubuntu 16.04 LTS、CentOS 6.8/7.3、Red Hat Enterprise Linux 6.8/7.3、SUSE Linux Enterprise Desktop 和 Server 12 SP2 上运行。除了 Ubuntu 16.04 LTS 之外,我们不知道为什么 AMD 为所有这些面向服务器和企业级的操作系统提供支持。
我们不相信有 Linux 玩家会在这些系统上面玩游戏。[最新的 AMDGPU-PRO 更新][1]终于支持了 HD 7xxx 和 8xxx 系列的 AMD Radeon GPU但是如果我们不想使用 Ubuntu 16.04 LTS 呢?
另外,我们有 Mesa 3D 图形库这在大多数发行版上都有。Mesa 图形栈为我们的 AMD GPU 提供了功能强大的开源 Radeon 和 AMDGPU 驱动程序,但是为了享受最好的游戏体验,你还需要拥有最新的 X.Org 服务端和 Linux 内核。
并不是所有的 Linux 操作系统都附带支持较旧 AMD GPU 的最新的 Mesa13.0、X.Org 服务端1.19)和 Linux 内核4.9版本。有些系统只有其中一两种技术但我们这三者都需要而且内核还需要将针对 AMD Radeon Southern Islands 和 Sea Islands 的支持编译进 AMDGPU 驱动。
我们发现整个情况相当令人沮丧,至少对于一些使用 AMD Radeon 老式显卡的玩家来说是这样的。现在,我们发现,使用 AMD Radeon HD 8xxx GPU 的最佳游戏体验只能通过使用 Git 获取到的 Mesa 17 以及 Linux 内核 4.10 RC 来实现。
所以我们现在请求你 - 如果你找到了玩游戏的完美的 GNU/Linux 发行版,无论你使用的是 AMD Radeon 还是 Nvidia GPU但我们最感兴趣的是那些使用 AMD GPU 的玩家,请告知我们你使用的是什么发行版,设置是什么,能不能玩最新的游戏,或者有无体验问题。谢谢!
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
作者:[Marius Nestor][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://news.softpedia.com/news/amdgpu-pro-16-60-linux-driver-finally-adds-amd-radeon-hd-7xxx-8xxx-support-512280.shtml
[2]:http://news.softpedia.com/editors/browse/marius-nestor
[3]:http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml#
[4]:https://share.flipboard.com/bookmarklet/popout?v=2&title=The+Best+Operating+System+for+Linux+Gaming%3A+Which+One+Do+You+Use+and+Why%3F&url=http%3A%2F%2Fnews.softpedia.com%2Fnews%2Fthe-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml&t=1487038258&utm_campaign=widgets&utm_medium=web&utm_source=flipit&utm_content=news.softpedia.com
[5]:http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml#
[6]:http://twitter.com/intent/tweet?related=softpedia&via=mariusnestor&text=The+Best+Operating+System+for+Linux+Gaming%3A+Which+One+Do+You+Use+and+Why%3F&url=http%3A%2F%2Fnews.softpedia.com%2Fnews%2Fthe-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
[7]:https://plus.google.com/share?url=http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
[8]:https://twitter.com/intent/follow?screen_name=mariusnestor
使用 Elizabeth 为应用生成随机样本数据
============================================================
![Generate random data for your applications with Elizabeth](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28 "Generate random data for your applications with Elizabeth")
图片提供 : Opensource.com
> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. 
不,我的文章没有被 [Lorem ipsum][2] 生成器劫持LCTT 译注Lorem ipsum中文又称“乱数假文”只是一段用来测试排版效果的占位文字没有实际的含义。作为本月的 Nooks & Crannies 专栏文章,我发现了一个有趣的小 Python 库,可以帮助开发人员为其应用程序生成随机数据。它被称为 [Elizabeth][3]。
它由 Líkið Geimfari 编写,并在 MIT 许可证下发行Elizabeth 以 21 个不同本地化信息提供了 18 种数据提供器可用于生成随机信息LCTT 译注:不仅是随机数),包括姓名和个人特征、地址、文本数据、交通信息、网络和 Internet 社交媒体数据、数字等等。安装它需要 [Python 3.2][4] 或更高版本,您可以使用 `pip` 或从 `git` 仓库安装它。
在我的测试机上,我在一个全新安装的 [Debian][5] Jessie 上使用 pip 来安装它,要做的就是 `apt-get install python3-pip`,它将安装 Python 和所需的依赖项。然后 `pip install elizabeth`,之后就安装好了。
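按照正文的描述,安装过程大致如下(示意;在使用 Python 3 的系统上 pip 命令通常是 `pip3`,包名按正文为 `elizabeth`

```
# 安装 pip会一并安装 Python 3 及所需依赖)
sudo apt-get install python3-pip
# 从 PyPI 安装 Elizabeth
sudo pip3 install elizabeth
```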
只是为好玩,让我们在 Python 的交互式解释器中为一个人生成一些随机数据:
```
>>> from elizabeth import Personal
>>> p=Personal('en')
>>> p.full_name(gender="male")
'Elvis Herring'
>>> p.blood_type()
'B+'
>>> p.credit_card_expiration_date()
'09/17'
>>> p.email(gender='male')
'jessie7517@gmail.com'
>>> p.favorite_music_genre()
'Ambient'
>>> p.identifier(mask='13064########')
'1306420450944'
>>> p.sexual_orientation()
'Heterosexual'
>>> p.work_experience()
39
>>> p.occupation()
'Senior System Designer'
>>>
```
在代码中使用它就像创建一个对象那样,然后调用你需要填充数据的对应方法。
Elizabeth 内置了 18 种不同的生成工具,添加新的生成器并不困难;你只需要定义从 JSON 值集合中获取数据的例程。以下是一些随机文本字符串生成,再次打开解释器:
```
>>> from elizabeth import Text
>>> t=Text('en')
>>> t.swear_word()
'Rat-fink'
>>> t.quote()
'Let them eat cake.'
>>> t.words(quantity=20)
['securities', 'keeps', 'accessibility', 'barbara', 'represent', 'hentai', 'flower', 'keys', 'rpm', 'queen', 'kingdom', 'posted', 'wearing', 'attend', 'stack', 'interface', 'quite', 'elementary', 'broadcast', 'holland']
>>> t.sentence()
'She spent her earliest years reading classic literature, and writing poetry.'
```
使用 Elizabeth 填充 [SQLite][6] 或其它你可能需要用于开发或测试的数据库并不困难。其介绍文档给出了使用 [Flask][7] 这个轻量级 web 框架的一个医疗应用程序示例。
我对 Elizabeth 印象很深刻 - 它超快、轻量级、易于扩展,它的社区虽然小,但是很活跃。截至本文写作时,项目已有 25 名贡献者并且提交的问题处理迅速。Elizabeth 的[完整文档][8]至少对于美国英语而言易于阅读和遵循,并提供了广泛的 API 参考。
我曾尝试通过修改链接来查找该文档是否有其他语言,但没有成功。因为其 API 在非英语区域中是不同的,所以记录这些变化将对用户非常有帮助。公平地说,通过阅读其代码并找出可用的方法并不难,即使你的 Python 功力并不深厚。对我来说,另一个明显的缺陷是缺乏阿拉伯语或希伯来语区域测试数据。这些是著名的从右到左的语言,对于试图使其应用程序国际化的开发者来说,适当地处理这些语言是一个主要的障碍。像 Elizabeth 这种在此方面可以协助的工具是值得拥有的。
对于那些在应用中需要随机样本数据的开发员而言Elizabeth 是一个有价值的工具,而对于那些试图创建真正多语言、本地化应用程序的开发者来说,它可能是一个宝藏。
--------------------------------------------------------------------------------
作者简介:
D Ruth Bavousett - D Ruth Bavousett 作为一名系统管理员和软件开发人员已经很长时间了,她的专业生涯开始于 VAX 11/780。在她的职业生涯迄今为止她在解决图书馆需求上有大量的经验她自 2008 年以来一直是 Koha 开源图书馆自动化套件的贡献者。Ruth 目前在休斯敦的 cPanel 任 Perl 开发人员,她也作为首席员工效力于双猫公司。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/elizabeth-python-library
作者:[D Ruth Bavousett][a]
译者:[geekpi](https://github.com/geekpi)
校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/article/17/2/elizabeth-python-library?rate=kuXZVuHCdEv_hrxRnK1YQctlsTJeFJLcVx3Nf2VIW38
[2]:https://en.wikipedia.org/wiki/Lorem_ipsum
[3]:https://github.com/lk-geimfari/elizabeth
[4]:https://www.python.org/
[5]:https://www.debian.org/
[6]:https://sqlite.org/
[7]:https://flask.pocoo.org/
[8]:http://elizabeth.readthedocs.io/en/latest/index.html
[9]:https://opensource.com/user/36051/feed
[10]:https://opensource.com/article/17/2/elizabeth-python-library#comments
[11]:https://opensource.com/users/druthb
微软爱上 Linux当 PowerShell 来到 Linux 时
============================================================

在微软爱上 Linux 之后,**PowerShell** 这个原本只是 Windows 才能使用的组件,于 2016 年 8 月 18 日开源并且成为跨平台软件,登陆了 Linux 和 macOS。

**PowerShell** 是一个微软开发的自动化任务和配置管理系统。它基于 .NET 框架由命令行语言解释器shell和脚本语言组成。

PowerShell 提供对 **COM**<ruby>组件对象模型<rt>Component Object Model</rt></ruby>)和 **WMI**<ruby>Windows 管理规范<rt>Windows Management Instrumentation</rt></ruby>)的完全访问,从而允许系统管理员在本地或远程 Windows 系统中[执行管理任务][1],以及对 WS-Management 和 CIM<ruby>公共信息模型<rt>Common Information Model</rt></ruby>)的访问,实现对远程 Linux 系统和网络设备的管理。

通过这个框架,管理任务基本上由称为 **cmdlets**(发音 command-lets**.NET** 类执行。就像 Linux 的 shell 脚本一样,用户可以通过按照一定的规则将一组 **cmdlets** 写入文件来制作脚本或可执行文件。这些脚本可以用作独立的[命令行程序或工具][2]。

### 在 Linux 系统中安装 PowerShell Core 6.0

要在 Linux 中安装 **PowerShell Core 6.0**,我们将会用到微软软件仓库,它允许我们通过最流行的 Linux 包管理器工具,如 [apt-get][3]、[yum][4] 等来安装。

#### 在 Ubuntu 16.04 中安装

首先,导入公共仓库 **GPG** 密钥,然后将 **Microsoft Ubuntu** 仓库注册到 **APT** 的源中来安装 **PowerShell**

```
$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
```

安装完成后,输入以下命令启动 PowerShell

```
$ powershell
```

[
![Start Powershell in Linux](http://www.tecmint.com/wp-content/uploads/2017/02/start-powershell.png)
][5]
*在 Linux 中启动 PowerShell*

你可以通过以下命令查看 PowerShell 的版本:

```
$PSVersionTable
```

[
![Check Powershell Version](http://www.tecmint.com/wp-content/uploads/2017/02/check-powershell-version.png)
][6]
*查看 PowerShell 版本*

#### 在 PowerShell 中操作文件和目录

1、 可以通过两种方法创建空文件:

```
new-item tecmint.tex
set-content tecmint.tex -value "TecMint Linux How Tos Guides"
get-content tecmint.tex
```

[
![Create New File in Powershell](http://www.tecmint.com/wp-content/uploads/2017/02/Create-New-File-in-Powershell.png)
][7]

*在 PowerShell 中创建新文件*

2、 在 PowerShell 中删除一个文件:

```
remove-item tecmint.tex
get-content tecmint.tex
```

[
![Delete File in Powershell](http://www.tecmint.com/wp-content/uploads/2017/02/Delete-File-in-Powershell.png)
][8]

*在 PowerShell 中删除一个文件*

3、 创建目录:

```
mkdir tecmint-files
cd tecmint-files
“”>domains.list
ls
```

[
![Create Directory in Powershell](http://www.tecmint.com/wp-content/uploads/2017/02/create-new-directory-in-Powershell.png)
][9]

*在 PowerShell 中创建目录*

4、 执行长格式的列表操作,列出文件/目录详细情况,包括模式(文件类型)、最后修改时间等,使用以下命令:

```
dir
```

*在 Powershell 中列出目录长列表*
5、 显示系统中所有的进程:

```
get-process
```

[
![View Running Processes in Powershell](http://www.tecmint.com/wp-content/uploads/2017/02/View-Running-Processes-in-Powershell.png)
][11]

*在 PowerShell 中显示运行中的进程*

6、 通过给定的名称查看正在运行的进程/进程组细节,将进程名作为参数传给上面的命令,如下:

```
get-process apache2
```

[
![View Specific Process in Powershell](http://www.tecmint.com/wp-content/uploads/2017/02/View-Specific-Process-in-Powershell.png)
][12]

*在 PowerShell 中查看指定进程*

输出中各部分的含义:

* NPM(K)  进程使用的非分页内存单位Kb。
* PM(K)  进程使用的可分页内存单位Kb。
* WS(K)  进程的工作集大小单位Kb工作集由进程所引用到的内存页组成。
* CPU(s)  进程在所有处理器上所占用的处理器时间,单位:秒。
* ID  进程 IDPID
* ProcessName  进程名称。
7、 想要了解更多,获取 PowerShell 命令列表:

```
get-command
```

[
![List Powershell Commands](http://www.tecmint.com/wp-content/uploads/2017/02/List-Powershell-Commands.png)
][13]

*列出 PowerShell 的命令*

8、 想知道如何使用一个命令,查看它的帮助(类似于 Unix/Linux 中的 man举个例子你可以这样获取命令 **Describe** 的帮助:

```
get-help Describe
```

[
![Powershell Help Manual](http://www.tecmint.com/wp-content/uploads/2017/02/Powershell-Help-Manual.png)
][14]

*PowerShell 帮助手册*

9、 显示所有命令的别名,输入:

```
get-alias
```

[
![List Powershell Command Aliases](http://www.tecmint.com/wp-content/uploads/2017/02/List-Powershell-Command-Aliases.png)
][15]

*列出 PowerShell 命令别名*

10、 最后,不过也很重要,显示命令历史记录(曾运行过的命令的列表):

```
history
```

[
![List Powershell Commands History](http://www.tecmint.com/wp-content/uploads/2017/02/List-Powershell-Command-History.png)
][16]

*显示 PowerShell 命令历史记录*

就是这些了!在这篇文章里,我们展示了如何在 Linux 中安装**微软的 PowerShell Core 6.0**。在我看来,与传统 UnixLinux 的 shell 相比PowerShell 还有很长的路要走。目前看来PowerShell 还需要在命令行操作机器,尤其是在编程(写脚本)等方面,提供更好、更多令人激动和富有成效的特性。

查看 PowerShell 的 GitHub 仓库:[https://github.com/PowerShell/PowerShell][17]。
如何在 Ubuntu 上用 Yocto 创建你自己的 Linux 发行版
========================================
本文主要聚焦在如何使用 Yocto 在 Ubuntu 上创建一个最小化的 Linux 发行版。Yocto 项目在嵌入式 Linux 的世界非常著名这是因为它用起来非常灵活、方便。Yocto 的目标是为嵌入式软硬件开发商创建自己的 Linux 发行版。本文我们将会创建一个可以运行在 QEMU 上的最小化的 Linux并且在 QEMU 上实际运行。
### 开发机的基本条件
* 最少 4-6 GB 内存
* 最新版的 Ubuntu 系统(本文使用了 16.04 LTS
* 磁盘剩余空间至少 60-80 GB
* 在创建 Linux 发行版之前先安装下面的软件包
* 下载最新的 YoctoPoky 是其最小开发环境)稳定分支
```
apt-get update
```
```
apt-get install wget git-core unzip make gcc g++ build-essential subversion sed autoconf automake texi2html texinfo coreutils diffstat python-pysqlite2 docbook-utils libsdl1.2-dev libxml-parser-perl libgl1-mesa-dev libglu1-mesa-dev xsltproc desktop-file-utils chrpath groff libtool xterm gawk fop
```
![Install prerequisites for Yocto](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/1-pre_requisite_packages-1.png)
如下所示,开发环境要安装的软件包将近 1GB 大小。
![Install the development packages](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/2-pre_requisite_packages-2.png)
在这个教程中,系统上克隆的是 poky 的 `morty` 稳定分支。
```
git clone -b morty git://git.yoctoproject.org/poky.git
```
![install poky](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/download_morty_of_yocto.png)
进入 `poky` 目录,然后运行下面的命令为 Yocto 开发环境设置(设置/导出)一些环境变量。
```
source oe-init-build-env
```
如下所示,在运行了 open embedded (oe) 的构建环境脚本之后,终端里的路径会自动切换到 `build` 目录,以便进行之后发行版的配置和构建。
![Prepare OE build environment](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/source_environment_script.png)
上面的截屏展示了在 `conf` 目录下创建的文件 `local.conf`。这是 Yocto 用来设置目标机器细节和 SDK 的目标架构的配置文件。
如下所示,这里设置的目标机器是 `qemux86-64`
![Set the target machine type](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/selected_target.png)
如下面截图所示,在 `local.conf` 中取消下面参数的注释符号。
```
DL_DIR ?= "${TOPDIR}/downloads"
```
![Configure local.conf file](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_Download_parameters.png)
```
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
```
![Set SSTATE_DIR](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_sstate_parametes.png)
```
TMPDIR ?= "${TOPDIR}/tmp"
```
![Set TMPDIR](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/tempdir_uncheck_paramerter.png)
```
PACKAGE_CLASSES ?= "package_rpm"
SDKMACHINE ?= "i686"
```
![Set PACKAGE_CLASSES and SDKMACHINE](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/sdk_and_package_selection.png)
如下所示,在 `local.conf` 中为基于 Yocto 的 Linux 设置空密码和后续的一些参数。否则的话用户就不能登录进新的发行版。
```
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
```
![Set debug-tweaks option](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/extra-features_for_blank_password.png)
我们并不准备使用任何图形化工具来创建 Linux OS比如 `toaster`,而 `hob` 已经不再支持了)。
### Yocto 编译构建过程
现在运行下面的 `bitbake` 工具命令开始为选定的目标机器下载和编译软件包。
```
bitbake core-image-minimal
```
![Start bitbake](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/bitbake_coreimageminimal.png)
**非常重要的是要在普通 Linux 用户下运行上面的命令,而不是使用 root 用户**。如下面截图所示,当你在 root 用户下运行 bitbake 命令会产生下面所示的错误。
![Do not run bitbake as root](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/dont_user_as_a_root.png)
再一次运行导出环境变量的脚本(`oe-init-build-env`),重新执行相同的命令来启动下载和编译过程。
![rerun commands](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runniing_bitbake_again-normal_user.png)
如下所示,构建脚本组件的第一步工作是解析配置(`recipe`)。
![Parse the build recipes](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/parsing-receipe.png)
下面的截图展示了构建脚本的解析过程。同时也显示了用来构建你的新的基于 yocto 的发行版的构建系统的细节。
![Building proceeds](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/output_of_parsing.png)
在下载了 SDK 和必要的库之后,下一步工作是下载并编译软件包。如下截图展示了为构建新发行版而执行的任务。这一步将会执行 2-3 小时,因为首先要下载需要的软件包,然后还要为新的 Linux 发行版编译这些软件包。
![Compilation will take several hours](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/task_list.png)
下面的截图表明了任务列表执行完毕。
![](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/downloaded-all_packages_and_compiled.png)
为目标机器类型 `qemux86-64` 编译好的新镜像位于 `build/tmp/deploy/images/qemux86-64`
![Build complete](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_compiled_under_qemux86_64.png)
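接下来在 QEMU 中启动该镜像。下面给出通常使用的命令(示意;按照 Yocto/Poky 的惯例使用 `runqemu` 脚本,运行前需要先 `source oe-init-build-env` 让该脚本进入 PATH具体镜像名以你的构建输出为准

```
# 用 Poky 自带的 runqemu 脚本,在 QEMU 中启动刚构建好的 core-image-minimal
runqemu qemux86-64
```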
如下所示,上面的命令如果运行在 `Putty` 上会产生一个错误。
![command error in putty](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/error_on_putty.png)
通过 `rdp` 在 Ubuntu 平台上再次运行上面的命令。
![Command works fine in rdp](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runqemu_command.png)
为运行新的基于 Yocto 的 Linux 发行版的 qemu 打开一个新屏幕。
![Open Quemu emulator](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_inside_the_qemu_.png)
下面展示了新发行版的登录界面,同时也显示了使用的 yocto 项目的版本号。默认的用户名是 `root` ,密码为空。
![Linux distribution started](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/reference_distro.png)
最后使用 `root` 用户名和空密码登录新发行版。如下截图所示,在这个最小版本的 Linux 上运行了基本的命令(`date`、`ifconfig` 和 `uname`)。
![Test the Linux distribution](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/inside_new_linux_distro_running_on_qemu_3.png)
本文的目标是理解使用 Yocto 创建新的 Linux 发行版的过程。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/
作者:[Ahmad][a]
译者:[Ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/
[1]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/#prerequisites-for-the-development-machinenbsp
[2]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/#yocto-compilation-and-building-process
[3]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/1-pre_requisite_packages-1.png
[4]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/2-pre_requisite_packages-2.png
[5]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/download_morty_of_yocto.png
[6]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/source_environment_script.png
[7]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/selected_target.png
[8]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_Download_parameters.png
[9]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_sstate_parametes.png
[10]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/tempdir_uncheck_paramerter.png
[11]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/sdk_and_package_selection.png
[12]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/extra-features_for_blank_password.png
[13]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/bitbake_coreimageminimal.png
[14]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/dont_user_as_a_root.png
[15]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runniing_bitbake_again-normal_user.png
[16]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/parsing-receipe.png
[17]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/output_of_parsing.png
[18]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/task_list.png
[19]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/downloaded-all_packages_and_compiled.png
[20]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_compiled_under_qemux86_64.png
[21]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/error_on_putty.png
[22]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runqemu_command.png
[23]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_inside_the_qemu_.png
[24]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/reference_distro.png
[25]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/inside_new_linux_distro_running_on_qemu_3.png
在 Ubuntu 上使用 SSL/TLS 搭建一个安全的 FTP 服务器
============================================================
在本教程中,我们将介绍如何在 Ubuntu 16.04/16.10 中使用 SSL/TLS 保护 FTP 服务器FTPS

如果你想为基于 CentOS 的发行版安装一个安全的 FTP 服务器,你可以阅读[在 CentOS 上使用 SSL/TLS 保护 FTP 服务器][2]。

在遵循本指南中的各个步骤之后,我们将了解如何在 FTP 服务器中启用加密服务,这对于确保数据的安全传输至关重要。
### 要求
- 你必须已经[在 Ubuntu 上安装和配置好一个 FTP 服务器][1]
在我们进行下一步之前,确保本文中的所有命令都将以 root 身份或者 [sudo 特权账号][3]运行。
### 第一步:在 Ubuntu 上为 FTP 生成 SSL/TLS 证书
1、 我们将首先在 `/etc/ssl/` 下创建一个子目录来存储 SSL/TLS 证书和密钥文件(如果它不存在的话):
```
$ sudo mkdir /etc/ssl/private
```
2、 现在我们在一个单一文件中生成证书和密钥,运行下面的命令:
```
$ sudo openssl req -x509 -nodes -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -days 365 -newkey rsa:2048
```
上面的命令将提示你回答以下问题,不要忘了输入合适于你情况的值:
```
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Lower Parel
Locality Name (eg, city) [Default City]:Mumbai
Organization Name (eg, company) [Default Company Ltd]:TecMint.com
Organizational Unit Name (eg, section) []:Linux and Open Source
Common Name (eg, your name or your server's hostname) []:tecmint
Email Address []:admin@tecmint.com
```
### 第二步:在 Ubuntu 上配置 vsftpd 来使用 SSL/TLS
3、在我们进行 vsftpd 配置之前,对于那些[已启用 UFW 防火墙][4]的用户,你们必须打开端口 `990``40000` - `50000`,来在 vsftpd 配置文件中分别启用 TLS 连接端口和被动端口的端口范围:
```
$ sudo ufw allow 990/tcp
$ sudo ufw allow 40000:50000/tcp
$ sudo ufw status
```
4、现在打开 vsftpd 配置文件并定义 SSL 详细信息:
```
$ sudo vi /etc/vsftpd/vsftpd.conf
$ sudo nano /etc/vsftpd/vsftpd.conf
```
然后,添加或找到选项 `ssl_enable`,并将它的值设置为 `YES` 来激活使用 SSL ,同样,因为 TLS 比 SSL 更安全,我们将通过启用 `ssl_tlsv1` 选项限制 vsftpd 只使用 TLS
```
ssl_enable=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
```
5、 接下来,使用 `#` 字符注释掉下面的行,如下所示:
```
#rsa_cert_file=/etc/ssl/private/ssl-cert-snakeoil.pem
#rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
```
然后,添加以下行以定义 SSL 证书和密钥文件的位置LCTT 译注:或径直修改也可):
```
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
```
6、 现在,我们也可以阻止匿名用户使用 SSL 登录,并且迫使所有的非匿名登录使用安全的 SSL 连接来传输数据和在登录期间发送密码:
```
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
```
7、此外我们可以使用以下选项在 FTP 服务器中添加更多的安全功能 。对于选项 `require_ssl_reuse=YES`,它表示所有的 SSL 数据链接都需重用已经建立的 SSL 会话(需要证明客户端拥有 FTP 控制通道的主密钥但是一些客户端不支持它如果没有客户端问题出于安全原因不应该关闭默认开启LCTT 译注:原文此处理解有误,译者修改。)
```
require_ssl_reuse=NO
```
此外,我们可以通过 `ssl_ciphers` 选项来设置 vsftpd 允许使用哪些加密算法。这将有助于挫败攻击者使用那些已经发现缺陷的加密算法的尝试:
```
ssl_ciphers=HIGH
```
8、 然后,我们定义被动端口的端口范围(最小和最大端口)。
```
pasv_min_port=40000
pasv_max_port=50000
```
9、 要启用 SSL 调试,把 openSSL 连接诊断记录到 vsftpd 日志文件中,我们可以使用 `debug_ssl` 选项:
```
debug_ssl=YES
```
最后,保存配置文件并且关闭它。然后重启 vsftpd 服务:
```
$ systemctl restart vsftpd
```
### 第三步:在 Ubuntu 上使用 SSL / TLS 连接验证 FTP
10、 执行所有上述配置后,通过尝试[在命令行中使用 FTP][5] 来测试 vsftpd 是否现在使用了 SSL/TLS 连接,如下所示。
从下面的输出来看,这里有一个错误的信息告诉我们 vsftpd 仅允许用户(非匿名用户)从支持加密服务的安全客户端登录。
```
$ ftp 192.168.56.10
Connected to 192.168.56.10 (192.168.56.10).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:root) : ravi
530 Non-anonymous sessions must use encryption.
Login failed.
421 Service not available, remote server has closed connection
ftp>
```
该命令不支持加密服务,从而导致了上述错误。因此,要安全连接到启用了加密服务的 FTP 服务器,我们需要一个默认支持 SSL/TLS 连接的 FTP 客户端,例如 FileZilla下面先给出一个命令行方案的示意
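如果想继续在命令行下测试,也可以改用一个支持 TLS 的客户端来连接(示意;这里以 lftp 为例,`ftp:ssl-force` 是 lftp 的设置项,用户名与服务器地址沿用正文):

```
# 安装 lftp 并强制使用 TLS 连接FTP 连接是惰性建立的,
# 因此 set 会在 ls 触发登录之前生效
sudo apt-get install lftp
lftp -u ravi -e 'set ftp:ssl-force true; ls' 192.168.56.10
```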
### 第四步:在客户端上安装 FileZilla 来安全地连接 FTP

11、 FileZilla 是一个强大的、被广泛使用的跨平台 FTP 客户端,支持基于 SSL/TLS 的 FTP。为了在 Linux 客户端机器上安装 FileZilla可以使用下面的命令。
```
--------- On Debian/Ubuntu ---------
$ sudo apt-get install filezilla
--------- On CentOS/RHEL/Fedora ---------
# yum install epel-release filezilla
--------- On Fedora 22+ ---------
$ sudo dnf install filezilla
```
12、 一旦安装完成,打开它,然后点击 File => Site Manager 或者按 `Ctrl + S` 来打开下面的 Site Manager 界面。
[
![Filezilla Site Manager](http://www.tecmint.com/wp-content/uploads/2017/02/Filezilla-Site-Manager.png)
][6]
*Filezilla Site Manager*
13、 现在,定义主机/站点名字,添加 IP 地址,定义使用的协议、加密和登录类型,如下面的截图所示(使用适用于你方案的值):
点击 New Site 按钮来配置一个新的站点/主机连接。
- Host: 192.168.56.10
- Protocol: FTP - File Transfer Protocol
- Encryption: Require explicit FTP over TLS #推荐
- Logon Type: Ask for password #推荐
- User: 用户名
[
![在Filezilla上配置新的FTP站点](http://www.tecmint.com/wp-content/uploads/2017/02/Configure-New-FTP-Site-on-Filezilla.png)
][7]
*在 Filezilla 上配置新的 FTP 站点*
14、 然后从上面的界面单击连接以输入密码,然后验证用于 SSL / TLS 连接的证书,并再次单击确定以连接到 FTP 服务器:
[
![验证FTP的SSL证书](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate-1.png)
][8]
*验证 FTP 的 SSL 证书*
15、 现在,你应该已经通过 TLS 连接成功登录到了 FTP 服务器,检查下面界面中的连接状态部分,以获取更多信息。
[
![连接Ubuntu的FTP服务器](http://www.tecmint.com/wp-content/uploads/2017/02/Connected-Ubuntu-FTP-Server.png)
][9]
*连接 Ubuntu 的 FTP 服务器*
16、 最后,让我们[从本地的机器传送文件到 FTP 服务器][10]的文件夹中,查看 FileZilla 界面的下端来了解有关文件传输的报告。
[
![使用Filezilla安全的传输FTP文件](http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-using-FTP.png)
][11]
*使用 Filezilla 安全的传输 FTP 文件*
就这样! 始终记住,安装 FTP 服务器而不启用加密服务具有某些安全隐患。 正如我们在本教程中解释的,您可以在 Ubuntu 16.04 / 16.10 中配置 FTP 服务器使用 SSL / TLS 连接来实现安全性。
如果你在 FTP 服务器上设置 SSL/TLS 遇到任何问题,请使用以下评论表单来分享您对本教程/主题的问题或想法。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是 Linux 和 F.O.S.S 爱好者,即将成为 Linux SysAdmin 和网络开发人员,目前是 TecMint 的内容创作者,他喜欢在电脑上工作,并坚信分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-ftp-server-using-ssl-tls-on-ubuntu/
作者:[Aaron Kili][a]
译者:[DockerChen](https://github.com/DockerChen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ftp-server-in-ubuntu/
[2]:http://www.tecmint.com/axel-commandline-download-accelerator-for-linux/
[3]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[4]:http://www.tecmint.com/how-to-install-and-configure-ufw-firewall/
[5]:http://www.tecmint.com/sftp-command-examples/
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Filezilla-Site-Manager.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Configure-New-FTP-Site-on-Filezilla.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate-1.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Connected-Ubuntu-FTP-Server.png
[10]:http://www.tecmint.com/sftp-command-examples/
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-using-FTP.png
[12]:http://www.tecmint.com/author/aaronkili/
[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[14]:http://www.tecmint.com/free-linux-shell-scripting-books/

如何在 Debian 和 Ubuntu 上安装 MariaDB 10
============================================================

MariaDB 是深受欢迎的数据库管理服务器软件 MySQL 的一个自由开源的分支。它由 MySQL 的原开发者在 GPLv2通用公共许可证 2 版)下开发,并保持开源。

它被设计来实现 MySQL 的高兼容性。对于初学者,可以阅读 [MariaDB vs MySQL][5] 来了解关于它们的特性的更多信息。更重要的是,它被一些大公司/组织使用,比如 Wikipedia、WordPress.com 和 Google+ 等。

在这篇文章中,我将向你们展示如何在 Debian 和 Ubuntu 发行版中安装 MariaDB 10.1 稳定版。

### 在 Debian 和 Ubuntu 上安装 MariaDB

1、 在安装 MariaDB 之前,你需要通过下面的命令导入仓库密钥并添加 MariaDB 仓库:
**在 Debian 10 (Sid) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64,i386] http://www.ftp.saix.net/DB/mariadb/repo/10.1/debian sid main'
```

**在 Debian 9 (Stretch) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64] http://www.ftp.saix.net/DB/mariadb/repo/10.1/debian stretch main'
```

**在 Debian 8 (Jessie) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://www.ftp.saix.net/DB/mariadb/repo/10.1/debian jessie main'
```

**在 Debian 7 (Wheezy) 上**

```
$ sudo apt-get install python-software-properties
$ sudo add-apt-repository 'deb [arch=amd64,i386] http://www.ftp.saix.net/DB/mariadb/repo/10.1/debian wheezy main'
```

**在 Ubuntu 16.10 (Yakkety Yak) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64,i386] http://www.ftp.saix.net/DB/mariadb/repo/10.1/ubuntu yakkety main'
```

**在 Ubuntu 16.04 (Xenial Xerus) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://www.ftp.saix.net/DB/mariadb/repo/10.1/ubuntu xenial main'
```

**在 Ubuntu 14.04 (Trusty) 上**

```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://www.ftp.saix.net/DB/mariadb/repo/10.1/ubuntu trusty main'
```
2、 然后,更新系统软件包源列表,并像下面这样安装 MariaDB 服务器:

```
$ sudo apt-get update
$ sudo apt-get install mariadb-server
```

安装过程中,将会请求你配置 MariaDB 服务器;在下面的页面中设置一个安全的 root 用户密码:

[
![Set New Root Password for MariaDB](http://www.tecmint.com/wp-content/uploads/2017/02/Set-New-Root-Password-for-MariaDB.png)
][6]

*为 MariaDB 设置新的 Root 密码*

再次输入密码并按下回车键来继续安装。

[
![Repeat MariaDB Password](http://www.tecmint.com/wp-content/uploads/2017/02/Repeat-MariaDB-Password.png)
][7]

*再次输入 MariaDB 密码*
3、 当 MariaDB 安装包安装完成以后,启动数据库服务器守护进程,同时启用它,使得在下次开机时它能够像下面这样自动启动:

```
------------- On SystemD Systems -------------
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl status mariadb

------------- On SysVinit Systems -------------
$ sudo service mysql start
$ chkconfig --level 35 mysql on
OR
$ update-rc.d mysql defaults
$ sudo service mysql status
```

[
![Start MariaDB Service](http://www.tecmint.com/wp-content/uploads/2017/02/Start-MariaDB-Service.png)
][8]

*开启 MariaDB 服务*
4、 然后,运行 `mysql_secure_installation` 脚本来保护数据库,在这儿你可以:

1. 设置 root 密码(如果在上面的配置环节你没有进行设置的话)。
2. 禁止远程 root 登录。
3. 移除测试数据库。
4. 移除匿名用户。
5. 重载权限配置。

```
$ sudo mysql_secure_installation
```

[
![Secure MariaDB Installation](http://www.tecmint.com/wp-content/uploads/2017/02/sudo-mysql-secure-installation.png)
][9]

*保护 MariaDB 安装*
5、 一旦数据库服务器受保护以后,可以使用下面的 shell 命令查看已安装版本和登录 MariaDB

```
$ mysql -V
$ mysql -u root -p
```

[
![Check MariaDB Version](http://www.tecmint.com/wp-content/uploads/2017/02/Check-MariaDB-Version.png)
][10]

*查看 MariaDB 版本*
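登录之后,也可以直接从 shell 执行一条 SQL 做个快速验证(示意;`-e` 是 MySQL/MariaDB 客户端用于执行单条语句的选项):

```
# 查询服务器版本并列出现有数据库(会提示输入 root 密码)
mysql -u root -p -e "SELECT VERSION(); SHOW DATABASES;"
```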
要开始学习 MySQL/MariaDB请阅读

1. [MySQL / MariaDB 初学者学习指南 — Part 1][1]
2. [MySQL / MariaDB 初学者学习指南 — Part 2][2]
3. [MySQL 基本数据库管理命令 — Part III][3]
4. [针对数据库管理员的 20 个 MySQL (Mysqladmin) 命令 — Part IV][4]

查看在 Linux 中[监控 MySQL/MariaDB 性能][11]的四个有用的命令行工具,同时浏览 [15 个有用的 MySQL/MariaDB 性能调整和优化技巧][12]。

这就是本文的全部内容了。在这篇文章中,我向你们展示了如何在 Debian 和 Ubuntu 的不同发行版中安装 MariaDB 10.1 稳定版。你可以通过下面的评论框给我们提任何问题或者想法。

--------------------------------------------------------------------------------

作者简介:

Aaron Kili 是 Linux 和 F.O.S.S 爱好者,将来的 Linux 系统管理员和网络开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并坚信分享知识。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-mariadb-in-ubuntu-and-debian/

作者:[Aaron Kili][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Linux 中 7 个判断文件系统类型的方法
============================================================
文件通过文件系统在磁盘及分区上命名、存储、检索以及更新,文件系统是在磁盘上组织文件的方式。
文件系统分为两个部分:用户数据和元数据(文件名、创建时间、修改时间、大小以及目录层次结构中的位置等)。
在本指南中,我们将用 7 种方法来识别你的 Linux 文件系统类型,如 Ext2、Ext3、Ext4、BtrFS、GlusterFS 等等。
### 1、 使用 df 命令
`df` 命令报告文件系统磁盘空间利用率,要显示特定的磁盘分区的文件系统类型,像下面那样使用 `-T` 标志:
```
$ df -Th
或者
$ df -Th | grep "^/dev"
```
[
![df Command - Find Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Filesystem-Type-Using-df-Command.png)
][3]
*df 命令 找出文件系统类型*
要更好理解 `df` 命令,阅读下面的文章:
1. [12 个有用的 df 命令来检查 Linux 中的磁盘空间][1]
2. [Pydf - 一个替代 df 的命令,用颜色显示磁盘使用率][2]
### 2、 使用 fsck 命令
`fsck` 用来检查以及[修复 Linux 文件系统][4],它也可以输出[指定磁盘分区的文件系统类型][5]。
`-N` 标志会禁用文件系统错误检查,只显示它将会执行的操作(而这里我们只需要其中的文件系统类型):
```
$ fsck -N /dev/sda3
$ fsck -N /dev/sdb1
```
[
![fsck - Print Linux Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/fsck-Print-Linux-Filesystem-Type.png)
][6]
*fsck 打印 Linux 文件系统类型*
### 3、 使用 lsblk 命令
`lsblk` 会显示块设备,当使用 `-f` 选项时,它也会打印分区的文件系统类型:
```
$ lsblk -f
```
[
![lsblk - Shows Linux Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/lsblk-Shows-Linux-Filesystem-Type.png)
][7]
*lsblk 显示 Linux 文件系统类型*
### 4、 使用 mount 命令
`mount` 命令用来[在 Linux 中挂载文件系统][8],它也可以用来[挂载一个 ISO 镜像][9][挂载远程 Linux 文件系统][10]等等。
当不带任何参数运行时,它会打印包含文件系统类型在内的[磁盘分区的信息][11]
```
$ mount | grep "^/dev"
```
[
![Mount - Show Filesystem Type in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/Mount-Show-Filesystem-Type.png)
][12]
*Mount 在 Linux 中显示文件系统类型*
### 5、 使用 blkid 命令
`blkid` 命令用来[找出或打印块设备属性][13],只要将磁盘分区作为参数就行了:
```
$ blkid /dev/sda3
```
[
![blkid - Find Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/blkid-Find-Filesystem-Type.png)
][14]
*blkid 找出文件系统类型*
### 6、 使用 file 命令
`file` 命令会识别文件类型,使用 `-s` 标志启用读取块设备或字符设备,`-L` 启用符号链接跟随:
```
$ sudo file -sL /dev/sda3
```
[
![file - Identifies Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/file-command-identifies-filesystem-type.png)
][15]
*file 识别文件系统类型*
### 7、 使用 fstab 文件
`/etc/fstab` 是一个静态文件系统信息(比如挂载点、文件系统类型、挂载选项等等)文件:
```
$ cat /etc/fstab
```
[
![Fstab - Shows Linux Filesystem Type](http://www.tecmint.com/wp-content/uploads/2017/03/fstab-shows-filesystem-types.png)
][16]
*fstab 显示 Linux 文件系统类型*
就是这样了!在这篇指南中,我们用 7 种方法来识别你的 Linux 文件系统类型。你还知道这里没有提到的其他方法么?在评论中与我们分享。
--------------------------------------------------------------------------------
作者简介:
Aaron Kili是一名 Linux 和 F.O.S.S 的爱好者,未来的 Linux 系统管理员、网站开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,并乐于分享知识。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/find-linux-filesystem-type/
作者:[Aaron Kili][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[2]:http://www.tecmint.com/pyd-command-to-check-disk-usage/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Filesystem-Type-Using-df-Command.png
[4]:http://www.tecmint.com/defragment-linux-system-partitions-and-directories/
[5]:http://www.tecmint.com/manage-file-types-and-set-system-time-in-linux/
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/fsck-Print-Linux-Filesystem-Type.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/lsblk-Shows-Linux-Filesystem-Type.png
[8]:http://www.tecmint.com/sshfs-mount-remote-linux-filesystem-directory-using-ssh/
[9]:http://www.tecmint.com/extract-files-from-iso-files-linux/
[10]:http://www.tecmint.com/sshfs-mount-remote-linux-filesystem-directory-using-ssh/
[11]:http://www.tecmint.com/linux-tools-to-monitor-disk-partition-usage/
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Mount-Show-Filesystem-Type.png
[13]:http://www.tecmint.com/find-usb-device-name-in-linux/
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/blkid-Find-Filesystem-Type.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/file-command-identifies-filesystem-type.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/fstab-shows-filesystem-types.png
[17]:http://www.tecmint.com/find-linux-filesystem-type/#
[18]:http://www.tecmint.com/find-linux-filesystem-type/#
[19]:http://www.tecmint.com/find-linux-filesystem-type/#
[20]:http://www.tecmint.com/find-linux-filesystem-type/#
[21]:http://www.tecmint.com/find-linux-filesystem-type/#comments
[22]:http://www.tecmint.com/author/aaronkili/
[23]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[24]:http://www.tecmint.com/free-linux-shell-scripting-books/
如何在 Ubuntu 中升级到最新内核
============================================================
每过段时间,就有新的设备和技术出来,因此如果我们想要充分利用它,保持最新的 Linux 内核就显得很重要。此外,更新系统内核将使我们能够利用新的内核优化,并且它还可以帮助我们避免在早期版本中发现的漏洞。
**建议阅读:** [如何升级 CentOS 7 的内核][1]
准备好了在 Ubuntu 16.04 或相近的发行版(如 Debian 和 Linux Mint中更新你的内核了么如果准备好了请你继续阅读
### 第一步:检查安装的内核版本
要发现当前系统安装的版本,我们可以:
```
$ uname -sr
```
下面的截图显示了在 Ubuntu 16.04 server 中上面命令的输出:
[
![Check Kernel Version in Ubuntu](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kernel-Version-in-Ubuntu.png)
][2]
*在 Ubuntu 中检查内核版本*
### 第二步:在 Ubuntu 16.04 中升级内核
要升级 Ubuntu 16.04 的内核,打开 [http://kernel.ubuntu.com/~kernel-ppa/mainline/][3] 并选择列表中需要的版本(发布此文时最新内核是 4.10.1)。
接下来,根据你的系统架构下载 `.deb` 文件:
对于 64 位系统:
```
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-headers-4.10.1-041001_4.10.1-041001.201702260735_all.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-headers-4.10.1-041001-generic_4.10.1-041001.201702260735_amd64.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-image-4.10.1-041001-generic_4.10.1-041001.201702260735_amd64.deb
```
这是 32 位系统的:
```
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-headers-4.10.1-041001_4.10.1-041001.201702260735_all.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-headers-4.10.1-041001-generic_4.10.1-041001.201702260735_i386.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.10.1/linux-image-4.10.1-041001-generic_4.10.1-041001.201702260735_i386.deb
```
下载完成这些所有内核文件后,如下安装:
```
$ sudo dpkg -i *.deb
```
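在重启之前,可以先确认新内核包都已安装成功(示意;版本号以正文下载的 4.10.1 为例):

```
# 列出已安装的 4.10.1 内核相关软件包
dpkg -l | grep 4.10.1
```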
安装完成后,重启并验证新的内核已经被使用了:
```
$ uname -sr
```
就是这样。现在,你就可以使用比 Ubuntu 16.04 默认安装的内核更新的版本了。
### 总结
本文我们展示了如何在 Ubuntu 系统上轻松升级 Linux 内核。这里还有另一种方法,但我们在这里没有展示,因为它需要从源代码编译内核,不推荐在生产环境的 Linux 系统上使用。
如果你仍然有兴趣编译内核作为一个学习经验,你可以在 [Kernel Newbies][4] 网站中得到指导该如何做。
一如既往,如果你对本文有任何问题或意见,请随时使用下面的评论栏。
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa - 一位来自阿根廷圣路易斯梅塞德斯镇 (Villa Mercedes, San Luis, Argentina) 的 GNU/Linux 系统管理员Web 开发者。就职于一家世界领先级的消费品公司,乐于在每天的工作中能使用 FOSS 工具来提高生产力。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/upgrade-kernel-in-ubuntu/
作者:[Gabriel Cánepa][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/install-upgrade-kernel-version-in-centos-7/
[2]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kernel-Version-in-Ubuntu.png
[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
[4]:https://kernelnewbies.org/KernelBuild
[5]:http://www.tecmint.com/author/gacanepa/
[6]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:http://www.tecmint.com/free-linux-shell-scripting-books/
LXD 2.0 系列LXD 和 Juju
======================================

这是 [LXD 2.0 系列介绍文章][1]的第十篇。

![LXD logo](https://linuxcontainers.org/static/img/containers.png)

### 介绍

Juju 是 Canonical 的服务建模和部署工具。它支持非常广泛的云服务提供商,使您能够轻松地在任何云上部署任何您想要的服务。

此外Juju 2.0 还支持 LXD既适用于本地部署也适合开发并且可以在云实例或物理机上共同协作。

本篇文章将关注本地使用,通过一个没有任何 Juju 经验的 LXD 用户来体验。

### 要求

本篇文章假设你已经安装了 LXD 2.0 并且配置完毕(看前面的文章),并且是在 Ubuntu 16.04 LTS 上运行的。

### 设置 Juju

第一件事是在 Ubuntu 16.04 上安装 Juju 2.0。这个很简单

```
stgraber@dakara:~$ sudo apt install juju
Setting up juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Setting up juju (2.0~beta7-0ubuntu1.16.04.1) ...
```
安装完成后,我们可以使用 LXD 启动一个新的“控制器”。这意味着 Juju 不会修改你主机上的任何东西,它会在 LXD 容器中安装它的管理服务。

现在我们创建一个“test”控制器下面是引导过程输出的最后部分

```
Waiting for API to become available: upgrade in progress (upgrade in progress)
Bootstrap complete, local.test now available.
```

这会花费一点时间,这时你可以看到一个正在运行的新的 LXD 容器:

```
stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
```
在 Juju 这边,你可以确认它有响应,并且还没有服务运行:

```
stgraber@dakara:~$ juju status
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
ID STATE DNS INS-ID SERIES AZ
```
你也可以在浏览器中访问 Juju 的 GUI 界面:

```
stgraber@dakara:~$ juju gui
If it does not open, open this URL:
https://10.178.150.72:17070/gui/97fa390d-96ad-44df-8b59-e15fdcfc636b/
```

![Juju web UI](https://www.stgraber.org/wp-content/uploads/2016/06/juju-gui.png)

不过我更倾向使用命令行,因此我会在接下来使用。
### 部署一个 minecraft 服务

让我们先来一个简单的例子:部署一个在单个容器中、使用一个 Juju 单元的服务。

```
stgraber@dakara:~$ juju deploy cs:trusty/minecraft
Added charm "cs:trusty/minecraft-3" to the model.
Deploying charm "cs:trusty/minecraft-3" with the charm series "trusty".
```
返回会很快,然而这不意味着服务已经启动并运行了。你应该使用“juju status”来查看: 命令返回会很快,然而这不意味着服务已经启动并运行了。你应该使用 `juju status` 来查看:
``` ```
stgraber@dakara:~$ juju status stgraber@dakara:~$ juju status
@ -152,7 +151,7 @@ ID STATE DNS INS-ID SERIES AZ
``` ```
我们可以看到它正在忙于在刚刚创建的LXD容器中安装java。 我们可以看到它正在忙于在刚刚创建的 LXD 容器中安装 java。
``` ```
stgraber@dakara:~$ lxc list juju- stgraber@dakara:~$ lxc list juju-
@ -182,7 +181,7 @@ ID STATE DNS INS-ID SERIES AZ
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty 1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty
``` ```
这时你就可以启动你的Minecraft客户端了它指向10.178.150.74端口是25565。现在可以在新的minecraft服务器上玩了! 这时你就可以启动你的 Minecraft 客户端了,将其指向 10.178.150.74,端口是 25565。现在可以在新的 minecraft 服务器上玩了!
当你不再需要它,只需运行: 当你不再需要它,只需运行:
@ -192,13 +191,13 @@ stgraber@dakara:~$ juju destroy-service minecraft
只要等待几秒就好了。 只要等待几秒就好了。
# 部署一个更复杂的web应用 ### 部署一个更复杂的 web 应用
Juju的主要工作是建模复杂的服务并以可扩展的方式部署它们。 Juju 的主要工作是建模复杂的服务,并以可扩展的方式部署它们。
为了更好地展示让我们部署一个Juju “组合”。 这个组合是由网站API数据库静态Web服务器和反向代理组成的基本Web服务。 为了更好地展示,让我们部署一个 Juju “组合”。 这个组合是由网站、API、数据库、静态 Web 服务器和反向代理组成的基本 Web 服务。
所以这将扩展到4个互联的LXD容器。 所以这将扩展到 4 个互联的 LXD 容器。
``` ```
stgraber@dakara:~$ juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box stgraber@dakara:~$ juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box
@ -228,7 +227,7 @@ added nginx-proxy/0 unit to new machine
deployment of bundle "cs:~charmers/bundle/web-infrastructure-in-a-box-10" completed deployment of bundle "cs:~charmers/bundle/web-infrastructure-in-a-box-10" completed
``` ```
几秒后你会看到LXD容器在运行了 几秒后,你会看到 LXD 容器在运行了:
``` ```
stgraber@dakara:~$ lxc list juju- stgraber@dakara:~$ lxc list juju-
@ -283,15 +282,15 @@ ID STATE DNS INS-ID SERIES AZ
5 started 10.178.150.214 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 trusty 5 started 10.178.150.214 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 trusty
``` ```
这时你就可以在80端口访问http://10.178.150.214并且会看到一个Juju学院页面。 这时你就可以在 80 端口访问 http://10.178.150.214,并且会看到一个 Juju 学院页面。
[ [
![Juju Academy web service](https://www.stgraber.org/wp-content/uploads/2016/06/juju-academy.png) ![Juju Academy web service](https://www.stgraber.org/wp-content/uploads/2016/06/juju-academy.png)
][2] ][2]
# 清理所有东西 ### 清理所有东西
如果你不需要Juju创建的容器并且不在乎下次需要再次启动最简单的方法是 如果你不需要 Juju 创建的容器并且不在乎下次需要再次启动,最简单的方法是:
``` ```
stgraber@dakara:~$ juju destroy-controller test --destroy-all-models stgraber@dakara:~$ juju destroy-controller test --destroy-all-models
@ -328,24 +327,36 @@ stgraber@dakara:~$ lxc list juju-
+------+-------+------+------+------+-----------+ +------+-------+------+------+------+-----------+
``` ```
# 总结 ### 总结
Juju 2.0内置的LXD支持使得可以用一种非常干净的方式来测试各种服务。 Juju 2.0 内置的 LXD 支持使得可以用一种非常干净的方式来测试各种服务。
在Juju charm store中有很多预制的“组合”可以用来部署甚至可以用多个“charm”来组合你想要的架构。 Juju charm store 中有很多预制的“组合”可以用来部署甚至可以用多个“charm”来组合你想要的架构。
Juju与LXD是一个完美的解决方案从一个小的Web服务到大规模的基础设施都可以简单开发这些都在你自己的机器上并且不会在你的系统上造成混乱 Juju 与 LXD 是一个完美的解决方案,从一个小的 Web 服务到大规模的基础设施都可以简单开发,这些都在你自己的机器上,并且不会在你的系统上造成混乱!
### 额外信息
Juju 网站: http://www.ubuntu.com/cloud/juju
Juju charm store https://jujucharms.com
LXD 的主站在: https://linuxcontainers.org/lxd
LXD 的 GitHub 仓库: https://github.com/lxc/lxd
LXD 的邮件列表: https://lists.linuxcontainers.org
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
如果你不想或者不能在你的机器上安装 LXD ,你可以在 web 上试试[在线版的 LXD](https://linuxcontainers.org/lxd/try-it)。
-------------------------------------------------------------------------- --------------------------------------------------------------------------
作者简介我是Stéphane Graber。我是LXC和LXD项目的领导者目前在加拿大魁北克蒙特利尔的家所在的Canonical有限公司担任LXD的技术主管。
作者简介:我是 Stéphane Graber。我是 LXC 和 LXD 项目的领导者,目前在位于加拿大魁北克蒙特利尔的家中为 Canonical 有限公司工作,担任 LXD 的技术主管。
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/06/06/lxd-2-0-lxd-and-juju-1012/ via: https://www.stgraber.org/2016/06/06/lxd-2-0-lxd-and-juju-1012/
作者:[ Stéphane Graber][a] 作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,147 @@
LXD 2.0 系列十一LXD 和 OpenStack
======================================
这是 [LXD 2.0 系列介绍文章][1]的第十一篇。
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
### 介绍
首先对这次的延期抱歉。为了让一切正常,我花了很长时间。我第一次尝试使用的是 devstack但遇到了一些必须解决的问题。然而即使这样我还是不能使网络正常工作。
我终于放弃了 devstack转而尝试用户友好的、基于 Juju 的 “conjure-up” 来部署完整的 Ubuntu OpenStack。它终于工作了
下面是如何运行一个完整的 OpenStack使用 LXD 容器而不是 VM并在 LXD 容器中运行所有这些(嵌套的!)。
### 要求
这篇文章假设你有一个可以工作的 LXD 设置,提供容器网络访问,并且你有一个非常强大的 CPU、大约 50GB 的容器存储空间和至少 16GB 的内存。
记住,我们在这里运行一个完整的 OpenStack这东西不是很轻量
### 设置容器
OpenStack 由大量各司其职的不同组件组成。其中一些组件需要额外的特权,为了使设置更简单,我们将使用特权容器。
我们将配置支持嵌套的容器,预加载所有需要的内核模块,并允许它访问 `/dev/mem`(显然是需要的)。
请注意,这意味着该容器禁用了 LXD 的大部分安全特性。然而由 OpenStack 自身产生的容器将是无特权的,并且可以正常使用 LXD 的安全特性。
```
lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem
```
LXD 中有一个小 bug它会尝试加载已经在主机上加载过的内核模块。这已在 LXD 2.5 中得到修复,并将在 LXD 2.0.6 中修复,但在此之前,可以使用以下方法绕过:
```
lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe
```
我们需要加几条 PPA 并安装 conjure-up它是我们用来安装 OpenStack 的部署工具。
```
lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y
```
最后一步是在容器内部配置 LXD 网络。
所有问题都选择默认,除了:
* 使用 `dir` 存储后端(`zfs` 无法在嵌套容器中使用)
* 不要配置 IPv6 网络conjure-up/juju 不太兼容它)
```
lxc exec openstack -- lxd init
```
容器配置完毕,现在我们来部署 OpenStack
### 用 conjure-up 部署 OpenStack
如先前提到的,我们用 conjure-up 部署 OpenStack。
这是一个很棒的用户友好的可以与 Juju 交互来部署复杂服务的工具。
首先:
```
lxc exec openstack -- sudo -u ubuntu -i conjure-up
```
* 选择 “OpenStack with NovaLXD”
* 选择 “localhost” 作为部署目标(使用 LXD
* 点击 “Deploy all remaining applications”
接下来会部署 OpenStack。整个过程会花费一个多小时这取决于你运行的机器。你将看到所有服务会被分配一个容器然后部署并最终互连。
![Conjure-Up deploying OpenStack](https://www.stgraber.org/wp-content/uploads/2016/10/conjure-up.png)
部署完成后会显示一个安装完成的界面。它会导入一些初始镜像、设置 SSH 权限、配置网络最后会显示面板的 IP 地址。
### 访问面板并生成一个容器
面板运行在一个容器中,因此你不能直接从浏览器中访问。
最简单的方法是设置一条 NAT 规则:
```
lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>
```
其中 `<ip>` 是 conjure-up 在安装结束时给你的面板 IP 地址。
你现在可以获取 “openstack” 容器的 IP 地址(来自 `lxc info openstack`),并将浏览器指向 http://<container ip>/horizon 。
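下面是一个简单的操作示意(命令中的列参数 `n`、`s`、`4` 分别表示名称、状态和 IPv4 地址;示例仅供参考,实际 IP 请以你自己的输出为准):

```
# 查看 openstack 容器的名称、状态和 IPv4 地址
lxc list openstack -c ns4

# 在宿主机的浏览器中访问(把其中的 IP 换成上面查到的容器地址)
# http://<container ip>/horizon
```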
第一次加载可能需要几分钟。一旦显示了登录界面输入默认的登录名和密码admin/openstack你就会看到 OpenStack 的欢迎面板!
![oslxd-dashboard](https://www.stgraber.org/wp-content/uploads/2016/10/oslxd-dashboard.png)
现在可以选择左边的 “Project” 选项卡,进入 “Instances” 页面。 要启动一个使用 nova-lxd 的新实例,点击 “Launch instance”选择你想要的镜像网络等接着你的实例就产生了。
一旦它运行后,你可以为它分配一个浮动 IP它将允许你从你的 “openstack” 容器中访问你的实例。
### 总结
OpenStack 是一个非常复杂的软件,你也不会想在家里或在单个服务器上运行它。但是,不管怎样,能够在你机器上的一个容器中运行所有这些服务,还是非常有趣的。
conjure-up 是部署这种复杂软件的一个很好的工具,背后使用 Juju 驱动部署,为每个单独的服务使用 LXD 容器,最后是实例本身。
它也是少数几个容器嵌套多层并实际上有意义的情况之一!
### 额外信息
conjure-up 网站: http://conjure-up.io
Juju 网站: http://www.ubuntu.com/cloud/juju
LXD 的主站在: https://linuxcontainers.org/lxd
LXD 的 GitHub 仓库: https://github.com/lxc/lxd
LXD 的邮件列表: https://lists.linuxcontainers.org
LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
如果你不想或者不能在你的机器上安装 LXD ,你可以在 web 上试试在线版的 LXD。
--------------------------------------------------------------------------
作者简介:我是 Stéphane Graber。我是 LXC 和 LXD 项目的领导者,目前在位于加拿大魁北克蒙特利尔的家中为 Canonical 有限公司工作,担任 LXD 的技术主管。
--------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.stgraber.org/author/stgraber/
[1]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

View File

@ -0,0 +1,396 @@
LXD 2.0 系列(十二):调试,及给 LXD 做贡献
================
![LXD logo](https://linuxcontainers.org/static/img/containers.png)
### 介绍
终于要结束了!这是大约一年前开始的[这个系列文章][3]的最后一篇博文。
如果你从一开始就关注了这个系列,你应该已经使用了 LXD 相当长的时间了,并且非常熟悉它的日常操作和功能。
但如果出现问题怎么办?你可以做什么来自己跟踪问题?如果你不能,你应该记录什么信息,以便上游可以跟踪问题?
如果你想自己解决问题,或通过实现你需要的功能来帮助改善 LXD 怎么办如何构建、测试和贡献 LXD 代码库?
### 调试 LXD 并填写 bug 报告
#### LXD 日志文件
`/var/log/lxd/lxd.log`
这是 LXD 日志的主文件。为了避免它快速充满你的磁盘,默认只会记录 `INFO`、`WARNING` 或者 `ERROR` 级别的日志。你可以在启动 LXD 守护进程时使用 `--debug` 选项改变其行为。
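例如(一个示意,假设是使用 systemd 的 Ubuntu 16.04 系统),可以先停止服务,再在前台以调试模式运行守护进程:

```
# 停止由 systemd 管理的 LXD 服务及其 socket
sudo systemctl stop lxd.service lxd.socket

# 在前台以调试模式运行 LXD--group 指定允许访问 LXD 的用户组
sudo lxd --debug --group lxd
```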
`/var/log/lxd/CONTAINER/lxc.conf`
每当你启动容器时,此文件将更新为传递给 LXC 的配置。
这里会展示容器将如何配置,包括其所有的设备、绑定挂载等等。
`/var/log/lxd/CONTAINER/forkexec.log`
这个文件包含 LXC 命令执行失败时产生的错误。这种情况非常罕见,因为 LXD 通常会提前处理掉大多数错误。
`/var/log/lxd/CONTAINER/forkstart.log`
这个文件包含 LXC 在启动容器时产生的错误信息。
#### CRIU 日志 (对于实时迁移)
如果使用 CRIU 进行容器实时迁移或实时快照,则每次生成 CRIU 转储或恢复转储时都会记录额外的日志文件。
这些日志也可以在 `/var/log/lxd/CONTAINER/` 中找到,并且有时间戳,以便你可以找到与你最近的操作所匹配的那些日志。它们包含 CRIU 转储和恢复的所有内容的详细记录,并且比典型的迁移/快照错误消息更容易理解。
#### LXD 调试消息
如上所述,你可以使用 `--debug` 选项将守护进程切换为执行调试日志记录。另一种方法是连接到守护进程的事件接口,它将显示所有日志条目,而不管配置的日志级别(即使是远程工作也可以)。
举例说,对于 `lxc init ubuntu:16.04 xen` 来说,
`lxd.log` 会是这样:
```
INFO[02-24|18:14:09] Starting container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
INFO[02-24|18:14:10] Started container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
```
`lxc monitor type=logging` 会是:
```
metadata:
context: {}
level: dbug
message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging
metadata:
context:
ip: '@'
method: GET
url: /1.0
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.341283344-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging
metadata:
context:
ip: '@'
method: GET
url: /1.0/containers/xen
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.347709394-05:00
type: logging
metadata:
context:
ip: '@'
method: PUT
url: /1.0/containers/xen/state
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.357046302-05:00
type: logging
metadata:
context: {}
level: dbug
message: 'New task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358387853-05:00
type: logging
metadata:
context: {}
level: dbug
message: 'Started task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358578599-05:00
type: logging
metadata:
context:
ip: '@'
method: GET
url: /1.0/operations/2e2cf904-c4c4-4693-881f-57897d602ad3/wait
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.366213106-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.369636451-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.369771164-05:00
type: logging
metadata:
context:
container: xen
driver: storage/zfs
level: dbug
message: ContainerMount
timestamp: 2017-02-24T18:14:09.424696767-05:00
type: logging
metadata:
context:
driver: storage/zfs
name: xen
level: dbug
message: ContainerUmount
timestamp: 2017-02-24T18:14:09.432723719-05:00
type: logging
metadata:
context:
container: xen
driver: storage/zfs
level: dbug
message: ContainerMount
timestamp: 2017-02-24T18:14:09.721067917-05:00
type: logging
metadata:
context:
action: start
created: 2017-02-24 23:11:45 +0000 UTC
ephemeral: "false"
name: xen
stateful: "false"
used: 1970-01-01 00:00:00 +0000 UTC
level: info
message: Starting container
timestamp: 2017-02-24T18:14:09.749808518-05:00
type: logging
metadata:
context:
ip: '@'
method: GET
url: /1.0
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.792551375-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.792961032-05:00
type: logging
metadata:
context:
ip: '@'
method: GET
url: /internal/containers/23/onstart
level: dbug
message: handling
timestamp: 2017-02-24T18:14:09.800803501-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.803190248-05:00
type: logging
metadata:
context:
driver: storage/zfs
level: dbug
message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.803251188-05:00
type: logging
metadata:
context:
container: xen
driver: storage/zfs
level: dbug
message: ContainerMount
timestamp: 2017-02-24T18:14:09.803306055-05:00
type: logging
metadata:
context: {}
level: dbug
message: 'Scheduler: container xen started: re-balancing'
timestamp: 2017-02-24T18:14:09.965080432-05:00
type: logging
metadata:
context:
action: start
created: 2017-02-24 23:11:45 +0000 UTC
ephemeral: "false"
name: xen
stateful: "false"
used: 1970-01-01 00:00:00 +0000 UTC
level: info
message: Started container
timestamp: 2017-02-24T18:14:10.162965059-05:00
type: logging
metadata:
context: {}
level: dbug
message: 'Success for task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:10.163072893-05:00
type: logging
```
`lxc monitor` 的格式与日志文件有点不同,日志文件中的每个条目都被压缩成一行,但更重要的是,通过它你可以看到所有 `level: dbug` 的条目。
### 如何报告 bug
#### LXD 的 bug
报告 bug 最好的地方是 [https://github.com/lxc/lxd/issues][4]。确保完整填写了 bug 报告模板中的内容,这些信息可以节省我们复现环境的时间。
#### Ubuntu 的 bug
如果你发现 Ubuntu 软件包本身有问题,比如无法安装、升级或删除,或者遇到 LXD init 脚本的问题,最好在 Launchpad 上报告此类错误。
在 Ubuntu 系统上,你可以使用:`ubuntu-bug lxd` ,它将自动包括一些日志文件和包信息供我们查看。
#### CRIU 的 bug
与 CRIU 相关的 Bug你可以通过 CRIU 的错误输出发现,你应该在 Launchpad 上报告这些:`ubuntu-bug criu`
请注意,通过 LXD 使用 CRIU 属于测试版功能,除非你愿意通过 Canonical 的支持合同付费获得支持,否则可能需要一段时间我们才会查看你的错误报告。
### 贡献给 LXD
LXD 用 [Go][5] 写成并[托管在 Github][6] 上。我们欢迎任何外部的贡献。为 LXD 做贡献不需要签署 CLA 或类似的法律协议,只需要通常的开发者所有权证书(`Signed-off-by:` 行)。
在我们的问题追踪器工具中,我们列出了许多潜在的功能需求,新的贡献者可以以此作为良好的起点。通常最好在开始处理代码之前先发一个 issue这样每个人都知道你正在做这项工作以便我们可以提供一些早期反馈。
#### 从源码构建 LXD
这里有上游的维护说明:[https://github.com/lxc/lxd#building-from-source][7]
你需要在 Github 上 fork 上游仓库,然后将你的更改推送到你的分支。我们建议每天将你的分支 rebase 到上游最新的 LXD 代码上,因为我们倾向于定期合并更改。
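下面是一个可能的日常工作流程示意(其中的用户名 `you` 和分支名 `my-feature` 都是假设的占位符):

```
# 克隆你在 Github 上 fork 出来的仓库
git clone https://github.com/you/lxd.git
cd lxd

# 添加上游仓库,便于定期 rebase
git remote add upstream https://github.com/lxc/lxd.git

# 每天同步并 rebase 到上游最新代码
git fetch upstream
git rebase upstream/master

# rebase 之后将更改强制推送到你自己的功能分支
git push -f origin my-feature
```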
#### 运行测试套件
LXD 维护了两套测试集,单元测试和集成测试。你可以用下面的命令测试所有:
```
sudo -E make check
```
要只运行单元测试,使用:
```
sudo -E go test ./...
```
要运行集成测试,使用:
```
cd test
sudo -E ./main.sh
```
后者支持相当多的环境变量,用于测试各种存储后端、禁用网络测试、使用 ramdisk 或只是调整日志输出。其中一些列举如下(列表之后附有一个组合使用的示例):
* `LXD_BACKEND``btrfs`、`dir`、`lvm` 或 `zfs` 之一(默认为 `dir`
  运行 LXD 存储驱动程序相关的所有测试。
* `LXD_CONCURRENT``true` 或 `false`(默认为 `false`
  这启用一些额外的并发测试。
* `LXD_DEBUG``true` 或 `false`(默认为 `false`
  记录所有 shell 命令,并在调试模式下运行所有 LXD 命令。
* `LXD_INSPECT``true` 或 `false`(默认为 `false`
  测试程序会在故障时挂起,以便你可以检查环境。
* `LXD_LOGS`:将所有 `LXD` 日志文件转储到的目录(默认为 “”)
  所有生成的 LXD 守护进程的 `logs` 目录将被复制到此路径。
* `LXD_OFFLINE``true` 或 `false`(默认为 `false`
  禁用任何依赖于外部网络连接的测试。
* `LXD_TEST_IMAGE`unified 格式的 LXD 镜像的路径(默认为 “”)
  可以使用自定义测试镜像,而不是默认的最小 busybox 镜像。
* `LXD_TMPFS``true` 或 `false`(默认为 `false`
  在 `tmpfs` 挂载中运行整个测试套件,这会使用相当多的内存,但会使测试速度明显更快。
* `LXD_VERBOSE``true` 或 `false`(默认为 `false`
  不太极端的 `LXD_DEBUG` 版本。shell 命令仍然会被记录,但 `--debug` 不会传递给 LXC 命令LXD 守护进程只会以 `--verbose` 运行。
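例如(仅为示意,变量组合可按需调整),在 tmpfs 中以 `zfs` 后端运行集成测试,并让测试在失败时挂起以便检查:

```
cd test
# LXD_BACKEND 选择存储后端LXD_TMPFS 加速测试LXD_INSPECT 使测试在失败时挂起
sudo -E LXD_BACKEND=zfs LXD_TMPFS=true LXD_INSPECT=true ./main.sh
```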
测试程序将在实际运行之前提醒你任何缺失的依赖项。在相当快的机器上运行该测试可在 10 分钟内完成。
#### 发送你的分支
发送拉取请求PR之前你需要确认以下几点列表之后附有对应的 git 命令示意):
* 你已经 rebase 了上游分支
* 你的所有提交信息都包括 `Signed-off-by: First Last <email>` 这行
* 已删除任何你的临时调试代码
* 你已经将相关的提交 squash 在一起,以保持你的分支容易审查
* 单元和集成测试全部通过
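下面是与这几点对应的常用 git 命令示意(提交信息和分支名仅为示例):

```
# 提交时用 -s 自动附加 "Signed-off-by: 姓名 <邮箱>" 行
git commit -s -m "storage: fix snapshot handling"

# 交互式 rebase将相关提交 squash 在一起,同时更新到最新的上游代码
git fetch upstream
git rebase -i upstream/master
```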
一切完成后,在 Github 上发起一个拉取请求。我们的 [Jenkins][8] 将验证所有提交是否都有 `signed-off`,在 MacOS 和 Windows 上的测试将自动执行,如果看起来不错,我们将触发一个完整的 Jenkins 测试,它将在所有存储后端、32 位和 64 位以及我们关心的所有 Go 版本上测试你的分支。
假设我们中有人在线触发了 Jenkins整个过程通常需要不到一个小时。
一旦所有测试完成,我们对代码本身感到满意,你的分支将会被合并,你的代码会出现在下一个 LXD 发布中。如果更改适用于 LXD stable-2.0 分支,我们将为你向后移植。
### 总结
我希望这个系列的博客文章有助于你了解什么是 LXD以及它可以做什么
本系列的范围仅限于 LXD 2.0.x,但我们也为那些想要最新功能的用户提供每月的功能版本。你还可以找到一些其它博客文章,涵盖了原来的 [LXD 2.0 系列文章][9]中列出的功能。
### 额外的信息
LXD 的主站在: [https://linuxcontainers.org/lxd][10]
LXD 的 GitHub 开发仓库: [https://github.com/lxc/lxd][11]
LXD 的邮件列表: [https://lists.linuxcontainers.org][12]
LXD 的 IRC 频道:#lxcontainers on irc.freenode.net
在线尝试 LXD [https://linuxcontainers.org/lxd/try-it][13]
--------------------------------------------------------------------------------
via: https://stgraber.org/2017/02/27/lxd-2-0-debugging-and-contributing-to-lxd-1212/
作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://stgraber.org/author/stgraber/
[1]:https://stgraber.org/author/stgraber/
[2]:https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[3]:https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[4]:https://github.com/lxc/lxd/issues
[5]:https://golang.org/
[6]:https://github.com/lxc/lxd
[7]:https://github.com/lxc/lxd#building-from-source
[8]:https://jenkins.linuxcontainers.org/
[9]:https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[10]:https://linuxcontainers.org/lxd
[11]:https://github.com/lxc/lxd
[12]:https://lists.linuxcontainers.org/
[13]:https://linuxcontainers.org/lxd/try-it
[14]:https://stgraber.org/2017/02/27/lxd-2-0-debugging-and-contributing-to-lxd-1212/

View File

@ -1,4 +1,4 @@
LXD 2.0 系列LXD中的LXD LXD 2.0 系列LXD 中的 LXD
====================================== ======================================
这是 [LXD 2.0 系列介绍文章][0]的第八篇。 这是 [LXD 2.0 系列介绍文章][0]的第八篇。
@ -7,35 +7,35 @@ LXD 2.0 系列LXD中的LXD
### 介绍 ### 介绍
在上一篇文章中,我介绍了如何运行[LXD中的Docker][1]这是一个很好的方式来访问由Docker提供的应用程序组合同时Docker还运行在LXD提供的安全环境中。 在上一篇文章中,我介绍了如何[在 LXD 中运行 Docker][1],这是一个访问由 Docker 提供的应用程序组合的很好方式,同时 Docker 还运行在 LXD 提供的安全环境中。
我提到的一个情况是为你的用户提供一个LXD容器然后让他们使用他们的容器来运行Docker。那么如果他们自己想使用LXD在其容器中运行其他Linux发行版或者甚至运行容器允许另一组人来访问Linux系统 我提到的一个情况是为你的用户提供一个 LXD 容器,然后让他们使用他们的容器来运行 Docker。那么如果他们自己想要在其容器中使用 LXD 运行其他 Linux 发行版,或者甚至允许另一组人来访问运行在他们的容器中的 Linux 系统呢
原来LXD使得用户运行嵌套容器变得非常简单。 原来 LXD 使得用户运行嵌套容器变得非常简单。
### 嵌套LXD ### 嵌套 LXD
最简单的情况可以使用Ubuntu 16.04镜像来展示。 Ubuntu 16.04云镜像预装了LXD。守护进程本身没有运行因为它是套接字激活的所以它不使用任何资源直到你真正使用它。 最简单的情况可以使用 Ubuntu 16.04 镜像来展示。 Ubuntu 16.04 云镜像预装了 LXD。守护进程本身没有运行因为它是套接字激活的,所以它不使用任何资源,直到你真正使用它。
让我们启动一个启用了嵌套的Ubuntu 16.04容器: 让我们启动一个启用了嵌套的 Ubuntu 16.04 容器:
``` ```
lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
``` ```
你也可以在一个存在的容器上设置security.nesting 你也可以在一个已有的容器上设置 `security.nesting`
``` ```
lxc config set <container name> security.nesting true lxc config set <container name> security.nesting true
``` ```
或者对所有的容器使用一个配置文件: 或者对所有的容器使用一个指定的配置文件:
``` ```
lxc profile set <profile name> security.nesting true lxc profile set <profile name> security.nesting true
``` ```
容器启动后你可以从容器内部得到一个shell配置LXD并生成一个容器 容器启动后,你可以从容器内部得到一个 shell配置 LXD 并生成一个容器:
``` ```
stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
@ -79,20 +79,19 @@ root@c1:~# lxc list
root@c1:~# root@c1:~#
``` ```
就是这样简单 就是这样简单
### 在线演示服务器 ### 在线演示服务器
因为这篇文章很短,我想我会花一点时间谈论我们运行中的[演示服务器][2]。我们今天早些时候刚刚达到了10000个会话 因为这篇文章很短,我想我会花一点时间谈论我们运行中的[演示服务器][2]。我们今天早些时候刚刚达到了 10000 个会话!
这个服务器基本上只是一个运行在一个相当强大的虚拟机上的正常的LXD一个小型的守护进程实现我们的网站使用的REST API。 这个服务器基本上只是运行在一个相当强大的虚拟机上的普通 LXD外加一个小型守护进程实现了我们网站所使用的 REST API。
当你接受服务条款时,将为你创建一个新的LXD容器并启用security.nesting如上所述接着你就像使用“lxc exec”时一样连接到了那个容器除了我们使用websockets和javascript来做这些。 当你接受服务条款时,将为你创建一个新的 LXD 容器,并启用 `security.nesting`,如上所述。接着你就像使用 `lxc exec` 时一样连接到了那个容器,除了我们使用 websockets javascript 来做这些。
你在此环境中创建的容器都是嵌套的LXD容器。 你在此环境中创建的容器都是嵌套的 LXD 容器。如果你想,你可以进一步地嵌套。
如果你想,你可以进一步地嵌套。
我们全范围地使用了[LXD资源限制][3],以防止一个用户的行为影响其他用户,并仔细监控服务器的任何滥用迹象。 我们全范围地使用了 [LXD 资源限制][3],以防止一个用户的行为影响其他用户,并仔细监控服务器的任何滥用迹象。
如果你想运行自己的类似的服务器,你可以获取我们的网站和守护进程的代码: 如果你想运行自己的类似的服务器,你可以获取我们的网站和守护进程的代码:
@ -118,12 +117,12 @@ via: https://www.stgraber.org/2016/04/14/lxd-2-0-lxd-in-lxd-812/
作者:[Stéphane Graber][a] 作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.stgraber.org/author/stgraber/ [a]: https://www.stgraber.org/author/stgraber/
[0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/ [0]: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/
[1]: https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ [1]: https://linux.cn/article-8235-1.html
[2]: https://linuxcontainers.org/lxd/try-it/ [2]: https://linuxcontainers.org/lxd/try-it/
[3]: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/ [3]: https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/

View File

@ -7,26 +7,26 @@ LXD 2.0 系列(九):实时迁移
### 介绍 ### 介绍
LXD 2.0中的有一个尽管是实验性质的但非常令人兴奋的功能,那就是支持容器检查点和恢复。 LXD 2.0 中有一个尽管是实验性质但非常令人兴奋的功能,那就是支持容器检查点和恢复。
简单地说,检查点/恢复意味着正在运行的容器状态可以被序列化到磁盘,然后在与容器状态快照相同的主机上或者在等同于实时迁移的另一主机上恢复 简单地说,检查点/恢复意味着正在运行的容器状态可以被序列化到磁盘,要么可以作为同一主机上的有状态快照,要么放到另一主机上相当于实时迁移
### 要求 ### 要求
访问容器实时迁移和有状态快照,你需要以下条件: 使用容器实时迁移和有状态快照,你需要以下条件:
- 一个最近的Linux内核4.4或更高版本。 - 一个非常新的 Linux 内核4.4 或更高版本。
- CRIU 2.0,可能有一些cherry-pick的提交,具体取决于你确切的内核配置。 - CRIU 2.0,可能需要一些 cherry-pick 的提交,具体取决于你确切的内核配置。
- 直接在主机上运行LXD。 不能在容器嵌套下使用这些功能。 - 直接在主机上运行 LXD。 不能在容器嵌套下使用这些功能。
- 对于迁移,目标机必须至少实现源的指令集,目标内核必须至少提供与源相同的系统调用,并且在源上挂载的任何内核文件系统也必须可挂载到目标主机上。 - 对于迁移,目标机必须至少实现源主机的指令集,目标主机内核必须至少提供与源主机相同的系统调用,并且在源主机上挂载的任何内核文件系统也必须可挂载到目标主机上。
Ubuntu 16.04 LTS已经提供了所有需要的依赖在这种情况下您只需要安装CRIU本身 Ubuntu 16.04 LTS 已经提供了所有需要的依赖,在这种情况下,您只需要安装 CRIU 本身:
``` ```
apt install criu apt install criu
``` ```
### 使用CRIU ### 使用 CRIU
#### 有状态快照 #### 有状态快照
@ -46,7 +46,7 @@ stgraber@dakara:~$ lxc info c1 | grep second
second (taken at 2016/04/25 19:36 UTC) (stateful) second (taken at 2016/04/25 19:36 UTC) (stateful)
``` ```
这意味着所有容器运行时状态都被序列化到磁盘并且作为了快照的一部分。像你还原无状态快照那样还原一个有状态快照: 这意味着所有容器运行时状态都被序列化到磁盘并且作为了快照的一部分。可以像你还原无状态快照那样还原一个有状态快照:
``` ```
stgraber@dakara:~$ lxc restore c1 second stgraber@dakara:~$ lxc restore c1 second
@ -55,7 +55,7 @@ stgraber@dakara:~$
#### 有状态快照的停止/启动 #### 有状态快照的停止/启动
比方说你想要升级内核或者其他类似的维护。与其等待所有的容器启动,你可以: 比方说你由于升级内核或者其他类似的维护而需要重启机器。与其等待重启后启动所有的容器,你可以:
``` ```
stgraber@dakara:~$ lxc stop c1 --stateful stgraber@dakara:~$ lxc stop c1 --stateful
@ -266,38 +266,37 @@ stgraber@dakara:~$ lxc list s-tollana:
### 限制 ### 限制
正如我之前说的,容器的检查点/恢复还是非常新的功能,我们还在努力地开发这个功能、修复问题已知问题。我们确实需要更多的人来尝试这个功能,并给我们反馈,但我不建议在生产中使用这个功能。 正如我之前说的,容器的检查点/恢复还是非常新的功能,我们还在努力地开发这个功能、修复已知问题。我们确实需要更多的人来尝试这个功能,并给我们反馈,但我不建议在生产中使用这个功能。
我们跟踪的问题列表在[Launchpad上][1]。 我们跟踪的问题列表在 [Launchpad上][1]。
我们期望在Ubuntu 16.04上有一个基本的带有几个服务的Ubuntu容器能够与CRIU一起工作。然而在更复杂的容器、使用设备传递、复杂的网络服务或特殊的存储配置可能会失败。 我们估计在带有 CRIU 的 Ubuntu 16.04 上带有几个服务的基本的 Ubuntu 容器能够正常工作。然而在更复杂的容器、使用了设备直通、复杂的网络服务或特殊的存储配置下可能会失败。
只要有可能CRIU会在转储时失败,而不是在恢复时。在这种情况下,源容器将继续运行,快照或迁移将会失败,并生成一个日志文件用于调试。 要是有问题CRIU 会尽可能地在转储时失败,而不是在恢复时。在这种情况下,源容器将继续运行,快照或迁移将会失败,并生成一个日志文件用于调试。
在极少数情况下CRIU无法恢复容器在这种情况下源容器仍然存在但将被停止并且必须手动重新启动。 在极少数情况下CRIU 无法恢复容器,在这种情况下,源容器仍然存在但将被停止,并且必须手动重新启动。
### 发送bug报告 ### 发送 bug 报告
我们正在跟踪Launchpad上关于CRIU Ubuntu软件包的检查点/恢复相关的错误。大多数修复bug工作是在上游的CRIU或Linux内核上但是这种方式我们更容易跟踪。 我们正在跟踪 Launchpad 上关于 CRIU Ubuntu 软件包的检查点/恢复相关的错误。大多数修复 bug 工作是在上游的 CRIU Linux 内核上进行,但是这种方式我们更容易跟踪。
要提交新的bug报告请看这里。 要提交新的 bug 报告,请看这里。
请务必包括: 请务必包括:
你运行的命令和显示给你的错误消息 - 你运行的命令和显示给你的错误消息
- `lxc info` 的输出(*
- `lxc info <container name>` 的输出
- `lxc config show --expanded <container name>` 的输出
- `dmesg` 的输出(*
- `/proc/self/mountinfo` 的输出(*
- `lxc exec <container name> -- cat /proc/self/mountinfo` 的输出
- `uname -a` 的输出(*
- `/var/log/lxd.log` 的内容(*
- `/etc/default/lxd-bridge` 的内容(*
- `/var/log/lxd/<container name>/` 的 tarball*
- “lxc info”的输出* 如果报告迁移错误,而不是状态快照或有状态停止的错误,请将上面所有含有(*)标记的源与目标主机的信息发来。
- “lxc info <container name>”的输出
- “lxc config show -expanded <container name>”的输出
- “dmesg”*)的输出
- “/proc/self/mountinfo”的输出*
- “lxc exec <container name> - cat /proc/self/mountinfo”的输出
- “uname -a”*)的输出
- /var/log/lxd.log*)的内容
- /etc/default/lxd-bridge*)的内容
- /var/log/lxd/<container name>/ 的tarball*
如果报告迁移错误,而不是状态快照或有状态停止错误,请将上面所有含有(*)标记的源与目标主机的信息发来。
### 额外信息 ### 额外信息
@ -314,11 +313,11 @@ LXD 的 IRC 频道: #lxcontainers on irc.freenode.net
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://www.stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/ via: https://stgraber.org/2016/04/25/lxd-2-0-live-migration-912/
作者:[Stéphane Graber][a] 作者:[Stéphane Graber][a]
译者:[geekpi](https://github.com/geekpi) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 组织翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -4,50 +4,53 @@
尽管姜饼中做了许多改变,安卓仍然是移动世界里的丑小鸭。相比于 iPhone它的优雅程度和设计完全抬不起头。另一方面来说为数不多的能与 iOS 的美学智慧相当的操作系统之一是 Palm 的 WebOS。WebOS 有着优秀的整体设计,创新的功能,而且被寄予期望能够从和 iPhone 的长期竞争中拯救公司。 尽管姜饼中做了许多改变,安卓仍然是移动世界里的丑小鸭。相比于 iPhone它的优雅程度和设计完全抬不起头。另一方面来说为数不多的能与 iOS 的美学智慧相当的操作系统之一是 Palm 的 WebOS。WebOS 有着优秀的整体设计,创新的功能,而且被寄予期望能够从和 iPhone 的长期竞争中拯救公司。
尽管如此一年之后Palm 资金链断裂。Palm 公司未曾看到 iPhone 的到来,到 WebOS 就绪的时候已经太晚了。2010年4月惠普花费10亿美元收购了 Palm。尽管惠普收购了一个拥有优秀用户界面的产品界面的首席设计师Matias Duarte并没有加入惠普公司。2010年5月就在惠普接手 Palm 之前Duarte 加入了谷歌。惠普买下了面包,但谷歌雇佣了它的烘培师。 尽管如此一年之后Palm 资金链断裂。Palm 公司未曾看到 iPhone 的到来,到 WebOS 就绪的时候已经太晚了。2010 4 月,惠普花费 10 亿美元收购了 Palm。尽管惠普收购了一个拥有优秀用户界面的产品界面的首席设计师Matias Duarte并没有加入惠普公司。2010 5 月,就在惠普接手 Palm 之前Duarte 加入了谷歌。惠普买下了面包,但谷歌雇佣了它的烘培师。
![第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg) ![第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。](http://cdn.arstechnica.net/wp-content/uploads/2014/03/Motorola-XOOM-MZ604.jpg)
第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。
在谷歌Duarte 被任命为安卓用户体验主管。这是第一次有人公开掌管安卓的外观。尽管 Matias 在安卓 2.2 发布时就来到了谷歌,第一个真正受他影响的安卓版本是 3.0 蜂巢它在2011年2月发布。 *第一部蜂巢设备,摩托罗拉 Xoom 10英寸平板。*
按谷歌自己的说法蜂巢是匆忙问世的。10个月前苹果发布了 iPad让平板变得更加现代谷歌希望能够尽快做出回应。蜂巢就是那个回应一个运行在10英寸触摸屏上的安卓版本。悲伤的是将这个系统推向市场是如此优先的事项以至于边边角角都被砍去了以节省时间 在谷歌Duarte 被任命为安卓用户体验主管。这是第一次有人公开掌管安卓的外观。尽管 Matias 在安卓 2.2 发布时就来到了谷歌,第一个真正受他影响的安卓版本是 3.0 蜂巢Honeycomb它在 2011 年 2 月发布
新系统只用于平板——手机不能升级到蜂巢这加大了谷歌让系统运行在差异巨大的不同尺寸屏幕上的难度。但是仅支持平板而不支持手机也使得蜂巢源码没有泄露。之前的安卓版本是开源的这使得黑客社区能够将其最新版本移植到所有的不同设备之上。谷歌不希望应用开发者在支持不完美的蜂巢手机移植版本时感到压力所以谷歌将源码留在自己手中并且严格控制能够拥有蜂巢的设备。匆忙的开发还导致了软件问题。在发布时蜂巢不是特别稳定SD卡不能工作Adobe Flash——安卓最大的特色之一——还不被支持 按谷歌自己的说法蜂巢是匆忙问世的。10 个月前,苹果发布了 iPad让平板变得更加现代谷歌希望能够尽快做出回应。蜂巢就是那个回应一个运行在 10 英寸触摸屏上的安卓版本。悲伤的是,将这个系统推向市场是如此优先的事项,以至于边边角角都被砍去了以节省时间
[摩托罗拉 Xoom][1]是为数不多的拥有蜂巢的设备之一它是这个新系统的旗舰产品。Xoom 是一个10英寸16:9 的平板,拥有 1GB 内存和 1GHz Tegra 2 双核处理器。尽管是由谷歌直接控制更新的新版安卓发布设备它并没有被叫做“Nexus”。对此最可能的原因是谷歌对它没有足够的信心称其为旗舰 新系统只用于平板——手机不能升级到蜂巢这加大了谷歌让系统运行在差异巨大的不同尺寸屏幕上的难度。但是仅支持平板而不支持手机也使得蜂巢源码没有泄露。之前的安卓版本是开源的这使得黑客社区能够将其最新版本移植到所有的不同设备之上。谷歌不希望应用开发者在支持不完美的蜂巢手机移植版本时感到压力所以谷歌将源码留在自己手中并且严格控制能够拥有蜂巢的设备。匆忙的开发还导致了软件问题。在发布时蜂巢不是特别稳定SD 卡不能工作Adobe Flash——安卓最大的特色之一——还不被支持
尽管如此,蜂巢是安卓的一个里程碑。在一个体验设计师的主管之下,整个安卓用户界面被重构,绝大多数奇怪的应用设计都得到改进。安卓的默认应用终于看起来像整体的一部分,不同的界面有着相似的布局和主题。然而重新设计安卓会是一个跨版本的项目——蜂巢只是将安卓塑造成型的开始。这第一稿为未来版本的安卓将如何运作奠定了基础,但它也用了过多的科幻主题,谷歌将花费接下来的数个版本来淡化它。 [摩托罗拉 Xoom][1] 是为数不多的拥有蜂巢的设备之一它是这个新系统的旗舰产品。Xoom 是一个 10 英寸16:9 的平板,拥有 1GB 内存和 1GHz Tegra 2 双核处理器。尽管是由谷歌直接控制更新的新版安卓发布设备它并没有被叫做“Nexus”。对此最可能的原因是谷歌对它没有足够的信心称其为旗舰。
尽管如此,蜂巢是安卓的一个里程碑。在一个体验设计师的主管之下,整个安卓用户界面被重构,绝大多数奇怪的应用设计都得到改进。安卓的默认应用终于看起来像整体的一部分,不同的界面有着相似的布局和主题。然而重新设计安卓会是一个跨越了多个版本的项目——蜂巢只是将安卓塑造成型的开始。这第一稿为未来版本的安卓将如何运作奠定了基础,但它也用了过多的科幻主题,谷歌将花费接下来的数个版本来淡化它。
![蜂巢和姜饼的主屏幕。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png) ![蜂巢和姜饼的主屏幕。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/homeskreen.png)
蜂巢和姜饼的主屏幕。
Ron Amadeo供图 *蜂巢和姜饼的主屏幕。
[Ron Amadeo供图]*
姜饼只是在它的光子壁纸上试验了科幻外观,蜂巢整个系统的以电子为灵感的主题让它充满科幻意味。所有东西都是黑色的,如果你需要对比色,你可以从一些不同色调的蓝色中挑选。所有蓝色的东西还有“光晕”效果,让整个系统看起来像是外星科技创造的。默认背景是个六边形的全息方阵(一个蜂巢!明白了吗?),看起来像是一艘飞船上的传送阵的地板。 姜饼只是在它的光子壁纸上试验了科幻外观,蜂巢整个系统的以电子为灵感的主题让它充满科幻意味。所有东西都是黑色的,如果你需要对比色,你可以从一些不同色调的蓝色中挑选。所有蓝色的东西还有“光晕”效果,让整个系统看起来像是外星科技创造的。默认背景是个六边形的全息方阵(一个蜂巢!明白了吗?),看起来像是一艘飞船上的传送阵的地板。
蜂巢最重要的变化是增加了系统栏。摩托罗拉 Xoom 除了电源和音量键之外没有配备实体按键,所以蜂巢添加了一个大黑色底栏到屏幕底部,用于放置导航按键。这意味着默认安卓界面不再需要特别的实体按键。在这之前,安卓没有实体的返回菜单和 Home 键就不能正常工作。现在,软件提供了所有必需的按钮,任何带有触摸屏的设备都能够运行安卓。 蜂巢最重要的变化是增加了系统栏。摩托罗拉 Xoom 除了电源和音量键之外没有配备实体按键,所以蜂巢添加了一个大黑色底栏到屏幕底部,用于放置导航按键。这意味着默认安卓界面不再需要特别的实体按键。在这之前,安卓没有实体的返回菜单和 Home 键就不能正常工作。现在,软件提供了所有必需的按钮,任何带有触摸屏的设备都能够运行安卓。
新软件按键带来的最大的好处是灵活性。新的应用指南表明应用应不再要求实体菜单按键需要用到的时候蜂巢会自动检测并添加四个按钮到系统栏让应用正常工作。另一个软件按键的灵活属性是它们可以改变设备的屏幕方向。除了电源和音量键之外Xoom 的方向实际上不是那么重要。从用户的角度来看系统栏始终处于设备的“底部”。代价是系统栏明显占据了一些屏幕空间。为了在10英寸平板上节省空间状态栏被合并到了系统栏中。所有的常用状态指示放在了右侧——有电源,连接状态,时间还有通知图标。 新软件按键带来的最大的好处是灵活性。新的应用指南表明应用不再必需实体菜单按键需要用到的时候蜂巢会自动检测并添加四个按钮到系统栏让应用正常工作。另一个软件按键的灵活属性是它们可以改变设备的屏幕方向。除了电源和音量键之外Xoom 的方向实际上不是那么重要。从用户的角度来看系统栏始终处于设备的“底部”。代价是系统栏明显占据了一些屏幕空间。为了在10英寸平板上节省空间状态栏被合并到了系统栏中。所有的常用状态指示放在了右侧——有电源、连接状态、时间还有通知图标。
主屏幕的整个布局都改变了,用户界面部件放在了设备的四个角落。屏幕底部左侧放置着之前讨论过的导航按键,右侧用于状态指示和通知,顶部左侧显示的是文本搜索和语音搜索,右侧有应用抽屉和添加小部件的按钮。 主屏幕的整个布局都改变了,用户界面部件放在了设备的四个角落。屏幕底部左侧放置着之前讨论过的导航按键,右侧用于状态指示和通知,顶部左侧显示的是文本搜索和语音搜索,右侧有应用抽屉和添加小部件的按钮。
![新锁屏界面和最近应用界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png) ![新锁屏界面和最近应用界面。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/lockscreen-and-recent.png)
新锁屏界面和最近应用界面。
Ron Amadeo供图
(因为 Xoom 是一部 [较重] 的10英寸16:9平板设备这意味着它主要是横屏使用。虽然大部分应用还支持竖屏模式但是到目前为止由于我们的版式限制我们大部分使用的是竖屏模式的截图。请记住蜂巢的截图来自于10英寸的平板而姜饼的截图来自3.7英寸的手机。二者所展现的信息密度是不能直接比较的。) *新锁屏界面和最近应用界面。
[Ron Amadeo供图]*
(因为 Xoom 是一部 [较重] 的 10 英寸16:9 平板设备,这意味着它主要是横屏使用。虽然大部分应用还支持竖屏模式,但是到目前为止,由于我们的版式限制,我们大部分使用的是竖屏模式的截图。请记住蜂巢的截图来自于 10 英寸的平板,而姜饼的截图来自 3.7 英寸的手机。二者所展现的信息密度是不能直接比较的。)
解锁界面——从菜单按钮到旋转式拨号盘再到滑动解锁——移除了解锁步骤的任何精度要求,它采用了一个环状解锁盘。从中间向任意方向向外滑动就能解锁设备。就像旋转式解锁,这种解锁方式更加符合人体工程学,而不用强迫你的手指完美地遵循一条笔直的解锁路径。 解锁界面——从菜单按钮到旋转式拨号盘再到滑动解锁——移除了解锁步骤的任何精度要求,它采用了一个环状解锁盘。从中间向任意方向向外滑动就能解锁设备。就像旋转式解锁,这种解锁方式更加符合人体工程学,而不用强迫你的手指完美地遵循一条笔直的解锁路径。
第二张图中略缩图条带是由新增的“最近应用”按钮打开的界面,现在处在返回和 Home 键旁边。不像姜饼中长按 Home 键显示一组最近应用的图标,蜂巢在屏幕上显示应用图标和略缩图,使得在任务间切换变得更加方便。最近应用的灵感明显来自于 Duarte 在 WebOS 中的“卡片式”多任务管理,其使用全屏略缩图来切换任务。这个设计提供和 WebOS 的任务切换一样的易识别体验,但更小的略缩图允许更多的应用一次性显示在屏幕上。 第二张图中略缩图条带是由新增的“最近应用”按钮打开的界面,现在处在返回和 Home 键旁边。不像姜饼中长按 Home 键显示一组最近应用的图标,蜂巢在屏幕上显示应用图标和略缩图,使得在任务间切换变得更加方便。最近应用的灵感明显来自于 Duarte 在 WebOS 中的“卡片式”多任务管理,其使用全屏略缩图来切换任务。这个设计提供和 WebOS 的任务切换一样的易识别体验,但更小的略缩图允许更多的应用一次性显示在屏幕上。
尽管最近应用的实现看起来和你现在的设备很像,这个版本实际上是非常早期的。这个列表不能滚动,这意味着竖屏下只能显示七个应用,横屏下只能显示五个。任何超出范围的应用会从列表中去除。而且你也不能通过滑动略缩图来关闭应用——这只是个静态的列表。 尽管最近应用的实现看起来和你现在的设备很像,这个版本实际上是非常早期的。这个列表不能滚动,这意味着竖屏下只能显示七个应用,横屏下只能显示五个。任何超出范围的应用会从列表中去除。而且你也不能通过滑动略缩图来关闭应用——这只是个静态的列表。
这里我们看到电子灵感影响的完整主题效果:略缩图的周围有蓝色的轮廓以及神秘的光晕。这张截图还展示软件按键的好处——上下文。返回按钮可以关闭略缩图列表,所以这里的箭头指向下方,而不是通常的样子。 这里我们可以看到电子灵感影响的完整主题效果:略缩图的周围有蓝色的轮廓以及神秘的光晕。这张截图还展示软件按键的好处——上下文。返回按钮可以关闭略缩图列表,所以这里的箭头指向下方,而不是通常的样子。
---------- ----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg) ![Ron Amadeo](https://cdn.arstechnica.net/wp-content/uploads/2016/05/r.amadeo-45843.jpg)
[Ron Amadeo][a] / Ron是Ars Technica的评论编辑专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。 [Ron Amadeo][a] / Ron是Ars Technica的评论编辑专注于安卓系统和谷歌产品。他总是在追寻新鲜事物还喜欢拆解事物看看它们到底是怎么运作的。
@ -55,7 +58,7 @@ Ron Amadeo供图
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/16/ via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/16/
译者:[alim0x](https://github.com/alim0x) 校对:[Bestony](https://github.com/Bestony) 译者:[alim0x](https://github.com/alim0x) 校对:[Bestony](https://github.com/Bestony)

View File

@ -0,0 +1,93 @@
安卓编年史16安卓 3.0 蜂巢—平板和设计复兴
================================================================================
![蜂巢的应用列表少了很多应用。上图还展示了通知中心和新的快速设置。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/apps-and-notifications2.png)
*蜂巢的应用列表少了很多应用。上图还展示了通知中心和新的快速设置。
[Ron Amadeo 供图]*
默认的应用图标从 32 个减少到了 25 个,其中还有两个是第三方的游戏。因为蜂巢不是为手机设计的,而且谷歌希望默认应用都是为平板优化的,很多应用因此没有成为默认应用。被去掉的应用有亚马逊 MP3 商店、Car Home、Facebook、Google Goggles、信息、新闻与天气、电话、Twitter、谷歌语音以及语音拨号。谷歌正在悄悄打造的音乐服务将于不久后面世所以亚马逊 MP3 商店需要为它让路。Car Home、信息以及电话对一部不是手机的设备来说没有多大意义Facebook 和 Twitter 还没有平板版应用Goggles、新闻与天气以及语音拨号几乎没什么人注意就算移除了大多数人也不会想念它们的。
几乎每个应用图标都是全新设计的。就像是从 G1 切换到摩托罗拉 Droid推动变化的最大动力可能是分辨率的提高。Nexus S 有一块 800×480 分辨率的显示屏,姜饼重新设计了图标等资源来适应它。而 Xoom 巨大的 1280×800 10 英寸显示屏意味着几乎所有设计都要重做。但是再说一次,这次是有真正的设计师在负责,所有东西看起来更有整体性了。蜂巢的应用列表从纵向滚动变为了横向分页式。这个变化对横屏设备有意义,而对手机来说,查找一个应用还是纵向滚动列表比较快。
第二张蜂巢截图展示的是新通知中心。姜饼中的灰色和黑色设计已经被抛弃了,现在是黑色面板带蓝色光晕。上面一块显示着日期时间、连接状态、电量和打开快速设置的按钮,下面是实际的通知。非持续性通知现在可以通过通知右侧的 “X” 来关闭。蜂巢是第一个支持通知内控制的版本。第一个(也是蜂巢发布时唯一一个)利用了此特性的应用是新的谷歌音乐,在它的通知上有上一曲、播放/暂停、下一曲按钮。这些控制可以在任何应用中访问到,这让控制音乐播放变成了一件轻而易举的事情。
![“添加到主屏幕”的缩小视图更易于组织布局。搜索界面将自动搜索建议和通用搜索分为两个面板显示。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/widgetkeyboard.png)
*“添加到主屏幕”的缩小视图更易于组织布局。搜索界面将自动搜索建议和通用搜索分为两个面板显示。
[Ron Amadeo 供图]*
点击主屏幕右上角的加号或长按背景空白处就会打开新的主屏幕设置界面。蜂巢会在屏幕上半部分显示所有主屏的缩小视图,下半部分的分页显示的是小部件和快捷方式。小部件或快捷方式可以从下半部分的抽屉中拖动到五个主屏幕中的任意一个上。姜饼只会显示一个文本列表,而蜂巢会显示小部件完整的略缩图预览。这让你更清楚一个小部件是什么样子的,而不是像原来的“日历”一样只是一个只有应用名称的描述。
摩托罗拉 Xoom 更大的屏幕让键盘的布局更加接近 PC 风格退格、回车、shift 以及 tab 都在传统的位置上。键盘带有浅蓝色,并且键与键之间的空间更大了。谷歌还添加了一个专门的笑脸按钮。 :-)
![打开菜单的 Gmail 在蜂巢和姜饼上的效果。按钮布置在首屏更容易被发现。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/thebasics.png)
*打开菜单的 Gmail 在蜂巢和姜饼上的效果。按钮布置在首屏更容易被发现。
[Ron Amadeo 供图]*
Gmail 示范了蜂巢所有的用户界面概念。安卓 3.0 不再把所有控制都隐藏在菜单按钮之后。屏幕的顶部现在有一条带有图标的条带,叫做 Action Bar操作栏它将许多常用的控制选项提升到了主屏幕上用户直接就能看到它们。Gmail 的操作栏显示着搜索、新邮件、刷新按钮,不常用的选项比如设置、帮助,以及反馈放在了“更多”按钮中。点击复选框或选中文本的时候,整个操作栏的图标会变成和操作相关的——举个例子,选择文本会出现复制、粘贴和全选按钮。
应用左上角显示的图标同时也作为称作“上一级”的导航按钮。“后退”的作用类似浏览器的后退按钮,导航到之前访问的页面,“上一级”则会导航至应用的上一层次。举例来说,如果你在安卓市场,点击“给开发者发邮件”,会打开 Gmail“后退”会让你返回安卓市场但是“上一级”会带你到 Gmail 的收件箱。“后退”可能会关闭当前应用,而“上一级”永远不会。应用可以控制“后退”按钮,它们往往重新定义为“上一级”的功能。事实上,这两个按钮之间几乎没什么不同。
蜂巢还引入了 “Fragments” API允许开发者开发同时适用于平板和手机的应用。一个 “Fragment”格子是用户界面的一个面板。在上图的 Gmail 中,左边的文件夹列表是一个格子,收件箱是另一个格子。手机每屏显示一个格子,而平板则可以并列显示两个。开发者可以自行定义每个格子各自的外观,安卓会根据当前的设备决定如何显示它们。
![计算器使用了常规的安卓按钮,但日历看起来像是被谁打翻了蓝墨水。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/calculendar.png)
*计算器使用了常规的安卓按钮,但日历看起来像是被谁打翻了蓝墨水。
[Ron Amadeo 供图]*
这是安卓历史上第一次计算器换上了非定制按钮,所以它看起来确实像是系统的一部分。更大的屏幕有了更多空间容纳按钮,足够将计算器基本功能容纳在一个屏幕上。日历极大地受益于额外的显示空间,有了更多的空间显示事件文本和控制选项。顶部的操作栏有切换视图的按钮,显示当前时间跨度,以及常规按钮。事件块变成了白色背景,日历标识只在左上角显示。在底部(或横屏模式的侧边)显示的是月历和显示的日历列表。
日历的比例同样可以调整。通过两指缩放手势,纵向的周和日视图能够在一屏内显示五到十九小时的事件。日历的背景由不均匀的蓝色斑点组成,看起来不是特别棒,在随后的版本里就被抛弃了。
![新相机界面,取景器显示的是“负片”效果。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/camera.png)
*新相机界面,取景器显示的是“负片”效果。
[Ron Amadeo 供图]*
巨大的10英寸 Xoom 平板有个摄像头,这意味着它同样有个相机应用。电子风格的重新设计终于甩掉了谷歌从安卓 1.6 以来使用的仿皮革外观。控制选项以环形排布在快门键周围让人想起真正的相机上的圆形控制转盘。Cooliris 衍生的弹出对话气泡变成了带光晕的半透明黑色选框。蜂巢的截图显示的是新的“颜色效果”功能,它能给取景器实时加上滤镜效果。不像姜饼的相机应用,它不支持竖屏模式——它被限制在横屏状态。用 10 英寸的平板拍摄纵向照片没多大意义,但拍摄横向照片也没多大意义。
![时钟应用相比其它地方没受到多少关照。谷歌把它扔进一个小盒子里然后就收工了。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/clocks.png)
*时钟应用相比其它地方没受到多少关照。谷歌把它扔进一个小盒子里然后就收工了。
[Ron Amadeo 供图]*
无数的功能已经成形了,现在是时候来重制一下时钟了。整个“桌面时钟”概念被踢出门外,取而代之的是在纯黑背景上显示的简单又巨大的时间数字。启动其它应用来查看天气的功能不见了,随之而去的还有显示你的壁纸的功能。在设计平板尺寸的界面时,有时候谷歌就没那么认真了,就像这里,就只是把时钟界面扔到了一个小小的,居中的对话框里。
![音乐应用终于得到了一直以来都需要的完全重新设计。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/muzack.png)
*音乐应用终于得到了一直以来都需要的完全重新设计。
[Ron Amadeo 供图]*
尽管音乐应用之前有得到一些小的加强,但这是自安卓 0.9 以来它第一次受到正视。重新设计的亮点是一个“不叫滚动封面”的 3D 滚动的专辑封面视图,称作“最新和最近”。导航由操作栏的下拉框解决,取代了安卓 2.1 引入的标签页导航。尽管“最新和最近”有个 3D 滚动专辑封面,但“专辑”使用的是专辑略缩图的平面方阵。另一个部分也有个完全不同的设计。“歌曲”使用了垂直滚动的文本列表,“播放列表”、“年代”和“艺术家”用的是堆砌专辑显示。
在几乎每个视图中,每个单独的项目有它自己单独的菜单,通常在每项的右下角有个小箭头。眼下这里只会显示“播放”和“添加到播放列表”,但这个版本的谷歌音乐是为未来搭建的。谷歌不久后就要发布音乐服务,这些独立菜单在像是在音乐商店里浏览该艺术家的其它内容,或是管理云存储和本地存储时将会是不可或缺的。
正如安卓 2.1 中的 Cooliris 风格的相册,谷歌音乐会将略缩图放大作为背景图片。底部的“正在播放”栏现在显示着专辑封面、播放控制,以及播放进度条。
![新谷歌地图的一些地方真的很棒,一些却是从安卓 1.5 来的。](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps.png)
*新谷歌地图的一些地方真的很棒,一些却是从安卓 1.5 来的。
[Ron Amadeo 供图]*
谷歌地图也为大屏幕进行了重新设计。这个设计将会持续一段时间,它对所有的控制选项用了一个半透明的黑色操作栏。搜索再次成为主要功能,占据了操作栏显要位置,但这回可是真的搜索栏,你可以在里面输入关键字,不像以前那个搜索栏形状的按钮会打开完全不同的界面。谷歌最终还是放弃了给缩放控件留屏幕空间,仅仅依靠手势来控制地图显示。尽管 3D 建筑轮廓这个特性已经被移植到了旧版本的地图中,蜂巢依然是拥有这个特性的第一个版本。双指在地图上向下拖放会“倾斜”地图的视角,展示建筑的侧面。你可以随意旋转,建筑同样会跟着进行调整。
并不是所有部分都进行了重新设计。导航自姜饼以来就没动过,还有些界面的核心部分,比如路线,直接从安卓 1.6 的设计拿出来,放到一个小盒子里居中放置,仅此而已。
----------
![Ron Amadeo](https://cdn.arstechnica.net/wp-content/uploads/2016/05/r.amadeo-45843.jpg)
[Ron Amadeo][a] / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/17/
译者:[alim0x](https://github.com/alim0x) 校对:[Bestony](https://github.com/Bestony)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,8 +1,9 @@
# Linux 是什么?一个简短的描述 Linux 发行版简介系列Linux 是什么?
===================
正如上面的问题所述,我们将要了解: 正如上面的问题所述,我们将要了解:
**Linux 是什么?** ### Linux 是什么?
简单来说, Linux 是一个基于 Unix 的开源操作系统。 简单来说, Linux 是一个基于 Unix 的开源操作系统。
@ -10,20 +11,21 @@
1991 年 10 月 5 日, Linus Torvalds 首次发布 Linux 内核。 Linux 内核是 Linux 系统的一个非常重要的组成部分。目前, Linux 主要用于多种服务器和超级计算机等。它也被用于手机操作系统,比如 Android 操作系统是基于 Linux 内核的。 1991 年 10 月 5 日, Linus Torvalds 首次发布 Linux 内核。 Linux 内核是 Linux 系统的一个非常重要的组成部分。目前, Linux 主要用于多种服务器和超级计算机等。它也被用于手机操作系统,比如 Android 操作系统是基于 Linux 内核的。
在早期, Linux 作为一个免费的操作系统被用于基于 Intel ×86 的个人电脑上。因为 Linux 是一个开源操作系统,所以它的源代码可以被修改或使用,也可以在有许可证,比如 GNU通用公共许可证的情况下被任何人发布。简而言之如果具备一定知识知道自己在干什么那么任何人都可以从 Linux 那儿获得自己的操作系统。正因此,才有了许多 Linux 发行版。 在早期Linux 作为一个免费的操作系统被用于基于 Intel x86 的个人电脑上。因为 Linux 是一个开源操作系统,所以它的源代码可以被修改或使用,也可以在诸如 GPL通用公共许可证这样的许可证下被任何人发布。简而言之如果具备一定知识知道自己在干什么那么任何人都可以从 Linux 那儿获得自己的操作系统。正因此,才有了许多 Linux 发行版。
**现在, Linux 发行版是什么?** ### 那么, Linux 发行版是什么?
它是基于 Linux 内核的一个操作系统。它加载有用户可以访问的软件集合。更多的,它还包含系统管理包。目前有许多 Linux 发行版。因为我们不能数清目前所有的 Linux 发行版,所以我们来看一下一些有名的版本: 它是基于 Linux 内核的一个操作系统。它带有用户可以使用的软件集合。此外,它还包含系统管理包。目前有许多 Linux 发行版。因为我们不能数清目前所有的 Linux 发行版,所以我们来看一下一些有名的版本:
Ubuntu、Fedora、Opensuse、Red hat Linux 和 Debian 等是几个非常受欢迎的 Linux 发行版。 Ubuntu、Fedora、Opensuse、Red hat Linux 和 Debian 等是几个非常受欢迎的 Linux 发行版。
[ [
![](https://3.bp.blogspot.com/-8ckfHXqKaPA/U2o2ufvZ0nI/AAAAAAAAAN0/Frd4OS7m7dk/s280/image_1.png) ![](https://3.bp.blogspot.com/-8ckfHXqKaPA/U2o2ufvZ0nI/AAAAAAAAAN0/Frd4OS7m7dk/s280/image_1.png)
][1] ][1]
> Ubuntu 一个非常受欢迎的 Linux 发行版和第三受欢迎的操作系统 > Ubuntu 一个非常受欢迎的 Linux 发行版和第三受欢迎的操作系统
Linux 发行版是一个已经准备好可以在个人电脑上安装的完整包。一旦用户在桌面或者服务器上安装了 Linux 发行版,就可以使用各种现成的软件和应用程序。现在,很多 Linux 发行版都具有很好的图形用户界面GUI这使得它们成为 windows 系统或 Mac 系统的一个很好的替代品。 Linux 发行版是一个已经准备好可以在个人电脑上安装的完整包。一旦用户在桌面或者服务器上安装了 Linux 发行版,就可以使用各种现成的软件和应用程序。现在,很多 Linux 发行版都具有很好的图形用户界面GUI这使得它们成为 Windows 系统或 Mac 系统的一个很好的替代品。
目前, Linux 发行版在性能、界面、可访问性以及最重要的 - 用户友好性等方面都有了很大的提高。一些发行版比如 Ubuntu 和 Linux mint 等,随着用户数量的一天天增加,赢得了很好的市场地位。 Ubuntu 是紧随 Windows 和 Mac 第三受欢迎的操作系统。 目前, Linux 发行版在性能、界面、可访问性以及最重要的 - 用户友好性等方面都有了很大的提高。一些发行版比如 Ubuntu 和 Linux mint 等,随着用户数量的一天天增加,赢得了很好的市场地位。 Ubuntu 是紧随 Windows 和 Mac 第三受欢迎的操作系统。
@ -33,9 +35,9 @@ Linux 发行版是一个已经准备好可以在个人电脑上安装的完整
via: http://www.techphylum.com/2014/05/what-is-linux-brief-description.html?m=1 via: http://www.techphylum.com/2014/05/what-is-linux-brief-description.html?m=1
作者:[sumit rohankar ][a] 作者:[sumit rohankar][a]
译者:[ucasFL](https://github.com/ucasFL) 译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,37 @@
Linux 发行版简介系列Debian
============
大家好!!
今天给大家带来点有意思的内容。我们准备给大家仔细讲讲 Linux 世界里的东西。
想必你们对 [Linux 是什么?][2]和[怎么在 Linux 下用 screenlets 工具来安装一些桌面小程序][1]这两篇文章也感兴趣。这篇文章就当是这一系列的文章的第一部分。来给大家讲讲 Debian 这个 Linux 发行版。作为 Linux 的第一个发行版Debian 是在 1993 年 9 月才初步发行的。Debian 这个名字来自于 Debian 发行版的创造者 Ian Murdock 及其妻子 Debra。LCTT 译注Ian 已经去世)
Debian 是个庞大的开源软件包的集合体。Debian 支持安装非自由的软件包,但是其自由软件包的数量更大。根据 Debian 的官方数据统计Debian 库里总共囊括了 37500 个自由软件包。这些软件都是由 Debian 官方免费提供的。目前全世界大概有一千多人在为打造一个更好的 Debian 发行版努力。
在写作本文时Debian 最新的稳定发行版是 7.5,命名为 Wheezy。供开发测试用的最新测试发行版 8.0 也出来了,命名为 JessieLCTT 译注翻译本文时Debian 已经 8.7 了)。Debian 发行版默认使用 Gnome 作为桌面环境。当然也不是只有 GnomeKDE、Xfce 和 LXDE 这些桌面环境都是可选的。因为 Debian 的安装工具是可视化的图形界面,所以安装 Debian 这事很容易完成。
Debian 是一个稳健而且安全性高的操作系统。Debian 支持绝大部分架构的硬件平台,所以你不用担心它能不能在你的 PC 上运行。另外你是不是要问驱动怎么办?想知道从哪里可以找到能和你的 Debian 相匹配的驱动这些问题都不需要太担心Debian 社区已经把绝大部分新老设备的驱动准备好了。这样一来,你也不用再等设备生产商给你制作相应的设备驱动了。还有更棒的一点就是,这些驱动都是开源的,都是可以免费获取的。
Debian 是由社区来维护的。因为有了这个社区,你可以相信你在使用 Debian 过程种遇到的问题肯定是可以在社区里找到其它用户来给你提供解决办法的。Debian 软件库里有大把的软件供你选择而且都是免费的。Debian 是一个非常稳定而功能强大的操作系统,另外它的用户界面也很易用。
我们一般所说的稳定,是指这个系统极少出现崩溃或者挂死的现象,还能兼顾高效率。Debian 正是这种系统的代表。Debian 的升级也相当容易实现例如一次常规升级只需两条命令见下面的示例。Debian 团队已经把软件库里的众多软件源码包编译好,所以我们可以轻松地找到想要的软件,并且安装到系统里。
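一个最小的示例(在已正确配置软件源的 Debian 系统上,需要 root 权限):

```
# 刷新软件包索引
sudo apt-get update
# 将已安装的软件包升级到软件源中的最新版本
sudo apt-get upgrade
```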
不管怎么说Debian 诞生到现在已经有 20 个年头了。能持续到现在,说明了 Debian 团队一直在为给用户提供一个最好的发行版而不懈努力着。Debian 可以通过购买 DVD 的方式进行安装,也可以直接在网上下载 ISO 镜像来进行安装。所以我们推荐你试一下 Debian。它可以给你提供非常多的东西。
Debian 是我们的“Linux 发行版简介系列”系列里的第一篇文章。我们会接下来会给你们介绍另外一个 Linux 发行版。保持关注哦,后面还有更多内容。到时候再见咯。
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
作者:[sumit rohankar][a]
译者:[zschong](https://github.com/zschong)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/05/desktop-gadgets-in-linux-ubuntu.html
[2]:http://www.techphylum.com/2014/05/what-is-linux-brief-description.html?m=1

View File

@ -1,3 +1,5 @@
translating by XLCYun
Reactive programming vs. Reactive systems Reactive programming vs. Reactive systems
============================================================ ============================================================

View File

@ -1,3 +1,4 @@
【翻译中】
How Linux Helped Me Become an Empowered Computer User How Linux Helped Me Become an Empowered Computer User
============================================================ ============================================================

View File

@ -1,65 +0,0 @@
Windows wins the desktop, but Linux takes the world
============================================================
The city with the highest-profile Linux desktop projects is turning back to Windows, but the fate of Linux isn't tied to the PC anymore.
![munich2.jpg](http://zdnet3.cbsistatic.com/hub/i/r/2017/02/10/9befc3d2-7931-48df-8114-008d23f1941d/resize/770xauto/02ca33958e5288c81a85d3dac546f621/munich2.jpg)
>The fate of Munich's Linux project is only part of the story of open source software.
>Image: Getty Images/iStockphoto
After a nearly decade-long project to move away from Windows onto Linux, Munich has all but decided on a dramatic u-turn. It's likely that, by 2021, the city council will start to replace PCs running LiMux (its custom version of Ubuntu) [with Windows 10][4].
Going back maybe 15 or 20 years, it was seriously debated as to when Linux would overtake Windows on the desktop. When Ubuntu was created in 2004, for example, it was with the [specific intention of replacing Windows][5] as the standard desktop operating system.
Spoiler: it didn't happen.
Linux on the desktop has about a two percent market share today and is viewed by many as complicated and obscure. Meanwhile, Windows sails on serenely, currently running on 90 percent of PCs in use. There will likely always be a few Linux desktops around in business -- particularly for developers or data scientists.
But it's never going to be mainstream.
There has been lots of interest in Munich's Linux project because it's one of the biggest around. Few large organizations have switched from Windows to Linux, although there are some others, like [the French Gendarmerie and the city of Turin][6]. But [Munich was the poster child][7]: losing it as a case study will undoubtedly be a blow to those still [championing Linux on the desktop][8].
But the reality is that most companies are happy to go with the dominant desktop OS, given all of the advantages around integration and familiarity that come with it.
It's not entirely clear how much of the problems that some staff have complained about are down to the LiMux software and how much the operating system is being blamed for unrelated issues. But whatever Munich finally decides to do, Linux's fate is not going to be decided on the desktop -- Linux lost the desktop war years ago.
That's probably OK because Linux won the smartphone war and is doing pretty well on the cloud and Internet of Things battlefields too.
There's a four-in-five chance that there's a Linux-powered smartphone in your pocket (Android is based on the Linux kernel) and plenty of IoT devices are Linux-powered too, even if you don't necessarily notice it.
Devices [like the Raspberry Pi,][9] running a vast array of different flavours of Linux, are creating an enthusiastic community of makers and giving startups a low-cost way to power new types of devices.
Much of the public cloud is running on Linux in one form or another, too; even Microsoft has warmed up to open-source software. Regardless of your views about one software platform or another, having a rich set of options for developers and users is good for choice and good for innovation.
The dominance of the desktop is not what it once was: it's now just one computing platform among many. Indeed, the software on the PC becomes less and less relevant as more apps become device- and OS-independent, residing in the cloud instead.
The twists and turns of the Munich saga and the adventures of Linux on the desktop are fascinating, but they don't tell the full story.
_Agree? Disagree? Join the debate by posting a comment below._
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/windows-wins-the-desktop-but-linux-takes-the-world/
作者:[Steve Ranger ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[1]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[2]:http://www.zdnet.com/article/windows-wins-the-desktop-but-linux-takes-the-world/#comments-c2df091a-2ecf-4e55-84f6-fd3309cf917d
[3]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[4]:http://www.techrepublic.com/article/linux-champion-munich-takes-decisive-step-towards-returning-to-windows/
[5]:http://www.techrepublic.com/article/how-mark-shuttleworth-became-the-first-african-in-space-and-launched-a-software-revolution/
[6]:http://www.techrepublic.com/pictures/10-projects-ditching-microsoft-for-open-source-plus-one-switching-back/
[7]:http://www.techrepublic.com/article/how-munich-rejected-steve-ballmer-and-kicked-microsoft-out-of-the-city/
[8]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[9]:http://www.zdnet.com/article/hands-on-raspberry-pi-7-inch-touch-display-and-case/
[10]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=2&rsid=cnetzdnetglobalsite&pagetype=article&sl=en&sc=as&topicguid=&assetguid=c2df091a-2ecf-4e55-84f6-fd3309cf917d&assettype=content_article&ftag_cd=LGN-10-10aaa0h&devicetype=desktop&viewguid=5d31a1e5-4a88-4002-ac70-1c0ca3e33bb3&q=&ctype=docids;promo&cval=33159648;7214&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Fgraphic-design-bootcamp%2F%3Fpromo%3D7214%26ftag%3DLGN-10-10aaa0h%26cval%3Dcontent-top-leaderboard
[11]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=2&rsid=cnetzdnetglobalsite&pagetype=article&sl=en&sc=as&topicguid=&assetguid=c2df091a-2ecf-4e55-84f6-fd3309cf917d&assettype=content_article&ftag_cd=LGN-10-10aaa0h&devicetype=desktop&viewguid=5d31a1e5-4a88-4002-ac70-1c0ca3e33bb3&q=&ctype=docids;promo&cval=33159648;7214&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Fgraphic-design-bootcamp%2F%3Fpromo%3D7214%26ftag%3DLGN-10-10aaa0h%26cval%3Dcontent-top-leaderboard
[12]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[13]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[14]:http://www.zdnet.com/topic/enterprise-software/

View File

@ -0,0 +1,290 @@
[A Programmer's Introduction to Unicode][18]
============================================================
Ｕｎｉｃｏｄｅ! 🅤🅝🅘🅒🅞🅓🅔‽ 🇺‌🇳‌🇮‌🇨‌🇴‌🇩‌🇪! 😄 The very name strikes fear and awe into the hearts of programmers worldwide. We all know we ought to “support Unicode” in our software (whatever that means—like using `wchar_t` for all the strings, right?). But Unicode can be abstruse, and diving into the thousand-page [Unicode Standard][27] plus its dozens of supplementary [annexes, reports][28], and [notes][29] can be more than a little intimidating. I don't blame programmers for still finding the whole thing mysterious, even 30 years after Unicode's inception.
A few months ago, I got interested in Unicode and decided to spend some time learning more about it in detail. In this article, I'll give an introduction to it from a programmer's point of view.
I'm going to focus on the character set and what's involved in working with strings and files of Unicode text. However, in this article I'm not going to talk about fonts, text layout/shaping/rendering, or localization in detail—those are separate issues, beyond my scope (and knowledge) here.
* [Diversity and Inherent Complexity][10]
* [The Unicode Codespace][11]
* [Codespace Allocation][2]
* [Scripts][3]
* [Usage Frequency][4]
* [Encodings][12]
* [UTF-8][5]
* [UTF-16][6]
* [Combining Marks][13]
* [Canonical Equivalence][7]
* [Normalization Forms][8]
* [Grapheme Clusters][9]
* [And More…][14]
### Diversity and Inherent Complexity
As soon as you start to study Unicode, it becomes clear that it represents a large jump in complexity over character sets like ASCII that you may be more familiar with. It's not just that Unicode contains a much larger number of characters, although that's part of it. Unicode also has a great deal of internal structure, features, and special cases, making it much more than what one might expect a mere “character set” to be. We'll see some of that later in this article.
When confronting all this complexity, especially as an engineer, it's hard not to find oneself asking, “Why do we need all this? Is this really necessary? Couldn't it be simplified?”
However, Unicode aims to faithfully represent the  _entire world's_  writing systems. The Unicode Consortium's stated goal is “enabling people around the world to use computers in any language”. And as you might imagine, the diversity of written languages is immense! To date, Unicode supports 135 different scripts, covering some 1100 languages, and there's still a long tail of [over 100 unsupported scripts][31], both modern and historical, which people are still working to add.
Given this enormous diversity, it's inevitable that representing it is a complicated project. Unicode embraces that diversity, and accepts the complexity inherent in its mission to include all human writing systems. It doesn't make a lot of trade-offs in the name of simplification, and it makes exceptions to its own rules where necessary to further its mission.
Moreover, Unicode is committed not just to supporting texts in any  _single_  language, but also to letting multiple languages coexist within one text—which introduces even more complexity.
Most programming languages have libraries available to handle the gory low-level details of text manipulation, but as a programmer, you'll still need to know about certain Unicode features in order to know when and how to apply them. It may take some time to wrap your head around it all, but don't be discouraged—think about the billions of people for whom your software will be more accessible through supporting text in their language. Embrace the complexity!
### The Unicode Codespace
Let's start with some general orientation. The basic elements of Unicode—its “characters”, although that term isn't quite right—are called  _code points_ . Code points are identified by number, customarily written in hexadecimal with the prefix “U+”, such as [U+0041 “A” latin capital letter a][33] or [U+03B8 “θ” greek small letter theta][34]. Each code point also has a short name, and quite a few other properties, specified in the [Unicode Character Database][35].
The set of all possible code points is called the  _codespace_ . The Unicode codespace consists of 1,114,112 code points. However, only 128,237 of them—about 12% of the codespace—are actually assigned, to date. There's plenty of room for growth! Unicode also reserves an additional 137,468 code points as “private use” areas, which have no standardized meaning and are available for individual applications to define for their own purposes.
### Codespace Allocation
To get a feel for how the codespace is laid out, it's helpful to visualize it. Below is a map of the entire codespace, with one pixel per code point. It's arranged in tiles for visual coherence; each small square is 16×16 = 256 code points, and each large square is a “plane” of 65,536 code points. There are 17 planes altogether.
[
![Map of the Unicode codespace (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png "Map of the Unicode codespace (click to zoom)")
][37]
White represents unassigned space. Blue is assigned code points, green is private-use areas, and the small red area is surrogates (more about those later). As you can see, the assigned code points are distributed somewhat sparsely, but concentrated in the first three planes.
Plane 0 is also known as the “Basic Multilingual Plane”, or BMP. The BMP contains essentially all the characters needed for modern text in any script, including Latin, Cyrillic, Greek, Han (Chinese), Japanese, Korean, Arabic, Hebrew, Devanagari (Indian), and many more.
(In the past, the codespace was just the BMP and no more—Unicode was originally conceived as a straightforward 16-bit encoding, with only 65,536 code points. It was expanded to its current size in 1996. However, the vast majority of code points in modern text belong to the BMP.)
Plane 1 contains historical scripts, such as Sumerian cuneiform and Egyptian hieroglyphs, as well as emoji and various other symbols. Plane 2 contains a large block of less-common and historical Han characters. The remaining planes are empty, except for a small number of rarely-used formatting characters in Plane 14; planes 15–16 are reserved entirely for private use.
### Scripts
Let's zoom in on the first three planes, since that's where the action is:
[
![Map of scripts in Unicode planes 0–2 (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png "Map of scripts in Unicode planes 0–2 (click to zoom)")
][39]
This map color-codes the 135 different scripts in Unicode. You can see how Han (汉字) and Korean (한글) take up most of the range of the BMP (the left large square). By contrast, all of the European, Middle Eastern, and South Asian scripts fit into the first row of the BMP in this diagram.
Many areas of the codespace are adapted or copied from earlier encodings. For example, the first 128 code points of Unicode are just a copy of ASCII. This has clear benefits for compatibility—it's easy to losslessly convert texts from smaller encodings into Unicode (and the other direction too, as long as no characters outside the smaller encoding are used).
### Usage Frequency
One more interesting way to visualize the codespace is to look at the distribution of usage—in other words, how often each code point is actually used in real-world texts. Below is a heat map of planes 0–2 based on a large sample of text from Wikipedia and Twitter (all languages). Frequency increases from black (never seen) through red and yellow to white.
[
![Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)](http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png "Heat map of code point usage frequency in Unicode planes 0–2 (click to zoom)")
][41]
You can see that the vast majority of this text sample lies in the BMP, with only scattered usage of code points from planes 1–2. The biggest exception is emoji, which show up here as the several bright squares in the bottom row of plane 1.
### Encodings
We've seen that Unicode code points are abstractly identified by their index in the codespace, ranging from U+0000 to U+10FFFF. But how do code points get represented as bytes, in memory or in a file?
The most convenient, computer-friendliest (and programmer-friendliest) thing to do would be to just store the code point index as a 32-bit integer. This works, but it consumes 4 bytes per code point, which is sort of a lot. Using 32-bit ints for Unicode will cost you a bunch of extra storage, memory, and performance in bandwidth-bound scenarios, if you work with a lot of text.
Consequently, there are several more-compact encodings for Unicode. The 32-bit integer encoding is officially called UTF-32 (UTF = "Unicode Transformation Format"), but it's rarely used for storage. At most, it comes up sometimes as a temporary internal representation, for examining or operating on the code points in a string.
Much more commonly, you'll see Unicode text encoded as either UTF-8 or UTF-16. These are both  _variable-length_  encodings, made up of 8-bit or 16-bit units, respectively. In these schemes, code points with smaller index values take up fewer bytes, which saves a lot of memory for typical texts. The trade-off is that processing UTF-8/16 texts is more programmatically involved, and likely slower.
### [][43]UTF-8
In UTF-8, each code point is stored using 1 to 4 bytes, based on its index value.
UTF-8 uses a system of binary prefixes, in which the high bits of each byte mark whether it's a single byte, the beginning of a multi-byte sequence, or a continuation byte; the remaining bits, concatenated, give the code point index. This table shows how it works:
| UTF-8 (binary) | Code point (binary) | Range |
| --- | --- | --- |
| 0xxxxxxx | xxxxxxx | U+0000–U+007F |
| 110xxxxx 10yyyyyy | xxxxxyyyyyy | U+0080–U+07FF |
| 1110xxxx 10yyyyyy 10zzzzzz | xxxxyyyyyyzzzzzz | U+0800–U+FFFF |
| 11110xxx 10yyyyyy 10zzzzzz 10wwwwww | xxxyyyyyyzzzzzzwwwwww | U+10000–U+10FFFF |
A handy property of UTF-8 is that code points below 128 (ASCII characters) are encoded as single bytes, and all non-ASCII code points are encoded using sequences of bytes 128–255. This has a couple of nice consequences. First, any strings or files out there that are already in ASCII can also be interpreted as UTF-8 without any conversion. Second, lots of widely-used string programming idioms—such as null termination, or delimiters (newlines, tabs, commas, slashes, etc.)—will just work on UTF-8 strings. ASCII bytes never occur inside the encoding of non-ASCII code points, so searching byte-wise for a null terminator or a delimiter will do the right thing.
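Here's a quick Python sketch of both properties, using only the built-in codec:

```
# Byte lengths follow the table above: 1 byte below U+0080, up to 4 bytes.
for ch in "A", "é", "€", "😀":          # U+0041, U+00E9, U+20AC, U+1F600
    utf8 = ch.encode("utf-8")
    print(f"U+{ord(ch):04X} -> {len(utf8)} byte(s): {utf8.hex(' ')}")

# ASCII bytes never appear inside a multi-byte sequence, so a byte-wise
# search for an ASCII delimiter cannot produce a false hit:
assert b"," not in "héllo😀".encode("utf-8")
```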
Thanks to this convenience, it's relatively simple to extend legacy ASCII programs and APIs to handle UTF-8 strings. UTF-8 is very widely used in the Unix/Linux and Web worlds, and many programmers argue [UTF-8 should be the default encoding everywhere][44].
However, UTF-8 isn't a drop-in replacement for ASCII strings in all respects. For instance, code that iterates over the "characters" in a string will need to decode UTF-8 and iterate over code points (or maybe grapheme clusters—more about those later), not bytes. When you measure the "length" of a string, you'll need to think about whether you want the length in bytes, the length in code points, the width of the text when rendered, or something else.
### [][45]UTF-16
The other encoding that you're likely to encounter is UTF-16. It uses 16-bit words, with each code point stored as either one or two words.
Like UTF-8, we can express the UTF-16 encoding rules in the form of binary prefixes:
| UTF-16 (binary) | Code point (binary) | Range |
| --- | --- | --- |
| xxxxxxxxxxxxxxxx | xxxxxxxxxxxxxxxx | U+0000–U+FFFF |
| 110110xxxxxxxxxx 110111yyyyyyyyyy | xxxxxxxxxxyyyyyyyyyy + 0x10000 | U+10000–U+10FFFF |
A more common way that people talk about UTF-16 encoding, though, is in terms of code points called "surrogates". All the code points in the range U+D800–U+DFFF—or in other words, the code points that match the binary prefixes `110110` and `110111` in the table above—are reserved specifically for UTF-16 encoding, and don't represent any valid characters on their own. They're only meant to occur in the 2-word encoding pattern above, which is called a "surrogate pair". Surrogate code points are illegal in any other context! They're not allowed in UTF-8 or UTF-32 at all.
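As a quick sketch of the arithmetic (Python), here's U+1F600, an emoji outside the BMP, split into a surrogate pair per the table above:

```
cp = 0x1F600                 # 😀, outside the BMP
v = cp - 0x10000             # the 20-bit value that gets split across two words
hi = 0xD800 | (v >> 10)      # leading surrogate, binary prefix 110110
lo = 0xDC00 | (v & 0x3FF)    # trailing surrogate, binary prefix 110111
print(hex(hi), hex(lo))      # 0xd83d 0xde00

# Cross-check against the built-in codec (big-endian, no BOM):
assert "\U0001F600".encode("utf-16-be") == hi.to_bytes(2, "big") + lo.to_bytes(2, "big")
```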
Historically, UTF-16 is a descendant of the original, pre-1996 versions of Unicode, in which there were only 65,536 code points. The original intention was that there would be no different "encodings"; Unicode was supposed to be a straightforward 16-bit character set. Later, the codespace was expanded to make room for a long tail of less-common (but still important) Han characters, which the Unicode designers didn't originally plan for. Surrogates were then introduced, as—to put it bluntly—a kludge, allowing 16-bit encodings to access the new code points.
Today, JavaScript uses UTF-16 as its standard string representation: if you ask for the length of a string, or iterate over it, etc., the result will be in UTF-16 words, with any code points outside the BMP expressed as surrogate pairs. UTF-16 is also used by the Microsoft Win32 APIs; though Win32 supports either 8-bit or 16-bit strings, the 8-bit version unaccountably still doesn't support UTF-8—only legacy code-page encodings, like ANSI. This leaves UTF-16 as the only way to get proper Unicode support in Windows.
By the way, UTF-16's words can be stored either little-endian or big-endian. Unicode has no opinion on that issue, though it does encourage the convention of putting [U+FEFF zero width no-break space][46] at the top of a UTF-16 file as a [byte-order mark][47], to disambiguate the endianness. (If the file doesn't match the system's endianness, the BOM will be decoded as U+FFFE, which isn't a valid code point.)
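In Python, for instance, the  _utf-16_  codec writes a BOM and honors one when decoding, while the  _utf-16-le_ / _utf-16-be_  variants pin the endianness explicitly and emit no BOM:

```
print("A".encode("utf-16-le"))            # b'A\x00'
print("A".encode("utf-16-be"))            # b'\x00A'
print(b"\xff\xfeA\x00".decode("utf-16"))  # BOM bytes FF FE mean little-endian: 'A'
```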
### [][48]Combining Marks
In the story so far, we've been focusing on code points. But in Unicode, a "character" can be more complicated than just an individual code point!
Unicode includes a system for  _dynamically composing_  characters, by combining multiple code points together. This is used in various ways to gain flexibility without causing a huge combinatorial explosion in the number of code points.
In European languages, for example, this shows up in the application of diacritics to letters. Unicode supports a wide range of diacritics, including acute and grave accents, umlauts, cedillas, and many more. All these diacritics can be applied to any letter of any alphabet—and in fact,  _multiple_  diacritics can be used on a single letter.
If Unicode tried to assign a distinct code point to every possible combination of letter and diacritics, things would rapidly get out of hand. Instead, the dynamic composition system enables you to construct the character you want, by starting with a base code point (the letter) and appending additional code points, called “combining marks”, to specify the diacritics. When a text renderer sees a sequence like this in a string, it automatically stacks the diacritics over or under the base letter to create a composed character.
For example, the accented character “Á” can be expressed as a string of two code points: [U+0041 “A” latin capital letter a][49] plus [U+0301 “◌́” combining acute accent][50]. This string automatically gets rendered as a single character: “Á”.
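In Python, for example, you can check that the composed result is still two code points under the hood:

```
s = "A\u0301"    # U+0041 + U+0301 combining acute accent
print(s)         # renders as Á
print(len(s))    # 2: two code points, one user-perceived character
```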
Now, Unicode does also include many "precomposed" code points, each representing a letter with some combination of diacritics already applied, such as [U+00C1 "Á" latin capital letter a with acute][51] or [U+1EC7 "ệ" latin small letter e with circumflex and dot below][52]. I suspect these are mostly inherited from older encodings that were assimilated into Unicode, and kept around for compatibility. In practice, there are precomposed code points for most of the common letter-with-diacritic combinations in European-script languages, so they don't use dynamic composition that much in typical text.
Still, the system of combining marks does allow for an  _arbitrary number_  of diacritics to be stacked on any base character. The reductio-ad-absurdum of this is [Zalgo text][53], which works by ͖͟ͅr͞aṋ̫̠̖͈̗d͖̻̹óm̪͙͕̗̝ļ͇̰͓̳̫ý͓̥̟͍ ̕s̫t̫̱͕̗̰̼̘͜a̼̩͖͇̠͈̣͝c̙͍k̖̱̹͍͘i̢n̨̺̝͇͇̟͙ģ̫̮͎̻̟ͅ ̕n̼̺͈͞u̮͙m̺̭̟̗͞e̞͓̰̤͓̫r̵o̖ṷs҉̪͍̭̬̝̤ ̮͉̝̞̗̟͠d̴̟̜̱͕͚i͇̫̼̯̭̜͡ḁ͙̻̼c̲̲̹r̨̠̹̣̰̦i̱t̤̻̤͍͙̘̕i̵̜̭̤̱͎c̵s ͘o̱̲͈̙͖͇̲͢n͘ ̜͈e̬̲̠̩ac͕̺̠͉h̷̪ ̺̣͖̱ḻ̫̬̝̹ḙ̙̺͙̭͓̲t̞̞͇̲͉͍t̷͔̪͉̲̻̠͙e̦̻͈͉͇r͇̭̭̬͖,̖́ ̜͙͓̣̭s̘̘͈o̱̰̤̲ͅ ̛̬̜̙t̼̦͕̱̹͕̥h̳̲͈͝ͅa̦t̻̲ ̻̟̭̦̖t̛̰̩h̠͕̳̝̫͕e͈̤̘͖̞͘y҉̝͙ ̷͉͔̰̠o̞̰v͈͈̳̘͜er̶f̰͈͔ḻ͕̘̫̺̲o̲̭͙͠ͅw̱̳̺ ͜t̸h͇̭͕̳͍e̖̯̟̠ ͍̞̜͔̩̪͜ļ͎̪̲͚i̝̲̹̙̩̹n̨̦̩̖ḙ̼̲̼͢ͅ ̬͝s̼͚̘̞͝p͙̘̻a̙c҉͉̜̤͈̯̖i̥͡n̦̠̱͟g̸̗̻̦̭̮̟ͅ ̳̪̠͖̳̯̕a̫͜n͝d͡ ̣̦̙ͅc̪̗r̴͙̮̦̹̳e͇͚̞͔̹̫͟a̙̺̙ț͔͎̘̹ͅe̥̩͍ a͖̪̜̮͙̹n̢͉̝ ͇͉͓̦̼́a̳͖̪̤̱p̖͔͔̟͇͎͠p̱͍̺ę̲͎͈̰̲̤̫a̯͜r̨̮̫̣̘a̩̯͖n̹̦̰͎̣̞̞c̨̦̱͔͎͍͖e̬͓͘ ̤̰̩͙̤̬͙o̵̼̻̬̻͇̮̪f̴ ̡̙̭͓͖̪̤“̸͙̠̼c̳̗͜o͏̼͙͔̮r̞̫̺̞̥̬ru̺̻̯͉̭̻̯p̰̥͓̣̫̙̤͢t̳͍̳̖ͅi̶͈̝͙̼̙̹o̡͔n̙̺̹̖̩͝ͅ”̨̗͖͚̩.̯͓
A few other places where dynamic character composition shows up in Unicode:
* [Vowel-pointing notation][15] in Arabic and Hebrew. In these languages, words are normally spelled with some of their vowels left out. They then have diacritic notation to indicate the vowels (used in dictionaries, language-teaching materials, children's books, and such). These diacritics are expressed with combining marks.
| | |
| --- | --- |
| A Hebrew example, with [niqqud][1]: | אֶת דַלְתִּי הֵזִיז הֵנִיעַ, קֶטֶב לִשְׁכַּתִּי יָשׁוֹד |
| Normal writing (no niqqud): | את דלתי הזיז הניע, קטב לשכתי ישוד |
* [Devanagari][16], the script used to write Hindi, Sanskrit, and many other South Asian languages, expresses certain vowels as combining marks attached to consonant letters. For example, “ह” + “​ि” = “हि” (“h” + “i” = “hi”).
* Korean characters stand for syllables, but they are composed of letters called [jamo][17] that stand for the vowels and consonants in the syllable. While there are code points for precomposed Korean syllables, it's also possible to dynamically compose them by concatenating their jamo. For example, "ᄒ" + "ᅡ" + "ᆫ" = "한" ("h" + "a" + "n" = "han"); see the sketch just below.
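Here's a quick Python check of that last point; NFC normalization (covered under "Normalization Forms" below) folds the jamo sequence into the precomposed syllable:

```
import unicodedata

jamo = "\u1112\u1161\u11AB"                            # ᄒ + ᅡ + ᆫ
print(jamo)                                            # renders as 한
print(unicodedata.normalize("NFC", jamo) == "\uD55C")  # True: precomposed 한
```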
### [][54]Canonical Equivalence
In Unicode, precomposed characters exist alongside the dynamic composition system. A consequence of this is that there are multiple ways to express “the same” string—different sequences of code points that result in the same user-perceived characters. For example, as we saw earlier, we can express the character “Á” either as the single code point U+00C1,  _or_  as the string of two code points U+0041 U+0301.
Another source of ambiguity is the ordering of multiple diacritics in a single character. Diacritic order matters visually when two diacritics apply to the same side of the base character, e.g. both above: "ǡ" (dot, then macron) is different from "ā̇" (macron, then dot). However, when diacritics apply to different sides of the character, e.g. one above and one below, then the order doesn't affect rendering. Moreover, a character with multiple diacritics might have one of the diacritics precomposed and others expressed as combining marks.
For example, the Vietnamese letter “ệ” can be expressed in  _five_  different ways:
* Fully precomposed: U+1EC7 “ệ”
* Partially precomposed: U+1EB9 “ẹ” + U+0302 “◌̂”
* Partially precomposed: U+00EA “ê” + U+0323 “◌̣”
* Fully decomposed: U+0065 “e” + U+0323 “◌̣” + U+0302 “◌̂”
* Fully decomposed: U+0065 “e” + U+0302 “◌̂” + U+0323 “◌̣”
Unicode refers to a set of strings like this as "canonically equivalent". Canonically equivalent strings are supposed to be treated as identical for purposes of searching, sorting, rendering, text selection, and so on. This has implications for how you implement operations on text. For example, if an app has a "find in file" operation and the user searches for "ệ", it should, by default, find occurrences of  _any_  of the five versions of "ệ" above!
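You can see the problem directly in, say, Python, where `==` on strings is a plain code-point-by-code-point comparison:

```
# Each pair renders identically but compares unequal:
print("\u00C1" == "A\u0301")        # False: precomposed Á vs. A + combining acute
print("\u1EC7" == "e\u0323\u0302")  # False: precomposed ệ vs. fully decomposed
```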
### [][55]Normalization Forms
To address the problem of “how to handle canonically equivalent strings”, Unicode defines several _normalization forms_ : ways of converting strings into a canonical form so that they can be compared code-point-by-code-point (or byte-by-byte).
The "NFD" normalization form fully  _decomposes_  every character down to its component base and combining marks, taking apart any precomposed code points in the string. It also sorts the combining marks in each character according to their rendered position, so e.g. diacritics that go below the character come before the ones that go above the character. (It doesn't reorder diacritics in the same rendered position, since their order matters visually, as previously mentioned.)
The “NFC” form, conversely, puts things back together into precomposed code points as much as possible. If an unusual combination of diacritics is called for, there may not be any precomposed code point for it, in which case NFC still precomposes what it can and leaves any remaining combining marks in place (again ordered by rendered position, as in NFD).
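Python's standard  _unicodedata_  module implements these forms; here's a quick sketch showing that all five spellings of "ệ" from earlier collapse to a single string under either NFC or NFD:

```
import unicodedata

forms = [
    "\u1EC7",         # fully precomposed
    "\u1EB9\u0302",   # ẹ + combining circumflex
    "\u00EA\u0323",   # ê + combining dot below
    "e\u0323\u0302",  # fully decomposed, dot below first
    "e\u0302\u0323",  # fully decomposed, circumflex first
]
print({unicodedata.normalize("NFC", f) for f in forms})  # one element: the U+1EC7 form
print({unicodedata.normalize("NFD", f) for f in forms})  # one element: e + U+0323 + U+0302
```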
There are also forms called NFKD and NFKC. The "K" here refers to  _compatibility_  decompositions, which cover characters that are "similar" in some sense but not visually identical. However, I'm not going to cover that here.
### [][56]Grapheme Clusters
As we've seen, Unicode contains various cases where a thing that a user thinks of as a single "character" might actually be made up of multiple code points under the hood. Unicode formalizes this using the notion of a  _grapheme cluster_ : a string of one or more code points that constitute a single "user-perceived character".
[UAX #29][57] defines the rules for what, precisely, qualifies as a grapheme cluster. It's approximately "a base code point followed by any number of combining marks", but the actual definition is a bit more complicated; it accounts for things like Korean jamo, and [emoji ZWJ sequences][58].
The main thing grapheme clusters are used for is text  _editing_ : they're often the most sensible unit for cursor placement and text selection boundaries. Using grapheme clusters for these purposes ensures that you can't accidentally chop off some diacritics when you copy-and-paste text, that left/right arrow keys always move the cursor by one visible character, and so on.
Another place where grapheme clusters are useful is in enforcing a string length limit—say, on a database field. While the true, underlying limit might be something like the byte length of the string in UTF-8, you wouldn't want to enforce that by just truncating bytes. At a minimum, you'd want to "round down" to the nearest code point boundary; but even better, round down to the nearest  _grapheme cluster boundary_ . Otherwise, you might be corrupting the last character by cutting off a diacritic, or interrupting a jamo sequence or ZWJ sequence.
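Most standard libraries don't ship grapheme segmentation; in Python, for instance, the third-party  _regex_  module provides `\X`, which matches exactly one grapheme cluster. A sketch of cluster-aware truncation:

```
import regex  # third-party: pip install regex (the stdlib re has no \X)

def truncate_graphemes(s, n):
    """Keep at most n user-perceived characters, never splitting a cluster."""
    return "".join(regex.findall(r"\X", s)[:n])

decomposed = "Vie\u0323\u0302t"               # "Việt" with ệ spelled as three code points
print(truncate_graphemes(decomposed, 3))      # "Việ": the diacritics stay attached
print(len(regex.findall(r"\X", decomposed)))  # 4 clusters, though 6 code points
```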
### [][59]And More…
There's much more that could be said about Unicode from a programmer's perspective! I haven't gotten into such fun topics as case mapping, collation, compatibility decompositions and confusables, Unicode-aware regexes, or bidirectional text. Nor have I said anything yet about implementation issues—how to efficiently store and look up data about the sparsely-assigned code points, or how to optimize UTF-8 decoding, string comparison, or NFC normalization. Perhaps I'll return to some of those things in future posts.
Unicode is a fascinating and complex system. It has a many-to-one mapping between bytes and code points, and on top of that a many-to-one (or, under some circumstances, many-to-many) mapping between code points and "characters". It has oddball special cases in every corner. But no one ever claimed that representing  _all written languages_  was going to be  _easy_ , and it's clear that we're never going back to the bad old days of a patchwork of incompatible encodings.
Further reading:
* [The Unicode Standard][21]
* [UTF-8 Everywhere Manifesto][22]
* [Dark corners of Unicode][23] by Eevee
* [ICU (International Components for Unicode)][24]—C/C++/Java libraries implementing many Unicode algorithms and related things
* [Python 3 Unicode Howto][25]
* [Google Noto Fonts][26]—set of fonts intended to cover all assigned code points
--------------------------------------------------------------------------------
作者简介:
I'm a graphics programmer, currently freelancing in Seattle. Previously I worked at NVIDIA on the DevTech software team, and at Sucker Punch Productions developing rendering technology for the Infamous series of games for PS3 and PS4.
I've been interested in graphics since about 2002 and have worked on a variety of assignments, including fog, atmospheric haze, volumetric lighting, water, visual effects, particle systems, skin and hair shading, postprocessing, specular models, linear-space rendering, and GPU performance measurement and optimization.
You can read about what I'm up to on my blog. In addition to graphics, I'm interested in theoretical physics, and in programming language design.
You can contact me at nathaniel dot reed at gmail dot com, or follow me on Twitter (@Reedbeta) or Google+. I can also often be found answering questions at Computer Graphics StackExchange.
-------------------
via: http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
作者:[Nathan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://reedbeta.com/about/
[1]:https://en.wikipedia.org/wiki/Niqqud
[2]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#codespace-allocation
[3]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#scripts
[4]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#usage-frequency
[5]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-8
[6]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-16
[7]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#canonical-equivalence
[8]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#normalization-forms
[9]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#grapheme-clusters
[10]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#diversity-and-inherent-complexity
[11]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#the-unicode-codespace
[12]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#encodings
[13]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#combining-marks
[14]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#and-more
[15]:https://en.wikipedia.org/wiki/Vowel_pointing
[16]:https://en.wikipedia.org/wiki/Devanagari
[17]:https://en.wikipedia.org/wiki/Hangul#Letters
[18]:http://reedbeta.com/blog/programmers-intro-to-unicode/
[19]:http://reedbeta.com/blog/category/coding/
[20]:http://reedbeta.com/blog/programmers-intro-to-unicode/#comments
[21]:http://www.unicode.org/versions/latest/
[22]:http://utf8everywhere.org/
[23]:https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/
[24]:http://site.icu-project.org/
[25]:https://docs.python.org/3/howto/unicode.html
[26]:https://www.google.com/get/noto/
[27]:http://www.unicode.org/versions/latest/
[28]:http://www.unicode.org/reports/
[29]:http://www.unicode.org/notes/
[30]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#diversity-and-inherent-complexity
[31]:http://linguistics.berkeley.edu/sei/
[32]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#the-unicode-codespace
[33]:http://unicode.org/cldr/utility/character.jsp?a=A
[34]:http://unicode.org/cldr/utility/character.jsp?a=%CE%B8
[35]:http://www.unicode.org/reports/tr44/
[36]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#codespace-allocation
[37]:http://reedbeta.com/blog/programmers-intro-to-unicode/codespace-map.png
[38]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#scripts
[39]:http://reedbeta.com/blog/programmers-intro-to-unicode/script-map.png
[40]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#usage-frequency
[41]:http://reedbeta.com/blog/programmers-intro-to-unicode/heatmap-wiki+tweets.png
[42]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#encodings
[43]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-8
[44]:http://utf8everywhere.org/
[45]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#utf-16
[46]:http://unicode.org/cldr/utility/character.jsp?a=FEFF
[47]:https://en.wikipedia.org/wiki/Byte_order_mark
[48]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#combining-marks
[49]:http://unicode.org/cldr/utility/character.jsp?a=A
[50]:http://unicode.org/cldr/utility/character.jsp?a=0301
[51]:http://unicode.org/cldr/utility/character.jsp?a=%C3%81
[52]:http://unicode.org/cldr/utility/character.jsp?a=%E1%BB%87
[53]:https://eeemo.net/
[54]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#canonical-equivalence
[55]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#normalization-forms
[56]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#grapheme-clusters
[57]:http://www.unicode.org/reports/tr29/
[58]:http://blog.emojipedia.org/emoji-zwj-sequences-three-letters-many-possibilities/
[59]:http://reedbeta.com/blog/programmers-intro-to-unicode/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#and-more

View File

@ -0,0 +1,74 @@
Does your open source project need a president?
============================================================
![Does your open source project need a president?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/government/osdc_transparent_whitehouse_520x292.jpg?itok=IAsYgvi- "Does your open source project need a president?")
>Image by: opensource.com
Recently I was lucky enough to be invited to attend the [Linux Foundation Open Source Leadership Summit][4]. The event was stacked with many of the people I consider mentors, friends, and definitely leaders in the various open source and free software communities that I participate in.
I was able to observe the [CNCF][5] Technical Oversight Committee meeting while there, and was impressed at the way they worked toward consensus where possible. It reminded me of the [OpenStack Technical Committee][6] in its make-up of well-spoken technical individuals who care about their users and stand up for the technical excellence of their foundations' activities.
But it struck me (and several other attendees) that this consensus building has limitations. [Adam Jacob][7] pointed out that Linus Torvalds had given an interview on stage earlier in the day, in which Linus said that most of his role was to listen closely to differing opinions for a time, but then, once it was clear there was no consensus, to stop the debate, select the option he felt was technically excellent, and move on. Linus, being the founder of Linux and the benevolent dictator of the project for its lifetime thus far, has earned this moral authority.
However, unlike Linux, many of the modern foundation-fostered projects lack an executive branch. The structure we see for governance is centered around ensuring that corporate sponsors have influence. Foundation members pay dues to get various levels of board seats or corporate access to events and data. And this is a good thing, as it keeps people like me paid to work in these communities.
However, I believe as technical contributors, we sometimes give this too much sway in the actual governance of the community and the projects. These foundation boards know that day to day decision making should be left to those working in the project, and as such allow committees like the [CNCF][8] TOC or the [OpenStack TC][9] full agency over the technical aspects of the member projects.
I believe these committees operate as a legislative branch. They evaluate conditions and regulate the projects accordingly, allocating budgets for infrastructure and passing edicts to avoid chaos. Since they're not as large as political legislative bodies like the US House of Representatives and Senate, they can usually operate on a consensus basis, and not drive everything to a contentious vote. By and large, these are as nimble as a legislative body can be.
However, I believe open source projects need an executive to be effective. At some point, we need a single person to listen to the facts, entertain theories, and then decide, and execute a plan. Some projects have natural single leaders like this. Most, however, do not.
I believe we as engineers aren't generally good at being like Linus. If you've spent any time in the corporate world you've had an executive disagree with you and run you right over. When we get the chance to distribute power evenly, we do it.
But I think that's a mistake. I think we should strive to have executives. Not just organizers like the [OpenStack PTL][10], but more like the [Debian Project Leader][11]. Empowered people with the responsibility to serve as a visionary and keep the project's decision making relevant and of high quality. This would also give the board somebody to interact with directly so that they do not have to try and convince the whole community to move in a particular direction to wield influence. In this way, I believe we'd end up with a system of checks and balances similar to the US Constitution.
So here is my suggestion for how a project executive structure could work, assuming there is already a strong technical committee and a well-defined voting electorate that I call the "active technical contributors."
1. The president is elected by [Condorcet][1] vote of the active technical contributors of a project for a term of 1 year. (A minimal sketch of Condorcet tallying follows this list.)
2. The president will have veto power over any proposed change to the project's technical assets.
3. The technical committee may override the president's veto by a super majority vote.
4. The president will inform the technical contributors of their plans for the project every 6 months.
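As a minimal, hypothetical sketch of how step 1's Condorcet tallying could work, assuming every ballot is a complete ranking (the candidate names and helper below are illustrative only, not any project's actual tooling):

```
def condorcet_winner(ballots):
    """Return the candidate who wins every head-to-head matchup,
    or None if there is no such candidate (a Condorcet cycle)."""
    candidates = set(ballots[0])

    def beats(a, b):
        # a beats b if more ballots rank a above b than b above a
        a_above = sum(1 for r in ballots if r.index(a) < r.index(b))
        return a_above > len(ballots) - a_above

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

# Three complete ballots, most preferred first:
print(condorcet_winner([["ada", "sam", "kim"],
                        ["ada", "kim", "sam"],
                        ["sam", "ada", "kim"]]))  # ada
```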
This system only works if the project contributors expect their project president to actively drive the vision of the project. Basically, the culture has to turn to this executive for final decision-making before it comes to a veto. The veto is for times when the community makes poor decisions. And this doesn't replace leaders of individual teams. Think of these like the governors of states in the US. They're running their sub-project inside the parameters set down by the technical committee and the president.
And in the case of foundations or communities with boards, I believe ultimately a board would serve as the judicial branch, checking the legality of changes made against the by-laws of the group. If there's no board of sorts, a judiciary could be appointed and confirmed, similar to the US Supreme Court or the [Debian CTTE][12]. This would also just be necessary to ensure that the technical arm of a project doesn't get the foundation into legal trouble of any kind, which is already what foundation boards tend to do.
I'd love to hear your thoughts on this on Twitter, please tweet me [@SpamapS][13] with the hashtag #OpenSourcePresident to get the discussion going.
_This article was originally published on [FewBar.com][2] as "Free and open source leaders—You need a president" and was republished with permission._
--------------------------------------------------------------------------------
作者简介:
Clint Byrum - Clint Byrum is a Cloud Architect at IBM (Though his words here are his own, and not those of IBM). He is an active Open Source and Free Software contributor to Debian, Ubuntu, OpenStack, and various other projects spanning the past 20 years.
-------------------------
via: https://opensource.com/article/17/3/governance-needs-president
作者:[Clint Byrum][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/spamaps
[1]:https://en.wikipedia.org/wiki/Condorcet_method
[2]:http://fewbar.com/2017/02/open-source-governance-needs-presidents/
[3]:https://opensource.com/article/17/3/governance-needs-president?rate=g5uFkFg_AqVo7JnKqPHoAxKccWzo1XXgn5wj5hILAIk
[4]:http://events.linuxfoundation.org/events/open-source-leadership-summit
[5]:https://www.cncf.io/
[6]:https://www.openstack.org/foundation/tech-committee/
[7]:https://twitter.com/adamhjk
[8]:https://www.cncf.io/
[9]:https://www.openstack.org/foundation/tech-committee/
[10]:https://docs.openstack.org/project-team-guide/ptl.html
[11]:https://www.debian.org/devel/leader
[12]:https://www.debian.org/devel/tech-ctte
[13]:https://twitter.com/spamaps
[14]:https://opensource.com/user/121156/feed
[15]:https://opensource.com/users/spamaps

View File

@ -0,0 +1,75 @@
The impact GitHub is having on your software career
============================================================
![The impact GitHub is having on your software career](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/github-universe.jpg?itok=HCU81VX8 "The impact GitHub is having on your software career")
>Image credits: From GitHub
Over the next 12 to 24 months (in other words, between 2018 and 2019), how people hire software developers will change radically.
I spent from 2004 to 2014 working at Red Hat, the world's largest open source software engineering company. On my very first day there, in July 2004, my boss Marty Messer said to me, "All the work you do here will be in the open. In the future, you won't have a CV—people will just Google you."
This was one of the unique characteristics of working at Red Hat at the time. We had the opportunity to create our own personal brands and reputations in the open. Communication with other software engineers through mailing lists and bug trackers, and source code commits to Mercurial, Subversion, and CVS (Concurrent Versions System) repositories, were all open and indexed by Google.
Fast-forward to 2017, and here we are living in a world that is being eaten by open source software.
There are two factors that give you a real sense of the times:
1. Microsoft, long the poster child for closed-source proprietary software and a crusader against open source, has embraced open source software whole-heartedly. The company formed the .NET Foundation (which has Red Hat as a member) and joined the Linux Foundation. .NET is now developed in the open as an open source project.
2. GitHub has become a singular social network that ties together issue tracking and distributed source control.
For software developers coming from a primarily closed source background, it's not really clear yet what just happened. To them, open source equals "working for free in your spare time."
For those of us who spent the past decade making a billion-dollar open source software company, however, there is nothing "free" or "spare time" about working in the open. Also, the benefits and consequences of working in the open are clear: your reputation is yours and is portable between companies. GitHub is a social network where your social capital, created by your commits and contribution to the global conversation in whatever technology you are working, is yours—not tied to the company you happen to be working at temporarily.
Smart people will take advantage of this environment. They'll contribute patches, issues, and comments upstream to the languages and frameworks that they use daily in their job, including TypeScript, .NET, and Redux. They'll also advocate for and creatively arrange for as much of their work as possible to be done in the open, even if it is just their contribution graph to private repositories.
GitHub is a great equalizer. You may not be able to get a job in Australia from India, but there is nothing stopping you from working with Australians on GitHub from India.
The way to get a job at Red Hat during the last decade was obvious. You just started collaborating with Red Hat engineers on a piece of technology that they were working on in the open, then when it was clear that you were making a valuable contribution and were a great person to work with, you would apply for a job. (Or they would hit you up.)
Now that same pathway is open for everyone, into just about any technology. As the world is eaten by open source, the same dynamic is now prevalent everywhere.
In [a recent interview][3], Linus Torvalds (49K followers, following 0 on GitHub), the inventor of Linux and git, put it like this, "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust."
Your reputation is your location in a network of trust. When you change companies, this is weakened and some of it is lost. If you live in a small town and have been there for a long time, then people all over town know you. However, if you move countries, then that goes. You end up somewhere where no one knows you—and worse, no one knows anyone who knows you.
You've lost your first- and second-, and probably even third-degree connections. Unless you've built a brand by speaking at conferences or some other big ticket event, the trust you built up by working with others and committing code to a corporate internal repository is gone. However, if that work has been on GitHub, it's not gone. It's visible. It's connected to a network of trust that is visible.
One of the first things that will happen is that the disadvantaged will start to take advantage of this. Students, new grads, immigrants—they'll use this to move to Australia.
This will change the landscape. Previously privileged developers will suddenly find their network disrupted. One of the principles of open source is meritocracy—the best idea wins, the most commits wins, the most passing tests wins, the best implementation wins, etc.
It's not perfect, nothing is, and it doesn't do away with or discount being a good person to work with. Companies fire some rockstar engineers who just don't play well with others, and that stuff does show up in GitHub, mostly in the interactions with other contributors.
GitHub is not simply a code repository and a list of raw commit numbers, as some people paint it in strawman arguments. It is a social network. I put it like this: It's not your code on GitHub that counts; it's what other people say on GitHub about your code that counts.
GitHub is your portable reputation, and over the next 12 to 24 months, as some developers develop that and others don't, it's going to be a stark differentiator. It's like having email versus not having email (and now everyone has email), or having a cell phone versus not having a cell phone (and now everyone has a cell phone). Eventually, a vast majority will be working in the open, and it will again be a level playing field differentiated on other factors.
But right now, the developer career space is being disrupted by GitHub.
_[This article][1] originally appeared on Medium.com. Reprinted with permission._
--------------------------------------------------------------------------------
作者简介:
Josh Wulf - About me: I'm a Legendary Recruiter at Just Digital People; a Red Hat alumnus; a CoderDojo mentor; a founder of Magikcraft.io; the producer of The JDP Internship — The World's #1 Software Development Reality Show;
-----------------------
via: https://opensource.com/article/17/3/impact-github-software-career
作者:[Josh Wulf][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sitapati
[1]:https://medium.com/@sitapati/the-impact-github-is-having-on-your-software-career-right-now-6ce536ec0b50#.dl79wpyww
[2]:https://opensource.com/article/17/3/impact-github-software-career?rate=2gi7BrUHIADt4TWXO2noerSjzw18mLVZx56jwnExHqk
[3]:http://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds/
[4]:https://opensource.com/user/118851/feed
[5]:https://opensource.com/article/17/3/impact-github-software-career#comments
[6]:https://opensource.com/users/sitapati

View File

@ -0,0 +1,95 @@
How to use pull requests to improve your code reviews
============================================================
Spend more time building and less time fixing with GitHub Pull Requests for proper code review.
![Measure](https://d3tdunqjn7n0wj.cloudfront.net/360x240/measure-106354_1920-a7f65d82a54323773f847cf572e640a4.jpg)
>Take a look at Brent and Peter's book, [ _Introducing GitHub_ ][5], for more on creating projects, starting pull requests, and getting an overview of your team's software development process.
If you don't write code every day, you may not know some of the problems that software developers face on a daily basis:
* Security vulnerabilities in the code
* Code that causes your application to crash
* Code that can be referred to as “technical debt” and needs to be re-written later
* Code that has already been written somewhere that you didn't know about
Code review helps improve the software we write by allowing other people and/or tools to look it over for us. This review can happen with automated code analysis or test coverage tools — two important pieces of the software development process that can save hours of manual work — or peer review. Peer review is a process where developers review each other's work. When it comes to developing software, speed and urgency are two components that often result in some of the previously mentioned problems. If you don't release soon enough, your competitor may come out with a new feature first. If you don't release often enough, your users may doubt whether or not you still care about improvements to your application.
### Weighing the time trade-off: code review vs. bug fixing
If someone is able to bring together multiple types of code review in a way that has minimal friction, then the quality of the software written over time will be improved. It would be naive to think that the introduction of new tools or processes would not at first introduce some amount of delay. But which is more expensive: the time to fix bugs in production, or the time to improve the software before it makes it into production? Even if new tools introduce some lag before a new feature can be released and appreciated by customers, that lag will shorten as the software developers improve their own skills, release cycles will return to previous levels, and bugs should decrease.
One of the keys to achieving this goal of proactively improving code quality with code review is using a platform that is flexible enough to allow software developers to quickly write code, plug in the tools they are familiar with, and do peer review of each other's code. [GitHub][9] is a great example of such a platform. However, putting your code on GitHub doesn't just magically make code review happen; you have to open a pull request to start down this journey.
### Pull requests: a living discussion about code
[Pull requests][10] are a tool on GitHub that allows software developers to discuss and propose changes to the main codebase of a project that later can be deployed for all users to see. They were created back in February of 2008 for the purpose of suggesting a change to someone's work before it would be accepted (merged) and later deployed to production for end users to see that change.
Pull requests started out as a loose way to offer your change to someone's project, but they have evolved into:
* A living discussion about the code you want merged
* Added functionality of increasing the visibility of what changed
* Integration of your favorite tools
* Explicit pull request reviews that can be required as part of a protected branch workflow
### Considering code: URLs are forever
Looking at the first two bullet points above, pull requests foster an ongoing code discussion that makes code changes very visible, as well as making it easy to pick up where you left off on your review. For both new and experienced developers, being able to refer back to these previous discussions about why a feature was developed the way it was, or being linked to another conversation about a related feature, is priceless. Context can be so important when coordinating features across multiple projects, and keeping everyone in the loop as close as possible to the code is great too. If those features are still being developed, it's important to be able to see just what's changed since you last reviewed. After all, it's far easier to [review a small change than a large one][11], but that's not always possible with large features. So, it's important to be able to pick up where you last reviewed and only view the changes since then.
### Integrating tools: software developers are opinionated
Considering the third point above, GitHub's pull requests have a lot of functionality, but developers will always have a preference for additional tools. Code quality is a whole realm of code review that involves the component of code review that isn't necessarily human. Detecting code that's "inefficient" or slow, a potential security vulnerability, or just not up to company standards is a task best left to automated tools. Tools like [SonarQube][12] and [Code Climate][13] can analyse your code, while tools like [Codecov][14] and [Coveralls][15] can tell you if the new code you just wrote is not well tested. The wonder of these tools is that they can plug into GitHub and report their findings right back into the pull request! This means the conversation not only has people reviewing the code, but the tools are reporting there too. Everyone can stay in the loop of exactly how a feature is developing.
Lastly, depending on the preference of your team, you can make the tools and the peer review required by leveraging the required status feature of the [protected branch workflow][16].
Whether you're just getting started on your software development journey, are a business stakeholder who wants to know how a project is doing, or are a project manager who wants to ensure the timeliness and quality of a project, getting involved in the pull request process by setting up an approval workflow and thinking about integration with additional tools to ensure quality is important at any level of software development.
Whether it's for your personal website, your company's online store, or the latest combine to harvest this year's corn with maximum yield, writing good software involves having good code review. Having good code review involves the right tools and platform. To learn more about GitHub and the software development process, take a look at the O'Reilly book, [ _Introducing GitHub_ ][17], where you can learn about creating projects, starting pull requests, and getting an overview of your team's software development process.
--------------------------------------------------------------------------------
作者简介:
**Brent Beer**
Brent Beer has used Git and GitHub for over 5 years through university classes, contributions to open source projects, and professionally as a web developer. While working as a trainer for GitHub, he also became a published author of "Introducing GitHub" for O'Reilly. He now works as a solutions engineer for GitHub in Amsterdam to help bring Git and GitHub to developers across the world.
**Peter Bell**
Peter Bell is the founder and CTO of Ronin Labs. Training is broken - we're fixing it through technology enhanced training! He is an experienced entrepreneur, technologist, agile coach and CTO specializing in EdTech projects. He wrote "Introducing GitHub" for O'Reilly, created the "Mastering GitHub" course for code school and "Git and GitHub LiveLessons" for Pearson. He has presented regularly at national and international conferences on ruby, nodejs, NoSQL (especially MongoDB and neo4j), cloud computing, software craftsmanship, java, groovy, j...
-------------
via: https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
作者:[Brent Beer][a]、[Peter Bell][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[b]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[1]:https://pixabay.com/en/measure-measures-rule-metro-106354/
[2]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[3]:https://www.oreilly.com/people/acf937de-cdf4-4b0e-85bd-b559404c580e
[4]:https://www.oreilly.com/people/2256f119-7ea0-440e-99e8-65281919e952
[5]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews
[6]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[7]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[8]:https://www.oreilly.com/ideas/how-to-use-pull-requests-to-improve-your-code-reviews?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311
[9]:https://github.com/about
[10]:https://help.github.com/articles/about-pull-requests/
[11]:https://blog.skyliner.io/ship-small-diffs-741308bec0d1
[12]:https://github.com/integrations/sonarqube
[13]:https://github.com/integrations/code-climate
[14]:https://github.com/integrations/codecov
[15]:https://github.com/integrations/coveralls
[16]:https://help.github.com/articles/about-protected-branches/
[17]:https://www.safaribooksonline.com/library/view/introducing-github/9781491949801/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=how-to-use-pull-requests-to-improve-your-code-reviews-lower

View File

@ -0,0 +1,67 @@
# Why DevOps is the end of security as we know it
![](https://techbeacon.com/sites/default/files/styles/article_hero_image/public/field/image/rugged-devops-end-of-security.jpg?itok=Gp1xxSMK)
Security can be a hard sell. It's difficult to convince development teams to spend their limited cycles patching security holes with line-of-business managers pressuring them to release applications as quickly as possible. But given that 84 percent of all cyberattacks happen on the application layer, organizations can't afford for their dev teams not to include security.
The rise of DevOps presents a dilemma for many security leads. "It's a threat to security," says [Josh Corman, former CTO at Sonatype][2], "and it's an opportunity for security to get better." Corman is a staunch advocate of [integrating security and DevOps practices to create "Rugged DevOps."][3]  _Business Insights_  talked with Corman about the values security and DevOps share, and how those shared values help make organizations less vulnerable to outages and exploits.
### How are security and DevOps practices mutually beneficial?
**Josh Corman:** A primary example is the tendency for DevOps teams to instrument everything that can be measured. Security is always looking for more intelligence and telemetry. You can take a lot of what DevOps teams are measuring and enter that info into your log management or your SIEM [security information and event management system].
An OODA loop [observe, orient, decide, act] is predicated on having enough pervasive eyes and ears to notice whispers and echoes. DevOps gives you pervasive instrumentation.
### Are there other cultural attitudes that they share?
**JC:** “Be mean to your code” is a shared value. For example, the software tool Chaos Monkey written by Netflix was a watershed moment for DevOps teams. Created to test the resiliency and recoverability of Amazon Web Services, Chaos Monkey made the Netflix teams stronger and more prepared for outages.
So there's now this notion that our systems need to be tested and, as such, James Wickett and I and others decided to make an evil, weaponized Chaos Monkey, which is where the GAUNTLT project came from. It's basically a barrage of security tests that can be used within DevOps cycle times and by DevOps tool chains. It's also very DevOps-friendly, with APIs.
### Where else do enterprise security and DevOps values intersect?
**JC:** Both teams believe complexity is the enemy of all things. For example, [security people and Rugged DevOps folks][4] can actually say, "Look, we're using 11 logging frameworks in our project—maybe we don't need that many, and maybe that attack surface and complexity could hurt us, or hurt the quality or availability of the product."
Complexity tends to be the enemy of lots of things. Typically you don't have a hard time convincing DevOps teams to use better building materials at the architectural level: use the most recent, least vulnerable versions, and use fewer of them.
### What do you mean by “better building materials”?
**JC:** I'm the custodian of the largest open-source repository in the world, so I see who's using which versions, which vulnerabilities are in them, when they don't take a fix for a vulnerability, and for how long. Certain logging frameworks, for example, fix none of their bugs, ever. Some of them fix most of their security bugs within 90 days. People are getting breached over and over because they're using a framework that has zero security hygiene.
Beyond that, even if you don't know the quality of your logging frameworks, having 11 different frameworks makes for a very clunky, buggy deliverable, with lots of extra work and complexity. Your exposure to vulnerabilities is much greater. How much development time do you want to spend fixing lots of little defects, as opposed to creating the next big disruptive thing?
One of the keys to [Rugged DevOps is software supply chain management][5], which incorporates three principles: Use fewer and better suppliers; use the highest-quality parts from those suppliers; and track which parts went where, so that you can have a prompt and agile response when something goes wrong.
### So change management is also important.
**JC:** Yes, that's another shared value. What I've found is that when a company wants to perform security tests such as anomaly detection or net-flow analysis, they need to know what "normal" looks like. A lot of the basic things that trip people up have to do with inventory and patch management.
I saw in the  _Verizon Data Breach Investigations Report_  that 97 percent of last year's successfully exploited vulnerabilities tracked to just ten CVEs [common vulnerabilities and exposures], and of those ten, eight have been fixed for over a decade. So, shame on us for talking about advanced espionage. We're not doing basic patching. Now, I'm not saying that if you fix those ten CVEs, you'll have no successful exploits, but they account for the lion's share of how people are actually failing.
The nice thing about [DevOps automation tools][6] is that they've become an accidental change management database. It's a single version of the truth of who pushed which change where, and when. That's a huge win, because often the factors that have the greatest impact on security are out of your control. You inherit the downstream consequences of the choices made by the CIO and the CTO. As IT becomes more rigorous and repeatable through automation, you lessen the chance for human error and allow more traceability on which change happened where.
### What would you say is the most important shared value?
**JC:** DevOps involves processes and toolchains, but I think the defining attribute is culture, specifically empathy. DevOps works because dev and ops teams understand each other better and can make more informed decisions. Rather than solving problems in silos, they're solving for the stream of activity and the goal. If you show DevOps teams how security can make them better, then as a reciprocation they tend to ask, "Well, are there any choices we make that would make your life easier?" Because often they don't know that the choice they've made to do X, Y, or Z made it impossible to include security.
For security teams, one of the ways to drive value is to be helpful before we ask for help, and provide qualitative and quantitative value before we tell DevOps teams what to do. You've got to earn the trust of DevOps teams and earn the right to play, and then it will be reciprocated. It often happens a lot faster than you think.
--------------------------------------------------------------------------------
via: https://techbeacon.com/why-devops-end-security-we-know-it
作者:[Mike Barton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/intent/follow?original_referer=https%3A%2F%2Ftechbeacon.com%2Fwhy-devops-end-security-we-know-it%3Fimm_mid%3D0ee8c5%26cmp%3Dem-webops-na-na-newsltr_20170310&ref_src=twsrc%5Etfw&region=follow_link&screen_name=mikebarton&tw_p=followbutton
[1]:https://techbeacon.com/resources/application-security-devops-true-state?utm_source=tb&utm_medium=article&utm_campaign=inline-cta
[2]:https://twitter.com/joshcorman
[3]:https://techbeacon.com/want-rugged-devops-team-your-release-security-engineers
[4]:https://techbeacon.com/rugged-devops-rsa-6-takeaways-security-ops-pros
[5]:https://techbeacon.com/josh-corman-security-devops-how-shared-team-values-can-reduce-threats
[6]:https://techbeacon.com/devops-automation-best-practices-how-much-too-much

View File

@ -0,0 +1,176 @@
translating by xiaow6
Your visual how-to guide for SELinux policy enforcement
============================================================
![SELinux policy guide](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/selinux_rules_lead_image.png?itok=jxV7NgtD "Your visual how-to guide for SELinux policy enforcement")
>Image by: opensource.com
We are celebrating the SELinux 10th anniversary this year. Hard to believe it. SELinux was first introduced in Fedora Core 3 and later in Red Hat Enterprise Linux 4. For those who have never used SELinux, or would like an explanation...
SELinux is a labeling system. Every process has a label. Every file/directory object in the operating system has a label. Even network ports, devices, and potentially hostnames have labels assigned to them. We write rules to control the access of a process label to an object label, like a file. We call this  _policy_ . The kernel enforces the rules. Sometimes this enforcement is called Mandatory Access Control (MAC).
The owner of an object does not have discretion over the security attributes of an object. Standard Linux access control, owner/group + permission flags like rwx, is often called Discretionary Access Control (DAC). SELinux has no concept of UID or ownership of files. Everything is controlled by the labels, meaning an SELinux system can be set up without an all-powerful root process.
**Note:**  _SELinux does not let you sidestep DAC controls. SELinux is a parallel enforcement model. An application has to be allowed by BOTH SELinux and DAC to do certain activities. This can lead to confusion for administrators, because when a process gets Permission Denied, administrators assume something is wrong with DAC, not with SELinux labels._
### Type enforcement
Let's look a little further at the labels. The SELinux primary model of enforcement is called  _type enforcement_ . Basically this means we define the label on a process based on its type, and the label on a file system object based on its type.
_Analogy_
Imagine a system where we define types on objects like cats and dogs. A cat and dog are process types.
_*all cartoons by [Máirín Duffy][6]_
![Image showing a cartoon of a cat and dog.](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_01_catdog.png)
We have a class of objects that these processes want to interact with, which we call food. And I want to add types to the food: _cat_chow_ and _dog_chow_.
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_03_foods.png)
As a policy writer, I would say that a dog has permission to eat  _dog_chow_  food and a cat has permission to eat  _cat_chow_  food. In SELinux we would write this rule in policy.
![allow cat cat_chow:food eat; allow dog dog_chow:food eat](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_04_policy.png "SELinux rule")
allow cat cat_chow:food eat;
allow dog dog_chow:food eat;
With these rules the kernel would allow the cat process to eat food labeled _cat_chow_ and the dog to eat food labeled _dog_chow_.
![Cartoon Cat eating Cat Food and Dog eating Dog Food](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_02_eat.png)
But in an SELinux system everything is denied by default. This means that if the dog process tried to eat the  _cat_chow_ , the kernel would prevent it.
![](https://opensource.com/sites/default/files/images/life-uploads/type-enforcement_06_tux-dog-leash.png)
Likewise cats would not be allowed to touch dog food.
![Cartoon cat not allowed to eat dog food](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_07_tux-cat-no.png "Cartoon cat not allowed to eat dog food")
_Real world_
We label Apache processes as _httpd_t_ and we label Apache content as _httpd_sys_content_t_ and _httpd_sys_content_rw_t_. Imagine we have credit card data stored in a MySQL database which is labeled _mysqld_data_t_. If an Apache process is hacked, the hacker could get control of the _httpd_t_ process and would be allowed to read _httpd_sys_content_t_ files and write to _httpd_sys_content_rw_t_. But the hacker would not be allowed to read the credit card data (_mysqld_data_t_) even if the process was running as root. In this case SELinux has mitigated the break-in.
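You can inspect these labels yourself on any SELinux-enabled system with the `-Z` flag that many standard tools accept. The output below is only illustrative; exact type names vary by distribution and policy version:
```
# Show the SELinux label on the running Apache processes
$ ps -eZ | grep httpd
system_u:system_r:httpd_t:s0    1234 ?        00:00:01 httpd

# Show the label on the web content Apache is allowed to read
$ ls -Z /var/www/html/index.html
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/index.html
```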
### MCS enforcement
_Analogy_
Above, we typed the dog process and cat process, but what happens if you have multiple dog processes: Fido and Spot? You want to stop Fido from eating Spot's _dog_chow_.
![SELinux rule](https://opensource.com/sites/default/files/resize/images/life-uploads/mcs-enforcement_02_fido-eat-spot-food-500x251.png "SELinux rule")
One solution would be to create lots of new types, like _Fido_dog_ and _Fido_dog_chow_. But this would quickly become unwieldy, because all dogs have pretty much the same permissions.
To handle this we developed a new form of enforcement, which we call Multi Category Security (MCS). In MCS, we add another section to the label, which we can apply to the dog process and to the dog_chow food. Now we label the dog process as _dog:random1_ (Fido) and _dog:random2_ (Spot).
![Cartoon of two dogs fido and spot](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_01_fido-spot.png)
We label the dog chow as _dog_chow:random1_ (Fido) and _dog_chow:random2_ (Spot).
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_03_foods.png "SELinux rule")
MCS rules say that if the type enforcement rules are OK and the random MCS labels match exactly, then the access is allowed; if not, it is denied.
Fido (dog:random1) trying to eat  _cat_chow:food_  is denied by type enforcement.
![Cartoon of Kernel (Penguin) holding leash to prevent Fido from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_04-bad-fido-cat-chow.png)
Fido (dog:random1) is allowed to eat  _dog_chow:random1._
![Cartoon Fido happily eating his dog food](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_05_fido-eat-fido-food.png)
Fido (dog:random1) is denied access to Spot's food (_dog_chow:random2_).
![Cartoon of Kernel (Penguin) holding leash to prevent Fido from eating Spot's dog food.](https://opensource.com/sites/default/files/images/life-uploads/mcs-enforcement_06_fido-no-spot-food.png)
_Real world_
In computer systems we often have lots of processes all with the same access, but we want them separated from each other. We sometimes call this a _multi-tenant environment_. The best example of this is virtual machines. If I have a server running lots of virtual machines, and one of them gets hacked, I want to prevent it from attacking the other virtual machines and virtual machine images. In a type enforcement system the KVM virtual machine is labeled _svirt_t_ and the image is labeled _svirt_image_t_. We have rules that say _svirt_t_ can read/write/delete content labeled _svirt_image_t_. With libvirt we implemented not only type enforcement separation, but also MCS separation. When libvirt is about to launch a virtual machine it picks out a random MCS label like _s0:c1,c2_, then assigns the _svirt_image_t:s0:c1,c2_ label to all of the content that the virtual machine is going to need to manage. Finally, it launches the virtual machine as _svirt_t:s0:c1,c2_. The SELinux kernel then ensures that _svirt_t:s0:c1,c2_ cannot write to _svirt_image_t:s0:c3,c4_, even if a hacker takes over the virtual machine. Even if it is running as root.
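On a host running libvirt you can observe this separation directly. A sketch, with made-up category pairs and paths:
```
# Each virtual machine process runs with its own random MCS category pair...
$ ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c57,c81    2501 ?   00:30:12 qemu-kvm
system_u:system_r:svirt_t:s0:c122,c534  2502 ?   00:29:33 qemu-kvm

# ...and each disk image carries the matching pair
$ ls -Z /var/lib/libvirt/images/vm1.qcow2
system_u:object_r:svirt_image_t:s0:c57,c81 /var/lib/libvirt/images/vm1.qcow2
```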
We use [similar separation][8] in OpenShift. Each gear (user/app process) runs with the same SELinux type (openshift_t). Policy defines the rules controlling the access of the gear type, and a unique MCS label makes sure one gear can not interact with other gears.
Watch [this short video][9] on what would happen if an OpenShift gear became root.
### MLS enforcement
Another form of SELinux enforcement, used much less frequently, is called Multi Level Security (MLS); it was developed back in the 60s and is used mainly in trusted operating systems like Trusted Solaris.
The main idea is to control processes based on the level of the data they will be using. A _secret_ process can not read _top secret_ data.
MLS is very similar to MCS, except it adds a concept of dominance to enforcement. Where MCS labels have to match exactly, one MLS label can dominate another MLS label and get access.
_Analogy_
Instead of talking about different dogs, we now look at different breeds. We might have a Greyhound and a Chihuahua.
![Cartoon of a Greyhound and a Chihuahua](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_01_chigrey.png)
We might want to allow the Greyhound to eat any dog food, but a Chihuahua could choke if it tried to eat Greyhound dog food.
We want to label the Greyhound as _dog:Greyhound_ and his dog food as _dog_chow:Greyhound_, and label the Chihuahua as _dog:Chihuahua_ and his food as _dog_chow:Chihuahua_.
![Cartoon of a Greyhound dog food and a Chihuahua dog food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_04_mlstypes.png)
With the MLS policy, we would have the MLS Greyhound label dominate the Chihuahua label. This means _dog:Greyhound_ is allowed to eat _dog_chow:Greyhound_ and _dog_chow:Chihuahua_.
![SELinux rule](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_05_chigreyeating.png "SELinux rule")
But  _dog:Chihuahua_  is not allowed to eat  _dog_chow:Greyhound_ .
![Cartoon of Kernel (Penguin) stopping the Chihuahua from eating the greyhound food. Telling him it would be a bit too beefy for him.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_03_chichoke.png)
Of course,  _dog:Greyhound_  and  _dog:Chihuahua_  are still prevented from eating  _cat_chow:Siamese_  by type enforcement, even if the MLS type Greyhound dominates Siamese.
![Cartoon of Kernel (Penguin) holding leash to prevent both dogs from eating cat food.](https://opensource.com/sites/default/files/images/life-uploads/mls-enforcement_06_nocatchow.png)
_Real world_
I could have two Apache servers: one running as  _httpd_t:TopSecret_  and another running as  _httpd_t:Secret_ . If the Apache process  _httpd_t:Secret_  were hacked, the hacker could read  _httpd_sys_content_t:Secret_  but would be prevented from reading  _httpd_sys_content_t:TopSecret_ .
However, if the Apache server running _httpd_t:TopSecret_ was hacked, it could read _httpd_sys_content_t:Secret_ data as well as _httpd_sys_content_t:TopSecret_ data.
We use MLS in military environments where a user might only be allowed to see _secret_ data, but another user on the same system could read _top secret_ data.
### Conclusion
SELinux is a powerful labeling system, controlling the access granted to individual processes by the kernel. The primary feature of this is type enforcement, where rules define the access allowed to a process based on the labeled type of the process and the labeled type of the object. Two additional controls have been added to separate processes with the same type from each other: MCS, which keeps otherwise-identical processes totally separate from each other, and MLS, which allows a process at a higher sensitivity level to dominate one at a lower level.
--------------------------------------------------------------------------------
作者简介:
Daniel J Walsh - Daniel Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001.
-------------------------
via: https://opensource.com/business/13/11/selinux-policy-guide
作者:[Daniel J Walsh ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/users/mairin
[7]:https://opensource.com/business/13/11/selinux-policy-guide?rate=XNCbBUJpG2rjpCoRumnDzQw-VsLWBEh-9G2hdHyB31I
[8]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[9]:http://people.fedoraproject.org/~dwalsh/SELinux/Presentations/openshift_selinux.ogv
[10]:https://opensource.com/user/16673/feed
[11]:https://opensource.com/business/13/11/selinux-policy-guide#comments
[12]:https://opensource.com/users/rhatdan

View File

@ -0,0 +1,119 @@
[Why (most) High Level Languages are Slow][7]
============================================================
Contents
* [Cache costs review][1]
* [Why C# introduces cache misses][2]
* [Garbage Collection][3]
* [Closing remarks][5]
In the last month or two I've had basically the same conversation half a dozen times, both online and in real life, so I figured I'd just write up a blog post that I can refer to in the future.
The reason most high level languages are slow usually comes down to two things:
1. They don't play well with the cache.
2. They have to do expensive garbage collections
But really, both of these boil down to a single reason: the language heavily encourages too many allocations.
First, I'll just state up front that for all of this I'm talking mostly about client-side applications. If you're spending 99.9% of your time waiting on the network, then it probably doesn't matter how slow your language is; optimizing the network is your main concern. I'm talking about applications where local execution speed is important.
I'm going to pick on C# as the specific example here for two reasons: the first is that it's the high level language I use most often these days, and the second is that if I used Java I'd get a bunch of C# fans telling me how it has value types and therefore doesn't have these issues (this is wrong).
In the following I will be talking about what happens when you write idiomatic code. When you work “with the grain” of the language. When you write code in the style of the standard libraries and tutorials. I'm not very interested in ugly workarounds as “proof” that there's no problem. Yes, you can sometimes fight the language to avoid a particular issue, but that doesn't make the language unproblematic.
### Cache costs review
First, let's review the importance of playing well with the cache. Here's a graph based on [this data][10] on memory latencies for Haswell:
![](https://www.sebastiansylvan.com/img/041315_0638_whymosthigh1.png)
The latency for this particular CPU to get to memory is about 230 cycles, while the cost of reading data from L1 is 4 cycles. The key takeaway here is that doing the wrong thing for the cache can make code ~50x slower. In fact, it may be even worse than that: modern CPUs can often do multiple things at once, so you could be loading stuff from L1 while operating on stuff that's already in registers, hiding the L1 load cost partially or completely.
Without exaggerating, we can say that aside from making reasonable algorithm choices, cache misses are the main thing you need to worry about for performance. Once you're accessing data efficiently, you can worry about fine tuning the actual operations you do. In comparison to cache misses, minor inefficiencies just don't matter much.
This is actually good news for language designers! You don't _have_ to build the most efficient compiler on the planet, and you totally can get away with some extra overhead here and there for your abstractions (e.g. array bounds checking). All you need to do is make sure that your design makes it easy to write code that accesses data efficiently, and programs in your language won't have any problems running at speeds that are competitive with C.
### Why C# introduces cache misses
To put it bluntly, C# is a language that simply isn't designed to run efficiently with modern cache realities in mind. Again, I'm talking about the limitations of the design and the “pressure” it puts on the programmer to do things in inefficient ways. Many of these things have theoretical workarounds that you could do at great inconvenience. I'm talking about idiomatic code, what the language “wants” you to do.
The basic problem with C# is that it has very poor support for value-based programming. Yes, it has structs, which are values that are stored “embedded” where they are declared (e.g. on the stack, or inside another object). But there are several big issues with structs that make them more of a band-aid than a solution.
* You have to declare your data types as struct up front, which means that if you _ever_ need this type to exist as a heap allocation then _all_ of them need to be heap allocations. You could make some kind of class-wrapper for your struct and forward all the members, but it's pretty painful. It would be better if classes and structs were declared the same way and could be used in both ways on a case-by-case basis. So when something can live on the stack you declare it as a value, and when it needs to be on the heap you declare it as an object. This is how C++ works, for example. You're not encouraged to make everything into an object-type just because there are a few things here and there that need them on the heap.
* _Referencing_ values is extremely limited. You can pass values by reference to functions, but that's about it. You can't just grab a reference to an element in a List<int>; you have to store both a reference to the list and an index. You can't grab a pointer to a stack-allocated value, or a value stored inside an object (or value). You can only copy them, unless you're passing them to a function (by ref). This is all understandable, by the way. If type safety is a priority, it's pretty difficult (though not impossible) to support flexible referencing of values while also guaranteeing type safety. The rationale behind these restrictions doesn't change the fact that the restrictions are there, though.
* [Fixed sized buffers][6] don't support custom types and also require you to use the unsafe keyword.
* Limited “array slice” functionality. There's an ArraySegment class, but it's not really used by anyone, which means that in order to pass a range of elements from an array you have to create an IEnumerable, which means allocation (boxing). Even if the APIs accepted ArraySegment parameters, it's still not good enough: you can only use it for normal arrays, not for List<T>, not for [stack-allocated array][4]s, etc.
The bottom line is that for all but very simple cases, the language pushes you very strongly towards heap allocations. If all your data is on the heap, accessing it is likely to cause cache misses (since you can't decide how objects are organized in the heap). So while a C++ program poses few challenges to ensuring that data is organized in cache-efficient ways, C# typically encourages you to allocate each part of that data in a separate heap allocation. This means the programmer loses control over data layout, which means unnecessary cache misses are introduced and performance drops precipitously. It doesn't matter that [you can now compile C# programs natively][11] ahead of time; the improvement to code quality is a drop in the bucket compared to poor memory locality.
Plus, there's storage overhead. Each reference is 8 bytes on a 64-bit machine, and each allocation has its own overhead in the form of various metadata. A heap full of tiny objects with lots of references everywhere has a lot of space overhead compared to a heap with very few large allocations where most data is stored embedded within its owner at fixed offsets. Even if you don't care about memory requirements, the fact that the heap is bloated with header words and references means that cache lines have more waste in them, which in turn means even more cache misses and reduced performance.
There are sometimes workarounds you can do; for example, you can use structs and allocate them in a pool using a big List<T>. This allows you to, e.g., traverse the pool and update all of the objects in bulk, getting good locality. This does get pretty messy though, because now anything else wanting to refer to one of these objects has to have a reference to the pool as well as an index, and then keep doing array-indexing all over the place. For this reason, and the reasons above, it is significantly more painful to do this sort of stuff in C# than it is in C++, because it's just not something the language was designed to do. Furthermore, accessing a single element in the pool is now more expensive than just having an allocation per object - you now get _two_ cache misses because you have to first dereference the pool itself (since it's a class). Ok, so you can duplicate the functionality of List<T> in struct-form and avoid this extra cache miss and make things even uglier. I've written plenty of code just like this and it's just extremely low level and error prone.
Finally, I want to point out that this isn't just an issue for “hot spot” code. Idiomatically written C# code tends to have classes and references basically _everywhere_. This means that all over your code, at relatively uniform frequency, there are random multi-hundred cycle stalls, dwarfing the cost of surrounding operations. Yes, there could be hotspots too, but after you've optimized them you're left with a program that's just [uniformly slow.][12] So unless you want to write all your code with memory pools and indices, effectively operating at a lower level of abstraction than even C++ does (and at that point, why bother with C#?), there's not a ton you can do to avoid this issue.
### Garbage Collection
I'm just going to assume in the following that you already understand why garbage collection is a performance problem in a lot of cases: pausing randomly for many milliseconds is usually unacceptable for anything with animation. I won't linger on it; instead, let's move on to why the language design itself exacerbates this issue.
Because of the limitations when it comes to dealing with values, the language very strongly discourages you from using big chunky allocations consisting mostly of values embedded within other values (perhaps stored on the stack), pressuring you instead to use lots of small classes which have to be allocated on the heap. Roughly speaking, more allocations means more time spent collecting garbage.
There are benchmarks that show how C# or Java beat C++ in some particular case, because an allocator based on a GC can have decent throughput (cheap allocations, and you batch all the deallocations up). However, this isn't a common real world scenario. It takes a huge amount of effort to write a C# program with the same low allocation rate that even a very naïve C++ program has, so those kinds of comparisons are really comparing a highly tuned managed program with a naïve native one. Once you spend the same amount of effort on the C++ program, you'd be miles ahead of C# again.
I'm relatively convinced that you could write a GC more suitable for high performance and low latency applications (e.g. an incremental GC where you spend a fixed amount of time per frame doing collection), but this is not enough on its own. At the end of the day the biggest issue with most high level languages is simply that the design encourages far too much garbage being created in the first place. If idiomatic C# allocated at the same low rate a C program does, the GC would pose far fewer problems for high performance applications. And if you _did_ have an incremental GC to support soft real-time applications, you'll probably need a write barrier for it, which, as cheap as it is, means that a language that encourages pointers will add a performance tax to the mutators.
Look at the base class library for .Net: allocations are everywhere! By my count the [.Net Core Framework][13] contains 19x more public classes than structs, so in order to use it you're very much expected to do quite a lot of allocation. Even the creators of .Net couldn't resist the siren call of the language design! I don't know how to gather statistics on this, but using the base class library you quickly notice that it's not just in their choice of value vs. object types where the allocation-happiness shines through. Even _within_ this code there's just a ton of allocations. Everything seems to be written with the assumption that allocations are cheap. Hell, you can't even print an int without allocating! Let that sink in for a second. Even with a pre-sized StringBuilder you can't stick an int in there without allocating using the standard library. That's pretty silly if you ask me.
This isn't just in the standard library. Other C# libraries follow suit. Even Unity (a _game engine_, presumably caring more than average about performance issues) has APIs all over the place that return allocated objects (or arrays) or force the caller to allocate to call them. For example, by returning an array from GetComponents, they're forcing an array allocation just to see what components are on a GameObject. There are a number of alternative APIs they could've chosen, but going with the grain of the language means allocations. The Unity folks wrote “Good C#”; it's just bad for performance.
### Closing remarks
If you're designing a new language, _please_ consider efficiency up front. It's not something a “Sufficiently Smart Compiler” can fix after you've already made it impossible. Yes, it's hard to do type safety without a garbage collector. Yes, it's harder to do garbage collection when you don't have uniform representation for data. Yes, it's hard to reason about scoping rules when you can have pointers to random values. Yes, there are tons of problems to figure out here, but isn't figuring those problems out what language design is supposed to be? Why make another minor iteration of languages that were already designed in the 1960s?
Even if you can't fix all these issues, maybe you can get most of the way there? Maybe use region types (a la Rust) to ensure safety. Or maybe even consider abandoning “type safety at all costs” in favor of more runtime checks (if they don't cause extra cache misses, they don't really matter… and in fact C# already does similar things; see covariant arrays, which are strictly speaking a type system violation and lead to a runtime exception).
The bottom line is that if you want to be an alternative to C++ for high performance scenarios, you need to worry about data layout and locality.
--------------------------------------------------------------------------------
作者简介:
My name is Sebastian Sylvan. I'm from Sweden but live in Seattle. I work at Microsoft on Hololens. Obviously my views are my own and don't necessarily represent those of Microsoft.
I typically blog about graphics, languages, performance, and such. Feel free to hit me up on twitter or email (see links in sidebar).
------------
via: https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow
作者:[Sebastian Sylvan ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.sebastiansylvan.com/about/
[1]:https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#cache-costs-review
[2]:https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#why-c-introduces-cache-misses
[3]:https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#garbage-collection
[4]:https://msdn.microsoft.com/en-us/library/vstudio/cx9s2sy4(v=vs.100).aspx
[5]:https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow/?imm_mid=0ee8ca&cmp=em-prog-na-na-newsltr_20170311#closing-remarks
[6]:https://msdn.microsoft.com/en-us/library/vstudio/zycewsya(v=vs.100).aspx
[7]:https://www.sebastiansylvan.com/post/why-most-high-level-languages-are-slow/
[8]:https://www.sebastiansylvan.com/categories/programming-languages
[9]:https://www.sebastiansylvan.com/categories/software-engineering
[10]:http://www.7-cpu.com/cpu/Haswell.html
[11]:https://msdn.microsoft.com/en-us/vstudio/dotnetnative.aspx
[12]:http://c2.com/cgi/wiki?UniformlySlowCode
[13]:https://github.com/dotnet/corefx

View File

@ -1,145 +0,0 @@
(翻译中 by runningwater)
15 JavaScript frameworks and libraries
============================================================
![15 JavaScript frameworks and libraries](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/code_javascript.jpg?itok=a4uULCF0 "15 JavaScript frameworks and libraries")
>Image credits : Photo by Jen Wike Huger
JavaScript is the future.
The language is supported by a number of technology leaders, one of whom is WordPress's founder Matt Mullenweg, who hinted that [WordPress developers][18] should learn it, clearly sending a message to the WordPress community as to its future importance. The mention was well received. The transition to better technology will enable WordPress to keep up with future challenges.
JavaScript's open source stance is also one of the best. Contrary to popular belief, JavaScript is not a project, but a specification with an open standard where the language is evolved and maintained by its core team. [ECMAScript][19], another name for JavaScript, is not open source, but it too has an open standard.
You can easily see evidence of JavaScript's popularity when you look at GitHub. JavaScript is the top programming language when it comes to the [number of repositories][20]. Its prominence is also evident on Livecoding.tv, where members are diligently creating more videos on JavaScript than any other topic. At the time of this writing, the self-dubbed edutainment site hosts [45,919 JavaScript videos][21].
### Top open source JavaScript frameworks and libraries
Getting back to the topic, JavaScript is blessed with a large community that thrives on improving the technology. Hundreds of JavaScript frameworks and libraries are available to developers, and the good news is that the best ones are open source. For a JavaScript developer, using the best framework or libraries for rapid development is now a necessity, because the current market demands it. Also, reinventing the wheel is rarely a good idea. Regardless of whether you are new to JavaScript or an experienced JavaScript developer, using libraries and frameworks improves your work significantly.
Let's get started.
### 1\. Angular.js
[Angular.js][1] is one of the most popular JavaScript frameworks. It is used by developers to create complex web apps. The idea behind Angular.js is its single-page app model. It also supports MVC architecture. With Angular.js, the developer can use JavaScript code in the front end, literally extending the HTML vocabulary.
Angular.js has improved a great deal since its inception in 2009. The current stable version of Angular 1 is 1.5.8/1.2.30. You can also try out Angular 2, a significant improvement over Angular 1, but this framework is still not yet adopted by developers across the world.
Angular.js uses data binding as one of the main concepts to get work done. The user interacts with the interface. When the interaction is done, the view is then updated with the new values, which in turn interact with the model and ensure everything is synchronized. The DOM gets updated after the underlying logic is executed in the model.
### 2\. Backbone.js
Not everyone intends to build a complex web application. Simpler web application frameworks such as [Backbone.js][2] are a great fit for those learning web app development. Backbone.js is a straightforward framework that makes building simple web apps fun and speedy. Just like Angular.js, Backbone.js also comes with MVC support. Other key features of Backbone.js are routing, RESTful APIs support, proper state management, and much more. You can also use Backbone.js to build single page apps.
The current stable version is 1.3.3 and is available from [GitHub][22].
### 3\. D3.js
[D3.js][3] is an excellent JavaScript library that enables developers to create rich web pages with data manipulation features. D3.js uses SVG, HTML, and CSS to make the magic happen. With D3.js, you can bind data to the DOM easily and enable data-driven events. With D3.js, you also can create high-quality data-driven web pages that offer a better understanding of data coupled with great visuals. Check [Hamiltonian Graphs from LCF notation][23], powered by D3.js.
### 4\. React.js
[React.js][4] is an interesting JavaScript framework to work with. Unlike other JavaScript frameworks, React.js is ideal for building highly scalable front-end user interfaces. React.js came into existence in 2013 under a BSD license and is growing rapidly thanks to the advantages that it brings to developing complex yet beautiful user interfaces.
The core idea behind React.js is the virtual DOM. Virtual DOM acts as a mediator between the client-side and the server-side, bringing improved performance. The changes made in the virtual DOM are matched with the server DOM, and only the needed elements are updated, making the process much faster than a traditional UI update.
You can also use material design with React, enabling you to develop modern web apps with unparalleled performance.
Check out mittax from Munich, Germany working on React Material-UI in the video below.
### 6\. jQuery
[jQuery][5] is a very popular JavaScript library with features such as event handling, animation, and much more. When working on a web project, you don't want to waste time writing code for simple tasks. jQuery frees you from this with its easy-to-use API. It also works with all the popular web browsers. With jQuery, you can seamlessly control the DOM and also develop Ajax applications, which have been in high demand over the last few years. With jQuery, developers don't have to worry about low-level interactions and can develop their web applications faster and more easily.
jQuery also facilitates the separation of HTML and JavaScript code, enabling developers to write clean code with cross-browser compatibility. Moreover, web apps created using jQuery are easily improved and extended in the future.
### 7\. Ember.js
[Ember.js][6] is a mix of Angular.js and React.js when it comes to functionality. You can easily see the popularity of Ember.js when observing the support community. New features are added constantly. It works similar to Angular.js when it comes to syncing data. The two-way data exchange ensures that the app is fast and scalable. It also helps developers to create front-end elements.
As for its similarities to React.js, Ember.js provides a similar server-side virtual DOM for better performance and scalability. Ember.js also encourages minimal code writing, offers excellent APIs to work with, and has an excellent community.
### 8\. Polymer.js
If you have ever thought of creating your own HTML5 elements, you can do so with the help of [Polymer.js][7]. Polymer's main focus is to provide extended functionality to web developers by giving them the ability to create their own tags. For example, you can create a <my_video> element with its own functionality that is similar to the <video> element in HTML5.
Polymer was introduced in the year 2013 by Google and is covered under [3-Clause BSD][24].
### 9\. Three.js
[Three.js][8] is yet another JavaScript library, with a focus on 3D development. If you are into animation and game development, you can use Three.js to your advantage. Under the hood, Three.js uses WebGL and can easily be used to render 3D objects on the screen. A popular example of the power of Three.js is HexGL, a futuristic racing game.
### 10\. PhantomJS
Working with JavaScript can also mean working with different browsers, and, when we talk about browsers, resource management comes into the discussion easily. With [PhantomJS][25], you can monitor the performance of your web app thanks to the headless WebKit provided. The headless WebKit is part of the rendering engine used in Chrome and Safari.
The entire process is automated, and all you need to do is set up your web application using the available APIs.
### 11\. BabylonJS
[BabylonJS][9] stands in the territory of Three.js, providing JavaScript APIs to create seamless, powerful 3D web apps. It is open source and is based on JavaScript and the power of WebGL. Creating simple 3D objects such as a sphere is easy and you can do so with just a few lines of code. You can get a good grasp of what the library has to offer by going through its [documentation][10]. The homepage also offers excellent demos for inspiration purposes. Do check them out by visiting the official website.
### 12\. Boba.js
Web apps always have one need in common: analytics. If you have struggled to insert analytics into your JavaScript web app, then look to [Boba.js][11]. Boba.js can help you insert analytics into your web app, with support for the old ga.js. You can also integrate metrics with Boba.js. The only requirement is jQuery.
### 13\. Underscore.js
[Underscore.js][12] is the answer to your blank HTML editor file. When you start a project, feeling lost or doing a series of steps that repeat what you have done in earlier projects is common. To simplify the process of starting a project and to give you a head start, the Underscore.js JavaScript library provides you with a set of functions. For example, you can use it alongside your favorite Backbone.js suspenders or with the jQuery functions that you use frequently in your projects.
Functional helpers such as "map", "filter", and "invoke" give you a good head start so you can dive into your work as quickly as possible. Underscore.js also comes with a suite for easy testing purposes.
### 14\. Meteor.js
[Meteor.js][13] is a fast way to get started building JavaScript apps. It is open source in nature and can be used to create apps for desktop, mobile, and the web. Meteor.js is a full-stack framework and enables end-to-end development for multiple platforms. You can create back-end and front-end features with Meteor.js, and also keep tabs on the performance of the application. The community surrounding Meteor.js is huge, so there are frequent feature and bug fix updates. Meteor.js is also modular in nature and can be equipped with amazing APIs.
### 15\. Knockout.js
[Knockout.js][14] is clearly the most underrated framework out there. It was developed by [Steve Sanderson][15] as an open source JavaScript framework and is available under the MIT license. The framework is based on the MVVM design pattern.
### Notable Mention: Node.js
[Node.js][16] is a powerful runtime environment for JavaScript. It can be used to build fast, scalable applications with real world data. It is neither a framework nor a library, but a runtime environment that is based on Google Chrome's JavaScript V8 engine. You can use Node.js to create diversified JavaScript applications, including single page applications, real-time web applications, and much more. Technically, Node.js supports asynchronous I/O with the help of its event-driven architecture. This approach makes it an excellent choice for developing highly scalable solutions. View [Node.js][17] videos on livecoding.tv.
### Conclusion
JavaScript is the lingua franca of the web. It has grown rapidly not only because of what it offers, but also because of the open source community surrounding it. The above-mentioned frameworks and libraries are must-checks for any JavaScript developer. All of them provide some way to explore JavaScript and front-end development. Most of the above-mentioned libraries and frameworks are frequently used on Livecoding.tv by software engineers who are interested in JavaScript and its associated technologies.
If you have something to add, please comment below and let us know. We are eager to see which framework and library you use for projects and also want to know the reason behind it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/16/11/15-javascript-frameworks-libraries
作者:[Dr. Michael J. Garbade ][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/drmjg
[1]:https://opensource.com/article/16/11/www.angularjs.org
[2]:https://opensource.com/article/16/11/www.backbone.js
[3]:https://opensource.com/article/16/11/www.d3js.org
[4]:https://facebook.github.io/react/
[5]:http://jquery.com/
[6]:http://emberjs.com/
[7]:https://www.polymer-project.org/1.0/
[8]:https://threejs.org/
[9]:http://www.babylonjs.com/
[10]:https://doc.babylonjs.com/
[11]:http://boba.space150.com/
[12]:http://underscorejs.org/
[13]:https://www.meteor.com/
[14]:http://knockoutjs.com/
[15]:http://blog.stevensanderson.com/
[16]:https://nodejs.org/en/
[17]:https://www.livecoding.tv/learn/node-js/
[18]:http://wesbos.com/learn-javascript/
[19]:http://stackoverflow.com/questions/5520245/is-javascript-an-open-source-project
[20]:https://github.com/blog/2047-language-trends-on-github
[21]:https://www.livecoding.tv/learn/javascript/
[22]:https://github.com/jashkenas/backbone/
[23]:http://christophermanning.org/projects/building-cubic-hamiltonian-graphs-from-lcf-notation
[24]:https://en.wikipedia.org/wiki/BSD_licenses#3-clause
[25]:https://phantomjs.org/

View File

@ -0,0 +1,84 @@
FTPS (FTP over SSL) vs SFTP (SSH File Transfer Protocol)
============================================================
[
![ftps sftp](http://www.techmixer.com/pic/2015/07/ftps-sftp.png "ftps sftp")
][5]
**SSH File Transfer Protocol (SFTP)** and **FTP over Secure Sockets Layer (FTPS)** are the most common secure FTP communication technologies used to transfer computer files from one host to another over a TCP network. Both SFTP and FTPS offer a high level of file transfer security, using strong algorithms such as AES and Triple DES to encrypt any data transferred.
But the most notable difference between SFTP and FTPS is how connections are authenticated and managed.
FTPS is FTP utilising a Secure Sockets Layer (SSL) certificate for security. The entire secure FTP connection is authenticated using a user ID, a password, and an SSL certificate. Once the FTPS connection is established, the [FTP client software][6] will check whether the destination [FTP server][7]'s certificate is trusted.
The SSL certificate will be considered trusted if either the certificate was signed by a known certificate authority (CA), or if the certificate was self-signed (by your partner) and you have a copy of their public certificate in your trusted key store. All username and password information for FTPS is encrypted through the secure FTP connection.
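If you want to see exactly which certificate an FTPS server presents, OpenSSL can show you. A minimal sketch, assuming explicit FTPS on the standard control port 21 (the hostname is a placeholder):
```
# Connect to an explicit FTPS server and print its certificate chain
$ openssl s_client -connect ftp.example.com:21 -starttls ftp
```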
### Below are the FTPS pros and cons:
Pros:
* The communication can be read and understood by a human
* Provides services for server-to-server file transfer
* SSL/TLS has good authentication mechanisms (X.509 certificate features)
* FTP and SSL support is built into many internet communications frameworks
Cons:
* Does not have a uniform directory listing format
* Requires a secondary DATA channel, which makes it hard to use behind firewalls
* Does not define a standard for file name character sets (encodings)
* Not all FTP servers support SSL/TLS
* Does not have a standard way to get and change file or directory attributes
SFTP, or SSH File Transfer Protocol, is another secure file transfer protocol; it is designed as an SSH extension to provide file transfer capability, so it usually uses only the SSH port for both data and control. When your [FTP client][8] software connects to an SFTP server, it transmits a public key to the server for authentication. If the keys match, along with any user/password supplied, then the authentication will succeed.
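As a minimal sketch of that flow with the OpenSSH tools (the hostname and key path are placeholders, and the matching public key must already be installed on the server):
```
# Generate a key pair, then offer the private key when connecting
$ ssh-keygen -t ed25519 -f ~/.ssh/sftp_key
$ sftp -i ~/.ssh/sftp_key user@sftp.example.com
```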
### Below are the SFTP Pros and Cons:
Pros:
* Has only one connection (no need for a DATA connection).
* FTP connection is always secured
* FTP directory listing is uniform and machine-readable
* FTP protocol includes operations for permission and attribute manipulation, file locking, and more functionality.
Cons:
* The communication is binary and can not be logged “as is” for human reading
* SSH keys are harder to manage and validate.
* The standards define certain things as optional or recommended, which leads to certain compatibility problems between different software titles from different vendors.
* No server-to-server copy and recursive directory removal operations
* No built-in SSH/SFTP support in VCL and .NET frameworks.
Overall, most FTP server software supports both secure FTP technologies with strong authentication options.
But SFTP is the clear winner since it is very firewall-friendly. SFTP needs only a single port number (22 by default) to be opened through the firewall. This port will be used for all SFTP communications, including the initial authentication, any commands issued, as well as any data transferred.
FTPS is more difficult to implement through a tightly secured firewall, since FTPS uses multiple network port numbers. Every time a file transfer request (get, put) or directory listing request is made, another port number needs to be opened. Therefore you have to open a range of ports in your firewall to allow FTPS connections, which can be a security risk for your network.
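In firewall terms the difference looks roughly like this. A sketch using iptables; port 990 assumes implicit FTPS, and the passive-mode data range is whatever your FTPS server happens to be configured to use:
```
# SFTP: a single rule covers authentication, commands, and data
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# FTPS: the control port plus an entire passive-mode data port range
iptables -A INPUT -p tcp --dport 990 -j ACCEPT
iptables -A INPUT -p tcp --dport 40000:41000 -j ACCEPT  # example passive range
```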
FTP Server software that supports FTPS and SFTP:
1. [Cerberus FTP Server][2]
2. [FileZilla - the most famous free SFTP and FTPS server software][3]
3. [Serv-U FTP Server][4]
--------------------------------------------------------------------------------
via: http://www.techmixer.com/ftps-sftp/
作者:[Techmixer.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.techmixer.com/
[1]:http://www.techmixer.com/ftps-sftp/#respond
[2]:http://www.cerberusftp.com/
[3]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[4]:http://www.serv-u.com/
[5]:http://www.techmixer.com/pic/2015/07/ftps-sftp.png
[6]:http://www.techmixer.com/free-ftp-file-transfer-protocol-softwares/
[7]:http://www.techmixer.com/free-ftp-server-best-windows-ftp-server-download/
[8]:http://www.techmixer.com/best-free-mac-ftp-client-connect-ftp-server/

View File

@ -0,0 +1,726 @@
Git in 2016
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*1SiSsLMsNSyAk6khb63W9g.png)
Git had a  _huge_  year in 2016, with five feature releases[¹][57] ( _v2.7_  through  _v2.11_ ) and sixteen patch releases[²][58]. 189 authors[³][59] contributed 3,676 commits[⁴][60] to `master`, which is up 15%[⁵][61] over 2015! In total, 1,545 files were changed with 276,799 lines added and 100,973 lines removed[⁶][62].
However, commit counts and LOC are pretty terrible ways to measure productivity. Until deep learning develops to the point where it can qualitatively grok code, we're going to be stuck with human judgment as the arbiter of productivity.
With that in mind, I decided to put together a retrospective of sorts that covers the changes and improvements made to six of my favorite Git features over the course of the year. This article is pretty darn long for a Medium post, so I will forgive you if you want to skip ahead to a feature that particularly interests you:
* [Rounding out the ][41]`[git worktree][25]`[ command][42]
* [More convenient ][43]`[git rebase][26]`[ options][44]
* [Dramatic performance boosts for ][45]`[git lfs][27]`
* [Experimental algorithms and better defaults for ][46]`[git diff][28]`
* `[git submodules][29]`[ with less suck][47]
* [Nifty enhancements to ][48]`[git stash][30]`
Before we begin, note that many operating systems ship with legacy versions of Git, so it's worth checking that you're on the latest and greatest. If running `git --version` from your terminal returns anything less than Git `v2.11.0`, head on over to Atlassian's quick guide to [upgrade or install Git][63] on your platform of choice.
### [`Citation` needed]
One more quick stop before we jump into the qualitative stuff: I thought I'd show you how I generated the statistics from the opening paragraph (and the rather over-the-top cover image). You can use the commands below to do a quick _year in review_ for your own repositories as well!
```
¹ Tags from 2016 matching the form vX.Y.0
```
```
$ git for-each-ref --sort=-taggerdate --format \
'%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.0 .* 2016"
```
```
² Tags from 2016 matching the form vX.Y.Z
```
```
$ git for-each-ref --sort=-taggerdate --format '%(refname) %(taggerdate)' refs/tags | grep "v\d\.\d*\.[^0] .* 2016"
```
```
³ Commits by author in 2016
```
```
$ git shortlog -s -n --since=2016-01-01 --until=2017-01-01
```
```
⁴ Count commits in 2016
```
```
$ git log --oneline --since=2016-01-01 --until=2017-01-01 | wc -l
```
```
⁵ ... and in 2015
```
```
$ git log --oneline --since=2015-01-01 --until=2016-01-01 | wc -l
```
```
⁶ Net LOC added/removed in 2016
```
```
$ git diff --shortstat `git rev-list -1 --until=2016-01-01 master` \
`git rev-list -1 --until=2017-01-01 master`
```
The commands above were run on Git's `master` branch, so they don't represent any unmerged work on outstanding branches. If you use these commands, remember that commit counts and LOC are not metrics to live by. Please don't use them to rate the performance of your teammates!
And now, on with the retrospective…
### Rounding out Git worktrees
The `git worktree` command first appeared in Git v2.5, but had some notable enhancements in 2016. Two valuable new features were introduced in v2.7 (the `list` subcommand, and namespaced refs for bisecting) and the `lock`/`unlock` subcommands were implemented in v2.10.
#### What's a worktree again?
The `[git worktree][49]` command lets you check out and work on multiple repository branches in separate directories simultaneously. For example, if you need to make a quick hotfix but don't want to mess with your working copy, you can check out a new branch in a new directory with:
```
$ git worktree add -b hotfix/BB-1234 ../hotfix/BB-1234
Preparing ../hotfix/BB-1234 (identifier BB-1234)
HEAD is now at 886e0ba Merged in bedwards/BB-13430-api-merge-pr (pull request #7822)
```
Worktrees aren't just for branches. You can check out multiple tags as different worktrees in order to build or test them in parallel. For example, I created worktrees from the Git v2.6 and v2.7 tags in order to examine the behavior of different versions of Git:
```
$ git worktree add ../git-v2.6.0 v2.6.0
Preparing ../git-v2.6.0 (identifier git-v2.6.0)
HEAD is now at be08dee Git 2.6
```
```
$ git worktree add ../git-v2.7.0 v2.7.0
Preparing ../git-v2.7.0 (identifier git-v2.7.0)
HEAD is now at 7548842 Git 2.7
```
```
$ git worktree list
/Users/kannonboy/src/git 7548842 [master]
/Users/kannonboy/src/git-v2.6.0 be08dee (detached HEAD)
/Users/kannonboy/src/git-v2.7.0 7548842 (detached HEAD)
```
```
$ cd ../git-v2.7.0 && make
```
You could use the same technique to build and run different versions of your own applications side-by-side.
#### Listing worktrees
The `git worktree list` subcommand (introduced in Git v2.7) displays all of the worktrees associated with a repository:
```
$ git worktree list
/Users/kannonboy/src/bitbucket/bitbucket 37732bd [master]
/Users/kannonboy/src/bitbucket/staging d5924bc [staging]
/Users/kannonboy/src/bitbucket/hotfix-1234 37732bd [hotfix/1234]
```
#### Bisecting worktrees
`[git bisect][50]` is a neat Git command that lets you perform a binary search of your commit history. It's usually used to find out which commit introduced a particular regression. For example, if a test is failing on the tip commit of my `master` branch, I can use `git bisect` to traverse the history of my repository looking for the commit that first broke it:
```
$ git bisect start
```
```
# indicate the last commit known to be passing the tests
# (e.g. the latest release tag)
$ git bisect good v2.0.0
```
```
# indicate a known broken commit (e.g. the tip of master)
$ git bisect bad master
```
```
# tell git bisect a script/command to run; git bisect will
# find the oldest commit between "good" and "bad" that causes
# this script to exit with a non-zero status
$ git bisect run npm test
```
Under the hood, bisect uses refs to track the good and bad commits used as the upper and lower bounds of the binary search range. Unfortunately for worktree fans, these refs were stored under the generic `.git/refs/bisect` namespace, meaning that `git bisect` operations run in different worktrees could interfere with each other.
As of v2.7, the bisect refs have been moved to `.git/worktrees/$worktree_name/refs/bisect`, so you can run bisect operations concurrently across multiple worktrees.
#### Locking worktrees
When you're finished with a worktree, you can simply delete it and then run `git worktree prune` or wait for it to be garbage collected automatically. However, if you're storing a worktree on a network share or removable media, then it will be cleaned up if the worktree directory isn't accessible during pruning, whether you like it or not! Git v2.10 introduced the `git worktree lock` and `unlock` subcommands to prevent this from happening:
```
# to lock the git-v2.7 worktree on my USB drive
$ git worktree lock /Volumes/Flash_Gordon/git-v2.7 --reason \
"In case I remove my removable media"
```
```
# to unlock (and delete) the worktree when I'm finished with it
$ git worktree unlock /Volumes/Flash_Gordon/git-v2.7
$ rm -rf /Volumes/Flash_Gordon/git-v2.7
$ git worktree prune
```
The `--reason` flag lets you leave a note for your future self, describing why the worktree is locked. `git worktree unlock` and `lock` both require you to specify the path to the worktree. Alternatively, you can `cd` to the worktree directory and run `git worktree lock .` for the same effect.
### More Git `rebase` options
In March, Git v2.8 added the ability to rebase interactively whilst pulling, with `git pull --rebase=interactive`. Conversely, June's Git v2.9 release implemented support for performing a rebase exec without needing to drop into interactive mode, via `git rebase -x`.
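In other words, both of these invocations are now available (the remote, branch, and test command here are just examples):
```
# interactively rewrite your local commits while pulling
$ git pull --rebase=interactive origin master
```
```
# rerun your test suite after each rewritten commit, without interactive mode
$ git rebase -x "npm test" master
```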
#### Re-wah?
Before we dive in, I suspect there may be a few readers who aren't familiar or completely comfortable with the rebase command or interactive rebasing. Conceptually, it's pretty simple, but as with many of Git's powerful features, the rebase is steeped in some complex-sounding terminology. So let's quickly review what a rebase is.
Rebasing means rewriting one or more commits on a particular branch. The `git rebase` command is heavily overloaded, but the name rebase originates from the fact that it is often used to change a branch's base commit (the commit that you created the branch from).
Conceptually, rebase unwinds the commits on your branch by temporarily storing them as a series of patches, and then reapplying them in order on top of the target commit.
![](https://cdn-images-1.medium.com/max/800/1*mgyl38slmqmcE4STS56nXA.gif)
Rebasing a feature branch on master (`git rebase master`) is a great way to "freshen" your feature branch with the latest changes from master. For long-lived feature branches, regular rebasing minimizes the chance and severity of conflicts down the road.
Some teams also choose to rebase immediately before merging their changes onto master in order to achieve a fast-forward merge (`git merge --ff <feature>`). Fast-forwarding merges your commits onto master by simply making the master ref point at the tip of your rewritten branch, without creating a merge commit:
![](https://cdn-images-1.medium.com/max/800/1*QXa3znQiuNWDjxroX628VA.gif)
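A typical fast-forward workflow looks something like this (the branch name is hypothetical):
```
$ git checkout feature/ABC-1234
$ git rebase master
$ git checkout master
# --ff-only refuses to create a merge commit if master has moved on
$ git merge --ff-only feature/ABC-1234
```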
Rebasing is so convenient and powerful that it has been baked into some other common Git commands, such as `git pull`. If you have some unpushed changes on your local master branch, running `git pull` to pull your teammates' changes from the origin will create an unnecessary merge commit:
![](https://cdn-images-1.medium.com/max/800/1*IxDdJ5CygvSWdD8MCNpZNg.gif)
This is kind of messy, and on busy teams, you'll get heaps of these unnecessary merge commits. `git pull --rebase` rebases your local changes on top of your teammates' without creating a merge commit:
![](https://cdn-images-1.medium.com/max/800/1*HcroDMwBE9m21-hOeIwRmw.gif)
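If you prefer this behavior, you can make it the default so you don't have to remember the flag on every pull:
```
# rebase rather than merge whenever you git pull
$ git config --global pull.rebase true
```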
This is pretty neat! Even cooler, Git v2.8 introduced a feature that lets you rebase  _interactively_  whilst pulling.
#### Interactive rebasing
Interactive rebasing is a more powerful form of rebasing. Like a standard rebase, it rewrites commits, but it also gives you a chance to modify them interactively as they are reapplied onto the new base.
When you run `git rebase --interactive` (or `git pull --rebase=interactive`), you'll be presented with a list of commits in your text editor of choice:
```
$ git rebase master --interactive
```
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
pick ed93626 ACE-1294: removed pull request service from test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
pick e68f710 ACE-1294: added testing data to batch email file
```
```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to
# bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
```
Notice that each commit has the word `pick` next to it. That's rebase-speak for, "Keep this commit as-is." If you quit your text editor now, it will perform a normal rebase as described in the last section. However, if you change `pick` to `edit` or one of the other rebase commands, rebase will let you mutate the commit before it is reapplied! There are several available rebase commands:
* `reword`: Edit the commit message.
* `edit`: Edit the files that were committed.
* `squash`: Combine the commit with the previous commit (the one above it in the file), concatenating the commit messages.
* `fixup`: Combine the commit with the commit above it, and use the previous commit's log message verbatim (this is handy if you created a second commit for a small change that should have been in the original commit, i.e., you forgot to stage a file).
* `exec`: Run an arbitrary shell command (we'll look at a neat use-case for this later, in the next section).
* `drop`: This kills the commit.
You can also reorder commits within the file, which changes the order in which they're reapplied. This is handy if you have interleaved commits that are addressing different topics and you want to use `squash` or `fixup` to combine them into logically atomic commits.
Once you've set up the commands and saved the file, Git will iterate through each commit, pausing at each `reword` and `edit` for you to make your desired changes and automatically applying any `squash`, `fixup`, `exec`, and `drop` commands for you.
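As an illustration, here's a hypothetical edit of the plan shown above: it melds the second commit into the first, pauses so you can reword the third, and deletes the fourth entirely:

```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
fixup ed93626 ACE-1294: removed pull request service from test
reword b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
drop e68f710 ACE-1294: added testing data to batch email file
```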
#### Non-interactive exec
When you rebase, you're essentially rewriting history by applying each of your new commits on top of the specified base. `git pull --rebase` can be a little risky because depending on the nature of the changes from the upstream branch, you may encounter test failures or even compilation problems for certain commits in your newly created history. If these changes cause merge conflicts, the rebase process will pause and allow you to resolve them. However, changes that merge cleanly may still break compilation or tests, leaving broken commits littering your history.
Fortunately, you can instruct Git to run your project's test suite for each rewritten commit. Prior to Git v2.9, you could do this with a combination of `git rebase --interactive` and the `exec` command. For example, this:
```
$ git rebase master --interactive --exec="npm test"
```
…would generate an interactive rebase plan that invokes `npm test` after rewriting each commit, ensuring that your tests still pass:
```
pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
exec npm test
pick ed93626 ACE-1294: removed pull request service from test
exec npm test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
pick e68f710 ACE-1294: added testing data to batch email file
exec npm test
```
```
# Rebase f32fa9d..0ddde5f onto f32fa9d (4 command(s))
```
In the event that a test fails, rebase will pause to let you fix the tests (and apply your changes to that commit):
```
291 passing
1 failing
```
```
1) Host request “after all” hook:
Uncaught Error: connect ECONNRESET 127.0.0.1:3001
npm ERR! Test failed.
Execution failed: npm test
You can fix the problem, and then run
git rebase --continue
```
This is handy, but needing to do an interactive rebase is a bit clunky. As of Git v2.9, you can perform a non-interactive rebase exec, with:
```
$ git rebase master -x "npm test"
```
Just replace `npm test` with `make`, `rake`, `mvn clean install`, or whatever you use to build and test your project.
#### A word of warning
Just like in the movies, rewriting history is risky business. Any commit that is rewritten as part of a rebase will have its SHA-1 ID changed, which means that Git will treat it as a totally different commit. If rewritten history is mixed with the original history, you'll get duplicate commits, which can cause a lot of confusion for your team.
To avoid this problem, you only need to follow one simple rule:
> _Never rebase a commit that you've already pushed!_
Stick to that and you'll be fine.
### Performance boosts for `Git LFS`
[Git is a distributed version control system][64], meaning the entire history of the repository is transferred to the client during the cloning process. For projects that contain large files—particularly large files that are modified regularly—the initial clone can be expensive, as every version of every file has to be downloaded by the client. [Git LFS (Large File Storage)][65] is a Git extension developed by Atlassian, GitHub, and a few other open source contributors that reduces the impact of large files in your repository by downloading the relevant versions of them lazily. Specifically, large files are downloaded as needed during the checkout process rather than during cloning or fetching.
Alongside Git's five huge releases in 2016, Git LFS had four feature-packed releases of its own: v1.2 through v1.5. You could write a retrospective series on Git LFS in its own right, but for this article, I'm going to focus on one of the most important themes tackled in 2016: speed. A series of improvements to both Git and Git LFS have greatly improved the performance of transferring files to and from the server.
#### Long-running filter processes
When you `git add` a file, Git's system of clean filters can be used to transform the file's contents before being written to the Git object store. Git LFS reduces your repository size by using a clean filter to squirrel away large file content in the LFS cache and adds a tiny “pointer” file to the Git object store instead.
![](https://cdn-images-1.medium.com/max/800/0*Ku328eca7GLOo7sS.png)
Smudge filters are the opposite of clean filters—hence the name. When file content is read from the Git object store during a `git checkout`, smudge filters have a chance to transform it before it's written to the user's working copy. The Git LFS smudge filter transforms pointer files by replacing them with the corresponding large file, either from your LFS cache or by reading through to your Git LFS store on Bitbucket.
![](https://cdn-images-1.medium.com/max/800/0*CU60meE1lbCuivn7.png)
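Under the hood, these filters are wired up via `.gitattributes`. Running `git lfs track "*.psd"`, for example, records an entry along these lines (the file pattern is just an example):

```
*.psd filter=lfs diff=lfs merge=lfs -text
```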
Traditionally, smudge and clean filter processes were invoked once for each file that was being added or checked out. So, a project with 1,000 files tracked by Git LFS invoked the `git-lfs-smudge` command 1,000 times for a fresh checkout! While each operation is relatively quick, the overhead of spinning up 1,000 individual smudge processes is costly.
As of Git v2.11 (and Git LFS v1.5), smudge and clean filters can be defined as long-running processes that are invoked once for the first filtered file, then fed subsequent files that need smudging or cleaning until the parent Git operation exits. [Lars Schneider][66], who contributed long-running filters to Git, neatly summarized the impact of the change on Git LFS performance:
> The filter process is 80x faster on macOS and 58x faster on Windows for the test repo with 12k files. On Windows, that means the tests runs in 57 seconds instead of 55 minutes!
That's a seriously impressive performance gain!
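If you're curious what this looks like in practice, a recent `git lfs install` configures the long-running process in your global Git config roughly as follows (the exact contents may vary by version):

```
[filter "lfs"]
    clean = git-lfs clean -- %f
    smudge = git-lfs smudge -- %f
    process = git-lfs filter-process
    required = true
```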
#### Specialized LFS clones
Long-running smudge and clean filters are great for speeding up reads and writes to the local LFS cache, but they do little to speed up transferring of large objects to and from your Git LFS server. Each time the Git LFS smudge filter can't find a file in the local LFS cache, it has to make two HTTP calls to retrieve it: one to locate the file and one to download it. During a `git clone`, your local LFS cache is empty, so Git LFS will naively make two HTTP calls for every LFS tracked file in your repository:
![](https://cdn-images-1.medium.com/max/800/0*ViL7r3ZhkGvF0z3-.png)
Fortunately, Git LFS v1.2 shipped the specialized [`git lfs clone`][51] command. Rather than downloading files one at a time, `git lfs clone` disables the Git LFS smudge filter, waits until the checkout is complete, and then downloads any required files as a batch from the Git LFS store. This allows downloads to be parallelized and halves the number of required HTTP requests:
![](https://cdn-images-1.medium.com/max/800/0*T43VA0DYTujDNgkH.png)
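`git lfs clone` accepts the same arguments as `git clone`, so switching over is a drop-in change (the URL below is just a placeholder):

```
$ git lfs clone git@bitbucket.org:your-team/your-repo.git
```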
### Custom Transfer Adapters
As discussed earlier, Git LFS shipped support for long-running filter processes in v1.5. However, support for another type of pluggable process actually shipped earlier in the year. Git LFS v1.3 included support for pluggable transfer adapters so that different Git LFS hosting services could define their own protocols for transferring files to and from LFS storage.
As of the end of 2016, Bitbucket is the only hosting service to implement its own Git LFS transfer protocol via the [Bitbucket LFS Media Adapter][67]. This was done to take advantage of a unique feature of Bitbucket's LFS storage API called chunking. Chunking means large files are broken down into 4MB chunks before uploading or downloading.
![](https://cdn-images-1.medium.com/max/800/1*N3SpjQZQ1Ge8OwvWrtS1og.gif)
Chunking gives Bitbucket's Git LFS support three big advantages:
1. Parallelized downloads and uploads. By default, Git LFS transfers up to three files in parallel. However, if only a single file is being transferred (which is the default behavior of the Git LFS smudge filter), it is transferred via a single stream. Bitbucket's chunking allows multiple chunks from the same file to be uploaded or downloaded simultaneously, often dramatically improving transfer speed.
2. Resumable chunk transfers. File chunks are cached locally, so if your download or upload is interrupted, Bitbucket's custom LFS media adapter will resume transferring only the missing chunks the next time you push or pull.
3. Deduplication. Git LFS, like Git itself, is content addressable; each LFS file is identified by a SHA-256 hash of its contents. So, if you flip a single bit, the file's SHA-256 changes and you have to re-upload the entire file. Chunking allows you to re-upload only the sections of the file that have actually changed. To illustrate, imagine we have a 41MB spritesheet for a video game tracked in Git LFS. If we add a new 2MB layer to the spritesheet and commit it, we'd typically need to push the entire new 43MB file to the server. However, with Bitbucket's custom transfer adapter, we only need to push ~7MB: the first 4MB chunk (because the file's header information will have changed) and the last 3MB chunk containing the new layer we've just added! The other unchanged chunks are skipped automatically during the upload process, saving a huge amount of bandwidth and time.
Customizable transfer adapters are a great feature for Git LFS, as they allow different hosts to experiment with optimized transfer protocols to suit their services without overloading the core project.
### Better `git diff` algorithms and defaults
Unlike some other version control systems, Git doesn't explicitly store the fact that files have been renamed. For example, if I edited a simple Node.js application and renamed `index.js` to `app.js` and then ran `git diff`, I'd get back what looks like a file deletion and an addition:
![](https://cdn-images-1.medium.com/max/800/1*ohMUBpSh_jqz2ffScJ7ApQ.png)
I guess moving or renaming a file is technically just a delete followed by an add, but this isn't the most human-friendly way to show it. Instead, you can use the `-M` flag to instruct Git to attempt to detect renamed files on the fly when computing a diff. For the above example, `git diff -M` gives us:
![](https://cdn-images-1.medium.com/max/800/1*ywYjxBc1wii5O8EhHbpCTA.png)
The similarity index on the second line tells us how similar the content of the compared files was. By default, `-M` considers any two files that are more than 50% similar (that is, you'd need to modify less than 50% of their lines to make them identical) to be a rename. You can choose your own similarity index by appending a percentage, i.e., `-M80%`.
As of Git v2.9, the `git diff` and `git log` commands will both detect renames by default as if you'd passed the `-M` flag. If you dislike this behavior (or, more realistically, are parsing the diff output via a script), then you can disable it by explicitly passing the `--no-renames` flag.
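For example, to disable rename detection for a single diff or for a whole repository (the config key is standard Git):

```
$ git diff --no-renames            # one-off
$ git config diff.renames false    # per-repository default
```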
#### Verbose Commits
Do you ever invoke `git commit` and then stare blankly at your shell trying to remember all the changes you just made? The verbose flag is for you!
Instead of:
```
Ah crap, which dependency did I just rev?
```
```
# Please enter the commit message for your changes. Lines starting
# with # will be ignored, and an empty message aborts the commit.
# On branch master
# Your branch is up-to-date with origin/master.
#
# Changes to be committed:
# new file: package.json
#
```
…you can invoke `git commit --verbose` to view an inline diff of your changes. Don't worry, it won't be included in your commit message:
![](https://cdn-images-1.medium.com/max/800/1*1vOYE2ow3ZDS8BP_QfssQw.png)
The `--verbose` flag isn't new, but as of Git v2.9 you can enable it permanently with `git config --global commit.verbose true`.
#### Experimental Diff Improvements
`git diff` can produce some slightly confusing output when the lines before and after a modified section are the same. This can happen when you have two or more similarly structured functions in a file. For a slightly contrived example, imagine we have a JS file that contains a single function:
```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
Now imagine we've committed a change that prepends  _another_  function that does something similar:
```
/* @return {string} "Bitbucket" */
function productId() {
return "Bitbucket";
}
```
```
/* @return {string} "Bitbucket" */
function productName() {
return "Bitbucket";
}
```
You'd expect `git diff` to show the top five lines as added, but it actually incorrectly attributes the very first line to the original commit:
![](https://cdn-images-1.medium.com/max/800/1*9C7DWMObGHMEqD-QFGHmew.png)
The wrong comment is included in the diff! Not the end of the world, but the couple of seconds of cognitive overhead from the  _Whaaat?_  every time this happens can add up.
In December, Git v2.11 introduced a new experimental diff option, `--indent-heuristic`, that attempts to produce more aesthetically pleasing diffs:
![](https://cdn-images-1.medium.com/max/800/1*UyWZ6JjC-izDquyWCA4bow.png)
Under the hood, `--indent-heuristic` cycles through the possible diffs for each change and assigns each a “badness” score. This is based on heuristics like whether the diff block starts and ends with different levels of indentation (which is aesthetically bad) and whether the diff block has leading and trailing blank lines (which is aesthetically pleasing). Then, the block with the lowest badness score is output.
This feature is experimental, but you can test it out ad-hoc by applying the `--indent-heuristic` option to any `git diff` command. Or, if you like to live on the bleeding edge, you can enable it across your system with:
```
$ git config --global diff.indentHeuristic true
```
### Submodules with less suck
Submodules allow you to reference and include other Git repositories from inside your Git repository. This is commonly used by some projects to manage source dependencies that are also tracked in Git, or by some companies as an alternative to a [monorepo][68] containing a collection of related projects.
Submodules get a bit of a bad rap due to some usage complexities and the fact that it's reasonably easy to break them with an errant command.
![](https://cdn-images-1.medium.com/max/800/1*xNffiElY7BZNMDM0jm0JNQ.gif)
However, they do have their uses and are, I think, still the best choice for vendoring dependencies. Fortunately, 2016 was a great year to be a submodule user, with some significant performance and feature improvements landing across several releases.
#### Parallelized fetching
When cloning or fetching a repository, appending the `--recurse-submodules` option means any referenced submodules will be cloned or updated, as well. Traditionally, this was done serially, with each submodule being fetched one at a time. As of Git v2.8, you can append the `--jobs=n` option to fetch submodules in  _n_  parallel threads.
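For example, to clone a repository and fetch its submodules four at a time (the URL is a placeholder):

```
$ git clone --recurse-submodules --jobs=4 https://example.com/my-project.git
```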
I recommend configuring this option permanently with:
```
$ git config --global submodule.fetchJobs 4
```
…or whatever degree of parallelization you choose to use.
#### Shallow submodules
Git v2.9 introduced the `git clone --shallow-submodules` flag. It allows you to grab a full clone of your repository and then recursively shallow clone any referenced submodules to a depth of one commit. This is useful if you don't need the full history of your project's dependencies.
For example, consider a repository with a mixture of submodules containing vendored dependencies and other projects that you own. You may wish to clone with shallow submodules initially and then selectively deepen the few projects you want to work with.
Another scenario would be configuring a continuous integration or deployment job. Git needs the super repository as well as the latest commit from each of your submodules in order to actually perform the build. However, you probably don't need the full history for every submodule, so retrieving just the latest commit will save you both time and bandwidth.
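A sketch of such a clone might look like this (placeholder URL again):

```
$ git clone --recurse-submodules --shallow-submodules https://example.com/my-project.git
```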
#### Submodule alternates
The `--reference` option can be used with `git clone` to specify another local repository as an alternate object store to save recopying objects over the network that you already have locally. The syntax is:
```
$ git clone --reference <local repo> <url>
```
As of Git v2.11, you can use the `--reference` option in combination with `--recurse-submodules` to set up submodule alternates pointing to submodules from another local repository. The syntax is:
```
$ git clone --recurse-submodules --reference <local repo> <url>
```
This can potentially save a huge amount of bandwidth and local disk space, but it will fail if the referenced local repository does not have all the required submodules of the remote repository that you're cloning from.
Fortunately, the handy `--reference-if-able` option will fail gracefully and fall back to a normal clone for any submodules that are missing from the referenced local repository:
```
$ git clone --recurse-submodules --reference-if-able \
<local repo> <url>
```
#### Submodule diffs
Prior to Git v2.11, Git had two modes for displaying diffs of commits that updated your repository's submodules:
`git diff --submodule=short` displays the old commit and new commit from the submodule referenced by your project (this is also the default if you omit the `--submodule` option altogether):
![](https://cdn-images-1.medium.com/max/800/1*K71cJ30NokO5B69-a470NA.png)
`git diff --submodule=log` is slightly more verbose, displaying the summary line from the commit message of any new or removed commits in the updated submodule:
![](https://cdn-images-1.medium.com/max/800/1*frvsd_T44De8_q0uvNHB1g.png)
Git v2.11 introduces a third, much more useful option: `--submodule=diff`. This displays a full diff of all changes in the updated submodule:
![](https://cdn-images-1.medium.com/max/800/1*nPhJTjP8tcJ0cD8s3YOmjw.png)
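If you find yourself reaching for `--submodule=diff` regularly, you can make it the default via the standard `diff.submodule` setting:

```
$ git config --global diff.submodule diff
```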
### Nifty enhancements to `git stash`
Unlike submodules, [`git stash`][52] is almost universally beloved by Git users. `git stash` temporarily shelves (or  _stashes_ ) changes you've made to your working copy so you can work on something else, and then come back and re-apply them later on.
#### Autostash
If you're a fan of `git rebase`, you might be familiar with the `--autostash` option. It automatically stashes any local changes made to your working copy before rebasing and reapplies them after the rebase is completed.
```
$ git rebase master --autostash
Created autostash: 54f212a
HEAD is now at 8303dca It's a kludge, but put the tuple from the database in the cache.
First, rewinding head to replay your work on top of it...
Applied autostash.
```
This is handy, as it allows you to rebase from a dirty worktree. There's also a handy config flag named `rebase.autostash` to make this behavior the default, which you can enable globally with:
```
$ git config --global rebase.autostash true
```
`rebase.autostash` has actually been available since [Git v1.8.4][69], but v2.7 introduces the ability to cancel this flag with the `--no-autostash` option. If you use this option with unstaged changes, the rebase will abort with a dirty worktree warning:
```
$ git rebase master --no-autostash
Cannot rebase: You have unstaged changes.
Please commit or stash them.
```
#### Stashes as Patches
Speaking of config flags, Git v2.7 also introduces `stash.showPatch`. The default behavior of `git stash show` is to display a summary of your stashed files.
```
$ git stash show
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
Passing the `-p` flag puts `git stash show` into "patch mode," which displays the full diff:
![](https://cdn-images-1.medium.com/max/800/1*HpcT3quuKKQj9CneqPuufw.png)
`stash.showPatch` makes this behavior the default. You can enable it globally with:
```
$ git config --global stash.showPatch true
```
If you enable `stash.showPatch` but then decide you want to view just the file summary, you can get the old behavior back by passing the `--stat` option instead.
```
$ git stash show --stat
package.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```
As an aside: `--no-patch` is a valid option but it doesn't negate `stash.showPatch` as you'd expect. Instead, it gets passed along to the underlying `git diff` command used to generate the patch, and you'll end up with no output at all!
#### Simple Stash IDs
If you're a `git stash` fan, you probably know that you can shelve multiple sets of changes, and then view them with `git stash list`:
```
$ git stash list
stash@{0}: On master: crazy idea that might work one day
stash@{1}: On master: desperate samurai refactor; don't apply
stash@{2}: On master: perf improvement that I forgot I stashed
stash@{3}: On master: pop this when we use Docker in production
```
However, you may not know why Git's stashes have such awkward identifiers (`stash@{1}`, `stash@{2}`, etc.) and may have written them off as "just one of those Git idiosyncrasies." It turns out that like many Git features, these weird IDs are actually a symptom of a very clever use (or abuse) of the Git data model.
Under the hood, the `git stash` command actually creates a set of special commit objects that encode your stashed changes and maintains a [reflog][70] that holds references to these special commits. This is why the output from `git stash list` looks a lot like the output from the `git reflog` command. When you run `git stash apply stash@{1}`, you're actually saying, “Apply the commit at position 1 from the stash reflog.”
As of Git v2.11, you no longer have to use the full `stash@{n}` syntax. Instead, you can reference stashes with a simple integer indicating their position in the stash reflog:
```
$ git stash show 1
$ git stash apply 1
$ git stash pop 1
```
And so forth. If you'd like to learn more about how stashes are stored, I wrote a little bit about it in [this tutorial][71].
### </2016> <2017>
And we're done. Thanks for reading! I hope you enjoyed reading this behemoth as much as I enjoyed spelunking through Git's source code, release notes, and `man` pages to write it. If you think I missed anything big, please leave a comment or let me know [on Twitter][72] and I'll endeavor to write a follow-up piece.
As for what's next for Git, that's up to the maintainers and contributors (which [could be you!][73]). With ever-increasing adoption, I'm guessing that simplification, improved UX, and better defaults will be strong themes for Git in 2017. As Git repositories get bigger and older, I suspect we'll also see continued focus on performance and improved handling of large files, deep trees, and long histories.
If you're into Git and excited to meet some of the developers behind the project, consider coming along to [Git Merge][74] in Brussels in a few weeks' time. I'm [speaking there][75]! But more importantly, many of the developers who maintain Git will be in attendance for the conference and the annual Git Contributors Summit, which will likely drive much of the direction for the year ahead.
Or if you can't wait 'til then, head over to Atlassian's excellent selection of [Git tutorials][76] for more tips and tricks to improve your workflow.
_If you scrolled to the end looking for the footnotes from the first paragraph, please jump to the [[Citation needed]][77] section for the commands used to generate the stats. Gratuitous cover image generated using [instaco.de][78]._
--------------------------------------------------------------------------------
via: https://hackernoon.com/git-in-2016-fad96ae22a15#.t5c5cm48f
作者:[Tim Pettersen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[1]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[2]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[3]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[4]:https://medium.com/@g.kylafas
[5]:https://medium.com/@g.kylafas?source=responses---------1----------
[6]:https://medium.com/@kannonboy
[7]:https://medium.com/@kannonboy?source=responses---------1----------
[8]:https://medium.com/@TomSwirly
[9]:https://medium.com/@TomSwirly?source=responses---------0-31---------
[10]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------#--responses
[11]:https://hackernoon.com/@kannonboy
[12]:https://hackernoon.com/@kannonboy?source=placement_card_footer_grid---------0-44
[13]:https://medium.freecodecamp.com/@BillSourour
[14]:https://medium.freecodecamp.com/@BillSourour?source=placement_card_footer_grid---------1-43
[15]:https://blog.uncommon.is/@lut4rp
[16]:https://blog.uncommon.is/@lut4rp?source=placement_card_footer_grid---------2-43
[17]:https://medium.com/@kannonboy
[18]:https://medium.com/@kannonboy
[19]:https://medium.com/@g.kylafas/the-git-config-command-is-missing-a-yes-at-the-end-as-in-git-config-global-commit-verbose-yes-7e126365750e?source=responses---------1----------
[20]:https://medium.com/@kannonboy/thanks-giorgos-fixed-f3b83c61589a?source=responses---------1----------
[21]:https://medium.com/@TomSwirly/i-read-the-whole-thing-from-start-to-finish-415a55d89229?source=responses---------0-31---------
[22]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[23]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[24]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[25]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[26]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[27]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[28]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[29]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[30]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[31]:https://hackernoon.com/tagged/git?source=post
[32]:https://hackernoon.com/tagged/web-development?source=post
[33]:https://hackernoon.com/tagged/software-development?source=post
[34]:https://hackernoon.com/tagged/programming?source=post
[35]:https://hackernoon.com/tagged/atlassian?source=post
[36]:https://hackernoon.com/@kannonboy
[37]:https://hackernoon.com/?source=footer_card
[38]:https://hackernoon.com/setting-breakpoints-on-a-snowy-evening-df34fc3168e2?source=placement_card_footer_grid---------0-44
[39]:https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e?source=placement_card_footer_grid---------1-43
[40]:https://blog.uncommon.is/using-git-to-generate-versionname-and-versioncode-for-android-apps-aaa9fc2c96af?source=placement_card_footer_grid---------2-43
[41]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[42]:https://hackernoon.com/git-in-2016-fad96ae22a15#fd10
[43]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[44]:https://hackernoon.com/git-in-2016-fad96ae22a15#cc52
[45]:https://hackernoon.com/git-in-2016-fad96ae22a15#42b9
[46]:https://hackernoon.com/git-in-2016-fad96ae22a15#4208
[47]:https://hackernoon.com/git-in-2016-fad96ae22a15#a5c3
[48]:https://hackernoon.com/git-in-2016-fad96ae22a15#c230
[49]:https://git-scm.com/docs/git-worktree
[50]:https://git-scm.com/book/en/v2/Git-Tools-Debugging-with-Git#Binary-Search
[51]:https://www.atlassian.com/git/tutorials/git-lfs/#speeding-up-clones
[52]:https://www.atlassian.com/git/tutorials/git-stash/
[53]:https://hackernoon.com/@kannonboy?source=footer_card
[54]:https://hackernoon.com/?source=footer_card
[55]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[56]:https://hackernoon.com/@kannonboy?source=post_header_lockup
[57]:https://hackernoon.com/git-in-2016-fad96ae22a15#c8e9
[58]:https://hackernoon.com/git-in-2016-fad96ae22a15#408a
[59]:https://hackernoon.com/git-in-2016-fad96ae22a15#315b
[60]:https://hackernoon.com/git-in-2016-fad96ae22a15#dbfb
[61]:https://hackernoon.com/git-in-2016-fad96ae22a15#2220
[62]:https://hackernoon.com/git-in-2016-fad96ae22a15#bc78
[63]:https://www.atlassian.com/git/tutorials/install-git/
[64]:https://www.atlassian.com/git/tutorials/what-is-git/
[65]:https://www.atlassian.com/git/tutorials/git-lfs/
[66]:https://twitter.com/kit3bus
[67]:https://confluence.atlassian.com/bitbucket/bitbucket-lfs-media-adapter-856699998.html
[68]:https://developer.atlassian.com/blog/2015/10/monorepos-in-git/
[69]:https://blogs.atlassian.com/2013/08/what-you-need-to-know-about-the-new-git-1-8-4/
[70]:https://www.atlassian.com/git/tutorials/refs-and-the-reflog/
[71]:https://www.atlassian.com/git/tutorials/git-stash/#how-git-stash-works
[72]:https://twitter.com/kannonboy
[73]:https://git.kernel.org/cgit/git/git.git/tree/Documentation/SubmittingPatches
[74]:http://git-merge.com/
[75]:http://git-merge.com/#git-aliases
[76]:https://www.atlassian.com/git/tutorials
[77]:https://hackernoon.com/git-in-2016-fad96ae22a15#87c4
[78]:http://instaco.de/
[79]:https://medium.com/@Medium/personalize-your-medium-experience-with-users-publications-tags-26a41ab1ee0c#.hx4zuv3mg
[80]:https://hackernoon.com/


@ -0,0 +1,216 @@
# Fedora 24 Gnome & HP Pavilion + Nvidia setup review
Recently, you may have come across my [Chapeau][1] review. This experiment prompted me to widen my Fedora family testing, and so I decided to try setting up [Fedora 24 Gnome][2] on my [HP][3] machine, a six-year-old laptop with 4 GB of RAM and an aging Nvidia card. Yes, Fedora 25 has since been released and I had it [tested][4] with delight. But we can still enjoy this little article now, can we?
This review should complement - and contrast - my usual crop of testing on the notorious but capable [Lenovo G50][5] machine, purchased in 2015, so we have old versus new, but also the inevitable lack of proper Linux support for the [Realtek][6] network card on the newer box. We will then also check how well Fedora handles the Nvidia stack, test if Nouveau is a valid alternative, and of course, pimp the system to the max, using some of the beauty tricks we have witnessed in the Chapeau review. Should be more than interesting.
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-teaser.jpg)
### Installation
Nothing special to report here. The system has a much simpler setup than the Lenovo laptop. The new machine comes with UEFI, Secure Boot, 1TB disk with a GPT setup partitioned sixteen different ways, with Windows 10 and some 6-7 Linux distros on it. In comparison, the BIOS-fueled Pavilion only dual boots. Prior to this review, it was running Linux Mint 17.3 [Rosa Xfce][7], but it used to have all sorts of Ubuntu children on it, and I had used it quite extensively for arguably funny [video processing][8] and all sorts of games. The home partition dates back to the early setup, and has remained such since, including a lot of legacy config and many desktop environments.
![Live desktop](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-desktop-live.jpg)
I was able to boot from a USB drive, although I did use the Fedora tool to create the live media. I've never had any problems booting on this host, to the best of my memory, a far cry (not the [game][9], just an expression, hi hi) from the Lenovo experience. There, before a BIOS update, Fedora would [not even run][10], and a large number of distros used to [struggle][11] until very recently. All part of my great disappointment adventure with Linux.
Anyhow, this procedure went without any fuss. Fedora 24 took control of the bootloader, managing itself and the resident Windows 7 installation. If you're interested in more details on how to dual-boot, you might want to check these:
[Ubuntu & Windows 7][12] dual-boot guide
[Xubuntu & Windows 7][13] dual-boot guide - same same but different
[CentOS 7 & Windows 7][14] dual-boot guide - fairly similar to our Fedora attempt
[Ubuntu & Windows 8][15] dual-boot guide - this one covers a UEFI setup, too
### It's pimping time!
My Fedora [pimping guide][16] has it all. I set up RPM Fusion Free and Non-Free, then installed about 700 MB worth of media codecs, plugins and extra software, including Steam, Skype, GIMP, VLC, Gnome Tweak Tool, Chrome, several other helper utilities, and more.
On the aesthetics side, I grabbed both Faenza and Moka icons, and configured half a dozen Gnome [extensions][17], including the mandatory [Dash to Dock][18], which really helps transform this desktop environment into a usable product.
![About, with Nouveau](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-about-nouveau.jpg)
![Final looks](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-final.jpg)
What is that green icon on the right side? 'Tis a spoiler of things to be, that is.
I also had no problems with my smartphones, [Ubuntu Phone][19] or the [iPhone][20]. Both setups worked fine, and this also puts the annoyance with the Apple device on Chapeau 24 in a bad spotlight. Rhythmbox would not play from any external media, though. Fail.
![Ubuntu Phone](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-ubuntu-phone.jpg)
![Media works fine](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-media-works-nice.jpg)
This is a teaser, implying wossname Nvidia thingie; well here we go.
### Nvidia setup
This is a tricky one. First, take a look at my generic [tutorial][21] on this topic. Then, take a look at my recent [Fedora 23][22] [experience][23] on this topic. Unlike Ubuntu, Red Hat distros do not quite like the whole pre-compiled setup. However, just to see whether things have changed in any way, I did use a helper tool called easyLife to set up the drivers. I've talked about this utility and Fedy in an OCS-Mag [article][24], and how you can use them to make your Fedora experience more colorful. Bottom line: good for lots of things, not for drivers, though.
![easyLife & Nvidia](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-easylife-nvidia.png)
Yes, this resulted in a broken system. I had to manually install the drivers - luckily, I had installed the kernel sources and headers, as well as other necessary build tools, gcc and make, beforehand, to prepare for this kind of scenario. Be warned, kids. In the end, the official way is the best.
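For reference, the prerequisites I mean are roughly these (package names as of Fedora 24; adjust to taste):

```
$ sudo dnf install kernel-devel kernel-headers gcc make
```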
### Nouveau vs Nvidia, which is faster?
I did something you would not really expect. I benchmarked the actual performance of the graphics stack with the Nouveau driver first and then the closed-source blob, using the Unigine Heaven tool. This gives clear results on how the two compare.
![Heaven benchmark](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-benchmark.jpg)
Remember, this is an ancient laptop, and it does not stack well against modern tools, so you will not be surprised to learn that Heaven reported a staggering 1 FPS for Nouveau, and it took me like 5 minutes before the system actually responded, and I was able to quit the benchmark.
![Nouveau benchmark](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nouveau.jpg)
Nvidia gave much better results. To begin with, I was able to use the system while testing, and Heaven responded to mouse clicks and key strokes, all the while reporting a very humble 5-6 FPS, which means it was roughly 500% more efficient than the Nouveau driver. That tells you all you need to know, ladies and gentlemen.
![Nvidia installed](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nvidia-installed.jpg)
![About, Nvidia installed](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-about-nvidia.jpg)
![Heaven, Nvidia installed, main menu](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-menu.jpg)
![Nvidia benchmark 1](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-1.jpg)
![Nvidia benchmark 2](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-heaven-nvidia-2.jpg)
![Steam works](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-steam-works.jpg)
Also, Steam would not run at all with Nouveau, so there's that to consider, too. Funny how system requirements creep up over time. I used to play, I mean test [Call of Duty][25], a highly mediocre and arcade-like shooter on this box on the highest settings, but that feat feels like a completely different era.
![Nouveau & Steam fail](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-steam-nouveau-fail.png)
### Hardware compatibility
Things were quite all right overall. All of the Fn buttons worked fine, and so did the web camera. Power management also did its thing well, dimming the screen and whatnot, but we cannot really judge the battery life, as the cells are six years old now and quite broken. They only lend about 40 minutes of juice in the best case.
![Webcam](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-webcam.jpg)
![Battery, broken](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-battery-broken.jpg)
Bluetooth did not work at first, but this is because crucial packages are missing.
![Bluetooth does not work out of the box](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-bt-no-work.png)
You can resolve the issue using dnf:

```
dnf install blueman bluez
```
![Bluetooth works now](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-bt-works.png)
### Suspend & resume
No issues, even with the Nvidia drivers. The whole sequence was quick and smooth, about 2-3 seconds each direction, into the land of sweet dreams and out of it. I do recall some problems with this in the past, but not any more. Happy sailing.
### Resource utilization
We can again compare Nouveau with Nvidia. But first, I had to sort out the swap partition setup manually, as Fedora refused to activate it. This is a big fail, and it happens consistently. Anyhow, the resource utilization with either driver was almost identical. Both tolled a hefty 1.2 GB of RAM, and the CPU ticked at about 2-3%, which is not really surprising, given the age of this machine. I did not see any big noise or heat difference the way we would witness it in the past, which is a testament to the improvements in the open-source driver, even though it fails on some of the advanced graphics logic required from it. But for normal, non-gaming use, it behaves fairly well.
![Resources, Nouveau](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-resources-nouveau.jpg)
### Problems
Well, I observed some interesting issues during my testing. SELinux complained about legitimate processes a few times, and this really annoys me. Now, to troubleshoot this, all you need to do is expand the alert, check the details, and then vomit. Why would anyone let ordinary users ever see this? Why?
![SELinux alerts](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-selinux.png)
![SELinux alerts, more](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-selinux-more.png)
```
SELinux is preventing totem-video-thu from write access on the directory gstreamer-1.0.

***** Plugin catchall_labels (83.8 confidence) suggests *****

If you want to allow totem-video-thu to have write access on the gstreamer-1.0 directory
Then you need to change the label on gstreamer-1.0
Do
# semanage fcontext -a -t FILE_TYPE 'gstreamer-1.0'
where FILE_TYPE is one of the following: cache_home_t, gstreamer_home_t, texlive_home_t, thumb_home_t, thumb_tmp_t, thumb_tmpfs_t, tmp_t, tmpfs_t, user_fonts_cache_t, user_home_dir_t, user_tmp_t.
Then execute:
restorecon -v 'gstreamer-1.0'
```
I want to execute something else, because hey, let us let developers be in charge of how things should be done. They know [best][26], right! This kind of garbage is what makes zombie apocalypses happen, when you miscode the safety lock on a lab confinement.
### Other observations
Exploring the system with gconf-editor and dconf-editor, I found tons of leftover settings from my old Gnome 2, Xfce and Cinnamon setups, and one of the weird things was that Nemo would create, or rather, restore, several desktop icons every time I had it launched, and it did not cooperate with the global settings I configured through the Tweak Tool. In the end, I had to resort to some command line witchcraft:
```
gsettings set org.nemo.desktop home-icon-visible false
gsettings set org.nemo.desktop trash-icon-visible false
gsettings set org.nemo.desktop computer-icon-visible false
```
### Gallery
Finally, some sweet screenshots:
![Nice desktop 1](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-1.jpg)
![Nice desktop 2](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-2.jpg)
![Nice desktop 3](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-3.jpg)
![Nice desktop 4](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-4.jpg)
![Nice desktop 5](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-hp-nice-5.jpg)
### Conclusion
This was an interesting ordeal. It took me about four hours to finish the configuration and polish the system, the maniacal Fedora update that always runs in the deep hundreds and sometimes even thousands of packages, the graphics stack setup, and finally, all the gloss and trim needed to have a functional machine.
All in all, it works well. Fedora proved itself to be an adequate choice for the old HP machine, with decent performance and responsiveness, good hardware compatibility, fine aesthetics and functionality, once the extras are added, and only a small number of issues, some related to my laptop usage legacy. Not bad. Sure, the system could be faster, and Gnome isn't the best choice for olden hardware. But then, for something that was born in 2010, the HP laptop handles this desktop environment with grace, and it looks the part. Just proves that Red Hat makes a lot of sense once you release its essential oils and let the fragrance of extra software and codecs sweep you. It is your time to be enthused about this and commence your own testing.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/hp-pavilion-fedora-24.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/chapeau-24.html
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[3]:http://www.dedoimedo.com/computers/my-new-new-laptop.html
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
[5]:http://www.dedoimedo.com/computers/lenovo-g50-review.html
[6]:http://www.dedoimedo.com/computers/ubuntu-xerus-realtek-bug.html
[7]:http://www.dedoimedo.com/computers/linux-mint-rosa-xfce.html
[8]:http://www.dedoimedo.com/computers/frankenstein-media.html
[9]:http://www.dedoimedo.com/games/far-cry-4-review.html
[10]:http://www.dedoimedo.com/computers/lenovo-g50-fedora.html
[11]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
[12]:http://www.dedoimedo.com/computers/dual-boot-windows-7-ubuntu.html
[13]:http://www.dedoimedo.com/computers/dual-boot-windows-7-xubuntu.html
[14]:http://www.dedoimedo.com/computers/dual-boot-windows-7-centos-7.html
[15]:http://www.dedoimedo.com/computers/dual-boot-windows-8-ubuntu.html
[16]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
[17]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
[18]:http://www.dedoimedo.com/computers/gnome-3-dash.html
[19]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
[20]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
[21]:http://www.dedoimedo.com/computers/fedora-nvidia-guide.html
[22]:http://www.dedoimedo.com/computers/fedora-23-nvidia.html
[23]:http://www.dedoimedo.com/computers/fedora-23-nvidia-steam.html
[24]:http://www.ocsmag.com/2015/06/22/you-can-leave-your-fedora-on/
[25]:http://www.dedoimedo.com/games/cod-mw2.html
[26]:http://www.ocsmag.com/2016/10/19/systemd-progress-through-complexity/


@ -1,141 +0,0 @@
Compile-time assertions in Go
============================================================
This post is about a little-known way to make compile-time assertions in Go. You probably shouldn't use it, but it is interesting to know about.
As a warm-up, here's a fairly well-known form of compile-time assertions in Go: Interface satisfaction checks.
In this code ([playground][1]), the `var _ =` line ensures that type `W` is a `stringWriter`, as checked for by [`io.WriteString`][2].
```
package main

import "io"

type W struct{}

func (w W) Write(b []byte) (int, error) { return len(b), nil }
func (w W) WriteString(s string) (int, error) { return len(s), nil }

type stringWriter interface {
    WriteString(string) (int, error)
}

var _ stringWriter = W{}

func main() {
    var w W
    io.WriteString(w, "very long string")
}
```
If you comment out `W`'s `WriteString` method, the code will not compile:
```
main.go:14: cannot use W literal (type W) as type stringWriter in assignment:
W does not implement stringWriter (missing WriteString method)
```
This is useful. For most types that satisfy both `io.Writer` and `stringWriter`, if you eliminate the `WriteString` method, everything will continue to work as it did before, but with worse performance.
Rather than trying to write a fragile test for a performance regression using [`testing.T.AllocsPerRun`][3], you can simply protect your code with a compile-time assertion.
Here's [a real-world example of this technique from package io][4].
* * *
OK, onward to obscurity!
Interface satisfaction checks are great. But what if you wanted to check a plain old boolean expression, like `1+1==2`?
Consider this code ([playground][5]):
```
package main

import "crypto/md5"

type Hash [16]byte

func init() {
    if len(Hash{}) < md5.Size {
        panic("Hash is too small")
    }
}

func main() {
    // ...
}
```
`Hash` is perhaps some kind of abstracted hash result. The `init` function ensures that it will work with [crypto/md5][6]. If you change `Hash` to be (say) `[8]byte`, it'll panic when the process starts. However, this is a run-time check. What if we wanted it to fail earlier?
Here's how. (There's no playground link, because this doesn't work on the playground.)
```
package main

import "C"

import "crypto/md5"

type Hash [16]byte

func hashIsTooSmall()

func init() {
    if len(Hash{}) < md5.Size {
        hashIsTooSmall()
    }
}

func main() {
    // ...
}
```
Now if you change `Hash` to be `[8]byte`, it will fail during compilation. (Actually, it fails during linking. Close enough for our purposes.)
```
$ go build .
# demo
main.hashIsTooSmall: call to external function
main.init.1: relocation target main.hashIsTooSmall not defined
main.init.1: undefined: "main.hashIsTooSmall"
```
What's going on here?
`hashIsTooSmall` is [declared without a function body][7]. The compiler assumes that someone else will provide an implementation, perhaps an assembly routine.
When the compiler can prove that `len(Hash{}) < md5.Size`, it eliminates the code inside the if statement. As a result, no one uses the function `hashIsTooSmall`, so the linker eliminates it. No harm done. As soon as the assertion fails, the code inside the if statement is preserved. `hashIsTooSmall` can't be eliminated. The linker then notices that no one else has provided an implementation for the function and fails with an error, which was the goal.
One last oddity: Why `import "C"`? The go tool knows that in normal Go code, all functions must have bodies, and instructs the compiler to enforce that. By switching to cgo, we remove that check. (If you run `go build -x` on the code above, without the `import "C"` line, you will see that the compiler is invoked with the `-complete` flag.) An alternative to adding `import "C"` is to [add an empty file called `foo.s` to the package][8].
I know of only one use of this technique, in the [compiler test suite][9]. There are other [imaginable places to apply it][10], but no one has bothered.
And that's probably how it should be. :)
--------------------------------------------------------------------------------
via: http://commaok.xyz/post/compile-time-assertions
作者:[Josh Bleecher Snyder][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/commaok
[1]:https://play.golang.org/p/MJ6zF1oNsX
[2]:https://golang.org/pkg/io/#WriteString
[3]:https://golang.org/pkg/testing/#AllocsPerRun
[4]:https://github.com/golang/go/blob/go1.8rc2/src/io/multi.go#L72
[5]:https://play.golang.org/p/mjIMWsWu4V
[6]:https://golang.org/pkg/crypto/md5/
[7]:https://golang.org/ref/spec#Function_declarations
[8]:https://github.com/golang/go/blob/go1.8rc2/src/os/signal/sig.s
[9]:https://github.com/golang/go/blob/go1.8rc2/test/fixedbugs/issue9608.dir/issue9608.go
[10]:https://github.com/golang/go/blob/go1.8rc2/src/runtime/hashmap.go#L261


@ -0,0 +1,81 @@
How to Keep Hackers out of Your Linux Machine Part 3: Your Questions Answered
============================================================
![Computer security](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/keep-hackers-out.jpg?itok=lqgHDxDu "computer security")
Mike Guthrie answers some of the security-related questions received during his recent Linux Foundation webinar. Watch the free webinar on-demand. [Creative Commons Zero][1]
Articles [one][6] and [two][7] in this series covered the five easiest ways to keep hackers out of your Linux machine, and know if they have made it in. This time, Ill answer some of the excellent security questions I received during my recent Linux Foundation webinar. [Watch the free webinar on-demand.][8]
**How can I store a passphrase for a private key if private key authentication is used by automated systems?**
This is tough. This is something that we struggle with on our end, especially when we are doing Red Teams because we have stuff that calls back automatically. I use Expect but I tend to be old-school on that. You are going to have to script it and, yes, storing that passphrase on the system is going to be tough; you are going to have to encrypt it when you store it.
My Expect script encrypts the passphrase stored and then decrypts, sends the passphrase, and re-encrypts it when it's done. I do realize there are some flaws in that, but it's better than having a no-passphrase key.
If you do have a no-passphrase key and you do need to use it, then I would suggest limiting the user that requires it to almost nothing. For instance, if you are doing some automated log transfers or automated software installs, limit the access to only what it requires to perform those functions.
You can run commands by SSH, so don't give them a shell; make it so they can just run that one command. That will actually prevent somebody from stealing the key and doing something other than just that one command.
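As a sketch, a forced command in `~/.ssh/authorized_keys` looks something like this (the script path and key are placeholders):

```
command="/usr/local/bin/fetch-logs.sh",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA...truncated... logsync@automation
```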
**What do you think of password managers such as KeePass2?**
Password managers, for me, are a very juicy target. With the advent of GPU cracking and some of the cracking capabilities in EC2, they become pretty easy to get past.  I steal password vaults all the time.
Now, our success rate at cracking those, that's a different story. We are still in about the 10 percent range of crack versus no crack. If a person doesn't do a good job at keeping a secure passphrase on their password vault, then we tend to get into it and we have a large amount of success. It's better than nothing but still you need to protect those assets. Protect the password vault as you would protect any other passwords.
**Do you think it is worthwhile from a security perspective to create a new Diffie-Hellman moduli and limit them to 2048 bit or higher in addition to creating host keys with higher key lengths?**
Yeah. There have been weaknesses in SSH products in the past where you could actually decrypt the packet stream. With that, you can pull all kinds of data across. People use this to transfer files and passwords, trusting it thoughtlessly as an encryption mechanism. Doing what you can to use strong encryption and changing your keys and whatnot is important. I rotate my SSH keys -- not as often as I do my passwords -- but I rotate them about once a year. And, yeah, it's a pain, but it gives me peace of mind. I would recommend doing everything you can to make your encryption technology as strong as you possibly can.
**Is using four completely random English words (around 100k words) for a passphrase okay?**
Sure. My passphrase is actually a full phrase. It's a sentence. With punctuation and capitalization. I don't use anything longer than that.
I am a big proponent of having passwords that you can remember, that you don't have to write down or store in a password vault. A password that you can remember that you don't have to write down is more secure than one that you have to write down because it's funky.
Using a phrase or using four random words that you will remember is much more secure than having a string of numbers and characters and having to hit shift a bunch of times. My current passphrase is roughly 200 characters long. It's something that I can type quickly and that I remember.
**Any advice for protecting Linux-based embedded systems in an IoT scenario?**
IoT is a new space; this is the frontier of systems and security. It is starting to be different every single day. Right now, I try to keep as much offline as I possibly can. I don't like people messing with my lights and my refrigerator. I purposely did not buy a connected refrigerator because I have friends that are hackers, and I know that I would wake up to inappropriate pictures every morning. Keep them locked down. Keep them locked up. Keep them isolated.
The current malware for IoT devices is dependent on default passwords and backdoors, so just do some research into what devices you have and make sure that there's nothing there that somebody could particularly access by default. Then make sure that the management interfaces for those devices are well protected by a firewall or another such device.
**Can you name a firewall/UTM (OS or application) to use in SMB and large environments?**
I use pfSense; its a BSD derivative. I like it a lot. There's a lot of modules, and there's actually commercial support for it now, which is pretty fantastic for small business. For larger devices, larger environments, it depends on what admins you can get a hold of.
I have been a CheckPoint admin for most of my life, but Palo Alto is getting really popular, too. Those types of installations are going to be much different from a small business or home use. I use pfSense for any small networks.
**Is there an inherent problem with cloud services?**
There is no cloud; there are only other people's computers. There are inherent issues with cloud services. Just know who has access to your data and know what you are putting out there. Realize that when you give something to Amazon or Google or Microsoft, then you no longer have full control over it and the privacy of that data is in question.
**What preparation would you suggest to get an OSCP?**
I am actually going through that certification right now. My whole team is. Read their materials. Keep in mind that OSCP is going to be the offensive security baseline. You are going to use Kali for everything. If you don't -- if you decide not to use Kali -- make sure that you have all the tools installed to emulate a Kali instance.
It's going to be a heavily tools-based certification. It's a good look into methodologies. Take a look at something called the Penetration Testing Framework, because that will give you a good flow for how to do your test. Their lab seems to be great, too; it's very similar to the lab that I have here at the house.
_[Watch the full webinar on demand][3], for free. And see [parts one][4] and [two][5] of this series for five easy tips to keep your Linux machine secure._
_Mike Guthrie works for the Department of Energy doing Red Team engagements and penetration testing._
--------------------------------------------------------------------------------
via: https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-3-your-questions-answered
作者:[MIKE GUTHRIE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/anch
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/keep-hackers-outjpg
[3]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco
[4]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[5]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[6]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-1-top-two-security-tips
[7]:https://www.linux.com/news/webinar/2017/how-keep-hackers-out-your-linux-machine-part-2-three-more-easy-security-tips
[8]:http://portal.on24.com/view/channel/index.html?showId=1101876&showCode=linux&partnerref=linco


@ -1,168 +0,0 @@
GHLandy Translating
How to Configure Custom SSH Connections to Simplify Remote Access
============================================================
SSH (the SSH client) is a program for remotely accessing a machine; it enables a user to [execute commands on a remote host][2]. It is one of the most recommended methods for logging in to a remote host, since it is designed to provide secure, encrypted communication between two untrusted hosts over an insecure network.
SSH uses both a system-wide as well as a user-specific (custom) configuration file. In this tutorial, we will explain how to create a custom ssh configuration file and use certain options to connect to remote hosts.
#### Requirements:
1. You must have installed [OpenSSH client on your Linux desktop][1].
2. Understand the common options used for remote connections via ssh.
#### SSH Client Config Files
Below are the locations of the ssh client configuration files:
1. `/etc/ssh/ssh_config` - the default, system-wide configuration file. It contains settings that apply to all users of the ssh client on the machine.
2. `~/.ssh/config` or `$HOME/.ssh/config` - the user-specific/custom configuration file. It holds configurations that apply to a specific user, and it therefore overrides default settings in the system-wide config file. This is the file we will create and use.
By default, users are authenticated in ssh using passwords, however, you can setup [ssh passwordless login using ssh keygen][3] in 5 simple steps.
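As a minimal sketch of that setup (key type and host are placeholders), you generate a key pair and copy the public key to the server:

```
$ ssh-keygen -t ed25519
$ ssh-copy-id user@remote-host
```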
Note: In case the directory `~/.ssh` does not exist on your desktop system, create it with the following permissions.
```
$ mkdir -p ~/.ssh
$ chmod 0700 ~/.ssh
```
The chmod command above implies that only the user can have read, write and execute permissions on the directory as required by ssh settings.
### How To Create User Specific SSH Configuration File
This file is usually not created by default, so you need to create it with the read/write permissions for only the user.
```
$ touch ~/.ssh/config
$ chmod 0600 ~/.ssh/config
```
The above file contains sections defined by host specifications, and a section only applies to hosts that match one of the patterns set in the specification.
The conventional format of `~/.ssh/config` is as follows, and all empty lines as well as lines starting with `#` are considered as comments:
```
Host host1
ssh_option1=value1
ssh_option2=value1 value2
ssh_option3=value1
Host host2
ssh_option1=value1
ssh_option2=value1 value2
Host *
ssh_option1=value1
ssh_option2=value1 value2
```
From the format above:
1. Host host1 - is a header definition for host1; this is where a host specification starts, and it ends at the next header definition, Host host2, thus forming a section.
2. host1 and host2 are simply host aliases to use on the command line; they are not the actual hostnames of the remote hosts.
3. The configuration options, such as ssh_option1=value1 and ssh_option2=value1 value2, apply to a matched host and should be indented for well-organized formatting.
4. For an option such as ssh_option2=value1 value2, the value value1 is considered first, then value2.
5. The header definition Host * (where `*` is a pattern wildcard that matches zero or more characters) matches all hosts.
Still considering the format above, this is how ssh reads the config file. If you execute an ssh command to remotely access host1, like so:
```
$ ssh host1
```
The above ssh command does the following:
1. matches the host alias host1 in the config file and applies the options set under the definition header, Host host1.
2. then moves to the next host section, Host host2, finds that the name provided on the command line doesn't match, and takes no options from there.
3. proceeds to the last section, Host *, which matches all hosts. Here, it applies all the options in this section to the connection, but it cannot override values of options that were already set in the previous section(s) - a minimal sketch of this first-match-wins behavior follows below.
4. The same applies to host2.
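To make that precedence concrete, here is a minimal hypothetical config (the alias and port numbers are made up). Because ssh uses the first value it obtains for each option, the specific section must come before the catch-all one:

```
Host myserver
    Port 2222
Host *
    Port 22
```

With this file, `ssh myserver` connects on port 2222; the `Port 22` under `Host *` cannot override the value already set by the `Host myserver` section.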
### How To Use User Specific SSH Configuration File
Once you have understood how the ssh client config file works, you can create it as follows. Remember to use options and values (host aliases, port numbers, usernames and so on) applicable to your server environment.
Open the config file with your favorite editor:
```
$ vi ~/.ssh/config
```
And define the necessary sections:
```
Host fedora25
HostName 192.168.56.15
Port 22
ForwardX11 no
Host centos7
HostName 192.168.56.10
Port 22
ForwardX11 no
Host ubuntu
HostName 192.168.56.5
Port 2222
ForwardX11 yes
Host *
User tecmint
IdentityFile ~/.ssh/id_rsa
Protocol 2
Compression yes
ServerAliveInterval 60
ServerAliveCountMax 20
LogLevel INFO
```
Below is a detailed explanation of the above ssh configuration options.
1. HostName - defines the real host name to log into; alternatively, you can use a numeric IP address (permitted both on the command line and in HostName specifications).
2. User - specifies the user to log in as.
3. Port - sets the port number to connect to on the remote host; the default is 22. Use the port number configured in the remote host's sshd config file.
4. Protocol - defines the protocol versions ssh should support, in order of preference. The usual values are 1 and 2; multiple versions must be comma-separated.
5. IdentityFile - specifies a file from which the user's DSA, Ed25519, RSA or ECDSA authentication identity is read.
6. ForwardX11 - defines whether X11 connections will be automatically redirected over the secure channel and DISPLAY set. It has two possible values, "yes" or "no".
7. Compression - enables compression during the remote connection when set to "yes". The default is "no".
8. ServerAliveInterval - sets a timeout interval in seconds after which, if no response (or data) has been received from the server, ssh will send a message through the encrypted channel to request a response. The default value is 0, meaning no messages will be sent to the server, or 300 if the BatchMode option has been defined.
9. ServerAliveCountMax - sets the number of server-alive messages that may be sent without ssh receiving any response from the server.
10. LogLevel - defines the verbosity level used when logging messages from ssh. The allowed values include: QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG, DEBUG1, DEBUG2, and DEBUG3. The default is INFO.
To connect to the remote Linux host (CentOS 7 in my case) defined in the second section of the config file above, we would normally type the full command below:
```
$ ssh -i ~/.ssh/id_rsa -p 22 tecmint@192.168.56.10
```
However, with the use of the ssh client configuration file, we can simply type the following command:
```
$ ssh centos7
```
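Also note that options passed on the command line take precedence over values from the config file, so you can override a stored setting for a single connection, for example:

```
$ ssh -o ForwardX11=yes centos7
```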
You can find more options and usage examples in the ssh client config man page:
```
$ man ssh_config
```
That's it for now. In this guide, we explained how to use a user-specific (custom) ssh client config file in Linux. Use the feedback form below to write back to us concerning this article.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/configure-custom-ssh-connection-in-linux/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-openssh-server-in-linux/
[2]:http://www.tecmint.com/execute-commands-on-multiple-linux-servers-using-pssh/
[3]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/


@ -0,0 +1,130 @@
# OpenSUSE Leap 42.2 Gnome - Better but not really
Updated: February 6, 2017
It is time to give Leap a second chance. Let me be extra corny. Give Leap a chance. Yes. Well, several weeks ago, I reviewed the Plasma edition of the latest [openSUSE][1] release, and while it was busy firing all cannons, like a typical Stormtrooper, most of the beams did not hit the target. It was a fairly mediocre distro, delivering everything but then stopping just short of the goodness mark.
I will now conduct a Gnome experiment. Load the distro with a fresh new desktop environment, and see how it behaves. We did something rather similar with CentOS recently, with some rather surprising results. Hint. Maybe we will get lucky. Let's do it.
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-teaser.jpg)
### Gnome it up
You can install new desktop environments by checking the Patterns tab in YaST > Software Management. Specifically, you can install Gnome, Xfce, LXQt, MATE, and others. A very simple procedure worth some 900 MB of disk data. No errors, no woes.
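If you prefer the command line over YaST, the same patterns can be installed with zypper. A sketch, assuming the pattern name for this release is simply `gnome`:

```
sudo zypper install -t pattern gnome
```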
![Patterns, Gnome](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-patterns.png)
### Pretty Gnome stuff
I spent a short period of time taming openSUSE. Having had a lot of experience with [Fedora 24][2] doing this exact same stuff, i.e. [pimping][3], the procedure was rather fast and simple. Get some Gnome [extensions][4] first. Keep on low fire for 20 minutes. Stir and serve in clay bowls.
For dessert, launch Gnome Tweak Tool and add the window buttons. Most importantly, install the abso-serious-lutely needed, life-saving [Dash to Dock][5] extension, because then you can finally work like a human being without that maddening lack of efficiency called Activities. Digest, toss in some fresh [icons][6], and Bob's our uncle. All in all, it took me exactly 42 minutes and 12 seconds to get this completed. Get it? 42.2 minutes. OMGZ!
![Gnome 1](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-1.jpg)
![Gnome 2](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-2.jpg)
### Other customization and tweaks
I actually used Breeze window decorations in Gnome, and this seems to work very well. So much better than trying to customize Plasma. Behold and weep, for the looks were dire and pure!
![Gnome 3](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-3.jpg)
![Gnome 4](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-4.jpg)
### Smartphone support
So much better than Plasma - both [iPhone][7] and [Ubuntu Phone][8] were correctly identified and mounted. This reminds me of all the discrepancies and inconsistencies in the behavior of the [KDE][9] and [Gnome][10] editions of CentOS 7.2. So this definitely crosses the boundaries of specific platforms. It has everything to do with the desktop environment.
![Ubuntu Phone](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-ubuntu-phone.jpg)
The one outstanding bug is, you need to purge the icon cache sometimes, or you will end up with old icons in file managers. There will be a whole article on this coming soon.
### Multimedia
No luck. Same problems as in the Plasma edition. Missing dependencies. You can't have H.264 codecs, meaning you cannot really watch 99% of all the things that you need. That's like saying, no Internet for a month.
![Failed codecs setup](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-failed-codecs.png)
### Resource utilization
The Gnome edition is faster than the Plasma one, even with the Compositor turned off, and ignoring the KWin crashes and freezes. The CPU ticks at about 2-3%, and memory hovers around the 900MB mark. Middle of the road results, I say.
![Resources](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-resources.jpg)
### Battery usage
Worse than Plasma actually. Not sure why. But even with the brightness adjusted to about 50%, Leap Gnome gave my G50 only about 2.5 hours of electronic love. I did not explore as to where it all gets wasted, but it sure does.
![Battery usage](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-battery.jpg)
### Weird issues
There were also some glitches and errors. For instance, the desktop keeps on asking me for the Wireless password, maybe because Gnome does not handle KWallet very well or something. Also, KWin was left running after I logged out of a Plasma session, eating a good solid 100% CPU until I killed it. Such a disgrace.
![KWin leftover](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-kwin-leftover.jpg)
### Hardware support
Suspend & resume, alles gut. I have not experienced network drops in the Gnome version yet. The webcam works, too. In general, hardware support seems quite decent. Bluetooth also works. Yay! Maybe we should label this under networking? To wit.
![Webcam](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-webcam.jpg)
![Bluetooth works](http://www.dedoimedo.com/images/computers-years/2016-2/opensuse-gnome-bt-works.png)
### Networking
Samba printing? You get that same lame applet as in [Yakkety Yak][11], which gets all messed up visually. But then it says no print shares, check firewall. Ah, whatever. It's no longer 1999. Being able to print is not a privilege, it's a basic human right. People have staged revolutions over far less. And I cannot take a screenshot of this. That bad.
### The rest of it?
All in all, it was a standard Gnome desktop, with its slightly mentally challenged approach to computing and ergonomics, tamed through the rigorous use of extensions. It is a little friendlier than the Plasma version, and you get better overall results with most of the normal, everyday stuff. Then you get stumped by a silly lack of options that Plasma has in overwhelming abundance. But then you remember your desktop isn't freezing every minute or so, and that's a definite bonus.
### Conclusion
OpenSUSE Leap 42.2 Gnome is a better product than its Plasma counterpart, and no mistake. It is more stable, it is faster, more elegant, more easily customizable, and most of the critical everyday functions actually work. For example, you can print to Samba, if you are inclined to fight the firewall, copy files to Samba without losing timestamps, use Bluetooth, use your Ubuntu Phone, and all this without the crippling effects of constant crashes. The entire stack is just more fully featured and better supported.
However, Leap is still only a reasonable release and nothing more. It struggles in many core areas that other distros handle with more panache and elegance, and there are some big, glaring problems in the overall product that are a direct result of bad QA. At the very least, this lack of quality has been an almost consistent element with openSUSE these past few years. Now and then, you get a decent hatchling, but most of them are just average. That's probably the word that best defines openSUSE Leap. Average. You should try and see for yourself. You will most likely not be amazed. Such a shame, because for me, SUSE has a sweet spot, and yet, it stubbornly refuses to rekindle the love. 6/10. Have a go, play with your emotions.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/opensuse-42-2-gnome.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/opensuse-42-2.html
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[3]:http://www.dedoimedo.com/computers/fedora-24-pimp.html
[4]:http://www.dedoimedo.com/computers/fedora-23-extensions.html
[5]:http://www.dedoimedo.com/computers/gnome-3-dash.html
[6]:http://www.dedoimedo.com/computers/fedora-24-pimp-more.html
[7]:http://www.dedoimedo.com/computers/iphone-6-after-six-months.html
[8]:http://www.dedoimedo.com/computers/ubuntu-phone-sep-2016.html
[9]:http://www.dedoimedo.com/computers/lenovo-g50-centos-kde.html
[10]:http://www.dedoimedo.com/computers/lenovo-g50-centos-gnome.html
[11]:http://www.dedoimedo.com/computers/ubuntu-yakkety-yak.html


@ -1,85 +0,0 @@
4 open source tools for conducting online surveys
============================================================
![4 open source tools for doing online surveys](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_question_B.png?itok=UVCz8ld_ "4 open source tools for doing online surveys")
Image by : opensource.com
Ah, the venerable survey. It can be a fast, simple, cheap, and effective way to gather the opinions of friends, family, classmates, co-workers, customers, readers, and others.
Millions turn to proprietary tools like SurveyGizmo, Polldaddy, SurveyMonkey, or even Google Forms to set up their surveys. But if you want more control, not just over the application but also the data you collect, then you'll want to go open source.
Let's take a look at four open source survey tools that can suit your needs, no matter how simple or complex those needs are.
### LimeSurvey
[LimeSurvey][2] is where you turn to when you want a survey tool that can do just about everything you want it to do. You can use LimeSurvey for doing simple surveys and polls, and more complex ones that span multiple pages. If you work in more than one language, LimeSurvey supports 80 of them.
LimeSurvey also lets you customize your surveys with your own JavaScript, photos, and videos, and even by editing your survey's HTML directly. And all that is only scratching the surface of [its features][3].
You can install LimeSurvey on your own server, or [get a hosted plan][4] that will set you back a few hundred euros a year (although there is a free option too).
### JD Esurvey
If LimeSurvey doesn't pack enough features for you and Java-powered web applications are your thing, then give [JD Esurvey][5] a look. It's described as "an open source enterprise survey web application." It's definitely powerful, and ticks a number of boxes for organizations looking for a high-volume, robust survey tool.
Using JD Esurvey, you can collect a range of information including answers to "Yes/No" questions and star ratings for products and services. You can even process answers to questions with multiple parts. JD Esurvey supports creating and managing surveys with tablets and smartphones, and your published surveys are mobile friendly too. According to the developer, the application is usable by [people with disabilities][6].
To give it a go, you can either [fork JD Esurvey on GitHub][7] or [download and install][8] a pre-compiled version of the application.
### Quick Survey
For many of us, tools like LimeSurvey and JD Esurvey are overkill. We just want a quick and dirty way to gather opinions or feedback. That's where [Quick Survey][9] comes in.
Quick Survey only lets you create question-and-answer or multiple choice list surveys. You add your questions or create your list, then publish it and share the URL. You can add as many items to your survey as you need to, and the responses appear on Quick Survey's admin page. You can download the results of your surveys as a CSV file, too.
While you can download the code for Quick Survey from GitHub, it's currently optimized for [Sandstorm.io][10] and [Sandstorm Oasis][11] where you can grab it from the [Sandstorm App Market][12].
### TellForm
In terms of features, [TellForm][13] lies somewhere between LimeSurvey and Quick Survey. It's one of those tools for people who need more than a minimal set of functions, but who don't need everything and the kitchen sink.
In addition to having 11 different types of surveys, TellForm has pretty good analytics attached to its surveys. You can easily customize the look and feel of your surveys, and the application's interface is simple and clean.
If you want to host TellForm yourself, you can grab the code from the [GitHub repository][14]. Or, you can sign up for a [free hosted account][15].
* * *
Do you have a favorite open source tool for doing online surveys? Feel free to share it with our community by leaving a comment.
--------------------------------------------------------------------------------
作者简介:
Scott Nesbitt - Writer. Editor. Soldier of fortune. Ocelot wrangler. Husband and father. Blogger. Collector of pottery. Scott is a few of these things. He's also a long-time user of free/open source software who extensively writes and blogs about it. You can find Scott on Twitter, GitHub
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/tools-online-surveys-polls
作者:[Scott Nesbitt ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/article/17/2/tools-online-surveys-polls?rate=IvQATPRT8VEAJbe667E6i5txmmDenX8cL7YtkAxasWQ
[2]:https://www.limesurvey.org/
[3]:https://www.limesurvey.org/about-limesurvey/features
[4]:https://www.limesurvey.org/services
[5]:https://www.jdsoft.com/jd-esurvey.html
[6]:https://www.ada.gov/508/
[7]:https://github.com/JD-Software/JDeSurvey
[8]:https://github.com/JD-Software/JDeSurvey/wiki/Download-and-Installation
[9]:https://github.com/simonv3/quick-survey/
[10]:http://sandstorm.io/
[11]:http://oasis.sandstorm.io/
[12]:https://apps.sandstorm.io/app/wupmzqk4872vgsye9t9x5dmrdw17mad97dk21jvcm2ph4jataze0
[13]:https://www.tellform.com/
[14]:https://github.com/whitef0x0/tellform
[15]:https://admin.tellform.com/#!/signup
[16]:https://opensource.com/user/14925/feed
[17]:https://opensource.com/article/17/2/tools-online-surveys-polls#comments
[18]:https://opensource.com/users/scottnesbitt


@ -1,219 +0,0 @@
ucasFL translating
free - A Standard Command to Check Memory Usage Statistics (Free & Used) in Linux
============================================================
As we all know, most servers (including the world's top supercomputers) run on Linux, because Linux is more flexible than other operating systems. Other operating systems require a reboot for small changes and patch updates, but Linux systems require a reboot only for critical patch updates.
One of the big challenges for a Linux administrator is to keep the system up and running without any downtime. Managing memory utilization on Linux is another challenging task, and `free` is one of the standard and widely used commands to analyze memory statistics (free & used memory) in Linux. Today we are going to cover the free command with its useful options.
Suggested Articles :
* [smem Linux Memory Reporting/Statistics Tool][1]
* [vmstat A Standard Nifty Tool to Report Virtual Memory Statistics][2]
#### What's the free Command?
free displays the total amount of `free` and `used` physical and `swap` memory in the system, as well as the `buffers` and `caches` used by the kernel. The information is gathered by parsing /proc/meminfo.
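To see the raw fields that free parses, you can read /proc/meminfo directly. A quick sketch (MemAvailable only exists on newer kernels):

```
# grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
```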
#### Display System Memory
Run the `free` command without any options to display system memory, including the total amount of `free`, `used`, `buffers`, `caches` & `swap`.
```
# free
total used free shared buffers cached
Mem: 32869744 25434276 7435468 0 412032 23361716
-/+ buffers/cache: 1660528 31209216
Swap: 4095992 0 4095992
```
The output has three rows.
* Row-1 (Mem) : Indicates total memory, used memory, free memory, shared memory (mostly used by tmpfs; Shmem in /proc/meminfo), memory used for buffers, and the size of cached contents.
* Total : Total installed memory (MemTotal in /proc/meminfo)
* Used : Used memory (here calculated as total - free; recent versions of free compute it as total - free - buffers - cache)
* Free : Unused memory (MemFree in /proc/meminfo)
* Shared : Memory used (mostly) by tmpfs (Shmem in /proc/meminfo)
* Buffers : Memory used by kernel buffers (Buffers in /proc/meminfo)
* Cached : Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo)
* Row-2 (-/+ buffers/cache) : Indicates used and free memory adjusted for buffers/cache.
* Row-3 (Swap) : Indicates total swap memory (SwapTotal in /proc/meminfo), free swap (SwapFree in /proc/meminfo) & used swap memory.
#### Display Memory in MB
By default, the `free` command displays memory in `KB - Kilobytes`, which is a bit confusing to many administrators (many of us convert the output to MB to grasp the sizes when the system has a lot of memory). To avoid that confusion, add the `-m` option to the free command to get the output directly in `MB - Megabytes`.
```
# free -m
total used free shared buffers cached
Mem: 32099 24838 7261 0 402 22814
-/+ buffers/cache: 1621 30477
Swap: 3999 0 3999
```
How do you check how much free RAM you really have? From the `used` & `free` columns of the above output, you may think you have very low free memory, when really only about `10%` is actually used. How?
Total actual available RAM = Total RAM - row-2 used
Total RAM = 32099
Row-2 used RAM = 1621
Total actual available RAM ≈ 30477 (32099 - 1621 = 30478; the row-2 `free` value shows 30477 due to rounding)
If you have a recent distribution, you have the option to see the actual free memory in a column called `available`; on older distributions, look at the `free` column in the row that says `-/+ buffers/cache`.
How do you check how much RAM is actually used? From the `used` & `free` columns of the above output, you may think that more than `95%` of memory is utilized. In reality:
Total actual used RAM = row-1 used - (row-1 buffers + row-1 cached)
Used RAM = 24838
Used buffers = 402
Used cache = 22814
Total actual used RAM = 24838 - (402 + 22814) ≈ 1621
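If you do not want to do this arithmetic by hand, it can be scripted. A sketch assuming the older output format shown above (the Mem: row columns are total, used, free, shared, buffers, cached):

```
# free -m | awk '/^Mem:/ {print $3 - $6 - $7 " MB actually used"}'
```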
#### Display Memory in GB
As noted, `free` displays memory in `KB` by default, and `-m` gives `MB`; but when the server has a huge amount of memory (more than 100 GB or 200 GB), even the MB output becomes hard to read. In that situation, add the `-g` option to the free command to get the output in `GB - Gigabytes`.
```
# free -g
total used free shared buffers cached
Mem: 31 24 7 0 0 22
-/+ buffers/cache: 1 29
Swap: 3 0 3
```
#### Display Total Memory Line
By default, the `free` command output comes with three rows (Mem, -/+ buffers/cache & Swap). To display consolidated totals in a separate line (Total = Mem + Swap; Used = Mem used + buffers/cache used + Swap used; Free = Mem free + buffers/cache free + Swap free), add the `-t` option to the free command.
```
# free -t
total used free shared buffers cached
Mem: 32869744 25434276 7435468 0 412032 23361716
-/+ buffers/cache: 1660528 31209216
Swap: 4095992 0 4095992
Total: 36965736 27094804 42740676
```
#### Run free with delay for better statistic
By default, the free command displays a single snapshot, which is often not enough for troubleshooting, so add a delay (the delay between updates, in seconds) to capture activity periodically. If you want to run free with a 2-second delay, just use the command below (adjust the delay as you wish).
The following command will run every 2 seconds until you exit.
```
# free -s 2
total used free shared buffers cached
Mem: 32849392 25935844 6913548 188 182424 24632796
-/+ buffers/cache: 1120624 31728768
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25935288 6914104 188 182424 24632796
-/+ buffers/cache: 1120068 31729324
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25934968 6914424 188 182424 24632796
-/+ buffers/cache: 1119748 31729644
Swap: 20970492 0 20970492
```
#### Run free with delay & counts
Alternatively, you can run the free command with a delay and a specific count; once it reaches the given count, it exits automatically.
The following command will run every 2 seconds for 5 counts and then exit automatically.
```
# free -s 2 -c 5
total used free shared buffers cached
Mem: 32849392 25931052 6918340 188 182424 24632796
-/+ buffers/cache: 1115832 31733560
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25931192 6918200 188 182424 24632796
-/+ buffers/cache: 1115972 31733420
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25931348 6918044 188 182424 24632796
-/+ buffers/cache: 1116128 31733264
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25931316 6918076 188 182424 24632796
-/+ buffers/cache: 1116096 31733296
Swap: 20970492 0 20970492
total used free shared buffers cached
Mem: 32849392 25931308 6918084 188 182424 24632796
-/+ buffers/cache: 1116088 31733304
Swap: 20970492 0 20970492
```
#### Human readable format
To print human-readable output, add the `-h` option to the `free` command; it scales each value to the shortest unit with a suffix, which is easier to read than the fixed `-m` or `-g` output.
```
# free -h
total used free shared buff/cache available
Mem: 2.0G 1.6G 138M 20M 188M 161M
Swap: 2.0G 1.8G 249M
```
#### Split Buffers & Cached memory output
By default, the `Buffers/Cached` memory output comes combined. To split buffers and cached memory in the output, add the `-w` option to the free command (this option is available from version 3.3.12).
Note: In the earlier outputs above, `Buffers/Cached` appears combined.
```
# free -wh
total used free shared buffers cache available
Mem: 2.0G 1.6G 137M 20M 8.1M 183M 163M
Swap: 2.0G 1.8G 249M
```
#### Show Low and High Memory Statistics
By default, the `free` command output comes without low and high memory statistics. To display them, add the `-l` option to the free command.
```
# free -l
total used free shared buffers cached
Mem: 32849392 25931336 6918056 188 182424 24632808
Low: 32849392 25931336 6918056
High: 0 0 0
-/+ buffers/cache: 1116104 31733288
Swap: 20970492 0 20970492
```
#### Read more about free
If you want to know about more options available for free, simply check its man page.
```
# free --help
or
# man free
```
--------------------------------------------------------------------------------
via: http://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/
作者:[MAGESH MARUTHAMUTHU][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.2daygeek.com/author/magesh/
[1]:http://www.2daygeek.com/smem-linux-memory-usage-statistics-reporting-tool/
[2]:http://www.2daygeek.com/linux-vmstat-command-examples-tool-report-virtual-memory-statistics/
[3]:http://www.2daygeek.com/author/magesh/


@ -1,77 +0,0 @@
5 Linux Music Players You Should Consider Switching To
============================================================
![](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/linux-music-players.jpg "5 Linux Music Players You Should Consider Switching To")
There are dozens of Linux music players out there, and this makes it difficult to find the best one for our usage. In the past weve reviewed some of these players, such as [Cantata][10], [Exaile][11], or even [the lesser known ones][12] like Clementine, Nightingale and Quod Libet.
In this article I will be covering more music players for Linux that in some aspects are even better than the ones weve already told you about.
### 1. Qmmp
[Qmmp][13] isn't the most feature-rich (or stable) Linux music player, but it's my favorite one, and this is why I put it as number one. I know there are better players, but I somehow just love this one and use it most of the time. It does crash, and there are many files it can't play, but nevertheless I still love it the most. Go figure!
![linux-players-01-qmmp](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/Linux-players-01-Qmmp.jpg "linux-players-01-qmmp")
Qmmp is a Winamp port for Linux. It's (relatively) lightweight and has a decent feature set. Since I grew up with Winamp and loved its keyboard shortcuts, it was a nice surprise that they are present in the Linux version, too. As for formats, Qmmp plays most of the popular ones, such as MPEG1 layer 2/3, Ogg Vorbis and Opus, native FLAC/Ogg FLAC, Musepack, WavePack, tracker modules (mod, s3m, it, xm, etc.), ADTS AAC, CD Audio, WMA, Monkey's Audio (and other formats provided by the FFmpeg library), PCM WAVE (and other formats provided by the libsndfile library), Midi, SID, and Chiptune formats (AY, GBS, GYM, HES, KSS, NSF, NSFE, SAP, SPC, VGM, VGZ, and VTX).
### 2. Amarok
[Amarok][14] is the KDE music player, though you can certainly use it with any other desktop environment. It's one of the oldest music players for Linux. This is probably one of the reasons why it's a very popular player, though I personally don't like it that much.
![linux-players-02-amarok](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/Linux-players-02-Amarok.jpg "linux-players-02-amarok")
Amarok plays a huge array of music formats, but its main advantage is the abundance of plugins. The app comes with a lot of documentation, though it hasn't been updated recently. Amarok is also famous for its integration with various web services such as Ampache, Jamendo Service, Last.fm, Librivox, MP3tunes, Magnatune, and OPML Podcast Directory.
### 3. Rhythmbox
Now that I have mentioned Amarok and the KDE music player, let's move to [Rhythmbox][15], the default Gnome music player. Since it comes with Gnome, you can guess it's a popular app. It's not only a music player, but also a music management app. It supports MP3 and OGG, plus about a dozen other file formats, as well as Internet radio, iPod integration, the playing of audio files, audio CD burning and playback, music sharing, and podcasts. All in all, it's not a bad player, but this doesn't mean you will like it the most. Try it and see if this is your player. If you don't like it, just move on to the next option.
![linux-players-03-rhythmbox](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/Linux-players-03-Rhythmbox.jpg "linux-players-03-rhythmbox")
### 4. VLC
Though [VLC][16] is best known as a movie player, it's great as a music player, too, simply because it has the largest collection of codecs. If you can't play a file with it, it's unlikely you will be able to open it with any other player. VLC is highly customizable, and there are a lot of extensions for it. It runs on Windows, Linux, Mac OS X, Unix, iOS, Android, etc.
![linux-players-04-vlc](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/Linux-players-04-VLC.jpg "linux-players-04-vlc")
What I personally don't like about VLC is that it's quite heavy on resources. Also, for some of the files I've used it with, the playback quality was far from stellar. The app would often shut down without any obvious reason while playing a file most other players wouldn't struggle with, but it's quite possible it's not so much the player as the file itself. Even though VLC isn't among the apps I frequently use, I still wholeheartedly recommend it.
### 5. Cmus
If you fancy command-line apps, then [Cmus][17] is your Linux music player. You can use it to play Ogg Vorbis, MP3, FLAC, Opus, Musepack, WavPack, WAV, AAC, MP4, audio CDs, everything supported by ffmpeg (WMA, APE, MKA, TTA, SHN, etc.), and libmodplug. You can also use it for streaming from Shoutcast or Icecast. It's not the most feature-rich music player, but it has all the basics and beyond. Its main advantage is that it's very lightweight, and its memory requirements are really minimal.
![linux-players-05-cmus](https://maketecheasier-2d0f.kxcdn.com/assets/uploads/2017/02/Linux-players-05-Cmus.jpg "linux-players-05-cmus")
All these music players are great in one aspect or another. I can't say there is a best among them; this is largely a matter of personal taste and needs. Most of these apps either come installed by default in the distro or can be easily found in the package manager. Simply open Synaptic, Software Center, or whatever package manager your distro uses, search for them, and install them from there. You can also use the command line, or simply double-click the install file you download from their site. The choice is yours.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/linux-music-players-to-check-out/
作者:[Ada Ivanova][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/adaivanoff/
[1]:https://www.maketecheasier.com/author/adaivanoff/
[2]:https://www.maketecheasier.com/linux-music-players-to-check-out/#comments
[3]:https://www.maketecheasier.com/category/linux-tips/
[4]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.maketecheasier.com%2Flinux-music-players-to-check-out%2F
[5]:http://twitter.com/share?url=https%3A%2F%2Fwww.maketecheasier.com%2Flinux-music-players-to-check-out%2F&text=5+Linux+Music+Players+You+Should+Consider+Switching+To
[6]:mailto:?subject=5%20Linux%20Music%20Players%20You%20Should%20Consider%20Switching%20To&body=https%3A%2F%2Fwww.maketecheasier.com%2Flinux-music-players-to-check-out%2F
[7]:https://www.maketecheasier.com/mastering-disk-utility-mac/
[8]:https://www.maketecheasier.com/airy-youtube-video-downloader/
[9]:https://support.google.com/adsense/troubleshooter/1631343
[10]:https://www.maketecheasier.com/cantata-new-music-player-for-linux/
[11]:https://www.maketecheasier.com/exaile-the-first-media-player-i-dont-hate/
[12]:https://www.maketecheasier.com/the-lesser-known-music-players-for-linux/
[13]:http://qmmp.ylsoftware.com/
[14]:https://amarok.kde.org/
[15]:https://wiki.gnome.org/Apps/Rhythmbox
[16]:http://www.videolan.org/vlc/
[17]:https://cmus.github.io/


@ -0,0 +1,99 @@
# Fedora 25: Wayland vs Xorg
Almost as good as Alien vs Predator, only much better. Anyhow, as you probably know, I have recently tested [Fedora 25][1]. It was an okay experience. Overall, the distro behaved reasonably well. Not the fastest, but stable enough, usable enough, with some neat improvements here and there. Most importantly, apart from some performance and responsiveness loss, Wayland did not cause my system to melt. But that's just the beginning.
Wayland is in its infancy as a consumer technology, or at least that thing that people take for granted when they do desktop stuff. Therefore, I must continue testing. Never surrender. In the past few weeks of actively using Fedora 25, I did come across a few other issues and problems, some less worrying, some quite disturbing, some odd, some meaningless. Let us elaborate.
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-teaser.jpg)
Note: Image taken from [Wikimedia][2] and modified, licensed under [CC BY-SA 3.0][3].
### Wayland does not support everything
Nope. 'Tis a fact. If you go about the Web, doing some reading, you will have learned that all sorts of things are not yet Wayland-ready. Still, we all know Fedora is the state-of-the-art, bleeding-edge distro, and so it's a testbed for pain and discovery. Fair enough. For a while, things were quite all right, no fuss, no errors, but then, I suddenly needed to use GParted. I was in a hurry, troubleshooting a big issue, and now I had to sidetrack myself with pointless extra work. GParted would just not launch under Wayland. Exploring in a bit more detail, I learned that this partitioning software was not supported yet.
![GParted does not run under Wayland](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-gparted.jpg)
And the thing is, I do not really know what other applications do not work under Wayland, and I am not really keen to discover that in a moment of true reckoning. Searching online, I wasn't able to find a quick, easy list that details the current incompatibilities. Maybe it's me, and I suck at searching, but something as trivial as "Wayland + compatibility" should be obvious.
What I did find is a [self-argument][4] telling us why Wayland is good, a list of [Gnome][5] applications currently supported under this new thingie, several nerdy pages on ArchWiki, a super-nerdy slit-my-wrists topic on [Nvidia][6] devtalk, and a few other ambiguous discussions.
### Performance, again
On the Fedora 25 box, I changed the login session from Gnome (Wayland) to Gnome Xorg to see how this affects the system. I did previously mention the performance benchmarks and the comparison to [Fedora 24][7] on the same laptop - [Lenovo G50][8] - but this should give us even more accurate results.
Wayland (screenshot 1) gives us 1.4GB memory use without anything else running, and the CPU averages about 4-5%. Xorg (screenshot 2) tolls the same amount of RAM, and the processor eats 3-4% of its full power. Marginally less in sheer numbers. But then, the experience in the Xorg session is just so much better. It's milliseconds alright, but you can feel it. The legacy session seems to be ever so slightly sprightlier, faster, fresher. The lag is less noticeable. If you are sensitive as to how your desktop responds, you will not be happy with this penalty. Sure, this may only be a bit of sub-optimized beginner's luck, and Wayland may improve over time. But it's also something we cannot ignore.
![Wayland resources](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-resources-wayland.jpg)
![Xorg resources](http://www.dedoimedo.com/images/computers-years/2016-2/fedora-wayland-xorg-resources-xorg.jpg)
### Let's rant
I am not happy with this. Not massively angry, but I don't like that I actually need to login into the classic X session to be able to fully enjoy my desktop experience. Because X gives me 100%. Wayland does not. That means, at the end of the day, I will not be using Wayland. I like exploring technology, but I am not a zealot on some holy big-endian pilgrimage. I just want to use my desktop, and sometimes, I might even need things fast. Logging out and back in can be an annoying hassle in a moment of need. And the reason why we have this issue is because Wayland is not there to make life easier for Linux desktop users. Quite the opposite. Quote:
> Wayland is intended as a simpler replacement for X, easier to develop and maintain. GNOME and KDE are expected to be ported to it.
And you see, that's part of the problem. Stuff should not be designed to be easier to develop or maintain. That can be a beneficial by-product, provided all other customer requirements are met. But if they are not, then it does not matter how hard or simple it is for programmers to hammer out code. That's their job. The whole purpose of technology is to support the end state - in this case, a seamless and smooth user experience.
Unfortunately, a large number of products today are being re-invented and re-developed for the sake of making it easier for software people and not for the users. To a large extent, Gnome 3, PulseAudio, [Systemd][9], and Wayland, they all serve no higher user experience purpose. They are quite intrusive in that sense, and they do not contribute to the stability and simplicity of the Linux desktop ecosystem.
This is one of the primary reasons why the Linux desktop is a relatively immature product - it is designed to self-support the people developing it, almost like a living organism. It's not there to be a slave to the whims and wishes of the user. And that's how great things are done. You satisfy the primary need, and only then worry about the details. Great user experience does not depend - and should never depend - on the choice of programming language, compiler, or any nonsense like that. If it does, then whoever designed the product has not done the abstraction piece well enough, and we have a failed thing that needs to be removed from existence.
And so, from my perspective, I don't care if it takes 10 liters of blood to compile one version of X or whatever. I'm a user. All I care is that my desktop works as robustly as did it yesterday or 5 years ago. If that's not happening, I'm not interested in macros, classes, variables, declarations, structs, or any other geeky CS technobabble. That's irrelevant. And a product that advertises itself as being created to be convenient for the people developing it is a paradox. Don't develop it, then. Makes things even easier.
Now, the reality is, Wayland is largely ok - but it is still not as good as X, and as such it should not be offered as a production-ready item on any desktop. Once it can replace the old technology so seamlessly no one ever knows about it, only then will it have succeeded in what it needs to achieve, and then, it can be written in C or D or K language, and it can have anything the developers want. Until then, it's a parasite that eats on the resources and peoples' nerves.
Don't get me wrong. We need progress. We need change. But it has to serve an evolutionary purpose. Does X handle the user needs well today? Can it do graphics support for 3rd-party blobs? Can it support HD and UHD and DPI and whatnot? Can you play the latest games on it? Yes? No? If not, then it needs to be fixed. Those are the evolutionary drivers. Not the difficulty of writing and compiling code. Software developers are the coal miners of the digital industry, and they need to work hard to make users happy. As a phrase, 'easier to develop' should be outlawed, and people who like it need to be electrocuted by old radio batteries and then exiled to Mars in non-A/C spaceships. If you can't write smart code, it's your problem. The user should not suffer because developers think they're princesses.
### Conclusion
Here we are. In general, Wayland is not bad. It's okay. But that's like saying you are earning 83% today compared to 100% yesterday only because someone decided to change the layout of your payslip. Not acceptable in that sense, even if Wayland works fairly well. It's the stuff that does not work that makes all the difference. Ignoring the whole rant side of it, Wayland introduced reduced usability, performance and app wise, and this is something Fedora will have to sort out fast.
Other distros will follow, and we will be seeing a recurring pattern. The same happened with Gnome 3. The same happened with Systemd. Less-than-fully-ready technologies are unleashed into the open, and then we spend a year or two fixing things that needed no fixing, and eventually we end up with the same functionality we already had, only created in a different programming language. Not interested. CS used to be all glamor in 1999, when Excel users were making USD 50/hour. Today, programming is the undeserving oar galley, and people don't care for the sweat and blisters under the deck.
Performance is probably less of an issue, because you can give up on 1-2% change, especially since it can randomly come from any which factor. You will know this if you've used Linux for more than a year or two. But not being able to launch programs is a big deal. At the very least, Fedora graciously offers the legacy platform, too. But then, it may be gone before Wayland reaches 100% maturity. Here we go again. So no, there's no disaster. My original Fedora 25 claim stands in this regard. What we have is annoyance. Unnecessary annoyance. Ah well. The story of Linux, part 9000.
And so, at the end of the day, with everything said and done, what we learned here is: KNEEL BEFORE XORG! OMG. That's so good, I will now fade into the background while the chuckles off your merriment carry off into the frosty night. So long.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/fedora-25-wayland-vs-xorg.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
[2]:https://commons.wikimedia.org/wiki/File:DragonCon-AlienVsPredator.jpg
[3]:https://creativecommons.org/licenses/by-sa/3.0/deed.en
[4]:https://wayland.freedesktop.org/faq.html
[5]:https://wiki.gnome.org/Initiatives/Wayland/Applications
[6]:https://devtalk.nvidia.com/default/topic/925605/linux/nvidia-364-12-release-vulkan-glvnd-drm-kms-and-eglstreams/
[7]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[8]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
[9]:http://www.ocsmag.com/2016/10/19/systemd-progress-through-complexity/


@ -1,162 +0,0 @@
translating by xiaow6
How to perform search operations in Vim
============================================================
### On this page

1. [Customize your search][5]
    1. [Search highlighting][1]
    2. [Making search case-insensitive][2]
    3. [Smartcase search][3]
    4. [Incremental search][4]
2. [Some other cool Vim search tips/tricks][6]
3. [Conclusion][7]
While we've already [covered][8] several of Vim's features, the editor's feature set is so vast that no matter how much you learn, it doesn't seem to be enough. So, continuing with our Vim tutorial series, in this write-up we will discuss the various search techniques that the editor offers.
But before we do that, please note that all the examples, commands, and instructions mentioned in this tutorial have been tested on Ubuntu 14.04, and the Vim version we've used is 7.4.
### Basic search operations in Vim
If you have opened a file in the Vim editor and want to search for a particular word or pattern, the first thing you have to do is come out of Insert mode (if that mode is currently active). Once that is done, type '**/**' (without quotes) followed by the word/pattern that you want to search for.
For example, if the word you want to search is 'linux', here's how it will appear at the bottom of your Vim window:
[
![Search for words in vim](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-basic-search.png)
][9]
After this, just hit the Enter key and you'll see that Vim places the cursor on the first line (containing the word) that it encounters, beginning from the line where the cursor was when you were in Insert mode. If you've just opened a file and begun searching, the search operation will start from the very first line of the file.
To move on to the next line containing the searched word, press the '**n**' key. When you've traversed all the lines containing the searched pattern, pressing the '**n**' key again will make the editor repeat the search, and you'll be back at the first occurrence again.
[
![Move to next search hit](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-end.png)
][10]
While traversing the searched occurrences, if you want to go back to the previous occurrence, press '**N**' (shift+n). Also, it's worth mentioning that at any point in time, you can type '**ggn**' to jump to the first match, or '**GN**' to jump to the last.
In case you are at the bottom of a file, and want to search backwards, then instead of initiating the search with **/**, use **?**. Here's an example:
[
![search backwards](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-back.png)
][11]
### Customize your search
### 1. Search highlighting
While jumping from one occurrence of the searched word/pattern to another is easy using 'n' or 'N,' things become more user-friendly if the searched occurrences get highlighted. For example, see the screenshot below:
[
![Search Highlighting in VIM](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-highlight-search.png)
][12]
This can be made possible by setting the 'hlsearch' variable, something which you can do by writing the following in the normal/command mode:
```
:set hlsearch
```
[
![set hlsearch](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-set-hlsearch.png)
][13]
### 2. Making search case-insensitive
By default, the search you do in Vim is case-sensitive. This means that if I am searching for 'linux', then 'Linux' won't get matched. However, if that's not what you are looking for, then you can make the search case-insensitive using the following command:
```
:set ignorecase
```
So after I set the 'ignorecase' variable using the aforementioned command, and searched for 'linux', the occurrences of 'LINUX' were also highlighted:
[
![search case-insensitive](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-search-case.png)
][14]
### 3. Smartcase search
Vim also offers a feature with which you can ask the editor to be case-sensitive only when the searched word/pattern contains an uppercase character. For this you need to first set the 'ignorecase' variable and then set the 'smartcase' variable.
```
:set ignorecase
:set smartcase
```
For example, if a file contains both 'LINUX' and 'linux,' and smartcase is on, then only occurrences of the word LINUX will be searched if you search using '/LINUX'. However, if the search is '/linux', then all the occurrences will get matched irrespective of whether they are in caps or not.
### 4. Incremental search
Just like, for example, Google, which shows search results as you type your query (updating them with each letter you type), Vim also provides incremental search. To access the feature, you'll have to execute the following command before you start searching:
```
:set incsearch
```
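One thing to keep in mind is that options set with ':set' only last for the current session. If you'd rather not type them every time, you can put the four settings discussed above into your vimrc; a minimal sketch:
```
" ~/.vimrc - make the search settings from this tutorial permanent
set hlsearch        " highlight all matches of the last search
set ignorecase      " make searches case-insensitive...
set smartcase       " ...unless the pattern contains an uppercase character
set incsearch       " show matches while the pattern is still being typed
```
And if the highlighting left over from a previous search ever gets distracting, running ':nohlsearch' clears it until the next search.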
### Some other cool Vim search tips/tricks
There are several other search-related tips and tricks that you may find useful.
To start off, if you want to search for a word that's already there in the file, but you don't want to type it, you can just bring your cursor under it and press **\*** (or **Shift+8**). And if you want to launch a partial search (for example, a search for 'in' that also matches it inside words like 'terminal'), then you can bring the cursor under the word and press **g\*** (press 'g' and then '\*') on the keyboard.
Note: Press **#** or **g#** in case you want to search backwards.
Next up, if you want, you can get a list of all occurrences of the searched word/pattern along with the respective lines and line numbers in one place. This can be done by typing **[I** after you've initiated the search. Following is an example of how the results are grouped and displayed at the bottom of the Vim window:
[
![grouped search results](https://www.howtoforge.com/images/perform-search-operations-in-vim/vim-results-list.png)
][15]
Moving on, as you might already know, the Vim search wraps by default, meaning after reaching the end of the file (or to the last occurrence of the searched word), pressing "search next" brings the cursor to the first occurrence again. If you want, you can disable this search wrapping by running the following command:
```
:set nowrapscan
```
To enable wrap scan again, use the following command:
```
:set wrapscan
```
Finally, suppose you want to make a slight change to a word that already exists in the file and then search for the result. One way is to type **/** followed by that word, but if the word is long or complicated, it may take time to type it.
An easy way out is to bring the cursor under the word you want to slightly edit, then press '/' and then press Ctrl-r followed by Ctrl-w. The word under the cursor will not only get copied, it will be pasted after '/' as well, allowing you to easily edit it and go ahead with the search operation.
For more tricks (including how you can use your mouse to make things easier in Vim), head to the [official Vim documentation][16].
### Conclusion
Of course, nobody expects you to mug up all the tips/tricks mentioned here. What you can do is, start with the one you think will be the most beneficial to you, and practice it regularly. Once it gets embedded in your memory and becomes a habit, come here again, and see which one you should learn next.
Do you know any more such tricks? Want to share it with everyone in the HTF community? Then leave it as a comment below.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/
[1]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-search-highlighting
[2]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-making-searchnbspcaseinsensitive
[3]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-smartcase-search
[4]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#-incremental-search
[5]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#customize-your-search
[6]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#some-other-cool-vim-search-tipstricks
[7]:https://www.howtoforge.com/tutorial/perform-search-operations-in-vim/#conclusion
[8]:https://www.howtoforge.com/tutorial/vim-editor-modes-explained/
[9]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-basic-search.png
[10]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-end.png
[11]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-back.png
[12]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-highlight-search.png
[13]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-set-hlsearch.png
[14]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-search-case.png
[15]:https://www.howtoforge.com/images/perform-search-operations-in-vim/big/vim-results-list.png
[16]:http://vim.wikia.com/wiki/Searching

View File

@ -0,0 +1,149 @@
# Docker swarm mode - Adding worker nodes tutorial
Let us expand on what we started with CentOS 7.2 several weeks ago. In this [guide][1], we learned how to initiate and start the native clustering and orchestration functionality built into Docker 1.12. But we only had our manager node and no other workers. Today, we will expand this.
I will show you how to add non-symmetrical nodes into the swarm, i.e. a [Fedora 24][2] that will sit alongside our CentOS box, and they will both participate in the cluster, with all the associated fancy loadbalancing and whatnot. Of course, this will not be trivial, and we will encounter some snags, and so it ought to be quite interesting. After me.
![Teaser](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-teaser-more.png)
### Prerequisites
There are several things we need to do before we can successfully join additional nodes into the swarm. One, ideally, all nodes should be running the same version of Docker, and it should be at least 1.12 in order to support native orchestration. Like CentOS, Fedora does not have the latest build in its repo, so you will need to [add and install][3] the right software version, either manually or using the Docker repository, and then fix a few dependency conflicts. I have shown you how to do this with CentOS, and the exercise is identical.
Moreover, all your nodes will need to be able to communicate with one another. There will have to be routing and firewall rules in place so that the managers and workers can talk among themselves. Otherwise, you will not be able to join nodes into the swarm. The easiest way to work around problems is to temporarily flush the firewall rules (iptables -F), but this may impair your security. Make sure you fully understand what you're doing, and that you create the right rules for your nodes and ports. Otherwise, joining will fail with a timeout:
```
Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
```
You need to have the same Docker images available on your hosts. In our previous tutorial, we created an Apache image, and you will need to do the same on your worker nodes, or distribute the created images. If you do not do that, you will encounter errors. If you need help setting up Docker, please read my [intro guide][4] and the [networking tutorial][5].
```
7vwdxioopmmfp3amlm0ulimcu   \_ websky.11   my-apache2:latest
localhost.localdomain   Shutdown   Rejected 7 minutes ago
"No such image: my-apache2:lat&"
```
### Let's get started
So we have our CentOS box up and running, and it's spawning containers successfully. You are able to connect to the services using host ports, and everything looks peachy. At the moment, your swarm only has the manager.
![Manager](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-manager.png)
### Join workers
To add new nodes, you will need to use the join command. But first you need to discover the token, IP address and port that you must provide on the worker nodes for them to authenticate correctly against the swarm manager. That information comes from the join-token command, which runs on the manager:
```
[root@localhost ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-0xvojvlza90nrbihu6gfu3qm34ari7lwnza ... \
192.168.2.100:2377
```
If you do not fix the firewall and routing rules, you will get timeout errors. If you've already joined the swarm, repeating the join command will create its own noise:
```
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
```
If ever in doubt, you can leave the swarm and then try again:
```
[root@localhost ~]# docker swarm leave
Node left the swarm.
docker swarm join --token
SWMTKN-1-0xvojvlza90nrbihu6gfu3qnza4 ... 192.168.2.100:2377
This node joined a swarm as a worker.
```
On the worker node, you can use docker info to check the status:
```
Swarm: active
NodeID: 2i27v3ce9qs2aq33nofaon20k
Is Manager: false
Node Address: 192.168.2.103
```
Likewise, on the manager:
```
Swarm: active
NodeID: cneayene32jsb0t2inwfg5t5q
Is Manager: true
ClusterID: 8degfhtsi7xxucvi6dxvlx1n4
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.2.100
```
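Besides docker info, the manager can also list every member of the swarm, which is a quick sanity check that the new worker registered correctly. A short sketch, reusing the worker NodeID from the output above:
```
# on the manager: list every node in the swarm along with its status
docker node ls

# optionally, inspect the new worker using the NodeID reported by docker info
docker node inspect 2i27v3ce9qs2aq33nofaon20k --pretty
```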
### Create or scale services
Now, we need to see if and how Docker distributes the containers between the nodes. My testing shows a rather simplistic balancing algorithm under very light load. Once or twice, Docker did not try to re-distribute running services to new workers, even after I tried to scale and update them. Likewise, on one occasion, it created a new service entirely on the worker node. Maybe it was the best choice.
![Scale service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-scale-service.png)
![Service ls](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list.png)
![Services ls, more](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-service-list-more.png)
![New service](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-new-service.png)
New service created entirely on the worker node.
After a while, there was some re-distribution of containers for existing services between the two, but it took some time. New services worked fine. This is an early observation only, so I cannot say much more at this point. For now, this is a good starting point to begin exploring and tweaking.
![Service distributed](http://www.dedoimedo.com/images/computers-years/2016-2/docker-swarm-distributed.png)
Load balancing kicks in after a while.
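If you want to reproduce these experiments, the commands involved are short. A hedged sketch, reusing the my-apache2 image from the previous tutorial and a hypothetical service name 'web':
```
# create a service with two replicas, publishing container port 80
docker service create --name web --replicas 2 -p 80:80 my-apache2:latest

# scale it up, then check which nodes the new tasks landed on
docker service scale web=4
docker service ps web
```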
### Conclusion
Docker is a neat little beast, and it will only continue to grow bigger, more complex, more powerful, and of course, more elegant. It is only a matter of time before it gets eaten by a big, juicy enterprise. When it comes to its native orchestration, the swarm mode works quite well, but it takes more than just a few containers to fully tap into the power of its algorithms and scalability.
My tutorial shows how to add a Fedora node to a cluster run by a CentOS box, and the two worked fine side by side. There are some questions around the load balancing, but this is something I will explore in future articles. All in all, I hope this was a worthwhile lesson. We've tackled some prerequisites and common problems that you might encounter when trying to set up a swarm, we fired up a bunch of containers, and we even briefly touched on how to scale and distribute the services. And remember, 'tis just the beginning.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/docker-swarm-adding-worker-nodes.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/docker-swarm-intro.html
[2]:http://www.dedoimedo.com/computers/fedora-24-gnome.html
[3]:http://www.dedoimedo.com/computers/docker-centos-upgrade-latest.html
[4]:http://www.dedoimedo.com/computers/docker-guide.html
[5]:http://www.dedoimedo.com/computers/docker-networking.html

View File

@ -1,228 +0,0 @@
translating by ypingcn
A beginner's guide to understanding sudo on Ubuntu
============================================================
### On this page
1. [What is sudo?][4]
2. [Can any user use sudo?][5]
3. [What is a sudo session?][6]
4. [The sudo password][7]
5. [Some important sudo command line options][8]
1. [The -k option][1]
2. [The -s option][2]
3. [The -i option][3]
6. [Conclusion][9]
Ever got a 'Permission denied' error while working on the Linux command line? Chances are that you were trying to perform an operation that requires root permissions. For example, the following screenshot shows the error being thrown when I was trying to copy a binary file to one of the system directories:
[
![permission denied on the shell](https://www.howtoforge.com/images/sudo-beginners-guide/perm-denied-error.png)
][11]
So what's the solution to this problem? Simple, use the **sudo** command.
[
![run command with sudo](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-example.png)
][12]
The user who is running the command will be prompted for their login password. Once the correct password is entered, the operation will be performed successfully.
While sudo is no doubt a must-know command for anyone and everyone who works on the command line in Linux, there are several other related (and in-depth) details that you should know in order to use the command more responsibly and effectively. And that's exactly what we'll be discussing in this article.
But before we move ahead, it's worth mentioning that all the commands and instructions mentioned in this article have been tested on Ubuntu 14.04 LTS with Bash shell version 4.3.11.
### What is sudo?
The sudo command, as most of you might already know, is used to execute a command with elevated privileges (usually as root). An example of this we've already discussed in the introduction section above. However, if you want, you can use sudo to execute a command as some other (non-root) user.
This is achieved through the -u command line option the tool provides. For example, in the example shown below, I (himanshu) tried renaming a file in some other user's (howtoforge) home directory and got a 'permission denied' error. But when I tried the same 'mv' command with 'sudo -u howtoforge', the command was successful:
[
![What is sudo](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-switch-user.png)
][13]
### Can any user use sudo?
No. For a user to be able to use sudo, an entry corresponding to that user should be in the /etc/sudoers file. The following paragraph - taken from Ubuntu's website - should make it more clear:
```
The /etc/sudoers file controls who can run what commands as what users on what machines and can also control special things such as whether you need a password for particular commands. The file is composed of aliases (basically variables) and user specifications (which control who can run what).
```
If you are using Ubuntu, it's easy to make sure that a user can run the sudo command: all you have to do is to make that user account type 'administrator'. This can be done by heading to System Settings... -> User Accounts.
[
![sudo users](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-user-accounts.png)
][14]
Unlocking the window:
[
![unlocking window](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-user-unlock.png)
][15]
Then selecting the user whose account type you want to change, and then changing the type to 'administrator'
[
![choose sudo accounts](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-admin-account.png)
][16]
However, if you aren't on Ubuntu, or your distribution doesn't provide this feature, you can manually edit the /etc/sudoers file to make the change. You'll be required to add the following line in that file:
```
[user]    ALL=(ALL:ALL) ALL
```
Needless to say, [user] should be replaced by the user-name of the account you're granting the sudo privilege. An important thing worth mentioning here is that the officially suggested method of editing this file is through the **visudo** command - all you have to do is to run the following command:
```
sudo visudo
```
To give you an idea why exactly that is the case, here's an excerpt from the visudo manual:
```
visudo edits the sudoers file in a safe fashion. visudo locks the sudoers file against multiple simultaneous edits, provides basic sanity checks, and checks for parse errors. If the sudoers file is currently being edited you will receive a message to try again later.
```
For more information on visudo, head [here][17].
### What is a sudo session?
If you use the sudo command frequently, I am sure you'd have observed that after you successfully enter the password once, you can run multiple sudo commands without being prompted for it. But after some time, the sudo command asks for your password again.
This behavior has nothing to do with the number of sudo-powered commands you run; instead, it depends on time. Yes, by default, sudo won't ask for the password for 15 minutes after the user has entered it once. After those 15 minutes, you'll be prompted for the password again.
However, if you want, you can change this behavior. For this, open the /etc/sudoers file using the following command:
```
sudo visudo
```
And then go to the line that reads:
```
Defaults env_reset
```
[
![env_reset](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-session-time-default.png)
][18]
and add the following variable at the end of the line:
```
Defaults env_reset,timestamp_timeout=[new-value]
```
The [new-value] field should be replaced by the number of minutes you want your sudo session to last. For example, I used the value 40.
[
![sudo timeout value](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-session-timeout.png)
][19]
In case you want to be prompted for the password every time you use the sudo command, you can assign the value '0' to this variable. And for those of you who want their sudo session to never time out, you can assign the value '-1'.
Please note that using timestamp_timeout with value '-1' is strongly discouraged.
### The sudo password
As you might have observed, whenever sudo prompts you for a password and you start entering it, nothing shows up - not even the asterisks that are usually the norm. While that's not a big deal in general, some users may want to have the asterisks displayed for whatever reason.
The good thing is that's possible and pretty easy to do. All you have to do is to change the following line in /etc/sudoers file:
```
Defaults        env_reset
```
to
```
Defaults        env_reset,pwfeedback
```
And save the file.
Now, whenever you type the sudo password, asterisks will show up.
[
![hide the sudo password](https://www.howtoforge.com/images/sudo-beginners-guide/sudo-password.png)
][20]
### Some important sudo command line options
Aside from the -u command line option (which we've already discussed at the beginning of this tutorial), there are some other important sudo command line options that deserve a mention. In this section, we will discuss some of those.
### The -k option
Consider a case wherein you've just run a sudo-powered command after entering your password. Now, as you already know, the sudo session remains active for 15 minutes by default. Suppose during this session you have to give someone access to your terminal, but you don't want them to be able to use sudo. What will you do?
Thankfully, there exists a command line option -k that allows a user to revoke sudo permission. Here's what the sudo man page has to say about this option:
```
-k, --reset-timestamp
When used without a command, invalidates the user's cached credentials. In other words, the next time sudo is run a password will be required. This option does not require a password and was added to allow a user to revoke sudo permissions from a .logout file.
When used in conjunction with a command or an option that may require a password, this option will cause sudo to ignore the user's cached credentials. As a result, sudo will prompt for a password (if one is required by the security policy) and will not update the user's cached credentials.
```
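For example, here's roughly what this looks like in practice (the username is the one from the earlier examples):
```
$ sudo -k
$ sudo whoami
[sudo] password for himanshu:
root
```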
### The -s option
There might be times when your work requires you to run a bucketload of commands that need root privileges, and you don't want to enter the sudo password every now and then. Also, you don't want to tweak the sudo session timeout limit by making changes to the /etc/sudoers file.
In that case, you may want to use the -s command line option of the sudo command. Here's how the sudo man page explains it:
```
-s, --shell
Run the shell specified by the SHELL environment variable if it is set or the shell specified by the invoking user's password database entry. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
```
So basically, what this command line option does is:
* Launches a new shell - as for which shell, the SHELL env variable is referred. In case $SHELL is empty, the shell defined in the /etc/passwd file is picked up.
* If you're also passing a command name along with the -s option (for example: sudo -s whoami), then the actual command that gets executed is: sudo /bin/bash -c whoami.
* If you aren't trying to execute any other command (meaning, you're just trying to run sudo -s) then you get an interactive shell with root privileges.
What's worth keeping in mind here is that the -s command line option gives you a shell with root privileges, but you don't get the root environment - it's your own .bashrc that gets sourced. This means that, for example, in the new shell that sudo -s runs, executing `echo $HOME` will still print your own home directory and not root's, even though `whoami` now reports 'root'.
### The -i option
The -i option is similar to the -s option we just discussed. However, there are some differences. One of the key differences is that -i gives you the root environment as well, meaning your (user's) .bashrc is ignored. It's like becoming root without explicitly logging in as root. What's more, you don't have to enter the root user's password either.
**Important**: Please note that there exists a **su** command which also lets you switch users (by default, it lets you become root). This command requires you to enter the 'root' password. To avoid this, you can also execute it with sudo ('sudo su'); in that case you'll just have to enter your login password. However, 'su' and 'sudo su' have some underlying differences - to understand them as well as know more about how 'sudo -i' compares to them, head [here][10].
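A quick way to see the difference between the two options is to compare $HOME in each shell. An illustrative session, assuming the Ubuntu 14.04 defaults used throughout this tutorial:
```
$ sudo -s
# echo $HOME
/home/himanshu
# exit
$ sudo -i
# echo $HOME
/root
# exit
```
In the sudo -s shell $HOME is still your own (which is why your .bashrc gets sourced), while sudo -i gives you root's environment.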
### Conclusion
I hope that by now you've at least got the basic idea behind sudo and how you can tweak its default behavior. Do try out the /etc/sudoers tweaks we've explained here, and also go through the forum discussion (linked in the last paragraph) to get more insight into the sudo command.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/sudo-beginners-guide/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/
[1]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-k-option
[2]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-s-option
[3]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-i-option
[4]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#what-is-sudo
[5]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#can-any-user-use-sudo
[6]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#what-is-a-sudo-session
[7]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#the-sudo-password
[8]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#some-important-sudo-command-line-options
[9]:https://www.howtoforge.com/tutorial/sudo-beginners-guide/#conclusion
[10]:http://unix.stackexchange.com/questions/98531/difference-between-sudo-i-and-sudo-su
[11]:https://www.howtoforge.com/images/sudo-beginners-guide/big/perm-denied-error.png
[12]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-example.png
[13]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-switch-user.png
[14]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-user-accounts.png
[15]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-user-unlock.png
[16]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-admin-account.png
[17]:https://www.sudo.ws/man/1.8.17/visudo.man.html
[18]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-session-time-default.png
[19]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-session-timeout.png
[20]:https://www.howtoforge.com/images/sudo-beginners-guide/big/sudo-password.png

View File

@ -1,147 +0,0 @@
beyondworld translating
Orange Pi as Time Machine Server
============================================================
![Orange Pi as Time Machine Server](https://i1.wp.com/piboards.com/wp-content/uploads/2017/02/OPiTM.png?resize=960%2C450)
One of my projects has been to organize automated backups of the various computers in the house.  This includes a couple of Macs with some precious data on them.  So, I decided to put my inexpensive [Orange Pi][3] with [Armbian][4] Linux to the test, with the goal of getting [Time Machine][5] working over the network to a USB drive attached to the pi board.  In the process, I discovered and successfully installed Netatalk.
[Netatalk][6] is open source software that acts as an Apple file server.  With a combination of [Avahi][7] and Netatalk running, your Mac can discover your pi board on the network and will even consider it to be a “Mac” type device.  This enables you to connect manually to the network drive, but more importantly it enables Time Machine to find and use the remote drive.  The below guidance may help if you wish to set up a similar backup capability for your Macs.
### Preparations
To set up the USB drive, I first experimented with an HFS+ formatted file system.  Unfortunately, I could never get write permissions working.  So, I opted instead to create an EXT4 filesystem and ensured that my user “pi” had read/write permissions.  There are many ways to format a drive, but my preferred (and recommended) method is to use [gparted][8] whenever possible.  Since gparted is included with the Armbian desktop, that is what I used.
I wanted this drive to be automatically mounted to the same location every time the pi board boots or the USB drive is connected.  So, I created a location for it to be mounted, made a “tm” directory for the actual backups, and changed the ownership of “tm” to user pi:
```
cd /mnt
sudo mkdir timemachine
cd timemachine
sudo mkdir tm
sudo chown pi:pi tm
```
Then I opened a terminal and edited /etc/fstab…
```
sudo nano /etc/fstab
```
…and added a line at the end for the device (in my case, it is sdc2):
```
/dev/sdc2 /mnt/timemachine ext4 rw,user,exec 0 0
```
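With the fstab entry saved, you can mount everything and confirm the drive landed where expected, without rebooting:
```
sudo mount -a
df -h | grep timemachine
```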
You will need to install some prerequisite packages via the command line, some of which may already be installed on your system:
```
sudo apt-get install build-essential libevent-dev libssl-dev libgcrypt11-dev libkrb5-dev libpam0g-dev libwrap0-dev libdb-dev libtdb-dev libmysqlclient-dev avahi-daemon libavahi-client-dev libacl1-dev libldap2-dev libcrack2-dev systemtap-sdt-dev libdbus-1-dev libdbus-glib-1-dev libglib2.0-dev libio-socket-inet6-perl tracker libtracker-sparql-1.0-dev libtracker-miner-1.0-dev hfsprogs hfsutils avahi-daemon
```
### Install & Configure Netatalk
The next action is to download Netatalk, extract the downloaded archive file, and navigate to the Netatalk software directory:
```
wget https://sourceforge.net/projects/netatalk/files/netatalk/3.1.10/netatalk-3.1.10.tar.bz2
tar xvf netatalk-3.1.10.tar.bz2
cd netatalk-3.1.10
```
Now you need to configure, make, and make install the software.  In the netatalk-3.1.10 directory, run the following configure command and be prepared for it to take a bit of time:
```
./configure --with-init-style=debian-systemd --without-libevent --without-tdb --with-cracklib --enable-krbV-uam --with-pam-confdir=/etc/pam.d --with-dbus-daemon=/usr/bin/dbus-daemon --with-dbus-sysconf-dir=/etc/dbus-1/system.d --with-tracker-pkgconfig-version=1.0
```
When that finishes, run:
```
make
```
Be prepared for this to take a rather long time to complete.  Seriously, grab a cup of coffee or something.  When that is finally done, run the following command:
```
sudo make install
```
That should complete in a brief moment.  Now you can verify installation and also find the location of configuration files with the following two commands:
```
sudo netatalk -V
sudo afpd -V
```
You will need to edit your afp.conf file to define your Time Machine backup location, grant your user account access to it, and specify whether or not you want [Spotlight][9] to index your backups.
```
sudo nano /usr/local/etc/afp.conf
```
As an example, my afp.conf includes the following:
```
[My Time Machine Volume]
path = /mnt/timemachine/tm
valid users = pi
time machine = yes
spotlight = no
```
Finally, enable and start up Avahi and Netatalk:
```
sudo systemctl enable avahi-daemon
sudo systemctl enable netatalk
sudo systemctl start avahi-daemon
sudo systemctl start netatalk
```
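Before moving to the Mac, it's worth confirming that both daemons came up cleanly and that afpd is listening on the default AFP port (548); ss is one way to check this:
```
# both services should report active (running)
systemctl status avahi-daemon netatalk

# afpd should be listening on the default AFP port, 548
ss -lnt | grep 548
```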
### Connecting to the Network Drive
At this point, your Mac may have already discovered your pi board and network drive. Open Finder on the Mac and see if you have something like this:
![](https://i2.wp.com/piboards.com/wp-content/uploads/2017/02/TM_drive.png?resize=241%2C89)
You can also connect to the server by host name or IP address, for example:
```
afp://192.168.1.25
```
### Time Machine Backup
And at last… open Time Machine on the Mac, select a disk, and choose your Orange Pi.
![](https://i1.wp.com/piboards.com/wp-content/uploads/2017/02/OPiTM.png?resize=579%2C381)
This setup will definitely work, and the Orange Pi handles the process like a champ, but keep in mind that this may not be the fastest of backups.  But it is easy, inexpensive, and just works like it should.  If you have success or improvements for this type of setup, please comment below or send me a note.
![](https://i0.wp.com/piboards.com/wp-content/uploads/2017/02/backup_complete.png?resize=300%2C71)
--------------------------------------------------------------------------------
via: http://piboards.com/2017/02/13/orange-pi-as-time-machine-server/
作者:[MIKE WILMOTH][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://piboards.com/author/piguy/
[1]:http://piboards.com/author/piguy/
[2]:http://piboards.com/2017/02/13/orange-pi-as-time-machine-server/
[3]:https://www.amazon.com/gp/product/B018W6OTIM/ref=as_li_tl?ie=UTF8&tag=piboards-20&camp=1789&creative=9325&linkCode=as2&creativeASIN=B018W6OTIM&linkId=08bd6573c99ddb8a79746c8590776c39
[4]:https://www.armbian.com/
[5]:https://support.apple.com/kb/PH25710?locale=en_US
[6]:http://netatalk.sourceforge.net/
[7]:https://en.wikipedia.org/wiki/Avahi_(software)
[8]:http://gparted.org/
[9]:https://support.apple.com/en-us/HT204014

View File

@ -0,0 +1,111 @@
# Recover from a badly corrupt Linux EFI installation
In the past decade or so, Linux distributions would occasionally fail before, during and after the installation, but I was always able to somehow recover the system and continue working normally. Well, [Solus][1] broke my laptop. Literally.
GRUB rescue. No luck. Reinstall. No luck still! Ubuntu refused to install, complaining about the target device not being this or that. Wow. Something like this has never happened to me before. Effectively my test machine had become a useless brick. Should we despair? No, absolutely not. Let me show you how you can fix it.
### Problem in more detail
It all started with Solus trying to install its own bootloader - goofiboot. No idea what, who or why, but it failed to complete successfully, and I was left with a system that would not boot. After BIOS, I would get a GRUB rescue shell.
![Installation failed](http://www.dedoimedo.com/images/computers-years/2016-2/solus-installation-failed.png)
I tried manually working in the rescue shell, using this and that command, very similar to what I have outlined in my extensive [GRUB2 tutorial][2]. This did not really work. My next attempt was to recover from a live CD, again following my own advice, as I have outlined in my [GRUB2 & EFI tutorial][3]. I set up a new entry, and made sure to mark it active with the efibootmgr utility. Just as we did in the guide, and this has served us well before. Alas, this recovery method did not work, either.
I tried to perform a complete Ubuntu installation, into the same partition used by Solus, expecting the installer to sort out some of the fine details. But Ubuntu was not able to finish the install. It complained about: failed to install into /target. This was a first. What now?
### Manually clean up EFI partition
Obviously, something is very wrong with our EFI partition. Just to briefly recap, if you are using UEFI, then you must have a separate FAT32-formatted partition. This partition is used to store EFI boot images. For instance, when you install Fedora, the Fedora boot image will be copied into the EFI subdirectory. Every operating system is stored in a folder of its own, e.g. /boot/efi/EFI/&lt;os version&gt;/.
![EFI partition contents](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-efi-partition-contents.png)
On my [G50][4] machine, there were multiple entries, from a variety of my distro tests, including: centos, debian, fedora, mx-15, suse, ubuntu, zorin, and many others. There was also a goofiboot folder. However, efibootmgr was not showing a goofiboot entry in its menu. There was obviously something wrong with the whole thing.
```
sudo efibootmgr -d /dev/sda
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0001,0005,2003,0000,2001,2002
Boot0000* Lenovo Recovery System
Boot0001* ubuntu
Boot0003* EFI Network 0 for IPv4 (68-F7-28-4D-D1-A1)
Boot0004* EFI Network 0 for IPv6 (68-F7-28-4D-D1-A1)
Boot0005* Windows Boot Manager
Boot0006* fedora
Boot0007* suse
Boot0008* debian
Boot0009* mx-15
Boot2001* EFI USB Device
Boot2002* EFI DVD/CDROM
Boot2003* EFI Network
...
```
P.S. The output above was generated running the command in a LIVE session!
I decided to clean up all the non-default and non-Microsoft entries and start fresh. Obviously, something was corrupt and preventing new distros from setting up their own bootloader. So I deleted all the folders in the /boot/efi/EFI directory except Boot and Windows. And then I also updated the boot manager by removing all the extras.
```
efibootmgr -b <hex> -B <hex>
```
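For instance, using the stale fedora entry (Boot0006 in the listing above) as the example, the cleanup would look something like this:
```
# delete the firmware boot entry (the -b argument is the Boot#### number)
sudo efibootmgr -b 0006 -B

# then remove the matching folder from the EFI system partition
sudo rm -rf /boot/efi/EFI/fedora
```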
Lastly, I reinstalled Ubuntu and closely monitored the progress of the GRUB installation and setup. This time, things completed fine. There were some errors with several invalid entries, as can be expected, but the whole sequence completed just fine.
![Install errors](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-errors.jpg)
![Install successful](http://www.dedoimedo.com/images/computers-years/2016-2/grub2-install-successful.jpg)
### More reading
If you don't fancy this manual fix, you may want to read:
* [Boot-Info][5] page, with automated tools to help you recover your system
* [Boot-repair-cd][6] automatic repair tool download page
### Conclusion
If you ever encounter a situation where your system is badly botched due to a clobbered EFI partition, then you may want to follow the advice in this guide. Delete all non-default entries. Make sure you do not touch anything Microsoft, if you're multi-booting with Windows. Then update the boot menu accordingly so the baddies are removed. Rerun the installation setup for your desired distro, or try to fix with a less stringent method as explained before.
I hope this little article saves you some bacon. I was quite annoyed by what Solus did to my system. This is not something that should happen, and the recovery ought to be simpler. However, while things may seem dreadful, the fix is not difficult. You just need to delete the corrupt files and start again. Your data should not be affected, and you will be able to promptly boot into a running system and continue working. There you go.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/grub2-efi-corrupt-part-recovery.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.dedoimedo.com/computers/solus-1-2-review.html
[2]:http://www.dedoimedo.com/computers/grub-2.html
[3]:http://www.dedoimedo.com/computers/grub2-efi-recovery.html
[4]:http://www.dedoimedo.com/computers/lenovo-g50-distros-second-round.html
[5]:https://help.ubuntu.com/community/Boot-Info
[6]:https://sourceforge.net/projects/boot-repair-cd/

View File

@ -1,48 +0,0 @@
The Best Operating System for Linux Gaming: Which One Do You Use and Why?
============================================================
### Tell us which is the best Linux distro for Linux gaming
In the last few months, we tried multiple GNU/Linux distributions for gaming purposes, and we have come to the conclusion that there's no perfect operating system out there designed for Linux gaming.
We all know that the world of gaming is split between Nvidia and AMD users. Now, if you're using an Nvidia graphics card, even one from five years ago, chances are it's supported on most Linux-based operating systems because Nvidia provides up-to-date video drivers for most, if not all of its GPUs.
Of course, this means that you shouldn't have any major issues with most GNU/Linux distributions if you have an Nvidia GPU. At least not related to graphical artifacts or other performance problems when playing games, which will drastically affect your gaming experiences.
### The best Linux gaming OS for AMD Radeon users
Now, things are totally different if you're using an AMD Radeon GPU. We all know that AMD's proprietary graphics drivers still need a lot of work to be compatible with the latest GNU/Linux distributions and all the AMD GPUs that exist out there, as well as the latest X.Org Server and Linux kernel releases.
Currently, the AMDGPU-PRO video driver works only on Ubuntu 16.04 LTS, CentOS 6.8/7.3, Red Hat Enterprise Linux 6.8/7.3, and SUSE Linux Enterprise Desktop and Server 12 SP2. With the exception of Ubuntu 16.04 LTS, we have no idea why AMD provides support for all those server-oriented and enterprise-ready operating systems.
We refuse to believe that there are Linux gamers out there who use any of these OSes for anything gaming related. The [latest AMDGPU-PRO update][1] finally brought support for AMD Radeon GPUs from the HD 7xxx and 8xxx series, but what if we don't want to use Ubuntu 16.04 LTS?
On the other hand, we have the Mesa 3D Graphics Library, which is found on most distros out there. The Mesa graphics stack provides us with quite powerful open-source Radeon and AMDGPU drivers for our AMD GPUs, but to enjoy the best gaming experience possible, you also need to have the latest X.Org Server and Linux kernels.
Not all Linux operating systems come with the latest Mesa (13.0), X.Org Server (1.19), and Linux kernel (4.9) versions with support for older AMD GPUs. Some have only one or two of these technologies, but we need them all, and the kernel needs to be compiled with AMD Radeon Southern Islands and Sea Islands support for the AMDGPU driver to work.
We found the entire situation quite disheartening, at least for some AMD Radeon gamers using a bit older graphics cards. For now, we have discovered that the best gaming experience with an AMD Radeon HD 8xxx GPU can be achieved only by using Mesa 17 from Git and Linux kernel 4.10 RC.
So we're asking you now: if you've found the perfect GNU/Linux distribution for gaming, no matter whether you're using an AMD Radeon or Nvidia GPU (though we are most interested in those who are using AMD GPUs), what distro and settings are you using, and can you play the latest games, or are you still experiencing issues? Thank you!
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
作者:[Marius Nestor ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://news.softpedia.com/news/amdgpu-pro-16-60-linux-driver-finally-adds-amd-radeon-hd-7xxx-8xxx-support-512280.shtml
[2]:http://news.softpedia.com/editors/browse/marius-nestor
[3]:http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml#
[4]:https://share.flipboard.com/bookmarklet/popout?v=2&title=The+Best+Operating+System+for+Linux+Gaming%3A+Which+One+Do+You+Use+and+Why%3F&url=http%3A%2F%2Fnews.softpedia.com%2Fnews%2Fthe-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml&t=1487038258&utm_campaign=widgets&utm_medium=web&utm_source=flipit&utm_content=news.softpedia.com
[5]:http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml#
[6]:http://twitter.com/intent/tweet?related=softpedia&via=mariusnestor&text=The+Best+Operating+System+for+Linux+Gaming%3A+Which+One+Do+You+Use+and+Why%3F&url=http%3A%2F%2Fnews.softpedia.com%2Fnews%2Fthe-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
[7]:https://plus.google.com/share?url=http://news.softpedia.com/news/the-best-operating-system-for-linux-gaming-which-one-do-you-use-and-why-512861.shtml
[8]:https://twitter.com/intent/follow?screen_name=mariusnestor

View File

@ -1,3 +1,4 @@
Yoo-4x translating
# [CentOS Vs. Ubuntu][5]
[

View File

@ -1,72 +0,0 @@
# rusking translating
# [Best Windows Like Linux Distributions For New Linux Users][12]
[
![Best Windows Like Linux Distributions](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/best-windows-like-linux-distributions_1_orig.jpg)
][5]Hey new Linux users, you may be wondering which Linux distro to choose after seeing so many distros based on Linux. Most of you might be switching from Windows to Linux and want distros that are easy and simple and resemble Windows. So today I will cover those Linux distros whose desktop environments are most similar to Windows, so let's start.
### Linux Mint
[
![linux mint for new linux users](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/linux-mint-for-new-linux-users.jpg?1487173522)
][6]The first distro on my list is the famous [Linux Mint][14]. You might have heard about Linux Mint somewhere when you decided to move from Windows to Linux. It is considered one of the best Linux distros alongside Ubuntu, as it is very simple, powerful and easy to operate thanks to its famous desktop environment, Cinnamon. [Cinnamon][15] is very easy to use, and there are even [themes][16], icon packs, desklets and applets available that you can use to make it look fully like any Windows, whether XP, 7, 8, or 10. [Cinnamon][17] is one of the most famous DEs in the Linux world. You will surely find it easy, powerful and lovable.
Also read: [Linux Mint 18.1 "Serena" Is One Of The Finest Linux Distro][1] and [Cinnamon The Best Linux Desktop Environment For New Users][2]
### Zorin OS
[
![zorin os for windows users](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/zorin-os-for-windows-users.jpg?1487174149)
][7][Zorin OS][18] is also a famous Linux replacement for Windows 7: a beautiful start menu, taskbar and animations, with no compromise on speed and stability. Zorin OS will be the best choice for you if you love Windows 7 and not Windows 10. It also comes with preloaded software, so you won't be troubled while looking for software. I really fell in love with the animations and the look. Go grab it. Also read: [Zorin OS 12 Review | LinuxAndUbuntu Distro Review Of The Week][3]
### Robolinux
[
![robolinux for new users](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/robolinux-for-new-users.jpg?1487174455)
][8][Robolinux][9] is the Linux distribution that comes with built-in Wine. Yeah! It has built-in support for Windows applications, so you won't miss your favorite applications from Windows. They call it “[Stealth VM][10]”. I was really impressed by this feature, as it is unique. Also, there are a lot of DEs, so you can choose any DE that suits your needs. They also have a tool to copy your whole C drive, so no file is missed. Unique, isn't it?
### ChaletOS
[
![chalet os for new users](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/editor/chalet-os-for-new-users.jpg?1487174713)
][11]Did someone say [ChaletOS][19]? It is one of those [Linux distros][20] that feature the closest look and feel to Windows. The above pic is from after I applied Windows 10 icons and a theme, so you get the idea that it can be easily themed. Preinstalled handy apps also help to make the distro better. You will really feel at home while using it. Its screenshots even fooled my friends. Go ahead and try it; you will love it. Also read: [ChaletOS A New Beautiful Linux Distribution][4]
### Conclusion
I wanted to keep this list as short as possible, as I didn't want to confuse a new user with so many options. Still, many users may have a distro that isn't mentioned here. I would invite you to comment below with your distro and help a new Linux user choose their best distro for the first time.
Well, these four were the most used **Linux distros** for switching from Windows to Linux; however, Kubuntu and Elementary OS also put up a competition. It all depends upon the user. Most of the time, [Linux Mint][21] comes out the winner. I will really recommend it to you if this is your first time with Linux. Go ahead and grab your Linux today and be part of the change.
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/best-windows-like-linux-distributions-for-new-linux-users
作者:[linuxandubuntu.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com/home/best-windows-like-linux-distributions-for-new-linux-users
[1]:http://www.linuxandubuntu.com/home/linux-mint-181-sarah-one-of-the-finest-linux-distro-ever
[2]:http://www.linuxandubuntu.com/home/cinnamon-desktop-the-best-desktop-environment-for-new-linux-user
[3]:http://www.linuxandubuntu.com/home/zorin-os-12-review-linuxandubuntu-distro-review-of-the-week
[4]:http://www.linuxandubuntu.com/home/chaletos-new-beautiful-linux-distribution-based-on-xubuntu-and-a-clone-of-windows
[5]:http://www.linuxandubuntu.com/home/best-windows-like-linux-distributions-for-new-linux-users
[6]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-mint-for-new-linux-users_orig.jpg
[7]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/zorin-os-for-windows-users_orig.jpg
[8]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/robolinux-for-new-users_orig.jpg
[9]:https://www.robolinux.org/
[10]:https://www.robolinux.org/stealth-vm-info/
[11]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/chalet-os-for-new-users_orig.jpg
[12]:http://www.linuxandubuntu.com/home/best-windows-like-linux-distributions-for-new-linux-users
[13]:http://www.linuxandubuntu.com/home/best-windows-like-linux-distributions-for-new-linux-users#comments
[14]:http://www.linuxandubuntu.com/home/linux-mint-181-sarah-one-of-the-finest-linux-distro-ever
[15]:http://www.developer.linuxmint.com/
[16]:http://www.linuxandubuntu.com/linux-themes/mintilicious-cinnamon-theme-install-in-linux-mint
[17]:http://www.linuxandubuntu.com/linux-apps-releases/cinnamon-2610
[18]:https://zorinos.com/
[19]:https://sites.google.com/site/chaletoslinux/home
[20]:http://www.linuxandubuntu.com/home/how-to-create-a-linux-distro
[21]:http://www.linuxandubuntu.com/home/linux-mint-18-sarah-review

View File

@ -1,94 +0,0 @@
Generate random data for your applications with Elizabeth
============================================================
![Generate random data for your applications with Elizabeth](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc_520x292_opendata_0613mm.png?itok=mzC0Tb28 "Generate random data for your applications with Elizabeth")
Image by : Opensource.com
_Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. _
No, I've not had my article hijacked by a [Lorem ipsum][2] generator. For this month's Nooks & Crannies column, I found an interesting little Python library to help developers generate random data for their applications. It's called [Elizabeth][3].
Written by Líkið Geimfari, and licensed under the MIT license, Elizabeth has a set of 18 data providers in 21 different locales that you can use to generate random information, including names and personal characteristics, addresses, text data, transportation information, networking and Internet social media data, numbers, and much more. Installation requires [Python 3.2][4] or higher, and you can either install it using **pip**, or from the **git** repository.
For my test drive, I installed with pip, on a fresh [Debian][5] Jessie box. You'll need to **apt-get install python3-pip**, which will install Python and needed dependencies. Then **pip install elizabeth**, and you're ready to use it.
Just for giggles, let's generate some random data on a person in the Python interactive interpreter:
```
>>> from elizabeth import Personal
>>> p=Personal('en')
>>> p.full_name(gender="male")
'Elvis Herring'
>>> p.blood_type()
'B+'
>>> p.credit_card_expiration_date()
'09/17'
>>> p.email(gender='male')
'jessie7517@gmail.com'
>>> p.favorite_music_genre()
'Ambient'
>>> p.identifier(mask='13064########')
'1306420450944'
>>> p.sexual_orientation()
'Heterosexual'
>>> p.work_experience()
39
>>> p.occupation()
'Senior System Designer'
>>>
```
Using it in your code works just the same way—create an object, and then call the methods you want to fill in your data.
There are 18 different generator tools built into Elizabeth, and adding a new one is not at all difficult; you just have to define the routines that get the data from a JSON collection of values. Here's some random text string generation, again in the interpreter:
```
>>> from elizabeth import Text
>>> t=Text('en')
>>> t.swear_word()
'Rat-fink'
>>> t.quote()
'Let them eat cake.'
>>> t.words(quantity=20)
['securities', 'keeps', 'accessibility', 'barbara', 'represent', 'hentai', 'flower', 'keys', 'rpm', 'queen', 'kingdom', 'posted', 'wearing', 'attend', 'stack', 'interface', 'quite', 'elementary', 'broadcast', 'holland']
>>> t.sentence()
'She spent her earliest years reading classic literature, and writing poetry.'
```
It's not a difficult exercise to use Elizabeth to populate a [SQLite][6] or other database you might need for development or testing. The introductory documentation gives an example for a medical application using the [Flask][7] lightweight web framework.
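As a quick illustration of that idea, here is a minimal sketch using Python's built-in sqlite3 module; the database file and table layout are my own invention, but the provider methods are exactly the ones demonstrated above:
```
import sqlite3

from elizabeth import Personal

p = Personal('en')

# Hypothetical database and schema, purely for illustration.
conn = sqlite3.connect('people.db')
conn.execute('CREATE TABLE IF NOT EXISTS people (name TEXT, email TEXT, occupation TEXT)')

# Generate 100 fake records with the same methods used interactively above.
rows = [(p.full_name(gender='male'), p.email(gender='male'), p.occupation())
        for _ in range(100)]
conn.executemany('INSERT INTO people VALUES (?, ?, ?)', rows)
conn.commit()
conn.close()
```
Swapping in any of the other providers works the same way: create the object, call its methods, and write the results wherever your test fixtures live.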
I'm very impressed with Elizabeth—it's super-fast, lightweight, easily extensible, and the community, while small, is active and engaged. As of this writing, there have been 25 committers to the project, and issues are being handled swiftly. The [full documentation][8] for Elizabeth is easy to read and follow, and provides an extensive API reference, at least for US English.
I tried tinkering with the links to see whether documentation was available in other languages, but I didn't have any success. Because the APIs differ in non-English locales, documenting those variations would be extremely helpful for users. To be fair, it's not terribly hard to read the code and find out what methods are available, even if your Python-fu is not strong. Another glaring omission, for me, was the absence of Arabic or Hebrew locale test data. These are notable right-to-left languages, and for developers who are trying to internationalize their applications, proper handling of these languages is a major hurdle. Tools like Elizabeth that can assist with that effort are great to have.
For developers needing sample data for their applications, Elizabeth is a valuable tool, and for those trying to create truly multilingual, localizable applications, it could be a treasure.
--------------------------------------------------------------------------------
作者简介:
D Ruth Bavousett - D Ruth Bavousett has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite.Ruth is currently a Perl Developer at cPanel in Houston, and also serves as chief of staff for two cats.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/2/elizabeth-python-library
作者:[D Ruth Bavousett][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/article/17/2/elizabeth-python-library?rate=kuXZVuHCdEv_hrxRnK1YQctlsTJeFJLcVx3Nf2VIW38
[2]:https://en.wikipedia.org/wiki/Lorem_ipsum
[3]:https://github.com/lk-geimfari/elizabeth
[4]:https://www.python.org/
[5]:https://www.debian.org/
[6]:https://sqlite.org/
[7]:https://flask.pocoo.org/
[8]:http://elizabeth.readthedocs.io/en/latest/index.html
[9]:https://opensource.com/user/36051/feed
[10]:https://opensource.com/article/17/2/elizabeth-python-library#comments
[11]:https://opensource.com/users/druthb

View File

@ -1,3 +1,5 @@
GHLandy Translating
How to change the Linux Boot Splash screen
============================================================

View File

@ -1,70 +0,0 @@
Using Scripting Languages in IoT: Challenges and Approaches
============================================================
![Scripting IoT](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/scripting-languages-iot.jpg?itok=d6uog0Ss "Scripting IoT")
At the upcoming Embedded Linux Conference + OpenIoT Summit, Paul Sokolovsky will discuss some of the challenges of using scripting languages in embedded development. [Creative Commons Zero][2] Pixabay
Scripting languages (aka Very High-Level Languages or VHLLs), such as Python, PHP, and JavaScript are commonly used in desktop, server, and web development. And, their powerful built-in functionality lets you develop small useful applications with little time and effort, says Paul Sokolovsky, IoT engineer at Linaro. However, using VHLLs for deeply embedded development is a relatively recent twist in IoT.
![Paul Sokolovsky](https://www.linux.com/sites/lcom/files/styles/floated_images/public/paul-sokolovsky-2014-09-21.jpg?itok=nUlGjxf3 "Paul Sokolovsky")
Paul Sokolovsky, IoT engineer at Linaro [Used with permission][1]
At the upcoming [Embedded Linux Conference][6] + [OpenIoT Summit][7], Sokolovsky will discuss the challenges of using VHLLs in embedded development and compare different approaches, based on the  examples of MicroPython and JerryScript + Zephyr.js projects. We talked with Sokolovsky to get more information.
**Linux.com: Can you please give our readers some background on VHLLs?**
Paul Sokolovsky: Very High Level Languages have been a part of the computer science and information technology landscape for several decades now. Perhaps the first popular scripting language was the Unix shell (sh), although, due to its modest feature set, it's rarely considered a VHLL and more often a domain-specific language. However, the first truly record-breaking VHLLs were Perl (1987) and Tcl (1988), soon followed by Python (1991), Ruby (1995), PHP (1995), JavaScript (1995), and many others.
The distinctive features of VHLLs are their interpreted nature (from the user's point of view, there may be sophisticated compilers inside), built-in availability of powerful data types like arbitrary-sized lists and mappings, sizable standard library, and external modules system allowing users to access even larger third-party libraries. All that is coupled with a general easy feel (less typing, no build times, etc.) and an easy learning curve.
**Linux.com: What are the benefits of these languages for development?**
Sokolovsky: The benefits stem from the features described above. One can start with a scripting language quite easily and learn it quickly. Many VHLLs offer a powerful interactive mode, so you don't need to read thick manuals to get started but can explore and experiment right away. Powerful built-in functionality allows you to develop small useful applications -- scripts -- with little time and effort (that's where the "scripting languages" name came from). Moving to larger applications, vast third-party libraries and an easy-to-use module system make developing them streamlined and productive as well.
**Linux.com: How does scripting for embedded platforms differ from development for other platforms?**
Sokolovsky: With all the exciting capabilities of VHLLs discussed above, there's an obvious idea -- why can't we enjoy all (or at least some) of their benefits when developing for embedded devices? And by "embedded devices" I mean here not just small Linux systems with 8-32MB of RAM, but deeply embedded systems running on microcontrollers (MCUs) with mere kilobytes of memory. Small, and sometimes really scarce, resources definitely add complexity to this idea. Another issue is device access and interaction. Embedded devices usually don't have displays and keyboards, but fortunately the answer has been known for decades thanks to Unix -- just use a terminal connection over a serial line (UART). Of course, on the host side, it can be hidden behind a graphical IDE, which some users prefer.
So, with all the differences the embedded devices have, the idea is to provide as familiar a working environment as possible. That's on one side of the spectrum and, on the other, the idea is to make it as scaled down as possible to accommodate even the smallest of devices. These conflicting aims require embedded VHLLs implementations to be highly configurable, to adjust for the needs of different projects and hardware.
**Linux.com: What are the specific challenges of using these languages for IoT? How do you address memory constraints, for example?**
Sokolovsky: It's definitely true that the interpreter consumes scarce hardware resources. But nowadays the most precious resource is the human time. Whether you are an R&D engineer, a maker with only a few hours on weekend, a support engineer overwhelmed with bugs and security issues, or a project manager planning a product -- you likely don't have extra time on your hands. The idea is to deliver the productivity of VHLLs into the hands of embedded engineers.
Nowadays, the state of the art is very much an enabler here. It's fair to say that, even for microcontroller units (MCUs), the average is now 16-32KB of RAM and 128-256K of ROM. That's just enough to host a core interpreter, a no-nonsense subset of standard library types, some hardware drivers, and a small -- but still useful -- user application. If you go slightly above that middle line, capabilities rise rapidly -- it's actually a well-known trick from the 1970s that using custom bytecode/pcode lets you achieve greater code/feature density than raw machine code.
There are a lot of challenges on that road, scarcity of RAM being the main one. I write these words on a laptop with 16GB of RAM (and there are still slowdowns due to swapping), and the 16KB mentioned above is a million times less! And yet, by using carefully chosen algorithms and coding techniques, it's possible to implement a scripting language that can execute simple applications in that amount of RAM, and fairly complex ones in 128-256K.
There are many technical challenges to address (and they are being successfully addressed), and there isn't space to cover them here. Instead, my presentation at OpenIoT Summit will cover the experiences and achievements of two embedded scripting languages: MicroPython (a Python 3 language subset) and Zephyr.js (a JavaScript/Node.js subset), both running on top of The Linux Foundation's Zephyr RTOS, which is expected to do for the IoT industry what Linux did for the mobile and server industries. (The slides will be available afterwards for people who can't attend OpenIoT Summit.)
**Linux.com: Can you give us some examples of applications for which VHLLs are most appropriate? And for which they are inappropriate?**
Sokolovsky: Fairly speaking, many of the bright prospects above still involve a lot of wishful thinking in the embedded space (or, hopefully, a self-fulfilling prophecy). Where VHLLs in embedded can deliver right now are rapid prototyping, and the educational/maker markets, where easy learnability and usage are a must. There are pioneers that use VHLLs in other areas, but generally, it requires more investment into infrastructure and tools. It's important that such investment be guided by open source principles and be shared, or otherwise it undermines the idea that VHLLs can save their users time and effort.
With that in mind, embedded VHLLs are full-fledged ("Turing complete") languages suitable for any type of application, subject to hardware constraints. For example, if an MCU is below the thresholds stated above, or is a legacy 8-bit micro, good old C is the only choice you can enjoy. Another limit is when you really want to get the most out of the hardware -- C or Assembler is the right choice. But, here's a surprise -- the developers of embedded VHLLs thought about that, too, and, for example, MicroPython allows you to combine Python and Assembler in one application.
Where embedded VHLLs excel is configurability and (re)programmability, coupled with flexible connectivity support. That's exactly what IoT and smart devices are all about, and many IoT applications don't have to be complex to be useful. Consider, for example, a smart button you can stick anywhere to do any task. But, what if you need to adjust the double-click time? With a scripting language, you can. Maybe you didn't think about triple-clicks at all, but now find that even four clicks would be useful in some cases. With a scripting language you can change that -- easily.
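As a concrete (if simplified) sketch of that smart button, here is what it might look like in MicroPython; this is my own illustration rather than code from the talk, and the pin number, active-low wiring, and 400 ms click window are assumptions you would adjust for a real board:
```
import time
from machine import Pin

CLICK_WINDOW_MS = 400                 # the tweakable multi-click timeout
button = Pin(2, Pin.IN, Pin.PULL_UP)  # hypothetical pin, wired active-low

def count_clicks():
    # Block until the first press, then count presses that arrive
    # within CLICK_WINDOW_MS of the previous one.
    clicks = 0
    last = time.ticks_ms()
    while clicks == 0 or time.ticks_diff(time.ticks_ms(), last) < CLICK_WINDOW_MS:
        if button.value() == 0:       # button pressed
            clicks += 1
            last = time.ticks_ms()
            while button.value() == 0:
                time.sleep_ms(10)     # crude debounce: wait for release
        time.sleep_ms(5)
    return clicks

while True:
    n = count_clicks()
    print('saw', n, 'click(s)')       # dispatch double/triple/quadruple clicks here
```
Changing the double-click (or quadruple-click) behavior is a one-line edit at the REPL -- which is precisely the reprogrammability argument being made here.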
_Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 - 23, 2017 in Portland, Oregon. [Check out over 130 sessions][5] on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things._
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/elcna/2017/2/using-scripting-languages-iot-challenges-and-approaches
作者:[AMBER ANKERHOLZ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/aankerholz
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://www.linux.com/files/images/paul-sokolovsky-2014-09-21jpg
[4]:https://www.linux.com/files/images/scripting-languages-iotjpg
[5]:http://events.linuxfoundation.org/events/embedded-linux-conference/program/schedule?utm_source=linux&utm_campaign=elc17&utm_medium=blog&utm_content=video-blog
[6]:http://events.linuxfoundation.org/events/embedded-linux-conference
[7]:https://events.linuxfoundation.org/events/openiot-summit/program/schedule

View File

@ -1,3 +1,4 @@
申请翻译
Understanding the difference between sudo and su
============================================================

View File

@ -1,216 +0,0 @@
How to create your own Linux Distribution with Yocto on Ubuntu
============================================================
### On this page
1. [Prerequisites for the development machine ][1]
2. [Yocto Compilation and Building Process][2]
In this article, our focus is the creation of a minimal Linux distribution using the Yocto project on the Ubuntu platform. The Yocto project is very famous in the embedded Linux world because of its flexibility and ease of use. The purpose of the Yocto project is to create a Linux distro for manufacturers of embedded hardware and software. A new minimal Linux distro will be created for qemu (a basic software emulator) as the target machine, and we will run it in qemu.
### Prerequisites for the development machine 
* At least 4 - 6 GB RAM.
* Recent Ubuntu OS (16.04 LTS in this case).
* At least 60-80 GB free space on the disk.
* Installation of the following packages before creating the new Linux distro.
* Download of the latest stable branch of Yocto (Poky, which is the minimal development environment).
apt-get update
apt-get install wget git-core unzip make gcc g++ build-essential subversion sed autoconf automake texi2html texinfo coreutils diffstat python-pysqlite2 docbook-utils libsdl1.2-dev libxml-parser-perl libgl1-mesa-dev libglu1-mesa-dev xsltproc desktop-file-utils chrpath groff libtool xterm gawk fop
[
![Install prerequisites for Yocto](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/1-pre_requisite_packages-1.png)
][3]
As shown below, almost 1 GB of disk space is needed to install the required development packages.
[
![Install the development packages](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/2-pre_requisite_packages-2.png)
][4]
In this tutorial, the "morty" stable release of poky is cloned on the system.
 git clone -b morty git://git.yoctoproject.org/poky.git
[
![install poky](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/download_morty_of_yocto.png)
][5]
Go inside the "poky" directory and run the following command to set/export some variables for yocto development.
source oe-init-build-env
As shown below, after running the open embedded (oe) build environment script, the path in the terminal changes to a "build" directory for the further configuration and compilation of the new distribution.
[
![Prepare OE build environment](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/source_environment_script.png)
][6]
The above screenshot shows that the "local.conf" file has been created inside the "conf" directory. This is the configuration file for Yocto, which specifies details such as the target machine and the SDK for the desired architecture.
As shown below, set the target machine to "qemux86-64".
[
![Set the target machine type](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/selected_target.png)
][7]
Uncomment the following parameters in the "local.conf" file, as shown in the screenshots.
DL_DIR ?= "${TOPDIR}/downloads"
[
![Configure local.conf file](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/uncheck_Download_parameters.png)
][8]
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
[
![Set SSTATE_DIR](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/uncheck_sstate_parametes.png)
][9]
TMPDIR ?= "${TOPDIR}/tmp"
[
![Set TMPDIR](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/tempdir_uncheck_paramerter.png)
][10]
PACKAGE_CLASSES ?= "package_rpm"
SDKMACHINE ?= "i686"
[
![Set PACKAGE_CLASSES and SDKMACHINE](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/sdk_and_package_selection.png)
][11]
As shown below, to set a blank password for the Yocto-based Linux, include the following parameter in the local.conf file. Otherwise, the user will not be able to log in to the new distro.
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
[
![Set debug-tweaks option](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/extra-features_for_blank_password.png)
][12]
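For reference, after the edits above, the modified portion of the local.conf file should look roughly like this (the MACHINE line corresponds to the target machine selected earlier; the rest are the parameters uncommented or added above):
```
MACHINE ?= "qemux86-64"
DL_DIR ?= "${TOPDIR}/downloads"
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
TMPDIR ?= "${TOPDIR}/tmp"
PACKAGE_CLASSES ?= "package_rpm"
SDKMACHINE ?= "i686"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
```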
We are not using any GUI tool such as Toaster (hob is no longer supported) to create the Linux OS.
### Yocto Compilation and Building Process
Now run the following bitbake command to start the download and compilation of packages for the selected target machine.
bitbake core-image-minimal
[
![Start bitbake](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/bitbake_coreimageminimal.png)
][13]
It is important to run the above command as a normal Linux user, not the root user. As shown in the following screenshot, an error is generated when you run the bitbake command as the root user.
[
![Do not run bitbake as root](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/dont_user_as_a_root.png)
][14]
As a normal user, source the environment variables script (oe-init-build-env) again and re-run the same command to start the download and compilation process.
[
![rerun commands](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/runniing_bitbake_again-normal_user.png)
][15]
As shown below, the first step of the build utility is to parse the recipes.
[
![Parse the build recipes](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/parsing-receipe.png)
][16]
The following screenshot shows the completion of the parsing step of the build script. It also shows the details of the build system on which the new Yocto-based distro will be generated.
[
![Building proceeds](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/output_of_parsing.png)
][17]
After downloading the SDK and the necessary libraries, the next step is to download and compile the packages. The following screenshot shows the task list for the new distribution. This step will take 2-3 hours, because it first downloads the required packages and then compiles them for the new Linux distribution.
[
![Compilation will take several hours](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/task_list.png)
][18]
The following screenshot shows the completion of the task list.
[
![](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/downloaded-all_packages_and_compiled.png)
][19]
The newly compiled images for the target machine type "qemux86-64" are inside the "build/tmp/deploy/images/qemux86-64" directory, as shown below.
[
![Build complete](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/new_linux_compiled_under_qemux86_64.png)
][20]
To boot the new image in the qemu emulator, run the runqemu command (runqemu qemux86-64). As shown below, this command will produce an error if run in Putty, as it needs a graphical display to open the qemu window.
[
![command error in putty](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/error_on_putty.png)
][21]
The runqemu command is run again inside a terminal via RDP on the Ubuntu platform, where it works fine.
[
![Command works fine in rdp](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/runqemu_command.png)
][22]
A separate window opens with the qemu emulator running the new Yocto-based Linux distro.
[
![Open Quemu emulator](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/new_linux_inside_the_qemu_.png)
][23]
The login screen of the new distro is shown below; it also shows the reference version of the Yocto project. The default username is root, with a blank password.
[
![Linux distribution started](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/reference_distro.png)
][24]
Finally, log in to the new distro with the root username and an empty password. As shown in the following screenshot, basic commands (date, ifconfig, and uname) run fine in this minimal version of Linux.
[
![Test the Linux distribution](https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/inside_new_linux_distro_running_on_qemu_3.png)
][25]
The purpose of this article was to walk through the procedure for creating a new Linux distribution using the Yocto project.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/
作者:[Ahmad][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/
[1]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/#prerequisites-for-the-development-machinenbsp
[2]:https://www.howtoforge.com/tutorial/how-to-create-your-own-linux-distribution-with-yocto-on-ubuntu/#yocto-compilation-and-building-process
[3]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/1-pre_requisite_packages-1.png
[4]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/2-pre_requisite_packages-2.png
[5]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/download_morty_of_yocto.png
[6]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/source_environment_script.png
[7]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/selected_target.png
[8]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_Download_parameters.png
[9]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/uncheck_sstate_parametes.png
[10]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/tempdir_uncheck_paramerter.png
[11]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/sdk_and_package_selection.png
[12]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/extra-features_for_blank_password.png
[13]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/bitbake_coreimageminimal.png
[14]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/dont_user_as_a_root.png
[15]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runniing_bitbake_again-normal_user.png
[16]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/parsing-receipe.png
[17]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/output_of_parsing.png
[18]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/task_list.png
[19]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/downloaded-all_packages_and_compiled.png
[20]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_compiled_under_qemux86_64.png
[21]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/error_on_putty.png
[22]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/runqemu_command.png
[23]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/new_linux_inside_the_qemu_.png
[24]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/reference_distro.png
[25]:https://www.howtoforge.com/images/how-to-create-your-own-linux-distribution-with-yocto/big/inside_new_linux_distro_running_on_qemu_3.png

View File

@ -1,280 +0,0 @@
How to Install and Configure FTP Server in Ubuntu
============================================================
FTP (File Transfer Protocol) is a relatively old and much-used standard network protocol for uploading/downloading files between two computers over a network. However, FTP is insecure by design, because it transmits data together with user credentials (username and password) without encryption.
Warning: If you are planning to use FTP, consider configuring the FTP connection with SSL/TLS (covered in the next article). Otherwise, it's always better to use secure FTP such as [SFTP][1].
**Suggested Read:** [How to Install and Secure FTP Server in CentOS 7][2]
In this tutorial, we will show how to install, configure, and secure an FTP server (VSFTPD, in full "Very Secure FTP Daemon") in Ubuntu, to provide strong protection against FTP vulnerabilities.
### Step 1: Installing VsFTP Server in Ubuntu
1. First, we need to update the system package sources list and then install the VSFTPD binary package as follows:
```
$ sudo apt-get update
$ sudo apt-get install vsftpd
```
2. Once the installation completes, the service will initially be disabled, so we need to start it manually for the time being, and also enable it to start automatically from the next system boot:
```
------------- On SystemD -------------
# systemctl start vsftpd
# systemctl enable vsftpd
------------- On SysVInit -------------
# service vsftpd start
# chkconfig --level 35 vsftpd on
```
3. Next, if you have the [UFW firewall][3] enabled (it's not enabled by default) on the server, you have to open ports 21 and 20, where the FTP daemon listens, in order to allow access to FTP services from remote machines, and then add the new firewall rules as follows:
```
$ sudo ufw allow 20/tcp
$ sudo ufw allow 21/tcp
$ sudo ufw status
```
### Step 2: Configuring and Securing VsFTP Server in Ubuntu
4. Let's now perform a few configurations to set up and secure our FTP server. First, we will create a backup of the original config file /etc/vsftpd.conf like so:
```
$ sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.orig
```
Next, let's open the vsftpd config file.
```
$ sudo vi /etc/vsftpd.conf
OR
$ sudo nano /etc/vsftpd.conf
```
Add/modify the following options with these values:
```
anonymous_enable=NO # disable anonymous login
local_enable=YES # permit local logins
write_enable=YES # enable FTP commands which change the filesystem
local_umask=022 # value of umask for file creation for local users
dirmessage_enable=YES # enable showing of messages when users first enter a new directory
xferlog_enable=YES # a log file will be maintained detailing uploads and downloads
connect_from_port_20=YES # use port 20 (ftp-data) on the server machine for PORT style connections
xferlog_std_format=YES # keep standard log file format
listen=NO # prevent vsftpd from running in standalone mode
listen_ipv6=YES # vsftpd will listen on an IPv6 socket instead of an IPv4 one
pam_service_name=vsftpd # name of the PAM service vsftpd will use
userlist_enable=YES # enable vsftpd to load a list of usernames
tcp_wrappers=YES # turn on tcp wrappers
```
5. Now, configure VSFTPD to allow/deny FTP access to users based on the user list file /etc/vsftpd.userlist.
Note that by default, users listed in userlist_file=/etc/vsftpd.userlist are denied login access with the `userlist_deny=YES` option when `userlist_enable=YES` is set.
But, the option `userlist_deny=NO` twists the meaning of the default setting, so only users whose username is explicitly listed in userlist_file=/etc/vsftpd.userlist will be allowed to login to the FTP server.
```
userlist_enable=YES # vsftpd will load a list of usernames, from the filename given by userlist_file
userlist_file=/etc/vsftpd.userlist # stores usernames.
userlist_deny=NO
```
Important: When users log in to the FTP server, they are placed in a chroot jail; this is the local root directory, which acts as their home directory for the FTP session only.
Next, we will look at two possible scenarios for setting the chroot jail (local root) directory, as explained below.
6. At this point, let's add/modify/uncomment the following two options to [restrict FTP users to their Home directories][4].
```
chroot_local_user=YES
allow_writeable_chroot=YES
```
Importantly, the option `chroot_local_user=YES` means local users will be placed in a chroot jail, which is their home directory by default after login.
We must also understand that, for security reasons, VSFTPD does not permit the chroot jail directory to be writable by default; however, we can use the option allow_writeable_chroot=YES to override this restriction.
Save the file and close it. Then we have to restart VSFTPD services for the changes above to take effect:
```
------------- On SystemD -------------
# systemctl restart vsftpd
------------- On SysVInit -------------
# service vsftpd restart
```
### Step 3: Testing VsFTP Server in Ubuntu
7. Now we will test the FTP server by creating an FTP user with the [useradd command][5] as follows:
```
$ sudo useradd -m -c "Aaron Kili, Contributor" -s /bin/bash aaronkilik
$ sudo passwd aaronkilik
```
Then, we have to explicitly list the user aaronkilik in the file /etc/vsftpd.userlist with the [echo command][6] and tee command as below:
```
$ echo "aaronkilik" | sudo tee -a /etc/vsftpd.userlist
$ cat /etc/vsftpd.userlist
```
8. Now it's time to test whether the configurations above are functioning as required. We will begin by testing anonymous logins; we can clearly see from the output below that anonymous logins are not permitted on the FTP server:
```
# ftp 192.168.56.102
Connected to 192.168.56.102 (192.168.56.102).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.102:aaronkilik) : anonymous
530 Permission denied.
Login failed.
ftp> bye
221 Goodbye.
```
9. Next, let's test whether a user not listed in the file /etc/vsftpd.userlist will be granted permission to log in; as the output that follows shows, it will not:
```
# ftp 192.168.56.102
Connected to 192.168.56.102 (192.168.56.102).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:root) : user1
530 Permission denied.
Login failed.
ftp> bye
221 Goodbye.
```
10. Now we will carry out a final test to determine whether a user listed in the file /etc/vsftpd.userlist is actually placed in his/her home directory after login. As the output below shows, this is the case:
```
# ftp 192.168.56.102
Connected to 192.168.56.102 (192.168.56.102).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.102:aaronkilik) : aaronkilik
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
```
[
![Verify FTP Login in Ubuntu](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Login-in-Ubuntu.png)
][7]
Verify FTP Login in Ubuntu
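If you prefer to script these checks rather than type them interactively, here is a small sketch using Python's standard ftplib module; the IP address and account names mirror the examples above, and the password placeholder is an assumption you must replace:
```
from ftplib import FTP, error_perm

HOST = '192.168.56.102'

def try_login(user, password=''):
    try:
        with FTP(HOST, timeout=10) as ftp:
            ftp.login(user, password)
            print(user, '-> login OK, working directory:', ftp.pwd())
    except (error_perm, OSError) as err:
        print(user, '-> failed:', err)

try_login('anonymous')                    # should fail: anonymous_enable=NO
try_login('user1', 'whatever')            # should fail: not in vsftpd.userlist
try_login('aaronkilik', 'YOUR_PASSWORD')  # should succeed, landing in the chroot
```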
Warning: Setting the option `allow_writeable_chroot=YES` can be dangerous; it has possible security implications, especially if the users have upload permission or, worse, shell access. Only use it if you know exactly what you are doing.
We should note that these security implications are not specific to VSFTPD; they can also affect all other FTP daemons that offer to put local users in chroot jails.
For this reason, in the section below, we will explain a more secure method of setting a different, non-writable local root directory for a user.
### Step 4: Configure FTP User Home Directories in Ubuntu
11. Now, open the VSFTPD configuration file once more.
```
$ sudo vi /etc/vsftpd.conf
OR
$ sudo nano /etc/vsftpd.conf
```
and comment out the insecure option using the `#` character, as shown below:
```
#allow_writeable_chroot=YES
```
Next, create the alternative local root directory for the user (aaronkilik; yours is possibly different) and set the required ownership, removing write permission on this directory for all users:
```
$ sudo mkdir /home/aaronkilik/ftp
$ sudo chown nobody:nogroup /home/aaronkilik/ftp
$ sudo chmod a-w /home/aaronkilik/ftp
```
12. Then, create a directory under the local root with the appropriate permissions where the user will store his files:
```
$ sudo mkdir /home/aaronkilik/ftp/files
$ sudo chown -R aaronkilik:aaronkilik /home/aaronkilik/ftp/files
$ sudo chmod -R 0770 /home/aaronkilik/ftp/files/
```
Afterwards, add/modify the options below in the VSFTPD config file with their corresponding values:
```
user_sub_token=$USER # inserts the username in the local root directory
local_root=/home/$USER/ftp # defines any users local root directory
```
Save the file and close it. And restart the VSFTPD services with the recent settings:
```
------------- On SystemD -------------
# systemctl restart vsftpd
------------- On SysVInit -------------
# service vsftpd restart
```
13. Now, let's perform a final check to make sure that the user's local root directory is the FTP directory we created inside his home directory.
```
# ftp 192.168.56.102
Connected to 192.168.56.102 (192.168.56.102).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:aaronkilik) : aaronkilik
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
```
[
![FTP User Home Directory Login](http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login.png)
][8]
FTP User Home Directory Login
That's it! Remember to share your opinion about this guide via the comment form below, or to share any important information you have concerning the topic.
Last but not least, do not miss our next article, where we will describe how to [secure an FTP server using SSL/TLS][9] connections in Ubuntu 16.04/16.10. Until then, stay tuned to TecMint.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-ftp-server-in-ubuntu/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/sftp-command-examples/
[2]:http://www.tecmint.com/install-ftp-server-in-centos-7/
[3]:http://www.tecmint.com/how-to-install-and-configure-ufw-firewall/
[4]:http://www.tecmint.com/restrict-sftp-user-home-directories-using-chroot/
[5]:http://www.tecmint.com/add-users-in-linux/
[6]:http://www.tecmint.com/echo-command-in-linux/
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-Login-in-Ubuntu.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/FTP-User-Home-Directory-Login.png
[9]:http://www.tecmint.com/secure-ftp-server-using-ssl-tls-on-ubuntu/
[10]:http://www.tecmint.com/author/aaronkili/
[11]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[12]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,267 +0,0 @@
#rusking translating
Create a Shared Directory on Samba AD DC and Map to Windows/Linux Clients Part 7
============================================================
This tutorial will guide you on how to create a shared directory on Samba AD DC system, map this Shared Volume to Windows clients integrated into the domain via GPO and manage share permissions from Windows domain controller perspective.
It will also cover how to access and mount the file share from a Linux machine enrolled into domain using a Samba4 domain account.
#### Requirements:
1. [Create an Active Directory Infrastructure with Samba4 on Ubuntu][1]
### Step 1: Create Samba File Share
1. The process of creating a share on a Samba AD DC is a very simple task. First, create the directory you want to share via the SMB protocol and add the below permissions on the filesystem, in order to allow a Windows AD DC admin account to modify the share permissions according to the permissions Windows clients should see.
Assuming that the new file share on the AD DC would be the `/nas` directory, run the below commands to assign the correct permissions.
```
# mkdir /nas
# chmod -R 775 /nas
# chown -R root:"domain users" /nas
# ls -alh | grep nas
```
[
![Create Samba Shared Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][2]
Create Samba Shared Directory
2. After you've created the directory that will be exported as a share from the Samba4 AD DC, you need to add the following statements to the samba configuration file in order to make the share available via the SMB protocol.
```
# nano /etc/samba/smb.conf
```
Go to the bottom of the file and add the following lines:
```
[nas]
path = /nas
read only = no
```
[
![Configure Samba Shared Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][3]
Configure Samba Shared Directory
3. The last thing you need to do is restart the Samba AD DC daemon in order to apply the changes, by issuing the below command:
```
# systemctl restart samba-ad-dc.service
```
### Step 2: Manage Samba Share Permissions
4. We will be accessing this shared volume from Windows using domain accounts (users and groups) created on the Samba AD DC; the share is not meant to be accessed by Linux system users.
The permissions can be managed directly from Windows Explorer, in the same way permissions are managed for any folder in Windows Explorer.
First, log on to a Windows machine with a Samba4 AD account that has administrative privileges on the domain. In order to access the share from Windows and set the permissions, type the IP address, host name, or FQDN of the Samba AD DC machine in the Windows Explorer path field, preceded by two backslashes, and the share should be visible.
```
\\adc1
Or
\\192.168.1.254
Or
\\adc1.tecmint.lan
```
[
![Access Samba Share Directory from Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][4]
Access Samba Share Directory from Windows
5. To modify the permissions, just right-click on the share and choose Properties. Navigate to the Security tab and proceed with altering domain user and group permissions accordingly. Use the Advanced button to fine-tune the permissions.
[
![Configure Samba Share Directory Permissions](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][5]
Configure Samba Share Directory Permissions
Use the below screenshot as an excerpt on how to tune permissions for specific Samba AD DC authenticated accounts.
[
![Manage Samba Share Directory User Permissions](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][6]
Manage Samba Share Directory User Permissions
6. Another method you can use to manage the share permissions is Computer Management -> Connect to another computer.
Navigate to Shares, right-click on the share whose permissions you want to modify, choose Properties, and move to the Security tab. From here you can alter the permissions in any way you want, just as presented in the previous method.
[
![Connect to Samba Share Directory Machine](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][7]
Connect to Samba Share Directory Machine
[
![Manage Samba Share Directory Properties](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][8]
Manage Samba Share Directory Properties
[
![Assign Samba Share Directory Permissions to Users](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][9]
Assign Samba Share Directory Permissions to Users
### Step 3: Map the Samba File Share via GPO
7. To automatically mount the exported samba file share via domain Group Policy, first, on a machine with [RSAT tools installed][10], open the AD UC utility, right-click on your domain name and then choose New -> Shared Folder.
[
![Map Samba Share Folder](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][11]
Map Samba Share Folder
8. Add a name for the shared volume and enter the network path where your share is located, as illustrated in the image below. Hit OK when you've finished, and the share should now be visible in the right pane.
[
![Set Samba Shared Folder Name Location](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][12]
Set Samba Shared Folder Name Location
9. Next, open the Group Policy Management console, expand your domain's Default Domain Policy, and open it for editing.
In the GPM Editor, navigate to User Configuration -> Preferences -> Windows Settings, right-click on Drive Maps, and choose New -> Mapped Drive.
[
![Map Samba Share Folder in Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][13]
Map Samba Share Folder in Windows
10. In the new window, search for and add the network location of the share by pressing the button with three dots, check the Reconnect checkbox, add a label for the share, choose the letter for the drive, and hit the OK button to save and apply the configuration.
[
![Configure Network Location for Samba Share Directory](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][14]
Configure Network Location for Samba Share Directory
11. Finally, in order to force and apply GPO changes on your local machine without a system restart, open a Command Prompt and run the following command.
```
gpupdate /force
```
[
![Apply GPO Changes](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][15]
Apply GPO Changes
12. After the policy has been successfully applied on your machine, open Windows Explorer and the shared network volume should be visible and accessible, depending on what permissions you've granted on the share in the previous steps.
If the group policy is not forced from the command line, the share will become visible to other clients on your network after they reboot or log in to their systems again.
[
![Samba Shared Network Volume on Windows](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][16]
Samba Shared Network Volume on Windows
### Step 4: Access the Samba Shared Volume from Linux Clients
13. Linux users on machines that are enrolled in the Samba AD DC can also access or mount the share locally by authenticating to the system with a Samba account.
First, they need to make sure that the following samba client packages and utilities are installed on their systems, by issuing the below command:
```
$ sudo apt-get install smbclient cifs-utils
```
14. In order to list the exported shares your domain provides for a specific domain controller machine, use the below command:
```
$ smbclient -L your_domain_controller -U%
or
$ smbclient -L \\adc1 -U%
```
[
![List Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][17]
List Samba Share Directory in Linux
15. To interactively connect to a samba share from command line with a domain account use the following command:
```
$ sudo smbclient //adc/share_name -U domain_user
```
On the command line you can list the contents of the share, download or upload files to the share, or perform other tasks. Use ? to list all available smbclient commands.
[
![Connect Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][18]
Connect Samba Share Directory in Linux
16. To mount a samba share on a Linux machine use the below command.
```
$ sudo mount //adc/share_name /mnt -o username=domain_user
```
[
![Mount Samba Share Directory in Linux](http://www.tecmint.com/wp-content/plugins/lazy-load/images/1x1.trans.gif)
][19]
Mount Samba Share Directory in Linux
Replace the host, share name, mount point, and domain user accordingly. To check the mount afterwards, pipe the mount command through grep and filter for cifs (mount | grep cifs).
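If you want the share mounted persistently across reboots, an /etc/fstab entry along these lines should do it (a sketch using the same placeholders as above; a root-readable credentials file via the cifs credentials= option is safer than a plain-text password in fstab):
```
//adc/share_name  /mnt  cifs  username=domain_user,password=secret,_netdev  0  0
```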
As a final note, shares configured on a Samba4 AD DC work only with Windows access control lists (ACLs), not POSIX ACLs.
To achieve other capabilities for a network share, configure Samba as a domain member with file shares. Also, on an additional Domain Controller, [configure the Winbindd daemon][20] - Step Two - before you start exporting network shares.
--------------------------------------------------------------------------------
作者简介:
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.
--------------------------------------------------------------------------------
via: 网址
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/install-samba4-active-directory-ubuntu/
[2]:http://www.tecmint.com/wp-content/uploads/2017/02/Create-Samba-Shared-Directory.png
[3]:http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Samba-Shared-Directory.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/02/Access-Samba-Share-Directory-from-Windows.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Samba-Share-Directory-Permissions.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Manage-Samba-Share-Directory-User-Permissions.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Connect-to-Samba-Share-Directory-Machine.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Manage-Samba-Share-Directory-Properties.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Assign-Samba-Share-Directory-Permissions-to-Users.png
[10]:http://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/02/Set-Samba-Shared-Folder-Name-Location.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/02/Map-Samba-Share-Folder-in-Windows.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/02/Configure-Network-Location-for-Samba-Share-Directory.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/02/Apply-GPO-Changes.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/02/Samba-Shared-Network-Volume-on-Windows.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/02/List-Samba-Share-Directory-in-Linux.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/02/Connect-Samba-Share-Directory-in-Linux.png
[19]:http://www.tecmint.com/wp-content/uploads/2017/02/Mount-Samba-Share-Directory-in-Linux.png
[20]:http://www.tecmint.com/manage-samba4-active-directory-linux-command-line/
[21]:http://www.tecmint.com/author/cezarmatei/
[22]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[23]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,393 @@
The Perfect Server CentOS 7.3 with Apache, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3.1
============================================================
### This tutorial exists for these OS versions
* **CentOS 7.3**
* [CentOS 7.2][3]
* [CentOS 7.1][4]
* [CentOS 7][5]
### On this page
1. [1 Requirements][6]
2. [2 Preliminary Note][7]
3. [3 Prepare the server][8]
4. [4 Enable Additional Repositories and Install Some Software][9]
5. [5 Quota][10]
1. [Enabling quota on the / (root) partition][1]
2. [Enabling quota on a separate /var partition][2]
6. [6 Install Apache, MySQL, phpMyAdmin][11]
This tutorial shows the installation of ISPConfig 3.1 on a CentOS 7.3 (64Bit) server. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache web server, Postfix mail server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman, and many more.
### 1 Requirements
To install such a system you will need the following:
* A Centos 7.3 minimal server system. This can be a server installed from scratch as described in our [Centos 7.3 minimal server tutorial][12] or a virtual-server or root-server from a hosting company that has a minimal Centos 7.3 setup installed.
* A fast Internet connection.
### 2 Preliminary Note
In this tutorial, I use the hostname server1.example.com with the IP address 192.168.1.100 and the gateway 192.168.1.1. These settings might differ for you, so you have to replace them where appropriate.
Please note that HHVM and XMPP are not supported in ISPConfig for the CentOS platform yet. If you like to manage an XMPP chat server from within ISPConfig or use HHVM (Hip Hop Virtual Machine) in an ISPConfig website, then please use Debian 8 or Ubuntu 16.04 as server OS instead of CentOS 7.3.
### 3 Prepare the server
**Set the keyboard layout**
In case the keyboard layout of the server does not match your keyboard, you can switch to the right keyboard (in my case "de" for a German keyboard layout) with the localectl command:
`localectl set-keymap de`
To get a list of all available keymaps, run:
`localectl list-keymaps`
I want to install ISPConfig at the end of this tutorial, and ISPConfig ships with the Bastille firewall script that I will use as the firewall; therefore, I disable the default CentOS firewall now. Of course, you are free to leave the CentOS firewall on and configure it to your needs (but then you shouldn't use any other firewall later on, as it will most probably interfere with the CentOS firewall).
Run...
```
yum -y install net-tools
systemctl stop firewalld.service
systemctl disable firewalld.service
```
to stop and disable the CentOS firewall. It is ok if you get errors here; this just indicates that the firewall was not installed.
Then you should check that the firewall has really been disabled. To do so, run the command:
`iptables -L`
The output should look like this:
[root@server1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Or use the firewall-cmd command:
firewall-cmd --state
[root@server1 ~]# firewall-cmd --state
not running
[root@server1 ~]#
Now I will install the network configuration editor and the shell based editor "nano" that I will use in the next steps to edit the config files:
yum -y install nano wget NetworkManager-tui
If you did not configure your network card during the installation, you can do that now. Run...
nmtui
... and go to Edit a connection:
[
![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui1.png)
][13]
Select your network interface:
[
![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui2.png)
][14]
Then fill in your network details - disable DHCP and fill in a static IP address, a netmask, your gateway, and one or two nameservers, then hit Ok:
[
![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui3.png)
][15]
Next select OK to confirm the changes that you made in the network settings
[
![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui4.png)
][16]
and Quit to close the nmtui network configuration tool.
[
![](https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/nmtui5.png)
][17]
You should run
ifconfig
now to check if the installer got your IP address right:
```
[root@server1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fecd:cc52 prefixlen 64 scopeid 0x20
ether 00:0c:29:cd:cc:52 txqueuelen 1000 (Ethernet)
RX packets 55621 bytes 79601094 (75.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28115 bytes 2608239 (2.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
If your network card does not show up there, then it may not be enabled on boot. In this case, open the file /etc/sysconfig/network-scripts/ifcfg-ens33
nano /etc/sysconfig/network-scripts/ifcfg-ens33
and set ONBOOT to yes:
[...]
ONBOOT=yes
[...]
and reboot the server.
Check your /etc/resolv.conf if it lists all nameservers that you've previously configured:
cat /etc/resolv.conf
If nameservers are missing, run
nmtui
and add the missing nameservers again.
Now, on to the configuration...
**Adjusting /etc/hosts and /etc/hostname**
Next, we will edit /etc/hosts. Make it look like this:
nano /etc/hosts
```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.100 server1.example.com server1
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
```
Set the hostname in the /etc/hostname file. The file shall contain the fully qualified domain name (e.g. server1.example.com in my case) and not just the short name like "server1". Open the file with the nano editor:
nano /etc/hostname
And set the hostname in the file.
```
server1.example.com
```
Save the file and exit nano.
**Disable SELinux**
SELinux is a security extension of CentOS that should provide extended security. In my opinion you don't need it to configure a secure system, and it usually causes more problems than advantages (think of it after you have done a week of trouble-shooting because some service wasn't working as expected, and then you find out that everything was ok, only SELinux was causing the problem). Therefore I disable it (this is a must if you want to install ISPConfig later on).
Edit /etc/selinux/config and set SELINUX=disabled:
nano /etc/selinux/config
```
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
```
Afterwards we must reboot the system:
reboot
### 4 Enable Additional Repositories and Install Some Software
First, we import the GPG keys for software packages:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
Then we enable the EPEL repository on our CentOS system as lots of the packages that we are going to install in the course of this tutorial are not available in the official CentOS 7 repository:
yum -y install epel-release
yum -y install yum-priorities
Edit /etc/yum.repos.d/epel.repo...
nano /etc/yum.repos.d/epel.repo
... and add the line priority=10 to the [epel] section:
```
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[...]
```
Then we update our existing packages on the system:
yum -y update
Now we install some software packages that are needed later on:
yum -y groupinstall 'Development Tools'
### 5 Quota
(If you have chosen a different partitioning scheme than I did, you must adjust this chapter so that quota applies to the partitions where you need it.)
To install quota, we run this command:
yum -y install quota
Now we check if quota is already enabled for the filesystem where the website (/var/www) and maildir data (/var/vmail) is stored. In this example setup, I have one big root partition, so I search for ' / ':
mount | grep ' / '
[root@server1 ~]# mount | grep ' / '
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,noquota)
[root@server1 ~]#
If you have a separate /var partition, then use:
mount | grep ' /var '
instead. If the line contains the word "**noquota**", then proceed with the following steps to enable quota.
### Enabling quota on the / (root) partition
Normally you would enable quota in the /etc/fstab file, but if the filesystem is the root filesystem "/", then quota has to be enabled by a boot parameter of the Linux Kernel.
Edit the grub configuration file:
nano /etc/default/grub
search for the line that starts with GRUB_CMDLINE_LINUX and add rootflags=uquota,gquota to the command line parameters so that the resulting line looks like this:
```
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet rootflags=uquota,gquota"
```
Back up the current GRUB configuration and apply the changes by running the following commands:
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg_bak
grub2-mkconfig -o /boot/grub2/grub.cfg
and reboot the server.
reboot
Now check if quota is enabled:
mount | grep ' / '
[root@server1 ~]# mount | grep ' / '
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
[root@server1 ~]#
When quota is active, we can see "**usrquota,grpquota**" in the mount option list.
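Once quota is active, you can get a first per-user usage report on the XFS root filesystem with the xfs_quota tool (a quick sanity check; the exact output depends on your system):
xfs_quota -x -c 'report -h' /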
### Enabling quota on a separate /var partition
If you have a separate /var partition, then edit /etc/fstab and add ,uquota,gquota to the mount options of the /var partition (/dev/mapper/centos-var):
nano /etc/fstab
```
#
# /etc/fstab
# Created by anaconda on Sun Sep 21 16:33:45 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 1 1
/dev/mapper/centos-var /var xfs defaults,uquota,gquota 1 2
UUID=9ac06939-7e43-4efd-957a-486775edd7b4 /boot xfs defaults 1 3
/dev/mapper/centos-swap swap swap defaults 0 0
```
Then run
mount -o remount /var
quotacheck -avugm
quotaon -avug
to enable quota. If you get an error that there is no partition with quota enabled, then reboot the server before you proceed.
### 6 Install Apache, MySQL, phpMyAdmin
We can install the needed packages with one single command:
yum -y install ntp httpd mod_ssl mariadb-server php php-mysql php-mbstring phpmyadmin
To ensure that the server cannot be attacked through the [HTTPOXY][18] vulnerability, we will disable the HTTP_PROXY header in Apache globally.
Add the apache header rule at the end of the httpd.conf file:
echo "RequestHeader unset Proxy early" >> /etc/httpd/conf/httpd.conf
And restart httpd to apply the configuration change.
service httpd restart
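It is also a good idea to make sure that Apache and MariaDB come up automatically after a reboot:
systemctl enable httpd
systemctl enable mariadb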
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
作者:[ Till Brehm][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
[1]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-the-root-partition
[2]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#enabling-quota-on-a-separate-var-partition
[3]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-2-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/
[4]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-1-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3/
[5]:https://www.howtoforge.com/perfect-server-centos-7-apache2-mysql-php-pureftpd-postfix-dovecot-and-ispconfig3
[6]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-requirements
[7]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-preliminary-note
[8]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspprepare-the-server
[9]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#nbspenable-additional-repositories-and-install-some-software
[10]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-quota
[11]:https://www.howtoforge.com/tutorial/perfect-server-centos-7-3-apache-mysql-php-pureftpd-postfix-dovecot-and-ispconfig/#-install-apache-mysql-phpmyadmin
[12]:https://www.howtoforge.com/tutorial/centos-7-minimal-server/
[13]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui1.png
[14]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui2.png
[15]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui3.png
[16]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui4.png
[17]:https://www.howtoforge.com/images/perfect_server_centos_7_1_x86_64_apache2_dovecot_ispconfig3/big/nmtui5.png
[18]:https://www.howtoforge.com/tutorial/httpoxy-protect-your-server/
View File
@ -1,241 +0,0 @@
Setting Up a Secure FTP Server using SSL/TLS on Ubuntu
============================================================
In this tutorial, we will describe how to secure an FTP server (VSFTPD, short for “Very Secure FTP Daemon”) using SSL/TLS in Ubuntu 16.04/16.10.
If you're looking to set up a secure FTP server for CentOS-based distributions, you can read [Secure an FTP Server Using SSL/TLS on CentOS][2].
After following the various steps in this guide, you will have learned the fundamentals of enabling encryption services in an FTP server for secure data transfers.
#### Requirements
1. You must [Install and Configure a FTP Server in Ubuntu][1]
Before we move further, make sure that all commands in this article are run as root or with a [sudo privileged account][3].
### Step 1: Generating SSL/TLS Certificate for FTP on Ubuntu
1. We will begin by creating a subdirectory under /etc/ssl/ to store the SSL/TLS certificate and key files, if it doesn't exist:
```
$ sudo mkdir /etc/ssl/private
```
2. Now let's generate the certificate and key in a single file, by running the command below.
```
$ sudo openssl req -x509 -nodes -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -days 365 -newkey rsa:2048
```
The above command will prompt you to answer the questions below; don't forget to enter values that are applicable to your scenario.
```
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Lower Parel
Locality Name (eg, city) [Default City]:Mumbai
Organization Name (eg, company) [Default Company Ltd]:TecMint.com
Organizational Unit Name (eg, section) []:Linux and Open Source
Common Name (eg, your name or your server's hostname) []:tecmint
Email Address []:admin@tecmint.com
```
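If you prefer a non-interactive run, the same answers can be passed on the command line with the -subj option (a sketch with placeholder values, adjust them to your scenario):
```
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem \
  -subj "/C=IN/ST=Maharashtra/L=Mumbai/O=TecMint.com/OU=Linux and Open Source/CN=tecmint"
```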
### Step 2: Configuring VSFTPD to Use SSL/TLS on Ubuntu
3. Before we perform any VSFTPD configuration, those who have the [UFW firewall enabled][4] have to open port 990 for TLS connections, as well as the port range 40000-50000 for the passive ports we will set in the VSFTPD configuration file:
```
$ sudo ufw allow 990/tcp
$ sudo ufw allow 40000:50000/tcp
$ sudo ufw status
```
4. Now, open the VSFTPD config file and define the SSL details in it:
```
$ sudo vi /etc/vsftpd/vsftpd.conf
OR
$ sudo nano /etc/vsftpd/vsftpd.conf
```
Then, locate (or add) the option `ssl_enable` and set its value to YES to activate the use of SSL. Because TLS is more secure than SSL, we will restrict VSFTPD to TLS instead, by enabling the `ssl_tlsv1` option:
```
ssl_enable=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
```
5. Next, comment out the lines below using the `#` character as follows:
```
#rsa_cert_file=/etc/ssl/private/ssl-cert-snakeoil.pem
#rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
```
Afterwards, add the lines below to define the location of the SSL certificate and key file:
```
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
```
6. Now, we also have to prevent anonymous users from using SSL, then force all non-anonymous logins to use a secure SSL connection for data transfer and to send the password during login:
```
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
```
7. Furthermore, we can use the options below to fine-tune the security of the FTP server. With the option `require_ssl_reuse=YES`, all SSL data connections are required to exhibit SSL session reuse, proving that they know the same master secret as the control channel. Many FTP clients do not handle this well, so we should disable it.
```
require_ssl_reuse=NO
```
In addition, we can set which SSL ciphers VSFTPD will permit for encrypted SSL connections with the `ssl_ciphers` option. This will help frustrate any efforts by attackers who try to force a specific cipher which they possibly discovered vulnerabilities in:
```
ssl_ciphers=HIGH
```
8. Then, let's define the port range (min and max port) for passive ports.
```
pasv_min_port=40000
pasv_max_port=50000
```
9. To enable SSL debugging, meaning openSSL connection diagnostics are recorded to the VSFTPD log file, we can use the `debug_ssl` option:
```
debug_ssl=YES
```
Finally save the file and close it. Then restart VSFTPD service:
```
$ systemctl restart vsftpd
```
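To confirm that the daemon came back up cleanly and is listening (port 21 by default, assuming you did not change it), you can run:
```
$ systemctl status vsftpd
$ sudo ss -ltnp | grep vsftpd
```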
### Step 3: Verify FTP with SSL/TLS Connections on Ubuntu
10. After performing all the above configurations, test if VSFTPD is now using SSL/TLS connections by trying to [use FTP from the command line][5] as below.
From the output below, there is an error message telling us VSFTPD only permits users (non-anonymous) to log in from secure clients that support encryption services.
```
$ ftp 192.168.56.10
Connected to 192.168.56.10 (192.168.56.10).
220 Welcome to TecMint.com FTP service.
Name (192.168.56.10:root) : ravi
530 Non-anonymous sessions must use encryption.
Login failed.
421 Service not available, remote server has closed connection
ftp>
```
The command-line ftp client doesn't support encryption services, thus resulting in the error above. Therefore, to securely connect to an FTP server with encryption services enabled, we need an FTP client that supports SSL/TLS connections by default, such as FileZilla.
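That said, if you do want to test from the command line, a TLS-capable client such as lftp can do it. A minimal sketch, assuming the same host and user as above (with a self-signed certificate you may also need to add `set ssl:verify-certificate false`):
```
$ lftp -u ravi -e 'set ftp:ssl-force true; ls; bye' 192.168.56.10
```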
### Step 4: Install FileZilla on Clients to Connect to FTP Securely
FileZilla is a powerful, widely used cross-platform FTP client which supports FTP over SSL/TLS and more. To install FileZilla on a Linux client machine, use the following command.
```
--------- On Debian/Ubuntu ---------
$ sudo apt-get install filezilla
--------- On CentOS/RHEL/Fedora ---------
# yum install epel-release filezilla
--------- On Fedora 22+ ---------
$ sudo dnf install filezilla
```
12. Once the installation completes, open it and go to File => Site Manager (or press Ctrl+S) to get the Site Manager interface below.
[
![Filezilla Site Manager](http://www.tecmint.com/wp-content/uploads/2017/02/Filezilla-Site-Manager.png)
][6]
Filezilla Site Manager
13. Now, click on the New Site button to configure a new site/host connection, then define the host/site name, add the IP address, and define the protocol to use, encryption, and logon type as in the screenshot below (use values that apply to your scenario):
```
Host: 192.168.56.10
Protocol: FTP File Transfer Protocol
Encryption: Require explicit FTP over TLS #recommended
Logon Type: Ask for password #recommended
User: username
```
[
![Configure New FTP Site on Filezilla](http://www.tecmint.com/wp-content/uploads/2017/02/Configure-New-FTP-Site-on-Filezilla.png)
][7]
Configure New FTP Site on Filezilla
14. Then click on Connect from the interface above to enter the password, and then verify the certificate being used for the SSL/TLS connection, and click OK once more to connect to the FTP server:
[
![Verify FTP SSL Certificate](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate-1.png)
][8]
Verify FTP SSL Certificate
15. Now, you should have logged in successfully to the FTP server over a TLS connection; check the connection status section of the interface below for more information.
[
![Connected to Ubuntu FTP Server](http://www.tecmint.com/wp-content/uploads/2017/02/Connected-Ubuntu-FTP-Server.png)
][9]
Connected to Ubuntu FTP Server
16. Lastly, let's [transfer files from the local machine to the FTP server][10] in the files folder; take a look at the lower end of the FileZilla interface to view reports concerning the file transfers.
[
![Secure FTP File Transfer using Filezilla](http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-using-FTP.png)
][11]
Secure FTP File Transfer using Filezilla
That's all! Always remember that installing an FTP server without enabling encryption services has certain security implications. As we explained in this tutorial, you can configure an FTP server to use SSL/TLS connections to implement security in Ubuntu 16.04/16.10.
If you face any issues in setting up SSL/TLS on FTP server, do use the comment form below to share your problems or thoughts concerning this tutorial/topic.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/secure-ftp-server-using-ssl-tls-on-ubuntu/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/install-ftp-server-in-ubuntu/
[2]:http://www.tecmint.com/axel-commandline-download-accelerator-for-linux/
[3]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[4]:http://www.tecmint.com/how-to-install-and-configure-ufw-firewall/
[5]:http://www.tecmint.com/sftp-command-examples/
[6]:http://www.tecmint.com/wp-content/uploads/2017/02/Filezilla-Site-Manager.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/02/Configure-New-FTP-Site-on-Filezilla.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-FTP-SSL-Certificate-1.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Connected-Ubuntu-FTP-Server.png
[10]:http://www.tecmint.com/sftp-command-examples/
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Transfer-Files-Securely-using-FTP.png
[12]:http://www.tecmint.com/author/aaronkili/
[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[14]:http://www.tecmint.com/free-linux-shell-scripting-books/
View File
@ -0,0 +1,141 @@
# Microsoft Office Online gets better - on Linux, too
One of the core things that will make or break your Linux experience is the lack of the Microsoft Office suite, well, for Linux. If you are forced to use Office products to make a living, and this applies to a very large number of people, you might not be able to afford open-source alternatives. Get the paradox?
Indeed, LibreOffice is a [great][1] free program, but what if your client, customer or boss demands Word and Excel files? Can you, indeed, [afford any mistakes][2] or errors or glitches in converting these files from ODT or whatnot into DOCX and such, and vice versa? This is a very tricky set of questions. Unfortunately, for most people, technically, this means Linux is out of limits. Well, not quite.
![Teaser](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-teaser.png)
### Enter Microsoft Office Online, Enter Linux
For a number of years, Microsoft has had its cloud office offering. No news there. What makes this cool and relevant is, it's available through any modern browser interface, and this means Linux, too! I have also tested this [solution][3] a while back, and it worked great. I was able to use the product just fine, save files in their native format, or even export my documents in the ODF format, which is really nice.
I decided to revisit this suite and see how it's evolved in the past few years, and see whether it still likes Linux. My scapegoat for this experience was a [Fedora 25][4] instance, and I had the Microsoft Office Online running open in several tabs. I did this in parallel to testing [SoftMaker Office 2016][5]. Sounds like a lot of fun, and it was.
### First impressions
I have to say, I was pleased. The Office does not require any special plugins. No Silverlight or Flash or anything like that. Pure HTML and Javascript, and lots of it. Still, the interface is fairly responsive. The only thing I did not like was the gray background in Word documents, which can be exhausting after a while. Other than that, the suite was working fine, there were no delays, lags or weird, unexpected errors. But let us proceed slowly then, shall we.
The suite does require that you log in with an online account or a phone number - it does not have to be a Live or Hotmail email. Any one will do. If you also have a Microsoft [phone][6], then you can use the same account, and you will be able to sync your data. The account grants you 5 GB of OneDrive storage for free, as well. This is quite neat. Not stellar or super exciting, but rather decent.
![MS Office, welcome page](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-welcome-page.jpg)
You have access to a whole range of programs, including the mandatory trio - Word, Excel and Powerpoint, but then, the rest of the stuff is also available, including some new fancy stuff. Documents are auto-saved, but you can also download copies and convert to other formats, like PDF and ODF.
For me, this is excellent. And let me share a short personal story. I write my [fantasy][7] books using LibreOffice. But then, when I need to send them to a publisher for editing or proofreading, I need to convert them to DOCX. Alas, this requires Microsoft Office. With my [Linux problem solving book][8], I had to use Word from the start, because there was a lot of collaboration work required with my editor, which mandated the use of the proprietary solution. There are no emotions here. Only cold monetary and business considerations. Mistakes are not acceptable.
![Word, new document](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-word-new.png)
Having access to Office Online can give a lot of people the leeway they need for occasional, recreational use of Word and Excel and alike without having to buy the whole, expensive suite. If you are a daytime LibreOffice fan, you can be a nighttime party animal at the Microsoft Office Heartbreakers Club without a guilty conscience. When someone ships you a Word or Powerpoint file, you can upload and manipulate them online, then export as needed. Likewise, you can create your work online, send it to people with strict requirements, then grab yourself a copy in ODF, and work with LibreOffice if needed. The flexibility is quite useful, but that should not be your main driver. Still, for Linux people, this gives them a lot of freedom they do not normally have. Because even if they do want to use Microsoft Office, it simply isn't available as a native install.
![Save as, export options](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-save-as.jpg)
### Features, options, tools
I started hammering out a document - with all the fine trimming of a true jousting rouncer. I wrote some text, applied a style or three, hyperlinked some text, embedded an image, added a footnote, and then commented on my writing and even replied to myself in the best fashion of a poly-personality geek.
Apart from the gray background - and we will learn how to work around this in a nice yet nerdy way skunkworks style, because there isn't an option to tweak the background color in the browser interface - things were looking fine.
You even have Skype integrated into the suite, so you can chat and collaborate. Or rather collaborate and listen. Hue hue. Quite neat. The right-click button lets you select a few quick actions, including links, comments and translations. The last piece still needs a lot of work, because it did not quite give me what I expected. The translations are wonky.
![Skype active](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-skype-active.jpg)
![Right click options](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click.png)
![Right click options, more](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-right-click-more.jpg)
![Translations, still wonky](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-translations.png)
You can also add images - including embedded Bing search, which will also, by default, filter images based on their licensing and re-distribution rights. This is neat, especially if you need to create a document and must avoid any copyright claims and such.
![Images, online search](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-images.jpg)
### More on comments, tracking
Quite useful. For realz. The online nature of this product also means changes and edits to the documents will be tracked by default, so you also have a basic level of versioning available. However, session edits are lost once you close the document.
![Comments](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-comments.jpg)
![Edit activity log](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-activity.png)
The one error that will visibly come up - if you try to edit the document in Word or Excel on Linux, you will get prompted that you're being naughty, because this is not a supported action, for obvious reasons.
![Edit error](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-edit-error.jpg)
### Excel and friends
The practical workflow extends beyond Word. I also tried Excel, and it did as advertised, including having some neat and useful templates and such. It worked just fine, and there's no lag updating cells and formulas. You get most of the functionality you need and expect.
![Excel, interesting templates](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel.jpg)
![New blank spreadsheet](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-new-spreadsheet.jpg)
![Excel, budget template](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-excel-budget.jpg)
### OneDrive
This is where you can create and organize folders and files, move documents about, and share them with your friends (if you have any) and colleagues. 5 GB for free, upgradeable for a fee, of course. Worked fine, overall. It does take a few moments to refresh and display contents. Open documents will not be deleted, so this may look like a bug, but it makes perfect sense from the computational perspective.
![OneDrive](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-onedrive.jpg)
### Help
If you get confused - or feel like being dominatrixed by AI, you can ask the cloud collective intelligence of the Redmond Borg ship for assistance. This is quite useful, if not as straightforward or laser-sharp as it can be. But the effort is benevolent.
![What to do, interactive help](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-what-to-do.png)
### Problems
During my three-hour adventure, I only encountered two glitches. One, during a document edit, the browser had a warning (yellow triangle) about an insecure element loaded and used in an otherwise secure HTTPS session. Two, I hit a snag of failing to create a new Excel document. A one-time issue, and it hasn't happened since.
![Document creation error](http://www.dedoimedo.com/images/computers-years/2017-1/ms-office-online-error.jpg)
### Conclusion
Microsoft Office Online is a great product, and better than it was when I tested it some two years ago. It's fairly snappy, it looks nice, it behaves well, the errors are few and far between, and it offers genuine Microsoft Office compatibility even to Linux users, which can be of significant personal and business importance to some. I won't say this is the best thing that happened to humanity since VHS was invented, but it's a nice addition, and it bridges a big gap that Linux folks have faced since day one. Quite handy, and the ODF support is another neat touch.
Now, to make things even spicier, if you like this whole cloud concept thingie, you might also be interested in [Open365][9], a LibreOffice-based office productivity platform, with an added bonus of a mail client and image processing software, plus 20 GB free storage. Best of all, you can have both of these running in your browser, in parallel. All it takes is another tab or two.
Back to Microsoft: if you are a Linux person, you may actually require Microsoft Office products now and then. The easiest way to enjoy them - or at the very least, use them when needed without having to commit to a full operating system stack - is through the online office suite. Free, elegant, and largely transparent. Worth checking out, provided you can put the ideological game aside. There you go. Enjoy thy clouden. Or something.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/office-online-linux-better.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:http://www.ocsmag.com/2015/02/16/libreoffice-4-4-review-finally-it-rocks/
[2]:http://www.ocsmag.com/2014/03/14/libreoffice-vs-microsoft-office-part-deux/
[3]:http://www.dedoimedo.com/computers/office-online-linux.html
[4]:http://www.dedoimedo.com/computers/fedora-25-gnome.html
[5]:http://www.ocsmag.com/2017/01/18/softmaker-office-2016-your-alternative-to-libreoffice/
[6]:http://www.dedoimedo.com/computers/microsoft-lumia-640.html
[7]:http://www.thelostwordsbooks.com/
[8]:http://www.dedoimedo.com/computers/linux-problem-solving-book.html
[9]:http://www.ocsmag.com/2016/08/17/open365/
View File
@ -0,0 +1,144 @@
How to setup a Linux server on Amazon AWS
============================================================
### On this page
1. [Setup a Linux VM in AWS][1]
2. [Connect to an EC2 instance from Windows][2]
AWS (Amazon Web Services) is one of the leading cloud server providers worldwide. You can set up a server within a minute using the AWS platform. On AWS, you can fine-tune many technical details of your server, like the number of CPUs, memory and HDD space, type of HDD (SSD, which is faster, or a classic IDE) and so on. And the best thing about AWS is that you need to pay only for the services that you have used. To get started, AWS provides a special account called "Free tier" where you can use the AWS technology free for one year, with some minor restrictions, e.g. you may use the server only up to 750 hours a month; when you cross this threshold, they will charge you. You can check all the rules related to this on the [aws portal][3].
Since I am writing this post about creating a Linux server on AWS, having a "Free Tier" account is the main pre-requisite. To sign up for this account you can use this [link][4]. Kindly note that you need to enter your credit card details while creating the account.
So let's assume that you have created the "free tier" account.
Before we proceed, you must know some of the terminologies in AWS to understand the setup:
1. EC2 (Elastic Compute Cloud): This term is used for the virtual machine.
2. AMI (Amazon Machine Image): Used for the OS instance.
3. EBS (Elastic Block Store): One of the storage types in AWS.
Now log in to the AWS console at the location below:
[https://console.aws.amazon.com/][5]
The AWS console will look like this:
[
![Amazon AWS console](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console.JPG)
][6]
### Setup a Linux VM in AWS
1: Create an EC2 (virtual machine) instance: Before installing the OS, you must create a VM in AWS. To create it, click on EC2 under the Compute menu:
[
![Create an EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_console_ec21.png)
][7]
2\. Now click on the "Launch Instance" button under Create instance.
[
![Launch the EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_launch_ec2.png)
][8]
3\. If you are using a free tier account, it is better to select the Free tier radio button so that AWS will filter the instances that are eligible for free usage. This keeps you from paying money to AWS for billed resources.
[
![Select Free Tier instances only](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_free_tier_radio1.png)
][9]
4\. To proceed further, select following options:
a. **Choose an AMI in the classic instance wizard: selection --> I'll use Red Hat Enterprise Linux 7.2 (HVM), SSD Volume Type here**
b. Select "**t2.micro**" for the instance details.
c. **Configure Instance Details**: Do not change anything, simply click next.
d. **Add Storage:** Do not change anything, simply click next, as we will be using the default size of 10 (GiB) hard disk in this case.
e. **Add Tags**: Do not change anything, simply click next.
f. **Configure Security Group**: Now allow port 22, which is used for SSH, so that you can access this server from anywhere.
[
![Configure AWS server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_ssh_port1.png)
][10]
g. **Select the review and launch button.**
h. If all the details are OK, now press the "**Launch**" button.
i. Once you click the Launch button, a popup window is displayed to create a "key pair", as shown below. Select the option "Create a new key pair", give the key pair a name, and then download it. You will need this key pair while connecting to the server using SSH. At the end, click the "Launch Instance" button.
[
![Create Key pair](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_key_pair.png)
][11]
j. After clicking the Launch Instance button, go to Services at the top left side. Select Compute --> EC2. Now click on the running instances link as below:
[
![Go to the running EC2 instance](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_running_instance.png)
][12]
k. Now you can see that your new VM is ready with status "running", as shown below. Select the instance and note down the "Public DNS value", which is required for logging in to the server.
[
![Public DNS value of the VM](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_dns_value.png)
][13]
Now you are done with creating a sample Linux VM. To connect to the server, follow the steps below.
### Connect to an EC2 instance from Windows
1\. First of all, you need PuTTYgen and the PuTTY executable for connecting to the server from Windows (for the SSH command on Linux, see the sketch at the end of this section). You can download PuTTY by following this [Link][14].
2\. Now open PuTTYgen ("puttygen.exe").
3\. Click on the "Load" button, browse and select the key pair file (pem file) that you downloaded above from Amazon.
4\. Select the "SSH-2 RSA" option and click on the save private key button. Kindly select yes on the next pop-up.
5\. Save the file with the file extension .ppk.
6\. Now open Putty.exe. Go to Connection in the left side menu, then select "SSH" and then "Auth". Click on the browse button to select the .ppk file that we created in step 4.
7\. Now click on the "Session" menu, paste the DNS value captured during step 'k' of this tutorial into the "Host Name" box, and hit the Open button.
8\. When asked for a username and password, enter "**ec2-user**" and a blank password, and then give the command below.
$ sudo su -
Hurray, you are now root on the Linux server which is hosted in the AWS cloud.
[
![Logged in to AWS EC2 server](https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/aws_putty1.JPG)
][15]
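For completeness: from a Linux (or macOS) client you do not need PuTTY at all; the downloaded .pem key can be used directly with the ssh command. A short sketch, assuming a hypothetical key file name (replace the key file and DNS value with your own):
chmod 400 my-keypair.pem
ssh -i my-keypair.pem ec2-user@<public-dns-value>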
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
作者:[MANMOHAN MIRKAR][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/
[1]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#setup-a-linux-vm-in-aws
[2]:https://www.howtoforge.com/tutorial/how-to-setup-linux-server-with-aws/#connect-to-an-ec-instance-from-windows
[3]:http://aws.amazon.com/free/
[4]:http://aws.amazon.com/ec2/
[5]:https://console.aws.amazon.com/
[6]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console.JPG
[7]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_console_ec21.png
[8]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_launch_ec2.png
[9]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_free_tier_radio1.png
[10]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_ssh_port1.png
[11]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_key_pair.png
[12]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_running_instance.png
[13]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_dns_value.png
[14]:http://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[15]:https://www.howtoforge.com/images/how_to_setup_linux_server_with_aws/big/aws_putty1.JPG
View File
@ -0,0 +1,151 @@
beyondworld translating
How to Install and Secure MariaDB 10 in CentOS 7
============================================================
MariaDB is a free and open source fork of the well-known MySQL database management server software, developed by the brains behind MySQL; it is envisioned to remain free/open source.
In this tutorial, we will show you how to install MariaDB 10.1 stable version in the most widely used versions of RHEL/CentOS and Fedora distributions.
For your information, Red Hat Enterprise Linux/CentOS 7.0 switched from supporting MySQL to MariaDB as the default database management system.
Note that in this tutorial, we'll assume you're working on the server as root; otherwise, use the [sudo command][7] to run all the commands.
### Step 1: Add MariaDB Yum Repository
1. Start by adding the MariaDB YUM repository file `MariaDB.repo` for RHEL/CentOS and Fedora systems.
```
# vi /etc/yum.repos.d/MariaDB.repo
```
Now add the following lines to your respective Linux distribution version as shown.
#### On CentOS 7
```
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
```
#### On RHEL 7
```
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
```
[
![Add MariaDB Yum Repo](http://www.tecmint.com/wp-content/uploads/2017/02/Add-MariaDB-Repo.png)
][8]
Add MariaDB Yum Repo
### Step 2: Install MariaDB in CentOS 7
2. Once the MariaDB repository has been added, you can easily install it with just one command.
```
# yum install MariaDB-server MariaDB-client -y
```
[
![Install MariaDB in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/02/Install-MariaDB-in-CentOS-7.png)
][9]
Install MariaDB in CentOS 7
3. As soon as the installation of MariaDB packages completes, start the database server daemon for the time being, and also enable it to start automatically at the next boot like so:
```
# systemctl start mariadb
# systemctl enable mariadb
# systemctl status mariadb
```
[
![Start MariaDB Service in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/02/Start-MariaDB-Service-in-CentOS-7.png)
][10]
Start MariaDB Service in CentOS 7
### Step 3: Secure MariaDB in CentOS 7
4. Now it's time to secure your MariaDB by setting a root password, disabling remote root login, removing the test database as well as anonymous users, and finally reloading privileges, as shown in the screenshot below:
```
# mysql_secure_installation
```
[
![Secure MySQL in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/02/Secure-MySQL-in-CentOS-7.png)
][11]
Secure MySQL in CentOS 7
5. After securing the database server, you may want to check certain MariaDB details such as the installed version and the default program argument list, and also log in to the MariaDB command shell, as follows:
```
# mysql -V
# mysqld --print-defaults
# mysql -u root -p
```
[
![Verify MySQL Version](http://www.tecmint.com/wp-content/uploads/2017/02/Verify-MySQL-Version.png)
][12]
Verify MySQL Version
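As a final smoke test, you can create and drop a throwaway database from the shell (hypothetical names, just to confirm that the server accepts work):
```
# mysql -u root -p -e "CREATE DATABASE testdb;"
# mysql -u root -p -e "SHOW DATABASES;"
# mysql -u root -p -e "DROP DATABASE testdb;"
```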
### Step 4: Learn MariaDB Administration
If you are new to MySQL/MariaDB, start off by going through these guides:
1. [Learn MySQL / MariaDB for Beginners Part 1][1]
2. [Learn MySQL / MariaDB for Beginners Part 2][2]
3. [MySQL Basic Database Administration Commands Part III][3]
4. [20 MySQL (Mysqladmin) Commands for Database Administration Part IV][4]
Also check out the following articles to fine tune your MySQL/MariaDB performance and use the tools to monitor the activity of your databases.
1. [15 Tips to Tune and Optimize Your MySQL/MariaDB Performance][5]
2. [4 Useful Tools to Monitor MySQL/MariaDB Database Activities][6]
That's it for now! In this simple tutorial, we showed you how to install the MariaDB 10.1 stable version on RHEL/CentOS and Fedora. Use the feedback form below to send us any questions or any thoughts concerning this guide.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-mariadb-in-centos-7/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/learn-mysql-mariadb-for-beginners/
[2]:http://www.tecmint.com/learn-mysql-mariadb-advance-functions-sql-queries/
[3]:http://www.tecmint.com/gliding-through-database-mysql-in-a-nutshell-part-i/
[4]:http://www.tecmint.com/mysqladmin-commands-for-database-administration-in-linux/
[5]:http://www.tecmint.com/mysql-mariadb-performance-tuning-and-optimization/
[6]:http://www.tecmint.com/mysql-performance-monitoring/
[7]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[8]:http://www.tecmint.com/wp-content/uploads/2017/02/Add-MariaDB-Repo.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/02/Install-MariaDB-in-CentOS-7.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/02/Start-MariaDB-Service-in-CentOS-7.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/02/Secure-MySQL-in-CentOS-7.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/02/Verify-MySQL-Version.png
[13]:http://www.tecmint.com/author/aaronkili/
[14]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[15]:http://www.tecmint.com/free-linux-shell-scripting-books/
View File
@ -0,0 +1,177 @@
How to Install or Upgrade to Latest Kernel Version in CentOS 7
============================================================
Although some people use the word Linux to represent the operating system as a whole, it is important to note that, strictly speaking, Linux is only the kernel. On the other hand, a distribution is a fully-functional system built on top of the kernel with a wide variety of application tools and libraries.
During normal operations, the kernel is responsible for performing two important tasks:
1. Acting as an interface between the hardware and the software running on the system.
2. Managing system resources as efficiently as possible.
To do this, the kernel communicates with the hardware through the drivers that are built into it or those that can be later installed as a module.
For example, when an application running on your machine wants to connect to a wireless network, it submits that request to the kernel, which in turns uses the right driver to connect to the network.
**Suggested Read:** [How to Upgrade Kernel in Ubuntu][1]
With new devices and technology coming out periodically, it is important to keep our kernel up to date if we want to make the most out of them. Additionally, updating our kernel will help us to leverage new kernel functions and to protect ourselves from vulnerabilities that have been discovered in previous versions.
Ready to update your kernel on CentOS 7 or a related distribution such as RHEL 7 or Fedora? If so, keep reading!
### Step 1: Checking Installed Kernel Version
When we install a distribution it includes a certain version of the Linux kernel. To show the current version installed on our system we can do:
```
# uname -sr
```
The following image shows the output of the above command in a CentOS 7 server:
[
![Check Kernel Version in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kernel-Version-in-CentOS-7.png)
][2]
Check Kernel Version in CentOS 7
If we now go to [https://www.kernel.org/][3], we will see that the latest kernel version is 4.10.1 at the time of this writing (other versions are available from the same site).
One important thing to consider is the life cycle of a kernel version: if the version you are currently using is approaching its end of life, no more bug fixes will be provided after that date. For more info, refer to the [kernel Releases][4] page.
### Step 2: Upgrading Kernel in CentOS 7
Most modern distributions provide a way to upgrade the kernel using a [package management system such as yum][5] and an officially-supported repository.
However, this will only perform the upgrade to the most recent version available from the distribution's repositories, not the latest one available at [https://www.kernel.org/][6]. Unfortunately, Red Hat only allows upgrading the kernel using the former option.
As opposed to Red Hat, CentOS allows the use of ELRepo, a third-party repository that makes the upgrade to a recent kernel version possible.
To enable the ELRepo repository on CentOS 7, do:
```
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
```
[
![Enable ELRepo in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/03/Enable-ELRepo-in-CentOS-7.png)
][7]
Enable ELRepo in CentOS 7
Once the repository has been enabled, you can use the following command to list the available kernel-related packages:
```
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```
[
![Yum - Find Available Kernel Versions](http://www.tecmint.com/wp-content/uploads/2017/03/Yum-Find-Available-Kernel-Versions.png)
][8]
Yum Find Available Kernel Versions
Next, install the latest mainline stable kernel:
```
# yum --enablerepo=elrepo-kernel install kernel-ml
```
[
![Install Latest Kernel Version in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/03/Install-Latest-Kernel-Version-in-CentOS-7.png)
][9]
Install Latest Kernel Version in CentOS 7
Finally, reboot your machine to apply the latest kernel, and then run the following command to check the kernel version:
```
uname -sr
```
[
![Verify Kernel Version](http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Kernel-Version.png)
][10]
Verify Kernel Version
### Step 3: Set Default Kernel Version in GRUB
To make the newly-installed version the default boot option, you will have to modify the GRUB configuration as follows:
Open and edit the file /etc/default/grub and set `GRUB_DEFAULT=0`. This means that the first kernel in the GRUB initial screen will be used as default.
```
GRUB_TIMEOUT=5
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
```
Next, run the following command to recreate the GRUB configuration.
```
# grub2-mkconfig -o /boot/grub2/grub.cfg
```
[
![Set Kernel in GRUB](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Kernel-in-GRUB.png)
][11]
Set Kernel in GRUB
Reboot and verify that the latest kernel is now being used by default.
[
![Booting Default Kernel Version in CentOS 7](http://www.tecmint.com/wp-content/uploads/2017/03/Booting-Default-Kernel-Version.png)
][12]
Booting Default Kernel Version in CentOS 7
Congratulations! You have upgraded your kernel in CentOS 7!
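Keep in mind that old kernels accumulate in /boot over time. Once you are satisfied with the new kernel, you can prune older versions with the package-cleanup tool from the yum-utils package, keeping for example the two most recent ones:
```
# yum -y install yum-utils
# package-cleanup --oldkernels --count=2
```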
##### Summary
In this article we have explained how to easily upgrade the Linux kernel on your system. There is yet another method which we haven't covered, as it involves compiling the kernel from source, which would deserve an entire book and is not recommended on production systems.
Although it represents one of the best learning experiences and allows for a fine-grained configuration of the kernel, you may render your system unusable and may have to reinstall it from scratch.
If you are still interested in building the kernel as a learning experience, you will find instructions on how to do it at the [Kernel Newbies][13] page.
As always, feel free to use the form below if you have any questions or comments about this article.
--------------------------------------------------------------------------------
作者简介:
I'm a computer addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-upgrade-kernel-version-in-centos-7/
作者:[Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.tecmint.com/upgrade-kernel-in-ubuntu/
[2]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Kernel-Version-in-CentOS-7.png
[3]:https://www.kernel.org/
[4]:https://www.kernel.org/category/releases.html
[5]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[6]:https://www.kernel.org/
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Enable-ELRepo-in-CentOS-7.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Yum-Find-Available-Kernel-Versions.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Install-Latest-Kernel-Version-in-CentOS-7.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Verify-Kernel-Version.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Kernel-in-GRUB.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Booting-Default-Kernel-Version.png
[13]:https://kernelnewbies.org/KernelBuild
[14]:http://www.tecmint.com/author/gacanepa/
[15]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[16]:http://www.tecmint.com/free-linux-shell-scripting-books/
View File
@ -0,0 +1,127 @@
How to use markers and perform text selection in Vim
============================================================
When using GUI-based text/source code editors, some features are a given, such as selecting text. I mean, most of us won't even consider this a feature anymore. But that's not the case with command line based editors like Vim. Specifically for Vim, when using only the keyboard, you'll have to learn certain commands in order to select text the way you want. In this tutorial we will discuss this feature as well as the 'marks' feature of Vim in detail.
But before we start doing that, it's worth mentioning that all the examples, commands, and instructions mentioned in this tutorial have been tested on Ubuntu 16.04, and the Vim version we've used is 7.4.
# Text selection options in Vim
Assuming that you have basic knowledge of the Vim editor (No? No problem, just head [here][2]), you will know that the 'd' command lets you cut/delete a line. But what if you want to cut, say, 3 lines? Your answer could be: 'repeat the command thrice'. Fine, but what if the requirement is to cut 15 lines? Is running the 'd' command 15 times a practical solution?
No, it's not. A better solution, in this case, would be to select the lines you want to cut/delete, and then run the 'd' command just once. Here's an example:
Suppose I want to cut/delete the complete first paragraph of the INTRODUCTION section shown in the screenshot below:
[
![Text edited in VIM](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-example.png)
][3]
So what I'll do is, I'll bring the cursor to the beginning of the first line, and (making sure I am out of Insert mode) type the 'V' (Shift+v) command. This will result in the first line being selected and Vim enabling the Visual Line mode.
[
![Select a line with VIM](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-initiated.png)
][4]
Now, all I have to do is to use the down arrow key to select the whole paragraph.
[
![Select multiple lines with Vim](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-working.png)
][5]
So that's what we wanted, right? Now just press 'd' and the selected paragraph will be cut/deleted. Needless to say, aside from cut/delete, you can perform any other operation on the selected text.
This brings us to another important aspect: we don't always need to delete complete lines, so what to do in those cases? The solution we just discussed only works when you want to perform an operation on complete line(s). What if the requirement is to delete just the first three sentences in a paragraph?
Well, there's a command for this as well - just use 'v' instead of 'V' (without single quotes of course). Following is an example wherein I used 'v' and then selected the first three sentences in the paragraph:
[
![Select the first three sentences in Vim](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-partial-lines.png)
][6]
Moving on, sometimes the data you are dealing with consists of separate columns, and the requirement may be to select a particular column. For example, consider the following screenshot:
[
![Columns in Vim](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-columns.png)
][7]
Suppose the requirement is to only select the names of countries, which means the second column of the text. So what you can do in this case is: bring your cursor under the first element of the column in question and press Ctrl+v once. Now, using the down arrow key, select the first letter of each country name:
[
![Select the first char of a column](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-column-1.png)
][8]
And then using the right arrow key, select the complete column, or the whole names.
[
![Select a whole column in Vim](https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/vim-select-column-2.png)
][9]
**Tip**: In case you de-selected a block of text for some reason, and now want to again select it, just press 'gv' in the command mode.
# Using marks
Sometimes, while working on a large file (say, a source code file or a shell script), you might want to switch to a particular location and then come back to the line where you were originally. That's not an issue if the lines in question aren't far away, or if you only have to do this occasionally.
But what if it's the other way round - you have to frequently jump between your present location and various far-off lines in the file? Well, the solution in that case is to use marks. Just mark your current location, then come back to this location from anywhere in the file by just mentioning the name of the mark.
To mark a line in Vim, use the m command followed by a letter that represents the name of the mark (available options are a-z in lowercase). For example, ma. Now, to come back to the mark a, use the 'a command (single quote included).
**Tip**: You can use the apostrophe (`'`) or the backtick (`` ` ``) depending on whether you want to jump to the beginning of the marked line, or specifically to the line and column of the mark.
There can be various other useful applications of Vim markers. For example, you can put a mark on a line, then go to some other line and run the following command:
```
d'[mark-name]
```
 to delete everything between your current position and the marked line.
Moving on, here's an important tid-bid from the Vim's official documentation:
```
Each file has a set of marks identified by lowercase letters (a-z). In addition there is a global set of marks identified by uppercase letters (A-Z) that identify a position within a particular file. For example, you may be editing ten files. Each file could have mark a, but only one file can have mark A. 
```
So while we have discussed the basic usage of lowercase letters as Vim marks, how and where are the uppercase letters useful? Well, the following excerpt makes it amply clear:
```
Because of their limitations, uppercase marks may at first glance seem less versatile than their lowercase counterpart, but this feature allows them to be used as a quick sort of "file bookmark." For example, open your .vimrc, press mV, and close Vim. The next time you want to edit your .vimrc, just press 'V to open it.
```
And finally, to delete a mark, use the 'delmarks' command. For example:
```
:delmarks a
```
The aforementioned command will delete the mark a from the file. Of course, if you delete a line containing a mark, then that mark is also deleted automatically. For more information on marks, head to the [Vim documentation][11].
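Two related commands are worth knowing: `:marks` lists all marks currently set in the buffer, and `:delmarks` also accepts ranges, so several marks can be dropped at once. For example:
```
:marks
:delmarks a-d x
```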
# Conclusion
As you start using Vim as your primary editor, features like the ones explained in this tutorial become useful tools that save a lot of your time. As you'd agree, there's not much of a learning curve involved with the selection and marks features explained here - all that's required is a bit of practice.
For the complete coverage of Vim-related articles on HowtoForge, head [here][1].
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-use-markers-and-perform-text-selection-in-vim/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-use-markers-and-perform-text-selection-in-vim/
[1]:https://www.howtoforge.com/tutorials/shell/
[2]:https://www.howtoforge.com/vim-basics
[3]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-example.png
[4]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-initiated.png
[5]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-working.png
[6]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-partial-lines.png
[7]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-columns.png
[8]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-column-1.png
[9]:https://www.howtoforge.com/images/how-to-use-markers-and-perform-text-selection-in-vim/big/vim-select-column-2.png
[10]:http://vim.wikia.com/wiki/Vimrc
[11]:http://vim.wikia.com/wiki/Using_marks

View File

@ -0,0 +1,266 @@
Installation of Devuan Linux (Fork of Debian)
============================================================
Devuan Linux, the most recent fork of Debian, is a version of Debian that is designed to be completely free of systemd.
Devuan was announced towards the end of 2014 and has been actively developed over that time. The most recent release is the beta2 release, codenamed Jessie (yes, the same name as the current stable version of Debian).
The final version of the current stable release is said to be ready in early 2017. To read more about the project, please visit the community's home page: [https://devuan.org/][1].
This article will walk through the installation of Devuans current release. Most of the packages available in Debian are available in Devuan allowing for a fairly seamless transition for Debian users to Devuan should they prefer the freedom to choose their initialization system.
#### System Requirements
Devuan, like Debian, is very light on system requirements. The biggest determining factor is the desktop environment the user wishes to use. This guide will assume that the user would like a flashier desktop environment and will suggest the following minimums:
1. At least 15GB of disk space; strongly encouraged to have more
2. At least 2GB of RAM; more is encouraged
3. USB or CD/DVD boot support
4. Internet connection; installer will download files from the Internet
### Devuan Linux Installation
As with all of the author's guides, this guide assumes that a USB drive is available to use as the installation media. Take note that the USB drive should be as close to 4/8GB as possible and ALL DATA ON IT WILL BE REMOVED!
The author has had issues with larger USB drives but some may still work. Regardless, following the next few steps WILL RESULT IN DATA LOSS ON THE USB DRIVE.
Please be sure to backup all data before proceeding. This bootable Devuan Linux USB drive is going to be created from another Linux machine.
1. First, obtain the latest release of the Devuan installation ISO from [https://devuan.org/][2], or, to obtain it from a Linux station, type the following commands:
```
$ cd ~/Downloads
$ wget -c https://files.devuan.org/devuan_jessie_beta/devuan_jessie_1.0.0-beta2_amd64_CD.iso
```
2. The commands above will download the installer ISO file to the user's Downloads folder. The next step is to write the ISO to a USB drive to boot the installer.
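Optionally, before writing the image, you may want to verify that the download is intact. This is just a sketch - it assumes you have obtained the official checksum (for example, from a checksum file published alongside the ISO on the Devuan server) to compare the output against:

```
$ sha256sum ~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso
```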
To write the image, we can use the `dd` tool within Linux. First, though, the disk name needs to be located with the [lsblk command][3].
```
$ lsblk
```
[
![Find Device Name in Linux](http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png)
][4]
Find Device Name in Linux
With the name of the USB drive determined as `/dev/sdc`, the Devuan ISO can be written to the drive with the `dd` tool.
```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc
```
Important: The above command requires root privileges, so utilize sudo or login as the root user to run it. Also, this command will REMOVE EVERYTHING on the USB drive. Be sure to backup needed data.
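As an aside, on systems with GNU coreutils 8.24 or newer, a slightly friendlier variant of the same command can show write progress and flush the buffers afterwards; the block size here is an optional tweak, not a requirement:

```
$ sudo dd if=~/Downloads/devuan_jessie_1.0.0-beta2_amd64_CD.iso of=/dev/sdc bs=4M status=progress
$ sync
```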
3. Once the ISO is copied over to the USB drive, plug the USB drive into the respective computer that Devuan should be installed upon and proceed to boot to the USB drive.
Upon successfully booting from the USB drive, the user will be presented with the following screen and should proceed with the 'Install' or 'Graphical Install' options.
This guide will be using the Graphical Install method.
[
![Devuan Graphic Installation](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png)
][5]
Devuan Graphic Installation
4. Allow the installer to boot to the localization menus. Once here, the user will be prompted with a series of windows asking about the user's keyboard layout and language. Simply select the desired options to continue.
[
![Devuan Language Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png)
][6]
Devuan Language Selection
[
![Devuan Location Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png)
][7]
Devuan Location Selection
[
![Devuan Keyboard Configuration](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png)
][8]
Devuan Keyboard Configuration
5. The next step is to provide the installer with the hostname and the domain name of which this machine will be a member.
The hostname should be something unique, but the domain can be left blank if the computer won't be part of a domain.
[
![Set Devuan Linux Hostname](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png)
][9]
Set Devuan Linux Hostname
[
![Set Devuan Linux Domain Name](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png)
][10]
Set Devuan Linux Domain Name
6. Once the hostname and domain name information have been provided the installer will want the user to provide a root user password.
Take note to remember this password, as it will be required to do administrative tasks on this Devuan machine! Devuan doesn't install the sudo package by default, so the admin user will be root when this installation finishes.
[
![Setup Devuan Linux Root User](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png)
][11]
Setup Devuan Linux Root User
7. The next series of questions will be for the creation of a non-root user. It is always a good idea to avoid using your system as the root user whenever possible. The installer will prompt for the creation of a non-root user at this point.
[
![Setup Devuan Linux User Account](http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png)
][12]
Setup Devuan Linux User Account
8. Once the root user password and user creation prompts have completed, the installer will request that the clock be [set up with NTP][13].
Again, a connection to the internet will be required in order for this to work on most systems!
[
![Devuan Linux Timezone Setup](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png)
][14]
Devuan Linux Timezone Setup
9. The next step is partitioning the system. For most users, the 'Guided - use entire disk' option is typically sufficient. However, if advanced partitioning is desired, this would be the time to set it up.
[
![Devuan Linux Partitioning](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png)
][15]
Devuan Linux Partitioning
Be sure to confirm the partition changes after clicking continue above in order to write the partitions to the disk!
10. Once the partitioning is completed, the installer will begin to install the base files for Devuan. This process will take a few minutes but will stop when the system is ready to configure a network mirror (software repository). Most users will want to click yes when prompted to use a network mirror.
[
![Devuan Linux Configure Package Manager](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png)
][16]
Devuan Linux Configure Package Manager
Clicking `yes` here will present the user with a list of network mirrors by country. It is typically best to pick the mirror that is geographically closest to the machine's location.
[
![Devuan Linux Mirror Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png)
][17]
Devuan Linux Mirror Selection
[
![Devuan Linux Mirrors](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png)
][18]
Devuan Linux Mirrors
11. The next screen is the traditional Debian popularity contest; all this does is track which packages are downloaded, for statistics on package usage.
This can be enabled or disabled according to the administrator's preference during the installation process.
[
![Configure Devuan Linux Popularity Contest](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png)
][19]
Configure Devuan Linux Popularity Contest
12. After a brief scan of the repositories and a couple of package updates, the installer will present the user with a list of software packages that can be installed to provide a Desktop Environment, SSH access, and other system tools.
While Devuan has some of the major Desktop Environments listed, it should be noted that not all of them are ready for use in Devuan yet. The author has had good luck with Xfce, LXDE, and Mate in Devuan (Future articles will walk the user through how to install Enlightenment from source in Devuan as well).
If interested in installing a different Desktop Environment, un-check the Devuan Desktop Environment check box.
[
![Devuan Linux Software Selection](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png)
][20]
Devuan Linux Software Selection
Depending on the number of items selected in the above installer screen, there may be a couple of minutes of downloads and installations taking place.
When all the software installation is completed, the installer will prompt the user for the location to install GRUB. This is typically done on /dev/sda as well.
[
![Devuan Linux Grub Install](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png)
][21]
Devuan Linux Grub Install
[
![Devuan Linux Grub Install Disk](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png)
][22]
Devuan Linux Grub Install Disk
13. After GRUB successfully installs to the boot drive, the installer will alert the user that the installation is complete and to reboot the system.
[
![Devuan Linux Installation Completes](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png)
][23]
Devuan Linux Installation Completes
14. Assuming that the installation was indeed successful, the system should either boot into the chosen Desktop Environment or if no Desktop Environment was selected, the machine will boot to a text based console.
[
![Devuan Linux Console](http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png)
][24]
Devuan Linux Console
This concludes the installation of the latest version of Devuan Linux. The next article in this short series will cover the [installation of the Enlightenment Desktop Environment][25] from source code on a Devuan system. Please let Tecmint know if you have any issues or questions and thanks for reading!
--------------------------------------------------------------------------------
作者简介:
He is an Instructor of Computer Technology with Ball State University, where he currently teaches all of the department's Linux courses and co-teaches Cisco networking courses. He is an avid user of Debian as well as many of its derivatives, such as Mint, Ubuntu, and Kali. Rob holds a Master's in Information and Communication Sciences as well as several industry certifications from Cisco, EC-Council, and Linux Foundation.
-----------------------------
via: http://www.tecmint.com/installation-of-devuan-linux/
作者:[Rob Turner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:https://devuan.org/
[2]:https://devuan.org/
[3]:http://www.tecmint.com/find-usb-device-name-in-linux/
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Find-Device-Name-in-Linux.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Graphic-Installation.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Language-Selection.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Location-Selection.png
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Keyboard-Configuration.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Hostname.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Devuan-Linux-Domain-Name.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-Root-User.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/Setup-Devuan-Linux-User-Account.png
[13]:http://www.tecmint.com/install-and-configure-ntp-server-client-in-debian/
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Clock-on-Devuan-Linux.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Partitioning.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Configure-Package-Manager.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirror-Selection.png
[18]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Mirrors.png
[19]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-Devuan-Linux-Popularity-Contest.png
[20]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Software-Selection.png
[21]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install.png
[22]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Grub-Install-Disk.png
[23]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Installation-Completes.png
[24]:http://www.tecmint.com/wp-content/uploads/2017/03/Devuan-Linux-Console.png
[25]:http://www.tecmint.com/install-enlightenment-on-devuan-linux/
[26]:http://www.tecmint.com/author/robturner/
[27]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[28]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,265 @@
Monitoring a production-ready microservice
============================================================
Explore essential components, principles, and key metrics.
![Container ship](https://d3tdunqjn7n0wj.cloudfront.net/360x240/container-1638068_1400-532657d38c05bb5bd8bd23571f7b3b88.jpg)
This is an excerpt from [Production-Ready Microservices][8], by Susan J. Fowler.
A production-ready microservice is one that is properly monitored. Proper monitoring is one of the most important parts of building a production-ready microservice and guarantees higher microservice availability. In this chapter, the essential components of microservice monitoring are covered, including which key metrics to monitor, how to log key metrics, building dashboards that display key metrics, how to approach alerting, and on-call best practices.
### Principles of Microservice Monitoring
The majority of outages in a microservice ecosystem are caused by bad deployments. The second most common cause of outages is the lack of proper  _monitoring_ . It's easy to see why this is the case. If the state of a microservice is unknown, if key metrics aren't tracked, then any precipitating failures will remain unknown until an actual outage occurs. By the time a microservice experiences an outage due to lack of monitoring, its availability has already been compromised. During these outages, the time to mitigation and time to repair are prolonged, pulling the availability of the microservice down even further: without easily accessible information about the microservice's key metrics, developers are often faced with a blank slate, unprepared to quickly resolve the issue. This is why proper monitoring is essential: it provides the development team with all of the relevant information about the microservice. When a microservice is properly monitored, its state is never unknown.
Monitoring a production-ready microservice has four components. The first is proper  _logging_  of all relevant and important information, which allows developers to understand the state of the microservice at any time in the present or in the past. The second is the use of well-designed  _dashboards_  that accurately reflect the health of the microservice, and are organized in such a way that anyone at the company could view the dashboard and understand the health and status of the microservice without difficulty. The third component is actionable and effective  _alerting_  on all key metrics, a practice that makes it easy for developers to mitigate and resolve problems with the microservice before they cause outages. The final component is the implementation and practice of running a sustainable  _on-call rotation_  responsible for the monitoring of the microservice. With effective logging, dashboards, alerting, and on-call rotation, the microservice's availability can be protected: failures and errors will be detected, mitigated, and resolved before they bring down any part of the microservice ecosystem.
###### A Production-Ready Service Is Properly Monitored
* Its key metrics are identified and monitored at the host, infrastructure, and microservice levels.
* It has appropriate logging that accurately reflects the past states of the microservice.
* Its dashboards are easy to interpret, and contain all key metrics.
* Its alerts are actionable and are defined by signal-providing thresholds.
* There is a dedicated on-call rotation responsible for monitoring and responding to any incidents and outages.
* There is a clear, well-defined, and standardized on-call procedure in place for handling incidents and outages.
### Key Metrics
Before we jump into the components of proper monitoring, it's important to identify precisely  _what_  we want and need to monitor: we want to monitor a microservice, but what does that  _actually_  mean? A microservice isn't an individual object that we can follow or track, it cannot be isolated and quarantined—it's far more complicated than that. Deployed across dozens, if not hundreds, of servers, the behavior of a microservice is the sum of its behavior across all of its instantiations, which isn't the easiest thing to quantify. The key is identifying which properties of a microservice are necessary and sufficient for describing its behavior, and then determining what changes in those properties tell us about the overall status and health of the microservice. We'll call these properties  _key metrics_ .
There are two types of key metrics: host and infrastructure metrics, and microservice metrics. Host and infrastructure metrics are those that pertain to the status of the infrastructure and the servers on which the microservice is running, while microservice metrics are metrics that are unique to the individual microservice. In terms of the four-layer model of the microservice ecosystem as described in [Chapter 1,  _Microservices_ ][9], host and infrastructure metrics are metrics belonging to layers 1-3, while microservice metrics are those belonging to layer 4.
Separating key metrics into these two different types is important both organizationally and technically. Host and infrastructure metrics often affect more than one microservice: for example, if there is a problem with a particular server, and the microservice ecosystem shares the hardware resources among multiple microservices, host-level key metrics will be relevant to every microservice team that has a microservice deployed to that host. Likewise, microservice-specific metrics will rarely be applicable or useful to anyone but the team of developers working on that particular microservice. Teams should monitor both types of key metrics (that is, all metrics relevant to their microservice), and any metrics relevant to multiple microservices should be monitored and shared between the appropriate teams.
The host and infrastructure metrics that should be monitored for each microservice are the CPU utilized by the microservice on each host, the RAM utilized by the microservice on each host, the available threads, the microservice's open file descriptors (FD), and the number of database connections that the microservice has to any databases it uses. Monitoring these key metrics should be done in such a way that the status of each metric is accompanied by information about the infrastructure and the microservice. This means that monitoring should be granular enough that developers can know the status of the key metrics for their microservice on any particular host and across all of the hosts that it runs on. For example, developers should be able to know how much CPU their microservice is using on one particular host  _and_  how much CPU their microservice is using across all hosts it runs on.
### Monitoring Host-Level Metrics When Resources Are Abstracted
Some microservice ecosystems may use cluster management applications (like Mesos) in which the resources (CPU, RAM, etc.) are abstracted away from the host level. Host-level metrics won't be available in the same way to developers in these situations, but all key metrics for the microservice overall should still be monitored by the microservice team.
Determining the necessary and sufficient key metrics at the microservice level is a bit more complicated because it can depend on the particular language that the microservice is written in. Each language comes with its own special way of processing tasks, for example, and these language-specific features must be monitored closely in the majority of cases. Consider a Python service that utilizes uwsgi workers: the number of uwsgi workers is a necessary key metric for proper monitoring.
In addition to language-specific key metrics, we also must monitor the availability of the service, the service-level agreement (SLA) of the service, latency (of both the service as a whole and its API endpoints), success of API endpoints, responses and average response times of API endpoints, the services (clients) from which API requests originate (along with which endpoints they send requests to), errors and exceptions (both handled and unhandled), and the health and status of dependencies.
Importantly, all key metrics should be monitored everywhere that the application is deployed. This means that every stage of the deployment pipeline should be monitored. Staging must be closely monitored in order to catch any problems before a new candidate for production (a new build) is deployed to servers running production traffic. It almost goes without saying that all deployments to production servers should be monitored carefully, both in the canary and production deployment phases. (For more information on deployment pipelines, see [Chapter 3,  _Stability and Reliability_ ][10].)
Once the key metrics for a microservice have been identified, the next step is to capture the metrics emitted by your service. Capture them, and then log them, graph them, and alert on them. We'll cover each of these steps in the following sections.
###### Summary of Key Metrics
**Host and infrastructure key metrics:**
* CPU
* RAM
* Threads
* File descriptors
* Database connections
**Microservice key metrics:**
* Language-specific metrics
* Availability
* Latency
* Endpoint success
* Endpoint responses
* Endpoint response times
* Clients
* Errors and exceptions
* Dependencies
### Logging
_Logging_  is the first component of production-ready monitoring. It begins and belongs in the codebase of each microservice, nestled deep within the code of each service, capturing all of the information necessary to describe the state of the microservice. In fact, describing the state of the microservice at any given time in the recent past is the ultimate goal of logging.
One of the benefits of microservice architecture is the freedom it gives developers to deploy new features and code changes frequently, and one of the consequences of this newfound developer freedom and increased development velocity is that the microservice is always changing. In most cases, the service will not be the same service it was 12 hours ago, let alone several days ago, and reproducing any problems will be impossible. When faced with a problem, often the only way to determine the root cause of an incident or outage is to comb through the logs, discover the state of the microservice at the time of the outage, and figure out why the service failed in that state. Logging needs to be such that developers can determine from the logs exactly what went wrong and where things fell apart.
### Logging Without Microservice Versioning
Microservice versioning is often discouraged because it can lead to other (client) services pinning to specific versions of a microservice that may not be the best or most updated version of the microservice. Without versioning, determining the state of a microservice when a failure or outage occurred can be difficult, but thorough logging can prevent this from becoming a problem: if the logging is good enough that the state of a microservice at the  _time_  of an outage can be sufficiently known and understood, the lack of versioning ceases to be a hindrance to quick and effective mitigation and resolution.
Determining precisely  _what_  to log is specific to each microservice. The best guidance on determining what needs to be logged is, somewhat unfortunately, necessarily vague: log whatever information is essential to describing the state of the service at a given time. Luckily, we can narrow down which information is necessary by restricting our logging to whatever can be contained in the code of the service. Host-level and infrastructure-level information won't (and shouldn't) be logged by the application itself, but by services and tools running the application platform. Some microservice-level key metrics and information, like hashed user IDs and request and response details, can and should be located in the microservice's logs.
There are, of course, some things that  _should never, ever be logged_ . Logs should never contain identifying information, such as names of customers, Social Security numbers, and other private data. They should never contain information that could present a security risk, such as passwords, access keys, or secrets. In most cases, even seemingly innocuous things like user IDs and usernames should not be logged unless encrypted.
At times, logging at the individual microservice level will not be enough. As we've seen throughout this book, microservices do not live alone, but within complex chains of clients and dependencies within the microservice ecosystem. While developers can try their best to log and monitor everything important and relevant to their service, tracking and logging requests and responses throughout the entire client and dependency chains from end-to-end can illuminate important information about the system that would otherwise go unknown (such as total latency and availability of the stack). To make this information accessible and visible, building a production-ready microservice ecosystem requires tracing each request through the entire stack.
The reader might have noticed at this point that it appears that a lot of information needs to be logged. Logs are data, and logging is expensive: they are expensive to store, they are expensive to access, and both storing and accessing logs come with the additional cost associated with making expensive calls over the network. The cost of storing logs may not seem like much for an individual microservice, but if the logging needs of all the microservices within a microservice ecosystem are added together, the cost is rather high.
###### Warning
### Logs and Debugging
Avoid adding debugging logs in code that will be deployed to production—such logs are very costly. If any logs are added specifically for the purpose of debugging, developers should take great care to ensure that any branch or build containing these additional logs does not ever touch production.
Logging needs to be scalable, it needs to be available, and it needs to be easily accessible  _and_  searchable. To keep the cost of logs down and to ensure scalability and high availability, it's often necessary to impose per-service logging quotas along with limits and standards on what information can be logged, how many logs each microservice can store, and how long the logs will be stored before being deleted.
### Dashboards
Every microservice must have at least one  _dashboard_  where all key metrics (such as hardware utilization, database connections, availability, latency, responses, and the status of API endpoints) are collected and displayed. A dashboard is a graphical display that is updated in real time to reflect all the most important information about a microservice. Dashboards should be easily accessible, centralized, and standardized across the microservice ecosystem.
Dashboards should be easy to interpret so that an outsider can quickly determine the health of the microservice: anyone should be able to look at the dashboard and know immediately whether or not the microservice is working correctly. This requires striking a balance between overloading a viewer with information (which would render the dashboard effectively useless) and not displaying enough information (which would also make the dashboard useless): only the necessary minimum of information about key metrics should be displayed.
A dashboard should also serve as an accurate reflection of the overall quality of monitoring of the entire microservice. Any key metric that is alerted on should be included in the dashboard (we will cover this in the next section): the exclusion of any key metric in the dashboard will reflect poor monitoring of the service, while the inclusion of metrics that are not necessary will reflect a neglect of alerting (and, consequently, monitoring) best practices.
There are several exceptions to the rule against inclusion of nonkey metrics. In addition to key metrics, information about each phase of the deployment pipeline should be displayed, though not necessarily within the same dashboard. Developers working on microservices that require monitoring a large number of key metrics may opt to set up separate dashboards for each deployment phase (one for staging, one for canary, and one for production) to accurately reflect the health of the microservice at each deployment phase: since different builds will be running on the deployment phases simultaneously, accurately reflecting the health of the microservice in a dashboard might require approaching dashboard design with the goal of reflecting the health of the microservice at a particular deployment phase (treating them almost as different microservices, or at least as different instantiations of a microservice).
###### Warning
### Dashboards and Outage Detection
Even though dashboards can illuminate anomalies and negative trends of a microservice's key metrics, developers should never need to watch a microservice's dashboard in order to detect incidents and outages. Doing so is an anti-pattern that leads to deficiencies in alerting and overall monitoring.
To assist in determining problems introduced by new deployments, it helps to include information about when a deployment occurred in the dashboard. The most effective and useful way to accomplish this is to make sure that deployment times are shown within the graphs of each key metric. Doing so allows developers to quickly check graphs after each deployment to see if any strange patterns emerge in any of the key metrics.
Well-designed dashboards also give developers an easy, visual way to detect anomalies and determine alerting thresholds. Very slight or gradual changes or disturbances in key metrics run the risk of not being caught by alerting, but a careful look at an accurate dashboard can illuminate anomalies that would otherwise go undetected. Alerting thresholds, which we will cover in the next section, are notoriously difficult to determine, but can be set appropriately when historical data on the dashboard is examined: developers can see normal patterns in key metrics, view spikes in metrics that occurred with outages (or led to outages) in the past, and then set thresholds accordingly.
### Alerting
The third component of monitoring a production-ready microservice is real-time  _alerting_ . The detection of failures, as well as the detection of changes within key metrics that could lead to a failure, is accomplished through alerting. To ensure this, all key metrics—host-level metrics, infrastructure metrics, and microservice-specific metrics—should be alerted on, with alerts set at various thresholds. Effective and actionable alerting is essential to preserving the availability of a microservice and preventing downtime.
### Setting up Effective Alerting
Alerts must be set up for all key metrics. Any change in a key metric at the host level, infrastructure level, or microservice level that could lead to an outage, cause a spike in latency, or somehow harm the availability of the microservice should trigger an alert. Importantly, alerts should also be triggered whenever a key metric is  _not_  seen.
All alerts should be useful: they should be defined by good, signal-providing thresholds. Three types of thresholds should be set for each key metric, and have both upper and lower bounds:  _normal_ ,  _warning_ , and  _critical_ . Normal thresholds reflect the usual, appropriate upper and lower bounds of each key metric and shouldn't ever trigger an alert. Warning thresholds on each key metric will trigger alerts when there is a deviation from the norm that could lead to a problem with the microservice; warning thresholds should be set such that they will trigger alerts  _before_  any deviations from the norm cause an outage or otherwise negatively affect the microservice. Critical thresholds should be set based on which upper and lower bounds on key metrics actually cause an outage, cause latency to spike, or otherwise hurt a microservice's availability. In an ideal world, warning thresholds should trigger alerts that lead to quick detection, mitigation, and resolution before any critical thresholds are reached. In each category, thresholds should be high enough to avoid noise, but low enough to catch any and all real problems with key metrics.
### Determining Thresholds Early in the Lifecycle of a Microservice
Thresholds for key metrics can be very difficult to set without historical data. Any thresholds set early in a microservice's lifecycle run the risk of either being useless or triggering too many alerts. To determine the appropriate thresholds for a new microservice (or even an old one), developers can run load testing on the microservice to gauge where the thresholds should lie. Running "normal" traffic loads through the microservice can determine the normal thresholds, while running larger-than-expected traffic loads can help determine warning and critical thresholds.
All alerts need to be actionable. Nonactionable alerts are those that are triggered and then resolved (or ignored) by the developer(s) on call for the microservice because they are not important, not relevant, do not signify that anything is wrong with the microservice, or alert on a problem that cannot be resolved by the developer(s). Any alert that cannot be immediately acted on by the on-call developer(s) should be removed from the pool of alerts, reassigned to the relevant on-call rotation, or (if possible) changed so that it becomes actionable.
Some of the key microservice metrics run the risk of being nonactionable. For example, alerting on the availability of dependencies can easily lead to nonactionable alerts if dependency outages, increases in dependency latency, or dependency downtime do not require any action to be taken by their client(s). If no action needs to be taken, then the thresholds should be set appropriately, or in more extreme cases, no alerts should be set on dependencies at all. However, if any action at all should be taken, even something as small as contacting the dependency's on-call or development team in order to alert them to the issue and/or coordinate mitigation and resolution, then an alert should be triggered.
### Handling Alerts
Once an alert has been triggered, it needs to be handled quickly and effectively. The root cause of the triggered alert should be mitigated and resolved. To quickly and effectively handle alerts, there are several steps that can be taken.
The first step is to create step-by-step instructions for each known alert that detail how to triage, mitigate, and resolve each alert. These step-by-step alert instructions should live within an on-call runbook within the centralized documentation of each microservice, making them easily accessible to anyone who is on call for the microservice (more details on runbooks can be found in [Chapter 7,  _Documentation and Understanding_ ][6]). Runbooks are crucial to the monitoring of a microservice: they allow any on-call developer to have step-by-step instructions on how to mitigate and resolve the root causes of each alert. Since each alert is tied to a deviation in a key metric, runbooks can be written so that they address each key metric, known causes of deviations from the norm, and how to go about debugging the problem.
Two types of on-call runbooks should be created. The first are runbooks for host-level and infrastructure-level alerts that should be shared between the whole engineering organization—these should be written for every key host-level and infrastructure-level metric. The second are on-call runbooks for specific microservices that have step-by-step instructions regarding microservice-specific alerts triggered by changes in key metrics; for example, a spike in latency should trigger an alert, and there should be step-by-step instructions in the on-call runbook that clearly document how to debug, mitigate, and resolve spikes in the microservice's latency.
The second step is to identify alerting anti-patterns. If the microservice on-call rotation is overwhelmed by alerts yet the microservice appears to work as expected, then any alerts that are seen more than once but that can be easily mitigated and/or resolved should be automated away. That is, build the mitigation and/or resolution steps into the microservice itself. This holds for every alert, and writing step-by-step instructions for alerts within on-call runbooks allows executing on this strategy to be rather effective. In fact, any alert that, once triggered, requires a simple set of steps to be taken in order to be mitigated and resolved, can be easily automated away. Once this level of production-ready monitoring has been established, a microservice should never experience the same exact problem twice.
### On-Call Rotations
In a microservice ecosystem, the development teams themselves are responsible for the availability of their microservices. Where monitoring is concerned, this means that developers need to be on call for their own microservices. The goal of each developer on call for a microservice needs to be clear: they are to detect, mitigate, and resolve any issue that arises with the microservice during their on-call shift before the issue causes an outage for their microservice or impacts the business itself.
In some larger engineering organizations, site reliability engineers, DevOps, or other operations engineers may take on the responsibility for monitoring and on call, but this requires each microservice to be relatively stable and reliable before the on-call responsibilities can be handed off to another team. In most microservice ecosystems, microservices rarely reach this high level of stability because, as we've seen throughout the previous chapters, microservices are constantly changing. In a microservice ecosystem, developers need to bear the responsibility of monitoring the code that they deploy.
Designing good on-call rotations is crucial and requires the involvement of the entire team. To prevent burnout, on-call rotations should be both brief and shared: no fewer than two developers should ever be on call at one time, and on-call shifts should last no longer than one week and be spaced no more frequently than one month apart.
The on-call rotations of each microservice should be internally publicized and easily accessible. If a microservice team is experiencing issues with one of their dependencies, they should be able to track down the on-call engineers for the microservice and contact them very quickly. Hosting this information in a centralized place helps to make developers more effective in triaging problems and preventing outages.
Developing standardized on-call procedures across an engineering organization will go a long way toward building a sustainable microservice ecosystem. Developers should be trained about how to approach their on-call shifts, be made aware of on-call best practices, and be ramped up for joining the on-call rotation very quickly. Standardizing this process and making on-call expectations completely clear to every developer will prevent the burnout, confusion, and frustration that usually accompanies any mention of joining an on-call rotation.
### Evaluate Your Microservice
Now that you have a better understanding of monitoring, use the following list of questions to assess the production-readiness of your microservice(s) and microservice ecosystem. The questions are organized by topic, and correspond to the sections within this chapter.
### Key Metrics
* What are this microservice's key metrics?
* What are the host and infrastructure metrics?
* What are the microservice-level metrics?
* Are all the microservice's key metrics monitored?
### Logging
* What information does this microservice need to log?
* Does this microservice log all important requests?
* Does the logging accurately reflect the state of the microservice at any given time?
* Is this logging solution cost-effective and scalable?
### Dashboards
* Does this microservice have a dashboard?
* Is the dashboard easy to interpret? Are all key metrics displayed on the dashboard?
* Can I determine whether or not this microservice is working correctly by looking at the dashboard?
### Alerting
* Is there an alert for every key metric?
* Are all alerts defined by good, signal-providing thresholds?
* Are alert thresholds set appropriately so that alerts will fire before an outage occurs?
* Are all alerts actionable?
* Are there step-by-step triage, mitigation, and resolution instructions for each alert in the on-call runbook?
### On-Call Rotations
* Is there a dedicated on-call rotation responsible for monitoring this microservice?
* Is there a minimum of two developers on each on-call shift?
* Are there standardized on-call procedures across the engineering organization?
--------------------------------------------------------------------------------
作者简介:
Susan J. Fowler is the author of Production-Ready Microservices. She is currently an engineer at Stripe. Previously, Susan worked on microservice standardization at Uber, developed application platforms and infrastructure at several small startups, and studied particle physics at the University of Pennsylvania.
----------------------------
via: https://www.oreilly.com/learning/monitoring-a-production-ready-microservice
作者:[Susan Fowler][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/susan_fowler
[1]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[2]:https://pixabay.com/en/container-container-ship-port-1638068/
[3]:https://www.oreilly.com/learning/monitoring-a-production-ready-microservice?imm_mid=0ee8c5&cmp=em-webops-na-na-newsltr_20170310
[4]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[5]:http://conferences.oreilly.com/oscon/oscon-tx?intcmp=il-prog-confreg-update-ostx17_new_site_oscon_17_austin_right_rail_cta
[6]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch07.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices
[7]:https://www.oreilly.com/people/susan_fowler
[8]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=monitoring-production-ready-microservices
[9]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch01.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices
[10]:https://www.safaribooksonline.com/library/view/production-ready-microservices/9781491965962/ch03.html?utm_source=oreilly&utm_medium=newsite&utm_campaign=monitoring-production-ready-microservices

View File

@ -0,0 +1,122 @@
# How to work around video and subtitle embed errors
This is going to be a slightly weird tutorial. The background story is as follows. Recently, I created a bunch of [sweet][1] [parody][2] [clips][3] of the [Risitas y las paelleras][4] sketch, famous for the insane laughter of its protagonist, Risitas. As always, I had them uploaded to Youtube, but from the moment I decided on what subtitles to use to the moment the videos finally became available online, there was a long and twisty journey.
In this guide, I would like to present several typical issues that you may encounter when creating your own media, mostly with subtitles and the subsequent upload to media sharing portals, specifically Youtube, and how you can work around those. After me.
### The background story
My software of choice for video editing is Kdenlive, which I started using when I created the most silly [Frankenstein][5] clip, and it's been my loyal companion ever since. Normally, I render files to the WebM container, with the VP8 video codec and Vorbis audio codec, because that's what Google likes. Indeed, I had no issues with the roughly 40 different clips I uploaded in the last seven-odd years.
![Kdenlive, create project](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-create-project.jpg)
![Kdenlive, render](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-render.png)
However, after I completed my Risitas & Linux project, I was in a bit of a predicament. The video file and the subtitle file were still two separate entities, and I needed somehow to put them together. My original article for subtitles work mentions Avidemux and Handbrake, and both these are valid options.
However, I was not too happy with the output generated by either one of these, and for a variety of reasons, something was ever so slightly off. Avidemux did not handle the video codecs well, whereas Handbrake omitted a couple of lines of subtitle text from the final product, and the font was ugly. Solvable, but not the topic for today.
Therefore, I decided to use VideoLAN (VLC) to embed subtitles onto the video. There are several ways to do this. You can use the Media > Convert/Save option, but this one does not have everything we need. Instead, you should use Media > Stream, which comes with a more fully fledged wizard, and it also offers an editable summary of the transcoding options, which we DO need - see my [tutorial][6] on subtitles for this please.
### Errors!
The process of embedding subtitles is not trivial. You will most likely encounter several problems along the way. This guide should help you work around these so you can focus on your work and not waste time debugging weird software errors. Anyhow, here's a small but probable collection of issues you will face while working with subtitles in VLC. Trial & error, but also nerdy design.
### No playable streams
You have probably chosen weird output settings. Double-check that you have selected the right video and audio codecs. Also, remember that some media players may not have all the codecs, so make sure you test on the system on which you want these clips to play.
![No playable streams](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-no-playable-streams.png)
### Subtitles overlaid twice
This can happen if you check the box that reads Use a subtitle file in the first step of the streaming media wizard. Just select the file you need and click Stream. Leave the box unchecked.
![Select file](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-select.png)
### No subtitle output is generated
This can happen for two main reasons. One, you have selected the wrong encapsulation format. Do make sure the subtitles are marked correctly on the profile page when you edit it before proceeding. If the format does not support subtitles, it might not work.
![Encapsulation](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-encap.png)
Two, you may have left the subtitle codec render enabled in the final output. You do not need this. You only need to overlay the subtitles onto the video clip. Please check the generated stream output string and delete an option that reads scodec=<something> before you click the Stream button.
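To illustrate the idea only - the exact string VLC generates will differ depending on your chosen profile, and the codec names below are placeholders - the edit amounts to turning the first line into the second:

```
#transcode{vcodec=...,acodec=...,scodec=<something>,soverlay}
#transcode{vcodec=...,acodec=...,soverlay}
```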
![Remove text from output string](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-remove-text.png)
### Missing codecs + workaround
This is a common [bug][7] due to how experimental codecs are implemented, and you will most likely see it if you choose the following profile: Video - H.264 + AAC (MP4). The file will be rendered, and if you selected subtitles, they will be overlaid, too, but without any audio. However, we can fix this with a hack.
![AAC codec](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-aac-codec.png)
![MP4A error](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-mp4a.png)
One possible hack is to start VLC from the command line with the --sout-ffmpeg-strict=-2 option (might work; a sketch follows after the list and screenshot below). The other, more reliable workaround is to take the audio-less video with the subtitles overlaid and re-render it through Kdenlive, using the original project video render (without subtitles) as the audio source. Sounds complicated, so in detail:
* Move existing clips (containing audio) from video to audio. Delete the rest.
* Alternatively, use rendered WebM file as your audio source.
* Add new clip - the one we created with embedded subtitles AND no audio.
* Place the clip as new video.
* Render as WebM again.
![Repeat render](http://www.dedoimedo.com/images/computers-years/2016-2/vlc-subs-errors-kdenlive-repeat-render.jpg)
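For reference, the first hack mentioned before the list simply means launching VLC with the relaxed strictness flag and then driving the usual Media > Stream wizard; a minimal sketch:

```
$ vlc --sout-ffmpeg-strict=-2
```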
Using other types of audio codecs will most likely work (e.g. MP3), and you will have a complete project with video, audio and subtitles. If you're happy that nothing is missing, you can now upload to Youtube. But then ...
### Youtube video manager & unknown format
If you're trying to upload a non-WebM clip (say MP4), you might get an unspecified error that your clip does not meet the media format requirements. I was not sure why VLC generated a non-Youtube-compliant file. However, again, the fix is easy. Use Kdenlive to recreate the video, and this should result in a file that has all the right meta fields and whatnot that Youtube likes. Back to my original story and the 40-odd clips created through Kdenlive this way.
P.S. If your clip has valid audio, then just re-run it through Kdenlive. If it does not, do the video/audio trick from before. Mute clips as necessary. In the end, this is just like overlay, except you're using the video source from one clip and audio from another for the final render. Job done.
### More reading
I do not wish to repeat myself or spam unnecessarily with links. I have loads of clips on VLC in the Software & Security section, so you might want to consult those. The earlier mentioned article on VLC & Subtitles has links to about half a dozen related tutorials, covering additional topics like streaming, logging, video rotation, remote file access, and more. I'm sure you can work the search engine like pros.
### Conclusion
I hope you find this guide helpful. It covers a lot, and I tried to make it linear and simple, and to address as many pitfalls as enterprising streamers and subtitle lovers may face when working with VLC. It's all about containers and codecs, but also the fact that there are virtually no standards in the media world, and when you go from one format to another, sometimes you may encounter corner cases.
If you do hit an error or three, the tips and tricks here should help you solve at least some of them, including unplayable streams, missing or duplicate subtitles, missing codecs and the wicked Kdenlive workaround, Youtube upload errors, hidden VLC command line options, and a few other extras. Quite a lot for a single piece of text, right? Luckily, it's all good stuff. Take care, children of the Internet. And if you have any requests as to what my future VLC articles should cover, do feel liberated enough to send an email.
Cheers.
--------------------------------------------------------------------------------
作者简介:
My name is Igor Ljubuncic. I'm more or less 38 of age, married with no known offspring. I am currently working as a Principal Engineer with a cloud technology company, a bold new frontier. Until roughly early 2015, I worked as the OS Architect with an engineering computing team in one of the largest IT companies in the world, developing new Linux-based solutions, optimizing the kernel and hacking the living daylights out of Linux. Before that, I was a tech lead of a team designing new, innovative solutions for high-performance computing environments. Some other fancy titles include Systems Expert and System Programmer and such. All of this used to be my hobby, but since 2008, it's a paying job. What can be more satisfying than that?
From 2004 until 2008, I used to earn my bread by working as a physicist in the medical imaging industry. My work expertise focused on problem solving and algorithm development. To this end, I used Matlab extensively, mainly for signal and image processing. Furthermore, I'm certified in several major engineering methodologies, including MEDIC Six Sigma Green Belt, Design of Experiment, and Statistical Engineering.
I also happen to write books, including high fantasy and technical work on Linux; mutually inclusive.
Please see my full list of open-source projects, publications and patents, just scroll down.
For a complete list of my awards, nominations and IT-related certifications, hop yonder and yonder please.
-------------
via: http://www.dedoimedo.com/computers/vlc-subtitles-errors.html
作者:[Igor Ljubuncic][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.dedoimedo.com/faq.html
[1]:https://www.youtube.com/watch?v=MpDdGOKZ3dg
[2]:https://www.youtube.com/watch?v=KHG6fXEba0A
[3]:https://www.youtube.com/watch?v=TXw5lRi97YY
[4]:https://www.youtube.com/watch?v=cDphUib5iG4
[5]:http://www.dedoimedo.com/computers/frankenstein-media.html
[6]:http://www.dedoimedo.com/computers/vlc-subtitles.html
[7]:https://trac.videolan.org/vlc/ticket/6184

View File

@ -0,0 +1,272 @@
Understanding 7z command switches - part I
============================================================
### On this page
1. [Include files][1]
2. [Exclude files][2]
3. [Set password for your archive][3]
4. [Set output directory][4]
5. [Creating multiple volumes][5]
6. [Set compression level of archive][6]
7. [Display technical information of archive][7]
7z is no doubt a feature-rich and powerful archiver (claimed to offer the highest compression ratio). Here at HowtoForge, we have [already discussed][9] how you can install and use it. But the discussion was limited to basic features that you can access using the 'function letters' the tool provides.
Expanding our coverage on the tool, here in this tutorial, we will be discussing some of the 'switches' 7z offers. But before we proceed, it's worth sharing that all the instructions and commands mentioned in this tutorial have been tested on Ubuntu 16.04 LTS.
**Note**: We will be using the files displayed in the following screenshot for performing various operations using 7zip.
[
![ls from test directory](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/ls.png)
][10]
### Include files
The 7z tool allows you to selectively include files in an archive. This feature can be accessed using the -i switch.
Syntax:
-i[r[-|0]]{@listfile|!wildcard}
For example, if you want to include only .txt files in your archive, you can use the following command:
$ 7z a -i!*.txt include.7z
Here is the output:
[
![add files to 7zip](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/include.png)
][11]
Now, to check whether the newly-created archive file contains only .txt files, you can use the following command:
$ 7z l include.7z
Here is the output:
[
![Result](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/includelist.png)
][12]
In the above screenshot, you can see that only the testfile.txt file has been added to the archive.
### Exclude files
If you want, you can also exclude the files that you don't need. This can be done using the -x switch.
Syntax:
-x[r[-|0]]{@listfile|!wildcard}
For example, if you want to exclude a file named abc.7z from the archive that you are going to create, then you can use the following command:
$ 7z a -x!abc.7z exclude.7z
Here is the output:
[
![exclude files from 7zip](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/exclude.png)
][13]
To check whether the resulting archive file has excluded abc.7z or not, you can use the following command:
$ 7z l exclude.7z
Here is the output:
[
![result of file exclusion](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/excludelist.png)
][14]
In the above screenshot, you can see that the abc.7z file has been excluded from the new archive file.
**Pro tip**: Suppose the task is to exclude all the .7z files with names starting with the letter 't' and to include all .7z files with names starting with the letter 'a'. This can be done by combining both the -i and -x switches in the following way:
$ 7z a '-x!t*.7z' '-i!a*.7z' combination.7z
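As with the earlier examples, you can verify the result by listing the contents of the new archive (what you see will, of course, depend on the files present in your directory):
$ 7z l combination.7z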
### Set password for your archive
7z also lets you password protect your archive file. This feature can be accessed using the -p switch.
$ 7z a [archive-filename] -p[your-password] -mhe=[on/off]
**Note**: The -mhe option enables or disables archive header encryption (default is off).
For example:
$ 7z a password.7z -pHTF -mhe=on
Needless to say, when you extract your password-protected archive, the tool will ask you for the password. To extract a password-protected file, use the 'e' function letter. Following is an example:
$ 7z e password.7z
[
![protect 7zip archive with a password](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/password.png)
][15]
### Set output directory
The tool also lets you extract an archive file into a directory of your choice. This can be done using the -o switch. Note that the switch only works when the command contains either the e function letter or the x function letter.
$ 7z [e/x] [existing-archive-filename] -o[path-of-directory]
For example, suppose the following command is run in the present working directory:
$ 7z e output.7z -ohow/to/forge
And, as the value passed to the -o switch suggests, the aim is to extract the archive in the ./how/to/forge directory.
Here is the output:
[
![7zip output directory](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/output.png)
][16]
In the above screenshot, you can see that all the contents of the existing archive file have been extracted. But where? To check whether the archive file was extracted into the ./how/to/forge directory, we can use the ls -R command.
[
![result](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/ls_-R.png)
][17]
In the above screenshot, we can see that all the contents of output.7z have indeed been extracted to ./how/to/forge.
### Creating multiple volumes
With the help of the 7z tool, you can create multiple volumes (smaller sub-archives) of your archive file. This is very useful when transferring large files over a network or on a USB drive. This feature can be accessed using the -v switch, which requires you to specify the size of the sub-archives.
We can specify the size of sub-archives in bytes (b), kilobytes (k), megabytes (m) and gigabytes (g).
$ 7z a [archive-filename] [files-to-archive] -v[size-of-sub-archive1] -v[size-of-sub-archive2] ....
Let's understand this using an example. Please note that we will be using a new directory for performing operations on the -v switch.
Here is the screenshot of the directory contents:
[
![7zip volumes](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volumels.png)
][18]
Now, we can run the following command for creating multiple volumes (sized 100b each) of an archive file:
7z a volume.7z * -v100b
Here is the screenshot:
[
![compressing volumes](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volume.png)
][19]
Now, to see the list of sub-archives that were created, use the ls command.
[
![list of archives](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/volumels2.png)
][20]
As seen in the above screenshot, a total of four volumes have been created: volume.7z.001, volume.7z.002, volume.7z.003, and volume.7z.004.
**Note**: You can extract files using the .7z.001 archive. But, for that, all the other sub-archive volumes should be present in the same directory.
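For example, a sketch of the extraction step (assuming all four volumes sit in the current directory):
$ 7z x volume.7z.001
7z will locate the remaining .7z.002 - .7z.004 parts on its own.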
### Set compression level of archive
7z also allows you to set compression levels of your archives. This feature can be accessed using the -m switch. There are various compression levels in 7z, such as -mx0, -mx1, -mx3, -mx5, -mx7 and -mx9.
Here's a brief summary about these levels:
* **-mx0** = Don't compress at all - just copy the contents to the archive.
* **-mx1** = Consumes the least time, but compression is low.
* **-mx3** = Better than -mx1.
* **-mx5** = This is the default (compression is normal).
* **-mx7** = Maximum compression.
* **-mx9** = Ultra compression.
**Note**: For more information on these compression levels, head [here][8].
$ 7z a [archive-filename] [files-to-archive] -mx=[0,1,3,5,7,9]
For example, we have a bunch of files and folders in a directory, which we tried compressing using a different compression level each time. Just to give you an idea, here's the command used when the archive was created with compression level '0' (the file name is quoted so the parentheses don't confuse the shell):
$ 7z a 'compression(-mx0).7z' * -mx=0
Similarly, other commands were executed.
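If you want to reproduce the comparison yourself, a small shell loop can generate all six archives in one go. This is just a sketch (the naming scheme is my own); note how the -x switch from earlier is reused to keep previously created archives out of the later ones:

```
for level in 0 1 3 5 7 9; do
    # exclude existing .7z files so earlier archives don't get re-packed
    7z a "compression(-mx${level}).7z" * -mx=${level} '-x!*.7z'
done
```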
Here is the list of output archives (produced using the 'ls' command), with their names suggesting the compression level used in their creation, and the fifth column in the output revealing the effect of compression level on their size.
[
![7zip compression level](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/compression.png)
][21]
### Display technical information of archive
If you want, 7z also lets you display technical information about an archive - its type, physical size, header size, and so on - on the standard output. This feature can be accessed using the -slt switch. This switch only works with the l function letter.
$ 7z l -slt [archive-filename]
For example:
$ 7z l -slt abc.7z
Here is the output:
[
![](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/slt.png)
][22]
### Specify type of archive to create
If you want to create an archive in a format other than 7z (the default), you can specify your choice using the -t switch.
$ 7z a -t[specify-type-of-archive] [archive-filename] [file-to-archive]
The following example shows a command to create a .zip file:
7z a -tzip howtoforge *
The output file produced is 'howtoforge.zip'. To cross-verify its type, use the 'file' command:
[
![](https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/type.png)
][23]
So, howtoforge.zip is indeed a ZIP file. Similarly, you can create other kinds of archives that 7z supports.
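For instance, a couple of sketches using other types 7z supports - tar for bundling, and bzip2, which compresses a single file (the file names here are just examples):
7z a -ttar howtoforge.tar *
7z a -tbzip2 howtoforge.tar.bz2 howtoforge.tar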
### Conclusion
As you would agree, the knowledge of 7z 'function letters' along with 'switches' lets you make the most out of the tool. We aren't yet done with switches - there are some more that will be discussed in part 2.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/understanding-7z-command-switches/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/
[1]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#include-files
[2]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#exclude-files
[3]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-password-for-your-archive
[4]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-output-directory
[5]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#creating-multiple-volumes
[6]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#set-compression-level-of-archive
[7]:https://www.howtoforge.com/tutorial/understanding-7z-command-switches/#display-technical-information-of-archive
[8]:http://askubuntu.com/questions/491223/7z-ultra-settings-for-zip-format
[9]:https://www.howtoforge.com/tutorial/how-to-install-and-use-7zip-file-archiver-on-ubuntu-linux/
[10]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls.png
[11]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/include.png
[12]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/includelist.png
[13]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/exclude.png
[14]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/excludelist.png
[15]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/password.png
[16]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/output.png
[17]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/ls_-R.png
[18]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels.png
[19]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volume.png
[20]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/volumels2.png
[21]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/compression.png
[22]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/slt.png
[23]:https://www.howtoforge.com/images/understanding_7z_command_switches_part_i/big/type.png

View File

@ -0,0 +1,154 @@
翻译中 [ChrisLeeGit](https://github.com/chrisleegit)
Assign Read/Write Access to a User on Specific Directory in Linux
============================================================
In a previous article, we showed you how to [create a shared directory in Linux][3]. Here, we will describe how to give read/write access to a user on a specific directory in Linux.
There are two possible methods of doing this: the first is [using ACLs (Access Control Lists)][4] and the second is [creating user groups to manage file permissions][5], as explained below.
For the purpose of this tutorial, we will use the following setup.
```
Operating system: CentOS 7
Test directory: /shares/project1/reports
Test user: tecmint
Filesystem type: Ext4
```
Make sure all commands are executed as the root user, or use the [sudo command][6] with equivalent privileges.
Let's start by creating the directory called `reports` using the mkdir command:
```
# mkdir -p /shares/project1/reports
```
### Using ACL to Give Read/Write Access to User on Directory
Important: To use this method, ensure that your Linux filesystem type (such as Ext3, Ext4, NTFS, or BTRFS) supports ACLs.
1. First, [check the current file system type][7] on your system, and also whether the kernel supports ACL as follows:
```
# df -T | awk '{print $1,$2,$NF}' | grep "^/dev"
# grep -i acl /boot/config*
```
From the screenshot below, the filesystem type is Ext4 and the kernel supports POSIX ACLs as indicated by the CONFIG_EXT4_FS_POSIX_ACL=y option.
[
![Check Filesystem Type and Kernel ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png)
][8]
Check Filesystem Type and Kernel ACL Support
2. Next, check if the file system (partition) is mounted with ACL option or not:
```
# tune2fs -l /dev/sda1 | grep acl
```
[
![Check Partition ACL Support](http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png)
][9]
Check Partition ACL Support
From the above output, we can see that the default mount options already include ACL support. If it's not enabled, you can enable it for the particular partition (/dev/sda3 in this case):
```
# mount -o remount,acl /
# tune2fs -o acl /dev/sda3
```
3. Now, it's time to assign read/write access to the user `tecmint` on the specific directory called `reports` by running the following commands.
```
# getfacl /shares/project1/reports # Check the default ACL settings for the directory
# setfacl -m user:tecmint:rw /shares/project1/reports # Give rw access to user tecmint
# getfacl /shares/project1/reports # Check new ACL settings for the directory
```
[
![Give Read/Write Access to Directory Using ACL](http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png)
][10]
Give Read/Write Access to Directory Using ACL
In the screenshot above, the user `tecmint` now has read/write (rw) permissions on the directory /shares/project1/reports, as seen from the output of the second getfacl command.
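A quick way to confirm the ACL actually works is to attempt a write as that user. This sanity check is my own addition, not part of the original walkthrough; note that creating a file also requires execute (x) permission on the directory, so rwx is granted here instead of just rw:

```
# setfacl -m user:tecmint:rwx /shares/project1/reports  # x is needed to enter the directory
# sudo -u tecmint touch /shares/project1/reports/test.txt
# ls -l /shares/project1/reports/
```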
For more information about ACL lists, do check out our following guides.
1. [How to Use ACLs (Access Control Lists) to Setup Disk Quotas for Users/Groups][1]
2. [How to Use ACLs (Access Control Lists) to Mount Network Shares][2]
Now lets see the second method of assigning read/write access to a directory.
### Using Groups to Give Read/Write Access to User on Directory
1. If the user already has a default user group (normally with the same name as the username), simply change the group owner of the directory.
```
# chgrp tecmint /shares/project1/reports
```
Alternatively, create a new group for multiple users (who will be given read/write permissions on a specific directory), as follows. However, this will [create a shared directory][11]:
```
# groupadd projects
```
2. Then add the user `tecmint` to the group `projects` as follows:
```
# usermod -aG projects tecmint # add user to projects
# groups tecmint # check user's groups
```
3. Change the group owner of the directory to projects:
```
# chgrp projects /shares/project1/reports
```
4. Now set read/write access for the group members:
```
# chmod -R 0770 /shares/project1/reports # rwx for owner and group; directories need x to be entered
# ls -l /shares/project1/ # check new permissions
```
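One optional refinement (my own addition, not in the original steps): setting the setgid bit on the directory makes new files created inside it inherit the `projects` group automatically, which is convenient for shared directories:

```
# chmod g+s /shares/project1/reports
# ls -ld /shares/project1/reports # the group permissions now show 's' (e.g. drwxrws---)
```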
That's it! In this tutorial, we showed you how to give read/write access to a user on a specific directory in Linux. If you run into any issues, do ask via the comment section below.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/give-read-write-access-to-directory-in-linux/
作者:[Aaron Kili][a]
译者:[ChrisLeeGit](https://github.com/chrisleegit)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/set-access-control-lists-acls-and-disk-quotas-for-users-groups/
[2]:http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
[3]:http://www.tecmint.com/create-a-shared-directory-in-linux/
[4]:http://www.tecmint.com/secure-files-using-acls-in-linux/
[5]:http://www.tecmint.com/manage-users-and-groups-in-linux/
[6]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[7]:http://www.tecmint.com/find-linux-filesystem-type/
[8]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Filesystem-Type-and-Kernel-ACL-Support.png
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-Partition-ACL-Support.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Give-Read-Write-Access-to-Directory-Using-ACL.png
[11]:http://www.tecmint.com/create-a-shared-directory-in-linux/
[12]:http://www.tecmint.com/author/aaronkili/
[13]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[14]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,307 @@
How to set up a personal web server with a Raspberry Pi
============================================================
![How to set up a personal web server with a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/lightbulb_computer_person_general_.png?itok=ZY3UuQQa "How to set up a personal web server with a Raspberry Pi")
>Image by : opensource.com
A personal web server is "the cloud," except you own and control it as opposed to a large corporation.
Owning a little cloud has a lot of benefits, including customization, free storage, free Internet services, a path into open source software, high-quality security, full control over your content, the ability to make quick changes, a place to experiment with code, and much more. Most of these benefits are immeasurable, but financially they can save you over $100 per month.
![Building your own web server with Raspberry Pi](https://opensource.com/sites/default/files/1-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Building your own web server with Raspberry Pi")
Image by Mitchell McLaughlin, CC BY-SA 4.0
I could have used AWS, but I prefer complete freedom, full control over security, and learning how things are built.
* Self web-hosting: No BlueHost or DreamHost
* Cloud storage: No Dropbox, Box, Google Drive, Microsoft Azure, iCloud, or AWS
* On-premise security
* HTTPS: Lets Encrypt
* Analytics: Google
* OpenVPN: No need for a paid private Internet access service (at an estimated $7 per month)
Things I used:
* Raspberry Pi 3 Model B
* MicroSD Card (32GB recommended, [Raspberry Pi Compatible SD Cards][1])
* USB microSD card reader
* Ethernet cable
* Router connected to Wi-Fi
* Raspberry Pi case
* Amazon Basics MicroUSB cable
* Apple wall charger
* USB mouse
* USB keyboard
* HDMI cable
* Monitor (with HDMI input)
* MacBook Pro
### Step 1: Setting up the Raspberry Pi
Download the most recent release of Raspbian (the Raspberry Pi operating system). The [Raspbian Jessie][6] ZIP version is ideal [1]. Unzip or extract the downloaded file and copy it onto the SD card. [Pi Filler][7] makes this process easy. [Download Pi Filler 1.3][8] or the most recent version. Unzip or extract the downloaded file and open it. You should be greeted with this prompt:
![Pi Filler prompt](https://opensource.com/sites/default/files/2-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Pi Filler prompt")
Make sure the USB card reader has NOT been inserted yet. If it has, eject it. Proceed by clicking Continue. A file explorer should appear. Locate the uncompressed Raspberry Pi OS file from your Mac or PC and select it. You should see another prompt like the one pictured below:
![USB card reader prompt](https://opensource.com/sites/default/files/3-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "USB card reader")
Insert the MicroSD card (32GB recommended, 16GB minimum) into the USB MicroSD card reader. Then insert the USB reader into the Mac or PC. You can rename the SD card to "Raspberry" to distinguish it from others. Click Continue. Make sure the SD card is empty. Pi Filler will _erase_ all previous storage at runtime. If you need to back up the card, do so now. When you are ready to continue, the Raspbian OS will be written to the SD card. It should take between one and three minutes. Once the write is completed, eject the USB reader, remove the SD card, and insert it into the Raspberry Pi SD card slot. Give the Raspberry Pi power by plugging the power cord into the wall. It should start booting up. The Raspberry Pi default login is:
**username: pi
password: raspberry**
When the Raspberry Pi has completed booting for the first time, a configuration screen titled "Setup Options" should appear like the image below [2]:
![Raspberry Pi software configuration setup](https://opensource.com/sites/default/files/4-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi software configuration setup")
Select the "Expand Filesystem" option and hit the Enter key [3]. Also, I recommend selecting the second option, "Change User Password." It is important for security. It also personalizes your Raspberry Pi.
Select the third option in the setup options list, "Enable Boot To Desktop/Scratch" and hit the Enter key. It will take you to another window titled "Choose boot option" as shown in the image below.
![Choose boot option](https://opensource.com/sites/default/files/5-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Choose boot option")
In the "Choose boot option" window, select the second option, "Desktop log in as user 'pi' at the graphical desktop" and hit the Enter button [4]. Once this is done you will be taken back to the "Setup Options" page. If not, select the "OK" button at the bottom of this window and you will be taken back to the previous window.
Once both these steps are done, select the "Finish" button at the bottom of the page and it should reboot automatically. If it does not, then use the following command in the terminal to reboot.
**$ sudo reboot**
After the reboot from the previous step, if everything went well, you will end up on the desktop similar to the image below.
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Raspberry Pi desktop")
Once you are on the desktop, open a terminal and enter the following commands to update the firmware of the Raspberry Pi.
```
$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update
```
This may take a few minutes. Now the Raspberry Pi is up-to-date and running.
### Step 2: Configuring the Raspberry Pi
SSH, which stands for Secure Shell, is a cryptographic network protocol that lets you securely transfer data between your computer and your Raspberry Pi. You can control your Raspberry Pi from your Mac's command line without a monitor or keyboard.
To use SSH, first, you need your Pi's IP address. Open the terminal and type:
```
$ sudo ifconfig
```
If you are using Ethernet, look at the "eth0" section. If you are using Wi-Fi, look at the "wlan0" section.
Find "inet addr" followed by an IP address—something like 192.168.1.115, a common default IP I will use for the duration of this article.
With this address, open terminal and type:
```
$ ssh pi@192.168.1.115
```
For SSH on PC, see footnote [5].
Enter the default password "raspberry" when prompted, unless you changed it.
You are now logged in via SSH.
### Remote desktop
Using a GUI (graphical user interface) is sometimes easier than a command line. On the Raspberry Pi's command line (using SSH) type:
```
$ sudo apt-get install xrdp
```
Xrdp supports the Microsoft Remote Desktop Client for Mac and PC.
On Mac, navigate to the app store and search for "Microsoft Remote Desktop." Download it. (For a PC, see footnote [6].)
After installation, search your Mac for a program called "Microsoft Remote Desktop." Open it. You should see this:
![Microsoft Remote Desktop](https://opensource.com/sites/default/files/7-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Microsoft Remote Desktop")
Image by Mitchell McLaughlin, CC BY-SA 4.0
Click "New" to set up a remote connection. Fill in the blanks as shown below.
![Setting up a remote connection](https://opensource.com/sites/default/files/8-image_by_mitchell_mclaughlin_cc_by-sa_4.0.png "Setting up a remote connection")
Image by Mitchell McLaughlin, CC BY-SA 4.0
Save it by exiting out of the "New" window.
You should now see the remote connection listed under "My Desktops." Double click it.
After briefly loading, you should see your Raspberry Pi desktop in a window on your screen, which looks like this:
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_0.png "Raspberry Pi desktop")
Perfect. Now, you don't need a separate mouse, keyboard, or monitor to control the Pi. This is a much more lightweight setup.
### Static local IP address
Sometimes the local IP address 192.168.1.115 will change. We need to make it static. Type:
```
$ sudo ifconfig
```
From the "eth0" section (or the "wlan0" section if you use Wi-Fi), write down the "inet addr" (the Pi's current IP), the "bcast" (the broadcast IP range), and the "mask" (the subnet mask). Then, type:
```
$ netstat -nr
```
Write down the "destination" and the "gateway/network."
![Setting up a local IP address](https://opensource.com/sites/default/files/setting_up_local_ip_address.png "Setting up a local IP address")
The cumulative records should look something like this:
```
net address 192.168.1.115
bcast 192.168.1.255
mask 255.255.255.0
gateway 192.168.1.1
network 192.168.1.1
destination 192.168.1.0
```
With this information, you can set a static internal IP easily. Type:
```
$ sudo nano /etc/dhcpcd.conf
```
Do not use **/etc/network/interfaces**.
Then all you need to do is append this to the bottom of the file, substituting the correct IP address you want.
```
interface eth0
static ip_address=192.168.1.115
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```
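The snippet above assumes a wired connection. If your Pi is on Wi-Fi instead, the same idea applies to the wireless interface (a sketch; adjust the addresses to match your own network):

```
interface wlan0
static ip_address=192.168.1.115
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```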
Once you have set the static internal IP address, reboot the Raspberry Pi with:
```
$ sudo reboot
```
After rebooting, from terminal type:
```
$ sudo ifconfig
```
Your new static settings should appear for your Raspberry Pi.
### Static global IP address
If your ISP (internet service provider) has already given you a static external IP address, you can skip ahead to the port forwarding section. If not, continue reading.
You have set up SSH, a remote desktop, and a static internal IP address, so now computers inside the local network will know where to find the Pi. But you still can't access your Raspberry Pi from outside the local Wi-Fi network. You need your Raspberry Pi to be accessible publicly from anywhere on the Internet. This requires a static external IP address [7].
It can be a sensitive process initially. Call your ISP and request a static external (sometimes referred to as static global) IP address. The ISP holds the decision-making power, so I would be extremely careful dealing with them. They may refuse your static external IP address request. If they do, you can't fault the ISP because there is a legal and operational risk with this type of request. They particularly do not want customers running medium- or large-scale Internet services. They might explicitly ask why you need a static external IP address. It is probably best to be honest and tell them you plan on hosting a low-traffic personal website or a similar small not-for-profit internet service. If all goes well, they should open a ticket and call you in a week or two with an address.
### Port forwarding
This newly obtained static global IP address your ISP assigned is for accessing the router. The Raspberry Pi is still unreachable. You need to set up port forwarding to access the Raspberry Pi specifically.
Ports are virtual pathways where information travels on the Internet. You sometimes need to forward a port in order to make a computer, like the Raspberry Pi, accessible to the Internet because it is behind a network router. A YouTube video titled [What is TCP/IP, port, routing, intranet, firewall, Internet][9] by VollmilchTV helped me visually understand ports.
Port forwarding can be used for projects like a Raspberry Pi web server, or applications like VoIP or peer-to-peer downloading. There are [65,000+ ports][10] to choose from, so you can assign a different port for every Internet application you build.
The way to set up port forwarding can depend on your router. If you have a Linksys, a YouTube video titled  _[How to go online with your Apache Ubuntu server][2]_  by Gabriel Ramirez explains how to set it up. If you don't have a Linksys, read the documentation that comes with your router in order to customize and define ports to forward.
You will need to port forward for SSH as well as the remote desktop.
Once you believe you have port forwarding configured, check to see if it is working via SSH by typing:
```
$ ssh pi@your_global_ip_address
```
It should prompt you for the password.
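If you chose to forward a non-standard external port to the Pi's SSH port 22 (a common precaution, though not covered above), tell ssh about it with the -p flag; 2222 here is only an example:

```
$ ssh -p 2222 pi@your_global_ip_address
```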
Check to see if port forwarding is working for the remote desktop as well. Open Microsoft Remote Desktop. Your previous remote connection settings should be saved, but you need to update the "PC name" field with the static external IP address (for example, 195.198.227.116) instead of the static internal address (for example, 192.168.1.115).
Now, try connecting via remote desktop. It should briefly load and arrive at the Pi's desktop.
![Raspberry Pi desktop](https://opensource.com/sites/default/files/6-image_by_mitchell_mclaughlin_cc_by-sa_4.0_1.png "Raspberry Pi desktop")
Good job. The Raspberry Pi is now accessible from the Internet and ready for advanced projects.
As a bonus option, you can maintain two remote connections to your Pi. One via the Internet and the other via the LAN (local area network). It's easy to set up. In Microsoft Remote Desktop, keep one remote connection called "Pi Internet" and another called "Pi Local." Configure Pi Internet's "PC name" to the static external IP address—for example, 195.198.227.116. Configure Pi Local's "PC name" to the static internal IP address—for example, 192.168.1.115. Now, you have the option to connect globally or locally.
If you have not seen it already, watch _[How to go online with your Apache Ubuntu server][3]_ by Gabriel Ramirez as a transition into Project 2. It will show you the technical architecture behind your project. In our case, you are using a Raspberry Pi instead of an Ubuntu server. The dynamic DNS sits between the domain company and your router, which Ramirez omits. Besides this subtlety, the video is spot on when explaining visually how the system works. You might notice this tutorial covers the Raspberry Pi setup and port forwarding, which is the server side or back end. See the original source for more advanced projects covering the domain name, dynamic DNS, Jekyll (static HTML generator), and Apache (web hosting), which is the client side or front end.
### Footnotes
[1] I do not recommend starting with the NOOBS operating system. I prefer starting with the fully functional Raspbian Jessie operating system.
[2] If "Setup Options" does not pop up, you can always find it by opening Terminal and executing this command:
```
$ sudo raspi-config
```
[3] We do this to make use of all the space present on the SD card as a full partition. All this does is expand the operating system to fit the entire space on the SD card, which can then be used as storage memory for the Raspberry Pi.
[4] We do this because we want to boot into a familiar desktop environment. If we do not do this step, the Raspberry Pi boots into a terminal each time with no GUI.
[5]
![PuTTY configuration](https://opensource.com/sites/default/files/putty_configuration.png "PuTTY configuration")
[Download and run PuTTY][11] or another SSH client for Windows. Enter your IP address in the field, as shown in the above screenshot. Keep the default port at 22. Hit Enter, and PuTTY will open a terminal window, which will prompt you for your username and password. Fill those in, and begin working remotely on your Pi.
[6] If it is not already installed, download [Microsoft Remote Desktop][12]. Search your computer for Microsoft Remote Desktop. Run it. Input the IP address when prompted. Next, an xrdp window will pop up, prompting you for your username and password.
[7] The router has a dynamically assigned external IP address, so in theory, it can be reached from the Internet momentarily, but you'll need the help of your ISP to make it permanently accessible. Without a static address, you would need to reconfigure the remote connection every time the address changes.
_For the original source, visit [Mitchell McLaughlin's Full-Stack Computer Projects][4]._
--------------------------------------------------------------------------------
作者简介:
Mitchell McLaughlin - I'm an open-web contributor and developer. My areas of interest are broad, but specifically I enjoy open source software/hardware, bitcoin, and programming in general. I reside in San Francisco. My work experience in the past has included brief stints at GoPro and Oracle.
-------------
via: https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3
作者:[Mitchell McLaughlin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mitchm
[1]:http://elinux.org/RPi_SD_cards
[2]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[3]:https://www.youtube.com/watch?v=i1vB7JnPvuE#t=07m08s
[4]:https://mitchellmclaughlin.com/server.html
[5]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3?rate=Zdmkgx8mzy9tFYdVcQZSWDMSy4uDugnbCKG4mFsVyaI
[6]:https://www.raspberrypi.org/downloads/raspbian/
[7]:http://ivanx.com/raspberrypi/
[8]:http://ivanx.com/raspberrypi/files/PiFiller.zip
[9]:https://www.youtube.com/watch?v=iskxw6T1Wb8
[10]:https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
[11]:http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
[12]:https://www.microsoft.com/en-us/store/apps/microsoft-remote-desktop/9wzdncrfj3ps
[13]:https://opensource.com/user/41906/feed
[14]:https://opensource.com/article/17/3/building-personal-web-server-raspberry-pi-3#comments
[15]:https://opensource.com/users/mitchm

View File

@ -0,0 +1,394 @@
Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”
============================================================ 
Probably the most impactful thing you could learn about when writing efficient SQL is [indexing][1]. A very close runner-up, however, is the fact that a lot of SQL clients demand tons of **“unnecessary, mandatory work”** from the database.
Repeat this after me:
> Unnecessary, Mandatory Work
What is **“unnecessary, mandatory work”**? It's two things (duh):
### Unnecessary
Let's assume your client application needs this information here:
[
![](https://lukaseder.files.wordpress.com/2017/03/title-rating.png?w=662)
][2]
Nothing out of the ordinary. We run a movie database ([e.g. the Sakila database][3]) and we want to display the title and rating of each film to the user.
This is the query that would produce the above result:
```
SELECT title, rating
FROM film
```
However, our application (or our ORM) runs this query instead:
```
SELECT *
FROM film
```
What are we getting? Guess what. We're getting tons of useless information:
[
![](https://lukaseder.files.wordpress.com/2017/03/useless-information.png?w=662&h=131)
][4]
There's even some complex JSON all the way to the right, which is loaded:
* From the disk
* Into the caches
* Over the wire
* Into the client memory
* And then discarded
Yes, we discard most of this information. The work that was performed to retrieve it was completely unnecessary. Right? Agreed.
### Mandatory
That's the worst part. While optimisers have become quite smart these days, this work is mandatory for the database. There's no way the database can _know_ that the client application actually didn't need 95% of the data. And that's just a simple example. Imagine if we joined more tables…
So what, you think? Databases are fast? Let me offer you some insight you may not have thought of before:
### Memory consumption
Sure, the individual execution time doesn't really change much. Perhaps it'll be 1.5x slower, but we can handle that, right? For the sake of convenience? Sometimes that's true. But if you're sacrificing performance for convenience _every time_, things add up. We're no longer talking about performance (speed of individual queries), but throughput (system response time), and that's when stuff gets really hairy and tough to fix. That's when you stop being able to scale.
Let's look at execution plans, Oracle this time:
```
--------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 166K|
| 1 | TABLE ACCESS FULL| FILM | 1000 | 166K|
--------------------------------------------------
```
Versus
```
--------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
--------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 20000 |
| 1 | TABLE ACCESS FULL| FILM | 1000 | 20000 |
--------------------------------------------------
```
We're using 8x as much memory in the database when doing `SELECT *` rather than `SELECT title, rating`. That's not really surprising though, is it? We knew that. Yet we accepted it in many, many of our queries where we simply didn't need all that data. We generated **needless, mandatory work** for the database, and it does add up. We're using 8x too much memory (the number will differ, of course).
Now, all the other steps (disk I/O, wire transfer, client memory consumption) are also affected in the same way, but I'm skipping those. Instead, I'd like to look at…
### Index usage
Most databases these days have figured out the concept of [ _covering indexes_ ][5]. A covering index is not a special index per se. But it can turn into a “special index” for a given query, either “accidentally,” or by design.
Check out this query:
```
SELECT *
FROM actor
WHERE last_name LIKE 'A%'
```
There's nothing extraordinary to be seen in the execution plan. It's a simple query. Index range scan, table access, done:
```
-------------------------------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 |
| 1 | TABLE ACCESS BY INDEX ROWID| ACTOR | 8 |
|* 2 | INDEX RANGE SCAN | IDX_ACTOR_LAST_NAME | 8 |
-------------------------------------------------------------------
```
Is it a good plan, though? Well, if what we really needed was this, then it's not:
[
![](https://lukaseder.files.wordpress.com/2017/03/first-name-last-name.png?w=662)
][6]
Sure, we're wasting memory, etc. But check out this alternative query:
```
SELECT first_name, last_name
FROM actor
WHERE last_name LIKE 'A%'
```
Its plan is this:
```
----------------------------------------------------
| Id | Operation | Name | Rows |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 8 |
|* 1 | INDEX RANGE SCAN| IDX_ACTOR_NAMES | 8 |
----------------------------------------------------
```
We could now eliminate the table access entirely, because there's an index that covers all the needs of our query… a covering index. Does it matter? Absolutely! This approach can speed up some of your queries by an order of magnitude (or slow them down by an order of magnitude when your index stops being covering after a change).
You cannot always profit from covering indexes. Indexes come with their own cost and you shouldn't add too many of them, but in cases like these, it's a no-brainer. Let's run a benchmark:
```
SET SERVEROUTPUT ON
DECLARE
v_ts TIMESTAMP;
v_repeat CONSTANT NUMBER := 100000;
BEGIN
v_ts := SYSTIMESTAMP;
FOR i IN 1..v_repeat LOOP
FOR rec IN (
-- Worst query: Memory overhead AND table access
SELECT *
FROM actor
WHERE last_name LIKE 'A%'
) LOOP
NULL;
END LOOP;
END LOOP;
dbms_output.put_line('Statement 1 : ' || (SYSTIMESTAMP - v_ts));
v_ts := SYSTIMESTAMP;
FOR i IN 1..v_repeat LOOP
FOR rec IN (
-- Better query: Still table access
SELECT /*+INDEX(actor(last_name))*/
first_name, last_name
FROM actor
WHERE last_name LIKE 'A%'
) LOOP
NULL;
END LOOP;
END LOOP;
dbms_output.put_line('Statement 2 : ' || (SYSTIMESTAMP - v_ts));
v_ts := SYSTIMESTAMP;
FOR i IN 1..v_repeat LOOP
FOR rec IN (
-- Best query: Covering index
SELECT /*+INDEX(actor(last_name, first_name))*/
first_name, last_name
FROM actor
WHERE last_name LIKE 'A%'
) LOOP
NULL;
END LOOP;
END LOOP;
dbms_output.put_line('Statement 3 : ' || (SYSTIMESTAMP - v_ts));
END;
/
```
The result is:
```
Statement 1 : +000000000 00:00:02.479000000
Statement 2 : +000000000 00:00:02.261000000
Statement 3 : +000000000 00:00:01.857000000
```
Note, the actor table only has 4 columns, so the difference between statements 1 and 2 is not too impressive, but still significant. Note also that I'm using Oracle's hints to force the optimiser to pick one or the other index for the query. Statement 3 clearly wins in this case. It's a _much_ better query, and that's just an extremely simple query.
Again, when we write `SELECT *`, we create **needless, mandatory work** for the database, which it cannot optimise. It won't pick the covering index because that index has a bit more overhead than the `LAST_NAME` index that it did pick, and after all, it had to go to the table anyway to fetch the useless `LAST_UPDATE` column, for instance.
But things get worse with `SELECT *`. Consider…
### SQL transformations
Optimisers work so well, because they transform your SQL queries ([watch my recent talk at Voxxed Days Zurich about how this works][7]). For instance, there's a SQL transformation called “`JOIN` elimination”, and it is really powerful. Consider this auxiliary view, which we wrote because we grew so incredibly tired of joining all these tables all the time:
```
CREATE VIEW v_customer AS
SELECT
c.first_name, c.last_name,
a.address, ci.city, co.country
FROM customer c
JOIN address a USING (address_id)
JOIN city ci USING (city_id)
JOIN country co USING (country_id)
```
This view just connects all the “to-one” relationships between a `CUSTOMER` and their different `ADDRESS` parts. Thanks, normalisation.
Now, imagine that after a while of working with this view, we've become so accustomed to it that we've forgotten all about the underlying tables. And now, we're running this query:
```
SELECT *
FROM v_customer
```
We're getting quite an impressive plan:
```
----------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 599 | 47920 | 14 |
|* 1 | HASH JOIN | | 599 | 47920 | 14 |
| 2 | TABLE ACCESS FULL | COUNTRY | 109 | 1526 | 2 |
|* 3 | HASH JOIN | | 599 | 39534 | 11 |
| 4 | TABLE ACCESS FULL | CITY | 600 | 10800 | 3 |
|* 5 | HASH JOIN | | 599 | 28752 | 8 |
| 6 | TABLE ACCESS FULL| CUSTOMER | 599 | 11381 | 4 |
| 7 | TABLE ACCESS FULL| ADDRESS | 603 | 17487 | 3 |
----------------------------------------------------------------
```
Well, of course. We run all these joins and full table scans, because that's what we told the database to do. Fetch all this data.
Now, again, imagine, what we really wanted on one particular screen was this:
[
![](https://lukaseder.files.wordpress.com/2017/03/first-name-last-name-customers.png?w=662)
][8]
Yeah, duh, right? By now you get my point. But imagine, we've learned from the previous mistakes and we're now actually running the following, better query:
```
SELECT first_name, last_name
FROM v_customer
```
Now, check this out!
```
------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 599 | 16173 | 4 |
| 1 | NESTED LOOPS | | 599 | 16173 | 4 |
| 2 | TABLE ACCESS FULL| CUSTOMER | 599 | 11381 | 4 |
|* 3 | INDEX UNIQUE SCAN| SYS_C007120 | 1 | 8 | 0 |
------------------------------------------------------------------
```
That's a _drastic_ improvement in the execution plan. Our joins were eliminated, because the optimiser could prove they were **needless**, so once it can prove this (and you don't make the work **mandatory** by selecting *), it can remove the work and simply not do it. Why is that the case?
Each `CUSTOMER.ADDRESS_ID` foreign key guarantees that there is _exactly one_ `ADDRESS.ADDRESS_ID` primary key value, so the `JOIN` operation is guaranteed to be a to-one join that neither adds nor removes rows. If we don't even select or query those rows, well, we don't need to load them at all. Removing the `JOIN` provably won't change the outcome of the query.
Databases do these things all the time. You can try this on most databases:
```
-- Oracle
SELECT CASE WHEN EXISTS (
SELECT 1 / 0 FROM dual
) THEN 1 ELSE 0 END
FROM dual
-- More reasonable SQL dialects, e.g. PostgreSQL
SELECT EXISTS (SELECT 1 / 0)
```
In this case, you might expect an arithmetic exception to be raised, as when you run this query:
```
SELECT 1 / 0 FROM dual
```
yielding
```
ORA-01476: divisor is equal to zero
```
But it doesn't happen. The optimiser (or even the parser) can prove that any `SELECT` column expression in an `EXISTS (SELECT ..)` predicate will not change the outcome of a query, so there's no need to evaluate it. Huh!
### Meanwhile…
One of ORMs' most unfortunate problems is the fact that they make `SELECT *` queries so easy to write. In fact, HQL / JPQL, for instance, went as far as making it the default. You can even omit the `SELECT` clause entirely, because after all, you're going to be fetching the entire entity, as declared, right?
For instance:
```
FROM v_customer
```
[Vlad Mihalcea, for instance, a Hibernate expert and Hibernate Developer Advocate][9] recommends you use queries almost every time you're sure you don't want to persist any modifications after fetching. ORMs make it easy to solve the object graph persistence problem. Note: Persistence. The idea of actually modifying the object graph and persisting the modifications is inherent.
But if you don't intend to do that, why bother fetching the entity? Why not write a query? Let's be very clear: From a performance perspective, writing a query tailored to the exact use-case you're solving is _always_ going to outperform any other option. You may not care because your data set is small and it doesn't matter. Fine. But eventually, you'll need to scale, and re-designing your applications to favour a query language over imperative entity graph traversal will be quite hard. You'll have other things to do.
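To make this concrete, here is a minimal JPQL sketch (the `Customer` entity and its field names are hypothetical, not from the article). The entity-fetching query:

```
FROM Customer c
```

versus a projection tailored to the screen:

```
SELECT c.firstName, c.lastName FROM Customer c
```

The second form fetches only the two columns the UI needs, and gives the optimiser the freedom discussed above.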
### Counting for existence
One of the worst wastes of resources is when people run `COUNT(*)` queries when they simply want to check for existence. E.g.
> Did this user have any orders at all?
And well run:
```
SELECT count(*)
FROM orders
WHERE user_id = :user_id
```
Easy. If `COUNT = 0`: No orders. Otherwise: Yes, orders.
The performance will not be horrible, because we probably have an index on the `ORDERS.USER_ID` column. But what do you think will be the performance of the above compared to this alternative here:
```
-- Oracle
SELECT CASE WHEN EXISTS (
SELECT *
FROM orders
WHERE user_id = :user_id
) THEN 1 ELSE 0 END
FROM dual
-- Reasonable SQL dialects, like PostgreSQL
SELECT EXISTS (
SELECT *
FROM orders
WHERE user_id = :user_id
)
```
It doesn't take a rocket scientist to figure out that an actual existence predicate can stop looking for additional rows as soon as it has found _one_. So, if the answer is “no orders”, then the speed will be comparable. If, however, the answer is “yes, orders”, then the answer might be _drastically_ faster in the case where we do not calculate the exact count.
Because we _don't care_ about the exact count. Yet, we told the database to calculate it (**needless**), and the database doesn't know we're discarding all results bigger than 1 (**mandatory**).
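A related sketch, not from the article and with syntax that varies by dialect, is to cap the scan explicitly, which likewise lets the database stop at the first match:

```
-- PostgreSQL, MySQL and others; Oracle 12c+ would use FETCH FIRST 1 ROWS ONLY
SELECT 1
FROM orders
WHERE user_id = :user_id
LIMIT 1
```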
Of course, things get much worse if you call `list.size()` on a JPA-backed collection to do the same…
[Ive blogged about this recently, and benchmarked the alternatives on different databases. Do check it out.][10]
### Conclusion
This article stated the “obvious”. Don't tell the database to perform **needless, mandatory work**.
It's **needless** because, given your requirements, you _knew_ that some specific piece of work did not need to be done. Yet, you tell the database to do it.
It's **mandatory** because the database has no way to prove it's **needless**. This information is contained only in the client, which is inaccessible to the server. So, the database has to do it.
This article talked about `SELECT *`, mostly, because that's such an easy target. But this isn't about databases only. This is about any distributed algorithm where a client instructs a server to perform **needless, mandatory work**. How many N+1 problems does your average AngularJS application have, where the UI loops over service result A, calling service B many times, instead of batching all calls to B into a single call? It's a recurrent pattern.
The solution is always the same. The more information you give to the entity executing your command, the faster it can (in principle) execute that command. Write a better query. Every time. Your entire system will thank you for it.
### If you liked this article…
… do also check out my recent talk at Voxxed Days Zurich, where I show some hyperbolic examples of why SQL will beat Java at data processing algorithms every time:
--------------------------------------------------------------------------------
via: https://blog.jooq.org/2017/03/08/many-sql-performance-problems-stem-from-unnecessary-mandatory-work
作者:[jooq][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.jooq.org/
[1]:http://use-the-index-luke.com/
[2]:https://lukaseder.files.wordpress.com/2017/03/title-rating.png
[3]:https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/Sakila
[4]:https://lukaseder.files.wordpress.com/2017/03/useless-information.png
[5]:https://blog.jooq.org/2015/04/28/do-not-think-that-one-second-is-fast-for-query-execution/
[6]:https://lukaseder.files.wordpress.com/2017/03/first-name-last-name.png
[7]:https://www.youtube.com/watch?v=wTPGW1PNy_Y
[8]:https://lukaseder.files.wordpress.com/2017/03/first-name-last-name-customers.png
[9]:https://vladmihalcea.com/2016/09/13/the-best-way-to-handle-the-lazyinitializationexception/
[10]:https://blog.jooq.org/2016/09/14/avoid-using-count-in-sql-when-you-could-use-exists/

View File

@ -0,0 +1,82 @@
8 reasons to use LXDE
============================================================
### Learn reasons to consider using the lightweight LXDE desktop environment as your Linux desktop.
![8 reasons to use LXDE](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/rh_003499_01_linux31x_cc.png?itok=1HXbvw2E "8 reasons to use LXDE")
>Image by : opensource.com
Late last year, an upgrade to Fedora 25 brought issues with the new version of [KDE][7] Plasma that were so bad it was difficult to get any work done. I decided to try other Linux desktop environments for two reasons. First, I needed to get my work done. Second, having used KDE exclusively for many years, I thought it was time to try some different desktops.
The first alternate desktop I tried for several weeks was [Cinnamon][8], which I wrote about in January. This time I have been using LXDE (Lightweight X11 Desktop Environment) for about six weeks, and I have found many things about it that I like. Here is my list of eight reasons to use LXDE.
More Linux resources
* [What is Linux?][1]
* [What are Linux containers?][2]
* [Managing devices in Linux][3]
* [Download Now: Linux commands cheat sheet][4]
* [Our latest Linux articles][5]
**1. LXDE supports multiple panels.** As with KDE and Cinnamon, LXDE sports panels that contain the system menu, application launchers, and a taskbar that displays buttons for the running applications. The first time I logged in to LXDE the panel configuration looked surprisingly familiar. LXDE appears to have picked up the KDE configuration for my favored top and bottom panels, including system tray settings. The application launchers on the top panel appear to have come from the Cinnamon configuration. The contents of the panels make it easy to launch and manage programs. By default, there is only one panel at the bottom of the desktop.
![The LXDE desktop with the Openbox Configuration Manager open.](https://opensource.com/sites/default/files/lxde-openboxconfigurationmanager.png "The LXDE desktop with the Openbox Configuration Manager open.")
The LXDE desktop with the Openbox Configuration Manager open. This desktop has not been modified, so it uses the default color and icon schemes.
**2. The Openbox configuration manager provides a single, simple tool for managing the look and feel of the desktop.** It provides options for themes, window decorations, window behavior with multiple monitors, moving and resizing windows, mouse control, multiple desktops, and more. Although that seems like a lot, it is far less complex than configuring the KDE desktop, yet Openbox provides a surprisingly great amount of control.
**3. LXDE has a powerful menu tool.** There is an interesting option that you can access on the Advanced tab of the Desktop Preferences menu. The long name for this option is “Show menus provided by window managers when desktop is clicked.” When this checkbox is selected, the Openbox desktop menu is displayed instead of the standard LXDE desktop menu when you right-click the desktop.
The Openbox desktop menu contains nearly every menu selection you would ever want, with all easily accessible from the desktop. It includes all of the application menus, system administration, and preferences. It even has a menu containing a list of all the terminal emulator applications installed so that sysadmins can easily launch their favorite.
**4. By design, the LXDE desktop is clean and simple.** It has nothing to get in the way of getting your work done. Although you can add some clutter to the desktop in the form of files, directory folders, and links to applications, there are no widgets that can be added to the desktop. I do like some widgets on my KDE and Cinnamon desktops, but they are easy to cover up, and then I need to move or minimize windows, or just use the “Show desktop” button to clear off the entire desktop. LXDE does have an “Iconify all windows” button, but I seldom need to use it unless I want to look at my wallpaper.
**5. LXDE comes with a strong file manager.** The default file manager for LXDE is PCManFM, so that became my file manager for the duration of my time with LXDE. PCManFM is very flexible and can be configured to work well for most people and situations. It seems to be somewhat less configurable than Krusader, which is usually my go-to file manager, but I really like the sidebar on PCManFM that Krusader does not have.
PCManFM allows multiple tabs, which can be opened with a right-click on any item in the sidebar or by a left-click on the new tab icon in the icon bar. The Places pane at the left of the PCManFM window shows the applications menu, and you can launch applications from PCManFM. The upper part of the Places pane also shows a devices icon, which can be used to view your physical storage devices, a list of removable devices along with buttons to allow you to mount or unmount them, and the Home, Desktop, and trashcan folders to make them easy to access. The bottom part of the Places panel contains shortcuts to some default directories, Documents, Music, Pictures, Videos, and Downloads. You can also drag additional directories to the shortcut part of the Places pane. The Places pane can be swapped for a regular directory tree.
**6\. The title bar of a new window flashes if it is opened behind existing windows.** This is a nice way to locate a new window among a large number of existing ones.
**7\. Most modern desktop environments allow for multiple desktops and LXDE is no exception to that.** I like to use one desktop for my development, testing, and writing activities, and another for mundane tasks like email and web browsing. LXDE provides two desktops by default, but you can configure anywhere from one to many more. Right-click on the Desktop Pager to configure it.
Through some disruptive but not destructive testing, I was able to determine that the maximum number of desktops allowed is 100\. I also discovered that when I reduced the number of desktops to fewer than the three I actually had in use, the windows on the defunct desktops were moved to desktop 1\. What fun I have had with this!
**8\. The Xfce power manager is a powerful little application that allows you to configure how power management works.** It provides a tab for General configuration as well as tabs for System, Display, and Devices. The Devices tab displays a table of attached devices on my system, such as battery-powered mice, keyboards, and even my UPS. It displays information about each, including the vendor and serial number, if available, and the state of the battery charge. As I write this, my UPS is 100% charged and my Logitech mouse is 75% charged. The Xfce power manager also displays an icon in the system tray so you can get a quick read on your devices' battery status from there.
There are more things to like about the LXDE desktop, but these are the ones that either grabbed my attention or are so important to my way of working in a modern GUI interface that they are indispensable to me.
One quirk I noticed with LXDE is that I never did figure out what the “Reconfigure” option does on the desktop (Openbox) menu. I clicked on that several times and never noticed any activity of any kind to indicate that that selection actually did anything.
I have found LXDE to be an easy-to-use, yet powerful, desktop. I have enjoyed my weeks using it for this article. LXDE has enabled me to work effectively mostly by allowing me access to the applications and files that I want, while remaining unobtrusive the rest of the time. I also never encountered anything that prevented me from doing my work. Well, except perhaps for the time I spent exploring this fine desktop. I can highly recommend the LXDE desktop.
I am now using GNOME 3 and the GNOME Shell and will report on that in my next installment.
--------------------------------------------------------------------------------
作者简介:
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
--------------------------------------
via: https://opensource.com/article/17/3/8-reasons-use-lxde
作者:[David Both ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://opensource.com/article/16/11/managing-devices-linux?src=linux_resource_menu
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/article/17/3/8-reasons-use-lxde?rate=QigvkBy_9zLvktdsL-QaIWedjIqjtlwwJIVFQDQzsSY
[7]:https://opensource.com/life/15/4/9-reasons-to-use-kde
[8]:https://opensource.com/article/17/1/cinnamon-desktop-environment
[9]:https://opensource.com/user/14106/feed
[10]:https://opensource.com/article/17/3/8-reasons-use-lxde#comments
[11]:https://opensource.com/users/dboth

View File

@ -0,0 +1,98 @@
How to Change Root Password of MySQL or MariaDB in Linux
============================================================
If you're [installing MySQL or MariaDB in Linux][1] for the first time, chances are you will be executing the mysql_secure_installation script to secure your MySQL installation with basic settings.
One of these settings is the database root password, which you must keep secret and use only when it is required. If you need to change it (for example, when a database administrator changes roles or is laid off!), this article will come in handy.
**Suggested Read:** [Recover MySQL or MariaDB Root Password in Linux][2]
We will explain how to change the root password of a MySQL or MariaDB database server in Linux.
Although we will use a MariaDB server in this article, the instructions should work for MySQL as well.
### Change MySQL or MariaDB Root Password
If you know the root password and want to reset it, let's first make sure MariaDB is running:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl is-active mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld status
```
[
![Check MySQL Status](http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png)
][3]
Check MySQL Status
If the above command does not return the word `active` as output, or the service is stopped, you will need to start the database service before proceeding:
```
------------- CentOS/RHEL 7 and Fedora 22+ -------------
# systemctl start mariadb
------------- CentOS/RHEL 6 and Fedora -------------
# /etc/init.d/mysqld start
```
Next, we will log in to the database server as root:
```
# mysql -u root -p
```
For compatibility across versions, we will use the following statement to update the user table in the mysql database. Note that you need to replace `YourPasswordHere` with the new password you have chosen for root.
```
MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
```
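On newer servers you can also use the dedicated `SET PASSWORD` statement instead of updating the table directly. This is a hedged alternative sketch (exact syntax varies across versions; `YourPasswordHere` is again a placeholder):

```
MariaDB [(none)]> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('YourPasswordHere');
```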
To validate the change, exit your current MariaDB session by typing:
```
MariaDB [(none)]> exit;
```
and then press Enter. You should now be able to connect to the server using the new password.
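As a quick non-interactive check (the `SELECT VERSION()` query is only an illustration), you can connect with the new password and run a trivial statement:

```
# mysql -u root -p -e "SELECT VERSION();"
```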
[
![Change MySQL/MariaDB Root Password](http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png)
][4]
Change MySQL/MariaDB Root Password
##### Summary
In this article we have explained how to change the MariaDB / MySQL root password when you know the current one; if you have lost it, see the recovery guide linked above.
As always, feel free to drop us a note if you have any questions or feedback using our comment form below. We look forward to hearing from you!
--------------------------------------------------------------------------------
作者简介:
Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer product company and takes great pleasure in using FOSS tools to increase productivity in all areas of his daily work.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/change-mysql-mariadb-root-password/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/install-mariadb-in-centos-7/
[2]:http://www.tecmint.com/reset-mysql-or-mariadb-root-password/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Check-MySQL-Status.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Change-MySQL-Root-Password.png
[5]:http://www.tecmint.com/author/gacanepa/
[6]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[7]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -0,0 +1,158 @@
A public cloud migration in 22 days
============================================================
![A public cloud migration in 22 days](https://i.nextmedia.com.au/Utils/ImageResizer.ashx?n=http%3a%2f%2fi.nextmedia.com.au%2fNews%2fLush.jpg&w=480&c=0&s=1)
>Lush's Oxford St, UK store. Credit: Lush.
### Lush says it's possible.
Migrating your core operations from one public cloud to another in less than one month may seem like a farfetched goal, but British cosmetics giant Lush reckons it can be done.
Last September Lush - who you might recognise as the company behind the candy-coloured, sweet smelling bath and body products - was nearing the end of its contract with its existing infrastructure provider, understood to be [UK-based Memset][5].
Memset had been hosting Lush's Drupal-based commerce environment out of Amazon Web Services for a few years, but the retailer wanted out.
The arrangement was 'awkward' and rigid, according to Lush chief digital officer and heir to the company throne Jack Constantine (his parents founded the business in 1995).
“We were in a contract that we weren't really comfortable with, and we wanted to have a look and see what else we could go for,” he told the Google Cloud Next conference in San Francisco today.
“It was a very closed environment [which] made it difficult for us to get visibility of everything we wanted to be able to move over.
"[We] could either sign up for another year, and have that commitment and think up a long-term plan where we had more control ... but [we] would have ended up struggling."
After scouring the market Lush landed on Google's Cloud Platform. The company was already familiar with Google, having migrated from Scalix to Google Apps (now known as G Suite) in [late 2013][6].
However, it had only a few months to make the migration, both in time for the end of its existing contract on December 22 and for the critical Christmas shopping period.
“So it wasn't just a little bit business critical. We were talking peak trade time. It was a huge deal,” Constantine said.
Lush's lack of bureaucracy meant Constantine was able to make a quick decision on vendor selection, and “then the team just powered through”, he said.
They also prioritised optimising the "monolithic" Drupal application specifically for the migration, pushing back bug fixes until later.
Lush started the physical migration on December 1 and completed it on December 22.
The team came up against challenges “like with any migration”, Constantine said - “you have to worry about getting your data from one place to another, you have to make sure you have consistency, and customer, product data etc. needs to be up and stable”.
But the CDO said one thing that got the company through the incredibly tight timeframe was the team's lack of alternatives: there was no fallback plan.
“About a week before the deadline my colleague had a conversation with our Google partner on the phone, they were getting a bit nervous about whether this was going to happen, and they asked us what Plan B was. My colleague said Plan B is to make Plan A happen, that's it,” Constantine said.
“When you throw a hard deadline like that it can sound a bit unachievable, but [you need to keep] that focus on people believing that this is a goal that we can achieve in that timeframe, and not letting people put up the blockers and say we're going to have to delay this and that.
“Yes everybody gets very tense but you achieve a lot. You actually get through it and nail it. All the things you need to get done, get done.”
The focus now is on moving the commerce application to a microservices architecture, while looking into various Google tools like the Kubernetes container management system and Spanner relational database.
The retailer also recently built a prototype point-of-sale system using GCP and Android, which it is currently playing around with, Constantine said.
Allie Coyne travelled to Google Cloud Next as a guest of Google
--------------------------------------------------------------------------------
via: https://www.itnews.com.au/news/a-public-cloud-migration-in-22-days-454186
作者:[Allie Coyne ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.itnews.com.au/author/allie-coyne-461593
[5]:http://www.memset.com/about-us/case-studies/lush-cosmetics/
[6]:https://cloud.googleblog.com/2013/12/google-apps-helps-eco-cosmetics-company.html

View File

@ -0,0 +1,76 @@
Developer-defined application delivery
============================================================
How load balancers help you manage the complexity of distributed systems.
![Ship with tug](https://d3tdunqjn7n0wj.cloudfront.net/360x240/ship-84139_1400-154e17db40c32ff6fc352fd12b2b32d3.jpg)
Cloud-native applications are designed to draw upon the performance, scalability, and reliability benefits of distributed systems. Unfortunately, distributed systems often come at the cost of added complexity. As individual components of your application are distributed across networks, and those networks have communication gaps or experience degraded performance, your distributed application components need to continue to function independently.
To avoid inconsistencies in application state, distributed systems should be designed with an understanding that components will fail. Nowhere is this more prominent than in the network. Consequently, at their core, distributed systems rely heavily on load balancing—the distribution of requests across two or more systems—in order to be resilient in the face of network disruption and horizontally scale as system load fluctuates.
As distributed systems become more and more prevalent in the design and delivery of cloud-native applications, load balancers saturate infrastructure design at every level of modern application architecture. In their most commonly thought-of configuration, load balancers are deployed in front of the application, handling requests from the outside world. However, the emergence of microservices means that load balancers play a critical role behind the scenes: i.e. managing the flow between  _services_ .
Therefore, when you work with cloud-native applications and distributed systems, your load balancer takes on additional roles:
* As a reverse proxy, providing caching and increased security as the go-between for external clients.
* As an API gateway, providing protocol translation (e.g. REST to AMQP).
* As a security layer, running a web application firewall.
* As an application manager, handling tasks such as rate limiting and HTTP/2 support.
Given their clearly expanded capabilities beyond that of balancing traffic, load balancers can be more broadly referred to as Application Delivery Controllers (ADCs).
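As a concrete illustration of the most basic of these roles, here is a minimal NGINX configuration sketch that balances requests across two service instances. The upstream name, addresses, and ports are assumptions made up for this example:

```
# minimal sketch: NGINX as a load balancer / reverse proxy
upstream orders_service {
    least_conn;                        # send each request to the least-busy backend
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location /orders/ {
        proxy_pass http://orders_service;   # reverse-proxy into the pool
        proxy_set_header Host $host;
    }
}
```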
### Developers defining infrastructure
Historically, ADCs were purchased, deployed, and managed by IT professionals most commonly to run enterprise-architected applications. For physical load balancer equipment (e.g. F5, Citrix, Brocade, etc.), this largely remains the case. Cloud-native applications with their distributed systems design and ephemeral infrastructure require load balancers to be as dynamic as the infrastructure (e.g. containers) upon which they run. These are often software load balancers (e.g. NGINX and load balancers from public cloud providers). Cloud-native applications are typically developer-led initiatives, which means that developers are creating the application (e.g. microservices) and the infrastructure (Kubernetes and NGINX). Developers are increasingly making or heavily influencing decisions for load balancing (and other) infrastructure.
As a decision maker, the developer of cloud-native applications generally isn't aware of, or influenced by, enterprise infrastructure requirements or existing deployments, considering that these deployments are often new and often within a public or private cloud environment. Because cloud technologies have abstracted infrastructure into programmable APIs, developers are defining the way that applications are built at each layer of that infrastructure. In the case of the load balancer, developers choose which type to use, how it gets deployed, and which functions to enable. They programmatically encode how the load balancer behaves—how it dynamically responds to the needs of the application as the application grows, shrinks, and evolves in functionality over the lifetime of application deployments. Developers are defining infrastructure as code—both infrastructure configuration and its operation as code.
### Why developers are defining infrastructure
The practice of writing this code— _how applications are built and deployed_ —has undergone a fundamental shift, which can be characterized in many ways. Stated pithily, this fundamental shift has been driven by two factors: the time it takes to bring new application functionality to market ( _time to market_ ) and the time it takes for an application user to derive value from the offering ( _time to value_ ). As a result, new applications are written to be continuously delivered (as a service), not downloaded and installed.
Time-to-market and time-to-value pressures aren't new, but they are joined by other factors that are increasing the decision-making power developers have:
* Cloud: the ability to define infrastructure as code via API.
* Scale: the need to run operations efficiently in large environments.
* Speed: the need to deliver application functionality now; for businesses to be competitive.
* Microservices: abstraction of framework and tool choice, further empowering developers to make infrastructure decisions.
In addition to the above factors, it's worth noting the impact of open source. With the prevalence and power of open source software, developers have a plethora of application infrastructure—languages, runtimes, frameworks, databases, load balancers, managed services, etc.—at their fingertips. The rise of microservices has democratized the selection of application infrastructure, allowing developers to choose best-for-purpose tooling. In the case of choice of load balancer, those that tightly integrate with and respond to the dynamic nature of cloud-native applications rise to the top of the heap.
### Conclusion
As you are mulling over your cloud-native application design, join me for a discussion on  _[Load Balancing in the Cloud with NGINX and Kubernetes][8]_ . We'll examine the load balancing capabilities of different public clouds and container platforms and walk through a case study involving a bloat-a-lith—an overstuffed monolithic application. We'll look at how it was broken into smaller, independent services and how capabilities of NGINX and Kubernetes came to its rescue.
--------------------------------------------------------------------------------
作者简介:
Lee Calcote is an innovative thought leader, passionate about developer platforms and management software for clouds, containers, infrastructure and applications. Advanced and emerging technologies have been a consistent focus through Calcotes tenure at SolarWinds, Seagate, Cisco and Pelco. An organizer of technology meetups and conferences, a writer, author, speaker, he is active in the tech community.
----------------------------
via: https://www.oreilly.com/learning/developer-defined-application-delivery
作者:[Lee Calcote][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/7f693-lee-calcote
[1]:https://pixabay.com/en/ship-containers-products-shipping-84139/
[2]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
[3]:https://www.oreilly.com/people/7f693-lee-calcote
[4]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_text_cta
[5]:https://www.oreilly.com/learning/developer-defined-application-delivery?imm_mid=0ee8c5&cmp=em-webops-na-na-newsltr_20170310
[6]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
[7]:https://conferences.oreilly.com/velocity/vl-ca?intcmp=il-webops-confreg-na-vlca17_new_site_velocity_sj_17_cta
[8]:http://www.oreilly.com/pub/e/3864?intcmp=il-webops-webcast-reg-webcast_new_site_developer_defined_application_delivery_body_text_cta

View File

@ -0,0 +1,129 @@
How to install Fedora 25 on your Raspberry Pi
============================================================
### Check out this step-by-step tutorial.
![How to install Fedora 25 on your Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/gnome_raspberry_pi_fedora.jpg?itok=Efm6IKxP "How to install Fedora 25 on your Raspberry Pi")
>Image by : opensource.com
In October 2016, the release of Fedora 25 Beta was announced, along with initial [support for the Raspberry Pi 2 and 3][6]. The final "general availability" version of Fedora 25 was released a month later, and since then I have been playing around with the many different Fedora spins available for the latest versions of the Raspberry Pi.
This article is not as much a review of Fedora 25 on the Raspberry Pi 3 as a collection of tips, screenshots, and my own personal thoughts on the very first officially supported version of Fedora for the Pi.
More on Raspberry Pi
* [Our latest on Raspberry Pi][1]
* [What is Raspberry Pi?][2]
* [Getting started with Raspberry Pi][3]
* [Send us your Raspberry Pi projects and tutorials][4]
Before I start, it is worth mentioning that all of the work I have done to write this article was done on my personal laptop, which is running Fedora 25\. I used a microSD to SD adapter to copy and edit all of the Fedora images into a 32GB microSD card, which I used to boot up my Raspberry Pi 3 on a Samsung TV. The Raspberry Pi 3 used an Ethernet cable connection for network connectivity because the built-in Wi-Fi is not yet supported by Fedora 25\. Finally, I used a Logitech K410 wireless keyboard and touchpad for input.
If you don't have the opportunity to use an Ethernet wire connection to play around with Fedora 25 on your Raspberry Pi, I was able to get an Edimax Wi-Fi USB adapter to work on Fedora 25 as well, but for the purposes of this article, I only used the Ethernet connection.
### Before you install Fedora 25 on your Raspberry Pi
Read over the [Raspberry Pi support documentation][7] on the Fedora Project wiki. You can download the Fedora 25 images you need for installation from the wiki, and everything that is supported and not supported is listed there. 
Also, be mindful that this is an initially supported version and a lot of new work and support will be coming out with the release of Fedora 26, so feel free to report bugs and share feedback of your own experience with Fedora 25 on the Raspberry Pi via [Bugzilla][8], Fedora's [ARM mailing list][9], or on the Freenode IRC channel #fedora-arm.
### Installation
I downloaded and installed five different Fedora 25 spins: GNOME (Workstation default), KDE, Minimal, LXDE, and Xfce. For the most part, they all had pretty consistent and easy-to-follow steps to ensure my Raspberry Pi 3 would boot up properly. Some have known bugs that people are working on, and some followed standard operating procedure via the Fedora wiki.
![GNOME on Raspberry Pi](https://opensource.com/sites/default/files/gnome_on_rpi.png "GNOME on Raspberry Pi")
Fedora 25 workstation, GNOME on Raspberry Pi 3.
### Steps for installation
1\. On your laptop, download one of the Fedora 25 images for the Raspberry Pi from the links on the support documentation page.
2\. On your laptop, copy the image onto your microSD using either **fedora-arm-installer** or the command line:
**xzcat Fedora-Workstation-armhfp-25-1.3-sda.raw.xz | dd bs=4M status=progress of=/dev/mmcblk0**
Note: **/dev/mmcblk0** was the device that my microSD to SD adapter mounted on my laptop, and even though I am using Fedora on my laptop and could have used the **fedora-arm-installer**, I preferred the command line.
3\. Once you've copied the image,  _don't boot up your system yet_ . I know it is tempting to just go for it, but you still need to make a couple of tweaks.
4\. To keep the image file as small as possible for download convenience, the root file system on the image was kept to a minimum, so you must grow your root filesystem. If you don't, your Pi will still boot up, but if you run **dnf update** to upgrade your system, it will fill up the file system and bad things will happen. So, with the microSD still in your laptop, grow the partition:
**growpart /dev/mmcblk0 4
resize2fs /dev/mmcblk0p4**
Note: In Fedora, the **growpart** command is provided by **cloud-utils-growpart.noarch** RPM.
5\. Once the file system is updated, you will need to blacklist the **vc4** module. [Read more about this bug.][10]
I recommend doing this before you boot up the Raspberry Pi because different spins will behave in different ways. For example, (at least for me) GNOME came up first after I booted, without blacklisting **vc4**, but after doing a system update, it no longer came up. The KDE spin wouldn't come up at all during the first initial boot. We might as well blacklist **vc4** even before our first boot until the bug is resolved.
Blacklisting should happen in two different places. First, on your microSD root partition, create a **vc4.conf** under **etc/modprobe.d/** with the content: **blacklist vc4**. Second, on your microSD boot partition, add **rd.driver.blacklist=vc4** to the end of the append line in the **extlinux/extlinux.conf** file.
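This sketch assumes the card's root and boot partitions are mounted at **/mnt/sdroot** and **/mnt/sdboot** on your laptop; the mount points are assumptions, so adjust them to your setup:

```
# create the modprobe blacklist file on the root partition
echo "blacklist vc4" > /mnt/sdroot/etc/modprobe.d/vc4.conf

# append the kernel parameter to the append line(s) in extlinux.conf
sed -i '/^[[:space:]]*append/ s/$/ rd.driver.blacklist=vc4/' /mnt/sdboot/extlinux/extlinux.conf
```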
6\. Now, you are ready to boot up your Raspberry Pi.
### Booting Up
Be patient, especially when waiting for the GNOME and KDE distributions to come up. In the age of SSDs (solid-state drives) and almost instant bootups, it's easy to become impatient with the Pi's write speeds, especially the first time you boot. Before the window manager comes up for the first time, an initial configuration screen will pop up, which will allow you to configure the root password, a regular user, time zones, and networking. Once you get that configured, you should be able to SSH into your Raspberry Pi, which can be very handy for debugging display issues.
### System updates
Once you have Fedora 25 up and running on your Raspberry Pi, you will eventually (or immediately) want to apply system updates.
First, when doing kernel upgrades, become familiar with your **/boot/extlinux/extlinux.conf** file. If you upgrade your kernel, then the next time you boot, unless you manually pick the right kernel, you will most likely boot into rescue mode. The best way to avoid that is to take the five lines that define the rescue image in your **extlinux.conf** and move them to the bottom of the file, so the latest kernel will automatically boot up next time. You can edit **/boot/extlinux/extlinux.conf** directly on the Pi or by mounting the card on your laptop:
**label Fedora 25 Rescue fdcb76d0032447209f782a184f35eebc (4.9.9-200.fc25.armv7hl)
            kernel /vmlinuz-0-rescue-fdcb76d0032447209f782a184f35eebc
            append ro root=UUID=c19816a7-cbb8-4cbb-8608-7fec6d4994d0 rd.driver.blacklist=vc4
            fdtdir /dtb-4.9.9-200.fc25.armv7hl/
            initrd /initramfs-0-rescue-fdcb76d0032447209f782a184f35eebc.img**
Second, if for whatever reason your display goes dark again after an upgrade and you are sure that **vc4** is blacklisted, run **lsmod | grep vc4**. You can always boot into multi-user mode instead of graphical mode and run **startx** from the command line. Read the content of **/etc/inittab** for directions on how to switch targets.
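On a systemd-based install such as Fedora 25, the default boot target can be switched like this (a hedged sketch; it changes what boots by default, not the currently running session):

```
# boot to the text console by default
systemctl set-default multi-user.target

# switch back to the graphical login later
systemctl set-default graphical.target
```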
![KDE on Raspberry Pi 3](https://opensource.com/sites/default/files/kde_on_rpi.png "KDE on Raspberry Pi 3")
A Fedora 25 workstation, KDE on Raspberry Pi 3.
### The Fedora spins
Out of all of the Fedora spins I have tried, the only one that gave me a problem was the XFCE spin, and I believe it was due to this [known bug][11].
GNOME, KDE, LXDE, and the minimal spins worked pretty well when I followed the steps I've shared here. Given that KDE and GNOME are a bit more resource-heavy, I would recommend LXDE and Minimal for anyone who wants to just start playing with Fedora 25 on the Raspberry Pi. If you are a sysadmin who wants a cheap server backed by SELinux to cover your security concerns, and all you need is an IP address, port 22 open, and vi, go with the Minimal spin. For developers or people starting to learn Linux, LXDE may be the better way to go because it gives quick and easy access to all the GUI-based tools like browsers, IDEs, and clients you may need.
![LXDE on Raspberry Pi 3](https://opensource.com/sites/default/files/lxde_on_rpi.png "LXDE on Raspberry Pi 3")
Fedora 25 workstation, LXDE on Raspberry Pi 3.
It is fantastic to see more and more Linux distributions become available on ARM-based Raspberry Pi computers. For its very first supported version, the Fedora team has provided a polished experience for the everyday Linux user. I will certainly be looking forward to the improvements and bug fixes for Fedora 26.
--------------------------------------------------------------------------------
作者简介:
Anderson Silva - Anderson started using Linux back in 1996. Red Hat Linux, to be more precise. In 2007, his main professional dream became reality when he joined Red Hat as a Release Engineer in IT. Since then he has worked in several different roles at Red Hat from Release Engineer to System Administrator to Senior Manager and Information System Engineer. He is a RHCE and RHCA and an active Fedora Package maintainer.
----------------
via: https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi
作者:[Anderson Silva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ansilva
[1]:https://opensource.com/tags/raspberry-pi?src=raspberry_pi_resource_menu
[2]:https://opensource.com/resources/what-raspberry-pi?src=raspberry_pi_resource_menu
[3]:https://opensource.com/article/16/12/getting-started-raspberry-pi?src=raspberry_pi_resource_menu
[4]:https://opensource.com/article/17/2/raspberry-pi-submit-your-article?src=raspberry_pi_resource_menu
[5]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi?rate=gIIRltTrnOlwo4h81uDvdAjAE3V2rnwoqH0s_Dx44mE
[6]:https://fedoramagazine.org/raspberry-pi-support-fedora-25-beta/
[7]:https://fedoraproject.org/wiki/Raspberry_Pi
[8]:https://bugzilla.redhat.com/show_bug.cgi?id=245418
[9]:https://lists.fedoraproject.org/admin/lists/arm%40lists.fedoraproject.org/
[10]:https://bugzilla.redhat.com/show_bug.cgi?id=1387733
[11]:https://bugzilla.redhat.com/show_bug.cgi?id=1389163
[12]:https://opensource.com/user/26502/feed
[13]:https://opensource.com/article/17/3/how-install-fedora-on-raspberry-pi#comments
[14]:https://opensource.com/users/ansilva

View File

@ -0,0 +1,296 @@
Restrict SSH User Access to Certain Directory Using Chrooted Jail
============================================================
There are several reasons to [restrict an SSH user session][1] to a particular directory, especially on web servers, but the most obvious one is system security. In order to lock SSH users into a certain directory, we can use the chroot mechanism.
Change root (chroot) in Unix-like systems such as Linux is a means of separating specific user operations from the rest of the system; it changes the apparent root directory for the current running user process and its child processes to a new root directory called a chrooted jail.
In this tutorial, we'll show you how to restrict SSH user access to a given directory in Linux. Note that we'll run all the commands as root; use the [sudo command][2] if you are logged into the server as a normal user.
### Step 1: Create SSH Chroot Jail
1. Start by creating the chroot jail using the mkdir command below:
```
# mkdir -p /home/test
```
2. Next, identify the required files. According to the sshd_config man page, the `ChrootDirectory` option specifies the pathname of the directory to chroot to after authentication. The directory must contain the necessary files and directories to support a user's session.
For an interactive session, this requires at least a shell, commonly `sh`, and basic `/dev` nodes such as null, zero, stdin, stdout, stderr, and tty devices:
```
# ls -l /dev/{null,zero,stdin,stdout,stderr,random,tty}
```
[
![Listing Required Files](http://www.tecmint.com/wp-content/uploads/2017/03/Listing-Required-Files.png)
][3]
Listing Required Files
3. Now, create the `/dev` files as follows using the mknod command. In the command below, the `-m` flag is used to specify the file permission bits, `c` means a character file, and the two numbers are the major and minor device numbers that the files point to.
```
# mkdir -p /home/test/dev/
# cd /home/test/dev/
# mknod -m 666 null c 1 3
# mknod -m 666 tty c 5 0
# mknod -m 666 zero c 1 5
# mknod -m 666 random c 1 8
```
[
![Create /dev and Required Files](http://www.tecmint.com/wp-content/uploads/2017/03/Create-Required-Files.png)
][4]
Create /dev and Required Files
4. Afterwards, set the appropriate permissions on the chroot jail. Note that the chroot jail and its subdirectories and subfiles must be owned by the root user, and not writable by any normal user or group:
```
# chown root:root /home/test
# chmod 0755 /home/test
# ls -ld /home/test
```
[
![Set Permissions on Directory](http://www.tecmint.com/wp-content/uploads/2017/03/Set-Permission-on-Directory.png)
][5]
Set Permissions on Directory
### Step 2: Setup Interactive Shell for SSH Chroot Jail
5. First, create the `bin` directory and then copy the `/bin/bash` file into it as follows:
```
# mkdir -p /home/test/bin
# cp -v /bin/bash /home/test/bin/
```
[
![Copy Files to bin Directory](http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Bin-Files.png)
][6]
Copy Files to bin Directory
6. Now, identify the shared libraries that bash requires, as below, and copy them into the `lib64` directory:
```
# ldd /bin/bash
# mkdir -p /home/test/lib64
# cp -v /lib64/{libtinfo.so.5,libdl.so.2,libc.so.6,ld-linux-x86-64.so.2} /home/test/lib64/
```
[
![Copy Shared Library Files](http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Shared-Library-Files.png)
][7]
Copy Shared Library Files
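Instead of copying each library by hand, you could automate this step with a small loop. This is a hedged sketch that parses the `ldd` output and mirrors each library path into the jail:

```
# copy every shared library /bin/bash needs into the chroot jail
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
    mkdir -p "/home/test$(dirname "$lib")"
    cp -v "$lib" "/home/test$lib"
done
```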
### Step 3: Create and Configure SSH User
7. Now, create the SSH user with the [useradd command][8] and set a secure password for the user:
```
# useradd tecmint
# passwd tecmint
```
8. Create the chroot jail general configurations directory, `/home/test/etc` and copy the updated account files (/etc/passwd and /etc/group) into this directory as follows:
```
# mkdir /home/test/etc
# cp -vf /etc/{passwd,group} /home/test/etc/
```
[
![Copy Password Files](http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Password-Files.png)
][9]
Copy Password Files
Note: Each time you add more SSH users to the system, you will need to copy the updated account files into the `/home/test/etc` directory.
### Step 4: Configure SSH to Use Chroot Jail
9. Now, open the `sshd_config` file.
```
# vi /etc/ssh/sshd_config
```
and add/modify the lines below in the file.
```
#define username to apply chroot jail to
Match User tecmint
#specify chroot jail
ChrootDirectory /home/test
```
[
![Configure SSH Chroot Jail](http://www.tecmint.com/wp-content/uploads/2017/03/Configure-SSH-Chroot-Jail.png)
][10]
Configure SSH Chroot Jail
Save the file and exit, and restart the SSHD services:
```
# systemctl restart sshd
OR
# service sshd restart
```
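If the service refuses to restart, you can check `sshd_config` for syntax errors with sshd's test mode, which prints nothing and exits cleanly when the configuration is valid:

```
# sshd -t
```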
### Step 5: Testing SSH with Chroot Jail
10. At this point, test if the chroot jail setup is working as expected:
```
# ssh tecmint@192.168.0.10
-bash-4.1$ ls
-bash-4.1$ date
-bash-4.1$ uname
```
[
![Testing SSH User Chroot Jail](http://www.tecmint.com/wp-content/uploads/2017/03/Testing-SSH-User-Chroot-Jail.png)
][11]
Testing SSH User Chroot Jail
From the screenshot above, we can see that the SSH user is locked in the chrooted jail and can't run any external commands (ls, date, uname, etc.).
The user can only execute bash and its builtin commands such as pwd, history, and echo, as seen below:
```
# ssh tecmint@192.168.0.10
-bash-4.1$ pwd
-bash-4.1$ echo "Tecmint - Fastest Growing Linux Site"
-bash-4.1$ history
```
[
![SSH Built-in Commands](http://www.tecmint.com/wp-content/uploads/2017/03/SSH-Builtin-Commands.png)
][12]
SSH Built-in Commands
### Step 6\. Create SSH User's Home Directory and Add Linux Commands
11. From the previous step, we can see that the user is locked in the root directory; we can create a home directory for the SSH user like so (do this for all future users):
```
# mkdir -p /home/test/home/tecmint
# chown -R tecmint:tecmint /home/test/home/tecmint
# chmod -R 0700 /home/test/home/tecmint
```
[
![Create SSH User Home Directory](http://www.tecmint.com/wp-content/uploads/2017/03/Create-SSH-User-Home-Directory.png)
][13]
Create SSH User Home Directory
12. Next, install a few user commands such as ls, date, mkdir in the `bin` directory:
```
# cp -v /bin/ls /home/test/bin/
# cp -v /bin/date /home/test/bin/
# cp -v /bin/mkdir /home/test/bin/
```
[
![Add Commands to SSH User](http://www.tecmint.com/wp-content/uploads/2017/03/Add-Commands-to-SSH-User.png)
][14]
Add Commands to SSH User
13. Next, check the shared libraries for the commands above and move them into the chrooted jail libraries directory:
```
# ldd /bin/ls
# cp -v /lib64/{libselinux.so.1,libcap.so.2,libacl.so.1,libc.so.6,libpcre.so.1,libdl.so.2,ld-linux-x86-64.so.2,libattr.so.1,libpthread.so.0} /home/test/lib64/
```
[
![Copy Shared Libraries](http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Shared-Libraries.png)
][15]
Copy Shared Libraries
### Step 7\. Testing SFTP with Chroot Jail
14. Do a final test using sftp; check if the commands you have just installed are working.
Add the line below in the `/etc/ssh/sshd_config` file:
```
#Enable sftp to chrooted jail
ForceCommand internal-sftp
```
Save the file and exit. Then restart the SSHD services:
```
# systemctl restart sshd
OR
# service sshd restart
```
15. Now, test using SSH; you'll get the following error:
```
# ssh tecmint@192.168.0.10
```
[
![Test SSH Chroot Jail](http://www.tecmint.com/wp-content/uploads/2017/03/Test-SSH-Chroot-Jail.png)
][16]
Test SSH Chroot Jail
Try using SFTP as follows:
```
# sftp tecmint@192.168.0.10
```
[
![Testing sFTP SSH User](http://www.tecmint.com/wp-content/uploads/2017/03/Testing-sFTP-SSH-User.png)
][17]
Testing sFTP SSH User
**Suggested Read:** [Restrict SFTP Users to Home Directories Using chroot Jail][18]
That's it for now! In this article, we showed you how to restrict an SSH user to a given directory (chrooted jail) in Linux. Use the comment section below to share your thoughts about this guide.
--------------------------------------------------------------------------------
作者简介:
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/restrict-ssh-user-to-directory-using-chrooted-jail/
作者:[Aaron Kili][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/aaronkili/
[1]:http://www.tecmint.com/restrict-sftp-user-home-directories-using-chroot/
[2]:http://www.tecmint.com/sudoers-configurations-for-setting-sudo-in-linux/
[3]:http://www.tecmint.com/wp-content/uploads/2017/03/Listing-Required-Files.png
[4]:http://www.tecmint.com/wp-content/uploads/2017/03/Create-Required-Files.png
[5]:http://www.tecmint.com/wp-content/uploads/2017/03/Set-Permission-on-Directory.png
[6]:http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Bin-Files.png
[7]:http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Shared-Library-Files.png
[8]:http://www.tecmint.com/add-users-in-linux/
[9]:http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Password-Files.png
[10]:http://www.tecmint.com/wp-content/uploads/2017/03/Configure-SSH-Chroot-Jail.png
[11]:http://www.tecmint.com/wp-content/uploads/2017/03/Testing-SSH-User-Chroot-Jail.png
[12]:http://www.tecmint.com/wp-content/uploads/2017/03/SSH-Builtin-Commands.png
[13]:http://www.tecmint.com/wp-content/uploads/2017/03/Create-SSH-User-Home-Directory.png
[14]:http://www.tecmint.com/wp-content/uploads/2017/03/Add-Commands-to-SSH-User.png
[15]:http://www.tecmint.com/wp-content/uploads/2017/03/Copy-Shared-Libraries.png
[16]:http://www.tecmint.com/wp-content/uploads/2017/03/Test-SSH-Chroot-Jail.png
[17]:http://www.tecmint.com/wp-content/uploads/2017/03/Testing-sFTP-SSH-User.png
[18]:http://www.tecmint.com/restrict-sftp-user-home-directories-using-chroot/
[19]:http://www.tecmint.com/author/aaronkili/
[20]:http://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[21]:http://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,37 +0,0 @@
### What is Debian? A brief introduction
Hello buddies!!
Today we have something interesting for you. We are going to introduce the Linux world to you slowly and steadily.
YOU MAY ALSO LIKE: [What is Linux? A brief description.][2]
and  [How to install Desktop Gadgets in Linux using screenlets? ][1]
So here's the first piece of our intro series. Today we are going to learn about Debian, one of the first Linux distributions. Debian was initially launched in August 1993. The name Debian was coined by its creator, Ian Murdock, from his own name and that of his wife, Debra.
Debian is a huge set of open-source packages. Debian also supports the installation of non-free packages, but free packages vastly outnumber them. According to official sources, Debian contains about 37,500 free packages in its repository. And Debian provides all of this free of cost. A team of a thousand or more people works on Debian to make it better.
The latest stable release of Debian is 7.5 Wheezy. Debian has also released an alpha of 8.0 Jessie for further development. By default, Debian uses the GNOME desktop environment, but it also gives you the option to choose between GNOME, KDE, Xfce, and LXDE. Debian is very easy to install thanks to its graphical installer.
Debian is a robust and secure operating system. Debian supports most hardware and architectures, so you usually don't have to worry about whether it will run on your PC or not. Now, what about drivers? You must be wondering how to get drivers for Debian. Don't worry: for most new and old hardware, the Debian community has made drivers, so you don't have to wait for your vendor to make drivers for your hardware. And once again, as it's open source, it's all free.
Debian is supported by its community, so you can be assured that your problem will get solved, because actual users of Debian are supporting you. Debian has a wide range of software to choose from, which is of course free of cost. Debian is also a very stable and powerful OS, which will give you amazing performance, security, a good graphical user interface (GUI), and ease of use.
When it comes to stability, we mean fewer crashes and hangs but more performance. Debian fulfills this role very well. Debian is also very easy to upgrade. The Debian team has worked hard to compile all the packages in their repository so that we can easily find and install them on our systems.
So overall, it's been 20 years for Debian, and we can see the hard work and dedication of the Debian team in keeping it in the best condition for users. Debian can be installed either by buying a DVD or by downloading ISO files. We suggest you give Debian a try; it has a lot to offer on a grand scale.
Debian is the first entry in our introduction to the Linux world. We will be back with another Linux distribution next time. Stay tuned, there is a lot more in the Linux world. Till then, ciao.
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
作者:[sumit rohankar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/05/desktop-gadgets-in-linux-ubuntu.html
[2]:http://www.techphylum.com/2014/05/what-is-linux-brief-description.html?m=1

View File

@ -1,43 +0,0 @@
# What is OpenSUSE? An introduction
Last time we introduced Debian, one of the first giants. Now we are going to introduce SUSE. Yup, we will see some good things about SUSE Linux, most commonly known as OpenSUSE. So here we begin. OpenSUSE is based on RPM package management and is one of the most widely used Linux distros. OpenSUSE is also used as the base of the SUSE Linux Enterprise products. The operating system is sponsored by Novell, which was later acquired by the Attachmate Group.
It is a very simple but highly customizable and efficient Linux distro, made for daily use as well as servers and industry work. While you can get OpenSUSE free of cost, as it is completely open source, you still get lots of amazing features packed into it. An ISO of OpenSUSE contains loads of useful software, and it offers a choice of multiple desktop environments during installation and later too.
[
![](https://4.bp.blogspot.com/-KtuHu6WYnOk/U3OEQ8ghxII/AAAAAAAAARU/fllY-Qqg47c/s1600/06.png)
][3]
> OpenSUSE 13.1 running with KDE
[
![](https://4.bp.blogspot.com/-hwPaooOBwyk/U3OFwrgFldI/AAAAAAAAARk/dDJuvx0ltf4/s1600/08.png)
][4]
> OpenSUSE 13.1 running with Gnome
As you can see, I am using OpenSUSE with both KDE and GNOME, so it's your choice. OpenSUSE offers GNOME, KDE, Xfce, LXDE, Openbox, IceWM, Blackbox, etc., and you can install other DEs manually later too. Apart from that, OpenSUSE gives you power: it is a very powerful and robust OS, and it is secure and stable. OpenSUSE is well known for its KDE desktop, and it will install KDE by default if you don't select any other option from the DE choices. OpenSUSE has fully integrated and supported desktop environments. OpenSUSE includes a huge amount of software in its repository, plus more in third-party repositories. It includes Mozilla Firefox as the default web browser. You can easily work with your documents using LibreOffice. You have the Amarok music player for your music needs and Kaffeine for watching videos. Most of all, you get YaST. With YaST you get control over your system, as you can perform various admin tasks from there: you can install multiple DEs, add new devices and set them up, and so on.
OpenSUSE also provides an interesting service called Tumbleweed. With this service you get all the latest stable updates released by the developers, so you can be assured of a healthy and stable PC.
Overall, OpenSUSE is a very interesting, stable (and also bleeding-edge) and powerful OS. We would surely suggest you give it a try, as we also use it in our daily lives. So here we conclude the second segment of the "Introduction with Linux World" series. Stay tuned, as we will be bringing more interesting stuff from the Linux world right onto your PC.
We are trying to introduce you to most of the distros.
Have fun with Linux, Ciao.
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/05/what-is-opensuse-introduction.html
作者:[sumit rohankar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/05/linux-guide-for-beginners-part-1.html
[2]:http://www.techphylum.com/2014/05/linux-guide-for-beginners-part-2.html
[3]:http://4.bp.blogspot.com/-KtuHu6WYnOk/U3OEQ8ghxII/AAAAAAAAARU/fllY-Qqg47c/s1600/06.png
[4]:http://4.bp.blogspot.com/-hwPaooOBwyk/U3OFwrgFldI/AAAAAAAAARk/dDJuvx0ltf4/s1600/08.png

View File

@ -1,44 +0,0 @@
# What is Fedora? An introduction
We are here to continue this distro intro series, so we will be meeting another distribution today. This distro is widely used and famous among Linux users. Today we will be introduced to another giant, named Fedora.
So the question is what is Fedora?
Fedora is a Linux distro, or operating system, sponsored by Red Hat, one of the giants of the Linux industry. Fedora is a worldwide community effort to introduce people to the world of open source: in the Fedora Project, people from all around the world work to make it better day by day. The Fedora Project was established in 2003 and is owned by Red Hat. 
The Fedora OS contains a wide range of software, prebundled or available in the repositories for easy access and installation. Fedora uses the Gnome desktop environment by default, but it also provides the option to install Fedora with the KDE, Xfce, LXDE, Cinnamon, MATE, or Sugar desktops. Fedora uses RPM package management, and software is easy to find: packages can be downloaded and installed using the yum package manager on the command line (see the sketch below) or with Gnome Software, a graphical package manager. There are also third-party repositories providing software for Fedora, of which RPM Fusion is one of the most famous. Fedora additionally provides a service named Copr, which lets users create their own repositories.
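As a rough illustration of that workflow — the package name is only an example, and note that more recent Fedora releases have since replaced yum with dnf:

```
# Search the repositories for a package:
yum search gimp

# Install it, then keep the system current:
sudo yum install gimp
sudo yum update
```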
Fedora is a very stable distro, yet alongside that stability it is popular for its bleeding-edge nature. Fedora is known for bringing in new technologies and experimenting with new innovations. It also provides strong security, with various security modules you can customize to harden your PC. Fedora works in the upstream communities of Linux as well, so its innovations are not limited to Fedora but become available to all Linux distros; Fedora is the kind of distro that brings new things in first, which then spread among the other distros. Fedora is completely free and an open community: you have full permission to redistribute the Fedora ISO to your friends or to anyone interested in using it. What is more, Fedora works openly and transparently, encouraging and inviting people from all around the world to work on the Fedora Project if they are interested. 
[
![](https://2.bp.blogspot.com/-EnaboEcRH6E/U3OAe2bFZ7I/AAAAAAAAAQ0/6iH-DigFGBM/s1600/03.png)
][5]
> Fedora 20 running on Xfce desktop environment.
Fedora is also good in the support department. Ask about your problems in the Fedora community and you can be sure of getting an answer, because it will be answered by the community of people who actually use Fedora; many experts as well as beginners are present there to make your stay better. Fedora does not provide long-term support, as it upgrades to a new version every six months, bringing new features with every upgrade — which is part of what makes it a bleeding-edge distro. But this is no big worry, since you can easily upgrade your system using FedUp, as sketched below. There are also branches of Fedora called Fedora spins, created by communities and groups interested in Fedora; these spins are flavors for various categories like education, gaming, music, and so on, made specifically for their respective audiences.
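As a sketch of how a FedUp upgrade was invoked at the time — the target release number is only an example, so check the Fedora wiki for the exact procedure for your version:

```
# Install the FedUp tool, then upgrade over the network to a newer
# release (the version number is just an example):
sudo yum install fedup
sudo fedup --network 21

# Reboot and pick the upgrade entry from the boot menu to finish.
```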
So, overall, Fedora is an open-source, completely free of cost, redistributable, stable, secure, robust, and bleeding-edge operating system that is easy to use in day-to-day life. Fedora brings you into a world of innovation and free software, and the strong sponsorship of Red Hat makes it even better. If you are thinking of using Linux, then Fedora is one of the best choices to try. So here we conclude the third segment of the "Introduction with Linux Distro" series. We will be back with another Linux distro in the next segment, so stay tuned and subscribe to keep updated. Till then, ciao.
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/05/Introduction-to-fedora.html
作者:[sumit rohankar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/05/make-bootable-usb-drive-in-ubuntu.html
[2]:http://www.techphylum.com/2014/05/how-to-convert-deb-to-rpm.html
[3]:http://www.techphylum.com/2014/05/how-to-install-opensuse-131.html
[4]:http://www.techphylum.com/2014/05/install-deb-files.html
[5]:http://2.bp.blogspot.com/-EnaboEcRH6E/U3OAe2bFZ7I/AAAAAAAAAQ0/6iH-DigFGBM/s1600/03.png
[6]:http://www.techphylum.com/2014/05/linux-guide-for-beginners-part-1.html
[7]:http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
[8]:http://www.techphylum.com/2014/05/what-is-opensuse-introduction.html
[9]:http://www.techphylum.com/2014/05/linux-guide-for-beginners-part-2.html
[10]:http://www.techphylum.com/

View File

@ -1,87 +0,0 @@
# Elementary OS-A brief introduction
So today we are going to introduce you guys to Elementary OS. This is a lightweight distro based on Ubuntu, and an ideal distro for netbooks and old PCs. Alongside Xubuntu, Lubuntu, the Linux Mint MATE edition, and other lightweight distros, Elementary OS has its own unique feature: the Pantheon desktop environment. The latest version of Elementary OS is 0.2 Luna, which you can easily download from their official site. So let's take a look at Elementary OS.
[
![](https://1.bp.blogspot.com/-K-lAdpgtseA/U4h9KOrOVsI/AAAAAAAAAVo/9ZRB8TgTkXI/s1600/08.png)
][8]
> Elementary OS login screen
The first thing we see after installing Elementary OS is this beautiful greeter, i.e. the login screen. After filling in a username and password, we are granted entry to the authorised area. :P As you can see, there is also a guest session option, which lets anyone log in without being able to touch your private data.
[
![](https://3.bp.blogspot.com/-SU4OPMzFhp4/U4h9DqSSRHI/AAAAAAAAAVE/R4xE42c73Lk/s1600/03.png)
][9]
>The main screen i.e. Desktop
Here is the main screen of Elementary OS. It may seem like a basic, uncrowded desktop, but it has a beauty of its own. The dock at the bottom is beautifully designed, with an auto-hide function. If you have created multiple accounts, no worries: you can easily switch between them, as you can see in the top-right corner of the screenshot above.
[
![](https://4.bp.blogspot.com/-R-gFrmV_Tp8/U4h9DrbIPfI/AAAAAAAAAVA/I2VrKIwQtYs/s1600/02.png)
][15]
The top bar integrates a power button, an account-switch button, an Empathy IM client button, a network button, a launcher button, and a sound control button with music controls embedded in it. These buttons are very helpful and make quick actions easy.
[
![](https://2.bp.blogspot.com/-5v9ujXuGk84/U4h9Blpe_qI/AAAAAAAAAUw/eosR6KfTNO4/s1600/01.png)
][10]
> Apps launcher of elementary OS
In the top-left corner there is a button to fire up the application launcher. The app launcher is simple, yet elegant and beautiful. As you can see, not many apps are installed by default — but no worries, you can install new apps very easily.
[
![](https://2.bp.blogspot.com/-tQUW8TRnc54/U4h9EVIIdEI/AAAAAAAAAVI/zbbfZtht4qI/s1600/04.png)
][11]
> Software center in Elementary OS
Elementary OS has a software center, through which you can find your favourite apps and install them quickly and easily; there is no need to hassle around the internet searching for an app. Since it is based on Ubuntu, lots of software is available to install from the software center — or straight from the terminal, as sketched below.
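Because Elementary OS inherits Ubuntu's package tooling, anything not in the software center can usually be pulled in from a terminal as well. A minimal sketch (the package name is just an example):

```
# Ubuntu's apt tooling works unchanged on Elementary OS:
sudo apt-get update
sudo apt-get install vlc
```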
[
![](https://1.bp.blogspot.com/-IsHgpYBgkAM/U4h9GZzqEmI/AAAAAAAAAVU/29kMAhksnFk/s1600/07.png)
][12]
>Shotwell in Elementary OS.
By default, Elementary OS includes the Shotwell photo manager, which is amazing and easy to use: you can easily import your pictures into Shotwell and view them in an organised manner. Elementary OS also includes the Empathy IM client by default, a multi-protocol IM client that is very useful for managing multiple accounts at the same time. It also includes Geary Mail, a good email client, plus a music player and a movie player.
[
![](https://1.bp.blogspot.com/-EVgzaK9mEUM/U4h9GgJoRFI/AAAAAAAAAVY/p-6ZcRx2FqQ/s1600/05.png)
][13]
> Midori Web Browser in Elementary OS
Elementary OS includes a lightweight web browser, Midori. It is a good browser with multi-tab and download support. It supports HTML5 and CSS3 and can do all the tasks a heavyweight browser can — an example of good speed and beautiful design.
[
![](https://1.bp.blogspot.com/-kHtqnbLR_-o/U4h9HYEvmGI/AAAAAAAAAVg/MHmarvOp3xw/s1600/06.png)
][14]
>Settings menu
The settings menu is very simple and organised, so users can easily tweak what they want. The well-designed looks and easy user interface make Elementary OS a good choice for beginners. And it is not only about the looks and light weight: Elementary OS is stable and secure at the same time. Support is good too — as it is based on Ubuntu, people can easily find answers in the Ubuntu community as well. All in all, Elementary OS is recommended for low-spec as well as high-spec PCs.
Oh, I forgot to tell you: I am posting this article from Elementary OS using the Midori browser. So with this we conclude the fourth segment of the "Introduction with Linux Distro" series. We will be back with a new Linux distro next time, so stay tuned. Till then, ciao.
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/05/elementary-os-brief-introduction.html
作者:[sumit rohankar ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/05/wine-intro.html
[2]:http://www.techphylum.com/2014/05/how-to-install-desktop-environments.html
[3]:http://www.techphylum.com/2014/05/best-desktop-environments-part-2.html
[4]:http://www.techphylum.com/2014/05/acetoneiso-in-linux.html
[5]:http://www.techphylum.com/2014/05/Introduction-to-fedora.html
[6]:http://www.techphylum.com/2014/05/what-is-opensuse-introduction.html
[7]:http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
[8]:http://1.bp.blogspot.com/-K-lAdpgtseA/U4h9KOrOVsI/AAAAAAAAAVo/9ZRB8TgTkXI/s1600/08.png
[9]:http://3.bp.blogspot.com/-SU4OPMzFhp4/U4h9DqSSRHI/AAAAAAAAAVE/R4xE42c73Lk/s1600/03.png
[10]:http://2.bp.blogspot.com/-5v9ujXuGk84/U4h9Blpe_qI/AAAAAAAAAUw/eosR6KfTNO4/s1600/01.png
[11]:http://2.bp.blogspot.com/-tQUW8TRnc54/U4h9EVIIdEI/AAAAAAAAAVI/zbbfZtht4qI/s1600/04.png
[12]:http://1.bp.blogspot.com/-IsHgpYBgkAM/U4h9GZzqEmI/AAAAAAAAAVU/29kMAhksnFk/s1600/07.png
[13]:http://1.bp.blogspot.com/-EVgzaK9mEUM/U4h9GgJoRFI/AAAAAAAAAVY/p-6ZcRx2FqQ/s1600/05.png
[14]:http://1.bp.blogspot.com/-kHtqnbLR_-o/U4h9HYEvmGI/AAAAAAAAAVg/MHmarvOp3xw/s1600/06.png
[15]:http://4.bp.blogspot.com/-R-gFrmV_Tp8/U4h9DrbIPfI/AAAAAAAAAVA/I2VrKIwQtYs/s1600/02.png

View File

@ -1,101 +0,0 @@
# Linux Mint-An Introduction
So today we are here to continue our "**Introduction with Linux Distro**" series. In our fifth segment we have Linux Mint. So what is Linux Mint? It is an operating system based on Ubuntu and Debian.
Linux Mint is probably one of the most famous Linux distros out there. It is a good choice for beginners, as it includes easy-to-use tools with a great GUI (graphical user interface). Being good for beginners does not mean it lacks advanced use, though — of course it provides a stable and secure environment. It is beautifully designed, and its software is deeply integrated. The latest version of Linux Mint is 17, codenamed **Qiana**, which is based on Ubuntu 14.04 Trusty Tahr and is a long-term support release. Once you are done installing the OS, the first boot brings you to a welcome screen, where you can find various things to help you understand what is going on with your PC.
[
![](https://2.bp.blogspot.com/-tgtLUjXSlwU/U5hCTSJ2FII/AAAAAAAAAaE/nfNOr9dgC1I/s1600/Toolwiz20146-11-16-29-47.png)
][11]
Linux Mint uses the Cinnamon or MATE DE by default. We are using the Cinnamon edition of Linux Mint 17. The desktop environment is deeply integrated into the OS; it is beautifully designed and gives your PC an elegant look. 
[
![](https://1.bp.blogspot.com/-vl5ErJ-CwcM/U5hCWdGSGAI/AAAAAAAAAaQ/-5EaKbaTlaY/s1600/Toolwiz20146-11-16-46-40.png)
][9]
>Linux Mint 17 running on cinnamon DE
[
![](https://1.bp.blogspot.com/-pWgcchwcSVc/U5hCaJyu_lI/AAAAAAAAAak/clzx1v8AYNE/s1600/Toolwiz20146-11-16-59-20.png)
][10]
> Cinnamon DE in Linux Mint 17
[
![](https://4.bp.blogspot.com/-xZd1slVytuU/U5hCWKwY2SI/AAAAAAAAAaM/1jnAW__1njk/s1600/Toolwiz20146-11-16-56-44.png)
][12]
The update manager of Linux Mint does its job very smoothly. It checks for updates to every single component of your PC, including third-party apps installed via the software center, and presents all updates in one place for easier installation with less hassle.
[
![](https://2.bp.blogspot.com/-nn6uv3lZTj8/U5hCXhHgj-I/AAAAAAAAAac/jdzguETuTRo/s1600/Toolwiz20146-11-16-57-36.png)
][13] 
The software center of Linux Mint is well categorized and designed, making it very easy to find and install your favorite apps. The clear categories and navigation, along with a good graphical user interface, help newbies find their way around quickly.
[
![](https://1.bp.blogspot.com/-TvFxC8YSwz0/U5hCbrIfUyI/AAAAAAAAAas/pOwQ9-qZR0g/s1600/Toolwiz20146-11-17-1-2.png)
][14] 
The file manager of Linux Mint is nicely designed and gives you full access to your files and folders, including the option to open locations as root — one of the most essential parts of every operating system.
[
![](https://2.bp.blogspot.com/-JmTOA8pJ47s/U5hCcj8QHuI/AAAAAAAAAa0/QOy8DUSkZBo/s1600/Toolwiz20146-11-17-1-51.png)
][15] 
At the bottom there is a toolbar with different things embedded in it. It is customizable, so you can give it your own taste. By default it includes a menu launcher button, some app shortcuts, a notification panel, a user accounts panel, a sound panel, a date and time panel, a network panel, and so on.
[
![](https://4.bp.blogspot.com/-H5CfcaYs6Vg/U5hChGNHbNI/AAAAAAAAAbE/ZLM2NEPhZ18/s1600/Toolwiz20146-11-17-3-23.png)
][16] 
The settings menu is well categorized and designed too, with easy navigation to whatever area you want to reach. From the settings menu you can make almost every change to your operating system.
[
![](https://4.bp.blogspot.com/-8M1PP9GgvHY/U5hCiiI7i4I/AAAAAAAAAbM/ZAdxWWGLqIw/s1600/Toolwiz20146-11-17-5-2.png)
][17] 
Each flavor of Linux Mint comes in two editions, one with all the media codecs included and one without. We suggest you choose the codecs-included edition, as it saves the hassle of installing codecs later on. And for entertainment purposes there are apps like Banshee, Brasero, and VLC media player, which give you music and video playback on the go.
[
![](https://3.bp.blogspot.com/-3kUkWnh2tAM/U5hCj4nOUFI/AAAAAAAAAbU/9d394JW0iZw/s1600/Toolwiz20146-11-17-6-47.png)
][18]
Overall, Linux Mint is a versatile, stable, and secure OS. It is very easy to use, though of course you will need to find your way with a bit of homework in some places. As per our suggestion, Linux Mint is highly recommended for newbies as well as experts. The MATE flavor of Linux Mint can run on netbooks and old machines too; the Cinnamon flavor, however, needs 3D acceleration.
For more information about this distro and for downloads, please visit their **[Official site][19]**.
So with this we conclude our fifth segment. We will be back with more interesting things from the Linux world, so stay tuned and keep updated by subscribing.
Till then, KEEP VISITING!
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/06/linux-mint-introduction.html
作者:[sumit rohankar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/06/monitor-internet-speed-ubuntu.html
[2]:http://www.techphylum.com/2014/06/mars-ridiculous-shooter-open-source-game.html
[3]:http://www.techphylum.com/2014/06/bootable-usb-drive-cmd.html
[4]:http://www.techphylum.com/2014/06/bleachbit-junk-cleaner-for-ubuntu-and.html
[5]:http://www.techphylum.com/2014/05/elementary-os-brief-introduction.html
[6]:http://www.techphylum.com/2014/05/Introduction-to-fedora.html
[7]:http://www.techphylum.com/2014/05/what-is-opensuse-introduction.html
[8]:http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
[9]:http://1.bp.blogspot.com/-vl5ErJ-CwcM/U5hCWdGSGAI/AAAAAAAAAaQ/-5EaKbaTlaY/s1600/Toolwiz20146-11-16-46-40.png
[10]:http://1.bp.blogspot.com/-pWgcchwcSVc/U5hCaJyu_lI/AAAAAAAAAak/clzx1v8AYNE/s1600/Toolwiz20146-11-16-59-20.png
[11]:http://2.bp.blogspot.com/-tgtLUjXSlwU/U5hCTSJ2FII/AAAAAAAAAaE/nfNOr9dgC1I/s1600/Toolwiz20146-11-16-29-47.png
[12]:http://4.bp.blogspot.com/-xZd1slVytuU/U5hCWKwY2SI/AAAAAAAAAaM/1jnAW__1njk/s1600/Toolwiz20146-11-16-56-44.png
[13]:http://2.bp.blogspot.com/-nn6uv3lZTj8/U5hCXhHgj-I/AAAAAAAAAac/jdzguETuTRo/s1600/Toolwiz20146-11-16-57-36.png
[14]:http://1.bp.blogspot.com/-TvFxC8YSwz0/U5hCbrIfUyI/AAAAAAAAAas/pOwQ9-qZR0g/s1600/Toolwiz20146-11-17-1-2.png
[15]:http://2.bp.blogspot.com/-JmTOA8pJ47s/U5hCcj8QHuI/AAAAAAAAAa0/QOy8DUSkZBo/s1600/Toolwiz20146-11-17-1-51.png
[16]:http://4.bp.blogspot.com/-H5CfcaYs6Vg/U5hChGNHbNI/AAAAAAAAAbE/ZLM2NEPhZ18/s1600/Toolwiz20146-11-17-3-23.png
[17]:http://4.bp.blogspot.com/-8M1PP9GgvHY/U5hCiiI7i4I/AAAAAAAAAbM/ZAdxWWGLqIw/s1600/Toolwiz20146-11-17-5-2.png
[18]:http://3.bp.blogspot.com/-3kUkWnh2tAM/U5hCj4nOUFI/AAAAAAAAAbU/9d394JW0iZw/s1600/Toolwiz20146-11-17-6-47.png
[19]:http://linuxmint.com/

View File

@ -1,88 +0,0 @@
Translating by Chao-zhi
### Linux Deepin - A distro with a unique style
In the sixth segment of this series we have **Linux Deepin**. This distro is very interesting, and its eye candy deserves a mention for sure. There are lots of distros out there, but many of them make their market simply by using what is already available. The story is very different here: though this distro is based on Ubuntu, it provides its own desktop environment. Very few distros go to the trouble of making something of their own creation. Last time we saw **Elementary OS** with its own Pantheon desktop environment; let's have a look at how things stand with Linux Deepin. 
[
![](http://2.bp.blogspot.com/-xKbTZAtY2eg/U_xD1M8LocI/AAAAAAAAAp8/DXQP6iaLD00/s1600/DeepinScreenshot20140826131241.png)
][6]
First of all, just after you log in to your account, you are welcomed by a beautiful desktop with a well-designed, good-looking dock. The dock is customisable and has some interesting effects that depend on the software placed in it.
[
![](http://2.bp.blogspot.com/-WPddx-EYlZw/U_xD0bjQotI/AAAAAAAAApw/vDx8O8myVI4/s1600/DeepinScreenshot20140826131302.png)
][7]
There is a launcher icon on the dock, but there is one more way to open the launcher: just push the pointer into the upper-left corner. The launcher itself is well categorized and elegant. 
[
![](http://2.bp.blogspot.com/-FTOcyeJfs_k/U_xD0ErsgzI/AAAAAAAAAps/w4v1UFhaDWs/s1600/DeepinScreenshot20140826131320.png)
][8] 
As you can see in the screenshot above, the launcher is properly categorized, with the different software sorted accordingly. Another nice touch: if you push the pointer into the lower-left corner, it minimizes all your running software and brings you to the desktop; hitting the same corner again restores all the running tasks.
[
![](http://3.bp.blogspot.com/-MVFLbWGTVJg/U_xD-xLuTrI/AAAAAAAAAqE/CD2bFiJsxqA/s1600/DeepinScreenshot20140826131333.png)
][9] 
If you hit the lower-right corner, the control center slides out. From here you can change all the settings of the PC.
[
![](http://2.bp.blogspot.com/-0EYqhY3WQFI/U_xEB8zO9RI/AAAAAAAAAqU/Jy54wrFZ2J8/s1600/DeepinScreenshot20140826131722.png)
][10] 
As the screenshot above shows, the control center is also well categorized and well designed; you can change almost every setting of the PC from here. You can even customize your boot screen with custom wallpapers.
[
![](http://3.bp.blogspot.com/-Rpz5kyTxK_M/U_xD_1QkdaI/AAAAAAAAAqI/Wco4CDnWUHw/s1600/DeepinScreenshot20140826131837.png)
][11] 
Linux Deepin has its own software store bundled with the distro. You can find most software here, and it is very easy to install. The software store is also properly designed and well sorted for easy navigation.
[
![](http://2.bp.blogspot.com/-MDSiaRVT59c/U_xEJpwBSLI/AAAAAAAAAqk/s3As7rmqQxc/s1600/DeepinScreenshot20140826132205.png)
][12] 
Another interesting feature is Deepin Game. It offers lots of free-to-play online games which are quite fun and good for keeping you entertained.
[
![](http://2.bp.blogspot.com/-yx8wExwyjFs/U_xML8CxBEI/AAAAAAAAAq0/r2RfwtnrdhU/s1600/DeepinScreenshot20140826142428.png)
][13]
Linux Deepin also offers a rich music player with an internet radio service built in. No offline music on your PC? No worries — just switch to network radio and enjoy your music online.
Overall, Linux Deepin has a lot more to offer, and its developers know how to keep users entertained while using their product. Their motto might as well be "whatever we do, we are going to do it with some style". The support is quite good, as it is given by people who actually use Deepin. Despite being a Chinese distro, it offers English too, so there is no need to worry about a language problem. The Deepin ISO is around 1.5 GB in size. You can also visit their **[Official Site][14]** for more information and downloads. We would highly recommend you try out this distro. With this we end today's segment of the "**Introduction with Linux Distro**" series. We will be back with more distros in the next segments, so stay tuned. Till then, ciao.
--------------------------------------------------------------------------------
via: http://www.techphylum.com/2014/08/linux-deepin-distro-with-unique-style.html
作者:[sumit rohankar https://plus.google.com/112160169713374382262][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/112160169713374382262
[1]:http://www.techphylum.com/2014/06/linux-mint-introduction.html
[2]:http://www.techphylum.com/2014/05/elementary-os-brief-introduction.html
[3]:http://www.techphylum.com/2014/05/Introduction-to-fedora.html
[4]:http://www.techphylum.com/2014/05/what-is-debian-brief-introduction.html
[5]:http://www.techphylum.com/2014/05/what-is-opensuse-introduction.html
[6]:http://2.bp.blogspot.com/-xKbTZAtY2eg/U_xD1M8LocI/AAAAAAAAAp8/DXQP6iaLD00/s1600/DeepinScreenshot20140826131241.png
[7]:http://2.bp.blogspot.com/-WPddx-EYlZw/U_xD0bjQotI/AAAAAAAAApw/vDx8O8myVI4/s1600/DeepinScreenshot20140826131302.png
[8]:http://2.bp.blogspot.com/-FTOcyeJfs_k/U_xD0ErsgzI/AAAAAAAAAps/w4v1UFhaDWs/s1600/DeepinScreenshot20140826131320.png
[9]:http://3.bp.blogspot.com/-MVFLbWGTVJg/U_xD-xLuTrI/AAAAAAAAAqE/CD2bFiJsxqA/s1600/DeepinScreenshot20140826131333.png
[10]:http://2.bp.blogspot.com/-0EYqhY3WQFI/U_xEB8zO9RI/AAAAAAAAAqU/Jy54wrFZ2J8/s1600/DeepinScreenshot20140826131722.png
[11]:http://3.bp.blogspot.com/-Rpz5kyTxK_M/U_xD_1QkdaI/AAAAAAAAAqI/Wco4CDnWUHw/s1600/DeepinScreenshot20140826131837.png
[12]:http://2.bp.blogspot.com/-MDSiaRVT59c/U_xEJpwBSLI/AAAAAAAAAqk/s3As7rmqQxc/s1600/DeepinScreenshot20140826132205.png
[13]:http://2.bp.blogspot.com/-yx8wExwyjFs/U_xML8CxBEI/AAAAAAAAAq0/r2RfwtnrdhU/s1600/DeepinScreenshot20140826142428.png
[14]:http://www.linuxdeepin.com/index.en.html

View File

@ -1,39 +0,0 @@
Terrible Ideas in Git
============================================================
![Corey Quinn](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/corey-quinn-lcna.png)
"Git does let you do some extraordinarily powerful things. But in this talk, powerful is a euphemism for stupid," said Corey Quinn of FutureAdvisor at LinuxCon North America. [The Linux Foundation][2]
"Git does let you do some extraordinarily powerful things. But in this talk, powerful is a euphemism for stupid," said Corey Quinn of FutureAdvisor at LinuxCon North America. Who hasn't had at least one moment with Git that made you feel like an idiot? Of course, Git is great — everyone uses it, and you can get most of your work done with a few basic commands. But it also has some powerful features that leave us feeling like we have no idea what we are doing.
But that is really unfair to ourselves. Nobody knows everything, and everyone knows different things. Quinn reminds us: "In the Q&A portion of many of my talks, people sometimes raise a hand and say, 'Well, I have a stupid question.' And you see people out there going, 'Yeah! That is a really dumb question.' But when the answer comes, those same people are taking lots of notes."
![Git](https://www.linux.com/sites/lcom/files/styles/floated_images/public/heffalump-git-corey-quinn_0.png)
[Used with permission][1]
Quinn opened with some entertaining demonstrations of terrible things you can do with Git, such as rebasing master and then force-pushing to mess up the whole project, mistyping commands and getting git's suggestions, committing huge binary files, and so on. Then he demonstrated how to make those terrible things less terrible, such as managing large binary files more sensibly: "You can commit large binary files into Git, and if you need to store big binaries there are two tools that will really speed that up: git-annex, developed by Debian developer Joey Hess, and git-lfs, backed by GitHub."
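As a small sketch of what the git-lfs workflow he mentioned looks like — the file pattern and filenames here are only examples:

```
# One-time setup of the Git LFS hooks:
git lfs install

# Track a pattern of large binaries; the rule is recorded in
# .gitattributes, which is committed like any other file:
git lfs track "*.iso"
git add .gitattributes disk-image.iso
git commit -m "Store disk images via Git LFS"
```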
Do you keep making the same typos — for example, typing "git stitis" when you mean "git status"? Quinn has a remedy: "Git actually has built-in support for aliases, so you can take something relatively long and complicated and give it a short Git command name." You can use shell aliases as well.
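For instance — and these particular aliases are just illustrations, not ones Quinn prescribed:

```
# A Git alias: "git st" now runs "git status":
git config --global alias.st status

# A longer command wrapped in a short one:
git config --global alias.lg "log --oneline --graph --decorate"

# A shell alias (e.g. in ~/.bashrc) catches typos before Git even sees them:
alias gst='git status'
```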
Quinn said: "We have all heard of rebasing master and then force-pushing, that hilarious prank to play on all of your coworkers. It rewrites history, so suddenly what happened before is no longer what people were actually doing, and everyone else gets dragged into the mess... A group of whales is called a pod, a group of crows is called a murder, and a group of developers is called a merge conflict... More seriously, if someone does this, you have a few options, including restoring master from a backup, reverting the commits, or throwing the culprit off the roof. Or take some precautions and use a lesser-known Git feature called branch protection. With branch protection enabled, the branch cannot be deleted or force-pushed, and merge requests must have at least one review before they can be accepted."
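If the deed is already done, and some clone still has a reflog entry from before the bad push, recovery can be fairly painless. A sketch, assuming the remote-tracking branch's previous position is still in your reflog (ref names are illustrative):

```
# See where origin/master pointed before the forced update:
git reflog show origin/master

# Move local master back to that last good state and push it up
# (this assumes you are permitted to push to master):
git checkout master
git reset --hard origin/master@{1}
git push --force-with-lease origin master
```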
Quinn demonstrated several more wonderfully useful tools that make Git more efficient and foolproof, such as mr, vcsh, and customized shell prompts. You can watch the full video below to learn more fun things.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/event/LinuxCon-Europe/2016/terrible-ideas-git-0
作者:[CARLA SCHRODER][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Bestony](https://github.com/Bestony)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/linux-foundation
[3]:https://www.linux.com/files/images/heffalump-git-corey-quinnpng-0
[4]:https://www.linux.com/files/images/corey-quinn-lcnapng
[5]:http://events.linuxfoundation.org/events/linuxcon-north-america

View File

@ -1,38 +0,0 @@
Do I need to provide access to source code under the AGPLv3 license?
============================================================
![Do I need to provide access to source code under the AGPLv3 license?](https://opensource.com/sites/default/files/styles/image-full-size/public/images/law/LAW_PatentSpotlight_520x292_cm.png.png?itok=bCn-kMx2 "Do I need to provide access to source code under the AGPLv3 license?")
Image credit: opensource.com
The [GNU Affero General Public License version 3][1] (AGPLv3) is a copyleft license nearly identical to the GPLv3. The two licenses have the same copyleft scope, but they differ materially in one important respect. Section 13 of the AGPLv3 sets out an additional condition not present in GPLv2 or GPLv3:
> You must offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version, free of charge, from a network server through some standard or customary means of facilitating copying of software.
Although the scope of "interacting remotely through a computer network" should be understood to cover cases beyond conventional SaaS, this condition mostly applies to deployments that would currently be considered SaaS. The goal was to close a perceived loophole in the ordinary GPL in settings where users are served functionality through a web service without any distribution of the code providing that functionality. Section 13 therefore imposes a source-availability requirement that goes beyond the object-code-distribution triggers contained in Section 3 of GPLv2 and Section 6 of GPLv3 and AGPLv3.
It is often misunderstood that the source code requirement in Section 13 of the AGPLv3 is triggered only where the AGPLv3 software has been modified by "you" (for example, the entity offering the network service). My interpretation is that, as long as "you" do not modify AGPLv3 code, the license should not be read as requiring access to the Corresponding Source in the manner described in Section 13. As I see it, although making even unrequired source code available is a good idea, many modules of unmodified, conventionally deployed software under the AGPL will not trigger Section 13 at all.
Interpreting the terms and conditions of the AGPL, including whether the AGPL software has been modified, may require legal analysis based on the facts and details of the specific situation.
--------------------------------------------------------------------------------
About the author:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/kaufman-picture.jpg?itok=FPIizDR-)
Jeffrey R. Kaufman is an open source IP attorney at Red Hat, the world's leading provider of open source software solutions. Jeffrey also serves as an adjunct professor at Thomas Jefferson School of Law. Before joining Red Hat, Jeffrey was patent counsel at Qualcomm, providing open source counsel to the Office of the Chief Scientist. Jeffrey holds multiple patents in RFID, barcoding, image processing, and printing technologies.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/providing-corresponding-source-agplv3-license
作者:[Jeffrey Robert Kaufman][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Bestony](https://github.com/Bestony)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jkaufman
[1]:https://www.gnu.org/licenses/agpl-3.0-standalone.html

View File

@ -1,73 +0,0 @@
How to join a technical community
============================================================
### Follow these few steps to fit into a community with ease
![How to join a technical community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_DebucketizeOrgChart_A.png?itok=oBdRm8vc "How to join a technical community")
Image credit: opensource.com
Joining a new community can be a daunting task in many situations. The anxiety can be especially strong when joining a new technical community, some of which can be harsh or even derisive toward newcomers.
While it is possible to land somewhere unfair, I think you will find that most technical communities are quite reasonable, and a few simple steps can ease the transition from outsider to member.
### Mutual fit
The process begins before you actually join. The first step is to make sure the community is a good fit for you, and that you are a good addition to the community.
That sounds simple, but every community has a different culture, attitude, philosophy, and set of accepted norms. If you are brand new to a topic, a community geared toward industry professionals is not an ideal starting point. Likewise, if you are an expert looking for answers to deep and extremely complex questions, a beginners' community certainly will not fit either. Either way, a mismatch on both sides will almost certainly end in disappointment for everyone. Similarly, some communities are very formal and business-oriented, while others are loose and laid-back, and many sit somewhere in between. Picking a community that suits you, or at least one you do not loathe, will help ensure your long-term participation.
### Browse the community
Participating at first in a browse-and-read-only mode is a good approach. That does not mean you should not create an account or join right away, just that you should get a feel for the space (virtual or physical) by looking around. Lurking for a while will help you acclimate to the community's rules and culture, and determine whether you think it is a good fit for you.
### Introduce yourself
Depending on the community, the details of your introduction will vary greatly. Again, make sure you do it in a way the community will welcome.
Some communities may have a dedicated introductions section, while in others it may mean filling out your profile with meaningful and relevant information. If the community is a mailing list or an IRC channel, including a brief introduction with your initial question may make more sense. This lets the community know who you are and why you want to be part of it, and tells them a little about yourself and your technical level.
### Be respectful
While what is acceptable varies greatly between communities, you should always remain respectful. Avoid flame wars and personal attacks, and always try to build the community up. Remember: what you post on the internet lives forever and is visible to everyone.
### Questions
#### Asking them
Remember that well-crafted questions get better answers faster, as I pointed out in my October column, [The Queue][2].
#### Answering them
When you come across questions about basics, or ones that are very easy for you to answer, the "be respectful" ethos applies just as much as it does when asking. A technically correct but verbose answer dripping with superiority is not the right way to introduce yourself to a new community.
#### Other discussion
Even in technical communities, not all discussion is about a question or an answer. In those cases, offering differing opinions and challenging others' views in a respectful and considerate way, free of insults and personal attacks, is healthy and right.
### Enjoy yourself
The most important thing about long-term community participation is enjoying yourself there. Participating in a vibrant community is a great opportunity to learn, grow, challenge yourself, and improve. In many cases that is not easy, but it is worth it.
--------------------------------------------------------------------------------
About the author:
Jeremy Garcia - Jeremy Garcia is the founder of LinuxQuestions.org and a passionate yet pragmatic open source advocate. Personal Twitter: @linuxquestions
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/how-join-technical-community
作者:[Jeremy Garcia][a]
译者:[livc](https://github.com/livc)
校对:[Bestony](https://github.com/Bestony)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jeremy-garcia
[1]:https://opensource.com/article/17/1/how-join-technical-community?rate=SfjMzwYInmhZiq6Yva3D87kngE-ocLOVraCD0wWbBss
[2]:https://opensource.com/life/16/10/how-ask-technical-questions
[3]:https://opensource.com/user/86816/feed
[4]:https://opensource.com/article/17/1/how-join-technical-community#comments
[5]:https://opensource.com/users/jeremy-garcia

View File

@ -0,0 +1,65 @@
Windows wins the desktop, but Linux takes the world
============================================================
The city with the highest-profile Linux desktop project is turning back to the Windows camp, but Linux's fate is no longer tied to the PC.
![munich2.jpg](http://zdnet3.cbsistatic.com/hub/i/r/2017/02/10/9befc3d2-7931-48df-8114-008d23f1941d/resize/770xauto/02ca33958e5288c81a85d3dac546f621/munich2.jpg)
> Munich's Linux project is just one small part of the open-source software story
> Image: Getty Images/iStockphoto
Nearly a decade into its project to migrate from Windows to Linux, Munich has taken a dramatic turn: reportedly, by 2021 the city council will start replacing PCs running LiMux (a custom version of Ubuntu) with [Windows 10][4].
Rewind 15 or 20 years and people could argue about when Linux would replace Windows on the desktop. When Ubuntu arrived in 2004, for example, it was designed as a standard desktop operating system with [the ambition of finishing off Windows][5].
Spoiler: none of that happened.
Linux on the desktop holds around two percent of the market today, and many regard it as complicated and obscure. Windows, meanwhile, sails on untroubled, reigning over roughly nine-tenths of the PC market. But there is still a modest place for the Linux desktop in business, where it remains in demand — especially among developers and data scientists.
But, regrettably, it will never be the mainstream.
Munich's Linux project drew wide interest because of its sheer scale. Few large organizations have made the Windows-to-Linux migration; isolated cases like the [French Gendarmerie and the city of Turin][6] have done something similar. But [Munich was the poster child][7]: its failure on this front will deal a heavy blow to the believers [still trying to replace Windows with Linux][8].
The reality is that most companies are happy to use the mainstream desktop OS, with the natural advantages of completeness and user-friendliness that come with it.
How many of the staff complaints were down to the LiMux software, and how many unfairly blamed the operating system, is impossible to count. But the important point is this: whichever way Munich goes in the end, Linux's destiny has moved off the desktop — indeed, Linux lost the desktop war years ago.
That does Linux no harm, though, because it won the smartphone war, and the battles for the cloud and the Internet of Things are going its way too.
Seven or eight times out of ten, the smartphone in your pocket is powered by Linux (Android is based on the Linux kernel), and you are surrounded by thousands more Linux-powered devices, even if you never notice them.
Devices [like the Raspberry Pi][9], running a huge variety of different kinds of Linux, are creating an enthusiastic and vibrant developer community and giving startups a low-cost way to power new devices.
Most of the public cloud also runs on Linux in one form or another; even Microsoft has opened its doors and embraced open-source software. Whatever software platform you favor, it is undeniable that richer options for developers and users are a good thing — for decision-making and for innovation alike.
The dominance of the desktop is no longer what it once was; it is now just one computing platform among many. Indeed, PC software is becoming ever less relevant, as more applications decouple from devices and operating systems and reside in the cloud.
While the twists and turns of the Munich saga and Linux's adventures on the desktop are intriguing, they do not tell you the whole story.
_Agree? Disagree? Join the discussion by adding your comment below._
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/windows-wins-the-desktop-but-linux-takes-the-world/
作者:[Steve Ranger ][a]
译者:[Meditator-hkx](https://github.com/Meditator-hkx)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[1]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[2]:http://www.zdnet.com/article/windows-wins-the-desktop-but-linux-takes-the-world/#comments-c2df091a-2ecf-4e55-84f6-fd3309cf917d
[3]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[4]:http://www.techrepublic.com/article/linux-champion-munich-takes-decisive-step-towards-returning-to-windows/
[5]:http://www.techrepublic.com/article/how-mark-shuttleworth-became-the-first-african-in-space-and-launched-a-software-revolution/
[6]:http://www.techrepublic.com/pictures/10-projects-ditching-microsoft-for-open-source-plus-one-switching-back/
[7]:http://www.techrepublic.com/article/how-munich-rejected-steve-ballmer-and-kicked-microsoft-out-of-the-city/
[8]:http://www.techrepublic.com/resource-library/whitepapers/why-munich-made-the-switch-from-windows-to-linux-and-may-be-reversing-course/
[9]:http://www.zdnet.com/article/hands-on-raspberry-pi-7-inch-touch-display-and-case/
[10]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=2&rsid=cnetzdnetglobalsite&pagetype=article&sl=en&sc=as&topicguid=&assetguid=c2df091a-2ecf-4e55-84f6-fd3309cf917d&assettype=content_article&ftag_cd=LGN-10-10aaa0h&devicetype=desktop&viewguid=5d31a1e5-4a88-4002-ac70-1c0ca3e33bb3&q=&ctype=docids;promo&cval=33159648;7214&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Fgraphic-design-bootcamp%2F%3Fpromo%3D7214%26ftag%3DLGN-10-10aaa0h%26cval%3Dcontent-top-leaderboard
[11]:http://intent.cbsi.com/redir?tag=medc-content-top-leaderboard&siteId=2&rsid=cnetzdnetglobalsite&pagetype=article&sl=en&sc=as&topicguid=&assetguid=c2df091a-2ecf-4e55-84f6-fd3309cf917d&assettype=content_article&ftag_cd=LGN-10-10aaa0h&devicetype=desktop&viewguid=5d31a1e5-4a88-4002-ac70-1c0ca3e33bb3&q=&ctype=docids;promo&cval=33159648;7214&ttag=&ursuid=&bhid=&destUrl=http%3A%2F%2Fwww.techrepublic.com%2Fresource-library%2Fwhitepapers%2Fgraphic-design-bootcamp%2F%3Fpromo%3D7214%26ftag%3DLGN-10-10aaa0h%26cval%3Dcontent-top-leaderboard
[12]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[13]:http://www.zdnet.com/meet-the-team/uk/steve-ranger/
[14]:http://www.zdnet.com/topic/enterprise-software/

View File

@ -1,86 +0,0 @@
The history of Android
================================================================================
![The Honeycomb app list is missing a lot of apps. These shots also show the notification center and the new quick settings.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/apps-and-notifications2.png)
The Honeycomb app list is missing a lot of apps. These shots also show the notification center and the new quick settings.
Photo by Ron Amadeo
The number of default app icons dropped from 32 to 25, and two of those were third-party games. Since Honeycomb was not designed for phones and Google wanted the default apps to be tablet-optimized, a lot of apps did not make the cut. The apps removed were the Amazon MP3 store, Car Home, Facebook, Google Goggles, Messaging, News & Weather, Phone, Twitter, Google Voice, and Voice Dialer. Google was quietly building a music service that would launch soon, so the Amazon MP3 store had to make way. Car Home, Messaging, and Phone made little sense on a device that was not a phone; Facebook and Twitter had no tablet apps yet; and Goggles, News & Weather, and Voice Dialer were scarcely noticed — most people would not miss them even if they were removed.
Almost every app icon was brand new. Just like the switch from the G1 to the Motorola Droid, the biggest driver of the change was the bump in resolution. The Nexus S had an 800×480 display, and Gingerbread's icons and artwork had been redesigned to fit it. The Xoom's huge 1280×800, 10-inch display meant nearly all the art assets had to be redone. But, once again, this time real designers were in charge of everything, and it all looked much more cohesive. Honeycomb's app list switched from vertical scrolling to horizontal paging. The change made sense on a landscape device, while on phones a vertically scrolling list was still faster for finding an app.
The second Honeycomb screenshot shows the new notification center. The gray and black design of Gingerbread was discarded in favor of black panels with a blue glow. At the top is a block showing the date and time, connection status, battery level, and a button to open the quick settings; below that are the actual notifications. Non-persistent notifications could now be dismissed with an "X" on the right side of the notification. Honeycomb was the first version to support controls within a notification. The first — and at Honeycomb's launch, the only — app to take advantage of this was the new Google Music, which put previous, play/pause, and next buttons in its notification. These controls could be reached from any app, which made controlling music playback a breeze.
![The zoomed-out "add to home screen" view made arranging layouts easier. The search interface split automatic suggestions and universal search into separate panes.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/widgetkeyboard.png)
The zoomed-out "add to home screen" view made arranging layouts easier. The search interface split automatic suggestions and universal search into separate panes.
Photo by Ron Amadeo
Tapping the plus sign in the top-right corner of the home screen, or long-pressing an empty area of the background, opened the new home screen configuration interface. Honeycomb showed zoomed-out thumbnails of all the home screens in the top half of the screen, with a paginated drawer of widgets and shortcuts in the bottom half. Widgets and shortcuts could be dragged from the drawer onto any of the five home screens. Where Gingerbread showed just a text list, Honeycomb showed full thumbnail previews of the widgets. This gave you a much better idea of what a widget would actually look like, instead of a description carrying nothing but the app name, as with the old "Calendar".
The Motorola Xoom's bigger screen let the keyboard take on a more PC-style layout, with backspace, enter, shift, and tab in their traditional spots. The keyboard took on a light blue hue, with more space between the keys. Google also added a dedicated smiley-face button. :-)
![Gmail with its menus open on Honeycomb and Gingerbread. Buttons placed on the main screen are much easier to discover.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/thebasics.png)
Gmail with its menus open on Honeycomb and Gingerbread. Buttons placed on the main screen are much easier to discover.
Photo by Ron Amadeo
Gmail demonstrated all of Honeycomb's UI concepts. Android 3.0 no longer hid every control behind a menu button. Along the top of the screen was a strip of icons called the Action Bar, which promoted many frequently used controls to the main screen where users could see them directly. Gmail's action bar showed search, compose, and refresh buttons, with less-used options like settings, help, and feedback tucked into a "more" button. Tapping a checkbox or selecting text swapped the entire action bar for icons related to those actions — for example, selecting text brought up copy, paste, and select-all buttons.
The app's icon, displayed in the top-left corner, also doubled as a navigation button called "Up". While "Back" worked like a browser back button, navigating to previously visited screens, "Up" navigated one level up in the app's hierarchy. For example, if you were in the Android Market, tapped "Email the developer", and Gmail opened, "Back" would return you to the Android Market, while "Up" would take you to the Gmail inbox. "Back" might close the current app, but "Up" never would. Apps could control the "Back" button, and they often redefined it to do what "Up" did. In practice, there was rarely any functional difference between the two.
Honeycomb also introduced the "Fragments" API, which let developers build a single app that worked on tablets and phones alike. A Fragment is a pane of a user interface. In the Gmail picture above, the folder list on the left is one fragment and the inbox is another. Phones show one fragment per screen, while tablets can display two side by side. Developers define the look of each fragment individually, and Android decides how to display them based on the current device.
![The calculator used regular Android buttons, but the calendar looked like someone spilled blue ink on it.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/calculendar.png)
The calculator used regular Android buttons, but the calendar looked like someone spilled blue ink on it.
Photo by Ron Amadeo
For the first time in Android's history, the calculator got non-custom buttons, so it actually looked like part of the system. The bigger screen had room for more buttons — enough to fit all the basic calculator functions on a single screen. The calendar benefited greatly from the extra space, gaining far more room for event text and controls. The action bar at the top held buttons to switch views, showed the current time span, and carried the usual buttons. Event blocks got a white background, with the calendar identifier shown only in the top-left corner. At the bottom (or on the side in landscape mode) were a month calendar and a list of the displayed calendars.
The calendar's zoom level was adjustable, too. With a pinch-zoom gesture, the vertical week and day views could show anywhere from five to nineteen hours of events on a single screen. The calendar background was made up of uneven blue splotches; it did not look particularly great and was dropped in a later version.
![The new camera interface, with the viewfinder showing the "Negative" effect.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/camera.png)
The new camera interface, with the viewfinder showing the "Negative" effect.
Photo by Ron Amadeo
The giant, 10-inch Xoom tablet had a camera, which meant it also had a camera app. The Tron-style redesign finally ditched the faux-leather look Google had used since Android 1.6. The controls were arranged in a ring around the shutter button, recalling the circular control dial on a real camera. The Cooliris-derived speech-bubble popups became glowing, semi-transparent black boxes. The Honeycomb screenshot shows the new "color effects" feature, which applied a filter to the viewfinder in real time. Unlike the Gingerbread camera app, it did not support portrait orientation — it was locked to landscape. Taking a portrait photo with a 10-inch tablet did not make much sense, but then neither did taking a landscape one.
![The clock app did not get nearly as much attention as everything else. Google tossed it into a little box and called it a day.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/clocks.png)
The clock app did not get nearly as much attention as everything else. Google tossed it into a little box and called it a day.
Photo by Ron Amadeo
Countless features had taken shape by now, so it was time for a clock revamp. The entire "desk clock" concept was thrown out, replaced with nothing more than plain, enormous digits against a black background. Gone was the ability to open other apps to check the weather, along with the feature that displayed your wallpaper. When it came time to design a tablet-sized interface, sometimes Google just gave up — as here, where it simply dumped the clock interface into a tiny, centered dialog box.
![The music app finally got the complete redesign it had needed all along.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/muzack.png)
The music app finally got the complete redesign it had needed all along.
Photo by Ron Amadeo
While the music app had received a few minor touch-ups before, this was the first time since Android 0.9 that it got real attention. The highlight of the redesign was a "don't-call-it-Cover-Flow" 3D album art view called "New and recent". Navigation was handled by a drop-down box in the action bar, replacing the tabbed navigation introduced in Android 2.1. While "New and recent" got 3D scrolling album art, "Albums" used a flat grid of album thumbnails. The other sections got completely different designs, too: "Songs" used a vertically scrolling text list, while "Playlists", "Genres", and "Artists" used stacked album art.
In almost every view, each individual item had its own menu, usually a small arrow in the bottom-right corner. For now these only showed "Play" and "Add to playlist", but this version of Google Music was built for the future. Google was about to launch a music service, and those individual menus would become essential for things like browsing an artist's other content in the music store or managing cloud versus local storage.
Just like the Cooliris-style Gallery in Android 2.1, Google Music blew up one of your thumbnails to use as a background image. The "now playing" bar at the bottom now displayed the album art, playback controls, and a playback progress bar.
![Parts of the new Google Maps were great; parts were straight out of Android 1.5.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/maps.png)
Parts of the new Google Maps were great; parts were straight out of Android 1.5.
Photo by Ron Amadeo
Google Maps got a redesign for the big screen, too. This design would stick around for a while, using a semi-transparent black action bar for all the controls. Search was once again the primary function, taking pride of place in the action bar — but this time it was a real search bar you could type into, unlike the old search-bar-shaped button that opened a completely different interface. Google finally gave up on dedicating screen space to zoom controls, relying solely on gestures to control the map view. While 3D building outlines had been backported to older versions of Maps, Honeycomb was still the first version to ship with the feature. Dragging two fingers down on the map "tilted" the view, showing the sides of buildings. You could rotate freely, and the buildings would adjust to match.
Not everything was redesigned, though. Navigation was untouched since Gingerbread, and some core parts of the interface, like directions, were lifted straight from the Android 1.6 design, plopped into a little centered box, and left at that.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, specializing in the Android OS and Google products. He is always on the hunt for something new, and loves taking things apart to see how they really work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/17/
译者:[alim0x](https://github.com/alim0x) 校对:[Bestony](https://github.com/Bestony)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

Some files were not shown because too many files have changed in this diff