Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2018-08-11 00:26:08 +08:00
commit 32b5a48d91
20 changed files with 2143 additions and 397 deletions

View File

@ -1,58 +1,52 @@
Google Launches TensorFlow-Based Vision Recognition Kit for the Raspberry Pi Zero W
===============
![](http://linuxgizmos.com/files/google_aiyvisionkit-thm.jpg)
Google has released a $45 "AIY Vision Kit," a TensorFlow-based vision recognition development kit that runs on the Raspberry Pi Zero W and uses a "VisionBonnet" board with a Movidius chip.
Aimed at accelerating neural networks on the device, Google's AIY Vision Kit follows the voice/AI kit of the earlier Raspberry Pi based [AIY Projects][7], which was given away with the May issue of The MagPi magazine. Like the voice kit and the old Google Cardboard VR viewer, the new AIY Vision Kit ships in a cardboard enclosure. Unlike the [Cloud Vision API][8], which was demonstrated in 2015 on a Raspberry Pi based GoPiGo robot, the kit runs entirely on local processing power, with no cloud connection required. The AIY Vision Kit is available for pre-order now at $45 and will ship in December.
[![](http://linuxgizmos.com/files/google_aiyvisionkit-sm.jpg)][9]   [![](http://linuxgizmos.com/files/rpi_zerow-sm.jpg)][10]
*The AIY Vision Kit, fully packaged (left), and the Raspberry Pi Zero W*
The main processing ingredient of the kit, apart from the required [Raspberry Pi Zero W][21] single-board computer with its ARM11-based, 1GHz Broadcom BCM2835 SoC, is Google's new VisionBonnet RPi add-on board. The VisionBonnet pHAT board uses a Movidius MA2450, a version of the [Movidius Myriad 2 VPU][22] processor. On the VisionBonnet, the processor runs Google's open source [TensorFlow][23] machine learning library for neural networks. Thanks to this chip, visual processing runs at up to 30 frames per second.
The AIY Vision Kit requires a user-supplied Raspberry Pi Zero W, a [Raspberry Pi Camera v2][11], and a 16GB micro SD card for downloading the Linux-based OS image. The kit includes the VisionBonnet, an RGB arcade-style button, a piezo speaker, a wide-angle lens kit, and the cardboard enclosure that wraps them all, plus cables, standoffs, mounting nuts, and connecting components.
[![](http://linuxgizmos.com/files/google_aiyvisionkit_pieces-sm.jpg)][12]   [![](http://linuxgizmos.com/files/google_visionbonnet-sm.jpg)][13]
*AIY Vision Kit components (left) and the VisionBonnet add-on board*
Three neural network models are available. There is a general-purpose model that recognizes 1,000 common objects, a face-detection model that scores "joy" on a scale from "sad" to "laughing," and a model that can tell whether an image contains a dog, a cat, or a human. The 1,000-object model derives from Google's open source [MobileNets][24], a family of TensorFlow-based computer vision models designed for resource-constrained mobile or embedded devices.
MobileNet models are low-latency, low-power, and parameterized to meet the resource constraints of different use cases. Google says the models can be used to build classification, detection, embedding, and segmentation applications. Earlier this month, Google released a developer preview of [TensorFlow Lite][14], a library for Android and iOS mobile devices that is compatible with MobileNets and the Android Neural Networks API.
[![](http://linuxgizmos.com/files/google_aiyvisionkit_assembly-sm.jpg)][15]
*AIY Vision Kit assembly diagram*
In addition to the three models, the AIY Vision Kit provides basic TensorFlow code and a compiler, so users can develop their own models. Python developers can also write new software to customize the RGB button colors, the piezo element sounds, and the four GPIO pins on the VisionBonnet, which can drive additional lights, buttons, or servos. Potential models include recognizing food items, opening a dog door based on visual input, sending a text message when your car leaves the driveway, or playing particular music based on a person's recognized facial expression.
[![](http://linuxgizmos.com/files/movidius_myriad2vpu_block-sm.jpg)][16]   [![](http://linuxgizmos.com/files/movidius_myriad2_reference_board-sm.jpg)][17]
*Myriad 2 VPU block diagram (left) and reference board*
The Movidius Myriad 2 processor delivers teraflops of performance within a nominal 1W power envelope. Before Movidius was acquired by Intel, the chip appeared early on in the Project Tango reference platform, and it was built into the Ubuntu-driven, USB-based [Fathom][25] neural network processing stick that Movidius debuted in May 2016. According to Movidius, the Myriad 2 is already used in "millions of devices on the market."
**Further information**
The AIY Vision Kit can be pre-ordered from Micro Center at $44.99, with shipping expected in early December (2017). More information can be found in the AIY Vision Kit [announcement][18], the [Google blog post][19], and the [Micro Center shopping page][20].
--------------------------------------------------------------------------------
via: http://linuxgizmos.com/google-launches-tensorflow-based-vision-recognition-kit-for-rpi-zero-w/
Author: [Eric Brown][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,51 +1,52 @@
Debian-goodies: A Set of Useful Utilities for Debian and Ubuntu Users
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/debian-goodies-720x340.png)
Are you using a Debian-based system? If so, great! I'm here today with good news for you. Say hello to "Debian-goodies", a collection of useful utilities for Debian-based systems such as Ubuntu and Linux Mint. These utilities provide additional useful commands that aren't available by default on Debian-based systems. Using these tools, users can find which programs consume the most disk space, which services need to be restarted after updating the system, search a package for files matching a pattern, list installed packages based on a search string, and more. In this brief guide, we will discuss some useful Debian goodies.
### Debian-goodies – Utilities for Debian and Ubuntu Users
The debian-goodies package is available in the official repositories of Debian and derivatives such as Ubuntu and other Ubuntu variants like Linux Mint. To install debian-goodies, simply run:
```
$ sudo apt-get install debian-goodies
```
Once debian-goodies is installed, let's go through some of its useful utilities.
#### 1. checkrestart
Let me start with my favorite, the `checkrestart` utility. When you install certain security updates, some running applications may keep using the old libraries. To apply the security updates thoroughly, you need to find and restart all of them. This is where `checkrestart` comes in handy. The utility finds which processes are still using old versions of the upgraded libraries; you can then restart those services.
To check which daemons should be restarted after a library upgrade, run:
```
$ sudo checkrestart
[sudo] password for sk:
Found 0 processes using old versions of upgraded files
```
Since I haven't performed any security updates recently, nothing is listed.
Note that the `checkrestart` utility works just fine. However, a similar, newer tool named `needrestart` is available for recent Debian systems. `needrestart` was inspired by the `checkrestart` utility and does the same job. It is actively maintained and supports newer technologies such as containers (LXC, Docker).
Here are `needrestart`'s features:
* Supports (but does not require) systemd
* Blacklisting of binaries (e.g. display managers for graphical displays)
* Tries to detect pending kernel upgrades
* Tries to detect required restarts of interpreter-based daemons (supports Perl, Python, Ruby)
* Fully integrated into apt/dpkg using hooks
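As a side note, `needrestart` can also just report without restarting anything. The flags below are recalled from its man page rather than from this article, so treat them as a hedged sketch and confirm with `man needrestart`:
```
$ sudo needrestart -r l    # list-only restart mode: report services, do not restart
$ sudo needrestart -b      # batch mode: machine-readable output for scripts
```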
It is also available in the default repositories, so you can install it with:
```
$ sudo apt-get install needrestart
```
Now you can check the list of daemons that need to be restarted after updating your system with:
```
$ sudo needrestart
Scanning processes...
@ -60,26 +61,26 @@ No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
```
The good news is that `needrestart` works on other Linux distributions as well. For example, on Arch Linux and its derivatives, you can install it from the AUR using any AUR helper, like this:
```
$ yaourt -S needrestart
```
On Fedora:
```
$ sudo dnf install needrestart
```
#### 2. check-enhancements
The `check-enhancements` utility is used to find packages that enhance installed packages. It lists packages that enhance other packages but are not strictly necessary to run them. You can find enhancements for a single package, or for all installed packages using the `-ip` or `--installed-packages` option.
For example, I am going to list the packages that enhance the gimp package:
```
$ check-enhancements gimp
gimp => gimp-data: Installed: (none) Candidate: 2.8.22-1
@ -102,10 +103,10 @@ gimp => gimp-help-sl: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-sv: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-plugin-registry: Installed: (none) Candidate: 7.20140602ubuntu3
gimp => xcftools: Installed: (none) Candidate: 1.0.7-6
```
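If one of the suggested enhancements looks useful, install it the usual way; for example, picking `gimp-plugin-registry` from the output above:
```
$ sudo apt-get install gimp-plugin-registry
```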
To list the packages that enhance all installed packages, run:
```
$ check-enhancements -ip
autoconf => autoconf-archive: Installed: (none) Candidate: 20170928-2
@ -114,12 +115,12 @@ ca-certificates => ca-cacert: Installed: (none) Candidate: 2011.0523-2
cryptsetup => mandos-client: Installed: (none) Candidate: 1.7.19-1
dpkg => debsig-verify: Installed: (none) Candidate: 0.18
[...]
```
#### 3. dgrep
As the name implies, `dgrep` searches all files in the specified package for the given regular expression. For example, I will search the vim package for files containing the regular expression "text":
```
$ sudo dgrep "text" vim
Binary file /usr/bin/vim.tiny matches
@ -131,44 +132,44 @@ Binary file /usr/bin/vim.tiny matches
/usr/share/doc/vim-tiny/copyright: context diff will do. The e-mail address to be used is
/usr/share/doc/vim-tiny/copyright: On Debian systems, the complete text of the GPL version 2 license can be
[...]
```
`dgrep` supports most of `grep`'s options. Refer to the following guide to learn more about the `grep` command:
* [Grep Command Tutorial For Beginners][2]
#### 4. dglob
The `dglob` utility generates a list of package names that match the given pattern. For example, let's find the list of packages matching the string "vim":
```
$ sudo dglob vim
vim-tiny:amd64
vim:amd64
vim-common:all
vim-runtime:all
```
默认情况下dglob 将仅显示已安装的软件包。如果要列出所有包(包括已安装的和未安装的),使用 **-a** 标志。
默认情况下,`dglob` 将仅显示已安装的软件包。如果要列出所有包(包括已安装的和未安装的),使用 `-a` 标志。
```
$ sudo dglob vim -a
```
#### 5. debget
The `debget` utility downloads the .deb file for a package in APT's database. Note that it only downloads the given package, not its dependencies.
```
$ debget nano
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
Fetched 231 kB in 2s (113 kB/s)
```
#### 6. dpigs
This is another useful utility in this collection. The `dpigs` utility finds and displays the installed packages that occupy the most disk space.
```
$ dpigs
260644 linux-firmware
@ -181,64 +182,66 @@ $ dpigs
28420 vim-runtime
25971 gcc-7
24349 g++-7
```
如你所见linux-firmware 包占用的磁盘空间最多。默认情况下,它将显示占用磁盘空间的 **前 10 个**包。如果要显示更多包,例如 20 个,运行以下命令:
```
$ dpigs -n 20
```
#### 7. debman
The `debman` utility allows you to easily view man pages from a binary .deb file without extracting it. You don't even need to install the .deb package. The following command displays the man page of the nano package:
```
$ debman -f nano_2.9.3-2_amd64.deb nano
```
If you don't have a local copy of the .deb package, use the `-p` flag to download it and view the package's man page:
```
$ debman -p nano nano
```
**Suggested read:**
- [3 Alternatives To man That Every Linux User Should Know][3]
#### 8. debmany
An installed Debian package contains not only man pages but also other files, such as acknowledgements, copyright, and readme files. The `debmany` utility allows you to view and read those files.
```
$ debmany vim
```
![][1]
Use the arrow keys to select the file you want to view, then press ENTER to view it. Press `q` to go back to the main menu.
If the specified package is not installed, `debmany` downloads it from the APT database and displays its man pages. The `dialog` package should be installed to read the man pages.
#### 9. popbugs
If you're a developer, the `popbugs` utility will be quite useful. It displays a customized list of release-critical bugs based on the packages you use (using popularity-contest data). For those unaware, the popularity-contest package sets up a cron job that periodically and anonymously submits statistics about the most-used Debian packages on that system to the Debian developers. This information helps Debian make decisions such as which packages should go on the first CD. It also lets Debian improve future releases so that the most popular packages are installed automatically for new users.
To generate a list of release-critical bugs and display the result in your default web browser, run:
```
$ popbugs
```
Also, you can save the result to a file, as shown below:
```
$ popbugs --output=bugs.txt
```
#### 10. which-pkg-broke
This command displays all the dependencies of the given package, and when each dependency was installed. Using this information, after upgrading your system or packages you can easily find out which package may have broken another, and when.
```
$ which-pkg-broke vim
Package <debconf-2.0> has no install time info
@ -253,15 +256,14 @@ libgcc1:amd64 Wed Apr 25 08:08:42 2018
liblzma5:amd64 Wed Apr 25 08:08:42 2018
libdb5.3:amd64 Wed Apr 25 08:08:42 2018
[...]
```
#### 11. dhomepage
The `dhomepage` utility opens the official website of the given package in your default web browser. For example, the following command opens the home page of the Vim editor:
```
$ dhomepage vim
```
And that's all for now. Debian-goodies is a must-have tool in your arsenal. Even though we don't use all of these utilities often, they are worth learning, and I am sure they will be really useful at times.
@ -278,7 +280,7 @@ via: https://www.ostechnix.com/debian-goodies-a-set-of-useful-utilities-for-debi
Author: [SK][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -0,0 +1,94 @@
Becoming a successful programmer in an underrepresented community
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
Becoming a programmer from an underrepresented community like Cameroon is tough. Many Africans don't even know what computer programming is—and a lot who do think it's only for people from Western or Asian countries.
I didn't own a computer until I was 18, and I didn't start programming until I was a 19-year-old high school senior, and had to write a lot of code on paper because I couldn't be carrying my big desktop to school. I have learned a lot over the past five years as I've moved up the ladder to become a successful programmer from an underrepresented community. While these lessons are from my experience in Africa, many apply to other underrepresented communities, including women.
### 1\. Learn how to code
This is obvious: To be a successful programmer, you first have to be a programmer. In an African community, this may not be very easy. To learn how to code you need a computer and probably internet, too, which aren't very common for Africans to have. I didn't own a desktop computer until I was 18 years old—and I didn't own a laptop until I was about 20, and some may have still considered me privileged. Some students don't even know what a computer looks like until they get to the university.
You still have to find a way to learn how to code. Before I had a computer, I used to walk for miles to see a friend who had one. He wasn't very interested in it, so I spent a lot of time with it. I also visited cybercafes regularly, which consumed most of my pocket money.
Take advantage of local programming communities, as this could be one of your greatest sources of motivation. When you're working on your own, you may feel like a ninja, but that may be because you do not interact much with other programmers. Attend tech events. Make sure you have at least one friend who is better than you. See that person as a competitor and work hard to beat them, even though they may be working as hard as you are. Even if you never win, you'll be growing in skill as a programmer.
### 2\. Don't read too much into statistics
A lot of smart people in underrepresented communities never even make it to the "learning how to code" part because they take statistics as hard facts. I remember when I was aspiring to be a hacker, I used to get discouraged about the statistic that there are far fewer black people than white people in technology. If you google the "top 50 computer programmers of all time," there probably won't be many (if any) black people on the list. Most of the inspiring names in tech, like Ada Lovelace, Linus Torvalds, and Bill Gates, are white.
Growing up, I always believed technology was a white person's thing. I used to think I couldn't do it. When I was young, I never saw a science fiction movie with a black man as a hacker or an expert in computing. It was always white people. I remember when I got to high school and our teacher wrote that programming was part of our curriculum, I thought that was a joke—I wondered, "since when and how will that even be possible?" I wasn't far from the truth. Our teachers couldn't program at all.
Statistics also say that a lot of the amazing, inspiring programmers you look up to, no matter what their color, started coding at the age of 13. But you didn't even know programming existed until you were 19. You ask yourself questions like: How am I going to catch up? Do I even have the intelligence for this? When I was 13, I was still playing stupid, childish games—how can I compete with this?
This may make you conclude that white people are naturally better at tech. That's wrong. Yes, the statistics are correct, but they're just statistics. And they can change. Make them change. Your environment contributes a lot to the things you do while growing up. How can you compare yourself to someone whose parents got him a computer before he was nine—when you didn't even see one until you were 19? That's a 10-year gap. And that nine-year-old kid also had a lot of people to coach him.
You can be a great software engineer regardless of your background. It may be a little harder because you may not have the resources or opportunities people in the western world have, but it's not impossible.
### 3\. Have a local hero or mentor
I think having someone in your life to look up to is one of the most important things. We all admire people like Linus Torvalds and Bill Gates but trying to make them your role models can be demotivating. Bill Gates began coding at age 13 and formed his first venture at age 17. I'm 24 and still trying to figure out what I want to do with my life. Those stories always make me wonder why I'm not better yet, rather than looking for reasons to get better.
Having a local hero or mentor is more helpful. Because you're both living in the same community, there's a greater chance there won't be such a large gap to discourage you. A local mentor probably started coding around the age you did and was unlikely to start a big venture at a very young age.
I've always admired the big names in tech and still do. But I never saw them as mentors. First, because their stories seemed like fantasy to me, and second, I couldn't reach them. I chose my mentors and role models to be those near my reach. Choosing a role model doesn't mean you just want to get to where they are and stop. Success is step by step, and you need a role model for each stage you're trying to reach. When you attain a stage, get another role model for the next stage.
You probably can't get one-on-one advice from someone like Bill Gates. You can get the advice they're giving to the public at conferences, which is great, too. I always follow smart people. But advice that makes the most impact is advice that is directed to you. Advice that takes into consideration your goals and circumstances. You can get that only from someone you have direct access to.
I'm a product of many mentors at different stages of my life. One is [Nyah Check][1], who was a year ahead of me at the university, but in terms of skill and experience, he was two to three years ahead. I heard stories about him when I was still in high school. He made people want to be great programmers, not just focus on getting a 4.0 GPA. He was one of the first people in French-speaking Africa to participate in [Google Summer of Code][2]. While still at the university, he traveled abroad more times than many lecturers would dream of—without spending a dime. He could write code that even our course instructors couldn't understand. He co-founded [Google Developer Group Buea][3] and created an elite programmers club that helped many students learn to code. He started a lot of other communities, like the [Docker Buea meetup][4] that I'm the lead organizer for.
These things inspired me. I wanted to be like him and knew what I would gain by becoming friends with him. Discussions with him were always very inspiring—he talked about programming and his adventures traveling the world for conferences. I learned a lot from him, and I think he taught me well. Now younger students want to be around me for the same reasons I wanted to learn from him.
### 4\. Get involved with open source
If you're in Africa and want to gain top skills from top engineers, your best bet is to join an open source project. The tech ecosystem in Africa is small and mostly made of startups, so getting experience in a field you love might not be easy. It's rare for startups in Africa to be working with machine learning, distributed computing, or containers and technologies like Kubernetes. Unless your passion is web development, your best bet is joining an open source project. I've learned most of what I know by being part of the [OpenMRS][5] community. I've also contributed to other open source projects including [LibreHealth][6], [Coala][7], and [Kubernetes][8]. Along with gaining tech skills, you'll be building your network of influential people. Most of my peers know about Linus Torvalds from books, but I have a picture with him.
Participate in open source outreach programs like Google Summer of Code, [Google Code-in][9], [Outreachy][10], or [Linux Foundation Networking Internships][11]. These opportunities help you gain skills that may not be available in startups.
I participated in Google Summer of Code twice as a student, and I'm now a mentor. I've been a Google Code-in org admin, and I'm volunteering as an open source developer. All these activities help me learn new things.
### 5\. Take advantage of diversity programs while you can
Diversity programs are great, but if you're like me, you may not like to benefit very much from them. If you're on a team of five and the basis of your offer is that you're a black person and the other four are white, you might wonder if you're really good enough. You won't want people to think a foundation sponsored your trip because you're black rather than because you add as much value as anyone else. It's never only that you're a minority—it's because the sponsoring organization thinks you're an exceptional minority. You're not the only person who applied for the diversity scholarship, and not everyone that applied won the award. Take advantage of diversity opportunities while you can and build your knowledge base and network.
When people ask me why the Linux Foundation sponsored my trip to the Open Source Summit, I say: "I was invited to give a talk at their conference, but they have diversity scholarships you can apply for." How cool does that sound?
Attend as many conferences as you can—diversity scholarships can help. Learn all you can learn. Practice what you learn. Get to know people. Apply to give talks. Start small. My right leg used to shake whenever I stood in front of a crowd to give a speech, but with practice, I've gotten better.
### 6\. Give back
Always find a way to give back. Mentor someone. Take up an active role in a community. These are the ways I give back to my community. It isn't only a moral responsibility—it's a win-win because you can learn a lot while helping others get closer to their dreams.
I was part of a Programming Language meetup organized by Google Developer Group Buea where I mentored 15 students in Java programming (from beginner to intermediate). After the program was over, I created a Java User Group to keep the Java community together. I recruited two members from the meetup to join me as volunteer developers at LibreHealth, and under my guidance, they made useful commits to the project. They were later accepted as Google Summer of Code students, and I was assigned to mentor them during the program. I'm also the lead organizer for Docker Buea, the official Docker meetup in Cameroon, and I'm also Docker Campus Ambassador.
Taking up leadership roles in this community has forced me to learn. As Docker Campus Ambassador, I'm supposed to train students on how to use Docker. Because of this, I've learned a lot of cool stuff about Docker and containers in general.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/becoming-successful-programmer
Author: [Ivange Larry][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/ivange94
[1]:https://github.com/Ch3ck
[2]:https://summerofcode.withgoogle.com/
[3]:http://www.gdgbuea.net/
[4]:https://www.meetup.com/Docker-Buea/?_cookie-check=EnOn1Ct-CS4o1YOw
[5]:https://openmrs.org/
[6]:https://librehealth.io/
[7]:https://coala.io/#/home
[8]:https://kubernetes.io/
[9]:https://codein.withgoogle.com/archive/
[10]:https://www.outreachy.org/
[11]:https://wiki.lfnetworking.org/display/LN/LF+Networking+Internships
[12]:http://sched.co/FAND
[13]:https://ossna18.sched.com/

View File

@ -0,0 +1,70 @@
Building more trustful teams in four steps
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_trust.png?itok=KMfi0Rdo)
Robin Dreeke's The Code of Trust is a helpful guide to developing trustful relationships, and it's particularly useful to people working in open organizations (where trust is fundamental to any kind of work). As its title implies, Dreeke's book presents a "code" or set of principles people can follow when attempting to establish trust. I explained those in [the first installment of this review][1]. In this article, then, I'll outline what Dreeke (a former FBI agent) calls "The Four Steps to Inspiring Trust"—a set of practices for enacting the principles. In other words, the Steps make the Code work in the real world.
### The Four Steps
#### 1\. Align your goals
First, determine your primary goal—what you want to achieve and what sacrifices you are willing to make to achieve those goals. Learn the goals of others. Look for ways to align your goals with their goals, to make parts of their goals a part of yours. "You'll achieve the power that only combined forces can attain," Dreeke writes. For example, in the sales manager seminar I once ran regularly, I mentioned that if a sales manager helps a salesman reach his sales goals, the manager will reach his goals automatically. Also, if a salesman helps his customer reach his goals, the salesman will reach his goals automatically. This is aligning goals. (For more on this, see an [earlier article][2] I wrote about how companies can determine when to compete and when to cooperate).
This couldn't be more true in open organizations, which depend on both internal and external contributors a great deal. What are those contributors' goals? Everyone must understand these if an open organization is going to be successful.
When aligning goals, try to avoid having strong opinions on the topic at hand. This leads to inflexibility, Dreeke says, and reduces the chance of generating options that align with other people's goals. To find their goals, consider what their fears or concerns are. Then try to help them overcome those fears or concerns.
If you can't get them to align with your goals, then you should choose to not align with them and instead remove them from the team. Dreeke recommends doing this in a way that allows you to stay approachable for other projects. In one issue, goals might not be aligned; in other issues, they may.
Dreeke also notes that many people believe being successful means carefully narrowing your focus to your own goals. "But that's one of those lazy shortcuts that slow you down," Dreeke writes. Success, Dreeke says, arrives faster when you inspire others to merge their goals with yours, then forge ahead together. In that respect, if you place heavy attention on other people and their goals while doing the same with yours, success in opening someone up comes far sooner. This all sounds very much like advice for activating transparency, inclusivity, and collaboration—key open organization principles.
#### 2\. Apply the power of context
Dreeke recommends really getting to know your partners, discovering "their desires, beliefs, personality traits, behaviors, and demographic characteristics." Those are key influences that define their context.
People only trust those who know them (including their beliefs, goals, and personalities). Once known, you can match their goals with yours. To achieve trust, you must find a plan that achieves their goals along with yours (see above). If you try to push your goals on them, they'll become defensive and information exchange will shut down. If that happens, no good ideas will materialize.
#### 3\. Craft your encounter
When you meet with potential allies, plan the meeting meticulously—especially the first meeting. Create the perfect environment for it. Know in advance: 1. the proper atmosphere and mood required, 2. the special nature of the occasion, 3. the perfect time and location, 4. your opening remark, and 5. your plan of what to offer the other person (and what to ask for at that time). Creating the best possible environment for every interaction sets the stage for success.
Dreeke explains the difference between times for planning and thinking and times for simply performing (like when you meet a stranger for the first time). If you are not well prepared, the fear and emotions of the moment could be overwhelming. To reduce that emotion, planning, preparing and role playing can be very helpful.
Later in the book, Dreeke discusses "toxic situations," suggesting you should not ignore toxic situations, as they'll more than likely get worse if you do. People could become emotional and say irrational things. You must address the toxic situation by helping people stay rational. Then try to laser in on interactions between your goals and theirs. What does the person want to achieve? Suspending your ego gives you "the freedom to laser-in" on others' points of view and places where their goals can lead to joint ultimate goals, Dreeke says. Stay focused on their context, not your ego, in toxic situations.
Some leaders think it is best to strongly confront toxic people, maybe embarrassing them in front of others. That might feel good at the time, but "kicking ass in a crowd" just builds people's defenses, Dreeke says. To build a productive plan, he says, you need "shields down," so information will be shared.
"Trust leaders take no interest in their own power," Dreeke argues, as they are deeply interested and invested in others. By helping others, their trust develops. For toxic people, the opposite is true: They want power. Unfortunately, this desire for power just espouses more fear and distrust. Dreeke says that to combat a toxic environment, trust leaders do not "fight fire with fire" which spreads the toxicity. They "fight fire with water" to reduce it. In movies, fights are exciting; in real life they are counterproductive.
#### 4\. Connect
Finally, show others you speak their language—not only for understanding, but also to demonstrate reason, respect, and consideration. Speak about what they want to hear (namely, issues that focus on them and their needs). The speed of trust is directly opposed to the speed of speech, Dreeke says. People who speak slowly and carefully build trust faster than people who rush their speaking.
Importantly, Dreeke also covers a way to get people to like you. It doesn't involve directly getting people to like you personally; it involves getting people to like themselves. Show more respect for them than they might even feel about themselves. Praise them for qualities about themselves that they hadn't thought about. That will open the doors to a trusting relationship.
### Putting it together
I've spent my entire career attempting to build trust globally, throughout the business communities in which I've worked. I have no experience in the intelligence community, but I do see great similarities in spite of the different working environment. The book has given me new insights I never considered (like the section on "crafting your encounter," for example). I recommend people pick up the book and read it thoroughly, as there is other helpful advice in it that I couldn't cover in this short article.
As I [mentioned in Part 1][1], following Dreeke's Code of Trust can lead to building strong trust networks or communities. Those trust communities are exactly what we are trying to create in open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/8/steps-trust
Author: [Ron McFarland][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/ron-mcfarland
[1]:https://opensource.com/open-organization/18/7/the-code-of-trust-1
[2]:https://opensource.com/open-organization/17/6/collaboration-vs-competition-part-1

View File

@ -0,0 +1,180 @@
3 tips for moving your team to a microservices architecture
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_architecture_design.jpg?itok=lB_qYv-I)
Microservices are gaining in popularity and providing new ways for tech companies to improve their services for end users. But what impact does the shift to microservices have on team culture and morale? What issues should CTOs, developers, and project managers consider when the best technological choice is a move to microservices?
Below you'll find key advice and insight from CTOs and project leads as they reflect on their experiences with team culture and microservices.
### You can't build successful microservices without a successful team culture
When I was working with Java developers, there was tension within the camp about who got to work on the newest and meatiest features. Our engineering leadership had decided that we would exclusively use Java to build all new microservices.
There were great reasons for this decision, but as I will explain later, such a restrictive decision comes with some repercussions. Communicating the “why” of technical decisions can go a long way toward creating a culture where people feel included and informed.
When you're organizing and managing a team around microservices, it's always challenging to balance the mood, morale, and overall culture. In most cases, the leadership needs to balance the risk of team members using new technology against the needs of the client and the business itself.
This dilemma, and many others like it, has led CTOs to ask themselves questions such as: How much freedom should I give my team when it comes to adopting new technologies? And perhaps even more importantly, how can I manage the overarching culture within my camp?
### Give every team member a chance to thrive
When the engineering leaders in the example above decided that Java was the best technology to use when building microservices, the decision was best for the company: Java is performant, and many of the senior people on the team were well-versed with it. However, not everyone on the team had experience with Java.
The problem was, our team was split into two camps: the Java guys and the JavaScript guys. As time went by and exciting new projects came up, we'd always reach for Java to get the job done. Before long, some annoyance within the JavaScript camp crept in: “Why do the Java guys always get to work on the exciting new projects while we're left to do the mundane front-end tasks like implementing third-party analytics tools? We want a big, exciting project to work on too!”
Like most rifts, it started out small, but it grew worse over time.
The lesson I learned from that experience was to take your team's expertise and favored technologies into account when choosing a de facto tech stack for your microservices and when adjusting your team's level of freedom to pick and choose their tools.
Sure, you need some structure, but if you're too restrictive—or worse, blind to the desire of team members to innovate with different technologies—you may have a rift of your own to manage.
So evaluate your team closely and come up with a plan that empowers everyone. That way, every section of your team can get involved in major projects, and nobody will feel like they're being left on the bench.
### Technology choices: stability vs. flexibility
Let's say you hire a new junior developer who is excited about some brand new, fresh-off-the-press JavaScript framework.
That framework, while sporting some technical breakthroughs, may not have proven itself in production environments, and it probably doesn't have great support available. CTOs have to make a difficult choice: Okaying that move for the morale of the team, or declining it to protect the company and its bottom line and to keep the project stable as the deadline approaches.
The answer depends on a lot of different factors (which also means there is no single correct answer).
### Technological freedom
“We give our team and ourselves 100% freedom in considering technology choices. We eventually identified two or three technologies not to use in the end, primarily due to not wanting to complicate our deployment story,” said [Benjamin Curtis][1], co-founder of [Honeybadger][2].
“In other words, we considered introducing new languages and new approaches into our tech stack when creating our microservices, and we actually did deploy a production microservice on a different stack at one point. [While we do generally] stick with technologies that we know in order to simplify our ops stack, we periodically revisit that decision to see if potential performance or reliability benefits would be gained by adopting a new technology, but so far we haven't made a change,” Curtis continued.
When I spoke with [Stephen Blum][3], CTO at [PubNub][4], he expressed a similar view, welcoming pretty much any technology that cuts the mustard: “We're totally open with it. We want to continue to push forward with new open source technologies that are available, and we only have a couple of constraints with the team that are very fair: [It] must run in container environment, and it has to be cost-effective.”
### High freedom, high responsibility
[Sumo Logic][5] CTO [Christian Beedgen][6] and chief architect [Stefan Zier][7] expanded on this topic, agreeing that if you're going to give developers freedom to choose their technology, it must come with a high level of responsibility attached. “It's really important that [whoever builds] the software takes full ownership for it. In other words, they not only build software, but they also run the software and remain responsible for the whole lifecycle.”
Beedgen and Zier recommend implementing a system that resembles a federal government system, keeping those freedoms in check by heightening responsibility: “[You need] a federal culture, really. You've got to have a system where multiple, independent teams can come together towards the greater goal. That limits the independence of the units to some degree, as they have to agree that there is potentially a federal government of some sort. But within those smaller groups, they can make as many decisions on their own as they like within guidelines established on a higher level.”
Decentralized, federal, or however you frame it, this approach to structuring microservice teams gives each team and each team member the freedom they want, without enabling anyone to pull the project apart.
However, not everyone agrees.
### Restrict technology to simplify things
[Darby Frey][8], co-founder of [Lead Honestly][9], takes a more restrictive approach to technology selection.
“At my last company we had a lot of services and a fairly small team, and one of the main things that made it work, especially for the team size that we had, was that every app was the same. Every backend service was a Ruby app,” he explained.
Frey explained that this helped simplify the lives of his team members: “[Every service has] the same testing framework, the same database backend, the same background job processing tool, et cetera. Everything was the same.
“That meant that when an engineer would jump around between apps, they weren't having to learn a new pattern or learn a different language each time,” Frey continued, “So we're very aware and very strict about keeping that commonality.”
While Frey is sympathetic to developers wanting to introduce a new language, admitting that he “loves the idea of trying new things,” he feels that the cons still outweigh the pros.
“Having a polyglot architecture can increase the development and maintenance costs. If it's just all the same, you can focus on business value and business features and not have to be super siloed in how your services operate. I don't think everybody loves that decision, but at the end of the day, when they have to fix something on a weekend or in the middle of the night, they appreciate it,” said Frey.
### Centralized or decentralized organization
How your team is structured is also going to impact your microservices engineering culture—for better or worse.
For example, it's common for software engineers to write the code before shipping it off to the operations team, who in turn deploy it to the servers. But when things break (and things always break!), an internal conflict occurs.
Because operations engineers don't write the code themselves, they rarely understand problems when they first arise. As a result, they need to get in touch with those who did code it: the software engineers. So right from the get-go, you've got a middleman relaying messages between the problem and the team that can fix that problem.
To add an extra layer of complexity, because software engineers aren't involved with operations, they often can't fully appreciate how their code affects the overall operation of the platform. They learn of issues only when operations engineers complain about them.
As you can see, this is a relationship that's destined for constant conflict.
### Navigating conflict
One way to attack this problem is by following the lead of Netflix and Amazon, both of which favor decentralized governance. Software development thought leaders James Lewis and Martin Fowler feel that decentralized governance is the way to go when it comes to microservice team organization, as they explain in a [blog post][10].
“One of the consequences of centralized governance is the tendency to standardize on single technology platforms. Experience shows that this approach is constricting—not every problem is a nail and not every solution a hammer,” the article reads. “Perhaps the apogee of decentralized governance is the ‘build it, run it’ ethos popularized by Amazon. Teams are responsible for all aspects of the software they build, including operating the software 24/7.”
Netflix, Lewis and Fowler write, is another company pushing higher levels of responsibility on development teams. They hypothesize that, because they'll be responsible and called upon should anything go wrong later down the line, more care will be taken during the development and testing stages to ensure each microservice is in ship shape.
“These ideas are about as far away from the traditional centralized governance model as it is possible to be,” they conclude.
### Who's on weekend pager duty?
When considering a centralized or decentralized culture, think about how it impacts your team members when problems inevitably crop up at inopportune times. A decentralized system implies that each decentralized team takes responsibility for one service or one set of services. But that also creates a problem: Silos.
That's one reason why Lead Honestly's Frey isn't a proponent of the concept of decentralized governance.
“The pattern of a single team is responsible for a particular service is something you see a lot in microservice architectures. We don't do that, for a couple of reasons. The primary business reason is that we want teams that are responsible not for specific code but for customer-facing features. A team might be responsible for order processing, so that will touch multiple code bases but the end result for the business is that there is one team that owns the whole thing end to end, so there are fewer cracks for things to fall through,” Frey explained.
The other main reason, he continued, is that developers can take more ownership of the overall project: “They can actually think about [the project] holistically.”
Nathan Peck, developer advocate for container services at Amazon Web Services, [explained this problem in more depth][11]. In essence, when you separate the software engineers and the operations engineers, you make life harder for your team whenever an issue arises with the code—which is bad news for end users, too.
But does decentralization need to lead to separation and siloization?
Peck explained that his solution lies in [DevOps][12], a model aimed at tightening the feedback loop by bringing these two teams closer together, strengthening team culture and communication in the process. Peck describes this as the “you build it, you run it” approach.
However, that doesn't mean teams need to get siloed or distanced away from partaking in certain tasks, as Frey suggests might happen.
“One of the most powerful approaches to decentralized governance is to build a mindset of ‘DevOps,’” Peck wrote. “[With this approach], engineers are involved in all parts of the software pipeline: writing code, building it, deploying the resulting product, and operating and monitoring it in production. The DevOps way contrasts with the older model of separating development teams from operations teams by having development teams ship code over the wall to operations teams who were then responsible to run it and maintain it.”
DevOps, as [Armory][13] CTO [Isaac Mosquera][14] explained, is an agile software development framework and culture that's gaining traction thanks to—well, pretty much everything that Peck said.
Interestingly, Mosquera feels that this approach actually flies in the face of [Conway's Law][15]:
_"Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." — M. Conway_
“Instead of communication driving software design, now software architecture drives communication. Not only do teams operate and organize differently, but it requires a new set of tooling and process to support this type of architecture; i.e., DevOps,” Mosquera explained.
[Chris McFadden][16], VP of engineering at [SparkPost][17], offers an interesting example that might be worth following. At SparkPost, you'll find decentralized governance—but you won't find a one-team-per-service culture.
“The team that is developing these microservices started off as one team, but they're now split up into three teams under the same larger group. Each team has some level of responsibility around certain domains and certain expertise, but the ownership of these services is not restricted to any one of these teams,” McFadden explained.
This approach, McFadden continued, allows any team to work on anything from new features to bug fixes to production issues relating to any of those services. There's total flexibility and not a silo in sight.
“It allows [the teams to be] a little more flexible both in terms of new product development as well, just because you're not getting too restricted and that's based on our size as a company and as an engineering team. We really need to retain some flexibility,” he said.
However, size might matter here. McFadden admitted that if SparkPost was a lot larger, “then it would make more sense to have a single, larger team own one of those microservices.”
“[It's] better, I think, to have a little bit more broad responsibility for these services and it gives you a little more flexibility. At least that works for us at this time, where we are as an organization,” he said.
### A successful microservices engineering culture is a balancing act
When it comes to technology, freedom—with responsibility—looks to be the most rewarding path. Team members with differing technological preferences will come and go, while new challenges may require you to ditch technologies that have previously served you well. Software development is constantly in flux, so you'll need to continually balance the needs of your team as new devices, technologies, and clients emerge.
As for structuring your teams, a decentralized yet un-siloed approach that leverages DevOps and instills a “you build it, you run it” mentality seems to be popular, although other schools of thought do exist. As usual, you're going to have to experiment to see what suits your team best.
Heres a quick recap on how to ensure your team culture meshes well with a microservices architecture:
* **Be sustainable, yet flexible**: Balance sustainability without forgetting about flexibility and the need for your team to be innovative when the right opportunity comes along. However, there's a distinct difference of opinion over how you should achieve that balance.
* **Give equal opportunities**: Don't favor one section of your team over another. If you're going to impose restrictions, make sure it's not going to fundamentally alienate team members from the get-go. Think about how your product roadmap is shaping up and forecast how it will be built and who's going to do the work.
* **Structure your team to be agile, yet responsible**: Decentralized governance and agile development is the flavor of the day for a good reason, but don't forget to instill a sense of responsibility within each team.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/microservices-team-challenges
Author: [Jake Lumetta][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/jakelumetta
[1]:https://twitter.com/stympy?lang=en
[2]:https://www.honeybadger.io/
[3]:https://twitter.com/stephenlb
[4]:https://www.pubnub.com/
[5]:http://sumologic.com/
[6]:https://twitter.com/raychaser
[7]:https://twitter.com/stefanzier
[8]:https://twitter.com/darbyfrey
[9]:https://leadhonestly.com/
[10]:https://martinfowler.com/articles/microservices.html#ProductsNotProjects
[11]:https://medium.com/@nathankpeck/microservice-principles-decentralized-governance-4cdbde2ff6ca
[12]:https://opensource.com/resources/devops
[13]:http://armory.io/
[14]:https://twitter.com/imosquera
[15]:https://en.wikipedia.org/wiki/Conway%27s_law
[16]:https://twitter.com/cristoirmac
[17]:https://www.sparkpost.com/

View File

@ -0,0 +1,56 @@
How do tools affect culture?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
Most of the DevOps community talks about how tools don't matter much. The culture has to change first, the argument goes, which might modify how the tools are used.
I agree and disagree with that concept. I believe the relationship between tools and culture is more symbiotic and bidirectional than unidirectional. I have discovered this through real-world transformations across several companies now. I admit it's hard to determine whether the tools changed the culture or whether the culture changed how the tools were used.
### Violating principles
Some tools violate core principles of modern development and operations. The primary violation I have seen is tools that require GUI interactions. This often separates operators from the value pipeline in a way that is cognitively difficult to overcome. If everything in your infrastructure is supposed to be configured and deployed through a value pipeline, then taking someone out of that flow inherently changes their perspective and engagement. Making manual modifications also injects risk into the system that creates unpredictability and undermines the value of the pipeline.
I've heard it said that these tools are fine and can be made to work within the new culture, and I've tried this in the past. Screen scraping and form manipulation tools have been used to attempt automation with some systems I've integrated. This is very fragile and doesn't work on all systems. It ultimately required a lot of manual intervention.
Another system from a large vendor providing integrated monitoring and ticketing solutions for infrastructure seemed to implement its API as an afterthought, and this resulted in the system being unable to handle the load from the automated system. This required constant manual recoveries and sometimes the tedious task of manually closing errant tickets that shouldn't have been created or that weren't closed properly.
The individuals maintaining these systems experienced great frustration and often expressed a lack of confidence in the overall DevOps transformation. In one of these instances, we introduced a modern tool for monitoring and alerting, and the same individuals suddenly developed a tremendous amount of confidence in the overall DevOps transformation. I believe this is because tools can reinforce culture and improve it when a similar tool that lacks modern capabilities would otherwise stymie motivation and engagement.
### Choosing tools
At the NAIC (National Association of Insurance Commissioners), we've adopted a practice of evaluating new and existing tools based on features we believe reinforce the core principles of our value pipeline. We currently have seven items on our list:
* REST API provided and fully functional (possesses all application functionality)
* Ability to provision immutably (can be installed, configured, and started without human intervention)
* Ability to provide all configuration through static files
* Open source code
* Uses open standards when available
* Offered as Software as a Service (SaaS) or hosted (we don't run anything)
* Deployable to public cloud (based on licensing and cost)
This is a prioritized list. Each item gets rated green, yellow, or red to indicate how much each statement applies to a particular technology. This creates a visual that makes it quite clear how the different candidates compare to one another. We then use this to make decisions about which tools we should use. We don't make decisions solely on these criteria, but they do provide a clearer picture and help us know when we're sacrificing principles. Transparency is a core principle in our culture, and this system helps reinforce that in our decision-making process.
We use green, yellow, and red because there's not normally a clear binary representation of these criteria within each tool. For example, some tools have an incomplete API, which would result in yellow being applied. If the tool uses open standards like OpenAPI and there's no other applicable open standard, then it would receive green for “Uses open standards when available.” However, a tracing system that uses OpenAPI and not OpenTracing would receive a yellow rating.
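To make that scorecard concrete, here is a minimal sketch in Python (my own illustration, not the NAIC's actual tooling; the criteria strings are abbreviated from the list above, and the weights are invented):
```
# Illustrative only: tally a green/yellow/red scorecard for candidate tools.
SCORES = {"green": 2, "yellow": 1, "red": 0}

CRITERIA = [  # ordered by priority, highest first
    "REST API fully functional",
    "Immutable provisioning",
    "Static-file configuration",
    "Open source code",
    "Open standards",
    "SaaS/hosted",
    "Public cloud deployable",
]

def summarize(name, ratings):
    """Print a compact visual row for one tool; ratings follow CRITERIA order."""
    total = sum(SCORES[r] for r in ratings)
    row = " ".join(r[0].upper() for r in ratings)  # one G/Y/R letter per criterion
    print(f"{name:<8} {row}  score={total}")

# As the text says, the score informs the decision; it doesn't make it.
summarize("tool-a", ["green", "green", "yellow", "green", "yellow", "red", "green"])
summarize("tool-b", ["yellow", "red", "green", "green", "green", "yellow", "green"])
```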
This type of system creates a common understanding of what is valued when it comes to tool selection, and it helps avoid unknowingly violating core principles of your value pipeline. We recently used this method to select [GitLab][1] as our version control and continuous integration system, and it has drastically improved our culture for many reasons. I estimated 50 users for the first year, and we're already over 120 in just the first few months.
The tools we used previously didn't allow us to contribute back our own features, collaborate transparently, or automate so completely. We've also benefited from GitLab's culture influencing ours. Its [handbook][2] and open communication have been invaluable to our growth. Tools, and the companies that make them, can and will influence your company's culture. What are you willing to allow in?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/how-tools-affect-culture
Author: [Dan Barker][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/barkerd427
[1]:https://about.gitlab.com/
[2]:https://about.gitlab.com/handbook/

View File

@ -1,3 +1,5 @@
Translating by MjSeven
A Collection Of Useful BASH Scripts For Heavy Commandline Users
======

View File

@ -1,262 +0,0 @@
Translating by MjSeven
API Star: Python 3 API Framework – Polyglot.Ninja()
======
For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named “API Star” which seemed really interesting to me for several reasons. Firstly, the framework embraces modern Python features like type hints and asyncio. It then goes ahead and uses these features to provide an awesome development experience for us, the developers. We will get into those features soon, but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.
Now, back to API Star: I feel very productive in the framework. I can choose to write async code based on asyncio or I can choose a traditional backend like WSGI. It comes with a command line tool `apistar` to help us get things done faster. There's (optional) support for both Django ORM and SQLAlchemy. There's a brilliant type system that enables us to define constraints on our input and output, and from these, API Star can auto-generate API schemas (and docs), provide validation and serialization features, and a lot more. Although API Star is heavily focused on building APIs, you can also build web applications on top of it fairly easily. All of this might not make proper sense until we build something ourselves.
### Getting Started
We will start by installing API Star. It would be a good idea to create a virtual environment for this exercise. If you don't know how to create a virtualenv, don't worry, just go ahead anyway.
```
pip install apistar
```
If you're not using a virtual environment or the `pip` command for your Python 3 is called `pip3`, then please use `pip3 install apistar` instead.
Once we have the package installed, we should have access to the `apistar` command line tool. We can create a new project with it. Let's create a new project in our current directory.
```
apistar new .
```
Now we should have two files created: `app.py`, which contains the main application, and `test.py` for our tests. Let's examine our `app.py` file:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls


def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}


routes = [
    Route('/', 'GET', welcome),
    Include('/docs', docs_urls),
    Include('/static', static_urls)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
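The same `apistar` command line tool we used to create the project can also serve it. In the API Star version this article targets, the development server should start with the following command and listen on port 8080 (if the subcommand differs in your version, `apistar --help` will tell):
```
apistar run
```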
Before we dive into the code, let's run the app and see if it works. If we navigate to `http://127.0.0.1:8080/` we will get the following response:
```
{"message": "Welcome to API Star!"}
```
And if we navigate to: `http://127.0.0.1:8080/?name=masnun`
```
{"message": "Welcome to API Star, masnun!"}
```
Similarly, if we navigate to `http://127.0.0.1:8080/docs/`, we will see auto-generated docs for our API.
Now let's look at the code. We have a `welcome` function that takes a parameter named `name` which has a default value of `None`. API Star is a smart API framework. It will try to find the `name` key in the URL path or query string and pass it to our function. It also generates the API docs based on it. Pretty nice, no?
We then create a list of `Route` and `Include` instances and pass the list to the `App` instance. `Route` objects are used to define custom user routing. `Include`, as the name suggests, includes/embeds other routes under the path provided to it.
### Routing
Routing is simple. When constructing the `App` instance, we need to pass a list as the `routes` argument. This list should be composed of `Route` or `Include` objects, as we just saw above. For `Route`s, we pass a URL path, an HTTP method name and the request handler callable (function or otherwise). For `Include` instances, we pass a URL path and a list of `Route` instances.
##### Path Parameters
We can put a name inside curly braces to declare a URL path parameter. For example, `/user/{user_id}` defines a path where `user_id` is a path parameter, or a variable, which will be injected into the handler function (actually callable). Here's a quick example:
```
from apistar import Route
from apistar.frameworks.wsgi import WSGIApp as App


def user_profile(user_id: int):
    return {'message': 'Your profile id is: {}'.format(user_id)}


routes = [
    Route('/user/{user_id}', 'GET', user_profile),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
If we visit `http://127.0.0.1:8080/user/23` we will get a response like this:
```
{"message": "Your profile id is: 23"}
```
But if we try to visit `http://127.0.0.1:8080/user/some_string`, it will not match, because in the `user_profile` function we defined, we added a type hint for the `user_id` parameter. If it's not an integer, the path doesn't match. But if we go ahead and delete the type hint and just use `user_profile(user_id)`, it will match this URL. This is, again, API Star being smart and taking advantage of typing.
#### Including / Grouping Routes
Sometimes it might make sense to group certain URLs together. Say we have a `user` module that deals with user-related functionality. It might be better to group all the user-related endpoints under the `/user` path. For example: `/user/new`, `/user/1`, `/user/1/update` and what not. We can easily create our handlers and routes in a separate module (or even a package) and then include them in our own routes.
Let's create a new module named `user`; the file name would be `user.py`. Let's put this code in the file:
```
from apistar import Route


def user_new():
    return {"message": "Create a new user"}


def user_update(user_id: int):
    return {"message": "Update user #{}".format(user_id)}


def user_profile(user_id: int):
    return {"message": "User Profile for: {}".format(user_id)}


user_routes = [
    Route("/new", "GET", user_new),
    Route("/{user_id}/update", "GET", user_update),
    Route("/{user_id}/profile", "GET", user_profile),
]
```
Now we can import our `user_routes` from within our main app file and use it like this:
```
from apistar import Include
from apistar.frameworks.wsgi import WSGIApp as App

from user import user_routes

routes = [
    Include("/user", user_routes)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
Now `/user/new` will delegate to the `user_new` function.
### Accessing Query String / Query Parameters
Any parameter passed in the query string can be injected directly into the handler function. Say for the URL `/call?phone=1234`, the handler function can define a `phone` parameter and it will receive the value from the query string. If the URL query string doesn't include a value for `phone`, it will get `None` instead. We can also set a default value for the parameter like this:
```
def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}
```
In the above example, we set a default value for `name`, which is `None` anyway.
### Injecting Objects
By type hinting a request handler, we can have different objects injected into our views. Injecting request-related objects can be helpful for accessing them directly from inside the handler. There are several built-in objects in the `http` package from API Star itself. We can also use its type system to create our own custom objects and have them injected into our functions. API Star also does data validation based on the constraints specified.
Let's define our own `User` type and have it injected into our request handler:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar import typesystem


class User(typesystem.Object):
    properties = {
        'name': typesystem.string(max_length=100),
        'email': typesystem.string(max_length=100),
        'age': typesystem.integer(maximum=100, minimum=18)
    }
    required = ["name", "age", "email"]


def new_user(user: User):
    return user


routes = [
    Route('/', 'POST', new_user),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
Now if we send this request:
```
curl -X POST \
  http://127.0.0.1:8080/ \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{"name": "masnun", "email": "masnun@gmail.com", "age": 12}'
```
Guess what happens? We get an error saying age must be equal to or greater than 18. The type system is giving us intelligent data validation as well. If we enable the `docs` URL, we will also get these parameters automatically documented there.
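For contrast, a request that satisfies the constraints should go through, and since `new_user` simply returns the validated object, we would expect the JSON-encoded user echoed right back (the response line below is an expectation based on that behavior, not captured output):
```
curl -X POST \
  http://127.0.0.1:8080/ \
  -H 'Content-Type: application/json' \
  -d '{"name": "masnun", "email": "masnun@gmail.com", "age": 20}'

{"name": "masnun", "email": "masnun@gmail.com", "age": 20}
```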
### Sending a Response
If you have noticed so far, we can just pass a dictionary and it will be JSON encoded and returned by default. However, we can set the status code and any additional headers by using the `Response` class from `apistar`. Here's a quick example:
```
from apistar import Route, Response
from apistar.frameworks.wsgi import WSGIApp as App


def hello():
    return Response(
        content="Hello".encode("utf-8"),
        status=200,
        headers={"X-API-Framework": "API Star"},
        content_type="text/plain"
    )


routes = [
    Route('/', 'GET', hello),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
It should send a plain text response along with a custom header. Please note that the `content` should be bytes, not a string. That's why I encoded it.
### Moving On
I just walked through some of the features of API Star. There's a lot more cool stuff in API Star. I do recommend going through the [Github Readme][2] to learn more about the different features offered by this excellent framework. I shall also try to cover short, focused tutorials on API Star in the coming days.
--------------------------------------------------------------------------------
via: http://polyglot.ninja/api-star-python-3-api-framework/
作者:[MASNUN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://polyglot.ninja/author/masnun/
[1]:http://polyglot.ninja/rest-api-best-practices-python-flask-tutorial/
[2]:https://github.com/encode/apistar


@ -0,0 +1,330 @@
Tuptime - A Tool To Report The Historical Uptime Of Linux System
======
At the beginning of this month we wrote an article about system uptime that helps users check how long a Linux system has been running without downtime and since what date it has been up. That can be done using 11 methods.
uptime is a very well-known command, which everyone uses when they need to check a Linux server's uptime.
But it won't show the historical and statistical running time of a Linux system; that's where tuptime comes into the picture.
Server uptime is very important when the server runs critical applications such as online portals.
**Suggested Read :** [11 Methods To Find System/Server Uptime In Linux][1]
### What Is tuptime?
[Tuptime][2] is a tool for reporting the historical and statistical running time of a system, keeping track of it between restarts. It is like the uptime command, but with more interesting output.
### tuptime Features
* Count system startups
* Register first boot time (a.k.a. installation time)
* Count graceful and accidental shutdowns
* Uptime and downtime percentage since first boot time
* Accumulated system uptime, downtime and total
* Largest, shortest and average uptime and downtime
* Current uptime
* Print formatted table or list with most of the previous values
* Register used kernels
* Narrow reports since and/or until a given startup or timestamp
* Reports in CSV
### Prerequisites
Make sure your system has Python 3 installed as a prerequisite. If not, install it using your distribution's package manager.
**Suggested Read :** [3 Methods To Install Latest Python3 Package On CentOS 6 System][3]
### How To Install tuptime
A few distributions offer a tuptime package, but it may be a bit old. I would advise you to install the latest available version using the method below, in order to get all the features.
Clone the tuptime repository from GitHub.
```
# git clone https://github.com/rfrail3/tuptime.git
```
Copy the executable file `tuptime/src/tuptime` to `/usr/bin/` and assign 755 permissions.
```
# cp tuptime/src/tuptime /usr/bin/tuptime
# chmod 755 /usr/bin/tuptime
```
All scripts, units and related files are provided inside this repo, so copy and paste the necessary files into the appropriate locations to get the full functionality of the tuptime utility.
Add a tuptime user. tuptime doesn't run as a daemon; it only needs to execute when the init manager starts up and shuts down the system.
```
# useradd -d /var/lib/tuptime -s /bin/sh tuptime
```
Change the owner of the database directory.
```
# chown -R tuptime:tuptime /var/lib/tuptime
```
Copy the cron file from `tuptime/src/cron.d/tuptime` to `/etc/cron.d/` and assign 644 permissions.
```
# cp tuptime/src/cron.d/tuptime /etc/cron.d/tuptime
# chmod 644 /etc/cron.d/tuptime
```
Add the system service file based on your init system. Use the command below to check whether your system is running systemd or init.
```
# ps -p 1
PID TTY TIME CMD
1 ? 00:00:03 systemd
# ps -p 1
PID TTY TIME CMD
1 ? 00:00:00 init
```
If it is a system with systemd, copy the service file and enable it.
```
# cp tuptime/src/systemd/tuptime.service /lib/systemd/system/
# chmod 644 /lib/systemd/system/tuptime.service
# systemctl enable tuptime.service
```
If you have an upstart system, copy the file:
```
# cp tuptime/src/init.d/redhat/tuptime /etc/init.d/tuptime
# chmod 755 /etc/init.d/tuptime
# chkconfig --add tuptime
# chkconfig tuptime on
```
If you have a SysV init system, copy the file:
```
# cp tuptime/src/init.d/debian/tuptime /etc/init.d/tuptime
# chmod 755 /etc/init.d/tuptime
# update-rc.d tuptime defaults
# /etc/init.d/tuptime start
```
### How To Use tuptime
Make sure you run the command as a privileged user. Initially you will get output similar to this.
```
# tuptime
System startups: 1 since 02:48:00 AM 04/12/2018
System shutdowns: 0 ok - 0 bad
System uptime: 100.0 % - 26 days, 5 hours, 31 minutes and 52 seconds
System downtime: 0.0 % - 0 seconds
System life: 26 days, 5 hours, 31 minutes and 52 seconds
Largest uptime: 26 days, 5 hours, 31 minutes and 52 seconds from 02:48:00 AM 04/12/2018
Shortest uptime: 26 days, 5 hours, 31 minutes and 52 seconds from 02:48:00 AM 04/12/2018
Average uptime: 26 days, 5 hours, 31 minutes and 52 seconds
Largest downtime: 0 seconds
Shortest downtime: 0 seconds
Average downtime: 0 seconds
Current uptime: 26 days, 5 hours, 31 minutes and 52 seconds since 02:48:00 AM 04/12/2018
```
### Details:
* **`System startups:`** Total number of system startups from the since date to the until date. The until date is included if the report is narrowed to a range.
* **`System shutdowns:`** Total number of shutdowns, done correctly or incorrectly. The separator points to the state of the last shutdown (`->` means bad).
* **`System uptime:`** Percentage of uptime and time counter.
* **`System downtime:`** Percentage of downtime and time counter.
* **`System life:`** Time counter since first startup date until last.
* **`Largest/Shortest uptime:`** Time counter and date with the largest/shortest uptime register.
* **`Largest/Shortest downtime:`** Time counter and date with the largest/shortest downtime register.
* **`Average uptime/downtime:`** Time counter with the average time.
* **`Current uptime:`** Actual time counter and date since registered boot date.
If you do the same a few days and a couple of reboots later, the output will look more like this.
```
# tuptime
System startups: 3 since 02:48:00 AM 04/12/2018
System shutdowns: 0 ok -> 2 bad
System uptime: 97.0 % - 28 days, 4 hours, 6 minutes and 0 seconds
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
System life: 29 days, 1 hour, 0 minutes and 23 seconds
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
Shortest uptime: 1 hour, 31 minutes and 12 seconds from 02:17:11 AM 05/11/2018
Average uptime: 9 days, 9 hours, 22 minutes and 0 seconds
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
Average downtime: 10 hours, 27 minutes and 11 seconds
Current uptime: 1 hour, 31 minutes and 12 seconds since 02:17:11 AM 05/11/2018
```
Enumerate each startup as a table row: startup number, startup date, uptime, shutdown date, end status and downtime. Multiple order options can be combined.
```
# tuptime -t
No. Startup Date Uptime Shutdown Date End Downtime
1 02:48:00 AM 04/12/2018 26 days, 5 hours, 32 minutes and 57 seconds 08:20:57 AM 05/08/2018 BAD 20 hours, 51 minutes and 58 seconds
2 05:12:55 AM 05/09/2018 1 day, 21 hours, 1 minute and 52 seconds 02:14:47 AM 05/11/2018 BAD 2 minutes and 24 seconds
3 02:17:11 AM 05/11/2018 1 hour, 34 minutes and 33 seconds
```
Enumerate each startup as a list entry: startup number, startup date, uptime, shutdown date, end status and downtime. Multiple order options can be combined.
```
# tuptime -l
Startup: 1 at 02:48:00 AM 04/12/2018
Uptime: 26 days, 5 hours, 32 minutes and 57 seconds
Shutdown: BAD at 08:20:57 AM 05/08/2018
Downtime: 20 hours, 51 minutes and 58 seconds
Startup: 2 at 05:12:55 AM 05/09/2018
Uptime: 1 day, 21 hours, 1 minute and 52 seconds
Shutdown: BAD at 02:14:47 AM 05/11/2018
Downtime: 2 minutes and 24 seconds
Startup: 3 at 02:17:11 AM 05/11/2018
Uptime: 1 hour, 34 minutes and 36 seconds
```
To print kernel information along with the tuptime output:
```
# tuptime -k
System startups: 3 since 02:48:00 AM 04/12/2018
System shutdowns: 0 ok -> 2 bad
System uptime: 97.0 % - 28 days, 4 hours, 11 minutes and 25 seconds
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
System life: 29 days, 1 hour, 5 minutes and 47 seconds
System kernels: 1
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
Shortest uptime: 1 hour, 36 minutes and 36 seconds from 02:17:11 AM 05/11/2018
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
Average uptime: 9 days, 9 hours, 23 minutes and 48 seconds
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
Average downtime: 10 hours, 27 minutes and 11 seconds
Current uptime: 1 hour, 36 minutes and 36 seconds since 02:17:11 AM 05/11/2018
...with kernel: Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
```
Change the date format. By default it is printed based on the system locale.
```
# tuptime -d '%d/%m/%y'
System startups: 3 since 12/04/18
System shutdowns: 0 ok -> 2 bad
System uptime: 97.0 % - 28 days, 4 hours, 15 minutes and 18 seconds
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
System life: 29 days, 1 hour, 9 minutes and 41 seconds
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 12/04/18
Shortest uptime: 1 hour, 40 minutes and 30 seconds from 11/05/18
Average uptime: 9 days, 9 hours, 25 minutes and 6 seconds
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08/05/18
Shortest downtime: 2 minutes and 24 seconds from 11/05/18
Average downtime: 10 hours, 27 minutes and 11 seconds
Current uptime: 1 hour, 40 minutes and 30 seconds since 11/05/18
```
Print information about the internals of tuptime. It's good for debugging how it gets its variables.
```
# tuptime -v
INFO:Arguments: {'endst': 0, 'seconds': None, 'table': False, 'csv': False, 'ts': None, 'silent': False, 'order': False, 'since': 0, 'kernel': False, 'reverse': False, 'until': 0, 'db_file': '/var/lib/tuptime/tuptime.db', 'lst': False, 'tu': None, 'date_format': '%X %x', 'update': True}
INFO:Linux system
INFO:uptime = 5773.54
INFO:btime = 1526019431
INFO:kernel = Linux-2.6.32-696.23.1.el6.x86_64-x86_64-with-centos-6.9-Final
INFO:Execution user = 0
INFO:Directory exists = /var/lib/tuptime
INFO:DB file exists = /var/lib/tuptime/tuptime.db
INFO:Last btime from db = 1526019431
INFO:Last uptime from db = 5676.04
INFO:Drift over btime = 0
INFO:System wasn't restarted. Updating db values...
System startups: 3 since 02:48:00 AM 04/12/2018
System shutdowns: 0 ok -> 2 bad
System uptime: 97.0 % - 28 days, 4 hours, 11 minutes and 2 seconds
System downtime: 3.0 % - 20 hours, 54 minutes and 22 seconds
System life: 29 days, 1 hour, 5 minutes and 25 seconds
Largest uptime: 26 days, 5 hours, 32 minutes and 57 seconds from 02:48:00 AM 04/12/2018
Shortest uptime: 1 hour, 36 minutes and 14 seconds from 02:17:11 AM 05/11/2018
Average uptime: 9 days, 9 hours, 23 minutes and 41 seconds
Largest downtime: 20 hours, 51 minutes and 58 seconds from 08:20:57 AM 05/08/2018
Shortest downtime: 2 minutes and 24 seconds from 02:14:47 AM 05/11/2018
Average downtime: 10 hours, 27 minutes and 11 seconds
Current uptime: 1 hour, 36 minutes and 14 seconds since 02:17:11 AM 05/11/2018
```
Print a quick reference of the command line parameters.
```
# tuptime -h
Usage: tuptime [options]
Options:
-h, --help show this help message and exit
-c, --csv csv output
-d DATE_FORMAT, --date=DATE_FORMAT
date format output
-f FILE, --filedb=FILE
database file
-g, --graceful register a gracefully shutdown
-k, --kernel print kernel information
-l, --list enumerate system life as list
-n, --noup avoid update values
-o TYPE, --order=TYPE
order enumerate by []
-r, --reverse reverse order
-s, --seconds output time in seconds and epoch
-S SINCE, --since=SINCE
restric since this register number
-t, --table enumerate system life as table
--tsince=TIMESTAMP restrict since this epoch timestamp
--tuntil=TIMESTAMP restrict until this epoch timestamp
-U UNTIL, --until=UNTIL
restrict until this register number
-v, --verbose verbose output
-V, --version show version
-x, --silent update values into db without output
```
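The feature list earlier mentions CSV reports; that is the `-c`/`--csv` flag from the help above. Assuming it combines with the enumeration flags the way recent tuptime releases allow (worth verifying against your installed version), exports look like this:
```
# tuptime --csv
# tuptime -t --csv
```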
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
[2]:https://github.com/rfrail3/tuptime/
[3]:https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/


@ -1,55 +0,0 @@
translating---geekpi
Cross-Site Request Forgery
======
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/understanding-csrf-cross-site-forgery_orig.jpg)
Security is a major concern when designing web apps. And I am not talking about DDOS protection, using a strong password or two-step verification. I am talking about one of the biggest threats to a web app, known as **CSRF**, short for **Cross-Site Request Forgery**.
### What is CSRF?
[![csrf what is cross site forgery](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg)][1]
First things first, **CSRF** is short for Cross-Site Request Forgery. It is commonly pronounced as sea-surf and often referred to as XSRF. CSRF is a type of attack where various actions are performed on a web app where the victim is logged in, without the victim's knowledge. These actions could be anything, ranging from simply liking or commenting on a social media post, to sending abusive messages to people, or even transferring money from the victim's bank account.
### How does CSRF work?
**CSRF** attacks try to exploit a simple, common behavior of all browsers. Every time we authenticate or log in to a website, session cookies are stored in the browser. So whenever we make a request to the website, these cookies are automatically sent to the server, where the server identifies us by matching the cookie we sent with the server's records. That way it knows it's us.
[![cookies set by website chrome](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg)][2]
This means that any request made by me, knowingly or unknowingly, will be fulfilled. Since the cookies are being sent and they will match the records on the server, the server thinks I am making that request.
CSRF attacks usually come in the form of links. We may click them on other websites or receive them via email. On clicking these links, an unwanted request is made to the server. And as I previously said, the server thinks we made the request and authenticates it.
#### A Real World Example
To put things into perspective, imagine you are logged into your bank's website. And you fill up a form on the page at **yourbank.com/transfer**. You fill in the account number of the receiver as 1234 and an amount of 5,000, and you click on the submit button. Now, a request will be made to **yourbank.com/transfer/send?to=1234&amount=5000**. So the server will act upon the request and make the transfer. Now just imagine you are on another website and you click on a link that opens up the above URL with the hacker's account number. That money is now transferred to the hacker, and the server thinks you made the transaction, even though you didn't.
[![csrf hacking bank account](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg)][3]
#### Protection against CSRF
CSRF protection is very easy to implement. It usually involves sending a token called the CSRF token to the webpage. This token is submitted and verified on the server with every new request made. So malicious requests received by the server will pass cookie authentication but fail CSRF validation. Most web frameworks provide out-of-the-box support for preventing CSRF attacks, and CSRF attacks are not as commonly seen today as they were some time back.
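Frameworks handle this for you, but the underlying idea fits in a few lines. Here is a minimal sketch in Python; the `session` dict and both function names are hypothetical stand-ins for whatever session store your framework provides:
```
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate an unguessable token, keep a copy server-side, and
    # embed the returned value in a hidden form field on the page.
    token = secrets.token_urlsafe(32)
    session['csrf_token'] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    # Reject the request unless the submitted token matches the copy
    # in the session; compare_digest avoids timing side channels.
    expected = session.get('csrf_token', '')
    return bool(expected) and hmac.compare_digest(expected, submitted)
```
On every state-changing request, the server runs the verify step before doing anything else; an attacker's forged request carries the victim's cookies, but it cannot know the token.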
### Conclusion
CSRF attacks were a big thing 10 years back, but today we don't see too many of them. In the past, famous sites such as YouTube, The New York Times and Netflix have been vulnerable to CSRF. However, the popularity and occurrence of CSRF attacks have decreased lately. Nevertheless, CSRF attacks are still a threat and it is important that you protect your website or app from them.
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/understanding-csrf-cross-site-request-forgery
作者:[linuxandubuntu][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg


@ -1,3 +1,5 @@
translating----geekpi
UNIX curiosities
======
Recently I've been doing more UNIXy things in various tools I'm writing, and I hit two interesting issues. Neither of these are "bugs", but behaviors that I wasn't expecting.


@ -0,0 +1,111 @@
5 reasons the i3 window manager makes Linux better
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows.png?itok=jd5sBNQH)
One of the nicest things about Linux (and open source software in general) is the freedom to choose among different alternatives to address our needs.
I've been using Linux for a long time, but I was never entirely happy with the desktop environment options available. Until last year, [Xfce][1] was the closest to what I consider a good compromise between features and performance. Then I found [i3][2], an amazing piece of software that changed my life.
I3 is a tiling window manager. The goal of a window manager is to control the appearance and placement of windows in a windowing system. Window managers are often used as part of a full-featured desktop environment (such as GNOME or Xfce), but some can also be used as standalone applications.
A tiling window manager automatically arranges the windows to occupy the whole screen in a non-overlapping way. Other popular tiling window managers include [wmii][3] and [xmonad][4].
![i3 tiled window manager screenshot][6]
Screenshot of i3 with three tiled windows
Following are the top five reasons I use the i3 window manager and recommend it for a better Linux desktop experience.
### 1\. Minimalism
I3 is fast. It is neither bloated nor fancy. It is designed to be simple and efficient. As a developer, I value these features, as I can use the extra capacity to power my favorite development tools or test stuff locally using containers or virtual machines.
In addition, i3 is a window manager and, unlike full-featured desktop environments, it does not dictate the applications you should use. Do you want to use Thunar from Xfce as your file manager? GNOME's gedit to edit text? I3 does not care. Pick the tools that make the most sense for your workflow, and i3 will manage them all in the same way.
### 2\. Screen real estate
As a tiling window manager, i3 will automatically "tile" or position the windows in a non-overlapping way, similar to laying tiles on a wall. Since you don't need to worry about window positioning, i3 generally makes better use of your screen real estate. It also allows you to get to what you need faster.
There are many useful cases for this. For example, system administrators can open several terminals to monitor or work on different remote systems simultaneously; and developers can use their favorite IDE or editor and a few terminals to test their programs.
In addition, i3 is flexible. If you need more space for a particular window, enable full-screen mode or switch to a different layout, such as stacked or tabbed.
### 3\. Keyboard-driven workflow
I3 makes extensive use of keyboard shortcuts to control different aspects of your environment. These include opening the terminal and other programs, resizing and positioning windows, changing layouts, and even exiting i3. When you start using i3, you need to memorize a few of those shortcuts to get around and, with time, you'll use more of them.
The main benefit is that you don't often need to switch contexts from the keyboard to the mouse. With practice, it means you'll improve the speed and efficiency of your workflow.
For example, to open a new terminal, press `<SUPER>+<ENTER>`. Since the windows are automatically positioned, you can start typing your commands right away. Combine that with a nice terminal-driven text editor (e.g., Vim) and a keyboard-focused browser for a fully keyboard-driven workflow.
In i3, you can define shortcuts for everything. Here are some examples:
* Open terminal
* Open browser
* Change layouts
* Resize windows
* Control music player
* Switch workspaces
Now that I am used to this workflow, I can't see myself going back to a regular desktop environment.
### 4\. Flexibility
I3 strives to be minimal and use few system resources, but that does not mean it can't be pretty. I3 is flexible and can be customized in several ways to improve the visual experience. Because i3 is a window manager, it doesn't provide tools to enable customizations; you need external tools for that. Some examples:
* Use `feh` to define a background picture for your desktop.
* Use a compositor manager such as `compton` to enable effects like window fading and transparency.
* Use `dmenu` or `rofi` to enable customizable menus that can be launched from a keyboard shortcut.
* Use `dunst` for desktop notifications.
I3 is fully configurable, and you can control every aspect of it by updating the default configuration file. From changing all keyboard shortcuts, to redefining the name of the workspaces, to modifying the status bar, you can make i3 behave in any way that makes the most sense for your needs.
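To give a flavor of the format, here are a few bindings similar to those in i3's default config file (typically `~/.config/i3/config`; the exact defaults can vary between versions):
```
set $mod Mod4

# start a terminal
bindsym $mod+Return exec i3-sensible-terminal

# start dmenu (a program launcher)
bindsym $mod+d exec dmenu_run

# toggle fullscreen for the focused window
bindsym $mod+f fullscreen toggle

# switch to workspace 1
bindsym $mod+1 workspace 1
```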
![i3 with rofi menu and dunst desktop notifications][8]
i3 with `rofi` menu and `dunst` desktop notifications
Finally, for more advanced users, i3 provides a full interprocess communication ([IPC][9]) interface that allows you to use your favorite language to develop scripts or programs for even more customization options.
### 5\. Workspaces
In i3, a workspace is an easy way to group windows. You can group them in different ways according to your workflow. For example, you can put the browser on one workspace, the terminal on another, an email client on a third, etc. You can even change i3's configuration to always assign specific applications to their own workspaces.
Switching workspaces is quick and easy. As usual in i3, do it with a keyboard shortcut. Press `<SUPER>+num` to switch to workspace `num`. If you get into the habit of always assigning applications/groups of windows to the same workspace, you can quickly switch between them, which makes workspaces a very useful feature.
In addition, you can use workspaces to control multi-monitor setups, where each monitor gets an initial workspace. If you switch to that workspace, you switch to that monitor—without moving your hand off the keyboard.
Finally, there is another, special type of workspace in i3: the scratchpad. It is an invisible workspace that shows up in the middle of the other workspaces by pressing a shortcut. This is a convenient way to access windows or programs that you frequently use, such as an email client or your music player.
### Give it a try
If you value simplicity and efficiency and are not afraid of working with the keyboard, i3 is the window manager for you. Some say it is for advanced users, but that is not necessarily the case. You need to learn a few basic shortcuts to get around at the beginning, but they'll soon feel natural and you'll start using them without thinking.
This article just scratches the surface of what i3 can do. For more details, consult [i3's documentation][10].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/i3-tiling-window-manager
作者:[Ricardo Gerardi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rgerardi
[1]:https://xfce.org/
[2]:https://i3wm.org/
[3]:https://code.google.com/archive/p/wmii/
[4]:https://xmonad.org/
[5]:/file/406476
[6]:https://opensource.com/sites/default/files/uploads/i3_screenshot.png (i3 tiled window manager screenshot)
[7]:/file/405161
[8]:https://opensource.com/sites/default/files/uploads/rofi_dunst.png (i3 with rofi menu and dunst desktop notifications)
[9]:https://i3wm.org/docs/ipc.html
[10]:https://i3wm.org/docs/userguide.html


@ -0,0 +1,90 @@
5 applications to manage your to-do list on Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/todoapps-816x345.jpg)
Effective management of your to-do list can do wonders for your productivity. Some prefer just keeping a to-do list in a text file, or even using a notepad and pen. Users who want more out of their to-do list often turn to an application. In this article we highlight four graphical applications and a terminal-based tool for managing your to-do list.
### GNOME To Do
[GNOME To Do][1] is a personal task manager designed specifically for the GNOME desktop (Fedora Workstation's default desktop). Compared with some of the others in this list, it has a range of neat features.
GNOME To Do provides organization of tasks by lists, and the ability to assign a colour to each list. Additionally, individual tasks can be assigned due dates & priorities, as well as notes for each task. Furthermore, GNOME To Do has extensions, allowing even more features, including support for [todo.txt][2] and syncing with online services such as [todoist][3].
![][4]
Install GNOME To Do either by using the Software application, or using the following command in the Terminal:
```
sudo dnf install gnome-todo
```
### Getting things GNOME!
Before GNOME To Do existed, the go-to application for tracking tasks on GNOME was [Getting things GNOME!][5] This older-style GNOME application has a multiple-window layout, allowing you to show the details of multiple tasks at the same time. Rather than having lists of tasks, GTG has the ability to add sub-tasks to tasks and even to sub-tasks. GTG also has the ability to add due dates and start dates. Syncing to other apps and services is also possible in GTG via plugins.
![][6]
Install Getting Things GNOME either by using the Software application, or using the following command in the Terminal:
```
sudo dnf install gtg
```
### Go For It!
[Go For It!][7] is a super-simple task management application. It is used to simply create a list of tasks, and mark them as done when completed. It does not have the ability to group tasks, or create sub-tasks. By default, Go For It! stores tasks in the todo.txt format, allowing simpler syncing to online services and other applications. Additionally, Go For It! contains a simple timer to track how much time you have spent on the current task.
![][8]
Go For It is available to download from the Flathub application repository. To install, simply [enable Flathub as a software source][9], and then install via the Software application.
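As a point of reference, todo.txt is a plain-text format with one task per line: a priority goes in parentheses at the start, completed tasks begin with `x` and a completion date, and `+project`/`@context` tags are free-form. A small sample file (the tasks themselves are invented) might look like:
```
(A) Call the bank @phone +errands
Buy groceries @store
x 2018-08-09 Submit expense report +work
```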
### Agenda
If you are looking for a no-fuss, super-simple to-do application, look no further than [Agenda][10]. Create tasks, mark them as complete, and then delete them from your list. Agenda shows all tasks (completed or open) until you remove them.
![][11]
Agenda is available to download from the Flathub application repository. To install, simply [enable Flathub as a software source][9], and then install via the Software application.
### Taskwarrior
[Taskwarrior][12] is a flexible command-line task management program. It is highly customizable, but can also be used “right out of the box.” Using simple commands, you can create tasks, mark them as complete, and list current open tasks. Additionally, tasks can be tagged, added to projects, searched and filtered. Furthermore, you can set up recurring tasks, and apply due dates to tasks.
[This previous article on the Fedora Magazine][13] provides a good overview of getting started with Taskwarrior.
![][14]
Install Taskwarrior with this command in the Terminal:
```
sudo dnf install task
```
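Once installed, a minimal session looks something like this (the task description and due date are invented examples):
```
$ task add Write the monthly report due:friday
$ task list
$ task 1 done
```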
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/5-tools-to-manage-your-to-do-list-on-fedora/
作者:[Ryan Lerch][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[1]:https://wiki.gnome.org/Apps/Todo/
[2]:http://todotxt.org/
[3]:https://en.todoist.com/
[4]:https://fedoramagazine.org/wp-content/uploads/2018/08/gnome-todo.png
[5]:https://wiki.gnome.org/Apps/GTG
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/gtg.png
[7]:http://manuel-kehl.de/projects/go-for-it/
[8]:https://fedoramagazine.org/wp-content/uploads/2018/08/goforit.png
[9]:https://fedoramagazine.org/install-flathub-apps-fedora/
[10]:https://github.com/dahenson/agenda
[11]:https://fedoramagazine.org/wp-content/uploads/2018/08/agenda.png
[12]:https://taskwarrior.org/
[13]:https://fedoramagazine.org/getting-started-taskwarrior/
[14]:https://fedoramagazine.org/wp-content/uploads/2018/08/taskwarrior.png


@ -0,0 +1,104 @@
5 open source role-playing games for Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN)
Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games are often not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist.
So, can someone who only uses free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely ever to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website in order to install and play it.
This article looks at role-playing games. I have already written about [arcade-style games][1], [board & card games][2], [puzzle games][3], and [racing & flying games][4]. In the final article in this series, I plan to cover strategy and simulation games.
### Endless Sky
![](https://opensource.com/sites/default/files/uploads/endless_sky.png)
[Endless Sky][5] is an open source clone of the [Escape Velocity][6] series from Ambrosia Software. Players captain a spaceship and travel between worlds delivering trade goods or passengers, taking on other missions along the way, or they can turn to piracy and steal from cargo ships. The game lets the player decide how they want to experience the game, and the extremely large map of solar systems is theirs to explore as they see fit. Endless Sky is one of those games that defies normal genre classifications, but this action, role-playing, space simulation, trading game is well worth checking out.
To install Endless Sky, run the following command:
On Fedora: `dnf install endless-sky`
On Debian/Ubuntu: `apt install endless-sky`
### FreeDink
![](https://opensource.com/sites/default/files/uploads/freedink.png)
[FreeDink][7] is the open source version of [Dink Smallwood][8], an action role-playing game released by RTSoft in 1997. Dink Smallwood became freeware in 1999, and the source code was released in 2003. In 2008 the game's data files, minus a few sound files, were also released under an open license. FreeDink replaces those sound files with alternatives to provide a complete game. Gameplay is similar to Nintendo's [The Legend of Zelda][9] series. The player's character, the eponymous Dink Smallwood, explores an over-world map filled with hidden items and caves as he moves from one quest to another. Due to its age, FreeDink is not going to stand up to modern commercial games, but it is still a fun game with an amusing story. The game can be expanded by using [D-Mods][10], which are add-on modules that provide additional quests, but the D-Mods do vary greatly in complexity, quality, and age-appropriateness; the main game is suitable for teenagers, but some of the add-ons are for adult audiences.
To install FreeDink, run the following command:
On Fedora: `dnf install freedink`
On Debian/Ubuntu: `apt install freedink`
### ManaPlus
![](https://opensource.com/sites/default/files/uploads/manaplus.png)
Technically not a game in itself, [ManaPlus][11] is a client for accessing various massively multiplayer online role-playing games. [The Mana World][12] and [Evol Online][13] are two of the open source games available, but other servers are out there. The games feature 2D sprite graphics reminiscent of Super Nintendo games. While none of the games supported by ManaPlus are as popular as some of the commercial alternatives, they do have interesting worlds and at least a few players are online most of the time. Players are unlikely to run into massive groups of other players, but there are usually enough people around to make the games [MMORPG][14]s, not single-player games that require a connection to a server. The Mana World and Evol Online developers have joined together for future development, but for now, The Mana World's legacy server and Evol Online offer different experiences.
To install ManaPlus, run the following command:
On Fedora: `dnf install manaplus`
On Debian/Ubuntu: `apt install manaplus`
### Minetest
![](https://opensource.com/sites/default/files/uploads/minetest.png)
Explore and build in an open-ended world with [Minetest][15], a clone of Minecraft. Just like the game it is based on, Minetest provides an open-ended world where players can explore and build whatever they wish. Minetest provides a wide variety of block types and tools, making it a good alternative to Minecraft for anyone wanting a more open alternative. Beyond what comes with the basic game, Minetest can be extended with [add-on modules][16], which add even more options.
To install Minetest, run the following command:
On Fedora: `dnf install minetest`
On Debian/Ubuntu: `apt install minetest`
### NetHack
![](https://opensource.com/sites/default/files/uploads/nethack.png)
[NetHack][17] is a classic [Roguelike][18] role-playing game. Players explore a multi-level dungeon as one of several different character races, classes, and alignments. The object of the game is to retrieve the Amulet of Yendor. Players begin on the first level of the dungeon and try to work their way towards the bottom, with each level being randomly generated, which makes for a unique game experience each time. While this game features either ASCII graphics or basic tile graphics, the depth of game-play more than makes up for the primitive graphics. Players who want less primitive graphics might want to check out [Vulture for NetHack][19], which offers better graphics along with sound effects and background music.
To install NetHack, run the following command:
On Fedora: `dnf install nethack`
On Debian/Ubuntu: `apt install nethack-x11` or `apt install nethack-console`
Did I miss one of your favorite open source role-playing games? Share it in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/role-playing-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://opensource.com/article/18/7/racing-flying-games-linux
[5]:https://endless-sky.github.io/
[6]:https://en.wikipedia.org/wiki/Escape_Velocity_(video_game)
[7]:http://www.gnu.org/software/freedink/
[8]:http://www.rtsoft.com/pages/dink.php
[9]:https://en.wikipedia.org/wiki/The_Legend_of_Zelda
[10]:http://www.dinknetwork.com/files/category_dmod/
[11]:http://manaplus.org/
[12]:http://www.themanaworld.org/
[13]:http://evolonline.org/
[14]:https://en.wikipedia.org/wiki/Massively_multiplayer_online_role-playing_game
[15]:https://www.minetest.net/
[16]:https://wiki.minetest.net/Mods
[17]:https://www.nethack.org/
[18]:https://en.wikipedia.org/wiki/Roguelike
[19]:http://www.darkarts.co.za/vulture-for-nethack


@ -0,0 +1,334 @@
Getting started with Postfix, an open source mail transfer agent
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send.jpg?itok=bbJOPIWl)
[Postfix][1] is a great program that routes and delivers email to accounts that are external to the system. It is currently used by approximately [33% of internet mail servers][2]. In this article, I'll explain how you can use Postfix to send mail using Gmail with two-factor authentication enabled.
Before you get Postfix up and running, however, you need to have some items lined up. Following are instructions on how to get it working on a number of distros.
### Prerequisites
* An installed OS (Ubuntu/Debian/Fedora/Centos/Arch/FreeBSD/OpenSUSE)
* A Google account with two-factor authentication
* A working internet connection
### Step 1: Prepare Google
Open a web browser and log into your Google account. Once you're in, go to your settings by clicking your picture and selecting "Google Account." Click "Sign-in & security" and scroll down to "App passwords." Use your password to log in. Then you can create a new app password (I named mine "postfix Setup").
![](https://opensource.com/sites/default/files/uploads/google_setup_1_app_passwords.png)
Note the crazy password (shown below), which I will use throughout this article.
![](https://opensource.com/sites/default/files/uploads/google_setup_2_generated_password.png)
### Step 2: Install Postfix
Before you can configure the mail client, you need to install it. You must also install either the `mailutils` or `mailx` utility, depending on the OS you're using. Here's how to install it for each OS:
**Debian/Ubuntu** :
```
apt-get update && apt-get install postfix mailutils
```
**Fedora** :
```
dnf update && dnf install postfix mailx
```
**Centos** :
```
yum update && yum install postfix mailx cyrus-sasl cyrus-sasl-plain
```
**Arch** :
```
pacman -Sy postfix mailutils
```
**FreeBSD** :
```
portsnap fetch extract update
cd /usr/ports/mail/postfix
make config
```
In the configuration dialog, select "SASL support." All other options can remain the same.
From there: `make install clean`
Install `mailx` from the binary package: `pkg install mailx`
**OpenSUSE** :
```
zypper update && zypper install postfix mailx cyrus-sasl
```
### Step 3: Set up Gmail authentication
Once you've installed Postfix, you can set up Gmail authentication. Since you have created the app password, you need to put it in a configuration file and lock it down so no one else can see it. Fortunately, this is simple to do:
**Ubuntu/Debian/Fedora/Centos/Arch/OpenSUSE** :
```
vim /etc/postfix/sasl_passwd
```
Add this line:
```
[smtp.gmail.com]:587   ben.heffron@gmail.com:thgcaypbpslnvgce
```
Save and close the file. Since your Gmail password is stored as plaintext, make the file accessible only by root to be extra safe.
```
chmod 600 /etc/postfix/sasl_passwd
```
**FreeBSD** :
```
vim /usr/local/etc/postfix/sasl_passwd
```
Add this line:
```
[smtp.gmail.com]:587    ben.heffron@gmail.com:thgcaypbpslnvgce
```
Save and close the file. Since your Gmail password is stored as plaintext, make the file accessible only by root to be extra safe.
```
chmod 600 /usr/local/etc/postfix/sasl_passwd
```
![](https://opensource.com/sites/default/files/uploads/google_setup_3_vim_config.png)
### Step 4: Get Postfix moving
This step is the "meat and potatoes"—everything you've done so far has been preparation.
Postfix gets its configuration from the `main.cf` file, so the settings in this file are critical. For Google, it is mandatory to enable the correct SSL settings.
Here are the six options you need to enter or update on the `main.cf` to make it work with Gmail (from the [SASL readme][3]):
* The **smtp_sasl_auth_enable** setting enables client-side authentication. We will configure the client's username and password information in the second part of the example.
* The **relayhost** setting forces the Postfix SMTP to send all remote messages to the specified mail server instead of trying to deliver them directly to their destination.
* With the **smtp_sasl_password_maps** parameter, we configure the Postfix SMTP client to send username and password information to the mail gateway server.
* Postfix SMTP client SASL security options are set using **smtp_sasl_security_options**, which accepts a whole lot of values. In this case, we set it to empty; otherwise, Gmail won't play nicely with Postfix.
* The **smtp_tls_CAfile** is a file containing CA certificates of root CAs trusted to sign either remote SMTP server certificates or intermediate CA certificates.
* From the [configure settings page][4]: **smtp_use_tls** uses TLS when a remote SMTP server announces STARTTLS support; the default is to not use TLS.
**Ubuntu/Debian/Arch**
These three OSes keep their files (certificates and `main.cf`) in the same location, so this is all you need to put in there:
```
vim /etc/postfix/main.cf
```
If the following values aren't there, add them:
```
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
```
Save and close the file.
**Fedora/CentOS**
These two OSes are based on the same underpinnings, so they share the same updates.
```
vim /etc/postfix/main.cf
```
If the following values aren't there, add them:
```
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
```
Save and close the file.
**OpenSUSE**
```
vim /etc/postfix/main.cf
```
If the following values aren't there, add them:
```
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/ca-bundle.pem
```
Save and close the file.
OpenSUSE also requires that you modify the Postfix master process configuration file `master.cf`. Open it for editing:
```
vim /etc/postfix/master.cf
```
Uncomment the line that reads:
```
#tlsmgr unix - - n 1000? 1 tlsmgr
```
It should look like this:
```
tlsmgr unix - - n 1000? 1 tlsmgr
```
Save and close the file.
**FreeBSD**
```
vim /usr/local/etc/postfix/main.cf
```
If the following values aren't there, add them:
```
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/usr/local/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/mail/certs/cacert.pem
```
Save and close the file.
### Step 5: Set up the password file
Remember that password file you created? Now you need to feed it into Postfix using `postmap`, a utility that ships with Postfix itself. It compiles the plaintext file into a lookup table (`sasl_passwd.db`) that Postfix reads at run time, so re-run it after every edit to the password file.
**Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux**
```
postmap /etc/postfix/sasl_passwd
```
**FreeBSD**
```
postmap /usr/local/etc/postfix/sasl_passwd
```
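Either way, a quick sanity check is to confirm that the compiled lookup table now sits next to the source file (Linux path shown; adjust for the FreeBSD layout):
```
ls -l /etc/postfix/sasl_passwd.db
```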
### Step 6: Get Postfix grooving
To get all the settings and configurations working, you must restart Postfix.
**Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux**
These guys make it simple to restart:
```
systemctl restart postfix.service
```
**FreeBSD**
To start Postfix at startup, edit `/etc/rc.conf`:
```
vim /etc/rc.conf
```
Add the line:
```
postfix_enable=YES
```
Save and close the file. Then start Postfix by running:
```
service postfix start
```
### Step 7: Test it
Now for the big finale—time to test it to see if it works. The `mail` command is another tool installed with `mailutils` or `mailx`.
```
echo "Just testing my sendmail gmail relay" | mail -s "Sendmail gmail Relay" ben.heffron@gmail.com
```
This is what I used to test my settings, and then it came up in my Gmail.
![](https://opensource.com/sites/default/files/uploads/google_setup_4_gmail.png)
Now you can use Gmail with two-factor authentication in your Postfix setup.
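If the message never shows up, the mail log is the first place to look. This is a general troubleshooting tip; the exact path depends on your distribution:
```
tail -f /var/log/mail.log    # Debian/Ubuntu
tail -f /var/log/maillog     # Fedora/CentOS/FreeBSD
```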
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/postfix-open-source-mail-transfer-agent
作者:[Ben Heffron][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/elheffe
[1]:http://www.postfix.org/start.html
[2]:http://www.securityspace.com/s_survey/data/man.201806/mxsurvey.html
[3]:http://www.postfix.org/SASL_README.html
[4]:http://www.postfix.org/postconf.5.html#smtp_tls_security_level


@ -0,0 +1,128 @@
How To Switch Between Multiple PHP Versions In Ubuntu
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/php-720x340.png)
Sometimes, the most recent version of an installed package might not work as you expected. Your application may not be compatible with the updated package and may support only a specific older version. In such cases, you can simply downgrade the problematic package to its earlier working version in no time. Refer to our old guides on how to downgrade a package in Ubuntu and its variants [**here**][1] and how to downgrade a package in Arch Linux and its derivatives [**here**][2]. However, you don't need to downgrade some packages; you can use multiple versions at the same time. For instance, let us say you are testing a PHP application on a [**LAMP stack**][3] deployed on Ubuntu 18.04 LTS. After a while you find out that the application works fine in PHP 5.6, but not in PHP 7.2 (Ubuntu 18.04 LTS installs PHP 7.x by default). Are you going to reinstall PHP or the whole LAMP stack again? Not necessary. You don't even have to downgrade PHP to its earlier version. In this brief tutorial, I will show you how to switch between multiple PHP versions in Ubuntu 18.04 LTS. It's not as difficult as you may think. Read on.
### Switch Between Multiple PHP Versions
To check the default installed version of PHP, run:
```
$ php -v
PHP 7.2.7-0ubuntu0.18.04.2 (cli) (built: Jul 4 2018 16:55:24) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.7-0ubuntu0.18.04.2, Copyright (c) 1999-2018, by Zend Technologies
```
As you can see, the installed version of PHP is 7.2.7. After testing your application for a couple of days, you find out that it doesn't support PHP 7.2. In such cases, it is a good idea to have both the PHP 5.x and PHP 7.x versions installed, so that you can easily switch between them at any time.
You don't need to remove PHP 7.x or reinstall the LAMP stack; you can use both the PHP 5.x and 7.x versions together.
I assume you haven't uninstalled PHP 5.6 from your system yet. In case you have already removed it, you can install it again using a PPA as described below.
You can install PHP 5.6 from a PPA:
```
$ sudo add-apt-repository -y ppa:ondrej/php
$ sudo apt update
$ sudo apt install php5.6
```
#### Switch from PHP7.x to PHP5.x
First, disable the PHP 7.2 module with:
```
$ sudo a2dismod php7.2
Module php7.2 disabled.
To activate the new configuration, you need to run:
systemctl restart apache2
```
Next, enable the PHP 5.6 module:
```
$ sudo a2enmod php5.6
```
Set PHP 5.6 as the default version:
```
$ sudo update-alternatives --set php /usr/bin/php5.6
```
Alternatively, you can run the following command to interactively choose which system-wide version of PHP to use by default.
```
$ sudo update-alternatives --config php
```
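With both versions installed, the interactive prompt looks roughly like this (paths, priorities, and exact wording vary from system to system):
```
There are 2 choices for the alternative php (providing /usr/bin/php).

  Selection    Path             Priority   Status
------------------------------------------------------------
* 0            /usr/bin/php7.2   72        auto mode
  1            /usr/bin/php5.6   56        manual mode
  2            /usr/bin/php7.2   72        manual mode

Press <enter> to keep the current choice[*], or type selection number:
```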
Enter a selection number to make that version the default, or simply press ENTER to keep the current choice.
If you have installed other PHP tools, such as `phar`, set their defaults as well.
```
$ sudo update-alternatives --set phar /usr/bin/phar5.6
```
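On Ubuntu, `phar.phar` is usually registered as a separate alternative as well; if it exists on your system, switch it too:
```
$ sudo update-alternatives --set phar.phar /usr/bin/phar.phar5.6
```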
Finally, restart your Apache web server:
```
$ sudo systemctl restart apache2
```
Now, check whether PHP 5.6 is the default version:
```
$ php -v
PHP 5.6.37-1+ubuntu18.04.1+deb.sury.org+1 (cli)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies
```
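Keep in mind that `php -v` reports the CLI version only. To double-check which PHP module Apache itself has enabled, Debian-based systems ship the `a2query` helper (the exact output wording may differ between releases):
```
$ sudo a2query -m php5.6
php5.6 (enabled by site administrator)
```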
#### Switch from PHP5.x to PHP7.x
Likewise, you can switch from PHP 5.x back to PHP 7.x as shown below.
```
$ sudo a2enmod php7.2
$ sudo a2dismod php5.6
$ sudo update-alternatives --set php /usr/bin/php7.2
$ sudo systemctl restart apache2
```
**A word of caution:**
PHP 5.6 reached the [**end of active support**][4] on 19 Jan 2017. However, it will continue to receive fixes for critical security issues until 31 Dec 2018. So it is recommended to upgrade all your PHP applications for compatibility with PHP 7.x as soon as possible.
If you want to prevent PHP from being automatically upgraded in the future, refer to the following guide.
And that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-switch-between-multiple-php-versions-in-ubuntu/
Author: [SK][a]
Curated by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-downgrade-a-package-in-ubuntu/
[2]:https://www.ostechnix.com/downgrade-package-arch-linux/
[3]:https://www.ostechnix.com/install-apache-mariadb-php-lamp-stack-ubuntu-16-04/
[4]:http://php.net/supported-versions.php


@ -0,0 +1,176 @@
Perform robust unit tests with PyHamcrest
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
At the base of the [testing pyramid][1] are unit tests. Unit tests test one unit of code at a time—usually one function or method.
Often, a single unit test is designed to test one particular flow through a function, or a specific branch choice. This enables easy mapping of a unit test that fails and the bug that made it fail.
Ideally, unit tests use few or no external resources, isolating them and making them faster.
_Good_ tests increase developer productivity by catching bugs early and making testing faster. _Bad_ tests decrease developer productivity.
Unit test suites help maintain high-quality products by signaling problems early in the development process. An effective unit test catches bugs before the code has left the developer machine, or at least in a continuous integration environment on a dedicated branch.
Productivity usually decreases when testing _incidental features_. The test fails when the code changes, even if it is still correct. This happens because the output is different, but in a way that is not part of the function's contract.
A good unit test, therefore, is one that helps enforce the contract to which the function is committed.
If a unit test breaks, the contract is violated and should be either explicitly amended (by changing the documentation and tests), or fixed (by fixing the code and leaving the tests as is).
While limiting tests to enforce only the public contract is a complicated skill to learn, there are tools that can help.
One of these tools is [Hamcrest][2], a framework for writing assertions. Originally invented for Java-based unit tests, today the Hamcrest framework supports several languages, including [Python][3].
Hamcrest is designed to make test assertions easier to write and more precise.
```
from hamcrest import assert_that, equal_to

def add(a, b):
    return a + b

def test_add():
    assert_that(add(2, 2), equal_to(4))
```
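Assuming the snippets in this article live in a test file, say `test_math.py` (a hypothetical name), any standard test runner will pick them up:
```
$ pip install PyHamcrest pytest
$ pytest test_math.py
```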
This is a simple assertion, for simple functionality. What if we wanted to assert something more complicated?
```
from hamcrest import assert_that, contains_inanyorder, has_item, is_not

def test_set_removal():
    my_set = {1, 2, 3, 4}
    my_set.remove(3)
    # contains_inanyorder takes the expected items as separate arguments
    assert_that(my_set, contains_inanyorder(1, 2, 4))
    assert_that(my_set, is_not(has_item(3)))
```
Note that we can succinctly assert that the result has `1`, `2`, and `4` in any order since sets do not guarantee order.
We can also easily negate assertions with `is_not`. This helps us write _precise assertions_, which allow us to limit ourselves to enforcing the public contracts of functions.
Sometimes, however, none of the built-in functionality is _precisely_ what we need. In those cases, Hamcrest allows us to write our own matchers.
Imagine the following function:
```
import random

def scale_one(a, b):
    scale = random.randint(0, 5)
    pick = random.choice([a, b])
    return scale * pick
```
We can confidently assert only that the result is evenly divisible by at least one of the inputs.
A matcher inherits from `hamcrest.core.base_matcher.BaseMatcher`, and overrides two methods:
```
import hamcrest.core.base_matcher

class DivisibleBy(hamcrest.core.base_matcher.BaseMatcher):

    def __init__(self, factor):
        self.factor = factor

    def _matches(self, item):
        return (item % self.factor) == 0

    def describe_to(self, description):
        # This text appears in the failure message, so keep it readable.
        description.append_text('number divisible by ')
        description.append_text(repr(self.factor))
```
Writing high-quality `describe_to` methods is important, since this is part of the message that will show up if the test fails.
```
def divisible_by(num):
    return DivisibleBy(num)
```
By convention, we wrap matchers in a function. Sometimes this gives us a chance to further process the inputs, but in this case, no further processing is needed.
```
from hamcrest import any_of, assert_that

def test_scale():
    result = scale_one(3, 7)
    assert_that(result,
                any_of(divisible_by(3),
                       divisible_by(7)))
```
Note that we combined our `divisible_by` matcher with the built-in `any_of` matcher to ensure that we test only what the contract commits to.
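For intuition, if `scale_one` ever returned a value such as 8, which neither input divides, the failure message would combine the matcher descriptions, roughly like this (exact formatting may vary):
```
Expected: (number divisible by 3 or number divisible by 7)
     but: was <8>
```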
While editing this article, I heard a rumor that the name "Hamcrest" was chosen as an anagram for "matches". Hrm...
```
>>> assert_that("matches", contains_inanyorder(*"hamcrest"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 43, in assert_that
    _assert_match(actual=arg1, matcher=arg2, reason=arg3)
  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 57, in _assert_match
    raise AssertionError(description)
AssertionError:
Expected: a sequence over ['h', 'a', 'm', 'c', 'r', 'e', 's', 't'] in any order
      but: no item matches: 'r' in ['m', 'a', 't', 'c', 'h', 'e', 's']
```
Researching more, I found the source of the rumor: It is an anagram for "matchers".
```
>>> assert_that("matchers", contains_inanyorder(*"hamcrest"))
>>>
```
If you are not yet writing unit tests for your Python code, now is a good time to start. If you are, using Hamcrest will allow you to make your assertions _precise_, neither more nor less than what you intend to test. This will lead to fewer false positives when modifying code and less time spent modifying tests for working code.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/robust-unit-tests-hamcrest
Author: [Moshe Zadka][a]
Curated by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/moshez
[1]:https://martinfowler.com/bliki/TestPyramid.html
[2]:http://hamcrest.org/
[3]:https://www.python.org/


@ -0,0 +1,75 @@
Automatically Switch To Light / Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely
======
If you're looking for an easy way of automatically changing the Gtk theme based on sunrise and sunset times, give [AutomaThemely][3] a try.
![](https://4.bp.blogspot.com/-LS0XNNflbp0/W2q8zAwhUdI/AAAAAAAABUY/l8fVbjt-tHExYxPHsyVv74iUhV4O9UXLwCLcBGAs/s640/automathemely-settings.png)
**AutomaThemely is a Python application that automatically changes Gnome themes according to light and dark hours, useful if you want to use a dark Gtk theme at night and a light Gtk theme during the day.**
**While the application is made for the Gnome desktop, it also works with Unity**. AutomaThemely does not support changing the Gtk theme for desktop environments that don't use the `org.gnome.desktop.interface` GSettings schema, like Cinnamon, or changing the icon theme, at least not yet. It also doesn't support setting the Gnome Shell theme.
Besides automatically changing the Gtk3 theme, **AutomaThemely can also automatically switch between dark and light themes for the Atom editor and VSCode, as well as between light and dark syntax highlighting for Atom.** This is obviously also done based on the time of day.
[![AutomaThemely Atom VSCode][1]][2]
AutomaThemely Atom and VSCode theme / syntax settings
The application uses your IP address to determine your location in order to retrieve the sunrise and sunset times, and requires a working Internet connection for this. However, you can disable automatic location from the application user interface, and enter your location manually.
From the AutomaThemely user interface you can also enter a time offset (in minutes) for the sunrise and sunset times, and enable or disable notifications on theme changes.
### Downloading / installing AutomaThemely
**Ubuntu 18.04**: using the link above, download the Python 3.6 DEB, which includes the dependencies (python3.6-automathemely_1.2_all.deb).
**Ubuntu 16.04**: you'll need to download and install the AutomaThemely Python 3.5 DEB which DOES NOT include dependencies (python3.5-no_deps-automathemely_1.2_all.deb), and install the dependencies (`requests`, `astral`, `pytz`, `tzlocal`, and `schedule`) separately, using PIP3:
```
sudo apt install python3-pip
python3 -m pip install --user requests astral pytz tzlocal schedule
```
The AutomaThemely download page also includes RPM packages for Python 3.5 or 3.6, with and without dependencies. Install the package appropriate for your Python version. If you download the package that includes dependencies and they are not available on your system, grab the "no_deps" package and install the Python3 dependencies using PIP3, as explained above.
### Using AutomaThemely to change to light / dark Gtk themes based on Sun times
Once installed, run AutomaThemely once to generate the configuration file. Either click on the AutomaThemely menu entry or run this in a terminal:
```
automathemely
```
This doesn't launch any GUI; it only generates the configuration file.
Using AutomaThemely is a bit counterintuitive. You'll get an AutomaThemely icon in your menu, but clicking it does not open any window or GUI. If you use Gnome or some other Gnome-based desktop that supports jumplists / quicklists, you can right-click the AutomaThemely icon in the menu (or pin it to the Dash / dock and right-click it there) and select Manage Settings to launch the GUI:
![](https://2.bp.blogspot.com/-7YWj07q0-M0/W2rACrCyO_I/AAAAAAAABUs/iaN_LEyRSG8YGM0NB6Aw9PLKmRU4NxzMACLcBGAs/s320/automathemely-jumplists.png)
You can also launch the AutomaThemely GUI from the command line, using:
```
automathemely --manage
```
**Once you configure the themes you want to use, you'll need to update the Sun times and restart the AutomaThemely scheduler**. You can do this by right-clicking the AutomaThemely icon (this should work in Unity / Gnome) and selecting `Update sun times`, and then `Restart the scheduler`. You can also do this from a terminal, using these commands:
```
automathemely --update
automathemely --restart
```
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gtk.html
Author: [Logix][a]
Curated by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://plus.google.com/118280394805678839070
[1]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s640/automathemely-settings_2.png (AutomaThemely Atom VSCode)
[2]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s1600/automathemely-settings_2.png
[3]:https://github.com/C2N14/AutomaThemely


@ -0,0 +1,262 @@
API Star: An API Framework for Python 3  Polyglot.Ninja()
======
For building APIs quickly in Python, I have mostly relied on [Flask][1]. Recently I came across a new Python 3 based API framework called "API Star". It caught my interest for several reasons. First, the framework embraces new Python features such as type hints and asyncio. It then goes further and provides a great development experience. We will get to those features soon, but before we begin, I would first like to thank Tom Christie for all his work on Django REST Framework and on API Star.
Now back to API Star -- I find the framework very productive. I can choose to write asynchronous code based on asyncio, or I can go with a traditional WSGI-style backend. It comes with a command-line tool, `apistar`, to help us get work done faster. There is optional support for both the Django ORM and SQLAlchemy. It has a brilliant type system that lets us define constraints on input and output, from which API Star can automatically generate an API schema (documentation included) and provide validation, serialization, and more. Although API Star focuses on building APIs, you can also build web applications on top of it quite easily. None of this may fully make sense until we build something ourselves.
### Getting started
We will start by installing API Star. Creating a virtual environment for this experiment is a good idea. If you don't know how to create a virtual environment, don't worry, just keep reading.
```
pip install apistar
```
(Translator's note: the command above is meant to be run inside a Python 3 virtual environment.)
If you are not using a virtual environment, or if the `pip` for your Python 3 is named `pip3`, use `pip3 install apistar` instead.
Once we have installed the package, the `apistar` command-line tool should be available. We can use it to create a new project; let's create one in the current directory.
```
apistar new .
```
This should create two files: `app.py`, which contains the main application, and `test.py` for tests. Let's look at the `app.py` file:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls

def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}

routes = [
    Route('/', 'GET', welcome),
    Include('/docs', docs_urls),
    Include('/static', static_urls)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
Before we dive into the code, let's run the application (for example, with `python app.py`) and see if it works. If we visit `http://127.0.0.1:8080/` in a browser, we will get this response:
```
{"message": "Welcome to API Star!"}
```
And if we visit `http://127.0.0.1:8080/?name=masnun`:
```
{"message": "Welcome to API Star, masnun!"}
```
Similarly, if we visit `http://127.0.0.1:8080/docs/`, we will see the automatically generated API documentation.
Now let's look at the code. We have a `welcome` function that takes a parameter named `name` with a default value of `None`. API Star is a smart API framework: it tries to find the `name` key in the URL path or in the query string and passes it to our function, and it also generates the API docs based on this. Pretty nice, isn't it?
We then create a list of `Route` and `Include` instances and pass the list to the `App` instance. `Route` objects are used to define custom user routes. As the name suggests, `Include` includes other URL paths under the given path.
### Routing
Routing is simple. When constructing the `App` instance, we need to pass a list as the `routes` argument, made up of the `Route` or `Include` objects we just saw. For a `Route`, we pass a URL path, an HTTP method, and the request handler callable (a function or something else). For an `Include` instance, we pass a URL path and a list of `Route` instances.
#### Path parameters
We can declare URL path parameters by adding a name inside curly braces. For example, `/user/{user_id}` defines a URL where `user_id` is a path parameter, that is, a variable that will be injected into the handler function (actually, the handler callable). Here is a quick example:
```
from apistar import Route
from apistar.frameworks.wsgi import WSGIApp as App

def user_profile(user_id: int):
    return {'message': 'Your profile id is: {}'.format(user_id)}

routes = [
    Route('/user/{user_id}', 'GET', user_profile),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
If we visit `http://127.0.0.1:8080/user/23`, we will get this response:
```
{"message": "Your profile id is: 23"}
```
But if we try to visit `http://127.0.0.1:8080/user/some_string`, it will not match, because we defined the `user_profile` function with a type hint on the `user_id` parameter: if the value is not an integer, the path does not match. If we instead remove the type hint and use just `user_profile(user_id)`, it will match this URL. This, too, shows how smart API Star is and how it takes advantage of typing.
#### Including / grouping routes
Sometimes it makes sense to group certain URLs together. Say we have a `user` module that handles user-related functionality. It might be better to group all user-related URLs under the `/user` path, for example `/user/new`, `/user/1`, `/user/1/update`, and so on. We can easily create our handlers and routes in a separate module or package and then include them in our own routes.
Let's create a new module named `user` in a file called `user.py`, and put the following code in it:
```
from apistar import Route

def user_new():
    return {"message": "Create a new user"}

def user_update(user_id: int):
    return {"message": "Update user #{}".format(user_id)}

def user_profile(user_id: int):
    return {"message": "User Profile for: {}".format(user_id)}

user_routes = [
    Route("/new", "GET", user_new),
    Route("/{user_id}/update", "GET", user_update),
    Route("/{user_id}/profile", "GET", user_profile),
]
```
Now we can import `user_routes` from our main app file and use it like this:
```
from apistar import Include
from apistar.frameworks.wsgi import WSGIApp as App

from user import user_routes

routes = [
    Include("/user", user_routes)
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
Now `/user/new` will be delegated to the `user_new` function.
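A quick check against the development server we started earlier confirms the delegation:
```
$ curl http://127.0.0.1:8080/user/new
{"message": "Create a new user"}
```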
### Accessing the query string / query parameters
Anything passed in the query parameters can be injected directly into the handler function. Say for the URL `/call?phone=1234`, the handler can define a `phone` parameter, which will receive the value from the query string / query parameters. If the URL query string does not include a value for `phone`, it will get `None` instead. We can also set a default value for the parameter, like this:
```
def welcome(name=None):
    if name is None:
        return {'message': 'Welcome to API Star!'}
    return {'message': 'Welcome to API Star, %s!' % name}
```
In the example above, we set a default value of `None` for `name`.
### Injecting objects
By adding type hints to a request handler, we can have different objects injected into our views. Injecting request-related objects helps a handler access them directly from within. There are several built-in objects in API Star's `http` package. We can also use its type system to create our own custom objects and have them injected into our functions. API Star also performs data validation based on the constraints we specify.
Let's define our own `User` type and have it injected into our request handler:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar import typesystem

class User(typesystem.Object):
    properties = {
        'name': typesystem.string(max_length=100),
        'email': typesystem.string(max_length=100),
        'age': typesystem.integer(maximum=100, minimum=18)
    }
    required = ["name", "age", "email"]

def new_user(user: User):
    return user

routes = [
    Route('/', 'POST', new_user),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
Now if we send a request like this:
```
curl -X POST \
http://127.0.0.1:8080/ \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{"name": "masnun", "email": "masnun@gmail.com", "age": 12}'
```
Guess what happens? We get back an error saying that age must be greater than or equal to 18. The type system allows us to do smart data validation. If we have the `docs` URL enabled, these parameters will also be documented automatically.
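The exact body of the error response depends on your API Star version, but it calls out the offending field, something like:
```
{"age": "Must be greater than or equal to 18."}
```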
### Sending a response
If you have noticed, so far we have only been returning a dictionary, which is converted to JSON and returned by default. However, we can use the `Response` class from `apistar` to set the status code and any additional response headers. Here is a quick example:
```
from apistar import Route, Response
from apistar.frameworks.wsgi import WSGIApp as App

def hello():
    return Response(
        content="Hello".encode("utf-8"),
        status=200,
        headers={"X-API-Framework": "API Star"},
        content_type="text/plain"
    )

routes = [
    Route('/', 'GET', hello),
]

app = App(routes=routes)

if __name__ == '__main__':
    app.main()
```
It should return a plain-text response along with a custom response header. Note that `content` should be bytes, not a string; that is why I encoded it.
### Moving on
I have only covered a handful of API Star's features. There are many more really cool things in API Star; I recommend going through the [Github Readme][2] file to learn more about the different features this excellent framework has to offer. I will also try to publish more short, focused tutorials on API Star in the coming days.
--------------------------------------------------------------------------------
via: http://polyglot.ninja/api-star-python-3-api-framework/
Author: [MASNUN][a]
Translated by: [MjSeven](https://github.com/MjSeven)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://polyglot.ninja/author/masnun/
[1]:http://polyglot.ninja/rest-api-best-practices-python-flask-tutorial/
[2]:https://github.com/encode/apistar


@ -0,0 +1,53 @@
Cross-Site Request Forgery
======
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/understanding-csrf-cross-site-forgery_orig.jpg)
Security is a major concern when designing web applications. And I am not talking about DDoS protection, strong passwords, or two-step verification. I am talking about the biggest threat to web applications. It is known as **CSRF**, short for **Cross Site Request Forgery**.
### What is CSRF?
[![csrf what is cross site forgery](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg)][1]
First of all, **CSRF** stands for Cross Site Request Forgery. It is commonly pronounced "sea-surf" and is also often called XSRF. CSRF is a type of attack in which various actions are performed on a web application where the victim is logged in, without the victim's knowledge. These actions can be anything, from liking or commenting on a social media post, to sending spam messages to people, or even transferring funds out of the victim's bank account.
### How does CSRF work?
A **CSRF** attack tries to exploit a simple, common vulnerability in all browsers. Every time we authenticate with or log in to a website, session cookies are stored in the browser. So whenever we make a request to that website, those cookies are automatically sent to the server, and the server identifies us by matching the cookies we sent with its records. That is how it knows it is us.
[![cookies set by website chrome](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg)][2]
This means that any request I make, knowingly or unknowingly, will carry those cookies. Since the cookies are sent and will match the records on the server, the server believes I am the one making the request.
CSRF attacks usually come in the form of links. We may click them on other websites or receive them by email. When these links are clicked, an unwanted request is made to the server. And, as I said before, the server thinks we made the request and authenticates it.
#### A real-world example
To put things in perspective, imagine you are logged in to your bank's website and fill out the form at **yourbank.com/transfer**. You enter the recipient's account number as 1234 and the amount as 5,000, and click the submit button. Now a request to **yourbank.com/transfer/send?to=1234&amount=5000** will be made, and the server will act on the request and make the transfer. Now imagine you are on another website and click a link that opens the same URL with a hacker's account number. The money is now transferred to the hacker, and the server thinks you made the transaction, even though you didn't.
[![csrf hacking bank account](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg)][3]
#### CSRF protection
CSRF protection is quite easy to implement. It usually involves sending a token, called a CSRF token, to the web page. Every time a new request is made, this token is sent along and verified. So a malicious request made to the server will pass the cookie authentication but fail the CSRF validation. Most web frameworks provide out-of-the-box support for preventing CSRF attacks, and CSRF attacks are not as common today as they once were.
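To make the idea concrete, here is a minimal, framework-agnostic sketch of the token flow in Python. The `session` argument is assumed to be a dict-like per-user store, and the helper names are hypothetical rather than taken from any particular framework:
```
import hmac
import secrets

def issue_csrf_token(session):
    # Create one random token per session; embed it in every form you render.
    session.setdefault('csrf_token', secrets.token_hex(32))
    return session['csrf_token']

def is_valid_csrf_token(session, submitted):
    # Verify the token echoed back by a state-changing request.
    # compare_digest runs in constant time, avoiding timing leaks.
    expected = session.get('csrf_token', '')
    return bool(expected) and hmac.compare_digest(expected, submitted or '')
```
A forged request from another site can make the browser attach the session cookie, but the attacking page cannot read or guess this token, so the check above fails.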
### Conclusion
CSRF attacks were a big deal 10 years ago, but today we don't see many of them. In the past, well-known sites such as YouTube, The New York Times, and Netflix have been vulnerable to CSRF. However, the prevalence and the rate of CSRF attacks have decreased lately. Nevertheless, CSRF attacks are still a threat, and it is important that you protect your website or application from them.
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/understanding-csrf-cross-site-request-forgery
Author: [linuxandubuntu][a]
Curated by: [lujun9972](https://github.com/lujun9972)
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-what-is-cross-site-forgery_orig.jpg
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/cookies-set-by-website-chrome_orig.jpg
[3]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/csrf-hacking-bank-account_orig.jpg