mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-03-21 02:10:11 +08:00
commit 17d0bbf895

.gitignore (vendored)
@@ -3,3 +3,5 @@ members.md
 *.html
 *.bak
 .DS_Store
+sources/*/.*
+translated/*/.*
published/20180128 Being open about data privacy.md (new file, 95 lines)

@@ -0,0 +1,95 @@
对数据隐私持开放的态度
======

> 尽管有包括 GDPR 在内的法规,数据隐私对于几乎所有的人来说都是很重要的事情。



今天(LCTT 译注:本文发表于 2018/1/28)是<ruby>[数据隐私日][1]<rt>Data Privacy Day</rt></ruby>(在欧洲叫“<ruby>数据保护日<rt>Data Protection Day</rt></ruby>”)。你可能会认为,现在我们处于一个开源的世界中,所有的数据都应该是自由的,[就像人们想的那样][2],但是现实并没那么简单。主要有两个原因:

1. 我们中的大多数(不仅仅是在开源界)认为至少有些关于我们自己的数据是不愿意分享出去的(我在之前发表的[一篇文章][3]中列举了一些例子)。
2. 我们很多人虽然在开源领域工作,但事实上是为一些商业公司或其他组织工作,分享数据也要在合法的要求范围之内。

所以实际上,数据隐私对于每个人来说都是很重要的。

事实证明,在美国和欧洲之间,人们和政府在“组织可以使用哪些数据”这个问题上的出发点是有些不同的。前者通常给商业实体(特别是,愤世嫉俗的人们会指出,是大型的商业实体)更多的自由度,任其利用所收集到的关于我们的数据。而在欧洲,一直以来持有的是一种约束限制更多的观念,而且在 5 月 25 日,欧洲的观点可以说取得了胜利。

### 通用数据保护条例(GDPR)的影响

那是一个相当大胆的断言,但它基于这样一个事实:这一天是欧盟 2016 年通过的一项称之为<ruby>通用数据保护条例<rt>General Data Protection Regulation</rt></ruby>(GDPR)的立法开始生效的日期。通用数据保护条例在私人数据怎样才能被保存、如何才能被使用、谁能使用、能被持有多长时间这些方面设置了严格的规则。它还描述了什么数据属于私人数据——而且涉及的范围非常广泛,从你的姓名、家庭住址到你的医疗记录以及你电脑的 IP 地址。

通用数据保护条例的重要之处是,它并不仅仅适用于欧洲的公司:如果你是阿根廷、日本、美国或者俄罗斯的公司,而且你正在收集涉及到欧盟居民的数据,你就要受到这个条例的约束管辖。

“哼!”你可能会这样说^注1 ,“我的业务不在欧洲,他们能对我有啥约束?”答案很简单:如果你想继续在欧盟做任何生意,你最好遵守,因为一旦你违反了通用数据保护条例的规则,你将可能面临全球总收入百分之四的罚款。是的,你没看错,是全球总收入——不仅仅是你在欧盟某一国家的收入,也不只是净利润,而是全球总收入。这足以让你的法律团队警觉起来,他们会知会你的管理层,并立即敦促你的 IT 团队,确保大家的行为在相当短的时间内合规。

看上去这和非欧盟公民没有什么相关性,但其实不然:对大多数公司来说,对所有的顾客、合作伙伴以及员工实行同样的数据保护措施,比仅针对欧盟公民单独实施一套措施更简单有效,因此所有人都将从中受益。^注2

然而,通用数据保护条例不久将在全球范围内适用,并不意味着一切都会变得很美好^注3 :事实并非如此,我们一直在交出关于我们自己的信息——而且允许公司去使用它。

有一句话是这么说的(尽管很有争议):“如果你没有在付费,那么你就是产品。”这句话的意思就是,如果你没有为某一项服务付费,那么其他的人就在付费使用你的数据。你有付费使用 Facebook、推特、谷歌邮箱吗?你觉得他们是如何赚钱的?大部分是通过广告。一些人会争论说那是他们向你提供的一项服务,但事实上是广告商在为你的数据向平台付费。你并不是广告的真正顾客——只有当你看了广告后买了商品,你才变成了广告商的顾客;在此之前,交易只发生在广告平台和广告商之间。

有些服务允许你通过付费来消除广告(流媒体音乐平台“声破天”就是这样的),但从另一方面来讲,即使是付费的服务也可能投放广告(例如,亚马逊正在努力让 Alexa 播放广告)。除非我们打算为所有这些“免费”服务付费,否则我们需要清楚我们所放弃的是什么,并在愿意暴露和不愿暴露的信息之间做出选择。

### 谁是顾客?

关于数据的另一个问题一直在困扰着我们,它是所产生的数据量的直接结果。有许多组织一直在产生巨量的数据,包括大学、医院、政府部门这样的公共组织^注4 ——而且他们没有能力长久储存这些数据。如果这些数据没有长久的价值也就罢了,但事实正好相反:随着处理大数据的工具不断被开发出来,这些组织也认识到,他们现在以及在不久的将来,将能够去挖掘这些数据。

然而他们面临的问题是,数据不断增长而存储能力无法跟上,该怎么办?幸运的是——我是带着讽刺意味使用这个词的^注5 ——大公司正在介入,来“帮助”他们。“把你们的数据给我们,”他们说,“我们将免费保存。我们甚至让你随时能够访问你所收集到的数据!”这听起来很棒,是吗?这是大公司^注6 的一个极具代表性的例子:站在慈善的立场上,帮助公共组织管理他们收集到的关于我们的数据。

不幸的是,慈善不是唯一的动机。这些帮助是附有条件的:作为替其保存数据的交换,这些公司得到了将数据访问权限出售给第三方的权利。你认为公共组织,或者是被收集数据的人,在数据的访问权被出售给第三方时,以及在第三方如何使用这些数据上,能有发言权吗?我将把这个问题当做一个练习留给读者去思考。^注7

### 开放和积极

然而并不是只有坏消息。政府中有一项逐渐发展起来的“开放数据”运动,鼓励各个部门将他们的大量数据免费开放给公众或其他组织。在某些情况下,这是专门立法的。许多志愿组织——尤其是那些接受公共资金的——也开始这样做。甚至商业组织也表现出了兴趣的苗头。而且,一些技术已经可行,例如围绕<ruby>差分隐私<rt>differential privacy</rt></ruby>和<ruby>多方计算<rt>multi-party computation</rt></ruby>的技术,正使得跨多个数据集挖掘数据而不过多披露个人信息成为可能——这在计算上比你想象的更接近现实。

这些对我们来说意味着什么呢?我之前在 Opensource.com 上写过关于[开源的共享福利][4]的文章,而且我越来越相信我们需要把视野从软件拓展到其他领域:硬件、组织,以及与这次讨论有关的——数据。让我们假设一下,你是 A 公司,要向另一家公司——顾客 B^注8 ——提供一项服务。这里涉及四种不同类型的数据:

1. 数据完全开放:对 A 和 B 都可得到,世界上任何人都可以得到
2. 数据是已知的、共享的、保密的:A 和 B 可得到,但其他人不能得到
3. 数据在公司层面保密:A 公司可以得到,但顾客 B 不能
4. 数据在顾客层面保密:顾客 B 可以得到,但 A 公司不能

首先,也许我们对数据应该更开放些,将数据默认放到选项 1 中。如果那些数据对所有人开放,在无人驾驶、语音识别、矿藏勘探以及人口统计等方面都会有相当大的作用。^注9 如果我们能够找到方法,使选项 2、3 和 4 中的数据——或者至少其中的一部分——也能以选项 1 的方式被利用,同时仍将细节保密,不是很好吗?这正是研究这些新技术的希望所在。

然而还有很长的路要走,所以不要太过兴奋。同时,可以开始考虑将你的一些数据默认开放。

### 一些具体的措施

我们如何处理数据的隐私和开放?下面是我想到的一些具体的措施,欢迎大家在评论中做出更多的贡献:

* 检查你的组织是否正在认真严格地执行通用数据保护条例。如果没有,去推动实施它。
* 要默认加密敏感数据(或者适当的时候用散列算法),当不再需要的时候及时删掉——除非数据正在被处理使用,否则没有任何借口让数据以明文形式存在。
* 当你注册一个服务的时候,考虑一下你公开了什么信息,特别是社交媒体类的。
* 和你的非技术朋友讨论这个话题。
* 教育你的孩子、你朋友的孩子以及他们的朋友。最好能去他们的学校,和老师谈一谈,在学校里做一些宣讲。
* 鼓励你所服务和志愿贡献的组织,推动数据的默认开放。不要先想“我为什么要开放数据”,而是从“我为什么不开放数据”想起。
* 尝试去访问一些开放数据。挖掘它、开发应用来使用它、进行数据分析、画些漂亮的图,^注10 制作有趣的音乐,总之考虑用它来做些事情。告诉那些组织你在使用它们的数据,感谢它们,并鼓励它们去开放更多。
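对于上面“默认加密敏感数据(或适当时用散列算法)”这一条,可以用一个小例子来说明思路。下面是一个示意性的 Python 片段,`hash_identifier` 这个函数名是为演示而假设的;真实方案还应考虑密钥管理,并优先选用专门的口令散列算法:

```python
import hashlib
import os

# 示意:存储前对敏感标识符加盐散列,而不是保存明文。
def hash_identifier(value: str, salt: bytes) -> str:
    """返回加盐后的 SHA-256 十六进制摘要(64 个十六进制字符)。"""
    return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()

salt = os.urandom(16)                      # 每条记录使用独立的随机盐
digest = hash_identifier("user@example.com", salt)
print(digest)                              # 存储摘要而非明文
```

同样的输入配上同样的盐会得到同样的摘要,因此仍可用于匹配和查重,但无法从摘要直接还原出原文。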
**注:**

1. 我承认你可能并不会这样说。
2. 假设你坚信你的个人数据应该被保护。
3. 如果你在思考“美好”一词的寓意,那么你并不孤单。
4. 事实上这些机构能够有多开放,取决于你所居住的地方。
5. 鉴于我是英国人,那是非常非常大剂量的讽刺。
6. 它们必须是巨大的公司:没有其他人能够负担得起这么大的存储和基础架构来使数据保持可用。
7. 不,答案是“不能”。
8. 尽管这个例子也同样适用于个人。看:A 可以是 Alice,B 可以是 Bob……
9. 当然,并不是说我们应该暴露个人数据或者本应保密的数据——不是那类的数据。
10. 我的一个朋友发现,每当她去接孩子放学的时候总是下雨。为了确认这不是错觉,她在整个学年里持续记录天气信息并制作了图表,分享到社交媒体上。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/being-open-about-data-privacy

作者:[Mike Bursell][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
@@ -0,0 +1,120 @@
如何在 Linux 中使用一个命令升级所有软件
======



众所周知,让我们的 Linux 系统保持最新状态需要用到多种包管理器。比如说,在 Ubuntu 中,你无法只用 `sudo apt update` 和 `sudo apt upgrade` 命令升级所有软件,这两条命令仅升级使用 APT 包管理器安装的应用程序。你有可能还使用 `cargo`、[pip][1]、`npm`、`snap`、`flatpak` 或 [Linuxbrew][2] 包管理器安装了其他软件,那么就需要使用相应的包管理器才能将它们全部更新。
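换句话说,在没有这类工具时,手工保持全部软件最新大致要跑一串这样的命令。下面是一个示意脚本,具体命令因系统而异;为安全起见,这里用假设的 `run` 函数包装成“干跑”模式,只打印将要执行的命令:

```shell
#!/bin/sh
# 示意:逐个包管理器地更新软件,正是 topgrade 想替你省掉的重复劳动。
# run 只是打印命令;把函数体换成 "$@" 即可真正执行。
run() {
    echo "将运行:$*"
}

run sudo apt update
run sudo apt dist-upgrade
run cargo install-update -a
run npm update -g
run sudo snap refresh
run flatpak update -y
```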
再也不用这样了!跟 `topgrade` 打个招呼吧,这是一个可以一次性升级系统中所有软件的工具。

你无需逐个运行每个包管理器来更新软件包。`topgrade` 工具通过检测已安装的软件包、工具、插件并运行相应的软件包管理器来更新 Linux 中的所有软件,用一条命令解决了这个问题。它是自由开源的,使用 **Rust 语言**编写。它支持 GNU/Linux 和 macOS。

### 在 Linux 中使用一个命令升级所有软件

`topgrade` 可以在 AUR 中找到。因此,你可以在任何基于 Arch 的系统中使用 [Yay][3] 助手程序安装它。

```
$ yay -S topgrade
```

在其他 Linux 发行版上,你可以使用 `cargo` 包管理器安装 `topgrade`。要安装 `cargo` 包管理器,请参阅以下链接:

- [在 Linux 中安装 Rust 语言][12]

然后,运行以下命令来安装 `topgrade`。

```
$ cargo install topgrade
```

安装完成后,运行 `topgrade` 以升级 Linux 系统中的所有软件。

```
$ topgrade
```

一旦调用了 `topgrade`,它将逐个执行以下任务。如有必要,系统会要求输入 root/sudo 用户密码。

1、 运行系统的包管理器:

* Arch:运行 `yay`,或者回退到 [pacman][4]
* CentOS/RHEL:运行 `yum upgrade`
* Fedora:运行 `dnf upgrade`
* Debian/Ubuntu:运行 `apt update` 和 `apt dist-upgrade`
* Linux/macOS:运行 `brew update` 和 `brew upgrade`

2、 检查以下路径是否由 Git 跟踪。如果是,则拉取它们:

* `~/.emacs.d`(无论你使用 Spacemacs 还是自定义配置都应该可用)
* `~/.zshrc`
* `~/.oh-my-zsh`
* `~/.tmux`
* `~/.config/fish/config.fish`
* 自定义路径

3、 Unix:运行 `zplug` 更新

4、 Unix:使用 TPM 升级 `tmux` 插件

5、 运行 `cargo install-update`

6、 升级 Emacs 包

7、 升级 Vim 包。对以下插件框架均可用:

* NeoBundle
* [Vundle][5]
* Plug

8、 升级 [npm][6] 全局安装的包

9、 升级 Atom 包

10、 升级 [Flatpak][7] 包

11、 升级 [snap][8] 包

12、 Linux:运行 `fwupdmgr` 显示固件升级(仅显示,不会实际执行升级)

13、 运行自定义命令

最后,`topgrade` 将运行 `needrestart`,以重新启动所有需要重启的服务。在 macOS 中,它还会升级 App Store 中的程序。
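其中第 2 步对 Git 仓库的处理,大致等价于下面这个手工脚本。这只是一个示意,`update_repos` 这个函数名是为演示而假设的;`--ff-only` 是一个保守的选择,可以避免自动产生合并提交:

```shell
#!/bin/sh
# 示意:逐个检查路径是否为 Git 仓库,是则拉取更新,否则跳过。
update_repos() {
    for repo in "$@"; do
        if [ -d "$repo/.git" ]; then
            git -C "$repo" pull --ff-only
        else
            echo "跳过:$repo(不是 Git 仓库)"
        fi
    done
}

update_repos "$HOME/.emacs.d" "$HOME/.oh-my-zsh" "$HOME/.tmux"
```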
下面是我的 Ubuntu 18.04 LTS 测试环境的示例输出:

![][10]

这个工具的好处是,如果一个任务失败,它将自动运行下一个任务,并完成所有其他后续任务。最后,它将显示摘要,其中包含运行的任务数量、成功的数量和失败的数量等详细信息。

![][11]

就个人而言,我很喜欢创建一个像 `topgrade` 这样的程序的想法:用一个命令升级各种包管理器安装的所有软件。我希望你也觉得它有用。还有更多的好东西即将到来,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-upgrade-everything-using-a-single-command-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-python-packages-using-pip/
[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[4]:https://www.ostechnix.com/getting-started-pacman/
[5]:https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/
[6]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[7]:https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
[8]:https://www.ostechnix.com/install-snap-packages-arch-linux-fedora/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-2.png
[12]:https://www.ostechnix.com/install-rust-programming-language-in-linux/
@@ -0,0 +1,40 @@
在 Arch 用户仓库(AUR)中发现恶意软件
======

7 月 7 日,有一个 AUR 软件包被人修改、加入了一些恶意代码,这提醒 [Arch Linux][1] 用户(以及广大的 Linux 用户):在安装之前,应该尽可能检查所有由用户生成的软件包。

[AUR][3](即 Arch(Linux)用户仓库)包含软件包描述文件(也称为 PKGBUILD),它使得从源代码编译软件包变得更容易。虽然这些软件包非常有用,但它们永远不应被视为安全的,用户应尽可能在使用之前检查其内容。毕竟,AUR 在网页中以粗体写明:“**AUR 软件包是用户制作的内容。使用这些文件的任何风险由你自行承担。**”

这次[发现][4]包含恶意代码的 AUR 软件包证明了这一点。[acroread][5] 于 7 月 7 日被一位名为 “xeactor” 的用户修改(在此之前它是“孤儿”包,即没有维护者),修改后的包中包含了一行使用 `curl` 从 pastebin 下载脚本的命令。该脚本又下载了另一个脚本,并安装了一个 systemd 单元以定期运行后者。

**看来还有[另外两个][2] AUR 软件包以同样的方式被修改。所有违规软件包都已被删除,用于上传它们的用户帐户(它们都注册于软件包被修改的同一天)也已被暂停。**

这些恶意代码没有做任何真正有害的事情——它只是试图把一些系统信息上传到 pastebin.com,比如机器 ID、`uname -a` 的输出(包括内核版本、架构等)、CPU 信息、pacman 信息,以及 `systemctl list-units`(列出 systemd 单元信息)的输出。我说“试图”,是因为第二个脚本中存在一个错误,实际上并没有上传成功(上传函数名为 “upload”,但脚本试图用另一个名称 “uploader” 调用它)。

此外,将这些恶意脚本添加到 AUR 的人还把自己的 Pastebin API 密钥以明文形式留在了脚本中,再次证明他们真的不明白自己在做什么。(LCTT 译注:意即这是一个菜鸟“黑客”,还不懂得如何有经验地隐藏自己。)

尝试将这些信息上传到 Pastebin 的目的尚不清楚,特别是在原本可以上传更加敏感的信息的情况下,比如 GPG/SSH 密钥。

**更新:** Reddit 用户 u/xanaxdroid_ [提及][6],同一个名为 “xeactor” 的用户还发布过一些加密货币挖矿软件包,因此他推测 “xeactor” 可能正计划向 AUR 添加一些隐藏的加密货币挖矿软件([两个月][7]前的一些 Ubuntu Snap 软件包就是这样做的)。这就是 “xeactor” 试图获取各种系统信息的可能原因。该 AUR 用户上传的所有软件包都已被删除,因此我无法检查。

**另一个更新:** 你究竟应该在那些用户生成的软件包(如 AUR 中的)中检查什么?情况各有不同,我无法准确地告诉你,但你可以从寻找任何尝试使用 `curl`、`wget` 和其他类似工具下载内容的代码开始,看看它们究竟想要下载什么。还要检查下载软件包源代码的服务器,确保它是官方来源。不幸的是,这并不是一门精确的“科学”。例如,对于 Launchpad PPA,事情变得更加复杂,因为你必须懂得 Debian 是如何打包的,而且源代码可以被直接更改,因为它托管在 PPA 中并由用户上传。使用 Snap 软件包时则更加复杂,因为在安装之前你无法检查这些软件包(据我所知)。对于后面这些情况,作为通用解决方案,我觉得你应该只安装你信任的用户/打包者生成的软件包。
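沿着这个思路,下面是一个粗略的示意脚本(`audit_pkgbuild` 这个函数名是为演示而假设的),用来在构建之前快速标出 PKGBUILD 中的网络下载命令和声明的源。它不能代替通读整个文件,只是帮你先看最可疑的部分:

```shell
#!/bin/sh
# 示意:粗略检查一个 PKGBUILD,列出其中的下载命令与 source 数组。
# 用法:audit_pkgbuild ./PKGBUILD
audit_pkgbuild() {
    pkgbuild="$1"
    echo "== 可能的网络下载命令 =="
    grep -nE 'curl|wget|pastebin' "$pkgbuild" || echo "(未发现)"
    echo "== 声明的源 =="
    grep -n '^source' "$pkgbuild" || echo "(未发现 source 数组)"
}
```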
--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/07/malware-found-on-arch-user-repository.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://www.archlinux.org/
[2]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034153.html
[3]:https://aur.archlinux.org/
[4]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034152.html
[5]:https://aur.archlinux.org/cgit/aur.git/commit/?h=acroread&id=b3fec9f2f16703c2dae9e793f75ad6e0d98509bc
[6]:https://www.reddit.com/r/archlinux/comments/8x0p5z/reminder_to_always_read_your_pkgbuilds/e21iugg/
[7]:https://www.linuxuprising.com/2018/05/malware-found-in-ubuntu-snap-store.html
published/20180710 6 open source cryptocurrency wallets.md (new file, 94 lines)

@@ -0,0 +1,94 @@
6 个开源的数字货币钱包
======

> 想寻找一个可以存储和交易你的比特币、以太坊和其它数字货币的软件吗?这里有 6 个开源的软件可以选择。



没有数字货币钱包,像比特币和以太坊这样的数字货币只不过是又一个空想罢了。这些钱包对于保存、发送以及接收数字货币来说是必需的。

迅速成长的 [数字货币][1] 之所以是革命性的,都归功于它的去中心化:该网络中没有中央权威,每个人都享有平等的权力。开源技术是数字货币和 [区块链][2] 网络的核心所在。它使得这个充满活力的新兴行业能够从去中心化中获益——比如,不可篡改、透明和安全。

如果你正在寻找一个自由开源的数字货币钱包,请继续阅读,并开始探索以下选择能否满足你的需求。

### 1、 Copay

[Copay][3] 是一个能够很方便地存储比特币的开源数字货币钱包。这个软件以 [MIT 许可证][4] 发布。

Copay 服务器也是开源的。因此,开发者和比特币爱好者可以在服务器上部署他们自己的应用程序,来完全控制他们的活动。

Copay 钱包能让你手中的比特币更加安全,而不必信任不可靠的第三方。它允许你使用多重签名来批准交易,并且支持在同一个 app 内存储多个独立的钱包。

Copay 可以在多种平台上使用,比如 Android、Windows、MacOS、Linux 和 iOS。

### 2、 MyEtherWallet

正如它的名字所示,[MyEtherWallet][5](缩写为 MEW)是一个以太坊钱包。它是开源的(遵循 [MIT 许可证][6]),并且是完全在线的,可以通过 web 浏览器来访问它。

这个钱包的客户端界面非常简洁,它可以让你自信而安全地参与到以太坊区块链中。

### 3、 mSIGNA

[mSIGNA][7] 是一个功能强大的桌面应用程序,用于在比特币网络上完成交易。它遵循 [MIT 许可证][8],在 MacOS、Windows 和 Linux 上可用。

这个区块链钱包可以让你完全控制你存储的比特币。其特性包括用户友好、灵活、去中心化的离线密钥生成能力、加密的数据备份,以及多设备同步功能。

### 4、 Armory

[Armory][9] 是一个在你的计算机上生成和保管比特币私钥的开源钱包(遵循 [GNU AGPLv3][10])。它通过使用冷存储和支持多重签名的能力增强了安全性。

使用 Armory,你可以在完全离线的计算机上设置一个钱包;你可以通过<ruby>仅查看<rt>watch-only</rt></ruby>功能在互联网上查看你的比特币具体信息,这样有助于提高安全性。这个钱包也允许你创建多个地址,并使用它们去完成不同的交易。

Armory 可用于 MacOS、Windows 和几个比较有特色的 Linux 平台(包括树莓派)。

### 5、 Electrum

[Electrum][11] 是一个既对新手友好又具备专家功能的比特币钱包。它遵循 [MIT 许可证][12] 发行。

Electrum 在你的本地机器上使用较少的资源加密你的私钥,支持冷存储,并且提供多重签名能力。

它在各种操作系统和设备上都可以使用,包括 Windows、MacOS、Android、iOS 和 Linux,并且也可以在像 [Trezor][13] 这样的硬件钱包中使用。

### 6、 Etherwall

[Etherwall][14] 是第一款可以在桌面计算机上存储和发送以太坊的钱包。它是一个遵循 [GPLv3 许可证][15] 的开源钱包。

Etherwall 非常直观而且速度很快。更重要的是,它增加了你的私钥的安全性,你可以在一个全节点或瘦节点上运行它。作为全节点客户端运行时,它允许你在本地机器上下载整个以太坊区块链。

Etherwall 可以在 MacOS、Linux 和 Windows 平台上运行,并且它也支持 Trezor 硬件钱包。

### 智者之言

自由开源的数字货币钱包在让更多的人快速上手数字货币方面扮演着至关重要的角色。

在你使用任何数字货币软件钱包之前,你一定要确保你的安全,而且一定要记住并完全遵循确保你的资金安全的最佳实践。

如果你喜欢的开源数字货币钱包不在以上的清单中,请在下面的评论区分享出你所知道的开源钱包。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/crypto-wallets

作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/drmjg
[1]:https://www.liveedu.tv/guides/cryptocurrency/
[2]:https://opensource.com/tags/blockchain
[3]:https://copay.io/
[4]:https://github.com/bitpay/copay/blob/master/LICENSE
[5]:https://www.myetherwallet.com/
[6]:https://github.com/kvhnuke/etherwallet/blob/mercury/LICENSE.md
[7]:https://ciphrex.com/
[8]:https://github.com/ciphrex/mSIGNA/blob/master/LICENSE
[9]:https://www.bitcoinarmory.com/
[10]:https://github.com/etotheipi/BitcoinArmory/blob/master/LICENSE
[11]:https://electrum.org/#home
[12]:https://github.com/spesmilo/electrum/blob/master/LICENCE
[13]:https://trezor.io/
[14]:https://www.etherwall.com/
[15]:https://github.com/almindor/etherwall/blob/master/LICENSE
@@ -2,19 +2,20 @@
======



-Thunderbird 是由 [Mozilla][1] 开发的流行免费电子邮件客户端。与 Firefox 类似,Thunderbird 提供了大量加载项来用于额外功能和自定义。本文重点介绍四个加载项,以改善你的隐私。
+Thunderbird 是由 [Mozilla][1] 开发的流行的免费电子邮件客户端。与 Firefox 类似,Thunderbird 提供了大量加载项来用于额外功能和自定义。本文重点介绍四个加载项,以改善你的隐私。

### Enigmail

-使用 GPG(GNU Privacy Guard)加密电子邮件是保持其内容私密性的最佳方式。如果你不熟悉 GPG,请[查看我们在 Magazine 上的入门介绍][2]。
+使用 GPG(GNU Privacy Guard)加密电子邮件是保持其内容私密性的最佳方式。如果你不熟悉 GPG,请[查看我们在这里的入门介绍][2]。

[Enigmail][3] 是使用 OpenPGP 和 Thunderbird 的首选加载项。实际上,Enigmail 与 Thunderbird 集成良好,可让你加密、解密、数字签名和验证电子邮件。

### Paranoia

-[Paranoia][4] 可让你查看有关收到的电子邮件的重要信息。表情符号显示电子邮件在到达收件箱之前经过的服务器之间的加密状态。
+[Paranoia][4] 可让你查看有关收到的电子邮件的重要信息。用一个表情符号显示电子邮件在到达收件箱之前经过的服务器之间的加密状态。

-黄色,快乐的表情告诉你所有连接都已加密。蓝色,悲伤的表情意味着一个连接未加密。最后,红色的,害怕的表情表示在多个连接上该消息未加密。
+黄色、快乐的表情告诉你所有连接都已加密。蓝色、悲伤的表情意味着有一个连接未加密。最后,红色的、害怕的表情表示在多个连接上该消息未加密。

还有更多有关这些连接的详细信息,你可以用来检查哪台服务器用于投递邮件。

@@ -30,9 +31,9 @@ Thunderbird 是由 [Mozilla][1] 开发的流行免费电子邮件客户端。与

如果你真的担心自己的隐私,[TorBirdy][6] 就是给你设计的加载项。它将 Thunderbird 配置为使用 [Tor][7] 网络。

-据其[文档][8]所述,TorBirdy 在以前没有使用 Tor 的电子邮件帐户上提供较少的隐私。
+据其[文档][8]所述,TorBirdy 为以前没有使用 Tor 的电子邮件帐户提供了少量隐私保护。

->请记住,跟之前使用 Tor 访问的电子邮件帐户相比,之前没有使用 Tor 访问的电子邮件帐户提供**更少**的隐私/匿名/更弱的假名。但是,TorBirdy 仍然对现有帐户或实名电子邮件地址有用。例如,如果你正在寻求隐匿位置 - 你经常旅行并且不想通过发送电子邮件来披露你的所有位置--TorBirdy 非常有效!
+> 请记住,跟之前使用 Tor 访问的电子邮件帐户相比,之前没有使用 Tor 访问的电子邮件帐户提供**更少**的隐私/匿名/更弱的假名。但是,TorBirdy 仍然对现有帐户或实名电子邮件地址有用。例如,如果你正在寻求隐匿位置 —— 你经常旅行并且不想通过发送电子邮件来披露你的所有位置 —— TorBirdy 非常有效!

请注意,要使用此加载项,必须在系统上安装 Tor。

@@ -46,7 +47,7 @@ via: https://fedoramagazine.org/4-addons-privacy-thunderbird/

作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,17 +1,17 @@
如何强制用户在下次登录 Linux 时更改密码
======

-当你创建用户使用默认密码时,你必须强制用户在下一次登录时更改密码。
+当你使用默认密码创建用户时,你必须强制用户在下一次登录时更改密码。

当你在一个组织中工作时,此选项是强制性的。因为老员工可能知道默认密码,他们可能会也可能不会尝试不当行为。

-这是安全投诉之一,所以,确保你必须以正确的方式处理此事,不会失败。甚至你的团队成员也可以这样做。
+这是安全投诉之一,所以,确保你必须以正确的方式处理此事而无任何失误。即使是你的团队成员也要一样做。

大多数用户都很懒,除非你强迫他们更改密码,否则他们不会这样做。所以要做这个实践。

出于安全原因,你需要经常更改密码,或者至少每个月更换一次。

-确保你使用的是硬编码和猜测密码(大小写字母,数字和特殊字符的组合)。它至少应该为 10-15 个字符。
+确保你使用的是难以猜测的密码(大小写字母,数字和特殊字符的组合)。它至少应该为 10-15 个字符。

我们运行了一个 shell 脚本来在 Linux 服务器中创建一个用户账户,它会自动为用户附加一个密码,密码是实际用户名和少量数字的组合。

@@ -20,55 +20,56 @@
* passwd 命令
* chage 命令

**建议阅读:**
-**(#)** [如何在 Linux 上检查用户所属的组][1]
-**(#)** [如何在 Linux 上检查创建用户的日期][2]
-**(#)** [如何在 Linux 中重置/更改用户密码][3]
-**(#)** [如何使用 passwd 命令管理密码过期和老化][4]
+- [如何在 Linux 上检查用户所属的组][1]
+- [如何在 Linux 上检查创建用户的日期][2]
+- [如何在 Linux 中重置/更改用户密码][3]
+- [如何使用 passwd 命令管理密码过期和老化][4]

### 方法 1:使用 passwd 命令

-passwd 代表密码。它用于更新用户的身份验证令牌。passwd 命令/实用程序用于设置,修改或更改用户的密码。
+`passwd` 的意思是“密码”。它用于更新用户的身份验证令牌。`passwd` 命令/实用程序用于设置、修改或更改用户的密码。

普通的用户只能更改自己的账户,但超级用户可以更改任何账户的密码。

-此外,我们还可以使用其他选项,允许用户执行其他活动,例如删除用户密码,锁定或解锁用户账户,设置用户账户的密码过期时间等。
+此外,我们还可以使用其他选项,允许用户执行其他活动,例如删除用户密码、锁定或解锁用户账户、设置用户账户的密码过期时间等。

在 Linux 中这可以通过调用 Linux-PAM 和 Libuser API 执行。

-在 Linux 中创建用户时,用户详细信息将存储在 `/etc/passwd` 文件中。passwd 文件将每个用户的详细信息保存为七个字段的单行。
+在 Linux 中创建用户时,用户详细信息将存储在 `/etc/passwd` 文件中。`passwd` 文件将每个用户的详细信息保存为带有七个字段的单行。

此外,在 Linux 系统中创建新用户时,将更新以下四个文件。

-* `/etc/passwd:` 用户详细信息将在此文件中更新。
-* `/etc/shadow:` 用户密码信息将在此文件中更新。
-* `/etc/group:` 新用户的组详细信息将在此文件中更新。
-* `/etc/gshadow:` 新用户的组密码信息将在此文件中更新。
+* `/etc/passwd`: 用户详细信息将在此文件中更新。
+* `/etc/shadow`: 用户密码信息将在此文件中更新。
+* `/etc/group`: 新用户的组详细信息将在此文件中更新。
+* `/etc/gshadow`: 新用户的组密码信息将在此文件中更新。

-#### 如何使用 passwd 命令执行操作
+#### 如何使用 passwd 命令执行此操作

-我们可以使用 passwd 命令并添加 `-e` 选项来执行此操作。
+我们可以使用 `passwd` 命令并添加 `-e` 选项来执行此操作。

为了测试这一点,让我们创建一个新用户账户,看看它是如何工作的。

```
# useradd -c "2g Admin - Magesh M" magesh && passwd magesh
Changing password for user magesh.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

使用户账户的密码失效,那么在下次登录尝试期间,用户将被迫更改密码。

```
# passwd -e magesh
Expiring password for user magesh.
passwd: Success
```

当我第一次尝试使用此用户登录系统时,它要求我设置一个新密码。

```
login as: magesh
[email protected]'s password:
@@ -82,32 +83,32 @@ New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```

### 方法 2:使用 chage 命令

-chage 代表改变时间。它会更改用户密码过期信息。
+`chage` 意即“改变时间”。它会更改用户密码过期信息。

-chage 命令改变密码更改与上次密码更改日期之间的天数。系统使用此信息来确定用户何时必须更改他/她的密码。
+`chage` 命令会改变上次密码更改日期之后需要修改密码的天数。系统使用此信息来确定用户何时必须更改他/她的密码。

-它允许用户执行其他活动,例如设置帐户到期日期,到期后设置密码失效,显示帐户老化信息,设置密码更改前的最小和最大天数以及设置到期警告天数。
+它允许用户执行其他活动,例如设置帐户到期日期,到期后设置密码失效,显示帐户过期信息,设置密码更改前的最小和最大天数以及设置到期警告天数。

#### 如何使用 chage 命令执行此操作

-让我们在 chage 命令的帮助下,通过添加 `-d` 选项执行此操作。
+让我们在 `chage` 命令的帮助下,通过添加 `-d` 选项执行此操作。

为了测试这一点,让我们创建一个新用户帐户,看看它是如何工作的。我们将创建一个名为 `thanu` 的用户帐户。

```
# useradd -c "2g Editor - Thanisha M" thanu && passwd thanu
Changing password for user thanu.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

-要实现这一点,请使用 chage 命令将用户的上次密码更改日期设置为 “0”。
+要实现这一点,请使用 `chage` 命令将用户的上次密码更改日期设置为 0。

```
# chage -d 0 thanu
@@ -119,10 +120,10 @@ Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```

当我第一次尝试使用此用户登录系统时,它要求我设置一个新密码。

```
login as: thanu
[email protected]'s password:
@@ -136,10 +137,8 @@ New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login-in-linux/
@@ -147,7 +146,7 @@ via: https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -0,0 +1,64 @@
Open Source Certification: Preparing for the Exam
======

Open source is the new normal in tech today, with open components and platforms driving mission-critical processes at organizations everywhere. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries, [the skills gap is widening, making it ever more difficult to hire people][1] with much-needed job skills. That’s why open source training and certification are more important than ever, and this series aims to help you learn more and achieve your own certification goals.

In the [first article in the series][2], we explored why certification matters so much today. In the [second article][3], we looked at the kinds of certifications that are making a difference. This story will focus on preparing for exams, what to expect during an exam, and how testing for open source certification differs from traditional types of testing.

Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, stated, “For many of you, if you take the exam, it may well be the first time that you've taken a performance-based exam. It is quite different from what you might have been used to with multiple choice, where the answer is on screen and you can identify it. In performance-based exams, you get what's called a prompt.”

As a matter of fact, many Linux-focused certification exams literally prompt test takers at the command line. The idea is to demonstrate skills in real time in a live environment, and the best preparation for this kind of exam is practice, backed by training.

### Know the requirements

"Get some training," Seepersad emphasized. "Get some help to make sure that you're going to do well. We sometimes find folks have very deep skills in certain areas, but then they're light in other areas. If you go to the website for [Linux Foundation training and certification][4], for the [LFCS][5] and the [LFCE][6] certifications, you can scroll down the page and see the details of the domains and tasks, which represent the knowledge areas you're supposed to know.”

Once you’ve identified the skills you need, “really spend some time on those and try to identify whether you think there are areas where you have gaps. You can figure out what the right training or practice regimen is going to be to help you get prepared to take the exam," Seepersad said.

### Practice, practice, practice

"Practice is important, of course, for all exams," he added. "We deliver the exams in a bit of a unique way -- through your browser. We're using a terminal emulator on your browser and you're being proctored, so there's a live human who is watching you via video cam, your screen is being recorded, and you're having to work through the exam console using the browser window. You're going to be asked to do something live on the system, and then at the end, we're going to evaluate that system to see if you were successful in accomplishing the task."

What if you run out of time on your exam, or simply don’t pass because you couldn’t perform the required skills? “I like the phrase ‘exam insurance,’” Seepersad said. “The way we take the stress out is by offering a ‘no questions asked’ retake. If you take either exam, LFCS or LFCE, and you do not pass on your first attempt, you are automatically eligible to have a free second attempt.”

The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.

### Free certification guide

Becoming a Linux Foundation Certified System Administrator or Engineer is no small feat, so the Foundation has created [this free certification guide][7] to help you with your preparation. In this guide, you’ll find:

* Critical things to keep in mind on test day
* An array of both free and paid study resources to help you be as prepared as possible
* A few tips and tricks that could make the difference at exam time
* A checklist of all the domains and competencies covered in the exam

With certification playing a more important role in securing a rewarding long-term career, careful planning and preparation are key. Stay tuned for the next article in this series, which will answer frequently asked questions pertaining to open source certification and training.

[Learn more about Linux training and certification.][8]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/sysadmin-cert/2018/7/open-source-certification-preparing-exam

作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linux.com/blog/os-jobs-report/2017/9/demand-open-source-skills-rise
[2]:https://www.linux.com/blog/sysadmin-cert/2018/7/5-reasons-open-source-certification-matters-more-ever
[3]:https://www.linux.com/blog/sysadmin-cert/2018/7/tips-success-open-source-certification
[4]:https://training.linuxfoundation.org/
[5]:https://training.linuxfoundation.org/certification/linux-foundation-certified-sysadmin-lfcs/
[6]:https://training.linuxfoundation.org/certification/linux-foundation-certified-engineer-lfce/
[7]:https://training.linuxfoundation.org/download-free-certification-prep-guide
[8]:https://training.linuxfoundation.org/certification/
@@ -0,0 +1,71 @@
Why moving all your workloads to the cloud is a bad idea
|
||||
======
|
||||
|
||||

|
||||
|
||||
As we've been exploring in this series, cloud hype is everywhere, telling you that migrating your applications to the cloud—including hybrid cloud and multicloud—is the way to ensure a digital future for your business. This hype rarely dives into the pitfalls of moving to the cloud, nor considers the daily work of enhancing your customer's experience and agile delivery of new and legacy applications.
|
||||
|
||||
In [part one][1] of this series, we covered basic definitions (to level the playing field). We outlined our views on hybrid cloud and multi-cloud, making sure to show the dividing lines between the two. This set the stage for [part two][2], where we discussed the first of three pitfalls: Why cost is not always the obvious motivator for moving to the cloud.
|
||||
|
||||
In part three, we'll look at the second pitfall: Why moving all your workloads to the cloud is a bad idea.
|
||||
|
||||
### Everything's better in the cloud?
|
||||
|
||||
There's a misconception that everything will benefit from running in the cloud. All workloads are not equal, and not all workloads will see a measurable effect on the bottom line from moving to the cloud.
|
||||
|
||||
As [InformationWeek wrote][3], "Not all business applications should migrate to the cloud, and enterprises must determine which apps are best suited to a cloud environment." This is a hard fact that the utility company in part two of this series learned when labor costs rose while trying to move applications to the cloud. Discovering this was not a viable solution, the utility company backed up and reevaluated its applications. It found some applications were not heavily used and others had data ownership and compliance issues. Some of its applications were not certified for use in a cloud environment.
|
||||
|
||||
Sometimes running applications in the cloud is not physically possible, but other times it's not financially viable to run in the cloud.
|
||||
|
||||
Imagine a fictional online travel company. As its business grew, it expanded its on-premises hosting capacity to over 40,000 servers. It eventually became a question of expanding resources by purchasing a data center at a time, not a rack at a time. Its business consumes bandwidth at such volumes that cloud pricing models based on bandwidth usage remain prohibitive.
|
||||
|
||||
### Get a baseline

As these examples show, nothing is more important than having a thorough understanding of your application landscape. Along with a good understanding of which applications need to migrate to the cloud, you also need to understand current IT environments, know your present level of resources, and estimate your costs for moving.

Understanding your baseline, that is, each application's current situation and performance requirements (network, storage, CPU, memory, application and infrastructure behavior under load, etc.), gives you the tools to make the right decision.
If you're running servers with single-digit CPU utilization due to complex acquisition processes, a cloud with on-demand resourcing might be a great idea. However, first ask these questions:

* How long has this low utilization existed?
* Why wasn't it caught earlier?
* Isn't there a process or effective monitoring in place?
* Do you really need a cloud to fix this, or just a better process for both getting and managing your resources?
* Will you have a better process in the cloud?
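Answering the first of those questions presumes you are actually measuring utilization over time. As a rough illustration (not from the article; the server names, sample data, and 10% threshold are all hypothetical), here is a sketch that takes periodic CPU-utilization samples per server and flags the chronically underutilized ones:

```python
# Sketch: flag servers whose average CPU utilization stays in single digits.
# Sample data and the 10% threshold are illustrative, not prescriptive.

def underutilized(samples_by_server, threshold=10.0):
    """Return {server: average_utilization} for servers below the threshold."""
    flagged = {}
    for server, samples in samples_by_server.items():
        if not samples:
            continue  # no data is not the same as low utilization
        avg = sum(samples) / len(samples)
        if avg < threshold:
            flagged[server] = round(avg, 1)
    return flagged

# Hourly CPU% samples collected over part of a day (illustrative numbers).
samples = {
    "app-01": [4, 6, 5, 7, 3],       # mostly idle: an on-demand candidate
    "db-01":  [62, 75, 80, 71, 68],  # busy: leave it alone
}

print(underutilized(samples))  # {'app-01': 5.0}
```

Feeding this from whatever monitoring you already have (or discovering you have none) answers several of the questions above at once.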
### Are containers necessary?

Many believe you need containers to be successful in the cloud. This popular [catchphrase][4] sums it up nicely: "We crammed this monolith into a container and called it a microservice."

Containers are a means to an end, and using containers doesn't mean your organization is capable of running maturely in the cloud. It's not about the technology involved; it's about applications that were often written in days gone by with technology that's now outdated. If you put a tire fire into a container and then put that container on a container platform to ship, it's still functionality that someone is using.

Is that fire easier to extinguish now? These container fires just create more challenges for your DevOps teams, who are already struggling to keep up with all the changes being pushed through an organization moving everything into the cloud.

Note, it's not necessarily a bad decision to move legacy workloads into the cloud, nor is it a bad idea to containerize them. It's about weighing the benefits and the downsides, assessing the options available, and making the right choices for each of your workloads.

### Coming up

In part four of this series, we'll describe the third and final pitfall everyone should avoid with hybrid multi-cloud. Find out what the cloud means for your data.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/why-you-cant-move-everything-cloud

作者:[Eric D. Schabell][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/eschabell
[1]:https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
[2]:https://opensource.com/article/18/6/reasons-move-to-cloud
[3]:https://www.informationweek.com/cloud/10-cloud-migration-mistakes-to-avoid/d/d-id/1318829
[4]:https://speakerdeck.com/caseywest/containercon-north-america-cloud-anti-patterns?slide=22
@ -1,3 +1,5 @@

translating by bestony

How to use Fio (Flexible I/O Tester) to Measure Disk Performance in Linux
======

@ -1,3 +1,4 @@

Translating by qhwdw
# Understanding metrics and monitoring with Python

@ -1,3 +1,4 @@

Translating by qhwdw
Asynchronous Processing with Go using Kafka and MongoDB
============================================================

@ -1,58 +0,0 @@

Everything old is new again: Microservices – DXC Blogs
======

If I told you about a software architecture in which components of an application provided services to other components via a communications protocol over a network, you would say it was…

Well, it depends. If you got your start programming in the '90s, you'd say I just defined a [Service-Oriented Architecture (SOA)][1]. But if you're younger and cut your developer teeth on the cloud, you'd say: "Oh, you're talking about [microservices][2]."

You'd both be right. To really understand the differences, you need to dive deeper into these architectures.

In SOA, a service is a function that is well-defined, self-contained, and doesn't depend on the context or state of other services. There are two kinds of services: a service consumer, which requests a service, and a service provider, which fulfills the request. An SOA service can play both roles.

SOA services can trade data with each other. Two or more services can also coordinate with each other. These services carry out basic jobs such as creating a user account, providing login functionality, or validating a payment.

SOA isn't so much about modularizing an application as it is about composing an application by integrating distributed, separately maintained and deployed components. These components run on servers.

Early versions of SOA used object-oriented protocols to communicate with each other. For example, Microsoft's [Distributed Component Object Model (DCOM)][3] and [Object Request Brokers (ORBs)][4] use the [Common Object Request Broker Architecture (CORBA)][5] specification.

Later versions used messaging services such as [Java Message Service (JMS)][6] or [Advanced Message Queuing Protocol (AMQP)][7]. These service connections are called Enterprise Service Buses (ESBs). Over these buses, data, almost always in eXtensible Markup Language (XML) format, is transmitted and received.

[Microservices][2] is an architectural style in which applications are made up of loosely coupled services or modules. It lends itself to the Continuous Integration/Continuous Deployment (CI/CD) model of developing large, complex applications. An application is the sum of its modules.

Each microservice provides an application programming interface (API) endpoint. These are connected by lightweight protocols such as [REpresentational State Transfer (REST)][8] or [gRPC][9]. Data tends to be represented by [JavaScript Object Notation (JSON)][10] or [Protobuf][11].
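To make that concrete, here is a minimal sketch (not from the article) of a microservice exposing a single JSON API endpoint over HTTP, using only the Python standard library. The `/health` route and its payload are invented for illustration; a real service would add routing, authentication, and error handling:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """A single-purpose service: one endpoint, one JSON payload."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok", "service": "health"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second "service" consumes the endpoint over plain HTTP + JSON,
# knowing nothing about the provider except its URL and payload shape.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
print(payload)  # {'status': 'ok', 'service': 'health'}
```

The loose coupling is the point: the consumer depends only on the endpoint's URL and JSON contract, so the provider can be rewritten, rehosted, or containerized independently.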
Both architectures stand as an alternative to the older, monolithic style of architecture in which applications are built as single, autonomous units. For example, in a client-server model, a typical Linux, Apache, MySQL, PHP/Python/Perl (LAMP) server-side application would deal with HTTP requests, run sub-programs, and retrieve from or update the underlying MySQL database. These are all tied closely together. When you change anything, you must build and deploy a new version.

With SOA, you may need to change several components, but never the entire application. With microservices, though, you can make changes one service at a time. With microservices, you're working with a truly decoupled architecture.

Microservices are also lighter than SOA. While SOA services are deployed to servers and virtual machines (VMs), microservices are deployed in containers. The protocols are also lighter. This makes microservices more flexible than SOA. Hence, they work better with Agile shops.

So what does this mean? The long and short of it is that microservices are an SOA variation for container and cloud computing.

Old-style SOA isn't going away, but as we continue to move applications to containers, the microservice architecture will only grow more popular.
--------------------------------------------------------------------------------

via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/

作者:[Cloudy Weather][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
[2]:http://microservices.io/
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
[5]:http://www.corba.org/
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
[7]:https://www.amqp.org/
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
[9]:https://grpc.io/
[10]:https://www.json.org/
[11]:https://github.com/google/protobuf/
@ -1,127 +0,0 @@

How To Check Ubuntu Version and Other System Information Easily
======
**Brief: Wondering which Ubuntu version you are using? Here's how to check your Ubuntu version, desktop environment, and other relevant system information.**

You can easily find the Ubuntu version you are using in the command line or via the graphical interface. Knowing the exact Ubuntu version, desktop environment, and other system information helps a lot when you are trying to follow a tutorial from the web or seeking help in various forums.

In this quick tip, I'll show you various ways to check [Ubuntu][1] version and other common system information.

### How to check Ubuntu version in terminal

This is the best way to find your Ubuntu version. I could have mentioned the graphical way first, but I chose this method because it doesn't depend on the [desktop environment][2] you are using. You can use it on any Ubuntu variant.

Open a terminal (Ctrl+Alt+T) and type the following command:

```
lsb_release -a
```

The output of the above command should look like this:

```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
```

![How to check Ubuntu version in command line][3]

As you can see, the Ubuntu version installed on my system is Ubuntu 16.04, and its codename is Xenial.

Wait! Why does it say Ubuntu 16.04.4 in Description and 16.04 in Release? Which one is it, 16.04 or 16.04.4? What's the difference between the two?

The short answer is that you are using Ubuntu 16.04. That's the base image. 16.04.4 signifies the fourth point release of 16.04. A point release can be thought of as a service pack in the Windows era. Both 16.04 and 16.04.4 are correct answers here.

What's Xenial in the output? That's the codename of the Ubuntu 16.04 release. You can read this [article to know about Ubuntu naming conventions][4].

#### Some alternate ways to find Ubuntu version

Alternatively, you can use either of the following commands to find your Ubuntu version:

```
cat /etc/lsb-release
```

The output of the above command would look like this:

```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
```

![How to check Ubuntu version in command line][5]
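If you want those values in a script, the KEY=VALUE format of `/etc/lsb-release` is trivial to parse. A small sketch (the sample below embeds the output shown above; on another machine you would read the file itself and the values would differ):

```python
def parse_lsb_release(text):
    """Parse KEY=VALUE lines such as those in /etc/lsb-release."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')  # DISTRIB_DESCRIPTION is quoted
    return info

# Sample taken from the output shown above; normally you would use
# open("/etc/lsb-release").read() instead.
sample = """\
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
"""

info = parse_lsb_release(sample)
print(info["DISTRIB_RELEASE"])      # 16.04
print(info["DISTRIB_DESCRIPTION"])  # Ubuntu 16.04.4 LTS
```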
You can also use this command to know your Ubuntu version:

```
cat /etc/issue
```

The output of this command will be like this:

```
Ubuntu 16.04.4 LTS \n \l
```

Forget the \n \l. The Ubuntu version is 16.04.4 in this case, or simply Ubuntu 16.04.

### How to check Ubuntu version graphically

Checking the Ubuntu version graphically is no big deal either. I am going to use screenshots from Ubuntu 18.04 GNOME here. Things may look different if you are using Unity or some other desktop environment. This is why I recommend the command-line method discussed in the previous sections: it doesn't depend on the desktop environment.

I'll show you how to find the desktop environment in the next section.

For now, go to System Settings and look under the Details segment.

![Finding Ubuntu version graphically][6]

You should see the Ubuntu version here along with information about the desktop environment you are using, [GNOME][7] in this case.

![Finding Ubuntu version graphically][8]

### How to know the desktop environment and other system information in Ubuntu

So you just learned how to find your Ubuntu version. What about the desktop environment in use? Which Linux kernel version is being used?

Of course, there are various commands you can use to get all that information, but I'd recommend a command-line utility called [Neofetch][9]. It shows essential system information in the terminal beautifully, along with the logo of Ubuntu or whichever Linux distribution you are using.

Install Neofetch using the command below:

```
sudo apt install neofetch
```

Once installed, simply run the command `neofetch` in the terminal and see a beautiful display of system information.

![System information in Linux terminal][10]

As you can see, Neofetch shows you the Linux kernel version, Ubuntu version, desktop environment in use along with its version, themes and icons in use, etc.

I hope this helps you find your Ubuntu version and other system information. If you have suggestions to improve this article, feel free to drop them in the comment section. Ciao :)

--------------------------------------------------------------------------------

via: https://itsfoss.com/how-to-know-ubuntu-unity-version/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]:https://www.ubuntu.com/
[2]:https://en.wikipedia.org/wiki/Desktop_environment
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-1-800x216.jpeg
[4]:https://itsfoss.com/linux-code-names/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-2-800x185.jpeg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-version-system-settings.jpeg
[7]:https://www.gnome.org/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/checking-ubuntu-version-gui.jpeg
[9]:https://itsfoss.com/display-linux-logo-in-ascii/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-system-information-terminal-800x400.jpeg
@ -1,3 +1,4 @@

Translating by qhwdw
Splicing the Cloud Native Stack, One Floor at a Time
======
At Packet, our value (automated infrastructure) is super fundamental. As such, we spend an enormous amount of time looking up at the players and trends in all the ecosystems above us - as well as the very few below!

@ -1,94 +0,0 @@

translating---geekpi

Give Your Linux Desktop a Stunning Makeover With Xenlism Themes
============================================================

_Brief: The Xenlism theme pack provides an aesthetically pleasing GTK theme, colorful icons, and minimalist wallpapers to transform your Linux desktop into an eye-catching setup._

It's not every day that I dedicate an entire article to a theme, unless I find something really awesome. I used to cover themes and icons regularly. But lately, I've preferred curating lists of the [best GTK themes][6] and icon themes. This is more convenient for me and for you, as you get to see many beautiful themes in one place.

After the [Pop OS theme][7] suite, Xenlism is another theme that has left me awestruck by its look.

The Xenlism GTK theme is based on the Arc theme, an inspiration behind so many themes these days. The GTK theme provides window buttons similar to macOS, which I neither like nor dislike. The GTK theme has a flat, minimalist layout, and I like that.

There are two icon themes in the Xenlism suite. Xenlism Wildfire is the older one and has already made it to our list of [best icon themes][8].

Xenlism Wildfire Icons

Xenlism Storm is the relatively new icon theme but is equally beautiful.

Xenlism Storm Icons

Xenlism themes are open source under the GPL license.

### How to install Xenlism theme pack on Ubuntu 18.04

The Xenlism developer provides an easy way of installing the theme pack through a PPA. Though the PPA is available for Ubuntu 16.04, I found the GTK theme wasn't working with Unity. It works fine with the GNOME desktop in Ubuntu 18.04.

Open a terminal (Ctrl+Alt+T) and use the following commands one by one:

```
sudo add-apt-repository ppa:xenatt/xenlism
sudo apt update
```

This PPA offers four packages:

* xenlism-finewalls: a set of wallpapers that will be available directly in the wallpaper section of Ubuntu. One of the wallpapers has been used in the screenshot.
* xenlism-minimalism-theme: the GTK theme
* xenlism-storm-icon-theme: an icon theme (see previous screenshots)
* xenlism-wildfire-icon-theme: another icon theme with several color variants (folder colors change in the variants)

You can decide on your own which theme components you want to install. Personally, I don't see any harm in installing all the components.

```
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
```

You can use GNOME Tweaks for changing the theme and icons. If you are not familiar with the procedure, I suggest reading this tutorial to learn [how to install themes in Ubuntu 18.04 GNOME][9].

### Getting Xenlism themes in other Linux distributions

You can install Xenlism themes on other Linux distributions as well. Installation instructions for various Linux distributions can be found on its website:

[Install Xenlism Themes][10]

### What do you think?

I know not everyone will agree with me, but I loved this theme. I think you are going to see glimpses of the Xenlism theme in screenshots in future tutorials on It's FOSS.

Did you like the Xenlism theme? If not, which theme do you like the most? Share your opinion in the comment section below.

#### About the author

I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.

--------------------------------------------------------------------------------

via: https://itsfoss.com/xenlism-theme/

作者:[Abhishek Prakash][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/xenlism-theme/#comments
[3]:https://itsfoss.com/category/desktop/
[4]:https://itsfoss.com/tag/themes/
[5]:https://itsfoss.com/tag/xenlism/
[6]:https://itsfoss.com/best-gtk-themes/
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[9]:https://itsfoss.com/install-themes-ubuntu/
[10]:http://xenlism.github.io/minimalism/#install
@ -1,3 +1,5 @@

pinewall translating

How Graphics Cards Work
======
![AMD-Polaris][1]

@ -1,311 +0,0 @@

pinewall translating

Using MQTT to send and receive data for your next project
======

Last November we bought an electric car, and it raised an interesting question: When should we charge it? I was concerned about having the lowest emissions for the electricity used to charge the car, so this is a specific question: What is the rate of CO2 emissions per kWh at any given time, and when during the day is it at its lowest?

### Finding the data

I live in New York State. About 80% of our electricity comes from in-state generation, mostly through natural gas, hydro dams (much of it from Niagara Falls), nuclear, and a bit of wind, solar, and other fossil fuels. The entire system is managed by the [New York Independent System Operator][1] (NYISO), a not-for-profit entity that was set up to balance the needs of power generators, consumers, and regulatory bodies to keep the lights on in New York.

Although there is no official public API, as part of its mission, NYISO makes [a lot of open data][2] available for public consumption. This includes reporting on what fuels are being consumed to generate power, at five-minute intervals, throughout the state. These are published as CSV files on a public archive and updated throughout the day. If you know the number of megawatts coming from different kinds of fuels, you can make a reasonable approximation of how much CO2 is being emitted at any given time.

We should always be kind when building tools to collect and process open data to avoid overloading those systems. Instead of sending everyone to the archive service to download the files all the time, we can do better. We can create a low-overhead event stream that people can subscribe to and get updates as they happen. We can do that with [MQTT][3]. The target for my project ([ny-power.org][4]) was inclusion in the [Home Assistant][5] project, an open source home automation platform that has hundreds of thousands of users. If all of these users were hitting the CSV server all the time, NYISO might need to restrict access to it.

### What is MQTT?

MQTT is a publish/subscribe (pubsub) wire protocol designed with small devices in mind. Pubsub systems work like a message bus. You send a message to a topic, and any software with a subscription for that topic gets a copy of your message. As a sender, you never really know who is listening; you just provide your information to a set of topics and listen for any other topics you might care about. It's like walking into a party and listening for interesting conversations to join.

This can make for extremely efficient applications. Clients subscribe to a narrow selection of topics and only receive the information they are looking for. This saves both processing time and network bandwidth.
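The pubsub pattern itself fits in a few lines of code. Here is an in-memory sketch of the idea (not MQTT itself, which adds topic wildcards, QoS levels, retained messages, and a wire protocol on top; the class and topic names are invented for illustration):

```python
from collections import defaultdict

class Bus:
    """Toy message bus: senders publish to topics, never to receivers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The sender never knows who (if anyone) is listening.
        for callback in self.subscribers[topic]:
            callback(topic, message)

bus = Bus()
received = []
bus.subscribe("fuel-mix/Hydro", lambda t, m: received.append((t, m)))

bus.publish("fuel-mix/Hydro", {"units": "MW", "value": 3229})
bus.publish("fuel-mix/Wind", {"units": "MW", "value": 41})  # no listener: dropped

print(received)  # [('fuel-mix/Hydro', {'units': 'MW', 'value': 3229})]
```

The efficiency claim falls out of the last line: the subscriber never sees the Wind message, because it only asked for Hydro.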
As an open standard, MQTT has many open source implementations of both clients and servers. There are client libraries for every language you could imagine, even a library you can embed in Arduino for making sensor networks. There are many servers to choose from. My go-to is the [Mosquitto][6] server from Eclipse, as it's small, written in C, and can handle tens of thousands of subscribers without breaking a sweat.

### Why I like MQTT

Over the past two decades, we've come up with tried-and-true models for software applications to ask questions of services. Do I have more email? What is the current weather? Should I buy this thing now? This pattern of "ask/receive" works well much of the time; however, in a world awash with data, there are other patterns we need. The MQTT pubsub model is powerful where lots of data is published inbound to the system. Clients can subscribe to narrow slices of data and receive updates instantly when that data comes in.

MQTT also has additional interesting features, such as "last-will-and-testament" messages, which make it possible to distinguish between silence because there is no relevant data and silence because your data collectors have crashed. MQTT also has retained messages, which provide the last message on a topic to clients when they first connect. This is extremely useful for topics that update slowly.

In my work with the Home Assistant project, I've found this message bus model works extremely well for heterogeneous systems. If you dive into the Internet of Things space, you'll quickly run into MQTT everywhere.

### Our first MQTT stream

One of NYISO's CSV files is the real-time fuel mix. Every five minutes, it's updated with the fuel sources and power generated (in megawatts) during that time period.

The CSV file looks something like this:

| Time Stamp | Time Zone | Fuel Category | Gen MW |
| --- | --- | --- | --- |
| 05/09/2018 00:05:00 | EDT | Dual Fuel | 1400 |
| 05/09/2018 00:05:00 | EDT | Natural Gas | 2144 |
| 05/09/2018 00:05:00 | EDT | Nuclear | 4114 |
| 05/09/2018 00:05:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:05:00 | EDT | Other Renewables | 226 |
| 05/09/2018 00:05:00 | EDT | Wind | 1 |
| 05/09/2018 00:05:00 | EDT | Hydro | 3229 |
| 05/09/2018 00:10:00 | EDT | Dual Fuel | 1307 |
| 05/09/2018 00:10:00 | EDT | Natural Gas | 2092 |
| 05/09/2018 00:10:00 | EDT | Nuclear | 4115 |
| 05/09/2018 00:10:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:10:00 | EDT | Other Renewables | 224 |
| 05/09/2018 00:10:00 | EDT | Wind | 40 |
| 05/09/2018 00:10:00 | EDT | Hydro | 3166 |

The only odd thing in the table is the dual-fuel category. Most natural gas plants in New York can also burn other fossil fuels to generate power. During cold snaps in the winter, the natural gas supply gets constrained, and its use for home heating is prioritized over power generation. This happens at a low enough frequency that we can consider dual fuel to be natural gas (for our calculations).

The file is updated throughout the day. I created a simple data pump that polls for the file every minute and looks for updates. It publishes any new entries to the MQTT server into a set of topics that largely mirror this CSV file. The payload is turned into a JSON object that is easy to parse from nearly any programming language.

```
ny-power/upstream/fuel-mix/Hydro {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Dual Fuel {"units": "MW", "value": 1400, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Natural Gas {"units": "MW", "value": 2144, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
```

This direct reflection is a good first step in turning open data into open events. We'll be converting this into a CO2 intensity, but other applications might want these raw feeds to do other calculations with them.
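The data pump's transformation is essentially one step per CSV row: build a topic from the fuel category and a JSON payload from the rest. A sketch of that step (the topic layout mirrors the article; the function name and row representation are mine, and a real pump would also track which rows it has already published):

```python
import json

def row_to_message(row):
    """Turn one fuel-mix CSV row into an (MQTT topic, JSON payload) pair."""
    topic = "ny-power/upstream/fuel-mix/" + row["Fuel Category"]
    payload = json.dumps({
        "units": "MW",
        "value": int(row["Gen MW"]),
        "ts": row["Time Stamp"],
    })
    return topic, payload

# One row as csv.DictReader would hand it to us.
row = {"Time Stamp": "05/09/2018 00:05:00", "Time Zone": "EDT",
       "Fuel Category": "Hydro", "Gen MW": "3229"}

topic, payload = row_to_message(row)
print(topic)    # ny-power/upstream/fuel-mix/Hydro
print(payload)  # {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
```

The resulting pair matches the Hydro line in the stream shown above; publishing is then a single client call in whatever MQTT library you use.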
### MQTT topics

Topics and topic structures are one of MQTT's major design points. Unlike more "enterprisey" message buses, MQTT topics are not preregistered. A sender can create topics on the fly, the only limit being that they must be less than 220 characters. The `/` character is special; it's used to create topic hierarchies. As we'll soon see, you can subscribe to slices of data in these hierarchies.

Out of the box with Mosquitto, every client can publish to any topic. While that's great for prototyping, before going to production you'll want to add an access control list (ACL) to restrict writing to authorized applications. For example, my app's tree is accessible to everyone in read-only format, but only clients with specific credentials can publish to it.

There is no automatic schema around topics, nor a way to discover all the possible topics that clients will publish to. You'll have to encode that understanding directly into any application that consumes the MQTT bus.

So how should you design your topics? The best practice is to start with an application-specific root name, in our case, `ny-power`. After that, build a hierarchy as deep as you need for efficient subscription. The `upstream` tree will contain data that comes directly from an upstream source without any processing. Our `fuel-mix` category is a specific type of data. We may add others later.

### Subscribing to topics

Subscriptions in MQTT are simple string matches. For processing efficiency, only two wildcards are allowed:

* `#` matches everything recursively to the end
* `+` matches only until the next `/` character

It's easiest to explain this with some examples:

```
ny-power/#                   - match everything published by the ny-power app
ny-power/upstream/#          - match all raw data
ny-power/upstream/fuel-mix/+ - match all fuel types
ny-power/+/+/Hydro           - match everything about Hydro power that's
                               nested 2 deep (even if it's not in the upstream tree)
```

A wide subscription like `ny-power/#` is common for low-volume applications. Just get everything over the network and handle it in your own application. This works poorly for high-volume applications, as most of the network bandwidth is wasted when you drop most of the messages on the floor.

To stay performant at higher volumes, applications will use more precise topic slices like `ny-power/+/+/Hydro` to get exactly the cross-section of data they need.
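The two wildcard rules are simple enough to implement yourself, which is a good way to check your understanding of them. A segment-by-segment matcher sketch (real MQTT implementations handle a few extra corner cases, such as `$`-prefixed system topics and a trailing `#` also matching its parent level):

```python
def topic_matches(pattern, topic):
    """Return True if an MQTT-style filter matches a topic.
    '+' matches exactly one level; '#' matches all remaining levels."""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True           # matches everything from here on
        if i >= len(t_parts):
            return False          # pattern is deeper than the topic
        if p != "+" and p != t_parts[i]:
            return False          # literal segment mismatch
    return len(p_parts) == len(t_parts)

# The article's examples behave as described:
assert topic_matches("ny-power/#", "ny-power/upstream/fuel-mix/Wind")
assert topic_matches("ny-power/upstream/fuel-mix/+", "ny-power/upstream/fuel-mix/Hydro")
assert topic_matches("ny-power/+/+/Hydro", "ny-power/upstream/fuel-mix/Hydro")
assert not topic_matches("ny-power/upstream/fuel-mix/+", "ny-power/computed/co2")
print("all matches behave as described")
```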
### Adding our next layer of data

From this point forward, everything in the application will work off existing MQTT streams. The first additional layer of data computes the power's CO2 intensity.

Using the 2016 [U.S. Energy Information Administration][7] numbers for total emissions and total power by fuel type in New York, we can come up with an [average emissions rate][8] per megawatt hour of power.

This is encapsulated in a dedicated microservice. It has a subscription on `ny-power/upstream/fuel-mix/+`, which matches all upstream fuel-mix entries from the data pump. It then performs the calculation and publishes to a new topic tree:

```
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
```
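The calculation itself is just a weighted average: each fuel's megawatts times its emissions rate, divided by total megawatts. A sketch with made-up emission factors (the grams-per-kWh numbers below are illustrative placeholders, not the EIA-derived factors the real service uses, so the result will not match the 152.9486 shown above):

```python
# Illustrative emission factors in grams of CO2 per kWh. These are NOT the
# real EIA-derived numbers the ny-power service uses; they only show the shape
# of the calculation.
EMISSIONS_G_PER_KWH = {
    "Natural Gas": 500.0,
    "Dual Fuel": 500.0,          # treated as natural gas, per the article
    "Other Fossil Fuels": 900.0,
    "Nuclear": 0.0,
    "Hydro": 0.0,
    "Wind": 0.0,
    "Other Renewables": 0.0,
}

def co2_intensity(fuel_mix_mw):
    """Weighted-average CO2 intensity (g/kWh) for a fuel mix given in MW."""
    total_mw = sum(fuel_mix_mw.values())
    grams = sum(mw * EMISSIONS_G_PER_KWH[fuel]
                for fuel, mw in fuel_mix_mw.items())
    return grams / total_mw

# The 00:05:00 fuel mix from the stream above.
mix = {"Dual Fuel": 1400, "Natural Gas": 2144, "Nuclear": 4114,
       "Other Fossil Fuels": 4, "Other Renewables": 226, "Wind": 41,
       "Hydro": 3229}

print(round(co2_intensity(mix), 1), "g/kWh")  # for this mix and these factors
```

Swapping in real per-fuel factors is the only change needed to reproduce the service's numbers.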
In turn, there is another process that subscribes to this topic tree and archives that data into an [InfluxDB][9] instance. It then publishes a 24-hour time series to `ny-power/archive/co2/24h`, which makes it easy to graph the recent changes.
|
||||
|
||||
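The 24-hour payload is shaped so a client can plot it directly; the web console reads its `ts` and `values` arrays. A sketch of that shaping step, with a hypothetical input row format:

```javascript
// Sketch of shaping archived rows into the ny-power/archive/co2/24h
// payload. The web console reads the ts and values arrays; the input
// row format here is hypothetical.
function archivePayload(rows) {
    return {
        units: 'g / kWh',
        ts: rows.map(function (r) { return r.time; }),
        values: rows.map(function (r) { return r.value; })
    };
}
```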
This layer model works well, as the logic for each of these programs can be distinct from each other. In a more complicated system, they may not even be in the same programming language. We don't care, because the interchange format is MQTT messages, with well-known topics and JSON payloads.
|
||||
|
||||
### Consuming from the command line
|
||||
|
||||
To get a feel for MQTT in action, it's useful to just attach it to a bus and see the messages flow. The `mosquitto_sub` program included in the `mosquitto-clients` package is a simple way to do that.
|
||||
|
||||
After you've installed it, you need to provide a server hostname and the topic you'd like to listen to. The `-v` flag is important if you want to see the topics being posted to. Without that, you'll see only the payloads.
|
||||
```
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
```
|
||||
|
||||
Whenever I'm writing or debugging an MQTT application, I always have a terminal with `mosquitto_sub` running.
|
||||
|
||||
### Accessing MQTT directly from the web
|
||||
|
||||
We now have an application providing an open event stream. We can connect to it with our microservices and, with some command-line tooling, it's on the internet for all to see. But the web is still king, so it's important to get it directly into a user's browser.
|
||||
|
||||
The MQTT folks thought about this one. The protocol specification is designed to work over three transport protocols: [TCP][10], [UDP][11], and [WebSockets][12]. WebSockets are supported by all major browsers as a way to retain persistent connections for real-time applications.
|
||||
|
||||
The Eclipse project has a JavaScript implementation of MQTT called [Paho][13], which can be included in your application. The pattern is to connect to the host, set up some subscriptions, and then react to messages as they are received.
|
||||
```
// ny-power web console application
var client = new Paho.MQTT.Client(mqttHost, Number("80"), "client-" + Math.random());

// set callback handlers
client.onMessageArrived = onMessageArrived;

// connect the client
client.reconnect = true;
client.connect({onSuccess: onConnect});

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("ny-power/computed/co2");
    client.subscribe("ny-power/archive/co2/24h");
    client.subscribe("ny-power/upstream/fuel-mix/#");
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.destinationName + message.payloadString);
    if (message.destinationName == "ny-power/computed/co2") {
        var data = JSON.parse(message.payloadString);
        $("#co2-per-kwh").html(Math.round(data.value));
        $("#co2-units").html(data.units);
        $("#co2-updated").html(data.ts);
    }
    if (message.destinationName.startsWith("ny-power/upstream/fuel-mix")) {
        fuel_mix_graph(message);
    }
    if (message.destinationName == "ny-power/archive/co2/24h") {
        var data = JSON.parse(message.payloadString);
        var plot = [
            {
                x: data.ts,
                y: data.values,
                type: 'scatter'
            }
        ];
        var layout = {
            yaxis: {
                title: "g CO2 / kWh",
            }
        };
        Plotly.newPlot('co2_graph', plot, layout);
    }
}
```
|
||||
|
||||
This application subscribes to a number of topics because we're going to display a few different kinds of data. The `ny-power/computed/co2` topic provides us a topline number of current intensity. Whenever we receive that topic, we replace the related contents on the site.
|
||||
|
||||
|
||||
![NY ISO Grid CO2 Intensity][15]
|
||||
|
||||
NY ISO Grid CO2 Intensity graph from [ny-power.org][4].
|
||||
|
||||
The `ny-power/archive/co2/24h` topic provides a time series that can be loaded into a [Plotly][16] line graph. And `ny-power/upstream/fuel-mix` provides the data needed to provide a nice bar graph of the current fuel mix.
|
||||
|
||||
|
||||
![Fuel mix on NYISO grid][18]
|
||||
|
||||
Fuel mix on NYISO grid, [ny-power.org][4].
|
||||
|
||||
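The `fuel_mix_graph` helper called from the message handler could be sketched like this (illustrative only; the real implementation lives in the ny-power repository):

```javascript
// A sketch of the fuel_mix_graph helper (illustrative; the real
// implementation lives in the ny-power repository).
var fuelMix = {};   // latest megawatt reading per fuel, keyed by fuel name

function fuelFromTopic(topic) {
    // topics look like ny-power/upstream/fuel-mix/<fuel>
    return topic.split("/").pop();
}

function fuel_mix_graph(message) {
    fuelMix[fuelFromTopic(message.destinationName)] =
        JSON.parse(message.payloadString).value;

    var fuels = Object.keys(fuelMix);
    var plot = [{
        x: fuels,
        y: fuels.map(function (k) { return fuelMix[k]; }),
        type: 'bar'
    }];
    Plotly.newPlot('fuel_mix_graph', plot, {yaxis: {title: "MW"}});
}
```

Because each fuel arrives on its own topic, the bar graph fills in incrementally as the retained messages are delivered.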
This is a dynamic website that is not polling the server. It is attached to the MQTT bus and listening on its open WebSocket. The webpage is a pub/sub client just like the data pump and the archiver. This one just happens to be executing in your browser instead of a microservice in the cloud.
|
||||
|
||||
You can see the page in action at <http://ny-power.org>. That includes both the graphics and a real-time MQTT console to see the messages as they come in.
|
||||
|
||||
### Diving deeper
|
||||
|
||||
The entire ny-power.org application is [available as open source on GitHub][19]. You can also check out [this architecture overview][20] to see how it was built as a set of Kubernetes microservices deployed with [Helm][21]. You can see another interesting MQTT application example with [this code pattern][22] using MQTT and OpenWhisk to translate text messages in real time.
|
||||
|
||||
MQTT is used extensively in the Internet of Things space, and many more examples of MQTT use can be found at the [Home Assistant][23] project.
|
||||
|
||||
And if you want to dive deep into the protocol, [mqtt.org][3] has all the details for this open standard.
|
||||
|
||||
To learn more, attend Sean Dague's talk, [Adding MQTT to your toolkit][24], at [OSCON][25], which will be held July 16-19 in Portland, Oregon.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/mqtt
|
||||
|
||||
作者:[Sean Dague][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/sdague
|
||||
[1]:http://www.nyiso.com/public/index.jsp
|
||||
[2]:http://www.nyiso.com/public/markets_operations/market_data/reports_info/index.jsp
|
||||
[3]:http://mqtt.org/
|
||||
[4]:http://ny-power.org/#
|
||||
[5]:https://www.home-assistant.io
|
||||
[6]:https://mosquitto.org/
|
||||
[7]:https://www.eia.gov/
|
||||
[8]:https://github.com/IBM/ny-power/blob/master/src/nypower/calc.py#L1-L60
|
||||
[9]:https://www.influxdata.com/
|
||||
[10]:https://en.wikipedia.org/wiki/Transmission_Control_Protocol
|
||||
[11]:https://en.wikipedia.org/wiki/User_Datagram_Protocol
|
||||
[12]:https://en.wikipedia.org/wiki/WebSocket
|
||||
[13]:https://www.eclipse.org/paho/
|
||||
[14]:/file/400041
|
||||
[15]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso-co2intensity.png (NY ISO Grid CO2 Intensity)
|
||||
[16]:https://plot.ly/
|
||||
[17]:/file/400046
|
||||
[18]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso_fuel-mix.png (Fuel mix on NYISO grid)
|
||||
[19]:https://github.com/IBM/ny-power
|
||||
[20]:https://developer.ibm.com/code/patterns/use-mqtt-stream-real-time-data/
|
||||
[21]:https://helm.sh/
|
||||
[22]:https://developer.ibm.com/code/patterns/deploy-serverless-multilingual-conference-room/
|
||||
[23]:https://www.home-assistant.io/
|
||||
[24]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/77317
|
||||
[25]:https://conferences.oreilly.com/oscon/oscon-or
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
How to reset, revert, and return to previous states in Git
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Bitcoin is a Cult — Adam Caudill
|
||||
======
|
||||
The Bitcoin community has changed greatly over the years; from technophiles that could explain a [Merkle tree][1] in their sleep, to speculators driven by the desire for a quick profit & blockchain startups seeking billion dollar valuations led by people who don’t even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and the cryptocurrencies will fundamentally change the world’s economy.
|
||||
|
322
sources/tech/20180705 Testing Node.js in 2018.md
Normal file
322
sources/tech/20180705 Testing Node.js in 2018.md
Normal file
@ -0,0 +1,322 @@
|
||||
BriFuture is translating
|
||||
|
||||
|
||||
Testing Node.js in 2018
|
||||
============================================================
|
||||
|
||||

|
||||
|
||||
[Stream][4] powers feeds for over 300 million end users. With all of those users relying on our infrastructure, we’re very good about testing everything that gets pushed into production. Our primary codebase is written in Go, with some remaining bits of Python.
|
||||
|
||||
Our recent showcase application, [Winds 2.0][5], is built with Node.js and we quickly learned that our usual testing methods in Go and Python didn’t quite fit. Furthermore, creating a proper test suite requires a bit of upfront work in Node.js as the frameworks we are using don’t offer any type of built-in test functionality.
|
||||
|
||||
Setting up a good test framework can be tricky regardless of what language you’re using. In this post, we’ll uncover the hard parts of testing with Node.js, the various tooling we decided to utilize in Winds 2.0, and point you in the right direction for when it comes time for you to write your next set of tests.
|
||||
|
||||
### Why Testing is so Important
|
||||
|
||||
We’ve all pushed a bad commit to production and faced the consequences. It’s not a fun thing to have happen. Writing a solid test suite is not only a good sanity check, but it allows you to completely refactor code and feel confident that your codebase is still functional. This is especially important if you’ve just launched.
|
||||
|
||||
If you’re working with a team, it’s extremely important that you have test coverage. Without it, it’s nearly impossible for other developers on the team to know if their contributions will result in a breaking change (ouch).
|
||||
|
||||
Writing tests also encourages you and your teammates to split up code into smaller pieces. This makes it much easier to understand your code and fix bugs along the way. The productivity gains are even bigger because you catch bugs early on.
|
||||
|
||||
Finally, without tests, your codebase might as well be a house of cards. There is simply zero certainty that your code is stable.
|
||||
|
||||
### The Hard Parts
|
||||
|
||||
In my opinion, most of the testing problems we ran into with Winds were specific to Node.js. The ecosystem is always growing. For example, if you are on macOS and run “brew upgrade” (with homebrew installed), your chances of seeing a new version of Node.js are quite high. With Node.js moving quickly and libraries following close behind, keeping up to date with the latest libraries is difficult.
|
||||
|
||||
Below are a few pain points that immediately come to mind:
|
||||
|
||||
1. Testing in Node.js is very opinionated and un-opinionated at the same time. Many people have different views on how a test infrastructure should be built and measured for success. The sad part is that there is no golden standard (yet) for how you should approach testing.
|
||||
|
||||
2. There are a large number of frameworks available to use in your application. However, they are generally minimal with no well-defined configuration or boot process. This leads to side effects that are very common, and yet hard to diagnose; so, you’ll likely end up writing your own test runner from scratch.
|
||||
|
||||
3. It’s almost guaranteed that you will be _required_ to write your own test runner (we’ll get to this in a minute).
|
||||
|
||||
The situations listed above are not ideal and it’s something that the Node.js community needs to address sooner rather than later. If other languages have figured it out, I think it’s time for Node.js, a widely adopted language, to figure it out as well.
|
||||
|
||||
### Writing Your Own Test Runner
|
||||
|
||||
So… you’re probably wondering what a test runner _is_. To be honest, it’s not that complicated. A test runner is the highest-level component in the test suite. It allows you to specify global configurations and environments, as well as import fixtures. One would assume this would be simple and easy to do… Right? Not so fast…
|
||||
|
||||
What we learned is that, although there is a solid number of test frameworks out there, not a single one for Node.js provides a unified way to construct your test runner. Sadly, it’s up to the developer to do so. Here’s a quick breakdown of the requirements for a test runner:
|
||||
|
||||
* Ability to load different configurations (e.g. local, test, development) and ensure that you _NEVER_ load a production configuration — you can guess what goes wrong when that happens.
|
||||
|
||||
* Lift and seed a database with dummy data for testing. This must work for various databases, whether it be MySQL, PostgreSQL, MongoDB, or any other, for that matter.
|
||||
|
||||
* Ability to load fixtures (files with seed data for testing in a development environment).
|
||||
|
||||
With Winds, we chose to use Mocha as our test runner. Mocha provides an easy and programmatic way to run tests on an ES6 codebase via command-line tools (integrated with Babel).
|
||||
|
||||
To kick off the tests, we register the Babel module loader ourselves. This gives us finer-grained control over which modules are imported before Babel overrides the Node.js module-loading process, giving us the opportunity to mock modules before any tests are run.
|
||||
|
||||
Additionally, we use Mocha’s test runner feature to pre-assign HTTP handlers to specific requests. We do this because the normal initialization code is not run during tests (server interactions are mocked by the Chai HTTP plugin), and we run some safety checks to ensure we are not connecting to production databases.
|
||||
|
||||
While this isn’t part of the test runner, having a fixture loader is an important part of our test suite. We examined existing solutions; however, we settled on writing our own helper so that it was tailored to our requirements. With our solution, we can load fixtures with complex data-dependencies by following an easy ad-hoc convention when generating or writing fixtures by hand.
|
||||
|
||||
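A minimal sketch of what such a fixture loader can look like. The names and the in-memory registry here are hypothetical; our real helper reads JSON files and inserts the documents into MongoDB:

```javascript
// Minimal sketch of a fixture loader with dependency resolution. The
// registry below stands in for JSON files on disk, and the names are
// hypothetical; the real helper inserts the documents into MongoDB.
var fixtures = {
    'initial-data': { deps: [], docs: [{ kind: 'config', value: 1 }] },
    'articles': { deps: ['initial-data'], docs: [{ kind: 'article', title: 'a' }] }
};

function loadFixture(name, loaded, out) {
    loaded = loaded || {};          // guard against loading a fixture twice
    out = out || [];
    if (loaded[name]) return out;
    loaded[name] = true;
    fixtures[name].deps.forEach(function (dep) {
        loadFixture(dep, loaded, out);   // dependencies load first, depth-first
    });
    out.push.apply(out, fixtures[name].docs);
    return out;
}
```

The ad-hoc convention is simply that a fixture names its dependencies, and the loader guarantees they are applied first.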
### Tooling for Winds
|
||||
|
||||
Although the process was cumbersome, we were able to find the right balance of tools and frameworks to make proper testing become a reality for our backend API. Here’s what we chose to go with:
|
||||
|
||||
### Mocha ☕
|
||||
|
||||
[Mocha][6], described as a “feature-rich JavaScript test framework running on Node.js”, was our immediate choice of tooling for the job. With well over 15k stars, many backers, sponsors, and contributors, we knew it was the right framework for the job.
|
||||
|
||||
### Chai 🥃
|
||||
|
||||
Next up was our assertion library. We chose to go with the traditional approach, which is what works best with Mocha — [Chai][7]. Chai is a BDD and TDD assertion library for Node.js. With a simple API, Chai was easy to integrate into our application and allowed us to easily assert what we should _expect_ to be returned from the Winds API. Best of all, writing tests feels natural with Chai. Here’s a short example:
|
||||
|
||||
```
describe('retrieve user', () => {
    let user;

    before(async () => {
        await loadFixture('user');
        user = await User.findOne({email: authUser.email});
        expect(user).to.not.be.null;
    });

    after(async () => {
        await User.remove().exec();
    });

    describe('valid request', () => {
        it('should return 200 and the user resource, including the email field, when retrieving the authenticated user', async () => {
            const response = await withLogin(request(api).get(`/users/${user._id}`), authUser);

            expect(response).to.have.status(200);
            expect(response.body._id).to.equal(user._id.toString());
        });

        it('should return 200 and the user resource, excluding the email field, when retrieving another user', async () => {
            const anotherUser = await User.findOne({email: 'another_user@email.com'});

            const response = await withLogin(request(api).get(`/users/${anotherUser.id}`), authUser);

            expect(response).to.have.status(200);
            expect(response.body._id).to.equal(anotherUser._id.toString());
            expect(response.body).to.not.have.an('email');
        });
    });

    describe('invalid requests', () => {
        it('should return 404 if requested user does not exist', async () => {
            const nonExistingId = '5b10e1c601e9b8702ccfb974';
            expect(await User.findOne({_id: nonExistingId})).to.be.null;

            const response = await withLogin(request(api).get(`/users/${nonExistingId}`), authUser);
            expect(response).to.have.status(404);
        });
    });
});
```
|
||||
|
||||
### Sinon 🧙
|
||||
|
||||
With the ability to work with any unit testing framework, [Sinon][8] was our first choice for a mocking library. Again, a super clean integration with minimal setup, Sinon turns mocking requests into a simple and easy process. Their website has an extremely friendly user experience and offers up easy steps to integrate Sinon with your test suite.
|
||||
|
||||
### Nock 🔮
|
||||
|
||||
For all external HTTP requests, we use [nock][9], a robust HTTP mocking library that really comes in handy when you have to communicate with a third party API (such as [Stream’s REST API][10]). There’s not much to say about this little library aside from the fact that it is awesome at what it does, and that’s why we like it. Here’s a quick example of us calling our [personalization][11] engine for Stream:
|
||||
|
||||
```
nock(config.stream.baseUrl)
    .get(/winds_article_recommendations/)
    .reply(200, { results: [{foreign_id:`article:${article.id}`}] });
```
|
||||
|
||||
### Mock-require 🎩
|
||||
|
||||
The library [mock-require][12] allows you to mock dependencies on external code. In a single line of code, you can replace a module, and mock-require will step in when some code attempts to import that module. It’s a small, minimalistic, but robust library, and we’re big fans.
|
||||
|
||||
### Istanbul 🔭
|
||||
|
||||
[Istanbul][13] is a JavaScript code coverage tool that computes statement, line, function and branch coverage with module loader hooks to transparently add coverage when running tests. Although we have similar functionality with CodeCov (see next section), this is a nice tool to have when running tests locally.
|
||||
|
||||
### The End Result — Working Tests
|
||||
|
||||
With all of the libraries, including the test runner mentioned above, let’s have a look at what a full test looks like (you can have a look at our entire test suite [here][14]):
|
||||
|
||||
```
import nock from 'nock';
import { expect, request } from 'chai';

import api from '../../src/server';
import Article from '../../src/models/article';
import config from '../../src/config';
import { dropDBs, loadFixture, withLogin } from '../utils.js';

describe('Article controller', () => {
    let article;

    before(async () => {
        await dropDBs();
        await loadFixture('initial-data', 'articles');
        article = await Article.findOne({});
        expect(article).to.not.be.null;
        expect(article.rss).to.not.be.null;
    });

    describe('get', () => {
        it('should return the right article via /articles/:articleId', async () => {
            let response = await withLogin(request(api).get(`/articles/${article.id}`));
            expect(response).to.have.status(200);
        });
    });

    describe('get parsed article', () => {
        it('should return the parsed version of the article', async () => {
            const response = await withLogin(
                request(api).get(`/articles/${article.id}`).query({ type: 'parsed' })
            );
            expect(response).to.have.status(200);
        });
    });

    describe('list', () => {
        it('should return the list of articles', async () => {
            let response = await withLogin(request(api).get('/articles'));
            expect(response).to.have.status(200);
        });
    });

    describe('list from personalization', () => {
        after(function () {
            nock.cleanAll();
        });

        it('should return the list of articles', async () => {
            nock(config.stream.baseUrl)
                .get(/winds_article_recommendations/)
                .reply(200, { results: [{foreign_id:`article:${article.id}`}] });

            const response = await withLogin(
                request(api).get('/articles').query({
                    type: 'recommended',
                })
            );
            expect(response).to.have.status(200);
            expect(response.body.length).to.be.at.least(1);
            expect(response.body[0].url).to.eq(article.url);
        });
    });
});
```
|
||||
|
||||
### Continuous Integration
|
||||
|
||||
There are a lot of continuous integration services available, but we like to use [Travis CI][15] because they love the open-source environment just as much as we do. Given that Winds is open-source, it made for a perfect fit.
|
||||
|
||||
Our integration is rather simple — we have a [.travis.yml][16] file that sets up the environment and kicks off our tests via a simple [npm][17] command. The coverage reports back to GitHub, where we have a clear picture of whether or not our latest codebase or PR passes our tests. The GitHub integration is great, as it is visible without us having to go to Travis CI to look at the results. Below is a screenshot of GitHub when viewing the PR (after tests):
|
||||
|
||||

|
||||
|
||||
In addition to Travis CI, we use a tool called [CodeCov][18]. CodeCov is similar to [Istanbul][19]; however, it’s a visualization tool that allows us to easily see code coverage, files changed, lines modified, and all sorts of other goodies. Though visualizing this data is possible without CodeCov, it’s nice to have everything in one spot.
|
||||
|
||||
### What We Learned
|
||||
|
||||

|
||||
|
||||
We learned a lot throughout the process of developing our test suite. With no “correct” way of doing things, we decided to set out and create our own test flow by sorting through the available libraries to find ones that were promising enough to add to our toolbox.
|
||||
|
||||
What we ultimately learned is that testing in Node.js is not as easy as it may sound. Hopefully, as Node.js continues to grow, the community will come together and build a rock solid library that handles everything test related in a “correct” manner.
|
||||
|
||||
Until then, we’ll continue to use our test suite, which is open-source on the [Winds GitHub repository][20].
|
||||
|
||||
### Limitations
|
||||
|
||||
#### No Easy Way to Create Fixtures
|
||||
|
||||
Frameworks and languages, such as Python’s Django, have easy ways to create fixtures. With Django, for example, you can use the following commands to automate the creation of fixtures by dumping data into a file:
|
||||
|
||||
The following command will dump the whole database into a db.json file: `./manage.py dumpdata > db.json`

The following command will dump only the content of the django admin.logentry table: `./manage.py dumpdata admin.logentry > logentry.json`

The following command will dump the content of the django auth.user table: `./manage.py dumpdata auth.user > user.json`
|
||||
|
||||
There’s no easy way to create a fixture in Node.js. What we ended up doing is using MongoDB Compass and exporting JSON from there. This resulted in a nice fixture, as shown below (however, it was a tedious process and prone to error):
|
||||
|
||||
|
||||

|
||||
|
||||
#### Unintuitive Module Loading When Using Babel, Mocked Modules, and Mocha Test-Runner
|
||||
|
||||
To support a broader variety of Node versions and have access to the latest additions to the JavaScript standard, we are using Babel to transpile our ES6 codebase to ES5. The Node.js module system is based on the CommonJS standard, whereas the ES6 module system has different semantics.
|
||||
|
||||
Babel emulates ES6 module semantics on top of the Node.js module system, but because we are interfering with module loading by using mock-require, we are embarking on a journey through weird module loading corner cases, which seem unintuitive and can lead to multiple independent versions of the module imported and initialized and used throughout the codebase. This complicates mocking and global state management during testing.
|
||||
|
||||
#### Inability to Mock Functions Used Within the Module They Are Declared in When Using ES6 Modules
|
||||
|
||||
When a module exports multiple functions where one calls the other, it’s impossible to mock the function being used inside the module. The reason is that when you require an ES6 module you are presented with a separate set of references from the one used inside the module. Any attempt to rebind the references to point to new values does not really affect the code inside the module, which will continue to use the original function.
|
||||
|
||||
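The effect is easy to demonstrate with a self-contained, module-like object (hypothetical code, not from Winds):

```javascript
// Self-contained illustration of the limitation (hypothetical code, not
// from Winds). outer() captured a direct reference to inner() when the
// module was defined, so rebinding the exported name changes nothing.
var mod = (function () {
    function inner() { return 'real'; }
    function outer() { return inner(); }   // direct call, not via the export object
    return { inner: inner, outer: outer };
})();

mod.inner = function () { return 'mocked'; };   // attempt to mock the export
// mod.outer() still calls the original inner(); it never sees the mock
```

The only reliable workarounds are to route internal calls through the export object or to split the functions into separate modules.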
### Final Thoughts
|
||||
|
||||
Testing Node.js applications is a complicated process because the ecosystem is always evolving. It’s important to stay on top of the latest and greatest tools so you don’t fall behind.
|
||||
|
||||
There are so many outlets for JavaScript related news these days that it’s hard to keep up to date with all of them. Following email newsletters such as [JavaScript Weekly][21] and [Node Weekly][22] is a good start. Beyond that, joining a subreddit such as [/r/node][23] is a great idea. If you like to stay on top of the latest trends, [State of JS][24] does a great job at helping developers visualize trends in the testing world.
|
||||
|
||||
Lastly, here are a couple of my favorite blogs where articles often pop up:
|
||||
|
||||
* [Hacker Noon][1]
|
||||
|
||||
* [Free Code Camp][2]
|
||||
|
||||
* [Bits and Pieces][3]
|
||||
|
||||
Think I missed something important? Let me know in the comments, or on Twitter – [@NickParsons][25].
|
||||
|
||||
Also, if you’d like to check out Stream, we have a great 5 minute tutorial on our website. Give it a shot [here][26].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Nick Parsons
|
||||
|
||||
Dreamer. Doer. Engineer. Developer Evangelist https://getstream.io.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://hackernoon.com/testing-node-js-in-2018-10a04dd77391
|
||||
|
||||
作者:[Nick Parsons][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://hackernoon.com/@nparsons08?source=post_header_lockup
|
||||
[1]:https://hackernoon.com/
|
||||
[2]:https://medium.freecodecamp.org/
|
||||
[3]:https://blog.bitsrc.io/
|
||||
[4]:https://getstream.io/
|
||||
[5]:https://getstream.io/winds
|
||||
[6]:https://github.com/mochajs/mocha
|
||||
[7]:http://www.chaijs.com/
|
||||
[8]:http://sinonjs.org/
|
||||
[9]:https://github.com/node-nock/nock
|
||||
[10]:https://getstream.io/docs_rest/
|
||||
[11]:https://getstream.io/personalization
|
||||
[12]:https://github.com/boblauer/mock-require
|
||||
[13]:https://github.com/gotwarlost/istanbul
|
||||
[14]:https://github.com/GetStream/Winds/tree/master/api/test
|
||||
[15]:https://travis-ci.org/
|
||||
[16]:https://github.com/GetStream/Winds/blob/master/.travis.yml
|
||||
[17]:https://www.npmjs.com/
|
||||
[18]:https://codecov.io/#features
|
||||
[19]:https://github.com/gotwarlost/istanbul
|
||||
[20]:https://github.com/GetStream/Winds/tree/master/api/test
|
||||
[21]:https://javascriptweekly.com/
|
||||
[22]:https://nodeweekly.com/
|
||||
[23]:https://www.reddit.com/r/node/
|
||||
[24]:https://stateofjs.com/2017/testing/results/
|
||||
[25]:https://twitter.com/@nickparsons
|
||||
[26]:https://getstream.io/try-the-api
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
How to Run Windows Apps on Android with Wine
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
A sysadmin's guide to network management
|
||||
======
|
||||
|
||||
|
@ -1,70 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Boost your typing with emoji in Fedora 28 Workstation
|
||||
======
|
||||
|
||||

|
||||
|
||||
Fedora 28 Workstation ships with a feature that allows you to quickly search, select and input emoji using your keyboard. Emoji, cute ideograms that are part of Unicode, are used fairly widely in messaging and especially on mobile devices. You may have heard the idiom “A picture is worth a thousand words.” This is exactly what emoji provide: simple images for you to use in communication. Each release of Unicode adds more, with over 200 new ones added in past releases. This article shows you how to make them easy to use in your Fedora system.
|
||||
|
||||
It’s great to see emoji numbers growing. But at the same time it brings the challenge of how to input them in a computing device. Many people already use these symbols for input in mobile devices or social networking sites.
|
||||
|
||||
[**Editors’ note:** This article is an update to a previously published piece on this topic.]
|
||||
|
||||
### Enabling Emoji input on Fedora 28 Workstation
|
||||
|
||||
The new emoji input method ships by default in Fedora 28 Workstation. To use it, you must enable it using the Region and Language settings dialog. Open the Region and Language dialog from the main Fedora Workstation settings, or search for it in the Overview.
|
||||
|
||||
[![Region & Language settings tool][1]][2]
|
||||
|
||||
Choose the + control to add an input source. The following dialog appears:
|
||||
|
||||
[![Adding an input source][3]][4]
|
||||
|
||||
Choose the final option (three dots) to expand the selections fully. Then, find Other at the bottom of the list and select it:
|
||||
|
||||
[![Selecting other input sources][5]][6]
|
||||
|
||||
In the next dialog, find the Typing booster choice and select it:
|
||||
|
||||
[![][7]][8]
|
||||
|
||||
This advanced input method is powered behind the scenes by iBus. The advanced input methods are identifiable in the list by the cogs icon on the right of the list.
|
||||
|
||||
The Input Method drop-down automatically appears in the GNOME Shell top bar. Ensure your default method — in this example, English (US) — is selected as the current method, and you’ll be ready to input.
|
||||
|
||||
[![Input method dropdown in Shell top bar][9]][10]
|
||||
|
||||
## Using the new Emoji input method
|
||||
|
||||
Now the Emoji input method is enabled, search for emoji by pressing the keyboard shortcut **Ctrl+Shift+E**. A pop-over dialog appears where you can type a search term, such as smile, to find matching symbols.
|
||||
|
||||
[![Searching for smile emoji][11]][12]
|
||||
|
||||
Use the arrow keys to navigate the list. Then, hit **Enter** to make your selection, and the glyph will be placed as input.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/boost-typing-emoji-fedora-28-workstation/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/pfrields/
|
||||
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41-1024x718.png
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41.png
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46-1024x839.png
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46.png
|
||||
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15-1024x839.png
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15.png
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41-1024x839.png
|
||||
[8]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41.png
|
||||
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24-300x244.png
|
||||
[10]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24.png
|
||||
[11]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31-290x300.png
|
||||
[12]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31.png
|
@ -1,74 +0,0 @@
|
||||
15 open source applications for MacOS
|
||||
======
|
||||
|
||||

|
||||
|
||||
I use open source tools whenever and wherever I can. I returned to college a while ago to earn a master's degree in educational leadership. Even though I switched from my favorite Linux laptop to a MacBook Pro (since I wasn't sure Linux would be accepted on campus), I decided I would keep using my favorite tools, even on MacOS, as much as I could.
|
||||
|
||||
Fortunately, it was easy, and no professor ever questioned what software I used. Even so, I couldn't keep a secret.
|
||||
|
||||
I knew some of my classmates would eventually assume leadership positions in school districts, so I shared information about the open source applications described below with many of my MacOS or Windows-using classmates. After all, open source software is really about freedom and goodwill. I also wanted them to know that it would be easy to provide their students with world-class applications at little cost. Most of them were surprised and amazed because, as we all know, open source software doesn't have a marketing team except users like you and me.
|
||||
|
||||
### My MacOS learning curve
|
||||
|
||||
Through this process, I learned some of the nuances of MacOS. While most of the open source tools worked as I was used to, others required different installation methods. Tools like [yum][1], [DNF][2], and [APT][3] do not exist in the MacOS world—and I really missed them.
|
||||
|
||||
Some MacOS applications required dependencies and installations that were more difficult than what I was accustomed to with Linux. Nonetheless, I persisted. In the process, I learned how I could keep the best software on my new platform. Even much of MacOS's core is [open source][4].
|
||||
|
||||
Also, my Linux background made it easy to get comfortable with the MacOS command line. I still use it to create and copy files, add users, and use other [utilities][5] like cat, tac, more, less, and tail.
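As a quick illustration of two of those utilities (a generic example, not tied to any particular coursework):

```shell
# tac prints a file's lines in reverse order; tail prints only the last lines.
printf 'one\ntwo\nthree\n' | tac          # prints: three, two, one
printf 'one\ntwo\nthree\n' | tail -n 1    # prints: three
```

Both are part of GNU coreutils, so they behave the same on Linux and (via a coreutils install) on MacOS.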
|
||||
|
||||
### 15 great open source applications for MacOS
|
||||
|
||||
* The college required that I submit most of my work electronically in DOCX format, and I did that easily, first with [OpenOffice][6] and later using [LibreOffice][7] to produce my papers.
|
||||
* When I needed to produce graphics for presentations, I used my favorite graphics applications, [GIMP][8] and [Inkscape][9].
|
||||
* My favorite podcast creation tool is [Audacity][10]. It's much simpler to use than the proprietary application that ships with the Mac. I use it to record interviews and create soundtracks for video presentations.
|
||||
* I discovered early on that I could use the [VideoLan][11] (VLC) media player on MacOS.
|
||||
* MacOS's built-in proprietary video creation tool is a good product, but you can easily install and use [OpenShot][12], which is a great content creation tool.
|
||||
* When I need to analyze networks for my clients, I use the easy-to-install [Nmap][13] (Network Mapper) and [Wireshark][14] tools on my Mac.
|
||||
* I use [VirtualBox][15] for MacOS to demonstrate Raspbian, Fedora, Ubuntu, and other Linux distributions, as well as Moodle, WordPress, Drupal, and Koha when I provide training for librarians and other educators.
|
||||
* I make boot drives on my MacBook using [Etcher.io][16]. I just download the ISO file and burn it on a USB stick drive.
|
||||
* I think [Firefox][17] is easier and more secure to use than the proprietary browser that comes with the MacBook Pro, and it allows me to synchronize my bookmarks across operating systems.
|
||||
* When it comes to eBook readers, [Calibre][18] cannot be beaten. It is easy to download and install, and you can even configure it for a [classroom eBook server][19] with a few clicks.
|
||||
  * Recently I have been teaching Python to middle school students, and I have found it easy to download and install Python 3 and the IDLE3 editor from [Python.org][20]. I have also enjoyed learning about data science and sharing that with students. Whether you're interested in Python or R, I recommend you download and [install][21] the [Anaconda distribution][22]. It contains the great iPython editor, RStudio, Jupyter Notebooks, and JupyterLab, along with some other applications.
|
||||
* [HandBrake][23] is a great way to turn your old home video DVDs into MP4s, which you can share on YouTube, Vimeo, or your own [Kodi][24] server on MacOS.
|
||||
|
||||
|
||||
|
||||
Now it's your turn: What open source software are you using on MacOS (or Windows)? Share your favorites in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/open-source-tools-macos
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://en.wikipedia.org/wiki/Yum_(software)
|
||||
[2]:https://en.wikipedia.org/wiki/DNF_(software)
|
||||
[3]:https://en.wikipedia.org/wiki/APT_(Debian)
|
||||
[4]:https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemTechnology/SystemTechnology.html
|
||||
[5]:https://www.gnu.org/software/coreutils/coreutils.html
|
||||
[6]:https://www.openoffice.org/
|
||||
[7]:https://www.libreoffice.org/
|
||||
[8]:https://www.gimp.org/
|
||||
[9]:https://inkscape.org/en/
|
||||
[10]:https://www.audacityteam.org/
|
||||
[11]:https://www.videolan.org/index.html
|
||||
[12]:https://www.openshot.org/
|
||||
[13]:https://nmap.org/
|
||||
[14]:https://www.wireshark.org/
|
||||
[15]:https://www.virtualbox.org/
|
||||
[16]:https://etcher.io/
|
||||
[17]:https://www.mozilla.org/en-US/firefox/new/
|
||||
[18]:https://calibre-ebook.com/
|
||||
[19]:https://opensource.com/article/17/6/raspberrypi-ebook-server
|
||||
[20]:https://www.python.org/downloads/release/python-370/
|
||||
[21]:https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
[22]:https://www.anaconda.com/download/#macos
|
||||
[23]:https://handbrake.fr/
|
||||
[24]:https://kodi.tv/download
|
@ -1,92 +0,0 @@
|
||||
6 open source cryptocurrency wallets
|
||||
======
|
||||
|
||||

|
||||
|
||||
Without crypto wallets, cryptocurrencies like Bitcoin and Ethereum would just be another pie-in-the-sky idea. These wallets are essential for keeping, sending, and receiving cryptocurrencies.
|
||||
|
||||
The revolutionary growth of [cryptocurrencies][1] is attributed to the idea of decentralization, where a central authority is absent from the network and everyone has a level playing field. Open source technology is at the heart of cryptocurrencies and [blockchain][2] networks. It has enabled the vibrant, nascent industry to reap the benefits of decentralization—such as immutability, transparency, and security.
|
||||
|
||||
If you're looking for a free and open source cryptocurrency wallet, read on to start exploring whether any of the following options meet your needs.
|
||||
|
||||
### 1\. Copay
|
||||
|
||||
[Copay][3] is an open source Bitcoin crypto wallet that promises convenient storage. The software is released under the [MIT License][4].
|
||||
|
||||
The Copay server is also open source. Therefore, developers and Bitcoin enthusiasts can assume complete control of their activities by deploying their own applications on the server.
|
||||
|
||||
The Copay wallet empowers you to take the security of your Bitcoin in your own hands, instead of trusting unreliable third parties. It allows you to use multiple signatories for approving transactions and supports the storage of multiple, separate wallets within the same app.
|
||||
|
||||
Copay is available for a range of platforms, such as Android, Windows, MacOS, Linux, and iOS.
|
||||
|
||||
### 2\. MyEtherWallet
|
||||
|
||||
As the name implies, [MyEtherWallet][5] (abbreviated MEW) is a wallet for Ethereum transactions. It is open source (under the [MIT License][6]) and is completely online, accessible through a web browser.
|
||||
|
||||
The wallet has a simple client-side interface, which allows you to participate in the Ethereum blockchain confidently and securely.
|
||||
|
||||
### 3\. mSIGNA
|
||||
|
||||
[mSIGNA][7] is a powerful desktop application for completing transactions on the Bitcoin network. It is released under the [MIT License][8] and is available for MacOS, Windows, and Linux.
|
||||
|
||||
The blockchain wallet provides you with complete control over your Bitcoin stash. Some of its features include user-friendliness, versatility, decentralized offline key generation capabilities, encrypted data backups, and multi-device synchronization.
|
||||
|
||||
### 4\. Armory
|
||||
|
||||
[Armory][9] is an open source wallet (released under the [GNU AGPLv3][10]) for producing and keeping Bitcoin private keys on your computer. It enhances security by providing users with cold storage and multi-signature support capabilities.
|
||||
|
||||
With Armory, you can set up a wallet on a computer that is completely offline; you'll use the watch-only feature for observing your Bitcoin details on the internet, which improves security. The wallet also allows you to create multiple addresses and use them to complete different transactions.
|
||||
|
||||
Armory is available for MacOS, Windows, and several flavors of Linux (including Raspberry Pi).
|
||||
|
||||
### 5\. Electrum
|
||||
|
||||
[Electrum][11] is a Bitcoin wallet that navigates the thin line between beginner user-friendliness and expert functionality. The open source wallet is released under the [MIT License][12].
|
||||
|
||||
Electrum encrypts your private keys locally, supports cold storage, and provides multi-signature capabilities with minimal resource usage on your machine.
|
||||
|
||||
It is available for a wide range of operating systems and devices, including Windows, MacOS, Android, iOS, and Linux, and hardware wallets such as [Trezor][13].
|
||||
|
||||
### 6\. Etherwall
|
||||
|
||||
[Etherwall][14] is the first wallet for storing and sending Ethereum on the desktop. The open source wallet is released under the [GPLv3 License][15].
|
||||
|
||||
Etherwall is intuitive and fast. What's more, to enhance the security of your private keys, you can operate it on a full node or a thin node. Running it as a full-node client will enable you to download the whole Ethereum blockchain on your local machine.
|
||||
|
||||
Etherwall is available for MacOS, Linux, and Windows, and it also supports the Trezor hardware wallet.
|
||||
|
||||
### Words to the wise
|
||||
|
||||
Open source and free crypto wallets are playing a vital role in making cryptocurrencies easily available to more people.
|
||||
|
||||
Before using any digital currency software wallet, make sure to do your due diligence to protect your security, and always remember to comply with best practices for safeguarding your finances.
|
||||
|
||||
If your favorite open source cryptocurrency wallet is not on this list, please share what you know in the comment section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/crypto-wallets
|
||||
|
||||
作者:[Dr.Michael J.Garbade][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/drmjg
|
||||
[1]:https://www.liveedu.tv/guides/cryptocurrency/
|
||||
[2]:https://opensource.com/tags/blockchain
|
||||
[3]:https://copay.io/
|
||||
[4]:https://github.com/bitpay/copay/blob/master/LICENSE
|
||||
[5]:https://www.myetherwallet.com/
|
||||
[6]:https://github.com/kvhnuke/etherwallet/blob/mercury/LICENSE.md
|
||||
[7]:https://ciphrex.com/
|
||||
[8]:https://github.com/ciphrex/mSIGNA/blob/master/LICENSE
|
||||
[9]:https://www.bitcoinarmory.com/
|
||||
[10]:https://github.com/etotheipi/BitcoinArmory/blob/master/LICENSE
|
||||
[11]:https://electrum.org/#home
|
||||
[12]:https://github.com/spesmilo/electrum/blob/master/LICENCE
|
||||
[13]:https://trezor.io/
|
||||
[14]:https://www.etherwall.com/
|
||||
[15]:https://github.com/almindor/etherwall/blob/master/LICENSE
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
A sysadmin's guide to SELinux: 42 answers to the big questions
|
||||
======
|
||||
|
||||
|
263
sources/tech/20180712 Slices from the ground up.md
Normal file
263
sources/tech/20180712 Slices from the ground up.md
Normal file
@ -0,0 +1,263 @@
|
||||
name1e5s is translating
|
||||
|
||||
|
||||
Slices from the ground up
|
||||
============================================================
|
||||
|
||||
This blog post was inspired by a conversation with a co-worker about using a slice as a stack. The conversation turned into a wider discussion on the way slices work in Go, so I thought it would be useful to write it up.
|
||||
|
||||
### Arrays
|
||||
|
||||
Every discussion of Go’s slice type starts by talking about something that isn’t a slice, namely, Go’s array type. Arrays in Go have two relevant properties:
|
||||
|
||||
1. They have a fixed size; `[5]int` is both an array of 5 `int`s and is distinct from `[3]int`.
|
||||
|
||||
2. They are value types. Consider this example:
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
var a [5]int
|
||||
b := a
|
||||
b[2] = 7
|
||||
fmt.Println(a, b) // prints [0 0 0 0 0] [0 0 7 0 0]
|
||||
}
|
||||
```
|
||||
|
||||
The statement `b := a` declares a new variable, `b`, of type `[5]int`, and _copies_ the contents of `a` to `b`. Updating `b` has no effect on the contents of `a` because `a` and `b` are independent values.[1][1]
|
||||
|
||||
### Slices
|
||||
|
||||
Go’s slice type differs from its array counterpart in two important ways:
|
||||
|
||||
1. Slices do not have a fixed length. A slice’s length is not declared as part of its type, rather it is held within the slice itself and is recoverable with the built-in function `len`.[2][2]
|
||||
|
||||
2. Assigning one slice variable to another _does not_ make a copy of the slice's contents. This is because a slice does not directly hold its contents. Instead a slice holds a pointer to its _underlying_ array[3][3] which holds the contents of the slice.
|
||||
|
||||
As a result of the second property, two slices can share the same underlying array. Consider these examples:
|
||||
|
||||
1. Slicing a slice:
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
var a = []int{1,2,3,4,5}
|
||||
b := a[2:]
|
||||
b[0] = 0
|
||||
fmt.Println(a, b) // prints [1 2 0 4 5] [0 4 5]
|
||||
}
|
||||
```
|
||||
|
||||
In this example `a` and `b` share the same underlying array–even though `b` starts at a different offset in that array, and has a different length. Changes to the underlying array via `b` are thus visible to `a`.
|
||||
|
||||
2. Passing a slice to a function:
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func negate(s []int) {
|
||||
for i := range s {
|
||||
s[i] = -s[i]
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
var a = []int{1, 2, 3, 4, 5}
|
||||
negate(a)
|
||||
fmt.Println(a) // prints [-1 -2 -3 -4 -5]
|
||||
}
|
||||
```
|
||||
|
||||
In this example `a` is passed to `negate` as the formal parameter `s`. `negate` iterates over the elements of `s`, negating their sign. Even though `negate` does not return a value, or have any way to access the declaration of `a` in `main`, the contents of `a` are modified when passed to `negate`.
|
||||
|
||||
Most programmers have an intuitive understanding of how a Go slice’s underlying array works because it matches how array-like concepts in other languages tend to work. For example, here’s the first example of this section rewritten in Python:
|
||||
|
||||
```
|
||||
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
|
||||
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
|
||||
Type "help", "copyright", "credits" or "license" for more information.
|
||||
>>> a = [1,2,3,4,5]
|
||||
>>> b = a
|
||||
>>> b[2] = 0
|
||||
>>> a
|
||||
[1, 2, 0, 4, 5]
|
||||
```
|
||||
|
||||
And also in Ruby:
|
||||
|
||||
```
|
||||
irb(main):001:0> a = [1,2,3,4,5]
|
||||
=> [1, 2, 3, 4, 5]
|
||||
irb(main):002:0> b = a
|
||||
=> [1, 2, 3, 4, 5]
|
||||
irb(main):003:0> b[2] = 0
|
||||
=> 0
|
||||
irb(main):004:0> a
|
||||
=> [1, 2, 0, 4, 5]
|
||||
```
|
||||
|
||||
The same applies to most languages that treat arrays as objects or reference types.[4][8]
|
||||
|
||||
### The slice header value
|
||||
|
||||
The magic that makes a slice behave both as a value and as a pointer lies in the fact that a slice is actually a struct type. This is commonly referred to as a _slice header_ after its [counterpart in the reflect package][20]. The definition of a slice header looks something like this:
|
||||
|
||||

|
||||
|
||||
```
|
||||
package runtime
|
||||
|
||||
type slice struct {
|
||||
ptr unsafe.Pointer
|
||||
len int
|
||||
cap int
|
||||
}
|
||||
```
|
||||
|
||||
This is important because [_unlike_ `map` and `chan` types][21], slices are value types and are _copied_ when assigned or passed as arguments to functions.
|
||||
|
||||
To illustrate this, programmers instinctively understand that `square`‘s formal parameter `v` is an independent copy of the `v` declared in `main`.
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func square(v int) {
|
||||
v = v * v
|
||||
}
|
||||
|
||||
func main() {
|
||||
v := 3
|
||||
square(v)
|
||||
fmt.Println(v) // prints 3, not 9
|
||||
}
|
||||
```
|
||||
|
||||
So the operation of `square` on its `v` has no effect on `main`‘s `v`. So too the formal parameter `s` of `double` is an independent copy of the slice `s` declared in `main`, _not_ a pointer to `main`‘s `s` value.
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func double(s []int) {
|
||||
s = append(s, s...)
|
||||
}
|
||||
|
||||
func main() {
|
||||
s := []int{1, 2, 3}
|
||||
double(s)
|
||||
fmt.Println(s, len(s)) // prints [1 2 3] 3
|
||||
}
|
||||
```
|
||||
|
||||
The slightly unusual nature of a Go slice variable is that it's passed around as a value, not as a pointer. 90% of the time when you declare a struct in Go, you will pass around a pointer to values of that struct.[5][9] Passing a struct around as a value is quite uncommon; the only other example I can think of off-hand is `time.Time`.
|
||||
|
||||
It is this exceptional behaviour of slices as values, rather than pointers to values, that can confuse a Go programmer's understanding of how slices work. Just remember that any time you assign, subslice, pass, or return a slice, you're making a copy of the three fields in the slice header: the pointer to the underlying array, and the current length and capacity.
|
||||
|
||||
### Putting it all together
|
||||
|
||||
I’m going to conclude this post with the example of a slice used as a stack that I opened this post with:
|
||||
|
||||
```
|
||||
package main
|
||||
|
||||
import "fmt"
|
||||
|
||||
func f(s []string, level int) {
|
||||
if level > 5 {
|
||||
return
|
||||
}
|
||||
s = append(s, fmt.Sprint(level))
|
||||
f(s, level+1)
|
||||
fmt.Println("level:", level, "slice:", s)
|
||||
}
|
||||
|
||||
func main() {
|
||||
f(nil, 0)
|
||||
}
|
||||
```
|
||||
|
||||
Starting from `main` we pass a `nil` slice into `f` at `level` 0. Inside `f` we append the current `level` to `s` before incrementing `level` and recursing. Once `level` exceeds 5, the calls to `f` return, printing their current level and the contents of their copy of `s`.
|
||||
|
||||
```
|
||||
level: 5 slice: [0 1 2 3 4 5]
|
||||
level: 4 slice: [0 1 2 3 4]
|
||||
level: 3 slice: [0 1 2 3]
|
||||
level: 2 slice: [0 1 2]
|
||||
level: 1 slice: [0 1]
|
||||
level: 0 slice: [0]
|
||||
```
|
||||
|
||||
You can see that at each level the value of `s` was unaffected by the operation of other callers of `f`, and that while four underlying arrays were created,[6][10] higher levels of `f` in the call stack are unaffected by the copy and reallocation of new underlying arrays as a by-product of `append`.
|
||||
|
||||
### Further reading
|
||||
|
||||
If you want to find out more about how slices work in Go, I recommend these posts from the Go blog:
|
||||
|
||||
* [Go Slices: usage and internals][11] (blog.golang.org)
|
||||
|
||||
* [Arrays, slices (and strings): The mechanics of ‘append’][12] (blog.golang.org)
|
||||
|
||||
### Notes
|
||||
|
||||
1. This is not a unique property of arrays. In Go _every_ assignment is a copy.[][13]
|
||||
|
||||
2. You can also use `len` on array values, but the result is a foregone conclusion.[][14]
|
||||
|
||||
3. This is also known as the backing array or sometimes, less correctly, as the backing slice.[][15]
|
||||
|
||||
4. In Go we tend to say value type and pointer type because of the confusion caused by C++’s _reference_ type, but in this case I think calling arrays-as-objects reference types is appropriate.[][16]
|
||||
|
||||
5. I’d argue if that struct has a [method defined on it and/or is used to satisfy an interface][17] then the percentage that you will pass around a pointer to your struct rises to near 100%.[][18]
|
||||
|
||||
6. Proof of this is left as an exercise to the reader.[][19]
|
||||
|
||||
### Related Posts:
|
||||
|
||||
1. [If a map isn’t a reference variable, what is it?][4]
|
||||
|
||||
2. [What is the zero value, and why is it useful ?][5]
|
||||
|
||||
3. [The empty struct][6]
|
||||
|
||||
4. [Should methods be declared on T or *T][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2018/07/12/slices-from-the-ground-up
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://dave.cheney.net/
|
||||
[1]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-1-3265
|
||||
[2]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-2-3265
|
||||
[3]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-3-3265
|
||||
[4]:https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it
|
||||
[5]:https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful
|
||||
[6]:https://dave.cheney.net/2014/03/25/the-empty-struct
|
||||
[7]:https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t
|
||||
[8]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-4-3265
|
||||
[9]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-5-3265
|
||||
[10]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-6-3265
|
||||
[11]:https://blog.golang.org/go-slices-usage-and-internals
|
||||
[12]:https://blog.golang.org/slices
|
||||
[13]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-1-3265
|
||||
[14]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-2-3265
|
||||
[15]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-3-3265
|
||||
[16]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-4-3265
|
||||
[17]:https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t
|
||||
[18]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-5-3265
|
||||
[19]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-6-3265
|
||||
[20]:https://golang.org/pkg/reflect/#SliceHeader
|
||||
[21]:https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it
|
@ -0,0 +1,84 @@
|
||||
translating---geekpi
|
||||
|
||||
Incomplete Path Expansion (Completion) For Bash
|
||||
======
|
||||
|
||||

|
||||
|
||||
[bash-complete-partial-path][1] enhances the path completion in Bash (on Linux, macOS with gnu-sed, and Windows with MSYS) by adding incomplete path expansion, similar to Zsh. This is useful if you want this time-saving feature in Bash, without having to switch to Zsh.
|
||||
|
||||
Here is how this works. When the `Tab` key is pressed, bash-complete-partial-path assumes each component is incomplete and tries to expand it. Let's say you want to navigate to `/usr/share/applications`. You can type `cd /u/s/app`, press `Tab`, and bash-complete-partial-path should expand it into `cd /usr/share/applications`. If there are conflicts, only the part of the path without conflicts is completed upon pressing `Tab`. For instance, Ubuntu users should have quite a few folders in `/usr/share` that begin with "app", so in this case, typing `cd /u/s/app` will only expand the `/usr/share/` part.
|
||||
|
||||
Here is another example of deeper incomplete file path expansion. On an Ubuntu system, type `cd /u/s/f/t/u`, press `Tab`, and it should be automatically expanded to `cd /usr/share/fonts/truetype/ubuntu`.
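To make the idea concrete, here is a toy sketch in plain shell. This is NOT the plugin's actual implementation; unlike the real plugin, it simply takes the first glob match for each component instead of resolving conflicts by looking ahead:

```shell
# Expand each partial path component to the first name that glob-matches it.
expand_partial() {
    out=""
    old_ifs=$IFS; IFS=/
    for part in $1; do
        [ -z "$part" ] && continue
        match=""
        # "$out/$part"* globs for names beginning with the partial component
        for cand in "$out/$part"*; do
            [ -e "$cand" ] && match=$cand && break
        done
        # fall back to the literal component when nothing matches
        out=${match:-"$out/$part"}
    done
    IFS=$old_ifs
    printf '%s\n' "$out"
}

# Demo on a throwaway directory tree:
root=$(mktemp -d)
mkdir -p "$root/share/applications"
expand_partial "$root/sh/app"    # prints e.g. /tmp/tmp.XXXX/share/applications
rm -rf "$root"
```

The real plugin also handles escaping, quoting, and `~` expansion, which this sketch ignores.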
|
||||
|
||||
Features include:
|
||||
|
||||
* Escapes special characters
|
||||
|
||||
  * If the user starts the path with quotes, character escaping is not applied and instead, the quote is closed with a matching character after expanding the path
|
||||
|
||||
* Properly expands ~ expressions
|
||||
|
||||
* If bash-completion package is already in use, this code will safely override its _filedir function. No extra configuration is required, just make sure you source this project after the main bash-completion.
|
||||
|
||||
Check out the [project page][2] for more information and a demo screencast.
|
||||
|
||||
### Install bash-complete-partial-path
|
||||
|
||||
The bash-complete-partial-path installation instructions specify downloading the bash_completion script directly. I prefer to grab the Git repository instead, so I can update it with a simple `git pull` , therefore the instructions below will use this method of installing bash-complete-partial-path. You can use the [official][3] instructions if you prefer them.
|
||||
|
||||
1. Install Git (needed to clone the bash-complete-partial-path Git repository).
|
||||
|
||||
In Debian, Ubuntu, Linux Mint and so on, use this command to install Git:
|
||||
|
||||
```
|
||||
sudo apt install git
|
||||
```
|
||||
|
||||
2. Clone the bash-complete-partial-path Git repository in `~/.config/`:
|
||||
|
||||
```
|
||||
cd ~/.config && git clone https://github.com/sio/bash-complete-partial-path
|
||||
```
|
||||
|
||||
3. Source `~/.config/bash-complete-partial-path/bash_completion` in your `~/.bashrc` file.
|
||||
|
||||
Open ~/.bashrc with a text editor. You can use Gedit for example:
|
||||
|
||||
```
|
||||
gedit ~/.bashrc
|
||||
```
|
||||
|
||||
At the end of the `~/.bashrc` file add the following (as a single line):
|
||||
|
||||
```
|
||||
[ -s "$HOME/.config/bash-complete-partial-path/bash_completion" ] && source "$HOME/.config/bash-complete-partial-path/bash_completion"
|
||||
```
|
||||
|
||||
I mentioned adding it at the end of the file because this needs to be included below (after) the main bash-completion from your `~/.bashrc` file. So make sure you don't add it above the original bash-completion as it will cause issues.
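The `[ -s ... ] && source ...` line above is a guard pattern: it sources the file only if it exists and is non-empty, so your `~/.bashrc` keeps working on machines where the plugin isn't installed. A minimal illustration of the same pattern, using throwaway files rather than the plugin itself:

```shell
# Source a file only when it exists and is non-empty.
tmp=$(mktemp)
echo 'echo "sourced OK"' > "$tmp"

[ -s "$tmp" ] && . "$tmp"                  # prints: sourced OK
[ -s "$tmp.missing" ] && . "$tmp.missing"  # file absent: silently skipped

rm -f "$tmp"
```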
|
||||
|
||||
4. Source `~/.bashrc`:
|
||||
|
||||
```
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
And you're done, bash-complete-partial-path should now be installed and ready to be used.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/07/incomplete-path-expansion-completion.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://github.com/sio/bash-complete-partial-path
|
||||
[2]:https://github.com/sio/bash-complete-partial-path
|
||||
[3]:https://github.com/sio/bash-complete-partial-path#installation-and-updating
|
@ -0,0 +1,235 @@
|
||||
3 Methods to List All The Users in Linux System
|
||||
======
|
||||
Everyone knows that user information resides in the `/etc/passwd` file.
|
||||
|
||||
It’s a text file that contains the essential information about each user.
|
||||
|
||||
When we create a new user, the new user's details are appended to this file.
|
||||
|
||||
The /etc/passwd file contains each user's essential information as a single line with seven fields.
|
||||
|
||||
Each line in /etc/passwd represents a single user. This file keeps the user information in three parts.
|
||||
|
||||
* `Part-1:` root user information
|
||||
* `Part-2:` system-defined accounts information
|
||||
* `Part-3:` Real user information
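As a quick sketch of the record format mentioned above, one such line can be split into its seven colon-separated fields like this (using a sample record, not a real account):

```shell
# Split one /etc/passwd-style record on ":" and label a few of its fields.
printf '%s\n' 'mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash' |
awk -F: '{ print "username:", $1
           print "UID:", $3
           print "home:", $6
           print "shell:", $7 }'
```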
|
||||
|
||||
|
||||
|
||||
**Suggested Read :**
|
||||
**(#)** [How To Check User Created Date On Linux][1]
|
||||
**(#)** [How To Check Which Groups A User Belongs To On Linux][2]
|
||||
**(#)** [How To Force User To Change Password On Next Login In Linux][3]
|
||||
|
||||
The first part is the root account, which is the administrator account and has complete power over every aspect of the system.
|
||||
|
||||
The second part holds the system-defined groups and accounts that are required for proper installation and updating of system software.
|
||||
|
||||
The third part, at the end, represents the real people who use the system.
|
||||
|
||||
When a new user is created, the four files below are modified.
|
||||
|
||||
* `/etc/passwd:` User details will be updated in this file.
|
||||
* `/etc/shadow:` User password info will be updated in this file.
|
||||
* `/etc/group:` Group details will be updated of the new user in this file.
|
||||
* `/etc/gshadow:` Group password info will be updated of the new user in the file.
|
||||
|
||||
|
||||
|
||||
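Of the four files listed above, /etc/passwd and /etc/group are world-readable, so you can quickly confirm that an account is registered in both. A small sketch, using root as the example account (/etc/shadow and /etc/gshadow require root privileges to read, so they are skipped here):

```shell
# Show the entries for one account in the two world-readable files;
# /etc/shadow and /etc/gshadow are skipped because reading them needs root
for f in /etc/passwd /etc/group; do
    printf '%s:\n' "$f"
    grep "^root:" "$f"
done
```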
### Method-1: Using /etc/passwd file

Use any file-viewing command such as cat, more, or less to print the list of users created on your Linux system.

The `/etc/passwd` file is a text file that contains the information about each user that is necessary to log in to a Linux system. It maintains useful information about users such as username, password, user ID, group ID, user ID info, home directory and shell.

The /etc/passwd file describes every user on a single line with seven fields, each separated by a colon “:”, as shown below.
```
# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
2gadmin:x:500:10::/home/viadmin:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
zabbix:x:498:499:Zabbix Monitoring System:/var/lib/zabbix:/sbin/nologin
mysql:x:497:502::/home/mysql:/bin/bash
zend:x:502:503::/u01/zend/zend/gui/lighttpd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
2daygeek:x:503:504::/home/2daygeek:/bin/bash
named:x:25:25:Named:/var/named:/sbin/nologin
mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash
```
Below is detailed information about the seven fields.

  * **`Username (magesh):`** The username of the created user. Its length should be between 1 and 32 characters.
  * **`Password (x):`** Indicates that the encrypted password is stored in the /etc/shadow file.
  * **`User ID (UID-506):`** The user ID (UID). Each user must have a unique UID. UID 0 is reserved for root, UIDs 1-99 are reserved for system users, and UIDs 100-999 are reserved for system accounts/groups.
  * **`Group ID (GID-507):`** The primary group ID (GID). Each group has a unique GID, which is stored in the /etc/group file.
  * **`User ID Info (2g Admin - Magesh M):`** The comment field, which can be used to describe the user.
  * **`Home Directory (/home/mageshm):`** The user’s home directory.
  * **`shell (/bin/bash):`** The user’s login shell.
You can use the **awk** or **cut** command to print only the usernames on your Linux system. Both show the same result.
```
# awk -F':' '{ print $1}' /etc/passwd
or
# cut -d: -f1 /etc/passwd
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
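Building on the same idea, the UID field (field 3) can be used to filter the listing. As a sketch, the one-liner below keeps only ordinary accounts; the `>= 1000` cutoff is an assumption that matches most modern distributions (older systems started real users at 500):

```shell
# Print only "real" user accounts by filtering on the UID field;
# the 1000 cutoff is distribution-dependent
awk -F: '$3 >= 1000 { print $1 }' /etc/passwd
```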
### Method-2: Using getent Command

The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf.

The getent command shows user details in the same format as the /etc/passwd file: every user on a single line with seven fields.
```
# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
2gadmin:x:500:10::/home/viadmin:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
zabbix:x:498:499:Zabbix Monitoring System:/var/lib/zabbix:/sbin/nologin
mysql:x:497:502::/home/mysql:/bin/bash
zend:x:502:503::/u01/zend/zend/gui/lighttpd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
2daygeek:x:503:504::/home/2daygeek:/bin/bash
named:x:25:25:Named:/var/named:/sbin/nologin
mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash
```
The seven fields are the same as those described under Method-1 above.

You can again use the **awk** or **cut** command to print only the usernames. Both show the same result.
```
# getent passwd | awk -F':' '{ print $1}'
or
# getent passwd | cut -d: -f1
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
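One advantage of getent over reading /etc/passwd directly is that it consults every source configured in /etc/nsswitch.conf (for example LDAP or NIS), and it can look up a single account by name or by numeric UID:

```shell
# Look up one account by name, and the same account by numeric UID
getent passwd root
getent passwd 0
```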
### Method-3: Using compgen Command

compgen is a bash built-in command that can show all available commands, aliases, and functions; with the -u option it lists user accounts.
```
# compgen -u
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
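Because compgen is a bash builtin, call it through bash when scripting. Combined with grep -x (exact-line match), it makes a quick existence check for an account; a small sketch:

```shell
# compgen is a bash builtin, so invoke it via bash when used from a script;
# grep -x requires an exact username match
bash -c 'compgen -u' | grep -x root
```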
Please leave your feedback in the comment section below; based on it we can improve our blog and write more effective articles. So, stay tuned!
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/3-methods-to-list-all-the-users-in-linux-system/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
[2]:https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
[3]:https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login-in-linux/
@ -0,0 +1,88 @@
4 cool new projects to try in COPR for July 2018
======



COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

### Hledger

[Hledger][2] is a command-line program for tracking money or other commodities. It uses a simple, plain-text formatted journal for storing data and double-entry accounting. In addition to the command-line interface, hledger offers a terminal interface and a web client that can show graphs of balance on the accounts.

![][3]

#### Installation instructions

The repo currently provides hledger for Fedora 27, 28, and Rawhide. To install hledger, use these commands:

```
sudo dnf copr enable kefah/HLedger
sudo dnf install hledger
```
### Neofetch

[Neofetch][4] is a command-line tool that displays information about the operating system, software, and hardware. Its main purpose is to show the data in a compact way for taking screenshots. You can configure Neofetch to display exactly the way you want, by using both command-line flags and a configuration file.

![][5]

#### Installation instructions

The repo currently provides Neofetch for Fedora 28. To install Neofetch, use these commands:

```
sudo dnf copr enable sysek/neofetch
sudo dnf install neofetch
```
### Remarkable

[Remarkable][6] is a Markdown text editor that uses the GitHub-like flavor of Markdown. It offers a preview of the document, as well as the option to export to PDF and HTML. There are several styles available for the Markdown, including an option to create your own styles using CSS. In addition, Remarkable supports LaTeX syntax for writing equations and syntax highlighting for source code.

![][7]

#### Installation instructions

The repo currently provides Remarkable for Fedora 28 and Rawhide. To install Remarkable, use these commands:

```
sudo dnf copr enable neteler/remarkable
sudo dnf install remarkable
```
### Aha

[Aha][8] (or ANSI HTML Adapter) is a command-line tool that converts terminal escape sequences to HTML code. This allows you to share, for example, the output of `git diff` or `htop` as a static HTML page.

![][9]

#### Installation instructions

The [repo][10] currently provides aha for Fedora 26, 27, 28, and Rawhide, EPEL 6 and 7, and other distributions. To install aha, use these commands:

```
sudo dnf copr enable scx/aha
sudo dnf install aha
```
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-try-copr-july-2018/

作者:[Dominik Turecek][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:http://hledger.org/
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/hledger.png
[4]:https://github.com/dylanaraps/neofetch
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/neofetch.png
[6]:https://remarkableapp.github.io/linux.html
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/remarkable.png
[8]:https://github.com/theZiz/aha
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/aha.png
[10]:https://copr.fedorainfracloud.org/coprs/scx/aha/
@ -0,0 +1,142 @@
A brief history of text-based games and open source
======



The [Interactive Fiction Technology Foundation][1] (IFTF) is a non-profit organization dedicated to the preservation and improvement of technologies enabling the digital art form we call interactive fiction. When a Community Moderator for Opensource.com suggested an article about IFTF, the technologies and services it supports, and how it all intersects with open source, I found it a novel angle to the decades-long story I’ve so often told. The history of IF is longer than—but quite enmeshed with—the modern FOSS movement. I hope you’ll enjoy my sharing it here.

### Definitions and history

To me, the term interactive fiction includes any video game or digital artwork whose audience interacts with it primarily through text. The term originated in the 1980s when parser-driven text adventure games—epitomized in the United States by [Zork][2], [The Hitchhiker’s Guide to the Galaxy][3], and the rest of [Infocom][4]’s canon—defined home-computer entertainment. Its mainstream commercial viability had guttered by the 1990s, but online hobbyist communities carried on the tradition, releasing both games and game-creation tools.

After a quarter century, interactive fiction now comprises a broad and sparkling variety of work, from puzzle-laden text adventures to sprawling and introspective hypertexts. Regular online competitions and festivals provide a great place to peruse and play new work: The English-language IF world enjoys annual events including [Spring Thing][5] and [IFComp][6], the latter a centerpiece of modern IF since 1995—which also makes it the longest-lived continually running game showcase event of its kind in any genre. [IFComp’s crop of judged-and-ranked entries from 2017][7] shows the amazing diversity in form, style, and subject matter that text-based games boast today.

(I specify "English-language" above because IF communities tend to self-segregate by language, perhaps due to the technology's focus on writing. There are also annual IF events in [French][8] and [Italian][9], for example, and I've heard of at least one Chinese IF festival. Happily, these borders are porous; during the four years I managed IFComp, it has welcomed English-translated work from all international communities.)

![counterfeit monkey game screenshot][11]

Starting a new game of Emily Short's "Counterfeit Monkey," running on the interpreter Lectrote (both open source software).

Also due to its focus on text, IF presents some of the most accessible platforms for both play and authorship. Almost anyone who can read digital text—including users of assistive technology such as text-to-speech software—can play most IF works. Likewise, IF creation is open to all writers willing to learn and work with its tools and techniques.

This brings us to IF’s long relationship with open source, which has helped enable the art form’s availability since its commercial heyday. I'll provide an overview of contemporary open-source IF creation tools, and then discuss the ancient and sometimes curious tradition of IF works that share their source code.
### The world of open source IF tools

A number of development platforms, most of which are open source, are available to create traditional parser-driven IF in which the user types commands—for example, `go north,` `get lamp`, `pet the cat`, or `ask Zoe about quantum mechanics`—to interact with the game’s world. The early 1990s saw the emergence of several hacker-friendly parser-game development kits; those still in use today include [TADS][12], [Alan][13], and [Quest][14]—all open, with the latter two bearing FOSS licenses.

But by far the most prominent of these is [Inform][15], first released by Graham Nelson in 1993 and now maintained by a team Nelson still leads. Inform source is semi-open, in an unusual fashion: Inform 6, the previous major version, [makes its source available through the Artistic License][16]. This has more immediate relevance than may be obvious, since the otherwise proprietary Inform 7 holds Inform 6 at its core, translating its [remarkable natural-language syntax][17] into its predecessor’s more C-like code before letting it compile the work down into machine code.

![inform 7 IDE screenshot][19]

The Inform 7 IDE, loaded up with documentation and a sample project.

Inform games run on a virtual machine, a relic of the Infocom era when that publisher targeted a VM so that it could write a single game that would run on Apple II, Commodore 64, Atari 800, and other flavors of the "[home computer][20]." Fewer popular operating systems exist today, but Inform’s virtual machines—the relatively modern [Glulx][21] or the charmingly antique [Z-machine][22], a reverse-engineered clone of Infocom’s historical VM—let Inform-created work run on any computer with an Inform interpreter. Currently, popular cross-platform interpreters include desktop programs like [Lectrote][23] and [Gargoyle][24] or browser-based ones like [Quixe][25] and [Parchment][26]. All are open source.

If the pace of Inform’s development has slowed in its maturity, it remains vital through an active and transparent ecosystem—just like any other popular open source project. In Inform’s case, this includes the aforementioned interpreters, [a collection of language extensions][27] (usually written in a mix of Inform 6 and 7), and of course, all the work created with it and shared with the world, sometimes with source included (I’ll return to that topic later in this article).

IF creation tools invented in the 21st century tend to explore player interactions outside of the traditional parser, generating hypertext-driven work that any modern web browser can load. Chief among these is [Twine][28], originally developed by Chris Klimas in 2009 and under active development by many contributors today as [a GNU-licensed open source project][29]. (In fact, [Twine][30] can trace its OSS lineage back to [TiddlyWiki][31], the project from which Klimas initially derived it.)

Twine represents a sort of maximally [open and accessible approach][30] to IF development: Beyond its own FOSS nature, it renders its output as self-contained websites, relying not on machine code requiring further specialized interpretation but the open and well-exercised standards of HTML, CSS, and JavaScript. As a creative tool, Twine can match its own exposed complexity to the creator’s skill level. Users with little or no programming knowledge can create simple but playable IF work, while those with more coding and design skills—including those developing these skills by making Twine games—can develop more sophisticated projects. Little wonder that Twine’s visibility and popularity in educational contexts has grown quite a bit in recent years.

Other noteworthy open source IF development projects include the MIT-licensed [Undum][32] by Ian Millington, and [ChoiceScript][33] by Dan Fabulich and the [Choice of Games][34] team—both of which also target the web browser as the gameplay platform. Looking beyond strict development systems like these, web-based IF gives us a rich and ever-churning ecosystem of open source work, such as furkle’s [collection of Twine-extending tools][35] and Liza Daly’s [Windrift][36], a JavaScript framework purpose-built for her own IF games.

### Programs, games, and game-programs

Twine benefits from [a standing IFTF program dedicated to its support][37], allowing the public to help fund its maintenance and development. IFTF also directly supports two long-time public services, IFComp and the IF Archive, both of which depend upon and contribute back into open software and technologies.

![Harmonia opening screen shot][39]

The opening of Liza Daly's "Harmonia," created with the Windrift open source IF-creation framework.

The Perl- and JavaScript-based application that runs the IFComp’s website has been [a shared-source project][40] since 2014, and it reflects [the stew of FOSS licenses used by its IF-specific sub-components][41], including the various code libraries that allow parser-driven competition entries to run in a web browser. [The IF Archive][42]—online since 1992 and [an IFTF project since 2017][43]—is a set of mirrored repositories based entirely on ancient and stable internet standards, with [a little open source Python script][44] to handle indexing.
### At last, the fun part: Let's talk about open source text games

The bulk of the archive [comprises games][45], of course—years and years of games, reflecting decades of evolving game-design trends and IF tool development.

Lots of IF work shares its source code, and the community’s quick-start solution for finding it is simple: [Search the IFDB for the tag "source available"][46]. (The IFDB is yet another long-running IF community service, run privately by TADS creator Mike Roberts.) Users who are comfortable with a more bare-bones interface may also wish to browse [the `/games/source` directory][47] of the IF Archive, which groups content by development platform and written language (there's also a lot of work either too miscellaneous or too ancient to categorize floating at the top).

A little bit of random sampling of these code-sharing games reveals an interesting dilemma: Unlike the wider world of open source software, the IF community lacks a generally agreed-upon way of licensing all the code that it generates. Unlike a software tool—including all the tools we use to build IF—an interactive fiction game is a work of art in the most literal sense, meaning that an open source license intended for software would fit it no better than it would any other work of prose or poetry. But then again, an IF game is also a piece of software, and it exhibits source-code patterns and techniques that its creator may legitimately wish to share with the world. What is an open source-aware IF creator to do?

Some games address this by passing their code into the public domain, either through explicit license or—as in the case of [the original 42-year-old Adventure by Crowther and Woods][48]—through community fiat. Some try to split the difference, rolling their own license that allows for free re-use of a game’s exposed business logic but prohibits the creation of work derived specifically from its prose. This is the tack I took when I opened up the source of my own game, [The Warbler’s Nest][49]. Lord knows how well that’d stand up in court, but I didn’t have any better ideas at the time.

Naturally, you can find work that simply puts everything under a single common license and never mind the naysayers. A prominent example is [Emily Short’s epic Counterfeit Monkey][50], released in its entirety under a Creative Commons 4.0 license. [CC frowns at its application to code][51], but you could argue that [the strangely prose-like nature of Inform 7 source][52] makes it at least a little more compatible with a CC license than a more traditional software project would be.

### What now, adventurer?

If you are eager to start exploring the world of interactive fiction, here are a few links to check out:

+ As mentioned above, IFDB and the IF Archive both present browsable interfaces to more than 40 years worth of collected interactive fiction work. Much of this is playable in a web browser, but some require additional interpreter programs. IFDB can help you find and install these.

+ IFComp’s annual results pages provide another view into the best of this free and archive-available work.

+ The Interactive Fiction Technology Foundation is a charitable non-profit organization that helps support Twine, IFComp, and the IF Archive, as well as improve the accessibility of IF, explore IF’s use in education, and more. Join its mailing list to receive IFTF’s monthly newsletter, peruse its blog, and browse some thematic merchandise.

+ John Paul Wohlscheid wrote this article about open-source IF tools earlier this year. It covers some platforms not mentioned here, so if you’re still hungry for more, have a look.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/interactive-fiction-tools

作者:[Jason Mclntosh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jmac
[1]:http://iftechfoundation.org/
[2]:https://en.wikipedia.org/wiki/Zork
[3]:https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)
[4]:https://en.wikipedia.org/wiki/Infocom
[5]:http://www.springthing.net/
[6]:http://ifcomp.org/
[7]:https://ifcomp.org/comp/2017
[8]:http://www.fiction-interactive.fr/
[9]:http://www.oldgamesitalia.net/content/marmellata-davventura-2018
[10]:/file/403396
[11]:https://opensource.com/sites/default/files/uploads/monkey.png (counterfeit monkey game screenshot)
[12]:http://tads.org/
[13]:https://www.alanif.se/
[14]:http://textadventures.co.uk/quest/
[15]:http://inform7.com/
[16]:https://github.com/DavidKinder/Inform6
[17]:http://inform7.com/learn/man/RB_4_1.html#e307
[18]:/file/403386
[19]:https://opensource.com/sites/default/files/uploads/inform.png (inform 7 IDE screenshot)
[20]:https://www.youtube.com/watch?v=bu55q_3YtOY
[21]:http://ifwiki.org/index.php/Glulx
[22]:http://ifwiki.org/index.php/Z-machine
[23]:https://github.com/erkyrath/lectrote
[24]:https://github.com/garglk/garglk/
[25]:http://eblong.com/zarf/glulx/quixe/
[26]:https://github.com/curiousdannii/parchment
[27]:https://github.com/i7/extensions
[28]:http://twinery.org/
[29]:https://github.com/klembot/twinejs
[30]:/article/18/7/twine-vs-renpy-interactive-fiction
[31]:https://tiddlywiki.com/
[32]:https://github.com/idmillington/undum
[33]:https://github.com/dfabulich/choicescript
[34]:https://www.choiceofgames.com/
[35]:https://github.com/furkle
[36]:https://github.com/lizadaly/windrift
[37]:http://iftechfoundation.org/committees/twine/
[38]:/file/403391
[39]:https://opensource.com/sites/default/files/uploads/harmonia.png (Harmonia opening screen shot)
[40]:https://github.com/iftechfoundation/ifcomp
[41]:https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md
[42]:https://www.ifarchive.org/
[43]:http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html
[44]:https://github.com/iftechfoundation/ifarchive-ifmap-py
[45]:https://www.ifarchive.org/indexes/if-archiveXgames
[46]:http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22
[47]:https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html
[48]:http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv
[49]:https://github.com/jmacdotorg/warblers-nest/
[50]:https://github.com/i7/counterfeit-monkey
[51]:https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software
[52]:https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x
193
sources/tech/20180720 An Introduction to Using Git.md
Normal file
@ -0,0 +1,193 @@
translating by distant1219

An Introduction to Using Git
======

If you’re a developer, then you know your way around development tools. You’ve spent years studying one or more programming languages and have perfected your skills. You can develop with GUI tools or from the command line. On your own, nothing can stop you. You code as if your mind and your fingers are one to create elegant, perfectly commented, source for an app you know will take the world by storm.

But what happens when you’re tasked with collaborating on a project? Or what about when that app you’ve developed becomes bigger than just you? What’s the next step? If you want to successfully collaborate with other developers, you’ll want to make use of a distributed version control system. With such a system, collaborating on a project becomes incredibly efficient and reliable. One such system is [Git][1]. Along with Git comes a handy repository called [GitHub][2], where you can house your projects, such that a team can check out and check in code.

I will walk you through the very basics of getting Git up and running and using it with GitHub, so the development on your game-changing app can be taken to the next level. I’ll be demonstrating on Ubuntu 18.04, so if your distribution of choice is different, you’ll only need to modify the Git install commands to suit your distribution’s package manager.

### Git and GitHub

The first thing to do is create a free GitHub account. Head over to the [GitHub signup page][3] and fill out the necessary information. Once you’ve done that, you’re ready to move on to installing Git (you can actually do these two steps in any order).

Installing Git is simple. Open up a terminal window and issue the command:

```
sudo apt install git-all
```

This will pull in a rather large number of dependencies, but you’ll wind up with everything you need to work with Git and GitHub.

On a side note: I use Git quite a bit to download source for application installation. There are times when a piece of software isn’t available via the built-in package manager. Instead of downloading the source files from a third-party location, I’ll often go to the project’s Git page and clone the package like so:

```
git clone ADDRESS
```

Where ADDRESS is the URL given on the software’s Git page. Doing this almost always ensures I am installing the latest release of a package.
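As a concrete sketch of that workflow, the commands below run `git clone ADDRESS`; a throwaway local repository stands in for the project URL so the example works offline, and the identity settings are placeholders:

```shell
# Create a tiny local repository to stand in for a remote project URL,
# then clone it exactly as you would with "git clone ADDRESS"
src="$(mktemp -d)"
git -C "$src" init -q
git -C "$src" -c user.email=you@example.com -c user.name="Your Name" \
    commit --allow-empty -m "initial commit" -q
dest="$(mktemp -d)/clone"
git clone "$src" "$dest"
```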
### Create a local repository and add a file

The next step is to create a local repository on your system (we’ll call it newproject and house it in ~/). Open up a terminal window and issue the commands:

```
cd ~/
mkdir newproject
cd newproject
```

Now we must initialize the repository. In the ~/newproject folder, issue the command git init. When the command completes, you should see that the empty Git repository has been created (Figure 1).
![new repository][5]
|
||||
|
||||
Figure 1: Our new repository has been initialized.
|
||||
|
||||
[Used with permission][6]
|
||||
|
||||
Next we need to add a file to the project. From within the root folder (~/newproject) issue the command:
|
||||
```
|
||||
touch readme.txt
|
||||
|
||||
```
|
||||
|
||||
You will now have an empty file in your repository. Issue the command git status to verify that Git is aware of the new file (Figure 2).
|
||||
|
||||
![readme][8]
|
||||
|
||||
Figure 2: Git knows about our readme.txt file.
|
||||
|
||||
[Used with permission][6]
|
||||
|
||||
Even though Git is aware of the file, it hasn’t actually been added to the project. To do that, issue the command:
|
||||
```
|
||||
git add readme.txt
|
||||
|
||||
```
|
||||
|
||||
Once you’ve done that, issue the git status command again to see that readme.txt is now considered a new file in the project (Figure 3).
|
||||
|
||||
![file added][10]
|
||||
|
||||
Figure 3: Our file now has now been added to the staging environment.
|
||||
|
||||
[Used with permission][6]
|
||||
|
||||
### Your first commit

With the new file in the staging environment, you are now ready to create your first commit. What is a commit? Easy: A commit is a record of the files you’ve changed within the project. Creating the commit is actually quite simple. It is important, however, that you include a descriptive message for the commit. By doing this, you are adding notes about what the commit contains (such as what changes you’ve made to the file). Before we do this, however, we have to inform Git who we are. To do this, issue the commands:

```
git config --global user.email EMAIL
git config --global user.name "FULL NAME"
```

Where EMAIL is your email address and FULL NAME is your name.

Now we can create the commit by issuing the command:

```
git commit -m "Descriptive Message"
```

Where Descriptive Message is your message about the changes within the commit. For example, since this is the first commit for the readme.txt file, the commit could be:

```
git commit -m "First draft of readme.txt file"
```

You should see output indicating that 1 file has changed and a new mode was created for readme.txt (Figure 4).

![success][12]

Figure 4: Our commit was successful.

[Used with permission][6]
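To double-check what the commit recorded, `git log` is handy. A small self-contained sketch that replays the same sequence in a temporary directory (the identity values are placeholders):

```shell
set -e
# Replay the tutorial's steps in a temp dir, then inspect the history.
cd "$(mktemp -d)"
git init -q
touch readme.txt
git add readme.txt
git -c user.email=you@example.com -c user.name=You \
    commit -qm "First draft of readme.txt file"
git log --oneline   # one line per commit: abbreviated hash + message
```

`git log --oneline` prints one line per commit; here, a single entry ending in "First draft of readme.txt file".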
### Create a branch and push it to GitHub

Branches are important, as they allow you to move between project states. Let’s say you want to create a new feature for your game-changing app. To do that, create a new branch. Once you’ve completed work on the feature, you can merge it from the branch into the master branch. To create the new branch, issue the command:

```
git checkout -b BRANCH
```

Where BRANCH is the name of the new branch. Once the command completes, issue the command git branch to see that it has been created (Figure 5).

![featureX][14]

Figure 5: Our new branch, called featureX.

[Used with permission][6]

Next we need to create a repository on GitHub. If you log into your GitHub account, click the New Repository button on your account’s main page. Fill out the necessary information and click Create repository (Figure 6).

![new repository][16]

Figure 6: Creating the new repository on GitHub.

[Used with permission][6]

After creating the repository, you will be presented with a URL to use for pushing your local repository. To do this, go back to the terminal window (still within ~/newproject) and issue the commands:

```
git remote add origin URL
git push -u origin master
```

Where URL is the URL for your new GitHub repository.

You will be prompted for your GitHub username and password. (GitHub has since replaced account passwords with personal access tokens for Git over HTTPS, so newer setups will paste a token here instead.) Once you successfully authenticate, the project will be pushed to your GitHub repository and you’re ready to go.
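The merge step itself isn't shown above, so here is a minimal sketch of that round trip, run against a throwaway repository (the branch and file names are made up):

```shell
set -e
# Throwaway repo: branch off, commit on featureX, merge back.
cd "$(mktemp -d)"
git init -q
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on Git
git checkout -q -b featureX
echo "new feature" > feature.txt
git add feature.txt
git -c user.email=you@example.com -c user.name=You \
    commit -qm "Add featureX work"
git checkout -q "$base"
git merge -q featureX   # fast-forwards, so feature.txt lands on $base
ls feature.txt
```

After a merge like this, `git push origin "$base"` would publish the combined history the same way the tutorial pushes the initial commit.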
### Pulling the project

Say your collaborators make changes to the code on the GitHub project and have merged those changes. You will then need to pull the project files to your local machine, so the files you have on your system match those on the remote account. To do this, issue the command (from within ~/newproject):

```
git pull origin master
```

The above command will pull down any new or changed files to your local repository.
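To see the whole push/pull round trip without touching GitHub, you can simulate it with a local bare repository standing in for the remote (all names here are made up):

```shell
set -e
# A bare repo plays the role of GitHub; two clones play you and a collaborator.
work=$(mktemp -d) && cd "$work"
git init -q --bare remote.git
git clone -q remote.git alice
git clone -q remote.git bob
# The collaborator (alice) pushes a change...
( cd alice
  echo "change from alice" > readme.txt
  git add readme.txt
  git -c user.email=a@example.com -c user.name=Alice \
      commit -qm "Update readme"
  git push -q origin HEAD )
# ...and you (bob) pull it down.
branch=$(git -C alice symbolic-ref --short HEAD)  # master or main
( cd bob && git pull -q origin "$branch" )
cat bob/readme.txt
```

After the pull, bob's working copy contains alice's `readme.txt`, which is exactly what `git pull origin master` does against a real GitHub remote.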
### The very basics

And that is the very basics of using Git from the command line to work with a project stored on GitHub. There is quite a bit more to learn, so I highly recommend you issue the commands man git, man git-push, and man git-pull to get a more in-depth understanding of what the git command can do.

Happy developing!

Learn more about Linux through the free ["Introduction to Linux"][17] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git

作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://git-scm.com/
[2]:https://github.com/
[3]:https://github.com/join?source=header-home
[4]:/files/images/git1jpg
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_1.jpg?itok=FKkr5Mrk (new repository)
[6]:https://www.linux.com/licenses/category/used-permission
[7]:/files/images/git2jpg
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_2.jpg?itok=54G9KBHS (readme)
[9]:/files/images/git3jpg
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_3.jpg?itok=KAJwRJIB (file added)
[11]:/files/images/git4jpg
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_4.jpg?itok=qR0ighDz (success)
[13]:/files/images/git5jpg
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_5.jpg?itok=6m9RTWg6 (featureX)
[15]:/files/images/git6jpg
[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_6.jpg?itok=d2toRrUq (new repository)
[17]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
57
sources/tech/20180720 Convert video using Handbrake.md
Normal file
@ -0,0 +1,57 @@
translating---geekpi

Convert video using Handbrake
======

Recently, when my son asked me to digitally convert some old DVDs of his high school basketball games, I immediately knew I would use [Handbrake][1]. It is an open source package that has all the tools necessary to easily convert video into formats that can be played on MacOS, Windows, Linux, iOS, Android, and other platforms.

Handbrake is open source and distributable under the [GPLv2 license][2]. It's easy to install on MacOS, Windows, and Linux, including both [Fedora][3] and [Ubuntu][4]. In Linux, once it's installed, it can be launched from the command line with `$ handbrake` or selected from the graphical user interface. (In my case, that is GNOME 3.)

Handbrake's menu system is easy to use. Click on **Open Source** to select the video source you want to convert. For my son's basketball videos, that is the DVD drive in my Linux laptop. After inserting the DVD into the drive, the software identifies the contents of the disk.

As you can see next to Source in the screenshot above, Handbrake recognizes it as a DVD with a 720x480 video in 4:3 aspect ratio, recorded at 29.97 frames per second, with one audio track. The software also previews the video.

If the default conversion settings are acceptable, just press the **Start Encoding** button and (after a period of time, depending on the speed of your processor) the DVD's contents will be converted and saved in the default format, [M4V][5] (which can be changed).

If you don't like the filename, it's easy to change it.

Handbrake has a variety of output options for format, size, and disposition. For example, it can produce video optimized for YouTube, Vimeo, and other websites, as well as for devices including iPod, iPad, Apple TV, Amazon Fire TV, Roku, PlayStation, and more.

You can change the video output size in the Dimensions menu tab. Other tabs allow you to apply filters, change video quality and encoding, add or modify an audio track, include subtitles, and modify chapters. The Tags menu tab lets you identify the author, actors, director, release date, and more on the output video file.

If you want to set Handbrake to produce output for a specific platform, you can use the included presets.

You can also use the menu options to create your own format, depending on the functionality you want.

Handbrake is an incredibly powerful piece of software, but it's not the only open source video conversion tool out there. Do you have another favorite? If so, please share in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/handbrake

作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://handbrake.fr/
[2]:https://github.com/HandBrake/HandBrake/blob/master/LICENSE
[3]:https://fedora.pkgs.org/28/rpmfusion-free-x86_64/HandBrake-1.1.0-1.fc28.x86_64.rpm.html
[4]:https://launchpad.net/~stebbins/+archive/ubuntu/handbrake-releases
[5]:https://en.wikipedia.org/wiki/M4V
@ -0,0 +1,82 @@

How to build a URL shortener with Apache
======

Long ago, folks started sharing links on Twitter. The 140-character limit meant that URLs might consume most (or all) of a tweet, so people turned to URL shorteners. Eventually, Twitter added a built-in URL shortener ([t.co][1]).

Character count isn't as important now, but there are still other reasons to shorten links. For one, the shortening service may provide analytics—you can see how popular the links are that you share. It also simplifies making easy-to-remember URLs. For example, [bit.ly/INtravel][2] is much easier to remember than <https://www.in.gov/ai/appfiles/dhs-countyMap/dhsCountyMap.html>. And URL shorteners can come in handy if you want to pre-share a link but don't know the final destination yet.

Like any technology, URL shorteners aren't all positive. By masking the ultimate destination, shortened links can be used to direct people to malicious or offensive content. But if you surf carefully, URL shorteners are a useful tool.

We [covered shorteners previously][3] on this site, but maybe you want to run something simple that's powered by a text file. In this article, we'll show how to use the Apache HTTP server's mod_rewrite feature to set up your own URL shortener. If you're not familiar with the Apache HTTP server, check out David Both's article on [installing and configuring][4] it.

### Create a VirtualHost

In this tutorial, I'm assuming you bought a cool domain that you'll use exclusively for the URL shortener. For example, my website is [funnelfiasco.com][5], so I bought [funnelfias.co][6] to use for my URL shortener (okay, it's not exactly short, but it feeds my vanity). If you won't run the shortener as a separate domain, skip to the next section.

The first step is to set up the VirtualHost that will be used for the URL shortener. For more information on VirtualHosts, see [David Both's article][7]. This setup requires just a few basic lines:

```
<VirtualHost *:80>
    ServerName funnelfias.co
</VirtualHost>
```
### Create the rewrites

This service uses HTTPD's rewrite engine to rewrite the URLs. If you created a VirtualHost in the section above, the configuration below goes into your VirtualHost section. Otherwise, it goes in the VirtualHost or main HTTPD configuration for your server.

```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(.+)$ ${shortlinks:$1} [R=temp,L]
```

The first line simply enables the rewrite engine. The second line builds a map of the short links from a text file. The path above is only an example; you will need to use a valid path on your system (make sure it's readable by the user account that runs HTTPD). The last line rewrites the URL. In this example, it takes any characters and looks them up in the rewrite map. You may want to have your rewrites use a particular string at the beginning. For example, if you wanted all your shortened links to be of the form "slX" (where X is a number), you would replace `(.+)` above with `(sl\d+)`.

I used a temporary (HTTP 302) redirect here. This allows me to update the destination URL later. If you want the short link to always point to the same target, you can use a permanent (HTTP 301) redirect instead. Replace `temp` on line three with `permanent`.

### Build your map

Edit the file you specified on the `RewriteMap` line of the configuration. The format is a space-separated key-value store. Put one link on each line:

```
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
swody1 https://www.spc.noaa.gov/products/outlook/day1otlk.html
```
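Because a malformed line makes its key silently miss, it can be worth linting the map before reloading anything. A small sketch (the filename and sample entries are made up, and the duplicate key is intentional so the check has something to report):

```shell
# Write a sample map, then flag lines that aren't "key target" pairs
# and keys that appear more than once.
cat > links.txt <<'EOF'
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
osdc https://example.com/duplicate
EOF
awk 'NF != 2 { printf "line %d: expected 2 fields, got %d\n", NR, NF }
     NF == 2 && seen[$1]++ { printf "line %d: duplicate key %s\n", NR, $1 }' links.txt
```

For the sample above, the check prints `line 3: duplicate key osdc`; mod_rewrite would otherwise just use the first entry and ignore the second.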
### Restart HTTPD

The last step is to restart the HTTPD process. This is done with `systemctl restart httpd` or similar (the command and daemon name may differ by distribution). Your link shortener is now up and running. When you're ready to edit your map, you don't need to restart the web server. All you have to do is save the file, and the web server will pick up the differences.

### Future work

This example gives you a basic URL shortener. It can serve as a good starting point if you want to develop your own management interface as a learning project. Or you can just use it to share memorable links to forgettable URLs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/apache-url-shortener

作者:[Ben Cotton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/bcotton
[1]:http://t.co
[2]:http://bit.ly/INtravel
[3]:https://opensource.com/article/17/3/url-link-shortener
[4]:https://opensource.com/article/18/2/how-configure-apache-web-server
[5]:http://funnelfiasco.com
[6]:http://funnelfias.co
[7]:https://opensource.com/article/18/3/configuring-multiple-web-sites-apache
@ -0,0 +1,71 @@

4 open source media conversion tools for the Linux desktop
======

Ah, so many file formats—especially audio and video ones—can make for fun times if you get a file with an extension you don't recognize, if your media player doesn't play a file in that format, or if you want to use an open format.

So, what can a Linux user do? Turn to one of the many open source media conversion tools for the Linux desktop, of course. Let's take a look at four of them.

### Gnac

[Gnac][1] is one of my favorite audio converters and has been for years. It's easy to use, it's powerful, and it does one thing well—as any top-notch utility should.

How easy? You click a toolbar button to add one or more files to convert, choose a format to convert to, and then click **Convert**. The conversions are quick, and they're clean.

How powerful? Gnac can handle all the audio formats that the [GStreamer][2] multimedia framework supports. Out of the box, you can convert between Ogg, FLAC, AAC, MP3, WAV, and SPX. You can also change the conversion options for each format or add new ones.

### SoundConverter

If simplicity with a few extra features is your thing, then give [SoundConverter][3] a look. As its name states, SoundConverter works its magic only on audio files. Like Gnac, it can read the formats that GStreamer supports, and it can spit out Ogg Vorbis, MP3, FLAC, WAV, AAC, and Opus files.

Load individual files or an entire folder by either clicking **Add File** or dragging and dropping it into the SoundConverter window. Click **Convert**, and the software powers through the conversion. It's fast, too—I've converted a folder containing a couple dozen files in about a minute.

SoundConverter has options for setting the quality of your converted files. You can change the way files are named (for example, include a track number or album name in the title) and create subfolders for the converted files.

### WinFF

[WinFF][4], on its own, isn't a converter. It's a graphical frontend to FFmpeg, which [Tim Nugent looked at][5] for Opensource.com. While WinFF doesn't have all the flexibility of FFmpeg, it makes FFmpeg easier to use and gets the job done quickly and fairly easily.

Although it's not the prettiest application out there, WinFF doesn't need to be. It's more than usable. You can choose what formats to convert to from a dropdown list and select several presets. On top of that, you can specify options like bitrates and frame rates, the number of audio channels to use, and even the size at which to crop videos.

The conversions, especially video, take a bit of time, but the results are generally quite good. Once in a while, the conversion gets a bit mangled—but not often enough to be a concern. And, as I said earlier, using WinFF can save me a bit of time.

### Miro Video Converter

Not all video files are created equally. Some are in proprietary formats. Others look great on a monitor or TV screen but aren't optimized for a mobile device. That's where [Miro Video Converter][6] comes to the rescue.

Miro Video Converter has a heavy emphasis on mobile. It can convert video that you can play on Android phones, Apple devices, the PlayStation Portable, and the Kindle Fire. It will convert most common video formats to MP4, [WebM][7], and [Ogg Theora][8]. You can find a full list of supported devices and formats [on Miro's website][6].

To use it, either drag and drop a file into the window or select the file that you want to convert. Then, click the Format menu to choose the format for the conversion. You can also click the Apple, Android, or Other menus to choose a device for which you want to convert the file. Miro Video Converter resizes the video for the device's screen resolution.

Do you have a favorite Linux media conversion application? Feel free to share it by leaving a comment.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/media-conversion-tools-linux

作者:[Scott Nesbitt][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/scottnesbitt
[1]:http://gnac.sourceforge.net
[2]:http://www.gstreamer.net/
[3]:http://soundconverter.org/
[4]:https://www.biggmatt.com/winff/
[5]:https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
[6]:http://www.mirovideoconverter.com/
[7]:https://en.wikipedia.org/wiki/WebM
[8]:https://en.wikipedia.org/wiki/Ogg_theora
@ -0,0 +1,284 @@

Building a network attached storage device with a Raspberry Pi
======

In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1].

This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link.

The target architecture of our system looks like this:

### Hardware

Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example.

The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives.

### Software

The operating system with the highest visibility in the community is [Raspbian][3], which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest officially supported version at the time of this writing is [Raspbian Stretch][5], which worked fine for me.

At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`.

### Prepare the USB drives

To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd<x>`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps.
```
pi@raspberrypi:~ $ sudo fdisk -l

<...>

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953521663 1953519616 931.5G 83 Linux
```

As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk's output shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`).

First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information):
```
pi@raspberrypi:~ $ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):

Created a new partition 1 of type 'Linux' and of size 931.5 GiB.

Command (m for help): p

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux

Command (m for help): w
The partition table has been altered.
Syncing disks.
```
Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem:

```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done

<...>

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```

After repeating the above steps, let's label the new partitions according to their usage in your system:

```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```
Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed.

First install autofs and create the mount point for the storage:

```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```

Then mount the devices by adding the following line to `/etc/auto.master`:

```
/nas    /etc/auto.usb
```

Create the file `/etc/auto.usb`, if it doesn't exist, with the following content:

```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
```

Then restart the autofs service:

```
pi@raspberrypi3:~ $ sudo service autofs restart
```
Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:

```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```

First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.

Setting up autofs is a bit fault-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.
### Mount network storage

Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. First, install the NFS server on the Raspberry Pi:

```
pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
```

Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:

```
/nas/data *(rw,sync,no_subtree_check)
```

For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and allow access to my home network only for ports 22 and 443 using the router's firewall. That way, only devices in the home network can reach the NFS server.
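If you'd rather not rely on the firewall alone, the export itself can be limited to your home subnet. A sketch of such an `/etc/exports` line (192.168.1.0/24 is a placeholder for your actual network range):

```
/nas/data 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing `/etc/exports`, `sudo exportfs -ra` reloads the export table without restarting the NFS server.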
To mount the storage on a Linux computer, run the commands:

```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```

Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6].
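If you prefer `/etc/fstab` on the client over autofs, an entry like the following works too; this is a sketch, and the hostname and options are assumptions to adapt (`x-systemd.automount` makes systemd mount the share on first access, much like autofs does):

```
raspberrypi:/nas/data  /nas/data  nfs  defaults,noauto,x-systemd.automount  0  0
```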
Now you are able to access files stored on your own Raspberry Pi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi

作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/ntlx
[1]:https://nextcloud.com/
[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/documentation/installation/installing-images/
[5]:https://www.raspberrypi.org/blog/raspbian-stretch/
[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
@ -0,0 +1,265 @@
|
||||
How To Mount Google Drive Locally As Virtual File System In Linux
|
||||
======
|
||||
|
||||

|
||||
|
||||
[**Google Drive**][1] is one of the most popular cloud storage providers on the planet. As of 2017, over 800 million users were actively using the service worldwide. Even though the number of users has increased dramatically, Google hasn't released an official Google Drive client for Linux yet. That hasn't stopped the Linux community, though: every now and then, developers have brought out Google Drive clients for the Linux operating system. In this guide, we will look at three unofficial Google Drive clients for Linux. Using these clients, you can mount Google Drive locally as a virtual file system and access your Drive files on your Linux box. Read on.
|
||||
|
||||
### 1. Google-drive-ocamlfuse
|
||||
|
||||
**google-drive-ocamlfuse** is a FUSE filesystem for Google Drive, written in OCaml. For those wondering, FUSE (**F**ilesystem in **Use**rspace) is a project that allows users to create virtual file systems in user space. **google-drive-ocamlfuse** allows you to mount your Google Drive on a Linux system. It features read/write access to ordinary files and folders, read-only access to Google Docs, Sheets, and Slides, support for multiple Google Drive accounts, duplicate file handling, access to your Drive trash directory, and more.
|
||||
|
||||
#### Installing google-drive-ocamlfuse
|
||||
|
||||
google-drive-ocamlfuse is available in the [**AUR**][2], so you can install it using any AUR helper program, for example [**Yay**][3].
|
||||
```
|
||||
$ yay -S google-drive-ocamlfuse
|
||||
|
||||
```
|
||||
|
||||
On Ubuntu:
|
||||
```
|
||||
$ sudo add-apt-repository ppa:alessandro-strada/ppa
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install google-drive-ocamlfuse
|
||||
|
||||
```
|
||||
|
||||
To install the latest beta version, do:
|
||||
```
|
||||
$ sudo add-apt-repository ppa:alessandro-strada/google-drive-ocamlfuse-beta
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install google-drive-ocamlfuse
|
||||
|
||||
```
|
||||
|
||||
#### Usage
|
||||
|
||||
Once installed, run the following command to launch the **google-drive-ocamlfuse** utility from your Terminal:
|
||||
```
|
||||
$ google-drive-ocamlfuse
|
||||
|
||||
```
|
||||
|
||||
The first time you run it, the utility will open your web browser and ask for permission to access your Google Drive files. Once you grant authorization, all the config files and folders needed to mount your Google Drive will be created automatically.
|
||||
|
||||
![][5]
|
||||
|
||||
After successful authentication, you will see the following message in your Terminal.
|
||||
```
|
||||
Access token retrieved correctly.
|
||||
|
||||
```
|
||||
|
||||
You're good to go now. Close the web browser and then create a mount point for your Google Drive files.
|
||||
```
|
||||
$ mkdir ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
Finally, mount your Google Drive using the command:
|
||||
```
|
||||
$ google-drive-ocamlfuse ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
Congratulations! You can now access your files either from the Terminal or from the file manager.
|
||||
|
||||
From **Terminal** :
|
||||
```
|
||||
$ ls ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
From **File manager** :
|
||||
|
||||
![][6]
|
||||
|
||||
If you have more than one account, use the **label** option to distinguish between accounts, like below.
|
||||
```
|
||||
$ google-drive-ocamlfuse -label label [mountpoint]
|
||||
|
||||
```
|
||||
|
||||
Once you're done, unmount the FUSE filesystem using the command:
|
||||
```
|
||||
$ fusermount -u ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
For more details, refer to the help output and man pages:
|
||||
```
|
||||
$ google-drive-ocamlfuse --help
|
||||
|
||||
```
|
||||
|
||||
Also, do check the [**official wiki**][7] and the [**project GitHub repository**][8] for more details.
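If you want the mount to be re-established automatically after a reboot, one simple approach is a user crontab entry. This is only a sketch under a few assumptions: it uses the default label and the `~/mygoogledrive` mount point created above, `/home/sk` stands in for your own home directory, and you may need to delay the command until the network is up (the project wiki describes more robust options such as mounting via `/etc/fstab`):

```
# Run "crontab -e" and add a line like this:
@reboot sleep 30 && /usr/bin/google-drive-ocamlfuse /home/sk/mygoogledrive
```

The `sleep 30` is a crude way to wait for network connectivity before the mount is attempted.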
|
||||
|
||||
### 2. GCSF
|
||||
|
||||
**GCSF** is a FUSE filesystem based on Google Drive, written in the **Rust** programming language. The name GCSF comes from the Romanian phrase "**G**oogle **C**onduce **S**istem de **F**ișiere", which means "Google Drive Filesystem" in English. Using GCSF, you can mount your Google Drive as a local virtual file system and access its contents from the Terminal or file manager. You might wonder how it differs from other Google Drive FUSE projects, for example **google-drive-ocamlfuse**. The developer of GCSF replied to a similar [comment on Reddit][9]: "GCSF tends to be faster in several cases (listing files recursively, reading large files from Drive). The caching strategy it uses also leads to very fast reads (a 4-7x improvement compared to google-drive-ocamlfuse) for files that have been cached, at the cost of using more RAM."
|
||||
|
||||
#### Installing GCSF
|
||||
|
||||
GCSF is available in the [**AUR**][10], so Arch Linux users can install it using any AUR helper, for example [**Yay**][3].
|
||||
```
|
||||
$ yay -S gcsf-git
|
||||
|
||||
```
|
||||
|
||||
For other distributions, do the following.
|
||||
|
||||
Make sure you have installed Rust on your system.
|
||||
|
||||
Make sure the **pkg-config** and **fuse** packages are installed. They are available in the default repositories of most Linux distributions. For example, on Ubuntu and derivatives, you can install them using this command:
|
||||
```
|
||||
$ sudo apt-get install -y libfuse-dev pkg-config
|
||||
|
||||
```
|
||||
|
||||
Once all dependencies are installed, run the following command to install GCSF:
|
||||
```
|
||||
$ cargo install gcsf
|
||||
|
||||
```
|
||||
|
||||
#### Usage
|
||||
|
||||
First, we need to authorize our google drive. To do so, simply run:
|
||||
```
|
||||
$ gcsf login ostechnix
|
||||
|
||||
```
|
||||
|
||||
You must specify a session name; replace **ostechnix** with your own. You will see output like the example below, with a URL for authorizing your Google Drive account.
|
||||
|
||||
![][11]
|
||||
|
||||
Copy the URL, navigate to it in your browser, and click **Allow** to grant permission to access your Google Drive contents. Once you authenticate, you will see output like below.
|
||||
```
|
||||
Successfully logged in. Credentials saved to "/home/sk/.config/gcsf/ostechnix".
|
||||
|
||||
```
|
||||
|
||||
GCSF will create a configuration file at **$XDG_CONFIG_HOME/gcsf/gcsf.toml**, which usually resolves to **$HOME/.config/gcsf/gcsf.toml**. Credentials are stored in the same directory.
|
||||
|
||||
Next, create a directory to mount your google drive contents.
|
||||
```
|
||||
$ mkdir ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
Then, edit **/etc/fuse.conf** file:
|
||||
```
|
||||
$ sudo vi /etc/fuse.conf
|
||||
|
||||
```
|
||||
|
||||
Uncomment the following line to allow non-root users to specify the `allow_other` or `allow_root` mount options.
|
||||
```
|
||||
user_allow_other
|
||||
|
||||
```
|
||||
|
||||
Save and close the file.
|
||||
|
||||
Finally, mount your Google Drive using the command:
|
||||
```
|
||||
$ gcsf mount ~/mygoogledrive -s ostechnix
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
```
|
||||
INFO gcsf > Creating and populating file system...
|
||||
INFO gcsf > File sytem created.
|
||||
INFO gcsf > Mounting to /home/sk/mygoogledrive
|
||||
INFO gcsf > Mounted to /home/sk/mygoogledrive
|
||||
INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
|
||||
INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
|
||||
|
||||
```
|
||||
|
||||
Again, replace **ostechnix** with your session name. You can view the existing sessions using the command:
|
||||
```
|
||||
$ gcsf list
|
||||
Sessions:
|
||||
- ostechnix
|
||||
|
||||
```
|
||||
|
||||
You can now access your Google Drive contents either from the Terminal or from the file manager.
|
||||
|
||||
From **Terminal** :
|
||||
```
|
||||
$ ls ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
From **File manager** :
|
||||
|
||||
![][12]
|
||||
|
||||
If you don't know where your Google Drive is mounted, use the **df** or **mount** command, as shown below.
|
||||
```
|
||||
$ df -h
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
udev 968M 0 968M 0% /dev
|
||||
tmpfs 200M 1.6M 198M 1% /run
|
||||
/dev/sda1 20G 7.5G 12G 41% /
|
||||
tmpfs 997M 0 997M 0% /dev/shm
|
||||
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
|
||||
tmpfs 997M 0 997M 0% /sys/fs/cgroup
|
||||
tmpfs 200M 40K 200M 1% /run/user/1000
|
||||
GCSF 15G 857M 15G 6% /home/sk/mygoogledrive
|
||||
|
||||
$ mount | grep GCSF
|
||||
GCSF on /home/sk/mygoogledrive type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
|
||||
|
||||
```
|
||||
|
||||
Once done, unmount the Google Drive using the command:
|
||||
```
|
||||
$ fusermount -u ~/mygoogledrive
|
||||
|
||||
```
|
||||
|
||||
Check the [**GCSF GitHub repository**][13] for more details.
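To avoid retyping the session name and mount point every time, the mount command can be wrapped in a tiny shell helper. The sketch below is a dry run: it only prints the command it would execute (the session name `ostechnix` and the mount point are the article's examples; remove the final `printf`-based indirection and call `gcsf` directly if you want it to actually mount):

```shell
#!/bin/sh
# Build the gcsf mount command for a given session and mount point.
# This is a dry-run helper: it creates the mount point directory and
# prints the command instead of executing it.
gcsf_mount_cmd() {
    session="$1"
    mountpoint="$2"
    mkdir -p "$mountpoint"
    printf 'gcsf mount %s -s %s\n' "$mountpoint" "$session"
}

# Show what would be executed for the article's example session.
gcsf_mount_cmd ostechnix "$HOME/mygoogledrive"
```

Replacing the `printf` line with `gcsf mount "$mountpoint" -s "$session"` turns the sketch into a real mount helper.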
|
||||
|
||||
### 3. Tuxdrive
|
||||
|
||||
**Tuxdrive** is yet another unofficial Google Drive client for Linux. We wrote a detailed guide about Tuxdrive a while ago; please check the following link.
|
||||
|
||||
Of course, there were a few other unofficial Google Drive clients available in the past, such as Grive2 and Syncdrive, but they seem to be discontinued now. I will keep updating this list whenever I come across any new active Google Drive clients.
|
||||
|
||||
And, that's all for now, folks. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.google.com/drive/
|
||||
[2]:https://aur.archlinux.org/packages/google-drive-ocamlfuse/
|
||||
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-2.png
|
||||
[7]:https://github.com/astrada/google-drive-ocamlfuse/wiki/Configuration
|
||||
[8]:https://github.com/astrada/google-drive-ocamlfuse
|
||||
[9]:https://www.reddit.com/r/DataHoarder/comments/8vlb2v/google_drive_as_a_file_system/e1oh9q9/
|
||||
[10]:https://aur.archlinux.org/packages/gcsf-git/
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-3.png
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-4.png
|
||||
[13]:https://github.com/harababurel/gcsf
|
@ -0,0 +1,40 @@
|
||||
Textricator: Data extraction made simple
|
||||
======
|
||||
|
||||

|
||||
|
||||
You probably know the feeling: You ask for data and get a positive response, only to open the email and find a whole bunch of PDFs attached. Data, interrupted.
|
||||
|
||||
We understand your frustration, and we’ve done something about it: Introducing [Textricator][1], our first open source product.
|
||||
|
||||
We’re Measures for Justice, a criminal justice research and transparency organization. Our mission is to provide data transparency for the entire justice system, from arrest to post-conviction. We do this by producing a series of up to 32 performance measures covering the entire criminal justice system, county by county. We get our data in many ways—all legal, of course—and while many state and county agencies are data-savvy, giving us quality, formatted data in CSVs, the data is often bundled inside software with no simple way to get it out. PDF reports are the best they can offer.
|
||||
|
||||
Developers Joe Hale and Stephen Byrne have spent the past two years developing Textricator to extract tens of thousands of pages of data for our internal use. Textricator can process just about any text-based PDF format—not just tables, but complex reports with wrapping text and detail sections generated from tools like Crystal Reports. Simply tell Textricator the attributes of the fields you want to collect, and it chomps through the document, collecting and writing out your records.
|
||||
|
||||
Not a software engineer? Textricator doesn’t require programming skills; rather, the user describes the structure of the PDF and Textricator handles the rest. Most users run it via the command line; however, a browser-based GUI is available.
|
||||
|
||||
We evaluated other great open source solutions like [Tabula][2], but they just couldn’t handle the structure of some of the PDFs we needed to scrape. “Textricator is both flexible and powerful and has cut the time we spend to process large datasets from days to hours,” says Andrew Branch, director of technology.
|
||||
|
||||
At MFJ, we’re committed to transparency and knowledge-sharing, which includes making our software available to anyone, especially those trying to free and share data publicly. Textricator is available on [GitHub][3] and released under [GNU Affero General Public License Version 3][4].
|
||||
|
||||
You can see the results of our work, including data processed via Textricator, on our free [online data portal][5]. Textricator is an essential part of our process and we hope civic tech and government organizations alike can unlock more data with this new tool.
|
||||
|
||||
If you use Textricator, let us know how it helped solve your data problem. Want to improve it? Submit a pull request.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/textricator
|
||||
|
||||
作者:[Stephen Byrne][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://textricator.mfj.io/
|
||||
[2]:https://tabula.technology/
|
||||
[3]:https://github.com/measuresforjustice/textricator
|
||||
[4]:https://www.gnu.org/licenses/agpl-3.0.en.html
|
||||
[5]:https://www.measuresforjustice.org/portal/
|
@ -1,95 +0,0 @@
|
||||
对数据隐私持开放的态度
|
||||
======
|
||||

|
||||
|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
今天是[数据隐私日][1],(在欧洲叫"数据保护日"),你可能会认为现在我们处于一个开源的世界中,所有的数据都应该免费,[就像人们想的那样][2],但是现实并没那么简单。主要有两个原因:
|
||||
1. 我们中的大多数(不仅仅是在开源中)认为至少有些关于我们自己的数据是不愿意分享出去的(我在之前发表的一篇文章中列举了一些列子[3])
|
||||
2. 我们很多人虽然在开源中工作,但事实上是为了一些商业公司或者其他一些组织工作,也是在合法的要求范围内分享数据。
|
||||
|
||||
所以实际上,数据隐私对于每个人来说是很重要的。
|
||||
|
||||
事实证明,在美国和欧洲之间,人们和政府认为让组织使用的数据的起点是有些不同的。前者通常为实体提供更多的自由度,更愤世嫉俗的是--大型的商业体利用他们收集到的关于我们的数据。在欧洲,完全是另一观念,一直以来持有的多是有更多约束限制的观念,而且在5月25日,欧洲的观点可以说取得了胜利。
|
||||
|
||||
## 通用数据保护条例的影响
|
||||
|
||||
那是一个相当全面的声明,其实事实上就是欧盟在2016年通过的一项关于通用数据保护的立法,使它变得可实施。数据通用保护条例在私人数据怎样才能被保存,如何才能被使用,谁能使用,能被持有多长时间这些方面设置了严格的规则。它描述了什么数据属于私人数据--而且涉及的条目范围非常广泛,从你的姓名家庭住址到你的医疗记录以及接通你电脑的IP地址。
|
||||
|
||||
通用数据保护条例的重要之处是他并不仅仅适用于欧洲的公司,如果你是阿根廷人,日本人,美国人或者是俄罗斯的公司而且你正在收集涉及到欧盟居民的数据,你就要受到这个条例的约束管辖。
|
||||
|
||||
“哼!” 你可能会这样说,“我的业务不在欧洲:他们能对我有啥约束?” 答案很简答:如果你想继续在欧盟做任何生意,你最好遵守,因为一旦你违反了通用数据保护条例的规则,你将会受到你全球总收入百分之四的惩罚。是的,你没听错,是全球总收入不是仅仅在欧盟某一国家的的收入,也不只是净利润,而是全球总收入。这将会让你去叮嘱告知你的法律团队,他们就会知会你的整个团队,同时也会立即去指引你的IT团队,确保你的行为相当短的时间内是符合要求的。
|
||||
|
||||
看上去这和欧盟之外的城市没有什么相关性,但其实不然,对大多数公司来说,对所有的他们的顾客、合作伙伴以及员工实行同样的数据保护措施是件既简单又有效的事情,而不是只是在欧盟的城市实施,这将会是一件很有利的事情。2
|
||||
|
||||
然而,数据通用保护条例不久将在全球实施并不意味着一切都会变的很美好:事实并非如此,我们一直在丢弃关于我们自己的信息--而且允许公司去使用它。
|
||||
|
||||
有一句话是这么说的(尽管很争议):“如果你没有在付费,那么你就是产品。”这句话的意思就是如果你没有为某一项服务付费,那么其他的人就在付费使用你的数据。
|
||||
你有付费使用Facebook、推特?谷歌邮箱?你觉得他们是如何赚钱的?大部分是通过广告,一些人会争论那是他们向你提供的一项服务而已,但事实上是他们在利用你的数据从广告商里获取收益。你不是一个真正的广告的顾客-只有当你从看了广告后买了他们的商品之后你才变成了他们的顾客,但直到这个发生之前,都是广告平台和广告商的关系。
|
||||
|
||||
有些服务是允许你通过付费来消除广告的(流媒体音乐平台声破天就是这样的),但从另一方面来讲,即使你认为付费的服务也可以启用广告(列如,亚马逊正在允许通过Alexa广告)除非我们想要开始为这些所有的免费服务付费,我们需要清除我们所放弃的,而且在我们想要揭发和不想的里面做一些选择。
|
||||
|
||||
### 谁是顾客?
|
||||
|
||||
关于数据的另一个问题一直在困扰着我们,它是产生的数据量的直接结果。有许多组织一直在产生巨量的数据,包括公共的组织比如大学、医院或者是政府部门4--
|
||||
而且他们没有能力去储存这些数据。如果这些数据没有长久的价值也就没什么要紧的,但事实正好相反,随着处理大数据的工具正在开发中,而且这些组织也认识到他们现在以及在不久的将来将能够去开采这些数据。
|
||||
|
||||
然而他们面临的是,随着数据的增长和存储量的不足他们是如何处理的。幸运--而且我是带有讽刺意味的使用了这个词,5大公司正在介入去帮助他们。“把你们的数据给我们,”他们说,“我们将免费保存。我们甚至让你随时能够使用你所收集到的数据!”这听起来很棒,是吗?这是大公司的一个极具代表性的列子,站在慈善的立场上帮助公共组织管理他们收集到的关于我们的数据。
|
||||
|
||||
不幸的是,慈善不是唯一的理由。他们是附有条件的:作为同意保存数据的交换条件,这些公司得到了将数据访问权限出售非第三方的权利。你认为公共组织,或者是被收集数据的人在数据被出售使用权使给第三方在他们如何使用上面能有发言权吗?我将把这个问题当做一个练习留给读者去思考。7
|
||||
|
||||
### 开放和积极
|
||||
|
||||
然而并不只有坏消息。政府中有一项在逐渐发展起来的“开放数据”运动鼓励部门能够将免费开放他们的数据给公众或者其他组织。这项行动目前正在被实施立法。许多
|
||||
支援组织--尤其是那些收到公共基金的--正在开始推动同样的活动。即使商业组织也有些许的兴趣。而且,在技术上已经可行了,例如围绕不同的隐私和多方计算上,正在允许我们根据数据设置和不揭露太多关于个人的前提下开采数据--一个历史性的计算问题比你想象的要容易处理的多。
|
||||
|
||||
这些对我们来说意味着什么呢?我之前在网站Opensource.com上写过关于[开源的共享福利][4],而且我越来越相信我们需要把我们的视野从软件拓展到其他区域:硬件,组织,和这次讨论有关的,数据。让我们假设一下你是A公司要提向另一家公司提供一项服务,客户B。在游戏中有四种不同类型的数据:
|
||||
1. 数据完全开放:对A和B都是可得到的,世界上任何人都可以得到
|
||||
2. 数据是已知的,共享的,和机密的:A和B可得到,但其他人不能得到。
|
||||
3. 数据是公司级别上保密的:A公司可以得到,但B顾客不能
|
||||
4. 数据是顾客级别保密的:B顾客可以得到,但A公司不能
|
||||
|
||||
首先,也许我们对数据应该更开放些,将数据默认放到选项一中。如果那些数据对所有人开放--在无人驾驶、语音识别,矿藏以及人口数据统计会有相当大的作用的,9
|
||||
如果我们能够找到方法将数据放到选项2,3和4中,不是很好嘛--或者至少它们中的一些--在选项一中是可以实现的,同时仍将细节保密?这就是研究这些新技术的希望。
|
||||
然而又很长的路要走,所以不要太兴奋,同时,开始考虑将你的的一些数据默认开放。
|
||||
|
||||
### 一些具体的措施
|
||||
|
||||
我们如何处理数据的隐私和开放?下面是我想到的一些具体的措施:欢迎大家评论做出更多的贡献。
|
||||
* 检查你的组织是否正在认真严格的执行通用数据保护条例。如果没有,去推动实施它。
|
||||
* 要默认去加密敏感数据(或者适当的时候用散列算法),当不再需要的时候及时删掉--除非数据正在被处理使用否则没有任何借口让数据清晰可见。
|
||||
* 当你注册一个服务的时候考虑一下你公开了什么信息,特别是社交媒体类的。
|
||||
* 和你的非技术朋友讨论这个话题。
|
||||
* 教育你的孩子,你朋友的孩子以及他们的朋友。然而最好是去他们的学校和他们的老师交谈在他们的学校中展示。
|
||||
* 鼓励你工作志愿服务的组织,或者和他们互动推动数据的默认开放。不是去思考为什么我要使数据开放而是以我为什么不让数据开放开始。
|
||||
* 尝试去访问一些开源数据。开采使用它。开发应用来使用它,进行数据分析,画漂亮的图,10 制作有趣的音乐,考虑使用它来做些事。告诉组织去使用它们,感谢它们,而且鼓励他们去做更多。
|
||||
|
||||
|
||||
|
||||
1. 我承认你可能尽管不会
|
||||
2. 假设你坚信你的个人数据应该被保护。
|
||||
3. 如果你在思考“极好的”的寓意,在这点上你并不孤独。
|
||||
4. 事实上这些机构能够有多开放取决于你所居住的地方。
|
||||
5. 假设我是英国人,那是非常非常大的剂量。
|
||||
6. 他们可能是巨大的公司:没有其他人能够负担得起这么大的存储和基础架构来使数据保持可用。
|
||||
7. 不,答案是“不”。
|
||||
8. 尽管这个列子也同样适用于个人。看看:A可能是Alice,B 可能是BOb...
|
||||
9. 并不是说我们应该暴露个人的数据或者是这样的数据应该被保密,当然--不是那类的数据。
|
||||
10. 我的一个朋友当她接孩子放学的时候总是下雨,所以为了避免确认失误,她在整个学年都访问天气信息并制作了图表分享到社交媒体上。
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/being-open-about-data-privacy
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者FelixYFZ](https://github.com/FelixYFZ)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mikecamel
|
||||
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
|
||||
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
|
||||
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
|
||||
[4]:https://opensource.com/article/17/11/commonwealth-open-source
|
||||
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
|
@ -1,78 +1,78 @@
|
||||
谁是 Python 的目标受众?
|
||||
盘点 Python 的目标受众
|
||||
============================================================
|
||||
|
||||
Python 是为谁设计的?
|
||||
|
||||
* [Python 使用情况的参考][8]
|
||||
* [Python 的参考解析器使用情况][8]
|
||||
* [CPython 主要服务于哪些受众?][9]
|
||||
* [这些相关问题的原因是什么?][10]
|
||||
* [适合进入 PyPI 规划的方面有哪些?][11]
|
||||
* [当增加它到标准库中时,为什么一些 API 会被改变?][12]
|
||||
* [为什么一些 API 是以<ruby>临时<rt>provisional</rt></ruby>的形式被增加的?][13]
|
||||
* [当添加它们到标准库中时,为什么一些 API 会被改变?][12]
|
||||
* [为什么一些 API 是以<ruby>临时<rt>provisional</rt></ruby>的形式被添加的?][13]
|
||||
* [为什么只有一些标准库 API 被升级?][14]
|
||||
* [标准库任何部分都有独立的版本吗?][15]
|
||||
* [这些注意事项为什么很重要?][16]
|
||||
|
||||
几年前, 我在 python-dev 邮件列表里面、活跃的 CPython 核心开发人员、以及决定参与该过程的人员中[强调][38]说,“CPython 的动作太快了也太慢了”,作为这种冲突的一个主要原因是,他们不能有效地使用他们的个人时间和精力。
|
||||
几年前,我在 python-dev 邮件列表中,以及在活跃的 CPython 核心开发人员和认为参与这一过程不是有效利用他们个人时间和精力的人中[强调][38]说,“CPython 的发展太快了也太慢了”是很多冲突的原因之一。
|
||||
|
||||
我一直在考虑这种情况,在参与的这几年,我也花费了一些时间去思考这一点,在我写那篇文章的时候,我还在<ruby>波音防务澳大利亚公司<rt>Boeing Defence Australia</rt></ruby>工作。下个月,我将离开波音进入<ruby>红帽亚太<rt>Red Hat Asia-Pacific</rt></ruby>,并且开始在大企业的[开源供应链管理][39]上形成<ruby>再分发者<rt>redistributor</rt></ruby>层面的观点。
|
||||
我一直认为事实确实如此,但这也是一个要点,在这几年中我也花费了一些时间去反思它。在我写那篇文章的时候,我还在<ruby>波音防务澳大利亚公司<rt>Boeing Defence Australia</rt></ruby>工作。下个月,我将离开波音进入<ruby>红帽亚太<rt>Red Hat Asia-Pacific</rt></ruby>,并且开始在大企业的[开源供应链管理][39]上获得<ruby>再分发者<rt>redistributor</rt></ruby>层面的视角。
|
||||
|
||||
### Python 使用情况的参考
|
||||
### Python 的参考解析器使用情况
|
||||
|
||||
我尝试将 CPython 的使用情况分解如下,它虽然有些过于简化(注意,这些分类并不是很清晰,他们仅关注影响新软件特性和版本的部署不同因素):
|
||||
我尝试将 CPython 的使用情况分解如下,它虽然有些过于简化(注意,这些分类的界线并不是很清晰,他们仅关注于思考新软件特性和版本发布后不同因素的影响):
|
||||
|
||||
* 教育类:教育工作者的主要兴趣在于建模方法的教学和计算操作方面,_不会去_ 写或维护软件产品。例如:
|
||||
* 教育类:教育工作者的主要兴趣在于建模方法的教学和计算操作方面,_不会去_ 写或维护生产级别的软件。例如:
|
||||
* 澳大利亚的 [数字课程][1]
|
||||
* Lorena A. Barba 的 [AeroPython][2]
|
||||
* 个人的自动化爱好者的项目:主要的是软件,经常是只有软件,而且用户通常是写它的人。例如:
|
||||
* my Digital Blasphemy [image download notebook][3]
|
||||
* 个人级的自动化和爱好者的项目:主要的是软件,而且经常是只有软件,用户通常是写它的人。例如:
|
||||
* my Digital Blasphemy [图片下载器][3]
|
||||
* Paul Fenwick 的 (Inter)National [Rick Astley Hotline][4]
|
||||
* <ruby>组织<rt>organisational</rt></ruby>过程的自动化:主要是软件,经常是只有软件,用户是为了利益而编写它的组织。例如:
|
||||
* CPython 的 [核心开发工作流工具][5]
|
||||
* Linux 发行版的开发、构建 & 发行工具
|
||||
* <ruby>组织<rt>organisational</rt></ruby>过程自动化:主要是软件,而且经常是只有软件,用户是为了利益而编写它的组织。例如:
|
||||
* CPython 的 [核心工作流工具][5]
|
||||
* Linux 发行版的开发、构建 & 发行管理工具
|
||||
* “<ruby>一劳永逸<rt>Set-and-forget</rt></ruby>” 的基础设施中:这里是软件,(这种说法有时候有些争议),在生命周期中该软件几乎不会升级,但是,在底层平台可能会升级。例如:
|
||||
* 大多数的自我管理的企业或机构的基础设施(在那些资金充足的可持续工程计划中,这是让人非常不安的)
|
||||
* 大多数的自我管理的企业或机构的基础设施(在那些资金充足的可持续工程计划中,这种情况是让人非常不安的)
|
||||
* 拨款资助的软件(当最初的拨款耗尽时,维护通常会终止)
|
||||
* 有严格认证要求的软件(如果没有绝对必要的话,从经济性考虑,重新认证比常规更新来说要昂贵很多)
|
||||
* 没有自动升级功能的嵌入式软件系统
|
||||
* 持续升级的基础设施:具有健壮的持续工程化模型的软件,对于依赖和平台升级被认为是例行的,而不去关心其它的代码改变。例如:
|
||||
* 持续升级的基础设施:具有健壮支撑的工程学模型的软件,对于依赖和平台升级通常是例行的,而不去关心其它的代码改变。例如:
|
||||
* Facebook 的 Python 服务基础设施
|
||||
* 滚动发布的 Linux 分发版
|
||||
* 大多数的公共 PaaS 无服务器环境(Heroku、OpenShift、AWS Lambda、Google Cloud Functions、Azure Cloud Functions等等)
|
||||
* 间歇性升级的标准的操作环境:对其核心组件进行常规升级,但这些升级以年为单位进行,而不是周或月。例如:
|
||||
* 间隔性升级的标准的操作环境:对其核心组件进行常规升级,但这些升级以年为单位进行,而不是周或月。例如:
|
||||
* [VFX 平台][6]
|
||||
* 长周期支持的 Linux 分发版
|
||||
* CPython 和 Python 标准库
|
||||
* 基础设施管理 & 业务流程工具(比如 OpenStack、 Ansible)
|
||||
* 基础设施管理 & 编排工具(比如 OpenStack、 Ansible)
|
||||
* 硬件控制系统
|
||||
* 短生命周期的软件:软件仅被使用一次,然后就丢弃或忽略,而不是随后接着升级。例如:
|
||||
* <ruby>临时<rt>Ad hoc</rt></ruby>自动脚本
|
||||
* 被确定为 “终止” 的单用户游戏(你玩它们一次后,甚至都忘了去卸载它,或许在一个新的设备上都不打算再去安装它)
|
||||
* 短暂的或非持久状态的单用户游戏(如果你卸载并重安装它们,你的游戏体验也不会有什么大的变化)
|
||||
* 特定事件的应用程序(这些应用程序与特定的物理事件捆绑,一旦事件结束,这些应用程序就不再有用了)
|
||||
* 定期使用的应用程序:部署后定期升级的软件。例如:
|
||||
* 频繁使用的应用程序:部署后定期升级的软件。例如:
|
||||
* 业务管理软件
|
||||
* 个人 & 专业的生产力应用程序(比如,Blender)
|
||||
* 开发工具 & 服务(比如,Mercurial、 Buildbot、 Roundup)
|
||||
* 多用户游戏,和其它明显的处于持续状态的还没有被定义为 “终止” 的游戏
|
||||
* 有自动升级功能的嵌入式软件系统
|
||||
* 共享的抽象层:软件组件的设计使它能够在特定的问题域有效地工作,即使你没有亲自掌握该领域的所有错综复杂的东西。例如:
|
||||
* 共享的抽象层:在一个特定的问题领域中,设计用于让工作更高效的软件组件。即便是你没有亲自掌握该领域的所有错综复杂的东西。例如:
|
||||
* 大多数的运行时库和归入这一类的框架(比如,Django、Flask、Pyramid、SQL Alchemy、NumPy、SciPy、requests)
|
||||
* 也适合归入这里的许多测试和类型引用工具(比如,pytest、Hypothesis、vcrpy、behave、mypy)
|
||||
* 适合归入这一类的许多测试和类型推断工具(比如,pytest、Hypothesis、vcrpy、behave、mypy)
|
||||
* 其它应用程序的插件(比如,Blender plugins、OpenStack hardware adapters)
|
||||
* 本身就代表了 “Python 世界” 的基准的标准库(那是一个 [难以置信的复杂][7] 的世界观)
|
||||
* 本身就代表了 “Python 世界” 基准的标准库(那是一个 [难以置信的复杂][7] 的世界观)
|
||||
|
||||
### CPython 主要服务于哪些受众?
|
||||
|
||||
最终,CPython 和标准库的主要受众是哪些,不论什么原因,比较有限的标准库和从 PyPI 安装的显式声明的第三方库的组合,所提供的服务是不够的。
|
||||
从根本上说,CPython 和标准库的主要受众是哪些呢,是那些不管出于什么原因,将有限的标准库和从 PyPI 显式声明安装的第三方库组合起来所提供的服务,还不能够满足需求的那些人。
|
||||
|
||||
为了更进一步简化上面回顾的不同用法和部署模式,尽可能的总结,将最大的 Python 用户群体分开来看,一种是,在一些环境中将 Python 作为一种_脚本语言_使用的;另外一种是将它用作一个_应用程序开发语言_,最终发布的是一种产品而不是他们的脚本。
|
||||
为了更进一步简化上面回顾的不同用法和部署模型,尽可能的总结,将最大的 Python 用户群体分开来看,一种是,在一些感兴趣的环境中将 Python 作为一种_脚本语言_使用的那些人;另外一种是将它用作一个_应用程序开发语言_的那些人,他们最终发布的是一种产品而不是他们的脚本。
|
||||
|
||||
当把 Python 作为一种脚本语言来使用时,它们典型的开发者特性包括:
|
||||
把 Python 作为一种脚本语言来使用的开发者的典型特性包括:
|
||||
|
||||
* 主要的处理单元是由一个 Python 文件组成的(或 Jupyter notebook !),而不是一个 Python 目录和元数据文件
|
||||
* 没有任何形式的单独的构建步骤 —— 是_作为_一个脚本分发的,类似于分发一个单独的 shell 脚本的方法
|
||||
* 没有单独的安装步骤(除了下载这个文件到一个合适的位置),除了在目标系统上要求预配置运行时环境
|
||||
* 主要的工作单元是由一个 Python 文件组成的(或 Jupyter notebook !),而不是一个 Python 和元数据文件的目录
|
||||
* 没有任何形式的单独的构建步骤 —— 是_作为_一个脚本分发的,类似于分发一个独立的 shell 脚本的方式
|
||||
* 没有单独的安装步骤(除了下载这个文件到一个合适的位置),除了在目标系统上要求预配置运行时环境外
|
||||
* 没有显式的规定依赖关系,除了最低的 Python 版本,或一个预期的运行环境声明。如果需要一个标准库以外的依赖项,他们会通过一个环境脚本去提供(无论是操作系统、数据分析平台、还是嵌入 Python 运行时的应用程序)
|
||||
* 没有单独的测试套件,使用“通过你给定的输入,这个脚本是否给出了你期望的结果?” 这种方式来进行测试
|
||||
* 如果在执行前需要测试,它将以 “试运行” 和 “预览” 模式来向用户展示软件_将_怎样运行
|
||||
@ -80,79 +80,79 @@ Python 是为谁设计的?
|
||||
|
||||
相比之下,使用 Python 作为一个应用程序开发语言的开发者特征包括:
|
||||
|
||||
* 主要的工作单元是由 Python 的目录和元数据文件组成的,而不是单个 Python 文件
|
||||
* 在发布之前有一个单独的构建步骤去预处理应用程序,即使是把它的这些文件一起打包进一个 Python sdist、wheel 或 zipapp 文档
|
||||
* 主要的工作单元是由 Python 和元数据文件组成的目录,而不是单个 Python 文件
|
||||
* 在发布之前有一个单独的构建步骤去预处理应用程序,哪怕是把它的这些文件一起打包进一个 Python sdist、wheel 或 zipapp 文档中
|
||||
* 是否有独立的安装步骤去预处理将要使用的应用程序,取决于应用程序是如何打包的,和支持的目标环境
|
||||
* 外部的依赖直接在项目目录中的一个元数据文件中表示(比如,`pyproject.toml`、`requirements.txt`、`Pipfile`),或作为生成的发行包的一部分(比如,`setup.py`、`flit.ini`)
|
||||
* 存在一个独立的测试套件,或者作为一个 Python API 的一个测试单元、功能接口的集成测试、或者是两者的一个结合
|
||||
* 静态分析工具的使用是在项目级配置的,作为测试管理的一部分,而不是依赖
|
||||
* 外部的依赖明确表示为项目目录中的一个元数据文件中,要么是直接在项目的目录中(比如,`pyproject.toml`、`requirements.txt`、`Pipfile`),要么是作为生成的发行包的一部分(比如,`setup.py`、`flit.ini`)
|
||||
* 存在一个独立的测试套件,或者作为一个 Python API 的一个单元测试,或者作为功能接口的集成测试,或者是两者的一个结合
|
||||
* 静态分析工具的使用是在项目级配置的,并作为测试管理的一部分,而不是取决于环境
|
||||
|
||||
作为以上分类的一个结果,CPython 和标准库最终提供的主要用途是,在合适的 CPython 特性发布后 3 - 5 年,为教育和<ruby>临时<rt>ad hoc</rt></ruby>的 Python 脚本环境呈现的功能,定义重新分发的独立基准。
|
||||
作为以上分类的一个结果,CPython 和标准库的主要用途是,在相应的 CPython 特性发布后,为教育和<ruby>临时<rt>ad hoc</rt></ruby>的 Python 脚本环境,最终提供的是定义重分发者假定功能的独立基准 3- 5 年。
|
||||
|
||||
对于<ruby>临时<rt>ad hoc</rt></ruby>脚本使用的情况,这个 3 - 5 年的延迟是由于新版本重分发给用户的延迟造成的,以及那些再分发版的用户花在修改他们的标准操作环境上的时间。
|
||||
对于<ruby>临时<rt>ad hoc</rt></ruby>脚本使用的情况,这个 3 - 5 年的延迟是由于重分发者给用户制作新版本的延迟造成的,以及那些重分发版本的用户们花在修改他们的标准操作环境上的时间。
|
||||
|
||||
在教育环境中的情况,教育工作者需要一些时间去评估新特性,和决定是否将它们包含进提供给他们的学生的课程中。
|
||||
在教育环境中的情况是,教育工作者需要一些时间去评估新特性,和决定是否将它们包含进提供给他们的学生的课程中。
|
||||
|
||||
### 这些相关问题的原因是什么?
|
||||
|
||||
这篇文章很大程序上是受 Twitter 上对 [我的这个评论][20] 的讨论鼓舞的,它援引了定义在 [PEP 411][21] 中<ruby>临时<rt>Provisional</rt></ruby> API 的情形,作为一个开源项目的例子,对用户发出事实上的邀请,请其作为共同开发者去积极参与设计和开发过程,而不是仅被动使用已准备好的最终设计。
|
||||
这篇文章很大程度上是受 Twitter 上对 [我的这个评论][20] 的讨论鼓舞的,它援引了定义在 [PEP 411][21] 中<ruby>临时<rt>Provisional</rt></ruby> API 的情形,作为一个开源项目的例子,对用户发出事实上的邀请,请其作为共同开发者去积极参与设计和开发过程,而不是仅被动使用已准备好的最终设计。
|
||||
|
||||
这些回复包括一些在更高级别的库中支持的临时 API 的困难程度的一些沮丧表述,这些库没有临时状态的传递,以及因此而被限制为只有临时 API 的最新版本支持这些相关特性,而没有任何的早期迭代版本。
|
||||
这些回复包括一些在更高级别的库中支持临时 API 的困难程度的一些沮丧性表述、没有这些库做临时状态的传递、以及因此而被限制为只有临时 API 的最新版本才支持这些相关特性,而不是任何早期版本的迭代。
|
||||
|
||||
我的 [主要反应][22] 是去建议,开源提供者应该努力加强他们需要的有限支持,以加强他们的维护工作的可持续性。这意味着,如果支持老版本的临时 API 是非常痛苦的,然后,只有项目开发人员自己需要时,或者,有人为此支付费用时,他们才会去提供支持。这类似于我的观点,志愿者提供的项目是否应该免费支持老的商业的长周期支持的 Python 版本,这对他们来说是非常麻烦的事,我[不认他们应该去做][23],正如我所期望的那样,大多数这样的需求都来自于管理差劲的、习以为常的惯性,而不是真正的需求(真正的需求,它应该去支付费用来解决问题)
|
||||
我的 [主要回应][22] 是,建议开源提供者应该强制实施有限支持,通过这种强制的有限支持可以让个人的维护努力变得可持续。这意味着,如果对临时 API 的老版本提供迭代支持是非常痛苦的,到那时,只有在项目开发人员自己需要、或有人为此支付费用时,他们才会去提供支持。这与我的这个观点是类似的,那就是,志愿者提供的项目是否应该免费支持老的、商业性质的、长周期的 Python 版本,这对他们来说是非常麻烦的事,我[不认为他们应该去做][23],正如我所期望的那样,大多数这样的需求都来自于管理差劲的、习以为常的惯性,而不是真正的需求(真正的需求,应该去支付费用来解决问题)。
|
||||
|
||||
然而,我的[第二个反应][24]是,去认识到这一点,尽管多年来一直在讨论这个问题(比如,在上面链接中 2011 的一篇年的文章中,以及在 Python 3 问答的回答中 [在这里][25]、[这里][26]、和[这里][27],和在去年的 [Python 包生态系统][28]上的一篇文章中的一小部分),我从来没有真实尝试直接去解释过它对标准库设计过程中的影响。
|
||||
而我的[第二个回应][24]是去实现这一点,尽管多年来一直在讨论这个问题(比如,在上面链接中最早在 2011 年的一篇的文章中,以及在 Python 3 问答的回复中的 [这里][25]、[这里][26]、和[这里][27],以及去年的这篇文章 [Python 包生态系统][28] 中也提到了一些),但我从来没有真实地尝试直接去解释它在标准库设计过程中的影响。
|
||||
|
||||
如果没有这些背景,设计过程中的一些方面,如临时 API 的介绍,或者是<ruby>受到不同的启发<rt>inspired-by-not-the-same-as</rt></ruby>的介绍,看起来似乎是很荒谬的,因为它们似乎是在试图标准化 API,而实际上并没有对 API 进行标准化。
|
||||
如果没有这些背景,设计过程中的一部分,比如临时 API 的引入,或者是<ruby>受启发而不同于它<rt>inspired-by-not-the-same-as</rt></ruby>的引入,看起来似乎是完全没有意义的,因为他们看起来似乎是在尝试对 API 进行标准化,而实际上并没有。
|
||||
|
||||
### 适合进入 PyPI 规划的方面有哪些?
|
||||
|
||||
提交给 python-ideas 或 python-dev 的_任何_建议的第一个门槛就是,清楚地回答这个问题,“为什么 PyPI 上的一个模块不够好?”。绝大多数的建议都在这一步失败了,但有几个常见的方面可以考虑:
|
||||
提交给 python-ideas 或 python-dev 的_任何_建议所面临的第一个门槛就是清楚地回答这个问题:“为什么 PyPI 上的一个模块不够好?”。绝大多数的建议都在这一步失败了,为了通过这一步,这里有几个常见的话题:
|
||||
|
||||
* 大多数新手可能经常是从互联网上去 “复制粘贴” 错误的指导,而不是去下载一个合适的第三方库。(比如,这就是为什么存在 `secrets` 库的原因:它使得人们很少去使用 `random` 模块,因为安全敏感的原因,这是用于游戏和统计模拟的)
|
||||
* 这个模块是用于提供一个实现的参考,并去允许与其它的相互竞争的实现之间提供互操作性,而不是对所有人的所有事物都是必要的。(比如,`asyncio`、`wsgiref`、`unittest`、和 `logging` 全部都是这种情况)
|
||||
* 这个模块是用于标准库的其它部分(比如,`enum` 就是这种情况,像`unittest`一样)
|
||||
* 与其去下载一个合适的第三方库,新手一般可能更倾向于从互联网上 “复制粘贴” 错误的指导。(比如,这就是为什么存在 `secrets` 库的原因:它使得人们很少去使用 `random` 模块,由于安全敏感的原因,它预期用于游戏和统计模拟的)
|
||||
* 这个模块是打算去提供一个实现的参考,并允许与其它的相互竞争的实现之间提供互操作性,而不是对所有人的所有事物都是必要的。(比如,`asyncio`、`wsgiref`、`unittest`、和 `logging` 全部都是这种情况)
|
||||
* 这个模块是预期用于标准库的其它部分(比如,`enum` 就是这种情况,像`unittest`一样)
|
||||
* 这个模块是被设计去支持语言之外的一些语法(比如,`contextlib`、`asyncio` 和 `typing` 模块,就是这种情况)
|
||||
* 这个模块只是普通的临时的脚本用途(比如,`pathlib` 和 `ipaddress` 就是这种情况)
|
||||
* 这个模块被用于一个教育环境(比如,`statistics` 模块允许进行交互式地探索统计的概念,尽管你不会用它来做全部的统计分析)
|
||||
* 这个模块被用于一个教育环境(比如,`statistics` 模块允许进行交互式地探索统计的概念,尽管你可能根本就不会用它来做全部的统计分析)
|
||||
|
||||
通过前面的 “PyPI 是不是明显不够好” 的检查,一个模块还不足以确保被接收到标准库中,但它已经足够转变问题为 “在接下来的几年中,你所推荐的要包含的库能否对一般的 Python 开发人员的经验有所改提升?”
|
||||
通过前面的 “PyPI 是不是明显不够好” 的检查,一个模块还不足以确保被接收到标准库中,但它已经足以转变问题为 “在接下来的几年中,你所推荐的要包含的库能否对一般的入门级 Python 开发人员的经验有所提升?”
|
||||
|
||||
标准库中的 `ensurepip` 和 `venv` 模块的介绍也明确地告诉再分发者,我们期望的 Python 级别的打包和安装工具在任何平台的特定分发机制中都予以支持。
|
||||
标准库中的 `ensurepip` 和 `venv` 模块的引入也明确地告诉再分发者,我们期望的 Python 级别的打包和安装工具在任何平台的特定分发机制中都予以支持。
|
||||
|
||||
### 当增加它到标准库中时,为什么一些 API 会被改变?
|
||||
### 当添加它们到标准库中时,为什么一些 API 会被改变?
|
||||
|
||||
现在已经存在的第三方模块有时候会被批量地采用到标准库中,在其它情况下,实际上增加进行的是重新设计和重新实现的 API,只是它参照了现有 API 的用户体验,但是,根据另外的设计参考,删除或修改了一些细节,和附属于语言参考实现部分的权限。
|
||||
现在已经存在的第三方模块有时候会被批量地采用到标准库中,在其它情况下,实际上添加的是吸收了用户对现有 API 体验之后,进行重新设计和重新实现的 API,但是,会根据另外的设计考虑和已经成为其中一部分的语言实现参考来进行一些删除或细节修改。
|
||||
|
||||
例如,不像第三方库的广受欢迎的前身 `path.py`,`pathlib` 并_没有_定义字符串子类,而是以独立的类型替代。作为解决文件互操作性问题的结果,定义了文件系统路径协议,它允许与文件系统路径一起使用的接口,去使用更大范围的对象。
|
||||
例如,不像广受欢迎的第三方库的前身 `path.py`,`pathlib` 并_没有_定义字符串子类,而是以独立的类型替代。作为解决文件互操作性问题的结果,定义了文件系统路径协议,它允许使用文件系统路径的接口去使用更多的对象。
|
||||
|
||||
为 `ipaddress` 模块设计的 API 将地址和网络的定义,调整为显式的单独主机接口定义(IP 地址关联到特定的 IP 网络),是为教学 IP 地址概念而提供的一个最佳的教学工具,而最原始的 `ipaddr` 模块中,使用网络术语的方式不那么严格。
|
||||
为了在“IP 地址” 这个概念的教学上提供一个更好的工具,为 `ipaddress` 模块设计的 API,将地址和网络的定义调整为显式的、独立定义的主机接口(IP 地址被关联到特定的 IP 网络),而最原始的 `ipaddr` 模块中,在网络术语的使用方式上不那么严格。
|
||||
|
||||
在其它情况下,标准库被构建为多种现有方法的一个综合,而且,还有可能依赖于定义现有库的 API 时所不存在的特性。对于 `asyncio` 和 `typing` 模块就有这些考虑因素,虽然后来对 `dataclasses` API 的考虑因素被放到了 PEP 557 (它可以被总结为 “像属性一样,但是使用变量注释作为字段声明”)。
|
||||
另外的情况是,标准库将综合多种现有的方法的来构建,以及为早已存在的库定义 API 时,还有可能依靠不存在的语法特性。比如,`asyncio` 和 `typing` 模块就全部考虑了这些因素,虽然在 PEP 557 中正在考虑将后者所考虑的因素应用到 `dataclasses` API 上。(它可以被总结为 “像属性一样,但是使用可变注释作为字段声明”)。
|
||||
|
||||
这类改变的工作原理是,这类库不会消失,而且,它们的维护者经常并不关心与标准库相关的限制(特别是,相对缓慢的发行节奏)。在这种情况下,对于标准库版本的文档来说,使用 “See Also” 链接指向原始模块是很常见的,特别是,如果第三方版本提供了标准库模块中忽略的其他特性和灵活性时。
|
||||
这类改变的原理是,这类库不会消失,并且它们的维护者对标准库维护相关的那些限制通常并不感兴趣(特别是,相对缓慢的发行节奏)。在这种情况下,在标准库文档的更新版本中,很常见的做法是使用 “See Also” 链接指向原始模块,尤其是在第三方版本提供了额外的特性以及标准库模块中忽略的那些特性时。
|
||||
|
||||
### 为什么一些 API 是以临时的形式被增加的?
|
||||
### 为什么一些 API 是以临时的形式被添加的?
|
||||
|
||||
虽然 CPython 确实设置了 API 的弃用策略,但我们通常不希望在没有令人信服的理由的情况下去使用该策略(在其他项目试图与 Python 2.7 保持兼容性时,尤其如此)。
|
||||
虽然 CPython 维护了 API 的弃用策略,但在没有正当理由的情况下,我们通常不会去使用该策略(在其他项目试图与 Python 2.7 保持兼容性时,尤其如此)。
|
||||
|
||||
然而,当增加一个受已有的第三方启发去设计的而不是去拷贝的新 API 时,在设计实践中,有些设计实践可能会出现问题,这比平常的风险要高。
|
||||
然而在实践中,当添加这种受已有的第三方启发而不是直接精确拷贝第三方设计的新 API 时,所承担的风险要高于一些正常设计决定可能出现问题的风险。
|
||||
|
||||
当我们考虑这种改变的风险比平常要高,我们将相关的 API 标记为临时(`provisional`),表示保守的终端用户要避免完全依赖它们,并且,共享抽象层的开发者可能希望考虑,对他们准备支持的临时 API 的版本实施比平时更严格的限制。
|
||||
当我们考虑这种改变的风险比平常要高,我们将相关的 API 标记为临时,表示保守的终端用户要避免完全依赖它们,而共享抽象层的开发者可能希望,对他们准备去支持的那个临时 API 的版本,考虑实施比平时更严格的限制。
|
||||
|
||||
### 为什么只有一些标准库 API 被升级?
|
||||
|
||||
这里简短的回答得到升级的主要 API 有哪些?:
|
||||
|
||||
* 不适合外部因素驱动的补充更新
|
||||
* 无论是临时脚本使用情况,还是为促进将来的多个第三方解决方案之间的互操作性,都有明显好处的
|
||||
* 不太可能有大量的外部因素干扰的附加更新
|
||||
* 无论是对临时脚本使用案例还是对促进将来多个第三方解决方案之间的互操作性,都有明显好处的
|
||||
* 对这方面感兴趣的人提交了一个可接受的建议
|
||||
|
||||
如果一个用于应用程序开发的模块存在一个非常明显的限制,比如,`datetime`,如果重分发版通过替代一个现成的第三方模块有所改善,比如,`requests`,或者,如果标准库的发布节奏与所需要的包之间真的存在冲突,比如,`certifi`,那么,计划对标准库版本进行改变的因素将显著减少。
|
||||
如果一个用于应用程序开发的模块存在一个非常明显的限制,比如,`datetime`,如果重分发者通过可供替代的第三方选择很容易地实现了改善,比如,`requests`,或者,如果标准库的发布节奏与所需要的包之间真的存在冲突,比如,`certifi`,那么,建议对标准库版本进行改变的因素将显著减少。
|
||||
|
||||
从本质上说,这和关于 PyPI 上面的问题是相反的:因为,PyPI 分发机制对增强应用程序开发人员的体验来说,通常_是_足够好的,这样的改进是有意义的,允许重分发者和平台提供者自行决定将哪些内容作为缺省提供的一部分。
|
||||
从本质上说,这和上面的关于 PyPI 问题正好相反:因为,从应用程序开发人员体验改善的角度来说,PyPI 分发机制通常_是_足够好的,这种分发方式的改进是有意义的,允许重分发者和平台提供者自行决定将哪些内容作为他们缺省提供的一部分。
|
||||
|
||||
当改变后的能力假设在 3-5 年内缺省出现时被认为是有价值的,才会将这些改变进入到 CPython 和标准库中。
|
||||
假设在 3-5 年时间内,缺省出现了被认为是改变带来的可感知的价值时,才会将这些改变纳入到 CPython 和标准库中。
|
||||
|
||||
### 标准库任何部分都有独立的版本吗?
|
||||
|
||||
@ -160,19 +160,19 @@ Python 是为谁设计的?
|
||||
|
||||
最有可能的第一个候选者是 `distutils` 构建系统,因为切换到这种模式将允许构建系统在多个发行版本之间保持一致。
|
||||
|
||||
这种处理方式的其它可能的候选者是 Tcl/Tk 图形捆绑和 IDLE 编辑器,它们已经被拆分,并且通过一些重分发程序转换成可选安装项。
|
||||
这种处理方式的其它的可能候选者是 Tcl/Tk 图形捆绑和 IDLE 编辑器,它们已经被拆分,并且通过一些重分发程序转换成可选安装项。
|
||||
|
||||
### 这些注意事项为什么很重要?
|
||||
|
||||
从最本质上说,那些积极参与开源开发的人就是那些致力于开源应用程序和共享抽象层的人。
|
||||
从本质上说,那些积极参与开源开发的人就是那些致力于开源应用程序和共享抽象层的人。
|
||||
|
||||
写一些临时脚本和为学生设计一些教学习题的人,通常不会认为他们是软件开发人员 —— 他们是教师、系统管理员、数据分析人员、金融工程师、流行病学家、物理学家、生物学家、商业分析师、动画师、架构设计师等等。
|
||||
那些写一些临时脚本或为学生设计一些教学习题的人,通常不认为他们是软件开发人员 —— 他们是教师、系统管理员、数据分析人员、金融工程师、流行病学家、物理学家、生物学家、商业分析师、动画师、架构设计师等等。
|
||||
|
||||
当我们所有的担心是,语言是开发人员的经验时,那么,我们可以简单假设人们知道一些什么,他们使用的工具,所遵循的开发流程,以及构建和部署他们软件的方法。
|
||||
对于一种语言,当我们全部的担心都是开发人员的经验时,那么我们就可以根据人们所知道的内容、他们使用的工具种类、他们所遵循的开发流程种类、构建和部署他们软件的方法等假定,来做大量的简化。
|
||||
|
||||
当一个应用程序运行时(runtime),_也_作为一个脚本引擎广为流行时,事情将变的更加复杂。在一个项目中去平衡两种需求,就会导致双方的不理解和怀疑,做任何一件事都将变得很困难。
|
||||
当一个应用程序运行时(runtime),_也_作为一个脚本引擎广为流行时,事情将变得更加复杂。在同一个项目中去平衡两种受众的需求,就会导致双方的不理解和怀疑,做任何一件事都将变得很困难。
|
||||
|
||||
这篇文章不是为了说,我们在开发 CPython 过程中从来没有做出过不正确的决定 —— 它只是指出添加到 Python 标准库中的看上去很荒谬的特性的最合理的反应,它将是 “我不是那个特性的预期目标受众的一部分”,而不是 “我对它没有兴趣,因此,它对所有人都是毫无用处和没有价值的,添加它纯属骚扰我”。
|
||||
这篇文章不是为了说,我们在开发 CPython 过程中从来没有做出过不正确的决定 —— 它只是去合理地回应那些对添加到 Python 标准库中的看上去很荒谬的特性的质疑,它将是 “我不是那个特性的预期目标受众的一部分”,而不是 “我对它没有兴趣,因此它对所有人都是毫无用处和没有价值的,添加它纯属骚扰我”。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
@ -0,0 +1,57 @@
|
||||
老树发新芽:微服务 – DXC Blogs
|
||||
======
|
||||

|
||||
|
||||
如果我告诉你有这样一种软件架构,一个应用程序的组件通过基于网络的通讯协议为其它组件提供服务,我估计你可能会说它是 …
|
||||
|
||||
是的,确实是。如果你从上世纪九十年代就开始了你的编程生涯,那么你肯定会说它是 [面向服务的架构 (SOA)][1]。但是,如果你是个年青人,并且在云上获得初步的经验,那么,你将会说:“哦,你说的是 [微服务][2]。”
|
||||
|
||||
你们都没错。如果想真正地了解它们的差别,你需要深入地研究这两种架构。
|
||||
|
||||
在 SOA 中,一个服务是一个功能,它是定义好的、自包含的、并且是不依赖上下文和其它服务的状态的功能。总共有两种服务。一种是消费者服务,它从另外类型的服务 —— 提供者服务 —— 中请求一个服务。一个 SOA 服务可以同时扮演这两种角色。
|
||||
|
||||
SOA 服务可以与其它服务交换数据。两个或多个服务也可以彼此之间相互协调。这些服务执行基本的任务,比如创建一个用户帐户、提供登陆功能、或验证支付。
|
||||
|
||||
与其说 SOA 是模块化一个应用程序,还不如说它是把分布式的、独立维护和部署的组件,组合成一个应用程序。然后在服务器上运行这些组件。
|
||||
|
||||
早期版本的 SOA 使用面向对象的协议进行组件间通讯。例如,微软的 [分布式组件对象模型 (DCOM)][3] 和使用 [通用对象请求代理架构 (CORBA)][5] 规范的 [对象请求代理 (ORBs)][4]。
|
||||
|
||||
较新的版本则使用消息服务,比如 [Java 消息服务 (JMS)][6] 或者 [高级消息队列协议 (AMQP)][7]。这些服务通过企业服务总线 (ESB) 进行连接,在总线上以可扩展标记语言(XML)格式收发数据。
|
||||
|
||||
[微服务][2] 是一种架构风格,其中的应用程序由松散耦合的服务或模块组成。它适用于开发大型、复杂应用程序的持续集成/持续部署(CI/CD)模型。一个应用程序就是一堆模块的汇总。
|
||||
|
||||
每个微服务提供一个应用程序编程接口(API)端点。它们通过轻量级协议连接,比如,[表述性状态转移 (REST)][8],或 [gRPC][9]。数据倾向于使用 [JavaScript 对象标记 (JSON)][10] 或 [Protobuf][11] 来表示。
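作为示意,下面用 Python 标准库勾勒一个最小的微服务端点:通过 HTTP 返回 JSON 格式的响应。其中的服务名 `user-service`、端口号等均为演示而假设,并非文中提到的任何真实服务:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def status_payload():
    # 假设的响应内容,仅作演示
    return {"service": "user-service", "status": "ok"}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 每个微服务对外只暴露一个简单的 API 端点,返回 JSON
        body = json.dumps(status_payload()).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# 启动服务(这里注释掉以免阻塞;实际使用时取消注释):
# HTTPServer(("127.0.0.1", 8080), StatusHandler).serve_forever()
```

在微服务架构中,这样的端点通常打包进容器部署,再由 API 网关或服务发现组件把各个服务组合成完整的应用。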
|
||||
|
||||
这两种架构都可以用于替代以前老的整体式架构,在整体式架构中应用程序被构建为单个自治的单元。例如,在一个客户机 - 服务器模式中,一个典型的 Linux、Apache、MySQL、PHP/Python/Perl(LAMP)服务器端应用程序将去处理 HTTP 请求、运行子程序、以及从底层的 MySQL 数据库中检索/更新数据。所有这些应用程序“绑”在一起提供服务。当你改变了任何一个东西,你都必须构建和部署一个新版本。
|
||||
|
||||
使用 SOA,你可以只改变需要的几个组件,而不是整个应用程序。使用微服务,你可以做到一次只改变一个服务。使用微服务,你才能真正做到一个解耦架构。
|
||||
|
||||
微服务也比 SOA 更轻量级。SOA 服务部署在服务器和虚拟机上,而微服务部署在容器中,其协议也更轻量级。这使得微服务比 SOA 更灵活,因此它更适合于要求敏捷性的电商网站。
|
||||
|
||||
说了这么多,到底意味着什么呢?微服务就是 SOA 在容器和云计算上的变种。
|
||||
|
||||
老式的 SOA 并没有离我们远去,但是,因为我们持续将应用程序搬迁到容器中,所以微服务架构将越来越流行。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/
|
||||
|
||||
作者:[Cloudy Weather][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
|
||||
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
|
||||
[2]:http://microservices.io/
|
||||
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
|
||||
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
|
||||
[5]:http://www.corba.org/
|
||||
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
|
||||
[7]:https://www.amqp.org/
|
||||
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
|
||||
[9]:https://grpc.io/
|
||||
[10]:https://www.json.org/
|
||||
[11]:https://github.com/google/protobuf/
|
@ -0,0 +1,175 @@
|
||||
|
||||
如何方便地检查 Ubuntu 版本以及其他系统信息
|
||||
======
|
||||
|
||||
|
||||
**简要:想知道你正在使用的 Ubuntu 具体是什么版本吗?这篇文章将告诉你如何检查你的 Ubuntu 版本、桌面环境以及其他相关的系统信息。**
|
||||
|
||||
|
||||
|
||||
通常,你能非常容易地通过命令行或者图形界面获取你正在使用的 Ubuntu 的版本。当你正在尝试学习一篇互联网上的入门教材,或者正在从各种各样的论坛里获取帮助的时候,知道当前正在使用的 Ubuntu 的确切版本号、桌面环境以及其他的系统信息将是尤为重要的。
|
||||
|
||||
|
||||
|
||||
在这篇简短的文章中,作者将展示各种检查[Ubuntu][1] 版本以及其他常用的系统信息的方法。
|
||||
|
||||
|
||||
### 如何在命令行检查 Ubuntu 版本
|
||||
|
||||
|
||||
这是获得 Ubuntu 版本的最好办法。我本想先展示如何用图形界面做到这一点,但还是决定先从命令行方法说起,因为这种方法不依赖于你使用的任何[桌面环境][2]。你可以在任何 Ubuntu 的变种系统上使用这种方法。
|
||||
|
||||
|
||||
打开你的命令行终端 (Ctrl+Alt+T), 键入下面的命令:
|
||||
|
||||
```
|
||||
lsb_release -a
|
||||
|
||||
```
|
||||
|
||||
|
||||
上面命令的输出应该如下:
|
||||
|
||||
```
|
||||
No LSB modules are available.
|
||||
Distributor ID: Ubuntu
|
||||
Description: Ubuntu 16.04.4 LTS
|
||||
Release: 16.04
|
||||
Codename: xenial
|
||||
|
||||
```
|
||||
|
||||
![How to check Ubuntu version in command line][3]
|
||||
|
||||
|
||||
正像你所看到的,当前我的系统安装的 Ubuntu 版本是 Ubuntu 16.04,版本代号为 Xenial。
|
||||
|
||||
|
||||
且慢!为什么版本描述中显示的是 Ubuntu 16.04.4,而发行号是 16.04?到底哪个才是正确的版本,16.04 还是 16.04.4?这两者之间有什么区别?
|
||||
|
||||
|
||||
言简意赅地回答这个问题:你正在使用的是 Ubuntu 16.04。这是基准版本,而 16.04.4 进一步指明这是 16.04 的第四个补丁版本。你可以将补丁版本理解为 Windows 世界里的服务包(Service Pack)。在这里,16.04 和 16.04.4 都是正确的版本号。
|
||||
|
||||
|
||||
那么输出中的 Xenial 又是什么?那正是 Ubuntu 16.04 的版本代号。你可以阅读[这篇关于 Ubuntu 命名规则的文章][4]获取更多信息。
|
||||
|
||||
|
||||
|
||||
#### 其他一些获取 Ubuntu 版本的方法
|
||||
|
||||
|
||||
|
||||
你也可以使用下面任意一条命令得到 Ubuntu 的版本:
|
||||
|
||||
```
|
||||
cat /etc/lsb-release
|
||||
|
||||
```
|
||||
|
||||
|
||||
输出如下信息:
|
||||
|
||||
```
|
||||
DISTRIB_ID=Ubuntu
|
||||
DISTRIB_RELEASE=16.04
|
||||
DISTRIB_CODENAME=xenial
|
||||
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
|
||||
|
||||
```
|
||||
|
||||
![How to check Ubuntu version in command line][5]
|
||||
|
||||
|
||||
你还可以使用下面的命令来获得 Ubuntu 版本:
|
||||
|
||||
```
|
||||
cat /etc/issue
|
||||
|
||||
```
|
||||
|
||||
|
||||
命令行的输出将会如下:
|
||||
|
||||
```
|
||||
Ubuntu 16.04.4 LTS \n \l
|
||||
|
||||
```
|
||||
|
||||
|
||||
不要介意输出末尾的 `\n \l`。这里 Ubuntu 版本就是 16.04.4,或者更简单地说,就是 16.04。
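如果想在自己的脚本里使用这些版本信息,也可以解析 `/etc/lsb-release` 或 `/etc/os-release` 这类 `KEY=VALUE` 格式的文件。下面是一个最小的 Python 示意,其中的示例文件内容是为演示而写死的:

```python
def parse_release_file(text):
    """把 KEY=VALUE 形式的发行版信息文件解析为字典,值两侧的引号会被去掉。"""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# 示例输入,对应文中 cat /etc/lsb-release 的输出
sample = (
    "DISTRIB_ID=Ubuntu\n"
    "DISTRIB_RELEASE=16.04\n"
    'DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"\n'
)
print(parse_release_file(sample)["DISTRIB_RELEASE"])  # 16.04
```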
|
||||
|
||||
|
||||
### 如何在图形界面下得到 Ubuntu 版本
|
||||
|
||||
|
||||
在图形界面下获取 Ubuntu 版本更是小事一桩。这里我使用了 Ubuntu 18.04 的 GNOME 图形界面的屏幕截图来展示如何做到这一点。如果你在使用 Unity 或者别的桌面环境的话,显示可能会有所不同。这也是为什么我推荐使用命令行方式来获得版本的原因:你不用依赖形形色色的图形界面。
|
||||
|
||||
|
||||
|
||||
下面我来展示如何在桌面环境获取Ubuntu版本。
|
||||
|
||||
|
||||
|
||||
进入‘系统设置’并点击下面的‘详细信息’栏。
|
||||
|
||||
![Finding Ubuntu version graphically][6]
|
||||
|
||||
|
||||
|
||||
你将会看到系统的 Ubuntu 版本和其他与桌面系统有关的系统信息。这里的截图来自 [GNOME][7]。
|
||||
|
||||
![Finding Ubuntu version graphically][8]
|
||||
|
||||
|
||||
|
||||
### 如何知道桌面环境以及其他的系统信息
|
||||
|
||||
|
||||
|
||||
你刚才学习的是如何得到 Ubuntu 的版本信息,那么如何知道桌面环境呢?更进一步,如果你还想知道当前使用的 Linux 内核版本呢?
|
||||
|
||||
|
||||
你可以用各种各样的命令得到这些信息,不过今天我想推荐一个叫做 [Neofetch][9] 的命令行工具。这个工具能在命令行完美展示系统信息,包括 Ubuntu 或者其他 Linux 发行版的系统图标。
|
||||
|
||||
|
||||
用下面的命令安装Neofetch:
|
||||
|
||||
```
|
||||
sudo apt install neofetch
|
||||
|
||||
```
|
||||
|
||||
|
||||
安装成功后,运行 `neofetch` 将会如下优雅地展示系统的信息。
|
||||
|
||||
![System information in Linux terminal][10]
|
||||
|
||||
|
||||
如你所见,neofetch 完整展示了 Linux 内核版本、Ubuntu 版本、桌面环境及其版本、主题和图标等信息。
|
||||
|
||||
|
||||
希望上面展示的方法能帮你更快地找到你正在使用的 Ubuntu 版本和其他系统信息。如果你对这篇文章有其他建议,欢迎在评论栏里留言。
|
||||
再见。:)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/how-to-know-ubuntu-unity-version/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[DavidChenLiang](https://github.com/davidchenliang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]:https://www.ubuntu.com/
|
||||
[2]:https://en.wikipedia.org/wiki/Desktop_environment
|
||||
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-1-800x216.jpeg
|
||||
[4]:https://itsfoss.com/linux-code-names/
|
||||
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-2-800x185.jpeg
|
||||
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-version-system-settings.jpeg
|
||||
[7]:https://www.gnome.org/
|
||||
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/checking-ubuntu-version-gui.jpeg
|
||||
[9]:https://itsfoss.com/display-linux-logo-in-ascii/
|
||||
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-system-information-terminal-800x400.jpeg
|
@ -0,0 +1,92 @@
|
||||
使用 Xenlism 主题为你的 Linux 桌面带来令人惊叹的改造
|
||||
============================================================
|
||||
|
||||
|
||||
_简介:Xenlism 主题包提供了一个美观的 GTK 主题、彩色图标和简约壁纸,将你的 Linux 桌面转变为引人注目的设置._
|
||||
|
||||
除非我找到一些非常棒的东西,否则我不会每天都把整篇文章献给一个主题。我曾经经常发布主题和图标。但最近,我更喜欢列出[最佳 GTK 主题][6]和图标主题。这对我和你来说都更方便,你可以在一个地方看到许多美丽的主题。
|
||||
|
||||
在 [Pop OS 主题][7]套件之后,Xenlism 是另一个外观令我惊艳的主题。
|
||||
|
||||

|
||||
|
||||
Xenlism GTK 主题基于 Arc 主题,后者是许多主题的灵感来源。该 GTK 主题提供类似于 macOS 的窗口按钮,我谈不上喜欢,也谈不上不喜欢。GTK 主题采用扁平、简约的布局,这一点我很喜欢。
|
||||
|
||||
Xenlism 套件中有两个图标主题。Xenlism Wildfire 是较早的一个,已经进入我们的[最佳图标主题][8]列表。
|
||||
|
||||

|
||||
Xenlism Wildfire 图标
|
||||
|
||||
Xenlism Storm 是一个相对较新的图标主题,但同样美观。
|
||||
|
||||

|
||||
Xenlism Storm 图标
|
||||
|
||||
Xenlism 主题是在 GPL 许可下开源的。
|
||||
|
||||
### 如何在 Ubuntu 18.04 上安装 Xenlism 主题包
|
||||
|
||||
Xenlism 的开发者提供了一种通过 PPA 安装主题包的更简单方法。尽管该 PPA 可用于 Ubuntu 16.04,但我发现 GTK 主题不适用于 Unity。它适用于 Ubuntu 18.04 中的 GNOME 桌面。
|
||||
|
||||
打开终端(Ctrl+Alt+T)并逐个使用以下命令:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:xenatt/xenlism
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
该 PPA 提供四个包:
|
||||
|
||||
* xenlism-finewalls:一组壁纸,可直接在 Ubuntu 的壁纸中使用。截图中使用了其中一个壁纸。
|
||||
|
||||
* xenlism-minimalism-theme:GTK主题
|
||||
|
||||
* xenlism-storm:一个图标主题(见前面的截图)
|
||||
|
||||
* xenlism-wildfire-icon-theme:具有多种颜色变化的另一个图标主题(文件夹颜色在变体中更改)
|
||||
|
||||
你可以自己决定要安装的主题组件。就个人而言,我认为安装所有组件没有任何损害。
|
||||
|
||||
```
|
||||
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
|
||||
```
|
||||
|
||||
你可以使用 GNOME Tweaks 来更改主题和图标。如果你不熟悉该过程,我建议你阅读本教程以学习[如何在 Ubuntu 18.04 GNOME 中安装主题][9]。
|
||||
|
||||
### 在其他 Linux 发行版中获取 Xenlism 主题
|
||||
|
||||
你也可以在其他 Linux 发行版上安装 Xenlism 主题。各种 Linux 发行版的安装说明可在其网站上找到:
|
||||
|
||||
[安装 Xenlism 主题][10]
|
||||
|
||||
### 你怎么看?
|
||||
|
||||
我知道不是每个人都会同意我,但我喜欢这个主题。我想你将来会在 It's FOSS 的教程中会看到 Xenlism 主题的截图。
|
||||
|
||||
你喜欢 Xenlism 主题吗?如果不喜欢,你最喜欢什么主题?在下面的评论部分分享你的意见。
|
||||
|
||||
#### 关于作者
|
||||
|
||||
我是一名专业软件开发人员,也是 It's FOSS 的创始人。我是一名狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信分享知识。除了Linux,我喜欢经典侦探之谜。我是 Agatha Christie 作品的忠实粉丝。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/xenlism-theme/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/abhishek/
|
||||
[1]:https://itsfoss.com/author/abhishek/
|
||||
[2]:https://itsfoss.com/xenlism-theme/#comments
|
||||
[3]:https://itsfoss.com/category/desktop/
|
||||
[4]:https://itsfoss.com/tag/themes/
|
||||
[5]:https://itsfoss.com/tag/xenlism/
|
||||
[6]:https://itsfoss.com/best-gtk-themes/
|
||||
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
|
||||
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
|
||||
[9]:https://itsfoss.com/install-themes-ubuntu/
|
||||
[10]:http://xenlism.github.io/minimalism/#install
|
@ -0,0 +1,256 @@
|
||||
使用 MQTT 实现项目数据收发
|
||||
======
|
||||
|
||||

|
||||
|
||||
去年 11 月我们购买了一辆电动汽车,同时也引发了有趣的思考:我们应该什么时候为电动汽车充电?我希望电动汽车充电所用的电对应的二氧化碳排放尽可能少,这可以归结为一个具体的问题:对于任意给定时刻,每千瓦时对应的二氧化碳排放量是多少?一天中什么时间这个值最低?
|
||||
|
||||
|
||||
### 寻找数据
|
||||
|
||||
我住在纽约州,大约 80% 的电力消耗可以自给自足,主要来自天然气、水坝(大部分来自于<ruby>尼亚加拉<rt>Niagara</rt></ruby>大瀑布)、核能发电,少部分来自风力、太阳能和其它化石燃料发电。非盈利性组织 [<ruby>纽约独立电网运营商<rt>New York Independent System Operator</rt></ruby>][1] (NYISO) 负责整个系统的运作,实现发电机组发电与用电之间的平衡,同时也是纽约路灯系统的监管部门。
|
||||
|
||||
尽管没有为公众提供公开 API,NYISO 还是尽责提供了[不少公开数据][2]供公众使用。每隔 5 分钟汇报全州各个发电机组消耗的燃料数据。数据以 CSV 文件的形式发布于公开的档案库中,全天更新。如果你了解不同燃料对发电瓦数的贡献比例,你可以比较准确的估计任意时刻的二氧化碳排放情况。
|
||||
|
||||
在构建收集处理公开数据的工具时,我们应该时刻避免过度使用这些资源。相比将这些数据打包并发送给所有人,我们有更好的方案。我们可以创建一个低开销的<ruby>事件流<rt>event stream</rt></ruby>,人们可以订阅并第一时间得到消息。我们可以使用 [MQTT][3] 实现该方案。我的项目([ny-power.org][4])的目标是被收录到 [Home Assistant][5] 项目中;后者是一个开源的<ruby>家庭自动化<rt>home automation</rt></ruby>平台,拥有数十万用户。如果所有用户同时访问 CSV 文件服务器,估计 NYISO 不得不增加访问限制。
|
||||
|
||||
### MQTT 是什么?
|
||||
|
||||
MQTT 是一个<ruby>发布订阅线协议<rt>publish/subscription wire protocol</rt></ruby>,为小规模设备设计。发布订阅系统工作原理类似于消息总线。你将一条消息发布到一个<ruby>主题<rt>topic</rt></ruby>上,那么所有订阅了该主题的客户端都可以获得该消息的一份拷贝。对于消息发送者而言,无需知道哪些人在订阅消息;你只需将消息发布到一系列主题,同时订阅一些你感兴趣的主题。就像参加了一场聚会,你选取并加入感兴趣的对话。
|
||||
|
||||
利用 MQTT 可以构建极为高效的应用。客户端只订阅有限的几个主题,也只收到它们感兴趣的内容。这不仅节省了处理时间,还降低了网络带宽占用。
|
||||
|
||||
作为一个开放标准,MQTT 有很多开源的客户端和服务端实现。对于你能想到的每种编程语言,都有对应的客户端库;甚至有嵌入到 Arduino 的库,可以构建传感器网络。服务端可供选择的也很多,我的选择是 Eclipse 项目提供的 [Mosquitto][6] 服务端,这是因为它体积小、用 C 编写,可以承载数以万计的订阅者。
|
||||
|
||||
### 为何我喜爱 MQTT
|
||||
|
||||
在过去二十年间,我们为软件应用设计了可靠且准确的模型,用于解决服务遇到的问题。我还有其它邮件吗?当前的天气情况如何?我应该此刻购买这种产品吗?在绝大多数情况下,这种<ruby>问答式<rt>ask/receive</rt></ruby>的模型工作良好;但对于一个数据爆炸的世界,我们需要其它的模型。MQTT 的发布订阅模型十分强大,可以将大量数据发送到系统中。客户可以订阅数据中的一小部分并在订阅数据发布的第一时间收到更新。
|
||||
|
||||
MQTT 还有一些有趣的特性,其中之一是<ruby>遗嘱<rt>last-will-and-testament</rt></ruby>消息,可以用于区分两种不同的静默,一种是没有主题相关数据推送,另一种是你的数据接收器出现故障。MQTT 还包括<ruby>保留消息<rt>retained message</rt></ruby>,当客户端初次连接时会提供相关主题的最后一条消息。这对那些更新缓慢的主题来说很有必要。
|
||||
|
||||
我在 Home Assistant 项目开发过程中,发现这种消息总线模型对<ruby>异构系统<rt>heterogeneous systems</rt></ruby>尤为适合。如果你深入<ruby>物联网<rt>Internet of Things</rt></ruby>领域,你会发现 MQTT 无处不在。
|
||||
|
||||
### 我们的第一个 MQTT 流
|
||||
|
||||
NYISO 公布的 CSV 文件中有一个是实时的燃料混合使用情况。每 5 分钟,NYISO 发布这 5 分钟内发电使用的燃料类型和相应的发电量(以兆瓦为单位)。
|
||||
|
||||
CSV 文件看上去是这样的:
|
||||
|
||||
| 时间戳 | 时区 | 燃料类型 | 兆瓦为单位的发电量 |
|
||||
| --- | --- | --- | --- |
|
||||
| 05/09/2018 00:05:00 | EDT | 混合燃料 | 1400 |
|
||||
| 05/09/2018 00:05:00 | EDT | 天然气 | 2144 |
|
||||
| 05/09/2018 00:05:00 | EDT | 核能 | 4114 |
|
||||
| 05/09/2018 00:05:00 | EDT | 其它化石燃料 | 4 |
|
||||
| 05/09/2018 00:05:00 | EDT | 其它可再生资源 | 226 |
|
||||
| 05/09/2018 00:05:00 | EDT | 风力 | 1 |
|
||||
| 05/09/2018 00:05:00 | EDT | 水力 | 3229 |
|
||||
| 05/09/2018 00:10:00 | EDT | 混合燃料 | 1307 |
|
||||
| 05/09/2018 00:10:00 | EDT | 天然气 | 2092 |
|
||||
| 05/09/2018 00:10:00 | EDT | 核能 | 4115 |
|
||||
| 05/09/2018 00:10:00 | EDT | 其它化石燃料 | 4 |
|
||||
| 05/09/2018 00:10:00 | EDT | 其它可再生资源 | 224 |
|
||||
| 05/09/2018 00:10:00 | EDT | 风力 | 40 |
|
||||
| 05/09/2018 00:10:00 | EDT | 水力 | 3166 |
|
||||
|
||||
表中唯一令人不解就是燃料类别中的混合燃料。纽约的大多数天然气工厂也通过燃烧其它类型的化石燃料发电。在冬季寒潮到来之际,家庭供暖的优先级高于发电;但这种情况出现的次数不多,(在我们计算中)可以将混合燃料类型看作天然气类型。
|
||||
|
||||
CSV 文件全天更新。我编写了一个简单的数据泵,每隔 1 分钟检查是否有数据更新,并将新条目发布到 MQTT 服务器的一系列主题上,主题名称基本与 CSV 文件有一定的对应关系。数据内容被转换为 JSON 对象,方便各种编程语言处理。
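数据泵的核心转换逻辑可以勾勒如下。注意,这只是一个示意性草图,其中的函数名和参数是为演示而假设的,并非项目的真实代码;主题命名沿用了文中的层次结构:

```python
import json

def row_to_message(fuel, mw, ts):
    # 最小示意:把 CSV 中的一行燃料数据转换成 (MQTT 主题, JSON 载荷)
    # 主题按燃料类型落在 ny-power/upstream/fuel-mix/ 层次结构下
    topic = "ny-power/upstream/fuel-mix/" + fuel
    payload = json.dumps({"units": "MW", "value": mw, "ts": ts})
    return topic, payload

topic, payload = row_to_message("Hydro", 3229, "05/09/2018 00:05:00")
print(topic)    # ny-power/upstream/fuel-mix/Hydro
print(payload)  # {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
```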
|
||||
|
||||
```
|
||||
ny-power/upstream/fuel-mix/Hydro {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Dual Fuel {"units": "MW", "value": 1400, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Natural Gas {"units": "MW", "value": 2144, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
|
||||
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
|
||||
|
||||
```
|
||||
|
||||
这种直接的转换是种不错的尝试,可将公开数据转换为公开事件。我们后续会继续将数据转换为二氧化碳排放强度,但这些原始数据还可被其它应用使用,用于其它计算用途。
|
||||
|
||||
### MQTT 主题
|
||||
|
||||
主题和<ruby>主题结构<rt>topic structure</rt></ruby>是 MQTT 的一个主要特色。与其它标准的企业级消息总线不同,MQTT 的主题无需事先注册。发送者可以凭空创建主题,唯一的限制是主题的长度,不超过 220 字符。其中 `/` 字符有特殊含义,用于创建主题的层次结构。我们即将看到,你可以订阅这些层次中的一些分片。
|
||||
|
||||
基于开箱即用的 Mosquitto,任何一个客户端都可以向任何主题发布消息。在原型设计过程中,这种方式十分便利;但一旦部署到生产环境,你需要增加<ruby>访问控制列表<rt>access control list, ACL</rt></ruby>只允许授权的应用发布消息。例如,任何人都能以只读的方式访问我的应用的主题层级,但只有那些具有特定<ruby>凭证<rt>credentials</rt></ruby>的客户端可以发布内容。
|
||||
|
||||
主题中不包含<ruby>自动样式<rt>automatic schema</rt></ruby>,也没有方法查找客户端可以发布的全部主题。因此,对于那些从 MQTT 总线消费数据的应用,你需要让其直接使用已知的主题和消息格式样式。
|
||||
|
||||
那么应该如何设计主题呢?最佳实践包括使用应用相关的根名称,例如在我的应用中使用 `ny-power`。接着,为提高订阅效率,构建足够深的层次结构。`upstream` 层次结构包含了直接从数据源获取的、不经处理的原始数据,而 `fuel-mix` 层次结构包含特定类型的数据;我们后续还可以增加其它的层次结构。
|
||||
|
||||
### 订阅主题
|
||||
|
||||
在 MQTT 中,订阅仅仅是简单的字符串匹配。为提高处理效率,只允许如下两种通配符:
|
||||
|
||||
* `#` 以递归方式匹配,直到字符串结束
|
||||
* `+` 匹配下一个 `/` 之前的内容
|
||||
|
||||
|
||||
为便于理解,下面给出几个例子:
|
||||
```
|
||||
ny-power/# - 匹配 ny-power 应用发布的全部主题
|
||||
ny-power/upstream/# - 匹配全部原始数据的主题
|
||||
ny-power/upstream/fuel-mix/+ - 匹配全部燃料类型的主题
|
||||
ny-power/+/+/Hydro - 匹配全部两次层级之后为 Hydro 类型的主题(即使不位于 upstream 层次结构下)
|
||||
```
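为了更直观地理解这两种通配符,下面给出一个简化的 Python 示意实现(忽略了 `#` 必须位于最后一级、`$` 开头的系统主题等协议细节):

```python
def topic_matches(pattern, topic):
    """按简化的 MQTT 规则判断订阅模式 pattern 是否匹配主题 topic。"""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":           # '#' 递归匹配余下所有层级
            return True
        if i >= len(t_parts):  # 模式比主题层级更多,匹配失败
            return False
        if p != "+" and p != t_parts[i]:  # '+' 匹配任意单级
            return False
    return len(p_parts) == len(t_parts)

print(topic_matches("ny-power/#", "ny-power/upstream/fuel-mix/Hydro"))         # True
print(topic_matches("ny-power/+/+/Hydro", "ny-power/upstream/fuel-mix/Hydro")) # True
```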
|
||||
|
||||
类似 `ny-power/#` 的大范围订阅适用于<ruby>低数据量<rt>low-volume</rt></ruby>的应用,应用从网络获取全部数据并处理。但对<ruby>高数据量<rt>high-volume</rt></ruby>应用而言则是一个灾难,由于绝大多数消息并不会被使用,大部分的网络带宽被白白浪费了。
|
||||
|
||||
在大数据量情况下,为确保性能,应用需要使用恰当的主题筛选(如 `ny-power/+/+/Hydro`)尽量准确获取业务所需的数据。
|
||||
|
||||
### 增加我们自己的数据层次
|
||||
|
||||
接下来,应用中的一切都依赖于已有的 MQTT 流并构建新流。第一个额外的数据层用于计算发电对应的二氧化碳排放。
|
||||
|
||||
利用[<ruby>美国能源情报署<rt>U.S. Energy Information Administration</rt></ruby>][7] 给出的 2016 年纽约各类燃料发电及排放情况,我们可以给出各类燃料的[平均排放率][8],单位为克/兆瓦时。
|
||||
|
||||
上述结果被封装到一个专用的微服务中。该微服务订阅 `ny-power/upstream/fuel-mix/+`,即数据泵中燃料组成情况的原始数据,接着完成计算并将结果(单位为克/千瓦时)发布到新的主题层次结构上:
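该微服务的核心计算可以勾勒成如下形式:按各类燃料的发电量对其排放率做加权平均。注意,下面的排放因子数值是为演示而假设的,并非 EIA 的真实数据(真实数值见文中链接的代码):

```python
# 假设的排放因子(克 CO2 / kWh),仅作演示,并非 EIA 官方数据
EMISSION_FACTORS = {
    "Natural Gas": 500.0,
    "Dual Fuel": 520.0,
    "Other Fossil Fuels": 900.0,
    "Nuclear": 0.0,
    "Hydro": 0.0,
    "Wind": 0.0,
    "Other Renewables": 0.0,
}

def co2_intensity(fuel_mix_mw):
    """按各燃料发电量(MW)加权,返回整张电网的克 CO2 / kWh。"""
    total_mw = sum(fuel_mix_mw.values())
    if total_mw == 0:
        return 0.0
    grams = sum(EMISSION_FACTORS.get(fuel, 0.0) * mw
                for fuel, mw in fuel_mix_mw.items())
    return grams / total_mw

print(co2_intensity({"Natural Gas": 100.0, "Hydro": 100.0}))  # 250.0
```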
|
||||
```
|
||||
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
|
||||
```
|
||||
|
||||
接着,另一个服务会订阅该主题层次结构并将数据打包到 [InfluxDB][9] 进程中;同时,发布 24 小时内的时间序列数据到 `ny-power/archive/co2/24h` 主题,这样可以大大简化当前变化数据的绘制。
|
||||
|
||||
这种层次结构的主题模型效果不错,可以将上述程序之间的逻辑解耦合。在复杂系统中,各个组件可能使用不同的编程语言,但这并不重要,因为交换格式都是 MQTT 消息,即主题和 JSON 格式的消息内容。
|
||||
|
||||
### 从终端消费数据
|
||||
|
||||
为了更好的了解 MQTT 完成了什么工作,将其绑定到一个消息总线并查看消息流是个不错的方法。`mosquitto-clients` 包中的 `mosquitto_sub` 可以让我们轻松实现该目标。
|
||||
|
||||
安装程序后,你需要提供服务器名称以及你要订阅的主题。如果有需要,使用参数 `-v` 可以让你看到有新消息发布的那些主题;否则,你只能看到主题内的消息数据。
|
||||
|
||||
```
|
||||
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
|
||||
|
||||
```
|
||||
|
||||
只要我编写或调试 MQTT 应用,我总会在一个终端中运行 `mosquitto_sub`。
|
||||
|
||||
### 从网页直接访问 MQTT
|
||||
|
||||
到目前为止,我们已经有提供公开事件流的应用,可以用微服务或命令行工具访问该应用。但考虑到互联网仍占据主导地位,因此让用户可以从浏览器直接获取事件流是很重要。
|
||||
|
||||
MQTT 的设计者已经考虑到了这一点。协议标准支持三种不同的传输协议:[TCP][10],[UDP][11] 和 [WebSockets][12]。主流浏览器都支持 WebSockets,可以维持持久连接,用于实时应用。
|
||||
|
||||
Eclipse 项目提供了 MQTT 的一个 JavaScript 实现,叫做 [Paho][13],可包含在你的应用中。工作模式为与服务器建立连接、建立一些订阅,然后根据接收到的消息进行响应。
|
||||
|
||||
```
|
||||
// ny-power web console application
|
||||
|
||||
var client = new Paho.MQTT.Client(mqttHost, Number("80"), "client-" + Math.random());
|
||||
|
||||
// set callback handlers
|
||||
client.onMessageArrived = onMessageArrived;
|
||||
|
||||
// connect the client
|
||||
client.reconnect = true;
|
||||
client.connect({onSuccess: onConnect});
|
||||
|
||||
// called when the client connects
|
||||
function onConnect() {
|
||||
// Once a connection has been made, make a subscription and send a message.
|
||||
console.log("onConnect");
|
||||
client.subscribe("ny-power/computed/co2");
|
||||
client.subscribe("ny-power/archive/co2/24h");
|
||||
client.subscribe("ny-power/upstream/fuel-mix/#");
|
||||
}
|
||||
|
||||
// called when a message arrives
|
||||
function onMessageArrived(message) {
|
||||
console.log("onMessageArrived:"+message.destinationName + message.payloadString);
|
||||
if (message.destinationName == "ny-power/computed/co2") {
|
||||
var data = JSON.parse(message.payloadString);
|
||||
$("#co2-per-kwh").html(Math.round(data.value));
|
||||
$("#co2-units").html(data.units);
|
||||
$("#co2-updated").html(data.ts);
|
||||
}
|
||||
|
||||
if (message.destinationName.startsWith("ny-power/upstream/fuel-mix")) {
|
||||
fuel_mix_graph(message);
|
||||
}
|
||||
|
||||
if (message.destinationName == "ny-power/archive/co2/24h") {
|
||||
var data = JSON.parse(message.payloadString);
|
||||
var plot = [
|
||||
{
|
||||
x: data.ts,
|
||||
y: data.values,
|
||||
type: 'scatter'
|
||||
}
|
||||
];
|
||||
var layout = {
|
||||
yaxis: {
|
||||
title: "g CO2 / kWh",
|
||||
}
|
||||
};
|
||||
Plotly.newPlot('co2_graph', plot, layout);
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
上述应用订阅了不少主题,因为我们将要呈现若干种不同类型的数据;其中 `ny-power/computed/co2` 主题为我们提供当前二氧化碳排放的参考值。一旦收到该主题的新消息,网站上的相应内容会被相应替换。
|
||||
|
||||
|
||||
![NYISO 二氧化碳排放图][15]
|
||||
|
||||
[ny-power.org][4] 网站提供的 NYISO 二氧化碳排放图。
|
||||
|
||||
`ny-power/archive/co2/24h` 主题提供了时间序列数据,用于为 [Plotly][16] 线表提供数据。`ny-power/upstream/fuel-mix` 主题提供当前燃料组成情况,为漂亮的柱状图提供数据。
|
||||
|
||||
![NYISO 燃料组成情况][18]
|
||||
|
||||
[ny-power.org][4] 网站提供的燃料组成情况。
|
||||
|
||||
这是一个动态网站,数据不从服务器拉取,而是结合 MQTT 消息总线,监听对外开放的 WebSocket。就像数据泵和打包器程序那样,网站页面也是一个发布订阅客户端,只不过是在你的浏览器中执行,而不是在公有云的微服务上。
|
||||
|
||||
你可以在 <http://ny-power.org> 站点看到动态变更,包括图像和可以看到消息到达的实时 MQTT 终端。
|
||||
|
||||
### 继续深入
|
||||
|
||||
ny-power.org 应用的完整内容开源在 [GitHub][19] 中。你也可以查阅 [架构简介][20],学习如何使用 [Helm][21] 部署一系列 Kubernetes 微服务构建应用。另一个有趣的 MQTT 示例使用 MQTT 和 OpenWhisk 进行实时文本消息翻译,<ruby>代码模式<rt>code pattern</rt></ruby>参考[链接][22]。
|
||||
|
||||
MQTT 被广泛应用于物联网领域,更多关于 MQTT 用途的例子可以在 [Home Assistant][23] 项目中找到。
|
||||
|
||||
如果你希望深入了解协议内容,可以从 [mqtt.org][3] 获得该公开标准的全部细节。
|
||||
|
||||
想了解更多,可以参加 Sean Dague 在 [OSCON][25] 上的演讲,主题为 [将 MQTT 加入到你的工具箱][24],会议将于 7 月 16-19 日在俄勒冈州波特兰举办。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/mqtt
|
||||
|
||||
作者:[Sean Dague][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/sdague
|
||||
[1]:http://www.nyiso.com/public/index.jsp
|
||||
[2]:http://www.nyiso.com/public/markets_operations/market_data/reports_info/index.jsp
|
||||
[3]:http://mqtt.org/
|
||||
[4]:http://ny-power.org/#
|
||||
[5]:https://www.home-assistant.io
|
||||
[6]:https://mosquitto.org/
|
||||
[7]:https://www.eia.gov/
|
||||
[8]:https://github.com/IBM/ny-power/blob/master/src/nypower/calc.py#L1-L60
|
||||
[9]:https://www.influxdata.com/
|
||||
[10]:https://en.wikipedia.org/wiki/Transmission_Control_Protocol
|
||||
[11]:https://en.wikipedia.org/wiki/User_Datagram_Protocol
|
||||
[12]:https://en.wikipedia.org/wiki/WebSocket
|
||||
[13]:https://www.eclipse.org/paho/
|
||||
[14]:/file/400041
|
||||
[15]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso-co2intensity.png (NY ISO Grid CO2 Intensity)
|
||||
[16]:https://plot.ly/
|
||||
[17]:/file/400046
|
||||
[18]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso_fuel-mix.png (Fuel mix on NYISO grid)
|
||||
[19]:https://github.com/IBM/ny-power
|
||||
[20]:https://developer.ibm.com/code/patterns/use-mqtt-stream-real-time-data/
|
||||
[21]:https://helm.sh/
|
||||
[22]:https://developer.ibm.com/code/patterns/deploy-serverless-multilingual-conference-room/
|
||||
[23]:https://www.home-assistant.io/
|
||||
[24]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/77317
|
||||
[25]:https://conferences.oreilly.com/oscon/oscon-or
|
@ -1,122 +0,0 @@
|
||||
如何在 Linux 中使用一个命令升级所有软件
|
||||
======
|
||||
|
||||

|
||||
|
||||
众所周知,让我们的 Linux 系统保持最新状态需要调用多个包管理器。比如说,在 Ubuntu 中,你无法只用 `sudo apt update` 和 `sudo apt upgrade` 命令升级所有软件,这两条命令仅升级使用 APT 包管理器安装的应用程序。你有可能还使用 **cargo**、[**pip**][1]、**npm**、**snap** 或 [**Linuxbrew**][2] 包管理器安装了其他软件,那就需要使用相应的包管理器才能把它们全部更新。再也不用这样了!跟 **topgrade** 打个招呼,这是一个可以一次性升级系统中所有软件的工具。
|
||||
|
||||
你无需逐个运行每个包管理器来更新包。topgrade 通过检测已安装的软件包、工具和插件,并运行相应的软件包管理器,用一个命令更新 Linux 中的所有软件。它是自由开源的,使用 **Rust 语言**编写,支持 GNU/Linux 和 Mac OS X。
|
||||
|
||||
### 在 Linux 中使用一个命令升级所有软件
|
||||
|
||||
topgrade 存在于 AUR 中。因此,你可以在任何基于 Arch 的系统中使用 [**Yay**][3] 助手程序安装它。
|
||||
```
|
||||
$ yay -S topgrade
|
||||
|
||||
```
|
||||
|
||||
在其他 Linux 发行版上,你可以使用 **cargo** 包管理器安装 topgrade。要安装 cargo 包管理器,请参阅以下链接。
|
||||
|
||||
然后,运行以下命令来安装 topgrade。
|
||||
```
|
||||
$ cargo install topgrade
|
||||
|
||||
```
|
||||
|
||||
安装完成后,运行 topgrade 以升级 Linux 系统中的所有软件。
|
||||
```
|
||||
$ topgrade
|
||||
|
||||
```
|
||||
|
||||
一旦调用了 topgrade,它将逐个执行以下任务。如有必要,系统会要求输入 root/sudo 用户密码。
|
||||
|
||||
1\. 运行系统的包管理器:
|
||||
|
||||
* Arch:运行 **yay** 或者回退到 [**pacman**][4]
|
||||
* CentOS/RHEL:运行 `yum upgrade`
|
||||
* Fedora :运行 `dnf upgrade`
|
||||
* Debian/Ubuntu:运行 `apt update` 和 `apt dist-upgrade`
|
||||
* Linux/macOS:运行 `brew update` 和 `brew upgrade`
|
||||
|
||||
|
||||
|
||||
2\. 检查 Git 是否跟踪了以下路径。如果有,则拉取它们:
|
||||
|
||||
* ~/.emacs.d (无论你使用 **Spacemacs** 还是自定义配置都应该可用)
|
||||
* ~/.zshrc
|
||||
* ~/.oh-my-zsh
|
||||
* ~/.tmux
|
||||
* ~/.config/fish/config.fish
|
||||
* 自定义路径
|
||||
|
||||
|
||||
|
||||
3\. Unix:运行 **zplug** 更新
|
||||
|
||||
4\. Unix:使用 **TPM** 升级 **tmux** 插件
|
||||
|
||||
5\. 运行 **Cargo install-update**
|
||||
|
||||
6\. 升级 **Emacs** 包
|
||||
|
||||
7\. 升级 Vim 包。对以下插件框架均可用:
|
||||
|
||||
* NeoBundle
|
||||
* [**Vundle**][5]
|
||||
* Plug
|
||||
|
||||
|
||||
|
||||
8\. 升级 [**NPM**][6]全局安装的包
|
||||
|
||||
9\. 升级 **Atom** 包
|
||||
|
||||
10\. 升级 [**Flatpak**][7] 包
|
||||
|
||||
11\. 升级 [**snap**][8] 包
|
||||
|
||||
12\. **Linux**:运行 **fwupdmgr** 显示固件升级(仅查看,实际不会执行升级)
|
||||
|
||||
13\. 运行自定义命令。
|
||||
|
||||
最后,topgrade 将运行 **needrestart** 以重新启动所有服务。在 Mac OS X 中,它会升级 App Store 程序。
|
||||
|
||||
我的 Ubuntu 18.04 LTS 测试环境的示例输出:
|
||||
|
||||
![][10]
|
||||
|
||||
好处是如果一个任务失败,它将自动运行下一个任务并完成所有其他后续任务。最后,它将显示摘要,其中包含运行的任务数量,成功的数量和失败的数量等详细信息。
|
||||
|
||||
![][11]
|
||||
|
||||
**建议阅读:**
|
||||
|
||||
就个人而言,我很喜欢 topgrade 这类程序背后的想法:用一个命令升级使用各种包管理器安装的所有软件。我希望你也觉得它有用。还有更多的好东西,敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-upgrade-everything-using-a-single-command-in-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/manage-python-packages-using-pip/
|
||||
[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
|
||||
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]:https://www.ostechnix.com/getting-started-pacman/
|
||||
[5]:https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/
|
||||
[6]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
|
||||
[7]:https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
|
||||
[8]:https://www.ostechnix.com/install-snap-packages-arch-linux-fedora/
|
||||
[9]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-1.png
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-2.png
|
@ -0,0 +1,68 @@
|
||||
在 Fedora 28 Workstation 使用 emoji 加速输入
|
||||
======
|
||||
|
||||

|
||||
|
||||
Fedora 28 Workstation 添加了一个功能,允许你使用键盘快速搜索、选择和输入 emoji。emoji 这种可爱的表意文字是 Unicode 的一部分,在消息传递中使用得相当广泛,特别是在移动设备上。你可能听过这样一句俗语:“一张图片胜过千言万语。”这正是 emoji 所提供的:供你在交流中使用的简单图像。Unicode 的每个版本都会增加更多 emoji,最近的 Unicode 版本就添加了 200 多个。本文向你展示如何在你的 Fedora 系统中方便地使用它们。
|
||||
|
||||
很高兴看到 emoji 数字在增长。但与此同时,它带来了如何在计算设备中输入它们的挑战。许多人已经将这些符号用于移动设备或社交网站中的输入。
|
||||
|
||||
[**编者注:** 本文是对此前发表的同主题文章的更新。]
|
||||
|
||||
### 在 Fedora 28 Workstation 上启用 emoji 输入
|
||||
|
||||
新的 emoji 输入法默认出现在 Fedora 28 Workstation 中。要使用它,必须使用“区域和语言设置”对话框启用它。从 Fedora Workstation 设置打开“区域和语言”对话框,或在“概要”中搜索它。
|
||||
|
||||
[![Region & Language settings tool][1]][2]
|
||||
|
||||
点击 “+” 控件添加输入源,将出现以下对话框:
|
||||
|
||||
[![Adding an input source][3]][4]
|
||||
|
||||
选择最后选项(三个点)来完全展开选择。然后,在列表底部找到“其他”并选择它:
|
||||
|
||||
[![Selecting other input sources][5]][6]
|
||||
|
||||
在下面的对话框中,找到 “Typing Booster” 选项并选择它:
|
||||
|
||||
[![][7]][8]
|
||||
|
||||
这个高级输入法由 iBus 在背后支持。高级输入方法可通过列表右侧的齿轮图标在列表中识别。
|
||||
|
||||
输入法下拉菜单自动出现在 GNOME Shell 顶部栏中。确认你的默认输入法 - 在此示例中为英语(美国) - 被选为当前输入法,你就可以输入了。
|
||||
|
||||
[![Input method dropdown in Shell top bar][9]][10]
|
||||
|
||||
### 使用新的 emoji 输入法
|
||||
|
||||
现在 emoji 输入法启用了,按键盘快捷键 **Ctrl+Shift+E** 搜索 emoji。将出现一个弹出对话框,你可以在其中输入搜索词,例如 smile 来查找匹配的符号。
|
||||
|
||||
[![Searching for smile emoji][11]][12]
|
||||
|
||||
使用箭头键翻页列表,然后按 **Enter** 进行选择,对应的字形就会被输入。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/boost-typing-emoji-fedora-28-workstation/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/pfrields/
|
||||
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41-1024x718.png
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41.png
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46-1024x839.png
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46.png
|
||||
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15-1024x839.png
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15.png
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41-1024x839.png
|
||||
[8]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41.png
|
||||
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24-300x244.png
|
||||
[10]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24.png
|
||||
[11]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31-290x300.png
|
||||
[12]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31.png
|
@ -1,44 +0,0 @@
|
||||
在 Arch 用户仓库(AUR)中发现恶意软件
|
||||
======
|
||||
|
||||
7 月 7 日,有 AUR 软件包被人加入了恶意代码,这提醒 [Arch Linux][1] 用户(以及广大 Linux 用户):在安装之前应尽可能检查所有由用户生成的软件包。
|
||||
|
||||
[AUR][3],即 Arch(Linux)用户仓库,包含被称为 PKGBUILD 的软件包描述文件,它们使从源代码编译软件包变得更容易。虽然这些软件包非常有用,但永远不应将它们视为安全的,用户在使用前应尽可能检查其内容。毕竟,AUR 在网页中以粗体写道:“AUR 软件包是用户制作的内容。使用所提供文件的任何风险由你自行承担。”
|
||||
|
||||
|
||||
包含恶意代码的 AUR 软件包的[发现][4]证明了这一点。[acroread][5] 软件包于 7 月 7 日被一位名为 “xeactor” 的用户修改(在此之前它似乎是个“孤儿”包,即没有维护者),被加入了一行使用 `curl` 从 pastebin 下载脚本的命令。该脚本又下载了另一个脚本,并安装了一个 systemd 单元来定期运行后者。
|
||||
|
||||
**看来[另外两个][2] AUR 软件包也以同样的方式被修改。所有违规软件包都已被删除,用于上传它们的用户帐户也被暂停(这些帐户是在修改软件包的同一天注册的)。**
|
||||
|
||||
恶意代码并没有做什么真正有害的事情,它只是试图把一些系统信息上传到 pastebin.com,比如机器 ID、`uname -a` 的输出(包括内核版本、架构等)、CPU 信息、pacman 信息,以及 `systemctl list-units`(列出 systemd 单元信息)的输出。我说“试图”,是因为第二个脚本中存在错误,实际上并没有上传成功(上传函数名为 “upload”,但脚本却试图用另一个名字 “uploader” 来调用它)。
|
||||
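顺带一提,脚本收集的这类信息用几行标准库代码就能得到。下面是一个纯演示性的 Python 草图(本文假设的示例,并非原恶意脚本的代码),只是说明这类信息的获取有多容易:

```python
import platform

# 仅为示意:恶意脚本试图收集的这类系统信息,
# 用 Python 标准库就能轻松获得(对应 uname -a 的部分输出)。
def collect_system_info():
    return {
        "system": platform.system(),    # 操作系统名称
        "kernel": platform.release(),   # 内核版本
        "arch": platform.machine(),     # 硬件架构
    }

print(collect_system_info())
```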
|
||||
此外,将这些恶意脚本添加到 AUR 的人将脚本中的个人 Pastebin API 密钥以明文形式留下,再次证明他们不确切地知道他们在做什么。
|
||||
|
||||
尝试将这些信息上传到 Pastebin 的目的尚不清楚,尤其是考虑到它原本可以上传更加敏感的信息,如 GPG/SSH 密钥。
|
||||
|
||||
**更新:** Reddit 用户 u/xanaxdroid_ [提及][6],同一个名为 “xeactor” 的用户也发布过一些加密货币挖矿软件包,因此他推测 “xeactor” 可能正计划向 AUR 添加隐藏的加密货币挖矿软件([两个月][7]前的一些 Ubuntu Snap 软件包就发生过这种事),这或许就是 “xeactor” 试图收集各种系统信息的原因。该 AUR 用户上传的所有软件包都已被删除,因此我无法核实。
|
||||
|
||||
**另一个更新:**
|
||||
|
||||
对于那些用户生成的软件包(比如 AUR 中的软件包),你究竟应该检查什么?具体情况各有不同,我无法给出准确的答案,但你可以从寻找任何试图使用 `curl`、`wget` 和其他类似工具下载内容的代码开始,看看它们究竟要下载什么。还要检查软件包源代码的下载服务器,确保它是官方来源。不幸的是,这并不是一门精确的“科学”。例如,对于 Launchpad PPA,事情会更加复杂,因为你必须了解 Debian 打包方式,而且源代码托管在 PPA 中、由用户上传,可以被直接修改。Snap 软件包则更加复杂,因为(据我所知)在安装之前你无法检查它们。对于后面这些情况,作为通用的解决方案,我想你应该只安装你信任的用户/打包者生成的软件包。
|
||||
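上面提到的检查思路可以部分自动化。下面是一个最小的 Python 草图(本文假设的示例,并非官方工具):扫描 PKGBUILD 文本,列出包含 `curl` 或 `wget` 的行,供人工进一步审查:

```python
import re

def scan_pkgbuild(text):
    """扫描 PKGBUILD 内容,返回包含 curl/wget 下载命令的 (行号, 行内容) 列表。"""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # 查找作为独立单词出现的 curl 或 wget
        if re.search(r"\b(curl|wget)\b", line):
            hits.append((lineno, line.strip()))
    return hits
```

当然,这种简单的模式匹配只能作为辅助手段,并不能替代完整地阅读 PKGBUILD 及其下载的内容。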
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/07/malware-found-on-arch-user-repository.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://www.archlinux.org/
|
||||
[2]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034153.html
|
||||
[3]:https://aur.archlinux.org/
|
||||
[4]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034152.html
|
||||
[5]:https://aur.archlinux.org/cgit/aur.git/commit/?h=acroread&id=b3fec9f2f16703c2dae9e793f75ad6e0d98509bc
|
||||
[6]:https://www.reddit.com/r/archlinux/comments/8x0p5z/reminder_to_always_read_your_pkgbuilds/e21iugg/
|
||||
[7]:https://www.linuxuprising.com/2018/05/malware-found-in-ubuntu-snap-store.html
|
@ -0,0 +1,74 @@
|
||||
15 个适用于 MacOS 的开源应用程序
|
||||
======
|
||||
|
||||

|
||||
|
||||
只要有可能,我都会选择使用开源工具。不久之前,我回到大学攻读教育领导学硕士学位。即便我把心爱的 Linux 笔记本电脑换成了一台 MacBook Pro(因为我不确定校园里是否接受 Linux),我还是决定继续使用我喜欢的工具,哪怕是在 MacOS 上也是如此。
|
||||
|
||||
幸运的是,这并不难,而且没有哪位教授质疑过我用的是什么软件。既然如此,我就不打算保守这个秘密了。
|
||||
|
||||
我知道,我的一些同学最终会在学区担任领导职务,因此,我与我的那些使用 MacOS 或 Windows 的同学分享了关于下面描述的这些开源软件。毕竟,开源软件是真正地自由和友好的。我也希望他们去了解它,并且愿意以很少的一些成本去提供给他们的学生去使用这些世界级的应用程序。他们中的大多数人都感到很惊讶,因为,众所周知,开源软件除了有像你和我这样的用户之外,压根就没有销售团队。
|
||||
|
||||
### 我的 MacOS 学习曲线
|
||||
|
||||
大多数开源工具都能像我以前在 Linux 上使用时那样工作,只是安装方法不同。不过,经过这个过程,我也了解了这些工具在 MacOS 上的一些细微差别。像 [yum][1]、[DNF][2] 和 [APT][3] 这样的工具在 MacOS 的世界中压根不存在,我真的很怀念它们。
|
||||
|
||||
一些 MacOS 应用程序需要安装依赖项,而且安装过程比我在 Linux 上习惯的方式困难得多。尽管如此,我并没有放弃。在这个过程中,我学会了如何在新平台上留住最好的软件。况且,MacOS 的大部分核心也是[开源的][4]。
|
||||
|
||||
此外,我的 Linux 知识背景让我在使用 MacOS 命令行时得心应手。我仍然使用命令行来创建和拷贝文件、添加用户,以及使用其它像 `cat`、`tac`、`more`、`less` 和 `tail` 这样的[实用工具][5]。
|
||||
|
||||
### 15 个适用于 MacOS 的非常好的开源应用程序
|
||||
|
||||
* 在大学里,要求我以 DOCX 电子格式提交作业,这很容易做到:最初我使用 [OpenOffice][6],后来改用 [LibreOffice][7] 完成论文。
|
||||
* 当我因为演示需要去做一些图像时,我使用的是我最喜欢的图像应用程序 [GIMP][8] 和 [Inkscape][9]。
|
||||
* 我喜欢的播客创建工具是 [Audacity][10]。它比起 Mac 上搭载的专有的应用程序更加简单。我使用它去录制访谈和为视频演示创建配乐。
|
||||
* 在 MacOS 上我最早发现的多媒体播放器是 [VideoLan][11] (VLC)。
|
||||
* MacOS 的内置专有视频创建工具是一个非常好的产品,但是你也可以很轻松地去安装和使用 [OpenShot][12],它是一个非常好的内容创建工具。
|
||||
* 当我需要为客户分析网络时,我在 Mac 上使用易于安装的 [Nmap][13](Network Mapper)和 [Wireshark][14] 工具。
|
||||
* 当我为图书管理员和其它教育工作者提供培训时,我在 MacOS 上使用 [VirtualBox][15] 去做 Raspbian、Fedora、Ubuntu、和其它 Linux 发行版的示范操作。
|
||||
* 我在 MacBook 上使用 [Etcher.io][16] 制作引导盘:下载 ISO 文件,再将它刻录到 U 盘上。
|
||||
* 我认为 [Firefox][17] 比起 MacBook Pro 自带的专有浏览器更易用更安全,并且它允许我跨操作系统去同步我的书签。
|
||||
* 当我使用电子书阅读器时,[Calibre][18] 是当之无愧的选择。它很容易去下载和安装,你甚至只需要几次点击就能将它配置为一台 [教室中使用的电子书服务器][19]。
|
||||
* 最近我给中学生教授 Python 课程,我发现从 [Python.org][20] 下载和安装 Python 3 及 IDLE3 编辑器非常容易。我也喜欢学习数据科学,并与学生分享。不论你是对 Python 还是 R 感兴趣,我都建议你下载并[安装][21] [Anaconda 发行版][22]。它包含了非常好的 iPython 编辑器、RStudio、Jupyter Notebooks 和 JupyterLab,以及其它一些应用程序。
|
||||
* [HandBrake][23] 是一个将你家里的旧的视频 DVD 转成 MP4 的工具,这样你就可以将它们共享到 YouTube、Vimeo、或者你的 MacOS 上的 [Kodi][24] 服务器上。
|
||||
|
||||
|
||||
|
||||
现在轮到你了:你在 MacOS(或 Windows)上都使用什么样的开源软件?在下面的评论区共享出来吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/open-source-tools-macos
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://en.wikipedia.org/wiki/Yum_(software)
|
||||
[2]:https://en.wikipedia.org/wiki/DNF_(software)
|
||||
[3]:https://en.wikipedia.org/wiki/APT_(Debian)
|
||||
[4]:https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemTechnology/SystemTechnology.html
|
||||
[5]:https://www.gnu.org/software/coreutils/coreutils.html
|
||||
[6]:https://www.openoffice.org/
|
||||
[7]:https://www.libreoffice.org/
|
||||
[8]:https://www.gimp.org/
|
||||
[9]:https://inkscape.org/en/
|
||||
[10]:https://www.audacityteam.org/
|
||||
[11]:https://www.videolan.org/index.html
|
||||
[12]:https://www.openshot.org/
|
||||
[13]:https://nmap.org/
|
||||
[14]:https://www.wireshark.org/
|
||||
[15]:https://www.virtualbox.org/
|
||||
[16]:https://etcher.io/
|
||||
[17]:https://www.mozilla.org/en-US/firefox/new/
|
||||
[18]:https://calibre-ebook.com/
|
||||
[19]:https://opensource.com/article/17/6/raspberrypi-ebook-server
|
||||
[20]:https://www.python.org/downloads/release/python-370/
|
||||
[21]:https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
[22]:https://www.anaconda.com/download/#macos
|
||||
[23]:https://handbrake.fr/
|
||||
[24]:https://kodi.tv/download
|
@ -0,0 +1,146 @@
|
||||
学习如何使用 Python 构建你自己的 Twitter Bot
|
||||
======
|
||||
|
||||

|
||||
|
||||
Twitter 允许用户向全世界[分享][1]博客文章。借助 Python 和 Tweepy 扩展,创建一个接管你全部推文发布工作的 Twitter 机器人非常简单。本文告诉你如何构建这样一个机器人,希望你也能将这些概念应用到其他在线服务的项目中去。
|
||||
|
||||
### 开始
|
||||
|
||||
[tweepy][2] 扩展可以让创建 Twitter 机器人的过程更加容易上手。它封装了 Twitter 的 API 调用,并提供了一个很简单的接口。
|
||||
|
||||
下面这些命令使用 Pipenv 在一个虚拟环境中安装 tweepy。如果你没有安装 Pipenv,可以看看我们之前的文章:[如何在 Fedora 上安装 Pipenv][3]。
|
||||
|
||||
```
|
||||
$ mkdir twitterbot
|
||||
$ cd twitterbot
|
||||
$ pipenv --three
|
||||
$ pipenv install tweepy
|
||||
$ pipenv shell
|
||||
|
||||
```
|
||||
|
||||
### Tweepy – 开始
|
||||
|
||||
要使用 Twitter API,机器人需要先通过 Twitter 的授权。为此,tweepy 使用了 OAuth 授权标准。你可以通过在 <https://apps.twitter.com/> 创建一个新的应用来获取凭证。
|
||||
|
||||
|
||||
#### 创建一个新的 Twitter 应用
|
||||
|
||||
当你填完表格并点击了创建应用的按钮后,就可以获取到该应用的凭证。Tweepy 需要用户密钥(API Key)和用户密码(API Secret),它们都可以在 “Keys and Access Tokens” 页面中找到。
|
||||
|
||||
![][4]
|
||||
|
||||
向下滚动页面,使用“创建我的 Access Token”按钮生成一个 Access Token 和一个 Access Token Secret。
|
||||
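顺带一提,与其把凭证硬编码在源码里,更稳妥的做法是从环境变量读取。下面是一个假设性的草图,其中的环境变量名(如 TWITTER_CONSUMER_KEY)是本文虚构的,并非 tweepy 的要求:

```python
import os

# 假设的辅助函数:从环境变量读取四个凭证,缺少任何一个都会报错。
CRED_NAMES = [
    "TWITTER_CONSUMER_KEY",
    "TWITTER_CONSUMER_SECRET",
    "TWITTER_ACCESS_TOKEN",
    "TWITTER_ACCESS_TOKEN_SECRET",
]

def load_credentials():
    missing = [name for name in CRED_NAMES if name not in os.environ]
    if missing:
        raise RuntimeError("缺少环境变量: " + ", ".join(missing))
    return [os.environ[name] for name in CRED_NAMES]
```

这样,凭证就不会随源码一起泄露,也便于在不同环境中切换账号。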
|
||||
#### 使用 Tweepy —— 输出你的时间线
|
||||
|
||||
现在你已经有了所需的凭证了,打开一个文件,并写下如下的 Python 代码。
|
||||
```
|
||||
import tweepy
|
||||
|
||||
auth = tweepy.OAuthHandler("your_consumer_key", "your_consumer_key_secret")
|
||||
|
||||
auth.set_access_token("your_access_token", "your_access_token_secret")
|
||||
|
||||
api = tweepy.API(auth)
|
||||
|
||||
public_tweets = api.home_timeline()
|
||||
|
||||
for tweet in public_tweets:
|
||||
print(tweet.text)
|
||||
|
||||
```
|
||||
|
||||
确保你处于 Pipenv 虚拟环境中,然后执行你的程序:
|
||||
|
||||
```
|
||||
$ python tweet.py
|
||||
|
||||
```
|
||||
|
||||
上述程序调用了 `home_timeline` 方法,获取你时间线中最近的 20 条推文。现在这个机器人已经能够使用 tweepy 从 Twitter 获取数据,接下来尝试修改代码来发送一条推文。
|
||||
|
||||
#### 使用 Tweepy —— 发送一条 tweet
|
||||
|
||||
要发送一条推文,可以使用容易上手的 API 方法 `update_status`。它的用法很简单:
|
||||
|
||||
```
|
||||
api.update_status("The awesome text you would like to tweet")
|
||||
```
|
||||
|
||||
Tweepy 扩展为制作 Twitter 机器人提供了许多有用的方法。要获取 API 的详细信息,请查看[文档][5]。
|
||||
|
||||
|
||||
### 一个杂志机器人
|
||||
|
||||
接下来我们来创建一个搜索 Fedora Magazine 的 tweets 并转推这些 tweets 的机器人。
|
||||
|
||||
为了避免多次转推相同的内容,这个机器人保存了最近一条转推的推文 ID。两个助手函数 `store_last_id` 和 `get_last_id` 负责存储和读取这个 ID。
|
||||
|
||||
然后,机器人使用 tweepy 的搜索 API 来查找 Fedora Magazine 最近的推文,并存储其 ID。
|
||||
|
||||
```
|
||||
import tweepy
|
||||
|
||||
|
||||
def store_last_id(tweet_id):
|
||||
""" Store a tweet id in a file """
|
||||
with open("lastid", "w") as fp:
|
||||
fp.write(str(tweet_id))
|
||||
|
||||
|
||||
def get_last_id():
|
||||
""" Read the last retweeted id from a file """
|
||||
with open("lastid", "r") as fp:
|
||||
return fp.read()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
auth = tweepy.OAuthHandler("your_consumer_key", "your_consumer_key_secret")
|
||||
|
||||
    auth.set_access_token("your_access_token", "your_access_token_secret")
    api = tweepy.API(auth)
|
||||
|
||||
try:
|
||||
last_id = get_last_id()
|
||||
except FileNotFoundError:
|
||||
print("No retweet yet")
|
||||
last_id = None
|
||||
|
||||
for tweet in tweepy.Cursor(api.search, q="fedoramagazine.org", since_id=last_id).items():
|
||||
if tweet.user.name == 'Fedora Project':
|
||||
store_last_id(tweet.id)
|
||||
tweet.retweet()
|
||||
            print(f'"{tweet.text}" was retweeted')
|
||||
|
||||
```
|
||||
|
||||
为了只转推 Fedora Magazine 的推文,机器人只搜索内容包含 fedoramagazine.org、且由 “Fedora Project” Twitter 账户发布的推文。
|
||||
|
||||
|
||||
|
||||
### 结论
|
||||
|
||||
在这篇文章中,你了解了如何使用 tweepy 扩展来创建一个自动读取、发送和搜索推文的 Twitter 应用。现在,你可以发挥自己的创造力,打造属于你自己的 Twitter 机器人了。
|
||||
|
||||
这篇文章的演示源码可以在 [GitHub][6] 上找到。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/learn-build-twitter-bot-python/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Bestony](https://github.com/bestony)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org
|
||||
[1]:https://twitter.com
|
||||
[2]:https://tweepy.readthedocs.io/en/v3.5.0/
|
||||
[3]:https://fedoramagazine.org/install-pipenv-fedora/
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-19-20-17-17.png
|
||||
[5]:http://docs.tweepy.org/en/v3.5.0/api.html#id1
|
||||
[6]:https://github.com/cverna/magabot
|