Mirror of https://github.com/LCTT/TranslateProject.git (synced 2024-12-26 21:30:55 +08:00)

Merge remote-tracking branch 'LCTT/master'
Commit: db0bb54799
Keeping (financial) score with Ledger
======

Since moving to Canada in 2005, I have used [Ledger CLI][1] to track my finances. I like the plain-text approach, and its support for virtual envelopes means I can keep track of both my bank account balances and my virtual allocations to different categories. Here is how we use those virtual envelopes to manage our finances separately.

Each month, I have an entry that allocates my living expenses into different categories, including an allocation for household expenses. W- doesn't ask for much, so I try to be mindful of the difference between that and my own living expenses. The way we handle it is that I pay a fixed amount, which is credited against the groceries I pay for. Since our grocery total is usually less than the household amount I budget, any difference stays on the tab. I used to write him cheques, but lately I just pay for the occasional additional large expense.

Here is a sample envelope allocation:

```
2014.10.01 * Budget
    [Envelopes:Living]
    [Envelopes:Household] $500
    ;; More lines go here
```
This is one of the envelope rules I have set up. It encourages me to categorize expenses correctly. All expenses are taken out of my "Play" envelope.

```
= /^Expenses/
    (Envelopes:Play) -1.0
```

This one reimburses the "Play" envelope for household expenses, moving the amount from the "Household" envelope into the "Play" envelope.

```
= /^Expenses:House$/
    (Envelopes:Play) 1.0
    (Envelopes:Household) -1.0
```
I have a set of regular expenses that model the household expenses in my budget. For example, here is the one for October:

```
2014.10.1 * House
    Expenses:House
    Assets:Household $-500
```

And here is what a grocery transaction looks like:

```
2014.09.28 * No Frills
    Assets:Household:Groceries $70.45
```
via: http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/

Author: [Sacha Chua][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
Mesos and Kubernetes: It's Not a Competition
======

> People often think in terms of x versus y, but it's not always a question of one technology versus another. Ben Hindman explains here how Mesos complements other technologies.

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)

The roots of Mesos go back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems, such as making software and libraries run more efficiently on those chips. He talked with his fellow students about whether they could borrow ideas from parallel processing and multithreading and apply them to cluster management.

"Initially, we focused on big data," said Hindman. Big data was really hot at the time, and Hadoop was one of the hottest technologies. "We recognized that the way people were running programs like Hadoop on clusters was very similar to the way they were running multithreaded and parallel applications," said Hindman.

However, they were not very efficient, so they started thinking about how to make them run better through cluster management and resource management. "We looked at many of the different technologies of that period," Hindman recalled.

Hindman and his colleagues then decided to take a brand-new approach. "We decided to create a lower-level abstraction for resource management, and run scheduling services and other things on top of it," said Hindman. "That is essentially the essence of Mesos — separating the resource management part from the scheduling part."

It worked, and Mesos has been going strong ever since.
The project was started in 2009. In 2010, the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache and became a Top-Level Project (TLP) in 2013.

There were many reasons why the Mesos community chose the Apache Software Foundation, such as the Apache license, and the fact that the foundation already had a vibrant community of other such projects.

It was also about influence. A lot of the people working on Mesos were also involved with Apache, and many of them were working on projects like Hadoop. At the same time, many people from the Mesos community were working on other big data projects, such as Spark. This cross-pollination led all three projects — Hadoop, Mesos, and Spark — to become ASF projects.

It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of it being a privately owned project.

### Who is using Mesos?

A better question would be: who isn't using Mesos? Everyone from Apple to Netflix is using Mesos. But Mesos faced the challenges that any technology faces in its early days. "Initially, I had to convince people that this was an interesting new technology called 'containers' that didn't need to use virtual machines," said Hindman.

The industry has changed a great deal since then, and now every conversation about infrastructure starts with "containers" — thanks to the work done by Docker. Today no convincing is needed, but even in the early days of Mesos, the previously mentioned companies such as Apple, Netflix, and PayPal already understood the technical advantages that replacing virtual machines with containers would bring them. "These companies understood the value of containers before containerization became a phenomenon," said Hindman.

You can see in these companies that they run massive numbers of containers instead of virtual machines. All they are doing is managing and running these containers, and they embraced Mesos. Companies that used Mesos early on include Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.

"Most organizations use Mesos to run all kinds of services," said Hindman, "but some companies are doing really interesting things with it, such as data processing, data streaming, analytics workloads and applications."

One of the reasons these companies adopted Mesos was the clean separation between the resource-management layers. Mesos gives companies great flexibility when running containers.
People often think in terms of x versus y, but it's not always a question of one technology versus another. Most technologies overlap in some areas, and they can be complementary. "I don't tend to see all these things as competition. I think some of them actually work in complement with each other," said Hindman.

"In fact, the name Mesos stands for 'middle'; it's a kind of middle OS," said Hindman. "We have this notion of a container scheduler that can run on top of something like Mesos. When Kubernetes first came out, we actually embraced it within the Mesos ecosystem and saw it as another way of running containers within DC/OS on top of Mesos."

Mesos also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which it made a first-class citizen of the Mesos ecosystem. However, Marathon really cannot compare with Kubernetes. "Kubernetes does a lot more than Marathon, so you can't simply swap one for the other," said Hindman. "At the same time, we have done many things in Mesos that are not available in Kubernetes. So these technologies are complementary."

Instead of viewing these technologies as adversaries, they should be seen as beneficial to the industry. They are not technological duplication; they are diversity. According to Hindman, "it may be confusing for end users in the open source space because it's hard to know which technology is suitable for which workload, but that's the nature of the beast called open source."

It simply means there are more choices, and everybody wins.
via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition

Author: [Swapnil Bhartiya][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
Hosting courses with Open edX
======

> Open edX provides a powerful and versatile open source course-management solution for organizations of all sizes and types. Why not give it a look?

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesgen_rh_032x_0.png?itok=cApG9aB4)

The [Open edX platform][2] is a free and open source course management system that is used [all over the world][3] to host massive open online courses (MOOCs) as well as smaller classes and training modules. As of Open edX's [seventh major release][1], the platform has delivered more than 8,000 original courses and 50 million course enrollments. You can install the platform on your own local hardware or with any of the leading cloud infrastructure providers, and, as the project's list of [service providers][4] keeps growing, more and more software-as-a-service (SaaS) options are becoming available from them as well.

The Open edX platform is widely used by top educational institutions, private companies, public agencies, NGOs, non-profits, and ed-tech startups from around the world, and the project's global community of service providers keeps making the platform accessible to ever smaller organizations. If you plan to design and deliver educational content to a broad audience, you should consider using the Open edX platform.

### Installation

There are multiple ways to install the software, which may come as a bit of an unwelcome surprise, at least at first. But no matter which way you [install Open edX][5], you end up with an application with the same functionality. The default installation includes a full-featured learning management system (LMS) for online learners, plus a full-featured course management studio (CMS) that your instructor teams can use to author original course content. You can think of the CMS as the "[WordPress][6]" of course content design and management, and the LMS as the "[Magento][7]" of course marketing, distribution, and consumption.

Open edX is device-agnostic and fully responsive, and with modest effort you can publish native iOS and Android apps that integrate seamlessly with your instance's backend. The code bases for the Open edX platform, the native mobile apps, and the installation scripts are all published on [GitHub][8].
#### What to expect

The Open edX platform's [GitHub repository][9] contains performant, production-grade code that is suitable for organizations of all types. Thousands of programmers from hundreds of institutions regularly contribute to the edX repositories, and the platform is a veritable case study in how to build and manage a complex enterprise application. So even though you are likely to face a multitude of questions like "how do I move the platform into production?", you need not worry about the quality and robustness of the Open edX platform code base itself.

With modest training, your instructors can design good online courses. But bear in mind that Open edX is extensible through its [XBlock][10] component architecture, so with their effort and yours, your instructors may well turn good courses into great ones.

The platform also works well in a single-server environment, and it is highly modular, scaling horizontally almost without limit. It is also themable and localizable: the platform's functionality and appearance can be adjusted almost without restriction to fit your needs. The platform installs on demand and runs reliably on your own hardware.

#### Some assembly required

Bear in mind that a sizable number of edX software modules are not included in the default installation, and these modules often provide functionality that organizations need. For example, the analytics module, the e-commerce module, and the course notification/announcement module are not part of the default installation, and each of these standalone modules is worth installing. Also, data backup/restore and system administration are entirely up to you to handle. Fortunately, there is a growing body of community documentation and how-to articles on these topics; search Google and Bing for help getting them installed in production.

Although there are plenty of well-documented procedures, depending on your skill level, configuring [oAuth][11] and [SSL/TLS][12] and working with the platform's [REST API][13] may be challenging. Additionally, some organizations require that MySQL and/or MongoDB databases be managed in a centralized environment; if that is your situation, you will also need to separate these services from the default platform installation. The edX design team has simplified this as much as possible, but it is still a significant change, and it may take some time to implement.

If you are facing resource and/or technical constraints — don't despair. The Open edX community's SaaS providers, such as [appsembler][14] and [eduNEXT][15], offer compelling alternatives to a DIY installation, especially if you would rather just buy a turnkey solution.

### Technology stack
![edx-architecture.png][24]

*The Open edX technology stack (CC BY, via edX)*

Installing and configuring these components on your own is no small feat, and packaging them all in a way that fits organizations of any size and complexity, and that can be arbitrarily mixed and matched to their needs without major code changes, might seem impossible. And it does, until you see how cleverly and intuitively the platform's main configuration parameters are arranged and named. Note that there is a learning curve to the platform's organizational structure, but everything you learn is worth knowing, not only for this project but for large IT projects in general.
### Adoption

The edX project's rapid worldwide adoption owes a great deal to how well the software works. Not surprisingly, the project has also attracted a lot of highly talented people, who contribute as programmers, project consultants, translators, technical writers, and bloggers. The annual [Open edX conference][27], the [official edX Google Group][28], and the [Open edX service provider list][4] are excellent starting points for getting to know this diverse and growing ecosystem. As a relative newcomer myself, I have found it easy to get involved and to work directly on many aspects of the project.

Good luck on your learning journey, and feel free to reach out to me as you scope out your project.

via: https://opensource.com/article/18/6/getting-started-open-edx

Author: [Lawrence Mc Daniel][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
iWant – The Decentralized Peer To Peer File Sharing Commandline Application
======

![](https://www.ostechnix.com/wp-content/uploads/2017/07/p2p-720x340.jpg)

A while ago, we wrote a guide about two file sharing utilities: [**transfer.sh**][1], a free web service that allows you to share files over the Internet easily and quickly, and [**PSiTransfer**][2], a simple open source self-hosted file sharing solution. Today, we will look at yet another file sharing utility called **iWant**. It is a free and open source, CLI-based, decentralized peer-to-peer file sharing application.

What makes it different from other file sharing applications, you might wonder? Here are some of iWant's prominent features:

  * It's a command-line application. You don't need any memory-consuming GUI utilities; you need only the Terminal.
  * It is decentralized. That means your data is not stored in any central location, so there is no central point of failure.
  * iWant allows you to pause a download and resume it later when you want. You don't need to download the file from the beginning; the download resumes from where you left off.
  * Any changes made to the files in the shared directory (such as deletions, additions, and modifications) are reflected in the network instantly.
  * Just like torrents, iWant downloads files from multiple peers. If any seeder leaves the group or fails to respond, the download continues from another seeder.
  * It is cross-platform, so you can use it on GNU/Linux, MS Windows, and Mac OS X.
### iWant – A CLI-based Decentralized Peer To Peer File Sharing Solution

#### Install iWant

iWant can be easily installed using the PIP package manager. Make sure you have pip installed in your Linux distribution. If it is not installed yet, refer to the following guide:

[How To Manage Python Packages Using Pip](https://www.ostechnix.com/manage-python-packages-using-pip/)

After installing PIP, make sure you have installed the following dependencies:

  * libffi-dev
  * libssl-dev

For example, on Ubuntu you can install these dependencies using the command:
```
$ sudo apt-get install libffi-dev libssl-dev
```

Once all dependencies are installed, install iWant using the following command:
```
$ sudo pip install iwant
```

iWant is now installed on our system. Let us go ahead and see how to use it to transfer files over the network.
#### Usage

First, start the iWant server using the command:
```
$ iwanto start
```

The first time, iWant will ask for the Shared and Download folder locations. Enter the actual location of both folders. Then, choose which network interface you want to use.

Sample output would be:
```
Shared/Download folder details looks empty..
Note: Shared and Download folder cannot be the same
SHARED FOLDER(absolute path):/home/sk/myshare
DOWNLOAD FOLDER(absolute path):/home/sk/mydownloads
Network interface available
1. lo => 127.0.0.1
2. enp0s3 => 192.168.43.2
Enter index of the interface:2
now scanning /home/sk/myshare
[Adding] /home/sk/myshare 0.0
Updating Leader 56f6d5e8-654e-11e7-93c8-08002712f8c1
[Adding] /home/sk/myshare 0.0
connecting to 192.168.43.2:1235 for hashdump
```

If you see output like the above, you can start using iWant right away.

Similarly, start the iWant service on all systems in the network, assign valid Shared and Download folder locations, and select the network interface card.

The iWant service will keep running in the current Terminal window until you press **CTRL+C** to quit it. You need to open a new tab or a new Terminal window to use iWant.
iWant usage is very simple. It has a few commands, listed below:

  * **iwanto start** – Starts the iWant server.
  * **iwanto search <name>** – Search for files.
  * **iwanto download <hash>** – Download a file.
  * **iwanto share <path>** – Change the Shared folder's location.
  * **iwanto download to <destination>** – Change the Download folder's location.
  * **iwanto view config** – View the Shared and Download folders.
  * **iwanto --version** – Displays the iWant version.
  * **iwanto -h** – Displays the help section.

Allow me to show you some examples.
**Search files**

To search for a file, run:
```
$ iwanto search <filename>
```

Please note that you don't need to specify the exact name.

Example:
```
$ iwanto search command
```

The above command will search for any files that contain the string "command".

Sample output from my Ubuntu system:
```
Filename Size Checksum
------------------------------------------- ------- --------------------------------
/home/sk/myshare/THE LINUX COMMAND LINE.pdf 3.85757 efded6cc6f34a3d107c67c2300459911
```
**Download files**

You can download files from any system on your network. To download a file, just mention the hash (checksum) of the file as shown below. You can get the hash value of a shared file using the "iwanto search" command.
```
$ iwanto download efded6cc6f34a3d107c67c2300459911
```

The file will be saved in your Download location (/home/sk/mydownloads/ in my case).
```
Filename: /home/sk/mydownloads/THE LINUX COMMAND LINE.pdf
Size: 3.857569 MB
```

**View configuration**

To view the configuration, i.e., the Shared and Download folders, run:
```
$ iwanto view config
```

Sample output:
```
Shared folder:/home/sk/myshare
Download folder:/home/sk/mydownloads
```
**Change Shared and Download folder locations**

You can change the Shared folder and Download folder locations to some other path, like below:
```
$ iwanto share /home/sk/ostechnix
```

Now, the Shared location has been changed to /home/sk/ostechnix.

Also, you can change the Download location using the command:
```
$ iwanto download to /home/sk/Downloads
```

To view the changes made, run the config command:
```
$ iwanto view config
```
**Stop iWant**

Once you are done with iWant, you can quit it by pressing **CTRL+C**.

If it is not working for some reason, it might be because a firewall is blocking it or your router doesn't support multicast. You can view all logs in the **~/.iwant/.iwant.log** file. For more details, refer to the project's GitHub page provided at the end.

And, that's all. Hope this tool helps. I will be here again with another interesting guide. Till then, stay tuned with OSTechNix!

Cheers!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/iwant-decentralized-peer-peer-file-sharing-commandline-application/

Author: [SK][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/easy-fast-way-share-files-internet-command-line/
[2]:https://www.ostechnix.com/psitransfer-simple-open-source-self-hosted-file-sharing-solution/
BriFuture is translating

You don't know Bash: An introduction to Bash arrays
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S)

Although software engineers regularly use the command line for many aspects of development, arrays are likely one of the more obscure features of the command line (although not as obscure as the regex operator `=~`). But obscurity and questionable syntax aside, [Bash][1] arrays can be very powerful.
### Wait, but why?

Writing about Bash is challenging because it's remarkably easy for an article to devolve into a manual that focuses on syntax oddities. Rest assured, however, the intent of this article is to avoid having you RTFM.

#### A real (actually useful) example

To that end, let's consider a real-world scenario and how Bash can help: You are leading a new effort at your company to evaluate and optimize the runtime of your internal data pipeline. As a first step, you want to do a parameter sweep to evaluate how well the pipeline makes use of threads. For the sake of simplicity, we'll treat the pipeline as a compiled C++ black box where the only parameter we can tweak is the number of threads reserved for data processing: `./pipeline --threads 4`.

### The basics
The first thing we'll do is define an array containing the values of the `--threads` parameter that we want to test:
```
allThreads=(1 2 4 8 16 32 64 128)
```

In this example, all the elements are numbers, but it need not be the case—arrays in Bash can contain both numbers and strings, e.g., `myArray=(1 2 "three" 4 "five")` is a valid expression. And just as with any other Bash variable, make sure to leave no spaces around the equal sign. Otherwise, Bash will treat the variable name as a program to execute, and the `=` as its first parameter!

Now that we've initialized the array, let's retrieve a few of its elements. You'll notice that simply doing `echo $allThreads` will output only the first element.

To understand why that is, let's take a step back and revisit how we usually output variables in Bash. Consider the following scenario:
```
type="article"
echo "Found 42 $type"
```

Say the variable `$type` is given to us as a singular noun and we want to add an `s` at the end of our sentence. We can't simply add an `s` to `$type` since that would turn it into a different variable, `$types`. And although we could utilize code contortions such as `echo "Found 42 "$type"s"`, the best way to solve this problem is to use curly braces: `echo "Found 42 ${type}s"`, which allows us to tell Bash where the name of a variable starts and ends (interestingly, this is the same syntax used in JavaScript/ES6 to inject variables and expressions in [template literals][2]).

So as it turns out, although Bash variables don't generally require curly brackets, they are required for arrays. In turn, this allows us to specify the index to access, e.g., `echo ${allThreads[1]}` returns the second element of the array. Not including brackets, e.g., `echo $allThreads[1]`, leads Bash to treat `[1]` as a string and output it as such.

Yes, Bash arrays have odd syntax, but at least they are zero-indexed, unlike some other languages (I'm looking at you, `R`).
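To see the bracket rules in action, here is a short sketch (reusing the `allThreads` array from above) contrasting the two forms:

```shell
#!/usr/bin/env bash
allThreads=(1 2 4 8 16 32 64 128)

echo "${allThreads[1]}"   # with braces: the second element -> 2
echo "$allThreads[1]"     # without braces: first element plus literal "[1]" -> 1[1]
```

The second line is the common gotcha: Bash expands `$allThreads` to the first element and then prints `[1]` verbatim.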
### Looping through arrays

Although in the examples above we used integer indices in our arrays, let's consider two occasions when that won't be the case: First, if we wanted the `$i`-th element of the array, where `$i` is a variable containing the index of interest, we can retrieve that element using: `echo ${allThreads[$i]}`. Second, to output all the elements of an array, we replace the numeric index with the `@` symbol (you can think of `@` as standing for `all`): `echo ${allThreads[@]}`.

#### Looping through array elements

With that in mind, let's loop through `$allThreads` and launch the pipeline for each value of `--threads`:
```
for t in ${allThreads[@]}; do
  ./pipeline --threads $t
done
```
#### Looping through array indices

Next, let's consider a slightly different approach. Rather than looping over array elements, we can loop over array indices:
```
for i in ${!allThreads[@]}; do
  ./pipeline --threads ${allThreads[$i]}
done
```

Let's break that down: As we saw above, `${allThreads[@]}` represents all the elements in our array. Adding an exclamation mark to make it `${!allThreads[@]}` will return the list of all array indices (in our case 0 to 7). In other words, the `for` loop is looping through all indices `$i` and reading the `$i`-th element from `$allThreads` to set the value of the `--threads` parameter.

This is much harsher on the eyes, so you may be wondering why I bother introducing it in the first place. That's because there are times where you need to know both the index and the value within a loop, e.g., if you want to ignore the first element of an array, using indices saves you from creating an additional variable that you then increment inside the loop.
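As a small illustration of that point, here is a sketch that skips the single-threaded run by filtering on the index (the `./pipeline` call is echoed rather than executed, since the binary is our hypothetical black box):

```shell
#!/usr/bin/env bash
allThreads=(1 2 4 8 16 32 64 128)

# Skip index 0 (the single-threaded run) without a separate counter variable
for i in "${!allThreads[@]}"; do
    if [ "$i" -eq 0 ]; then
        continue
    fi
    echo "would run: ./pipeline --threads ${allThreads[$i]}"
done
```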
### Populating arrays

So far, we've been able to launch the pipeline for each `--threads` of interest. Now, let's assume the output to our pipeline is the runtime in seconds. We would like to capture that output at each iteration and save it in another array so we can do various manipulations with it at the end.

#### Some useful syntax

But before diving into the code, we need to introduce some more syntax. First, we need to be able to retrieve the output of a Bash command. To do so, use the following syntax: `output=$( ./my_script.sh )`, which will store the output of our commands into the variable `$output`.
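A tiny sketch of that command substitution, using `echo` as a stand-in for the pipeline:

```shell
#!/usr/bin/env bash
# Capture a command's stdout into a variable
output=$( echo "42 seconds" )
echo "runtime was: $output"   # prints "runtime was: 42 seconds"
```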
The second bit of syntax we need is how to append the value we just retrieved to an array. The syntax to do that will look familiar:
```
myArray+=( "newElement1" "newElement2" )
```

#### The parameter sweep

Putting everything together, here is our script for launching our parameter sweep:
```
allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()

for t in ${allThreads[@]}; do
  runtime=$(./pipeline --threads $t)
  allRuntimes+=( $runtime )
done
```

And voilà!
### What else you got?

In this article, we covered the scenario of using arrays for parameter sweeps. But I promise there are more reasons to use Bash arrays—here are two more examples.

#### Log alerting

In this scenario, your app is divided into modules, each with its own log file. We can write a cron job script to email the right person when there are signs of trouble in certain modules:
```
# List of logs and who should be notified of issues
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")

# Look for signs of trouble in each log
for i in ${!logPaths[@]};
do
  log=${logPaths[$i]}
  stakeholder=${logEmails[$i]}
  numErrors=$( tail -n 100 "$log" | grep "ERROR" | wc -l )

  # Warn stakeholders if recently saw > 5 errors
  if [[ "$numErrors" -gt 5 ]];
  then
    emailRecipient="$stakeholder"
    emailSubject="WARNING: ${log} showing unusual levels of errors"
    emailBody="${numErrors} errors found in log ${log}"
    echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
  fi
done
```
#### API queries

Say you want to generate some analytics about which users comment the most on your Medium posts. Since we don't have direct database access, SQL is out of the question, but we can use APIs!

To avoid getting into a long discussion about API authentication and tokens, we'll instead use [JSONPlaceholder][3], a public-facing API testing service, as our endpoint. Once we query each post and retrieve the emails of everyone who commented, we can append those emails to our results array:
```
endpoint="https://jsonplaceholder.typicode.com/comments"
allEmails=()

# Query first 10 posts
for postId in {1..10};
do
  # Make API call to fetch emails of this post's commenters
  response=$(curl "${endpoint}?postId=${postId}")

  # Use jq to parse the JSON response into an array
  allEmails+=( $( jq '.[].email' <<< "$response" ) )
done
```

Note here that I'm using the [`jq` tool][4] to parse JSON from the command line. The syntax of `jq` is beyond the scope of this article, but I highly recommend you look into it.

As you might imagine, there are countless other scenarios in which using Bash arrays can help, and I hope the examples outlined in this article have given you some food for thought. If you have other examples to share from your own work, please leave a comment below.
### But wait, there's more!

Since we covered quite a bit of array syntax in this article, here's a summary of what we covered, along with some more advanced tricks we did not cover:

| Syntax | Result |
| --- | --- |
| `arr=()` | Create an empty array |
| `arr=(1 2 3)` | Initialize array |
| `${arr[2]}` | Retrieve third element |
| `${arr[@]}` | Retrieve all elements |
| `${!arr[@]}` | Retrieve array indices |
| `${#arr[@]}` | Calculate array size |
| `arr[0]=3` | Overwrite 1st element |
| `arr+=(4)` | Append value(s) |
| `str=$(ls)` | Save `ls` output as a string |
| `arr=( $(ls) )` | Save `ls` output as an array of files |
| `${arr[@]:s:n}` | Retrieve `n` elements starting at index `s` |
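Several of these one-liners can be checked directly at the prompt; a short sketch exercising a few of them:

```shell
#!/usr/bin/env bash
arr=(1 2 3)

arr[0]=3    # overwrite the first element -> (3 2 3)
arr+=(4)    # append a value              -> (3 2 3 4)

echo "${#arr[@]}"      # array size -> 4
echo "${arr[2]}"       # third element -> 3
echo "${arr[@]:1:2}"   # 2 elements starting at index 1 -> 2 3
```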
### One last thought

As we've discovered, Bash arrays sure have strange syntax, but I hope this article convinced you that they are extremely powerful. Once you get the hang of the syntax, you'll find yourself using Bash arrays quite often.

#### Bash or Python?

Which begs the question: When should you use Bash arrays instead of other scripting languages such as Python?

To me, it all boils down to dependencies—if you can solve the problem at hand using only calls to command-line tools, you might as well use Bash. But for times when your script is part of a larger Python project, you might as well use Python.

For example, we could have turned to Python to implement the parameter sweep, but we would have ended up just writing a wrapper around Bash:
```
import subprocess

all_threads = [1, 2, 4, 8, 16, 32, 64, 128]
all_runtimes = []

# Launch pipeline on each number of threads
for t in all_threads:
    cmd = './pipeline --threads {}'.format(t)

    # Use the subprocess module to fetch the return output
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    output = p.communicate()[0]
    all_runtimes.append(output)
```
|
||||
|
||||
Since there's no getting around the command line in this example, using Bash directly is preferable.
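For comparison, the same sweep in pure Bash is shorter and has no dependencies. In the sketch below, `pipeline` is just a stub standing in for the real `./pipeline` binary, so the example runs on its own:

```shell
#!/bin/bash
# Stub standing in for the real ./pipeline binary
pipeline() { echo "ran with $2 threads"; }

all_threads=(1 2 4 8 16 32 64 128)
all_runtimes=()

# Launch the pipeline once per thread count, capturing the output
for t in "${all_threads[@]}"; do
    all_runtimes+=( "$(pipeline --threads "$t")" )
done

# Report each thread count alongside its captured output
for i in "${!all_threads[@]}"; do
    echo "threads=${all_threads[$i]}: ${all_runtimes[$i]}"
done
```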
|
||||
|
||||
#### Time for a shameless plug
|
||||
|
||||
If you enjoyed this article, there's more where that came from! [Register here to attend OSCON][5], where I'll be presenting the live-coding workshop [You Don't Know Bash][6] on July 17, 2018. No slides, no clickers—just you and me typing away at the command line, exploring the wondrous world of Bash.
|
||||
|
||||
This article originally appeared on [Medium][7] and is republished with permission.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
|
||||
|
||||
作者:[Robert Aboukhalil][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/robertaboukhalil
|
||||
[1]:https://opensource.com/article/17/7/bash-prompt-tips-and-tricks
|
||||
[2]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
|
||||
[3]:https://github.com/typicode/jsonplaceholder
|
||||
[4]:https://stedolan.github.io/jq/
|
||||
[5]:https://conferences.oreilly.com/oscon/oscon-or
|
||||
[6]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/67166
|
||||
[7]:https://medium.com/@robaboukhalil/the-weird-wondrous-world-of-bash-arrays-a86e5adf2c69
|
@ -0,0 +1,59 @@
|
||||
What is the Difference Between the macOS and Linux Kernels
|
||||
======
|
||||
Some people might think that there are similarities between the macOS and the Linux kernel because they can handle similar commands and similar software. Some people even think that Apple’s macOS is based on Linux. The truth is that both kernels have very different histories and features. Today, we will take a look at the difference between macOS and Linux kernels.
|
||||
|
||||
![macOS vs Linux][1]
|
||||
|
||||
### History of macOS Kernel
|
||||
|
||||
We will start with the history of the macOS kernel. In 1985, Steve Jobs left Apple due to a falling out with CEO John Sculley and the Apple board of directors. He then founded a new computer company named [NeXT][2]. Jobs wanted to get a new computer (with a new operating system) to market quickly. To save time, the NeXT team used the [Mach kernel][3] from Carnegie Mellon and parts of the BSD code base to create the [NeXTSTEP operating system][4].
|
||||
|
||||
NeXT never became a financial success, due in part to Jobs’ habit of spending money like he was still at Apple. Meanwhile, Apple had tried unsuccessfully on several occasions to update their operating system, even going so far as to partner with IBM. In 1997, Apple purchased NeXT for $429 million. As part of the deal, Steve Jobs returned to Apple and NeXTSTEP became the foundation of macOS and iOS.
|
||||
|
||||
### History of Linux Kernel
|
||||
|
||||
Unlike the macOS kernel, Linux was not created as part of a commercial endeavor. Instead, it was [created in 1991 by Finnish computer science student Linus Torvalds][5]. Originally, the kernel was written to the specifications of Linus’ computer because he wanted to take advantage of its new 80386 processor. Linus posted the code for his new kernel to [Usenet in August of 1991][6]. Soon, he was receiving code and feature suggestions from all over the world. The following year Orest Zborowski ported the X Window System to Linux, giving it the ability to support a graphical user interface.
|
||||
|
||||
Over the last 27 years, Linux has slowly grown and gained features. It’s no longer a student’s small-time project. Now it runs most of the [world’s][7] [computing devices][8] and the [world’s supercomputers][9]. Not too shabby.
|
||||
|
||||
### Features of the macOS Kernel
|
||||
|
||||
The macOS kernel is officially known as XNU. The [acronym][10] stands for “XNU is Not Unix.” According to [Apple’s Github page][10], XNU is “a hybrid kernel combining the Mach kernel developed at Carnegie Mellon University with components from FreeBSD and C++ API for writing drivers”. The BSD subsystem part of the code is [“typically implemented as user-space servers in microkernel systems”][11]. The Mach part is responsible for low-level work, such as multitasking, protected memory, virtual memory management, kernel debugging support, and console I/O.
|
||||
|
||||
### Features of Linux Kernel
|
||||
|
||||
While the macOS kernel combines the features of a microkernel ([Mach][12]) and a monolithic kernel ([BSD][13]), Linux is solely a monolithic kernel. A [monolithic kernel][14] is responsible for managing the CPU, memory, inter-process communication, device drivers, file system, and system server calls.
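One practical consequence of the shared Unix heritage: both kernels report themselves through the same `uname` interface, so a script can tell them apart. A small sketch:

```shell
#!/bin/bash
# Identify the running kernel; works on both macOS (Darwin/XNU) and Linux
kernel=$(uname -s)
case "$kernel" in
    Darwin) desc="macOS (XNU kernel), release $(uname -r)" ;;
    Linux)  desc="Linux kernel, release $(uname -r)" ;;
    *)      desc="unrecognized kernel: $kernel" ;;
esac
echo "$desc"
```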
|
||||
|
||||
### Difference between Mac and Linux kernel in one line
|
||||
|
||||
The macOS kernel (XNU) has been around longer than Linux and was based on a combination of two even older code bases. On the other hand, Linux is newer, written from scratch, and is used on many more devices.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][15].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/mac-linux-difference/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/macos-vs-linux-kernels.jpeg
|
||||
[2]:https://en.wikipedia.org/wiki/NeXT
|
||||
[3]:https://en.wikipedia.org/wiki/Mach_(kernel)
|
||||
[4]:https://en.wikipedia.org/wiki/NeXTSTEP
|
||||
[5]:https://www.cs.cmu.edu/%7Eawb/linux.history.html
|
||||
[6]:https://groups.google.com/forum/#!original/comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
|
||||
[7]:https://www.zdnet.com/article/sorry-windows-android-is-now-the-most-popular-end-user-operating-system/
|
||||
[8]:https://www.linuxinsider.com/story/31855.html
|
||||
[9]:https://itsfoss.com/linux-supercomputers-2017/
|
||||
[10]:https://github.com/apple/darwin-xnu
|
||||
[11]:http://osxbook.com/book/bonus/ancient/whatismacosx/arch_xnu.html
|
||||
[12]:https://en.wikipedia.org/wiki/Mach_(kernel)
|
||||
[13]:https://en.wikipedia.org/wiki/FreeBSD
|
||||
[14]:https://www.howtogeek.com/howto/31632/what-is-the-linux-kernel-and-what-does-it-do/
|
||||
[15]:http://reddit.com/r/linuxusersgroup
|
@ -0,0 +1,54 @@
|
||||
5 Firefox extensions to protect your privacy
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/biz_cinderblock_cloud_yellowhat.jpg?itok=sJdlsYTF)
|
||||
|
||||
In the wake of the Cambridge Analytica story, I took a hard look at how far I had let Facebook penetrate my online presence. As I'm generally concerned about single points of failure (or compromise), I am not one to use social logins. I use a password manager and create unique logins for every site (and you should, too).
|
||||
|
||||
What I was most perturbed about was the pervasive intrusion Facebook was having on my digital life. I uninstalled the Facebook mobile app almost immediately after diving into the Cambridge Analytica story. I also [disconnected all apps, games, and websites][1] from Facebook. Yes, this will change your experience on Facebook, but it will also protect your privacy. As a veteran with friends spread out across the globe, maintaining the social connectivity of Facebook is important to me.
|
||||
|
||||
I went about the task of scrutinizing other services as well. I checked Google, Twitter, GitHub, and more for any unused connected applications. But I know that's not enough. I need my browser to be proactive in preventing behavior that violates my privacy. I began the task of figuring out how best to do that. Sure, I can lock down a browser, but I need to make the sites and tools I use work while trying to keep them from leaking data.
|
||||
|
||||
Following are five tools that will protect your privacy while using your browser. The first three extensions are available for Firefox and Chrome, while the latter two are only available for Firefox.
|
||||
|
||||
### Privacy Badger
|
||||
|
||||
[Privacy Badger][2] has been my go-to extension for quite some time. Do other content or ad blockers do a better job? Maybe. The problem with a lot of content blockers is that they are "pay for play," meaning they have "partners" that get whitelisted for a fee. That is the antithesis of why content blockers exist. Privacy Badger is made by the Electronic Frontier Foundation (EFF), a nonprofit entity with a donation-based business model. Privacy Badger promises to learn from your browsing habits and requires minimal tuning. For example, I have only had to whitelist a handful of sites. Privacy Badger also allows granular control over exactly which trackers are enabled on what sites. It's my #1, must-install extension, no matter the browser.
|
||||
|
||||
### DuckDuckGo Privacy Essentials
|
||||
|
||||
The search engine DuckDuckGo has typically been privacy-conscious. [DuckDuckGo Privacy Essentials][3] works across major mobile devices and browsers. It's unique in the sense that it grades sites based on the settings you give them. For example, Facebook gets a D, even with Privacy Protection enabled. Meanwhile, [chrisshort.net][4] gets a B with Privacy Protection enabled and a C with it disabled. If you're not keen on EFF or Privacy Badger for whatever reason, I would recommend DuckDuckGo Privacy Essentials (choose one, not both, as they essentially do the same thing).
|
||||
|
||||
### HTTPS Everywhere
|
||||
|
||||
[HTTPS Everywhere][5] is another extension from the EFF. According to HTTPS Everywhere, "Many sites on the web offer some limited support for encryption over HTTPS, but make it difficult to use. For instance, they may default to unencrypted HTTP or fill encrypted pages with links that go back to the unencrypted site. The HTTPS Everywhere extension fixes these problems by using clever technology to rewrite requests to these sites to HTTPS." While a lot of sites and browsers are getting better about implementing HTTPS, there are a lot of sites that still need help. HTTPS Everywhere will try its best to make sure your traffic is encrypted.
|
||||
|
||||
### NoScript Security Suite
|
||||
|
||||
[NoScript Security Suite][6] is not for the faint of heart. While the Firefox-only extension "allows JavaScript, Java, Flash, and other plugins to be executed only by trusted websites of your choice," it doesn't do a great job at figuring out what your choices are. But, make no mistake, a surefire way to prevent leaking data is not executing code that could leak it. NoScript enables that via its "whitelist-based preemptive script blocking." This means you will need to build the whitelist as you go for sites not already on it. Note that NoScript is only available for Firefox.
|
||||
|
||||
### Facebook Container
|
||||
|
||||
[Facebook Container][7] makes Firefox the only browser where I will use Facebook. "Facebook Container works by isolating your Facebook identity into a separate container that makes it harder for Facebook to track your visits to other websites with third-party cookies." This means Facebook cannot snoop on activity happening elsewhere in your browser. Suddenly those creepy ads will stop appearing so frequently (assuming you uninstalled the Facebook app from your mobile devices). Using Facebook in an isolated space will prevent any additional collection of data. Remember, you've given Facebook data already, and Facebook Container can't prevent that data from being shared.
|
||||
|
||||
These are my go-to extensions for browser privacy. What are yours? Please share them in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/firefox-extensions-protect-privacy
|
||||
|
||||
作者:[Chris Short][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/chrisshort
|
||||
[1]:https://www.facebook.com/help/211829542181913
|
||||
[2]:https://www.eff.org/privacybadger
|
||||
[3]:https://duckduckgo.com/app
|
||||
[4]:https://chrisshort.net
|
||||
[5]:https://www.eff.org/https-everywhere
|
||||
[6]:https://noscript.net/
|
||||
[7]:https://addons.mozilla.org/en-US/firefox/addon/facebook-container/
|
@ -0,0 +1,200 @@
|
||||
A sysadmin's guide to network management
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
|
||||
|
||||
If you're a sysadmin, your daily tasks include managing servers and the data center's network. The following Linux utilities and commands—from basic to advanced—will help make network management easier.
|
||||
|
||||
In several of these commands, you'll see `<fqdn>`, which stands for "fully qualified domain name." When you see this, substitute your website URL or your server (e.g., `server-name.company.com`), as the case may be.
|
||||
|
||||
### Ping
|
||||
|
||||
As the name suggests, `ping` is used to check the end-to-end connectivity from your system to the one you are trying to connect to. It uses [ICMP][1] echo packets that travel back to your system when a ping is successful. It's also a good first step to check system/network connectivity. You can use the `ping` command with IPv4 and IPv6 addresses. (Read my article "[How to find your IP address in Linux][2]" to learn more about IP addresses.)
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* IPv4: `ping <ip address>/<fqdn>`
|
||||
* IPv6: `ping6 <ip address>/<fqdn>`
|
||||
|
||||
|
||||
|
||||
You can also use `ping` to resolve names of websites to their corresponding IP address, as shown below:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/ping-screen-0.png)
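Because `ping` reports success or failure through its exit status, it is easy to script connectivity checks. Here is a sketch that pings each host in a list once (the hostnames are placeholders; substitute your own):

```shell
#!/bin/bash
# Ping each host once, with a 2-second timeout, and record the result
hosts=(localhost example.com)
results=()
for h in "${hosts[@]}"; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
        results+=("$h reachable")
    else
        results+=("$h unreachable")
    fi
done
printf '%s\n' "${results[@]}"
```

Note that the timeout flag differs between implementations (`-W` takes seconds on Linux but milliseconds on macOS).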
|
||||
|
||||
### Traceroute
|
||||
|
||||
This is a nice utility for tracing the full network path from your system to another. Where `ping` checks end-to-end connectivity, the `traceroute` utility tells you all the router IPs on the path you travel to reach the end system, website, or server. `traceroute` is usually the second step after `ping` for network connection debugging.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `traceroute <ip address>/<fqdn>`
|
||||
|
||||
|
||||
|
||||
### Telnet
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `telnet <ip address>/<fqdn>` is used to [telnet][3] into any server.
|
||||
|
||||
|
||||
|
||||
### Netstat
|
||||
|
||||
The network statistics (`netstat`) utility is used to troubleshoot network-connection problems and to check interface/port statistics, routing tables, protocol stats, etc. It's any sysadmin's must-have tool.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `netstat -l` shows the list of all the ports that are in listening mode.
|
||||
* `netstat -a` shows all ports; to specify only TCP, use `-at` (for UDP use `-au`).
|
||||
* `netstat -r` provides a routing table.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/netstat-r.png)
|
||||
|
||||
* `netstat -s` provides a summary of statistics for each protocol.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/netstat-s.png)
|
||||
|
||||
* `netstat -i` displays transmission/receive (TX/RX) packet statistics for each interface.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/netstat-i.png)
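Output like this is easy to post-process in a script. For example, here is a sketch that counts TCP sockets in the LISTEN state, falling back to `ss` (the modern replacement from iproute2) when `netstat` is not installed:

```shell
#!/bin/bash
# Count TCP sockets in LISTEN state, preferring netstat, falling back to ss
if command -v netstat > /dev/null 2>&1; then
    listeners=$(netstat -lnt 2>/dev/null | grep -c LISTEN)
else
    listeners=$(ss -lnt 2>/dev/null | tail -n +2 | wc -l)
fi
echo "TCP sockets listening: $listeners"
```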
|
||||
|
||||
### Nmcli
|
||||
|
||||
`nmcli` is a good utility for managing network connections, configurations, etc. It can be used to control Network Manager and modify any device's network configuration details.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `nmcli device` lists all devices on the system.
|
||||
|
||||
* `nmcli device show <interface>` shows network-related details of the specified interface.
|
||||
|
||||
* `nmcli connection` checks a device's connection.
|
||||
|
||||
* `nmcli connection down <interface>` shuts down the specified interface.
|
||||
|
||||
* `nmcli connection up <interface>` starts the specified interface.
|
||||
|
||||
* `nmcli con add type vlan con-name <connection-name> dev <interface> id <vlan-number> ipv4 <ip/cidr> gw4 <gateway-ip>` adds a virtual LAN (VLAN) interface with the specified VLAN number, IP address, and gateway to a particular interface.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/nmcli.png)
|
||||
|
||||
|
||||
### Routing
|
||||
|
||||
There are many commands you can use to check and configure routing. Here are some useful ones:
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `ip route` shows all the current routes configured for the respective interfaces.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/ip-route.png)
|
||||
|
||||
* `route add default gw <gateway-ip>` adds a default gateway to the routing table.
|
||||
* `route add -net <network ip/cidr> gw <gateway ip> <interface>` adds a new network route to the routing table. There are many other routing parameters, such as adding a default route, default gateway, etc.
|
||||
* `route del -net <network ip/cidr>` deletes a particular route entry from the routing table.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/route-add-del.png)
|
||||
|
||||
* `ip neighbor` shows the current neighbor table and can be used to add, change, or delete new neighbors.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/ip-neighbor.png)
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/ip-neigh-help.png)
|
||||
|
||||
* `arp` (which stands for address resolution protocol) is similar to `ip neighbor`. `arp` maps a system's IP address to its corresponding MAC (media access control) address.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/arp.png)
|
||||
|
||||
### Tcpdump and Wireshark
|
||||
|
||||
Linux provides many packet-capturing tools like `tcpdump`, `wireshark`, `tshark`, etc. They are used to capture network traffic in packets that are transmitted/received and hence are very useful for a sysadmin to debug any packet losses or related issues. For command-line enthusiasts, `tcpdump` is a great tool, and for GUI users, `wireshark` is a great utility to capture and analyze packets. `tcpdump` is a built-in Linux utility to capture network traffic. It can be used to capture/show traffic on specific ports, protocols, etc.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
  * `tcpdump -i <interface-name>` shows live packets from the specified interface. Packets can be saved in a file by adding the `-w` flag and the name of the output file to the command, for example: `tcpdump -w <output-file> -i <interface-name>`.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/tcpdump-i.png)
|
||||
|
||||
* `tcpdump -i <interface> src <source-ip>` captures packets from a particular source IP.
|
||||
* `tcpdump -i <interface> dst <destination-ip>` captures packets from a particular destination IP.
|
||||
* `tcpdump -i <interface> port <port-number>` captures traffic for a specific port number like 53, 80, 8080, etc.
|
||||
* `tcpdump -i <interface> <protocol>` captures traffic for a particular protocol, like TCP, UDP, etc.
|
||||
|
||||
|
||||
|
||||
### Iptables
|
||||
|
||||
`iptables` is a firewall-like packet-filtering utility that can allow or block certain traffic. The scope of this utility is very wide; here are some of its most common uses.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `iptables -L` lists all existing `iptables` rules.
|
||||
* `iptables -F` deletes all existing rules.
|
||||
|
||||
|
||||
|
||||
The following commands allow traffic from the specified port number to the specified interface:
|
||||
|
||||
  * `iptables -A INPUT -i <interface> -p tcp --dport <port-number> -m state --state NEW,ESTABLISHED -j ACCEPT`
  * `iptables -A OUTPUT -o <interface> -p tcp --sport <port-number> -m state --state ESTABLISHED -j ACCEPT`
|
||||
|
||||
|
||||
|
||||
The following commands allow loopback access to the system:
|
||||
|
||||
* `iptables -A INPUT -i lo -j ACCEPT`
|
||||
* `iptables -A OUTPUT -o lo -j ACCEPT`
|
||||
|
||||
|
||||
|
||||
### Nslookup
|
||||
|
||||
The `nslookup` tool is used to obtain IP address mapping of a website or domain. It can also be used to obtain information on your DNS server, such as all DNS records on a website (see the example below). A similar tool to `nslookup` is the `dig` (Domain Information Groper) utility.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `nslookup <website-name.com>` shows the IP address of your DNS server in the Server field, and, below that, gives the IP address of the website you are trying to reach.
|
||||
* `nslookup -type=any <website-name.com>` shows all the available records for the specified website/domain.
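`nslookup` output is verbose, so for scripting it is often easier to query the resolver with `getent`, which prints one address per line. A sketch of a tiny lookup helper (the `resolve` function name is our own):

```shell
#!/bin/bash
# resolve: print the first address a name resolves to, using getent
resolve() {
    getent hosts "$1" | awk '{ print $1; exit }'
}

addr=$(resolve localhost)
echo "localhost resolves to $addr"
```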
|
||||
|
||||
|
||||
|
||||
### Network/interface debugging
|
||||
|
||||
Here is a summary of the necessary commands and files used to troubleshoot interface connectivity or related network issues.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
* `ss` is a utility for dumping socket statistics.
|
||||
* `nmap <ip-address>`, which stands for Network Mapper, scans network ports, discovers hosts, detects MAC addresses, and much more.
|
||||
* `ip addr/ifconfig -a` provides IP addresses and related info on all the interfaces of a system.
|
||||
* `ssh -vvv user@<ip/domain>` enables you to SSH to another server with the specified IP/domain and username. The `-vvv` flag provides "triple-verbose" details of the processes going on while SSH'ing to the server.
|
||||
* `ethtool -S <interface>` checks the statistics for a particular interface.
|
||||
* `ifup <interface>` starts up the specified interface.
|
||||
* `ifdown <interface>` shuts down the specified interface.
|
||||
* `systemctl restart network` restarts a network service for the system.
|
||||
  * `/etc/sysconfig/network-scripts/ifcfg-<interface-name>` is an interface configuration file used to set IP, network, gateway, etc. for the specified interface. DHCP mode can be set here.
|
||||
  * `/etc/hosts` contains custom host/domain-to-IP mappings.
|
||||
* `/etc/resolv.conf` specifies the DNS nameserver IP of the system.
|
||||
* `/etc/ntp.conf` specifies the NTP server domain.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/sysadmin-guide-networking-commands
|
||||
|
||||
作者:[Archit Modi][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/architmodi
|
||||
[1]:https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol
|
||||
[2]:https://opensource.com/article/18/5/how-find-ip-address-linux
|
||||
[3]:https://en.wikipedia.org/wiki/Telnet
|
@ -0,0 +1,101 @@
|
||||
Anbox: How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way
|
||||
======
|
||||
**[Anbox][1], or Android in a Box, is a free and open source tool that allows running Android applications on Linux.** It works by running the Android runtime environment in an LXC container, recreating the directory structure of Android as a mountable loop image, while using the native Linux kernel to execute applications.
|
||||
|
||||
Its key features are security, performance, integration and convergence (scales across different form factors), according to its website.
|
||||
|
||||
**Using Anbox, each Android application or game is launched in a separate window, just like a system application**, and it behaves more or less like a regular window: it shows up in the launcher, can be tiled, and so on.
|
||||
|
||||
By default, Anbox doesn't ship with the Google Play Store or support for ARM applications. To install applications you must download each app APK and install it manually using adb. Also, installing ARM applications or games doesn't work by default with Anbox - trying to install ARM apps results in the following error being displayed:
|
||||
```
Failed to install PACKAGE.NAME.apk: Failure [INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113]
```
|
||||
|
||||
You can set up both Google Play Store and support for ARM applications (through libhoudini) manually for Android in a Box, but it's a quite complicated process. **To make it easier to install Google Play Store and Google Play Services on Anbox, and get it to support ARM applications and games (using libhoudini), the folks at[geeks-r-us.de][2] (linked article is in German) have created a [script][3] that automates these tasks.**
|
||||
|
||||
Before using this, I'd like to make it clear that not all Android applications and games work in Anbox, even after integrating libhoudini for ARM support. Some Android applications and games may not show up in the Google Play Store at all, while others may be available for installation but will not work. Also, some features may not be available in some applications.
|
||||
|
||||
### Install Google Play Store and enable ARM applications / games support on Anbox (Android in a Box)
|
||||
|
||||
These instructions will obviously not work if Anbox is not already installed on your Linux desktop. If you haven't already, install Anbox by following the [installation instructions][5]. Also, make sure you run `anbox.appmgr` at least once after installing Anbox and before using this script, to avoid running into issues.
|
||||
|
||||
1\. Install the required dependencies (`wget`, `lzip`, `unzip` and `squashfs-tools`).
|
||||
|
||||
In Debian, Ubuntu or Linux Mint, use this command to install the required dependencies:
|
||||
```
sudo apt install wget lzip unzip squashfs-tools
```
|
||||
|
||||
2\. Download and run the script that automatically downloads and installs Google Play Store (and Google Play Services) and libhoudini (for ARM apps / games support) on your Android in a Box installation.
|
||||
|
||||
**Warning: never run a script you didn't write without knowing what it does. Before running this script, check out its [code][4].**
|
||||
|
||||
To download the script, make it executable and run it on your Linux desktop, use these commands in a terminal:
|
||||
```
wget https://raw.githubusercontent.com/geeks-r-us/anbox-playstore-installer/master/install-playstore.sh
chmod +x install-playstore.sh
sudo ./install-playstore.sh
```
|
||||
|
||||
3\. To get Google Play Store to work in Anbox, you need to enable all the permissions for both Google Play Store and Google Play Services
|
||||
|
||||
To do this, run Anbox:
|
||||
```
anbox.appmgr
```
|
||||
|
||||
Then go to `Settings > Apps > Google Play Services > Permissions` and enable all available permissions. Do the same for Google Play Store!
|
||||
|
||||
You should now be able to login using a Google account into Google Play Store.
|
||||
|
||||
Without enabling all permissions for Google Play Store and Google Play Services, you may encounter an issue when trying to login to your Google account, with the following error message: " _Couldn't sign in. There was a problem communicating with Google servers. Try again later_ ", as you can see in this screenshot:
|
||||
|
||||
After logging in, you can disable some of the Google Play Store / Google Play Services permissions.
|
||||
|
||||
**If you're encountering some connectivity issues when logging in to your Google account on Anbox,** make sure `anbox-bridge.sh` is running:
|
||||
|
||||
* to start it:
|
||||
|
||||
|
||||
```
sudo /snap/anbox/current/bin/anbox-bridge.sh start
```
|
||||
|
||||
* to restart it:
|
||||
|
||||
|
||||
```
sudo /snap/anbox/current/bin/anbox-bridge.sh restart
```
|
||||
|
||||
You may also need to install the dnsmasq package if you continue to have connectivity issues with Anbox, according to [this Anbox issue comment][6].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/07/anbox-how-to-install-google-play-store.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://anbox.io/
|
||||
[2]:https://geeks-r-us.de/2017/08/26/android-apps-auf-dem-linux-desktop/
|
||||
[3]:https://github.com/geeks-r-us/anbox-playstore-installer/
|
||||
[4]:https://github.com/geeks-r-us/anbox-playstore-installer/blob/master/install-playstore.sh
|
||||
[5]:https://docs.anbox.io/userguide/install.html
|
||||
[6]:https://github.com/anbox/anbox/issues/118#issuecomment-295270113
|
@ -0,0 +1,68 @@
|
||||
Boost your typing with emoji in Fedora 28 Workstation
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/07/emoji-typing-816x345.jpg)
|
||||
|
||||
Fedora 28 Workstation ships with a feature that allows you to quickly search, select and input emoji using your keyboard. Emoji, cute ideograms that are part of Unicode, are used fairly widely in messaging and especially on mobile devices. You may have heard the idiom “A picture is worth a thousand words.” This is exactly what emoji provide: simple images for you to use in communication. Each release of Unicode adds more, with over 200 new ones added in past releases of Unicode. This article shows you how to make them easy to use in your Fedora system.
|
||||
|
||||
It’s great to see the number of emoji growing. But at the same time, it brings the challenge of how to input them on a computing device. Many people already use these symbols for input in mobile devices or social networking sites.
|
||||
|
||||
[**Editors’ note:** This article is an update to a previously published piece on this topic.]
|
||||
|
||||
### Enabling Emoji input on Fedora 28 Workstation
|
||||
|
||||
The new emoji input method ships by default in Fedora 28 Workstation. To use it, you must enable it using the Region and Language settings dialog. Open the Region and Language dialog from the main Fedora Workstation settings, or search for it in the Overview.
|
||||
|
||||
[![Region & Language settings tool][1]][2]
|
||||
|
||||
Choose the + control to add an input source. The following dialog appears:
|
||||
|
||||
[![Adding an input source][3]][4]
|
||||
|
||||
Choose the final option (three dots) to expand the selections fully. Then, find Other at the bottom of the list and select it:
|
||||
|
||||
[![Selecting other input sources][5]][6]
|
||||
|
||||
In the next dialog, find the Typing booster choice and select it:
|
||||
|
||||
[![][7]][8]
|
||||
|
||||
This advanced input method is powered behind the scenes by iBus. The advanced input methods are identifiable in the list by the cogs icon on the right of the list.
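The Typing Booster engine comes from the ibus-typing-booster package, which should already be present on a stock Fedora 28 Workstation install. If the entry is missing from the list, a sketch of how you might check for and install it from a terminal (assuming the default dnf package manager):

```shell
# Check whether the Typing Booster iBus engine is installed
rpm -q ibus-typing-booster

# Install it if it is missing (requires administrative privileges)
sudo dnf install ibus-typing-booster
```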
|
||||
|
||||
The Input Method drop-down automatically appears in the GNOME Shell top bar. Ensure your default method — in this example, English (US) — is selected as the current method, and you’ll be ready to input.
|
||||
|
||||
[![Input method dropdown in Shell top bar][9]][10]
|
||||
|
||||
### Using the new Emoji input method
|
||||
|
||||
Now the Emoji input method is enabled, search for emoji by pressing the keyboard shortcut **Ctrl+Shift+E**. A pop-over dialog appears where you can type a search term, such as smile, to find matching symbols.
|
||||
|
||||
[![Searching for smile emoji][11]][12]
|
||||
|
||||
Use the arrow keys to navigate the list. Then, hit **Enter** to make your selection, and the glyph will be placed as input.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/boost-typing-emoji-fedora-28-workstation/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/pfrields/
|
||||
[1]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41-1024x718.png
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-02-41.png
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46-1024x839.png
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-33-46.png
|
||||
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15-1024x839.png
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-15.png
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41-1024x839.png
|
||||
[8]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-34-41.png
|
||||
[9]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24-300x244.png
|
||||
[10]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-15-05-24.png
|
||||
[11]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31-290x300.png
|
||||
[12]:https://fedoramagazine.org/wp-content/uploads/2018/07/Screenshot-from-2018-07-08-14-36-31.png
|
@ -0,0 +1,225 @@
|
||||
How To Configure SSH Key-based Authentication In Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/01/Configure-SSH-Key-based-Authentication-In-Linux-720x340.png)
|
||||
|
||||
### What is SSH Key-based authentication?
|
||||
|
||||
As we all know, **Secure Shell**, or **SSH**, is a cryptographic network protocol that allows you to securely communicate with or access a remote system over an unsecured network, such as the Internet. Whenever you send data over an unsecured network using SSH, it is automatically encrypted on the source system and decrypted on the destination system. SSH provides four authentication methods, namely **password-based authentication**, **key-based authentication**, **host-based authentication**, and **keyboard-interactive authentication**. The most commonly used methods are password-based and key-based authentication.
|
||||
|
||||
In password-based authentication, all you need is the password of the remote system’s user. If you know the remote user’s password, you can access that system with `ssh user@remote-host`. On the other hand, in key-based authentication, you need to generate an SSH key pair and upload the SSH public key to the remote system in order to communicate with it via SSH. Each SSH key pair consists of a private key and a public key. The private key should be kept on the client system, and the public key should be uploaded to the remote systems. You shouldn’t disclose the private key to anyone. Hopefully you now have a basic idea of SSH and its authentication methods.
|
||||
|
||||
In this tutorial, we will be discussing how to configure SSH key-based authentication in Linux.
|
||||
|
||||
### Configure SSH Key-based Authentication In Linux
|
||||
|
||||
For the purpose of this guide, I will be using an Arch Linux system as the local system and an Ubuntu 18.04 LTS server as the remote system.
|
||||
|
||||
Local system details:
|
||||
|
||||
* **OS** : Arch Linux Desktop
|
||||
* **IP address** : 192.168.225.37 /24
|
||||
|
||||
|
||||
|
||||
Remote system details:
|
||||
|
||||
* **OS** : Ubuntu 18.04 LTS Server
|
||||
* **IP address** : 192.168.225.22/24
|
||||
|
||||
|
||||
|
||||
### Local system configuration
|
||||
|
||||
Like I said already, in the SSH key-based authentication method, the public key should be uploaded to the remote system that you want to access via SSH. The public keys will usually be stored in a file called **~/.ssh/authorized_keys** on the remote SSH systems.
|
||||
|
||||
**Important note:** Do not generate key pairs as **root**, as only root would be able to use those keys. Create key pairs as a normal user.
|
||||
|
||||
Now, let us create the SSH key pair in the local system. To do so, run the following command in your client system.
|
||||
```
|
||||
$ ssh-keygen
|
||||
|
||||
```
|
||||
|
||||
The above command will create a 2048-bit RSA key pair. Enter the passphrase twice. More importantly, remember your passphrase. You’ll need it later.
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/sk/.ssh/id_rsa):
|
||||
Enter passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved in /home/sk/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/sk/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
SHA256:wYOgvdkBgMFydTMCUI3qZaUxvjs+p2287Tn4uaZ5KyE [email protected]
|
||||
The key's randomart image is:
|
||||
+---[RSA 2048]----+
|
||||
|+=+*= + |
|
||||
|o.o=.* = |
|
||||
|.oo * o + |
|
||||
|. = + . o |
|
||||
|. o + . S |
|
||||
| . E . |
|
||||
| + o |
|
||||
| +.*o+o |
|
||||
| .o*=OO+ |
|
||||
+----[SHA256]-----+
|
||||
|
||||
```
|
||||
|
||||
In case you have already created a key pair, you will see the following message. Just type “y” to overwrite the existing key.
|
||||
```
|
||||
/home/username/.ssh/id_rsa already exists.
|
||||
Overwrite (y/n)?
|
||||
|
||||
```
|
||||
|
||||
Please note that the **passphrase is optional**. If you give one, you’ll be asked to enter it every time you SSH to a remote system, unless you are using an SSH agent to store the passphrase. If you don’t want a passphrase (not safe, though), simply press ENTER twice when asked for one. However, we recommend using a passphrase. Using a password-less SSH key is generally not a good idea from a security point of view. Password-less keys should be limited to very specific cases, such as services that have to access a remote system without user intervention (e.g. remote backups with rsync, …).
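If you do use a passphrase, an SSH agent can cache the decrypted key for the duration of your session, so you only type the passphrase once. A minimal sketch, assuming the default key path from ssh-keygen (adjust if yours differs):

```shell
# Start an agent for the current shell session
eval "$(ssh-agent -s)"

# Add the key; ssh-add asks for the passphrase once and caches the key
ssh-add ~/.ssh/id_rsa

# List the identities the agent currently holds
ssh-add -l
```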
|
||||
|
||||
If you already have an SSH key without a passphrase in the private key file **~/.ssh/id_rsa** and want to protect the key with a passphrase, use the following command:
|
||||
```
|
||||
$ ssh-keygen -p -f ~/.ssh/id_rsa
|
||||
|
||||
```
|
||||
|
||||
Sample output:
|
||||
```
|
||||
Enter new passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved with the new passphrase.
|
||||
|
||||
```
|
||||
|
||||
We have now created the key pair on the local system. Next, copy the SSH public key to your remote SSH server with `ssh-copy-id user@remote-host` (substitute your remote user name and address).
|
||||
|
||||
Here, I will be copying the local (Arch Linux) system’s public key to the remote system (Ubuntu 18.04 LTS in my case). Technically speaking, the above command will copy the contents of the local system’s **~/.ssh/id_rsa.pub** file into the remote system’s **~/.ssh/authorized_keys** file. Clear? Good.
|
||||
|
||||
Type **yes** to continue connecting to your remote SSH server, and then enter the remote user’s password.
|
||||
```
|
||||
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
|
||||
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
|
||||
sk@192.168.225.22's password:
|
||||
|
||||
Number of key(s) added: 1
|
||||
|
||||
Now try logging into the machine, with: "ssh 'sk@192.168.225.22'"
|
||||
and check to make sure that only the key(s) you wanted were added.
|
||||
|
||||
```
|
||||
|
||||
If you have already copied the key but want to update it with a new passphrase, use the **-f** option to overwrite the existing key, for example `ssh-copy-id -f -i ~/.ssh/id_rsa.pub user@remote-host`.
|
||||
|
||||
We have now successfully added the local system’s SSH public key to the remote system. Next, let us disable password-based authentication completely on the remote system. Since we have configured key-based authentication, we don’t need password-based authentication anymore.
|
||||
|
||||
### Disable SSH Password-based authentication in remote system
|
||||
|
||||
You need to perform the following commands as root or as a sudo user.
|
||||
|
||||
To disable password-based authentication, go to your remote system’s console and edit **/etc/ssh/sshd_config** configuration file using any editor:
|
||||
```
|
||||
$ sudo vi /etc/ssh/sshd_config
|
||||
|
||||
```
|
||||
|
||||
Find the following line, uncomment it, and set its value to **no**:
|
||||
```
|
||||
PasswordAuthentication no
|
||||
|
||||
```
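If you prefer not to open an editor, the same change can be made non-interactively. This is a sketch that assumes the stock commented-out default line and keeps a backup of the original file:

```shell
# Force password authentication off, keeping a backup as sshd_config.bak
sudo sed -i.bak -E 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Ask sshd to validate the configuration before restarting it
sudo sshd -t
```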
|
||||
|
||||
Restart the ssh service for the changes to take effect.
|
||||
```
|
||||
$ sudo systemctl restart sshd
|
||||
|
||||
```
|
||||
|
||||
### Access Remote system from local system
|
||||
|
||||
Go to your local system and SSH into your remote server, for example with `ssh sk@192.168.225.22`.
|
||||
|
||||
Enter the passphrase.
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Enter passphrase for key '/home/sk/.ssh/id_rsa':
|
||||
Last login: Mon Jul 9 09:59:51 2018 from 192.168.225.37
|
||||
[email protected]:~$
|
||||
|
||||
```
|
||||
|
||||
Now you’ll be able to SSH into your remote system. As you noticed, we logged in to the remote system’s account using the passphrase we created earlier with the **ssh-keygen** command, not the actual account’s password.
|
||||
|
||||
If you try to SSH from another client system, you will get the following error message. For example, here I tried to SSH into my Ubuntu system from my CentOS system:
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
The authenticity of host '192.168.225.22 (192.168.225.22)' can't be established.
|
||||
ECDSA key fingerprint is 67:fc:69:b7:d4:4d:fd:6e:38:44:a8:2f:08:ed:f4:21.
|
||||
Are you sure you want to continue connecting (yes/no)? yes
|
||||
Warning: Permanently added '192.168.225.22' (ECDSA) to the list of known hosts.
|
||||
Permission denied (publickey).
|
||||
|
||||
```
|
||||
|
||||
As you can see in the above output, I can’t SSH into my remote Ubuntu 18.04 system from any other system except the Arch Linux system whose key we uploaded.
|
||||
|
||||
### Adding more client systems' keys to the SSH server
|
||||
|
||||
This is very important. As I said already, you can’t access the remote system via SSH from any client except the one whose key you configured (in our case, the Arch Linux system). I want to give more clients permission to access the remote SSH server. What should I do? Simple. You need to generate an SSH key pair on each of your client systems and copy the SSH public key manually to the remote server that you want to access via SSH.
|
||||
|
||||
To create SSH key pair on your client system’s, run:
|
||||
```
|
||||
$ ssh-keygen
|
||||
|
||||
```
|
||||
|
||||
Enter the passphrase twice. Now the SSH key pair is generated. You need to copy the public SSH key (not the private key) to your remote server manually.
|
||||
|
||||
Display the pub key using command:
|
||||
```
|
||||
$ cat ~/.ssh/id_rsa.pub
|
||||
|
||||
```
|
||||
|
||||
You should see output something like below.
|
||||
|
||||
Copy the entire contents (via USB drive or any medium) and go to your remote server’s console. Create a directory called **.ssh** in the home directory of the user you will log in as, as shown below.
|
||||
```
|
||||
$ mkdir -p ~/.ssh
|
||||
|
||||
```
|
||||
|
||||
Now, append your client system’s public key, which you generated in the previous step, to the **~/.ssh/authorized_keys** file:
|
||||
```
|
||||
echo {Your_public_key_contents_here} >> ~/.ssh/authorized_keys
|
||||
|
||||
```
|
||||
|
||||
Restart ssh service on the remote system. Now, you’ll be able to SSH to your server from the new client.
|
||||
|
||||
If manually adding the SSH public key seems difficult, temporarily enable password-based authentication on the remote system, copy the key using the “ssh-copy-id” command from your local system, and then disable password-based authentication again.
|
||||
|
||||
|
||||
|
||||
And, that’s all for now. SSH key-based authentication provides an extra layer of protection against brute-force attacks. As you can see, configuring key-based authentication is not that difficult either. It is one of the recommended methods for keeping your Linux servers safe and secure.
|
||||
|
||||
I will be here soon with another useful article. Until then, stay tuned with OSTechNix.
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/cdn-cgi/l/email-protection
|
@ -0,0 +1,43 @@
|
||||
Malware Found On The Arch User Repository (AUR)
|
||||
======
|
||||
|
||||
On July 7, an AUR package was modified with some malicious code, reminding [Arch Linux][1] users (and Linux users in general) that all user-generated packages should be checked (when possible) before installation.
|
||||
|
||||
[AUR][3] , or the Arch (Linux) User Repository contains package descriptions, also known as PKGBUILDs, which make compiling packages from source easier. While these packages are very useful, they should never be treated as safe, and users should always check their contents before using them, when possible. After all, the AUR webpage states in bold that "AUR packages are user produced content. Any use of the provided files is at your own risk."
|
||||
|
||||
The [discovery][4] of an AUR package containing malicious code proves this. [acroread][5] was modified on July 7 (it appears it was previously "orphaned", meaning it had no maintainer) by a user named "xeactor" to include a `curl` command that downloaded a script from a pastebin. That script then downloaded another script and installed a systemd unit to run it periodically.
|
||||
|
||||
**It appears [two other][2] AUR packages were modified in the same way. All the offending packages were removed, and the user account (which was registered on the same day those packages were updated) that was used to upload them was suspended.**
|
||||
|
||||
The malicious code didn't do anything truly harmful - it only tried to upload some system information, like the machine ID, the output of `uname -a` (which includes the kernel version, architecture, etc.), CPU information, pacman information, and the output of `systemctl list-units` (which lists systemd units information) to pastebin.com. I'm saying "tried" because no system information was actually uploaded due to an error in the second script (the upload function is called "upload", but the script tried to call it using a different name, "uploader").
|
||||
|
||||
Also, the person adding these malicious scripts to the AUR left their personal Pastebin API key in the script in cleartext, proving once again that they don't know exactly what they are doing.
|
||||
|
||||
The purpose for trying to upload this information to Pastebin is not clear, especially since much more sensitive data could have been uploaded, like GPG / SSH keys.
|
||||
|
||||
**Update:** Reddit user u/xanaxdroid_ [mentions][6] that the same user named "xeactor" also had some cryptocurrency mining packages posted, so he speculates that "xeactor" was probably planning on adding some hidden cryptocurrency mining software to AUR (this was also the case with some Ubuntu Snap packages [two months ago][7]). That's why "xeactor" was probably trying to obtain various system information. All the packages uploaded by this AUR user have been removed so I cannot check this.
|
||||
|
||||
**Another update:**
|
||||
|
||||
What exactly should you check in user-generated packages such as those found in the AUR? This varies, and I can't tell you exactly, but you can start by looking for anything that tries to download something using `curl`, `wget`, or similar tools, and see what exactly it is attempting to download. Also check the server from which the package source is downloaded and make sure it's the official source. Unfortunately, this is not an exact 'science'. For Launchpad PPAs, for example, things get more complicated, as you must know how Debian packaging works, and the source can be altered directly, since it's hosted in the PPA and uploaded by the user. It gets even more complicated with Snap packages, because you cannot check such packages before installation (as far as I know). In these latter cases, and as a generic solution, I guess you should only install user-generated packages if you trust the uploader / packager.
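As a concrete starting point for the AUR specifically, you can fetch a package's build files and look them over before running makepkg. A rough sketch ("some-package" is just a placeholder name):

```shell
# Fetch a package's build files without building anything
git clone https://aur.archlinux.org/some-package.git
cd some-package

# Read the whole PKGBUILD, then flag lines that fetch remote content
less PKGBUILD
grep -En 'curl|wget|https?://' PKGBUILD
```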
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/07/malware-found-on-arch-user-repository.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://www.archlinux.org/
|
||||
[2]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034153.html
|
||||
[3]:https://aur.archlinux.org/
|
||||
[4]:https://lists.archlinux.org/pipermail/aur-general/2018-July/034152.html
|
||||
[5]:https://aur.archlinux.org/cgit/aur.git/commit/?h=acroread&id=b3fec9f2f16703c2dae9e793f75ad6e0d98509bc
|
||||
[6]:https://www.reddit.com/r/archlinux/comments/8x0p5z/reminder_to_always_read_your_pkgbuilds/e21iugg/
|
||||
[7]:https://www.linuxuprising.com/2018/05/malware-found-in-ubuntu-snap-store.html
|
@ -0,0 +1,74 @@
|
||||
15 open source applications for MacOS
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_blue.png?itok=IfckxN48)
|
||||
|
||||
I use open source tools whenever and wherever I can. I returned to college a while ago to earn a master's degree in educational leadership. Even though I switched from my favorite Linux laptop to a MacBook Pro (since I wasn't sure Linux would be accepted on campus), I decided I would keep using my favorite tools, even on MacOS, as much as I could.
|
||||
|
||||
Fortunately, it was easy, and no professor ever questioned what software I used. Even so, I couldn't keep a secret.
|
||||
|
||||
I knew some of my classmates would eventually assume leadership positions in school districts, so I shared information about the open source applications described below with many of my MacOS or Windows-using classmates. After all, open source software is really about freedom and goodwill. I also wanted them to know that it would be easy to provide their students with world-class applications at little cost. Most of them were surprised and amazed because, as we all know, open source software doesn't have a marketing team except users like you and me.
|
||||
|
||||
### My MacOS learning curve
|
||||
|
||||
Through this process, I learned some of the nuances of MacOS. While most of the open source tools worked as I was used to, others required different installation methods. Tools like [yum][1], [DNF][2], and [APT][3] do not exist in the MacOS world—and I really missed them.
|
||||
|
||||
Some MacOS applications required dependencies and installations that were more difficult than what I was accustomed to with Linux. Nonetheless, I persisted. In the process, I learned how I could keep the best software on my new platform. Even much of MacOS's core is [open source][4].
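One way to get a yum/DNF/APT-like workflow on MacOS — not mentioned above, so treat this as a suggestion rather than part of the author's setup — is the open source Homebrew package manager (brew.sh). Assuming it is installed:

```shell
# Homebrew gives MacOS a dnf/apt-like workflow for open source tools
brew install wget                  # a command-line formula
brew install --cask libreoffice    # a GUI application
```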
|
||||
|
||||
Also, my Linux background made it easy to get comfortable with the MacOS command line. I still use it to create and copy files, add users, and use other [utilities][5] like cat, tac, more, less, and tail.
|
||||
|
||||
### 15 great open source applications for MacOS
|
||||
|
||||
* The college required that I submit most of my work electronically in DOCX format, and I did that easily, first with [OpenOffice][6] and later using [LibreOffice][7] to produce my papers.
|
||||
* When I needed to produce graphics for presentations, I used my favorite graphics applications, [GIMP][8] and [Inkscape][9].
|
||||
* My favorite podcast creation tool is [Audacity][10]. It's much simpler to use than the proprietary application that ships with the Mac. I use it to record interviews and create soundtracks for video presentations.
|
||||
* I discovered early on that I could use the [VideoLan][11] (VLC) media player on MacOS.
|
||||
* MacOS's built-in proprietary video creation tool is a good product, but you can easily install and use [OpenShot][12], which is a great content creation tool.
|
||||
* When I need to analyze networks for my clients, I use the easy-to-install [Nmap][13] (Network Mapper) and [Wireshark][14] tools on my Mac.
|
||||
* I use [VirtualBox][15] for MacOS to demonstrate Raspbian, Fedora, Ubuntu, and other Linux distributions, as well as Moodle, WordPress, Drupal, and Koha when I provide training for librarians and other educators.
|
||||
* I make boot drives on my MacBook using [Etcher.io][16]. I just download the ISO file and burn it on a USB stick drive.
|
||||
* I think [Firefox][17] is easier and more secure to use than the proprietary browser that comes with the MacBook Pro, and it allows me to synchronize my bookmarks across operating systems.
|
||||
* When it comes to eBook readers, [Calibre][18] cannot be beaten. It is easy to download and install, and you can even configure it for a [classroom eBook server][19] with a few clicks.
|
||||
* Recently I have been teaching Python to middle school students, and I have found it is easy to download and install Python 3 and the IDLE3 editor from [Python.org][20]. I have also enjoyed learning about data science and sharing that with students. Whether you're interested in Python or R, I recommend you download and [install][21] the [Anaconda distribution][22]. It contains the great IPython editor, RStudio, Jupyter Notebooks, and JupyterLab, along with some other applications.
|
||||
* [HandBrake][23] is a great way to turn your old home video DVDs into MP4s, which you can share on YouTube, Vimeo, or your own [Kodi][24] server on MacOS.
|
||||
|
||||
|
||||
|
||||
Now it's your turn: What open source software are you using on MacOS (or Windows)? Share your favorites in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/open-source-tools-macos
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/don-watkins
|
||||
[1]:https://en.wikipedia.org/wiki/Yum_(software)
|
||||
[2]:https://en.wikipedia.org/wiki/DNF_(software)
|
||||
[3]:https://en.wikipedia.org/wiki/APT_(Debian)
|
||||
[4]:https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/OSX_Technology_Overview/SystemTechnology/SystemTechnology.html
|
||||
[5]:https://www.gnu.org/software/coreutils/coreutils.html
|
||||
[6]:https://www.openoffice.org/
|
||||
[7]:https://www.libreoffice.org/
|
||||
[8]:https://www.gimp.org/
|
||||
[9]:https://inkscape.org/en/
|
||||
[10]:https://www.audacityteam.org/
|
||||
[11]:https://www.videolan.org/index.html
|
||||
[12]:https://www.openshot.org/
|
||||
[13]:https://nmap.org/
|
||||
[14]:https://www.wireshark.org/
|
||||
[15]:https://www.virtualbox.org/
|
||||
[16]:https://etcher.io/
|
||||
[17]:https://www.mozilla.org/en-US/firefox/new/
|
||||
[18]:https://calibre-ebook.com/
|
||||
[19]:https://opensource.com/article/17/6/raspberrypi-ebook-server
|
||||
[20]:https://www.python.org/downloads/release/python-370/
|
||||
[21]:https://opensource.com/article/18/4/getting-started-anaconda-python
|
||||
[22]:https://www.anaconda.com/download/#macos
|
||||
[23]:https://handbrake.fr/
|
||||
[24]:https://kodi.tv/download
|
@ -0,0 +1,92 @@
|
||||
6 open source cryptocurrency wallets
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cash_register.jpg?itok=7NKVKuPa)
|
||||
|
||||
Without crypto wallets, cryptocurrencies like Bitcoin and Ethereum would just be another pie-in-the-sky idea. These wallets are essential for keeping, sending, and receiving cryptocurrencies.
|
||||
|
||||
The revolutionary growth of [cryptocurrencies][1] is attributed to the idea of decentralization, where a central authority is absent from the network and everyone has a level playing field. Open source technology is at the heart of cryptocurrencies and [blockchain][2] networks. It has enabled the vibrant, nascent industry to reap the benefits of decentralization—such as immutability, transparency, and security.
|
||||
|
||||
If you're looking for a free and open source cryptocurrency wallet, read on to start exploring whether any of the following options meet your needs.
|
||||
|
||||
### 1\. Copay
|
||||
|
||||
[Copay][3] is an open source Bitcoin crypto wallet that promises convenient storage. The software is released under the [MIT License][4].
|
||||
|
||||
The Copay server is also open source. Therefore, developers and Bitcoin enthusiasts can assume complete control of their activities by deploying their own applications on the server.
|
||||
|
||||
The Copay wallet empowers you to take the security of your Bitcoin in your own hands, instead of trusting unreliable third parties. It allows you to use multiple signatories for approving transactions and supports the storage of multiple, separate wallets within the same app.
|
||||
|
||||
Copay is available for a range of platforms, such as Android, Windows, MacOS, Linux, and iOS.
|
||||
|
||||
### 2\. MyEtherWallet
|
||||
|
||||
As the name implies, [MyEtherWallet][5] (abbreviated MEW) is a wallet for Ethereum transactions. It is open source (under the [MIT License][6]) and is completely online, accessible through a web browser.
|
||||
|
||||
The wallet has a simple client-side interface, which allows you to participate in the Ethereum blockchain confidently and securely.
|
||||
|
||||
### 3\. mSIGNA
|
||||
|
||||
[mSIGNA][7] is a powerful desktop application for completing transactions on the Bitcoin network. It is released under the [MIT License][8] and is available for MacOS, Windows, and Linux.
|
||||
|
||||
The blockchain wallet provides you with complete control over your Bitcoin stash. Some of its features include user-friendliness, versatility, decentralized offline key generation capabilities, encrypted data backups, and multi-device synchronization.
|
||||
|
||||
### 4\. Armory
|
||||
|
||||
[Armory][9] is an open source wallet (released under the [GNU AGPLv3][10]) for producing and keeping Bitcoin private keys on your computer. It enhances security by providing users with cold storage and multi-signature support capabilities.
|
||||
|
||||
With Armory, you can set up a wallet on a computer that is completely offline; you'll use the watch-only feature for observing your Bitcoin details on the internet, which improves security. The wallet also allows you to create multiple addresses and use them to complete different transactions.
|
||||
|
||||
Armory is available for MacOS, Windows, and several flavors of Linux (including Raspberry Pi).
|
||||
|
||||
### 5\. Electrum
|
||||
|
||||
[Electrum][11] is a Bitcoin wallet that navigates the thin line between beginner user-friendliness and expert functionality. The open source wallet is released under the [MIT License][12].
|
||||
|
||||
Electrum encrypts your private keys locally, supports cold storage, and provides multi-signature capabilities with minimal resource usage on your machine.
|
||||
|
||||
It is available for a wide range of operating systems and devices, including Windows, MacOS, Android, iOS, and Linux, and hardware wallets such as [Trezor][13].
|
||||
|
||||
### 6\. Etherwall
|
||||
|
||||
[Etherwall][14] is the first wallet for storing and sending Ethereum on the desktop. The open source wallet is released under the [GPLv3 License][15].
|
||||
|
||||
Etherwall is intuitive and fast. What's more, to enhance the security of your private keys, you can operate it on a full node or a thin node. Running it as a full-node client will enable you to download the whole Ethereum blockchain on your local machine.
|
||||
|
||||
Etherwall is available for MacOS, Linux, and Windows, and it also supports the Trezor hardware wallet.
|
||||
|
||||
### Words to the wise
|
||||
|
||||
Open source and free crypto wallets are playing a vital role in making cryptocurrencies easily available to more people.
|
||||
|
||||
Before using any digital currency software wallet, make sure to do your due diligence to protect your security, and always remember to comply with best practices for safeguarding your finances.
|
||||
|
||||
If your favorite open source cryptocurrency wallet is not on this list, please share what you know in the comment section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/crypto-wallets
|
||||
|
||||
作者:[Dr.Michael J.Garbade][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/drmjg
|
||||
[1]:https://www.liveedu.tv/guides/cryptocurrency/
|
||||
[2]:https://opensource.com/tags/blockchain
|
||||
[3]:https://copay.io/
|
||||
[4]:https://github.com/bitpay/copay/blob/master/LICENSE
|
||||
[5]:https://www.myetherwallet.com/
|
||||
[6]:https://github.com/kvhnuke/etherwallet/blob/mercury/LICENSE.md
|
||||
[7]:https://ciphrex.com/
|
||||
[8]:https://github.com/ciphrex/mSIGNA/blob/master/LICENSE
|
||||
[9]:https://www.bitcoinarmory.com/
|
||||
[10]:https://github.com/etotheipi/BitcoinArmory/blob/master/LICENSE
|
||||
[11]:https://electrum.org/#home
|
||||
[12]:https://github.com/spesmilo/electrum/blob/master/LICENCE
|
||||
[13]:https://trezor.io/
|
||||
[14]:https://www.etherwall.com/
|
||||
[15]:https://github.com/almindor/etherwall/blob/master/LICENSE
|
@ -0,0 +1,117 @@
|
||||
Display Weather Forecast In Your Terminal With Wttr.in
|
||||
======
|
||||
**[wttr.in][1] is a feature-packed weather forecast service that supports displaying the weather from the command line**. It can automatically detect your location (based on your IP address), supports specifying the location or searching for a geographical location (like a site in a city, a mountain and so on), and much more. Oh, and **you don't have to install it - all you need to use it is cURL or Wget** (see below).
|
||||
|
||||
wttr.in features include:
|
||||
|
||||
* **displays the current weather as well as a 3-day weather forecast, split into morning, noon, evening and night** (includes temperature range, wind speed and direction, viewing distance, precipitation amount and probability)
|
||||
|
||||
* **can display Moon phases**
|
||||
|
||||
* **automatic location detection based on your IP address**
|
||||
|
||||
* **allows specifying a location using the city name, 3-letter airport code, area code, GPS coordinates, IP address, or domain name**. You can also specify a geographical location like a lake, mountain, landmark, and so on.
|
||||
|
||||
* **supports multilingual location names** (the query string must be specified in Unicode)
|
||||
|
||||
* **supports specifying the language** in which the weather forecast should be displayed (more than 50 languages are supported)
|
||||
|
||||
* **it uses USCS units for queries from the USA and the metric system for the rest of the world** , but you can change this by appending `?u` for USCS, and `?m` for the metric system (SI)
|
||||
|
||||
* **3 output formats: ANSI for the terminal, HTML for the browser, and PNG**.
|
||||
|
||||
|
||||
|
||||
|
||||
Like I mentioned in the beginning of the article, to use wttr.in, all you need is cURL or Wget, but you can also [install it on your own server][3] if you prefer.
|
||||
|
||||
**Before using wttr.in, make sure cURL is installed.** In Debian, Ubuntu or Linux Mint (and other Debian or Ubuntu-based Linux distributions), install cURL using this command:
|
||||
```
|
||||
sudo apt install curl
|
||||
|
||||
```
|
||||
|
||||
### wttr.in command line examples
|
||||
|
||||
Get the weather for your location (wttr.in tries to guess your location based on your IP address):
|
||||
```
|
||||
curl wttr.in
|
||||
|
||||
```
|
||||
|
||||
Force cURL to resolve names to IPv4 addresses (in case you're having issues with IPv6 and wttr.in) by adding `-4` after `curl` :
|
||||
```
|
||||
curl -4 wttr.in
|
||||
|
||||
```
|
||||
|
||||
**Wget also works** (instead of cURL) if you want to retrieve the current weather and forecast as a png, or if you use it like this:
|
||||
```
|
||||
wget -O- -q wttr.in
|
||||
|
||||
```
|
||||
|
||||
You can replace `curl` with `wget -O- -q` in all the commands below if you prefer Wget over cURL.
|
||||
|
||||
Specify the location:
|
||||
```
|
||||
curl wttr.in/Dublin
|
||||
|
||||
```
|
||||
|
||||
Display weather information for a landmark (the Eiffel Tower in this example):
|
||||
```
|
||||
curl wttr.in/~Eiffel+Tower
|
||||
|
||||
```
|
||||
|
||||
Get the weather information for an IP address' location (the IP below belongs to GitHub):
|
||||
```
|
||||
curl wttr.in/@192.30.253.113
|
||||
|
||||
```
|
||||
|
||||
Retrieve the weather using USCS units:
|
||||
```
|
||||
curl wttr.in/Paris?u
|
||||
|
||||
```
|
||||
|
||||
Force wttr.in to use the metric system (SI) if you're in the USA:
|
||||
```
|
||||
curl wttr.in/New+York?m
|
||||
|
||||
```
|
||||
|
||||
Use Wget to download the current weather and 3-day forecast as a PNG image:
|
||||
```
|
||||
wget wttr.in/Istanbul.png
|
||||
|
||||
```
|
||||
|
||||
You can also tweak the PNG output (transparency and other options); see the list of [supported formats][5] for details.
|
||||
|
||||
**For many other examples, check out the wttr.in[project page][2] or type this in a terminal:**
|
||||
```
|
||||
curl wttr.in/:help
|
||||
|
||||
```
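The query patterns above can be wrapped in a tiny helper. Below is a minimal POSIX shell sketch that only builds the wttr.in URL (the `wttr_url` function name is our own, not something wttr.in provides); pass the result to cURL or Wget to fetch the forecast:

```shell
#!/bin/sh
# Build a wttr.in query URL from a location and an optional units flag:
# "u" for USCS, "m" for metric (SI), empty to let wttr.in decide.
wttr_url() {
    location=$1
    units=$2
    if [ -n "$units" ]; then
        printf 'wttr.in/%s?%s\n' "$location" "$units"
    else
        printf 'wttr.in/%s\n' "$location"
    fi
}

wttr_url Dublin        # wttr.in/Dublin
wttr_url New+York m    # wttr.in/New+York?m
```

For example, `curl -s "$(wttr_url Paris u)"` fetches the Paris forecast in USCS units.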
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/07/display-weather-forecast-in-your.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://wttr.in/
|
||||
[2]:https://github.com/chubin/wttr.in
|
||||
[3]:https://github.com/chubin/wttr.in#installation
|
||||
[4]:https://github.com/schachmat/wego
|
||||
[5]:https://github.com/chubin/wttr.in#supported-formats
|
86
sources/tech/20180710 Getting started with Perlbrew.md
Normal file
@ -0,0 +1,86 @@
|
||||
Getting started with Perlbrew
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
|
||||
|
||||
What's better than having Perl installed on your system? Having multiple Perls installed on your system! With [Perlbrew][1] you can do just that. But why—apart from surrounding yourself in Perl—would you want to do that?
|
||||
|
||||
The short answer is that different versions of Perl are… different. Application A may depend on behavior deprecated in a newer release, while Application B needs new features that weren't available last year. If you have multiple versions of Perl installed, each script can use the version that best suits it. This also comes in handy if you're a developer—you can test your application against multiple versions of Perl so that, no matter what your users are running, you know it works.
|
||||
|
||||
### Install Perlbrew
|
||||
|
||||
The other benefit is that Perlbrew installs to the user's home directory. That means each user can manage their Perl versions (and the associated CPAN packages) without having to involve the system administrators. Self-service means quicker installation for the users and gives sysadmins more time to work on the hard problems.
|
||||
|
||||
The first step is to install Perlbrew on your system. Many Linux distributions have it in the package repo already, so you're just a `dnf install perlbrew` (or whatever is the appropriate command for your distribution) away. You can also install the `App::perlbrew` module from CPAN with `cpan App::perlbrew`. Or you can download and run the installation script at [install.perlbrew.pl][2].
|
||||
|
||||
To begin using Perlbrew, run `perlbrew init`.
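After `perlbrew init` runs, it prints a line to append to your shell profile so that new shells can find your brewed Perls. With the default install root, that addition looks like the snippet below (a sketch; perlbrew prints the exact path to use on your system):

```shell
# Appended to ~/.bash_profile (or ~/.profile), as instructed by `perlbrew init`.
# ~/perl5/perlbrew is the default root; adjust if you installed elsewhere.
source ~/perl5/perlbrew/etc/bashrc
```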
|
||||
|
||||
### Install a new Perl version
|
||||
|
||||
Let's say you want to try the latest development release (5.27.11 as of this writing). First, you need to install the package:
|
||||
```
|
||||
perlbrew install 5.27.11
|
||||
|
||||
```
|
||||
|
||||
### Switch Perl version
|
||||
|
||||
Now that you have a new version installed, you can use it for just that shell:
|
||||
```
|
||||
perlbrew use 5.27.11
|
||||
|
||||
```
|
||||
|
||||
Or you can make it the default Perl version for your account (assuming you set up your profile as instructed by the output of `perlbrew init`):
|
||||
```
|
||||
perlbrew switch 5.27.11
|
||||
|
||||
```
|
||||
|
||||
### Run a single script
|
||||
|
||||
You can run a single command against a specific version of Perl, too:
|
||||
```
|
||||
perlbrew exec 5.27.11 myscript.pl
|
||||
|
||||
```
|
||||
|
||||
Or you can run a command against all your installed versions. This is particularly handy if you want to run tests against a variety of versions. In this case, specify Perl as the version:
|
||||
```
|
||||
perlbrew exec perl myscript.pl
|
||||
|
||||
```
|
||||
|
||||
### Install CPAN modules
|
||||
|
||||
If you want to install CPAN modules, the `cpanm` package is an easy-to-use interface that works well with Perlbrew. Install it with:
|
||||
```
|
||||
perlbrew install-cpanm
|
||||
|
||||
```
|
||||
|
||||
You can then install CPAN modules with the `cpanm` command:
|
||||
```
|
||||
cpanm CGI::simple
|
||||
|
||||
```
|
||||
|
||||
### But wait, there's more!
|
||||
|
||||
This article covers basic Perlbrew usage. There are many more features and options available. Look at the output of `perlbrew help` as a starting point, or check out the [App::perlbrew documentation][3]. What other features do you love in Perlbrew? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/perlbrew
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/bcotton
|
||||
[1]:https://perlbrew.pl/
|
||||
[2]:https://raw.githubusercontent.com/gugod/App-perlbrew/master/perlbrew-install
|
||||
[3]:https://metacpan.org/pod/App::perlbrew
|
@ -0,0 +1,72 @@
|
||||
The aftermath of the Gentoo GitHub hack
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2018/07/gentoo_penguins-100763422-large.jpg)
|
||||
|
||||
### Gentoo GitHub hack: What happened?
|
||||
|
||||
Late last month (June 28), the Gentoo GitHub repository was attacked after someone gained control of an admin account. All access to the repositories was soon removed from Gentoo developers. Repository and page content were altered. But within 10 minutes of the attacker gaining access, someone noticed something was going on, 7 minutes later a report was sent, and within 70 minutes the attack was over. Legitimate Gentoo developers were shut out for 5 days while the dust settled and repairs and analysis were completed.
|
||||
|
||||
The attackers also attempted to add "rm -rf" commands to some repositories to cause user data to be recursively removed. As it turns out, this code was unlikely to be run because of technical precautions that were in place, but this wouldn't have been obvious to the attacker.
|
||||
|
||||
One of the things that constrained how big a disaster this break in might have turned out to be was that the attack was "loud." The removal of developers resulted in them being emailed, and developers quickly discovered they'd been shut out. A stealthier attack might have led to a significant delay in anyone responding to the problem and a significantly bigger problem.
|
||||
|
||||
A detailed timeline showing the details of what happened is available at the [Gentoo Linux site][1].
|
||||
|
||||
### How the Gentoo GitHub attack happened
|
||||
|
||||
Much of the focus in the aftermath of this very significant attack has been on how the attacker was able to gain admin access and what might have been done differently to keep the site safe. The most obvious take-home was that the admin's password was guessed because it was too closely related to one that had been captured on another system. This might be like using "Spring2018" on one system and "Summer2018" on another.
|
||||
|
||||
Another problem was that it was unclear how end users might have been able to tell whether or not they had a clean copy of the code, and there was no confirmation as to whether the malicious commits (accessible for a while) would execute.
|
||||
|
||||
### Lessons learned from the hack
|
||||
|
||||
The lessons learned should come as no surprise. We should all be careful not to use the same password on multiple systems and not to use passwords that relate to each other so strongly that knowing one in a set suggests another.
|
||||
|
||||
We also have to admit that two-factor authentication would have prevented this break-in. While something of a burden on users (i.e., they may have to carry a token generator or confirm their login through some secondary service), it very strongly limits who can get access to an account.
|
||||
|
||||
Of course the lessons learned should also not overlook what this incident showed us was going right. The fact that the break-in was noticed so quickly and that communications lines were functional meant the break-in could be quickly addressed. The breach was also made public, the repository was only a secondary copy of the main Gentoo source code, and changes in the main repository were signed and could be verified.
|
||||
|
||||
#### The best news
|
||||
|
||||
The really good news is that it appears that no one was affected by the break-in, other than the fact that developers were locked out for a while. The hackers weren't able to penetrate Gentoo's master repository (the default location for automatic updates). They also weren't able to get their hands on Gentoo's digital signing key. This means that default updates would have rejected their files as fakes.
|
||||
|
||||
The harm that could have been made to Gentoo's reputation was avoided by the precautions in place and their professional handling of the incident. What could have cost them a lot ended up as a confirmation of what they're doing right and added to their determination to make some changes to strengthen their security. They faced up to some cyberbullies and came out stronger and more confident.
|
||||
|
||||
### Fixing the potholes
|
||||
|
||||
Gentoo is already addressing the weaknesses that contributed to the break-in. They are making frequent backups of their GitHub Organization (i.e., their content), starting to use two-factor authentication by default, working on an incident response plan with a focus on sharing information about a security incident with their users, and tightening procedures around credential revocation. They are also reducing the number of users with elevated privileges, auditing logins, and publishing password policies that mandate the use of password managers.
|
||||
|
||||
### Gentoo and GitHub
|
||||
|
||||
For readers unfamiliar with Gentoo, it's important to understand that Gentoo is different from most Linux distributions. Users download and then compile the source to build the OS they will then be using. It's as close to the Linux mantra of “know how to do it yourself” as you can get.
|
||||
|
||||
Git is a code management system not unlike CVS, and GitHub provides repositories for the code.
|
||||
|
||||
### Gentoo strengths
|
||||
|
||||
Gentoo users tend to be more knowledgeable about the low-level aspects of the OS (e.g., kernel configuration and hardware support) than most Linux users — probably due to their interest in working with the source code. The OS is also highly scalable and flexible with a "build what you need" focus. The name derives from that of the "Gentoo penguin" — a penguin breed that lives on many sub-Antarctic islands. More information and downloads are available at [www.gentoo.org][2].
|
||||
|
||||
### More on the Gentoo GitHub break-in
|
||||
|
||||
More information on the break in is available on [Naked Security][3] and (as noted above) the [Gentoo site][1].
|
||||
|
||||
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3287973/linux/the-aftermath-of-the-gentoo-github-hack.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://wiki.gentoo.org/wiki/Project:Infrastructure/Incident_Reports/2018-06-28_Github
|
||||
[2]:https://www.gentoo.org/
|
||||
[3]:https://nakedsecurity.sophos.com/2018/06/29/linux-distro-hacked-on-github-all-code-considered-compromised/
|
||||
[4]:https://www.facebook.com/NetworkWorld/
|
||||
[5]:https://www.linkedin.com/company/network-world
|
@ -0,0 +1,291 @@
|
||||
你不知道的 Bash:关于 Bash 数组的介绍
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S)
|
||||
|
||||
尽管软件工程师常常使用命令行来进行各种开发,但命令行中的数组似乎总是一个模糊的东西(虽然不像正则操作符 `=~` 那样复杂隐晦)。抛开隐晦、可疑的语法不谈,[Bash][1] 数组其实是非常有用的。
|
||||
|
||||
### 稍等,这是为什么?
|
||||
|
||||
写关于 Bash 的文章很难,因为文章很容易沦为一篇只关注语法怪癖的使用手册。不过请放心,这篇文章的目的就是让你不用再去读该死的使用手册。
|
||||
|
||||
#### 真实(通常是有用的)示例
|
||||
|
||||
为了这个目的,想象一下真实世界的场景以及 Bash 是怎么帮忙的:你正在公司里面主导一个新工作,评估并优化内部数据管线的运行时间。首先,你要做个参数扫描分析来评估管线使用线程的状况。简单起见,我们把这个管道当作一个编译好的 C++ 黑盒子,这里面我们能够调整的唯一的参数是用于处理数据的线程数量:`./pipeline --threads 4`。
|
||||
|
||||
### 基础
|
||||
|
||||
我们首先要做的事是定义一个数组,用来容纳我们想要测试的 `--threads` 参数值:

```
allThreads=(1 2 4 8 16 32 64 128)

```
|
||||
|
||||
本例中,所有元素都是数字,但参数并不一定是数字,Bash 中的 数组可以容纳数字和字符串,比如 `myArray=(1 2 "three" 4 "five")` 就是个有效的表达式。就像 Bash 中其它的变量一样,确保赋值符号两边没有空格。否则 Bash 将会把变量名当作程序来执行,把 `=` 当作程序的第一个参数。
|
||||
|
||||
现在我们已经初始化了这个数组,来看看如何读取其中的元素。如果只输入 `echo $allThreads`,你会发现它只输出第一个元素。
|
||||
|
||||
要理解产生这种现象的原因,需要回顾一下我们一般是怎么在 Bash 中输出变量的。考虑以下场景:
|
||||
|
||||
```
|
||||
type="article"
|
||||
|
||||
echo "Found 42 $type"
|
||||
|
||||
```
|
||||
|
||||
假如我们得到的变量 `$type` 是一个单词,我们想在句子结尾给它加上 `s`。我们无法直接把 `s` 加到 `$type` 后面,因为这会把它变成另一个变量 `$types`。尽管我们可以使用像 `echo "Found 42 "$type"s"` 这样的代码技巧,但解决这个问题的最好方法是用花括号:`echo "Found 42 ${type}s"`,这让我们能够告诉 Bash 变量名的起止位置(有趣的是,JavaScript/ES6 在 [template literals][2] 中注入变量和表达式的语法和这里是一样的)。
|
||||
|
||||
事实上,尽管 Bash 变量一般不需要花括号,但数组需要。花括号还让我们能够指定要访问的索引,例如 `echo ${allThreads[1]}` 返回的是数组中的第二个元素。如果不写花括号,比如 `echo $allThreads[1]`,Bash 会把 `[1]` 当作普通字符串输出。
|
||||
|
||||
是的,Bash 数组的语法很怪,但是至少他们是从 0 开始索引的,不像有些语言(说的就是你,`R` 语言)。
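下面这个小例子直观地对比了带花括号和不带花括号的区别(在 Bash 下运行,仅作演示):

```shell
allThreads=(1 2 4 8 16 32 64 128)
echo "${allThreads[1]}"   # 输出 2:花括号让 Bash 正确解析索引
echo "$allThreads[1]"     # 输出 1[1]:没有花括号时,[1] 被当作普通字符串
```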
|
||||
|
||||
### 遍历数组
|
||||
|
||||
上面的例子中我们直接用整数作为数组的索引,我们现在考虑两种其他情况:第一,如果想要数组中的第 `$i` 个元素,这里 `$i` 是一个代表索引的变量,我们可以这样 `echo ${allThreads[$i]}` 解析这个元素。第二,要输出一个数组的所有元素,我们把数字索引换成 `@` 符号(你可以把 `@` 当作表示 `all` 的符号):`echo ${allThreads[@]}`。
|
||||
|
||||
#### 遍历数组元素
|
||||
|
||||
记住上面讲过的,我们遍历 `$allThreads` 数组,把每个值当作 `--threads` 参数启动 pipeline:
|
||||
|
||||
```
|
||||
for t in ${allThreads[@]}; do
|
||||
|
||||
./pipeline --threads $t
|
||||
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
#### 遍历数组索引
|
||||
|
||||
接下来,考虑一个稍稍不同的方法。不是遍历所有的数组元素,我们可以遍历所有的索引:
|
||||
|
||||
```
|
||||
for i in ${!allThreads[@]}; do
|
||||
|
||||
./pipeline --threads ${allThreads[$i]}
|
||||
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
一步一步看:如之前所见,`${allThreads[@]}` 表示数组中的所有元素。前面加了个感叹号,变成 `${!allThreads[@]}`,这会返回数组索引列表(这里是 0 到 7)。换句话说。`for` 循环就遍历所有的索引 `$i` 并从 `$allThreads` 中读取第 `$i` 个元素,当作 `--threads` 选项的参数。
|
||||
|
||||
这看上去很辣眼睛,你可能奇怪为什么我要一开始就讲这个。这是因为有时候在循环中需要同时获得索引和对应的值,例如,如果你想要忽视数组中的第一个元素,使用索引避免创建要在循环中累加的额外变量。
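例如,下面的片段利用索引跳过数组的第一个元素(这里用 `echo` 代替真实的 `./pipeline` 命令,仅作演示):

```shell
allThreads=(1 2 4 8 16 32 64 128)
for i in "${!allThreads[@]}"; do
    # 跳过第一个元素(索引 0)
    if [ "$i" -eq 0 ]; then continue; fi
    echo "threads=${allThreads[$i]}"
done
```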
|
||||
|
||||
### 填充数组
|
||||
|
||||
到目前为止,我们已经能够用给定的 `--threads` 选项启动 pipeline 了。现在假设按秒计时的运行时间输出到 pipeline。我们想要捕捉每个迭代的输出,然后把它保存在另一个数组中,因此我们最终可以随心所欲的操作它。
|
||||
|
||||
#### 一些有用的语法
|
||||
|
||||
在深入代码前,我们要多介绍一些语法。首先,我们要能解析 Bash 命令的输出。用这个语法可以做到:`output=$( ./my_script.sh )`,这会把命令的输出存储到变量 `$output` 中。
|
||||
|
||||
我们需要的第二个语法是如何把我们刚刚解析的值添加到数组中。完成这个任务的语法看起来很熟悉:
|
||||
|
||||
```
|
||||
myArray+=( "newElement1" "newElement2" )
|
||||
|
||||
```
|
||||
|
||||
#### 参数扫描
|
||||
|
||||
万事俱备,执行参数扫描的脚本如下:
|
||||
|
||||
```
|
||||
allThreads=(1 2 4 8 16 32 64 128)
|
||||
|
||||
allRuntimes=()
|
||||
|
||||
for t in ${allThreads[@]}; do
|
||||
|
||||
runtime=$(./pipeline --threads $t)
|
||||
|
||||
allRuntimes+=( $runtime )
|
||||
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
就是这个了!
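如果手边没有真实的 pipeline 程序,可以用一个模拟函数来试验上面的脚本(这里假设运行时间与线程数成反比,`pipeline` 函数纯属演示):

```shell
# 模拟的 pipeline:仅用于演示,真实场景中请换成你自己的命令
pipeline() { echo $(( 128 / $1 )); }

allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()
for t in "${allThreads[@]}"; do
    runtime=$(pipeline "$t")
    allRuntimes+=( "$runtime" )
done
echo "${allRuntimes[@]}"   # 128 64 32 16 8 4 2 1
```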
|
||||
|
||||
### 还有什么能做的?
|
||||
|
||||
这篇文章中,我们讲过使用数组进行参数扫描的场景。我担保有很多理由要使用 Bash 数组,这里就有两个例子:
|
||||
|
||||
#### 日志警告
|
||||
|
||||
本场景中,把应用分成几个模块,每一个都有它自己的日志文件。我们可以编写一个 cron 任务脚本,当某个模块中出现问题标志时向特定的人发送邮件:
|
||||
|
||||
```
|
||||
# 日志列表,发生问题时应该通知的人
|
||||
|
||||
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
|
||||
|
||||
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")
|
||||
|
||||
|
||||
|
||||
# 在每个日志中查找问题标志
|
||||
|
||||
for i in ${!logPaths[@]};
|
||||
|
||||
do
|
||||
|
||||
log=${logPaths[$i]}
|
||||
|
||||
stakeholder=${logEmails[$i]}
|
||||
|
||||
numErrors=$( tail -n 100 "$log" | grep "ERROR" | wc -l )
|
||||
|
||||
|
||||
|
||||
# 如果近期发现超过 5 个错误,就警告负责人
|
||||
|
||||
if [[ "$numErrors" -gt 5 ]];
|
||||
|
||||
then
|
||||
|
||||
emailRecipient="$stakeholder"
|
||||
|
||||
emailSubject="WARNING: ${log} showing unusual levels of errors"
|
||||
|
||||
emailBody="${numErrors} errors found in log ${log}"
|
||||
|
||||
echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
|
||||
|
||||
fi
|
||||
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
#### API 查询
|
||||
|
||||
假设你想要做一些分析,找出你的 Medium 帖子中哪些评论最多。由于我们无法直接访问数据库,SQL 就派不上用场,但我们可以使用 API!
|
||||
|
||||
为了避免陷入关于 API 授权和令牌的冗长讨论,我们将使用 [JSONPlaceholder][3] 这个面向公众的免费测试 API 作为我们的查询目标。一旦我们查询到每个帖子,并解析出评论者的邮箱,就可以把这些邮箱添加到结果数组里:
|
||||
|
||||
```
|
||||
endpoint="https://jsonplaceholder.typicode.com/comments"
|
||||
|
||||
allEmails=()
|
||||
|
||||
|
||||
|
||||
# 查询前 10 个帖子
|
||||
|
||||
for postId in {1..10};
|
||||
|
||||
do
|
||||
|
||||
# 执行 API 调用,获取该帖子评论者的邮箱
|
||||
|
||||
response=$(curl "${endpoint}?postId=${postId}")
|
||||
|
||||
|
||||
|
||||
# 使用 jq 把 JSON 响应解析成数组
|
||||
|
||||
allEmails+=( $( jq '.[].email' <<< "$response" ) )
|
||||
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
注意这里我是用 [`jq` 工具][4] 从命令行里解析 JSON 数据。关于 `jq` 的语法超出了本文的范围,但我强烈建议你了解它。
|
||||
|
||||
你可能已经意识到,Bash 数组在数不胜数的场景中都很有帮助,希望这篇文章中的示例能给你带来一些启发。如果你想分享自己工作中的其它例子,请在下方评论。
|
||||
|
||||
### 请等等,还有很多东西!
|
||||
|
||||
由于我们在本文讲了很多数组语法,这里是关于我们讲到内容的总结,包含一些还没讲到的高级技巧:
|
||||
|
||||
| 语法 | 效果 |
|
||||
|:--|:--|
|
||||
| `arr=()` | 创建一个空数组 |
|
||||
| `arr=(1 2 3)` | 初始化数组 |
|
||||
| `${arr[2]}` | 解析第三个元素 |
|
||||
| `${arr[@]}` | 解析所有元素 |
|
||||
| `${!arr[@]}` | 解析数组索引 |
|
||||
| `${#arr[@]}` | 计算数组长度 |
|
||||
| `arr[0]=3` | 重写第 1 个元素 |
|
||||
| `arr+=(4)` | 添加值 |
|
||||
| `str=$(ls)` | 把 `ls` 输出保存到字符串 |
|
||||
| `arr=( $(ls) )` | 把 `ls` 输出的文件保存到数组里 |
|
||||
| `${arr[@]:s:n}` | 解析从索引 `s` 开始的 `n` 个元素 |
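可以用下面的小例子验证表中的长度和切片语法:

```shell
arr=(a b c d e)
echo "${#arr[@]}"      # 5:数组长度
echo "${arr[@]:1:3}"   # b c d:从索引 1 开始的 3 个元素
```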
|
||||
|
||||
### 最后一点思考
|
||||
|
||||
正如我们所见,Bash 数组的语法很奇怪,但我希望这篇文章让你相信它们很有用。只要你理解了这些语法,你会发现以后会经常使用 Bash 数组。
|
||||
|
||||
#### Bash 还是 Python?
|
||||
|
||||
问题来了:什么时候该用 Bash 数组而不是其他的脚本语法,比如 Python?
|
||||
|
||||
对我而言,这完全取决于需求:如果只需要调用几个命令行工具就能解决问题,那就用 Bash;而如果你的脚本属于一个更大的 Python 项目,那就用 Python。
|
||||
|
||||
比如,我们也可以用 Python 来实现参数扫描,但这样做最终只是给 Bash 写了一层包装:
|
||||
|
||||
```
|
||||
import subprocess
|
||||
|
||||
|
||||
|
||||
all_threads = [1, 2, 4, 8, 16, 32, 64, 128]
|
||||
|
||||
all_runtimes = []
|
||||
|
||||
|
||||
|
||||
# 用不同的线程数字启动 pipeline
|
||||
|
||||
for t in all_threads:
|
||||
|
||||
cmd = './pipeline --threads {}'.format(t)
|
||||
|
||||
|
||||
|
||||
# 使用子线程模块获得返回的输出
|
||||
|
||||
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
|
||||
|
||||
output = p.communicate()[0]
|
||||
|
||||
all_runtimes.append(output)
|
||||
|
||||
```
|
||||
|
||||
由于本例中无法避免使用命令行,直接用 Bash 反而更合适。
|
||||
|
||||
#### 无耻的自我宣传时间
|
||||
|
||||
如果你喜欢这篇文章,这里还有很多类似的文章! [在此注册,加入 OSCON][5],2018 年 7 月 17 号我会在这做一个主题为 [你不知道的 Bash][6] 的在线编码研讨会。没有幻灯片,不需要门票,只有你和我在命令行里面敲代码,探索 Bash 中的奇妙世界。
|
||||
|
||||
本文章首发于 [Medium][7],再发布时已获得授权。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
|
||||
|
||||
作者:[Robert Aboukhalil][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[BriFuture](https://github.com/BriFuture)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/robertaboukhalil
|
||||
[1]:https://opensource.com/article/17/7/bash-prompt-tips-and-tricks
|
||||
[2]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
|
||||
[3]:https://github.com/typicode/jsonplaceholder
|
||||
[4]:https://stedolan.github.io/jq/
|
||||
[5]:https://conferences.oreilly.com/oscon/oscon-or
|
||||
[6]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/67166
|
||||
[7]:https://medium.com/@robaboukhalil/the-weird-wondrous-world-of-bash-arrays-a86e5adf2c69
|