Merge pull request #3 from LCTT/master

update
This commit is contained in:
ChenYi 2018-02-05 22:59:15 +08:00 committed by GitHub
commit c8c0feb52e
81 changed files with 8110 additions and 1920 deletions

View File

@ -1,10 +1,11 @@
我在 Twitch 平台直播编程的第一年
我在 Twitch 平台直播编程的经验
============================================================
去年 7 月我进行了第一次直播。不像大多数人那样在 Twitch 上进行游戏直播,我想直播的内容是我利用个人时间进行的开源工作。我对 NodeJS 硬件库有一定的研究(其中大部分是靠我自学的)。考虑到我已经在 Twitch 上有了一个直播间,为什么不再建一个更小更专业的直播间,比如使用 <ruby>JavaScript 驱动硬件<rt>JavaScript powered hardware</rt></ruby> 来建立直播间 :) 我注册了 [我自己的频道][1] ,从那以后我就开始定期直播。
去年 7 月我进行了第一次直播。不像大多数人那样在 Twitch 上进行游戏直播,我想直播的内容是我利用个人时间进行的开源工作。我对 NodeJS 硬件库有一定的研究(其中大部分是靠我自学的)。考虑到我已经在 Twitch 上有了一个直播间,为什么不再建一个更小更专业的直播间,比如 <ruby>由 JavaScript 驱动的硬件<rt>JavaScript powered hardware</rt></ruby> ;) 我注册了 [我自己的频道][1] ,从那以后我就开始定期直播。
我当然不是第一个这么做的人。[Handmade Hero][2] 是我最早看到的几个在线直播编程的程序员之一。很快这种直播方式被 Vlambeer 发扬光大,他在 Twitch 的 [Nuclear Throne live][3] 直播间进行直播。我对 Vlambeer 尤其着迷。
我的朋友 [Nolan Lawson][4] 让我 _真正开始做_ 这件事,而不只是单纯地 _想要做_ 。我看了他 [在周末直播开源工作][5] ,做得棒极了。他解释了他当时做的每一件事。每一件事。回复 GitHub 上的 <ruby>问题<rt>issues</rt></ruby> ,鉴别 bug ,在 <ruby>分支<rt>branches</rt></ruby> 中调试程序,你知道的。这令我着迷,因为 Nolan 使他的开源库得到了广泛的使用。他的开源生活和我的完全不一样。
我的朋友 [Nolan Lawson][4] 让我 _真正开始做_ 这件事,而不只是单纯地 _想要做_ 。我看了他 [在周末直播开源工作][5] ,做得棒极了。他解释了他当时做的每一件事。是的,每一件事,包括回复 GitHub 上的 <ruby>问题<rt>issues</rt></ruby> ,鉴别 bug ,在 <ruby>分支<rt>branches</rt></ruby> 中调试程序,你知道的。这令我着迷,因为 Nolan 使他的开源库得到了广泛的使用。他的开源生活和我的完全不一样。
你甚至可以看到我在他视频下的评论:
@ -14,27 +15,27 @@
那个星期六我极少的几个听众给了我很大的鼓舞,因此我坚持了下去。现在我有了超过一千个听众,他们中的一些人形成了一个可爱的小团体,他们会定期观看我的直播,我称呼他们为 “noopkat 家庭” 。
我们很开心。我想称呼这个即时编程部分为“多玩家在线组队编程”。我真的被他们每个人的热情和才能触动了。一次,一个团体成员指出我的 Arduino 开发板没有连接上软件,因为板子上的芯片丢了。这真是最有趣的时刻之一。
我们很开心。我想称呼这个即时编程部分为“多玩家在线组队编程”。我真的被他们每个人的热情和才能触动了。一次,一个团体成员指出我的 Arduino 开发板不能随同我的软件工作,因为板子上的芯片丢了。这真是最有趣的时刻之一。
我经常暂停直播,检查我的收件箱,看看有没有人对我提过的,不再有时间完成的工作发起 <ruby>拉取请求<rt>pull request</rt></ruby> 。感谢我 Twitch 社区对我的帮助和鼓励。
我经常暂停直播,检查我的收件箱,看看有没有人对我提及过但没有时间完成的工作发起 <ruby>拉取请求<rt>pull request</rt></ruby> 。感谢我 Twitch 社区对我的帮助和鼓励。
我很想聊聊 Twitch 直播给我带来的好处,但它的内容太多了,我应该会在下一篇博客里介绍。我在这里想要分享的,是我总结出的关于如何自己开始直播编程的经验。最近几个开发者问我怎么开始自己的直播,因此我在这里向大家展示我给他们的建议!
首先,我在这里贴出一个给过我很大帮助的教程 [“Streaming and Finding Success on Twitch”][7] 。它专注于 Twitch 与游戏直播,但也有很多和我们要做的东西相关的部分。我建议首先阅读这个教程,然后再考虑一些建立直播频道的细节(比如如何选择设备和软件)。
下面我列出我自己的配置。这些配置是从我多次的错误经验中总结出来的,其中要感谢我的直播同行的智慧与建议(对,你们知道就是你们!)
### 软件
有很多免费的直播软件。我用的是 [Open Broadcaster Software (OBS)][8] 。它适用于大多数的平台。我觉得它十分直观且易于入门,但掌握其他的进阶功能则需要一段时间的学习。学好它你会获得很多好处!这是今天我直播时 OBS 的桌面截图(点击查看大图)
有很多免费的直播软件。我用的是 [Open Broadcaster Software (OBS)][8] 。它适用于大多数的平台。我觉得它十分直观且易于入门,但掌握其他的进阶功能则需要一段时间的学习。学好它你会获得很多好处!这是今天我直播时 OBS 的桌面截图:
![](https://cdn-images-1.medium.com/max/1600/0*s4wyeYuaiThV52q5.png)
你直播时需要在不用的“场景”中进行切换。一个“场景”是多个“素材”通过堆叠和组合产生的集合。一个“素材”可以是照相机,麦克风,你的桌面,网页,动态文本,图片等等。 OBS 是一个很强大的软件。
你直播时需要在不同的“<ruby>场景<rt>scenes</rt></ruby>”中进行切换。一个“场景”是多个“<ruby>素材<rt>sources</rt></ruby>”通过堆叠和组合产生的集合。一个“素材”可以是照相机、麦克风、你的桌面、网页、动态文本、图片等等。 OBS 是一个很强大的软件。
最上方的桌面场景是我编程的环境,我直播的时候主要停留在这里。我使用 iTerm 和 vim ,同时打开一个可以切换的浏览器窗口来查阅文献或在 GitHub 上分类检索资料。
底部的黑色长方形是我的网络摄像头,人们可以通过这种个人化的连接方式来观看我工作。
我的场景中有一些“标签”,很多都与状态或者顶栏信息有关。顶栏只是添加了个性化信息,它在直播时是一个很好的连续性素材。这是我在 [GIMP][9] 里制作的图片,在你的场景里它会作为一个素材来加载。一些标签是从文本文件里添加的动态内容(例如最新粉丝)。另一个标签是 [我自己制作的一个][10],它可以展示我直播的房间的动态温度与湿度。
@ -62,7 +63,7 @@
### 硬件
我从使用便宜的器材开始,当我意识到我会长期坚持直播之后,才将它们逐渐换成更好的。开始的时候尽量使用你现有的器材,即使是只用电脑内置的摄像头与麦克风。
现在我使用 Logitech Pro C920 网络摄像头,和一个固定有支架的 Blue Yeti 麦克风。花费是值得的。我直播的质量完全不同了。
@ -116,7 +117,7 @@
当你即将开始的时候,你会感觉很奇怪,不适应。你会在人们看着你写代码的时候感到紧张。这很正常!尽管我之前有过公共演说的经历,我一开始的时候还是感到陌生而不适应。我感觉我无处可藏,这令我害怕。我想:“大家可能都觉得我的代码很糟糕,我是一个糟糕的开发者。”这是一个困扰了我 _整个职业生涯_ 的想法,对我来说不新鲜了。我知道带着这些想法,我不能在发布到 GitHub 之前仔细地再检查一遍代码,而这样做更有利于我保持我作为开发者的声誉。
我从 Twitch 直播中发现了很多关于我代码风格的东西。我知道我的风格绝对是“先让它跑起来,然后再考虑可读性,然后再考虑运行速度”。我不再在前一天晚上提前排练好直播的内容(一开始的三四次直播我都是这么做的),所以我在 Twitch 上写的代码是相当粗糙的,我还得保证它们运行起来没问题。当我不看别人的聊天和讨论的时候,我可以写出我最好的代码,这样是没问题的。但我总会忘记我使用过无数遍的方法的名字,而且每次直播的时候都会犯“愚蠢的”错误。一般来说,这不是一个让你能达到你最好状态的生产环境。
我的 Twitch 社区从来不会因为这个苛求我,反而是他们帮了我很多。他们理解我正同时做着几件事,而且真的给了很多务实的意见和建议。有时是他们帮我找到了解决方法,有时是我要向他们解释为什么他们的建议不适合解决这个问题。这真的很像一般意义的组队编程!
@ -128,7 +129,7 @@
如果你周日想要加入我的直播,你可以 [订阅我的 Twitch 频道][13] :)
最后我想说一下,我个人十分感谢 [Mattias Johansson][14] 在我早期开始直播的时候给我的建议和鼓励。他的 [FunFunFunction YouTube channel][15] 也是一个令人激动的定期直播频道。
最后我想说一下,我自己十分感谢 [Mattias Johansson][14] 在我早期开始直播的时候给我的建议和鼓励。他的 [FunFunFunction YouTube channel][15] 也是一个令人激动的定期直播频道。
另:许多人问过我的键盘和其他工作设备是什么样的, [这是我使用的器材的完整列表][16] 。感谢关注!
@ -136,9 +137,9 @@
via: https://medium.freecodecamp.org/lessons-from-my-first-year-of-live-coding-on-twitch-41a32e2f41c1
作者:[ Suz Hinton][a]
作者:[Suz Hinton][a]
译者:[lonaparte](https://github.com/lonaparte)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,134 @@
为什么 Kubernetes 很酷
============================================================
在我刚开始学习 Kubernetes大约是一年半以前吧我真的不明白为什么应该去关注它。
在我使用 Kubernetes 全职工作了三个多月后,我才逐渐明白了为什么我应该使用它。(我距离成为一个 Kubernetes 专家还很远!)希望这篇文章对你理解 Kubernetes 能做什么会有帮助!
我将尝试去解释我对 Kubernetes 感兴趣的一些原因,而不去使用 “<ruby>原生云<rt>cloud native</rt></ruby>”、“<ruby>编排系统<rt>orchestration</rt></ruby>”、“<ruby>容器<rt>container</rt></ruby>”,或者任何 Kubernetes 专用的术语 :)。我要解释的这些观点主要来自一位 Kubernetes 运维者/基础设施工程师的视角,因为,我现在的工作就是配置 Kubernetes让它工作得更好。
我不会去尝试解决一些如 “你应该在你的生产系统中使用 Kubernetes 吗?”这样的问题。那是非常复杂的问题。(不仅是因为“生产系统”根据你的用途而总是有不同的要求)
### Kubernetes 可以让你无需设置一台新的服务器即可在生产系统中运行代码
我第一次被说服去使用 Kubernetes是源于我和伙伴 Kamal 之间的一段谈话,大致是这样的:
* Kamal 使用 Kubernetes 你可以通过一条命令就能设置一台新的服务器。
* Julia 我觉得不太可能吧。
* Kamal 像这样,你写一个配置文件,然后应用它,这时候,你就在生产系统中运行了一个 HTTP 服务。
* Julia 但是,现在我需要去创建一个新的 AWS 实例,明确地写一个 Puppet 清单,设置服务发现,配置负载均衡,配置我们的部署软件,并且确保 DNS 正常工作,如果没有什么问题的话,至少在 4 小时后才能投入使用。
* Kamal: 是的,使用 Kubernetes 你不需要做那么多事情,你可以在 5 分钟内设置一台新的 HTTP 服务,并且它将自动运行。只要你的集群中有空闲的资源它就能正常工作!
* Julia: 这儿一定是一个“坑”。
这里确实有一个“坑”:(以我的经验)搭建一个生产可用的 Kubernetes 集群并不容易。(可以查看 [Kubernetes 艰难之旅][3],了解上手时要处理哪些复杂的东西。)但是,我们现在先不深入讨论它。
因此Kubernetes 第一个很酷的事情是,它可以让在生产系统中部署新开发的软件这件事变得更容易。那是很酷的事,而且它真的是这样,因此,一旦你有了一个正常运作的 Kubernetes 集群,你真的可以仅用一个配置文件就在生产系统中设置一台 HTTP 服务(在 5 分钟内运行这个应用程序,设置一个负载均衡,给它一个 DNS 名字,等等)。看起来真的很有趣。
### 对于运行在生产系统中的代码Kubernetes 可以提供更好的可见性和可管理性
在我看来,在理解 etcd 之前,你可能无法真正理解 Kubernetes。因此让我们先讨论一下 etcd。
想像一下,如果现在我这样问你,“告诉我你运行在生产系统中的每个应用程序,它运行在哪台主机上?它是否状态很好?是否为它分配了一个 DNS 名字?”我并不知道这些,但是,我可能需要到很多不同的地方去查询来回答这些问题,并且,我需要花很长的时间才能搞定。我现在可以很确定地说不需要查询,仅一个 API 就可以搞定它们。
在 Kubernetes 中,你的集群的所有状态(运行中的应用程序 “pod”、节点、DNS 名字、cron 任务等等)都保存在一个单一的数据库etcd中。每个 Kubernetes 组件都是无状态的,并且基本是通过下列方式工作的:
* 从 etcd 中读取状态(比如,“分配给节点 1 的 pod 列表”)
* 产生变化(比如,“在节点 1 上运行 pod A”
* 更新 etcd 中的状态(比如,“设置 pod A 的状态为 running
这意味着,如果你想去回答诸如 “某个可用区中有多少个运行着 nginx 的 pod” 这样的问题时,你可以通过查询一个统一的 APIKubernetes API来回答它。并且你对这个 API 的访问权限与其它每个 Kubernetes 组件是完全一样的。
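举个简单的例子(假设你已经用 kubectl 连上了集群,并且相关的 pod 带有 `app=nginx` 这样的标签,标签名只是示意):

```
# 列出所有带 app=nginx 标签的 pod以及它们所在的节点
kubectl get pods --all-namespaces -l app=nginx -o wide

# 同样的信息也可以直接向 Kubernetes API 查询,以 JSON 形式返回
kubectl get --raw "/api/v1/pods?labelSelector=app%3Dnginx"
```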
这也意味着,你可以很容易地去管理每个运行在 Kubernetes 中的任何东西。比如说,如果你想要:
* 部署实现一个复杂的定制的部署策略(部署一个东西,等待 2 分钟,部署 5 个以上,等待 3.7 分钟,等等)
* 每当推送到 github 上一个分支,自动化 [启动一个新的 web 服务器][1]
* 监视所有你的运行的应用程序,确保它们有一个合理的内存使用限制。
要做到这些,你只需要写一个与 Kubernetes API 通讯的程序(也就是一个“控制器”controller就可以了。
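下面用 shell 给出一个极简的“控制器”思路示意(假设用 kubectl 轮询 API 并用 jq 处理 JSON仅用来说明“读取状态、做出决策、写回状态”这个模式真正的控制器通常会用 client-go 的 watch/informer 机制来实现):

```
#!/bin/bash
# 极简的控制器循环示意:读取状态 -> 做出决策 -> (必要时)写回状态
while true; do
    # 1. 从 Kubernetes API最终来自 etcd读取状态
    kubectl get pods --all-namespaces -o json > /tmp/pods.json

    # 2. 基于状态做出决策:这里只是找出没有为任何容器设置内存限制的 pod 并打印出来
    jq -r '.items[]
           | select(all(.spec.containers[]; .resources.limits.memory == null))
           | "\(.metadata.namespace)/\(.metadata.name)"' /tmp/pods.json

    # 3. 真正的控制器会在这一步调用 API 写回变更(例如 kubectl patch/apply这里省略
    sleep 60
done
```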
另一个关于 Kubernetes API 的令人激动的事情是,你不会局限于 Kubernetes 所提供的现有功能!如果对于你要部署/创建/监视的软件有你自己的方案,那么,你可以使用 Kubernetes API 去写一些代码去达到你的目的!它可以让你做到你想做的任何事情。
### 即便每个 Kubernetes 组件都“挂了”,你的代码将仍然保持运行
关于 Kubernetes 我(在各种博客文章中 :))承诺的一件事情是,“如果 Kubernetes API 服务和其它组件‘挂了’也没事,你的代码将一直保持运行状态”。我认为理论上这听起来很酷,但是我不确定它是否真是这样的。
到目前为止,这似乎是真的!
我已经断开了一些正在运行的 etcd发生了这些情况
1. 所有的代码继续保持运行状态
2. 不能做 _新的_ 事情你不能部署新的代码或者生成变更cron 作业将停止工作)
3. 当它恢复时,集群将赶上这期间它错过的内容
这也意味着,如果 etcd 宕掉了,而你的某个应用程序恰好崩溃或者发生了其它状况,那么在 etcd 恢复之前,它无法被重新拉起来。
### Kubernetes 的设计对 bug 很有弹性
与任何软件一样Kubernetes 也会有 bug。例如到目前为止我们的集群控制管理器有内存泄漏并且调度器经常崩溃。bug 当然不好,但是,我发现 Kubernetes 的设计可以帮助减轻它的许多核心组件中的错误的影响。
如果你重启动任何组件,将会发生:
* 从 etcd 中读取所有的与它相关的状态
* 基于那些状态(调度 pod、回收完成的 pod、调度 cron 作业、按需部署等等),它会去做那些它认为必须要做的事情
因为,所有的组件并不会在内存中保持状态,你在任何时候都可以重启它们,这可以帮助你减轻各种 bug 的影响。
例如,假设你的控制管理器有内存泄漏。因为控制管理器是无状态的,你可以每小时定期去重启它,或者在感觉到可能导致任何不一致的问题发生时重启它。又或者,调度器中有一个 bug它有时会忘记某个 pod从来不去调度它们那么你可以每隔 10 分钟重启一次调度器来缓解这种情况。(我们并不会这么做,而是会去修复这个 bug但是你确实 _可以这样做_ :)
因此,我觉得即使在它的核心组件中有 bug我仍然可以信任 Kubernetes 的设计可以让我确保集群状态的一致性。并且,总的来说,随着时间的推移软件质量会提高。唯一你必须去运维的有状态的东西就是 etcd。
不用过多地讨论“状态”这个东西 —— 而我认为在 Kubernetes 中很酷的一件事情是,唯一需要去做备份/恢复计划的东西是 etcd (除非为你的 pod 使用了持久化存储的卷)。我认为这样可以使 Kubernetes 运维比你想的更容易一些。
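作为示意,下面是备份 etcd 的一种简单做法(假设你能直接访问 etcd 节点、装有 etcdctl 并使用 v3 API如果 etcd 启用了 TLS 还需要额外的证书参数,快照路径也只是示例):

```
# 保存一份 etcd 快照
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db

# 查看快照的基本信息,确认备份可用
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```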
### 在 Kubernetes 之上实现新的分布式系统是非常容易的
假设你想去实现一个分布式 cron 作业调度系统!从零开始做工作量非常大。但是,在 Kubernetes 里面实现一个分布式 cron 作业调度系统是非常容易的!(仍然没那么简单,毕竟它是一个分布式系统)
我第一次读到 Kubernetes 的 cron 作业控制器的代码时,我对它是如此的简单感到由衷高兴。去读读看,其主要的逻辑大约是 400 行的 Go 代码。去读它吧! => [cronjob_controller.go][4] <=
cron 作业控制器基本上做的是:
* 每 10 秒钟:
* 列出所有已存在的 cron 作业
* 检查是否有需要现在去运行的任务
* 如果有,创建一个新的作业对象去调度,并通过其它的 Kubernetes 控制器实际运行它
* 清理已完成的作业
* 重复以上工作
Kubernetes 的模型是很受限制的(它有定义在 etcd 中的资源模式,控制器读取这些资源并更新到 etcd 中),我认为这种相对固定/受限的模型,可以让你更容易地在 Kubernetes 框架之上开发你自己的分布式系统。
Kamal 给我说的是 “Kubernetes 是一个写你自己的分布式系统的很好的平台” ,而不是“ Kubernetes 是一个你可以使用的分布式系统”,并且,我觉得它真的很有意思。他做了一个 [为你推送到 GitHub 的每个分支运行一个 HTTP 服务的系统][5] 的原型。这花了他一个周末的时间,大约 800 行 Go 代码,我认为它真不可思议!
### Kubernetes 可以使你做一些非常神奇的事情(但并不容易)
我一开始就说 “Kubernetes 可以让你做一些很神奇的事情,你可以用一个配置文件来做这么多的基础设施,它太神奇了”。这是真的!
为什么说 “Kubernetes 并不容易”呢?是因为 Kubernetes 有很多部分,学习怎么去成功地运营一个高可用的 Kubernetes 集群要做很多的工作。比如说,我发现它引入了许多抽象的东西,我需要去理解这些抽象才能调试问题和正确地配置它们。我喜欢学习新东西,因此,它并不会使我发狂或者生气,但是我认为了解这一点很重要 :)
对于 “我不能仅依靠抽象概念” 的一个具体的例子是,我努力学习了许多 [Linux 上网络是如何工作的][6],才让我对设置 Kubernetes 网络稍有信心,这比我以前学过的关于网络的知识要多很多。这种方式很有意思但是非常费时间。在以后的某个时间,我或许写更多的关于设置 Kubernetes 网络的困难/有趣的事情。
或者,为了成功设置我的 Kubernetes CA我写了一篇 [2000 字的博客文章][7],讲述了我不得不学习的关于 Kubernetes 中 CA 的各种工作方式的细节。
我觉得,像 GKEGoogle 的 Kubernetes 产品)这样的托管 Kubernetes 系统可能更简单,因为它们为你做了许多的决定,但是,我没有尝试过它们。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/
作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

View File

@ -0,0 +1,127 @@
怎样完整地离线更新并升级基于 Debian 的操作系统
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Upgrade-Offline-Debian-based-Systems-2-720x340.png)
不久之前我已经向你展示了如何在任意离线的 [Ubuntu][1] 和 [Arch Linux][2] 操作系统上安装软件。 今天,我们将会看看如何完整地离线更新并升级基于 Debian 的操作系统。 和之前所述方法的不同之处在于,这次我们将会升级整个操作系统,而不是单个的软件包。这个方法在你没有网络链接或拥有的网络速度很慢的时候十分有用。
### 完整地离线更新并升级基于 Debian 的操作系统
首先假设你在单位拥有正在运行并配置有高速互联网链接的系统Windows 或者 Linux而在家有一个没有网络链接或网络很慢例如拨号网络的 Debian 或其衍生的操作系统。现在如果你想要离线更新你家里的操作系统怎么办?购买一个更加高速的网络链接?不,根本不需要!你仍然可以通过互联网离线更新升级你的操作系统。这正是 **Apt-Offline**工具可以帮助你做到的。
正如其名apt-offline 是一个为 Debian 及其衍生发行版(诸如 Ubuntu、Linux Mint 这样基于 APT 的操作系统)提供的离线 APT 包管理器。使用 apt-offline我们可以完整地更新/升级我们的 Debian 系统而不需要网络链接。这个程序是由 Python 编程语言写成的兼具 CLI 和图形界面的跨平台工具。
#### 准备工作
* 一个已经联网的操作系统Windows 或者 Linux。在这份指南中为了便于理解我们将之称为在线操作系统。
* 一个离线操作系统Debian 及其衍生版本)。我们称之为离线操作系统。
* 有足够空间容纳所有更新包的 USB 驱动器或者外接硬盘。
#### 安装
Apt-Offline 可以在 Debian 及其衍生版本的默认仓库中获得。如果你的在线操作系统运行的是 Debian、Ubuntu、Linux Mint 及其它基于 DEB 的操作系统,你可以通过下面的命令安装 Apt-Offline
```
sudo apt-get install apt-offline
```
如果你的在线操作系统运行的是非 Debian 类的发行版,使用 `git clone` 获取 Apt-Offline 仓库:
```
git clone https://github.com/rickysarraf/apt-offline.git
```
切换到克隆的目录下并在此处运行:
```
cd apt-offline/
sudo ./apt-offline
```
#### 在离线操作系统(没有联网的操作系统)上的步骤
到你的离线操作系统上创建一个你想存储签名文件的目录:
```
mkdir ~/tmp
cd ~/tmp/
```
你可以自己选择使用任何目录。接下来,运行下面的命令生成签名文件:
```
sudo apt-offline set apt-offline.sig
```
示例输出如下:
```
Generating database of files that are needed for an update.
Generating database of file that are needed for operation upgrade
```
默认情况下apt-offline 将会为更新和升级所需的相关文件生成数据库。你也可以使用 `--update` 或者 `--upgrade` 选项来分别只生成其中一种。
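例如,可以像下面这样分别生成(具体选项请以你所装版本的 `apt-offline set --help` 输出为准,这里只是示意):

```
# 只生成“更新软件包索引”所需的签名文件
sudo apt-offline set apt-offline.sig --update

# 同时包含索引更新与系统升级所需的内容
sudo apt-offline set apt-offline.sig --update --upgrade
```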
拷贝完整的 `tmp` 目录到你的 USB 驱动器或者外接硬盘上,然后换到你的在线操作系统(有网络链接的操作系统)。
#### 在在线操作系统上的步骤
插入你的 USB 驱动器然后进入 `tmp` 文件夹:
```
cd tmp/
```
然后,运行如下命令:
```
sudo apt-offline get apt-offline.sig --threads 5 --bundle apt-offline-bundle.zip
```
在这里的 `--threads 5` 代表着(并发连接的) APT 仓库的数目。如果你想要从更多的仓库下载软件包,你可以增加这里的数值。然后 `--bundle apt-offline-bundle.zip` 选项表示所有的软件包将会打包到一个叫做 `apt-offline-bundle.zip` 的单独存档中。这个存档文件将会被保存在你的当前工作目录中LCTT 译注:即 `tmp` 目录)。
上面的命令将会按照之前在离线操作系统上生成的签名文件下载数据。
![][4]
根据你的网络状况这个操作将会花费几分钟左右的时间。请记住apt-offline 是跨平台的,所以你可以在任何操作系统上使用它下载包。
一旦下载完成,拷贝 `tmp` 文件夹到你的 USB 或者外接硬盘上并且返回你的离线操作系统LCTT 译注:此处的复制操作似不必要,因为我们一直在 USB 存储器的 `tmp` 目录中操作)。千万保证你的 USB 驱动器上有足够的空闲空间存储所有的下载文件,因为所有的包都放在 `tmp` 文件夹里了。
#### 离线操作系统上的步骤
把你的设备插入你的离线操作系统,然后切换到你之前下载了所有包的 `tmp`目录下。
```
cd tmp
```
然后,运行下面的命令来安装所有下载好的包。
```
sudo apt-offline install apt-offline-bundle.zip
```
这个命令将会更新 APT 数据库,所以 APT 将会在本地的 APT 缓存里找到所有需要的包。
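之后就可以像平常一样使用 APT 完成离线升级了,例如(仅为示意):

```
# 先模拟一遍,确认需要的包都能在本地缓存中找到
sudo apt-get -s upgrade

# 确认无误后执行真正的升级
sudo apt-get upgrade
```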
**注意事项:** 如果在线和离线操作系统都在同一个局域网中,你可以通过 `scp` 或者其他传输应用程序将 `tmp` 文件传到离线操作系统中。如果两个操作系统在不同的位置LCTT 译注:意指在不同的局域网),那就使用 USB 设备来拷贝。
好了大伙儿,现在就这么多了。 希望这篇指南对你有用。还有更多好东西正在路上。敬请关注!
祝你愉快!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/fully-update-upgrade-offline-debian-based-systems/
作者:[SK][a]
译者:[leemeans](https://github.com/leemeans)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/install-softwares-offline-ubuntu-16-04/
[2]:https://www.ostechnix.com/install-packages-offline-arch-linux/
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/apt-offline.png

View File

@ -1,27 +1,26 @@
一步一步学习如何在 MariaDB 中配置主从复制
循序渐进学习如何在 MariaDB 中配置主从复制
======
在我们前面的教程中,我们已经学习了 [**如何安装和配置 MariaDB**][1],也学习了 [**管理 MariaDB 的一些基础命令**][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。
复制是用于为我们的数据库去创建多个副本,这些副本可以在其它数据库上用于运行查询,像一些非常繁重的查询可能会影响主数据库服务器的性能,或者我们可以使用它来做数据冗余,或者兼具以上两个目的。我们可以将这个过程自动化,即主服务器到从服务器的复制过程自动进行。执行备份而不影响在主服务器上的写操作。
在我们前面的教程中,我们已经学习了 [如何安装和配置 MariaDB][1],也学习了 [管理 MariaDB 的一些基础命令][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。
复制是用于为我们的数据库创建多个副本,这些副本可以在其它数据库上用于运行查询,像一些非常繁重的查询可能会影响主数据库服务器的性能,或者我们可以使用它来做数据冗余,或者兼具以上两个目的。我们可以将这个过程自动化,即主服务器到从服务器的复制过程自动进行。执行备份而不影响在主服务器上的写操作。
因此,我们现在去配置我们的主-从复制,它需要两台安装了 MariaDB 的机器。它们的 IP 地址如下:
**主服务器 -** 192.168.1.120 **主机名** master.ltechlab.com
- **主服务器 -** 192.168.1.120 **主机名 -** master.ltechlab.com
- **从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com
**从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com
MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[**这个教程**][1]。
MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[ **这个教程**][1]。
### 第 1 步 - 主服务器配置
### **第 1 步 - 主服务器配置**
我们现在进入到 MariaDB 中的一个命名为 ' **important '** 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 ' **/etc/my.cnf** ' 的文件,它是 MariaDB 的配置文件。
我们现在进入到 MariaDB 中的一个命名为 `important` 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 `/etc/my.cnf` 的文件,它是 MariaDB 的配置文件。
```
$ vi /etc/my.cnf
```
在这个文件中找到 [mysqld] 节,然后输入如下内容:
在这个文件中找到 `[mysqld]` 节,然后输入如下内容:
```
[mysqld]
@ -43,7 +42,7 @@ $ systemctl restart mariadb
$ mysql -u root -p
```
在它上面创建一个命名为 'slaveuser' 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限:
在它上面创建一个命名为 `slaveuser` 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限:
```
STOP SLAVE;
@ -53,19 +52,19 @@ FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
```
**注意: ** 我们配置主从复制需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS ** 的值,它可以通过 'show master status' 来获得,因此,你一定要确保你记下了它们的值。
**注意:** 我们配置主从复制需要 `MASTER_LOG_FILE``MASTER_LOG_POS` 的值,它可以通过 `show master status` 来获得,因此,你一定要确保你记下了它们的值。
这些命令运行完成之后,输入 'exit' 退出这个会话。
这些命令运行完成之后,输入 `exit` 退出这个会话。
### 第 2 步 - 创建一个数据库备份,并将它移动到从服务器上
现在,我们需要去为我们的数据库 'important' 创建一个备份,可以使用 'mysqldump' 命令去备份。
现在,我们需要去为我们的数据库 `important` 创建一个备份,可以使用 `mysqldump` 命令去备份。
```
$ mysqldump -u root -p important > important_backup.sql
```
备份完成后,我们需要重新登到 MariaDB 数据库,并解锁我们的表。
备份完成后,我们需要重新登录到 MariaDB 数据库,并解锁我们的表。
```
$ mysql -u root -p
@ -78,7 +77,7 @@ $ UNLOCK TABLES;
### 第 3 步:配置从服务器
我们再次去编辑 '/etc/my.cnf' 文件,找到配置文件中的 [mysqld] 节,然后输入如下内容:
我们再次去编辑(从服务器上的) `/etc/my.cnf` 文件,找到配置文件中的 `[mysqld]` 节,然后输入如下内容:
```
[mysqld]
@ -93,7 +92,7 @@ replicate-do-db=important
$ mysql -u root -p < /data/important_backup.sql
```
当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB为数据库 'important' 上的用户 'slaveuser' 授权。
当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB为数据库 `important` 上的用户 'slaveuser' 授权。
```
$ mysql -u root -p
@ -110,9 +109,9 @@ FLUSH PRIVILEGES;
$ systemctl restart mariadb
```
### **第 4 步:启动复制**
### 第 4 步:启动复制
记住,我们需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS** 变量的值,它可以通过在主服务器上运行 'SHOW MASTER STATUS' 获得。现在登入到从服务器上的 MariaDB然后通过运行下列命令告诉我们的从服务器它应该去哪里找主服务器。
记住,我们需要 `MASTER_LOG_FILE``MASTER_LOG_POS` 变量的值,它可以通过在主服务器上运行 `SHOW MASTER STATUS` 获得。现在登入到从服务器上的 MariaDB然后通过运行下列命令告诉我们的从服务器它应该去哪里找主服务器。
```
STOP SLAVE;
@ -131,13 +130,13 @@ SHOW SLAVE STATUS\G;
$ mysql -u root -p
```
选择数据库为 'important'
选择数据库为 `important`
```
use important;
```
在这个数据库上创建一个名为 test 的表:
在这个数据库上创建一个名为 `test` 的表:
```
create table test (c int);
@ -175,10 +174,10 @@ via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/
作者:[Shusain][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/
[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/
[1]:https://linux.cn/article-8320-1.html
[2]:https://linux.cn/article-9306-1.html

View File

@ -1,12 +1,12 @@
mod 保护您的网站免受应用层 DOS 攻击
Apache 服务器模块保护您的网站免受应用层 DOS 攻击
======
有多种恶意攻击网站的方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“拒绝服务”或“DOS”攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。
有多种可以导致网站下线的攻击方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“<ruby>拒绝服务<rt>Denial Of Service</rt></ruby>DOS攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。
一般来说,有两种形式的 DOS 攻击:
1. OSI 模型的三、四层,即网络层攻击
2. OSI 模型的七层,即应用层攻击
1. OSI 模型的三、四层,即网络层攻击
2. OSI 模型的七层,即应用层攻击
第一种类型的 DOS 攻击——网络层,发生于当大量的垃圾流量流向网页服务器时。当垃圾流量超过网络的处理能力时,网站就会宕机。
@ -14,174 +14,172 @@
本文将着眼于缓解应用层攻击,因为减轻网络层攻击需要大量的可用带宽和上游提供商的合作,这通常不是通过配置网络服务器就可以做到的。
通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近 [报道][2] 了网络层攻击的数量正在减少,而应用层攻击的数量则在增加。
通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近 [报告称][2] 网络层攻击的数量正在减少,而应用层攻击的数量则在增加。
本文将根据 [zdziarski 的博客][4] 来解释如何使用 Apache2 的模块 [mod_evasive][3]。
本文将介绍如何使用 [zdziarski][4] 开发的 Apache2 的模块 [mod_evasive][3]。
另外mod_evasive 会阻止攻击者试图通过尝试数百个组合来猜测用户名和密码,即暴力攻击
另外mod_evasive 会阻止攻击者通过尝试数百个用户名和密码的组合来进行猜测(即暴力攻击)的企图
Mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过相应 IP 地址的几个阈值之一时,会出现一个错误页面。错误页面所需的资源要比一个能够响应合法访问的在线网站少得多。
mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过相应 IP 地址的几个阈值之一时,会出现一个错误页面。错误页面所需的资源要比一个能够响应合法访问的在线网站少得多。
### 在 Ubuntu 16.04 上安装 mod_evasive
Ubuntu 16.04 默认的软件库中包含了 mod_evasive名称为“libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装:
Ubuntu 16.04 默认的软件库中包含了 mod_evasive名称为 “libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装:
```
apt-get update
apt-get upgrade
apt-get install libapache2-mod-evasive
```
现在我们需要配置 mod_evasive。
它的配置文件位于 `/etc/apache2/mods-available/evasive.conf`。默认情况下,所有模块的设置在安装后都会被注释掉。因此,在修改配置文件之前,模块不会干扰到网站流量。
```
<IfModule mod_evasive20.c>
#DOSHashTableSize 3097
#DOSPageCount 2
#DOSSiteCount 50
#DOSPageInterval 1
#DOSSiteInterval 1
#DOSBlockingPeriod 10
#DOSEmailNotify you@yourdomain.com
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
#DOSLogDir "/var/log/mod_evasive"
<IfModule mod_evasive20.c>
#DOSHashTableSize 3097
#DOSPageCount 2
#DOSSiteCount 50
#DOSPageInterval 1
#DOSSiteInterval 1
#DOSBlockingPeriod 10
#DOSEmailNotify you@yourdomain.com
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
#DOSLogDir "/var/log/mod_evasive"
</IfModule>
```
第一部分的参数的含义如下:
* **DOSHashTableSize** - 正在访问网站的 IP 地址列表及其请求数。
* **DOSPageCount** - 在一定的时间间隔内,每个的页面的请求次数。时间间隔由 DOSPageInterval 定义。
* **DOSPageInterval** - mod_evasive 统计页面请求次数的时间间隔。
* **DOSSiteCount** - 与 DOSPageCount 相同,但统计的是网站内任何页面的来自相同 IP 地址的请求数量。
* **DOSSiteInterval** - mod_evasive 统计网站请求次数的时间间隔。
* **DOSBlockingPeriod** - 某个 IP 地址被加入黑名单的时长(以秒为单位)。
* `DOSHashTableSize` - 正在访问网站的 IP 地址列表及其请求数的当前列表。
* `DOSPageCount` - 在一定的时间间隔内,每个页面的请求次数。时间间隔由 DOSPageInterval 定义。
* `DOSPageInterval` - mod_evasive 统计页面请求次数的时间间隔。
* `DOSSiteCount` - 与 `DOSPageCount` 相同,但统计的是来自相同 IP 地址对网站内任何页面的请求数量。
* `DOSSiteInterval` - mod_evasive 统计网站请求次数的时间间隔。
* `DOSBlockingPeriod` - 某个 IP 地址被加入黑名单的时长(以秒为单位)。
如果使用上面显示的默认配置,则在如下情况下,一个 IP 地址会被加入黑名单:
* 每秒请求同一页面超过两次。
* 每秒请求 50 个以上不同页面。
如果某个 IP 地址超过了这些阈值,则被加入黑名单 10 秒钟。
这看起来可能不算久但是mod_evasive 将一直监视页面请求,包括在黑名单中的 IP 地址,并重置其加入黑名单的起始时间。只要一个 IP 地址一直尝试使用 DOS 攻击该网站,它将始终在黑名单中。
其余的参数是:
* **DOSEmailNotify** - 用于接收 DOS 攻击信息和 IP 地址黑名单的电子邮件地址。
* **DOSSystemCommand** - 检测到 DOS 攻击时运行的命令。
* **DOSLogDir** - 用于存放 mod_evasive 的临时文件的目录。
* `DOSEmailNotify` - 用于接收 DOS 攻击信息和 IP 地址黑名单的电子邮件地址。
* `DOSSystemCommand` - 检测到 DOS 攻击时运行的命令。
* `DOSLogDir` - 用于存放 mod_evasive 的临时文件的目录。
### 配置 mod_evasive
默认的配置是一个很好的开始因为它的黑名单里不该有任何合法的用户。取消配置文件中的所有参数DOSSystemCommand 除外)的注释,如下所示:
默认的配置是一个很好的开始,因为它不会阻塞任何合法的用户。取消配置文件中的所有参数(`DOSSystemCommand` 除外)的注释,如下所示:
```
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
DOSEmailNotify JohnW@example.com
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
DOSLogDir "/var/log/mod_evasive"
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
DOSEmailNotify JohnW@example.com
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
DOSLogDir "/var/log/mod_evasive"
</IfModule>
```
必须要创建日志目录并且要赋予其与 apache 进程相同的所有者。这里创建的目录是 `/var/log/mod_evasive` ,并且在 Ubuntu 上将该目录的所有者和组设置为 `www-data` ,与 Apache 服务器相同:
```
mkdir /var/log/mod_evasive
chown www-data:www-data /var/log/mod_evasive
```
在编辑了 Apache 的配置之后,特别是在正在运行的网站上,在重新启动或重新加载之前,最好检查一下语法,因为语法错误将影响 Apache 的启动从而使网站宕机。
Apache 包含一个辅助命令,是一个配置语法检查器。只需运行以下命令来检查您的语法:
```
apachectl configtest
```
如果您的配置是正确的,会得到如下结果:
```
Syntax OK
```
但是,如果出现问题,您会被告知在哪部分发生了什么错误,例如:
```
AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf:
DOSSiteInterval takes one argument, Set site interval
Action 'configtest' failed.
The Apache error log may have more information.
```
如果您的配置通过了 configtest 的测试,那么这个模块可以安全地被启用并且 Apache 可以重新加载:
```
a2enmod evasive
systemctl reload apache2.service
```
Mod_evasive 现在已配置好并正在运行了。
mod_evasive 现在已配置好并正在运行了。
### 测试
为了测试 mod_evasive我们只需要向服务器提出足够的网页访问请求以使其超出阈值并记录来自 Apache 的响应代码。
一个正常并成功的页面请求将收到如下响应:
```
HTTP/1.1 200 OK
```
但是,被 mod_evasive 拒绝的将返回以下内容:
```
HTTP/1.1 403 Forbidden
```
以下脚本会尽可能迅速地向本地主机127.0.0.1,即 localhost的 80 端口发送 HTTP 请求,并打印出每个请求的响应代码。
你所要做的就是把下面的 bash 脚本复制到一个文件中,例如 `mod_evasive_test.sh`
```
#!/bin/bash
set -e
for i in {1..50}; do
curl -s -I 127.0.0.1 | head -n 1
#!/bin/bash
set -e
for i in {1..50}; do
curl -s -I 127.0.0.1 | head -n 1
done
```
这个脚本的部分含义如下:
* curl - 这是一个发出网络请求的命令。
* -s - 隐藏进度表。
* -I - 仅显示响应头部信息。
* head - 打印文件的第一部分。
* -n 1 - 只显示第一行。
* `curl` - 这是一个发出网络请求的命令。
* `-s` - 隐藏进度表。
* `-I` - 仅显示响应头部信息。
* `head` - 打印文件的第一部分。
* `-n 1` - 只显示第一行。
然后赋予其执行权限:
```
chmod 755 mod_evasive_test.sh
```
在启用 mod_evasive **之前**,脚本运行时,将会看到 50 行“HTTP / 1.1 200 OK”的返回值。
在启用 mod_evasive **之前**,脚本运行时,将会看到 50 行 “HTTP / 1.1 200 OK” 的返回值。
但是,启用 mod_evasive 后,您将看到以下内容:
```
HTTP/1.1 200 OK
HTTP/1.1 200 OK
@ -191,13 +189,11 @@ HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
...
```
前两个请求被允许但是在同一秒内第三个请求发出时mod_evasive 拒绝了任何进一步的请求。您还将收到一封电子邮件(邮件地址在选项 `DOSEmailNotify` 中设置),通知您有 DOS 攻击被检测到。
Mod_evasive 现在已经在保护您的网站啦!
mod_evasive 现在已经在保护您的网站啦!
--------------------------------------------------------------------------------
@ -205,7 +201,7 @@ via: https://bash-prompt.net/guides/mod_proxy/
作者:[Elliot Cooper][a]
译者:[jessie-pang](https://github.com/jessie-pang)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,174 @@
为初学者准备的 MariaDB 管理命令
======
之前我们学过了[在 Centos/RHEL 7 上安装 MariaDB 服务器并保证其安全][1],使之成为了 RHEL/CentOS 7 的默认数据库。现在我们再来看看一些有用的 MariaDB 管理命令。这些都是使用 MariaDB 最基础的命令,而且它们对 MySQL 也同样适合,因为 Mariadb 就是 MySQL 的一个分支而已。
**(推荐阅读:[在 RHEL/CentOS 上安装并配置 MongoDB][2])**
### MariaDB 管理命令
#### 1、查看 MariaDB 安装的版本
要查看所安装数据库的当前版本,在终端中输入下面命令:
```
$ mysql --version
```
该命令会告诉你数据库的当前版本。此外你也可以运行下面命令来查看版本的详细信息:
```
$ mysqladmin -u root -p version
```
#### 2、登录 MariaDB
要登录 MariaDB 服务器,运行:
```
$ mysql -u root -p
```
然后输入密码登录。
#### 3、列出所有的数据库
要列出 MariaDB 当前拥有的所有数据库,在你登录到 MariaDB 中后运行:
```
> show databases;
```
LCTT 译注:`$` 这里代表 shell 的提示符,`>` 这里代表 MariaDB shell 的提示符。)
#### 4、创建新数据库
在 MariaDB 中创建新数据库,登录 MariaDB 后运行:
```
> create database dan;
```
若想直接在终端创建数据库,则运行:
```
$ mysqladmin -u user -p create dan
```
这里,`dan` 就是新数据库的名称。
#### 5、删除数据库
要删除数据库,在已登录的 MariaDB 会话中运行:
```
> drop database dan;
```
此外你也可以运行,
```
$ mysqladmin -u root -p drop dan
```
**注意:** 若在运行 `mysqladmin` 命令时提示 “access denied” 错误,这应该是由于我们没有给 root 授权。要对 root 授权,请参照第 7 点方法,只是要将用户改成 root。
#### 6、创建新用户
为数据库创建新用户,运行:
```
> CREATE USER 'dan'@'localhost' IDENTIFIED BY 'password';
```
#### 7、授权用户访问某个数据库
授权用户访问某个数据库,运行:
```
> GRANT ALL PRIVILEGES ON test.* to 'dan'@'localhost';
```
这会赋予用户 `dan` 对名为 `test` 的数据库完全操作的权限。我们也可以限定为用户只赋予 `SELECT`、`INSERT`、`DELETE` 权限。
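例如,只授予这三种权限时可以这样写(沿用上文的数据库 `test` 和用户 `dan`,这里为了方便直接在 shell 里用 `mysql -e` 执行,仅作示意):

```
$ mysql -u root -p -e "GRANT SELECT, INSERT, DELETE ON test.* TO 'dan'@'localhost'; FLUSH PRIVILEGES;"
```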
要赋予访问所有数据库的权限,将 `test` 替换成 `*` 。像这样:
```
> GRANT ALL PRIVILEGES ON *.* to 'dan'@'localhost';
```
#### 8、备份/导出数据库
要创建单个数据库的备份,在终端窗口中运行下列命令,
```
$ mysqldump -u root -p database_name > db_backup.sql
```
若要一次性创建多个数据库的备份则运行:
```
$ mysqldump -u root -p --databases db1 db2 > db12_backup.sql
```
要一次性导出多个数据库,则运行:
```
$ mysqldump -u root -p --all-databases > all_dbs.sql
```
#### 9、从备份中恢复数据库
要从备份中恢复数据库,运行:
```
$ mysql -u root -p database_name < db_backup.sql
```
但这条命令成功的前提是预先没有存在同名的数据库。如果想要恢复数据库数据到已经存在的数据库中,则需要用到 `mysqlimport` 命令:
```
$ mysqlimport -u root -p database_name < db_backup.sql
```
#### 10、更改 mariadb 用户的密码
本例中我们会修改 `root` 的密码,但修改其他用户的密码也是一样的过程。
登录 mariadb 并切换到 'mysql' 数据库:
```
$ mysql -u root -p
> use mysql;
```
然后运行下面命令:
```
> update user set password=PASSWORD('your_new_password_here') where User='root';
```
下一步,重新加载权限:
```
> flush privileges;
```
然后退出会话。
我们的教程至此就结束了,在本教程中我们学习了一些有用的 MariaDB 管理命令。欢迎您的留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/mariadb-administration-commands-beginners/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/
[2]:http://linuxtechlab.com/mongodb-installation-configuration-rhelcentos/

View File

@ -0,0 +1,201 @@
如何统计 Linux 中文件和文件夹/目录的数量
======
嗨,伙计们,今天我们再次带来一系列可以在多方面帮助到你的复杂命令。通过组合使用这些命令,可以帮助您统计当前目录中的文件和目录数量、进行递归统计,以及列出某个特定用户创建的文件等。
在本教程中,我们将向您展示如何使用多个命令,并使用 `ls`、`egrep`、`wc` 和 `find` 命令执行一些高级操作。 下面的命令将可用在多个方面。
为了实验,我打算总共创建 7 个文件和 2 个文件夹5 个常规文件和 2 个隐藏文件)。 下面的 `tree` 命令的输出清楚的展示了文件和文件夹列表。
```
# tree -a /opt
/opt
├── magi
│   └── 2g
│   ├── test5.txt
│   └── .test6.txt
├── test1.txt
├── test2.txt
├── test3.txt
├── .test4.txt
└── test.txt
2 directories, 7 files
```
### 示例-1
统计当前目录的文件(不包括隐藏文件)。 运行以下命令以确定当前目录中有多少个文件并且不计算点文件LCTT 译注:点文件即以“.” 开头的文件,它们在 Linux 默认是隐藏的)。
```
# ls -l . | egrep -c '^-'
4
```
**细节:**
* `ls` 列出目录内容
* `-l` 使用长列表格式
* `.` 列出有关文件的信息(默认为当前目录)
* `|` 将一个程序的输出发送到另一个程序进行进一步处理的控制操作符
* `egrep` 打印符合模式的行
* `-c` 通用输出控制
* `'^-'` 以“-”开头的行(`ls -l` 列出长列表时,行首的 “-” 代表普通文件)
### 示例-2
统计当前目录包含隐藏文件在内的文件。 包括当前目录中的点文件。
```
# ls -la . | egrep -c '^-'
5
```
### 示例-3
运行以下命令来计数当前目录的文件和文件夹。 它会计算所有的文件和目录。
```
# ls -l | wc -l
5
```
**细节:**
* `ls` 列出目录内容
* `-l` 使用长列表格式
* `|` 将一个程序的输出发送到另一个程序进行进一步处理的控制操作符
* `wc` 这是一个统计每个文件的换行符、单词和字节数的命令
* `-l` 输出换行符的数量
### 示例-4
统计当前目录包含隐藏文件和目录在内的文件和文件夹。
```
# ls -la | wc -l
8
```
### 示例-5
递归计算当前目录的文件,包括隐藏文件。
```
# find . -type f | wc -l
7
```
**细节:**
* `find` 搜索目录结构中的文件
* `-type` 文件类型
* `f` 常规文件
* `wc` 这是一个统计每个文件的换行符、单词和字节数的命令
* `-l` 输出换行符的数量
### 示例-6
使用 `tree` 命令输出目录和文件数(不包括隐藏文件)。
```
# tree | tail -1
2 directories, 5 files
```
### 示例-7
使用包含隐藏文件的 `tree` 命令输出目录和文件计数。
```
# tree -a | tail -1
2 directories, 7 files
```
### 示例-8
运行下面的命令递归计算包含隐藏目录在内的目录数。
```
# find . -type d | wc -l
3
```
### 示例-9
根据文件扩展名计数文件数量。 这里我们要计算 `.txt` 文件。
```
# find . -name "*.txt" | wc -l
7
```
### 示例-10
组合使用 `echo` 命令和 `wc` 命令统计当前目录中的所有文件。 `4` 表示当前目录中的文件数量。
```
# echo *.* | wc
1 4 39
```
### 示例-11
组合使用 `echo` 命令和 `wc` 命令来统计当前目录中的所有目录。 第二个 `1` 表示当前目录中的目录数量。
```
# echo */ | wc
1 1 6
```
### 示例-12
组合使用 `echo` 命令和 `wc` 命令来统计当前目录中的所有文件和目录。 `5` 表示当前目录中的目录和文件的数量。
```
# echo * | wc
1 5 44
```
### 示例-13
统计系统(整个系统)中的文件数。
```
# find / -type f | wc -l
69769
```
### 示例-14
统计系统(整个系统)中的文件夹数。
```
# find / -type d | wc -l
8819
```
### 示例-15
运行以下命令来计算系统(整个系统)中的文件、文件夹、硬链接和符号链接数。
```
# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c
8779 dirs
69343 files
20 hardlinks
11646 symlinks
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/

View File

@ -0,0 +1,70 @@
Linux 与 Unix 之差异
==============
[![Linux vs. Unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix-vs-linux_orig.jpg)][1]
在计算机时代,相当一部分的人错误地认为 **Unix****Linux** 操作系统是一样的。然而,事实恰好相反。让我们仔细看看。
### 什么是 Unix
[![what is unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png)][2]
在 IT 领域,以操作系统而为人所知的 Unix是 1969 年 AT&T 公司在美国新泽西所开发的(目前它的商标权由国际开放标准组织所拥有)。大多数的操作系统都受到了 Unix 的启发,而 Unix 也受到了未完成的 Multics 系统的启发。Unix 的另一个版本是来自贝尔实验室的 Plan 9。
#### Unix 被用于哪里?
作为一个操作系统Unix 大多被用在服务器、工作站,现在也有用在个人计算机上。它在创建互联网、计算机网络或客户端/服务器模型方面发挥着非常重要的作用。
#### Unix 系统的特点
* 支持多任务
* 相比 Multics 操作更加简单
* 所有数据以纯文本形式存储
* 采用单一根文件的树状存储
* 能够同时访问多用户账户
#### Unix 操作系统的组成
**a)** 单体内核,负责低级操作以及由用户发起的操作,与内核的通信通过系统调用进行。
**b)** 系统工具
**c)** 其他应用程序
### 什么是 Linux
[![what is linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png)][4]
这是一个基于 Unix 操作系统原理的开源操作系统。正如开源的含义一样它是一个可以自由下载的系统。它也可以通过编辑、添加及扩充其源代码而定制该系统。这是它最大的好处之一而不像今天的其它操作系统Windows、Mac OS X 等需要付费。Unix 系统不是创建新系统的唯一模版,另外一个重要的因素是 MINIX 系统,不像 Linux这个版本被其缔造者 Andrew Tanenbaum 用于商业系统。
Linux 由 Linus Torvalds 开发于 1991 年,最初是他作为个人兴趣而开发的一个操作系统。为什么 Linux 借鉴 Unix 的一个主要原因是因为其简洁性。Linux 第一个官方版本0.01)发布于 1991 年 9 月 17 日。虽然这个系统并不是很完美和完善,但 Linus 对它产生很大的兴趣并在几天内Linus 发出了一些关于 Linux 源代码扩展以及其他想法的电子邮件。
#### Linux 的特点
Linux 的基石是 Unix 内核,其基于 Unix 的基本特点以及 **POSIX** 和单独的 **UNIX 规范标准**。看起来,该操作系统官方名字取自于 **Linus**,其中其操作系统名称的尾部的 “x” 和 **Unix 系统**相联系。
#### 主要功能
* 同时运行多任务(多任务)
* 程序可以包含一个或多个进程(多用途系统),且每个进程可能有一个或多个线程。
* 多用户,因此它可以运行多个用户程序。
* 个人帐户受适当授权的保护。
* 因此账户准确地定义了系统控制权。
**企鹅 Tux** 的 Logo 作者是 Larry Ewing他选择这个企鹅作为他的开源 **Linux 操作系统**的吉祥物。**Linus Torvalds** 最初提出这个新的操作系统的名字为 “Freax” ,即为 “自由free” + “奇异freak” + xUNIX 系统)的结合字,而不像存放它的首个版本的 FTP 服务器上所起的名字Linux
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/linux-vs-unix
作者:[linuxandubuntu][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[imquanquan](https://github.com/imquanquan), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/linux-vs-unix
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png
[3]:http://www.unix.org/what_is_unix.html
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png
[5]:https://www.linux.com

View File

@ -0,0 +1,106 @@
创建局域网内的离线 Yum 仓库
======
在早先的教程中,我们讨论了[如何使用 ISO 镜像和在线 Yum 仓库的方式来创建自己的 Yum 仓库 ][1]。创建自己的 Yum 仓库是一个不错的想法,但若网络中只有 2-3 台 Linux 机器那就没啥必要了。不过若你的网络中有大量的 Linux 服务器,而且这些服务器还需要定时进行升级,或者你有大量服务器无法直接访问互联网,那么创建自己的 Yum 仓库就很有必要了。
当我们有大量的 Linux 服务器,而每个服务器都直接从互联网上升级系统时,数据消耗会很可观。为了节省数据量,我们可以创建个离线 Yum 源并将之分享到本地网络中。网络中的其他 Linux 机器就可以直接从本地 Yum 上获取系统更新,从而节省数据量,而且传输速度也会很好。
我们可以使用下面两种方法来分享 Yum 仓库:
* 使用 Web 服务器Apache
* 使用 FTP 服务器VSFTPD
在开始讲解这两个方法之前,我们需要先根据[之前的教程][1]创建一个 Yum 仓库。
### 使用 Web 服务器
首先在 Yum 服务器上安装 Web 服务器Apache我们假设服务器 IP 是 `192.168.1.100`。我们已经在这台系统上配置好了 Yum 仓库,现在我们来使用 `yum` 命令安装 Apache Web 服务器,
```
$ yum install httpd
```
下一步,拷贝所有的 rpm 包到默认的 Apache 根目录下,即 `/var/www/html`,由于我们已经将包都拷贝到了 `/YUM` 下,我们也可以创建一个软连接来从 `/var/www/html` 指向 `/YUM`
```
$ ln -s /YUM /var/www/html/CentOS
```
重启 Web 服务器应用改变:
```
$ systemctl restart httpd
```
#### 配置客户端机器
服务端的配置就完成了,现在需要配置下客户端来从我们创建的离线 Yum 中获取升级包,这里假设客户端 IP 为 `192.168.1.101`
`/etc/yum.repos.d` 目录中创建 `offline-yum.repo` 文件,输入如下信息,
```
$ vi /etc/yum.repos.d/offline-yum.repo
```
```
[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1
```
客户端也配置完了。试一下用 `yum` 来安装/升级软件包来确认仓库是正常工作的。
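例如,可以在客户端上这样验证(软件包名仅作演示,实际以你仓库中包含的包为准):

```
# 清理并重建缓存,确认能看到刚配置的本地仓库
$ yum clean all
$ yum repolist

# 尝试从本地仓库安装一个软件包
$ yum install httpd
```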
### 使用 FTP 服务器
在 FTP 上分享 Yum首先需要安装所需要的软件包即 vsftpd。
```
$ yum install vsftpd
```
vsftp 的默认根目录为 `/var/ftp/pub`,因此你可以拷贝 rpm 包到这个目录,或者为它创建一个软连接:
```
$ ln -s /YUM /var/ftp/pub/CentOS
```
重启服务应用改变:
```
$ systemctl restart vsftpd
```
#### 配置客户端机器
像上面一样,在 `/etc/yum.repos.d` 中创建 `offline-yum.repo` 文件,并输入下面信息,
```
$ vi /etc/yum.repos.d/offline-yum.repo
```
```
[offline-yum]
name=Local YUM
baseurl=ftp://192.168.1.100/pub/CentOS/7
gpgcheck=0
enabled=1
```
现在客户机可以通过 ftp 接收升级了。要配置 vsftpd 服务器为其他 Linux 系统分享文件,请[阅读这篇指南][2]。
这两种方法都很不错,你可以任意选择其中一种方法。有任何疑问或这想说的话,欢迎在下面留言框中留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/offline-yum-repository-for-lan/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:https://linux.cn/article-9296-1.html
[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/

View File

@ -1,3 +1,5 @@
XLCYun 翻译中
Manjaro Gaming: Gaming on Linux Meets Manjaros Awesomeness
======
[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1]

View File

@ -0,0 +1,94 @@
6 pivotal moments in open source history
============================================================
### Here's how open source developed from a printer jam solution at MIT to a major development model in the tech industry today.
![6 pivotal moments in open source history](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/welcome-open-sign-door-osdc-lead.png?itok=i9jCnaiu "6 pivotal moments in open source history")
Image credits : [Alan Levine][4]. [CC0 1.0][5]
Open source has taken a prominent role in the IT industry today. It is everywhere from the smallest embedded systems to the biggest supercomputer, from the phone in your pocket to the software running the websites and infrastructure of the companies we engage with every day. Let's explore how we got here and discuss key moments from the past 40 years that have paved a path to the current day.
### 1\. RMS and the printer
In the late 1970s, [Richard M. Stallman (RMS)][6] was a staff programmer at MIT. His department, like those at many universities at the time, shared a PDP-10 computer and a single printer. One problem they encountered was that paper would regularly jam in the printer, causing a string of print jobs to pile up in a queue until someone fixed the jam. To get around this problem, the MIT staff came up with a nice social hack: They wrote code for the printer driver so that when it jammed, a message would be sent to everyone who was currently waiting for a print job: "The printer is jammed, please fix it." This way, it was never stuck for long.
In 1980, the lab accepted a donation of a brand-new laser printer. When Stallman asked for the source code for the printer driver, however, so he could reimplement the social hack to have the system notify users on a paper jam, he was told that this was proprietary information. He heard of a researcher in a different university who had the source code for a research project, and when the opportunity arose, he asked this colleague to share it—and was shocked when they refused. They had signed an NDA, which Stallman took as a betrayal of the hacker culture.
The late '70s and early '80s represented an era where software, which had traditionally been given away with the hardware in source code form, was seen to be valuable. Increasingly, MIT researchers were starting software companies, and selling licenses to the software was key to their business models. NDAs and proprietary software licenses became the norms, and the best programmers were hired from universities like MIT to work on private development projects where they could no longer share or collaborate.
As a reaction to this, Stallman resolved that he would create a complete operating system that would not deprive users of the freedom to understand how it worked, and would allow them to make changes if they wished. It was the birth of the free software movement.
### 2\. Creation of GNU and the advent of free software
By late 1983, Stallman was ready to announce his project and recruit supporters and helpers. In September 1983, [he announced the creation of the GNU project][7] (GNU stands for GNU's Not Unix—a recursive acronym). The goal of the project was to clone the Unix operating system to create a system that would give complete freedom to users.
In January 1984, he started working full-time on the project, first creating a compiler system (GCC) and various operating system utilities. Early in 1985, he published "[The GNU Manifesto][8]," which was a call to arms for programmers to join the effort, and launched the Free Software Foundation in order to accept donations to support the work. This document is the founding charter of the free software movement.
### 3\. The writing of the GPL
Until 1989, software written and released by the [Free Software Foundation][9] and RMS did not have a single license. Emacs was released under the Emacs license, GCC was released under the GCC license, and so on; however, after a company called Unipress forced Stallman to stop distributing copies of an Emacs implementation they had acquired from James Gosling (of Java fame), he felt that a license to secure user freedoms was important.
The first version of the GNU General Public License was released in 1989, and it encapsulated the values of copyleft (a play on words—what is the opposite of copyright?): You may use, copy, distribute, and modify the software covered by the license, but if you make changes, you must share the modified source code alongside the modified binaries. This simple requirement to share modified software, in combination with the advent of the internet in the 1990s, is what enabled the decentralized, collaborative development model of the free software movement to flourish.
### 4\. "The Cathedral and the Bazaar"
By the mid-1990s, Linux was starting to take off, and free software had become more mainstream—or perhaps "less fringe" would be more accurate. The Linux kernel was being developed in a way that was completely different to anything people had been seen before, and was very successful doing it. Out of the chaos of the kernel community came order, and a fast-moving project.
In 1997, Eric S. Raymond published the seminal essay, "[The Cathedral and the Bazaar][10]," comparing and contrasting the development methodologies and social structure of GCC and the Linux kernel and talking about his own experiences with a "bazaar" development model with the Fetchmail project. Many of the principles that Raymond describes in this essay will later become central to agile development and the DevOps movement—"release early, release often," refactoring of code, and treating users as co-developers are all fundamental to modern software development.
This essay has been credited with bringing free software to a broader audience, and with convincing executives at software companies at the time that releasing their software under a free software license was the right thing to do. Raymond went on to be instrumental in the coining of the term "open source" and the creation of the Open Source Institute.
"The Cathedral and the Bazaar" was credited as a key document in the 1998 release of the source code for the Netscape web browser Mozilla. At the time, this was the first major release of an existing, widely used piece of desktop software as free software, which brought it further into the public eye.
### 5\. Open source
As far back as 1985, the ambiguous nature of the word "free", used to describe software freedom, was identified as problematic by RMS himself. In the GNU Manifesto, he identified "give away" and "for free" as terms that confused zero price and user freedom. "Free as in freedom," "Speech not beer," and similar mantras were common when free software hit a mainstream audience in the late 1990s, but a number of prominent community figures argued that a term was needed that made the concept more accessible to the general public.
After Netscape released the source code for Mozilla in 1998 (see #4), a group of people, including Eric Raymond, Bruce Perens, Michael Tiemann, Jon "Maddog" Hall, and many of the leading lights of the free software world, gathered in Palo Alto to discuss an alternative term. The term "open source" was [coined by Christine Peterson][11] to describe free software, and the Open Source Institute was later founded by Bruce Perens and Eric Raymond. The fundamental difference with proprietary software, they argued, was the availability of the source code, and so this was what should be put forward first in the branding.
Later that year, at a summit organized by Tim O'Reilly, an extended group of some of the most influential people in the free software world at the time gathered to debate various new brands for free software. In the end, "open source" edged out "sourceware," and open source began to be adopted by many projects in the community.
There was some disagreement, however. Richard Stallman and the Free Software Foundation continued to champion the term "free software," because to them, the fundamental difference with proprietary software was user freedom, and the availability of source code was just a means to that end. Stallman argued that removing the focus on freedom would lead to a future where source code would be available, but the user of the software would not be able to avail of the freedom to modify the software. With the advent of web-deployed software-as-a-service and open source firmware embedded in devices, the battle continues to be waged today.
### 6\. Corporate investment in open source—VA Linux, Red Hat, IBM
In the late 1990s, a series of high-profile events led to a huge increase in the professionalization of free and open source software. Among these, the highest-profile events were the IPOs of VA Linux and Red Hat in 1999\. Both companies had massive gains in share price on their opening days as publicly traded companies, proving that open source was now going commercial and mainstream.
Also in 1999, IBM announced that they were supporting Linux by investing $1 billion in its development, making it less risky to traditional enterprise users. The following year, Sun Microsystems released the source code to its cross-platform office suite, StarOffice, and created the [OpenOffice.org][12] project.
The combined effect of massive Silicon Valley funding of open source projects, the attention of Wall Street for young companies built around open source software, and the market credibility that tech giants like IBM and Sun Microsystems brought created massive adoption of open source, and the embrace of the open development model that helped it thrive has led to the dominance of Linux and open source in the tech industry today.
_Which pivotal moments would you add to the list? Let us know in the comments._
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-11423-8ecef7f357341aaa7aee8b43e9b530c9.png?itok=n1snBFq3)][13] Dave Neary - Dave Neary is a member of the Open Source and Standards team at Red Hat, helping make Open Source projects important to Red Hat be successful. Dave has been around the free and open source software world, wearing many different hats, since sending his first patch to the GIMP in 1999.[More about me][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/pivotal-moments-history-open-source
作者:[Dave Neary ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dneary
[1]:https://opensource.com/article/18/2/pivotal-moments-history-open-source?rate=gsG-JrjfROWACP7i9KUoqmH14JDff8-31C2IlNPPyu8
[2]:https://opensource.com/users/dneary
[3]:https://opensource.com/user/16681/feed
[4]:https://www.flickr.com/photos/cogdog/6476689463/in/photolist-aSjJ8H-qHAvo4-54QttY-ofm5ZJ-9NnUjX-tFxS7Y-bPPjtH-hPYow-bCndCk-6NpFvF-5yQ1xv-7EWMXZ-48RAjB-5EzYo3-qAFAdk-9gGty4-a2BBgY-bJsTcF-pWXATc-6EBTmq-SkBnSJ-57QJco-ddn815-cqt5qG-ddmYSc-pkYxRz-awf3n2-Rvnoxa-iEMfeG-bVfq5-jXy74D-meCC1v-qx22rx-fMScsJ-ci1435-ie8P5-oUSXhp-xJSm9-bHgApk-mX7ggz-bpsxd7-8ukud7-aEDmBj-qWkytq-ofwhdM-b7zSeD-ddn5G7-ddn5gb-qCxnB2-S74vsk
[5]:https://creativecommons.org/publicdomain/zero/1.0/
[6]:https://en.wikipedia.org/wiki/Richard_Stallman
[7]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[8]:https://www.gnu.org/gnu/manifesto.en.html
[9]:https://www.fsf.org/
[10]:https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
[11]:https://opensource.com/article/18/2/coining-term-open-source-software
[12]:http://www.openoffice.org/
[13]:https://opensource.com/users/dneary
[14]:https://opensource.com/users/dneary
[15]:https://opensource.com/users/dneary
[16]:https://opensource.com/article/18/2/pivotal-moments-history-open-source#comments
[17]:https://opensource.com/tags/licensing

View File

@ -0,0 +1,103 @@
How I coined the term 'open source'
============================================================
### Christine Peterson finally publishes her account of that fateful day, 20 years ago.
![How I coined the term 'open source'](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hello-name-sticker-badge-tag.png?itok=fAgbMgBb "How I coined the term 'open source'")
Image by : opensource.com
In a few days, on February 3, the 20th anniversary of the introduction of the term "[open source software][6]" is upon us. As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence.
I am the originator of the term "open source software" and came up with it while executive director at Foresight Institute. Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group.
This is my account of how I came up with it, how it was proposed, and the subsequent reactions. Of course, there are a number of accounts of the coining of the term, for example by Eric Raymond and Richard Stallman, yet this is mine, written on January 2, 2006.
It has never been published, until today.
* * *
The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
This term had long been used in an "intelligence" (i.e., spying) context, but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. The account below describes how the term [open source software][7] caught on and became the name of both an industry and a movement.
### Meetings on computer security
In late 1997, weekly meetings were being held at Foresight Institute to discuss computer security. Foresight is a nonprofit think tank focused on nanotechnology and artificial intelligence, and software security is regarded as central to the reliability and security of both. We had identified free software as a promising approach to improving software security and reliability and were looking for ways to promote it. Interest in free software was starting to grow outside the programming community, and it was increasingly clear that an opportunity was coming to change the world. However, just how to do this was unclear, and we were groping for strategies.
At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept.
### Releasing Netscape
On February 2, 1998, Eric Raymond arrived on a visit to work with Netscape on the plan to release the browser code under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message. In addition to Eric and me, active participants included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software.
While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon.
Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. He was right in theory; however, I didn't have a better idea, so I thought I would try to go ahead and introduce it. In hindsight, I should have simply proposed it to Eric Raymond, but I didn't know him well at the time, so I took an indirect strategy instead.
Todd had agreed strongly about the need for a new term and offered to assist in getting the term introduced. This was helpful because, as a non-programmer, my influence within the free software community was weak. My work in nanotechnology education at Foresight was a plus, but not enough for me to be taken very seriously on free software questions. As a Linux programmer, Todd would be listened to more closely.
### The key meeting
Later that week, on February 5, 1998, a group was assembled at VA Research to brainstorm on strategy. Attending—in addition to Eric Raymond, Todd, and me—were Larry Augustin, Sam Ockman, and attending by phone, Jon "maddog" Hall.
The primary topic was promotion strategy, especially which companies to approach. I said little, but was looking for an opportunity to introduce the proposed term. I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable.
Fortunately, Todd was on the ball. Instead of making an assertion that the community should use this specific new term, he did something less directive—a smart thing to do with this community of strong-willed individuals. He simply used the term in a sentence on another topic—just dropped it into the conversation to see what happened. I went on alert, hoping for a response, but there was none at first. The discussion continued on the original topic. It seemed only he and I had noticed the usage.
Not so—memetic evolution was in action. A few minutes later, one of the others used the term, evidently without noticing, still discussing a topic other than terminology. Todd and I looked at each other out of the corners of our eyes to check: yes, we had both noticed what happened. I was excited—it might work! But I kept quiet: I still had low status in this group. Probably some were wondering why Eric had invited me at all.
Toward the end of the meeting, the [question of terminology][8] was brought up explicitly, probably by Todd or Eric. Maddog mentioned "freely distributable" as an earlier term, and "cooperatively developed" as a newer term. Eric listed "free software," "open source," and "sourceware" as the main options. Todd advocated the "open source" model, and Eric endorsed this. I didn't say much, letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. Only about 10% of my notes from this meeting are on the terminology question.
But I was elated. These were some key leaders in the community, and they liked the new name, or at least didn't object. This was a very good sign. There was probably not much more I could do to help; Eric Raymond was far better positioned to spread the new meme, and he did. Bruce Perens signed on to the effort immediately, helping set up [Opensource.org][9] and playing a key role in spreading the new term.
For the name to succeed, it was necessary, or at least highly desirable, that Tim O'Reilly agree and actively use it in his many projects on behalf of the community. Also helpful would be use of the term in the upcoming official release of the Netscape Navigator code. By late February, both O'Reilly & Associates and Netscape had started to use the term.
### Getting the name out
After this, there was a period during which the term was promoted by Eric Raymond to the media, by Tim O'Reilly to business, and by both to the programming community. It seemed to spread very quickly.
On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. Announced in advance as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]."
These months were extremely exciting for open source. Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public.
A quick Google search indicates that "open source" appears more often than "free software," but there still is substantial use of the free software term, which remains useful and should be included when communicating with audiences who prefer it.
### A happy twinge
When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. This was my own fault; I had neglected to tell Eric the details. My impulse was to let it pass and stay in the background, but Todd felt otherwise. He suggested to me that one day I would be glad to be known as the person who coined the name "open source software." He explained the situation to Eric, who promptly updated his site.
Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge.
The big credit for persuading the community goes to Eric Raymond and Tim O'Reilly, who made it happen. Thanks to them for crediting me, and to Todd Anderson for his role throughout. The above is not a complete account of open source history; apologies to the many key players whose names do not appear. Those seeking a more complete account should refer to the links in this article and elsewhere on the net.
### About the author
[![photo of Christine Peterson](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/cp2016_crop2_185.jpg?itok=vUkSjFig)][13] Christine Peterson - Christine Peterson writes, lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence... ][2][more about Christine Peterson][3][More about me][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/coining-term-open-source-software
作者:[ Christine Peterson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/christine-peterson
[1]:https://opensource.com/article/18/2/coining-term-open-source-software?rate=HFz31Mwyy6f09l9uhm5T_OFJEmUuAwpI61FY-fSo3Gc
[2]:http://intelligence.org/
[3]:https://opensource.com/users/christine-peterson
[4]:https://opensource.com/users/christine-peterson
[5]:https://opensource.com/user/206091/feed
[6]:https://opensource.com/resources/what-open-source
[7]:https://opensource.org/osd
[8]:https://wiki2.org/en/Alternative_terms_for_free_software
[9]:https://opensource.org/
[10]:http://www.oreilly.com/pub/pr/636
[11]:http://www.oreilly.com/pub/pr/796
[12]:https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Alternative_terms_for_free_software.html
[13]:https://opensource.com/users/christine-peterson
[14]:https://opensource.com/users/christine-peterson
[15]:https://opensource.com/users/christine-peterson
[16]:https://opensource.com/article/18/2/coining-term-open-source-software#comments

View File

@ -0,0 +1,95 @@
IT automation: How to make the case
======
At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road.
Path #1 might seem to offer the shortest route from A to B: Simply force-feed the project to everyone by executive mandate, essentially saying, "You're going to do this or else."
Path #2 might appear less direct, because on this journey you take the time to explain the strategy and the reasons behind it. In fact, you're going to be making pit stops along this route, rather than marathoning from start to finish: "Here's what we're doing and why we're doing it."
Guess which path bears better results?
If you said #2, you've traveled both paths before and experienced the results first-hand. Getting people on board with major changes beforehand is almost always the smarter choice.
IT leaders know as well as anyone that with significant change often comes [significant fear][1], skepticism, and other challenges. It may be especially true with IT automation. The term alone sounds scary to some people, and it is often tied to misconceptions. Helping people understand the what, why, and how of your company's automation strategy is a necessary step to achieving your goals associated with that strategy.
[ **Read our related article,** [**IT automation best practices: 7 keys to long-term success**][2]. ]
With that in mind, we asked a variety of IT leaders for their advice on making the case for automation in your organization:
## 1. Show people what's in it for them
Let's face it: Self-interest and self-preservation are natural instincts. Tapping into that human tendency is a good way to get people on board: Show people how your automation strategy will benefit them and their jobs. Will automating a particular process in the software pipeline mean fewer middle-of-the-night calls for team members? Will it enable some people to dump low-skill, manual tasks in favor of more strategic, higher-order work -- the sort that helps them take the next step in their career?
"Convey what's in it for them, and how it will benefit clients and the whole company," advises Vipul Nagrath, global CIO at [ADP][3]. "Compare the current state to a brighter future state, where the company enjoys greater stability, agility, efficiency, and security."
The same approach holds true when making the case outside of IT; just lighten up on the jargon when explaining the benefits to non-technical stakeholders, Nagrath says.
Setting up a before-and-after picture is a good storytelling device for helping people see the upside.
"You want to paint a picture of the current state that people can relate to," Nagrath says. "Present what's working, but also highlight what's causing teams to be less than agile." Then explain how automating certain processes will improve that current state.
## 2. Connect automation to specific business goals
Part of making a strong case entails making sure people understand that you're not just trend-chasing. If you're automating simply for the sake of automating, people will sniff that out and become more resistant -- perhaps especially within IT.
"The case for automation needs to be driven by a business demand signal, such as revenue or operating expense," says David Emerson, VP and deputy CISO at [Cyxtera][4]. "No automation endeavor is self-justifying, and no technical feat, generally, should be a means unto itself, unless it's a core competency of the company."
Like Nagrath, Emerson recommends promoting the incentives associated with achieving the business goals of automation, and working toward these goals (and corresponding incentives) in an iterative, step-by-step fashion.
## 3. Break the automation plan into manageable pieces
Even if your automation strategy is literally "automate everything," that's a tough sell (and probably unrealistic) for most organizations. You'll make a stronger case with a plan that approaches automation manageable piece by manageable piece, and that enables greater flexibility to adapt along the way.
"When making a case for automation, I recommend clearly illustrating the incentive to move to an automated process, and allowing iteration toward that goal to introduce and prove the benefits at lower risk," Emerson says.
Sergey Zuev, founder at [GA Connector][5], shares an in-the-trenches account of why automating incrementally is crucial and how it will help you build a stronger, longer-lasting argument for your strategy. Zuev should know: His company's tool automates the import of data from CRM applications into Google Analytics. But it was actually the company's internal experience automating its own customer onboarding process that led to a lightbulb moment.
"At first, we tried to build the whole onboarding funnel at once, and as a result, the project dragged [on] for months," Zuev says. "After realizing that it [was] going nowhere, we decided to select small chunks that would have the biggest immediate effect, and start with that. As a result, we managed to implement one of the email sequences in just a week, and are already reaping the benefits of the decreased manual effort."
## 4. Sell the big-picture benefits too
A step-by-step approach does not preclude painting a bigger picture. Just as it's a good idea to make the case at the individual or team level, it's also a good idea to help people understand the company-wide benefits.
"If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics."
Eric Kaplan, CTO at [AHEAD][6], agrees that using small wins to show automation's value is a smart strategy for winning people over. But the value those so-called "small" wins reveal can actually help you sharpen the big picture for people. Kaplan points to the value of individual and organizational time as an area everyone can connect with easily.
"The best place to do this is where you can show savings in terms of time," Kaplan says. "If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics."
Time and scalability are powerful benefits business and IT colleagues, both charged with growing the business, can grasp.
"The result of automation is scalability -- less effort per person to maintain and grow your IT environment," as [Red Hat][7] VP, Global Services John Allessio recently [noted][8]. "If adding manpower is the only way to grow your business, then scalability is a pipe dream. Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution." (See his full article, [What DevOps teams really need from a CIO][8].)
## 5. Promote the heck out of your results
At the outset of your automation strategy, you'll likely be making the case based on goals and the anticipated benefits of achieving those goals. But as your automation strategy evolves, there's no case quite as convincing as one grounded in real-world results.
"Seeing is believing," says Nagrath, ADP's CIO. "Nothing quiets skeptics like a track record of delivery."
That means, of course, not only achieving your goals, but also doing so on time -- another good reason for the iterative, step-by-step approach.
While quantitative results such as percentage improvements or cost savings can speak loudly, Nagrath advises his fellow IT leaders not to stop there when telling your automation story.
"Making a case for automation is also a qualitative discussion, where we can promote the issues prevented, overall business continuity, reductions in failures/errors, and associates taking on [greater] responsibility as they tackle more value-added tasks."
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
作者:[Kevin Casey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
[3]:https://www.adp.com/
[4]:https://www.cyxtera.com/
[5]:http://gaconnector.com/
[6]:https://www.thinkahead.com/
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -0,0 +1,108 @@
Open source is 20: How it changed programming and business forever
======
![][1]
Every company in the world now uses open-source software. Microsoft, once its greatest enemy, is [now an enthusiastic open supporter][2]. Even [Windows is now built using open-source techniques][3]. And if you ever searched on Google, bought a book from Amazon, watched a movie on Netflix, or looked at your friend's vacation pictures on Facebook, you're an open-source user. Not bad for a technology approach that turns 20 on February 3.
Now, free software has been around since the first computers, but the philosophies of free software and open source are both much newer. In the 1970s and 80s, companies rose up which sought to profit by making proprietary software. In the nascent PC world, no one even knew about free software. But on the Internet, which was dominated by Unix and ITS systems, it was a different story.
In the late 70s, [Richard M. Stallman][6], also known as RMS, then an MIT programmer, created a free printer utility based on its source code. But then a new laser printer arrived on the campus and he found he could no longer get the source code and so he couldn't recreate the utility. The angry [RMS created the concept of "Free Software."][7]
RMS's goal was to create a free operating system, [Hurd][8]. To make this happen, in September 1983 [he announced the creation of the GNU project][9] (GNU stands for GNU's Not Unix -- a recursive acronym). By January 1984, he was working full-time on the project. To help build it, he created [GCC][10], the grandfather of all free-software/open-source compiler systems, and other operating system utilities. Early in 1985, he published "[The GNU Manifesto][11]," which was the founding charter of the free software movement and launched the [Free Software Foundation (FSF)][12].
This went well for a few years, but inevitably, [RMS collided with proprietary companies][13]. The company Unipress took the code to a variation of his [EMACS][14] programming editor and turned it into a proprietary program. RMS never wanted that to happen again so he created the [GNU General Public License (GPL)][15] in 1989. This was the first copyleft license. It gave users the right to use, copy, distribute, and modify a program's source code. But if you make source code changes and distribute it to others, you must share the modified code. While there had been earlier free licenses, such as [1980's four-clause BSD license][16], the GPL was the one that sparked the free-software, open-source revolution.
In 1997, [Eric S. Raymond][17] published his vital essay, "[The Cathedral and the Bazaar][18]." In it, he showed the advantages of the free-software development methodologies using GCC, the Linux kernel, and his experiences with his own [Fetchmail][19] project as examples. This essay did more than show the advantages of free software. The programming principles he described led the way for both [Agile][20] development and [DevOps][21]. Twenty-first century programming owes a large debt to Raymond.
Like all revolutions, free software quickly divided its supporters. On one side, as John Mark Walker, open-source expert and Strategic Advisor at Glyptodon, recently wrote, "[Free software is a social movement][22], with nary a hint of business interests -- it exists in the realm of religion and philosophy. Free software is a way of life with a strong moral code."
On the other were numerous people who wanted to bring "free software" to business. They would become the founders of "open source." They argued that such phrases as "Free as in freedom" and "Free speech, not beer," left most people confused about what that really meant for software.
The [release of the Netscape web browser source code][23] sparked a meeting of free software leaders and experts at [a strategy session held on February 3rd][24], 1998 in Palo Alto, CA. There, Eric S. Raymond, Michael Tiemann, Todd Anderson, Jon "maddog" Hall, Larry Augustin, Sam Ockman, and Christine Peterson hammered out the first steps to open source.
Peterson created the "open-source term." She remembered:
> [The introduction of the term "open source software" was a deliberate effort][25] to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that -- to newcomers -- its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
To help clarify what open source was, and wasn't, Raymond and Bruce Perens founded the [Open Source Initiative (OSI)][26]. Its purpose was, and still is, to define what are real open-source software licenses and what aren't.
Stallman was enraged by open source. He wrote:
> The two terms describe almost the same method/category of software, but they stand for [views based on fundamentally different values][27]. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, essential respect for the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software 'better' -- in a practical sense only. It says that non-free software is an inferior solution to the practical problem at hand. Most discussion of "open source" pays no attention to right and wrong, only to popularity and success.
He saw open source as kowtowing to business and taking the focus away from the personal freedom of being able to have free access to the code. Twenty years later, he's still angry about it.
In a recent e-mail to me, Stallman said a "common error is connecting me or my work or free software in general with the term 'Open Source.' That is the slogan adopted in 1998 by people who reject the philosophy of the Free Software Movement." In another message, he continued, "I rejected 'open source' because it was meant to bury the "free software" ideas of freedom. Open source inspired the release of useful free programs, but what's missing is the idea that users deserve control of their computing. We libre-software activists say, 'Software you can't change and share is unjust, so let's escape to our free replacement.' Open source says only, 'If you let users change your code, they might fix bugs.' What it does say is not wrong, but weak; it avoids saying the deeper point."
Philosophical conflicts aside, open source has indeed become the model for practical software development. Larry Augustin, CEO of [SugarCRM][28], the open-source customer relationship management (CRM) Software-as-a-Service (SaaS), was one of the first to practice open-source in a commercial software business. Augustin showed that a successful business could be built on open-source software.
Other companies quickly embraced this model. Besides Linux companies such as [Canonical][29], [Red Hat][30] and [SUSE][31], technology businesses such as [IBM][32] and [Oracle][33] also adopted it. This, in turn, led to open source's commercial success. More recently, companies you would never think of as open-source businesses, such as [Wal-Mart][34] and [Verizon][35], now rely on open-source programs and have their own open-source projects.
As Jim Zemlin, director of [The Linux Foundation][36], observed in 2014:
> A [new business model][37] has emerged in which companies are joining together across industries to share development resources and build common open-source code bases on which they can differentiate their own products and services.
Today, Hall looked back and said "I look at 'closed source' as a blip in time." Raymond is unsurprised at open-source's success. In an e-mail interview, Raymond said, "Oh, yeah, it *has* been 20 years -- and that's not a big deal because we won most of the fights we needed to quite a while ago, like in the first decade after 1998."
"Ever since," he continued, "we've been mainly dealing with the problems of success rather than those of failure. And a whole new class of issues, like IoT devices without upgrade paths -- doesn't help so much for the software to be open if you can't patch it."
In other words, he concludes, "The reward of victory is often another set of battles."
These are battles that open source is poised to win. Jim Whitehurst, Red Hat's CEO and president told me:
> The future of open source is bright. We are on the cusp of a new wave of innovation that will come about because information is being separated from physical objects thanks to the Internet of Things. Over the next decade, we will see entire industries based on open-source concepts, like the sharing of information and joint innovation, become mainstream. We'll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realize sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world.
Others see open source extending beyond software development methods. Nick Hopman, Red Hat's senior director of emerging technology practices, said:
> Open-source is much more than just a process to develop and expose technology. Open-source is a catalyst to drive change in every facet of society -- government, policy, medical diagnostics, process re-engineering, you name it -- and can leverage open principles that have been perfected through the experiences of open-source software development to create communities that drive change and innovation. Looking forward, open-source will continue to drive technology innovation, but I am even more excited to see how it changes the world in ways we have yet to even consider.
Indeed. Open source has turned twenty, but its influence, and not just on software and business, will continue on for decades to come.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/open-source-turns-20/
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:https://zdnet1.cbsistatic.com/hub/i/r/2018/01/08/d9527281-2972-4cb7-bd87-6464d8ad50ae/thumbnail/570x322/9d4ef9007b3a3ce34de0cc39d2b15b0c/5a4faac660b22f2aba08fc3f-1280x7201jan082018150043poster.jpg
[2]:http://www.zdnet.com/article/microsoft-the-open-source-company/
[3]:http://www.zdnet.com/article/microsoft-uses-open-source-software-to-create-windows/
[4]:https://zdnet1.cbsistatic.com/hub/i/r/2016/11/18/a55b3c0c-7a8e-4143-893f-44900cb2767a/resize/220x165/6cd4e37b1904743ff1f579cb10d9e857/linux-open-source-money-penguin.jpg
[5]:http://www.zdnet.com/article/how-do-linux-and-open-source-companies-make-money-from-free-software/
[6]:https://stallman.org/
[7]:https://opensource.com/article/18/2/pivotal-moments-history-open-source
[8]:https://www.gnu.org/software/hurd/hurd.html
[9]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[10]:https://gcc.gnu.org/
[11]:https://www.gnu.org/gnu/manifesto.en.html
[12]:https://www.fsf.org/
[13]:https://www.free-soft.org/gpl_history/
[14]:https://www.gnu.org/s/emacs/
[15]:https://www.gnu.org/licenses/gpl-3.0.en.html
[16]:http://www.linfo.org/bsdlicense.html
[17]:http://www.catb.org/esr/
[18]:http://www.catb.org/esr/writings/cathedral-bazaar/
[19]:http://www.fetchmail.info/
[20]:https://www.agilealliance.org/agile101/
[21]:https://aws.amazon.com/devops/what-is-devops/
[22]:https://opensource.com/business/16/11/open-source-not-free-software?sc_cid=70160000001273HAAQ
[23]:http://www.zdnet.com/article/the-beginning-of-the-peoples-web-20-years-of-netscape/
[24]:https://opensource.org/history
[25]:https://opensource.com/article/18/2/coining-term-open-source-software
[26]:https://opensource.org
[27]:https://www.gnu.org/philosophy/open-source-misses-the-point.html
[28]:https://www.sugarcrm.com/
[29]:https://www.canonical.com/
[30]:https://www.redhat.com/en
[31]:https://www.suse.com/
[32]:https://developer.ibm.com/code/open/
[33]:http://www.oracle.com/us/technologies/open-source/overview/index.html
[34]:http://www.zdnet.com/article/walmart-relies-on-openstack/
[35]:https://www.networkworld.com/article/3195490/lan-wan/verizon-taps-into-open-source-white-box-fervor-with-new-cpe-offering.html
[36]:http://www.linuxfoundation.org/
[37]:http://www.zdnet.com/article/it-takes-an-open-source-village-to-make-commercial-software/

View File

@ -0,0 +1,75 @@
Open source software: 20 years and counting
============================================================
### On the 20th anniversary of the coining of the term "open source software," how did it rise to dominance and what's next?
![Open source software: 20 years and counting](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/2cents.png?itok=XlT7kFNY "Open source software: 20 years and counting")
Image by : opensource.com
Twenty years ago, in February 1998, the term "open source" was first applied to software. Soon afterwards, the Open Source Definition was created and the seeds that became the Open Source Initiative (OSI) were sown. As the OSD's author [Bruce Perens relates][9],
> "Open source" is the proper name of a campaign to promote the pre-existing concept of free software to business, and to certify licenses to a rule set.
Twenty years later, that campaign has proven wildly successful, beyond the imagination of anyone involved at the time. Today open source software is literally everywhere. It is the foundation for the internet and the web. It powers the computers and mobile devices we all use, as well as the networks they connect to. Without it, cloud computing and the nascent Internet of Things would be impossible to scale and perhaps to create. It has enabled new ways of doing business to be tested and proven, allowing giant corporations like Google and Facebook to start from the top of a mountain others already climbed.
Like any human creation, it has a dark side as well. It has also unlocked dystopian possibilities for surveillance and the inevitably consequent authoritarian control. It has provided criminals with new ways to cheat their victims and unleashed the darkness of bullying delivered anonymously and at scale. It allows destructive fanatics to organize in secret without the inconvenience of meeting. All of these are shadows cast by useful capabilities, just as every human tool throughout history has been used both to feed and care and to harm and control. We need to help the upcoming generation strive for irreproachable innovation. As [Richard Feynman said][10],
> To every man is given the key to the gates of heaven. The same key opens the gates of hell.
As open source has matured, the way it is discussed and understood has also matured. The first decade was one of advocacy and controversy, while the second was marked by adoption and adaptation.
1. In the first decade, the key question concerned business models—"how can I contribute freely yet still be paid?"—while during the second, more people asked about governance—"how can I participate yet keep control/not be controlled?"
2. Open source projects of the first decade were predominantly replacements for off-the-shelf products; in the second decade, they were increasingly components of larger solutions.
3. Projects of the first decade were often run by informal groups of individuals; in the second decade, they were frequently run by charities created on a project-by-project basis.
4. Open source developers of the first decade were frequently devoted to a single project and often worked in their spare time. In the second decade, they were increasingly employed to work on a specific technology—professional specialists.
5. While open source was always intended as a way to promote software freedom, during the first decade, conflict arose with those preferring the term "free software." In the second decade, this conflict was largely ignored as open source adoption accelerated.
So what will the third decade bring?
1. _The complexity business model_ —The predominant business model will involve monetizing the solution of the complexity arising from the integration of many open source parts, especially from deployment and scaling. Governance needs will reflect this.
2. _Open source mosaics_ —Open source projects will be predominantly families of component parts, as well as being built into stacks of components. The resultant larger solutions will be a mosaic of open source parts.
3. _Families of projects_ —More and more projects will be hosted by consortia/trade associations like the Linux Foundation and OpenStack, and by general-purpose charities like Apache and the Software Freedom Conservancy.
4. _Professional generalists_ —Open source developers will increasingly be employed to integrate many technologies into complex solutions and will contribute to a range of projects.
5. _Software freedom redux_ —As new problems arise, software freedom (the application of the Four Freedoms to user and developer flexibility) will increasingly be applied to identify solutions that work for collaborative communities and independent deployers.
I'll be expounding on all this in conference keynotes around the world during 2018. Watch for [OSI's 20th Anniversary World Tour][11]!
_This article was originally published on [Meshed Insights Ltd.][2] and is reprinted with permission. This article, as well as my work at OSI, is supported by [Patreon patrons][3]._
### About the author
[![Simon Phipps (smiling)](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-2305.jpg?itok=CefW_OYh)][12] Simon Phipps - Computer industry and open source veteran Simon Phipps started [Public Software][4], a European host for open source projects, and volunteers as President at OSI and a director at The Document Foundation. His posts are sponsored by [Patreon patrons][5] - become one if you'd like to see more! Over a 30+ year career he has been involved at a strategic level in some of the world's leading... [more about Simon Phipps][6][More about me][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
作者:[Simon Phipps ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/simonphipps
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
[3]:https://patreon.com/webmink
[4]:https://publicsoftware.eu/
[5]:https://patreon.com/webmink
[6]:https://opensource.com/users/simonphipps
[7]:https://opensource.com/users/simonphipps
[8]:https://opensource.com/user/12532/feed
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
[11]:https://opensource.org/node/905
[12]:https://opensource.com/users/simonphipps
[13]:https://opensource.com/users/simonphipps
[14]:https://opensource.com/users/simonphipps

View File

@ -1,160 +0,0 @@
Linux Find Out Last System Reboot Time and Date Command
======
So, how do you find out when your Linux or UNIX-like system was last rebooted? How do you display the system shutdown date and time? The last utility will either list the sessions of specified users, ttys, and hosts, in reverse time order, or list the users logged in at a specified date and time. Each line of output contains the user name, the tty from which the session was conducted, any hostname, the start and stop times for the session, and the duration of the session. To view the Linux or Unix system reboot and shutdown date and time stamps, use the following commands:
* last command
* who command
### Use who command to find last system reboot time/date
You need to use the [who command][1] to print who is logged on; it also displays the time of the last system boot. To display the last system boot time, run:
`$ who -b`
Sample outputs:
```
system boot 2017-06-20 17:41
```
Use the last command to display a listing of the last logged-in users and the system's last reboot time and date; enter:
`$ last reboot | less`
Sample outputs:
[![Fig.01: last command in action][2]][2]
Or, better try:
`$ last reboot | head -1`
Sample outputs:
```
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
```
The last command searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created. The pseudo user reboot logs in each time the system is rebooted. Thus last reboot command will show a log of all reboots since the log file was created.
### Finding the system's last shutdown date and time
To display last shutdown date and time use the following command:
`$ last -x|grep shutdown | head -1`
Sample outputs:
```
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
```
Where,
* **-x** : Display the system shutdown entries and run level changes.
Here is another session from my last command:
```
$ last
$ last -x
$ last -x reboot
$ last -x shutdown
```
Sample outputs:
![Fig.01: How to view last Linux System Reboot Date/Time ][3]
### Find out Linux system up since…
Another option as suggested by readers in the comments section below is to run the following command:
`$ uptime -s`
Sample outputs:
```
2017-06-20 17:41:51
```
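If your uptime comes from a reasonably recent procps-ng (an assumption; older versions may not support the flag), it can also print a human-readable summary of how long the box has been up:
```
# Pretty-print how long the system has been up
$ uptime -p
```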
### OS X/Unix/FreeBSD find out last reboot and shutdown time command examples
Type the following command:
`$ last reboot`
Sample outputs from OS X unix:
```
reboot ~ Fri Dec 18 23:58
reboot ~ Mon Dec 14 09:54
reboot ~ Wed Dec 9 23:21
reboot ~ Tue Nov 17 21:52
reboot ~ Tue Nov 17 06:01
reboot ~ Wed Nov 11 12:14
reboot ~ Sat Oct 31 13:40
reboot ~ Wed Oct 28 15:56
reboot ~ Wed Oct 28 11:35
reboot ~ Tue Oct 27 00:00
reboot ~ Sun Oct 18 17:28
reboot ~ Sun Oct 18 17:11
reboot ~ Mon Oct 5 09:35
reboot ~ Sat Oct 3 18:57
wtmp begins Sat Oct 3 18:57
```
To see shutdown date and time, enter:
`$ last shutdown`
Sample outputs:
```
shutdown ~ Fri Dec 18 23:57
shutdown ~ Mon Dec 14 09:53
shutdown ~ Wed Dec 9 23:20
shutdown ~ Tue Nov 17 14:24
shutdown ~ Mon Nov 16 21:15
shutdown ~ Tue Nov 10 13:15
shutdown ~ Sat Oct 31 13:40
shutdown ~ Wed Oct 28 03:10
shutdown ~ Sun Oct 18 17:27
shutdown ~ Mon Oct 5 09:23
wtmp begins Sat Oct 3 18:57
```
### How do I find who rebooted/shutdown the Linux box?
You need [to enable psacct service and run the following command to see info][4] about executed commands, including the user name. Type the following [lastcomm command][5] to see who rebooted or shut down the box:
```
# lastcomm userNameHere
# lastcomm commandNameHere
# lastcomm | more
# lastcomm reboot
# lastcomm shutdown
### OR see both reboot and shutdown time
# lastcomm | egrep 'reboot|shutdown'
```
Sample outputs:
```
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
```
So the root user rebooted the box from 'pts/0' on Sun, Dec 27th, at 23:49 local time.
### See also
* For more information read last(1) and [learn how to use the tuptime command on Linux server to see the historical and statistical uptime][6].
### about the author
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ (See Linux/Unix who command examples for more info)
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ (See Linux/Unix lastcomm command examples for more info)
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz

View File

@ -0,0 +1,82 @@
How to use lftp to accelerate ftp/https download speed on Linux/UNIX
======
lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. If the site URL is specified, then lftp will connect to that site; otherwise a connection has to be established with the open command. It is an essential tool for all Linux/Unix command line users. I have already written about [Linux ultra fast command line download accelerator][1] tools such as Axel and prozilla. lftp is another tool for the same job with more features. lftp can handle eight file access methods:
1. ftp
2. ftps
3. http
4. https
5. hftp
6. fish
7. sftp
8. file
### So what is unique about lftp?
* Every operation in lftp is reliable: any non-fatal error is ignored and the operation is repeated, so if a download breaks, it will be restarted from that point automatically. Even if the FTP server does not support the REST command, lftp will try to retrieve the file from the very beginning until the file is transferred completely.
* lftp has shell-like command syntax allowing you to launch several commands in parallel in the background.
* lftp has a built-in mirror which can download or update a whole directory tree. There is also a reverse mirror (mirror -R) which uploads or updates a directory tree on the server. The mirror can also synchronize directories between two remote servers, using FXP if available (see the sketch after this list).
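As a quick illustration of the mirror feature (a sketch, not from the original text; ftp.example.com and the paths are placeholders), a whole tree can be pulled down or pushed up non-interactively:
```
# Mirror a remote directory tree into a local directory
$ lftp -e 'mirror --parallel=3 /pub/some/dir /tmp/localcopy; exit' ftp.example.com

# Reverse mirror: upload/update a local tree on the server
$ lftp -e 'mirror -R /tmp/localcopy /pub/some/dir; exit' ftp.example.com
```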
### How to use lftp as download accelerator
lftp has a pget command that allows you to download files in parallel. The syntax is:
`lftp -e 'pget -n NUM -c url; exit'`
For example, download <http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2> file using pget in 5 parts:
```
$ cd /tmp
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
```
Sample outputs:
```
45108964 bytes transferred in 57 seconds (775.3K/s)
lftp :~>quit
```
Where,
1. pget - Download files in parallel
2. -n 5 - Set maximum number of connections to 5
3. -c - Continue a broken transfer if lfile.lftp-pget-status exists in the current directory
### How to use lftp to accelerate ftp/https download on Linux/Unix
Another try with added exit command:
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`
[Linux lftp command demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)
### A note about parallel downloading
Please note that by using a download accelerator you are going to put extra load on the remote host. Also note that lftp may not work with sites that do not support multi-source downloads or that block such requests at the firewall level.
The lftp command offers many other features. Refer to the [lftp][2] man page for more information:
`man lftp`
### about the author
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][3], [Facebook][4], [Google+][5]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][6]**.
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
[2]:https://lftp.yar.ru/
[3]:https://twitter.com/nixcraft
[4]:https://facebook.com/nixcraft
[5]:https://plus.google.com/+CybercitiBiz
[6]:https://www.cyberciti.biz/atom/atom.xml

View File

@ -1,134 +0,0 @@
Linux Check IDE / SATA SSD Hard Disk Transfer Speed
======
So how do you find out how fast your hard disk is under Linux? Is it running at SATA I (150 MB/s), SATA II (300 MB/s), or SATA III (6.0 Gb/s) speed, and can you tell without opening the computer case or chassis?
You can use the **hdparm or dd command** to check hard disk speed. It provides a command line interface to various hard disk ioctls supported by the stock Linux ATA/IDE/SATA device driver subsystem. Some options may work correctly only with the latest kernels (make sure you have cutting edge kernel installed). I also recommend compiling hdparm with the included files from the most recent kernel source code.
### How to measure hard disk data transfer speed using hdparm
Login as the root user and enter the following command:
`$ sudo hdparm -tT /dev/sda`
OR
`$ sudo hdparm -tT /dev/hda`
Sample outputs:
```
/dev/sda:
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
```
For meaningful results, this operation should be **repeated 2-3 times**. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the **throughput of the processor, cache, and memory** of the system under test. [Here is a for loop example][1] to run the test 3 times in a row:
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
Where,
* **-t** :perform device read timings
* **-T** : perform cache read timings
* **/dev/sda** : Hard disk device file
To [find out SATA hard disk link speed][2], enter:
`sudo hdparm -I /dev/sda | grep -i speed`
Output:
```
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
```
The above output indicates that my hard disk can use 1.5Gb/s, 3.0Gb/s, or 6.0Gb/s speed. Please note that your BIOS / motherboard must have support for SATA-II/III:
`$ dmesg | grep -i sata | grep 'link up'`
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
### dd Command
You can use the dd command as follows to get speed info too:
```
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img
```
Sample outputs:
```
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, **90.8 MB/s**
```
The [recommended syntax for the dd command is as follows][4]
```
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
```
Sample outputs from the last dd command:
```
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
```
### Disks & storage - GUI tool
You can also use the disk utility located in the System > Administration > Disk Utility menu. Please note that in the latest versions of GNOME it is simply called "Disks".
#### How do I test the performance of my hard disk using Disks on Linux?
To test the speed of your hard disk:
1. Open **Disks** from the **Activities** overview (press the Super key on your keyboard and type Disks)
2. Choose the **disk** from the list in the **left pane**
3. Select the menu button and select **Benchmark disk…** from the menu
4. Click **Start Benchmark…** and adjust the Transfer Rate and Access Time parameters as desired.
5. Choose **Start Benchmarking** to test how fast data can be read from the disk. Administrative privileges are required; enter your password.
A quick video demo of above procedure:
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
#### Read Only Benchmark (Safe option)
Then, select **Read only**:
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
The above option will not destroy any data.
#### Read and Write Benchmark (All data will be lost so be careful)
Visit System > Administration > Disk utility menu > Click Benchmark > Click Start Read/Write Benchmark button:
![Fig.02:Linux Measuring read rate, write rate and access time][6]
### About the author
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz

View File

@ -0,0 +1,141 @@
How to use yum-cron to automatically update RHEL/CentOS Linux
======
The yum command line tool is used to install and update software packages under a RHEL / CentOS Linux server. I know how to apply updates using the [yum update command line][1], but I would like to use cron to update packages automatically where appropriate. How do I configure yum to install software patches/updates [automatically with cron][2]?
You need to install the yum-cron package. It provides the files needed to run yum updates as a cron job. Install this package if you want automatic nightly yum updates via cron.
### How to install yum cron on a CentOS/RHEL 6.x/7.x
Type the following [yum command][3]:
`$ sudo yum install yum-cron`
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-install-yum-cron-on-CentOS-RHEL-server.jpg)
Turn on the service using the systemctl command on **CentOS/RHEL 7.x**:
```
$ sudo systemctl enable yum-cron.service
$ sudo systemctl start yum-cron.service
$ sudo systemctl status yum-cron.service
```
If you are using **CentOS/RHEL 6.x** , run:
```
$ sudo chkconfig yum-cron on
$ sudo service yum-cron start
```
![](https://www.cyberciti.biz/media/new/faq/2009/05/How-to-turn-on-yum-cron-service-on-CentOS-or-RHEL-server.jpg)
yum-cron is an alternate interface to yum and a very convenient way to call yum from cron. It provides methods to keep repository metadata up to date, and to check for, download, and apply updates. Rather than accepting many different command line arguments, the different functions of yum-cron can be accessed through config files.
### How to configure yum-cron to automatically update RHEL/CentOS Linux
You need to edit /etc/yum/yum-cron.conf and /etc/yum/yum-cron-hourly.conf files using a text editor such as vi command:
`$ sudo vi /etc/yum/yum-cron.conf`
Make sure updates are applied when they are available:
`apply_updates = yes`
You can set the address to send email messages from. Please note that localhost will be replaced with the value of system_name.
`email_from = root@localhost`
List of addresses to send messages to.
`email_to = your-it-support@some-domain-name`
Name of the host to connect to to send email messages.
`email_host = localhost`
If you [do not want to update kernel package add the following on CentOS/RHEL 7.x][4]:
`exclude=kernel*`
For RHEL/CentOS 6.x add [the following to exclude kernel package from updating][5]:
`YUM_PARAMETER=kernel*`
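Putting the settings above together, here is a minimal sketch of how the relevant parts of /etc/yum/yum-cron.conf might look on CentOS/RHEL 7.x. The section names ([commands], [emitters], [email], [base]) and the emit_via option follow the stock file layout as I understand it, so treat them as assumptions and keep whatever sections your installed file already contains; the email addresses are placeholders:
```
[commands]
# apply downloaded updates rather than only reporting them
apply_updates = yes

[emitters]
# send results by email instead of printing to stdout (assumed stock option)
emit_via = email

[email]
email_from = root@localhost
email_to = your-it-support@some-domain-name
email_host = localhost

[base]
# skip kernel packages on CentOS/RHEL 7.x
exclude = kernel*
```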
[Save and close the file in vi/vim][6]. You also need to update the /etc/yum/yum-cron-hourly.conf file if you want to apply updates hourly. Otherwise, /etc/yum/yum-cron.conf will be run daily using the following cron job (use the [cat command][7] to view it):
`$ cat /etc/cron.daily/0yum-daily.cron`
Sample outputs:
```
#!/bin/bash
 
# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
exit 0
fi
 
# Action!
exec /usr/sbin/yum-cron /etc/yum/yum-cron-hourly.conf
[root@centos7-box yum]# cat /etc/cron.daily/0yum-daily.cron
#!/bin/bash
 
# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
exit 0
fi
 
# Action!
exec /usr/sbin/yum-cron
```
That is all. Now your system will update automatically every day using yum-cron. See the man page of yum-cron for more details:
`$ man yum-cron`
### Method 2 - Use shell scripts
**Warning** : The following method is outdated. Do not use it on RHEL/CentOS 6.x/7.x. I kept it below for historical reasons only when I used it on CentOS/RHEL version 4.x/5.x.
Let us see how to configure CentOS/RHEL for automatic yum update retrieval and installation of security packages. You can use the yum-updatesd service provided with CentOS / RHEL servers. However, this service adds some overhead. Instead, you can apply daily or weekly updates with the following shell script. Create:
* **/etc/cron.daily/yumupdate.sh** to apply updates once a day.
* **/etc/cron.weekly/yumupdate.sh** to apply updates once a week.
#### Sample shell script to update system
A shell script that instructs yum to update any packages it finds via [cron][8]:
```
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
```
(Code listing -01: /etc/cron.daily/yumupdate.sh)
Where,
1. The first command will update yum itself, and the next will apply system updates.
2. **-R 120** : Sets the maximum amount of time yum will wait before performing a command
3. **-e 0** : Sets the error level to 0 (range 0-10). 0 means print only critical errors about which you must be told.
4. **-d 0** : Sets the debugging level to 0 - turns up or down the amount of things that are printed (range: 0-10).
5. **-y** : Assume yes; assume that the answer to any question which would be asked is yes.
Make sure you setup executable permission:
`# chmod +x /etc/cron.daily/yumupdate.sh`
### about the author
Posted by:
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][12]**.
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installation-with-cron/
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/
[5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/
[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
[8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
[9]:https://twitter.com/nixcraft
[10]:https://facebook.com/nixcraft
[11]:https://plus.google.com/+CybercitiBiz
[12]:https://www.cyberciti.biz/atom/atom.xml

View File

@ -0,0 +1,164 @@
Create your first Ansible server (automation) setup
======
Automation and configuration management tools are the new craze in the IT world, and organizations are moving towards adopting them. There are many tools available in the market, like Puppet, Chef, and Ansible, and in this tutorial we are going to learn about Ansible.
Ansible is an open source configuration tool that is used to deploy, configure, and manage servers. It is one of the easiest automation tools to learn and master. It does not require you to learn a complicated programming language like Ruby (used in Puppet and Chef); it uses YAML, which is a very simple language. It also does not require any special agent to be installed on client machines; it only requires the client machines to have Python and SSH installed, both of which are usually available on most systems.
## Pre-requisites
Before we move onto installation part, let's discuss the pre-requisites for Ansible
1. For the server, we will need a machine with either CentOS or RHEL 7 installed and the EPEL repository enabled.
To enable the EPEL repository, use the commands below.
**RHEL/CentOS 7**
```
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
```
**RHEL/CentOS 6 (64 Bit)**
```
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
```
**RHEL/CentOS 6 (32 Bit)**
```
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
```
2. For client machines, OpenSSH and Python should be installed. We also need to configure passwordless login for the SSH session (create public-private keys). To create public-private keys and configure passwordless SSH login, refer to our article "[Setting up SSH Server for Public/Private keys based Authentication (Password-less login)][1]".
## Installation
Once we have the EPEL repository enabled, we can install Ansible using yum:
```
$ yum install ansible
```
## Configuring Ansible hosts
We will now configure the hosts that we want Ansible to manage. To do that, we need to edit the file **/etc/ansible/hosts** and add the clients using the following syntax:
```
[group-name]
alias ansible_ssh_host=host_IP_address
```
where alias is the alias name given to the host we are adding (it can be anything), and host_IP_address is the IP address of the host.
For this tutorial, we are going to add two clients/hosts for Ansible to manage, so let's create an entry for these two hosts in the configuration file:
```
$ vi /etc/ansible/hosts
[test_clients]
client1 ansible_ssh_host=192.168.1.101
client2 ansible_ssh_host=192.168.1.10
```
Save the file and exit. Now, as mentioned in the pre-requisites, we should have passwordless login to these clients from the Ansible server. To check that this is the case, SSH into a client; we should be able to log in without a password:
```
$ ssh root@192.168.1.101
```
If that works, then we can move on; otherwise we need to create public/private keys for the SSH session (refer to the article mentioned above in the pre-requisites).
We are using root to log in to the other servers, but we can use other local users as well, and we need to tell Ansible which user it should use. To do so, we will first create a folder named 'group_vars' in '/etc/ansible':
```
$ cd /etc/ansible
$ mkdir group_vars
```
Next, we will create a file named after the group we have created in '/etc/ansible/hosts', i.e. test_clients:
```
$ vi group_vars/test_clients
```
and add the following information about the user:
```
---
ansible_ssh_user: root
```
**Note:** The file starts with '---' (three hyphens), so take note of that.
If we want to use the same user for all the groups created, we can create a single file named 'all' to hold the user details for SSH login, instead of creating a file for every group.
```
$ vi /etc/ansible/group_vars/all
---
ansible_ssh_user: root
```
Similarly, we can set up files for individual hosts as well (see the sketch below).
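For example (a hypothetical sketch, not one of the original steps), per-host variables can live in a 'host_vars' directory next to the inventory, in a file named after the alias used in '/etc/ansible/hosts':
```
$ cd /etc/ansible
$ mkdir host_vars
$ vi host_vars/client1
---
# variables in this file apply only to the host aliased as client1
ansible_ssh_user: root
```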
Now, the setup for the clients has been done. We will now push some simple commands to all the clients being managed by Ansible.
## Testing hosts
To check the connectivity of all the hosts, we will issue a command,
```
$ ansible -m ping all
```
If all the hosts are properly connected, it should return the following output,
```
client1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
client2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
We can also issue the command to an individual host,
```
$ ansible -m ping client1
```
or to multiple hosts,
```
$ ansible -m ping client1:client2
```
or even to a single group,
```
$ ansible -m ping test_clients
```
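Beyond the ping module, other modules can be run ad-hoc in the same way. As an illustration of my own (not part of the original tutorial), the following would run 'uptime' on every host in the group using the command module,
```
$ ansible test_clients -m command -a "uptime"
```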
This completes our tutorial on setting up an Ansible server. In future posts we will further explore the functionality offered by Ansible. If you have any doubts or queries regarding this post, use the comment box below.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/create-first-ansible-server-automation-setup/
作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/

View File

@ -1,4 +1,4 @@
How to Use the ZFS Filesystem on Ubuntu Linux
How to Use the ZFS Filesystem on Ubuntu Linux
======
There are a myriad of [filesystems available for Linux][1]. So why try a new one? They all work, right? They're not all the same, and some have some very distinct advantages, like ZFS.

View File

@ -0,0 +1,108 @@
/dev/[u]random: entropy explained
======
### Entropy
When the topic of /dev/random and /dev/urandom comes up, you always hear this word: “Entropy”. Everyone seems to have their own analogy for it, so why not me? I like to think of entropy as “random juice”: it is juice required for random to be more random.
If you have ever generated an SSL certificate, or a GPG key, you may have seen something like:
```
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
++++++++++..+++++.+++++++++++++++.++++++++++...+++++++++++++++...++++++
+++++++++++++++++++++++++++++.+++++..+++++.+++++.+++++++++++++++++++++++++>.
++++++++++>+++++...........................................................+++++
Not enough random bytes available. Please do some other work to give
the OS a chance to collect more entropy! (Need 290 more bytes)
```
By typing on the keyboard, and moving the mouse, you help generate Entropy, or Random Juice.
You might be asking yourself… why do I need entropy? And why is it so important for random to be actually random? Well, let's say our entropy was limited to keyboard, mouse, and disk IO, but our system is a server, so I know there is no mouse and keyboard input. That means the only factor is your IO, and if that is a single disk that is barely used, you will have low entropy. This means your system's ability to be random is weak. In other words, I could play the probability game and significantly decrease the amount of time it would take to crack things like your ssh keys, or decrypt what you thought was an encrypted session.
Okay, but that is pretty unrealistic, right? No, actually it isn't. Take a look at this [Debian OpenSSH Vulnerability][1]. This particular issue was caused by someone removing some of the code responsible for adding entropy. Rumor has it they removed it because it was causing valgrind to throw warnings. However, in doing that, random became MUCH less random. In fact, so much less that brute forcing the generated private ssh keys became a feasible attack vector.
Hopefully by now we understand how important entropy is to security, whether you realize you are using it or not.
### /dev/random & /dev/urandom
/dev/urandom is a Pseudo Random Number Generator, and it **does not** block if you run out of Entropy.
/dev/random is a True Random Number Generator, and it **does** block if you run out of Entropy.
Most often, if we are dealing with something pragmatic and it doesn't contain the keys to your nukes, /dev/urandom is the right choice. Otherwise, if you go with /dev/random, then when the system runs out of entropy your application is just going to behave funny. Whether it outright fails or just hangs until it has enough depends on how you wrote your application.
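As a quick illustration (a sketch of my own; the exact blocking behavior depends on your kernel version and configuration), you can read a handful of bytes from each device. /dev/urandom returns immediately, while /dev/random may stall until the pool refills:
```
$ dd if=/dev/urandom of=/dev/null bs=16 count=1
$ dd if=/dev/random of=/dev/null bs=16 count=1
```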
### Checking the Entropy
So, how much Entropy do you have?
```
[root@testbox test]# cat /proc/sys/kernel/random/poolsize
4096
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
2975
[root@testbox test]#
```
/proc/sys/kernel/random/poolsize, to state the obvious, is the size (in bits) of the entropy pool, e.g. how much random-juice we should save before we stop pumping more. /proc/sys/kernel/random/entropy_avail is the amount (in bits) of random-juice currently in the pool.
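If you want to watch the pool drain and refill in real time, one option (my own suggestion, not from the original post) is to poll entropy_avail with watch:
```
$ watch -n1 cat /proc/sys/kernel/random/entropy_avail
```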
### How can we influence this number?
The number is drained as we use it. The crudest example I can come up with is catting /dev/random into /dev/null:
```
[root@testbox test]# cat /dev/random > /dev/null &
[1] 19058
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
0
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
1
[root@testbox test]#
```
The easiest way to influence this is to run [Haveged][2]. Haveged is a daemon that uses processor “flutter” to add entropy to the system's entropy pool. Installation and basic setup are pretty straightforward:
```
[root@b08s02ur ~]# systemctl enable haveged
Created symlink from /etc/systemd/system/multi-user.target.wants/haveged.service to /usr/lib/systemd/system/haveged.service.
[root@b08s02ur ~]# systemctl start haveged
[root@b08s02ur ~]#
```
On a machine with relatively moderate traffic:
```
[root@testbox ~]# pv /dev/random > /dev/null
40 B 0:00:15 [ 0 B/s] [ <=> ]
52 B 0:00:23 [ 0 B/s] [ <=> ]
58 B 0:00:25 [5.92 B/s] [ <=> ]
64 B 0:00:30 [6.03 B/s] [ <=> ]
^C
[root@testbox ~]# systemctl start haveged
[root@testbox ~]# pv /dev/random > /dev/null
7.12MiB 0:00:05 [1.43MiB/s] [ <=> ]
15.7MiB 0:00:11 [1.44MiB/s] [ <=> ]
27.2MiB 0:00:19 [1.46MiB/s] [ <=> ]
43MiB 0:00:30 [1.47MiB/s] [ <=> ]
^C
[root@testbox ~]#
```
Using pv we are able to see how much data we are passing via the pipe. As you can see, before haveged we were getting roughly 2 bytes per second (B/s), whereas after starting haveged and adding processor flutter to our entropy pool we get ~1.5MiB/sec.
--------------------------------------------------------------------------------
via: http://jhurani.com/linux/2017/11/01/entropy-explained.html
作者:[James J][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jblevins.org/log/ssh-vulnkey
[1]:http://jhurani.com/linux/2017/11/01/%22https://jblevins.org/log/ssh-vulnkey%22
[2]:http://www.issihosts.com/haveged/

View File

@ -1,49 +0,0 @@
translating-----geekpi
3 Essential Questions to Ask at Your Next Tech Interview
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
Interviewing can be stressful, but 58 percent of companies tell Dice and the Linux Foundation that they need to hire open source talent in the months ahead. Learn how to ask the right questions.
The Linux Foundation
The annual [Open Source Jobs Report][1] from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year's report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time for advancing within their own organization or applying for a new role elsewhere?
Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there's the added pressure of coming up with the necessary response when the interviewer asks "Do you have any questions for me?"
At Dice, we're in the business of careers, advice, and connecting tech professionals with employers. But we also hire tech talent at our organization to work on open source projects. In fact, the Dice platform is based on a number of Linux distributions and we leverage open source databases as the basis for our search functionality. In short, we couldn't run Dice without open source software, therefore it's vital that we hire professionals who understand, and love, open source.
Over the years, I've learned the importance of asking good questions during an interview. It's an opportunity to learn about your potential new employer, as well as better understand if they are a good match for your skills.
Here are three essential questions to ask and the reason they're important:
**1\. What is the company's position on employees contributing to open source projects or writing code in their spare time?**
The answer to this question will tell you a lot about the company you're interviewing with. In general, companies will want tech pros who contribute to websites or projects as long as they don't conflict with the work you're doing at that firm. Allowing contributions outside the company also fosters an entrepreneurial spirit among the tech organization, and teaches tech skills that you may not otherwise get in the normal course of your day.
**2\. How are projects prioritized here?**
As all companies have become tech companies, there is often a division between innovative customer facing tech projects versus those that improve the platform itself. Will you be working on keeping the existing platform up to date? Or working on new products for the public? Depending on where your interests lie, the answer could determine if the company is a right fit for you.
**3\. Who primarily makes decisions on new products and how much input do developers have in the decision-making process?**
This question is one part understanding who is responsible for innovation at the company (and how close you'll be working with him/her) and one part discovering your career path at the firm. A good company will talk to its developers and open source talent ahead of developing new products. It seems like a no brainer, but it's a step that's sometimes missed and will mean the difference between a collaborative environment or chaotic process ahead of new product releases.
Interviewing can be stressful; however, as 58 percent of companies tell Dice and The Linux Foundation that they need to hire open source talent in the months ahead, it's important to remember that the heightened demand puts professionals like you in the driver's seat. Steer your career in the direction you desire.
[Download ][2] the full 2017 Open Source Jobs Report now.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview
作者:[Brian Hostetter][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/brianhostetter
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
[2]:http://bit.ly/2017OSSjobsreport

View File

@ -1,210 +0,0 @@
# Tutorial on how to write basic udev rules in Linux
Contents
* * [1. Objective][4]
* [2. Requirements][5]
* [3. Difficulty][6]
* [4. Conventions][7]
* [5. Introduction][8]
* [6. How rules are organized][9]
* [7. The rules syntax][10]
* [8. A test case][11]
* [9. Operators][12]
* * [9.1.1. == and != operators][1]
* [9.1.2. The assignment operators: = and :=][2]
* [9.1.3. The += and -= operators][3]
* [10. The keys we used][13]
### Objective
Understanding the base concepts behind udev, and learn how to write simple rules
### Requirements
* Root permissions
### Difficulty
MEDIUM
### Conventions
* **#** - requires given command to be executed with root privileges either directly as a root user or by use of `sudo` command
* **$** - given command to be executed as a regular non-privileged user
### Introduction
In a GNU/Linux system, while low-level device support is handled at the kernel level, the management of events related to those devices is handled in userspace by `udev`, and more precisely by the `udevd` daemon. Learning how to write rules to be applied on the occurrence of those events can be really useful to modify the behavior of the system and adapt it to our needs.
### How rules are organized
Udev rules are defined in files with the `.rules` extension. There are two main locations in which those files can be placed: `/usr/lib/udev/rules.d` is the directory used for system-installed rules, while `/etc/udev/rules.d/` is reserved for custom-made rules.
The files in which the rules are defined are conventionally named with a number as prefix (e.g `50-udev-default.rules`) and are processed in lexical order independently of the directory they are in. Files installed in `/etc/udev/rules.d`, however, override those with the same name installed in the system default path.
### The rules syntax
The syntax of udev rules is not very complicated once you understand the logic behind it. A rule is composed of two main sections: the "match" part, in which we define the conditions for the rule to be applied, using a series of keys separated by commas, and the "action" part, in which we perform some kind of action when the conditions are met.
### A test case
What better way to explain the possible options than to configure an actual rule? As an example, we are going to define a rule to disable the touchpad when a mouse is connected. Obviously the attributes provided in the rule definition will reflect my hardware.
We will write our rule in the `/etc/udev/rules.d/99-togglemouse.rules` file with the help of our favorite text editor. A rule definition can span over multiple lines, but if that's the case, a backslash must be used before the newline character, as a line continuation, just as in shell scripts. Here is our rule:
```
ACTION=="add" \
, ATTRS{idProduct}=="c52f" \
, ATTRS{idVendor}=="046d" \
, ENV{DISPLAY}=":0" \
, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
, RUN+="/usr/bin/xinput --disable 16"
```
Let's analyze it.
### Operators
First of all, an explanation of the used and possible operators:
#### == and != operators
The `==` is the equality operator and the `!=` is the inequality operator. By using them we establish that for the rule to be applied the defined keys must match, or not match the defined value respectively.
#### The assignment operators: = and :=
The `=` assignment operator is used to assign a value to the keys that accept one. We use the `:=` operator, instead, when we want to assign a value and make sure that it is not overridden by other rules: values assigned with this operator, in fact, cannot be altered.
#### The += and -= operators
The `+=` and `-=` operators are used respectively to add or to remove a value from the list of values defined for a specific key.
### The keys we used
Let's now analyze the keys we used in the rule. First of all we have the `ACTION` key: by using it, we specified that our rule is to be applied when a specific event happens for the device. Valid values are `add`, `remove` and `change`.
We then used the `ATTRS` keyword to specify an attribute to be matched. We can list a device's attributes by using the `udevadm info` command, providing its name or `sysfs` path:
```
udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
KERNEL=="input39"
SUBSYSTEM=="input"
DRIVER==""
ATTR{name}=="Logitech USB Receiver"
ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
ATTR{properties}=="0"
ATTR{uniq}==""
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
KERNELS=="0003:046D:C52F.0010"
SUBSYSTEMS=="hid"
DRIVERS=="hid-generic"
ATTRS{country}=="00"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
KERNELS=="2-1.2:1.1"
SUBSYSTEMS=="usb"
DRIVERS=="usbhid"
ATTRS{authorized}=="1"
ATTRS{bAlternateSetting}==" 0"
ATTRS{bInterfaceClass}=="03"
ATTRS{bInterfaceNumber}=="01"
ATTRS{bInterfaceProtocol}=="00"
ATTRS{bInterfaceSubClass}=="00"
ATTRS{bNumEndpoints}=="01"
ATTRS{supports_autosuspend}=="1"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
KERNELS=="2-1.2"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{authorized}=="1"
ATTRS{avoid_reset_quirk}=="0"
ATTRS{bConfigurationValue}=="1"
ATTRS{bDeviceClass}=="00"
ATTRS{bDeviceProtocol}=="00"
ATTRS{bDeviceSubClass}=="00"
ATTRS{bMaxPacketSize0}=="8"
ATTRS{bMaxPower}=="98mA"
ATTRS{bNumConfigurations}=="1"
ATTRS{bNumInterfaces}==" 2"
ATTRS{bcdDevice}=="3000"
ATTRS{bmAttributes}=="a0"
ATTRS{busnum}=="2"
ATTRS{configuration}=="RQR30.00_B0009"
ATTRS{devnum}=="12"
ATTRS{devpath}=="1.2"
ATTRS{idProduct}=="c52f"
ATTRS{idVendor}=="046d"
ATTRS{ltm_capable}=="no"
ATTRS{manufacturer}=="Logitech"
ATTRS{maxchild}=="0"
ATTRS{product}=="USB Receiver"
ATTRS{quirks}=="0x0"
ATTRS{removable}=="removable"
ATTRS{speed}=="12"
ATTRS{urbnum}=="1401"
ATTRS{version}==" 2.00"
[...]
```
Above is the truncated output received after running the command. As you can read it from the output itself, `udevadm` starts with the specified path that we provided, and gives us information about all the parent devices. Notice that attributes of the device are reported in singular form (e.g `KERNEL`), while the parent ones in plural form (e.g `KERNELS`). The parent information can be part of a rule but only one of the parents can be referenced at a time: mixing attributes of different parent devices will not work. In the rule we defined above, we used the attributes of one parent device: `idProduct` and `idVendor`. 
The next thing we have done in our rule is to use the `ENV` keyword: it can be used both to set and to try to match environment variables. We assigned a value to the `DISPLAY` and `XAUTHORITY` variables. Those variables are essential when interacting with the X server programmatically, to set up some needed information: with the `DISPLAY` variable, we specify on what machine the server is running, and what display and what screen we are referencing, and with `XAUTHORITY` we provide the path to the file which contains Xorg authentication and authorization information. This file is usually located in the user's "home" directory.
Finally we used the `RUN` keyword: this is used to run external programs. Very important: the command is not executed immediately; instead, the various actions are executed once all the rules have been parsed. In this case we used the `xinput` utility to change the status of the touchpad. I will not explain the syntax of xinput here, since it would be out of context; just notice that `16` is the id of the touchpad.
Once our rule is set, we can debug it by using the `udevadm test` command. This is useful for debugging but it doesn't really run commands specified using the `RUN` key:
```
$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
```
What we provided to the command is the action to simulate, using the `--action` option, and the sysfs path of the device. If no errors are reported, our rule should be good to go. To run it in the real world, we must reload the rules:
```
# udevadm control --reload
```
This command will reload the rules files; however, it will have effect only on newly generated events.
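If you also want already-connected devices to be re-evaluated without unplugging them, one possibility (a suggestion of my own; check `man udevadm` for the options available on your system) is to replay the events manually:
```
# udevadm trigger --action="add"
```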
We have seen the basic concepts and logic used to create a udev rule; however, we only scratched the surface of the many options and possible settings. The udev manpage provides an exhaustive list: please refer to it for more in-depth knowledge.
--------------------------------------------------------------------------------
via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux
作者:[Egidio Docile ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://disqus.com/by/egidiodocile/
[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used

View File

@ -1,3 +1,4 @@
leemeans translating
Getting Started with ncurses
======
How to use curses to draw to the terminal screen.

View File

@ -0,0 +1,102 @@
How to Install Tripwire IDS (Intrusion Detection System) on Linux
============================================================
Tripwire is a popular Linux Intrusion Detection System (IDS) that runs on systems in order to detect if unauthorized filesystem changes occurred over time.
In CentOS and RHEL distributions, tripwire is not a part of the official repositories. However, the tripwire package can be installed via the [EPEL repository][1].
To begin, first install the EPEL repository on the CentOS or RHEL system by issuing the below command.
```
# yum install epel-release
```
After you have installed the EPEL repository, make sure you update the system with the following command.
```
# yum update
```
After the update process finishes, install Tripwire IDS software by executing the below command.
```
# yum install tripwire
```
Fortunately, tripwire is a part of Ubuntu and Debian default repositories and can be installed with following commands.
```
$ sudo apt update
$ sudo apt install tripwire
```
On Ubuntu and Debian, you will be asked during the tripwire installation to choose and confirm a site key and local key passphrase. These keys are used by tripwire to secure its configuration files.
[![Create Tripwire Site and Local Key](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png)][2]
Create Tripwire Site and Local Key
On CentOS and RHEL, you need to create the tripwire keys with the below command and supply a passphrase for the site key and local key.
```
# tripwire-setup-keyfiles
```
[![Create Tripwire Keys](https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png)][3]
Create Tripwire Keys
In order to validate your system, you need to initialize the Tripwire database with the following command. Because the database hasn't been initialized yet, tripwire will display a lot of false-positive warnings.
```
# tripwire --init
```
[![Initialize Tripwire Database](https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png)][4]
Initialize Tripwire Database
Finally, generate a tripwire system report in order to check the configuration by issuing the below command. Use the `--help` switch to list all tripwire check command options.
```
# tripwire --check --help
# tripwire --check
```
After the tripwire check command completes, review the report by opening the file with the `.twr` extension from the /var/lib/tripwire/report/ directory with your favorite text editor, but before that you need to convert it to a text file.
```
# twprint --print-report --twrfile /var/lib/tripwire/report/tecmint-20170727-235255.twr > report.txt
# vi report.txt
```
[![Tripwire System Report](https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png)][5]
Tripwire System Report
That's it! You have successfully installed Tripwire on your Linux server. I hope you can now easily configure your [Tripwire IDS][6].
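As a follow-up suggestion of my own (not part of the original article), you may want to run the integrity check periodically; a simple sketch is a root cron entry that runs it every night (the binary path may differ on your distribution).
```
# crontab -e
0 3 * * * /usr/sbin/tripwire --check
```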
--------------------------------------------------------------------------------
作者简介:
I'm a computer addicted guy, a fan of open source and Linux based system software, and I have about 4 years of experience with Linux distributions, desktops, servers and bash scripting.
-------
via: https://www.tecmint.com/install-tripwire-ids-intrusion-detection-system-on-linux/
作者:[ Matei Cezar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.tecmint.com/author/cezarmatei/
[1]:https://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[2]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png
[3]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png
[4]:https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png
[5]:https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png
[6]:https://www.tripwire.com/
[7]:https://www.tecmint.com/author/cezarmatei/
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/

View File

@ -1,61 +0,0 @@
Containers, the GPL, and copyleft: No reason for concern
============================================================
### Wondering how open source licensing affects Linux containers? Here's what you need to know.
![Containers, the GPL, and copyleft: No reason for concern](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh "Containers, the GPL, and copyleft: No reason for concern")
Image by : opensource.com
Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPLs derivative work issue.
One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we havent yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But its fairly straightforward to show that containers do  _not_  raise new or concerning GPL scope issues.
Statutes and case law provide little help in interpreting a license like the GPL. On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy.
The FSFs existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][4] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications).
Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][5], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][6] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][7]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent.
Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis.
It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system.
Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][8] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][9]" communication by definition. The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis.
A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared.
On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code—though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope.
Open source license compliance obligations may arise when sharing container images. But theres nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-216.jpg?itok=R8W7jae8)][10] Richard Fontana - Richard is Senior Commercial Counsel on the Products and Technologies team in Red Hat's legal department. Most of his work focuses on open source-related legal issues.[More about me][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/containers-gpl-and-copyleft
作者:[Richard Fontana ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/fontana
[1]:https://opensource.com/article/18/1/containers-gpl-and-copyleft?rate=qTlANxnuA2tf0hcGE6Po06RGUzcbB-cBxbU3dCuCt9w
[2]:https://opensource.com/users/fontana
[3]:https://opensource.com/user/10544/feed
[4]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation
[5]:https://github.com/opencontainers/image-spec/blob/master/spec.md
[6]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2
[7]:https://www.gnu.org/licenses/gpl.html#section5
[8]:https://www.redhat.com/en/topics/microservices
[9]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins
[10]:https://opensource.com/users/fontana
[11]:https://opensource.com/users/fontana
[12]:https://opensource.com/users/fontana
[13]:https://opensource.com/tags/licensing
[14]:https://opensource.com/tags/containers

View File

@ -0,0 +1,153 @@
Building a Linux-based HPC system on the Raspberry Pi with Ansible
============================================================
### Create a high-performance computing cluster with low-cost hardware and open source software.
![Building a Linux-based HPC system on the Raspberry Pi with Ansible](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 "Building a Linux-based HPC system on the Raspberry Pi with Ansible")
Image by : opensource.com
In my [previous article for Opensource.com][14], I introduced the [OpenHPC][15] project, which aims to accelerate innovation in high-performance computing (HPC). This article goes a step further by using OpenHPC's capabilities to build a small HPC system. To call it an  _HPC system_  might sound bigger than it is, so maybe it is better to say this is a system based on the [Cluster Building Recipes][16] published by the OpenHPC project.
The resulting cluster consists of two Raspberry Pi 3 systems acting as compute nodes and one virtual machine acting as the master node:
![Map of HPC cluster](https://opensource.com/sites/default/files/u128651/hpc_with_pi-1.png "Map of HPC cluster")
My master node is running CentOS on x86_64 and my compute nodes are running a slightly modified CentOS on aarch64.
This is what the setup looks like in real life:
![HPC hardware setup](https://opensource.com/sites/default/files/u128651/hpc_with_pi-2.jpg "HPC hardware setup")
To set up my system like an HPC system, I followed some of the steps from OpenHPC's Cluster Building Recipes [install guide for CentOS 7.4/aarch64 + Warewulf + Slurm][17] (PDF). This recipe includes provisioning instructions using [Warewulf][18]; because I manually installed my three systems, I skipped the Warewulf parts and created an [Ansible playbook][19] for the steps I took.
Once my cluster was set up by the [Ansible][26] playbooks, I could start to submit jobs to my resource manager. The resource manager, [Slurm][27] in my case, is the instance in the cluster that decides where and when my jobs are executed. One possibility to start a simple job on the cluster is:
```
[ohpc@centos01 ~]$ srun hostname
calvin
```
If I need more resources, I can tell Slurm that I want to run my command on eight CPUs:
```
[ohpc@centos01 ~]$ srun -n 8 hostname
hobbes
hobbes
hobbes
hobbes
calvin
calvin
calvin
calvin
```
In the first example, Slurm ran the specified command (`hostname`) on a single CPU, and in the second example Slurm ran the command on eight CPUs. One of my compute nodes is named `calvin` and the other is named `hobbes`; that can be seen in the output of the above commands. Each of the compute nodes is a Raspberry Pi 3 with four CPU cores.
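If you want to check how Slurm sees the nodes and the job queue at any point, the standard `sinfo` and `squeue` commands can be used (shown here without output, since the details depend on your own partitions and jobs):
```
[ohpc@centos01 ~]$ sinfo
[ohpc@centos01 ~]$ squeue
```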
Another way to submit jobs to my cluster is the command `sbatch`, which can be used to execute scripts with the output written to a file instead of my terminal.
```
[ohpc@centos01 ~]$ cat script1.sh
#!/bin/sh
date
hostname
sleep 10
date
[ohpc@centos01 ~]$ sbatch script1.sh
Submitted batch job 101
```
This will create an output file called `slurm-101.out` with the following content:
```
Mon 11 Dec 16:42:31 UTC 2017
calvin
Mon 11 Dec 16:42:41 UTC 2017
```
To demonstrate the basic functionality of the resource manager, simple and serial command line tools are suitable—but a bit boring after doing all the work to set up an HPC-like system.
A more interesting application is running an [Open MPI][20] parallelized job on all available CPUs on the cluster. I'm using an application based on [Game of Life][21], which was used in a [video][22] called "Running Game of Life across multiple architectures with Red Hat Enterprise Linux." In addition to the previously used MPI-based Game of Life implementation, the version now running on my cluster colors the cells for each involved host differently. The following script starts the application interactively with a graphical output:
```
$ cat life.mpi
#!/bin/bash
module load gnu6 openmpi3
if [[ "$SLURM_PROCID" != "0" ]]; then
    exit
fi
mpirun ./mpi_life -a -p -b
```
I start the job with the following command, which tells Slurm to allocate eight CPUs for the job:
```
$ srun -n 8 --x11 life.mpi
```
For demonstration purposes, the job has a graphical interface that shows the current result of the calculation:
![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-3.png)
The position of the red cells is calculated on one of the compute nodes, and the green cells are calculated on the other compute node. I can also tell the Game of Life program to color the cell for each used CPU (there are four per compute node) differently, which leads to the following output:
![](https://opensource.com/sites/default/files/u128651/hpc_with_pi-4.png)
Thanks to the installation recipes and the software packages provided by OpenHPC, I was able to set up two compute nodes and a master node in an HPC-type configuration. I can submit jobs to my resource manager, and I can use the software provided by OpenHPC to start MPI applications utilizing all my Raspberry Pis' CPUs.
* * *
_To learn more about using OpenHPC to build a Raspberry Pi cluster, please attend Adrian Reber's talks at [DevConf.cz 2018][10], January 26-28, in Brno, Czech Republic, and at the [CentOS Dojo 2018][11], on February 2, in Brussels._
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/gotchi-square.png?itok=PJKu7LHn)][23] Adrian Reber - Adrian is a Senior Software Engineer at Red Hat and has been migrating processes at least since 2010. He started to migrate processes in a high performance computing environment, and at some point he had migrated so many processes that he got a PhD for it; since joining Red Hat he has started to migrate containers. Occasionally he still migrates single processes and is still interested in high performance computing topics.[More about me][12]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc
作者:[Adrian Reber ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/adrianreber
[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
[5]:https://opensource.com/file/384031
[6]:https://opensource.com/file/384016
[7]:https://opensource.com/file/384021
[8]:https://opensource.com/file/384026
[9]:https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc?rate=l9n6B6qRcR20LJyXEoUoWEZ4mb2nDc9sFZ1YSPc60vE
[10]:https://devconfcz2018.sched.com/event/DJYi/openhpc-introduction
[11]:https://wiki.centos.org/Events/Dojo/Brussels2018
[12]:https://opensource.com/users/adrianreber
[13]:https://opensource.com/user/188446/feed
[14]:https://opensource.com/article/17/11/openhpc
[15]:https://openhpc.community/
[16]:https://openhpc.community/downloads/
[17]:https://github.com/openhpc/ohpc/releases/download/v1.3.3.GA/Install_guide-CentOS7-Warewulf-SLURM-1.3.3-aarch64.pdf
[18]:https://en.wikipedia.org/wiki/Warewulf
[19]:http://people.redhat.com/areber/openhpc/ansible/
[20]:https://www.open-mpi.org/
[21]:https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[22]:https://www.youtube.com/watch?v=n8DvxMcOMXk
[23]:https://opensource.com/users/adrianreber
[24]:https://opensource.com/users/adrianreber
[25]:https://opensource.com/users/adrianreber
[26]:https://www.ansible.com/
[27]:https://slurm.schedmd.com/
[28]:https://opensource.com/tags/raspberry-pi
[29]:https://opensource.com/tags/programming
[30]:https://opensource.com/tags/linux
[31]:https://opensource.com/tags/ansible

View File

@ -1,109 +0,0 @@
translating by wenwensnow
Linux whereis Command Explained for Beginners (5 Examples)
======
Sometimes, while working on the command line, we just need to quickly find out the location of the binary file for a command. Yes, the [find][1] command is an option in this case, but it's a bit time consuming and will likely produce some non-desired results as well. There's a specific command that's designed for this purpose: **whereis**.
In this article, we will discuss the basics of this command using some easy to understand examples. But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04 LTS.
### Linux whereis command
The whereis command lets users locate binary, source, and manual page files for a command. Following is its syntax:
```
whereis [options] [-BMS directory... -f] name...
```
And here's how the tool's man page explains it:
```
whereis locates the binary, source and manual files for the specified command names. The supplied
names are first stripped of leading pathname components and any (single) trailing extension of the
form .ext (for example: .c) Prefixes of s. resulting from use of source code control are also dealt
with. whereis then attempts to locate the desired program in the standard Linux places, and in the
places specified by $PATH and $MANPATH.
```
The following Q&A-styled examples should give you a good idea on how the whereis command works.
### Q1. How to find location of binary file using whereis?
Suppose you want to find the location for, let's say, the whereis command itself. Then here's how you can do that:
```
whereis whereis
```
[![How to find location of binary file using whereis][2]][3]
Note that the first path in the output is what you are looking for. The whereis command also produces paths for manual pages and source code (if available, which isn't the case here). So the second path you see in the output above is the path to the whereis manual file(s).
### Q2. How to specifically search for binaries, manuals, or source code?
If you want to search specifically for, say binary, then you can use the **-b** command line option. For example:
```
whereis -b cp
```
[![How to specifically search for binaries, manuals, or source code][4]][5]
Similarly, the **-m** and **-s** options are used in case you want to find manuals and sources.
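For example, the following invocations (my own additions, following the same pattern as above) would look up only the manual pages and only the sources for cp:
```
whereis -m cp
whereis -s cp
```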
### Q3. How to limit whereis search as per requirement?
By default whereis tries to find files from hard-coded paths, which are defined with glob patterns. However, if you want, you can limit the search using specific command line options. For example, if you want whereis to only search for binary files in /usr/bin, then you can do this using the **-B** command line option.
```
whereis -B /usr/bin/ -f cp
```
**Note** : Since you can pass multiple paths this way, the **-f** command line option terminates the directory list and signals the start of file names.
Similarly, if you want to limit manual or source searches, you can use the **-M** and **-S** command line options.
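A sketch combining these options (the path is illustrative and may differ on your system) would be to restrict the manual search to a single directory:
```
whereis -M /usr/share/man/man1/ -f cp
```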
### Q4. How to see paths that whereis uses for search?
There's an option for this as well. Just run the command with **-l**.
```
whereis -l
```
Here is the (partial) list it produced for us:
[![How to see paths that whereis uses for search][6]][7]
### Q5. How to find command names with unusual entries?
For whereis, a command becomes unusual if it does not have just one entry of each explicitly requested type. For example, commands with no documentation available, or those with documentation in multiple places are considered unusual. The **-u** command line option, when used, makes whereis show the command names that have unusual entries.
For example, the following command should display files in the current directory which have no documentation file, or more than one.
```
whereis -m -u *
```
### Conclusion
Agreed, whereis is not the kind of command line tool that you'll require very frequently. But when the situation arises, it definitely makes your life easy. We've covered some of the important command line options the tool offers, so do practice them. For more info, head to its [man page][8].
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/linux-whereis-command/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
[8]:https://linux.die.net/man/1/whereis

View File

@ -0,0 +1,106 @@
An introduction to the Web::Simple Perl module, a minimalist web framework
============================================================
### Perl module Web::Simple is easy to learn and packs a big enough punch for a variety of one-offs and smaller services.
![An introduction to the Web::Simple Perl module, a minimalist web framework](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG "An introduction to the Web::Simple Perl module, a minimalist web framework")
Image credits : [You as a Machine][10]. Modified by Rikki Endsley. [CC BY-SA 2.0][11].
One of the more-prominent members of the Perl community is [Matt Trout][12], technical director at [Shadowcat Systems][13]. He's been building core tools for Perl applications for years, including being a co-maintainer of the [Catalyst][14] MVC (Model, View, Controller) web framework, creator of the [DBIx::Class][15] object-management system, and much more. In person, he's energetic, interesting, brilliant, and sometimes hard to keep up with. When Matt writes code…well, think of a runaway chainsaw, with the trigger taped down and the safety features disabled. He's off and running, and you never quite know what will come out. Two things are almost certain: the module will precisely fit the purpose Matt has in mind, and it will show up on CPAN for others to use.
One of Matt's special-purpose modules is [Web::Simple][23]. Touted as "a quick and easy way to build simple web applications," it is a stripped-down, minimalist web framework, with an easy to learn interface. Web::Simple is not at all designed for a large-scale application; however, it may be ideal for a small tool that does one or two things in a lower-traffic environment. I can also envision it being used for rapid prototyping if you wanted to create quick wireframes of a new application for demonstrations.
### Installation, and a quick "Howdy!"
You can install the module using `cpan` or `cpanm`. Once you've got it installed, you're ready to write simple web apps without having to hassle with managing the connections or any of that—just your functionality. Here's a quick example:
```
#!/usr/bin/perl
package HelloReader;
use Web::Simple;
sub dispatch_request {
  GET => sub {
    [ 200, [ 'Content-type', 'text/plain' ], [ 'Howdy, Opensource.com reader!' ] ]
  },
  '' => sub {
    [ 405, [ 'Content-type', 'text/plain' ], [ 'You cannot do that, friend. Sorry.' ] ]
  }
}
HelloReader->run_if_script;
```
There are a couple of things to notice right off. For one, I didn't `use strict` and `use warnings` like I usually would. Web::Simple imports those for you, so you don't have to. It also imports [Moo][16], a minimalist OO framework, so if you know Moo and want to use it here, you can! The heart of the system lies in the `dispatch_request` method, which you must define in your application. Each entry in the method is a match string, followed by a subroutine to respond if that string matches. The subroutine must return an array reference containing the status, headers, and content of the reply to the request.
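To try the example, one way (a sketch under my own assumptions: the code is saved as hello.pl and the Plack toolchain is installed) is to serve it with `plackup`, which Web::Simple applications can run under, and then request the page with curl; by default plackup listens on port 5000:
```
$ plackup hello.pl
$ curl http://localhost:5000/
Howdy, Opensource.com reader!
```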
### Matching
The matching system in Web::Simple is powerful, allowing for complicated matches, passing parameters in a URL, query parameters, and extension matches, in pretty much any combination you want. As you can see in the example above, starting with a capital letter will match on the request method, and you can combine that with a path match easily:
```
'GET + /person/*' => sub {
  my ($self, $person) = @_;
  # write some code to retrieve and display a person
  },
'POST + /person/* + %*' => sub {
  my ($self, $person, $params) = @_;
  # write some code to modify a person, perhaps
  }
```
In the latter case, the third part of the match indicates that we should pick up all the POST parameters and put them in a hashref called `$params` for use by the subroutine. Using `?` instead of `%` in that part of the match would pick up query parameters, as normally used in a GET request. There's also a useful exported subroutine called `redispatch_to`. This tool lets you redirect, without using a 3xx redirect; it's handled internally, invisible to the user. So:
```
'GET + /some/url' => sub {
  redispatch_to '/some/other/url';
}
```
A GET request to `/some/url` would get handled as if it was sent to `/some/other/url`, without a redirect, and the user won't see a redirect in their browser.
I've just scratched the surface with this module. If you're looking for something production-ready for larger projects, you'll be better off with [Dancer][17] or [Catalyst][18]. But with its light weight and built-in Moo integration, Web::Simple packs a big enough punch for a variety of one-offs and smaller services.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/dsc_0028.jpg?itok=RS0GBh25)][19] Ruth Holloway - Ruth Holloway has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite.Ruth is currently a Perl Developer at cPanel in Houston, and also serves as chief of staff for an obnoxious cat. In her copious free time, she occasionally reviews old romance... [more about Ruth Holloway][7][More about me][8]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework
作者:[Ruth Holloway ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/tags/python?src=programming_resource_menu1
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu2
[3]:https://opensource.com/tags/perl?src=programming_resource_menu3
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu4
[5]:http://perldoc.perl.org/functions/package.html
[6]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework?rate=ICN35y076ElpInDKoMqp-sN6f4UVF-n2Qt6dL6lb3kM
[7]:https://opensource.com/users/druthb
[8]:https://opensource.com/users/druthb
[9]:https://opensource.com/user/36051/feed
[10]:https://www.flickr.com/photos/youasamachine/8025582590/in/photolist-decd6C-7pkccp-aBfN9m-8NEffu-3JDbWb-aqf5Tx-7Z9MTZ-rnYTRu-3MeuPx-3yYwA9-6bSLvd-irmvxW-5Asr4h-hdkfCA-gkjaSQ-azcgct-gdV5i4-8yWxCA-9G1qDn-5tousu-71V8U2-73D4PA-iWcrTB-dDrya8-7GPuxe-5pNb1C-qmnLwy-oTxwDW-3bFhjL-f5Zn5u-8Fjrua-bxcdE4-ddug5N-d78G4W-gsYrFA-ocrBbw-pbJJ5d-682rVJ-7q8CbF-7n7gDU-pdfgkJ-92QMx2-aAmM2y-9bAGK1-dcakkn-8rfyTz-aKuYvX-hqWSNP-9FKMkg-dyRPkY
[11]:https://creativecommons.org/licenses/by/2.0/
[12]:https://shadow.cat/resources/bios/matt_short/
[13]:https://shadow.cat/
[14]:https://metacpan.org/pod/Catalyst
[15]:https://metacpan.org/pod/DBIx::Class
[16]:https://metacpan.org/pod/Moo
[17]:http://perldancer.org/
[18]:http://www.catalystframework.org/
[19]:https://opensource.com/users/druthb
[20]:https://opensource.com/users/druthb
[21]:https://opensource.com/users/druthb
[22]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework#comments
[23]:https://metacpan.org/pod/Web::Simple
[24]:https://opensource.com/tags/perl
[25]:https://opensource.com/tags/programming
[26]:https://opensource.com/tags/perl-column
[27]:https://opensource.com/tags/web-development

View File

@ -0,0 +1,280 @@
Running a Python application on Kubernetes
============================================================
### This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
Image by : opensource.com
Kubernetes is an open source platform that offers deployment, maintenance, and scaling features. It simplifies management of containerized Python applications while providing portability, extensibility, and self-healing capabilities.
Whether your Python applications are simple or more complex, Kubernetes lets you efficiently deploy and scale them, seamlessly rolling out new features while limiting resources to only those required.
In this article, I will describe the process of deploying a simple Python application to Kubernetes, including:
* Creating Python container images
* Publishing the container images to an image registry
* Working with persistent volume
* Deploying the Python application to Kubernetes
### Requirements
You will need Docker, kubectl, and this [source code][10].
Docker is an open platform to build and ship distributed applications. To install Docker, follow the [official documentation][11]. To verify that Docker runs on your system:
```
$ docker info
Containers: 0
Images: 289
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 289
Execution Driver: native-0.2
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
WARNING: No memory limit support
WARNING: No swap limit support
```
kubectl is a command-line interface for executing commands against a Kubernetes cluster. Run the commands below to download the latest stable kubectl binary, make it executable, and move it onto your `PATH`:
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
Deploying to Kubernetes requires a containerized application. Let's review containerizing Python applications.
### Containerization at a glance
Containerization involves enclosing an application, together with its userland and dependencies, in a container. Unlike full machine virtualization, containers share the host kernel, which keeps them lightweight while still letting you run an application on any machine without concerns about dependencies.
Roman Gaponov's [article][12] serves as a reference. Let's start by creating a container image for our Python code.
### Create a Python container image
To create these images, we will use Docker, which enables us to deploy applications inside isolated Linux software containers. Docker can build images automatically by reading the instructions in a Dockerfile.
This is the Dockerfile for our Python application:
```
FROM python:3.6
MAINTAINER XenonStack
# Create the application source code directory
RUN mkdir -p /k8s_python_sample_code/src
# Set the working directory for the container
WORKDIR /k8s_python_sample_code/src
# Install the Python dependencies
COPY requirements.txt /k8s_python_sample_code/src
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source code into the working directory,
# so that app.py sits where the CMD below expects it
COPY . /k8s_python_sample_code/src
# Application environment variables
ENV APP_ENV development
# Expose the application port
EXPOSE 5035
# Persistent data volume
VOLUME ["/app-data"]
# Run the Python application
CMD ["python", "app.py"]
```
This Dockerfile contains the instructions to run our sample Python code. It uses the Python 3.6 base image.
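The actual application lives in the sample [source code][10] repository linked above. Purely for illustration, here is a minimal sketch of what an `app.py` listening on the exposed port might look like; the use of Flask and the single route are assumptions for this sketch, not necessarily what the sample repository contains, and Flask would then need to be listed in `requirements.txt`:
```
# app.py -- hypothetical minimal application, for illustration only
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Return a simple message so the deployment can be smoke-tested
    return "Hello from Kubernetes!"

if __name__ == "__main__":
    # Listen on all interfaces, on the port exposed in the Dockerfile
    app.run(host="0.0.0.0", port=5035)
```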
### Build a Python Docker image
We can now build the Docker image from these instructions using this command:
```
docker build -t k8s_python_sample_code .
```
This command creates a Docker image for our Python application.
### Publish the container images
We can publish our Python container image to different private/public cloud repositories, like Docker Hub, AWS ECR, Google Container Registry, etc. For this tutorial, we'll use Docker Hub.
Before publishing the image, we need to tag it with a version:
```
docker tag k8s_python_sample_code:latest k8s_python_sample_code:0.1
```
### Push the image to a cloud repository
Using a Docker registry other than Docker Hub to store images requires you to add that container registry to the local Docker daemon and to the Kubernetes nodes' Docker daemons. You can look up this information for the different cloud registries. We'll use Docker Hub in this example; note that pushing to Docker Hub also requires the image tag to be prefixed with your Docker Hub username, for example `<username>/k8s_python_sample_code:0.1`.
Execute this Docker command to push the image:
```
docker push k8s_python_sample_code
```
### Working with CephFS persistent storage
Kubernetes supports many persistent storage providers, including AWS EBS, CephFS, GlusterFS, Azure Disk, NFS, etc. I will cover Kubernetes persistent storage with CephFS.
To provide CephFS-backed persistent storage to Kubernetes containers, we will create two files:
persistent-volume.yml
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-disk1
  namespace: k8s_python_sample_code
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - "172.17.0.1:6789"
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
```
persistent-volume-claim.yml
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appclaim1
  namespace: k8s_python_sample_code
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
We can now use kubectl to add the persistent volume and claim to the Kubernetes cluster:
```
$ kubectl create -f persistent-volume.yml
$ kubectl create -f persistent-volume-claim.yml
```
We are now ready to deploy to Kubernetes.
### Deploy the application to Kubernetes
To manage the last mile of deploying the application to Kubernetes, we will create two important files: a service file and a deployment file. One caveat: Kubernetes resource and namespace names must be valid DNS names made of lowercase letters, digits, and hyphens, so in practice you would replace the underscores in the names below with hyphens, for example `k8s-python-sample-code`.
Create a file and name it `k8s_python_sample_code.service.yml` with the following content:
```
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: k8s_python_sample_code
  name: k8s_python_sample_code
  namespace: k8s_python_sample_code
spec:
  type: NodePort
  ports:
  - port: 5035
  selector:
    k8s-app: k8s_python_sample_code
```
Create a file and name it `k8s_python_sample_code.deployment.yml` with the following content:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s_python_sample_code
  namespace: k8s_python_sample_code
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: k8s_python_sample_code
    spec:
      containers:
      - name: k8s_python_sample_code
        image: k8s_python_sample_code:0.1
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5035
        volumeMounts:
        - mountPath: /app-data
          name: k8s_python_sample_code
      volumes:
      - name: k8s_python_sample_code   # must match the volumeMounts name above
        persistentVolumeClaim:
          claimName: appclaim1
```
Finally, use kubectl to deploy the application to Kubernetes:
```
$ kubectl create -f k8s_python_sample_code.deployment.yml
$ kubectl create -f k8s_python_sample_code.service.yml
```
Your application was successfully deployed to Kubernetes.
You can verify whether your application is running by inspecting the running services:
```
kubectl get services
```
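If you prefer to verify from Python rather than the command line, the official Kubernetes Python client can list the same resources. This is only a sketch: it assumes you have run `pip install kubernetes`, that your kubeconfig points at the cluster, and it reuses the namespace name from the manifests above as-is:
```
# verify_deployment.py -- rough sketch using the official Kubernetes Python client
from kubernetes import client, config

# Load credentials from ~/.kube/config (the same configuration kubectl uses)
config.load_kube_config()

v1 = client.CoreV1Api()
namespace = "k8s_python_sample_code"  # namespace used in the manifests above

# Print the services and pods created for the application
for svc in v1.list_namespaced_service(namespace).items:
    print("service:", svc.metadata.name, svc.spec.type)

for pod in v1.list_namespaced_pod(namespace).items:
    print("pod:", pod.metadata.name, pod.status.phase)
```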
May Kubernetes free you from future deployment hassles!
_Want to learn more about Python? Nanjekye's book, [Python 2 and 3 Compatibility][7], offers clean ways to write code that will run on both Python 2 and 3, including detailed examples of how to convert existing Python 2-compatible code to code that will run reliably on both Python 2 and 3._
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/joannah-nanjekye.jpg?itok=F4RqEjoA)][13] Joannah Nanjekye - Straight Outta 256 , I choose Results over Reasons, Passionate Aviator, Show me the code.[More about me][8]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/running-python-application-kubernetes
作者:[Joannah Nanjekye ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/nanjekyejoannah
[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ
[6]:https://opensource.com/article/18/1/running-python-application-kubernetes?rate=D9iKksKbd9q9vOVb92Mg-v0Iyqn0QVO5fbIERTbSHz4
[7]:https://www.apress.com/gp/book/9781484229545
[8]:https://opensource.com/users/nanjekyejoannah
[9]:https://opensource.com/user/196386/feed
[10]:https://github.com/jnanjekye/k8s_python_sample_code/tree/master
[11]:https://docs.docker.com/engine/installation/
[12]:https://hackernoon.com/docker-tutorial-getting-started-with-python-redis-and-nginx-81a9d740d091
[13]:https://opensource.com/users/nanjekyejoannah
[14]:https://opensource.com/users/nanjekyejoannah
[15]:https://opensource.com/users/nanjekyejoannah
[16]:https://opensource.com/tags/python
[17]:https://opensource.com/tags/kubernetes

View File

@ -0,0 +1,109 @@
5 Real World Uses for Redis
============================================================
Redis is a powerful in-memory data structure store with many uses, including as a database, a cache, and a message broker. Most people think of it as a simple key-value store, but it has so much more power. I will be going over some real-world examples of the many things Redis can do for you.
### 1\. Full Page Cache
The first thing is full page caching. If you are using server-side rendered content, you do not want to re-render each page for every single request. Using a cache like Redis, you can cache regularly requested content and drastically decrease latency for your most requested pages, and most frameworks have hooks for caching your pages with Redis.
Simple Commands
```
// Set the page that will last 1 minute
SET key "<html>...</html>" EX 60
// Get the page
GET key
```
### 2\. Leaderboard
One of the places Redis shines is leaderboards. Because Redis is in-memory, it can handle incrementing and decrementing very quickly and efficiently. Compared to running a SQL query on every request, the performance gains are huge! Combined with Redis's sorted sets, this means you can grab only the highest-rated items in the list in milliseconds, and it is stupid easy to implement.
Simple Commands
```
// Add an item to the sorted set
ZADD sortedSet 1 "one"
// Get all items from the sorted set
ZRANGE sortedSet 0 -1
// Get all items from the sorted set with their score
ZRANGE sortedSet 0 -1 WITHSCORES
```
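If your application is written in Python, the same leaderboard idea looks roughly like this with the `redis-py` client (`pip install redis`); the key name, player names, and scores are made up for the sketch, and a local Redis server on the default port is assumed:
```
# leaderboard.py -- small sketch of a Redis-backed leaderboard using redis-py
import redis

r = redis.Redis(host="localhost", port=6379)

# Add or update player scores (redis-py 3.x ZADD takes a mapping)
r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 150})

# Increment a player's score after a game
r.zincrby("leaderboard", 10, "bob")

# Fetch the top 3 players, highest score first
for name, score in r.zrevrange("leaderboard", 0, 2, withscores=True):
    print(name.decode(), int(score))
```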
### 3\. Session Storage
The most common use for Redis I have seen is session storage. Unlike other session stores such as Memcached, Redis can persist data to disk, so if your cache goes down, all the session data will still be there when it comes back up. Even though sessions aren't mission-critical data, this feature can save your users a lot of headaches; no one likes having their session randomly dropped for no reason.
Simple Commands
```
// Set session that will last 1 minute
SET randomHash "{userId}" EX 60
// Get userId
GET randomHash
```
### 4\. Queue
One of the less common, but very useful, things you can do with Redis is queue things. Whether it's a queue of emails or data to be consumed by another application, you can create an efficient queue in Redis. Using this functionality is easy and natural for any developer who is familiar with stacks and with pushing and popping items.
Simple Commands
```
// Add a Message
HSET messages <id> <message>
ZADD due <due_timestamp> <id>
// Receiving a Message
ZRANGEBYSCORE due -inf <current_timestamp> LIMIT 0 1
HGET messages <message_id>
// Delete Message
ZREM due <message_id>
HDEL messages <message_id>
```
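Here is a rough Python version of the same delayed-queue pattern with `redis-py`; the key names and the use of a Unix timestamp as the score are carried over from the commands above, and the two-step read-then-delete is not atomic (a real consumer would wrap it in a transaction or Lua script):
```
# delayed_queue.py -- sketch of the hash + sorted-set queue pattern in redis-py
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def enqueue(msg_id, payload, due_at):
    # Store the message body and schedule it by timestamp
    r.hset("messages", msg_id, payload)
    r.zadd("due", {msg_id: due_at})

def dequeue():
    # Grab the oldest message that is already due, if any
    due_ids = r.zrangebyscore("due", "-inf", time.time(), start=0, num=1)
    if not due_ids:
        return None
    msg_id = due_ids[0]
    payload = r.hget("messages", msg_id)
    # Remove the message once it has been consumed
    r.zrem("due", msg_id)
    r.hdel("messages", msg_id)
    return payload

enqueue("1", "send welcome email", time.time())
print(dequeue())
```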
### 5\. Pub/Sub
The final real-world use for Redis I am going to bring up in this post is pub/sub. This is one of the most powerful features built into Redis, and the possibilities are limitless. You can create a real-time chat system with it, trigger notifications for friend requests on social networks, and so on. It is one of the most underrated features Redis offers, yet it is both powerful and simple to use.
Simple Commands
```
// Add a message to a channel
PUBLISH channel message
// Receive messages from a channel
SUBSCRIBE channel
```
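And a minimal Python sketch of pub/sub with `redis-py`, again assuming a local Redis server; the channel name and message are arbitrary:
```
# pubsub_demo.py -- minimal publish/subscribe sketch with redis-py
import redis

r = redis.Redis(host="localhost", port=6379)

# Subscribe first, ignoring the subscription confirmation messages
p = r.pubsub(ignore_subscribe_messages=True)
p.subscribe("notifications")

# Publish a message to the channel
r.publish("notifications", "you have a new friend request")

# Poll a few times; a real consumer would loop forever or use p.listen()
for _ in range(5):
    message = p.get_message(timeout=1.0)
    if message:
        print(message["channel"], message["data"])
        break
```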
### Conclusion
I hope you enjoyed this list of some of the many real world uses for Redis. This is just scratching the surface of what Redis can do for you, but I hope it gave you some ideas of how you can use the full potential Redis has to offer.
--------------------------------------------------------------------------------
作者简介:
Hi, my name is Ryan! I am a Software Developer with experience in many web frameworks and libraries including NodeJS, Django, Golang, and Laravel.
-------------------
via: https://ryanmccue.ca/5-real-world-uses-for-redis/
作者:[Ryan McCue ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://ryanmccue.ca/author/ryan/
[1]:https://ryanmccue.ca/author/ryan/

View File

@ -0,0 +1,68 @@
A look inside Facebook's open source program
============================================================
### Facebook developer Christine Abernathy discusses how open source helps the company share insights and boost innovation.
![A look inside Facebook's open source program](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe "A look inside Facebook's open source program")
Image by : opensource.com
Open source becomes more ubiquitous every year, appearing everywhere from [government municipalities][11] to [universities][12]. Companies of all sizes are also increasingly turning to open source software. In fact, some companies are taking open source a step further by supporting projects financially or working with developers.
Facebook's open source program, for example, encourages others to release their code as open source, while working and engaging with the community to support open source projects. [Christine Abernathy][13], a Facebook developer, open source advocate, and member of the company's open source team, visited the Rochester Institute of Technology last November, presenting at the [November edition][14] of the FOSS Talks speaker series. In her talk, Abernathy explained how Facebook approaches open source and why it's an important part of the work the company does.
### Facebook and open source
Abernathy said that open source plays a fundamental role in Facebook's mission to create community and bring the world closer together. This ideological match is one motivating factor for Facebook's participation in open source. Additionally, Facebook faces unique infrastructure and development challenges, and open source provides a platform for the company to share solutions that could help others. Open source also provides a way to accelerate innovation and create better software, helping engineering teams produce better software and work more transparently. Today, Facebook's 443 projects on GitHub comprise 122,000 forks, 292,000 commits, and 732,000 followers.
![open source projects by Facebook](https://opensource.com/sites/default/files/images/life-uploads/blog-article-facebook-open-source-projects.png "open source projects by Facebook")
Some of the Facebook projects released as open source include React, GraphQL, Caffe2, and others. (Image by Christine Abernathy, used with permission)
### Lessons learned
Abernathy emphasized that Facebook has learned many lessons from the open source community, and it looks forward to learning many more. She identified the three most important ones:
* Share what's useful
* Highlight your heroes
* Fix common pain points
_Christine Abernathy visited RIT as part of the FOSS Talks speaker series. Every month, a guest speaker from the open source world shares wisdom, insight, and advice about the open source world with students interested in free and open source software. The [FOSS @ MAGIC][3] community is thankful to have Abernathy attend as a speaker._
### About the author
[![Picture of Justin W. Flory](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/october_2017_cropped_0.jpg?itok=gV-RgINC)][15] Justin W. Flory - Justin is a student at the [Rochester Institute of Technology][4], majoring in Networking and Systems Administration. He is currently a contributor to the [Fedora Project][5]. In Fedora, Justin is the editor-in-chief of the [Fedora Magazine][6], the lead of the [Community... ][7][more about Justin W. Flory][8][More about me][9]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/inside-facebooks-open-source-program
作者:[Justin W. Flory ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jflory
[1]:https://opensource.com/file/383786
[2]:https://opensource.com/article/18/1/inside-facebooks-open-source-program?rate=H9_bfSwXiJfi2tvOLiDxC_tbC2xkEOYtCl-CiTq49SA
[3]:http://foss.rit.edu/
[4]:https://www.rit.edu/
[5]:https://fedoraproject.org/wiki/Overview
[6]:https://fedoramagazine.org/
[7]:https://fedoraproject.org/wiki/CommOps
[8]:https://opensource.com/users/jflory
[9]:https://opensource.com/users/jflory
[10]:https://opensource.com/user/74361/feed
[11]:https://opensource.com/article/17/8/tirana-government-chooses-open-source
[12]:https://opensource.com/article/16/12/2016-election-night-hackathon
[13]:https://twitter.com/abernathyca
[14]:https://www.eventbrite.com/e/fossmagic-talks-open-source-facebook-with-christine-abernathy-tickets-38955037566#
[15]:https://opensource.com/users/jflory
[16]:https://opensource.com/users/jflory
[17]:https://opensource.com/users/jflory
[18]:https://opensource.com/article/18/1/inside-facebooks-open-source-program#comments

View File

@ -0,0 +1,245 @@
CopperheadOS: Security features, installing apps, and more
============================================================
### Fly your open source flag proudly with Copperhead, a mobile OS that takes its FOSS commitment seriously.
![Android security and privacy](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/android_security_privacy.png?itok=MPHAV5mL "Android security and privacy")
Image by : Norebbo via [Flickr][15] (Original: [public domain][16]). Modified by Opensource.com. [CC BY-SA 4.0][17].
_Editor's note: CopperheadOS is [licensed][11] under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (userspace) and GPL2 license (kernel). It is also based on Android Open Source Project (AOSP)._
Several years ago, I made the decision to replace proprietary technologies (mainly Apple products) with technology that ran on free and open source software (FOSS). I can't say it was easy, but I now happily use FOSS for pretty much everything.
The hardest part involved my mobile handset. There are basically only two choices today for phones and tablets: Apple's iOS or Google's Android. Since Android is open source, it seemed the obvious choice, but I was frustrated by both the lack of open source applications on Android and the pervasiveness of Google on those devices.
So I entered the world of custom ROMs. These are projects that take the base [Android Open Source Project][18] (AOSP) and customize it. Almost all these projects allow you to install the standard Google applications as a separate package, called GApps, and you can have as much or as little Google presence on your phone as you like. GApps packages come in a number of flavors, from the full suite of apps that Google ships with its devices to a "pico" version that includes just the minimal amount of software needed to run the Google Play Store, and from there you can add what you like.
I started out using CyanogenMod, but when that project went in a direction I didn't like, I switched to OmniROM. I was quite happy with it, but still wondered what information I was sending to Google behind the scenes.
Then I found out about [CopperheadOS][19]. Copperhead is a version of AOSP that focuses on delivering the most secure Android experience possible. I've been using it for a year now and have been quite happy with it.
Unlike other custom ROMs that strive to add lots of new functionality, Copperhead runs a pretty vanilla version of AOSP. Also, while the first thing you usually do when playing with a custom ROM is to add root access to the device, not only does Copperhead prevent that, it also requires that you have a device that has verified boot, so there's no unlocking the bootloader. This is to prevent malicious code from getting access to the handset.
Copperhead starts with a hardened version of the AOSP baseline, including full encryption, and then adds a [ton of stuff][20] I can only pretend to understand. It also applies a number of kernel and Android patches before they are applied to the mainline Android releases.
### [copperos_extrapatches.png][1]
![About phone with extra patches](https://opensource.com/sites/default/files/u128651/copperos_extrapatches.png "About phone with extra patches")
It has a couple of more obvious features that I like. If you use a PIN to unlock your device, there is an option to scramble the digits.
### [copperos_scrambleddigits.png][2]
![Option to scramble digits](https://opensource.com/sites/default/files/u128651/copperos_scrambleddigits.png "Option to scramble digits")
This should prevent any casual shoulder-surfer from figuring out your PIN, although it can make it a bit more difficult to unlock your device while, say, driving (but no one should be using their handset in the car, right?).
Another issue it addresses involves tracking people by monitoring their WiFi MAC address. Most devices that use WiFi perform active scanning for wireless access points. This protocol includes the MAC address of the interface, and there are a number of ways people can use [mobile location analytics][21] to track your movement. Copperhead has an option to randomize your MAC address, which counters this process.
### [copperos_randommac.png][3]
![Randomize MAC address](https://opensource.com/sites/default/files/u128651/copperos_randommac.png "Randomize MAC address")
### Installing apps
This all sounds pretty good, right? Well, here comes the hard part. While Android is open source, much of the Google code, including the [Google Play Store][22], is not. If you install the Play Store and the code necessary for it to work, you allow Google to install software without your permission. [Google Play's terms of service][23] says:
> "Google may update any Google app or any app you have downloaded from Google Play to a new version of such app, irrespective of any update settings that you may have selected within the Google Play app or your Device, if Google determines that the update will fix a critical security vulnerability related to the app."
This is not acceptable from a security standpoint, so you cannot install Google applications on a Copperhead device.
This took some getting used to, as I had come to rely on things such as Google Maps. The default application repository that ships with Copperhead is [F-Droid][24], which contains only FOSS applications. While I previously used many FOSS applications on Android, it took some effort to use  _nothing but_  free software. I did find some ways to cheat this system, and I'll cover that below. First, here are some of the applications I've grown to love from F-Droid.
### F-Droid favorites
**K-9 Mail**
### [copperheados_k9mail.png][4]
![K-9 Mail](https://opensource.com/sites/default/files/u128651/copperheados_k9mail.png "K-9 Mail")
Even before I started using Copperhead, I loved [K-9 Mail][25]. This is simply the best mobile email client I've found, period, and it is one of the first things I install on any new device. I even use it to access my Gmail account, via IMAP and SMTP.
**Open Camera**
### [copperheados_cameraapi.png][5]
![Open Camera](https://opensource.com/sites/default/files/u128651/copperheados_cameraapi.png "Open Camera")
Copperhead runs only on rather new hardware, and I was consistently disappointed in the quality of the pictures from its default camera application. Then I discovered [Open Camera][26]. A full-featured camera app, it allows you to enable an advanced API to take advantage of the camera hardware. The only thing I miss is the ability to take a panoramic photo.
**Amaze**
### [copperheados_amaze.png][6]
![Amaze](https://opensource.com/sites/default/files/u128651/copperheados_amaze.png "Amaze")
[Amaze][27] is one of the best file managers I've ever used, free or not. When I need to navigate the filesystem, Amaze is my go-to app.
**Vanilla Music**
### [copperheados_vanillamusic.png][7]
![Vanilla Music](https://opensource.com/sites/default/files/u128651/copperheados_vanillamusic.png "Vanilla Music")
I was unhappy with the default music player, so I checked out a number of them on F-Droid and settled on [Vanilla Music][28]. It has an easy-to-use interface and interacts well with my Bluetooth devices.
**OCReader**
### [coperheados_ocreader.png][8]
![OCReader](https://opensource.com/sites/default/files/u128651/coperheados_ocreader.png "OCReader")
I am a big fan of [Nextcloud][29], particularly [Nextcloud News][30], a replacement for the now-defunct [Google Reader][31]. While I can access my news feeds through a web browser, I really missed the ability to manage them through a dedicated app. Enter [OCReader][32]. While it stands for "ownCloud Reader," it works with Nextcloud, and I've had very few issues with it.
**Noise**
The SMS/MMS application of choice for most privacy advocates is [Signal][33] by [Open Whisper Systems][34]. Endorsed by [Edward Snowden][35], Signal allows for end-to-end encrypted messaging. If the person you are messaging is also on Signal, your messages will be sent, encrypted, over a data connection facilitated by centralized servers maintained by Open Whisper Systems. It also, until recently, relied on [Google Cloud Messaging][36] (GCM) for notifications, which requires Google Play Services.
The fact that Signal requires a centralized server bothered some people, so the default application on Copperhead is a fork of Signal called [Silence][37]. This application doesn't use a centralized server but does require that all parties be on Silence for encryption to work.
Well, no one I know uses Silence. At the moment you can't even get it from the Google Play Store in the U.S. due to a trademark issue, and there is no iOS client. An encrypted SMS client isn't very useful if you can't use it for encryption.
Enter [Noise][38]. Noise is another application maintained by Copperhead that is a fork of Signal that removes the need for GCM. While not available in the standard F-Droid repositories, Copperhead includes their own repository in the version of F-Droid they ship, which at the moment contains only the Noise application. This app will let you communicate securely with anyone else using Noise or Signal.
### F-Droid workarounds
**FFUpdater**
Copperhead ships with a hardened version of the Chromium web browser, but I am a Firefox fan. Unfortunately, [Firefox is no longer included][39] in the F-Droid repository. Apps on F-Droid are all built by the F-Droid maintainers, so the process for getting into F-Droid can be complicated. The [Compass app for OpenNMS][40] isn't in F-Droid because, at the moment, F-Droid does not support builds that use the [Ionic Framework][41], which Compass relies on.
Luckily, there is a simple workaround: Install the [FFUpdater][42] app on F-Droid. This allows me to install Firefox and keep it up to date through the browser itself.
**Amazon Appstore**
This brings me to a cool feature of Android 8, Oreo. In previous versions of Android, you had a single "known source" for software, usually the Google Play Store, and if you wanted to install software from another repository, you had to go to settings and allow "Install from Unknown Sources." I always had to remember to turn that off after an install to prevent malicious code from being able to install software on my device.
### [copperheados_sources.png][9]
![Allowing sources to install apps](https://opensource.com/sites/default/files/u128651/copperheados_sources.png "Allowing sources to install apps")
With Oreo, you can permanently allow a specified application to install applications. For example, I use some applications from the [Amazon Appstore][43] (such as the Amazon Shopping and Kindle apps). When I download and install the Amazon Appstore Android package (APK), I am prompted to allow the application to install apps and then I'm not asked again. Of course, this can be turned on and off on a per-application basis.
The Amazon Appstore has a number of useful apps, such as [IMDB][44] and [eBay][45]. Many of them don't require Google Services, but some do. For example, if I install the [Skype][46] app via Amazon, it starts up, but then complains about the operating system. The American Airlines app would start, then complain about an expired certificate. (I contacted them and was told they were no longer maintaining the version in the Amazon Appstore and it would be removed.) In any case, I can pretty simply install a couple of applications I like without using Google Play.
**Google Play**
Well, what about those apps you love that don't use Google Play Services but are only available through the Google Play Store? There is yet another way to safely get those apps on your Copperhead device.
This does require some technical expertise and another device. On the second device, install the [TWRP][47] recovery application. This is usually a key first step in installing any custom ROM, and TWRP is supported on a large number of devices. You will also need the Android Debug Bridge ([ADB][48]) application from the [Android SDK][49], which can be downloaded at no cost.
On the second device, use the Google Play Store to install the applications you want. Then, reboot into recovery. You can mount the system partition via TWRP; plug the device into a computer via a USB cable and you should be able to see it via ADB. There is a system directory called `/data/app`, and in it you will find all the APK files for your applications. Copy those you want to your computer (I use the ADB `pull` command and copy over the whole directory).
Disconnect that phone and connect your Copperhead device. Enable the "Transfer files" option, and you should see the storage directory mounted on your computer. Copy over the APK files for the applications you want, then install them via the Amaze file manager (just navigate to the APK file and click on it).
Note that you can do this for any application, and it might even be possible to install Google Play Services this way on Copperhead, but that kind of defeats the purpose. I use this mainly to get the [Electric Sheep][50] screensaver and a guitar tuning app I like called [Cleartune][51]. Be aware that if you install TWRP, especially on a Google Pixel, security updates may not work, as they'll expect the stock recovery. In this case you can always use [fastboot][52] to access TWRP, but leave the default recovery in place.
### Must-have apps without a workaround
Unfortunately, there are still a couple of Google apps I find it hard to live without. Google Maps is probably the main Google application I use, and yes, while I know I'm giving up my location to Google, it has saved hours of my life by routing me around traffic issues. [OpenStreetMap][53] has an app available via F-Droid, but it doesn't have the real-time information that makes Google Maps so useful. I also use Skype on occasion, usually when I am out of the country and have only a data connection (i.e., through a hotel WiFi network). It lets me call home and other places at a very affordable price.
My workaround is to carry two phones. I know this isn't an option for most people, but it is the only one I've found for now. I use my Copperhead phone for anything personal (email, contacts, calendars, pictures, etc.) and my "Googlephone" for Maps, Skype, and various games.
My dream would be for someone to perfect a hypervisor on a handset. Then I could run Copperhead and stock Google Android on the same device. I don't think anyone has a strong business reason to do it, but I do hope it happens.
### Devices that support Copperhead
Before you rush out to install Copperhead, there are some hurdles you'll have to jump. First, it is supported on only a [limited number of handsets][54], almost all of them late-model Google devices. The logic behind this is simple: Google tends to release Android security updates for its devices quickly, and I've found that Copperhead is able to follow suit within a day, if not within hours. Second, like any open source project, it has limited resources and it is difficult to support even a fraction of the devices now available to end users. Finally, if you want to run Copperhead on handsets like the Pixel and Pixel XL, you'll either have to build from source or [buy a device][55] from Copperhead directly.
When I discovered Copperhead, I had a Nexus 6P, which (along with the Nexus 5X) is one of the supported devices. This allowed me to play with and get used to the operating system. I liked it so much that I donated some money to the project, but I kind of balked at the price they were asking for Pixel and Pixel XL handsets.
Recently, though, I ended up purchasing a Pixel XL directly from Copperhead. There were a couple of reasons. One, since all of the code is available on GitHub, I set out to do [my own build][56] for a Pixel device. That process (which I never completed) made me appreciate the amount of work Copperhead puts into its project. Two, there was an article on [Slashdot][57] discussing how people were selling devices with Copperhead pre-installed and using Copperhead's update servers. I didn't appreciate that very much. Finally, I support FOSS not only by being a vocal user but also with my wallet.
### Putting the "libre" back into free
Another thing I love about FOSS is that I have options. There is even a new option to Copperhead being developed called [Eelo][58]. Created by [Gaël Duval][59], the developer of Mandrake Linux, this is a privacy-based Android operating system based on [LineageOS][60] (the descendant of CyanogenMod). While it should be supported on more handsets than Copperhead is, it is still in the development stage, and Copperhead is very stable and mature. I am eager to check it out, though.
For the year I've used CopperheadOS, I've never felt safer when using a mobile device to connect to a network. I've found the open source replacements for my old apps to be more than adequate, if not better than the original apps. I've also rediscovered the browser. Where I used to have around three to four tabs open, I now have around 10, because I've found that I usually don't need to install an app to easily access a site's content.
With companies like Google and Apple trying more and more to insinuate themselves into the lives of their users, it is nice to have an option that puts the "libre" back into free.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/balog_tarus_-_julian_-_square.jpg?itok=ZA6yem3I)][61]
Tarus Balog - Having been kicked out of some of the best colleges and universities in the country, I managed after seven years to get a BSEE and entered the telecommunications industry. I always ended up working on projects where we were trying to get the phone switch to talk to PCs. This got me interested in the creation and management of large communication networks. So I moved into the data communications field (they were separate back then) and started working with commercial network management tools... [more about Tarus Balog][12][More about me][13]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security
作者:[Tarus Balog ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sortova
[1]:https://opensource.com/file/384496
[2]:https://opensource.com/file/384501
[3]:https://opensource.com/file/384506
[4]:https://opensource.com/file/384491
[5]:https://opensource.com/file/384486
[6]:https://opensource.com/file/384481
[7]:https://opensource.com/file/384476
[8]:https://opensource.com/file/384471
[9]:https://opensource.com/file/384466
[10]:https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security?rate=P32BmRpJF5bYEYTHo4mW3Hp4XRk34Eq3QqMDf2oOGnw
[11]:https://copperhead.co/android/docs/building#redistribution
[12]:https://opensource.com/users/sortova
[13]:https://opensource.com/users/sortova
[14]:https://opensource.com/user/11447/feed
[15]:https://www.flickr.com/photos/mstable/17517955832
[16]:https://creativecommons.org/publicdomain/mark/1.0/
[17]:https://creativecommons.org/licenses/by-sa/4.0/
[18]:https://en.wikipedia.org/wiki/Android_(operating_system)#AOSP
[19]:https://copperhead.co/
[20]:https://copperhead.co/android/docs/technical_overview
[21]:https://en.wikipedia.org/wiki/Mobile_location_analytics
[22]:https://en.wikipedia.org/wiki/Google_Play#Compatibility
[23]:https://play.google.com/intl/en-us_us/about/play-terms.html
[24]:https://en.wikipedia.org/wiki/F-Droid
[25]:https://f-droid.org/en/packages/com.fsck.k9/
[26]:https://f-droid.org/en/packages/net.sourceforge.opencamera/
[27]:https://f-droid.org/en/packages/com.amaze.filemanager/
[28]:https://f-droid.org/en/packages/ch.blinkenlights.android.vanilla/
[29]:https://nextcloud.com/
[30]:https://github.com/nextcloud/news
[31]:https://en.wikipedia.org/wiki/Google_Reader
[32]:https://f-droid.org/packages/email.schaal.ocreader/
[33]:https://en.wikipedia.org/wiki/Signal_(software)
[34]:https://en.wikipedia.org/wiki/Open_Whisper_Systems
[35]:https://en.wikipedia.org/wiki/Edward_Snowden
[36]:https://en.wikipedia.org/wiki/Google_Cloud_Messaging
[37]:https://f-droid.org/en/packages/org.smssecure.smssecure/
[38]:https://github.com/copperhead/Noise
[39]:https://f-droid.org/wiki/page/org.mozilla.firefox
[40]:https://compass.opennms.io/
[41]:https://ionicframework.com/
[42]:https://f-droid.org/en/packages/de.marmaro.krt.ffupdater/
[43]:https://www.amazon.com/gp/feature.html?docId=1000626391
[44]:https://www.imdb.com/
[45]:https://www.ebay.com/
[46]:https://www.skype.com/
[47]:https://twrp.me/
[48]:https://en.wikipedia.org/wiki/Android_software_development#ADB
[49]:https://developer.android.com/studio/index.html
[50]:https://play.google.com/store/apps/details?id=com.spotworks.electricsheep&hl=en
[51]:https://play.google.com/store/apps/details?id=com.bitcount.cleartune&hl=en
[52]:https://en.wikipedia.org/wiki/Android_software_development#Fastboot
[53]:https://f-droid.org/packages/net.osmand.plus/
[54]:https://copperhead.co/android/downloads
[55]:https://copperhead.co/android/store
[56]:https://copperhead.co/android/docs/building
[57]:https://news.slashdot.org/story/17/11/12/024231/copperheados-fights-unlicensed-installations-on-nexus-phones
[58]:https://eelo.io/
[59]:https://en.wikipedia.org/wiki/Ga%C3%ABl_Duval
[60]:https://en.wikipedia.org/wiki/LineageOS
[61]:https://opensource.com/users/sortova
[62]:https://opensource.com/users/sortova
[63]:https://opensource.com/users/sortova
[64]:https://opensource.com/article/18/1/copperheados-delivers-mobile-freedom-privacy-and-security#comments
[65]:https://opensource.com/tags/mobile
[66]:https://opensource.com/tags/android

View File

@ -190,6 +190,11 @@ This wasn't the end of the story, since the next question was: What about zombie
I'll conclude by saying that this was a simpler task than this image search, and it was greatly helped by the processes I had already developed.
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/20150529_gregp.jpg?itok=nv02g6PV)][7] Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV in the 1960s. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.[More about me][8]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/parsing-html-python
@ -203,3 +208,5 @@ via: https://opensource.com/article/18/1/parsing-html-python
[a]:https://opensource.com/users/greg-p
[1]:https://www.crummy.com/software/BeautifulSoup/
[2]:https://www.kde.org/applications/utilities/kwrite/
[7]:https://opensource.com/users/greg-p
[8]:https://opensource.com/users/greg-p

View File

@ -0,0 +1,189 @@
What Happens When You Want to Create a Special File with All Special Characters in Linux?
============================================================
![special chars](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/special-chars.png?itok=EEvlt5Nw "special chars")
Learn how to handle creation of a special file filled with special characters. [Used with permission][1]
I recently joined Holberton School as a student, hoping to learn full-stack software development. What I did not expect was that in two weeks I would be pretty much proficient with creating shell scripts that would make my coding life easy and fast!
So what is this post about? It is about a novel problem that my peers and I faced when we were asked to create a file containing no regular letters or numbers, only special characters! Just to give you a look at the kind of file name we were dealing with:
### \*\\”Holberton School”\\\*$\?\*\*\*\*\*:)
What a novel file name! Of course, this question was met with the collective groaning and long drawn sighs of all 55 (batch #5) students!
![1*Lf_XPhmgm-RB5ipX_lBjsQ.gif](https://cdn-images-1.medium.com/max/1600/1*Lf_XPhmgm-RB5ipX_lBjsQ.gif)
Some proceeded to make their lives easier by breaking the file name into pieces in a doc file and adding in the **“\\” or “\”** in front of certain special characters, which resulted in this format:
#### \\*\\\\\”Holberton School\”\\\\\\*$\\?\\*\\*\\*\\*\\*:)
![1*p6s8WlysClalj0x2fQhGOg.gif](https://cdn-images-1.medium.com/max/1600/1*p6s8WlysClalj0x2fQhGOg.gif)
Everyone trying to get the \\ right
Bamboozled? Me, too! I did not want to believe that this was the only way to solve it, as I was getting frustrated with every “\\” or “\” that was required to escape and print those special characters as normal characters!
If you're new to shell scripting, here is a quick walkthrough of why so many “\\” and “\” were required, and where.
In shell scripting, double quotes (“ ”), single quotes ( ) and the backslash (\) have special usage, and once you understand and remember when and where to use them, it can make your life easier!
#### Double Quoting
The first type of quoting we will look at is double quotes. **If you place text inside double quotes, all the special characters used by the shell lose their special meaning and are treated as ordinary characters. The exceptions are “$”, “\” (backslash), and “`” (back- quote).**This means that word-splitting, pathname expansion, tilde expansion, and brace expansion are suppressed, but parameter expansion, arithmetic expansion, and command substitution are still carried out. Using double quotes, we can cope with filenames containing embedded spaces.
So this means that you can create files with names that have spaces between words, if that is your thing, but I would suggest you not do that, as it is inconvenient and rather an unpleasant experience to try to find that file when you need it!
**Quoting “THE” guide for Linux, which I follow and read like it is the Harry Potter of the Linux coding world:**
Say you were the unfortunate victim of a file called two words.txt. If you tried to use this on the command line, word-splitting would cause this to be treated as two separate arguments rather than the desired single argument:
**[me@linuxbox me]$ ls -l two words.txt**
```
ls: cannot access two: No such file or directory
ls: cannot access words.txt: No such file or directory
```
By using double quotes, you can stop the word-splitting and get the desired result; further, you can even repair the damage:
```
[me@linuxbox me]$ ls -l "two words.txt"
-rw-rw-r-- 1 me me 18 2008-02-20 13:03 two words.txt
[me@linuxbox me]$ mv "two words.txt" two_words.txt
```
There! Now we don't have to keep typing those pesky double quotes.
Now, let us talk about single quotes and their significance in the shell.
#### Single Quotes
Enclosing characters in single quotes ( ) preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.
Yes! That got me, and I was wondering how I would use it. When I was googling to find an easier way to do it, I stumbled across this piece of information on the internet:
### Strong quoting
Strong quoting is very easy to explain:
Inside a single-quoted string **nothing** is interpreted, except the single-quote that closes the string.
```
echo 'Your PATH is: $PATH'
```
`$PATH` won't be expanded, it's interpreted as ordinary text because it's surrounded by strong quotes.
In practice that means to produce a text like `Here's my test…` as a single-quoted string, **you have to leave and re-enter the single quoting to get the character "`'`" as literal text:**
```
# WRONG
echo 'Here's my test...'
```
```
# RIGHT
echo 'Here'\''s my test...'
```
```
# ALTERNATIVE: It's also possible to mix-and-match quotes for readability:
echo "Here's my test"
```
Well, now you're wondering, “well, that explains the quotes, but what about the \?”
So, for certain characters, we need a special way to escape them, like those pesky “\” we saw in that file name.
#### Escaping Characters
Sometimes you only want to quote a single character. To do this, you can precede a character with a backslash, which in this context is called the  _escape character_ . Often this is done inside double quotes to selectively prevent an expansion:
```
[me@linuxbox me]$ echo “The balance for user $USER is: \$5.00”
The balance for user me is: $5.00
```
It is also common to use escaping to eliminate the special meaning of a character in a filename. For example, it is possible to use characters in filenames that normally have special meaning to the shell. These would include “$”, “!”, “&”, “ “, and others. To include a special character in a filename you can do this:
```
[me@linuxbox me]$ mv bad\&filename good_filename
```
> _**To allow a backslash character to appear, escape it by typing “\\”. Note that within single quotes, the backslash loses its special meaning and is treated as an ordinary character.**_
Looking at the filename now we can understand better as to why the “\\” were used in front of all those “\”s.
So, to print the file name without losing “\” and other special characters what others did was to suppress the “\” with “\\” and to print the single quotes there are a few ways you can do that.
```
1. echo $'It\'s Shell Programming' # ksh, bash, and zsh only, does not expand variables
2. echo "It's Shell Programming" # all shells, expands variables
3. echo 'It'\''s Shell Programming' # all shells, single quote is outside the quotes
4. echo 'It'"'"'s Shell Programming' # all shells, single quote is inside double quotes
```
For further reading, please follow [this link][7].
Looking at option 3, I realized this would mean that I would only need to use “\” and single quotes at certain places to be able to write the whole file without getting frustrated with “\\” placements.
So with the hope in mind and lesser trial and errors I was actually able to print out the file name like this:
#### \*\\\”Holberton School”\\\\*$\?\*\*\*\*\*:)
To understand this better, I have added an **“a”** in place of my single quotes so that the file name and the process become clearer. For a better understanding, I'll break them down into modules:
![1*hP1gmzbn7G7gUEhoynj1ew.gif](https://cdn-images-1.medium.com/max/1600/1*hP1gmzbn7G7gUEhoynj1ew.gif)
#### a\*\\a \ a”Holberton School”\a \ a\\*$\?\*\*\*\*\*:)a
#### Module 1: a\*\\a
Here the use of the single quote (a) creates a safe suppression for \*\\ and, as mentioned before in strong quoting, the only way we can print the  is to leave and re-enter the single quoting to get the character.
#### Modules 2 and 4: \
The \ suppresses the single quote as a standalone module.
#### Module 3: a”Holberton School”\a
Here the use of the single quote (a) creates a safe suppression for double quotes and \ along with regular text.
#### Module 5: a\\*$\?\*\*\*\*\*:)a
Here the use of single quote (a) creates a safe suppression for all special characters being used such as *, \, $, ?, : and ).
So, in the end, I was able to be lazy and maintain my sanity, and got away with only using single quotes to create small modules and “\” in certain places.
![1*rO34jp-bYSkCnHSdwoO3qQ.gif](https://cdn-images-1.medium.com/max/1600/1*rO34jp-bYSkCnHSdwoO3qQ.gif)
And, that is how I was able to get the file to work right! After a few misses, it felt amazing and it was great to learn a new way to do things!
![1*PE9_VtcfGGQjnYMwJ8YB1A.gif](https://cdn-images-1.medium.com/max/1600/1*PE9_VtcfGGQjnYMwJ8YB1A.gif)
Handled that curve-ball pretty well! Hope this helps you in the future when, someday you might need to create a special file for a special reason in shell!
_**Mitali Sengupta** is a former digital marketing professional, currently enrolled as a full-stack engineering student at Holberton School. She is passionate about innovation in AI and Blockchain technologies. You can contact Mitali on [Twitter][4], [LinkedIn][5] or [GitHub][6]._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/what-happens-when-you-want-create-special-file-all-special-characters-linux
作者:[MITALI SENGUPTA ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mitalisengupta
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/files/images/special-charspng
[3]:mailto:me@linuxbox
[4]:https://twitter.com/aadhiBangalan
[5]:https://www.linkedin.com/in/mitali-sengupta-auger
[6]:https://github.com/MitaliSengupta
[7]:http://mywiki.wooledge.org/Quotes#Examples

View File

@ -0,0 +1,239 @@
Python + Memcached: Efficient Caching in Distributed Applications
======
When writing Python applications, caching is important. Using a cache to avoid recomputing data or accessing a slow database can provide you with a great performance boost.
Python offers built-in possibilities for caching, from a simple dictionary to a more complete data structure such as [`functools.lru_cache`][2]. The latter can cache any item using a [Least-Recently Used algorithm][3] to limit the cache size.
Those data structures are, however, by definition local to your Python process. When several copies of your application run across a large platform, using an in-memory data structure prevents sharing the cached content. This can be a problem for large-scale and distributed applications.
![](https://files.realpython.com/media/python-memcached.97e1deb2aa17.png)
Therefore, when a system is distributed across a network, it also needs a cache that is distributed across a network. Nowadays, there are plenty of network servers that offer caching capability—we already covered [how to use Redis for caching with Django][4].
As you're going to see in this tutorial, [memcached][5] is another great option for distributed caching. After a quick introduction to basic memcached usage, you'll learn about advanced patterns such as “cache and set” and using fallback caches to avoid cold cache performance issues.
### Installing memcached
Memcached is [available for many platforms][6]:
* If you run **Linux**, you can install it using `apt-get install memcached` or `yum install memcached`. This will install memcached from a pre-built package, but you can also build memcached from source, [as explained here][6].
* For **macOS**, using [Homebrew][7] is the simplest option. Just run `brew install memcached` after you've installed the Homebrew package manager.
* On **Windows** , you would have to compile memcached yourself or find [pre-compiled binaries][8].
Once installed, memcached can simply be launched by calling the `memcached` command:
```
$ memcached
```
Before you can interact with memcached from Python-land, you'll need to install a memcached client library. You'll see how to do this in the next section, along with some basic cache access operations.
### Storing and Retrieving Cached Values Using Python
If you have never used memcached, it is pretty easy to understand. It basically provides a giant network-available dictionary. This dictionary has a few properties that are different from a classical Python dictionary, mainly:
* Keys and values have to be bytes
* Keys and values are automatically deleted after an expiration time
Therefore, the two basic operations for interacting with memcached are `set` and `get`. As you might have guessed, they're used to assign a value to a key or to get a value from a key, respectively.
My preferred Python library for interacting with memcached is [`pymemcache`][9]—I recommend using it. You can simply [install it using pip][10]:
```
$ pip install pymemcache
```
The following code shows how you can connect to memcached and use it as a network-distributed cache in your Python applications:
```
>>> from pymemcache.client import base
# Don't forget to run `memcached' before running this next line:
>>> client = base.Client(('localhost', 11211))
# Once the client is instantiated, you can access the cache:
>>> client.set('some_key', 'some value')
# Retrieve previously set data again:
>>> client.get('some_key')
'some value'
```
The memcached network protocol is really simple and its implementation extremely fast, which makes it useful for storing data that would otherwise be slow to retrieve from the canonical source of data or to compute again.
While straightforward enough, this example allows storing key/value tuples across the network and accessing them through multiple, distributed, running copies of your application. This is simplistic, yet powerful. And it's a great first step towards optimizing your application.
### Automatically Expiring Cached Data
When storing data into memcached, you can set an expiration time—a maximum number of seconds for memcached to keep the key and value around. After that delay, memcached automatically removes the key from its cache.
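As a tiny self-contained illustration, the expiration time is just an extra argument to pymemcache's `set`; the 60-second value and the key name are arbitrary choices for this sketch:
```
from pymemcache.client import base

client = base.Client(('localhost', 11211))

# Cache the value for 60 seconds; memcached drops the key after that
client.set('some_key', 'some value', expire=60)

# Until the key expires, get returns the cached value; afterwards it returns None
print(client.get('some_key'))
```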
What should you set this cache time to? There is no magic number for this delay, and it will entirely depend on the type of data and application that you are working with. It could be a few seconds, or it might be a few hours.
Cache invalidation, which defines when to remove the cache because it is out of sync with the current data, is also something that your application will have to handle, especially if presenting data that is too old or stale is to be avoided.
Here again, there is no magical recipe; it depends on the type of application you are building. However, there are several outlying cases that should be handled, which we haven't yet covered in the above example.
A caching server cannot grow infinitely—memory is a finite resource. Therefore, keys will be flushed out by the caching server as soon as it needs more space to store other things.
Some keys might also be expired because they reached their expiration time (also sometimes called the “time-to-live” or TTL.) In those cases the data is lost, and the canonical data source must be queried again.
This sounds more complicated than it really is. You can generally work with the following pattern when working with memcached in Python:
```
from pymemcache.client import base
def do_some_query():
# Replace with actual querying code to a database,
# a remote REST API, etc.
return 42
# Don't forget to run `memcached' before running this code
client = base.Client(('localhost', 11211))
result = client.get('some_key')
if result is None:
# The cache is empty, need to get the value
# from the canonical source:
result = do_some_query()
# Cache the result for next time:
client.set('some_key', result)
# Whether we needed to update the cache or not,
# at this point you can work with the data
# stored in the `result` variable:
print(result)
```
> **Note:** Handling missing keys is mandatory because of normal flush-out operations. It is also obligatory to handle the cold cache scenario, i.e. when memcached has just been started. In that case, the cache will be entirely empty and the cache needs to be fully repopulated, one request at a time.
This means you should view any cached data as ephemeral. And you should never expect the cache to contain a value you previously wrote to it.
### Warming Up a Cold Cache
Some of the cold cache scenarios cannot be prevented, for example a memcached crash. But some can, for example migrating to a new memcached server.
When it is possible to predict that a cold cache scenario will happen, it is better to avoid it. A cache that needs to be refilled means that, all of a sudden, the canonical storage of the cached data will be massively hit by all the cache users who lack the cached data (also known as the [thundering herd problem][11]).
pymemcache provides a class named `FallbackClient` that helps in implementing this scenario as demonstrated here:
```
from pymemcache.client import base
from pymemcache import fallback


def do_some_query():
    # Replace with actual querying code to a database,
    # a remote REST API, etc.
    return 42


# Set `ignore_exc=True` so it is possible to shut down
# the old cache before removing its usage from
# the program, if ever necessary.
old_cache = base.Client(('localhost', 11211), ignore_exc=True)
new_cache = base.Client(('localhost', 11212))

client = fallback.FallbackClient((new_cache, old_cache))

result = client.get('some_key')

if result is None:
    # The cache is empty, need to get the value
    # from the canonical source:
    result = do_some_query()

    # Cache the result for next time:
    client.set('some_key', result)

print(result)
```
The `FallbackClient` queries the caches passed to its constructor, respecting their order. In this case, the new cache server will always be queried first, and in case of a cache miss, the old one will be queried, avoiding a possible round-trip to the primary source of data.
If any key is set, it will only be set on the new cache. After some time, the old cache can be decommissioned and the `FallbackClient` replaced directly with the `new_cache` client.
### Check And Set
When communicating with a remote cache, the usual concurrency problem comes back: there might be several clients trying to access the same key at the same time. memcached provides a check and set operation, shortened to CAS, which helps to solve this problem.
The simplest example is an application that wants to count the number of users it has. Each time a visitor connects, a counter is incremented by 1. Using memcached, a simple implementation would be:
```
def on_visit(client):
    result = client.get('visitors')
    if result is None:
        result = 1
    else:
        result += 1
    client.set('visitors', result)
```
However, what happens if two instances of the application try to update this counter at the same time?
The first call to `client.get('visitors')` will return the same number of visitors for both of them, let's say it's 42. Then both will add 1, compute 43, and set the number of visitors to 43. That number is wrong: the result should be 44, i.e. 42 + 1 + 1.
To solve this concurrency issue, the CAS operation of memcached is handy. The following snippet implements a correct solution:
```
def on_visit(client):
    while True:
        result, cas = client.gets('visitors')
        if result is None:
            result = 1
        else:
            result += 1
        if client.cas('visitors', result, cas):
            break
```
The `gets` method returns the value, just like the `get` method, but it also returns a CAS value.
What is in this value is not relevant, but it is used for the subsequent `cas` method call. This method is equivalent to the `set` operation, except that it fails if the value has changed since the `gets` operation. In case of success, the loop is broken. Otherwise, the operation is restarted from the beginning.
In the scenario where two instances of the application try to update the counter at the same time, only one succeeds in moving the counter from 42 to 43. The second instance gets a `False` value returned by the `client.cas` call and has to retry the loop. It will retrieve 43 as the value this time, increment it to 44, and its `cas` call will succeed, thus solving our problem.
Incrementing a counter is interesting as an example to explain how CAS works because it is simplistic. However, memcached also provides the `incr` and `decr` methods to increment or decrement an integer in a single request, rather than doing multiple `gets`/`cas` calls. In real-world applications, `gets` and `cas` are used for more complex data types or operations.
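For completeness, here is a sketch of the visitor counter rewritten with `incr`. Note that the key has to be seeded with `set` first, since memcached only increments keys that already exist and hold a numeric value:
```
from pymemcache.client import base

client = base.Client(('localhost', 11211))

# Seed the counter once; incr only works on an existing numeric value.
client.set('visitors', 0)


def on_visit(client):
    # A single atomic, server-side increment: no gets/cas retry loop.
    return client.incr('visitors', 1)


print(on_visit(client))  # 1
print(on_visit(client))  # 2
```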
Most remote caching servers and data stores provide such a mechanism to prevent concurrency issues. It is critical to be aware of those cases to make proper use of their features.
### Beyond Caching
The simple techniques illustrated in this article showed you how easy it is to leverage memcached to speed up the performance of your Python application.
Just by using the two basic “set” and “get” operations you can often accelerate data retrieval or avoid recomputing results over and over again. With memcached you can share the cache across a large number of distributed nodes.
Other, more advanced patterns you saw in this tutorial, like the Check And Set (CAS) operation, allow you to update data stored in the cache concurrently across multiple Python threads or processes while avoiding data corruption.
If you are interested in learning more about advanced techniques to write faster and more scalable Python applications, check out [Scaling Python][12]. It covers many advanced topics such as network distribution, queuing systems, distributed hashing, and code profiling.
--------------------------------------------------------------------------------
via: https://realpython.com/blog/python/python-memcache-efficient-caching/
作者:[Julien Danjou][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://realpython.com/team/jdanjou/
[1]:https://realpython.com/blog/categories/python/
[2]:https://docs.python.org/3/library/functools.html#functools.lru_cache
[3]:https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_Recently_Used_(LRU)
[4]:https://realpython.com/blog/python/caching-in-django-with-redis/
[5]:http://memcached.org
[6]:https://github.com/memcached/memcached/wiki/Install
[7]:https://brew.sh/
[8]:https://commaster.net/content/installing-memcached-windows
[9]:https://pypi.python.org/pypi/pymemcache
[10]:https://realpython.com/learn/python-first-steps/#11-pythons-power-packagesmodules
[11]:https://en.wikipedia.org/wiki/Thundering_herd_problem
[12]:https://scaling-python.com

View File

@ -0,0 +1,104 @@
Refreshing old computers with Linux
============================================================
### A middle school's Tech Stewardship program is now an elective class for science and technology students.
![Refreshing old computers with Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa "Refreshing old computers with Linux")
Image by : opensource.com
It's nearly impossible to enter a school these days without seeing an abundance of technology. Despite this influx of computers into education, funding inequity forces school systems to make difficult choices. Some educators see things as they are and wonder, "Why?" while others see problems as opportunities and think, "Why not?"
[Andrew Dobbie][31] is one of those visionaries who saw his love of Linux and computer reimaging as a unique learning opportunity for his students.
Andrew teaches sixth grade at Centennial Senior Public School in Brampton, Ontario, Canada, and is a [Google Certified Innovator][16]. Andrew said, "Centennial Senior Public School hosts a special regional science & technology program that invites students from throughout the region to spend three years learning Ontario curriculum through the lens of science and technology." However, the school's students were in danger of falling prey to the digital divide that's exacerbated by hardware and software product lifecycles and inadequate funding.
![Tech Stewardship students working on a computer](https://opensource.com/sites/default/files/u128651/techstewardship_students.jpg "Tech Stewardship students working on a computer")
Image courtesy of [Affordable Tech for All][6]
Although there was a school-wide need for access to computers in the classrooms, Andrew and his students discovered that dozens of old computers were being shipped out of the school because they were too old and slow to keep up with the latest proprietary operating systems or function on the school's network.
Andrew saw this problem as a unique learning opportunity for his students and created the [Tech Stewardship][17] program. He works in partnership with two other teachers, Mike Doiu and Neil Lyons, and some students, who "began experimenting with open source operating systems like [Lubuntu][18] and [CubLinux][19] to help develop a solution to our in-class computer problem," he says.
The sixth-grade students deployed the reimaged computers into classrooms throughout the school. When they exhausted the school's supply of surplus computers, they sourced more free computers from a local nonprofit organization called [Renewed Computer Technology Ontario][20]. In all, the Tech Stewardship program has provided more than 200 reimaged computers for students to use in classrooms throughout the school.
![Tech Stewardship students](https://opensource.com/sites/default/files/u128651/techstewardship_class.jpg "Tech Stewardship students")
Image courtesy of [Affordable Tech for All][7]
The Tech Stewardship program is now an elective class for the school's science and technology students in grades six, seven, and eight. Not only are the students learning about computer reimaging, they're also giving back to their local communities through this open source outreach program.
### A broad impact
The Tech Stewardship program is linked directly to the school's curriculum, especially in social studies by teaching the [United Nations' Sustainable Development Goals][21] (SDGs). The program is a member of [Teach SDGs][22], and Andrew serves as a Teach SDGs ambassador. Also, as a Google Certified Innovator, Andrew partners with Google and the [EdTechTeam][23], and Tech Stewardship has participated in Ontario's [Bring it Together][24] conference for educational technology.
Andrew's students also serve as mentors to their fellow students. In one instance, a group of girls taught a grade 3 class about effective use of Google Drive and helped these younger students to make the best use of their Linux computers. Andrew said, "outreach and extension of learning beyond the classroom at Centennial is a major goal of the Tech Stewardship program."
### What the students say
Linux and open source are an integral part of the program. A girl named Ashna says, "In grade 6, Mr. Dobbie had shown us how to reimage a computer into Linux to use it for educational purposes. Since then, we have been learning more and growing." Student Shradhaa says, "At the very beginning, we didn't even know how to reimage with Linux. Mr. Dobbie told us to write steps for how to reimage Linux devices, and using those steps we are trying to reimage the computers."
![Tech Stewardship student upgrading memory](https://opensource.com/sites/default/files/u128651/techstewardship_upgrading-memory.jpg "Tech Stewardship student upgrading memory")
Image courtesy of [Affordable Tech for All][8]
The students were quick to add that Tech Stewardship has become a portal for discussion about being advocates for the change they want to see in the world. Through their hands-on activity, students learn to support the United Nations Sustainable Development goals. They also learn lessons far beyond the curriculum itself. For example, a student named Areez says he has learned how to find other resources, including donations, that allow the project to expand, since the class work upfitting older computers doesn't produce an income stream.
Another student, Harini, thinks the Tech Stewardship program has demonstrated to other students what is possible and how one small initiative can change the world. After learning about the program, 40 other schools and individuals are reimaging computers with Linux. Harini says, "The more people who use them for educational purposes, the more outstanding the future will become since those educated people will lead out new, amazing lives with jobs."
Joshua, another student in the program, sees it this way: "I thought of it as just a fun experience, but as it went on, we continued learning and understanding how what we were doing was making such a big impact on the world!" Later, he says, "a school reached out to us and asked us if we could reimage some computers for them. We went and completed the task. Then it continued to grow, as people from Europe came to see how we were fixing broken computers and started doing it when they went back."
Andrew Dobbie is keen to share his experience with schools and interested individuals. You can contact him on [Twitter][25] or through his [website][26].
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/donw2-crop.jpg?itok=OqOYd3A8)][27] Don Watkins - Educator, education technology specialist,  entrepreneur, open source advocate. M.A. in Educational Psychology, MSED in Educational Leadership, Linux system administrator, CCNA, virtualization using Virtual Box. Follow me at [@Don_Watkins .][13][More about me][14]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/new-linux-computers-classroom
作者:[Don Watkins ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/don-watkins
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[7]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[8]:https://photos.google.com/share/AF1QipPnm-q9OIQnrzDD4n7oWIBBIE7RQ6BI9lv486RaU5lKBrs88pq3gPKM8VAgY0prkw?key=cS1RdEZ3ZHdXLWp0bUwzMEk3UnFQRkUwbWl1dWhn
[9]:https://opensource.com/file/384581
[10]:https://opensource.com/file/384591
[11]:https://opensource.com/file/384586
[12]:https://opensource.com/article/18/1/new-linux-computers-classroom?rate=bK5X7pRc5y9TyY6jzOZeLDW6ehlWmNPXuP38DYsQ-6I
[13]:https://twitter.com/Don_Watkins
[14]:https://opensource.com/users/don-watkins
[15]:https://opensource.com/user/15542/feed
[16]:https://edutrainingcenter.withgoogle.com/certification_innovator
[17]:https://sites.google.com/view/mrdobbie/tech-stewardship
[18]:https://lubuntu.net/
[19]:https://en.wikipedia.org/wiki/Cub_Linux
[20]:http://www.rcto.ca/
[21]:http://www.un.org/sustainabledevelopment/sustainable-development-goals/
[22]:http://www.teachsdgs.org/
[23]:https://www.edtechteam.com/team/
[24]:http://bringittogether.ca/
[25]:https://twitter.com/A_Dobbie11
[26]:http://bit.ly/linuxresources
[27]:https://opensource.com/users/don-watkins
[28]:https://opensource.com/users/don-watkins
[29]:https://opensource.com/users/don-watkins
[30]:https://opensource.com/article/18/1/new-linux-computers-classroom#comments
[31]:https://twitter.com/A_Dobbie11
[32]:https://opensource.com/tags/education
[33]:https://opensource.com/tags/linux

View File

@ -0,0 +1,128 @@
10 things I love about Vue
============================================================
![](https://cdn-images-1.medium.com/max/1600/1*X4ipeKVYzmY2M3UPYgUYuA.png)
I love Vue. When I first looked at it in 2016, perhaps I was coming from a perspective of JavaScript framework fatigue. I'd already had experience with Backbone, Angular, React, among others and I wasn't overly enthusiastic to try a new framework. It wasn't until I read a comment on hacker news describing Vue as the new jquery of JavaScript, that my curiosity was piqued. Until that point, I had been relatively content with React; it is a good framework based on solid design principles centred around view templates, virtual DOM and reacting to state, and Vue also provides these great things. In this blog post, I aim to explore why Vue is the framework for me. I choose it above any other that I have tried. Perhaps you will agree with some of my points, but at the very least I hope to give you some insight into what it is like to develop modern JavaScript applications with Vue.
1\. Minimal Template Syntax
The template syntax which you are given by default from Vue is minimal, succinct and extendable. Like many parts of Vue, it's easy to not use the standard template syntax and instead use something like JSX (there is even an official page of documentation about how to do this), but I don't know why you would want to do that to be honest. For all that is good about JSX, there are some valid criticisms: by blurring the line between JavaScript and HTML, it makes it a bit too easy to start writing complex code in your template which should instead be separated out and written elsewhere in your JavaScript view code.
Vue instead uses standard HTML to write your templates, with a minimal template syntax for simple things such as iteratively creating elements based on the view data.
```
<template>
  <div id="app">
    <ul>
      <li v-for='number in numbers' :key='number'>{{ number }}</li>
    </ul>
    <form @submit.prevent='addNumber'>
      <input type='text' v-model='newNumber'>
      <button type='submit'>Add another number</button>
    </form>
  </div>
</template>

<script>
export default {
  name: 'app',
  methods: {
    addNumber() {
      const num = +this.newNumber;
      if (typeof num === 'number' && !isNaN(num)) {
        this.numbers.push(num);
      }
    }
  },
  data() {
    return {
      newNumber: null,
      numbers: [1, 23, 52, 46]
    };
  }
}
</script>

<style lang="scss">
ul {
  padding: 0;
  li {
    list-style-type: none;
    color: blue;
  }
}
</style>
```
I also like the short bindings provided by Vue: `:` for binding data variables into your template and `@` for binding to events. It's a small thing, but it feels nice to type and keeps your components succinct.
2\. Single File Components
When most people write Vue, they do so using single file components. Essentially it is a file with the suffix .vue containing up to 3 parts (the css, html and javascript) for each component.
This coupling of technologies feels right. It makes it easy to understand each component in a single place. It also has the nice side effect of encouraging you to keep your code short for each component. If the JavaScript, CSS and HTML for your component are taking up too many lines, then it might be time to modularise further.
When it comes to the <style> tag of a Vue component, we can add the scoped attribute. This will fully encapsulate the styling to this component. This means that if we had a .name CSS selector defined in this component, it won't apply that style in any other component. I much prefer this approach of styling view components to the approach of writing CSS in JS, which seems popular in other leading frameworks.
Another very nice thing about single file components is that they are actually valid HTML5 files. <template>, <script>, <style> are all part of the official w3c specification. This means that many tools you use as part of your development process (such as linters) can work out of the box or with minimal adaptation.
3\. Vue as the new jQuery
Really these two libraries are not similar and are doing different things. Let me provide you with a terrible analogy that I am actually quite fond of to describe the relationship of Vue and jQuery: The Beatles and Led Zeppelin. The Beatles need no introduction, they were the biggest group of the 1960s and were supremely influential. It gets harder to pin the accolade of biggest group of the 1970s but sometimes that goes to Led Zeppelin. You could say that the musical relationship between the Beatles and Led Zeppelin is tenuous and their music is distinctively different, but there is some prior art and influence to accept. Maybe 2010s JavaScript world is like the 1970s music world and as Vue gets more radio plays, it will only attract more fans.
Some of the philosophy that made jQuery so great is also present in Vue: a really easy learning curve but with all the power you need to build great web applications based on modern web standards. At its core, Vue is really just a wrapper around JavaScript objects.
4\. Easily extensible
As mentioned, Vue uses standard HTML, JS and CSS to build its components as a default, but it is really easy to plug in other technologies. If we want to use pug instead of HTML or typescript instead of JS or sass instead of CSS, it's just a matter of installing the relevant node modules and adding an attribute to the relevant section of our single file component. You could even mix and match components within a project (e.g. some components using HTML and others using pug), although I'm not sure doing this is the best practice.
5\. Virtual DOM
The virtual DOM is used in many frameworks these days and it is great. It means the framework can work out what has changed in our state and then efficiently apply DOM updates, minimizing re-rendering and optimising the performance of our application. Everyone and their mother has a Virtual DOM these days, so whilst it's not something unique, it's still very cool.
6\. Vuex is great
For most applications, managing state becomes a tricky issue which using a view library alone can not solve. Vue's solution to this is the vuex library. It's easy to set up and integrates very well with vue. Those familiar with redux will be at home here, but I find that the integration between vue and vuex is neater and more minimal than that of react and redux. Soon-to-be-standard JavaScript provides the object spread operator which allows us to merge in state or functions to manipulate state from vuex into the components that need it.
7\. Vue CLI
The CLI provided by Vue is really great and makes it easy to get started with a webpack project with Vue. Single file components support, babel, linting, testing and a sensible project structure can all be created with a single command in your terminal.
There is one thing, however, that I miss from the CLI, and that is `vue build`.
```
try this: `echo '<template><h1>Hello World!</h1></template>' > Hello.vue && vue build Hello.vue -o`
```
It looked so simple to build and run components and test them in the browser. Unfortunately, this command was later removed from vue; the recommendation now is to use poi instead. Poi is basically a wrapper around webpack, but I don't think it quite gets to the same point of simplicity as the quoted tweet.
8\. Re-rendering optimizations worked out for you
In Vue, you don't have to manually state which parts of the DOM should be re-rendered. I was never a fan of managing this on react components, such as using shouldComponentUpdate to stop the whole DOM tree from re-rendering. Vue is smart about this.
9\. Easy to get help
Vue has reached a critical mass of developers using the framework to build a wide variety of applications. The documentation is very good. Should you need further help, there are multiple channels available with many active users: stackoverflow, discord, twitter, etc. This should give you some more confidence in building an application than with some other frameworks that have fewer users.
10\. Not maintained by a single corporation
I think it's a good thing for an open source library not to have its direction steered too much by a single corporation. Issues such as the react licensing issue (now resolved) are not something Vue has had to deal with.
In summary, I think Vue is an excellent choice for whatever JavaScript project you might be starting next. The available ecosystem is larger than I covered in this blog post. For a more full-stack offering you could look at Nuxt.js. And if you want some re-usable styled components you could look at something like Vuetify. Vue has been one of the fastest growing frameworks of 2017 and I predict the growth is not going to slow down for 2018\. If you have a spare 30 minutes, why not dip your toes in and see what Vue has to offer for yourself?
P.S. The documentation gives you a great comparison to other frameworks here: [https://vuejs.org/v2/guide/comparison.html][1]
--------------------------------------------------------------------------------
via: https://medium.com/@dalaidunc/10-things-i-love-about-vue-505886ddaff2
作者:[Duncan Grant ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@dalaidunc
[1]:https://vuejs.org/v2/guide/comparison.html

View File

@ -1,159 +0,0 @@
Fastest way to unzip a zip file in Python
======
So the context is this: a zip file is uploaded into a [web service][1] and Python then needs to extract it and analyze and deal with each file within. In this particular application, it looks at each file's individual name and size, compares that to what has already been uploaded in AWS S3, and if the file is believed to be different or new, it gets uploaded to AWS S3.
[![Uploads today][2]][3]
The challenge is that these zip files that come in are huuuge. The average is 560MB but some are as much as 1GB. Within them, there are mostly plain text files but there are some binary files in there too that are huge. It's not unusual that each zip file contains 100 files and 1-3 of those make up 95% of the zip file size.
At first I tried unzipping the file, in memory, and dealing with one file at a time. That failed spectacularly with various memory explosions and EC2 running out of memory. I guess it makes sense. First you have the 1GB file in RAM, then you unzip each file and now you have possibly 2-3GB all in memory. So, the solution, after much testing, was to dump the zip file to disk (in a temporary directory in `/tmp`) and then iterate over the files. This worked much better but I still noticed the whole unzipping was taking up a huge amount of time. **Is there perhaps a way to optimize that?**
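As a rough sketch of that approach (the helper name and layout here are mine, not the actual service code), the uploaded buffer can be written out with `tempfile` and extracted before walking the result:
```
import os
import tempfile
import zipfile


def extract_to_tempdir(file_buffer):
    # Dump the in-memory upload to disk, extract it into a temporary
    # directory, and return the paths of the extracted files.
    tmp_dir = tempfile.mkdtemp(prefix='unzip-')
    zip_path = os.path.join(tmp_dir, 'upload.zip')
    with open(zip_path, 'wb') as f:
        f.write(file_buffer.read())
    dest = os.path.join(tmp_dir, 'extracted')
    zipfile.ZipFile(zip_path).extractall(dest)
    return [
        os.path.join(root, name)
        for root, _dirs, files in os.walk(dest)
        for name in files
    ]
```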
### Baseline function
First it's these common functions that simulate actually doing something with the files in the zip file:
```
def _count_file(fn):
    with open(fn, 'rb') as f:
        return _count_file_object(f)


def _count_file_object(f):
    # Note that this iterates on 'f'.
    # You *could* do 'return len(f.read())'
    # which would be faster but potentially memory
    # inefficient and unrealistic in terms of this
    # benchmark experiment.
    total = 0
    for line in f:
        total += len(line)
    return total
```
Here's the simplest one possible:
```
def f1(fn, dest):
    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        zf.extractall(dest)

    total = 0
    for root, dirs, files in os.walk(dest):
        for file_ in files:
            fn = os.path.join(root, file_)
            total += _count_file(fn)
    return total
```
If I analyze it a bit more carefully, I find that it spends about 40% doing the `extractall` and 60% doing the looping over files and reading their full length.
### First attempt
My first attempt was to try to use threads. You create an instance of `zipfile.ZipFile`, extract every file name within and start a thread for each name. Each thread is given a function that does the "meat of the work" (in this benchmark, iterating over the file and getting its total size). In reality that function does a bunch of complicated S3, Redis and PostgreSQL stuff but in my benchmark I just made it a function that figures out the total length of the file. The thread pool function:
```
def f2(fn, dest):

    def unzip_member(zf, member, dest):
        zf.extract(member, dest)
        fn = os.path.join(dest, member.filename)
        return _count_file(fn)

    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        futures = []
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for member in zf.infolist():
                futures.append(
                    executor.submit(
                        unzip_member,
                        zf,
                        member,
                        dest,
                    )
                )
            total = 0
            for future in concurrent.futures.as_completed(futures):
                total += future.result()
    return total
```
**Result: ~10% faster**
### Second attempt
So perhaps the GIL is blocking me. The natural inclination is to try to use multiprocessing to spread the work across multiple available CPUs. But doing so has the disadvantage that you can't pass around a non-pickleable object so you have to send just the filename to each future function:
```
def unzip_member_f3(zip_filepath, filename, dest):
    with open(zip_filepath, 'rb') as f:
        zf = zipfile.ZipFile(f)
        zf.extract(filename, dest)
    fn = os.path.join(dest, filename)
    return _count_file(fn)


def f3(fn, dest):
    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        futures = []
        with concurrent.futures.ProcessPoolExecutor() as executor:
            for member in zf.infolist():
                futures.append(
                    executor.submit(
                        unzip_member_f3,
                        fn,
                        member.filename,
                        dest,
                    )
                )
            total = 0
            for future in concurrent.futures.as_completed(futures):
                total += future.result()
    return total
```
**Result: ~300% faster**
### That's cheating!
The problem with using a pool of processors is that it requires that the original `.zip` file exists on disk. So in my web server, to use this solution, I'd first have to save the in-memory ZIP file to disk, then invoke this function. Not sure what the cost of that is, but it's not likely to be cheap.
Well, it doesn't hurt to poke around. Perhaps, it could be worth it if the extraction was significantly faster.
But remember! This optimization depends on using up as many CPUs as it possibly can. What if some of those other CPUs are needed for something else going on in `gunicorn`? Those other processes would have to patiently wait till there's a CPU available. Since there are other things going on in this server, I'm not sure I'm willing to let one process take over all the other CPUs.
### Conclusion
Doing it serially turns out to be quite nice. You're bound to one CPU but the performance is still pretty good. Also, just look at the difference in the code between `f1` and `f2`! With `concurrent.futures` pool classes you can cap the number of CPUs it's allowed to use but that doesn't feel great either. What if you get the number wrong in a virtual environment? Or what if the number is too low and you don't benefit from spreading the workload, and now you're just paying overhead to move the work around?
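For reference, capping the pool is just a constructor argument; the `max_workers=2` below is an arbitrary value to illustrate the trade-off:
```
import concurrent.futures


def work(n):
    # Stand-in for the real per-file work.
    return n * n


if __name__ == '__main__':
    # max_workers caps how many processes (and thus CPUs) the pool may use.
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        print(list(executor.map(work, range(8))))
```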
I'm going to stick with `zipfile.ZipFile(file_buffer).extractall(temp_dir)`. It's good enough for this.
### Want to try your hands on it?
I did my benchmarking using a `c5.4xlarge` EC2 server. The files can be downloaded from:
```
wget https://www.peterbe.com/unzip-in-parallel/hack.unzip-in-parallel.py
wget https://www.peterbe.com/unzip-in-parallel/symbols-2017-11-27T14_15_30.zip
```
The `.zip` file there is 34MB which is relatively small compared to what's happening on the server.
The `hack.unzip-in-parallel.py` is a hot mess. It contains a bunch of terrible hacks and ugly stuff but hopefully it's a start.
--------------------------------------------------------------------------------
via: https://www.peterbe.com/plog/fastest-way-to-unzip-a-zip-file-in-python
作者:[Peterbe][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.peterbe.com/
[1]:https://symbols.mozilla.org
[2]:https://cdn-2916.kxcdn.com/cache/b7/bb/b7bbcf60347a5fa91420f71bbeed6d37.png
[3]:https://cdn-2916.kxcdn.com/cache/e6/dc/e6dc20acd37d94239edbbc0727721e4a.png

View File

@ -0,0 +1,176 @@
Microservices vs. monolith: How to choose
============================================================
### Both architectures have pros and cons, and the right decision depends on your organization's unique needs.
![Microservices vs. monolith: How to choose](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_architecture_design.jpg?itok=lB_qYv-I "Microservices vs. monolith: How to choose")
Image by : 
Onasill ~ Bill Badzo on [Flickr][11]. [CC BY-NC-SA 2.0][12]. Modified by Opensource.com.
For many startups, conventional wisdom says to start with a monolith architecture over microservices. But are there exceptions to this?
The upcoming book, [_Microservices for Startups_][13], explores the benefits and drawbacks of microservices, offering insights from dozens of CTOs.
While different CTOs take different approaches when starting new ventures, they agree that context and capability are key. If you're pondering whether your business would be best served by a monolith or microservices, consider the factors discussed below.
### Understanding the spectrum
More on Microservices
* [How to explain microservices to your CEO][1]
* [Free eBook: Microservices vs. service-oriented architecture][2]
* [Secured DevOps for microservices][3]
Let's first clarify what exactly we mean by “monolith” and “microservice.”
Microservices are an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.
A monolithic application is built as a single, unified unit, and usually one massive code base. Often a monolith consists of three parts: a database, a client-side user interface (consisting of HTML pages and/or JavaScript running in a browser), and a server-side application.
“System architectures lie on a spectrum,” Zachary Crockett, CTO of [Particle][14], said in an interview. “When discussing microservices, people tend to focus on one end of that spectrum: many tiny applications passing too many messages to each other. At the other end of the spectrum, you have a giant monolith doing too many things. For any real system, there are many possible service-oriented architectures between those two extremes.”
Depending on your situation, there are good reasons to tend toward either a monolith or microservices.
"We want to use the best tool for each service." Julien Lemoine, CTO at Algolia
Contrary to what many people think, a monolith isn't a dated architecture that's best left in the past. In certain circumstances, a monolith is ideal. I spoke to Steven Czerwinski, head of engineering at [Scalyr][15] and a former Google employee, to better understand this.
“Even though we had had positive experiences of using microservices at Google, we [at Scalyr] went [for a monolith] route because having one monolithic server means less work for us as two engineers,” he explained. (This was back in the early days of Scalyr.)
But if your team is experienced with microservices and you have a clear idea of the direction you're going, microservices can be a great alternative.
Julien Lemoine, CTO at [Algolia][16], chimed in on this point: “We have always started with a microservices approach. The main goal was to be able to use different technology to build our service, for two big reasons:
* We want to use the best tool for each service. Our search API is highly optimized at the lowest level, and C++ is the perfect language for that. That said, using C++ for everything is a waste of productivity, especially to build a dashboard.
* We want the best talent, and using only one technology would limit our options. This is why we have different languages in the company.”
If your team is prepared, starting with microservices allows your organization to get used to the rhythm of developing in a microservice environment right from the start.
### Weighing the pros and cons
Before you decide which approach is best for your organization, it's important to consider the strengths and weaknesses of each.
### Monoliths
### Pros:
* **Fewer cross-cutting concerns:** Most apps have cross-cutting concerns, such as logging, rate limiting, and security features like audit trails and DOS protection. When everything is running through the same app, it's easy to address those concerns by hooking up components.
* **Less operational overhead:** There's only one application to set up for logging, monitoring, and testing. Also, it's generally less complex to deploy.
* **Performance:** A monolith architecture can offer performance advantages since shared-memory access is faster than inter-process communication (IPC).
### Cons:
* **Tightly coupled:** Monolithic app services tend to get tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability.
* **Harder to understand:** Monolithic architectures are more difficult to understand because of dependencies, side effects, and other factors that are not obvious when you're looking at a specific service or controller.
### Microservices
### Pros:
* **Better organization:** Microservice architectures are typically better organized, since each microservice has a specific job and is not concerned with the jobs of other components.
* **Decoupled:** Decoupled services are easier to recompose and reconfigure to serve different apps (for example, serving both web clients and the public API). They also allow fast, independent delivery of individual parts within a larger integrated system.
* **Performance:** Depending on how they're organized, microservices can offer performance advantages because you can isolate hot services and scale them independently of the rest of the app.
* **Fewer mistakes:** Microservices enable parallel development by establishing a strong boundary between different parts of your system. Doing this makes it more difficult to connect parts that shouldnt be connected, for example, or couple too tightly those that need to be connected.
### Cons:
* **Cross-cutting concerns across each service:** As you build a new microservice architecture, you're likely to discover cross-cutting concerns you may not have anticipated at design time. You'll either need to incur the overhead of separate modules for each cross-cutting concern (i.e., testing), or encapsulate cross-cutting concerns in another service layer through which all traffic is routed. Eventually, even monolithic architectures tend to route traffic through an outer service layer for cross-cutting concerns, but with a monolithic architecture, it's possible to delay the cost of that work until the project is more mature.
* **Higher operational overhead:** Microservices are frequently deployed on their own virtual machines or containers, causing a proliferation of VM wrangling. These tasks are frequently automated with container fleet management tools.
### Decision time
Once you understand the pros and cons of both approaches, how do you apply this information to your startup? Based on interviews with CTOs, here are three questions to guide your decision process:
**Are you in familiar territory?**
Diving directly into microservices is less risky if your team has previous domain experience (for example, in e-commerce) and knowledge concerning the needs of your customers. If you're traveling down an unknown path, on the other hand, a monolith may be a safer option.
**Is your team prepared?**
Does your team have experience with microservices? If you quadruple the size of your team within the next year, will microservices offer the best environment? Evaluating the dimensions of your team is crucial to the success of your project.
**How's your infrastructure?**
To make microservices work, you'll need a cloud-based infrastructure.
David Strauss, CTO of [Pantheon][17], explained: “[Previously], you would want to start with a monolith because you wanted to deploy one database server. The idea of having to set up a database server for every single microservice and then scale out was a mammoth task. Only a huge, tech-savvy organization could do that. Today, with services like Google Cloud and Amazon AWS, you have many options for deploying tiny things without needing to own the persistence layer for each one.”
### Evaluate the business risk
As a tech-savvy startup with high ambitions, you might think microservices is the “right” way to go. But microservices can pose a business risk. Strauss explained, “A lot of teams overbuild their project initially. Everyone wants to think their startup will be the next unicorn, and they should therefore build everything with microservices or some other hyper-scalable infrastructure. But that's usually wrong.” In these cases, Strauss continued, the areas that they thought they needed to scale are often not the ones that actually should scale first, resulting in wasted time and effort.
### Situational awareness
Ultimately, context is key. Here are some tips from CTOs:
#### When to start with a monolith
* **Your team is at founding stage:** Your team is small—say, 2 to 5 members—and is unable to tackle a broader, high-overhead microservices architecture.
* **You're building an unproven product or proof of concept:** If you're bringing a brand-new product to market, it will likely evolve over time, and a monolith is better-suited to allow for rapid product iteration. The same notion applies to a proof of concept, where your goal is to learn as much as possible as quickly as possible, even if you end up throwing it away.
* **You have no microservices experience:** Unless you can justify the risk of learning on the fly at an early stage, a monolith may be a safer approach for an inexperienced team.
#### When to start with microservices
* **You need quick, independent service delivery:** Microservices allow for fast, independent delivery of individual parts within a larger integrated system. Note that it can take some time to see service delivery gains with microservices compared to a monolith, depending on your team's size.
* **A piece of your platform needs to be extremely efficient:** If your business does intensive processing of petabytes of log volume, you'll likely want to build that service out in an efficient language like C++, while your user dashboard may be built in [Ruby on Rails][5].
* **You plan to grow your team:** Starting with microservices gets your team used to developing in separate small services from the beginning, and teams that are separated by service boundaries are easier to scale as needed.
To decide whether a monolith or microservices is right for your organization, be honest and self-aware about your context and capabilities. This will help you find the best path to grow your business.
### Topics
 [Microservices][21] [DevOps][22]
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/profile_15.jpg?itok=EaSRMCN-)][18] jakelumetta - Jake is the CEO of [ButterCMS, an API-first CMS][6]. He loves whipping up Butter puns and building tools that make developers' lives better. For more content like this, follow [@ButterCMS][7] on Twitter and [subscribe to our blog][8]. [More about me][9]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/how-choose-between-monolith-microservices
作者:[jakelumetta ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jakelumetta
[1]:https://blog.openshift.com/microservices-how-to-explain-them-to-your-ceo/?intcmp=7016000000127cYAAQ&src=microservices_resource_menu1
[2]:https://www.openshift.com/promotions/microservices.html?intcmp=7016000000127cYAAQ&src=microservices_resource_menu2
[3]:https://opensource.com/business/16/11/secured-devops-microservices?src=microservices_resource_menu3
[4]:https://opensource.com/article/18/1/how-choose-between-monolith-microservices?rate=tSotlNvwc-Itch5fhYiIn5h0L8PcUGm_qGvqSVzu9w8
[5]:http://rubyonrails.org/
[6]:https://buttercms.com/
[7]:https://twitter.com/ButterCMS
[8]:https://buttercms.com/blog/
[9]:https://opensource.com/users/jakelumetta
[10]:https://opensource.com/user/205531/feed
[11]:https://www.flickr.com/photos/onasill/16452059791/in/photolist-r4P7ci-r3xUqZ-JkWzgN-dUr8Mo-biVsvF-kA2Vot-qSLczk-nLvGTX-biVxwe-nJJmzt-omA1vW-gFtM5-8rsk8r-dk9uPv-5kja88-cv8YTq-eQqNJu-7NJiqd-pBUkk-pBUmQ-6z4dAw-pBULZ-vyM3V3-JruMsr-pBUiJ-eDrP5-7KCWsm-nsetSn-81M3EC-pBURh-HsVXuv-qjgBy-biVtvx-5KJ5zK-81F8xo-nGFQo3-nJr89v-8Mmi8L-81C9A6-qjgAW-564xeQ-ihmDuk-biVBNz-7C5VBr-eChMAV-JruMBe-8o4iKu-qjgwW-JhhFXn-pBUjw
[12]:https://creativecommons.org/licenses/by-nc-sa/2.0/
[13]:https://buttercms.com/books/microservices-for-startups/
[14]:https://www.particle.io/Particle
[15]:https://www.scalyr.com/
[16]:https://www.algolia.com/
[17]:https://pantheon.io/
[18]:https://opensource.com/users/jakelumetta
[19]:https://opensource.com/users/jakelumetta
[20]:https://opensource.com/users/jakelumetta
[21]:https://opensource.com/tags/microservices
[22]:https://opensource.com/tags/devops

View File

@ -0,0 +1,262 @@
Shallow vs Deep Copying of Python Objects
======
Assignment statements in Python do not create copies of objects, they only bind names to an object. For immutable objects, that usually doesn't make a difference.
But for working with mutable objects or collections of mutable objects, you might be looking for a way to create “real copies” or “clones” of these objects.
Essentially, you'll sometimes want copies that you can modify without automatically modifying the original at the same time. In this article I'm going to give you the rundown on how to copy or “clone” objects in Python 3 and some of the caveats involved.
> **Note:** This tutorial was written with Python 3 in mind but there is little difference between Python 2 and 3 when it comes to copying objects. When there are differences I will point them out in the text.
Let's start by looking at how to copy Python's built-in collections. Python's built-in mutable collections like [lists, dicts, and sets][3] can be copied by calling their factory functions on an existing collection:
```
new_list = list(original_list)
new_dict = dict(original_dict)
new_set = set(original_set)
```
However, this method won't work for custom objects and, on top of that, it only creates shallow copies. For compound objects like lists, dicts, and sets, there's an important difference between shallow and deep copying:
* A **shallow copy** means constructing a new collection object and then populating it with references to the child objects found in the original. In essence, a shallow copy is only one level deep. The copying process does not recurse and therefore won't create copies of the child objects themselves.
* A **deep copy** makes the copying process recursive. It means first constructing a new collection object and then recursively populating it with copies of the child objects found in the original. Copying an object this way walks the whole object tree to create a fully independent clone of the original object and all of its children.
I know, that was a bit of a mouthful. So let's look at some examples to drive home this difference between deep and shallow copies.
**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
### Making Shallow Copies
In the example below, we'll create a new nested list and then shallowly copy it with the `list()` factory function:
```
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys = list(xs) # Make a shallow copy
```
This means `ys` will now be a new and independent object with the same contents as `xs`. You can verify this by inspecting both objects:
```
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
To confirm `ys` really is independent from the original, let's devise a little experiment. You could try and add a new sublist to the original (`xs`) and then check to make sure this modification didn't affect the copy (`ys`):
```
>>> xs.append(['new sublist'])
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
As you can see, this had the expected effect. Modifying the copied list at a “superficial” level was no problem at all.
However, because we only created a shallow copy of the original list, `ys` still contains references to the original child objects stored in `xs`.
These children were not copied. They were merely referenced again in the copied list.
Therefore, when you modify one of the child objects in `xs`, this modification will be reflected in `ys` as well—that's because both lists share the same child objects. The copy is only a shallow, one level deep copy:
```
>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
```
In the above example we (seemingly) only made a change to `xs`. But it turns out that both sublists at index 1 in `xs` and `ys` were modified. Again, this happened because we had only created a shallow copy of the original list.
Had we created a deep copy of `xs` in the first step, both objects would've been fully independent. This is the practical difference between shallow and deep copies of objects.
Now you know how to create shallow copies of some of the built-in collection classes, and you know the difference between shallow and deep copying. The questions we still want answers for are:
* How can you create deep copies of built-in collections?
* How can you create copies (shallow and deep) of arbitrary objects, including custom classes?
The answer to these questions lies in the `copy` module in the Python standard library. This module provides a simple interface for creating shallow and deep copies of arbitrary Python objects.
### Making Deep Copies
Let's repeat the previous list-copying example, but with one important difference. This time we're going to create a deep copy using the `deepcopy()` function defined in the `copy` module instead:
```
>>> import copy
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs = copy.deepcopy(xs)
```
When you inspect `xs` and its clone `zs` that we created with `copy.deepcopy()`, you'll see that they both look identical again—just like in the previous example:
```
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
However, if you make a modification to one of the child objects in the original object (`xs`), you'll see that this modification won't affect the deep copy (`zs`).
Both objects, the original and the copy, are fully independent this time. `xs` was cloned recursively, including all of its child objects:
```
>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
You might want to take some time to sit down with the Python interpreter and play through these examples right about now. Wrapping your head around copying objects is easier when you get to experience and play with the examples firsthand.
By the way, you can also create shallow copies using a function in the `copy` module. The `copy.copy()` function creates shallow copies of objects.
This is useful if you need to clearly communicate that you're creating a shallow copy somewhere in your code. Using `copy.copy()` lets you indicate this fact. However, for built-in collections it's considered more Pythonic to simply use the list, dict, and set factory functions to create shallow copies.
### Copying Arbitrary Python Objects
The question we still need to answer is how do we create copies (shallow and deep) of arbitrary objects, including custom classes. Let's take a look at that now.
Again the `copy` module comes to our rescue. Its `copy.copy()` and `copy.deepcopy()` functions can be used to duplicate any object.
Once again, the best way to understand how to use these is with a simple experiment. I'm going to base this on the previous list-copying example. Let's start by defining a simple 2D point class:
```
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f'Point({self.x!r}, {self.y!r})'
```
I hope you agree that this was pretty straightforward. I added a `__repr__()` implementation so that we can easily inspect objects created from this class in the Python interpreter.
> **Note:** The above example uses a [Python 3.6 f-string][5] to construct the string returned by `__repr__`. On Python 2 and versions of Python 3 before 3.6 you'd use a different string formatting expression, for example:
```
> def __repr__(self):
>     return 'Point(%r, %r)' % (self.x, self.y)
>
```
Next up, we'll create a `Point` instance and then (shallowly) copy it, using the `copy` module:
```
>>> a = Point(23, 42)
>>> b = copy.copy(a)
```
If we inspect the contents of the original `Point` object and its (shallow) clone, we see what we'd expect:
```
>>> a
Point(23, 42)
>>> b
Point(23, 42)
>>> a is b
False
```
Here's something else to keep in mind. Because our point object uses primitive types (ints) for its coordinates, there's no difference between a shallow and a deep copy in this case. But I'll expand the example in a second.
Let's move on to a more complex example. I'm going to define another class to represent 2D rectangles. I'll do it in a way that allows us to create a more complex object hierarchy—my rectangles will use `Point` objects to represent their coordinates:
```
class Rectangle:
    def __init__(self, topleft, bottomright):
        self.topleft = topleft
        self.bottomright = bottomright

    def __repr__(self):
        return (f'Rectangle({self.topleft!r}, '
                f'{self.bottomright!r})')
```
Again, first we're going to attempt to create a shallow copy of a rectangle instance:
```
rect = Rectangle(Point(0, 1), Point(5, 6))
srect = copy.copy(rect)
```
If you inspect the original rectangle and its copy, you'll see how nicely the `__repr__()` override is working out, and that the shallow copy process worked as expected:
```
>>> rect
Rectangle(Point(0, 1), Point(5, 6))
>>> srect
Rectangle(Point(0, 1), Point(5, 6))
>>> rect is srect
False
```
Remember how the previous list example illustrated the difference between deep and shallow copies? I'm going to use the same approach here. I'll modify an object deeper in the object hierarchy, and then you'll see this change reflected in the (shallow) copy as well:
```
>>> rect.topleft.x = 999
>>> rect
Rectangle(Point(999, 1), Point(5, 6))
>>> srect
Rectangle(Point(999, 1), Point(5, 6))
```
I hope this behaved how you expected it to. Next, I'll create a deep copy of the original rectangle. Then I'll apply another modification and you'll see which objects are affected:
```
>>> drect = copy.deepcopy(srect)
>>> drect.topleft.x = 222
>>> drect
Rectangle(Point(222, 1), Point(5, 6))
>>> rect
Rectangle(Point(999, 1), Point(5, 6))
>>> srect
Rectangle(Point(999, 1), Point(5, 6))
```
Voila! This time the deep copy (`drect`) is fully independent of the original (`rect`) and the shallow copy (`srect`).
We've covered a lot of ground here, and there are still some finer points to copying objects.
It pays to go deep (ha!) on this topic, so you may want to study up on the [`copy` module documentation][6]. For example, objects can control how they're copied by defining the special methods `__copy__()` and `__deepcopy__()` on them.
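As a quick, hypothetical sketch of those hooks (the `Tagged` class below exists only for illustration):
```
import copy


class Tagged:
    def __init__(self, data, tag='original'):
        self.data = data
        self.tag = tag

    def __copy__(self):
        # Called by copy.copy(); the child object is reused, not cloned.
        return Tagged(self.data, tag='shallow copy')

    def __deepcopy__(self, memo):
        # Called by copy.deepcopy(); the memo dict tracks objects that
        # have already been copied, which handles recursive structures.
        return Tagged(copy.deepcopy(self.data, memo), tag='deep copy')


t = Tagged([[1, 2], [3, 4]])
print(copy.copy(t).tag)      # shallow copy
print(copy.deepcopy(t).tag)  # deep copy
```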
### 3 Things to Remember
* Making a shallow copy of an object won't clone child objects. Therefore, the copy is not fully independent of the original.
* A deep copy of an object will recursively clone child objects. The clone is fully independent of the original, but creating a deep copy is slower.
* You can copy arbitrary objects (including custom classes) with the `copy` module.
If you'd like to dig deeper into other intermediate-level Python programming techniques, check out this free bonus:
**Free Bonus:** [Click here to get access to a chapter from Python Tricks: The Book][4] that shows you Python's best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
--------------------------------------------------------------------------------
via: https://realpython.com/blog/python/copying-python-objects/
作者:[Dan Bader][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://realpython.com/team/dbader/
[1]:https://realpython.com/blog/categories/fundamentals/
[2]:https://realpython.com/blog/categories/python/
[3]:https://realpython.com/learn/python-first-steps/
[4]:https://realpython.com/blog/python/copying-python-objects/
[5]:https://dbader.org/blog/python-string-formatting
[6]:https://docs.python.org/3/library/copy.html

View File

@ -1,104 +0,0 @@
Why you should use named pipes on Linux
======
![](https://images.techhive.com/images/article/2017/05/blue-1845806_1280-100722976-large.jpg)
Just about every Linux user is familiar with the process of piping data from one process to another using | signs. It provides an easy way to send output from one command to another and end up with only the data you want to see without having to write scripts to do all of the selecting and reformatting.
There is another type of pipe, however, one that warrants the name "pipe" but has a very different personality. It's one that you may have never tried or even thought about -- the named pipe.
**Also read:[11 pointless but awesome Linux terminal tricks][1]**
One of the key differences between regular pipes and named pipes is that named pipes have a presence in the file system. That is, they show up as files. But unlike most files, they never appear to have contents. Even if you write a lot of data to a named pipe, the file appears to be empty.
### How to set up a named pipe on Linux
Before we look at one of these empty named pipes, let's step back and see how a named pipe is set up. You would use a command called **mkfifo**. Why the reference to "FIFO"? Because a named pipe is also known as a FIFO special file. The term "FIFO" refers to its first-in, first-out character. If you fill a dish with ice cream and then start eating it, you'd be doing a LIFO (last-in, first-out) maneuver. If you suck a milkshake through a straw, you'd be doing a FIFO one. So, here's an example of creating a named pipe.
```
$ mkfifo mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
Notice the special file type designation of "p" and the file length of zero. You can write to a named pipe by redirecting output to it and the length will still be zero.
```
$ echo "Can you read this?" > mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
So far, so good, but hit return and nothing much happens.
```
$ echo "Can you read this?" > mypipe
```
While it might not be obvious, your text has entered the pipe, but you're still standing at the _input_ end of it. You or someone else needs to be sitting at the _output_ end, ready to read the data that's being poured into the pipe; until then, it just sits there waiting to be read.
```
$ cat mypipe
Can you read this?
```
Once read, the contents of the pipe are gone.
Another way to see how a named pipe works is to perform both operations (pouring the data into the pipe and retrieving it at the other end) yourself by putting the pouring part into the background.
```
$ echo "Can you read this?" > mypipe &
[1] 79302
$ cat mypipe
Can you read this?
[1]+ Done echo "Can you read this?" > mypipe
```
Once the pipe has been read or "drained," it's empty, though it still will be visible as an empty file ready to be used again. Of course, this brings us to the "why bother?" stage.
### Why use named pipes?
Named pipes are used infrequently for a good reason. On Unix systems, there are almost always many ways to do pretty much the same thing. There are many ways to write to a file, read from a file, and empty a file, though named pipes have a certain efficiency going for them.
For one thing, named pipe content resides in memory rather than being written to disk. It is passed only when both ends of the pipe have been opened, and once they are, the writer can get ahead of the reader because the kernel buffers the data until it is read. By using named pipes, you can establish a setup in which one process writes to a pipe and another reads from it without much concern about timing or carefully orchestrating their interaction.
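If you'd rather have a program sitting at the output end, the same idea carries over to other languages as well. Here is a minimal Python sketch (the pipe name simply matches the example above) that blocks until something is poured into the pipe from the shell:
```
import os

PIPE = "mypipe"  # the named pipe created earlier with mkfifo

if not os.path.exists(PIPE):
    os.mkfifo(PIPE)  # same effect as running: mkfifo mypipe

# Opening a FIFO for reading blocks until a writer opens the other end.
with open(PIPE) as fifo:
    for line in fifo:                  # returns once the writer closes the pipe
        print("received:", line.rstrip())
```
Run it in one terminal, then `echo "Can you read this?" > mypipe` in another, and the text shows up on the reading side right away.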
You can set up a process that simply waits for data to appear at the output end of the pipe and then works with it when it does. In the command below, we use the tail command to wait for data to appear.
```
$ tail -f mypipe
```
Once a process starts feeding the pipe, we will see some output.
```
$ tail -f mypipe
Uranus replicated to WCDC7
Saturn replicated to WCDC8
Pluto replicated to WCDC9
Server replication operation completed
```
If you look at the process writing to a named pipe, you might be surprised by how few resources it uses. In the ps output below, the only significant resource use is virtual memory (the VSZ column).
```
ps u -p 80038
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
shs 80038 0.0 0.0 108488 764 pts/4 S 15:25 0:00 -bash
```
Named pipes are different enough from the more commonly used Unix/Linux pipes to warrant a different name, but "pipe" really evokes a good image of how they move data between processes, so "named pipe" fits pretty well. Maybe you'll come across a task that will benefit significantly from this very clever Unix/Linux feature.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3251853/linux/why-use-named-pipes-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,131 @@
Error establishing a database connection
======
![Error establishing a database connection][1]
“Error establishing a database connection” is a very common error you may see when trying to access your WordPress site. The database stores all the important information for your website, including your posts, comments, site configuration, user accounts, and theme and plugin settings. If the connection to your database cannot be established, your WordPress website will not load and will more than likely show the error “Error establishing a database connection”. In this tutorial we will show you how to fix the “Error establishing a database connection” error in WordPress.
The most common causes of the “Error establishing a database connection” issue are the following:
* Your database has been corrupted
* Incorrect login credentials in your WordPress configuration file (wp-config.php)
* Your MySQL service stopped working due to insufficient memory on the server (for example, because of heavy traffic) or other server problems
![Error establishing a database connection][2]
### Requirements
In order to troubleshoot the “Error establishing a database connection” issue, a few requirements must be met:
* SSH access to your server
* The database is located on the same server
* You need to know your database username, user password, and name of the database
Also, before you try to fix the “Error establishing a database connection” error, it is highly recommended that you make a backup of both your website and your database.
### 1. Corrupted database
The first step when troubleshooting the “Error establishing a database connection” problem is to check whether the error is present for both the front-end and the back-end of your site. You can access your back-end via <http://www.yourdomain.com/wp-admin> (replace “yourdomain” with your actual domain name).
If the error remains the same for both your front-end and back-end, then you should move on to the next step.
If you are able to access the back-end via <https://www.yourdomain.com/wp-admin> and you see the following message:
```
“One or more database tables are unavailable. The database may need to be repaired”
```
it means that your database has been corrupted and you need to try to repair it.
To do this, you must first enable the repair option in your wp-config.php file, located inside the WordPress site root directory, by adding the following line:
```
define('WP_ALLOW_REPAIR', true);
```
Now you can navigate to this page: <https://www.yourdomain.com/wp-admin/maint/repair.php> and click the “Repair and Optimize Database” button.
For security reasons, remember to turn off the repair option by deleting the line we added to the wp-config.php file.
If this does not fix the problem or the database cannot be repaired you will probably need to restore it from a backup if you have one available.
### 2. Check your wp-config.php file
Another reason, probably the most common one, for a failed database connection is incorrect database information set in your WordPress configuration file.
The configuration file resides in your WordPress site root directory and is called wp-config.php.
Open the file and locate the following lines:
```
define('DB_NAME', 'database_name');
define('DB_USER', 'database_username');
define('DB_PASSWORD', 'database_password');
define('DB_HOST', 'localhost');
```
Make sure the correct database name, username, and password are set. The database host should be set to “localhost” unless your database runs on a separate server.
If you ever change your database username and password you should always update this file as well.
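If you want to rule out WordPress itself, you can also test the same credentials directly from the server. Below is a small sketch that does this with the third-party PyMySQL package (an assumption on my part; any MySQL client, including the `mysql` command itself, would do the same job):
```
import pymysql  # third-party package: pip install pymysql

# Use the exact same values as in wp-config.php
try:
    conn = pymysql.connect(
        host="localhost",
        user="database_username",
        password="database_password",
        database="database_name",
    )
except pymysql.err.OperationalError as exc:
    # Wrong credentials or MySQL not running
    print("Connection failed:", exc)
else:
    print("Credentials are fine; the problem lies elsewhere.")
    conn.close()
```
If this small test connects without errors, the credentials in wp-config.php are not the culprit.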
If everything is set up properly and you are still getting the “Error establishing a database connection” error, then the problem is probably on the server side and you should move on to the next step of this tutorial.
### 3. Check your server
Depending on the resources available, during high-traffic hours your server might not be able to handle all the load, and the MySQL service may stop as a result.
You can either contact your hosting provider about this or check for yourself whether the MySQL server is running properly.
To check the status of MySQL, log in to your server via [SSH][3] and use the following command:
```
systemctl status mysql
```
Or you can check if it is up in your active processes with:
```
ps aux | grep mysql
```
If MySQL is not running, you can start it with the following command:
```
systemctl start mysql
```
You may also need to check the memory usage on your server.
To check how much RAM you have available you can use the following command:
```
free -m
```
If your server is running low on memory you may want to consider upgrading your server.
### 4. Conclusion
Most of the time, the “Error establishing a database connection” error can be fixed by following one of the steps above.
![How to Fix the Error Establishing a Database Connection in WordPress][4] Of course, you don't have to fix the “Error establishing a database connection” error yourself if you use one of our [WordPress VPS Hosting Services][5], in which case you can simply ask our expert Linux admins to fix it for you. They are available 24×7 and will take care of your request immediately.
**PS**. If you liked this post on how to fix the “Error establishing a database connection” error in WordPress, please share it with your friends on the social networks using the buttons on the left, or simply leave a reply below. Thanks.
--------------------------------------------------------------------------------
via: https://www.rosehosting.com/blog/error-establishing-a-database-connection/
作者:[RoseHosting][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.rosehosting.com
[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/error-establishing-a-database-connection.jpg
[2]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/Error-establishing-a-database-connection-e1517474875180.png
[3]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
[4]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/How-to-Fix-the-Error-Establishing-a-Database-Connection-in-WordPress.jpg
[5]:https://www.rosehosting.com/wordpress-hosting.html

View File

@ -1,3 +1,5 @@
translating---geekpi
How to reload .vimrc file without restarting vim on Linux/Unix
======

View File

@ -0,0 +1,211 @@
CompositeAcceleration
======
### Composite acceleration in the X server
One of the persistent problems with the modern X desktop is the number of moving parts required to display application content. Consider a simple PresentPixmap call as made by the Vulkan WSI or GL using DRI3:
1. Application calls PresentPixmap with new contents for its window
2. X server receives that call and pends any operation until the target frame
3. At the target frame, the X server copies the new contents into the window pixmap and delivers a Damage event to the compositor
4. The compositor responds to the damage event by copying the window pixmap contents into the next screen pixmap
5. The compositor calls PresentPixmap with the new screen contents
6. The X server receives that call and either posts a Swap call to the kernel or delays any action until the target frame
This sequence has a number of issues:
* The operation is serialized between three processes with at least three context switches involved.
* There is no traceable relation between when the application asked for the frame to be shown and when it is finally presented. Nor do we even have any way to tell the application what time that was.
* There are at least two copies of the application contents, from DRI3 buffer to window pixmap and from window pixmap to screen pixmap.
We'd also like to be able to take advantage of the multi-plane capabilities in the display engine (where available) to directly display the application contents.
### Previous Attempts
I've tried to come up with solutions to this issue a couple of times in the past.
#### Composite Redirection
My first attempt to solve (some of) this problem was through composite redirection. The idea there was to directly pass the Present'd pixmap to the compositor and let it copy the contents directly from there in constructing the new screen pixmap image. With some additional hand waving, the idea was that we could associate that final presentation with all of the associated redirected compositing operations and at least provide applications with accurate information about when their images were presented.
This fell apart when I tried to figure out how to plumb the necessary events through to the compositor and back. With that, and the realization that we still weren't solving problems inherent with the three-process dance, nor providing any path to using overlays, this solution just didn't seem worth pursuing further.
#### Automatic Compositing
More recently, Eric Anholt and I have been discussing how to have the X server do all of the compositing work by natively supporting ARGB window content. By changing compositors to place all screen content in windows, the X server could then generate the screen image by itself and not require any external compositing manager assistance for each frame.
Given that a primitive form of automatic compositing is already supported, extending that to support ARGB windows and having the X server manage the stack seemed pretty tractable. We would extend the driver interface so that drivers could perform the compositing themselves using a mixture of GPU operations and overlays.
This runs up against five hard problems though.
1. Making transitions between Manual and Automatic compositing seamless. We've seen how well the current compositing environment works when flipping compositing on and off to allow full-screen applications to use page flipping. Lots of screen flashing and application repaints.
2. Dealing with RGB windows with ARGB decorations. Right now, the window frame can be an ARGB window with the client being RGB; painting the client into the frame yields an ARGB result with the A values being 1 everywhere the client window is present.
3. Mesa currently allocates buffers exactly the size of the target drawable and assumes that the upper left corner of the buffer is the upper left corner of the drawable. If we want to place window manager decorations in the same buffer as the client and not need to copy the client contents, we would need to allocate a buffer large enough for both client and decorations, and then offset the client within that larger buffer.
4. Synchronizing window configuration and content updates with the screen presentation. One of the major features of a compositing manager is that it can construct complete and consistent frames for display; partial updates to application windows need never be shown to the user, nor does the user ever need to see the window tree partially reconfigured. To make this work with automatic compositing, we'd need to both codify frame markers within the 2D rendering stream and provide some method for collecting window configuration operations together.
5. Existing compositing managers don't do this today. Compositing managers are currently free to paint whatever they like into the screen image; requiring that they place all screen content into windows would mean they'd have to buy in to the new mechanism completely. That could still work with older X servers, but the additional overhead of more windows containing decoration content would slow performance with those systems, making migration less attractive.
I can think of plausible ways to solve the first three of these without requiring application changes, but the last two require significant systemic changes to compositing managers. Ick.
### Semi-Automatic Compositing
I was up visiting Pierre-Loup at Valve recently and we sat down for a few hours to consider how to help applications regularly present content at known times, and to always know precisely when content was actually presented. That names just one of the above issues, but when you consider the additional work required by pure manual compositing, solving that one issue is likely best achieved by solving all three.
I presented the Automatic Compositing plan and we discussed the range of issues. Pierre-Loup focused on the last problem -- getting existing Compositing Managers to adopt whatever solution we came up with. Without any easy migration path for them, it seemed like a lot to ask.
He suggested that we come up with a mechanism which would allow Compositing Managers to ease into the new architecture and slowly improve things for applications. Towards that, we focused on a much simpler problem:
> How can we get a single application at the top of the window stack to reliably display frames at the desired time, and to know when that doesn't occur.
Coming up with a solution for this led to a good discussion and a possible path to a broader solution in the future.
#### Steady-state Behavior
Let's start by ignoring how we start and stop this new mode and look at how we want applications to work when things are stable:
1. Windows not moving around
2. Other applications idle
Let's get a picture I can use to describe this:
[![][1]][1]
In this picture, the compositing manager is triple buffered (as is normal for a page flipping application) with three buffers:
1. Scanout. The image currently on the screen
2. Queued. The image queued to be displayed next
3. Render. The image being constructed from various window pixmaps and other elements.
The contents of the Scanout and Queued buffers are identical with the exception of the orange window.
The application is double buffered:
1. Current. What it has displayed for the last frame
2. Next. What it is constructing for the next frame
Ok, so in the steady state, here's what we want to happen:
1. Application calls PresentPixmap with 'Next' for its window
2. X server receives that call and copies Next to Queued.
3. X server posts a Page Flip to the kernel with the Queued buffer
4. Once the flip happens, the X server swaps the names of the Scanout and Queued buffers.
If the X server supports Overlays, then the sequence can look like:
1. Application calls PresentPixmap
2. X server receives that call and posts a Page Flip for the overlay
3. When the page flip completes, the X server notifies the client that the previous Current buffer is now idle.
When the Compositing Manager has content to update outside of the orange window, it will:
1. Compositing Manager calls PresentPixmap
2. X server receives that call and paints the Current client image into the Render buffer
3. X server swaps Render and Queued buffers
4. X server posts Page Flip for the Queued buffer
5. When the page flip occurs, the server can mark the Scanout buffer as idle and notify the Compositing Manager
If the Orange window is in an overlay, then the X server can skip step 2.
#### The Auto List
To give the Compositing Manager control over the presentation of all windows, each call to PresentPixmap by the Compositing Manager will be associated with the list of windows, the "Auto List", for which the X server will be responsible for providing suitable content. Transitioning from manual to automatic compositing can therefore be performed on a window-by-window basis, and each frame provided by the Compositing Manager will separately control how that happens.
The Steady State behavior above would be represented by having the same set of windows in the Auto List for the Scanout and Queued buffers, and when the Compositing Manager presents the Render buffer, it would also provide the same Auto List for that.
Importantly, the Auto List need not contain only children of the screen Root window. Any descendant window at all can be included, and the contents of that drawn into the image using appropriate clipping. This allows the Compositing Manager to draw the window manager frame while the client window is drawn by the X server.
Any window at all can be in the Auto List. Windows with PresentPixmap contents available would be drawn from those. Other windows would be drawn from their window pixmaps.
#### Transitioning from Manual to Auto
To transition a window from Manual mode to Auto mode, the Compositing Manager would add it to the Auto List for the Render image, and associate that Auto List with the PresentPixmap request for that image. For the first frame, the X server may not have received a PresentPixmap for the client window, and so the window contents would have to come from the Window Pixmap for the client.
I'm not sure how we'd get the Compositing Manager to provide another matching image that the X server can use for subsequent client frames; perhaps it would just create one itself?
#### Transitioning from Auto to Manual
To transition a window from Auto mode to Manual mode, the Compositing manager would remove it from the Auto List for the Render image and then paint the window contents into the render image itself. To do that, the X server would have to paint any PresentPixmap data from the client into the window pixmap; that would be done when the Compositing Manager called GetWindowPixmap.
### New Messages Required
For this to work, we need some way for the Compositing Manager to discover windows that are suitable for Auto composting. Normally, these will be windows managed by the Window Manager, but it's possible for them to be nested further within the application hierarchy, depending on how the application is constructed.
I think what we want is to tag Damage events with the source window, and perhaps additional information to help Compositing Managers determine whether it should be automatically presenting those source windows or a parent of them. Perhaps it would be helpful to also know whether the Damage event was actually caused by a PresentPixmap for the whole window?
To notify the server about the Auto List, a new request will be needed in the Present extension to set the value for a subsequent PresentPixmap request.
### Actually Drawing Frames
The DRM module in the Linux kernel doesn't provide any mechanism to remove or replace a Page Flip request. While this may get fixed at some point, we need to deal with how it works today, if only to provide reasonable support for existing kernels.
I think about the best we can do is to set a timer to fire a suitable time before vblank and have the X server wake up and execute any necessary drawing and Page Flip kernel calls. We can use feedback from the kernel to know how much slack time there was between any drawing and the vblank and adjust the timer as needed.
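To make that feedback loop a little more concrete, here is a rough sketch of the slack adjustment (written in Python purely as illustration; the function names and constants are hypothetical stand-ins, not anything that exists in the X server today):
```
SAFETY_MARGIN = 0.0005   # always try to leave at least 0.5 ms before vblank
slack = 0.002            # initial guess: wake up 2 ms ahead of vblank

def wakeup_time(next_vblank, now):
    """Schedule the wake-up early enough to draw and post the page flip."""
    return max(now, next_vblank - slack)

def note_frame_feedback(drawing_done, vblank):
    """Adjust the slack based on how much headroom the last frame had."""
    global slack
    headroom = vblank - drawing_done
    if headroom < SAFETY_MARGIN:
        slack *= 1.5     # nearly missed the deadline: wake up earlier next time
    elif headroom > 4 * SAFETY_MARGIN:
        slack *= 0.9     # plenty of headroom: shave the slack back down
```
A real implementation would of course take its timestamps from the kernel's flip completion feedback rather than computing them itself.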
Given that the goal is to provide for reliable display of the client window, it might actually be sufficient to let the client PresentPixmap request drive the display; if the Compositing Manager provides new content for a frame where the client does not, we can schedule that for display using a timer before vblank. When the Compositing Manager provides new content after the client, it would be delayed until the next frame.
### Changes in Compositing Managers
As described above, one explicit goal is to ease the burden on Compositing Managers by making them able to opt-in to this new mechanism for a limited set of windows and only for a limited set of frames. Any time they need to take control over the screen presentation, a new frame can be constructed with an empty Auto List.
### Implementation Plans
This post is the first step in developing these ideas to the point where a prototype can be built. The next step will be to take feedback and adapt the design to suit. Of course, there's always the possibility that this design will also prove unworkable in practice, but I'm hoping that this third attempt will actually succeed.
--------------------------------------------------------------------------------
via: https://keithp.com/blogs/CompositeAcceleration/
作者:[keithp][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://keithp.com
[1]:https://keithp.com/pictures/ca-steady.svg

View File

@ -0,0 +1,191 @@
How do I edit files on the command line?
======
In this tutorial, we will show you how to edit files on the command line. This article covers three command line editors, vi (or vim), nano, and emacs.
#### Editing Files with Vi or Vim Command Line Editor
To edit files on the command line, you can use an editor such as vi. To open the file, run
```
vi /path/to/file
```
Now you see the contents of the file (if there are any; note that the file is created if it does not exist yet).
The most important commands in vi are these:
Press `i` to enter the `Insert` mode. Now you can type in your text.
To leave the Insert mode press `ESC`.
To delete the character that is currently under the cursor, press `x` (you must not be in Insert mode, because if you are, you will insert the character `x` instead of deleting the one under the cursor). So if you have just opened the file with vi, you can immediately use `x` to delete characters. If you are in Insert mode, you have to leave it first with `ESC`.
If you have made changes and want to save the file, press `:x` (again you must not be in Insert mode. If you are, press `ESC` to leave it).
If you haven't made any changes, press `:q` to leave the file (but you must not be in Insert mode).
If you have made changes, but want to leave the file without saving the changes, press `:q!` (but you must not be in Insert mode).
Please note that during all these operations you can use your keyboard's arrow keys to navigate the cursor through the text.
So that was all about the vi editor. Please note that the vim editor also works more or less in the same way, although if you'd like to know vim in depth, head [here][1].
#### Editing Files with Nano Command Line Editor
Next up is the Nano editor. You can invoke it simply by running the 'nano' command:
```
nano
```
Here's what the nano UI looks like:
[![Nano command line editor][2]][3]
You can also launch the editor directly with a file.
```
nano [filename]
```
For example:
```
nano test.txt
```
[![Open a file in nano][4]][5]
The UI, as you can see, is broadly divided into four parts. The line at the top shows editor version, file being edited, and the editing status. Then comes the actual edit area where you'll see the contents of the file. The highlighted line below the edit area shows important messages, and the last two lines are really helpful for beginners as they show keyboard shortcuts that you use to perform basic tasks in nano.
So here's a quick list of some of the shortcuts that you should know upfront.
Use arrow keys to navigate the text, the Backspace key to delete text, and **Ctrl+o** to save the changes you make. When you try saving the changes, nano will ask you for confirmation (see the line below the main editor area in screenshot below):
[![Save file in nano][6]][7]
Note that at this stage, you also have an option to save in different OS formats. Pressing **Alt+d** enables the DOS format, while **Alt+m** enables the Mac format.
[![Save file ind DOS format][8]][9]
Press enter and your changes will be saved.
[![File has been saved][10]][11]
Moving on, to cut and paste lines of text use **Ctrl+k** and **Ctrl+u**. These keyboard shortcuts can also be used to cut and paste individual words, but you'll have to select the words first, something you can do by pressing **Alt+A** (with the cursor under the first character of the word) and then using the arrow keys to select the complete word.
Now comes search operations. A simple search can be initiated using **Ctrl+w** , while a search and replace operation can be done using **Ctrl+\**.
[![Search in files with nano][12]][13]
So those were some of the basic features of nano that should give you a head start if you're new to the editor. For more details, read our comprehensive coverage [here][14].
#### Editing Files with Emacs Command Line Editor
Next comes **Emacs**. If not already, you can install the editor on your system using the following command:
```
sudo apt-get install emacs
```
Like nano, you can directly open a file to edit in emacs in the following way:
```
emacs -nw [filename]
```
**Note**: The **-nw** flag makes sure emacs launches in the terminal itself, instead of in a separate graphical window, which is the default behavior.
For example:
```
emacs -nw test.txt
```
Here's the editor's UI:
[![Open file in emacs][15]][16]
Like nano, the emacs UI is also divided into several parts. The first part is the top menu area, which is similar to the one you'd see in graphical applications. Then comes the main edit area, where the text (of the file you've opened) is displayed.
Below the edit area sits another highlighted bar that shows things like the name of the file, the editing mode ('Text' in the screenshot above), and the status (** for modified, - for non-modified, and %% for read-only). Then comes the final area, where you enter commands and see output as well.
Now, coming to basic operations: after making changes, if you want to save them, use **Ctrl+x** followed by **Ctrl+s**. The last section will show you a message along the lines of '**Wrote ...**'. Here's an example:
[![Save file in emacs][17]][18]
Now, if you want to discard changes and quit the editor, use **Ctrl+x** followed by **Ctrl+c**. The editor will confirm this through a prompt - see screenshot below:
[![Discard changes in emacs][19]][20]
Type 'n' followed by a 'yes' and the editor will quit without saving the changes.
Please note that Emacs represents 'Ctrl' as 'C' and 'Alt' as 'M'. So, for example, whenever you see something like C-x, it means Ctrl+x.
As for other basic editing operations, deleting is simple, as it works through the Backspace/Delete keys that most of us are already used to. However, there are shortcuts that make your deleting experience smooth. For example, use **Ctrl+k** for deleting a complete line, **Alt+d** for deleting a word, and **Alt+k** for a sentence.
Undoing is achieved through **Ctrl+x** followed by **u**, and to redo, press **Ctrl+g** followed by **Ctrl+_**. Use **Ctrl+s** for forward search and **Ctrl+r** for reverse search.
[![Search in files with emacs][21]][22]
Moving on, to launch a replace operation, use the Alt+Shift+% keyboard shortcut. You'll be asked for the word you want to replace. Enter it. Then the editor will ask you for the replacement. For example, the following screenshot shows emacs asking user about the replacement for the word 'This'.
[![Replace text with emacs][23]][24]
Input the replacement text and press Enter. For each replacement operation emacs carries out, it'll seek your permission first:
[![Confirm text replacement][25]][26]
Press 'y' and the word will be replaced.
[![Press y to confirm][27]][28]
So that's pretty much all the basic editing operations that you should know to start using emacs. Oh, and yes, those menus at the top - we haven't discussed how to access them. Well, those can be accessed using the F10 key.
[![Basic editing operations][29]][30]
To come out of these menus, press the Esc key three times.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/faq/how-to-edit-files-on-the-command-line
作者:[falko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/vim-basics
[2]:https://www.howtoforge.com/images/command-tutorial/nano-basic-ui.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/nano-basic-ui.png
[4]:https://www.howtoforge.com/images/command-tutorial/nano-file-open.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open.png
[6]:https://www.howtoforge.com/images/command-tutorial/nano-save-changes.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/nano-save-changes.png
[8]:https://www.howtoforge.com/images/command-tutorial/nano-mac-format.png
[9]:https://www.howtoforge.com/images/command-tutorial/big/nano-mac-format.png
[10]:https://www.howtoforge.com/images/command-tutorial/nano-changes-saved.png
[11]:https://www.howtoforge.com/images/command-tutorial/big/nano-changes-saved.png
[12]:https://www.howtoforge.com/images/command-tutorial/nano-search-replace.png
[13]:https://www.howtoforge.com/images/command-tutorial/big/nano-search-replace.png
[14]:https://www.howtoforge.com/linux-nano-command/
[15]:https://www.howtoforge.com/images/command-tutorial/nano-file-open1.png
[16]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open1.png
[17]:https://www.howtoforge.com/images/command-tutorial/emacs-save.png
[18]:https://www.howtoforge.com/images/command-tutorial/big/emacs-save.png
[19]:https://www.howtoforge.com/images/command-tutorial/emacs-quit-without-saving.png
[20]:https://www.howtoforge.com/images/command-tutorial/big/emacs-quit-without-saving.png
[21]:https://www.howtoforge.com/images/command-tutorial/emacs-search.png
[22]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search.png
[23]:https://www.howtoforge.com/images/command-tutorial/emacs-search-replace.png
[24]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search-replace.png
[25]:https://www.howtoforge.com/images/command-tutorial/emacs-replace-prompt.png
[26]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replace-prompt.png
[27]:https://www.howtoforge.com/images/command-tutorial/emacs-replaced.png
[28]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replaced.png
[29]:https://www.howtoforge.com/images/command-tutorial/emacs-accessing-menus.png
[30]:https://www.howtoforge.com/images/command-tutorial/big/emacs-accessing-menus.png

View File

@ -0,0 +1,147 @@
How to Manage PGP and SSH Keys with Seahorse
============================================================
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fish-1907607_1920.jpg?itok=u07bav4m "Seahorse")
Learn how to manage both PGP and SSH keys with the Seahorse GUI tool.[Creative Commons Zero][6]
Security is tantamount to peace of mind. After all, security is a big reason why so many users migrated to Linux in the first place. But why stop with merely adopting the platform, when you can also employ several techniques and technologies to help secure your desktop or server systems?
One such technology involves keys—in the form of PGP and SSH. PGP keys allow you to encrypt and decrypt emails and files, and SSH keys allow you to log into servers with an added layer of security.
Sure, you can manage these keys via the command-line interface (CLI), but what if you're working on a desktop with a resplendent GUI? Experienced Linux users may cringe at the idea of shrugging off the command line, but not all users have the same skill set and comfort level there. Thus, the GUI!
In this article, I will walk you through the process of managing both PGP and SSH keys through the [Seahorse][14] GUI tool. Seahorse has a pretty impressive feature set; it can:
* Encrypt/decrypt/sign files and text.
* Manage your keys and keyring.
* Synchronize your keys and your keyring with remote key servers.
* Sign and publish keys.
* Cache your passphrase.
* Backup both keys and keyring.
* Add an image in any GDK-supported format as an OpenPGP photo ID.
* Create, configure, and cache SSH keys.
For those that don't know, Seahorse is a GNOME application for managing both encryption keys and passwords within the GNOME keyring. But fear not, Seahorse is available for installation on numerous desktops. And since Seahorse is found in the standard repositories, you can open up your desktop's app store (such as Ubuntu Software or Elementary OS AppCenter) and install. To do this, locate Seahorse in your distribution's application store and click to install. Once you have Seahorse installed, you're ready to start making use of a very handy tool.
Let's do just that.
### PGP Keys
The first thing we're going to do is create a new PGP key. As I said earlier, PGP keys can be used to encrypt email (with tools like [Thunderbird][15]'s [Enigmail][16] or the built-in encryption function with [Evolution][17]). A PGP key also allows you to encrypt files. Anyone with your public key can send you encrypted emails or files that only you (holding the matching private key) can decrypt. Without a PGP key, no can do.
Creating a new PGP key pair is incredibly simple with Seahorse. Here's what you do:
1. Open the Seahorse app
2. Click the + button in the upper left corner of the main pane
3. Select PGP Key (Figure 1)
4. Click Continue
5. When prompted, type a full name and email address
6. Click Create
![Seahorse](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_1.jpg?itok=khLOYC61 "Seahorse")
Figure 1: Creating a PGP key with Seahorse.[Used with permission][1]
While creating your PGP key, you can click to expand the Advanced key options section, where you can configure a comment for the key, encryption type, key strength, and expiration date (Figure 2).
![PGP](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_2.jpg?itok=eWiazwrn "PGP")
Figure 2: PGP key advanced options.[Used with permission][2]
The comment section is very handy to help you remember a key's purpose (or other informative bits).
With your PGP key created, double-click on it from the key listing. In the resulting window, click on the Names and Signatures tab. In this window, you can sign your key (to indicate you trust this key). Click the Sign button and then (in the resulting window) indicate how carefully you've checked this key and how others will see the signature (Figure 3).
![Key signing](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_3.jpg?itok=7USKG9fI "Key signing")
Figure 3: Signing a key to indicate trust level.[Used with permission][3]
Signing keys is very important when you're dealing with other people's keys, as a signed key assures your system (and you) that you've done the work and can fully trust an imported key.
Speaking of imported keys, Seahorse allows you to easily import someone's public key file (the file will end in .asc). Having someone's public key on your system means you can encrypt emails and files to them and verify the signatures on what they send you. However, Seahorse has suffered a [known bug][18] for quite some time. The problem is that Seahorse imports using gpg version one, but displays with gpg version two. This means, until this long-standing bug is fixed, importing public keys will always fail. If you want to import a public PGP key into Seahorse, you're going to have to use the command line. So, if someone has sent you the file olivia.asc, and you want to import it so it can be used with Seahorse, you would issue the command gpg2 --import olivia.asc. That key would then appear in the GnuPG Keys listing. You can open the key, click the I trust signatures button, and then click the Sign this key button to indicate how carefully you've checked the key in question.
### SSH Keys
Now we get to what I consider to be the most important aspect of Seahorse—SSH keys. Not only does Seahorse make it easy to generate an SSH key, it makes it easy to send that key to a server, so you can take advantage of SSH key authentication. Here's how you generate a new key and then export it to a remote server.
1. Open up Seahorse
2. Click the + button
3. Select Secure Shell Key
4. Click Continue
5. Give the key a description
6. Click Create and Set Up
7. Type and verify a passphrase for the key
8. Click OK
9. Type the address of the remote server and a remote login name found on the server (Figure 4)
10. Type the password for the remote user
11. Click OK
![SSH key](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_4.jpg?itok=ZxuxT8ry "SSH key")
Figure 4: Uploading an SSH key to a remote server.[Used with permission][4]
The new key will be uploaded to the remote server and is ready to use. If your server is set up for SSH key authentication, you're good to go.
Do note, during the creation of an SSH key, you can click to expand the Advanced key options and configure Encryption Type and Key Strength (Figure 5).
![Advanced options](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/seahorse_5.jpg?itok=vUT7pi0z "Advanced options")
Figure 5: Advanced SSH key options.[Used with permission][5]
### A must-use for new Linux users
Any new-to-Linux user should get familiar with Seahorse. Even with its flaws, Seahorse is still an incredibly handy tool to have at the ready. At some point, you will likely want (or need) to encrypt or decrypt an email/file, or manage secure shell keys for SSH key authentication. If you want to do this, while avoiding the command line, Seahorse is the tool to use.
_Learn more about Linux through the free ["Introduction to Linux" ][13]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/2/how-manage-pgp-and-ssh-keys-seahorse
作者:[JACK WALLEN ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/creative-commons-zero
[7]:https://www.linux.com/files/images/seahorse1jpg
[8]:https://www.linux.com/files/images/seahorse2jpg
[9]:https://www.linux.com/files/images/seahorse3jpg
[10]:https://www.linux.com/files/images/seahorse4jpg
[11]:https://www.linux.com/files/images/seahorse5jpg
[12]:https://www.linux.com/files/images/fish-19076071920jpg
[13]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[14]:https://wiki.gnome.org/Apps/Seahorse
[15]:https://www.mozilla.org/en-US/thunderbird/
[16]:https://enigmail.net/index.php/en/
[17]:https://wiki.gnome.org/Apps/Evolution
[18]:https://bugs.launchpad.net/ubuntu/+source/seahorse/+bug/1577198

View File

@ -0,0 +1,191 @@
Shell Scripting: Dungeons, Dragons and Dice
======
In my [last article][1], I talked about a really simple shell script for a game called Bunco, which is a dice game played in rounds where you roll three dice and compare your values to the round number. Match all three and match the round number, and you just got a bunco for 25 points. Otherwise, any die that match the round are worth one point each. It's simple—a game designed for people who are getting tipsy at the local pub, and it also is easy to program.
The core function in the Bunco program was one that produced a random number between 1 and 6 to simulate rolling a six-sided die. It looked like this:
```
rolldie()
{
local result=$1
rolled=$(( ( $RANDOM % 6 ) + 1 ))
eval $result=$rolled
}
```
It's invoked with a variable name as the single argument, and it will load a random number between 1 and 6 into that value—for example:
```
rolldie die1
```
will assign a value 1..6 to $die1. Make sense?
If you can do that, however, what's to stop you from having a second argument that specifies the number of sides of the die you want to "roll" with the function? Something like this:
```
rolldie()
{
local result=$1 sides=$2
rolled=$(( ( $RANDOM % $sides ) + 1 ))
eval $result=$rolled
}
```
To test it, let's just write a tiny wrapper that simply asks for a 20-sided die (d20) result:
```
rolldie die 20
echo resultant roll is $die
```
Easy enough. To make it a bit more useful, let's allow users to specify a sequence of dice rolls, using the standard D&D notation of nDm—that is, n m-sided dice. Bunco would have been done with 3d6, for example (three six-sided dice). Got it?
Since you might well have starting flags too, let's build that into the parsing loop using the ever handy getopts:
```
while getopts "h" arg
do
case "$arg" in
* ) echo "dnd-dice NdM {NdM}"
echo "NdM = N M-sided dice"; exit 0 ;;
esac
done
shift $(( $OPTIND - 1 ))
for request in $* ; do
echo "Rolling: $request"
done
```
With a well formed notation like 3d6, it's easy to break up the argument into its component parts, like so:
```
dice=$(echo $request | cut -dd -f1)
sides=$(echo $request | cut -dd -f2)
echo "Rolling $dice $sides-sided dice"
```
To test it, let's give it some arguments and see what the program outputs:
```
$ dnd-dice 3d6 1d20 2d100 4d3 d5
Rolling 3 6-sided dice
Rolling 1 20-sided dice
Rolling 2 100-sided dice
Rolling 4 3-sided dice
Rolling 5-sided dice
```
Ah, the last one points out a mistake in the script. If there's no number of dice specified, the default should be 1. You theoretically could default to a six-sided die too, but that's not anywhere near so safe an assumption.
With that, you're close to a functional program because all you need is a loop to process more than one die in a request. It's easily done with a while loop, but let's add some additional smarts to the script:
```
for request in $* ; do
dice=$(echo $request | cut -dd -f1)
sides=$(echo $request | cut -dd -f2)
echo "Rolling $dice $sides-sided dice"
sum=0 # reset
while [ ${dice:=1} -gt 0 ] ; do
rolldie die $sides
echo " dice roll = $die"
sum=$(( $sum + $die ))
dice=$(( $dice - 1 ))
done
echo " sum total = $sum"
done
```
This is pretty solid actually, and although the output statements need to be cleaned up a bit, the code's basically fully functional:
```
$ dnd-dice 3d6 1d20 2d100 4d3 d5
Rolling 3 6-sided dice
dice roll = 5
dice roll = 6
dice roll = 5
sum total = 16
Rolling 1 20-sided dice
dice roll = 16
sum total = 16
Rolling 2 100-sided dice
dice roll = 76
dice roll = 84
sum total = 160
Rolling 4 3-sided dice
dice roll = 2
dice roll = 2
dice roll = 1
dice roll = 3
sum total = 8
Rolling 5-sided dice
dice roll = 2
sum total = 2
```
Did you catch that I fixed the case when $dice has no value? It's tucked into the reference in the while statement. Instead of referring to it as $dice, I'm using the notation ${dice:=1}, which uses the value specified unless it's null or no value, in which case the value 1 is assigned and used. It's a handy and a perfect fix in this case.
In a game, you generally don't care much about individual die values; you just want to sum everything up and see what the total value is. So if you're rolling 4d20, for example, it's just a single value you calculate and share with the game master or dungeon master.
A bit of output statement cleanup and you can do that:
```
$ dnd-dice.sh 3d6 1d20 2d100 4d3 d5
3d6 = 16
1d20 = 13
2d100 = 74
4d3 = 8
d5 = 2
```
Let's run it a second time just to ensure you're getting different values too:
```
3d6 = 11
1d20 = 10
2d100 = 162
4d3 = 6
d5 = 3
```
There are definitely different values, and it's a pretty useful script, all in all.
You could create a number of variations with this as a basis, including a variant some gamers enjoy called "exploding dice". The idea is simple: if you roll the best possible value, you get to roll again and add the second value too. Roll a d20 and get a 20? You can roll again, and your result is then 20 + whatever the second value is. Where this gets crazy is that you can do this for multiple cycles, so a d20 could become 30, 40 or even 50.
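The loop itself is tiny; here is the idea sketched in Python purely to show its shape (porting it into the rolldie shell function above takes only a few extra lines of the same logic):
```
import random

def exploding_roll(sides):
    """Roll one die; keep re-rolling and adding while we hit the maximum."""
    total = 0
    while True:
        roll = random.randint(1, sides)
        total += roll
        if roll != sides:
            return total

# An exploding d20 usually lands in 1..19, but can come back as 20+13=33,
# 20+20+2=42, and so on.
print(exploding_roll(20))
```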
And, that's it for this article. There isn't much else you can do with dice at this point. In my next article, I'll look at...well, you'll have to wait and see! Don't forget, if there's a topic you'd like me to tackle, please send me a note!
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/shell-scripting-dungeons-dragons-and-dice
作者:[Dave Taylor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/dave-taylor
[1]:http://www.linuxjournal.com/content/shell-scripting-bunco-game

View File

@ -0,0 +1,73 @@
Tips for success when getting started with Ansible
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-big-data.png?itok=L34b2exg)
Ansible is an open source automation tool used to configure servers, install software, and perform a wide variety of IT tasks from one central location. It is a one-to-many agentless mechanism where all instructions are run from a control machine that communicates with remote clients over SSH, although other protocols are also supported.
While targeted for system administrators with privileged access who routinely perform tasks such as installing and configuring applications, Ansible can also be used by non-privileged users. For example, a database administrator using the `mysql` login ID could use Ansible to create databases, add users, and define access-level controls.
Let's go over a very simple example where a system administrator provisions 100 servers each day and must run a series of Bash commands on each one before handing it off to users.
![](https://opensource.com/sites/default/files/u128651/mapping-bash-commands-to-ansible.png)
This is a simple example, but it should illustrate how easily commands can be specified in YAML files and executed on remote servers. In a heterogeneous environment, conditional statements can be added so that certain commands are only executed in certain servers (e.g., "only execute `yum` commands in systems that are not Ubuntu or Debian").
One important feature in Ansible is that a playbook describes a desired state in a computer system, so a playbook can be run multiple times against a server without impacting its state. If a certain task has already been implemented (e.g., "user `sysman` already exists"), then Ansible simply ignores it and moves on.
### Definitions
* **Tasks:** A task is the smallest unit of work. It can be an action like "Install a database," "Install a web server," "Create a firewall rule," or "Copy this configuration file to that server."
* **Plays:** A play is made up of tasks. For example, the play: "Prepare a database to be used by a web server" is made up of tasks: 1) Install the database package; 2) Set a password for the database administrator; 3) Create a database; and 4) Set access to the database.
* **Playbook:** A playbook is made up of plays. A playbook could be: "Prepare my website with a database backend," and the plays would be 1) Set up the database server; and 2) Set up the web server.
* **Roles:** Roles are used to save and organize playbooks and allow sharing and reuse of playbooks. Following the previous examples, if you need to fully configure a web server, you can use a role that others have written and shared to do just that. Since roles are highly configurable (if written correctly), they can be easily reused to suit any given deployment requirements.
* **Ansible Galaxy:** Ansible [Galaxy][1] is an online repository where roles are uploaded so they can be shared with others. It is integrated with GitHub, so roles can be organized into Git repositories and then shared via Ansible Galaxy.
These definitions and their relationships are depicted here:
![](https://opensource.com/sites/default/files/u128651/ansible-definitions.png)
Please note this is just one way to organize the tasks that need to be executed. We could have split up the installation of the database and the web server into separate playbooks and into different roles. Most roles in Ansible Galaxy install and configure individual applications. You can see examples for installing [mysql][2] and installing [httpd][3].
### Tips for writing playbooks
The best source for learning Ansible is the official [documentation][4] site. And, as usual, online search is your friend. I recommend starting with simple tasks, like installing applications or creating users. Once you are ready, follow these guidelines:
* When testing, use a small subset of servers so that your plays execute faster. If they are successful in one server, they will be successful in others.
* Always do a dry run to make sure all commands are working (run in check mode with the `--check` flag).
* Test as often as you need to without fear of breaking things. Tasks describe a desired state, so if a desired state is already achieved, it will simply be ignored.
* Be sure all host names defined in `/etc/ansible/hosts` are resolvable.
* Because communication to remote hosts is done using SSH, keys have to be accepted by the control machine, so either 1) exchange keys with remote hosts prior to starting; or 2) be ready to type in "Yes" to accept SSH key exchange requests for each remote host you want to manage.
* Although you can combine tasks for different Linux distributions in one playbook, it's cleaner to write a separate playbook for each distro.
### In the final analysis
Ansible is a great choice for implementing automation in your data center:
* It's agentless, so it is simpler to install than other automation tools.
* Instructions are in YAML (though JSON is also supported) so it's easier than writing shell scripts.
* It's open source software, so contribute back to it and make it even better!
How have you used Ansible to automate your data center? Share your experience in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/tips-success-when-getting-started-ansible
作者:[Jose Delarosa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jdelaros1
[1]:https://galaxy.ansible.com/
[2]:https://galaxy.ansible.com/bennojoy/mysql/
[3]:https://galaxy.ansible.com/xcezx/httpd/
[4]:http://docs.ansible.com/

View File

@ -0,0 +1,59 @@
Which Linux Kernel Version Is Stable?
============================================================
![Linux kernel ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/apple1.jpg?itok=PGRxOQz_ "Linux kernel")
Konstantin Ryabitsev explains which Linux kernel versions are considered "stable" and how to choose what's right for you.[Creative Commons Zero][1]
Almost every time Linus Torvalds releases [a new mainline Linux kernel][4], there's inevitable confusion about which kernel is the "stable" one now. Is it the brand new X.Y one, or the previous X.Y-1.Z one? Is the brand new kernel too new? Should you stick to the previous release?
The [kernel.org][5] page doesn't really help clear up this confusion. Currently, right at the top of the page, we see that 4.15 is the latest stable kernel -- but then in the table below, 4.14.16 is listed as "stable," and 4.15 as "mainline." Frustrating, eh?
Unfortunately, there are no easy answers. We use the word "stable" for two different things here: as the name of the Git tree where the release originated, and as an indicator of whether the kernel should be considered “stable” as in “production-ready.”
Due to the distributed nature of Git, Linux development happens in a number of [various forked repositories][6]. All bug fixes and new features are first collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into [his own Linux tree][7], which is considered the “master” Git repository. We call this the “mainline” Linux tree.
### Release Candidates
Before each new kernel version is released, it goes through several “release candidate” cycles, which are used by developers to test and polish all the cool new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go yet or not. Usually, there are 7 weekly pre-releases, but that number routinely goes up to -rc8, and sometimes even up to -rc9 and above. When Linus is convinced that the new kernel is ready to go, he makes the final release, and we call this release “stable” to indicate that it's not a “release candidate.”
### Bug Fixes
Like any complex software written by imperfect human beings, each new version of the Linux kernel contains bugs, and those bugs require fixing. The rule for bug fixes in the Linux Kernel is very straightforward: all fixes must first go into Linus's tree. Once the bug is fixed in the mainline repository, it may then be applied to previously released kernels that are still maintained by the Kernel development community. All fixes backported to stable releases must meet a [set of important criteria][8] before they are considered -- and one of them is that they "must already exist in Linus's tree." There is a [separate Git repository][9] used for the purpose of maintaining backported bug fixes, and it is called the "stable" tree -- because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman.
### Latest Stable Kernel
So, whenever you visit kernel.org looking for the latest stable kernel, you should use the version that is in the Big Yellow Button that says “Latest Stable Kernel.”
![sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8s](https://lh6.googleusercontent.com/sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8sYqZsmL6e0h8AiyJwqtWYC-MoxWpRWHpdIEpKji0hJ5xxeYshK9QkbTfubFb2TFaMeFNmtJ5ypQNt8lAHC2zniEEe8O4v7MZh)
Ah, but now you may wonder -- if both 4.15 and 4.14.16 are stable, then which one is more stable? Some people avoid using ".0" releases of kernel because they think a particular version is not stable enough until there is at least a ".1". It's hard to either prove or disprove this, and there are pro and con arguments for both, so it's pretty much up to you to decide which you prefer.
On the one hand, anything that goes into a stable tree release must first be accepted into the mainline kernel and then backported. This means that mainline kernels will always have fresher bug fixes than what is released in the stable tree, and therefore you should always use mainline “.0” releases if you want fewest “known bugs.”
On the other hand, mainline is where all the cool new features are added -- and new features bring with them an unknown quantity of “new bugs” that are not in the older stable releases. Whether new, unknown bugs are more worrisome than older, known, but yet unfixed bugs -- well, that is entirely your call. However, it is worth pointing out that many bug fixes are only thoroughly tested against mainline kernels. When patches are backported into older kernels, chances are they will work just fine, but there are fewer integration tests performed against older stable releases. More often than not, it is assumed that "previous stable" is close enough to current mainline that things will likely "just work." And they usually do, of course, but this yet again shows how hard it is to say "which kernel is actually more stable."
So, basically, there is no quantitative or qualitative metric we can use to definitively say which kernel is more stable -- 4.15 or 4.14.16. The most we can do is to unhelpfully state that they are "differently stable."
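As a practical footnote, you can at least check what you are running and what kernel.org currently advertises. The releases.json URL below is an assumption based on kernel.org's public feed and may change; if it is unavailable, just read the front page instead:
```
# Which kernel is this machine actually running?
uname -r

# Assumption: kernel.org still publishes a machine-readable release list here.
curl -s https://www.kernel.org/releases.json
```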
_Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/which-linux-kernel-version-stable
作者:[KONSTANTIN RYABITSEV ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/apple1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
[5]:https://www.kernel.org/
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/
[7]:https://git.kernel.org/torvalds/c/v4.15
[8]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[9]:https://git.kernel.org/stable/linux-stable/c/v4.14.16

View File

@ -0,0 +1,259 @@
API Star: Python 3 API Framework Polyglot.Ninja()
======
For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named "API Star" which seemed really interesting to me for several reasons. Firstly, the framework embraces modern Python features like type hints and asyncio. It then goes ahead and uses these features to provide an awesome development experience for us, the developers. We will get into those features soon, but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.
Now, back to API Star: I feel very productive in the framework. I can choose to write async code based on asyncio, or I can choose a traditional backend like WSGI. It comes with a command line tool, `apistar`, to help us get things done faster. There's (optional) support for both the Django ORM and SQLAlchemy. There's a brilliant type system that enables us to define constraints on our input and output; from these, API Star can auto-generate API schemas (and docs), provide validation and serialization features, and a lot more. Although API Star is heavily focused on building APIs, you can also build web applications on top of it fairly easily. All of this might not make proper sense until we build something ourselves.
### Getting Started
We will start by installing API Star. It would be a good idea to create a virtual environment for this exercise. If you don't know how to create a virtualenv, don't worry and just go ahead without one.
```
pip install apistar
```
If you're not using a virtual environment or the `pip` command for your Python 3 is called `pip3`, then please use `pip3 install apistar` instead.
Once we have the package installed, we should have access to the `apistar` command line tool. We can create a new project with it. Let's create a new project in our current directory.
```
apistar new .
```
Now we should have two files created: `app.py`, which contains the main application, and `test.py` for our tests. Let's examine our `app.py` file:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls
def welcome(name=None):
if name is None:
return {'message': 'Welcome to API Star!'}
return {'message': 'Welcome to API Star, %s!' % name}
routes = [
Route('/', 'GET', welcome),
Include('/docs', docs_urls),
Include('/static', static_urls)
]
app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
Before we dive into the code, let's run the app and see if it works. If we navigate to `http://127.0.0.1:8080/` we will get the following response:
```
{"message": "Welcome to API Star!"}
```
And if we navigate to: `http://127.0.0.1:8080/?name=masnun`
```
{"message": "Welcome to API Star, masnun!"}
```
Similarly if we navigate to: `http://127.0.0.1:8080/docs/`, we will see auto generated docs for our API.
Now let's look at the code. We have a `welcome` function that takes a parameter named `name` which has a default value of `None`. API Star is a smart API framework. It will try to find the `name` key in the URL path or query string and pass it to our function. It also generates the API docs based on it. Pretty nice, no?
We then create a list of `Route` and `Include` instances and pass the list to the `App` instance. `Route` objects are used to define custom user routing. `Include` , as the name suggests, includes/embeds other routes under the path provided to it.
### Routing
Routing is simple. When constructing the `App` instance, we need to pass a list as the `routes` argument. This list should consist of `Route` or `Include` objects, as we just saw above. For `Route`s, we pass a URL path, an HTTP method name and the request handler callable (function or otherwise). For `Include` instances, we pass a URL path and a list of `Route` instances.
##### Path Parameters
We can put a name inside curly braces to declare a URL path parameter. For example, `/user/{user_id}` defines a path where `user_id` is a path parameter, or a variable, which will be injected into the handler function (actually a callable). Here's a quick example:
```
from apistar import Route
from apistar.frameworks.wsgi import WSGIApp as App
def user_profile(user_id: int):
return {'message': 'Your profile id is: {}'.format(user_id)}
routes = [
Route('/user/{user_id}', 'GET', user_profile),
]
app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
If we visit `http://127.0.0.1:8080/user/23` we will get a response like this:
```
{"message": "Your profile id is: 23"}
```
But if we try to visit `http://127.0.0.1:8080/user/some_string`, it will not match, because in the `user_profile` function we defined, we added a type hint for the `user_id` parameter. If it's not an integer, the path doesn't match. But if we go ahead and delete the type hint and just use `user_profile(user_id)`, it will match this URL. This is, again, API Star being smart and taking advantage of typing.
#### Including / Grouping Routes
Sometimes it might make sense to group certain URLs together. Say we have a `user` module that deals with user-related functionality. It might be better to group all the user-related endpoints under the `/user` path, for example `/user/new`, `/user/1`, `/user/1/update` and so on. We can easily create our handlers and routes in a separate module (or even a package) and then include them in our main routes.
Let's create a new module named `user`; the file name would be `user.py`. Let's put this code in the file:
```
from apistar import Route
def user_new():
return {"message": "Create a new user"}
def user_update(user_id: int):
return {"message": "Update user #{}".format(user_id)}
def user_profile(user_id: int):
return {"message": "User Profile for: {}".format(user_id)}
user_routes = [
Route("/new", "GET", user_new),
Route("/{user_id}/update", "GET", user_update),
Route("/{user_id}/profile", "GET", user_profile),
]
```
Now we can import our `user_routes` from within our main app file and use it like this:
```
from apistar import Include
from apistar.frameworks.wsgi import WSGIApp as App
from user import user_routes
routes = [
Include("/user", user_routes)
]
app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
Now `/user/new` will delegate to the `user_new` function.
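Assuming the app is running locally on port 8080 as before, a couple of quick curl calls illustrate the delegation; the responses shown as comments follow directly from the handlers defined above:
```
curl http://127.0.0.1:8080/user/new
# {"message": "Create a new user"}

curl http://127.0.0.1:8080/user/42/profile
# {"message": "User Profile for: 42"}
```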
### Accessing Query String / Query Parameters
Any parameters passed in the query string can be injected directly into the handler function. Say for the URL `/call?phone=1234`, the handler function can define a `phone` parameter, and it will receive the value from the query string / query parameters. If the URL query string doesn't include a value for `phone`, it will get `None` instead. We can also set a default value for the parameter like this:
```
def welcome(name=None):
if name is None:
return {'message': 'Welcome to API Star!'}
return {'message': 'Welcome to API Star, %s!' % name}
```
In the above example, we set a default value to `name` which is `None` anyway.
### Injecting Objects
By type hinting a request handler, we can have different objects injected into our views. Injecting request related objects can be helpful for accessing them directly from inside the handler. There are several built in objects in the `http` package from API Star itself. We can also use its type system to create our own custom objects and have them injected into our functions. API Star also does data validation based on the constraints specified.
Let's define our own `User` type and have it injected into our request handler:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar import typesystem
class User(typesystem.Object):
properties = {
'name': typesystem.string(max_length=100),
'email': typesystem.string(max_length=100),
'age': typesystem.integer(maximum=100, minimum=18)
}
required = ["name", "age", "email"]
def new_user(user: User):
return user
routes = [
Route('/', 'POST', new_user),
]
app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
Now if we send this request:
```
curl -X POST \
http://127.0.0.1:8080/ \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{"name": "masnun", "email": "masnun@gmail.com", "age": 12}'
```
Guess what happens? We get an error saying age must be equal to or greater than 18. The type system gives us intelligent data validation as well. If we enable the `docs` URL, we will also get these parameters automatically documented there.
### Sending a Response
As you may have noticed so far, we can just pass a dictionary and it will be JSON-encoded and returned by default. However, we can set the status code and any additional headers by using the `Response` class from `apistar`. Here's a quick example:
```
from apistar import Route, Response
from apistar.frameworks.wsgi import WSGIApp as App
def hello():
return Response(
content="Hello".encode("utf-8"),
status=200,
headers={"X-API-Framework": "API Star"},
content_type="text/plain"
)
routes = [
Route('/', 'GET', hello),
]
app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
It should send a plain text response along with a custom header. Please note that the `content` should be bytes, not a string. That's why I encoded it.
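A quick way to confirm this, assuming the app is running locally as before (the exact header order and casing may vary with the server):
```
curl -i http://127.0.0.1:8080/
# HTTP/1.1 200 OK
# X-API-Framework: API Star
# Content-Type: text/plain
#
# Hello
```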
### Moving On
I just walked through some of the features of API Star. There's a lot more cool stuff in API Star. I do recommend going through the [GitHub README][2] to learn more about the different features offered by this excellent framework. I shall also try to cover short, focused tutorials on API Star in the coming days.
--------------------------------------------------------------------------------
via: http://polyglot.ninja/api-star-python-3-api-framework/
作者:[MASNUN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://polyglot.ninja/author/masnun/
[1]:http://polyglot.ninja/rest-api-best-practices-python-flask-tutorial/
[2]:https://github.com/encode/apistar

View File

@ -0,0 +1,84 @@
Evolving Your Own Life: Introducing Biogenesis
======
Biogenesis provides a platform where you can create entire ecosystems of lifeforms and see how they interact and how the system as a whole evolves over time.
You always can get the latest version from the project's main [website][1], but it also should be available in the package management systems for most distributions. For Debian-based distributions, install Biogenesis with the following command:
```
sudo apt-get install biogenesis
```
If you do download it directly from the project website, you also need to have a Java virtual machine installed in order to run it.
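In that case, a quick sanity check along these lines may help; the jar file name below is only a placeholder for whatever the downloaded file is actually called:
```
# Confirm a Java runtime is present
java -version

# Launch the downloaded build directly (replace the jar name with the
# actual file you downloaded -- this name is only an example)
java -jar biogenesis.jar
```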
To start it, you either can find the appropriate entry in the menu of your desktop environment, or you simply can type biogenesis in a terminal window. When it first starts, you will get an empty window within which to create your world.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof1.png)
Figure 1. When you first start Biogenesis, you get a blank canvas so you can start creating your world.
The first step is to create a world. If you have a previous instance that you want to continue with, click the Game→Open menu item and select the appropriate file. If you want to start fresh, click Game→New to get a new world with a random selection of organisms.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof2.png)
Figure 2. When you launch a new world, you get a random selection of organisms to start your ecosystem.
The world starts right away, with organisms moving and potentially interacting immediately. However, you can pause the world by clicking on the icon that is second from the right in the toolbar. Alternatively, you also can just press the p key to pause and resume the evolution of the world.
At the bottom of the window, you'll find details about the world as it currently exists. There is a display of the frames per second, along with the current time within the world. Next, there is a count of the current population of organisms. And finally, there is a display of the current levels of oxygen and carbon dioxide. You can adjust the amount of carbon dioxide within the world either by clicking the relevant icon in the toolbar or selecting the World menu item and then clicking either Increase CO2 or Decrease CO2.
There also are several parameters that govern how the world works and how your organisms will fare. If you select World→Parameters, you'll see a new window where you can play with those values.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof3.png)
Figure 3. The parameter configuration window allows you to set parameters on the physical characteristics of the world, along with parameters that control the evolution of your organisms.
The General tab sets the amount of time per frame and whether hardware acceleration is used for display purposes. The World tab lets you set the physical characteristics of the world, such as the size and the initial oxygen and carbon dioxide levels. The Organisms tab allows you to set the initial number of organisms and their initial energy levels. You also can set their life span and mutation rate, among other items. The Metabolism tab lets you set the parameters around photosynthetic metabolism. And, the Genes tab allows you to set the probabilities and costs for the various genes that can be used to define your organisms.
What about the organisms within your world though? If you click on one of the organisms, it will be highlighted and the display will change.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof4.png)
Figure 4. You can select individual organisms to find information about them, as well as apply different types of actions.
The icon toolbar at the top of the window will change to provide actions that apply to organisms. At the bottom of the window is an information bar describing the selected organism. It shows physical characteristics of the organism, such as age, energy and mass. It also describes its relationships to other organisms. It does this by displaying the number of its children and the number of its victims, as well as which generation it is.
If you want even more detail about an organism, click the Examine genes button in the bottom bar. This pops up a new window called the Genetic Laboratory that allows you to look at and alter the genes making up this organism. You can add or delete genes, as well as change the parameters of existing genes.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof5.png)
Figure 5. The Genetic Laboratory allows you to play with the individual genes that make up an organism.
Right-clicking on a particular organism displays a drop-down menu that provides even more tools to work with. The first one allows you to track the selected organism as the world evolves. The next two entries allow you either to feed your organism extra food or weaken it. Normally, organisms need a certain amount of energy before they can reproduce. Selecting the fourth entry forces the selected organism to reproduce immediately, regardless of the energy level. You also can choose either to rejuvenate or outright kill the selected organism. If you want to increase the population of a particular organism quickly, simply copy and paste a number of a given organism.
Once you have a particularly interesting organism, you likely will want to be able to save it so you can work with it further. When you right-click an organism, one of the options is to export the organism to a file. This pops up a standard save dialog box where you can select the location and filename. The standard file ending for Biogenesis genetic code files is .bgg. Once you start to have a collection of organisms you want to work with, you can use them within a given world by right-clicking a blank location on the canvas and selecting the import option. This allows you to pull those saved organisms back into a world that you are working with.
Once you have allowed your world to evolve for a while, you probably will want to see how things are going. Clicking World→Statistics will pop up a new window where you can see what's happening within your world.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof6.png)
Figure 6. The statistics window gives you a breakdown of what's happening within the world you have created.
The top of the window gives you the current statistics, including the time, the number of organisms, how many are dead, and the oxygen and carbon dioxide levels. It also provides a bar with the relative proportions of the genes.
Below this pane is a list of some remarkable organisms within your world. These are organisms that have had the most children, the most victims or those that are the most infected. This way, you can focus on organisms that are good at the traits you're interested in.
On the right-hand side of the window is a display of the world history to date. The top portion displays the history of the population, and the bottom portion displays the history of the atmosphere. As your world continues evolving, click the update button to get the latest statistics.
This software package could be a great teaching tool for learning about genetics, the environment and how the two interact. If you find a particularly interesting organism, be sure to share it with the community at the project website. It might be worth a look there for starting organisms too, allowing you to jump-start your explorations.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/evolving-your-own-life-introducing-biogenesis
作者:[Joey Bernard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/users/joey-bernard
[1]:http://biogenesis.sourceforge.net

View File

@ -0,0 +1,122 @@
How to print filename with awk on Linux / Unix
======
I would like to print the filename with awk on a Linux / Unix-like system. How do I print the filename in the BEGIN section of awk? Can I print the name of the current input file using gawk/awk?
The name of the current input file is set in the FILENAME variable. You can use FILENAME to display or print the current input file name. If no files are specified on the command line, the value of FILENAME is “-” (stdin). However, FILENAME is undefined inside the BEGIN rule unless set by getline.
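A quick way to see that behaviour for yourself (gawk and most other awk implementations agree here; with only a BEGIN rule, awk never even opens the file):
```
awk 'BEGIN{ print "[" FILENAME "]" }' /etc/hosts
# []
```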
### How to print filename with awk
The syntax is:
```
awk '{ print FILENAME }' fileNameHere
awk '{ print FILENAME }' /etc/hosts
```
You might see the file name multiple times, as awk reads the file line by line. To avoid this, update your awk/gawk syntax as follows:
```
awk 'FNR == 1{ print FILENAME } ' /etc/passwd
awk 'FNR == 1{ print FILENAME } ' /etc/hosts
```
![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-print-filename-using-awk-on-Linux-or-Unix.jpg)
### How to print filename in BEGIN section of awk
Use the following syntax:
```
awk 'BEGIN{print ARGV[1]}' fileNameHere
awk 'BEGIN{print ARGV[1]}{ print "something or do something on data" }END{}' fileNameHere
awk 'BEGIN{print ARGV[1]}' /etc/hosts
```
Sample outputs:
```
/etc/hosts
```
However, ARGV[1] might not always work. For example, when awk reads from a pipe or stdin there is no filename argument at all, so ARGV[1] is empty:
`ls -l /etc/hosts | awk 'BEGIN{print ARGV[1]} { print }'`
So you need to modify it as follows (assuming that ls -l only produced a single line of output):
`ls -l /etc/hosts | awk '{ print "File: " $9 ", Owner:" $3 ", Group: " $4 }'`
Sample outputs:
```
File: /etc/hosts, Owner:root, Group: root
```
### How to deal with multiple filenames specified by a wild card
Use the following simple syntax:
```
awk '{ print FILENAME; nextfile } ' *.c
awk 'BEGIN{ print "Starting..."} { print FILENAME; nextfile }END{ print "....DONE"} ' *.conf
```
Sample outputs:
```
Starting...
blkid.conf
cryptconfig.conf
dhclient6.conf
dhclient.conf
dracut.conf
gai.conf
gnome_defaults.conf
host.conf
idmapd.conf
idnalias.conf
idn.conf
insserv.conf
iscsid.conf
krb5.conf
ld.so.conf
logrotate.conf
mke2fs.conf
mtools.conf
netscsid.conf
nfsmount.conf
nscd.conf
nsswitch.conf
openct.conf
opensc.conf
request-key.conf
resolv.conf
rsyncd.conf
sensors3.conf
slp.conf
smartd.conf
sysctl.conf
vconsole.conf
warnquota.conf
wodim.conf
xattr.conf
xinetd.conf
yp.conf
....DONE
```
nextfile tells awk to stop processing the current input file. The next input record read comes from the next input file. For more information see awk/[gawk][1] command man page:
```
man awk
man gawk
```
### about the author
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][5]**.
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/how-to-print-filename-with-awk-on-linux-unix/
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.gnu.org/software/gawk/manual/
[2]:https://twitter.com/nixcraft
[3]:https://facebook.com/nixcraft
[4]:https://plus.google.com/+CybercitiBiz
[5]:https://www.cyberciti.biz/atom/atom.xml

View File

@ -0,0 +1,131 @@
Python Hello World and String Manipulation
======
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/eadkmsrBTcWSyCeA4qti)
Before starting, I should mention that the [code][1] used in this blog post and in the [video][2] below is available on my github.
With that, lets get started! If you get lost, I recommend opening the [video][3] below in a separate tab.
[Hello World and String Manipulation Video using Python][2]
#### Get Started (Prerequisites)
Install Anaconda (Python) on your operating system. You can either download anaconda from the [official site][4] and install on your own or you can follow these anaconda installation tutorials below.
Install Anaconda on Windows: [Link][5]
Install Anaconda on Mac: [Link][6]
Install Anaconda on Ubuntu (Linux): [Link][7]
#### Open a Jupyter Notebook
Open your terminal (Mac) or command line and type the following ([see 1:16 in the video to follow along][8]) to open a Jupyter Notebook:
```
jupyter notebook
```
#### Print Statements/Hello World
Type the following into a cell in Jupyter and type **shift + enter** to execute code.
```
# This is a one line comment
print('Hello World!')
```
![][9]
Output of printing Hello World!
#### Strings and String Manipulation
Strings are a special type of Python class. As objects of that class, you can call methods on string values using the .methodName() notation. The string class is available by default in Python, so you do not need an import statement to use the object interface to strings.
```
# Create a variable
# Variables are used to store information to be referenced
# and manipulated in a computer program.
firstVariable = 'Hello World'
print(firstVariable)
```
![][9]
Output of printing the variable firstVariable
```
# Explore various string methods
print(firstVariable.lower())
print(firstVariable.upper())
print(firstVariable.title())
```
![][9]
Output of using the .lower(), .upper(), and .title() methods
```
# Use the split method to convert your string into a list
print(firstVariable.split(' '))
```
![][9]
Output of using the split method (in this case, split on space)
```
# You can add strings together.
a = "Fizz" + "Buzz"
print(a)
```
![][9]
string concatenation
#### Look up what Methods Do
New programmers often ask how to find out what each method does. Python provides two ways to do this.
1. (works in and out of Jupyter Notebook) Use **help** to look up what each method does.
![][9]
Look up what each method does
2. (Jupyter Notebook exclusive) You can also look up what a method does by having a question mark after a method.
```
# To look up what each method does in Jupyter (doesn't work outside of Jupyter)
firstVariable.lower?
```
![][9]
Look up what each method does in Jupyter
#### Closing Remarks
Please let me know if you have any questions either here or in the comments section of the [youtube video][2]. The code in the post is also available on my [github][1]. Part 2 of the tutorial series is [Simple Math][10].
--------------------------------------------------------------------------------
via: https://www.codementor.io/mgalarny/python-hello-world-and-string-manipulation-gdgwd8ymp
作者:[Michael][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.codementor.io/mgalarny
[1]:https://github.com/mGalarnyk/Python_Tutorials/blob/master/Python_Basics/Intro/Python3Basics_Part1.ipynb
[2]:https://www.youtube.com/watch?v=JqGjkNzzU4s
[3]:https://www.youtube.com/watch?v=kApPBm1YsqU
[4]:https://www.continuum.io/downloads
[5]:https://medium.com/@GalarnykMichael/install-python-on-windows-anaconda-c63c7c3d1444
[6]:https://medium.com/@GalarnykMichael/install-python-on-mac-anaconda-ccd9f2014072
[7]:https://medium.com/@GalarnykMichael/install-python-on-ubuntu-anaconda-65623042cb5a
[8]:https://youtu.be/JqGjkNzzU4s?t=1m16s
[9]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==
[10]:https://medium.com/@GalarnykMichael/python-basics-2-simple-math-4ac7cc928738

View File

@ -0,0 +1,183 @@
Linux 系统查询机器最近重新启动的日期和时间的命令
======
在你的 Linux 或 类 UNIX 系统中,你是如何查询系统重新启动的日期和时间?你是如何查询系统关机的日期和时间? last 命令不仅可以按照时间从近到远的顺序列出指定的用户,终端和主机名,而且还可以列出指定日期和时间登录的用户。输出到终端的每一行都包括用户名,会话终端,主机名,会话开始和结束的时间,会话持续的时间。使用下面的命令来查看 Linux 或类 UNIX 系统重启和关机的时间和日期。
- last 命令
- who 命令
### 使用 who 命令来查看系统重新启动的时间/日期
你需要在终端使用 [who][1] 命令来打印有哪些人登录了系统。who 命令同时也会显示上次系统启动的时间。要查看系统上次启动的日期和时间,运行:
`$ who -b`
示例输出:
`system boot 2017-06-20 17:41`
使用 last 命令来查询最近登陆到系统的用户和系统重启的时间和日期。输入:
`$ last reboot | less`
示例输出:
[![Fig.01: last command in action][2]][2]
或者,尝试输入:
`$ last reboot | head -1`
示例输出:
```
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
```
last 命令通过查看 /var/log/wtmp 文件,来显示自该文件创建以来所有登录(和注销)的用户。每当系统重新启动时,伪用户 reboot 就会“登录”一次并记录到该日志中。因此,`last reboot` 命令将会显示自日志文件创建以来的所有重启记录。
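如果想看到带完整日期时间的重启记录,可以试试下面的写法(`-F` 选项在 util-linux 版本的 last 中可用,其它实现可能略有差异):
```
# -F 会打印完整的登录/登出时间,只看最近几条即可
last -F reboot | head -3
```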
### 查看系统上次关机的时间和日期
可以使用下面的命令来显示上次关机的日期和时间:
`$ last -x|grep shutdown | head -1`
示例输出:
```
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
```
命令中,
* **-x**:显示系统开关机和运行等级改变信息
这里是 last 命令的其它的一些选项:
```
$ last
$ last -x
$ last -x reboot
$ last -x shutdown
```
示例输出:
![Fig.01: How to view last Linux System Reboot Date/Time ][3]
### 查看系统正常的运行时间
评论区的读者建议的另一个命令如下:
`$ uptime -s`
示例输出:
```
2017-06-20 17:41:51
```
### OS X/Unix/FreeBSD 查看最近重启和关机时间的命令示例
在终端输入下面的命令:
`$ last reboot`
在 OS X 示例输出结果如下:
```
reboot ~ Fri Dec 18 23:58
reboot ~ Mon Dec 14 09:54
reboot ~ Wed Dec 9 23:21
reboot ~ Tue Nov 17 21:52
reboot ~ Tue Nov 17 06:01
reboot ~ Wed Nov 11 12:14
reboot ~ Sat Oct 31 13:40
reboot ~ Wed Oct 28 15:56
reboot ~ Wed Oct 28 11:35
reboot ~ Tue Oct 27 00:00
reboot ~ Sun Oct 18 17:28
reboot ~ Sun Oct 18 17:11
reboot ~ Mon Oct 5 09:35
reboot ~ Sat Oct 3 18:57
wtmp begins Sat Oct 3 18:57
```
查看关机日期和时间,输入:
`$ last shutdown`
示例输出:
```
shutdown ~ Fri Dec 18 23:57
shutdown ~ Mon Dec 14 09:53
shutdown ~ Wed Dec 9 23:20
shutdown ~ Tue Nov 17 14:24
shutdown ~ Mon Nov 16 21:15
shutdown ~ Tue Nov 10 13:15
shutdown ~ Sat Oct 31 13:40
shutdown ~ Wed Oct 28 03:10
shutdown ~ Sun Oct 18 17:27
shutdown ~ Mon Oct 5 09:23
wtmp begins Sat Oct 3 18:57
```
### 如何查看是谁重启和关闭机器?
你需要[启用 psacct 服务][4],然后运行下面的命令来查看执行过的命令以及对应的用户名。在终端输入 [lastcomm][5] 命令即可查看这些信息:
```
# lastcomm userNameHere
# lastcomm commandNameHere
# lastcomm | more
# lastcomm reboot
# lastcomm shutdown
### OR see both reboot and shutdown time
# lastcomm | egrep 'reboot|shutdown'
```
示例输出:
```
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
```
我们可以看到 root 用户在当地时间 12 月 27 日星期日 23:49 在 pts/0 重新启动了机器。
### 参见
* 更多信息可以查看 man 手册( man last )和参考文章 [如何在 Linux 服务器上使用 tuptime 命令查看历史和统计的正常的运行时间][6].
### 关于作者
作者是 nixCraft 的创立者同时也是一名经验丰富的系统管理员,也是 Linux类 Unix 操作系统 shell 脚本的培训师。他曾与全球各行各业的客户工作过,包括 IT教育国防和空间研究以及非营利部门等等。你可以在 [Twitter][7] ,[Facebook][8][Google+][9] 关注他。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html
作者:[Vivek Gite][a]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ "See Linux/Unix who command examples for more info"
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ "See Linux/Unix lastcomm command examples for more info"
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz

View File

@ -0,0 +1,136 @@
Linux 检测 IDE / SATA SSD 硬盘的传输速度
======
你知道你的硬盘在 Linux 下挂载后的传输速度有多快吗?打开电脑的机箱或者机柜,你了解你的硬盘的类型吗?不同类型的硬盘传输速度也不同SATA I (150 MB/s)、SATA II (300 MB/s)、SATA III (6.0Gb/s)。
你可以使用 **hdparm 和 dd 命令** 来检测硬盘速度。hdparm 为 Linux 内核的 ATA/IDE/SATA 设备驱动子系统所支持的各种硬盘 ioctl 提供了命令行接口。有些选项只有在最新的内核上才能正常工作(请确保安装了最新的内核)。我也推荐使用随最新内核源代码一起编译的 hdparm 命令。
### 如何使用 hdparm 命令来检测硬盘的传输速度
以 root 管理员权限登录并执行命令:
`$ sudo hdparm -tT /dev/sda`
或者
`$ sudo hdparm -tT /dev/hda`
输出:
```
/dev/sda:
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
```
为了检测更精准,这个操作应该 **重复2-3次** 。这显示了直接从 Linux 缓冲区缓存中读取的速度,而无需磁盘访问。这个测量实际上是被测系统的处理器,高速缓存和存储器的吞吐量的指示。这里是 [for循环的例子][1]连续运行测试3次:
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
其中:
* **-t** : 执行设备读取时序
* **-T** : 执行缓存读取时间
* **/dev/sda** : 硬盘设备文件
要 [找出SATA硬盘链接速度][2] ,请输入:
`sudo hdparm -I /dev/sda | grep -i speed`
输出:
```
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
```
以上输出表明我的硬盘可以使用 1.5 Gb/s、3.0 Gb/s 或 6.0 Gb/s 的链路速度。请注意,您的 BIOS/主板必须支持 SATA-II/III
`$ dmesg | grep -i sata | grep 'link up'`
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
### dd 命令
使用 dd 命令也可以获取到相应的速度信息:
```
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img
```
输出:
```
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, **90.8 MB/s**
```
下面是 [ dd 命令推荐的输入的参数][4]
```
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
```
这个是上面命令的第三个命令的输出结果:
```
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
```
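dd 也可以用来粗略地测一下顺序读取速度(只读、不写盘;下面的示例仅作示意,请把 /dev/sda 换成你自己的磁盘设备):
```
# 顺序读取测试iflag=direct 绕过页缓存,结果仅供参考
sudo dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
```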
### 磁盘存储 - GUI 工具
您还可以使用位于系统>管理>磁盘实用程序菜单中的磁盘实用程序。请注意在最新版本的Gnome中它简称为磁盘。
#### 如何使用Linux上的磁盘测试我的硬盘的性能
要测试硬盘的速度:
1. 从 **活动概览** 中打开 **磁盘**(按键盘上的超级键并键入磁盘)
2. 从 **左侧窗格** 的列表中选择 **磁盘**
3. 选择菜单按钮并从菜单中选择 **Benchmark disk** ...
4. 单击 **开始 benchmark...** 并根据需要调整传输速率和访问时间参数。
5. 选择 **Start Benchmarking** 来测试从磁盘读取数据的速度。需要管理权限请输入密码。
以上操作的快速视频演示:
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
#### 只读 Benchmark (安全模式下)
然后,选择 > 只读:
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
上述选项不会销毁任何数据。
#### 读写的 Benchmark所有数据将丢失所以要小心
访问系统 > 管理 > 磁盘实用程序菜单 > 单击 benchmark >单击开始读/写 benchmark 按钮:
![Fig.02:Linux Measuring read rate, write rate and access time][6]
### 作者
作者是 nixCraft 的创造者,是经验丰富的系统管理员,也是 Linux 操作系统/ Unix shell 脚本的培训师。他曾与全球客户以及IT教育国防和空间研究以及非营利部门等多个行业合作。在TwitterFacebook和Google+上关注他。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
作者:[Vivek Gite][a]
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz

View File

@ -1,12 +1,13 @@
Ansible 教程:简单 Ansible 命令介绍
======
在我们之前的 Ansible 教程中,我们讨论了[ **Ansible** 的安装和配置][1]。在这个 ansible 教程中,我们将学习一些基本的 ansible 命令的例子,我们将用它来管理基础设施。所以让我们先看看一个完整的 ansible 命令的语法:
在我们之前的 Ansible 教程中,我们讨论了 [Ansible 的安装和配置][1]。在这个 Ansible 教程中,我们将学习一些基本的 Ansible 命令的例子,我们将用它来管理基础设施。所以让我们先看看一个完整的 Ansible 命令的语法:
```
$ ansible <group> -m <module> -a <arguments>
```
在这里,我们可以用单个主机或用 <group> 代替全部主机,<arguments> 代表可以提供的选项。现在我们来看看一些 ansible 的基本命令。
在这里,我们可以用单个主机或用 `<group>` 代替一组主机,`<arguments>` 是可选的参数。现在我们来看看一些 Ansible 的基本命令。
### 检查主机的连通性
@ -51,7 +52,7 @@ $ ansible <group> -m copy -a "src=/home/dan dest=/tmp/home"
#### 创建新用户
```
$ ansible <group> -m user -a "name=testuser password=<encrypted password>"
$ ansible <group> -m user -a "name=testuser password=<encrypted password>"
```
#### 删除用户
@ -60,22 +61,22 @@ $ ansible <group> -m copy -a "src=/home/dan dest=/tmp/home"
$ ansible <group> -m user -a "name=testuser state=absent"
```
**注意:** 要创建加密密码,请使用 ”mkpasswd -method=sha-512“
**注意:** 要创建加密密码,请使用 `"mkpasswd -method=sha-512"`
### 更改权限和所有者
要改变已连接主机文件的所有者,我们使用名为 ”file“ 的模块,使用如下。
### 更改文件权限
#### 更改文件权限
```
$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777"
```
### 更改文件的所有者
#### 更改文件的所有者
```
$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"
$ ansible <group> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"
```
### 管理软件包
@ -128,16 +129,7 @@ $ ansible <group> -m service -a "name=httpd state=stopped"
$ ansible <group> -m service -a "name=httpd state=restarted"
```
这样我们简单的,只有一行的 ansible 命令的教程就完成了。此外,在未来的教程中,我们将学习创建 playbook来帮助我们更轻松高效地管理主机。
If you think we have helped you or just want to support us, please consider these :-
如果你认为我们帮助到您,或者只是想支持我们,请考虑这些:
关注我们:[Facebook][2] | [Twitter][3] | [Google Plus][4]
成为支持者 - [通过 PayPal 捐助][5]
Linux TechLab 感谢你的持续支持。
这样我们简单的,单行 Ansible 命令的教程就完成了。此外,在未来的教程中,我们将学习创建 playbook来帮助我们更轻松高效地管理主机。
--------------------------------------------------------------------------------
@ -145,7 +137,7 @@ via: http://linuxtechlab.com/ansible-tutorial-simple-commands/
作者:[SHUSAIN][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,150 +0,0 @@
为什么 Kubernetes 很酷
============================================================
在我刚开始学习 Kubernetes大约是一年半以前吧我真的不明白为什么应该去关注它。
在我使用 Kubernetes 全职工作了三个多月后,我才有了一些想法为什么我应该考虑使用它了。(我距离成为一个 Kubernetes 专家还很远!)希望这篇文章对你理解 Kubernetes 能做什么会有帮助!
我将尝试解释一些我认为 Kubernetes 令人感兴趣的原因,而不使用 “云原生cloud native”、“编排orchestration”、“容器container” 或者任何 Kubernetes 专用的术语 :)。我的解释主要来自 Kubernetes 运维者/基础设施工程师的视角,因为我现在的工作就是配置 Kubernetes 并让它工作得更好。
我根本就不去尝试解决一些如 “你应该在你的生产系统中使用 Kubernetes 吗?”这样的问题。那是非常复杂的问题。(不仅是因为“生产系统”根据你的用途而总是有不同的要求“)
### Kubernetes 可以让你在生产系统中运行代码而不需要去设置一台新的服务器
我首次被说教使用 Kubernetes 是与我的伙伴 Kamal 的下面的谈话:
大致是这样的:
* Kamal: 使用 Kubernetes 你可以通过几个简单的命令就能设置一台新的服务器。
* Julia: 我觉得不太可能吧。
* Kamal: 像这样,你写一个配置文件,然后应用它,这时候,你就在生产系统中运行了一个 HTTP 服务。
* Julia: 但是,现在我需要去创建一个新的 AWS 实例,明确地写一个 Puppet设置服务发现配置负载均衡配置开发软件并且确保 DNS 正常工作,如果没有什么问题的话,至少在 4 小时后才能投入使用。
* Kamal: 是的,使用 Kubernetes 你不需要做那么多事情,你可以在 5 分钟内设置一台新的 HTTP 服务,并且它将自动运行。只要你的集群中有空闲的资源它就能正常工作!
* Julia: 这儿一定是一个”坑“
这里确实有一个“坑”:(以我的经验)搭建一个生产可用的 Kubernetes 集群并不容易。(可以看看 [Kubernetes The Hard Way][3],了解上手时要面对哪些复杂的东西。)但是,我们现在并不深入讨论它。
因此Kubernetes 第一个很酷的事情是,它可能使那些想在生产系统中部署新开发的软件的方式变得更容易。那是很酷的事,而且它真的是这样,因此,一旦你使用一个 Kubernetes 集群工作,你真的可以仅使用一个配置文件在生产系统中设置一台 HTTP 服务(在 5 分钟内运行这个应用程序,设置一个负载均衡,给它一个 DNS 名字,等等)。看起来真的很有趣。
### 对于运行在生产系统中的你的代码Kubernetes 可以提供更好的可见性和可管理性
在我看来,在理解 etcd 之前,你可能不会理解 Kubernetes 的。因此,让我们先讨论 etcd
想像一下,如果现在我这样问你,”告诉我你运行在生产系统中的每个应用程序,它运行在哪台主机上?它是否状态很好?是否为它分配了一个 DNS 名字?”我并不知道这些,但是,我可能需要到很多不同的地方去查询来回答这些问题,并且,我需要花很长的时间才能搞定。我现在可以很确定地说不需要查询,仅一个 API 就可以搞定它们。
在 Kubernetes 中,你的集群的所有状态 应用程序运行 (“pods”)、节点、DNS 名字、 cron 任务、 等等 都保存在一个单一的数据库中etcd。每个 Kubernetes 组件是无状态的,并且基本是通过下列来工作的。
* 从 etcd 中读取状态(比如,“分配给节点 1 的 pods 列表“)
* 产生变化(比如,”在节点 1 上运行 pod A")
* 更新 etcd 中的状态(比如,“设置 pod A 的状态为 running
这意味着,当你想回答诸如 “在那个可用区域中有多少个运行 nginx 的 pod” 这样的问题时只需要查询一个统一的 APIKubernetes API就可以了。而且所有其它的 Kubernetes 组件访问的也是同一个 API。
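下面用 kubectl 举一个示意性的例子(假设你本地已经配置好 kubectl 并指向这个集群;`app=nginx` 这个标签只是随手举例):
```
# 查询所有命名空间里带 app=nginx 标签的 pod以及它们所在的节点
kubectl get pods --all-namespaces -l app=nginx -o wide

# 同样的数据也可以直接从 Kubernetes API 取到(这里只截取开头部分)
kubectl get --raw /api/v1/pods | head -c 300
```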
这也意味着,你可以很容易地去管理每个运行在 Kubernetes 中的任何东西。如果你想这样做,你可以:
* 为部署实现一个复杂的定制的部署策略(部署一个东西,等待 2 分钟,部署 5 个以上,等待 3.7 分钟,等等)
* 每当推送到 github 上一个分支,自动化 [启动一个新的 web 服务器][1]
* 监视所有你的运行的应用程序,确保它们有一个合理的内存使用限制。
要做到这些,你只需要写一个与 Kubernetes API 交互的程序“控制器controller”就可以了。
关于 Kubernetes API 的其它的令人激动的事情是,你不会被局限为 Kubernetes 提供的现有功能!如果对于你想去部署/创建/监视的软件有你自己的想法,那么,你可以使用 Kubernetes API 去写一些代码去达到你的目的!它可以让你做到你想做的任何事情。
### 如果每个 Kubernetes 组件都“挂了”,你的代码将仍然保持运行
各种博客文章向我承诺过关于 Kubernetes 的一件事情是:“即使 Kubernetes API 服务和其它组件‘挂了’,你的代码也会一直保持运行状态”。从理论上说,这是它第二件很酷的事情,但当时我不确定它是否真是这样。
到目前为止,这似乎是真的!
我已经有几次让正在运行的 etcd 宕掉过,发生的事情是:
1. 所有的代码继续保持运行状态
2. 不能做 _新的_ 事情你不能部署新的代码或者生成变更cron 作业将停止工作)
3. 当它恢复时,集群将赶上这期间它错过的内容
这也意味着,如果 etcd 宕掉了,而你的某个应用程序恰好崩溃或者发生了其它事情,那么在 etcd 恢复之前,它是没办法被重新拉起来的。
### Kubernetes 的设计对 bugs 很有弹性
与任何软件一样Kubernetes 也有 bug。例如到目前为止我们集群的控制器管理器controller manager存在内存泄漏并且调度器经常崩溃。bug 当然不好,但是我发现,Kubernetes 的设计有助于减轻它核心组件中许多 bug 带来的影响。
如果你重启动任何组件,将发生:
* 从 etcd 中读取所有的与它相关的状态
* 基于那些状态(调度 pods、全部 pods 的垃圾回收、调度 cronjobs、按需部署、等等它启动去做它认为必须要做的事情。
因为,所有的组件并不会在内存中保持状态,你在任何时候都可以重启它们,它可以帮助你减少各种 bugs。
例如,假如说你的控制器管理器有内存泄漏。因为控制器管理器是无状态的,你可以每小时定期重启它,或者在感觉可能出现不一致问题时重启它。又或者,我们运行的调度器中有一个 bug它有时会忘掉某些 pod从来不去调度它们。你可以每隔 10 分钟重启调度器来缓解这种情况。(我们并没有这么做,而是去修复了这个 bug但是你_可以_这么做 :)
因此,我觉得即使它的核心组件中有 bug我仍然可以信任 Kubernetes 的设计能帮助我保证集群状态的一致性。并且总的来说,随着时间的推移,这些软件也会不断改进。你需要运维的唯一有状态的组件就是 etcd。
这里不打算过多地讨论“状态”这个话题,但我认为 Kubernetes 很酷的一点是:唯一需要制定备份/恢复计划的东西就是 etcd除非你的 pod 使用了持久化存储卷)。我认为这让 Kubernetes 的运维在思路上简单了一些。
### 在 Kubernetes 之上实现新的分布式系统是非常容易的
假设你想实现一个分布式 cron 作业调度系统!从零开始做的工作量非常大。但是,在 Kubernetes 之上实现一个分布式 cron 作业调度系统就容易多了!(它仍然是一个分布式系统)
我第一次读到 Kubernetes 的 cronjob 控制器的代码时,它是如此简单,我真的特别高兴。主要的逻辑大约只有 400 行,去读读它吧! => [cronjob_controller.go][4] <=
从本质上来看cronjob 控制器做了:
* 每 10 秒钟:
* 列出所有已存在的 cronjobs
* 检查是否有需要现在去运行的任务
* 如果有,创建一个新的作业对象去被调度并通过其它的 Kubernetes 控制器去真正地去运行它
* 清理已完成的作业
* 重复以上工作
Kubernetes 模型是很受限制的(它有定义在 etcd 中的资源模式,控制器读取这个资源和更新 etcd我认为这种相关的固有的/受限制的模型,可以使它更容易地在 Kubernetes 框架中开发你自己的分布式系统。
Kamal 向我介绍了 “Kubernetes 是一个编写你自己的分布式系统的好平台” 这一想法,而不仅仅是 “Kubernetes 是一个你可以直接使用的分布式系统”,我觉得这个想法真的很有意思。他做了一个[为推送到 GitHub 的每个分支运行一个 HTTP 服务的系统][5]的雏型,只花了一个周末、大约 800 行代码,我觉得真的很不错!
### Kubernetes 可以使你做一些非常神奇的事情(但并不容易)
我一开始就说 “kubernetes 可以让你做一些很神奇的事情,你可以用一个配置文件来做这么多的基础设施,它太神奇了”,而且这是真的!
为什么说 “Kubernetes 并不容易” 呢?因为要成功地运营一个高可用的 Kubernetes 集群,需要学习和要做的事情非常多。我发现它引入了许多抽象概念,为了能调试问题并正确地配置它们,我需要真正理解这些抽象。我喜欢学习新东西,所以它并不会让我抓狂或者生气,我只是觉得理解它很重要 :)
对于 “我不能仅依靠抽象概念” 的一个具体的例子是,我一直在努力学习需要的更多的 [Linux 上的关于网络的工作][6],去对设置 Kubernetes 网络有信心,这比我以前学过的关于网络的知识要多很多。这种方式很有意思但是非常费时间。在以后的某个时间,我可以写更多的关于设置 Kubernetes 网络的困难的/有趣的事情。
又比如,为了能成功配置我的 Kubernetes CA证书颁发机构我把需要了解的各种不同选项写成了一篇 [2000 字的博客文章][7]。
我觉得,像 GKEGoogle 的托管 Kubernetes 服务)这样的托管方案可能更简单,因为它们为你做了许多决定,但我还没有尝试过。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/
作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://github.com/kamalmarhubi/kubereview
[2]:https://jvns.ca/categories/kubernetes
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
[5]:https://github.com/kamalmarhubi/kubereview
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

View File

@ -1,134 +0,0 @@
怎样完整地更新并升级基于Debian的离线操作系统
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Upgrade-Offline-Debian-based-Systems-2-720x340.png)
不久之前我已经向你展示了如何在任意 [**离线的 Ubuntu**][1] 操作系统和任意 [**离线的 Arch Linux**][2] 操作系统上安装软件。今天我们将会看看如何完整地更新并升级基于 Debian 的离线操作系统。和之前所述方法的不同之处在于,这次我们将会升级整个操作系统(所有过时的软件包),而不是单个的软件包。这个方法在你没有网络连接、或者网络速度很慢的时候十分有用。
### 完整更新并升级基于Debian的离线操作系统
首先假设,你家里拥有一个正在运行并配有高速互联网连接的系统Windows 或者 Linux以及一个没有网络连接或网络很慢例如拨号网络的 Debian 或 Debian 衍生版本系统。现在如果你想要更新这个离线系统怎么办?要去购买一条更高速的网络线路吗?根本不需要!你仍然可以通过互联网来更新、升级你的离线操作系统。这正是 **Apt-Offline** 工具可以帮助你做到的。
正如其名apt-offline 是一个为Debian和Debian衍生发行版诸如UbuntuLinux Mint这样基于APT的操作系统提供的离线APT包管理器。使用apt-offline我们可以完整地更新/升级我们的Debian系统而不需要网络链接。这个程序是由Python编程语言写成的兼具CLI和图形接口的跨平台工具。
#### 准备工作
* 一个已经联网的操作系统(Windows或者Linux)。在这份手册中,为了便于理解,我们将之称为在线操作系统(online system)。
* 一个离线操作系统(Debian或者Debian衍生版本)。我们称之为离线操作系统(offline system)。
* 有足够空间容纳所有更新包的USB驱动器或者外接硬盘。
#### 安装
Apt-Offline可以在Debian和其衍生版本的默认仓库中获得。如果你的在线操作系统是运行的DebianUbuntuLinux Mint和其他基于DEB的操作系统你可以通过下面的命令安装Apt-Offline:
```shell
sudo apt-get install apt-offline
```
如果你的在线操作系统运行的是非Debian类的发行版使用git clone获取Apt-Offline仓库:
```shell
git clone https://github.com/rickysarraf/apt-offline.git
```
切换到克隆的目录下并在此处运行。
```shell
cd apt-offline/
```
```shell
sudo ./apt-offline
```
#### 离线操作系统上的步骤(没有联网的操作系统)
到你的离线操作系统上创建一个你想存储签名文件的目录
```shell
mkdir ~/tmp
```
```shell
cd ~/tmp/
```
你可以自己选择使用任何目录。接下来,运行下面的命令生成签名文件:
```shell
sudo apt-offline set apt-offline.sig
```
示例输出如下:
```shell
Generating database of files that are needed for an update.
Generating database of file that are needed for operation upgrade
```
默认条件下apt-offline 会同时生成“更新update和“升级upgrade所需文件的数据库。你也可以使用 `--update` 或者 `--upgrade` 选项,只生成其中一种。
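作为示意,下面是只生成其中一种数据库的写法(具体选项请以 `apt-offline set --help` 的输出为准):
```shell
# 只生成“更新update所需的签名文件
sudo apt-offline set --update apt-offline.sig

# 只生成“升级upgrade所需的签名文件
sudo apt-offline set --upgrade apt-offline.sig
```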
拷贝完整的**tmp**目录到你的USB驱动器或者或者外接硬盘上然后换到你的在线操作系统(有网络链接的操作系统)。
#### 在线操作系统上的步骤
插入你的USB驱动器然后进入临时文件夹
```shell
cd tmp/
```
然后,运行如下命令:
```shell
sudo apt-offline get apt-offline.sig --threads 5 --bundle apt-offline-bundle.zip
```
这里的 `--threads 5` 代表 APT 仓库的数目。如果你想要从更多的仓库下载软件包,可以增加这个数值。而 `--bundle apt-offline-bundle.zip` 选项表示所有的软件包将会打包到一个叫做 **apt-offline-bundle.zip** 的单独存档中。这个存档文件将会保存在当前的工作目录中。
上面的命令将会按照之前在离线操作系统上生成的签名文件下载数据。
[![][3]][4]
根据你的网络状况这个操作将会花费几分钟左右的时间。请记住apt-offline是跨平台的所以你可以在任何操作系统上使用它下载包。
一旦下载完成,拷贝**tmp**文件夹到你的USB 或者外接硬盘上并且返回你的离线操作系统。千万保证你的USB驱动器上有足够的空闲空间存储所有的下载文件因为所有的包都放在**tmp**文件夹里了。
#### 离线操作系统上的步骤
把你的设备插入你的离线操作系统,然后切换到你之前下载了所有包的**tmp**目录下。
```shell
cd tmp
```
然后,运行下面的命令来安装所有下载好的包。
```shell
sudo apt-offline install apt-offline-bundle.zip
```
这个命令将会更新APT数据库所以APT将会找到在APT缓冲里所有需要的包。
**注意事项:** 如果在线和离线操作系统都在同一个局域网中,你可以通过"scp"或者其他传输应用程序将**tmp**文件传到离线操作系统中。如果两个操作系统在不同的位置(译者注:意指在不同的局域网)那就使用USB设备来拷贝(就可以了)。
好了大伙儿,现在就这么多了。 希望这篇指南对你有用。还有更多好东西正在路上。敬请关注!
祝你愉快!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/fully-update-upgrade-offline-debian-based-systems/
作者:[SK][a]
译者:[Leemeans](https://github.com/leemeans)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/install-softwares-offline-ubuntu-16-04/
[2]:https://www.ostechnix.com/install-packages-offline-arch-linux/
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/apt-offline.png

View File

@ -1,171 +0,0 @@
为初学者准备的 MariaDB 管理命令
======
之前我们学习了[在 CentOS/RHEL 7 上安装 MariaDB 服务器并保证其安全][1]MariaDB 现在是 RHEL/CentOS 7 的默认数据库。现在我们再来看看一些有用的 MariaDB 管理命令。这些都是使用 MariaDB 最基础的命令,而且它们对 MySQL 也同样适用,因为 MariaDB 就是 MySQL 的一个分支。
**(推荐阅读:[在 RHEL/CentOS 上安装并配置 MongoDB][2])**
## MariaDB 管理命令
### 查看 MariaDB 安装的版本
要查看所安装数据库的当前版本,在终端中输入下面命令
```
$ mysql --version
```
该命令会告诉你数据库的当前版本。此外你也可以运行下面命令来查看版本的详细信息,
```
$ mysqladmin -u root -p version
```
### 登陆 mariadb
要登陆 mariadb 服务器,运行
```
$ mysql -u root -p
```
然后输入密码登陆
### 列出所有的数据库
要列出 maridb 当前拥有的所有数据库,在你登陆到 mariadb 中后运行
```
$ show databases;
```
### 创建新数据库
在 mariadb 中创建新数据库,登陆 mariabdb 后运行
```
$ create database dan;
```
若想直接在终端创建数据库,则运行
```
$ mysqladmin -u user -p create dan
```
这里,**dan** 就是新数据库的名字
### 删除数据库
要删除数据库,在已登陆的 mariadb 会话中运行
```
$ drop database dan;
```
此外你也可以运行,
```
$ mysqladmin -u root -p drop dan
```
**注意:-** 若在运行 mysqladmin 命令时提示 'access denied' 错误,这应该是由于我们没有给 root 授权。要对 root 授权,请参照第 7 点方法,只是要将用户改成 root。
### 创建新用户
为数据库创建新用户,运行
```
$ CREATE USER 'dan'@'localhost' IDENTIFIED BY 'password';
```
### 授权用户访问某个数据库
授权用户访问某个数据库,运行
```
$ GRANT ALL PRIVILEGES ON test.* TO 'dan'@'localhost';
```
这会赋予用户 dan 对名为 test 的数据库完全操作的权限。我们也可以限定为用户只赋予 SELECTINSERTDELETE 权限。
要赋予访问所有数据库的权限,将 test 替换成 `*`,像这样:
```
$ GRANT ALL PRIVILEGES ON *.* TO 'dan'@'localhost';
```
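授权之后,可以顺手确认一下权限是否生效(示意命令,在终端中执行):
```
# 查看 dan 用户当前拥有的权限
$ mysql -u root -p -e "SHOW GRANTS FOR 'dan'@'localhost';"
```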
### 备份/导出 数据库
要创建单个数据库的备份,在终端窗口中运行下列命令,
```
$ mysqldump -u root -p database_name>db_backup.sql
```
若要一次性创建多个数据库的备份则运行,
```
$ mysqldump -u root -p --databases db1 db2 > db12_backup.sql
```
要一次性导出所有数据库,则运行:
```
$ mysqldump -u root -p --all-databases > all_dbs.sql
```
### 从备份中恢复数据库
要从备份中恢复数据库,运行
```
$ mysql -u root -p database_name<db_backup.sql
```
但这条命令成功的前提是预先没有存在同名的数据库。如果想要恢复数据库数据到已经存在的数据库中,则需要用到 'mysqlimport' 命令,
```
$ mysqlimport -u root -p database_name<db_backup.sql
```
### 更改 mariadb 用户的密码
本例中我们会修改 root 的密码,但修改其他用户的密码也是一样的过程,
登陆 mariadb 并切换到 'mysql' 数据库,
```
$ mysql -u root -p
$ use mysql;
```
然后运行下面命令,
```
$ update user set password=PASSWORD( 'your_new_password_here') where User='root';
```
下一步,重新加载权限,
```
$ flush privileges;
```
然后退出会话。
我们的教程至此就结束了,在本教程中我们学习了一些有用的 MariaDB 管理命令。欢迎您的留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/mariadb-administration-commands-beginners/
作者:[Shusain ][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/
[2]:http://linuxtechlab.com/mongodb-installation-configuration-rhelcentos/

View File

@ -0,0 +1,47 @@
在你下一次技术面试的时候要提的 3 个基本问题
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs_0.jpg?itok=nDf5j7xC)
面试可能会有压力,但 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘请开源人才。学习如何提出正确的问题。
Linux 基金会
Dice 和 Linux 基金会的年度[开源工作报告][1]揭示了开源专业人士的前景以及未来一年的招聘活动。在今年的报告中86% 的科技专业人士表示,了解开源推动了他们的职业生涯。然而,当他们想在自己的组织内晋升、或者到别处申请新职位时,这些经验又能派上什么用场呢?
面试新工作绝非易事。除了在准备新职位时还要应付复杂的工作,当面试官问“你对我有什么问题吗?”时适当的回答更增添了压力。
在 Dice我们从事职业、建议并将技术专家与雇主连接起来。但是我们也在公司雇佣技术人才来开发开源项目。实际上Dice 平台基于许多 Linux 发行版,我们利用开源数据库作为我们搜索功能的基础。总之,如果没有开源软件,我们就无法运行 Dice因此聘请了解和热爱开源软件的专业人士至关重要。
多年来,我在面试中了解到提出好问题的重要性。这是一个了解你的潜在新雇主的机会,以及更好地了解他们是否与你的技能相匹配。
这里有三个重要的问题需要以及其重要的原因:
**1\. 公司对员工在空闲时间致力于开源项目或编写代码的立场是什么?**
这个问题的答案会告诉正在面试的公司的很多信息。一般来说,只要它与你在该公司所从事的工作没有冲突,公司会希望技术专家为网站或项目做出贡献。在公司之外允许这种情况,也会在技术组织中培养出一种创业精神,并教授技术技能,否则在正常的日常工作中你可能无法获得这些技能。
**2\. 项目在这如何分优先级?**
由于所有的公司都在变成科技公司,所以在面向客户的创新技术项目与改进平台本身之间,往往需要做出取舍。你会主要负责让现有的平台保持最新么?还是致力于面向公众开发新产品?根据你的兴趣,答案可以帮助你判断这家公司是否适合你。
**3\. 谁主要决定新产品,开发者在决策过程中有多少投入?**
这个问题可以让你了解谁负责公司的创新(以及你与他/她之间隔了多少层),也可以让你了解自己在公司里可能的职业路径。一个好的公司在开发新产品之前会先和开发人员、开源人才交流。这看起来理所当然,但这一步有时会被忽略,而它往往决定了新产品发布之前是协作的氛围还是混乱的流程。
面试可能会有压力,但是 58% 的公司告诉 Dice 和 Linux 基金会,他们需要在未来几个月内聘用开源人才,所以请记住,旺盛的需求让像你这样的专业人士掌握着主动权。朝你想要的方向引导你的职业。
现在[下载][2]完整的 2017 年开源工作报告。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview
作者:[Brian Hostetter][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/brianhostetter
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
[2]:http://bit.ly/2017OSSjobsreport

View File

@ -0,0 +1,210 @@
# 教程:在 Linux 中如何写一个基本的 udev 规则
### 目录
* [1. 读者对象][4]
* [2. 要求][5]
* [3. 难度][6]
* [4. 约定][7]
* [5. 介绍][8]
* [6. 规则如何组织][9]
* [7. 规则语法][10]
* [8. 一个测试案例][11]
* [9. 操作符][12]
    * [9.1.1. == 和 != 操作符][1]
    * [9.1.2. 分配操作符 = 和 :=][2]
    * [9.1.3. += 和 -= 操作符][3]
* [10. 我们使用的键][13]
### 读者对象
理解 udev 背后的基本概念,学习如何写简单的规则。
### 要求
* Root 权限
### 难度
中等
### 约定
* **#** - 要求给定的命令以 root 权限运行,可以直接用 root 用户执行,也可以使用 `sudo` 命令。
* **$** - 要求给定的命令以一个普通的非特权用户运行。
### 介绍
在一个 GNU/Linux 系统中,虽然设备的低级别支持是在内核级中处理的,但是,它们相关的事件的管理是在用户空间中通过 `udev` 来管理的。确切地说是由 `udevd` 守护进程来完成的。学习如何去写一个规则,并应用到发生的这些事件上,将有助于我们修改系统的行为并使它适合我们的需要。
### 规则如何组织
Udev 规则是定义在一个以 `.rules` 为扩展名的文件中。那些文件主要放在两个位置:`/usr/lib/udev/rules.d`,这个目录用于存放系统安装的规则,`/etc/udev/rules.d/` 这个目录是保留给自定义规则的。
定义那些规则的文件的命名惯例是使用一个数字作为前缀(比如,`50-udev-default.rules`)并且以它们在目录中的词汇顺序进行处理的。在 `/etc/udev/rules.d` 中安装的文件,会覆盖安装在系统默认路径中的同名文件。
### 规则语法
如果你理解了 udev 规则的行为逻辑它的语法并不复杂。一个规则由两个主要的节构成“match” 部分,它使用一系列用逗号分隔的键定义了规则应用的条件,而 “action” 部分,是当条件满足时,我们执行一些动作。
### 一个测试案例
讲解可能的选项的最好方法莫过于配置一个真实的案例,因此,我们去定义一个规则作为演示,当鼠标被连接时禁用触摸板。显然,在规则定义中提供的属性将反映我的硬件。
我们将在 `/etc/udev/rules.d/99-togglemouse.rules` 文件中用我们喜欢的文本编辑器来写我们的规则。一个规则定义允许跨多个行,但是,如果是这种情况,必须在一个新行字符之前使用一个反斜线(\)表示这是上一行的后续部分,就和 shell 脚本一样,这是我们的规则:
```
ACTION=="add" \
, ATTRS{idProduct}=="c52f" \
, ATTRS{idVendor}=="046d" \
, ENV{DISPLAY}=":0" \
, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
, RUN+="/usr/bin/xinput --disable 16"
```
我们来分析一下这个规则。
### 操作符
首先,对已经使用以及将要使用的操作符解释如下:
#### == 和 != 操作符
`==` 是相等操作符,而 `!=` 是不等于操作符。通过使用它们,我们可以确认规则上应用的键是否匹配各自的值。
#### 分配操作符 = 和 :=
`=` 分配操作符,是用于为一个键分配一个值。当我们去分配一个值,并且想确保它不会被其它规则所覆盖,我们就需要使用 `:=` 操作符来代替,使用这个操作符分配的值,它确实不能被改变。
#### += 和 -= 操作符
`+=` 和 `-=` 操作符各自用于从一个指定的键中定义的值列表中增加或者移除一个值。
### 我们使用的键
现在,来分析一下在这个规则中我们使用的键。首先是 `ACTION` 键:通过它,我们指定当设备上发生哪种特定事件时应用这条规则。有效的值有 `add`、`remove` 以及 `change`。
然后,我们使用 `ATTRS` 关键字去指定一个属性去匹配。我们可以使用 `udevadm info` 命令去列出一个设备属性,提供它的名字或者 `sysfs` 路径:
```
udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
KERNEL=="input39"
SUBSYSTEM=="input"
DRIVER==""
ATTR{name}=="Logitech USB Receiver"
ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
ATTR{properties}=="0"
ATTR{uniq}==""
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
KERNELS=="0003:046D:C52F.0010"
SUBSYSTEMS=="hid"
DRIVERS=="hid-generic"
ATTRS{country}=="00"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
KERNELS=="2-1.2:1.1"
SUBSYSTEMS=="usb"
DRIVERS=="usbhid"
ATTRS{authorized}=="1"
ATTRS{bAlternateSetting}==" 0"
ATTRS{bInterfaceClass}=="03"
ATTRS{bInterfaceNumber}=="01"
ATTRS{bInterfaceProtocol}=="00"
ATTRS{bInterfaceSubClass}=="00"
ATTRS{bNumEndpoints}=="01"
ATTRS{supports_autosuspend}=="1"
looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
KERNELS=="2-1.2"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{authorized}=="1"
ATTRS{avoid_reset_quirk}=="0"
ATTRS{bConfigurationValue}=="1"
ATTRS{bDeviceClass}=="00"
ATTRS{bDeviceProtocol}=="00"
ATTRS{bDeviceSubClass}=="00"
ATTRS{bMaxPacketSize0}=="8"
ATTRS{bMaxPower}=="98mA"
ATTRS{bNumConfigurations}=="1"
ATTRS{bNumInterfaces}==" 2"
ATTRS{bcdDevice}=="3000"
ATTRS{bmAttributes}=="a0"
ATTRS{busnum}=="2"
ATTRS{configuration}=="RQR30.00_B0009"
ATTRS{devnum}=="12"
ATTRS{devpath}=="1.2"
ATTRS{idProduct}=="c52f"
ATTRS{idVendor}=="046d"
ATTRS{ltm_capable}=="no"
ATTRS{manufacturer}=="Logitech"
ATTRS{maxchild}=="0"
ATTRS{product}=="USB Receiver"
ATTRS{quirks}=="0x0"
ATTRS{removable}=="removable"
ATTRS{speed}=="12"
ATTRS{urbnum}=="1401"
ATTRS{version}==" 2.00"
[...]
```
上面截取了运行这个命令之后的输出的一部分。正如你从输出中看到的那样,`udevadm` 从我们提供的指定路径开始,然后沿着父级设备链逐级向上。注意,设备自身的属性都是以单数形式报告的(比如 `KERNEL`),而它的父级则以复数形式出现(比如 `KERNELS`)。父级信息可以作为规则的一部分,但是同一时间只能引用一个父级设备:把不同父级设备的属性混在一起是不能工作的。在上面我们定义的规则中,我们使用了同一个父级设备的两个属性:`idProduct` 和 `idVendor`。
在我们的规则中,接下来使用的是 `ENV` 关键字:它既可以用于设置环境变量,也可以用于匹配环境变量。我们给 `DISPLAY` 和 `XAUTHORITY` 分配了值。当我们与 X 服务器交互时,这些变量是必需的:`DISPLAY` 变量指定了服务器运行在哪台机器上、使用哪个显示以及哪个屏幕;`XAUTHORITY` 则提供了一个文件的路径,该文件包含了 Xorg 的认证和授权信息,它一般位于用户的家目录中。
最后,我们使用了 `RUN` 关键字:它用于运行外部程序。非常重要的一点是:这里的命令并不会立即执行,而是在所有规则都被解析之后才会运行。在这个案例中,我们使用 `xinput` 实用程序去改变触摸板的状态。这里不打算详细解释 `xinput` 的语法,它超出了本文的范围,只需要注意这个触摸板的 ID 是 `16` 即可。
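如果你想在自己的机器上复现这个例子,触摸板的 ID 多半和我的不同。下面是一个查找并验证设备 ID 的小示意(假设系统里装有 `xinput`,设备名称请以你的实际输出为准):
```
# 列出所有输入设备,在输出中找到触摸板对应的 id
xinput list

# 假设触摸板的 id 是 16可以先手动验证禁用/启用的效果
xinput --disable 16
xinput --enable 16
```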
规则设置完成之后,我们可以通过使用 `udevadm test` 命令去调试它。这个命令对调试非常有用,它并不真实去运行 `RUN` 指定的命令:
```
$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
```
我们通过 `--action` 选项指定要模拟的动作,并提供设备的 sysfs 路径。如果没有报告错误,说明我们的规则运行得很好。要在真实的环境中使用它,我们还需要重新加载规则:
```
# udevadm control --reload
```
这个命令将重新加载规则文件,但是,它只对重新加载之后发生的事件有效果。
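如果不想重新插拔设备,通常也可以让内核重放设备事件,使规则对已连接的设备立即生效(示意命令,建议先用上面的 `udevadm test` 确认规则无误):
```
# 以 root 身份重新加载规则文件
udevadm control --reload

# 请求内核为现有设备重放 “add” 事件,让新规则立即应用(可用 --subsystem-match 缩小范围)
udevadm trigger --action=add
```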
我们通过创建一个 udev 规则了解了基本的概念和逻辑,这只是 udev 规则中众多的选项和可能的设置中的一小部分。udev 手册页提供了一个详尽的列表:如果你想深入了解,请参考它。
--------------------------------------------------------------------------------
via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux
作者:[Egidio Docile ][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://disqus.com/by/egidiodocile/
[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used

View File

@ -1,163 +0,0 @@
如何统计 Linux 中文件和文件夹/目录的数量
======
嗨,伙计们,今天我们又带来了一系列实用的命令,它们能在很多方面帮到你。这些操作性的命令可以帮助你统计当前目录中的文件和目录、进行递归统计、列出某个用户创建的文件等等。
在本教程中,我们将向您展示如何组合使用 ls、egrep、wc 和 find 等多个命令来执行一些高级操作。下面的命令会很有帮助。
为了进行实验,我打算总共创建 7 个文件和 2 个文件夹5 个常规文件和 2 个隐藏文件)。下面 tree 命令的输出清楚地展示了文件和文件夹列表。
**推荐阅读** [文件操作命令][1]
```
# tree -a /opt
/opt
├── magi
│   └── 2g
│   ├── test5.txt
│   └── .test6.txt
├── test1.txt
├── test2.txt
├── test3.txt
├── .test4.txt
└── test.txt
2 directories, 7 files
```
**示例-1** 统计当前目录的文件数排除隐藏文件。运行以下命令以确定当前目录中有多少个文件且不统计点文件LCTT 译注:点文件即以 `.` 开头的隐藏文件):
```
# ls -l . | egrep -c '^-'
4
```
**细节:**
* `ls` 列出目录内容
* `-l` 使用长列表格式
* `.` 列出有关文件的信息(默认为当前目录)
* `|` 管道符,将一个程序的输出发送给另一个程序作进一步处理
* `egrep` 打印符合模式的行
* `-c` 只输出匹配行的数量
* `'^-'` 匹配以 `-` 开头的行(在 `ls -l` 的输出中即普通文件)
**示例-2** 统计当前目录的文件数,包含隐藏文件(即当前目录中的点文件)。
```
# ls -la . | egrep -c '^-'
5
```
**示例-3** 运行以下命令来统计当前目录中的文件和文件夹,它会把两者一起计算。
```
# ls -1 | wc -l
5
```
**细节:**
* `ls` 列出目录内容
* `-1` 每行只列出一个文件
* `|` 管道符,将一个程序的输出发送给另一个程序作进一步处理
* `wc` 这是一个为每个文件打印换行符,字和字节数的命令
* `-l` 打印换行符数
**示例-4** 统计当前目录的文件和文件夹数,包含隐藏文件和目录。
```
# ls -1a | wc -l
8
```
**示例-5** 递归统计当前目录下的文件数,其中包括隐藏文件。
```
# find . -type f | wc -l
7
```
**细节:**
* `find` 搜索目录层次结构中的文件
* `-type` 文件类型
* `f` 常规文件
* `wc` 这是一个为每个文件打印换行符,字和字节数的命令
* `-l` 打印换行符数
**示例-6** 使用 tree 命令打印目录和文件数(排除隐藏文件)。
```
# tree | tail -1
2 directories, 5 files
```
**示例-7** 使用 tree 命令打印目录和文件数(包含隐藏文件)。
```
# tree -a | tail -1
2 directories, 7 files
```
**示例-8** 运行下面的命令递归统计目录数(包含隐藏目录)。
```
# find . -type d | wc -l
3
```
**示例-9** 根据文件扩展名统计文件数量。这里我们要统计 `.txt` 文件。
```
# find . -name "*.txt" | wc -l
7
```
**示例-10** 使用 echo 命令和 wc 命令统计当前目录中的所有文件。输出中的 `4` 表示当前目录中的文件数量。
```
# echo * | wc
1 4 39
```
**示例-11** 使用 echo 命令和 wc 命令统计当前目录中的所有目录。输出中的 `1` 表示当前目录中的目录数量。
```
# echo */ | wc
1 1 6
```
**示例-12** 使用 echo 命令和 wc 命令统计当前目录中的所有文件和目录。输出中的 `5` 表示当前目录中目录和文件的总数。
```
# echo * | wc
1 5 44
```
**示例-13** 统计整个系统中的文件数。
```
# find / -type f | wc -l
69769
```
**示例-14** 统计整个系统中的文件夹数。
```
# find / -type d | wc -l
8819
```
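一个补充的小技巧(按需使用):在整个系统范围内统计时,遍历 /proc、/sys 这类虚拟文件系统往往又慢又没有意义,可以加上 `-xdev` 把搜索限制在根文件系统本身:
```
# 只统计根文件系统上的文件,不进入其它挂载点(包括 /proc、/sys 等虚拟文件系统
find / -xdev -type f | wc -l

# 用同样的方式统计目录数
find / -xdev -type d | wc -l
```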
**示例-15** 运行以下命令来统计整个系统中的文件、文件夹、硬链接和符号链接数。
```
# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c
8779 dirs
69343 files
20 hardlinks
11646 symlinks
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/

View File

@ -1,76 +0,0 @@

[![Linux vs. Unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix-vs-linux_orig.jpg)][1]
在计算机时代,相当一部分的人错误地认为 **Unix** 和 **Linux** 操作系统是一样的。然而,事实恰好相反。让我们仔细看看。
### 什么是 Unix
[![what is unix](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png)][2]
在 IT 领域,作为操作系统为我们所知的 Unix是 1969 年由 AT&T 公司在美国新泽西所开发的目前它的商标权由国际开放标准组织The Open Group所拥有。大多数的操作系统都受到了 Unix 的启发,而 Unix 也受到了未完成的 Multics 系统的启发。Unix 的另一个版本是来自贝尔实验室的 Plan 9。
### Unix 被用于哪里?
作为一个操作系统Unix 大多被用在服务器、工作站且现在也有用在个人计算机上。它在创建互联网、创建计算机网络或客户端/服务器模型方面发挥着非常重要的作用。
#### Unix 系统的特点
* 支持多任务multitasking
* 相比 Multics 操作更加简单
* 所有数据以纯文本形式存储
* 采用单根文件的树状存储
* 能够同时访问多用户账户
#### Unix 操作系统的组成:
**a)** 单体内核monolithic kernel负责低级操作以及由用户发起的操作用户程序与内核之间的通信通过系统调用进行。
**b)** 系统工具
**c)** 其他运用程序
### 什么是 Linux
[![what is linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png)][4]
这是一个基于 Unix 系统原理的开源操作系统。正如“开源”的含义一样,它是一个可以自由下载的系统,你也可以修改、添加并扩充其源代码。这是它最大的好处之一,而不像今天的其他操作系统Windows、Mac OS X 等需要付费。Linux 系统的开发不仅是因为 Unix 系统提供了一个模板,另一个重要的因素是受到了 MINIX 系统的启发与 Linux 不同MINIX 被它的创造者Andrew Tanenbaum用于商业用途。1991 年,**Linus Torvalds** 开始把对 Linux 系统的开发当做个人兴趣。Linux 之所以借鉴 Unix是因为这个系统足够简单。Linux 的第一个官方版本0.01)发布于 1991 年 9 月 17 日。虽然这个系统并不是很完美和完整,但 Linus 对它产生了很大的兴趣,并在接下来的几天里,开始写关于 Linux 源代码扩展以及其他想法的电子邮件。
### Linux 的特点
Linux 的内核秉承了 Unix 内核的基本特点,并遵循 **POSIX** 和 Single **UNIX 规范标准**。这个操作系统的官方名字取自 **Linus**,名字末尾的 “x” 则与 **Unix 系统** 相呼应。
#### 主要特点:
* 一次运行多任务(多任务)
* 程序可以包含一个或多个进程multipurpose system且每个进程可能有一个或多个线程。
* 多用户,因此它可以运行多个用户程序。
* 个人帐户受适当授权的保护。
* 因此账户准确地定义了系统控制权。
**企鹅 Tux** Logo 的作者是 Larry Ewing他选择了这只企鹅作为他的开源 **Linux 操作系统** 的吉祥物。**Linus Torvalds** 最初提议将这个新操作系统命名为 “Freax”即 “自由free”、“怪异freak” 再加上代表 Unix 系统的 “x” 的组合词,但当时管理 **FTP 服务器** 的管理员并不喜欢这个名字,于是把它改成了 “Linux”。
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/linux-vs-unix
作者:[linuxandubuntu][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[imquanquan](https://github.com/imquanquan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/linux-vs-unix
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png
[3]:http://www.unix.org/what_is_unix.html
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png
[5]:https://www.linux.com

View File

@ -1,108 +0,0 @@
创建局域网内的离线 YUM 仓库
======
在早先的教程中,我们讨论了《**[如何使用 ISO 镜像和镜像在线 yum 仓库的方式来创建自己的 yum 仓库][1]**》。创建自己的 yum 仓库是一个不错的想法,但若网络中只有 2-3 台 Linux 机器,那就没什么必要了。不过,若你的网络中有大量的 Linux 服务器,而且这些服务器还需要定时进行升级,或者你有大量服务器无法直接访问因特网,那么创建自己的 yum 仓库就很有必要了。
当我们有大量的 Linux 服务器,而每个服务器都直接从因特网上升级系统时,数据消耗会很可观。为了节省流量,我们可以创建一个离线 yum 源并将之分享到本地网络中。这样,网络中的其他 Linux 机器就可以直接从本地 yum 仓库获取系统更新,既节省了流量,传输速度也会很好。
我们可以使用下面两种方法来分享 yum 仓库:
* **使用 Web 服务器 (Apache)**
* **使用 ftp (VSFTPD)**
在开始讲解这两个方法之前,我们需要先根据之前的教程创建一个 YUM 仓库( **[看这里 ][1]** )
## 使用 Web 服务器
首先,在 yum 服务器上安装 Web 服务器 (Apache),我们假设服务器 IP 是 **192.168.1.100**。我们已经在这台系统上配置好了 yum 仓库,现在我们使用 yum 命令安装 apache web 服务器:
```
$ yum install httpd
```
下一步,把所有的 rpm 包拷贝到默认的 apache 根目录下,即 **/var/www/html**。由于我们已经将包都拷贝到了 **/YUM** 下,我们也可以在 /var/www/html 下创建一个指向 /YUM 的软链接:
```
$ ln -s /YUM /var/www/html/CentOS
```
重启 web 服务器以应用变更:
```
$ systemctl restart httpd
```
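另外一个容易被忽略的小细节(仅当服务器启用了 firewalld 时才需要,否则可以跳过):要让局域网内的客户端访问到这个仓库,需要放行对应的服务。
```
# 放行 http 服务(若使用后文的 FTP 方式分享,则把 http 换成 ftp
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
```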
### 配置客户端机器
服务端的配置就完成了,现在需要配置下客户端来从我们创建的离线 yum 中获取升级包,这里假设客户端 IP 为 **192.168.1.101**
在 `/etc/yum.repos.d` 目录中创建 `offline-yum.repo` 文件,输入如下信息:
```
$ vi /etc/yum.repos.d/offline-yum.repo
```
```
[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1
```
客户端也配置完了。试一下用 yum 来安装/升级软件包来确认仓库是正常工作的。
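在客户端上,可以先用类似下面的命令做一个快速验证(示意命令,软件包名请换成你实际需要的):
```
# 清理本地缓存,强制重新读取仓库元数据
yum clean all

# 确认 offline-yum 仓库出现在仓库列表中
yum repolist

# 随便装一个小软件包测试仓库是否可用
yum install -y tree
```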
## 使用 FTP 服务器
要在 FTP 上分享 YUM 仓库,首先需要安装所需的软件包,即 vsftpd
```
$ yum install vsftpd
```
vsftpd 的默认根目录为 `/var/ftp/pub`,因此你可以把 rpm 包拷贝到这个目录下,或者在其中创建一个指向 /YUM 的软链接:
```
$ ln -s /YUM /var/ftp/pub/CentOS
```
重启服务以应用变更:
```
$ systemctl restart vsftpd
```
### 配置客户端机器
像上面一样,在 `/etc/yum.repos.d` 中创建 **offline-yum.repo** 文件,并输入下面信息,
```
$ vi /etc/yum.repos.d/offline-yum.repo
```
```
[offline-yum]
name=Local YUM
baseurl=ftp://192.168.1.100/pub/CentOS/7
gpgcheck=0
enabled=1
```
现在客户机可以通过 ftp 获取升级了。要配置 vsftpd 服务器为其他 Linux 系统分享文件,请[**阅读这篇指南**][2]。
这两种方法都很不错,你可以任意选择其中一种方法。有任何疑问或这想说的话,欢迎在下面留言框中留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/offline-yum-repository-for-lan/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/creating-yum-repository-iso-online-repo/
[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/

View File

@ -0,0 +1,123 @@
为初学者介绍 Linux whereis 命令5 个例子)
======
有时,在使用命令行的时候,我们需要快速找到某一个命令二进制文件所在位置。这种情况下可以选择[find][1]命令,但使用它会耗费时间,可能也会出现意料之外的情况。有一个专门为这种情况设计的命令:**whereis**。
在这篇文章里,我们会通过一些便于理解的例子来解释这一命令的基础内容。但在这之前,值得说明的一点是,下面出现的所有例子都在 Ubuntu 16.04 LTS 下测试过。
### Linux whereis 命令
whereis 命令可以帮助用户寻找某一命令的二进制文件,源码以及帮助页面。下面是它的格式:
```
whereis [options] [-BMS directory... -f] name...
```
这是这一命令的man 页面给出的解释:
```
whereis 可以查找指定命令的二进制文件、源文件和帮助文件。对于提供的名称,它会先去掉前导的路径部分以及单个 .ext 形式的扩展名(如 .c由源代码管理产生的 s. 前缀也会被处理。接下来whereis 会尝试在标准的 Linux 程序存放位置,以及由 $PATH 和 $MANPATH 指定的路径中寻找。
```
下面这些以Q&A 形式出现的例子可以给你一个关于如何使用whereis命令的直观感受。
### Q1. 如何用 whereis 命令寻找二进制文件所在位置?
假设你想找到比如说whereis 命令自身所在的位置。下面是具体的操作:
```
whereis whereis
```
[![How to find location of binary file using whereis][2]][3]
需要注意的是,输出的第一个路径才是你想要的结果。使用 whereis 命令,同时也会显示帮助页面和源码所在的路径(如果能找到的话,本例中就没有找到源码)。所以你在输出中看见的第二个路径就是帮助页面文件所在的位置。
### Q2. 如何限定只搜索二进制文件、帮助页面或源代码?
如果你只想搜索,比如说,二进制文件,你可以使用 **-b** 这一命令行选项。例如:
```
whereis -b cp
```
[![How to specifically search for binaries, manuals, or source code][4]][5]
类似的,**-m** 和 **-s** 这两个选项分别对应帮助页面和源码。
### Q3. 如何按需限制 whereis 命令的搜索范围?
默认情况下whereis 是从系统的硬编码路径来寻找文件的,它会输出所有符合条件的结果。但如果你想的话,可以用命令行选项来限制搜索范围。例如,如果你只想在 /usr/bin 里寻找二进制文件,可以用 **-B** 这一选项来实现。
```
whereis -B /usr/bin/ -f cp
```
**注意**:使用这种方式时可以给出多个路径。**-f** 选项用于终止目录列表,表明它后面跟的是要查找的文件名。
类似的,如果你想只搜索帮助文件或源码,可以对应使用 **-M** 和 **-S** 这两个选项。
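下面是一个简单的示意(假设你的系统把第 1 章节的手册页放在 /usr/share/man/man1 下,具体路径可能因发行版而异):
```
# 只在指定目录中查找 cp 的帮助页面;-f 用来分隔目录列表和要查找的名称
whereis -m -M /usr/share/man/man1 -f cp
```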
### Q4. 如何查看 whereis 的搜索路径?
与此相对应的也有一个选项:只要在 whereis 后加上 **-l** 即可。
```
whereis -l
```
这是例子的部分输出结果:
[![How to see paths that whereis uses for search][6]][7]
### Q5. 如何查找具有异常条目的命令?
对于 whereis 命令来说,如果一个命令对每种显式请求的类型(二进制文件、帮助页面、源码)不是恰好各有一个条目,那么该命令就被视为异常。例如,没有可用文档的命令,或者对应文档分散在多处的命令,都可以算作异常命令。当使用 **-u** 这一选项时whereis 就会显示那些有异常条目的命令。
例如,下面这一例子就显示,在当前目录中,没有对应文档或有多个文档的命令。
```
whereis -m -u *
```
### 总结
我同意whereis 不是那种你需要经常使用的命令行工具,但在遇到某些特殊情况时,它绝对会让你的生活变得轻松。我们已经介绍了这一工具提供的一些重要命令行选项,要多加练习。想了解更多信息,直接去看它的 [man][8] 页面吧。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/linux-whereis-command/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
[8]:https://linux.die.net/man/1/whereis

View File

@ -0,0 +1,158 @@
Python 中最快解压 zip 文件的方法
======
假设现在的上下文LCTT 译注context此处意为业务场景是这样的一个 zip 文件被上传到一个 [web 服务][1]中,然后 Python 需要解压这个 zip 文件并分析和处理其中的每个文件。这个特殊的应用会查看每个文件各自的名称和大小,并和已经上传到 AWS S3 上的文件进行比较,如果文件(与 AWS S3 上的相比)有所不同,或者文件本身更新,那么就将它上传到 AWS S3。
[![Uploads today][2]][3]
挑战在于这些 zip 文件太大了。它们的平均大小是 560MB其中一些甚至大于 1GB。这些文件中大多数是文本文件但也有一些巨大的二进制文件。不同寻常的是每个 zip 文件包含 100 个文件,其中的 1-3 个文件却占据了多达 95% 的 zip 文件大小。
最开始我尝试在内存中解压文件,并且每次只处理一个文件。在各种内存爆炸和 EC2 耗尽内存的情况下,这个方法壮烈失败了。仔细想想,这也说得通:最开始你有 1GB 的文件在内存RAM然后解压每个文件后内存中就有了大约 2-3GB 的数据。所以,在多次测试之后,解决方案是将这些 zip 文件转储dump到磁盘上的临时目录 `/tmp` 中),然后遍历这些文件。这次情况好多了,但是我仍然注意到整个解压过程花费了巨量的时间。**是否可能有方法优化呢?**
### 原始函数(baseline function)
首先是下面这些用来模拟对 zip 文件中的文件进行实际操作的普通函数:
```
def _count_file(fn):
with open(fn, 'rb') as f:
return _count_file_object(f)
def _count_file_object(f):
# Note that this iterates on 'f'.
# You *could* do 'return len(f.read())'
# which would be faster but potentially memory
# inefficient and unrealistic in terms of this
# benchmark experiment.
total = 0
for line in f:
total += len(line)
return total
```
接下来是一个尽可能简单的实现:
```
import os
import zipfile


def f1(fn, dest):
with open(fn, 'rb') as f:
zf = zipfile.ZipFile(f)
zf.extractall(dest)
total = 0
for root, dirs, files in os.walk(dest):
for file_ in files:
fn = os.path.join(root, file_)
total += _count_file(fn)
return total
```
如果我更仔细地分析一下就会发现,这个函数大约 40% 的时间花在运行 `extractall` 上,剩下 60% 的时间花在遍历文件并读取其长度的循环上。
### 第一步尝试
我的第一步尝试是使用线程。先创建一个 `zipfile.ZipFile` 的实例,遍历其中的每个文件名,然后为每个文件名启动一个线程。每个线程都分配一个函数来做“实质工作”(在这个基准测试benchmark中就是遍历每个文件然后获取它的名称。实际业务中的函数所做的是与 S3、Redis 和 PostgreSQL 相关的复杂操作,但是在我的基准测试中,我只需要一个可以算出文件长度的函数就好了。线程池函数:
```
import concurrent.futures


def f2(fn, dest):
def unzip_member(zf, member, dest):
zf.extract(member, dest)
fn = os.path.join(dest, member.filename)
return _count_file(fn)
with open(fn, 'rb') as f:
zf = zipfile.ZipFile(f)
futures = []
with concurrent.futures.ThreadPoolExecutor() as executor:
for member in zf.infolist():
futures.append(
executor.submit(
unzip_member,
zf,
member,
dest,
)
)
total = 0
for future in concurrent.futures.as_completed(futures):
total += future.result()
return total
```
**结果:加速~10%**
### 第二步尝试
所以可能是 GILLCTT 译注Global Interpreter Lock全局解释器锁CPython 中的一个概念)阻碍了我。最自然的想法是尝试使用 multiprocessing 在多个 CPU 上分配工作。但是这样做有个缺点,那就是你不能传递一个不可被 pickle 序列化的对象(译注:意为只有可 pickle 序列化的对象可以被传递),所以你只能把文件名发送给之后的函数:
```
def unzip_member_f3(zip_filepath, filename, dest):
with open(zip_filepath, 'rb') as f:
zf = zipfile.ZipFile(f)
zf.extract(filename, dest)
fn = os.path.join(dest, filename)
return _count_file(fn)
def f3(fn, dest):
with open(fn, 'rb') as f:
zf = zipfile.ZipFile(f)
futures = []
with concurrent.futures.ProcessPoolExecutor() as executor:
for member in zf.infolist():
futures.append(
executor.submit(
unzip_member_f3,
fn,
member.filename,
dest,
)
)
total = 0
for future in concurrent.futures.as_completed(futures):
total += future.result()
return total
```
**结果: 加速~300%**
### 这是作弊
使用进程池的问题是,它需要原始的 `.zip` 文件存在于磁盘上。所以为了在我的 web 服务器上使用这个解决方案,我首先得把内存中的 zip 文件保存到磁盘,然后再调用这个函数。这样做的代价我不是很清楚,但是应该不低。
好吧,再多研究一下poke around也没有什么坏处。也许解压过程的加速足以弥补这样做的损失吧。
但是一定要记住,这个优化取决于使用所有可用的 CPU。如果其它 CPU 还需要处理 `gunicorn` 中的其他任务呢?这时这些进程就必须等待,直到有 CPU 可用。由于这台服务器上还有其他任务正在进行,我不是很确定自己想要在这个进程中接管所有其他 CPU。
### 结论
按部就班地串行处理,效果其实已经挺好的了。虽然你被限制在一个 CPU 上,但是表现仍然特别好。同样地,一定要看看 `f1` 和 `f2` 两段代码之间的不同之处!利用 `concurrent.futures` 池类,你可以获取可用 CPU 的个数,但是这样做给人的感觉同样不是很好。如果你在虚拟化环境中获取到的个数是错的呢?或者可用的个数太低,以致无法从负载分配中获得好处,而你现在只是为了搬移负载而白白支付额外的开销呢?
我将会继续使用 `zipfile.ZipFile(file_buffer).extractall(temp_dir)`。它已经工作得足够好了。
### 想试试手吗?
我使用一个`c5.4xlarge` EC2服务器来进行我的基准测试。文件可以从此处下载:
```
wget https://www.peterbe.com/unzip-in-parallel/hack.unzip-in-parallel.py
wget https://www.peterbe.com/unzip-in-parallel/symbols-2017-11-27T14_15_30.zip
```
这里的 `.zip` 文件有 34MB和服务器上实际处理的文件相比已经小了很多。
`hack.unzip-in-parallel.py` 文件里是一团糟。它包含了大量糟糕的硬编码hack和丑陋的写法但好歹这只是一个开始。
--------------------------------------------------------------------------------
via: https://www.peterbe.com/plog/fastest-way-to-unzip-a-zip-file-in-python
作者:[Peterbe][a]
译者:[Leemeans](https://github.com/leemeans)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.peterbe.com/
[1]:https://symbols.mozilla.org
[2]:https://cdn-2916.kxcdn.com/cache/b7/bb/b7bbcf60347a5fa91420f71bbeed6d37.png
[3]:https://cdn-2916.kxcdn.com/cache/e6/dc/e6dc20acd37d94239edbbc0727721e4a.png

View File

@ -0,0 +1,103 @@
为什么应该在 Linux 上使用命名管道
======
![](https://images.techhive.com/images/article/2017/05/blue-1845806_1280-100722976-large.jpg)
估计每一位 Linux 使用者都熟悉使用 “|” 符号将数据从一个进程传输到另一个进程的操作。它使用户能简便地把一个命令的输出传给另一个命令,并筛选出想要的数据,而无须写脚本进行选择、重新格式化等操作。
还有另一种管道,虽然也叫“管道”这个名字,性质却非常不同,那就是您可能尚未使用甚至尚未听说过的命名管道。
**另可参考:[11 pointless but awesome Linux terminal tricks][1]**
普通管道与命名管道的一个主要区别就是,命名管道以文件的形式实实在在地存在于文件系统中。但是与其它文件不同的是,命名管道文件似乎从来没有文件内容。即使用户往命名管道中写入大量数据,该文件看起来还是空的。
### 如何在Linux上创建命名管道
在我们研究这些空的命名管道之前先追根溯源来看看命名管道是如何被创建的。您可以使用 **mkfifo** 命令。之所以提到 “FIFO”是因为命名管道也被认为是 FIFO 特殊文件。术语 “FIFO” 指的是它的先进先出first-in, first-out属性。如果某人在吃一个甜筒冰淇淋那么他执行的就是一个后进先出LIFOlast-in, first-out操作如果他是用吸管来吃那就是在执行一个先进先出FIFO操作。好接下来是一个创建命名管道的例子。
```
$ mkfifo mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
(译者注:实际环境中,当您输入 `mkfifo mypipe` 后,命令行可能会阻塞在当前位置,这是因为以只写方式打开的 FIFO 要阻塞到其他某个进程以读方式打开该 FIFO。您可在另一个终端中输入 `cat < mypipe` 来解决,注意保证两个终端中 mypipe 的路径一致。)注意一下特殊的文件类型标记 “p”以及文件大小为 0。您可以将数据重定向写入命名管道文件而文件大小依然为 0。
```
$ echo "Can you read this?" > mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
正如上面所说,敲击回车似乎什么都没有发生。
```
$ echo "Can you read this?" > mypipe
```
不那么显然的是,用户输入的文本已经进入了该命名管道,而此时您可能还在盯着输入端。这时需要您或者其他人到输出端去读取数据。
```
$ cat mypipe
Can you read this?
```
一旦被读取之后,管道中的内容就不再被保存。
另一种研究命名管道如何工作的方式,是把发送数据的操作放到后台执行,从而在同一个终端里完成发送和接收两个操作。
```
$ echo "Can you read this?" > mypipe &
[1] 79302
$ cat mypipe
Can you read this?
[1]+ Done echo "Can you read this?" > mypipe
```
一旦管道被读取或“耗干”,该管道就清空了,尽管我们还能看见它并再次使用,可为什么要费此周折呢?
### 为何要使用命名管道?
命名管道很少被使用的理由似乎很充分。毕竟在 Unix 系统上,总有多种不同的方式完成同样的操作:有多种方式写文件、读文件、清空文件,尽管命名管道用起来比它们更高效。
值得注意的是,命名管道的内容驻留在内存中而不是被写到硬盘上。数据内容只有在输入输出两端都打开时才会传送。用户可以在管道的输出端被打开之前向管道多次写入。通过使用命名管道,用户可以让一个进程写入管道、另一个进程从管道读取,而不用关心协调二者时间上的同步。
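下面是一个很小的演示脚本(纯属示意,管道名 logpipe 是随便取的),展示写入进程和读取进程如何在不做任何显式同步的情况下协作:
```
#!/bin/bash
# 创建命名管道(如果已经存在就跳过)
[ -p logpipe ] || mkfifo logpipe

# 后台的“生产者”:向管道写入几行数据
( for i in 1 2 3; do echo "message $i"; sleep 1; done > logpipe ) &

# 前台的“消费者”:从管道读取并打印,写入端关闭后自动结束
cat logpipe

# 用完后清理
rm -f logpipe
```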
用户可以创建一个单纯等待数据出现在输出端的管道,并在拿到输出数据后对其进行操作。
```
$ tail -f mypipe
```
一旦喂给管道数据的进程结束了,我们就可以看到一些输出。
```
$ tail -f mypipe
Uranus replicated to WCDC7
Saturn replicated to WCDC8
Pluto replicated to WCDC9
Server replication operation completed
```
如果研究一下向命名管道写入的进程,您也许会惊讶于它的资源消耗之少。在下面的 `ps` 命令输出中唯一显著的资源消耗是虚拟内存VSZ那一列。
```
ps u -p 80038
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
shs 80038 0.0 0.0 108488 764 pts/4 S 15:25 0:00 -bash
```
命名管道与 Unix 系统上更常用的管道差异很大,大到足以拥有另一个名号,但是“管道”一词确实形象地反映了它们在进程间传送数据的方式,所以称其为“命名管道”还真是恰如其分。也许您在执行操作时就能从这个聪明的 Unix 特性中获益匪浅呢。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3251853/linux/why-use-named-pipes-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[YPBlib](https://github.com/YPBlib)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,94 @@
优化 MySQL 3 个简单的小调整
============================================================
如果你不改变 MySQL 的缺省配置,你的服务器的性能就像下图的挂着一档的法拉利一样 “虎落平阳被犬欺” …
![](https://cdn-images-1.medium.com/max/1000/1*b7M28XbrOc4FF3tJP-vvyg.png)
我并不期望成为一个专家级的 DBA但是在我优化 MySQL 时,我推崇 80/20 原则,明确说就是通过简单的调整一些配置,你可以压榨出高达 80% 的性能提升。尤其是在服务器资源越来越便宜的当下。
#### 警告:
1. 没有两个数据库或者应用程序是完全相同的。这里假设我们要调整的数据库是为一个“典型”的 web 网站服务的,你优先考虑的是快速查询、良好的用户体验以及处理大量的流量。
2. 在你对服务器进行优化之前,请做好数据库备份!
### 1\. 使用 InnoDB 存储引擎
如果你还在使用 MyISAM 存储引擎,那么是时候转换到 InnoDB 了。有很多的理由都表明 InnoDB 比 MyISAM 更有优势,如果你关注性能,那么,我们来看一下它们是如何利用物理内存的:
* MyISAM仅在内存中保存索引。
* InnoDB在内存中保存索引**和**数据。
结论:保存在内存的内容访问速度要比磁盘上的更快。
下面是如何在你的表上去转换存储引擎的命令:
```
ALTER TABLE table_name ENGINE=InnoDB;
```
_注意_ _你已经创建了所有合适的索引对吗为了更好的性能创建索引永远是第一优先考虑的事情。_
### 2\. 让 InnoDB 使用所有的内存
你可以在 _my.cnf_ 文件中编辑你的 MySQL 配置。使用 `innodb_buffer_pool_size` 参数去配置在你的服务器上允许 InnoDB 使用物理内存数量。
对此,假设你的服务器 _仅仅_ 运行 MySQL公认的“经验法则”是设置为你的服务器物理内存的 80%。在保证操作系统不使用交换分区swap而正常运行所需要的足够内存之后尽可能多地为 MySQL 分配物理内存。
因此,如果你的服务器物理内存是 32 GB可以将那个参数设置为多达 25 GB。
```
innodb_buffer_pool_size = 25600M
```
_注意:(1) 如果你的服务器内存小于 1 GB为了适用本文的方法你应该先升级你的服务器。(2) 如果你的服务器内存特别大,比如有 200 GB那么根据一般常识你也没有必要为操作系统保留多达 40 GB 的内存。_
### 3\. 让 InnoDB 多任务运行
如果服务器上的参数 `innodb_buffer_pool_size` 配置为大于 1 GBInnoDB 的缓冲池将根据参数 `innodb_buffer_pool_instances` 的设置被划分为多个实例。
拥有多于一个的缓冲池的好处有:
> 在多线程同时访问缓冲池时可能会遇到瓶颈。你可以通过启用多缓冲池来最小化这种争用情况:
对于缓冲池数量的官方建议是:
> 为了实现最佳的效果,要综合考虑 `innodb_buffer_pool_instances``innodb_buffer_pool_size` 的设置,以确保每个实例至少有不小于 1 GB 的缓冲池。
因此,在我们的示例中(服务器拥有 32 GB 物理内存,`innodb_buffer_pool_size` 设置为 25 GB一个合适的设置是 24 个实例,即每个实例 25600M / 24 ≈ 1.06 GB
```
innodb_buffer_pool_instances = 24
```
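改完之后(别忘了下面“注意!”一节提到的重启步骤),可以用类似这样的命令确认设置已经生效(示意命令,账号与鉴权方式以你的环境为准):
```
# 确认缓冲池大小与实例数是否与 my.cnf 中的设置一致(大小以字节为单位)
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool%';"
```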
### 注意!
在修改了 _my.cnf_ 文件后需要重启 MySQL 才能生效:
```
sudo service mysql restart
```
* * *
还有更多更科学的方法来优化这些参数,这几点作为一个通用准则来应用,将使你的 MySQL 服务器性能更好。
--------------------------------------------------------------------------------
作者简介:
我喜欢商业技术以及跑车 | 集团 CTO @ Parcel Monkey Cloud Fulfilment & Kong。
------
via: https://medium.com/@richb_/tuning-mysql-3-simple-tweaks-6356768f9b90
作者:[Rich Barrett](https://medium.com/@richb_)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出