Merge remote-tracking branch 'LCTT/master'

wxy 2018-02-02 23:30:26 +08:00
commit ff7a7f10e1
39 changed files with 6801 additions and 648 deletions


@ -0,0 +1,76 @@
Internet Chemotherapy
======
> LCTT translator's note: the author of this article, janit0r, is believed to be the author of the BrickerBot malware, which attacks insufficiently secured IoT devices and knocks them off the network. janit0r claims that his goal was to protect the Internet by keeping these devices from being hijacked and used to attack other devices online. He calls the project "Internet Chemotherapy". janit0r decided to end the project in December 2017 and published this article online.
> —— 12/10 2017
### --[ 1 Internet Chemotherapy
Internet Chemotherapy was a 13-month project that ran from November 2016 through December 2017. It has been called "BrickerBot", "a bad firmware upgrade", "ransomware", "a large-scale network outage" and even "an unprecedented act of terrorism". That last one stung a bit, Fernandez (LCTT translator's note: the fiber network of the Venezuelan telecom company CANTV was attacked in August 2017, and the company's chairman Manuel Fernandez called the attack ["an unprecedented act of terrorism"][1]), but I suppose I can't please everyone.
You can download my code module from http://91.215.104.140/mod_plaintext.py; it sends malicious requests over http and telnet (LCTT translator's note: the link is no longer live, but a backup is available on [GitHub][2]). Because of platform limitations the module is obfuscated single-threaded Python, but the payloads (LCTT translator's note: "payload" refers to the actual attack/exploit code) are still in plaintext, and any competent programmer should be able to read them. Look at how many payloads, 0-day exploits and intrusion techniques are in there, and take a moment to let reality sink in. Then imagine how badly the Internet would have been hit in 2017 if I had been a blackhat dedicated to building a powerful DDoS cannon to extort the largest ISPs and companies. I could have thrown them all into chaos while doing enormous damage to the entire Internet at the same time.
My ssh crawler is too dangerous to release. It contains many layers of automation and, starting from a single compromised router, can move laterally through and compromise a badly designed ISP's network. It was only because I could commandeer tens of thousands of ISP routers (routers that showed me what was happening on the network and gave me a steady supply of nodes for intrusion work) that my anti-IoT-botnet project was possible at all. I started my non-destructive ISP network cleanup project in 2015, so when Mirai struck I was already prepared to respond. Actively breaking other people's devices was still a hard decision, but the extraordinarily dangerous CVE-2016-10372 left me no choice. From that point on I decided to see it through to the end.
(LCTT translator's note: the Mirai malware mentioned above first appeared in August 2016. It remotely compromises network devices running Linux and uses them to build botnets. The author, janit0r, claims that when Mirai struck he used his BrickerBot malware to forcibly knock tens of thousands of devices off the network, reducing the number of devices Mirai could exploit.)
I warn you that what I did was only a stopgap; it will not be enough to keep saving the Internet in the future. The bad guys are getting smarter, the number of potentially vulnerable devices keeps growing, and a large-scale, network-crippling event is only a matter of time. If you are willing to believe that I disabled over ten million vulnerable devices during a 13-month project, then it is not an exaggeration to say that such a serious event could already have happened in 2017.
__You should realize that it would only take one or two more serious IoT vulnerabilities for our networks to be severely crippled.__ Considering how dependent our society now is on digital networks, and how much CERTs, ISPs and governments ignore the severity of the problem, the damage from such an event is impossible to estimate. ISPs keep deploying devices with exposed control ports, and even though services like Shodan make these problems trivially easy to find, the national CERTs don't seem to care. Many countries don't even have a CERT. Many of the world's largest ISPs employ no one who really understands computer security and instead rely on outside experts when problems arise. I have watched large ISPs suffer continuous damage for months under the modulation of my botnet and still fail to fully fix the vulnerabilities (good examples are BSNL, Telkom ZA, PLDT, at times PT Telkom, and most of the large ISPs in the southern hemisphere). Just look at how slowly Telkom ZA dealt with its Aztech modem problem and you will begin to understand how hopeless the situation is. In 99% of the cases the problem could be solved by having the ISP deploy sensible access control lists and put the customer-premises equipment (CPE) in its own network segment, but months later their technicians still had not figured it out. If ISPs cannot fix the problem after weeks or months of deliberate attacks on their own equipment, how can we expect them to notice and fix the problems Mirai causes on their networks? Many of the world's largest ISPs are shockingly ignorant of all this, and that is without doubt the biggest danger, yet strangely it should also be the easiest problem to fix.
I have done my part trying to buy the Internet some more time, but I have done all I can. Now it is up to you. Even small actions matter. Things you can do include:
* Use a service like Shodan to audit your ISP's security, and push them to close the open telnet, http, httpd, ssh and tr069 ports on their network. Show them this article if you need to. There is never a good reason for these ports to be reachable from the outside. Exposing control ports is an amateur's mistake. If enough customers complain, they might actually act!
* Vote with your wallet! Refuse to buy or use any "smart" product unless the manufacturer guarantees that it can and will receive timely security updates. Check a vendor's security track record before handing over your hard-earned money. Be willing to pay a little more for better security.
* Lobby your local politicians and government officials for better legislation regulating IoT devices, including routers, IP cameras and all kinds of "smart" devices. Neither private nor public companies currently have enough incentive to fix the problem in the short term. This matters as much as safety standards for cars or household appliances.
* Consider donating your time or other resources to under-supported whitehat organizations such as the GDI Foundation or the Shadowserver Foundation. These organizations and people make a huge difference, and they can put your abilities to good use to help the Internet.
* Finally, it's a long shot, but consider pushing for a legal precedent that treats insecure IoT devices as an "attractive nuisance" (LCTT translator's note: "attractive nuisance" is a doctrine in US law under which a landowner can be held liable when children are injured on the property by a dangerous object that is likely to attract them, regardless of whether the children entered lawfully). If a homeowner can be held liable when a thief or trespasser gets hurt, I don't see why the owner of a device (or the ISP or the device manufacturer) should not be held liable for the harm caused when their dangerous devices are remotely compromised. Joint liability should apply to application-layer compromises of devices. If none of the wealthy large ISPs will pay to establish such a precedent (and they may well not, out of fear that it could come back to bite them), we could even crowdfund the effort here and in Europe. ISPs: consider the substantial savings on DDoS-mitigation bandwidth my indirect investment toward this goal, and proof of its benefits.
### --[ 2 Timeline
Here are some of the more memorable events of the project:
* The Deutsche Telekom Mirai incident in late November 2016. My hastily written first TR069/64 payload only ran `route del default`, but that was enough to make the ISP pay attention to the problem, and the headlines it generated warned other ISPs around the world about the looming crisis.
* Around January 11 and 12, some DVRs in Washington DC with control port 6789 exposed were compromised by Mirai and bricked, which made plenty of headlines. Kudos to Vemulapalli for concluding that Mirai plus `/dev/urandom` must have been "very sophisticated ransomware" (LCTT translator's note: Archana Vemulapalli was then the CTO of the Washington DC city government). And whatever happened to those two poor souls in Europe?
* The first truly large ISP outage happened in late January 2017. Hitron, the vendor for Rogers Canada, had very carelessly shipped firmware with an unauthenticated root shell listening on port 2323 (probably a debug interface they forgot to close). This staggering blunder was quickly discovered by Mirai botnets, and large numbers of devices were bricked.
* In February 2017 I noticed Mirai's first expansion of the year: both Netcore/Netis devices and modems based on the Broadcom CLI came under attack. The BCM CLI went on to become Mirai's main battleground of 2017, with the blackhats and me spending much of the rest of the year hunting down the countless default passwords set by ISPs and device manufacturers. The "broadcom" payloads in the code above may look a little odd, but they are the sequences statistically most likely to disable those many problematic BCM CLI firmwares.
* In March 2017 I significantly increased my botnet's node count and started adding more network payloads. This was in response to threats from botnets including Imeij, Amnesia and Persirai. Disabling these compromised devices at scale also brought new problems of its own. For example, among the credentials leaked by the Avtech and Wificam devices were usernames and passwords that looked very much like they belonged to airports and other important facilities, and around April 1, 2017 UK government officials warned of a "credible cyber threat" against airports and nuclear facilities. Oops.
* The more aggressive scanning also caught the attention of civilian security researchers, and the security firm Radware published an article about my project on April 6, 2017. The company named it "BrickerBot". Clearly, if I was going to keep scaling up my IoT defenses, I had to come up with better network mapping and detection to handle honeypots and other risky targets.
* Something very unusual happened around April 11, 2017. At first it looked like many of the other ISP outages: a semi-regional ISP called Sierra Tel was running Zyxel devices with the default telnet credentials supervisor/zyad1234. A Mirai runner found the vulnerable devices, my botnet followed close behind, and another battle in the glorious BCM CLI war of 2017 broke out. The battle did not last long. It would have been just like the hundreds of other ISP outages of 2017 were it not for what happened after the dust settled. Amazingly, the ISP did not try to pass the outage off as some network failure, power surge or botched firmware upgrade. They did not lie to their customers at all. Instead, they promptly published a press release about their vulnerable modems, which let their customers assess their own potential exposure. And what did the most honest ISP in the world get for its commendable openness? Sadly, nothing but criticism and bad press. It is still the most discouraging "this is why we can't have nice things" example I can remember, and it is very likely the main reason 99% of security incidents get covered up while the real victims are kept in the dark. Too often, "responsible disclosure" simply becomes a euphemism for "sweeping it under the rug".
* On April 14, 2017 DHS issued a warning about "BrickerBot threats to the Internet of Things", and having my own government label me a cyber threat struck me as unfair and shortsighted. Compared to me, shouldn't the biggest threats to the American people be the providers deploying insecure network equipment and the IoT manufacturers selling half-baked security? Without me, millions of people might still be doing their banking and other sensitive transactions over compromised devices and networks. If anyone at DHS reads this, I strongly suggest you reconsider what protecting the country and its citizens actually means.
* In late April 2017 I spent some time improving my TR069/64 attack methods, and in early May 2017 a company called Wordfence (now Defiant) reported a marked decline in a TR069-based botnet that had been threatening Wordpress sites. Notably, the same botnet returned temporarily a few weeks later using a different exploit (though that was eventually dealt with as well).
* In May 2017 the hosting and CDN company Akamai wrote in its Q1 2017 State of the Internet report that large DDoS attacks (over 100 Gbps) were down 89% compared to Q1 2016, and total DDoS attacks were down 30%. Since large DDoS attacks were Mirai's signature weapon, I felt this was tangible support for the months of hard work in IoT land.
* Over the summer I kept improving my arsenal of exploits, and in late July I ran some tests against ISPs in the APNIC region. The results were startling. Among other things, hundreds of thousands of BSNL and MTNL modems were disabled, and the outage became front-page news in India. Given the geopolitical tensions escalating between India and China at the time, I judged there was a real risk the outage would be blamed on China, so I made the rare decision to publicly claim responsibility. Catalin, I'm sorry about the "two-day vacation" you were suddenly forced to take after reporting the story.
* After dealing with APNIC and AfriNIC, on August 9, 2017 I also ran a large-scale cleanup against LACNIC, causing problems for many providers on that continent. The attack was widely reported in Venezuela after millions of Movilnet mobile users lost connectivity. Although I am personally against government regulation of the Internet, the Venezuelan case is worth noting. Many Latin American and Caribbean providers and networks had been gradually degraded by months of sustained modulation from my botnet, yet the Venezuelan providers quickly hardened their defenses and secured their network infrastructure. I believe this is because Venezuela performs more intrusive deep packet inspection than the other countries in the region. Something to think about.
* F5 Labs published a report in August 2017 titled "The Hunt for IoT: The Rise of Thingbots", in which the researchers expressed confusion about the recent calm in telnet activity. They speculated that the decline might be evidence that one or more very large cyber weapons were being built (which I suppose was actually true). The report was the most accurate assessment of my project's scale that I can remember, yet remarkably the researchers could not put the pieces together, even though they had gathered all the relevant clues on a single page.
* In August 2017, Akamai's Q2 2017 State of the Internet report announced the first quarter in three years in which the provider had seen no large (over 100 Gbps) attacks, with total DDoS attacks down 28% from Q1 2017. That looked like further support for the cleanup. This remarkably good news was completely ignored by the mainstream media, which has an "if it bleeds, it leads" mentality even in information security. One more reason why we can't have nice things.
* After CVE-2017-7921 and 7923 were published in September 2017, I decided to look more closely at Hikvision devices, and to my horror I found a way to compromise the vulnerable firmware that the blackhats had not discovered yet. So in mid-September I launched a worldwide cleanup operation. More than a million DVRs and cameras (mainly Hikvision and Dahua) were disabled within three weeks, and outlets including IPVM.com wrote multiple reports about the attacks. Dahua and Hikvision mentioned or hinted at the attacks in press releases. A large number of devices finally got firmware updates. Seeing the confusion the cleanup caused, I decided to write a [short summary][3] for the CCTV manufacturers (please excuse the jarring language on that paste site). The shocking number of devices that remained vulnerable and online even after critical security patches were released should be a wake-up call to everyone about how feeble today's IoT patching process is.
* Around September 28, 2017, Verisign published a report saying DDoS attacks in Q2 2017 were down 55% from Q1, with peak attack size down a dramatic 81%.
* On November 23, 2017 the CDN provider Cloudflare reported that "in recent months, Cloudflare has seen a dramatic reduction in simple attempts to flood our network with junk traffic". Cloudflare speculated that this might be partly related to their policy changes, but the reductions also overlap clearly with the IoT cleanup.
* In late November 2017, Akamai's Q3 2017 State of the Internet report noted a small 8% quarter-over-quarter increase in DDoS attacks. Although that was still far below Q3 2016, the slight uptick is a reminder that the danger remains.
* As a further reminder of the danger, a new Mirai variant called "Satori" started to emerge in November and December 2017. The speed at which this botnet grew on the strength of a single 0-day exploit is remarkable. The incident underscores the dangerous state of the Internet and why we are only one or two IoT exploits away from a large-scale incident. What happens when the next threat arrives and nobody stops it? Sinkholing and other whitehat or "lawful" mitigations will not be effective in 2018, just as they were not effective in 2016. Perhaps governments could one day cooperate to create an international anti-hacking task force for especially severe threats to the survival of the Internet, but I am not holding my breath.
* The end of the year brought alarmist news reports about a new botnet known as "Reaper" and "IoTroop". I know some of you will eventually mock the people who estimated its size at one to two million, but you should understand that security researchers have only partial knowledge of what happens on networks and hardware they do not control. In practice, the researchers could not know, or even guess, that most of the vulnerable devices had already been disabled by the time the botnet appeared. Give "Reaper" one or two fresh, unhandled 0-day exploits and it could become every bit as terrifying as our worst fears.
### --[ 3 Parting thoughts
I am sorry to leave you in this situation, but threats to my personal safety no longer allow me to continue. I have made many enemies. If you want to help, see the list of things to do above. Good luck.
There will also be people who criticize me as irresponsible, but that misses the point entirely. The real point is that if someone like me, with no background as a hacker, can do what I did, then someone better than me could have done far, far worse things to the Internet in 2017. I am not the problem, and I am not here to play by anyone's rules. I am only the messenger. The sooner you realize that, the better.
-Dr Cyborkian a.k.a. janit0r, conditioner of "terminally ill" devices.
--------------------------------------------------------------------------------
via: https://ghostbin.com/paste/q2vq2
Author: janit0r
Translator: [yixunx](https://github.com/yixunx)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject)
and proudly presented by [Linux China](https://linux.cn/)
[1]:https://www.telecompaper.com/news/venezuelan-operators-hit-by-unprecedented-cyberattack--1208384
[2]:https://github.com/JeremyNGalloway/mod_plaintext.py
[3]:http://depastedihrn3jtw.onion.link/show.php?md5=62d1d87f67a8bf485d43a05ec32b1e6f


@ -0,0 +1,133 @@
Custom Embedded Linux Distributions
======
### Why Go Custom?
In the past, many embedded projects used off-the-shelf distributions and stripped them down to bare essentials for a number of reasons. First, removing unused packages reduced storage requirements. Embedded systems are typically shy of large amounts of storage at boot time, and the storage available, in non-volatile memory, can require copying large amounts of the OS to memory to run. Second, removing unused packages reduced possible attack vectors. There is no sense hanging on to potentially vulnerable packages if you don't need them. Finally, removing unused packages reduced distribution management overhead. Having dependencies between packages means keeping them in sync if any one package requires an update from the upstream distribution. That can be a validation nightmare.
Yet, starting with an existing distribution and removing packages isn't as easy as it sounds. Removing one package might break dependencies held by a variety of other packages, and dependencies can change in the upstream distribution management. Additionally, some packages simply cannot be removed without great pain due to their integrated nature within the boot or runtime process. All of this takes control of the platform outside the project and can lead to unexpected delays in development.
A popular alternative is to build a custom distribution using build tools available from an upstream distribution provider. Both Gentoo and Debian provide options for this type of bottom-up build. The most popular of these is probably the Debian debootstrap utility. It retrieves prebuilt core components and allows users to cherry-pick the packages of interest in building their platforms. But, debootstrap originally was only for x86 platforms. Although there are ARM (and possibly other) options now, debootstrap and Gentoo's catalyst still take dependency management away from the local project.
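For a concrete feel of the bottom-up approach, here is a rough sketch of a debootstrap run for an ARM target; the suite name, mirror and target directory are illustrative choices, not requirements:
```
# Build a minimal Debian root filesystem for an ARM target on an x86 host.
# The suite ("stretch"), mirror and paths below are examples only.
sudo apt-get install debootstrap qemu-user-static
sudo debootstrap --arch=armhf --foreign stretch ./rootfs http://deb.debian.org/debian
# The second stage runs inside the new root; qemu-user-static lets ARM binaries run on x86.
sudo cp /usr/bin/qemu-arm-static ./rootfs/usr/bin/
sudo chroot ./rootfs /debootstrap/debootstrap --second-stage
```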
Some people will argue that letting someone else manage the platform software (like Android) is much easier than doing it yourself. But, those distributions are general-purpose, and when you're sitting on a lightweight, resource-limited IoT device, you may think twice about any advantage that is taken out of your hands.
### System Bring-Up Primer
A custom Linux distribution requires a number of software components. The first is the toolchain. A toolchain is a collection of tools for compiling software, including (but not limited to) a compiler, linker, binary manipulation tools and standard C library. Toolchains are built specifically for a target hardware device. A toolchain built on an x86 system that is intended for use with a Raspberry Pi is called a cross-toolchain. When working with small embedded devices with limited memory and storage, it's always best to use a cross-toolchain. Note that even applications written for a specific purpose in a scripted language like JavaScript will need to run on a software platform that needs to be compiled with a cross-toolchain.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f1.png)
Figure 1. Compile Dependencies and Boot Order
The cross-toolchain is used to build software components for the target hardware. The first component needed is a bootloader. When power is applied to a board, the processor (depending on design) attempts to jump to a specific memory location to start running software. That memory location is where a bootloader is stored. Hardware can have a built-in bootloader that can be run directly from its storage location or it may be copied into memory first before it is run. There also can be multiple bootloaders. A first-stage bootloader would reside on the hardware in NAND or NOR flash, for example. Its sole purpose would be to set up the hardware so a second-stage bootloader, such as one stored on an SD card, can be loaded and run.
Bootloaders have enough knowledge to get the hardware to the point where it can load Linux into memory and jump to it, effectively handing control over to Linux. Linux is an operating system. This means that, by design, it doesn't actually do anything other than monitor the hardware and provide services to higher layer software—aka applications. The [Linux kernel][1] often is accompanied by a variety of firmware blobs. These are software objects that have been precompiled, often containing proprietary IP (intellectual property) for devices used with the hardware platform. When building a custom distribution, it may be necessary to acquire any firmware blobs not provided by the Linux kernel source tree before beginning compilation of the kernel.
Applications are stored in the root filesystem. The root filesystem is constructed by compiling and collecting a variety of software libraries, tools, scripts and configuration files. Collectively, these all provide the services, such as network configuration and USB device mounting, required by applications the project will run.
In summary, a complete system build requires the following components:
1. A cross-toolchain.
2. One or more bootloaders.
3. The Linux kernel and associated firmware blobs.
4. A root filesystem populated with libraries, tools and utilities.
5. Custom applications.
### Start with the Right Tools
The components of the cross-toolchain can be built manually, but it's a complex process. Fortunately, tools exist that make this process easier. The best of them is probably [Crosstool-NG][2]. This project utilizes the same kconfig menu system used by the Linux kernel to configure the bits and pieces of the toolchain. The key to using this tool is finding the correct configuration items for the target platform. This typically includes the following items:
1. The target architecture, such as ARM or x86.
2. Endianness: little (typically Intel) or big (typically ARM or others).
3. CPU type as it's known to the compiler, such as GCC's use of either -mcpu or --with-cpu.
4. The floating point type supported, if any, by the CPU, such as GCC's use of either -mfpu or --with-fpu.
5. Specific version information for the binutils package, the C library and the C compiler.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f2.png)
Figure 2. Crosstool-NG Configuration Menu
The first four are typically available from the processor maker's documentation. It can be hard to find these for relatively new processors, but for the Raspberry Pi or BeagleBoards (and their offspring and off-shoots), you can find the information online at places like the [Embedded Linux Wiki][3].
The versions of the binutils, C library and C compiler are what will separate the toolchain from any others that might be provided from third parties. First, there are multiple providers of each of these things. Linaro provides bleeding-edge versions for newer processor types, while working to merge support into upstream projects like the GNU C Library. Although you can use a variety of providers, you may want to stick to the stock GNU toolchain or the Linaro versions of the same.
Another important selection in Crosstool-NG is the version of the Linux kernel. This selection gets headers for use with various toolchain components, but it does not have to be the same as the Linux kernel you will boot on the target hardware. It's important to choose a kernel that is not newer than the target hardware's kernel. When possible, pick a long-term support kernel that is older than the kernel that will be used on the target hardware.
For most developers new to custom distribution builds, the toolchain build is the most complex process. Fortunately, binary toolchains are available for many target hardware platforms. If building a custom toolchain becomes problematic, search online at places like the [Embedded Linux Wiki][4] for links to prebuilt toolchains.
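As a sketch of the typical Crosstool-NG workflow (the sample name below is just an example; `ct-ng list-samples` shows what your version actually ships):
```
ct-ng list-samples              # show the bundled example configurations
ct-ng arm-unknown-linux-gnueabi # load a sample close to the target as a starting point
ct-ng menuconfig                # adjust architecture, FPU, libc and tool versions
ct-ng build                     # by default the toolchain lands under ~/x-tools/<tuple>/
```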
### Booting Options
The next component to focus on after the toolchain is the bootloader. A bootloader sets up hardware so it can be used by ever more complex software. A first-stage bootloader is often provided by the target platform maker, burned into on-hardware storage like an EEPROM or NOR flash. The first-stage bootloader will make it possible to boot from, for example, an SD card. The Raspberry Pi has such a bootloader, which makes creating a custom bootloader unnecessary.
Despite that, many projects add a secondary bootloader to perform a variety of tasks. One such task could be to provide a splash animation without using the Linux kernel or userspace tools like plymouth. A more common secondary bootloader task is to make network-based boot or PCI-connected disks available. In those cases, a tertiary bootloader, such as GRUB, may be necessary to get the system running.
Most important, bootloaders load the Linux kernel and start it running. If the first-stage bootloader doesn't provide a mechanism for passing kernel arguments at boot time, a second-stage bootloader may be necessary.
A number of open-source bootloaders are available. The [U-Boot project][5] often is used for ARM platforms like the Raspberry Pi. CoreBoot typically is used for x86 platforms like the Chromebook. Bootloaders can be very specific to target hardware. The choice of bootloader will depend on overall project requirements and target hardware (lists of open-source bootloaders can be found online).
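To give a feel for what a later-stage bootloader actually does, here is a hedged sketch of a U-Boot sequence that loads a kernel and device tree from an SD card and boots them; the load addresses, partition and file names are board-specific placeholders:
```
# Typical U-Boot boot sequence (addresses, partition and filenames vary per board).
load mmc 0:1 ${kernel_addr_r} zImage
load mmc 0:1 ${fdt_addr_r} board.dtb
setenv bootargs console=ttyS0,115200 root=/dev/mmcblk0p2 rootwait
bootz ${kernel_addr_r} - ${fdt_addr_r}
```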
### Now Bring the Penguin
The bootloader will load the Linux kernel into memory and start it running. Linux is like an extended bootloader: it continues hardware setup and prepares to load higher-level software. The core of the kernel will set up and prepare memory for sharing between applications and hardware, prepare task management to allow multiple applications to run at the same time, initialize hardware components that were not configured by the bootloader or were configured incompletely and begin interfaces for human interaction. The kernel may not be configured to do this on its own, however. It may include an embedded lightweight filesystem, known as the initramfs or initrd, that can be created separately from the kernel to assist in hardware setup.
Another thing the kernel handles is downloading binary blobs, known generically as firmware, to hardware devices. Firmware is pre-compiled object files in formats specific to a particular device that is used to initialize hardware in places that the bootloader and kernel cannot access. Many such firmware objects are available from the Linux kernel source repositories, but many others are available only from specific hardware vendors. Examples of devices that often provide their own firmware include digital TV tuners or WiFi network cards.
Firmware may be loaded from the initramfs or may be loaded after the kernel starts the init process from the root filesystem. However, creating the kernel often will be the process where obtaining firmware will occur when creating a custom Linux distribution.
### Lightweight Core Platforms
The last thing the Linux kernel does is to attempt to run a specific program called the init process. This can be named init or linuxrc or the name of the program can be passed to the kernel by the bootloader. The init process is stored in a file system that the kernel can access. In the case of the initramfs, the file system is stored in memory (either by the kernel itself or by the bootloader placing it there). But the initramfs is not typically complete enough to run more complex applications. So another file system, known as the root file system, is required.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f3.png)
Figure 3. Buildroot Configuration Menu
The initramfs filesystem can be built using the Linux kernel itself, but more commonly, it is created using a project called [BusyBox][6]. BusyBox combines a collection of GNU utilities, such as grep or awk, into a single binary in order to reduce the size of the filesystem itself. BusyBox often is used to jump-start the root filesystem's creation.
But, BusyBox is purposely lightweight. It isn't intended to provide every tool that a target platform will need, and even those it does provide can be feature-reduced. BusyBox has a sister project known as [Buildroot][7], which can be used to get a complete root filesystem, providing a variety of libraries, utilities and scripting languages. Like Crosstool-NG and the Linux kernel, both BusyBox and Buildroot allow custom configuration using the kconfig menu system. More important, the Buildroot system handles dependencies automatically, so selection of a given utility will guarantee that any software it requires also will be built and installed in the root filesystem.
Buildroot can generate a root filesystem archive in a variety of formats. However, it is important to note that the filesystem is only archived. Individual utilities and libraries are not packaged in either Debian or RPM formats. Using Buildroot will generate a root filesystem image, but its contents are not managed packages. Despite this, Buildroot does provide support for both the opkg and rpm package managers. This means custom applications that will be installed on the root filesystem can be package-managed, even if the root filesystem itself is not.
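A minimal Buildroot session looks roughly like this; the defconfig name is only an example (`make list-defconfigs` shows what your tree provides):
```
make raspberrypi3_defconfig   # start from a board defconfig, or from scratch with menuconfig
make menuconfig               # enable packages, toolchain options and filesystem formats
make                          # build everything; results end up under output/images/
ls output/images/             # kernel, device tree, rootfs archive or SD card image
```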
### Cross-Compiling and Scripting
One of Buildroot's features is the ability to generate a staging tree. This directory contains libraries and utilities that can be used to cross-compile other applications. With a staging tree and the cross toolchain, it becomes possible to compile additional applications outside Buildroot on the host system instead of on the target platform. Using rpm or opkg, those applications then can be installed to the root filesystem on the target at runtime using package management software.
Most custom systems are built around the idea of building applications with scripting languages. If scripting is required on the target platform, a variety of choices are available from Buildroot, including Python, PHP, Lua and JavaScript via Node.js. Support also exists for applications requiring encryption using OpenSSL.
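As an illustrative sketch (the toolchain tuple and output paths are assumptions that depend on your Buildroot configuration), cross-compiling a single application against the staging tree might look like this:
```
# Cross-compile one application outside Buildroot using its toolchain and staging tree.
STAGING="$(pwd)/output/staging"
CC="$(pwd)/output/host/bin/arm-linux-gnueabihf-gcc"   # path and tuple depend on your config
"$CC" --sysroot="$STAGING" -o hello hello.c
file hello    # should report an ARM executable
```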
### What's Next
The Linux kernel and bootloaders are compiled like most applications. Their build systems are designed to build a specific bit of software. Crosstool-NG and Buildroot are metabuilds. A metabuild is a wrapper build system around a collection of software, each with their own build systems. Alternatives to these include [Yocto][8] and [OpenEmbedded][9]. The benefit of Buildroot is the ease with which it can be wrapped by an even higher-level metabuild to automate customized Linux distribution builds. Doing this opens the option of pointing Buildroot to project-specific cache repositories. Using cache repositories can speed development and offers snapshot builds without worrying about changes to upstream repositories.
An example implementation of a higher-level build system is [PiBox][10]. PiBox is a metabuild wrapped around all of the tools discussed in this article. Its purpose is to add a common GNU Make target construction around all the tools in order to produce a core platform on which additional software can be built and distributed. The PiBox Media Center and kiosk projects are implementations of application-layer software installed on top of the core platform to produce a purpose-built platform. The [Iron Man project][11] is intended to extend these applications for home automation, integrated with voice control and IoT management.
But PiBox is nothing without these core software tools and could never run without an in-depth understanding of a complete custom distribution build process. And, PiBox could not exist without the long-term dedication of the teams of developers for these projects who have made custom-distribution-building a task for the masses.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/custom-embedded-linux-distributions
Author: [Michael J. Hammel][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]:http://www.linuxjournal.com/user/1000879
[1]:https://www.kernel.org
[2]:http://crosstool-ng.github.io
[3]:https://elinux.org/Main_Page
[4]:https://elinux.org/Main_Page
[5]:https://www.denx.de/wiki/U-Boot
[6]:https://busybox.net
[7]:https://buildroot.org
[8]:https://www.yoctoproject.org
[9]:https://www.openembedded.org/wiki/Main_Page
[10]:https://www.piboxproject.com
[11]:http://redmine.graphics-muse.org/projects/ironman/wiki/Getting_Started


@ -0,0 +1,161 @@
Learn your tools: Navigating your Git History
============================================================
Starting a greenfield application every day is nearly impossible, especially in your daily job. In fact, most of us face (somewhat) legacy codebases on a daily basis, and regaining the context of why some feature or line of code exists in the codebase is very important. This is where `git`, the distributed version control system, is invaluable. Let's dive in and see how we can use our `git` history and easily navigate through it.
### Git history
First and foremost, what is `git` history? As the name says, it is the commit history of a `git` repo. It contains a bunch of commit messages, with their authors' names, the commit hashes and the dates of the commits. The easiest way to see the history of a `git` repo is the `git log` command.
Sidenote: For the purpose of this post, we will use the Ruby on Rails repo, the `master` branch. The reason behind this is that Rails has a very good `git` history, with nice commit messages, references and explanations behind every change. Given the size of the codebase, the age and the number of maintainers, it's certainly one of the best repositories that I have seen. Of course, I am not saying there are no other repositories built with good `git` practices, but this is one that has caught my eye.
So back to Rails repo. If you run `git log` in the Rails repo, you will see something like this:
```
commit 66ebbc4952f6cfb37d719f63036441ef98149418
Author: Arthur Neves <foo@bar.com>
Date:   Fri Jun 3 17:17:38 2016 -0400

    Dont re-define class SQLite3Adapter on test

    We were declaring in a few tests, which depending of the order load will
    cause an error, as the super class could change.

    see https://github.com/rails/rails/commit/ac1c4e141b20c1067af2c2703db6e1b463b985da#commitcomment-17731383

commit 755f6bf3d3d568bc0af2c636be2f6df16c651eb1
Merge: 4e85538 f7b850e
Author: Eileen M. Uchitelle <foo@bar.com>
Date:   Fri Jun 3 10:21:49 2016 -0400

    Merge pull request #25263 from abhishekjain16/doc_accessor_thread

    [skip ci] Fix grammar

commit f7b850ec9f6036802339e965c8ce74494f731b4a
Author: Abhishek Jain <foo@bar.com>
Date:   Fri Jun 3 16:49:21 2016 +0530

    [skip ci] Fix grammar

commit 4e85538dddf47877cacc65cea6c050e349af0405
Merge: 082a515 cf2158c
Author: Vijay Dev <foo@bar.com>
Date:   Fri Jun 3 14:00:47 2016 +0000

    Merge branch 'master' of github.com:rails/docrails

    Conflicts:
        guides/source/action_cable_overview.md

commit 082a5158251c6578714132e5c4f71bd39f462d71
Merge: 4bd11d4 3bd30d9
Author: Yves Senn <foo@bar.com>
Date:   Fri Jun 3 11:30:19 2016 +0200

    Merge pull request #25243 from sukesan1984/add_i18n_validation_test

    Add i18n_validation_test

commit 4bd11d46de892676830bca51d3040f29200abbfa
Merge: 99d8d45 e98caf8
Author: Arthur Nogueira Neves <foo@bar.com>
Date:   Thu Jun 2 22:55:52 2016 -0400

    Merge pull request #25258 from alexcameron89/master

    [skip ci] Make header bullets consistent in engines.md

commit e98caf81fef54746126d31076c6d346c48ae8e1b
Author: Alex Kitchens <foo@bar.com>
Date:   Thu Jun 2 21:26:53 2016 -0500

    [skip ci] Make header bullets consistent in engines.md
```
As you can see, `git log` shows the commit hash, the author with their email, and the date when the commit was created. Of course, `git` being super customisable, it allows you to customise the output format of the `git log` command. Let's say we want to see just the first line of each commit message; we could run `git log --oneline`, which will produce a more compact log:
```
66ebbc4 Dont re-define class SQLite3Adapter on test
755f6bf Merge pull request #25263 from abhishekjain16/doc_accessor_thread
f7b850e [skip ci] Fix grammar
4e85538 Merge branch 'master' of github.com:rails/docrails
082a515 Merge pull request #25243 from sukesan1984/add_i18n_validation_test
4bd11d4 Merge pull request #25258 from alexcameron89/master
e98caf8 [skip ci] Make header bullets consistent in engines.md
99d8d45 Merge pull request #25254 from kamipo/fix_debug_helper_test
818397c Merge pull request #25240 from matthewd/reloadable-channels
2c5a8ba Don't blank pad day of the month when formatting dates
14ff8e7 Fix debug helper test
```
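If `--oneline` is not quite the shape you want, `git log` also accepts a fully custom format; for example, using standard format placeholders:
```
# Abbreviated hash, short date, author and subject, one commit per line.
git log --pretty=format:'%h %ad %an  %s' --date=short
```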
To see all of the `git log` options, I recommend checking out the manpage of `git log`, available in your terminal via `man git-log` or `git help log`. A tip: if `git log` is a bit too sparse or complicated to use, or maybe you are just bored, I recommend checking out the various `git` GUIs and command-line tools. In the past I've used [GitX][1], which was very good, but since the command line feels like home to me, after trying [tig][2] I've never looked back.
### Finding Nemo
So now, since we know the bare minimum of the `git log` command, lets see how we can explore the history more effectively in our everyday work.
Let's say, hypothetically, that we suspect unexpected behaviour in the `String#classify` method and we want to find how and where it has been implemented.
One of the first commands that you can use to see where the method is defined is `git grep`. Simply said, this command prints out lines that match a certain pattern. Now, to find the definition of the method, it's pretty simple: we can grep for `def classify` and see what we get:
```
➜  git grep 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:  def classify
activesupport/lib/active_support/inflector/methods.rb:    def classify(table_name)
tools/profile:  def classify
```
Now, although we can already see where our method is created, we are not sure on which line it is. If we add the `-n` flag to our `git grep` command, `git` will provide the line numbers of the match:
```
➜  git grep -n 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:205:  def classify
activesupport/lib/active_support/inflector/methods.rb:186:    def classify(table_name)
tools/profile:112:  def classify
```
Much better, right? Having the context in mind, we can easily figure out that the method we are looking for lives in `activesupport/lib/active_support/core_ext/string/inflections.rb`, on line 205. The `classify` method, in all of its glory, looks like this:
```
# Creates a class name from a plural table name like Rails does for table names to models.
# Note that this returns a string and not a class. (To convert to an actual class
# follow +classify+ with +constantize+.)
#
#   'ham_and_eggs'.classify # => "HamAndEgg"
#   'posts'.classify        # => "Post"
def classify
  ActiveSupport::Inflector.classify(self)
end
```
Although the method we found is the one we usually call on `String`s, it invokes another method with the same name on `ActiveSupport::Inflector`. Having our `git grep` result available, we can easily navigate there, since the second line of the result is `activesupport/lib/active_support/inflector/methods.rb` on line 186. The method that we are looking for is:
```
# Creates a class name from a plural table name like Rails does for table
# names to models. Note that this returns a string and not a Class (To
# convert to an actual class follow +classify+ with #constantize).
#
#   classify('ham_and_eggs') # => "HamAndEgg"
#   classify('posts')        # => "Post"
#
# Singular names are not handled correctly:
#
#   classify('calculus')     # => "Calculus"
def classify(table_name)
  # strip out any leading schema name
  camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
end
```
Boom! Given the size of Rails, finding this should not take us more than 30 seconds with the help of `git grep`.
### So, what changed last?
Now, since we have the method available, we need to figure out what changes this file has gone through. Since we know the correct file name and line numbers, we can use `git blame`. This command shows what revision and author last modified each line of a file. Let's see the latest changes made to this file:
```
git blame activesupport/lib/active_support/inflector/methods.rb
```
Whoa! Although we get the last change of every line in the file, we are more interested in the specific method (lines 176 to 189). Let's add a flag to the `git blame` command that will show the blame of just those lines. Also, we will add the `-s` (suppress) option to the command, to skip the author names and the timestamp of the revision (commit) that changed each line:
```
git blame -L 176,189 -s activesupport/lib/active_support/inflector/methods.rb
9fe8e19a 176) # Creates a class name from a plural table name like Rails does for table
5ea3f284 177) # names to models. Note that this returns a string and not a Class (To
9fe8e19a 178) # convert to an actual class follow +classify+ with #constantize).
51cd6bb8 179) #
6d077205 180) #   classify('ham_and_eggs') # => "HamAndEgg"
9fe8e19a 181) #   classify('posts')        # => "Post"
51cd6bb8 182) #
51cd6bb8 183) # Singular names are not handled correctly:
5ea3f284 184) #
66d6e7be 185) #   classify('calculus')     # => "Calculus"
51cd6bb8 186) def classify(table_name)
51cd6bb8 187)   # strip out any leading schema name
5bb1d4d2 188)   camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
51cd6bb8 189) end
```
The output of the `git blame` command now shows all of the file lines and their respective revisions. Now, to see a specific revision, or in other words, what each of those revisions changed, we can use the `git show` command. When supplied a revision hash (like `66d6e7be`) as an argument, it will show you the full revision, with the author name, timestamp and the whole revision in its glory. Let's see what actually changed in the latest revision that touched line 188:
```
git show 5bb1d4d2
```
Whoa! Did you test that? If you didn't, it's an awesome [commit][3] by [Schneems][4] that made a very interesting performance optimization by using frozen strings, which makes sense in our current context. But, since we are on this hypothetical debugging session, this doesn't tell us much about our current problem. So, how can we see what changes our method under investigation has gone through?
### Searching the logs
Now, we are back to the `git` log. The question is, how can we see all the revisions that the `classify` method went through?
The `git log` command is quite powerful, because it has a rich list of options to apply to it. We can try to see what the `git` log has stored for this file, using the `-p` option, which means "show me the patch for this entry in the `git` log":
```
git log -p activesupport/lib/active_support/inflector/methods.rb
```
This will show us a big list of revisions, for every revision of this file. But, just like before, we are interested in the specific file lines. Let's modify the command a bit, to show us what we need:
```
git log -L 176,189:activesupport/lib/active_support/inflector/methods.rb
```
The `git log` command accepts the `-L` option, which takes the lines range and the filename as arguments. The format might be a bit weird for you, but it translates to:
```
git log -L <start-line>,<end-line>:<path-to-file>
```
When we run this command, we can see the list of revisions for these lines, which will lead us to the first revision that created the method:
```
commit 51xd6bb829c418c5fbf75de1dfbb177233b1b154
Author: Foo Bar <foo@bar.com>
Date:   Tue Jun 7 19:05:09 2011 -0700

    Refactor

diff --git a/activesupport/lib/active_support/inflector/methods.rb b/activesupport/lib/active_support/inflector/methods.rb
--- a/activesupport/lib/active_support/inflector/methods.rb
+++ b/activesupport/lib/active_support/inflector/methods.rb
@@ -58,0 +135,14 @@
+    # Create a class name from a plural table name like Rails does for table names to models.
+    # Note that this returns a string and not a Class. (To convert to an actual class
+    # follow +classify+ with +constantize+.)
+    #
+    # Examples:
+    #   "egg_and_hams".classify # => "EggAndHam"
+    #   "posts".classify        # => "Post"
+    #
+    # Singular names are not handled correctly:
+    #   "business".classify    # => "Busines"
+    def classify(table_name)
+      # strip out any leading schema name
+      camelize(singularize(table_name.to_s.sub(/.*\./, '')))
+    end
```
Now, look at that: it's a commit from 2011. Practically, `git` allows us to travel back in time. This is a very good example of why a proper commit message is paramount to regaining context, because from this commit message we cannot really tell how the method came to be. But, on the flip side, you should **never ever** get frustrated about it, because you are looking at someone who basically gives away their time and energy for free, doing open source work.
Coming back from that tangent, we are not sure how the initial implementation of the `classify` method came to be, given that the first commit we found is just a refactor. Now, if you are thinking something along the lines of "but maybe, just maybe, the method was not in the line range 176 to 189, and we should look more broadly in the file", you are very correct. The revision that we saw said "Refactor" in its commit message, which means that the method was actually already there, but only after that refactor did it start to exist in that line range.
So, how can we confirm this? Well, believe it or not, `git` comes to the rescue again. The `git log` command accepts the `-S` option, which looks for the code change (additions or deletions) for the specified string as an argument to the command. This means that, if we call `git log -S classify`, we can see all of the commits that changed a line that contains that string.
If you call this command in the Rails repo, you will first see `git` slowing down a bit. But when you realise that `git` actually parses all of the revisions in the repo to match the string, it's actually super fast. Again, the power of `git` at your fingertips. So, to find the first version of the `classify` method, we can run:
```
git log -S 'def classify'
```
This will return all of the revisions where this method has been introduced or changed. If you were following along, the last commit in the log that you will see is:
```
commit db045dbbf60b53dbe013ef25554fd013baf88134
Author: David Heinemeier Hansson <foo@bar.com>
Date:   Wed Nov 24 01:04:44 2004 +0000

    Initial

    git-svn-id: http://svn-commit.rubyonrails.org/rails/trunk@4 5ecf4fe2-1ee6-0310-87b1-e25e094e27de
```
How cool is that? It's the initial commit to Rails, made in an `svn` repo by DHH! This means that `classify` has been around since the beginning of (Rails) time. Now, to see the commit with all of its changes, we can run:
```
git show db045dbbf60b53dbe013ef25554fd013baf88134
```
Great, we got to the bottom of it. Now, by using the output from `git log -S 'def classify'` you can track the changes that have happened to this method, combined with the power of the `git log -L` command.
### Until next time
Sure, we didn't really fix any bugs, because we were just trying out some `git` commands and following the evolution of the `classify` method. But, nevertheless, `git` is a very powerful tool that we all must learn to use and embrace. I hope this article gave you a little bit more knowledge of how useful `git` is.
What are your favourite (or most effective) ways of navigating through the `git` history?
--------------------------------------------------------------------------------
About the author:
Backend engineer, interested in Ruby, Go, microservices, building resilient architectures and solving challenges at scale. I coach at Rails Girls in Amsterdam, maintain a list of small gems and often contribute to Open Source.
This is where I write about software development, programming languages and everything else that interests me.
------
via: https://ieftimov.com/learn-your-tools-navigating-git-history
Author: [Ilija Eftimov][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]:https://ieftimov.com/
[1]:http://gitx.frim.nl/
[2]:https://github.com/jonas/tig
[3]:https://github.com/rails/rails/commit/5bb1d4d288d019e276335465d0389fd2f5246bfd
[4]:https://twitter.com/schneems


@ -0,0 +1,139 @@
Ultimate guide to securing SSH sessions
======
Hi Linux fanatics! In this tutorial we will discuss some ways to make our ssh server more secure. OpenSSH is used by default to work on servers, as physical access to servers is usually very limited. We use ssh to copy or back up files and folders, to execute commands remotely, and so on. But these ssh connections might not be as secure as we believe, and we must change some of the default settings to make them more secure.
Here are the steps needed to secure our ssh sessions.
### Use complex username & password
This is the first problem that needs to be addressed. I have known users who use '12345' as their password; it is as if they are inviting hackers in. You should always have a complex password.
It should have at least 8 characters, including numbers, lower-case and upper-case letters, and special characters. A good example would be **_vXdrf23#$wd_**: it is not a word, so a dictionary attack will be useless, and it has upper-case and lower-case characters, numbers and special characters.
### Limit user logins
Not all users in an organization are required to have access to ssh, so we should make changes to our configuration file to limit user logins. Let's say only Bob & Susan are authorized to have access to ssh, so open your configuration file
```
$ vi /etc/ssh/sshd_config
```
& add the allowed users to the bottom of the file
```
AllowUsers bob susan
```
Save the file & restart the service. Now only Bob & Susan will have access to ssh; others won't be able to access it.
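For reference, a config check and service restart after editing sshd_config might look like this (the service name varies by distribution, e.g. sshd on CentOS/RHEL and ssh on Debian/Ubuntu):
```
sudo sshd -t                   # test the configuration for syntax errors first
sudo systemctl restart sshd    # use "ssh" instead of "sshd" on Debian/Ubuntu
```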
### Configure Idle logout time
Once you are logged into an ssh session, there is a default time before the session logs out on its own. By default the idle logout time is 60 minutes, which in my opinion is way too much. Consider this: you logged into a session, executed some commands & then went out to get a cup of coffee, but you forgot to log out of ssh. Just think what could be done in 60 seconds, let alone in 60 minutes.
So, it's wise to reduce the idle logout time to something around 5 minutes, & it can be done in the config file. Open '/etc/ssh/sshd_config' & change the values
```
ClientAliveInterval 300
ClientAliveCountMax 0
```
The values are in seconds, so configure them accordingly.
### Disable root logins
As we know, root has access to anything & everything on the server, so we must disable root access through ssh sessions. Even if root is needed to complete a task, we can escalate the privileges of a normal user instead.
To disable root access, open your configuration file & change the following parameter
```
PermitRootLogin no
```
This will disable root access to ssh sessions.
### Enable Protocol 2
SSH protocol 1 had man-in-the-middle attack issues as well as other security issues; all of these were addressed in protocol 2. So protocol 1 must not be used at any cost. To change the protocol, open your sshd_config file & change the following parameter
```
Protocol 2
```
### Enable a warning screen
It is a good idea to show a warning screen about the misuse of ssh just before a user logs into a session. To create a warning screen, create a file named **warning** in the **/etc/** folder (or any other folder) & write something like "We continuously monitor all sessions. Don't misuse your access or you will be prosecuted", or whatever else you wish to warn about. You can also consult your legal team about this warning to make it more official.
After this file is created, open the sshd_config file & enter the following parameter, pointing it at the file you created
```
Banner /etc/warning
```
Now your warning message will be displayed each time someone tries to access the session.
### Use non-standard ssh port
By default, ssh uses port 22 & all the brute-force scripts out there are written for port 22 only. So to make your sessions even more secure, use a non-standard port like 15000. But before selecting a port, make sure it is not being used by some other service.
To change the port, open sshd_config & change the following parameter
```
Port 15000
```
Save & restart the service, and you can access ssh only on this new port. To start a session with a custom port, use the following command
```
$ ssh -p 15000 {server IP}
```
**Note:** If you are using a firewall, open the port on your firewall. We must also change the SELinux settings when using a custom port for ssh. Run the following command to update the SELinux label
```
$ semanage port -a -t ssh_port_t -p tcp 15000
```
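If the server runs firewalld, opening the custom port could look like the following sketch; other firewalls (iptables, ufw, etc.) need their own equivalent rules:
```
sudo firewall-cmd --permanent --add-port=15000/tcp   # allow the non-standard ssh port
sudo firewall-cmd --reload                           # apply the new rule
```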
### Limit IP access
If your server has several IP addresses and should only accept ssh connections on some of them, you can tell ssh to listen on those addresses only. Open the sshd_config file & enter the following along with your custom port
```
Port 15000
ListenAddress 192.168.1.100
ListenAddress 192.168.1.115
```
Now the ssh service will listen only on the listed addresses, on the custom port 15000.
### Disable empty passwords
As already mentioned, you should only use complex usernames & passwords, so allowing remote login with an empty password is a complete no-no. To disable empty passwords, open the sshd_config file & edit the following parameter
```
PermitEmptyPasswords no
```
### Use public/private key based authentication
Using public/private key based authentication has its advantages: you no longer need to enter a password when starting a session (unless you are using a passphrase to decrypt the key), & no one can get access to your server unless they have the right authentication key. The process to set up public/private key based authentication is discussed in [**this tutorial here**][1].
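As a minimal sketch of the key-based setup (the key type, port and hostname below are examples; see the linked tutorial for the full process):
```
ssh-keygen -t ed25519                        # generate a key pair on the client
ssh-copy-id -p 15000 bob@server.example.com  # copy the public key to the server
# once key login works, password logins can be disabled in sshd_config:
#   PasswordAuthentication no
```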
So, this completes our tutorial on securing your ssh server. If you have any doubts or issues, please leave a message in the comment box below.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/
Author: [SHUSAIN][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/
[2]:https://www.facebook.com/techlablinux/
[3]:https://twitter.com/LinuxTechLab
[4]:https://plus.google.com/+linuxtechlab
[5]:http://linuxtechlab.com/contact-us-2/


@ -1,125 +0,0 @@
How To Fully Update And Upgrade Offline Debian-based Systems
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Upgrade-Offline-Debian-based-Systems-2-720x340.png)
A while ago we showed you how to install software on any [**offline Ubuntu**][1] system and any [**offline Arch Linux**][2] system. Today, we will see how to fully update and upgrade offline Debian-based systems. Unlike the previous methods, we do not update/upgrade a single package, but the whole system. This method can be helpful when you don't have an active Internet connection, or only a slow one.
### Fully Update And Upgrade Offline Debian-based Systems
Let us say you have a system (Windows or Linux) with a high-speed Internet connection at work and a Debian or Debian-derived system with no Internet connection, or only a very slow one (like dial-up), at home. You want to upgrade the offline home system. What would you do? Buy a high-speed Internet connection? Not necessary! You can still update or upgrade your offline system without connecting it to the Internet. This is where **Apt-Offline** comes in handy.
As the name says, apt-offline is an offline APT package manager for APT-based systems like Debian and Debian-derived distributions such as Ubuntu and Linux Mint. Using apt-offline, we can fully update/upgrade our Debian box without needing to connect it to the Internet. It is a cross-platform tool written in Python and has both CLI and graphical interfaces.
#### Requirements
* An Internet connected system (Windows or Linux). We call it online system for the sake of easy understanding throughout this guide.
* An Offline system (Debian and Debian derived system). We call it offline system.
* USB drive or External Hard drive with sufficient space to carry all updated packages.
#### Installation
Apt-Offline is available in the default repositories of Debian and its derivatives. If your online system is running Debian, Ubuntu, Linux Mint, or another DEB-based system, you can install Apt-Offline using the command:
```
sudo apt-get install apt-offline
```
If your online system runs any distro other than Debian and its derivatives, git clone the Apt-Offline repository:
```
git clone https://github.com/rickysarraf/apt-offline.git
```
Go to the directory and run it from there.
```
cd apt-offline/
```
```
sudo ./apt-offline
```
#### Steps to do in Offline system (Non-Internet connected system)
Go to your offline system and create a directory where you want to store the signature file:
```
mkdir ~/tmp
```
```
cd ~/tmp/
```
You can use any directory of your choice. Then, run the following command to generate the signature file:
```
sudo apt-offline set apt-offline.sig
```
Sample output would be:
```
Generating database of files that are needed for an update.
Generating database of file that are needed for operation upgrade
```
By default, apt-offline will generate a database of the files that are needed to update and upgrade. You can use the `--update` or `--upgrade` option to create the database for just one of these.
Copy the entire **tmp** folder to a USB drive or external drive and go to your online system (the Internet-enabled system).
#### Steps to do in Online system
Plug in your USB drive and go to the temp directory:
```
cd tmp/
```
Then, run the following command:
```
sudo apt-offline get apt-offline.sig --threads 5 --bundle apt-offline-bundle.zip
```
Here, "-threads 5" represents the number of APT repositories. You can increase the number if you want to download packages from more repositories. And, "-bundle apt-offline-bundle.zip" option represents all packages will be bundled in a single archive file called **apt-offline-bundle.zip**. This archive file will be saved in your current working directory.
The above command will download data based on the signature file generated earlier in the offline system.
![apt-offline downloading the update bundle on the online system][4]
This will take several minutes depending upon the Internet connection speed. Please note that apt-offline is cross platform, so you can use it to download packages on any OS.
Once completed, copy the **tmp** folder to the USB or external drive and return to the offline system. Make sure your USB device has enough free space to store all the downloaded files, because all packages are now in the tmp folder.
#### Steps to do in offline system
Plug in the device in your offline system and go to the **tmp** directory where you have downloaded all packages earlier.
```
cd tmp
```
Then, run the following command to install all the downloaded packages.
```
sudo apt-offline install apt-offline-bundle.zip
```
This will update the APT database, so APT will find all required packages in the APT cache.
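With the bundle installed, the packages sit in APT's local cache, so the usual upgrade command should now run without a network connection (a sketch of the final step):
```
sudo apt-get upgrade        # apply the updates entirely from the local APT cache
# or, if new dependencies need to be installed as well:
sudo apt-get dist-upgrade
```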
**Note:** If both online and offline systems are in the same local network, you can transfer the **tmp** folder to the offline system using "scp" or any other file transfer applications. If both systems are in different places, copy the folder using USB devices.
And, that's all for now, folks. I hope this guide was useful for you. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/fully-update-upgrade-offline-debian-based-systems/
Author: [SK][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/install-softwares-offline-ubuntu-16-04/
[2]:https://www.ostechnix.com/install-packages-offline-arch-linux/
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/apt-offline.png


@ -1,48 +0,0 @@
A tour of containerd 1.0
======
![containerd][1]
We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. Containerd is used by Docker, Kubernetes CRI, and a few other projects but this is a post for people who may not know what containerd actually does within these platforms. I would like to do more posts on the feature set and design of containerd in the future but for now, we will start with the basics.
I think the container ecosystem can be confusing at times, especially with the terminology that we use. What's this? A runtime. And this? A runtime… containerd (pronounced "container-dee"), as the name implies, and not "contain nerd" as some would like to troll me with, is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and orchestration systems like Kubernetes.
So what do you actually get using containerd? You get push and pull functionality as well as image management. You get container lifecycle APIs to create, execute, and manage containers and their tasks. An entire API dedicated to snapshot management and an openly governed project to depend on. Basically everything that you need to build a container platform without having to deal with the underlying OS details. I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.
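To get a feel for those APIs, containerd ships a small test client called ctr; a rough sketch of pulling an image and running a task with it might look like the following (the image reference and container ID are arbitrary examples, and flags can differ between releases):
```
ctr images pull docker.io/library/alpine:latest          # pull and store an image
ctr run -t --rm docker.io/library/alpine:latest demo sh  # create a container and start a task
ctr tasks ls                                             # list running tasks
```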
![containerd][2]
Since there is no such thing as Linux containers in the kernel, containers are various kernel features tied together, when you are building a large platform or distributed system you want an abstraction layer between your management code and the syscalls and duct tape of features to run a container. That is where containerd lives. It provides a client a layer of stable types that platforms can build on top of without ever having to drop down to the kernel level. It's so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount(). Balanced with the flexibility to directly interact with the runtime or host-machine, these objects avoid the sacrifice of capabilities that typically come with higher-level abstractions. The result is that easy tasks are simple to complete and hard tasks are possible.
![containerd][3]
Containerd was designed to be used by Docker and Kubernetes as well as any other container system that wants to abstract away syscalls or OS specific functionality to run containers on Linux, Windows, Solaris, or other Operating Systems. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don't. Realistically this is impossible but at least that is what we try for. While networking is out of scope for containerd, what it doesn't do lets higher level systems have full control. The reason for this is, when you are building a distributed system, networking is a very central aspect. With SDN and service discovery today, networking is way more platform specific than abstracting away netlink calls on linux. Most of the new overlay networks are route based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, etc all have to be notified of these changes as well. It would be a large chunk of code to be able to support all the different network interfaces, hooks, and integration points to support this if we added networking to containerd. What we did instead is opted for a robust events system inside containerd so that multiple consumers can subscribe to the events that they care about. We also expose a [Task API][4] that lets users create a running task, have the ability to add interfaces to the network namespace of the container, and then start the container's process without the need for complex hooks in various points of a container's lifecycle.
Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats. You have a complete content addressed storage system across the containerd API that works not only for images but also metadata, checkpoints, and arbitrary data attached to containers.
We also took the time to [rethink how "graphdrivers" work][5]. These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/lvm act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like snapshotters instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper because they don't have a strict parent/child relationship. This helped us build out [a smaller interface for the snapshotters][6] while still fulfilling the requirements needed by things [like a builder][7], as well as reduce the amount of code needed, making it much easier to maintain in the long run.
![][8]
You can find more details about the architecture of containerd in [Stephen Day's Dec 7th 2017 KubeCon SIG Node presentation][9].
In addition to the technical and design changes in the 1.0 codebase, we also switched the containerd [governance model from the long standing BDFL to a Technical Steering Committee][10] giving the community an independent third party resource to rely on.
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/12/containerd-ga-features-2/
作者:[Michael Crosby][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/michael/
[1]:https://i0.wp.com/blog.docker.com/wp-content/uploads/950cf948-7c08-4df6-afd9-cc9bc417cabe-6.jpg?resize=400%2C120&ssl=1
[2]:https://i1.wp.com/blog.docker.com/wp-content/uploads/4a7666e4-ebdb-4a40-b61a-26ac7c3f663e-4.jpg?resize=906%2C470&ssl=1 (containerd)
[3]:https://i1.wp.com/blog.docker.com/wp-content/uploads/2a73a4d8-cd40-4187-851f-6104ae3c12ba-1.jpg?resize=1140%2C680&ssl=1
[4]:https://github.com/containerd/containerd/blob/master/api/services/tasks/v1/tasks.proto
[5]:https://blog.mobyproject.org/where-are-containerds-graph-drivers-145fc9b7255
[6]:https://github.com/containerd/containerd/blob/master/api/services/snapshots/v1/snapshots.proto
[7]:https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
[8]:https://i1.wp.com/blog.docker.com/wp-content/uploads/d0fb5eb9-c561-415d-8d57-e74442a879a2-1.jpg?resize=1140%2C556&ssl=1
[9]:https://speakerdeck.com/stevvooe/whats-happening-with-containerd-and-the-cri
[10]:https://github.com/containerd/containerd/pull/1748

View File

@ -1,197 +0,0 @@
How to Create a Docker Image
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/container-image_0.jpg?itok=G_Gz80R9)
In the previous [article][1], we learned about how to get started with Docker on Linux, macOS, and Windows. In this article, we will get a basic understanding of creating Docker images. There are prebuilt images available on DockerHub that you can use for your own project, and you can publish your own image there.
We are going to use prebuilt images to get the base Linux subsystem, as it's a lot of work to build one from scratch. You can get Alpine (the official distro used by Docker Editions), Ubuntu, BusyBox, or scratch. In this example, I will use Ubuntu.
Before we start building our images, let's "containerize" them! By this I just mean creating directories for all of your Docker images so that you can maintain different projects and stages isolated from each other.
```
$ mkdir dockerprojects
cd dockerprojects
```
Now create a Dockerfile inside the dockerprojects directory using your favorite text editor; I prefer nano, which is also easy for new users.
```
$ nano Dockerfile
```
And add this line:
```
FROM ubuntu
```
![m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7][2]
Save it with Ctrl+X, then Y (and Enter to confirm the file name).
Now create your new image and provide it with a name (run these commands within the same directory):
```
$ docker build -t dockp .
```
(Note the dot at the end of the command.) This should build successfully, so you'll see:
```
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM ubuntu
---> 2a4cca5ac898
Successfully built 2a4cca5ac898
Successfully tagged dockp:latest
```
It's time to run and test your image:
```
$ docker run -it ubuntu
```
You should see root prompt:
```
root@c06fcd6af0e8:/#
```
This means you are literally running bare minimal Ubuntu inside Linux, Windows, or macOS. You can run all native Ubuntu commands and CLI utilities.
![vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3][3]
Let's check all the Docker images you have in your directory:
```
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
dockp latest 2a4cca5ac898 1 hour ago 111MB
ubuntu latest 2a4cca5ac898 1 hour ago 111MB
hello-world latest f2a91732366c 8 weeks ago 1.85kB
```
You can see all three images: dockp, ubuntu, and hello-world, which I created a few weeks ago when working on the previous articles of this series. Building a whole LAMP stack can be challenging, so we are going to create a simple Apache server image with a Dockerfile.
A Dockerfile is basically a set of instructions to install all the needed packages, configure things, and copy files. In this case, it's Apache.
You may also want to create an account on DockerHub and log into your account before building images, in case you are pulling something from DockerHub. To log into DockerHub from the command line, just run:
```
$ docker login
```
Enter your username and password and you are logged in.
Next, create a directory for Apache inside the dockerprojects directory:
```
$ mkdir apache
```
Create a Dockerfile inside the apache folder:
```
$ nano Dockerfile
```
And paste these lines:
```
FROM ubuntu
MAINTAINER Kimbro Staken version: 0.1
RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```
Then, build the image:
```
$ docker build -t apache .
```
(Note the dot after a space at the end.)
It will take some time, then you should see successful build like this:
```
Successfully built e7083fd898c7
Successfully tagged apache:latest
Swapnil:apache swapnil$
```
Now let's run the server:
```
$ docker run -d apache
a189a4db0f7c245dd6c934ef7164f3ddde09e1f3018b5b90350df8be85c8dc98
```
Eureka. Your container image is running. Check all the running containers:
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
a189a4db0f7 apache "/usr/sbin/apache2ctl" 10 seconds ago
```
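One practical note the walkthrough skips: the container exposes port 80 only inside Docker's network, so to reach Apache from a browser on the host you would normally publish the port when starting the container. A minimal sketch, assuming host port 8080 is free:

```
$ docker run -d -p 8080:80 apache
$ curl http://localhost:8080
```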
You can kill the container with the docker kill command:
```
$ docker kill a189a4db0f7
```
So, you see the "image" itself is persistent and stays on your machine, but containers run and go away. Now you can create as many images as you want, and spin up and nuke as many containers as you need from those images.
That's how to create an image and run containers.
To learn more, you can open your web browser and check out the documentation about how to build more complicated Docker images like the whole LAMP stack. Here is a [Dockerfile][4] for you to play with. In the next article, I'll show how to push images to DockerHub.
Learn more about Linux through the free ["Introduction to Linux"][5] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-create-docker-image
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
[2]:https://lh6.googleusercontent.com/m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7q3GYRB9f1-lvMYLE5f3GBpzIk-ev5VlcB0FHYSxn6NNQjxY4jJGqcgdFWaeQ-027qX_g-SVtbCCMybJeD6QIXjzM2ga8M4l4
[3]:https://lh3.googleusercontent.com/vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3uFvj2-WaI1DfOfec9ZXuN7XUNObQ2SCc4Nbiqp-CM7ozUcQmtuzmOdtUHTF4Jq8YxkC49o2k7y5snZqTXsueITZyaLiHq8bT
[4]:https://github.com/fauria/docker-lamp/blob/master/Dockerfile
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,85 +0,0 @@
translating---geekpi
4 cool new projects to try in COPR for January
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Elisa
[Elisa][2] is a minimal music player. It lets you browse music by albums, artists or tracks. It automatically detects all playable music in your ~/Music directory, so it requires no setup at all - nor does it offer any. Currently, Elisa focuses on being a simple music player, so it offers no tools for managing your music collection.
![][3]
#### Installation instructions
The repo currently provides Elisa for Fedora 26, 27 and Rawhide. To install Elisa, use these commands:
```
sudo dnf copr enable eclipseo/elisa
sudo dnf install elisa
```
### Bing Wallpapers
[Bing Wallpapers][4] is a simple program that downloads Bing's wallpaper of the day and sets it as a desktop wallpaper or a lock screen image. The program can rotate through the pictures in its directory at set intervals, as well as delete old pictures after a set amount of time.
#### Installation instructions
The repo currently provides Bing Wallpapers for Fedora 25, 26, 27 and Rawhide. To install Bing Wallpapers, use these commands:
```
sudo dnf copr enable julekgwa/Bingwallpapers
sudo dnf install bingwallpapers
```
### Polybar
[Polybar][5] is a tool for creating status bars. It has a lot of customization options as well as built-in functionality to display information about commonly used services, such as systray icons, window title, workspace and desktop panel for [bspwm][6], [i3][7], and more. You can also configure your own modules for your status bar. See [Polybar's wiki][8] for more information about usage and configuration.
#### Installation instructions
The repo currently provides Polybar for Fedora 27. To install Polybar, use these commands:
```
sudo dnf copr enable tomwishaupt/polybar
sudo dnf install polybar
```
### Netdata
[Netdata][9] is a distributed monitoring system. It can run on all your systems including PCs, servers, containers and IoT devices, from which it collects metrics in real time. All the information can then be accessed using netdata's web dashboard. Additionally, Netdata provides pre-configured alarms and notifications for detecting performance issues, as well as templates for creating your own alarms.
![][10]
#### Installation instructions
The repo currently provides netdata for EPEL 7, Fedora 27 and Rawhide. To install netdata, use these commands:
```
sudo dnf copr enable recteurlp/netdata
sudo dnf install netdata
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january/
作者:[Dominik Turecek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:https://community.kde.org/Elisa
[3]:https://fedoramagazine.org/wp-content/uploads/2018/01/elisa.png
[4]:http://bingwallpapers.lekgoara.com/
[5]:https://github.com/jaagr/polybar
[6]:https://github.com/baskerville/bspwm
[7]:https://i3wm.org/
[8]:https://github.com/jaagr/polybar/wiki
[9]:http://my-netdata.io/
[10]:https://fedoramagazine.org/wp-content/uploads/2018/01/netdata.png

View File

@ -0,0 +1,171 @@
Your instant Kubernetes cluster
============================================================
This is a condensed and updated version of my previous tutorial [Kubernetes in 10 minutes][10]. I've removed just about everything I can so this guide still makes sense. Use it when you want to create a cluster on the cloud or on-premises as fast as possible.
### 1.0 Pick a host
We will be using Ubuntu 16.04 for this guide so that you can copy/paste all the instructions. Here are several environments where I've tested this guide. Just pick where you want to run your hosts.
* [DigitalOcean][1] - developer cloud
* [Civo][2] - UK developer cloud
* [Packet][3] - bare metal cloud
* 2x Dell Intel i7 boxes - at home
> Civo is a relatively new developer cloud and one thing that I really liked was how quickly they can bring up hosts - in about 25 seconds. I'm based in the UK so I also get very low latency.
### 1.1 Provision the machines
You can get away with a single host for testing but I'd recommend at least three so we have a single master and two worker nodes.
Here are some other guidelines:
* Pick dual-core hosts with ideally at least 2GB RAM
* If you can pick a custom username when provisioning the host then do that rather than root. For example Civo offers an option of `ubuntu`, `civo` or `root`.
Now run through the following steps on each machine. It should take you less than 5-10 minutes. If that's too slow for you then you can use my utility script [kept in a Gist][11]:
```
$ curl -sL https://gist.githubusercontent.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c/raw/23fc4cd13910eac646b13c4f8812bab3eeebab4c/configure.sh | sh
```
### 1.2 Login and install Docker
Install Docker from the Ubuntu apt repository. This will be an older version of Docker but as Kubernetes is tested with old versions of Docker it will work in our favour.
```
$ sudo apt-get update \
&& sudo apt-get install -qy docker.io
```
### 1.3 Disable the swap file
This is now a mandatory step for Kubernetes. The easiest way to do this is to edit `/etc/fstab` and to comment out the line referring to swap.
To apply the change without a reboot, type in `sudo swapoff -a`.
> Disabling swap memory may appear like a strange requirement at first. If you are curious about this step then [read more here][4].
### 1.4 Install Kubernetes packages
```
$ sudo apt-get update \
&& sudo apt-get install -y apt-transport-https \
&& curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
| sudo tee -a /etc/apt/sources.list.d/kubernetes.list \
&& sudo apt-get update
$ sudo apt-get update \
&& sudo apt-get install -y \
kubelet \
kubeadm \
kubernetes-cni
```
### 1.5 Create the cluster
At this point we create the cluster by initiating the master with `kubeadm`. Only do this on the master node.
> Despite any warnings I have been assured by [Weaveworks][5] and Lucas (the maintainer) that `kubeadm` is suitable for production use.
```
$ sudo kubeadm init
```
If you missed a step or there's a problem then `kubeadm` will let you know at this point.
Take a copy of the Kube config:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Make sure you note down the join token command i.e.
```
$ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256:<hash>
```
### 2.0 Install networking
Many networking providers are available for Kubernetes, but none are included by default, so let's use Weave Net from [Weaveworks][12] which is one of the most popular options in the Kubernetes community. It tends to work out of the box without additional configuration.
```
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
If you have private networking enabled on your host then you may need to alter the private subnet that Weave Net uses for allocating IP addresses to Pods (containers). Here's an example of how to do that:
```
$ curl -SL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=172.16.6.64/27" \
| kubectl apply -f -
```
> Weave also have a very cool visualisation tool called Weave Cloud. It's free and will show you the path traffic is taking between your Pods. [See here for an example with the OpenFaaS project][6].
### 2.2 Join the worker nodes to the cluster
Now you can switch to each of your workers and use the `kubeadm join` command from step 1.5. Once you run that, log out of the workers.
### 3.0 Profit
That's it - we're done. You have a cluster up and running and can deploy your applications. If you need to set up a dashboard UI then consult the [Kubernetes documentation][13].
```
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
openfaas1 Ready master 20m v1.9.2
openfaas2 Ready <none> 19m v1.9.2
openfaas3 Ready <none> 19m v1.9.2
```
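If you want a quick smoke test before deploying anything real, something like the following works (illustrative only; on Kubernetes 1.9 `kubectl run` creates a Deployment named `nginx`, while newer releases create a bare Pod instead):

```
$ kubectl run nginx --image=nginx --port=80
$ kubectl get pods -o wide
$ kubectl delete deployment nginx
```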
If you want to see me running through creating a cluster step by step and showing how `kubectl` works, then check out my video below and make sure you subscribe.
You can also get an "instant" Kubernetes cluster on your Mac for development using Minikube or Docker for Mac Edge edition. [Read my review and first impressions here][14].
--------------------------------------------------------------------------------
via: https://blog.alexellis.io/your-instant-kubernetes-cluster/
作者:[Alex Ellis ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.alexellis.io/author/alex/
[1]:https://www.digitalocean.com/
[2]:https://www.civo.com/
[3]:https://packet.net/
[4]:https://github.com/kubernetes/kubernetes/issues/53533
[5]:https://weave.works/
[6]:https://www.weave.works/blog/openfaas-gke
[7]:https://blog.alexellis.io/tag/kubernetes/
[8]:https://blog.alexellis.io/tag/k8s/
[9]:https://blog.alexellis.io/tag/cloud-native/
[10]:https://www.youtube.com/watch?v=6xJwQgDnMFE
[11]:https://gist.github.com/alexellis/e8bbec45c75ea38da5547746c0ca4b0c
[12]:https://weave.works/
[13]:https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
[14]:https://blog.alexellis.io/docker-for-mac-with-kubernetes/
[15]:https://blog.alexellis.io/your-instant-kubernetes-cluster/#

View File

@ -1,117 +0,0 @@
translated by cyleft
Create your own personal Cloud: Install OwnCloud
======
Cloud is what everyone is talking about. We have a number of major players in the market that are offering cloud storage as well as other cloud based services. But we can also create a personal cloud for ourselves.
In this tutorial we are going to discuss how to create a personal cloud using OwnCloud. OwnCloud is a web application that we can install on our Linux machines to store & serve data. It can be used for sharing calendars, contacts and bookmarks, as well as for personal audio/video streaming, etc.
For this tutorial, we will be using CentOS 7 machine but this tutorial can also be used to install ownCloud on other Linux distributions as well. Let's start with pre-requisites for installing ownCloud,
**(Recommended Read:[How to use Apache as Reverse Proxy on CentOS & RHEL][1])**
**(Also Read:[Real Time Linux server monitoring with GLANCES monitoring tool][2])**
### Pre-requisites
* We need to have LAMP stack configured on our machines. Please go through our articles '[Easiest guide for setting up LAMP server for CentOS/RHEL][3]' & '[Create LAMP stack on Ubuntu][4]'.
* We will also need the following php packages installed on our machines: 'php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo'. Install them using the package manager.
```
$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo
```
### Installation
To install owncloud, we will now download the ownCloud package on our server. Use the following command to download the latest package (10.0.4-1) from ownCloud official website,
```
$ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2
```
Extract it using the following command,
```
$ tar -xvf owncloud-10.0.4.tar.bz2
```
Now move all the contents of the extracted folder to '/var/www/html'
```
$ mv owncloud/* /var/www/html
```
The next thing we need to do is make a change in the Apache configuration file 'httpd.conf',
```
$ sudo vim /etc/httpd/conf/httpd.conf
```
& change the following option,
```
AllowOverride All
```
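For context, on a stock CentOS 7 installation this directive sits inside the `<Directory "/var/www/html">` block of httpd.conf; after the change, that block would look roughly like this (a sketch, your defaults may differ slightly):

```
<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```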
Now save the file & change the permissions for owncloud folder,
```
$ sudo chown -R apache:apache /var/www/html/
$ sudo chmod 777 /var/www/html/config/
```
Now restart the apache service to implement the changes made,
```
$ sudo systemctl restart httpd
```
Now we need to create a database on mariadb to store all the data from owncloud. Create a database & a db user with the following commands,
```
$ mysql -u root -p
MariaDB [(none)] > create database owncloud;
MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud';
MariaDB [(none)] > flush privileges;
MariaDB [(none)] > exit
```
The configuration on the server side is now complete, and we can access ownCloud from a web browser. Open a browser of your choosing & enter the IP address of the server, which in our case is 10.20.30.100,
![install owncloud][7]
As soon as the URL loads, we will be presented with the above mentioned page. Here we will create our admin user & will also provide our database information. Once all the information has been provided, click on 'Finish setup'.
We will then be redirected to the login page, where we need to enter the credentials we created in the previous step,
![install owncloud][9]
Upon successful authentication, we will enter into the ownCloud dashboard,
![install owncloud][11]
We can then use mobile apps, or upload our data using the web interface. We now have our own personal cloud ready. This concludes our tutorial on how to install ownCloud and create your own cloud. Please do leave your questions or suggestions in the comment box below.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/
作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/
[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/
[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/
[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647
[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647
[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541
[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541
[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474
[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474

View File

@ -0,0 +1,63 @@
How programmers learn to code
============================================================
[![How programmers learn to code](https://mybroadband.co.za/news/wp-content/uploads/2016/01/Programmer-working-computer-code.jpg)][8]
HackerRank recently published the results of its 2018 Developer Skills Report, in which it asked programmers when they started coding.
39,441 professional and student developers completed the online survey from 16 October to 1 November 2016, with over 25% of the developers surveyed writing their first piece of code before they were 16 years old.
### How programmers learn
In terms of how programmers learnt to code, self-teaching is the norm for developers of all ages, stated the report.
“Even though 67% of developers have computer science degrees, roughly 74% said they were at least partially self-taught.”
On average, developers know four languages, but they want to learn four more.
The thirst for learning varies by generation: developers between 18 and 24 plan to learn six languages, whereas developers older than 35 only plan to learn three.
[![HackerRank 2018 how did you learn to code](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-did-you-learn-to-code.jpg)][5]
### What programmers want
HackerRank also looked at what developers want most from an employer.
On average, a good work-life balance, closely followed by professional growth and learning, was the most desired requirement.
Segmenting the data by region revealed that Americans crave work-life balance more than developers in Asia and Europe.
Students tend to rank growth and learning over work-life balance, while professionals rate compensation more highly than students do.
People who work in smaller companies tended to rank work-life balance lower, but it was still in their top three.
Age also made a difference, with developers 25 and older rating work-life balance as most important, while those between 18 and 24 rate it as less important.
“In some ways, we've discovered a slight contradiction here. Developers want work-life balance, but they also have an insatiable thirst and need for learning,” said HackerRank.
It advised that focusing on doing what you enjoy, as opposed to trying to learn everything, can help strike a better work-life balance.
[![HackerRank 2018 what do developers want most](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-what-do-developers-want-most-640x342.jpg)][6]
[![HackerRank 2018 how to improve work-life balance](https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-to-improve-work-life-balance-378x430.jpg)][7]
--------------------------------------------------------------------------------
via: https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html
作者:[Staff Writer ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://mybroadband.co.za/news/author/staff-writer
[1]:https://mybroadband.co.za/news/author/staff-writer
[2]:https://twitter.com/intent/tweet/?text=How+programmers+learn+to+code%20https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html&via=mybroadband
[3]:mailto:?subject=How%20programmers%20learn%20to%20code&body=HackerRank%20recently%20published%20the%20results%20of%20its%202018%20Developer%20Skills%20Report.%0A%0Ahttps%3A%2F%2Fmybroadband.co.za%2Fnews%2Fsmartphones%2F246583-how-programmers-learn-to-code.html
[4]:https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html#disqus_thread
[5]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-did-you-learn-to-code.jpg
[6]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-what-do-developers-want-most.jpg
[7]:https://mybroadband.co.za/news/wp-content/uploads/2018/01/HackerRank-2018-how-to-improve-work-life-balance.jpg
[8]:https://mybroadband.co.za/news/smartphones/246583-how-programmers-learn-to-code.html

View File

@ -0,0 +1,305 @@
Create and manage MacOS LaunchAgents using Go
============================================================
If you have ever tried writing a daemon for MacOS, you have met `launchd`. For those that don't have the experience, think of it as a framework for starting, stopping and managing daemons, applications, processes, and scripts. If you have any *nix experience the word daemon should not be too alien to you.
For those unfamiliar, a daemon is a program running in the background without requiring user input. A typical daemon might, for instance, perform daily maintenance tasks or scan a device for malware when connected.
This post is aimed at folks that know a little bit about what daemons are and how they are commonly used, and who know a bit about Go. Also, if you have ever written a daemon for any other *nix system, you will have a good idea of what we are going to talk about here. If you are an absolute beginner in Go or systems this might prove to be an overwhelming article. Still, feel free to give it a shot and let me know how it goes.
If you ever find yourself wanting to write a MacOS daemon with Go, you will want to know most of the stuff we are going to talk about in this article. Without further ado, let's dive in.
### What is `launchd` and how it works?
`launchd` is a unified service-management framework, that starts, stops and manages daemons, applications, processes, and scripts in MacOS.
One of its key features is that it differentiates between agents and daemons. In `launchd` land, an agent runs on behalf of the logged in user while a daemon runs on behalf of the root user or any specified user.
### Defining agents and daemons
An agent/daemon is defined in an XML file, which states the properties of the program that will execute, among a list of other properties. Another aspect to keep in mind is that `launchd` decides if a program will be treated as a daemon or an agent by where the program XML is located.
Over at [launchd.info][3], there's a simple table that shows where you would (or would not) place your program's XML:
```
+----------------+-------------------------------+-----------------------------------------------------+
| Type           | Location                      | Run on behalf of                                    |
+----------------+-------------------------------+-----------------------------------------------------+
| User Agents    | ~/Library/LaunchAgents        | Currently logged in user                            |
| Global Agents  | /Library/LaunchAgents         | Currently logged in user                            |
| Global Daemons | /Library/LaunchDaemons        | root or the user specified with the key 'UserName'  |
| System Agents  | /System/Library/LaunchAgents  | Currently logged in user                            |
| System Daemons | /System/Library/LaunchDaemons | root or the user specified with the key 'UserName'  |
+----------------+-------------------------------+-----------------------------------------------------+
```
This means that when we set our XML file in, for example, the `/Library/LaunchAgents` path our process will be treated as a global agent. The main difference between the daemons and agents is that LaunchDaemons will run as root, and are generally background processes. On the other hand, LaunchAgents are jobs that will run as a user or in the context of userland. These may be scripts or other foreground items and they also have access to the MacOS UI (e.g. you can send notifications, control the windows, etc.)
So, how do we define an agent? Let's take a look at a simple XML file that `launchd` understands:
```
<!-- Example blatantly ripped off from http://www.launchd.info/ -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.example.app</string>
    <key>Program</key>
    <string>/Users/Me/Scripts/cleanup.sh</string>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
```
The XML is quite self-explanatory, unless it's the first time you are seeing an XML file. The file has three main properties, with values. In fact, if you take a better look you will see the `dict` keyword, which means `dictionary`. This actually means that the XML represents a key-value structure, so in Go it would look like:
```
map[string]string{
    "Label":     "com.example.app",
    "Program":   "/Users/Me/Scripts/cleanup.sh",
    "RunAtLoad": "true",
}
```
Let's look at each of the keys:
1. `Label` - The job definition or the name of the job. This is the unique identifier for the job within the `launchd` instance. Usually, the label (and hence the name) is written in [Reverse domain name notation][1].
2. `Program` - This key defines what the job should start, in our case a script with the path `/Users/Me/Scripts/cleanup.sh`.
3. `RunAtLoad` - This key specifies when the job should be run, in this case right after it's loaded.
As you can see, the keys used in this XML file are quite self-explanatory. This is the case for the remaining 30-40 keys that `launchd` supports. Last but not least, although these files have XML syntax, they use a `.plist` extension (which stands for `Property List`). Makes a lot of sense, right?
### `launchd` vs. `launchctl`
Before we continue with our little exercise of creating daemons/agents with Go, let's first see how `launchd` allows us to control these jobs. While `launchd`'s job is to boot the system and to load and maintain services, there is a different command used for job management - `launchctl`. With `launchd` facilitating jobs, the control of services is centralized in the `launchctl` command.
`launchctl` has a long list of subcommands that we can use. For example, loading or unloading a job is done via:
```
launchctl unload/load ~/Library/LaunchAgents/com.example.app.plist
```
Or, starting/stopping a job is done via:
```
launchctl start/stop ~/Library/LaunchAgents/com.example.app.plist
```
To get any confusion out of the way, `load` and `start` are different. While `start` only starts the agent/daemon, `load` loads the job and it might also start it if the job is configured to run on load. This is achieved by setting the `RunAtLoad` property in the property list XML of the job:
```
<!-- Example blatantly ripped off from http://www.launchd.info/ -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.example.app</string>
    <key>Program</key>
    <string>/Users/Me/Scripts/cleanup.sh</string>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
```
If you would like to see what other commands `launchctl` supports, you can run `man launchctl` in your terminal and see the options in detail.
### Automating with Go
After getting the basics of `launchd` and `launchctl` out of the way, why don't we see how we can add an agent to any Go package? For our example, we are going to write a simple way of plugging in a `launchd` agent for any of your Go packages.
As we already established before, `launchd` speaks in XML. Or, rather, it understands XML files, called  _property lists_  (or `.plist`). This means, for our Go package to have an agent running on MacOS, it will need to tell `launchd` “hey, `launchd`, run this thing!”. And since `launchd` speaks only in `.plist`, that means our package needs to be capable of generating XML files.
### Templates in Go
While one could have a hardcoded `.plist` file in their project and copy it across to the `~/Library/LaunchAgents` path, a more programmatic way to do this would be to use a template to generate these XML files. The good thing is Go's standard library has us covered - the `text/template` package ([docs][4]) does exactly what we need.
In a nutshell, `text/template` implements data-driven templates for generating textual output. Or in other words, you give it a template and a data structure, it will mash them up together and produce a nice and clean text file. Perfect.
Let's say the `.plist` we need to generate in our case is the following:
```
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version='1.0'>
  <dict>
    <key>Label</key><string>Ticker</string>
    <key>Program</key><string>/usr/local/bin/ticker</string>
    <key>StandardOutPath</key><string>/tmp/ticker.out.log</string>
    <key>StandardErrorPath</key><string>/tmp/ticker.err.log</string>
    <key>KeepAlive</key><true/>
    <key>RunAtLoad</key><true/>
  </dict>
</plist>
```
We want to keep it quite simple in our little exercise. It will contain only six properties: `Label`, `Program`, `StandardOutPath`, `StandardErrorPath`, `KeepAlive` and `RunAtLoad`. To generate such an XML, its template would look something like this:
```
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version='1.0'>
<dict>
<key>Label</key><string>{{.Label}}</string>
<key>Program</key><string>{{.Program}}</string>
<key>StandardOutPath</key><string>/tmp/{{.Label}}.out.log</string>
<key>StandardErrorPath</key><string>/tmp/{{.Label}}.err.log</string>
<key>KeepAlive</key><{{.KeepAlive}}/>
<key>RunAtLoad</key><{{.RunAtLoad}}/>
</dict>
</plist>
```
As you can see, the difference between the two XMLs is that the second one has double curly braces with expressions in them in places where the first XML has some sort of a value. These are called “actions”, which can be data evaluations or control structures, and are delimited by `{{` and `}}`. Any text outside actions is copied to the output untouched.
### Injecting your data
Now that we have our template with its glorious XML and curly braces (or actions), let's see how we can inject our data into it. Since things are generally simple in Go, especially when it comes to its standard library, you should not worry - this will be easy!
To keep things simple, we will store the whole XML template in a plain old string. Yes, weird, I know. The best way would be to store it in a file and read it from there, or embed it in the binary itself, but in our little example let's keep it simple:
```
// template.go
package main
func Template() string {
return `
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version='1.0'>
<dict>
<key>Label</key><string>{{.Label}}</string>
<key>Program</key><string>{{.Program}}</string>
<key>StandardOutPath</key><string>/tmp/{{.Label}}.out.log</string>
<key>StandardErrorPath</key><string>/tmp/{{.Label}}.err.log</string>
<key>KeepAlive</key><{{.KeepAlive}}/>
<key>RunAtLoad</key><{{.RunAtLoad}}/>
</dict>
</plist>
`
}
```
And the program that will use our little template function:
```
// main.go
package main

import (
    "log"
    "os"
    "text/template"
)

func main() {
    data := struct {
        Label     string
        Program   string
        KeepAlive bool
        RunAtLoad bool
    }{
        Label:     "ticker",
        Program:   "/usr/local/bin/ticker",
        KeepAlive: true,
        RunAtLoad: true,
    }

    t := template.Must(template.New("launchdConfig").Parse(Template()))
    err := t.Execute(os.Stdout, data)
    if err != nil {
        log.Fatalf("Template generation failed: %s", err)
    }
}
```
So, what happens there, in the `main` function? It's actually quite simple:
1. We declare a small `struct`, which has only the properties that will be needed in the template, and we immediately initialize it with the values for our program.
2. We build a new template, using the `template.New` function, with the name `launchdConfig`. Then, we invoke the `Parse` function on it, which takes the XML template as an argument.
3. We invoke the `template.Must` function, which takes our built template as argument. From the documentation, `template.Must` is a helper that wraps a call to a function returning `(*Template, error)` and panics if the error is non-`nil`. Actually, `template.Must` is built to, in a way, validate if the template can be understood by the `text/template` package.
4. Finally, we invoke `Execute` on our built template, which takes a data structure and applies its attributes to the actions in the template. Then it sends the output to `os.Stdout`, which does the trick for our example. Of course, the output can be sent to any struct that implements the `io.Writer` interface, like a file (`os.File`).
### Make and load my `.plist`
Instead of sending all this nice XML to standard out, let's throw an open file descriptor into the `Execute` function and finally save our `.plist` file in `~/Library/LaunchAgents`. There are a couple of main points we need to change.
First, getting the location of the binary. Since it's a Go binary, and we will install it via `go install`, we can assume that the path will be at `$GOPATH/bin`. Second, since we don't know the actual `$HOME` of the current user, we will have to get it through the environment. Both of these can be done via `os.Getenv` ([docs][5]), which takes a variable name and returns its value.
```
// main.go
package main

import (
    "fmt"
    "log"
    "os"
    "text/template"
)

func main() {
    data := struct {
        Label     string
        Program   string
        KeepAlive bool
        RunAtLoad bool
    }{
        Label:     "com.ieftimov.ticker", // Reverse-DNS naming convention
        Program:   fmt.Sprintf("%s/bin/ticker", os.Getenv("GOPATH")),
        KeepAlive: true,
        RunAtLoad: true,
    }

    plistPath := fmt.Sprintf("%s/Library/LaunchAgents/%s.plist", os.Getenv("HOME"), data.Label)
    // Create (or truncate) the .plist file so the template output can be written to it.
    f, err := os.Create(plistPath)
    if err != nil {
        log.Fatalf("Could not create %s: %s", plistPath, err)
    }
    defer f.Close()

    t := template.Must(template.New("launchdConfig").Parse(Template()))
    err = t.Execute(f, data)
    if err != nil {
        log.Fatalf("Template generation failed: %s", err)
    }
}
```
That's about it. The first part, about setting the correct `Program` property, is done by concatenating the name of the program and `$GOPATH`:
```
fmt.Sprintf("%s/bin/ticker", os.Getenv("GOPATH"))// Output: /Users/<username>/go/bin/ticker
```
The second part is slightly more complex, and it's done by concatenating three strings: the `$HOME` environment variable, the `Label` property of the program, and the `/Library/LaunchAgents` string:
```
fmt.Sprintf("%s/Library/LaunchAgents/%s.plist", os.Getenv("HOME"), data.Label)// Output: /Users/<username>/Library/LaunchAgents/com.ieftimov.ticker.plist
```
With these two paths in hand, creating the file and writing to it is trivial - we create it via `os.Create` and pass the resulting `os.File` to `t.Execute`, which writes to the file descriptor.
### What about the Launch Agent?
We will keep this one simple as well. Let's throw in a command to our package, make it installable via `go install` (not that there's much to it) and make it runnable by our `.plist` file:
```
// cmd/ticker/main.go
package main

import (
    "fmt"
    "time"
)

func main() {
    for range time.Tick(30 * time.Second) {
        fmt.Println("tick!")
    }
}
```
This `ticker` program uses `time.Tick` to execute an action every 30 seconds. Since this is an infinite loop, `launchd` will kick off the program on boot (because `RunAtLoad` is set to `true` in the `.plist` file) and will keep it running. But, to make the program controllable from the operating system, we need to make it react to some OS signals, like `SIGINT` or `SIGTERM`.
### Understanding and handling OS signals
While there's quite a bit to be learned about OS signals, in our example we will only scratch the surface. (If you know a lot about inter-process communication this might be too much of an oversimplification for you - and I apologize up front. Feel free to drop some links on the topic in the comments so others can learn more!)
The best way to think about a signal is that it's a message from the operating system, or another process, to a process. It is an asynchronous notification sent to a process, or to a specific thread within the same process, to notify it of an event that occurred.
There are quite a few different signals that can be sent to a process (or a thread), like `SIGKILL` (which kills a process), `SIGSTOP` (stop), `SIGTERM` (termination), `SIGILL` and so on and so forth. There's an exhaustive list of signal types on [Wikipedia's page][6] on signals.
To get back to `launchd`, if we look at its documentation about stopping a job we will notice the following:
> Stopping a job will send the signal `SIGTERM` to the process. Should this not stop the process launchd will wait `ExitTimeOut` seconds (20 seconds by default) before sending `SIGKILL`.
Pretty self-explanatory, right? We need to handle one signal - `SIGTERM`. Why not `SIGKILL`? Because `SIGKILL` is a special signal that cannot be caught - it kills the process without any chance for a graceful shutdown, no questions asked. That's why there's a termination signal and a “kill” signal.
Let's throw a bit of signal handling into our code, so our program knows that it needs to exit when it gets told to do so:
```
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    go func() {
        <-sigs
        os.Exit(0)
    }()

    for range time.Tick(30 * time.Second) {
        fmt.Println("tick!")
    }
}
```
In the new version, the agent program has two new packages imported: `os/signal` and `syscall`. `os/signal` implements access to incoming signals, which are primarily used on Unix-like systems. Since in this article we are specifically interested in MacOS, this is exactly what we need.
Package `syscall` contains an interface to the low-level operating system primitives. An important note about `syscall` is that it has been locked down since Go v1.4. This means that any code outside of the standard library that uses the `syscall` package should be migrated to use the new `golang.org/x/sys` [package][7]. Since we are using **only** the signal constants of `syscall`, we can get away with this.
(If you want to read more about the package lockdown, you can see [the rationale on locking it down][8] by the Go team and the new [golang.org/s/sys][9] package.)
Having the basics of the packages out of the way, let's go step by step through the new lines of code added:
1. We make a buffered channel of type `os.Signal`, with a size of `1`. `os.Signal` is a type that represents an operating system signal.
2. We call `signal.Notify` with the new channel as an argument, plus `syscall.SIGINT` and `syscall.SIGTERM`. This function states “when the OS sends a `SIGINT` or a `SIGTERM` signal to this program, send the signal to the channel”. This allows us to somehow handle the sent OS signal.
3. The new goroutine that we spawn waits for any of the signals to arrive through the channel. Since we know that any of the signals that will arrive are about shutting down the program, after receiving any signal we use `os.Exit(0)` ([docs][2]) to gracefully stop the program. One caveat here is that if we had any `defer`red calls, they would not be run.
Now `launchd` can run the agent program and we can `load` and `unload`, `start` and `stop` it using `launchctl`.
### Putting it all together
Now that we have all the pieces ready, we need to put them together to good use. Our application will consist of two binaries - a CLI tool and an agent (daemon). Both of the programs will be stored in separate subdirectories of the `cmd` directory.
The CLI tool:
```
// cmd/cli/main.go
package main

import (
    "fmt"
    "log"
    "os"
    "text/template"
)

func main() {
    data := struct {
        Label     string
        Program   string
        KeepAlive bool
        RunAtLoad bool
    }{
        Label:     "com.ieftimov.ticker", // Reverse-DNS naming convention
        Program:   fmt.Sprintf("%s/bin/ticker", os.Getenv("GOPATH")),
        KeepAlive: true,
        RunAtLoad: true,
    }

    plistPath := fmt.Sprintf("%s/Library/LaunchAgents/%s.plist", os.Getenv("HOME"), data.Label)
    // Create (or truncate) the .plist file so the template output can be written to it.
    f, err := os.Create(plistPath)
    if err != nil {
        log.Fatalf("Could not create %s: %s", plistPath, err)
    }
    defer f.Close()

    t := template.Must(template.New("launchdConfig").Parse(Template()))
    err = t.Execute(f, data)
    if err != nil {
        log.Fatalf("Template generation failed: %s", err)
    }
}
```
And the ticker program:
```
// cmd/ticker/main.go
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    go func() {
        <-sigs
        os.Exit(0)
    }()

    for range time.Tick(30 * time.Second) {
        fmt.Println("tick!")
    }
}
```
To install them both, we need to run `go install ./...` in the project root. The command will install all the sub-packages that are located within the project. This will leave us with two available binaries, installed in the `$GOPATH/bin` path.
To install our launch agent, we need to run only the CLI tool, via the `cli` command. This will generate the `.plist` file and place it in the `~/Library/LaunchAgents` path. We don't need to touch the `ticker` binary - that one will be managed by `launchd`.
To load the newly created `.plist` file, we need to run:
```
launchctl load ~/Library/LaunchAgents/com.ieftimov.ticker.plist
```
When we run it, we will not see anything immediately, but after 30 seconds the ticker will add a `tick!` line in `/tmp/ticker.out.log`. We can `tail` the file to see the new lines being added. If we want to unload the agent, we can use:
```
launchctl unload ~/Library/LaunchAgents/com.ieftimov.ticker.plist
```
This will unload the launch agent and will stop the ticker from running. Remember the signal handling we added? This is the case where it's being used! Also, we could have automated the (un)loading of the file via the CLI tool, but for simplicity we left it out. You can try to improve the CLI tool by making it a bit smarter with subcommands and flags, as a follow-up exercise from this tutorial.
Finally, if you decide to completely delete the launch agent, you can remove the `.plist` file:
```
rm ~/Library/LaunchAgents/com.ieftimov.ticker.plist
```
### In closing
As part of this (quite long!) article, we saw how we can work with `launchd` and Golang. We took a few detours, learning about `launchd` and `launchctl`, generating XML files using the `text/template` package, and taking a look at OS signals and how we can gracefully shut down a Go program by handling the `SIGINT` and `SIGTERM` signals. There was quite a bit to learn and see, but we got to the end.
Of course, we only scratched the surface with this article. For example, `launchd` is quite an interesting tool. You can also use it like `crontab`, because it allows running programs at explicit time/date combinations or on specific days. Or, for example, the XML template can be embedded in the program binary using tools like [`go-bindata`][10], instead of hardcoding it in a function. Also, you can explore more about signals, how they work and how Go implements these low-level primitives so you can use them with ease in your programs. The options are plenty, feel free to explore!
If you have found any mistakes in the article, feel free to drop a comment below - I will appreciate it a ton. I find learning through teaching (blogging) a very pleasant experience and would like to have all the details fully correct in my posts.
--------------------------------------------------------------------------------
作者简介:
Backend engineer, interested in Ruby, Go, microservices, building resilient architectures and solving challenges at scale. I coach at Rails Girls in Amsterdam, maintain a list of small gems and often contribute to Open Source.
This is where I write about software development, programming languages and everything else that interests me.
---------------------
via: https://ieftimov.com/create-manage-macos-launchd-agents-golang
作者:[Ilija Eftimov ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://ieftimov.com/about
[1]:https://ieftimov.com/en.wikipedia.org/wiki/Reverse_domain_name_notation
[2]:https://godoc.org/os#Exit
[3]:https://launchd.info/
[4]:https://godoc.org/text/template
[5]:https://godoc.org/os#Getenv
[6]:https://en.wikipedia.org/wiki/Signal_(IPC)
[7]:https://golang.org/x/sys
[8]:https://docs.google.com/document/d/1QXzI9I1pOfZPujQzxhyRy6EeHYTQitKKjHfpq0zpxZs/edit
[9]:https://golang.org/x/sys
[10]:https://github.com/jteeuwen/go-bindata

View File

@ -0,0 +1,62 @@
Linux Kernel 4.15: 'An Unusual Release Cycle'
============================================================
![Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/background-penguin.png?itok=g8NBQs24 "Linux")
Linus Torvalds released version 4.15 of the Linux Kernel on Sunday, a week later than originally scheduled. Learn about key updates in this latest release.[Creative Commons Zero][1]Pixabay
Linus Torvalds [released version 4.15 of the Linux Kernel][7] on Sunday, and, for the second release in a row, a week later than scheduled. The culprits for the late release were the Meltdown and Spectre bugs, as these two vulnerabilities forced developers to submit major patches well into what should have been the last cycle. Torvalds was not comfortable rushing the release, so he gave it another week.
Unsurprisingly, the first big bunch of patches worth mentioning were those designed to sidestep [Meltdown and Spectre][8]. To avoid Meltdown, a problem that affects Intel chips, [developers have implemented  _Page Table Isolation_  (PTI)][9] for the x86 architecture. If for any reason you want to turn this off, you can use the `pti=off` kernel boot option.
Spectre v2 affects both Intel and AMD chips and, to avoid it, [the kernel now comes with the  _retpoline_  mechanism][10]. Retpoline requires a version of GCC that supports the `-mindirect-branch=thunk-extern` functionality. As with PTI, the Spectre-inhibiting mechanism can be turned off. To do so, use the `spectre_v2=off` option at boot time. Although developers are working to address Spectre v1, at the moment of writing there is still no solution, so there is no patch for this bug in 4.15.
The solution for Meltdown on ARM has also been pushed to the next development cycle, but there is [a remedy for the bug on PowerPC with the  _RFI flush of L1-D cache_ feature][11] included in this release.
An interesting side effect of all of the above is that new kernels now come with a  _/sys/devices/system/cpu/vulnerabilities/_  virtual directory. This directory shows the vulnerabilities affecting your CPU and the remedies currently being applied.
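For example, on a machine running such a kernel you can list the status of each issue with a quick check like this (the output below is only an illustration; what you see depends on your CPU and kernel configuration):

```
$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Full generic retpoline
```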
The issues with buggy chips (and the manufacturers that keep things like this secret) have revived the call for the development of viable open source alternatives. This brings us to the partial support for [RISC-V][12] chips that has now been merged into the mainline kernel. RISC-V is an open instruction set architecture that allows manufacturers to create their own implementations of RISC-V chips, and it has resulted in several open source chips. While RISC-V chips are currently used mainly in embedded devices, powering things like smart hard disks or Arduino-like development boards, RISC-V proponents argue that the architecture is also well-suited for use on personal computers and even in multi-node supercomputers.
[The support for RISC-V][13], as mentioned above, is still incomplete, and includes the architecture code but no device drivers. This means that, although a Linux kernel will run on RISC-V, there is no significant way to actually interact with the underlying hardware. That said, RISC-V is not vulnerable to any of the bugs that have dogged other closed architectures, and development of its support is progressing at a brisk pace, as [the RISC-V Foundation has the support of some of the industry's biggest heavyweights][14].
### Other stuff that's new in kernel 4.15
Torvalds has often declared he likes things boring. Fortunately for him, he says, apart from the Spectre and Meltdown messes, most of the other things that happened in 4.15 were very much run of the mill, such as incremental improvements for drivers, support for new devices, and so on. However, there were a few more things worth pointing out:
* [AMD got support for Secure Encrypted Virtualization][3]. This allows the kernel to fence off the memory a virtual machine is using by encrypting it. The encrypted memory can only be decrypted by the virtual machine that is using it. Not even the hypervisor can see inside it. This means that data being worked on by VMs in the cloud, for example, is safe from being spied on by any other process outside the VM.
* AMD GPUs get a substantial boost thanks to [the inclusion of  _display code_][4] . This gives mainline support to Radeon RX Vega and Raven Ridge cards and also implements HDMI/DP audio for AMD cards.
* Raspberry Pi aficionados will be glad to know that [the 7'' touchscreen is now natively supported][5], which is guaranteed to lead to hundreds of fun projects.
To find out more, you can check out the write-ups at [Kernel Newbies][15] and [Phoronix][16].
_Learn more about Linux through the free ["Introduction to Linux" ][6]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
作者:[PAUL BROWN ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/background-penguinpng
[3]:https://git.kernel.org/linus/33e63acc119d15c2fac3e3775f32d1ce7a01021b
[4]:https://git.kernel.org/torvalds/c/f6705bf959efac87bca76d40050d342f1d212587
[5]:https://git.kernel.org/linus/2f733d6194bd58b26b705698f96b0f0bd9225369
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[7]:https://lkml.org/lkml/2018/1/28/173
[8]:https://meltdownattack.com/
[9]:https://git.kernel.org/linus/5aa90a84589282b87666f92b6c3c917c8080a9bf
[10]:https://git.kernel.org/linus/76b043848fd22dbf7f8bf3a1452f8c70d557b860
[11]:https://git.kernel.org/linus/aa8a5e0062ac940f7659394f4817c948dc8c0667
[12]:https://riscv.org/
[13]:https://git.kernel.org/torvalds/c/b293fca43be544483b6488d33ad4b3ed55881064
[14]:https://riscv.org/membership/
[15]:https://kernelnewbies.org/Linux_4.15
[16]:https://www.phoronix.com/scan.php?page=search&q=Linux+4.15

View File

@ -1,3 +1,5 @@
translated by cyleft
Linux ln Command Tutorial for Beginners (5 Examples)
======

View File

@ -0,0 +1,249 @@
Mitigating known security risks in open source libraries
============================================================
>Fixing vulnerable open source packages.
![Machine](https://d3tdunqjn7n0wj.cloudfront.net/360x240/machine-2881186_1920-aa3ebed0567d4ab0a107baa640661e35.jpg)
Machine (source: [Skitterphoto][9])
This is an excerpt from [Securing Open Source Libraries][13], by Guy Podjarny. 
[Read the preceding chapter][14] or [view the full report][15].
### Fixing Vulnerable Packages
Finding out if youre using vulnerable packages is an important step, but its not the real goal. The real goal is to fix those issues!
This chapter focuses on all you should know about fixing vulnerable packages, including remediation options, tooling, and various nuances. Note that SCA tools traditionally focused on finding or preventing vulnerabilities, and most put little emphasis on the fix itself beyond providing advisory information or logging an issue. Therefore, you may need to implement some of these remediations yourself, at least until more SCA solutions expand to include them.
There are several ways to fix vulnerable packages, but upgrading is the best choice. If that is not possible, patching offers a good alternative. The following sections discuss each of these options, and we will later take a look at what you can do in situations where neither of these solutions is possible.
### Upgrading
As Ive previously stated, a vulnerability is a type of bug, and the best way to address a bug is to use a newer version where it is fixed. And so, the best way to fix a vulnerable dependency is to upgrade to a newer version. Statistically, most disclosed vulnerabilities are eventually fixed. In npm, 59% of reported vulnerabilities have a fix. In Maven, 90% are remediable, while that portion is 85% in RubyGems.[1][4] In other words, more often than not, there is a version of your library where the vulnerability is fixed.
Finding a vulnerable package requires knowledge of which versions are vulnerable. This means that, at the very least, every tool that finds issues can tell which versions are vulnerable, allowing you to look for newer versions of the library and upgrade. Most tools also take the minor extra step of determining the minimal fixed version, and noting it in the advisory.
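The check itself is just version arithmetic: compare the version you have against the minimal fixed version from the advisory. Here is a small, ecosystem-agnostic sketch in Python, using the third-party `packaging` library and made-up version numbers:

```
from packaging.version import Version  # pip install packaging

# Hypothetical advisory data: the version you are running and the minimal
# version in which the vulnerability is known to be fixed.
installed = Version("1.2.3")
min_fixed = Version("1.2.5")

if installed < min_fixed:
    print("Vulnerable: upgrade to %s or later" % min_fixed)
else:
    print("Already on a fixed version")
```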
Upgrading is therefore the best way to make a vulnerability go away. Its technically easy (update a manifest or lock file), and its something dev teams are very accustomed to doing. That said, upgrading still holds some complexity.
### Major Upgrades
While most issues are fixed, very often the fix is only applied to the latest and greatest version of the library. If youre still using an older version of the library, upgrading may mean switching to a new major version. Major upgrades are typically not backward compatible, introducing more risk and requiring more dev effort.
Another reason for fixing an issue only in the next major version is that sometimes fixing a vulnerability means reducing functionality. For instance, fixing a certain [XSS vulnerability in a jQuery 2.x codebase][5] requires a change to the way certain selectors are interpreted. The jQuery team determined too many people are relying on this functionality to deem this a non-breaking change, and so only fixed the vulnerability in their 3.x stream.
For these reasons, a major upgrade can often be difficult, but if you can accept it, its still the best way to fix a vulnerability.
### Indirect Dependency Upgrade
If youre consuming a dependency directly, upgrading is relatively straightforward. But what happens when one of your dependencies is the one who pulled in the vulnerable package? Most dependencies are in fact indirect dependencies (a.k.a. transitive dependencies), making upgrades a bit more complex.
The cleanest way to perform an indirect upgrade is through a direct one. If your app uses `A@1`, which uses a vulnerable `B@1`, its possible that upgrading to `A@2` will trigger a downstream upgrade to `B@2` and fix the issue. Applying such an upgrade is easy (its essentially a direct upgrade), but discovering  _which_  upgrade to do (and whether one even exists) is time consuming. While not common, some SCA tools can determine and advise on the  _direct_  upgrades you need to make to fix an  _indirect_  vulnerability. If your tooling doesnt support it, youll need to do the searching manually.
Old vulnerabilities in indirect libraries can often be fixed with a direct upgrade, but such upgrades are frequently unavailable for new issues. When a new vulnerability is disclosed, even if the offending package releases a fix right away, it takes a while for the dependency chain to catch up. If you cant find a path to an indirect upgrade for a newly disclosed flaw, be sure to recheck frequently as one may show up soon. Once again, some SCA tools will do this monitoring for you and alert you when new remediations are available.
![The direct vulnerable EJS can be upgraded, but indirect instance cannot currently be upgraded](https://d3ansictanv2wj.cloudfront.net/sosl_0301-d3ce5b0bf64893e26ee74627bfba5300.png)
Figure 1-1. The direct vulnerable EJS can be upgraded, but indirect instance cannot currently be upgraded
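Conceptually, working out which direct upgrade can fix an indirect vulnerability is a reachability search over the dependency graph: find every direct dependency whose transitive dependencies include the vulnerable package, then look for versions of those direct dependencies that no longer pull it in. The following Python sketch, with a made-up graph and package names, shows the first half of that search:

```
# Toy dependency graph: each package maps to the packages it depends on.
# All names are made up for illustration.
DEPS = {
    "my-app": ["A", "C"],
    "A": ["B"],
    "B": [],
    "C": ["D"],
    "D": ["B"],
}

def pulls_in(package, target, graph):
    """Return True if `target` is reachable from `package` in the graph."""
    stack, seen = [package], set()
    while stack:
        current = stack.pop()
        if current == target:
            return True
        if current in seen:
            continue
        seen.add(current)
        stack.extend(graph.get(current, []))
    return False

vulnerable = "B"
offenders = [dep for dep in DEPS["my-app"] if pulls_in(dep, vulnerable, DEPS)]
print(offenders)  # ['A', 'C'] -- the direct dependencies you would try to upgrade
```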
### Conflicts
Another potential obstacle to upgrading is a conflict. Many languages, such as Ruby and Python, require dependencies to be global, and clients such as Rubys bundler and Pythons pip determine the mix of library versions that can co-exist. As a result, upgrading one library may trigger a conflict with another. While developers are adept at handling such conflicts, there are times when such issues simply cannot be resolved.
On the positive side, global dependency managers, such as Rubys bundler, allow the parent app to add a constraint. For instance, if a downstream `B@1` gem is vulnerable, you can add `B@^2` to your Gemfile, and have bundler sort out the surrounding impact. Adding such constraints is a safe and legitimate solution, as long as your ecosystem tooling can figure out a conflict-free combination of libraries.
### Is a Newer Version Always Safer?
The conversation about upgrading begs a question: can a vulnerability also be fixed by downgrading?
For the most part, the answer is no. Vulnerabilities are bugs, and bugs are typically fixed in a newer version, not an older one. In general, maintaining a good upgrade cadence and keeping your dependencies up to date is a good preventative measure to reduce the risk of vulnerabilities.
However, in certain cases, code changes or (more often) new features are the ones that trigger a vulnerability. In those cases, its indeed possible that downgrading will fix the discovered flaw. The advisory should give you the information you need about which versions are affected by the vulnerability. That said, note that downgrading a package puts you at higher risk of being exposed to new issues, and can make it harder to upgrade when that happens. I suggest you see downgrading as a temporary and rarely used remediation path.
### There Is No Fixed Version
Last on the list of reasons preventing you from upgrading to a safe version is such a version not existing in the first place!
While most vulnerabilities are fixed, many remain unfixed. This is sometimes a temporary situation—for instance, when a vulnerability was made public without waiting for a fix to be released. Other times, it may be a more long-term scenario, as many repositories fall into a poor maintenance state, and dont fix reported issues nor accept community patches.
In the following sections Ill discuss some options for when you cannot upgrade a vulnerability away.
### Patching
Despite all the complexity it may involve, upgrading is the best way to fix an issue. However, if you cannot upgrade, patching the vulnerability is the next best option.
Patching means taking a library as is, including its vulnerabilities, and then modifying it to fix a vulnerability it holds. Patching should apply the minimal set of changes to the library, so as to keep its functionality unharmed and only address the issue at hand.
Patching inevitably holds a certain amount of risk. When you use a package downloaded millions of times a month, you have some assurance that bugs in it will be discovered, reported, and often fixed. When you download that package and modify it, your version of the code will not be quite as battle tested.
Patching is therefore an exercise in risk management. What presents a greater risk: having the vulnerability, or applying the patch? For well-managed patches, especially for ones small in scope, I believe its almost always better to have a patch than a vulnerability.
It's worth noting that patching application dependencies is a relatively new concept, but old hat in the operating system world. When dealing with operating system dependencies, we're accustomed to consuming a feed of fixes by running `apt-get upgrade` or an equivalent command, often remaining unaware of which issues we fixed. What most don't know is that many of the fixes you pull down are in fact back-ported versions of the original OS authors' code changes, created and tested by Canonical, Red Hat, and the like. A safe registry that feeds you the non-vulnerable variants of your dependencies doesn't exist yet in the application libraries world, but patching is sometimes doable in other ways.
### Sourcing Patches
To create a patch, you first need to have a fix for the vulnerability! You could write one yourself, but patches are more often sourced from existing community fixes.
The first place to look for a patch is a new version of the vulnerable package. Most often the vulnerability  _was_  fixed by the maintainers of the library, but that fix may be in an out-of-reach indirect dependency, or perhaps was only fitted back into the latest major version. Those fixes can be extracted from the original repo and stored into their own patch file, as well as back-ported into older versions if need be.
Another common source for patches is external pull requests (PRs). Open source maintenance is a complicated topic, and it's not uncommon for repos to go inactive. In such repos, you may find community pull requests that fix a vulnerability, have been commented on and perhaps vetted by others, but are not merged and published into the main stream. Such PRs are a good starting point—if not the full solution—for creating a patch. For instance, an XSS issue in the popular JavaScript Markdown parsing library `marked` had an [open fix PR][6] for nearly a year before it was incorporated into a new release. During this period, you could use the fix PR code to patch the issue in your apps.
Snyk maintains its own set of patches in its [open source database][7]. Most of those patches are captures or back-ports of original fixes, a few are packaged pull requests, and even fewer are written by the Snyk security research team.
### Depend on GitHub Hash
In very specific cases, you may be able to patch without storing any code changes. This is only possible if the vulnerable dependency is a direct dependency of your app, and the public repo holding the package has a commit that fixes the issue (often a pull request, as mentioned before).
If thats the case, most package managers allow you to change your manifest file to point to the GitHub commit instead of naming your package and version. Git hashes are immutable, so youll know exactly what youre getting, even if the pull request evolved. However, the commit may be deleted, introducing certain reliability concerns.
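The exact syntax depends on the ecosystem. As one hypothetical illustration, a Python project using pip can pin a direct dependency to a specific commit in its requirements file; the package name below is made up and `<commit-hash>` is a placeholder for the real SHA:

```
# requirements.txt -- hypothetical example of depending on a fixed commit
# instead of a published release
some-package @ git+https://github.com/some-org/some-package.git@<commit-hash>
```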
### Fork and Patch
When patching a vulnerability in a direct dependency, assuming you dont want to depend on an external commit or have none to use, you can create one of your own. Doing so typically means forking the GitHub repository to a user you control, and patching it. Once done, you can modify your manifest to point to your fixed repository.
Forking is a fairly common way of fixing different bugs in dependencies, and also carries some nice reliability advantages, as the code you use is now in your own control. It has the downside of breaking off the normal version stream of the dependency, but its a decent short-term solution to vulnerabilities in direct dependencies. Unfortunately, forking is not a viable option for patching indirect dependencies.
### Static Patching at Build Time
Another opportunity to patch a dependency is during build time. This type of patching is more complicated, as it requires:
1. Storing a patch in a file (often a  _.patch_  file, or an alternative JAR file with the issue fixed)
2. Installing the dependencies as usual
3. Determining where the dependency youd like to patch was installed
4. Applying the patch by modifying or swapping out the risky code
These steps are not trivial, but theyre also usually doable using package manager commands. If a vulnerability is worth fixing, and there are no easier means to fix it, this approach should be considered.
This is a classic problem for tools to address, as patches can be reused and their application can be repeated. However, at the time of this writing, Snyk is the only SCA tool that maintains patches in its DB and lets you apply them in your pipeline. I predict over time more and more tools will adopt this approach.
### Dynamic Patching at Boot Time
In certain programming languages, classes can also be modified at runtime, a technique often referred to as "monkey patching." Monkey patching can be used to fix vulnerabilities, though that practice has not become the norm in any ecosystem. The most prevalent use of monkey patching to fix vulnerabilities is in Ruby on Rails, where the Rails team has often released patches for vulnerabilities in the libraries it maintains.
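For illustration only, here is roughly what that pattern looks like in Python, using a stand-in module and a hypothetical flaw rather than any real library or advisory:

```
import types

# Stand-in for a third-party module with a hypothetically vulnerable function.
some_library = types.ModuleType("some_library")
some_library.parse = lambda data: data.upper()

_original_parse = some_library.parse

def _hardened_parse(data):
    # Reject the input pattern that would trigger the hypothetical flaw,
    # then delegate to the original implementation for everything else.
    if "<script" in data.lower():
        raise ValueError("potentially malicious input rejected")
    return _original_parse(data)

# The monkey patch: every caller that looks up some_library.parse from now
# on gets the hardened wrapper, without modifying the package on disk.
some_library.parse = _hardened_parse

print(some_library.parse("hello"))  # HELLO
```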
### Other Remediation Paths
So far, Ive stated upgrades are the best way to address a vulnerability, and patching the second best. However, what should you do when you cannot (or will not) upgrade nor patch?
In those cases, you have no choice but to dig deeper. You need to understand the vulnerability better, and how it plays into your application. If it indeed puts your application at notable risk, there are a few steps you can take.
### Removal
Removing a dependency is a very effective way of fixing its vulnerabilities. Unfortunately, youll be losing its functionality at the same time.
Dropping a dependency is often hard, as it by definition requires changes to your actual code. That said, such removal may turn out to be easy—for instance, when a dependency was used for convenience and can be rewritten instead, or when a comparable alternative exists in the ecosystem.
Easy or hard, removing a dependency should always be considered an option, and weighed against the risk of keeping it.
### External Mitigation
If you cant fix the vulnerable code, you can try to block attacks that attempt to exploit it instead. Introducing a rule in a web app firewall, modifying the parts of your app that accept related user input, or even blocking a port are all potential ways to mitigate a vulnerability.
Whether you can mitigate and how to do so depends on the specific vulnerability and application, and in many cases such protection is impossible or high risk. That said, the most trivially exploited vulnerabilities, such as the March 2017 Struts2 RCE and ImageTragick, are often the ones most easily identified and blocked, so this approach is definitely worth exploring.
###### Tip
### Protecting Against Unknown Vulnerabilities
Once youre aware of a known vulnerability, your best move is to fix it, and external mitigation is a last resort. However, security controls that protect against unknown vulnerabilities, ranging from web app firewalls to sandboxed processes to ensuring least privilege, can often protect you from known vulnerabilities as well.
### Log Issue
Last but not least, even if you choose not to remediate the issue, the least you can do is create an issue for it. Beyond its risk management advantages, logging the issue will remind you to re-examine the remediation options over time—for instance, looking for newly available upgrades or patches that can help.
If you have a security operations team, make sure to make them aware of vulnerabilities you are not solving right now. This information can prove useful when they triage suspicious behavior on the network, as such behavior may come down to this security hole being exploited.
### Remediation Process
Beyond the specific techniques, there are a few broader guidelines when it comes to remediating issues.
### Ignoring Issues
If you choose not to fix an issue, or to fix it through a custom path, youll need to tell your SCA tool you did. Otherwise, the tool will continue to indicate this problem.
All OSS security tools support ignoring a vulnerability, but have slightly different capabilities. You should consider the following, and try to note that in your tool of choice:
* Are you ignoring the issue because it doesnt affect you (perhaps youve mitigated it another way) or because youve accepted the risk? This may reflect differently in your top-level reports.
* Do you want to mute the issue indefinitely, or just "snooze" it? Ignoring temporarily is common for low-severity issues that dont yet have an upgrade, where youre comfortable taking the risk for a bit and anticipate an upgrade will show up soon.
* Do you want to ignore all instances of this known vulnerability (perhaps it doesnt apply to your system), or only certain vulnerable paths (which, after a careful vetting process, youve determined to be non-exploitable)?
Properly tagging the reason for muting an alert helps manage these vulnerabilities over time and across projects, and reduces the chance of an issue being wrongfully ignored and slipping through the cracks.
### Fix All Vulnerable Paths
For all the issues youre not ignoring, remember that remediation has to be done for  _every vulnerable path_ .
This is especially true for upgrades, as every path must be assessed for upgrade separately, but also applies to patches in many ecosystems.
### Track Remediations Over Time
As already mentioned, a fix is typically issued for the vulnerable package first, and only later propagates through the dependency chain as other libraries upgrade to use the newer (and safer) version. Similarly, community or author code contributions are created constantly, addressing issues that werent previously fixable.
Therefore, its worth tracking remediation options over time. For ignored issues, periodically check if an easy fix is now available. For patched issues, track potential updates you can switch to. Certain SCA tools automate this tracking and notify you (or open automated pull requests) when such new remediations are available.
### Invest in Making Fixing Easy
The unfortunate reality is that new vulnerabilities in libraries are discovered all the time. This is a fact of life—code will have bugs, some of those bugs are security bugs (vulnerabilities), and some of those are disclosed. Therefore, you and your team should expect to get a constant stream of vulnerability notifications, which you need to act on.
If fixing these vulnerabilities isn't easy, your team will not do it. Fixing these issues competes with many priorities, and it's oh-so-easy to put off this invisible risk. If each alert requires a lot of time to triage and determine a fix for, the ensuing behavior would likely be to either put it off or try to convince yourself it's not a real problem.
In the world of operating systems, fixing has become the default action. In fact, "patching your servers" means taking in a feed of fixes, often without ever knowing which vulnerabilities we fix. We should strive to achieve at least this level of simplicity when dealing with vulnerable app dependencies too.
Part of this effort is on tooling providers. SCA tools should let you fix vulnerabilities with a click or proactive pull requests, or patch them with a single command, like `apt-get upgrade` does on servers. The other part of the effort is on you. Consider it a high priority to make vulnerability remediation easy, choose your tools accordingly, and put in the effort to enrich or adapt those tools to fit your workflow.
### Summary
You should always keep in mind that finding these vulnerabilities isnt the goal—fixing them is. Because fixing vulnerabilities is something your team will need to do often, defining the processes and tools to get that done is critical.
A great way to get started with remediation is to find vulnerabilities that can be fixed with a non-breaking upgrade, and get those upgrades done. While not entirely risk-free, these upgrades should be backward compatible, and getting these security holes fixed gets you off to a very good start.
[1][8]Stats based on vulnerabilities curated in the Snyk vulnerability DB.
This is an excerpt from [Securing Open Source Libraries][16], by Guy Podjarny. 
[Read the preceding chapter][17] or [view the full report][18].
-------------------------------------
作者简介:
Guy Podjarny (Guypo) is a web performance researcher/evangelist and Akamai's Web CTO, focusing primarily on Mobile and Front-End performance. As a researcher, Guy frequently runs large scale tests, exploring performance in the real world and matching it to how browsers behave, and was one of the first to highlight the performance implications of Responsive Web Design. Guy is also the author of Mobitest, a free mobile measurement tool, and contributes to various open source tools. Guy was previously the co-founder and CTO of blaze.io, ac...
--------------------------------------------------------------------------------
via: https://www.oreilly.com/ideas/mitigating-known-security-risks-in-open-source-libraries
作者:[ Guy Podjarny][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.oreilly.com/people/4dda0-guy-podjarny
[1]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=security-post-safari-right-rail-cta
[2]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=security-post-safari-right-rail-cta
[3]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=security-post-safari-right-rail-cta
[4]:https://www.oreilly.com/ideas/mitigating-known-security-risks-in-open-source-libraries#id-xJ0u4SBFphz
[5]:https://snyk.io/vuln/npm:jquery:20150627
[6]:https://github.com/chjj/marked/pull/592
[7]:https://github.com/snyk/vulnerabilitydb
[8]:https://www.oreilly.com/ideas/mitigating-known-security-risks-in-open-source-libraries#id-xJ0u4SBFphz-marker
[9]:https://pixabay.com/en/machine-mill-industry-steam-2881186/
[10]:https://www.oreilly.com/ideas/mitigating-known-security-risks-in-open-source-libraries
[11]:https://www.oreilly.com/people/4dda0-guy-podjarny
[12]:https://www.oreilly.com/people/4dda0-guy-podjarny
[13]:https://www.safaribooksonline.com/library/view/securing-open-source/9781491996980/?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[14]:https://www.oreilly.com/ideas/finding-vulnerable-open-source-packages?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[15]:https://www.safaribooksonline.com/library/view/securing-open-source/9781491996980/?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[16]:https://www.safaribooksonline.com/library/view/securing-open-source/9781491996980/?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[17]:https://www.oreilly.com/ideas/finding-vulnerable-open-source-packages?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[18]:https://www.safaribooksonline.com/library/view/securing-open-source/9781491996980/?utm_source=oreilly&utm_medium=newsite&utm_campaign=fixing-vulnerable-open-source-packages
[19]:https://pixabay.com/en/machine-mill-industry-steam-2881186/

View File

@ -0,0 +1,85 @@
Reckoning The Spectre And Meltdown Performance Hit For HPC
============================================================
![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2015/03/intel-chip-logo-bw-200x147.jpg)
While no one has yet created an exploit to take advantage of the Spectre and Meltdown speculative execution vulnerabilities that were exposed by Google six months ago and that were revealed in early January, it is only a matter of time. The [patching frenzy has not settled down yet][2], and a big concern is not just whether these patches fill the security gaps, but at what cost they do so in terms of application performance.
To try to ascertain the performance impact of the Spectre and Meltdown patches, most people have relied on comments from Google on the negligible nature of the performance hit on its own applications and some tests done by Red Hat on a variety of workloads, [which we profiled in our initial story on the vulnerabilities][3]. This is a good starting point, but what companies really need to do is profile the performance of their applications before and after applying the patches and in such a fine-grained way that they can use the data to debug the performance hit and see if there is any remediation they can take to alleviate the impact.
In the meantime, we are relying on researchers and vendors to figure out the performance impacts. Networking chip maker Mellanox Technologies, always eager to promote the benefits of the offload model of its switch and network interface chips, has run some tests to show the effects of the Spectre and Meltdown patches on high performance networking for various workloads and using various networking technologies, including its own Ethernet and InfiniBand devices and Intel's OmniPath. Some HPC researchers at the University of Buffalo have also done some preliminary benchmarking of selected HPC workloads to see the effect on compute and network performance. This is a good starting point, but is far from a complete picture of the impact that might be seen on HPC workloads after organizations deploy the Spectre and Meltdown patches to their systems.
To recap, here is what Red Hat found out when it tested the initial Spectre and Meltdown patches running its Enterprise Linux 7 release on servers using Intels “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake” Xeon SP processors:
* **Measurable, 8 percent to 19 percent:** Highly cached random memory, with buffered I/O, OLTP database workloads, and benchmarks with high kernel-to-user space transitions are impacted between 8 percent and 19 percent. Examples include OLTP Workloads (TPC), sysbench, pgbench, netperf (< 256 byte), and fio (random I/O to NVMe).
* **Modest, 3 percent to 7 percent:** Database analytics, Decision Support System (DSS), and Java VMs are impacted less than the Measurable category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests to moderate level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour and overall analytic timing (sec).
* **Small, 2 percent to 5 percent:** HPC CPU-intensive workloads are affected the least with only 2 percent to 5 percent performance impact because jobs run mostly in user space and are scheduled using CPU pinning or NUMA control. Examples include Linpack NxN on X86 and SPECcpu2006.
* **Minimal impact:** Linux accelerator technologies that generally bypass the kernel in favor of user direct access are the least affected, with less than 2% overhead measured. Examples tested include DPDK (VsPERF at 64 byte) and OpenOnload (STAC-N). Userspace accesses to VDSO like get-time-of-day are not impacted. We expect similar minimal impact for other offloads.
And just to remind you, according to Red Hat, containerized applications running atop Linux do not incur an extra Spectre or Meltdown penalty compared to applications running on bare metal, because they are implemented as generic Linux processes themselves. But for applications running inside virtual machines atop hypervisors, Red Hat does expect that, thanks to the increase in the frequency of user-to-kernel transitions, the performance hit will be higher. (How much has not yet been revealed.)
Gilad Shainer, the vice president of marketing for the InfiniBand side of the Mellanox house, shared some initial performance data from the companys labs with regard to the Spectre and Meltdown patches. ([The presentation is available online here.][4])
In general, Shainer tells  _The Next Platform_ , the offload model that Mellanox employs in its InfiniBand switches (RDMA is a big component of this) and in its Ethernet (the RoCE clone of RDMA is used here) is a very big deal given the fact that the network drivers bypass the operating system kernels. The exploits take advantage, in one of three forms, of the porous barrier between the kernel and user spaces in the operating systems, so anything that is kernel heavy will be adversely affected. This, says Shainer, includes the TCP/IP protocol that underpins Ethernet as well as the OmniPath protocol, which by its nature tries to have the CPUs in the system do a lot of the network processing. Intel and others who have used an onload model have contended that this allows for networks to be more scalable, and clearly there are very scalable InfiniBand and OmniPath networks, with many thousands of nodes, so both approaches seem to work in production.
Here are the feeds and speeds on the systems that Mellanox tested on two sets of networking tests. For the comparison of Ethernet with RoCE added and standard TCP over Ethernet, the hardware was a two-socket server using Intel's Xeon E5-2697A v4 running at 2.60 GHz. This machine was configured with Red Hat Enterprise Linux 7.4, with kernel versions 3.10.0-693.11.6.el7.x86_64 and 3.10.0-693.el7.x86_64. (Those numbers  _are_  different: one of them has an  _11.6_  in the middle.) The machines were equipped with ConnectX-5 server adapters with firmware 16.22.0170 and the MLNX_OFED_LINUX-4.3-0.0.5.0 driver. The workload that was tested was not a specific HPC application, but rather a very low level, homegrown interconnect benchmark that is used to stress switch chips and NICs to see their peak  _sustained_  performance, as distinct from peak  _theoretical_  performance, which is the absolute ceiling. This particular test was run on a two-node cluster, passing data from one machine to the other.
Here is how the performance stacked up before and after the Spectre and Meltdown patches were added to the systems:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-roce-versus-tcp.jpg)][5]
As you can see, at this very low level, there is no impact on network performance between two machines supporting RoCE on Ethernet, but running plain vanilla TCP without an offload on top of Ethernet, there are some big performance hits. Interestingly, on this low-level test, the impact was greatest on small message sizes in the TCP stack and then disappeared as the message sizes got larger.
On a separate round of tests pitting InfiniBand from Mellanox against OmniPath from Intel, the server nodes were configured with a pair of Intel Xeon SP Gold 6138 processors running at 2 GHz, also with Red Hat Enterprise Linux 7.4 with the 3.10.0-693.el7.x86_64 and 3.10.0-693.11.6.el7.x86_64 kernel versions. The OmniPath adapter uses the IntelOPA-IFS.RHEL74-x86_64.10.6.1.0.2 driver and the Mellanox ConnectX-5 adapter uses the MLNX_OFED 4.2 driver.
Here is how the InfiniBand and OmniPath protocols did on the tests before and after the patches:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-infiniband-versus-omnipath.jpg)][6]
Again, thanks to the offload model and the fact that this was a low level benchmark that did not hit the kernel very much (and some HPC applications might cross that boundary and therefore invoke the Spectre and Meltdown performance penalties), there was no real effect on the two-node cluster running InfiniBand. With the OmniPath system, the impact was around 10 percent for small message sizes, and then grew to 25 percent or so once the message sizes transmitted reached 512 bytes.
We have no idea what the performance implications are for clusters of more than two machines using the Mellanox approach. It would be interesting to see if the degradation compounds or doesnt.
### Early HPC Performance Tests
While such low level benchmarks provide some initial guidance on what the effect might be of the Spectre and Meltdown patches on HPC performance, what you really need is a benchmark run of real HPC applications running on clusters of various sizes, both before and after the Spectre and Meltdown patches are applied to the Linux nodes. A team of researchers led by Nikolay Simakov at the Center For Computational Research at SUNY Buffalo fired up some HPC benchmarks and a performance monitoring tool derived from the National Science Foundation's Extreme Digital (XSEDE) program to see the effect of the Spectre and Meltdown patches on how much work they could get done, as gauged by wall clock time.
The paper that Simakov and his team put together on the initial results [is found here][7]. The tool that was used to monitor the performance of the systems was called XD Metrics on Demand, or XDMoD, and it was open sourced and is available for anyone to use. (You might consider [Open XDMoD][8] for your own metrics to determine the performance implications of the Spectre and Meltdown patches.) The benchmarks tested by the SUNY Buffalo researchers included the NAMD molecular dynamics and NWChem computational chemistry applications, as well as the HPC Challenge suite (which itself includes the STREAM memory bandwidth test), the NAS Parallel Benchmarks (NPB), and the Intel MPI Benchmarks (IMB). The researchers also tested the IOR file reading and the MDTest metadata benchmark tests from Lawrence Livermore National Laboratory. The IOR and MDTest benchmarks were run in local mode and in conjunction with a GPFS parallel file system running on an external 3 PB storage cluster. (The tests with a ".local" suffix in the table are run on storage in the server nodes themselves.)
SUNY Buffalo has an experimental cluster with two-socket machines based on Intel “Nehalem” Xeon L5520 processors, which have eight cores and which are, by our reckoning, very long in the tooth indeed in that they are nearly nine years old. Each node has 24 GB of main memory and has 40 Gb/sec QDR InfiniBand links cross connecting them together. The systems are running the latest CentOS 7.4.1708 release, without and then with the patches applied. (The same kernel patches outlined above in the Mellanox test.) Simakov and his team ran each benchmark on a single node configuration and then ran the benchmark on a two node configuration, and it shows the difference between running a low-level benchmark and actual applications when doing tests. Take a look at the table of results:
[![](https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/suny-buffalo-spectre-meltdown-test-table.jpg)][9]
The "before" numbers for each application tested are based on around 20 runs, and the "after" numbers on around 50 runs. For the core HPC applications (NAMD, NWChem, and the elements of HPCC), the performance degradation was between 2 percent and 3 percent, consistent with what Red Hat told people to expect back in the first week that the Spectre and Meltdown vulnerabilities were revealed and the initial patches were available. However, moving on to two-node configurations, where network overhead was taken into account, the performance impact ranged from 5 percent to 11 percent. This is more than you would expect based on the low level benchmarks that Mellanox has done. Just to make things interesting, on the IOR and MDTest benchmarks, moving from one to two nodes actually lessened the performance impact; running the IOR test on the local disks resulted in a smaller performance hit than over the network for a single node, but was not as low as for a two-node cluster running out to the GPFS file system.
There is a lot of food for thought in this data, to say the least.
What we want to know and what the SUNY Buffalo researchers are working on is what happens to performance on these HPC applications when the cluster is scaled out.
“We will know that answer soon,” Simakov tells  _The Next Platform_ . “But there are only two scenarios that are possible. Either it is going to get worse or it is going to stay about the same as a two-node cluster. We think that it will most likely stay the same, because all of the MPI communication happens through the shared memory on a single node, and when you get to two nodes, you get it into the network fabric and at that point, you are probably paying all of the extra performance penalties.”
We will update this story with data on larger scale clusters as soon as Simakov and his team provide the data.
--------------------------------------------------------------------------------
via: https://www.nextplatform.com/2018/01/30/reckoning-spectre-meltdown-performance-hit-hpc/
作者:[Timothy Prickett Morgan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nextplatform.com/author/tpmn/
[1]:https://www.nextplatform.com/author/tpmn/
[2]:https://www.nextplatform.com/2018/01/18/datacenters-brace-spectre-meltdown-impact/
[3]:https://www.nextplatform.com/2018/01/08/cost-spectre-meltdown-server-taxes/
[4]:http://www.mellanox.com/related-docs/presentations/2018/performance/Spectre-and-Meltdown-Performance.pdf?homepage
[5]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-roce-versus-tcp.jpg
[6]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/mellanox-spectre-meltdown-infiniband-versus-omnipath.jpg
[7]:https://arxiv.org/pdf/1801.04329.pdf
[8]:http://open.xdmod.org/7.0/index.html
[9]:https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-ssl.com/wp-content/uploads/2018/01/suny-buffalo-spectre-meltdown-test-table.jpg

View File

@ -0,0 +1,112 @@
Trying Other Go Versions
============================================================
While I generally use the current release of Go, sometimes I need to try a different version. For example, I need to check that all the examples in my [Guide to JSON][2] work with [both the supported releases of Go][3] (1.8.6 and 1.9.3 at the time of writing) along with go1.10rc1.
I primarily use the current version of Go, updating it when new versions are released. I try out other versions as needed following the methods described in this article.
### Trying Betas and Release Candidates[¶][4]
When [go1.8beta2 was released][5], a new tool for trying the beta and release candidates was also released. It lets you `go get` the beta and easily run it alongside your existing Go installation:
```
go get golang.org/x/build/version/go1.8beta2
```
This downloads and builds a small program that will act like the `go` tool for that specific version. The full release can then be downloaded and installed with:
```
go1.8beta2 download
```
This downloads the release from [https://golang.org/dl][6] and installs it into `$HOME/sdk` or `%USERPROFILE%\sdk`.
Now you can use `go1.8beta2` as if it were the normal Go command.
This method works for [all the beta and release candidates][7] released after go1.8beta2.
### Trying a Specific Release[¶][8]
While only beta and release candidates are provided, they can easily be adapted to work with any released version. For example, to use go1.9.2:
```
package main
import (
"golang.org/x/build/version"
)
func main() {
version.Run("go1.9.2")
}
```
Replace `go1.9.2` with the release you want to run and build/install as usual.
Since the program I use to build my [Guide to JSON][9] calls `go` itself (for each example), I build this as `go` and prepend the directory to my `PATH` so it will use this one instead of my normal version.
### Trying Any Release[¶][10]
This small program can be extended so you can specify the release to use instead of having to maintain binaries for each version.
```
package main
import (
"fmt"
"os"
"golang.org/x/build/version"
)
func main() {
if len(os.Args) < 2 {
fmt.Printf("USAGE: %v <version> [commands as normal]\n",
os.Args[0])
os.Exit(1)
}
v := os.Args[1]
os.Args = append(os.Args[0:1], os.Args[2:]...)
version.Run("go" + v)
}
```
I have this installed as `gov` and run it like `gov 1.8.6 version`, using the version I want to run.
### Trying a Source Build (e.g., tip)[¶][11]
I also use this same infrastructure to manage source builds of Go, such as tip. There's just a little trick to it:
* use the directory `$HOME/sdk/go<version>` (e.g., `$HOME/sdk/gotip`)
* [build as normal][1]
* `touch $HOME/sdk/go<version>/.unpacked-success` This is an empty file used as a sentinel to indicate the download and unpacking was successful.
(On Windows, replace `$HOME/sdk` with `%USERPROFILE%\sdk`)
--------------------------------------------------------------------------------
via: https://pocketgophers.com/trying-other-versions/
作者:[Nathan Kerr ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:nathan@pocketgophers.com
[1]:https://golang.org/doc/install/source
[2]:https://pocketgophers.com/guide-to-json/
[3]:https://pocketgophers.com/when-should-you-upgrade-go/
[4]:https://pocketgophers.com/trying-other-versions/#trying-betas-and-release-candidates
[5]:https://groups.google.com/forum/#!topic/golang-announce/LvfYP-Wk1s0
[6]:https://golang.org/dl
[7]:https://godoc.org/golang.org/x/build/version#pkg-subdirectories
[8]:https://pocketgophers.com/trying-other-versions/#trying-a-specific-release
[9]:https://pocketgophers.com/guide-to-json/
[10]:https://pocketgophers.com/trying-other-versions/#trying-any-release
[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip

View File

@ -1,3 +1,5 @@
translating---geekpi
Use of du & df commands (with examples)
======
In this article I will discuss the du & df commands. Both du & df are important utilities of a Linux system & show the disk usage of Linux filesystems. Here we will share the usage of both commands with some examples.

View File

@ -0,0 +1,117 @@
A history of low-level Linux container runtimes
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
At Red Hat we like to say, "Containers are Linux--Linux is Containers." Here is what this means. Traditional containers are processes on a system that usually have the following three characteristics:
### 1\. Resource constraints
When you run lots of containers on a system, you do not want to have any container monopolize the operating system, so we use resource constraints to control things like CPU, memory, network bandwidth, etc. The Linux kernel provides the cgroups feature, which can be configured to control the container process resources.
### 2\. Security constraints
Usually, you do not want your containers being able to attack each other or attack the host system. We take advantage of several features of the Linux kernel to set up security separation, such as SELinux, seccomp, capabilities, etc.
### 3\. Virtual separation
Container processes should not have a view of any processes outside the container. They should be on their own network. Container processes need to be able to bind to port 80 in different containers. Each container needs a different view of its image, needs its own root filesystem (rootfs). In Linux we use kernel namespaces to provide virtual separation.
Therefore, a process that runs in a cgroup, has security settings, and runs in namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you see that systemd runs in a cgroup.
```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```
The `ps` command shows you that the systemd process has an SELinux label ...
```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0             1 ?     00:00:48 systemd
```
and capabilities.
```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
```
Finally, if you look at the `/proc/1/ns` subdir, you will see the namespace that systemd runs in.
```
ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```
If PID 1 (and really every other process on the system) has resource constraints, security settings, and namespaces, I argue that every process on the system is in a container.
Container runtime tools just modify these resource constraints, security settings, and namespaces. Then the Linux kernel executes the processes. After the container is launched, the container runtime can monitor PID 1 inside the container or the container's `stdin`/`stdout`--the container runtime manages the lifecycles of these processes.
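As a small illustration of the namespace part of that statement, the sketch below uses Python's `subprocess` module to call the util-linux `unshare` tool (assumed to be installed, and run with enough privileges to create the namespaces) and runs `ps` inside fresh PID and mount namespaces:

```
import subprocess

# Launch "ps -ef" inside new PID and mount namespaces. With --mount-proc,
# /proc is remounted for the new PID namespace, so ps only sees the
# processes that exist inside it.
subprocess.run(
    ["unshare", "--fork", "--pid", "--mount-proc", "ps", "-ef"],
    check=True,
)
```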
### Container runtimes
You might say to yourself, well systemd sounds pretty similar to a container runtime. Well, after having several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it would be worth discussing container runtimes and giving some historical context.
[Docker][1] is often called a container runtime, but "container runtime" is an overloaded term. When folks talk about a "container runtime," they're really talking about higher-level tools like Docker, [ CRI-O][2], and [ RKT][3] that come with developer functionality. They are API driven. They include concepts like pulling the container image from the container registry, setting up the storage, and finally launching the container. Launching the container often involves running a specialized tool that configures the kernel to run the container, and these are also referred to as "container runtimes." I will refer to them as "low-level container runtimes." Daemons like Docker and CRI-O, as well as command-line tools like [ Podman][4] and [ Buildah][5], should probably be called "container managers" instead.
When Docker was originally written, it launched containers using the `lxc` toolset, which predates `systemd-nspawn`. Red Hat's original work with Docker was to try to integrate [`libvirt`][6] (`libvirt-lxc`) into Docker as an alternative to the `lxc` tools, which were not supported in RHEL. `libvirt-lxc` also did not use `systemd-nspawn`. At that time, the systemd team was saying that `systemd-nspawn` was only a tool for testing, not for production.
At the same time, the upstream Docker developers, including some members of my Red Hat team, decided they wanted a golang-native way to launch containers, rather than launching a separate application. Work began on libcontainer, as a native golang library for launching containers. Red Hat engineering decided that this was the best path forward and dropped `libvirt-lxc`.
Later, the [Open Container Initiative][7] (OCI) was formed, partly because people wanted to be able to launch containers in additional ways. Traditional namespace-separated containers were popular, but people also had the desire for virtual machine-level isolation. Intel and [Hyper.sh][8] were working on KVM-separated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, so the [OCI Runtime Specification][9] was born.
The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the rootfs of the container. Tools can generate this JSON file. Then other tools can read this JSON file and execute a container on the rootfs. The libcontainer parts of Docker were broken out and donated to the OCI. The upstream Docker engineers and our engineers helped create a new frontend tool to read the OCI Runtime Specification JSON file and interact with libcontainer to run the container. This tool, called [`runc`][10], was also donated to the OCI. While `runc` can read the OCI JSON file, users are left to generate it themselves. `runc` has since become the most popular low-level container runtime. Almost all container-management tools support `runc`, including CRI-O, Docker, Buildah, Podman, and [Cloud Foundry Garden][11]. Since then, other tools have also implemented the OCI Runtime Spec to execute OCI-compliant containers.
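To give a rough feel for the shape of that JSON file, here is a deliberately simplified sketch written as a Python dictionary; a real `config.json` (for example, one produced by `runc spec`) contains many more fields, so treat this as an outline rather than a valid configuration:

```
import json

# Simplified outline of an OCI runtime config: which binary to run,
# where the rootfs lives, and which namespaces to create.
config = {
    "ociVersion": "1.0.0",
    "process": {
        "terminal": True,
        "args": ["sh"],       # the binary (and arguments) to run
        "cwd": "/",
    },
    "root": {
        "path": "rootfs",     # location of the container's root filesystem
        "readonly": False,
    },
    "linux": {
        "namespaces": [
            {"type": "pid"},
            {"type": "mount"},
            {"type": "network"},
        ],
    },
}

print(json.dumps(config, indent=2))
```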
Both [Clear Containers][12] and Hyper.sh's `runV` tools were created to use the OCI Runtime Specification to execute KVM-based containers, and they are combining their efforts in a new project called [Kata][15]. Last year, Oracle created a demonstration version of an OCI runtime tool called [RailCar][13], written in Rust. It's been two months since the GitHub project was updated, so it's unclear if it is still in development. A couple of years ago, Vincent Batts worked on adding a tool, [`nspawn-oci`][14], that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but no one really picked up on it, and it was not a native implementation.
If someone wants to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted by the systemd team for support, then CRI-O, Docker, and eventually Podman would be able to use it in addition to `runc` and Clear Container/runV ([Kata][15]). (No one on my team is working on this.)
The bottom line is, back three or four years, the upstream developers wanted to write a low-level golang tool for launching containers, and this tool ended up becoming `runc`. Those developers at the time had a C-based tool for doing this called `lxc` and moved away from it. I am pretty sure that at the time they made the decision to build libcontainer, they would not have been interested in `systemd-nspawn` or any other non-native (golang) way of running "namespace" separated containers.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
作者:[Daniel Walsh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rhatdan
[1]:https://github.com/docker
[2]:https://github.com/kubernetes-incubator/cri-o
[3]:https://github.com/rkt/rkt
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[5]:https://github.com/projectatomic/buildah
[6]:https://libvirt.org/
[7]:https://www.opencontainers.org/
[8]:https://www.hyper.sh/
[9]:https://github.com/opencontainers/runtime-spec
[10]:https://github.com/opencontainers/runc
[11]:https://github.com/cloudfoundry/garden
[12]:https://clearlinux.org/containers
[13]:https://github.com/oracle/railcar
[14]:https://github.com/vbatts/nspawn-oci
[15]:https://github.com/kata-containers

View File

@ -0,0 +1,159 @@
Fastest way to unzip a zip file in Python
======
So the context is this: a zip file is uploaded into a [web service][1] and Python then needs to extract it and analyze and deal with each file within. In this particular application, it looks at each file's individual name and size, compares that to what has already been uploaded to AWS S3, and if the file is believed to be different or new, it gets uploaded to AWS S3.
[![Uploads today][2]][3]
The challenge is that these zip files that come in are huuuge. The average is 560MB but some are as much as 1GB. Within them, there are mostly plain text files but there are some binary files in there too that are huge. It's not unusual that each zip file contains 100 files and 1-3 of those make up 95% of the zip file size.
At first I tried unzipping the file, in memory, and dealing with one file at a time. That failed spectacularly with various memory explosions and EC2 running out of memory. I guess it makes sense. First you have the 1GB file in RAM, then you unzip each file and now you have possibly 2-3GB all in memory. So, the solution, after much testing, was to dump the zip file to disk (in a temporary directory in `/tmp`) and then iterate over the files. This worked much better but I still noticed the whole unzipping was taking up a huge amount of time. **Is there perhaps a way to optimize that?**
### Baseline function
First it's these common functions that simulate actually doing something with the files in the zip file:
```
def _count_file(fn):
    with open(fn, 'rb') as f:
        return _count_file_object(f)

def _count_file_object(f):
    # Note that this iterates on 'f'.
    # You *could* do 'return len(f.read())'
    # which would be faster but potentially memory
    # inefficient and unrealistic in terms of this
    # benchmark experiment.
    total = 0
    for line in f:
        total += len(line)
    return total
```
Here's the simplest one possible:
```
def f1(fn, dest):
    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        zf.extractall(dest)

    total = 0
    for root, dirs, files in os.walk(dest):
        for file_ in files:
            fn = os.path.join(root, file_)
            total += _count_file(fn)
    return total
```
If I analyze it a bit more carefully, I find that it spends about 40% doing the `extractall` and 60% doing the looping over files and reading their full length.
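That kind of breakdown can be measured with the standard-library profiler. A minimal sketch, using placeholder file and directory names:

```
import cProfile
import pstats

# Profile the serial implementation; "symbols.zip" and "/tmp/out" are
# placeholder paths for this sketch.
cProfile.run("f1('symbols.zip', '/tmp/out')", "f1.prof")
stats = pstats.Stats("f1.prof")
stats.sort_stats("cumulative").print_stats(10)
```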
### First attempt
My first attempt was to try to use threads. You create an instance of `zipfile.ZipFile`, extract every file name within and start a thread for each name. Each thread is given a function that does the "meat of the work" (in this benchmark, iterating over the file and getting its total size). In reality that function does a bunch of complicated S3, Redis and PostgreSQL stuff but in my benchmark I just made it a function that figures out the total length of file. The thread pool function:
```
def f2(fn, dest):

    def unzip_member(zf, member, dest):
        zf.extract(member, dest)
        fn = os.path.join(dest, member.filename)
        return _count_file(fn)

    with open(fn, 'rb') as f:
        zf = zipfile.ZipFile(f)
        futures = []
        with concurrent.futures.ThreadPoolExecutor() as executor:
            for member in zf.infolist():
                futures.append(
                    executor.submit(
                        unzip_member,
                        zf,
                        member,
                        dest,
                    )
                )
            total = 0
            for future in concurrent.futures.as_completed(futures):
                total += future.result()
    return total
```
**Result: ~10% faster**
### Second attempt
So perhaps the GIL is blocking me. The natural inclination is to try to use multiprocessing to spread the work across multiple available CPUs. But doing so has the disadvantage that you can't pass around a non-pickleable object so you have to send just the filename to each future function:
```
import concurrent.futures
import os
import zipfile


def unzip_member_f3(zip_filepath, filename, dest):
with open(zip_filepath, 'rb') as f:
zf = zipfile.ZipFile(f)
zf.extract(filename, dest)
fn = os.path.join(dest, filename)
return _count_file(fn)
def f3(fn, dest):
with open(fn, 'rb') as f:
zf = zipfile.ZipFile(f)
futures = []
with concurrent.futures.ProcessPoolExecutor() as executor:
for member in zf.infolist():
futures.append(
executor.submit(
unzip_member_f3,
fn,
member.filename,
dest,
)
)
total = 0
for future in concurrent.futures.as_completed(futures):
total += future.result()
return total
```
**Result: ~300% faster**
### That's cheating!
The problem with using a pool of processes is that it requires that the original `.zip` file exists on disk. So in my web server, to use this solution, I'd first have to save the in-memory ZIP file to disk, then invoke this function. Not sure what the cost of that is; it's not likely to be cheap.
Well, it doesn't hurt to poke around. Perhaps it could be worth it if the extraction was significantly faster.
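For what it's worth, dumping the uploaded in-memory ZIP to a real file before handing it to the process pool could look something like this (a sketch only; `tempfile` and the function name are my additions, not the web service's actual code):

```
import shutil
import tempfile


def process_upload(file_buffer, dest):
    # Write the in-memory ZIP to disk so the ProcessPoolExecutor
    # workers in f3 can re-open it by path.
    with tempfile.NamedTemporaryFile(suffix='.zip') as tmp:
        shutil.copyfileobj(file_buffer, tmp)
        tmp.flush()
        return f3(tmp.name, dest)
```

Timing that copy against the ~300% win would tell you whether it actually pays off.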
But remember! This optimization depends on using up as many CPUs as it possibly can. What if some of those other CPUs are needed for something else going on in `gunicorn`? Those other processes would have to patiently wait till there's a CPU available. Since there are other things going on in this server, I'm not sure I'm willing to let one process take over all the other CPUs.
### Conclusion
Doing it serially turns out to be quite nice. You're bound to one CPU but the performance is still pretty good. Also, just look at the difference in the code between `f1` and `f2`! With the `concurrent.futures` pool classes you can cap the number of CPUs it's allowed to use, but that doesn't feel great either. What if you get the number wrong in a virtual environment? Or what if the number is too low and you don't benefit at all from spreading the workload, so you're just paying overhead to move the work around?
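Capping it is just the `max_workers` argument on the executor; something like this (how many CPUs to leave free is a guess on my part):

```
import concurrent.futures
import os

# Leave a couple of CPUs for gunicorn and whatever else runs on the box.
workers = max(1, (os.cpu_count() or 2) - 2)

with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor:
    future = executor.submit(sum, range(10))
    print(future.result())  # 45
```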
I'm going to stick with `zipfile.ZipFile(file_buffer).extractall(temp_dir)`. It's good enough for this.
### Want to try your hands on it?
I did my benchmarking using a `c5.4xlarge` EC2 server. The files can be downloaded from:
```
wget https://www.peterbe.com/unzip-in-parallel/hack.unzip-in-parallel.py
wget https://www.peterbe.com/unzip-in-parallel/symbols-2017-11-27T14_15_30.zip
```
The `.zip` file there is 34MB which is relatively small compared to what's happening on the server.
The `hack.unzip-in-parallel.py` is a hot mess. It contains a bunch of terrible hacks and ugly stuff but hopefully it's a start.
--------------------------------------------------------------------------------
via: https://www.peterbe.com/plog/fastest-way-to-unzip-a-zip-file-in-python
作者:[Peterbe][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.peterbe.com/
[1]:https://symbols.mozilla.org
[2]:https://cdn-2916.kxcdn.com/cache/b7/bb/b7bbcf60347a5fa91420f71bbeed6d37.png
[3]:https://cdn-2916.kxcdn.com/cache/e6/dc/e6dc20acd37d94239edbbc0727721e4a.png

View File

@ -0,0 +1,912 @@
For your first HTML code, lets help Batman write a love letter
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*kZxbQJTdb4jn_frfqpRg9g.jpeg)
[Image Credit][1]
One fine night, your stomach refuses to digest the large Pizza you had at dinner, and you have to rush to the bathroom in the middle of your sleep.
In the bathroom, while wondering why this is happening to you, you hear a heavy voice from the vent: “Hey, I am Batman.”
What would you do?
Before you panic and stand up in the middle of the critical process, Batman says, “I need your help. I am a super geek, but I don't know HTML. I need to write my love letter in HTML. Would you do it for me?”
Who could refuse a request from Batman, right? Let's write Batman's love letter in HTML.
### Your first HTML file
An HTML webpage is like other files on your computer. As a .doc file opens in MS Word, a .jpg file opens in Image Viewer, and a .html file opens in your Browser.
So, let's create a .html file. You can do this in Notepad, or any other basic editor, but I would recommend using VS Code instead. [Download and install VS Code here][2]. It's free, and the only Microsoft Product I love.
Create a directory in your system and name it “HTML Practice” (without quotes). Inside this directory, create one more directory called “Batman's Love Letter” (without quotes). This will be our project root directory. That means that all our files related to this project will live here.
Open VS Code and press ctrl+n to create a new file and ctrl+s to save the file. Navigate to the “Batman's Love Letter” folder and name the file “loveletter.html” and click save.
Now, if you open this file by double-clicking it from your file explorer, it will open in your default browser. I recommend using Firefox for web development, but Chrome is fine too.
Let's relate this process to something we are already familiar with. Remember the first time you got your hands on a computer? The first thing I did was open MS Paint and draw something. You draw something in Paint and save it as an image and then you can view that image in Image Viewer. Then if you want to edit that image again, you re-open it in Paint, edit it, and save it.
Our current process is quite similar. As we use Paint to create and edit images, we use VS Code to create and edit our HTML files. And as we use Image Viewer to view the images, we use the Browser to view our HTML pages.
### Your first Paragraph in HTML
We have our blank HTML file, so here's the first paragraph Batman wants to write in his love letter:
“After all the battles we fought together, after all the difficult times we saw together, and after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.”
Copy the paragraph in your loveletter.html in VS Code. Enable word wrap by clicking View -> Toggle Word Wrap (alt+z).
Save and open the file in your browser. If it's already open, click refresh on your browser.
Voila! That's your first webpage!
Our first paragraph is ready, but this is not the recommended way of writing a paragraph in HTML. We have a specific way to let the browser know that a text is a paragraph.
If you wrap the text with `<p>` and `</p>`, then the browser will know that the text inside the `<p>` and `</p>` is a paragraph. Let's do this:
```
<p>After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.</p>
```
By writing the paragraph inside `<p>` and `</p>` you created an HTML element. A web page is a collection of HTML elements.
Let's get some of the terminology out of the way first: `<p>` is the opening tag, `</p>` is the closing tag, and “p” is the tag name. The text inside the opening and closing tag of an element is that element's content.
### The “style” attribute
You will see that the text covers the entire width of the screen.
We don't want that. No one wants to read such long lines. Let's give a width of, say, 550px to our paragraph.
We can do that by using the element's “style” attribute. You can define an element's style (for example, width in our case), inside its style attribute. The following line will create an empty style attribute on a “p” element:
```
<p style="">...</p>
```
You see that empty `""`? That's where we will define the looks of the element. Right now we want to set the width to 550px. Let's do that:
```
<p style="width:550px;">
After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
We set the “width” property to 550px separated by a colon “:” and ended by a semicolon “;”.
Also, notice how we put the `<p>` and `</p>` in separate lines and the text content indented with one tab. Styling your code like this makes it more readable.
### Lists in HTML
Next, Batman wants to list some of the virtues of the person that he admires, like this:
“ You complete my darkness with your light. I love:
- the way you see good in the worst things
- the way you handle emotionally difficult situations
- the way you look at Justice
I have learned a lot from you. You have occupied a special place in my heart over time.”
This looks simple.
Let's go ahead and copy the required text below the `</p>`:
```
<p style="width:550px;">
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<p style="width:550px;">
You complete my darkness with your light. I love:
- the way you see good in the worse
- the way you handle emotionally difficult situations
- the way you look at Justice
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```
Save and refresh your browser.
![](https://cdn-images-1.medium.com/max/1000/1*M0Ae5ZpRTucNyucfaaz4uw.jpeg)
Woah! What happened here, where is our list?
If you look closely, you will observe that line breaks are not displayed. We wrote the list items in new lines in our code, but those are displayed in a single line in the browser.
If you want to insert a line break in HTML (newline) you have to mention it using `<br>`. Let's use `<br>` and see how it looks:
```
<p style="width:550px;">
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<p style="width:550px;">
You complete my darkness with your light. I love: <br>
- the way you see good in the worse <br>
- the way you handle emotionally difficult situations <br>
- the way you look at Justice <br>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```
Save and refresh:
![](https://cdn-images-1.medium.com/max/1000/1*Mj4Sr_jUliidxFpEtu0pXw.jpeg)
Okay, now it looks the way we want it to.
Also, notice that we didn't write a `</br>`. Some tags don't need a closing tag (they're called self-closing tags).
One more thing: we didn't use a `<br>` between the two paragraphs, but still the second paragraph starts from a new line. That's because the “p” element automatically inserts a line break.
We wrote our list using plain text, but there are two tags we can use for this same purpose: `<ul>` and `<li>`.
To get the naming out of the way: ul stands for Unordered List, and li stands for List Item. Let's use these to display our list:
```
<p style="width:550px;">
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<p style="width:550px;">
You complete my darkness with your light. I love:
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```
Before copying the code, notice the differences we made:
* We removed all the `<br>`, since each `<li>` automatically displays on a new line
* We wrapped the individual list items between `<li>` and `</li>`.
* We wrapped the collection of all the list items between the `<ul>` and `</ul>`
* We didn't define the width of the “ul” element as we were doing with the “p” element. This is because “ul” is a child of “p” and “p” is already constrained to 550px, so “ul” won't go beyond that.
Let's save the file and refresh the browser to see the results:
![](https://cdn-images-1.medium.com/max/1000/1*aPlMpYVZESPwgUO3Iv-qCA.jpeg)
You will instantly notice that we get bullet points displayed before each list item. We don't need to write that “-” before each list item now.
On careful inspection, you will notice that the last line goes beyond 550px width. Why is that? Because a “ul” element is not allowed inside a “p” element in HTML. Let's put the first and last lines in separate “p” elements:
```
<p style="width:550px;">
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<p style="width:550px;">
You complete my darkness with your light. I love:
</p>
```
```
<ul style="width:550px;">
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
```
```
<p style="width:550px;">
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```
Save and reload.
Notice that this time we defined the width of the “ul” element also. That's because we have now moved our “ul” element out of the “p” element.
Defining the width of all the elements of our letter can become cumbersome. We have a specific element for this purpose: the “div” element. A “div” element is a generic container which is used to group content so it can be easily styled.
Let's wrap our entire letter with a div element and give width:550px to that div element:
```
<div style="width:550px;">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
Great. Our code looks much cleaner now.
### Headings in HTML
Batman is quite happy looking at the results so far, and he wants a heading on the letter. He wants to make the heading: “Bat Letter”. Of course, you saw this name coming already, didn't you? :D
You can add headings using the h1, h2, h3, h4, h5, and h6 tags; h1 is the biggest and main heading and h6 is the smallest one.
![](https://cdn-images-1.medium.com/max/1000/1*Ud-NzfT-SrMgur1WX4LCkQ.jpeg)
Let's make the main heading using h1 and a subheading before the second paragraph:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
Save, and reload.
![](https://cdn-images-1.medium.com/max/1000/1*rzyIl-gHug3nQChqfscU3w.jpeg)
### Images in HTML
Our letter is not complete yet, but before proceeding, one big thing is missing: a Bat logo. Have you ever seen anything Batman owns that doesn't have a Bat logo?
Nope.
So, let's add a Bat logo to our letter.
Including an image in HTML is like including an image in a Word file. In MS Word you go to menu -> insert -> image -> then navigate to the location of the image -> select the image -> click on insert.
In HTML, instead of clicking on the menu, we use `<img>` tag to let the browser know that we need to load an image. We write the location and name of the file inside the “src” attribute. If the image is in the project root directory, we can simply write the name of the image file in the src attribute.
Before we dive into coding this, download this Bat logo from [here][3]. You might want to crop the extra white space in the image. Copy the image in your project root directory and rename it “bat-logo.jpeg”.
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
We included the img tag on line 3. This tag is also a self-closing tag, so we don't need to write `</img>`. In the src attribute, we give the name of the image file. This name should be exactly the same as your image's name, including the extension (.jpeg) and its case.
Save and refresh to see the result.
![](https://cdn-images-1.medium.com/max/1000/1*uMNWAISOACJlzDOONcrGXw.jpeg)
Damn! What just happened?
When you include an image using the img tag, by default the image will be displayed in its original resolution. In our case, the image is much wider than 550px. Let's define its width using the style attribute:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
You will notice that this time we defined width with “%” instead of “px”. When we define a width in “%” it will occupy that % of the parent element's width. So, 100% of 550px will give us 550px.
Save and refresh to see the results.
![](https://cdn-images-1.medium.com/max/1000/1*5c0ngx3BFVlyyP6UNtfYyg.jpeg)
Fantastic! This brings a timid smile to Batman's face :)
### Bold and Italic in HTML
Now Batman wants to confess his love in the last few paragraphs. He has this text for you to write in HTML:
“I have a confession to make
It feels like my chest _does_ have a heart. You make my heart beat. Your smile brings a smile to my face, your pain brings pain to my heart.
I don't show my emotions, but I think this man behind the mask is falling for you.”
While reading this you ask Batman, “Wait, who is this for?” and Batman replies:
“It's for Superman.”
![](https://cdn-images-1.medium.com/max/1000/1*UNDvfIZQJ1Q_goHc-F-IPA.jpeg)
You: Oh! I was going to guess Wonder Woman.
Batman: No, it's Sups, please write “I love you Superman” at the end.
Fine, let's do it then:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>I love you Superman.</p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
```
The letter is almost done, and Batman wants just two more changes. Batman wants the word “does” in the first sentence of the confession paragraph to be italic, and the sentence “I love you Superman” to be in bold.
We use `<em>` and `<strong>` to display text in italic and bold. Let's make these changes:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
```
![](https://cdn-images-1.medium.com/max/1000/1*6hZdQJglbHUcEEHzouk2eA.jpeg)
### Styling in HTML
There are three ways you can style or define the look of an HTML element:
* Inline styling: We write styles using the “style” attribute of the element. This is what we have done up until now. This is not a good practice.
* Embedded styling: We write all the styles within a “style” element wrapped by <style> and </style>.
* Linked stylesheet: We write the styles of all the elements in a separate file with a .css extension. This file is called a stylesheet.
Let's have a look at how we defined the inline style of the “div” until now:
```
<div style="width:550px;">
```
We can write this same style inside `<style>` and `</style>` like this:
```
div{
width:550px;
}
```
In embedded styling, the styles we write are separate from the elements. So we need a way to relate the element and its style. The first word “div” does exactly that. It lets the browser know that whatever style is inside the curly braces `{…}` belongs to the “div” element. Since this phrase determines which element to apply the style to, it's called a selector.
The way we write the style remains the same: property (width) and value (550px) separated by a colon (:) and ended with a semicolon (;).
Let's remove the inline styles from our “div” and “img” elements and write them inside the `<style>` element:
```
<style>
div{
width:550px;
}
img{
width:100%;
}
</style>
```
```
<div>
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
```
Save and refresh, and the result should remain the same.
There is one big problem though: what if there is more than one “div” and “img” element in our HTML file? The styles that we defined for div and img inside the “style” element will apply to every div and img on the page.
If you add another div in your code in the future, then that div will also become 550px wide. We don't want that.
We want to apply our styles to the specific div and img that we are using right now. To do this, we need to give our div and img elements unique ids. Here's how you can give an id to an element using its “id” attribute:
```
<div id="letter-container">
```
and here's how to use this id in our embedded style as a selector:
```
#letter-container{
...
}
```
Notice the “#” symbol. It indicates that it is an id, and the styles inside {…} should apply to the element with that specific id only.
Let's apply this to our code:
```
<style>
#letter-container{
width:550px;
}
#header-bat-logo{
width:100%;
}
</style>
```
```
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```
```
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
```
Our HTML is ready with embedded styling.
However, you can see that as we include more styles, the `<style></style>` block will get bigger. This can quickly clutter our main HTML file. So let's go one step further and use linked styling by copying the content inside our style tag to a new file.
Create a new file in the project root directory and save it as style.css:
```
#letter-container{
width:550px;
}
#header-bat-logo{
width:100%;
}
```
We don't need to write `<style>` and `</style>` in our CSS file.
We need to link our newly created CSS file to our HTML file using the `<link>` tag. Here's how we can do that:
```
<link rel="stylesheet" type="text/css" href="style.css">
```
We use the link element to include external resources inside your HTML document. It is mostly used to link stylesheets. The three attributes that we are using are:
* rel: Relation. What relationship the linked file has to the document. The file with the .css extension is called a stylesheet, and so we keep rel=“stylesheet”.
* type: the type of the linked file; it's “text/css” for a CSS file.
* href: Hypertext Reference. Location of the linked file.
There is no </link> at the end of the link element. So, <link> is also a self-closing tag.
```
<link rel="gf" type="cute" href="girl.next.door">
```
If only getting a Girlfriend was so easy :D
Nah, that's not gonna happen, let's move on.
Here's the content of our loveletter.html:
```
<link rel="stylesheet" type="text/css" href="style.css">
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
```
and our style.css:
```
#letter-container{
width:550px;
}
#header-bat-logo{
width:100%;
}
```
Save both the files and refresh, and your output in the browser should remain the same.
### A Few Formalities
Our love letter is almost ready to deliver to Batman, but there are a few formal pieces remaining.
Like any other programming language, HTML has also gone through many versions since its birth year (1990). The current version of HTML is HTML5.
So, how would the browser know which version of HTML you are using to code your page? To tell the browser that you are using HTML5, you need to include `<!DOCTYPE html>` at the top of the page. For older versions of HTML, this line used to be different, but you don't need to learn that because we don't use them anymore.
Also, in previous HTML versions, we used to encapsulate the entire document inside the `<html></html>` tag. The entire file was divided into two major sections: Head, inside `<head></head>`, and Body, inside `<body></body>`. This is not required in HTML5, but we still do this for compatibility reasons. Let's update our code with `<!DOCTYPE html>`, `<html>`, `<head>` and `<body>`:
```
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
</body>
</html>
```
The main content goes inside `<body>` and meta information goes inside `<head>`. So we keep the div inside `<body>` and load the stylesheets inside `<head>`.
Save and refresh, and your HTML page should display the same as earlier.
### Title in HTML
This is the last change. I promise.
You might have noticed that the title of the tab is displaying the path of the HTML file:
![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg)
We can use the `<title>` tag to define a title for our HTML file. The title tag, like the link tag, also goes inside the head. Let's put “Bat Letter” in our title:
```
<!DOCTYPE html>
<html>
<head>
<title>Bat Letter</title>
<link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest <em>does</em> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p><strong>I love you Superman.</strong></p>
<p>
Your not-so-secret-lover, <br>
Batman
</p>
</div>
</body>
</html>
```
Save and refresh, and you will see that instead of the file path, “Bat Letter” is now displayed on the tab.
Batman's Love Letter is now complete.
Congratulations! You made Batman's Love Letter in HTML.
![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg)
### What we learned
We learned the following new concepts:
* The structure of an HTML document
* How to write elements in HTML (<p></p>)
* How to write styles inside the element using the style attribute (this is called inline styling, avoid this as much as you can)
* How to write styles of an element inside <style></style> (this is called embedded styling)
* How to write styles in a separate file and link to it in HTML using <link> (this is called a linked stylesheet)
* What is a tag name, attribute, opening tag, and closing tag
* How to give an id to an element using id attribute
* Tag selectors and id selectors in CSS
We learned the following HTML tags:
* <p>: for paragraphs
* <br>: for line breaks
* <ul>, <li>: to display lists
* <div>: for grouping elements of our letter
* <h1>, <h2>: for heading and sub heading
* <img>: to insert an image
* <strong>, <em>: for bold and italic text styling
* <style>: for embedded styling
* <link>: for including an external stylesheet
* <html>: to wrap the entire HTML document
* <!DOCTYPE html>: to let the browser know that we are using HTML5
* <head>: to wrap meta info, like <link> and <title>
* <body>: for the body of the HTML page that is actually displayed
* <title>: for the title of the HTML page
We learned the following CSS properties:
* width: to define the width of an element
* CSS units: “px” and “%”
That's it for the day, friends. See you in the next tutorial.
* * *
Want to learn Web Development with fun and engaging tutorials?
[Click here to get new Web Development tutorials every week.][4]
--------------------------------------------------------------------------------
作者简介:
Developer + Writer | supersarkar.com | twitter.com/supersarkar
-------------
via: https://medium.freecodecamp.org/for-your-first-html-code-lets-help-batman-write-a-love-letter-64c203b9360b
作者:[Kunal Sarkar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@supersarkar
[1]:https://www.pexels.com/photo/batman-black-and-white-logo-93596/
[2]:https://code.visualstudio.com/
[3]:https://www.pexels.com/photo/batman-black-and-white-logo-93596/
[4]:http://supersarkar.com/

View File

@ -0,0 +1,222 @@
How to test Webhooks when you're developing locally
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*0HNQmPw5yXva6powvVwn5Q.jpeg)
Photo by [Fernando Venzano][1] on [Unsplash][2]
[Webhooks][10] can be used by an external system for notifying your system about a certain event or update. Probably the most well known type is the one where a Payment Service Provider (PSP) informs your system about status updates of payments.
Often they come in the form where you listen on a predefined URL. For example [http://example.com/webhooks/payment-update][11]. Meanwhile the other system sends a POST request with a certain payload to that URL (for example a payment ID). As soon as the request comes in, you fetch the payment ID, ask the PSP for the latest status via their API, and update your database afterward.
Other examples can be found in this excellent explanation about Webhooks. [https://sendgrid.com/blog/whats-webhook/][12].
Testing these webhooks goes fairly smoothly as long as the system is publicly accessible over the internet. This might be your production environment or a publicly accessible staging environment. It becomes harder when you are developing locally on your laptop or inside a Virtual Machine (VM, for example, a Vagrant box). In those cases, the local URLs are not publicly accessible by the party sending the webhook. Also, monitoring the requests being sent around is difficult, which might make development and debugging hard.
What will this example solve:
* Testing webhooks from a local development environment, which is not accessible over the internet. It cannot be accessed by the service sending the data to the webhook from their servers.
* Monitor the requests and data being sent around, but also the response your application generates. This will allow easier debugging, and therefore a shorter development cycle.
Prerequisites:
* _Optional_: in case you are developing using a Virtual Machine (VM), make sure it's running and make sure the next steps are done in the VM.
* For this tutorial, we assume you have a vhost defined at `webhook.example.vagrant`. I used a Vagrant VM, but you are free to choose the name of your vhost.
* Install `ngrok` by following the [installation instructions][3]. Inside a VM, I find the Node version of it also useful: [https://www.npmjs.com/package/ngrok][4], but feel free to use other methods.
I assume you don't have SSL running in your environment, but if you do, feel free to replace port 80 with port 443 and `http://` with `https://` in the examples below.
#### Make the webhook testable
Let's assume the following example code. I'll be using PHP, but read it as pseudo-code as I left some crucial parts out (for example API keys, input validation, etc.).
The first file: _payment.php_. This file creates a payment object and then registers it with the PSP. It then fetches the URL the customer needs to visit in order to pay and redirects the user to it.
Note that the `webhook.example.vagrant` in this example is the local vhost we've defined for our development set-up. It's not accessible from the outside world.
```
<?php
/*
* This file creates a payment and tells the PSP what webhook URL to use for updates
* After creating the payment, we get a URL to send the customer to in order to pay at the PSP
*/
$payment = [
'order_id' => 123,
'amount' => 25.00,
'description' => 'Test payment',
'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
'webhook_url' => 'http://webhook.example.vagrant/webhook.php',
];
```
```
$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
Second file: _webhook.php_. This file waits to be called by the PSP to get notified about updates.
```
<?php
/*
* This file gets called by the PSP and in the $_POST they submit an 'id'
* We can use this ID to get the latest status from the PSP and update our internal systems afterward
*/
```
```
$paymentId = $_POST['id'];
$paymentInfo = $paymentProvider->getPayment($paymentId);
$status = $paymentInfo->getStatus();
```
```
// Perform actions in here to update your system
if ($status === 'paid') {
..
}
elseif ($status === 'cancelled') {
..
}
```
Our webhook URL is not accessible over the internet (remember: `webhook.example.vagrant`). Thus, the file _webhook.php_ will never be called by the PSP. Your system will never get to know about the payment status. This ultimately leads to orders never being shipped to customers.
Luckily, _ngrok_ can help in solving this problem. [_ngrok_][13] describes itself as:
> ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels.
Let's start a basic tunnel for our project. In your environment (either on your system or on the VM) run the following command:
`ngrok http -host-header=rewrite webhook.example.vagrant:80`
Read about more configuration options in their documentation: [https://ngrok.com/docs][14].
A screen like this will come up:
![](https://cdn-images-1.medium.com/max/1000/1*BZZE-CvZwHZ3pxsElJMWbA.png)
ngrok output
What did we just start? Basically, we instructed `ngrok` to start a tunnel to [http://webhook.example.vagrant][5] at port 80. This same URL can now be reached via [http://39741ffc.ngrok.io][6] or [https://39741ffc.ngrok.io][7]. They are publicly accessible over the internet by anyone who knows this URL.
Note that you get both HTTP and HTTPS available out of the box. The documentation gives examples of how to restrict this to HTTPS only: [https://ngrok.com/docs#bind-tls][16].
So, how do we make our webhook work now? Update _payment.php_ to the following code:
```
<?php
/*
* This file creates a payment and tells the PSP what webhook URL to use for updates
* After creating the payment, we get a URL to send the customer to in order to pay at the PSP
*/
$payment = [
'order_id' => 123,
'amount' => 25.00,
'description' => 'Test payment',
'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
'webhook_url' => 'https://39741ffc.ngrok.io/webhook.php',
];
```
```
$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
Now, we told the PSP to call the tunnel URL over HTTPS. _ngrok_ will make sure your internal URL gets called with an unmodified payload, as soon as the PSP calls the webhook via the tunnel.
#### How to monitor calls to the webhook?
The screenshot you've seen above gives an overview of the calls being made to the tunnel host. This data is rather limited. Fortunately, `ngrok` offers a very nice dashboard, which allows you to inspect all calls:
![](https://cdn-images-1.medium.com/max/1000/1*qZw9GRTnG1sMgEUmsJPz3g.png)
I won't go into this very deeply because it's self-explanatory as soon as you have it running. Instead, I will explain how to access it on the Vagrant box, as it doesn't work out of the box there.
The dashboard will allow you to see all the calls, their status codes, the headers and data being sent around. You will also see the response your application generated.
Another neat feature of the dashboard is that it allows you to replay a certain call. Let's say your webhook code ran into a fatal error; it would be tedious to start a new payment and wait for the webhook to be called. Replaying the previous call makes your development process way faster.
The dashboard by default is accessible at [http://localhost:4040][17].
#### Dashboard in a VM
In order to make this work inside a VM, you have to perform some additional steps:
First, make sure the VM can be accessed on port 4040. Then, create a file inside the VM holding this configuration:
`web_addr: 0.0.0.0:4040`
Now, kill the `ngrok` process thats still running and start it with this slightly adjusted command:
`ngrok http -config=/path/to/config/ngrok.conf -host-header=rewrite webhook.example.vagrant:80`
You will get a screen looking similar to the previous screenshot, though the IDs have changed. The previous URL doesn't work anymore, but you got a new URL. Also, the `Web Interface` URL has changed:
![](https://cdn-images-1.medium.com/max/1000/1*3FZq37TF4dmBqRc1R0FMVg.png)
Now direct your browser to [http://webhook.example.vagrant:4040][8] to access the dashboard. Also, make a call to [https://e65642b5.ngrok.io/webhook.php][9]. This will probably result in an error in your browser, but the dashboard should show the request being made.
#### Final remarks
The examples above are pseudo-code. The reason is that every external system uses webhooks in a different way. I tried to give an example based on a fictive PSP implementation, as probably many developers have to deal with payments at some moment.
Please be aware that your webhook URL can also be used by others with bad intentions. Make sure to validate any input being sent to it.
Preferably also add a token to the URL which is unique for each payment. This token must only be known by your system and the system sending the webhook.
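The examples in this post are PHP, but the token check itself is language-agnostic; as a minimal sketch (in Python here, and the names are mine), it boils down to a constant-time comparison:

```
import hmac


def is_valid_webhook_token(token_from_request, token_stored_for_payment):
    # Constant-time comparison, so the check itself does not leak the token.
    return hmac.compare_digest(token_from_request, token_stored_for_payment)
```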
Good luck testing and debugging your webhooks!
Note: I haven't tested this tutorial on Docker. However, this Docker container looks like a good starting point and includes clear instructions. [https://github.com/wernight/docker-ngrok][19].
Stefan Doorn
[https://github.com/stefandoorn][20]
[https://twitter.com/stefan_doorn][21]
[https://www.linkedin.com/in/stefandoorn][22]
--------------------------------------------------------------------------------
作者简介:
Backend Developer (PHP/Node/Laravel/Symfony/Sylius)
--------
via: https://medium.freecodecamp.org/testing-webhooks-while-using-vagrant-for-development-98b5f3bedb1d
作者:[Stefan Doorn ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@stefandoorn
[1]:https://unsplash.com/photos/MYTyXb7fgG0?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]:https://ngrok.com/download
[4]:https://www.npmjs.com/package/ngrok
[5]:http://webhook.example.vagrnat/
[6]:http://39741ffc.ngrok.io/
[7]:http://39741ffc.ngrok.io/
[8]:http://webhook.example.vagrant:4040/
[9]:https://e65642b5.ngrok.io/webhook.php.
[10]:https://sendgrid.com/blog/whats-webhook/
[11]:http://example.com/webhooks/payment-update%29
[12]:https://sendgrid.com/blog/whats-webhook/
[13]:https://ngrok.com/
[14]:https://ngrok.com/docs
[15]:http://39741ffc.ngrok.io%2C/
[16]:https://ngrok.com/docs#bind-tls
[17]:http://localhost:4040./
[18]:https://e65642b5.ngrok.io/webhook.php.
[19]:https://github.com/wernight/docker-ngrok
[20]:https://github.com/stefandoorn
[21]:https://twitter.com/stefan_doorn
[22]:https://www.linkedin.com/in/stefandoorn

View File

@ -0,0 +1,395 @@
How to write a really great resume that actually gets you hired
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*k7HRLZAsuINP9vIs2BIh1g.png)
This is a data-driven guide to writing a resume that actually gets you hired. I've spent the past four years analyzing which resume advice works regardless of experience, role, or industry. The tactics laid out below are the result of what I've learned. They helped me land offers at Google, Microsoft, and Twitter and have helped my students systematically land jobs at Amazon, Apple, Google, Microsoft, Facebook, and more.
### Writing Resumes Sucks.
It's a vicious cycle.
We start by sifting through dozens of articles by career “gurus,” forced to compare conflicting advice and make our own decisions on what to follow.
The first article says “one page MAX” while the second says “take two or three and include all of your experience.”
The next says “write a quick summary highlighting your personality and experience” while another says “summaries are a waste of space.”
You scrape together your best effort and hit “Submit,” sending your resume into the ether. When you don't hear back, you wonder what went wrong:
_“Was it the single page or the lack of a summary? Honestly, who gives a s**t at this point. I'm sick of sending out 10 resumes every day and hearing nothing but crickets.”_
![](https://cdn-images-1.medium.com/max/1000/1*_zQqAjBhB1R4fz55InrrIw.jpeg)
How it feels to try and get your resume read in today's world.
Writing resumes sucks but it's not your fault.
The real reason it's so tough to write a resume is because most of the advice out there hasn't been proven against the actual end goal of getting a job. If you don't know what consistently works, you can't lay out a system to get there.
It's easy to say “one page works best” when you've seen it happen a few times. But how does it hold up when we look at 100 resumes across different industries, experience levels, and job titles?
That's what this article aims to answer.
Over the past four years, I've personally applied to hundreds of companies and coached hundreds of people through the job search process. This has given me a huge opportunity to measure, analyze, and test the effectiveness of different resume strategies at scale.
This article is going to walk through everything I've learned about resumes over the past 4 years, including:
* Mistakes that more than 95% of people make, causing their resumes to get tossed immediately
* Three things that consistently appear in the resumes of highly effective job searchers (who go on to land jobs at the world's best companies)
* A quick hack that will help you stand out from the competition and instantly build relationships with whomever is reading your resume (increasing your chances of hearing back and getting hired)
* The exact resume template that got me interviews and offers at Google, Microsoft, Twitter, Uber, and more
Before we get to the unconventional strategies that will help set you apart, we need to make sure our foundational bases are covered. That starts with understanding the mistakes most job seekers make so we can make our resume bulletproof.
### Resume Mistakes That 95% Of People Make
Most resumes that come through an online portal or across a recruiter's desk are tossed out because they violate a simple rule.
When recruiters scan a resume, the first thing they look for is mistakes. Your resume could be fantastic, but if you violate a rule like using an unprofessional email address or improper grammar, it's going to get tossed out.
Our goal is to fully understand the triggers that cause recruiters/ATS systems to make the snap decisions on who stays and who goes.
In order to get inside the heads of these decision makers, I collected data from dozens of recruiters and hiring managers across industries. These people have several hundred years of hiring experience under their belts and they've reviewed 100,000+ resumes across industries.
They broke down the five most common mistakes that cause them to cut resumes from the pile:
![](https://cdn-images-1.medium.com/max/1000/1*5Zbr3HFeKSjvPGZdq_LCKA.png)
### The Five Most Common Resume Mistakes (According To Recruiters & Hiring Managers)
Issue #1: Sloppiness (typos, spelling errors, & grammatical mistakes). Close to 60% of resumes have some sort of typo or grammatical issue.
Solution: Have your resume reviewed by three separate sources: spell checking software, a friend, and a professional. Spell check should be covered if you're using Microsoft Word or Google Docs to create your resume.
A friend or family member can cover the second base, but make sure you trust them with reviewing the whole thing. You can always include an obvious mistake to see if they catch it.
Finally, you can hire a professional editor on [Upwork][1]. It shouldn't take them more than 15-20 minutes to review, so it's worth paying a bit more for someone with high ratings and lots of hours logged.
Issue #2: Summaries are too long and formal. Many resumes include summaries that consist of paragraphs explaining why they are a “driven, results oriented team player.” When hiring managers see a block of text at the top of the resume, you can bet they aren't going to read the whole thing. If they do give it a shot and read something similar to the sentence above, they're going to give up on the spot.
Solution: Summaries are highly effective, but they should be in bullet form and showcase your most relevant experience for the role. For example, if I'm applying for a new business sales role, my first bullet might read “Responsible for driving $11M of new business in 2018, achieved 168% attainment (#1 on my team).”
Issue #3: Too many buzz words. Remember our driven team player from the last paragraph? Phrasing like that makes hiring managers cringe because your attempt to stand out actually makes you sound like everyone else.
Solution: Instead of using buzzwords, write naturally, use bullets, and include quantitative results whenever possible. Would you rather hire a salesperson who “is responsible for driving new business across the healthcare vertical to help companies achieve their goals” or “drove $15M of new business last quarter, including the largest deal in company history”? Skip the buzzwords and focus on results.
Issue #4: Having a resume that is more than one page. The average employer spends six seconds reviewing your resume; if it's more than one page, it probably isn't going to be read. When asked, recruiters from Google and Barclays both said multiple-page resumes “are the bane of their existence.”
Solution: Increase your margins, decrease your font, and cut down your experience to highlight the most relevant pieces for the role. It may seem impossible but it's worth the effort. When you're dealing with recruiters who see hundreds of resumes every day, you want to make their lives as easy as possible.
### More Common Mistakes & Facts (Backed By Industry Research)
In addition to personal feedback, I combed through dozens of recruitment survey results to fill any gaps my contacts might have missed. Here are a few more items you may want to consider when writing your resume:
* The average interviewer spends 6 seconds scanning your resume
* The majority of interviewers have not looked at your resume until you walk into the room
* 76% of resumes are discarded for an unprofessional email address
* Resumes with a photo have an 88% rejection rate
* 58% of resumes have typos
* Applicant tracking software typically eliminates 75% of resumes due to a lack of keywords and phrases being present
Now that you know every mistake you need to avoid, the first item on your to-do list is to comb through your current resume and make sure it doesn't violate anything mentioned above.
Once you have a clean resume, you can start to focus on more advanced tactics that will really make you stand out. There are a few unique elements you can use to push your application over the edge and finally get your dream company to notice you.
![](https://cdn-images-1.medium.com/max/1000/1*KthhefFO33-8tm0kBEPbig.jpeg)
### The 3 Elements Of A Resume That Will Get You Hired
My analysis showed that highly effective resumes typically include three specific elements: quantitative results, a simple design, and a quirky interests section. This section breaks down all three elements and shows you how to maximize their impact.
### Quantitative Results
Most resumes lack them.
Which is a shame because my data shows that they make the biggest difference between resumes that land interviews and resumes that end up in the trash.
Here's an example from a recent resume that was emailed to me:
> Experience
> + Identified gaps in policies and processes and made recommendations for solutions at the department and institution level
> + Streamlined processes to increase efficiency and enhance quality
> + Directly supervised three managers and indirectly managed up to 15 staff on multiple projects
> + Oversaw execution of in-house advertising strategy
> + Implemented comprehensive social media plan
As an employer, that tells me absolutely nothing about what to expect if I hire this person.
They executed an in-house marketing strategy. Did it work? How did they measure it? What was the ROI?
They also identified gaps in processes and recommended solutions. What was the result? Did they save time and operating expenses? Did it streamline a process resulting in more output?
Finally, they managed a team of three supervisors and 15 staffers. How did that team do? Was it better than the other teams at the company? What results did they get and how did those improve under this person's management?
See what I'm getting at here?
These types of bullets talk about daily activities, but companies don't care about what you do every day. They care about results. By including measurable metrics and achievements in your resume, you're showcasing the value that the employer can expect to get if they hire you.
Let's take a look at revised versions of those same bullets:
> Experience
> + Managed a team of 20 that consistently outperformed other departments in lead generation, deal size, and overall satisfaction (based on our culture survey)
> + Executed in-house marketing strategy that resulted in a 15% increase in monthly leads along with a 5% drop in the cost per lead
> + Implemented targeted social media campaign across Instagram & Pintrest, which drove an additional 50,000 monthly website visits and generated 750 qualified leads in 3 months
If you were in the hiring manager's shoes, which resume would you choose?
That's the power of including quantitative results.
### Simple, Aesthetic Design That Hooks The Reader
These days, it's easy to get carried away with our mission to “stand out.” I've seen resume overhauls from graphic designers, video resumes, and even resumes [hidden in a box of donuts.][2]
While those can work in very specific situations, we want to aim for a strategy that consistently gets results. The format I saw the most success with was a black and white Word template with sections in this order:
* Summary
* Interests
* Experience
* Education
* Volunteer Work (if you have it)
This template is effective because its familiar and easy for the reader to digest.
As I mentioned earlier, hiring managers scan resumes for an average of 6 seconds. If your resume is in an unfamiliar format, those 6 seconds wont be very comfortable for the hiring manager. Our brains prefer things we can easily recognize. You want to make sure that a hiring manager can actually catch a glimpse of who you are during their quick scan of your resume.
If were not relying on design, this hook needs to come from the  _Summary_ section at the top of your resume.
This section should be done in bullets (not paragraph form) and it should contain 3-4 highlights of the most relevant experience you have for the role. For example, if I was applying for a New Business Sales position, my summary could look like this:
> Summary
> Drove quarterly average of $11M in new business with a quota attainment of 128% (#1 on my team)
> Received award for largest sales deal of the year
> Developed and trained sales team on new lead generation process that increased total leads by 17% in 3 months, resulting in 4 new deals worth $7M
Those bullets speak directly to the value I can add to the company if I was hired for the role.
### An “Interests” Section Thats Quirky, Unique, & Relatable
This is a little “hack” you can use to instantly build personal connections and positive associations with whoever is reading your resume.
Most resumes have a skills/interests section, but its usually parked at the bottom and offers little to no value. Its time to change things up.
[Research shows][3] that people rely on emotions, not information, to make decisions. Big brands use this principle all the timeemotional responses to advertisements are more influential on a persons intent to buy than the content of an ad.
You probably remember Apples famous “Get A Mac” campaign:
When it came to specs and performance, Macs didnt blow every single PC out of the water. But these ads solidified who was “cool” and who wasnt, which was worth a few extra bucks to a few million people.
By tugging at our need to feel “cool,” Apples campaign led to a [42% increase in market share][4] and a record sales year for Macbooks.
Now were going to take that same tactic and apply it to your resume.
If you can invoke an emotional response from your recruiter, you can influence the mental association they assign to you. This gives you a major competitive advantage.
Lets start with a questionwhat could you talk about for hours?
It could be cryptocurrency, cooking, World War 2, World of Warcraft, or how Googles bet on segmenting their company under the Alphabet is going to impact the technology sector over the next 5 years.
Did a topic (or two) pop into your head? Great.
Now think about what it would be like to have a conversation with someone who was just as passionate and knew just as much as you did on the topic. Itd be pretty awesome, right? _Finally,_ someone who gets it!
Thats exactly the kind of emotional response were aiming to get from a hiring manager.
There are five “neutral” topics out there that people enjoy talking about:
1. Food/Drink
2. Sports
3. College
4. Hobbies
5. Geography (travel, where people are from, etc.)
These topics are present in plenty of interest sections but we want to take them one step further.
Lets say you had the best night of your life at the Full Moon Party in Thailand. Which of the following two options would you be more excited to read:
* Traveling
* Ko Pha Ngan beaches (where the full moon party is held)
Or, lets say that you went to Duke (an ACC school) and still follow their basketball team. Which would you be more pumped about:
* College Sports
* ACC Basketball (Go Blue Devils!)
In both cases, the second answer would probably invoke a larger emotional response because it is tied directly to your experience.
I want you to think about your interests that fit into the five categories I mentioned above.
Now I want you to write a specific favorite associated with each category in parentheses next to your original list. For example, if you wrote travel you can add (ask me about the time I was chased by an elephant in India) or (specifically meditation in a Tibetan monastery).
Here is the [exact set of interests][5] I used on my resume when I interviewed at Google, Microsoft, and Twitter:
_ABC Kitchens Atmosphere, Stumptown Coffee (primarily cold brew), Michael Lewis (Liars Poker), Fishing (especially fly), Foods That Are Vehicles For Hot Sauce, ACC Sports (Go Deacs!) & The New York Giants_
![](https://cdn-images-1.medium.com/max/1000/1*ONxtGr_xUYmz4_Xe66aeng.jpeg)
If you want to cheat here, my experience shows that anything about hot sauce is an instant conversation starter.
### The Proven Plug & Play Resume Template
Now that we have our strategies down, its time to apply these tactics to a real resume. Our goal is to write something that increases your chances of hearing back from companies, enhances your relationships with hiring managers, and ultimately helps you score the job offer.
The example below is the exact resume that I used to land interviews and offers at Microsoft, Google, and Twitter. I was targeting roles in Account Management and Sales, so this sample is tailored towards those positions. Well break down each section below:
![](https://cdn-images-1.medium.com/max/1000/1*B2RQ89ue2dGymRdwMY2lBA.png)
First, I want you to notice how clean this is. Each section is clearly labeled and separated and flows nicely from top to bottom.
My summary speaks directly to the value Ive created in the past around company culture and its bottom line:
* I consistently exceeded expectations
* I started my own business in the space (and saw real results)
* Im a team player who prioritizes culture
I purposefully include my Interests section right below my Summary. If my hiring managers six second scan focused on the summary, I know theyll be interested. Those bullets cover all the subconscious criteria for qualification in sales. Theyre going to be curious to read more in my Experience section.
By sandwiching my Interests in the middle, Im upping their visibility and increasing the chance of creating that personal connection.
You never knowthe person reading my resume may also be a hot sauce connoisseur and I dont want that to be overlooked because my interests were sitting at the bottom.
Next, my Experience section aims to flesh out the points made in my Summary. I mentioned exceeding my quota up top, so I included two specific initiatives that led to that attainment, including measurable results:
* A partnership leveraging display advertising to drive users to a gamified experience. The campaign resulted in over 3000 acquisitions and laid the groundwork for the 2nd largest deal in company history.
* A partnership with a top tier agency aimed at increasing conversions for a client by improving user experience and upgrading tracking during a company-wide website overhaul (the client has ~20 brand sites). Our efforts over 6 months resulted in a contract extension worth 316% more than their original deal.
Finally, I included my education at the very bottom starting with the most relevant coursework.
### Download My Resume Templates For Free
You can download a copy of the resume sample above as well as a plug and play template here:
Austins Resume: [Click To Download][6]
Plug & Play Resume Template: [Click To Download][7]
### Bonus Tip: An Unconventional Resume “Hack” To Help You Beat Applicant Tracking Software
If youre not already familiar, Applicant Tracking Systems are pieces of software that companies use to help “automate” the hiring process.
After you hit submit on your online application, the ATS software scans your resume looking for specific keywords and phrases (if you want more details, [this article][8] does a good job of explaining ATS).
If the language in your resume matches up, the software sees it as a good fit for the role and will pass it on to the recruiter. However, even if youre highly qualified for the role but you dont use the right wording, your resume can end up sitting in a black hole.
Im going to teach you a little hack to help improve your chances of beating the system and getting your resume in the hands of a human:
Step 1: Highlight and select the entire job description page and copy it to your clipboard.
Step 2: Head over to [WordClouds.com][9] and click on the “Word List” button at the top. Towards the top of the pop up box, you should see a link for Paste/Type Text. Go ahead and click that.
Step 3: Now paste the entire job description into the box, then hit “Apply.”
WordClouds is going to spit out an image that showcases every word in the job description. The larger words are the ones that appear most frequently (and the ones you want to make sure to include when writing your resume). Heres an example for a data science role:
![](https://cdn-images-1.medium.com/max/1000/1*O7VO1C9nhC9LZct7vexTbA.png)
You can also get a quantitative view by clicking “Word List” again after creating your cloud. That will show you the number of times each word appeared in the job description:
9 data
6 models
4 experience
4 learning
3 Experience
3 develop
3 team
2 Qualifications
2 statistics
2 techniques
2 libraries
2 preferred
2 research
2 business
When writing your resume, your goal is to include those words in the same proportions as the job description.
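If you would rather skip the website, the same frequency count takes only a few lines of code. Below is a rough sketch in Go; it assumes you have pasted the job description into a local file called job-description.txt (the filename and the simple punctuation trimming are illustrative assumptions, not part of the original hack):

```
package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	// Read the job description saved to a plain-text file (hypothetical name).
	data, err := os.ReadFile("job-description.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Count how often each lower-cased word appears.
	counts := map[string]int{}
	for _, w := range strings.Fields(strings.ToLower(string(data))) {
		w = strings.Trim(w, ".,;:()\"'")
		if len(w) > 1 {
			counts[w]++
		}
	}

	// Sort the words by frequency, most common first, and print the list.
	words := make([]string, 0, len(counts))
	for w := range counts {
		words = append(words, w)
	}
	sort.Slice(words, func(i, j int) bool { return counts[words[i]] > counts[words[j]] })
	for _, w := range words {
		fmt.Printf("%d %s\n", counts[w], w)
	}
}
```

Unlike the WordClouds list above, this version folds upper- and lower-case words together, which is usually what you want when matching keywords.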
Its not a guaranteed way to beat the online application process, but it will definitely help improve your chances of getting your foot in the door!
* * *
### Want The Inside Info On Landing A Dream Job Without Connections, Without “Experience,” & Without Applying Online?
[Click here to get the 5 free strategies that my students have used to land jobs at Google, Microsoft, Amazon, and more without applying online.][10]
_Originally published at [cultivatedculture.com][11]._
--------------------------------------------------------------------------------
作者简介:
I help people land jobs they love and salaries they deserve at CultivatedCulture.com
----------
via: https://medium.freecodecamp.org/how-to-write-a-really-great-resume-that-actually-gets-you-hired-e18533cd8d17
作者:[Austin Belcak ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@austin.belcak
[1]:http://www.upwork.com/
[2]:https://www.thrillist.com/news/nation/this-guy-hides-his-resume-in-boxes-of-donuts-to-score-job-interviews
[3]:https://www.psychologytoday.com/blog/inside-the-consumer-mind/201302/how-emotions-influence-what-we-buy
[4]:https://www.businesswire.com/news/home/20070608005253/en/Apple-Mac-Named-Successful-Marketing-Campaign-2007
[5]:http://cultivatedculture.com/resume-skills-section/
[6]:https://drive.google.com/file/d/182gN6Kt1kBCo1LgMjtsGHOQW2lzATpZr/view?usp=sharing
[7]:https://drive.google.com/open?id=0B3WIcEDrxeYYdXFPVlcyQlJIbWc
[8]:https://www.jobscan.co/blog/8-things-you-need-to-know-about-applicant-tracking-systems/
[9]:https://www.wordclouds.com/
[10]:https://cultivatedculture.com/dreamjob/
[11]:https://cultivatedculture.com/write-a-resume/

View File

@ -0,0 +1,140 @@
What I Learned from Programming Interviews
============================================================
![](https://cdn-images-1.medium.com/max/1250/1*DXPdaGPM4oM6p5nSkup7IQ.jpeg)
Whiteboard programming interviews
In 2017, I went to the [Grace Hopper Celebration][1] of women in computing. Its the largest gathering of this kind, with 17,000 women attending last year.
This conference has a huge career fair where companies interview attendees. Some even get offers. Walking around the area, I noticed that some people looked stressed and worried. I overheard conversations, and some talked about how they didnt do well in the interview.
I approached a group of people that I overheard and gave them advice. I considered some of the advice I gave to be basic, such as “its okay to think of the naive solution first.” But people were surprised by most of the advice I gave them.
I wanted to help more people with this. I gathered a list of tips that worked for me and published a [podcast episode][2] about them. Theyre also the topic of this post.
Ive had many programming interviews, both for internships and full-time jobs. When I was in college studying Computer Science, there was a career fair every fall semester where the first round of interviews took place. I have failed at both the first and final rounds of interviews. After each interview, I reflected on what I couldve done better and did mock interviews with friends who gave me feedback.
Whether we find a job through a job portal, networking, or university recruiting, part of the process involves doing a technical interview.
In recent years weve seen different interview formats emerge:
* Pair programming with an engineer
* Online quiz and online coding
* Whiteboard interviews
Ill focus on the whiteboard interview because its the one that I have experienced. Ive had many interviews. Some of them have gone well, while others havent.
### What I did wrong
First, I want to go over the things I did wrong in my interviews. This helps see the problems and what to improve.
When an interviewer gave me a technical problem, I immediately went to the whiteboard and started trying to solve it.  _Without saying a word._
I made two mistakes here:
#### Not clarifying information that is crucial to solve a problem
For example, are we only working with numbers or also strings? Are we supporting multiple data types? If you dont ask questions before you start working on a question, your interviewer can get the impression that you wont ask questions before you start working on a project at their company. This is an important skill to have in the workplace. It is not like school anymore. You dont get an assignment with all the steps detailed for you. You have to find out what those are and define them.
#### Thinking without writing or communicating
Oftentimes I stood there thinking without writing. When I was doing a mock interview with a friend, he told me that he knew I was thinking because we had worked together. To a stranger, though, I could just as easily have looked clueless. It is also important not to rush to a solution right away. Take some time to brainstorm ideas. Sometimes the interviewer will gladly participate in this. After all, thats how it is at work meetings.
### Coming up with a solution
Before you begin writing code, it helps if you come up with the algorithm first. Dont start writing code and hope that youll solve the problem as you write.
This is what has worked for me:
1. Brainstorm
2. Coding
3. Error handling
4. Testing
#### 1\. Brainstorm
For me, it helps to visualize first what the problem is through a series of examples. If its a problem related to trees, I would start with the null case, one node, two nodes, three nodes. This can help you generalize a solution.
On the whiteboard, write down a list of the things the algorithm needs to do. This way, you can find bugs and issues before writing any code. Just keep track of the time. I made a mistake once where I spent too much time asking clarifying questions and brainstorming, and I barely had time to write the code. The downside of this is that your interviewer doesnt get to see how you code. You can also come off as if youre trying to avoid the coding portion. It helps to wear a wrist watch, or if theres a clock in the room, look at it occasionally. Sometimes the interviewer will tell you, “I think we have the necessary information, lets start coding it.”
#### 2\. Coding and code walkthrough
If you dont have the solution right away, it always helps to point out the obvious naive solution. While youre explaining this, you should be thinking of how to improve it. When you state the obvious, indicate why it is not the best solution. For this it helps to be familiar with big O notation. It is okay to go over 2-3 solutions first. The interviewer sometimes guides you by saying, “Can we do better?” This can sometimes mean they are looking for a more efficient solution.
#### 3\. Error handling
While youre coding, point out that youre leaving a code comment for error handling. Once an interviewer said, “Thats a good point. How would you handle it? Would you throw an exception? Or return a specific value?” This can make for a good short discussion about code quality. Mention a few error cases. Other times, the interviewer might say that you can assume that the parameters youre getting already passed a validation. However, it is still important to bring this up to show that you are aware of error cases and quality.
#### 4\. Testing
After you have finished coding the solution, re-use the examples from brainstorming to walk through your code and make sure it works. For example you can say, “Lets go over the example of a tree with one node, two nodes.”
After you finish this, the interviewer sometimes asks you how you would test your code, and what your test cases would be. I recommend that you organize your test cases in different categories.
Some examples are:
1. Performance
2. Error cases
3. Positive expected cases
For performance, think about extreme quantities. For example, if the problem is about lists, mention that you would have a case with a large list and a really small list. If its about numbers, youll test the maximum integer number and the smallest. I recommend reading about testing software to get more ideas. My favorite book on this is [How We Test Software at Microsoft][3].
For error cases, think about what is expected to fail and list those.
For positive expected cases, it helps to think of what the user requirements are. What are the cases that this solution is meant to solve? Those are the positive test cases.
### “Do you have any questions for me?”
Almost always there will be a few minutes dedicated at the end for you to ask questions. I recommend that you write down the questions you would ask your interviewer before the interview. Dont say, “I dont have any questions.” Even if you feel the interview didnt go well, or youre not super passionate about the company, theres always something you can ask. It can be about what the person likes and hates most about his or her job. Or it can be something related to the persons work, or technologies and practices used at the company. Dont feel discouraged to ask something even if you feel you didnt do well.
### Applying for a job
As for searching and applying for a job, Ive been told that you should only apply to a place that you would be truly passionate to work for. They say pick a company that you love, or a product that you enjoy using, and see if you can work there.
I dont recommend that you always do this. You can rule out many good options this way, especially if youre looking for an internship or an entry-level job.
You can focus on other goals instead. What do I want to get more experience in? Is it cloud computing, web development, or artificial intelligence? When you talk to companies at the career fair, find out if their job openings are in this area. You might find a really good position at a company or a non-profit that wasnt in your list.
#### Switching teams
After a year and a half at my first team, I decided that it was time to explore something different. I found a team I liked and had 4 rounds of interviews. I didnt do well.
I didnt practice anything, not even simply writing on a whiteboard. My logic had been: if I had been working at the company for almost 2 years, why would I need to practice? I was wrong about this. I struggled to write a solution on the whiteboard. Small handwriting and running out of space because I didnt start at the top left corner all contributed to not passing.
I hadnt brushed up on data structures and algorithms. If I had, I wouldve been more confident. Even if youve been working at a company as a Software Engineer, before you do a round of interviews with another team, I strongly recommend you go through practice problems on a whiteboard.
As for finding a team, if you are looking to switch teams at your company, it helps to talk informally with members of that team. For this, I found that almost everyone is willing to have lunch with you. People are mostly available at noon too, so there is little risk of scheduling conflicts. This is an informal way to find out what the team is working on, and see what the personalities of your potential team members are like. You can learn many things from lunch meetings that can help you in the formal interviews.
It is important to know that at the end of the day, you are interviewing for a specific team. Even if you do really well, you might not get an offer because you are not a culture fit. Thats part of why I try to meet different people in the team first, but this is not always possible. Dont get discouraged by a rejection, keep your options open, and practice.
This content is from the [“Programming interviews”][4] episode on [The Women in Tech Show: Technical Interviews with Prominent Women in Tech][5].
--------------------------------------------------------------------------------
作者简介:
Software Engineer II at Microsoft Research, opinions are my own, host of www.thewomenintechshow.com
------------
via: https://medium.freecodecamp.org/what-i-learned-from-programming-interviews-29ba49c9b851
作者:[Edaena Salinas ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@edaenas
[1]:https://anitab.org/event/2017-grace-hopper-celebration-women-computing/
[2]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[3]:https://www.amazon.com/How-We-Test-Software-Microsoft/dp/0735624259
[4]:https://thewomenintechshow.com/2017/12/18/programming-interviews/
[5]:https://thewomenintechshow.com/

View File

@ -0,0 +1,104 @@
Why you should use named pipes on Linux
======
![](https://images.techhive.com/images/article/2017/05/blue-1845806_1280-100722976-large.jpg)
Just about every Linux user is familiar with the process of piping data from one process to another using | signs. It provides an easy way to send output from one command to another and end up with only the data you want to see without having to write scripts to do all of the selecting and reformatting.
There is another type of pipe, however, one that warrants the name "pipe" but has a very different personality. It's one that you may have never tried or even thought about -- the named pipe.
**Also read: [11 pointless but awesome Linux terminal tricks][1]**
One of the key differences between regular pipes and named pipes is that named pipes have a presence in the file system. That is, they show up as files. But unlike most files, they never appear to have contents. Even if you write a lot of data to a named pipe, the file appears to be empty.
### How to set up a named pipe on Linux
Before we look at one of these empty named pipes, let's step back and see how a named pipe is set up. You would use a command called **mkfifo**. Why the reference to "FIFO"? Because a named pipe is also known as a FIFO special file. The term "FIFO" refers to its first-in, first-out character. If you fill a dish with ice cream and then start eating it, you'd be doing a LIFO (last-in, first-out) maneuver. If you suck a milkshake through a straw, you'd be doing a FIFO one. So, here's an example of creating a named pipe.
```
$ mkfifo mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
Notice the special file type designation of "p" and the file length of zero. You can write to a named pipe by redirecting output to it and the length will still be zero.
```
$ echo "Can you read this?" > mypipe
$ ls -l mypipe
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
```
So far, so good, but hit return and nothing much happens.
```
$ echo "Can you read this?" > mypipe
```
While it might not be obvious, your text has entered into the pipe, but you're still peeking into the _input_ end of it. You or someone else may be sitting at the _output_ end and be ready to read the data that's being poured into the pipe, now waiting for it to be read.
```
$ cat mypipe
Can you read this?
```
Once read, the contents of the pipe are gone.
Another way to see how a named pipe works is to perform both operations (pouring the data into the pipe and retrieving it at the other end) yourself by putting the pouring part into the background.
```
$ echo "Can you read this?" > mypipe &
[1] 79302
$ cat mypipe
Can you read this?
[1]+ Done echo "Can you read this?" > mypipe
```
Once the pipe has been read or "drained," it's empty, though it still will be visible as an empty file ready to be used again. Of course, this brings us to the "why bother?" stage.
### Why use named pipes?
Named pipes are used infrequently for a good reason. On Unix systems, there are almost always many ways to do pretty much the same thing. There are many ways to write to a file, read from a file, and empty a file, though named pipes have a certain efficiency going for them.
For one thing, named pipe content resides in memory rather than being written to disk. It is passed only when both ends of the pipe have been opened. And you can write to a pipe multiple times before it is opened at the other end and read. By using named pipes, you can establish a process in which one process writes to a pipe and another reads from a pipe without much concern about trying to time or carefully orchestrate their interaction.
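If you'd like to see that coordination from inside a program rather than the shell, here is a minimal Go sketch (Linux/Unix only, reusing the mypipe name from the examples above); it is an illustration of the behavior described here, not part of the original article. The writer goroutine blocks on opening the FIFO until the reader end is opened, and the line then passes through memory:

```
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"syscall"
)

func main() {
	const pipe = "mypipe"

	// Create the FIFO if it does not already exist (same as `mkfifo mypipe`).
	if err := syscall.Mkfifo(pipe, 0600); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}

	// Writer: opening a FIFO for writing blocks until a reader opens it too.
	go func() {
		f, err := os.OpenFile(pipe, os.O_WRONLY, 0)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		fmt.Fprintln(f, "Can you read this?")
	}()

	// Reader: this open unblocks the writer; the data is passed in memory only.
	f, err := os.Open(pipe)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fmt.Println("got:", scanner.Text())
	}
}
```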
You can set up a process that simply waits for data to appear at the output end of the pipe and then works with it when it does. In the command below, we use the tail command to wait for data to appear.
```
$ tail -f mypipe
```
Once the process that will be feeding the pipe has finished, we will see some output.
```
$ tail -f mypipe
Uranus replicated to WCDC7
Saturn replicated to WCDC8
Pluto replicated to WCDC9
Server replication operation completed
```
If you look at the process writing to a named pipe, you might be surprised by how few resources it uses. In the ps output below, the only significant resource use is virtual memory (the VSZ column).
```
ps u -p 80038
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
shs 80038 0.0 0.0 108488 764 pts/4 S 15:25 0:00 -bash
```
Named pipes are different enough from the more commonly used Unix/Linux pipes to warrant a different name, but "pipe" really invokes a good image of how they move data between processes, so "named pipe" fits pretty well. Maybe you'll come across a task that will benefit significantly from this very clever Unix/Linux feature.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3251853/linux/why-use-named-pipes-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,206 @@
Conditional Rendering in React using Ternaries and Logical AND
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*eASRJrCIVgsy5VbNMAzD9w.jpeg)
Photo by [Brendan Church][1] on [Unsplash][2]
There are several ways that your React component can decide what to render. You can use the traditional `if` statement or the `switch` statement. In this article, well explore a few alternatives. But be warned that some come with their own gotchas, if youre not careful.
### Ternary vs if/else
Lets say we have a component that is passed a `name` prop. If the string is non-empty, we display a greeting. Otherwise we tell the user they need to sign in.
Heres a Stateless Function Component (SFC) that does just that.
```
const MyComponent = ({ name }) => {
if (name) {
return (
<div className="hello">
Hello {name}
</div>
);
}
return (
<div className="hello">
Please sign in
</div>
);
};
```
Pretty straightforward. But we can do better. Heres the same component written using a conditional ternary operator.
```
const MyComponent = ({ name }) => (
<div className="hello">
{name ? `Hello ${name}` : 'Please sign in'}
</div>
);
```
Notice how concise this code is compared to the example above.
A few things to note. Because we are using the single statement form of the arrow function, the `return` statement is implied. Also, using a ternary allowed us to DRY up the duplicate `<div className="hello">` markup. 🎉
### Ternary vs Logical AND
As you can see, ternaries are wonderful for `if/else` conditions. But what about simple `if` conditions?
Lets look at another example. If `isPro` (a boolean) is `true`, we are to display a trophy emoji. We are also to render the number of stars (if not zero). We could go about it like this.
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro ? '🏆' : null}
</div>
{stars ? (
<div>
Stars:{'⭐️'.repeat(stars)}
</div>
) : null}
</div>
);
```
But notice the “else” conditions return `null`. This is because a ternary expects an else condition.
For simple `if` conditions, we could use something a little more fitting: the logical AND operator. Heres the same code written using a logical AND.
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro && '🏆'}
</div>
{stars && (
<div>
Stars:{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
Not too different, but notice how we eliminated the `: null` (i.e. else condition) at the end of each ternary. Everything should render just like it did before.
But run this with a `stars` value of `0`, and a stray `0` is rendered when nothing should be. Thats the gotcha that I was referring to above. Heres why.
[According to MDN][3], a Logical AND (i.e. `&&`):
> `expr1 && expr2`
> Returns `expr1` if it can be converted to `false`; otherwise, returns `expr2`. Thus, when used with Boolean values, `&&` returns `true` if both operands are true; otherwise, returns `false`.
OK, before you start pulling your hair out, let me break it down for you.
In our case, `expr1` is the variable `stars`, which has a value of `0`. Because zero is falsey, `0` is returned and rendered. See, that wasnt too bad.
Id put it more simply:
> If `expr1` is falsey, returns `expr1`, else returns `expr2`.
So, when using a logical AND with non-boolean values, we must make the falsey value return something that React wont render. Say, like a value of `false`.
There are a few ways that we can accomplish this. Lets try this instead.
```
{!!stars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
```
Notice the double bang operator (i.e. `!!`) in front of `stars`. (Well, actually there is no “double bang operator”. Were just using the bang operator twice.)
The first bang operator will coerce the value of `stars` into a boolean and then perform a NOT operation. If `stars` is `0`, then `!stars` will produce `true`.
Then we perform a second NOT operation, so if `stars` is 0, `!!stars` would produce `false`. Exactly what we want.
If youre not a fan of `!!`, you can also force a boolean like this (which I find a little wordy).
```
{Boolean(stars) && (
```
Or simply give a comparator that results in a boolean value (which some might say is even more semantic).
```
{stars > 0 && (
```
#### A word on strings
Empty string values suffer the same issue as numbers. But because a rendered empty string is invisible, its not a problem that you will likely have to deal with, or will even notice. However, if you are a perfectionist and dont want an empty string on your DOM, you should take similar precautions as we did for numbers above.
### Another solution
A possible solution, and one that scales to other variables in the future, would be to create a separate `shouldRenderStars` variable. Then you are dealing with boolean values in your logical AND.
```
const shouldRenderStars = stars > 0;
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
Then, if in the future, the business rule is that you also need to be logged in, own a dog, and drink light beer, you could change how `shouldRenderStars` is computed, and what is returned would remain unchanged. You could also place this logic elsewhere where its testable and keep the rendering explicit.
```
const shouldRenderStars =
  stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light';
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'⭐️'.repeat(stars)}
</div>
)}
</div>
);
```
### Conclusion
Im of the opinion that you should make the best use of the language. And for JavaScript, this means using conditional ternary operators for `if/else` conditions and logical AND operators for simple `if` conditions.
While we could just retreat back to our safe comfy place where we use the ternary operator everywhere, you now possess the knowledge and power to go forth AND prosper.
--------------------------------------------------------------------------------
作者简介:
Managing Editor at the American Express Engineering Blog http://aexp.io and Director of Engineering @AmericanExpress. MyViews !== ThoseOfMyEmployer.
----------------
via: https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935
作者:[Donavon West][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@donavon
[1]:https://unsplash.com/photos/pKeF6Tt3c08?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:https://unsplash.com/search/photos/road-sign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators

View File

@ -0,0 +1,223 @@
Here are some amazing advantages of Go that you dont hear much about
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*NDXd5I87VZG0Z74N7dog0g.png)
Artwork from [https://github.com/ashleymcnamara/gophers][1]
In this article, I discuss why you should give Go a chance and where to start.
Golang is a programming language you might have heard about a lot during the last couple years. Even though it was created back in 2009, it has started to gain popularity only in recent years.
![](https://cdn-images-1.medium.com/max/2000/1*cQ8QzhCPiFXqk_oQdUk_zw.png)
Golang popularity according to Google Trends
This article is not about the main selling points of Go that you usually see.
Instead, I would like to present to you some rather small but still significant features that you only get to know after youve decided to give Go a try.
These are amazing features that are not laid out on the surface, but they can save you weeks or months of work. They can also make software development more enjoyable.
Dont worry if Go is something new for you. This article does not require any prior experience with the language. I have included a few extra links at the bottom, in case you would like to learn a bit more.
We will go through such topics as:
* GoDoc
* Static code analysis
* Built-in testing and profiling framework
* Race condition detection
* Learning curve
* Reflection
* Opinionatedness
* Culture
Please, note that the list doesnt follow any particular order. It is also opinionated as hell.
### GoDoc
Documentation in code is taken very seriously in Go. So is simplicity.
[GoDoc][4] is a static code analyzing tool that creates beautiful documentation pages straight out of your code. A remarkable thing about GoDoc is that it doesnt use any extra languages, like JavaDoc, PHPDoc, or JSDoc to annotate constructions in your code. Just English.
It uses as much information as it can get from the code to outline, structure, and format the documentation. And it has all the bells and whistles, such as cross-references, code samples, and direct links to your version control system repository.
All you need to do is add a good old `// MyFunc transforms Foo into Bar` kind of comment above a declaration, and it will be reflected in the documentation, too. You can even add [code examples][5] which are actually runnable via the web interface or locally.
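As a rough illustration (the package and function below are invented for this example), a single file like this is everything GoDoc needs to produce a page with a package overview and a documented function:

```
// Package temperature converts between Celsius and Fahrenheit.
// This comment becomes the package overview on the generated page.
package temperature

// CToF transforms a Celsius value into Fahrenheit. A plain English
// sentence above the declaration is all GoDoc asks for.
func CToF(c float64) float64 {
	return c*9/5 + 32
}
```

A function named ExampleCToF in a matching _test.go file would additionally show up as a runnable example on the same page.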
GoDoc is the only documentation engine for Go that is used by the whole community. This means that every library or application written in Go has the same format of documentation. In the long run, it saves you tons of time while browsing those docs.
Here, for example, is the GoDoc page for my recent pet project: [pullkee - GoDoc][6].
### Static code analysis
Go heavily relies on static code analysis. Examples include [godoc][7] for documentation, [gofmt][8] for code formatting, [golint][9] for code style linting, and many others.
There are so many of them that theres even an everything-included-kind-of project called [gometalinter][10] to compose them all into a single utility.
Those tools are commonly implemented as stand-alone command line applications and integrate easily with any coding environment.
Static code analysis isnt actually something new to modern programming, but Go sort of takes it to the extreme. I cant overstate how much time it has saved me. Also, it gives you a feeling of safety, as though someone is covering your back.
Its very easy to create your own analyzers, as Go has dedicated built-in packages for parsing and working with Go sources.
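For a taste of how approachable this is, here is a minimal sketch that uses only the standard library to parse one Go source file and print every function it declares (the main.go filename is an assumption):

```
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"log"
)

func main() {
	fset := token.NewFileSet()

	// Parse a single source file into an abstract syntax tree.
	file, err := parser.ParseFile(fset, "main.go", nil, parser.ParseComments)
	if err != nil {
		log.Fatal(err)
	}

	// Walk the tree and report every function declaration we find.
	ast.Inspect(file, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			fmt.Printf("%s: func %s\n", fset.Position(fn.Pos()), fn.Name.Name)
		}
		return true
	})
}
```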
You can learn more from this talk: [GothamGo Kickoff Meetup: Go Static Analysis Tools by Alan Donovan][11].
### Built-in testing and profiling framework
Have you ever tried to pick a testing framework for a Javascript project you were starting from scratch? If so, you might understand the struggle of going through that kind of analysis paralysis. You might have also realized that you were not using 80% of the framework you chose.
The issue repeats itself once you need to do some reliable profiling.
Go comes with a built-in testing tool designed for simplicity and efficiency. It provides you the simplest API possible, and makes minimum assumptions. You can use it for different kinds of testing, profiling, and even to provide executable code examples.
It produces CI-friendly output out of the box, and the usage is usually as easy as running `go test`. Of course, it also supports advanced features like running tests in parallel, marking them skipped, and many more.
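A complete test file needs nothing beyond the standard testing package. Heres a minimal sketch (the Abs function is invented for the example and would normally live in its own file):

```
package mathutil

import "testing"

// Abs is the function under test.
func Abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// TestAbs is discovered and run automatically by `go test`.
func TestAbs(t *testing.T) {
	cases := []struct{ in, want int }{
		{-3, 3},
		{0, 0},
		{7, 7},
	}
	for _, c := range cases {
		if got := Abs(c.in); got != c.want {
			t.Errorf("Abs(%d) = %d, want %d", c.in, got, c.want)
		}
	}
}
```

Save it as mathutil_test.go and `go test` runs it; `go test -v` and `go test -cover` work against the same file with no extra configuration.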
### Race condition detection
You might already know about Goroutines, which are used in Go to achieve concurrent code execution. If you dont, [heres][12] a really brief explanation.
Concurrent programming in complex applications is never easy regardless of the specific technique, partly due to the possibility of race conditions.
Simply put, race conditions happen when several concurrent operations finish in an unpredicted order. It might lead to a huge number of bugs, which are particularly hard to chase down. Ever spent a day debugging an integration test which only worked in about 80% of executions? It probably was a race condition.
All that said, concurrent programming is taken very seriously in Go and, luckily, we have quite a powerful tool to hunt those race conditions down. It is fully integrated into Gos toolchain.
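To make that concrete, here is a deliberately racy sketch: two goroutines increment the same counter without any synchronization. Running it with `go run -race` reports the data race and points at the offending lines:

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup

	// Two goroutines modify the same variable with no synchronization.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				counter++ // racy read-modify-write
			}
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter)
}
```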
You can read more about it and learn how to use it here: [Introducing the Go Race Detector - The Go Blog][13].
### Learning curve
You can learn ALL Gos language features in one evening. I mean it. Of course, there are also the standard library, and the best practices in different, more specific areas. But two hours would totally be enough time to get you confidently writing a simple HTTP server, or a command-line app.
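To give a sense of that scale, the snippet below is a complete, working HTTP server built with nothing but the standard library (the port and the greeting are arbitrary choices for the example):

```
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Every request gets a plain-text greeting echoing the requested path.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, you requested %s\n", r.URL.Path)
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```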
The project has [marvelous documentation][14], and most of the advanced topics have already been covered on their blog: [The Go Programming Language Blog][15].
Go is much easier to bring to your team than Java (and the family), Javascript, Ruby, Python, or even PHP. The environment is easy to set up, and the investment your team needs to make before they can deliver their first production code is much smaller.
### Reflection
Code reflection is essentially an ability to sneak under the hood and access different kinds of meta-information about your language constructs, such as variables or functions.
Given that Go is a statically typed language, its exposed to a number of various limitations when it comes to more loosely typed abstract programming. Especially compared to languages like Javascript or Python.
Moreover, Go [doesnt implement a concept called Generics][16] which makes it even more challenging to work with multiple types in an abstract way. Nevertheless, many people think its actually beneficial for the language because of the amount of complexity Generics bring along. And I totally agree.
According to Gos philosophy (which is a separate topic itself), you should try hard to not over-engineer your solutions. And this also applies to dynamically-typed programming. Stick to static types as much as possible, and use interfaces when you know exactly what sort of types youre dealing with. Interfaces are very powerful and ubiquitous in Go.
However, there are still cases in which you cant possibly know what sort of data you are facing. A great example is JSON. You convert all the kinds of data back and forth in your applications. Strings, buffers, all sorts of numbers, nested structs and more.
In order to pull that off, you need a tool to examine all the data at runtime and act differently depending on its type and structure. Reflection to the rescue! Go has a first-class [reflect][17] package to enable your code to be as dynamic as it would be in a language like Javascript.
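Heres a small sketch of what that looks like in practice, loosely in the spirit of what a JSON encoder has to do. The describe function and the user struct are made up for the example; the point is that the code inspects whatever value it receives at runtime:

```
package main

import (
	"fmt"
	"reflect"
)

// describe inspects an arbitrary value at runtime.
func describe(v interface{}) {
	rv := reflect.ValueOf(v)
	rt := rv.Type()

	switch rv.Kind() {
	case reflect.Struct:
		// Walk the exported fields by name and value.
		for i := 0; i < rt.NumField(); i++ {
			fmt.Printf("%s = %v\n", rt.Field(i).Name, rv.Field(i).Interface())
		}
	default:
		fmt.Printf("%s value: %v\n", rv.Kind(), v)
	}
}

func main() {
	type user struct {
		Name string
		Age  int
	}
	describe(user{Name: "Gopher", Age: 8})
	describe(42)
}
```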
An important caveat is to know what price you pay for using it, and to only use it when there is no simpler way.
You can read more about it here: [The Laws of Reflection - The Go Blog][18].
You can also read some real code from the JSON package sources here: [src/encoding/json/encode.go - Source Code][19].
### Opinionatedness
Is there such a word, by the way?
Coming from the Javascript world, one of the most daunting processes I faced was deciding which conventions and tools I needed to use. How should I style my code? What testing library should I use? How should I go about structuring my code? What programming paradigms and approaches should I rely on?
Sometimes this basically got me stuck. I was making these decisions instead of writing the code and satisfying the users.
To begin with, I should note that I totally get where those conventions come from. Its always you and your team. Anyway, even a group of experienced Javascript developers can easily find that most of their experience is with entirely different tools and paradigms that achieve roughly the same results.
This makes the analysis paralysis cloud explode over the whole team, and also makes it harder for the individuals to integrate with each other.
Well, Go is different. You have only one style guide that everyone follows. You have only one testing framework which is built into the basic toolchain. You have a lot of strong opinions on how to structure and maintain your code. How to pick names. What structuring patterns to follow. How to do concurrency better.
While this might seem too restrictive, it saves tons of time for you and your team. Being somewhat limited is actually a great thing when you are coding. It gives you a more straightforward way to go when architecting new code, and makes it easier to reason about the existing one.
As a result, most of the Go projects look pretty alike code-wise.
### Culture
People say that every time you learn a new spoken language, you also soak in some part of the culture of the people who speak that language. Thus, the more languages you learn, the more personal changes you might experience.
Its the same with programming languages. Regardless of how you are going to apply a new programming language in the future, it always gives you a new perspective on programming in general, or on some specific techniques.
Be it functional programming, pattern matching, or prototypal inheritance. Once youve learned it, you carry these approaches with you which broadens the problem-solving toolset that you have as a software developer. It also changes the way you see high-quality programming in general.
And Go is a terrific investment here. The main pillar of Gos culture is keeping code simple and down-to-earth, without creating many redundant abstractions, and putting maintainability at the top. Its also part of the culture to spend most of your time actually working on the codebase, instead of tinkering with the tools and the environment, or choosing between different variations of those.
Go is also all about “there should be only one way of doing a thing.”
A little side note. Its also partially true that Go usually gets in your way when you need to build relatively complex abstractions. Well, Id say thats the tradeoff for its simplicity.
If you really need to write a lot of abstract code with complex relationships, youd be better off using languages like Java or Python. However, even when its not obvious, its very rarely the case.
Always use the best tool for the job!
### Conclusion
You might have heard of Go before. Or maybe its something that has stayed off your radar for a while. Either way, chances are, Go can be a very decent choice for you or your team when starting a new project or improving an existing one.
This is not a complete list of all the amazing things about Go. Just the undervalued ones.
Please, give Go a try with [A Tour of Go][20] which is an incredible place to start.
If you wish to learn more about Gos benefits, you can check out these links:
* [Why should you learn Go? - Keval Patel - Medium][2]
* [Farewell Node.js - TJ Holowaychuk - Medium][3]
Share your observations down in the comments!
Even if you are not specifically looking for a new language to use, its worth it to spend an hour or two getting the feel of it. And maybe it can become quite useful for you in the future.
Always be looking for the best tools for your craft!
* * *
If you like this article, please consider following me for more, and clicking on those funny green little hands right below this text for sharing. 👏👏👏
Check out my [Github][21] and follow me on [Twitter][22]!
--------------------------------------------------------------------------------
作者简介:
Software Engineer and Traveler. Coding for fun. Javascript enthusiast. Tinkering with Golang. A lot into SOA and Docker. Architect at Velvica.
------------
via: https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a
作者:[Kirill Rogovoy][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[1]:https://github.com/ashleymcnamara/gophers
[2]:https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65
[3]:https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b
[4]:https://godoc.org/
[5]:https://blog.golang.org/examples
[6]:https://godoc.org/github.com/kirillrogovoy/pullkee
[7]:https://godoc.org/
[8]:https://golang.org/cmd/gofmt/
[9]:https://github.com/golang/lint
[10]:https://github.com/alecthomas/gometalinter#supported-linters
[11]:https://vimeo.com/114736889
[12]:https://gobyexample.com/goroutines
[13]:https://blog.golang.org/race-detector
[14]:https://golang.org/doc/
[15]:https://blog.golang.org/
[16]:https://golang.org/doc/faq#generics
[17]:https://golang.org/pkg/reflect/
[18]:https://blog.golang.org/laws-of-reflection
[19]:https://golang.org/src/encoding/json/encode.go
[20]:https://tour.golang.org/
[21]:https://github.com/kirillrogovoy/
[22]:https://twitter.com/krogovoy

View File

@ -0,0 +1,101 @@
How to Run Your Own Public Time Server on Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/eddington_a._space_time_and_gravitation._fig._9.jpg?itok=KgNqViyZ)
One of the most important public services is timekeeping, but it doesn't get a lot of attention. Most public time servers are run by volunteers to help meet always-increasing demands. Learn how to run your own public time server and contribute to an essential public good. (See [Keep Accurate Time on Linux with NTP][1] to learn how to set up a LAN time server.)
### Famous Time Server Abusers
Like everything in life, even something as beneficial as time servers are subject to abuse fueled by either incompetence or malice.
Vendors of consumer network appliances are notorious for creating big messes. The first one I recall happened in 2003, when Netgear hard-coded the address of the University of Wisconsin-Madison's NTP server into their routers. All of a sudden the server was getting hammered with requests, and the more routers Netgear sold, the worse it got. Adding to the fun, the routers were programmed to send requests every second, which is way too many. Netgear issued a firmware upgrade, but few users ever upgrade their devices, and a number of them are pummeling the University of Wisconsin-Madison's NTP server to this day. Netgear gave the university a pile of money, which hopefully will cover its costs until the last defective router dies. Similar ineptitudes were perpetrated by D-Link, Snapchat, TP-Link, and others.
The NTP protocol has become a choice vector for distributed denial-of-service attacks, using both reflection and amplification. It is called reflection when an attacker uses a forged source address to target a victim; the attacker sends requests to multiple servers, which then reply and bombard the forged address. Amplification is a large reply to a small request. For example, on Linux the `ntpq` command is a useful tool to query your NTP servers to verify that they are operating correctly. Some replies, such as lists of peers, are large. Combine reflection with amplification, and an attacker can get a return of 10x or more on the bandwidth they spend on the attack.
How do you protect your nice beneficial public NTP server? Start by using NTP 4.2.7p26 or newer, which hopefully is not an issue with your Linux distribution because that version was released in 2010. That release shipped with the most significant abuse vectors disabled as the default. The [current release is 4.2.8p10][2], released in 2017.
Another step you can take, which you should be doing anyway, is use ingress and egress filtering on your network. Block packets from entering your network that claim to be from your network, and block outgoing packets with forged return addresses. Ingress filtering helps you, and egress filtering helps you and everyone else. Read [BCP38.info][3] for much more information.
### Stratum 0, 1, 2 Time Servers
NTP is more than 30 years old, one of the oldest Internet protocols that is still widely used. Its purpose is to keep computers synchronized to Coordinated Universal Time (UTC). The NTP network is hierarchical, organized into strata, and also peer-based. Stratum 0 contains master timekeeping devices such as atomic clocks. Stratum 1 time servers synchronize with Stratum 0 devices. Stratum 2 time servers synchronize with Stratum 1 time servers, and Stratum 3 with Stratum 2. The NTP protocol supports 16 strata, though in real life there are not that many. Servers in each stratum also peer with each other.
In the olden days, we selected individual NTP servers for our client configurations. Those days are long gone, and now the better way is to use the [NTP pool addresses][4], which use round-robin DNS to share the load. Pool addresses are only for clients, such as individual PCs and your local LAN NTP server. When you run your own public server you won't use the pool addresses.
### Public NTP Server Configuration
There are two steps to running a public NTP server: set up your server, and then apply to join the NTP server pool. Running a public NTP server is a noble deed, but make sure you know what you're getting into. Joining the NTP pool is a long-term commitment, because even if you run it for a short time and then quit, you'll be receiving requests for years.
You need a static public IP address, a permanent reliable Internet connection with at least 512Kb/s bandwidth, and know how to configure your firewall correctly. NTP uses UDP port 123. The machine itself doesn't have to be any great thing, and a lot of admins piggyback NTP on other public-facing servers such as Web servers.
Configuring a public NTP server is just like configuring a LAN NTP server, with a few more configurations. Start by reading the [Rules of Engagement][5]. Follow the rules and mind your manners; almost everyone maintaining a time server is a volunteer just like you. Then select 4-7 Stratum 2 upstream time servers from [StratumTwoTimeServers][6]. Select some that are geographically close to your upstream Internet service provider (mine is 300 miles away), read their access policies, and then use `ping` and `mtr` to find the servers with the lowest latency and least number of hops.
This example `/etc/ntp.conf` includes both IPv4 and IPv6 and basic safeguards:
```
# stratum 2 server list
server servername_1 iburst
server servername_2 iburst
server servername_3 iburst
server servername_4 iburst
server servername_5 iburst
# access restrictions
restrict -4 default kod noquery nomodify notrap nopeer limited
restrict -6 default kod noquery nomodify notrap nopeer limited
# Allow ntpq and ntpdc queries only from localhost
restrict 127.0.0.1
restrict ::1
```
Start your NTP server, let it run for a few minutes, and then test that it is querying the remote servers:
```
$ ntpq -p
remote refid st t when poll reach delay offset jitter
=================================================================
+tock.no-such-ag 200.98.196.212 2 u 36 64 7 98.654 88.439 65.123
+PBX.cytranet.ne 45.33.84.208 3 u 37 64 7 72.419 113.535 129.313
*eterna.binary.n 199.102.46.70 2 u 39 64 7 92.933 98.475 56.778
+time.mclarkdev. 132.236.56.250 3 u 37 64 5 111.059 88.029 74.919
```
Good so far. Now test from another PC, using your NTP server name. The following example shows correct output. If something is not correct you'll see an error message.
```
$ ntpdate -q _yourservername_
server 66.96.99.10, stratum 2, offset 0.017690, delay 0.12794
server 98.191.213.2, stratum 1, offset 0.014798, delay 0.22887
server 173.49.198.27, stratum 2, offset 0.020665, delay 0.15012
server 129.6.15.28, stratum 1, offset -0.018846, delay 0.20966
26 Jan 11:13:54 ntpdate[17293]: adjust time server 98.191.213.2 offset 0.014798 sec
```
Once your server is running satisfactorily apply at [manage.ntppool.org][7] to join the pool.
See the official handbook, [The Network Time Protocol (NTP) Distribution][8], to learn about all the command and configuration options, and advanced features such as management, querying, and authentication. Together with the sites linked above, it covers pretty much everything you need to know about running a time server.
Learn more about Linux through the free ["Introduction to Linux" ][9]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/2/how-run-your-own-public-time-server-linux
作者:[CARLA SCHRODER][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp
[2]:http://www.ntp.org/downloads.html
[3]:http://www.bcp38.info/index.php/Main_Page
[4]:http://www.pool.ntp.org/en/use.html
[5]:http://support.ntp.org/bin/view/Servers/RulesOfEngagement
[6]:http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers?redirectedfrom=Servers.StratumTwo
[7]:https://manage.ntppool.org/manage
[8]:https://www.eecis.udel.edu/~mills/ntp/html/index.html
[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,83 @@
How to reload .vimrc file without restarting vim on Linux/Unix
======
I am a new vim text editor user. I usually load ~/.vimrc for configuration. Once I have edited my .vimrc file, I need to reload it without having to quit my Vim session. How do I edit my .vimrc file and reload it without having to restart Vim on Linux or a Unix-like system?
Vim is free and open source text editor that is upwards compatible to Vi. It can be used to edit all kinds of good old text. It is especially useful for editing programs written in C/Perl/Python. One can use it for editing Linux/Unix configuration files. ~/.vimrc is your personal Vim initializations and customization file.
### How to reload .vimrc file without restarting vim session
The procedure to reload .vimrc in Vim without restart:
1. Start the vim text editor by typing: `vim filename`
 2. Open your vim config file by pressing `Esc` followed by typing `:vs ~/.vimrc`
3. Add customization like:
```
filetype indent plugin on
set number
syntax on
```
4. Use `:wq` to save the file and exit the ~/.vimrc window.
 5. Reload ~/.vimrc by typing any one of the following commands:
```
:so $MYVIMRC
```
OR
```
:source ~/.vimrc
```
[![How to reload .vimrc file without restarting vim][1]][1]
Fig.01: Editing ~/.vimrc and reloading it when needed without quitting vim, so that you can continue editing your program
The `:so[urce]! {file}` vim command reads vim commands from the given file, such as ~/.vimrc. These are commands that are executed from Normal mode, as if you typed them. When used after :global, :argdo, :windo, :bufdo, in a loop, or when another command follows, the display won't be updated while executing the commands.
### How to map keys to edit and reload ~/.vimrc
Append the following to your ~/.vimrc file:
```
" Edit vimr configuration file
nnoremap confe :e $MYVIMRC<CR>
"
Reload vims configuration file
nnoremap confr :source $MYVIMRC<CR>
```
" Edit vimr configuration file nnoremap confe :e $MYVIMRC<CR> " Reload vims configuration file nnoremap confr :source $MYVIMRC<CR>
Now just press `Esc` followed by `confe` to edit the ~/.vimrc file. To reload it, type `Esc` followed by `confr`. Some people like to use the <Leader> key in their .vimrc file. With that, the above mapping becomes:
```
" Edit vimr configuration file
nnoremap <Leader>ve :e $MYVIMRC<CR>
"
" Reload vimr configuration file
nnoremap <Leader>vr :source $MYVIMRC<CR>
```
" Edit vimr configuration file nnoremap <Leader>ve :e $MYVIMRC<CR> " " Reload vimr configuration file nnoremap <Leader>vr :source $MYVIMRC<CR>
The <Leader> key is mapped to \ by default. So you just press \ followed by ve to edit the file. To reload the ~/.vimrc file, you press \ followed by vr.
And there you have it: you can now reload your .vimrc file without ever restarting vim.
### About the author
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][5]**.
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/how-to-reload-vimrc-file-without-restarting-vim-on-linux-unix/
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/media/new/faq/2018/02/How-to-reload-.vimrc-file-without-restarting-vim.jpg
[2]:https://twitter.com/nixcraft
[3]:https://facebook.com/nixcraft
[4]:https://plus.google.com/+CybercitiBiz
[5]:https://www.cyberciti.biz/atom/atom.xml

View File

@ -0,0 +1,106 @@
How to use lxc remote with the LXD snap
======
**Background**: LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution and then you can launch machine containers into your distribution, running all sorts of (other) Linux distributions.
You have installed the LXD snap and you are happy using it. However, you are developing LXD and you would like to use your freshly compiled LXD client (executable: **lxc**) against the LXD snap.
Let's run our freshly compiled lxc executable.
```
$ ./lxc list
LXD socket not found; is LXD installed and running?
Exit 1
```
By default it cannot access the LXD server from the snap. We need to [set up a remote LXD host][1] and then configure the client to be able to connect to that remote LXD server.
### Configuring the remote LXD server (snap)
We run the following on the LXD snap,
```
$ which lxd
/snap/bin/lxd
$ sudo lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? no
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: press_enter_to_accept_default
Port to bind LXD to [default=8443]: press_enter_to_accept_default
Trust password for new clients: type_a_password
Again: type_the_same_password
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
LXD has been successfully configured.
$
```
Now the snap LXD server is configured to accept remote connections, and the clients must be configured with the correct trust password.
### Configuring the client (compiled lxc)
Let's now configure the compiled lxc client.
First, here is how the unconfigured compiled lxc client would react,
```
$ ./lxc list
LXD socket not found; is LXD installed and running?
Exit 1
```
Now we add the remote, giving it the name **lxd.snap**, which binds to localhost (127.0.0.1). It asks us to verify the certificate fingerprint. I am not aware of a way to view the fingerprint from inside the snap. We type the one-time password that we set earlier and we are good to go.
```
$ lxc remote add lxd.snap 127.0.0.1
Certificate fingerprint: 2c5829064cf795e29388b0d6310369fcf693257650b5c90c922a2d10f542831e
ok (y/n)? y
Admin password for lxd.snap: type_that_password
Client certificate stored at server: lxd.snap
$ lxc remote list
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| NAME | URL | PROTOCOL | AUTH TYPE | PUBLIC | STATIC |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| images | https://images.linuxcontainers.org | simplestreams | | YES | NO |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| local (default) | unix:// | lxd | tls | NO | YES |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| lxd.snap | https://127.0.0.1:8443 | lxd | tls | NO | NO |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| ubuntu | https://cloud-images.ubuntu.com/releases | simplestreams | | YES | YES |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
| ubuntu-daily | https://cloud-images.ubuntu.com/daily | simplestreams | | YES | YES |
+-----------------|------------------------------------------|---------------|-----------|--------|--------+
```
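As an aside, one way that should let you check that fingerprint from the host (untested here, and assuming the snap keeps its server certificate at the default path) is to ask openssl for the SHA-256 fingerprint of the snap's server certificate; it prints the same digest, only colon-separated:
```
# inspect the LXD snap's server certificate (path is an assumption)
$ sudo openssl x509 -in /var/snap/lxd/common/lxd/server.crt -noout -fingerprint -sha256
```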
Still, the default remote is **local**. That means that **./lxc** will not work yet. We need to make **lxd.snap** the default remote.
```
$ ./lxc list
LXD socket not found; is LXD installed and running?
Exit 1
$ ./lxc remote set-default lxd.snap
$ ./lxc list
... now it works ...
```
### Conclusion
We saw how to get a client to access an LXD server. A more advanced scenario would be to have two LXD servers, and set them up so that each one can connect to the other.
--------------------------------------------------------------------------------
via: https://blog.simos.info/how-to-use-lxc-remote-with-the-lxd-snap/
作者:[Simos Xenitellis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.simos.info/author/simos/
[1]:https://stgraber.org/2016/04/12/lxd-2-0-remote-hosts-and-container-migration-612/

View File

@ -0,0 +1,199 @@
I Built This - Now What? How to deploy a React App on a DigitalOcean Droplet.
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*6K5vmzalJUxn44v3cm6wBw.jpeg)
Photo by [Thomas Kvistholt][1]
Most aspiring developers have uploaded static HTML sites before. The process isn't too daunting, as you're essentially just moving files from one computer to another, and then BAM! Website.
But those who have tackled learning React often pour hundreds or even thousands of hours into learning about components, props, and state, only to be left with the question "How do I host this?" Fear not, fellow developer. Deploying your latest masterpiece is a little more in-depth, but not overly difficult. Here's how:
### Preparing For Production
There are a few things you'll want to do to get your app ready for deployment.
#### Turn off service workers
If you've used something like create-react-app to bootstrap your project, you'll want to turn off the built-in service worker if you haven't specifically integrated it to work with your app. While usually harmless, it can cause some issues, so it's best to just get rid of it up front. Find these lines in your `src/index.js` file and delete them: `registerServiceWorker();` and `import registerServiceWorker from 'register-service-worker'`
#### Get your server ready
To get the most bang for your buck, a production build will minify the code and remove extra white-space and comments so that it's as fast to download as possible. It creates a new directory called `/build`, and we need to make sure we're telling Express to use it. On your server page, add this line: `app.use( express.static( `${__dirname}/../build` ) );`
Next, you'll need to make sure your routes know how to get to your index.html file. To do this, we need to create an endpoint and place it below all other endpoints in your server file. It should look like this:
```
const path = require('path')

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, '../build/index.html'));
})
```
#### Create the production build
Now that Express knows to use the `/build` directory, it's time to create it. Open up your terminal, make sure you're in your project directory, and use the command `npm run build`
#### Keep your secrets safe
If you're using API keys or a database connection string, hopefully you've already hidden them in a `.env` file. All the configuration that differs between deployed and local should go into this file as well. Tags cannot be proxied, so we have to hard-code the backend address when using the React dev server, but we want to use relative paths in production. Your resulting `.env` file might look something like this:
```
REACT_APP_LOGIN="http://localhost:3030/api/auth/login"
REACT_APP_LOGOUT="http://localhost:3030/api/auth/logout"
DOMAIN="user4234.auth0.com"
ID="46NxlCzM0XDE7T2upOn2jlgvoS"
SECRET="0xbTbFK2y3DIMp2TdOgK1MKQ2vH2WRg2rv6jVrMhSX0T39e5_Kd4lmsFz"
SUCCESS_REDIRECT="http://localhost:3030/"
FAILURE_REDIRECT="http://localhost:3030/api/auth/login"
```
```
AWS_ACCESS_KEY_ID=AKSFDI4KL343K55L3
AWS_SECRET_ACCESS_KEY=EkjfDzVrG1cw6QFDK4kjKFUa2yEDmPOVzN553kAANcy
```
```
CONNECTION_STRING="postgres://vuigx:k8Io13cePdUorndJAB2ijk_u0r4@stampy.db.elephantsql.com:5432/vuigx"
NODE_ENV=development
```
#### Push your code
Test out your app locally by going to [http://localhost:3030][2], replacing 3030 with your server port, to make sure everything still runs smoothly. Remember to start your local server with node or nodemon so it's up and running when you check it. Once everything looks good, we can push it to Github (or Bit Bucket, etc).
IMPORTANT! Before you do so, double-check that your `.gitignore` file contains `.env` and `/build`, so you're not publishing sensitive information or needless files.
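If either entry is missing, a quick way to add them (a sketch; adjust to taste) is:
```
# append the entries to .gitignore if they are not already there
$ echo ".env" >> .gitignore
$ echo "/build" >> .gitignore
```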
### Setting Up DigitalOcean
[DigitalOcean][8] is a leading hosting platform, and makes it relatively easy and cost-effective to deploy React sites. They utilize Droplets, which is the term they use for their servers. Before we create our Droplet, we still have a little work to do.
#### Creating SSH Keys
Servers are computers that have public IP addresses. Because of this, we need a way to tell the server who we are, so that we can do things we wouldn't want anyone else doing, like making changes to our files. Your everyday password won't be secure enough, and a password long and complex enough to protect your Droplet would be nearly impossible to remember. Instead, we'll use an SSH key.
![](https://cdn-images-1.medium.com/max/1000/1*qeGqHqXrV22_aBwFQ4WhaA.jpeg)
Photo by [Brenda Clarke][3]
To create your SSH key, enter this command in your terminal: `ssh-keygen -t rsa`
This starts the process of SSH key generation. First, you'll be asked to specify where to save the new key. Unless you already have a key you need to keep, you can keep the default location and simply press enter to continue.
As an added layer of security in case someone gets ahold of your computer, you'll have to enter a password to secure your key. Your terminal will not show your keystrokes as you type this password, but it is keeping track of it. Once you hit enter, you'll have to type it in once more to confirm. If successful, you should now see something like this:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in demo_rsa.
Your public key has been saved in demo_rsa.pub.
The key fingerprint is:
cc:28:30:44:01:41:98:cf:ae:b6:65:2a:f2:32:57:b5 user@user.local
The key's randomart image is:
+--[ RSA 2048]----+
|=*+. |
|o. |
| oo |
| oo .+ |
| . ....S |
| . ..E |
| . + |
|*.= |
|+Bo |
+-----------------+
```
#### What happened?
Two files have been created on your computer: `id_rsa` and `id_rsa.pub`. The `id_rsa` file is your private key and is used to verify your signature when you use the `id_rsa.pub` file, or public key. We need to give our public key to DigitalOcean. To get it, enter `cat ~/.ssh/id_rsa.pub`. You should now be looking at a long string of characters, which is the contents of your `id_rsa.pub` file. It looks something like this:
```
ssh-rsaAABC3NzaC1yc2EAAAADAQABAAABAQDR5ehyadT9unUcxftJOitl5yOXgSi2Wj/s6ZBudUS5Cex56LrndfP5Uxb8+Qpx1D7gYNFacTIcrNDFjdmsjdDEIcz0WTV+mvMRU1G9kKQC01YeMDlwYCopuENaas5+cZ7DP/qiqqTt5QDuxFgJRTNEDGEebjyr9wYk+mveV/acBjgaUCI4sahij98BAGQPvNS1xvZcLlhYssJSZrSoRyWOHZ/hXkLtq9CvTaqkpaIeqvvmNxQNtzKu7ZwaYWLydEKCKTAe4ndObEfXexQHOOKwwDSyesjaNc6modkZZC+anGLlfwml4IUwGv10nogVg9DTNQQLSPVmnEN3Z User@Computer.local
```
Now _that's_ a password! Copy the string manually, or use the command `pbcopy < ~/.ssh/id_rsa.pub` to have the terminal copy it for you.
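`pbcopy` is a macOS utility. On a Linux desktop you could get the same effect with `xclip`, assuming it is installed:
```
# copy the public key to the clipboard on Linux
$ xclip -selection clipboard < ~/.ssh/id_rsa.pub
```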
#### Adding your SSH Key to DigitalOcean
Login to your DigitalOcean account, or sign up if you haven't already. Go to your [Security Settings][9] and click on Add SSH. Paste in the key you copied and give it a name. You can name it whatever you like, but it's a good idea to reference the computer the key is saved on, especially if you use multiple computers regularly.
#### Creating a Droplet
![](https://cdn-images-1.medium.com/max/1000/1*dN9vn7lxBjtK72iV3CZZXw.jpeg)
Photo by [M. Maddo][4]
With the key in place, we can finally create our Droplet. To get started, click Create Droplet. You'll be asked to choose an OS, but for our purposes, the default Ubuntu will work just fine.
You'll need to select which size Droplet you want to use. In many cases, the smallest Droplet will do. However, review the available options and choose the one that will work best for your project.
Next, select a data center for your Droplet. Choose a location central to your expected visitor base. New features are rolled out by DigitalOcean in different data centers at different times, but unless you know you want to use a special feature that's only available in specific locations, this won't matter.
If you want to add additional services to your Droplet such as backups or private networking, you have that option here. Be aware, there is an associated cost for these services.
Finally, make sure your SSH key is selected and give your Droplet a name. It is possible to host multiple projects on a single Droplet, so you may not want to give it a project-specific name. Submit your settings by clicking the Create button at the bottom of the page.
#### Connecting to your Droplet
With our Droplet created, we can now connect to it via SSH. Copy the IP address for your Droplet and go back to your terminal. Enter ssh followed by root@youripaddress, where youripaddress is the IP address for your Droplet. It should look something like this: `ssh root@123.45.67.8`. This tells your computer that you want to connect to your IP address as the root user. Alternatively, you can [set up user accounts][10] if you don't want to log in as root, but it's not necessary.
#### Installing Node
![](https://cdn-images-1.medium.com/max/1000/1*3mKtwxfRWi8zxHs4EGcUMw.png)
To run React, we'll need an updated version of Node. First we want to run `apt-get update && apt-get dist-upgrade` to update the Linux software list. Next, enter `apt-get install nodejs -y`, `apt-get install npm -y`, and `npm i -g n` to install Node.js and npm.
Your React app dependencies might require a specific version of Node, so check the version that your project is using by running `node -v` in your project's directory. You'll probably want to do this in a different terminal tab so you don't have to log in through SSH again.
Once you know what version you need, go back to your SSH connection and run `n 6.11.2`, replacing 6.11.2 with your specific version number. This ensures your Droplet's version of Node matches your project and minimizes potential issues.
### Install your app to the Droplet
All the groundwork has been laid, and it's finally time to install our React app! While still connected through SSH, make sure you're in your home directory. You can enter `cd ~` to take you there if you're not sure.
To get the files to your Droplet, you're going to clone them from your Github repo. Grab the HTTP clone link from Github and in your terminal enter `git clone` followed by that link (e.g. [https://github.com/username/my-react-project.git][5]). Just like with your local project, cd into your project folder using `cd my-react-project` and then run `npm install`.
#### Don't ignore your ignored files
Remember that we told Git to ignore the `.env` file, so it won't be included in the code we just pulled down. We need to add it manually now. `touch .env` will create an empty `.env` file that we can then open in the nano editor using `nano .env`. Copy the contents of your local `.env` file and paste them into the nano editor.
We also told Git to ignore the build directory. That's because we were just testing the production build, but now we're going to build it again on our Droplet. Use `npm run build` to run this process again. If you get an error, check to make sure you have all of your dependencies listed in your `package.json` file. If there are any missing, npm install those packages.
#### Start it up!
Run your server with `node server/index.js` (or whatever your server file is named) to make sure everything is working. If it throws an error, check again for any missing dependencies that might not have been caught in the build process. If everything starts up, you should now be able to go to ipaddress:serverport to see your site: `123.45.67.8:3232`. If your server is running on port 80, which is the default, you can just use the IP address without specifying a port number: `123.45.67.8`
![](https://cdn-images-1.medium.com/max/1000/1*Hvs_Dqclz-uajcjmsgH4gA.jpeg)
Photo by [John Baker][6] on [Unsplash][7]
You now have a space on the internet to call your own! If you have purchased a domain name you'd like to use in place of the IP address, you can follow [DigitalOcean's instructions][11] on how to set this up.
#### Keep it running
Your site is live, but once you close the terminal, your server will stop. This is a problem, so we'll want to install some more software that will tell the server not to stop once the connection is terminated. There are some options for this, but let's use Process Manager 2 (PM2) for the sake of this article.
Kill your server if you haven't already and run `npm install -g pm2`. Once installed, we can tell it to run our server using `pm2 start server/index.js`
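If you also want your app to come back up after the Droplet reboots (not covered above), PM2 can generate and register a startup script; the first command prints an extra line it may ask you to run:
```
# register PM2 as a startup service and save the current process list
$ pm2 startup
$ pm2 save
```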
### Updating your code
At some point, you'll probably want to update your project, but luckily uploading changes is quick and easy. Once you push your code to Github, ssh into your Droplet and cd into your project directory. Because we cloned from Github initially, we don't need to provide any links this time. You can pull down the new code simply by running `git pull`.
To incorporate frontend changes, you will need to run the build process again with `npm run build`. If you've made changes to the server file, restart PM2 by running `pm2 restart all`. That's it! Your updates should be live now.
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.org/i-built-this-now-what-how-to-deploy-a-react-app-on-a-digitalocean-droplet-662de0fe3f48
作者:[Andrea Stringham ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@astringham
[1]:https://unsplash.com/photos/oZPwn40zCK4?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:http://localhost:3030/
[3]:https://www.flickr.com/photos/37753256@N08/
[4]:https://www.flickr.com/photos/14141796@N05/
[5]:https://github.com/username/my-react-project.git
[6]:https://unsplash.com/photos/3To9V42K0Ag?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[7]:https://unsplash.com/search/photos/key?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]:https://www.digitalocean.com/
[9]:https://cloud.digitalocean.com/settings/security
[10]:https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-14-04
[11]:https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars

View File

@ -0,0 +1,292 @@
Rock Solid React.js Foundations: A Beginner's Guide
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png)
React.js crash course
I've been working with React and React-Native for the last couple of months. I have already released two apps in production, [Kiven Aa][1] (React) and [Pollen Chat][2] (React Native). When I started learning React, I was searching for something (a blog, a video, a course, whatever) that didn't only teach me how to write apps in React. I also wanted it to prepare me for interviews.
Most of the material I found concentrated on one or the other. So, this post is aimed at the audience looking for a perfect mix of theory and hands-on. I will give you a little bit of theory so that you understand what is happening under the hood, and then I will show you how to write some React.js code.
If you prefer video, I have this entire course up on YouTube as well. Please check that out.
Let's dive in…
> React.js is a JavaScript library for building user interfaces
You can build all sorts of single page applications. For example, chat messengers and e-commerce portals where you want to show changes on the user interface in real-time.
### Everything's a component
A React app is comprised of components,  _a lot of them_ , nested into one another.  _But what are components, you may ask?_
A component is a reusable piece of code, which defines how certain features should look and behave on the UI. For example, a button is a component.
Let's look at the following calculator, which you see on Google when you try to calculate something like 2+2 = 4, minus 1 = 3 (quick maths!)
![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png)
Red markers denote components
As you can see in the image above, the calculator has many areas, like the _result display window_ and the _numpad_. All of these can be separate components or one giant component. It depends on how comfortable one is in breaking down and abstracting away things in React.
You write code for all such components separately. Then combine those under one container, which in turn is a React component itself. This way you can create reusable components and your final app will be a collection of separate components working together.
The following is one such way you can write the calculator, shown above, in React.
```
<Calculator>
<DisplayWindow />
<NumPad>
<Key number={1}/>
<Key number={2}/>
.
.
.
<Key number={9}/>
</NumPad>
</Calculator>
```
Yes! It looks like HTML code, but it isn't. We will explore more about it in the later sections.
### Setting up our Playground
This tutorial focuses on React's fundamentals. It is not primarily geared towards React for Web or [React Native][3] (for building mobile apps). So, we will use an online editor so as to avoid web- or native-specific configurations before even learning what React can do.
I've already set up an environment for you on [codepen.io][4]. Just follow the link and read all the comments in the HTML and JavaScript (JS) tabs.
### Controlling Components
We've learned that a React app is a collection of various components, structured as a nested tree. Thus, we require some sort of mechanism to pass data from one component to another.
#### Enter “props”
We can pass arbitrary data to our component using a `props` object. Every component in React gets this `props` object.
Before learning how to use this `props` object, let's learn about functional components.
#### a) Functional component
A functional component in React consumes arbitrary data that you pass to it using `props` object. It returns an object which describes what UI React should render. Functional components are also known as Stateless components.
Let's write our first functional component.
```
function Hello(props) {
return <div>{props.name}</div>
}
```
It's that simple. We just passed `props` as an argument to a plain JavaScript function and returned, _umm, well, what was that? That_ `<div>{props.name}</div>` _thing!_ It's JSX (JavaScript Extended). We will learn more about it in a later section.
This above function will render the following HTML in the browser.
```
<!-- If the "props" object is: {name: 'rajat'} -->
<div>
rajat
</div>
```
> Read the section below about JSX, where I have explained how did we get this HTML from our JSX code.
How can you use this functional component in your React app? Glad you asked! It's as simple as the following.
```
<Hello name='rajat' age={26}/>
```
The attribute `name` in the above code becomes `props.name` inside our `Hello`component. The attribute `age` becomes `props.age` and so on.
> Remember! You can nest one React component inside other React components.
Let's use this `Hello` component in our codepen playground. Replace the `div` inside `ReactDOM.render()` with our `Hello` component, as follows, and see the changes in the bottom window.
```
function Hello(props) {
return <div>{props.name}</div>
}
ReactDOM.render(<Hello name="rajat"/>, document.getElementById('root'));
```
> But what if your component has some internal state? For instance, like the following counter component, which has an internal count variable that changes on + and - key presses.
A React component with an internal state
#### b) Class-based component
The class-based component has an additional property, `state`, which you can use to hold a component's private data. We can rewrite our `Hello` component using class notation as follows. Since these components have a state, they are also known as Stateful components.
```
class Counter extends React.Component {
// this method should be present in your component
render() {
return (
<div>
{this.props.name}
</div>
);
}
}
```
We extend `React.Component` class of React library to make class-based components in React. Learn more about JavaScript classes [here][5].
The `render()` method must be present in your class as React looks for this method in order to know what UI it should render on screen.
To use this sort of internal state, we first have to initialize the `state` object in the constructor of the component class, in the following way.
```
class Counter extends React.Component {
constructor() {
super();
// define the internal state of the component
this.state = {name: 'rajat'}
}
render() {
return (
<div>
{this.state.name}
</div>
);
}
}
// Usage:
// In your react app: <Counter />
```
Similarly, the `props` can be accessed inside our class-based component using `this.props` object.
To set the state, you use `React.Component`'s `setState()`. We will see an example of this, in the last part of this tutorial.
> Tip: Never call `setState()` inside `render()` function, as `setState()` causes component to re-render and this will result in endless loop.
![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png)
A class-based component has an optional property “state”.
Apart from `state`, a class-based component has some life-cycle methods like `componentWillMount()`. You can use these to do things like initializing the `state`, but that is out of the scope of this post.
### JSX
JSX is a short form of _JavaScript Extended_ and it is a way to write `React` components. Using JSX, you get the full power of JavaScript inside XML-like tags.
You put JavaScript expressions inside `{}`. The following are some valid JSX examples.
```
<button disabled={true}>Press me!</button>
<button disabled={true}>Press me {3+1} times!</button>;
<div className='container'><Hello /></div>
```
The way it works is that you write JSX to describe what your UI should look like. A [transpiler][6] like `Babel` converts that code into a bunch of `React.createElement()` calls. The React library then uses those `React.createElement()` calls to construct a tree-like structure of DOM elements (in the case of React for Web) or Native views (in the case of React Native) and keeps it in memory.
React then calculates how it can effectively mimic this tree in the memory of the UI displayed to the user. This process is known as [reconciliation][7]. After that calculation is done, React makes the changes to the actual UI on the screen.
![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png)
How React converts your JSX into a tree which describes your app's UI
You can use [Babel's online REPL][8] to see what React actually outputs when you write some JSX.
![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png)
Use Babel REPL to transform JSX into plain JavaScript
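If you would rather do the same transformation locally instead of in the online REPL, a rough sketch with the Babel CLI looks like this (assuming Node.js is installed; the package names below are the current Babel 7 names and may differ for older setups, and `app.jsx` is a hypothetical file):
```
# install Babel and the React preset locally, then transpile a JSX file to plain JS
$ npm install --save-dev @babel/core @babel/cli @babel/preset-react
$ npx babel app.jsx --presets @babel/preset-react
```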
> Since JSX is just a syntactic sugar over plain `React.createElement()` calls, React can be used without JSX.
Now we have every concept in place, so we are well positioned to write a `counter` component that we saw earlier as a GIF.
The code is as follows and I hope that you already know how to render that in our playground.
```
class Counter extends React.Component {
constructor(props) {
super(props);
this.state = {count: this.props.start || 0}
// the following bindings are necessary to make `this` work in the callback
this.inc = this.inc.bind(this);
this.dec = this.dec.bind(this);
}
inc() {
this.setState({
count: this.state.count + 1
});
}
dec() {
this.setState({
count: this.state.count - 1
});
}
render() {
return (
<div>
<button onClick={this.inc}>+</button>
<button onClick={this.dec}>-</button>
<div>{this.state.count}</div>
</div>
);
}
}
```
The following are some salient points about the above code.
1. JSX uses `camelCasing` hence `button`'s attribute is `onClick`, not `onclick`, as we use in HTML.
2. Binding is necessary for `this` to work in callbacks. See the two `bind()` calls in the constructor of the code above.
The final interactive code is located [here][9].
With that, we've reached the conclusion of our React crash course. I hope I have shed some light on how React works and how you can use React to build bigger apps, using smaller and reusable components.
* * *
If you have any queries or doubts, hit me up on Twitter [@rajat1saxena][10] or write to me at [rajat@raynstudios.com][11].
* * *
#### Please recommend this post, if you liked it and share it with your network. Follow me for more tech related posts and consider subscribing to my channel [Rayn Studios][12] on YouTube. Thanks a lot.
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923
作者:[Rajat Saxena ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@rajat1saxena
[1]:https://kivenaa.com/
[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android
[3]:https://facebook.github.io/react-native/
[4]:https://codepen.io/raynesax/pen/MrNmBM
[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler
[7]:https://reactjs.org/docs/reconciliation.html
[8]:https://babeljs.io/repl
[9]:https://codepen.io/raynesax/pen/QaROqK
[10]:https://twitter.com/rajat1saxena
[11]:mailto:rajat@raynstudios.com
[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw

View File

@ -0,0 +1,134 @@
怎样完整地更新并升级基于Debian的离线操作系统
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Upgrade-Offline-Debian-based-Systems-2-720x340.png)
不久之前我已经向你展示了如何在任意[ **离线的Ubuntu**][1] 操作系统和任意 [**离线的Arch Linux**][2] 操作系统上安装软件。 今天我们将会看看如何完整地更新并升级基于Debian(Debian-based)的离线操作系统。 和之前所述方法的不同之处在于,(这次)我们将会升级整个操作系统(落后的软件包),而不是单个的软件包。这个方法在你没有没有网络链接或拥有的网络速度很慢的时候十分有用。
### 完整更新并升级基于Debian的离线操作系统
首先假设你家里拥有一个正在运行并配置有高速互联网链接的系统Windows 或者 Linux和一个没有网络链接或网络很慢例如拨号网络的 Debian 或 Debian 衍生版本系统。现在,如果你想要更新你的离线家用操作系统,该怎么办?购买一个更加高速的网络链接?根本不需要!你仍然可以通过互联网更新、升级你的离线操作系统。这正是 **Apt-Offline** 工具可以帮助你做到的。
正如其名apt-offline 是一个为Debian和Debian衍生发行版诸如UbuntuLinux Mint这样基于APT的操作系统提供的离线APT包管理器。使用apt-offline我们可以完整地更新/升级我们的Debian系统而不需要网络链接。这个程序是由Python编程语言写成的兼具CLI和图形接口的跨平台工具。
#### 准备工作
* 一个已经联网的操作系统(Windows或者Linux)。在这份手册中,为了便于理解,我们将之称为在线操作系统(online system)。
* 一个离线操作系统(Debian或者Debian衍生版本)。我们称之为离线操作系统(offline system)。
* 有足够空间容纳所有更新包的USB驱动器或者外接硬盘。
#### 安装
Apt-Offline 可以在 Debian 及其衍生版本的默认仓库中获得。如果你的在线操作系统运行的是 Debian、Ubuntu、Linux Mint 等基于 DEB 的操作系统你可以通过下面的命令安装 Apt-Offline
```shell
sudo apt-get install apt-offline
```
如果你的在线操作系统运行的是非Debian类的发行版使用git clone获取Apt-Offline仓库:
```shell
git clone https://github.com/rickysarraf/apt-offline.git
```
切换到克隆的目录下并在此处运行。
```shell
cd apt-offline/
```
```shell
sudo ./apt-offline
```
#### 离线操作系统上的步骤(没有联网的操作系统)
到你的离线操作系统上创建一个你想存储签名文件的目录
```shell
mkdir ~/tmp
```
```shell
cd ~/tmp/
```
你可以自己选择使用任何目录。接下来,运行下面的命令生成签名文件:
```shell
sudo apt-offline set apt-offline.sig
```
示例输出如下:
```shell
Generating database of files that are needed for an update.
Generating database of file that are needed for operation upgrade
```
默认条件下apt-offline 将会生成更新和升级这两种操作所需文件的数据库。你也可以使用 `--update` 或者 `--upgrade` 选项,只创建其中一种(更新或升级)所需文件的数据库。
拷贝完整的**tmp**目录到你的USB驱动器或者或者外接硬盘上然后换到你的在线操作系统(有网络链接的操作系统)。
#### 在线操作系统上的步骤
插入你的USB驱动器然后进入临时文件夹
```shell
cd tmp/
```
然后,运行如下命令:
```shell
sudo apt-offline get apt-offline.sig --threads 5 --bundle apt-offline-bundle.zip
```
在这里的"-threads 5"代表着APT仓库的数目.。如果你想要从更多的仓库下载软件包,你可以增加这里的数值。然后 "-bundle apt-offline-bundle.zip" 选项表示所有的软件包将会打包到一个叫做**apt-offline-bundle.zip**的单独存档中。这个存档文件将会被保存在当前的工作目录中。
上面的命令将会按照之前在离线操作系统上生成的签名文件下载数据。
[![][3]][4]
根据你的网络状况这个操作将会花费几分钟左右的时间。请记住apt-offline是跨平台的所以你可以在任何操作系统上使用它下载包。
一旦下载完成,拷贝**tmp**文件夹到你的USB 或者外接硬盘上并且返回你的离线操作系统。千万保证你的USB驱动器上有足够的空闲空间存储所有的下载文件因为所有的包都放在**tmp**文件夹里了。
#### 离线操作系统上的步骤
把你的设备插入你的离线操作系统,然后切换到你之前下载了所有包的**tmp**目录下。
```shell
cd tmp
```
然后,运行下面的命令来安装所有下载好的包。
```shell
sudo apt-offline install apt-offline-bundle.zip
```
这个命令将会更新APT数据库所以APT将会找到在APT缓冲里所有需要的包。
**注意事项:** 如果在线和离线操作系统都在同一个局域网中,你可以通过"scp"或者其他传输应用程序将**tmp**文件传到离线操作系统中。如果两个操作系统在不同的位置(译者注:意指在不同的局域网)那就使用USB设备来拷贝(就可以了)。
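如果你选择使用 scp 传输,一个简单的示例如下(其中的主机名和用户名仅作演示,请替换为你自己的):
```shell
# 将整个 tmp 目录递归拷贝到离线主机的家目录
scp -r ~/tmp user@offline-host:~/
```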
好了大伙儿,现在就这么多了。 希望这篇指南对你有用。还有更多好东西正在路上。敬请关注!
祝你愉快!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/fully-update-upgrade-offline-debian-based-systems/
作者:[SK][a]
译者:[Leemeans](https://github.com/leemeans)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/install-softwares-offline-ubuntu-16-04/
[2]:https://www.ostechnix.com/install-packages-offline-arch-linux/
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/apt-offline.png

View File

@ -0,0 +1,47 @@
containerd 1.0 探索之旅
======
![containerd][1]
我们在过去的文章中讨论了一些 containerd 的不同特性它是如何设计的以及随着时间推移已经修复的一些问题。Containerd 是被用于 Docker、Kubernetes CRI、以及一些其它的项目在这些平台中事实上都使用了 containerd而许多人并不知道 containerd 存在于这些平台之中,这篇文章就是为这些人所写的。我想写更多的关于 containerd 的设计以及特性集方面的文章,但是现在,我们从它的基础知识开始。
我认为容器生态系统有时候可能很复杂,尤其是我们所使用的技术。它是什么?一个运行时,还是别的?一个运行时…… containerd它的发音是 “container-dee”)正如它的名字,它是一个容器守护进程,而不是一些人所“传说”的那样。它最初是作为 OCI 运行时(就像 runc 一样)的集成点构建的,在过去的六个月中它增加了许多特性,使其达到了像 Docker 这样的现代容器平台以及像 Kubernetes 这样的编排平台的需求。
那么,你使用 containerd 能去做些什么呢?你可以推送或拉取功能以及镜像管理。可以获得容器生命周期 APIs 去创建、运行、以及管理容器和它们的任务。一个完整的专门用于快照管理的 API以及一个公开管理的项目。如果你需要去构建一个容器平台基本上你不需要去处理任何底层操作系统细节方面的事情。我认为关于 containerd 中最重要的部分是,它有一个版本化的并且有 bug 修复和安全补丁的稳定 API。
![containerd][2]
由于在内核中并没有太多的用作 Linux 容器的东西,因此容器是多种内核特性捆绑在一起的,当你构建一个大型平台或者分布式系统时,你需要在你的管理代码和系统调用之间构建一个抽象层,然后将这些特性捆绑粘接在一起去运行一个容器。而这个抽象层就是 containerd 的所在之处。它为稳定类型的平台层提供了一个客户端,这样平台可以构建在顶部而无需进入到内核级。因此,可以让使用容器、任务和快照类型的工作相比通过管理调用去 clone() 或者 mount() 要友好得多。与灵活性相平衡,直接与运行时或者宿主机交互,这些对象避免了常规的高级抽象所带来的性能牺牲。结果是简单的任务很容易完成,而困难的任务也变得更有可能完成。
![containerd][3]Containerd 被设计用于 Docker 和 Kubernetes、以及想去抽象出系统调用或者在 Linux、Windows、Solaris、 以及其它的操作系统上特定的功能去运行容器的其它的容器系统。考虑到这些用户的想法,我们希望确保 containerd 只拥有它们所需要的东西,而没有它们不希望的东西。事实上这是不太可能的,但是至少我们想去尝试一下。虽然网络不在 containerd 的范围之内,它并不能做到高级系统完全控制的那些东西。原因是,当你构建一个分布式系统时,网络是非常重要的方面。现在,对于 SDN 和服务发现,在 Linux 上,相比于抽象出 netlink 调用网络是更特殊的平台。大多数新的网络都是基于路由的并且每次一个新的容器被创建或者删除时都会请求更新路由表。服务发现、DNS 等等都需要及时通知到这些改变。如果在 containerd 中添加对网络的管理,为了能够支持不同的网络接口、钩子、以及集成点,将会在 containerd 中增加很大的一块代码。而我们的选择是,在 containerd 中做一个健壮的事件系统,以便于很多的消费者可以去订阅它们所关心的事件。我们也公开发布了一个 [任务 API ][4],它可以让用户去创建一个运行任务,也可以在一个容器的网络命名空间中添加一个接口,以及在一个容器的生命周期中的任何时候,无需复杂的 hooks 来启用容器的进程。
在过去的几个月中另一个添加到 containerd 中的领域是完整的存储,以及支持 OCI 和 Docker 镜像格式的分布式系统。你有一个跨 containerd API 的完整的目录地址存储系统,它不仅适用于镜像,也适用于元数据、检查点、以及附加到容器的任何数据。
我们也花时间去 [重新考虑如何使用 "图形驱动" 工作][5]。这些是叠加的或者允许镜像分层的块级文件系统,以使你执行的构建更加高效。当我们添加对 devicemapper 的支持时,图形驱动最初是由 Solomon 和我写的。Docker 在那个时候仅支持 AUFS因此我们在叠加文件系统之后对图形驱动进行建模。但是做一个像 devicemapper/lvm 这样的块级文件系统,就如同一个堆叠文件系统一样,从长远来看是非常困难的。这些接口必须基于时间的推移进行扩展,以支持我们最初认为并不需要的那些不同的特性。对于 containerd我们使用了一个不同的方法像快照一样做一个堆叠文件系统而不是相反。这样做起来更容易因为堆叠文件系统比起像 BTRFS、ZFS、以及 devicemapper 这样的文件系统提供了更好的灵活性。因为这些文件系统没有严格的父/子关系。这有助于我们去构建出 [快照的一个小型接口][6],同时还能满足 [构建者][7] 的要求,还能减少了需要的代码数量,从长远来看这样更易于维护。
![][8]
你可以在 [Stephen Day's Dec 7th 2017 KubeCon SIG Node presentation][9]上找到更多关于 containerd 的架构方面的详细资料。
除了在 1.0 代码库中的技术和设计上的更改之外,我们也将 [containerd 管理模式从长期 BDFL 转换为技术委员会][10],为社区提供一个独立的可信任的第三方资源。
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/12/containerd-ga-features-2/
作者:[Michael Crosby][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/michael/
[1]:https://i0.wp.com/blog.docker.com/wp-content/uploads/950cf948-7c08-4df6-afd9-cc9bc417cabe-6.jpg?resize=400%2C120&amp;amp;ssl=1
[2]:https://i1.wp.com/blog.docker.com/wp-content/uploads/4a7666e4-ebdb-4a40-b61a-26ac7c3f663e-4.jpg?resize=906%2C470&amp;amp;ssl=1 "containerd"
[3]:https://i1.wp.com/blog.docker.com/wp-content/uploads/2a73a4d8-cd40-4187-851f-6104ae3c12ba-1.jpg?resize=1140%2C680&amp;amp;ssl=1
[4]:https://github.com/containerd/containerd/blob/master/api/services/tasks/v1/tasks.proto
[5]:https://blog.mobyproject.org/where-are-containerds-graph-drivers-145fc9b7255
[6]:https://github.com/containerd/containerd/blob/master/api/services/snapshots/v1/snapshots.proto
[7]:https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
[8]:https://i1.wp.com/blog.docker.com/wp-content/uploads/d0fb5eb9-c561-415d-8d57-e74442a879a2-1.jpg?resize=1140%2C556&amp;amp;ssl=1
[9]:https://speakerdeck.com/stevvooe/whats-happening-with-containerd-and-the-cri
[10]:https://github.com/containerd/containerd/pull/1748

View File

@ -1,76 +0,0 @@
互联网化疗
======
译注:本文作者 janit0r 被认为是 BrickerBot 病毒的作者。此病毒会攻击物联网上安全性不足的设备并使其断开和其他网络设备的连接。janit0r 宣称他使用这个病毒的目的是保护互联网的安全避免这些设备被入侵者用于入侵网络上的其他设备。janit0r 称此项目为”互联网化疗“。janit0r 决定在 2017 年 12 月终止这个项目,并在网络上发表了这篇文章。
12/10 2017
### 1. 互联网化疗
互联网化疗是在 2016 年 11 月 到 2017 年 12 月之间的一个为期 13 个月的项目。它曾被称为“BrickerBot”、“错误的固件升级”、“勒索软件”、“大规模网络瘫痪”甚至“前所未有的恐怖行为”。最后一个有点伤人了费尔南德斯译注委内瑞拉电信公司 CANTV 的光纤网络曾在 2017 年 8 月受到病毒攻击,公司董事长曼努埃尔·费尔南德斯称这次攻击为[“前所未有的恐怖行为”][1]),但我想我大概不能让所有人都满意吧。
你可以从 http://91.215.104.140/mod_plaintext.py 下载我的代码模块,它可以基于 http 和 telnet 发送恶意请求(译注:这个链接已经失效,不过在 [Github][2] 上有备份)。因为平台的限制,模块里是混淆过的单线程 Python 代码但请求的内容依然是明文任何合格的程序员应该都能看得懂。看看这里面有多少恶意请求、0-day 漏洞和入侵技巧,花点时间让自己接受现实。然后想象一下,如果我是一个黑客,致力于创造出强大的 DDoS 生成器来勒索那些最大的互联网服务提供商和公司的话,互联网在 2017 年会受到怎样的打击。我完全可以让他们全部陷入混乱,并同时对整个互联网造成巨大的伤害。
我的 ssh 爬虫太危险了,不能发布出来。它包含很多层面的自动化,可以只利用一个被入侵的路由器就能够在设计有缺陷的互联网服务提供商的网络上平行移动并加以入侵。正是因为我可以征用数以万计的互联网服务提供商的路由器,而这些路由器让我知晓网络上发生的事情并给我源源不断的节点用来进行入侵行动,我才得以进行我的反物联网僵尸网络项目。我于 2015 年开始了我的非破坏性的提供商网络清理项目,于是当 Mirai 病毒入侵时我已经做好了准备来做出回应。主动破坏其他人的设备仍然是一个困难的决定,但无比危险的 CVE-2016-10372 漏洞让我别无选择。从那时起我就决定一不做二不休。
(译注:上一段中提到的 Mirai 病毒首次出现在 2016 年 8 月。它可以远程入侵运行 Linux 系统的网络设备并利用这些设备构建僵尸网络。本文作者 janit0r 宣称当 Mirai 入侵时他利用自己的 BrickerBot 病毒强制将数以万计的设备从网络上断开,从而减少 Mirai 病毒可以利用的设备数量。)
我在此警告你们,我所做的只是权宜之计,它并不足以在未来继续拯救互联网。坏人们正变得更加聪明,可能有漏洞的设备数量在持续增加,发生大规模的、能使网络瘫痪的事件只是时间问题。如果你愿意相信我曾经在一个持续 13 个月的项目中使上千万有漏洞的设备变得无法使用,那么不过分地说,如此严重的事件本可能在 2017 年就发生。
__你们应该意识到只需要再有一两个严重的物联网漏洞我们的网络就会严重瘫痪。__ 考虑到我们的社会现在是多么依赖数字网络,而计算机安全应急响应组和互联网服务提供商们又是多么地忽视问题的严重性,这种事件造成的伤害是无法估计的。互联网服务提供商在持续地部署有开放的控制端口的设备,而且即使像 Shodan 这样的服务可以轻而易举地发现这些问题,我国的计算机安全应急响应组还是似乎并不在意。而很多国家甚至都没有自己的计算机安全应急响应组。世界上许多的互联网服务提供商都没有雇佣任何熟知计算机安全问题的人,而是在出现问题的时候依赖于外来的专家来解决。我曾见识过大型互联网服务提供商在我的僵尸网络的调节之下连续多个月持续受损,但他们还是不能完全解决漏洞(几个好的例子是 BSNL、Telkom ZA、PLDT、某些时候的 PT Telkom以及南半球大部分的大型互联网服务提供商。只要看看 Telkom ZA 解决他们的 Aztech 调制解调器问题的速度有多慢,你就会开始理解现状有多么令人绝望。在 99% 的情况下,要解决这个问题只需要互联网服务提供商部署合理的访问控制列表并把部署在用户端的设备单独分段,但是几个月过去之后他们的技术人员依然没有弄明白。如果互联网服务提供商在经历了几周到几个月的针对他们设备的蓄意攻击之后仍然无法解决问题,我们又怎么能期望他们会注意到并解决 Mirai 在他们网络上造成的问题呢?世界上许多最大的互联网服务提供商对这些事情无知得令人发指,而这毫无疑问是最大的危险,但奇怪的是,这应该也是最容易解决的问题。
我已经尽自己的责任试着去让互联网多坚持一段时间,但我已经尽力了。接下来要交给你们了。即使很小的行动也是非常重要的。你们能做的事情有:
* 使用像 Shodan 之类的服务来检查你的互联网服务提供商的安全性,并驱使他们去解决他们网络上开放的 telnet、http、httpd、ssh 和 tr069 等端口。如果需要的话,可以把这篇文章给他们看。从来不存在什么好的理由来让这些端口可以从外界访问。开放控制端口是业余人士的错误。如果有足够的客户抱怨,他们也许真的会采取行动!
* 用你的钱包投票!拒绝购买或使用任何“智能“产品,除非制造商保证这个产品能够而且将会收到及时的安全更新。在把你辛苦赚的钱交给提供商之前,先去查看他们的安全记录。为了更好的安全性,可以多花一些钱。
* 游说你本地的政治家和政府官员让他们改进法案来规范物联网设备包括路由器、IP 照相机和各种”智能“设备。不论私有还是公有的公司目前都没有足够的动机去在短期内解决问题。这件事情和汽车或者通用电气的安全标准一样重要。
* 考虑给像 GDI 基金会或者 Shadowserver 基金会这种缺少支持的白帽黑客组织贡献你的时间或者其他资源。这些组织和人能产生巨大的影响,并且他们可以很好地发挥你的能力来帮助互联网。
* 最后,虽然希望不大,但可以考虑通过设立法律先例来让物联网设备成为一种”<ruby>诱惑性危险品<rt>attractive nuisance</rt></ruby>译注attractive nuisance 是美国法律中的一个原则,意思是如果儿童在私人领地上因为某些对儿童有吸引力的危险物品而受伤,领地的主人需要负责,无论受伤的儿童是否是合法进入领地)。如果一个房主可以因为小偷或者侵入者受伤而被追责,我不清楚为什么设备的主人(或者互联网服务提供商和设备制造商)不应该因为他们的危险的设备被远程入侵所造成的伤害而被追责。连带责任原则应该适用于对设备应用层的入侵。如果任何有钱的大型互联网服务提供商不愿意为设立这样的先例而出钱(他们也许的确不会,因为他们害怕这样的先例会反过来让自己吃亏),我们甚至可以在这里还有在欧洲为这个行动而众筹。互联网服务提供商们:把你们在用来应对 DDoS 的带宽上省下的可观的成本当做我为这个目标的间接投资,也当做它的好处的证明吧。
### 2. 时间线
下面是这个项目中一些值得纪念的事件:
* 2016 年 11 月底的德国电信 Mirai 事故。我匆忙写出的最初的 TR069/64 请求只执行了 `route del default`,不过这已经足够引起互联网服务提供商去注意这个问题,而它引发的新闻头条警告了全球的其他互联网服务提供商来注意这个迫近的危机。
* 大约 1 月 11 日 到 12 日,一些位于华盛顿特区的开放了 6789 控制端口的硬盘录像机被 Mirai 入侵并瘫痪,这上了很多头条新闻。我要给 Vemulapalli 点赞,她居然认为 Mirai 加上 /dev/urandom 一定是“非常复杂的勒索软件”译注Archana Vemulapalli 当时是华盛顿市政府的 CTO。欧洲的那两个可怜人又怎么样了呢
* 2017 年 1 月底发生了第一起真正的大规模互联网服务提供商下架事件。Rogers Canada 的提供商 Hitron 非常粗心地推送了一个在 2323 端口上监听的无验证的 root shell这可能是一个他们忘记关闭的 debug 接口)。这个惊天的失误很快被 Mirai 的僵尸网络所发现,造成大量设备瘫痪。
* 在 2017 年 2 月,我注意到 Mirai 在这一年里的第一次扩张Netcore/Netis 以及基于 Broadcom CLI 的调制解调器都遭受了攻击。BCM CLI 后来成为了 Mirai 在 2017 年的主要战场黑客们和我自己都在这一年的余下时间里花大量时间寻找无数互联网服务提供商和设备制造商设置的默认密码。前面代码中的“broadcom”请求内容也许看上去有点奇怪但它们是统计角度上最可能禁用那些大量的有问题的 BCM CLI 固件的序列。
* 在 2017 年 3 月,我大幅提升了我的僵尸网络的节点数量并开始加入更多的网络请求。这是为了应对包括 Imeij、Amnesia 和 Persirai 在内的僵尸网络的威胁。大规模地禁用这些被入侵的设备也带来了新的一些问题。比如在 Avtech 和 Wificam 设备所泄露的登录信息当中,有一些用户名和密码非常像是用于机场和其他重要设施的,而英国政府官员大概在 2017 年 4 月 1 日关于针对机场和核设施的“实际存在的网络威胁”做出过警告。哎呀。
* 这种更加激进的排查还引起了民间安全研究者的注意,安全公司 Radware 在 2017 年 4 月 6 日发表了一篇关于我的项目的文章。这个公司把它叫做“BrickerBot”。显然如果我要继续增加我的物联网防御措施的规模我必须想出更好的网络映射与检测方法来应对蜜罐或者其他有风险的目标。
* 2017 年 4 月 11 日左右的时候发生了一件非常不寻常的事情。一开始这看上去和许多其他的互联网服务提供商下架事件相似,一个叫 Sierra Tel 的半本地的互联网服务提供商在一些 Zyxel 设备上使用了默认的 telnet 用户名密码 supervisor/zyad1234。一个 Mirai 使用者发现了这些有漏洞的设备而我的僵尸网络紧随其后2017年精彩绝伦的 BCM CLI 战争又开启了新的一场战斗。这场战斗并没有持续很久。它本来会和 2017 年的其他数百起互联网服务提供商下架事件一样,如果不是在尘埃落定之后发生的那件非常不寻常的事情的话。令人惊奇的是,这家互联网服务提供商并没有试着把这次网络中断掩盖成某种网络故障、电力超额或错误的固件升级。他们完全没有对客户说谎。相反,他们很快发表了新闻公告,说他们的调制解调器有漏洞,这让他们的客户们得以评估自己可能遭受的风险。这家全世界最诚实的互联网服务提供商为他们值得赞扬的公开行为而收获了什么呢?悲哀的是,它得到的只是批评和不好的名声。这依然是我记忆中最令人沮丧的“为什么我们得不到好东西”的例子,这很有可能也是为什么 99% 的安全错误都被掩盖而真正的受害者被蒙在鼓里的最主要原因。太多时候,“有责任心的信息公开”会直接变成“粉饰太平”的委婉说法。
* 在 2017 年 4 月 14 日国土安全部关于“BrickerBot 对物联网的威胁”做出了警告,我自己的政府把我作为一个网络威胁这件事让我觉得他们很不公平而且目光短浅。跟我相比,对美国人民威胁最大的难道不应该是那些部署缺乏安全性的网络设备的提供商和贩卖不成熟的安全方案的物联网设备制造商吗?如果没有我,数以百万计的人们可能还在用被入侵的设备和网络来处理银行业务和其他需要保密的交易。如果国土安全部里有人读到这篇文章,我强烈建议你重新考虑一下保护国家和公民究竟是什么意思。
* 在 2017 年 4 月底,我花了一些时间改进我的 TR069/64 攻击方法,然后在 2017 年 5 月初,一个叫 Wordfence 的公司(现在叫 Defiant报道称一个曾给 Wordpress 网站造成威胁的基于入侵 TR069 的僵尸网络很明显地衰减了。值得注意的是,同一个僵尸网络在几星期后使用了一个不同的入侵方式暂时回归了(不过这最终也同样被化解了)。
* 在 2017 年 5 月,主机公司 Akamai 在它的 2017 年第一季度互联网现状报告中写道,相比于 2016 年第一季度,大型(超过 100 GbpsDDoS 攻击数减少了 89%,而总体 DDoS 攻击数减少了 30%。鉴于大型 DDoS 攻击是 Mirai 的主要手段,我觉得这给这些月来在物联网领域的辛苦劳动提供了实际的支持。
* 在夏天我持续地改进我的入侵技术军火库,然后在 7 月底我针对亚太互联网络信息中心的互联网服务提供商进行了一些测试。测试结果非常令人吃惊。造成的影响之一是数十万的 BSNL 和 MTNL 调制解调器被禁用而这次中断事故在印度成为了头条新闻。考虑到当时在印度和中国之间持续升级的地缘政治压力我觉得这个事故有很大的风险会被归咎于中国所为于是我很罕见地决定公开承认是我所做。Catalin我很抱歉你在报道这条新闻之后突然被迫放的“两天的假期”。
* 在处理过亚太地区和非洲地区的互联网络信息中心之后,在 2017 年 8 月 9 日我又针对拉丁美洲与加勒比地区互联网络信息中心进行了大规模的清理,给这个大洲的许多提供商造成了问题。在数百万的 Movilnet 的手机用户失去连接之后,这次攻击在委内瑞拉被大幅报道。虽然我个人反对政府监管互联网,委内瑞拉的这次情况值得注意。许多拉美与加勒比地区的提供商与网络曾在我的僵尸网络的持续调节之下连续数个月逐渐衰弱,但委内瑞拉的提供商很快加强了他们的网络防护并确保了他们的网络设施的安全。我认为这是由于委内瑞拉相比于该地区的其他国家来说进行了更具侵入性的深度包检测。值得思考一下。
* F5 实验室在 2017 年 8 月发布了一个题为“狩猎物联网:僵尸物联网的崛起”的报告,研究者们在其中对近期 telnet 活动的平静表达了困惑。研究者们猜测这种活动的减少也许证实了一个或多个非常大型的网络武器正在成型(我想这大概确实是真的)。这篇报告是在我印象中对我的项目的规模最准确的评估,但神奇的是,这些研究者们什么都推断不出来,尽管他们把所有相关的线索都集中到了一页纸上。
* 2017 年 8 月Akamai 的 2017 年第二季度互联网现状报告宣布这是三年以来首个该提供商没有发现任何大规模(超过 100 Gbps攻击的季度而且 DDoS 攻击总数相比 2017 年第一季度减少了 28%。这看上去给我的清理工作提供了更多的支持。这个出奇的好消息被主流媒体所完全忽视了,这些媒体有着“流血的才是好新闻”的心态,即使是在信息安全领域。这是我们为什么不能得到好东西的又一个原因。
* 在 CVE-2017-7921 和 7923 于 2017 年 9 月发表之后,我决定更密切地关注海康威视公司的设备,然后我惊恐地发现有一个黑客们还没有发现的方法可以用来入侵有漏洞的固件。于是我在 9 月中旬开启了一个全球范围的清理行动。超过一百万台硬盘录像机和摄像机(主要是海康威视和大华出品)在三周内被禁用,然后包括 IPVM.com 在内的媒体为这些攻击写了多篇报道。大华和海康威视在新闻公告中提到或暗示了这些攻击。大量的设备总算得到了固件升级。看到这次清理活动造成的困惑,我决定给这些闭路电视制造商写一篇[简短的总结][3](请原谅在这个粘贴板网站上的不和谐的语言)。这令人震惊的有漏洞而且在关键的安全补丁发布之后仍然在线的设备数量应该能够唤醒所有人,让他们知道现今的物联网补丁升级过程有多么无力。
* 2017 年 9 月 28 日左右Verisign 发表了报告称 2017 年第二季度的 DDoS 攻击数相比第一季度减少了 55%,而且攻击峰值大幅减少了 81%。
* 2017 年 11 月 23 日CDN 供应商 Cloudflare 报道称“近几个月来Cloudflare 看到试图用垃圾流量挤满我们的网络的简单进攻尝试有了大幅减少”。Cloudflare 推测这可能和他们的政策变化有一定关系,但这些减少也和物联网清理行动有着明显的重合。
* 2017 年 11 月底Akamai 的 2017 年第三季度互联网现状报告称 DDoS 攻击数较前一季度小幅增加了 8%。虽然这相比 2016 年的第三季度已经减少了很多,但这次小幅上涨提醒我们危险仍然存在。
* 作为潜在危险的更进一步的提醒一个叫做“Satori”的新的 Mirai 变种于 2017 年 11 月至 12 月开始冒头。这个僵尸网络仅仅通过一个 0-day 漏洞而达成的增长速度非常值得注意。这起事件凸显了互联网的危险现状以及我们为什么距离大规模的事故只差一两起物联网入侵。当下一次威胁发生而且没人阻止的时候会发生什么Sinkholing 和其他的白帽或“合法”的缓解措施在 2018 年不会有效,就像它们在 2016 年也不曾有效一样。也许未来各国政府可以合作创建一个国际范围的反黑客特别部队来应对特别严重的会影响互联网存续的威胁,但我并不抱太大期望。
* 在年末出现了一些危言耸听的新闻报道有关被一个被称作“Reaper”和“IoTroop”的新的僵尸网络。我知道你们中有些人最终会去嘲笑那些把它的规模估算为一两百万的人但你们应该理解这些网络安全研究者们对网络上发生的事情以及不由他们掌控的硬件的事情都是一知半解。实际来说研究者们不可能知道或甚至猜测到大部分有漏洞的设备在僵尸网络出现时已经被禁用了。给“Reaper”一两个新的未经处理的 0-day 漏洞的话,它就会变得和我们最担心的事情一样可怕。
### 3. 临别赠言
我很抱歉把你们留在这种境况当中,但我的人身安全受到的威胁已经不允许我再继续下去。我树了很多敌人。如果你想要帮忙,请看前面列举的该做的事情。祝你好运。
也会有人批评我,说我不负责任,但这完全找错了重点。真正的重点是如果一个像我一样没有黑客背景的人可以做我做到的事情,那么一个比我厉害的人可以在 2017 年对互联网做比这要可怕太多的事情。我并不是问题本身,我也不是来遵循任何人制定的规则的。我只是报信的人。你越早意识到这点越好。
-Dr Cyborkian 又名 janit0r“病入膏肓”的设备的调节者。
--------------------------------------------------------------------------------
viahttps://ghostbin.com/paste/q2vq2
作者janit0r
译者:[yixunx](https://github.com/yixunx)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,
[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.telecompaper.com/news/venezuelan-operators-hit-by-unprecedented-cyberattack--1208384
[2]:https://github.com/JeremyNGalloway/mod_plaintext.py
[3]:http://depastedihrn3jtw.onion.link/show.php?md5=62d1d87f67a8bf485d43a05ec32b1e6f

View File

@ -0,0 +1,197 @@
如何创建一个 Docker 镜像
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/container-image_0.jpg?itok=G_Gz80R9)
在 [前面的文章][1] 中,我们学习了在 Linux、macOS、以及 Windows 上如何使用 Docker 的基础知识。在这篇文章中,我们将去学习创建 Docker 镜像的基本知识。我们可以在 DockerHub 上得到你可以用于你自己的项目的预构建镜像,并且也可以将你自己的镜像发布到这里。
我们使用预构建镜像得到一个基本的 Linux 子系统,因为,从头开始构建需要大量的工作。你可以得到 Alpine Docker 版使用的官方版本、Ubuntu、BusyBox、或者 scratch。在我们的示例中我将使用 Ubuntu。
在我们开始构建镜像之前,让我们先“容器化”它们!我的意思是,为你的所有 Docker 镜像创建目录,这样你就可以维护不同的项目和阶段,并保持它们彼此隔离。
```
$ mkdir dockerprojects
cd dockerprojects
```
现在,在 `dockerprojects` 目录中,你可以使用自己喜欢的文本编辑器去创建一个 `Dockerfile` 文件;我喜欢使用 nano它对新手来说很容易上手。
```
$ nano Dockerfile
```
然后添加这样的一行内容:
```
FROM ubuntu
```
![m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7][2]
使用 Ctrl+X 退出,然后选择 Y 保存它。
现在开始创建你的新镜像,然后给它起一个名字(在刚才的目录中运行如下的命令):
```
$ docker build -t dockp .
```
(注意命令后面的圆点)这样就创建成功了,因此,你将看到如下内容:
```
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM ubuntu
---> 2a4cca5ac898
Successfully built 2a4cca5ac898
Successfully tagged dockp:latest
```
现在去运行和测试一下你的镜像:
```
$ docker run -it ubuntu
```
你将看到 root 提示符:
```
root@c06fcd6af0e8:/#
```
这意味着在 Linux、Windows、或者 macOS 中你可以运行一个最小的 Ubuntu 了。你可以运行所有的 Ubuntu 原生命令或者 CLI 实用程序。
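例如仅作演示apt-get 需要容器内可以联网),你可以在这个容器的 root 提示符下运行一些常见命令,确认它确实是一个 Ubuntu 环境:
```
root@c06fcd6af0e8:/# cat /etc/os-release
root@c06fcd6af0e8:/# apt-get update
```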
![vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3][3]
我们来查看一下在你的目录下你拥有的所有 Docker 镜像:
```
$docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
dockp latest 2a4cca5ac898 1 hour ago 111MB
ubuntu latest 2a4cca5ac898 1 hour ago 111MB
hello-world latest f2a91732366c 8 weeks ago 1.85kB
```
你可以看到共有三个镜像dockp、ubuntu 和 hello-world。hello-world 是我在几周前创建的,本系列前面的文章就是基于它来展开的。构建一个完整的 LAMP 栈可能是一个挑战,因此,我们使用 Dockerfile 去创建一个简单的 Apache 服务器镜像。
从本质上说Dockerfile 是安装所有需要的包、配置、以及拷贝文件的一套指令。在这个案例中,它是安装配置 Apache 和 Nginx。
你也可以在 DockerHub 上创建一个帐户,然后在构建镜像之前登入到你的帐户。在这个案例中,你需要从 DockerHub 上拉取一些东西。要从命令行登入 DockerHub运行如下所示的命令
```
$ docker login
```
在登入时输入你的用户名和密码。
接下来,为这个 Docker 项目,在目录中创建一个 Apache 目录:
```
$ mkdir apache
```
在 Apache 目录中创建 Dockerfile 文件:
```
$ nano Dockerfile
```
然后,粘贴下列内容:
```
FROM ubuntu
MAINTAINER Kimbro Staken version: 0.1
RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```
然后,构建镜像:
```
docker build -t apache .
```
(注意命令尾部的空格和圆点)
这将花费一些时间,然后你将看到如下的构建成功的消息:
```
Successfully built e7083fd898c7
Successfully tagged ng:latest
Swapnil:apache swapnil$
```
现在,我们来运行一下这个服务器:
```
$ docker run -d apache
a189a4db0f7c245dd6c934ef7164f3ddde09e1f3018b5b90350df8be85c8dc98
```
发现了吗,你的容器镜像已经运行了。可以运行如下的命令来检查所有运行的容器:
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
a189a4db0f7 apache "/usr/sbin/apache2ctl" 10 seconds ago
```
你可以使用 docker kill 命令来杀死容器:
```
$docker kill a189a4db0f7
```
正如你所见,这个“镜像”已经永久存在于你的目录中了,而不论运行与否。现在你可以根据你的需要创建很多的镜像,并且可以从这些镜像中衍生出更多的镜像。
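作为补充(原文未涉及),如果某个镜像不再需要,可以用 `docker rmi` 把它从本地删除,例如:
```
$ docker rmi dockp
```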
这就是如何去创建镜像和运行容器。
想学习更多内容,你可以打开你的浏览器,然后找到更多的关于如何构建像 LAMP 栈这样的完整的 Docker 镜像的文档。这里有一个帮你实现它的 [ Dockerfile][4] 文件。在下一篇文章中,我将演示如何推送一个镜像到 DockerHub。
你可以通过来自 Linux 基金会和 edX 的 ["介绍 Linux" ][5] 免费课程来学习更多的知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-create-docker-image
作者:[SWAPNIL BHARTIYA][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
[2]:https://lh6.googleusercontent.com/m7_f7No0pmZr2iQmEOH5_ID6MDG2oEnODpQZkUL7q3GYRB9f1-lvMYLE5f3GBpzIk-ev5VlcB0FHYSxn6NNQjxY4jJGqcgdFWaeQ-027qX_g-SVtbCCMybJeD6QIXjzM2ga8M4l4
[3]:https://lh3.googleusercontent.com/vpZ8ts9oq3uk--z4n6KP3DD3uD_P4EpG7fX06MC3uFvj2-WaI1DfOfec9ZXuN7XUNObQ2SCc4Nbiqp-CM7ozUcQmtuzmOdtUHTF4Jq8YxkC49o2k7y5snZqTXsueITZyaLiHq8bT
[4]:https://github.com/fauria/docker-lamp/blob/master/Dockerfile
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,83 @@
一月 COPR 中 4 个新的很酷的项目
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR 是一个个人软件仓库[集合][1]其中的软件未被收录进 Fedora。有些软件不符合便于打包的标准或者尽管它是自由开源的但可能不符合其他的 Fedora 标准。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持,也没有经过该项目签名。但是,它是尝试新的或实验性软件的一种很好的方法。
这是 COPR 中一系列新的和有趣的项目。
### Elisa
[Elisa][2] 是一个最小的音乐播放器。它可以让你通过专辑、艺术家或曲目浏览音乐。它会自动检测你的 ~/Music 目录中的所有可播放音乐,因此它根本不需要设置 - 它也不提供任何音乐。目前Elisa 专注于做一个简单的音乐播放器,所以它不提供管理音乐收藏的工具。
![][3]
#### 安装说明
仓库目前为 Fedora 26、27 和 Rawhide 提供 Elisa。要安装 Elisa请使用以下命令
```
sudo dnf copr enable eclipseo/elisa
sudo dnf install elisa
```
### 必应壁纸
[必应壁纸][4]是一个简单的程序,它会下载当日的必应壁纸,并将其设置为桌面壁纸或锁屏图片。该程序可以在设定的时间间隔内轮转目录中的图片,并在一段时间后删除旧图片。
#### 安装说明
仓库目前为 Fedora 25、26、27 和 Rawhide 提供必应壁纸。要安装必应壁纸,请使用以下命令:
```
sudo dnf copr enable julekgwa/Bingwallpapers
sudo dnf install bingwallpapers
```
### Polybar
[Polybar][5] 是一个创建状态栏的工具。它有很多自定义选项以及内置的功能来显示常用服务的信息,例如 [bspwm][6]、[i3][7] 的系统托盘图标、窗口标题、工作区和桌面面板等。你也可以为你的状态栏配置你自己的模块。有关使用和配置的更多信息,请参考[ Polybar 的 wiki][8]。
#### 安装说明
仓库目前为 Fedora 27 提供 Polybar。要安装 Polybar请使用以下命令
```
sudo dnf copr enable tomwishaupt/polybar
sudo dnf install polybar
```
### Netdata
[Netdata][9] 是一个分布式监控系统。它可以运行在包括个人电脑、服务器、容器和物联网设备在内的所有系统上,从中实时收集指标。所有的信息都可以使用 netdata 的 web 面板访问。此外Netdata 还提供预配置的警报和通知来检测性能问题,以及用于创建自己的警报的模板。
![][10]
#### 安装说明
仓库目前为 EPEL 7、Fedora 27 和 Rawhide 提供 netdata。要安装 netdata请使用以下命令
```
sudo dnf copr enable recteurlp/netdata
sudo dnf install netdata
```
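安装完成后(假设系统使用 systemd可以用下面的命令启动 netdata 并设置开机自启,之后在浏览器中访问其默认的 19999 端口即可看到面板:
```
sudo systemctl enable --now netdata
```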
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january/
作者:[Dominik Turecek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:https://community.kde.org/Elisa
[3]:https://fedoramagazine.org/wp-content/uploads/2018/01/elisa.png
[4]:http://bingwallpapers.lekgoara.com/
[5]:https://github.com/jaagr/polybar
[6]:https://github.com/baskerville/bspwm
[7]:https://i3wm.org/
[8]:https://github.com/jaagr/polybar/wiki
[9]:http://my-netdata.io/
[10]:https://fedoramagazine.org/wp-content/uploads/2018/01/netdata.png

View File

@ -0,0 +1,115 @@
搭建私有云OwnCloud
======
所有人都在讨论云。尽管市面上有很多为我们提供云存储和其他云服务的主要的服务商,但是我们还是可以为自己搭建一个私有云。
在本教程中,我们将讨论如何利用 OwnCloud 搭建私有云。OwnCloud 是一个可以安装在我们 Linux 设备上的 web 应用程序能够存储和提供我们的数据。OwnCloud 可以分享日历、联系人和书签,共享音/视频流等等。
本教程中,我们使用的是 CentOS 7 系统,但是本教程同样适用于在其他 Linux 发行版中安装 OwnCloud。让我们开始安装 OwnCloud 并做一些准备工作。
**(推荐阅读:[如何在 CentOS & RHEL 上使用 Apache 作为反向代理服务器][1])**
**(同时推荐:[实时 Linux 服务器监测和 GLANCES 监测工具][2]**
### 预备知识
* 我们需要在机器上配置 LAMP。参照阅读我们的文章 [CentOS/RHEL 上配置 LAMP 服务器最简单的教程][3] 和 [在 Ubuntu 搭建 LAMP 栈][4]。
  * 我们需要在自己的设备里安装这些包php-mysql、php-json、php-xml、php-mbstring、php-zip、php-gd、curl、php-curl、php-pdo。使用包管理器安装它们。
```
$ sudo yum install php-mysql php-json php-xml php-mbstring php-zip php-gd curl php-curl php-pdo
```
### 安装
安装 owncloud我们现在需要在服务器上下载 ownCloud 安装包。使用下面的命令从官方网站下载最新的安装包10.0.4-1
```
$ wget https://download.owncloud.org/community/owncloud-10.0.4.tar.bz2
```
使用下面的命令解压,
```
$ tar -xvf owncloud-10.0.4.tar.bz2
```
现在,将所有解压后的文件移动至 /var/www/html
```
$ mv owncloud/* /var/www/html
```
下一步,我们需要配置 Apache 的 httpd.conf 文件:
```
$ sudo vim /etc/httpd/conf/httpd.conf
```
同时更改下面的选项,
```
AllowOverride All
```
保存文件,并修改 owncloud 文件夹的权限:
```
$ sudo chown -R apache:apache /var/www/html/
$ sudo chmod 777 /var/www/html/config/
```
然后重启 apache 服务器执行修改,
```
$ sudo systemctl restart httpd
```
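如果你的 CentOS 7 上启用了 firewalld还需要放行 HTTP 服务,其他机器才能访问(示意命令,请按需调整):
```
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload
```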
现在,我们需要在 MariaDB 中创建一个数据库,用来保存来自 owncloud 的数据。使用下面的命令创建数据库和数据库用户:
```
$ mysql -u root -p
MariaDB [(none)] > create database owncloud;
MariaDB [(none)] > GRANT ALL ON owncloud.* TO ocuser@localhost IDENTIFIED BY 'owncloud';
MariaDB [(none)] > flush privileges;
MariaDB [(none)] > exit
```
服务器配置部分完成后,现在我们可以在网页浏览器上访问 owncloud。打开浏览器输入您的服务器 IP 地址,我这边的服务器是 10.20.30.100
![安装 owncloud][7]
一旦 URL 加载完毕,我们将看到上述页面。这里我们将创建管理员用户,同时提供数据库信息。当所有信息填写完毕,点击 “Finish setup”。
我们将被重定向到登陆页面,在这里,我们需要输入先前创建的凭据,
![安装 owncloud][9]
认证成功之后,我们将进入 owncloud 面板,
![安装 owncloud][11]
我们可以使用移动设备应用程序,同样也可以使用网页界面更新我们的数据。现在,我们已经有自己的私有云了,同时,关于如何安装 owncloud 创建私有云的教程也进入尾声。请在评论区留下自己的问题或建议。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/create-personal-cloud-install-owncloud/
作者:[SHUSAIN][a]
译者:[CYLeft](https://github.com/CYLeft)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/apache-as-reverse-proxy-centos-rhel/
[2]:http://linuxtechlab.com/linux-server-glances-monitoring-tool/
[3]:http://linuxtechlab.com/easiest-guide-creating-lamp-server/
[4]:http://linuxtechlab.com/install-lamp-stack-on-ubuntu/
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=400%2C647
[7]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud1-compressor.jpg?resize=400%2C647
[8]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=876%2C541
[9]:https://i1.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud2-compressor1.jpg?resize=876%2C541
[10]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=981%2C474
[11]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2018/01/owncloud3-compressor1.jpg?resize=981%2C474