IT 灾备:系统管理员对抗自然灾害
======
![](https://www.hpe.com/content/dam/hpe/insights/articles/2017/11/it-disaster-recovery-sysadmins-vs-natural-disasters/featuredStory/Sysadmins-vs-natural-disasters-1740.jpg.transform/nxt-1043x496-crop/image.jpeg)
|
||||
|
||||
> 面对倾泻的洪水或地震时业务需要继续运转。在飓风卡特里娜、桑迪和其他灾难中幸存下来的系统管理员向在紧急状况下负责 IT 的人们分享真实世界中的建议。
|
||||
|
||||
说到自然灾害,2017 年可算是多灾多难。(LCTT 译注:本文发表于 2017 年)飓风哈维、厄玛和玛莉亚给休斯顿、波多黎各、弗罗里达和加勒比造成了严重破坏。此外,西部的野火将多处住宅和商业建筑付之一炬。
|
||||
|
||||
再来一篇关于[有备无患][1]的警示文章 —— 当然其中都是好的建议 —— 是很简单的,但这无法帮助网络管理员应对湿漉漉的烂摊子。那些善意的建议中大多数都假定掌权的人乐于投入资金来实施这些建议。
|
||||
|
||||
我们对真实世界更感兴趣。不如让我们来充分利用这些坏消息。
|
||||
|
||||
一个很好的例子:自然灾害的一个后果是老板可能突然愿意给灾备计划投入预算。如同一个纽约地区的系统管理员所言,“[我发现飓风桑迪的最大好处][2]是我们的客户对 IT 投资更有兴趣了,但愿你也能得到更多预算。”
|
||||
|
||||
不过别指望这种意愿持续很久。任何想提议改进基础设施的系统管理员最好趁热打铁。如同另一位飓风桑迪中幸存下来的 IT 专员懊悔地提及那样,“[对 IT 开支最初的兴趣持续到当年为止][3]。到了第二年,任何尚未开工的计划都因为‘预算约束’被搁置了,大约 6 个月之后则完全被遗忘。”
|
||||
|
||||
在管理层忘记恶劣的自然灾害也可能降临到好公司头上之前提醒他们这点会有所帮助。根据<ruby>商业和家庭安全协会<rt>Institute for Business & Home Safety</rt></ruby>的说法,[自然灾害后歇业的公司中 25% 再也没能重新开业][4]。<ruby>联邦紧急事务管理署<rt>FEMA</rt></ruby>认为这过于乐观。根据他们的统计,“灾后 [40% 的小公司再也没能重新开门营业][5]。”
|
||||
|
||||
如果你是个系统管理员,你能帮忙挽救你的公司。这里有一些幸存者的最好的主意,这些主意是基于他们从过去几次自然灾害中得到的经验。
|
||||
|
||||
### 制订一个计划
|
||||
|
||||
当灯光忽明忽暗,狂风像火车机车一样怒号时,就该启动你的业务持续计划和灾备计划了。
|
||||
|
||||
有太多的系统管理员报告当暴风雨来临时这两个计划中一个也没有。这并不令人惊讶。2014 年<ruby>[灾备预备状态委员会][6]<rt>Disaster Recovery Preparedness Council</rt></ruby>发现[世界范围内被调查的公司中有 73% 没有足够的灾备计划][7]。
|
||||
|
||||
“**足够**”是关键词。正如一个系统管理员 2016 年在 Reddit 上写的那样,“[我们的灾备计划就是一场灾难。][8]我们所有的数据都备份在离这里大约 30 英里的一个<ruby>存储区域网络<rt>SAN</rt></ruby>。我们没有将数据重新上线的硬件,甚至好几天过去了都没能让核心服务器启动运行起来。我们是个年营收 40 亿美元的公司,却不愿为适当的设备投入几十万美元,或是在数据中心添置几台服务器。当添置硬件的提案被提出的时候,我们的管理层说,‘嗐,碰到这种事情的机会能有多大呢’。”
|
||||
|
||||
同一个帖子中另一个人说得更简洁:“眼下我的灾备计划只能在黑暗潮湿的角落里哭泣,但愿没人在乎损失的任何东西。”
|
||||
|
||||
如果你在哭泣,但愿你至少不是独自流泪。任何灾备计划,即便是 IT 部门制订的灾备计划,必须确保[你能跟别人通讯][10],如同系统管理员 Jim Thompson 从卡特里娜飓风中得到的教训:“确保你有一个与人们通讯的计划。在一场严重的区域性灾难期间,你将无法给身处灾区的任何人打电话。”
|
||||
|
||||
有一个选择可能会让有技术头脑的人感兴趣:<ruby>[业余电台][11]<rt>ham radio</rt></ruby>。[它在波多黎各发挥了巨大作用][12]。
|
||||
|
||||
### 列一个愿望清单
|
||||
|
||||
第一步是承认问题。“许多公司实际上对灾备计划不感兴趣,或是消极对待”,[Micro Focus][14] 的首席架构师 [Joshua Brusse][13] 说。“将灾备看作业务持续性的一个方面是种不同的视角。所有公司都要应对业务持续性,所以灾备应被视为业务持续性的一部分。”
|
||||
|
||||
IT 部门需要将其需求书面化以确保适当的灾备和业务持续性计划。即使是你不知道如何着手,或尤其是这种时候,也是如此。正如一个系统管理员所言,“我喜欢有一个‘想法转储’,让所有计划、点子、改进措施毫无保留地提出来。(这)[对一类情况尤其有帮助,即当你提议变更][15],并付诸实施,接着 6 个月之后你警告过的状况就要来临。”现在你做好了一切准备并且可以开始讨论:“如同我们之前在 4 月讨论过的那样……”
|
||||
|
||||
因此,当你的管理层对业务持续性计划回应道“嗐,碰到这种事的机会能有多大呢?”的时候你能做些什么呢?有个系统管理员称这也完全是管理层的正常行为。在这种糟糕的处境下,老练的系统管理员建议用书面形式把这些事情记录下来。记录应清楚表明你告知管理层需要采取的措施,且[他们拒绝采纳建议][16]。“总的来说就是有足够的书面材料能让他们搓成一根绳子上吊,”该系统管理员补充道。
|
||||
|
||||
如果那也不起作用,恢复一个被洪水淹没的数据中心的相关经验对你[找个新工作][17]是很有帮助的。
|
||||
|
||||
### 保护有形的基础设施
|
||||
|
||||
“[我们的办公室是幢摇摇欲坠的建筑][18],”飓风哈维重创休斯顿之后有个系统管理员提到。“我们盲目地进入那幢建筑,现场的基础设施糟透了。正是我们给那幢建筑里带去了最不想要的一滴水,现在基础设施整个都沉在水下了。”
|
||||
|
||||
尽管如此,如果你想让数据中心继续运转——或在暴风雨过后恢复运转 —— 你需要确保该场所不仅能经受住你所在地区那些意料中的灾难,而且能经受住那些意料之外的灾难。一个旧金山的系统管理员知道为什么重要的是确保公司的服务器安置在可以承受里氏 7 级地震的建筑内。一家圣路易斯的公司知道如何应对龙卷风。但你应当为所有可能发生的事情做好准备:加州的龙卷风、密苏里州的地震,或[僵尸末日][19](给你在 IT 预算里增加一把链锯提供了充分理由)。
|
||||
|
||||
在休斯顿的情况下,[多数数据中心保持运转][20],因为它们是按照抵御暴风雨和洪水的标准建造的。[Data Foundry][21] 的首席技术官 Edward Henigin 说他们公司的数据中心之一,“专门建造的休斯顿 2 号的设计能抵御 5 级飓风的风速。这个场所的公共供电没有中断,我们得以避免切换到后备发电机。”
|
||||
|
||||
那是好消息。坏消息是,随着 2012 年超级飓风桑迪登场,如果[你的数据中心没准备好应对洪水][22],你会陷入一个麻烦不断的世界。瘫痪的数据中心之一属于 [Datagram][23],其服务的客户包括 Gawker、Gizmodo 和 Buzzfeed 等知名网站。
|
||||
|
||||
当然,有时候你什么也做不了。正如某个波多黎各圣胡安的系统管理员在飓风厄玛扫过后悲伤地写道,“发电机没油了。服务器机房靠电池在运转但是没有(空调)。[永别了,服务器][24]。”由于 <ruby>MPLS<rt>Multiprotocol Label Switching</rt></ruby> 线路亦中断,该系统管理员没法切换到灾备措施:“多么充实的一天。”
|
||||
|
||||
总而言之,IT 专业人士需要了解他们所处的地区,了解他们面临的风险并将他们的服务器安置在能抵御当地自然灾害的数据中心内。
|
||||
|
||||
### 关于云的争议
|
||||
|
||||
当暴风雨席卷一切时避免 IT 数据中心失效的最佳方法就是确保灾备数据中心在其他地方。选择地点时需要审慎的决策。你的灾备数据中心不应在会被同一场自然灾害影响到的<ruby>地域<rt>region</rt></ruby>;你的资源应安置在多个<ruby>可用区<rt>availability zone</rt></ruby>内。考虑一下主备数据中心位于一场地震中的同一条断层带上,或是主备数据中心易于受互通河道导致的洪灾影响这类情况。
|
||||
|
||||
有些系统管理员[利用云作为冗余设施][25]。例如,微软 Azure 云存储服务总是为你的数据保存多个副本,以确保持久性和高可用性。根据你的选择,Azure 复制功能将你的数据要么拷贝到同一个数据中心,要么拷贝到另一个数据中心。多数公有云提供类似的自动备份服务以确保数据安全,不论你的数据中心发生什么情况——除非你的云服务供应商全部设施都在暴风雨的行进路径上。
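
下面是一个用 Azure CLI 创建异地冗余(GRS)存储账户的示意性例子,演示“让副本落在另一个地域”这一思路。其中资源组、账户名和区域都是假设值,仅用于说明:

```
az storage account create \
    --resource-group my-dr-rg \
    --name mydrbackup001 \
    --location eastus \
    --sku Standard_GRS     # GRS:数据会被复制到配对的第二个地域
```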
|
||||
|
||||
昂贵么?是的。跟业务中断 1、2 天一样昂贵么?并非如此。
|
||||
|
||||
信不过公有云?可以考虑 <ruby>colo<rt>colocation</rt></ruby> 服务。有了 colo,你依旧拥有你的硬件,运行你自己的应用,但这些硬件可以远离麻烦。例如飓风哈维期间,一家公司“虚拟地”将它的资源从休斯顿搬到了其位于德克萨斯奥斯汀的 colo。但是那些本地数据中心和 colo 场所需要准备好应对灾难;这点是你选择场所时要考虑的一个因素。举个例子,一个寻找 colo 场所的西雅图系统管理员考虑的“全都是抗震和旱灾应对措施(加固的地基以及补给冷却系统的运水卡车)。”
|
||||
|
||||
### 周围一片黑暗时
|
||||
|
||||
正如 Forrester Research 的分析师 Rachel Dines 在一份为[灾备期刊][27]所做的调查中报告的那样,宣布的灾难中[最普遍的原因就是断电][26]。尽管你能应对一般情况下的断电,飓风、火灾和洪水的考验会超越设备的极限。
|
||||
|
||||
某个系统管理员挖苦式的计划是什么呢?“趁 UPS 完蛋之前把你能关的机器关掉,不能关的就让它崩溃咯。然后,[喝个痛快直到供电恢复][28]。”
|
||||
|
||||
在 2016 年德尔塔和西南航空停电事故之后,IT 员工推动的一个更加严肃的计划是由一个有管理的服务供应商为其客户[部署不间断电源][29]:“对于至关重要的部分,在供电中断时我们结合使用<ruby>简单网络管理协议<rt>SNMP</rt></ruby>信令和 <ruby>PowerChute 网络关机<rt>PowerChute Network Shutdown</rt></ruby>客户端来关闭设备。至于重新开机,那取决于客户。有些是自动启动,有些则需要人工干预。”
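
如果想用开源工具实现类似的思路,可以参考下面这个基于 NUT(Network UPS Tools)的示意脚本。它并非上文服务商所用的 PowerChute 方案;其中的 UPS 名称和主机名均为假设值:

```
#!/bin/bash
# 轮询 UPS 状态,发现切换到电池供电(OB = On Battery)时有序关闭非关键主机
STATUS=$(upsc myups@localhost ups.status 2>/dev/null)

if [[ "$STATUS" == *OB* ]]; then
    logger "UPS 正在用电池供电,开始关闭非关键服务器"
    for HOST in app-server-1 app-server-2; do   # 主机名为假设值
        ssh admin@"$HOST" 'sudo shutdown -h +1'
    done
fi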
|
||||
|
||||
另一种做法是用来自两个供电所的供电线路支持数据中心。例如,[西雅图威斯汀大厦数据中心][30]有来自不同供电所的多路 13.4 千伏供电线路,以及多个 480 伏三相变电箱。
|
||||
|
||||
预防严重断电的系统不是“通用的”设备。系统管理员应当[为数据中心请求一台定制的柴油发电机][31]。除了按你特定的需求调整之外,发电机还必须能迅速跳至全速运转并承载全部电力负荷,而不致影响系统负载性能。
|
||||
|
||||
这些发电机也必须加以保护。例如,将你的发电机安置在泛洪区的一楼就不是个聪明的主意。位于纽约<ruby>百老街<rt>Broad street</rt></ruby>的数据中心在超级飓风桑迪期间就是类似情形,备用发电机的燃料油桶在地下室 —— 并且被水淹了。尽管一场[“人力接龙”用容量 5 加仑的水桶将柴油输送到 17 段楼梯之上的发电机][32]使 [Peer 1 Hosting][33] 得以继续运营,但这不是一个切实可行的业务持续计划。
|
||||
|
||||
正如多数数据中心专家所知那样,如果你有时间 —— 假设一个飓风离你还有一天的距离 —— 请确保你的发电机正常工作,加满油,准备好在供电线路被刮断时立即开启。不管怎样,你本来就应当每月测试你的发电机。你之前是那么做的,是吧?是就好!
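
把这种例行测试固定下来的一个简单办法,是用 cron 定期触发并上报结果。下面是一个示意性的条目,其中的 test-generator.sh 脚本和收件地址均为假设值:

```
# 每月 1 日凌晨 3 点运行发电机自检,并把结果邮件给运维组
0 3 1 * * /usr/local/sbin/test-generator.sh 2>&1 | mail -s "Generator test report" ops@example.com
```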
|
||||
|
||||
### 测试你对备份的信心
|
||||
|
||||
普通用户几乎从不备份,检查备份是否实际完好的人就更少了。系统管理员对此更加了解。
|
||||
|
||||
有些 [IT 部门在寻求将他们的备份迁移到云端][34]。但有些系统管理员仍对此不买账 —— 他们有很好的理由。最近有人报告,“在用了整整 5 天[从亚马逊 Glacier 恢复了(400 GB)数据][35]之后,我欠了亚马逊将近 200 美元的传输费,并且(我还是)处于未完全恢复状态,还差大约 100 GB 文件。”
|
||||
|
||||
结果是有些系统管理员依然喜欢磁带备份。磁带肯定不够时髦,但正如操作系统专家 Andrew S. Tanenbaum 说的那样,“[永远不要低估一辆装满磁带、在高速公路上飞驰的旅行车的带宽][36]。”
|
||||
|
||||
目前每盘磁带可以存储 10 TB 数据;有的进行中的实验可在磁带上存储高达 200 TB 数据。诸如<ruby>[线性磁带文件系统][37]<rt>Linear Tape File System</rt></ruby>之类的技术允许你像访问网络硬盘一样读取磁带数据。
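
下面是一个示意性的 LTFS 挂载示例。设备路径与挂载点均为假设,具体命令与选项取决于你所用的 LTFS 实现,请以其文档为准:

```
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs   # 把磁带机上的磁带挂载为文件系统(设备名为假设值)
ls /mnt/ltfs                         # 之后即可像普通目录一样读取磁带上的文件
```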
|
||||
|
||||
然而对许多人而言,磁带[绝对是最后选择的手段][38]。没关系,因为备份应该有大量的可选方案。在这种情况下,一个系统管理员说到,“故障时我们会用这些方法(恢复备份):(Windows)服务器层面的 VSS (Volume Shadow Storage)快照,<ruby>存储区域网络<rt>SAN</rt></ruby>层面的卷快照,以及存储区域网络层面的异地归档快照。但是万一有什么事情发生并摧毁了我们的虚拟机,存储区域网络和备份存储区域网络,我们还是可以取回磁带并恢复数据。”
|
||||
|
||||
当麻烦即将到来时,可使用副本工具如 [Veeam][39],它会为你的服务器创建一个虚拟机副本。若出现故障,副本会自动启动。没有麻烦,没有手忙脚乱,正如某个系统管理员在这个流行的系统管理员帖子中所说,“[我爱你 Veeam][40]。”
|
||||
|
||||
### 网络?什么网络?
|
||||
|
||||
当然,如果员工们无法触及他们的服务,没有任何云、colo 和远程数据中心能帮到你。你不需要一场自然灾害来证明冗余互联网连接的必要性。只需要一台挖断线路的挖掘机或断掉的光缆,就能让你在工作中度过糟糕的一天。
|
||||
|
||||
“理想状态下”,某个系统管理员明智地观察到,“你应该有[两路互联网接入线路连接到有独立基础设施的两个 ISP][41]。例如,你不希望两个 ISP 都依赖于同一根光缆。你也不希望采用两家本地 ISP,并发现他们的上行带宽都依赖于同一家骨干网运营商。”
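
要及早发现其中一条线路失效,可以写一个简单的巡检脚本,分别经由两个上行接口测试外网连通性。以下是一个示意性草稿,接口名、测试 IP 和收件地址均为假设值:

```
#!/bin/bash
# 分别强制走 eth0 / eth1 两个上行接口,各向外网发 3 个探测包
for IFACE in eth0 eth1; do
    if ping -c 3 -W 2 -I "$IFACE" 1.1.1.1 >/dev/null 2>&1; then
        echo "$IFACE: 上行正常"
    else
        echo "$IFACE 上行不通" | mail -s "ISP link down: $IFACE" ops@example.com
    fi
done
```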
|
||||
|
||||
聪明的系统管理员知道他们公司的互联网接入线路[必须是商业级别的][43],带有<ruby>服务等级协议<rt>service-level agreement(SLA)</rt></ruby>,其中包含“修复时间”条款。或者更好的是采用<ruby>互联网接入专线<rt>dedicated Internet access</rt></ruby>。技术上这与任何其他互联网接入方式没有区别,区别在于互联网接入专线不是一种“尽力而为”的接入方式,而是你会得到明确规定的、专供你使用的带宽,并附有服务等级协议。这种专线不便宜,但正如一句格言所说的那样,“速度、可靠性、便宜,只能挑两个。”当你的业务跑在这条线路上并且一场暴风雨即将来袭,“可靠性”必须是你挑的两个之一。
|
||||
|
||||
### 晴空重现之时
|
||||
|
||||
你没法准备应对所有自然灾害,但你可以为其中很多做好计划。有一个深思熟虑且经过测试的灾备和业务持续计划,并逐字逐句严格执行,当竞争对手溺毙的时候,你的公司可以幸存下来。
|
||||
|
||||
### 系统管理员对抗自然灾害:给领导者的教训
|
||||
|
||||
* 你的 IT 员工得说多少次:不要仅仅备份,还得测试备份?
|
||||
* 没电就没公司。确保你的服务器有足够的应急电源来满足业务需要,并确保它们能正常工作。
|
||||
* 如果你的公司在一场自然灾害中幸存下来,或者避开了灾害,明智的系统管理员知道这就是向管理层申请被他们推迟的灾备预算的时候了。因为下次你就未必有这么幸运了。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.hpe.com/us/en/insights/articles/it-disaster-recovery-sysadmins-vs-natural-disasters-1711.html
|
||||
|
||||
作者:[Steven-J-Vaughan-Nichols][a]
|
||||
译者:[0x996](https://github.com/0x996)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
|
||||
[1]:https://www.hpe.com/us/en/insights/articles/what-is-disaster-recovery-really-1704.html
|
||||
[2]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/
|
||||
[3]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma6gse/
|
||||
[4]:https://disastersafety.org/wp-content/uploads/open-for-business-english.pdf
|
||||
[5]:https://www.fema.gov/protecting-your-businesses
|
||||
[6]:http://drbenchmark.org/about-us/our-council/
|
||||
[7]:https://www.prnewswire.com/news-releases/global-benchmark-study-reveals-73-of-companies-are-unprepared-for-disaster-recovery-248359051.html
|
||||
[8]:https://www.reddit.com/r/sysadmin/comments/3cob1k/what_does_your_disaster_recovery_plan_look_like/csxh8sn/
|
||||
[9]:https://www.hpe.com/us/en/resources/servers/datacenter-trends-challenges.html?jumpid=in_insights~510287587~451research_datacenter~sjvnSysadmin
|
||||
[10]:http://www.theregister.co.uk/2015/07/12/surviving_hurricane_katrina
|
||||
[11]:https://theprepared.com/guides/beginners-guide-amateur-ham-radio-preppers/
|
||||
[12]:http://www.npr.org/2017/09/29/554600989/amateur-radio-operators-stepped-in-to-help-communications-with-puerto-rico
|
||||
[13]:http://www8.hp.com/us/en/software/joshua-brusse.html
|
||||
[14]:https://www.microfocus.com/
|
||||
[15]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma87xv/
|
||||
[16]:https://www.hpe.com/us/en/insights/articles/my-boss-asked-me-to-do-what-how-to-handle-worrying-work-requests-1710.html
|
||||
[17]:https://www.hpe.com/us/en/insights/articles/sysadmin-survival-guide-1707.html
|
||||
[18]:https://www.reddit.com/r/sysadmin/comments/6wk92q/any_houston_admins_executing_their_dr_plans_this/dm8xj0q/
|
||||
[19]:https://community.spiceworks.com/how_to/1243-ensure-your-dr-plan-is-ready-for-a-zombie-apocolypse
|
||||
[20]:http://www.datacenterdynamics.com/content-tracks/security-risk/houston-data-centers-withstand-hurricane-harvey/98867.article
|
||||
[21]:https://www.datafoundry.com/
|
||||
[22]:http://www.datacenterknowledge.com/archives/2012/10/30/major-flooding-nyc-data-centers
|
||||
[23]:https://datagram.com/
|
||||
[24]:https://www.reddit.com/r/sysadmin/comments/6yjb3p/shutting_down_everything_blame_irma/
|
||||
[25]:https://www.hpe.com/us/en/insights/articles/everything-you-need-to-know-about-clouds-and-hybrid-it-1701.html
|
||||
[26]:https://www.drj.com/images/surveys_pdf/forrester/2011Forrester_survey.pdf
|
||||
[27]:https://www.drj.com
|
||||
[28]:https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/d6c71p1/
|
||||
[29]:https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/
|
||||
[30]:https://cloudandcolocation.com/datacenters/the-westin-building-seattle-data-center/
|
||||
[31]:https://www.techrepublic.com/article/what-to-look-for-in-a-data-center-backup-generator/
|
||||
[32]:http://www.datacenterknowledge.com/archives/2012/10/31/peer-1-mobilizes-diesel-bucket-brigade-at-75-broad
|
||||
[33]:https://www.cogecopeer1.com/
|
||||
[34]:https://www.reddit.com/r/sysadmin/comments/7a6m7n/aws_glacier_archival/
|
||||
[35]:https://www.reddit.com/r/sysadmin/comments/63mypu/the_dangers_of_cloudberry_and_amazon_glacier_how/
|
||||
[36]:https://en.wikiquote.org/wiki/Andrew_S._Tanenbaum
|
||||
[37]:http://www.snia.org/ltfs
|
||||
[38]:https://www.reddit.com/r/sysadmin/comments/5visaq/backups_how_many_of_you_still_have_tapes/de2d0qm/
|
||||
[39]:https://helpcenter.veeam.com/docs/backup/vsphere/failover.html?ver=95
|
||||
[40]:https://www.reddit.com/r/sysadmin/comments/5rttuo/i_love_you_veeam/
|
||||
[41]:https://www.reddit.com/r/sysadmin/comments/5rmqfx/ars_surviving_a_cloudbased_disaster_recovery_plan/dd90auv/
|
||||
[42]:https://www.hpe.com/us/en/insights/articles/how-do-you-evaluate-cloud-service-agreements-and-slas-very-carefully-1705.html
|
||||
[43]:http://www.e-vergent.com/what-is-dedicated-internet-access/

系统管理员入门:排除故障
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/06/100659pox8xkkr8zek888r.jpg)
|
||||
|
||||
我通常会严格保持此博客的技术性,将观察、意见等内容保持在最低限度。但是,这篇和接下来的几篇文章将介绍刚进入系统管理/SRE/系统工程师/sysops/devops-ops(无论你想称自己是什么)角色的常见的基础知识。
|
||||
|
||||
请跟我来!
|
||||
|
||||
> “我的网站很慢!”
|
||||
|
||||
我只是随机选择了本文的问题类型,这也可以应用于任何与系统管理员相关的故障排除。我并不是要炫耀那些可以发现最多的信息的最聪明的“金句”。它也不是一个详尽的、一步步指导的、并在最后一个方框中导向“利润”一词的“流程图”。
|
||||
|
||||
我会通过一些例子展示常规的方法。
|
||||
|
||||
示例场景仅用于说明本文目的。它们有时会做一些不适用于所有情况的假设,而且肯定会有很多读者在某些时候说“哦,但我觉得你会发现……”。
|
||||
|
||||
但那可能会让我们错失重点。
|
||||
|
||||
十多年来,我一直在从事于支持工作,或在支持机构工作,有一件事让我一次又一次地感到震惊,这促使我写下了这篇文章。
|
||||
|
||||
**有许多技术人员在遇到问题时的本能反应,就是不管三七二十一去尝试可能的解决方案。**
|
||||
|
||||
*“我的网站很慢,所以”,*
|
||||
|
||||
* 我将尝试增大 `MaxClients`/`MaxRequestWorkers`/`worker_connections`
|
||||
* 我将尝试提升 `innodb_buffer_pool_size`/`effective_cache_size`
|
||||
* 我打算尝试启用 `mod_gzip`(遗憾的是,这是真实的故事)
|
||||
|
||||
*“我曾经看过这个问题,它是因为某种原因造成的 —— 所以我估计还是这个原因,它应该能解决这个问题。”*
|
||||
|
||||
这浪费了很多时间,并会让你在黑暗中盲目乱撞,胡乱鼓捣。
|
||||
|
||||
你的 InnoDB 的缓冲池也许达到 100% 的利用率,但这可能只是因为有人运行了一段时间的一次性大型报告导致的。如果没有排除这种情况,那你就是在浪费时间。
|
||||
|
||||
### 开始之前
|
||||
|
||||
在这里,我应该说明一下,虽然这些建议同样适用于许多角色,但我是从一般的支持系统管理员的角度来撰写的。在一个成熟的内部组织中,或与规模较大的、规范管理的或“企业级”客户合作时,你通常会对一切都进行检测、测量、绘制、整理(甚至不是文字),并发出警报。那么你的方法也往往会有所不同。让我们在这里先忽略这种情况。
|
||||
|
||||
如果你没有这种东西,那就随意了。
|
||||
|
||||
### 澄清问题
|
||||
|
||||
首先确定实际上是什么问题。“慢”可以是多种形式的。是收到第一个字节的时间太长?还是糟糕的 JavaScript 加载、每次页面加载都要拉取 15 MB 静态内容?这些是完全不同类型的问题。是一直都慢,还是只比平时慢?这些都对应非常不同的解决方案!
|
||||
|
||||
在你着手做某事之前,确保你知道实际报告和遇到的问题是什么。找到问题的根本原因通常很困难,但即便如此,也必须先把问题本身弄清楚。
|
||||
|
||||
否则,这相当于系统管理员带着一把刀去参加枪战。
|
||||
|
||||
### 唾手可得
|
||||
|
||||
首次登录可疑服务器时,你可以查找一些常见的嫌疑对象。事实上,你应该这样做!每当我登录到服务器时,我都会发出一些命令来快速检查一些事情:我们是否发生了页交换(`free` / `vmstat`),磁盘是否繁忙(`top` / `iostat` / `iotop`),是否有丢包(`netstat` / `/proc/net/dev`),是否处于连接数过多的状态(`netstat`),有什么东西占用了 CPU(`top`),谁在这个服务器上(`w` / `who`),syslog 和 `dmesg` 中是否有引人注目的消息?
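
可以把这些只读检查攒成一个小脚本,登录后一键执行。下面是一个示意性组合,命令都是常见工具,具体取舍按环境调整:

```
#!/bin/bash
# 登录可疑服务器后的 30 秒快速体检(全部为只读命令)
free -m              # 内存与交换分区:是否在换页
vmstat 1 5           # CPU / IO / 交换的总体趋势
iostat -x 1 3        # 每块磁盘是否繁忙
cat /proc/net/dev    # 网卡收发与丢包计数
ss -s                # 连接数概况(旧系统可用 netstat -ant)
w                    # 谁登录在这台机器上
dmesg | tail -n 30   # 内核最近的消息
```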
|
||||
|
||||
如果你从 RAID 控制器得到 2000 条抱怨直写式缓存没有生效的消息,那么继续进行是没有意义的。
|
||||
|
||||
这用不了半分钟。如果什么都没有引起你的注意 —— 那么继续。
|
||||
|
||||
### 重现问题
|
||||
|
||||
如果某处确实存在问题,并且找不到唾手可得的信息。
|
||||
|
||||
那么采取所有步骤来尝试重现问题。当你可以重现该问题时,你就可以观察它。**当你能观察到时,你就可以解决。**如果在第一步中尚未显现出或覆盖了问题所在,询问报告问题的人需要采取哪些确切步骤来重现问题。
|
||||
|
||||
对于由太阳耀斑或只能运行在 OS/2 上的客户端引起的问题,重现并不总是可行的。但你的第一步至少应该是尝试重现!在一开始,你所知道的只是“某人认为他们的网站很慢”。对于那些人,他们可能还在用他们的 GPRS 手机,也可能正在安装 Windows 更新。在这里挖掘得再深也是浪费时间。
|
||||
|
||||
尝试重现!
|
||||
|
||||
### 检查日志
|
||||
|
||||
我对于有必要包括这一点感到很难过。但是我曾经看到有人在运行 `tail /var/log/...` 之后几分钟就不看了。大多数 *NIX 工具都特别喜欢记录日志。任何明显的错误都会在大多数应用程序日志中显得非常突出。检查一下。
|
||||
|
||||
### 缩小范围
|
||||
|
||||
如果没有明显的问题,但你可以重现所报告的问题,那也很棒。所以,你现在知道网站是慢的。现在你已经把范围缩小到:浏览器的渲染/错误、应用程序代码、DNS 基础设施、路由器、防火墙、网卡(所有的)、以太网电缆、负载均衡器、数据库、缓存层、会话存储、Web 服务器软件、应用程序服务器、内存、CPU、RAID 卡、磁盘等等。
|
||||
|
||||
根据设置添加一些其他可能的罪魁祸首。它们也可能是 SAN,也不要忘记硬件 WAF!以及…… 你明白我的意思。
|
||||
|
||||
如果问题是收到第一个字节的时间过长,你当然会开始对 Web 服务器应用已知的修复方法——响应缓慢的就是它,你也觉得几乎可以肯定就是它,对吧?但是你错了!
|
||||
|
||||
你要回去尝试重现这个问题。只是这一次,你要试图消除尽可能多的潜在问题来源。
|
||||
|
||||
你可以非常轻松地消除绝大多数可能的罪魁祸首:你能从服务器本地重现问题吗?恭喜,你刚刚节省了自己必须尝试修复 BGP 路由的时间。
|
||||
|
||||
如果不能,请尝试使用同一网络上的其他计算机。如果在那里可以重现,至少你可以把防火墙从你的嫌疑名单上移除(但是要留意一下那台交换机!)
|
||||
|
||||
是所有的连接都很慢吗?虽然服务器是 Web 服务器,但并不意味着你不应该尝试使用其他类型的服务进行重现问题。[netcat][1] 在这些场景中非常有用(但是你的 SSH 连接可能会一直有延迟,这可以作为线索)! 如果这也很慢,你至少知道你很可能遇到了网络问题,可以忽略掉整个 Web 软件及其所有组件的问题。用这个知识(我不收 200 美元)再次从顶部开始,按你的方式由内到外地进行!
|
||||
|
||||
即使你可以在本地重现 —— 仍然有很多“因素”留下。让我们排除一些变量。你能用普通文件重现它吗?如果 `i_am_a_1kb_file.html` 也很慢,你就知道问题与数据库、缓存层无关,而是出在操作系统和 Web 服务器本身这个范围之内。
|
||||
|
||||
你能用一个需要解释或执行的 `hello_world.(py|php|js|rb..)` 文件重现问题吗?如果可以的话,你已经大大缩小了范围,你可以专注于少数事情。如果 `hello_world` 可以马上工作,你仍然学到了很多东西!你知道了没有任何明显的资源限制、任何满的队列或在任何地方卡住的 IPC 调用,所以这是应用程序正在做的事情或它正在与之通信的事情。
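
衡量“慢”的一个简便办法是用 `curl` 的计时变量,分别对静态文件和需要解释执行的脚本测一下首字节时间。URL 为示意,按你的环境替换:

```
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  总耗时: %{time_total}s\n' \
    http://localhost/i_am_a_1kb_file.html
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  总耗时: %{time_total}s\n' \
    http://localhost/hello_world.php
```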
|
||||
|
||||
所有页面都慢吗?或者只是从第三方加载“实时分数数据”的页面慢?
|
||||
|
||||
**这可以归结为:你仍然可以重现这个问题所涉及的最少量的“因素”是什么?**
|
||||
|
||||
我们的示例是一个缓慢的网站,但这同样适用于几乎所有问题。邮件投递?你能在本地投递吗?能发给自己吗?能发给<常见的服务提供者>吗?先用小的、纯文本的消息进行测试,再逐步增大邮件体积,直到触及那封 2MB 的问题邮件。使用 STARTTLS 和不使用 STARTTLS 呢?按你的方式由内到外地进行!
|
||||
|
||||
这些步骤中的每一步都只需要几秒钟,远远快于实施大多数“可能的”修复方案。
|
||||
|
||||
### 隔离观察
|
||||
|
||||
到目前为止,如果你在去除某个特定组件后无法重现问题,那么你可能已经偶然发现了问题所在。
|
||||
|
||||
但如果你还没有,或者你仍然不知道**为什么**:一旦你找到了一种重现问题的方法,而且此时你和问题之间的“东西”(这是个技术术语)已经最少,那么就该开始隔离和观察了。
|
||||
|
||||
请记住,许多服务可以在前台运行和/或启用调试。对于某些类别的问题,执行此操作通常非常有帮助。
|
||||
|
||||
这也是你的传统武器库发挥作用的地方。`strace`、`lsof`、`netstat`、`GDB`、`iotop`、`valgrind`、语言分析器(cProfile、xdebug、ruby-prof ……)那些类型的工具。
|
||||
|
||||
一旦你走到这一步,你就很少能摆脱剖析器或调试器了。
|
||||
|
||||
[strace][2] 通常是一个非常好的起点。
|
||||
|
||||
你可能会注意到应用程序停留在某个连接到端口 3306 的套接字文件描述符上的特定 `read()` 调用上。你会知道该怎么做。
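
这类观察通常来自类似下面的组合(PID 为假设值):先跟踪系统调用,再把文件描述符对应回网络连接:

```
strace -f -tt -T -p 12345 -o /tmp/trace.out   # -T 会显示每个系统调用的耗时
lsof -nP -p 12345 | grep 3306                 # 查明卡住的 fd 连接到了 MySQL
```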
|
||||
|
||||
转到 MySQL 并再次从顶部开始。显而易见:“等待某某锁”、死锁、`max_connections` ……进而:是所有查询?还是只写请求?只有某些表?还是只有某些存储引擎?等等……
|
||||
|
||||
你可能会注意到调用外部 API 资源的 `connect()` 需要五秒钟才能完成,甚至超时。你会知道该怎么做。
|
||||
|
||||
你可能会注意到,在同一对文件中有 1000 个调用 `fstat()` 和 `open()` 作为循环依赖的一部分。你会知道该怎么做。
|
||||
|
||||
它可能不是那些特别的东西,但我保证,你会发现一些东西。
|
||||
|
||||
如果你只从这一部分学到一点,那也不错;学习使用 `strace` 吧!**真的**学习它,阅读整个手册页,甚至不要跳过历史部分。用 `man` 查一下每个你还不知道用途的系统调用。98% 的故障排除会话最终都离不开 `strace`。
|
||||
|
||||
---------------------------------------------------------------------
|
||||
|
||||
via: http://northernmost.org/blog/troubleshooting-101/index.html
|
||||
|
||||
作者:[Erik Ljungstrom][a]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://northernmost.org
|
||||
[1]:http://nc110.sourceforge.net/
|
||||
[2]:https://linux.die.net/man/1/strace
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11251-1.html)
|
||||
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
|
||||
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1)
|
||||
[#]: author: (redhat https://www.redhat.com)
|
||||
|
||||
《代码英雄》第一季(1):操作系统战争(上)
======
|
||||
|
||||
> 代码英雄讲述了开发人员、程序员、黑客、极客和开源反叛者如何彻底改变技术前景的真实史诗故事。
|
||||
|
||||
![](https://www.redhat.com/files/webux/img/bandbg/bkgd-clh-ep1-2000x950.png)
|
||||
|
||||
本文是《[代码英雄](https://www.redhat.com/en/command-line-heroes)》系列播客[第一季(1):操作系统战争(上)](https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1) 的[音频](https://dts.podtrac.com/redirect.mp3/audio.simplecast.com/f7670e99.mp3)脚本。
|
||||
|
||||
**Saron Yitbarek:**有些故事如史诗般,惊险万分,在我脑海中似乎出现了星球大战电影开头的爬行文字。你知道的,就像——
|
||||
|
||||
配音:“第一集,操作系统大战”
|
||||
|
||||
**Saron Yitbarek:**是的,就像那样子。
|
||||
|
||||
配音:这是一个局势紧张加剧的时期。<ruby>比尔·盖茨<rt>Bill Gates</rt></ruby>与<ruby>史蒂夫·乔布斯<rt>Steve Jobs</rt></ruby>的帝国发起了一场无可避免的专有软件之战。[00:00:30] 盖茨与 IBM 结成了强大的联盟,而乔布斯则拒绝了对它的硬件和操作系统开放授权。他们争夺统治地位的争斗在一场操作系统战争中席卷了整个银河系。与此同时,这些帝王们所不知道的偏远之地,开源的反叛者们开始集聚。
|
||||
|
||||
**Saron Yitbarek:**好吧。这也许有点戏剧性,但当我们谈论上世纪八九十年代和 2000 年代的操作系统之争时,这也不算言过其实。*[00:01:00]* 确实曾经发生过一场史诗级的统治之战。史蒂夫·乔布斯和比尔·盖茨确实掌握着数十亿人的命运。掌控了操作系统,你就掌握了绝大多数人使用计算机的方式、互相通讯的方式、获取信息的方式。我可以一直罗列下去,不过你知道我的意思。掌握了操作系统,你就是帝王。
|
||||
|
||||
我是 Saron Yitbarek,你现在收听的是代码英雄,一款红帽公司原创的博客节目。*[00:01:30]* 你问,什么是<ruby>代码英雄<rt>Command Line Hero</rt></ruby>?嗯,如果你愿意创造而不仅仅是使用,如果你相信开发者拥有构建美好未来的能力,如果你希望拥有一个大家都有权利表达科技如何塑造生活的世界,那么你,我的朋友,就是一位代码英雄。在本系列节目中,我们将为你带来那些“白码起家”(LCTT 译注:原文是 “from the command line up”,应该是演绎自 “from the ground up”——白手起家)改变技术的程序员故事。*[00:02:00]* 那么我是谁,凭什么指导你踏上这段艰苦的旅程?Saron Yitbarek 是哪根葱?嗯,事实上我觉得我跟你差不多。我是一名面向初学者的开发人员,我做的任何事都依赖于开源软件,我的世界就是如此。通过在博客中讲故事,我可以跳出无聊的日常工作,鸟瞰全景,希望这对你也一样有用。
|
||||
|
||||
我迫不及待地想知道,开源技术从何而来?我的意思是,我对<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>和 Linux^® 的荣耀有一些了解,*[00:02:30]* 我相信你也一样,但是说真的,开源并不是一开始就有的,对吗?如果我想发自内心的感激这些最新、最棒的技术,比如 DevOps 和容器之类的,我感觉我对那些早期的开发者缺乏了解,我有必要了解这些东西来自何处。所以,让我们暂时先不用担心内存泄露和缓冲溢出。我们的旅程将从操作系统之战开始,这是一场波澜壮阔的桌面控制之战。*[00:03:00]* 这场战争亘古未有,因为:首先,在计算机时代,大公司拥有指数级的规模优势;其次,从未有过这么一场控制争夺战是如此变化多端。比尔·盖茨和史蒂夫·乔布斯? 他们也不知道结果会如何,但是到目前为止,这个故事进行到一半的时候,他们所争夺的所有东西都将发生改变、进化,最终上升到云端。
|
||||
|
||||
*[00:03:30]* 好的,让我们回到 1983 年的秋季。还有六年我才出生。那时候的总统还是<ruby>罗纳德·里根<rt>Ronald Reagan</rt></ruby>,美国和苏联扬言要把地球拖入核战争之中。在檀香山(火奴鲁鲁)的市政中心正在举办一年一度的苹果公司销售会议。一群苹果公司的员工正在等待史蒂夫·乔布斯上台。他 28 岁,热情洋溢,看起来非常自信。乔布斯很严肃地对着麦克风说他邀请了三个行业专家来就软件进行了一次小组讨论。*[00:04:00]* 然而随后发生的事情你肯定想不到。超级俗气的 80 年代音乐响彻整个房间。一堆多彩灯管照亮了舞台,然后一个播音员的声音响起-
|
||||
|
||||
**配音:**女士们,先生们,现在是麦金塔软件的约会游戏时间。
|
||||
|
||||
**Saron Yitbarek:**乔布斯的脸上露出一个大大的笑容,台上有三个 CEO 都需要轮流向他示好。这基本上就是 80 年代钻石王老五,不过是科技界的。*[00:04:30]* 两个软件大佬讲完话后,然后就轮到第三个人讲话了。仅此而已不是吗?是的。新面孔比尔·盖茨带着一个大大的遮住了半张脸的方框眼镜。他宣称在 1984 年,微软的一半收入将来自于麦金塔软件。他的这番话引来了观众热情的掌声。*[00:05:00]* 但是他们不知道的是,在一个月后,比尔·盖茨将会宣布发布 Windows 1.0 的计划。你永远也猜不到乔布斯正在跟苹果未来最大的敌人打情骂俏。但微软和苹果即将经历科技史上最糟糕的婚礼。他们会彼此背叛、相互毁灭,但又深深地、痛苦地捆绑在一起。
|
||||
|
||||
**James Allworth:***[00:05:30]* 我猜从哲学上来讲,苹果更理想化、将用户体验置于一切之上,是一个一体化的组织;而微软则更务实,更模块化 ——
|
||||
|
||||
**Saron Yitbarek:**这位是 James Allworth。他是一位多产的科技作家,曾在苹果零售的企业团队工作。注意他给出的苹果的定义,一个一体化的组织,那种只对自己负责的公司,一个不想依赖别人的公司,这是关键。
|
||||
|
||||
**James Allworth:***[00:06:00]* 苹果是一家一体化的公司,它希望专注于令人愉悦的用户体验,这意味着它希望控制整个技术栈以及交付的一切内容:从硬件到操作系统,甚至运行在操作系统上的应用程序。当新的创新、重要的创新刚刚进入市场,而你需要横跨软硬件,并且能够根据自己意愿和软件的革新来改变硬件时,这是一个优势。例如 ——,
|
||||
|
||||
**Saron Yitbarek:***[00:06:30]* 很多人喜欢这种一体化的模式,并因此成为了苹果的铁杆粉丝。还有很多人则选择了微软。让我们回到檀香山的销售会议上,在同一场活动中,乔布斯向观众展示了他即将发布的超级碗广告。你可能已经亲眼见过这则广告了。想想<ruby>乔治·奥威尔<rt>George Orwell</rt></ruby>的《一九八四》。在这个冰冷、灰暗的世界里,无意识的机器人正在独裁者的投射凝视下徘徊。*[00:07:00]* 这些机器人就像是 IBM 的用户们。然后,代表苹果公司的漂亮而健美的<ruby>安娅·梅杰<rt>Anya Major</rt></ruby>穿着鲜艳的衣服跑过大厅。她向着大佬们的屏幕猛地投出大锤,将它砸成了碎片。老大哥的咒语解除了,一个低沉的声音响起,苹果公司要开始介绍麦金塔了。
|
||||
|
||||
**配音:**这就是为什么现实中的 1984 跟小说《一九八四》不一样了。
|
||||
|
||||
Saron Yitbarek:是的,现在回顾那则广告,认为苹果是一个致力于解放大众的自由斗士的想法有点过分了。但这件事触动了我的神经。*[00:07:30]* Ken Segal 曾在为苹果制作这则广告的广告公司工作过。在早期,他为史蒂夫·乔布斯做了十多年的广告。
|
||||
|
||||
**Ken Segal:**1984 这则广告的风险很大。事实上,它的风险太大,以至于苹果公司在看到它的时候都不想播出它。你可能听说了史蒂夫喜欢它,但苹果公司董事会的人并不喜欢它。事实上,他们很愤怒,这么多钱被花在这么一件事情上,以至于他们想解雇广告代理商。*[00:08:00]* 史蒂夫则为我们公司辩护。
|
||||
|
||||
**Saron Yitbarek:**乔布斯,一如既往地,慧眼识英雄。
|
||||
|
||||
**Ken Segal:**这则广告在公司内、在业界内都引起了共鸣,成为了苹果产品的代表。无论人们那天是否有在购买电脑,它都带来了一种持续了一年又一年的影响,并有助于定义这家公司的品质:我们是叛军,我们是拿着大锤的人。
|
||||
|
||||
**Saron Yitbarek:***[00:08:30]* 因此,在争夺数十亿潜在消费者心智的过程中,苹果公司和微软公司的帝王们正在学着把自己塑造成救世主、非凡的英雄、一种对生活方式的选择。但比尔·盖茨知道一些苹果难以理解的事情。那就是在一个相互连接的世界里,没有人,即便是帝王,也不能独自完成任务。
|
||||
|
||||
*[00:09:00]* 1985 年 6 月 25 日。盖茨给当时的苹果 CEO John Scully 发了一份备忘录。那是一个迷失的年代。乔布斯刚刚被逐出公司,直到 1996 年才回到苹果。也许正是因为乔布斯离开了,盖茨才敢写这份东西。在备忘录中,他鼓励苹果授权制造商分发他们的操作系统。我想读一下备忘录的最后部分,让你们知道这份备忘录是多么的有洞察力。*[00:09:30]* 盖茨写道:“如果没有其他个人电脑制造商的支持,苹果现在不可能让他们的创新技术成为标准。苹果必须开放麦金塔的架构,以获得快速发展和建立标准所需的支持。”换句话说,你们不要再自己玩自己的了。你们必须有与他人合作的意愿。你们必须与开发者合作。
|
||||
|
||||
*[00:10:00]* 多年后你依然可以看到这条哲学思想,当微软首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>上台做主题演讲时,他开始大喊:“开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。”你懂我的意思了吧。微软喜欢开发人员。虽然目前(LCTT 译注:本播客发布于 2018 年初)他们不打算与这些开发人员共享源代码,但是他们确实想建立起整个合作伙伴生态系统。*[00:10:30]* 而当比尔·盖茨建议苹果公司也这么做时,如你可能已经猜到的,这个想法就被苹果公司抛到了九霄云外。他们的关系产生了间隙,五个月后,微软发布了 Windows 1.0。战争开始了。
|
||||
|
||||
> 开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。
|
||||
|
||||
*[00:11:00]* 你正在收听的是来自红帽公司的原创播客《代码英雄》。本集是第一集,我们将回到过去,重温操作系统战争的史诗故事,我们将会发现,科技巨头之间的战争是如何为我们今天所生活的开源世界扫清道路的
|
||||
|
||||
好的,让我们先来个背景故事吧。如果你已经听过了,那么请原谅我,但它很经典。当时是 1979 年,史蒂夫·乔布斯开车去<ruby>帕洛阿尔托<rt>Palo Alto</rt></ruby>的<ruby>施乐公园研究中心<rt>Xerox Park research center</rt></ruby>。*[00:11:30]* 那里的工程师一直在为他们所谓的图形用户界面开发一系列的元素。也许你听说过。它们有菜单、滚动条、按钮、文件夹和重叠的窗口。这是对计算机界面的一个美丽的新设想。这是前所未有的。作家兼记者 Steve Levy 会谈到它的潜力。
|
||||
|
||||
**Steven Levy:***[00:12:00]* 对于这个新界面来说,有很多令人激动的地方,它比以前的交互界面更友好,以前用的所谓的命令行 —— 你和电脑之间的交互方式跟现实生活中的交互方式完全不同。鼠标和电脑上的图像让你可以做到像现实生活中的交互一样,你可以像指向现实生活中的东西一样指向电脑上的东西。这让事情变得简单多了。你无需要记住所有那些命令。
|
||||
|
||||
**Saron Yitbarek:***[00:12:30]* 不过,施乐的高管们并没有意识到他们正坐在金矿上。一如既往地,工程师比主管们更清楚它的价值。因此那些工程师,当被要求向乔布斯展示所有这些东西是如何工作时,有点紧张。然而这是毕竟是高管的命令。乔布斯觉得,用他的话来说,“这个产品天才本来能够让施乐公司垄断整个行业,可是它最终会被公司的经营者毁掉,因为他们对产品的好坏没有概念。”*[00:13:00]* 这话有些苛刻,但是,乔布斯带着一卡车施乐高管错过的想法离开了会议。这几乎包含了他需要革新桌面计算体验的所有东西。1983 年,苹果发布了 Lisa 电脑,1984 年又发布了 Mac 电脑。这些设备的创意是抄袭自施乐公司的。
|
||||
|
||||
让我感兴趣的是,乔布斯对控诉他偷了图形用户界面的反应。他对此很冷静。他引用毕加索的话:“好的艺术家抄袭,伟大的艺术家偷窃。”他告诉一位记者,“我们总是无耻地窃取伟大的创意。”*[00:13:30]* 伟大的艺术家偷窃,好吧,我的意思是,我们说的并不是严格意义上的“偷窃”。没人拿到了专有的源代码并公然将其集成到他们自己的操作系统中去。这要更温和些,更像是创意的借用。这就难控制的多了,就像乔布斯自己即将学到的那样。传奇的软件奇才、真正的代码英雄 Andy Hertzfeld 就是麦金塔开发团队的最初成员。
|
||||
|
||||
**Andy Hertzfeld:***[00:14:00]* 是的,微软是我们的第一个麦金塔电脑软件合作伙伴。当时,我们并没有把他们当成是竞争对手。他们是苹果之外,我们第一家交付麦金塔电脑原型的公司。我通常每周都会和微软的技术主管聊一次。他们是第一个尝试我们所编写软件的外部团队。*[00:14:30]* 他们给了我们非常重要的反馈,总的来说,我认为我们的关系非常好。但我也注意到,在我与技术主管的交谈中,他开始问一些系统实现方面的问题,而他本无需知道这些,我觉得,他们想要复制麦金塔电脑。我很早以前就向史蒂夫·乔布斯反馈过这件事,但在 1983 年秋天,这件事达到了高潮。*[00:15:00]* 我们发现,他们在 1983 年 11 月的 COMDEX 上发布了 Windows,但却没有提前告诉我们。对此史蒂夫·乔布斯勃然大怒。他认为那是一种背叛。
|
||||
|
||||
**Saron Yitbarek:**随着新版 Windows 的发布,很明显,微软从苹果那里学到了苹果从施乐那里学来的所有想法。乔布斯很易怒。他的关于伟大艺术家如何偷窃的毕加索名言被别人学去了,而且恐怕盖茨也正是这么做的。*[00:15:30]* 据报道,当乔布斯怒斥盖茨偷了他们的东西时,盖茨回应道:“史蒂夫,我觉得这更像是我们都有一个叫施乐的富有邻居,我闯进他家偷电视机,却发现你已经偷过了”。苹果最终以窃取 GUI 的外观和风格为名起诉了微软。这个案子持续了好几年,但是在 1993 年,第 9 巡回上诉法院的一名法官最终站在了微软一边。*[00:16:00]* Vaughn Walker 法官宣布外观和风格不受版权保护。这是非常重要的。这一决定让苹果在无法垄断桌面计算的界面。很快,苹果短暂的领先优势消失了。以下是 Steven Levy 的观点。
|
||||
|
||||
**Steven Levy:**他们之所以失去领先地位,不是因为微软方面窃取了知识产权,而是因为他们无法巩固自己在上世纪 80 年代拥有的更好的操作系统的优势。坦率地说,他们的电脑索价过高。*[00:16:30]* 因此微软从 20 世纪 80 年代中期开始开发 Windows 系统,但直到 1990 年开发出的 Windows 3,我想,它才真正算是一个为黄金时期做好准备的版本,才真正可供大众使用。*[00:17:00]* 从此以后,微软能够将数以亿计的用户迁移到图形界面,而这是苹果无法做到的。虽然苹果公司有一个非常好的操作系统,但是那已经是 1984 年的产品了。
|
||||
|
||||
**Saron Yitbarek:**现在微软主导着操作系统的战场。他们占据了 90% 的市场份额,并且针对各种各样的个人电脑进行了标准化。操作系统的未来看起来会由微软掌控。此后发生了什么?*[00:17:30]* 1997 年,波士顿 Macworld 博览会上,你看到了一个几近破产的苹果,一个谦逊的多的史蒂夫·乔布斯走上舞台,开始谈论伙伴关系的重要性,特别是他们与微软的新型合作伙伴关系。史蒂夫·乔布斯呼吁双方缓和关系,停止火拼。微软将拥有巨大的市场份额。从表面看,我们可能会认为世界和平了。但当利益如此巨大时,事情就没那么简单了。*[00:18:00]* 就在苹果和微软在数十年的争斗中伤痕累累、最终败退到死角之际,一名 21 岁的芬兰计算机科学专业学生出现了。几乎是偶然地,他彻底改变了一切。
|
||||
|
||||
我是 Saron Yitbarek,这里是代码英雄。
|
||||
|
||||
正当某些科技巨头正忙着就专有软件相互攻击时,自由软件和开源软件的新领军者如雨后春笋般涌现。*[00:18:30]* 其中一位优胜者就是<ruby>理查德·斯托尔曼<rt>Richard Stallman</rt></ruby>。你也许对他的工作很熟悉。他想要有自由软件和自由社会。这就像言论自由一样的<ruby>自由<rt>free</rt></ruby>,而不是像免费啤酒一样的<ruby>免费<rt>free</rt></ruby>。早在 80 年代,斯托尔曼就发现,除了昂贵的专有操作系统(如 UNIX)外,就没有其他可行的替代品。因此他决定自己做一个。斯托尔曼的<ruby>自由软件基金会<rt>Free Software Foundation</rt></ruby>开发了 GNU,当然,它的意思是 “GNU's not UNIX”。它将是一个像 UNIX 一样的操作系统,但不包含所有的 UNIX 代码,而且用户可以自由共享。
|
||||
|
||||
*[00:19:00]* 为了让你体会到上世纪 80 年代自由软件概念的重要性,从不同角度来说拥有 UNIX 代码的两家公司,<ruby>AT&T 贝尔实验室<rt>AT&T Bell Laboratories</rt></ruby>以及<ruby>UNIX 系统实验室<rt>UNIX System Laboratories</rt></ruby>威胁将会起诉任何看过 UNIX 源代码后又创建自己操作系统的人。这些人是次级专利所属。*[00:19:30]* 用这两家公司的话来说,所有这些程序员都在“精神上受到了污染”,因为他们都见过 UNIX 代码。在 UNIX 系统实验室和<ruby>伯克利软件设计公司<rt>Berkeley Software Design</rt></ruby>之间的一个著名的法庭案例中,有人认为任何功能类似的系统,即使它本身没有使用 UNIX 代码,也侵犯版权。Paul Jones 当时是一名开发人员。他现在是数字图书馆 ibiblio.org 的主管。
|
||||
|
||||
**Paul Jones:***[00:20:00]* 任何看过代码的人都受到了精神污染,这就是他们的观点。因此几乎所有在安装有与 UNIX 相关操作系统的电脑上工作过的人,以及任何在计算机科学部门工作的人,都受到了精神上的污染。因此,在 USENIX 的一年里,我们都得到了一些带有红色字母的白色小别针,上面写着“精神受到了污染”。我们很喜欢带着这些别针到处走,以此调侃贝尔实验室,因为我们的精神受到了污染。
|
||||
|
||||
**Saron Yitbarek:***[00:20:30]* 整个世界都被精神污染了。想要保持纯粹、保持事物的美好和专有的旧哲学正变得越来越不现实。正是在这被污染的现实中,历史上最伟大的代码英雄之一诞生了,他是一个芬兰男孩,名叫<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>。如果这是《星球大战》,那么林纳斯·托瓦兹就是我们的<ruby>卢克·天行者<rt>Luke Skywalker</rt></ruby>。他是赫尔辛基大学一名温文尔雅的研究生。*[00:21:00]* 有才华,但缺乏大志。典型的被逼上梁山的英雄。和其他年轻的英雄一样,他也感到沮丧。他想把 386 处理器整合到他的新电脑中。他对自己的 IBM 兼容电脑上运行的 MS-DOS 操作系统并不感冒,也负担不起 UNIX 软件 5000 美元的价格,而只有 UNIX 才能让他自由地编程。解决方案是托瓦兹在 1991 年春天基于 MINIX 开发了一个名为 Linux 的操作系统内核。他自己的操作系统内核。
|
||||
|
||||
**Steven Vaughan-Nichols:***[00:21:30]* 林纳斯·托瓦兹真的只是想找点乐子而已。
|
||||
|
||||
**Saron Yitbarek:**Steven Vaughan-Nichols 是 ZDNet.com 的特约编辑,而且他从科技行业出现以来就一直在写科技行业相关的内容。
|
||||
|
||||
**Steven Vaughan-Nichols:**当时有几个类似的操作系统。他最关注的是一个名叫 MINIX 的操作系统,MINIX 旨在让学生学习如何构建操作系统。林纳斯看到这些,觉得很有趣,但他想建立自己的操作系统。*[00:22:00]* 所以,它实际上始于赫尔辛基的一个 DIY 项目。一切就这样开始了,基本上就是一个大孩子在玩耍,学习如何做些什么。*[00:22:30]* 但不同之处在于,他足够聪明、足够执着,也足够友好,让所有其他人都参与进来,然后他开始把这个项目进行到底。27 年后,这个项目变得比他想象的要大得多。
|
||||
|
||||
**Saron Yitbarek:**到 1991 年秋季,托瓦兹发布了 10000 行代码,世界各地的人们开始评头论足,然后进行优化、添加和修改代码。*[00:23:00]* 对于今天的开发人员来说,这似乎很正常,但请记住,在那个时候,像这样的开放协作是对微软、苹果和 IBM 已经做的很好的整个专有系统的道德侮辱。随后这种开放性被奉若神明。托瓦兹将 Linux 置于 GNU 通用公共许可证(GPL)之下。曾经保障斯托尔曼的 GNU 系统自由的许可证现在也将保障 Linux 的自由。Vaughan-Nichols 解释道,这种融入到 GPL 的重要性怎么强调都不过分,它基本上能永远保证软件的自由和开放性。
|
||||
|
||||
**Steven Vaughan-Nichols:***[00:23:30]* 事实上,根据 Linux 所遵循的许可协议,即 GPL 第 2 版,如果你想贩卖 Linux 或者向全世界展示它,你必须与他人共享代码,所以如果你对其做了一些改进,仅仅给别人使用是不够的。事实上你必须和他们分享所有这些变化的具体细节。然后,如果这些改进足够好,就会被 Linux 所吸收。
|
||||
|
||||
**Saron Yitbarek:***[00:24:00]* 事实证明,这种公开的方式极具吸引力。<ruby>埃里克·雷蒙德<rt>Eric Raymond</rt></ruby> 是这场运动的早期传道者之一,他在他那篇著名的文章中写道:“微软和苹果这样的公司一直在试图建造软件大教堂,而 Linux 及类似的软件则提供了一个由不同议程和方法组成的巨大集市,集市比大教堂有趣多了。”
|
||||
|
||||
**Stormy Peters:**我认为在那个时候,真正吸引人的是人们终于可以把控自己的世界了。
|
||||
|
||||
**Saron Yitbarek:**Stormy Peters 是一位行业分析师,也是自由和开源软件的倡导者。
|
||||
|
||||
**Stormy Peters:***[00:24:30]* 当开源软件第一次出现的时候,所有的操作系统都是专有的。如果不使用专有软件,你甚至不能添加打印机,你不能添加耳机,你不能自己开发一个小型硬件设备,然后让它在你的笔记本电脑上运行。你甚至不能放入 DVD 并复制它,因为你不能改变软件,即使你拥有这张 DVD,你也无法复制它。*[00:25:00]* 你无法控制你购买的硬件/软件系统。你不能从中创造出任何新的、更大的、更好的东西。这就是为什么开源操作系统在一开始就如此重要的原因。我们需要一个开源协作环境,在那里我们可以构建更大更好的东西。
|
||||
|
||||
**Saron Yitbarek:**请注意,Linux 并不是一个纯粹的平等主义乌托邦。林纳斯·托瓦兹不会批准对内核的所有修改,而是主导了内核的变更。他安排了十几个人来管理内核的不同部分。*[00:25:30]* 这些人也会信任自己下面的人,以此类推,形成信任金字塔。变化可能来自任何地方,但它们都是经过判断和策划的。
|
||||
|
||||
然而,考虑到林纳斯的 DIY 项目一开始是多么的简陋和随意,这项成就令人十分惊讶。他完全不知道自己就是这一切中的卢克·天行者。当时他只有 21 岁,一半的时间都在编程。但是当魔盒第一次被打开,人们开始给他反馈。*[00:26:00]* 几十个,然后几百个,成千上万的贡献者。有了这样的众包基础,Linux 很快就开始成长。真的成长得很快。甚至最终引起了微软的注意。他们的首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>将 Linux 称为是“一种癌症,从知识产权的角度来看,它传染了接触到的任何东西”。Steven Levy 会解释鲍尔默为什么会这么说。
|
||||
|
||||
**Steven Levy:***[00:26:30]* 一旦微软真正巩固了它的垄断地位,而且它也确实被联邦法院判定为垄断,他们就会对任何可能对其构成威胁的事情做出强烈反应。因此,既然他们对软件收费,很自然地,他们将自由软件的出现看成是一种癌症。他们试图提出一个知识产权理论,来解释为什么这对消费者不利。
|
||||
|
||||
**Saron Yitbarek:***[00:27:00]* Linux 在不断传播,微软也开始担心起来。到了 2006 年,Linux 成为仅次于 Windows 的第二大常用操作系统,全球约有 5000 名开发人员在为它工作。5000 名开发者。还记得比尔·盖茨给苹果公司的备忘录吗?在那份备忘录中,他向苹果公司的员工们论述了与他人合作的重要性。事实证明,开源将把伙伴关系的概念提升到一个全新的水平,这是比尔·盖茨从未预见到的。
|
||||
|
||||
*[00:27:30]* 我们一直在谈论操作系统之间的大战,但是到目前为止,并没有怎么提到无名英雄和开发者们。在下次的代码英雄中,情况就不同了。第二集讲的是操作系统大战的第二部分,是关于 Linux 崛起的。业界醒悟过来,认识到了开发人员的重要性。这些开源反叛者变得越来越强大,战场从桌面转移到了服务器领域。*[00:28:00]* 这里有商业间谍活动、新的英雄人物,还有科技史上最不可思议的改变。这一切都在操作系统大战的后半集内达到了高潮。
|
||||
|
||||
要想免费自动获得新一集的代码英雄,请点击订阅苹果播客、Spotify、谷歌 Play,或其他应用获取该播客。在这一季剩下的时间里,我们将参观最新的战场,相互争斗的版图,这里是下一代的代码英雄留下印记的地方。*[00:28:30]* 更多信息,请访问 https://redhat.com/commandlineheroes 。我是 Saron Yitbarek。下次之前,继续编码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1
|
||||
|
||||
作者:[redhat][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.redhat.com
|
||||
[b]: https://github.com/lujun9972
|

cp 命令的两种绝佳用法:Bash 快捷方式
===================
|
||||
|
||||
> 这篇文章是关于如何在使用 cp 命令进行备份以及同步时提高效率。
|
||||
|
||||
![Two great uses for the cp command: Bash shortcuts ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)
|
||||
|
||||
去年七月,我写了一篇[关于 cp 命令的两种绝佳用法][7]的文章:备份一个文件,以及同步一个文件夹的备份。
|
||||
|
||||
虽然这些工具确实很好用,但同时,输入这些命令太过于累赘了。为了解决这个问题,我在我的 Bash 启动文件里创建了一些 Bash 快捷方式。现在,我想把这些捷径分享给你们,以便于你们在需要的时候可以拿来用,或者是给那些还不知道怎么使用 Bash 的别名以及函数的用户提供一些思路。
|
||||
|
||||
### 使用 Bash 别名来更新一个文件夹的副本
|
||||
|
||||
如果要使用 `cp` 来更新一个文件夹的副本,通常会使用到的命令是:
|
||||
|
||||
```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```
|
||||
|
||||
其中 `-r` 代表“向下递归访问文件夹中的所有文件”,`-u` 代表“更新目标”,`-v` 代表“详细模式”,`SOURCE-FOLDER` 是包含最新文件的文件夹的名称,`DESTINATION-DIRECTORY` 是包含必须同步的`SOURCE-FOLDER` 副本的目录。
|
||||
|
||||
因为我经常使用 `cp` 命令来复制文件夹,我会很自然地想起使用 `-r` 选项。也许再想得更深入一些,我还可以想起用 `-v` 选项;如果再想得再深一层,我才会想起选项 `-u`(不知道这个选项是代表“更新”还是“同步”还是别的什么)。
|
||||
|
||||
或者,还可以使用[Bash 的别名功能][8]来将 `cp` 命令以及其后的选项转换成一个更容易记忆的单词,就像这样:
|
||||
|
||||
```
alias sync='cp -r -u -v'
```
|
||||
|
||||
如果我将其保存在我的主目录中的 `.bash_aliases` 文件中,然后启动一个新的终端会话,我可以使用该别名了,例如:
|
||||
|
||||
```
sync Pictures /media/me/4388-E5FE
```
|
||||
|
||||
可以将我的主目录中的图片文件夹与我的 USB 驱动器中的相同版本同步。
|
||||
|
||||
不清楚 `sync` 是否已经定义了?你可以在终端里输入 `alias` 这个单词来列出所有正在使用的命令别名。
|
||||
|
||||
喜欢吗?想要现在就立即使用吗?那就现在打开终端,输入:
|
||||
|
||||
```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
|
||||
|
||||
然后启动一个新的终端窗口并在命令提示符下键入 `alias`。你应该看到这样的东西:
|
||||
|
||||
```
me@mymachine~$ alias

alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```
|
||||
|
||||
这里你能看到 `sync` 已经定义了。
|
||||
|
||||
### 使用 Bash 函数来为备份编号
|
||||
|
||||
若要使用 `cp` 来备份一个文件,通常使用的命令是:
|
||||
|
||||
```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```
|
||||
|
||||
其中 `--force` 代表“强制制作副本”,`--backup= numbered` 代表“使用数字表示备份的生成”,`WORKING-FILE` 是我们希望保留的当前文件,`BACKED-UP-FILE` 与 `WORKING-FILE` 的名称相同,并附加生成信息。
|
||||
|
||||
我们不仅需要记得所有 `cp` 的选项,还需要记得重复输入 `WORKING-FILE` 的名字。但当 [Bash 的函数功能][9] 已经可以帮我们做这一切时,为什么还要不断地重复这个过程呢?
|
||||
|
||||
再一次提醒,你可将下列内容保存入你在家目录下的 `.bash_aliases` 文件里:
|
||||
|
||||
```
function backup {
    if [ $# -ne 1 ]; then
        echo "Usage: $0 filename"
    elif [ -f "$1" ] ; then
        # 先打印将要执行的 cp 命令(选项全部使用长名称),再执行它
        echo "cp --force --backup=numbered $1 $1"
        cp --force --backup=numbered "$1" "$1"
    else
        echo "$0: $1 is not a file"
    fi
}
```
|
||||
|
||||
我将此函数称之为 `backup`,因为我的系统上没有任何其他名为 `backup` 的命令,但你可以选择适合的任何名称。
|
||||
|
||||
第一个 `if` 语句是用于检查是否提供有且只有一个参数,否则,它会用 `echo` 命令来打印出正确的用法。
|
||||
|
||||
`elif` 语句是用于检查提供的参数所指向的是一个文件,如果是的话,它会用第二个 `echo` 命令来打印所需的 `cp` 的命令(所有的选项都是用全称来表示)并且执行它。
|
||||
|
||||
如果所提供的参数不是一个文件,文件中的第三个 `echo` 用于打印错误信息。
|
||||
|
||||
在我的家目录下,如果我执行 `backup checkCounts.sql` 这个命令,我可以发现目录下多了一个名为 `checkCounts.sql.~1~` 的文件,如果我再执行一次,便又多了一个名为 `checkCounts.sql.~2~` 的文件。
|
||||
|
||||
成功了!就像所想的一样,我可以继续编辑 `checkCounts.sql`,但如果我可以经常地用这个命令来为文件制作快照的话,我可以在我遇到问题的时候回退到最近的版本。
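
需要回退时,只要把相应编号的备份拷回原文件名即可(沿用上文的示例文件名):

```
ls checkCounts.sql*                      # 查看当前有哪些编号备份
cp checkCounts.sql.~2~ checkCounts.sql   # 用第 2 号备份覆盖当前文件
```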
|
||||
|
||||
也许在未来的某个时间,使用 `git` 作为版本控制系统会是一个好主意。但像上文所介绍的 `backup` 这个简单而又好用的工具,是你在需要使用快照的功能时却还未准备好使用 `git` 的最好工具。
|
||||
|
||||
### 结论
|
||||
|
||||
在我的上一篇文章里,我承诺会通过使用脚本、shell 函数以及别名功能来简化一些机械性的动作,以提高生产效率。
|
||||
|
||||
在这篇文章里,我已经展示了如何在使用 `cp` 命令同步或者备份文件时运用 shell 函数以及别名功能来简化操作。如果你想要了解更多,可以读一下这两篇文章:[怎样通过使用命令别名功能来减少敲击键盘的次数][10] 以及由我的同事 Greg 和 Seth 写的 [Shell 编程:shift 方法和自定义函数介绍][11]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/two-great-uses-cp-command-update
|
||||
|
||||
作者:[Chris Hermansen][a]
|
||||
译者:[zyk2290](https://github.com/zyk2290)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/clhermansen
|
||||
[1]:https://opensource.com/users/clhermansen
|
||||
[2]:https://opensource.com/users/clhermansen
|
||||
[3]:https://opensource.com/user/37806/feed
|
||||
[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8
|
||||
[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments
|
||||
[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR
|
||||
[7]:https://opensource.com/article/17/7/two-great-uses-cp-command
|
||||
[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
|
||||
[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
|
||||
[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
|
||||
[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
|
||||
[12]:https://opensource.com/tags/linux
|
||||
[13]:https://opensource.com/users/clhermansen
|
||||
[14]:https://opensource.com/users/clhermansen
|

本地开发如何测试 Webhook
===================
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/11/090540wipp5c65iinyyf63.jpg)
|
||||
|
||||
[Webhook][10] 可用于外部系统通知你的系统发生了某个事件或更新。可能最知名的 [Webhook][10] 类型是支付服务提供商(PSP)通知你的系统支付状态有了更新。
|
||||
|
||||
它们通常以监听的预定义 URL 的形式出现,例如 `http://example.com/webhooks/payment-update`。同时,另一个系统向该 URL 发送具有特定有效载荷的 POST 请求(例如支付 ID)。一旦请求进入,你就会获得支付 ID,可以通过 PSP 的 API 用这个支付 ID 向它们询问最新状态,然后更新你的数据库。
|
||||
|
||||
其他例子可以在这个对 Webhook 的出色的解释中找到:[https://sendgrid.com/blog/whats-webhook/][12]。
|
||||
|
||||
只要系统可通过互联网公开访问(这可能是你的生产环境或可公开访问的临时环境),测试这些 webhook 就相当顺利。而当你在笔记本电脑上或虚拟机内部(例如,Vagrant 虚拟机)进行本地开发时,它就变得困难了。在这些情况下,发送 webhook 的一方无法公开访问你的本地 URL。此外,监视发送的请求也很困难,这可能使开发和调试变得困难。
|
||||
|
||||
因此,这个例子将解决:
|
||||
|
||||
* 测试来自本地开发环境的 webhook,该环境无法通过互联网访问。从服务器向 webhook 发送数据的服务无法访问它。
|
||||
* 监控发送的请求和数据,以及应用程序生成的响应。这样可以更轻松地进行调试,从而缩短开发周期。
|
||||
|
||||
前置需求:
|
||||
|
||||
* *可选*:如果你使用虚拟机(VM)进行开发,请确保它正在运行,并确保在 VM 中完成后续步骤。
|
||||
* 对于本教程,我们假设你定义了一个 vhost:`webhook.example.vagrant`。我在本教程中使用了 Vagrant VM,但你可以自由选择 vhost。
|
||||
* 按照这个[安装说明][3]安装 `ngrok`。在 VM 中,我发现它的 Node 版本也很有用:[https://www.npmjs.com/package/ngrok][4],但你可以随意使用其他方法。
|
||||
|
||||
我假设你没有在你的环境中运行 SSL,但如果你使用了,请将在下面的示例中的端口 80 替换为端口 433,`http://` 替换为 `https://`。
|
||||
|
||||
### 使 webhook 可测试
|
||||
|
||||
我们假设以下示例代码。我将使用 PHP,但请将其视作伪代码,因为我留下了一些关键部分(例如 API 密钥、输入验证等)没有编写。
|
||||
|
||||
第一个文件:`payment.php`。此文件创建一个 `$payment` 对象,将其注册到 PSP。然后它获取客户需要访问的 URL,以便支付并将用户重定向到客户那里。
|
||||
|
||||
请注意,此示例中的 `webhook.example.vagrant` 是我们为开发设置定义的本地虚拟主机。它无法从外部世界进入。
|
||||
|
||||
```
<?php
/*
 * This file creates a payment and tells the PSP what webhook URL to use for updates
 * After creating the payment, we get a URL to send the customer to in order to pay at the PSP
 */
$payment = [
    'order_id' => 123,
    'amount' => 25.00,
    'description' => 'Test payment',
    'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
    'webhook_url' => 'http://webhook.example.vagrant/webhook.php',
];

$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
|
||||
|
||||
第二个文件:`webhook.php`。此文件等待 PSP 调用以获得有关更新的通知。
|
||||
|
||||
```
<?php
/*
 * This file gets called by the PSP and in the $_POST they submit an 'id'
 * We can use this ID to get the latest status from the PSP and update our internal systems afterward
 */

$paymentId = $_POST['id'];
$paymentInfo = $paymentProvider->getPayment($paymentId);
$status = $paymentInfo->getStatus();

// Perform actions in here to update your system
if ($status === 'paid') {
    ..
}
elseif ($status === 'cancelled') {
    ..
}
```
|
||||
|
||||
我们的 webhook URL 无法通过互联网访问(请记住它:`webhook.example.vagrant`)。因此,PSP 永远不可能调用文件 `webhook.php`,你的系统将永远不会知道付款状态,这最终导致订单永远不会被运送给客户。
|
||||
|
||||
幸运的是,`ngrok` 可以解决这个问题。 [ngrok][13] 将自己描述为:
|
||||
|
||||
> ngrok 通过安全隧道将 NAT 和防火墙后面的本地服务器暴露给公共互联网。
|
||||
|
||||
让我们为我们的项目启动一个基本的隧道。在你的环境中(在你的系统上或在 VM 上)运行以下命令:
|
||||
|
||||
```
ngrok http -host-header=rewrite webhook.example.vagrant:80
```
|
||||
|
||||
> 阅读其文档可以了解更多配置选项:[https://ngrok.com/docs][14]。
|
||||
|
||||
会出现这样的屏幕:
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*BZZE-CvZwHZ3pxsElJMWbA.png)
|
||||
|
||||
*ngrok 输出*
|
||||
|
||||
我们刚刚做了什么?基本上,我们指示 `ngrok` 在端口 80 建立了一个到 `http://webhook.example.vagrant` 的隧道。同一个 URL 也可以通过 `http://39741ffc.ngrok.io` 或 `https://39741ffc.ngrok.io` 访问,它们能被任何知道此 URL 的人通过互联网公开访问。
|
||||
|
||||
请注意,你可以同时获得 HTTP 和 HTTPS 两个服务。这个文档提供了如何将此限制为 HTTPS 的示例:[https://ngrok.com/docs#bind-tls][16]。
|
||||
|
||||
那么,我们如何让我们的 webhook 现在工作起来?将 `payment.php` 更新为以下代码:
|
||||
|
||||
```
<?php
/*
 * This file creates a payment and tells the PSP what webhook URL to use for updates
 * After creating the payment, we get a URL to send the customer to in order to pay at the PSP
 */
$payment = [
    'order_id' => 123,
    'amount' => 25.00,
    'description' => 'Test payment',
    'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
    'webhook_url' => 'https://39741ffc.ngrok.io/webhook.php',
];

$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
|
||||
|
||||
现在,我们告诉 PSP 通过 HTTPS 调用此隧道 URL。只要 PSP 通过隧道调用 webhook,`ngrok` 将确保使用未修改的有效负载调用内部 URL。
|
||||
|
||||
### 如何监控对 webhook 的调用?
|
||||
|
||||
你在上面看到的屏幕截图概述了对隧道主机的调用,这些数据相当有限。幸运的是,`ngrok` 提供了一个非常好的仪表板,允许你检查所有调用:
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*qZw9GRTnG1sMgEUmsJPz3g.png)
|
||||
|
||||
我不会深入研究这个问题,因为它是不言自明的,你只要运行它就行了。因此,我将解释如何在 Vagrant 虚拟机上访问它,因为它不是开箱即用的。
|
||||
|
||||
仪表板将允许你查看所有调用、其状态代码、标头和发送的数据。你将看到应用程序生成的响应。
|
||||
|
||||
仪表板的另一个优点是它允许你重放某个调用。假设你的 webhook 代码遇到了致命的错误,开始新的付款并等待 webhook 被调用将会很繁琐。重放上一个调用可以使你的开发过程更快。
|
||||
|
||||
默认情况下,仪表板可在 `http://localhost:4040` 访问。
|
||||
|
||||
### 虚拟机中的仪表盘
|
||||
|
||||
为了在 VM 中完成此工作,你必须执行一些额外的步骤:
|
||||
|
||||
首先,确保可以在端口 4040 上访问 VM。然后,在 VM 内创建一个文件以存放此配置:
|
||||
|
||||
```
web_addr: 0.0.0.0:4040
```
|
||||
|
||||
现在,杀死仍在运行的 `ngrok` 进程,并使用稍微调整过的命令启动它:
|
||||
|
||||
```
ngrok http -config=/path/to/config/ngrok.conf -host-header=rewrite webhook.example.vagrant:80
```
|
||||
|
||||
尽管 ID 已经更改,但你将看到类似于上一屏幕截图的屏幕。之前的网址不再有效,但你有了一个新网址。 此外,`Web Interface` URL 已更改:
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*3FZq37TF4dmBqRc1R0FMVg.png)
|
||||
|
||||
现在将浏览器指向 `http://webhook.example.vagrant:4040` 以访问仪表板。另外,对 `https://e65642b5.ngrok.io/webhook.php` 发一个调用。这可能会导致你的浏览器报错,但仪表板上应该会显示出这个请求。
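
比如可以直接用 `curl` 模拟 PSP 的 webhook 调用(支付 ID `tr_12345` 为假设值),然后在仪表板里观察请求和响应:

```
curl -X POST -d 'id=tr_12345' https://e65642b5.ngrok.io/webhook.php
```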
|
||||
|
||||
### 最后的备注
|
||||
|
||||
上面的例子是伪代码。原因是每个外部系统都以不同的方式使用 webhook。我试图基于一个虚构的 PSP 实现给出一个例子,因为可能很多开发人员在某个时刻肯定会处理付款。
|
||||
|
||||
请注意,你的 webhook 网址也可能被意图不好的其他人使用。确保验证发送给它的任何输入。
|
||||
|
||||
更好的做法是,可以向 URL 添加令牌,该令牌对于每笔支付是唯一的。只有你的系统和发送 webhook 的系统才知道此令牌。
|
||||
|
||||
祝你测试和调试你的 webhook 顺利!
|
||||
|
||||
注意:我没有在 Docker 上测试过本教程。但是,这个 Docker 容器看起来是一个很好的起点,并包含了明确的说明:[https://github.com/wernight/docker-ngrok][19] 。
|
||||
|
||||
--------
|
||||
|
||||
via: https://medium.freecodecamp.org/testing-webhooks-while-using-vagrant-for-development-98b5f3bedb1d
|
||||
|
||||
作者:[Stefan Doorn][a]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://medium.freecodecamp.org/@stefandoorn
|
||||
[1]:https://unsplash.com/photos/MYTyXb7fgG0?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[2]:https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[3]:https://ngrok.com/download
|
||||
[4]:https://www.npmjs.com/package/ngrok
|
||||
[5]:http://webhook.example.vagrnat/
|
||||
[6]:http://39741ffc.ngrok.io/
|
||||
[7]:http://39741ffc.ngrok.io/
|
||||
[8]:http://webhook.example.vagrant:4040/
|
||||
[9]:https://e65642b5.ngrok.io/webhook.php.
|
||||
[10]:https://sendgrid.com/blog/whats-webhook/
|
||||
[11]:http://example.com/webhooks/payment-update%29
|
||||
[12]:https://sendgrid.com/blog/whats-webhook/
|
||||
[13]:https://ngrok.com/
|
||||
[14]:https://ngrok.com/docs
|
||||
[15]:http://39741ffc.ngrok.io%2C/
|
||||
[16]:https://ngrok.com/docs#bind-tls
|
||||
[17]:http://localhost:4040./
|
||||
[18]:https://e65642b5.ngrok.io/webhook.php.
|
||||
[19]:https://github.com/wernight/docker-ngrok
|
||||
[20]:https://github.com/stefandoorn
|
||||
[21]:https://twitter.com/stefan_doorn
|
||||
[22]:https://www.linkedin.com/in/stefandoorn
|

使用 LVM 升级 Fedora
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)
|
||||
|
||||
大多数用户发现,按照标准流程[从一个 Fedora 版本升级到下一个][1]很简单。但是,Fedora 升级也不可避免地会遇到许多特殊情况。本文介绍了使用 DNF 和逻辑卷管理(LVM)进行升级的一种方法,以便在出现问题时保留可引导的备份。这个例子是将 Fedora 26 系统升级到 Fedora 28。
|
||||
|
||||
此处展示的过程比标准升级过程更复杂。在使用此过程之前,你应该充分掌握 LVM 的工作原理。如果没有适当的技能和细心,你可能会丢失数据和/或被迫重新安装系统!如果你不知道自己在做什么,那么**强烈建议**你坚持只使用得到支持的升级方法。
|
||||
|
||||
### 准备系统
|
||||
|
||||
在开始之前,请确保你的现有系统已完全更新。
|
||||
|
||||
```
$ sudo dnf update
$ sudo systemctl reboot # 或采用 GUI 方式
```
|
||||
|
||||
检查你的根文件系统是否是通过 LVM 挂载的。
|
||||
|
||||
```
$ df /
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26  20511312 14879816   4566536  77% /

$ sudo lvs
LV        VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22       vg_sdg -wi-ao----  15.00g
f24_64    vg_sdg -wi-ao----  20.00g
f26       vg_sdg -wi-ao----  20.00g
home      vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao----  10.00g
swap      vg_sdg -wi-ao----   4.00g
test      vg_sdg -wi-a-----   1.00g
vg_vm     vg_sdg -wi-ao----  20.00g
```
|
||||
|
||||
如果你在安装 Fedora 时使用了默认值,你可能会发现根文件系统挂载在名为 `root` 的逻辑卷(LV)上。卷组(VG)的名称可能会有所不同。看看根卷的总大小。在该示例中,根文件系统名为 `f26`,大小为 `20G`。
|
||||
|
||||
接下来,确保 LVM 中有足够的可用空间。
|
||||
|
||||
```
$ sudo vgs
VG     #PV #LV #SN Attr   VSize   VFree
vg_sdg   1   8   0 wz--n- 232.39g 42.39g
```
|
||||
|
||||
该系统有足够的可用空间,可以为升级后的 Fedora 28 的根卷分配 20G 的逻辑卷。如果你使用的是默认安装,则你的 LVM 中将没有可用空间。对 LVM 的一般性管理超出了本文的范围,但这里有一些情形下可能采取的方法:
|
||||
|
||||
1、`/home` 在自己的逻辑卷,而且 `/home` 中有大量空闲空间。
|
||||
|
||||
你可以从图形界面中注销并切换到文本控制台,以 `root` 用户身份登录。然后你可以卸载 `/home`,并使用 `lvreduce -r` 调整大小并重新分配 `/home` 逻辑卷。你也可以从<ruby>现场镜像<rt>Live image</rt></ruby>启动(以便不使用 `/home`)并使用 gparted GUI 实用程序进行分区调整。
|
||||
|
||||
2、大多数 LVM 空间被分配给根卷,该文件系统中有大量可用空间。
|
||||
|
||||
你可以从现场镜像启动并使用 gparted GUI 实用程序来减少根卷的大小。此时也可以考虑将 `/home` 移动到另外的文件系统,但这超出了本文的范围。
|
||||
|
||||
3、大多数文件系统已满,但你有一个已经不再需要的逻辑卷。
|
||||
|
||||
你可以删除不需要的逻辑卷,释放卷组中的空间以进行此操作。
|
||||
|
||||
### 创建备份
|
||||
|
||||
首先,为升级后的系统分配新的逻辑卷。确保为系统的卷组(VG)使用正确的名称。在这个例子中它是 `vg_sdg`。
|
||||
|
||||
```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```
|
||||
|
||||
接下来,创建当前根文件系统的快照。此示例创建名为 `f26_s` 的快照卷。
|
||||
|
||||
```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```
|
||||
|
||||
现在可以将快照复制到新逻辑卷。当你替换自己的卷名时,**请确保目标正确**。如果不小心,就会不可撤销地删除了数据。此外,请确保你从根卷的快照复制,**而不是**从你的现在的根卷。
|
||||
|
||||
```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```
|
||||
|
||||
给新文件系统一个唯一的 UUID。这不是绝对必要的,但 UUID 应该是唯一的,因此这避免了未来的混淆。以下是在 ext4 根文件系统上的方法:
|
||||
|
||||
```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```
|
||||
|
||||
然后删除不再需要的快照卷:
|
||||
|
||||
```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```
|
||||
|
||||
如果你单独挂载了 `/home`,你可能希望在此处制作 `/home` 的快照。有时,升级的应用程序会进行与旧版 Fedora 版本不兼容的更改。如果需要,编辑**旧**根文件系统上的 `/etc/fstab` 文件以在 `/home` 上挂载快照。请记住,当快照已满时,它将消失!另外,你可能还希望给 `/home` 做个正常备份。
|
||||
|
||||
### 配置以使用新的根
|
||||
|
||||
首先,安装新的逻辑卷并备份现有的 GRUB 设置:
|
||||
|
||||
```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```
|
||||
|
||||
编辑 `grub.cfg` 并在第一个菜单项 `menuentry` 之前添加这些,除非你已经有了:
|
||||
|
||||
```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```
|
||||
|
||||
编辑 `grub.cfg` 并更改默认菜单项以激活并挂载新的根文件系统。改变这一行:
|
||||
|
||||
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```
|
||||
|
||||
改成如下这样。请记住使用你系统上正确的卷组和逻辑卷名称!
|
||||
|
||||
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```
|
||||
|
||||
编辑 `/mnt/f28/etc/default/grub` 并改变在启动时激活的默认的根卷:
|
||||
|
||||
```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```
|
||||
|
||||
编辑 `/mnt/f28/etc/fstab`,将根文件系统的挂载条目从旧的逻辑卷:
|
||||
|
||||
```
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
```
|
||||
|
||||
改为新的:
|
||||
|
||||
```
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
```
|
||||
|
||||
然后,出于参考的用途,只读挂载旧的根卷:
|
||||
|
||||
```
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
```
|
||||
|
||||
如果你的根文件系统是通过 UUID 挂载的,你需要改变这个方式。如果你的根文件系统是 ext4 你可以这样做:
|
||||
|
||||
```
$ sudo e2label /dev/vg_sdg/f28 F28
```
|
||||
|
||||
现在编辑 `/mnt/f28/etc/fstab` 使用该卷标。改变该根文件系统的挂载行,像这样:
|
||||
|
||||
```
LABEL=F28 / ext4 defaults 1 1
```
|
||||
|
||||
### 重启与升级
|
||||
|
||||
重新启动,你的系统将使用新的根文件系统。它仍然是 Fedora 26,但是是带有新的逻辑卷名称的副本,并可以进行 `dnf` 系统升级!如果出现任何问题,请使用旧引导菜单引导回到你的工作系统,此过程可避免触及旧系统。
|
||||
|
||||
```
$ sudo systemctl reboot # 或采用 GUI 方式
...
$ df / /f26
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28  20511312 14903196   4543156  77% /
/dev/mapper/vg_sdg-f26  20511312 14866412   4579940  77% /f26
```
|
||||
|
||||
你可能希望验证使用旧的引导菜单确实可以让你回到挂载在旧的根文件系统上的根。
|
||||
|
||||
现在按照[此维基页面][2]中的说明进行操作。如果系统升级出现任何问题,你还会有一个可以重启回去的工作系统。
|
||||
|
||||
### 进一步的考虑
|
||||
|
||||
创建新的逻辑卷并将根卷的快照复制到其中的步骤可以使用通用脚本自动完成。它只需要新的逻辑卷的名称,因为现有根的大小和设备很容易确定。例如,可以输入以下命令:
|
||||
|
||||
```
$ sudo copyfs / f28
```
|
||||
|
||||
提供挂载点以进行复制可以更清楚地了解发生了什么,并且复制其他挂载点(例如 `/home`)可能很有用。
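
下面是这样一个 `copyfs` 脚本的示意性草稿,流程与上文的手工步骤一致。它未经测试,快照大小等细节均为假设,使用前务必核对卷名,否则可能覆盖错误的卷:

```
#!/bin/bash
# copyfs <挂载点> <新逻辑卷名>:把挂载点所在的 LVM 卷经快照复制到新卷
set -e
MNT=$1; NEWLV=$2
DEV=$(findmnt -n -o SOURCE "$MNT")                  # 例如 /dev/mapper/vg_sdg-f26
VG=$(lvs --noheadings -o vg_name "$DEV" | tr -d ' ')
LV=$(lvs --noheadings -o lv_name "$DEV" | tr -d ' ')
SIZE=$(lvs --noheadings -o lv_size --units g "$DEV" | tr -d ' ')

lvcreate -L "$SIZE" -n "$NEWLV" "$VG"               # 新卷与源卷等大
sync
lvcreate -s -L1G -n "${LV}_s" "$VG/$LV"             # 为源卷做一致性快照
dd if="/dev/$VG/${LV}_s" of="/dev/$VG/$NEWLV" bs=256k
lvremove -y "$VG/${LV}_s"                           # 复制完成后删除快照
```

与手工操作相比,这样的脚本能减少在卷名上犯错的机会,但第一次使用前最好先在一个测试卷上演练。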
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-lvm-upgrade-fedora/
|
||||
|
||||
作者:[Stuart D Gathman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/sdgathman/
|
||||
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
|
||||
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade
|

Logreduce:用 Python 和机器学习去除日志噪音
======
|
||||
|
||||
> Logreduce 可以通过从大量日志数据中挑选出异常来节省调试时间。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ)
|
||||
|
||||
持续集成(CI)作业会生成大量数据。当一个作业失败时,弄清楚出了什么问题可能是一个繁琐的过程,它涉及到调查日志以发现根本原因 —— 这通常只能在全部的作业输出的一小部分中找到。为了更容易地将最相关的数据与其余数据分开,可以使用先前成功运行的作业结果来训练 [Logreduce][1] 机器学习模型,以从失败的运行日志中提取异常。
|
||||
|
||||
此方法也可以应用于其他用例,例如,从 [Journald][2] 或其他系统级的常规日志文件中提取异常。
|
||||
|
||||
### 使用机器学习来降低噪音
|
||||
|
||||
典型的日志文件包含许多标称事件(“基线”)以及与开发人员相关的一些例外事件。基线可能包含随机元素,例如难以检测和删除的时间戳或唯一标识符。要删除基线事件,我们可以使用 [k-最近邻模式识别算法][3](k-NN)。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/ml-generic-workflow.png)
|
||||
|
||||
日志事件必须转换为可用于 k-NN 回归的数值。使用通用特征提取工具 [HashingVectorizer][4] 可以将该过程应用于任何类型的日志。它散列每个单词并在稀疏矩阵中对每个事件进行编码。为了进一步减少搜索空间,这个标记化过程删除了已知的随机单词,例如日期或 IP 地址。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/hashing-vectorizer.png)
|
||||
|
||||
训练模型后,k-NN 搜索可以告诉我们每个新事件与基线的距离。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/kneighbors.png)
|
||||
|
||||
这个 [Jupyter 笔记本][5] 演示了该稀疏矩阵向量的处理和图形。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/anomaly-detection-with-scikit-learn.png)
|
||||
|
||||
### Logreduce 介绍
|
||||
|
||||
Logreduce Python 软件透明地实现了这个过程。Logreduce 的最初目标是使用构建数据库来协助分析 [Zuul CI][6] 作业的失败问题,现在它已集成到 [Software Factory 开发车间][7]的作业日志处理中。
|
||||
|
||||
最简单的是,Logreduce 会比较文件或目录并删除相似的行。Logreduce 为每个源文件构建模型,并使用以下语法输出距离高于定义阈值的任何目标行:`distance | filename:line-number: line-content`。
|
||||
|
||||
```
$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log
INFO logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines)
0.244 | audit.log:19963: type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com
INFO logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines)
99.99% reduction (from 20015 lines to 1)
```
|
||||
|
||||
更高级的 Logreduce 用法可以离线训练模型以便重复使用。可以使用基线的许多变体来拟合 k-NN 搜索树。
|
||||
|
||||
```
$ logreduce dir-train audit.clf /var/log/audit/audit.log.*
INFO logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines)
DEBUG logreduce.Classifier - audit.clf: written
$ logreduce dir-run audit.clf /var/log/audit/audit.log
```
|
||||
|
||||
Logreduce 还实现了接口,以发现 Journald 时间范围(天/周/月)和 Zuul CI 作业构建历史的基线。它还可以生成 HTML 报告,该报告在一个简单的界面中将在多个文件中发现的异常进行分组。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/html-report.png)
|
||||
|
||||
### 管理基线
|
||||
|
||||
使用 k-NN 回归进行异常检测的关键是拥有一个已知良好基线的数据库,该模型使用数据库来检测偏离太远的日志行。此方法依赖于包含所有标称事件的基线,因为基线中未找到的任何内容都将报告为异常。
|
||||
|
||||
CI 作业是 k-NN 回归的重要目标,因为作业的输出通常是确定性的,之前的运行结果可以自动用作基线。 Logreduce 具有 Zuul 作业角色,可以将其用作失败的作业发布任务的一部分,以便发布简明报告(而不是完整作业的日志)。只要可以提前构建基线,该原则就可以应用于其他情况。例如,标称系统的 [SoS 报告][8] 可用于查找缺陷部署中的问题。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/baselines.png)
|
||||
|
||||
### 异常分类服务
|
||||
|
||||
下一版本的 Logreduce 引入了一种服务器模式,可以将日志处理卸载到外部服务,在外部服务中可以进一步分析该报告。它还支持导入现有报告和请求以分析 Zuul 构建。这些服务以异步方式运行分析,并具有 Web 界面以调整分数并消除误报。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/classification-interface.png)
|
||||
|
||||
已审核的报告可以作为独立数据集存档,其中包含目标日志文件和记录在一个普通的 JSON 文件中的异常行的分数。
|
||||
|
||||
### 项目路线图
|
||||
|
||||
Logreduce 已经能有效使用,但是有很多机会来改进该工具。未来的计划包括:
|
||||
|
||||
* 策划在日志文件中发现的许多带注释的异常,并生成一个公共域数据集以进行进一步研究。日志文件中的异常检测是一个具有挑战性的主题,并且有一个用于测试新模型的通用数据集将有助于识别新的解决方案。
|
||||
* 重复使用带注释的异常模型来优化所报告的距离。例如,当用户通过将距离设置为零来将日志行标记为误报时,模型可能会降低未来报告中这些日志行的得分。
|
||||
* 对存档异常取指纹特征以检测新报告何时包含已知的异常。因此,该服务可以通知用户该作业遇到已知问题,而不是报告异常的内容。解决问题后,该服务可以自动重新启动该作业。
|
||||
* 支持更多基准发现接口,用于 SOS 报告、Jenkins 构建、Travis CI 等目标。
|
||||
|
||||
如果你有兴趣参与此项目,请通过 #log-classify Freenode IRC 频道与我们联系。欢迎反馈!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning
|
||||
|
||||
作者:[Tristan de Cacqueray][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tristanc
|
||||
[1]: https://pypi.org/project/logreduce/
|
||||
[2]: http://man7.org/linux/man-pages/man8/systemd-journald.service.8.html
|
||||
[3]: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
|
||||
[4]: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html
|
||||
[5]: https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev/blob/master/datasets/notebook/anomaly-detection-with-scikit-learn.ipynb
|
||||
[6]: https://zuul-ci.org
|
||||
[7]: https://www.softwarefactory-project.io
|
||||
[8]: https://sos.readthedocs.io/en/latest/
|
||||
[9]: https://www.openstack.org/summit/berlin-2018/summit-schedule/speakers/4307
|
||||
[10]: https://www.openstack.org/summit/berlin-2018/
|

探索 Linux 内核:Kconfig/kbuild 的秘密
======
|
||||
|
||||
> 深入理解 Linux 配置/构建系统是如何工作的。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/15/093935dvyk5znoaooaooba.jpg)
|
||||
|
||||
自从 Linux 内核代码迁移到 Git 以来,Linux 内核配置/构建系统(也称为 Kconfig/kbuild)已存在很长时间了。然而,作为支持基础设施,它很少成为人们关注的焦点;甚至在日常工作中使用它的内核开发人员也从未真正思考过它。
|
||||
|
||||
为了探索如何编译 Linux 内核,本文将深入介绍 Kconfig/kbuild 内部的过程,解释如何生成 `.config` 文件和 `vmlinux`/`bzImage` 文件,并介绍一个巧妙的依赖性跟踪技巧。
|
||||
|
||||
### Kconfig
|
||||
|
||||
构建内核的第一步始终是配置。Kconfig 有助于使 Linux 内核高度模块化和可定制。Kconfig 为用户提供了许多配置目标:
|
||||
|
||||
|
||||
| 配置目标 | 解释 |
|
||||
| ---------------- | --------------------------------------------------------- |
|
||||
| `config` | 利用命令行程序更新当前配置 |
|
||||
| `nconfig` | 利用基于 ncurses 菜单的程序更新当前配置 |
|
||||
| `menuconfig` | 利用基于菜单的程序更新当前配置 |
|
||||
| `xconfig` | 利用基于 Qt 的前端程序更新当前配置 |
|
||||
| `gconfig` | 利用基于 GTK+ 的前端程序更新当前配置 |
|
||||
| `oldconfig` | 基于提供的 `.config` 更新当前配置 |
|
||||
| `localmodconfig` | 更新当前配置,禁用没有载入的模块 |
|
||||
| `localyesconfig` | 更新当前配置,转换本地模块到核心 |
|
||||
| `defconfig` | 带有来自架构提供的 `defconfig` 默认值的新配置 |
|
||||
| `savedefconfig` | 保存当前配置为 `./defconfig`(最小配置) |
|
||||
| `allnoconfig` | 所有选项回答为 `no` 的新配置 |
|
||||
| `allyesconfig` | 所有选项回答为 `yes` 的新配置 |
|
||||
| `allmodconfig` | 尽可能选择所有模块的新配置 |
|
||||
| `alldefconfig` | 所有符号(选项)设置为默认值的新配置 |
|
||||
| `randconfig` | 所有选项随机选择的新配置 |
|
||||
| `listnewconfig` | 列出新选项 |
|
||||
| `olddefconfig` | 同 `oldconfig` 一样,但设置新符号(选项)为其默认值而无须提问 |
|
||||
| `kvmconfig` | 启用支持 KVM 访客内核模块的附加选项 |
|
||||
| `xenconfig` | 启用支持 xen 的 dom0 和访客内核模块的附加选项 |
|
||||
| `tinyconfig` | 配置尽可能小的内核 |
|
||||
|
||||
我认为 `menuconfig` 是这些目标中最受欢迎的。这些目标由不同的<ruby>主程序<rt>host program</rt></ruby>处理,这些程序由内核提供并在内核构建期间构建。一些目标有 GUI(为了方便用户),而大多数没有。与 Kconfig 相关的工具和源代码主要位于内核源代码中的 `scripts/kconfig/` 下。从 `scripts/kconfig/Makefile` 中可以看到,这里有几个主程序,包括 `conf`、`mconf` 和 `nconf`。除了 `conf` 之外,每个都负责一个基于 GUI 的配置目标,因此,`conf` 处理大多数目标。
|
||||
|
||||
从逻辑上讲,Kconfig 的基础结构有两部分:一部分实现一种[新语言][1]来定义配置项(参见内核源代码下的 Kconfig 文件),另一部分解析 Kconfig 语言并处理配置操作。
|
||||
|
||||
大多数配置目标具有大致相同的内部过程(如下所示):
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)
|
||||
|
||||
请注意,所有配置项都具有默认值。
|
||||
|
||||
第一步读取源代码根目录下的 Kconfig 文件,构建初始配置数据库;然后它根据如下优先级读取现有配置文件来更新初始数据库:
|
||||
|
||||
1. `.config`
|
||||
2. `/lib/modules/$(shell,uname -r)/.config`
|
||||
3. `/etc/kernel-config`
|
||||
4. `/boot/config-$(shell,uname -r)`
|
||||
5. `ARCH_DEFCONFIG`
|
||||
6. `arch/$(ARCH)/defconfig`
|
||||
|
||||
如果你通过 `menuconfig` 进行基于 GUI 的配置或通过 `oldconfig` 进行基于命令行的配置,则根据你的自定义更新数据库。最后,该配置数据库被转储到 `.config` 文件中。
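
举例来说,一个常见的配置流程可能是这样的(仅为示意,所用目标均来自上表,具体交互因内核版本而异):

```
$ make defconfig       # 基于架构提供的默认值生成新的 .config
$ make menuconfig      # 在菜单界面中按需调整配置项
$ make listnewconfig   # 列出 .config 中尚未出现的新选项
$ make olddefconfig    # 为这些新符号取默认值,无须逐一回答
```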
|
||||
|
||||
但 `.config` 文件还不是内核构建的最终素材;这就是 `syncconfig` 目标存在的原因。`syncconfig` 曾经是一个名为 `silentoldconfig` 的配置目标,但它并没有做到其旧名称所说的事情,所以被重命名了。此外,因为它仅供内部使用(不面向用户),所以它没有出现在上述列表中。
|
||||
|
||||
以下是 `syncconfig` 的作用:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncconfig.png)
|
||||
|
||||
`syncconfig` 将 `.config` 作为输入并输出许多其他文件,这些文件分为三类:
|
||||
|
||||
* `auto.conf` & `tristate.conf` 用于 makefile 文本处理。例如,你可以在组件的 makefile 中看到这样的语句:`obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o`。
|
||||
* `autoconf.h` 用于 C 语言的源文件。
|
||||
* `include/config/` 下空的头文件用于 kbuild 期间的配置依赖性跟踪。下面会解释。
|
||||
|
||||
配置完成后,我们将知道哪些文件和代码片段未编译。
|
||||
|
||||
### kbuild
|
||||
|
||||
组件式构建,称为*递归 make*,是 GNU `make` 管理大型项目的常用方法。kbuild 是递归 make 的一个很好的例子。通过将源文件划分为不同的模块/组件,每个组件都由其自己的 makefile 管理。当你开始构建时,顶级 makefile 以正确的顺序调用每个组件的 makefile、构建组件,并将它们收集到最终的执行程序中。
|
||||
|
||||
kbuild 涉及以下几类 makefile:
|
||||
|
||||
* `Makefile` 位于源代码根目录的顶级 makefile。
|
||||
* `.config` 是内核配置文件。
|
||||
* `arch/$(ARCH)/Makefile` 是架构的 makefile,它用于补充顶级 makefile。
|
||||
* `scripts/Makefile.*` 描述所有的 kbuild makefile 的通用规则。
|
||||
* 最后,大约有 500 个 kbuild makefile。
|
||||
|
||||
顶级 makefile 会将架构 makefile 包含进去,读取 `.config` 文件,进入子目录,在 `scripts/Makefile.*` 中定义的例程的帮助下,对每个组件的 makefile 调用 `make`,构建出各个中间对象,并将所有中间对象链接为 `vmlinux`。内核文档 [Documentation/kbuild/makefiles.txt][2] 描述了这些 makefile 的方方面面。
|
||||
|
||||
作为一个例子,让我们看看如何在 x86-64 上生成 `vmlinux`:
|
||||
|
||||
![vmlinux overview][4]
|
||||
|
||||
(此插图基于 Richard Y. Steven 的[博客][5],有所更新,并经作者许可使用。)
|
||||
|
||||
进入 `vmlinux` 的所有 `.o` 文件首先进入它们自己的 `built-in.a`,它通过变量 `KBUILD_VMLINUX_INIT`、`KBUILD_VMLINUX_MAIN`、`KBUILD_VMLINUX_LIBS` 表示,然后被收集到 `vmlinux` 文件中。
|
||||
|
||||
在下面这个简化的 makefile 代码的帮助下,了解如何在 Linux 内核中实现递归 make:
|
||||
|
||||
```
|
||||
# In top Makefile
|
||||
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
|
||||
+$(call if_changed,link-vmlinux)
|
||||
|
||||
# Variable assignments
|
||||
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
|
||||
|
||||
export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
|
||||
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
|
||||
export KBUILD_VMLINUX_LIBS := $(libs-y1)
|
||||
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
|
||||
|
||||
init-y := init/
|
||||
drivers-y := drivers/ sound/ firmware/
|
||||
net-y := net/
|
||||
libs-y := lib/
|
||||
core-y := usr/
|
||||
virt-y := virt/
|
||||
|
||||
# Transform to corresponding built-in.a
|
||||
init-y := $(patsubst %/, %/built-in.a, $(init-y))
|
||||
core-y := $(patsubst %/, %/built-in.a, $(core-y))
|
||||
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
|
||||
net-y := $(patsubst %/, %/built-in.a, $(net-y))
|
||||
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
|
||||
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
|
||||
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))
|
||||
|
||||
# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
|
||||
# are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs
|
||||
# will be executed. Refer "4.6 Phony Targets" of `info make`
|
||||
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
|
||||
|
||||
# Variable vmlinux-dirs is the directory part of each built-in.a
|
||||
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
|
||||
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
|
||||
$(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
|
||||
|
||||
# The entry of recursive make
|
||||
$(vmlinux-dirs):
|
||||
$(Q)$(MAKE) $(build)=$@ need-builtin=1
|
||||
```
|
||||
|
||||
递归 make 的<ruby>配方<rt>recipe</rt></ruby>被扩展开是这样的:
|
||||
|
||||
```
|
||||
make -f scripts/Makefile.build obj=init need-builtin=1
|
||||
```
|
||||
|
||||
这意味着 `make` 将进入 `scripts/Makefile.build` 以继续构建每个 `built-in.a`。在 `scripts/link-vmlinux.sh` 的帮助下,`vmlinux` 文件最终生成于源代码根目录下。
|
||||
|
||||
#### vmlinux 与 bzImage 对比
|
||||
|
||||
许多 Linux 内核开发人员可能不清楚 `vmlinux` 和 `bzImage` 之间的关系。例如,这是他们在 x86-64 中的关系:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)
|
||||
|
||||
源代码根目录下的 `vmlinux` 被剥离、压缩后,放入 `piggy.S`,然后与其他对等对象链接到 `arch/x86/boot/compressed/vmlinux`。同时,在 `arch/x86/boot` 下生成一个名为 `setup.bin` 的文件。可能有一个可选的第三个文件,它带有重定位信息,具体取决于 `CONFIG_X86_NEED_RELOCS` 的配置。
|
||||
|
||||
由内核提供的称为 `build` 的宿主程序将这两个(或三个)部分构建到最终的 `bzImage` 文件中。
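
如果想亲自观察这一过程,可以在构建完成后用 `file` 查看这几个产物(路径来自上文;`file` 的具体输出因环境而异,此处仅为示意):

```
$ file vmlinux                           # 源代码根目录下未压缩的内核 ELF 映像
$ file arch/x86/boot/compressed/vmlinux  # 压缩载荷与解压代码链接后的中间映像
$ file arch/x86/boot/bzImage             # 最终可引导的 bzImage
```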
|
||||
|
||||
#### 依赖跟踪
|
||||
|
||||
kbuild 跟踪三种依赖关系:
|
||||
|
||||
1. 所有必备文件(`*.c` 和 `*.h`)
|
||||
2. 所有必备文件中使用的 `CONFIG_` 选项
|
||||
3. 用于编译该目标的命令行依赖项
|
||||
|
||||
第一个很容易理解,但第二个和第三个呢?内核开发人员经常会看到如下代码:
|
||||
|
||||
```
|
||||
#ifdef CONFIG_SMP
|
||||
__boot_cpu_id = cpu;
|
||||
#endif
|
||||
```
|
||||
|
||||
当 `CONFIG_SMP` 改变时,这段代码应该重新编译。编译源文件的命令行也很重要,因为不同的命令行可能会导致不同的目标文件。
|
||||
|
||||
当 `.c` 文件通过 `#include` 指令使用头文件时,你需要编写如下规则:
|
||||
|
||||
```
|
||||
main.o: defs.h
|
||||
recipe...
|
||||
```
|
||||
|
||||
管理大型项目时,需要大量的这些规则;把它们全部写下来会很乏味无聊。幸运的是,大多数现代 C 编译器都可以通过查看源文件中的 `#include` 行来为你编写这些规则。对于 GNU 编译器集合(GCC),只需添加一个命令行参数:`-MD depfile`
|
||||
|
||||
```
|
||||
# In scripts/Makefile.lib
|
||||
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
|
||||
-include $(srctree)/include/linux/compiler_types.h \
|
||||
$(__c_flags) $(modkern_cflags) \
|
||||
$(basename_flags) $(modname_flags)
|
||||
```
|
||||
|
||||
这将生成一个 `.d` 文件,内容如下:
|
||||
|
||||
```
|
||||
init_task.o: init/init_task.c include/linux/kconfig.h \
|
||||
include/generated/autoconf.h include/linux/init_task.h \
|
||||
include/linux/rcupdate.h include/linux/types.h \
|
||||
...
|
||||
```
|
||||
|
||||
接下来,宿主程序 [fixdep][6] 将 depfile 文件和命令行作为输入,处理其余两种依赖项,并以 makefile 格式输出一个 `.<target>.cmd` 文件,其中记录了命令行和目标的所有先决条件(包括配置)。它看起来像这样:
|
||||
|
||||
```
|
||||
# The command line used to compile the target
|
||||
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
|
||||
...
|
||||
# The dependency files
|
||||
deps_init/init_task.o := \
|
||||
$(wildcard include/config/posix/timers.h) \
|
||||
$(wildcard include/config/arch/task/struct/on/stack.h) \
|
||||
$(wildcard include/config/thread/info/in/task.h) \
|
||||
...
|
||||
include/uapi/linux/types.h \
|
||||
arch/x86/include/uapi/asm/types.h \
|
||||
include/uapi/asm-generic/types.h \
|
||||
...
|
||||
```
|
||||
|
||||
在递归 make 中,`.<target>.cmd` 文件将被包含,以提供所有依赖关系信息并帮助决定是否重建目标。
|
||||
|
||||
这背后的秘密是:`fixdep` 会解析 depfile(`.d` 文件),再解析其中的所有依赖文件,在文本中搜索所有 `CONFIG_` 字符串,将它们转换为相应的空头文件,并把这些头文件添加到目标的先决条件中。每次配置更改时,相应的空头文件也会被更新,因此 kbuild 可以检测到该更改并重建依赖于它的目标。因为命令行也被记录了下来,所以很容易比较上一次和当前的编译参数。
|
||||
|
||||
### 展望未来
|
||||
|
||||
Kconfig/kbuild 在很长一段时间内没有什么变化,直到新的维护者 Masahiro Yamada 于 2017 年初加入,现在 kbuild 正在再次积极开发中。如果你不久后看到与本文中的内容不同的内容,请不要感到惊讶。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/kbuild-and-kconfig
|
||||
|
||||
作者:[Cao Jin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pinocchio
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt
|
||||
[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt
|
||||
[3]: https://opensource.com/file/411516
|
||||
[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview)
|
||||
[5]: https://blog.csdn.net/richardysteven/article/details/52502734
|
||||
[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c
|
@ -0,0 +1,69 @@
|
||||
使用 MacSVG 创建 SVG 动画
|
||||
======
|
||||
|
||||
> 开源 SVG:墙上的魔法字。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/18/000809mzl1wb1ww754z455.jpg)
|
||||
|
||||
新巴比伦的摄政王[伯沙撒][1]没有注意到他在盛宴期间神奇地[书写在墙上的文字][2]。但是,如果他在公元前 539 年有一台笔记本电脑和良好的互联网连接,他可能会通过在浏览器上阅读 SVG 来避开那些讨厌的波斯人。
|
||||
|
||||
出现在网页上的动画文本和对象是建立用户兴趣和参与度的好方法。有几种方法可以实现这一点,例如视频嵌入、动画 GIF 或幻灯片 —— 但你也可以使用[可缩放矢量图形(SVG)][3]。
|
||||
|
||||
SVG 图像与 JPG 不同,因为它可以缩放而不会丢失其分辨率。矢量图像是由点而不是像素创建的,所以无论它放大到多大,它都不会失去分辨率或像素化。充分利用可缩放的静态图像的一个例子是网站的徽标。
|
||||
|
||||
### 动起来,动起来
|
||||
|
||||
你可以使用多种绘图程序创建 SVG 图像,包括开源的 [Inkscape][4] 和 Adobe Illustrator。让你的图像“能动起来”需要更多的努力。幸运的是,有一些开源解决方案甚至可以引起伯沙撒的注意。
|
||||
|
||||
[MacSVG][5] 是一款可以让你的图像动起来的工具。你可以在 [GitHub][6] 上找到源代码。
|
||||
|
||||
根据其[官网][5]说,MacSVG 由阿肯色州康威的 Douglas Ward 开发,是一个“用于设计 HTML5 SVG 艺术和动画的开源 Mac OS 应用程序”。
|
||||
|
||||
我想使用 MacSVG 来创建一个动画签名。我承认我发现这个过程有点令人困惑,并且在我第一次尝试创建一个实际的动画 SVG 图像时失败了。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)
|
||||
|
||||
重要的是首先要了解要展示的书法内容实际写的是什么。
|
||||
|
||||
动画文字背后的属性是 [stroke-dasharray][7]。将该术语分成三个单词有助于解释正在发生的事情:“stroke” 是指用笔(无论是物理的笔还是数字化笔)制作的线条或笔画。“dash” 意味着将笔划分解为一系列折线。“array” 意味着将整个东西生成为数组。这是一个简单的概述,但它可以帮助我理解应该发生什么以及为什么。
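
作为示意,下面是一个极简的 SVG 片段(并非文章中的签名动画,线条长度和时长均为假设值),它用 `animate` 元素对一条直线的 `stroke-dasharray` 做动画,模拟笔画被逐渐画出来的效果:

```
<svg xmlns="http://www.w3.org/2000/svg" width="320" height="60">
  <!-- 一条长 300 单位的直线;dasharray 从“全空”过渡到“全实”,模拟书写 -->
  <line x1="10" y1="30" x2="310" y2="30" stroke="black" stroke-width="4">
    <animate attributeName="stroke-dasharray"
             values="0,300;300,0" dur="2s" repeatCount="indefinite" />
  </line>
</svg>
```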
|
||||
|
||||
使用 MacSVG,你可以导入图形(.PNG)并使用钢笔工具描绘书写路径。我使用了草书来表示我的名字。然后,只需应用该属性来让书法动画起来、增加和减少笔划的粗细、改变其颜色等等。完成后,动画的书法将导出为 .SVG 文件,并可以在网络上使用。除书写外,MacSVG 还可用于许多不同类型的 SVG 动画。
|
||||
|
||||
### 在 WordPress 中书写
|
||||
|
||||
我准备在我的 [WordPress][8] 网站上传和分享我的 SVG 示例,但我发现 WordPress 不允许进行 SVG 媒体导入。幸运的是,我找到了一个方便的插件:Benbodhi 的 [SVG 支持][9]插件允许快速、轻松地导入我的 SVG,就像我将 JPG 导入媒体库一样。我能够在世界各地向巴比伦人展示我[写在墙上的魔法字][10]。
|
||||
|
||||
我在 [Brackets][11] 中打开了这个 SVG 的源代码,结果如下:
|
||||
|
||||
```
|
||||
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
|
||||
```
|
||||
|
||||
你会使用 MacSVG 做什么?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation
|
||||
|
||||
作者:[Jeff Macharyas][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rikki-endsley
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Belshazzar
|
||||
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
|
||||
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
|
||||
[4]: https://inkscape.org/
|
||||
[5]: https://macsvg.org/
|
||||
[6]: https://github.com/dsward2/macSVG
|
||||
[7]: https://gist.github.com/mbostock/5649592
|
||||
[8]: https://macharyas.com/
|
||||
[9]: https://wordpress.org/plugins/svg-support/
|
||||
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
|
||||
[11]: http://brackets.io/
|
@ -0,0 +1,150 @@
|
||||
DF-SHOW:一个基于老式 DOS 应用的终端文件管理器
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-720x340.png)
|
||||
|
||||
如果你曾经使用过老牌的 MS-DOS,你可能已经使用或听说过 DF-EDIT。DF-EDIT,意即 **D**irectory **F**ile **Edit**,它是一个鲜为人知的 DOS 文件管理器,最初由 Larry Kroeker 为 MS-DOS 和 PC-DOS 系统而编写。它用于在 MS-DOS 和 PC-DOS 系统中显示给定目录或文件的内容。今天,我偶然发现了一个名为 DF-SHOW 的类似实用程序(**D**irectory **F**ile **Show**),这是一个类 Unix 操作系统的终端文件管理器。它是鲜为人知的 DF-EDIT 文件管理器的 Unix 重写版本,其基于 1986 年发布的 DF-EDIT 2.3d。DF-SHOW 完全是自由开源的,并在 GPLv3 下发布。
|
||||
|
||||
DF-SHOW 可以:
|
||||
|
||||
* 列出目录的内容,
|
||||
* 查看文件,
|
||||
* 使用你的默认文件编辑器编辑文件,
|
||||
* 将文件复制到不同位置,
|
||||
* 重命名文件,
|
||||
* 删除文件,
|
||||
* 在 DF-SHOW 界面中创建新目录,
|
||||
* 更新文件权限,所有者和组,
|
||||
* 搜索与搜索词匹配的文件,
|
||||
* 启动可执行文件。
|
||||
|
||||
### DF-SHOW 用法
|
||||
|
||||
DF-SHOW 实际上是两个程序的结合,名为 `show` 和 `sf`。
|
||||
|
||||
#### Show 命令
|
||||
|
||||
`show` 程序(类似于 `ls` 命令)用于显示目录的内容、创建新目录、重命名和删除文件/文件夹、更新权限、搜索文件等。
|
||||
|
||||
要查看目录中的内容列表,请使用以下命令:
|
||||
|
||||
```
|
||||
$ show <directory path>
|
||||
```
|
||||
|
||||
示例:
|
||||
|
||||
```
|
||||
$ show dfshow
|
||||
```
|
||||
|
||||
这里,`dfshow` 是一个目录。如果在未指定目录路径的情况下调用 `show` 命令,它将显示当前目录的内容。
|
||||
|
||||
这是 DF-SHOW 默认界面的样子。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-1.png)
|
||||
|
||||
如你所见,DF-SHOW 的界面不言自明。
|
||||
|
||||
在顶部栏上,你会看到可用的选项列表,例如复制、删除、编辑、修改等。
|
||||
|
||||
完整的可用选项列表如下:
|
||||
|
||||
* `C`opy(复制)
* `D`elete(删除)
* `E`dit(编辑)
* `H`idden(隐藏)
* `M`odify(修改)
* `Q`uit(退出)
* `R`ename(重命名)
* `S`how(显示)
* h`U`nt(文件内搜索)
* e`X`ec(执行)
* `R`un command(运行命令)
* `E`dit file(编辑文件)
* `H`elp(帮助)
* `M`ake dir(创建目录)
* `S`how dir(显示目录)
|
||||
|
||||
在每个选项中,有一个字母以大写粗体标记。只需按下该字母即可执行相应的操作。例如,要重命名文件,只需按 `R` 并键入新名称,然后按回车键重命名所选项目。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-2.png)
|
||||
|
||||
要显示所有选项或取消操作,只需按 `ESC` 键即可。
|
||||
|
||||
此外,你将在 DF-SHOW 界面的底部看到一堆功能键,以浏览目录的内容。
|
||||
|
||||
* `UP` / `DOWN` 箭头或 `F1` / `F2` - 上下移动(一次一行),
|
||||
* `PgUp` / `PgDn` - 一次移动一页,
|
||||
* `F3` / `F4` - 立即转到列表的顶部和底部,
|
||||
* `F5` - 刷新,
|
||||
* `F6` - 标记/取消标记文件(标记的文件将在它们前面用 `*` 表示),
|
||||
* `F7` / `F8` - 一次性标记/取消标记所有文件,
|
||||
* `F9` - 按以下顺序对列表排序 - 日期和时间、名称、大小。
|
||||
|
||||
按 `h` 了解有关 `show` 命令及其选项的更多详细信息。
|
||||
|
||||
要退出 DF-SHOW,只需按 `q` 即可。
|
||||
|
||||
#### SF 命令
|
||||
|
||||
`sf` (显示文件)用于显示文件的内容。
|
||||
|
||||
```
|
||||
$ sf <file>
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-3.png)
|
||||
|
||||
按 `h` 了解更多 `sf` 命令及其选项。要退出,请按 `q`。
|
||||
|
||||
想试试看?很好,让我们继续在 Linux 系统上安装 DF-SHOW,如下所述。
|
||||
|
||||
### 安装 DF-SHOW
|
||||
|
||||
DF-SHOW 在 [AUR][1] 中可用,因此你可以使用 AUR 程序(如 [yay][2])在任何基于 Arch 的系统上安装它。
|
||||
|
||||
```
|
||||
$ yay -S dfshow
|
||||
```
|
||||
|
||||
在 Ubuntu 及其衍生版上:
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository ppa:ian-hawdon/dfshow
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install dfshow
|
||||
```
|
||||
|
||||
在其他 Linux 发行版上,你可以从源代码编译和构建它,如下所示。
|
||||
|
||||
```
|
||||
$ git clone https://github.com/roberthawdon/dfshow
|
||||
$ cd dfshow
|
||||
$ ./bootstrap
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
DF-SHOW 项目的作者只重写了 DF-EDIT 实用程序的一些应用程序。由于源代码可以在 GitHub 上免费获得,因此你可以添加更多功能、改进代码并提交或修复错误(如果有的话)。它仍处于 beta 阶段,但功能齐全。
|
||||
|
||||
你试过它吗?如果试过,觉得如何?请在下面的评论部分告诉我们你的体验。
|
||||
|
||||
不管如何,希望这有用。还有更多好东西。敬请关注!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://aur.archlinux.org/packages/dfshow/
|
||||
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
@ -0,0 +1,285 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: subject: (How To Customize The GNOME 3 Desktop?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
[#]: url: (https://linux.cn/article-11256-1.html)
|
||||
|
||||
如何自定义 GNOME 3 桌面?
|
||||
======
|
||||
|
||||
我们收到很多来自用户的电子邮件,要我们写一篇关于 GNOME 3 桌面自定义的文章,但是,我们没有时间来写这个主题。
|
||||
|
||||
很长时间以来,我一直在我的主力笔记本电脑上使用 Ubuntu 操作系统。渐感无聊之后,我想测试一些与 Arch Linux 相关的其它发行版。
|
||||
|
||||
我比较喜欢 Manjaro,于是在我的笔记本电脑上安装了使用 GNOME 3 桌面的 Manjaro 18.0。
|
||||
|
||||
我按照自己的想法自定义了桌面,所以想抓住这个机会写下这篇详细的文章,以帮助其他人。
|
||||
|
||||
这篇文章可以帮助其他人轻松地自定义自己的桌面。
|
||||
|
||||
我不打算囊括我所有的自定义,只会加入那些对 Linux 桌面用户来说有用且必要的内容。
|
||||
|
||||
如果你觉得这篇文章中缺少某些调整,请在评论区指出来,这对其它用户会非常有用。
|
||||
|
||||
### 1) 如何在 GNOME 3 桌面中启动活动概述?
|
||||
|
||||
按下 `Super` 键,或单击左上角的“活动”按钮,即可打开活动概述,它会显示所有正在运行的应用程序和打开的窗口。
|
||||
|
||||
它允许你来启动一个新的应用程序、切换窗口,和在工作空间之间移动窗口。
|
||||
|
||||
选择一个窗口、应用程序或工作区间,或者按 `Super` 键或 `Esc` 键,即可退出活动概述。
|
||||
|
||||
![][2]
|
||||
|
||||
*活动概述屏幕截图*
|
||||
|
||||
### 2) 如何在 GNOME 3 桌面中重新调整窗口大小?
|
||||
|
||||
通过下面的组合键来将启动的窗口最大化、取消最大化,并吸附到屏幕的一侧(左侧或右侧)。
|
||||
|
||||
* `Super Key+下箭头`:取消最大化窗口。
* `Super Key+上箭头`:最大化窗口。
* `Super Key+右箭头`:使窗口填充屏幕右半边。
* `Super Key+左箭头`:使窗口填充屏幕左半边。
|
||||
|
||||
|
||||
![][3]
|
||||
|
||||
*使用 `Super Key+下箭头` 来取消最大化窗口。*
|
||||
|
||||
![][4]
|
||||
|
||||
*使用 `Super Key+上箭头` 来最大化窗口。*
|
||||
|
||||
|
||||
![][5]
|
||||
|
||||
*使用 `Super Key+右箭头` 使窗口填充屏幕右半边。*
|
||||
|
||||
![][6]
|
||||
|
||||
*使用 `Super Key+左箭头` 使窗口填充屏幕左半边。*
|
||||
|
||||
这个功能可以让你一次查看两个应用程序,也就是“拆分屏幕”。
|
||||
|
||||
![][7]
|
||||
|
||||
### 3) 如何在 GNOME 3 桌面中显示应用程序?
|
||||
|
||||
在 Dash 中,单击“显示应用程序网格”按钮来显示在你的系统上的所有已安装的应用程序。
|
||||
|
||||
![][8]
|
||||
|
||||
### 4) 如何在 GNOME 3 桌面中的 Dash 中添加应用程序?
|
||||
|
||||
为加速你的日常活动,你可能想要把频繁使用的应用程序添加到 Dash 中,或拖拽应用程序启动器到 Dash 中。
|
||||
|
||||
它将允许你直接启动你的收藏夹中的应用程序,而不用先去搜索应用程序。为做到这样,在应用程序上简单地右击,并使用选项“添加到收藏夹”。
|
||||
|
||||
![][9]
|
||||
|
||||
为从 Dash 中移除一个应用程序启动器(收藏的程序),要么从 Dash 中拖拽应用程序到网格按钮,或者在应用程序上简单地右击,并使用选项“从收藏夹中移除”。
|
||||
|
||||
![][10]
|
||||
|
||||
### 5) 如何在 GNOME 3 桌面中的工作区间之间切换?
|
||||
|
||||
工作区间允许你将窗口分组。它可以帮助你恰当地分隔工作。如果你正在做多项任务,并且想把每项任务及其相关的窗口单独分组,那么它会非常便利,是一个完美的选项。
|
||||
|
||||
你可以用两种方法切换工作区间,打开活动概述,并从右手边选择一个工作区间,或者使用下面的组合键。
|
||||
|
||||
* 使用 `Ctrl+Alt+Up` 切换到上一个工作区间。
|
||||
* 使用 `Ctrl+Alt+Down` 切换到下一个工作区间。
|
||||
|
||||
![][11]
|
||||
|
||||
### 6) 如何在 GNOME 3 桌面中的应用程序之间切换 (应用程序切换器) ?
|
||||
|
||||
使用 `Alt+Tab` 或 `Super+Tab` 可以在应用程序之间切换,按下它们就会启动应用程序切换器。
|
||||
|
||||
启动后,只需按住 `Alt` 或 `Super` 键,再按 `Tab` 键,即可从左到右依次切换到下一个应用程序。
|
||||
|
||||
### 7) 如何在 GNOME 3 桌面中添加用户姓名到顶部面板?
|
||||
|
||||
如果你想添加你的用户姓名到顶部面板,那么安装下面的[添加用户姓名到顶部面板][12] GNOME 扩展。
|
||||
|
||||
![][13]
|
||||
|
||||
### 8) 如何在 GNOME 3 桌面中添加微软 Bing 的桌面背景?
|
||||
|
||||
安装下面的 [Bing 桌面背景更换器][14] GNOME shell 扩展,来每天更改你的桌面背景为微软 Bing 的桌面背景。
|
||||
|
||||
![][15]
|
||||
|
||||
### 9) 如何在 GNOME 3 桌面中启用夜光?
|
||||
|
||||
夜光应用程序是著名的应用程序之一,它通过在日落后把你的屏幕从蓝光调成暗黄色,来减轻眼睛疲劳。
|
||||
|
||||
它在智能手机上也可用。相同目标的其它已知应用程序是 flux 和 [redshift][16]。
|
||||
|
||||
为启用这个功能,导航到**系统设置** >> **设备** >> **显示**,并打开夜光。
|
||||
|
||||
![][17]
|
||||
|
||||
在它启用后,状态图标将被放置到顶部面板上。
|
||||
|
||||
![][18]
|
||||
|
||||
### 10) 如何在 GNOME 3 桌面中显示电池百分比?
|
||||
|
||||
电池百分比将向你精确地显示电池使用情况。为启用这个功能,遵循下面的步骤。
|
||||
|
||||
启动 GNOME Tweaks >> **顶部栏** >> **电池百分比** ,并打开它。
|
||||
|
||||
![][19]
|
||||
|
||||
在修改后,你能够在顶部面板上看到电池百分比图标。
|
||||
|
||||
![][20]
|
||||
|
||||
### 11) 如何在 GNOME 3 桌面中启用鼠标右键单击?
|
||||
|
||||
在 GNOME 3 桌面环境中,鼠标右键单击默认是禁用的。为启用这个功能,遵循下面的步骤。
|
||||
|
||||
启动 GNOME Tweaks >> **键盘和鼠标** >> 鼠标点击硬件仿真,并选择“区域”选项。
|
||||
|
||||
![][21]
|
||||
|
||||
### 12) 如何在 GNOME 3 桌面中启用单击最小化?
|
||||
|
||||
启用单击最小化功能,这将帮助我们最小化打开的窗口,而不必使用最小化选项。
|
||||
|
||||
```
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
|
||||
```
|
||||
|
||||
### 13) 如何在 GNOME 3 桌面中自定义 Dock ?
|
||||
|
||||
如果你想更改你的 Dock,类似于 Deepin 桌面或 Mac 桌面,那么使用下面的一组命令。
|
||||
|
||||
```
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50
|
||||
```
|
||||
|
||||
![][22]
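
如果修改后不满意,可以用 `gsettings reset` 把这些键恢复为默认值,例如:

```
$ gsettings reset org.gnome.shell.extensions.dash-to-dock dock-position
$ gsettings reset org.gnome.shell.extensions.dash-to-dock extend-height
$ gsettings reset org.gnome.shell.extensions.dash-to-dock transparency-mode
$ gsettings reset org.gnome.shell.extensions.dash-to-dock dash-max-icon-size
```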
|
||||
|
||||
### 14) 如何在 GNOME 3桌面中显示桌面?
|
||||
|
||||
默认 `Super 键 + D` 快捷键不能显示你的桌面。为配置这种情况,遵循下面的步骤。
|
||||
|
||||
设置 >> **设备** >> **键盘** >> 单击“导航”下的 **隐藏所有普通窗口**,然后按 `Super 键 + D`,最后按 `设置` 按钮来启用它。
|
||||
|
||||
![][23]
|
||||
|
||||
### 15) 如何自定义日期和时间格式?
|
||||
|
||||
GNOME 3 默认用 `Sun 04:48` 的格式来显示日期和时间。它并不清晰易懂,如果你想获得以下格式的输出:`Sun Dec 2 4:49 AM` ,遵循下面的步骤。
|
||||
|
||||
**对于日期修改:** 打开 GNOME Tweaks >> **顶部栏** ,并在时钟下启用“星期”选项。
|
||||
|
||||
![][24]
|
||||
|
||||
**对于时间修改:** 设置 >> **详细信息** >> **日期和时间**,然后在时间格式中选择 `AM/PM` 选项。
|
||||
|
||||
![][25]
|
||||
|
||||
在修改后,你能够看到与下面相同的日期和时间格式。
|
||||
|
||||
![][26]
|
||||
|
||||
### 16) 如何在启动程序中永久地禁用不使用的服务?
|
||||
|
||||
就我来说,我不使用 **蓝牙** & **cups(打印机服务)**。因此,在我的笔记本电脑上禁用这些服务。为在基于 Arch 的系统上禁用服务,使用 [Pacman 软件包管理器][27]。
|
||||
|
||||
对于蓝牙:
|
||||
|
||||
```
|
||||
$ sudo systemctl stop bluetooth.service
|
||||
$ sudo systemctl disable bluetooth.service
|
||||
$ sudo systemctl mask bluetooth.service
|
||||
$ systemctl status bluetooth.service
|
||||
```
|
||||
|
||||
对于 cups:
|
||||
|
||||
```
|
||||
$ sudo systemctl stop org.cups.cupsd.service
|
||||
$ sudo systemctl disable org.cups.cupsd.service
|
||||
$ sudo systemctl mask org.cups.cupsd.service
|
||||
$ systemctl status org.cups.cupsd.service
|
||||
```
|
||||
|
||||
最后,使用以下命令验证这些服务是否已在启动项中被禁用。如果你想进一步确认,可以重启一次再检查一遍。导航到以下链接来了解更多关于 [systemctl][28] 的用法。
|
||||
|
||||
```
|
||||
$ systemctl list-unit-files --type=service | grep enabled
|
||||
[email protected] enabled
|
||||
dbus-org.freedesktop.ModemManager1.service enabled
|
||||
dbus-org.freedesktop.NetworkManager.service enabled
|
||||
dbus-org.freedesktop.nm-dispatcher.service enabled
|
||||
display-manager.service enabled
|
||||
gdm.service enabled
|
||||
[email protected] enabled
|
||||
linux-module-cleanup.service enabled
|
||||
ModemManager.service enabled
|
||||
NetworkManager-dispatcher.service enabled
|
||||
NetworkManager-wait-online.service enabled
|
||||
NetworkManager.service enabled
|
||||
systemd-fsck-root.service enabled-runtime
|
||||
tlp-sleep.service enabled
|
||||
tlp.service enabled
|
||||
```
|
||||
|
||||
### 17) 如何在 GNOME 3 桌面中安装图标和主题?
|
||||
|
||||
有大量的图标和主题可供 GNOME 桌面使用,因此,选择吸引你的 [GTK 主题][29] 和 [图标主题][30]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg
|
||||
[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg
|
||||
[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg
|
||||
[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/
|
||||
[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg
|
||||
[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg
|
||||
[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg
|
||||
[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg
|
||||
[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg
|
||||
[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg
|
||||
[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg
|
||||
[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg
|
||||
[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg
|
||||
[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg
|
||||
[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/
|
||||
[29]: https://www.2daygeek.com/category/gtk-theme/
|
||||
[30]: https://www.2daygeek.com/category/icon-theme/
|
@ -0,0 +1,167 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11234-1.html)
|
||||
[#]: subject: (Getting started with Prometheus)
|
||||
[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
Prometheus 入门
|
||||
======
|
||||
|
||||
> 学习安装 Prometheus 监控和警报系统并编写它的查询。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/16/113724zqe12khkdye2mesy.jpg)
|
||||
|
||||
[Prometheus][1] 是一个开源的监控和警报系统,它直接从目标主机上运行的代理程序中抓取指标,并将收集的样本集中存储在其服务器上。也可以使用像 `collectd_exporter` 这样的插件推送指标,尽管这不是 Prometheus 的默认行为,但在主机位于防火墙后面或安全策略禁止打开端口的某些环境中,它可能很有用。
|
||||
|
||||
Prometheus 是[云原生计算基金会(CNCF)][2]的一个项目。它使用<ruby>联合模型<rt>federation model</rt></ruby>进行扩展,该模型使得一个 Prometheus 服务器能够抓取另一个 Prometheus 服务器的数据。这允许创建分层拓扑,其中中央系统或更高级别的 Prometheus 服务器可以抓取已从下级实例收集的聚合数据。
|
||||
|
||||
除 Prometheus 服务器外,其最常见的组件是[警报管理器][3]及其输出器。
|
||||
|
||||
警报规则可以在 Prometheus 中创建,并配置为向警报管理器发送自定义警报。然后,警报管理器处理和管理这些警报,包括通过电子邮件或第三方服务(如 [PagerDuty][4])等不同机制发送通知。
|
||||
|
||||
Prometheus 的输出器可以是库、进程、设备或任何其他能将 Prometheus 抓取的指标公开出去的东西。 这些指标可在端点 `/metrics` 中获得,它允许 Prometheus 无需代理直接抓取它们。本文中的教程使用 `node_exporter` 来公开目标主机的硬件和操作系统指标。输出器的输出是明文的、高度可读的,这是 Prometheus 的优势之一。
|
||||
|
||||
此外,你可以将 Prometheus 作为后端,配置 [Grafana][5] 来提供数据可视化和仪表板功能。
|
||||
|
||||
### 理解 Prometheus 的配置文件
|
||||
|
||||
抓取 `/metrics` 的间隔秒数控制了时间序列数据库的粒度。这在配置文件中定义为 `scrape_interval` 参数,默认情况下设置为 60 秒。
|
||||
|
||||
在 `scrape_configs` 部分中为每个抓取作业设置了目标。每个作业都有自己的名称和一组标签,可以帮助你过滤、分类并更轻松地识别目标。一项作业可以有很多目标。
|
||||
|
||||
### 安装 Prometheus
|
||||
|
||||
在本教程中,为简单起见,我们将使用 Docker 安装 Prometheus 服务器和 `node_exporter`。Docker 应该已经在你的系统上正确安装和配置。对于更深入、自动化的方法,我推荐 Steve Ovens 的文章《[如何使用 Ansible 与 Prometheus 建立系统监控][6]》。
|
||||
|
||||
在开始之前,在工作目录中创建 Prometheus 配置文件 `prometheus.yml`,如下所示:
|
||||
|
||||
```
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'prometheus'
|
||||
|
||||
static_configs:
|
||||
- targets: ['localhost:9090']
|
||||
|
||||
- job_name: 'webservers'
|
||||
|
||||
static_configs:
|
||||
- targets: ['<node exporter node IP>:9100']
|
||||
```
|
||||
|
||||
通过运行以下命令用 Docker 启动 Prometheus:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -p 9090:9090 -v
|
||||
/path/to/prometheus.yml:/etc/prometheus/prometheus.yml
|
||||
prom/prometheus
|
||||
```
|
||||
|
||||
默认情况下,Prometheus 服务器将使用端口 9090。如果此端口已在使用,你可以通过在上一个命令的后面添加参数 `--web.listen-address="<IP of machine>:<port>"` 来更改它。
|
||||
|
||||
在要监视的计算机中,使用以下命令下载并运行 `node_exporter` 容器:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v
|
||||
"/:/rootfs" --net="host" prom/node-exporter --path.procfs
|
||||
/host/proc --path.sysfs /host/sys --collector.filesystem.ignored-
|
||||
mount-points "^/(sys|proc|dev|host|etc)($|/)"
|
||||
```
|
||||
|
||||
出于本文练习的目的,你可以在同一台机器上安装 `node_exporter` 和 Prometheus。请注意,生产环境中在 Docker 下运行 `node_exporter` 是不明智的 —— 这仅用于测试目的。
|
||||
|
||||
要验证 `node_exporter` 是否正在运行,请打开浏览器并导航到 `http://<IP of Node exporter host>:9100/metrics`,这将显示收集到的所有指标;也即是 Prometheus 将要抓取的相同指标。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
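
如果机器上没有图形浏览器,也可以用 `curl` 做同样的验证,例如:

```
$ curl -s http://<IP of Node exporter host>:9100/metrics | head
```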
|
||||
|
||||
要确认 Prometheus 服务器安装成功,打开浏览器并导航至:<http://localhost:9090>。
|
||||
|
||||
你应该看到了 Prometheus 的界面。单击“Status”,然后单击“Targets”。在 “Status” 下,你应该看到你的机器被列为 “UP”。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/targets-up.png)
|
||||
|
||||
### 使用 Prometheus 查询
|
||||
|
||||
现在是时候熟悉一下 [PromQL][7](Prometheus 的查询语法)及其图形化 Web 界面了。转到 Prometheus 服务器上的 `http://localhost:9090/graph`。你将看到一个查询编辑器和两个选项卡:“Graph” 和 “Console”。
|
||||
|
||||
Prometheus 将所有数据存储为时间序列,使用指标名称标识每个数据。例如,指标 `node_filesystem_avail_bytes` 显示可用的文件系统空间。指标的名称可以在表达式框中使用,以选择具有此名称的所有时间序列并生成即时向量。如果需要,可以使用选择器和标签(一组键值对)过滤这些时间序列,例如:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype="ext4"}
|
||||
```
|
||||
|
||||
过滤时,你可以匹配“完全相等”(`=`)、“不等于”(`!=`),“正则匹配”(`=~`)和“正则排除匹配”(`!~`)。以下示例说明了这一点:
|
||||
|
||||
要过滤 `node_filesystem_avail_bytes` 以显示 ext4 和 XFS 文件系统:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
|
||||
```
|
||||
|
||||
要排除匹配:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype!="xfs"}
|
||||
```
|
||||
|
||||
你还可以使用方括号得到从当前时间往回的一系列样本。你可以使用 `s` 表示秒,`m` 表示分钟,`h` 表示小时,`d` 表示天,`w` 表示周,而 `y` 表示年。使用时间范围时,返回的向量将是范围向量。
|
||||
|
||||
例如,以下命令生成从五分钟前到现在的样本:
|
||||
|
||||
```
|
||||
node_memory_MemAvailable_bytes[5m]
|
||||
```
|
||||
|
||||
Prometheus 还包括了高级查询的功能,例如:
|
||||
|
||||
```
|
||||
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
|
||||
```
|
||||
|
||||
请注意标签如何用于过滤作业和模式。指标 `node_cpu_seconds_total` 返回一个计数器,`irate()`函数根据范围间隔的最后两个数据点计算每秒的变化率(意味着该范围可以小于五分钟)。要计算 CPU 总体使用率,可以使用 `node_cpu_seconds_total` 指标的空闲(`idle`)模式。处理器的空闲比例与繁忙比例相反,因此从 1 中减去 `irate` 值。要使其为百分比,请将其乘以 100。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
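
按照同样的思路,还可以写出其他资源的使用率查询。例如,下面这个查询(假设目标主机在运行 `node_exporter`)用可用内存与总内存的比值估算内存使用率百分比:

```
100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
```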
|
||||
|
||||
### 了解更多
|
||||
|
||||
Prometheus 是一个功能强大、可扩展、轻量级、易于使用和部署的监视工具,对于每个系统管理员和开发人员来说都是必不可少的。出于这些原因和其他原因,许多公司正在将 Prometheus 作为其基础设施的一部分。
|
||||
|
||||
要了解有关 Prometheus 及其功能的更多信息,我建议使用以下资源:
|
||||
|
||||
+ 关于 [PromQL][8]
|
||||
+ 什么是 [node_exporters 集合][9]
|
||||
+ [Prometheus 函数][10]
|
||||
+ [4 个开源监控工具][11]
+ [现已推出:DevOps 监控工具的开源指南][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/introduction-prometheus
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://prometheus.io/
|
||||
[2]: https://www.cncf.io/
|
||||
[3]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[4]: https://en.wikipedia.org/wiki/PagerDuty
|
||||
[5]: https://grafana.com/
|
||||
[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus
|
||||
[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[9]: https://github.com/prometheus/node_exporter#collectors
|
||||
[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/
|
||||
[11]: https://opensource.com/article/18/8/open-source-monitoring-tools
|
||||
[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools
|
@ -0,0 +1,130 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11200-1.html)
|
||||
[#]: subject: (How to detect automatically generated emails)
|
||||
[#]: via: (https://arp242.net/weblog/autoreply.html)
|
||||
[#]: author: (Martin Tournoij https://arp242.net/)
|
||||
|
||||
如何检测自动生成的电子邮件
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/08/003503fw0w0pzx2ue6a6a6.jpg)
|
||||
|
||||
当你用电子邮件系统发送自动回复时,你需要注意不要向自动生成的电子邮件发送回复。最好的情况下,你将获得无用的投递失败消息。更可能的是,你会得到一个无限的电子邮件循环和一个混乱的世界。
|
||||
|
||||
事实证明,可靠地检测自动生成的电子邮件并不总是那么容易。以下观察结果基于一个为此编写的检测器,以及用它扫描大约 100,000 封电子邮件(大量的个人存档和公司存档)的经验。
|
||||
|
||||
### Auto-submitted 信头
|
||||
|
||||
由 [RFC 3834][1] 定义。
|
||||
|
||||
这是表示你的邮件是自动回复的“官方”标准。如果存在 `Auto-Submitted` 信头,并且其值不是 `no`,你应该**不**发送回复。
|
||||
|
||||
### X-Auto-Response-Suppress 信头
|
||||
|
||||
[由微软][2]定义。
|
||||
|
||||
此信头由微软 Exchange、Outlook 和其他一些产品使用。许多新闻订阅等都设定了这个。如果 `X-Auto-Response-Suppress` 包含 `DR`(“抑制投递报告”)、`AutoReply`(“禁止 OOF 通知以外的自动回复消息”)或 `All`,你应该**不**发送回复。
|
||||
|
||||
### List-Id 和 List-Unsubscribe 信头
|
||||
|
||||
由 [RFC 2919][3] 定义。
|
||||
|
||||
你通常不希望给邮件列表或新闻订阅发送自动回复。几乎所有的邮件列表和大多数新闻订阅都至少设置了其中一个信头。如果存在这些信头中的任何一个,你应该**不**发送回复。这个信头的值不重要。
|
||||
|
||||
### Feedback-ID 信头
|
||||
|
||||
[由谷歌][4]定义。
|
||||
|
||||
Gmail 使用此信头识别邮件是否是新闻订阅,并使用它为这些新闻订阅的所有者生成统计信息或报告。如果此信头存在,你应该**不**发送回复。这个信头的值不重要。
|
||||
|
||||
### 非标准方式
|
||||
|
||||
上述方法定义明确(即使有些是非标准的)。不幸的是,有些电子邮件系统不使用它们中的任何一个 :-( 这里有一些额外的措施。
|
||||
|
||||
#### Precedence 信头
|
||||
|
||||
在 [RFC 2076][5] 中没有真正定义,不鼓励使用它(但通常会遇到此信头)。
|
||||
|
||||
请注意,不建议仅凭此信头是否存在来判断,因为某些常规邮件也会使用 `normal` 或其他一些少见的值(尽管这种情况不常见)。
|
||||
|
||||
我的建议是如果其值不区分大小写地匹配 `bulk`、`auto_reply` 或 `list`,则**不**发送回复。
|
||||
|
||||
#### 其他不常见的信头
|
||||
|
||||
这是我遇到的另外的一些(不常见的)信头。如果设置了其中一个,我建议**不**发送自动回复。大多数邮件也设置了上述信头之一,但有些没有(这并不常见)。
|
||||
|
||||
* `X-MSFBL`:无法真正找到定义(Microsoft 信头?),但我只有自动生成的邮件带有此信头。
|
||||
* `X-Loop`:在任何地方都没有真正定义过,有点罕见,但有时有。它通常设置为不应该收到电子邮件的地址,但也会遇到 `X-Loop: yes`。
|
||||
* `X-Autoreply`:相当罕见,并且似乎总是具有 `yes` 的值。
|
||||
|
||||
#### Email 地址
|
||||
|
||||
检查 `From` 或 `Reply-To` 信头是否包含 `noreply`、`no-reply` 或 `no_reply`(正则表达式:`^no.?reply@`)。
|
||||
|
||||
#### 只有 HTML 部分
|
||||
|
||||
如果电子邮件只有 HTML 部分,而没有文本部分,则表明这是一个自动生成的邮件或新闻订阅。几乎所有邮件客户端都设置了文本部分。
|
||||
|
||||
#### 投递失败消息
|
||||
|
||||
许多投递失败消息并不会真正表明自己是投递失败。一些检查方法:
|
||||
|
||||
* `From` 包含 `mailer-daemon` 或 `Mail Delivery Subsystem`
|
||||
|
||||
#### 特定的邮件库特征
|
||||
|
||||
许多邮件类库留下了某种痕迹,大多数常规邮件客户端使用自己的数据覆盖它。检查这个似乎工作得相当可靠。
|
||||
|
||||
* `X-Mailer: Microsoft CDO for Windows 2000`:由某些微软软件设置;我只能在自动生成的邮件中找到它。是的,在 2015 年它仍然在使用。
|
||||
* `Message-ID` 信头包含 `.JavaMail.`:我只发现了少数几封(5 万封中约 5 封)常规消息带有它;绝大多数(数千封)邮件是新闻订阅、订单确认等。
|
||||
* `^X-Mailer` 以 `PHP` 开头。这应该会同时看到 `X-Mailer: PHP/5.5.0` 和 `X-Mailer: PHPmailer XXX XXX`。与 “JavaMail” 相同。
|
||||
* 出现了 `X-Library`;似乎只有 [Indy][6] 设定了这个。
|
||||
* `X-Mailer` 以 `wdcollect` 开头。由一些 Plesk 邮件设置。
|
||||
* `X-Mailer` 以 `MIME-tools` 开头。
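
把上面几类检查合起来,可以写出一个极简的检测器草图。下面是一个使用 Python 标准库 `email` 模块的示意(只覆盖部分规则;为了也能匹配带显示名的地址,这里去掉了正则中的 `^` 锚点):

```
# 判断一封邮件是否为自动生成的极简草图(只覆盖上文提到的部分规则)
import email
import re

def is_auto_generated(raw_message: str) -> bool:
    msg = email.message_from_string(raw_message)
    # Auto-Submitted 存在且其值不是 no,则是自动邮件
    if msg.get("Auto-Submitted", "no").lower() != "no":
        return True
    # 邮件列表/新闻订阅类信头:值不重要,存在即可
    if any(h in msg for h in ("List-Id", "List-Unsubscribe", "Feedback-ID")):
        return True
    # Precedence 信头的几个典型值
    if msg.get("Precedence", "").lower() in ("bulk", "auto_reply", "list"):
        return True
    # From / Reply-To 中的 noreply 类地址
    sender = msg.get("From", "") + msg.get("Reply-To", "")
    return re.search(r"no.?reply@", sender, re.IGNORECASE) is not None

print(is_auto_generated("Auto-Submitted: auto-replied\n\nhello"))  # True
```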
|
||||
|
||||
### 最后的预防措施:限制回复的数量
|
||||
|
||||
即使遵循上述所有建议,你仍可能会遇到一个避开所有这些检测的电子邮件程序。这可能非常危险,因为电子邮件系统只是“如果有电子邮件那么发送”,就有可能导致无限的电子邮件循环。
|
||||
|
||||
出于这个原因,我建议你记录你自动发送的电子邮件,并将此速率限制为在几分钟内最多几封电子邮件。这将打破循环链条。
|
||||
|
||||
我们使用的是每五分钟最多一封电子邮件的设置,但不那么严格的设置可能也会运作良好。
|
||||
|
||||
### 你需要为自动回复设置什么信头
|
||||
|
||||
具体细节取决于你发送的邮件类型。这是我们用于自动回复邮件的内容:
|
||||
|
||||
```
|
||||
Auto-Submitted: auto-replied
|
||||
X-Auto-Response-Suppress: All
|
||||
Precedence: auto_reply
|
||||
```
|
||||
|
||||
### 反馈
|
||||
|
||||
你可以发送电子邮件至 [martin@arp242.net][7] 或 [创建 GitHub 议题][8]以提交反馈、问题等。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://arp242.net/weblog/autoreply.html
|
||||
|
||||
作者:[Martin Tournoij][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://arp242.net/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://tools.ietf.org/html/rfc3834
|
||||
[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx
|
||||
[3]: https://tools.ietf.org/html/rfc2919)
|
||||
[4]: https://support.google.com/mail/answer/6254652?hl=en
|
||||
[5]: http://www.faqs.org/rfcs/rfc2076.html
|
||||
[6]: http://www.indyproject.org/index.en.aspx
|
||||
[7]: mailto:martin@arp242.net
|
||||
[8]: https://github.com/Carpetsmoker/arp242.net/issues/new
|
@ -0,0 +1,228 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "wxy"
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-11198-1.html"
|
||||
[#]: subject: "How To Parse And Pretty Print JSON With Linux Commandline Tools"
|
||||
[#]: via: "https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/"
|
||||
[#]: author: "EDITOR https://www.ostechnix.com/author/editor/"
|
||||
|
||||
如何用 Linux 命令行工具解析和格式化输出 JSON
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2019/03/json-720x340.png)
|
||||
|
||||
JSON 是一种轻量级且与语言无关的数据存储格式,易于与大多数编程语言集成,也易于人类理解 —— 当然,如果格式正确的话。JSON 这个词代表 **J**ava **S**cript **O**bject **N**otation,虽然它以 JavaScript 开头,而且主要用于在服务器和浏览器之间交换数据,但现在正在用于许多领域,包括嵌入式系统。在这里,我们将使用 Linux 上的命令行工具解析并格式化输出 JSON,这对于在 shell 脚本中处理大型 JSON 数据非常有用。
|
||||
|
||||
### 什么是格式化输出?
|
||||
|
||||
JSON 数据的结构更具人性化。但是在大多数情况下,JSON 数据会存储在一行中,甚至没有行结束字符。
|
||||
|
||||
显然,这对于手动阅读和编辑不太方便。
|
||||
|
||||
这时<ruby>格式化输出<rt>pretty print</rt></ruby>就很有用了。这个名称不言自明:重新格式化 JSON 文本,使人们读起来更清晰。这被称为 **JSON 格式化输出**。
|
||||
|
||||
### 用 Linux 命令行工具解析和格式化输出 JSON
|
||||
|
||||
可以使用命令行文本处理器解析 JSON 数据,例如 `awk`、`sed` 和 `grep`。实际上 `JSON.awk` 就是一个做这件事的 awk 脚本。不过,也有一些可用于同一目的的专用工具。
|
||||
|
||||
1. `jq` 或 `jshon`,shell 下的 JSON 解析器,它们都非常有用。
|
||||
2. Shell 脚本,如 `JSON.sh` 或 `jsonv.sh`,用于在 bash、zsh 或 dash shell 中解析 JSON。
|
||||
3. `JSON.awk`,JSON 解析器 awk 脚本。
|
||||
4. 像 `json.tool` 这样的 Python 模块。
|
||||
5. `underscore-cli`,基于 Node.js 和 JavaScript。
|
||||
|
||||
在本教程中,我只关注 `jq`,这是一个 shell 下的非常强大的 JSON 解析器,具有高级过滤和脚本编程功能。
|
||||
|
||||
### JSON 格式化输出
|
||||
|
||||
JSON 数据可能放在一行上,使人难以解读,因此,要让它具有一定的可读性,就可以用 JSON 格式化输出来达到这个目的。
|
||||
|
||||
**示例:**来自 `jsonip.com` 的数据,使用 `curl` 或 `wget` 工具获得 JSON 格式的外部 IP 地址,如下所示。
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O -
|
||||
```
|
||||
|
||||
实际数据看起来类似这样:
|
||||
|
||||
```
|
||||
{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}
|
||||
```
|
||||
|
||||
现在使用 `jq` 格式化输出它:
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O - | jq '.'
|
||||
```
|
||||
|
||||
通过 `jq` 过滤了该结果之后,它应该看起来类似这样:
|
||||
|
||||
```
|
||||
{
|
||||
"ip": "111.222.333.444",
|
||||
"about": "/about",
|
||||
"Pro!": "http://getjsonip.com"
|
||||
}
|
||||
```
|
||||
|
||||
同样也可以通过 Python `json.tool` 模块做到。示例如下:
|
||||
|
||||
```
|
||||
$ cat anything.json | python -m json.tool
|
||||
```
|
||||
|
||||
这种基于 Python 的解决方案对于大多数用户来说应该没问题,但是如果没有预安装或无法安装 Python 则不行,比如在嵌入式系统上。
|
||||
|
||||
然而,`json.tool` Python 模块具有明显的优势,它是跨平台的。因此,你可以在 Windows、Linux 或 Mac OS 上无缝使用它。
|
||||
|
||||
### 如何用 jq 解析 JSON
|
||||
|
||||
首先,你需要安装 `jq`,它已被大多数 GNU/Linux 发行版收录,可以使用各自的软件包管理器命令进行安装。
|
||||
|
||||
在 Arch Linux 上:
|
||||
|
||||
```
|
||||
$ sudo pacman -S jq
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、Linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install jq
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install jq
|
||||
```
|
||||
|
||||
在 openSUSE 上:
|
||||
|
||||
```
|
||||
$ sudo zypper install jq
|
||||
```
|
||||
|
||||
对于其它操作系统或平台参见[官方的安装指导][1]。
|
||||
|
||||
#### jq 的基本过滤和标识符功能
|
||||
|
||||
`jq` 可以从 `STDIN` 或文件中读取 JSON 数据。你可以根据情况使用。
|
||||
|
||||
单个符号 `.` 是最基本的过滤器。这些过滤器也称为**对象标识符-索引**。`jq` 使用单个 `.` 过滤器,基本上相当于将输入的 JSON 文件格式化输出。
|
||||
|
||||
- **单引号**:不必始终使用单引号。但是如果你在一行中组合几个过滤器,那么你必须使用它们。
|
||||
- **双引号**:你必须用双引号括起任何特殊字符,如 `@`、`#`、`$`,例如 `jq .foo."@bar"`。
|
||||
- **原始数据打印**:不管出于任何原因,如果你只需要最终解析的数据(不包含在双引号内),请使用带有 `-r` 标志的 `jq` 命令,如下所示:`jq -r .foo.bar`。
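
例如,`-r` 标志的效果如下(沿用上文 jsonip.com 的示例数据):

```
$ wget -cq http://jsonip.com/ -O - | jq '.ip'
"111.222.333.444"
$ wget -cq http://jsonip.com/ -O - | jq -r '.ip'
111.222.333.444
```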
|
||||
|
||||
#### 解析特定数据
|
||||
|
||||
要过滤出 JSON 的特定部分,你需要了解格式化输出的 JSON 文件的数据层次结构。
|
||||
|
||||
来自维基百科的 JSON 数据示例:
|
||||
|
||||
```
|
||||
{
|
||||
"firstName": "John",
|
||||
"lastName": "Smith",
|
||||
"age": 25,
|
||||
"address": {
|
||||
"streetAddress": "21 2nd Street",
|
||||
"city": "New York",
|
||||
"state": "NY",
|
||||
"postalCode": "10021"
|
||||
},
|
||||
"phoneNumber": [
|
||||
{
|
||||
"type": "home",
|
||||
"number": "212 555-1234"
|
||||
},
|
||||
{
|
||||
"type": "fax",
|
||||
"number": "646 555-4567"
|
||||
}
|
||||
],
|
||||
"gender": {
|
||||
"type": "male"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
我将在本教程中将此 JSON 数据用作示例,将其保存为 `sample.json`。
|
||||
|
||||
假设我想从 `sample.json` 文件中过滤出地址。所以命令应该是这样的:
|
||||
|
||||
```
|
||||
$ jq .address sample.json
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
{
|
||||
"streetAddress": "21 2nd Street",
|
||||
"city": "New York",
|
||||
"state": "NY",
|
||||
"postalCode": "10021"
|
||||
}
|
||||
```
|
||||
|
||||
接下来,如果我想要邮政编码,就要添加另一个**对象标识符-索引**,即另一个过滤器。
|
||||
|
||||
```
|
||||
$ cat sample.json | jq .address.postalCode
|
||||
```
|
||||
|
||||
另请注意,**过滤器区分大小写**,并且你必须使用完全相同的字符串来获取有意义的输出,否则就是 null。
|
||||
|
||||
#### 从 JSON 数组中解析元素
|
||||
|
||||
JSON 数组的元素包含在方括号内,这无疑是非常灵活多用的。
|
||||
|
||||
要解析数组中的元素,你必须使用 `[]` 标识符以及其他对象标识符索引。
|
||||
|
||||
在此示例 JSON 数据中,电话号码存储在数组中,要从此数组中获取所有内容,你只需使用括号,像这个示例:
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[] sample.json
|
||||
```
|
||||
|
||||
假设你只想要数组的第一个元素,然后使用从 `0` 开始的数组对象编号,对于第一个项目,使用 `[0]`,对于下一个项目,它应该每步增加 1。
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[0] sample.json
|
||||
```
|
||||
|
||||
#### 脚本编程示例
|
||||
|
||||
假设我只想要家庭电话,而不是整个 JSON 数组数据。这就是用 `jq` 命令脚本编写的方便之处。
|
||||
|
||||
```
|
||||
$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'
|
||||
```
|
||||
|
||||
首先,我将一个过滤器的结果传递给另一个,然后使用 `select` 属性选择特定类型的数据,再次将结果传递给另一个过滤器。
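
沿着同样的思路,再看两个针对 `sample.json` 的小例子(`length` 是 jq 的内置函数):

```
$ jq '.phoneNumber | length' sample.json
2
$ jq -r '.phoneNumber[].number' sample.json
212 555-1234
646 555-4567
```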
|
||||
|
||||
解释每种类型的 `jq` 过滤器和脚本编程超出了本教程的范围和目的。强烈建议你阅读 `jq` 手册,以便更好地理解下面的内容。
|
||||
|
||||
资源:
|
||||
|
||||
- https://stedolan.github.io/jq/manual/
|
||||
- http://www.compciv.org/recipes/cli/jq-for-parsing-json/
|
||||
- https://lzone.de/cheat-sheet/jq
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/
|
||||
|
||||
作者:[ostechnix][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://stedolan.github.io/jq/download/
|
@ -0,0 +1,150 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11235-1.html)
|
||||
[#]: subject: (Let’s try dwm — dynamic window manager)
|
||||
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
|
||||
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
|
||||
|
||||
试试动态窗口管理器 dwm 吧
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
如果你崇尚效率和极简主义,并且正在为你的 Linux 桌面寻找新的窗口管理器,那么你应该尝试一下<ruby>动态窗口管理器<rt>dynamic window manager</rt></ruby> dwm。以不到 2000 行标准代码写就的 dwm,是一个速度极快、功能强大且可高度定制的窗口管理器。
|
||||
|
||||
你可以在平铺、单片和浮动布局之间动态选择,使用标签将窗口组织到多个工作区,并使用键盘快捷键快速导航。本文将帮助你开始使用 dwm。
|
||||
|
||||
### 安装
|
||||
|
||||
要在 Fedora 上安装 dwm,运行:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm dwm-user
|
||||
```
|
||||
|
||||
`dwm` 包会安装窗口管理器本身,`dwm-user` 包显著简化了配置,本文稍后将对此进行说明。
|
||||
|
||||
此外,为了能够在需要时锁定屏幕,我们还将安装 `slock`,这是一个简单的 X 显示锁屏。
|
||||
|
||||
```
|
||||
$ sudo dnf install slock
|
||||
```
|
||||
|
||||
当然,你可以根据你的个人喜好使用其它的锁屏。
|
||||
|
||||
### 快速入门
|
||||
|
||||
要启动 dwm,在登录屏选择 “dwm-user” 选项。
|
||||
|
||||
![][2]
|
||||
|
||||
登录后,你将看到一个非常简单的桌面。事实上,顶部唯一的面板列出了代表工作空间的 9 个标签和一个代表窗口布局的 `[]=` 符号。
|
||||
|
||||
#### 启动应用
|
||||
|
||||
在查看布局之前,首先启动一些应用程序,以便你可以随时使用布局。可以通过按 `Alt+p` 并键入应用程序的名称,然后回车来启动应用程序。还有一个快捷键 `Alt+Shift+Enter` 用于打开终端。
|
||||
|
||||
现在有一些应用程序正在运行了,请查看布局。
|
||||
|
||||
#### 布局
|
||||
|
||||
默认情况下有三种布局:平铺布局,单片布局和浮动布局。
|
||||
|
||||
平铺布局由顶栏上的 `[]=` 表示,它将窗口组织为两个主要区域:左侧为主区域,右侧为堆叠区。你可以按 `Alt+t` 激活平铺布局。
|
||||
|
||||
![][3]
|
||||
|
||||
平铺布局背后的想法是,主窗口放在主区域中,同时仍然可以看到堆叠区中的其他窗口。你可以根据需要在它们之间快速切换。
|
||||
|
||||
要在两个区域之间交换窗口,请将鼠标悬停在堆叠区中的一个窗口上,然后按 `Alt+Enter` 将其与主区域中的窗口交换。
|
||||
|
||||
![][4]
|
||||
|
||||
单片布局由顶部栏上的 `[N]` 表示,可以使你的主窗口占据整个屏幕。你可以按 `Alt+m` 切换到它。
|
||||
|
||||
最后,浮动布局可让你自由移动和调整窗口大小。它的快捷方式是 `Alt+f`,顶栏上的符号是 `><>`。
|
||||
|
||||
#### 工作区和标签
|
||||
|
||||
每个窗口都分配了一个顶部栏中列出的标签(1-9)。要查看特定标签,请使用鼠标单击其编号或按 `Alt+1..9`。你甚至可以使用鼠标右键单击其编号,一次查看多个标签。
|
||||
|
||||
先用鼠标选中窗口,再按 `Alt+Shift+1..9`,即可把窗口移动到不同的标签。
|
||||
|
||||
### 配置
|
||||
|
||||
为了使 dwm 尽可能简约,它不使用典型的配置文件。而是你需要修改代表配置的 C 语言头文件,并重新编译它。但是不要担心,在 Fedora 中你只需要简单地编辑主目录中的一个文件,而其他一切都会在后台发生,这要归功于 Fedora 的维护者提供的 `dwm-user` 包。
|
||||
|
||||
首先,你需要使用类似于以下的命令将文件复制到主目录中:
|
||||
|
||||
```
|
||||
$ mkdir ~/.dwm
|
||||
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
|
||||
```
|
||||
|
||||
你可以通过运行 `man dwm-start` 来获取确切的路径。
|
||||
|
||||
其次,只需编辑 `~/.dwm/config.h` 文件。例如,让我们配置一个新的快捷方式:通过按 `Alt+Shift+L` 来锁定屏幕。
|
||||
|
||||
考虑到我们已经安装了本文前面提到的 `slock` 包,我们需要在文件中添加以下两行以使其工作:
|
||||
|
||||
在 `/* commands */` 注释下,添加:
|
||||
|
||||
```
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
```
|
||||
|
||||
添加下列行到 `static Key keys[]` 中:
|
||||
|
||||
```
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
```
|
||||
|
||||
最终,它应该看起来如下:
|
||||
|
||||
```
|
||||
...
|
||||
/* commands */
|
||||
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
|
||||
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
|
||||
static const char *termcmd[] = { "st", NULL };
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
|
||||
static Key keys[] = {
|
||||
/* modifier key function argument */
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
|
||||
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
|
||||
...
|
||||
```
|
||||
|
||||
保存文件。
|
||||
|
||||
最后,按 `Alt+Shift+q` 注销,然后重新登录。`dwm-user` 包提供的脚本将识别你已更改主目录中的`config.h` 文件,并会在登录时重新编译 dwm。因为 dwm 非常小,它快到你甚至都不会注意到它重新编译了。
|
||||
|
||||
你现在可以尝试按 `Alt+Shift+L` 锁定屏幕,然后输入密码并按回车键再次登录。
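
再举一个常见的自定义例子:很多用户喜欢用 `Super` 键(Windows 键)代替 `Alt` 作为修饰键。在 dwm 的默认配置中,`MODKEY` 被定义为 `Mod1Mask`(即 `Alt`);把 `~/.dwm/config.h` 中的这一行改成如下内容,注销并重新登录后即可生效:

```
/* 把修饰键从 Alt(Mod1)换成 Super(Mod4) */
#define MODKEY Mod4Mask
```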
|
||||
|
||||
### 总结
|
||||
|
||||
如果你崇尚极简主义并想要一个非常快速而功能强大的窗口管理器,dwm 可能正是你一直在寻找的。但是,它可能不适合初学者,你可能需要做许多其他配置才能按照你的喜好进行配置。
|
||||
|
||||
要了解有关 dwm 的更多信息,请参阅该项目的主页: <https://dwm.suckless.org/>。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
|
||||
|
||||
作者:[Adam Šamalík][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/asamalik/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
|
||||
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11245-1.html)
|
||||
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
|
||||
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
|
||||
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
|
||||
|
||||
如何打开和关闭树莓派(绝对新手)
|
||||
======
|
||||
|
||||
> 这篇短文教你如何打开树莓派以及如何在之后正确关闭它。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/19/192825rlrjy3sj77j7j79y.jpg)
|
||||
|
||||
[树莓派][1]是[最流行的 SBC(单板计算机)][2]之一。如果你对这个话题感兴趣,我相信你已经有了一个树莓派。我还建议你使用[其他树莓派配件][3]来开始使用你的设备。
|
||||
|
||||
现在,你已经准备好打开它并开始使用了。与台式机和笔记本电脑等传统电脑相比,它既有相似之处,也有不同之处。
|
||||
|
||||
今天,让我们继续学习如何打开和关闭树莓派,因为它并没有真正的“电源按钮”。
|
||||
|
||||
在本文中,我使用的是树莓派 3B+,但对于所有树莓派变体都是如此。
|
||||
|
||||
### 如何打开树莓派
|
||||
|
||||
![Micro USB port for Power][7]
|
||||
|
||||
micro USB 口为树莓派供电,打开它的方式是将电源线插入 micro USB 口。但是开始之前,你应该确保做了以下事情。
|
||||
|
||||
* 根据官方[指南][8]准备带有 Raspbian 的 micro SD 卡并插入 micro SD 卡插槽。
|
||||
* 插入 HDMI 线、USB 键盘和鼠标。
|
||||
* 插入以太网线(可选)。
|
||||
|
||||
完成上述操作后,请插入电源线。这会打开树莓派,显示屏将亮起并加载操作系统。
|
||||
|
||||
如果你将其关闭并且想要再次打开它,则必须从电源插座(首选)或从电路板的电源端口拔下电源线,然后再插上。它没有电源按钮。
|
||||
|
||||
### 关闭树莓派
|
||||
|
||||
关闭树莓派非常简单,单击菜单按钮并选择关闭。
|
||||
|
||||
![Turn off Raspberry Pi graphically][9]
|
||||
|
||||
或者,你可以在终端使用 [shutdown 命令][10]:
|
||||
|
||||
```
|
||||
sudo shutdown now
|
||||
```
|
||||
|
||||
`shutdown` 执行后**等待**它完成,接着你可以关闭电源。
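
`shutdown` 命令还支持延时、取消等用法,例如:

```
sudo shutdown -h +5    # 5 分钟后关机
sudo shutdown -c       # 取消计划中的关机
sudo reboot            # 立即重启
```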
|
||||
|
||||
再说一次,树莓派关闭后,除了断开再接通电源之外,没有真正的办法重新打开它。你可以使用 GPIO 打开树莓派,但这需要额外的改装。
|
||||
|
||||
*注意:Micro USB 口往往比较脆弱,因此请通过插座通断电源来关闭/打开树莓派,而不要频繁拔插 micro USB 口。*
|
||||
|
||||
好吧,这就是关于打开和关闭树莓派的所有内容,你打算用它做什么?请在评论中告诉我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/turn-on-raspberry-pi/
|
||||
|
||||
作者:[Chinmay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/chinmay/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/
|
||||
[2]: https://linux.cn/article-10823-1.html
|
||||
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
|
||||
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
|
||||
[10]: https://linuxhandbook.com/linux-shutdown-command/
|
@ -1,74 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11195-1.html)
|
||||
[#]: subject: (Check storage performance with dd)
|
||||
[#]: via: (https://fedoramagazine.org/check-storage-performance-with-dd/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
Check storage performance with dd
|
||||
使用 dd 检查存储性能
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
This article includes some example commands to show you how to get a _rough_ estimate of hard drive and RAID array performance using the _dd_ command. Accurate measurements would have to take into account things like [write amplification][2] and [system call overhead][3], which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm][4].
|
||||
本文包含一些示例命令,向你展示如何使用 `dd` 命令*粗略*估计硬盘驱动器和 RAID 阵列的性能。准确的测量必须考虑诸如[写入放大][2]和[系统调用开销][3]之类的事情,本指南不会考虑这些。对于可能提供更准确结果的工具,你可能需要考虑使用 [hdparm][4]。
|
||||
|
||||
To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING** : The _write_ tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!**
|
||||
为了分解与文件系统相关的性能问题,这些示例显示了如何通过直接读取和写入块设备来在块级测试驱动器和阵列的性能。**警告**:*写入*测试将会销毁用来运行测试的块设备上的所有数据。**不要对包含你想要保留的数据的任何设备运行这些测试!**
|
||||
|
||||
### Four tests
|
||||
### 四个测试
|
||||
|
||||
Below are four example dd commands that can be used to test the performance of a block device:
|
||||
下面是四个示例 `dd` 命令,可用于测试块设备的性能:
|
||||
|
||||
1. One process reading from $MY_DISK:
|
||||
1、 从 `$MY_DISK` 读取的一个进程:
|
||||
|
||||
```
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
2. One process writing to $MY_DISK:
|
||||
2、写入到 `$MY_DISK` 的一个进程:
|
||||
|
||||
```
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
3. Two processes reading concurrently from $MY_DISK:
|
||||
3、从 `$MY_DISK` 并发读取的两个进程:
|
||||
|
||||
```
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
4. Two processes writing concurrently to $MY_DISK:
|
||||
4、 并发写入到 `$MY_DISK` 的两个进程:
|
||||
|
||||
```
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
- 执行读写测试时,相应的 `iflag=nocache` 和 `oflag=direct` 参数非常重要,因为没有它们,`dd` 命令有时会显示从[内存][5]中传输数据的结果速度,而不是从硬盘。
|
||||
- `bs` 和 `count` 参数的值有些随意,我选择的值应足够大,以便在大多数情况下为当前硬件提供合适的平均值。
|
||||
- `null` 和 `zero` 设备在读写测试中分别用于目标和源,因为它们足够快,不会成为性能测试中的限制因素。
|
||||
- 并发读写测试中第二个 `dd` 命令的 `skip=200` 参数是为了确保 `dd` 的两个副本在硬盘驱动器的不同区域上运行。
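顺带一提,如果你想先练习上面的*写入*测试而不冒损坏真实数据的风险,可以用一个文件支撑的回环设备来充当 `$MY_DISK`。下面是一个简单的示意(文件路径和大小均为假设的示例);注意,回环设备测出的是其底层文件系统所在驱动器的性能,只适合用来安全地熟悉这些命令:

```
# 创建一个 512 MiB 的测试文件
dd if=/dev/zero of=/tmp/dd-test.img bs=1MiB count=512

# 将其挂为回环块设备,losetup 会打印分配到的设备名(如 /dev/loop0)
MY_DISK=$(sudo losetup --find --show /tmp/dd-test.img)
echo "测试设备:$MY_DISK"

# 测试完成后释放回环设备并删除测试文件
sudo losetup -d "$MY_DISK"
rm /tmp/dd-test.img
```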
|
||||
|
||||
### 16 个示例
|
||||
|
||||
下面的演示展示了针对以下四个块设备分别运行上述四个测试的结果:
|
||||
|
||||
– The _iflag=nocache_ and _oflag=direct_ parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM][5] rather than the hard drive.
|
||||
1. `MY_DISK=/dev/sda2`(用在示例 1-X 中)
|
||||
2. `MY_DISK=/dev/sdb2`(用在示例 2-X 中)
|
||||
3. `MY_DISK=/dev/md/stripped`(用在示例 3-X 中)
|
||||
4. `MY_DISK=/dev/md/mirrored`(用在示例 4-X 中)
|
||||
|
||||
– The values for the _bs_ and _count_ parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.
|
||||
|
||||
– The _null_ and _zero_ devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.
|
||||
|
||||
– The _skip=200_ parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.
|
||||
|
||||
### 16 examples
|
||||
|
||||
Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:
|
||||
|
||||
1. MY_DISK=/dev/sda2 (used in examples 1-X)
|
||||
2. MY_DISK=/dev/sdb2 (used in examples 2-X)
|
||||
3. MY_DISK=/dev/md/stripped (used in examples 3-X)
|
||||
4. MY_DISK=/dev/md/mirrored (used in examples 4-X)
|
||||
|
||||
|
||||
|
||||
A video demonstration of the these tests being run on a PC is provided at the end of this guide.
|
||||
|
||||
Begin by putting your computer into _rescue_ mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING** : This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your _root_ password to get into rescue mode. The _passwd_ command, when run as the root user, will prompt you to (re)set your root account password.
|
||||
首先将计算机置于*救援*模式,以减少后台服务的磁盘 I/O 随机影响测试结果的可能性。**警告**:这将关闭所有非必要的程序和服务。在运行这些命令之前,请务必保存你的工作。你需要知道 `root` 密码才能进入救援模式。`passwd` 命令以 `root` 用户身份运行时,将提示你(重新)设置 `root` 帐户密码。
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
@ -77,14 +67,14 @@ $ sudo -i
|
||||
# systemctl rescue
|
||||
```
|
||||
|
||||
You might also want to temporarily disable logging to disk:
|
||||
你可能还想暂时禁止将日志记录到磁盘:
|
||||
|
||||
```
|
||||
# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
|
||||
# systemctl restart systemd-journald.service
|
||||
```
|
||||
|
||||
If you have a swap device, it can be temporarily disabled and used to perform the following tests:
|
||||
如果你有交换设备,可以暂时禁用它并用于执行后面的测试:
|
||||
|
||||
```
|
||||
# swapoff -a
|
||||
@ -93,7 +83,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
# mdadm --zero-superblock $MY_DEVS
|
||||
```
|
||||
|
||||
#### Example 1-1 (reading from sda)
|
||||
#### 示例 1-1 (从 sda 读取)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
@ -106,7 +96,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-2 (writing to sda)
|
||||
#### 示例 1-2 (写入到 sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
@ -119,7 +109,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-3 (reading concurrently from sda)
|
||||
#### 示例 1-3 (从 sda 并发读取)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
@ -135,7 +125,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-4 (writing concurrently to sda)
|
||||
#### 示例 1-4 (并发写入到 sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
@ -150,7 +140,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-1 (reading from sdb)
|
||||
#### 示例 2-1 (从 sdb 读取)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
@ -163,7 +153,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-2 (writing to sdb)
|
||||
#### 示例 2-2 (写入到 sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
@ -176,7 +166,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-3 (reading concurrently from sdb)
|
||||
#### 示例 2-3 (从 sdb 并发读取)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
@ -192,7 +182,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-4 (writing concurrently to sdb)
|
||||
#### 示例 2-4 (并发写入到 sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
@ -208,7 +198,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-1 (reading from RAID0)
|
||||
#### 示例 3-1 (从 RAID0 读取)
|
||||
|
||||
```
|
||||
# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
|
||||
@ -222,7 +212,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-2 (writing to RAID0)
|
||||
#### 示例 3-2 (写入到 RAID0)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
@ -235,7 +225,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-3 (reading concurrently from RAID0)
|
||||
#### 示例 3-3 (从 RAID0 并发读取)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
@ -251,7 +241,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-4 (writing concurrently to RAID0)
|
||||
#### 示例 3-4 (并发写入到 RAID0)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
@ -267,7 +257,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-1 (reading from RAID1)
|
||||
#### 示例 4-1 (从 RAID1 读取)
|
||||
|
||||
```
|
||||
# mdadm --stop /dev/md/stripped
|
||||
@ -282,7 +272,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-2 (writing to RAID1)
|
||||
#### 示例 4-2 (写入到 RAID1)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
@ -295,7 +285,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-3 (reading concurrently from RAID1)
|
||||
#### 示例 4-3 (从 RAID1 并发读取)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
@ -311,7 +301,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-4 (writing concurrently to RAID1)
|
||||
#### 示例 4-4 (并发写入到 RAID1)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
@ -327,7 +317,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
|
||||
```
|
||||
|
||||
#### Restore your swap device and journald configuration
|
||||
#### 恢复交换设备和日志配置
|
||||
|
||||
```
|
||||
# mdadm --stop /dev/md/stripped /dev/md/mirrored
|
||||
@ -339,23 +329,19 @@ If you have a swap device, it can be temporarily disabled and used to perform th
|
||||
# reboot
|
||||
```
|
||||
|
||||
### Interpreting the results
|
||||
### 结果解读
|
||||
|
||||
Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.
|
||||
示例 1-1、1-2、2-1 和 2-2 表明我的每个驱动器以大约 125 MB/s 的速度读写。
|
||||
|
||||
Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets at about half the drive’s bandwidth (60 MB/s).
|
||||
示例 1-3、1-4、2-3 和 2-4 表明,当在同一驱动器上并行完成两次读取或写入时,每个进程的驱动器带宽大约为一半(60 MB/s)。
|
||||
|
||||
The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data stripping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be thrice as likely to suffer a [catastrophic failure][6].
|
||||
3-X 示例显示了将两个驱动器放在 RAID0(数据条带化)阵列中的性能优势。在所有情况下,这些数字表明 RAID0 阵列的执行速度是任何一个驱动器独立运行时的两倍。相应的是,丢失所有内容的可能性也是两倍,因为每个驱动器只包含一半的数据。三驱动器阵列的执行速度则是单个驱动器的三倍(假设所有驱动器规格相同),但遭受[灾难性故障][6]的可能性也是三倍。
|
||||
|
||||
The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently. For example, if a process tries to access a large number of files in the background while you are trying to use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails][7].
|
||||
4-X 示例显示 RAID1(数据镜像)阵列的性能类似于单个磁盘,除了多个进程并发读取的情况(示例 4-3)。在多个进程读取的情况下,RAID1 阵列的性能类似于 RAID0 阵列。这意味着你将看到 RAID1 的性能优势,但仅限于多个进程并发读取时,例如当你在前台使用 Web 浏览器或电子邮件客户端,而某个进程在后台尝试访问大量文件时。RAID1 的主要好处是,[如果驱动器出现故障][7],你的数据不太可能丢失。
|
||||
|
||||
### Video demo
|
||||
### 故障排除
|
||||
|
||||
Testing storage throughput using dd
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology ([SMART][8]). If your drive supports it, the _smartctl_ command can be used to query your hard drive for its internal statistics:
|
||||
如果上述测试未按预期执行,则可能是驱动器坏了或出现故障。大多数现代硬盘都内置了自我监控、分析和报告技术([SMART][8])。如果你的驱动器支持它,`smartctl` 命令可用于查询你的硬盘驱动器的内部统计信息:
|
||||
|
||||
```
|
||||
# smartctl --health /dev/sda
|
||||
@ -363,21 +349,21 @@ If the above tests aren’t performing as you expect, you might have a bad or fa
|
||||
# smartctl -x /dev/sda
|
||||
```
|
||||
|
||||
Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler][9]. Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue][10] variant of the [deadline][11] scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
|
||||
另一种可以调整 PC 以获得更好性能的方法是更改 [I/O 调度程序][9]。Linux 系统支持多个 I/O 调度程序,Fedora 系统的当前默认值是 [deadline][11] 调度程序的 [multiqueue][10] 变体。默认情况下它的整体性能非常好,并且对于具有许多处理器和大型磁盘阵列的大型服务器,其扩展性极为出色。但是,有一些更专业的调度程序在某些条件下可能表现更好。
|
||||
|
||||
To view which I/O scheduler your drives are using, issue the following command:
|
||||
要查看驱动器正在使用的 I/O 调度程序,请运行以下命令:
|
||||
|
||||
```
|
||||
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
|
||||
```
|
||||
|
||||
You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:
|
||||
你可以通过将所需调度程序的名称写入 `/sys/block/<device name>/queue/scheduler` 文件来更改驱动器的调度程序:
|
||||
|
||||
```
|
||||
# echo bfq > /sys/block/sda/queue/scheduler
|
||||
```
|
||||
|
||||
You can make your changes permanent by creating a [udev rule][12] for your drive. The following example shows how to create a udev rule that will set all [rotational drives][13] to use the [BFQ][14] I/O scheduler:
|
||||
你可以通过为驱动器创建 [udev 规则][12]来永久更改它。以下示例显示了如何创建将所有的[旋转式驱动器][13]设置为使用 [BFQ][14] I/O 调度程序的 udev 规则:
|
||||
|
||||
```
|
||||
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
|
||||
@ -385,7 +371,7 @@ ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue
|
||||
END
|
||||
```
|
||||
|
||||
Here is another example that sets all [solid-state drives][15] to use the [NOOP][16] I/O scheduler:
|
||||
这是另一个设置所有的[固态驱动器][15]使用 [NOOP][16] I/O 调度程序的示例:
|
||||
|
||||
```
|
||||
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
|
||||
@ -393,11 +379,7 @@ ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue
|
||||
END
|
||||
```
|
||||
|
||||
Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
|
||||
|
||||
* * *
|
||||
|
||||
_Photo by _[ _James Donovan_][17]_ on _[_Unsplash_][18]_._
|
||||
更改 I/O 调度程序不会影响设备的原始吞吐量,但通过让前台任务的带宽优先于后台任务,或消除不必要的块重新排序,可能会使你的 PC 看起来响应更快。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -405,8 +387,8 @@ via: https://fedoramagazine.org/check-storage-performance-with-dd/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
57
published/20190611 What is a Linux user.md
Normal file
@ -0,0 +1,57 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "qfzy1233 "
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-11231-1.html"
|
||||
[#]: subject: "What is a Linux user?"
|
||||
[#]: via: "https://opensource.com/article/19/6/what-linux-user"
|
||||
[#]: author: "Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth"
|
||||
|
||||
何谓 Linux 用户?
|
||||
======
|
||||
|
||||
> “Linux 用户”这一定义已经拓展到了更大的范围,同时也发生了巨大的改变。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/15/211706trkbzp8juhbzifia.jpg)
|
||||
|
||||
*编者按: 本文更新于 2019 年 6 月 11 日下午 1:15:19,以更准确地反映作者对 Linux 社区中开放和包容的社区性的看法。*
|
||||
|
||||
再有不到两年,Linux 内核就要迎来它 30 岁的生日了。让我们回想一下!1991 年的时候你在哪里?你出生了吗?那年我 13 岁!在 1991 到 1993 年间只推出了少数几款 Linux 发行版,其中至少有三个(Slackware、Debian 和 Red Hat)为 Linux 运动的发展提供了[支柱][2]。
|
||||
|
||||
当年获得 Linux 发行版的副本,并在笔记本或服务器上进行安装和配置,和今天相比是很不一样的。当时是十分艰难的!也是令人沮丧的!如果你能让它运行起来,就是一个了不起的成就!我们不得不与不兼容的硬件、设备上的配置跳线、BIOS 问题以及许多其他问题作斗争。甚至即使硬件是兼容的,很多时候,你仍然需要编译内核、模块和驱动程序才能让它们在你的系统上工作。
|
||||
|
||||
如果你当时经历过那些,你可能会表示赞同。有些读者甚至称它们为“美好的过往”,因为选择使用 Linux 意味着仅仅是为了让操作系统继续运行,你就必须学习操作系统、计算机体系架构、系统管理、网络,甚至编程。但我并不赞同他们的说法,窃以为:Linux 在 IT 行业带给我们的最让人惊讶的改变就是,它成为了我们每个人技术能力的基础组成部分!
|
||||
|
||||
将近 30 年过去了,无论是在桌面还是服务器领域,Linux 系统都有了脱胎换骨的变化。你可以在汽车上,在飞机上,家用电器上,智能手机上……几乎任何地方发现 Linux 的影子!你甚至可以购买预装 Linux 的笔记本电脑、台式机和服务器。如果你考虑云计算,企业甚至个人都可以一键部署 Linux 虚拟机,由此可见 Linux 的应用已经变得多么普遍了。
|
||||
|
||||
考虑到这些,我想问你的问题是:**这个时代如何定义“Linux 用户”?**
|
||||
|
||||
如果你从 System76 或 Dell 为你的父母或祖父母购买一台 Linux 笔记本电脑,为其登录好他们的社交媒体和电子邮件,并告诉他们经常单击“系统升级”,那么他们现在就是 Linux 用户了。如果你是在 Windows 或 MacOS 机器上进行以上操作,那么他们就是 Windows 或 MacOS 用户。令人难以置信的是,与 90 年代不同,现在的 Linux 任何人都可以轻易上手。
|
||||
|
||||
由于种种原因,这在很大程度上要归因于 Web 浏览器成为了桌面计算机上的“杀手级应用程序”。现在,许多用户并不关心他们使用的是什么操作系统,只要他们能够访问到他们的应用程序或服务。
|
||||
|
||||
你知道有多少人经常使用他们的电话、桌面或笔记本电脑,但不会管理他们系统上的文件、目录和驱动程序?又有多少人不会安装“应用程序商店”没有收录的二进制文件程序?更不要提从头编译应用程序,对我来说,几乎全是这样的。这正是成熟的开源软件和相应的生态对于易用性的改进的动人之处。
|
||||
|
||||
今天的 Linux 用户不需要像上世纪 90 年代或 21 世纪初的 Linux 用户那样了解、学习甚至查询信息,这并不是一件坏事。过去那种认为 Linux 只适合工科男使用的想法已经一去不复返了。
|
||||
|
||||
对于那些对计算机、操作系统以及在自由软件上创建、使用和协作的想法感兴趣、好奇、着迷的 Linux 用户来说,Linux 依旧有研究的空间。如今在 Windows 和 MacOS 上也有同样多的空间留给创造性的开源贡献者。今天,成为 Linux 用户就是成为一名与 Linux 系统同行的人。这是一件很棒的事情。
|
||||
|
||||
### Linux 用户定义的转变
|
||||
|
||||
当我开始使用 Linux 时,作为一个 Linux 用户意味着知道操作系统如何以各种方式、形态和形式运行。Linux 在某种程度上已经成熟,这使得“Linux 用户”的定义可以包含更广泛的领域及那些领域里的人们。这可能是显而易见的一点,但重要的还是要说清楚:任何 Linux 用户皆“生”而平等。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/what-linux-user
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qfzy1233](https://github.com/qfzy1233)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
|
||||
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
|
@ -1,38 +1,38 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11243-1.html)
|
||||
[#]: subject: (How To Check Linux Package Version Before Installing It)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Check Linux Package Version Before Installing It
|
||||
如何在安装之前检查 Linux 软件包的版本?
|
||||
======
|
||||
|
||||
![Check Linux Package Version][1]
|
||||
|
||||
Most of you will know how to [**find the version of an installed package**][2] in Linux. But, what would you do to find the packages’ version which are not installed in the first place? No problem! This guide describes how to check Linux package version before installing it in Debian and its derivatives like Ubuntu. This small tip might be helpful for those wondering what version they would get before installing a package.
|
||||
大多数人都知道如何在 Linux 中[查找已安装软件包的版本][2],但是,你会如何查找那些还没有安装的软件包的版本呢?很简单!本文将介绍在 Debian 及其衍生品(如 Ubuntu)中,如何在软件包安装之前检查它的版本。对于那些想在安装之前知道软件包版本的人来说,这个小技巧可能会有所帮助。
|
||||
|
||||
### Check Linux Package Version Before Installing It
|
||||
### 在安装之前检查 Linux 软件包版本
|
||||
|
||||
There are many ways to find a package’s version even if it is not installed already in DEB-based systems. Here I have given a few methods.
|
||||
在基于 DEB 的系统中,即使软件包还没有安装,也有很多方法可以查看它的版本。接下来,我将一一介绍。
|
||||
|
||||
##### Method 1 – Using Apt
|
||||
#### 方法 1 – 使用 Apt
|
||||
|
||||
The quick and dirty way to check a package version, simply run:
|
||||
检查软件包的版本的懒人方法:
|
||||
|
||||
```
|
||||
$ apt show <package-name>
|
||||
```
|
||||
|
||||
**Example:**
|
||||
**示例:**
|
||||
|
||||
```
|
||||
$ apt show vim
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
**示例输出:**
|
||||
|
||||
```
|
||||
Package: vim
|
||||
@ -67,23 +67,21 @@ Description: Vi IMproved - enhanced vi editor
|
||||
N: There is 1 additional record. Please use the '-a' switch to see it
|
||||
```
|
||||
|
||||
As you can see in the above output, “apt show” command displays, many important details of the package such as,
|
||||
正如你在上面的输出中看到的,`apt show` 命令显示了软件包许多重要的细节,例如:
|
||||
|
||||
1. package name,
|
||||
2. version,
|
||||
3. origin (from where the vim comes from),
|
||||
4. maintainer,
|
||||
5. home page of the package,
|
||||
6. dependencies,
|
||||
7. download size,
|
||||
8. description,
|
||||
9. and many.
|
||||
1. 包名称,
|
||||
2. 版本,
|
||||
3. 来源(vim 来自哪里),
|
||||
4. 维护者,
|
||||
5. 包的主页,
|
||||
6. 依赖,
|
||||
7. 下载大小,
|
||||
8. 简介,
|
||||
9. 其他。
|
||||
|
||||
因此,Ubuntu 仓库中可用的 Vim 版本是 **8.0.1453**。如果我把它安装到我的 Ubuntu 系统上,就会得到这个版本。
|
||||
|
||||
|
||||
So, the available version of Vim package in the Ubuntu repositories is **8.0.1453**. This is the version I get if I install it on my Ubuntu system.
|
||||
|
||||
Alternatively, use **“apt policy”** command if you prefer short output:
|
||||
或者,如果你不想看那么多的内容,那么可以使用 `apt policy` 这个命令:
|
||||
|
||||
```
|
||||
$ apt policy vim
|
||||
@ -98,7 +96,7 @@ vim:
|
||||
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
```
|
||||
|
||||
Or even shorter:
|
||||
甚至更短:
|
||||
|
||||
```
|
||||
$ apt list vim
|
||||
@ -107,17 +105,17 @@ vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
|
||||
N: There is 1 additional version. Please use the '-a' switch to see it
|
||||
```
|
||||
|
||||
**Apt** is the default package manager in recent Ubuntu versions. So, this command is just enough to find the detailed information of a package. It doesn’t matter whether given package is installed or not. This command will simply list the given package’s version along with all other details.
|
||||
`apt` 是 Ubuntu 最新版本的默认包管理器。因此,这个命令足以找到一个软件包的详细信息,给定的软件包是否安装并不重要。这个命令将简单地列出给定包的版本以及其他详细信息。
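前面两处输出的末尾都提示还有额外的版本或记录,可以用 `-a` 开关查看。下面是一个简单的示意:

```
$ apt list -a vim   # 列出 vim 所有可用的版本
$ apt show -a vim   # 显示每个版本的完整细节
```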
|
||||
|
||||
##### Method 2 – Using Apt-get
|
||||
#### 方法 2 – 使用 Apt-get
|
||||
|
||||
To find a package version without installing it, we can use **apt-get** command with **-s** option.
|
||||
要查看软件包的版本而不安装它,我们可以使用 `apt-get` 命令和 `-s` 选项。
|
||||
|
||||
```
|
||||
$ apt-get -s install vim
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
**示例输出:**
|
||||
|
||||
```
|
||||
NOTE: This is only a simulation!
|
||||
@ -136,19 +134,19 @@ Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic
|
||||
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
|
||||
```
|
||||
|
||||
Here, -s option indicates **simulation**. As you can see in the output, It performs no action. Instead, It simply performs a simulation to let you know what is going to happen when you install the Vim package.
|
||||
这里,`-s` 选项代表 **模拟**。正如你在输出中看到的,它不执行任何操作。相反,它只是模拟执行,好让你知道在安装 Vim 时会发生什么。
|
||||
|
||||
You can substitute “install” option with “upgrade” option to see what will happen when you upgrade a package.
|
||||
你可以将 `install` 选项替换为 `upgrade`,以查看升级包时会发生什么。
|
||||
|
||||
```
|
||||
$ apt-get -s upgrade vim
|
||||
```
|
||||
|
||||
##### Method 3 – Using Aptitude
|
||||
#### 方法 3 – 使用 Aptitude
|
||||
|
||||
**Aptitude** is an ncurses and commandline-based front-end to APT package manger in Debian and its derivatives.
|
||||
在 Debian 及其衍生品中,`aptitude` 是一个基于 ncurses(LCTT 译注:ncurses 是终端基于文本的字符处理的库)和命令行的前端 APT 包管理器。
|
||||
|
||||
To find the package version with Aptitude, simply run:
|
||||
使用 aptitude 来查看软件包的版本,只需运行:
|
||||
|
||||
```
|
||||
$ aptitude versions vim
|
||||
@ -156,7 +154,7 @@ p 2:8.0.1453-1ubuntu1
|
||||
p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500
|
||||
```
|
||||
|
||||
You can also use simulation option ( **-s** ) to see what would happen if you install or upgrade package.
|
||||
你还可以使用模拟选项(`-s`)来查看安装或升级包时会发生什么。
|
||||
|
||||
```
|
||||
$ aptitude -V -s install vim
|
||||
@ -167,33 +165,29 @@ Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
|
||||
Would download/install/remove packages.
|
||||
```
|
||||
|
||||
Here, **-V** flag is used to display detailed information of the package version.
|
||||
|
||||
Similarly, just substitute “install” with “upgrade” option to see what would happen if you upgrade a package.
|
||||
这里,`-V` 标志用于显示软件包的详细信息。类似的,只需将 `install` 替换为 `upgrade` 选项,即可查看升级软件包时会发生什么:
|
||||
|
||||
```
|
||||
$ aptitude -V -s upgrade vim
|
||||
```
|
||||
|
||||
Another way to find the non-installed package’s version using Aptitude command is:
|
||||
使用 `aptitude` 命令查找未安装软件包版本的另一种方法是:
|
||||
|
||||
```
|
||||
$ aptitude search vim -F "%c %p %d %V"
|
||||
```
|
||||
|
||||
Here,
|
||||
这里,
|
||||
|
||||
* **-F** is used to specify which format should be used to display the output,
|
||||
* **%c** – status of the given package (installed or not installed),
|
||||
* **%p** – name of the package,
|
||||
* **%d** – description of the package,
|
||||
* **%V** – version of the package.
|
||||
* `-F` 用于指定应使用哪种格式来显示输出,
|
||||
* `%c` – 包的状态(已安装或未安装),
|
||||
* `%p` – 包的名称,
|
||||
* `%d` – 包的简介,
|
||||
* `%V` – 包的版本。
|
||||
|
||||
当你不知道完整的软件包名称时,这非常有用。这个命令将列出包含给定字符串(即 vim)的所有软件包。
|
||||
|
||||
|
||||
This is helpful when you don’t know the full package name. This command will list all packages that contains the given string (i.e vim).
|
||||
|
||||
Here is the sample output of the above command:
|
||||
以下是上述命令的示例输出:
|
||||
|
||||
```
|
||||
[...]
|
||||
@ -207,17 +201,17 @@ p vim-voom Vim two-pane out
|
||||
p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git
|
||||
```
|
||||
|
||||
##### Method 4 – Using Apt-cache
|
||||
#### 方法 4 – 使用 Apt-cache
|
||||
|
||||
**Apt-cache** command is used to query APT cache in Debian-based systems. It is useful for performing many operations on APT’s package cache. One fine example is we can [**list installed applications from a certain repository/ppa**][3].
|
||||
`apt-cache` 命令用于查询基于 Debian 的系统中的 APT 缓存。对于要在 APT 的包缓存上执行很多操作时,它很有用。一个很好的例子是我们可以从[某个仓库或 ppa 中列出已安装的应用程序][3]。
|
||||
|
||||
Not just installed applications, we can also find the version of a package even if it is not installed. For instance, the following command will find the version of Vim package:
|
||||
不仅是已安装的应用程序,我们还可以找到软件包的版本,即使它没有被安装。例如,以下命令将找到 Vim 的版本:
|
||||
|
||||
```
|
||||
$ apt-cache policy vim
|
||||
```
|
||||
|
||||
Sample output:
|
||||
示例输出:
|
||||
|
||||
```
|
||||
vim:
|
||||
@ -231,19 +225,19 @@ vim:
|
||||
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
```
|
||||
|
||||
As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. It also displays from which repository the vim package is coming from.
|
||||
正如你在上面的输出中所看到的,Vim 并没有安装。如果你想安装它,你会知道它的版本是 **8.0.1453**。它还显示 vim 包来自哪个仓库。
|
||||
|
||||
##### Method 5 – Using Apt-show-versions
|
||||
#### 方法 5 – 使用 Apt-show-versions
|
||||
|
||||
**Apt-show-versions** command is used to list installed and available package versions in Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment. For instance, if you have enabled both stable and testing repositories, you can easily find the list of applications from testing and also you can upgrade all packages in testing.
|
||||
在 Debian 和基于 Debian 的系统中,`apt-show-versions` 命令用于列出已安装和可用软件包的版本。它还显示所有可升级软件包的列表。如果你有一个混合的稳定或测试环境,这是非常方便的。例如,如果你同时启用了稳定和测试仓库,那么你可以轻松地从测试库找到应用程序列表,还可以升级测试库中的所有软件包。
|
||||
|
||||
Apt-show-versions is not installed by default. You need to install it using command:
|
||||
默认情况下系统没有安装 `apt-show-versions`,你需要使用以下命令来安装它:
|
||||
|
||||
```
|
||||
$ sudo apt-get install apt-show-versions
|
||||
```
|
||||
|
||||
Once installed, run the following command to find the version of a package,for example Vim:
|
||||
安装后,运行以下命令查找软件包的版本,例如 Vim:
|
||||
|
||||
```
|
||||
$ apt-show-versions -a vim
|
||||
@ -253,15 +247,15 @@ vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
|
||||
vim:amd64 not installed
|
||||
```
|
||||
|
||||
Here, **-a** switch prints all available versions of the given package.
|
||||
这里,`-a` 选项打印给定软件包的所有可用版本。
|
||||
|
||||
If the given package is already installed, you need not to use **-a** option. In that case, simply run:
|
||||
如果已经安装了给定的软件包,那么就不需要使用 `-a` 选项。在这种情况下,只需运行:
|
||||
|
||||
```
|
||||
$ apt-show-versions vim
|
||||
```
|
||||
|
||||
And, that’s all. If you know any other methods, please share them in the comment section below. I will check and update this guide.
|
||||
就是这些了。如果你还了解其他方法,请在下面的评论中分享,我将检查并更新本指南。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -269,8 +263,8 @@ via: https://www.ostechnix.com/how-to-check-linux-package-version-before-install
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,183 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11221-1.html)
|
||||
[#]: subject: (How to install Elasticsearch and Kibana on Linux)
|
||||
[#]: via: (https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
如何在 Linux 上安装 Elasticsearch 和 Kibana
|
||||
======
|
||||
|
||||
> 获取我们关于安装两者的简化说明。
|
||||
|
||||
![5 pengiuns floating on iceburg][1]
|
||||
|
||||
如果你渴望学习基于开源 Lucene 库的著名开源搜索引擎 Elasticsearch,那么没有比在本地安装它更好的方法了。这个过程在 [Elasticsearch 网站][2]中有详细介绍,但如果你是初学者,官方说明就比必要的信息多得多。本文采用一种简化的方法。
|
||||
|
||||
### 添加 Elasticsearch 仓库
|
||||
|
||||
首先,将 Elasticsearch 仓库添加到你的系统,以便你可以根据需要安装它并接收更新。如何做取决于你的发行版。在基于 RPM 的系统上,例如 [Fedora][3]、[CentOS][4]、[Red Hat Enterprise Linux(RHEL)][5] 或 [openSUSE][6] 上(本文中引用 Fedora 或 RHEL 的地方也适用于 CentOS 和 openSUSE),在 `/etc/yum.repos.d/` 中创建一个名为 `elasticsearch.repo` 的仓库描述文件:
|
||||
|
||||
```
|
||||
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
|
||||
[elasticsearch-7.x]
|
||||
name=Elasticsearch repository for 7.x packages
|
||||
baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum
|
||||
gpgcheck=1
|
||||
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
enabled=1
|
||||
autorefresh=1
|
||||
type=rpm-md
|
||||
EOF
|
||||
```
|
||||
|
||||
在 Ubuntu 或 Debian 上,不要使用 `add-apt-repository` 工具,它的默认设置和 Elasticsearch 仓库所提供的不匹配,会导致错误。相反,请这样设置:
|
||||
|
||||
```
|
||||
$ echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
|
||||
```
|
||||
|
||||
在你从该仓库安装之前,导入 GPG 公钥,然后更新:
|
||||
|
||||
```
|
||||
$ sudo apt-key adv --keyserver \
|
||||
hkp://keyserver.ubuntu.com:80 \
|
||||
--recv D27D666CD88E42B4
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
此仓库仅包含 Elasticsearch 的开源功能,它们在 [Apache 许可证][7]下发布,不含订阅版才有的额外功能。如果你需要仅限订阅的功能(这些功能**并不**开源),那么 `baseurl` 必须设置为:
|
||||
|
||||
```
|
||||
baseurl=https://artifacts.elastic.co/packages/7.x/yum
|
||||
```
|
||||
|
||||
### 安装 Elasticsearch
|
||||
|
||||
你需要安装的软件包的名称取决于你使用的是开源版本还是订阅版本。本文使用开源版本,包名最后有 `-oss` 后缀。如果包名后没有 `-oss`,那么表示你请求的是仅限订阅版本。
|
||||
|
||||
如果你配置的是订阅版本的仓库却尝试安装开源版本,就会收到含糊不清的错误。如果你配置的是开源版本的仓库却没有将 `-oss` 添加到包名后,那么你也会收到错误。
|
||||
|
||||
使用包管理器安装 Elasticsearch。例如,在 Fedora、CentOS 或 RHEL 上运行以下命令:
|
||||
|
||||
```
|
||||
$ sudo dnf install elasticsearch-oss
|
||||
```
|
||||
|
||||
在 Ubuntu 或 Debian 上,运行:
|
||||
|
||||
```
|
||||
$ sudo apt install elasticsearch-oss
|
||||
```
|
||||
|
||||
如果你在安装 Elasticsearch 时遇到错误,那么你可能安装的是错误的软件包。如果你想如本文这样使用开源包,那么请确保使用正确的 apt 仓库,或在 Yum 配置中设置了正确的 `baseurl`。
|
||||
|
||||
### 启动并启用 Elasticsearch
|
||||
|
||||
安装 Elasticsearch 后,你必须启动并启用它:
|
||||
|
||||
```
|
||||
$ sudo systemctl daemon-reload
|
||||
$ sudo systemctl enable --now elasticsearch.service
|
||||
```
|
||||
|
||||
要确认 Elasticsearch 在其默认端口 9200 上运行,请在 Web 浏览器中打开 `localhost:9200`。你可以使用 GUI 浏览器,也可以在终端中执行此操作:
|
||||
|
||||
|
||||
```
|
||||
$ curl localhost:9200
|
||||
{
|
||||
|
||||
"name" : "fedora30",
|
||||
"cluster_name" : "elasticsearch",
|
||||
"cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
|
||||
"version" : {
|
||||
"number" : "7.2.0",
|
||||
"build_flavor" : "oss",
|
||||
"build_type" : "rpm",
|
||||
"build_hash" : "508c38a",
|
||||
"build_date" : "2019-06-20T15:54:18.811730Z",
|
||||
"build_snapshot" : false,
|
||||
"lucene_version" : "8.0.0",
|
||||
"minimum_wire_compatibility_version" : "6.8.0",
|
||||
"minimum_index_compatibility_version" : "6.0.0-beta1"
|
||||
},
|
||||
"tagline" : "You Know, for Search"
|
||||
}
|
||||
```
|
||||
|
||||
### 安装 Kibana
|
||||
|
||||
Kibana 是 Elasticsearch 数据可视化的图形界面。它包含在 Elasticsearch 仓库中,因此你可以使用包管理器进行安装。与 Elasticsearch 本身一样,如果你使用的是 Elasticsearch 的开源版本,那么必须将 `-oss` 放到包名最后,订阅版本则不用(两者的安装需要匹配):
|
||||
|
||||
|
||||
```
|
||||
$ sudo dnf install kibana-oss
|
||||
```
|
||||
|
||||
在 Ubuntu 或 Debian 上:
|
||||
|
||||
```
|
||||
$ sudo apt install kibana-oss
|
||||
```
|
||||
|
||||
Kibana 在端口 5601 上运行,因此打开图形化 Web 浏览器并进入 `localhost:5601` 来开始使用 Kibana,如下所示:
|
||||
|
||||
![Kibana running in Firefox.][8]
|
||||
|
||||
### 故障排除
|
||||
|
||||
如果在安装 Elasticsearch 时出现错误,请尝试手动安装 Java 环境。在 Fedora、CentOS 和 RHEL 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install java-openjdk-devel java-openjdk
|
||||
```
|
||||
|
||||
在 Ubuntu 上:
|
||||
|
||||
```
|
||||
$ sudo apt install default-jdk
|
||||
```
|
||||
|
||||
如果所有其他方法都失败,请尝试直接从 Elasticsearch 服务器安装 Elasticsearch RPM:
|
||||
|
||||
```
|
||||
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86_64.rpm{,.sha512}
|
||||
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
|
||||
```
|
||||
|
||||
在 Ubuntu 或 Debian 上,请使用 DEB 包。
|
||||
|
||||
如果你无法使用 Web 浏览器访问 Elasticsearch 或 Kibana,那么可能是你的防火墙阻止了这些端口。你可以通过调整防火墙设置来允许这些端口上的流量。例如,如果你运行的是 `firewalld`(Fedora 和 RHEL 上的默认防火墙,并且可以在 Debian 和 Ubuntu 上安装),那么你可以使用 `firewall-cmd`:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --add-port=9200/tcp --permanent
|
||||
$ sudo firewall-cmd --add-port=5601/tcp --permanent
|
||||
$ sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
设置完成了,你可以关注我们接下来的 Elasticsearch 和 Kibana 安装文章。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B (5 pengiuns floating on iceburg)
|
||||
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html
|
||||
[3]: https://getfedora.org
|
||||
[4]: https://www.centos.org
|
||||
[5]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
|
||||
[6]: https://www.opensuse.org
|
||||
[7]: http://www.apache.org/licenses/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/kibana.jpg (Kibana running in Firefox.)
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11202-1.html)
|
||||
[#]: subject: (What is a golden image?)
|
||||
[#]: via: (https://opensource.com/article/19/7/what-golden-image)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
@ -12,27 +12,27 @@
|
||||
|
||||
> 正在开发一个将广泛分发的项目吗?了解一下黄金镜像吧,以便在出现问题时轻松恢复到“完美”状态。
|
||||
|
||||
![Gold star][1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/08/184231ivv745lukchbrhul.jpg)
|
||||
|
||||
如果你正在进行质量保证、系统管理或媒体制作(没想到吧),你可能听说过<ruby>正式版<rt>gold master</rt></ruby>、<ruby>黄金镜像<rt>golden image</rt></ruby>或<ruby>母片<rt>master image</rt></ruby>等等这一术语的某些变体。这个术语已经进入了每个参与创建一个**完美**模具的人的集体意识,然后从该模具中产生许多复制品。母片或黄金镜像就是:一种虚拟模具,你可以从中打造可分发的模型。
|
||||
如果你从事质量保证、系统管理或媒体制作(没想到吧),你可能听说过<ruby>正式版<rt>gold master</rt></ruby>这一术语的某些变体,如<ruby>黄金镜像<rt>golden image</rt></ruby>或<ruby>母片<rt>master image</rt></ruby>等等。这个术语已经进入了每个参与创建**完美**模具的人的集体意识,然后从该模具中产生许多复制品。母片或黄金镜像就是:一种虚拟模具,你可以从中打造可分发的模型。
|
||||
|
||||
在媒体制作中,这就是全体人员努力开发母片的过程。这个最终产品是独一无二的。它看起来和听起来像是能够看和听的最好的电影或专辑(或其他任何东西)。可以制作和压缩该母片的副本并发送给急切的公众。
|
||||
在媒体制作中,这就是所有人努力开发母片的过程。这个最终产品是独一无二的。它看起来和听起来像是可以看和听的最好的电影或专辑(或其他任何东西)。可以制作和压缩该母片的副本并发送给急切的公众。
|
||||
|
||||
在软件中,与该术语相关联的也是类似的思路。一旦软件经过编译和一再测试,完美的构建就会被声明为**黄金版本**。不允许对它进一步更改,并且所有可分发的副本都是从此母片生成的(当软件是用 CD 或 DVD 分发时,这实际上意味着母盘)。
|
||||
在软件中,与该术语相关联的也是类似的意思。一旦软件经过编译和一再测试,完美的构建成果就会被声明为**黄金版本**,不允许对它进一步更改,并且所有可分发的副本都是从此母片生成的(当软件是用 CD 或 DVD 分发时,这实际上就是母盘)。
|
||||
|
||||
在系统管理中,你可能会遇到你的机构所选的操作系统的黄金镜像,其中的重要设置已经就绪,如安装好的虚拟专用网络(VPN)证书、设置好的电子邮件收件服务器的邮件客户端,等等。同样,你可能也会在虚拟机(VM)的世界中听到这个术语,其中精心配置的虚拟驱动器的黄金镜像是所有克隆的新虚拟机的源头。
|
||||
在系统管理中,你可能会遇到你的机构所选的操作系统的黄金镜像,其中的重要设置已经就绪,如安装好的虚拟专用网络(VPN)证书、设置好的电子邮件收件服务器的邮件客户端等等。同样,你可能也会在虚拟机(VM)的世界中听到这个术语,其中精心配置了虚拟驱动器的黄金镜像是所有克隆的新虚拟机的源头。
|
||||
|
||||
### GNOME Boxes
|
||||
|
||||
正式版的概念很简单,但它的实践往往被忽视。有时,你的团队很高兴能够达成他们的目标,但没有人停下来考虑将这些成就指定为权威版本。而有时,则是缺少一个简单的机制来做到这一点。
|
||||
|
||||
黄金镜像等同于部分历史保存和提前备份计划。一旦你制作了一个完美的模型,无论你正在努力做什么,你都应该为自己保留这项工作,因为它不仅标志着你的进步,而且如果你继续工作时遇到问题,它就会成为一个后备。
|
||||
黄金镜像等同于部分历史的保存和提前备份计划。一旦你制作了一个完美的模型,无论你正在努力做什么,你都应该为自己保留这项工作,因为它不仅标志着你的进步,而且如果你继续工作时遇到问题,它就会成为一个后备。
|
||||
|
||||
[GNOME Boxes][2],是随 GNOME 桌面一起提供的虚拟化平台,可以提供简单的演示。如果你从未使用过GNOME Boxes,你可以在 Alan Formy-Duval 的文章 [GNOME Boxes 入门][3]中学习它的基础知识。
|
||||
[GNOME Boxes][2],是随 GNOME 桌面一起提供的虚拟化平台,可以用作简单的演示用途。如果你从未使用过 GNOME Boxes,你可以在 Alan Formy-Duval 的文章 [GNOME Boxes 入门][3]中学习它的基础知识。
|
||||
|
||||
想象一下,你使用 GNOME Boxes 创建虚拟机,然后将操作系统安装到该 VM 中。现在,你想要制作一个黄金镜像。GNOME Boxes 已经率先摄取了你的安装快照,可以作为库存的操作系统安装的黄金镜像。
|
||||
想象一下,你使用 GNOME Boxes 创建虚拟机,然后将操作系统安装到该 VM 中。现在,你想要制作一个黄金镜像。GNOME Boxes 已经率先为你的安装拍摄了快照,它可以作为更多的操作系统安装的黄金镜像。
|
||||
|
||||
打开 GNOME Boxes 并在仪表板视图中,右键单击任何虚拟机,然后选择**属性**。 在**属性**窗口中,选择**快照**选项卡。由 GNOME Boxes 自动创建的第一个快照是“Just Installed”。顾名思义,这是你最初安装到虚拟机上的操作系统。
|
||||
打开 GNOME Boxes 并在仪表板视图中,右键单击任何虚拟机,然后选择**属性**。在**属性**窗口中,选择**快照**选项卡。由 GNOME Boxes 自动创建的第一个快照是“Just Installed”。顾名思义,这是你最初安装到虚拟机上的操作系统。
|
||||
|
||||
![The Just Installed snapshot, or initial golden image, in GNOME Boxes.][4]
|
||||
|
||||
@ -52,7 +52,7 @@
|
||||
|
||||
### 黄金镜像
|
||||
|
||||
很少有学科无法从黄金镜像中受益。无论你是在 [Git][8] 中标记版本、在 Boxes 中拍摄快照、出版原型黑胶唱片、打印书籍以进行审核、设计用于批量生产的丝网印刷、还是制作文字模具,到处都是各种原型。这只是现代技术让我们人类更聪明而不是更努力的另一种方式,因此为你的项目制作一个黄金镜像,并根据需要随时生成克隆。
|
||||
很少有学科无法从黄金镜像中受益。无论你是在 [Git][8] 中标记版本、在 Boxes 中拍摄快照、出版原型黑胶唱片、打印书籍以进行审核、设计用于批量生产的丝网印刷、还是制作文字模具,到处都是各种原型。这只是现代技术让我们人类更聪明而不是更努力的另一种方式,因此为你的项目制作一个黄金镜像,并根据需要随时生成克隆吧。
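以文中提到的 Git 为例,下面是一个简单的示意,展示如何用标签把某个提交“定格”为黄金版本(标签名为假设的示例):

```
$ git tag -a v1.0-golden -m "golden master"   # 给当前提交打上注释标签
$ git push origin v1.0-golden                 # 把标签分享给整个团队
$ git checkout v1.0-golden                    # 随时回到这个“完美”状态
```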
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -61,7 +61,7 @@ via: https://opensource.com/article/19/7/what-golden-image
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
349
published/20190715 Understanding software design patterns.md
Normal file
@ -0,0 +1,349 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (arrowfeng)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11208-1.html)
|
||||
[#]: subject: (Understanding software design patterns)
|
||||
[#]: via: (https://opensource.com/article/19/7/understanding-software-design-patterns)
|
||||
[#]: author: (Bryant Son https://opensource.com/users/brsonhttps://opensource.com/users/erezhttps://opensource.com/users/brson)
|
||||
|
||||
理解软件设计模式
|
||||
======
|
||||
> 设计模式可以帮助消除冗余代码。学习如何利用 Java 使用单例模式、工厂模式和观察者模式。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/10/080849ygyqtrw88f2qtzk4.jpg)
|
||||
|
||||
如果你是一名程序员,或者是计算机科学及相关学科的学生,你很快就会遇到 “<ruby>软件设计模式<rt>software design pattern</rt></ruby>” 这个术语。根据维基百科,“*[软件设计模式][2]是在平常的软件设计工作中所遭遇的问题的一种通用的、可重复使用的解决方案*”。我对该定义的理解是:当从事一个编码项目时,你经常会思考,“嗯,这里貌似是冗余代码,我觉得是否能改变这些代码使之更灵活和便于修改?”因此,你会开始考虑怎样分割那些保持不变的内容和需要经常改变的内容。
|
||||
|
||||
> **设计模式**是一种通过分割那些保持不变的部分和经常变化的部分,让你的代码更容易修改的方法。
|
||||
|
||||
不出意外的话,每个从事编程项目的人都可能会有同样的思考。特别是那些工业级别的项目,在那里通常工作着数十甚至数百名开发者;协作过程表明必须有一些标准和规则来使代码更加优雅并适应变化。这就是为什么我们有了 [面向对象编程][3](OOP)和 [软件框架工具][4]。设计模式有点类似于 OOP,但它通过将变化视为自然开发过程的一部分而进一步发展。基本上,设计模式利用了一些 OOP 的思想,比如抽象和接口,但是专注于改变的过程。
|
||||
|
||||
当你开始开发项目时,你经常会听到这样一个术语*重构*,它意味着*通过改变代码使它变得更优雅和可复用*;这就是设计模式耀眼的地方。当你处理现有代码时(无论是由其他人构建还是你自己过去构建的),了解设计模式可以帮助你以不同的方式看待事物,你将发现问题以及改进代码的方法。
|
||||
|
||||
有很多种设计模式,其中单例模式、工厂模式和观察者模式三种最受欢迎,在这篇文章中我将会一一介绍它们。
|
||||
|
||||
### 如何遵循本指南
|
||||
|
||||
无论你是一位有经验的编程工作者,还是一名刚刚接触编程的新手,我都希望这篇教程容易理解。设计模式的概念并不容易领会,降低入门时的学习曲线始终是首要任务。因此,除了这篇带有图表和代码片段的文章外,我还创建了一个 [GitHub 仓库][5],你可以克隆仓库并在你的电脑上运行这些代码来实现这三种设计模式。你也可以观看我创建的 [YouTube 视频][6]。
|
||||
|
||||
#### 必要条件
|
||||
|
||||
如果你只是想了解一般的设计模式思想,则无需克隆示例项目或安装任何工具。但是,如果要运行示例代码,你需要安装以下工具:
|
||||
|
||||
* **Java 开发套件(JDK)**:我强烈建议使用 [OpenJDK][7]。
|
||||
* **Apache Maven**:这个简单的项目使用 [Apache Maven][8] 构建;好在许多 IDE 都自带了 Maven。
|
||||
* **交互式开发编辑器(IDE)**:我使用 [社区版 IntelliJ][9],但是你也可以使用 [Eclipse IDE][10] 或者其他你喜欢的 Java IDE。
|
||||
* **Git**:如果你想克隆这个工程,你需要 [Git][11] 客户端。
|
||||
|
||||
安装好 Git 后运行下列命令克隆这个工程:
|
||||
|
||||
```
|
||||
git clone https://github.com/bryantson/OpensourceDotComDemos.git
|
||||
```
|
||||
|
||||
然后在你喜欢的 IDE 中,你可以将 TopDesignPatterns 仓库中的代码作为 Apache Maven 项目导入。
|
||||
|
||||
我使用的是 Java,但你也可以使用支持[抽象原则][12]的任何编程语言来实现设计模式。
|
||||
|
||||
### 单例模式:避免每次创建一个对象
|
||||
|
||||
<ruby>[单例模式][13]<rt>singleton pattern</rt></ruby>是非常流行的设计模式,它的实现相对来说很简单,因为你只需要一个类。然而,许多开发人员争论单例设计模式是否利大于弊,因为它缺乏明显的好处并且容易被滥用。很少有开发人员直接实现单例;相反,像 Spring Framework 和 Google Guice 等编程框架内置了单例设计模式的特性。
|
||||
|
||||
但是了解单例模式仍然大有用处。单例模式确保一个类仅创建一个实例,并提供了一个对它的全局访问点。
|
||||
|
||||
> **单例模式**:确保仅创建一个实例且避免在同一个项目中创建多个实例。
|
||||
|
||||
下面这幅图展示了典型的类对象创建过程。当客户端请求创建一个对象时,构造函数会创建或者实例化一个对象,并把这个对象返回给调用者。但是每次请求对象都会发生这样的情况:构造函数被调用,一个新的对象被创建,并返回一个独一无二的对象。我猜面向对象语言的创建者有每次都创建一个新对象的理由,但是单例模式的支持者说这是冗余的且浪费资源。
|
||||
|
||||
![Normal class instantiation][14]
|
||||
|
||||
下面这幅图使用单例模式创建对象。这里,构造函数仅当对象首次通过调用预先设计好的 `getInstance()` 方法时才会被调用。这通常通过检查该值是否为 `null` 来完成,并且这个对象被作为私有变量保存在单例类的内部。下次 `getInstance()` 被调用时,这个类会返回第一次被创建的对象。而没有新的对象产生;它只是返回旧的那一个。
|
||||
|
||||
![Singleton pattern instantiation][15]
|
||||
|
||||
下面这段代码展示了创建单例模式最简单的方法:
|
||||
|
||||
```
|
||||
package org.opensource.demo.singleton;
|
||||
|
||||
public class OpensourceSingleton {
|
||||
|
||||
private static OpensourceSingleton uniqueInstance;
|
||||
|
||||
private OpensourceSingleton() {
|
||||
}
|
||||
|
||||
public static OpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new OpensourceSingleton();
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
在调用方,这里展示了如何调用单例类来获取对象:
|
||||
|
||||
```
|
||||
OpensourceSingleton instance = OpensourceSingleton.getInstance();
|
||||
```
|
||||
|
||||
这段代码很好的验证了单例模式的思想:
|
||||
|
||||
1. 当 `getInstance()` 被调用时,它通过检查 `null` 值来检查对象是否已经被创建。
|
||||
2. 如果值为 `null`,它会创建一个新对象并把它保存到私有域,返回这个对象给调用者。否则直接返回之前被创建的对象。
|
||||
|
||||
单例模式实现的主要问题是它忽略了并行进程。当多个进程使用线程同时访问资源时,这个问题就产生了。对于这种情况有对应的解决方案,它被称为*双重检查锁*,用于多线程安全,如下所示:
|
||||
|
||||
```
|
||||
package org.opensource.demo.singleton;
|
||||
|
||||
public class ImprovedOpensourceSingleton {
|
||||
|
||||
private volatile static ImprovedOpensourceSingleton uniqueInstance;
|
||||
|
||||
private ImprovedOpensourceSingleton() {}
|
||||
|
||||
public static ImprovedOpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
synchronized (ImprovedOpensourceSingleton.class) {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new ImprovedOpensourceSingleton();
|
||||
}
|
||||
}
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
再强调一下前面的观点,确保只有在你认为这是一个安全的选择时才直接实现你的单例模式。最好的方法是通过使用一个制作精良的编程框架来利用单例功能。
|
||||
|
||||
### 工厂模式:将对象创建委派给工厂类以隐藏创建逻辑
|
||||
|
||||
<ruby>[工厂模式][16]<rt>factory pattern</rt></ruby>是另一种众所周知的设计模式,但是有一小点复杂。实现工厂模式的方法有很多,而下列的代码示例为最简单的实现方式。为了创建对象,工厂模式定义了一个接口,让它的子类去决定实例化哪一个类。
|
||||
|
||||
> **工厂模式**:将对象创建委派给工厂类,因此它能隐藏创建逻辑。
|
||||
|
||||
下列的图片展示了最简单的工厂模式是如何实现的。
|
||||
|
||||
![Factory pattern][17]
|
||||
|
||||
客户端请求工厂类创建类型为 x 的某个对象,而不是客户端直接调用对象创建。根据其类型,工厂模式决定要创建和返回的对象。
|
||||
|
||||
在下列代码示例中,`OpensourceFactory` 是工厂类实现,它从调用者那里获取*类型*并根据该输入值决定要创建和返回的对象:
|
||||
|
||||
```
|
||||
package org.opensource.demo.factory;
|
||||
|
||||
public class OpensourceFactory {
|
||||
|
||||
public OpensourceJVMServers getServerByVendor(String name) {
|
||||
if(name.equals("Apache")) {
|
||||
return new Tomcat();
|
||||
}
|
||||
else if(name.equals("Eclipse")) {
|
||||
return new Jetty();
|
||||
}
|
||||
else if (name.equals("RedHat")) {
|
||||
return new WildFly();
|
||||
}
|
||||
else {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
`OpensourceJVMServers` 是一个 100% 的抽象类(即接口类),它指示要实现的是什么,而不是怎样实现:
|
||||
|
||||
```
|
||||
package org.opensource.demo.factory;
|
||||
|
||||
public interface OpensourceJVMServers {
|
||||
public void startServer();
|
||||
public void stopServer();
|
||||
public String getName();
|
||||
}
|
||||
```
|
||||
|
||||
这是一个 `OpensourceJVMServers` 类的实现示例。当 `RedHat` 被作为类型传递给工厂类,`WildFly` 服务器将被创建:
|
||||
|
||||
```
|
||||
package org.opensource.demo.factory;
|
||||
|
||||
public class WildFly implements OpensourceJVMServers {
|
||||
public void startServer() {
|
||||
System.out.println("Starting WildFly Server...");
|
||||
}
|
||||
|
||||
public void stopServer() {
|
||||
System.out.println("Shutting Down WildFly Server...");
|
||||
}
|
||||
|
||||
public String getName() {
|
||||
return "WildFly";
|
||||
}
|
||||
}
|
||||
```
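作为补充,下面是一个调用该工厂的示意(基于上面展示的类;`FactoryDemo` 类和 `main` 方法是为演示而假设的):

```
package org.opensource.demo.factory;

public class FactoryDemo {
    public static void main(String[] args) {
        OpensourceFactory factory = new OpensourceFactory();

        // 传入 "RedHat",工厂将决定创建并返回一个 WildFly 实例
        OpensourceJVMServers server = factory.getServerByVendor("RedHat");

        System.out.println("Created server: " + server.getName());
        server.startServer();
        server.stopServer();
    }
}
```

注意调用方只依赖 `OpensourceJVMServers` 接口,而不需要知道任何具体实现类,这正是工厂模式隐藏创建逻辑的意义所在。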
|
||||
|
||||
### 观察者模式:订阅主题并获取相关更新的通知
|
||||
|
||||
最后是<ruby>[观察者模式][20]<rt>observer pattern</rt></ruby>。像单例模式那样,很少有专业的程序员直接实现观察者模式。但是,许多消息队列和数据服务实现都借用了观察者模式的概念。观察者模式在对象之间定义了一对多的依赖关系,当一个对象的状态发生改变时,所有依赖它的对象都将被自动地通知和更新。
|
||||
|
||||
> **观察者模式**:如果有更新,那么订阅了该话题/主题的客户端将被通知。
|
||||
|
||||
理解观察者模式的最简单方法是想象一个邮件列表,你可以在其中订阅任何主题,无论是开源、技术、名人、烹饪还是你感兴趣的任何其他内容。每个主题维护着一个订阅者列表,在观察者模式中,订阅者就相当于观察者。当某一个主题更新时,它所有的订阅者(观察者)都将被通知这次改变。并且订阅者总是能取消对某一个主题的订阅。
|
||||
|
||||
如下图所示,客户端可以订阅不同的主题并添加观察者以获得最新信息的通知。因为观察者会持续监听这个主题,所以一旦发生任何改变,观察者就会通知客户端。
|
||||
|
||||
![Observer pattern][21]
|
||||
|
||||
让我们来看看观察者模式的代码示例,从主题/话题类开始:
|
||||
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public interface Topic {
|
||||
|
||||
public void addObserver(Observer observer);
|
||||
public void deleteObserver(Observer observer);
|
||||
public void notifyObservers();
|
||||
}
|
||||
```
|
||||
|
||||
这段代码描述了一个接口,不同的主题可以实现该接口中定义的方法。注意观察者是如何被添加、移除和通知的。
|
||||
|
||||
这是一个主题的实现示例:
|
||||
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.ArrayList;
|
||||
|
||||
public class Conference implements Topic {
|
||||
private List<Observer> listObservers;
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private String nameEvent;
|
||||
|
||||
public Conference() {
|
||||
listObservers = new ArrayList<Observer>();
|
||||
}
|
||||
|
||||
public void addObserver(Observer observer) {
|
||||
listObservers.add(observer);
|
||||
}
|
||||
|
||||
public void deleteObserver(Observer observer) {
|
||||
int i = listObservers.indexOf(observer);
|
||||
if (i >= 0) {
|
||||
listObservers.remove(i);
|
||||
}
|
||||
}
|
||||
|
||||
public void notifyObservers() {
|
||||
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
|
||||
Observer observer = listObservers.get(i);
|
||||
observer.update(totalAttendees,totalSpeakers,nameEvent);
|
||||
}
|
||||
}
|
||||
|
||||
public void setConferenceDetails(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
notifyObservers();
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
这段代码定义了一个特定主题的实现。当发生改变时,这个实现会调用它自己的方法。注意,它把观察者保存在一个列表中,既可以通知观察者,也可以添加和移除观察者。
|
||||
|
||||
这是一个观察者类:
|
||||
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public interface Observer {
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent);
|
||||
}
|
||||
```
|
||||
|
||||
这个类定义了一个接口,不同的观察者可以实现该接口以执行特定的操作。
|
||||
|
||||
例如,实现了该接口的观察者可以在会议上打印出与会者和发言人的数量:
|
||||
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public class MonitorConferenceAttendees implements Observer {
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private String nameEvent;
|
||||
private Topic topic;
|
||||
|
||||
public MonitorConferenceAttendees(Topic topic) {
|
||||
this.topic = topic;
|
||||
topic.addObserver(this);
|
||||
}
|
||||
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
printConferenceInfo();
|
||||
}
|
||||
|
||||
public void printConferenceInfo() {
|
||||
System.out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
|
||||
}
|
||||
}
|
||||
```
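下面是一个把主题和观察者组合起来的示意(基于上面展示的类;`ObserverDemo` 类、`main` 方法以及会议数据均为假设的示例):

```
package org.opensource.demo.observer;

public class ObserverDemo {
    public static void main(String[] args) {
        Conference conference = new Conference();

        // 观察者在构造时把自己注册到主题上
        Observer monitor = new MonitorConferenceAttendees(conference);

        // 更新会议信息会触发 notifyObservers(),
        // 进而调用每个观察者的 update() 并打印会议信息
        conference.setConferenceDetails(300, 25, "Open Source Summit");
    }
}
```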
|
||||
|
||||
### 接下来
|
||||
|
||||
现在你已经阅读了这篇设计模式的入门指南,还可以继续了解其他设计模式,例如外观模式、模板模式和装饰器模式。也有一些针对并发和分布式系统的设计模式,如断路器模式和锚定模式。
|
||||
|
||||
不过,我相信磨砺技能的最好方式,是先在你的业余项目或者练习中实现这些设计模式。你甚至可以开始考虑如何在实际项目中应用它们。接下来,我强烈建议你查看 OOP 的 [SOLID 原则][23]。之后,你就准备好了解其他设计模式了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/understanding-software-design-patterns
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[arrowfeng](https://github.com/arrowfeng)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brsonhttps://opensource.com/users/erezhttps://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
|
||||
[2]: https://en.wikipedia.org/wiki/Software_design_pattern
|
||||
[3]: https://en.wikipedia.org/wiki/Object-oriented_programming
|
||||
[4]: https://en.wikipedia.org/wiki/Software_framework
|
||||
[5]: https://github.com/bryantson/OpensourceDotComDemos/tree/master/TopDesignPatterns
|
||||
[6]: https://www.youtube.com/watch?v=VlBXYtLI7kE&feature=youtu.be
|
||||
[7]: https://openjdk.java.net/
|
||||
[8]: https://maven.apache.org/
|
||||
[9]: https://www.jetbrains.com/idea/download/#section=mac
|
||||
[10]: https://www.eclipse.org/ide/
|
||||
[11]: https://git-scm.com/
|
||||
[12]: https://en.wikipedia.org/wiki/Abstraction_principle_(computer_programming)
|
||||
[13]: https://en.wikipedia.org/wiki/Singleton_pattern
|
||||
[14]: https://opensource.com/sites/default/files/uploads/designpatterns1_normalclassinstantiation.jpg (Normal class instantiation)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/designpatterns2_singletonpattern.jpg (Singleton pattern instantiation)
|
||||
[16]: https://en.wikipedia.org/wiki/Factory_method_pattern
|
||||
[17]: https://opensource.com/sites/default/files/uploads/designpatterns3_factorypattern.jpg (Factory pattern)
|
||||
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
|
||||
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
|
||||
[20]: https://en.wikipedia.org/wiki/Observer_pattern
|
||||
[21]: https://opensource.com/sites/default/files/uploads/designpatterns4_observerpattern.jpg (Observer pattern)
|
||||
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+observer
|
||||
[23]: https://en.wikipedia.org/wiki/SOLID
|
190
published/20190715 What is POSIX- Richard Stallman explains.md
Normal file
@ -0,0 +1,190 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (martin2011qi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11222-1.html)
|
||||
[#]: subject: (What is POSIX? Richard Stallman explains)
|
||||
[#]: via: (https://opensource.com/article/19/7/what-posix-richard-stallman-explains)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
POSIX 是什么?让我们听听 Richard Stallman 的诠释
|
||||
======
|
||||
|
||||
> 从计算机自由先驱的口中探寻操作系统兼容性标准背后的本质。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/13/231737robbwoss7p3p7jwo.jpg)
|
||||
|
||||
[POSIX][2] 是什么?为什么如此重要?你可能在很多的技术类文章中看到这个术语,但往往会在探寻其本质时迷失在<ruby>技术初始主义<rt>techno-initialisms</rt></ruby>的海洋或是<ruby>以 X 结尾的行话<rt>jargon-that-ends-in-X</rt></ruby>中。我给 [Richard Stallman][3] 博士(在黑客圈里面常称之为 RMS)发了邮件以探寻这个术语的起源及其背后的概念。
|
||||
|
||||
Richard Stallman 认为用 “开源” 和 “闭源” 来归类软件是一种错误的方法。Stallman 将程序分类为 <ruby>尊重自由的<rt>freedom-respecting</rt></ruby>(“<ruby>自由<rt>free</rt></ruby>” 或 “<ruby>自由(西语)<rt>libre</rt></ruby>”)和 <ruby>践踏自由的<rt>freedom-trampling</rt></ruby>(“<ruby>非自由<rt>non-free</rt></ruby>” 或 “<ruby>专有<rt>proprietary</rt></ruby>”)。开源讨论通常会为了(用户)实际得到的<ruby>优势/便利<rt>advantages</rt></ruby>考虑去鼓励某些做法,而非作为道德层面上的约束。
|
||||
|
||||
Stallman 在由其本人于 1984 年发起的<ruby>自由软件运动<rt>The free software movement</rt></ruby>表明,不仅仅是这些 <ruby>优势/便利<rt>advantages</rt></ruby> 受到了威胁。计算机的用户 <ruby>理应得到<rt>deserve</rt></ruby> 计算机的控制权,因此拒绝被用户控制的程序即是 <ruby>非正义<rt>injustice</rt></ruby>,理应被<ruby>拒绝<rt>rejected</rt></ruby>和<ruby>排斥<rt>eliminated</rt></ruby>。对于用户的控制权,程序应当给予用户 [四项基本自由][4]:
|
||||
|
||||
* 自由度 0:无论用户出于何种目的,必须可以按照用户意愿,自由地运行该软件。
|
||||
* 自由度 1:用户可以自由地学习并修改该软件,以便按照自己的意愿进行计算。作为前提,用户必须可以访问到该软件的源代码。
|
||||
* 自由度 2:用户可以自由地分发该软件的副本,以便可以帮助他人。
|
||||
* 自由度 3:用户可以自由地分发该软件修改后的副本。借此,你可以让整个社区受益于你的改进。作为前提,用户必须可以访问到该软件的源代码。
|
||||
|
||||
### 关于 POSIX
|
||||
|
||||
**Seth:** POSIX 标准是由 [IEEE][5] 发布,用于描述 “<ruby>可移植操作系统<rt>portable operating system</rt></ruby>” 的文档。只要开发人员编写符合此描述的程序,他们生产的便是符合 POSIX 的程序。在科技行业,我们称之为 “<ruby>规范<rt>specification</rt></ruby>” 或将其简写为 “spec”。就技术用语而言,这是可以理解的,但我们不禁要问是什么使操作系统 “可移植”?
|
||||
|
||||
**RMS:** 我认为是<ruby>接口<rt>interface</rt></ruby>应该(在不同系统之间)是可移植的,而非任何一种*系统*。实际上,内部构造不同的各种系统都支持部分的 POSIX 接口规范。
|
||||
|
||||
**Seth:** 因此,如果两个系统皆具有符合 POSIX 的程序,那么它们便可以对彼此做出合理的假设,从而知道如何相互 “交谈”。我了解到 “POSIX” 这个简称是你想出来的。那你是怎么想出来的呢?它又是如何被 IEEE 采纳的呢?
|
||||
|
||||
**RMS:** IEEE 已经完成了规范的开发,但还没为其想好简练的名称。标题类似于 “可移植操作系统接口”,虽然我已记不清确切的措辞。委员会倾向于将 “IEEEIX” 作为简称,而我认为那不太好:发音有点怪,听起来像恐怖的尖叫 “Ayeee!”,所以我觉得人们反而会倾向于称之为 “Unix”。
|
||||
|
||||
但是,由于 <ruby>[GNU 并不是 Unix][6]<rt>GNU's Not Unix</rt></ruby>,并且它打算取代之,我不希望人们将 GNU 称为 “Unix 系统”。因此,我提出了人们可能会实际使用的简称。那个时候也没有什么灵感,我就用了一个并不是非常聪明的方式创造了这个简称:我使用了 “<ruby>可移植操作系统<rt>portable operating system</rt></ruby>” 的首字母缩写,并在末尾添加了 “ix” 作为简称。IEEE 也欣然接受了。
|
||||
|
||||
**Seth:** POSIX 缩写中的 “操作系统” 是仅指 Unix 和类 Unix 的系统(如 GNU)呢,还是意图包含所有操作系统?
|
||||
|
||||
**RMS:** 术语 “操作系统” 抽象地说,涵盖了完全不像 Unix 的系统、完全和 POSIX 规范无关的系统。但是,POSIX 规范适用于大量类 Unix 系统;也只有这样的系统才适合 POSIX 规范。
|
||||
|
||||
**Seth:** 你是否参与审核或更新当前版本的 POSIX 标准?
|
||||
|
||||
**RMS:** 现在不了。
|
||||
|
||||
**Seth:** GNU Autotools 工具链可以使应用程序更容易移植,至少在构建和安装时如此。所以可以认为 Autotools 是构建可移植基础设施的重要一环吗?
|
||||
|
||||
**RMS:** 是的,因为即使在遵循 POSIX 的系统中,也存在着诸多差异。而 Autotools 可以使程序更容易适应这些差异。顺带一提,如果有人想助力 Autotools 的开发,可以发邮件联系我。
|
||||
|
||||
**Seth:** 我想,当 GNU 刚刚开始让人们意识到一个非 Unix 的系统可以从专有的技术中解放出来的时候,关于自由软件如何协作方面,这其间一定存在一些空白区域吧。
|
||||
|
||||
**RMS:** 我不认为有任何空白或不确定性。我只是照着 BSD 的接口写而已。
|
||||
|
||||
**Seth:** 一些 GNU 应用程序符合 POSIX 标准,而另一些 GNU 应用程序具有 GNU 特有的功能,这些功能要么不在 POSIX 规范中,要么缺少该规范所要求的功能。对于 GNU 应用程序来说,POSIX 合规性有多重要?
|
||||
|
||||
**RMS:** 遵循标准很重要,但以对用户有利的程度为限。我们不将标准视为权威,而是将其作为可能有用的指南来遵循。因此,我们谈论的是<ruby>遵循<rt>following</rt></ruby>标准而不是“<ruby>遵守<rt>complying</rt></ruby>”。可以参考 <ruby>GNU 编码标准<rt>GNU Coding Standards</rt></ruby>中的 [非 GNU 标准][7] 段落。
|
||||
|
||||
我们努力在大多数问题上与标准兼容,因为在大多数的问题上这最有利于用户。但也偶有例外。
|
||||
|
||||
例如,POSIX 指定某些实用程序以 512 字节为单位测量磁盘空间。我要求委员会将其改为 1K,但被拒绝了,说是有个<ruby>官僚主义的规则<rt>bureaucratic rule</rt></ruby>强迫选用 512。我不记得有谁曾试图争辩说,用户会对这个决定感到满意。
|
||||
|
||||
由于 GNU 的第二优先级(仅次于用户的<ruby>自由<rt>freedom</rt></ruby>)是用户的<ruby>便利<rt>convenience</rt></ruby>,我们让 GNU 程序默认以 1K 为单位按块测量磁盘空间。
|
||||
|
||||
然而,为了防止竞争对手利用这点给 GNU 安上 “<ruby>不合规<rt>noncompliant</rt></ruby>” 的骂名,我们实现了遵循 POSIX 和 ISO C 的可选模式,这种妥协着实可笑。想要遵循 POSIX,只需设置环境变量 `POSIXLY_CORRECT`,即可使程序符合 POSIX 以 512 字节为单位列出磁盘空间。如果有人知道实际使用 `POSIXLY_CORRECT` 或者 GCC 中对应的 `--pedantic` 会为某些用户提供什么实际好处的话,请务必告诉我。
|
||||
|
||||
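可以用 GNU `du` 直观地看到这一差别。下面是一个最小的演示(`Documents` 目录及其大小均为假设的示例):GNU 版默认以 1K 为单位报告,设置 `POSIXLY_CORRECT` 后则按 POSIX 要求以 512 字节为单位报告,因此数值翻倍。
|
||||
|
||||
```
|
||||
$ du -s Documents
|
||||
512     Documents
|
||||
$ POSIXLY_CORRECT=1 du -s Documents
|
||||
1024    Documents
|
||||
```
|
||||
|
||||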
**Seth:** 符合 POSIX 标准的自由软件项目是否更容易移植到其他类 Unix 系统?
|
||||
|
||||
**RMS:** 我认为是这样,但自上世纪 80 年代开始,我决定不再把时间浪费在将软件移植到 GNU 以外的系统上。我开始专注于推进 GNU 系统,使其不必使用任何非自由软件。至于将 GNU 程序移植到非类 GNU 系统就留给想在其他系统上运行它们的人们了。
|
||||
|
||||
**Seth:** POSIX 对于软件的自由很重要吗?
|
||||
|
||||
**RMS:** 本质上说,(遵不遵循 POSIX)其实没有任何区别。但是,POSIX 和 ISO C 的标准化确实使 GNU 系统更容易迁移,这有助于我们更快地实现从非自由软件中解放用户的目标。这个目标于上世纪 90 年代早期达成,当时 Linux 成为自由软件,同时也填补了 GNU 中内核的空白。
|
||||
|
||||
### POSIX 采纳 GNU 的创新
|
||||
|
||||
我还问过 Stallman 博士,是否有任何 GNU 特定的创新或惯例后来被采纳为 POSIX 标准。他无法回想起具体的例子,但友好地代我向几位开发者发了邮件。
|
||||
|
||||
开发者 Giacomo Catenazzi、James Youngman、Eric Blake、Arnold Robbins 和 Joshua Judson Rosen 回应了我关于以前的 POSIX 迭代以及仍在进行中的 POSIX 迭代的问题。POSIX 是一个 “<ruby>活的<rt>living</rt></ruby>” 标准,它会不断被行业专业人士更新和评审,许多参与 GNU 项目的开发人员都曾提议将 GNU 的特性纳入其中。
|
||||
|
||||
为了回顾这些有趣的历史,接下来会罗列一些已经融入 POSIX 的流行的 GNU 特性。
|
||||
|
||||
#### Make
|
||||
|
||||
一些 GNU **Make** 的特性已经被 POSIX 的 `make` 定义所采用。相关的 [规范][8] 提供了从现有实现中借来的特性的详细归因。
|
||||
|
||||
#### Diff 和 patch
|
||||
|
||||
[diff][9] 和 [patch][10] 命令都直接从这些工具的 GNU 版本中引进了 `-u` 和 `-U` 选项。
|
||||
|
||||
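作为参考,下面是 `-u`(统一格式)输出的一个小示例;文件名 `hello_old.c`、`hello_new.c` 是为演示虚构的,且省略了文件名后的时间戳:
|
||||
|
||||
```
|
||||
$ diff -u hello_old.c hello_new.c
|
||||
--- hello_old.c
|
||||
+++ hello_new.c
|
||||
@@ -1,2 +1,2 @@
|
||||
 #include <stdio.h>
|
||||
-int main(void) { puts("hello"); }
|
||||
+int main(void) { puts("hello, POSIX"); }
|
||||
```
|
||||
|
||||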
#### C 库
|
||||
|
||||
POSIX 采用了 GNU C 库 **glibc** 的许多特性。<ruby>血统<rt>Lineage</rt></ruby>一时已难以追溯,但 James Youngman 如是写道:
|
||||
|
||||
> “我非常确定 GCC 首创了许多 ISO C 的特性。例如,**_Noreturn** 是 C11 中的新特性,但 GCC-1.35 便具有此功能(使用 `volatile` 作为声明函数的修饰符)。另外尽管我不确定,GCC-1.35 支持的可变长度数组似乎与现代 C 中的(<ruby>柔性数组<rt>conformant array</rt></ruby>)非常相似。”
|
||||
|
||||
Giacomo Catenazzi 援引 Open Group 的 [strftime][11] 文章,并指出其归因:“这是基于某版本 GNU libc 的 `strftime()` 的特性。”
|
||||
|
||||
Eric Blake 指出,对于 `getline()` 和各种基于语言环境的 `*_l()` 函数,GNU 绝对是这方面的先驱。
|
||||
|
||||
Joshua Judson Rosen 补充道,他清楚地记得,在全然不同的操作系统的代码中奇怪地目睹了熟悉的 GNU 式的行为后,对 `getline()` 函数的采用给他留下了深刻的印象。
|
||||
|
||||
“等等……那不是 GNU 特有的吗?哦,显然已经不再是了。”
|
||||
|
||||
Rosen 向我指出了 [getline 手册页][12] 中写道:
|
||||
|
||||
> `getline()` 和 `getdelim()` 最初都是 GNU 扩展。在 POSIX.1-2008 中被标准化。
|
||||
|
||||
Eric Blake 向我发送了一份其他扩展的列表,这些扩展可能会在下一个 POSIX 修订版中添加(代号为 Issue 8,大约在 2021 年前后):
|
||||
|
||||
* [ppoll][13]
|
||||
* [pthread_cond_clockwait et al.][14]
|
||||
* [posix_spawn_file_actions_addchdir][15]
|
||||
* [getlocalename_1][16]
|
||||
* [reallocarray][17]
|
||||
|
||||
### 关于用户空间的扩展
|
||||
|
||||
POSIX 不仅为开发人员定义了函数和特性,还为用户空间定义了标准行为。
|
||||
|
||||
#### ls
|
||||
|
||||
`-A` 选项会从 `ls` 命令的结果中排除 `.`(代表当前目录)和 `..`(代表上一级目录)这两个符号项。它于 POSIX 2008 中被采纳。
|
||||
|
||||
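效果大致如下(目录内容为假设的示例):
|
||||
|
||||
```
|
||||
$ ls -a
|
||||
.  ..  .bashrc  notes.txt
|
||||
$ ls -A
|
||||
.bashrc  notes.txt
|
||||
```
|
||||
|
||||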
#### find
|
||||
|
||||
`find` 命令是一个<ruby>随需应变<rt>ad hoc</rt></ruby>的 [for 循环][18]工具,也是进行[<ruby>并行<rt>parallel</rt></ruby>][19]处理的一个入口。
|
||||
|
||||
一些<ruby>便捷操作<rt>conveniences</rt></ruby>已经从 GNU 引入到了 POSIX,其中包括 `-path` 和 `-perm` 选项。
|
||||
|
||||
`-path` 选项帮你过滤与文件系统路径模式匹配的搜索结果,并且从 1996 年起(根据 `findutils` 的 Git 仓库中最早的记录),GNU 版本的 `find` 便可使用此选项。James Youngman 指出 [HP-UX][20] 也很早就有这个选项,所以究竟是 GNU 还是 HP-UX 做出的这一创新(抑或两者兼而有之)已无法考证。
|
||||
|
||||
`-perm` 选项帮你按文件权限过滤搜索结果。这在 1996 年 GNU 版本的 `find` 中便已存在,随后被纳入 POSIX 标准 “IEEE Std 1003.1,2004 Edition” 中。
|
||||
|
||||
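下面是这两个选项的简单用例(路径与权限仅为示例):
|
||||
|
||||
```
|
||||
# 只查找路径中包含 docs 目录的 Markdown 文件
|
||||
$ find . -path "*/docs/*" -name "*.md"
|
||||
# 只查找属主具有执行权限的普通文件
|
||||
$ find . -type f -perm -u+x
|
||||
```
|
||||
|
||||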
`xargs` 命令是 `findutils` 软件包的一部分,1996 年的时候就有一个 `-p` 选项会将 `xargs` 置于交互模式(用户将被提示是否继续),随后被纳入 POSIX 标准 “IEEE Std 1003.1, 2004 Edition” 中。
|
||||
|
||||
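交互模式的效果大致如下(文件名为示例):`xargs` 会在执行每条命令前征求确认,输入 `y` 才会继续。
|
||||
|
||||
```
|
||||
$ echo a.txt b.txt | xargs -p -n1 touch
|
||||
touch a.txt ?...y
|
||||
touch b.txt ?...y
|
||||
```
|
||||
|
||||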
#### Awk
|
||||
|
||||
GNU awk(即 `/usr/bin` 目录中的 `gawk` 命令,可能也是符号链接 `awk` 的目标地址)的维护者 Arnold Robbins 说道,`gawk` 和 `mawk`(另一个 GPL 的 `awk` 实现)允许 `RS`(记录分隔符)是一个正则表达式,即这时 `RS` 的长度会大于 1。这一特性还不是 POSIX 的特性,但有 [迹象表明它即将会是][21]:
|
||||
|
||||
> NUL 在扩展正则表达式中产生的未定义行为允许 GNU `gawk` 程序未来可以扩展以处理二进制数据。
|
||||
>
|
||||
> 使用多字符 RS 值的未指定行为是为了未来可能的扩展,它是基于用于记录分隔符(RS)的扩展正则表达式的。目前的历史实现为采用该字符串的第一个字符而忽略其他字符。
|
||||
|
||||
这是一个重大的增强,因为 `RS` 符号定义了记录之间的分隔符。它可以是逗号、分号、短划线或任何此类字符;但如果它是一个字符*序列*,则只有第一个字符会被使用,其余字符会被忽略,除非你使用的是 `gawk` 或 `mawk`。想象一下:用省略号(连续的三个点)作为分隔符去解析一份 IP 地址文档,结果却发现记录在每个 IP 地址内部的每个点处都被拆开了。
|
||||
|
||||
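一个最小的演示(输入数据为虚构):把连续三个点作为记录分隔符,`gawk` 会把它当作一个整体的正则表达式,而传统实现只会取第一个点。
|
||||
|
||||
```
|
||||
$ printf 'one...two...three' | gawk 'BEGIN { RS = "\\.\\.\\." } { print NR, $0 }'
|
||||
1 one
|
||||
2 two
|
||||
3 three
|
||||
```
|
||||
|
||||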
[mawk][22] 首先支持了这个功能,但随后几年无人维护,这支火把便由 `gawk` 接了过去。(`mawk` 如今已有了新的维护者,可以说大家正薪火相传,共同把这一特性推向大家所期望的标准。)
|
||||
|
||||
### POSIX 规范
|
||||
|
||||
总的来说,Giacomo Catenazzi 指出:“……因为 GNU 的实用程序使用广泛,许多其他实现的选项和行为都会对标规范。在 shell 的每次更改中,Bash 都会(作为一等公民)被用作比较对象。” 当某些东西被纳入 POSIX 规范时,无需提及 GNU 或任何其他影响来源,你可以简单地认为 POSIX 规范受到了许多方面的影响,GNU 只是其中之一。
|
||||
|
||||
共识正是 POSIX 存在的意义所在。一群技术人员为实现共同的规范而协作,再将其分享给数以百计各不相同的开发人员;经由他们的赋能,最终实现软件的独立性,以及开发人员和用户的自由。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2 (Scissors cutting open access to files)
|
||||
[2]: https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/
|
||||
[3]: https://stallman.org/
|
||||
[4]: https://www.gnu.org/philosophy/free-sw.en.html
|
||||
[5]: https://www.ieee.org/
|
||||
[6]: http://gnu.org
|
||||
[7]: https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Standards.html
|
||||
[8]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html
|
||||
[9]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/diff.html
|
||||
[10]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/patch.html
|
||||
[11]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html
|
||||
[12]: http://man7.org/linux/man-pages/man3/getline.3.html
|
||||
[13]: http://austingroupbugs.net/view.php?id=1263
|
||||
[14]: http://austingroupbugs.net/view.php?id=1216
|
||||
[15]: http://austingroupbugs.net/view.php?id=1208
|
||||
[16]: http://austingroupbugs.net/view.php?id=1220
|
||||
[17]: http://austingroupbugs.net/view.php?id=1218
|
||||
[18]: https://opensource.com/article/19/6/how-write-loop-bash
|
||||
[19]: https://opensource.com/article/18/5/gnu-parallel
|
||||
[20]: https://www.hpe.com/us/en/servers/hp-ux.html
|
||||
[21]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html
|
||||
[22]: https://invisible-island.net/mawk/
|
245
published/20190725 Introduction to GNU Autotools.md
Normal file
@ -0,0 +1,245 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11218-1.html)
|
||||
[#]: subject: (Introduction to GNU Autotools)
|
||||
[#]: via: (https://opensource.com/article/19/7/introduction-gnu-autotools)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
GNU Autotools 介绍
|
||||
======
|
||||
|
||||
> 如果你仍未使用过 Autotools,那么这篇文章将改变你递交代码的方式。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/13/094739ahql50gx9x10y157.jpg)
|
||||
|
||||
你有没有下载过流行的软件项目的源代码,要求你输入几乎是仪式般的 `./configure; make && make install` 命令序列来构建和安装它?如果是这样,你已经使用过 [GNU Autotools][2] 了。如果你曾经研究过这样的项目所附带的一些文件,你可能会对这种构建系统的显而易见的复杂性感到害怕。
|
||||
|
||||
好的消息是,GNU Autotools 的设置要比你想象的要简单得多,GNU Autotools 本身可以为你生成这些上千行的配置文件。是的,你可以编写 20 或 30 行安装代码,并免费获得其他 4,000 行。
|
||||
|
||||
### Autotools 工作方式
|
||||
|
||||
如果你是初次使用 Linux 的用户,正在寻找有关如何安装应用程序的信息,那么你不必阅读本文!如果你想研究如何构建软件,欢迎阅读它;但如果你只是要安装一个新应用程序,请阅读我在[在 Linux 上安装应用程序][3]的文章。
|
||||
|
||||
对于开发人员来说,Autotools 是一种管理和打包源代码的快捷方式,以便用户可以编译和安装软件。Autotools 也得到了主要打包格式(如 DEB 和 RPM)的良好支持,因此软件存储库的维护者可以轻松管理使用 Autotools 构建的项目。
|
||||
|
||||
Autotools 工作步骤:
|
||||
|
||||
1. 首先,在 `./configure` 步骤中,Autotools 扫描宿主机系统(即当前正在运行的计算机)以发现默认设置。默认设置包括支持库所在的位置,以及新软件应放在系统上的位置。
|
||||
2. 接下来,在 `make` 步骤中,Autotools 通常通过将人类可读的源代码转换为机器语言来构建应用程序。
|
||||
3. 最后,在 `make install` 步骤中,Autotools 将其构建好的文件复制到计算机上(在配置阶段检测到)的相应位置。
|
||||
|
||||
这个过程看起来很简单,和你使用 Autotools 的步骤一样。
|
||||
|
||||
### Autotools 的优势
|
||||
|
||||
GNU Autotools 是我们大多数人认为理所当然的重要软件。与 [GCC(GNU 编译器集合)][4]一起,Autotools 是支持将自由软件构建和安装到正在运行的系统上的脚手架。如果你正在运行 [POSIX][5] 系统,可以有把握地说,你的计算机上的操作系统里大多数可运行软件都是这样构建的。
|
||||
|
||||
即使你的项目只是个小玩具项目而不是操作系统,你也可能会认为 Autotools 对你的需求来说太重了。但是,尽管它名气很大,Autotools 有许多可能对你有益的小功能,即使你的项目只是一个相对简单的应用程序或一系列脚本。
|
||||
|
||||
#### 可移植性
|
||||
|
||||
首先,Autotools 考虑到了可移植性。虽然它无法使你的项目在所有 POSIX 平台上工作(这取决于写代码的你),但 Autotools 可以确保你标记为要安装的文件安装到已知平台上最合理的位置。而且由于 Autotools,高级用户可以轻松地根据他们自己的系统情况定制和覆盖任何非最佳设定。
|
||||
|
||||
使用 Autotools,你只要知道需要将文件安装到哪个常规位置就行了。它会处理其他一切。不需要可能破坏未经测试的操作系统的定制安装脚本。
|
||||
|
||||
#### 打包
|
||||
|
||||
Autotools 也得到了很好的支持。将一个带有 Autotools 的项目交给一个发行版打包者,无论他们是打包成 RPM、DEB、TGZ 还是其他任何东西,都很简单。打包工具知道 Autotools,因此可能不需要修补、魔改或调整。在许多情况下,将 Autotools 项目结合到流程中甚至可以实现自动化。
|
||||
|
||||
### 如何使用 Autotools
|
||||
|
||||
要使用 Autotools,必须先安装它。你的发行版可能提供一个单个的软件包来帮助开发人员构建项目,或者它可能为每个组件提供了单独的软件包,因此你可能需要在你的平台上进行一些研究以发现需要安装的软件包。
|
||||
|
||||
Autotools 的组件是:
|
||||
|
||||
* `automake`
|
||||
* `autoconf`
|
||||
* `make`
|
||||
|
||||
虽然你可能需要安装项目所需的编译器(例如 GCC),但 Autotools 可以很好地处理不需要编译的脚本或二进制文件。实际上,Autotools 对于此类项目非常有用,因为它提供了一个 `make uninstall` 脚本,以便于删除。
|
||||
|
||||
安装了所有组件之后,现在让我们了解一下你的项目文件的组成结构。
|
||||
|
||||
#### Autotools 项目结构
|
||||
|
||||
GNU Autotools 有非常具体的预期规范,如果你经常下载和构建源代码,你可能对其中大多数都很熟悉。首先,源代码本身应该位于一个名为 `src` 的子目录中。
|
||||
|
||||
你的项目不必遵循所有这些预期规范,但如果你将文件放在非标准位置(从 Autotools 的角度来看),那么你将不得不稍后在 `Makefile` 中对其进行调整。
|
||||
|
||||
此外,这些文件是必需的:
|
||||
|
||||
* `NEWS`
|
||||
* `README`
|
||||
* `AUTHORS`
|
||||
* `ChangeLog`
|
||||
|
||||
你不必主动使用这些文件,它们可以是包含所有信息的单个汇总文档(如 `README.md`)的符号链接,但它们必须存在。
|
||||
|
||||
#### Autotools 配置
|
||||
|
||||
在你的项目根目录下创建一个名为 `configure.ac` 的文件。`autoconf` 使用此文件来创建用户在构建之前运行的 `configure` shell 脚本。该文件必须至少包含 `AC_INIT` 和 `AC_OUTPUT` [M4 宏][6]。你不需要了解有关 M4 语言的任何信息就可以使用这些宏;它们已经为你编写好了,并且所有与 Autotools 相关的内容都在该文档中定义好了。
|
||||
|
||||
在你喜欢的文本编辑器中打开该文件。`AC_INIT` 宏可以包括包名称、版本、报告错误的电子邮件地址、项目 URL 以及可选的源 TAR 文件名称等参数。
|
||||
|
||||
[AC_OUTPUT][7] 宏更简单,不用任何参数。
|
||||
|
||||
```
|
||||
AC_INIT([penguin], [2019.3.6], [seth@example.com])
|
||||
AC_OUTPUT
|
||||
```
|
||||
|
||||
如果你此刻运行 `autoconf`,会依据你的 `configure.ac` 文件生成一个 `configure` 脚本,它是可以运行的。但是,也就是能运行而已,因为到目前为止你所做的就是定义项目的元数据,并要求创建一个配置脚本。
|
||||
|
||||
你必须在 `configure.ac` 文件中调用的下一个宏用于创建 [Makefile][9]。`Makefile` 会告诉 `make` 命令做什么(通常是如何编译和链接程序)。
|
||||
|
||||
创建 `Makefile` 的宏是 `AM_INIT_AUTOMAKE`,它不接受任何参数,而 `AC_CONFIG_FILES` 接受的参数是你要输出的文件的名称。
|
||||
|
||||
最后,你必须添加一个宏来考虑你的项目所需的编译器。你使用的宏显然取决于你的项目。如果你的项目是用 C++ 编写的,那么适当的宏是 `AC_PROG_CXX`,而用 C 编写的项目需要 `AC_PROG_CC`,依此类推,详见 Autoconf 文档中的 [Building Programs and Libraries][10] 部分。
|
||||
|
||||
例如,我可能会为我的 C++ 程序添加以下内容:
|
||||
|
||||
```
|
||||
AC_INIT([penguin], [2019.3.6], [seth@example.com])
|
||||
AC_OUTPUT
|
||||
AM_INIT_AUTOMAKE
|
||||
AC_CONFIG_FILES([Makefile])
|
||||
AC_PROG_CXX
|
||||
```
|
||||
|
||||
保存该文件。现在让我们将目光转到 `Makefile`。
|
||||
|
||||
#### 生成 Autotools Makefile
|
||||
|
||||
`Makefile` 并不难手写,但 Autotools 可以为你编写一个,而它生成的那个将使用在 `./configure` 步骤中检测到的配置选项,并且它将包含比你考虑要包括或想要自己写的还要多得多的选项。然而,Autotools 并不能检测你的项目构建所需的所有内容,因此你必须在文件 `Makefile.am` 中添加一些细节,然后在构造 `Makefile` 时由 `automake` 使用。
|
||||
|
||||
`Makefile.am` 使用与 `Makefile` 相同的语法,所以如果你曾经从头开始编写过 `Makefile`,那么这个过程将是熟悉和简单的。通常,`Makefile.am` 文件只需要几个变量定义来指示要构建的文件以及它们的安装位置即可。
|
||||
|
||||
以 `_PROGRAMS` 结尾的变量标识了要构建的代码(这通常被认为是<ruby>原语<rt>primary</rt></ruby>目标;这是 `Makefile` 存在的主要意义)。Automake 也会识别其他原语,如 `_SCRIPTS`、`_DATA`、`_LIBRARIES`,以及构成软件项目的其他常见部分。
|
||||
|
||||
如果你的应用程序在构建过程中需要实际编译,那么你可以用 `bin_PROGRAMS` 变量将其标记为二进制程序,然后使用该程序名称作为变量前缀引用构建它所需的源代码的任何部分(这些部分可能是将被编译和链接在一起的一个或多个文件):
|
||||
|
||||
```
|
||||
bin_PROGRAMS = penguin
|
||||
penguin_SOURCES = penguin.cpp
|
||||
```
|
||||
|
||||
`bin_PROGRAMS` 的目标被安装在 `bindir` 中,它在编译期间可由用户配置。
|
||||
|
||||
如果你的应用程序不需要实际编译,那么你的项目根本不需要 `bin_PROGRAMS` 变量。例如,如果你的项目是用 Bash、Perl 或类似的解释语言编写的脚本,那么定义一个 `_SCRIPTS` 变量来替代:
|
||||
|
||||
```
|
||||
bin_SCRIPTS = bin/penguin
|
||||
```
|
||||
|
||||
Automake 期望源代码位于名为 `src` 的目录中,因此如果你的项目使用替代目录结构进行布局,则必须告知 Automake 接受来自外部源的代码:
|
||||
|
||||
```
|
||||
AUTOMAKE_OPTIONS = foreign subdir-objects
|
||||
```
|
||||
|
||||
最后,你可以在 `Makefile.am` 中创建任何自定义的 `Makefile` 规则,它们将逐字复制到生成的 `Makefile` 中。例如,如果你知道一些源代码中的临时值需要在安装前替换,则可以为该过程创建自定义规则:
|
||||
|
||||
```
|
||||
all-am: penguin
|
||||
	touch bin/penguin.sh
|
||||
|
||||
penguin: bin/penguin.sh
|
||||
	@sed "s|__datadir__|@datadir@|" $< >bin/$@
|
||||
```
|
||||
|
||||
一个特别有用的技巧是扩展现有的 `clean` 目标,至少在开发期间是这样。`make clean` 命令通常会删除除了 Automake 基础结构之外的所有生成的构建文件。它之所以这样设计,是因为大多数用户很少希望 `make clean` 把那些让构建代码更方便的文件也删掉。
|
||||
|
||||
但是,在开发期间,你可能需要一种方法可靠地将项目返回到相对不受 Autotools 影响的状态。在这种情况下,你可能想要添加:
|
||||
|
||||
```
|
||||
clean-local:
|
||||
	@rm config.status configure config.log
|
||||
	@rm Makefile
|
||||
	@rm -r autom4te.cache/
|
||||
	@rm aclocal.m4
|
||||
	@rm compile install-sh missing Makefile.in
|
||||
```
|
||||
|
||||
这里有很多灵活性,如果你还不熟悉 `Makefile`,那么很难知道你的 `Makefile.am` 需要什么。最基本需要的是原语目标,无论是二进制程序还是脚本,以及源代码所在位置的指示(无论是通过 `_SOURCES` 变量还是使用 `AUTOMAKE_OPTIONS` 告诉 Automake 在哪里查找源代码)。
|
||||
|
||||
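作为参考,下面给出一个把前文片段拼在一起的最小 `Makefile.am` 草稿(针对本文假设的 `penguin` 脚本项目,目录布局仅为示例):
|
||||
|
||||
```
|
||||
# 源代码不在标准布局中,让 Automake 接受外部布局
|
||||
AUTOMAKE_OPTIONS = foreign subdir-objects
|
||||
|
||||
# 要安装到 bindir 的脚本(原语目标)
|
||||
bin_SCRIPTS = bin/penguin
|
||||
```
|
||||
|
||||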
一旦定义了这些变量和设置,如下一节所示,你就可以尝试生成构建脚本,并调整缺少的任何内容。
|
||||
|
||||
#### 生成 Autotools 构建脚本
|
||||
|
||||
你已经构建了基础结构,现在是时候让 Autotools 做它最擅长的事情:自动化你的项目工具。对于开发人员(你)来说,Autotools 的接口与构建代码的用户所用的接口并不相同。
|
||||
|
||||
构建者通常使用这个众所周知的顺序:
|
||||
|
||||
```
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
但是,要使这种咒语起作用,你作为开发人员必须引导构建这些基础结构。首先,运行 `autoreconf` 以生成用户在运行 `make` 之前调用的 `configure` 脚本。使用 `--install` 选项可以引入辅助文件,例如指向 `depcomp`(在编译过程中生成依赖项的脚本)的符号链接,以及 `compile` 脚本(一个编译器的包装器,用于消除语法差异等)的副本。
|
||||
|
||||
```
|
||||
$ autoreconf --install
|
||||
configure.ac:3: installing './compile'
|
||||
configure.ac:2: installing './install-sh'
|
||||
configure.ac:2: installing './missing'
|
||||
```
|
||||
|
||||
使用此开发构建环境,你可以创建源代码分发包:
|
||||
|
||||
```
|
||||
$ make dist
|
||||
```
|
||||
|
||||
`dist` 目标是从 Autotools “免费”获得的规则。这是一个内置于 `Makefile` 中的功能,它是通过简单的 `Makefile.am` 配置生成的。该目标可以生成一个 `tar.gz` 存档,其中包含了所有源代码和所有必要的 Autotools 基础设施,以便下载程序包的人员可以构建项目。
|
||||
|
||||
此时,你应该仔细查看存档文件的内容,以确保它包含你要发送给用户的所有内容。当然,你也应该尝试自己构建:
|
||||
|
||||
```
|
||||
$ tar --extract --file penguin-0.0.1.tar.gz
|
||||
$ cd penguin-0.0.1
|
||||
$ ./configure
|
||||
$ make
|
||||
$ DESTDIR=/tmp/penguin-test-build make install
|
||||
```
|
||||
|
||||
如果你的构建成功,你将找到由 `DESTDIR` 指定的已编译应用程序的本地副本(在此示例的情况下为 `/tmp/penguin-test-build`)。
|
||||
|
||||
```
|
||||
$ /tmp/penguin-test-build/usr/local/bin/penguin
|
||||
hello world from GNU Autotools
|
||||
```
|
||||
|
||||
### 去使用 Autotools
|
||||
|
||||
Autotools 是一个很好的脚本集合,可用于可预测的自动发布过程。如果你习惯使用 Python 或 Bash 构建器,这个工具集对你来说可能是新的,但它为你的项目提供的结构和适应性可能值得学习。
|
||||
|
||||
而 Autotools 也不只是用于代码。Autotools 可用于构建 [Docbook][11] 项目、管理媒体文件(我就使用 Autotools 来做音乐发布)、文档项目,以及其他任何可以从可自定义安装目标中受益的内容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/introduction-gnu-autotools
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code)
|
||||
[2]: https://www.gnu.org/software/automake/faq/autotools-faq.html
|
||||
[3]: https://linux.cn/article-9486-1.html
|
||||
[4]: https://en.wikipedia.org/wiki/GNU_Compiler_Collection
|
||||
[5]: https://en.wikipedia.org/wiki/POSIX
|
||||
[6]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Initializing-configure.html
|
||||
[7]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Output.html#Output
|
||||
[8]: mailto:seth@example.com
|
||||
[9]: https://www.gnu.org/software/make/manual/html_node/Introduction.html
|
||||
[10]: https://www.gnu.org/software/automake/manual/html_node/Programs.html#Programs
|
||||
[11]: https://opensource.com/article/17/9/docbook
|
131
published/20190730 How to create a pull request in GitHub.md
Executable file
@ -0,0 +1,131 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "furrybear"
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-11215-1.html"
|
||||
[#]: subject: "How to create a pull request in GitHub"
|
||||
[#]: via: "https://opensource.com/article/19/7/create-pull-request-github"
|
||||
[#]: author: "Kedar Vijay Kulkarni https://opensource.com/users/kkulkarn"
|
||||
|
||||
如何在 GitHub 上创建一个拉取请求
|
||||
======
|
||||
|
||||
> 学习如何复刻一个仓库,进行更改,并要求维护人员审查并合并它。
|
||||
|
||||
![a checklist for a team][1]
|
||||
|
||||
你知道如何使用 git 了,你有一个 [GitHub][2] 仓库并且可以向它推送。这一切都很好。但是你如何为他人的 GitHub 项目做出贡献? 这是我在学习 git 和 GitHub 之后想知道的。在本文中,我将解释如何<ruby>复刻<rt>fork</rt></ruby>一个 git 仓库、进行更改并提交一个<ruby>拉取请求<rt>pull request</rt></ruby>。
|
||||
|
||||
当你想要在一个 GitHub 项目上工作时,第一步是复刻一个仓库。
|
||||
|
||||
![Forking a GitHub repo][3]
|
||||
|
||||
你可以使用[我的演示仓库][4]试一试。
|
||||
|
||||
当你在这个页面时,单击右上角的 “Fork”(复刻)按钮。这将在你的 GitHub 用户账户下创建我的演示仓库的一个新副本,其 URL 如下:
|
||||
|
||||
```
|
||||
https://github.com/<你的用户名>/demo
|
||||
```
|
||||
|
||||
这个副本包含了原始仓库中的所有代码、分支和提交。
|
||||
|
||||
接下来,打开你计算机上的终端并运行命令来<ruby>克隆<rt>clone</rt></ruby>仓库:
|
||||
|
||||
```
|
||||
git clone https://github.com/<你的用户名>/demo
|
||||
```
|
||||
|
||||
一旦仓库被克隆后,你需要做两件事:
|
||||
|
||||
1、通过发出命令创建一个新分支 `new_branch` :
|
||||
|
||||
```
|
||||
git checkout -b new_branch
|
||||
```
|
||||
|
||||
2、使用以下命令为上游仓库创建一个新的<ruby>远程<rt>remote</rt></ruby>:
|
||||
|
||||
```
|
||||
git remote add upstream https://github.com/kedark3/demo
|
||||
```
|
||||
|
||||
在这里,“上游仓库”指的是你的复刻所来自的那个原始仓库。
|
||||
|
||||
现在你可以更改代码了。以下代码创建一个新分支,进行任意更改,并将其推送到 `new_branch` 分支:
|
||||
|
||||
```
|
||||
$ git checkout -b new_branch
|
||||
Switched to a new branch 'new_branch'
|
||||
$ echo "some test file" > test
|
||||
$ cat test
|
||||
some test file
|
||||
$ git status
|
||||
On branch new_branch
|
||||
No commits yet
|
||||
Untracked files:
|
||||
(use "git add <file>..." to include in what will be committed)
|
||||
test
|
||||
nothing added to commit but untracked files present (use "git add" to track)
|
||||
$ git add test
|
||||
$ git commit -S -m "Adding a test file to new_branch"
|
||||
[new_branch (root-commit) 4265ec8] Adding a test file to new_branch
|
||||
1 file changed, 1 insertion(+)
|
||||
create mode 100644 test
|
||||
$ git push -u origin new_branch
|
||||
Enumerating objects: 3, done.
|
||||
Counting objects: 100% (3/3), done.
|
||||
Writing objects: 100% (3/3), 918 bytes | 918.00 KiB/s, done.
|
||||
Total 3 (delta 0), reused 0 (delta 0)
|
||||
remote: Create a pull request for 'new_branch' on GitHub by visiting:
|
||||
remote:   https://github.com/example/Demo/pull/new/new_branch
|
||||
remote:
|
||||
* [new branch] new_branch -> new_branch
|
||||
```
|
||||
|
||||
一旦你将更改推送到你的仓库,“Compare & pull request”(比较和拉取请求)按钮将出现在 GitHub 上。
|
||||
|
||||
![GitHub's Compare & Pull Request button][5]
|
||||
|
||||
单击它,你将进入此屏幕:
|
||||
|
||||
![GitHub's Open pull request button][6]
|
||||
|
||||
单击 “Create pull request”(创建拉取请求)按钮打开一个拉取请求。这将允许仓库的维护者们审查你的贡献。然后,如果你的贡献是没问题的,他们可以合并它,或者他们可能会要求你做一些改变。
|
||||
|
||||
### 精简版
|
||||
|
||||
总之,如果你想为一个项目做出贡献,最简单的方法是:
|
||||
|
||||
1. 找到你想要贡献的项目
|
||||
2. 复刻它
|
||||
3. 将其克隆到你的本地系统
|
||||
4. 建立一个新的分支
|
||||
5. 进行你的更改
|
||||
6. 将其推送回你的仓库
|
||||
7. 单击 “Compare & pull request”(比较和拉取请求)按钮
|
||||
8. 单击 “Create pull request”(创建拉取请求)以打开一个新的拉取请求
|
||||
|
||||
如果审阅者要求更改,请重复步骤 5 和 6,为你的拉取请求添加更多提交。
|
||||
|
||||
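具体来说,在同一个分支上继续提交并推送即可(提交信息仅为示例),新的提交会自动出现在已打开的拉取请求中:
|
||||
|
||||
```
|
||||
$ git add test
|
||||
$ git commit -m "Address review feedback"
|
||||
$ git push origin new_branch
|
||||
```
|
||||
|
||||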
快乐编码!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/create-pull-request-github
|
||||
|
||||
作者:[Kedar Vijay Kulkarni][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[furrybear](https://github.com/furrybear)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/kkulkarn
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk "a checklist for a team"
|
||||
[2]: https://github.com/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/forkrepo.png "Forking a GitHub repo"
|
||||
[4]: https://github.com/kedark3/demo
|
||||
[5]: https://opensource.com/sites/default/files/uploads/compare-and-pull-request-button.png "GitHub's Compare & Pull Request button"
|
||||
[6]: https://opensource.com/sites/default/files/uploads/open-a-pull-request_crop.png "GitHub's Open pull request button"
|
@ -0,0 +1,77 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11193-1.html)
|
||||
[#]: subject: (OpenHMD: Open Source Project for VR Development)
|
||||
[#]: via: (https://itsfoss.com/openhmd/)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
OpenHMD:用于 VR 开发的开源项目
|
||||
======
|
||||
|
||||
> 在这个时代,有一些开源替代品可满足你的所有计算需求。甚至还有一个 VR 眼镜之类的开源平台。让我们快速看一下 OpenHMD 这个项目。
|
||||
|
||||
![][5]
|
||||
|
||||
### 什么是 OpenHMD?
|
||||
|
||||
![][1]
|
||||
|
||||
[OpenHMD][2] 是一个为沉浸式技术创建开源 API 及驱动的项目。这类技术包括带内置头部跟踪的头戴式显示器。
|
||||
|
||||
它目前支持很多系统,包括 Android、FreeBSD、Linux、OpenBSD、macOS 和 Windows。它支持的[设备][3]包括 Oculus Rift、HTC Vive、DreamWorld DreamGlass、Playstation Move 等。它还支持各种语言,包括 Go、Java、.NET、Perl、Python 和 Rust。
|
||||
|
||||
OpenHMD 项目是在 [Boost 许可证][4]下发布的。
|
||||
|
||||
### 新版本中的更多功能和改进功能
|
||||
|
||||
最近,OpenHMD 项目[发布版本 0.3.0][6],代号为 Djungelvral([Djungelvral][7] 是来自瑞典的盐渍甘草)。它带来了不少变化。
|
||||
|
||||
这次更新添加了对以下设备的支持:
|
||||
|
||||
* 3Glasses D3
|
||||
* Oculus Rift CV1
|
||||
* HTC Vive 和 HTC Vive Pro
|
||||
* NOLO VR
|
||||
* Windows Mixed Reality HMD 支持
|
||||
* Deepoon E2
|
||||
* GearVR Gen1
|
||||
|
||||
OpenHMD 增加了一个通用扭曲着色器。这一新增功能“可以方便地在驱动程序中设置一些变量,为着色器提供有关镜头尺寸、色差、位置和 Quirks 的信息。”
|
||||
|
||||
他们还宣布了改变构建系统的计划。OpenHMD 增加了对 Meson 的支持,并将在下一个(0.4)版本中删除对 Autotools 的支持。
|
||||
|
||||
OpenHMD 背后的团队还不得不移除一些功能,因为他们希望自己的系统适合所有人。由于 Windows 和 macOS 上 HID 头文件的兼容问题,对 PlayStation VR 的支持被禁用了。NOLO 有一堆固件版本,很多版本之间都会有小改动。OpenHMD 无法测试所有固件版本,因此某些版本可能无法正常工作,他们建议升级到最新的固件版本。最后,有几个设备仅有有限的支持,因此未包含在此版本中。
|
||||
|
||||
他们预计将加快 OpenHMD 发布周期,以便更快地获得更新的功能并为用户提供更多设备支持。他们优先要做的是“让当前在主干分支中禁用的设备在下次发布补丁时能够试用,同时让支持的头戴式显示器支持位置跟踪。”
|
||||
|
||||
### 最后总结
|
||||
|
||||
我没有 VR 设备,而且从未使用过。我相信它们有很大的潜力,甚至能超越游戏。我很兴奋(但并不惊讶)能有一个支持众多设备的开源实现。我很高兴他们专注于支持各种各样的设备,而不是把精力花在某个不知名的 VR 设备上。
|
||||
|
||||
我祝愿 OpenHMD 团队一切顺利,并希望他们能打造出一个成为 VR 项目首选的平台。
|
||||
|
||||
你曾经使用或看到过 OpenHMD 吗?你有没有使用 VR 进行游戏和其他用途?如果是,你是否用过任何开源硬件或软件?请在下面的评论中告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/openhmd/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/openhmd-logo.png?resize=300%2C195&ssl=1
|
||||
[2]: http://www.openhmd.net/
|
||||
[3]: http://www.openhmd.net/index.php/devices/
|
||||
[4]: https://github.com/OpenHMD/OpenHMD/blob/master/LICENSE
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/virtual-reality-development.jpg?ssl=1
|
||||
[6]: http://www.openhmd.net/index.php/2019/07/12/openhmd-0-3-0-djungelvral-released/
|
||||
[7]: https://www.youtube.com/watch?v=byP5i6LdDXs
|
||||
[9]: http://reddit.com/r/linuxusersgroup
|
124
published/20190801 5 Free Partition Managers for Linux.md
Normal file
@ -0,0 +1,124 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11196-1.html)
|
||||
[#]: subject: (5 Free Partition Managers for Linux)
|
||||
[#]: via: (https://itsfoss.com/partition-managers-linux/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
5 个免费的 Linux 分区管理器
|
||||
======
|
||||
|
||||
> 以下是我们推荐的 Linux 分区工具。它们能让你删除、添加、调整或缩放 Linux 系统上的磁盘分区。
|
||||
|
||||
通常,你在安装操作系统时决定磁盘分区。但是,如果你需要在安装后的某个时间修改分区,该怎么办?你无法回到系统安装阶段。因此,这就需要分区管理器(或准确地说是磁盘分区管理器)上场了。
|
||||
|
||||
在大多数情况下,你无需单独安装分区管理器,因为它已预先安装。此外,值得注意的是,你可以选择基于命令行或有 GUI 的分区管理器。
|
||||
|
||||
**注意!**
|
||||
|
||||
> 磁盘分区是一项有风险的任务。除非绝对必要,否则不要这样做。
|
||||
>
|
||||
> 如果你使用的是基于命令行的分区工具,那么需要学习完成任务的命令。否则,你可能最终会擦除整个磁盘。
|
||||
|
||||
### Linux 中的 5 个管理磁盘分区的工具
|
||||
|
||||
![][1]
|
||||
|
||||
下面的列表没有特定的排名顺序。大多数分区工具应该存在于 Linux 发行版的仓库中。
|
||||
|
||||
#### GParted
|
||||
|
||||
![GParted][2]
|
||||
|
||||
这可能是 Linux 发行版中最流行的基于 GUI 的分区管理器。你可能已在某些发行版中预装它。如果还没有,只需在软件中心搜索它即可完成安装。
|
||||
|
||||
它会在启动时直接提示你以 root 用户进行身份验证。所以,你根本不需要在这里使用终端。身份验证后,它会分析设备,然后让你调整磁盘分区。如果发生数据丢失或意外删除文件,你还可以找到“尝试数据救援”的选项。
|
||||
|
||||
- [GParted][3]
|
||||
|
||||
#### GNOME Disks
|
||||
|
||||
![Gnome Disks][4]
|
||||
|
||||
一个基于 GUI 的分区管理器,随 Ubuntu 或任何基于 Ubuntu 的发行版(如 Zorin OS)一起出现。
|
||||
|
||||
它能让你删除、添加、缩放和微调分区。如果你遇到故障,它甚至可以[在 Ubuntu 中格式化 USB][6] 来帮助你救援机器。
|
||||
|
||||
你甚至可以借助此工具尝试修复分区。它的选项还包括编辑文件系统、创建分区镜像、还原镜像以及对分区进行基准测试。
|
||||
|
||||
- [GNOME Disks][7]
|
||||
|
||||
#### KDE Partition Manager
|
||||
|
||||
![Kde Partition Manager][8]
|
||||
|
||||
KDE Partition Manager 应该已经预装在基于 KDE 的 Linux 发行版上了。但是,如果没有,你可以在软件中心搜索并轻松安装它。
|
||||
|
||||
如果它不是预装的,那么在你尝试启动它时,可能会提示你没有管理员权限。没有管理员权限,你无法做任何事情。因此,在这种情况下,请输入以下命令:
|
||||
|
||||
```
|
||||
sudo partitionmanager
|
||||
```
|
||||
|
||||
它将扫描你的设备,然后你就可以创建、移动、复制、删除和缩放分区。你还可以导入/导出分区表及使用其他许多调整选项。
|
||||
|
||||
- [KDE Partition Manager][9]
|
||||
|
||||
#### Fdisk(命令行)
|
||||
|
||||
![Fdisk][10]
|
||||
|
||||
[fdisk][11] 是一个命令行程序,它在每个类 Unix 的系统中都有。不要担心,即使它需要你启动终端并输入命令,但这并不是很困难。但是,如果你在使用基于文本的程序时感到困惑,那么你应该继续使用上面提到的 GUI 程序。它们都做同样的事情。
|
||||
|
||||
要启动 `fdisk`,你必须是 root 用户并指定管理分区的设备。以下是该命令的示例:
|
||||
|
||||
```
|
||||
sudo fdisk /dev/sdc
|
||||
```
|
||||
|
||||
你可以参考 [Linux 文档项目的维基页面][12]以获取命令列表以及有关其工作原理的更多详细信息。
|
||||
|
||||
#### GNU Parted(命令行)
|
||||
|
||||
![Gnu Parted][13]
|
||||
|
||||
这是在你 Linux 发行版上预安装的另一个命令行程序。你需要输入下面的命令启动:
|
||||
|
||||
```
|
||||
sudo parted
|
||||
```
|
||||
|
||||
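进入 `parted` 的交互提示符后,可以先用内置命令查看分区布局,再谨慎进行其他操作,例如:
|
||||
|
||||
```
|
||||
(parted) print all
|
||||
(parted) quit
|
||||
```
|
||||
|
||||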
### 总结
|
||||
|
||||
最后值得一提的是 [QtParted][15],它也是分区管理器的替代品之一。但它已经好几年没有维护了,因此我不建议使用它。
|
||||
|
||||
你如何看待这里提到的分区管理器?我有错过你最喜欢的吗?让我知道,我将根据你的建议更新这个 Linux 分区管理器列表。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/partition-managers-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/disk-partitioning-tools-linux.jpg?resize=800%2C450&ssl=1
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/g-parted.png?ssl=1
|
||||
[3]: https://gparted.org/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnome-disks.png?ssl=1
|
||||
[6]: https://itsfoss.com/format-usb-drive-sd-card-ubuntu/
|
||||
[7]: https://wiki.gnome.org/Apps/Disks
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/kde-partition-manager.jpg?resize=800%2C404&ssl=1
|
||||
[9]: https://kde.org/applications/system/org.kde.partitionmanager
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/fdisk.jpg?fit=800%2C496&ssl=1
|
||||
[11]: https://en.wikipedia.org/wiki/Fdisk
|
||||
[12]: https://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnu-parted.png?fit=800%2C559&ssl=1
|
||||
[15]: http://qtparted.sourceforge.net/
|
@ -0,0 +1,104 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11199-1.html)
|
||||
[#]: subject: (Bash Script to Send a Mail When a New User Account is Created in System)
|
||||
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
在系统创建新用户时发送邮件的 Bash 脚本
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/07/232807l7x4j7y77555j1j5.jpg)
|
||||
|
||||
目前市场上有许多开源监测工具可用于监控 Linux 系统的性能。当系统到达指定的阈值时,它将发送邮件提醒。
|
||||
|
||||
它会监控 CPU 利用率、内存利用率、交换内存利用率、磁盘空间利用率等所有内容。但我不认为它们可以选择监控新用户创建活动,并发送提醒。
|
||||
|
||||
如果没有,这并不重要,因为我们可以编写自己的 bash 脚本来实现这一点。
|
||||
|
||||
我们过去写了许多有用的 shell 脚本。如果要查看它们,请点击以下链接。
|
||||
|
||||
* [如何使用 shell 脚本自动化执行日常任务?][1]
|
||||
|
||||
这个脚本做了什么?它监测 `/var/log/secure` 文件,并在系统创建新帐户时提醒管理员。
|
||||
|
||||
我们不会经常运行此脚本,因为创建用户不经常发生。但是,我打算一天运行一次这个脚本。因此,我们可以获得有关用户创建的综合报告。
|
||||
|
||||
如果在昨天的 `/var/log/secure` 中找到了 “useradd” 字符串,那么该脚本将向指定的邮箱发送邮件提醒,其中包含了新用户的详细信息。
|
||||
|
||||
**注意:** 你需要将脚本中的邮箱地址改成你自己的,而不要使用我们的邮箱。
|
||||
|
||||
```
|
||||
# vi /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
|
||||
#Set the variable which equal to zero
|
||||
prev_count=0
|
||||
count=$(grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | wc -l)
|
||||
|
||||
if [ "$prev_count" -lt "$count" ] ; then
|
||||
# Send a mail to given email id when errors found in log
|
||||
SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
|
||||
# This is a temp file, which is created to store the email message.
|
||||
MESSAGE="/tmp/new-user-logs.txt"
|
||||
TO="2daygeek@gmail.com"
|
||||
echo "Hostname: `hostname`" >> $MESSAGE
|
||||
echo -e "\n" >> $MESSAGE
|
||||
echo "The New User Details are below." >> $MESSAGE
|
||||
echo "+------------------------------+" >> $MESSAGE
|
||||
grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | grep -v 'failed adding'| awk '{print $4,$8}' | uniq | sed 's/,/ /' >> $MESSAGE
|
||||
echo "+------------------------------+" >> $MESSAGE
|
||||
mail -s "$SUBJECT" "$TO" < $MESSAGE
|
||||
rm $MESSAGE
|
||||
fi
|
||||
```
|
||||
|
||||
给 `new-user.sh` 添加可执行权限。
|
||||
|
||||
```
|
||||
$ chmod +x /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||
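在交给 cron 之前,不妨先手动运行一次脚本进行测试(前提是昨天的日志中确实有 `useradd` 记录,否则不会发送邮件):
|
||||
|
||||
```
|
||||
# /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||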
最后添加一个 cron 任务来自动化执行它。它会在每天 7 点运行。
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
|
||||
0 7 * * * /bin/bash /opt/scripts/new-user.sh
|
||||
```
|
||||
|
||||
注意:你将在每天 7 点收到一封邮件提醒,但这是昨天的日志。
|
||||
|
||||
你将会看到类似下面的邮件提醒。
|
||||
|
||||
```
|
||||
# cat /tmp/new-user-logs.txt
|
||||
|
||||
Hostname: 2g.server10.com
|
||||
|
||||
The New User Details are below.
|
||||
+------------------------------+
|
||||
2g.server10.com name=magesh
|
||||
2g.server10.com name=daygeek
|
||||
+------------------------------+
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/shell-script/
|
@ -0,0 +1,88 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11229-1.html)
|
||||
[#]: subject: (Linux Smartphone Librem 5 is Available for Preorder)
|
||||
[#]: via: (https://itsfoss.com/librem-5-available/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
基于 Linux 的智能手机 Librem 5 开启预售
|
||||
======
|
||||
|
||||
Purism 近期[宣布][1]了 [Librem 5 智能手机][2]的最终规格。它不是基于 Android 或 iOS 的,而是基于 [Android 的开源替代品][4]--[PureOS][3]。
|
||||
|
||||
随着这一消息的宣布,Librem 5 也正式[以 649 美元的价格开启预售][5](这是 7 月 31 日前的早鸟价),在那以后价格将会上涨 50 美元,产品将会于 2019 年第三季度发货。
|
||||
|
||||
![][6]
|
||||
|
||||
以下是 Purism 博客文章中关于 Librem 5 的信息:
|
||||
|
||||
> 我们认为手机不应该跟踪你,也不应该利用你的数字生活。
|
||||
|
||||
> Librem 5 意味着你有机会通过自由开源软件、开放式治理和透明度来收回和保护你的私人信息和数字生活。Librem 5 是一个**基于 [PureOS][3] 的手机**,这是一个完全免费、符合道德的**不基于 Android 或 iOS** 的开源操作系统(了解更多关于[为什么这很重要][7]的信息)。
|
||||
|
||||
> 我们已成功超额完成了众筹计划,我们将会一一去实现我们的承诺。Librem 5 的硬件和软件开发正在[稳步前进][8],它计划在 2019 年的第三季度发行初始版本。你可以用 649 美元的价格预购直到产品发货或正式价格生效。现在附赠外接显示器、键盘和鼠标的套餐也可以预购了。
|
||||
|
||||
### Librem 5 的配置
|
||||
|
||||
从它的预览来看,Librem 5 旨在提供更好的隐私保护和安全性。除此之外,它试图避免使用 Google 或 Apple 的服务。
|
||||
|
||||
虽然这个想法够好,它是如何成为一款低于 700 美元的商用智能手机?
|
||||
|
||||
![Librem 5 智能手机][9]
|
||||
|
||||
让我们来看一下它的配置:
|
||||
|
||||
![Librem 5][10]
|
||||
|
||||
从数据上讲它的配置已经足够高了。不是很好,也不是很差。但是,性能呢?用户体验呢?
|
||||
|
||||
我们并不能够确切地了解到它的信息,除非我们用过它。所以,如果你打算预购,应该要考虑到这一点。
|
||||
|
||||
### Librem 5 提供终身软件更新支持
|
||||
|
||||
当然,和同价位的智能手机相比,它的这些配置并不是很优秀。
|
||||
|
||||
然而,随着他们做出终身软件更新支持的承诺后,它看起来确实像被开源爱好者所钟情的一个好产品。
|
||||
|
||||
### 其他关键特性
|
||||
|
||||
Purism 还强调 Librem 5 将成为有史以来第一款以 [Matrix][12] 提供支持的智能手机。这意味着它将支持端到端的分布式加密通讯的短信、电话。
|
||||
|
||||
除了这些,耳机接口和用户可以自行更换电池使它成为一个可靠的产品。
|
||||
|
||||
### 总结
|
||||
|
||||
即使它很难与 Android 或 iOS 智能手机竞争,但多一种选择方式总是好的。Librem 5 不可能成为每个用户都喜欢的智能手机,但如果你是一个开源爱好者,而且正在寻找一款尊重隐私和安全,不使用 Google 和 Apple 服务的简单智能手机,那么这就很适合你。
|
||||
|
||||
另外,它提供终身的软件更新支持,这让它成为了一个优秀的智能手机。
|
||||
|
||||
你如何看待 Librem 5?有在考虑预购吗?请在下方的评论中将你的想法告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/librem-5-available/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Scvoet][c]
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[c]: https://github.com/scvoet
|
||||
[1]: https://puri.sm/posts/librem-5-smartphone-final-specs-announced/
|
||||
[2]: https://itsfoss.com/librem-linux-phone/
|
||||
[3]: https://pureos.net/
|
||||
[4]: https://itsfoss.com/open-source-alternatives-android/
|
||||
[5]: https://shop.puri.sm/shop/librem-5/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/librem-5-linux-smartphone.jpg?resize=800%2C450&ssl=1
|
||||
[7]: https://puri.sm/products/librem-5/pureos-mobile/
|
||||
[8]: https://puri.sm/posts/tag/phones
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-smartphone.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-specs.png?ssl=1
|
||||
[11]: https://itsfoss.com/linux-games-performance-boost-amd-gpu/
|
||||
[12]: http://matrix.org
|
@ -0,0 +1,173 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11224-1.html)
|
||||
[#]: subject: (Use Postfix to get email from your Fedora system)
|
||||
[#]: via: (https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
使用 Postfix 从 Fedora 系统中获取电子邮件
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
交流是非常重要的。你的电脑可能正试图告诉你一些重要的事情。但是,如果你没有正确配置邮件传输代理([MTA][2]),那么你可能收不到这些通知。Postfix 是一个[以易于配置和良好的安全记录而闻名][3]的 MTA。遵循以下步骤,以确保从本地服务发送的电子邮件通知能通过 Postfix MTA 路由到你的互联网电子邮件账户中。
|
||||
|
||||
### 安装软件包
|
||||
|
||||
使用 `dnf` 来安装一些必需的软件包([你应该已经配置了 sudo,对吧?][4]):
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
# dnf install postfix mailx
|
||||
```
|
||||
|
||||
如果以前配置了不同的 MTA,那么你可能需要将 Postfix 设置为系统默认。使用 `alternatives` 命令设置系统默认 MTA:
|
||||
|
||||
```
|
||||
$ sudo alternatives --config mta
|
||||
There are 2 programs which provide 'mta'.
|
||||
Selection Command
|
||||
*+ 1 /usr/sbin/sendmail.sendmail
|
||||
2 /usr/sbin/sendmail.postfix
|
||||
Enter to keep the current selection[+], or type selection number: 2
|
||||
```
|
||||
|
||||
### 创建一个 password_maps 文件
|
||||
|
||||
你需要创建一个 Postfix 查询表条目,其中包含你要用来发送电子邮件的账户的地址和密码:
|
||||
|
||||
```
|
||||
# MY_EMAIL_ADDRESS=glb@gmail.com
|
||||
# MY_EMAIL_PASSWORD=abcdefghijklmnop
|
||||
# MY_SMTP_SERVER=smtp.gmail.com
|
||||
# MY_SMTP_SERVER_PORT=587
|
||||
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
|
||||
# chmod 600 /etc/postfix/password_maps
|
||||
# unset MY_EMAIL_PASSWORD
|
||||
# history -c
|
||||
```
|
||||
|
||||
如果你使用的是 Gmail 账户,那么你需要为 Postfix 配置一个“应用程序密码”而不是使用你的 Gmail 密码。有关配置应用程序密码的说明,参阅“[使用应用程序密码登录][5]”。
|
||||
|
||||
接下来,你必须对 Postfix 查询表运行 `postmap` 命令,以创建或更新 Postfix 实际使用的文件的散列版本:
|
||||
|
||||
```
|
||||
# postmap /etc/postfix/password_maps
|
||||
```
|
||||
|
||||
散列后的版本将具有相同的文件名,但后缀为 `.db`。
|
||||
|
||||
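运行 `postmap` 之后,可以确认原文件和散列文件都已就位:
|
||||
|
||||
```
|
||||
# ls /etc/postfix/password_maps*
|
||||
/etc/postfix/password_maps  /etc/postfix/password_maps.db
|
||||
```
|
||||
|
||||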
### 更新 main.cf 文件
|
||||
|
||||
更新 Postfix 的 `main.cf` 配置文件,以引用刚刚创建的 Postfix 查询表。编辑该文件并添加以下行:
|
||||
|
||||
```
|
||||
relayhost = smtp.gmail.com:587
|
||||
smtp_tls_security_level = verify
|
||||
smtp_tls_mandatory_ciphers = high
|
||||
smtp_tls_verify_cert_match = hostname
|
||||
smtp_sasl_auth_enable = yes
|
||||
smtp_sasl_security_options = noanonymous
|
||||
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
|
||||
```
|
||||
|
||||
这里的 `relayhost` 设置假设你使用 Gmail,但你可以将其替换为你的系统实际要把邮件发送到的邮件主机的正确主机名和端口。
|
||||
|
||||
有关上述配置选项的最新详细信息,参考 man 帮助:
|
||||
|
||||
```
|
||||
$ man postconf.5
|
||||
```
|
||||
|
||||
### 启用、启动和测试 Postfix
|
||||
|
||||
更新 `main.cf` 文件后,启用并启动 Postfix 服务:
|
||||
|
||||
```
|
||||
# systemctl enable --now postfix.service
|
||||
```
|
||||
|
||||
然后,你可以使用 `exit` 命令或 `Ctrl+D` 以 root 身份退出 `sudo` 会话。你现在应该能够使用 `mail` 命令测试你的配置:
|
||||
|
||||
```
|
||||
$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com
|
||||
```
|
||||
|
||||
### 更新服务
|
||||
|
||||
如果你安装了像 [logwatch][6]、[mdadm][7]、[fail2ban][8]、[apcupsd][9] 或 [certwatch][10] 这样的服务,你现在可以更新它们的配置,以便它们的电子邮件通知转到你的 Internet 电子邮件地址。
|
||||
|
||||
另外,你可能希望将发送到本地系统 root 账户的所有电子邮件都转发到你的互联网电子邮件地址。为此,请将以下行添加到系统的 `/etc/aliases` 文件中(你需要使用 `sudo` 编辑此文件,或先切换到 `root` 账户):
|
||||
|
||||
```
|
||||
root: glb+root@gmail.com
|
||||
```
|
||||
|
||||
现在运行此命令重新读取别名:
|
||||
|
||||
```
|
||||
# newaliases
|
||||
```
|
||||
|
||||
* 提示: 如果你使用的是 Gmail,那么你可以在用户名和 `@` 符号之间[添加字母数字标记][11],如上所示,以便更轻松地识别和过滤从计算机收到的电子邮件。
|
||||
|
||||
### 常用命令
|
||||
|
||||
**查看邮件队列:**
|
||||
|
||||
```
|
||||
$ mailq
|
||||
```
|
||||
|
||||
**清除队列中的所有电子邮件:**
|
||||
|
||||
```
|
||||
# postsuper -d ALL
|
||||
```
|
||||
|
||||
**过滤设置,以获得感兴趣的值:**
|
||||
|
||||
```
|
||||
$ postconf | grep "^relayhost\|^smtp_"
|
||||
```
|
||||
|
||||
**查看 `postfix/smtp` 日志:**
|
||||
|
||||
```
|
||||
$ journalctl --no-pager -t postfix/smtp
|
||||
```
|
||||
|
||||
**进行配置更改后重新加载 postfix:**
|
||||
|
||||
```
|
||||
$ systemctl reload postfix
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/postfix-816x345.jpg
|
||||
[2]: https://en.wikipedia.org/wiki/Message_transfer_agent
|
||||
[3]: https://en.wikipedia.org/wiki/Postfix_(software)
|
||||
[4]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[5]: https://support.google.com/accounts/answer/185833
|
||||
[6]: https://src.fedoraproject.org/rpms/logwatch
|
||||
[7]: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
|
||||
[8]: https://fedoraproject.org/wiki/Fail2ban_with_FirewallD
|
||||
[9]: https://src.fedoraproject.org/rpms/apcupsd
|
||||
[10]: https://www.linux.com/learn/automated-certificate-expiration-checks-centos
|
||||
[11]: https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html
|
||||
[12]: https://unsplash.com/@sharonmccutcheon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[13]: https://unsplash.com/search/photos/envelopes?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,80 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11187-1.html)
|
||||
[#]: subject: (The fastest open source CPU ever, Facebook shares AI algorithms fighting harmful content, and more news)
|
||||
[#]: via: (https://opensource.com/article/19/8/news-august-3)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
|
||||
开源新闻综述:有史以来最快的开源 CPU、Facebook 分享对抗有害内容的 AI 算法
|
||||
======
|
||||
|
||||
> 不要错过最近两周最大的开源新闻。
|
||||
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
在本期开源新闻综述中,我们分享了 Facebook 开源了两种算法来查找有害内容,Apple 在数据传输项目中的新角色以及你应该知道的更多新闻。
|
||||
|
||||
### Facebook 开源算法用于查找有害内容
|
||||
|
||||
Facebook 宣布它[开源两种算法][2]用于在该平台上发现儿童剥削、恐怖主义威胁和写实暴力。在 8 月 1 日的博客文章中,Facebook 分享了 PDQ 和 TMK + PDQF 这两种将文件存储为数字哈希的技术,然后将它们与已知的有害内容示例进行比较 - [现在已放在 GitHub 上][3]。
|
||||
|
||||
该代码是在 Facebook 要尽快将有害内容从平台上移除的压力之下发布的。三月份在新西兰的大规模谋杀案被曝光在 Facebook Live 上,澳大利亚政府[威胁][4]如果视频没有及时删除 Facebook 高管将被处以罚款和入狱。通过发布这些算法的源代码,Facebook 表示希望非营利组织、科技公司和独立开发人员都能帮助他们更快地找到并删除有害内容。
|
||||
|
||||
### 阿里巴巴发布了最快的开源 CPU
|
||||
|
||||
上个月,阿里巴巴的子公司平头哥半导体公司[发布了其玄铁 91 处理器][5]。它可以用于人工智能、物联网、5G 和自动驾驶汽车等基础设施。它拥有 7.1 Coremark/MHz 的基准,使其成为市场上最快的开源 CPU。
|
||||
|
||||
平头哥宣布计划在今年 9 月在 GitHub 上提供其优质代码。分析师认为此次发布旨在帮助中国实现其目标,即到 2021 年使用本地供应商满足 40% 的处理器需求。近期美国的关税调整威胁要破坏这一目标,从而造成了对开源计算机组件的需求。
|
||||
|
||||
### Mattermost 为开源协作应用提供了理由
|
||||
|
||||
所有开源社区都受益于可以从一个或多个地方彼此进行通信。团队聊天应用程序的世界似乎由 Slack 和 Microsoft Teams 等极少数的强大工具主导。大多数选择都是基于云的和专有产品的;而 Mattermost 采用了不同的方法,销售开源协作应用程序的价值。
|
||||
|
||||
“人们想要一个开源替代品,因为他们需要信任、灵活性和只有开源才能提供的创新,”Mattermost 的联合创始人兼首席执行官 Ian Tien 说。
|
||||
|
||||
随着从优步到国防部的客户,Mattermost 走上了一个关键市场:需要开源软件的团队,他们可以信任这些软件并安装在他们自己的服务器上。对于需要协作应用程序在其内部基础架构上运行的企业,Mattermost 填补了 [Atlassian 离开后][6] 的空白。在 Computerworld 的一篇文章中,Matthew Finnegan [探讨][7]了为什么在本地部署的开源聊天工具尚未死亡。
|
||||
|
||||
### Apple 加入了开源数据传输项目
|
||||
|
||||
Google、Facebook、Twitter 和微软去年联合创建了<ruby>数据传输项目<rt>Data Transfer Project</rt></ruby>(DTP)。DTP 被誉为通过自己的数据提升数据安全性和用户代理的一种方式,是一种罕见的技术上的团结展示。本周,Apple 宣布他们也将[加入][8]。
|
||||
|
||||
DTP 的目标是帮助用户通过开源平台将数据从一个在线服务转移到另一个在线服务。DTP 旨在通过使用 API 和授权工具来取消中间人,以便用户将数据从一个服务转移到另一个服务。这将消除用户下载数据然后将其上传到另一个服务的需要。Apple 加入 DTP 的选择将允许用户将数据传入和传出 iCloud,这可能是隐私权拥护者的一大胜利。
|
||||
|
||||
#### 其它新闻
|
||||
|
||||
* [FlexiWAN 的开源 SD-WAN 可在公共测试版中下载][9]
|
||||
* [开源的 Windows 计算器应用程序获得了永远置顶模式][10]
|
||||
* [通过 Zowe,开源和 DevOps 正在使大型计算机民主化][11]
|
||||
* [Mozilla 首次推出 WebThings Gateway 开源路由器固件的实现][12]
|
||||
* [更新:向 Mozilla 代码库做出贡献][13]
|
||||
|
||||
*谢谢 Opensource.com 的工作人员和主持人本周的帮助。*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/news-august-3
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.theverge.com/2019/8/1/20750752/facebook-child-exploitation-terrorism-open-source-algorithm-pdq-tmk
|
||||
[3]: https://github.com/facebook/ThreatExchange/tree/master/hashing/tmk
|
||||
[4]: https://www.buzzfeed.com/hannahryan/social-media-facebook-livestreaming-laws-christchurch
|
||||
[5]: https://hexus.net/tech/news/cpu/133229-alibabas-16-core-risc-v-fastest-open-source-cpu-yet/
|
||||
[6]: https://lab.getapp.com/atlassian-slack-on-premise-software/
|
||||
[7]: https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html
|
||||
[8]: https://www.techspot.com/news/81221-apple-joins-data-transfer-project-open-source-project.html
|
||||
[9]: https://www.fiercetelecom.com/telecom/flexiwan-s-open-source-sd-wan-available-for-download-public-beta-release
|
||||
[10]: https://mspoweruser.com/open-source-windows-calculator-app-to-get-always-on-top-mode/
|
||||
[11]: https://siliconangle.com/2019/07/29/zowe-open-source-devops-democratizing-mainframe-computer/
|
||||
[12]: https://venturebeat.com/2019/07/25/mozilla-debuts-webthings-gateway-open-source-router-firmware-for-turris-omnia/
|
||||
[13]: https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Introduction
|
@ -0,0 +1,105 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11206-1.html)
|
||||
[#]: subject: (4 cool new projects to try in COPR for August 2019)
|
||||
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-august-2019/)
|
||||
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
|
||||
|
||||
COPR 仓库中 4 个很酷的新项目(2019.8)
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
COPR 是个人软件仓库的[集合][2],其中的软件不在 Fedora 官方仓库中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经项目自身的背书。但是,它是一种尝试新软件或实验性软件的巧妙方式。
|
||||
|
||||
这是 COPR 中一组新的有趣项目。
|
||||
|
||||
### Duc
|
||||
|
||||
[duc][3] 是磁盘使用率检查和可视化工具的集合。Duc 使用索引数据库来保存系统上文件的大小。索引完成后,你可以通过命令行界面或 GUI 快速查看磁盘使用情况。
|
||||
|
||||
![][4]
|
||||
|
||||
#### 安装说明
|
||||
|
||||
该[仓库][5]目前为 EPEL 7、Fedora 29 和 30 提供 duc。要安装 duc,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable terrywang/duc
|
||||
sudo dnf install duc
|
||||
```
|
||||
|
||||
### MuseScore
|
||||
|
||||
[MuseScore][6] 是一个处理音乐符号的软件。使用 MuseScore,你可以使用鼠标、虚拟键盘或 MIDI 控制器创建乐谱。然后,MuseScore 可以播放创建的音乐或将其导出为 PDF、MIDI 或 MusicXML。此外,它还有一个由 MuseScore 用户创建的含有大量乐谱的数据库。
|
||||
|
||||
![][7]
|
||||
|
||||
#### 安装说明
|
||||
|
||||
该[仓库][8]目前为 Fedora 29 和 30 提供 MuseScore。要安装 MuseScore,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable jjames/MuseScore
|
||||
sudo dnf install musescore
|
||||
```
|
||||
|
||||
### 动态墙纸编辑器
|
||||
|
||||
[动态墙纸编辑器][9] 是一个可在 GNOME 中创建和编辑随时间变化的壁纸集合的工具。这可以使用 XML 文件来完成,但是,动态墙纸编辑器通过其图形界面使其变得简单,你可以在其中简单地添加图片、排列图片并设置每张图片的持续时间以及它们之间的过渡。
|
||||
|
||||
![][10]
|
||||
|
||||
#### 安装说明
|
||||
|
||||
该[仓库][11]目前为 Fedora 30 和 Rawhide 提供动态墙纸编辑器。要安装它,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable atim/dynamic-wallpaper-editor
|
||||
sudo dnf install dynamic-wallpaper-editor
|
||||
```
|
||||
|
||||
### Manuskript
|
||||
|
||||
[Manuskript][12] 是一个给作者的工具,旨在让创建大型写作项目更容易。它既可以作为编写文本的编辑器,也可以作为组织故事本身、故事人物和单个情节的注释的工具。
|
||||
|
||||
![][13]
|
||||
|
||||
#### 安装说明
|
||||
|
||||
该[仓库][14]目前为 Fedora 29、30 和 Rawhide 提供 Manuskript。要安装 Manuskript,请使用以下命令:
|
||||
|
||||
```
|
||||
sudo dnf copr enable notsag/manuskript
|
||||
sudo dnf install manuskript
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-august-2019/
|
||||
|
||||
作者:[Dominik Turecek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/dturecek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
|
||||
[2]: https://copr.fedorainfracloud.org/
|
||||
[3]: https://duc.zevv.nl/
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/07/duc.png
|
||||
[5]: https://copr.fedorainfracloud.org/coprs/terrywang/duc/
|
||||
[6]: https://musescore.org/
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2019/07/musescore-1024x512.png
|
||||
[8]: https://copr.fedorainfracloud.org/coprs/jjames/MuseScore/
|
||||
[9]: https://github.com/maoschanz/dynamic-wallpaper-editor
|
||||
[10]: https://fedoramagazine.org/wp-content/uploads/2019/07/dynamic-walppaper-editor.png
|
||||
[11]: https://copr.fedorainfracloud.org/coprs/atim/dynamic-wallpaper-editor/
|
||||
[12]: https://www.theologeek.ch/manuskript/
|
||||
[13]: https://fedoramagazine.org/wp-content/uploads/2019/07/manuskript-1024x600.png
|
||||
[14]: https://copr.fedorainfracloud.org/coprs/notsag/manuskript/
|
@ -0,0 +1,105 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11211-1.html)
|
||||
[#]: subject: (GameMode – A Tool To Improve Gaming Performance On Linux)
|
||||
[#]: via: (https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
GameMode:提高 Linux 游戏性能的工具
|
||||
======
|
||||
|
||||
![Gamemmode improve gaming performance on Linux][1]
|
||||
|
||||
去问一些 Linux 用户为什么他们仍然坚持 Windows 双启动,他们的答案可能是 - “游戏!”。这是真的!幸运的是,开源游戏平台如 [Lutris][2] 和专有游戏平台 Steam 已经为 Linux 平台带来了许多游戏,并且近几年来显著改善了 Linux 的游戏体验。今天,我偶然发现了另一款名为 GameMode 的 Linux 游戏相关开源工具,它能让用户提高 Linux 上的游戏性能。
|
||||
|
||||
GameMode 基本上是一组守护进程/库,它可以按需优化 Linux 系统的游戏性能。我原以为 GameMode 是一个用来杀死后台高资源占用进程的工具,但它并不是。它实际上只是让 CPU **在用户玩游戏时自动运行在高性能模式下**,并帮助 Linux 用户从游戏中获得最佳性能。
|
||||
|
||||
在玩游戏时,GameMode 通过对宿主机请求临时应用一组优化来显著提升游戏性能。目前,它支持下面这些优化:
|
||||
|
||||
* CPU 调控器,
|
||||
* I/O 优先级,
|
||||
* 进程 nice 值
|
||||
* 内核调度器(SCHED_ISO),
|
||||
* 禁止屏幕保护,
|
||||
* GPU 高性能模式(NVIDIA 和 AMD),GPU 超频(NVIDIA),
|
||||
* 自定义脚本。
|
||||
|
||||
GameMode 是由世界领先的游戏发行商 [Feral Interactive][3] 开发的自由开源的系统工具。
|
||||
|
||||
### 安装 GameMode
|
||||
|
||||
GameMode 适用于许多 Linux 发行版。
|
||||
|
||||
在 Arch Linux 及其变体上,你可以使用任何 AUR 助手程序,如 [Yay][5] 从 [AUR][4] 安装它。
|
||||
|
||||
```
|
||||
$ yay -S gamemode
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、Linux Mint 和其他基于 Deb 的系统上:
|
||||
|
||||
```
|
||||
$ sudo apt install gamemode
|
||||
```
|
||||
|
||||
如果 GameMode 在你的系统上不可用,你可以按照其 GitHub 页面中开发章节下的描述,从源码手动编译和安装它。
|
||||
|
||||
### 激活 GameMode 支持以改善 Linux 上的游戏性能
|
||||
|
||||
以下是集成支持了 GameMode 的游戏列表,因此我们无需进行任何其他配置即可激活 GameMode 支持。
|
||||
|
||||
* 古墓丽影:崛起
|
||||
* 全面战争传奇:不列颠尼亚王座
|
||||
* 全面战争:战锤 2
|
||||
* 尘埃 4
|
||||
* 全面战争:三国
|
||||
|
||||
只需运行这些游戏,就会自动启用 GameMode 支持。
|
||||
|
||||
这里还有一个将 GameMode 与 GNOME Shell 集成的[扩展][6],它会在顶栏指示 GameMode 何时处于活跃状态。
|
||||
|
||||
对于其他游戏,你可能需要手动请求 GameMode 支持,如下所示。
|
||||
|
||||
```
|
||||
gamemoderun ./game
|
||||
```
|
||||
|
||||
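例如,对于 Steam 中的游戏,常见做法是在该游戏的启动选项中填入下面这一行来请求 GameMode(这是 GameMode 文档中给出的用法,此处仅作示意):
|
||||
|
||||
```
|
||||
gamemoderun %command%
|
||||
```
|
||||
|
||||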
我不喜欢游戏,并且我已经很多年没玩游戏了。所以,我无法分享一些实际的基准测试。
|
||||
|
||||
但是,我在 Youtube 上找到了一个简短的[视频教程](https://youtu.be/4gyRyYfyGJw),以便为 Lutris 游戏启用 GameMode 支持。对于那些想要第一次尝试 GameMode 的人来说,这是个不错的开始。
|
||||
|
||||
通过浏览视频中的评论,我可以说 GameMode 确实提高了 Linux 上的游戏性能。
|
||||
|
||||
对于更多细节,请参阅 [GameMode 的 GitHub 仓库][7]。
|
||||
|
||||
相关阅读:
|
||||
|
||||
* [GameHub – 将所有游戏集合在一起的仓库][8]
|
||||
* [如何在 Linux 中运行 MS-DOS 游戏和程序][9]
|
||||
|
||||
你用过 GameMode 吗?它真的有改善 Linux 上的游戏性能吗?请在下面的评论栏分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Gamemode-720x340.png
|
||||
[2]: https://www.ostechnix.com/manage-games-using-lutris-linux/
|
||||
[3]: http://www.feralinteractive.com/en/
|
||||
[4]: https://aur.archlinux.org/packages/gamemode/
|
||||
[5]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]: https://github.com/gicmo/gamemode-extension
|
||||
[7]: https://github.com/FeralInteractive/gamemode
|
||||
[8]: https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/
|
||||
[9]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
|
@ -0,0 +1,109 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11232-1.html)
|
||||
[#]: subject: (How To Add ‘New Document’ Option In Right Click Context Menu In Ubuntu 18.04)
|
||||
[#]: via: ((https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Ubuntu 18.04 的右键单击菜单中添加“新建文档”按钮
|
||||
======
|
||||
|
||||
![Add 'New Document' Option In Right Click Context Menu In Ubuntu 18.04 GNOME desktop][1]
|
||||
|
||||
前几天,我在各种在线资源站点上收集关于 [Linux 包管理器][2] 的参考资料。当我想创建一个用于保存笔记的文件时,我突然发现我的 Ubuntu 18.04 LTS 桌面上没有“新建文档”按钮了,它好像离奇失踪了。在谷歌搜索一番后,我发现原来“新建文档”按钮不再被集成在 Ubuntu GNOME 版本中了。庆幸的是,我找到了一个在 Ubuntu 18.04 LTS 桌面的右键单击菜单中添加“新建文档”按钮的简易解决方案。
|
||||
|
||||
就像你在下方截图中看到的一样,Nautilus 文件管理器的右键单击菜单中并没有“新建文档”按钮。
|
||||
|
||||
![][3]
|
||||
|
||||
*Ubuntu 18.04 移除了右键点击菜单中的“新建文档”选项。*
|
||||
|
||||
如果你想添加此按钮,请按照以下步骤进行操作。
|
||||
|
||||
### 在 Ubuntu 的右键单击菜单中添加“新建文档”按钮
|
||||
|
||||
首先,你需要确保你的系统中有 `~/Templates` 文件夹。如果没有的话,可以按照下面的命令进行创建。
|
||||
|
||||
```
|
||||
$ mkdir ~/Templates
|
||||
```
|
||||
|
||||
然后打开终端应用并使用 `cd` 命令进入 `~/Templates` 文件夹:
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
```
|
||||
|
||||
创建一个空文件:
|
||||
|
||||
```
|
||||
$ touch Empty\ Document
|
||||
```
|
||||
|
||||
或
|
||||
|
||||
```
|
||||
$ touch "Empty Document"
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
新打开一个 Nautilus 文件管理器,然后检查一下右键单击菜单中是否成功添加了“新建文档”按钮。
|
||||
|
||||
![][5]
|
||||
|
||||
*在 Ubuntu 18.04 的右键单击菜单中添加“新建文件”按钮*
|
||||
|
||||
如上图所示,我们重新启用了“新建文件”的按钮。
|
||||
|
||||
你还可以为不同文件类型添加按钮。
|
||||
|
||||
```
|
||||
$ cd ~/Templates
|
||||
|
||||
$ touch New\ Word\ Document.docx
|
||||
$ touch New\ PDF\ Document.pdf
|
||||
$ touch New\ Text\ Document.txt
|
||||
$ touch New\ PyScript.py
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
在“新建文件”子菜单中给不同的文件类型添加按钮
|
||||
|
||||
注意,所有文件都应该创建在 `~/Templates` 文件夹下。
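
另外值得一提的是，模板文件不必是空的。“新建文档”会原样复制 `~/Templates` 下的文件，所以你可以在模板中预置一些样板内容。下面是一个小示例（文件名沿用上面创建的 `New PyScript.py`）：为 Python 脚本模板预先写入 shebang 行：

```
$ printf '#!/usr/bin/env python3\n\n' > ~/Templates/New\ PyScript.py
```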

现在，打开 Nautilus 并检查“新建文档”菜单中是否有相应的文件类型选项。

![][7]

如果要从子菜单中删除任一文件类型，只需移除 `~/Templates` 目录中相应的文件即可：

```
$ rm ~/Templates/New\ Word\ Document.docx
```

我十分好奇为什么最新的 Ubuntu GNOME 版本移除了这个常用选项。不过，重新启用它也十分简单，只需要几分钟。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/

作者：[sk][a]
选题：[lujun9972][b]
译者：[scvoet](https://github.com/scvoet)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux 中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-1-720x340.png
[2]: https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/07/new-document-option-missing.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/Create-empty-document-in-Templates-directory.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu-In-Ubuntu.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-options-for-different-files-types.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/07/Add-New-Document-Option-In-Right-Click-Context-Menu.png
@ -0,0 +1,341 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11194-1.html)
[#]: subject: (How To Find Hardware Specifications On Linux)
[#]: via: (https://www.ostechnix.com/getting-hardwaresoftware-specifications-in-linux-mint-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

如何在 Linux 上查找硬件规格
======

![](https://img.linux.net.cn/data/attachment/album/201908/06/111717t1f77n7v3egxhf18.jpg)

在 Linux 系统上有许多工具可用于查找硬件规格。在这里，我列出了四种最常用的工具，它们可以获取 Linux 系统的几乎所有硬件（和软件）细节。好在这些工具在某些 Linux 发行版上是默认预装的。我在 Ubuntu 18.04 LTS 桌面上测试了这些工具，但它们也适用于其他 Linux 发行版。

### 1、LSHW

lshw（硬件列表）是一个简单但功能齐全的实用程序，它提供了 Linux 系统上硬件规格的详细信息。它可以报告确切的内存规格、固件版本、主板规格、CPU 版本和速度、缓存规格、总线速度等。信息可以以纯文本、XML 或 HTML 格式输出。

它目前支持 DMI（仅限 x86 和 EFI）、Open Firmware 设备树（仅限 PowerPC）、PCI/AGP、ISA PnP（x86）、CPUID（x86）、IDE/ATA/ATAPI、PCMCIA（仅在 x86 上测试过）、USB 和 SCSI。

就像我已经说过的那样，Ubuntu 默认预装了 lshw。如果它未安装在你的 Ubuntu 系统中，请使用以下命令安装它：

```
$ sudo apt install lshw lshw-gtk
```

在其他 Linux 发行版上，例如 Arch Linux，运行：

```
$ sudo pacman -S lshw lshw-gtk
```

安装后，运行 `lshw` 以查找系统硬件详细信息：

```
$ sudo lshw
```

你将看到详细的系统硬件输出。

示例输出：

![][2]

*使用 lshw 在 Linux 上查找硬件规格*

请注意，如果你没有以 `sudo` 权限运行 `lshw` 命令，则输出可能不完整或不准确。

`lshw` 可以将输出显示为 HTML 页面。为此，请使用：

```
$ sudo lshw -html
```

同样，我们可以将设备树输出为 XML 和 JSON 格式，如下所示：

```
$ sudo lshw -xml
$ sudo lshw -json
```

要输出显示硬件路径的设备树，请使用 `-short` 选项：

```
$ sudo lshw -short
```

![][3]

*使用 lshw 显示具有硬件路径的设备树*

要列出设备的总线信息、详细的 SCSI、USB、IDE 和 PCI 地址，请运行：

```
$ sudo lshw -businfo
```

默认情况下，`lshw` 显示所有硬件详细信息。你还可以使用 `-class` 选项只查看特定类别的硬件信息，例如处理器、内存、显示器等。可用的类别名可以在 `lshw -short` 或 `lshw -businfo` 的输出中找到。

要显示特定类别（例如处理器）的详细信息，请执行以下操作：

```
$ sudo lshw -class processor
```

示例输出：

```
*-cpu
   description: CPU
   product: Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
   vendor: Intel Corp.
   physical id: 4
   bus info: cpu@0
   version: Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
   serial: To Be Filled By O.E.M.
   slot: CPU 1
   size: 913MHz
   capacity: 2300MHz
   width: 64 bits
   clock: 100MHz
   capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm arat pln pts md_clear flush_l1d cpufreq
   configuration: cores=2 enabledcores=1 threads=2
```

类似的，你可以得到系统细节：

```
$ sudo lshw -class system
```

硬盘细节：

```
$ sudo lshw -class disk
```

网络细节：

```
$ sudo lshw -class network
```

内存细节：

```
$ sudo lshw -class memory
```

你也可以像下面这样一次列出多个设备类别的细节：

```
$ sudo lshw -class storage -class power -class volume
```

如果你想要查看带有硬件路径的细节信息，加上 `-short` 选项即可：

```
$ sudo lshw -short -class processor
```

示例输出：

```
H/W path Device Class Description
=======================================================
/0/4 processor Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
```

有时，你可能希望将某些硬件详细信息共享给别人，例如客户支持人员。如果是这样，你可以使用 `-sanitize` 选项从输出中删除潜在的敏感信息，如 IP 地址、序列号等：

```
$ sudo lshw -sanitize
```
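
这些选项也可以组合使用。下面是一个小示例（输出文件名 `hardware-report.html` 是随意取的）：生成一份已去除敏感信息的 HTML 报告，方便直接发送给支持人员：

```
$ sudo lshw -sanitize -html > hardware-report.html
```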

#### lshw-gtk GUI 工具

如果你对 CLI 不熟悉，可以使用 `lshw-gtk`，这是 `lshw` 命令行工具的图形界面。

它可以从终端或 Dash 中打开。

要从终端启动它，只需执行以下操作：

```
$ sudo lshw-gtk
```

这是 `lshw` 工具的默认 GUI 界面。

![][4]

*使用 lshw-gtk 在 Linux 上查找硬件*

只需双击“Portable Computer”即可进一步展开细节。

![][5]

*使用 lshw-gtk GUI 在 Linux 上查找硬件*

你可以双击后续的硬件选项卡以获取详细视图。

有关更多详细信息，请参阅手册页：

```
$ man lshw
```

### 2、Inxi

Inxi 是我最喜欢的另一个工具，可以用来查找 Linux 系统上的几乎所有信息。它是一个自由开源的、功能齐全的命令行系统信息工具。它可以显示系统硬件、CPU、驱动程序、Xorg、桌面、内核、GCC 版本、进程、RAM 使用情况以及各种其他有用信息。无论是硬盘还是 CPU、主板还是整个系统的完整细节，inxi 都能在几秒钟内准确地显示出来。由于它是 CLI 工具，你可以在桌面或服务器版本中使用它。有关更多详细信息，请参阅以下指南：

* [如何使用 inxi 发现系统细节][6]

### 3、Hardinfo

Hardinfo 可以为你提供 `lshw` 中没有的系统硬件和软件详细信息。

HardInfo 可以收集有关系统硬件和操作系统的信息，执行基准测试，并以 HTML 或纯文本格式生成可打印的报告。

如果 Ubuntu 中未安装 Hardinfo，请使用以下命令安装：

```
$ sudo apt install hardinfo
```

安装后，可以从终端或应用菜单启动 Hardinfo 工具。

以下是 Hardinfo 默认界面的外观。

![][7]

*使用 Hardinfo 在 Linux 上查找硬件*

正如你在上面的屏幕截图中看到的，Hardinfo 的 GUI 简单直观。

所有硬件信息分为四个主要组：计算机、设备、网络和基准。每个组都显示特定的硬件详细信息。

例如，要查看处理器详细信息，请单击“设备”组下的“处理器”选项。

![][8]

*使用 hardinfo 显示处理器详细信息*

与 `lshw` 不同，Hardinfo 可帮助你查找基本软件规格，如操作系统详细信息、内核模块、区域设置信息、文件系统使用情况、用户/组和开发工具等。

![][9]

*使用 hardinfo 显示操作系统详细信息*

Hardinfo 的另一个显著特点是它允许我们做简单的基准测试，来测试 CPU 和 FPU 性能以及一些图形用户界面功能。

![][10]

*使用 hardinfo 执行基准测试*

建议阅读：

* [Phoronix 测试套件 - 开源测试和基准测试工具][11]
* [UnixBench - 类 Unix 系统的基准套件][12]
* [如何从命令行对 Linux 命令和程序进行基准测试][13]

我们可以生成整个系统以及各个设备的报告。要生成报告，只需单击菜单栏上的“生成报告”按钮，然后选择要包含在报告中的信息。

![][14]

*使用 hardinfo 生成系统报告*

Hardinfo 也有几个命令行选项。

例如，要生成报告并在终端中显示它，请运行：

```
$ hardinfo -r
```

列出模块：

```
$ hardinfo -l
```
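
这两个选项还可以组合起来用。作为一个小示例（假设你的 Hardinfo 版本支持 `-m` 和 `-f` 选项，具体请以手册页为准），下面的命令只针对 `devices.so` 模块生成 HTML 格式的报告并保存到文件：

```
$ hardinfo -r -m devices.so -f html > devices-report.html
```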

更多信息请参考手册页：

```
$ man hardinfo
```

### 4、Sysinfo

Sysinfo 是 HardInfo 和 lshw-gtk 实用程序的又一个替代品，可用于获取下面列出的硬件和软件信息：

* 系统详细信息，如发行版版本、GNOME 版本、内核、gcc 和 Xorg 以及主机名。
* CPU 详细信息，如供应商标识、型号名称、频率、L2 缓存、型号和标志。
* 内存详细信息，如系统全部内存、可用内存、交换空间总量和空闲量、缓存、活动/非活动的内存。
* 存储控制器，如 IDE 接口、所有 IDE 设备、SCSI 设备。
* 硬件详细信息，如主板、图形卡、声卡和网络设备。

让我们使用以下命令安装 sysinfo：

```
$ sudo apt install sysinfo
```

Sysinfo 可以从终端或 Dash 启动。

要从终端启动它，请运行：

```
$ sysinfo
```

这是 Sysinfo 实用程序的默认界面。

![][15]

*sysinfo 界面*

如你所见，所有硬件（和软件）详细信息都分为五类，即系统、CPU、内存、存储和硬件。单击导航栏上的类别以获取相应的详细信息。

![][16]

*使用 Sysinfo 在 Linux 上查找硬件*

更多细节可以在手册页上找到：

```
$ man sysinfo
```

就这样。就像我已经提到的那样，还有很多工具可以显示软硬件规格，但这四个工具足以找到你的 Linux 发行版的所有软硬件规格信息。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/getting-hardwaresoftware-specifications-in-linux-mint-ubuntu/

作者：[sk][a]
选题：[lujun9972][b]
译者：[wxy](https://github.com/wxy)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2013/01/Find-Hardware-Specifications-On-Linux-using-lshw-1.png
[3]: https://www.ostechnix.com/wp-content/uploads/2013/01/Show-device-tree-with-hardware-path-using-lshw.png
[4]: https://www.ostechnix.com/wp-content/uploads/2013/01/Find-Hardware-Specifications-On-Linux-using-lshw-gtk-1.png
[5]: https://www.ostechnix.com/wp-content/uploads/2013/01/Find-Hardware-Specifications-On-Linux-using-lshw-gtk-2.png
[6]: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/
[7]: https://www.ostechnix.com/wp-content/uploads/2013/01/Find-Hardware-Specifications-On-Linux-Using-Hardinfo.png
[8]: https://www.ostechnix.com/wp-content/uploads/2013/01/Show-processor-details-using-hardinfo.png
[9]: https://www.ostechnix.com/wp-content/uploads/2013/01/Show-operating-system-details-using-hardinfo.png
[10]: https://www.ostechnix.com/wp-content/uploads/2013/01/Perform-benchmarks-using-hardinfo.png
[11]: https://www.ostechnix.com/phoronix-test-suite-open-source-testing-benchmarking-tool/
[12]: https://www.ostechnix.com/unixbench-benchmark-suite-unix-like-systems/
[13]: https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/
[14]: https://www.ostechnix.com/wp-content/uploads/2013/01/Generate-system-reports-using-hardinfo.png
[15]: https://www.ostechnix.com/wp-content/uploads/2013/01/sysinfo-interface.png
[16]: https://www.ostechnix.com/wp-content/uploads/2013/01/Find-Hardware-Specifications-On-Linux-Using-Sysinfo.png
@ -0,0 +1,240 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11220-1.html)
[#]: subject: (How To Set Up Time Synchronization On Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

如何在 Ubuntu 上设置时间同步
======

![](https://img.linux.net.cn/data/attachment/album/201908/13/135423xnk7zib00nn2aebv.jpg)

你可能设置过 [cron 任务][2] 来在特定时间备份重要文件或执行系统相关任务。也许你配置了一个日志服务器在特定时间间隔[轮转日志][3]。但如果你的时钟不同步，这些任务将无法按时执行。这就是要在 Linux 系统上设置正确的时区并保持时钟与互联网同步的原因。本指南介绍如何在 Ubuntu Linux 上设置时间同步。下面的步骤在 Ubuntu 18.04 上经过了测试，但对于其他使用 systemd 的 `timesyncd` 服务的基于 Ubuntu 的系统，步骤是相同的。

### 在 Ubuntu 上设置时间同步

通常，我们在安装系统时设置时区。但是，你可以根据需要更改或设置不同的时区。

首先，让我们使用 `date` 命令查看 Ubuntu 系统中的当前时区：

```
$ date
```

示例输出：

```
Tue Jul 30 11:47:39 UTC 2019
```

如上所见，`date` 命令显示实际日期和当前时间。这里，我当前的时区是 **UTC**，代表**协调世界时**。

或者，你可以在 `/etc/timezone` 文件中查找当前时区：

```
$ cat /etc/timezone
UTC
```

现在，让我们看看时钟是否与互联网同步。只需运行：

```
$ timedatectl
```

示例输出：

```
Local time: Tue 2019-07-30 11:53:58 UTC
Universal time: Tue 2019-07-30 11:53:58 UTC
RTC time: Tue 2019-07-30 11:53:59
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
```

如你所见，`timedatectl` 命令显示本地时间、世界时、时区、系统时钟是否与互联网服务器同步，以及 `systemd-timesyncd.service` 是处于活动状态还是非活动状态。就我而言，系统时钟已与互联网时间服务器同步。

如果时钟不同步，你会看到下面截图中显示的 `System clock synchronized: no`。

![][4]

*时间同步已禁用。*

注意：上面的截图是旧截图，这就是你看到不同日期的原因。

如果你看到 `System clock synchronized:` 的值为 `no`，那么 `timesyncd` 服务可能处于非活动状态。因此，只需重启该服务，再看看是否恢复正常：

```
$ sudo systemctl restart systemd-timesyncd.service
```

现在检查 `timesyncd` 服务状态：

```
$ sudo systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-07-30 10:50:18 UTC; 1h 11min ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 498 (systemd-timesyn)
Status: "Synchronized to time server [2001:67c:1560:8003::c7]:123 (ntp.ubuntu.com)."
Tasks: 2 (limit: 2319)
CGroup: /system.slice/systemd-timesyncd.service
└─498 /lib/systemd/systemd-timesyncd

Jul 30 10:50:30 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:31 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:31 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:32 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:32 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:51:06 ubuntuserver systemd-timesyncd[498]: Synchronized to time server [2001:67c:1560:800
```

如果此服务已启用并处于活动状态，那么系统时钟应与互联网时间服务器同步。

你可以使用以下命令验证是否启用了时间同步：

```
$ timedatectl
```

如果仍然不起作用，请运行以下命令以启用时间同步：

```
$ sudo timedatectl set-ntp true
```

现在，你的系统时钟将与互联网时间服务器同步。
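
默认情况下，`timesyncd` 使用的是 Ubuntu 官方的时间服务器（在上面的服务状态输出中可以看到 `ntp.ubuntu.com`）。如果你想改用其他 NTP 服务器，可以修改 `/etc/systemd/timesyncd.conf` 中 `[Time]` 小节的 `NTP=` 配置项，然后重启服务。下面是一个小示例（`ntp.example.com` 只是一个占位的服务器名，请换成实际可用的服务器）：

```
$ sudo sed -i 's|^#NTP=.*|NTP=ntp.example.com|' /etc/systemd/timesyncd.conf
$ sudo systemctl restart systemd-timesyncd.service
```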

#### 使用 timedatectl 命令更改时区

如果我想使用 UTC 以外的其他时区怎么办？这很容易！

首先，使用以下命令列出可用时区：

```
$ timedatectl list-timezones
```

你将看到类似于下图的输出。

![][5]

*使用 timedatectl 命令列出时区*

你可以使用以下命令设置所需的时区（例如，Asia/Shanghai）：

（LCTT 译注：本文原文使用印度时区作为示例，这里为了便于使用，换为中国标准时区 CST。另外，在时区设置中，要注意 CST 这个缩写会代表四个不同的时区，因此建议使用城市和 UTC+8 来设置。）

```
$ sudo timedatectl set-timezone Asia/Shanghai
```

使用 `date` 命令再次检查时区是否已真正更改：

```
$ date
Tue Jul 30 20:22:33 CST 2019
```

或者，如果需要详细输出，请使用 `timedatectl` 命令：

```
$ timedatectl
Local time: Tue 2019-07-30 20:22:35 CST
Universal time: Tue 2019-07-30 12:22:35 UTC
RTC time: Tue 2019-07-30 12:22:36
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
```

如你所见，我已将时区从 UTC 更改为 CST（中国标准时间）。

要切换回 UTC 时区，只需运行：

```
$ sudo timedatectl set-timezone UTC
```

#### 使用 tzdata 更改时区

在较旧的 Ubuntu 版本中，没有 `timedatectl` 命令。这种情况下，你可以使用 `tzdata`（时区数据）来设置时区：

```
$ sudo dpkg-reconfigure tzdata
```

选择你居住的地理区域。对我而言，我选择 **Asia**。选择 OK，然后按回车键。

![][6]

接下来，选择与你的时区对应的城市或地区。这里，我选择了 **Kolkata**（LCTT 译注：中国用户请相应使用 Shanghai 等城市）。

![][7]

最后，你将在终端中看到类似下面的输出：

```
Current default time zone: 'Asia/Shanghai'
Local time is now: Tue Jul 30 21:59:25 CST 2019.
Universal Time is now: Tue Jul 30 13:59:25 UTC 2019.
```

#### 在图形模式下配置时区

有些用户可能对命令行方式不太适应。如果你是其中之一，那么你可以轻松地在图形化的系统设置面板中进行设置。

按下 Super 键（Windows 键），在 Ubuntu dash 中输入 **settings**，然后点击设置图标。

![][8]

*从 Ubuntu dash 启动系统设置*

或者，单击位于 Ubuntu 桌面右上角的向下箭头，然后单击左上角的“设置”图标。

![][9]

*从顶部面板启动系统设置*

在下一个窗口中，选择“细节”，然后单击“日期与时间”选项。打开“自动的日期与时间”和“自动的时区”。

![][10]

*在 Ubuntu 中设置自动时区*

关闭设置窗口就行了！这样你的系统就会始终与互联网时间服务器保持同步。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/

作者：[sk][a]
选题：[lujun9972][b]
译者：[geekpi](https://github.com/geekpi)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Set-Up-Time-Synchronization-On-Ubuntu-720x340.png
[2]: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
[3]: https://www.ostechnix.com/manage-log-files-using-logrotate-linux/
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/timedatectl-command-output-ubuntu.jpeg
[5]: https://www.ostechnix.com/wp-content/uploads/2019/07/List-timezones-using-timedatectl-command.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/07/configure-time-zone-using-tzdata-1.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/07/configure-time-zone-using-tzdata-2.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/07/System-settings-Ubuntu-dash.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/07/Set-automatic-timezone-in-ubuntu.png
@ -0,0 +1,147 @@
[#]: collector: "lujun9972"
[#]: translator: "FSSlc"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11212-1.html"
[#]: subject: "How To Verify ISO Images In Linux"
[#]: via: "https://www.ostechnix.com/how-to-verify-iso-images-in-linux/"
[#]: author: "sk https://www.ostechnix.com/author/sk/"

如何在 Linux 中验证 ISO 镜像
======

![如何在 Linux 中校验 ISO 镜像][1]

你从喜爱的 Linux 发行版的官方网站或第三方网站下载了它的 ISO 镜像之后，接下来要做什么呢？是[创建可启动介质][2]并开始安装系统吗？并不是，请稍等一下。在开始使用它之前，强烈建议你检查一下刚下载到本地系统中的 ISO 文件是否是镜像站点中 ISO 文件的精确拷贝。因为前几年 [Linux Mint 的网站曾被攻破][3]，攻击者制作了一个植入后门的修改版 Linux Mint ISO 文件。所以，验证所下载 Linux ISO 镜像的可靠性和完整性非常重要。假如你不知道如何在 Linux 中验证 ISO 镜像，本次的简要介绍将给予你帮助，请接着往下看！

### 在 Linux 中验证 ISO 镜像

我们可以使用 ISO 镜像的“校验和”来验证 ISO 镜像。校验和是一串字母和数字的组合，用来检验下载文件的数据是否有错，以及验证其可靠性和完整性。当前存在不同类型的校验和，例如 SHA-0、SHA-1、SHA-2（224、256、384、512）和 MD5。MD5 校验和最为常用，但对于现代的 Linux 发行版，SHA-256 最常被使用。

我们将使用名为 `gpg` 和 `sha256sum` 的两个工具来验证 ISO 镜像的可靠性和完整性。

#### 下载校验和及签名

针对本篇指南的目的，我将使用 Ubuntu 18.04 LTS 服务器版 ISO 镜像来做验证，但对于其他的 Linux 发行版应该也是适用的。

在 Ubuntu 下载页的最上端附近，你将看到一些额外的文件（校验和及签名），正如下面展示的图片那样：

![Ubuntu 18.04 的校验和及签名][4]

其中名为 `SHA256SUMS` 的文件包含了这里所有可获取镜像的校验和，而 `SHA256SUMS.gpg` 文件则是这个文件的 GnuPG 签名。在下面的步骤中，我们将使用这个签名文件来**验证**校验和文件。

下载 Ubuntu 的 ISO 镜像文件以及刚才提到的那两个文件，然后将它们放到同一目录下，例如这里的 `ISO` 目录：

```
$ ls ISO/
SHA256SUMS SHA256SUMS.gpg ubuntu-18.04.2-live-server-amd64.iso
```

如你所见，我已经下载了 Ubuntu 18.04.2 LTS 服务器版本的镜像，以及对应的校验和文件和签名文件。

#### 下载有效的签名秘钥

现在，使用下面的命令来下载正确的签名秘钥：

```
$ gpg --keyid-format long --keyserver hkp://keyserver.ubuntu.com --recv-keys 0x46181433FBB75451 0xD94AA3F0EFE21092
```

示例输出如下：

```
gpg: key D94AA3F0EFE21092: 57 signatures not checked due to missing keys
gpg: key D94AA3F0EFE21092: public key "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" imported
gpg: key 46181433FBB75451: 105 signatures not checked due to missing keys
gpg: key 46181433FBB75451: public key "Ubuntu CD Image Automatic Signing Key <cdimage@ubuntu.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 2
gpg: imported: 2
```

#### 验证 SHA-256 校验和

接下来我们将使用签名来验证校验和文件：

```
$ gpg --keyid-format long --verify SHA256SUMS.gpg SHA256SUMS
```

下面是示例输出：

```
gpg: Signature made Friday 15 February 2019 04:23:33 AM IST
gpg: using DSA key 46181433FBB75451
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key <cdimage@ubuntu.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: C598 6B4F 1257 FFA8 6632 CBA7 4618 1433 FBB7 5451
gpg: Signature made Friday 15 February 2019 04:23:33 AM IST
gpg: using RSA key D94AA3F0EFE21092
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
```

假如你在输出中看到 `Good signature` 字样，那么该校验和文件便是由 Ubuntu 开发者制作，并由秘钥文件的所属者签名认证的。

#### 检验下载的 ISO 文件

下面让我们继续检查下载的 ISO 文件是否和所给的校验和相匹配。为了达到该目的，只需要运行：

```
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
ubuntu-18.04.2-live-server-amd64.iso: OK
```

假如校验和是匹配的，你将看到 `OK` 字样，这意味着下载的文件是合法的，没有被改变或篡改过。

假如你没有获得类似的输出，或者看到不同的输出，则该 ISO 文件可能已经被修改过或者没有被正确地下载。你必须从一个更好的下载源重新下载该文件。
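
另外，`SHA256SUMS` 文件中通常列出了多个镜像的校验和，而你可能只下载了其中一个。这种情况下可以只校验你下载的那个文件，下面是一个小示例（假设你使用的是 bash，文件名请换成你实际下载的镜像名）：

```
$ sha256sum -c <(grep ubuntu-18.04.2-live-server-amd64.iso SHA256SUMS)
ubuntu-18.04.2-live-server-amd64.iso: OK
```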

某些 Linux 发行版已经在其下载页面中直接给出了校验和。例如 Pop!_OS 的开发者就在他们的下载页面中提供了所有 ISO 镜像的 SHA-256 校验和，这样你就可以快速地验证这些 ISO 镜像。

![Pop os 位于其下载页面中的 SHA256 校验和][5]

在下载完 ISO 镜像文件后，可以使用下面的命令来计算它的校验和：

```
$ sha256sum Soft_backup/ISOs/pop-os_18.04_amd64_intel_54.iso
```

示例输出如下：

```
680e1aa5a76c86843750e8120e2e50c2787973343430956b5cbe275d3ec228a6 Soft_backup/ISOs/pop-os_18.04_amd64_intel_54.iso
```

![Pop os 的 SHA256 校验和的值][6]

在上面的输出中，以 `680e1aa` 开头的部分为 SHA-256 校验和的值。请将该值与下载页面中提供的 SHA-256 校验和的值进行比较，如果这两个值相同，那说明这个下载的 ISO 文件是合法的，与它的原有状态相比没有经过更改或者篡改。万事俱备，你可以进行下一步了！

上面的内容便是我们在 Linux 中验证一个 ISO 文件的可靠性和完整性的方法。无论你是从官方站点还是第三方站点下载 ISO 文件，我们总是推荐你在使用它们之前做一次简单的快速验证。希望本篇的内容对你有所帮助。

参考文献：

* [https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu][7]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-verify-iso-images-in-linux/

作者：[sk][a]
选题：[lujun9972][b]
译者：[FSSlc](https://github.com/FSSlc)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Verify-ISO-Images-In-Linux-720x340.png
[2]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
[3]: https://blog.linuxmint.com/?p=2994
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-18.04-checksum-and-signature.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/07/Pop-os-SHA256-sum.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/07/Pop-os-SHA256-sum-value.png
[7]: https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu
@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11207-1.html)
[#]: subject: (Microsoft finds Russia-backed attacks that exploit IoT devices)
[#]: via: (https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

微软发现由俄罗斯背后支持的利用物联网设备进行的攻击
======

> 微软表示，默认密码、未打补丁的设备和不完整的物联网设备清单，是俄罗斯 STRONTIUM 黑客组织得以对企业发起攻击的原因。

![Zmeel / Getty Images][1]

在微软安全响应中心周一发布的博客文章中，该公司称，STRONTIUM 黑客组织对一家未披露名字的微软客户进行了基于 [IoT][2] 的攻击。安全研究人员相信，STRONTIUM 黑客组织与俄罗斯的 GRU 军事情报机构关系密切。

微软[在博客中说][3]，它在 4 月份发现的攻击针对三种特定的物联网设备：一部 VoIP 电话、一部视频解码器和一台打印机（该公司拒绝说明品牌），攻击者利用它们获得了对不特定的公司网络的访问权限。其中两个设备遭到入侵是因为没有更改过制造商的默认密码，而另一个设备则是因为没有应用最新的安全补丁。

以这种方式受到攻击的设备成为了进入安全网络的后门，允许攻击者自由扫描这些网络以寻找进一步的漏洞，并访问其他系统获取更多的信息。攻击者还被发现在调查受攻击网络上的管理组，试图获得更多访问权限，以及分析本地子网流量以获取其他数据。

STRONTIUM，也被称为 Fancy Bear、Pawn Storm、Sofacy 和 APT28，被认为是代表俄罗斯政府进行的一系列恶意网络活动的幕后黑手，其中包括 2016 年对民主党全国委员会的攻击、对世界反兴奋剂机构的攻击、针对调查马来西亚航空公司 17 号航班在乌克兰上空被击落情况的记者的攻击、向美国军人的妻子发送捏造的死亡威胁等等。

根据 2018 年 7 月特别顾问罗伯特·穆勒办公室发布的起诉书，STRONTIUM 袭击的指挥者是一群俄罗斯军官，所有这些人都因这些罪行被 FBI 通缉。

微软会在发现客户遭到国家级攻击者的攻击时通知客户，在过去 12 个月内，它发送了大约 1,400 条与 STRONTIUM 相关的通知。微软表示，其中大多数（五分之四）是对政府、军队、国防、IT、医药、教育和工程领域组织的攻击，其余的则是针对非政府组织、智囊团和其他“政治附属组织”。

根据微软团队的说法，漏洞的核心是机构缺乏对其网络上运行的所有设备的充分认识。他们建议：对在企业环境中运行的所有 IoT 设备进行编目，为每个设备制定自定义安全策略，在可行的情况下将物联网设备隔离在单独的网络上，并对物联网组件执行定期的补丁和配置审核。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html

作者：[Jon Gold][a]
选题：[lujun9972][b]
译者：[wxy](https://github.com/wxy)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/cso_russian_hammer_and_sickle_binary_code_by_zmeel_gettyimages-927363118_2400x1600-100801412-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://msrc-blog.microsoft.com/2019/08/05/corporate-iot-a-path-to-intrusion/
[4]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[5]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[6]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[7]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[8]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[9]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[10]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[11]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[12]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[13]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world
@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11248-1.html)
[#]: subject: (Unboxing the Raspberry Pi 4)
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
[#]: author: (Anderson Silva https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall)

树莓派 4 开箱记
======

> 树莓派 4 与其前代产品相比具有令人印象深刻的性能提升，而入门套件使其易于快速启动和运行。

![](https://img.linux.net.cn/data/attachment/album/201908/20/091730rl99q2ahycd4sz9h.jpg)

当树莓派 4 [在 6 月底宣布发布][2]时，我没有迟疑，在发布的同一天就从 [CanaKit][3] 订购了两套树莓派 4 入门套件。1GB RAM 版本有现货，但 4GB 版本要到 7 月 19 日才能发货。由于我想两个都试试，所以两个都订购了，并让它们一起发货。

![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]

这是我开箱树莓派 4 后所看到的。

### 电源

树莓派 4 使用 USB-C 连接器供电。虽然 USB-C 电缆现在非常普遍，但你的树莓派 4 [可能不喜欢你的 USB-C 线][5]（至少对于树莓派 4 的第一版如此）。因此，除非你确切知道自己在做什么，否则我建议你订购含有官方树莓派充电器的入门套件。如果你想尝试手头现有的充电设备，那么该设备的输入应为 100-240V ~ 50/60Hz 0.5A，输出为 5.1V - 3.0A。

![Raspberry Pi USB-C charger][6]

### 键盘和鼠标

官方的键盘和鼠标是和入门套件[分开出售][7]的，总价 25 美元，并不算便宜，毕竟你的这台树莓派电脑本身也才 35 到 55 美元。但树莓派徽标印在这个键盘上（而不是 Windows 徽标），并且外观相宜。键盘同时也是一个 USB 集线器，因此它允许你插入更多设备。我插入了我的 [YubiKey][8] 安全密钥，它运行得非常好。我会把键盘和鼠标归类为“值得拥有”而不是“必须拥有”。你的常规键盘和鼠标应该也可以正常工作。

![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]

![Raspberry Pi logo on the keyboard][10]

### Micro-HDMI 电缆

可能让一些人惊讶的是，与带有 Mini-HDMI 端口的树莓派 Zero 不同，树莓派 4 配备的是 Micro-HDMI。它们不是同一个东西！因此，即使你手头有合适的 USB-C 线缆/电源适配器、鼠标和键盘，也很有可能需要一条 Micro-HDMI 转 HDMI 的线缆（或适配器）来将你的新树莓派接到显示器上。

### 外壳

树莓派的外壳已经出现很多年了，这可能是树莓派基金会销售的第一批“官方”外围设备之一。有些人喜欢它们，而有些人不喜欢。我认为把树莓派放在一个盒子里可以更容易携带它，还可以避免静电和针脚弯曲。

另一方面，把你的树莓派装在盒子里会使电路板过热。这款 CanaKit 入门套件还配备了处理器散热器，这可能有所帮助，因为较新的树莓派已经[以运行相当热而闻名][11]了。

![Raspberry Pi 4 case][12]

### Raspbian 和 NOOBS

入门套件附带的另一个东西是 microSD 卡，其中为树莓派 4 预装了正确版本的 [NOOBS][13]（我拿到的是 3.19 版，发布于 2019 年 6 月 24 日）。如果你是第一次使用树莓派并且不确定从哪里开始，这可以为你节省大量时间。入门套件中的 microSD 卡容量为 32GB。

插入 microSD 卡并连接所有电缆后，只需启动树莓派，引导进入 NOOBS，选择 Raspbian 发行版，然后等待安装完成。

![Raspberry Pi 4 with 4GB of RAM][14]

我注意到在安装最新的 Raspbian 时有一些改进。（如果它们已经出现了一段时间，请原谅我，自从树莓派 3 出现以来我没有对树莓派进行过全新安装。）其中一个是 Raspbian 会在安装后的首次启动时要求你为你的帐户设置密码，另一个是它会运行软件更新（假设你有网络连接）。这些都是很大的改进，有助于保持你的树莓派更安全。我很希望能有一天在安装时看到加密 microSD 卡的选项。

![Running Raspbian updates at first boot][15]

![Raspberry Pi 4 setup][16]

运行非常顺畅！

### 结语

虽然 CanaKit 不是美国唯一授权的树莓派零售商，但我发现它的入门套件物超所值。

到目前为止，我对树莓派 4 的性能提升印象深刻。我打算尝试用一整个工作日将它作为我唯一的计算机，我很快就会写一篇关于我探索了多远的后续文章。敬请关注！

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4

作者：[Anderson Silva][a]
选题：[lujun9972][b]
译者：[wxy](https://github.com/wxy)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)
@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11238-1.html)
[#]: subject: (Find Out How Long Does it Take To Boot Your Linux System)
[#]: via: (https://itsfoss.com/check-boot-time-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

你的 Linux 系统开机时间已经击败了 99% 的电脑
======

当你打开系统电源时，你会等待制造商的徽标出现，屏幕上可能会显示一些消息（以非安全模式启动），然后是 [Grub][1] 屏幕、操作系统加载屏幕以及最后的登录屏。

你检查过这花费了多长时间么？也许没有。除非你真的需要知道，否则你不会在意开机时间。

但是如果你很想知道你的 Linux 系统到底需要多长时间才能启动完成呢？使用秒表是一种方法，但在 Linux 中，你有一种更好、更轻松的方法来了解系统的启动时间。

### 在 Linux 中使用 systemd-analyze 检查启动时间

![](https://img.linux.net.cn/data/attachment/album/201908/17/104358s1ho8ug868hso1y8.jpg)

无论你是否喜欢，[systemd][3] 运行在大多数流行的 Linux 发行版中。systemd 有许多管理 Linux 系统的工具，其中一个就是 `systemd-analyze`。

`systemd-analyze` 命令为你提供最近一次启动时运行的服务数量以及消耗时间的详细信息。

如果在终端中运行以下命令：

```
systemd-analyze
```

你将获得总启动时间以及固件、引导加载程序、内核和用户空间所消耗的时间：

```
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s

graphical.target reached after 12.408s in userspace
```

正如你在上面的输出中所看到的，我的系统花了大约 35 秒才进入可以输入密码的页面。我用的是戴尔 XPS 上的 Ubuntu，它使用 SSD 存储，尽管如此，它还是需要这么长时间才能启动。

不是那么令人印象深刻，是吗？为什么不分享一下你的系统的启动时间？我们来比较吧。

你可以使用以下命令将启动时间进一步细分到每个单元：

```
systemd-analyze blame
```

这将生成大量输出，所有服务按所用时间的降序列出：

```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s [email protected]
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms systemd-backlight@backlight:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
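
除了 `blame`，`systemd-analyze` 还自带一个 `critical-chain` 子命令，可以以树状形式显示处于启动关键路径上的单元链（这里给出一个示例用法）：

```
systemd-analyze critical-chain
```

输出中，`@` 后面的时间表示该单元在开机后多久变为活动状态，`+` 后面的时间表示该单元自身启动所花费的时间。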

#### 额外提示：改善启动时间

如果查看此输出，你可以看到网络管理器和 [plymouth][4] 都消耗了大量的启动时间。

Plymouth 负责你在 Ubuntu 和其他发行版中在登录页面出现之前看到的引导画面。网络管理器的 wait-online 服务则会等待网络连接就绪，可以禁用它来加快启动速度。不用担心，禁用之后，登录后你仍可以正常使用 Wi-Fi。

```
sudo systemctl disable NetworkManager-wait-online.service
```

如果要还原更改，可以使用以下命令：

```
sudo systemctl enable NetworkManager-wait-online.service
```

请不要在不知道用途的情况下自行禁用各种服务。这可能会产生危险的后果。

现在你知道了如何检查 Linux 系统的启动时间，为什么不在评论栏分享你的系统的启动时间？

--------------------------------------------------------------------------------

via: https://itsfoss.com/check-boot-time-linux/

作者：[Abhishek Prakash][a]
选题：[lujun9972][b]
译者：[geekpi](https://github.com/geekpi)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnu.org/software/grub/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/linux-boot-time.jpg?resize=800%2C450&ssl=1
[3]: https://en.wikipedia.org/wiki/Systemd
[4]: https://wiki.archlinux.org/index.php/Plymouth
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11230-1.html)
[#]: subject: (How to manipulate PDFs on Linux)
[#]: via: (https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

如何在 Linux 命令行操作 PDF
======

> pdftk 命令提供了许多处理 PDF 的命令行操作，包括合并页面、加密文件、添加水印、压缩文件，甚至还有修复 PDF。

![](https://img.linux.net.cn/data/attachment/album/201908/15/110119x6sjnjs6s22srnje.jpg)

虽然 PDF 通常被认为是相当稳定的文件，但在 Linux 和其他系统上你仍然可以对它们做很多处理，包括合并、拆分、旋转、拆分成单页、加密和解密、添加水印、压缩和解压缩，甚至还有修复。`pdftk` 命令能执行所有这些甚至更多操作。

“pdftk” 代表 “PDF 工具包”（PDF tool kit），这个命令非常易于使用，并且可以很好地操作 PDF。例如，要将几个独立的文件合并成一个文件，你可以使用以下命令：

```
$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf
```

`OneDoc.pdf` 将包含上面显示的所有五个文档，命令将在几秒钟内运行完毕。请注意，`cat` 选项表示将文件连接在一起，`output` 选项指定新文件的名称。

你还可以从 PDF 中提取选定页面来创建单独的 PDF 文件。例如，如果要创建仅包含上面创建的文档的第 1、2、3 和 5 页的新 PDF，那么可以执行以下操作：

```
$ pdftk OneDoc.pdf cat 1-3 5 output 4pgs.pdf
```

另外，如果你想要第 1、3、4 和 5 页（5 页中的 4 页），我们可以使用以下命令：

```
$ pdftk OneDoc.pdf cat 1 3-end output 4pgs.pdf
```

你可以选择单独页面或者页面范围，如上例所示。

下一个命令将从一个包含奇数页（1、3 等）的文件和一个包含偶数页（2、4 等）的文件创建一个交错合并的文档：

```
$ pdftk A=odd.pdf B=even.pdf shuffle A B output collated.pdf
```

请注意，`shuffle` 选项负责交错合并页面，其后的 `A B` 指定了文档的使用顺序。另请注意：虽然上面的例子用的是奇数页/偶数页，但你并不限于只使用两个文件。

如果要创建只能由知道密码的收件人打开的加密 PDF，可以使用如下命令：

```
$ pdftk prep.pdf output report.pdf user_pw AsK4n0thingGeTn0thing
```

该命令提供 40 位（`encrypt_40bit`）和 128 位（`encrypt_128bit`）两种加密选项。默认情况下使用 128 位加密。

你还可以使用 `burst` 选项将 PDF 文件分成单个页面：

```
$ pdftk allpgs.pdf burst
$ ls -ltr *.pdf | tail -5
-rw-rw-r-- 1 shs shs 22933 Aug 8 08:18 pg_0001.pdf
-rw-rw-r-- 1 shs shs 23773 Aug 8 08:18 pg_0002.pdf
-rw-rw-r-- 1 shs shs 23260 Aug 8 08:18 pg_0003.pdf
-rw-rw-r-- 1 shs shs 23435 Aug 8 08:18 pg_0004.pdf
-rw-rw-r-- 1 shs shs 23136 Aug 8 08:18 pg_0005.pdf
```
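
`pdftk` 还可以旋转页面：在页码范围后面加上方向关键字（`north`、`east`、`south`、`west` 等）即可。下面是一个小示例（输出文件名 `rotated.pdf` 是随意取的），将整个文档的所有页面顺时针旋转 90 度：

```
$ pdftk allpgs.pdf cat 1-endeast output rotated.pdf
```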

`pdftk` 命令使得合并、拆分、重建、加密 PDF 文件非常容易。要了解更多选项，请查看 [PDF 实验室][3]中的示例页面。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html

作者：[Sandra Henry-Stocker][a]
选题：[lujun9972][b]
译者：[geekpi](https://github.com/geekpi)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/book-pages-100807709-large.jpg
[3]: https://www.pdflabs.com/docs/pdftk-cli-examples/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,183 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11216-1.html)
[#]: subject: (Use a drop-down terminal for fast commands in Fedora)
[#]: via: (https://fedoramagazine.org/use-a-drop-down-terminal-for-fast-commands-in-fedora/)
[#]: author: (Guilherme Schelp https://fedoramagazine.org/author/schelp/)

在 Fedora 下使用下拉式终端更快输入命令
======

![][1]

下拉式终端可以一键打开，让你快速输入桌面上的任何命令。它通常会以平滑的动画从屏幕上方滑下。本文演示了如何使用 Yakuake、Tilda、Guake 和 GNOME 扩展等下拉式终端来改善和加速日常任务。

### Yakuake

[Yakuake][2] 是一个基于 KDE Konsole 技术的下拉式终端模拟器。它以 GNU GPLv2 条款分发。它包括以下功能：

* 从屏幕顶部平滑地滚下
* 标签式界面
* 尺寸和动画速度可配置
* 换肤
* 先进的 D-Bus 接口

要安装 Yakuake，请使用以下命令：

```
$ sudo dnf install -y yakuake
```

#### 启动和配置

如果你运行 KDE，请打开系统设置，然后转到“启动和关闭”。将“yakuake”添加到“自动启动”下的程序列表中，如下所示：

![][3]

Yakuake 运行时很容易配置，首先在命令行启动该程序：

```
$ yakuake &
```

随后会出现欢迎对话框。如果标准的快捷键和你已经使用的快捷键冲突，你可以设置一个新的。

![][4]

点击菜单按钮，出现如下帮助菜单。接着，选择“配置 Yakuake……”访问配置选项。

![][5]

你可以自定义外观选项，例如透明度、行为（例如当鼠标指针移过它们时聚焦终端）和窗口（如大小和动画）。在窗口选项中，你会发现当你使用两个或更多显示器时最有用的选项之一：“在鼠标所在的屏幕上打开”。

#### 使用 Yakuake

主要的快捷键有：

* `F12` = 打开/撤回 Yakuake
* `Ctrl+F11` = 全屏模式
* `Ctrl+)` = 上下分割
* `Ctrl+(` = 左右分割
* `Ctrl+Shift+T` = 新会话
* `Shift+Right` = 下一个会话
* `Shift+Left` = 上一个会话
* `Ctrl+Alt+S` = 重命名会话

以下是 Yakuake 像[终端多路复用器][6]一样分割会话的示例。使用此功能，你可以在一个会话中运行多个 shell。

![][7]
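
前面的功能列表中提到 Yakuake 有先进的 D-Bus 接口。作为一个小示例（假设你的系统中装有 `qdbus` 工具），你可以从脚本中直接让 Yakuake 的当前会话执行一条命令：

```
$ qdbus org.kde.yakuake /yakuake/sessions org.kde.yakuake.runCommand "echo hello"
```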

### Tilda

[Tilda][8] 是一个下拉式终端，可与其他流行的终端模拟器相媲美，如 GNOME 终端、KDE 的 Konsole、xterm 等等。

它具有高度可配置的界面。你甚至可以更改终端大小和动画速度等选项。Tilda 还允许你启用热键，以绑定到各种命令和操作。

要安装 Tilda，请运行以下命令：

```
$ sudo dnf install -y tilda
```

#### 启动和配置

大多数用户更喜欢在登录时就在后台运行一个下拉式终端。要设置此选项，请首先转到桌面上的应用启动器，搜索 Tilda，然后将其打开。

接下来，打开 Tilda 配置窗口。选择“隐藏启动 Tilda”，即启动时不会立即显示终端。

![][9]

接下来，你要设置你的桌面自动启动 Tilda。如果你使用的是 KDE，请转到“系统设置 > 启动与关闭 > 自动启动”并“添加一个程序”。

![][10]

如果你正在使用 GNOME，则可以在终端中运行此命令：

```
$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/
```

当你第一次运行它时，会出现一个向导来设置首选项。如果需要更改设置，请右键单击终端并转到菜单中的“首选项”。

![][11]

你还可以创建多个配置文件，并绑定其他快捷键以在屏幕上的不同位置打开新终端。为此，请运行以下命令：

```
$ tilda -C
```

每次使用上述命令时，Tilda 都会在 `~/.config/tilda/` 文件夹中创建名为 `config_0`、`config_1` 之类的新配置文件。然后，你可以映射组合键以打开具有一组特定选项的新 Tilda 终端。

#### 使用 Tilda

主要快捷键有：

* `F1` = 拉下终端 Tilda（注意：如果你有多个配置文件，快捷键是 F1、F2、F3 等）
* `F11` = 全屏模式
* `F12` = 切换透明模式
* `Ctrl+Shift+T` = 添加标签
* `Ctrl+Page Up` = 下一个标签
* `Ctrl+Page Down` = 上一个标签

### GNOME 扩展

Drop-down Terminal [GNOME 扩展][12]允许你在 GNOME Shell 中使用这个有用的工具。它易于安装和配置，使你可以快速访问终端会话。

#### 安装

打开浏览器并转到[此 GNOME 扩展的站点][12]。将扩展的开关设置为“On”，如下所示：

![][13]

然后选择“Install”以在系统上安装扩展。

![][14]

执行此操作后，无需设置任何自动启动选项。只要你登录 GNOME，扩展程序就会自动运行！

#### 配置

安装后，将打开 Drop-down Terminal 配置窗口以设置首选项。例如，可以设置终端大小、动画、透明度和是否使用滚动条。

![][15]

如果你将来需要更改某些首选项，请运行 `gnome-shell-extension-prefs` 命令并选择“Drop Down Terminal”。

#### 使用该扩展

快捷键很简单：

* 反引号（通常是 `Tab` 键上面的一个键）= 打开/撤回终端
* `F12`（可以定制）= 打开/撤回终端

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-a-drop-down-terminal-for-fast-commands-in-fedora/

作者：[Guilherme Schelp][a]
选题：[lujun9972][b]
译者：[wxy](https://github.com/wxy)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/schelp/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/dropdown-terminals-816x345.jpg
[2]: https://kde.org/applications/system/org.kde.yakuake
[3]: https://fedoramagazine.org/wp-content/uploads/2019/07/auto_start-1024x723.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/07/yakuake_config-1024x419.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/07/yakuake_config_01.png
[6]: https://fedoramagazine.org/4-cool-terminal-multiplexers/
[7]: https://fedoramagazine.org/wp-content/uploads/2019/07/yakuake_usage.gif
[8]: https://github.com/lanoxx/tilda
[9]: https://fedoramagazine.org/wp-content/uploads/2019/07/tilda_startup.png
[10]: https://fedoramagazine.org/wp-content/uploads/2019/07/tilda_startup_alt.png
[11]: https://fedoramagazine.org/wp-content/uploads/2019/07/tilda_config.png
[12]: https://extensions.gnome.org/extension/442/drop-down-terminal/
[13]: https://fedoramagazine.org/wp-content/uploads/2019/07/gnome-shell-install_2-1024x455.png
[14]: https://fedoramagazine.org/wp-content/uploads/2019/07/gnome-shell-install_3.png
[15]: https://fedoramagazine.org/wp-content/uploads/2019/07/gnome-shell-install_4.png
@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11244-1.html)
[#]: subject: (How to measure the health of an open source community)
[#]: via: (https://opensource.com/article/19/8/measure-project)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)

如何衡量一个开源社区的健康度
======

> 这比较复杂。

![](https://img.linux.net.cn/data/attachment/album/201908/19/184719nz3xuazppzu3vwcg.jpg)

作为一个经常管理软件开发团队的人，多年来我一直关注度量指标。一次又一次，我发现自己带领团队使用一个又一个的项目平台（例如 Jira、GitLab 和 Rally），这些平台生成了大量可测量的数据。从那时起，我投入了大量时间，从这些记录平台中提取有用的指标，将其整理成我们能够理解的格式，然后利用这些指标对开发工作的许多方面做出更好的决策。

今年早些时候，我有幸在 [Linux 基金会][2]遇到了一个名为<ruby>[开源软件社区健康分析][3]<rt>Community Health Analytics for Open Source Software</rt></ruby>（CHAOSS）的项目。该项目侧重于从各种来源收集和丰富指标，以便开源社区的利益相关者可以衡量他们项目的健康状况。

### CHAOSS 介绍

随着我对该项目的基本指标和目标越来越熟悉，一个问题在我的脑海中不断翻滚：什么是“健康”的开源项目，由谁来定义？

特定角色的人认为健康的东西，另一个角色的人可能并不这样认为。似乎可以用 CHAOSS 收集的细粒度数据进行一种市场细分实验：重点关注对特定角色可能最有意义的背景问题，以及 CHAOSS 收集的哪些指标可能有助于回答这些问题。

CHAOSS 项目创建并维护了一套开源应用程序和度量标准定义，使得这个实验具有可能性，这包括：

* 许多基于服务器的应用程序，用于收集、聚合和丰富度量标准（例如 Augur 和 GrimoireLab）。
* ElasticSearch、Kibana 和 Logstash（ELK）的开源版本。
* 身份服务、数据分析服务和各种集成库。

在我过去管理的一个项目群中，有六个团队从事于不同复杂程度的项目，我们找到了一个简洁的工具，它允许我们从简单（或复杂）的 JQL 语句中创建我们想要的任何类型的指标，然后针对这些指标开发计算方法。不知不觉间，我们仅从 Jira 中就提取了 400 多个指标，还有更多指标来自手动的来源。

在项目结束时，我们认定这 400 个指标中，大多数指标在*以我们的角色*做出决策时并不重要。最终，只有三个对我们非常重要：“缺陷去除效率”、“已完成的条目与承诺的条目”，以及“每个开发人员的工作进度”。这三个指标最重要，因为它们是我们对自己、客户和团队成员所做出的承诺，因此是最有意义的。

带着这些通过经验得到的教训和对什么是健康的开源项目的疑问，我加入了 CHAOSS 社区，开始建立一套角色，以提供一种建设性的方法，从基于角色的视角回答这个问题。

CHAOSS 是一个开源项目，我们尝试使用民主共识来运作。因此，我决定使用<ruby>组成分子<rt>constituent</rt></ruby>这个词而不是利益相关者，因为它更符合我们作为开源贡献者的责任：创建更具共生性的价值链。

虽然创建此组成分子模型的过程采用了特定的“目标-问题-度量”方法，但还有许多其他的细分方法。CHAOSS 贡献者已经开发了很好的模型，可以按照不同的维度进行细分，例如项目属性（例如，个人、公司或联盟）和“失败容忍度”。在为 CHAOSS 开发度量定义时，每个模型都会提供建设性的影响。

基于这一切，我开始构建一个谁可能关心 CHAOSS 指标的模型，以及每类组成分子在 CHAOSS 的四个重点领域中最关心的问题：

* [多样性和包容性][4]
* [演化][5]
* [风险][6]
* [价值][7]

在我们深入研究之前，重要的是要注意 CHAOSS 项目明确地将背景判断留给了实施指标的团队。“什么是有意义的？”和“什么是健康的？”这两个问题的答案预计会因团队和项目而异。CHAOSS 软件的现成仪表板尽可能地关注客观指标。在本文中，我们关注项目创始人、项目维护者和贡献者。

### 项目组成分子

虽然这绝不是这些组成分子可能认为重要的问题的详尽清单，但这些选择感觉是一个好的起点。以下每个“目标-问题-度量”标准部分与 CHAOSS 项目正在收集和汇总的指标直接相关。

现在，进入分析的第 1 部分！

#### 项目创始人

作为**项目创始人**，我**最**关心：

* 我的项目**对其他人有用吗？**通过以下测量：
  * 随着时间推移有多少复刻？
    * **指标：** 存储库复刻数。
  * 随着时间的推移有多少贡献者？
    * **指标：** 贡献者数量。
  * 贡献的净质量。
    * **指标：** 随着时间的推移提交的错误数量。
    * **指标：** 随着时间的推移的回归数量。
  * 项目的财务状况。
    * **指标：** 随着时间的推移的捐赠/收入。
    * **指标：** 随着时间的推移的费用。
* 我的项目对其它人的**可见**程度？
  * 有谁知道我的项目？别人觉得它很棒吗？
    * **指标：** 社交媒体上的提及、分享、喜欢和订阅的数量。
  * 有影响力的人是否了解我的项目？
    * **指标：** 贡献者的社会影响力。
  * 人们在公共场所对项目有何评价？是正面还是负面？
    * **指标：** 跨社交媒体渠道的情感（关键字或 NLP）分析。
* 我的项目的**可行性**程度？
  * 我们有足够的维护者吗？该数字是随着时间的推移而上升还是下降？
    * **指标：** 维护者数量。
  * 变更速度如何随时间变化？
    * **指标：** 代码随时间的变化百分比。
    * **指标：** 拉取请求、代码审查和合并之间的时间。
* 我的项目的[多样化 & 包容性][4]如何？
  * 我们是否拥有有效的公开行为准则（CoC）？
    * **指标：** 检查存储库中的 CoC 文件。
  * 与我的项目相关的活动是否积极包容？
    * **指标：** 关于活动的票务政策和活动包容性行为的手动报告。
  * 我们的项目在可访问性上做得好不好？
    * **指标：** 验证发布的文字会议纪要。
    * **指标：** 验证会议期间使用的隐藏式字幕。
    * **指标：** 验证在演示文稿和项目前端设计中使用了色盲可访问的素材。
* 我的项目代表了多少[价值][7]？
  * 我如何帮助组织了解使用我们的项目将节省多少时间和金钱（劳动力投资）？
    * **指标：** 仓库的议题、提交、拉取请求的数量和估计人工费率。
  * 我如何理解项目创造的下游价值的数量，以及维护我的项目对更广泛的社区的重要性（或不重要）？
    * **指标：** 依赖我的项目的其他项目数。
  * 为我的项目做出贡献的人有多少机会使用他们学到的东西找到合适的工作岗位，以及在哪些组织（即生活工资）？
    * **指标：** 使用或贡献此库的组织数量。
    * **指标：** 使用此类项目的开发人员的平均工资。
    * **指标：** 与该项目匹配的关键字的职位发布计数。

#### 项目维护者

作为**项目维护者**，我**最**关心：

* 我是**高效的**维护者吗？
  * **指标：** 拉取请求在代码审查之前等待的时间。
  * **指标：** 代码审查和后续拉取请求之间的时间。
  * **指标：** 我的代码审核中有多少被批准？
  * **指标：** 我的代码审核中有多少被拒绝或要求返工？
  * **指标：** 代码审查的评论的情感分析。
* 我如何让**更多人**帮助我维护这个项目？
  * **指标：** 项目贡献者的社交覆盖面数量。
* 我们的**代码质量**随着时间的推移变得越来越好吗？
  * **指标：** 计算随着时间的推移引入的回归数量。
  * **指标：** 计算随着时间的推移引入的错误数量。
  * **指标：** 错误归档、拉取请求、代码审查、合并和发布之间的时间。

#### 项目开发者和贡献者

作为**项目开发者或贡献者**，我**最**关心：

* 我可以从为这个项目做出贡献中获得哪些有价值的东西，以及实现这个价值需要多长时间？
  * **指标：** 下游价值。
  * **指标：** 提交、代码审查和合并之间的时间。
* 通过使用我在贡献中学到的东西来增加工作机会，是否有良好的前景？
  * **指标：** 生活工资。
* 这个项目有多受欢迎？
  * **指标：** 社交媒体帖子、分享和收藏的数量。
* 社区有影响力的人知道我的项目吗？
  * **指标：** 创始人、维护者和贡献者的社交覆盖面。

通过创建这个列表，我们开始让 CHAOSS 更加丰满了。在今年夏天项目首次发布这些指标时，我迫不及待地想看看更广泛的开源社区可能还有什么其他伟大的想法，以及我们还可以从这些贡献中学到（并衡量！）什么。

### 其它角色

接下来，你需要了解有关其他角色（例如基金会、企业开源计划办公室、业务风险和法律团队、人力资源等）以及最终用户的“目标-问题-度量”集的更多信息。他们关心开源的不同方面。

如果你是开源贡献者或组成分子，我们邀请你[来看看这个项目][8]并参与社区活动！

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/measure-project

作者：[Jon Lawrence][a]
选题：[lujun9972][b]
译者：[wxy](https://github.com/wxy)
校对：[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://www.linuxfoundation.org/
[3]: https://chaoss.community/
[4]: https://github.com/chaoss/wg-diversity-inclusion
[5]: https://github.com/chaoss/wg-evolution
[6]: https://github.com/chaoss/wg-risk
[7]: https://github.com/chaoss/wg-value
[8]: https://github.com/chaoss/
@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11236-1.html)
[#]: subject: (How to Get Linux Kernel 5.0 in Ubuntu 18.04 LTS)
[#]: via: (https://itsfoss.com/ubuntu-hwe-kernel/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核
======

> 最近发布的 Ubuntu 18.04.3 包括 Linux 5.0 内核中的几个新功能和改进，但默认情况下没有安装。本教程演示了如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核。

![](https://img.linux.net.cn/data/attachment/album/201908/17/101052xday1jyrszbddsfc.jpg)

[Ubuntu 18.04 的第三个“小数点版本”已经发布][2]，它带来了新的稳定版本的 GNOME 组件、livepatch 桌面集成和 5.0 内核。

可是等等！什么是“<ruby>小数点版本<rt>point release</rt></ruby>”？让我先解释一下。

### Ubuntu LTS 小数点版本

Ubuntu 18.04 于 2018 年 4 月发布，由于它是一个长期支持（LTS）版本，它将一直被支持到 2023 年。从那时起，已经有许多 bug 修复、安全更新和软件升级。如果你今天下载 Ubuntu 18.04，你需要[在安装 Ubuntu 后首先安装这些更新][3]。

当然，这不是一种理想情况。这就是 Ubuntu 提供这些“小数点版本”的原因。小数点版本包含自 LTS 版本首次发布以来的所有功能更新、安全更新和 bug 修复。如果你今天下载 Ubuntu，你会得到 Ubuntu 18.04.3 而不是 Ubuntu 18.04。这节省了在新安装的 Ubuntu 系统上下载和安装数百个更新的麻烦。

好了！现在你知道“小数点版本”的概念了。你如何升级到这些小数点版本？答案很简单。只需要像平时一样[更新你的 Ubuntu 系统][4]，这样你就会处于最新的小数点版本上了。

你可以[查看 Ubuntu 版本][5]来了解正在使用的版本。我检查了一下，因为我用的是 Ubuntu 18.04.3，我以为我的内核会是 5.0。但当我[查看 Linux 内核版本][6]时，它仍然是基本内核 4.15。

![Ubuntu Version And Linux Kernel Version Check][7]

这是为什么？如果 Ubuntu 18.04.3 有 Linux 5.0 内核，为什么它仍然使用 Linux 4.15 内核？这是因为你必须通过选择 LTS <ruby>支持栈<rt>Enablement Stack</rt></ruby>（通常称为 HWE）手动请求在 Ubuntu LTS 中安装新内核。

### 使用 HWE 在 Ubuntu 18.04 中获取 Linux 5.0 内核

默认情况下，Ubuntu LTS 将保持在最初发布时的 Linux 内核上。<ruby>[硬件支持栈][9]<rt>hardware enablement stack</rt></ruby>（HWE）为现有的 Ubuntu LTS 版本提供了更新的内核和 xorg 支持。

最近这一点发生了一些变化。如果你下载的是 Ubuntu 18.04.2 或更新的桌面版本，那么 HWE 已经为你启用了，默认情况下你将获得新内核以及常规更新。

对于服务器版本以及下载了 18.04 和 18.04.1 的人员，你需要手动安装 HWE 内核。完成后，你将获得 Ubuntu 提供的更新的 LTS 版本内核。

要在 Ubuntu 桌面上安装 HWE 内核以及更新的 xorg，你可以在终端中使用此命令：

```
sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04
```

如果你使用的是 Ubuntu 服务器版，那么就没有 xorg 选项。所以只需在 Ubuntu 服务器版中安装 HWE 内核：

```
sudo apt-get install --install-recommends linux-generic-hwe-18.04
```

完成 HWE 内核的安装后，重启系统。现在你应该拥有更新的 Linux 内核了。
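
重启后，可以运行以下命令确认正在运行的内核版本：

```
uname -r
```

它应该输出一个 5.0 系列的版本号，例如 `5.0.0-25-generic`（具体的小版本号会随更新而不同）。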
|
||||
|
||||
### 你在 Ubuntu 18.04 中获取 5.0 内核了么?
|
||||
|
||||
请注意,下载并安装了 Ubuntu 18.04.2 的用户已经启用了 HWE。所以这些用户将能轻松获取 5.0 内核。
|
||||
|
||||
在 Ubuntu 中启用 HWE 内核并不困难,要不要启用则完全取决于你。[Linux 5.0 内核][10]有几项性能改进和更好的硬件支持,你将从新内核中获益。
|
||||
|
||||
你怎么看?你会安装 5.0 内核还是宁愿留在 4.15 内核上?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-hwe-kernel/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.youtube.com/channel/UCEU9D6KIShdLeTRyH3IdSvw
|
||||
[2]: https://ubuntu.com/blog/enhanced-livepatch-desktop-integration-available-with-ubuntu-18-04-3-lts
|
||||
[3]: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/
|
||||
[4]: https://itsfoss.com/update-ubuntu/
|
||||
[5]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
|
||||
[6]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/ubuntu-version-and-kernel-version-check.png?resize=800%2C300&ssl=1
|
||||
[9]: https://wiki.ubuntu.com/Kernel/LTSEnablementStack
|
||||
[10]: https://itsfoss.com/linux-kernel-5/
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11254-1.html)
|
||||
[#]: subject: (Fix ‘E: The package cache file is corrupted, it has the wrong hash’ Error In Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
修复 Ubuntu 中 “E: The package cache file is corrupted, it has the wrong hash” 错误
|
||||
======
|
||||
|
||||
今天,我尝试更新我的 Ubuntu 18.04 LTS 的仓库列表,但收到了一条错误消息:“**E: The package cache file is corrupted, it has the wrong hash**”。这是我在终端运行的命令以及输出:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
Hit:1 http://it-mirrors.evowise.com/ubuntu bionic InRelease
|
||||
Hit:2 http://it-mirrors.evowise.com/ubuntu bionic-updates InRelease
|
||||
Hit:3 http://it-mirrors.evowise.com/ubuntu bionic-backports InRelease
|
||||
Hit:4 http://it-mirrors.evowise.com/ubuntu bionic-security InRelease
|
||||
Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu bionic InRelease
|
||||
Hit:7 http://ppa.launchpad.net/leaeasy/dde/ubuntu bionic InRelease
|
||||
Hit:8 http://ppa.launchpad.net/rvm/smplayer/ubuntu bionic InRelease
|
||||
Ign:6 https://dl.bintray.com/etcher/debian stable InRelease
|
||||
Get:9 https://dl.bintray.com/etcher/debian stable Release [3,674 B]
|
||||
Fetched 3,674 B in 3s (1,196 B/s)
|
||||
Reading package lists... Done
|
||||
E: The package cache file is corrupted, it has the wrong hash
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
*Ubuntu 中的 “The package cache file is corrupted, it has the wrong hash” 错误*
|
||||
|
||||
经过一番谷歌搜索,我找到了解决此错误的方法。
|
||||
|
||||
如果你遇到过这个错误,不要惊慌。只需运行下面的命令修复。
|
||||
|
||||
在运行命令之前,**请再次确认你在路径最后加上了 `*`**。这一点很重要:如果漏掉了它,命令会把整个 `/var/lib/apt/lists/` 目录删掉,而且无法恢复。我提醒过你了!
|
||||
|
||||
```
|
||||
$ sudo rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
现在我再次使用下面的命令更新系统:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
现在好了!希望它有帮助。
|
||||
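如果你想进一步确认软件包列表已经重新生成,可以看一眼该目录(这只是一个可选的检查步骤):

```
$ ls /var/lib/apt/lists/ | head   # 更新成功后,这里会重新出现各个仓库的索引文件
```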
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-package-cache-file-is-corrupted.png
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/apt-update-command-output-in-Ubuntu.png
|
@ -0,0 +1,156 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11252-1.html)
|
||||
[#]: subject: (How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?)
|
||||
[#]: via: (https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
如何在 Debian/Ubuntu 上设置自动安全更新(无人值守更新)
|
||||
======
|
||||
|
||||
对于 Linux 管理员来说,重要的任务之一是让系统保持最新状态,这可以让你的系统更加稳健,并帮助你避免未经授权的访问与攻击。
|
||||
|
||||
在 Linux 上安装软件包是小菜一碟,用相似的方法我们也可以更新安全补丁。
|
||||
|
||||
这是一个向你展示如何配置系统以自动接收安全更新的简单教程。不经人工审查就自动安装安全更新会带来一定风险,但也有不少好处。
|
||||
|
||||
如果你不想错过安全补丁,且想要与最新的安全补丁保持同步,那你应该借助无人值守更新机制设置自动安全更新。
|
||||
|
||||
如果你不想要自动安全更新的话,你可以[在 Debian/Ubuntu 系统上手动安装安全更新][1]。
|
||||
|
||||
自动化安全更新有许多办法,不过我们先采用官方的方法,之后再介绍其它方法。
|
||||
|
||||
### 如何在 Debian/Ubuntu 上安装无人值守更新包
|
||||
|
||||
无人值守更新包默认应该已经装在你的系统上。但万一它没被安装,就用下面的命令来安装。
|
||||
|
||||
使用 [APT-GET 命令][2]和 [APT 命令][3]来安装 `unattended-upgrades` 软件包。
|
||||
|
||||
```
|
||||
$ sudo apt-get install unattended-upgrades
|
||||
```
|
||||
|
||||
下方两个文件可以使你自定义该机制:
|
||||
|
||||
```
|
||||
/etc/apt/apt.conf.d/50unattended-upgrades
|
||||
/etc/apt/apt.conf.d/20auto-upgrades
|
||||
```
|
||||
|
||||
### 在 50unattended-upgrades 文件中做出必要修改
|
||||
|
||||
默认情况下只有安全更新需要的最必要的选项被启用。但并不限于此,你可以配置其中的许多选项以使得这个机制更加有用。
|
||||
|
||||
为了方便阐述,我整理了这个文件,只保留了被启用的行:
|
||||
|
||||
```
|
||||
# vi /etc/apt/apt.conf.d/50unattended-upgrades
|
||||
|
||||
Unattended-Upgrade::Allowed-Origins {
|
||||
"${distro_id}:${distro_codename}";
|
||||
"${distro_id}:${distro_codename}-security";
|
||||
"${distro_id}ESM:${distro_codename}";
|
||||
};
|
||||
Unattended-Upgrade::DevRelease "false";
|
||||
```
|
||||
|
||||
有三个源被启用,细节如下(列表之后给出一个查看这些变量实际取值的示例):
|
||||
|
||||
* `${distro_id}:${distro_codename}`:这是必须的,因为安全更新可能会从非安全来源拉取依赖。
|
||||
* `${distro_id}:${distro_codename}-security`:这用于从安全来源获取安全更新。
|
||||
* `${distro_id}ESM:${distro_codename}`:这是用来从 ESM(扩展安全维护)获得安全更新。
|
||||
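如果想确认这些变量在你的系统上展开成什么值,可以查看 apt 的来源信息:`apt-cache policy` 输出中的 `o=`(来源)和 `a=`(归档)字段分别对应上面的 `distro_id` 和 `distro_codename`:

```
$ apt-cache policy | grep 'o=' | head   # 例如 o=Ubuntu,a=bionic-security
```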
|
||||
**启用邮件通知:** 如果你想在每次安全更新后收到邮件通知,就修改以下行(取消注释并填入你的邮箱地址)。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Mail "root";
|
||||
```
|
||||
|
||||
修改为:
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Mail "2daygeek@gmail.com";
|
||||
```
|
||||
|
||||
**自动移除不用的依赖:** 你可能需要在每次更新后运行 `sudo apt autoremove` 命令来从系统中移除不用的依赖。
|
||||
|
||||
我们可以通过修改以下行来自动化这项任务(取消注释并将 `false` 改成 `true`)。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Remove-Unused-Dependencies "false";
|
||||
```
|
||||
|
||||
修改为:
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Remove-Unused-Dependencies "true";
|
||||
```
|
||||
|
||||
**启用自动重启:** 当安全更新涉及内核时,你可能需要重启系统。你可以修改以下行:
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Automatic-Reboot "false";
|
||||
```
|
||||
|
||||
修改为:取消注释并将 `false` 改成 `true` 以启用自动重启。
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Automatic-Reboot "true";
|
||||
```
|
||||
|
||||
**启用特定时段的自动重启:** 如果自动重启已启用,且你想要在特定时段进行重启,那么做出以下修改。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
|
||||
```
|
||||
|
||||
修改为:取消注释并将时间改成你需要的时间。我将重启时间设置在早上 5 点。
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
|
||||
```
|
||||
|
||||
### 如何启用自动化安全更新?
|
||||
|
||||
现在我们已经配置好了必需的选项。接下来,打开以下文件,确认这两个值都已设置好:它们都不应为 0(1 = 启用,0 = 禁用)。
|
||||
|
||||
```
|
||||
# vi /etc/apt/apt.conf.d/20auto-upgrades
|
||||
|
||||
APT::Periodic::Update-Package-Lists "1";
|
||||
APT::Periodic::Unattended-Upgrade "1";
|
||||
```
|
||||
|
||||
**详情:**
|
||||
|
||||
* 第一行使 `apt` 每天自动运行 `apt-get update`。
|
||||
* 第二行使 `apt` 每天自动安装安全更新。(这两项配置可以用下面的空运行示例来验证。)
|
||||
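在等待定时任务触发之前,你可以先手动做一次“空运行”来验证整套配置(`--dry-run` 不会真正安装任何东西,`--debug` 会打印出允许的来源以及将要升级的软件包):

```
$ sudo unattended-upgrade --dry-run --debug
```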
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/manually-install-security-updates-ubuntu-debian/
|
||||
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
@ -0,0 +1,170 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11255-1.html)
|
||||
[#]: subject: (How To Setup Multilingual Input Method On Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Ubuntu 上设置多语言输入法
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/21/231916g3gxbhybq0zv0q1h.jpg)
|
||||
|
||||
或许你不知道,在印度有数以百计的语言被使用,其中 22 种被印度机构列为官方语言。我的母语不是英语,因此当我需要从英语输入或者翻译到我的母语泰米尔语时,我经常使用**谷歌翻译**。嗯,我估计我不再需要依靠谷歌翻译了。我刚发现了在 Ubuntu 上输入印度语的好办法。这篇教程解释了配置多语言输入法的方法。它是为 Ubuntu 18.04 LTS 特别打造的,但也可以在其它类 Ubuntu 系统(例如 Linux Mint、elementary OS)上使用。
|
||||
|
||||
### 在 Ubuntu Linux 上设置多语言输入法
|
||||
|
||||
通过 **IBus** 的帮助,我们可以轻松在 Ubuntu 及其衍生版上配置多语言输入法。IBus 是 **I**ntelligent **I**nput **Bus**(智能输入总线)的缩写,是一种针对类 Unix 操作系统的多语言输入法框架。它使得我们可以在大多数 GUI 应用(例如 LibreOffice)里输入母语。
|
||||
|
||||
### 在 Ubuntu 上安装 IBus
|
||||
|
||||
要在 Ubuntu 上安装 IBus 包,运行:
|
||||
|
||||
```
|
||||
$ sudo apt install ibus-m17n
|
||||
```
|
||||
|
||||
ibus-m17n 包提供了许多印度及其它国家的语言,包括阿姆哈拉语、阿拉伯语、阿美尼亚语、阿萨姆语、阿萨巴斯卡语、白俄罗斯语、孟加拉语、缅甸语、中高棉语、占文、**汉语**、克里语、克罗地亚语、捷克语、丹麦语、迪维希语、马尔代夫语、世界语、法语、格鲁吉亚语、古/现代希腊语、古吉拉特语、希伯来语、因纽特语、日语、卡纳达语、克什米尔语、哈萨克语、韩语、老挝语、马来语、马拉地语、尼泊尔语、欧吉布威语、欧瑞亚语、旁遮普语、波斯语、普什图语、俄语、梵语、塞尔维亚语、四川彝文、彝文、西格西卡语、信德语、僧伽罗语、斯洛伐克语、瑞典语、泰语、泰米尔语、泰卢固语、藏语、维吾尔语、乌都语、乌兹别克语、越语及意第绪语。
|
||||
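安装完成后,可以列出 m17n 提供的输入引擎,确认你需要的语言是否在其中(`ibus list-engine` 是 IBus 自带的子命令;如果你的版本没有这个子命令,跳过这一步即可):

```
$ ibus list-engine | grep -i m17n   # 按语言分组列出可用的输入引擎
```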
|
||||
#### 添加输入语言
|
||||
|
||||
我们可以在系统的**设置**部分添加输入语言。点击 Ubuntu 桌面右上角的下拉箭头,选择左下角的设置图标。
|
||||
|
||||
![][2]
|
||||
|
||||
*从顶部面板启动系统设置*
|
||||
|
||||
从设置部分,点击左侧面板的**区域及语言**选项。再点击右侧**输入来源**标签下的**+**(加号)按钮。
|
||||
|
||||
![][3]
|
||||
|
||||
*设置部分的区域及语言选项*
|
||||
|
||||
在下个窗口,点击**三个垂直的点**按钮。
|
||||
|
||||
![][4]
|
||||
|
||||
*在 Ubuntu 里添加输入来源*
|
||||
|
||||
搜寻并选择你想从列表中添加的输入语言。
|
||||
|
||||
![][5]
|
||||
|
||||
*添加输入语言*
|
||||
|
||||
在本篇教程中,我将加入**泰米尔**语。在选择语言后,点击**添加**按钮。
|
||||
|
||||
![][6]
|
||||
|
||||
*添加输入来源*
|
||||
|
||||
现在你会看到选中的输入来源已经被添加了。你会在输入来源标签下的区域及语言选项中看到它。
|
||||
|
||||
![][7]
|
||||
|
||||
*Ubuntu 里的输入来源选项*
|
||||
|
||||
点击输入来源标签下的“管理安装的语言”按钮。
|
||||
|
||||
![][8]
|
||||
|
||||
*在 Ubuntu 里管理安装的语言*
|
||||
|
||||
接下来会询问你是否要为选定的语言安装翻译包。如果你想的话可以安装它们,或者选择“稍后提醒我”按钮,下次打开时你会再次收到通知。
|
||||
|
||||
![][9]
|
||||
|
||||
*语言支持没完全安装好*
|
||||
|
||||
一旦翻译包安装好,点击**安装/移除语言**按钮。同时确保 IBus 在键盘输入法系统中被选中。
|
||||
|
||||
![][10]
|
||||
|
||||
*在 Ubuntu 中安装/移除语言*
|
||||
|
||||
从列表中选择你想要的语言并点击“应用”按钮。
|
||||
|
||||
![][11]
|
||||
|
||||
*选择输入语言*
|
||||
|
||||
到此就完成了。我们已成功在 Ubuntu 18.04 桌面上配置好多语言输入法。同样地,你可以添加任意多个输入语言。
|
||||
|
||||
在添加完所有语言来源后,先登出再重新登录。
|
||||
|
||||
### 用印度语或者你喜欢的语言输入
|
||||
|
||||
添加完所有语言后,你就可以在 Ubuntu 桌面顶端菜单的下拉栏中看到它们。
|
||||
|
||||
![][12]
|
||||
|
||||
*从 Ubuntu 桌面的顶端栏选择输入语言。*
|
||||
|
||||
你也可以使用键盘上的**徽标键+空格键**在不同语言中切换。
|
||||
|
||||
![][13]
|
||||
|
||||
*在 Ubuntu 里用**徽标键+空格键**选择输入语言*
|
||||
|
||||
打开任何 GUI 文本编辑器/应用开始打字吧!
|
||||
|
||||
![][14]
|
||||
|
||||
*在 Ubuntu 中用印度语输入*
|
||||
|
||||
### 将 IBus 加入启动应用
|
||||
|
||||
我们需要让 IBus 在每次重启后自动打开,这样每次你想要用自己喜欢的语言输入的时候就无须手动打开。
|
||||
|
||||
为此,只需在面板中搜索“开机应用”,然后点开“开机应用”选项。
|
||||
|
||||
![][15]
|
||||
|
||||
在下个窗口,点击“添加”,在名字栏输入 “IBus”,在命令栏输入 “ibus-daemon”,然后点击“添加”按钮。
|
||||
|
||||
![][16]
|
||||
|
||||
*在 Ubuntu 中将 Ibus 添加进开机启动项*
|
||||
|
||||
从现在起 IBus 将在系统启动后自动开始。
|
||||
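如果你不想重启就立刻验证,也可以在终端手动启动该守护进程(`-d` 表示后台运行,`-r` 表示替换已在运行的实例,`-x` 表示同时提供 XIM 服务,这是 `ibus-daemon` 常见的一组参数):

```
$ ibus-daemon -drx
```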
|
||||
现在轮到你了。你在什么应用/工具中使用本地的印度语输入?请在下方评论区告诉我们。
|
||||
|
||||
参考:
|
||||
|
||||
* [IBus – Ubuntu 社区百科][20]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/Region-language-in-Settings-ubuntu.png
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-source-in-Ubuntu.png
|
||||
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-language.png
|
||||
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Input-Source-Ubuntu.png
|
||||
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Input-sources-Ubuntu.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Manage-Installed-Languages.png
|
||||
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-language-support-is-not-installed-completely.png
|
||||
[10]: https://www.ostechnix.com/wp-content/uploads/2019/08/Install-Remove-languages.png
|
||||
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-language.png
|
||||
[12]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-from-top-bar-in-Ubuntu.png
|
||||
[13]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-using-SuperSpace-keys.png
|
||||
[14]: https://www.ostechnix.com/wp-content/uploads/2019/08/Setup-Multilingual-Input-Method-On-Ubuntu.png
|
||||
[15]: https://www.ostechnix.com/wp-content/uploads/2019/08/Launch-startup-applications-in-ubuntu.png
|
||||
[16]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Ibus-to-startup-applications-on-Ubuntu.png
|
||||
[17]: https://www.ostechnix.com/use-google-translate-commandline-linux/
|
||||
[18]: https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/
|
||||
[19]: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
|
||||
[20]: https://help.ubuntu.com/community/ibus
|
205
published/20190815 SSLH - Share A Same Port For HTTPS And SSH.md
Normal file
@ -0,0 +1,205 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11247-1.html)
|
||||
[#]: subject: (SSLH – Share A Same Port For HTTPS And SSH)
|
||||
[#]: via: (https://www.ostechnix.com/sslh-share-port-https-ssh/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
SSLH:让 HTTPS 和 SSH 共享同一个端口
|
||||
======
|
||||
|
||||
![SSLH - Share A Same Port For HTTPS And SSH][1]
|
||||
|
||||
一些 ISP 和公司可能为了加强安全性而封锁了大多数端口,只允许少数特定端口(如 80 和 443)通过。在这种情况下,我们别无选择,只能让多个程序共用同一个端口,比如很少被封锁的 HTTPS 端口 443。借助 SSL/SSH 多路复用器 SSLH,它可以侦听 443 端口上的传入连接,并按流量类型分发给不同的服务。更简单地说,SSLH 允许我们在 Linux 系统的 443 端口上运行多个程序/服务,这样你就可以同时通过同一个端口使用 SSL 和 SSH。如果你遇到大多数端口被防火墙封锁的情况,就可以借助 SSLH 访问远程服务器。这篇简短的教程介绍了如何在类 Unix 操作系统中使用 SSLH 让 HTTPS 和 SSH 共享同一个端口。
|
||||
|
||||
### SSLH:让 HTTPS、SSH 共享端口
|
||||
|
||||
#### 安装 SSLH
|
||||
|
||||
大多数 Linux 发行版上 SSLH 都有软件包,因此你可以使用默认包管理器进行安装。
|
||||
|
||||
在 Debian、Ubuntu 及其衍生品上运行:
|
||||
|
||||
```
|
||||
$ sudo apt-get install sslh
|
||||
```
|
||||
|
||||
安装 SSLH 时,将提示你是要将 sslh 作为从 inetd 运行的服务,还是作为独立服务器运行。每种选择都有其自身的优点。如果每天只有少量连接,最好从 inetd 运行 sslh 以节省资源。另一方面,如果有很多连接,sslh 应作为独立服务器运行,以避免为每个传入连接生成新进程。
|
||||
|
||||
![][2]
|
||||
|
||||
*安装 sslh*
|
||||
|
||||
在 Arch Linux 和 Antergos、Manjaro Linux 等衍生品上,使用 Pacman 进行安装,如下所示:
|
||||
|
||||
```
|
||||
$ sudo pacman -S sslh
|
||||
```
|
||||
|
||||
在 RHEL、CentOS 上,你需要添加 EPEL 存储库,然后安装 SSLH,如下所示:
|
||||
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
$ sudo yum install sslh
|
||||
```
|
||||
|
||||
在 Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install sslh
|
||||
```
|
||||
|
||||
如果它在默认存储库中不可用,你可以如[这里][3]所述手动编译和安装 SSLH。
|
||||
|
||||
#### 配置 Apache 或 Nginx Web 服务器
|
||||
|
||||
如你所知,Apache 和 Nginx Web 服务器默认会监听所有网络接口(即 `0.0.0.0:443`)。我们需要更改此设置以告知 Web 服务器仅侦听 `localhost` 接口(即 `127.0.0.1:443` 或 `localhost:443`)。
|
||||
|
||||
为此,请编辑 Web 服务器(nginx 或 apache)配置文件并找到以下行:
|
||||
|
||||
```
|
||||
listen 443 ssl;
|
||||
```
|
||||
|
||||
将其修改为:
|
||||
|
||||
```
|
||||
listen 127.0.0.1:443 ssl;
|
||||
```
|
||||
|
||||
如果你在 Apache 中使用虚拟主机,请确保你也修改了它。
|
||||
|
||||
```
|
||||
<VirtualHost 127.0.0.1:443>
|
||||
```
|
||||
|
||||
保存并关闭配置文件。不要重新启动该服务。我们还没有完成。
|
||||
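在改动之后、重启服务之前,不妨先检查一下配置文件的语法有没有问题(下面分别对应 nginx 和 Apache;命令名称可能因发行版而略有差异):

```
$ sudo nginx -t              # nginx:检查配置语法
$ sudo apachectl configtest  # Apache:检查配置语法
```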
|
||||
#### 配置 SSLH
|
||||
|
||||
使 Web 服务器仅在本地接口上侦听后,编辑 SSLH 配置文件:
|
||||
|
||||
```
|
||||
$ sudo vi /etc/default/sslh
|
||||
```
|
||||
|
||||
找到下列行:
|
||||
|
||||
```
|
||||
Run=no
|
||||
```
|
||||
|
||||
将其修改为:
|
||||
|
||||
```
|
||||
Run=yes
|
||||
```
|
||||
|
||||
然后,向下滚动一点并修改以下行以允许 SSLH 在所有可用接口上侦听端口 443(例如 `0.0.0.0:443`)。
|
||||
|
||||
```
|
||||
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
|
||||
```
|
||||
|
||||
这里,
|
||||
|
||||
* `--user sslh`:要求以这个特定的用户身份运行。
|
||||
* `--listen 0.0.0.0:443`:SSLH 监听所有可用接口的 443 端口。
|
||||
* `--ssh 127.0.0.1:22`:将 SSH 流量路由到本地的 22 端口。
|
||||
* `--ssl 127.0.0.1:443`:将 HTTPS/SSL 流量路由到本地的 443 端口。
|
||||
|
||||
保存并关闭文件。
|
||||
|
||||
最后,启用并启动 `sslh` 服务以更新更改。
|
||||
|
||||
```
|
||||
$ sudo systemctl enable sslh
|
||||
$ sudo systemctl start sslh
|
||||
```
|
||||
|
||||
#### 测试
|
||||
|
||||
检查 SSLH 守护程序是否正在监听 443。
|
||||
|
||||
```
|
||||
$ ps -ef | grep sslh
|
||||
sslh 2746 1 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
|
||||
sslh 2747 2746 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
|
||||
sk 2754 1432 0 15:51 pts/0 00:00:00 grep --color=auto sslh
|
||||
```
|
||||
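除了 `ps`,你也可以用 `ss` 确认 443 端口确实是由 sslh 在监听(这只是另一种可选的检查方式):

```
$ sudo ss -tlnp | grep ':443'   # 输出中应该能看到 sslh 进程占用着 443 端口
```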
|
||||
现在,你可以使用端口 443 通过 SSH 访问远程服务器:
|
||||
|
||||
```
|
||||
$ ssh -p 443 [email protected]
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
[email protected]'s password:
|
||||
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
|
||||
|
||||
* Documentation: https://help.ubuntu.com
|
||||
* Management: https://landscape.canonical.com
|
||||
* Support: https://ubuntu.com/advantage
|
||||
|
||||
System information as of Wed Aug 14 13:11:04 IST 2019
|
||||
|
||||
System load: 0.23 Processes: 101
|
||||
Usage of /: 53.5% of 19.56GB Users logged in: 0
|
||||
Memory usage: 9% IP address for enp0s3: 192.168.225.50
|
||||
Swap usage: 0% IP address for enp0s8: 192.168.225.51
|
||||
|
||||
* Keen to learn Istio? It's included in the single-package MicroK8s.
|
||||
|
||||
https://snapcraft.io/microk8s
|
||||
|
||||
61 packages can be updated.
|
||||
22 updates are security updates.
|
||||
|
||||
|
||||
Last login: Wed Aug 14 13:10:33 2019 from 127.0.0.1
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
*通过 SSH 使用 443 端口访问远程系统*
|
||||
|
||||
看见了吗?即使默认的 SSH 端口 22 被封锁,我现在也可以通过 SSH 访问远程服务器。正如你在上面的示例中所看到的,我使用 HTTPS 端口 443 进行了 SSH 连接。
|
||||
|
||||
我在我的 Ubuntu 18.04 LTS 服务器上测试了 SSLH,它如上所述工作得很好。我在受保护的局域网中测试了 SSLH,所以我不知道是否有安全问题。如果你在生产环境中使用它,请在下面的评论部分中告诉我们使用 SSLH 的优缺点。
|
||||
|
||||
有关更多详细信息,请查看下面给出的官方 GitHub 页面。
|
||||
|
||||
资源:
|
||||
|
||||
* [SSLH GitHub 仓库][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/sslh-share-port-https-ssh/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2017/08/SSLH-Share-A-Same-Port-For-HTTPS-And-SSH-1-720x340.jpg
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2017/08/install-sslh.png
|
||||
[3]: https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2017/08/Access-remote-systems-via-SSH-using-port-443.png
|
||||
[5]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
|
||||
[6]: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
|
||||
[7]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
[8]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
|
||||
[9]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
|
||||
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
|
||||
[11]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/
|
||||
[12]: https://github.com/yrutschle/sslh
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11241-1.html)
|
||||
[#]: subject: (GNOME and KDE team up on the Linux desktop, docs for Nvidia GPUs open up, a powerful new way to scan for firmware vulnerabilities, and more news)
|
||||
[#]: via: (https://opensource.com/article/19/8/news-august-17)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
开源新闻综述:GNOME 和 KDE 达成合作、Nvidia 开源 GPU 文档
|
||||
======
|
||||
|
||||
> 不要错过两周以来最大的开源头条新闻。
|
||||
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
在本期开源新闻综述中,我们将介绍两种新的强大数据可视化工具、Nvidia 开源其 GPU 文档、激动人心的新工具、确保自动驾驶汽车的固件安全等等!
|
||||
|
||||
### GNOME 和 KDE 在 Linux 桌面上达成合作
|
||||
|
||||
Linux 在桌面计算机上一直处于分裂状态。在最近的一篇[公告][2]中称,“两个主要的 Linux 桌面竞争对手,[GNOME 基金会][3] 和 [KDE][4] 已经同意合作。”
|
||||
|
||||
这两个组织将成为今年 11 月在巴塞罗那举办的 [Linux App Summit(LAS)2019][5] 的赞助商。这一举措在某种程度上似乎是在回应:桌面计算已不再是值得争夺主导权的主战场。无论原因为何,Linux 桌面的粉丝们都有了新的理由,去期待未来出现一个标准化的 GUI 环境。
|
||||
|
||||
### 新的开源数据可视化工具
|
||||
|
||||
这个世界上很少有不是由数据驱动的。除非数据以人们可以互动的形式出现,否则它并不是很好使用。最近开源的两个数据可视化项目正在尝试使数据更有用。
|
||||
|
||||
第一个工具名为 **Neuroglancer**,由 [Google 的研究团队][6]创建。它“使神经科学家能够在交互式可视化中建立大脑神经通路的 3D 模型”。Neuroglancer 通过使用神经网络追踪大脑中的神经元通路并构建完整的可视化来实现这一点。科学家们已经使用 Neuroglancer(你可以[从 GitHub 取得][7])扫描果蝇的大脑,建立了一个交互式地图。
|
||||
|
||||
第二个工具来自一个意想不到的来源:澳大利亚信号理事会。这是该国类似 NSA 的机构,它“开源了[内部数据可视化和分析工具][8]之一”。这个被称为 **[Constellation][9]** 的工具可以“识别复杂数据集中的趋势和模式,并且能够扩展到‘数十亿输入’”。该机构总干事迈克•伯吉斯表示,他希望“这一工具将有助于产生有利于所有澳大利亚人的科学及其他方面的突破”。鉴于它是开源的,它可以使整个世界受益。
|
||||
|
||||
### Nvidia 开始发布 GPU 文档
|
||||
|
||||
多年来,图形处理单元(GPU)制造商 Nvidia 并没有做出什么让开源项目轻松开发其产品的驱动程序的努力。现在,该公司通过[发布 GPU 硬件文档][10]向这些项目迈出了一大步。
|
||||
|
||||
该公司根据 MIT 许可证发布的文档[可在 GitHub 上获取][11]。它涵盖了几个关键领域,如设备初始化、内存时钟/调整和电源状态。据硬件新闻网站 Phoronix 称,开发了 Nvidia GPU 的开源驱动程序的 Nouveau 项目将是率先使用该文档来推动其开发工作的项目之一。
|
||||
|
||||
### 用于保护固件的新工具
|
||||
|
||||
似乎每周都有的消息称,移动设备或连接互联网的小设备中出现新漏洞。通常,这些漏洞存在于控制设备的固件中。自动驾驶汽车服务 Cruise [发布了一个开源工具][12],用于在这些漏洞成为问题之前捕获这些漏洞。
|
||||
|
||||
该工具被称为 [FwAnalzyer][13]。它检查固件代码中是否存在许多潜在问题,包括“识别潜在危险的可执行文件”,并查明“任何错误遗留的调试代码”。Cruise 的工程师 Collin Mulliner 曾帮助开发该工具,他说通过在代码上运行 FwAnalyzer,固件开发人员“现在能够检测并防止各种安全问题。”
|
||||
|
||||
### 其它新闻
|
||||
|
||||
* [为什么洛杉矶决定将未来寄予开源][14]
|
||||
* [麻省理工学院出版社发布了关于开源出版软件的综合报告][15]
|
||||
* [华为推出鸿蒙操作系统,不会放弃 Android 智能手机][16]
|
||||
|
||||
*一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/news-august-17
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.zdnet.com/article/gnome-and-kde-work-together-on-the-linux-desktop/
|
||||
[3]: https://www.gnome.org/
|
||||
[4]: https://kde.org/
|
||||
[5]: https://linuxappsummit.org/
|
||||
[6]: https://www.cbronline.com/news/brain-mapping-google-ai
|
||||
[7]: https://github.com/google/neuroglancer
|
||||
[8]: https://www.computerworld.com.au/article/665286/australian-signals-directorate-open-sources-data-analysis-tool/
|
||||
[9]: https://www.constellation-app.com/
|
||||
[10]: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Open-GPU-Docs
|
||||
[11]: https://github.com/nvidia/open-gpu-doc
|
||||
[12]: https://arstechnica.com/information-technology/2019/08/self-driving-car-service-open-sources-new-tool-for-securing-firmware/
|
||||
[13]: https://github.com/cruise-automation/fwanalyzer
|
||||
[14]: https://www.techrepublic.com/article/why-la-decided-to-open-source-its-future/
|
||||
[15]: https://news.mit.edu/2019/mit-press-report-open-source-publishing-software-0808
|
||||
[16]: https://www.itnews.com.au/news/huawei-unveils-harmony-operating-system-wont-ditch-android-for-smartphones-529432
|
@ -1,91 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Smartphone Librem 5 is Available for Preorder)
|
||||
[#]: via: (https://itsfoss.com/librem-5-available/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Linux Smartphone Librem 5 is Available for Preorder
|
||||
======
|
||||
|
||||
Purism recently [announced][1] the final specs for its [Librem 5 smartphone][2]. This is not based on Android or iOS – but built on [PureOS][3], which is an [open-source alternative to Android][4].
|
||||
|
||||
Along with the announcement, the Librem 5 is also available for [pre-orders for $649][5] (as an early bird offer till 31st July) and it will go up by $50 following the date. It will start shipping from Q3 of 2019.
|
||||
|
||||
![][6]
|
||||
|
||||
Here’s what Purism mentioned about Librem 5 in its blog post:
|
||||
|
||||
_**We believe phones should not track you nor exploit your digital life.**_
|
||||
|
||||
_The Librem 5 represents the opportunity for you to take back control and protect your private information, your digital life through free and open source software, open governance, and transparency. The Librem 5 is_ **_a phone built on_ [_PureOS_][3]**_, a fully free, ethical and open-source operating system that is_ _**not based on Android or iOS**_ _(learn more about [why this is important][7])._
|
||||
|
||||
_We have successfully crossed our crowdfunding goals and will be delivering on our promise. The Librem 5’s hardware and software development is advancing [at a steady pace][8], and is scheduled for an initial release in Q3 2019. You can preorder the phone at $649 until shipping begins and regular pricing comes into effect. Kits with an external monitor, keyboard and mouse, are also available for preorder._
|
||||
|
||||
### Librem 5 Specifications
|
||||
|
||||
From what it looks like, Librem 5 definitely aims to provide better privacy and security. In addition to this, it tries to avoid using anything from Google or Apple.
|
||||
|
||||
While the idea is good enough – how does it hold up as a commercial smartphone under $700?
|
||||
|
||||
![Librem 5 Smartphone][9]
|
||||
|
||||
Let us take a look at the tech specs:
|
||||
|
||||
![Librem 5][10]
|
||||
|
||||
On paper the tech specs seems to be good enough. Not too great – not too bad. But, what about the performance? The user experience?
|
||||
|
||||
Well, we can’t be too sure about it – unless we use it. So, if you are pre-ordering it – take that into consideration.
|
||||
|
||||
|
||||
|
||||
### Lifetime software updates for Librem 5
|
||||
|
||||
Of course, the specs aren’t very pretty when compared to the smartphones available at this price range.
|
||||
|
||||
However, with the promise of lifetime software updates – it does look like a decent offering for open source enthusiasts.
|
||||
|
||||
### Other Key Features
|
||||
|
||||
Purism also highlights the fact that Librem 5 will be the first-ever [Matrix][12]-powered smartphone. This means that it will support end-to-end decentralized encrypted communications for messaging and calling.
|
||||
|
||||
In addition to all these, the presence of headphone jack and a user-replaceable battery makes it a pretty solid deal.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Even though it is tough to compete with the likes of Android/iOS smartphones, having an alternative is always good. Librem 5 may not prove to be an ideal smartphone for every user – but if you are an open-source enthusiast and looking for a simple smartphone that respects privacy and security without utilizing Google/Apple services, this is for you.
|
||||
|
||||
Also the fact that it will receive lifetime software updates – makes it an interesting smartphone.
|
||||
|
||||
What do you think about Librem 5? Are you thinking to pre-order it? Let us know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/librem-5-available/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://puri.sm/posts/librem-5-smartphone-final-specs-announced/
|
||||
[2]: https://itsfoss.com/librem-linux-phone/
|
||||
[3]: https://pureos.net/
|
||||
[4]: https://itsfoss.com/open-source-alternatives-android/
|
||||
[5]: https://shop.puri.sm/shop/librem-5/
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/librem-5-linux-smartphone.jpg?resize=800%2C450&ssl=1
|
||||
[7]: https://puri.sm/products/librem-5/pureos-mobile/
|
||||
[8]: https://puri.sm/posts/tag/phones
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-smartphone.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/librem-5-specs.png?ssl=1
|
||||
[11]: https://itsfoss.com/linux-games-performance-boost-amd-gpu/
|
||||
[12]: http://matrix.org
|
@ -1,85 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The fastest open source CPU ever, Facebook shares AI algorithms fighting harmful content, and more news)
|
||||
[#]: via: (https://opensource.com/article/19/8/news-august-3)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
|
||||
The fastest open source CPU ever, Facebook shares AI algorithms fighting harmful content, and more news
|
||||
======
|
||||
Catch up on the biggest open source headlines from the past two weeks.
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
In this edition of our open source news roundup, we share Facebook's choice to open source two algorithms for finding harmful content, Apple's new role in the Data Transfer Project, and more news you should know.
|
||||
|
||||
### Facebook open sources algorithms used to find harmful content
|
||||
|
||||
Facebook announced that it has [open sourced two algorithms][2] used to find child exploitation, threats of terrorism, and graphic violence on the platform. In a blog post on August 1st, Facebook shared that PDQ and TMK+PDQF–two technologies that store files as digital hashes, then compare them with known examples of harmful content–are [now live on GitHub][3].
|
||||
|
||||
The code release comes amidst increased pressure to get harmful content off the platform as soon as possible. After March's mass murder in New Zealand was streamed on Facebook Live, the Australian government [threatened Facebook execs with fines and jail time][4] if the video wasn't promptly removed. By releasing the source code for these algorithms, Facebook said that it hopes nonprofits, tech companies, and solo developers can all help them find and remove harmful content faster.
|
||||
|
||||
### Alibaba launches the fastest open source CPU
|
||||
|
||||
Pingtouge Semiconductor - an Alibaba subsidiary - [announced its Xuantie 910 processor][5] last month. It's equipped to manage infrastructure for AI, the IoT, 5G, and autonomous vehicles, among other projects. It boasts a 7.1 Coremark/MHz, making it the fastest open source CPU on the market.
|
||||
|
||||
Pingtouge announced plans to make its polished code available on GitHub this September. Analysts view this release as a power move to help China hit its goal of using local suppliers to meet 40 percent of processor demand by 2021. Recent U.S. tariffs threaten to derail this goal, creating the need for open source computer components.
|
||||
|
||||
### Mattermost makes the case for open source collaboration apps
|
||||
|
||||
All open source communities benefit from one or more places to communicate with each other. The world of team chat apps seems dominated by minimal, mighty tools like Slack and Microsoft Teams. Most options are cloud-based and proprietary; Mattermost takes a different approach by selling the value of open source collaboration apps.
|
||||
|
||||
> “People want an open-source alternative because they need the trust, the flexibility and the innovation that only open source is able to deliver,” said Ian Tien, co-founder and CEO of Mattermost.
|
||||
|
||||
With clients that range from Uber to the Department of Defense, Mattermost cornered a crucial market: Teams that want open source software they can trust and install on their own servers. For businesses that need collaboration apps to run on their internal infrastructure, Mattermost fills a gap that [Atlassian left bare][6]. In an article for Computerworld, Matthew Finnegan [explores][7] why on-premises, open source chat tools aren't dead yet.
|
||||
|
||||
### Apple joins the open source Data Transfer Project
|
||||
|
||||
Google, Facebook, Twitter, and Microsoft united last year to create the Data Transfer Project (DTP). Hailed as a way to boost data security and user agency over their own data, the DTP is a rare show of solidarity in tech. This week, Apple announced that they'll [join the fold][8].
|
||||
|
||||
The DTP's goal is to help users transfer their data from one online service to another via an open source platform. DTP aims to take out the middleman by using APIs and authorization tools to let users transfer their data from one service to another. This would erase the need for users to download their data, then upload it to another service. Apple's choice to join the DTP will allow users to transfer data in and out of iCloud, and could be a big win for privacy advocates.
|
||||
|
||||
#### In other news
|
||||
|
||||
* [FlexiWAN's open source SD-WAN available for download in public beta release][9]
|
||||
|
||||
* [Open source Windows Calculator app to get Always-on-Top mode][10]
|
||||
|
||||
* [With Zowe, open source and DevOps are democratizing the mainframe computer][11]
|
||||
|
||||
* [Mozilla debuts implementation of WebThings Gateway open source router firmware][12]
|
||||
|
||||
* [Updated: Contributing to the Mozilla code base][13]
|
||||
|
||||
|
||||
|
||||
|
||||
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/news-august-3
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.theverge.com/2019/8/1/20750752/facebook-child-exploitation-terrorism-open-source-algorithm-pdq-tmk
|
||||
[3]: https://github.com/facebook/ThreatExchange/tree/master/hashing/tmk
|
||||
[4]: https://www.buzzfeed.com/hannahryan/social-media-facebook-livestreaming-laws-christchurch
|
||||
[5]: https://hexus.net/tech/news/cpu/133229-alibabas-16-core-risc-v-fastest-open-source-cpu-yet/
|
||||
[6]: https://lab.getapp.com/atlassian-slack-on-premise-software/
|
||||
[7]: https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html
|
||||
[8]: https://www.techspot.com/news/81221-apple-joins-data-transfer-project-open-source-project.html
|
||||
[9]: https://www.fiercetelecom.com/telecom/flexiwan-s-open-source-sd-wan-available-for-download-public-beta-release
|
||||
[10]: https://mspoweruser.com/open-source-windows-calculator-app-to-get-always-on-top-mode/
|
||||
[11]: https://siliconangle.com/2019/07/29/zowe-open-source-devops-democratizing-mainframe-computer/
|
||||
[12]: https://venturebeat.com/2019/07/25/mozilla-debuts-webthings-gateway-open-source-router-firmware-for-turris-omnia/
|
||||
[13]: https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Introduction
|
@ -0,0 +1,107 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (LiVES Video Editor 3.0 is Here With Significant Improvements)
|
||||
[#]: via: (https://itsfoss.com/lives-video-editor/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
LiVES 视频编辑器 3.0 有了显著的改善
|
||||
======
|
||||
|
||||
我们最近列出了一份[最佳开源视频编辑器][1]的清单,LiVES 就是其中一款可以免费使用的开源视频编辑器。
|
||||
|
||||
尽管许多用户还在等待 Windows 版本的发布,LiVES 视频编辑器刚刚发布的 Linux 新版本(最新版本为 v3.0.1)带来了一次重大更新,其中包括一些新功能和改进。
|
||||
|
||||
在这篇文章里,我将会列出新版本中的重要改进,并且我将会提到在 Linux 上安装的步骤。
|
||||
|
||||
### LiVES 视频编辑器 3.0:新的改进
|
||||
|
||||
![Zorin OS 中正在加载的 LiVES 视频编辑器][2]
|
||||
|
||||
总的来说,这次重大更新旨在让 LiVES 视频编辑器提供更流畅的回放、避免意外崩溃、优化视频录制,并让在线视频下载功能更加实用。
|
||||
|
||||
下面列出了修改:
|
||||
|
||||
* 如果需要加载的话,可以静默加载直到视频播放完毕。
|
||||
* 改进回放插件为 openGL,提供更加丝滑的回放。
|
||||
* 重新启用了 openGL 回放插件的高级选项。
|
||||
* 在 VJ/预解码 中允许“充足”的所有帧
|
||||
* 重构了在播放时基础计算的代码(有了更好的 a/v 同步)。
|
||||
* 彻底修复了外部音频和音频,提高了准确性并减少了 CPU 周期。
|
||||
* 进入多音轨模式时自动切换至内部音频。
|
||||
* 重新显示效果映射器窗口时,将会正常展示效果状态(on/off)。
|
||||
* 解决了音频和视频线程之间的冲突。
|
||||
* 现在可以对在线视频下载器,剪辑大小和格式进行修改并添加了更新选项。
|
||||
* 对实时效果实行了参考计数的记录。
|
||||
* 大范围重写了主界面,清理代码并改进多视觉。
|
||||
* 优化了视频播放器运行时的录制功能。
|
||||
* 改进了 projectM 过滤器,包括支持了 SDL2。
|
||||
* 添加了一个选项来逆转多轨合成器中的 Z-order(后层现在可以覆盖上层了)。
|
||||
* 增加了对 musl libc 的支持
|
||||
* 更新了乌克兰语的翻译
|
||||
|
||||
|
||||
如果您不是一位高级视频编辑师,也许会对上面列出的重要更新提不起太大的兴趣。但正是因为这些更新,才使得“LiVES 视频编辑器”成为了最好的开源视频编辑软件。
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 LiVES 视频编辑器
|
||||
|
||||
LiVES 几乎可以在所有主要 Linux 发行版中使用。但是,您可能并不能在软件中心找到它的最新版本。所以,如果你想通过这种方式安装,那你就不得不耐心等待了。
|
||||
|
||||
如果你想要手动安装,可以从它的下载页面获取 Fedora/Open SUSE 的 RPM 安装包。它也适用于其他 Linux 发行版。
|
||||
|
||||
[下载 LiVES 视频编辑器][4]
|
||||
|
||||
如果您使用的是 Ubuntu(或其他基于 Ubuntu 的发行版),您可以安装由 [Ubuntuhandbook][6] 进行维护的[非官方 PPA][5]。
|
||||
|
||||
下面由我来告诉你,你该做些什么:
|
||||
|
||||
**1.** 启动终端后输入以下命令:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/lives
|
||||
```
|
||||
|
||||
系统将提示您输入密码用于确认添加 PPA。
|
||||
|
||||
**2.** 完成后,你就可以轻松地更新软件包列表并安装 LiVES 视频编辑器了。以下是需要输入的命令:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt install lives lives-plugins
|
||||
```
|
||||
|
||||
**3.** 现在,它开始下载并安装视频编辑器,等待大约一分钟即可完成。
|
||||
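安装完成后,你可以从应用菜单启动它,也可以直接在终端运行(这里假定可执行文件名为 `lives`,请以实际安装为准):

```
$ lives
```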
|
||||
### 总结
|
||||
|
||||
Linux 上有许多[视频编辑器][7],但它们通常被认为无法胜任专业的编辑工作。我不是专业人士,所以像 LiVES 这样免费的视频编辑器就足以满足简单的编辑需求了。
|
||||
|
||||
您认为怎么样呢?您在 Linux 上使用 LiVES 或其他视频编辑器的体验还好吗?在下面的评论中告诉我们你的感觉吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/lives-video-editor/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Scvoet][c]
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[c]: https://github.com/scvoet
|
||||
[1]: https://itsfoss.com/open-source-video-editors/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/lives-video-editor-loading.jpg?ssl=1
|
||||
[3]: https://itsfoss.com/vidcutter-video-editor-linux/
|
||||
[4]: http://lives-video.com/index.php?do=downloads#binaries
|
||||
[5]: https://itsfoss.com/ppa-guide/
|
||||
[6]: http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/
|
||||
[7]: https://itsfoss.com/best-video-editing-software-linux/
|
@ -1,3 +1,5 @@
|
||||
translating by valoniakim
|
||||
|
||||
How allowing myself to be vulnerable made me a better leader
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)
|
||||
|
@ -0,0 +1,162 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
|
||||
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
|
||||
[#]: author: (redhat https://www.redhat.com)
|
||||
|
||||
Command Line Heroes: Season 1: OS Wars(Part2 Rise of Linux)
|
||||
======
|
||||
Saron Yitbarek: Is this thing on? Cue the epic Star Wars crawl, and, action.
|
||||
|
||||
Voice Actor: [00:00:30] Episode Two: Rise of Linux®. The empire of Microsoft controls 90% of desktop users. Complete standardization of operating systems seems assured. However, the advent of the internet swerves the focus of the war from the desktop toward enterprise, where all businesses scramble to claim a server of their own. Meanwhile, an unlikely hero arises from amongst the band of open source rebels. Linus Torvalds, headstrong, bespectacled, releases his Linux system free of charge. Microsoft reels — and regroups.
|
||||
|
||||
Saron Yitbarek: [00:01:00] Oh, the nerd in me just loves that. So, where were we? Last time, Apple and Microsoft were trading blows, trying to dominate in a war over desktop users. By the end of episode one, we saw Microsoft claiming most of the prize. Soon, the entire landscape went through a seismic upheaval. That's all because of the rise of the internet and the army of developers that rose with it. The internet moves the battlefield from PC users in their home offices to giant business clients with hundreds of servers.
|
||||
|
||||
[00:01:30] This is a huge resource shift. Not only does every company out there wanting to remain relevant suddenly have to pay for server space and get a website built — they also have to integrate software to track resources, monitor databases, et cetera, et cetera. You're going to need a lot of developers to help you with that. At least, back then you did.
|
||||
|
||||
In part two of the OS wars, we'll see how that enormous shift in priorities, and the work of a few open source rebels like Linus Torvalds and Richard Stallman, managed to strike fear in the heart of Microsoft, and an entire software industry.
|
||||
|
||||
[00:02:00] I'm Saron Yitbarek and you're listening to Command Line Heroes, an original podcast from Red Hat. In each episode, we're bringing you stories about the people who transform technology from the command line up.
|
||||
|
||||
[00:02:30] Okay. Imagine for a second that you're Microsoft in 1991. You're feeling pretty good, right? Pretty confident. Assured global domination feels nice. You've mastered the art of partnering with other businesses, but you're still pretty much cutting out the developers, programmers, and sys admins that are the real foot soldiers out there. There is this Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to put out versions of Linux, this OS kernel that they're duct taping together.
|
||||
|
||||
[00:03:00] If you're Microsoft, frankly, you're not too concerned about Linux or even about open source in general, but eventually, the sheer size of Linux gets so big that it becomes impossible for Microsoft not to notice. The first version comes out in 1991 and it's got maybe 10,000 lines of code. A decade later, there will be three million lines of code. In case you're wondering, today it's at 20 million.
|
||||
|
||||
[00:03:30] For a moment, let's stay in the early 90s. Linux hasn't yet become the behemoth we know now. It's just this strangely viral OS that's creeping across the planet, and the geeks and hackers of the world are falling in love with it. I was too young in those early days, but I sort of wish I'd been there. At that time, discovering Linux was like gaining access to a secret society. Programmers would share the Linux CD set with friends the same way other people would share mixtapes of underground music.
|
||||
|
||||
Developer Tristram Oaten [00:03:40] tells the story of how he first encountered Linux when he was 16 years old.
|
||||
|
||||
Tristram Oaten: [00:04:00] We went on a scuba diving holiday, my family and I, to Hurghada, which is on the Red Sea. Beautiful place, highly recommend it. The first day, I drank the tap water. Probably, my mom told me not to. I was really sick the whole week — didn't leave the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, this thing that I'd heard about and was giving it a try. There were no extra apps, just what came on the eight CDs. By necessity, all I had to do this whole week was to get to grips with this alien system. I read man pages, played around with the terminal. I remember not knowing the difference between a single dot, meaning the current directory, and two dots, meaning the previous directory.
|
||||
|
||||
[00:04:30] I had no clue. I must have made so many mistakes, but slowly, over the course of this forcible solitude, I broke through this barrier and started to understand and figure out what this command line thing was all about. By the end of the holiday, I hadn't seen the pyramids, the Nile, or any Egyptian sites, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest is history.
|
||||
|
||||
Saron Yitbarek: You can hear some variation on that story from a lot of people. Getting access to that Linux command line was a transformative experience.
|
||||
|
||||
David Cantrell: This thing gave me the source code. I was like, "That's amazing."
|
||||
|
||||
Saron Yitbarek: We're at a 2017 Linux developers conference called Flock to Fedora.
|
||||
|
||||
David Cantrell: ... very appealing. I felt like I had more control over the system and it just drew me in more and more. From there, I guess, after my first Linux kernel compile in 1995, I was hooked, so, yeah.
|
||||
|
||||
Saron Yitbarek: Developers David Cantrell and Joe Brockmire.
|
||||
|
||||
Joe Brockmeier: I was going through the cheap software and found a four-CD set of Slackware Linux. It sounded really exciting and interesting so I took it home, installed it on a second computer, started playing with it, and really got excited about two things. One was, I was excited not to be running Windows, and I was excited by the open source nature of Linux.
|
||||
|
||||
Saron Yitbarek: [00:06:00] That access to the command line was, in some ways, always there. Decades before open source really took off, there was always a desire to have complete control, at least among developers. Go way back to a time before the OS wars, before Apple and Microsoft were fighting over their GUIs. There were command line heroes then, too. Professor Paul Jones is the director of the online library ibiblio.org. He worked as a developer during those early days.
|
||||
|
||||
Paul Jones: [00:07:00] The internet, by its nature, at that time, was less client server, totally, and more peer-to-peer. We're talking about, really, some sort of VAX to VAX, some sort of scientific workstation, the scientific workstation. That doesn't mean that client and server relationships and applications weren't there, but it does mean that the original design was to think of how to do peer-to-peer things, the opposite of what IBM had been doing, in which they had dumb terminals that had only enough intelligence to manage the user interface, but not enough intelligence to actually let you do anything in the terminal that would expose anything to it.
|
||||
|
||||
Saron Yitbarek: As popular as GUI was becoming among casual users, there was always a pull in the opposite direction for the engineers and developers. Before Linux, in the 1970s and 80s, that resistance was there, with Emacs and GNU. With Stallman's Free Software Foundation, certain folks were always begging for access to the command line, but it was Linux in the 1990s that delivered like no other.
|
||||
|
||||
[00:07:30] The early lovers of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.
|
||||
|
||||
You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.
|
||||
|
||||
Steven Vaughan-Nichols: By 1998, things have changed.
|
||||
|
||||
Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing for decades about the business side of technology. He describes how Linux slowly became more and more popular until the number of volunteer contributors was way larger than the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, and maybe that's why Microsoft ignored them at first. Where Linux did shine was in the server room. When businesses went online, each one required a unique programming solution for their needs.
|
||||
|
||||
[00:08:30] Windows NT comes out in 1993 and it's competing with other server operating systems, but lots of developers are thinking, "Why am I going to buy an AIX box or a large windows box when I could set up a cheap Linux-based system with Apache?" Point is, Linux code started seeping into just about everything online.
|
||||
|
||||
Steven Vaughan-Nichols: [00:09:00] Microsoft realizes that Linux, quite to their surprise, is actually beginning to get some of the business, not so much on the desktop, but on business servers. As a result of that, they start a campaign, what we like to call FUD — fear, uncertainty and doubt — saying, "Oh this Linux stuff, it's really not that good. It's not very reliable. You can't trust it with anything."
|
||||
|
||||
Saron Yitbarek: [00:09:30] That soft propaganda style attack goes on for a while. Microsoft wasn't the only one getting nervous about Linux, either. It was really a whole industry versus that weird new guy. For example, anyone with a stake in UNIX was likely to see Linux as a usurper. Famously, the SCO Group, which had produced a version of UNIX, waged lawsuits for over a decade to try and stop the spread of Linux. SCO ultimately failed and went bankrupt. Meanwhile, Microsoft kept searching for their opening. They were a company that needed to make a move. It just wasn't clear what that move was going to be.
|
||||
|
||||
Steven Vaughan-Nichols: [00:10:30] What will make Microsoft really concerned about it is the next year, in 2000, IBM will announce that they will invest a billion dollars in Linux in 2001. Now, IBM is not really in the PC business anymore. They're not out yet, but they're going in that direction, but what they are doing is they see Linux as being the future of servers and mainframe computers, which, spoiler alert, IBM was correct. Linux is going to dominate the server world.
|
||||
|
||||
Saron Yitbarek: This was no longer just about a bunch of hackers loving their Jedi-like control of the command line. This was about the money side working in Linux's favor in a major way. John "Mad Dog" Hall, the executive director of Linux International, has a story that explains why that was. We reached him by phone.
|
||||
|
||||
John Hall: [00:11:30] A friend of mine named Dirk Holden was a German systems administrator at Deutsche Bank in Germany, and he also worked in the graphics projects for the early days of the X Window System for PCs. I visited him one day at the bank, and I said, "Dirk, you have 3,000 servers here at the bank and you use Linux. Why don't you use Microsoft NT?" He looked at me and he said, "Yes, I have 3,000 servers, and if I used Microsoft Windows NT, I would need 2,999 systems administrators." He says, "With Linux, I only need four." That was the perfect answer.
|
||||
|
||||
Saron Yitbarek: [00:12:00] The thing programmers are getting obsessed with also happens to be deeply attractive to big business. Some businesses were wary. The FUD was having an effect. They heard open source and thought, "Open. That doesn't sound solid. It's going to be chaotic, full of bugs," but as that bank manager pointed out, money has a funny way of convincing people to get over their hangups. Even little businesses, all of which needed websites, were coming on board. The cost of working with a cheap Linux system over some expensive proprietary option, there was really no comparison. If you were a shop hiring a pro to build your website, you wanted them to use Linux.
|
||||
|
||||
[00:12:30] Fast forward a few years. Linux runs everybody's website. Linux has conquered the server world, and then, along comes the smartphone. Apple and their iPhones take a sizeable share of the market, of course, and Microsoft hoped to get in on that, except, surprise, Linux was there, too, ready and raring to go.
|
||||
|
||||
Author and journalist James Allworth.
|
||||
|
||||
James Allworth: [00:13:00] There was certainly room for a second player, and that could well have been Microsoft, but for the fact of Android, which was fundamentally based on Linux, and because Android, famously acquired by Google, and now running a majority of the world's smartphones, Google built it on top of that. They were able to start with a very sophisticated operating system and a cost basis of zero. They managed to pull it off, and it ended up locking Microsoft out of the next generation of devices, by and large, at least from an operating system perspective.
|
||||
|
||||
Saron Yitbarek: [00:13:30] The ground was breaking up, big time, and Microsoft was in danger of falling into the cracks. John Gossman is the chief architect on the Azure team at Microsoft. He remembers the confusion that gripped the company at that time.
|
||||
|
||||
John Gossman: [00:14:00] Like a lot of companies, Microsoft was very concerned about IP pollution. They thought that if you let developers use open source they would likely just copy and paste bits of code into some product and then some sort of a viral license might take effect that ... They were also very confused, I think, it was just culturally, a lot of companies, Microsoft included, were confused on the difference between what open source development meant and what the business model was. There was this idea that open source meant that all your software was free and people were never going to pay anything.
|
||||
|
||||
Saron Yitbarek: [00:14:30] Anybody invested in the old, proprietary model of software is going to feel threatened by what's happening here. When you threaten an enormous company like Microsoft, yeah, you can bet they're going to react. It makes sense they were pushing all that FUD — fear, uncertainty and doubt. At the time, an “us versus them” attitude was pretty much how business worked. If they'd been any other company, though, they might have kept that old grudge, that old thinking, but then, in 2013, everything changes.
|
||||
|
||||
[00:15:00] Microsoft's cloud computing service, Azure, goes online and, shockingly, it offers Linux virtual machines from day one. Steve Ballmer, the CEO who called Linux a cancer, he's out, and a new forward - thinking CEO, Satya Nadella, has been brought in.
|
||||
|
||||
John Gossman: Satya has a different attitude. He's another generation. He's a generation younger than Paul and Bill and Steve were, and had a different perspective on open source.
|
||||
|
||||
Saron Yitbarek: John Gossman, again, from Microsoft's Azure team.
|
||||
|
||||
John Gossman: [00:16:00] We added Linux support into Azure about four years ago, and that was for very pragmatic reasons. If you go to any enterprise customer, you will find that they are not trying to decide whether to use Windows or to use Linux or to use .net or to use Java TM . They made all those decisions a long time ago — about 15 years or so ago, there was some of this argument. Now, every company that I have ever seen has a mix of Linux and Java and Windows and .net and SQL Server and Oracle and MySQL — proprietary source code - based products and open source code products.
|
||||
|
||||
If you're going to operate a cloud and you're going to allow and enable those companies to run their businesses on the cloud, you simply cannot tell them, "You can use this software but you can't use this software."
|
||||
|
||||
Saron Yitbarek: [00:16:30] That's exactly the philosophy that Satya Nadella adopted. In the fall of 2014, he gets up on stage and he wants to get across one big, fat point. Microsoft loves Linux. He goes on to say that 20 % of Azure is already Linux and that Microsoft will always have first - class support for Linux distros. There's not even a whiff of that old antagonism toward open source.
|
||||
|
||||
To drive the point home, there's literally a giant sign behind them that reads, "Microsoft hearts Linux." Aww. For some of us, that turnaround was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, a tech journalist and author.
|
||||
|
||||
Steven Levy: [00:17:30] When you're playing a football game and the turf becomes really slick, maybe you switch to a different kind of footwear in order to play on that turf. That's what they were doing. They can't deny reality and there are smart people there so they had to realize that this is the way the world is and put aside what they said earlier, even though they might be a little embarrassed at their earlier statements, but it would be crazy to let their statements about how horrible open source was earlier, affect their smart decisions now.
|
||||
|
||||
Saron Yitbarek: [00:18:00] Microsoft swallowed its pride in a big way. You might remember that Apple, after years of splendid isolation, finally shifted toward a partnership with Microsoft. Now it was Microsoft's turn to do a 180. After years of battling the open source approach, they were reinventing themselves. It was change or perish. Steven Vaughan-Nichols.
|
||||
|
||||
Steven Vaughan-Nichols: [00:18:30] Even a company the size of Microsoft simply can't compete with the thousands of open source developers working on all these other major projects , including Linux. They were very loath e to do so for a long time. The former Microsoft CEO, Steve Ballmer, hated Linux with a passion. Because of its GPL license, it was a cancer, but once Ballmer was finally shown the door, the new Microsoft leadership said, "This is like trying to order the tide to stop coming in. The tide is going to keep coming in. We should work with Linux, not against it."
|
||||
|
||||
Saron Yitbarek: [00:19:00] Really, one of the big wins in the history of online tech is the way Microsoft was able to make this pivot, when they finally decided to. Of course, older, hardcore Linux supporters were pretty skeptical when Microsoft showed up at the open source table. They weren't sure if they could embrace these guys, but, as Vaughan-Nichols points out, today's Microsoft simply is not your mom and dad's Microsoft.
|
||||
|
||||
Steven Vaughan-Nichols : [00:19:30] Microsoft 2017 is not Steve Ballmer's Microsoft, nor is it Bill Gates' Microsoft. It's an entirely different company with a very different approach and, again, once you start using open source, it's not like you can really pull back. Open source has devoured the entire technology world. People who have never heard of Linux as such, don't know it, but every time they're on Facebook , they're running Linux. Every time you do a Google search , you're running Linux.
|
||||
|
||||
[00:20:00] Every time you do anything with your Android phone , you're running Linux again. It literally is everywhere, and Microsoft can't stop that, and thinking that Microsoft can somehow take it all over, I think is naïve.
|
||||
|
||||
Saron Yitbarek: [00:20:30] Open source supporters might have been worrying about Microsoft coming in like a wolf in the flock, but the truth is, the very nature of open source software protects it from total domination. No single company can own Linux and control it in any specific way. Greg Kroah-Hartman is a fellow at the Linux Foundation.
|
||||
|
||||
Greg Kroah-Hartman: Every company and every individual contributes to Linux in a selfish manner. They're doing so because they want to solve a problem that they have, be it hardware isn't working , or they want to add a new feature to do something else , or want to take it in a direction that they'll build that they can use for their product. That's great, because then everybody benefits from that because they're releasing the code back, so that everybody can use it. It's because of that selfishness that all companies and all people have, everybody benefits.
|
||||
|
||||
Saron Yitbarek: [00:21:30] Microsoft has realized that in the coming cloud wars, fighting Linux would be like going to war with, well, a cloud. Linux and open source aren't the enemy, they're the atmosphere. Today, Microsoft has joined the Linux Foundation as a platinum member. They became the number one contributor to open source on GitHub. In September, 2017, they even joined the Open Source Initiative. These days, Microsoft releases a lot of its code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .net. At first, they didn't really think they'd get much back.
|
||||
|
||||
John Gossman: [00:22:00] We didn't count on contributions from the community, and yet, three years in, over 50 per cent of the contributions to the .net framework libraries, now, are coming from outside of Microsoft. This includes big pieces of code. Samsung has contributed ARM support to .net. Intel and ARM and a couple other chip people have contributed code generation specific for their processors to the .net framework, as well as a surprising number of fixes, performance improvements , and stuff — from just individual contributors to the community.
|
||||
|
||||
Saron Yitbarek: Up until a few years ago, the Microsoft we have today, this open Microsoft, would have been unthinkable.
|
||||
|
||||
[00:23:00] I'm Saron Yitbarek, and this is Command Line Heroes. Okay, we've seen titanic battles for the love of millions of desktop users. We've seen open source software creep up behind the proprietary titans, and nab huge market share. We've seen fleets of command line heroes transform the programming landscape into the one handed down to people like me and you. Today, big business is absorbing open source software, and through it all, everybody is still borrowing from everybody.
|
||||
|
||||
[00:23:30] In the tech wild west, it's always been that way. Apple gets inspired by Xerox, Microsoft gets inspired by Apple, Linux gets inspired by UNIX. Evolve, borrow, constantly grow. In David and Goliath terms, open source software is no longer a David, but, you know what? It's not even Goliath, either. Open source has transcended. It's become the battlefield that others fight on. As the open source approach becomes inevitable, new wars, wars that are fought in the cloud, wars that are fought on the open source battlefield, are ramping up.
|
||||
|
||||
Here's author Steven Levy.
|
||||
|
||||
Steven Levy: [00:24:00] Basically, right now, we have four or five companies, if you count Microsoft, that in various ways are fighting to be the platform for all we do, for artificial intelligence, say. You see wars between intelligent assistants, and guess what? Apple has an intelligent assistant, Siri. Microsoft has one, Cortana. Google has the Google Assistant. Samsung has an intelligent assistant. Amazon has one, Alexa. We see these battles shifting to different areas, there. Maybe, you could say, the hottest one is would be, whose AI platform is going to control all the stuff in our lives there, and those five companies are all competing for that.
|
||||
|
||||
Saron Yitbarek: If you're looking for another rebel that's going to sneak up behind Facebook or Google or Amazon and blindside them the way Linux blindsided Microsoft, you might be looking a long time, because as author James Allworth points out, being a true rebel is only getting harder and harder.
|
||||
|
||||
James Allworth: [00:25:30] Scale's always been an advantage but the nature of scale advantages are almost ... Whereas, I think previously they were more linear in nature, now it's more exponential in nature, and so, once you start to get out in front with something like this , it becomes harder and harder for a new player to come in and catch up. I think this is true of the internet era in general, whether it's scale like that or the importance and advantages that data bestow on an organization in terms of its ability to compete. Once you get out in front, you attract more customers, and then that gives you more data and that enables you to do an even better job, and then, why on earth would you want to go with the number two player, because they're so far behind? I think it's going to be no different in cloud.
|
||||
|
||||
Saron Yitbarek: [00:26:00] This story began with singular heroes like Steve Jobs and Bill Gates, but the progress of technology has taken on a crowdsourced, organic feel. I think it's telling that our open source hero, Linus Torvalds, didn't even have a real plan when he first invented the Linux kernel. He was a brilliant , young developer for sure, but he was also like a single drop of water at the very front of a tidal wave. The revolution was inevitable. It's been estimated that for a proprietary company to create a Linux distribution in their old - fashioned , proprietary way, it would cost them well over $ 10 billion. That points to the power of open source.
|
||||
|
||||
[00:26:30] In the end, it's not something that a proprietary model is going to compete with. Successful companies have to remain open. That's the big, ultimate lesson in all this. Something else to keep in mind: W hen we're wired together, our capacity to grow and build on what we've already accomplished becomes limitless. As big as these companies get, we don't have to sit around waiting for them to give us something better. Think about the new developer who learns to code for the sheer joy of creating, the mom who decides that if nobody's going to build what she needs, then she'll build it herself.
|
||||
|
||||
Wherever tomorrow's great programmers come from, they're always going to have the capacity to build the next big thing, so long as there's access to the command line.
|
||||
|
||||
[00:27:30] That's it for our two - part tale on the OS wars that shaped our digital lives. The struggle for dominance moved from the desktop to the server room, and ultimately into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open . Listen, I know, there are a hundred other heroes we didn't have space for in this history trip, so drop us a line. Share your story. Redhat.com/commandlineheroes. I'm listening.
|
||||
|
||||
We're spending the rest of the season learning what today's heroes are creating, and what battles they're going through to bring their creations to life. Come back for more tales — from the epic front lines of programming. We drop a new episode every two weeks. In a couple weeks' time, we bring you episode three: the Agile Revolution.
|
||||
|
||||
[00:28:00] Command Line Heroes is an original podcast from Red Hat. To get new episodes delivered automatically for free, make sure to subscribe to the show. Just search for “ Command Line Heroes ” in Apple p odcast s , Spotify, Google Play, and pretty much everywhere else you can find podcasts. Then, hit “ subscribe ” so you will be the first to know when new episodes are available.
|
||||
|
||||
I'm Saron Yitbarek. Thanks for listening. Keep on coding.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux
|
||||
|
||||
作者:[redhat][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.redhat.com
|
||||
[b]: https://github.com/lujun9972
|
@ -1,3 +1,12 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A brief history of text-based games and open source)
[#]: via: (https://opensource.com/article/18/7/interactive-fiction-tools)
[#]: author: (Jason McIntosh https://opensource.com/users/jmac)

A brief history of text-based games and open source
======
@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Is your enterprise software committing security malpractice?)
[#]: via: (https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Is your enterprise software committing security malpractice?
======
ExtraHop discovered that enterprise security and analytics software is "phoning home" and quietly uploading information to servers outside of customers' networks.
![Getty Images][1]

Back when this blog was dedicated to all things Microsoft, I routinely railed against the spying aspects of Windows 10. Well, apparently that's nothing compared to what enterprise security, analytics, and hardware management tools are doing.

An analytics firm called ExtraHop examined the networks of its customers and found that their security and analytics software was quietly uploading information to servers outside of the customers' networks. The company issued a [report and warning][2] last week.

ExtraHop deliberately chose not to name names in its four examples of enterprise security tools that were sending out data without warning the customer or user. A spokesperson for the company told me via email, "ExtraHop wants the focus of the report to be the trend, which we have observed on multiple occasions and find alarming. Focusing on a specific group would detract from the broader point that this important issue requires more attention from enterprises."

### Products committing security malpractice and secretly transmitting data offsite

[ExtraHop's report][6] found a pretty broad range of products secretly phoning home, including endpoint security software, device management software for a hospital, surveillance cameras, and security analytics software used by a financial institution. It also noted the applications may run afoul of Europe's [General Data Protection Regulation (GDPR)][7].

In every case, ExtraHop provided evidence that the software was transmitting data offsite. In one case, a company noticed that approximately every 30 minutes, a network-connected device was sending UDP traffic to a known bad IP address. The device in question was a Chinese-made security camera that was phoning home to a known malicious IP address with ties to China.

The camera was likely set up independently by an employee at the office for personal security purposes, showing the downside of shadow IT.

In the cases of the hospital's device management tool and the financial firm's analytics tool, the transmissions were violations of data security laws and could expose the companies to legal risk, even though the uploads were happening without their knowledge.

The hospital's medical device management product was supposed to use the hospital's Wi-Fi network in a way that ensured patient data privacy and HIPAA compliance. ExtraHop noticed that traffic from the workstation managing the initial device rollout was opening encrypted SSL:443 connections to vendor-owned cloud storage, a major HIPAA violation.

ExtraHop notes that while there may not be any malicious activity in these examples, it is still a violation of the law, and administrators need to keep an eye on their networks and monitor traffic for unusual activity.

"To be clear, we don't know why these vendors are phoning home data. The companies are all respected security and IT vendors, and in all likelihood, their phoning home of data was either for a legitimate purpose given their architecture design or the result of a misconfiguration," the report says.

### How to mitigate phoning-home security risks

To address this security malpractice problem, ExtraHop suggests companies do these five things (a minimal monitoring sketch follows the list):

  * Monitor for vendor activity: Watch for unexpected vendor activity on your network, whether it is an active vendor, a former vendor, or even a vendor post-evaluation.
  * Monitor egress traffic: Be aware of egress traffic, especially from sensitive assets such as domain controllers. When egress traffic is detected, always match it to approved applications and services.
  * Track deployment: While under evaluation, track deployments of software agents.
  * Understand regulatory considerations: Be informed about the regulatory and compliance considerations of data crossing political and geographic boundaries.
  * Understand contract agreements: Track whether data is used in compliance with vendor contract agreements.
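
As a minimal sketch of the "monitor egress traffic" idea, assuming a Python environment with the third-party psutil package installed; the allowlisted ranges below are illustrative placeholders, not values from ExtraHop's report:

```
# Flag established connections whose remote address is not on an approved list.
# The allowlist is a hypothetical example; populate it from your own inventory
# of approved applications and services.
import ipaddress
import psutil  # third-party: pip install psutil

ALLOWED_NETS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8",       # internal RFC 1918 space
    "203.0.113.0/24",   # example: an approved SaaS provider (placeholder range)
)]

def is_approved(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return addr.is_private or any(addr in net for net in ALLOWED_NETS)

for conn in psutil.net_connections(kind="inet"):
    # conn.raddr is empty for listening sockets; skip those
    if conn.raddr and not is_approved(conn.raddr.ip):
        print(f"unexpected egress: {conn.laddr.ip}:{conn.laddr.port} "
              f"-> {conn.raddr.ip}:{conn.raddr.port} (pid={conn.pid})")
```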

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/cybersecurity_eye-with-binary_face-recognition_abstract-eye-100751589-large.jpg
[2]: https://www.extrahop.com/company/press-releases/2019/extrahop-issues-warning-about-phoning-home/
[6]: https://www.extrahop.com/resources/whitepapers/eh-security-advisory-calling-home-success/
[7]: https://www.csoonline.com/article/3202771/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.html
@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel pulls the plug on Omni-Path networking fabric architecture)
[#]: via: (https://www.networkworld.com/article/3429614/intel-pulls-the-plug-on-omni-path-networking-fabric-architecture.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Intel pulls the plug on Omni-Path networking fabric architecture
======
Omni-Path Architecture, which Intel had high hopes for in the high-performance computing (HPC) space, has been scrapped after one generation.
![Gerd Altmann \(CC0\)][1]

Intel's battle to gain ground in the high-performance computing (HPC) market isn't going so well. The Omni-Path Architecture it had pinned its hopes on has been scrapped after one generation.

An Intel spokesman confirmed to me that the company will no longer offer Intel Omni-Path Architecture 200 (OPA200) products to customers, but it will continue to encourage customers, OEMs, and partners to use OPA100 in new designs.

"We are continuing to sell, maintain, and support OPA100. We actually announced some new features for OPA100 back at International Supercomputing in June," the spokesperson said via email.

Intel said it continues to invest in connectivity solutions for its customers and that the recent [acquisition of Barefoot Networks][3] is an example of Intel's strategy of supporting end-to-end cloud networking and infrastructure. It would not say if Barefoot's technology would be the replacement for OPA.

While Intel owns the supercomputing market, it has not been so lucky with the HPC fabric, the network that connects CPUs and memory for faster data sharing. Market leader Mellanox, with its High Data Rate (HDR) InfiniBand technology, rules the roost, and now [Mellanox is about to be acquired][4] by Intel's biggest nemesis, Nvidia.

Technically, Intel was a bit behind Mellanox. OPA100 runs at 100Gbps, and OPA200 was intended to reach 200Gbps, but Mellanox was already at 200Gbps and is set to introduce 400Gbps products later this year.

Analyst Jim McGregor isn't surprised. "They have a history where if they don't get high uptick on something and don't think it's of value, they'll kill it. A lot of times when they go through management changes, they look at how to optimize. Paul [Otellini] did this extensively. I would expect Bob [Swan, the newly minted CEO] to do that and say these things aren't worth our investment," he said.

The recent [sale of the 5G unit to Apple][5] is another example of Swan cleaning house. McGregor notes that Apple had been hounding Intel to invest more in 5G while at the same time trying to hire people away.

The writing was on the wall for OPA as far back as March, when Intel introduced Compute Express Link (CXL), a cache-coherent accelerator interconnect that basically does what OPA does. At the time, people were wondering where this left OPA. Now they know.

The problem once again is that Intel is swimming upstream. CXL competes with Cache Coherent Interconnect for Accelerators (CCIX) and OpenCAPI, the former championed by basically all of Intel's competition and the latter promoted by IBM.

All are built on PCI Express (PCIe) and bring features such as cache coherence to PCIe, which it does not have natively. Both CXL and CCIX can run on top of PCIe and co-exist with it. The trick is that the host and the accelerator must have matching support: a host with CCIX can only work with a CCIX device; there is no mixing them.

As I said, CCIX support is basically everybody but Intel: ARM, AMD, IBM, Marvell, Qualcomm, and Xilinx are just a few of its backers. CXL includes Intel, Hewlett Packard Enterprise, and Dell EMC. The sane thing to do would be to merge the two standards, take the best of both, and make one standard. But anyone who remembers the HD-DVD/Blu-ray battle of the last decade knows how likely that is.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3429614/intel-pulls-the-plug-on-omni-path-networking-fabric-architecture.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/digital_transformation_finger_tap_causes_waves_of_interconnected_digital_ripples_by_gerd_altmann_cc0_via_pixabay_1200x800-100765086-large.jpg
[3]: https://www.idgconnect.com/news/1502086/intel-delves-deeper-network-barefoot-networks
[4]: https://www.networkworld.com/article/3356444/nvidia-grabs-mellanox-out-from-under-intels-nose.html
[5]: https://www.computerworld.com/article/3411922/what-you-need-to-know-about-apples-1b-intel-5g-modem-investment.html
@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Is Perl going extinct?)
[#]: via: (https://opensource.com/article/19/8/command-line-heroes-perl)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

Is Perl going extinct?
======
Command Line Heroes explores the meteoric rise of Perl, its fall from the spotlight, and what's next in the programming language's lifecycle.
![Listen to the Command Line Heroes Podcast][1]

Is there an [endangered species][2] list for programming languages? If there is, [Command Line Heroes][3] suggests that Perl is somewhere between vulnerable and critically endangered. The dominant language of the 1990s is the focus of this week's podcast (Season 3, Episode 4), which explores its highs and lows since it was introduced over 30 years ago.

### The timeline

1991 was the year that changed everything. Tim Berners-Lee released the World Wide Web. The internet had been connecting computers for 20 years, but the web connected people in brand new ways. A whole new frontier of web-based development opened.

Last week's episode explored [how JavaScript was born][4] and launched the browser wars. Before that language dominated the web, Perl was incredibly popular. It was open source, general purpose, and ran on nearly every Unix-like platform. Perl offered a familiar set of practices that any sysadmin would appreciate.

### What happened?

So, if Perl was doing so well in the '90s, why did it start to sink? The dot-com bubble burst in 2000, and the first heady rush of web development was about to give way to a slicker, faster, different generation. Python became a favorite for first-time developers, much like Perl used to be an attractive first language that stole newbies away from FORTRAN or C.

Perl was regarded highly because it allowed developers to solve a problem in many ways, but that feature later became known as a bug. [Python's push toward a single right answer][5] ended up being where many people wanted to go. Conor Myhrvold wrote in _Fast Company_ how Python became more attractive and [what Perl might have done to keep up][6]. For these reasons, and myriad others like the delay of Perl 6, it lost momentum.
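
That "single right answer" philosophy is baked into Python itself. As a quick, runnable aside (this is the interpreter's built-in Easter egg, not anything from the podcast), the following prints the Zen of Python, which includes the line "There should be one-- and preferably only one --obvious way to do it.":

```
# Importing this module prints the Zen of Python to standard output.
import this
```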

### Lifecycle management

Overall, I'm comfortable with the idea that some languages don't make it. [BASIC][7] isn't exactly on the software bootcamp hit list these days. But maybe Perl isn't on the same trajectory and could be best-in-class for a more specific type of problem around glue code for system challenges.

I love how Command Line Heroes host [Saron Yitbarek][8] summarizes it at the end of the podcast episode:

> "Languages have lifecycles. When new languages emerge, exquisitely adapted to new realities, an option like Perl might occupy a smaller, more niche area. But that's not a bad thing. Our languages should expand and shrink their communities as our needs change. Perl was a crucial player in the early history of web development—and it stays with us in all kinds of ways that become obvious with a little history and a look at the big picture."

Learning about Perl's rise and search for a new niche makes me wonder which of the new languages we're developing today will still be around in 30 years.

Command Line Heroes will cover programming languages for all of season 3. [Subscribe so you don't miss a single one][3]. I would love to hear your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/command-line-heroes-perl

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command-line-heroes-520x292.png?itok=s_F6YEoS (Listen to the Command Line Heroes Podcast)
[2]: https://www.nationalgeographic.org/media/endangered/
[3]: https://www.redhat.com/en/command-line-heroes
[4]: https://opensource.com/article/19/7/command-line-heroes-javascript
[5]: https://opensource.com/article/19/6/command-line-heroes-python
[6]: https://www.fastcompany.com/3026446/the-fall-of-perl-the-webs-most-promising-language
[7]: https://opensource.com/19/7/command-line-heroes-ruby-basic
[8]: https://twitter.com/saronyitbarek
@ -0,0 +1,249 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to Corteza, an open source alternative to Salesforce)
[#]: via: (https://opensource.com/article/19/8/corteza-open-source-alternative-salesforce)
[#]: author: (Denis Arh https://opensource.com/users/darh)

Intro to Corteza, an open source alternative to Salesforce
======
Learn how to get Corteza up and running, and how to use it.
![Team communication, chat][1]

[Corteza][2] is an open source, self-hosted digital work platform for growing an organization's productivity, enabling its relationships, and protecting its work and the privacy of those involved. The project was developed entirely in the public domain by [Crust Technology][3]. It has four core features: customer relationship management, a low-code development platform, messaging, and a unified workspace. This article will also explain how to get started with Corteza on the command line.

### Customer relationship management

[Corteza CRM][4] is a feature-rich open source CRM platform that gives organizations a 360-degree overview of leads and accounts. It's flexible, can easily be tailored to any organization, and includes a powerful automation module to automate processes.

![Corteza CRM screenshot][5]

### Low-code development platform

[Corteza Low Code][6] is an open source [low-code development platform][7] and alternative to Salesforce Lightning. It has an intuitive drag-and-drop builder and allows users to create and deploy record-based business applications with ease. Corteza CRM is built on Corteza Low Code.

![Corteza Low Code screenshot][8]

### Messaging

[Corteza Messaging][9] is an alternative to Salesforce Chatter and Slack. It is a secure, high-performance collaboration platform that allows teams to collaborate more efficiently and communicate safely with other organizations or customers. It integrates tightly with Corteza CRM and Corteza Low Code.

![Corteza Messaging screenshot][10]

### Unified workspace

[Corteza One][11] is a unified workspace to access and run third-party web and Corteza applications. Centralized access management from a single console enables administrative control over who can see or access applications.

![Corteza One screenshot][12]

## Set up Corteza

You can set up and run the Corteza platform with a few simple command-line commands.

### Set up Docker

If Docker is already set up on the machine where you want to use Corteza, you can skip this section. (If you are using a Docker version below 18.0, I strongly encourage you to update.)

If you're not sure whether you have Docker, open your console or terminal and enter:

```
$> docker -v
```

If the response is "command not found," download and install a Docker community edition for [desktop][13] or [server or cloud][14] that fits your environment.

### Configure Corteza locally

By using Docker's command-line interface (CLI) utility **docker-compose** (which simplifies work with containers), Corteza's setup is as effortless as possible.

The following script provides the absolute minimum configuration to set up a local version of Corteza. If you prefer, you can [open this file on GitHub][15]. Note that this setup does not use persistent storage; you will need to set up container volumes for that.

```
version: '2.0'

services:
  db:
    image: percona:8.0
    environment:
      MYSQL_DATABASE: corteza
      MYSQL_USER: corteza
      MYSQL_PASSWORD: oscom-tutorial
      MYSQL_ROOT_PASSWORD: supertopsecret

  server:
    image: cortezaproject/corteza-server:latest

    # Map internal port 80 (where the Corteza API is listening)
    # to external port 10080. If you change this, make sure you
    # also change the API_BASEURL setting below.
    ports: [ "10080:80" ]

    environment:
      # Tell corteza-server where it can be reached from the outside
      VIRTUAL_HOST: localhost:10080

      # Serving the app from localhost port 20080 is not a very usual setup,
      # so this overrides the settings auto-discovery procedure (provision)
      # and uses custom values for the frontend URL base.
      PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE: http://localhost:20080

      # Database connection; make sure the username, password, and database
      # match the values in the db service.
      DB_DSN: corteza:oscom-tutorial@tcp(db:3306)/corteza?collation=utf8mb4_general_ci

  webapp:
    image: cortezaproject/corteza-webapp:latest

    # Map internal port 80 (where we serve the web application)
    # to external port 20080.
    ports: [ "20080:80" ]

    environment:
      # Where the API can be found
      API_BASEURL: localhost:10080

      # We're using one service for the API
      MONOLITH_API: 1
```

Run the services by entering:

```
$> docker-compose up
```

You'll see a stream of log lines announcing the database container initialization. Meanwhile, Corteza server will try (and retry) to connect to the database. If the database configuration (i.e., username, database, password) in the db service and the DB_DSN value do not match, you'll see errors here.

When Corteza server connects, it initializes the "store" (for uploaded files), and the settings-discovery process tries to auto-discover as much as possible. (You can help by setting the **VIRTUAL_HOST** and **PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE** variables just right for your environment.)

When you see "Starting HTTP server with REST API," Corteza server is ready for use.

#### Troubleshooting

If you misconfigure **VIRTUAL_HOST**, **API_BASEURL**, or **PROVISION_SETTINGS_AUTH_FRONTEND_URL_BASE**, your setup will most likely be unusable. The easiest fix is bringing all services down (**docker-compose down**) and back up (**docker-compose up**) again, but this will delete all data. See "Splitting services" below if you want to make it work without this purge-and-restart approach.

### Log in and explore

Open <http://localhost:20080> in your browser, and give Corteza a try.

First, you'll see the login screen. Follow the sign-up link and register. Corteza auto-promotes the first user to the administrator role. You can explore the administration area and the messaging and low-code tools with the support of the [user][16] and [administrator][17] guides.

### Run in the background

If you're not familiar with **docker-compose**, you can bring up the services with the **-d** flag to run them in the background. You can still access service logs with the **docker-compose logs** command if you want.
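
For example, assuming the service names from the compose file above:

```
$> docker-compose up -d            # start all services in the background
$> docker-compose logs -f server   # follow the Corteza server logs
```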

## Expose Corteza to your internal network and the world

If you want to use Corteza in production and with other users, take a look at Corteza's [simple][18] and [advanced][19] deployment setup examples.

### Establish data persistency

If you use one of the simple or advanced examples, you can persist your data by uncommenting one of the volume line pairs.

If you want to store data on your local filesystem, you might need to pay special attention to file permissions. Review the logs when starting up the services for any related errors. A sketch of a named-volume setup follows.
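
As a rough illustration only (the volume name is a hypothetical placeholder; the official examples linked above are the authoritative reference), a named volume for the database could look like this:

```
version: '2.0'

services:
  db:
    image: percona:8.0
    environment:
      MYSQL_DATABASE: corteza
      MYSQL_USER: corteza
      MYSQL_PASSWORD: oscom-tutorial
      MYSQL_ROOT_PASSWORD: supertopsecret
    volumes:
      # Persist the MySQL data directory so data survives container restarts
      - dbdata:/var/lib/mysql

# Named volumes are declared at the top level of the compose file
volumes:
  dbdata: {}
```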

### Proxy requests to containers

Both server and web-app containers listen on port 80 by default. If you want to expose them to the outside world, you need to use a different outside port. Unfortunately, that is not very user-friendly: we do not want to tell users to access Corteza on (for example) **corteza.example.org:31337**; they should reach it directly on **corteza.example.org**, with the API served from **api.corteza.example.org**.

The easiest way to do this is with another Docker image: **[jwilder/nginx-proxy][20]**. You can find a [configuration example][21] in Corteza's docs. When you spin up an **nginx-proxy** container, it listens for Docker events (e.g., when a container starts or stops), reads values from the container's **VIRTUAL_HOST** variable, and creates an [Nginx][22] configuration that proxies requests directed at the domain configured with **VIRTUAL_HOST** to that container.
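
A minimal sketch of that wiring (the hostname is a placeholder; see the configuration example linked above for the full setup):

```
version: '2.0'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: [ "80:80" ]
    volumes:
      # nginx-proxy watches container starts and stops through the Docker socket
      - /var/run/docker.sock:/tmp/docker.sock:ro

  webapp:
    image: cortezaproject/corteza-webapp:latest
    environment:
      # nginx-proxy reads this and generates an Nginx config that proxies
      # requests for this domain to the webapp container
      VIRTUAL_HOST: corteza.example.org
```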

### Secure HTTP

Corteza server speaks only plain HTTP (and HTTP 2.0). It does not do any SSL termination, so you must set up a (reverse) proxy that handles HTTPS traffic and redirects it to the internal HTTP ports.

If you use **jwilder/nginx-proxy** as your front end, you can use another image, **[jrcs/letsencrypt-nginx-proxy-companion][23]**, to take care of SSL certificates from Let's Encrypt. It listens for Docker events (in a similar way as **nginx-proxy**) and reads the **LETSENCRYPT_HOST** variable.
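
Extending the proxy sketch above (again with a placeholder hostname and email; the companion's own documentation is the authoritative reference):

```
version: '2.0'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports: [ "80:80", "443:443" ]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  # Companion container that obtains and renews Let's Encrypt certificates
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    # Share the certificate and vhost directories with nginx-proxy
    volumes_from: [ "nginx-proxy" ]

  webapp:
    image: cortezaproject/corteza-webapp:latest
    environment:
      VIRTUAL_HOST: corteza.example.org
      # The companion reads these and requests a certificate for the domain
      LETSENCRYPT_HOST: corteza.example.org
      LETSENCRYPT_EMAIL: admin@corteza.example.org
```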

### Configuration

An ENV file holds configuration values. See the [docs][24] for details. There are two levels of configuration: ENV variables, and settings stored in the database.

### Authentication

Along with its internal authentication capabilities (with usernames and encrypted passwords stored in the database), Corteza supports registration and authentication with GitHub, Google, Facebook, LinkedIn, or any other [OpenID Connect][25] (OIDC)-compatible identity provider.

You can add any [external OIDC provider][26] using auto-discovery or by explicitly setting your key and secret. (Note that GitHub, Google, Facebook, and LinkedIn require you to register an application and provide a redirection link.)

### Splitting services

If you expect more load or want to separate services to fine-tune your containers, follow the [advanced deployment][19] example, which has more of a microservice-type architecture. It still uses a single database, but the server can be split into three parts.

### Other types of setup

In the future, Corteza will be available for different distributions, appliances, container types, and more.

If you have special requirements, you can always build Corteza as a backend service and the web applications [from source][27].

## Using Corteza

Once you have Corteza up and running, you can start using all its features. Here is a list of recommended actions.

### Log in

Enter Corteza at, for example, your **corteza.example.org** link. You'll see the login screen.

![Corteza Login screenshot][28]

As mentioned above, Corteza auto-promotes the first user to the administrator role. If you haven't yet, explore the administration area, messaging, and low-code tools.

### Other tasks to get started

Here are some other tasks you'll want to do when you're setting up Corteza for your organization.

  * Share the login link with others who will work in your Corteza instance so that they can sign up.
  * Create roles, if needed, and assign users to the right roles. By default, only admins can do this.
  * Corteza CRM has a complete list of modules. You can enter the CRM admin page to fine-tune the CRM to your needs or just start using it with the defaults.
  * Enter Corteza Low Code and create a new low-code app from scratch.
  * Create public and private channels in Corteza Messaging for your team. (For example, a public "General" or a private "Sales" channel.)

## For more information

If you or your users have any questions—or would like to contribute—please join the [Corteza Community][29]. After you log in, please introduce yourself in the #Welcome channel.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/corteza-open-source-alternative-salesforce

作者:[Denis Arh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/darh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://www.cortezaproject.org/
[3]: https://www.crust.tech/
[4]: https://cortezaproject.org/technology/core/corteza-crm/
[5]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-crm-1.png (Corteza CRM screenshot)
[6]: https://cortezaproject.org/technology/core/corteza-low-code/
[7]: https://en.wikipedia.org/wiki/Low-code_development_platform
[8]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-low-code-2.png (Corteza Low Code screenshot)
[9]: https://cortezaproject.org/technology/core/corteza-messaging/
[10]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-messaging-1.png (Corteza Messaging screenshot)
[11]: https://cortezaproject.org/technology/core/corteza-one/
[12]: https://opensource.com/sites/default/files/uploads/screenshot-cortezaone-unifiedworkspace.png (Corteza One screenshot)
[13]: https://www.docker.com/products/docker-desktop
[14]: https://hub.docker.com/search?offering=community&type=edition
[15]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/example-2019-07-os.com/docker-compose.yml
[16]: https://cortezaproject.org/documentation/user-guides/
[17]: https://cortezaproject.org/documentation/administrator-guides/
[18]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/simple.md
[19]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/advanced.md
[20]: https://github.com/jwilder/nginx-proxy
[21]: https://github.com/cortezaproject/corteza-docs/blob/master/deploy/docker-compose/nginx-proxy/docker-compose.yml
[22]: http://nginx.org/
[23]: https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion
[24]: https://github.com/cortezaproject/corteza-docs/tree/master/config
[25]: https://en.wikipedia.org/wiki/OpenID_Connect
[26]: https://github.com/cortezaproject/corteza-docs/blob/master/config/ExternalAuth.md
[27]: https://github.com/cortezaproject
[28]: https://opensource.com/sites/default/files/uploads/screenshot-corteza-login.png (Corteza Login screenshot)
[29]: https://latest.cortezaproject.org/
@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (WAN Transformation: It’s More Than SD-WAN)
[#]: via: (https://www.networkworld.com/article/3430638/wan-transformation-it-s-more-than-sd-wan.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)

WAN Transformation: It’s More Than SD-WAN
======
Tomorrow’s networking challenges will extend beyond the capabilities of SD-WAN alone. Here’s why, and how you can prepare your network.
![metamorworks][1]

As an IT leader, you’re expected to be the technology vanguard of your organization. It is you who must deflate technology hype and devise the technology plan that keeps the organization competitive.

Addressing the WAN is, of course, essential to that plan. The high costs and limited agility of legacy MPLS-based networks are well known. What’s less clear is how to transform the enterprise network in a way that will remain agile and efficient for decades to come.

Many mistakenly assume [SD-WAN][2] to be that transformation. After all, SD-WAN brings the agility, scalability, and cost efficiencies lacking in telco-managed MPLS services. But while a critical step, SD-WAN alone is insufficient to address the networking challenges you’re likely to face today — and tomorrow. Here’s why.

### SD-WAN Cannot Fix the Unpredictability of the Global Internet

Enterprise networks are nothing if not predictable. Yet to realize their benefits, SD-WANs must rely on the unpredictable public Internet, a crapshoot that meets enterprise requirements one day and is wildly amiss the next. There’s simply no way to anticipate [exceptional Internet events][3]. And with global Internet connections, you don’t even need to wait for the unusual. At Cato, we routinely see [how latency across our private backbone can halve the latency of similar Internet routes][4], a fact confirmed by numerous third-party sources.

![][5]

### SD-WAN Lacks Security, Yet Security Is Required Everywhere

Is it any wonder [SD-WAN vendors][6] partner with legacy telcos? But telcos too often come with a last-mile agenda, locking you into specific providers. Cost and support models are also designed for the legacy business, not the digital one.

It’s no secret that with Internet access you need the advanced security protection of a next-generation firewall, IPS, and the rest of today’s security stack. It's also no secret that SD-WAN lacks advanced security. How, then, will you provide branch locations with [secure Direct Internet Access (DIA)][7]?

Deploying branch security appliances will complicate the network, running counter to your goal of creating a leaner, more agile infrastructure. Appliances, whether physical or virtual ([VNFs in an NFV architecture][8]), must be maintained. New software patches must be tested, staged, and deployed. As traffic loads grow or compute-intensive features are enabled, such as TLS inspection, the security appliance’s compute requirements increase, ultimately forcing unplanned hardware upgrades.

Cloud security services can avoid those problems. But too often they only inspect Internet traffic, not site-to-site traffic, forcing IT to maintain and coordinate separate security policies, complicating troubleshooting and deployment.

### SD-WAN Does Not Extend Well to the Cloud, Mobile Users, or the Tools of Tomorrow

Then there’s the problem of the new tenants of the enterprise. SD-WAN is an MPLS replacement; it doesn’t extend naturally to the cloud, today’s destination for most enterprise traffic. And mobile users are completely beyond SD-WAN’s scope, requiring separate connectivity and security infrastructure that too often disrupts the mobile experience and fragments visibility, complicating troubleshooting and management.

Just over the horizon are IoT devices, not to mention the developments we can’t even foresee. In many cases, installing appliances won’t be possible. How will your SD-WAN accommodate these developments without compromising on the operational agility and efficiencies demanded by the digital business?

### It’s Time to Evolve the Network Architecture

Continuing to solve network challenges in parts — an MPLS service here, remote access VPN there, and a sprinkling of cloud access solutions, routers, firewalls, WAN optimizers, and sensors — only keeps complicating the enterprise network, ultimately restricting how much cost can be saved or operational efficiency gained. SD-WAN-only solutions are symptomatic of this segmented thinking, solving only a small part of the enterprise’s far bigger networking challenge.

What’s needed isn’t another point appliance or another network. What’s needed is **one network** that connects **and** secures **all** company resources worldwide without compromising on cost or performance. This is an architectural issue, one that can’t be solved by repackaging multiple appliances as a network service. Such approaches lead to inconsistent services, poor manageability, and high latency — a fact that Gartner notes in its recent [Hype Cycle for Enterprise Networking][9].

### Picture the Network of the Future

What might this architecture look like? At its basis, think of collapsing MPLS, VPN, and all other networks into **one** global, private, managed backbone available from anywhere to anywhere. Such a network would connect all edges — sites, cloud resources, and mobile devices — with far better performance than the Internet at far lower cost than MPLS services.

Such a vision is possible today, in fact, due to two trends: the massive investment in global IP capacity and advancements in high-performance, commercial off-the-shelf (COTS) hardware.

#### Connect
The Points of Presence (PoPs) comprising such a backbone would interconnect using SLA-backed IP connections across multiple provider networks. By connecting PoPs across multiple networks, the backbone would offer better performance and resiliency than any one underlying network. It would, in effect, bring the power of SD-WAN to the backbone core.
|
||||
|
||||
The cloud-native software would execute all major networking and security functions normally running in edge appliances. WAN optimization, dynamic path selection, policy-based routing, and more would move to the cloud. The PoPs would also monitor the real-time conditions of the underlying networks, routing traffic, including cloud traffic, along the optimum path to the PoP closest to the destination.
|
||||
|
||||
With most processing done by the PoP, connecting any type of “edge” — site, cloud resources, mobile devices, IoT devices, and more — would become simple. All that’s needed is a small client, primarily to establish an encrypted tunnel across an Internet connection to the nearest PoP. By colocating PoP and cloud IXPs in the same physical data centers, cloud resources would implicitly become part of your new optimized, corporate network. All without deploying additional software or hardware.
|
||||
|
||||
#### Secure
|
||||
|
||||
To ensure security, all traffic would only reach an “edge” after security inspection that runs as part of the cloud-native software. Security services would include next-generation firewall, secure web gateway, and advanced threat protection. By running in the PoPs, security services benefit from scalability and elasticity of the cloud, which was never available as appliances.
|
||||
|
||||
With the provider running the security stack, IT would be freed from its security burden. Security services would always be current without the operational overhead of appliances or investing in specialized security skills. Inspecting all enterprise traffic by one platform means IT needs only one set of security policies to protect all users. Overall, security is made simpler, and mobile users and cloud resources no longer need to remain second-class citizens.
|
||||
|
||||
#### Run

Without tons of specialized appliances to deploy, the network would be much easier to run and manage. A single pane of glass would give the IT manager end-to-end visibility across networking and security domains, without the myriad sensors, agents, and normalization tools needed today for that kind of capability.

### One Architecture, Many Benefits

Such an approach addresses the gamut of networking challenges facing today’s enterprises. Connectivity costs would be slashed. Latency would rival that of global MPLS, but with far better throughput thanks to built-in network optimization, which would be available inside — and outside — sites. Security would be pervasive, easy to maintain, and effective.

This architecture isn’t just a pipe dream. Hundreds of companies across the globe today realize these benefits every day by relying on such a platform from [Cato Networks][10]. It’s a secure, global, managed SD-WAN service powered by the scalability, self-service, and agility of the cloud.

### The Time is Now

WAN transformation is a rare opportunity for IT leaders to profoundly improve the business’ ability to operate better tomorrow and for decades to come. SD-WAN is a piece of that vision, but only a piece. Addressing the entire network challenge (not just a part of it) to accommodate the needs you can anticipate — and the ones you can’t — will go a long way toward demonstrating the effectiveness of your IT leadership.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430638/wan-transformation-it-s-more-than-sd-wan.html

作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/istock-1127447341-100807620-large.jpg
[2]: https://www.catonetworks.com/glossary-use-cases/sd-wan?utm_source=IDG&utm_campaign=IDG
[3]: https://arstechnica.com/information-technology/2019/07/facebook-cloudflare-microsoft-and-twitter-suffer-outages/
[4]: https://www.catonetworks.com/blog/the-internet-is-broken-heres-why?utm_source=IDG&utm_campaign=IDG
[5]: https://images.idgesg.net/images/article/2019/08/capture-100807619-large.jpg
[6]: https://www.topsdwanvendors.com?utm_source=IDG&utm_campaign=IDG
[7]: https://www.catonetworks.com/glossary-use-cases/secure-direct-internet-access?utm_source=IDG&utm_campaign=IDG
[8]: https://www.catonetworks.com/blog/the-pains-and-problems-of-nfv?utm_source=IDG&utm_campaign=IDG
[9]: https://www.gartner.com/en/documents/3947237/hype-cycle-for-enterprise-networking-2019
[10]: https://www.catonetworks.com?utm_source=IDG&utm_campaign=IDG

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why fear of failure is a silent DevOps virus)
[#]: via: (https://opensource.com/article/19/8/why-fear-failure-silent-devops-virus)
[#]: author: (Willy-Peter Schaub, Justin Kearns https://opensource.com/users/wpschaub https://opensource.com/users/bclaster https://opensource.com/users/juliegund https://opensource.com/users/kearnsjd https://opensource.com/users/ophir https://opensource.com/users/willkelly)

Why fear of failure is a silent DevOps virus
======

In the context of software development, fail fast is innocuous to DevOps.

![gears and lightbulb to represent innovation][1]

Do you recognize the following scenario? I do, because a manager once stifled my passion and innovation to the point where I was anxious about making decisions, taking risks, and focusing on what's important: "_uncovering better ways of developing software by doing it and helping others do it_" ([Agile Manifesto, 2001][2]).

> **Developer:** "_The UX hypothesis failed. Users did not respond well to the new navigation experience, resulting in 80% of users switching back to the classic navigation._"
>
> **Manager:** "_This is really bad! How is this possible? We need to fix this because I'm not sure that our stakeholders want to hear about your failure._"

Here is a different, more powerful response.

> **Leader:** "What are the probable causes for our hypothesis failing, and how can we improve the user experience? Let us analyze and share our remediation plan with our stakeholders."

It is all about a tone centered on a constructive, blameless mindset.

There are various types of fear that paralyze people, who then infect their team. Some fear that nothing is ever enough, pushing themselves to do more and more, viewing feedback as unfavorable, and often burning themselves out. They work hard, not smart—delivering volume, not value.

Others fear being judged, compare themselves with others, and shy away from accountability. They seldom share their knowledge, passion, or work; instead of vibrant collaboration, they find themselves wandering through a ghost ship filled with skeletons and foul fish.

> _"The only thing we have to fear is fear itself."_ – Franklin D. Roosevelt

_Fear of failure_ is rife in many organizations, especially those that have embarked on a digital transformation journey. It's caused by the undesirability of failure, knowledge of repercussions, and lack of appetite for validated learning.

This is a strange phenomenon because when we look at the Manifesto for Agile Software Development, we notice references to "customer collaboration" and "responding to change." Lean thinking promotes principles such as "optimize the whole," "eliminate waste," "create knowledge," "build quality in," and "respect people." Also, two of the [Kanban principles][3] mention "visualize work" and "continuous improvement."

I have a hypothesis:

> "_I believe an organization will embrace failure if we elaborate the benefit of failure in the context of software engineering to all stakeholders._"

Let's look at a traditional software development lifecycle that strives for quality, is based on strict processes, and is sensitive to failure. We design, develop, and test all features in sequence. The solution is released to the customer when QA and security give us the "thumbs up," followed by a happy user (success) or an unhappy user (failure).

![Traditional software development lifecycle][4]

With the traditional model, we have one opportunity to fail or succeed. This is probably an effective model if we are building a sensitive product such as a multimillion-dollar rocket or aircraft. Context is important!

Now let's look at a more modern software development lifecycle that strives for quality and _embraces failure_. We design, build, test, and ship a limited release to our users for preview. We get feedback. If the user is happy (success), we move to the next feature. If the user is unhappy (failure), we either improve or scrap the feature based on validated feedback.

![Modern software development lifecycle][5]

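In code form, the modern loop looks roughly like the following Python sketch; the feature names and the feedback stand-in are invented for illustration.

```python
# Toy model of the iterative lifecycle above: each feature gets its own
# chance to fail fast before the full product ships. Names are invented.
features = ["search", "checkout", "profile"]

def users_happy(feature: str) -> bool:
    # Stand-in for validated user feedback on a limited preview release.
    return feature != "checkout"

for feature in features:
    print(f"design/build/test '{feature}', ship a limited preview")
    if users_happy(feature):
        print(f"'{feature}' validated; move on to the next feature")
    else:
        print(f"'{feature}' failed fast; improve or scrap it, keep the learning")
```
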
Note that we have a minimum of one opportunity to fail per feature, giving us a minimum of 10 opportunities to improve our product, based on validated user feedback, before we release the equivalent product. Essentially, this modern approach is a repetition of the traditional approach, but broken down into smaller release cycles. We cannot reduce the effort to design, develop, and test our features, but we can learn and improve the process. You can take this software engineering process a few steps further:

  * **Continuous delivery** (CD) aims to deliver software in short cycles and releases features reliably, one at a time, at the click of a button by the business or user.
  * **Test-driven development** (TDD) is a software engineering approach that creates many debates among stakeholders in business, development, and quality assurance. It relies on short and repetitive development cycles, each one crafting test cases based on requirements, watching them fail, and then developing just enough software to make them pass (see the sketch after this list).
  * [**Hypothesis-driven development**][6] (HDD) is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown unknowns. When the hypothesis fails, we pivot. When it passes, we focus on the next experiment. Like TDD, it is based on very short repetitions to explore and react to validated learning.

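To make the TDD rhythm concrete, here is a minimal red-green sketch in Python. The `slugify` function and its requirement are invented for illustration; in practice the test is written first and fails before the implementation exists.

```python
# Minimal TDD sketch (invented example): write the test first, watch it
# fail, then implement just enough code to make it pass.
import unittest

def slugify(title: str) -> str:
    # Implementation written *after* the test below failed against a stub.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Fail Fast Learn Faster"),
                         "fail-fast-learn-faster")

if __name__ == "__main__":
    unittest.main()
```
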
Yes, I am confident that failure is not bad. In fact, it is an enabler for innovation and continuous learning. It is important to emphasize that we need to embrace failure in the form of _fail fast_, which means that we slice our product into small units of work that can be developed and delivered as value, quickly and reliably. When we fail, the waste and impact must be minimized and the validated learning maximized.

To avoid the fear of failure among engineers, all stakeholders in an organization need to trust the engineering process and embrace failure. The best antidote is supportive and inspiring leaders, along with a collective, blameless mindset for planning, prioritizing, building, releasing, and supporting. We should not be reckless or oblivious to the impact of failure, especially when it affects investors or livelihoods.

We cannot be afraid of failure when we're developing software. Otherwise, we will stifle innovation and evolution, which in turn suffocates the union of people, process, and continuous delivery of value, key ingredients of DevOps as defined by [Donovan Brown][7]:

> _"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users."_

* * *

_Special thanks to Anton Teterine and Alex Bunardzic for sharing their perspectives on fear._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/why-fear-failure-silent-devops-virus

作者:[Willy-Peter Schaub, Justin Kearns][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/wpschaub https://opensource.com/users/bclaster https://opensource.com/users/juliegund https://opensource.com/users/kearnsjd https://opensource.com/users/ophir https://opensource.com/users/willkelly
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://agilemanifesto.org/
[3]: https://www.agilealliance.org/glossary/kanban
[4]: https://opensource.com/sites/default/files/uploads/waterfall.png (Traditional software development lifecycle)
[5]: https://opensource.com/sites/default/files/uploads/agile_1.png (Modern software development lifecycle)
[6]: https://opensource.com/article/19/6/why-hypothesis-driven-development-devops
[7]: http://donovanbrown.com/post/what-is-devops

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A data transmission revolution is underway)
[#]: via: (https://www.networkworld.com/article/3429610/a-data-transmission-revolution-is-underway.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

A data transmission revolution is underway
======

Two radical data transmission ideas are being developed. One uses trions instead of electrons to transmit information, and the other replaces the silicon in semiconductors with other compounds.

![Getty Images][1]

Radical data communications technologies are in development in a slew of academic scientific labs around the world. While we’ve already seen, and gotten used to, a shift from data being sent over copper wire to light-based, fiber-optic channels (and the resulting capacity and speed increases), much of the thrust by engineers today is in the area of semiconductor improvements, in part to augment those pipes.

The work includes [a potential overall shift to photons and light, not wires, on chips][2], and even more revolutionary ideas, such as the abandonment of not only silicon but also the traditional electron.

**[ Also read: [Who's developing quantum computers?][3] ]**

### Electrons aren’t capacious enough

“Most electronics today use individual electrons to conduct electricity and transmit information,” writes Iqbal Pittalwala [on the University of California at Riverside news website][4]. Trions, though, are better than electrons for data transmission, the physicists at that university claim.

Why? Electrons, the incumbents, are charged subatomic particles with a surrounding electrical field. They carry electricity and information. Gate-tunable trions from the quantum family, however, are a spinning, charged combination of two electrons and one hole, or two holes and one electron, depending on polarity, the researchers explain. More particles, in other words — enough to carry greater amounts of information than a single electron.

A trion contains three interacting particles, allowing it to carry much more information than a single electron, the researchers say.

“Just like increasing your Wi-Fi bandwidth at home, trion transmission allows more information to come through than individual electrons,” says Erfu Liu in the article. Liu is the first author of the [research paper][5] about the work being done.

The researchers plan to test dark trions (harder to work with than light trions, but with more capacity) to transport quantum information. It could revolutionize information transmission, the group says.

### Dump silicon

Separately, scientists at Cardiff University’s Institute for Compound Semiconductors are taking an alternative approach to gaining speed and capacity at the semiconductor level. They aim to replace silicon with compounds of other elements, the team explains in a press release.

The compound semiconductors they’re working on are like silicon, but they come from elements on either side of silicon on the periodic table, the institute explains in a video presentation of its work. The properties on the wafer are different and thus enable new technologies. Some compound semiconductors are already used in smartphones and other newer technology, but the group says much more can be done in this area.

“Extremely low excess noise and high-sensitivity avalanche photodiodes [have] the potential to yield a new class of high-performance receivers,” says Diana Huffaker, a professor at Cardiff University’s Institute for Compound Semiconductors, [on the school’s website][6]. That technology is geared toward applications in fast networking and sensing environments.

The avalanche photodiodes (APDs) that the institute is building create less noise than silicon. APDs are semiconductors that convert light into electricity. Autonomous vehicles’ LIDAR (Light Detection and Ranging) is one use. LIDAR is a way for the vehicle to sense where it is, and it needs very fast communications. “The innovation lies in the advanced materials development to ‘grow’ the compound semiconductor crystal in an atom-by-atom regime,” Huffaker says in the article. Special reactors are needed to do it.

Players are noticing. Huffaker says Airbus may implement the APD technology in a “future free space optics communication system.” Airbus is behind the design and build-out of OneWeb’s planned internet backbone in space. Space laser systems, coming in due course, will have the advantage of performing capacious data-sends without the hindrance of interfering air or other earthly environmental limitations — such as digging trenches, making NIMBY-prone planning applications, or implementing latency-increasing repeaters.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3429610/a-data-transmission-revolution-is-underway.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/11/3_industrial-iot_solar-power-panels_energy_network_internet-100779353-large.jpg
[2]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
[3]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[4]: https://news.ucr.edu/articles/2019/07/09/physicists-finding-could-revolutionize-information-transmission
[5]: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.027401
[6]: https://www.cardiff.ac.uk/news/view/1527841-cardiff-in-world-beating-cs-breakthrough
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scrum vs. kanban: Which agile methodology is better?)
[#]: via: (https://opensource.com/article/19/8/scrum-vs-kanban)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)

Scrum vs. kanban: Which agile methodology is better?
======

Learn the differences between scrum and kanban and which may be best for your team.

![Team checklist and to dos][1]

Because scrum and kanban both fall under the agile methodology umbrella, many people confuse them or think they're the same thing. There are differences, however. For one, scrum is more specific to software development teams, while kanban is used by many kinds of teams and focuses on providing a visual representation of an agile team's workflow. Some argue that kanban is about getting things done, and scrum is about talking about getting things done.

### A history lesson

Before we get too deep into scrum and kanban, let's talk a little history. Before scrum, kanban, and agile, there was the waterfall model. It was popular in the '80s and '90s, especially in civil and mechanical engineering where changes were rare and design often stayed the same. It was adopted for software development, but it didn't translate well into that arena, with results rarely as anyone expected or desired.

In 2001, the [Agile Manifesto][2] emerged as an alternative to overcome the problems with waterfall. The Manifesto outlined agile principles and beliefs including shorter lead times, open communication, lighter processes, continuous training, and adaptation to change. These principles took on a life of their own when it came to software development practices and teams. In cases of irregularities, bugs, or dissatisfied customers, agile enabled development teams to make changes quickly, and software was released faster with much higher quality.

### What is agile?

An agile framework (or just agile) is an umbrella term for several iterative and incremental software development approaches such as kanban and scrum. Kanban and scrum are also considered to be agile frameworks on their own. As [Mendix explains][3]:

> "While each agile methodology type has its own unique qualities, they all incorporate elements of iterative development and continuous feedback when creating an application. Any agile development project involves continuous planning, continuous testing, continuous integration, and other forms of continuous development of both the project and the application resulting from the agile framework."

### What is kanban?

[Kanban][4] is the Japanese word for "visual signal." It is also an agile framework or work management system and is considered to be a powerful project management tool.

A kanban board (such as [Wekan][5], an open source kanban application) is a visual method for managing the creation of products through a series of fixed steps. It emphasizes continuous flow and is designed as a list of stages displayed in columns on a board. There is a waiting or backlog stage at the start of the kanban board, and there may be some progress stages, such as testing, development, completed, or abandoned.

![Wekan kanban board][6]

Each task or part of a project is represented on a card, and the cards are moved across this board as they progress across the stages. A card's current stage must be completed before it can be moved to the next stage.

Other features of kanban include color-coding (to identify different stages or types of tasks visually) and Work in Progress ([WIP][7]) limits (to restrict the maximum number of work items allowed in the different stages of the workflow).

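To see how WIP limits gate flow, consider this small Python sketch of a board; the stages, limits, and card names are arbitrary examples, not Wekan's data model.

```python
# Rough sketch of a kanban board with WIP limits; stages and limits
# are arbitrary examples.
WIP_LIMITS = {"backlog": None, "development": 3, "testing": 2, "done": None}
board = {stage: [] for stage in WIP_LIMITS}
board["backlog"] = ["card-1", "card-2", "card-3", "card-4"]

def move(card: str, src: str, dst: str) -> bool:
    limit = WIP_LIMITS[dst]
    if limit is not None and len(board[dst]) >= limit:
        return False  # WIP limit reached: finish work in `dst` first
    board[src].remove(card)
    board[dst].append(card)
    return True

move("card-1", "backlog", "development")
move("card-2", "backlog", "development")
print(move("card-3", "backlog", "development"))  # True (limit is 3)
print(move("card-4", "backlog", "development"))  # False, limit reached
```
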
Wekan is [similar to Trello][8] (a proprietary kanban application). It's one of [a variety][9] of digital kanban tools. Teams can also use the traditional kanban approach: a wall, a board, or a large piece of paper with different colored sticky notes for various tasks. Whatever method you use, the idea is to apply agile effectively, efficiently, and continuously.

Overall, kanban and Wekan offer a simple, graphical way of monitoring progress, sharing responsibility, and mitigating bottlenecks. It is a team effort to ensure that the final product is created with high quality and to the customers' satisfaction.

### What is scrum?

[Scrum][10] typically involves daily standups and sprints with sprint planning, sprint reviews, and retrospectives. It establishes specific release days and rules for how cards can move across the board. There are daily scrums and two- to four-week sprints (putting code into production) with the goal of creating a shippable product after every sprint.

![team_meeting_at_board.png][11]

Daily stand-up meetings allow team members to share progress. (Photo credit: Andrea Truong)

Scrum teams are usually composed of a scrum master, a product owner, and the development team. All must operate in synchronicity to produce high-quality software products in a fast, efficient, cost-effective way that pleases the customer.

### Which is better: scrum or kanban?

With all that as background, the important question we are left with is: Which agile methodology is superior, kanban or scrum? Well, it depends. It is certainly not a straightforward or easy choice, and neither method is inherently superior. The type of team and the project's scope or requirements influence which is likely to be the better choice.

Software development teams typically use scrum because it has been found to be highly useful in the software lifecycle process.

Kanban can be used by all kinds of teams—IT, marketing, HR, transformation, manufacturing, healthcare, finance, etc. Its core values are continuous workflow, continuous feedback, continuous change, and stir vigorously until you achieve the desired quality and consistency or create a shippable product. The team works from the backlog until all tasks are completed. Usually, members will pick tasks based on their specialized knowledge or area of expertise, but the team must be careful not to reduce its effectiveness with too much specialization.

### Conclusion

There is a place for both scrum and kanban agile frameworks, and their utility is determined by the makeup of the team, the product or service to be delivered, the requirements or scope of the project, and the organizational culture. There will be trial and error, especially for new teams.

Scrum and kanban are both iterative work systems that rely on process flows and aim to reduce waste. No matter which framework your team chooses, you will be a winner. Both methodologies are valuable now and likely will be for some time to come.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/scrum-vs-kanban

作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://www.scrumalliance.org/resources/agile-manifesto
[3]: https://www.mendix.com/agile-framework/
[4]: https://en.wikipedia.org/wiki/Kanban
[5]: https://wekan.github.io/
[6]: https://opensource.com/sites/default/files/uploads/wekan-board.png (Wekan kanban board)
[7]: https://www.atlassian.com/agile/kanban/wip-limits
[8]: https://opensource.com/article/19/1/productivity-tool-wekan
[9]: https://opensource.com/alternatives/trello
[10]: https://en.wikipedia.org/wiki/Scrum_(software_development)
[11]: https://opensource.com/sites/default/files/uploads/team_meeting_at_board.png (team_meeting_at_board.png)

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Goodbye, Linux Journal)
[#]: via: (https://opensource.com/article/19/8/goodbye-linux-journal)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall https://opensource.com/users/scottnesbitt https://opensource.com/users/alanfdoss)

Goodbye, Linux Journal
======

Linux Journal's coverage from 1994 to 2019 highlighted Linux’s rise to an enterprise platform that runs a majority of the world’s servers and services.

![Linux keys on the keyboard for a desktop computer][1]

I first discovered Linux in 1993, when I was an undergraduate physics student who wanted the power of Big Unix on my home PC. I remember installing my first Linux distribution, SoftLanding Systems (SLS), and exploring the power of Linux on my ‘386 PC. I was immediately impressed. Since then, I’ve run Linux at home—and even at work.

In those early days, it felt like I was the only person who knew about Linux. Certainly, there was an online community via Usenet, but there weren’t many other ways to get together with other Linux users—unless you had a local Linux User Group in your area. I shared what I knew about Linux with those around me, and we pooled our Linux fu.

So, it was awesome to learn about a print magazine that was dedicated to all things Linux. In March 1994, Phil Hughes and Red Hat co-founder Bob Young published a new magazine about Linux, named _Linux Journal_. The [first issue][2] featured an "[Interview With Linus, The Author of Linux][3]" by Robert Young, and an article comparing "[Linux Vs. Windows NT and OS/2][4]" by Bernie Thompson.

From the start, _Linux Journal_ aimed to be a community-driven magazine. Hughes and Young were not the only contributors to the magazine. Instead, they invited others to write about Linux and share what they had learned. In a way, _Linux Journal_ used a model similar to open source software. Anyone could contribute, and the editors acted as "maintainers" to ensure content was top quality and informative.

_Linux Journal_ also went for a broad audience. The editors realized that a purely technical magazine would lose too many new users, while a magazine written for "newbies" would not attract a more focused audience. In the first issue, [Hughes highlighted][5] both groups of users as the audience _Linux Journal_ was looking for, writing: "We see this part of our audience as being two groups. Lots of the current Linux users have worked professionally with Unix. The other segment is the DOS user who wants to upgrade to a multi-user system. With a combination of tutorials and technical articles, we hope to satisfy the needs of both these groups."

I was glad to discover _Linux Journal_ in those early days, and I quickly became a subscriber. In time, I contributed my own stories to _Linux Journal_. I’ve written several articles, including essays on usability in open source software, Bash shell scripting tricks, and C programming how-tos.

But my contributions to _Linux Journal_ are meager compared to others. Over the years, I have enjoyed reading many article series from regular contributors. I loved Dave Taylor's "Work the Shell" series about practical and sometimes magical scripts written for the Bash shell. I always turned to Kyle Rankin's "Hack and /" series about cool projects with Linux. And I have enjoyed reading articles from the latest _Linux Journal_ deputy editor Bryan Lunduke, especially a recent geeky article about "[How to Live Entirely in a Terminal][6]" that showed you can still do daily tasks on Linux without a graphical environment.

Many years later, things took a turn. Linux Journal’s publisher Carlie Fairchild wrote a seemingly terminal essay, [_Linux Journal Ceases Publication_][7], in December 2017 that indicated _Linux Journal_ had "run out of money, and options along with it." But a month later, Carlie updated the news item to report that "_Linux Journal_ was saved and brought back to life" by an angel investor. London Trust Media, the parent company of Private Internet Access, injected new funds into Linux Journal to get the magazine back on its feet. _Linux Journal_ resumed regular issues in March 2018.

But it seems the rescue was not enough. Late in the evening of August 7, 2019, _Linux Journal_ posted a final, sudden goodbye. Kyle Rankin’s essay [_Linux Journal Ceases Publication: An Awkward Goodbye_][8] was preceded with this announcement:

**IMPORTANT NOTICE FROM LINUX JOURNAL, LLC:**
_On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen.
–Linux Journal, LLC_

The announcement came as a surprise to readers and staff alike. I reached out to Bryan Lunduke, who commented that the shutdown was a "total surprise. Was writing an article the night before for an upcoming issue... No indication that things were preparing to fold." The next morning, on August 7, Lunduke said he "had a series of frantic messages from our Editor (Jill) and Publisher (Carlie). They had just found out, effective the night before... _Linux Journal_ was shut down. So we weren't so much being told that Linux Journal is shutting down... as _Linux Journal_ had already been shut down the day before... and we just didn't know it."

It's the end of an era. And as we salute the passing of _Linux Journal_, I’d like to recognize the indelible mark the magazine has left on the Linux landscape. _Linux Journal_ was the first publication to highlight Linux as a serious platform, and I think that made people take notice.

And with that seriousness, that maturity, _Linux Journal_ helped Linux shake its early reputation of being a hobby project. _Linux Journal's_ coverage from 1994 to 2019 highlighted Linux’s rise to an enterprise platform that runs a majority of the world’s servers and services.

I tip my hat to everyone at _Linux Journal_ and any contributor who was part of its journey. It has been a pleasure to work with you over the years. You kept the spirit alive. This may be a painful experience, but I hope everyone ends up in a good place.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/goodbye-linux-journal

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall https://opensource.com/users/scottnesbitt https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://www.linuxjournal.com/issue/1
[3]: https://www.linuxjournal.com/article/2736
[4]: https://www.linuxjournal.com/article/2734
[5]: https://www.linuxjournal.com/article/2735
[6]: https://www.linuxjournal.com/content/without-gui-how-live-entirely-terminal
[7]: https://www.linuxjournal.com/content/linux-journal-ceases-publication
[8]: https://www.linuxjournal.com/content/linux-journal-ceases-publication-awkward-goodbye

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How SD-Branch addresses today’s network security concerns)
[#]: via: (https://www.networkworld.com/article/3431166/how-sd-branch-addresses-todays-network-security-concerns.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

How SD-Branch addresses today’s network security concerns
======

New digital technologies such as IoT at remote locations increase the need to identify devices and monitor network activity. That’s where SD-Branch can help, says Fortinet’s John Maddison.

![KontekBrothers / Getty Images][1]

Secure software-defined WAN (SD-WAN) has become one of the hottest new technologies, with some reports claiming that 85% of companies are actively considering [SD-WAN][2] to improve cloud-based application performance, replace expensive and inflexible fixed WAN connections, and increase security.

But now the industry is shifting to software-defined branch ([SD-Branch][3]), which is broader than SD-WAN and introduces several new things for organizations to consider, including better security for new digital technologies. To understand what's required in this new solution set, I recently sat down with John Maddison, Fortinet’s executive vice president of products and solutions.

**[ Learn more: [SD-Branch: What it is, and why you'll need it][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

### Zeus Kerravala: To get started, what exactly is SD-Branch?

**John Maddison:** To answer that question, let’s step back and look at the need for a secure SD-WAN solution. Organizations need to expand their digital transformation efforts out to their remote locations, such as branch offices, remote school campuses, and retail locations. The challenge is that today’s networks and applications are highly elastic and constantly changing, which means that the traditional fixed and static WAN connections to their remote offices, such as [MPLS][5], can’t support this new digital business model.

That’s where SD-WAN comes in. It replaces those legacy, and sometimes quite expensive, connections with flexible and intelligent connectivity designed to optimize bandwidth, maximize application performance, secure direct internet connections, and ensure that traffic, applications, workflows, and data are secure.

However, most branch offices and retail stores have a local LAN behind that connection that is undergoing rapid transformation. Internet of things (IoT) devices, for example, are being adopted at remote locations at an unprecedented rate. Retail shops now include a wide array of connected devices, from cash registers and scanners to refrigeration units and thermostats, to security cameras and inventory control devices. Hotels monitor room access, security and safety devices, elevators, HVAC systems, and even minibar purchases. The same sort of transformation is happening at schools, branch and field offices, and remote production facilities.

![John Maddison, executive vice president, Fortinet][6]

The challenge is that many of these environments, especially these new IoT and mobile end-user devices, lack adequate safeguards. SD-Branch extends the benefits of the secure SD-WAN’s security and control functions into the local network by securing wired and wireless access points, monitoring and inspecting internal traffic and applications, and leveraging network access control (NAC) to identify the devices being deployed at the branch and then dynamically assigning them to network segments where they can be more easily controlled.

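The NAC flow Maddison describes (classify a newly seen device, then map it to a segment) can be sketched in a few lines of Python. The device types, OUI lookup, and VLAN IDs below are invented for illustration, not Fortinet's implementation.

```python
# Simplified NAC sketch: classify a device seen at the branch edge and
# assign it to a network segment. Device types and VLAN IDs are invented.
SEGMENT_POLICY = {
    "camera":  {"vlan": 110, "internet": False},
    "pos":     {"vlan": 120, "internet": True},
    "hvac":    {"vlan": 130, "internet": False},
    "unknown": {"vlan": 999, "internet": False},  # quarantine segment
}

def classify(mac_oui: str) -> str:
    # Real NAC uses fingerprinting (DHCP options, TLS, traffic patterns);
    # here a toy OUI lookup stands in for that step.
    known = {"AA:BB:CC": "camera", "DD:EE:FF": "pos"}
    return known.get(mac_oui, "unknown")

def onboard(mac: str) -> dict:
    device_type = classify(mac[:8].upper())
    policy = SEGMENT_POLICY[device_type]
    return {"mac": mac, "type": device_type, **policy}

print(onboard("aa:bb:cc:01:02:03"))  # camera -> VLAN 110, no internet
print(onboard("12:34:56:01:02:03"))  # unknown -> quarantine VLAN
```
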
### What unique challenges do remote locations, such as branch offices, schools, and retail locations, face?

Many of the devices being deployed at these remote locations need access to the internal network, to cloud services, or to internet resources to operate. The challenge is that IoT devices, in particular, are notoriously insecure and vulnerable to a host of threats and exploits. In addition, end users are connecting a growing number of unauthorized devices to the office. While these are usually some sort of personal smart device, they can also include anything from a connected coffee maker to a wireless access point.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**

Any of these, if connected to the network and then exploited, not only represent a threat to that remote location, but they can also be used as a door into the larger core network. There are numerous examples of vulnerable point-of-sale devices or HVAC systems being used to tunnel back into the organization’s data center to steal account and financial information.

Of course, these issues might be solved by adding a number of additional networking and security technologies to the branch, but most IT teams can’t afford to put IT resources onsite to deploy and manage these solutions, even temporarily. What’s needed is a security solution that combines traffic scanning and security enforcement, access control for both wired and wireless connections, device recognition, dynamic segmentation, and integrated management in a single low-touch/no-touch device. That’s where SD-Branch comes in.

### Why aren't traditional branch solutions, such as integrated routers, solving these challenges?

Most of the solutions designed for branch and retail locations predate SD-WAN and digital transformation. As a result, most do not provide support for the sort of flexible SD-WAN functionality that today’s remote locations require. In addition, while they may claim to provide low-touch deployment and management, the experience of most organizations tells a different story. Complicating things further, these solutions provide little more than a superficial integration between their various services.

For example, few if any of these integrated devices can manage or secure the wired and wireless access points deployed as part of the larger branch LAN, provide device recognition and network access control, scan network traffic, or deliver the sort of robust security that today’s networks require. Instead, many of these solutions are little more than a collection of separate, limited networking, connectivity, and security elements wrapped in a piece of sheet metal, each requiring its own management system. They provide little to no control for those extended LAN environments with their own access points and switches – which adds to IT overhead rather than reducing it.

### What role does security play in an SD-Branch?

Security is a critical element of any branch or retail location, especially as the ongoing deployment of IoT and end-user devices continues to expand the potential attack surface. As I explained before, IoT devices are a particular concern, as they are generally quite insecure, and as a result, they need to be automatically identified, segmented, and continuously monitored for malware and unusual behaviors.

But that is just part of the equation. Security tools need to be integrated into the switch and wireless infrastructure so that networking protocols, security policies, and network access controls can work together as a single system. This allows the SD-Branch solution to identify devices and dynamically match them to security policies, inspect applications and workflows, and dynamically assign devices and traffic to their appropriate network segment based on their function and role.

The challenge is that there is often no IT staff on site to set up, manage, and fine-tune a system like this. SD-Branch provides these advanced security, access control, and network management services in a zero-touch model so they can be deployed across multiple locations and then be remotely managed through a common interface.

### Security teams often face challenges with a lack of visibility and control at their branch offices. How does SD-Branch address this?

An SD-Branch solution seamlessly extends an organization's core security into the local branch network. For organizations with multiple branch or retail locations, this enables the creation of an integrated security fabric operating through a single pane of glass management system that can see all devices and orchestrate all security policies and configurations. This approach allows all remote locations to be dynamically coordinated and updated, supports the collection and correlation of threat intelligence from every corner of the network – from the core to the branch to the cloud – and enables a coordinated response to cyber events that can automatically raise defenses everywhere while identifying and eliminating all threads of an attack.

Combining security with switches, access points, and network access control systems means that every connected device can not only be identified and monitored, but every application and workflow can also be seen and tracked, even if they travel across or between the different branch and cloud environments.

### How is SD-Branch related to secure SD-WAN?

SD-Branch is a natural extension of secure SD-WAN. We are finding that once an organization deploys a secure SD-WAN solution, it quickly discovers that the infrastructure behind that connection is often not ready to support its digital transformation efforts. Every new threat vector adds additional risk to the organization.

While secure SD-WAN can see and secure applications running to or between remote locations, the applications and workflows running inside those branch offices, schools, or retail stores are not being recognized or properly inspected. Shadow IT instances are not being identified. Wired and wireless access points are not secured. End-user devices have open access to network resources. And IoT devices are expanding the potential attack surface without corresponding protections in place. That requires an SD-Branch solution.

Of course, this is about much more than the emergence of the next-gen branch. These new remote network environments are just another example of the new edge model that is extending and replacing the traditional network perimeter. Cloud and multi-cloud, mobile workers, 5G networks, and the next-gen branch – including offices, retail locations, and extended school campuses – are all emerging simultaneously. That means they all need to be addressed by IT and security teams at the same time. However, the traditional model of building a separate security strategy for each edge environment is a recipe for an overwhelmed IT staff. Instead, every edge needs to be seen as part of a larger, integrated security strategy where every component contributes to the overall health of the entire distributed network.

With that in mind, adding SD-Branch solutions to SD-WAN deployments not only extends security deep into branch offices and other remote locations but also forms a critical component of a broader strategy that ensures consistent security across all edge environments, while providing a mechanism for controlling operational expenses across the entire distributed network through central management, visibility, and control.

**[ For more on IoT security, see [our corporate guide to addressing IoT security concerns][8]. ]**

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3431166/how-sd-branch-addresses-todays-network-security-concerns.html

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/cio_cw_distributed_decentralized_global_network_africa_by_kontekbrothers_gettyimages-1004007018_2400x1600-100802403-large.jpg
[2]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[3]: https://www.networkworld.com/article/3250664/sd-branch-what-it-is-and-why-youll-need-it.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
[6]: https://images.idgesg.net/images/article/2019/08/john-maddison-_fortinet-square-100808017-small.jpg
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Xilinx launches new FPGA cards that can match GPU performance)
[#]: via: (https://www.networkworld.com/article/3430763/xilinx-launches-new-fpga-cards-that-can-match-gpu-performance.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Xilinx launches new FPGA cards that can match GPU performance
======

Xilinx says its new FPGA card, the Alveo U50, can match the performance of a GPU in areas of artificial intelligence (AI) and machine learning.

![Thinkstock][1]

Xilinx has launched a new FPGA card, the Alveo U50, that it claims can match the performance of a GPU in areas of artificial intelligence (AI) and machine learning.

The company claims the card is the industry’s first low-profile adaptable accelerator with PCIe Gen 4 support, which offers double the throughput of PCIe Gen 3. The PCIe Gen 4 standard was finalized in 2017, but cards and motherboards that support it have been slow to come to market.

The Alveo U50 provides customers with a programmable, low-profile, and low-power accelerator platform built for scale-out architectures and domain-specific acceleration of any server deployment, on premises, in the cloud, and at the edge.

**[ Also read: [What is quantum computing (and why enterprises should care)][2] ]**

Xilinx claims the Alveo U50 delivers 10 to 20 times improvements in throughput and latency as compared to a CPU. One thing's for sure: it beats the competition on power draw. It has a 75-watt power envelope, which is comparable to a desktop CPU and vastly better than a Xeon or GPU.

For accelerated networking and storage workloads, the U50 card helps developers identify and eliminate latency and data-movement bottlenecks by moving compute closer to the data.

![Xilinx Alveo U50][3]

The Alveo U50 card is the first in the Alveo portfolio to be packaged in a half-height, half-length form factor. It runs the Xilinx UltraScale+ FPGA architecture, features high-bandwidth memory (HBM2), 100 gigabits per second (100 Gbps) networking connectivity, and support for the PCIe Gen 4 and CCIX interconnects. Thanks to the 8GB of HBM2 memory, data transfer speeds can reach 400Gbps. It also supports NVMe-over-Fabric for high-speed SSD transfers.

That’s a lot of performance packed into a small card.

**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**

### What the Xilinx Alveo U50 can do

Xilinx is making some big boasts about Alveo U50's capabilities:

  * Deep learning inference acceleration (speech translation): delivers up to 25x lower latency, 10x higher throughput, and significantly improved power efficiency per node compared to GPU-only for speech translation performance.
  * Data analytics acceleration (database query): running the TPC-H Query benchmark, Alveo U50 delivers 4x higher throughput per hour and reduced operational costs by 3x compared to in-memory CPU.
  * Computational storage acceleration (compression): delivers 20x more compression/decompression throughput, faster Hadoop and big data analytics, and over 30% lower cost per node compared to CPU-only nodes.
  * Network acceleration (electronic trading): delivers 20x lower latency and sub-500ns trading time compared to CPU-only latency of 10us.
  * Financial modeling (grid computing): running the Monte Carlo simulation, Alveo U50 delivers 7x greater power efficiency compared to GPU-only performance for a faster time to insight, deterministic latency and reduced operational costs.

The Alveo U50 is sampling now with OEM system qualifications in process. General availability is slated for fall 2019.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3430763/xilinx-launches-new-fpga-cards-that-can-match-gpu-performance.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2014/04/bolts-of-light-speeding-through-the-acceleration-tunnel-95535268-100264665-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://images.idgesg.net/images/article/2019/08/xilinx-alveo-u50-100808003-medium.jpg
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others)
[#]: via: (https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others
======

Extreme has bought cloud, SD-WAN and data center technologies that make it more prepared to take on its toughest competitors.

Extreme Networks has in recent months restyled the company with data-center networking technology acquisitions and upgrades, but now comes the hard part – executing with enterprise customers and effectively competing with the likes of Cisco, VMware, Arista, Juniper, HPE and others.

The company’s latest and perhaps most significant long-term move was closing the [acquisition of wireless-networking vendor Aerohive][1] for about $210 million. The deal brings Extreme Aerohive’s wireless-networking technology – including its WiFi 6 gear, SD-WAN software and cloud-management services.

**More about edge networking**

  * [How edge networking and IoT will reshape data centers][2]
  * [Edge computing best practices][3]
  * [How edge computing can help secure the IoT][4]

With the Aerohive technology, Extreme says customers and partners will be able to mix and match a broader array of software, hardware, and services to create networks that support their unique needs and that can be managed and automated from the enterprise edge to the cloud.

The Aerohive buy is just the latest in a string of acquisitions that have reshaped the company. In the past few years the company has acquired networking and data-center technology from Avaya and Brocade, and it bought wireless player Zebra Technologies in 2016 for $55 million.

While it has been a battle to integrate and get solid sales footing for those acquisitions – particularly Brocade and Avaya – the company says those challenges are behind it and that the Aerohive integration will be much smoother.

“After scaling Extreme’s business to $1B in revenue [for FY 2019, which ended in June] and expanding our portfolio to include end-to-end enterprise networking solutions, we are now taking the next step to transform our business to add sustainable, subscription-oriented cloud-based solutions that will enable us to drive recurring revenue and improved cash-flow generation,” said Extreme CEO Ed Meyercord at the firm’s [FY 19 financial analysts][5] call.

The strategy to move more toward software-oriented, cloud-based revenue generation and technology development is brand new for Extreme. The company says it expects to generate as much as 30 percent of revenues from recurring charges in the near future. The tactic was enabled in large part by the Aerohive buy, which doubled Extreme’s customer base to 60,000 and its sales partners to 11,000, and whose revenues are recurring and cloud-based. The acquisition also created the number-three enterprise wireless LAN company behind Cisco and HPE/Aruba.

“We are going to take this Aerohive system and expand across our entire portfolio and use it to deliver common, simplified software with feature packages for on-premises or in-cloud based on customers' use case,” added Norman Rice, Extreme’s Chief Marketing, Development and Product Operations Officer. “We have never really been in any cloud conversations before so for us this will be a major add.”

Indeed, the Aerohive move is key for the company’s future, analysts say.

To continue reading this article register now
|
||||
|
||||
[Get Free Access][6]
|
||||
|
||||
[Learn More][7] Existing Users [Sign In][6]
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3405440/extreme-targets-cloud-services-sd-wan-wifi-6-with-210m-aerohive-grab.html
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://seekingalpha.com/article/4279527-extreme-networks-inc-extr-ceo-ed-meyercord-q4-2019-results-earnings-call-transcript
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia rises to the need for natural language processing)
[#]: via: (https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Nvidia rises to the need for natural language processing
======

As the demand for natural language processing grows for chatbots and AI-powered interactions, more companies will need systems that can provide it. Nvidia says its platform can handle it.

![andy.brandon50 \(CC BY-SA 2.0\)][1]

Nvidia is boasting of a breakthrough in conversational natural language processing (NLP) training and inference, enabling more complex interchanges between customers and chatbots with immediate responses.

The need for such technology is expected to grow: digital voice assistants alone are expected to climb from 2.5 billion to 8 billion within the next four years, according to Juniper Research, while Gartner predicts that by 2021, 15% of all customer service interactions will be handled completely by AI, an increase of 400% from 2017.

The company said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in just over 2 milliseconds, making it possible "for developers to use state-of-the-art language understanding for large-scale applications."

**[ Also read: [What is quantum computing (and why enterprises should care)][2] ]**

BERT, or Bidirectional Encoder Representations from Transformers, is a Google-developed AI language model that many developers say beats human accuracy on some performance evaluations. It's all discussed [here][3].

### Nvidia sets natural language processing records

All told, Nvidia is claiming three NLP records:

**1\. Training:** Running the largest version of the BERT language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes. A single DGX-2 system, which is about the size of a tower PC, trained BERT-Large in 2.8 days.

"The quicker we can train a model, the more models we can train, the more we learn about the problem, and the better the results get," said Bryan Catanzaro, vice president of applied deep learning research, in a statement.

**2\. Inference:** Using Nvidia T4 GPUs on its TensorRT deep learning inference platform, Nvidia performed inference on the BERT-Base SQuAD dataset in 2.2 milliseconds, well under the 10-millisecond processing threshold for many real-time applications and far ahead of the 40 milliseconds measured with highly optimized CPU code.

**3\. Model:** Nvidia said its new custom model, called Megatron, has 8.3 billion parameters, making it 24 times larger than BERT-Large and the world's largest language model based on Transformers, the building block used for BERT and other natural language AI models.

In a move sure to make FOSS advocates happy, Nvidia is also making a ton of source code available via [GitHub][4]:

  * NVIDIA GitHub BERT training code with PyTorch
  * NGC model scripts and check-points for TensorFlow
  * TensorRT optimized BERT Sample on GitHub
  * Faster Transformer: C++ API, TensorRT plugin, and TensorFlow OP
  * MXNet Gluon-NLP with AMP support for BERT (training and inference)
  * TensorRT optimized BERT Jupyter notebook on AI Hub
  * Megatron-LM: PyTorch code for training massive Transformer models
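
If you want to poke at the TensorRT BERT sample yourself, a minimal starting point might look like the sketch below. The repository path comes from the GitHub link at the end of this article; the `release/5.1` branch layout is an assumption and may have moved since publication.

```
# Grab NVIDIA's TensorRT open-source repo at the release referenced here,
# then inspect the BERT demo before running anything on a GPU.
git clone --branch release/5.1 https://github.com/NVIDIA/TensorRT.git
cd TensorRT/demo/BERT
ls   # read the sample's docs and scripts first; prerequisites are non-trivial
```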

Not that any of this is easily consumed. We're talking very advanced AI code. Very few people will be able to make heads or tails of it. But the gesture is a positive one.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/alphabetic_letters_characters_language_by_andybrandon50_cc_by-sa_2-0_1500x1000-100794409-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://medium.com/ai-network/state-of-the-art-ai-solutions-1-google-bert-an-ai-model-that-understands-language-better-than-92c74bb64c
[4]: https://github.com/NVIDIA/TensorRT/tree/release/5.1/demo/BERT/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get ready for the convergence of IT and OT networking and security)
[#]: via: (https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Get ready for the convergence of IT and OT networking and security
======

Collecting telemetry data from operational networks and passing it to information networks for analysis has its benefits. But this convergence presents big cultural and technology challenges.

![Thinkstock][1]

Most IT networking professionals are so busy with their day-to-day responsibilities that they don't have time to consider taking on more work. But for companies with an industrial component, there's an elephant in the room that is clamoring for attention. I'm talking about the increasingly common convergence of IT and operational technology (OT) networking and security.

Traditionally, IT and OT have had very separate roles in an organization. IT is typically tasked with moving data between computers and humans, whereas OT is tasked with moving data between "things," such as sensors, actuators, smart machines, and other devices, to enhance manufacturing and industrial processes. Not only were the roles for IT and OT completely separate, but their technologies and networks were, too.

That's changing, however, as companies want to collect telemetry data from the OT side to drive analytics and business processes on the IT side. The lines between the two sides are blurring, and this has big implications for IT networking and security teams.

"This convergence of IT and OT systems is absolutely on the increase, and it's especially affecting the industries that are in the business of producing things, whatever those things happen to be," according to Jeff Hussey, CEO of [Tempered Networks][2], which is working to help bridge the gap between the two. "There are devices on the OT side that are increasingly networked but without any security to those networks. Their operators historically relied on an air gap between the networks of devices, but those gaps no longer exist. The complexity of the environment and the expansive attack surface that is created as a result of connecting all of these devices to corporate networks massively increases the tasks needed to secure even the traditional networks, much less the expanded converged networks."

**[ Also read: [Is your enterprise software committing security malpractice?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

Hussey is well versed in the cultural and technology issues in this arena. When asked if IT and OT people are working together to integrate their networks, he says, "That would be ideal, but it's not really what we see in the marketplace. Typically, we see some acrimony between these two groups."

Hussey explains that the groups move at different paces.

"The OT groups think in terms of 10-plus-year cycles, whereas the IT groups think in terms of three-plus-year cycles," he says. "There's a lot more change and iteration in IT environments than there is in OT environments, which are traditionally extremely static. But now companies want to bring telemetry data that is produced by OT devices back to some workload in a data center or in a cloud. That forces a requirement for secure connectivity because of corporate governance or regulatory requirements, and this is when we most often see the two groups clash."

Based on the situations Hussey has observed so far, the onus to connect and secure the disparate networks falls to the IT side of the house. This is a big challenge because the tools that have traditionally been used for security in IT environments aren't necessarily appropriate or applicable in OT environments. IT and OT systems have very different protocols and operating systems. It's not practical to try to create network segmentation using firewall rules, access control lists, VLANs, or VPNs, because those things can't scale to the workloads presented in OT environments.

### OT practices create IT security concerns

Steve Fey, CEO of [Totem Building Cybersecurity][6], concurs with Hussey and points out another significant issue in trying to integrate the networking and security aspects of IT and OT systems. In the OT world, it's often the device vendors or their local contractors who manage and maintain all aspects of a device, typically through remote access. These vendors even install the remote access capabilities and set up the users. "This is completely opposite to how it should be done from a cybersecurity policy perspective," says Fey. And yet, it's common today in many industrial environments.

Fey's company is in the building controls industry, which automates control of everything from elevators and HVAC systems to lighting and life safety systems in commercial buildings.

"The building controls industry, in particular, is one that's characterized by a completely different buying and decision-making culture than in enterprise IT. Everything from how the systems are engineered, purchased, installed, and supported is very different than the equivalent world of enterprise IT. Even the suppliers are largely different," says Fey. "This is another aspect of the cultural challenge between IT and OT teams. They are two worlds that are having to figure each other out because of the cyber threats that pose a risk to these control systems."

Fey says major corporate entities are just waking up to the reality of this massive threat surface, whether it's in their buildings or their manufacturing processes.

"There's a dire need to overcome decades of installed OT systems that have been incorrectly configured and incorrectly operated without the security policies and safeguards that are normal to enterprise IT. But the toolsets for these environments are incompatible, and the cultural differences are great," he says.

Totem's goal is to bridge this gap with a specific focus on cyber and to provide a toolset that is recognizable to the enterprise IT world.

Both Hussey and Fey say it's likely that IT groups will be charged with leading the convergence of IT and OT networks, but they must include their OT counterparts in the efforts. There are big cultural and technical gaps to bridge to deliver the results that industrial companies are hoping to achieve.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html

作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/abstract_networks_thinkstock_881604844-100749945-large.jpg
[2]: https://www.temperednetworks.com/
[3]: https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html
[4]: https://www.networkworld.com/newsletters/signup.html
[6]: https://totembuildings.com/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Powering edge data centers: Blue energy might be the perfect solution)
[#]: via: (https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Powering edge data centers: Blue energy might be the perfect solution
======

Blue energy, created by mixing seawater and fresh water, could be the perfect solution for providing cheap and eco-friendly power to edge data centers.

![Benkrut / Getty Images][1]

About a cubic yard of freshwater mixed with seawater provides almost two-thirds of a kilowatt-hour of energy. And scientists say a revolutionary new battery chemistry based on that theme could power edge data centers.
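
As a rough sanity check of that figure (my arithmetic, with rounded constants, not from the article):

$$
\frac{2/3\,\text{kWh}}{1\,\text{yd}^3} \approx \frac{0.67 \times 3.6\,\text{MJ}}{0.765\,\text{m}^3} \approx 3.2\,\text{MJ/m}^3 \approx 0.87\,\text{kWh/m}^3,
$$

which is in line with the roughly 0.7–0.8 kWh per cubic meter of freshwater often cited for salinity-gradient ("blue") energy.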

The idea is to harness power from wastewater treatment plants located along coasts, which happen to be ideal locations for edge data centers and are heavy electricity users.

"Places where salty ocean water and freshwater mingle could provide a massive source of renewable power," [writes Rob Jordan in a Stanford University article][2].

**[ Read also: [Data centers may soon recycle heat into electricity][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

The chemical process harnesses a mix of sodium and chloride ions. They're squirted from battery electrodes into a solution and cause current to flow. That initial infusion is then followed by seawater being exchanged with wastewater effluent. This reverses the current flow and creates the energy, the researchers explain.

"Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging," the article says.

In other words, the battery is continually recharging and discharging with no added input--such as electricity from the grid. The Stanford researchers say the technology could be ideal for making coastal wastewater plants energy independent.

### Coastal edge data centers

Edge data centers, which also tend to be located along coasts, could benefit as well. Those data centers are already exploring kinetic wave energy to harvest power, as well as using seawater for cooling.

I've written about [Ocean Energy's offshore power platform using kinetic wave energy][5]. That 125-foot-long wave converter not only uses ocean water for power generation; its at-sea placement means the same body of water can be used for cooling, too.

"Ocean cooling and ocean energy in the one device" is a seductive solution, the head of that company said at the time.

[Microsoft, too, has an underwater data center][6] that proffers the same kinds of power benefits.

Locating data centers on coasts or in the sea rather than inland doesn't just provide virtually free power and cooling, plus the associated eco-emissions advantages. The coasts also tend to be where the populace is, and locating data center operations near where the actual calculations, data stores, and other activities need to take place fits neatly into the low-latency concept of edge computing.

Other advantages of placing a data center actually in the ocean, though close to land, include the fact that there's no rent in open waters. And in international waters, one could imagine regulatory advantages--there are no national officials hovering around.

However, by placing the installation on terra firma (as the seawater-freshwater mix power solution would be designed for) but close to water at a coast, one can use the necessary seawater and gain the advantage of easy access to the real estate, connections, and so on.

The Stanford University engineers, in their seawater/wastewater mix tests, flushed a battery prototype 180 times with wastewater from the Palo Alto Regional Water Quality Control Plant and seawater from nearby Half Moon Bay. The group says it captured 97% of the salinity gradient energy--the blue energy, as it's sometimes called.

"Surplus power production could even be diverted to nearby industrial operations," the article continues.

Tapping blue energy at the global scale--rivers running into the ocean--is yet to be solved. "But it is a good starting point," says Stanford scholar Kristian Dubrawski in the article.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/uk_united_kingdom_northern_ireland_belfast_river_lagan_waterfront_architecture_by_benkrut_gettyimages-530205844_2400x1600-100807934-large.jpg
[2]: https://news.stanford.edu/2019/07/29/generating-energy-wastewater/
[3]: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3314597/wave-energy-to-power-undersea-data-centers.html
[6]: https://www.networkworld.com/article/3283332/microsoft-launches-undersea-free-cooling-data-center.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -1,13 +1,17 @@
Managing Digital Files (e.g., Photographs) in Files and Folders
qfzy1233 is translating


数码文件与文件夹收纳术(以照片为例)
======
Update 2014-05-14: added real world example
更新 2014-05-14:增加了一些具体实例

Update 2015-03-16: filtering photographs according to their GPS coordinates
更新 2015-03-16:根据照片的 GPS 坐标过滤图片

Update 2016-08-29: replaced outdated `show-sel.sh` method with new `filetags --filter` method
更新 2016-08-29:以新的 `filetags --filter` 方法(LCTT 译注:文件标签过滤器)替换已经过时的 `show-sel.sh` 脚本(LCTT 译注:一个显示被选中文件的脚本)

Update 2017-08-28: Email comment on geeqie video thumbnails
更新 2017-08-28:增加了一封关于 geeqie 视频缩略图的邮件评论

I am a passionate photographer when being on vacation or whenever I see something beautiful. This way, I collected many [JPEG][1] files over the past years. Here, I describe how I manage my digital photographs while avoiding any [vendor lock-in][2] which binds me to a temporary solution and leads to loss of data. Instead, I prefer solutions where I am able to **invest my time and effort for a long-term relationship**.

This (very long) entry is **not about image files only**: I am going to explain further things like my folder hierarchy, file name convention, and so forth. Therefore, this information applies to all kinds of files I process.

@ -1,4 +1,3 @@
leemeans translating
7 deadly sins of documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-cat-writing-king-typewriter-doc.png?itok=afaEoOqc)

@ -1,5 +1,3 @@
bestony is translating.

Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components
============================================================

@ -1,177 +0,0 @@
IT disaster recovery: Sysadmins vs. natural disasters | HPE
======

![](https://www.hpe.com/content/dam/hpe/insights/articles/2017/11/it-disaster-recovery-sysadmins-vs-natural-disasters/featuredStory/Sysadmins-vs-natural-disasters-1740.jpg.transform/nxt-1043x496-crop/image.jpeg)

Businesses need to keep going even when faced with torrential flooding or earthquakes. Sysadmins who lived through Katrina, Sandy, and other disasters share real-world advice for anyone responsible for IT during an emergency.

In terms of natural disasters, 2017 has been one heck of a year. Hurricanes Harvey, Irma, and Maria brought destruction to Houston, Puerto Rico, Florida, and the Caribbean. On top of that, wildfires burned out homes and businesses in the West.

It'd be easy to respond with yet another finger-wagging article about [preparing for disasters][1]--and surely it's all good advice--but that doesn't help a network administrator cope with the soggy mess. Most of those well-meant suggestions also assume that the powers that be are cheerfully willing to invest money in implementing them.

We're a little more interested in the real world. Instead, let's put that bad news to some good use.

Case in point: One result of a natural disaster is that the boss may suddenly be willing to find budget for disaster recovery planning. As a New York area sysadmin puts it, "The [greatest benefit I found from Hurricane Sandy][2] is our client's interest in investing back into IT, so hopefully you will welcome bigger budgets as well."

Don't expect that willingness to last long, though. Any sysadmin who'd like to suggest infrastructure improvements is urged to make hay while the sun shines. As another Sandy-survivor IT specialist ruefully remarks, "[Initial interest in IT spending lasted the calendar year for us][3]. By the following year, any plans that hadn't already been put in the works got put on the back burner due to 'budgetary constraints,' and then completely forgotten about by around 6 months later."

It can help to remind management of the cold hard facts before they forget that bad natural disasters can happen to good companies. According to the Institute for Business & Home Safety, [25 percent of businesses that close after a natural disaster never reopen][4]. FEMA thinks that's optimistic. By its measure, "[40 percent of small businesses never reopen their doors][5] following a disaster."

If you're a sysadmin, you can help save your business. Here are some of the survivors' best ideas, based on what they've learned from the past few natural disasters.

### Have a plan

When the lights flicker and the wind howls like a locomotive, it's time to put your business continuity and disaster recovery plans into operation.

Too many sysadmins report that neither was in place when the storms came. That's not surprising. In 2014, the [Disaster Recovery Preparedness Council][6] found that [73 percent of surveyed businesses worldwide didn't have adequate disaster recovery plans][7].

"Adequate" is a key word. As a sysadmin on Reddit wrote in 2016, "[Our disaster plan is a disaster.][8] All our data is backed up to a storage area network [SAN] about 30 miles from here. We have no hardware to get it back online or even have our core servers up and running within a few days. We're a $4 billion a year company that won't spend a few $100K for proper equipment. Or even some servers at a data center. Our executive team said, 'Meh what are the odds of anything happening' when the hardware proposal was brought up."

Another sysadmin on the same thread put it more succinctly: "Currently my DR plan is to cry in a dark damp corner and hope nobody cared about anything that was lost."

If you're crying, let's hope you aren't crying alone. Any disaster plan, even one devised by the IT department, has to ascertain that [you can communicate with humans][10], as sysadmin Jim Thompson learned during Katrina: "Make sure you have a plan to communicate with people. During a serious regional disaster, you will not be able to call anyone with a phone in the affected area code."

One option that may appeal to the technically minded: [ham radio][11]. It [made a difference in Puerto Rico][12].

### Make a wish list

The first step is recognizing the problem. "Many companies are not actually interested in disaster recovery, or they address it reluctantly," says [Joshua Brusse][13], a chief architect at [Micro Focus][14]. "Viewing disaster recovery as an aspect of business continuity is a different perspective. All companies deal with business continuity, so disaster recovery should be considered as part of that."

Ensuring that there's an adequate disaster recovery and business continuity plan in place requires the IT department to document its needs. That's true even if--or particularly when--you don't get your way. As one sysadmin remarks, "I like to have a 'thought dump' location where any and all plans/ideas/improvements can be just dumped in with no limitations or restrictions. [This] is [especially helpful for when you propose a change][15], it gets shot down, and six months later that situation you warned about came up." Now you have everything prepared and can start the discussion: "As we discussed back in April…"

So, what can you do when your executive team responds to the business continuity plan with "Meh, what are the odds of anything happening?" Shockingly poor as that judgment is, one sysadmin suggests it's also completely normal behavior for the executive layer. In situations this dire, experienced sysadmins say to document everything. Be clear that you told the executives what needed to be done and that [they refused to do so][16]. "The general idea is to have a paper trail long enough for them to hang themselves," the sysadmin adds.

If that doesn't work, the experience of bringing back a flooded data center will serve you well in [a new job search][17].

### Protect the physical infrastructure

"[Our office is an old decrepit building][18]," reported one sysadmin after Harvey hammered Houston. "We went into the building blind, and the infrastructure in place was terrible. We literally just finished the last of the drops we needed in that building and now it's all under water."

Nonetheless, if you want the data center to keep running--or to get back up and working after a storm--you need to ensure the facility can stand up to not only the kind of disasters expected in your area but the unexpected ones as well. A sysadmin in San Francisco knows why it's important to ensure the company's servers are in a building that can withstand a magnitude 7 earthquake. A business in St. Louis knows how to respond to tornadoes. But you should prepare for every eventuality: a tornado in California, an earthquake in Missouri, or [a zombie apocalypse][19] (which also gives you justification for a chainsaw in the IT budget).

In Houston's case, [most data centers stayed up][20] and running because they were built to withstand storms and floods. [Data Foundry][21]'s chief technology officer, Edward Henigin, says of one of its data centers, "Houston 2 is a purpose-built facility designed to withstand Category 5 hurricane wind speeds. This site has not lost utility power, and we have not had to transition to our backup generators."

That's the good news. The bad news is, as superstorm Sandy showed in 2012, if your [data center isn't ready to handle flooding][22], you're in for a world of trouble. One failed data center, [Datagram][23], served high-profile sites including Gawker, Gizmodo, and Buzzfeed.

Of course, sometimes there's nothing you can do. As one San Juan, Puerto Rico, sysadmin sadly wrote when Irma came through, "Generator took a dump. Server room running on batteries but no [air conditioning]. [Bye bye servers][24]." The sysadmin couldn't fail over to disaster recovery because the MPLS (Multiprotocol Label Switching) line was also down: "Fun day."

To sum up, IT professionals need to know their area, know their risks, and place their servers in data centers that can handle the local conditions.

### An argument for the cloud

The best way to avoid an IT data center failure when a storm rolls through is to make sure the backup data center is elsewhere. That requires sensible decision-making in locating them. Your backup data center should not be in a region that can be affected by the same natural disaster; place your resources in more than one availability zone. Don't put backup and primary along the same fault line, or anywhere both can be flooded from linked water sources.

Some sysadmins [use the cloud for redundancy][25]. For example, Microsoft Azure storage is always replicated to ensure durability and high availability. Depending on the options you choose, Azure replication copies your data either within the same data center or to a second data center. Most public clouds offer similar automatic backup services to help ensure data stays safe no matter what happens to your local data center--unless your cloud provider is in the same storm path.

Expensive? Yes. As expensive as being down for a day or two? No.

Don't trust the public cloud? Consider a colocation (colo) service. With colo, you still own your hardware and run your own applications, but the hardware can be miles away from trouble. For instance, during Harvey, one company "virtually" moved all its resources from Houston to its colo in Austin, Texas. But those local data centers and colocation sites need to be ready to handle disasters; it's one of the criteria you should use in choosing them. For example, a Seattle sysadmin looking for colocation space considered, "It was all about their earthquake and drought protection (overbuilt foundations and water trucks to feed the chillers)."

### When the lights go out

The most [common cause of declared disaster is power failure][26], as Forrester Research analyst Rachel Dines reported in a survey for the [Disaster Recovery Journal][27]. While you can work around ordinary outages, hurricanes, fires, and floods test the equipment past its limits.

One sysadmin's tongue-in-cheek plan? "Turn off what you can before the UPS dies, let crash what you can't. Then, [drink until power comes back on][28]."

A more serious plan, driven by IT staff in the wake of 2016's Delta and Southwest outages, was for a managed service provider to [deploy uninterruptible power supplies][29] to its clients: "On the critical pieces, we use a combination of SNMP signalling and PowerChute Network Shutdown (PCNS) clients to shut things down in the event of a power failure. Bringing things back up, well... that depends on the client. Some are automatic, and some require manual intervention."
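
A minimal sketch of that shutdown-on-battery pattern, assuming an APC-style UPS monitored by apcupsd (the threshold, process name, and log wording are illustrative, not from the article):

```
#!/bin/bash
# Poll the UPS via apcupsd's apcaccess and shut down cleanly once the
# battery drops below a threshold. Illustrative values; tune for your gear.
THRESHOLD=25   # percent of battery charge remaining

while sleep 60; do
  status=$(apcaccess status 2>/dev/null)
  charge=$(awk -F': *' '/^BCHARGE/ {print int($2)}' <<<"$status")
  if grep -q 'STATUS *: *ONBATT' <<<"$status" && (( charge < THRESHOLD )); then
    logger "UPS on battery at ${charge}%; initiating shutdown"
    /sbin/shutdown -h now "UPS battery low"
  fi
done
```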

Another approach is to support the data center with utility power from two substations. For example, the [Seattle Westin Building data center][30] has multiple 13.4-kilovolt utility feeds, diverse power substations, and multiple 480-volt three-phase transformer vaults.

Serious power-failure prevention systems are not "one size fits all" units. Sysadmins should requisition a [custom-designed diesel generator for the data center][31]. Besides being tuned to your specific needs, generators must be capable of jumping to full speed in moments and accepting full-power loads without affecting load performance.

These generators must also be protected. For example, putting your generators on the ground floor of a flood plain is not a smart idea. The data centers on Broad Street in New York had fits during Superstorm Sandy because the backup generators' fuel tanks were in the basement--and they were flooded out. While a ["bucket brigade" relaying 5-gallon buckets of diesel fuel up 17 flights of stairs to the generator][32] kept [Peer 1 Hosting][33] in business, this is not a viable business continuity plan.

As most data center professionals know, if you have time--say, a hurricane is a day away--make sure your generator is working, fully fueled up, and ready to kick on when the power lines get cut. Of course, you should have been testing your generator every month anyway. You have been doing that? Right? Right!

### Testing your confidence in backups

Ordinary users almost never make backups, and fewer still check that their backups are actually any good. Sysadmins know better.

Some [IT departments are looking into moving their backups to the cloud][34]. But some sysadmins aren't sold on it yet--for good reason. One recently reported, "After [five solid days of restoring [400 GB of] data from Amazon Glacier][35], I owe Amazon nearly $200 in data transfer fees and [I] still have an inconsistent restore state and [am] missing 100 GB of my files."

As a result, some sysadmins still prefer tape backup. Tape is certainly not fashionable, but as operating system guru Andrew S. Tanenbaum says, "[Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway][36]."

These days, tape can handle 10 terabytes per cartridge, and there are experiments underway that take tape up to 200 TB. Technologies such as the [Linear Tape File System][37] enable you to read tape data as if it were just another network drive.
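
For example, a sketch of that LTFS workflow (the device name, mount point, and backup path are illustrative; tool names follow the open-source LTFS implementation and may differ for your drive):

```
# One-time: lay down the LTFS format on the cartridge in drive /dev/st0.
mkltfs --device=/dev/st0

# Mount the tape so it behaves like any other filesystem...
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/st0 /mnt/ltfs

# ...then ordinary tools just work (slowly, but they work).
cp -r /srv/backups/2017-11 /mnt/ltfs/
umount /mnt/ltfs
```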

Yet for many, tape is the [option of absolute last resort][38]. That's fine, because backup should have plenty of options. In this case, says one sysadmin, for tape to be needed, everything else would have to fail first: "[Windows] server level VSS [Volume Shadow Storage] snapshots, SAN level volume snapshots, and SAN level offsite archived snapshot copies. But if, hypothetically, something happened that nuked our VM, the SAN, and the backup SAN, we could still get the tapes back and recover the data."

When trouble is coming your way, use replication tools such as [Veeam][39], which create a virtual machine replica of your servers. If there's a failure, the replicas are automatically spun up. No fuss, no muss, as one sysadmin says in the popular sysadmin post, "[I love you Veeam.][40]"

### Network? What network?

Of course, no cloud, no colo, and no remote data center will help you if staff can't reach their services. You don't need a natural disaster to justify redundant Internet connections. All it takes is a backhoe cable cut or severed fiber lines to give you a bad day at work.

"Ideally," one sysadmin wisely observes, "you should have [two wired Internet connections to two ISPs with separate infrastructures][41]. You do not want to find out both ISPs are dependent on the same fiber cable, for example. Nor do you want to use two local ISPs and find out they are both dependent on Level 3 for their upstream bandwidth."

Smart sysadmins know their corporate Internet connections [must be business-class connections with a service-level agreement][42] (SLA) that includes a "time to repair" clause. Better still is to get a [dedicated Internet access][43] (DIA) circuit. Technically, it's no different from any other Internet connection. The difference is that a DIA is not a "best effort" connection. Instead, you get a specified amount of bandwidth dedicated to your use, backed by an SLA. DIA circuits aren't cheap, but as the saying goes, "Fast. Reliable. Cheap. Pick any two." When it's your business on the line and a storm is coming your way, "reliable" has to be one of your two picks.

### When the storm skies clear

You can't prepare for all disasters, but you can plan for many of them. With a well-thought-out and tested disaster recovery and business continuity plan that is followed to the letter, your company can stay afloat while your rivals are drowning.

### Sysadmins vs. disasters: Lessons for leaders

  * How many times must your IT staff say this: Don't just make backups. Test backups.
  * No power? No company. Make certain your servers' emergency power is sufficient for your needs and actually works.
  * If your company survives a natural disaster--or dodges one--wise sysadmins know that this is the time to ask management for the disaster recovery budget they've been postponing. Because next time, you might not be so lucky.

--------------------------------------------------------------------------------

via: https://www.hpe.com/us/en/insights/articles/it-disaster-recovery-sysadmins-vs-natural-disasters-1711.html

作者:[Steven-J-Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
[1]:https://www.hpe.com/us/en/insights/articles/what-is-disaster-recovery-really-1704.html
[2]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/
[3]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma6gse/
[4]:https://disastersafety.org/wp-content/uploads/open-for-business-english.pdf
[5]:https://www.fema.gov/protecting-your-businesses
[6]:http://drbenchmark.org/about-us/our-council/
[7]:https://www.prnewswire.com/news-releases/global-benchmark-study-reveals-73-of-companies-are-unprepared-for-disaster-recovery-248359051.html
[8]:https://www.reddit.com/r/sysadmin/comments/3cob1k/what_does_your_disaster_recovery_plan_look_like/csxh8sn/
[10]:http://www.theregister.co.uk/2015/07/12/surviving_hurricane_katrina
[11]:https://theprepared.com/guides/beginners-guide-amateur-ham-radio-preppers/
[12]:http://www.npr.org/2017/09/29/554600989/amateur-radio-operators-stepped-in-to-help-communications-with-puerto-rico
[13]:http://www8.hp.com/us/en/software/joshua-brusse.html
[14]:https://www.microfocus.com/
[15]:https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma87xv/
[16]:https://www.hpe.com/us/en/insights/articles/my-boss-asked-me-to-do-what-how-to-handle-worrying-work-requests-1710.html
[17]:https://www.hpe.com/us/en/insights/articles/sysadmin-survival-guide-1707.html
[18]:https://www.reddit.com/r/sysadmin/comments/6wk92q/any_houston_admins_executing_their_dr_plans_this/dm8xj0q/
[19]:https://community.spiceworks.com/how_to/1243-ensure-your-dr-plan-is-ready-for-a-zombie-apocolypse
[20]:http://www.datacenterdynamics.com/content-tracks/security-risk/houston-data-centers-withstand-hurricane-harvey/98867.article
[21]:https://www.datafoundry.com/
[22]:http://www.datacenterknowledge.com/archives/2012/10/30/major-flooding-nyc-data-centers
[23]:https://datagram.com/
[24]:https://www.reddit.com/r/sysadmin/comments/6yjb3p/shutting_down_everything_blame_irma/
[25]:https://www.hpe.com/us/en/insights/articles/everything-you-need-to-know-about-clouds-and-hybrid-it-1701.html
[26]:https://www.drj.com/images/surveys_pdf/forrester/2011Forrester_survey.pdf
[27]:https://www.drj.com
[28]:https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/d6c71p1/
[29]:https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/
[30]:https://cloudandcolocation.com/datacenters/the-westin-building-seattle-data-center/
[31]:https://www.techrepublic.com/article/what-to-look-for-in-a-data-center-backup-generator/
[32]:http://www.datacenterknowledge.com/archives/2012/10/31/peer-1-mobilizes-diesel-bucket-brigade-at-75-broad
[33]:https://www.cogecopeer1.com/
[34]:https://www.reddit.com/r/sysadmin/comments/7a6m7n/aws_glacier_archival/
[35]:https://www.reddit.com/r/sysadmin/comments/63mypu/the_dangers_of_cloudberry_and_amazon_glacier_how/
[36]:https://en.wikiquote.org/wiki/Andrew_S._Tanenbaum
[37]:http://www.snia.org/ltfs
[38]:https://www.reddit.com/r/sysadmin/comments/5visaq/backups_how_many_of_you_still_have_tapes/de2d0qm/
[39]:https://helpcenter.veeam.com/docs/backup/vsphere/failover.html?ver=95
[40]:https://www.reddit.com/r/sysadmin/comments/5rttuo/i_love_you_veeam/
[41]:https://www.reddit.com/r/sysadmin/comments/5rmqfx/ars_surviving_a_cloudbased_disaster_recovery_plan/dd90auv/
[42]:https://www.hpe.com/us/en/insights/articles/how-do-you-evaluate-cloud-service-agreements-and-slas-very-carefully-1705.html
[43]:http://www.e-vergent.com/what-is-dedicated-internet-access/
@ -1,123 +0,0 @@
Sysadmin 101: Troubleshooting
======

I typically keep this blog strictly technical, keeping observations, opinions and the like to a minimum. But this, and the next few posts, will be about basics and fundamentals for starting out in system administration/SRE/system engineer/sysops/devops-ops (whatever you want to call yourself) roles more generally.

Bear with me!

"My web site is slow"

I picked the type of issue for this article at random; this approach applies to pretty much any sysadmin-related troubleshooting. It's not about showing off the cleverest one-liners to find the most information. It's also not an exhaustive, step-by-step "flowchart" with the word "profit" in the last box. It's about the general approach, by means of a few examples.

The example scenarios are solely for illustrative purposes. They sometimes have a basis in assumptions that don't apply to all cases all of the time, and I'm positive many readers will go "oh, but I think you will find…" at some point. But that would be missing the point.

Having worked in support, or within a support organization, for over a decade, there is one thing that strikes me time and time again, and that made me write this:

**The instinctive reaction many techs have when facing a problem is to start throwing potential solutions at it.**

"My website is slow"

  * I'm going to try upping `MaxClients/MaxRequestWorkers/worker_connections`
  * I'm going to try to increase `innodb_buffer_pool_size/effective_cache_size`
  * I'm going to try to enable `mod_gzip` (true story, sadly)

"I saw this issue once, and then it was because X. So I'm going to try to fix X again, it might work."

This wastes a lot of time, and leads you down a wild goose chase. In the dark. Wearing greased mittens. InnoDB's buffer pool may well be at 100% utilization, but that's just because there are remnants of a large one-off report someone ran a while back in there. If there are no evictions, you've just wasted time.

### Quick side-bar before we start

At this point, I should mention that while it's equally applicable to many roles, I'm writing this from a general support system administrator's point of view. In a mature, in-house organization, or when working with larger, fully managed or "enterprise" customers, you'll typically have everything instrumented, measured, graphed, thresheld (not even a word) and alerted on. Then your approach will often be rather different. We're going in blind here.

If you don't have that sort of thing at your disposal:

### Clarify and first look

Establish what the issue actually is. "Slow" can take many forms. Is it time to first byte? That's a whole different class of problem from poor Javascript loading and pulling down 15 MB of static assets on each page load. Is it slow, or just slower than it usually is? Two very different plans of attack!

Make sure you know what the reported or experienced issue actually is before you go off and do something. Finding the source of the problem is often difficult enough without also having to find the problem itself. That is the sysadmin equivalent of bringing a knife to a gunfight.

### Low-hanging fruit / gimmies

You are allowed to look for a few usual suspects when you first log in to a suspect server. In fact, you should! I tend to fire off a smattering of commands whenever I log in to a server to very quickly check a few things: Are we swapping (`free`/`vmstat`), are the disks busy (`top`/`iostat`/`iotop`), are we dropping packets (`netstat`/`/proc/net/dev`), is there an undue amount of connections in an undue state (`netstat`), is something hogging the CPUs (`top`), is someone else on this server (`w`/`who`), any eye-catching messages in syslog and `dmesg`?
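
Bundled into a function, that first pass might look something like this (a sketch using the tools named above; swap in whatever your distro ships):

```
# Thirty-second triage on login. Nothing here changes state; it only looks.
first_look() {
  free -m                                            # swapping?
  vmstat 1 5                                         # CPU, run queue, swap activity
  iostat -x 1 3                                      # busy disks? (sysstat package)
  netstat -ant | awk '{print $6}' | sort | uniq -c   # connection states at a glance
  top -b -n 1 | head -n 15                           # anything hogging the CPUs?
  w                                                  # who else is on this box?
  dmesg | tail -n 20                                 # kernel complaints?
  tail -n 20 /var/log/syslog                         # path varies by distro
}
```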

There's little point in carrying on if you have 2,000 messages from your RAID controller about how unhappy it is with its write-through cache.

This doesn't have to take more than half a minute. If nothing catches your eye - continue.

### Reproduce

If there indeed is a problem somewhere, and there's no low-hanging fruit to be found:

Take all the steps you can to try and reproduce the problem. When you can reproduce, you can observe. **When you can observe, you can solve.** Ask the person reporting the issue what exact steps to take to reproduce it, if that isn't already obvious or covered by the first section.

Now, for issues caused by solar flares and clients running exclusively on OS/2, it's not always feasible to reproduce. But your first port of call should be to at least try! In the very beginning, all you know is "X thinks their website is slow". For all you know at that point, they could be tethered to their GPRS mobile phone and applying Windows updates. Delving any deeper than we already have at that point is, again, a waste of time.

Attempt to reproduce!

### Check the log!

It saddens me that I felt the need to include this. But I've seen escalations that ended mere minutes after someone ran `tail /var/log/..` Most *NIX tools these days are pretty good at logging. Anything blatantly wrong will manifest itself quite prominently in most application logs. Check it.

### Narrow down

If there are no obvious issues, but you can reproduce the reported problem, great. So, you know the website is slow. Now you've narrowed things down to: browser rendering/bug, application code, DNS infrastructure, router, firewall, NICs (all eight+ involved), ethernet cables, load balancer, database, caching layer, session storage, web server software, application server, RAM, CPU, RAID card, disks. Add a smattering of other potential culprits depending on the setup. It could be the SAN, too. And don't forget about the hardware WAF! And... you get my point.

If the issue is time-to-first-byte you'll of course start applying known fixes to the webserver--that's the one responding slowly and the one you know the most about, right? Wrong! You go back to trying to reproduce the issue. Only this time, you try to eliminate as many potential sources of issues as possible.

You can eliminate the vast majority of potential culprits very easily: Can you reproduce the issue locally from the server(s)? Congratulations, you've just saved yourself having to try your fixes for BGP routing. If you can't, try from another machine on the same network. If you can - at least you can move the firewall down your list of suspects (but do keep a suspicious eye on that switch!).

Are all connections slow? Just because the server is a web server doesn't mean you shouldn't try to reproduce with another type of service. [netcat][1] is very useful in these scenarios (but chances are your SSH connection would have been lagging this whole time, as a clue)! If that's also slow, you at least know you've most likely got a networking problem and can disregard the entire web stack and all its components. Start from the top again with this knowledge (do not collect $200). Work your way from the inside out!

Even if you can reproduce locally - there's still a whole lot of "stuff" left. Let's remove a few more variables. Can you reproduce it with a flat file? If `i_am_a_1kb_file.html` is slow, you know it's not your DB, caching layer or anything beyond the OS and the webserver itself.

Can you reproduce with an interpreted/executed `hello_world.(py|php|js|rb..)` file? If you can, you've narrowed things down considerably, and you can focus on just a handful of things. If `hello_world` is served instantly, you've still learned a lot! You know there aren't any blatant resource constraints, any full queues or stuck IPC calls anywhere. So it's something the application is doing or something it's communicating with.
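
Putting numbers on those comparisons is trivial with curl's timing variables (the file names here are the hypothetical examples above):

```
# Compare time-to-first-byte and total time for a static file vs. a
# trivial script, from the server itself, to rule the network out.
for f in i_am_a_1kb_file.html hello_world.php; do
  curl -s -o /dev/null \
       -w "$f  ttfb=%{time_starttransfer}s  total=%{time_total}s\n" \
       "http://localhost/$f"
done
```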

Are all pages slow? Or just the ones loading the "live scores feed" from a third party?

**What this boils down to is: What's the smallest amount of "stuff" that you can involve and still reproduce the issue?**

Our example is a slow web site, but this is equally applicable to almost any issue. Mail delivery? Can you deliver locally? To yourself? To <common provider here>? Test with small, plaintext messages. Work your way up to the 2 MB campaign blast. STARTTLS and no STARTTLS. Work your way from the inside out.

Each one of these steps takes mere seconds, far quicker than implementing most "potential" fixes.

### Observe / isolate

By now, you may already have stumbled across the problem by virtue of being unable to reproduce it when you removed a particular component.

But if you haven't, or you still don't know **why**: Once you've found a way to reproduce the issue with the smallest amount of "stuff" (technical term) between you and the issue, it's time to start isolating and observing.

Bear in mind that many services can be run in the foreground and/or have debugging enabled. For certain classes of issues, it is often hugely helpful to do this.

Here's also where your traditional armory comes into play: `strace`, `lsof`, `netstat`, `GDB`, `iotop`, `valgrind`, language profilers (cProfile, xdebug, ruby-prof…). Those types of tools. Once you've come this far, though, you rarely end up having to break out profilers or debuggers.

[`strace`][2] is often a very good place to start.
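
A typical first pass looks something like this (flags straight from the strace man page; the process name is illustrative):

```
# Attach to the running process, follow forks, timestamp every syscall,
# and record how long each call took (-T prints durations in <angle brackets>).
strace -f -tt -T -p "$(pgrep -o -f myapp)" -o /tmp/myapp.trace

# Afterwards, eyeball where the time went.
grep -E 'connect|read|poll' /tmp/myapp.trace | tail -n 20
```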

You might notice that the application is stuck on a particular `read()` call on a socket file descriptor connected to port 3306 somewhere. You'll know what to do: Move on to MySQL and start from the top again. Low-hanging fruit: "Waiting_for * lock", deadlocks, max_connections... Move on to: All queries? Only writes? Only certain tables? Only certain storage engines?…

You might notice that there's a `connect()` to an external API resource that takes five seconds to complete, or even times out. You'll know what to do.

You might notice that there are 1,000 calls to `fstat()` and `open()` on the same couple of files as part of a circular dependency somewhere. You'll know what to do.

It might not be any of those particular things, but I promise you, you'll notice something.

If you're only going to take one thing from this section, let it be: learn to use `strace`! **Really** learn it, read the whole man page. Don't even skip the HISTORY section. `man` each syscall you don't already know. 98% of troubleshooting sessions end with strace.

--------------------------------------------------------------------------------

via: http://northernmost.org/blog/troubleshooting-101/index.html

作者:[Erik Ljungstrom][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://northernmost.org
[1]:http://nc110.sourceforge.net/
[2]:https://linux.die.net/man/1/strace
@ -1,154 +0,0 @@
|
||||
translating by zyk2290

Two great uses for the cp command: Bash shortcuts
============================================================

### Here's how to streamline the backup and synchronize functions of the cp command.

![Two great uses for the cp command: Bash shortcuts](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC)

> Image by: [Internet Archive Book Images][6]. Modified by Opensource.com. CC BY-SA 4.0

Last July, I wrote about [two great uses for the cp command][7]: making a backup of a file, and synchronizing a secondary copy of a folder.

Having found these invocations useful, I also find them more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I'd share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven't quite taken on aliases or shell functions.

### Updating a second copy of a folder – Bash alias

The general pattern for updating a second copy of a folder with cp is:

```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```

I can easily remember the -r option because I use it often when copying folders around. I can probably, with some more effort, remember -v, and with even more effort, -u (is it "update" or "synchronize" or…).

Or I can just use the [alias capability in Bash][8] to convert the cp command and options to something more memorable, like this:

```
alias sync='cp -r -u -v'
```

With the alias defined, updating my backup of, say, my Pictures folder is as simple as:

```
sync Pictures /media/me/4388-E5FE
```

Not sure if you already have a sync alias defined? You can list all your currently defined aliases by typing the word alias at the command prompt in your terminal window.

Like this so much you just want to start using it right away? Open a terminal window and type:

```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
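
Note that the alias only takes effect in new shells. To pick it up in the current session, source the file (this assumes your ~/.bashrc already reads ~/.bash_aliases, as it does on many distributions):

```
source ~/.bash_aliases
```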

Then you can confirm it is defined by listing your aliases:

```
me@mymachine~$ alias

alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```

### Making versioned backups – Bash function

The general pattern for making a backup of a file with cp is:

```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```

Besides remembering the options to the cp command, we also need to remember to repeat the WORKING-FILE name a second time. But why repeat ourselves when [a Bash function][9] can take care of that overhead for us, like this (again, you can save this to the .bash_aliases file in your home directory):

```
function backup {
    if [ $# -ne 1 ]; then
        # $0 would expand to the shell's name inside a function, so spell it out.
        echo "Usage: backup filename"
    elif [ -f "$1" ]; then
        # Quote "$1" so filenames containing spaces survive intact.
        echo "cp --force --backup=numbered $1 $1"
        cp --force --backup=numbered "$1" "$1"
    else
        echo "backup: $1 is not a file"
    fi
}
```

The first if statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the echo command.

The elif statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second echo to print the cp command to be used and then executes it.

If the single argument is not a file, the third echo prints an error message to that effect.

In my home directory, if I execute the backup command so defined on the file checkCounts.sql, I see that backup creates a file called checkCounts.sql.~1~. If I execute it once more, I see a new file checkCounts.sql.~2~.

Success! As planned, I can go on editing checkCounts.sql, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble.
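
The numbered snapshots are ordinary files, so returning to one is just a copy in the other direction; for example, with the checkCounts.sql snapshots from above:

```
# See which snapshots exist if you lose track.
ls checkCounts.sql.~*~

# Restore the working file from the most recent snapshot.
cp checkCounts.sql.~2~ checkCounts.sql
```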

At some point, it's better to start using git for version control, but backup as defined above is a nice, cheap tool when you need to create snapshots but you're not ready for git.

### Conclusion

In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases.

Here I've shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the cp command. If you'd like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias][10] and [Shell scripting: An introduction to the shift method and custom functions][11], written by my colleagues Greg and Seth, respectively.

### About the author

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/clh_portrait2.jpg?itok=V1V-YAtY)][13] Chris Hermansen

Engaged in computing since graduating from the University of British Columbia in 1978, I have been a full-time Linux user since 2005 and a full-time Solaris, SunOS and UNIX System V user before that. On the technical side of things, I have spent a great deal of my career doing data analysis; especially spatial data analysis. I have a substantial amount of programming experience in relation to data analysis, using awk, Python, PostgreSQL, PostGIS and lately Groovy. I have also built a few... [more about Chris Hermansen][14]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/two-great-uses-cp-command-update

作者:[Chris Hermansen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/clhermansen
[1]:https://opensource.com/users/clhermansen
[2]:https://opensource.com/users/clhermansen
[3]:https://opensource.com/user/37806/feed
[4]:https://opensource.com/article/18/1/two-great-uses-cp-command-update?rate=J_7R7wSPbukG9y8jrqZt3EqANfYtVAwZzzpopYiH3C8
[5]:https://opensource.com/article/18/1/two-great-uses-cp-command-update#comments
[6]:https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR
[7]:https://opensource.com/article/17/7/two-great-uses-cp-command
[8]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[9]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[10]:https://opensource.com/article/17/5/introduction-alias-command-line-tool
[11]:https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions
[12]:https://opensource.com/tags/linux
[13]:https://opensource.com/users/clhermansen
[14]:https://opensource.com/users/clhermansen
How to test Webhooks when you're developing locally
============================================================

![](https://cdn-images-1.medium.com/max/1000/1*0HNQmPw5yXva6powvVwn5Q.jpeg)
Photo by [Fernando Venzano][1] on [Unsplash][2]

[Webhooks][10] can be used by an external system for notifying your system about a certain event or update. Probably the most well-known type is the one where a Payment Service Provider (PSP) informs your system about status updates of payments.

Often they come in the form where you listen on a predefined URL, for example [http://example.com/webhooks/payment-update][11]. Meanwhile, the other system sends a POST request with a certain payload to that URL (for example, a payment ID). As soon as the request comes in, you fetch the payment ID, ask the PSP for the latest status via their API, and update your database afterward.

Other examples can be found in this excellent explanation about webhooks: [https://sendgrid.com/blog/whats-webhook/][12].

Testing these webhooks goes fairly smoothly as long as the system is publicly accessible over the internet. This might be your production environment or a publicly accessible staging environment. It becomes harder when you are developing locally on your laptop or inside a Virtual Machine (VM, for example, a Vagrant box). In those cases, the local URLs are not publicly accessible by the party sending the webhook. Also, monitoring the requests being sent around can be difficult, which might make development and debugging hard.

What this example will solve:

* Testing webhooks from a local development environment, which is not accessible over the internet. It cannot be reached by the service sending the data to the webhook from their servers.

* Monitoring the requests and data being sent around, as well as the responses your application generates. This allows easier debugging, and therefore a shorter development cycle.

Prerequisites:

* _Optional_: in case you are developing using a Virtual Machine (VM), make sure it's running and make sure the next steps are done in the VM.

* For this tutorial, we assume you have a vhost defined at `webhook.example.vagrant`. I used a Vagrant VM for this tutorial, but you are free in choosing the name of your vhost.

* Install `ngrok` by following the [installation instructions][3]. Inside a VM, I find the Node version of it also useful: [https://www.npmjs.com/package/ngrok][4], but feel free to use other methods.

I assume you don't have SSL running in your environment, but if you do, feel free to replace port 80 with port 443 and `http://` with `https://` in the examples below.
#### Make the webhook testable

Let's assume the following example code. I'll be using PHP, but read it as pseudo-code, as I left some crucial parts out (for example, API keys, input validation, etc.).

The first file: _payment.php_. This file creates a payment object, registers it with the PSP, fetches the URL the customer needs to visit in order to pay, and redirects the customer there.

Note that the `webhook.example.vagrant` in this example is the local vhost we've defined for our development set-up. It's not accessible from the outside world.

```
<?php
/*
 * This file creates a payment and tells the PSP what webhook URL to use for updates
 * After creating the payment, we get a URL to send the customer to in order to pay at the PSP
 */
$payment = [
    'order_id' => 123,
    'amount' => 25.00,
    'description' => 'Test payment',
    'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
    'webhook_url' => 'http://webhook.example.vagrant/webhook.php',
];

$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```

Second file: _webhook.php_. This file waits to be called by the PSP to get notified about updates.

```
<?php
/*
 * This file gets called by the PSP and in the $_POST they submit an 'id'
 * We can use this ID to get the latest status from the PSP and update our internal systems afterward
 */
$paymentId = $_POST['id'];
$paymentInfo = $paymentProvider->getPayment($paymentId);
$status = $paymentInfo->getStatus();

// Perform actions in here to update your system
if ($status === 'paid') {
    // ...
}
elseif ($status === 'cancelled') {
    // ...
}
```

Our webhook URL is not accessible over the internet (remember: `webhook.example.vagrant`). Thus, the file _webhook.php_ will never be called by the PSP. Your system will never get to know about the payment status. This ultimately leads to orders never being shipped to customers.

Luckily, _ngrok_ can help solve this problem. [_ngrok_][13] describes itself as:

> ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels.

Let's start a basic tunnel for our project. On your environment (either on your system or on the VM) run the following command:

`ngrok http -host-header=rewrite webhook.example.vagrant:80`

Read about more configuration options in their documentation: [https://ngrok.com/docs][14].

A screen like this will come up:

![](https://cdn-images-1.medium.com/max/1000/1*BZZE-CvZwHZ3pxsElJMWbA.png)
ngrok output

What did we just start? Basically, we instructed `ngrok` to start a tunnel to `http://webhook.example.vagrant` on port 80. This same URL can now be reached via `http://39741ffc.ngrok.io` or `https://39741ffc.ngrok.io`. Both are publicly accessible over the internet by anyone who knows this URL.

Note that you get both HTTP and HTTPS available out of the box. The documentation gives examples of how to restrict this to HTTPS only: [https://ngrok.com/docs#bind-tls][16].

So, how do we make our webhook work now? Update _payment.php_ to the following code:

```
<?php
/*
 * This file creates a payment and tells the PSP what webhook URL to use for updates
 * After creating the payment, we get a URL to send the customer to in order to pay at the PSP
 */
$payment = [
    'order_id' => 123,
    'amount' => 25.00,
    'description' => 'Test payment',
    'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
    'webhook_url' => 'https://39741ffc.ngrok.io/webhook.php',
];

$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```

Now, we told the PSP to call the tunnel URL over HTTPS. _ngrok_ will make sure your internal URL gets called with an unmodified payload as soon as the PSP calls the webhook via the tunnel.
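
You don't even have to wait for the PSP to verify the plumbing: you can impersonate it with a hand-rolled request (the payment ID below is made up, and your tunnel ID will differ):

```
# Pretend to be the PSP: POST a payment ID to the public tunnel URL.
curl -X POST -d "id=tr_12345" https://39741ffc.ngrok.io/webhook.php
```

The request, and your application's response to it, then shows up in the ngrok output, which brings us to monitoring.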

#### How to monitor calls to the webhook?

The screenshot you've seen above gives an overview of the calls being made to the tunnel host. This data is rather limited. Fortunately, `ngrok` offers a very nice dashboard, which allows you to inspect all calls:

![](https://cdn-images-1.medium.com/max/1000/1*qZw9GRTnG1sMgEUmsJPz3g.png)

I won't go into this in much depth, because it's self-explanatory as soon as you have it running. Instead, I will explain how to access it on the Vagrant box, as it doesn't work out of the box there.

The dashboard will allow you to see all the calls, their status codes, the headers, and the data being sent around. You will also see the response your application generated.

Another neat feature of the dashboard is that it allows you to replay a certain call. Say your webhook code ran into a fatal error; it would be tedious to start a new payment and wait for the webhook to be called. Replaying the previous call makes your development process way faster.

The dashboard by default is accessible at http://localhost:4040.

#### Dashboard in a VM

In order to make this work inside a VM, you have to perform some additional steps:

First, make sure the VM can be accessed on port 4040. Then, create a file inside the VM holding this configuration:

`web_addr: 0.0.0.0:4040`

Now, kill the `ngrok` process that's still running and start it with this slightly adjusted command:

`ngrok http -config=/path/to/config/ngrok.conf -host-header=rewrite webhook.example.vagrant:80`

You will get a screen looking similar to the previous screenshot, though the IDs have changed. The previous URL doesn't work anymore, but you got a new URL. Also, the `Web Interface` URL got changed:

![](https://cdn-images-1.medium.com/max/1000/1*3FZq37TF4dmBqRc1R0FMVg.png)

Now direct your browser to `http://webhook.example.vagrant:4040` to access the dashboard. Also, make a call to `https://e65642b5.ngrok.io/webhook.php`. This will probably result in an error in your browser, but the dashboard should show the request being made.

#### Final remarks

The examples above are pseudo-code. The reason is that every external system uses webhooks in a different way. I tried to give an example based on a fictional PSP implementation, as probably many developers have to deal with payments at some moment.

Please be aware that your webhook URL can also be used by others with bad intentions. Make sure to validate any input being sent to it.

Preferably, also add a token to the URL which is unique for each payment. This token must only be known by your system and the system sending the webhook.

Good luck testing and debugging your webhooks!

Note: I haven't tested this tutorial on Docker. However, this Docker container looks like a good starting point and includes clear instructions: [https://github.com/wernight/docker-ngrok][19].

Stefan Doorn

[https://github.com/stefandoorn][20]
[https://twitter.com/stefan_doorn][21]
[https://www.linkedin.com/in/stefandoorn][22]

--------------------------------------------------------------------------------

作者简介:

Backend Developer (PHP/Node/Laravel/Symfony/Sylius)

--------

via: https://medium.freecodecamp.org/testing-webhooks-while-using-vagrant-for-development-98b5f3bedb1d

作者:[Stefan Doorn][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://medium.freecodecamp.org/@stefandoorn
[1]:https://unsplash.com/photos/MYTyXb7fgG0?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[2]:https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]:https://ngrok.com/download
[4]:https://www.npmjs.com/package/ngrok
[5]:http://webhook.example.vagrant/
[6]:http://39741ffc.ngrok.io/
[7]:http://39741ffc.ngrok.io/
[8]:http://webhook.example.vagrant:4040/
[9]:https://e65642b5.ngrok.io/webhook.php
[10]:https://sendgrid.com/blog/whats-webhook/
[11]:http://example.com/webhooks/payment-update
[12]:https://sendgrid.com/blog/whats-webhook/
[13]:https://ngrok.com/
[14]:https://ngrok.com/docs
[15]:http://39741ffc.ngrok.io/
[16]:https://ngrok.com/docs#bind-tls
[17]:http://localhost:4040
[18]:https://e65642b5.ngrok.io/webhook.php
[19]:https://github.com/wernight/docker-ngrok
[20]:https://github.com/stefandoorn
[21]:https://twitter.com/stefan_doorn
[22]:https://www.linkedin.com/in/stefandoorn
Use LVM to Upgrade Fedora
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)

Most users find it simple to upgrade [from one Fedora release to the next][1] with the standard process. However, there are inevitably many special cases that Fedora can also handle. This article shows one way to upgrade using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. This example upgrades a Fedora 26 system to Fedora 28.

The process shown here is more complex than the standard upgrade process. You should have a strong grasp of how LVM works before you use this process. Without proper skill and care, you could **lose data and/or be forced to reinstall your system!** If you don't know what you're doing, it is **highly recommended** you stick to the supported upgrade methods only.

### Preparing the system

Before you start, ensure your existing system is fully updated.

```
$ sudo dnf update
$ sudo systemctl reboot # or GUI equivalent
```

Check that your root filesystem is mounted via LVM.

```
$ df /
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26  20511312 14879816   4566536  77% /

$ sudo lvs
LV        VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22       vg_sdg -wi-ao----  15.00g
f24_64    vg_sdg -wi-ao----  20.00g
f26       vg_sdg -wi-ao----  20.00g
home      vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao----  10.00g
swap      vg_sdg -wi-ao----   4.00g
test      vg_sdg -wi-a-----   1.00g
vg_vm     vg_sdg -wi-ao----  20.00g
```

If you used the defaults when you installed Fedora, you may find the root filesystem is mounted on an LV named root. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named f26 and is 20G in size.

Next, ensure you have enough free space in LVM.

```
$ sudo vgs
VG     #PV #LV #SN Attr   VSize   VFree
vg_sdg   1   8   0 wz--n- 232.39g 42.39g
```

This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. Managing LVM in general is beyond the scope of this article, but here are some possibilities:

1. /home on its own LV, and lots of free space in /home

You can log out of the GUI and switch to a text console, logging in as root. Then you can unmount /home, and use lvreduce -r to resize and reallocate the /home LV. You might also boot from a Live image (so as not to use /home) and use the gparted GUI utility.

2. Most of the LVM space allocated to a root LV, with lots of free space in the filesystem

You can boot from a Live image and use the gparted GUI utility to reduce the root LV. Consider moving /home to its own filesystem at this point, but that is beyond the scope of this article.

3. Most of the file systems are full, but you have LVs you no longer need

You can delete the unneeded LVs, freeing space in the volume group for this operation.

### Creating a backup

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system's volume group (VG). In this example it's vg_sdg.

```
$ sudo lvcreate -L20G -n f28 vg_sdg
  Logical volume "f28" created.
```

Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named f26_s.

```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
  Using default stripesize 64.00 KiB.
  Logical volume "f26_s" created.
```

The snapshot can now be copied to the new LV. **Make sure you have the destination correct** when you substitute your own volume names. If you are not careful, you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, **not** your live root.

```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```

Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:

```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```

Then remove the snapshot volume, which is no longer needed:

```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
  Logical volume "f26_s" successfully removed
```

You may wish to make a snapshot of /home at this point if you have it mounted separately. Sometimes, upgraded applications make changes that are incompatible with a much older Fedora version. If needed, edit the /etc/fstab file on the **old** root filesystem to mount the snapshot on /home. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of /home for good measure.
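
As a sketch, that optional /home snapshot uses the same pattern as the root snapshot above, only with a larger size to absorb ongoing writes (the 4G here is an arbitrary example; size it for your workload):

```
$ sync
$ sudo lvcreate -s -L4G -n home_s vg_sdg/home
```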

### Configuring to use the new root

First, mount the new LV and back up your existing GRUB settings:

```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```

Edit grub.cfg and add this before the first menuentry, unless you already have it:

```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```

Edit grub.cfg and change the default menuentry to activate and mount the new root filesystem. Change this line:

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

So that it reads like this. Remember to use the correct names for your system's VG and LV entries!

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

Edit /mnt/f28/etc/default/grub and change the default root LV activated at boot:

```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```

Edit /mnt/f28/etc/fstab to change the mounting of the root filesystem from the old volume:

```
/dev/mapper/vg_sdg-f26 /                       ext4    defaults        1 1
```

to the new one:

```
/dev/mapper/vg_sdg-f28 /                       ext4    defaults        1 1
```

Then, add a read-only mount of the old system for reference purposes:

```
/dev/mapper/vg_sdg-f26 /f26                    ext4    ro,nodev,noexec 0 0
```

If your root filesystem is mounted by UUID, you will need to change this. Here is how if your root is an ext4 filesystem:

```
$ sudo e2label /dev/vg_sdg/f28 F28
```

Now edit /mnt/f28/etc/fstab to use the label. Change the mount line for the root file system so it reads like this:

```
LABEL=F28 /                       ext4    defaults        1 1
```

### Rebooting and upgrading

Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy with a new LV name, and ready for dnf system-upgrade! If anything goes wrong, use the Old Boot Menu to boot back into your working system, which this procedure avoids touching.

```
$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28  20511312 14903196   4543156  77% /
/dev/mapper/vg_sdg-f26  20511312 14866412   4579940  77% /f26
```

You may wish to verify that using the Old Boot Menu does indeed get you back to having root mounted on the old root filesystem.
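
A quick, non-destructive way to confirm which LV is backing / after each reboot is findmnt, a standard util-linux command; the output shown is what you would expect after booting the new default entry:

```
$ findmnt -no SOURCE /
/dev/mapper/vg_sdg-f28
```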

Now follow the instructions at [this wiki page][2]. If anything goes wrong with the system upgrade, you have a working system to boot back into.

### Future ideas

The steps to create a new LV and copy a snapshot of root to it could be automated with a generic script. It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one would be able to type this command:

```
$ sudo copyfs / f28
```

Supplying the mount point to copy makes it clearer what is happening, and copying other mount points like /home could be useful.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-lvm-upgrade-fedora/

作者:[Stuart D Gathman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/sdgathman/
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade
Quiet log noise with Python and machine learning
======

Logreduce saves debugging time by picking out anomalies from mountains of log data.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ)

Continuous integration (CI) jobs can generate massive volumes of data. When a job fails, figuring out what went wrong can be a tedious process that involves investigating logs to discover the root cause—which is often found in a fraction of the total job output. To make it easier to separate the most relevant data from the rest, the [Logreduce][1] machine learning model is trained using previous successful job runs to extract anomalies from failed runs' logs.

This principle can also be applied to other use cases, for example, extracting anomalies from [Journald][2] or other systemwide regular log files.

### Using machine learning to reduce noise

A typical log file contains many nominal events ("baselines") along with a few exceptions that are relevant to the developer. Baselines may contain random elements such as timestamps or unique identifiers that are difficult to detect and remove. To remove the baseline events, we can use a [k-nearest neighbors pattern recognition algorithm][3] (k-NN).

![](https://opensource.com/sites/default/files/uploads/ml-generic-workflow.png)

Log events must be converted to numeric values for k-NN regression. Using the generic feature extraction tool [HashingVectorizer][4] enables the process to be applied to any type of log. It hashes each word and encodes each event in a sparse matrix. To further reduce the search space, tokenization removes known random words, such as dates or IP addresses.

![](https://opensource.com/sites/default/files/uploads/hashing-vectorizer.png)

Once the model is trained, the k-NN search tells us the distance of each new event from the baseline.

![](https://opensource.com/sites/default/files/uploads/kneighbors.png)

This [Jupyter notebook][5] demonstrates the process and graphs the sparse matrix vectors.

![](https://opensource.com/sites/default/files/uploads/anomaly-detection-with-scikit-learn.png)

### Introducing Logreduce

The Logreduce Python software transparently implements this process. Logreduce's initial goal was to assist with [Zuul CI][6] job failure analyses using the build database, and it is now integrated into the [Software Factory][7] development forge's job logs process.

At its simplest, Logreduce compares files or directories and removes lines that are similar. Logreduce builds a model for each source file and outputs any of the target's lines whose distances are above a defined threshold, using the following syntax: **distance | filename:line-number: line-content**.

```
$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log
INFO logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines)
0.244 | audit.log:19963: type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com
INFO logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines)
99.99% reduction (from 20015 lines to 1)
```

A more advanced Logreduce use can train a model offline to be reused. Many variants of the baselines can be used to fit the k-NN search tree.

```
$ logreduce dir-train audit.clf /var/log/audit/audit.log.*
INFO logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines)
DEBUG logreduce.Classifier - audit.clf: written
$ logreduce dir-run audit.clf /var/log/audit/audit.log
```

Logreduce also implements interfaces to discover baselines for Journald time ranges (days/weeks/months) and Zuul CI job build histories. It can also generate HTML reports that group anomalies found in multiple files in a simple interface.

![](https://opensource.com/sites/default/files/uploads/html-report.png)

### Managing baselines

The key to using k-NN regression for anomaly detection is to have a database of known good baselines, which the model uses to detect lines that deviate too far. This method relies on the baselines containing all nominal events, as anything that isn't found in the baseline will be reported as anomalous.

CI jobs are great targets for k-NN regression because the job outputs are often deterministic and previous runs can be automatically used as baselines. Logreduce features Zuul job roles that can be used as part of a failed job post task in order to issue a concise report (instead of the full job's logs). This principle can be applied to other cases, as long as baselines can be constructed in advance. For example, a nominal system's [SoS report][8] can be used to find issues in a defective deployment.

![](https://opensource.com/sites/default/files/uploads/baselines.png)

### Anomaly classification service

The next version of Logreduce introduces a server mode to offload log processing to an external service where reports can be further analyzed. It also supports importing existing reports and requests to analyze a Zuul build. The services run analyses asynchronously and feature a web interface to adjust scores and remove false positives.

![](https://opensource.com/sites/default/files/uploads/classification-interface.png)

Reviewed reports can be archived as a standalone dataset with the target log files and the scores for anomalous lines recorded in a flat JSON file.

### Project roadmap

Logreduce is already being used effectively, but there are many opportunities for improving the tool. Plans for the future include:

* Curating many annotated anomalies found in log files and producing a public domain dataset to enable further research. Anomaly detection in log files is a challenging topic, and having a common dataset to test new models would help identify new solutions.

* Reusing the annotated anomalies with the model to refine the distances reported. For example, when users mark lines as false positives by setting their distance to zero, the model could reduce the score of those lines in future reports.

* Fingerprinting archived anomalies to detect when a new report contains an already known anomaly. Thus, instead of reporting the anomaly's content, the service could notify the user that the job hit a known issue. When the issue is fixed, the service could automatically restart the job.

* Supporting more baseline discovery interfaces for targets such as SOS reports, Jenkins builds, Travis CI, and more.

If you are interested in getting involved in this project, please contact us on the **#log-classify** Freenode IRC channel. Feedback is always appreciated!

Tristan Cacqueray will present [Reduce your log noise using machine learning][9] at the [OpenStack Summit][10], November 13-15 in Berlin.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning

作者:[Tristan de Cacqueray][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/tristanc
[1]: https://pypi.org/project/logreduce/
[2]: http://man7.org/linux/man-pages/man8/systemd-journald.service.8.html
[3]: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
[4]: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html
[5]: https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev/blob/master/datasets/notebook/anomaly-detection-with-scikit-learn.ipynb
[6]: https://zuul-ci.org
[7]: https://www.softwarefactory-project.io
[8]: https://sos.readthedocs.io/en/latest/
[9]: https://www.openstack.org/summit/berlin-2018/summit-schedule/speakers/4307
[10]: https://www.openstack.org/summit/berlin-2018/
Exploring the Linux kernel: The secrets of Kconfig/kbuild
======

Dive into understanding how the Linux config/build system works.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.

### Kconfig

The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:

| Target | Description |
| --- | --- |
| config | Update current config utilizing a line-oriented program |
| nconfig | Update current config utilizing a ncurses menu-based program |
| menuconfig | Update current config utilizing a menu-based program |
| xconfig | Update current config utilizing a Qt-based frontend |
| gconfig | Update current config utilizing a GTK+ based frontend |
| oldconfig | Update current config utilizing a provided .config as base |
| localmodconfig | Update current config disabling modules not loaded |
| localyesconfig | Update current config converting local mods to core |
| defconfig | New config with default from Arch-supplied defconfig |
| savedefconfig | Save current config as ./defconfig (minimal config) |
| allnoconfig | New config where all options are answered with 'no' |
| allyesconfig | New config where all options are accepted with 'yes' |
| allmodconfig | New config selecting modules when possible |
| alldefconfig | New config with all symbols set to default |
| randconfig | New config with a random answer to all options |
| listnewconfig | List new options |
| olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
| kvmconfig | Enable additional options for KVM guest kernel support |
| xenconfig | Enable additional options for xen dom0 and guest kernel support |
| tinyconfig | Configure the tiniest possible kernel |

I think **menuconfig** is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile**, there are several host programs, including **conf**, **mconf**, and **nconf**. Except for **conf**, each of them is responsible for one of the GUI-based config targets, so **conf** deals with most of them.
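
For orientation, a typical configure-and-build session using these targets might look like this (run from the top of any recent kernel tree; the job count is just an example):

```
# Start from the architecture's default configuration...
make defconfig

# ...adjust it interactively (the result is written to .config)...
make menuconfig

# ...then build; on x86-64 this produces vmlinux and arch/x86/boot/bzImage.
make -j"$(nproc)"
```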

Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.

Most of the config targets have roughly the same internal process (shown below):

![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)

Note that all configuration items have a default value.

The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:

> .config
> /lib/modules/$(shell,uname -r)/.config
> /etc/kernel-config
> /boot/config-$(shell,uname -r)
> ARCH_DEFCONFIG
> arch/$(ARCH)/defconfig

If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig**, the database is updated according to your customization. Finally, the configuration database is dumped into the .config file.

But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig**, but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list.

Here is an illustration of what **syncconfig** does:

![](https://opensource.com/sites/default/files/uploads/syncconfig.png)

**syncconfig** takes .config as input and outputs many other files, which fall into three categories:

* **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile:

```
obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o
```

* **autoconf.h** is used in C-language source files.

* Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.

After configuration, we will know which files and code pieces are not compiled.

### kbuild

Component-wise building, called _recursive make_, is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. By dividing source files into different modules/components, each component is managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.

Kbuild refers to different kinds of makefiles:

* **Makefile** is the top makefile located in source root.

* **.config** is the kernel configuration file.

* **arch/$(ARCH)/Makefile** is the arch makefile, which is the supplement to the top makefile.

* **scripts/Makefile.*** describes common rules for all kbuild makefiles.

* Finally, there are about 500 **kbuild makefiles**.

The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.***, builds up each intermediate object, and links all the intermediate objects into vmlinux. Kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.

As an example, let's look at how vmlinux is produced on x86-64:

![vmlinux overview][4]

(The illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.)

All the **.o** files that go into vmlinux first go into their own **built-in.a**, which is indicated via the variables **KBUILD_VMLINUX_INIT**, **KBUILD_VMLINUX_MAIN**, and **KBUILD_VMLINUX_LIBS**, then are collected into the vmlinux file.

Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code:

```
# In top Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
        +$(call if_changed,link-vmlinux)

# Variable assignments
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)

export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
export KBUILD_VMLINUX_LIBS := $(libs-y1)
export KBUILD_LDS          := arch/$(SRCARCH)/kernel/vmlinux.lds

init-y    := init/
drivers-y := drivers/ sound/ firmware/
net-y     := net/
libs-y    := lib/
core-y    := usr/
virt-y    := virt/

# Transform to corresponding built-in.a
init-y    := $(patsubst %/, %/built-in.a, $(init-y))
core-y    := $(patsubst %/, %/built-in.a, $(core-y))
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
net-y     := $(patsubst %/, %/built-in.a, $(net-y))
libs-y1   := $(patsubst %/, %/lib.a, $(libs-y))
libs-y2   := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
virt-y    := $(patsubst %/, %/built-in.a, $(virt-y))

# Set up the dependency. vmlinux-deps are all intermediate objects; vmlinux-dirs
# are phony targets, so every time this rule is reached, the recipe of vmlinux-dirs
# will be executed. Refer to "4.6 Phony Targets" of `info make`
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;

# Variable vmlinux-dirs is the directory part of each built-in.a
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
                     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
                     $(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))

# The entry of recursive make
$(vmlinux-dirs):
        $(Q)$(MAKE) $(build)=$@ need-builtin=1
```

The recursive make recipe is expanded, for example:

```
make -f scripts/Makefile.build obj=init need-builtin=1
```

This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh**, the vmlinux file is finally under source root.

#### Understanding vmlinux vs. bzImage

Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64:

![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)

The source root vmlinux is stripped, compressed, put into **piggy.S**, then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**.

A host program called **build**, provided by the kernel, builds these two (or three) parts into the final bzImage file.
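
As a side note, the kernel tree ships a helper for going the other way, recovering an uncompressed image from a bzImage, which can be handy when debugging; a sketch, assuming a standard x86-64 build:

```
# Extract the embedded vmlinux from the compressed kernel image.
scripts/extract-vmlinux arch/x86/boot/bzImage > vmlinux.extracted

# Sanity check: the result should be a plain ELF executable again.
file vmlinux.extracted
```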

#### Dependency tracking

Kbuild tracks three kinds of dependencies:

1. All prerequisite files (both **.c** and **.h**)
2. **CONFIG_** options used in all prerequisite files
3. Command-line dependencies used to compile the target

The first one is easy to understand, but what about the second and third? Kernel developers often see code pieces like this:

```
#ifdef CONFIG_SMP
__boot_cpu_id = cpu;
#endif
```

When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.

When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:

```
main.o: defs.h
        recipe...
```

When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: **-MD depfile**

```
# In scripts/Makefile.lib
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
          -include $(srctree)/include/linux/compiler_types.h \
          $(__c_flags) $(modkern_cflags) \
          $(basename_flags) $(modname_flags)
```

This would generate a **.d** file with content like:

```
init_task.o: init/init_task.c include/linux/kconfig.h \
 include/generated/autoconf.h include/linux/init_task.h \
 include/linux/rcupdate.h include/linux/types.h \
 ...
```

Then the host program **[fixdep][6]** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.<target>.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:

```
# The command line used to compile the target
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
...
# The dependency files
deps_init/init_task.o := \
$(wildcard include/config/posix/timers.h) \
$(wildcard include/config/arch/task/struct/on/stack.h) \
$(wildcard include/config/thread/info/in/task.h) \
...
  include/uapi/linux/types.h \
  arch/x86/include/uapi/asm/types.h \
  include/uapi/asm-generic/types.h \
...
```

A **.<target>.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.

The secret behind this is that **fixdep** will parse the **depfile** (**.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.

### Looking ahead

Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/kbuild-and-kconfig

作者:[Cao Jin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/pinocchio
[b]: https://github.com/lujun9972
[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt
[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt
[3]: https://opensource.com/file/411516
[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview)
[5]: https://blog.csdn.net/richardysteven/article/details/52502734
[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c
Create animated, scalable vector graphic images with MacSVG
======

Open source SVG: The writing is on the wall

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE)

The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG on the browser.

Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG).

An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created by points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.

### Move it, move it

You can create SVG images with several drawing programs, including open source [Inkscape][4] and Adobe Illustrator. Getting your images to "do something" requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar's attention.

[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].

Developed by Douglas Ward of Conway, Arkansas, MacSVG is an "open source Mac OS app for designing HTML5 SVG art and animation," according to its [website][5].

I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.

![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)

It is important to first learn what makes "the writing on the wall" actually write.

The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means producing the whole thing into an array. That's a simple overview, but it helped me understand what was supposed to happen and why.

With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.

### The writing is on the WordPress

I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi's [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere.

I opened the source code of my SVG in [Brackets][11], and here are the results:
```
|
||||
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
|
||||
```
|
||||
|
||||
What would you use MacSVG for?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation
|
||||
|
||||
作者:[Jeff Macharyas][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rikki-endsley
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Belshazzar
|
||||
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
|
||||
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
|
||||
[4]: https://inkscape.org/
|
||||
[5]: https://macsvg.org/
|
||||
[6]: https://github.com/dsward2/macSVG
|
||||
[7]: https://gist.github.com/mbostock/5649592
|
||||
[8]: https://macharyas.com/
|
||||
[9]: https://wordpress.org/plugins/svg-support/
|
||||
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
|
||||
[11]: http://brackets.io/
|
@ -1,162 +0,0 @@
|
||||
DF-SHOW – A Terminal File Manager Based On An Old DOS Application
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-720x340.png)
|
||||
|
||||
If you have worked on good-old MS-DOS, you might have used or heard about **DF-EDIT**. DF-EDIT, short for **D**irectory **F**ile **Edit**or, is an obscure file manager originally written by **Larry Kroeker** for MS-DOS and PC-DOS systems, where it is used to display the contents of a given directory or file. Today, I stumbled upon a similar utility named **DF-SHOW** (**D**irectory **F**ile **S**how), a terminal file manager for Unix-like operating systems. It is a Unix rewrite of the obscure DF-EDIT file manager, based on the DF-EDIT 2.3d release from 1986. DF-SHOW is completely free, open source, and released under GPLv3.
|
||||
|
||||
With DF-SHOW, you can:
|
||||
|
||||
* List contents of a directory,
|
||||
* View files,
|
||||
* Edit files using your default file editor,
|
||||
* Copy files to/from different locations,
|
||||
* Rename files,
|
||||
* Delete files,
|
||||
* Create new directories from within the DF-SHOW interface,
|
||||
* Update file permissions, owners and groups,
|
||||
* Search files matching a search term,
|
||||
* Launch executable files.
|
||||
|
||||
|
||||
|
||||
### DF-SHOW Usage
|
||||
|
||||
DF-SHOW consists of two programs, namely **“show”** and **“sf”**.
|
||||
|
||||
**Show command**
|
||||
|
||||
The “show” program (similar to the `ls` command) is used to display the contents of a directory, create new directories, rename, delete files/folders, update permissions, search files and so on.
|
||||
|
||||
To view the list of contents in a directory, use the following command:
|
||||
|
||||
```
|
||||
$ show <directory path>
|
||||
```
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
$ show dfshow
|
||||
```
|
||||
|
||||
Here, dfshow is a directory. If you invoke the “show” command without specifying a directory path, it will display the contents of the current directory.
|
||||
|
||||
Here is how the DF-SHOW default interface looks.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-1.png)
|
||||
|
||||
As you can see, the DF-SHOW interface is self-explanatory.
|
||||
|
||||
On the top bar, you see the list of available options, such as Copy, Delete, Edit, Modify, etc.
|
||||
|
||||
The complete list of available options is given below:
|
||||
|
||||
* **C** opy,
|
||||
* **D** elete,
|
||||
* **E** dit,
|
||||
* **H** idden,
|
||||
* **M** odify,
|
||||
* **Q** uit,
|
||||
* **R** ename,
|
||||
* **S** how,
|
||||
* h **U** nt,
|
||||
* e **X** ec,
|
||||
* **R** un command,
|
||||
* **E** dit file,
|
||||
* **H** elp,
|
||||
* **M** ake dir,
|
||||
* **Q** uit,
|
||||
* **S** how dir
|
||||
|
||||
|
||||
|
||||
In each option, one letter is capitalized and marked in bold. Just press the capitalized letter to perform the respective operation. For example, to rename a file, just press **R**, type the new name, and hit ENTER to rename the selected item.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-2.png)
|
||||
|
||||
To display all options or cancel an operation, just press the **ESC** key.
|
||||
|
||||
Also, you will see a bunch of function keys at the bottom of the DF-SHOW interface for navigating through the contents of a directory.
|
||||
|
||||
* **UP/DOWN** arrows or **F1/F2** – Move up and down (one line at a time),
|
||||
* **PgUp/PgDn** – Move one page at a time,
|
||||
* **F3/F4** – Instantly go to the top or bottom of the list,
|
||||
* **F5** – Refresh,
|
||||
* **F6** – Mark/Unmark files (marked files are indicated with an asterisk (*) in front of them),
|
||||
* **F7/F8** – Mark/Unmark all files at once,
|
||||
* **F9** – Sort the list by date & time, name, or size,
|
||||
|
||||
|
||||
|
||||
Press **h** to learn more about the **show** command and its options.
|
||||
|
||||
To exit DF-SHOW, simply press **q**.
|
||||
|
||||
**SF Command**
|
||||
|
||||
The “sf” (show files) program is used to display the contents of a file.
|
||||
|
||||
```
|
||||
$ sf <file>
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-3.png)
|
||||
|
||||
Press **h** to learn more about the “sf” command and its options. To quit, press **q**.
|
||||
|
||||
Want to give it a try? Great! Go ahead and install DF-SHOW on your Linux system as described below.
|
||||
|
||||
### Installing DF-SHOW
|
||||
|
||||
DF-SHOW is available in [**AUR**][1], so you can install it on any Arch-based system using AUR programs such as [**Yay**][2].
|
||||
|
||||
```
|
||||
$ yay -S dfshow
|
||||
```
|
||||
|
||||
On Ubuntu and its derivatives:
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository ppa:ian-hawdon/dfshow
|
||||
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get install dfshow
|
||||
```
|
||||
|
||||
On other Linux distributions, you can compile and build it from the source as shown below.
|
||||
|
||||
```
|
||||
$ git clone https://github.com/roberthawdon/dfshow
|
||||
$ cd dfshow
|
||||
$ ./bootstrap
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
The author of the DF-SHOW project has so far rewritten only some of the applications of the DF-EDIT utility. Since the source code is freely available on GitHub, you can add more features, improve the code, and submit or fix bugs (if there are any). It is still in the alpha stage, but fully functional.
|
||||
|
||||
Have you tried it already? If so, how did it go? Tell us about your experience in the comments section below.
|
||||
|
||||
And that’s all for now. Hope this was useful. More good stuff to come.
|
||||
|
||||
Stay tuned!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://aur.archlinux.org/packages/dfshow/
|
||||
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
@ -1,266 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: subject: (How To Customize The GNOME 3 Desktop?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
[#]: url: ( )
|
||||
|
||||
How To Customize The GNOME 3 Desktop?
|
||||
======
|
||||
|
||||
We have received many emails from users asking for an article about GNOME 3 desktop customization, but we did not find the time to write it until now.
|
||||
|
||||
I had been using the Ubuntu operating system on my primary laptop for a long time and got bored of it, so I wanted to test some other distribution related to Arch Linux.
|
||||
|
||||
I preferred to go with Manjaro, so I installed Manjaro 18.0 with the GNOME 3 desktop on my laptop.
|
||||
|
||||
I have customized my desktop the way I want it, so I would like to take this opportunity to write up this article in a detailed way to help others.
|
||||
|
||||
This article will help others customize their desktop without headaches.
|
||||
|
||||
I’m not going to include all of my customizations; I will only add the things that are necessary and useful for Linux desktop users.
|
||||
|
||||
If you feel some tweak is missing from this article, please mention it in the comments section. That will be very helpful for other users.
|
||||
|
||||
### 1) How to Launch Activities Overview in GNOME 3 Desktop?
|
||||
|
||||
The Activities Overview displays all running applications and launched/opened windows. Open it by pressing the `Super Key` or by clicking the `Activities` button in the top-left corner.
|
||||
|
||||
It allows you to launch new applications, switch windows, and move windows between workspaces.
|
||||
|
||||
You can exit the Activities Overview by selecting a window, application, or workspace, or by pressing the `Super Key` or `Esc Key`.
|
||||
|
||||
Activities Overview Screenshot.
|
||||
![][2]
|
||||
|
||||
### 2) How to Resize Windows in GNOME 3 Desktop?
|
||||
|
||||
Launched windows can be maximized, unmaximized, and snapped to one side of the screen (left or right) using the following key combinations.
|
||||
|
||||
* `Super Key+Down Arrow:` To unmaximize the window.
|
||||
* `Super Key+Up Arrow:` To maximize the window.
|
||||
* `Super Key+Right Arrow:` To snap the window to the right half of the screen.
|
||||
* `Super Key+Left Arrow:` To snap the window to the left half of the screen.
|
||||
|
||||
|
||||
|
||||
Use `Super Key+Down Arrow` to unmaximize the window.
|
||||
![][3]
|
||||
|
||||
Use `Super Key+Up Arrow` to maximize the window.
|
||||
![][4]
|
||||
|
||||
Use `Super Key+Right Arrow` to snap the window to the right half of the screen.
|
||||
![][5]
|
||||
|
||||
Use `Super Key+Left Arrow` to snap the window to the left half of the screen.
|
||||
![][6]
|
||||
|
||||
This feature helps you view two applications at a time, a.k.a. split screen.
|
||||
![][7]
|
||||
|
||||
### 3) How to Display Applications in GNOME 3 Desktop?
|
||||
|
||||
Click on the `Show Application Grid` button in the Dash to display all the installed applications on your system.
|
||||
![][8]
|
||||
|
||||
### 4) How to Add Applications on Dash in GNOME 3 Desktop?
|
||||
|
||||
To speed up your day-to-day activities, you may want to add frequently used applications to the Dash by dragging their launchers onto it.
|
||||
|
||||
It will allow you to directly launch your favorite applications without searching for them. To do so, simply right-click an application and use the `Add to Favorites` option.
|
||||
![][9]
|
||||
|
||||
To remove an application launcher (a favorite) from the Dash, either drag it from the Dash to the grid button or simply right-click on it and use the `Remove from Favorites` option.
|
||||
![][10]
|
||||
|
||||
### 5) How to Switch Between Workspaces in GNOME 3 Desktop?
|
||||
|
||||
Workspaces allow you to group windows together and help you segregate your work properly. If you are working on multiple things and want to group each task and its related windows separately, workspaces are a very handy option for you.
|
||||
|
||||
You can switch workspaces in two ways: open the Activities Overview and select a workspace from the right-hand side, or use the following key combinations.
|
||||
|
||||
* Use `Ctrl+Alt+Up` to switch to the workspace above.
|
||||
* Use `Ctrl+Alt+Down` to switch to the workspace below.
|
||||
|
||||
|
||||
|
||||
![][11]
|
||||
|
||||
### 6) How to Switch Between Applications (Application Switcher) in GNOME 3 Desktop?
|
||||
|
||||
Use either `Alt+Tab` or `Super+Tab` to launch the application switcher and move between applications.
|
||||
|
||||
Once it is launched, just keep holding the Alt or Super key and hit Tab to move to the next application, in left-to-right order.
|
||||
|
||||
### 7) How to Add UserName to Top Panel in GNOME 3 Desktop?
|
||||
|
||||
If you would like to add your username to the top panel, install the following [Add Username to Top Panel][12] GNOME extension.
|
||||
![][13]
|
||||
|
||||
### 8) How to Add Microsoft Bing’s wallpaper in GNOME 3 Desktop?
|
||||
|
||||
Install the following [Bing Wallpaper Changer][14] GNOME shell extension to change your wallpaper every day to Microsoft Bing’s wallpaper.
|
||||
![][15]
|
||||
|
||||
### 9) How to Enable Night Light in GNOME 3 Desktop?
|
||||
|
||||
Night Light is a well-known feature that reduces eye strain by shifting your screen from blue light to a dim yellow after sunset.
|
||||
|
||||
Similar functionality is available on smartphones. Other well-known apps for the same purpose are f.lux and **[redshift][16]**.
|
||||
|
||||
To enable this, navigate to **System Settings** >> **Devices** >> **Displays** and turn Night Light on.
|
||||
![][17]
|
||||
|
||||
Once it is enabled, a status icon will be placed on the top panel.
|
||||
![][18]
|
||||
|
||||
### 10) How to Show the Battery Percentage in GNOME 3 Desktop?
|
||||
|
||||
The battery percentage shows you the exact battery level. To enable it, follow the steps below.
|
||||
|
||||
Start GNOME Tweaks >> **Top Bar** >> **Battery Percentage** and switch it on.
|
||||
![][19]
|
||||
|
||||
After this modification, you can see the battery percentage on the top panel.
|
||||
![][20]
|
||||
|
||||
### 11) How to Enable Mouse Right Click in GNOME 3 Desktop?
|
||||
|
||||
By default, right click is disabled in the GNOME 3 desktop environment. To enable it, follow the steps below.
|
||||
|
||||
Start GNOME Tweaks >> **Keyboard & Mouse** >> Mouse Click Emulation and select “Area” option.
|
||||
![][21]
|
||||
|
||||
### 12) How to Enable Minimize On Click in GNOME 3 Desktop?
|
||||
|
||||
Enable the one-click minimize feature, which lets you minimize an open window by clicking its icon in the Dash instead of using the window's minimize button.
|
||||
|
||||
```
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
|
||||
```
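You can verify the change, or undo it later, with the same `gsettings` tool (a quick sketch against the same dash-to-dock schema used above):

```
$ gsettings get org.gnome.shell.extensions.dash-to-dock click-action
'minimize'
$ gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
```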
|
||||
|
||||
### 13) How to Customize Dock in GNOME 3 Desktop?
|
||||
|
||||
If you would like to make your Dock look similar to the Deepin desktop or macOS, use the following set of commands.
|
||||
|
||||
```
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED
|
||||
$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50
|
||||
```
|
||||
|
||||
![][22]
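If you want to return to the stock Dock later, you don't have to remember the old values; `gsettings reset-recursively` restores every key in the schema to its default (a sketch using the same schema as above):

```
$ gsettings reset-recursively org.gnome.shell.extensions.dash-to-dock
```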
|
||||
|
||||
### 14) How to Show Desktop in GNOME 3 Desktop?
|
||||
|
||||
By default, the `Super Key+D` shortcut doesn’t show your desktop. To configure it, follow the steps below.
|
||||
|
||||
Go to Settings >> **Devices** >> **Keyboard**, click **Hide all normal windows** under Navigation, press `Super Key+D`, and finally hit the `Set` button to enable it.
|
||||
![][23]
|
||||
|
||||
### 15) How to Customize Date and Time Format?
|
||||
|
||||
By default, GNOME 3 shows the date and time as `Sun 04:48`. That is not very informative; if you want output in the format `Sun Dec 2 4:49 AM`, follow the steps below.
|
||||
|
||||
**For Date Modification:** Start GNOME Tweaks >> **Top Bar** and enable `Weekday` option under Clock.
|
||||
![][24]
|
||||
|
||||
**For Time Modification:** Settings >> **Details** >> **Date & Time**, then choose the `AM/PM` option for the time format.
|
||||
![][25]
|
||||
|
||||
After these modifications, the date and time format looks like this:
|
||||
![][26]
|
||||
|
||||
### 16) How to Permanently Disable Unused Services at Boot?
|
||||
|
||||
In my case, I’m not going to use **Bluetooth** or **cups, a.k.a. the printer service**, so I’m disabling these services on my laptop. The `systemctl` commands below work on any systemd-based distribution; to manage the packages themselves on Arch-based systems, use the **[Pacman Package Manager][27]**.
|
||||
For Bluetooth
|
||||
|
||||
```
|
||||
$ sudo systemctl stop bluetooth.service
|
||||
$ sudo systemctl disable bluetooth.service
|
||||
$ sudo systemctl mask bluetooth.service
|
||||
$ systemctl status bluetooth.service
|
||||
```
|
||||
|
||||
For cups
|
||||
|
||||
```
|
||||
$ sudo systemctl stop org.cups.cupsd.service
|
||||
$ sudo systemctl disable org.cups.cupsd.service
|
||||
$ sudo systemctl mask org.cups.cupsd.service
|
||||
$ systemctl status org.cups.cupsd.service
|
||||
```
|
||||
|
||||
Finally, verify whether these services are disabled at boot using the following command. If you want to double-check, you can reboot once and verify again. See the following link to learn more about **[systemctl][28]** usage:
|
||||
|
||||
```
|
||||
$ systemctl list-unit-files --type=service | grep enabled
|
||||
[email protected] enabled
|
||||
dbus-org.freedesktop.ModemManager1.service enabled
|
||||
dbus-org.freedesktop.NetworkManager.service enabled
|
||||
dbus-org.freedesktop.nm-dispatcher.service enabled
|
||||
display-manager.service enabled
|
||||
gdm.service enabled
|
||||
[email protected] enabled
|
||||
linux-module-cleanup.service enabled
|
||||
ModemManager.service enabled
|
||||
NetworkManager-dispatcher.service enabled
|
||||
NetworkManager-wait-online.service enabled
|
||||
NetworkManager.service enabled
|
||||
systemd-fsck-root.service enabled-runtime
|
||||
tlp-sleep.service enabled
|
||||
tlp.service enabled
|
||||
```
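If you need one of these services again later, the steps can be reversed; a sketch for Bluetooth (the same pattern applies to cups):

```
$ sudo systemctl unmask bluetooth.service
$ sudo systemctl enable bluetooth.service
$ sudo systemctl start bluetooth.service
```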
|
||||
|
||||
### 17) How to Install Icons & Themes in GNOME 3 Desktop?
|
||||
|
||||
A bunch of icon and theme sets are available for the GNOME desktop, so choose the **[GTK Themes][29]** and **[Icon Themes][30]** you like. To configure this further, navigate to the links below, which will make your desktop more elegant.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg
|
||||
[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg
|
||||
[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg
|
||||
[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/
|
||||
[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg
|
||||
[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg
|
||||
[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg
|
||||
[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg
|
||||
[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg
|
||||
[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg
|
||||
[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg
|
||||
[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg
|
||||
[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg
|
||||
[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg
|
||||
[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/
|
||||
[29]: https://www.2daygeek.com/category/gtk-theme/
|
||||
[30]: https://www.2daygeek.com/category/icon-theme/
|
@ -1,145 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
|
||||
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
|
||||
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)
|
||||
|
||||
Podman and user namespaces: A marriage made in heaven
|
||||
======
|
||||
Learn how to use Podman to run containers in separate user namespaces.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)
|
||||
|
||||
[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces.
|
||||
|
||||
I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable).
|
||||
|
||||
If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody.
|
||||
|
||||
### An example
|
||||
|
||||
Here's how that might work. First, I create a file in my system owned by root.
|
||||
|
||||
```
|
||||
$ sudo bash -c "echo Test > /tmp/test"
|
||||
$ sudo chmod 600 /tmp/test
|
||||
$ sudo ls -l /tmp/test
|
||||
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
|
||||
```
|
||||
|
||||
Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000.
|
||||
|
||||
```
|
||||
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
|
||||
# id
|
||||
uid=0(root) gid=0(root) groups=0(root)
|
||||
# ls -l /tmp/test
|
||||
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
|
||||
# cat /tmp/test
|
||||
cat: /tmp/test: Permission denied
|
||||
```
|
||||
|
||||
The **--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999), to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is 100001 on the host.
|
||||
|
||||
Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE**, it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even if they don't own the file and it is not world readable or writable.
|
||||
|
||||
User namespace capabilities are not the same as capabilities on the host. They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs.
|
||||
|
||||
Note that SELinux and other technologies also limit what would happen if a container process broke out of the container.
|
||||
|
||||
### Using `podman top` to show user namespaces
|
||||
|
||||
We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host.
|
||||
|
||||
Let's start by running a sleep container using our UID mapping.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
|
||||
```
|
||||
|
||||
Now run **podman top**:
|
||||
|
||||
```
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 100000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000.
|
||||
|
||||
Now let's run a second container, but this time we will choose a separate UID map starting at 200000.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 200000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
200000 23644 23632 1 08:08 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host.
|
||||
|
||||
Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000.
|
||||
|
||||
This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning.
|
||||
|
||||
### Problems with user namespaces
|
||||
|
||||
For several years, I've advocated user namespace as the security tool everyone wants but hardly anyone has used. The reason is that there hasn't been any filesystem support for it, i.e., a shifting filesystem.
|
||||
|
||||
In containers, you want to share the **base** image between lots of containers. The examples above use the Fedora base image in each example. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem.
|
||||
|
||||
|
||||
Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, container/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image.
|
||||
|
||||
When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**.
|
||||
|
||||
Note if you are doing a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use container/storage to reverse the shift and push the image with all files chowned back to real UID=0.
|
||||
|
||||
This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container.
|
||||
|
||||
Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out.
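If you want to experiment with the flag directly, a manual overlay mount might look like the sketch below; the lowerdir/upperdir/workdir/merged directories are hypothetical, and on a kernel older than 4.19 the metacopy option will simply be rejected.

```
$ sudo mount -t overlay overlay \
    -o lowerdir=/lower,upperdir=/upper,workdir=/work,metacopy=on /merged
```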
|
||||
|
||||
This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container.
|
||||
|
||||
This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system.
|
||||
|
||||
### Going forward
|
||||
|
||||
I want to add a new flag, like **--userns=auto**, to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto**, you won't even need to set the flag.
|
||||
|
||||
Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that.
|
||||
|
||||
In my next article, I will explain how to run Podman as non-root in a user namespace.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/podman-and-user-namespaces
|
||||
|
||||
作者:[Daniel J Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rhatdan
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://podman.io/
|
||||
[2]: https://github.com/containers/libpod
|
||||
[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
|
||||
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
|
||||
[5]: https://chmodcommand.com/chmod-660/
|
||||
[6]: https://en.wikipedia.org/wiki/Chown
|
||||
[7]: https://github.com/containers/storage
|
||||
[8]: https://en.wikipedia.org/wiki/OverlayFS
|
||||
[9]: https://buildah.io/
|
||||
[10]: http://cri-o.io/
|
@ -1,166 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Prometheus)
|
||||
[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
Getting started with Prometheus
|
||||
======
|
||||
Learn to install and write queries for the Prometheus monitoring and alerting system.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
|
||||
|
||||
[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter**; although this is not Prometheus' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy.
|
||||
|
||||
Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
|
||||
|
||||
Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.
|
||||
|
||||
Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4].
|
||||
|
||||
Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics**, which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
|
||||
|
||||
In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.
|
||||
|
||||
### Making sense of Prometheus' configuration file
|
||||
|
||||
The number of seconds between scrapes of **/metrics** controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
|
||||
|
||||
Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets.
|
||||
|
||||
### Installing Prometheus
|
||||
|
||||
In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6].
|
||||
|
||||
Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows:
|
||||
|
||||
```
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'prometheus'
|
||||
|
||||
static_configs:
|
||||
- targets: ['localhost:9090']
|
||||
|
||||
- job_name: 'webservers'
|
||||
|
||||
static_configs:
|
||||
- targets: ['<node exporter node IP>:9100']
|
||||
```
|
||||
|
||||
Start Prometheus with Docker by running the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
|
||||
```
|
||||
|
||||
By default, the Prometheus server uses port 9090. If this port is already in use, you can change it by adding the parameter **--web.listen-address="<IP of machine>:<port>"** at the end of the previous command.
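For example, to serve the web interface on port 9091 instead, the command might look like the following sketch. Since the extra arguments replace the image's default command, **--config.file** is passed explicitly as well (an assumption about the prom/prometheus image's defaults):

```
$ sudo docker run -d -p 9091:9091 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --web.listen-address="0.0.0.0:9091"
```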
|
||||
|
||||
In the machine you want to monitor, download and run the **node_exporter** container by using the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" \
    -v "/:/rootfs" --net="host" prom/node-exporter \
    --path.procfs /host/proc --path.sysfs /host/sys \
    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
|
||||
```
|
||||
|
||||
For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only.
|
||||
|
||||
To verify that **node_exporter** is running, open your browser and navigate to **http://<node_exporter host IP>:9100/metrics**. All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
|
||||
|
||||
To verify the Prometheus server installation, open your browser and navigate to <http://localhost:9090>.
|
||||
|
||||
You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/targets-up.png)
|
||||
|
||||
### Using Prometheus queries
|
||||
|
||||
It's time to get familiar with [PromQL][7], Prometheus' query syntax, and its graphing web interface. Go to **<http://localhost:9090/graph>** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
|
||||
|
||||
Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype="ext4"}
|
||||
```
|
||||
|
||||
When filtering, you can match "exactly equal" (**=**), "not equal" (**!=**), "regex-match" (**=~**), and "do not regex-match" (**!~**). The following examples illustrate this:
|
||||
|
||||
To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
|
||||
```
|
||||
|
||||
To exclude a match:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype!="xfs"}
|
||||
```
|
||||
|
||||
You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
|
||||
|
||||
For example, the following query returns the samples from the last five minutes up to the present:
|
||||
|
||||
```
|
||||
node_memory_MemAvailable_bytes[5m]
|
||||
```
|
||||
|
||||
Prometheus also includes functions to allow advanced queries, such as this:
|
||||
|
||||
```
|
||||
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
|
||||
```
|
||||
|
||||
Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
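The same pattern works for other resources. For example, here is a sketch of overall memory usage as a percentage, assuming node_exporter's standard memory metrics:

```
100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
```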
|
||||
|
||||
### Learn more
|
||||
|
||||
Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
|
||||
|
||||
To learn more about Prometheus and its functions, I recommend the following resources:
|
||||
|
||||
+ About [PromQL][8]
|
||||
+ What [node_exporters collects][9]
|
||||
+ [Prometheus functions][10]
|
||||
+ [4 open source monitoring tools][11]
|
||||
+ [Now available: The open source guide to DevOps monitoring tools][12]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/introduction-prometheus
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://prometheus.io/
|
||||
[2]: https://www.cncf.io/
|
||||
[3]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[4]: https://en.wikipedia.org/wiki/PagerDuty
|
||||
[5]: https://grafana.com/
|
||||
[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus
|
||||
[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[9]: https://github.com/prometheus/node_exporter#collectors
|
||||
[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/
|
||||
[11]: https://opensource.com/article/18/8/open-source-monitoring-tools
|
||||
[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools
|
@ -1,144 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to detect automatically generated emails)
|
||||
[#]: via: (https://arp242.net/weblog/autoreply.html)
|
||||
[#]: author: (Martin Tournoij https://arp242.net/)
|
||||
|
||||
How to detect automatically generated emails
|
||||
======
|
||||
|
||||
|
||||
|
||||
|
||||
When you send out an auto-reply from an email system you want to take care to not send replies to automatically generated emails. At best, you will get a useless delivery failure. At worst, you will get an infinite email loop and a world of chaos.
|
||||
|
||||
Turns out that reliably detecting automatically generated emails is not always easy. Here are my observations based on writing a detector for this and scanning about 100,000 emails with it (extensive personal archive and company archive).
|
||||
|
||||
### Auto-submitted header
|
||||
|
||||
Defined in [RFC 3834][1].
|
||||
|
||||
This is the ‘official’ standard way to indicate your message is an auto-reply. You should **not** send a reply if `Auto-Submitted` is present and has a value other than `no`.
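A quick way to test this against a raw message is a simple header grep (a sketch; `message.eml` is a hypothetical saved message). Any value other than `no` means you should suppress the reply:

```
$ grep -i '^Auto-Submitted:' message.eml
Auto-Submitted: auto-replied
```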
|
||||
|
||||
### X-Auto-Response-Suppress header
|
||||
|
||||
Defined [by Microsoft][2]
|
||||
|
||||
This header is used by Microsoft Exchange, Outlook, and perhaps some other products. Many newsletters and such also set this. You should **not** send a reply if `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`.
|
||||
|
||||
### List-Id and List-Unsubscribe headers
|
||||
|
||||
Defined in [RFC 2919][3]
|
||||
|
||||
You usually don’t want to send auto-replies to mailing lists or newsletters. Pretty much all mailing lists and most newsletters set at least one of these headers. You should **not** send a reply if either of these headers is present. The value is unimportant.
|
||||
|
||||
### Feedback-ID header
|
||||
|
||||
Defined [by Google][4].
|
||||
|
||||
Gmail uses this header to identify newsletters and to generate statistics/reports for the owners of those newsletters. You should **not** send a reply if this header is present; the value is unimportant.
|
||||
|
||||
### Non-standard ways
|
||||
|
||||
The above methods are well-defined and clear (even though some are non-standard). Unfortunately some email systems do not use any of them :-( Here are some additional measures.
|
||||
|
||||
#### Precedence header
|
||||
|
||||
Not really defined anywhere, mentioned in [RFC 2076][5] where its use is discouraged (but this header is commonly encountered).
|
||||
|
||||
Note that checking for the mere existence of this field is not recommended, as some mails use `normal` and some other (obscure) values (though this is not very common).
|
||||
|
||||
My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`.
|
||||
|
||||
#### Other obscure headers
|
||||
|
||||
A collection of other (somewhat obscure) headers I’ve encountered. I would recommend **not** sending an auto-reply if one of these is set. Most mails also set one of the above headers, but some don’t (but it’s not very common).
|
||||
|
||||
* `X-MSFBL`; can’t really find a definition (Microsoft header?), but I only have auto-generated mails with this header.
|
||||
|
||||
* `X-Loop`; not really defined anywhere, and somewhat rare, but sometimes it’s set. It’s most often set to the address that should not get emails, but `X-Loop: yes` is also encountered.
|
||||
|
||||
* `X-Autoreply`; fairly rare, and always seems to have a value of `yes`.
|
||||
|
||||
|
||||
|
||||
|
||||
#### Email address
|
||||
|
||||
Check if the `From` or `Reply-To` headers contain `noreply`, `no-reply`, or `no_reply` (regex: `^no.?reply@`).
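As a shell sketch of the same check (again with a hypothetical `message.eml`), grep's extended regex can apply the pattern to both headers at once:

```
$ grep -Ei '^(From|Reply-To):.*no.?reply@' message.eml && echo "looks auto-generated"
```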
|
||||
|
||||
#### HTML only
|
||||
|
||||
If an email only has an HTML part but no text part, that is a good indication it is an auto-generated mail or newsletter. Pretty much all regular mail clients do set a text part.
|
||||
|
||||
#### Delivery failures
|
||||
|
||||
Many delivery failure messages don’t really indicate that they’re failures. Some ways to check this:
|
||||
|
||||
* `From` contains `mailer-daemon` or `Mail Delivery Subsystem`
|
||||
|
||||
|
||||
|
||||
Many mail libraries leave some sort of footprint, and most regular mail clients override this with their own data. Checking for this seems to work fairly well.
|
||||
|
||||
* `X-Mailer: Microsoft CDO for Windows 2000` – Set by some MS software; I can only find it on autogenerated mails. Yes, it’s still used in 2015.
|
||||
|
||||
* `Message-ID` header contains `.JavaMail.` – I’ve found a few (5 in 50k) regular messages with this, but not many; the vast majority (thousands) of messages are newsletters, order confirmations, etc.
|
||||
|
||||
* `^X-Mailer` starts with `PHP`. This should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer blah blah`. The same as `JavaMail` applies.
|
||||
|
||||
* `X-Library` presence; only [Indy][6] seems to set this.
|
||||
|
||||
* `X-Mailer` starts with `wdcollect`. Set by some Plesk mails.
|
||||
|
||||
* `X-Mailer` starts with `MIME-tools`.
|
||||
|
||||
|
||||
|
||||
|
||||
### Final precaution: limit the number of replies
|
||||
|
||||
Even when following all of the above advice, you may still encounter an email program that will slip through. This can be very dangerous, as email systems that simply `IF email THEN send_email` have the potential to cause infinite email loops.
|
||||
|
||||
For this reason, I recommend keeping track of which emails you’ve sent an autoreply to and rate limiting this to at most n emails in n minutes. This will break the back-and-forth chain.
|
||||
|
||||
We use one email per five minutes, but something less strict will probably also work well.
|
||||
|
||||
### What you need to set on your auto-response
|
||||
|
||||
The specifics for this will vary depending on what sort of mails you’re sending. This is what we use for auto-reply mails:
|
||||
|
||||
```
|
||||
Auto-Submitted: auto-replied
|
||||
X-Auto-Response-Suppress: All
|
||||
Precedence: auto_reply
|
||||
```
|
||||
|
||||
### Feedback
|
||||
|
||||
You can mail me at [martin@arp242.net][7] or [create a GitHub issue][8] for feedback, questions, etc.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://arp242.net/weblog/autoreply.html
|
||||
|
||||
作者:[Martin Tournoij][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://arp242.net/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://tools.ietf.org/html/rfc3834
|
||||
[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx
|
||||
[3]: https://tools.ietf.org/html/rfc2919)
|
||||
[4]: https://support.google.com/mail/answer/6254652?hl=en
|
||||
[5]: http://www.faqs.org/rfcs/rfc2076.html
|
||||
[6]: http://www.indyproject.org/index.en.aspx
|
||||
[7]: mailto:martin@arp242.net
|
||||
[8]: https://github.com/Carpetsmoker/arp242.net/issues/new
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zianglei)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,264 +0,0 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: " "
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "How To Parse And Pretty Print JSON With Linux Commandline Tools"
|
||||
[#]: via: "https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/"
|
||||
[#]: author: "EDITOR https://www.ostechnix.com/author/editor/"
|
||||
|
||||
How To Parse And Pretty Print JSON With Linux Commandline Tools
|
||||
======
|
||||
|
||||
**JSON** is a lightweight and language-independent data storage format that is easy to integrate with most programming languages and also easy for humans to read, at least when properly formatted. JSON stands for **J**ava**S**cript **O**bject **N**otation; although it originated with JavaScript and is primarily used to exchange data between server and browser, it is now used in many fields, including embedded systems. Here we’re going to parse and pretty print JSON with command line tools on Linux. This is extremely useful for handling or manipulating large JSON data in shell scripts.
|
||||
|
||||
### What is pretty printing?
|
||||
|
||||
JSON data is structured to be somewhat human readable. However, in most cases JSON data is stored in a single line, often without even a line ending character.
|
||||
|
||||
Obviously that’s not very convenient for reading and editing manually.
|
||||
|
||||
That’s when pretty printing is useful. The name is quite self-explanatory: re-formatting the JSON text so it is more legible to humans. This is known as **JSON pretty printing**.
|
||||
|
||||
### Parse And Pretty Print JSON With Linux Commandline Tools
|
||||
|
||||
JSON data can be parsed with command line text processors like **awk**, **sed**, and **grep**; in fact, JSON.awk is an awk script that does exactly that. However, there are some dedicated tools for the same purpose:
|
||||
|
||||
1. **jq** or **jshon** – JSON parsers for the shell; both are quite useful.
|
||||
|
||||
2. Shell scripts like **JSON.sh** or **jsonv.sh** to parse JSON in the bash, zsh, or dash shell.
|
||||
|
||||
3. **JSON.awk** – a JSON parser written as an awk script.
|
||||
|
||||
4. Python modules like **json.tool**.
|
||||
|
||||
5. **underscore-cli** – Node.js- and JavaScript-based.
|
||||
|
||||
|
||||
|
||||
|
||||
In this tutorial I’m focusing only on **jq**, which is a quite powerful JSON parser for shells with advanced filtering and scripting capabilities.
|
||||
|
||||
### JSON pretty printing
|
||||
|
||||
JSON data often arrives on a single line and is nearly illegible for humans, so JSON pretty printing exists to make it readable.
|
||||
|
||||
**Example:** jsonip.com returns your external IP address in JSON format. Fetch it with the **curl** or **wget** tools like below.
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O -
|
||||
```
|
||||
|
||||
The actual data looks like this:
|
||||
|
||||
```
|
||||
{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}
|
||||
```
|
||||
|
||||
Now pretty print it with jq:
|
||||
|
||||
```
|
||||
$ wget -cq http://jsonip.com/ -O - | jq '.'
|
||||
```
|
||||
|
||||
After filtering the result with jq, it should look like below:
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"ip": "111.222.333.444",
|
||||
|
||||
"about": "/about",
|
||||
|
||||
"Pro!": "http://getjsonip.com"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
The same thing can be done with the Python **json.tool** module. Here is an example:
|
||||
|
||||
```
|
||||
$ cat anything.json | python -m json.tool
|
||||
```
|
||||
|
||||
This Python-based solution should be fine for most users, but it’s not that useful where Python is not pre-installed or cannot be installed, as on some embedded systems.
|
||||
|
||||
However, the json.tool Python module has a distinct advantage: it’s cross-platform, so you can use it seamlessly on Windows, Linux, or macOS.
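Depending on your installation, the interpreter may be available as **python3**; json.tool also accepts a file name argument directly, so the extra **cat** isn’t strictly needed:

```
$ python3 -m json.tool anything.json
```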
|
||||
|
||||
|
||||
### How to parse JSON with jq
|
||||
|
||||
First, you need to install jq. It’s already packaged by most GNU/Linux distributions; install it with your distribution’s package manager command.
|
||||
|
||||
On Arch Linux:
|
||||
|
||||
```
|
||||
$ sudo pacman -S jq
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint:
|
||||
|
||||
```
|
||||
$ sudo apt-get install jq
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install jq
|
||||
```
|
||||
|
||||
On openSUSE:
|
||||
|
||||
```
|
||||
$ sudo zypper install jq
|
||||
```
|
||||
|
||||
For other OS or platforms, see the [official installation instructions][1].
|
||||
|
||||
**Basic filters and identifiers of jq**
|
||||
|
||||
jq can read JSON data either from **stdin** or from a **file**; you’ll use both depending on the situation.
|
||||
|
||||
The single symbol **.** is the most basic filter. These filters are also called **object identifier-index** filters. Using a single **.** with jq basically pretty prints the input JSON file.
|
||||
|
||||
**Single quotes** – You don’t always have to use single quotes, but if you’re combining several filters in a single line, then you must use them.
|
||||
|
||||
**Double quotes** – You have to enclose any special character like **@**, **#**, **$** within double quotes, as in this example: **jq '.foo."@bar"'**
|
||||
|
||||
**Raw data print** – If for any reason you need only the final parsed data, not enclosed within double quotes, use the -r flag with the jq command, like this: **jq -r .foo.bar**.
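For instance, with the jsonip.com response shown earlier (the key name here is just the one from that example output), the difference looks like this:

```
$ wget -cq http://jsonip.com/ -O - | jq '.about'
"/about"

$ wget -cq http://jsonip.com/ -O - | jq -r '.about'
/about
```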
|
||||
|
||||
**Parsing specific data**
|
||||
|
||||
To filter out a specific part of the JSON, you have to look into the pretty-printed JSON file’s data hierarchy.
|
||||
|
||||
An example of JSON data, from Wikipedia:
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"firstName": "John",
|
||||
|
||||
"lastName": "Smith",
|
||||
|
||||
"age": 25,
|
||||
|
||||
"address": {
|
||||
|
||||
"streetAddress": "21 2nd Street",
|
||||
|
||||
"city": "New York",
|
||||
|
||||
"state": "NY",
|
||||
|
||||
"postalCode": "10021"
|
||||
|
||||
},
|
||||
|
||||
"phoneNumber": [
|
||||
|
||||
{
|
||||
|
||||
"type": "home",
|
||||
|
||||
"number": "212 555-1234"
|
||||
|
||||
},
|
||||
|
||||
{
|
||||
|
||||
"type": "fax",
|
||||
|
||||
"number": "646 555-4567"
|
||||
|
||||
}
|
||||
|
||||
],
|
||||
|
||||
"gender": {
|
||||
|
||||
"type": "male"
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
I’m going to use this JSON data as an example in this tutorial, saved as **sample.json**.
|
||||
|
||||
Let’s say I want to filter out the address from sample.json file. So the command should be like:
|
||||
|
||||
```
|
||||
$ jq .address sample.json
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
{
|
||||
|
||||
"streetAddress": "21 2nd Street",
|
||||
|
||||
"city": "New York",
|
||||
|
||||
"state": "NY",
|
||||
|
||||
"postalCode": "10021"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Again, let’s say I want the postal code. Then I have to add another **object identifier-index**, i.e. another filter.
|
||||
|
||||
```
|
||||
$ cat sample.json | jq .address.postalCode
|
||||
```
|
||||
|
||||
Also note that the **filters are case sensitive**; you have to use the exact same key string to get meaningful output instead of null.
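To illustrate with the sample.json file from above, a filter with the wrong case quietly returns null:

```
$ jq .address.postalcode sample.json
null

$ jq .address.postalCode sample.json
"10021"
```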
|
||||
|
||||
**Parsing elements from JSON array**
|
||||
|
||||
Elements of a JSON array are enclosed within square brackets, and arrays are undoubtedly quite versatile to use.
|
||||
|
||||
To parse elements from an array, you have to use the **[]** identifier along with the other object identifier-index filters.
|
||||
|
||||
In this sample JSON data, the phone numbers are stored inside an array. To get all the contents of this array, you only have to use the brackets, as in this example:
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[] sample.json
|
||||
```
|
||||
|
||||
Let’s say you just want the first element of the array. Then use the array index, starting from 0: for the first item use **[0]**, and increment it by one for each following item.
|
||||
|
||||
```
|
||||
$ jq .phoneNumber[0] sample.json
|
||||
```
|
||||
|
||||
**Scripting examples**
|
||||
|
||||
Let’s say I want only the number for home, not the entire JSON array data. Here’s when scripting within the jq command comes in handy.
|
||||
|
||||
```
|
||||
$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'
|
||||
```
|
||||
|
||||
Here I’m first piping the result of one filter to another, then using **select** to pick a particular type of entry, and piping that result to yet another filter.
|
||||
|
||||
Explaining every type of jq filter and scripting is beyond the scope and purpose of this tutorial. For a deeper understanding, it’s highly suggested to read the jq manual.
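As one more small example before you dive into the manual, built-in functions such as map() and join() can be combined with the same pipe syntax to collapse the array into a single line (again using the sample.json file from above):

```
$ jq -r '.phoneNumber | map(.number) | join(", ")' sample.json
212 555-1234, 646 555-4567
```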
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/
|
||||
|
||||
作者:[EDITOR][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://stedolan.github.io/jq/download/
|
@ -1,150 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Let’s try dwm — dynamic window manager)
|
||||
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
|
||||
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
|
||||
|
||||
Let’s try dwm — dynamic window manager
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
|
||||
|
||||
You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through them using keyboard shortcuts. This article helps you get started using dwm.
|
||||
|
||||
## **Installation**
|
||||
|
||||
To install dwm on Fedora, run:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm dwm-user
|
||||
```
|
||||
|
||||
The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, which will be explained later in this article.
|
||||
|
||||
Additionally, to be able to lock the screen when needed, we’ll also install _slock_ — a simple X display locker.
|
||||
|
||||
```
|
||||
$ sudo dnf install slock
|
||||
```
|
||||
|
||||
However, you can use a different one based on your personal preference.
|
||||
|
||||
## **Quick start**
|
||||
|
||||
To start dwm, choose the _dwm-user_ option on the login screen.
|
||||
|
||||
![][2]
|
||||
|
||||
After you log in, you’ll see a very simple desktop. In fact, the only thing there will be a bar at the top listing the nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
|
||||
|
||||
### Launching applications
|
||||
|
||||
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There’s also a shortcut _Alt+Shift+Enter_ for opening a terminal.
|
||||
|
||||
Now that some apps are running, have a look at the layouts.
|
||||
|
||||
### Layouts
|
||||
|
||||
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
|
||||
|
||||
The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
|
||||
|
||||
![][3]
|
||||
|
||||
The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
|
||||
|
||||
To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
|
||||
|
||||
![][4]
|
||||
|
||||
The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
|
||||
|
||||
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
|
||||
|
||||
### Workspaces and tags
|
||||
|
||||
Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
|
||||
|
||||
Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
|
||||
|
||||
## **Configuration**
|
||||
|
||||
To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry, in Fedora it’s as simple as just editing one file in your home directory and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
|
||||
|
||||
First, you need to copy the file into your home directory using a command similar to the following:
|
||||
|
||||
```
|
||||
$ mkdir ~/.dwm
|
||||
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
|
||||
```
|
||||
|
||||
You can get the exact path by running _man dwm-start._
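If you’d rather not look up the exact version string, a shell glob should also work, assuming only one dwm source tree is installed:

```
$ mkdir -p ~/.dwm
$ cp /usr/src/dwm-*/config.def.h ~/.dwm/config.h
```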
|
||||
|
||||
Second, just edit the _~/.dwm/config.h_ file. As an example, let’s configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
|
||||
|
||||
Considering we’ve installed the _slock_ package mentioned earlier in this post, we need to add the following two lines into the file to make it work:
|
||||
|
||||
Under the _/* commands */_ comment, add:
|
||||
|
||||
```
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
```
|
||||
|
||||
And the following line into _static Key keys[]_ :
|
||||
|
||||
```
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
```
|
||||
|
||||
In the end, it should look as follows (the added lines are the _slockcmd_ definition and the _XK_l_ key binding):
|
||||
|
||||
```
|
||||
...
|
||||
/* commands */
|
||||
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
|
||||
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
|
||||
static const char *termcmd[] = { "st", NULL };
|
||||
static const char *slockcmd[] = { "slock", NULL };
|
||||
|
||||
static Key keys[] = {
|
||||
/* modifier key function argument */
|
||||
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
|
||||
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
|
||||
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
|
||||
...
|
||||
```
|
||||
|
||||
Save the file.
|
||||
|
||||
Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, it’s fast enough that you won’t even notice it.
|
||||
|
||||
You can try locking your screen now by pressing _Alt+Shift+L_, and then logging back in by typing your password and pressing Enter.
|
||||
|
||||
## **Conclusion**
|
||||
|
||||
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.
|
||||
|
||||
To learn more about dwm, see the project’s homepage at <https://dwm.suckless.org/>.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
|
||||
|
||||
作者:[Adam Šamalík][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/asamalik/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
|
||||
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (luming)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,111 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
|
||||
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
|
||||
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
|
||||
|
||||
How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
|
||||
======
|
||||
|
||||
_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_
|
||||
|
||||
The [Raspberry Pi][1] is one of the [most popular single-board computers (SBCs)][2]. If you are interested in this topic, I believe you’ve finally got yourself a Pi device. I also advise getting all the [additional Raspberry Pi accessories][3] to get started with your device.
|
||||
|
||||
You’re ready to turn it on and start to tinker around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
|
||||
|
||||
Today, let’s go ahead and learn how to turn on and shut down a Raspberry Pi, as it doesn’t really feature a ‘power button’ of any sort.
|
||||
|
||||
For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.
|
||||
|
||||
||||
### Turn on Raspberry Pi
|
||||
|
||||
![Micro USB port for Power][7]
|
||||
|
||||
The micro USB port powers the Raspberry Pi; you turn it on by plugging the power cable into the micro USB port. But before you do that, you should make sure that you have done the following things:
|
||||
|
||||
* Prepare the micro SD card with Raspbian according to the official [guide][8] and insert it into the micro SD card slot.
* Plug in the HDMI cable, a USB keyboard and a mouse.
* Plug in the Ethernet cable (optional).
|
||||
|
||||
|
||||
|
||||
Once you have done the above, plug in the power cable. This turns on the Raspberry Pi, and the display will light up and load the operating system.
|
||||
|
||||
||||
### Shutting Down the Pi
|
||||
|
||||
Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.
|
||||
|
||||
![Turn off Raspberry Pi graphically][9]
|
||||
|
||||
Alternatively, you can use the [shutdown command][10] in the terminal:
|
||||
|
||||
```
|
||||
sudo shutdown now
|
||||
```
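Equivalently, **sudo poweroff** halts the system, and shutdown also accepts a delay; for example, this standard systemd invocation (not Pi-specific) shuts the system down after one minute:

```
sudo shutdown -h +1
```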
|
||||
|
||||
Once the shutdown process has started, **wait** till it completely finishes, and then you can cut the power. Once the Pi shuts down, there is no real way to turn it back on without cycling the power. You could use the GPIO pins to turn on the Pi from the shutdown state, but it requires additional modding.
|
||||
|
||||
||||
|
||||
_Note: Micro USB ports tend to be fragile, so turn the power off/on at the source instead of frequently unplugging and re-plugging the micro USB cable._
|
||||
|
||||
Well, that’s about all you should know about turning on and shutting down the Pi. What do you plan to use it for? Let me know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/turn-on-raspberry-pi/
|
||||
|
||||
作者:[Chinmay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/chinmay/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/
|
||||
[2]: https://itsfoss.com/raspberry-pi-alternatives/
|
||||
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
|
||||
[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case))
|
||||
[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
|
||||
[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply)
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
|
||||
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
|
||||
[10]: https://linuxhandbook.com/linux-shutdown-command/
|
@ -1,88 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/)
|
||||
[#]: author: (editor https://www.ostechnix.com/author/editor/)
|
||||
|
||||
Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8]
|
||||
======
|
||||
|
||||
![Introduction To Hyperledger Project][1]
|
||||
|
||||
Once a new technology platform reaches a threshold level of popularity in terms of active development and commercial interest, major global companies and smaller start-ups alike rush to catch a slice of the pie. **Linux** was one such platform back in the day. Once the ubiquity of its applications was realized, individuals, firms, and institutions started displaying their interest in it, and by 2000 the **Linux Foundation** was formed.
|
||||
|
||||
The Linux Foundation aims to standardize and develop Linux as a platform by sponsoring its development team. It is a non-profit organization supported by software and IT behemoths such as Microsoft, Oracle, Samsung, Cisco, IBM, and Intel, among others[1]. This is excluding the hundreds of individual developers who offer their services for the betterment of the platform. Over the years the Linux Foundation has taken many projects under its roof. The **Hyperledger Project** is its fastest-growing one to date.
|
||||
|
||||
Such consortium-led development has a lot of advantages when it comes to turning technology into usable, useful forms. Developing the standards, libraries, and back-end protocols for large-scale projects is expensive and resource-intensive, without any income being generated from it. Hence, it makes sense for companies to pool their resources to develop the common “boring” parts by supporting such organizations, and later, once work on these standard parts is complete, to simply plug in and customize their own products. Apart from the economics of the model, such collaborative efforts also yield standards that allow for easier use and integration into aspiring products and services.
|
||||
|
||||
Other major innovations that were once, or are currently being, developed following this consortium model include standards for Wi-Fi (the Wi-Fi Alliance), mobile telephony, etc.
|
||||
|
||||
### Introduction to Hyperledger Project (HLP)
|
||||
|
||||
The Hyperledger Project was launched in December 2015 by the Linux Foundation and is currently among the fastest-growing projects it has incubated. It’s an umbrella organization for collaborative efforts in developing and advancing tools and standards for [**blockchain**][2]-based distributed ledger technologies (DLT). Major industry players supporting the project include **IBM**, **Intel** and **SAP Ariba**, among [**others**][3]. The HLP aims to create frameworks for individuals and companies to create shared as well as closed blockchains, as required to further their own requirements. The design principles include a strong tilt toward developing a globally deployable, scalable, robust platform with a focus on privacy and future auditability[2]. It is also important to note that most of the blockchains proposed under the HLP, and the frameworks for building them, are permissioned rather than public platforms.
|
||||
|
||||
### Development goals and structure: Making it plug & play
|
||||
|
||||
Although enterprise-facing platforms exist from the likes of the Ethereum Alliance, HLP is by definition business-facing and is supported by industry behemoths who contribute to and further the development of the many modules that come under the HLP banner. HLP incubates projects after their induction into the cause and, after finishing work on them and ironing out the kinks, rolls them out for the public. Members of the Hyperledger Project contribute their own work; for example, IBM contributed its Fabric platform for collaborative development. The codebase is absorbed and developed in-house by the project group and rolled out to all members equally for their use.
|
||||
|
||||
Such processes make the modules in HLP highly flexible, plug-in frameworks that support rapid development and roll-outs in enterprise settings. Furthermore, other comparable platforms are open **permissionless blockchains**, or rather **public chains**, by default; even though it is possible to adapt them to specific applications, HLP modules support permissioned operation natively.
|
||||
|
||||
The differences and use cases of public & private blockchains are covered more [**here**][4] in this comparative primer on the same.
|
||||
|
||||
The Hyperledger Project’s mission is four-fold, according to **Brian Behlendorf**, the executive director of the project.
|
||||
|
||||
They are:
|
||||
|
||||
1. To create an enterprise grade DLT framework and standards which anyone can port to suit their specific industrial or personal needs.
|
||||
2. To give rise to a robust open source community to aid the ecosystem.
|
||||
3. To promote and further participation of industry members of the said ecosystem such as member firms.
|
||||
4. To host a neutral unbiased infrastructure for the HLP community to gather and share updates and developments regarding the same.
|
||||
|
||||
|
||||
|
||||
The original document can be accessed [**here**][5].
|
||||
|
||||
### Structure of the HLP
|
||||
|
||||
The **HLP consists of 12 projects** that are classified as independent modules, each usually structured and working independently to develop its module. These are first studied for their capabilities and viability before being incubated. Proposals for additions can be made by any member of the organization. After a project is incubated, active development ensues, after which it is rolled out. The interoperability between these modules is given high priority, so regular communication between these groups is maintained by the community. Currently, 4 of these projects are categorized as active. The active tag implies these are ready for use but not ready for a major release yet. These 4 are arguably the most significant, or rather fundamental, modules for furthering the blockchain revolution. We’ll look at the individual modules and their functionalities in detail at a later time. However, a brief description of the Hyperledger Fabric platform, arguably the most popular among them, follows.
|
||||
|
||||
### Hyperledger Fabric
|
||||
|
||||
The **Hyperledger Fabric**[2] is a fully open-source, permissioned (non-public) blockchain-based DLT platform that is designed with enterprise uses in mind. The platform provides features and is structured to fit the enterprise environment. It is highly modular, allowing its developers to choose from different consensus protocols, **chaincode protocols ([smart contracts][6])**, or identity management systems as they go along. **It is a permissioned blockchain-based platform** that makes use of an identity management system, meaning participants will be aware of each other’s identities, which is required in an enterprise setting. Fabric allows for smart contract (_**“chaincode” is the term that the Hyperledger team uses**_) development in a variety of mainstream programming languages, including **Java**, **JavaScript**, and **Go**. This allows institutions and enterprises to make use of their existing talent in the area without hiring or re-training developers to write their own smart contracts. Fabric also uses an execute-order-validate system to handle smart contracts, for better reliability compared to the standard order-validate system used by other platforms providing smart contract functionality. Pluggable performance, identity management systems, DBMSs, consensus platforms, etc. are other features of Fabric that keep it miles ahead of its competition.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Projects such as the Hyperledger Fabric platform enable a faster rate of adoption of blockchain technology in mainstream use cases. The Hyperledger community structure itself supports open governance principles, and since all the projects are led as open source platforms, this improves the security and accountability the teams exhibit in pushing out commitments.
|
||||
|
||||
Since major applications of such projects involve working with enterprises to further development of platforms and standards, the Hyperledger project is currently at a great position with respect to comparable projects by others.
|
||||
|
||||
**References:**
|
||||
|
||||
* **[1][Samsung takes a seat with Intel and IBM at the Linux Foundation | TheINQUIRER][7]**
|
||||
* **[2] E. Androulaki et al., “Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains,” 2018.**
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
|
||||
|
||||
作者:[editor][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Introduction-To-Hyperledger-Project-720x340.png
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
|
||||
[3]: https://www.hyperledger.org/members
|
||||
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
|
||||
[5]: http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html
|
||||
[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
|
||||
[7]: https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation
|
@ -1,56 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qfzy1233 )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What is a Linux user?)
|
||||
[#]: via: (https://opensource.com/article/19/6/what-linux-user)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth)
|
||||
|
||||
What is a Linux user?
|
||||
======
|
||||
The definition of who is a "Linux user" has grown to be a bigger tent, and it's a great change.
|
||||
![][1]
|
||||
|
||||
> _Editor's note: this article was updated on Jun 11, 2019, at 1:15:19 PM to more accurately reflect the author's perspective on an open and inclusive community of practice in the Linux community._
|
||||
|
||||
In only two years, the Linux kernel will be 30 years old. Think about that! Where were you in 1991? Were you even born? I was 13! Between 1991 and 1993 a few Linux distributions were created, and at least three of them (Slackware, Debian, and Red Hat) provided the [backbone][2] the Linux movement was built on.
|
||||
|
||||
Getting a copy of a Linux distribution and installing and configuring it on a desktop or server was very different back then than today. It was hard! It was frustrating! It was an accomplishment if you got it running! We had to fight with incompatible hardware, configuration jumpers on devices, BIOS issues, and many other things. Even if the hardware was compatible, many times, you still had to compile the kernel, modules, and drivers to get them to work on your system.
|
||||
|
||||
If you were around during those days, you are probably nodding your head. Some readers might even call them the "good old days," because choosing to use Linux meant you had to learn about operating systems, computer architecture, system administration, networking, and even programming, just to keep the OS functioning. I am not one of them though: Linux being a regular part of everyone's technology experience is one of the most amazing changes in our industry!
|
||||
|
||||
Almost 30 years later, Linux has gone far beyond the desktop and server. You will find Linux in automobiles, airplanes, appliances, smartphones… virtually everywhere! You can even purchase laptops, desktops, and servers with Linux preinstalled. If you consider cloud computing, where corporations and even individuals can deploy Linux virtual machines with the click of a button, it's clear how widespread the availability of Linux has become.
|
||||
|
||||
With all that in mind, my question for you is: **How do you define a "Linux user" today?**
|
||||
|
||||
If you buy your parent or grandparent a Linux laptop from System76 or Dell, log them into their social media and email, and tell them to click "update system" every so often, they are now a Linux user. If you did the same with a Windows or MacOS machine, they would be Windows or MacOS users. It's incredible to me that, unlike the '90s, Linux is now a place for anyone and everyone to compute.
|
||||
|
||||
In many ways, this is due to the web browser becoming the "killer app" on the desktop computer. Now, many users don't care what operating system they are using as long as they can get to their app or service.
|
||||
|
||||
How many people do you know who use their phone, desktop, or laptop regularly but can't manage files, directories, and drivers on their systems? How many can't install a binary that isn't attached to an "app store" of some sort? How about compiling an application from scratch?! For me, it's almost no one. That's the beauty of open source software maturing along with an ecosystem that cares about accessibility.
|
||||
|
||||
Today's Linux user is not required to know, study, or even look up information as the Linux user of the '90s or early 2000s did, and that's not a bad thing. The old imagery of Linux being exclusively for bearded men is long gone, and I say good riddance.
|
||||
|
||||
There will always be room for a Linux user who is interested, curious, _fascinated_ about computers, operating systems, and the idea of creating, using, and collaborating on free software. There is just as much room for creative open source contributors on Windows and MacOS these days as well. Today, being a Linux user is being anyone with a Linux system. And that's a wonderful thing.
|
||||
|
||||
### The change to what it means to be a Linux user
|
||||
|
||||
When I started with Linux, being a user meant knowing how the operating system functioned in every way, shape, and form. Linux has matured in a way that allows the definition of "Linux user" to encompass a much broader world of possibility and the people who inhabit it. It may be obvious to say, but it is important to say clearly: anyone who uses Linux is an equal Linux user.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/what-linux-user
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22
|
||||
[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Flowsnow)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zmaster-zhang)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,311 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Learn object-oriented programming with Python)
|
||||
[#]: via: (https://opensource.com/article/19/7/get-modular-python-classes)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
Learn object-oriented programming with Python
|
||||
======
|
||||
Make your code more modular with Python classes.
|
||||
![Developing code.][1]
|
||||
|
||||
In my previous article, I explained how to [make Python modular][2] by using functions, creating modules, or both. Functions are invaluable to avoid repeating code you intend to use several times, and modules ensure that you can use your code across different projects. But there's another component to modularity: the class.
|
||||
|
||||
If you've heard the term _object-oriented programming_, then you may have some notion of the purpose classes serve. Programmers tend to consider a class as a virtual object, sometimes with a direct correlation to something in the physical world, and other times as a manifestation of some programming concept. Either way, the idea is that you can create a class when you want to create "objects" within a program for you or other parts of the program to interact with.
|
||||
|
||||
### Templates without classes
|
||||
|
||||
Assume you're writing a game set in a fantasy world, and you need this application to be able to drum up a variety of baddies to bring some excitement into your players' lives. Knowing quite a lot about functions, you might think this sounds like a textbook case for functions: code that needs to be repeated often but is written once with allowance for variations when called.
|
||||
|
||||
Here's an example of a purely function-based implementation of an enemy generator:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import random
|
||||
|
||||
def enemy(ancestry,gear):
|
||||
enemy=ancestry
|
||||
weapon=gear
|
||||
hp=random.randrange(0,20)
|
||||
ac=random.randrange(0,20)
|
||||
return [enemy,weapon,hp,ac]
|
||||
|
||||
def fight(tgt):
|
||||
print("You take a swing at the " + tgt[0] + ".")
|
||||
hit=random.randrange(0,20)
|
||||
if hit > tgt[3]:
|
||||
print("You hit the " + tgt[0] + " for " + str(hit) + " damage!")
|
||||
tgt[2] = tgt[2] - hit
|
||||
else:
|
||||
print("You missed.")
|
||||
|
||||
foe=enemy("troll","great axe")
|
||||
print("You meet a " + foe[0] + " wielding a " + foe[1])
|
||||
print("Type the a key and then RETURN to attack.")
|
||||
|
||||
while True:
|
||||
action=input()
|
||||
|
||||
if action.lower() == "a":
|
||||
fight(foe)
|
||||
|
||||
if foe[2] < 1:
|
||||
print("You killed your foe!")
|
||||
else:
|
||||
print("The " + foe[0] + " has " + str(foe[2]) + " HP remaining")
|
||||
```
|
||||
|
||||
The **enemy** function creates an enemy with several attributes, such as ancestry, a weapon, health points, and a defense rating. It returns a list of each attribute, representing the sum total of the enemy.
|
||||
|
||||
In a sense, this code has created an object, even though it's not using a class yet. Programmers call this "enemy" an _object_ because the result (a list of strings and integers, in this case) of the function represents a singular but complex _thing_ in the game. That is, the strings and integers in the list aren't arbitrary: together, they describe a virtual object.
|
||||
|
||||
When writing a collection of descriptors, you use variables so you can use them any time you want to generate an enemy. It's a little like a template.
|
||||
|
||||
In the example code, when an attribute of the object is needed, the corresponding list item is retrieved. For instance, to get the ancestry of an enemy, the code looks at **foe[0]**; for health points, it looks at **foe[2]**; and so on.
|
||||
|
||||
There's nothing necessarily wrong with this approach. The code runs as expected. You could add more enemies of different types, you could create a list of enemy types and randomly select from the list during enemy creation, and so on. It works well enough, and in fact [Lua][3] uses this principle very effectively to approximate an object-oriented model.
|
||||
|
||||
However, there's sometimes more to an object than just a list of attributes.
|
||||
|
||||
### The way of the object
|
||||
|
||||
In Python, everything is an object. Anything you create in Python is an _instance_ of some predefined template. Even basic strings and integers are derivatives of the Python **type** class. You can witness this for yourself in an interactive Python shell:
|
||||
|
||||
|
||||
```
|
||||
>>> foo=3
|
||||
>>> type(foo)
|
||||
<class 'int'>
|
||||
>>> foo="bar"
|
||||
>>> type(foo)
|
||||
<class 'str'>
|
||||
```
|
||||
|
||||
When an object is defined by a class, it is more than just a collection of attributes. Python classes have functions all their own. This is convenient, logically, because actions that pertain only to a certain class of objects are contained within that object's class.
|
||||
|
||||
In the example code, the fight code is a function of the main application. That works fine for a simple game, but in a complex one, there would be more than just players and enemies in the game world. There might be townsfolk, livestock, buildings, forests, and so on, and none of them ever need access to a fight function. Placing code for combat in an enemy class means your code is better organized; and in a complex application, that's a significant advantage.
|
||||
|
||||
Furthermore, each class has privileged access to its own local variables. An enemy's health points, for instance, isn't data that should ever change except by some function of the enemy class. A random butterfly in the game should not accidentally reduce an enemy's health to 0. Ideally, even without classes, that would never happen, but in a complex application with lots of moving parts, it's a powerful trick of the trade to ensure that parts that don't need to interact with one another never do.
|
||||
|
||||
Python classes are also subject to garbage collection. When an instance of a class is no longer used, it is moved out of memory. You may never know when this happens, but you tend to notice when it doesn't happen because your application takes up more memory and runs slower than it should. Isolating data sets into classes helps Python track what is in use and what is no longer needed.
|
||||
|
||||
### Classy Python
|
||||
|
||||
Here's the same simple combat game using a class for the enemy:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import random
|
||||
|
||||
class Enemy():
|
||||
def __init__(self,ancestry,gear):
|
||||
self.enemy=ancestry
|
||||
self.weapon=gear
|
||||
self.hp=random.randrange(10,20)
|
||||
self.ac=random.randrange(12,20)
|
||||
self.alive=True
|
||||
|
||||
def fight(self,tgt):
|
||||
print("You take a swing at the " + self.enemy + ".")
|
||||
hit=random.randrange(0,20)
|
||||
|
||||
if self.alive and hit > self.ac:
|
||||
print("You hit the " + self.enemy + " for " + str(hit) + " damage!")
|
||||
self.hp = self.hp - hit
|
||||
print("The " + self.enemy + " has " + str(self.hp) + " HP remaining")
|
||||
else:
|
||||
print("You missed.")
|
||||
|
||||
if self.hp < 1:
|
||||
self.alive=False
|
||||
|
||||
# game start
|
||||
foe=Enemy("troll","great axe")
|
||||
print("You meet a " + foe.enemy + " wielding a " + foe.weapon)
|
||||
|
||||
# main loop
|
||||
while True:
|
||||
|
||||
print("Type the a key and then RETURN to attack.")
|
||||
|
||||
action=input()
|
||||
|
||||
if action.lower() == "a":
|
||||
foe.fight(foe)
|
||||
|
||||
if foe.alive == False:
|
||||
print("You have won...this time.")
|
||||
exit()
|
||||
```
|
||||
|
||||
This version of the game handles the enemy as an object containing the same attributes (ancestry, weapon, health, and defense), plus a new attribute measuring whether the enemy has been vanquished yet, as well as a function for combat.
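As a quick sanity check in an interactive shell (assuming the Enemy class above has been defined there), instances report their class through type(), just like the built-in types shown earlier:

```
>>> foe=Enemy("troll","great axe")
>>> type(foe)
<class '__main__.Enemy'>
```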
|
||||
|
||||
The first function of a class is a special function called (in Python) an _init_, or initialization, function. This is similar to a [constructor][4] in other languages; it creates an instance of the class, which is identifiable to you by its attributes and to whatever variable you use when invoking the class (**foe** in the example code).
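A small sketch of what the init function buys you: every invocation of the class builds an independent instance with its own randomized attributes (the goblin here is a hypothetical second enemy, not part of the game code):

```
foe1=Enemy("troll","great axe")
foe2=Enemy("goblin","club")

# each instance carries its own attributes
print(foe1.enemy, foe1.hp)   # e.g. troll 17
print(foe2.enemy, foe2.hp)   # e.g. goblin 12
```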
|
||||
|
||||
### Self and class instances
|
||||
|
||||
The class' functions accept a new form of input you don't see outside of classes: **self**. If you don't include **self**, then Python has no way of knowing _which_ instance of the class to use when you call a class function. It's like challenging a single orc to a duel by saying "I'll fight the orc" in a room full of orcs; nobody knows which one you're referring to, and so bad things happen.
|
||||
|
||||
![Image of an Orc, CC-BY-SA by Buch on opengameart.org][5]
|
||||
|
||||
CC-BY-SA by Buch on opengameart.org
|
||||
|
||||
Each attribute created within a class is prepended with the **self** notation, which identifies that variable as an attribute of the class. Once an instance of a class is spawned, you swap out the **self** prefix with the variable representing that instance. Using this technique, you could challenge just one orc to a duel in a room full of orcs by saying "I'll fight the gorblar.orc"; when Gorblar the Orc hears **gorblar.orc**, he knows which orc you're referring to (him_self_), and so you get a fair fight instead of a brawl. In Python:
|
||||
|
||||
|
||||
```
|
||||
gorblar=Enemy("orc","sword")
|
||||
print("The " + gorblar.enemy + " has " + str(gorblar.hp) + " remaining.")
|
||||
```
|
||||
|
||||
Instead of looking to **foe[0]** (as in the functional example) or **gorblar[0]** for the enemy type, you retrieve the class attribute (**gorblar.enemy** or **gorblar.hp** or whatever value for whatever object you need).
|
||||
|
||||
### Local variables
|
||||
|
||||
If a variable in a class is not prepended with the **self** keyword, then it is a local variable, just as in any function. For instance, no matter what you do, you cannot access the **hit** variable outside the **Enemy.fight** class:
|
||||
|
||||
|
||||
```
|
||||
>>> print(foe.hit)
|
||||
Traceback (most recent call last):
|
||||
File "./enclass.py", line 38, in <module>
|
||||
print(foe.hit)
|
||||
AttributeError: 'Enemy' object has no attribute 'hit'
|
||||
|
||||
>>> print(foe.fight.hit)
|
||||
Traceback (most recent call last):
|
||||
File "./enclass.py", line 38, in <module>
|
||||
print(foe.fight.hit)
|
||||
AttributeError: 'function' object has no attribute 'hit'
|
||||
```
|
||||
|
||||
The **hit** variable is contained within the Enemy class, and only "lives" long enough to serve its purpose in combat.
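A stripped-down sketch (a hypothetical class, not the game code) makes the distinction between instance attributes and local variables concrete:

```
class Demo():
    def __init__(self):
        self.hp=15       # instance attribute: visible outside as d.hp

    def fight(self):
        hit=3            # local variable: gone once fight() returns
        self.hp=self.hp-hit

d=Demo()
d.fight()
print(d.hp)    # 12
```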
|
||||
|
||||
### More modularity
|
||||
|
||||
This example uses a class in the same text document as your main application. In a complex game, it's easier to treat each class almost as if it were its own self-standing application. You see this when multiple developers work on the same application: one developer works on a class, and the other works on the main program, and as long as they communicate with one another about what attributes the class must have, the two code bases can be developed in parallel.
|
||||
|
||||
To make this example game modular, split it into two files: one for the main application and one for the class. Were it a more complex application, you might have one file per class, or one file per logical group of classes (for instance, a file for buildings, a file for natural surroundings, a file for enemies and NPCs, and so on).
|
||||
|
||||
Save one file containing just the Enemy class as **enemy.py** and another file containing everything else as **main.py**.
|
||||
|
||||
Here's **enemy.py**:
|
||||
|
||||
|
||||
```
|
||||
import random
|
||||
|
||||
class Enemy():
|
||||
def __init__(self,ancestry,gear):
|
||||
self.enemy=ancestry
|
||||
self.weapon=gear
|
||||
self.hp=random.randrange(10,20)
|
||||
self.stg=random.randrange(0,20)
|
||||
self.ac=random.randrange(0,20)
|
||||
self.alive=True
|
||||
|
||||
def fight(self,tgt):
|
||||
print("You take a swing at the " + self.enemy + ".")
|
||||
hit=random.randrange(0,20)
|
||||
|
||||
if self.alive and hit > self.ac:
|
||||
print("You hit the " + self.enemy + " for " + str(hit) + " damage!")
|
||||
self.hp = self.hp - hit
|
||||
print("The " + self.enemy + " has " + str(self.hp) + " HP remaining")
|
||||
else:
|
||||
print("You missed.")
|
||||
|
||||
if self.hp < 1:
|
||||
self.alive=False
|
||||
```
|
||||
|
||||
Here's **main.py**:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import enemy as en
|
||||
|
||||
# game start
|
||||
foe=en.Enemy("troll","great axe")
|
||||
print("You meet a " + foe.enemy + " wielding a " + foe.weapon)
|
||||
|
||||
# main loop
|
||||
while True:
|
||||
|
||||
print("Type the a key and then RETURN to attack.")
|
||||
|
||||
action=input()
|
||||
|
||||
if action.lower() == "a":
|
||||
foe.fight(foe)
|
||||
|
||||
if foe.alive == False:
|
||||
print("You have won...this time.")
|
||||
exit()
|
||||
```
|
||||
|
||||
Importing the module **enemy.py** is done very specifically with a statement that refers to the file of classes as its name without the **.py** extension, followed by a namespace designator of your choosing (for example, **import enemy as en**). This designator is what you use in the code when invoking a class. Instead of just using **Enemy()**, you preface the class with the designator of what you imported, such as **en.Enemy**.
|
||||
|
||||
All of these file names are entirely arbitrary, although not uncommon in principle. It's a common convention to name the part of the application that serves as the central hub **main.py**, and a file full of classes is often named in lowercase with the classes inside it, each beginning with a capital letter. Whether you follow these conventions doesn't affect how the application runs, but it does make it easier for experienced Python programmers to quickly decipher how your application works.
|
||||
|
||||
There's some flexibility in how you structure your code. For instance, using the code sample, both files must be in the same directory. If you want to package just your classes as a module, then you must create a directory called, for instance, **mybad** and move your classes into it. In **main.py**, your import statement changes a little:
|
||||
|
||||
|
||||
```
|
||||
from mybad import enemy as en
|
||||
```
|
||||
|
||||
Both systems produce the same results, but the latter is best if the classes you have created are generic enough that you think other developers could use them in their projects.
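Assuming the file names used above, the reorganization for that second layout is just the following; with Python 3, no extra package file is needed for the import to resolve:

```
$ mkdir mybad
$ mv enemy.py mybad/
$ python3 ./main.py
```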
|
||||
|
||||
Regardless of which you choose, launch the modular version of the game:
|
||||
|
||||
|
||||
```
|
||||
$ python3 ./main.py
|
||||
You meet a troll wielding a great axe
|
||||
Type the a key and then RETURN to attack.
|
||||
a
|
||||
You take a swing at the troll.
|
||||
You missed.
|
||||
Type the a key and then RETURN to attack.
|
||||
a
|
||||
You take a swing at the troll.
|
||||
You hit the troll for 8 damage!
|
||||
The troll has 4 HP remaining
|
||||
Type the a key and then RETURN to attack.
|
||||
a
|
||||
You take a swing at the troll.
|
||||
You hit the troll for 11 damage!
|
||||
The troll has -7 HP remaining
|
||||
You have won...this time.
|
||||
```
|
||||
|
||||
The game works. It's modular. And now you know what it means for an application to be object-oriented. But most importantly, you know to be specific when challenging an orc to a duel.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/get-modular-python-classes
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming.png?itok=M_QDcgz5 (Developing code.)
|
||||
[2]: https://opensource.com/article/19/6/get-modular-python-functions
|
||||
[3]: https://opensource.com/article/17/4/how-program-games-raspberry-pi
|
||||
[4]: https://opensource.com/article/19/6/what-java-constructor
|
||||
[5]: https://opensource.com/sites/default/files/images/orc-buch-opengameart_cc-by-sa.jpg (CC-BY-SA by Buch on opengameart.org)
|
@ -1,185 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (runningwater)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to install Elasticsearch and Kibana on Linux)
|
||||
[#]: via: (https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
How to install Elasticsearch and Kibana on Linux
|
||||
======
|
||||
Get our simplified instructions for installing both.
|
||||
![5 pengiuns floating on iceburg][1]
|
||||
|
||||
If you're keen to learn Elasticsearch, the famous open source search engine based on the open source Lucene library, then there's no better way than to install it locally. The process is outlined in detail on the [Elasticsearch website][2], but the official instructions have a lot more detail than necessary if you're a beginner. This article takes a simplified approach.
|
||||
|
||||
### Add the Elasticsearch repository
|
||||
|
||||
First, add the Elasticsearch software repository to your system, so you can install it and receive updates as needed. How you do so depends on your distribution. On an RPM-based system, such as [Fedora][3], [CentOS][4], [Red Hat Enterprise Linux (RHEL)][5], or [openSUSE][6] (anywhere this article references Fedora or RHEL applies to CentOS and openSUSE as well), create a repository description file in **/etc/yum.repos.d/** called **elasticsearch.repo**:
|
||||
|
||||
|
||||
```
|
||||
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
|
||||
[elasticsearch-7.x]
|
||||
name=Elasticsearch repository for 7.x packages
|
||||
baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum
|
||||
gpgcheck=1
|
||||
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
|
||||
enabled=1
|
||||
autorefresh=1
|
||||
type=rpm-md
|
||||
EOF
|
||||
```
|
||||
|
||||
On Ubuntu or Debian, do not use the **add-apt-repository** utility. It causes errors due to a mismatch between its defaults and what Elasticsearch’s repository provides. Instead, set up this one:
|
||||
|
||||
|
||||
```
|
||||
$ echo "deb <https://artifacts.elastic.co/packages/oss-7.x/apt> stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
|
||||
```
|
||||
|
||||
This repository contains only Elasticsearch’s open source features, under an [Apache License][7], with none of the extra features provided by a subscription. If you need subscription-only features (these features are _not_ open source), the **baseurl** must be set to:
|
||||
|
||||
|
||||
```
|
||||
baseurl=https://artifacts.elastic.co/packages/7.x/yum
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Install Elasticsearch

The name of the package you need to install depends on whether you use the open source version or the subscription version. This article uses the open source version, which appends **-oss** to the end of the package name. Without **-oss** appended to the package name, you are requesting the subscription-only version.

If you create a repository pointing to the subscription version but try to install the open source version, you will get a fairly non-specific error in return. If you create a repository for the open source version and fail to append **-oss** to the package name, you will also get an error.

Install Elasticsearch with your package manager. For instance, on Fedora, CentOS, or RHEL, run the following:

```
$ sudo dnf install elasticsearch-oss
```

On Ubuntu or Debian, run:

```
$ sudo apt install elasticsearch-oss
```

If you get errors while installing Elasticsearch, then you may be attempting to install the wrong package. If your intention is to use the open source package, as this article does, then make sure you are using the correct **apt** repository or baseurl in your Yum configuration.
### Start and enable Elasticsearch

Once Elasticsearch has been installed, you must start and enable it:

```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now elasticsearch.service
```

Then, to confirm that Elasticsearch is running on its default port of 9200, point a web browser to **localhost:9200**. You can use a GUI browser or you can do it in the terminal:

```
$ curl localhost:9200
{
  "name" : "fedora30",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "oss",
    "build_type" : "rpm",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
### Install Kibana

Kibana is a graphical interface for Elasticsearch data visualization. It's included in the Elasticsearch repository, so you can install it with your package manager. Just as with Elasticsearch itself, you must append **-oss** to the end of the package name if you are using the open source version of Elasticsearch, and not the subscription version (the two installations need to match):

```
$ sudo dnf install kibana-oss
```

On Ubuntu or Debian:

```
$ sudo apt install kibana-oss
```

Kibana runs on port 5601, so launch a graphical web browser and navigate to **localhost:5601** to start using the Kibana interface, which is shown below:

![Kibana running in Firefox.][8]
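
If you are on a headless server, you can confirm from the terminal that Kibana is answering on its port before reaching for a browser (a quick sanity check; any HTTP response, even a redirect, means the service is up):

```
$ curl -I localhost:5601
```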

### Troubleshoot

If you get errors while installing Elasticsearch, try installing a Java environment manually. On Fedora, CentOS, and RHEL:

```
$ sudo dnf install java-openjdk-devel java-openjdk
```

On Ubuntu:

```
$ sudo apt install default-jdk
```
If all else fails, try installing the Elasticsearch RPM directly from the Elasticsearch servers:

```
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86_64.rpm{,.sha512}
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
```

On Ubuntu or Debian, use the DEB package instead.
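
The DEB download follows the same pattern as the RPM above; the exact filename here is inferred from Elastic's naming scheme, so verify the URL on the Elasticsearch download page before running it:

```
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-amd64.deb
$ sudo dpkg -i elasticsearch-oss-7.2.0-amd64.deb
```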

If you cannot access either Elasticsearch or Kibana with a web browser, then your firewall may be blocking those ports. You can allow traffic on those ports by adjusting your firewall settings. For instance, if you are running **firewalld** (the default on Fedora and RHEL, and installable on Debian and Ubuntu), then you can use **firewall-cmd**:

```
$ sudo firewall-cmd --add-port=9200/tcp --permanent
$ sudo firewall-cmd --add-port=5601/tcp --permanent
$ sudo firewall-cmd --reload
```

You're now set up and can follow along with our upcoming articles on using Elasticsearch and Kibana.
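
As a final check, you can exercise the whole stack with a quick index operation from the terminal (the index name **test-index** here is arbitrary, chosen just for this smoke test):

```
$ curl -X PUT localhost:9200/test-index
$ curl 'localhost:9200/_cat/indices?v'
```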

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B (5 penguins floating on iceberg)
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html
[3]: https://getfedora.org
[4]: https://www.centos.org
[5]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[6]: https://www.opensuse.org
[7]: http://www.apache.org/licenses/
[8]: https://opensource.com/sites/default/files/uploads/kibana.jpg (Kibana running in Firefox.)
@ -1,263 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to GNU Autotools)
[#]: via: (https://opensource.com/article/19/7/introduction-gnu-autotools)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Introduction to GNU Autotools
======
If you're not using Autotools yet, this tutorial will change the way you deliver your code.
![Linux kernel source code \(C\) in Visual Studio Code][1]

Have you ever downloaded the source code for a popular software project that required you to type the almost ritualistic **./configure; make && make install** command sequence to build and install it? If so, you've used [GNU Autotools][2]. If you've ever looked into some of the files accompanying such a project, you've likely also been terrified at the apparent complexity of such a build system.

Good news! GNU Autotools is a lot simpler to set up than you think, and it's GNU Autotools itself that generates those 1,000-line configuration files for you. Yes, you can write 20 or 30 lines of installation code and get the other 4,000 for free.
### Autotools at work

If you're a user new to Linux looking for information on how to install applications, you do not have to read this article! You're welcome to read it if you want to research how software is built, but if you're just installing a new application, go read my article about [installing apps on Linux][3].

For developers, Autotools is a quick and easy way to manage and package source code so users can compile and install software. Autotools is also well-supported by major packaging formats, like DEB and RPM, so maintainers of software repositories can easily prepare a project built with Autotools.

Autotools works in stages:

  1. First, during the **./configure** step, Autotools scans the host system (the computer it's being run on) to discover the default settings. Default settings include where support libraries are located, and where new software should be placed on the system.
  2. Next, during the **make** step, Autotools builds the application, usually by converting human-readable source code into machine language.
  3. Finally, during the **make install** step, Autotools copies the files it built to the appropriate locations (as detected during the configure stage) on your computer.

This process seems simple, and it is, as long as you use Autotools.
### The Autotools advantage

GNU Autotools is a big and important piece of software that most of us take for granted. Along with [GCC (the GNU Compiler Collection)][4], Autotools is the scaffolding that allows Free Software to be constructed and installed to a running system. If you're running a [POSIX][5] system, it's no overstatement to say that most of your operating system exists as runnable software on your computer because of these projects.

In the likely event that your pet project isn't an operating system, you might assume that Autotools is overkill for your needs. But, despite its reputation, Autotools has lots of little features that may benefit you, even if your project is a relatively simple application or series of scripts.

#### Portability

First of all, Autotools comes with portability in mind. While it can't make your project work across all POSIX platforms (that's up to you, as the coder), Autotools can ensure that the files you've marked for installation get installed to the most sensible locations on a known platform. And because of Autotools, it's trivial for a power user to customize and override any non-optimal value, according to their own system.

With Autotools, all you need to know is what files need to be installed to what general location. It takes care of everything else. No more custom install scripts that break on any untested OS.

#### Packaging

Autotools is also well-supported. Hand a project with Autotools over to a distro packager, whether they're packaging an RPM, DEB, TGZ, or anything else, and their job is simple. Packaging tools know Autotools, so there's likely to be no patching, hacking, or adjustments necessary. In many cases, incorporating an Autotools project into a pipeline can even be automated.
### How to use Autotools

To use Autotools, you must first have Autotools installed. Your distribution may provide one package meant to help developers build projects, or it may provide separate packages for each component, so you may have to do some research on your platform to discover what packages you need to install.

The components of Autotools are:

  * **automake**
  * **autoconf**
  * **make**

While you likely need to install the compiler (GCC, for instance) required by your project, Autotools works just fine with scripts or binary assets that don't need to be compiled. In fact, Autotools can be useful for such projects because it provides a **make uninstall** script for easy removal.

Once you have all of the components installed, it's time to look at the structure of your project's files.
#### Autotools project structure

GNU Autotools has very specific expectations, and most of them are probably familiar if you download and build source code often. First, the source code itself is expected to be in a subdirectory called **src**.

Your project doesn't have to follow all of these expectations, but if you put files in non-standard locations (from the perspective of Autotools), then you'll have to make adjustments for that in your Makefile later.

Additionally, these files are required:

  * **NEWS**
  * **README**
  * **AUTHORS**
  * **ChangeLog**

You don't have to actively use the files, and they can be symlinks to a monolithic document (like **README.md**) that encompasses all of that information, but they must be present.
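
Put together, a minimal project tree that meets these expectations might look like this (the **penguin** name matches the example used later in this article; your file names will differ):

```
penguin/
├── AUTHORS
├── ChangeLog
├── NEWS
├── README
├── configure.ac
├── Makefile.am
└── src/
    └── penguin.cpp
```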

#### Autotools configuration

Create a file called **configure.ac** at your project's root directory. This file is used by **autoconf** to create the **configure** shell script that users run before building. The file must contain, at the very least, the **AC_INIT** and **AC_OUTPUT** [M4 macros][6]. You don't need to know anything about the M4 language to use these macros; they're already written for you, and all of the ones relevant to Autotools are defined in the documentation.

Open the file in your favorite text editor. The **AC_INIT** macro takes the package name, version, an email address for bug reports, the project URL, and optionally the name of the source TAR file.

The **[AC_OUTPUT][7]** macro is much simpler and accepts no arguments.

```
AC_INIT([penguin], [2019.3.6], [seth@example.com])
AC_OUTPUT
```

If you were to run **autoconf** at this point, a **configure** script would be generated from your **configure.ac** file, and it would run successfully. That's all it would do, though, because all you have done so far is define your project's metadata and call for a configuration script to be created.
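
You can see this for yourself with the two-line file above (output abbreviated; the exact checks and messages vary with your autoconf version):

```
$ autoconf
$ ./configure
configure: creating ./config.status
```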

The next macros you must invoke in your **configure.ac** file are functions to create a [Makefile][9]. A Makefile tells the **make** command what to do (usually, how to compile and link a program).

The macros to create a Makefile are **AM_INIT_AUTOMAKE**, which accepts no arguments, and **AC_CONFIG_FILES**, which accepts the name you want to call your output file.

Finally, you must add a macro to account for the compiler your project needs. The macro you use obviously depends on your project. If your project is written in C++, the appropriate macro is **AC_PROG_CXX**, while a project written in C requires **AC_PROG_CC**, and so on, as detailed in the [Building Programs and Libraries][10] section in the Autoconf documentation.

For example, I might add the following for my C++ program (note that **AC_OUTPUT** moves to the end, because it must come after the macros whose results it writes out):

```
AC_INIT([penguin], [2019.3.6], [seth@example.com])
AM_INIT_AUTOMAKE
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

Save the file. It's time to move on to the Makefile.
#### Autotools Makefile generation

Makefiles aren't difficult to write manually, but Autotools can write one for you. The Makefile it generates uses the configuration options detected during the `./configure` step, and it contains far more options than you would think to include or want to write yourself. However, Autotools can't detect everything your project requires to build, so you have to add some details in the file **Makefile.am**, which in turn is used by **automake** when constructing a Makefile.

**Makefile.am** uses the same syntax as a Makefile, so if you've ever written a Makefile from scratch, then this process will be familiar and simple. Often, a **Makefile.am** file needs only a few variable definitions to indicate what files are to be built, and where they are to be installed.

Variables ending in **_PROGRAMS** identify code that is to be built (this is usually considered the _primary_ target; it's the main reason the Makefile exists). Automake recognizes other primaries, like **_SCRIPTS**, **_DATA**, **_LIBRARIES**, and other common parts that make up a software project.
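
For instance, static assets use the same naming pattern. This sketch (not part of the article's penguin example) installs two documentation files into the system's documentation directory and ships them in the source distribution:

```
dist_doc_DATA = README NEWS
```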

If your application is literally compiled during the build process, then you identify it as a binary program with the **bin_PROGRAMS** variable, and then reference any part of the source code required to build it (these parts may be one or more files to be compiled and linked together) using the program name as the variable prefix:

```
bin_PROGRAMS = penguin
penguin_SOURCES = penguin.cpp
```

The target of **bin_PROGRAMS** is installed into the **bindir**, which is user-configurable during compilation.
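
For example, a user or packager can point **bindir** somewhere else at configure time; these are standard options accepted by every Autotools-generated **configure** script:

```
$ ./configure --prefix=/opt/penguin      # bindir becomes /opt/penguin/bin
$ ./configure --bindir=/usr/local/games  # or set bindir directly
```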

If your application isn't actually compiled, then your project doesn't need a **bin_PROGRAMS** variable at all. For instance, if your project is a script written in Bash, Perl, or a similar interpreted language, then define a **_SCRIPTS** variable instead:

```
bin_SCRIPTS = bin/penguin
```

Automake expects sources to be located in a directory called **src**, so if your project uses an alternative directory structure for its layout, you must tell Automake to accept source files from other locations:

```
AUTOMAKE_OPTIONS = foreign subdir-objects
```

Finally, you can create any custom Makefile rules in **Makefile.am** and they'll be copied verbatim into the generated Makefile. For instance, if you know that a temporary value needs to be replaced in your source code before the installation proceeds, you could make a custom rule for that process:

```
all-am: penguin
        touch bin/penguin.sh

penguin: bin/penguin.sh
        @sed "s|__datadir__|@datadir@|" $< >bin/$@
```

A particularly useful trick is to extend the existing **clean** target, at least during development. The **make clean** command generally removes all generated build files with the exception of the Automake infrastructure. It's designed this way because most users rarely want **make clean** to obliterate the files that make it easy to build their code.

However, during development, you might want a method to reliably return your project to a state relatively unaffected by Autotools. In that case, you may want to add this:

```
clean-local:
        @rm config.status configure config.log
        @rm Makefile
        @rm -r autom4te.cache/
        @rm aclocal.m4
        @rm compile install-sh missing Makefile.in
```

There's a lot of flexibility here, and if you're not already familiar with Makefiles, it can be difficult to know what your **Makefile.am** needs. The barest necessity is a primary target, whether that's a binary program or a script, and an indication of where the source code is located (whether that's through a **_SOURCES** variable or by using **AUTOMAKE_OPTIONS** to tell Automake where to look for source code).

Once you have those variables and settings defined, you can try generating your build scripts as you see in the next section, and adjust for anything that's missing. A complete minimal **Makefile.am** follows.
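
Pulling those pieces together, a minimal **Makefile.am** for the compiled penguin example could be as small as this sketch (paths assume the **src** layout shown earlier):

```
AUTOMAKE_OPTIONS = subdir-objects
bin_PROGRAMS = penguin
penguin_SOURCES = src/penguin.cpp
```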

#### Autotools build script generation

You've built the infrastructure, now it's time to let Autotools do what it does best: automate your project tooling. The way the developer (you) interfaces with Autotools is different from how users building your code do.

Builders generally use this well-known sequence:

```
$ ./configure
$ make
$ sudo make install
```

For that incantation to work, though, you as the developer must bootstrap the build infrastructure. First, run **autoreconf** to generate the configure script that users invoke before running **make**. Use the **--install** option to bring in auxiliary files, such as a symlink to **depcomp**, a script to generate dependencies during the compiling process, and a copy of the **compile** script, a wrapper for compilers to account for syntax variance, and so on.

```
$ autoreconf --install
configure.ac:3: installing './compile'
configure.ac:2: installing './install-sh'
configure.ac:2: installing './missing'
```

With this development build environment, you can then create a package for source code distribution:

```
$ make dist
```

The **dist** target is a rule you get for "free" from Autotools. It's a feature that gets built into the Makefile generated from your humble **Makefile.am** configuration. This target produces a **tar.gz** archive containing all of your source code and all of the essential Autotools infrastructure so that people downloading the package can build the project.

At this point, you should review the contents of the archive carefully to ensure that it contains everything you intend to ship to your users. You should also, of course, try building from it yourself:

```
$ tar --extract --file penguin-0.0.1.tar.gz
$ cd penguin-0.0.1
$ ./configure
$ make
$ DESTDIR=/tmp/penguin-test-build make install
```

If your build is successful, you'll find a local copy of your compiled application in the location specified by **DESTDIR** (in the case of this example, **/tmp/penguin-test-build**):

```
$ /tmp/penguin-test-build/usr/local/bin/penguin
hello world from GNU Autotools
```

### Time to use Autotools

Autotools is a great collection of scripts for a predictable and automated release process. This toolset may be new to you if you're used to Python or Bash builders, but it's likely worth learning for the structure and adaptability it provides to your project.

And Autotools is not just for code, either. Autotools can be used to build [Docbook][11] projects, keep media organized (I use Autotools for my music releases), manage documentation projects, and handle anything else that could benefit from customizable install targets.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/introduction-gnu-autotools

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code)
[2]: https://www.gnu.org/software/automake/faq/autotools-faq.html
[3]: https://opensource.com/article/18/1/how-install-apps-linux
[4]: https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[5]: https://en.wikipedia.org/wiki/POSIX
[6]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Initializing-configure.html
[7]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Output.html#Output
[8]: mailto:seth@example.com
[9]: https://www.gnu.org/software/make/manual/html_node/Introduction.html
[10]: https://www.gnu.org/software/automake/manual/html_node/Programs.html#Programs
[11]: https://opensource.com/article/17/9/docbook
@ -1,144 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a pull request in GitHub)
[#]: via: (https://opensource.com/article/19/7/create-pull-request-github)
[#]: author: (Kedar Vijay Kulkarni https://opensource.com/users/kkulkarn)
How to create a pull request in GitHub
======
Learn how to fork a repo, make changes, and ask the maintainers to review and merge it.
![a checklist for a team][1]
So, you know how to use git. You have a [GitHub][2] repo and can push to it. All is well. But how the heck do you contribute to other people's GitHub projects? That is what I wanted to know after I learned git and GitHub. In this article, I will explain how to fork a git repo, make changes, and submit a pull request.

When you want to work on a GitHub project, the first step is to fork a repo.

![Forking a GitHub repo][3]

Use [my demo repo][4] to try it out.

Once there, click on the **Fork** button in the top-right corner. This creates a new copy of my demo repo under your GitHub user account with a URL like:

```
https://github.com/<YourUserName>/demo
```

The copy includes all the code, branches, and commits from the original repo.

Next, clone the repo by opening the terminal on your computer and running the command:

```
git clone https://github.com/<YourUserName>/demo
```

Once the repo is cloned, you need to do two things:

  1. Create a new branch by issuing the command:

```
git checkout -b new_branch
```

  2. Create a new remote for the upstream repo with the command:

```
git remote add upstream https://github.com/kedark3/demo
```

In this case, "upstream repo" refers to the original repo you created your fork from.
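
That upstream remote also lets you keep your fork current. A common pattern before starting new work looks like this (assuming the project's default branch is **master**):

```
$ git fetch upstream
$ git checkout master
$ git merge upstream/master
```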

Now you can make changes to the code. The following code creates a new branch, makes an arbitrary change, and pushes it to **new_branch**:

```
$ git checkout -b new_branch
Switched to a new branch 'new_branch'
$ echo "some test file" > test
$ cat test
some test file
$ git status
On branch new_branch
No commits yet
Untracked files:
  (use "git add <file>..." to include in what will be committed)
    test
nothing added to commit but untracked files present (use "git add" to track)
$ git add test
$ git commit -S -m "Adding a test file to new_branch"
[new_branch (root-commit) 4265ec8] Adding a test file to new_branch
 1 file changed, 1 insertion(+)
 create mode 100644 test
$ git push -u origin new_branch
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 918 bytes | 918.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Create a pull request for 'new_branch' on GitHub by visiting:
remote:      http://github.com/example/Demo/pull/new/new_branch
remote:
 * [new branch]      new_branch -> new_branch
```

Once you push the changes to your repo, the **Compare & pull request** button will appear in GitHub.

![GitHub's Compare & Pull Request button][5]

Click it and you'll be taken to this screen:

![GitHub's Open pull request button][6]

Open a pull request by clicking the **Create pull request** button. This allows the repo's maintainers to review your contribution. From here, they can merge it if it is good, or they may ask you to make some changes.

### TLDR

In summary, if you want to contribute to a project, the simplest way is to:

  1. Find a project you want to contribute to
  2. Fork it
  3. Clone it to your local system
  4. Make a new branch
  5. Make your changes
  6. Push it back to your repo
  7. Click the **Compare & pull request** button
  8. Click **Create pull request** to open a new pull request

If the reviewers ask for changes, repeat steps 5 and 6 to add more commits to your pull request.
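
In practice, that loop is just a few commands (the commit message here is illustrative):

```
$ git add test
$ git commit -m "Address review feedback"
$ git push origin new_branch
```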

Happy coding!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/create-pull-request-github

作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kkulkarn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk (a checklist for a team)
[2]: https://github.com/
[3]: https://opensource.com/sites/default/files/uploads/forkrepo.png (Forking a GitHub repo)
[4]: https://github.com/kedark3/demo
[5]: https://opensource.com/sites/default/files/uploads/compare-and-pull-request-button.png (GitHub's Compare & Pull Request button)
[6]: https://opensource.com/sites/default/files/uploads/open-a-pull-request_crop.png (GitHub's Open pull request button)
@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OpenHMD: Open Source Project for VR Development)
[#]: via: (https://itsfoss.com/openhmd/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
OpenHMD: Open Source Project for VR Development
======

In this day and age, there are open-source alternatives for all your computing needs. There is even an open-source platform for VR goggles and the like. Let's have a quick look at the OpenHMD project.

### What is OpenHMD?

![][1]

[OpenHMD][2] is a project that aims to create an open-source API and drivers for immersive technology. This category includes head-mounted displays with built-in head tracking.

They currently support quite a few systems, including Android, FreeBSD, Linux, OpenBSD, macOS, and Windows. The [devices][3] that they support include the Oculus Rift, HTC Vive, DreamWorld DreamGlass, PlayStation Move, and others. They also offer support for a wide range of languages, including Go, Java, .NET, Perl, Python, and Rust.

The OpenHMD project is released under the [Boost License][4].
### More and Improved Features in the new Release

![][5]

Recently, the OpenHMD project [released version 0.3.0][6], codenamed Djungelvral. ([Djungelvral][7] is a salted licorice from Sweden.) This brought quite a few changes.

The update added support for the following devices:

  * 3Glasses D3
  * Oculus Rift CV1
  * HTC Vive and HTC Vive Pro
  * NOLO VR
  * Windows Mixed Reality HMDs
  * Deepoon E2
  * GearVR Gen1

A universal distortion shader was added to OpenHMD. This addition "makes it possible to simply set some variables in the drivers that gives information to the shader regarding lens size, chromatic aberration, position and quirks."

They also announced plans to change the build system. OpenHMD added support for Meson and will remove support for Autotools in the next (0.4) release.
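
For anyone who wants to build it from source, a Meson-based project is typically compiled along these lines (a general sketch, not OpenHMD-specific instructions; check the project's README for its actual options):

```
$ meson build
$ ninja -C build
$ sudo ninja -C build install
```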

The team behind OpenHMD also had to remove some features, because they want their system to work for everyone. Support for PlayStation VR has been disabled because of issues on Windows and macOS caused by incomplete HID headers. NOLO ships many firmware versions, often with small changes between them; OpenHMD is unable to test them all, so some versions might not work. They recommend upgrading to the latest firmware release. Finally, several devices have only limited support and therefore are not included in this release.

They announced that they will be speeding up the OpenHMD release cycle to get newer features and support for more devices to users quicker. Their main priority will be getting "currently disabled devices in master ready for a patch release ... among getting the elusive positional tracking functional for supported HMD's."
### Final Thoughts

I don't have a VR device and have never used one. I do believe that they have great potential, even beyond gaming. I am thrilled (but not surprised) that there is an open-source implementation that seeks to support many devices. I'm glad that the project is focusing on a wide range of devices, instead of focusing on some off-brand VR effort.

I wish the OpenHMD team well and hope they create a platform that will make them the go-to VR project.

Have you ever used or encountered OpenHMD? Have you ever used VR for gaming and other pursuits? If yes, have you encountered any open-source hardware or software? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][9].
--------------------------------------------------------------------------------

via: https://itsfoss.com/openhmd/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/openhmd-logo.png?resize=300%2C195&ssl=1
[2]: http://www.openhmd.net/
[3]: http://www.openhmd.net/index.php/devices/
[4]: https://github.com/OpenHMD/OpenHMD/blob/master/LICENSE
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/virtual-reality-development.jpg?ssl=1
[6]: http://www.openhmd.net/index.php/2019/07/12/openhmd-0-3-0-djungelvral-released/
[7]: https://www.youtube.com/watch?v=byP5i6LdDXs
[8]: https://itsfoss.com/remember-the-milk-linux/
[9]: http://reddit.com/r/linuxusersgroup